m8ta
{1371}
ref: -0 tags: nanotube tracking extracellular space fluorescent date: 02-02-2017 22:13 gmt revision:0 [head]

PMID-27870840 Single-nanotube tracking reveals the nanoscale organization of the extracellular space in the live brain

  • Extracellular space (ECS) takes up nearly a quarter of the volume of the brain (!!!)
  • Used the intrinsic fluorescence of single-walled carbon nanotubes (emission near 1um, 845nm excitation), with super-resolution tracking of their diffusion.
    • The nanotubes were coated in phospholipid-polyethylene glycol (PL-PEG), which displays low cytotoxicity compared to other encapsulants.
  • 5ul at 3ug/ml was injected into the ventricles of young rats and allowed to diffuse for 30 minutes post-injection.
  • No apparent response of the microglia.
  • Diffusion tracking revealed substantial dead-space domains in the ECS.
    • As compared to patch-clamp loaded SWCNTs.
  • From the parallel and perpendicular diffusion rates, they estimate the characteristic dimension of the ECS at 80 to 270nm, or 150 +- 40nm.
  • The nanoscale dimensions and tortuosity of the ECS as visualized by tracking were similar to those measured with electron microscopy.
  • Viscosity of the extracellular matrix ranged from 1 to 50 mPa*s, up to two orders of magnitude higher than that of the CSF.
  • Positive control: hyaluronidase, given several hours to digest the hyaluronic acid.
    • But no observed changes in the morphology of the neurons via confocal ... interesting.
    • Enzyme digestion normalized the spatial heterogeneity of diffusion.
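As a toy illustration of how a confinement width falls out of single-particle diffusion data (my own simplification, not the paper's actual estimator): for 1-D diffusion confined to a channel of width L, the long-time mean-squared displacement perpendicular to the channel plateaus at L^2/6, so a measured plateau inverts directly to a width.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// For 1-D diffusion confined to a channel of width L, the long-time MSD
// plateaus at L^2/6 (mean squared distance between two independent
// uniform positions), so the width is recoverable from the plateau.
double widthFromMsdPlateau(double plateau_um2) {
    return std::sqrt(6.0 * plateau_um2); // micrometres
}

// Empirical MSD at a given lag from a 1-D track.
double msdAtLag(const std::vector<double>& x, size_t lag) {
    double acc = 0.0;
    size_t n = 0;
    for (size_t i = 0; i + lag < x.size(); ++i, ++n) {
        double d = x[i + lag] - x[i];
        acc += d * d;
    }
    return acc / n;
}
```

A plateau of 0.00375 um^2 gives a width of 0.15 um, i.e. 150 nm, in the middle of the range reported above.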

{990}
ref: Peikon-2009.06 tags: Peikon Fitzsimmons Nicolelis video tracking walking BMI Idoya date: 01-06-2012 00:19 gmt revision:2 [1] [0] [head]

PMID-19464514[0] Three-dimensional, automated, real-time video system for tracking limb motion in brain-machine interface studies.

  • yepp.

____References____

[0] Peikon ID, Fitzsimmons NA, Lebedev MA, Nicolelis MA, Three-dimensional, automated, real-time video system for tracking limb motion in brain-machine interface studies. J Neurosci Methods 180:2, 224-33 (2009 Jun 15)

{648}
ref: notes-0 tags: ME270 light video tracking date: 12-04-2008 00:07 gmt revision:0 [head]

Section 3 - Video tracking and host computer control

With the microcontroller done, we then moved to controlling it via a video-tracking computer. At this point, we had created a simple program for testing out parallel port control of the light's three axes (tilt, pan, and shutter) using the keyboard. This program was split into two files: a main, and a set of subroutines that could later be compiled into the full video-tracking program. It uses libparapin to abstract interaction with the parallel port from userspace.

First, the main loop, which is very simple:

#include "parallelout.h"
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

char g_main_loop ; 

int main(int argc, char *argv[])
{
	g_main_loop = 1; 
	char c;
	parallel_setup(); 
 
	while(1){
		c = fgetc(stdin);
		interpret_cmd(c); 
	}
}

Second, the parallel port controller. This uses a thread and a circular queue to provide asynchronous, non-blocking control of the communications channel. Non-blocking behavior is critical, as the program waits a small period after the low-to-high transition of the interrupt pin (pin 4) for the MSP430 to read the status of the three lines.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <parapin.h>
#include <pthread.h>

#include "parallelout.h"

char g_q[1024]; //queue for the commands. 

int g_q_rptr; //where to read the next command from. 
int g_q_wptr; //where to put the next command

double g_velPan; 
double g_velTilt; 

void stepstep(void){
       int i = 0;
       for(i=0; i<20000; i++){
               set_pin(LP_PIN[4]);
               clear_pin(LP_PIN[4]);
	       set_pin(LP_PIN[5]);
               clear_pin(LP_PIN[5]);
	       set_pin(LP_PIN[5]);
               clear_pin(LP_PIN[5]);
       }
}

void velstep(int n){
	//printf("velstep %d\n", n); 
	clear_pin(LP_PIN[4]);
	if(n&0x1) set_pin(LP_PIN[2]) ; 
	else clear_pin(LP_PIN[2]); 
		
	if(n&0x2) set_pin(LP_PIN[3]) ; 
	else clear_pin(LP_PIN[3]); 
	set_pin(LP_PIN[4]); 
	//leave it up, so the msp430 knows it is a velocity command. 
}
void openShutter(){
	printf("opening shutter\n"); 
	clear_pin(LP_PIN[4]);
	set_pin(LP_PIN[2]); 
	set_pin(LP_PIN[3]); 
	set_pin(LP_PIN[4]); 
	clear_pin(LP_PIN[4]); //clear the trigger to indicate a shutter command. 
}
void closeShutter(){
	printf("closing shutter\n"); 
	clear_pin(LP_PIN[4]);
	clear_pin(LP_PIN[2]); 
	clear_pin(LP_PIN[3]); 
	set_pin(LP_PIN[4]); 
	clear_pin(LP_PIN[4]); //clear the trigger to indicate a shutter command. 
}
void smallShutter(){
	printf("small shutter\n"); 
	clear_pin(LP_PIN[4]);
	clear_pin(LP_PIN[2]); 
	set_pin(LP_PIN[3]); 
	set_pin(LP_PIN[4]); 
	clear_pin(LP_PIN[4]); //clear the trigger to indicate a shutter command. 
}
void stopMirror(){
	printf("stop mirror\n"); 
	clear_pin(LP_PIN[4]);
	set_pin(LP_PIN[2]); 
	clear_pin(LP_PIN[3]); 
	set_pin(LP_PIN[4]); 
	clear_pin(LP_PIN[4]); //clear the trigger to indicate a shutter command. 
}

void parallel_setup(){
	if (pin_init_user(LPT1) < 0)
		exit(0);

	pin_output_mode(LP_DATA_PINS | LP_SWITCHABLE_PINS);
	clear_pin(LP_PIN[2]);
	clear_pin(LP_PIN[3]); 
	clear_pin(LP_PIN[4]); 
	pthread_t thread1;
	//start the queue-servicing thread. 
	pthread_create ( &thread1, NULL, pq_thread, NULL ); 
}

void interpret_cmd(char cmd){
	//these codes don't make much sense unless you are 
	//controlling from a keyboard.
	switch(cmd){
		case 'w': velstep(0); g_velPan-=1.0; break; //pan to the right (looking at the butt of the light)
		case 'a': velstep(1); g_velTilt+=1.0; break; //tilt toward.
		case 's': velstep(2); g_velPan+=1.0; break; //pan to the left
		case 'd': velstep(3); g_velTilt-=1.0; break; //tilt away
		case 'o': openShutter(); break; 
		case 'c': closeShutter(); break; 
		case 'x': smallShutter(); break;
		case ' ': stopMirror(); g_velPan=0; g_velTilt=0; break; 
	}
}

//microsecond sleep built on nanosleep; unistd.h (included above) already
//declares usleep() with a different signature, so use our own name to
//avoid a conflicting redefinition.
void sleep_us(int us){
	struct timespec ts; 
	ts.tv_sec = 0; 
	ts.tv_nsec = us * 1000; 
	nanosleep(&ts, NULL);
}

extern char g_main_loop ; //must match the definition in the main file.

void* pq_thread(void* a){
	while(g_main_loop){
		if(g_q_wptr > g_q_rptr){
			char cmd = g_q[g_q_rptr % sizeof(g_q)]; 
			g_q_rptr++; 
			interpret_cmd(cmd); 
		}
		sleep_us( 200 ); // poll the queue at most every 200us.
		// the msp430 takes about 125us to service the parallel port irq.
	}
	return (void*)0;
}

void enqueue(char cmd){
	//this should be sufficiently atomic so there is no thread contention.
	g_q[g_q_wptr % sizeof(g_q)] = cmd; 
	g_q_wptr++; 
}
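The enqueue/pq_thread pair is only "sufficiently atomic" because there is exactly one producer and one consumer, and because each slot is written before its index is advanced. Modern compilers and CPUs are free to reorder those writes, so a rewrite today should make the ordering explicit. Here is a sketch of the same single-producer/single-consumer queue using C++11 atomics (the class and method names are mine, not part of the original program):

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>

// Single-producer / single-consumer command queue.  The release store on
// the write index guarantees the consumer observes the slot contents
// before it observes the index advance (and symmetrically for pop).
class SpscQueue {
    static constexpr size_t N = 1024;
    char buf[N];
    std::atomic<size_t> wptr{0}, rptr{0};
public:
    bool push(char cmd) {                 // producer thread only
        size_t w = wptr.load(std::memory_order_relaxed);
        if (w - rptr.load(std::memory_order_acquire) >= N) return false; // full
        buf[w % N] = cmd;
        wptr.store(w + 1, std::memory_order_release);
        return true;
    }
    bool pop(char* cmd) {                 // consumer thread only
        size_t r = rptr.load(std::memory_order_relaxed);
        if (r == wptr.load(std::memory_order_acquire)) return false;     // empty
        *cmd = buf[r % N];
        rptr.store(r + 1, std::memory_order_release);
        return true;
    }
};
```

Unlike the original, push() also reports when the queue is full instead of silently overwriting unread commands.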

Then we worked on the video tracking program. I will omit some of the noncritical sections involving the firewire (ieee1394), Xv (video display), and X11 (window manager) calls, as the whole program is long, ~1000 lines. Below is 'main' -- see the comments for a detailed description.

int main(int argc, char *argv[]){
	int i; 
	double t1, t2, t3, t4; 
	t1 = t2 = t3 = t4 = 0.0; 
	signal(SIGINT, cleanup); //trap ctrl-c
	signal(SIGPIPE, cleanup);
	//turn off output buffering for ttcp!
	//setvbuf(stdout,(char*)NULL,_IONBF,0);
	//init buffers for old tracking... 
	for(int i=0; i<4; i++){
                g_buffer[i] = (short*)malloc(640*480*sizeof(short)); 
        }
        g_averagefb = (int*)malloc(640*480*sizeof(int)); 
        g_velfb = (int*)malloc(640*480*sizeof(int));
        g_lastfb = (unsigned char*)malloc(640*480);	
        g_trackedfb = (unsigned char*)malloc(640*480);	
        for(i=0; i < 640*480; i++){
                g_averagefb[i] = 0; 
        }
	//Step -2: set up threads (the display runs on a separate thread
	// to keep from blocking the isochronous receive channel
	// and hence causing the frame rate to drop).
	pthread_mutex_init(&g_dispthread_mutex, NULL); 
	pthread_cond_init(&g_dispthread_cond, NULL); 
	//STEP -1: init the parallel port for control of mirror (this also starts that thread)
	parallel_setup(); 
	//Step 0: small shutter so we can track the light easily. 
	smallShutter(); 
	//Step 0.5: move the mirror to the near left (from the butt of the light) for calibration
	//for reference (from the viewpoint of the cord end of the light): 
	// pan left : s
	// pan right: w
	// tilt toward: a
	// tilt away: d
	// small shutter: x
	// open shutter: o
	// closed shutter: c
	// stop mirrors : <space>
	for(i=0; i<10; i++){
		enqueue('s'); //to the left if you are looking at the butt of the light.
		enqueue('a'); //the tilt axis has far fewer steps for full range than
		enqueue('s'); // the pan axis, and hence requires a much higher velocity - 
		enqueue('s'); // so enqueue more 's'. 
		enqueue('s'); 
		enqueue('s'); 
	}
	//Step 1: Open ohci and assign a handle to it.
	//==================================================================================================================
	init_cards();
        //Step 2: Get the camera nodes and describe them as we find them.
	//==================================================================================================================
	init_cams();
        //Step 3: Setup Capture
	//==================================================================================================================
	setup_cams();
	//Step 4: Start sending data
	//==================================================================================================================
	start_iso_transmission();
	//start the other thread. 
	pthread_t thread1;
	pthread_attr_t attr;
	pthread_attr_init(&attr);
	pthread_create( &thread1, &attr, display_thread, 0 ); 
	
	//Main event loop
	while(g_main_loop==true){
		for( i=0; i<numCameras; i++){
			if(dc1394_dma_single_capture(&camera[i]) != DC1394_SUCCESS){
				fprintf(stderr, "dma1394: Failed to capture from cameras\n");
				cleanup(0);
			}
		}
		t2=get_time();
		for( i=0; i<numCameras; i++){
			//display_frames_old(i);
			display_frames(i);
			/*if((g_frame%60) < 15){
				velstep(0); 
			}else if((g_frame%60) < 45){
				velstep(2); 
			}else {
				velstep(0); 
			}	*/			
			if(dc1394_dma_done_with_buffer(&camera[i]) != DC1394_SUCCESS){
				fprintf(stderr, "dma1394: Can't release dma buffer\n");
			}
		}
		
		if(g_frame % 60 == 0){
			printf("frame dt: %f (%f) track time %f (%f)\n", t2-t4, 1/(t2-t4), tracktime, 1/(tracktime)); 
		}
		//start with the state machine for the calibration -- 
		if(g_frame == CALIB_0){
			enqueue(' '); //stop it
			printf("!!assuming that the mirror reached its limit!!\n");
			for( i=0; i<3; i++){
				//now that we have put the mirror into a corner, give it velocity
				// to move to the center of the FOV so that we may turn on 
				// feedback-based tracking. 
				enqueue('w'); //back toward center, to the right if you are looking at the butt of the light.
				enqueue('d'); 
				enqueue('w'); //again, pan motor has many more steps/range than tilt
				enqueue('w');
				enqueue('w');
				enqueue('w');
				enqueue('w');
				enqueue('w');
				enqueue('w');
			}
		}
		if(g_frame == CALIB_1){
			enqueue(' '); //stop it
			printf("!!assuming light is centered now!!\n");
		}
		if(g_frame == CALIB_2){
			enqueue('x');  
		}
		t4 = t2; 
		g_frame++; 
	}
	cleanup(0); 
	return 0;
}

Our tracking algorithm periodically opens and closes the shutter on the light. While the light is on, it is impossible to track a target based on brightness or even pattern detection: the light is so bright that our cameras, with their limited dynamic range, cannot distinguish what it hits from what it does not. (The human eye, of course, has far better dynamic range.) During the period when the light is off, we wait for the camera shutter speed to stabilize, then average the brightest spot over 20 consecutive frames to obtain a target position. Then the shutter is opened, and visual feedback with a simple PD controller guides the light to the target. When the device is deployed, we will make the update non-periodic and purely contingent on the detection of motion or of decreased solar cell output. See below for the thread that implements this logic, as well as blits the image onto the screen.
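Before the full listing, the shutter duty cycle is easier to see condensed into a pure function of the frame counter (the enum and function names here are mine; the constants 240, 260, 280, and 300 are the ones the thread uses):

```cpp
#include <cassert>

// One 300-frame cycle after calibration: servo on the open light, close
// the shutter, let the auto-exposure settle, average the target location
// over 20 dark frames, then reopen with the small shutter.
enum Phase { SERVO, CLOSE_SHUTTER, SETTLE, AVERAGE_TARGET, REOPEN };

Phase phaseOf(int frame) {          // frame = g_frame - CALIB_2
    int t = frame % 300;
    if (t < 240)  return SERVO;            //   0..239: feedback tracking
    if (t == 240) return CLOSE_SHUTTER;    //      240: shutter closed, mirror stopped
    if (t < 260)  return SETTLE;           // 241..259: camera exposure stabilizes
    if (t < 280)  return AVERAGE_TARGET;   // 260..279: accumulate 20 target samples
    if (t == 280) return REOPEN;           //      280: small shutter, divide by 20
    return SETTLE;                         // 281..299: settle before next cycle
}
```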

void* display_thread(void* ptr ){
	make_window(); 
	while(g_main_loop){
		pthread_mutex_lock(&g_dispthread_mutex); //the mutex must be held before waiting on the condition.
		int rc = pthread_cond_wait(&g_dispthread_cond, &g_dispthread_mutex);
		pthread_mutex_unlock(&g_dispthread_mutex);
		if(rc == 0){
			double t1 = get_time(); 
			g_first_frame=false;
			//convert into the XV format (this seems very inefficient to me...)
			for(unsigned int i=0; i< g_framewidth*g_frameheight; i++){
				g_fb[i] = g_trackedfb[i]+ 0x8000;
			}
			double c_r, c_c; 
			new_track(0, &tcam[0], g_trackedfb, 
				g_framewidth, g_frameheight, CONTRAST_MIN, SEARCHR, 
				GAUSSDROPOFF, NUMBER_OF_MARKERS, &c_r, &c_c);
			xv_image=XvCreateImage(display, info[adaptor].base_id, 
				format_disp, (char*)g_fb, g_framewidth, g_frameheight*numCameras);
			XvPutImage(display, info[adaptor].base_id, window, gc, xv_image, 0, 
				0, g_framewidth, g_frameheight*numCameras, 0, 0,
				g_windowwidth, g_windowheight);
			XFree(xv_image); //memory from XvCreateImage is released with XFree.
			
			//do some dumb control (finally!)
			// initially, guide the light to the center of the screen. 
			if(g_frame > CALIB_1 && g_frame <= CALIB_2){
				g_target_c = 320.0; 
				g_target_r = 240.0; 
				servo_mirror(c_c, c_r); //get it stuck on the center! 
			}
			int time = g_frame - CALIB_2; 
			// below is the *main loop* for cycling the shutter open/close
			if(g_frame > CALIB_2){
				if(time % 300 < 240){
					servo_mirror(c_c, c_r); 
				}
				if(time % 300 == 240){
					enqueue('c'); 
					enqueue(' '); 
				}
				if(time % 300  >= 260 && time % 300  < 280 ){
					g_target_c += c_c; 
					g_target_r += c_r; 
				}
				if(time % 300 == 280){
					enqueue('x'); 
					g_target_c /= 20.0; 
					g_target_r /= 20.0; 
				}
			}
			double t2 = get_time(); 
			tracktime = t2 - t1 ; 
		}
		//normalize_com(NUMBER_OF_MARKERS);
		XFlush(display);
		while(XPending(display)>0){
			XNextEvent(display,&xev);
			switch(xev.type){
				case ConfigureNotify:
					g_windowwidth=xev.xconfigure.width;
					g_windowheight=xev.xconfigure.height;
				break;
				case KeyPress:
					switch(XKeycodeToKeysym(display,xev.xkey.keycode,0)){
						case XK_q:
						case XK_Q:
							g_main_loop = false; 
							//cleanup(0);
						break;
						}
				break;
			}
		} //XPending
	}
	if ((void *)window != NULL){
		XUnmapWindow(display,window);
	}
	fprintf(stderr,"dma1394: Unmapped Window.\n");
	if (display != NULL){
		XFlush(display);
	}
	return (void*) 0;
}

The PD controller uses very pessimistic values for the coefficients, as we discovered that the timing resolution on our older Linux computer is low - about 5ms. This means that if too many velocity step commands are sent to the parallel port thread at one time, it will become backlogged, which induces a phase shift between the control and actuation of velocity. Hence, the light must move rather slowly, on the order of one velocity step on each axis per frame. The code is below.

void servo_mirror(double c_c, double c_r ){
	double dc = c_c - g_target_c; //for now assume that we want to stabilize in
	double dr = c_r - g_target_r; // the center.
	double vgain = 8.0 ; 
	double pgain = 1.0/80.0; 
	int lim = 1; 
	double c = dc + g_velPan*vgain ; 
	int ccmd = 0; 
	int rcmd = 0; 
	if(c > 0){
		for(int i=0; i<c*pgain && i < lim; i++){
			enqueue('w');
			ccmd --; 
		}
	}
	if(c < 0){
		for(int i=0; i<c*-1.0*pgain && i < lim; i++){
			enqueue('s');
			ccmd ++; 
		}
	}
		
	vgain *= 1.5; //tilt mirror moves quicker!
	double r = dr + g_velTilt*vgain;
	if(r>0){
		for(int i=0; i<r*pgain && i < lim; i++){
			enqueue('d');
			rcmd--; 
		}
	}
	if(r<0){
		for(int i=0; i<r*-1.0*pgain && i < lim; i++){
			enqueue('a');
			rcmd++; 
		}
	}
	//this for debugging loop stability problems in matlab. 
	//printf("%f %f %d %f %f %d\n", dc, g_velPan*vgain, ccmd, dr, g_velTilt*vgain, rcmd); 
	//if(dr + g_velTilt*vgain > 0) enqueue('d'); OLD
	//if(dr + g_velTilt*vgain < 0) enqueue('a'); 
}

Our video tracking algorithm first uses a tree-like algorithm to quickly and robustly search for the brightest region in the scene; we presume, somewhat simplistically, that this will be the target. When the device is put into use with an actual monkey cage, we'll surround the camera with high-intensity infrared LEDs to effectively illuminate a retroreflector placed on the monkey's head. Below is the code which performs this computation.

//make a blur matrix
//void blur(frame_info* frame, unsigned char * fb, int framewidth, int downsamp, int downsamp_w, int downsamp_h){
void blur(int camno, track_cam* tcam, unsigned char* fb, int framewidth, int downsamp_r, int downsamp_c, int downsamp_w, int downsamp_h){
	//initialize contrasts
	for(int m=0; m<downsamp_r * downsamp_c; m++){
		tcam[camno].frame.sum[m]=0;
		tcam[camno].frame.contr_min[m]=255;
		tcam[camno].frame.contr_max[m]=0;
	}
	for(int k=0; k<downsamp_r; k++){	
		for(int row=k*downsamp_h; row<k*downsamp_h+downsamp_h; row++){
			for(int j=0; j<downsamp_c; j++){
				for(int col=j*downsamp_w; col<j*downsamp_w+downsamp_w; col++){
					tcam[camno].frame.sum[j+(k*downsamp_c)]+=int(fb[row*framewidth+col]);
					if(int(fb[row*framewidth+col])>tcam[camno].frame.contr_max[j+(k*downsamp_c)]){
						tcam[camno].frame.contr_max[j+(k*downsamp_c)]=int(fb[row*framewidth+col]); //introducing a contrast check.  
					}
					if(int(fb[row*framewidth+col])<tcam[camno].frame.contr_min[j+(k*downsamp_c)]){
						tcam[camno].frame.contr_min[j+(k*downsamp_c)]=int(fb[row*framewidth+col]); //introducing a contrast check
					}
				}
			}
		}
	}
}

//blob_search function
//search through the sum matrix and find the brightest sums
//void blob_search(frame_info* frame, marker* marker, int num_markers, int contrast_min){
void blob_search(int camno, track_cam* tcam, int num_markers, int contrast_min, int downsamp_r, int downsamp_c){
	//frame->num_blobs=0; //innocent until proven guilty
	for(int i=0; i<num_markers; i++){
		int blob_val=0;
		for(int m=0; m<downsamp_r*downsamp_c; m++){
			if(tcam[camno].frame.sum[m]>blob_val && tcam[camno].frame.contr_max[m]-tcam[camno].frame.contr_min[m]>contrast_min){ //has to have a big contrast to be a blob (CONTRAST is user defined macro)
				blob_val=tcam[camno].frame.sum[m]; //the new max
				tcam[camno].marker[i].downsamp_loc=m; //the sum integer (0-255)
				//frame->num_blobs++;
			}
		}
		tcam[camno].frame.sum[tcam[camno].marker[i].downsamp_loc]=0; //kill the one we just found so we can find the next biggest one.
	}
}

//brightest_pix_search function
//search through the blobs for the brightest pixel
//void brightest_pix_search(unsigned char * fb, frame_info* frame, marker* marker, int num_markers, int framewidth, int downsamp, int downsamp_w, int downsamp_h){
void brightest_pix_search(unsigned char * fb, int camno, track_cam* tcam, int num_markers, int framewidth, int downsamp_r, int downsamp_c, int downsamp_w, int downsamp_h){
	//br_pix_info[0] is the row
	//br_pix_info[1] is the col
	//br_pix_info[2] is the value
	for(int i=0; i<num_markers; i++){
		tcam[camno].marker[i].br_pix_val=0; //always has to start low
		for(int row=int(floor(tcam[camno].marker[i].downsamp_loc/downsamp_c))*downsamp_h; row<int(floor(tcam[camno].marker[i].downsamp_loc/downsamp_c))*downsamp_h+downsamp_h; row++){
			for(int col=tcam[camno].marker[i].downsamp_loc%downsamp_c*downsamp_w; col<tcam[camno].marker[i].downsamp_loc%downsamp_c*downsamp_w+downsamp_w; col++){
				if(int(fb[row*framewidth+col])>tcam[camno].marker[i].br_pix_val){ //if it is greater than the brightest pixel then store its info
					tcam[camno].marker[i].br_pix_row=row; //save the row
					tcam[camno].marker[i].br_pix_col=col; //save the column
					tcam[camno].marker[i].br_pix_val=int(fb[row*framewidth+col]); //save the value
				}
			}
		}
	}
}

The blocking (or blobbing) and search algorithm yields the estimated location of the brightest pixel in the image. This is passed to a region-growing algorithm, which dynamically expands a region around the suggested brightest pixel to include all pixels within a brightness threshold of the brightest. The algorithm then computes the center of mass of the resulting list of pixel coordinates, which is passed to the PD and target-location routines.

void region_grow(unsigned char * src, unsigned short* dest, 
		int w, int h, int b_r, int b_c, double* c_r, double* c_c){
	//need to do an expansion from the brightest point. 
	//this is sorta a random-access op - which is bad. 
	unsigned short fill = 0xff00; 
	int n = 0; 
	short r, c; 
	int  i, p; 
	unsigned char thresh = 20 ; 
	unsigned char brightest = src[w*b_r + b_c]; 
	g_rows[n] = b_r; 
	g_cols[n] = b_c; 
	n++; 
	int sta = 0; 
	int end = 0; 
	int lim = sizeof(g_rows)/sizeof(int); 
	while(n < lim && n > sta){
		//loop through all the new points, adding to the set as we go. 
		end = n; 
		for(i=sta; i < end; i++){
			r = g_rows[i]; 
			c =  g_cols[i]; 
			r++; //down
			if(r >= 0 && r < h && c >= 0 && c < w && n < lim){
				p = r*w +c; 
				if(brightest - src[p] < thresh){
					src[p] = 0; 
					dest[p] = fill; 
					g_rows[n] = r; 
					g_cols[n] = c; 
					n++; 
				}
			}
			r -= 2; //up.
			if(r >= 0 && r < h && c >= 0 && c < w && n < lim){
				p = r*w +c; 
				if(brightest - src[p] < thresh){
					src[p] = 0; 
					dest[p] = fill; 
					g_rows[n] = r; 
					g_cols[n] = c; 
					n++; 
				}
			}
			r++; //center
			c++; //to the right. 
			if(r >= 0 && r < h && c >= 0 && c < w && n < lim){
				p = r*w +c; 
				if(brightest - src[p] < thresh){
					src[p] = 0; 
					dest[p] = fill; 
					g_rows[n] = r; 
					g_cols[n] = c; 
					n++; 
				}
			}
			c-=2; //to the left. 
			if(r >= 0 && r < h && c >= 0 && c < w && n < lim){
				p = r*w +c; 
				if(brightest - src[p] < thresh){
					src[p] = 0; 
					dest[p] = fill; 
					g_rows[n] = r; 
					g_cols[n] = c; 
					n++; 
				}
			}
		}//end loop over past points. 
		sta = end; 
	}
	//calculate the center of mass. 
	double cm_r = 0; 
	double cm_c = 0; 
	for(i=0; i<n; i++){
		cm_r += g_rows[i]; 
		cm_c += g_cols[i]; 
	}
	cm_r /= n; 
	cm_c /= n; 
	*c_r = cm_r; 
	*c_c = cm_c; 
	//printf("point: %f %f %d \n",  cm_r, cm_c, g_frame++); 
	int cm_r_i, cm_c_i; 
	cm_r_i = (int)cm_r; 
	cm_c_i = (int)cm_c; 
	if(cm_c_i >= 0 && cm_c_i < w && cm_r_i >= 0 && cm_r_i < h)
		dest[cm_r_i*w + cm_c_i] = 0xffff; 
}

And that is, roughly, the entirety of the video tracking program! (Most of the rest of the code deals with the firewire bus and other less interesting details.) We conclude with a picture of the whole setup in the office.