m8ta
{1104} is owned by tlh24.
{1384}
ref: -0 tags: NET probes SU-8 microfabrication sewing machine carbon fiber electrode insertion mice histology 2p date: 03-01-2017 23:20 gmt revision:0 [head]

Ultraflexible nanoelectronic probes form reliable, glial scar–free neural integration

  • SU-8 asymptotic H2O absorption is 3.3% in PBS -- quite a bit higher than I expected, and higher than polyimide (PI).
  • Faced yield problems with contact litho at 2-3um trace/space.
  • Good recordings out to 4 months!
  • 3 minutes / probe insertion.
  • Fab:
    • Ni release layer, SU-8 2000.5; per the spec sheet, "excellent tensile strength":
      • Tensile strength 60 MPa
      • Youngs modulus 2.0 GPa
      • Elongation at break 6.5%
      • Water absorption, per spec sheet, 0.65% (but not PBS)
    • 500nm dielectric; < 1% crosstalk; see figure S12.
    • Pt or Au recording sites, 10um x 20um or 30um x 30um.
    • FFC connector, with Si substrate remaining.
  • Used transgenic mice, YFP expressed in neurons.
  • CA glue used before metabond, followed by Kwik-sil silicone.
  • Neuron yield not so great -- they need to plate the electrodes down to acceptable impedance. (figure S5)
    • Measured impedance ~1 MΩ at 1 kHz.
  • Unclear if 50um x 1um is really that much worse than 10um x 1.5um.
  • Histology looks really great (figure S10).
  • Manuscript did not mention (though they did at the poster) problems with electrode pull-out; they deal with it in the same way, via application of ACSF.

{1341}
ref: -0 tags: image registration optimization camera calibration sewing machine date: 07-15-2016 05:04 gmt revision:20 [19] [18] [17] [16] [15] [14] [head]

Recently I was tasked with converting from image coordinates to real world coordinates from stereoscopic cameras mounted to the end-effector of a robot. The end goal was to let the user (me!) click on points in the image, and have the robot record that position & ultimately move to it.

The overall strategy is to get a set of points in both image and RW coordinates, then fit some sort of model to the measured data. I began by printing out a grid of (hopefully evenly-spaced and perpendicular) lines via a laser printer; spacing was ~1.1 mm. This grid was manually aligned to the axes of robot motion by moving the robot along one axis & checking that the lines did not jog.

The images were modeled as a grating with quadratic phase in u,v texture coordinates:

$p_h(u,v) = \sin\left((a_h u/1000 + b_h v/1000 + c_h)\,v + d_h u + e_h v + f_h\right) + 0.97$  (1)

$p_v(u,v) = \sin\left((a_v u/1000 + b_v v/1000 + c_v)\,u + d_v u + e_v v + f_v\right) + 0.97$  (2)

$I(u,v) = \dfrac{16\, p_h p_v}{2 + 16 p_h^2 + 16 p_v^2}$  (3)

The factor of 1000 was used to make the parameter search distribution more spherical; $c_h, c_v$ were bias terms used to seed the solver; 0.97 is a duty-cycle term fit by inspection to the image data; (3) is a modified sigmoid.
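The grating model (1)-(3) is compact enough to sketch in a few lines of numpy; the parameter values below are purely illustrative, not the fitted values:

```python
import numpy as np

def grating_model(u, v, hp, vp):
    """Evaluate eqs. (1)-(3): two quadratic-phase gratings combined by a
    modified sigmoid.  hp = (a_h, b_h, c_h, d_h, e_h, f_h), likewise vp."""
    a, b, c, d, e, f = hp
    ph = np.sin((a*u/1000 + b*v/1000 + c)*v + d*u + e*v + f) + 0.97
    a, b, c, d, e, f = vp
    pv = np.sin((a*u/1000 + b*v/1000 + c)*u + d*u + e*v + f) + 0.97
    return 16*ph*pv / (2 + 16*ph**2 + 16*pv**2)   # eq. (3), modified sigmoid

# render the model over the full 1280x720 image, (u, v) in pixel coordinates
u, v = np.meshgrid(np.arange(1280.0), np.arange(720.0))
img = grating_model(u, v, (0.1, 0.1, 0.05, 0.2, 0.0, 0.0),
                          (0.1, 0.1, 0.05, 0.0, 0.2, 0.0))
```

Note the sigmoid in (3) bounds the output to within ±1/2 regardless of parameters, which sharpens the grid lines relative to a plain product of sines.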

Equation (3) was then fit to the image data using a GPU-accelerated (CUDA) nonlinear stochastic optimization over the parameters:

$(a_h, b_h, d_h, e_h, f_h, a_v, b_v, d_v, e_v, f_v) = \operatorname{argmin} \sum_u \sum_v \left(I(u,v) - \mathrm{Img}(u,v)\right)^2$  (4)

Optimization was carried out by drawing parameters from a normal distribution with a diagonal covariance matrix, set by inspection, with the mean iteratively set to the best solution found so far; the horizontal and vertical optimizations were separable and carried out independently. Equation (4) was sampled 18k times per frame, requiring 34 billion evaluations of equation (3) -- hence the need for GPU acceleration.
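The optimization loop just described can be sketched as an iterated random search; the real thing ran on CUDA against the image objective (4), while here a toy quadratic loss stands in, and the sigma-annealing schedule is my own addition to make the toy converge:

```python
import numpy as np

def stochastic_fit(loss, mean, sigma, n_draws=200, n_iters=90, anneal=0.97):
    """Draw parameter vectors from a diagonal-covariance normal centered
    on the best solution so far, keep the winner, recenter, repeat."""
    rng = np.random.default_rng(0)
    best = np.asarray(mean, float)
    best_loss = loss(best)
    sigma = np.asarray(sigma, float).copy()
    for _ in range(n_iters):
        draws = rng.normal(best, sigma, size=(n_draws, best.size))
        losses = np.array([loss(p) for p in draws])
        i = losses.argmin()
        if losses[i] < best_loss:
            best, best_loss = draws[i], losses[i]
        sigma *= anneal          # shrink the search distribution over time
    return best, best_loss

# toy check: recover a 5-parameter vector from a quadratic loss
target = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
quad = lambda p: float(np.sum((p - target) ** 2))
sol, final_loss = stochastic_fit(quad, np.zeros(5), 0.5 * np.ones(5))
```

With 200 draws over 90 iterations this samples the loss 18k times, matching the per-frame budget quoted above.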

This yielded a set of 10 parameters (again, $c_h$ and $c_v$ were bias terms and kept constant) which modeled the data (e.g. grid lines) for each of the two cameras. This process was repeated every 0.1 mm from 0 - 20 mm height (z) from the target grid, resulting in a sampled function for each of the parameters, e.g. $a_h(z)$. This required 13 trillion evaluations of equation (3).

Now, the task was to use this model to generate the forward and reverse transforms between image and world coordinates; I approached this by generating a data set of the grid intersections in both image and world coordinates. To start this process, the known image origin $u_{origin}^{z=0}, v_{origin}^{z=0}$ was used to find the corresponding roots of the periodic auxiliary functions $p_h, p_v$:

$3\pi/2 + 2\pi n_h = a_h u v/1000 + b_h v^2/1000 + (c_h + e_h)\,v + d_h u + f_h$  (5)

$3\pi/2 + 2\pi n_v = a_v u^2/1000 + b_v u v/1000 + (c_v + d_v)\,u + e_v v + f_v$  (6)

Or, solving for the integer indices:

$n_h = \operatorname{round}\left(\left(a_h u v/1000 + b_h v^2/1000 + (c_h + e_h)\,v + d_h u + f_h - 3\pi/2\right)/(2\pi)\right)$  (7)

$n_v = \operatorname{round}\left(\left(a_v u^2/1000 + b_v u v/1000 + (c_v + d_v)\,u + e_v v + f_v - 3\pi/2\right)/(2\pi)\right)$  (8)

From this, we get variables $n_{h,origin}^{z=0}$ and $n_{v,origin}^{z=0}$, which are the offsets needed to align the sine functions $p_h, p_v$ with the physical origin. Next, the reverse (world to image) transform was needed, for which a two-stage Newton scheme was used to solve equations (7) and (8) for $u, v$. Note that these are equations of phase, not image intensity -- otherwise this direct method would not work!
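Equations (7)-(8) amount to rounding the quadratic phase to the nearest grating minimum (the minima of $p_h, p_v$ sit at phase $3\pi/2 + 2\pi n$). A minimal numpy sketch, with the parameter tuples ordered as in the text:

```python
import numpy as np

def phase_indices(u, v, hp, vp):
    """Eqs. (7)-(8): integer phase indices n_h, n_v of the grid lines
    nearest image point (u, v).  hp = (a_h, b_h, c_h, d_h, e_h, f_h),
    and likewise vp for the vertical grating."""
    a, b, c, d, e, f = hp
    nh = round((a*u*v/1000 + b*v**2/1000 + (c + e)*v + d*u + f
                - 3*np.pi/2) / (2*np.pi))
    a, b, c, d, e, f = vp
    nv = round((a*u**2/1000 + b*u*v/1000 + (c + d)*u + e*v + f
                - 3*np.pi/2) / (2*np.pi))
    return nh, nv
```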

First, the equations were linearized with three steps of (9-11) to get in the right ballpark:

$u_0 = 640, \quad v_0 = 360$

$n_h = n_{h,origin}^{z} + [-30 .. 30], \quad n_v = n_{v,origin}^{z} + [-20 .. 20]$  (9)

$B_i = \begin{bmatrix} 3\pi/2 + 2\pi n_h - a_h u_i v_i/1000 - b_h v_i^2/1000 - f_h \\ 3\pi/2 + 2\pi n_v - a_v u_i^2/1000 - b_v u_i v_i/1000 - f_v \end{bmatrix}$  (10)

$A_i = \begin{bmatrix} d_h & c_h + e_h \\ c_v + d_v & e_v \end{bmatrix}$ and

$\begin{bmatrix} u_{i+1} \\ v_{i+1} \end{bmatrix} = \operatorname{mldivide}(A_i, B_i)$  (11), where mldivide is the Matlab left-division operator, $A_i \backslash B_i$.
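The seed steps (10)-(11) can be sketched as follows; np.linalg.solve stands in for Matlab's mldivide, and the parameter values used in testing are illustrative, not fitted:

```python
import numpy as np

def linear_seed(nh, nv, u, v, hp, vp, steps=3):
    """Eqs. (10)-(11): keep the quadratic phase terms only on the
    right-hand side B, and solve the remaining 2x2 linear system for
    (u, v).  hp = (a_h, b_h, c_h, d_h, e_h, f_h), likewise vp."""
    ah, bh, ch, dh, eh, fh = hp
    av, bv, cv, dv, ev, fv = vp
    A = np.array([[dh, ch + eh],
                  [cv + dv, ev]])
    for _ in range(steps):
        B = np.array([3*np.pi/2 + 2*np.pi*nh - ah*u*v/1000 - bh*v**2/1000 - fh,
                      3*np.pi/2 + 2*np.pi*nv - av*u**2/1000 - bv*u*v/1000 - fv])
        u, v = np.linalg.solve(A, B)   # numpy's analogue of mldivide
    return u, v
```

When the quadratic coefficients $a, b$ are zero the system is exactly linear and the seed converges in a single step; with small quadratics, three steps land close enough for the Newton stage to finish the job.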

Then three steps with the full Jacobian were made to attain accuracy:

$J_i = \begin{bmatrix} a_h v_i/1000 + d_h & a_h u_i/1000 + 2 b_h v_i/1000 + c_h + e_h \\ 2 a_v u_i/1000 + b_v v_i/1000 + c_v + d_v & b_v u_i/1000 + e_v \end{bmatrix}$  (12)

$K_i = \begin{bmatrix} a_h u_i v_i/1000 + b_h v_i^2/1000 + (c_h + e_h)\,v_i + d_h u_i + f_h - 3\pi/2 - 2\pi n_h \\ a_v u_i^2/1000 + b_v u_i v_i/1000 + (c_v + d_v)\,u_i + e_v v_i + f_v - 3\pi/2 - 2\pi n_v \end{bmatrix}$  (13)

$\begin{bmatrix} u_{i+1} \\ v_{i+1} \end{bmatrix} = \begin{bmatrix} u_i \\ v_i \end{bmatrix} - J_i^{-1} K_i$  (14)

Solutions $(u, v)$ were verified by plugging them back into equations (7) and (8) & verifying that $n_h, n_v$ were unchanged. Inconsistent solutions were discarded; solutions outside the image space $[0, 1280), [0, 720)$ were also discarded. The process (10) - (14) was repeated to tile the image space with grid intersections, as indicated in (9), and this was repeated for all z in $(0..0.1..20)$, resulting in a large (74k points) dataset of $(u, v, n_h, n_v, z)$, which was converted to full real-world coordinates based on the measured spacing of the grid lines, $(u, v, x, y, z)$. Between individual z steps, $n_{h,origin}, n_{v,origin}$ were re-estimated to minimize (for the current z):
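The full-Jacobian refinement (12)-(14) then looks like this; again the toy parameters are illustrative stand-ins for the fitted camera model:

```python
import numpy as np

def newton_refine(nh, nv, u, v, hp, vp, steps=3):
    """Eqs. (12)-(14): Newton's method on the two phase equations,
    refining (u, v) until both quadratic phases hit 3*pi/2 + 2*pi*n."""
    ah, bh, ch, dh, eh, fh = hp
    av, bv, cv, dv, ev, fv = vp
    for _ in range(steps):
        J = np.array([[ah*v/1000 + dh, ah*u/1000 + 2*bh*v/1000 + ch + eh],
                      [2*av*u/1000 + bv*v/1000 + cv + dv, bv*u/1000 + ev]])
        K = np.array([ah*u*v/1000 + bh*v**2/1000 + (ch + eh)*v + dh*u + fh
                      - 3*np.pi/2 - 2*np.pi*nh,
                      av*u**2/1000 + bv*u*v/1000 + (cv + dv)*u + ev*v + fv
                      - 3*np.pi/2 - 2*np.pi*nv])
        u, v = np.array([u, v]) - np.linalg.solve(J, K)
    return u, v

# example: converge onto the (n_h=2, n_v=2) grid intersection
hp = (0.02, 0.01, 0.01, 0.05, 0.02, 0.3)
vp = (0.01, 0.02, 0.01, 0.03, 0.05, 0.2)
ui, vi = newton_refine(2, 2, 150.0, 150.0, hp, vp, steps=5)
```

Since the quadratic terms are small (they carry the 1/1000 factor), convergence is essentially quadratic and a handful of steps reaches machine precision.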

$(u_{origin}^{z+0.1} - u_{origin}^{z})^2 + (v_{origin}^{z+0.1} - v_{origin}^{z})^2$  (15)

with grid search and the method of equations (9-14). This was required because the stochastic method used to find the original image-model parameters was agnostic to phase, so phase (via the parameters $f_h, f_v$) could jump between individual z measurements. The origin did not move much between successive measurements, hence (15) fixed the jumps.

To this dataset, a model was fit:

$\begin{bmatrix} u \\ v \end{bmatrix} = A \begin{bmatrix} 1 & x & y & z & x^2 & y^2 & z^2 & w^2 & xy & xz & yz & xw & yw & zw \end{bmatrix}^T$  (16)

Where $x = x/10$, $y = y/10$, $z = z/10$ (the coordinates were rescaled before fitting), and $w = 20/(20 - z)$. $w$ was introduced as an auxiliary variable to assist in perspective mapping, a la computer graphics. Likewise, $x, y, z$ were scaled so the quadratic nonlinearity better matched the data.

The model (16) was fit using regular linear regression over all rows of the validated dataset. This resulted in a second set of coefficients A for a model of world coordinates to image coordinates; again, the model was inverted using Newton's method (Jacobian omitted here!). These coefficients, one set per camera, were then integrated into the C++ program for displaying video, and the inverse mapping (using closed-form matrix inversion) was used to convert mouse clicks to real-world coordinates for robot motor control. Even with the relatively poor wide-FOV cameras employed, the method is accurate to ±50μm , and precise to ±120μm .
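The fit of model (16) is ordinary least squares over the feature rows. The sketch below recovers a hypothetical coefficient matrix A from synthetic data; the $w = 20/(20 - z)$ scaling, and applying it to the raw z, follow my reading of the (garbled) source:

```python
import numpy as np

def design_row(x, y, z):
    """Feature vector of eq. (16), with the scalings from the text.
    The perspective term w is computed from the raw z (mm, assumed)."""
    w = 20.0 / (20.0 - z)
    x, y, z = x / 10.0, y / 10.0, z / 10.0
    return np.array([1.0, x, y, z, x*x, y*y, z*z, w*w,
                     x*y, x*z, y*z, x*w, y*w, z*w])

# fit A by linear regression over a synthetic stand-in for the
# 74k-point dataset; A_true is a hypothetical ground-truth model
rng = np.random.default_rng(1)
pts = rng.uniform([-5.0, -5.0, 0.0], [5.0, 5.0, 19.0], size=(500, 3))
X = np.vstack([design_row(*p) for p in pts])
A_true = rng.normal(size=(2, 14))
UV = X @ A_true.T                              # synthetic (u, v) observations
A_fit = np.linalg.lstsq(X, UV, rcond=None)[0].T
```

On noise-free synthetic data the regression recovers the generating coefficients; on real data the residual gives a direct estimate of the model's reprojection error.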

{1059}
ref: -0 tags: electrode implantation spring modeling muscles sewing date: 01-16-2012 17:30 gmt revision:0 [head]

PMID-21719340 Modelization of a self-opening peripheral neural interface: a feasibility study.

  • Electrode is self-opening, and they outline the math behind it. This could be useful!