{1471}
ref: -0 tags: python timelapse script date: 07-30-2019 20:45 gmt revision:3

Edited Terrence Eden's script to average multiple frames when producing a time-lapse video from a continuous video. Frames are averaged together before decimation, rather than simply dropped (pure decimation, as with ffmpeg). This produces appealing results on subjects like water. The script also outputs a video directly, without having to write individual images.

python
#!/usr/bin/python
import cv2
import sys

#   Video to read
print(sys.argv[1])
vidcap = cv2.VideoCapture(sys.argv[1])

#   Which frame to start from, how many frames to go through
start_frame = 0
frames = 61000

#   Counters
count = 0
save_seq = 0
decimate = 10
rolling = 16 # average over N output frames
transpose = False

if(transpose):
	h = vidcap.get(cv2.CAP_PROP_FRAME_WIDTH)
	w = vidcap.get(cv2.CAP_PROP_FRAME_HEIGHT)
else:
	w = vidcap.get(cv2.CAP_PROP_FRAME_WIDTH)
	h = vidcap.get(cv2.CAP_PROP_FRAME_HEIGHT)

fourcc = cv2.VideoWriter_fourcc(*'mp4v')
writer = cv2.VideoWriter("timelapse.mp4", fourcc, 30, (int(w), int(h)), True)

avglist = []

while True:
	#   Read a frame
	success,image = vidcap.read()
	if not success:
		break
	if count > start_frame+frames:
		break
	if count >= start_frame:
		if (count % decimate == 0):
			#   Start a new accumulator; uint16 avoids overflow for up to ~255 summed frames
			avg = image.astype('uint16')
		else:
			avg = avg + image.astype('uint16')
		if (count % decimate == (decimate-1)):
			#   Every `decimate` input frames, emit one averaged output frame
			avg = avg / decimate
			if(transpose):
				avg = cv2.transpose(avg)
				avg = cv2.flip(avg, 1)
			avg2 = avg
			for a in avglist:
				avg2 = avg2 + a
			avg2 = avg2 / (len(avglist) + 1) # divide by the number of frames actually summed
			avglist.append(avg)
			if len(avglist) >= rolling:
				avglist.pop(0) # drop the oldest frame
			
			avg2 = avg2.astype('uint8')
			print("saving "+str(save_seq))
			#   Save Image
			# cv2.imwrite(filename+str('{0:03d}'.format(save_seq))+".png", avg)
			save_seq += 1
			writer.write(avg2)
			if count == frames + start_frame:
				break
	count += 1
writer.release()

{1453}
ref: -2019 tags: lillicrap google brain backpropagation through time temporal credit assignment date: 03-14-2019 20:24 gmt revision:2

PMID-22325196 Backpropagation through time and the brain

  • Timothy Lillicrap and Adam Santoro
  • Backpropagation through time: the 'canonical' expansion of backprop to assign credit in recurrent neural networks used in machine learning.
    • E.g. variable-length roll-outs, where the error is propagated many times through the transpose of the recurrent weight matrix, W^T.
    • This leads to the exploding or vanishing gradient problem.
  • TCA = temporal credit assignment. What led to this reward or error? How to affect memory to encourage or avoid this?
  • One approach is to simply truncate the error: truncated backpropagation through time (TBPTT). But this of course limits the horizon of learning.
  • The brain may do BPTT via replay in both the hippocampus and cortex Nat. Neuroscience 2007, thereby alleviating the need to retain long time histories of neuron activations (needed for derivative and credit assignment).
  • A less-known method of TCA uses RTRL (real-time recurrent learning), i.e. forward-mode differentiation: ∂h_t/∂θ is computed and maintained online, often with synaptic weight updates applied at each time step in which there is non-zero error. See A learning algorithm for continually running fully recurrent neural networks.
    • Big problem: a network with N recurrent units requires O(N^3) storage and O(N^4) computation at each time-step.
    • Can be solved with Unbiased Online Recurrent Optimization (UORO), which stores approximate but unbiased gradient estimates to reduce computation / storage.
  • Attention seems like a much better way of approaching the TCA problem: past events are stored externally, and the network learns a differentiable attention-alignment module for selecting these events.
    • Memory can be finite size, extending, or self-compressing.
    • Highlight the utility/necessity of content-addressable memory.
    • Attentional gating can eliminate the exploding / vanishing / corrupting gradient problems -- the gradient paths are skip-connections.
  • Biologically plausible: partial reactivation of CA3 memories induces re-activation of neocortical neurons responsible for initial encoding PMID-15685217 The organization of recent and remote memories. 2005

  • I remain reserved about the utility of thinking in terms of gradients when describing how the brain learns. Correlations, yes; causation, absolutely; credit assignment, for sure. Yet propagating gradients as a means for changing network weights seems at best a part of the puzzle. So much of behavior and internal cognitive life involves explicit, conscious computation of cause and credit.
  • This leaves me much more sanguine about the use of external memory to guide behavior ... but differentiable attention? Hmm.
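The RTRL bookkeeping above — carrying ∂h_t/∂θ forward online — can be made concrete with a toy example. The following is my own numpy sketch, not code from the paper: a vanilla tanh RNN h_t = tanh(W h_{t-1} + x_t), with the sensitivity tensor S[k,i,j] = ∂h_k/∂W_ij maintained at every step and the resulting gradient checked against finite differences of the unrolled loss.

```python
import numpy as np

def rtrl_gradient(W, xs, ys):
    """RTRL: maintain S[k,i,j] = dh_k/dW_ij online; accumulate the gradient
    of the total squared error sum_t 0.5*||h_t - y_t||^2 w.r.t. W."""
    N = W.shape[0]
    h = np.zeros(N)
    S = np.zeros((N, N, N))          # O(N^3) storage -- RTRL's big cost
    grad = np.zeros((N, N))
    for x, y in zip(xs, ys):
        h_new = np.tanh(W @ h + x)
        D = 1.0 - h_new ** 2         # tanh' at the pre-activation
        direct = np.zeros((N, N, N)) # direct term: dpre_k/dW_ij = delta_ki * h_j
        for i in range(N):
            direct[i, i, :] = h      # h is still h_{t-1} here
        # recurse: S_t = D * (W S_{t-1} + direct); the einsum is O(N^4) work
        S = D[:, None, None] * (np.einsum('kl,lij->kij', W, S) + direct)
        h = h_new
        grad += np.einsum('k,kij->ij', h - y, S)  # inject error online
    return grad

# check against central differences of the unrolled loss
rng = np.random.default_rng(0)
N, T = 3, 5
W = 0.5 * rng.standard_normal((N, N))
xs = [rng.standard_normal(N) for _ in range(T)]
ys = [rng.standard_normal(N) for _ in range(T)]

def unrolled_loss(Wm):
    h, L = np.zeros(N), 0.0
    for x, y in zip(xs, ys):
        h = np.tanh(Wm @ h + x)
        L += 0.5 * np.sum((h - y) ** 2)
    return L

g = rtrl_gradient(W, xs, ys)
fd = np.zeros_like(W)
eps = 1e-6
for i in range(N):
    for j in range(N):
        Wp, Wm_ = W.copy(), W.copy()
        Wp[i, j] += eps; Wm_[i, j] -= eps
        fd[i, j] = (unrolled_loss(Wp) - unrolled_loss(Wm_)) / (2 * eps)
print(np.allclose(g, fd, atol=1e-5))  # True
```

Note how the O(N^3) tensor and the O(N^4) einsum make the scaling problem mentioned above visible directly in the code.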

{1396}
ref: -0 tags: rogers thermal oxide barrier neural implants ECoG coating accelerated lifetime test date: 12-28-2017 02:29 gmt revision:0

PMID-27791052 Ultrathin, transferred layers of thermally grown silicon dioxide as biofluid barriers for biointegrated flexible electronic systems

  • Thermal oxide proved the superior -- by far -- water barrier for encapsulation.
    • What about the edges?
  • Many of the polymer barrier layers look like inward-rectifiers.
  • Extensive simulations showing that the failure mode is from gradual dissolution of the SiO2 -> Si(OH)4.
    • Even then a 100nm layer is expected to last years.
    • Perhaps the same principle could be applied with barrier metals. Anodization or thermal oxidation to create a thick, nonporous passivation layer.
    • Should be possible with Al, Ta...
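The "expected to last years" claim for a 100 nm layer follows from simple rate arithmetic. A quick sketch — the ~0.1 nm/day dissolution rate below is my assumed order of magnitude for SiO2 -> Si(OH)4 at 37 °C, not a number taken from the paper:

```python
thickness_nm = 100.0          # thermal SiO2 barrier thickness
rate_nm_per_day = 0.1         # assumed dissolution rate at 37 C (order of magnitude)
lifetime_days = thickness_nm / rate_nm_per_day
print(lifetime_days / 365.0)  # ~2.7 years: "expected to last years"
```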

{1387}
ref: -1977 tags: polyethylene surface treatment plasma electron irradiation mechanical testing saline seawater accelerated lifetime date: 04-15-2017 06:06 gmt revision:0

Enhancement of resistance of polyethylene to seawater-promoted degradation by surface modification

  • Polyethylene, when repeatedly stressed and exposed to seawater (e.g. ships' ropes), undergoes mechanical and chemical degradation.
  • Surface treatments of the polyethylene can improve resistance to this degradation.
  • The author studied two methods of surface treatment:
    • Plasma (glow discharge, air) followed by diacid (adipic acid) or triisocyanate (DM100, = ?) co-polymerization
    • Electron irradiation with 500 kEV electrons.
  • Also mentions CASING (crosslinking by activated species of inert gases) as a popular method of surface treatment.
    • Diffused-in crosslinkers are a third method, popular these days ...
    • Others diffuse in at temperature, e.g. a fatty acid-derived molecule, which is then bonded to e.g. heparin to reduce the thrombogenicity of a plastic.
  • Measured surface modifications via ATR IR (attenuated total reflectance, IR) and ESCA (aka XPS)
    • Expected results, carbonyl following the air glow discharge ...
  • Results:
    • Triisocyanate: ~6x improvement.
    • Diacid: ~50x improvement.
    • Electron irradiation: no apparent degradation!
      • Author's opinion that this is due to carbon-carbon crosslink leading to mechanical toughening (hmm, evidence?)
  • Quote: since the PE formulation studied here was low-weight, it was expected to lose crystallinity upon cyclic flexing; high density PE's have in fact been observed to become more crystalline with working.
    • Very interesting, kinda like copper. This could definitely be put to good use.
  • Low density polyethylene has greater chain branching and entanglement than high-density resins; when stressed the crystallites are diminished in total bulk, degrading tensile properties ... for high-density resins, mechanical working loosens up the structure enough to allow new crystallization to exceed stress-induced shrinkage of crystallites; hence, the crystallinity increases.

{1131}
ref: -0 tags: DBS basal ganglia paradoxical kinesis reaction time date: 02-21-2012 19:52 gmt revision:1

PMID-16758482 "Paradoxical kinesis" is not a hallmark of Parkinson's disease but a general property of the motor system.

  • Paradoxical kinesis is the idea that PD patients will suddenly spring to movement when prompted by an extreme situation.
  • "Results showed that external cues and urgent conditions decreased movement duration (Urgent External Cue < External Cue < Self Generated) and reaction time (Urgent External Cue < External Cue)"
  • Results indicate that there is no difference in speed or reaction time improvement between controls and PD patients; it is a general property of the motor system.

{718}
ref: notes-0 tags: thesis timetable contingency plan hahaha date: 12-06-2011 07:15 gmt revision:5

Timetable / Plan:

  1. Get recording technology finished & assembled.
    1. Hardware
      1. Clean up prototype 2. Test in-chair with Clementine.
      2. Decide upon a good microelectrode-to-headstage connector with Gary.
      3. Fit headstage PCB into head-mounted chamber. Select battery and fit that too.
      4. Assemble one; contract Protronics to assemble 3 more.
      5. Contract Protronics to assemble 4 receiver boards.
    2. Software
      1. Headstage firmware basically complete; need to add in code for LFP measurement & transmission.
      2. Need some simple sort-client; use existing "Neurocaml" source as a basis. Alternately, use Rppl, inc's open-source "Trellis" electrophysiology suite.
      3. Integrate UDP reception into the BMI suite.
      4. Get a rugged all-in-one computer for display of the BMI task - a tablet PC in a plexiglas box would be perfect.
    3. Due: June 30 2009
  2. Monkeys.
    1. Test in-cage recording with Clementine. He's a bit long in the tooth now, and does not have enough cells in M1/premotor cortices to do BMI control.
    2. Select two monkeys, train them on 2D target acquisition with a joystick using Joey's chair and setup. Make sure the monkeys can learn the 2D task in a reasonable amount of time; we don't want to waste time on dumb monkeys.
    3. Arrange for implantation surgeries this summer, depending on the availability of neurosurgeon.
    4. Work with Gary Lehew to assemble microelectrodes & head-mounted chamber.
    5. Get an ethernet drop in the vivarium for transmission of data.
    6. Due: August 30 2009
  3. Experiments
    1. Test & refine task 1 with both monkeys. Allow a maximum of 1 month to learn task 1. Neuron class (x/y/z) selected based on correlational structure (PCA of firing rate).
      1. Will have to get them to turn off Wifi (in same wireless band as the headstages) in the vivarium.
      2. Batteries will need to be replaced daily.
      3. Data will be inspected daily, to eliminate possible confounds / fix bugs / optimize the probability that the monkey learns.
      4. Expected data rate per headstage, given a mean firing rate of 40 Hz, full waveform storage, and one LFP channel sampled at 1 kHz: ~3.5 GB / day. A 1.5 TB drive ($120) will take ~100 days to fill with data from 4 headstages.
      5. Very occasionally interleave 4-target test trials after the first week of learning, with both 'y' and 'z' neurons used to control the y-axis.
    2. Test & refine task 2 with both monkeys, in position control; here, record for a minimum of 1 month.
      1. Adjust cursor and target sizes to maintain task difficulty; measure asymptotic performance in bits/sec.
      2. Interleave randomly positioned target acquisition with stereotyped target sequences to measure neuronal tuning curves.
      3. Occasionally perturb cursor to see if there is an internal expectation of cursor motion.
    3. Switch task 2 to velocity control. Measure performance and learning effects of the switch. Train the monkey on this for at least 2 weeks, or until performance asymptotes.
    4. Shuffle the neuron class to make it non-topological, and re-train on position control in task 2 (this to test if topology matters). Train monkey for at least 3 weeks.
    5. Continue recording for as long as it seems worthwhile to do so.
    6. Due: February 1 2010
  4. Writing
    1. Write the DBS paper. This can be done in parallel with many other things, and should take about a month off and on.
    2. Keep good notes during experiments, write everything up within 1-2 months of finishing the proposed experiments.
    3. Write thesis.
    4. Due: June 2010
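The storage arithmetic in the data-rate item above checks out (reading the quoted "3.5Gb" as gigabytes; the 16-bit LFP sample size is my assumption):

```python
GB = 1e9
lfp_per_day = 1000 * 2 * 86400      # 1 kHz LFP, 16-bit samples (assumed size)
daily_per_headstage = 3.5 * GB      # figure quoted in the plan (spikes + LFP)
days_to_fill = 1.5e12 / (4 * daily_per_headstage)  # 1.5 TB drive, 4 headstages
print(lfp_per_day / GB)             # ~0.17 GB/day: LFP is a small fraction of the total
print(round(days_to_fill))          # 107, consistent with "100 days to fill"
```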

Contingency Plan:

  1. Recording technology does not work / cannot be made workable in a reasonable amount of time (Reasonable = 4 months.)
    1. Use Plexon, record for as long as possible (or permissible given our protocol - 4 hours) while monkey is in chair. If monkeys will not go into REM/SWS in a chair, as seems likely given what I've tried, scratch the sleep specific aim.
    2. Focus instead on making the simplified BMI work. Will have to assume that neuron identity does not change between sessions.
  2. Monkey surgery fails.
    1. Unlikely. If it does happen, we should just get another monkey. As Joey's travails in publishing his paper show, it is best to have two monkeys that learn and perform the same task.
    2. Even if the implants don't last as long as all the others, the core experiments can be completed within 2 months. Recording quality from even our worst monkey has lasted much longer than this.
  3. Monkey does not learn the BMI
    1. Focus on figuring out why the monkeys cannot learn it - start by re-implementing Dawn Taylor's kludgy autoadaptive algorithm, and go from there.
    2. Focus on sleep. Put a joystick into the cage, and train the monkey on relatively complex sequences of movement to see if there is replay.
    3. Use the experiment as a springboard to test more complicated decoding algorithms with the help of Zheng.
  4. There are no signs of replay.
    1. Try different mathematical methods of looking for replay.
    2. If still nothing, report that.

{912}
ref: Carlton-1981.1 tags: visual feedback 1981 error correction movement motor control reaction time date: 12-06-2011 06:35 gmt revision:1

PMID-6457106 Processing visual feedback information for movement control.

  • Visual feedback can correct movement within 135 ms.
  • Measured this by simply timing the latency from presentation of visual error to initiation of corrective movement.

{826}
ref: work-0 tags: PSD FFT periodogram autocorrelation time series analysis date: 07-19-2010 18:45 gmt revision:3

Studies in astronomical time series analysis. II - Statistical aspects of spectral analysis of unevenly spaced data. Scargle, J. D.

  • The power at a given frequency, as computed by a periodogram (the FFT is a special case of the periodogram) of a Gaussian white noise source with uniform variance, is exponentially distributed: P_z(z) dz = P(z < Z < z+dz) = e^{-z} dz
    • The corresponding tail probability P(Z > z) = e^{-z} (CDF 1 - e^{-z}) gives the probability of observing a large power at a given frequency.
    • If you inspect N independent frequencies, then P(Z > z) = 1 - (1 - e^{-z})^N, where Z = max_n Pow(ω_n)
  • Means of improving detection using a periodogram:
    • Average in time - this means that N above will be smaller, hence a spectral peak becomes more significant.
      • Cannot average too much - at some point, averaging will start to attenuate the signal!
    • Decrease the number of frequencies inspected.
  • Deals a good bit with non-periodic sampling, which I guess is more common in astronomical data (the experimenter may not take a photo every day, or at the same time every day - clouds!).
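The exponential-power statistics above are easy to verify numerically. A quick sketch of my own (not from the paper): periodogram of Gaussian white noise, normalized so each bin has unit mean, checked against P(Z > z) = e^{-z} and the N-frequency maximum formula.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 1024, 2000
nbins = n // 2 - 1                    # interior one-sided frequencies
all_pw = np.empty((trials, nbins))
for t in range(trials):
    x = rng.standard_normal(n)
    X = np.fft.rfft(x)
    # normalized periodogram: |X_k|^2 / n has unit mean for unit-variance noise
    all_pw[t] = np.abs(X[1:n // 2]) ** 2 / n

z = 2.0
print(np.mean(all_pw > z), np.exp(-z))   # both ~0.135: single-bin tail P(Z>z) = e^{-z}
zmax = 5.0
emp = np.mean(all_pw.max(axis=1) > zmax)
theory = 1 - (1 - np.exp(-zmax)) ** nbins
print(emp, theory)                       # max over N bins: 1 - (1 - e^{-z})^N
```

This also illustrates the "decrease the number of frequencies inspected" point: shrinking nbins in the maximum formula directly lowers the false-alarm probability for a given power threshold.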

{761}
ref: life-0 tags: NYTimes genius talent skill learning date: 06-27-2009 18:36 gmt revision:1

http://www.nytimes.com/2009/05/01/opinion/01brooks.html?_r=1 -- the 'modern view' of genius. Makes sense to me.

  • quote: "By practicing in this way, performers delay the automatizing process. The mind wants to turn deliberate, newly learned skills into unconscious, automatically performed skills. But the mind is sloppy and will settle for good enough. By practicing slowly, by breaking skills down into tiny parts and repeating, the strenuous student forces the brain to internalize a better pattern of performance." -- exactly!!
  • quote: The primary trait she possesses is not some mysterious genius. It’s the ability to develop a deliberate, strenuous and boring practice routine.
  • It's not who you are, it's what you do. (law of the cortex: you get good at what you do).
  • The subconscious / the ability to push skills to the subconscious should not be neglected. Insight apparently is mostly subconscious, and rapid decisions are too - the rational/conscious brain is simply too slow and deliberate to form realtime behavior & reactions, but as the above quote highlights, it is also too 'lazy' and accepting to carefully hone a true skill. This requires attention.
  • From the guardian -- "Sometimes an overload of facts is the mark of a dull and pedestrian mind, the antithesis of intelligence."
    • also: "Intelligence is a matter of output, not scores on a test." We know genius & talent by its output.

{757}
ref: life-0 tags: perl disaser films vs time date: 06-15-2009 23:02 gmt revision:1

My friend Joey recently showed me the trailer to "The Road", and I banefully observed that it was "yet another disaster film". This made me wonder if the number of disaster films is increasing with time - a question that was easily answered with the help of perl, matlab, and Wikipedia's list of disaster films.

First, I saved the page, then converted the list of dates contained therein into a matlab-formatted string with the following quick-n-dirty script:

$source = $ARGV[0];
open(FH, "< $source") or die "cannot open $source: $!";
@j = <FH>; # read the entire file into an array of lines
print "dates = ["; 
$first = 1; 
foreach $l (@j){
	while ($l =~ /\((\d{4})\)/gs ){
		if(not $first){
			print ","; 
		}
		print $1 ; 
		$first = 0; 
	}
}
close FH; 
print "]; \n"; 

then plotted it in matlab:

hist(dates, 20) % 20 bins, roughly 5-year periods

yielding:

thereby validating my expectations that the number of disaster films has increased with time! (Note i did not say the percentage of total films - that might be constant :-)

{637}
ref: notes-0 tags: wireless spectrum FCC regulation nytimes date: 10-13-2008 22:52 gmt revision:0

My comments on this blog post, preserved here for posterity:

I agree with William’s first point, spectrum is ‘owned’ by everybody; the government’s only purpose is to regulate it so that it remains an effective communication medium. Like the bandwidth that it uses, the communication system is optimally owned by users, hence it is a bad idea to auction off segments of spectrum for exclusive use by corporations.

Examine what happened to the 2.4 GHz band, an area where water absorption is high and most households have a 1 kW noise generator (the microwave oven): EVERYONE USES IT because it is FREE and OPEN, no licenses required. Just look at all the innovation created for this band: 802.11, bluetooth, ZigBee, cordless phones, wireless remotes, and others. If 802.11 were in the 700 MHz - 1 GHz band, someone or a company could easily make long-distance wireless repeaters & mesh-network nodes, sell them to consumers, and everyone could SIP for FREE without paying Verizon / ATT etc. This could be set up as a pyramid scheme, where to get on the network you simply have to buy a mesh node repeater, and with it become part of the 'corporation' which provides your wireless services. A certain part of the purchase & access price would, of course, need to go to pay for backbone connections, service, maintenance and extending connection to remote areas, but this too can be solved and managed efficiently with something like 1 phone = 1 share.

With corporations, you either have redundancy (two networks with twice as many cell towers) or a monopoly; neither is economically efficient. A re-allocation of prime wireless spectrum back to the correct owners - the citizens - would spur American innovation greatly and simultaneously cut communication costs. The technology is changing, and the policy should too!

Anyway, I'm sick of paying $0.10 for 100 bytes of data (txt messages) when audio data costs ~1/500th of that.

{538}
ref: notes-0 tags: two-photon laser imaging fluorescence lifetime imaging FRET GFP RFP date: 01-21-2008 17:23 gmt revision:0

images/538_1.pdf

{370}
ref: -0 tags: gore curibata pencil art NYTimes magazine travel brazil date: 05-20-2007 16:35 gmt revision:1

An awesome pencil drawing of Al Gore, in the May 20th issue of the NYTimes magazine.

Curitiba, Brazil - a city unusual for its urban planning, ecological mindset, bussing system, and affluence (compared to the rest of Brazil), and its ratio of parks to buildings. I would like to go there.