m8ta
{1546}
ref: -1992 tags: Linsker infomax Hebbian anti-hebbian linear perceptron unsupervised learning date: 08-04-2021 00:20 gmt revision:2 [1] [0] [head]

Local synaptic learning rules suffice to maximize mutual information in a linear network

  • Ralph Linsker, 1992.
  • A development upon {1545} -- this time with lateral inhibition trained through noise-contrast and anti-Hebbian plasticity.
  • {1545} does not perfectly maximize the mutual information between the input and output -- this allegedly requires the inverse of the covariance matrix, Q .
    • As before, infomax principles; maximize mutual information MI \propto H(Z) - H(Z|S) where Z is the network output and S is the signal input. (note: minimize the conditional entropy of the output given the input).
    • For a gaussian variable, H = \frac{1}{2} \ln \det Q where Q is the covariance matrix. In this case Q = E|Z Z^T| .
    • Since Z = C(S,N) where C are the weights, S is the signal, and N is the noise, Q = C q C^T + r where q is the covariance matrix of the input noise and r is the cov.mtx. of the output noise.
    • (somewhat confusing): \delta H / \delta C = Q^{-1} C q
      • because .. the derivative of the determinant is complicated.
      • Check the appendix for the derivation. \ln \det Q = Tr \ln Q and dH = \frac{1}{2} d( Tr \ln Q ) = \frac{1}{2} Tr( Q^{-1} dQ ) -- this holds for positive semidefinite matrices like Q.

  • From this he comes up with a set of rules whereby feedforward weights are trained in a Hebbian fashion, but based on activity after lateral activation.
  • The lateral activation has a weight matrix F = I - \alpha Q (again Q is the cov.mtx. of Z). If y(0) = Y ; y(t+1) = Y + F y(t) , where Y is the feed-forward activation, then \alpha y(\infty) = Q^{-1} Y . This checks out:
x = randn(1000, 10);
Q = x' * x;
a = 0.001;
Y = randn(10, 1);
y = zeros(10, 1); 
for i = 1:1000
	y = Y + (eye(10) - a*Q)*y;
end

y - pinv(Q)*Y / a % should be zero. 
  • This recursive definition is from Jacobi. \alpha y(\infty) = \alpha \Sigma_{t=0}^{\infty} F^t Y = \alpha (I - F)^{-1} Y = Q^{-1} Y .
  • Still, you need to estimate Q through a running average, \Delta Q_{nm} = \frac{1}{M}( Y_n Y_m + r_{nm} - Q_{nm} ) , and since F = I - \alpha Q , F is formed via anti-Hebbian terms.
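
A minimal sketch of that estimation step (my gloss, not Linsker's code -- the averaging constant M, the gain a, and the output-noise covariance r are assumed values):

M = 100;                  % running-average constant (assumed)
a = 0.001;                % lateral gain alpha (assumed)
n = 10;
r = 0.01*eye(n);          % assumed output-noise covariance r_nm
Q = eye(n);               % running estimate of the output covariance
for i = 1:5000
	Y = randn(n, 1);                 % stand-in for the feed-forward activations
	Q = Q + (1/M)*(Y*Y' + r - Q);    % Delta Q = (1/M)(Y_n Y_m + r_nm - Q_nm)
end
F = eye(n) - a*Q;         % anti-Hebbian lateral weight matrix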

To this is added a 'sensing' learning and 'noise' unlearning phase -- one optimizes H(Z) , the other minimizes H(Z|S) . Everything is then applied, similar to before, to gaussian-filtered one-dimensional white-noise stimuli. He shows this results in bandpass filter behavior -- quite weak sauce in an era where ML papers are expected to test on five or so datasets. Even if this was 1992 (nearly thirty years ago!), it would have been nice to see this applied to a more realistic dataset; perhaps some of the following papers? Olshausen & Field came out in 1996 -- but they applied their algorithm to real images.

In both Olshausen & this work, no affordances are made for multiple layers. There have to be solutions out there...

{1545}
ref: -1988 tags: Linsker infomax linear neural network hebbian learning unsupervised date: 08-03-2021 06:12 gmt revision:2 [1] [0] [head]

Self-organization in a perceptual network

  • Ralph Linsker, 1988.
  • One of the first (verbose, slightly diffuse) investigations of the properties of linear projection neurons (e.g. dot-product; no non-linearity) to express useful tuning functions.
  • 'Useful' here means information-preserving, in the face of noise or dimensional bottlenecks (like PCA).
  • Starts with Hebbian learning rules, and shows that with these + white-noise sensory input + some local topology, you can get simple and complex visual cell responses.
    • Ralph notes that neurons in primate visual cortex are tuned in utero -- prior to real-world visual experience! Wow. (Who did these studies?)
    • This is a very minimalistic starting point; there isn't even structured stimuli (!)
    • Single neuron (and later, multiple neurons) are purely feed-forward; author cautions that a lack of feedback is not biologically realistic.
      • Also note that this was back in the Motorola 680x0 days ... computers were not that powerful (but certainly could handle more than 1-2 neurons!)
  • Linear algebra shows that Hebbian synapses cause a linear layer to learn the covariance function of their inputs, Q , with no dependence on the actual layer activity.
  • When looked at in terms of an energy function, this is equivalent to gradient descent to maximize the layer-output variance.
  • He also hits on:
    • Hopfield networks,
    • PCA,
    • Oja's constrained Hebbian rule \delta w_i \propto \langle L_2 ( L_1 - L_2 w_i ) \rangle (that is, a quadratic constraint on the weight to make \Sigma w^2 \sim 1 ) -- see the sketch after this list.
    • Optimal linear reconstruction in the presence of noise
    • Mutual information between layer input and output (I found this to be a bit hand-wavey)
      • Yet he notes critically: "but it is not true that maximum information rate and maximum activity variance coincide when the probability distribution of signals is arbitrary".
        • Indeed. The world is characterized by very non-Gaussian structured sensory stimuli.
    • Redundancy and diversity in 2-neuron coding model.
    • Role of infomax in maximizing the determinant of the weight matrix, sorta.
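
A small sketch of Oja's rule (mine, not from the paper): with input L_1 = x and output L_2 = w^T x , the weight vector converges to the leading principal component of the input covariance while |w| stays near 1 -- the Hebbian-learning / PCA connection noted above.

n = 5;
C = randn(n); C = C*C';          % an arbitrary input covariance
R = chol(C)';                    % to draw x ~ N(0, C)
w = randn(n, 1);
eta = 1e-3;
for i = 1:20000
	x = R*randn(n, 1);           % input L_1
	y = w'*x;                    % output L_2
	w = w + eta * y * (x - y*w); % Oja's constrained Hebbian update
end
[V, D] = eig(C);
[~, imax] = max(diag(D));
abs(w'*V(:, imax))               % ~1: w aligns with the top eigenvector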

One may critically challenge the infomax idea: we very much need to (and do) throw away spurious or irrelevant information in our sensory streams; what upper layers 'care about' when making decisions is certainly relevant to the lower layers. This credit assignment is neatly solved by backprop, and there are a number of 'biologically plausible' means of performing it, but both this and infomax are maybe avoiding the problem. What might the upper layers really care about? Likely 'care about' is an emergent property of the interacting local learning rules and network structure. Can you search directly in these domains, within biological limits, and motivated by statistical reality, to find unsupervised-learning networks?

You'll still need a way to rank the networks, hence an objective 'care about' function. Sigh. Either way, I don't per se put a lot of weight in the infomax principle. It could be useful, but is only part of the story. Otherwise Linsker's discussion is accessible, lucid, and prescient.

Lol.

{1493}
ref: -0 tags: nonlinear hebbian synaptic learning rules projection pursuit date: 12-12-2019 00:21 gmt revision:4 [3] [2] [1] [0] [head]

PMID-27690349 Nonlinear Hebbian Learning as a Unifying Principle in Receptive Field Formation

  • Here we show that the principle of nonlinear Hebbian learning is sufficient for receptive field development under rather general conditions.
  • The nonlinearity is defined by the neuron’s f-I curve combined with the nonlinearity of the plasticity function. The outcome of such nonlinear learning is equivalent to projection pursuit [18, 19, 20], which focuses on features with non-trivial statistical structure, and therefore links receptive field development to optimality principles.
  • \Delta w \propto x h(g(w^T x)) where h is the hebbian plasticity term, g is the neuron's f-I curve (input-output relation), and x is the (sensory) input. (A toy sketch of this rule follows this list.)
  • The relevant property of natural image statistics is that the distribution of features derived from typical localized oriented patterns has high kurtosis [5,6, 39]
  • Model is a generalized leaky integrate and fire neuron, with triplet STDP
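
A toy sketch of the nonlinear Hebbian rule on whitened input (my own, far simpler than their LIF / triplet-STDP model; the nonlinearity h(g(u)) = u^3 and the heavy-tailed source are assumptions). On whitened data the rule behaves like projection pursuit and rotates w toward the high-kurtosis direction:

N = 20000;
S = [sign(randn(1, N)) .* (-log(rand(1, N)));   % heavy-tailed (high-kurtosis) source
     randn(1, N)];                              % gaussian source
A = [cos(0.7) -sin(0.7); sin(0.7) cos(0.7)];    % mix the two sources
X = A*S;
X = sqrtm(inv(cov(X'))) * X;                    % whiten
w = randn(2, 1); w = w/norm(w);
eta = 1e-4;
for i = 1:N
	x = X(:, i);
	w = w + eta * x * (w'*x)^3;                 % Delta w ~ x h(g(w'x)), with u^3 nonlinearity
	w = w/norm(w);                              % normalization (Oja-like constraint)
end
w'   % aligns (up to sign) with the whitened image of the heavy-tailed source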

{1447}
ref: -2006 tags: Mark Bear reward visual cortex cholinergic date: 03-06-2019 04:54 gmt revision:1 [0] [head]

PMID-16543459 Reward timing in the primary visual cortex

  • Used 192-IgG-Saporin (a saporin immunotoxin) to selectively lesion cholinergic fibers locally in V1, in a visual stimulus -> licking reward delay behavior.
  • Visual stimulus is full-field light, delivered to either the left or right eye.
    • This is scarcely a challenging task; perhaps they or others have followed up?
  • These examples illustrate that both cue 1-dominant and cue 2-dominant neurons recorded from intact animals express NRTs that appropriately reflect the new policy. Conversely, although cue 1- and cue 2-dominant neurons recorded from 192-IgG-saporin-infused animals are capable of displaying all forms of reward timing activity, ‘’’they do not update their NRTs but rather persist in reporting the now outdated policy.’’’
    • NRT = neural reaction time.
  • This needs to be controlled with recordings from other cortical areas.
  • Acquisition of reward based response is simultaneously interesting and boring -- what about the normal, discriminative and perceptual function of the cortex?
  • See also follow-up work PMID-23439124 A cholinergic mechanism for reward timing within primary visual cortex.

{1387}
ref: -1977 tags: polyethylene surface treatment plasma electron irradiation mechanical testing saline seawater accelerated lifetime date: 04-15-2017 06:06 gmt revision:0 [head]

Enhancement of resistance of polyethylene to seawater-promoted degradation by surface modification

  • Polyethylene, when repeatedly stressed and exposed to seawater (e.g. ships' ropes), undergoes mechanical and chemical degradation.
  • Surface treatments of the polyethylene can improve resistance to this degradation.
  • The author studied two methods of surface treatment:
    • Plasma (glow discharge, air) followed by diacid (adipic acid) or triisocyanate (DM100, = ?) co-polymerization
    • Electron irradiation with 500 keV electrons.
  • Also mention CASING (crosslinking by activated species of inert gasses) as a popular method of surface treatment.
    • Diffused-in crosslinkers are a third method, popular these days ...
    • Others diffuse in at temperature e.g. a fatty acid - derived molecule, which is then bonded to e.g. heparin to reduce the thrombogenicity of a plastic.
  • Measured surface modifications via ATR IR (attenuated total reflectance, IR) and ESCA (aka XPS)
    • Expected results, carbonyl following the air glow discharge ...
  • Results:
    • Triisocyanate: ~6x improvement
    • Diacid: ~50x improvement
    • Electron irradiation: no apparent degradation!
      • Author's opinion that this is due to carbon-carbon crosslink leading to mechanical toughening (hmm, evidence?)
  • Quote: since the PE formulation studied here was low-weight, it was expected to lose crystallinity upon cyclic flexing; high density PE's have in fact been observed to become more crystalline with working.
    • Very interesting, kinda like copper. This could definitely be put to good use.
  • Low density polyethylene has greater chain branching and entanglement than high-density resins; when stressed the crystallites are diminished in total bulk, degrading tensile properties ... for high-density resins, mechanical working loosens up the structure enough to allow new crystallization to exceed stress-induced shrinkage of crystallites; hence, the crystallinity increases.

{1279}
ref: -0 tags: parylene plasma ALD insulation long-term saline PBS testing date: 04-02-2014 21:32 gmt revision:0 [head]

PMID-23024377 Plasma-assisted atomic layer deposition of Al(2)O(3) and parylene C bi-layer encapsulation for chronic implantable electronics.

  • This report presents an encapsulation scheme that combines Al(2)O(3) by atomic layer deposition with parylene C.
  • Al2O3 layer deposited using a PAALD process -- 500 cycles of TMA + O2 gas.
  • Alumina and parylene coating lasted at least 3 times longer than parylene coated samples tested at 80 °C
    • That's it?
  • The consistency of leakage current suggests that no obvious corrosion was occurring to the Al2O3 film. The extremely low leakage current (≤20 pA) was excellent for IDEs after roughly three years of equivalent soaking time at 37 °C.
    • Still, they warn that it may not work as well for in-vivo devices, which are subject to tethering forces and micromotion.

{1152}
ref: -0 tags: impedance digital transmission line date: 03-14-2012 22:20 gmt revision:0 [head]

http://web.cecs.pdx.edu/~greenwd/xmsnLine_notes.pdf -- Series termination will work, provided the impedance of the driver + series resistor is matched to the impedance of the transmission line being driven.
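
As a quick worked example (my numbers, not from the note): for a Z_0 = 50 ohm trace driven by a buffer with roughly 20 ohm output impedance, a series resistor of about R_s = Z_0 - Z_{driver} = 50 - 20 = 30 ohm at the driver makes the source impedance match the line, so the wave reflected from the unterminated far end is absorbed when it returns to the source.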

School has been so long ago, I've forgotten these essentials!

{760}
ref: -0 tags: LDA myopen linear discriminant analysis classification date: 01-03-2012 02:36 gmt revision:2 [1] [0] [head]

How does LDA (Linear discriminant analysis) work?

It works by computing one linear discriminant function per output class -- a projection of the data point onto a class-specific direction, plus a class-specific offset -- and assigning the class whose discriminant value is largest.
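
Concretely (this matches the code below; equal class priors \pi_i = 1/K are assumed): with pooled covariance C and class means \mu_i , the discriminant for class i is g_i(x) = \mu_i^T C^{-1} x - \frac{1}{2} \mu_i^T C^{-1} \mu_i + \ln \pi_i , and the predicted class is \arg\max_i g_i(x) .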

Below, to the left is a top-view of this projection with 9 different classes of 2D data each in a different color. Right is a side 3D view of the projection - note the surfaces seem to form a parabola.

Here is the matlab code that computes the LDA (from myopen's ceven

% TrainData, TrainClass, and TestData are inputs, column major here.
% (observations on columns)
N = size(TrainData,1);
Ptrain = size(TrainData,2);
Ptest = size(TestData,2);

% add a bit of interpolating noise to the data.
sc = std(TrainData(:)); 
TrainData =  TrainData + sc./1000.*randn(size(TrainData));

K = max(TrainClass); % number of classes.

%%-- Compute the means and the pooled covariance matrix --%%
C = zeros(N,N);
for l = 1:K;
	idx = find(TrainClass==l);
		% measure the mean per class
	Mi(:,l) = mean(TrainData(:,idx)')';
		% sum all covariance matrices per class
	C = C + cov((TrainData(:,idx)-Mi(:,l)*ones(1,length(idx)))');
end

C = C./K; % turn sum into average covariance matrix
Pphi = 1/K;
Cinv = inv(C);

%%-- Compute the LDA weights --%%
for i = 1:K
	Wg(:,i) = Cinv*Mi(:,i);
		% this is the slope of the plane
	Cg(:,i) = -1/2*Mi(:,i)'*Cinv*Mi(:,i) + log(Pphi)';
		% and this, the origin-intersect.
end

%%-- Compute the decision functions --%%
Atr = TrainData'*Wg + ones(Ptrain,1)*Cg;
	% see - just a simple linear function! 
Ate = TestData'*Wg + ones(Ptest,1)*Cg;

errtr = 0;
AAtr = compet(Atr');
	% this compet function returns a sparse matrix with a 1
	% in the position of the largest element per row. 
	% convert to indices with vec2ind, below. 
TrainPredict = vec2ind(AAtr);
errtr = errtr + sum(sum(abs(AAtr-ind2vec(TrainClass))))/2;
netr = errtr/Ptrain;
PeTrain = 1-netr;

{65}
ref: Laubach-2003.03 tags: cluster matlab linux neurophysiology recording on-line data_analysis microstimulation nicolelis laubach date: 12-17-2011 00:38 gmt revision:4 [3] [2] [1] [0] [head]

IEEE-1215970 (pdf)

  • 2003
  • M. Laubach
  • Random Forests - what are these?
  • was this ever used??

follow up paper: http://spikelab.jbpierce.org/Publications/LaubachEMBS2003.pdf

  • discriminant pursuit algorithm & local regression basis (again, what are these? led me to find the lazy learning package: http://iridia.ulb.ac.be/~lazy/ )

____References____

Laubach, M. and Arieh, Y. and Luczak, A. and Oh, J. and Xu, Y. Bioengineering Conference, 2003 IEEE 29th Annual, Proceedings of 17 - 18 (2003.03)

{91}
ref: notes-0 tags: perl one-liner svn strip lines count resize date: 03-22-2011 16:37 gmt revision:13 [12] [11] [10] [9] [8] [7] [head]

to remove lines beginning with a question mark (e.g. from subversion)

svn status | perl -nle 'print if !/^\?/' 

here's another example, for cleaning up the output of ldd:

ldd kicadocaml.opt | perl -nle '$_ =~ /^(.*?)=>/; print $1 ;' 

and one for counting the lines of non-blank source code:

cat *.ml | perl -e '$n = 0; while ($k = <STDIN>) {if($k =~ /\w+/){$n++;}} print $n . "\n";'

By that metric, kicadocaml (check it out!), which I wrote in the course of learning Ocaml, has about 7500 lines of code.

Here is one for resizing a number of .jpg files in a directory into a thumb/ subdirectory:

ls -lah | perl -nle 'if( $_ =~ /(\w+)\.jpg/){ `convert $1.jpg -resize 25% thumb/$1.jpg`;}'
or, even simpler:
ls *.JPG | perl -nle '`convert $_ -resize 25% thumb/$_`;'

Note that the -e command line flag tells perl to evaluate the expression, -n causes the expression to be evaluated once per input line from standard input, and -l puts a line break after every print statement. reference

For replacing characters in a file, do something like:

cat something |  perl -nle '$_ =~ s/,/\t/g; print $_'

{846}
ref: -0 tags: perl shuffle lines from sdtdin date: 10-31-2010 13:57 gmt revision:0 [head]

Shuffle lines read in from stdin. I keep this script in /usr/local/bin on my systems, mostly for doing things like ls | shuffle > pls.txt && mplayer -playlist pls.txt

#!/usr/bin/perl -w
use List::Util 'shuffle';

while (<STDIN>) {
    push(@lines, $_);
}
@reordered = shuffle(@lines);
foreach (@reordered) {
    print $_;
}

{818}
ref: work-0 tags: perl fork read lines external program date: 06-15-2010 18:08 gmt revision:0 [head]

Say you have a program, called from a perl script, that may run for a long time. How do you get at the program's output as it appears?

Simple - open a pipe to the program's STDOUT. See http://docstore.mik.ua/orelly/perl/prog3/ch16_03.htm Below is an example - I wanted to see the output of programs run, for convenience, from a perl script (didn't want to have to remember - or get wrong - all the command line arguments for each).

#!/usr/bin/perl

$numArgs = $#ARGV + 1;
if($numArgs == 1){
	if($ARGV[0] eq "table"){
		open STATUS, "sudo ./video 0xc1e9 15 4600 4601 0 |";
		while(<STATUS>){
			print ; 
		}
		close STATUS ; 
	}elsif($ARGV[0] eq "arm"){
		open STATUS, "sudo ./video 0x1ff6 60 4597 4594 4592 |";
		while(<STATUS>){
			print ; 
		}
		close STATUS ; 
	}else{ print "$ARGV[0] not understood - say arm or table!\n"; 
	}
}

{796}
ref: work-0 tags: machine learning manifold detection subspace segregation linearization spectral clustering date: 10-29-2009 05:16 gmt revision:5 [4] [3] [2] [1] [0] [head]

An interesting field in ML is nonlinear dimensionality reduction - data may appear to be in a high-dimensional space, but mostly lies along a nonlinear lower-dimensional subspace or manifold. (Linear subspaces are easily discovered with PCA or SVD(*)). Dimensionality reduction projects high-dimensional data into a low-dimensional space with minimum information loss -> maximal reconstruction accuracy; nonlinear dim reduction does this (surprise!) using nonlinear mappings. These techniques set out to find the manifold(s):

  • Spectral Clustering
  • Locally Linear Embedding
    • related: The manifold ways of perception
      • Would be interesting to run nonlinear dimensionality reduction algorithms on our data! What sort of space does the motor system inhabit? Would it help with prediction? Am quite sure people have looked at Kohonen maps for this purpose.
    • Random irrelevant thought: I haven't been watching TV lately, but when I do, I find it difficult to recognize otherwise recognizable actors. In real life, I find no difficulty recognizing people, even some whom I don't know personally - is this a data thing (little training data), or a mapping thing (not enough time training my TV-not-eyes facial recognition)?
  • A Global Geometric Framework for Nonlinear Dimensionality Reduction method (a rough matlab sketch is included at the end of this entry):
    • map the points into a graph by connecting each point with a certain number of its neighbors or all neighbors within a certain radius.
    • estimate geodesic distances between all points in the graph by finding the shortest graph connection distance
    • use MDS (multidimensional scaling) to embed the original data into a smaller-dimensional euclidean space while preserving as much of the original geometry as possible.
      • Doesn't look like a terribly fast algorithm!

(*) SVD maps into 'concept space', an interesting interpretation as per Leskovec's lecture presentation.
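
Here is a rough matlab sketch of that recipe (my own toy version, not from the paper -- the swiss-roll data, k = 8 neighbors, Floyd-Warshall for the geodesics, and classical MDS are all assumed choices; it is O(n^3) and assumes the k-NN graph is connected, so keep n small):

n = 500;
t = 3*pi/2 * (1 + 2*rand(n, 1));
X = [t.*cos(t), 20*rand(n, 1), t.*sin(t)];        % toy 'swiss roll' in 3D
k = 8;
D = sqrt(max(repmat(sum(X.^2, 2), 1, n) + repmat(sum(X.^2, 2)', n, 1) - 2*(X*X'), 0));  % pairwise euclidean distances
G = inf(n);                                       % graph (geodesic) distances
for i = 1:n
	[ds, idx] = sort(D(i, :));
	G(i, idx(1:k+1)) = ds(1:k+1);                 % connect each point to its k nearest neighbors
end
G = min(G, G');                                   % symmetrize
for m = 1:n                                       % Floyd-Warshall shortest paths
	G = min(G, repmat(G(:, m), 1, n) + repmat(G(m, :), n, 1));
end
J = eye(n) - ones(n)/n;                           % classical MDS on the geodesic distances
B = -0.5 * J * (G.^2) * J;
[V, L] = eig((B + B')/2);
[l, order] = sort(diag(L), 'descend');
Y = V(:, order(1:2)) * diag(sqrt(l(1:2)));        % 2-D embedding
plot(Y(:, 1), Y(:, 2), '.');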

{685}
ref: BrashersKrug-1996.07 tags: motor learning sleep offline consolidation Bizzi Shadmehr date: 03-24-2009 15:39 gmt revision:1 [0] [head]

PMID-8717039[0] Consolidation in human motor memory.

  • while practice produces speed and accuracy improvements, significant improvements (~20%) also occur 24 hours later, following a period of sleep. Why is this? We can answer it with the recording system!

____References____

[0] Brashers-Krug T, Shadmehr R, Bizzi E, Consolidation in human motor memory. Nature 382:6588, 252-5 (1996 Jul 18)

{678}
ref: Rasch-2009.06 tags: sleep cholinergic acetylcholine REM motor consolidation date: 02-18-2009 17:27 gmt revision:0 [head]

PMID-19194375[0] "Impaired Off-Line Consolidation of Motor Memories After Combined Blockade of Cholinergic Receptors During REM Sleep-Rich Sleep."

  • In REM sleep there are high, almost wake-like, levels of ACh activity (in the cortex? they don't say).
  • Trained subjects on a motor task after a 3-hour period of slow wave sleep.
  • Then administered ACh (muscarinic + nicotinic) blockers or placebo
  • Subjects with blocked ACh reception showed less motor consolidation. So, ACh is needed! (This is consistent with ACh being an attentional / selective signal for activating the cortex).

____References____

[0] Rasch B, Gais S, Born J, Impaired Off-Line Consolidation of Motor Memories After Combined Blockade of Cholinergic Receptors During REM Sleep-Rich Sleep. Neuropsychopharmacology (2009 Feb 4)

{660}
ref: -0 tags: perl one-liner search files cat grep date: 02-16-2009 21:58 gmt revision:2 [1] [0] [head]

In the process of installing compiz - which I decided I didn't like - I removed Xfce4's window manager, xfwm4, and was stuck with metacity. Metacity probably allows focus-follows-mouse, but this cannot be configured with Xfce's control panel, hence I had to figure out how to change it back. For this, I wrote a command to look for all files, opening each, and seeing if there are any lines that match "metacity". It's a brute force approach, but one that does not require much thinking or googling.

find . -print | grep -v mnt | \
perl -e 'while($k = <STDIN>){open(FH,"< $k");while($j=<FH>){if($j=~/metacity/){print "found $k";}}close FH;}' 
This led me to discover ~/.cache/sessions/xfce4-session-loco:0 (the name of the computer is loco). I changed all references of 'metacity' to 'xfwm4', and got the proper window manager back.

{614}
ref: Froemke-2007.11 tags: nucleus basalis basal forebrain acetylcholine auditory cortex potentiation voltage clamp date: 10-08-2008 22:44 gmt revision:2 [1] [0] [head]

PMID-18004384[0] A synaptic memory trace for cortical receptive field plasticity.

  • nucleus basalis = basal forebrain!
  • stimulation of the nucleus basalis caused a reorganization of the auditory cortex tuning curves hours after the few minutes of training.
  • used whole-cell voltage-clamp recording to reveal tone-evoked excitatory and inhibitory postsynaptic currents.
  • pairing of nucleus basalis and auditory tone presentation (2-5 minutes) increased excitatory currents and decreased inhibitory currents as compared to other (control) frequencies.
  • tuning changes required simultaneous tone presentation and nucleus basalis stimulation. (Could they indiscriminately stimulate the NB? did they have to target a certain region of it? Seems like it.)
    • did not require postsynaptic spiking!
  • Pairing caused a dramatic (>7-fold) increase in the probability of firing bursts of 2+ spikes
  • Cortical application of atropine, an acetylcholine receptor antagonist, prevented the effects of nucleus basalis pairing.
  • the net effects of nucleus basalis pairing are suppression of inhibition (20 sec) followed by enhancement of excitation (60 sec)
  • also tested microstimulation of the thalamus and cortex; NB pairing increased EPSC response from intracortical microstim, but not from thalamic stimulation. Both cortical and thalamic stimulation elicited an effect in the voltage-clamped recorded neuron.
  • by recording from the same site (but different cells), they showed that while excitation persisted hours after pairing, inhibition gradually increased commensurate with the excitation.
  • Thus, NB stimulation leaves a tag of reduced inhibition (at the circuit level!), specifically for neurons that are active at the time of pairing.

____References____

[0] Froemke RC, Merzenich MM, Schreiner CE, A synaptic memory trace for cortical receptive field plasticity. Nature 450:7168, 425-9 (2007 Nov 15)

{588}
ref: notes-0 tags: linear discriminant analysis LDA EMG date: 07-30-2008 20:56 gmt revision:2 [1] [0] [head]

images/588_1.pdf -- Good lecture on LDA. Below, simple LDA implementation in matlab based on the same:

% data matrix in this case is 36 x 16, 
% with 4 examples of each of 9 classes along the rows, 
% and the axes of the measurement (here the AR coef) 
% along the columns. 
Sw = zeros(16, 16); % within-class scatter covariance matrix. 
means = zeros(9,16); 
for k = 0:8
	m = data(1+k*4:4+k*4, :); % change for different counts / class
	Sw = Sw + cov( m ); % sum the within-class covariance matrices
	means(k+1, :) = mean( m ); %means of the individual classes
end
% compute the class-independent transform, 
% e.g. one transform applied to all points
% to project them into one plane. 
Sw = Sw ./ 9; % 9 classes
criterion = inv(Sw) * cov(means); 
[eigvec2, eigval2] = eig(criterion);
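
A hedged usage sketch (not part of the original code): sort the eigenvectors by eigenvalue and project the data onto the top two discriminant directions for plotting.

[ev, order] = sort(real(diag(eigval2)), 'descend');
W = real(eigvec2(:, order(1:2)));   % top two discriminant directions
proj = data * W;                    % 36 x 2: each row is a projected example
plot(proj(:, 1), proj(:, 2), 'o');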

See {587} for results on EMG data.

{525}
ref: notes-0 tags: skate sideskate freeline date: 12-19-2007 04:50 gmt revision:1 [0] [head]

Tim's list of skate-like devices, sorted by flatland speed, descending order:

  1. rollerblades / in-line skates. clap skates and xcountry training skates are up here too.
  2. skateboard -- skateboarders in central park can do the whole loop (~7 miles?) in about ~20 minutes = 21mph average. You can get some very fast wheels, bearings, and boards.
  3. streetboards / snakeboards -- great acceleration. Unlike sideskates, freelines, and Xliders, you do not have to reserve / use muscle capacity to keep from doing a split; all can be put into whipping the board up to speed.
  4. Onshoreboards -- Don't have one, but looks like a randal in back there. These things are kinda heavy - 13 lbs for the largest - but should be pumpable to high speed? Compared to the flowlabs, all axles (when going straight) are perpendicular to the direction of motion, so there should be little more than the rolling resistance of the 8 wheels. Note dual skate wheels on the back - I presume this was to cut costs, as good inline skate wheels are much cheaper than good skateboard wheels.
  5. sideskates -- these generally have higher top-end speed compared to freelines, but worse acceleration. rolling resistance is comparable to a skateboard; they have large patches of urethane in contact with the ground, with no rotational shear from an axle at an angle to the road.
  6. freeline -- these are far more stable at speed than sideskates. However, contact patch with ground undergoes rotational shear, which in addition to the softer urethane and higher loading, makes for more friction than sideskates.
  7. Hammerhead -- faster than below because it has one standard skate truck. Have not tested it.
  8. Flowlab -- the wheels are not co-axial, so there will always be more rolling resistance than a skateboard. Urethane and bearing quality is low on these boards (e.g. 608zz electric motor bearings), simply because they need so many of both and must cut costs to compete with skateboards!
  9. The Wave -- seriously, slow. downhill speed is ok, no speed wobbles - but no powerslides either.
  10. Xliders -- The videos make it look rather slow. But, it also looks very choreographic / dance-like.
  11. Tierney Rides -- hard to pump, but not impossible. Dumb because it is easy to tilt the deck a bit too much, hit the edge, and slide out (the coefficient of friction of hard maple << urethane wheel). Tried to learn it for a while, but the over-tilt / deck slide bruised my ankles too many times. This makes it bad for both downhill and flatland. On the plus side, these are very well made boards - buy one & put some randals on it :)

{506}
ref: notes-0 tags: gcc inline assembler blackfin date: 11-22-2007 19:13 gmt revision:3 [2] [1] [0] [head]

So, you want to write inline assembly for the blackfin processor, perhaps to speed things up in a (very) time-constrained environment? Check this first:

  • calling ASM from C on a Blackfin
  • Inline assembly with gcc (general)
  • gcc manual entry for constraints (general)
  • The general format is, as per the refs, asm("some assembly":"output constraints"(c out args):"input constraints"(c in args):"clobbered regs");
  • 'volatile' just means that the compiler should not move the instruction around and/or delete it. This may actually be good for checking - if you tell gcc that it may not delete an instruction, but gcc doesn't know where to put it, it will complain -- and not compile.
  • If you are using C / C++ preprocessor macros in the inline assembly, you must first compile the C code down to assembly (using -S flag), then run gcc with the flag -x assembler-with-cpp As the C preprocessor macros are necessarily in headers, just include them on the command line (e.g. in the makefile) with the -include flag.

Nobody seems to have a complete modifier list for the blackfin, which is needed to actually write something that won't be optimized out :) here is my list --

  • d -- use a data register, e.g. r0 - r7. Don't use 'r' for this a la x86!
  • a -- use one of the addressing registers.
  • = -- register is written (output only)
  • + -- register is both read and written (output only)

examples:

  • asm volatile("%0 = w[p5];":"=d"(flags));
    • flags should be in a data register, it is written output.
  • asm volatile("bitclr(%0, RS_WAITIRQ_BIT)":"+d"(state));
    • state must be in a data register and it is both read and written (which is true - a bit is modified, and the input state matters). Must be an output register, not an input -- you cannot use the '+' constraint with inputs.

Constraints for particular machines - does not include blackfin.

  • however, it should be in the gcc tree -- and, well, the source is online...
  • here are the comments from /gcc/config/bfin/bfin.md :
; register operands
;     d  (r0..r7)
;     a  (p0..p5,fp,sp)
;     e  (a0, a1)
;     b  (i0..i3)
;     f  (m0..m3)
;     B
;     c (i0..i3,m0..m3) CIRCREGS
;     C (CC)            CCREGS

{427}
ref: notes-0 tags: perl one liner hex convert date: 08-15-2007 23:55 gmt revision:0 [head]

I wanted to take lines like this:

272 :1007A500EB9FF5F0EA9E42F0E99D42F0E89C45F0AA
and convert them into proper hex files. hence, perl:

perl -e 'open(FH, "awfirm.hex"); @j = <FH>; foreach $H (@j){ $H =~ s/^\s*\d+\s*//; print $H; }'

{409}
ref: bookmark-0 tags: optimization function search matlab linear nonlinear programming date: 08-09-2007 02:21 gmt revision:0 [head]

http://www.mat.univie.ac.at/~neum/

very nice collection of links!!

{390}
ref: notes-0 tags: SFN deadlines date: 06-14-2007 20:29 gmt revision:0 [head]

{220}
ref: math notes-0 tags: linear_algebra BLAS FFT library programming C++ matrix date: 02-21-2007 15:48 gmt revision:1 [0] [head]

Newmat11 -- nice, elegant BLAS / FFT and matrix library, with plenty of syntactic sugar.

{216}
ref: notes-0 tags: perl one-liner match grep date: 02-17-2007 17:45 gmt revision:2 [1] [0] [head]

to search for files that match a perl regular expression: (here all plexon files recorded in 2007)

locate PLEX | perl -e 'while ($k = <STDIN>){ if( $k =~ /PLEX\d\d\d\d07/){ print $k; }}'

{141}
ref: learning-0 tags: motor control primitives nonlinear feedback systems optimization date: 0-0-2007 0:0 revision:0 [head]

http://hardm.ath.cx:88/pdf/Schaal2003_LearningMotor.pdf not in pubmed.

{28}
ref: bookmark-0 tags: motivation willpower dicipline date: 0-0-2006 0:0 revision:0 [head]

http://www.stevepavlina.com/

{75}
ref: bookmark-0 tags: linux command line tips rip record date: 0-0-2006 0:0 revision:0 [head]

http://www.pixelbeat.org/cmdline.html

{34}
ref: bookmark-0 tags: linear_algebra solution simultaneous_equations GPGPU GPU LUdecomposition clever date: 0-0-2006 0:0 revision:0 [head]

http://gamma.cs.unc.edu/LU-GPU/lugpu05.pdf