[0] Sanchez J, Principe J, Carmena J, Lebedev M, Nicolelis MA, Simultaneous prediction of four kinematic variables for a brain-machine interface using a single recurrent neural network. Conf Proc IEEE Eng Med Biol Soc 7: 5321-4 (2004)

ref: -2020 tags: Principe modular deep learning kernel trick MNIST CIFAR date: 10-06-2021 16:54 gmt

Modularizing Deep Learning via Pairwise Learning With Kernels

  • Shiyu Duan, Shujian Yu, Jose Principe
  • The central idea here is to re-interpret deep networks so that the nonlinearity is the input to a layer rather than its output: the regression (the weights) is performed on this nonlinear projection.
  • In this sense, each re-defined layer implements the 'kernel trick': tasks (like classification) that are difficult in a linear space become easier when the data is projected into a kernel space.
    • The kernel allows pairwise comparisons of datapoints -- e.g. a radial basis kernel measures the radial / Gaussian distance between data points. An SVM is a kernel machine in this sense.
      • A natural extension (one the authors have considered) is to use non-pointwise or non-one-to-one kernel functions -- e.g. those that multiply multiple layer outputs. This is of course part of standard kernel machines.
  • Because you are comparing projected datapoints, it's natural to take contrastive loss on each layer to tune the weights to maximize the distance / discrimination between different classes.
    • Hence this is semi-supervised contrastive classification, something that is quite popular these days.
    • The last layer is tuned with cross-entropy on labels, but only a few labels are required since the data is already well separated.
  • Demonstrated on small-ish datasets, concordant with their computational resources ...
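As a toy illustration of the pairwise-kernel idea above -- a minimal numpy sketch of my own construction, not the paper's code: an RBF kernel compares (hypothetical) layer outputs pairwise, and a contrastive objective pushes same-class pairs toward similarity 1 and different-class pairs toward 0.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise radial-basis (Gaussian) kernel: k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def layer_contrastive_loss(features, labels, gamma=1.0):
    # Kernel matrix of pairwise similarities between all datapoints in the batch.
    K = rbf_kernel(features, features, gamma)
    same = labels[:, None] == labels[None, :]  # same-class pair mask
    # Same-class pairs should be similar (K -> 1);
    # different-class pairs dissimilar (K -> 0).
    pos = (1.0 - K)[same].mean()
    neg = K[~same].mean()
    return pos + neg

# Toy batch: two well-separated classes in a 2-D "layer output" space.
X = np.vstack([rng.normal(0, 0.1, (8, 2)), rng.normal(3, 0.1, (8, 2))])
y = np.array([0] * 8 + [1] * 8)
loss = layer_contrastive_loss(X, y)
```

With well-separated classes the loss is near zero; tuning each layer's weights to minimize it (by gradient descent on that layer alone) is what makes the scheme modular -- no gradients need to flow between layers.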

I think in general this is an important result, even if it's not wholly unique / somewhat anticipated (it's a year old at the time of writing). Modular training of neural networks is great for efficiency, parallelization, and biological implementations! Transport of weights between layers is hence non-essential.

Class labels still are essential, though -- I wonder if temporal continuity can solve some of these problems?

(There is plenty of other effort in this area -- see also {1544})

ref: Darmanjian-2005.03 tags: recording wifi 802.11 DSP BMI Principe date: 01-03-2012 02:13 gmt

IEEE-1419566 (pdf) A Portable Wireless DSP System for a Brain Machine Interface

  • 1400 mW (yuck!!), large design, PCMCIA 802.11 card @ 1.8 Mbps, external SRAM for models
  • implemented LMS, which, as expected, runs faster on the Texas Instruments C33 floating-point DSP.
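For reference, LMS is just a stochastic-gradient update on a linear filter -- a minimal numpy sketch of the algorithm in general, not the paper's DSP implementation:

```python
import numpy as np

def lms_update(w, x, d, mu=0.05):
    # Least-mean-squares: predict with current weights, then step the
    # weights along the instantaneous error gradient: w <- w + mu * e * x
    e = d - w @ x
    return w + mu * e * x, e

# Identify a fixed linear mapping from noisy teacher observations
# (hypothetical 3-tap example; weights chosen arbitrarily).
rng = np.random.default_rng(1)
w_true = np.array([0.5, -1.2, 0.3])
w = np.zeros(3)
for _ in range(2000):
    x = rng.normal(size=3)
    d = w_true @ x + rng.normal(scale=0.01)  # noisy desired signal
    w, e = lms_update(w, x, d)
```

The per-sample cost is one dot product and one scaled vector add, which is why it maps so naturally onto a small DSP.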


Darmanjian S, Morrison S, Dang B, Gugel K, Principe J, A portable wireless DSP system for a brain machine interface. Conf Proc 2nd International IEEE EMBS Conference on Neural Engineering, 112-115 (2005)

ref: Darmanjian-2006.01 tags: wireless neural recording university Florida Principe telemetry msp430 dsp nordic date: 04-15-2009 20:56 gmt

PMID-17946962[0] A reconfigurable neural signal processor (NSP) for brain machine interfaces.

  • uses a Texas Instruments TMS320VC33 200 MFLOPS (yes, floating-point) DSP,
  • a Nordic nRF24L01 radio,
  • an MSP430F1611x as a co-processor / wireless protocol manager / bootloader,
  • and an Altera EPM3128ATC100 CPLD for expansion / connections.
  • Draws 450-600 mW in use (running an LMS algorithm).


[0] Darmanjian S, Cieslewski G, Morrison S, Dang B, Gugel K, Principe J, A reconfigurable neural signal processor (NSP) for brain machine interfaces. Conf Proc IEEE Eng Med Biol Soc 1: 2502-5 (2006)

ref: Sanchez-2004.01 tags: BMI nicolelis florida Carmena Principe date: 04-06-2007 21:02 gmt

PMID-17271543[] http://hardm.ath.cx:88/pdf/sanchez2004.pdf