m8ta
{1470}
ref: -2019 tags: neuromorphic optical computing date: 06-19-2019 14:47 gmt revision:1 [0] [head]

Large-Scale Optical Neural Networks based on Photoelectric Multiplication

  • Critical idea: use coherent homodyne detection and quantum photoelectric multiplication for the MACs.
    • That is, E-fields from coherent light multiply rather than add within a (square-law) photodiode detector.
    • Other literature suggests a rather limited SNR for this effect -- ~11 dB.
  • Hence need EO modulators and OE detectors followed by nonlinearity etc.
  • Pure theory; it suggests that you can compute with as few as tens of photons per MAC -- or fewer! Near Landauer's limit.
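The multiplication trick above can be sketched numerically. This is a toy model (not from the paper): a balanced homodyne detector interferes signal and local-oscillator fields on a 50/50 beamsplitter, each square-law photodiode produces a current proportional to |E|^2, and the difference current isolates the cross (product) term. All variable names here are illustrative.

```python
import numpy as np

def balanced_homodyne_mac(x, w):
    """Toy model: compute sum_i x_i * w_i via simulated square-law detection.
    x encodes the input vector as signal E-field amplitudes;
    w encodes the weights as local-oscillator E-field amplitudes."""
    e_plus = (x + w) / np.sqrt(2)       # 50/50 beamsplitter output 1
    e_minus = (x - w) / np.sqrt(2)      # 50/50 beamsplitter output 2
    i_plus = e_plus ** 2                # square-law photodiode currents
    i_minus = e_minus ** 2
    # difference current per channel is 2*x*w; the 0.5 recovers the dot product
    return 0.5 * float(np.sum(i_plus - i_minus))

x = np.array([0.5, -1.0, 2.0])
w = np.array([1.0, 0.25, -0.5])
result = balanced_homodyne_mac(x, w)    # matches np.dot(x, w) = -0.75
```

The subtraction cancels the |x|^2 and |w|^2 self-terms, which is why the scheme multiplies rather than adds.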

{1467}
ref: -2017 tags: neuromorphic optical computing nanophotonics date: 06-17-2019 14:46 gmt revision:5 [4] [3] [2] [1] [0] [head]

Progress in neuromorphic photonics

  • Similar idea to what I had -- use lasers as the optical nonlinearity.
    • They add the idea of WDM and an 'MRR' (micro-ring resonator) weight bank -- they don't discuss the ability to change the weights, just specify them with some precision.
  • Definitely makes the case that III-V semiconductor integrated photonic systems have the capability, in MMACs/mm^2/pJ, to exceed silicon.
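The WDM weight-bank idea reduces to a single-shot weighted sum: each input rides its own wavelength, an MRR tuned near that wavelength sets a fixed transmission weight, and one photodetector integrates all channels. A minimal sketch under those assumptions (function and variable names are mine, not the paper's):

```python
import numpy as np

def mrr_weight_bank(x, w):
    """Toy WDM weight bank.
    x: optical powers on the wavelength channels (the inputs);
    w: per-ring through-port transmissions (the stored weights).
    A single photodetector sums all channels, giving the weighted sum."""
    w = np.asarray(w, dtype=float)
    x = np.asarray(x, dtype=float)
    assert np.all((0.0 <= w) & (w <= 1.0)), "passive transmission is bounded in [0, 1]"
    return float(np.sum(w * x))            # photodetector integrates all channels

x = [1.0, 0.5, 2.0]                        # channel powers
w = [0.2, 1.0, 0.5]                        # per-ring transmissions
result = mrr_weight_bank(x, w)             # 0.2 + 0.5 + 1.0 = 1.7
```

Note the bound on w: a passive ring can only attenuate, which is one reason the weights are specified with limited precision rather than trained in place.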

See also:

{1464}
ref: -2012 tags: phase change materials neuromorphic computing synapses STDP date: 06-13-2019 21:19 gmt revision:3 [2] [1] [0] [head]

Nanoelectronic Programmable Synapses Based on Phase Change Materials for Brain-Inspired Computing

  • Here, we report a new nanoscale electronic synapse based on technologically mature phase change materials employed in optical data storage and nonvolatile memory applications.
  • We utilize continuous resistance transitions in phase change materials to mimic the analog nature of biological synapses, enabling the implementation of a synaptic learning rule.
  • We demonstrate different forms of spike-timing-dependent plasticity using the same nanoscale synapse with picojoule level energy consumption.
  • Again uses a GST (germanium-antimony-tellurium) alloy.
  • 50 pJ to reset (depress) the synapse, 0.675 pJ to potentiate.
    • Reducing the device size will linearly decrease this current.
  • Synapse resistance changes from approximately 200 kΩ to 2 MΩ.
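The learning rule they implement can be sketched as a standard exponential STDP update on a bounded conductance. This is a generic STDP sketch, not the paper's pulse scheme; the resistance range comes from the bullet above, but the time constant and learning rates are assumed values for illustration.

```python
import numpy as np

R_MIN, R_MAX = 200e3, 2e6          # ohms, from the reported 200 kΩ - 2 MΩ range
G_MIN, G_MAX = 1/R_MAX, 1/R_MIN    # corresponding conductance bounds
TAU = 20e-3                        # s, assumed STDP time constant
A_POT, A_DEP = 0.10, 0.12          # assumed (asymmetric) learning rates

def stdp_update(g, dt):
    """One pairing of an STDP rule on a PCM-like bounded conductance.
    g: current conductance; dt = t_post - t_pre in seconds."""
    if dt > 0:                     # pre before post: potentiate toward G_MAX
        dg = A_POT * np.exp(-dt / TAU) * (G_MAX - g)
    else:                          # post before pre: depress toward G_MIN
        dg = -A_DEP * np.exp(dt / TAU) * (g - G_MIN)
    return float(np.clip(g + dg, G_MIN, G_MAX))

g0 = 1.0 / 1e6                     # start at 1 MΩ
g_pot = stdp_update(g0, +5e-3)     # causal pairing raises conductance
g_dep = stdp_update(g0, -5e-3)     # anti-causal pairing lowers it
```

The soft bounds (`G_MAX - g`, `g - G_MIN`) crudely mimic the saturating analog resistance transitions the paper exploits.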

See also: Experimental Demonstration and Tolerancing of a Large-Scale Neural Network (165 000 Synapses) Using Phase-Change Memory as the Synaptic Weight Element

{5}
ref: bookmark-0 tags: machine_learning research_blog parallel_computing bayes active_learning information_theory reinforcement_learning date: 12-31-2011 19:30 gmt revision:3 [2] [1] [0] [head]

hunch.net interesting posts:

  • Debugging your brain -- how to discover what you don't understand. A very intelligent viewpoint, worth rereading, along with the comments. Look at the data, stupid.
    • quote: how to represent the problem is perhaps even more important in research since human brains are not as adept as computers at shifting and using representations. Significant initial thought on how to represent a research problem is helpful. And when it’s not going well, changing representations can make a problem radically simpler.
  • automated labeling - great way to use a human 'oracle' to bootstrap us into good performance, esp. if the predictor can output a certainty value and hence ask the oracle all the 'tricky questions'.
  • The design of an optimal research environment
    • Quote: Machine learning is a victim of its common success. It’s hard to develop a learning algorithm which is substantially better than others. This means that anyone wanting to implement spam filtering can do so. Patents are useless here—you can’t patent an entire field (and even if you could it wouldn’t work).
  • More recently: http://hunch.net/?p=2016
    • Problem is that online courses only imperfectly emulate the social environment of a college, which IMHO is useful for cultivating diligence.
  • The unrealized potential of the research lab. Quote: Muthu Muthukrishnan says “it’s the incentives”. In particular, people who invent something within a research lab have little personal incentive in seeing its potential realized, so they fail to pursue it as vigorously as they might in a startup setting.
    • The motivation (money!) is just not there.

{4}
ref: bookmark-0 tags: google parallel_computing GFS algorithm mapping reducing date: 0-0-2006 0:0 revision:0 [head]

http://labs.google.com/papers/mapreduce.html