PMID-15142952 Visual binding through reentrant connectivity and dynamic synchronization in a brain-based device
- Controlled a robot with a complete (for the time) model of the occipital-inferotemporal visual pathway (V1, V2, V4, IT), plus auditory cortex, colliculus, and a 'value cortex'.
- Synapses had a timing-dependent associative BCM learning rule
- Robot had reflexes to orient toward preferred auditory stimuli
- Subsequently, the robot 'learned' to orient toward a preferred visual stimulus (e.g. one paired with the sound that triggered the orienting reflex).
- Visual stimuli were either diamonds or squares, either red or green.
- The discrimination task could, it seems, have been carried out by a single perceptron layer.
- This was 16 years ago, and the results look quaint next to the modern deep-learning revolution. That said, 'the binding problem' is imho still outstanding, or at least interesting: actual human perception is far more compositional than a deep CNN can support.
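The BCM rule mentioned above can be sketched as follows. This is the textbook rate-based form (weight change proportional to y(y - θ)x, with θ sliding toward ⟨y²⟩); the paper's rule adds spike-timing dependence on top of this, and the sizes, rates, and inputs here are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0.1, 0.01, size=4)    # synaptic weights (illustrative size)
theta = 1.0                          # sliding modification threshold
eta, tau = 0.01, 0.1                 # learning rate, threshold time constant

for _ in range(500):
    x = rng.random(4)                # presynaptic activity (random drive)
    y = max(w @ x, 0.0)              # postsynaptic rate, rectified
    w += eta * x * y * (y - theta)   # BCM: LTP when y > theta, LTD when y < theta
    theta += tau * (y**2 - theta)    # threshold tracks running <y^2>
```

The sliding threshold is what distinguishes BCM from plain Hebbian learning: it stabilizes the weights without an explicit normalization step.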
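To back up the claim that one perceptron layer suffices for the discrimination task: with the four stimuli (red/green × square/diamond) encoded as one-hot color + one-hot shape features, any single conjunction is linearly separable. The target choice ('red diamond') and the encoding are my assumptions for the sketch, not the paper's setup:

```python
import numpy as np

# Features: [red, green, square, diamond]; target fires for 'red diamond' only.
stimuli = {
    "red square":    [1, 0, 1, 0],
    "red diamond":   [1, 0, 0, 1],
    "green square":  [0, 1, 1, 0],
    "green diamond": [0, 1, 0, 1],
}
X = np.array(list(stimuli.values()), dtype=float)
y = np.array([0, 1, 0, 0])

w, b = np.zeros(4), 0.0
for _ in range(20):                  # classic perceptron updates
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)
        w += (yi - pred) * xi        # error-driven weight update
        b += (yi - pred)

preds = [int(w @ xi + b > 0) for xi in X]
```

It converges in a couple of epochs, which is the point: the task alone doesn't require binding-capable machinery, so the interesting result is the synchronization dynamics, not the discrimination.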