On growing synthetic dendrites - with Hermann Cuntz - #25
The observed variety of dendritic structures in the brain is striking. Why are they so different, and what determines the branching patterns? Following the dictum “if you understand it, you can build it”, the guest’s lab builds dendritic structures in a computer and explores the underlying principles. Two key principles appear to be minimizing (i) the overall length of the dendrites and (ii) the path length from the synapses to the soma.
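As a rough illustration of these two costs, here is a minimal Python sketch of a greedy wiring-optimization rule: hypothetical synaptic target points are attached one at a time to a growing tree, and the attachment cost trades added cable length against the resulting path length back to the soma through a balancing factor `bf`. The target coordinates and parameter values are made up for illustration; this is not the guest’s actual growth algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synaptic target points in a plane; the soma sits at the origin.
targets = rng.uniform(-100, 100, size=(40, 2))
soma = np.zeros(2)

bf = 0.5  # balancing factor: 0 -> minimize cable only, larger -> weight path length more

# Greedy tree construction: nodes already in the tree, their path lengths to the soma,
# and the parent node of each attached target.
nodes = [soma]
path_len = [0.0]
parent = {}

remaining = set(range(len(targets)))
while remaining:
    best = None
    for t in remaining:
        d = np.linalg.norm(targets[t] - np.array(nodes), axis=1)  # cable needed to reach each tree node
        cost = d + bf * np.array(path_len)                        # trade off wiring vs path length to soma
        j = int(np.argmin(cost))
        if best is None or cost[j] < best[0]:
            best = (cost[j], t, j, d[j])
    _, t, j, wire = best
    parent[t] = j
    nodes.append(targets[t])
    path_len.append(path_len[j] + wire)
    remaining.remove(t)

total_cable = sum(np.linalg.norm(targets[t] - nodes[parent[t]]) for t in parent)
print(f"total cable: {total_cable:.1f}, mean path to soma: {np.mean(path_len[1:]):.1f}")
```

Varying `bf` moves the resulting tree between a pure minimum-wiring solution and one with short, more direct paths from synapses to the soma.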
--------
1:34:34
On neuroscience foundation models - with Andreas Tolias - #24
The term “foundation model” refers to machine learning models that are trained on vast datasets and can be applied to a wide range of situations. The large language model GPT-4 is an example. The guest’s group has recently presented a foundation model for optophysiological responses in mouse visual cortex, trained on recordings from 135,000 neurons in mice watching movies. We discuss the design, validation, and use of this and future neuroscience foundation models.
--------
1:31:43
On human whole-brain models - with Viktor Jirsa - #23
A holy grail of the multiscale approach to physical brain modelling is to link the different scales from molecules, via cells and local neural networks, up to whole-brain models. The goal of the Virtual Brain Twin project, led by today’s guest, is to use personalized human whole-brain models to aid clinicians in treating brain ailments. The podcast discusses how such models are presently made using neural field models, starting with neuron population dynamics rather than molecular dynamics.
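As a toy illustration of the population-level starting point, the sketch below couples a handful of brain regions, each described by a single firing-rate equation, through a structural connectivity matrix. The sigmoid, parameters, and random connectome are illustrative assumptions only, not the models used in the Virtual Brain Twin project.

```python
import numpy as np

# Toy whole-brain model: N regions, each a single population-rate node,
# coupled through a (hypothetical) structural connectivity matrix W.
rng = np.random.default_rng(1)
N = 5
W = rng.uniform(0, 1, size=(N, N))
np.fill_diagonal(W, 0.0)

tau = 10.0    # population time constant (ms), illustrative
G = 0.5       # global coupling strength
dt = 0.1      # integration step (ms)
steps = 5000  # 500 ms of simulated time

def sigmoid(x):
    """Population activation function mapping net input to firing rate."""
    return 1.0 / (1.0 + np.exp(-x))

r = rng.uniform(0, 0.1, size=N)   # initial region activity
trace = np.empty((steps, N))      # records the activity of each region over time

for k in range(steps):
    inp = G * (W @ r)                               # input arriving from connected regions
    drdt = (-r + sigmoid(inp - 2.0)) / tau          # rate dynamics of each region
    r = r + dt * drdt + 0.01 * np.sqrt(dt) * rng.standard_normal(N)  # small noise
    trace[k] = r

print("final region activities:", np.round(r, 3))
```

In a personalized model of this kind, the connectivity matrix would come from the patient’s own imaging data rather than from a random draw.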
--------
1:55:21
On 40 years with the Hopfield network model - with Wulfram Gerstner - #22
In 1982, John Hopfield published the paper "Neural networks and physical systems with emergent collective computational abilities", describing a simple network model functioning as an associative, content-addressable memory. The paper started a new subfield in computational neuroscience and led to an influx of theoretical scientists, in particular physicists, into the field. The podcast guest wrote his PhD thesis on the model in the early 1990s, and we talk about the history and present-day impact of the model.
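For listeners who have not met the model, here is a minimal sketch of the standard recipe: store binary patterns with a Hebbian rule and retrieve one of them from a corrupted cue by asynchronous updates. The pattern size and count are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Store P random +/-1 patterns of length N with the Hebbian rule.
N, P = 100, 5
patterns = rng.choice([-1, 1], size=(P, N))

W = (patterns.T @ patterns) / N   # Hebbian weight matrix
np.fill_diagonal(W, 0.0)          # no self-connections

def recall(state, n_sweeps=10):
    """Asynchronous updates: each unit aligns with its local field, one at a time."""
    s = state.copy()
    for _ in range(n_sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Content-addressable memory: corrupt 20% of one stored pattern, then recall it.
probe = patterns[0].copy()
flip = rng.choice(N, size=N // 5, replace=False)
probe[flip] *= -1

retrieved = recall(probe)
overlap = (retrieved @ patterns[0]) / N
print(f"overlap with the stored pattern after recall: {overlap:.2f}")  # close to 1.0 on success
```

With only a handful of stored patterns relative to the network size, the corrupted cue is pulled back to the stored pattern, which is the associative-memory behaviour the 1982 paper described.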
--------
1:27:49
On models for short-term memory - with Pawel Herman - #21
The leading theory for learning and memorization in the brain is that learning is provided by synaptic learning rules and memories stored in synaptic weights between neurons. But this is for long-term memory. What about short-term, or working, memory where objects are kept in memory for only a few seconds? The traditional theory held that here the mechanism is different, namely persistent firing of select neurons in areas such as prefrontal cortex. But this view is challenged by recent synapse-based models explored by today’s guest and others.