Eva Löcherbach, Université Paris 1 (http://samm.univ-paris1.fr/Eva-Locherbach-889)
Probabilistic spiking neuronal nets
Content: A discrete-time stochastic neural network model: a system of interacting chains with memory of variable length; a case study: correlations between successive inter-spike intervals; a continuous-time model: systems of interacting point processes with memory of variable length; models without reset: Hawkes processes; stationary states in an infinite system; perfect simulation and Kalikow decompositions; statistical model selection in a class of systems of spiking neurons; short-term synaptic facilitation and working memory.
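The discrete-time model with reset mentioned above can be sketched in a few lines. The following is a minimal, illustrative simulation (not the lecturer's code): each neuron spikes with a probability given by a rate function of its membrane potential, the potential resets to zero on a spike and otherwise accumulates the weighted spikes of the other neurons. The sigmoid rate function and the small weight matrix are assumptions chosen for illustration.

```python
import numpy as np

def simulate_discrete_net(weights, phi, steps, rng):
    """Simulate a discrete-time stochastic spiking network with reset.

    At each time step, neuron i spikes with probability phi(U[i]); on a
    spike its membrane potential U[i] resets to 0, otherwise it gains the
    weighted sum of the spikes just emitted by the other neurons.
    A minimal sketch of the kind of model described in the abstract.
    """
    n = weights.shape[0]
    U = np.zeros(n)                       # membrane potentials
    spikes = np.zeros((steps, n), dtype=int)
    for t in range(steps):
        s = (rng.random(n) < phi(U)).astype(int)   # sample spikes
        spikes[t] = s
        # reset spiking neurons, let the others integrate their input
        U = np.where(s == 1, 0.0, U + weights @ s)
    return spikes

# Illustrative parameters (assumptions, not from the lecture):
rng = np.random.default_rng(0)
w = np.array([[0.0, 0.2],
              [0.2, 0.0]])               # mutual excitation
phi = lambda u: 1.0 / (1.0 + np.exp(-u)) # sigmoid spiking rate
spikes = simulate_discrete_net(w, phi, steps=200, rng=rng)
```

Here the memory of each neuron is of variable length: it extends back to its own last spike, since the reset erases everything before it.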
Patricia Reynaud-Bouret, Université Côte d'Azur (https://math.unice.fr/~reynaudb/)
Biological neural networks: simulation, connectivity and coding ability
Content: Biological neural networks are amazing structures. They exchange information as very sparse point processes and their energy consumption is minimal. Nevertheless, they can encode so many stimuli and behaviors! They can also learn and keep memories for decades.
Statistics and simulations can help us understand a bit more of what's going on in this amazing system, even if we just barely scratch the surface.
I will explain how one can use Hawkes processes to model neural networks and how one can simulate processes that reach the size of small monkey brains or human brain areas (10^7 to 10^8 neurons). I will also explain how to reconstruct functional connectivity. Indeed, thanks to model selection, one can estimate an interaction graph between the recorded neurons and interpret it as the best tradeoff between the bias and variance terms of a Hawkes model. Thanks to this functional connectivity graph, one can decode or predict animal behaviors and try to understand the properties of the underlying neuronal network. Finally, I will show how statistics and minimax theory can also help rephrase the theoretical problem of neuronal encoding for place cells and grid cells.
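To give a concrete, if tiny, feel for the modeling step above, here is a minimal simulation of a multivariate linear Hawkes process with exponential kernels, using Ogata's thinning algorithm. This is an illustrative sketch, not the lecturer's code: the baseline rates, interaction matrix and decay rate are assumptions, and the simulators that scale to 10^7-10^8 neurons require far more specialised techniques than this direct loop.

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, T, rng):
    """Simulate a multivariate linear Hawkes process on [0, T] by thinning.

    Intensity of neuron i at time t:
        lambda_i(t) = mu[i] + sum_j alpha[i, j] * sum_{s in events[j], s < t} exp(-beta * (t - s))
    With exponential kernels the excess intensity is Markovian, so it can
    be decayed in O(1) between events instead of summing over the past.
    Stability needs the spectral radius of alpha / beta to be below 1.
    """
    n = len(mu)
    events = [[] for _ in range(n)]
    excitation = np.zeros(n)   # current excess intensity of each neuron
    t, last = 0.0, 0.0
    while True:
        # Between events the intensity only decays, so the current total
        # intensity is a valid upper bound for the thinning step.
        lam_bar = float(np.sum(mu + excitation))
        t += rng.exponential(1.0 / lam_bar)
        if t > T:
            break
        excitation *= np.exp(-beta * (t - last))   # decay to time t
        last = t
        lam = mu + excitation
        u = rng.random() * lam_bar
        if u < lam.sum():                          # accept the candidate
            i = int(np.searchsorted(np.cumsum(lam), u))
            events[i].append(t)
            excitation += alpha[:, i]              # neuron i excites all
    return events

# Illustrative parameters (assumptions, not from the lecture):
rng = np.random.default_rng(1)
mu = np.array([0.5, 0.5])                  # baseline rates
alpha = np.array([[0.2, 0.1],
                  [0.1, 0.2]])             # interaction strengths
events = simulate_hawkes(mu, alpha, beta=1.0, T=50.0, rng=rng)
```

In this picture, reconstructing functional connectivity amounts to estimating which entries of alpha are nonzero from the observed spike trains, which is where the model selection described in the abstract comes in.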