
Tom Burns is giving a series of lectures:

Week 1

Title: Multiscale and extended retrieval of associative memory structures in a cortical model of local-global inhibition balance

Abstract: Broadly, there are two types of neurons: excitatory and inhibitory. Inhibitory neurons are amazingly diverse compared with excitatory neurons. Why? Using a computational model with realistically-sized groups of excitatory neurons (representing memories) associated together in a network of memories, we highlight a potentially biologically-plausible and behaviourally-useful function of inhibitory neuron diversity in memory. Two findings in particular stand out: (1) inhibitory diversity can quadruple the range of memory retrieval; and (2) balancing the strength of different inhibitory neurons’ influence on excitatory neurons can dramatically change how the network of memories becomes activated, balancing and extracting both geometric and topological information about the network.
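
To make the local-global balance concrete, below is a minimal sketch (an illustrative toy, not the lecture's cortical model): sparse binary assemblies stored with a Hebbian rule and retrieved under a threshold rule whose inhibition mixes a global term, proportional to total network activity, with a fixed local threshold standing in for local inhibitory input. The parameter names alpha and beta, the sparseness f, and all numerical values are assumptions chosen only so the example runs.

```python
# Illustrative toy, NOT the lecture's model: sparse assemblies in a
# Hopfield-style network, retrieved under a mix of "global" inhibition
# (proportional to total activity) and a fixed "local" threshold.
import numpy as np

rng = np.random.default_rng(0)

N = 500   # neurons
P = 10    # stored assemblies (memories)
f = 0.05  # sparseness: fraction of neurons per assembly

# Sparse binary memory patterns and covariance-style Hebbian weights
patterns = (rng.random((P, N)) < f).astype(float)
W = (patterns - f).T @ (patterns - f) / N
np.fill_diagonal(W, 0.0)

def retrieve(cue, alpha=0.3, beta=0.1, steps=20):
    """Synchronous threshold updates from a partial cue.

    alpha: strength of global inhibition (scales with mean activity)
    beta:  strength of the fixed local inhibitory threshold
    """
    s = cue.copy()
    for _ in range(steps):
        h = W @ s - alpha * s.mean() - beta * f
        s = (h > 0).astype(float)
    return s

# Cue with ~60% of assembly 0 active, then measure overlap with each memory
cue = patterns[0] * (rng.random(N) < 0.6)
s = retrieve(cue)
overlaps = patterns @ s / patterns.sum(axis=1)
print("overlap with each stored assembly:", np.round(overlaps, 2))
```

Sweeping alpha against beta in a toy like this is one way to build intuition for how the set of retrieved assemblies changes as the inhibitory balance shifts.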

Week 2

Title: Simplicial Hopfield networks

Abstract: Hopfield networks are artificial neural networks which store memory patterns in the states of their neurons by choosing recurrent connection weights and update rules such that the energy landscape of the network forms attractors around the memories. How many stable, sufficiently-attracting memory patterns can we store in such a network using $N$ neurons? The answer depends on the choice of weights and update rule. Inspired by setwise connectivity in biology, we extend Hopfield networks by adding setwise connections and embedding these connections in a simplicial complex. Simplicial complexes are higher-dimensional analogues of graphs which naturally represent collections of pairwise and setwise relationships. We show that our simplicial Hopfield networks increase memory storage capacity. Surprisingly, even when connections are limited to a small random subset of equivalent size to an all-pairwise network, our networks still outperform their pairwise counterparts. These random-subset scenarios include complexes with non-trivial simplicial topology. We also test analogous modern continuous Hopfield networks, offering a potentially promising avenue for improving the attention mechanism in Transformer models.
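
As a concrete illustration of "setwise connections", here is a small sketch (an assumption-laden toy, not the paper's construction): a classical pairwise Hopfield network alongside a variant whose local fields also receive Hebbian contributions from a random collection of neuron triples. The weight scaling, the number of triples, and the update schedule are all illustrative choices.

```python
# Illustrative toy, NOT the paper's construction: a pairwise Hopfield network
# plus an optional "setwise" term in which random neuron triples contribute
# third-order Hebbian weights to each neuron's local field.
import numpy as np

rng = np.random.default_rng(1)

N, P = 100, 10
patterns = rng.choice([-1.0, 1.0], size=(P, N))

# Pairwise Hebbian weights
W2 = patterns.T @ patterns / N
np.fill_diagonal(W2, 0.0)

# A random subset of triples, each with a third-order Hebbian weight
triples = [tuple(rng.choice(N, size=3, replace=False)) for _ in range(5 * N)]
W3 = {t: patterns[:, t[0]] @ (patterns[:, t[1]] * patterns[:, t[2]]) / N
      for t in triples}

def update(s, use_triples=False, sweeps=5):
    """Asynchronous sign updates; optionally add the setwise local fields."""
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            h = W2[i] @ s
            if use_triples:
                for (a, b, c), w in W3.items():
                    if i == a:
                        h += w * s[b] * s[c]
                    elif i == b:
                        h += w * s[a] * s[c]
                    elif i == c:
                        h += w * s[a] * s[b]
            s[i] = 1.0 if h >= 0 else -1.0
    return s

# Retrieve pattern 0 from a corrupted cue (about 20% of bits flipped)
cue = patterns[0] * np.where(rng.random(N) < 0.2, -1.0, 1.0)
for flag in (False, True):
    out = update(cue, use_triples=flag)
    print("setwise " if flag else "pairwise", "overlap:", out @ patterns[0] / N)
```

Here the triples are sampled uniformly at random, loosely mirroring the abstract's random-subset scenario; the scaling of the third-order weights is a guess rather than the paper's choice.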

Week 3

Title: Detecting danger in gridworlds using Gromov’s Link Condition

Abstract: Gridworlds have long been utilised in AI research, particularly in reinforcement learning, as they provide simple yet scalable models for many real-world applications such as robot navigation, emergent behaviour, and operations research. We initiate a study of gridworlds using the mathematical framework of reconfigurable systems and state complexes due to Abrams, Ghrist & Peterson. State complexes represent all possible configurations of a system as a single geometric space, thus making them conducive to study using geometric, topological, or combinatorial methods. The main contribution of this work is a modification to the original Abrams, Ghrist & Peterson setup which we introduce to capture agent braiding and thereby more naturally represent the topology of gridworlds. With this modification, the state complexes may exhibit geometric defects (failures of Gromov’s Link Condition). Serendipitously, we discover these failures occur exactly where undesirable or dangerous states appear in the gridworld. Our results therefore provide a novel method for seeking guaranteed safety limitations in discrete task environments with single or multiple agents, and offer useful safety information (in geometric and topological forms) for incorporation in or analysis of machine learning systems. More broadly, our work introduces tools from geometric group theory and combinatorics to the AI community and demonstrates a proof-of-concept for this geometric viewpoint of the task domain through the example of simple gridworld environments.
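
To give a flavour of what "failure of Gromov’s Link Condition" means computationally, here is a small sketch (a generic flagness check under simplifying assumptions, not the paper's implementation): for cube complexes, the condition asks that the link of every vertex be a flag simplicial complex, i.e. every clique in the link's 1-skeleton is filled in by a simplex. The helper functions is_flag and closure and the toy "empty triangle" example are hypothetical.

```python
# Illustrative sketch, not the paper's code: Gromov's Link Condition for a
# cube complex requires each vertex link to be a *flag* simplicial complex,
# i.e. every clique in the link's 1-skeleton is filled in by a simplex.
from itertools import combinations

def closure(top_cells):
    """All faces of the given cells, as frozensets (a small simplicial complex)."""
    faces = set()
    for cell in top_cells:
        for k in range(1, len(cell) + 1):
            faces.update(frozenset(f) for f in combinations(cell, k))
    return faces

def is_flag(simplices):
    """Check flagness: every clique of the 1-skeleton must span a simplex."""
    simplices = set(simplices)
    vertices = sorted(set().union(*simplices))
    edges = {s for s in simplices if len(s) == 2}
    for k in range(3, len(vertices) + 1):
        for combo in combinations(vertices, k):
            is_clique = all(frozenset(e) in edges for e in combinations(combo, 2))
            if is_clique and frozenset(combo) not in simplices:
                return False  # an "empty" clique: the link condition fails here
    return True

# An empty triangle (three edges, no filling 2-simplex) violates flagness;
# the filled triangle satisfies it.
hollow = closure([("a", "b"), ("b", "c"), ("a", "c")])
filled = closure([("a", "b", "c")])
print(is_flag(hollow))  # False -> a geometric defect at this vertex
print(is_flag(filled))  # True
```

In the gridworld setting, the interesting part is that such defects in the state complex line up with dangerous configurations; the toy above only shows what the flagness test itself looks like.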

Week 4

Title: Efficient, probabilistic analysis of combinatorial neural codes

Abstract: Artificial and biological neural networks (ANNs and BNNs) can encode inputs in the form of combinations of individual neurons’ activities. These combinatorial neural codes present a computational challenge for direct and efficient analysis due to their high dimensionality and often large volumes of data. Here we improve the computational complexity (from factorial to quadratic time) of direct algebraic methods previously applied to small examples, and apply them to large neural codes generated by experiments. These methods provide a novel and efficient way of probing algebraic, geometric, and topological characteristics of combinatorial neural codes, and they yield insights into how such characteristics are related to learning and experience in neural networks. We introduce a procedure to perform hypothesis testing on the intrinsic features of neural codes using information geometry. We then apply these methods to neural activities from an ANN for image classification and a BNN for 2D navigation to estimate, without observing any inputs or outputs, the structure and dimensionality of the stimulus or task space. Additionally, we demonstrate how an ANN varies its internal representations across network depth and during learning.
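
As a concrete picture of what a combinatorial neural code is, here is a brief sketch (illustrative only; the fake data and the thresholding rule are assumptions, and this is not the lecture's algebraic or information-geometric machinery): binarise a samples-by-neurons activity matrix and collect the distinct sets of co-active neurons as codewords.

```python
# Illustrative sketch, not the lecture's analysis pipeline: build a
# combinatorial neural code (the set of distinct co-activity patterns)
# from a samples-by-neurons activity matrix. The data and threshold
# rule below are made up.
import numpy as np

rng = np.random.default_rng(2)

# Fake activity: 1000 samples of 8 neurons (e.g. firing rates)
rates = rng.gamma(shape=2.0, scale=1.0, size=(1000, 8))

# Binarise each neuron against its own median rate
active = rates > np.median(rates, axis=0)

# The combinatorial code: distinct codewords, as sets of active neuron indices
codewords = {tuple(np.flatnonzero(row)) for row in active}

print("number of distinct codewords:", len(codewords))
print("a few codewords:", sorted(codewords)[:5])
```

The hypothesis tests and dimensionality estimates described in the abstract would then operate on a code of roughly this form rather than on the raw activity values.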