Talks and Events

ARENA Symposium - Abstract Representations in Neural Architectures: From Human Cognition to Artificial Intelligence

Speakers: see description below
Date: 12 March 2025
Place: TeaP Conference 2025

How knowledge is represented in the human mind and brain is one of the most fundamental questions of cognitive and brain sciences. It is central to many domains of cognition—from visual perception and object recognition, learning of regularities and semantic knowledge about the world, to language processing. However, an overarching framework that is rich enough to capture knowledge representations at various depths and levels of abstraction is still lacking. In this regard, recent developments in artificial intelligence (AI), in particular Deep Neural Networks (DNNs), are making promising advances, which will be exemplified in the talks of this symposium. In Talk 1, Triesch will present novel learning approaches for DNNs that exploit the temporal and/or multimodal structure of sensory information during infants’ extended interactions with objects. In Talk 2, Schommartz will present how gaze fixations during natural scene viewing differ between children and adults, and how predictions from AI models can help to understand the age differences. In Talk 3, Nicholls will present a study that examines to what extent hierarchical scene knowledge is represented on a neural level during object recognition. In Talk 4, Çelik will present a study demonstrating that large language models (LLMs) are useful tools for semantically annotating a large pool of words out of context and for modeling brain data recorded while subjects listen to an audiobook. At the end, we will discuss to what extent such concerted efforts of psychologists and computer scientists improve our understanding of how representations emerge and are coded across different types of neural architectures.

Talk 1: Computational Modeling of the Development of Abstract Object Representations

Jochen Triesch
Goethe-Universität Frankfurt

What are the origins of abstract knowledge about objects? Infants and toddlers learn about objects quite differently from today’s artificial intelligence systems. Here we aim to better understand these processes by developing computational models of how infants and toddlers acquire abstract object representations and the ability to recognize objects and object categories independent of viewpoint, distance, etc. For this, we have developed novel learning approaches for deep neural networks that exploit the temporal and/or multimodal structure of sensory information during extended interactions with objects. For example, we harness head-mounted eye tracking in toddlers and train computational models with toddlers’ first-person visual input, demonstrating that strong object representations can result from just minutes of such experience. Furthermore, we highlight the benefits of toddlers’ gaze behavior for successful learning. We also consider learning in models receiving computer-rendered visual inputs, where we can precisely control the input statistics. We show how additional linguistic input, even if rare and noisy, promotes the formation of abstract object categories. Furthermore, we demonstrate how our time-based learning approach can lead to the emergence of very abstract concepts such as “kitchen object” or “garden object”. Finally, we study the role of behavior and knowledge of executed manipulation actions (e.g., how an object was turned) and demonstrate how this additional information can further enrich the learned representations. Overall, we elucidate what computational principles seem to underlie the emergence of abstract object representations in infants and toddlers.
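As an illustration of such time-based learning, here is a minimal sketch of a temporal-contrastive objective in which temporally adjacent frames of a first-person video are treated as positive pairs (the InfoNCE-style formulation and the temperature value are our assumptions for illustration, not the exact models used in the talk):

```python
import numpy as np

def temporal_contrastive_loss(z, temperature=0.1):
    """InfoNCE-style loss treating temporally adjacent frames as positive pairs.

    z: (T, d) array of frame embeddings in temporal order.
    Returns the mean cross-entropy of identifying frame t+1 as the match for frame t.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalise embeddings
    sim = z @ z.T / temperature                       # scaled pairwise similarities
    losses = []
    for t in range(len(z) - 1):
        logits = np.delete(sim[t], t)  # drop self-similarity; frame t+1 now sits at index t
        m = logits.max()               # stabilised log-softmax
        log_prob = logits[t] - (m + np.log(np.exp(logits - m).sum()))
        losses.append(-log_prob)
    return float(np.mean(losses))
```

Minimising this loss pulls embeddings of temporally close views of an object together, which is one way the temporal structure of experience can yield view-invariant representations.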

Talk 2: Eye Gaze Patterns and Reinstatement in Children, Adults, and Artificial Intelligence Models During Naturalistic Viewing

Iryna Schommartz1,2, Bhavin Choksi3, Gemma Roig3, Yee Lee Shing1,2
1Department of Psychology, Goethe University Frankfurt
2IDeA – Center for Individual Development and Adaptive Education
3Computer Science Department, Goethe University Frankfurt

In developmental research, differences in cognition and perception during image viewing can result in varied processing and subsequent memory of scene elements. Additionally, scan paths during scene perception may provide insights into pattern completion for partially incomplete images. However, the extent to which eye-gaze patterns predict subsequent memory, and how these patterns differ between children and adults, remains unclear. To investigate this, we measured gaze fixations while children (aged 6 to 11) and young adults (aged 19 to 30) viewed 60 naturalistic images. Later, gaze fixations were measured during image reinstatement on a blank screen after participants were cued with partially occluded images. Using representational similarity analysis of fixation-based heat maps, we observed that adults exhibited higher encoding-retrieval eye-gaze reinstatement than children, suggesting a prolonged developmental trajectory for eye-gaze reinstatement. Reinstatement strength also correlated with greater memory accuracy, reflecting the consolidation of scan paths. Further, we analyzed differences between the scan paths of children and adults using MultiMatch, a metric that measures the similarity between scan paths across multiple dimensions, and observed consistent differences between the two groups. We also used various state-of-the-art AI models to examine whether they preferentially predict the scan paths of one age group. We will discuss our findings and their implications both for cognitive neuroscience and for building foveation-based AI models.
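As a rough illustration of the heat-map-based reinstatement measure, the sketch below bins fixations into a grid and correlates encoding and retrieval maps (the grid size and the use of plain Pearson correlation are assumptions; the study's actual pipeline may differ):

```python
import numpy as np

def fixation_heatmap(fixations, shape=(24, 32)):
    """Bin (x, y) fixation coordinates (normalised to [0, 1]) into a 2-D count map."""
    heat = np.zeros(shape)
    for x, y in fixations:
        r = min(int(y * shape[0]), shape[0] - 1)
        c = min(int(x * shape[1]), shape[1] - 1)
        heat[r, c] += 1
    return heat

def gaze_reinstatement(enc_fix, ret_fix):
    """Encoding-retrieval similarity: Pearson correlation between the two heat maps."""
    a = fixation_heatmap(enc_fix).ravel()
    b = fixation_heatmap(ret_fix).ravel()
    return float(np.corrcoef(a, b)[0, 1])
```

Higher values indicate that retrieval-period gaze revisits the locations fixated at encoding, the quantity reported to be larger in adults than in children.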

Talk 3: Representations of Hierarchical Scene Information in the Brain

Victoria I. Nicholls1, Lea Widmayer1, Melissa Võ1,2
1Goethe University Frankfurt, Department of Psychology, Scene Grammar Lab
2Neuro-Cognitive Psychology, Department of Psychology, LMU Munich

Our knowledge of scenes is thought to have a hierarchical structure: at the lowest level are often smaller, local objects (e.g., soap), followed by so-called “anchors,” often larger objects like a sink. Together they form a “phrase,” a meaningful and functionally organized subset of a scene. Multiple phrases combined form a scene. What has not been established so far is whether this hierarchical scene knowledge is represented on a neural level, which brain regions might be involved, and the dynamics of accessing this knowledge. To examine this, participants were presented with an isolated object (local or anchor) either as a word label, image, or target word in the context of a search task, followed by a blank period while we recorded MEG. Using representational similarity analysis (RSA) with models representing different levels of scene knowledge, we analyzed each stimulus presentation and blank period to determine whether participants access representations about the objects only or additionally access phrase and scene representations.
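The time-resolved RSA logic described above can be sketched as follows (a toy version assuming 1 - Pearson r dissimilarities and a Spearman model-brain comparison; the study's exact analysis choices may differ):

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson r between condition patterns."""
    return 1.0 - np.corrcoef(patterns)

def upper(m):
    """Vectorise the upper triangle (excluding the diagonal)."""
    return m[np.triu_indices_from(m, k=1)]

def spearman(a, b):
    """Spearman rank correlation via Pearson on ranks (no tie correction)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return float(np.corrcoef(rank(a), rank(b))[0, 1])

def time_resolved_rsa(meg, model_rdm):
    """meg: (time, conditions, sensors). Returns r(model RDM, neural RDM) per timepoint."""
    return np.array([spearman(upper(rdm(meg[t])), upper(model_rdm))
                     for t in range(meg.shape[0])])
```

Running this with object-level, phrase-level, and scene-level model RDMs would show when, if ever, each level of the hierarchy is reflected in the MEG signal.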

Talk 4: Large Language Models as Artificial Semantic Annotators

Emin Çelik, Mariya Toneva
MPI for Software Systems, Saarbrücken, Germany

Comprehensively studying how the semantic representation of a word changes with context, whether through human behavior or brain recordings, is difficult due to the sheer number of possible contexts. Here, we considered large language models (LLMs) as a model organism tasked with assessing word meaning. As a first step, we tested whether LLMs can rate a large set of individual words across a number of semantic properties similarly to the way humans do. The words included abstract and concrete nouns, verbs, and adjectives. The semantic properties also covered a wide range, from sensory and motor to social and emotion-related properties. Specifically, we used GPT-4 Turbo and Llama3.1-8B to produce such ratings by using prompts that mimicked the original queries presented to human raters. We found a close match between these rating estimates and those produced by humans. Overall, our results suggest that LLMs are useful tools to semantically annotate a large pool of words out of context. In the future, we plan to use our method to annotate a whole book with words in context and to model fMRI and MEG data recorded while subjects listen to the audiobook.
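A minimal sketch of the two ingredients described, a prompt mimicking the human rating instructions and a rank-correlation check of human-LLM agreement, might look like this (the prompt wording, rating scale, and toy numbers are illustrative assumptions, not the study's materials):

```python
import numpy as np

# Hypothetical prompt template mimicking the instructions shown to human raters.
PROMPT = ("On a scale from 0 (not at all) to 6 (very strongly), how much does the word "
          "'{word}' relate to the property '{prop}'? Answer with a single number.")

def spearman(a, b):
    """Spearman rank correlation via Pearson on ranks (no tie correction)."""
    rank = lambda v: np.argsort(np.argsort(np.asarray(v))).astype(float)
    return float(np.corrcoef(rank(a), rank(b))[0, 1])

# Toy illustration: agreement between made-up human and LLM ratings of five words
human = [5.1, 0.8, 3.2, 4.4, 1.9]
llm   = [4.8, 1.1, 3.0, 4.9, 1.5]
agreement = spearman(human, llm)  # high agreement on this toy data
```

Repeating such a comparison across many words and semantic properties is the kind of evidence behind the reported close match between LLM and human ratings.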

A compositional neural code of invariant visual word recognition

Speaker: Dr. Aakash Agrawal
Date: 11th March 2025, 12:30 – 14:00
Place: PEG building, room 5G 129, Westend Campus, Goethe University

Humans exhibit a remarkable ability to read words accurately even when their internal letters are jumbled, pointing to a robust yet flexible orthographic processing system. Concurrently, reading expertise refines the visual system to distinguish highly similar letters and track their relative positions, enabling us to differentiate between words like FORM and FROM across varying sizes and locations. To probe whether a common neural mechanism underlies these capabilities, we developed cognitive and computational models of invariant word recognition. First, behavioral experiments clarified how letters integrate into word forms. Next, convolutional neural networks trained to recognize words revealed the precise mechanisms by which word-selective units, which emerged with literacy training, encode letter identities and positions relative to word boundaries. Finally, using 7T fMRI and MEG, we localized this “ordinal” code in the anterior ventral visual pathway (VWFA) and observed its emergence around 220 ms post-stimulus. This unified framework explains both our resilience to letter transpositions and our precise encoding of letter order, offering novel insights into the neuronal mechanisms underlying invariant word recognition in both human brains and artificial systems.
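A toy version of an "ordinal" letter code, letter identity plus distance from each word boundary, illustrates why such a scheme tolerates internal transpositions (this is an illustrative sketch, not the code learned by the networks in the study):

```python
def ordinal_code(word):
    """Each letter with its ordinal distance from the start and end of the word."""
    n = len(word)
    return [(ch, i, n - 1 - i) for i, ch in enumerate(word)]

def similarity(w1, w2):
    """Sum over shared letters, weighted by how closely their boundary
    distances agree. Transposed letters still contribute, just less."""
    score = 0.0
    for ch1, s1, e1 in ordinal_code(w1):
        for ch2, s2, e2 in ordinal_code(w2):
            if ch1 == ch2:
                score += 1.0 / (1.0 + abs(s1 - s2) + abs(e1 - e2))
    return score
```

Under this code, FORM and FROM remain far more similar to each other than either is to an unrelated word like MILK, mirroring the behavioral resilience to internal letter transpositions.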

ARENA Lecture Series: Why is a raven like a writing desk? Mapping and tracking visual object representations

Speaker: Prof. Heida Sigurðardóttir
Date: 6th February 2025, 12:00 – 14:00
Place: PEG building, Seminar room 5G 170, Westend Campus, Goethe University


ARENA Lecture Series: Taming the neuroscience literature with predictive and explanatory models

Speaker: Prof. Bradley Love
Date: 4th February 2025, 12:00 – 14:00
Place: DIPF building, Room Erwin Stein (1. OG), Westend Campus, Goethe University

Recording

Models can help scientists make sense of an exponentially growing literature. In the first part of the talk, I will discuss using models as predictive tools. In the BrainGPT.org project, we use large language models (LLMs) to order the scientific literature. On a benchmark, BrainBench, which involves predicting experimental results from methods, we find that LLMs exceed the capabilities of human experts. Because the confidence of LLMs is calibrated, they can team with neuroscientists to accelerate scientific discovery. In the second part of the talk, I focus on models that can provide explanations bridging behaviour and brain measures. Unlike predictive models, explanatory models can offer interpretations of key results. I'll discuss work suggesting that intuitive cell types (e.g., place, grid, and concept cells) are of limited scientific value and naturally arise in complex networks, including random networks. In this example, the explanatory model serves as a baseline that should be surpassed before making strong scientific claims. I'll end by noting the complementary roles explanatory and predictive models play.
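Calibration here means that stated confidence tracks empirical accuracy; a standard check is the expected calibration error, sketched below (the equal-width binning scheme is a common convention, not necessarily the one used for BrainBench):

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    """Bin predictions by stated confidence and compare each bin's mean
    confidence with its empirical accuracy; sum the gaps weighted by bin size."""
    conf = np.asarray(conf, float)
    correct = np.asarray(correct, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        mask = (conf >= lo if i == 0 else conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(conf[mask].mean() - correct[mask].mean())
    return float(ece)
```

A well-calibrated model scores near zero, which is what lets its confidence be used to decide when to defer to human experts.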

ARENA Lecture Series: Neural Representations Underlying Flexible Behavior

Speaker: Prof. Mona Garvert
Date: 26th November 2024, 12:00 – 14:00
Place: PEG building, seminar room 1G 150, Westend Campus, Goethe University

Recording

In our ever-changing world, the ability to adapt to novel situations is essential. This flexibility relies on our ability to draw from past experiences and apply general principles to new situations. For example, when choosing from a menu in an unfamiliar restaurant, we instinctively apply past dining experiences to guide our decision. Such generalization is a cornerstone of adaptive behavior, allowing us to make informed decisions without relearning strategies for every new scenario. In this talk, I explore how the human brain enables such behavior. I will demonstrate that the brain constructs hippocampal cognitive maps, traditionally known for encoding spatial relationships, to also represent other types of relational knowledge, providing a flexible foundation for generalization and novel inference. When stimuli can be part of multiple relational maps, the orbitofrontal cortex selectively activates relevant maps tailored to the specific task. Additionally, our research reveals that with time and consolidation, the brain refines these cognitive maps into more abstract representations, capturing the broader relational structure of experiences beyond specific stimuli. In summary, our findings illustrate the remarkable adaptability of neural representations in the human brain. They demonstrate how these representations are not just static archives of past experiences but are dynamic tools actively reshaped to aid decision-making and behavior in ever-changing environments.

ARENA Journal Club

Speaker: Santiago Galella
Date: 19 November 2024, 12:00 - 14:00
Place: Online: Zoom

ARENA Journal Club

Speaker: Vicky Nicholls
Date: 22 October 2024, 12:00 - 14:00
Place: Online: Zoom

ARENA Workshop: The role of cognitive maps in structure learning and reasoning

Speaker: Dr. Stephanie Theves
Date: 11th October 2024, 13:00 – 14:30
Place: PEG 1G 131, Westend Campus, Goethe University

How does the human brain transform experiences into concepts and how do we use those representations flexibly? Recent evidence suggests that the ability to extract commonalities and to mark distinction across experiences to build generalisable knowledge is supported by the same brain mechanisms that create cognitive maps of physical spaces. In my talk I will present a series of behavioral and neuroimaging (fMRI) studies that suggest that the hippocampal-entorhinal system encodes the structure of behaviorally relevant conceptual spaces, thereby supporting processes like rapid updating of category boundaries as well as the abstraction of category prototypes and inference of new states. Finally, I will consider the relation between representational mechanisms in the hippocampus and general cognitive performance.

ARENA Workshop: How babies build basic representations of the world around them

Speaker: Professor Moritz Köster
Date: 11th October 2024, 15:00 – 16:30
Place: PEG 1G 131, Westend Campus, Goethe University

Human flexible adaptation relies on both individual learning mechanisms and the (culture-)specific learning environments we grow up in. With my research I aim to illuminate the ontogenetic foundations of this adaptation process in the infant years. I will report on infants’ neural mechanisms for the acquisition of basic physical concepts (such as object categories and physical events), focusing on the theta rhythm and predictive processing, and on how these early developing concepts are shaped by social and cultural learning experiences, beginning in the first year of life. Given the topic of the workshop, I will emphasize the predictive processing account and how it may inform our understanding of early learning processes and the theta-gamma neural code, as a mechanism that may implement a fundamental and ontogenetically preserved neural principle for making predictions and updating predictive models. To summarize, my talk will span from a more general perspective on the interplay between the individual and the environment in early human development, down to the neural level and the basic learning mechanisms that may facilitate adaptation in the developing brain.

ARENA Lecture Series: Beyond mapping of the human brain: characterizing the causal role of large-scale network interactions in supporting complex cognition

Speaker: Dr Michal Ramot
Date: 7th October 2024, 12:00 – 14:00
Place: PEG, Seminar room 5G 170, Westend Campus, Goethe University

Recording

Neuroimaging has greatly extended our capacity to study the workings of the human brain. Despite the wealth of knowledge this tool has generated, however, there are still critical gaps in our understanding. While tremendous progress has been made in mapping areas of the brain that are specialized for particular stimuli or cognitive processes, we still know very little about how large-scale interactions between different cortical networks facilitate the integration of information and the execution of complex tasks. Yet even the simplest behavioral tasks are complex, requiring integration over multiple cognitive domains. Our knowledge falls short not only in understanding how this integration takes place, but also in what drives the profound variation in behavior that can be observed on almost every task, even within the typically developing (TD) population. The search for the neural underpinnings of individual differences is important not only philosophically, but also in the service of precision medicine. We approach these questions using a three-pronged approach. First, we create a battery of behavioral tasks from which we can calculate objective measures for different aspects of the behaviors of interest, with sufficient variance across the TD population. Second, using these individual differences in behavior, we identify the neural variance which explains the behavioral variance at the network level. Finally, using covert neurofeedback, we perturb the networks hypothesized to correspond to each of these components, thus directly testing their causal contribution. I will discuss our overall approach, as well as a few of the new directions we are currently pursuing.

ARENA Journal Club

Speaker: Emin Celik
Date: 24 September 2024, 12:00 - 14:00
Place: Online: Zoom

ARENA Journal Club

Speaker: Arthur Aubret
Date: 30th July 2024, 12:00 - 14:00
Place: Online: Zoom

ARENA Lecture Series: Decoding Mental Disorders: Pharmacological Challenges and LLM-Brain Interfaces

Speaker: PD Dr. med. Oliver Grimm, MSc
Date: 23rd July 2024, 14:00 – 16:00
Place: PEG, Room 5G 170, Westend Campus, Goethe University

Recording

AI language models like ChatGPT are transforming psychiatry, offering potential for improved diagnosis, treatment planning, and patient support. These models also provide insights into brain function, showing similarities with various mental states. The LOEWE DYNAMIC Center is embarking on innovative research projects that aim to understand psychopathology from a dynamic network perspective. For this endeavour, Grimm and colleagues are interested in the alignment between brain activity and large language models (LLMs) to advance our understanding of mental disorders. Upcoming studies will utilize pharmacological fMRI (Grimm et al. 2021) and MEG techniques, focusing on the effects of ketamine and dopaminergic agents on processing in the brain. The talk will discuss how LLM-brain alignment might help with this and how collaboration within the ARENA framework could contribute. The research might explore several key areas of alignment between human brain function and LLM processing via pharmacological challenge or in psychiatric patients. Key areas of study include next-word prediction, surprise calculation, and contextual embeddings. This research builds on recent findings of shared computational principles between human brains and LLMs, aiming to provide novel insights into language processing and its alterations in mental disorders. Studies have shown that both systems engage in continuous next-word prediction, calculate post-onset surprise, and rely on contextual embeddings for word representation (Goldstein et al., 2022). By investigating how pharmacological challenge tasks modulate these alignment patterns, the planned studies aim to gain novel insights into the neural basis of language processing and its potential alterations in mental disorders. The talk will offer background from psychiatry and AI, and discuss the research plan as well as collaboration opportunities.
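One of the shared computations mentioned, post-onset surprise, is simply the negative log of the probability a predictive model assigned to the word that actually occurred; a minimal sketch:

```python
import math

def surprisal(prob, base=2):
    """Post-onset surprise of a word: negative log of the probability the
    model assigned to it before the word appeared (in bits for base 2)."""
    return -math.log(prob, base)

# A word the model predicted with p = 0.5 carries 1 bit of surprise;
# a word with p = 0.01 carries about 6.6 bits.
```

In brain-LLM alignment studies, such word-by-word surprisal values (taken from an LLM's next-word distribution) are regressed against neural activity around word onset.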

ARENA Journal Club

Speaker: Iryna Schommartz
Date: 25th June 2024, 12:00 - 14:00
Place: Online: Zoom

ARENA Journal Club

Speaker: Cosimo Iaia
Date: 28th May 2024, 12:00 - 14:00
Place: Online

ARENA Lecture Series: The Science and the Engineering of Intelligence

Speaker: Professor Tomaso A. Poggio
Date: 24th May 2024, 12:00 – 14:00
Place: PEG, Seminar room 1G 161, Westend Campus, Goethe University

Recording

In recent years, artificial intelligence researchers have built impressive systems. Two of my former postdocs — Demis Hassabis and Amnon Shashua — are behind two main recent success stories of AI: AlphaGo and Mobileye, based on two key algorithms, both originally suggested by discoveries in neuroscience: deep learning and reinforcement learning. But now recent engineering advances of the last 4 years — such as transformers, perceivers and MLP mixers — prompt new questions: will science or engineering win the race for AI? Do we need to understand the brain in order to build intelligent machines, or not? A related question is whether there exist theoretical principles underlying those architectures, including the human brain, that perform so well in learning tasks. A theory of deep learning could solve many of today’s problems around AI, such as explainability and control. Though we do not have a full theory as yet, there are very good reasons to believe in the existence of some fundamental principles of learning and intelligence. I will describe one of them, which revolves around the curse of dimensionality. Others are about key properties of transformers and LLMs such as ChatGPT. I will argue that in the race for intelligence, understanding fundamental principles of learning and applying them to brains and machines is a compelling and urgent need.

ARENA Lecture Series: A European perspective on structural barriers to women's career progression in neuroscience

Speaker: Teresa Spano, Ashly Bourke
Date: 07th May 2024, 12:00 - 14:00
Place: Seminar room 5G 170, PEG, Westend Campus, Goethe University

Recording

Despite an unprecedented number of women entering neuroscience, and decades-long recruitment and retention efforts, women continue to be disproportionately underrepresented in European academic tenure-track faculty and leadership positions. This Perspective focuses on two major career points where women exhibit diminished representation: the transition from postdoctoral fellow to junior professor and the promotion to more senior (tenured) faculty positions. We discuss recently implemented country-specific and Europe-wide initiatives supporting equal career progression and propose further concrete steps to break down the structural barriers that prevent women’s progression up the academic career ladder as European neuroscientists.

ARENA Journal Club

Speaker: Bhavin Choksi and Martina Vilas
Date: 30th April 2024, 12:00 - 14:00
Place: Online

Getting Aligned on Representational Alignment

ARENA Lecture Series: Distilling the core visual and semantic dimensions underlying mental representations of objects

Speaker: Martin Hebart
Date: 9th April 2024, 12:00 - 14:00
Place: Seminar room 5G 170, PEG, Westend Campus, Goethe University

Recording

Understanding the nature of our mental representations is a central aim of the cognitive sciences. In this talk, I will discuss past, present, and future work from our lab targeted at (1) unraveling the nature of these representations, (2) revealing their neural substrate along the ventral visual system, and (3) identifying the representations uniquely associated with vision and semantics. To achieve these aims, we draw on a range of methods: computational modeling of large-scale online behavioral data; the development and use of densely sampled neuroimaging datasets comparing representations of images and words, and of sighted and blind individuals; and a direct comparison of neural representations of objects in humans and macaque monkeys. Together, our present results support a multifaceted view in which humans make sense of the world around them by combining a set of representational dimensions to structure their environments, form categories, and communicate their knowledge with others.

ARENA Lecture Series: Frankfurter Bürger-Universität Winter Semester Event, Bridging AI and Brain: Exploring Abstract Knowledge

Speaker: Gemma Roig
Date: 15th February 2024, 18:00 - 19:00
Place: Goethe University, Campus Westend, Seminarhaus, room SH 3.102

Recording

You can find the program here: [buerger.uni-frankfurt.de/143422054/programmbroschure-frankfurter-burger-universitat-wintersemester-2023-24.pdf](https://www.buerger.uni-frankfurt.de/143422054/programmbroschure-frankfurter-burger-universitat-wintersemester-2023-24.pdf)

ARENA Workshops - Computational Models for Neuroscience, introducing Net2Brain toolbox

Speaker: Timothy Schaumlöffel, Bhavin Choksi
Date: 6th February 2024, 10:00 - 12:00
Place: PEG 5.G170, Westend Campus, Goethe University

Welcome to our workshop on Computational Models for Neuroscience, where we will delve into the fascinating intersection of artificial intelligence and cognitive research. In recent years, deep neural networks (DNNs) have emerged as powerful computational models for understanding the complexities of the primate visual cortex. Numerous studies have highlighted the potential of DNNs in unravelling the computational principles and neurobiological mechanisms behind visual processing.
To facilitate this cutting-edge research, we introduce Net2Brain, a comprehensive toolbox designed to map model representations to human brain data. Unlike existing toolboxes that primarily focus on supervised image classification models, Net2Brain goes beyond by enabling the extraction of activations from diverse visual tasks, including semantic segmentation, depth estimation, and action recognition. With over 600 pre-trained DNNs and support for custom models, Net2Brain simplifies the entire process from feature extraction to analysis, offering a seamless pipeline for researchers. The toolbox computes representational dissimilarity matrices (RDMs) over activations, allowing for in-depth comparisons with brain recordings using representational similarity analysis (RSA) and weighted RSA, employing both ROI-based and searchlight analyses.
Net2Brain is an open-source toolbox that comes with preloaded brain data for immediate testing, and it seamlessly accommodates the integration of your own recorded data. Join us as we explore the vast potential of Net2Brain in advancing our understanding of the brain's visual processing through computational models.
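The weighted-RSA step the toolbox supports can be sketched in a few lines: fit a linear combination of model RDMs to a brain RDM over their upper triangles (a conceptual sketch in plain NumPy, not the Net2Brain API):

```python
import numpy as np

def upper_tri(m):
    """Vectorise the upper triangle of an RDM (excluding the diagonal)."""
    return m[np.triu_indices_from(m, k=1)]

def weighted_rsa(brain_rdm, model_rdms):
    """Least-squares weights combining several model RDMs to best predict a
    brain RDM, plus the fit of the weighted combination."""
    X = np.stack([upper_tri(m) for m in model_rdms], axis=1)
    y = upper_tri(brain_rdm)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = float(np.corrcoef(X @ w, y)[0, 1])
    return w, r
```

In the ROI-based and searchlight analyses the workshop covers, this comparison is repeated per brain region or per searchlight sphere, with RDMs computed from model activations on one side and brain recordings on the other.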

ARENA Workshops - MNE-Python

Speaker: Jack Taylor
Date: 30th January 2024, 09:00 - 12:00
Place: PEG 5.170, Westend Campus, Goethe University

MNE-Python is a library that has rapidly become one of the most widely used tools for M/EEG analysis. In this brief workshop, after a recap on the basics of the event-related potential (ERP) approach to M/EEG analysis, we'll walk through an analysis of some example data in MNE-Python and explore options for pre-processing and epoching the data. Finally, most likely in R, we will explore options for fitting robust models to epoched data that can be used to describe patterns and test hypotheses.
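The epoching step at the heart of that pipeline can be sketched in plain NumPy: cut the continuous recording into event-locked windows, then average them into an ERP (in practice mne.Epochs handles this, plus baseline correction and artifact rejection; the sketch below is only conceptual):

```python
import numpy as np

def epoch(data, events, sfreq, tmin=-0.1, tmax=0.4):
    """Cut a continuous recording into event-locked epochs.

    data: (n_channels, n_samples) continuous signal
    events: sample indices of stimulus onsets
    sfreq: sampling frequency in Hz
    Returns an (n_epochs, n_channels, n_times) array; epochs whose window
    falls outside the recording are dropped.
    """
    start, stop = round(tmin * sfreq), round(tmax * sfreq)
    epochs = [data[:, ev + start: ev + stop] for ev in events
              if ev + start >= 0 and ev + stop <= data.shape[1]]
    return np.stack(epochs)

# The ERP is simply the average over epochs:
# erp = epoch(data, events, sfreq).mean(axis=0)
```

Averaging across epochs cancels activity not phase-locked to the event, which is the core idea behind the ERP approach recapped at the start of the workshop.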

ARENA Journal Club

Speaker: Vicky Nicholls
Date: 28th November 2023
Place: Online

ARENA Journal Club

Speaker: Timothy Schaumlöffel and Arthur Aubret
Date: 24th October 2023
Place: Online

ARENA Journal Club

Speaker: Iryna Schommartz
Date: 26th September 2023
Place: Online

ARENA Journal Club

Speaker: Cosimo Iaia
Date: 25th July 2023
Place: Online