Alessandro Gifford, Jiawei Li - Two presentations from the Neural Dynamics of Visual Cognition Lab

  • Published on Dec 10, 2024
  • Title: Two presentations from the Neural Dynamics of Visual Cognition Lab
    Title 1: In silico discovery of representational relationships across visual cortex
    Abstract: Human vision is mediated by a complex interconnected network of cortical brain areas that jointly represent visual information. While these areas are increasingly well understood in isolation, their representational relationships remain elusive: what representational content is shared between areas, and what is unique to a specific area? Here we determined representational relationships by developing relational neural control (RNC). RNC generates and explores in silico functional magnetic resonance imaging (fMRI) responses for large sets of images, finding controlling images that align or disentangle responses across areas, thus indicating their shared or unique representational content. We used RNC to investigate the representational relationships of univariate and multivariate fMRI responses in early- and mid-level visual areas. Quantitatively, a large portion of representational content is shared across areas, and unique representational content increases as a function of cortical distance. Qualitatively, we isolated the visual features that determine shared or unique representational content, which changed when controlling univariate versus multivariate responses. Closing the empirical cycle, we validated the in silico discoveries on in vivo fMRI responses by presenting the controlling images to an independent set of subjects. Together, these results reveal how visual areas jointly represent the world as an interconnected network.
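    A minimal Python sketch of the univariate search the abstract describes, assuming random linear maps as stand-in encoding models (the actual work uses trained fMRI encoding models, and all names and numbers below are illustrative, not the published implementation): predicted responses are generated for a large image set, and the images where two areas' responses converge or diverge most are kept as aligning or disentangling controlling images.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in image features and encoding models. In the actual work these
    # would be deep-net image features and trained fMRI encoding models;
    # random linear maps are used here purely for illustration.
    n_images, n_features = 10_000, 512
    image_features = rng.standard_normal((n_images, n_features))
    w_area1 = rng.standard_normal(n_features)  # hypothetical model, early visual area
    w_area2 = rng.standard_normal(n_features)  # hypothetical model, mid-level visual area

    # In silico univariate fMRI responses for every image, z-scored per area
    resp1 = image_features @ w_area1
    resp2 = image_features @ w_area2
    z1 = (resp1 - resp1.mean()) / resp1.std()
    z2 = (resp2 - resp2.mean()) / resp2.std()

    # Controlling images: aligning images drive both areas similarly
    # (small response divergence); disentangling images drive them apart.
    divergence = np.abs(z1 - z2)
    aligning = np.argsort(divergence)[:50]        # most aligning images
    disentangling = np.argsort(divergence)[-50:]  # most disentangling images

    print("most aligning image indices:", aligning[:5])
    print("most disentangling image indices:", disentangling[:5])
    ```

    In the multivariate case the abstract mentions, the same kind of search would presumably operate on response patterns rather than single response amplitudes.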
    Title 2: How do we understand language across different people and modalities?
    Abstract: In everyday life, we are surrounded by many kinds of language material: news from podcasts, stories in novels, an annoying morning wake-up call from hotel reception, or a sweet good-night message from a crush. These messages arrive in different forms, visual or auditory, and from different people in different environments, yet we effortlessly understand them all. I have always been curious about how our brain accomplishes this, and to explore that question my research combines EEG with language models.
    In the first part of my talk, I'll introduce how our brain functions in a 'cocktail party' setting: when there are multiple speakers, how do we manage to focus on one while ignoring the others? In the second part, I'll present my current project on cross-modal semantic representation, exploring how our brain achieves the same understanding whether we read or listen to a story.
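    As a rough, hypothetical illustration of the EEG-plus-language-model approach mentioned above, the sketch below fits a ridge encoding model from synthetic word embeddings to synthetic EEG channels and scores per-channel prediction accuracy; a cross-modal analysis would train on one modality (e.g., reading) and test on the other (listening). Nothing here is the speaker's actual pipeline.

    ```python
    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic stand-ins: in practice, embeddings would come from a language
    # model for each word of a story, and EEG would be the word-locked response.
    n_words, emb_dim, n_channels = 2_000, 300, 64
    embeddings = rng.standard_normal((n_words, emb_dim))
    true_map = rng.standard_normal((emb_dim, n_channels)) * 0.1
    eeg = embeddings @ true_map + rng.standard_normal((n_words, n_channels))

    X_train, X_test, y_train, y_test = train_test_split(
        embeddings, eeg, test_size=0.2, random_state=0
    )

    # Ridge encoding model from embeddings to all EEG channels at once.
    # A cross-modal test would instead train on reading data and evaluate
    # on listening data (or vice versa).
    model = RidgeCV(alphas=np.logspace(-2, 4, 7))
    model.fit(X_train, y_train)
    pred = model.predict(X_test)

    # Per-channel Pearson correlation between predicted and observed EEG
    r = np.array([np.corrcoef(pred[:, c], y_test[:, c])[0, 1]
                  for c in range(n_channels)])
    print(f"mean encoding accuracy across channels: r = {r.mean():.3f}")
    ```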
