•  Location: 164 Angell Street, Room: 402, Innovation Zone

    Title: How are dynamical systems composed for complex behavior?

    Laura Driscoll, Ph.D.

    Senior Scientist at the Allen Institute for Neural Dynamics

    Abstract: Computational processes in neural systems emerge through learning across multiple timescales, from evolution and development to immediate, in-context adaptation. Yet fundamental questions remain: Which neural architectures confer evolutionary advantages? How do experiences shape circuit dynamics? What principles govern how specific computations arise during training? My group addresses these questions using data-driven models, simulations, and analytical methods. Building on a decade of research across multiple labs, we focus on fixed point structures, termed “dynamical motifs”, that serve as computational primitives. We’ve discovered that these motifs can be flexibly composed to solve diverse tasks, with rapid learning often involving novel recombination of existing motifs rather than construction of entirely new dynamics. However, the principles governing motif composition remain poorly understood, motivating our simulation-based approach. I will present two ongoing projects that illustrate this framework:

    • Dynamical motifs underlying foraging behavior: how fundamental dynamical motifs support naturalistic decision-making and navigation.
    • How task structure shapes computational dynamics: the relationship between problem structure and the organization of the dynamical systems that solve it.
    Very little is known about how humans and other animals compose elements of past learning to solve similar problems in new situations. To explore these and related questions, I recently joined the Allen Institute for Neural Dynamics. My group will utilize data-driven models, simulations, and analytical methods, with close ties to experimental groups collecting behavioral and neural data. We will examine how previous learning shapes behavior in novel environments.
    View Full Event  
  •  Location: 164 Angell Street, Room: 4th Floor, Innovation Zone

    “Visual motion estimation, revisited”

    William Bialek, Ph.D.

    Professor of Physics & Biophysics at Princeton University

    The estimation of movement in the visual system is one of the best studied examples of computation in the brain, with results dating back to the 1800s. There is work both on neurons and behavior, and on organisms ranging from insects to primates, with surprisingly common themes. One idea is that the brain’s algorithms for motion estimation are optimized, allowing for estimates that are as accurate or reliable as possible. The problem is that the predictions depend on the underlying joint statistics of movies and motion in the world, and on the relevant noise levels. In the absence of independent measurements of these quantities, a theory of optimal estimation just becomes a model with parameters to be fit to the data. My experimental colleagues have developed a specialized instrument that provides a “fly’s eye view” of the visual world, allowing us to sample the joint distribution of movies and motion in natural environments. Theory tells us how to construct the form of optimal motion estimators, quantitatively, as averages over this distribution. We also know the irreducible noise levels in the fly’s retina, we can record the responses of motion-sensitive neurons deep in the fly’s brain, and we can relate these computations to the connectome. This defines a program of comparing optimal estimates with the real computations done by the brain, in detail and with no free parameters. We are far from done, but I will describe our progress.
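    The core logic of estimation as an average over a measured joint distribution can be illustrated with a toy sketch (my own, not the speakers' code; the Gaussian setup and all parameters are assumptions): given many joint samples of true motion and a noisy observation, the minimum-mean-squared-error estimator is the conditional mean E[v | s], which can be approximated directly by averaging over binned samples, with no free parameters.

```python
import random

random.seed(0)

# Toy joint distribution: true velocity v from a N(0, 1) prior, observed
# signal s = v + noise with noise sd 0.5. These samples stand in for the
# measured joint statistics of movies and motion.
samples = []
for _ in range(200_000):
    v = random.gauss(0.0, 1.0)      # true motion
    s = v + random.gauss(0.0, 0.5)  # noisy observation
    samples.append((v, s))

def optimal_estimate(s_obs, samples, width=0.1):
    """Approximate E[v | s] by averaging v over samples with s near s_obs."""
    vs = [v for v, s in samples if abs(s - s_obs) < width]
    return sum(vs) / len(vs)

# With prior variance 1 and noise variance 0.25, the Bayes-optimal estimate
# shrinks the raw observation: E[v | s] = s / 1.25.
est = optimal_estimate(1.0, samples)
print(round(est, 2))  # close to 1/1.25 = 0.8, not the raw reading 1.0
```

The same recipe scales up: replace the scalar samples with (movie, motion) pairs from the fly's-eye instrument, and the conditional average gives the form of the optimal estimator quantitatively.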

     

    Pizza will be served for lunch!

    View Full Event  
  •  Location: 164 Angell Street, Room: 4th Floor, Innovation Zone

    “Entropy Rate of Meaningful Discourse”

    Misha Tsodyks, Ph.D.

    Misha will discuss attempts to leverage Large Language Models to evaluate the information content of meaningful discourse by separating meaning from phrasing.

     

    Pizza will be offered for lunch!

    View Full Event  
  •  Location: 164 Angell Street, Room: 4th Floor, Innovation Zone

    Katharina Duecker, Ph.D.

    Title: The next-generation Human Neocortical Neurosolver (HNN): biorealistic cell models reveal the importance of dendritic spiking dynamics in interpreting human EEG/MEG signals

    Join us for the CCBS Flash Talk Lunch series! These talks will be interactive, with open discussion and small-group conversation. Pizza will be available during the talk.

    View Full Event  
  •  Location: 164 Angell Street, Room: 4th Floor, Innovation Zone

    “What does the neuron do? A self-supervised dynamical model for Neuroscience and AI”

    Dmitri Chklovskii, Ph.D.

    Research Associate Professor, Department of Neuroscience, New York University

    Abstract: The Rectified Linear Unit (ReLU)—a staple of modern AI—is memoryless and thus misses the rich temporal structure of biological neurons. We recast the neuron as a self-supervised learner of an underlying stochastic dynamical system and introduce the Rectified Spectral Unit (ReSU), a neuronal model that couples rectification with spectral learning of stimulus dynamics. Trained without backpropagation on natural-scene translations, a three-layer ReSU network recapitulates key computations in the Drosophila motion-vision pathway. These results position ReSU networks as a principled framework for modeling sensory circuits and as a biologically motivated alternative to backpropagation-trained ReLU networks.

    Mitya Chklovskii’s research aims to reverse engineer the brain at the algorithmic level. Informed by anatomical and physiological neuroscience data, his group develops algorithms that model brain computation and solve machine learning tasks. Chklovskii is also on the faculty of the NYU Medical Center. Before coming to the Simons Foundation in 2014, he was a group leader at Janelia Farm, where he initiated and led a collaborative project that assembled the largest connectome at the time, a comprehensive map of neural connections in the brain. Before that, he was an associate professor at Cold Spring Harbor Laboratory in New York, a Sloan Fellow at the Salk Institute, and a Junior Fellow of the Harvard Society of Fellows. He holds a Ph.D. in physics from the Massachusetts Institute of Technology.

    View Full Event  
  •  Location: 164 Angell Street, Room: 4th Floor, Innovation Zone

    “Flexible Error Monitoring in Context-dependent Behavior” 

    Atsushi Kikumoto, Ph.D.

    Research Associate in Cognitive and Psychological Sciences, Brown University

    Join us for the first talk of the CCBS Flash Talk series! These talks will be interactive, with open discussion and small-group conversation.

    Pizza will be available during the talk.

    View Full Event  
  •  Location: 164 Angell Street, Room: 4th Floor, Innovation Zone

    Sheridan Feucht, Ph.D. Candidate, Northeastern University

    What is a word to an LLM? Individual tokens are often semantically unrelated to the meanings of the words/concepts they compose, meaning that LLMs must build up representations of word meaning in-context that are separate from token identities. In this talk, I show how these multi-token words can be used as a wedge to separate conceptual representations from literal token representations. I describe our “dual-route theory of induction”, which argues that models copy literal token information in parallel with “fuzzy” word representations, and show that this “fuzzy” concept induction is vital for other tasks, like translation. Finally, I describe our recent work analyzing the geometry of these independent subspaces, which suggests that they are likely useful for more than just copying.

    View Full Event  
  •  Location: 185 Meeting Street, Room: Marcuvitz Auditorium
    Dr. Tai Sing Lee
    Professor in Computer Science, Computer Science Department and Center for the Neural Basis of Cognition
    Carnegie Mellon University

    This talk will highlight recent advances in applying deep learning and digital twin models to neuroscience, and how these approaches inform artificial intelligence. By examining the responses of V1 and V4 neurons to a large dataset of natural images (~40,000), we find that their neural codes display remarkable complexity and diversity, challenging long-standing assumptions. Using deep learning–based digital twins of these neurons, we conduct both classical neurophysiological experiments in silico and novel computational studies to probe their selectivity for natural image features, the neural mechanisms that support such selectivity, and the computational constraints that shape their topological organization. These insights provide a deeper understanding of how concepts are learned and composed in the nervous system, with potential implications for building more flexible and interpretable AI systems.

    View Full Event  
  •  Location: 164 Angell Street, Room: 402 - Innovation Zone

    Risk Attitude: Preference or Perception?

    Christian Ruff, Ph.D.

    Professor of Neuroeconomics and Decision Neuroscience, University of Zurich

    Risk attitude – the willingness to accept uncertainty for the possibility of gaining larger rewards – is often seen as a stable personality trait akin to a ‘taste for risk’. However, this widespread notion is contradicted by findings that risk attitudes can change, and sometimes completely reverse, across different contexts and even across repetitions of identical choice problems. The neurocomputational processes giving rise to these fluctuations in risk attitude have remained elusive. In this talk, I will present a recent line of work from my lab that sheds light on these processes. In a series of experiments combining computational modelling, population-receptive-field modelling of fMRI data, and transcranial magnetic stimulation, we find that apparent risk attitudes do not originate from (dis)tastes for risk encoded in motivational brain systems but rather from Bayesian perceptual inference on noisy magnitude representations in parietal cortex. The individual characteristics of these neurocomputational perceptual processes can causally account for a variety of empirical effects, including individual differences, preference reversals, context effects, and changes of risk attitude under acute stress. Taken together, our work suggests that risk attitude may not reflect subjective valuation of uncertainty but rather perceptual mis-estimation, with profound implications for psychological, economic, and neuroscience theories of risk-taking and the corresponding clinical applications.
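    How perceptual inference alone can mimic a ‘taste for risk’ can be sketched with a toy simulation (my own illustrative sketch, not the lab's model; the log-magnitude encoding, prior, and all parameters are assumptions): an agent with a perfectly linear value function encodes reward magnitudes as noisy log representations and decodes them by Bayesian shrinkage toward a prior. Larger magnitudes are shrunk more, so the agent rejects a fair 50/50 lottery more often than not, i.e. looks risk averse, and its trial-to-trial choices fluctuate.

```python
import random
import math

random.seed(1)

MU0 = math.log(10)                   # prior mean of log magnitude
SIG0, SIG = 1.0, 1.0                 # prior sd and encoding-noise sd
LAM = SIG0**2 / (SIG0**2 + SIG**2)   # Bayesian shrinkage weight

def decoded_value(m):
    """Posterior-mean decode of a noisy log-magnitude representation."""
    r = math.log(m) + random.gauss(0.0, SIG)    # noisy internal code
    return math.exp(LAM * r + (1 - LAM) * MU0)  # shrunk toward the prior

# Choice: a sure 10 vs a 50% chance of 20 (identical expected value).
trials = 100_000
lottery_choices = sum(
    0.5 * decoded_value(20) > decoded_value(10) for _ in range(trials)
)
frac = lottery_choices / trials
print(f"lottery chosen on {frac:.0%} of trials")  # well below 50%
```

Because the prior shrinks 20 proportionally more than 10, the decoded lottery is worth less than the decoded sure option on average; the encoding noise then produces the choice variability that a fixed risk preference cannot explain.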

    Christian Ruff is Full Professor of Neuroeconomics and Decision Neuroscience at the Department of Economics of the University of Zurich. After studying Psychology, Cognitive Science, and Neurobiology in Freiburg/Germany and Vancouver/Canada, he obtained a PhD in Cognitive Neuroscience at University College London in 2007. He stayed at UCL as Senior Research Fellow until 2010, when he took up his position in Zurich. In his research, Christian studies human motivation, decision-making, and learning with the aim to develop models that can be used to explain and predict choices and social behavior across many diverse contexts.

    View Full Event  
  •  Location: TBA

    We are delighted to announce CISE 2025 - an interdisciplinary conference on Curiosity, Information Seeking & Exploration.

    Speakers:

    • Jacqueline Gottlieb, Columbia University
    • Tali Sharot, UCL / MIT
    • George Loewenstein, Carnegie Mellon University
    • Alireza Modirshanechi, EPFL
    • Russell Golman, Carnegie Mellon University
    • Valeria González, Reed College
    • Caroline Charpentier, University of Maryland
    • Robert Wilson, Georgia Institute of Technology
    • Kate Nussenbaum, Boston University
    • Haoxue Fan, Brown University

    For more information see https://ciseconf.github.io/CISE_2025 and please share this information with your networks. 

    View Full Event  
  •  Location: Metcalf Research Building, Room: Dome Room

    The Carney Center for Computational Brain Science and the Brainstorm Program is organizing a two-part computational modeling workshop with a focus on computational modeling of cognition, behavior, and brain/behavior relationships. Workshop attendees will learn the basic tools for understanding, developing, and applying models to brain science questions, and have the opportunity to apply these techniques in a novel behavioral dataset.

    Part 1 (July 14-22) will consist of workshops and live tutorials, including daily lectures spanning basic to advanced topics, accompanied by hands-on coding tutorials. Attendees will learn the basic tools for understanding, developing, and applying computational models, with a focus on hypothesis testing, quantitative fitting, Bayesian methods, and model checks and comparisons. Additionally, advanced modeling sessions will provide a deeper theoretical understanding and application of complex modeling techniques, such as hierarchical Bayesian modeling (including BayesFlow), automated model discovery, and sequential sampling models. The latter includes in-depth introductions to the HSSM and EMC2 packages.

    During Part 2 (July 23 - August 1), participants will have the opportunity to work in teams to apply these skills to analyze a real dataset provided by the organizers, with potential for novel discoveries. Prizes will be awarded for models with the most predictive power, rigor, creativity, and innovation.

    The 2025 workshop syllabus is available here.

    This workshop is open to members of the Brown community and is designed for researchers across fields, backgrounds, and levels of experience: computational “novices” with no experience, and those with more computational experience who may want to augment their toolkit with advanced approaches to parameter estimation or specific classes of models. Although no computational experience is required, those with modeling backgrounds will still benefit from the advanced modules and will have the opportunity to learn new skills and state-of-the-art computational approaches.

    Please reach out to Sebastian Musslick (sebastian_musslick@brown.edu) with any questions.

    Participation is limited, but we do keep a waitlist.

    Register Here
    View Full Event  
  •  Location: Metcalf Research Building, Room: 107

    Please join ANCOR (AI, Neuro, CogSci Research talks) on Monday 3/17 at 11am featuring Sam Lippl (PhD student at Columbia University/Zuckerman Institute). 

    Zoom: https://brown.zoom.us/j/93766900152

    View Full Event  
  •  Location: 115 Waterman Street, Room: 447

    Classical computation in connectionist models

    Aditya Yedetore, Boston University Linguistics (PhD student)

    The success of Classical (i.e., symbolic) linguistic theories suggests that at least in the linguistic domain, the mind constructs structured symbolic representations and processes them with rules of symbol manipulation. However, modern Connectionist (i.e., neural) models may provide an alternative foundation for linguistic computation: Connectionist models often lack built-in mechanisms for Classical representation and processing, yet they perform impressively on a wide array of linguistic tasks. Connectionist models may not challenge the Classical approach if they implement Classical computers. This is no mere theoretical possibility: when trained on simple symbol manipulation tasks, small Connectionist models develop structured symbolic representations, a key aspect of Classical computation. This raises the possibility that Connectionist models trained on natural data also develop such representations. However, structured representation is not sufficient for Classical computation. The processing of the structured representations by the Connectionist models must involve abstract symbol manipulation of the sort that Classical theories posit. Else, Connectionist models may still challenge the Classical account of human linguistic capacities. We study this question by testing if Connectionist models trained on simple symbol processing tasks develop Classical processing mechanisms. We find evidence suggesting that Connectionist models that succeed on such tasks do implement Classical models. To the extent that such findings generalize to models trained on naturalistic data, such a result would suggest that modern Connectionist models do not challenge Classical theories of human language.

    View Full Event  
  •  Location: Watson Center for Information Technology (CIT), Room: 477

    Hi friends,

    Next Monday we are hosting another ANCOR (AI, Neuro, Cog Research talks) speaker! Ekdeep Lubana (Postdoc, Harvard) joins ANCOR to present his work Dynamics of Concept Learning and Emergent Abilities in Neural Networks.


    Date: Monday, Feb 10, 11am
    Location: CIT 477, 115 Waterman St.

    All are welcome!
    Mikey + Aalok

     

    https://brown-ancor.github.io/

    Zoom Link
    View Full Event  
  •  Location: Watson Center for Information Technology (CIT), Room: 477

    Ciana Deveau (Brown/NIH) joins ANCOR (AI, Neuro, Cog Research talks) to present “Recurrent cortical networks encode natural sensory statistics via sequence filtering”.

    View Full Event  
  •  Location: 164 Angell Street, Room: Innovation Zone, 4th floor

    Speaker: Eghbal Hosseini (Postdoc, MIT)

    Title: Large language models implicitly learn to straighten neural sentence trajectories to construct a predictive representation of natural language

    View Full Event  
  •  Location: 164 Angell Street, Room: Innovation Zone - 402

    A predictive coding perspective on oscillatory traveling waves

    Andrea Alamia, Centre de Recherche Cerveau et Cognition (CerCo), CNRS, Université de Toulouse, Toulouse, France

    This talk presents a few studies that aim to interpret oscillatory traveling waves within the predictive coding framework. In the first part, I’ll introduce a simple model of the visual cortex based on predictive coding mechanisms, in which physiological communication delays between levels generate alpha-band rhythms. Interestingly, these oscillations propagate as traveling waves across levels, both forward (during visual stimulation) and backward (during rest). Remarkably, experimental EEG data matched the predictions of our model. In the second part of the talk, I’ll present two studies that indirectly investigate the link between predictive coding mechanisms and traveling waves experimentally: the first investigates the effect of a powerful psychedelic drug, N,N-dimethyltryptamine (DMT), on alpha-band oscillations, and the second interprets the pattern of oscillatory traveling waves in patients with schizophrenia in light of predictive coding. In the last part of the talk, I will show some (very) preliminary results on a statistical learning paradigm that directly explores the link between traveling waves and predictive coding processes.
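    How communication delays in a predictive coding loop can produce rhythms that travel across levels can be caricatured in a few lines (my own toy, not the speaker's model; the unit equations, delay, and time base are assumptions): a lower level encodes the error between its input and the delayed descending prediction, a higher level echoes the delayed ascending signal, and the resulting delayed negative feedback loop oscillates, with the higher level lagging the lower one.

```python
D = 12            # one-way communication delay, in time steps (assumed)
T = 200
x1 = [0.0] * T    # lower level: error unit
x2 = [0.0] * T    # higher level: prediction unit

for t in range(T):
    ascending  = x1[t - D] if t >= D else 0.0
    descending = x2[t - D] if t >= D else 0.0
    x1[t] = 1.0 - descending   # constant input minus delayed prediction
    x2[t] = ascending          # delayed copy of the level below

# The loop reduces to x1[t] = 1 - x1[t - 2*D], which alternates with
# period 4*D, and x2 lags x1 by exactly D steps: the oscillation
# "travels" up the hierarchy, phase-shifted at each level.
```

The toy's frequency is set purely by the loop delay; in the full model the exact rhythm also depends on the integration dynamics at each level, but the phase lag across levels, i.e. the traveling wave, arises the same way.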

    View Full Event  
  •  Location: 164 Angell Street, Room: Innovation Zone - 4th floor

    Large Language Models vs Human Brain: mapping and decoding the language code in neural systems

    Jean-Rémi King, Ph.D.

    While deep learning has made major progress in natural language processing, these algorithms fall short of the compute and data efficiency of the human brain. Here, we systematically evaluate the similarities and differences between these two systems. For this, we collect and analyze large-scale datasets of magneto/electro-encephalography (M/EEG), functional Magnetic Resonance Imaging (fMRI), and intracranial recordings. After investigating where and when deep language algorithms function similarly to the brain, we show that long-range forecasts make them more similar to it. This systematic comparison provides an operational foundation for decoding language and semantics from brain responses to speech listening, images, videos, reading, and text typing. Overall, these findings underscore the potential of integrating AI and neuroscience to unify cognitive tasks within a common computational framework.
    View Full Event  
  •  Location: 164 Angell Street, Room: 4th floor - 429

    Join us for our weekly interdepartmental journal club to discuss recent work in cognitive, computational, and systems neuroscience. For more info, contact Kati Conen (katherine_conen@brown.edu)

    View Full Event  
  •  Location: 164 Angell Street, Room: Carney Innovation Zone, 4th floor

    ShiNung Ching, Associate Professor of Electrical & Systems Engineering,
    Washington University in St. Louis

    A fundamental challenge in computational neuroscience has been the translation of multimodal data into formal mathematical and computational models that can reveal biophysical mechanisms in neural circuits and their connection to behavior. In this presentation, I will describe our recent efforts in this domain, focusing on extracting from data the generative dynamics that give rise to overt observations of brain activity. Specifically, I will describe how we have adapted tools from Bayesian filtering and algorithmic optimization toward the problem of parametrically learning high-dimensional, biophysically interpretable models of network interactions involving hundreds to thousands of neural populations. These techniques place a particular emphasis on model-building at the level of individuals, which in turn provides leverage for revealing idiosyncrasies in brain mechanisms. In this regard, I will highlight two ways in which we are leveraging the obtained models. First, I will discuss our newly developed methods to directly interrogate the intrinsic dynamics within models, toward assessing topological similarity across individual (brain) dynamics and the functional salience thereof. Second, I will describe how we are using models to predict input-output relationships within brain networks and their responses to exogenous, causal perturbations. In addition to basic mechanistic insights, these approaches enable us to design brain stimulation protocols that are tailored to individuals and defined in terms of dynamical targets that can be linked to specific functional endpoints; to conclude, I will briefly describe ongoing work to validate this latter premise.

    View Full Event  
  •  Location: 164 Angell Street, Room: Innovation Zone

    Cortical Computations Underlying the Integration of Perceptual Priors and Sensory Processing

    Tahereh Toosi, Ph.D.

    Postdoctoral Research Scientist, Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University

    The ability of the visual system to store and use learned information, or perceptual priors, is essential for interpreting complex visual scenes, such as identifying obscured objects or imagining scenes not currently visible. This process relies on the interaction between processing incoming sensory data and existing knowledge stored in the synaptic strengths throughout the brain. Although the importance of top-down and bottom-up integration is recognized, the precise ways in which they enable the brain to piece together information from different sources remain largely unknown.

    My research aims to reveal the mechanisms underlying these processes by demonstrating how the brain’s need to function reliably in noisy environments influences the development of these pathways, enabling visual processing abilities like resolving visual occlusion and visual imagination. The phenomenon of illusory contours and shapes, exemplified by the Kanizsa optical illusion and Rubin’s face-vase illusion, serves as an ideal case study for how the brain combines sensory input with past experiences to create a coherent perception. Previous studies have shown that such illusory contours invoke activation in specific layers (L2/3) of the early visual cortex but not in others (L4). I will demonstrate the recapitulation of these findings within a deep convolutional model optimized for object recognition, powered by a theory-grounded, biologically plausible algorithm that processes activations through forward and feedback pathways iteratively. This represents the first instance of a large-scale, image-computable model that, while primarily optimized for recognizing objects, also explains how illusions are perceived in the visual cortex as a result of integrating sensory data with learned information.

    Zooming out, the insights from this computational modeling suggest a resolution to the debate over whether the brain functions primarily as a generative or a pattern-recognition neural network, and explain a number of experimental findings regarding the specificity of computations across cortical layers.

    View Full Event  
  •  Location: MacMillan Hall, Room: 115

    Network Cognition in a Curious World

    Dani Bassett, Ph.D. | J. Peter Skirkanich Professor at the University of Pennsylvania 

    In this talk, I will describe a notion of network cognition that manifests in how we engage with the curious world around us. To do so, I will draw together three lines of inquiry in mind, brain, and computation. I’ll begin with a line of inquiry into connective curiosity (“How do we connect bits of information as we walk about the world?”), then move into graph learning (“How do we build larger network models from those connections?”), and finally end in network control theory (“How is that model building constrained by the brain’s own connective structure?”). The studies discussed will span experiment, model, and theory, and bridge human behavior, neural representations, and computational science. Together they frame a formal investigation into network cognition and motivate future inquiry. 

    View Full Event  
  •  Location: Metcalf Research Building, Room: Auditorium

    Deconstructing human reinforcement learning

    Anne Collins, Ph.D. Associate Professor of Psychology at the University of California, Berkeley

    Reinforcement learning frameworks have contributed tremendously to our understanding of learning processes in brain and behavior. However, this remarkable success obscures the reality of multiple underlying processes that support humans’ unique flexibility and adaptability. In this talk, I will show that not accounting for such underlying processes in computational cognitive modeling weakens the generalizability and interpretability of findings, with important consequences for neuroscience, developmental, and clinical research. I will present multiple approaches to disentangle the processes that support flexible learning, including episodic and working memory processes. This work highlights the importance of studying learning as a multi-dimensional phenomenon that relies on multiple separable but interdependent computational mechanisms. Insights into how the brain implements learning are essential to informing generalizable, interpretable cognitive modeling.

    View Full Event  
  •  Location: 164 Angell Street, Room: Innovation Zone, 402

    Building Performant and Brain-Like Recurrent Models from Neurons and Astrocytes

    Leo Kozachkov, Ph.D., Postdoctoral Associate, McGovern Institute for Brain Research, MIT

    The brain’s ability to perform challenging tasks is facilitated by its many inductive biases—hardwired biological features that predispose it to process information in certain ways over others. These features include anatomically distinct brain areas, as well as specialized cell types such as neurons and glia. Inductive biases grant the brain computational powers that currently surpass artificial intelligence systems in many domains. In this seminar, I will cover two recent avenues of research that leverage the brain’s inductive biases to build highly performant, recurrent artificial networks.

    In the first half of my talk, I will discuss recent progress in understanding the computational role of different cell types. I will focus on neuron-glial interactions. An intriguing fact is that most human brain cells are not neurons, but rather glia. There is mounting experimental evidence suggesting that astrocytes, a specialized type of glial cell, play a significant role in learning, memory, and behavior. However, our theoretical understanding is lagging far behind. I will cover recent work that aims to bridge this gap by relating dynamical, energy-based neuron-astrocyte networks to powerful AI models such as Modern Hopfield Networks and transformers.

    In the second half of my talk, I will discuss how and why the brain maintains a balance between flexibility and stability through “dynamic attractors”, which are reproducible patterns of neural activity in response to (potentially time-varying) stimuli. This work reveals an unexpected and useful theoretical link between dynamic attractors and modularity. Specifically, recurrent neural networks with dynamic attractors can be combined into large, modular “networks of networks”, reminiscent of the brain’s macroscopic organization, in ways that provably preserve stability. These higher-order, stable networks can then be optimized for state-of-the-art performance on benchmark sequential processing tasks, demonstrating that dynamic stability is a useful inductive bias for building brain-like performant recurrent models.

    View Full Event  
  •  Location: 164 Angell Street, Room: 402, Innovation Zone

    “Bridging scales in intelligent systems: from octopus skin to mouse brain”

    Leenoy Meshulam, Ph.D.

    Swartz Theory Fellow, University of Washington

    For an animal to perform any function, millions of neurons in its nervous system furiously interact with each other. Be it a simple computation or a complex behavior, all biological functions involve many individual units. A theory of function must specify how to bridge different levels of description at different scales. For example, to predict the weather, it is irrelevant to follow the velocities of every molecule of air. Instead, we use coarser quantities of aggregated motion of many molecules, e.g., pressure fields. Statistical physics provides us with a theoretical framework to specify principled methods to systematically ‘move’ between descriptions of microscale quantities (air molecules) and macroscale ones (pressure fields). Can we hypothesize equivalent frameworks in the nervous system? How can we use descriptions at the level of neurons and synapses to make precise predictions of activity and behavior? My research group will develop theory, modeling, and machine learning tools to discover generalizable forms of scale bridging across species and behavioral functions. In this talk, I will present lines of previous, ongoing, and proposed research that highlight the potential of this vision. I shall focus on two seemingly very different systems: mouse brain neural activity patterns, and octopus skin cell activity patterns. In the mouse, we reveal striking scaling behavior and hallmarks of a renormalization-group-like fixed point governing the system. In the octopus, camouflage skin pattern activity is reliably confined to a (quasi-)defined dynamical space. Finally, I will touch upon the benefits of comparing across animals to extract principles of multiscale function in the nervous system, and propose future directions to investigate how macroscale properties, such as memory or camouflage, emerge from the microscale activity of individual cells.

    View Full Event  
  •  Location: 164 Angell Street, Room: 402 - Innovation Zone

    “Successes and failures of machine learning models of sensory systems”

     

    Jenelle Feather, Ph.D. 

    Flatiron Research Fellow at the Center for Computational Neuroscience

    The environment is full of rich sensory information. Our brain can parse this input, understand a scene, and learn from the resulting representations. The past decade has given rise to computational models that transform sensory inputs into representations useful for complex behaviors such as speech recognition or image classification. These models can improve our understanding of biological sensory systems and may provide a test bed for technology that aids sensory impairments, provided that model representations resemble those in the brain. In this talk, I will discuss my research program, which aims to develop methods to compare model representations with those of biological systems and to use insights from these methods to better understand perception and cognition. I will cover experiments in both the auditory and visual domains that bridge between neuroscience, cognitive science, and machine learning. By investigating the similarities and differences between computational model representations and those present in biological systems, we can use these insights to improve current computational models and better explain how our brain utilizes robust representations for perception and cognition. 

    View Full Event  
  •  Location: 164 Angell Street, Room: 402, Innovation Zone

    “The Relational Bottleneck and the Emergence of Cognitive Abstractions”

    Taylor Webb, Ph.D.

    Human cognition is characterized by a remarkable ability to transcend the specifics of limited experience to entertain highly general, abstract ideas. Efforts to explain this capacity have long fueled debates between proponents of symbol systems and statistical approaches. In this talk, I will present an approach that suggests a novel reconciliation to this long-standing debate, by exploiting an inductive bias that I term the relational bottleneck. This approach imbues neural networks with key properties of traditional symbol systems, thereby enabling the data-efficient acquisition of cognitive abstractions, without the need for pre-specified symbolic representations. I will also discuss studies of perceptual decision confidence that illustrate the need to ground cognitive theories in the statistics of real-world data, and present evidence for the presence of emergent reasoning capabilities in large-scale deep neural networks (albeit requiring far more training data than is developmentally plausible). Finally, I will discuss the relationship of the relational bottleneck to other inductive biases, such as object-centric visual processing, and consider the potential mechanisms through which this approach may be implemented in the human brain.

    View Full Event  
  •  Location: 164 Angell Street, Room: 402, Innovation Zone

    Gabriel Kreiman, Ph.D.

    Professor, Harvard University, Children’s Hospital Boston

    What information do neurons along the ventral visual cortex represent? Exhaustively examining all possible images is empirically impossible. Therefore, to investigate stimulus preferences, investigators have used a combination of intuitions derived from previous studies, natural stimulus statistics, and serendipitous findings. Here I will describe an approach to uncover what neurons want using a real-time, unbiased, systematic algorithm based on computational models of the ventral visual cortex. We use a generative deep neural network as a vast and diverse hypothesis space. A genetic algorithm searches this space for stimuli guided by neuron preferences. We show that this approach can rapidly generate synthetic images that trigger high activations, both in model units as well as in real neurons, in many cases even higher activations than those elicited by large numbers of hand-picked natural stimuli or images derived from conventional approaches. This approach forces us to revisit how we think about neural coding in the ventral visual cortex. I will also show the results of psychophysics experiments where humans are asked to describe the images that trigger high activation patterns in inferior temporal cortex neurons, reinforcing the notion that neurons in the ventral visual cortex represent complex visual features but not semantic categories. Finally, I will show that similar conclusions can be drawn by scrutinizing the representations in artificial neural networks as coarse approximations to the processing steps along the ventral stream.
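
    The closed-loop search described above can be caricatured in a few lines. In this toy sketch (the fitness function and all names are illustrative stand-ins, not the actual experimental pipeline), a fixed quadratic function plays the role of both the generative network and the recorded neuron, and a simple genetic algorithm evolves latent codes toward higher "activation":

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16
target = rng.normal(size=dim)  # hidden "preferred feature" (illustrative)

def neuron_response(z):
    # Stand-in for generator + recorded neuron: activation peaks when the
    # latent code z aligns with the hidden preferred feature.
    return float(target @ z - 0.1 * z @ z)

def evolve(pop_size=64, n_gens=100, sigma=0.3):
    # Genetic algorithm over latent codes, guided only by scalar responses.
    pop = rng.normal(size=(pop_size, dim))
    for _ in range(n_gens):
        scores = np.array([neuron_response(z) for z in pop])
        elite = pop[np.argsort(scores)[-pop_size // 4:]]        # selection
        parents = elite[rng.integers(len(elite), size=pop_size)]
        pop = parents + sigma * rng.normal(size=parents.shape)  # mutation
    scores = np.array([neuron_response(z) for z in pop])
    return pop[np.argmax(scores)], float(scores.max())

best_z, best_score = evolve()
```

    The key property the talk exploits is that the search only needs scalar activations, so the same loop runs whether the scorer is a model unit or a real neuron recorded in closed loop.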

    View Full Event  
  •  Location: 164 Angell Street, Room: 402, Innovation Zone

    Please join us for a seminar with Larry F. Abbott, Ph.D.


    William Bloor Professor of Theoretical Neuroscience and Professor of Physiology and Cellular Biophysics (in Biological Sciences)

    Principal Investigator at Columbia’s Zuckerman Institute

    Co-director of Columbia’s Kavli Institute for Brain Science

     

    Title: “Modeling the Navigational Circuitry of the Fly”

    Abstract: Navigation requires orienting oneself relative to landmarks in the environment, evaluating relevant sensory data, remembering goals, and converting all this information into motor commands that direct locomotion. I will present models, highly constrained by connectomic, physiological, and behavioral data, for how these functions are accomplished in the fly brain.

    View Full Event  
  •  Location: 164 Angell Street, Providence, RI 02912, Room: Carney Innovation Zone, 4th Floor

    The Carney Institute offers an Advanced SEEG Analysis Workshop alongside this Data Challenge (Jan 16th - Jan 19th, 2024). This is a week of tutorials on how to conduct computational analyses of SEEG signals, offered by world leaders on these topics. Topics include preprocessing SEEG data, identifying sharp wave ripples, detecting replay, visual encoding, and more! If you are interested in joining the workshop, you can indicate this in the Registration form. Please read this attachment.

    View Full Event  
  •  Location: 164 Angell Street, Room: Innovation Zone

    NetPyNE provides programmatic and graphical interfaces to develop data-driven multiscale brain neural circuit models using Python and NEURON. Users can define models using a standardised JSON-compatible rule-based declarative format. Based on these specifications, NetPyNE will generate the network in NEURON, enabling users to run parallel simulations, optimize and explore network parameters through automated batch runs, and use built-in functions for visualization and analysis. NetPyNE also facilitates model sharing by exporting and importing standardized formats: NeuroML and SONATA.
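
    As a rough illustration of the declarative style (the key names below are modeled on NetPyNE's documented popParams/connParams rule format, but this sketch is plain Python data and does not import netpyne or NEURON):

```python
import json

# Hypothetical two-population network in a NetPyNE-like declarative,
# JSON-compatible rule format; all values are illustrative.
net_spec = {
    "popParams": {
        "E": {"cellType": "PYR", "numCells": 80},
        "I": {"cellType": "BAS", "numCells": 20},
    },
    "connParams": {
        "E->I": {
            "preConds": {"pop": "E"},     # rule applies to all E cells...
            "postConds": {"pop": "I"},    # ...projecting onto all I cells
            "probability": 0.1,
            "weight": 0.005,
            "synMech": "AMPA",
        }
    },
}

# JSON-compatibility means the whole model survives a serialization
# round trip, which is what enables export to NeuroML/SONATA-style sharing.
round_trip = json.loads(json.dumps(net_spec))
```

    The declarative approach keeps the model specification separate from the simulator: the same rules can be instantiated in NEURON, exported, or diffed as plain data.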

    To participate in the hands-on portion of the workshop, attendees will need to register with Open Source Brain https://www.opensourcebrain.org/. It is a one-click registration using email, GitHub account, or ORCID.

    View Full Event  
  •  Location: Carney Institute, 164 Angell Street, 4th Floor, Room: Innovation Zone

    Speaker: Anne Collins, Ph.D., Associate Professor, University of California, Berkeley

    The importance of goals in human learning.

    Reinforcement learning frameworks have contributed tremendously to our understanding of learning processes in brain and behavior. However, this remarkable success obscures the reality of multiple underlying processes, and in particular hides how executive functions set the stage over which reinforcement learning computations operate. In this talk, I will show that executive functions define the learning substrates for other learning mechanisms, setting the stage for what we learn about. Across multiple studies, we find that the goals humans set define their intrinsic motivations, as well as the states and actions over which they learn. Our results emphasize the blurry boundary between “fast” and “slow” processes and show that flexible human cognition can be supported by leveraging simple computational processes over internally defined inputs. Clarifying the contributions and interactions of different learning processes is essential to understanding individual learning differences, particularly in clinical populations and development. This work highlights the importance of studying learning as a multi-dimensional phenomenon that relies on multiple separable but interdependent computational mechanisms.

    Lunch will be provided after the seminar.

    View Full Event  
  •  Location: 164 Angell Street, 4th Floor, Room: Innovation Zone

    This Fall ICERM is hosting a semester-long program focusing on issues at the intersection between math and neuroscience. This program will bring in prominent computational and mathematical neuroscientists from abroad. The first of three weeklong workshops will be the week of Sept 18, focusing on Neuronal Network Dynamics.

    Please join us for an informal networking wine and cheese event sponsored by the Carney Center for Computational Brain Science with attendees of this workshop on Tuesday, Sept. 19, from 5:30 to 7:30 p.m., at 164 Angell St., 4th floor.

    Please note also that each workshop will define a set of “open questions” that will serve as problems for mathematicians to work on. We are hoping that some of these questions are inspired by problems defined by the Carney community, and reciprocally, that the open questions will inspire other work at Brown. Feel free to bring your ideas to this event and any others throughout the semester. We will host another wrap-up event at the end of the semester to crystallize these discussions and inspire new collaborative work.

    View Full Event  
  •  Location: Smith-Buonanno Hall, G-01; in-person, with Zoom option for non-local participants

    The Carney Center for Computational Brain Science and the Brainstorm Program is organizing a two-week computational modeling workshop with a focus on computational modeling of cognition, behavior, and brain/behavior relationships. Workshop attendees will learn the basic tools for understanding, developing, and applying models to brain science questions, and have the opportunity to apply these techniques in a novel dataset.

    Week 1 will consist of workshops and live tutorials, including daily lectures spanning basic to advanced topics, accompanied by hands-on coding tutorials. Attendees will learn the basic tools for understanding, developing, and applying computational models, with a focus on hypothesis testing, quantitative fitting, Bayesian methods, and model checks and comparisons. Additionally, advanced modeling sessions will provide a deeper theoretical understanding and application of complex modeling techniques.

    During Week 2, participants will have the opportunity to work in teams to apply these skills to analyze a real dataset provided by the organizers, with potential for novel discoveries. Prizes will be awarded for models with the most predictive power, rigor, creativity, and innovation.

    For details on last year's workshop and modeling competition, visit the Center for Computational Brain Science website. Previous syllabi are available here. We will cover most of the same basic topics, with a few tweaks and additions (based on participant input and guest speakers).

    Intended Audience: This workshop is open to members of the Brown community and is designed for researchers across fields, backgrounds, and levels of experience: computation “novices” with no experience and those with more computational experience who may want to augment their toolkit with advanced approaches to parameter estimation or specific classes of models. Although no computational experience is required, those with modeling backgrounds will still benefit from the advanced modules and will have the opportunity to learn new skills and state-of-the-art computational approaches.

    Maximum number of participants: Participation is limited to 20, but we do keep a waitlist.

    Register here.

    Organizers: Andra Geana, Debbie Yee, Alana Jaskir, Michael Frank

    View Full Event  
  •  Location: Carney Institute, 164 Angell Street, 4th Floor, Room: Innovation Zone

    Please join the Carney Institute for a Mini-Symposium on the Zimmerman Innovation Awards in Brain Science. Previous awardees will share their projects, how they fit the goals of the program, and how the funding helped propel their science. The event will also include an overview of the application and review process as well as an open Q&A session.

    10:00 - Overview of the Innovation Awards Program
    10:25 - Greg Valdez / Lalit Beura - “Optimizing housing conditions to accelerate the translation of research using mouse models of Alzheimer’s Disease”
    10:50 - Kate O’Connor-Giles / Erica Larschan - “Identifying drivers of coordinated synaptic gene expression across neuronal subtypes”
    11:15 - Theresa Desrochers / Matthew Nassar - “Beyond Steady State: Mapping frontal representations onto sequential choices through reinforcement learning”
    11:40 - Q&A about the upcoming application cycle

    The 2023 call for applications is now open in UFunds and the application deadline is September 1.

    Refreshments will be served.

    View Full Event  
  •  Location: 164 Angell Street, Room: 4th Floor, Innovation Zone

    Earl Miller, Ph.D., Picower Institute and Dept. of Brain and Cognitive Sciences at MIT.

    For a long time, the brain was thought to function like clockwork, with specialized parts working together due to physical connections. However, in recent decades, our understanding has undergone a major shift. While the individual parts and anatomical connections are still important, many cognitive functions are driven by emergent properties - higher-level properties that arise from the interactions between the parts. A key aspect of these emergent properties are brain waves, oscillating rhythms of electrical activity that allow millions of neurons to self-organize and control our thoughts, much like a crowd doing ‘the wave’.

    View Full Event  
  •  Location: ICERM, Room: 121 South Main Street

    Brain Rhythms Connect Physiology and Cognition

    A lecture by Dr. Nancy Kopell, Professor of Mathematics and Statistics, Boston University. Reception begins at 4:30 p.m.

    View Full Event  
  •  Location: Carney Institute, 164 Angell Street, 4th Floor, Room: Innovation Zone

    Special Seminar: “Traveling Waves in Cortex: Spatiotemporal Dynamics Shape Perceptual and Cognitive Processes”- Lyle Muller, Ph.D., Assistant Professor, Mathematics, Western University.

    With new multichannel recording technologies, neuroscientists can now record from cortex with high spatial and temporal resolution. Early recordings during anesthesia observed waves traveling across the cortex. While for a long time traveling waves were thought to disappear in awake animals, in recent work we have revealed traveling waves during awake states, where activity is more difficult to analyze. Whether these waves play active functional roles in sensory perception and cognitive processes, however, has remained unclear.

    In my research, I have introduced new computational methods for detection and quantification of spatiotemporal patterns in multisite recordings. These methods have revealed that small visual stimuli consistently evoke waves traveling outward from the point of input in primary visual cortex of the awake monkey. Further, we have recently found that spontaneous cortical activity is structured into waves traveling across visual area MT, and that these spontaneous waves modulate both the excitability of local networks and the probability of faint stimulus detection. Our results thus indicate that spontaneous and stimulus-evoked waves play active roles in sensory processes. We aim to understand the general computational roles for these waves in upcoming computational and mathematical work.

    View Full Event  
  •  Location: Carney Institute, 164 Angell Street, 4th Floor, Room: Innovation Zone

    Special Seminar: “Mechanisms Underlying Natural and Artificial Modulations of Sensory Representations”- Agostina Palmigiano, Ph.D., Postdoctoral Research Scientist in the Mortimer B. Zuckerman Mind Brain Behavior Institute.

    Neuronal representations of sensory stimuli depend on the behavioral context and associated reward. In the mouse brain, joint representations of stimuli and behavioral signals are present even in the earliest stage of cortical sensory processing. In this work, we propose a parallel between optogenetic and behavioral modulations of activity and characterize their impact on V1 processing under a common theoretical framework. We first infer circuitry from large-scale V1 recordings of stationary animals and demonstrate that, given strong recurrent excitation, the cell-type-specific responses imply key aspects of the known connectivity.

    Next, we analyze the changes in activity induced by locomotion and show that, in the absence of visual stimulation, locomotion induces a reshuffling of activity, which we describe theoretically, akin to what we had found in response to optogenetic perturbation of excitatory cells in mice and monkeys. We further find that, beyond reshuffling, additional cancellation among inhibitory interneurons needs to occur to capture the effects of locomotion. Specifically, we leverage our theoretical framework to infer the inputs that explain locomotion-induced changes in firing rates and find that, contrary to hypotheses of simple disinhibition (inhibition of inhibitory cells), locomotory drives to individual inhibitory cell types largely cancel. We show that this inhibitory cancellation is a property emerging from V1 connectivity structure.

    This work is a first step towards elucidating the disparate and still poorly understood role of non-sensory signals in the sensory cortex, and uncovering the dynamical mechanisms that underlie their effects. Furthermore, it establishes a foundation for future research to explore the relationship between adaptable sensory representations and cognitive flexibility.

    View Full Event  
  •  Location: Carney Institute, 164 Angell Street, 4th Floor, Room: Innovation Zone

    Carney Special Seminar: “The Impacts of Environmental Inference on Decision Strategies”- Tahra Eissa, Ph.D., Postdoctoral Fellow, University of Colorado - Boulder.

    The world around us has a statistical structure that we can leverage to improve our choices. Learning these key features of our environment is therefore useful for optimizing our decision-making strategies, allowing us to balance efficiency with flexibility. In this seminar, I will apply computational models to the study of human behavior to address questions on how we utilize environmental information and the brain mechanisms that support environmental inference. First, I will discuss how humans modulate their decision-making strategies in different environments and show that individuals apply a diverse set of strategies that vary in their complexity, accuracy, and types of observable errors. Second, I will present work on how environmental features can be learned and stored in the brain. Finally, I will briefly address how computational models and human behavior can be combined with human intracranial electrode recordings to directly probe how environmental inference is encoded in the brain. These studies set the groundwork for future investigation into how we update our environmental beliefs and corresponding decision strategies, which can improve physiological understanding of cognition as well as support translational applications for those with impaired cognitive function.

    View Full Event  
  •  Location: Carney Institute, 164 Angell Street, 4th Floor, Room: Innovation Zone

    Special Seminar: The Brain in Motion: “Causes and Dynamics of Drifting Neural Representations”- Shanshan Qin, Ph.D., postdoctoral researcher at the Harvard John A. Paulson School of Engineering and Applied Sciences.

    Recent experiments have revealed that neural population activity associated with stable sensation and action continually changes over days and weeks, a phenomenon called representational drift. To address the origin and dynamics of such drift, I employed the Hebbian/anti-Hebbian network with noisy synaptic updates to dissect the properties of drifting receptive fields during learning. The model reveals how degeneracy and noise generically lead to representation drift during representation learning. The drifting receptive fields of individual neurons can be characterized by a coordinated random walk, resulting in a stable representational similarity of population codes over time. This model recapitulates experimental observations in the hippocampus and posterior parietal cortex and makes several testable predictions. At the end of my talk, I will also discuss the implications of representational drift.
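
    The core idea, that individual tuning can wander while population-level similarity stays fixed, can be illustrated without the full Hebbian/anti-Hebbian model. In this minimal numpy sketch (all sizes and names are arbitrary), a coordinated orthogonal "random walk" applied to a population code scrambles single-neuron responses but leaves the stimulus-by-stimulus similarity matrix unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_orthogonal_step(n, scale=0.05):
    # Small random orthogonal "drift" step: orthogonalize a perturbed
    # identity (antisymmetric perturbation) via QR decomposition.
    a = scale * rng.normal(size=(n, n))
    q, _ = np.linalg.qr(np.eye(n) + a - a.T)
    return q

n_neurons, n_stimuli = 30, 10
Y = rng.normal(size=(n_neurons, n_stimuli))  # day-0 population code
sim0 = Y.T @ Y                               # stimulus-by-stimulus similarity

Y_t = Y.copy()
for _ in range(50):                          # 50 "days" of coordinated drift
    Y_t = random_orthogonal_step(n_neurons) @ Y_t

tuning_change = np.linalg.norm(Y_t - Y)          # single-neuron tuning drifts...
sim_change = np.linalg.norm(Y_t.T @ Y_t - sim0)  # ...similarity is preserved
```

    Because every step is orthogonal, all pairwise inner products between stimulus representations are exactly conserved; the noisy synaptic dynamics in the talk's model bias the drift toward precisely this degenerate subspace.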

    View Full Event  
  •  Location: Carney Institute, 164 Angell Street, 4th Floor, Room: Innovation Zone

    Valentin Wyart (Ecole Normale Supérieure - PSL University, Paris, France)

    Making sense of uncertain environments, a cognitive process modeled across domains as statistical inference, constitutes a difficult yet ubiquitous challenge for human intelligence. Recent research has identified the limited computational precision of human inferences as a surprisingly large contributor to the variability of perceptual and reward-guided decisions made under uncertainty. In this talk, I will review the theoretical and experimental evidence obtained by my group which, taken together, provides key insights into the origin, impact and function of this cognitive noise for human learning and decision-making. Moving beyond the classical description of internal noise as a performance-limiting constraint for cognitive systems, I will present unpublished findings from recurrent neural networks and large datasets of human participants that delineate the adaptation and the emergent benefits of cognitive noise in response to specific forms of uncertainty.

    View Full Event  
  •  Location: Carney Innovation Zone, Room: 164 Angell St., 4th Floor

    Please mark your calendars for the next MRF/BNC Users meeting, which will be Monday, March 13, at noon in the Carney Innovation Zone at 164 Angell St. We hope to see you in person but we will again have a Zoom option for those wishing to tune in remotely.

    This month, Haley Keglovits and Apoorva Bhandari will co-present work from their ongoing project: “Task structure shapes the geometry of control representations in PFC”.

    This meeting will be streamed over Zoom for those unable to participate in-person.
    Lunch will be provided.

    Please RSVP for this event to help us gauge attendance and cater to any food restrictions you may have.

    RSVP: https://forms.gle/XEM9PMmzCaKs2f6eA

    View Full Event  
  •  Location: Sidney E. Frank Hall for Life Sciences, Room: Marcuvitz Auditorium

    Theory of the Multiregional Neocortex: Large-scale Neural Dynamics and Distributed Cognition

    View Full Event  
  •  Location: Brown University

    Scope and Goal
    We can all feel exhausted after a day of work, even if we have spent it sitting at a desk. The intuitive concept of mental effort pervades virtually all domains of human information processing and has become an indispensable ingredient for general theories of cognition. However, inconsistent use of the term across cognitive sciences, including cognitive psychology, education, human-factors engineering and artificial intelligence, makes it one of the least well-defined theoretical constructs across fields.

    The purpose of our two-day workshop is to bridge this gap by (a) offering hands-on tutorials on different computational approaches used to model mental effort and by (b) fostering discussion about the operationalization of mental effort among scientists from different research communities and modeling backgrounds.

    Keynote: Daniel Kahneman (Princeton University)

    List of Speakers (alphabetical order)

    • Danielle Bassett (University of Pennsylvania)
    • Michael Inzlicht (University of Toronto)
    • Yuko Munakata (University of California, Davis)
    • Amitai Shenhav (Brown University)

    List of Tutorial Instructors (alphabetical order)

    • Anastasia Bizyaeva (Princeton University)
    • Alexander Fengler (Brown University)
    • Michael J. Frank (Brown University)
    • Andra Geana (Brown University)
    • Renée S. Koolschijn, Hanneke den Ouden (Radboud University)
    • Randall O’Reilly (University of California, Davis)

    Please visit: https://sites.google.com/view/mental-effort/general-information

    View Full Event  
  •  Location: Zoom & Carney Institute, 164 Angell Street, Room: Innovation Zone

    Read Montague will present a seminar entitled “Decoding human neuromodulatory signaling and its connection to reinforcement learning”. Read will be talking about his latest machine learning methods for decoding sub-second changes in dopamine, norepinephrine, and serotonin neurochemical signals from humans and how they relate to reward-based learning and decision making.

    Limited seating; the Zoom link is now available.

    View Full Event  
  •  Location: TBD

    The 5th Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM2022)

    Over the last few decades, reinforcement learning and decision making have been the focus of an incredible wealth of research spanning a wide variety of fields including psychology, artificial intelligence, machine learning, operations research, control theory, animal and human neuroscience, economics and ethology. Key to many developments in the field has been interdisciplinary sharing of ideas and findings. The goal of RLDM is to provide a platform for communication among all researchers interested in “learning and decision making over time to achieve a goal”. The meeting is characterized by the multidisciplinarity of the presenters and attendees, with cross-disciplinary conversations and teaching and learning being central objectives along with the dissemination of novel theoretical and experimental results. The main meeting will be single-track, consisting of a mixture of invited and contributed talks, tutorials, and poster sessions.

    Confirmed Speakers
    • Josh Tenenbaum, Massachusetts Institute of Technology
    • Yunzhe Liu, University College London
    • Jill O’Reilly, University of Oxford
    • Nao Uchida, Harvard University
    • Melissa Sharpe, University of California, Los Angeles
    • Alexandra Rosati, University of Michigan
    • Frederike Petzschner, Brown University
    • Oriel Feldman-Hall, Brown University
    • Scott Niekum, University of Texas at Austin
    • Satinder Singh Baveja, University of Michigan and DeepMind
    • Stephanie Tellex, Brown University
    • Martha White, University of Alberta
    • Sonia Chernova, Georgia Tech
    • Jeannette Bohg, Stanford University
    • Jakob Foerster, Facebook AI Research

    Stay tuned for updates as the conference gets closer.

    Learn More
    View Full Event  
  •  Location: Zoom

    Center for Computational Brain Science Seminar Series: “Sequences and modularity of dynamic attractors in inhibition-dominated neural networks”

    Carina Curto, Ph.D.
    Professor
    Department of Mathematics
    Pennsylvania State University

    Abstract: Threshold-linear networks (TLNs) display a wide variety of nonlinear dynamics including multistability, limit cycles, quasiperiodic attractors, and chaos. Over the past few years, we have developed a detailed mathematical theory relating stable and unstable fixed points of TLNs to graph-theoretic properties of the underlying network. These results enable us to design networks that count stimulus pulses, track position, and encode multiple locomotive gaits in a single central pattern generator circuit.
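
    A minimal simulation conveys the flavor of these networks. The sketch below uses the combinatorial TLN (CTLN) parameterization from Curto's published work on a directed 3-cycle, a commonly used example of a limit-cycle attractor; the specific numbers are illustrative, not taken from the talk:

```python
import numpy as np

def ctln_weights(edges, n, eps=0.25, delta=0.5):
    # CTLN rule: W[i, j] = -1 + eps if j -> i is an edge of the graph,
    # -1 - delta otherwise, and 0 on the diagonal.
    W = np.full((n, n), -1.0 - delta)
    for j, i in edges:
        W[i, j] = -1.0 + eps
    np.fill_diagonal(W, 0.0)
    return W

def simulate(W, b, x0, dt=0.01, steps=20000):
    # Euler integration of the TLN dynamics dx/dt = -x + [W x + b]_+ .
    traj = np.empty((steps, len(x0)))
    x = np.array(x0, dtype=float)
    for t in range(steps):
        x = x + dt * (-x + np.maximum(W @ x + b, 0.0))
        traj[t] = x
    return traj

W = ctln_weights(edges=[(0, 1), (1, 2), (2, 0)], n=3)  # directed 3-cycle
traj = simulate(W, b=np.ones(3), x0=[0.1, 0.0, 0.0])   # asymmetric start
```

    With inhibition-dominated weights the activity stays bounded and nonnegative, and the cyclic graph structure produces sustained sequential oscillation rather than convergence to a stable fixed point, illustrating the graph-to-dynamics correspondence the talk develops.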

    View Full Event  
  •  Location: Zoom

    Join the Carney Institute for Brain Science, in conjunction with Love Data Week, for a Carney Methods Meetup featuring Ani Eloyan, assistant professor of biostatistics at Brown, who will discuss methods for defining and estimating clinically relevant biomarkers, such as from longitudinal fMRI.

    Carney Methods Meetups are informal gatherings focused on methods for brain science, moderated by Jason Ritt, Carney’s scientific director of quantitative neuroscience. Videos and notes from previous Meetups are available on the Carney Institute website.

    View Full Event  
  •  Location: Sidney E. Frank Hall for Life Sciences, Room: 220

    Oren Shriki, Ph.D.

    Department of Cognitive and Brain Sciences
    Ben-Gurion University of the Negev, Israel
    Abstract

    The critical brain hypothesis proposes that our brain is poised close to the border between two qualitatively different dynamical states. Whereas sub-critical dynamics are characterized by premature termination of activity propagation, super-critical dynamics are associated with runaway excitation. The talk will review evidence from recent years regarding this hypothesis and introduce the concept of neuronal avalanches, spatiotemporal cascades of activity whose sizes obey a power-law distribution. They are observed in a wide range of experiments from small-scale cortical networks to large-scale human EEG and MEG and are considered evidence for critical brain dynamics. The avalanche analysis provides novel measures that reflect the underlying neural gain and are sensitive to changes in the balance of excitatory and inhibitory processes. Consequently, deviations from critical dynamics could serve as neuromarkers for disorders associated with altered balance. The utility of such neuromarkers will be demonstrated in several contexts, including epilepsy, prolonged wakefulness, schizophrenia, and disorders of consciousness.
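
    A critical branching process is the simplest generative picture behind these ideas. In this toy sketch (not the talk's EEG/MEG analysis pipeline), each active unit triggers a Poisson number of descendants with mean sigma, which plays the role of neural gain; at the critical point sigma = 1, avalanche sizes follow the characteristic heavy-tailed distribution:

```python
import numpy as np

rng = np.random.default_rng(2)

def avalanche_size(sigma=1.0, cap=10_000):
    # One avalanche: start with a single active unit; each active unit
    # triggers Poisson(sigma) descendants. sigma < 1 is sub-critical
    # (early termination), sigma > 1 super-critical (runaway excitation).
    active, size = 1, 1
    while active and size < cap:
        active = rng.poisson(sigma * active)  # sum of Poisson offspring
        size += active
    return min(size, cap)  # cap keeps critical avalanches finite

sizes = np.array([avalanche_size() for _ in range(5000)])
```

    At sigma = 1 most avalanches are tiny but rare ones are enormous (a power-law-like tail); nudging sigma away from 1 in either direction collapses the tail, which is why deviations from the critical distribution can serve as a gain-sensitive neuromarker.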

    View Full Event  
  •  Location: Carney Institute for Brain Science, Room: Innovation Zone

    Note: You may also attend this event via Zoom (Meeting ID: 978 5998 6393 | Passcode: 451768). This workshop requires you to be logged into Zoom through your Brown account.

    Carney Methods Meetups are informal gatherings focused on methods for brain science, moderated by Jason Ritt, Carney’s scientific director of quantitative neuroscience. Carlos Vargas-Irwin, assistant professor (research) of neuroscience, and Tommy Hosman, research engineer at BrainGate, will join Ritt in an open discussion of current debates over the validity and interpretation of some leading methods (UMAP, t-SNE) of data dimensionality reduction. While application of dimensionality reduction to neuroscience data is important and ubiquitous, the methods are challenging to understand, with few analytic guarantees on the results. Some recent papers raise questions about whether these methods are doing what practitioners think they are doing.

    Videos and notes from previous meetups are available on the Carney Institute website.

    View Full Event  
  •  Location: Zoom

    Carney Methods Meetups are informal gatherings focused on methods for brain science, moderated by Jason Ritt, Carney’s scientific director of quantitative neuroscience. Mete Tunca, associate director of research systems and services in Brown’s Center for Computation and Visualization, will join Ritt in an open discussion about data storage and management challenges produced by increasingly large experimental data sets, such as produced by fluorescence imaging, multichannel electrode recordings, fMRI, bioinformatics, and behavioral videos.

    Experimentalists need to choose wisely among options ranging from local hard drive stacks to university servers (e.g., Isilon or Stronghold) to commercial cloud providers, but often with limited guidance for the diverse sources and uses of data across brain science.

    Please direct questions to Jason Ritt.

    Notes from previous Meetups are available online.

    Please note, this workshop requires you to be logged into Zoom through your Brown account.

    View Full Event  
  •  Location: Sidney E. Frank Hall for Life Sciences, Room: 220

    Title:  Reverse engineering neural control of behavior in Hydra

    View Full Event  
  •  Location: Zoom

    What does the next generation of scientists think of the future of brain science research? 

    Brown University graduate students and a recent alum will join the Carney Institute on July 27 for an engaging conversation about their research experiences and where they think the field is headed. This event will feature:

    • Kaitlyn Hajdarovic, Ph.D. candidate in the Neuroscience Graduate Program
    • Marc Powell, postdoctoral associate at the University of Pittsburgh Department of Neurological Surgery. Powell received a Ph.D. in biomedical engineering from Brown in January 2021.
    • Jae-Young Son, Ph.D. student in cognitive, linguistic and psychological sciences

    This conversation will be moderated by Diane Lipscombe, Reliance Dhirubhai Ambani Director of the Carney Institute, and Christopher Moore, associate director of the Carney Institute.

    Watch previous Carney Conversations
    View Full Event  
  •  Location: Zoom

    Carney Methods Meetups are informal gatherings focused on methods for brain science, moderated by Jason Ritt, Carney’s scientific director of quantitative neuroscience. Sheridan Center writing associates Shanelle Reilly, MCB Ph.D. candidate, and Meghan Gonsalves, NSGP Ph.D. candidate, will join Ritt in an open discussion of writing and communication strategies, data visualization, and other aspects of preparing scientific manuscripts.

    Graduate students and postdocs are encouraged to send in advance any questions or “case studies” related to their own research to Jason Ritt.

    Notes from previous Meetups are available online.

    Please note, this workshop requires you to be logged into Zoom through your Brown account.

    View Full Event  
  •  Location: Zoom

    Join Carney’s Center for Computational Brain Science (CCBS) on June 29 for a seminar featuring Arvind Kumar, Ph.D., associate professor in the Division of Computational Science and Technology at the KTH Royal Institute of Technology in Stockholm, Sweden.

    Dr. Kumar is a computational neuroscientist studying the dynamics and information-processing properties of neuronal networks. In this talk, he will explore the relationship between network connectivity and network activity, and will extend this analysis to the spatiotemporal dynamics of neuromodulators.

    View Full Event  
  •  Location: Zoom

    Join Carney’s Center for Computational Brain Science (CCBS) on June 10 for a seminar featuring Adam Calhoun, Ph.D., postdoctoral research associate at Princeton University.

    Abstract

    Animals produce behavior by responding to a mixture of cues that arise both externally (sensory) and internally (neural dynamics and states). These cues are continuously produced and can be combined in different ways depending on the needs of the animal. However, the integration of these external and internal cues remains difficult to understand in natural behaviors. To address this gap, Calhoun has developed an unsupervised method to identify internal states from behavioral data, and he has applied it to the study of a dynamic social interaction. During courtship, Drosophila melanogaster males pattern their songs using cues from their partner. This sensory-driven behavior dynamically modulates courtship directed at their partner. Calhoun uses his unsupervised method to identify how the animal integrates sensory information into distinct underlying states. He then uses this to identify the role of courtship neurons in either integrating incoming information or directing the production of the song, roles that were previously hidden. Additionally, Calhoun shows how song is produced by a diverse range of visual cues whose importance changes depending on behavioral context, and he identifies the visual neurons that send this information from the eye into the brain. Calhoun’s results reveal how animals compose behavior from previously unidentified internal states, a necessary step for quantitative descriptions of animal behavior that link environmental cues, internal needs, neuronal activity and motor outputs.

    View Full Event  
  •  Location: Zoom

    Michael S. Goodman ’74 Memorial Seminar Series

    Speaker: Sebastian Musslick, Ph.D. student, Princeton University

    Title: On the Rational Bounds of Human Cognition

    Abstract: Humans are remarkably limited in the number of tasks they can execute simultaneously. This limitation is not only apparent in daily life, it is also a universal assumption of most theories of human cognition. Yet, a rationale for why the human brain is subject to this constraint remains elusive. In this talk, I will draw on insights from neuroscience, psychology and machine learning to suggest that limitations in the brain’s ability to multitask result from a fundamental computational dilemma in neural architectures. Through graph-theoretic analysis, neural network simulation and behavioral experimentation, I will demonstrate that neural systems face a tradeoff between learning efficiency (promoted through the shared use of neural representations across tasks) and multitasking capability (achieved through the separation of neural representations between tasks). Theoretical analyses show that it can be optimal for a neural system to prioritize efficient learning of single tasks at the expense of its ability to execute them simultaneously, across a broad range of conditions. These results suggest that our inability to multitask reflects a rational solution to a fundamental computational dilemma faced by neural architectures. I will demonstrate that this tradeoff can explain a variety of behavioral and neural phenomena related to human multitasking and conclude by outlining consequential computational dilemmas that may help explain other, seemingly irrational constraints on human cognition.
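
    The sharing-versus-separation tradeoff can be caricatured in a few lines (a toy linear network invented for illustration, not Musslick's actual models): routing two tasks through one shared hidden unit causes crosstalk, while separated pathways allow clean simultaneous execution.

```python
# Toy illustration of the multitasking dilemma: two input-output tasks
# routed through a single shared unit interfere, while separated
# pathways execute both tasks at once without crosstalk.
def run(inputs, in_w, out_w):
    # in_w: one weight row per hidden unit; out_w: one weight row per output.
    hidden = [sum(w * x for w, x in zip(row, inputs)) for row in in_w]
    return [sum(w * h for w, h in zip(row, hidden)) for row in out_w]

# Shared architecture: one hidden unit reads both inputs, feeds both outputs.
shared_in = [[1, 1]]
shared_out = [[1], [1]]
# Separated architecture: each task has its own private hidden unit.
sep_in = [[1, 0], [0, 1]]
sep_out = [[1, 0], [0, 1]]

print(run([1, 0], shared_in, shared_out))  # task 1 alone, but output 2 leaks: [1, 1]
print(run([1, 1], sep_in, sep_out))        # both tasks at once, clean: [1, 1]
```

    Sharing speeds learning (one representation serves both tasks) but, as the leak above shows, rules out reliable simultaneous execution.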

    View Full Event  
  •  Location: Zoom

    Please join the Carney Institute for Brain Science for a special seminar featuring Kanaka Rajan, Ph.D., assistant professor at Icahn School of Medicine at Mount Sinai.

    Brown authentication is required.

    View Full Event  
  •  Location: Zoom

    Join Carney’s Center for Computational Brain Science (CCBS) on May 25 for a seminar on “Extracting structure from high-dimensional neural data,” featuring Carsen Stringer, computational neuroscientist and group leader at the Howard Hughes Medical Institute Janelia Research Campus.

    Stringer completed her postdoctoral work with Marius Pachitariu and Karel Svoboda at Janelia, and her Ph.D. work with Kenneth D. Harris and Matteo Carandini at University College London. She develops tools for understanding high-dimensional visual computations and neural representations of behavior.

    Abstract

    Large-scale neural recordings contain high-dimensional structure that cannot be easily captured by existing data visualization methods. We therefore developed an embedding algorithm called Rastermap, which captures highly nonlinear relationships between neurons, and provides useful visualizations by assigning each neuron to a location in the embedding space. Compared to standard algorithms such as t-SNE and UMAP, Rastermap finds finer and higher dimensional patterns of neural variability, as measured by quantitative benchmarks. We applied Rastermap to a variety of datasets, including spontaneous neural activity, neural activity during a virtual reality task, widefield neural imaging data during a 2AFC task, artificial neural activity from a bipedal robot simulation, and neural responses to visual textures. We additionally found that texture identity could be decoded from these neural responses, but that the neural representations of visual texture differed from artificial neural network representations.
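
    The core visualization idea (assigning each neuron a position on a one-dimensional axis so that correlated neurons end up adjacent) can be illustrated with a toy stand-in. This is not the Rastermap algorithm; the reference-correlation "embedding" below is a deliberately crude substitute on invented data:

```python
import math
import random

random.seed(0)

# Toy data: 6 neurons x 50 timepoints driven by two latent signals,
# with group membership interleaved (even neurons follow signal 0).
T = 50
latent = [[random.gauss(0, 1) for _ in range(T)] for _ in range(2)]
neurons = []
for i in range(6):
    src = latent[i % 2]
    neurons.append([s + 0.3 * random.gauss(0, 1) for s in src])

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

# Crude 1D "embedding": each neuron's correlation with a reference neuron.
# Sorting rows of a raster by such a value groups correlated neurons,
# which is the visual effect a raster-sorting method aims for.
embedding = [corr(neurons[0], n) for n in neurons]
order = sorted(range(6), key=lambda i: embedding[i])
print(order)  # even- and odd-indexed neurons form contiguous blocks
```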

    View Full Event  
  • The Interdisciplinary Training in Computational, Cognitive, and Systems Neuroscience (ICoN) is a pre-doctoral program in computational cognitive neuroscience. Funds from this program will support the training of advanced pre-doctoral candidates who are capable of applying a combination of empirical and theoretical approaches that decisively address their scientific questions about the mind and brain.

    On May 7 at 3:30 p.m., join the PIs and current students to learn about the ICoN training program and how to apply to join the next cohort of students.  

    Applications for this year’s program are due May 21, 2021. 

    View Full Event  
  •  Location: Zoom

    Please join Carney’s Center for Computational Brain Science (CCBS) on March 8 for a special seminar on “Untangling brain-wide current flow using neural network models,” featuring Kanaka Rajan, assistant professor of neuroscience at the Icahn School of Medicine at Mount Sinai and the Friedman Brain Institute.

    Abstract:

    The Rajan Lab designs neural network models constrained by experimental data, and reverse engineers them to figure out how brain circuits function in health and disease. Recently, we have been developing a powerful new theory-based framework for “in-vivo tract tracing” from multi-regional neural activity collected experimentally.

    We call this framework CURrent-Based Decomposition (CURBD). CURBD employs recurrent neural networks (RNNs) directly constrained, from the outset, by time series measurements acquired experimentally, such as Ca2+ imaging or electrophysiological data. Once trained, these data-constrained RNNs let us infer matrices quantifying the interactions between all pairs of modeled units. Such model-derived “directed interaction matrices” can then be used to separately compute the excitatory and inhibitory input currents that drive a given neuron from all other neurons. Different current sources, whether within the same region or from other regions, potentially brain-wide, can therefore be de-mixed; collectively, these currents give rise to the population dynamics observed experimentally. Source de-mixed currents obtained through CURBD allow an unprecedented view into multi-region mechanisms inaccessible from measurements alone.
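
    The decomposition step itself is simple linear algebra. In this minimal sketch (toy numbers and region labels invented for the example, not the published CURBD code), the total input current J @ r into each unit splits exactly into per-source-region terms:

```python
# Toy CURBD-style current decomposition: given a trained interaction
# matrix J and unit activity r, the total input current J @ r splits
# exactly into contributions from each source region, J[:, B] @ r[B].
J = [[0.0, 0.5, -0.2, 0.1],
     [0.3, 0.0, 0.4, -0.1],
     [-0.2, 0.1, 0.0, 0.6],
     [0.2, -0.3, 0.5, 0.0]]
r = [1.0, 0.5, -0.5, 2.0]
regions = {"A": [0, 1], "B": [2, 3]}   # hypothetical unit-to-region assignment

def current_from(source):
    """Input current into every unit contributed only by units in `source`."""
    idx = regions[source]
    return [sum(J[i][j] * r[j] for j in idx) for i in range(len(J))]

total = [sum(J[i][j] * r[j] for j in range(len(r))) for i in range(len(J))]
split = [a + b for a, b in zip(current_from("A"), current_from("B"))]
assert all(abs(t - s) < 1e-12 for t, s in zip(total, split))  # decomposition is exact
```

    The modeling effort in CURBD goes into fitting J from data; once J is in hand, de-mixing currents by source is exact by construction, as the assertion shows.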

    We have applied this method successfully to several types of neural data from our experimental collaborators, e.g., zebrafish (Deisseroth lab, Stanford), mice (Harvey lab, Harvard), monkeys (Rudebeck lab, Sinai), and humans (Rutishauser lab, Cedars Sinai), where we have discovered both brain-wide directed interactions and inter-area currents during different types of behaviors. With this framework based on data-constrained multi-region RNNs and CURBD, we ask whether there are conserved multi-region mechanisms across species and identify key divergences.

    View Full Event  
  •  Location: Zoom

    Join the Carney Institute for Brain Science in conjunction with Love Data Week for a Carney Methods Meetup, an informal gathering focused on methods for brain science, on Thursday, February 11, at 3 p.m.

    This event will be moderated by Jason Ritt, Carney’s scientific director of quantitative neuroscience, and feature Samuel Watson, director of graduate studies for the Data Science Initiative.

    Please note, this workshop requires you to be logged into Zoom through your Brown account.

    Notes from previous Meetups are available online.

    View Full Event  
  •  Location: Zoom

    Join Carney’s Center for Computational Brain Science (CCBS) for a seminar on “Practical sample-efficient Bayesian inference for models with and without likelihoods.” This event will feature Luigi Acerbi, Ph.D., an assistant professor at the University of Helsinki.

    Abstract:
    Bayesian inference in applied fields of science and engineering can be challenging because in the best-case scenario the likelihood is a black-box (e.g., mildly-to-very expensive, no gradients) and more often than not it is not even available, with the researcher being only able to simulate data from the model. In this talk, I review a recent sample-efficient framework for approximate Bayesian inference, Variational Bayesian Monte Carlo (VBMC), which uses only a limited number of potentially noisy log-likelihood evaluations. VBMC produces both a nonparametric approximation of the posterior distribution and an approximate lower bound of the model evidence, useful for model selection. VBMC combines well with a technique we (re)introduced, inverse binomial sampling (IBS), that obtains unbiased and normally-distributed estimates of the log-likelihood via simulation. VBMC has been tested on many real problems (up to 10 dimensions) from computational and cognitive neuroscience, with and without likelihoods. Our method performed consistently well in reconstructing the ground-truth posterior and model evidence with a limited budget of evaluations, showing promise as a general tool for black-box, sample-efficient approximate inference — with exciting potential extensions to more complex cases.
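
    The inverse binomial sampling idea can be sketched for a simulator-only Bernoulli model (a toy illustration, not the authors' toolbox code): draw from the simulator until the first match with the observed response; if that takes K draws, the negative partial harmonic sum is an unbiased estimate of the log-likelihood.

```python
import random

random.seed(1)

def ibs_logp(simulate, observed):
    """Inverse binomial sampling: simulate until the first match with the
    observed response. If the match occurs on draw K, the estimator
    -(1/1 + 1/2 + ... + 1/(K-1)) is unbiased for log p(observed)."""
    k = 1
    while simulate() != observed:
        k += 1
    return -sum(1.0 / j for j in range(1, k))

# Toy model: the response is True with probability 0.3, known to the
# simulator but treated as a black box by the estimator.
p_true = 0.3
estimates = [ibs_logp(lambda: random.random() < p_true, True) for _ in range(20000)]
mean_est = sum(estimates) / len(estimates)
print(mean_est)  # close to log(0.3), i.e. about -1.204
```

    A single IBS estimate is noisy; in practice many repeats (or a surrogate model such as VBMC on top) are used to average that noise away.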

    Links:

    • MATLAB toolbox
    • VBMC paper
    • VBMC with noisy likelihoods paper
    • Inverse binomial sampling preprint
    View Full Event  
  •  Location: Zoom

    Join Carney’s Center for Computational Brain Science (CCBS) for a seminar on “Neural reinforcement: re-entering and refining neural dynamics leading to desirable outcomes.” This event will feature Vivek Athalye, Ph.D., postdoctoral researcher at Columbia University.

    Abstract:
    How do organisms learn to do again, on-demand, a behavior that led to a desirable outcome? Dopamine-dependent cortico-striatal plasticity provides a framework for learning behavior’s value, but it is less clear how it enables the brain to re-enter desired behaviors and refine them over time. Reinforcing behavior is achieved by re-entering and refining the neural patterns that produce it. We review studies using brain-machine interfaces which reveal that reinforcing cortical population activity requires cortico-basal ganglia circuits. Then, we propose a formal framework for how reinforcement in cortico-basal ganglia circuits acts on the neural dynamics of cortical populations. We propose two parallel mechanisms: i) fast reinforcement which selects the inputs that permit the re-entrance of the particular cortical population dynamics which naturally produced the desired behavior, and ii) slower reinforcement which leads to refinement of cortical population dynamics and more reliable production of neural trajectories driving skillful behavior on-demand.
    View Full Event  
  •  Location: Zoom

    “Geometry of Object Representation in Visual Hierarchies”

    Haim Sompolinsky, Ph.D.
    The Hebrew University  

    Abstract: Neurons at top stages of the visual hierarchy exhibit high selectivity to object identity as well as tolerance to identity-preserving variables, including location, orientation and scale, suggesting that changes in the object representations from low to high processing stages are related to changes in the geometry of object manifolds. Each manifold consists of the set of population responses to stimuli belonging to the same object.

    In my talk, I will present recent work that elucidates the relation between manifold geometry and object-identity computations. I will discuss two kinds of computations. The first is object classification. I will describe new measures of manifold radius and dimension that predict the ability to support object classification (Chung et al., PRX, 2018). Based on these measures, we characterize the changes in manifold geometry as signals propagate across layers of Deep Convolutional Neural Networks (DCNNs). Recordings from neurons at various stages of the visual system have been similarly analyzed, allowing us to test the correspondence between DCNNs and the visual hierarchy in the visual cortex.

    In recent unpublished work with Ben Sorscher (Stanford), we have studied the ability to learn new objects and object categories from just a few examples (the few-shot learning problem). We show that feature layers in DCNNs exhibit a remarkable ability in few-shot learning of new categories. To explain this performance, we develop a new theory of the geometry of concept formation that delineates the salient geometric features underlying rapid concept formation in artificial and brain sensory hierarchies.
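
    The few-shot setting itself is easy to sketch (a toy nearest-prototype classifier on invented Gaussian "features", not the theory developed in the talk): average a handful of labeled examples of each new category into a prototype, then classify by nearest prototype.

```python
import random

random.seed(5)

def make_examples(center, n, noise=0.5):
    """Synthetic feature vectors scattered around a class center."""
    return [[c + random.gauss(0, noise) for c in center] for _ in range(n)]

# Two hypothetical new categories in a 3D feature space, 5 shots each.
centers = {"cat": [2.0, 0.0, 0.0], "dog": [0.0, 2.0, 0.0]}
shots = {name: make_examples(c, 5) for name, c in centers.items()}

# Prototype = mean of the few labeled examples per class.
prototypes = {
    name: [sum(v) / len(xs) for v in zip(*xs)] for name, xs in shots.items()
}

def classify(x):
    def dist2(p):
        return sum((a - b) ** 2 for a, b in zip(x, p))
    return min(prototypes, key=lambda name: dist2(prototypes[name]))

print(classify([1.8, 0.2, -0.1]))  # the nearby point is assigned to "cat"
```

    The geometric theory asks when such prototype-based generalization succeeds, in terms of manifold radius, dimension, and the separation between class centers.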

    View Full Event  
  •  Location: Zoom

    Please join Carney’s Center for Computational Brain Science (CCBS) on November 18 for a special seminar on “Differential Resilience of Neurons and Networks with Similar Behavior to Perturbation,” featuring Eve Marder, Ph.D., university professor and Victor and Gwendolyn Beinfield Professor of Biology at Brandeis University.

    Please note, you must be logged into Zoom through your Brown account to join this event. 

    Abstract:

    Both computational and experimental results in single neurons and small networks demonstrate that very similar network function can result from quite disparate sets of neuronal and network parameters. Using the crustacean stomatogastric nervous system, we study the influence of these differences in underlying structure on differential resilience of individuals to a variety of environmental perturbations, including changes in temperature, pH, potassium concentration and neuromodulation. We show that neurons with many different kinds of ion channels can smoothly move through different mechanisms in generating their activity patterns, thus extending their dynamic range.

    View Full Event  
  •  Location: Zoom


    The Brown DeepLabCut+ Users Group, hosted by Carney’s Center for Computational Brain Science, will hold its inaugural meeting Thursday, October 8, 2-3 p.m.

    The goal of the user group is to provide a local community for peer support and information sharing, and to guide future decisions on local resources, such as support for running DLC on Oscar.

    In this first meeting, moderated by Jason Ritt, Carney’s scientific director of quantitative neuroscience, and Maria Daigle, research assistant in the Department of Neuroscience, we will discuss the group’s organization and invite all attendees to share any issues they are facing using DLC in their research, along with troubleshooting advice. Going forward, we expect to collect and share technical documentation and provide a forum to match users needing help with local expertise.

    Please respond prior to the meeting to this short questionnaire.

    View Full Event  
  •  Location: Zoom

    Join the Carney Institute for Brain Science for a conversation about how traditional brain recording techniques (MEG/EEG) are coming together with new computational tools to inform new directions for brain science research. 

    This event will be moderated by Diane Lipscombe, Reliance Dhirubhai Ambani Director of the Carney Institute, and Christopher Moore, associate director of the Carney Institute, and it will feature Stephanie Jones, associate professor of neuroscience at Brown University, and Frederike Petzschner, who will join the Carney Institute this year as a fellow. 

    View Full Event  
  •  Location: Zoom (links will be provided)

    This two-week workshop, organized by Carney’s Center for Computational Brain Science, will provide the basic tools for understanding, developing and applying models to brain science questions, from high-level cognition (how do I choose where to eat lunch?), to neural mechanisms (how do our neurons decide whether an animal is a dog or a lion?).

    This workshop is designed for researchers across fields, backgrounds and levels of experience: computation “novices” with no experience, as well as those with more computational experience who have not yet mastered the science of model selection and parameter estimation, or who wish to learn more about specific classes of models.

    Week 1

    Week 1 will cover methods and challenges of using computational models for hypothesis testing and quantitative fitting of behavioral data and brain-behavior relationships. Topics include model validation and selection, posterior predictive checks, maximum likelihood, hierarchical estimation, neural regressors, etc. We will have daily lectures and discussion, as well as hands-on coding tutorials, and advanced sessions providing a deeper understanding of complex modeling topics, pitfalls and concepts, for participants already familiar with basic techniques.
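
    As a flavor of the Week 1 material, here is a minimal maximum-likelihood fit of a one-parameter choice model (a hypothetical example, not workshop code): simulate choices from a logistic model with a known sensitivity, then recover it by grid search over the log-likelihood.

```python
import math
import random

random.seed(2)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy behavioral model: on each trial the subject picks option 1 with
# probability sigmoid(beta * value_difference). Simulate with known beta.
true_beta = 2.0
diffs = [random.uniform(-1, 1) for _ in range(2000)]
choices = [random.random() < sigmoid(true_beta * d) for d in diffs]

def log_likelihood(beta):
    ll = 0.0
    for d, c in zip(diffs, choices):
        p = sigmoid(beta * d)
        ll += math.log(p if c else 1.0 - p)
    return ll

grid = [b / 10 for b in range(1, 51)]   # candidate betas 0.1 .. 5.0
best = max(grid, key=log_likelihood)
print(best)  # lands near the generating value of 2.0
```

    Real analyses replace the grid with an optimizer, add hierarchical structure across subjects, and compare candidate models with the selection tools covered in the workshop.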

    Week 2

    In Week 2, participants will have a chance to participate in a collaborative modeling challenge, using a novel dataset that integrates across multiple aspects of cognition and perception, including cross-species neural data. Prizes will be given for models with best predictive power, rigor, creativity and innovation.

    Computational experience is not required.

    For details on last year’s workshops, visit the Center for Computational Brain Science website. View last year’s syllabus. We will cover most of the same basic topics, with a few tweaks and additions (to be determined based on participant input).

    Participation is limited to 20. Please use this form to sign up.

    View Full Event  
  •  Location: Zoom

    Join the Carney Institute for a weekly informal gathering on methods for brain science, featuring rotating topics selected by the Brown brain science community. Vote for your preferred topic using this form.

    This week’s topic is animal behavioral monitoring. We will be joined by David Sheinberg, professor of neuroscience at Brown.

    Please note, this workshop requires you to be logged into Zoom through your Brown account. Click to learn more.

    View Full Event  
  •  Location: Zoom

    Join the Carney Institute for a weekly informal gathering on methods for brain science, featuring rotating topics selected by the Brown brain science community. Vote for your preferred topic using this form. 

    This week’s topic is Deep sequencing platforms/technologies. We will be joined by Christoph Schorl, assistant professor of biology (research) at Brown and facility director of the Brown Genomics Facility. 

    Please note, this workshop requires you to be logged into Zoom through your Brown account. Click to learn more.

    View Full Event  
  •  Location: Zoom

    The first Brown Unconference is a remote gathering which provides an opportunity for researchers across campus to come together and explore advances in computational sciences at the intersection of data science and AI with other sciences including biology, physics, chemistry, engineering, neuroscience and cognitive science.

    Our goal is to foster an accessible and welcoming environment open to members of the Brown community across all disciplines and levels of expertise. We want to celebrate Brown’s uniqueness and help foster collaborations across disciplines. Students and postdocs are especially encouraged to present their research to the broader community. The Unconference will include opportunities to meet researchers across scientific interests, hear from invited speakers, and receive feedback from diverse points of view.

    We encourage submissions at all points of the scientific process.

    Important Dates

    • Abstract Submission Deadline: June 15, 2020
    • Conference: June 29-30, 2020

    Schedule
    We will host the following events distributed across the two days of the conference. Please find instructions on how to get involved in the next section. The detailed schedule will be published closer to the conference.

    • Lightning talks: 2-3 minute talks (2 slides max) – for those who may be in the early stages of their research, to introduce themselves, share their interests, pitch projects, or simply network with other members of the conference.
    • Short talks: 12 minute talks – for those ready to present their research in an informal setup. Presented research can be in progress and data can be preliminary.
    • Networking & Mind Match: Tailored social and networking programming. The unconference is a safe space to share ideas so feel free to send work in progress. Members of the Brown community from all academic backgrounds are encouraged to submit an abstract!

    Register to Attend
    Register to attend the conference virtually. By registering, you will receive a notification when we announce the schedule with links to the Crowdcast pages.

    Submit an Abstract
    If you’re interested in talking about your research, please submit an abstract.

    View Full Event  
  •

    Join the Carney Institute for a weekly informal gathering on methods for brain science, featuring rotating topics selected by you, the Brown brain science community! Please vote for next week’s topic using this form.

    This week’s topic is “Dynamical Models of Neural Systems,” presented by Bjorn Sandstede, Ph.D., professor of applied mathematics and director of the Data Science Initiative.

    Please note, this workshop requires you to be logged into Zoom through your Brown account. Click to learn more.

    View Full Event  
  •  Location: Zoom

    “Information and randomization in exploration and exploitation,” featuring:

    Robert Wilson, Ph.D.

    Assistant Professor

    University of Arizona

    Abstract: The explore-exploit dilemma is a fundamental behavioral dilemma faced by any organism that can learn. Should we explore new options in the hopes of learning something new, or exploit options we already know to be good? In this talk I will present evidence that people use two distinct strategies for solving the explore-exploit dilemma: directed exploration, in which information seeking drives exploration by choice, and random exploration, in which behavioral variability drives exploration by chance. In addition, I will present initial evidence showing that these two types of exploration rely on dissociable neural systems.
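
    The two strategies can be caricatured in code (toy values and parameters invented for illustration, not the experimental models from the talk): directed exploration adds an uncertainty bonus so exploration happens by choice, while random exploration injects softmax variability so it happens by chance.

```python
import math
import random

random.seed(3)

# Two toy strategies for a 2-armed bandit.
values = [0.6, 0.4]   # current value estimates (hypothetical)
counts = [10, 2]      # how often each option has been sampled

def directed_choice(bonus=1.0):
    # UCB-style: deterministic, biased toward the less-sampled option.
    scores = [v + bonus / math.sqrt(n) for v, n in zip(values, counts)]
    return scores.index(max(scores))

def random_choice(temperature=0.2):
    # Softmax: stochastic, exploration by chance rather than by choice.
    exps = [math.exp(v / temperature) for v in values]
    z = sum(exps)
    r, acc = random.random(), 0.0
    for i, e in enumerate(exps):
        acc += e / z
        if r < acc:
            return i
    return len(exps) - 1

print(directed_choice())  # picks option 1: its information bonus outweighs the value gap
```

    Behaviorally, the two strategies leave different signatures: an information bonus shifts choices systematically toward uncertain options, while softmax noise scales choice variability with the value difference.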

    Contact tiantian_li1@brown.edu for the Zoom meeting password.

    View Full Event  
  •  

    Anna Schapiro, Ph.D.

    University of Pennsylvania

    “Learning distributed representations in the human brain”

     Join via Zoom: https://brown.zoom.us/j/95275026350

    Please note, this seminar requires you to be logged into Zoom through your Brown account. Click to learn more.

    Abstract: The remarkable success of neural network models in machine learning has relied on the use of distributed representations — activity patterns that overlap across related inputs. Under what conditions does the brain also rely on distributed representations for learning? There are benefits and costs to this form of representation: it allows rapid, efficient learning and generalization, but is highly susceptible to interference. We recently developed a neural network model of the hippocampus that proposes that one subregion (CA1) may employ this form of representation, complementing known pattern-separated representations in other subregions. This provides an exciting domain to test ideas about learning with distributed representations, as the hippocampus learns much more quickly than the neocortical areas that have often been proposed to contain these representations. I will present modeling and empirical work that provide support for the idea that parts of the hippocampus do indeed learn using distributed representations. I will also present ideas about how hippocampal and neocortical areas may interact during sleep to further transform these representations over time.
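
    The interference cost of overlapping representations can be seen in a minimal Hebbian associator (a textbook-style toy with invented patterns, not the hippocampal model described in the talk): with orthogonal, pattern-separated inputs the first memory survives a second association; with overlapping, distributed inputs it is corrupted.

```python
# One-layer Hebbian associator: learn A -> target, then B -> another target.
def hebbian_weights(pairs):
    n = len(pairs[0][0])
    W = [[0.0] * n for _ in range(n)]
    for x, y in pairs:
        for i in range(n):
            for j in range(n):
                W[i][j] += y[i] * x[j]   # outer-product learning rule
    return W

def recall(W, x):
    return [sum(W[i][j] * x[j] for j in range(len(x))) for i in range(len(W))]

A_orth, B_orth = [1, 0, 0, 0], [0, 1, 0, 0]   # pattern-separated inputs
A_dist, B_dist = [1, 1, 0, 0], [0, 1, 1, 0]   # distributed, overlapping inputs
tA, tB = [1, 0, 0, 0], [0, 0, 0, 1]

W_orth = hebbian_weights([(A_orth, tA), (B_orth, tB)])
W_dist = hebbian_weights([(A_dist, tA), (B_dist, tB)])
print(recall(W_orth, A_orth))  # recovers tA exactly
print(recall(W_dist, A_dist))  # contaminated by the second association
```

    This is the cost side of the tradeoff in the abstract; the benefit (generalization across overlapping inputs) comes from exactly the same overlap.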

    View Full Event  
  •  Location: 164 Angell Street, Room: 4th Floor Innovation Space

    Nowadays the technology to create video games and impressive photorealistic scenes is within everyone’s reach. Frameworks such as the Unity 3D Engine are easy to understand and can be used with or without coding experience. So why not use it to visualize data? You will learn the basics of game object components, UI design, and concepts of volume rendering. Just bring your laptop with the Unity Engine installed.

    View Full Event  
  •  Location: 164 Angell Street, Room: 4th Floor Innovation Space

     

    This workshop will provide a high-level overview of the main areas in machine learning, with emphasis on supervised and unsupervised ML. We will review the strengths and limitations of ML and describe the bias-variance tradeoff, one of the most important concepts in ML. Finally, we will review the main steps in a typical ML workflow and the most common problems to avoid.
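
    The bias-variance tradeoff can be demonstrated with a toy k-nearest-neighbor regression (an illustrative example on synthetic data, not workshop material): k = 1 memorizes training noise (low bias, high variance), while a larger k averages the noise away at the cost of some bias.

```python
import random

random.seed(4)

def f(x):
    return x * x   # true underlying function

def make_data(n):
    xs = [random.uniform(0, 1) for _ in range(n)]
    return [(x, f(x) + random.gauss(0, 0.3)) for x in xs]

train, test = make_data(100), make_data(200)

def knn_predict(x, data, k):
    nearest = sorted(data, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in nearest) / k

def mse(data, k):
    # Predictions always come from the training set; `data` supplies targets.
    return sum((y - knn_predict(x, train, k)) ** 2 for x, y in data) / len(data)

print(mse(train, 1), mse(test, 1))    # zero training error, large test error
print(mse(train, 15), mse(test, 15))  # training error rises, test error falls
```

    The gap between training and test error at k = 1 is the variance cost of an overly flexible model; the nonzero training error at k = 15 is the bias cost of a stiffer one.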

    View Full Event  
  •  Location: 164 Angell Street, Room: 4th Floor Innovation Space

    GitHub Actions is a relatively new service by GitHub that allows you to run tests, deploy code, and more, all from within your repo. This workshop will cover some of the basic use cases of GitHub Actions, as well as some of the more creative ways it can be used to automate your workflows. Attendees should be familiar with Git and GitHub.

     

    Instructor: Mary McGrath 

    View Full Event  
  •  Location: 164 Angell Street, Room: 4th Floor Innovation Space

    Intro to PyTorch, by Minju Jung

     

    Nowadays, PyTorch is one of the most popular deep learning libraries. In this tutorial, I will cover the basic operations of PyTorch and how to build a neural network model for image classification. Basic knowledge of deep learning and coding experience with Python are required.

    View Full Event  
  •  Location: 164 Angell Street, Room: 4th Floor Innovation Space

    Data Science Computation and Visualization Workshop

     

    EXPLORATORY DATA ANALYSIS WITH PANDAS IN PYTHON, PART TWO with Andras Zsom

    Exploratory data analysis (EDA) is the first step of any data science project. In the second part of this pandas tutorial, I’ll walk through various visualization types you can use to better understand the properties of your data at a glance using pandas. Coding experience with Python is required, but no experience with the pandas package is necessary to follow the tutorial.

     

    Friday 2/28 @ 12pm

    Carney Innovation Space, 4th Floor

    164 Angell Street

    Pizzas and sodas will be served. Sponsored by the Data Science Initiative and organized by the Center for Computation and Visualization.

    View Full Event  
  •  Location: 164 Angell Street, Room: 4th Floor Innovation Space

    Data Science Computation and Visualization Workshop

     

    EXPLORATORY DATA ANALYSIS WITH PANDAS IN PYTHON, PART ONE with Andras Zsom

    Exploratory data analysis (EDA) is the first step of any data science project. In the first part of this pandas tutorial, I’ll walk through how to read CSV, Excel, and SQL data into a pandas data frame, how to select specific rows and columns based on index or condition, and how to merge and append various data frames. Coding experience with Python is required, but no experience with the pandas package is necessary to follow the tutorial.
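
    A minimal sketch of those Part One operations (invented data, not the tutorial's actual materials): read two small CSV tables, select rows by condition, and merge the frames on a key.

```python
import io

import pandas as pd

# Two tiny in-memory "files" standing in for real CSVs.
csv_a = io.StringIO("subject,score\ns1,10\ns2,15\ns3,12")
csv_b = io.StringIO("subject,group\ns1,control\ns2,treatment\ns3,control")

a = pd.read_csv(csv_a)
b = pd.read_csv(csv_b)

high = a[a["score"] > 11]             # row selection by condition
merged = a.merge(b, on="subject")     # join the two tables on a key
control_mean = merged.loc[merged["group"] == "control", "score"].mean()

print(high["subject"].tolist())  # rows with score > 11: ['s2', 's3']
print(control_mean)              # (10 + 12) / 2 = 11.0
```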

     

    Friday 2/14 @ 12pm

    Carney Innovation Space, 4th Floor

    164 Angell Street

    Pizzas and sodas will be served. Sponsored by the Data Science Initiative and organized by the Center for Computation and Visualization.

    View Full Event  
  •  Location: 164 Angell Street, Room: 4th Floor Innovation Space

    Data Science Computing and Visualization Workshop (DSCoV)

    Topic: DataLad
    Instructor: Yaroslav Halchenko


    DataLad provides a data portal and a versioning system for everyone: DataLad lets you have your data and control it too. For details, see https://www.datalad.org/.

    Friday, November 22, 12:00 PM
    164 Angell Street, 4th Floor Innovation Space
    Organized by CCV; Sponsored by DSI

    View Full Event