Danielle S. Bassett, Randy M. Bruno, Elizabeth A. Buffalo, Michael E. Coulter, Hermann Cuntz, Stanislas Dehaene, James J. DiCarlo, Pascal Fries, Karl J. Friston, Asif A. Ghazanfar, Anne-Lise Giraud, Joshua I. Gold, Scott T. Grafton, Jennifer M. Groh, Elizabeth A. Grove, Saskia Haegens, Kenneth D. Harris, Kristen M. Harris, Nicholas G. Hatsopoulos, Tarik F. Haydar, Takao K. Hensch, Wieland B. Huttner, Matthias Kaschube, Gilles Laurent, David A. Leopold, Johannes Leugering, Belen Lorente-Galdos, Jason N. MacLean, David A. McCormick, Lucia Melloni, Anish Mitra, Zoltán Molnár, Sydney K. Muchnik, Pascal Nieters, Marcel Oberlaender, Bijan Pesaran, Christopher I. Petkov, Gordon Pipa, David Poeppel, Marcus E. Raichle, Pasko Rakic, John H. Reynolds, Ryan V. Raut, John L. Rubenstein, Andrew B. Schwartz, Terrence J. Sejnowski, Nenad Sestan, Debra L. Silver, Wolf Singer, Peter L. Strick, Michael P. Stryker, Mriganka Sur, Mary Elizabeth Sutherland, Maria Antonietta Tosches, William A. Tyler, Martin Vinck, Christopher A. Walsh, Perry Zurn
This 27th Ernst Strüngmann Forum was proposed by Wolf Singer, Terry Sejnowski, and Pasko Rakic to evaluate how research into the cerebral cortex has progressed over the last three decades. The starting point was a Dahlem Workshop on the neurobiology of neocortex (Rakic and Singer 1988), followed by the 5th Ernst Strüngmann Forum on dynamic coordination in the brain (von der Malsburg et al. 2010). What trajectories did research take? Which questions are currently being confronted, and what is needed to address these now?
This volume synthesizes the resulting discourse that took place in Frankfurt, Germany, from April 8–13, 2018, and comprises two types of contributions. Specific aspects of the theme are presented in chapters that were originally drafted before the Forum. These “background papers” have since been revised, based on extensive peer review, and offer an up-to-date assessment of these topics. In Chapters 5, 9, 13, and 17, the working groups provide an overview of their multifaceted discussions. Edited to ensure accessibility, these chapters should not be understood as proceedings or consensus statements. Their intent is to summarize perspectives, expose diverging opinions and remaining open questions, and highlight areas for future enquiry.
Each Forum creates its own unique dynamics and puts demands on all who participate. Every invitee played an active role throughout this process, and for their efforts, I wish to thank them all. I extend a special word of appreciation to the Program Advisory Committee, to the authors and reviewers of the background papers, as well as to the moderators of the individual working groups (Pasko Rakic, Peter Strick, Jennifer Groh, and David Poeppel). Special recognition goes to the rapporteurs of the working groups (Debra Silver, David Leopold, Kenneth Harris, and Lucia Melloni), for drafting a report during the Forum and finalizing it in the months thereafter is never a simple matter. Finally, I extend my sincere appreciation to Wolf Singer, Terry Sejnowski, and Pasko Rakic, whose commitment and enthusiasm for science were essential to the entire endeavor.
To conduct its work, the Ernst Strüngmann Forum relies on institutional stability and an environment that encourages free thought. The generous support of the Ernst Strüngmann Foundation, established by Dr. Andreas and Dr. Thomas Strüngmann in honor of their father, enables the Ernst Strüngmann Forum to serve science and pursue its mandate: “to expand knowledge in basic science and identify directions for future research.” In addition, I wish to acknowledge the work of our Scientific Advisory Board, the supplemental backing provided by the German Science Foundation, and the support that we receive from the Frankfurt Institute for Advanced Studies.
Breaking new intellectual ground is never easy, and it can be difficult to set aside long-held views. Yet the will to reexamine the past in the quest to identify future research strategies is most invigorating. On behalf of everyone involved, I hope this volume will be successful in guiding and inspiring further enquiry into the cerebral cortex.
As science works to address any number of complex problems, a certain measure of humility must accompany its quest. Viewed over time, it is clear that myriad intricacies are often undervalued, as our collective wisdom and collaborative efforts have left many issues unresolved. Although ultimate answers may be rare, this should not undercut the process of discovery or diminish the measurable progress that has been, or is currently being, made. It simply puts into context a truism: Science is an iterative process. As knowledge expands, each step forward requires us to test the concepts and ideas that emerge. To do this may require us to develop new methods or tools, which in turn may lead us to uncover completely new aspects of the problem that had hitherto escaped attention, thus bringing us back to a point where we need to evaluate, again, where things stand.
So it is, and has been, with our quest to understand the cerebral cortex.
Three decades ago, two of us (Pasko Rakic and Wolf Singer) chaired a Dahlem Workshop in Berlin on the neurobiology of neocortex. This gathering brought together forty distinguished neuroscientists from comparative and evolutionary biology, developmental neurobiology, neuroanatomy, neurophysiology, and behavioral neuroscience for an in-depth discussion of the cerebral cortex and an assessment of current research. The motivation behind this Dahlem Workshop was the realization that although research had advanced system by system and yielded an immense amount of data, the underlying rules and principles were defined for, and understood in, separate research areas, thus complicating communication and cross-disciplinary research. What was clearly lacking was an overarching theory of cortical organization—one that could account for general principles within particular areas as well as for cooperative interactions between cortical regions and cross-system generalities. The resulting book (Rakic and Singer 1988), shaped by extensive peer review, captured the conceptual understanding of the time and stimulated future research in developmental, cellular, functional, and cognitive neuroscience.
Years later, at an annual meeting of the Society for Neuroscience, we started to reflect on how the field had changed since that Berlin meeting: What seminal discoveries had actually been made? Which questions remained unanswered, and what might be needed to address these now? Our discussions led us to explore whether it might be worthwhile to convene another group of experts to assess where things currently stand, in an effort to equip research with the conceptual means to move forward. Marked by the emergence of completely new disciplines, several key areas demonstrated the extent to which research had expanded dramatically over the past three decades:
Progress in genetics and molecular biology had revolutionized neuroscientific approaches in virtually all domains, from investigations of development all the way to studies of psychiatric conditions.
The transfection of neurons and glial cells with genetically encoded marker molecules and the development of transgenic animal models had permitted comprehensive analyses of the brain’s connectome, massive parallel recording of neuronal activity at the cellular level, as well as cell-specific interference with neuronal activity.
The advent of noninvasive imaging technologies and methods to stimulate selected regions of the human brain had boosted the field of cognitive neuroscience.
The availability of powerful and affordable computational resources now allows us to address the large data sets produced through advanced electrophysiological and optical recording methods.
Last, but not least, the rapidly growing field of computational neuroscience enables us, for the first time, to test the validity of theories and concepts through simulation experiments that are able to cope, although still in a rudimentary way, with the mind-boggling complexity and dynamics of neuronal interactions.
This progress convinced us of the necessity for a new collaboration, yet to do justice to these novel developments, the scope of expertise needed to be broadened. We found a willing partner in Terry Sejnowski, who worked with us to develop a proposal for a forum that would explore the extent to which existing data could be embedded in unifying conceptual frameworks of the neocortex.
As the reader may be aware, major changes in 2006 impacted the Dahlem Workshops, and the institution no longer exists. Its guiding spirit, philosophy, and approach, however, continue to flourish in Frankfurt under the auspices of the Ernst Strüngmann Forum (for an overview of this transition, see Singer 2016:475–476). Briefly, the Ernst Strüngmann Forum creates an environment that ensures open discourse and encourages divergent ideas. Long-established perspectives are questioned and disciplinary idiosyncrasies exposed. Consensus is never a goal. Instead, topics are examined from multiple perspectives: existing gaps in knowledge are exposed, key questions formulated, and ways of filling such gaps (through future research) are proposed. From April 8–13, 2018, the 27th Ernst Strüngmann Forum was convened in Frankfurt, Germany, in which 48 experts from diverse areas of neuroscience participated.
Even a week-long brainstorming encounter of this kind is unable to do complete justice to the state-of-the-art research that has unfolded over three decades, much less provide a comprehensive summary. Far more time and effort would be needed just to review the immense amount of data that has accumulated in virtually every domain of research into the cerebral cortex. What could be perceived as a “shortcoming,” however, actually gives way to an important insight: In 1987, at the Dahlem Workshop, participants were by and large aware of the developments in the various disciplines and were able to understand the concepts and terminologies used in these fields. In 2018, at the Ernst Strüngmann Forum, transdisciplinary dialogue proved much more difficult: the plethora of abbreviations that characterizes the language of geneticists and molecular biologists, the mathematical descriptions of complex dynamics, and the highly differentiated taxonomies used in cognitive psychology posed substantial challenges to everyone.
At Dahlem, theories on cortical processing were still dominated by behaviorist concepts, which viewed the brain primarily as a stimulus-response machine. Accordingly, emphasis was placed on serial processing in feedforward architectures. The assumption was that detailed analysis of single-cell responses across the processing hierarchy, all the way up to executive centers, should ultimately permit comprehensive understanding of the system. Hence, the field was mainly interested in describing the gradual transformation of neuronal response properties from sensory surfaces across the hierarchy of cortical processing levels to executive organs. Common concepts for the investigation of sensory processes were feature-selective receptive fields, filter operations to improve signal-to-noise ratios and reduce redundancy, columns as functional units, maps for the orderly arrangement of neighborhood relations, representations of cognitive objects by responses of individual neurons, and (on the executive side) motor response fields, command neurons, and population vectors. As all information was assumed to be encoded in the discharge rate of neurons, the gold standard was the single-unit recording. Signals reflecting the temporal coordination of population activity, such as field potentials and EEG, were considered too coarse, and it was felt that they provided scant additional information. With a few notable exceptions (see below), these concepts are implemented in the architecture of perceptrons and Hopfield networks as well as their recent extension in deep learning networks. Because of the astonishing performance of these artificial systems in admittedly restricted domains, and because the architecture of these artificial neuronal networks shares similarities with some of the organizational features of the cerebral cortex, one might assume that we now possess valid and explicit models of brain function and hence are close to understanding how the cortex works.
At the Ernst Strüngmann Forum, it became clear that this optimistic view is not warranted; many of the concepts favored during the Dahlem Workshop needed to be abandoned or substantially modified due to novel insights that had since been gained. Importantly, we realized that we are probably further away from a comprehensive understanding of the functions of the cerebral cortex than we imagined thirty years ago. As always in the empirical sciences, technological advances go hand in hand with conceptual developments. In addition to the still valid approach of feedforward processing, the comprehensive study of connectomics (both at the level of intracortical microcircuitry and inter-areal connections) forced us to consider
functional implications of recurrent coupling within and between cortical areas,
the immense density of information exchanged among processing streams,
the flat and often reversed hierarchy of putative interactions, and
the distributedness captured by graph-theoretical terms such as rich-club or small-world networks.
These anatomical features are reflected by functional features that could, in part, have been discovered already with single-cell recordings at the time of the Dahlem Workshop. One example concerns the concept of the invariant feature-selective receptive field: when feature-selective neurons were exposed to complex patterns, in particular in awake, performing animals, it became obvious that their responses are strongly sensitive to context, behavioral state, and top-down influences resulting from predictions, expectancies, and attention. It had been recognized, even then, that neuronal responses were variable and not always canonical, in particular in behaving animals, but this variability was attributed to noise fluctuations. The experimenter averaged over trials to extract the “essential” information, just as the brain was supposed to average across a population of similar neurons. The new structural data also challenged the concept of the column as a functional unit: they suggest, at least outside input layer 4, that horizontal coupling is reciprocal and continuous, even across boundaries between areas. Finally, the flat hierarchy and dense interconnectivity make it appear highly unlikely that areas operate in isolation and serve only as links in a serial processing stream.
Major arguments for an extension and reinterpretation of classical concepts came from experiments in which researchers recorded from more than one neuron at a time. It soon became clear that the fluctuations of neuronal responsiveness were correlated. Some maintained that these correlations reflect noise, hence the term “noise fluctuations.” Others, however, observed that correlated firing contained information, as it depended on stimulus configuration and behavioral context. Parallel recordings from electrode arrays also revealed puzzling but well-coordinated dynamics of cell populations. It was observed that individual neurons can engage in oscillatory patterning of their responses and that these temporally structured responses can synchronize with amazing precision in the millisecond range, depending on stimulus configurations, central states, and top-down signals. Furthermore, these oscillations are organized as traveling waves across the cortex and are both generated spontaneously and induced by stimuli. After the discovery of these coordinated population dynamics in the cerebral cortex, very similar oscillatory phenomena and traveling waves were observed in another structure that shares essential features of recurrency and is connected with the cerebral cortex: the hippocampus (Muller et al. 2018). These observations led to a renaissance of interest in dynamics and in recording methods able to capture the spatially and temporally coordinated (synchronized) activity of local cell populations: multiunit activity (MUA), intracortical local field potentials (LFPs), electrocorticography recordings from cortical surface electrodes, and, at still coarser spatial and temporal scales, EEG, MEG, and fMRI signals.
Together with massive parallel recordings of single-cell activity, these approaches revealed a surprising degree of temporal coordination of distributed neuronal activity, both within and across cortical areas, including the nesting of oscillatory activity across distinct frequency bands. Finally, measurements of coherence allowed researchers to identify the stimulus- and task-dependent formation and dissolution of widespread functional networks and to track the flexible routing of communication between cortical areas. Although the oscillatory patterning of EEG signals in distinct frequency bands was well established at the time of the Dahlem Workshop, and although it was known that these coarse signals reflect synchronized activity, these dynamic signatures of cortical processes were not considered in a functional context: they were merely taken as a state variable correlated with changes in sleep stages and arousal levels. One likely reason is that in the 1980s, most cortical physiology focused on the visual system, and it was thought that processing of (stationary) visual patterns required no computations in the temporal domain. Since then, however, much research has been devoted to the auditory system, speech recognition, short-term memory, motor control, and spatial navigation, and interest in dynamic processes has grown accordingly. A role for precisely timed neuronal activity was also recognized once it became clear that mechanisms of use-dependent synaptic plasticity are exquisitely sensitive to precise timing relations between pre- and postsynaptic activity, both during development and in adult learning. In parallel, computational models became more dynamic, especially those that analyze the computational potential of recurrently coupled networks.
At this Ernst Strüngmann Forum there appeared to be a broad consensus that neuronal information processing capitalizes on the spatial as well as the temporal dimensions of the brain: not only the frequency but also the timing of discharges is informative. However, we are still at the very beginning of our attempts to explore the puzzling complexity of the dynamics that emerge from delay-coupled neuronal networks and to figure out whether and, if so, how the brain actually uses the exceedingly high-dimensional state space provided by these dynamics for computation and the storage of information. One possibility is that the brain exploits these dynamics to define relations that comply with the time-sensitive learning rules for the processing of temporally structured stimuli (the processing of sequences and language) as well as for the realization of generative functions such as are required in predictive coding. In this context, it was noted as surprising that theories of cortical function took so long to incorporate concepts of pattern generation and dynamic routing, as these had long been present in the fields studying pattern generators in invertebrates and lower vertebrates.
The new evidence on the structural and functional organization of the cerebral cortex suggests that current concepts have to be considerably extended to do justice to the complexity and power of cortical computations. There was consensus that we have to learn to cope with the high-dimensional, nonlinear dynamics of the unimaginably complex interactions among the neurons of cortical networks, and that we will need new tools (e.g., machine learning) to decipher the information content in high-dimensional activity vectors as well as new mathematical instruments to analyze and interpret the trajectories of network states. Concerns were also expressed with respect to the requirement to provide causal evidence for the relations between neuronal activity and behavior. While new methods such as optogenetics and DREADDs permit cell-specific manipulation of neuronal activity, interference with the activity of nodes in a highly interconnected system may have uncontrollable consequences other than those intended. This may force the field to relax the criteria for the establishment of causal relations and in certain cases be satisfied with correlative evidence.
While the new data on connectomics and dynamics have precipitated a shift in concepts and paradigms, currently raising more questions than they answer, the great advances in genetics and molecular biology have dramatically enhanced the resolution of investigations of developmental processes. The basic concepts involved in phylogenetic and ontogenetic development, formulated at the time of the Dahlem Workshop, seem to have passed the test of time. Still, much more is known now about the genetic and molecular networks that determine the birth, division cycles, migration paths, and differentiation steps of the stem cells giving rise to excitatory and inhibitory neurons. Among the numerous new insights into the mechanisms determining the fate of precursor cells were the notions that inhibitory interneurons continue to be integrated into cortical circuitry during early postnatal development, that primates possess special mechanisms to increase neuron numbers in supragranular layers, and that genes controlling the overall volume of the neocortex have been identified. Since participants at this Forum conduct research on a variety of animal models, the considerable species-specific differences were evident. For example, although radial glial cells in developing rodents are an excellent model to study some aspects of cortical development, the equivalent cells in primates (including humans) express specific genes and molecules and possess certain functional capacities that are absent in all subprimate species analyzed thus far. The difference between the primary visual cortex in primates and nonprimates is obvious. Likewise, rodents do not even possess some of the cytoarchitectonic and functional areas found in humans (e.g., dorsal prefrontal association cortex, Broca's and Wernicke's areas), areas that have a distinct neuronal composition and pattern of connections. Thus, the development, anatomy, and function of some human-specific cortical features can only be studied in humans.
In conclusion, and in keeping with the overall nature of science, this Forum was a sincere attempt to understand the major developments that have taken place in neocortex research over the last thirty years, ever with an eye toward the future. It is clear that a large number of methodological breakthroughs in all disciplines of the life sciences drove progress forward, that the analysis of massive new data (especially the big data on connectomics and molecular diversity) is reliant on powerful computational tools, and that a substantial amount of new data has been acquired only through large cooperative efforts, as opposed to research in small groups characteristic of neuroscientific investigation thirty years ago. Equally, however, it is clear that conceptualization lags behind data accumulation. Thus, we posit that the greatest challenge for future endeavors will be to integrate the plethora of facts generated by the highly diverse fields of research into an overarching comprehensive theory on cortical functions. Whether this is at all possible—whether there is even such a thing as a unifying theory of neocortex—remains an open question. Perhaps accumulated knowledge must remain distributed across the community of specialized experts, similar to how functions of the cerebral cortex are distributed. Just as the brain, as a whole, produces intuitively plausible behavior, distributed knowledge might serve to explain a large number of normal and pathological behaviors, ultimately enabling the development of useful tools without meeting the epistemic challenge of having to fit into a unified theory.
We wish to express our gratitude to Silke Bernhard, the director of the Dahlem Workshops from 1974 until 1989. We know of no better way to pay tribute to her memory than to continue the dialogue that took root in Berlin some thirty years ago. Equally, we wish to acknowledge the efforts of Andreas and Thomas Strüngmann, who have enabled this unique approach to continue in Frankfurt. Their vision and support of the Ernst Strüngmann Forum is invaluable to basic science, for it offers researchers a much-needed yet rare opportunity to reflect, to reanalyze, to correct, and to propose directions for future research to pursue. On behalf of all participants, we thank you sincerely.
The extraordinary complexity of the mammalian neocortex is the result of millions of years of evolution. Elucidating the principles underlying its development and function has been a major goal in the neurosciences. How a seemingly uniform group of neuroepithelial stem cells produces the vast array of electrically responsive cell types, and how the resulting cells establish such a rich variety of circuits in the mature neocortex, remain key questions in the field. This chapter reviews seminal advances in understanding the production, specification, and migration of neocortical neurons prior to the establishment of mature circuits.
The extraordinary cognitive abilities that humans possess, such as syntactical-grammatical language, abstract thinking, episodic memory, or complex reasoning, are largely dependent on the brain, and more specifically on its surface, the cerebral cortex, as was initially proposed by Thomas Willis in 1664. Since then, neuroscience has endeavored to decipher what makes the human brain so unique when compared with other species. Early studies were based on comparative brain anatomy between humans and other extant or extinct species, in the latter case using data compiled from fossil records. More recently, comparisons of neuron numbers and studies of cortical development have advanced our understanding. Nowadays, in the era of genomics, new possibilities have arisen for determining changes in gene expression or regulatory activity that underlie the observed differences in phenotypes. This chapter summarizes what is known about the human cerebral cortex. It focuses on the neocortex, which represents about 80% of the human brain mass, and places it into an evolutionary context by considering other hominins, nonhuman primates, and mammals. Finally, it explores the role of genomics in elucidating the shared and unique features of human nervous system development, organization, and function.
This chapter explores what happens when the development of the cerebral cortex goes awry. It presents results on work with CHMP1A mutations, which highlight the importance of specialized cell-to-cell communication via extracellular vesicles in cortical development and function. It reviews genetic causes of microcephaly, with an emphasis on centrosomal proteins, and presents novel insights into cortical evolution gained from a ferret model of microcephaly caused by ASPM loss of function. It then reviews recent work to identify noncoding mutations that cause brain malformations, which has expanded understanding of cortical development beyond protein-coding genes. These three examples illustrate general principles of cortical growth and function (cellular communication and synaptic plasticity, evolution, and utilization of large data sets), made possible by recent advances in DNA sequencing technology.
The cerebral cortex controls our unique higher cognitive abilities. Modifications to gene expression, progenitor behavior, cell lineage, and neural circuitry have accompanied evolution of the cerebral cortex. This chapter considers the progress made over the past thirty years in defining potential mechanisms that contribute to cortical development and evolution. It discusses the value of model systems for understanding elaboration of cortical organization in humans, with an emphasis on recent technical and conceptual advances. It then examines our current understanding of the molecular and cellular basis for cortical development and evolution; discusses how neuronal fates are specified and organized in laminae, columns, and areas; and revisits the radial unit and protomap hypotheses. Finally, it considers our current understanding of the development, stability, and plasticity of cortical circuitry. Throughout, it highlights the profound impact that new technological advances have made at the molecular and cellular level, and how this has changed our understanding of cortical development and evolution. The authors conclude by identifying critical and tractable research directions to address gaps in our understanding of cortical development and evolution.
Unraveling the organizational structure of the brain has, in large measure, been reductionist in nature. While this has revealed, in ever-increasing detail, the fine structure of the brain, it leaves the beautifully integrated nature of brain function less directly addressed. Views of the functional organization of the brain should include a unitary perspective, despite the diversity of its constituent parts. This chapter focuses on recent observations from the authors’ laboratory, which point to the value of an integrated approach and suggest an answer to the assigned title question: arguably, the brain consists of a single network with functional diversity.
From interacting cellular components to networks of neurons and neural systems, interconnected units comprise a fundamental organizing principle of the nervous system. Understanding how their patterns of connections and interactions give rise to the many functions of the nervous system is a primary goal of neuroscience. Recently, this pursuit has begun to benefit from the development of new mathematical tools that can relate a system’s architecture to its dynamics and function. These tools, stemming from the broader field of network science, have been used with increasing success to build models of neural systems across spatial scales and species. This chapter discusses the nature of network models in neuroscience. It begins with a review of model theory from a philosophical perspective to inform our view of networks as models of complex systems, in general, and of the brain, in particular. It summarizes the types of models that are frequently studied in network neuroscience along three primary dimensions: from data representations to first-principles theory, from biophysical realism to functional phenomenology, and from elementary descriptions to coarse-grained approximations. Ways to validate these models are then considered, with a focus on approaches that perturb a system to probe its function. In closing, a description is provided of important frontiers in the construction of network models and their relevance for understanding increasingly complex functions of neural systems.
Since the days of Ramón y Cajal and Golgi, reconstruction of neuronal morphology has been a central element of neuroscience research. The cell body (soma) and dendrites receive and integrate synaptic input patterns from diverse neuronal ensembles. The axon, in turn, broadcasts the results of this integration process to a variety of neurons within and across brain regions. Morphological differences in the dendritic and axonal shapes are thus closely linked to a neuron’s inputs, outputs, computations, and hence functions. Quantification of somatic, dendritic, and/or axonal properties by morphological reconstructions thus represents one of the major approaches to define brain areas and neuronal cell types therein. This chapter addresses some of the technical challenges involved in reconstructing neuronal morphologies and in linking morphology to other properties of the neurons, such as intrinsic physiology and synaptic connectivity. It discusses conceptual challenges involved in using morphological reconstructions for the definition of neuronal cell types, as well as for the identification of neural circuit structure and function.
Recent research in the neurosciences has revealed a wealth of new information about the structural organization and physiological operation of the cerebral cortex. These details span vast spatial scales and range from the expression, arrangement, and interaction of molecular gene products at the synapse to the organization of computational networks across the whole brain. This chapter highlights recent discoveries that have laid bare important aspects of the brain’s functional architecture. It begins by describing the dynamic and contingent arrangement of subcellular elements in synaptic connections. Amid this complexity, several common neural circuit motifs, identified across multiple species and preparations, shape the electrophysiological signaling in the cortex. It then turns to the topic of network organization, where, spurred by the routine availability of noninvasive MRI in humans, interdisciplinary tools are lending new insights into large-scale principles of brain organization. Discussion follows on one of the most important aspects of brain architecture; namely, the plasticity that affords an animal flexible behavior. In closing, reflections are put forth on the nature of the brain’s complexity, and how its biological details might be best captured in computational models in the future.
A hallmark of cortical organization is the coexistence of serial feedforward with reentrant processing. The latter is based on feedback projections from higher to lower processing levels and massive reciprocal excitatory projections which link neurons located within the same cortical areas as well as cortical areas occupying the same level in the processing hierarchy. These reentrant connections, together with local negative feedback loops, give rise to exceedingly complex dynamics that are characterized by oscillations in a broad range of frequencies, synchronization of discharges, and cross-frequency coupling. Evidence is reviewed which suggests that these dynamic properties support specific computations: the flexible binding of distributed neurons into functionally coherent assemblies, the attention-dependent selection of sensory signals, the conversion of semantic relations into temporal relations, the comparison of stored priors with sensory evidence, the selective routing of signals in densely interconnected networks, the definition of relations in the context of learning, and the dynamic formation of functional networks. Arguments challenging a functional role of oscillations and synchrony, due to their volatile nature, are discussed in relation to recent evidence that highlights the advantages of volatility.
Information processing in the brain is implemented across several temporal and spatial scales by populations of neurons. This chapter addresses how single neurons, small network motifs, and larger networks, in which emergent dynamics are largely shaped by the connectivity of the system, contribute to this processing of information. Computation is defined as a semantic mapping; that is, it is the process by which representations of external (e.g., stimulus-driven) or internal (e.g., memories) information change. A feature specific to neuronal computation is that mappings are mostly local, constrained by connectivity patterns between neurons. This implies that complex mappings from local information onto representations that are highly relational and abstracted, and which rely on information between distant parts of the system, require mechanisms that can bridge, bind, and integrate pieces of information across large scales. An overview of this process in the nervous system is delineated: Local information processing is described at the level of individual neurons and small motifs. Emergent phenomena are addressed that implement information processing across large recurrent neuronal populations. Finally, an omnipresent but mostly ignored feature of neuronal systems, delay-coupled computation, is described.
Theories of information coding in cortical populations have been put forth for many years, but only recently have experimental methods become available to permit simultaneous recordings from hundreds of neurons, thus allowing these theories to be tested. This chapter discusses some of the more prominent theories and argues that they fall along a spectrum of coding schemes, ranging from population codes that are built up from single-neuron tuning functions to codes that emerge from the collective dynamics of cortical populations. At the extremes, these theories are incompatible: one relies on single neurons whereas the other ingrains coarse neuronal activity into low-dimensional trajectories that summarize the covariance of activity across multiple neurons. It is proposed that both can be reconciled using a hierarchical coding scheme where relevant information is represented at the level of large-scale spatiotemporal patterns, and both individual neurons and the temporal interrelationships convey information. Antecedents to this contemporary theory can be seen in Donald Hebb’s assembly phase sequences (Hebb 1949): information is encoded at the single-neuron level in terms of tuning functions, but spatiotemporal patterning of individual neurons provides context to interpret the population code fully. Moreover, the encoding perspective proposed here explicitly incorporates the synaptic implementation of the code, thus strengthening the postulate.
A central goal of systems neuroscience is to understand how the brain represents and processes information to guide behavior (broadly defined as encompassing perception, cognition, and observable outcomes of those mental states through action). These concepts have been central to research in this field for at least sixty years, and research efforts have taken a variety of approaches. At this Forum, our discussions focused on what is meant by “functional” and “inter-areal,” what new concepts have emerged over the last several decades, and how we need to update and refresh these concepts and approaches for the coming decade.
In this chapter, we consider some of the historical conceptual frameworks that have shaped consideration of neural coding and brain function, with an eye toward what aspects have held up well, what aspects need to be revised, and what new concepts may foster future work.
Conceptual frameworks need to be revised periodically lest they become counterproductive and blind us to the significance of novel discoveries. Take, for example, hippocampal place cells: their accidental discovery led to the generation of new conceptual frameworks linking phenomena (e.g., memory, spatial navigation, and sleep) that previously seemed disparate, revealing unimagined mechanistic connections. Progress in scientific understanding requires an iterative loop from experiment to model/theory and back. Without such periodic reassessment, fields of scientific inquiry risk becoming bogged down by the propagation of outdated frameworks, often across multiple generations of researchers. This not only limits the impact of the truly new and unexpected; it also hinders the pace of progress.
This chapter sets the scene for the treatment of complexity and computation in human cognition and discusses how this treatment is informed by the neurobiological and functional properties of the cerebral cortex. Its agenda is to establish some guiding principles that may help identify hypotheses and computational architectures that go beyond mere descriptions of how the cortex underwrites the repertoire of functions we enjoy, such as action, perception, cognition, affect, and consciousness. In short, it explores the computational imperatives that form the basis for human experience. Complexity and computation are considered, as is how they organize our approach to neuronal dynamics. Criteria are identified that any tenable theoretical framework must respect. In addition, it discusses computational theories that can be entertained, and the degree to which they account for empirical data from anatomy and neurophysiology. Finally, some of the deeper issues that face sentient artifacts are considered that, ultimately, possess a sense of self, purpose, and agency.
Relative to other primates, humans exhibit a great variety of singular cognitive abilities for language, mathematics, music, tool use, theory of mind, and self-consciousness. What has brought about this singularity? This chapter examines the hypothesis that the human brain is unique in being endowed with a mental representation of nested, tree-like symbolic structures. Such syntactic structures are essential in the modern description of human languages, including natural languages as well as the artificial ones used in music or mathematics. Nonhuman animals may possess abstract representation of temporal sequences, but evidence suggests that those representations do not include the sort of nested tree structures typical of human grammars. Brain imaging, magnetoencephalography, and intracranial recordings have begun to reveal the neural correlates of the nested structure of linguistic constituents, which involve Broca’s area and the superior temporal sulcus of the left hemisphere. Importantly, the mental manipulation of musical and mathematical structures, which also involves nested trees, is not confined to such classical language areas. Instead, high-level mathematics involves bilateral intraparietal areas involved in elementary number sense and simple arithmetic as well as bilateral inferotemporal areas involved in processing Arabic numerals. This chapter proposes that several distinct circuits of the human brain have become attuned to nested tree structures for different domains, such as language, mathematics, or music. According to the demodularization hypothesis, during human brain evolution, primitive tree structures may have emerged within specialized neural circuits (e.g., those involved in spatial or geometrical computations) and were later exapted toward a more general role in language processing and conscious verbal report.
Currently, we lack a clear understanding of what is special about the human brain and how it has given rise to uniquely human behaviors. To make progress, we first need to set aside appeals to authority (e.g., Darwin) and accept that mammalian brains are not simply differently sized versions of the same thing. This does not mean that there are no commonalities between the brains of mammals and other taxonomic groups, but rather that the only way to identify meaningful similarities and differences is through a comparative approach that examines a range of species. This chapter argues that two further lines of investigation are important in comparative neuroscience. First, investigating development will help reveal how evolution finds the same or different solutions: homologous or convergent developmental trajectories expose the constraints (or lack thereof) on how the brain reaches an adaptive solution. Second, investigating the body and its biomechanics will reveal how the structure of the body generates both constraints and advantages for the nervous system. Understanding the evolution of the human brain requires a comparative understanding of how it develops and operates in concert with the body.
How do the computations of the cerebral cortex and subcortical structures account for human perception, cognition, and affect? Answering this question requires understanding how the neurobiological and functional properties of the human brain give rise to the repertoire of human faculties and behavior, and hence, an understanding of the neural mechanisms that implement these functions. While research over the past decades has made substantial progress toward this end, significant challenges still lie ahead, and new opportunities open up daily as neuroscience and related fields develop and implement new theories and technologies. To (begin to) address these challenges, this chapter explores conceptual and methodological aspects inherent to the study of the neurobiology of the human mind that are at the core of the current “central paradigm” (Kuhn 1962) in neuroscience, but are often taken for granted and undergo little scrutiny. In particular, it discusses what defines or constitutes “uniquely human” mental capacities, the promises and pitfalls of using animal models to understand the human brain, whether neural solutions and computations are shared across species or repurposed for potentially uniquely human capacities, and what inspiration and information can be drawn from recent developments in artificial intelligence. Attention is given to laying out desiderata for future investigations into the human mind.