The Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative 2.0

The Advisory Committee to the NIH Director BRAIN Initiative Working Group 2.0, formed in April 2018, has been working tirelessly to assess BRAIN’s progress and advances within the context of the original BRAIN 2025 report, identify key opportunities to apply new and emerging tools to revolutionize our understanding of brain circuits, and designate valuable areas of continued technology development. Over the course of the last year, the Working Group has undertaken a deliberative and open process consisting of portfolio review, scientific workshops, town halls, and public solicitation. 

Continuing in this manner, the Working Group is sharing with the community its thoughts on the current state of the BRAIN Initiative, including opportunities for keeping pace with the evolving scientific landscape and for identifying new avenues of research and technology development, within a solid ethical framework, to ensure BRAIN Initiative research is of the utmost value to the public it intends to serve. Following a 30-day public comment period, the Working Group will review all responses as it refines a Report to the Advisory Committee to the NIH Director (ACD) for consideration at its meeting on June 13th and 14th, 2019.

 

The Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative 2.0

 

From Cells to Circuits, Toward Cures

 

The Working Group has structured its initial findings around the seven scientific Priority Areas identified by BRAIN 2025. Each of these constitutes a chapter that provides a brief description of how the Priority Area fits into the goal of understanding circuits; reviews accomplishments to date in the context of the BRAIN 2025 short- and long-term goals; identifies gaps and opportunities; and suggests revised short- and long-term goals. Notably, because many areas call for continued study, the Working Group has in most cases characterized both what is new and what should be ongoing activity, framing these scientific directions with a discussion of overarching topics that affect all areas of science: sharing and using BRAIN Initiative technologies; data management and sharing; and scientific workforce-related considerations. The Working Group also offers ideas for transformative projects, all involving complex and multiscale lines of inquiry, to articulate our vision for out-of-the-box approaches that could truly transform neuroscience inquiry in our quest to understand the amazing organ that is the human brain.

 

Priority Area 1: DISCOVERING DIVERSITY

 

This BRAIN 2025 priority area aims to generate a census and taxonomy of cell types in the brain – a master “parts list.” With BRAIN Initiative support, progress in this area has far exceeded expectations. This avenue of research is both foundational and enabling. A census of brain cell types is foundational because it provides a unique framework to understand brain function and development and raises fundamental issues about how cell diversity relates to function:

  • How do the brain’s diverse parts relate to one another?
  • How do the parts, alone and together, contribute to function?
  • Is the cell the basic unit for defining function, and ultimately, for defining behavior and dysfunction?
  • What is a “cell type” in the brain? Is such a concept even useful?

This research is also enabling because it will form the basis for the development of novel technologies to visualize cell connectivity and to manipulate cell function in models of human conditions. In turn, advances in cell access will create a powerful new platform for understanding and curing brain disorders. The study of cell types as units of brain dysfunction will open doors to new therapeutic approaches beyond genetic manipulations by offering an opportunity to influence circuits as interventional targets.  

 

Understanding brain cell types: Fundamental insights for discovery 

Neurons differ with respect to many observable characteristics, or phenotypic features: cytoarchitecture, connectivity, location, electrophysiological properties, genetic make-up, and transcriptomic characteristics. Moreover, neurons and their functional integrity are highly influenced by the billions of supporting cells known as glia. Must all these features align to define a cell “type”? Single-cell RNA sequencing (scRNAseq) has revolutionized the characterization of cellular diversity through broad analysis of gene expression (transcriptomic profiling) of individual cells in a scalable, high-throughput way. By contrast, methods to characterize other phenotypic features remain less efficient and are thus not yet applied broadly across brain cell types. Thus, the question arises: Should a cell’s transcriptomic profile become a surrogate and sufficient definition of its type, simply because measuring the transcriptome is faster and easier than measuring other phenotypic features? In the retina, for example, there is excellent congruence between transcriptomic type and phenotypic features defined by other measures. However, the retina may be a special case, since it is a sensory “chip,” and we do not know whether such congruence will hold for the many other cell types in the brain. Thus, we face both power and limitation in technological advances in single-cell sequencing: Do transcriptomic and genomic features align with other neuronal phenotypic characteristics, and if not, how should we integrate information about other characteristics into our view of neuronal identity?

A second major issue, raised by scRNAseq technology and data, is: Why does the brain need so many cell types, and how much detail is necessary to understand function? Relying on one technology, such as high-dimensional transcriptomic analysis, provides very fine detail about clusters of cell types. The visual cortex, for example, contains more than 100 transcriptomic clusters at the finest “leaves” of a hierarchical, branched tree. Theoretical neuroscientists have been able to successfully mimic many aspects of brain function with artificial neural networks composed of only a single type of generic electronic neuron. What additional explanatory power, efficiency, or computational flexibility do we gain by incorporating the specialized firing dynamics of precisely subclassified cell types into such models, and at what level of granularity? Resolving these issues will require new experimental methods, new computational algorithms for integrating and curating data of different phenotypic characteristics, and closer interaction between theoretical and experimental neurobiologists. It is likely that these answers will transform not only our understanding of brain function, but also our understanding of cellular diversity in other organs and tissues.
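
To make the notion of a “transcriptomic cluster” concrete, the sketch below shows a typical single-cell clustering workflow using the open-source scanpy toolkit (one common choice; BRAIN 2025 does not prescribe particular software). The input file and parameter values are illustrative assumptions.

```python
# Minimal sketch: deriving transcriptomic cell-type clusters from scRNAseq
# data with the open-source scanpy toolkit. File name and parameters are
# illustrative, not from the report.
import scanpy as sc

adata = sc.read_h5ad("visual_cortex_cells.h5ad")  # hypothetical cells-x-genes matrix

# Standard preprocessing: normalize library size, log-transform,
# and restrict to highly variable genes.
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable]

# Reduce dimensionality, build a nearest-neighbor graph, and cluster.
sc.pp.pca(adata, n_comps=50)
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.leiden(adata, resolution=1.0)  # higher resolution -> finer "leaves"

# A dendrogram over clusters gives the hierarchical, branched tree
# referred to above.
sc.tl.dendrogram(adata, groupby="leiden")
print(adata.obs["leiden"].value_counts())
```

Raising the resolution parameter subdivides clusters further – which is precisely the granularity question posed above.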

 

Toward cures: Future platform for therapeutics

It is well established that certain brain disorders, particularly neurodegenerative diseases, affect specific neuronal cell types and/or surrounding glial cells. For example, Parkinson’s disease causes selective degeneration of dopaminergic neurons in the substantia nigra; Huntington’s disease targets medium spiny neurons in the striatum; and in Alzheimer’s disease, some of the key genetic risk factors are expressed in glial cells, which in turn affect circuit function. This raises two important and related questions: i) What is the cellular and molecular basis of such selectivity? and ii) Do other brain disorders, including neuropsychiatric disorders, affect specific cell types? For example, are depression, autism, or schizophrenia diseases of specific cell types, and if so, which ones? Our ability to understand and distinguish cell types in healthy brains compared to diseased brains may answer these fundamental questions as well as introduce targeted and precise therapeutic approaches with potentially fewer side effects.

 

BRAIN 2025 vision: Generate a first-draft cell census and develop enabling technologies 

 

Cell census

How many cell types are there in the brain? This deceptively simple question may not have a single answer; rather, the answer depends on the definition of “cell type.” Neuronal and non-neuronal cells vary widely in their phenotypic characteristics, differing from each other in morphology, electrophysiology, transcriptomic features, and other dimensions. The first challenge posed by BRAIN 2025 was to achieve a consensus set of criteria for defining neural cell types, which would encompass multiple facets of cellular identity. This set of criteria would first be established, in mouse cells, in one or two relatively well-characterized systems, such as the retina and spinal cord. Achieving such an objective would, in turn, require development of new technologies for multiplexing and integrating measurement of neuronal phenotypic characteristics (e.g., transcriptomic, electrophysiological, anatomical). If that objective were achieved, then BRAIN 1.0 proposed to extend the approach to a few additional selected areas in the central brain (also in mice), toward building a census of cell types in other organisms, including non-human primates (NHPs) and humans, as well as in certain genetically tractable model organisms (zebrafish, fruit flies, and others). At the time, in 2014, before the advent of high-throughput scRNAseq, achieving these objectives would have been considered a success for the first 5 years of the BRAIN Initiative.

 

Technologies, reagents, and datasets

BRAIN 1.0 anticipated the datasets generated from building the neuronal cell census to be large, diverse, and high-dimensional. This expectation pointed to a need for publicly accessible databases for annotation, curation, and storage of such data, in a manner that allowed not only data retrieval but also searches and other computational analyses by the research community. Such an effort would require concerted collaboration among those generating the data and data analysts, software and database engineers, user-interface developers, and computational scientists. BRAIN 1.0 also outlined the need for reagents, or markers, to identify specific neural cell types. Although cells can be transcriptionally profiled using RNA fluorescent in-situ hybridization (FISH), discordance between mRNA levels and protein levels – along with the inability to use FISH in living cells – renders this technique limiting by itself. BRAIN 1.0 thus called for development of antibody reagents to identify cell types, emphasizing cross‐species reactivity (rodents, non‐human primates, humans) and immunohistochemical applications. Monoclonal antibodies to cell-surface epitopes, in particular, offer advantages due to their ability to identify and isolate living cells for electrophysiological or developmental studies, as well as to purify in vitro-generated cell types for transplantation.

 

Manipulation and perturbation

In genetically tractable organisms (e.g., mice, zebrafish, fruit flies), transgenic approaches that enable selective targeting of specific cell types (e.g., Cre lines of transgenic mice) offered opportunities for cell manipulation toward understanding cell function. In parallel, BRAIN 1.0 highlighted a need for new technologies to allow experimental access to specific cell types across species, and in particular emphasized methods that do not require germline genomic modification and that could potentially be applied to humans for therapeutic purposes, as well as to animal models not amenable to genetic manipulation (rats, NHPs). Such methods could be used to directly drive reporter or effector gene expression via replication-incompetent viral vectors. They include, for example, compact, cell type-specific cis-regulatory DNA modules delivered by viral vectors such as adeno-associated virus (AAV); CRISPR/Cas9-based methods for homologous recombination into cell type-specific endogenous gene loci in post-mitotic neurons; or cloned monoclonal antibodies to cell type-specific surface antigens for pseudotyping enveloped animal viral vectors (e.g., lentivirus). To identify cell types with sufficient specificity, BRAIN 1.0 also anticipated a need for intersectional methods that assess expression of multiple gene markers simultaneously.

 

BRAIN 1.0 projected generating a “first-draft” cell-type census spanning the entire mouse brain and spinal cord, along with a suite of associated reagents (e.g., transgenic mouse lines, compact cis-regulatory DNA modules/enhancers) permitting access to at least 200 different cell types in the mouse brain, as well as complete access to all cell types in selected regions (e.g., the retina). BRAIN 1.0 also anticipated generating a first-draft census of cell types for selected brain regions in NHPs and humans. Achieving these objectives would enable cell type‐specific optical imaging and optogenetic perturbations in multiple mammalian species, including NHPs, and in human tissue. An aspirational goal for this time frame was to achieve proof‐of‐principle cell type‐specific targeting for therapeutic manipulations in humans, independent of germline genomic modifications.

 

NIH funding to date: Discovering Diversity

 

NIH has implemented its vision of an integrated census of brain cell types in three stages. NIH first funded 10 pilot projects in a 3-year pilot phase. These projects took diverse approaches to characterize cell types in the brain, and investigators worked closely together, collectively generating more than 50 publications during this time period. Encouraged by this progress, NIH launched phase 2, a coordinated set of awards organized as the BRAIN Initiative Cell Census Network (BICCN), aiming to develop by 2021 a comprehensive mouse brain cell atlas of cell types, as well as to advance techniques for cell-type mapping in human and NHP brains. NIH anticipates that lessons learned from the mouse cell-census program and the coordinated work on studying larger brains will enable an increasing focus on NHP and human brains in preparation for phase 3 of the program, beginning around 2022.

 

Where are we now in defining the brain parts list?

 

The Priority Area 1. Discovering Diversity component of BRAIN 2025 is a model of success. Thanks to timely development of key, transformative technologies, many of the specific stated goals within this priority area have been achieved and, in some cases, substantially exceeded.

 

Transcriptional profiling

The advent of high-throughput ways to profile large numbers of single cells at the molecular level has enabled extensive single-cell, molecular “atlasing” in an increasingly large number of brain areas and in the retina. This has generated vast datasets for comprehensive analyses that will help construct a theoretical framework of cell diversity. One concrete and important deliverable is an open‐access database of integrated information with computational search tools, built on a successful pipeline for collection, standardization, analysis, storage, and distribution of data. Additional technologies that enable spatial mapping of identified cell types will complement existing findings significantly over the next few years. At this point, integrative approaches to match molecular identity with positional information, connectivity, and physiological properties of single cells are ongoing and will play a key role in understanding how transcriptomic data relate to other phenotypic characteristics of neural cell types.

 

Cell-type diversity

We now have several examples of the logic of cell-type diversification in different regions of the rodent brain, as well as in other organisms. These include a collection of millions of single cells molecularly profiled in the mouse motor cortex, primary visual cortex (V1), hypothalamus, and retina. In addition, several regions in the adult mouse brain have been profiled at a lower coverage rate that has also highlighted phenotypic attributes of glial cells. As noted, however, transcriptomic phenotyping cannot yet be considered a surrogate measure of neuronal identity that correlates with and predicts all other cell characteristics. That is because there are clear cases in which differences in axonal projections do not coincide with transcriptomic differences. There are likely other circumstances in which knowledge lags behind technology – and thus we don’t know what we don’t know.

 

Integrative technologies

Work in other organisms is underway, most prominently in human and marmoset brains, and large single-cell datasets have emerged from BRAIN Initiative-funded research. We are now poised to learn more about additional regions of the adult brain and about various time windows in human brain development. Also now available are technologies that integrate molecular information with connectivity and positional and physiological information, which will move us closer to investigating cell diversity in the context of circuits. Recent in-situ sequencing technology advances, including MERFISH, seqFISH, STARmap, and others, have enabled positional, transcriptional mapping of vast arrays of cell types in tissue at single-cell resolution. These efforts will no doubt continue to be central contributors toward atlasing cell types, as well as toward linking cell identity to connectivity and function. New integrative technologies that assess multiple identity parameters of a cell, at scale, include patch-seq (for sampling electrophysiological and transcriptional properties of cells), as well as MAPseq and MERFISH (for integrating connectivity and cell-specific transcriptional information).

 

Advanced tissue labeling and imaging

Further, advances in imaging and labeling allow more detailed mapping of cells in intact, three-dimensional tissue. These technologies are also now being integrated with high-throughput single-cell sequencing approaches. Study of the epigenomic landscape is now possible with ATAC-seq and snmC-seq, which allow mapping of chromatin accessibility and methylation patterns at single-cell resolution. Such approaches can be combined with RNAseq data to reveal simultaneous single-cell transcriptomic and epigenomic patterns.

 

Goals unmet: Cell-type targeting and protein-analytical tools

Technology is urgently needed to achieve cell type-specific targeting methods that are applicable and effective in the human and NHP brain, as well as in other traditional neuroscience model organisms such as the rat. A current limitation in cell-type analysis is a lack of antibodies to recognize cell types. Similarly, genetic tools to recognize and manipulate cells with class specificity remain modest in scale. In this arena, perhaps the largest concerted effort was put into place by the Allen Institute for Brain Science, through its generation of a bank of Cre lines in primary visual cortex (area V1). While an important resource, these studies reveal a limitation of the approach in that each Cre line typically labels multiple transcriptomic cell types. More tools are needed to investigate a more comprehensive set of cortical and subcortical regions, as well as to expand genetic access to specific cell types. In most cases, this will require flexible, intersectional genetic methods involving multiple recombinases, or entirely new genetic approaches such as those that build on CRISPR/Cas9 and that scale production of new Cre and Flpe lines. Complementary knowledge emerging from single-cell profiling endeavors should provide an opportunity to identify cell type-specific promoter regions and enhancers to drive expression in specific cell types via viruses. The ability to manipulate viral serotypes or pseudotypes will help to expand genetic access in multiple species, most prominently in human and NHP tissue. Refining those techniques may benefit from use of brain organoids or fresh brain slices, and in some cases, human in-vitro systems – all of which may warrant re-evaluation of existing ethical standards for these evolving methodologies and brain models. Beyond use in characterizing cell-type protein distribution, antibodies are also needed to purify, tag, and target specific cell types in species that are not genetically accessible.

 

Gaps and new opportunities for BRAIN 2.0

The success of the brain-wide transcriptional cell census in mice has introduced a vast number of exciting new opportunities that hinge upon access to newly transcriptionally defined cell populations in the various conditions in which they exist in animals and in humans. Further, though comprehensive and powerful, the current transcriptionally based cell census requires application of additional, independent multimodal methods that integrate physiology, anatomy, connectivity, and function. The multimodal definition of cell types that will arise from these efforts should inspire and inform theory and cell type-based models of circuit function. Other approaches that will facilitate rich study of human cells and circuits in health and disease include new technologies for molecular profiling with single-cell resolution that are now applicable to the human brain; three-dimensional cellular models of the human nervous system (i.e., brain organoids/assembloids); methods to label and manipulate human cells; and cross-species comparisons. Moving forward, several opportunities exist for BRAIN 2.0, reflecting a balance of new directions and continued activity.

1. Development of new tools and technologies

  1. Tools to integrate molecular, connectivity, and physiological properties of cell types. Now available are large datasets of single-cell transcriptomic and epigenomic data as well as in-situ multimodal profiling tools (patch-seq, MAPseq, MERFISH, seqFISH, STARmap), offering an opportunity to integrate data at single-cell resolution, at scale (see the sketch following this list). One example might be bridging cell identification with connectivity through combined cell barcoding and in-situ sequencing. Similarly, tools to integrate data from single-cell transcriptional, epigenetic, and proteomic profiling may be developed.
  2. Tools to access a large number of defined cells with class specificity. Transitioning from our current understanding of cell diversity to defining cell function and enabling higher-order understanding of cell-cell interactions and circuits requires the development of multiple ways to target gene expression in specific cell types and in multiple species.
  3. Technology to merge activity and molecular maps. Use of activity reporters (genetically encoded calcium indicators, others) and multiplex FISH will enable layering functional information onto transcriptional maps of cell populations in a specific brain area.
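
As a minimal illustration of item 1 above, one simple way to relate two measurement modalities from the same cells (as in patch-seq) is to project both into a shared, maximally correlated subspace. The sketch below uses canonical correlation analysis from scikit-learn on simulated data; all shapes and parameters are hypothetical, and real integration pipelines use more sophisticated methods.

```python
# Sketch: aligning two measurement modalities from the same cells (as in
# patch-seq) in a shared low-dimensional space via canonical correlation
# analysis. Data are simulated; shapes and parameters are illustrative.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_cells = 500
latent = rng.normal(size=(n_cells, 5))  # shared cell-type signal

# Hypothetical modalities: 300 gene-expression features and
# 40 electrophysiological features, both driven by the shared signal.
X_rna = latent @ rng.normal(size=(5, 300)) + rng.normal(size=(n_cells, 300))
X_ephys = latent @ rng.normal(size=(5, 40)) + rng.normal(size=(n_cells, 40))

X_rna = StandardScaler().fit_transform(X_rna)
X_ephys = StandardScaler().fit_transform(X_ephys)

# Project both modalities onto maximally correlated components.
cca = CCA(n_components=5)
Z_rna, Z_ephys = cca.fit_transform(X_rna, X_ephys)

# Per-component correlation indicates how well the modalities align;
# joint clustering could then proceed on the shared embedding.
for k in range(5):
    r = np.corrcoef(Z_rna[:, k], Z_ephys[:, k])[0, 1]
    print(f"component {k}: r = {r:.2f}")
```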

 

2. Generate a protein-based understanding and protein-based access to cell types. 

Proteins, not gene transcripts, are the functional players in healthy and diseased cells, and cell types may be better defined by proteins than by gene transcripts. Methods to quantify, compare, and differentiate protein catalogues across cell types may yield novel strategies for accessing cell types across species for selective manipulation. It is therefore imperative to move beyond a solely transcriptionally based cell atlas and generate a protein-based description of key cell types that includes quantification of the subcellular distribution of proteins in different cell types. Moreover, abnormal variants rather than missing genes often underlie disease states. Thus, while most molecular cell-census efforts so far have been based on cell transcriptomics, an essential next step will be to gain a precise understanding of the expression of specific transcript and protein variants in different cell types and assess their subcellular localization. In addition, new approaches should be developed to study proteins in native environments rather than in vitro, which, combined with the development of specific antibody reagents, will enable anatomically resolved, large-scale in-situ investigations of proteins, including variants. The compatibility of nanobodies with EM reconstructions and live-cell analyses makes significant expansion of these tools particularly attractive.

 

3. Exploit cell-type information to understand and modulate circuits.

Cell-census results provide a new platform for comprehensive connectivity mapping and functional targeting of brain circuits. Integrating the various cell-type phenotypic characteristics is important for answering fundamental questions, such as whether transcriptionally defined cell types constitute basic units of functional specialization in the nervous system, or whether they reflect some other axis of biological identity, such as developmental specification based on genetically encoded patterning. Critically, through direct collaborations between experimentalists and theorists, cell-type information can now be integrated into theories of brain function. Currently, insufficient cross-talk between experimental data arising from investigations of multicellular regions (e.g., visual cortex) and modeled data from neural networks (using a few simplified cell types) hinders a deeper understanding of the impact of cellular diversity and specificity on circuit function. Grounding in theory will help steer technology-driven, high-precision data collection toward a better understanding of biology and of how neural networks compute.
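
As a toy illustration of how cell-type information enters circuit models, the sketch below implements a classic two-population (excitatory/inhibitory) firing-rate network in which the two “cell types” differ only in their time constants and connection weights. All parameter values are illustrative, not drawn from the datasets discussed here.

```python
# Toy illustration: a two-population firing-rate circuit in which
# "cell type" enters the model as type-specific time constants and
# connection weights. All parameter values are illustrative.
import numpy as np

def rectify(x):
    return np.maximum(x, 0.0)

# Type-specific parameters (hypothetical): excitatory (E) and inhibitory (I).
tau = np.array([20.0, 10.0])        # membrane time constants, ms
W = np.array([[1.2, -2.0],          # E<-E, E<-I weights
              [1.0, -0.5]])         # I<-E, I<-I weights
ext = np.array([5.0, 2.0])          # external drive to each population

dt, T = 0.1, 500.0                  # time step and duration, ms
r = np.zeros(2)                     # firing rates (arbitrary units)
trace = []
for _ in range(int(T / dt)):
    drive = rectify(W @ r + ext)     # rectified-linear transfer function
    r = r + dt / tau * (-r + drive)  # leaky integration toward the drive
    trace.append(r.copy())

print("steady-state rates (E, I):", np.round(trace[-1], 3))
```

Replacing the generic two-type parameterization with measured, cell type-specific values is one concrete way census data could constrain such models.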

 

4. Expand study of human brain biology. 

New technologies for single-cell, unbiased analyses of many cell types, combined with improved access to cells that does not rely upon germline modification, offer an opportunity to study the postnatal human brain and, in parallel, key developmental stages of the human brain, such as through use of human brain samples collected from a large spectrum of individuals and brain specimens collected by neurosurgeons. Additionally, there is a need to develop and employ non-rodent models of the human brain. Genetic studies of psychiatric and prominent neurodevelopmental conditions (e.g., schizophrenia, bipolar disorder, autism) indicate complex polygenic etiologies in the vast majority of individuals. Understanding these genetic contributions will require use of human models. Brain organoids, derived from human pluripotent stem cells, offer an opportunity to study human brain tissue. While these cellular model systems are still primitive, and substantial progress will be required to model complex aspects of human circuit functionality and disease, they represent an opportunity to experimentally study aspects of human brain development and functionality that would otherwise never be accessible.

 

5. Determine whether cell types are a fundamental unit of etiology and pathophysiology, as well as whether they may serve as potential targets for therapies in human brain disorders. 

We already know that certain neurodegenerative diseases are associated with the death of specific cell types (e.g., Parkinson’s disease, amyotrophic lateral sclerosis, others); can other types of neurological disorders, especially neuropsychiatric disorders such as schizophrenia or bipolar disorder, be similarly traced to malfunctions of specific cell types? Given precise new knowledge of cell types in the mouse brain, more genetically accurate animal models of brain disorders may help determine whether disease processes affect certain cell types and not others – and in turn, whether cell type-specific, circuit-level approaches can ameliorate or reverse symptoms in some of these models. An essential step in this direction is to use evolutionary principles and cell-type analyses from additional (non-mammalian) animal models to probe connections between cell types and disease manifestations or specific behaviors. Comparing cell types across species (from humans to mice, fish, and other model organisms) requires close collaborations with experts in various model systems and with theorists. Such comparisons may uncover relationships between cell types, circuit function, and disease states.

 

Suggested short-term goals for BRAIN 2.0:

  1. Establish a data ecosystem for cell types allowing the integration of different facets of neuronal phenotype. We currently lack robust and scalable methods to integrate complex data reflecting distinct features, or facets, of brain-cell phenotypes. Having this information would likely be transformative for neuroscience – akin to the human genome sequence for biomedicine. Such a system might include appropriate ontologies/nomenclature and data formats, storage, and access infrastructure, and be dynamic to accommodate new data types and improved analysis techniques as they evolve (a minimal illustration of such a record appears after this list). This technology is important since, as noted above, it is not yet clear that scRNAseq data alone can be used as a surrogate measure of cell type in the central brain, independent of morphological or physiological properties (as it can be in the retina). Achieving a consensus definition of cell types in the brain will therefore require a computational framework that integrates different data modalities in a quantitative manner, consistent with current FAIR (findable, accessible, interoperable, and reusable) data-science principles. Accomplishing this goal requires funding to support a balance of dedicated leadership, project management, and data-science expertise (computation, data infrastructure, software, database engineering, ontologies, machine learning), as well as incentive structures to reward valuable data-management and data-sharing practices. Investment in such computational resources will maximize the value of ongoing breakthroughs in scRNAseq, imaging, and other technologies, to establish the relationship between molecular/genetic, anatomic, and physiological facets of brain-cell identity. As with other potentially sensitive collections of large amounts of data, access should be governed by a set of appropriate policies that reflect well-considered ethical principles. In particular, an interdisciplinary team of neuroscientists and ethicists could ask how human brain data and the privacy of the participants from whom data are acquired can be protected in case of immediate or legacy use beyond the experiment.
  2. Produce a consensus typology/taxonomy of brain-cell types. Fundamental multimodal knowledge of cell types should facilitate an informed consensus for defining brain cell types. Such knowledge should be applicable to all brain and spinal-cord regions and should also include vascular and other non-neuronal cell types. Generating a fully anatomically informed cell typology with high granularity across brains of mice, zebrafish, flies, worms, and other genetically tractable model organisms is both valuable and doable. An anatomically resolved, three-dimensional atlas of the mouse brain in particular would provide a framework to study cell-cell interactions and neural-circuit function. Finally, cell-type profiling during nervous-system development will facilitate understanding of how the emergence of specific cell-type identity correlates with function and connectivity.

  3. Enable genetic and non-genetic access to cell types across multiple species. Molecular profiling in cortical and subcortical brain areas shows that individual cell populations are often defined by combinations of markers, suggesting that simple intersectional genetic approaches are insufficient to visualize or functionally interrogate specific cell types. We thus need new tools and strategies to access cell types across multiple species, using a range of genetic and non-genetic methods without germline modification that can be applied to virtually any mammalian species. These tools will enable systematic mapping of activity and connectivity and reveal other circuit-relevant information. Tools may include use of enhancers, CRISPR/Cas9-based gene-editing methods, viral pseudotyping, and serotyping. These approaches should enable anatomical access (via forward (anterograde) and reverse (retrograde) labeling) as well as the ability to study and collectively manipulate large numbers of cell types as fundamental units of etiological and pathophysiological significance.

  4. Employ cell-census data to update and test models and theories of neural-circuit function. Recent strides in cell-type identification have only just begun to penetrate theoretical neuroscience research, calling for theoretical tools and platforms that can plausibly integrate cell type-specific information into broader models of circuit function (see Priority Area 5. Identifying Fundamental Principles).

  5. Develop protein labels, especially those with cross-species applicability. Significant advances in nucleic-acid labeling (e.g., MERFISH/STARmap/seqFISH) point to the importance of concomitant development of protein labels for major cell-type markers of various types. These include chemically based protein labels and combination labels that permit integrative analysis of nucleic acids and protein variants in a selected set of 20 to 50 functionally important cortical and subcortical brain areas, including in the human brain.

  6. Create connectivity and functional maps at multiple scales while retaining cell-type information. For example, electron-microscopy reconstructions should include contrast agents or immunolabeling that preserve membrane structure. Various fluorescence methods may also advance multiplexed labeling. 

  7. Conduct comparisons of human brain cell types with those of other species, including NHPs.

  8. Extend single-cell, multimodal profiling to additional species, including NHPs and humans. Analysis of functionally important cell types in species at varying evolutionary distances from humans should employ several phenotypic approaches (transcriptional, morphological, connectivity, functional) and should also address measurement of variability according to state. A cell census of carefully selected NHP and human-brain areas and regions that encompass functionally important cortical and subcortical areas of the healthy brain, at multiple stages of postnatal development, is feasible. These human data should be integrated with the more complete mouse cell census. An initial focus on the healthy brain will provide the solid platform needed to understand various diseases.
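
To picture the kind of integrated record envisioned in goal 1, here is a minimal, FAIR-oriented data structure for a single profiled cell, linking modalities under a stable identifier. The field names and values are hypothetical, not a proposed community standard.

```python
# Minimal sketch of a FAIR-oriented record for one profiled cell, linking
# modalities under a stable identifier. Field names are hypothetical and
# not a proposed community standard.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CellRecord:
    cell_id: str                             # globally unique, resolvable ID
    species: str                             # e.g., "Mus musculus"
    region: str                              # anatomical location (ontology term)
    cell_type: Optional[str] = None          # provisional cell-type assignment
    transcriptome_uri: Optional[str] = None  # link to scRNAseq data
    ephys_uri: Optional[str] = None          # link to patch-seq recordings
    morphology_uri: Optional[str] = None     # link to reconstruction
    consent_scope: str = "unrestricted"      # governs human-data access
    provenance: dict = field(default_factory=dict)  # protocol, lab, date

cell = CellRecord(
    cell_id="brain:cell/0001",
    species="Mus musculus",
    region="primary visual cortex",
    transcriptome_uri="https://example.org/scrnaseq/0001",
)
print(cell.cell_id, cell.region)
```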

 

Suggested long-term goals for BRAIN 2.0:

  1. Integrate cell-type data platforms for theory development. Achieving this goal will bring together neurobiologists, computational scientists, and theorists to generate new models of brain development, function, and disease.
  2. Create an anatomically resolved census of the whole brain, in 6 to 10 species, with high granularity and genetic and non-genetic access. Achieving this goal will enable multidimensional study of key cortical and subcortical cell types and help discover principles/parameters that define cross-species homology in cell types.
  3. Support development of three-dimensional cellular systems modeling the human brain (organoids/assembloids). Such experimental systems, which feature human-cell specification and diversity (neurons, glia, and vasculature), circuit formation, integration, and plasticity, may enable reliable and effective modeling of defined aspects of brain development that would otherwise be inaccessible for ethical and practical reasons. Although currently derived brain organoids are still primitive, they nonetheless offer an unprecedented opportunity for experimental access to the developing human brain and for study of key developmental processes affected by human genetic diversity and disease. With this complexity comes the potential for development of higher-order human attributes, raising new ethical considerations. Careful consideration should be given to determining how these attributes will be monitored, with the goal of elaborating ethical aspects that must be addressed as progress occurs. The same is true for organoids/assembloids used in transplantation studies or for long-term cultures of multiple cortical human-brain cell-based model systems. Systematic neuroethics research, exploring the requisite or minimum features of engineered neural circuitry that might alter how we ethically consider these models, can be meaningfully pursued through research opportunities that incorporate interdisciplinary teams of ethicists and scientists.

 

In summary, progress in Priority Area 1. Discovering Diversity has been faster than anticipated, enabled by advances in high-throughput technologies and analytical methods. New opportunities for BRAIN 2.0 include expanding cell-type profiling and data analysis to integrate measurements of additional phenotypic features of brain cells; generating protein-based understanding of, and access to, cell types; enabling genetic and non-genetic access to cell types across multiple species; expanding the study of human cell biology; and developing cell type-based models of circuit function. At the completion of BRAIN 2025, we expect that current and additional progress in this area will clarify, perhaps even define, the contributions of distinct cell types to circuit function and its physiological and pathological sequelae.

Priority Area 2: MAPS AT MULTIPLE SCALES

The first crude maps of the visual field in the brain were constructed by correlating brain lesions in the occipital lobe of wounded soldiers with their visual-field deficits. Many new methods of mapping neural circuits, both anatomical and functional, have emerged since then, across a wide range of scales, from non‐invasive whole human brain imaging to dense reconstruction of synaptic inputs and outputs at the subcellular level. In this section, we focus primarily on the challenge of mapping neural circuits anatomically, while the next section, Priority Area 3. Brain in Action, addresses the challenge of functional mapping. Improved methods for reconstructing the anatomy of neural circuits at all scales, and linking these data to circuit function as described in the next section, will contribute significantly to achieving an essential objective of neuroscience research: understanding the logic of structural relationships to function.

 

BRAIN 2025 envisioned improved technologies – faster, less expensive, scalable – for anatomic reconstruction of neural circuits at all scales, from non-invasive maps of human brain circuits to dense reconstructions of synaptic wiring diagrams in animals. Many powerful brain-mapping tools that were in their infancy in 2013 have now emerged as a result of BRAIN Initiative support. These include serial-sectioning electron microscopy (EM), brain clearing, expansion and labeling methods, high-speed optical functional imaging, optogenetic excitation of cellular circuits, and human laminar/columnar-scale functional mapping. Other novel approaches include the use of genetic barcodes for large-scale connectomics. Rapid, worldwide adoption of these relatively new methods affirms their scientific impact, and they have spurred other new deliverables such as detailed, large-scale cell atlases in a variety of species including fish, rodents, NHPs, and humans.

 

Nevertheless, the overall potential of microscale-mapping methods has yet to be fully unlocked, with bottlenecks remaining in tissue processing, imaging throughput, and subsequent comprehensive and quantitative analysis of resulting data. The explosive growth in machine-learning tools over the past 2 to 3 years has great potential to overcome these technical barriers, making possible analyses of ever-larger data sets. These include dense-EM and x-ray microscopic image collections not anticipated at the start of BRAIN 1.0; these very large new datasets offer an important opportunity for BRAIN 2.0 to initiate closer and deeper engagement with data scientists. Another evolving opportunity is connecting new knowledge about structure to dynamical measures of in-vivo cell and circuit activity, across scales ranging from local synaptic connections to whole-brain networks.

 

To solve some of the most vexing challenges in human health, from addiction to dementia, it will be essential to understand links between molecular, cellular, and network-wide activity, and how changes in anatomic circuitry can lead to aberrant function. We are thus faced with a need to correlate anatomical architecture at the nanometer level with data reflecting dynamic behavior of cellular circuits continually shaped and modified in real time by neuromodulatory and other forces. The ability to assess these dynamic factors in vivo across large scales is an essential priority. In addition, we will need more ambitious and comprehensive models and theory to truly understand how function is built on structure to yield behavior. Finally, with consolidation of tools to map at scales from individual synapses to whole human brains, a major challenge for BRAIN 2.0 will be to support a shift from mapping at multiple individual scales to mapping across scales – in key animal models and ultimately in humans. These advances should offer key insights into disease prevention and treatment.

 

BRAIN 2025 vision: Mapping at multiscale and linking anatomy to function  

BRAIN 1.0 set an ambitious agenda with both short- and long-term goals in the domain of multiscale mapping. Several BRAIN 1.0 goals focused on creating projectional and connectional maps in animal models of increasing size and complexity. These included development of methods for efficiently mapping and annotating projectomes in experimental animals, including NHPs, as well as in human-tissue blocks, using clearing methods or serial-sectioning techniques. A second key goal was development of new techniques using EM and/or super‐resolution light microscopy to link molecular signatures of cells and synapses to their nanoscale connectivity. To be feasible, these methods required companion computational tools, e.g., to reduce the time needed to segment volume-EM data sets by 100- to 1,000‐fold, toward reconstruction of micro‐connectomes of individual animals studied physiologically and behaviorally (e.g., zebrafish). Finally, BRAIN 1.0 recognized the importance of increasing the spatial resolution of current functional imaging tools in humans, including validating magnetic resonance imaging (MRI)-based methods for mapping the macro-connectome, improving resolution of human functional MRI to the range of 0.3-0.4 cubic millimeters, and applying mapping and projectomic tools developed in animal models to human brain-block specimens.

Longer-term goals from BRAIN 1.0 included linking circuit anatomy and behavior and understanding how individual variance across scales affects animal and human behavior in health and disease. A long-term objective of microscale reconstruction of key brain areas in animals was to assess the relationship between individual connectivity variation and functional differences in outputs. Similar approaches aimed to define links between projectomes of specific circuitry in individual animals and behavioral variation in those individuals. Both of these goals require analysis of hundreds to thousands of animals – hence the concurrent need to vastly improve speeds of data collection and analyses. In research with humans, BRAIN 2025 goals included even more aggressive targets for increasing the spatial resolution of functional magnetic-resonance techniques, down to 0.1 cubic millimeters, and in the longer term, mapping connectomes with sensitivity to individual variance in hundreds to thousands of human subjects. Documenting and understanding human brain variation, including making use of measurements in humans with neurological and psychiatric disorders, would be facilitated by the use of standardized formats.

 

NIH funding to date: Maps at Multiple Scales

In each year of the BRAIN Initiative, NIH has issued a general call for tools and techniques to access and characterize cells of the brain with cell-type and circuit-level specificity. The first funding opportunity announcement (FOA), “Tools for Cell- and Circuit-Specific Processes,” has supported a range of projects for accessing neurons and mapping their connections, as well as monitoring and manipulating their activity. In fiscal year 2018, NIH issued two additional FOAs to address gaps in this research portfolio: one FOA to target non-neuronal cells (three awards issued) and another FOA to develop methods and capacity for micro-connectivity analyses with synapse-level resolution (five awards issued).

 

Where are we now in brain mapping?

The Priority Area 2. Maps at Multiple Scales component of BRAIN 2025 has produced dramatic advances in a range of areas, reflecting funding by multiple BRAIN 1.0 programs. Below, we highlight some of these new tools to map brain circuitry – at widely varying scales, and in post-mortem samples as well as living tissue – that have been spurred by BRAIN 1.0 efforts.

 

Structural analysis

We have seen remarkable advances in serial EM, X-ray tomography, and automated segmentation. While these techniques predate the BRAIN Initiative, various technology leaps – including large-scale parallel EM-microscope arrays and machine learning-aided advances in image segmentation of individual cells and subcellular structures – have moved these studies from limited-scale demonstration projects to powerful tools for neuroscience inquiry. Nanoscale imaging and computational segmentation in zebrafish is an exciting example, unveiling an entire projectome of myelinated fibers in a vertebrate brain. The work seems ripe for expansion to analyses of entire mammalian brains and portions of human cortex. This should allow reconstruction of connections between every neuron in a sample, along with detailed cellular anatomy of non-neural cells associated with these circuits. The recent development of molecular identification approaches that are compatible with EM reconstructions, such as the use of fluorescently tagged nanobodies with NATIVE, opens exciting new bridges between structure, connectivity, and cell-type identification.

 

New histology driven by neuroscience

Improved brain-tissue analysis techniques are another good example of active progress resulting from the fusion of neuroscience and engineering that BRAIN 1.0 envisioned, in this case through ideas from chemical engineering, via the proliferation of hydrogel-tissue chemistry over the past 6 years, which has now taken root across biology. New materials and approaches have significantly reinvigorated many aspects of century-old histology and microscopy methods, improving sample clarity, accessibility for protein labeling and nucleic-acid labeling/sequencing, tissue-size changes (expansion/contraction), and quality/process reliability. Newer approaches address fluorescent-labeling challenges in innovative ways. These include preserving genetically encoded fluorescent markers such as GCaMP and its multispectral derivatives, as well as deep-sample antibody labeling. Repeated staining cycles are now possible, assisted by non-destructive imaging technologies. These approaches (retrograde and anterograde tracers, activation-driven labeling such as Calcium-Modulated Photoactivatable Ratiometric Integrator, or CaMPARI) still need improvement but allow rich and diverse analyses of activated cell distributions and connectivity across the brain. When combined with in-situ sequencing, such approaches may provide new, high-content information in many species.

 

Microscopy

Expansion microscopy has matured as a way to expand cleared tissue, enabling the use of confocal and light-sheet microscopy to visualize structures below the diffraction barrier with minimal photobleaching. Recent work combining expansion of brain samples with super-resolution lattice light-sheet microscopy is nearing EM resolution and has the added benefit of multiplex-capable fluorescent labeling. Thus, it is now possible to image an entire fruit-fly brain or a slice of mouse cerebral cortex in 2 to 3 days, using multiple markers and achieving an effective resolution of about 60x60x90 nanometers, reflecting the roughly 4-fold physical expansion of the sample.
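
A back-of-the-envelope check of that figure, assuming a raw lattice light-sheet resolution of roughly 240x240x360 nanometers (an illustrative value):

```python
# Back-of-the-envelope: effective resolution of expansion microscopy is the
# raw optical resolution divided by the physical expansion factor.
# The assumed raw lattice light-sheet resolution is illustrative.
raw_resolution_nm = (240.0, 240.0, 360.0)  # x, y, z (assumed)
expansion_factor = 4.0                     # ~4x physical expansion

effective = tuple(r / expansion_factor for r in raw_resolution_nm)
print("effective resolution (nm):", effective)  # -> (60.0, 60.0, 90.0)
```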

 

Molecular identification

Barcoding is another promising new approach for anatomic mapping, in which specific genetic sequences are inserted into cells to identify and record activity patterns or other characteristics. Distinct from in-situ sequencing that seeks to identify native genetic information in single cells (e.g., MERFISH, seqFISH, STARmap), barcoding uses viruses capable of transferring a wide diversity of sequences into cells. In the barcoding method MAPseq, cells in one region are transfected with viral barcodes that are then trafficked along axons, enabling tracking across long-range projections. The technique has been applied to map the brain-wide projections of hundreds of neurons in primary visual cortex. Because it uses microdissection to extract tissue regions for sequencing-based analyses, barcoding provides a more statistically broad evaluation of connectivity between regions than is readily achievable with imaging methods.
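
The essential bookkeeping behind a MAPseq-style analysis can be sketched as follows: tally how often each barcode appears in sequencing reads from each dissected target region, then normalize to obtain a neuron-by-region projection matrix. The reads below are simulated, the region names are placeholders, and the normalization is one simple choice among several.

```python
# Sketch of MAPseq-style bookkeeping: tally barcode counts per dissected
# target region into a neuron-by-region projection matrix. Reads are
# simulated; region names and normalization are illustrative.
import numpy as np

regions = ["LM", "AL", "PM", "striatum"]     # dissected targets (placeholders)
barcodes = [f"BC{i:03d}" for i in range(5)]  # unique neuron barcodes

# Simulated sequencing reads: (barcode, region) pairs.
rng = np.random.default_rng(1)
reads = [(rng.choice(barcodes), rng.choice(regions)) for _ in range(10_000)]

# Tally counts into a matrix: rows = neurons (barcodes), cols = regions.
counts = np.zeros((len(barcodes), len(regions)))
bc_index = {b: i for i, b in enumerate(barcodes)}
rg_index = {r: j for j, r in enumerate(regions)}
for bc, rg in reads:
    counts[bc_index[bc], rg_index[rg]] += 1

# Normalize each neuron's counts to projection strengths summing to 1.
projection = counts / counts.sum(axis=1, keepdims=True)
print(dict(zip(regions, np.round(projection[0], 2))))
```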

 

Human functional imaging

Significant advances, especially those related to fMRI, have emerged in top-down mapping approaches of human brains, yielding detailed cytoarchitectural and functional maps with cortical laminar and columnar specificity. While the biological limits of hemodynamic response measures remain an active topic of inquiry, several BRAIN Initiative projects employ advanced hardware that should push the technical limits of fMRI to voxel volumes of 0.125 microliters, with temporal resolution of 1 Hz or higher. During BRAIN 2.0, these approaches will allow investigators to bridge such measurements to those made at the mesoscale in animal models (including NHPs) using optical methods. This should lay out a credible path to large-scale understanding of networks and circuitry in the human brain.

 

Linking structure to function

The essential goal of mapping brain structure is to gain insight into function. Advances in multiscale functional imaging and large-scale recording methods have greatly advanced our ability to compare functional connectivity to multiscale structure. Fast, functional microscopy methods allow us to image activity at the single-cell level across the brains of increasingly popular small-animal model systems such as zebrafish larvae, fruit flies, and roundworms (which are ever closer to having complete structural connectome information). These techniques span from high-speed functional microscopy approaches, such as light-sheet microscopy, to real-time mesoscale imaging of fluorescent indicators of cellular function across awake mouse cortex (e.g., wide-field optical mapping). Further, electrical recording has undergone dramatic scaling – for example, Neuropixels and Neurogrid – and new tools have been developed to combine such recording with optogenetics and imaging. As will be discussed within Priority Area 3. Brain in Action, the significant development of strategies for large-scale, real-time recording of functional activity across scales has afforded new opportunities to begin to link structure to function. This is exemplified by recent work in which whole-brain two-photon calcium imaging in zebrafish identified single neurons across the brain with neuromodulatory function; the same brains were then fixed and labeled, revealing the molecular cell types of those neurons.

Progress toward linking structure to function is also being made in research on the human brain. Resting-state fMRI gave rise to the term “functional connectivity,” referring to regions of the brain that are inferred to be connected based on the synchrony of activity between regions – even if that activity is seemingly random. Brain-wide maps of functional connectivity obtained with resting-state fMRI over the past 10 years have revealed that such patterns differ in a wide range of disease states. Although these observations have stimulated much wider efforts to map and correlate functional connectivity to a physical connectome using approaches such as diffusion-based imaging, more recent results show that connectivity of these networks can vary dramatically from moment to moment. Similar studies are now being conducted on awake mouse and macaque brains. In some cases, these have been complemented by optogenetic or electrical excitation of key regions, which independently identifies functionally connected regions.

New functional-measurement technologies have thus brought us to a point where in-vivo functional mapping of circuits can be combined with mapping of anatomical connectivity in a single animal or human research participant. These studies will play a critical role in the development of theoretical frameworks and the discovery of fundamental principles. In this way, brain structures can be mapped to their functional correlates across all spatial and temporal scales.

 

Data processing 

The speed of data processing has improved. The time to segment volume-EM data sets has dropped 100- to 1,000‐fold since the start of BRAIN 1.0, through advances in machine learning such as convolutional neural networks for EM-section segmentation, and through improvements in microscope design that include multiple parallel beam paths, numbering upwards of 60 per microscope. Thus, large-scale studies of large regions of mammalian brains (including those from humans, such as whole-thickness cortical columns) are feasible – meaning that analysis of entire brains of animals of increasing size and complexity is a realistic goal for BRAIN 2.0.
 

Goals unmet: Brain mapping

All of the major goals set by this Priority Area have been met. 

 

Gaps and opportunities: Next steps for BRAIN 2.0

A key goal of this BRAIN 2025 priority area is to generate circuit diagrams that vary in resolution from individual synapses to neuronal ensembles. Doing so will improve our ability to integrate knowledge from anatomical and functional domains, in the contexts of both health and disease. Moving forward, we see a need for advancing methods related to tissue processing and data acquisition.

  1. Increase imaging, tissue-clearing, and labeling speeds for large brain regions and whole brains. The time required for tissue clearing and labeling has become the rate-limiting impediment to imaging large samples. Improvements in this domain could greatly enhance the ability of many labs to add anatomical mapping to their functional experiments.

Additionally, challenges previously identified by BRAIN 1.0 deserve continuing emphasis:

  1. Improve multi-scale observations of structure and function. Advances in this area will lead to better ways to link micro-level analyses (i.e., synapse-level) with meso- and macro-scale observations of functioning circuits. Combining functional data with transcriptional and anatomical findings, at all levels of the brain, from single synapses to neuronal ensembles, and in the same individual animal, will clarify links between structure and function – and ultimately, specific behaviors. Several groups are now embarking on efforts to combine results from functional-behavioral studies (from advanced optical methods) with serial-EM data of the relevant local circuits. These efforts, however, have been limited to small regions of mammalian brain or to invertebrate brains. Extending these studies to larger, distributed brain networks linked to complex mammalian behaviors awaits better ways to functionally image and structurally map entire mammalian brains. Tasks that are becoming increasingly commonplace, such as brain clearing, imaging, and annotation, are nonetheless tedious and prone to variability among labs. Thus, it may be valuable for BRAIN 2.0 to establish some type of commercial service to standardize brain-sample processing as well as to generate reliable annotations of cell types and connections.
  2. Create dynamic maps that include non-neural cell types. This step is especially relevant for probing human disease mechanisms. Recent BRAIN 1.0-inspired technology revealed disease-associated genes (particularly those related to psychiatric illness) in astrocytes. Thus, maps that exclude non-neuronal cell types are likely to be incomplete. It will be important to combine within single data sets electrophysiological, metabolic, neurochemical, and gene-expression measurements from a broad diversity of cell types. This should be done both locally and in distributed networks, at both meso- and macro- scales.
  3. Integrate fMRI with invasive activity measures in animals. Given that human studies are largely limited to non-invasive techniques, it will be important to strengthen small-animal and NHP fMRI approaches to improve integration of non-invasive functional MRI signals with invasive fluorescent optical methods and electrophysiology in awake animals performing a range of behaviors. In this way, we can build a deep link between functional mapping studies in humans and functional and anatomical mapping in animals. 
  4. Improve methods for anatomical data analysis. Modern machine learning techniques offer new, faster, and more powerful approaches to remove noise from data as well as to analyze large and disparate datasets, including EM datasets at the micro- and nanoscales. Annotation of cell types, boundaries, and interfaces in very large (petabyte) volumes is a realistic goal for BRAIN 2.0, as are comprehensive definitions of connectomes across those scales. Creative approaches to academic/industrial partnerships may accelerate progress. An even more ambitious machine learning opportunity relates to inferring molecular properties of identified cell types in dense morphological EM datasets. New forms of multiscale and multimodal training sets, in which cell-labeling methods are accurately registered with EM data, are a good start. It is essential for BRAIN 2.0 to treat data storage, access, analysis, and sharing as a high priority, such that the BRAIN investment is applicable as broadly as possible throughout the biomedical research community. 
  5. Improve functional MRI resolution to better than 0.01 cubic millimeters. This goal remains elusive due to two fundamental limitations. First, reducing spatial scale from 400-micron voxels (a short-term BRAIN 1.0 goal) to 100 microns implies a 64-fold drop in voxel volume, and a comparable drop in signal-to-noise ratio (see the worked calculation following this list). New and improved data-science methods, such as machine learning for image reconstruction, and even higher magnetic-field strengths (14T or above) will be needed to achieve this ambitious goal for BRAIN 2.0. MRI alternatives such as BRAIN Initiative-supported magnetic-particle imaging offer another path to achieving the needed high sensitivity, albeit with substantial engineering challenges. The second key challenge is biological – hemodynamic signatures currently define MRI functional contrast, and the spatiotemporal control of blood flow is defined by biology, not technology. While additional work in animal models using optical methods – or even focal microscopy in humans during invasive procedures – should shed light on this issue during BRAIN 2.0, current data suggest that such a technological goal is worth achieving.
  6. Develop signatures of microstructural features mappable by magnetic resonance-based methods. Methods to map projections and tissue microstructure in vivo in humans are today limited to diffusion magnetic resonance approaches. Further technical advances allowing tracing of small fiber bundles across long distances in the human brain may arise from BRAIN 2.0 investments, especially within Priority Area 6. Human Neuroscience.
  7. Integrate theory and experimental expertise. A clear gap remains in our ability not just to acquire maps of the brain at multiple scales, but to understand conceptually how the different levels arise from and interact with each other. This gap in conceptual understanding might be bridged through increased support of theorists to guide interpretation of results obtained by new methods that span spatial scales (also see Priority Area 5. Identifying Fundamental Principles).
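To make the voxel-scaling claim in goal 5 concrete, the arithmetic (our own worked example, not taken from BRAIN 2025) is:

```latex
% Volume ratio of 400-micron vs. 100-micron isotropic voxels
\frac{V_{400}}{V_{100}}
  = \left(\frac{400\,\mu\mathrm{m}}{100\,\mu\mathrm{m}}\right)^{3}
  = 4^{3} = 64
```

Because thermally dominated MRI signal per voxel scales roughly with voxel volume, signal-to-noise ratio falls by a comparable factor of 64, which must be recovered through some combination of higher field strength, better coil designs, and data-science reconstruction methods.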

 

Suggested short-term goals for BRAIN 2.0:

 

Model systems

Suggested new short-term goals include:

  1. Improve throughput in clearance and labeling methods; and develop and disseminate software and machine learning tools to efficiently analyze the resulting dense three-dimensional datasets. Generate extensive collections of nanobodies directed against key cell-type markers, which are compatible with EM reconstruction and will greatly enhance the impact of EM-based structural analyses. Access to cloud-based graphics-processing-unit technology and data-storage solutions is likely to be necessary for maximal use of these data.
  2. Continue to develop and expand the study of neuromodulator action, both microscopically and at meso- and macroscales. In particular, invest in synapse-specific visualization of distinct neurotransmitter systems in intact circuits. Develop tools to monitor release of specific neurotransmitters and activation of their cognate receptors. 
  3. Improve transsynaptic anterograde viral tracing in living cells and expand viral tracing to models other than the mouse brain. Transsynaptic retrograde viral methods have greatly enhanced our ability to identify input from defined neuronal populations (and thus trace neural circuits). Although barcode-based tracing has provided new information about brain-wide projections of defined neuronal populations, methods for jumping forward across synapses (anterograde tracing) remain unavailable, except those requiring toxic or inefficient reagents. Thus, we cannot yet identify postsynaptic targets (i.e., the actual connectivity) of labeled axons, creating a significant challenge for mesoscale-circuit mapping. 

Previous BRAIN 1.0 goals related to model systems and humans that warrant continuing attention include:

 

Model systems

  1. Integrate optical imaging and electrophysiology with functional magnetic resonance methods in rodents and NHPs. These steps will ultimately enable measurement of diverse behaviors in awake animals and build an important bridge from animal to human studies.
  2. Continue to invest in efforts to map both structure and function in the same animals. A special set of challenges arises for studies that aim to directly correlate structure to function in individual animals, e.g., registering functional to structural maps. As already mentioned, it may be valuable for BRAIN 2.0 to establish a commercial service to generate reliable, standardized annotations of cell types and connections.

Humans

  1. Advance our understanding of non-invasive measures of brain microstructure made using MRI or other electromagnetic methods as well as PET. Similarly, efforts should continue to clarify the strengths and limitations of structural and functional connectivity measures. Detailed validation studies could pair animal experiments with invasive and noninvasive stimulation paradigms in humans, alongside functional studies and detailed measurements made in ex-vivo specimens.
  2. Reproducibly characterize individual variance (including across the lifespan) from structural and functional measurements.  

 

Suggested long-term goals for BRAIN 2.0:

Several long-term goals not highlighted in BRAIN 1.0 warrant attention in BRAIN 2.0:

Model systems

  1. Evaluate the whole-mouse brain connectome at the EM level (see transformative project below), integrating in-vivo functional and molecular correlates acquired before death. This goal strives to infer circuit function from physical structure, as well as identify limits to that goal; i.e., to determine which additional parameters are needed to predict function and behavior across scales. Corresponding in-vivo measurements of synaptic activity (both inhibitory and excitatory), in conjunction with neuromodulatory inputs, will be critical for developing and testing network theories of brain function.
  2. Obtain whole-primate (NHP, then human) brain projectomes from brains of individual animals that have been functionally characterized. A critical shortcoming of current primate atlases is that they are compiled from many individuals, which is problematic due to variation across individuals in layout of areas and connectivity. Longer-term, scaling these approaches with high-throughput capability will inform the study of individual variation.

 

Humans

  1. Achieve whole-brain, high-resolution (spatiotemporal), functional magnetic resonance unrestrained by biological limits from rapid-gradient switching and high-field radiofrequency coils. Achieving this goal will deepen our understanding of how functional activation and connectivity work together across brain regions. While BRAIN 1.0-funded instrumentation advances have set the stage for this endeavor, significant new investments are needed, such as in field-generating mechanics and novel gradient design. At the onset of BRAIN 2.0, this appears to be a realistic, albeit challenging, goal.
  2. Apply machine learning methods to compare and contrast homologous regions in the mouse and human brain. These data, combined with knowledge from the mouse whole-brain connectome project, will inform understanding of the human connectome at the nanoscale.  

 

Other BRAIN 1.0 goals require continued effort going forward:

  1. Use improved high-throughput clearance and labeling methods and rapid serial-sectioning EM tools to study human cortical and subcortical structures.
  2. Develop a high-throughput paradigm to develop and apply novel PET tracers for key molecular targets (e.g., neuromodulatory receptors, synapses). Achieving this goal awaits improved PET-camera technology and would benefit greatly from an academic/industrial partnership.
  3. Combine in-vivo and ex-vivo data to establish fundamental links between structural and functional connectivity in humans, including the role of natural variation. When are these directly interpretable? When they are not, what is the significance of observed functional connectivity? 

 

In summary, we have seen substantial progress in Priority Area 2. Maps at Multiple Scales, reflected by impressive improvements in tissue processing and imaging that are bringing brain regions and circuitry into sharper relief for continued investigation. Opportunities for BRAIN 2.0 include increasing the speed and efficiency of these powerful new tools; expanding analyses to larger brains; increasing mapping of non-neuronal cell types and synapses; integrating structure and function mapping in the same brain; and acquiring and refining data-science advances to facilitate cross-species comparisons. At the completion of BRAIN 2025, we expect that continued progress in this area will allow us to understand the structure of the brain and its numerous functions more fully. This multidimensional view will be transformative for developing therapeutic approaches attuned to the complexity of this organ.

 

Priority Area 3: BRAIN IN ACTION

While identifying all brain cell types and the wiring diagrams connecting them forms essential groundwork for understanding neural circuits, recording neuronal activity in behaving animals and evaluating what signals are encoded – and how they change in different behavioral contexts – is essential for a mechanistic understanding of brain circuits. Linking hypothesis-driven experiments with modeling and theory will lead to genuine insights into the basic principles of brain circuit organization and function. A critical step ahead is to study more complex behavioral tasks and to use more sophisticated methods of quantifying behavioral, environmental, and internal state influences on individuals. Further, to continue to make progress in understanding the brain in health and disease, we need to be able to access specific cell types or other identified circuit elements and measure various aspects of their dynamic functions (e.g., neuromodulators). We also need effective methods to map aberrant patterns of brain activity, to derive therapies for the brain dysfunction central to so many debilitating human conditions. 

BRAIN 2025 vision 

BRAIN 1.0 set out the ambitious goal of achieving 10- to 100-fold improvements in capabilities for electrical and optical monitoring of brain activity and similar improvements in human neuroimaging. Expected advances included substantial increases in the number of individual cells recorded, improvements in activity sampling, and better methods for linking cells and cell types with specific behaviors (e.g., through immediate early-gene tagging). BRAIN 1.0 posited that a few test cases in model organisms such as larval zebrafish or the roundworm would empirically answer the question: How many neurons do we need to record? In these relatively simple systems, large-scale, nearly complete neuronal recordings are possible, allowing scientists to begin to understand how many neurons are essential to build a comprehensive picture of circuit function.

NIH funding to date: Brain in Action

NIH has issued three FOAs to develop technologies for recording neural activity – each representing a different stage in the development pipeline from initial concept through technology optimization and iterative engagement with early adopters. The primary goal of these FOAs is to enable new capabilities for in vivo experiments, at or near cellular resolution, in animal models. Neural activity is defined broadly to include electrical activity, neurotransmitter and neuropeptide signaling, as well as plasticity and intracellular signaling. Technologies funded through these FOAs represent diverse approaches including optical, electrical, magnetic, acoustic, and genetic recording. The current NIH research portfolio of imaging methods offers the opportunity for investigators to continue to develop instrumentation capable of imaging the brain faster, deeper, and more broadly. 

 

Where are we now with recording neural activity?

The Priority Area 3. Brain in Action component of BRAIN 2025 has made good progress on many fronts. 

Electrical recording

Methods for deep-tissue and surface-level electrocorticography (ECoG) recording have seen dramatic leaps in scale, with electrode numbers increasing from 10 to nearly 1,000. Although funded outside of the BRAIN Initiative, the Neuropixels probes have stimulated substantial excitement. Electronics for filtering, amplification, multiplexing, and digitization have been integrated into recording devices. Many complementary approaches now enable electrophysiology to be combined with optical imaging and interrogation techniques such as optogenetics and pharmacology. These include monolithic integration of light-emitting devices into recording probes; transparent conductive oxides and graphene structures that permit optical imaging or stimulation; and multifunctional fibers with recording, optical stimulation, and drug/gene delivery capabilities.

Optical recording

Fluorescent indicators for monitoring calcium signals now offer an order of magnitude greater signal-to-noise ratio compared to a decade ago, sometimes allowing detection of individual action potentials, and long-sought robust optical-voltage detection is becoming a reality. Recent work has also engineered fluorescent indicators to monitor a range of neurotransmitters and neuromodulators, such as dopamine, providing an important new chapter on understanding cellular activity in the brain. Encouragingly, the research community developing these molecules has established a healthy culture of rapidly sharing their discoveries via the plasmid repository Addgene, viral-vector core facilities, and suppliers of transgenic animals. 

We are also seeing progress toward achieving the necessary temporal resolution and fields-of-view in behaving animals for using these new forms of optical recording. Small-model organisms such as roundworms, zebrafish larvae, and fruit flies can rapidly advance novel optical-labeling and manipulation techniques. Faster forms of light-sheet microscopy, spinning-disk confocal microscopy, computational imaging, and two-photon microscopy have improved volumetric-imaging speeds in these model organisms. Meanwhile, sophisticated strategies for tracking and generating more complex behavioral experiments that are compatible with simultaneous optical recording are opening the door to real-time, whole-nervous system read-outs in awake, behaving organisms.

Other improvements are apparent in speed and field of view, which enhance the ability of microscopic methods to analyze mammalian brains. Advances incorporating 3-photon excitation for deeper imaging within the mammalian brain have also been widely adopted. Another area of progress for large-scale mapping is two-photon mesoscope technology that includes robotically controlled systems for imaging NHP brains. Gradient-index, lens-based micro-endoscope imaging is transforming our understanding of functional activity of specific cell types in deep-brain structures. This technology, which mounts a small, lightweight camera directly on the head of an awake, behaving rodent, has been widely adopted. A commercial version is available, and hundreds of labs are using an open-source version called the miniscope. Wide-field, fluorescence-based mapping of cellular activity and hemodynamics over the entire dorsal cortical surface of behaving mice has also been refined, permitting acquisition of rich (albeit lower-resolution) high-speed recordings in both head-fixed and freely moving rodents. Also advancing rapidly is the ability to obtain optical readouts from multiple cell types simultaneously using spectrally distinct probes, which will help us understand links between cell-type activity and behavior. Other optical methods will soon be able to visualize binding of specific neurotransmitters. 

Data analysis (see also Priority Area 5. Identifying Fundamental Principles)

The faster streams and higher volumes of data generated by new in-vivo imaging and recording technologies confront end users with new challenges for data analyses and interpretation. Fortunately, a range of new analytic tools have been developed for spike sorting. For example, Kilosort can process data from 384-channel electrode arrays in near real time. Software for optical microscopy and miniscope calcium and voltage-imaging data has also been improved and shared, enabling extraction of the time courses and locations of individual cells. It is also now possible to quantify animal behavioral data such as webcam streams of freely behaving mice. Machine learning and other artificial intelligence approaches are integral to this progress.
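For readers unfamiliar with the preprocessing these tools automate, the sketch below shows the simplest possible version of spike detection – threshold crossings on a single bandpassed channel, using a median-absolute-deviation noise estimate, a common convention in extracellular electrophysiology. It is an illustrative toy on synthetic data; production sorters such as Kilosort instead use template matching across hundreds of channels.

```python
# Toy spike detection: negative threshold crossings on one channel.
import numpy as np

def detect_spikes(trace, fs, threshold_sd=4.5, refractory_ms=1.0):
    """Return sample indices of negative threshold crossings.

    trace: 1-D bandpassed voltage trace; fs: sampling rate in Hz.
    The threshold derives from a robust (MAD-based) noise estimate.
    """
    noise_sd = np.median(np.abs(trace)) / 0.6745   # MAD noise estimate
    thresh = -threshold_sd * noise_sd
    crossings = np.flatnonzero((trace[1:] < thresh) & (trace[:-1] >= thresh)) + 1
    # Enforce a refractory period so each spike yields one event
    min_gap = int(refractory_ms * 1e-3 * fs)
    spikes, last = [], -min_gap
    for idx in crossings:
        if idx - last >= min_gap:
            spikes.append(idx)
            last = idx
    return np.asarray(spikes)

# Example: 1 s of synthetic noise at 30 kHz with a few injected "spikes"
fs = 30_000
trace = np.random.randn(fs)
trace[[5_000, 15_000, 25_000]] -= 12.0   # crude spike-like deflections
print(detect_spikes(trace, fs))          # expected: ~[5000, 15000, 25000]
```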

Functional labeling for ex-vivo evaluation

A type of approach that departs from in-vivo imaging exploits the induction of immediate early genes during episodes of neural activity associated with a behavioral response to identify cells involved in that specific behavior. Further, improvements to systems such as CaMPARI may lead to new generations of photoconvertible proteins that can image integrated calcium activity of large populations of cells over precisely defined time windows. Other technology improvements for circuit labeling include engineered adeno-associated viruses that can transport genes efficiently and noninvasively across the blood-brain barrier. Similarly, new generations of improved engineered rabies virus permit longitudinal functional studies on identified projection neurons based upon non-toxic, retrograde labeling.

Larger-scale imaging technologies

Hardware advances, including high magnetic fields and newly developed radio-frequency coils and sequence designs, are improving both spatial resolution and imaging speed. For example, high-field scanners (7T) bring fMRI resolution to near 0.5 millimeters. Functional ultrasound has emerged as a minimally or non-invasive technique to map hemodynamics of entire rodent brains, offering higher spatial resolution than fMRI and applicability for use in freely behaving animals (Macé et al., https://www.nature.com/articles/nmeth.1641). Photoacoustic tomography now permits volumetric imaging at 50-Hz frame rates with sensitivity to hemoglobin oxygenation, while studies are underway to explore the potential of photoacoustic approaches to measure direct indicators of cellular activity such as the genetically encoded calcium indicator GCaMP. Functional near-infrared spectroscopy (fNIRS), a technique that offers lower-resolution hemodynamic recording without the use of ultrasound, has been dramatically improved and much more widely adopted for human studies in both adults and infants.

Goals unmet: Recording neural activity

The most ambitious goals from BRAIN 2025, such as recording the activity of 1 to 10 million neurons in a behaving mammal, remain out of reach. 

Recording devices. While recording microelectrodes have advanced significantly, probes with configurations that go beyond simple laminar arrangements are still needed. Novel recording devices using nanowires or other nanofabricated structures await in-vivo testing and validation against existing electrophysiological approaches. Carbon-fiber electrodes have emerged as inexpensive and chronically viable electrodes, but they face challenges from laborious hand-assembly methods and the need for sophisticated implantation techniques, given the delicate nature of ultra-miniaturized electrodes. While we have seen good progress in wireless recording, artefacts from concurrent wireless power delivery remain a problem. Recent efforts in artefact-rejection circuits, however, promise high-density closed-loop stimulation and recording. There also remains a need for high-throughput electrophysiological and multimodal tools that operate consistently over extended periods. 

Understanding neuronal codes. Optical methods used in zebrafish larvae and roundworms now permit recording from every neuron in the brain or nervous system of these small, tractable organisms during increasingly complex behaviors. Such approaches will likely soon inform estimates of the number of neurons required to account for observed behaviors based on circuit activity (see Priority Area 5. Identifying Fundamental Principles).

Gaps and opportunities: Next steps for BRAIN 2.0

The ultimate aim of this Priority Area is to understand the dynamic interplay of activity across cells and circuits in behaving organisms. Progress is underway, but several avenues present new opportunities. These include improving (and making accessible to individual investigators) electrophysiological, neurochemical, and imaging tools that can access more brain regions than currently possible (in humans and in NHPs); developing and applying powerful data-science techniques; and expanding human capital through interdisciplinary approaches and via increased investment in theoretical research. 

Moving forward, several opportunities exist for BRAIN 2.0, reflecting a balance of new directions and continued activity. Among these are some aspects not directly emphasized in BRAIN 2025:

  1. Expand functionality and integration of electrophysiological and neurochemical methods. While silicon probes have increased electrode counts dramatically, these instruments remain poorly suited for chronic long-term studies in freely moving subjects. If refined and optimized, nanomaterials-based tools could provide fundamental progress in in-vivo recording capability. Chemists and materials scientists should be encouraged to collaborate with neuroscientists. BRAIN 1.0 focused on electrophysiological, calcium, and hemodynamic recording of circuit dynamics, but neurochemical targets (including neuropeptides) are also ripe for new applications. 
  2. Capitalize upon machine learning-based data analyses. Major recent progress in machine learning techniques introduces significant opportunities for automating data analysis in a range of settings, including measuring animal behavior and cleaning/de-noising varied datasets – and ultimately, for creating models, predictions, and frameworks to map brain activity onto behavior. Unfortunately, these burgeoning techniques are beyond the reach of many labs that nonetheless will soon acquire huge quantities of real-time neuroscience data. BRAIN 2.0 should help provide investigators access to, and help in adoption of, such powerful methods. Improved interdisciplinary education and training for both trainees and established scientists should be promoted to advance understanding of the capabilities and limitations of various data-science techniques. Within the field of data science itself, improved methods are needed to interpret neural-net/deep-learning models and outputs, toward deriving mechanistic insights about brain function (see Priority Area 5. Identifying Fundamental Principles).
  3. Improve tools for studying primate brains. Currently, a chasm exists between the imaging tools used for studies of mice and those used for studies of humans. Challenges arise from a range of issues including brain size, motion-related measurement barriers, limited progress from small studies, cost, and ethical concerns. Refining tools used in rodents to make them applicable to NHPs is generally the most effective way to gain insights on primate brain function and to address the technical challenges that prevent tools and treatments from applying to humans. Improvements in transgenic labeling and fluorescent indicators (and techniques for imaging larger and deeper brain areas) could bridge the gap in brain-mapping studies between mice and humans. Examples for increased NHP-research emphasis include testing and optimizing electrical-recording, non-invasive imaging, and neuromodulation technologies – which would in turn contribute toward understanding compatibility and safety of use in humans. Improving NHP-compatible functional-recording technologies to levels currently possible in mice would provide valuable correlates for ongoing cell-type analyses in all primates. Extending imaging depths for NHP studies will also facilitate studies of smaller mammals, vertebrates, and invertebrates, while enabling broader cross-scale and cross-species comparisons. Guidance for studies involving NHPs is necessary given the particular and potentially unique ethical issues arising when the NHP model involves modifications that more closely mimic human physiology or behaviors. Issues related to NHP models are discussed in more detail in the Neuroethics Roadmap.
  4. Develop novel human brain-imaging technologies beyond fMRI. Potentially game-changing approaches include ultrasound, electroencephalography, magnetoencephalography, positron-emission tomography (PET), and functional near-infrared spectroscopy, but all require further investigation to establish their value.

Other opportunities identified by BRAIN 1.0 deserve continuing emphasis:

  1. Expand optical imaging. Most functional-imaging progress in mammalian brains has been limited by accessibility to cortical and other superficial structures. Optical recording from deep-brain regions with endoscopic methods is still tissue-destructive and too infrequent for robust comparison to that from other regions. Imaging from deep cortical layers also remains an important challenge, as feedback to subcortical pathways originates in these layers.
  2. Develop tools to measure synaptic strength and neuromodulatory function. One goal is to gain the ability to quantify the function and strength of inhibitory synapses. Another direction is to further elucidate the function of excitatory synapses, extending technology beyond calcium imaging. Human studies of synaptic density and neuroreceptor signaling are achievable via PET but will require new tracers with better understood targeting and pharmacodynamic properties. We are still unable to fully assess key elements of synaptic function in vivo in humans. The use of next-generation PET cameras with higher sensitivity, coupled with novel synaptic tracers, can be combined with distributed functional MRI data to measure the influence of neuromodulators on distributed networks and circuitry, and map these dynamic changes to quantitative measures of receptor trafficking, all while tracking behavioral measures. Parallel strategies could be employed in model species.
  3. As human datasets become more multimodal and complex, it is likely that we will gain granularity in deciphering human memories, thoughts, and emotions – and neuroethical issues will become increasingly significant. This is particularly true regarding deciphering circuit function, since circuits are key to understanding higher-order behaviors. Neuroethical concerns include but are not limited to:
    1. To what degree are a research participant’s memories and thoughts reflected in collected data?
    2. Who has access to these data and for what uses?
    3. If brain circuit and neuroimaging data are used beyond the lab in contexts such as legal cases, consumer marketing, and national security, are existing legal and regulatory structures adequate to ensure that brain data are not misinterpreted or misused?
    4. With improved data analysis techniques, how likely is it that data intended for one use will yield unforeseen information bearing on other aspects of a participant’s privacy?
  4. Importantly, as such models are used in human analyses linking brain activity to behavior, scientists must carefully consider personal biases that may skew hypotheses, algorithm development, and data analyses. Neuroscientific research related to brain health will necessarily involve setting bounds to establish which behaviors are considered “normal” and which are considered “abnormal”; however, these bounds may not be universally accepted. Some variations reflected by human lived experiences may be not only acceptable, but perhaps desirable. Scientists and ethicists can explore together, from the inception of an experimental design, how these biases and assumptions might inform their studies.

 

Suggested short-term goals for BRAIN 2.0:

Some suggested new short-term goals for studying the brain in action include:

  1. Explore real-time interactions between different cell types, neuromodulators, and activity during short- and long-term behaviors. Achieving this goal requires development of a diversity of sensing and imaging techniques to monitor a variety of electrical and chemical signals in a range of cell types. An important prerequisite for such studies is improved transfection technology, such that the activity of large numbers of neuromodulator systems can be tested while avoiding costly and time-consuming genetic approaches in mice.
  2. Combine ultrasound methods with direct sensing of neural activity, possibly through development of near-infrared photoacoustic-compatible indicators of neural activity. Functional ultrasound is a minimally/non-invasive, lower-cost alternative to electrocorticography (ECoG)/fMRI with potential for use in awake, behaving animals and even humans (see Priority Area 4. Demonstrating Causality). This and similar photoacoustic methods can image hemodynamic contrast in an entire rodent brain and are suitable for imaging newborn humans.
  3. Develop new NHP recording and imaging technologies. NHP models can serve a critical bridging role in bringing approaches that have been developed and highly refined in rodents to a stage where they can be used in the human brain. Such studies will also develop and validate a pipeline toward new human brain recording technologies – both in terms of safety and efficacy, as well as to establish the utility and value of such recordings for clinical use. Improving NHP-recording technologies could also advance our dynamic understanding of the brain during complex cognitive tasks not possible with rodents. This goal might also include development of more transgenic NHP models, pending comprehensive review of scientific, budgetary, and ethical factors relevant to use of NHPs. 
  4. Develop tools to analyze naturalistic (untrained) and trained behaviors. The reductionist approach typical of decades of biomedical research has been fruitful for understanding the nervous system through models of simple, trained behaviors. This approach will remain important going forward, but now we also need to include more ethological behaviors and more naturalistic environments, for example through use of virtual reality. However, little will be learned if behavioral constraints are relaxed without the ability to measure various behavioral components. Thus, we need robust, automated methods to detect and classify naturalistic behaviors in freely moving animals and humans, in various settings from monitoring individuals to monitoring social interactions. Importantly, the resulting data will only be useful for integration with other knowledge if such methods have temporal resolution commensurate with calcium or electrophysiological signals. 
  5. Develop tools to assimilate and link brain recordings with behavior. Integrating activity mapping with theory is an important step in neuroscience discovery: it is necessary to implement approaches that relate activity in local areas to activity in distributed and brain-wide circuits. Fortunately, increasingly sophisticated data-science analysis techniques, including several machine learning approaches, can suggest mechanistic links between cerebral signals and behaviors (see the sketch following this list). These modern techniques go beyond predictive value; they can find and expose patterns and connections not discoverable by human study or intuition. Moreover, it is likely that data-science methods can penetrate the growing volume of complex multi-dimensional recordings unlocked by BRAIN Initiative-funded technologies. Engaging data scientists – as well as supporting data-science training for both neuroscientists and neurotechnology developers – is urgently needed to leverage this burgeoning opportunity.
  6. Integrate technology development and information transfer between model systems. During BRAIN 2.0, we must face the obstacles that make it difficult to take emerging tools from small, lab-specific studies in rodents to application in primates, including humans. Achieving this goal calls for collaboration with neurosurgery teams and FDA, as well as considering the ethical implications.
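As one minimal illustration of linking recordings to behavior (goal 5 above), the sketch below fits a linear decoder – ridge regression – from synthetic "population activity" to a continuous behavioral variable and reports held-out accuracy. All names and data here are our own placeholders; real pipelines involve far richer nonlinear models and more careful cross-validation.

```python
# Toy decoder: predict a behavioral variable from population activity.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_timepoints, n_neurons = 2_000, 300

# Synthetic "recordings": population activity with an embedded signal
true_weights = rng.normal(size=n_neurons)
activity = rng.normal(size=(n_timepoints, n_neurons))
behavior = activity @ true_weights + rng.normal(scale=5.0, size=n_timepoints)

X_train, X_test, y_train, y_test = train_test_split(
    activity, behavior, test_size=0.25, random_state=0)

decoder = Ridge(alpha=1.0).fit(X_train, y_train)
print(f"held-out R^2 = {decoder.score(X_test, y_test):.2f}")
```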

Some specific goals identified by BRAIN 1.0 deserve special emphasis going forward:
 

  1. Continue to advance electrophysiological technologies. Dense microelectrode probes are poised to transform electrophysiological recordings, with electrode counts potentially reaching 100,000 channels or more. BRAIN 2.0 should aim for the capacity to record neurons in behaving animals stably across years, not weeks – to promote understanding of both development and aging. The ability to track neurons in behaving animals over long periods (electrically or optically) will help us understand neuronal plasticity and stability of neuronal populations. Also needed are methods for high-density recording in freely moving animals with the ability to sample over large expanses of cortex and sub-cortical structures. Achieving this goal might require shifting emphasis from silicon probes to softer materials with capabilities approaching those of silicon devices.
  2. Continue to advance optical-recording technologies. Especially for studying the human brain, we need the ability to record with single-cell resolution (e.g., multiple or multiplexed miniscopes) in regions other than the cortex (as well as in combination with cortical regions) of freely moving animals. Whole-brain or whole-nervous system imaging of small model organisms will inform trade-offs between comprehensive, ensemble-level recordings and distributed recordings of subsets of single cells. Thus, the need to improve imaging technologies for high-speed, comprehensive imaging of such small-animal systems is still relevant.
  3. Develop better optical reporters of cellular activity. Imaging of voltage allows a direct link between structure and function, which is difficult to obtain using electrophysiology. In particular, the ability to image neuronal voltage at synaptic resolution would allow much better understanding of how inhibitory and excitatory inputs onto a dendritic tree are transformed into an output. More generally, being able to restrict optical indicators to the soma or nucleus of brain cells will help to isolate signals from neighboring cells, but it hinges on defining technical specifications for different recording tools for regions and cellular locations – including neuropil, which also contains glial cells.
  4. Develop dynamic methods for detecting the release of specific neuropeptides in vivo, in real time. Neuromodulators can affect brain computations profoundly, and we cannot understand their spatial impact based solely on anatomical connectivity. Achieving a functional chemical connectome will likely require combinations of methods. These include use of fluorescence-based, synaptic-level measurements and optical detection of neuromodulator signaling through G-protein coupled receptors. These measurements should not be limited to "traditional" neurotransmitters but also be able to monitor neuropeptides, lipids, and other signaling molecules. Improved chloride sensors would uncover important information about inhibitory signaling. Combining electrophysiology with neurochemical measurements will enhance what we know about neurotransmission mechanisms and drug-receptor interactions. Moreover, combining optogenetic perturbation of one deep-brain region with micro-endoscopic imaging in another deep-brain region will advance understanding about neuromodulation across structures.
  5. Develop methods to label active neurons. Improved methods for permanent “activity stamping” to label active neurons in vivo at high spatiotemporal resolution will help determine which neurons participate in, or drive, specific behaviors. 
  6. Neuroethical considerations for human-brain circuitry analyses: Incorporate neuroethical deliberations, considerations, and recommendations for performing and advancing the work from the onset of the experiments through the lifetime of the study. This should entail having a neuroethicist as part of the research team from the inception of the work through the end of the study, a suggestion noted in the introduction of this report as important for BRAIN Initiative-funded research. As articulated previously, careful consideration of sharing and potential use of participant data means considering both participant privacy and rules for data sharing and access. Institutions need to consider whether current safeguards are legally and ethically adequate for consent in situations in which human research participants (and potentially scientists and health providers) do not thoroughly understand the level of risk involved in restoring function or changing personality – or other unpredicted outcomes – resulting from circuit remodeling.

Suggested long-term goals for BRAIN 2.0:
Some suggested new longer-term goals for BRAIN 2.0 include: 

  1. Determine the number of cells that must be recorded simultaneously to account for specific behaviors at a given level of precision. This question remains unanswered, highlighting the lack of a theoretical framework to guide experiments outlined by this priority area and associated technologies (see Priority Area 5. Identifying Fundamental Principles).
  2. Develop analytic tools to establish causal links between large-scale neural population activity and complex behavior. This goal is challenging, and to date, it remains largely unsolved. Rapid advances in computer vision and machine learning offer tremendous opportunity to improve spatiotemporal resolution and objectivity of methods that classify naturalistic behaviors in an automated fashion. Most behaviors used in neuroscience experiments, whether sophisticated or simple, are ad hoc designs developed on a lab-by-lab basis. As a result, behavioral findings are often difficult to compare. While it is likely that top-down specifications of preferred behaviors would be counterproductive, there is value in finding consensus for selection of a set of robust and standardized behaviors (preferably relatively natural behaviors).
  3. Image high-speed neural activity throughout the human brain. Watching the human brain in action is a goal that remains beyond our grasp, yet achieving it should remain a high priority for BRAIN 2.0. Animal studies may (but have not yet) provide insights about the ideal information to record as well as its practical uses (see also Priority Area 4. Demonstrating Causality).

 

In summary, we have seen very good progress in Priority Area 3. Brain in Action, driven in part by improvements in hardware and integrated strategies that combine two or more approaches: electrophysiology, optical imaging, optogenetics, and pharmacologic modulation. Opportunities for BRAIN 2.0 include expanding the ability to understand neuromodulatory function; advancing tools to study larger (primate) brains; and developing sophisticated computational tools to better quantify complex behaviors (especially in natural settings). At the completion of BRAIN 2025, we expect that continued advances in this area will provide a clearer understanding of how dynamic activity in and across brain regions instigates so many distinct behaviors in animals and in humans.

 

Priority Area 4: DEMONSTRATING CAUSALITY

This BRAIN 2025 priority area aims to derive interventional technology to test cause-and-effect relationships between structure and function. This type of approach has been fundamental in driving basic understanding of how complex living systems work and has powered remarkable progress in biology over the past century. By way of example, our understanding of what genes do – that is, what genes actually cause to happen in cells – both in health and disease, has been driven by the ability to generate gain-of-function or loss-of-function mutations in single genes. With continued basic research investments, such tools have steadily increased in sophistication, from mutagenesis screens in organisms to transgenic and knockout technologies to RNA interference and CRISPR/Cas gene-editing interventions. Such causal tools targeting gene expression are especially powerful coupled with observational tools that allow assessment of (and experimental guidance by) naturally occurring gene expression patterns, and downstream events resulting from the experimental intervention itself. 

Genomes in living cells and organisms, like brains, are highly nonlinear. They are rich in feedback and parallel processing, and they exhibit redundancy, interconnectedness and interdependence, history dependence, and context-specific states. In using the perturbation tools listed above to test the causal significance of genes in mediating particular processes across biological systems, geneticists became adept at meeting the challenges of such complexity, using observational tools to guide perturbation, rigorous control experiments, and appropriate conceptual frameworks. Similar thinking regarding causality has helped revolutionize cell biology and biochemistry as well, via gain- or loss-of-function interventions at the elemental level of single biochemical messengers and even single amino acids, within the corresponding complex nonlinear biological systems as they operate.

In the years leading up to the BRAIN 2025 report, neuroscientists had begun to develop analogous capabilities for providing gain- or loss-of-function of neuronal activity, at an elemental level – that is, a cell type within the brain of a behaving animal. These causal tools, which now include a broad range of interventional methods, have led to many thousands of discoveries over the past 15 years. The BRAIN 2025 report took note of this revolution as well as opportunities for further developing and accelerating the opportunities provided by causal circuit neuroscience. Among the goals identified were:

  • Increase the number of orthogonal interventional tools to control multiple elements independently
  • Refine intervention beyond cell types to the level of single cells or multiple individually-defined cells

BRAIN 2025 recognized that substantial synergy existed between the development of interventional tools and other domains of neuroscience in the BRAIN Initiative:

  1. Advances in the enumeration of cell types, and in the definition of cell type itself, can inform causal cell type interventions. The BRAIN Initiative's emphasis on cell-type diversity yielded synergy in this area. 
  2. The development of tools to observe activity is critical for the ability to test causality in neural dynamics by matching or modifying naturally occurring activity patterns, as well as for testing the relevance of context – such as monitoring ongoing activity in other cellular populations – for determining elicited physiological or behavioral outcomes. 
  3. Advances in computational algorithms are necessary to implement closed-loop interventions – those interventions that measure the current state to interpret responses in controlled cells and in downstream populations. 
  4. The potential significance of understanding causality for clinical impact can hardly be overstated. There is little doubt that a major barrier to the development of new, safe, effective, and specific therapies for neurological and psychiatric diseases has been lack of causal knowledge: Which cells and cell types and circuits actually cause – rather than correlate with – clinically relevant cognition and actions?

 

BRAIN 2025 vision 

BRAIN 1.0 sought to accelerate the development and application of interventional tools for demonstrating causal relationships between cell and circuit activity and physiological and behavioral processes. Although the initial phase of the BRAIN Initiative was by design intended to boost technology development, it also considered applying these new technologies and other tools in real-world biological settings.

 

New and improved perturbation technologies suitable for controlling cells specified by type, wiring, location, and other characteristics

At the outset of the BRAIN Initiative, causal circuit-targeting tools were chiefly optical (optogenetic, in which a microbial opsin gene is introduced into target cells to confer light-triggered actuation) or chemical. Chemogenetic approaches introduce genetically engineered receptors into targeted cells. Such receptors are not naturally present and respond only to non-natural chemicals – thus restricting the method's effects to targeted cells that carry the gene and encounter the chemical. While these methods were already broadly adopted at the time, only two or three independent channels of control could operate simultaneously and independently in the same preparation. BRAIN 2025 aimed not only to increase the diversity of optical and chemical interventions, but also to explore, develop, refine, and deliver other tools such as genetically encoded actuators, small molecules, and new devices. Other emerging approaches included magnetic neuromodulation and ultrasonic (acoustic) approaches – potentially via disrupting the blood-brain barrier, focal heating, and/or direct neuromodulation.

The ability to target cell types for causal experiments relies on cell-type identification, which in 2012 was based on a limited number of defining features, and usually only one feature, such as the activity of a single promoter. BRAIN 2025 aimed to increase the diversity and complexity of cell identification and targeting by combining many features and selecting features based on more thoroughly validated cell typing. 

The ability to control individually targeted cells (as opposed to all cells of a particular type) was still in its infancy, reflected by the novel use of a guided light beam for single-cell optogenetic control in vivo in mammals. BRAIN 2025 supported scaling up this single-cell intervention to achieve independent control of many individually specified neurons in behaving animals.

 

Application of perturbation tools to behaving animals  

While by design the initial phase of the BRAIN Initiative was intended to favor technology development, BRAIN 1.0 also placed critical emphasis on ongoing application – both for its own sake and to help guide development of practical and useful tools in the pursuit of fundamental knowledge. An overarching goal to be enabled by new technology was to determine the causal relationships between neural dynamics and behavior in a range of systems. Analyzing and perturbing these relationships would require advances in recording methods – such as higher-resolution electrophysiology and wider-field, deeper, multi-site imaging – as well as sophisticated online algorithms for analysis and classification of activity, next-generation automated movement-tracking and motion-correction methods for integration with behavior, and novel devices enabling fast integration of causal interventions with recording.

 

Aligning perturbation to naturally-occurring neural patterns 

Tools combining electrophysiological recording with electrical and optical neuromodulation permit direct monitoring of the electrophysiological effects associated with these interventions. They also “close the loop” on control of neural activity through behavior and by conditioning activity features of the network itself. This allows activity-triggered interrogation of neural activity with millisecond precision in behaving animals, enabling studies of spike timing-dependent plasticity in developing or repairing circuits. Similarly, the emerging ability to control selected cells using optically-detected naturally-occurring activity of those same neurons during behavior opened the door to probing how the firing of individual neurons, cell types, or neural ensembles depends causally upon (or helps to shape) the dynamics of the network within which they are embedded. Optical or electrophysiological tracking of local and global activity patterns thus provides critical information about brain or circuit context, which is likely to have a strong influence upon perturbation responses.

The methods and advances within Priority Area 3. Brain in Action are critical for this work. The ability to observe and detect global or local firing events should be used in combination with statistical methods for causal inference to test interactions that may mediate behavior. To complement observation and to demonstrate causality, the BRAIN Initiative sought to develop and advance tools to control brain activity in a manner that increasingly approaches physiological firing patterns in single cells as well as in distributed ensembles across the brain. 
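To make the closed-loop concept concrete, the following sketch shows only the control logic: monitor activity, test a network-state condition, and trigger a perturbation when it is met. The read_frame and deliver_pulse callables are hypothetical placeholders for a lab's actual acquisition and stimulation interfaces, and the threshold rule is a deliberately simple stand-in for the activity-conditioned triggers described above.

```python
# Schematic closed-loop logic; hardware calls are hypothetical stubs.
import numpy as np

def run_closed_loop(read_frame, deliver_pulse, n_steps=2_000,
                    threshold=0.5, refractory=20):
    """Poll activity each step; when mean population activity crosses
    the threshold and we are outside the refractory window, perturb."""
    cooldown = 0
    for _ in range(n_steps):
        frame = read_frame()              # vector of per-cell activity
        cooldown = max(0, cooldown - 1)
        if cooldown == 0 and float(np.mean(frame)) > threshold:
            deliver_pulse()               # e.g., a light or current pulse
            cooldown = refractory         # suppress immediate re-triggering

# Toy usage with simulated "hardware"
rng = np.random.default_rng(1)
run_closed_loop(lambda: rng.poisson(0.4, size=64),
                lambda: print("stimulate"))
```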

NIH funding to date: Demonstrating Causality

This Priority Area is tightly linked with Priority Area 3. Brain in Action, with many of the same programmatic goals. NIH issued three FOAs to develop technologies to modulate neural activity – each representing a different stage in the development pipeline from initial concept through technology optimization and iterative engagement with early adopters. The primary goal of these FOAs is to enable new capabilities for in vivo experiments, at or near cellular resolution, in animal models. Neural activity is defined broadly to include electrical activity, neurotransmitter and neuropeptide signaling, as well as plasticity and intracellular signaling. Technologies funded through these FOAs represent diverse approaches including optical, electrical, magnetic, acoustic, and genetic modulation. Multiple research groups are attempting to integrate calcium imaging and optogenetic stimulation into experiments in NHPs, although further optimization of expression and imaging conditions is needed. A fourth FOA in fiscal year 2018 called for projects to systematically characterize, model, and validate membrane, cellular, circuit, and adaptive-biological responses of neuronal and non-neuronal cells to various types of stimulation. This research aims to inform future device development (three awards issued).

 

Application of perturbation tools to humans to understand normal nervous system function and mechanisms, causes, and treatments of psychiatric and neurological disease

A complete understanding of brain mechanisms at all levels may not be necessary to modulate the brain therapeutically. As tools emerge to modulate the brain in increasingly physiologically relevant ways, we may discover heuristic methods to shift pathological brains toward healthy function, yielding novel therapeutic strategies for treating brain disorders ranging from autism to Alzheimer’s Disease. Thus, perturbation tools used in animals (e.g., optogenetic, chemogenetic) need not be the same tools ultimately used in humans: Identifying regions and projections important for particular behaviors in animal models may guide interventions with electrodes or transcranial magnetic stimulation. Identifying causal cell types could lead to molecular and medication-based strategies, especially if aligned with the ability to phenotype cells at the molecular level. This is occurring with hydrogel-tissue chemistry and its many variants, an approach also emerging at the outset of the BRAIN Initiative.

 

Where are we now with demonstrating causality? 

The Priority Area 4. Demonstrating Causality component of BRAIN 2025 has been successful in paving the way for application of new technologies to behaviors, at scales ranging from individual cells to brain regions – in organisms ranging from invertebrates to rodents to primates.

 

Improved tools

Two-photon techniques have progressed from single-cell control to the control of whole-organism (mouse) behavior via multiple, individually specified cells. Optogenetic methods now operate across time scales, with fast opsins working at the millisecond scale, and step-function opsins allowing chronic excitation or inhibition without the need for continuous light delivery in living organisms. Chemogenetic tools (e.g., Designer Receptors Exclusively Activated by Designer Drugs, or DREADDs) also permit control over extended time scales without the need for light delivery, using the designer drug clozapine-N-oxide. Researchers have combined experimental palettes of opsins and DREADDs to influence mixed-cell populations; these tools are also compatible with fluorescent-activity imaging. Many activity-readout methods are directly relevant to defining causal relationships between neural transmission and various biological outputs, including genetically encoded indicators of voltage and of neurochemicals such as dopamine. These advances join the emergence of complementary tools developed outside the BRAIN 1.0 investment. 

Along with optogenetics and chemogenetics, magnetic tools are improving, and ultrasonic techniques have also emerged with complementary uses and capabilities. Further development of acoustic and magnetic tools may augment our ability to control multiple, independent cell populations using several stimuli. As these methods mature, they will improve our ability to study activity, structure, and function at cellular resolution – toward understanding circuits that drive precisely defined behaviors, both in health and disease. This will require enhanced neuroethical discussions that evolve over time as science provides increasing insight into behaviors.

 

Combining perturbation and observation

Determining causality requires that perturbation tools and observational methods work together, which has been enabled by the design of specialized hardware. One example is fiber photometry, extended from single-site (Gunaydin et al., Cell 2014) to multi-site recordings (Kim et al., Nature Methods 2016), which enables calcium fluorescence to be detected at each site through the same type of probe that delivers optogenetic control. This method thus captures the naturally occurring timing and magnitude of local neural activity. Other types of hardware combine multiple functional features to permit read-write capability. The use of closed-loop electrical recording and stimulation in humans is also progressing. New hardware designs have increased the spatial extent of interfaces with neural circuitry, in line with the BRAIN 2025 goal of providing access to broad and deep volumes of neural tissue during behaviors. Wide-field imaging approaches now allow visualization of activity in up to tens of thousands of neurons – over millimeters or farther with little temporal delay within an experiment. Other advanced microscopy methods have led to improved optical-imaging depth during holographic optogenetic stimulation with multi-photon (2p and 3p) methods. Variants of light-sheet microscopy provide high-speed and high-quality imaging of shallow cortical regions. Invasive optics such as endoscopes and miniscopes now offer access to deep brain structures in freely moving animals and have been widely shared through an open-source framework. This technology garnered the honor of Nature Methods’ 2019 “Method of the Year.”
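As a small illustration of the observational half of this read-write loop, the sketch below converts a raw photometry-style fluorescence trace into dF/F against a running low-percentile baseline. The percentile convention and the synthetic trace are our own assumptions for illustration, not a specific published pipeline.

```python
# Toy dF/F computation with a sliding low-percentile baseline.
import numpy as np

def delta_f_over_f(fluorescence, fs, baseline_s=30.0, percentile=10):
    """dF/F relative to a running low-percentile baseline (fs in Hz)."""
    half = int(baseline_s * fs / 2)
    f0 = np.array([
        np.percentile(fluorescence[max(0, i - half): i + half + 1], percentile)
        for i in range(len(fluorescence))
    ])
    return (fluorescence - f0) / f0

# Toy trace: slow drift plus brief transient "events"
fs = 20
t = np.arange(0, 120, 1 / fs)
trace = 100 + 0.05 * t + 8 * (np.sin(0.5 * t) > 0.99)
dff = delta_f_over_f(trace, fs)
print(round(float(dff.max()), 3))   # transients stand out above the drift
```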

 

Application to humans

Finally, we have seen progress in applying perturbation tools in humans. This should advance our understanding of both healthy nervous-system function and the mechanisms, causes, and treatments of psychiatric and neurological diseases. Researchers have applied temporal-lobe, closed-loop deep-brain stimulation (DBS) to improve memory, and the therapeutic efficacy of DBS for Parkinson’s disease has been enhanced by closed-loop control. Optogenetics is currently guiding clinical application of TMS, DBS, pharmacology, and combination therapeutics, and the concept of an optogenetics-guided clinical trial has surfaced. Together, these interventional approaches can achieve far greater specificity – even comparable to optogenetic intervention, without gene transduction – if guided by causal knowledge from optogenetics. Optogenetic research in rodent models has already inspired novel treatment concepts leading to reported therapeutic clinical benefit. In a rat model of compulsive drug seeking, prolonged cocaine self-administration decreased the intrinsic excitability of mPFC pyramidal neurons, an effect especially pronounced in drug-seeking animals. Compensating for the hypoactivity of these projection neurons with optogenetic mPFC stimulation prevented cocaine-seeking behavior. Guided by this finding, scientists have demonstrated that stimulation of dorsolateral prefrontal cortex reduces drug use and cue-induced craving in people addicted to cocaine or heroin. Optogenetic methods have also underscored the special efficacy of targeting white-matter tracts with electrical DBS. Guided by these insights, electrical, low-frequency DBS combined with dopamine antagonists in cocaine-adapted mice is being developed as a potential therapy. Finally, there is exciting recent progress in developing and testing, in human research participants, methods for real-time observation, decoding, and closed-loop stimulation, particularly related to stabilizing mood. These methods are based upon multiscale modeling of brain activity that incorporates both neural firing and local-field recordings.

 

Goals unmet: Demonstrating causality

While none of the goals has been fully met, all are being actively addressed, and quantitative metrics and milestones have been achieved.

 

Gaps and opportunities: Next steps for BRAIN 2.0

The brain is a closed-loop, non-linear system that integrates external information with internal representations at multiple spatial and temporal scales to generate thoughts, feelings, and actions. Priority Area 3. Brain in Action outlined a vision for monitoring brains in action. Once fully implemented, the approaches described therein have the potential to uncover key biological substrates of thoughts and behavior by monitoring many components of the brain concurrently with its inputs and outputs. Combined with advanced statistical methods for causal inference, these experiments may reveal mechanisms whereby those substrates interact to mediate behavior. However, implementing perturbations is the gold standard for testing hypotheses and demonstrating causality within organized systems. Within the scope of the BRAIN Initiative, the goal of causal experimentation is to determine the integrated processes by which the brain generates thoughts, feelings, and actions from its components, under its various physiological constraints. 

Critically, we do not believe that it will be necessary to develop neurotechnology that monitors and modulates every physiological variable concurrently (from proteins and synapses to activity of cells and circuits) to achieve physiologically constrained perturbations. Rather, the fundamental principles and theoretical frameworks discovered and advanced through the BRAIN Initiative are revealing which physiological variables (what, where, and when) will be sufficient to approximate broader brain function and to facilitate physiological tuning of modulatory outcomes. Thus, by integrating the monitoring tools proposed in Priority Area 3. Brain in Action with the interventional tools outlined in this section, we expect to be able to demonstrate causality with a degree of certainty far greater than that of large-scale monitoring and causal inference alone (just as with demonstrations of causality in other nonlinear biological systems, such as genomes operating within cells and organisms). 

Advancing this vision will require development and integration of multiple technologies described throughout this report. New challenges for data storage and management arise as we gain the ability to monitor large-scale brain systems and behavior. These challenges will grow exponentially as we complement these data with vast information about the impact of our new perturbation tools on brain and behavior. Additional challenges for data sharing to facilitate analyses across multidisciplinary teams will need to be addressed. Finally, the neuroscience community will need to tackle new theoretical issues (see Priority Area 5. Identifying Fundamental Principles) such as development of fast, online analysis tools in closed-loop protocols conditioned by neural activity – as well as new capabilities in nonlinear control theory appropriate for a system as complex as the brain. Together, the components and vision of BRAIN 2025 – supported by capital investment in the overall BRAIN Initiative by the federal government and partner agencies, and guided by ongoing and careful examination of ethical principles – will, by demonstrating the causal mechanisms underlying brain function, lead to a deeper understanding of ourselves and enable novel diagnostics and treatments for a wide range of disorders. Moving forward, several opportunities exist for BRAIN 2.0, reflecting a balance of new directions and continued activity.

Demonstrating the causality of circuits or systems brings with it the ability to manipulate brain cells, which may affect traits we hold dear as humans – identity, agency, self, emotions, and decision-making. These newfound abilities will also elicit the more familiar neuroethical tensions related to affecting learning processes and memory formation, as well as the emergence of consciousness. As stated in the Neuroethics Roadmap, we have a moral imperative to use the knowledge we gain from the BRAIN Initiative to improve brain health and alleviate suffering from brain diseases and disorders: demonstrating causality in, and manipulating, cells and circuits will likely be critical toward developing precise and beneficial treatments. Yet, insufficiently tempered enthusiasm could also compromise some of the most cherished features of human life – such as the ability to exercise free will. Missteps in the development of these technologies could slow or halt progress in areas that have the potential to generate knowledge that could be used to alleviate human suffering. In addition, without effective communication to non-scientists in sectors such as law, business, and education, there is a risk that neuroscience findings will be accepted uncritically and without proper appreciation of how limited our current understanding of the causal relationships between the brain, behavior, and mental states remains. Further, there is also potential for malign use of this ability to manipulate behavior, which could harm individuals and disrupt societies in fundamental and irreversible ways. In proceeding toward demonstrating causality, neuroethical considerations of human risk should, in general, align with the importance of the scientific question at hand or the severity of an underlying condition. For the same experimental question and technology, sometimes it will be appropriate to proceed cautiously, whereas in other cases, continuing may not be advisable. These and similar ethical issues must be addressed as neuroethics research, both conceptual and empirical, in partnership with the evolving science (see Neuroethics Roadmap for a thorough discussion of neuroethics research opportunities). Neuroethical awareness and consideration can ultimately advance this important work by serving a horizon-scanning function to anticipate and mitigate near- and farther-term consequences of potential uses of these discoveries and emerging technologies.

 

Suggested short-term goals for BRAIN 2.0:

New suggested short-term goals for BRAIN 2.0 include:

  1. Develop methods for precise single-cell optogenetic control in freely moving animals and deep structures. Current efforts, which allow rich and complex asynchronous cell-ensemble stimulation, are extremely useful and informative but are largely limited to head-fixed vertebrate models, limiting the complexity of behaviors that can be tested.
  2. Define, in mammals, the minimal number of individually-specified neurons needed to alter behavior in detectable ways. Although we have a general understanding of control of mammalian behavior with as few as ~20 individually-specified cortical neurons, a logical next step is to define appropriate theory to explain results in the context of computationally framed issues such as noise and controllability.
  3. Define causal circuits for selected maladaptive behavioral disorders, such as addiction, impaired social cognition, aggression, and compulsive behaviors.
  4. Expand machine learning algorithms capable of deep-behavioral analysis in model organisms (rodents and fruit flies). The BRAIN 1.0 investment succeeded in driving integrated evolution of quantitative, precise, high-content behavioral methods appropriate for freely-moving or restrained animals. The next step, for BRAIN 2.0, is to target large, intact primate or human systems in addition to various smaller model systems, building on advanced instrumentation, computation, data management, and analyses.

Several BRAIN 1.0 short-term goals should be reshaped and continued:

  1. Develop strategies to perform quantitative, tunable real-time perturbations of a circuit-dynamical figure of merit. One example is excitation-inhibition balance, with a goal of mirroring the subtlety of various human circuits/conditions and devising control/treatment strategies.
  2. Align perturbations with naturally occurring signals (brain states, behavioral states, circuit states) to measure the effect of temporal and contextual variation on behavior(s).
  3. Predict and control behavioral consequences of perturbations, combining experiments and theory. 
  4. Define causal circuits for key adaptive behaviors of interest, such as cognition, movement planning, sensory perception, and ethologically naturalistic behaviors.
  5. Address challenges of genetic perturbation tools in primates, which remain much less effective than in rodents. There is substantial need for greater transduction volumes for optogenetic and chemogenetic tools. Light, virus, and chemical delivery into large enough volumes to affect behavior is crucial.
  6. Enable direct correlation between circuit manipulation and activity recording with real-time neural ensemble analyses. A tighter connection between experimental approaches and theoretical neuroscience is necessary to design “physics-like” model-testing experiments during behavior with perturbation tools.
  7. Apply emerging perturbation tools (e.g., magnetogenetic, acoustogenetic) to circuits that are currently less easily accessible with established techniques, such as both deep and distributed brain circuits. Alternatively, or in parallel, develop optogenetic approaches (opsins, hardware) that allow deep and distributed control. 
  8. Integrate perturbation techniques with other key BRAIN Initiative‐sponsored technologies: cell-type identification (e.g., aligned with deep molecular phenotyping via hydrogel-tissue chemistry, MERFISH, STARmap), anatomical circuit tracing (MAPseq), large-scale recording of native activity patterns aligned with cell typology (MultiMAP), precise quantification of behavior, and tests of specific theories of neural coding, computation, and dynamics.
  9. Include a neuroethicist as part of the research team from the inception of the work through the lifetime of the research study to articulate and remedy neuroethical concerns arising from altering behavior in humans through directed manipulation of brain circuits. Careful consideration is also warranted in obtaining consent from participants in these experimental paradigms, which carry unknown but potential risks that may alter personhood. 
  10. Ensure equitable participation in research studies whose findings may affect large numbers of people.
  11. Continue research to clarify the ethical implications of NHP models that more closely mimic human physiology and behaviors, with subsequent guidance developed based on the findings. Issues related to NHP models are discussed in more detail in the Neuroethics Roadmap.

 

Suggested long-term goals for BRAIN 2.0:

New suggested goals for BRAIN 2.0 include:

  1. Discover ancestral and canonical principles, establishing deep conceptual links between animals and humans. An ancestral or canonical principle of neural circuitry might be exemplified by identifying the molecular identity of cells in a model organism, then mapping corresponding physiology and behavior across species to test evolutionary conservation.
  2. Translate nanomaterial-based techniques (upconversion, magnetic, ultrasonic) for neural interrogation, taking these from in vitro and boutique applications to robust use in behavioral experiments for circuit dissection. Nanomaterials can act as transducers, delivery tools, and readouts. These technologies remain largely trapped in materials science and chemistry labs and require rigorous evaluation in vivo to facilitate their use for discovery of circuit function, connectivity, and dynamics. 
  3. Based upon deeper understanding of causality in brain-wide dynamics, develop novel diagnostic and treatment-design approaches for neuropsychiatric disorders. Clinically relevant progress in demonstrating causality will enable the entire neuropsychiatry community to more efficiently leverage the vast human-subject literature, unleashing new diagnostic strategies and individualized interventions.

BRAIN 1.0 long-term goals that could be reshaped and continued include:

  1. Advance the scale of multiple single-cell perturbation by approximately one order of magnitude per year. At present, ~100 cells can be controlled independently along with one modality of readout data (calcium imaging) and behavior in the same preparation. Although it is an ambitious goal, we should strive for the ability to access 1,000 cells in year 6, 10,000 in year 7, and so on. Each numerical milestone would deliver, along with millisecond-level activity control, information about cellular-resolution activity imaging, local and global wiring, molecular annotation, behavior, and modeling. 
  2. To align perturbation with local and global contexts of neural activity and brain state, develop and apply acoustic and magnetic methods to both perturb and read out from deep-brain regions. For example, magnetic-imaging methods can be extended beyond hemodynamic responses to probing ions and neurotransmitters.

 

In summary, we have seen considerable progress in Priority Area 4. Demonstrating Causality. All major short- and long-term goals are in the process of being completed. BRAIN 2.0 should be tuned and reshaped to take advantage of new developments and opportunities that have emerged in single-cell control, nanotechnologies, and machine learning. New to BRAIN 2.0, we suggest that since causal technologies have advanced rapidly, it may be time to consider applying these methods to understanding neuropsychiatric disease states at the circuit level. Application of these techniques will continue to provide insight into fundamental principles of circuit operation. At the completion of BRAIN 2025, we envision widespread adoption of integrated neurotechnologies that enable scientists to modulate activity throughout the brain to drive desired and predictable outcomes. We expect that the fundamental understanding obtained as a culmination of the integration of theory, observation, and closed-loop experimentation described herein will allow the design of neurotechnologies that adjust neural activity to produce desired clinical outcomes safely and reliably.

Priority Area 5: IDENTIFYING FUNDAMENTAL PRINCIPLES

In biology, the goal of theory is to help organize experimental observations into conceptual frameworks – and from these, to build predictive models. The need for theory is especially acute in neuroscience, where system complexity is very high. Deciphering the relationships between observable properties of the brain and the underlying algorithms these structures and dynamics implement is critical for our ability both to understand the brain and to diagnose and design interventions for disease. 

Technological advances from the BRAIN Initiative continue to provide rich data that capture electrical and chemical activity within large populations of neurons, along with detailed knowledge of the diverse and dynamic characteristics of cells and connections. Central to the BRAIN Initiative’s mission is the development of statistical and analytical methods to make sense of the data, along with theories for the algorithms that underlie brain function. Such theories, implemented in and explored through computational models, provide a framework for hypothesis-driven experimental design and analysis strategies that make optimal use of hard-won experimental data.  

Ultimately, theoretical formulations of key questions will reveal the brain’s fundamental computations and also provide clinical access to targeted interventions under conditions of dysfunction. Answers to these questions will define how network dynamics depend on properties of single neurons and their connections; how behaviors are selected, initiated, implemented and flexibly modified by environmental conditions and internal brain states; and which cellular, synaptic and circuit mechanisms support different types of learning. Studying a variety of model systems can help to identify the principles underlying computations that may be implemented in different ways.

 

BRAIN 2025 vision: Integrating datasets over multiple scales to reveal fundamental principles 

A fundamental goal of systems neuroscience is to understand how neural activity gives rise to natural behavior. In order to achieve this goal, we must first build comprehensive models that offer quantitative descriptions of behavior. Goals for this Priority Area included:

New analytic approaches for large, complex datasets. BRAIN 1.0 aimed to leverage experimental datasets to identify fundamental principles of brain organization and function. Our ability to fulfill this vision will be bolstered by progressively richer experimental datasets, close collaborations between theorists and experimentalists, and training of students and postdocs in quantitative methods. Critical to success are new techniques for analyzing large, complex data sets and connecting them to rich behavioral measures. The vast increase in the number of neurons that can be measured simultaneously has generated a major interpretation challenge – creating a new urgency for handling this unanticipated but welcome success. 

Identifying multiscale linkages. A second key component prioritized by BRAIN 1.0 for this Priority Area was the need to bridge multiple scales. Doing so entails evaluating detailed biology at one scale (e.g., single neuron activity and cellular identity) and understanding its impact at another scale (e.g., EEG measurements in behaving humans). Bridging these scales is essential if the discoveries made in animal models are to truly advance our understanding of the human brain in health and disease. 

Uncovering general principles. BRAIN 1.0 envisioned that achieving the goals defined above would support the ability to identify general principles applicable to understanding the human brain. Such principles include identifying computations common to multiple scales and systems; constructing a mechanistic understanding of how movement is controlled; and understanding how the brain makes decisions. Insights from many animal models provide an opportunity to uncover general biological concepts shared among species. 

Building a quantitatively-trained workforce. BRAIN 2025 identified a need for quantitative training of neuroscientists as well as support of theory research, prioritizing collaborative integration of theory into experimental work. It also laid out a vision for new systematic approaches to data management that would facilitate broad access to annotated data sets. This would enhance collaboration between labs and with theorists, as well as support validated and reproducible science. A key aim was to accelerate incorporation of theory, modeling, computation, and statistics perspectives and techniques into experimental neuroscience research.

NIH funding to date: Identifying Fundamental Principles

NIH has funded projects that apply quantitative models to test foundational theories and models of circuit-level mechanisms in the context of specific behaviors or brain states. Importantly, a separate FOA related to understanding neural circuits has been issued for projects to develop theories and computational models, as well as to build an analytic toolset for understanding brain data. The FOA “Theories, Models, and Methods” provides opportunities for novel theory development and has been re-issued routinely. NIH also: i) encourages theory-driven experimental design in all experimental brain circuit requests for applications (RFAs); ii) plans to re-compete the U19 awards to elaborate and innovatively test explicit theories; iii) considers other funding mechanisms to promote and support novel theory development; and iv) incorporates quantitative methods and high quality/resolution behavior in most team RFAs. 

 

Where are we now? How advances in theory and analysis have transformed the landscape

Over the past 5 years of investments in modeling and data-analysis methods, research in this priority area has fueled development of new paradigms that are helping to unravel circuit complexity related to brain functions such as motor control and decision-making. The BRAIN Initiative has been for the most part successful in emphasizing to the community the importance of theory and analysis for neuroscience research. Yet, there is still a tremendous amount of work needed to uncover fundamental principles of brain function. We do not yet have a comprehensive theory of any specific brain system, nor do we have definitive answers to most of the key challenges laid out in the ambitious short- and long-term goals of BRAIN 2025.

 

Network modeling 

We have seen progress in simulation via use of relatively large-scale neural networks. Rather than being limited to selecting parameters by hand, scientists can now take advantage of advances in network-training methods such that artificial neural networks can learn to perform complex tasks analogous to those used in some experiments. Since all parameters can be characterized in detail, by analyzing these trained networks, scientists can derive core dynamics underlying computations and then explore the relevance of low-level mechanisms such as specific synaptic learning rules. A now-classic example is training feedforward convolutional networks to recognize objects; in these networks, some of the receptive-field properties of the visual system naturally emerge. Highly interconnected neural networks allow simulation of computations that can function over many timescales. Exploring computation via recurrent neural networks can help to refine our thinking about specific brain mechanisms. While recurrent neural networks trained to carry out a task may have many different parameter settings, their dynamics typically converge to a common, low-dimensional structure dictated by the demands of the task, revealing the core computation that is invariant to possible individual implementations. Modelers have used such recurrent neural networks to understand the dynamics of perceptual integration, flexible decision-making, motor control, spatial navigation, and the ability to estimate time. In a recent study of how networks can retain and manipulate memories, investigators revealed how short-term synaptic plasticity permits memory storage without persistent activity; however, new dynamical architectures emerged when the networks were trained to manipulate the information stored in memory. 
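As an illustration of this modeling approach, the sketch below (hypothetical code assuming PyTorch; the task and network sizes are arbitrary choices, not from any cited study) trains a small recurrent network on a toy perceptual-integration task. The trained network’s hidden-state trajectories can then be analyzed, for example with PCA, to expose the kind of low-dimensional dynamics described above.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy task: integrate a noisy scalar stimulus over time and report its sign,
# a simplified version of perceptual-integration tasks used experimentally.
T, batch, hidden = 50, 128, 64

class TaskRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=hidden, nonlinearity="tanh")
        self.readout = nn.Linear(hidden, 1)

    def forward(self, x):
        states, _ = self.rnn(x)              # states: (T, batch, hidden)
        return self.readout(states[-1]), states

def make_batch():
    coherence = (torch.rand(batch) - 0.5) * 0.2          # signed evidence strength
    x = coherence.view(1, batch, 1) + 0.5 * torch.randn(T, batch, 1)
    y = (coherence > 0).float().view(batch, 1)           # correct report: the sign
    return x, y

model = TaskRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for step in range(500):
    x, y = make_batch()
    logits, _ = model(x)
    loss = loss_fn(logits, y)
    opt.zero_grad(); loss.backward(); opt.step()

# The second return value of model(x) holds the hidden-state trajectories,
# which can be reduced (e.g., with PCA) to inspect the integration dynamics.
```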

 

Data-analysis tools 

Large-scale data collection is essential both to test modeling predictions and to drive the development of new models, but methods to rapidly process these data are critical. Advances in data processing are generally keeping pace with technological developments, and new tools are being rapidly released to the community. Effective, freely available software includes Kilosort and Mountainsort (Flatiron Institute) for sorting spikes. For denoising signals and segmenting neurons in imaging data, Constrained Nonnegative Matrix Factorization and suite2p are both now widely used and allow scientists to identify many more neurons than previously possible. For studies in humans and NHPs, where automated spike sorting remains challenging, “clusterless” approaches to electrophysiological data analysis that obviate the need for precise spike sorting may be effective in some applications. Variants of these methods have been implemented for real-time analysis, including for calcium imaging, spike sorting, and deconvolution of multiunit signals. Furthermore, machine-learning methods for automatic tracking of video data after relatively little hand-training, along with behavioral segmentation, are revolutionizing quantitative analyses of complex naturalistic behaviors.
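As a simplified stand-in for the constrained matrix-factorization tools named above, the sketch below applies plain nonnegative matrix factorization (via scikit-learn) to a synthetic imaging “movie,” recovering spatial footprints and temporal traces. Real pipelines such as CNMF add spatial and temporal constraints that this toy version omits.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Synthetic movie: 3 "neurons" (sparse spatial footprints) with sparse activity.
n_pixels, n_frames, n_cells = 400, 300, 3
footprints = rng.random((n_pixels, n_cells)) * (rng.random((n_pixels, n_cells)) > 0.9)
traces = np.maximum(0, rng.standard_normal((n_cells, n_frames))) \
         * (rng.random((n_cells, n_frames)) > 0.95)
movie = footprints @ traces + 0.05 * rng.random((n_pixels, n_frames))

# Factorize movie ≈ W @ H: W holds spatial footprints, H the temporal traces.
model = NMF(n_components=n_cells, init="nndsvda", max_iter=500)
W = model.fit_transform(movie)   # (pixels, cells): estimated footprints
H = model.components_            # (cells, frames): estimated activity traces
```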

Once data are preprocessed, analysis methods are needed to test and develop models. Dimensionality-reduction methods help identify low-dimensional structure in neural activity, allowing complex data to be summarized and visualized more easily. Improved variants of dimensionality-reduction tools are now widely used. These include demixed principal component analysis, which can isolate components of neural variation according to task parameters such as context or choice outcome, and Gaussian-process factor analysis, which incorporates variation over time. New statistical methods using Bayesian graphical modeling and deep learning can build data-driven models that increasingly mirror the dynamical-systems structure of artificial network models. A powerful recently reported deep-learning approach infers latent dynamics from single-trial neural-spiking data. Such methods can be used to compare data directly with information obtained via recurrent neural network models. 
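A minimal example of the dimensionality-reduction idea, using ordinary PCA rather than the demixed or Gaussian-process variants cited above (all sizes and names here are illustrative): simulated activity of 100 neurons driven by three shared latent signals, whose low-dimensional structure PCA recovers.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Simulate 100 neurons whose activity is driven by 3 shared latent signals.
n_neurons, n_timepoints, n_latents = 100, 1000, 3
latents = np.cumsum(rng.standard_normal((n_timepoints, n_latents)), axis=0)
loadings = rng.standard_normal((n_latents, n_neurons))
rates = latents @ loadings + 0.5 * rng.standard_normal((n_timepoints, n_neurons))

pca = PCA(n_components=10)
projected = pca.fit_transform(rates)   # low-dimensional population trajectories
# Explained variance saturates near 3 components, exposing the latent dimension.
print(np.cumsum(pca.explained_variance_ratio_)[:5])
```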

Key data-analysis challenges remain. These include extracting information from experiments that acquire data involving complex stimuli and responses associated with natural behaviors; analyzing variation among individual trials; reconciling the dependence of neural responsiveness on internal state; and incorporating other slowly varying changes such as those that occur during learning. We are seeing progress in several directions. The Latent Factor Analysis via Dynamical Systems method can extract de-noised single-trial firing rates from spiking data. New techniques that enable online updating of response models using adaptive filtering are being used successfully for population decoding in brain-computer interfaces. Emerging methods can incorporate latent-state variables that may evolve over time; these methods thus offer the ability to analyze slowly varying changes as well as different behaviors or states. It is now possible, for example, to implement frameworks that can reconcile noisy, blurred, and undersampled measurements quickly and stably. Such models (recurrent switching linear dynamical systems) are an important step toward understanding neural drivers of natural behaviors in model organisms such as larval zebrafish. Hierarchical generalized linear models, useful for untangling interrelated processes that are part of a complex hierarchy, have revealed multiple behavioral states in fruit flies.
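As a toy relative of the hierarchical GLMs mentioned above (this is a single-neuron, non-hierarchical Poisson regression; the covariates and parameters are invented for illustration), the sketch below fits spike counts as a function of behavioral covariates and approximately recovers the generative tuning weights.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(2)

# Simulate spike counts from a neuron tuned to two behavioral covariates,
# e.g., running speed and a binary task state.
n_bins = 2000
speed = np.abs(rng.standard_normal(n_bins))
state = rng.integers(0, 2, n_bins).astype(float)
log_rate = 0.8 * speed + 1.2 * state - 1.0        # generative log-link model
counts = rng.poisson(np.exp(log_rate))

X = np.column_stack([speed, state])
glm = PoissonRegressor(alpha=1e-4).fit(X, counts)
# Fitted weights should be close to the generative (0.8, 1.2) and -1.0.
print(glm.coef_, glm.intercept_)
```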

Theory will play an important role in guiding experimental design and building valid models. For example, new methods make it possible to capture complete information from coarse-grained calcium-activity images, reducing data-storage burdens. In the retina, investigators have been able to incorporate only partial knowledge of anatomy to infer circuit structure from sparse neural recordings. By computing a limit on the number of neurons needed to capture relevant network activity as a function of task complexity, new conceptual insights provide quantitative guidelines for future large-scale experimental design.

 

Multiscale modeling and analysis

The ability to account for brain-wide electric fields is necessary for interpreting in detail animal and human data obtained from experiments recording local fields. Models for accomplishing this goal would serve as a framework to test theories, and ultimately, to design clinical-stimulation protocols, by predicting the fields generated by targeted stimulation. Solving this problem requires approaches that can bridge experimental scales. One example is the Human Neocortical Neurosolver, a user-friendly online modeling tool that simulates electrical activity across neocortical layers by incorporating biophysical information on cell types and layer-specific inputs and outputs. This tool allows researchers to test hypotheses about circuit mechanisms underlying electrical fields measured by electroencephalography and magnetoencephalography. Scientists have developed less biophysically detailed network models spanning multiple brain areas, and other investigators developed a framework for use in prototype brain-computer interfaces that combines spike and local-field potential data in a multiscale decoding model.
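To illustrate the flavor of a multiscale decoding model (a toy sketch, not the cited framework: it simply concatenates single-unit spike counts with a mesoscale LFP-power feature and fits a linear decoder on simulated data), consider:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Toy multiscale decoder: combine per-bin spike counts with an LFP band-power
# feature to predict a continuous behavioral variable (e.g., cursor velocity).
n_bins, n_units = 3000, 30
velocity = np.cumsum(rng.standard_normal(n_bins)) * 0.1
spikes = rng.poisson(np.exp(0.1 * np.outer(velocity, rng.standard_normal(n_units))))
lfp_power = 0.5 * velocity + rng.standard_normal(n_bins)  # correlated mesoscale signal

X = np.column_stack([spikes, lfp_power])      # micro- and meso-scale features
X_tr, X_te, y_tr, y_te = train_test_split(X, velocity, test_size=0.2, random_state=0)
decoder = Ridge(alpha=1.0).fit(X_tr, y_tr)
print("held-out R^2:", decoder.score(X_te, y_te))
```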

 

Discovery of general principles of neural coding and dynamics

The BRAIN Initiative has supported an extensive set of projects in which advanced modeling and data-analysis methods are leveraged to understand many brain functions: sensory representation, learning, flexible behavior, and decision-making. Studies in rodents show how sensory and task representations are intermingled and controlled by state across brain regions. A ground-breaking analysis of whole-brain activity in zebrafish shows how sensory inputs drive movements via sequential operations of sensory integration, competition, and demixing. Enabled by tools for automatic behavioral segmentation and sophisticated data analysis, we are getting a glimpse of how behavioral switching occurs – dorsolateral striatal activity, for instance, reflects rapid transitions between behavioral motifs. 

We have also identified common strategies used by multiple systems. For example, the observation of random sampling of olfactory space in both fruit flies and rodents has led to robust theories of sensory coding. The identification of the organization of head-direction cells in fruit flies as a ring-attractor network was inspired by models of visual coding mechanisms in mammals. Another concept emerging across multiple systems is that of activity sequences, apparent in high vocal center neurons in birds and in the hippocampus and posterior parietal cortex of mice. Theoretical work has helped explain how such activity sequences might arise from recurrent neural networks.
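A minimal ring-attractor simulation conveys the concept (parameters are illustrative, following the classic threshold-linear formulation rather than any specific fruit-fly model): local excitation plus broad inhibition sustains a stable activity “bump” whose angular position can encode a variable such as heading.

```python
import numpy as np

# Minimal ring attractor: N neurons on a ring, cosine-shaped recurrent
# excitation plus uniform inhibition sustain a stable "bump" of activity.
N = 64
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
W = 3.0 * np.cos(theta[:, None] - theta[None, :]) - 1.0   # J1*cos(dtheta) + J0

r = np.maximum(np.cos(theta - np.pi), 0.0)   # seed a bump at angle pi
for _ in range(1000):                         # relax to the attractor state
    drive = W @ r / N + 0.1                   # recurrent input + uniform drive
    r += 0.1 * (-r + np.maximum(drive, 0.0))  # threshold-linear rate dynamics

print("bump peak at angle (rad):", theta[np.argmax(r)])  # remains near pi
```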

 

Training a new generation of theorists

We are amid a culture shift that aims to democratize tools and method development to provide much broader accessibility. This transformation will benefit from the emergence of data-pipeline systems that streamline data collection, annotation, processing, analysis, storage, and sharing. Training in quantitative methods has been partially supported by BRAIN 1.0 through summer courses in computational neuroscience. Despite the absence of targeted hiring incentives by the BRAIN Initiative, increased support for theoretical research has no doubt helped to stimulate rapid growth in hiring in academic positions in computational neuroscience over the past 5 years. 

 

Goals unmet on the path to identifying fundamental principles of brain function

Despite exciting potential for generating insight into many brain mechanisms, recently developed network models rarely account for or utilize biophysical details of neurons, synapses, glial cells, and neuromodulators. For example, experimental evidence is outpacing theory regarding the important role of cell-type-specific responses to neuromodulation that influence activity patterns – these observations await theoretical study. We must continue to develop modeling frameworks that take such details into account; investigations are well underway in specific brain regions such as the cortex, basal ganglia, and cerebellum. 

Bridging micro- and macro-scales remains a major challenge. There are few clear-cut examples of studies that coordinate detailed biological data at one scale to understand its impact at another scale. Any modeling strategy that attempts to include all biological details is unlikely to provide deep insights with broad applicability. However, oversimplification is at odds with mounting evidence from BRAIN 1.0 that shows diverse cortical population circuitry with very rich responses. We will likely see short-term progress on well-defined subproblems. For example, which excitatory-inhibitory connectivity structures are consistent with observations from population-wide neuronal activity? Such studies will begin to link spatial and temporal scales of cortical circuits with the dynamic outputs these circuits produce.  

Diverse theoretical approaches are required to support continued progress. Novel proposals of computational algorithms are important, since conceptual models may not be easily discoverable from network modeling. One recently reported framework that employs summary statistics – a message-passing algorithm operating at the level of redundant neural populations – may be such a computational motif. More broadly, reinforcement learning has been a very powerful paradigm for experimentally based interpretation of motor learning and decision-making. Within this framework, for example, investigators have interpreted dopamine signals as reward-prediction errors. Yet, new evidence shows that dopamine signals convey a wide range of diverse information, suggesting that updated learning frameworks may be needed. Expanding the scope of reinforcement-learning models is an area of active development by theorists. Predictive coding is another potentially powerful normative framework for understanding both brain development and function. In general, given the nascent state of our understanding of the brain, and the diversity of general principles that may be discovered, we must be wary of following a single theoretical approach, and instead encourage diverse ideas. 

 

Gaps and Opportunities: Next Steps for BRAIN 2.0

The goals of BRAIN 2025 provided a roadmap for development of statistical and modeling tools essential for extracting information from experimental data. They identify core conceptual areas where interactions between theory and experiment are critical for fully understanding neural systems. We suggest that BRAIN 2.0 should continue to support (and scale and disseminate) quantitative methodologies. Conceptual challenges posed by BRAIN 2025 still lack definitive answers; indeed, solutions to several of these questions are still in their infancy. For conceptual work – where goals are far-reaching – continued support is needed for iterative progress. New network frameworks open promising paths, but these are still in early days of application to a wide range of problems. While a concerted BRAIN 1.0 funding strategy focused upon neuroimaging, support for advances in other areas has been more piecemeal – and often only in conjunction with experimental studies. Specific areas of theory that warrant more aggressive support include theories on the role of non-neuronal cell types in brain function; theories that bridge scales from biophysics to network-level computation; theories that account for large-scale activity patterns such as waves, oscillations and sharp-wave ripples; incorporation of neuromodulation into network models; theories that help constrain the definitions of cell type; and theories that interpret the connectivity data obtained from mapping at all scales. We must continue to seek unifying high-level conceptual theories that provide broad explanatory frameworks across brain areas and model systems. 

To identify canonical principles, cross-species comparisons are likely to be helpful. Translating findings from model organisms (and from models, more generally) to humans and applying this knowledge to treating diseases depends on our ability to connect information from single neurons to recording and manipulation approaches in humans. Moving forward, we highlight several specific opportunities for BRAIN 2.0, reflecting a balance of new directions and continued activity.

  1. Continue development of data-analysis methods. While automated spike-sorting methods applicable to large-scale data are widely available and have substantially reduced the time from experiment to result in rodents, equivalent tools are not yet available for NHPs and humans, where signal-to-noise ratio and electrode density are typically much lower. More work also remains for handling data that vary over time, as well as for single-trial analyses. A number of groups are working with real-time feedback control, both in animal studies and in brain-computer interfaces. Driven by promise for clinical applications, the most active work is in research with humans, though it is limited to relatively coarse-grained information. Advances will extend such approaches to closed-loop control of neural activity in animal-model systems, in which decoding algorithms can handle much more precise neural-activity information. Such advances will be critical to attain the goals of Priority Area 4. Demonstrating Causality.
  2. Better understand the role(s) of cell types. Newly emerging data sets on cell type and connectivity, and comparisons across species, raise theoretical questions not yet addressed on a wide scale. What computational role is served by so many cell types? Can network theories constrain the number of effective cell classes? What can we infer from connectomic measurements about circuits and network dynamics? What is the appropriate level of connectomic detail to support predictive models? Comparative connectomics, like comparative genomics, is likely to reveal patterns and conserved rules for both neuronal structure and function. 
  3. Continued emphasis on novel theoretical and multiscale frameworks. Although modeling via recurrent neural networks offers promise, we will likely need new conceptual frameworks to interpret and understand circuit dynamics deeply. As an example, one fruitful area may be control theory. Progress in generalizing control-theoretic approaches to complex nonlinear networks may introduce explanatory frameworks for neural function and thus serve as a vital tool for closed-loop brain manipulation. New methods are also needed for multiscale analyses, as well as to build models that span multiple timescales of synaptic function, cellular dynamics, plasticity, and neuromodulation.
  4. Foster more interactions between experimentation and theory. Large collaborative projects funded by the BRAIN Initiative have provided strong support for experimental/theoretical collaborations. However, this collaborative investment should be more widespread at the level of individual investigators or small groups of investigators. One BRAIN 2025 proposal not yet implemented is encouraging ongoing projects to undertake 3- to 6-month exploratory collaborations between neuroscience labs and scientists from theoretical, computational, and statistical backgrounds. This sort of effort would likely reap benefits in increasing widescale theoretical input into BRAIN Initiative-funded projects. 
  5. Expand and broaden training and recruitment of quantitative expertise. There remains an acute need to make training in quantitative approaches available to all neuroscientists, to continue to fund theorists, and to recruit scientists from quantitative disciplines who can bring novel approaches to answer outstanding questions. While BRAIN 1.0 has funded short-term summer courses, additional modes of training support should both grow the pool of theorists and raise the level of quantitative training broadly.

 

BRAIN 1.0 identified a wide range of important short- and long-term goals for the advancement of fundamental understanding of brain processes across many scales. Here we reiterate, summarize, and update these goals for the next decade. 

 

Suggested short-term goals for BRAIN 2.0:

Continue development of techniques for analyzing large, complex data sets.

  1. The development of rapid methods for spike sorting and information analysis of encoding should continue and scale to 100,000 to 1,000,000 neurons recorded simultaneously. Techniques that specifically focus on the distinct recording conditions for NHP and human data are urgently needed.  
  2. Develop real‐time rapid-visualization and signal-processing algorithms for all types of neurophysiological data: 
    • Functional imaging: fMRI, positron-emission tomography, and near-infrared spectroscopy
    • Neurophysiology: electroencephalography (EEG), magnetoencephalography (MEG), and local-field potentials; single cell- and multiple-cell spike trains
    • Optical recordings: genetically‐encoded or chemical reporters of voltage (including subthreshold voltage), calcium, neurotransmitters, synaptic activity, and biochemical states
    • Behavior: force and motion, processed video data, and animal and human psychophysical data
    • Cell‐level data: anatomy, connectivity, gene expression, and biophysical properties 
  3. Develop principled methods, potentially including novel developments in nonlinear control theory, for real‐time feedback-control experiments to manipulate and analyze neural circuits using novel perturbation and recording techniques. Include real‐time applications to neural devices and prosthetics in humans. Explore the role of precise neuronal-level manipulation (a minimal closed-loop sketch follows this list).
  4. Integrate statistical and analytic approaches with models of neural circuits that are based on connectivity maps and cell types. 
  5. Machine learning-based analyses of complex datasets promise to revolutionize our understanding of brain diseases, but at the same time they may detect, and possibly reveal, unforeseen aspects of a research participant’s private information or behavior from the data being analyzed. For instance, individuals without symptoms may be identified as having abnormal brain circuitry. Such a finding could affect insurance coverage and invoke social stigma. To limit this possibility to the greatest extent possible, special attention should be paid to narrowly defining the parameters of machine learning-based analyses of brain data to the scientific question being queried. Use of data for additional analyses should only be undertaken with appropriate consent.   
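Below is the minimal closed-loop sketch referenced in item 3 (purely illustrative: a simulated firing-rate “plant” and a proportional controller stand in for real-time stimulation conditioned on ongoing activity; all dynamics and gains are invented). It also exhibits a known limitation that motivates more principled control theory: pure proportional feedback leaves a steady-state error.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy closed-loop experiment: drive a simulated population firing rate toward
# a target using proportional feedback on the measured error.
target = 20.0      # desired rate (Hz)
gain = 2.0         # proportional controller gain
fs = 1000          # control-loop updates per second
rate, stim = 5.0, 0.0
history = []

for step in range(5 * fs):  # 5 seconds of closed-loop control
    # Plant: rate relaxes toward a 5 Hz baseline, is pushed up by
    # stimulation, and is perturbed by noise.
    rate += -0.01 * (rate - 5.0) + 0.02 * stim + 0.3 * rng.standard_normal()
    rate = max(rate, 0.0)
    error = target - rate                 # measured deviation from target
    stim = max(gain * error, 0.0)         # excitation-only actuator
    history.append(rate)

# Proportional control alone settles below target (steady-state error);
# adding integral action would remove the residual offset.
print("mean rate over final second:", np.mean(history[-fs:]))
```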

 

Multiscale linkages

  1. Establish biophysical sources of the major brain rhythms in EEG and MEG recordings, as well as in the more local sources that give rise to local field potentials in various brain regions and different cortical layers.
  2. Develop a formal statistical-inference framework to conduct network-connectivity analyses from different types of neuroscience data such as fMRI, EEG, local field potentials, and multiple single-neuron recordings (a toy connectivity-inference sketch follows this list). 
  3. Explore theoretical and statistical frameworks for fusing information from neuroscience experiments across different experimental techniques and different temporal and spatial scales. 
  4. Develop computationally efficient solutions to high-dimensional inverse problems, with particular attention to the interpretation of EEG and MEG data in humans. 
  5. Develop theories and models of collective neuronal activity on spatial scales that span individual synapses, neurons, circuits, networks, and systems; develop theories of dynamic activity that span timescales of synapses, action potentials, network activity (including attractors and persistent activity), and internal-circuit states (including neuropeptides and neuromodulatory systems). 
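As a toy version of the statistical-inference framework called for in item 2 (illustrative only: Gaussian data and a graphical-lasso estimator stand in for the heterogeneous data types listed), the sketch below plants two known “connections” among simulated regions and recovers them as nonzero entries of the estimated precision matrix.

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(5)

# Toy connectivity inference: recover a sparse conditional-dependence graph
# among simulated "regions" via the precision (inverse covariance) matrix.
n_regions, n_samples = 8, 2000
A = np.eye(n_regions)
A[0, 1] = A[1, 0] = 0.4      # plant two known "connections"
A[2, 3] = A[3, 2] = 0.4
cov = np.linalg.inv(A)        # precision A implies covariance A^-1
data = rng.multivariate_normal(np.zeros(n_regions), cov, size=n_samples)

model = GraphicalLassoCV().fit(data)
# Nonzero off-diagonal precision entries mark the inferred edges (0-1, 2-3).
print(np.round(model.precision_, 2))
```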

 

Identifying general principles 

  1. Develop theoretical insights into how circuit dynamics depend on properties of single neurons and their connections. Explore computational principles comparatively in a range of model species. Identify conditions for which insights from small circuits become relevant to larger circuits. Determine which general rules of circuit function depend on specific biological details of neuronal and synapse function. 
  2. Develop systematic theories of how information is encoded in chemical and electrical activity of neurons and glia; how these are used to determine behavior on short time scales; and how they are used to adapt, refine, and learn behaviors on longer time scales. 
  3. Develop a detailed understanding of circuit and plasticity mechanisms behind different forms of learning.
  4. Propose, study, and validate mechanisms that allow information to be gated, switched, and transmitted between specific brain regions. Systems neuroscience is commonly broken down into the study of different specialized systems, and the interfaces between these systems remain much less understood.
  5. Develop methods to detect and classify internal brain states; relate these states “downward” (to neuromodulatory mechanisms) and “upward” (to memory formation, motivation, and internal models). 
  6. Construct a mechanistic understanding of how cellular-level neuronal activity and neuromodulation in multiple brain areas contribute to major brain functions:
    • how motor acts are initiated, controlled, sequenced, and ended. 
    • decision‐making
    • goal‐directed and flexible behavior
  7. Neuroethics: As noted with Priority Area 4. Demonstrating Causality, greater understanding of how the brain learns, makes decisions, and manifests intended behaviors leads to the possibility that neuroscientific techniques will be used to manipulate abilities that form the core of human life: agency, free will, intentions and decision-making, and memory formation. Continuing the efforts of the BRAIN Initiative Neuroethics Working Group to provide ethics consultation to select projects or applications, when appropriate, should help BRAIN Initiative-funded researchers navigate many of the neuroethical issues associated with their work. Moreover, addressing the potential ethical issues attendant to these discoveries could be enriched by including a neuroethicist on the research team. From the beginning of a project, the neuroethicist would collaborate with neuroscientists to identify the potential near- and far-term neuroethical concerns of the work and then would help to identify ethical strategies throughout the experiment and translation of the findings, when feasible.

 

Accelerate incorporation of theory, modeling, computation, and statistics perspectives and techniques in neuroscience departments and programs 

  1. Encourage experimental research projects to support brief (3- to 6-month) exploratory collaborations with theoretical/computational/statistical scientists.
  2. Support more purely theoretical or statistical approaches and those that enable small-scale theory/experimental collaborations.
  3. Tailor new graduate and postdoctoral training grants to theoretical neuroscientists and enhance training of experimental neuroscientists in quantitative methods.  
  4. Continue to provide incentives for hiring theory, modeling, computation, and statistics faculty, and support dissemination of new computational methods for postdoctoral and graduate students through summer courses, institutional curricula, web‐based courses, meeting workshops, and other mechanisms.

 

Suggested long-term goals for BRAIN 2.0:

Develop new techniques for analyzing large, complex data sets

  1. Integrate statistical and analytic approaches with models of neural circuits based on connectivity maps and cell types. 
  2. Extend the solutions for spike sorting, encoding, connectivity, and decoding to data sets larger than 1,000,000 simultaneously recorded neurons, and integrate with connectomic data and other types of data. 

 

Multiscale linkages

  1. Establish a generic framework for fusing information from neuroscience experiments across different experimental techniques and across different temporal and spatial scales.
  2. Enable real‐time high-dimensional inverse solutions from MRI, EEG, and MEG recordings. 
  3. Identify essential elements of widely distributed, time‐varying neuronal processes by bridging between detailed realistic models and qualitative behavioral models. Define principles governing those computations at each spatial and temporal scale important for understanding system-wide behavior.  

 

Identify general principles

  1. Establish theoretical approaches to understand general principles applicable in micro-, meso-, and macroscale circuits in multiple animals. Of particular interest are theoretical studies that illuminate circuit-circuit interactions and consequent complex human cognition. An ultimate goal is to seek high-level theories of the brain that can unify the many diverse phenomena currently studied by different camps of neuroscientists (e.g., vision scientists, memory scientists, decision scientists).  
  2. Work toward a complete computational theory of one or several chosen systems, for example, hydra, roundworm, fruit fly, or zebrafish larva, making use of data, simulations and modeling that bridge single cells to behavior and provide a basis to extract computational principles at multiple scales. Such a program could provide constraints on the physical measurements necessary to build explanatory models.   

 

In summary, Priority Area 5. Identifying Fundamental Principles has achieved many of its goals, stimulating development of new data analytic and modeling approaches to deepen understanding of motor control, decision-making and other brain functions. Network-training methods now enable artificial neural networks to learn to perform complex tasks similar to those used experimentally, to generate new hypotheses and to capture hidden structure in data. In BRAIN 2.0, attention should be paid to integrating emerging biological knowledge into models and nurturing a diversity of theoretical approaches at multiple levels of description and spatial scales. At the conclusion of BRAIN 2025, advances in this area will bring together theory and experiment to solve profound and overarching questions central to systems neuroscience, which will ultimately explain how intricately connected networks of neurons acquire the ability to perceive, think, remember, and dream.

 

Priority Area 6: HUMAN NEUROSCIENCE

A primary goal of the BRAIN Initiative is to understand human brain function in a way that will translate new discoveries and technological advances into effective diagnosis, prevention, and treatment of human brain disorders. The study of human brain function faces major challenges because many experimental approaches applicable to lab animals cannot be immediately translated to study in humans. Nevertheless, direct study of the human brain is critical because of our unique cognitive abilities as well as the profound personal and societal consequences of human brain disorders. Advances in genetics, single-cell studies, imaging, and physiology introduce possibilities for studying the human brain at multiple levels to understand its normal function and what goes awry in neurological and neurodegenerative disorders.

  

Understanding the brain: Fundamental insights for discovery 

Improvements to existing technologies such as MRI and PET have revolutionized our ability to noninvasively study the structure, wiring, function, and chemistry of the human brain. Physiological approaches benefit from the increasing number of humans undergoing diagnostic brain monitoring with recording or stimulating electrodes, and from those who are receiving neurotechnological devices for therapeutic applications or diagnosis (e.g., deep-brain stimulation (DBS) and epilepsy monitoring). Advances in DNA sequencing are revealing genes that, when altered, create symptoms that link molecules to behavior. These new opportunities will allow us to combine techniques that cross barriers of spatial and temporal scale, enabling much deeper understanding of the human brain. For example, we can combine observations from noninvasive brain imaging with high-resolution cellular and physiological data obtained from humans with implanted devices to quantify activity and chemistry at a cellular level. We can also envision translational opportunities afforded by combined measurement of noninvasive and cellular‐level signals in animal models. Breaking through barriers of scale – enabling us to compare and combine data from distinct experimental approaches – would yield substantial benefits for diagnosing and treating diseases as well as for basic discovery about the human brain.

 

Toward cures: Future platform for therapeutics

The last 20 years have seen explosive growth in the development and use of noninvasive brain-mapping methods, predominantly MRI (complemented by magnetoencephalography and electroencephalography) to study the human brain under normal and pathological conditions, as well as across the human lifespan. Methods to stimulate the nervous system are advancing beyond the experimental phase and toward therapeutic use. In addition to invasive DBS, transcranial magnetic and direct-current stimulation (TMS and tDCS) are being explored as therapies. We anticipate significant progress in using these sensing and stimulation methods to measure wiring and function of the human brain at multiple scales: in neuronal ensembles, in circuits, and in larger-scale networks (“circuits of circuits”). In turn, these capabilities will allow us to visualize and understand how circuit‐level disruptions lead to human brain disorders. Brain-measurement techniques are also valuable for validating emerging technologies, such as functional ultrasound, and for evaluating the effects of medications and genetic therapies. By integrating data at multiple levels, BRAIN 2.0 can be a platform to link molecules, networks, and behavior.

 

BRAIN 2025 Vision 

The overall objective of human neuroscience research is either to understand brain functions that can only be studied in humans, or to validate and translate concepts derived from animal studies. Examples of the former include such functions as language, higher‐order symbolic mental operations, and individual‐specific aspects of complex brain disorders such as schizophrenia or traumatic brain injury. Examples of the latter include neuropsychiatric models for addiction and obsessive-compulsive disorder that have been derived via optogenetic manipulation of rodents – a technique that is not yet usable in humans. The availability of clinically approved investigational technologies, including devices that are surgically implanted into the brain, provides a unique research opportunity to investigate cause-and-effect relationships in neural function by stimulation or recording at cellular and circuit-level resolution. Our ability to record electrical activity at the cellular level, in humans, is expanding, providing a unique opportunity to link the activity of individual neurons with more global signals obtained using noninvasive imaging methods such as fMRI. In turn, both cellular‐level and global signals can then be linked to human behavior, thought, and emotion. Research involving human research participants, however, comes with a special mandate to ensure that these rare and valuable data are collected according to rigorous scientific and neuroethical standards, curated carefully, and shared responsibly among the research community. Assembling and funding specialized teams of researchers and neuroethicists is necessary to ensure coherence between experimental studies and clinical treatment approaches.

BRAIN 1.0 focused short-term goals on developing both tools and an ecosystem to enable the conduct of human neuroscience research. These goals include developing innovative tools translatable to human applications; establishing pilot projects for collaborative human-neuroscience clinical-trial networks; supporting training grants for human research; developing and implementing methods for archiving and sharing electrophysiological, structural, and clinical data; and establishing neuroethical guidance and training programs. The long-term goals from BRAIN 2025 aimed to build upon this foundation of technology and expertise to advance human neuroscience research. Key priorities for technology included attaining higher‐resolution recording and stimulation approaches in humans; supporting alternative technologies involving electrical, optical, acoustic, and magnetic modalities for greater precision and less invasiveness; and taking advantage of surgical settings for capturing more data and accessing human neural circuits. Infrastructure priorities included establishing international collaborative networks to expand impact, integrating human and animal data to identify fundamental mechanisms, promoting effective sharing of curated multi-level human data, and ensuring that human neuroscience research adheres to consensus ethical principles.

BRAIN 2025 anticipated several tangible outcomes from progress in this priority area. Integrated teams of clinicians, scientists, device engineers, patient-care specialists, regulatory specialists, and neuroethicists are working together to identify and pursue unique research opportunities offered by the participation of informed, consenting human research participants. Cooperation of clinical and academic research teams and private companies in a pre-competitive space will enable implementation, integration, and long-term support of innovative new technologies for human neuroscience research. Integrated technologies will combine recording and stimulation capabilities from implantable devices and integrate aspects of electrical, optical, acoustic, genetic, and other approaches for research with humans and for clinical applications.

 

Where are we now?

The Human Neuroscience component of BRAIN 2025 saw dramatic successes but revealed several ongoing tensions inherent in this complicated endeavor. Successes arrived with technology breakthroughs, whereas challenges surround the human elements of scientific investigation, including promoting and assembling interdisciplinary collaborations as well as accessing, analyzing, and sharing data in a responsible but productive way.

NIH funding to date: Human Neuroscience

NIH issued three FOAs to develop non-invasive imaging technologies, starting with planning grants in the first 2 years of the BRAIN Initiative, followed by FOAs for proof-of-concept and production-level projects in fiscal years 2017 and 2018. These opportunities cover the technical-development spectrum regarding idea maturity, development stage, and availability of preliminary data. In addition, a separate FOA called for studies of cellular and population events underlying signals from existing non-invasive approaches, especially neuronal, glial, and vascular responses that are the basis for functional magnetic resonance imaging (fMRI). The FOA also sought research on combinations of recording and imaging approaches to bridge spatial and temporal scales for more precise understanding of the information coded in meso- and macro-scale signals recorded from the brain. Complementing the human-imaging FOAs, NIH issued two FOAs for non-invasive neuromodulation in research with humans – one FOA to develop and optimize technologies (three awards issued in fiscal year 2018) and another FOA calling for mechanistic studies to understand the technologies’ effects (five awards issued in fiscal year 2018).

 

Imaging

MRI and PET. The most dramatic yields from BRAIN 1.0 investments in non-invasive functional mapping have been MRI advances, in particular related to magnetic-field gradient, radiofrequency, and static-field increases. Preliminary reports of human imaging at 10.5 Tesla (T), the highest fields ever used in research with humans, are just beginning, and next-generation integrated systems with advanced designs at 7 T promise equally exciting advances. Other MRI-based projects seek to remove restrictions of strict immobility, offering the potential for more naturalistic assessment of human behaviors using full tomographic three-dimensional mapping. These systems are expected to become available during the next few years. All of the next-generation human neuroimaging grants awarded by BRAIN 1.0 thus far are scheduled to complete final design, construction, and testing of their novel instruments during BRAIN 2.0. In this regard, BRAIN 1.0 has already made significant progress toward meeting BRAIN 2025’s human-brain mapping goals, although success awaits full implementation. Integration of whole-brain MRI-based measurements with invasive electrophysiological recording is also emerging, given that the safety of invasive recording devices during structural – and now functional – magnetic resonance imaging has been established in some settings. Yet, the means to fully integrate and interpret these data streams is a work in progress. Extending initial efforts to “best-in-class” measurements in each domain (high-field functional MRI, dense invasive recording arrays) remains a highly challenging opportunity for the future. Advanced instruments that can push forward molecular-imaging acquisition, predominantly with PET, are being developed through BRAIN Initiative funding. Unlike MRI, PET technology has yet to achieve very significant performance gains in resolution and sensitivity, but this work – including instruments that integrate PET with high-field MRI – is in development (Phase 1). Thus far, only one Phase-2 award has been made, suggesting that PET technology improvement remains a work in progress.

 

Tools and probes

New tools, including magnetic-particle imaging methods, have begun to emerge for use in research with humans. This technology likely has the ability to improve hemodynamically based brain mapping by an order of magnitude beyond that achievable with ultra-high-field MRI – albeit with some uncertainty in the availability of appropriate magnetic nanoparticles. Hybrid electrophysiology/ultrasound systems are also in early developmental stages. These may fundamentally change the way we reconcile imaging characteristics and at-scale physiological function (the electromagnetic inverse problem) and thus introduce the opportunity for limited tomographic mapping of electrophysiological signals. A comparably smaller BRAIN 1.0 investment has gone to development of novel probes to be used with these emerging instruments. The expected order-of-magnitude increase in sensitivity in PET instrumentation during the BRAIN 2.0 period – combined with integration of such capabilities with functional and structural MRI at high field – presents a significant opportunity to study neurotransmitter and neuromodulator dynamics and their distributed functional consequences during cognitive paradigms. Discovering and validating new probes in line with these capabilities is a key prospect. Probes capable of precise targeting of specific neuromodulatory systems, specific cell types, and other key molecular targets remain an important opportunity for BRAIN 2.0. Partnerships with the private sector could accelerate this process significantly.

 

Closed-loop DBS

The first 5 years of the BRAIN Initiative defined foundational elements (both sensing and operational) required for human neuroscience instrumentation, including studies aiming to demonstrate causality. This endeavor was catalyzed by a novel public-private partnership (PPP) between industry, NIH, and FDA. The initial focus of the PPP was to explore sensing and stimulation devices for application to closed-loop brain stimulation. Over the course of the first 2 years, a series of templates were created to facilitate confidentiality and partnership agreements for NIH-funded studies. These templates saved many person-years of negotiations for each grant and helped make BRAIN Initiative support of technology palatable for companies. The agreements reflect a compromise on intellectual-property rights, with partners achieving a balanced share of value. Since then, NIH has issued periodic funding opportunities to support researchers prototyping new therapy concepts with these tools.
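
To make the closed-loop concept concrete, the minimal sketch below (in Python) shows only the control logic – sense a biomarker, compare it to a target, and adjust stimulation within safe limits. All names, units, and thresholds are illustrative assumptions, not any device's actual interface:

    def update_stimulation(beta_power, amplitude_ma, target=1.0, gain=0.1, max_ma=3.0):
        """Proportional controller: raise stimulation amplitude when the sensed
        biomarker (e.g., beta-band power) exceeds the target, lower it otherwise."""
        error = beta_power - target
        new_amplitude = amplitude_ma + gain * error
        return min(max(new_amplitude, 0.0), max_ma)  # clamp to a safe range

Real devices layer safety interlocks, artifact rejection, and regulatory constraints on top of any such loop.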

 

Non-invasive brain stimulation 

Since the inception of BRAIN 1.0, we have seen meaningful, albeit still limited, advances from support of research on minimally invasive brain stimulation – with important potential for human neuroscience. Studies investigating the dose-response relationships of external electrical and magnetic stimulation are providing an empirical foundation for efforts to standardize treatments, while at a more fundamental level, the BRAIN Initiative is now supporting studies to understand the underlying biophysical mechanisms of action of electrical and magnetic fields at a cellular level. Several BRAIN Initiative grants have also gone to rigorous evaluation of focused ultrasound as a means for non-invasive neuromodulation, either directly or through opening of the blood-brain barrier and subsequent focal delivery of drugs.

 

Organoids and assembloids

Brain organoids and assembloids, derived from human pluripotent stem cells, are an emerging technology developed outside the scope of BRAIN 2025 that may offer new opportunities to study human brain tissue. While very valuable for dissecting key determinants of brain development or potential alterations due to specific disease states, these systems are not yet apt substitutes for intact brains in studying the interplay between cells and networks, toward understanding functional connectivity. The BRAIN Initiative’s Neuroethics Working Group examined this issue specifically through a workshop, multiple discussions, and a publication that detailed some of the neuroethical issues associated with organoid work. The BRAIN Initiative has also funded a neuroethics research project that brings bioethicists and brain-organoid researchers together to identify emerging ethical issues in brain organoid and assembloid research.

 

Gaps and opportunities: Next steps for BRAIN 2.0

Generally, human neuroscience research in BRAIN 2.0 will benefit most from enhanced support of technology development and sharing, integration with genomic data and advances, human-resource issues including collaboration and training, and data-science improvements. Moving forward, several opportunities exist for BRAIN 2.0, reflecting a balance of new directions and continued activity.

  1. Technology. There is a pressing need for better invasive and noninvasive tools to understand and manipulate brain function, which will help define the mechanisms of the effects of – and benefits from – brain stimulation. Ethically, invasive human neuroscience research requires clinical justification for participation; this constraint motivates development of less-invasive methods to expand the pool of neurotypical controls, as well as more creative leveraging of existing clinical procedures. Establishing more PPPs could help, recognizing that economic considerations factor into companies’ willingness to invest in research with long-term timeframes. Better tools are also needed to study and control brain activity at higher levels of spatial and temporal resolution. Novel tracers, for example, will advance study of synaptic function in humans, contributing to better understanding of brain development and psychiatric diseases, and potentially providing biomarkers for neurodegenerative disorders. Development of safe and efficient viral vectors for replacing genes, or other cell-specific tools for manipulating various molecules and pathways, will be key steps for manipulating the human brain at the level of circuits as well as with medications. While the field of gene therapy has been revived during the past few years, outside the scope of the BRAIN Initiative, broad and well-distributed delivery of molecules throughout the nervous system using viral vectors remains a challenge requiring more investment in better vectors and/or other technologies.
  2. Collaborative efforts. There is a lot of exciting neuroscience that is funded outside of the BRAIN Initiative. Thus, it is critical for scientists studying the human brain at different scales to collaborate and to form networks around human neuroscience to benefit both fundamental research and translational research. Bringing clinicians, neuroscientists (systems and molecular), engineers, computational biologists, and ethicists together remains both a gap and an opportunity. One particular missed opportunity from BRAIN 1.0 is collaborative human neuroscience trial networks that lower the barriers to research. For example, templates for informed consent and other neuroethical considerations could facilitate progress, similar to the PPP process for developing devices. Collaborations could be further enhanced by integrating a neuroethicist into the research team.
  3. Training. BRAIN 2.0 should increase emphasis on funding trainees, especially those striving to become interdisciplinary neuroscientists, as well as supporting clinical/surgical investigators with access to human-brain tissue. More broadly, there is a pressing need to train clinical investigators, scientists, and physician scientists in various aspects of human neuroscience. These clinical investigators will benefit from training grants to facilitate access to, and awareness of, BRAIN Initiative-funded research. In addition, support mechanisms such as the neuroscience trial network might help early-stage researchers initiate their programs. A critical part of training should also include intentionally integrating neuroethics into discussions about neuroscience. Neuroethics knowledge can help neuroscientists recognize what constitutes a neuroethical concern and know where to get assistance or find collaborators to address it.
  4. Biological discovery. In addition to continuing BRAIN 1.0’s intentional focus on technology and tool development for the neuroscience research community, studies probing fundamental biological function are also necessary. One significant gap and opportunity is integration of genetic data and meta-data with data arising from imaging, physiology, and brain-modulation studies. Genetic and pathway data provide clues for areas of further study, and knowledge of behavioral or phenotypic features will help link cells to networks to global brain function. Given that so many behaviors can only be studied in humans, this is at once a gap and an opportunity. As new techniques and methodologies more closely approximate the human brain, evolving ethical consideration of whether existing ethical conduct, guidelines, and regulations remain appropriate for such research models should be undertaken.
  5. Data dissemination. The collection of human data, both digital and material, has not been as effective as it should have been during BRAIN 1.0. Many scientists cannot access primary data after study publication, data are scattered in different areas or lack appropriate meta-data for effective analyses, and scientists are not aware of the wealth of useful data available from the BRAIN 1.0 investment. This has been exacerbated by the absence of a unified repository for all relevant data and a hesitance to require (and enforce) data sharing. Better coordination could also include integration with related activities funded elsewhere, notably through the Human Connectome Project and the Adolescent Brain Cognitive Development study, which have both made important contributions to large-scale data structures for human neuroimaging and cognitive data. Also critical are consolidated, user-friendly search tools for BRAIN Initiative-funded project data, to facilitate research by the broader biomedical research community. As a reference, the Human Genome Project’s impact came not so much from the groups that sequenced the genome but from all the users that worked with the data, transforming the landscape of disease-gene discovery, evolutionary biology, genome architecture, and the vast universe of various non-coding RNAs. As with the Human Genome Project, widespread data sharing must also ensure data security and privacy.

 

Suggested short-term goals for BRAIN 2.0:

The evolving scientific landscape suggests some new short-term goals for BRAIN 2.0:

  1. Develop better approaches to acquire, preserve, and study living human tissue from surgical procedures, as well as from post-mortem samples, to enable structure-function mapping as well as transcriptional and proteomic studies of the human brain. One example is cortical tissue from epilepsy surgery, but there are opportunities to acquire tissues from a number of brain areas given the diversity of neurosurgical procedures. Acquiring these samples can provide material for optogenetic manipulation and other methods that are not currently feasible in humans. Encouraging partnerships between neurosurgeons and scientists can enhance studies on neural connectivity using tools ranging from imaging to physiology.
  2. Increase mechanistic understanding of DBS and closed-loop modulation in preclinical and clinical models. While modest success was achieved in movement-disorder therapies, clinical-trial failures in depression and poor market penetration in epilepsy motivate continued refinement of DBS stimulation methodologies. Continuing support through the PPP helps to lower the risk profile for industry to invest in novel therapies, which might otherwise be abandoned or significantly delayed.
  3. Expand research beyond invasive devices. The PPP and BRAIN should also emphasize more than just bi-directional medical implants to increase the impact of the work. For example, we should conduct human imaging coupled with simultaneous, minimally invasive electrophysiological and tomographic “functional” mapping of local and distant effects of such neuromodulatory actions. Rebalancing the portfolio across imaging technologies, surgical procedures, and molecular approaches should be considered.
  4. Continue to invest in the physics/engineering of non-invasive imaging instrumentation and support the development of non-invasive approaches with high spatial and temporal resolution to monitor neural activity (including non-electrical activity) in humans.
  5. Establish standards for teams developing human-use tools. This research requires high standards for design practices, quality-management systems, basic program management, and other aspects subject to regulatory scrutiny. As reflected in the limited return on investment in this area from BRAIN 1.0, many academically trained scientists lack these skills and thus struggle to deploy systems at scale without appropriate resources. Encouraging collaborations with industry may advance clinical applicability of human neuroscience research involving technology.
  6. Support interdisciplinary research to allow successful use of fMRI technology in clinical settings. Currently, fMRI is largely a research tool, and its clinical use is mostly limited to presurgical mapping. Combining better understanding of human brain connectivity with improved functional imaging and computation might help advance the applicability of fMRI in the clinical setting. Good examples are disorders of consciousness and psychiatric disorders, where both structural and functional connectivity mapping will likely be of important clinical value in prognosis and in guiding advanced neuromodulatory treatments in the future.
  7. Training (short- and long-term): Support neuroscience-oriented training of scientists outside neurobiology, including computational scientists, physicists, and engineers, toward advancing progress in imaging and non-invasive electrophysiology technologies. Also needed is interdisciplinary training of various types: computation with biology, virology with brain research, engineering with biology, ethics and neuroethics with neuroscience.
  8. Improve data access. Key to the success of the BRAIN Initiative is broad data accessibility, to encourage varied experimental approaches and introduce novel hypotheses. Primary data must be collected and shared in formats that are accessible and user friendly, accompanied by the source code underlying publications. A centralized data repository might facilitate this objective.
  9. Develop a set of actionable neuroethical guidelines for neurostimulation and neuromodulation in humans (short- and long-term). More critical review should be required for plans to implant devices into humans for long periods, including consideration of closed-loop systems that limit independent control by the individual with the implant. For example, if at the completion of a research study the participant has attained therapeutic benefit and would like to leave the device implanted, then both funding agencies and companies involved with the research must help investigators address how to provide long-term support for those individuals. The BRAIN Initiative’s Neuroethics Working Group has examined this issue, continues to work to ensure that the associated neuroethics issues are recognized and managed, and has a paper in review detailing them.
  10. Develop better-integrated training that brings neuroethicists and neuroscientists together, and pursue the other aspirational goals described in the Neuroethics Roadmap.

Suggested long-term goals for BRAIN 2.0:

Several new long-term goals related to human neuroscience could be pursued in BRAIN 2.0:

  1. Develop better technologies and assay systems for targeting neurons and glia in humans, including improved viral vectors, next-generation CRISPR technologies, and other non-viral methods. New approaches now allow large-scale screens with DNA barcodes to select viral vectors. Better technologies are needed to identify safe and neurotropic viruses for future gene-replacement and gene-editing-based therapies. These studies need careful ethical oversight.
  2. Discover and validate novel PET tracers to monitor neural activity and molecular signatures in human synapses. PPPs with the pharmaceutical industry (by virtue of their significant neurochemical databases), akin to those for development of devices, may uncover diagnostic applications of compounds with limited therapeutic potential.
  3. Improve electrophysiological source localization, bringing near or true tomographic capabilities to non-invasive electromagnetic recording. Advances in machine learning may offer significant opportunities for progress in this domain, as will forthcoming developments in magnetometers that operate at room temperature (a classical baseline for this inverse problem is sketched after this list).
  4. Develop multiscale approaches and tools to integrate data generated using different experimental approaches. The integration of multiple and diverse data sets (imaging, physiology, behavior, and clinical records) is a prerequisite for solving human-specific neurobiological questions.
  5. Develop better translational model systems to offset the need for human studies. Developing suitable models to explore foundational aspects of disease states and treatment mechanisms might help accelerate the path to defining applications in humans.
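
As a reference point for long-term goal 3 above, the sketch below implements the classical Tikhonov-regularized minimum-norm estimate for electromagnetic source localization – the baseline that machine-learning approaches would aim to improve upon. The dimensions and regularization constant are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    n_sensors, n_sources = 64, 5000                  # illustrative dimensions
    L = rng.standard_normal((n_sensors, n_sources))  # lead-field (forward) matrix from a head model
    y = rng.standard_normal(n_sensors)               # measured EEG/MEG topography at one time point

    # Minimum-norm estimate: s_hat = L^T (L L^T + lambda * I)^(-1) y
    lam = 0.1 * np.trace(L @ L.T) / n_sensors        # regularization scaled to the data
    s_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)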

 

In summary, we are poised for progress during BRAIN 2.0 in Priority Area 6. Human Neuroscience. Advances in BRAIN 1.0 have set the stage for conducting neuroscience research with human participants, but these opportunities arrive with the need to consider several issues. Progress in this area requires extraordinary levels of collaboration, involving integrated teams of clinicians, scientists, device engineers, patient‐care specialists, regulatory specialists, and neuroethicists – to ensure not only innovation, but also safety and scientific rigor. Important ethical concerns center on use of integrated technologies and implantable devices, including considerations of long-term management of those shown to be effective through research. New challenges also face us with regard to managing this type of human data. Advances in this critical area are at the heart of the goals of the BRAIN Initiative – revealing mysteries of humans’ unique cognitive abilities and helping us treat or prevent devastating consequences of brain dysfunction.

 

 

Priority Area 7: From BRAIN to Brain

The topics described in Priority Areas 1 to 6 outline the most critical scientific topics for the BRAIN Initiative. Collectively, they cover key questions and needs for progress toward understanding how populations of neurons create unitary perceptions, optimal decisions, and coordinated movements. However, addressing those various priorities individually will not be the fastest route to discovery. Technologies and experimental insights that come from work in the different scientific Priority Areas are often complementary, offering more impact in combination. For example, theoretical work and modeling are most effective when tightly melded with experimentation. Similarly, physiological experiments that both monitor and perturb neuronal activity during behavior can provide insights that could not arise using isolated approaches. The most productive experiments will be those that can exploit tools and knowledge from multiple areas: cell-type identity, circuit connectivity, functional maps, theory, and activity monitoring and perturbation.

NIH funding to date: The BRAIN Initiative to the Brain

In addition to developing new technologies, the NIH BRAIN Initiative has funded efforts to integrate and apply the cutting-edge approaches to answer fundamental questions about circuit function. A series of “BRAIN Circuits” FOAs support research that integrates experimental, analytic, and theoretical capabilities for comprehensive analysis of specific neural circuits or systems. Across these FOAs, projects are expected to record and perturb circuit function with cellular and sub-second resolution, as well as to apply quantitative models to test foundational theories and models of circuit-level mechanisms in the context of specific behaviors or brain states. The resulting projects represent a diverse research portfolio of approaches to understand circuits and their contributions to perceptions, motivations, actions, and other mental processes throughout the nervous system. NIH issued four FOAs representing different stages of research in animal models, plus a separate FOA for research in humans using electrode devices implanted for therapeutic recording and/or stimulation or for pre-surgical neural activity monitoring. Collectively, this suite of FOAs yielded 41 awards in fiscal year 2018.

 

BRAIN 2025 vision: The power of integrated technologies  

BRAIN 2025 recognized that the BRAIN Initiative must prioritize combining complementary approaches toward using fully integrated systems to explore neuronal mechanisms driving higher brain function. Development of integrated approaches is itself a major challenge requiring sustained encouragement and support. Effective integration requires much more than simultaneous use of multiple techniques. Moreover, use of integrated systems poses issues that do not arise with isolated methods. Examples include crosstalk between wavelengths used for optical imaging and those used for optical stimulation; contamination of electrophysiological recordings by electrical-stimulation artifacts; and incompatibility between methods for establishing cell identity or neuronal projectomes and nanoscale reconstruction of circuits. In general, combining approaches requires tailoring fully integrated systems to address issues that do not exist when the component methods are used independently. For this reason, BRAIN 2025 emphasized integrated approaches by making them a separate Priority Area. Integrating diverse approaches was expected to require large consortia of experimentalists, technologists, theorists, and data scientists – invoking a special importance for team science as well as technology dissemination and training.
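
As one concrete example of such tailoring, stimulation artifacts in simultaneous electrophysiological recordings are often suppressed by blanking and interpolating a short window around each pulse. The following is a minimal sketch with illustrative parameter values, not a production method:

    import numpy as np

    def blank_stim_artifacts(signal, stim_samples, fs, blank_ms=2.0):
        """Replace a short window around each stimulation pulse with a linear
        ramp between the window edges, suppressing transient artifacts."""
        out = signal.copy()
        half = int(blank_ms * 1e-3 * fs / 2)
        for s in stim_samples:
            lo, hi = max(s - half, 0), min(s + half, len(out) - 1)
            out[lo:hi + 1] = np.linspace(out[lo], out[hi], hi - lo + 1)
        return out

    # Example: a 30-kHz recording with stimulation pulses at known sample indices.
    cleaned = blank_stim_artifacts(np.random.randn(90_000),
                                   stim_samples=[10_000, 40_000, 70_000], fs=30_000.0)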

 

Where are we now with integrating approaches?

BRAIN 2025 expected that the development and implementation of integrated approaches would lag behind progress in other Priority Areas, since integration depends upon the initial development of the independent approaches. As such, BRAIN 2025 planned relatively modest funding for the first 5 years of the BRAIN Initiative, followed by substantial increases. Progress has been largely consistent with those expectations.

Some of the examples of integrated technologies described in BRAIN 2025 have come to fruition, such as electrophysiological or optical recording while stimulating genetically identified cells. Multiple fabrication strategies have yielded penetrating and surface-recording probes compatible with optical stimulation and/or imaging, as well as drug and gene delivery. We are also seeing progress in using spatial-light modulators and holographic techniques to create patterns of two-photon stimulation that can activate populations of individually targeted neurons in specified spatiotemporal patterns. This work has significantly extended single-cell control capabilities in rodents, fish, and invertebrates. Most recently, this approach has been used to control mouse behavior; when combined with complementary optical imaging of neuronal responses to natural stimuli, we should be able to assess behavioral consequences of playing back natural patterns of population activity in the brain.

Efforts to combine genetic access with connectomics are also moving along. Exhaustive serial electron-microscopic (EM) reconstruction of specific cell types, or of cells with specific projections, is ongoing, using genetic tools that yield EM contrast or nanobody-based molecular recognition of epitopes. Progress has been made on even some of the most ambitious integrated technologies put forth by BRAIN 2025. For example, the MICrONS project supported by the Intelligence Advanced Research Projects Activity has made impressive progress. Measuring the connectomics of neuronal circuits after large-scale recording, this project aims to achieve synaptic-level EM reconstruction of a cubic millimeter of mouse visual cortex in which the response properties of ~100,000 neurons have been determined using optical-imaging methods.

Beyond the examples provided by BRAIN 2025, other integrative efforts are underway, such as the structural and functional MRI-based measurements that are being combined with invasive electrophysiological recording in human studies, as discussed in Priority Area 6. Human Neuroscience. Additional strategies are also ripe for development. For instance, the ability to record neurochemicals, as discussed in Priority Area 3: Brain in Action, awaits integration with electrophysiological and optical recording of neural activity, as well as optical and electrical neuromodulation, and likely medication-based interventions, to correlate local neurochemistry with circuit electrophysiology. As mentioned in the discussion of Priority Area 2. Maps on Multiple Scales, technologies that measure various activities have reached a point where functional analysis of circuits might be combined with anatomical-connectivity mapping, potentially enabling more advanced theoretical frameworks. Measurements using PET cameras to track synaptic activity could be combined with fMRI data to measure the influence of neuromodulators in brain circuits of behaving human research participants.

 

Next steps for integrative efforts in BRAIN 2.0

Because this priority area deals with a broad approach rather than specific approaches or types of tools, BRAIN 2025 did not list individual short-term and long-term goals, opting instead to describe examples. Following that lead, we do not think it is necessary or helpful for BRAIN 2.0 to include an exhaustive set of suggested goals for integrative approaches. However, many opportunities and goals listed in Priority Areas 1 to 6 hinge upon integration. These include:

  • Tools to integrate molecular, connectivity, and physiological properties of cell types
  • Connectivity and functional maps at multiple scales that retain cell-type information
  • Integration of fMRI with other activity measures and anatomical connections
  • Integration of electrophysiological and neurochemical methods
  • Integration of perturbational techniques with other technologies
  • More interactions between experimentation and theory
  • Development of approaches and tools to integrate human data from different experimental approaches

In addition, new themes have emerged that must become part of any comprehensive strategy to achieve integrative neuroscience. Specifically, analyses of brain-circuit properties require integrated approaches for the study of both neuronal and non-neuronal functions; relative contributions of cortical and non-cortical brain structures; more naturalistic behavior paradigms; and finally, various models of team science for accomplishing these complex tasks.

There is no question that integrated approaches – truly advancing BRAIN to the brain – will remain key for progress toward understanding brain circuits. We suggest continued BRAIN-Initiative support for the development and application of integrated approaches.

 

Priority Area 8: ORGANIZATION OF SCIENCE: BRAIN 2.0

Science is an intensely human endeavor. Many challenges in modern biomedicine arise from the reality that the fruits of science – discoveries, tools, and cures – require human actions and often significant teamwork to find meaningful application. Moreover, given that the outcomes of neuroscience research are so relevant to people’s lives, ensuring that this taxpayer-funded science draws from the entire intellectual capital of our nation is critical. We need the broadest perspectives at work to define and solve problems in the integrated framework of science and society.

Collaborations among people with diverse expertise (e.g., basic scientists and clinicians, or technology developers and technology users) can be challenging. Forging human alliances is an ongoing sociological issue that is difficult to solve and requires culture change built around shared goals and a desire to advance human health. The BRAIN Initiative has not been immune to these challenges, and solving them is essential for its ultimate success. We suggest several proactive steps to address three areas for growth regarding the overall organization of science: data sharing, technology dissemination, and workforce development as it relates to ensuring diverse expertise in the second half of the BRAIN Initiative and beyond.

 

I. Sharing Data

BRAIN Initiative-funded researchers generate vast amounts of data in a wide array of formats. In tandem with the growth of massive storage capabilities and high-speed computing, more diverse, fragmented, and heterogeneous quantities of data are being generated than ever before. These include both quantitative and qualitative datasets from scientists conducting studies with both model organisms and humans. Metadata, “data about data,” provides information such as data content, context, and structure. Metadata enables data re-use and expands discovery beyond individual labs. A major current challenge is that few labs are effectively sharing data. Of those that do, few use a standardized format, and few adequately handle their metadata. As an example, a MATLAB structure full of spikes is of limited use if information such as animal age, strain, sex, and other characteristics are not included with the dataset. Currently, many experimenters record such metadata in laboratory notebooks, which keeps it disconnected from the actual data. Sharing data and code both within and outside collaborations is an essential component of the BRAIN Initiative. In addition to extending value from individual datasets by enabling re-use, data sharing promotes higher standards for data management and curation even before data are made public.
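
As a minimal illustration of this point, the sketch below (in Python, with assumed field names) keeps spike times and the metadata that make them reusable in a single structure; community standards such as NWB formalize exactly this pairing:

    session = {
        "spike_times_s": {                   # sorted spike times, in seconds
            "unit_001": [0.013, 0.482, 1.205],
            "unit_002": [0.097, 0.880],
        },
        "metadata": {                        # the context that makes the data reusable
            "species": "Mus musculus",
            "strain": "C57BL/6J",
            "sex": "F",
            "age_days": 90,
            "brain_region": "V1",
            "sampling_rate_hz": 30000,
        },
    }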

NIH funding to date: Data management and data sharing

NIH has taken initial steps toward development of an informatics infrastructure by launching the Brain Cell Data Center (BCDC), which is tasked with establishing a web-accessible information system to capture, store, analyze, curate, and display all data and metadata on brain cell types and their connectivity from the BICCN. NIH also issued FOAs to support infrastructure for three distinct activities: i) creating standards to describe common experimental protocols and data; ii) aggregating data in archives; and iii) developing software for data integration and analyses. The FOAs identify distinct experimental areas as “sub-domains,” which are defined by applicants, with a suggestion that appropriate sub-domains might comprise research funded by distinct BRAIN Initiative FOAs. This might include, for example, data from non-invasive neuromodulation experiments, from human MRI experiments, or from invasive devices for recording and modulation. This approach is based on differing characteristics of the types of research supported by the BRAIN Initiative – although NIH expects the program to evolve as understanding of how to link different modalities matures. Finally, in January 2019, NIH released a Data-sharing Notice that will require BRAIN Initiative researchers to submit their data to BRAIN data archives, develop a resource-sharing plan, and include in grant applications costs for data preparation and archiving. 

 

However, as noted above, creating an environment conducive to widespread data sharing is a challenge. Common arguments against wide availability/sharing of data and code include perceptions that sharing diverts time, effort, and resources; no apparent immediate utility; and a disincentive to conducting hard/risky experiments. Because cultural issues are central to data sharing, putting into place appropriate reward, review, and expectation systems is vital to ensure that data are findable, accessible, interoperable, and reusable (FAIR). Practices must both incentivize and reward researchers who comply with these rules and expectations, perhaps treating datasets as “products” that are valued outcomes of research in the same way that papers are.

Still, more steps could facilitate progress toward a more open-science environment for the BRAIN Initiative, and several practical questions remain: At what processing stage will data be stored? What metadata will be included? Will all labs use the same data standard? At what point will data be released to the public? Will code be released to the public? Who will fund the data storage?

Practically speaking, a straightforward first step in implementing the BRAIN Initiative data-sharing policy is for BRAIN Initiative grantees and their collaborators to notify team members at the outset of BRAIN Initiative-funded studies that their data will eventually be made public. Adhering to relevant standards as data emerge – with careful curation from the earliest stages of a project – eliminates time-consuming and costly data reorganization later.

 

Core principles for data management, sharing, and standards

To facilitate adoption and enforcement of the BRAIN Initiative data-sharing policy, we suggest the following core principles for BRAIN 2.0:

 

  1. Data from BRAIN Initiative-funded projects must be shared publicly upon first publication in a peer-reviewed journal. BRAIN Initiative-funded scientists must communicate this principle to all team members, including those (often trainees) who collect the data. Good science requires that data be shared so that findings can be replicated, confirmed, and expanded toward more discovery. However, there may be circumstances that preclude sharing of human data. These special circumstances may include, for example:
    • When a research participant’s identity could be compromised by combining that individual’s datasets in ways neither envisioned nor specified in the informed-consent process
    • When the data pose a threat to human agency
    Determining when an exception applies may not be straightforward; exceptions to the norm of data sharing should be rare, carefully considered, and reviewed.
  2. Data should be stored in standardized formats. Within the arena of systems neuroscience, considerable resources are available to facilitate use of data standards, including Neurodata Without Borders (NWB). However, even researchers who are enthusiastic about this resource struggle to understand how to fit their data into the NWB standard, and most laboratories prefer to convert data into the NWB format only when they are ready to share it. BRAIN 2.0 could provide resources to support this activity. The Allen Institute leads in this arena and has adopted a workflow that academic labs may also favor: an internal data format ideally suited to their analyses, with data transformed into the NWB format when ready to share (see the sketch following this list). However, it is not clear whether sufficient resources are currently available for the entire BRAIN scientific community. Moreover, some subfields, such as imaging, already employ their own data standards (e.g., BIDS). Ideally, data should be as interoperable as possible.
  3. BRAIN-Initiative data should be stored on an NIH-maintained central server. We suggest that all teams funded by the BRAIN Initiative be required to place data on this server, at least for sharing outside the BRAIN Initiative-funded collaboration. For most projects, these data will have already undergone considerable pre-processing (such as spike sorting for electrophysiology or, for imaging data, de-noising and segmentation). Although sharing raw data is ideal in some ways, modern datasets are often too large for this to be financially practical.
  4. Assign credit to those who collect the data. Considerable time and effort are required to generate high-quality datasets in systems neuroscience – even more so for datasets collected from NHPs. A natural solution to this problem is to generate for each shared dataset a citable identifier, such as a research resource identifier (RRID) or digital-object identifier (DOI). Further, publicly available datasets should be routinely included on publication records and considered as criteria in hiring and promotion decisions.
  5. Metadata must be stored systematically. BRAIN 2.0 should convene the scientific community to define metadata parameters. For animals, these may include strain, sex, age, light-dark cycle, and number of cage mates.
  6. Enable storage of raw data as much as possible, recognizing that this will rapidly become untenable at scale. Key to the feasibility of this principle will be the development of a strategy to estimate the useful half-life for data of different types and different levels of extraction.
  7. Data standards should include standards and guidance for ethically acceptable collection, use, storage, and access to data. This topic was elaborated in the Neuroethics Roadmap, as described by NeNQ2, which asks, “What are the ethical standards of biological material and data collection and how do local standards compare to those of global collaborators?” Subquestion 2a asks, “How can human brain data (e.g., images, neural recordings, etc.), and the privacy of participants from whom data is acquired, be protected in case of immediate or legacy use beyond the experiment?” Subquestion 2b asks, “Should special regard be given to the brain tissue and its donors due to the origin of the tissue and its past?”
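
To illustrate the “convert when sharing” workflow described in principle 2 above, the sketch below uses pynwb, the reference Python implementation of the NWB standard. The values are illustrative, and exact constructor fields may vary across pynwb versions:

    from datetime import datetime, timezone
    from pynwb import NWBFile, NWBHDF5IO
    from pynwb.file import Subject

    # Map data held in a lab-internal format to NWB at release time.
    nwbfile = NWBFile(
        session_description="example session converted from an internal format",
        identifier="lab-session-0001",
        session_start_time=datetime(2019, 1, 15, tzinfo=timezone.utc),
        subject=Subject(subject_id="M123", species="Mus musculus",
                        sex="F", age="P90D"),
    )
    nwbfile.add_unit(spike_times=[0.013, 0.482, 1.205])  # one sorted unit

    with NWBHDF5IO("session.nwb", "w") as io:
        io.write(nwbfile)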

 

II. Human Capital

Modern neuroscientific discovery thrives on close interactions among researchers from a broad range of fields and backgrounds. We suggest that BRAIN 2.0 continue efforts to diversify the talent pool to include increased representation from quantitative scientists, theoreticians, clinicians, and researchers from a range of experiences and backgrounds that offer important perspectives to research aiming to understand the basis of our own thoughts and behaviors – and to understand and manage disorders that affect so many Americans.

Moreover, both individual-lab science and team-science approaches are necessary in biomedicine. Various models exist for team science, and due to the extraordinary complexity of the human brain, large-scale collaborative approaches are necessary. Attention must be paid to incentives and rewards for individuals to participate in a larger effort than has been customary for most of the history of biomedical research.

There are likely to be some areas of neuroscience inherently more ready for team science, but overall, there is a need to be flexible and dynamic – as technology and knowledge emerge and questions shift. BRAIN 2.0 can play a formative role in enabling multiple models of team science that have the flexibility to change according to progress – and that are not excessively managed in a top-down manner.

NIH funding to date: Human Capital

A high-priority area for BRAIN 1.0 included the goal, “Attract new investigators to neuroscience from the quantitative disciplines (physics, statistics, computer sciences, mathematics, and engineering), and training graduate students and postdoctoral students in quantitative neuroscience.” To achieve this goal, BRAIN 1.0 implemented various career-development/career-enhancement programs, employing the K18, R25, and F32 mechanisms. At the BRAIN 2025 halfway mark, these specific training mechanisms have yet to be broadly adopted. For example, NIH funded 12 F32 postdoctoral training awards in 2018 and only a handful of other awards (three K18s and three R25 short courses). To align the BRAIN Initiative with the 21st Century Cures Act of 2016 and the American Competitiveness Act of 2017, BRAIN also employed two training mechanisms to promote entry of talent from all sectors of the American population into the brain sciences. In 2018, 17 trainees were supported via NIH diversity supplements – roughly half of which went to graduate students, postdoctoral fellows, and early-career scientists. BRAIN also instituted a new K99/R00 BRAIN Initiative Advanced Postdoctoral Career Transition Award to Promote Diversity, to enhance diversity in the neuroscience workforce and maintain a strong cohort of new and talented, NIH-supported, independent investigators from diverse backgrounds (including women) in BRAIN Initiative research areas. The first of these awards will be made in 2019.

 

BRAIN 1.0 progress to date

As detailed in the congressionally commissioned 2018 National Academies of Science, Engineering, and Medicine’s report, “The Next Generation of Biomedical and Behavioral Sciences Researchers: Breaking Through,” the majority of NIH-supported graduate and post-doctoral trainees are funded through research project grants awarded to their principal investigator mentors. Since these individuals are not tracked, we are unable to accurately measure the composition of the BRAIN Initiative’s training investment, but we assume the existence of many uncounted BRAIN Initiative-supported trainees. A fuller understanding of this training investment requires that BRAIN 2.0 monitor the number of trainees supported by NIH-funded research project grants (possibly through progress reports or other means). Furthermore, BRAIN 2.0 should track the number of new (no previous NIH funding) investigators from quantitative disciplines receiving BRAIN Initiative-funded research project grants either as principal investigators or as key senior personnel (i.e., a co-investigator).

 

BRAIN 2.0 findings

BRAIN 1.0 outlined a vision for training that promoted recruitment of scientists from quantitative disciplines and encouraged quantitative training for graduate students and postdoctoral students. We suggest continued implementation of this strategy. Nevertheless, to achieve the aims and transformative projects proposed for BRAIN 2.0, we will also need a new workforce of investigators trained to bridge the gaps between academia and industry. Three suggested findings below support this vision. 

  1. Attract quantitative expertise to neuroscience. There remains a pressing need to rapidly expand recruitment of quantitative scientists and their trainees into neuroscience. BRAIN 2.0-dedicated funding should be used to attract new talent from quantitative disciplines into neuroscience. Doing so could bolster efforts currently funded by relevant NIH Institutes and Centers (not via the BRAIN Initiative) to attract new talent from quantitative disciplines into neuroscience – in particular those scientists never before supported by NIH research project grants.
  2. Enhance diversity in the BRAIN Initiative-funded workforce. BRAIN 2.0 should continue to recognize that enhancing diversity of the research workforce is a scientific imperative. It should continue to recruit and support students, postdocs, and investigators from diverse backgrounds in BRAIN Initiative-funded projects. These include individuals from groups underrepresented in health-related research.
  3. As highlighted in Priority Area 4. Demonstrating Causality, more clinical and translational expertise is needed to achieve bold outcomes envisioned for BRAIN 2.0. NIH has recently created and expanded support mechanisms to address challenges faced by physician scientists outlined in the Physician-Scientist Workforce Working Group Report of 2014. We suggest a portfolio of BRAIN 2.0 strategies to address this gap. Examples include:
    • Offering research opportunities at specific training periods such as during residency
    • Recruiting and retaining outstanding, postdoc-level health professionals who have demonstrated potential and interest in pursuing careers as clinician-investigators
    • Prioritizing support for residents across neuroscience-related disciplines including neurosurgery, neuroradiology, neurology, psychiatry, ophthalmology, and others
    • Integrating training across diverse residency specialties in topics relevant for future human translation of BRAIN Initiative-funded discoveries
    • Introducing individual residents with prior research training into the BRAIN Initiative workforce (One example is the NIMH administrative-supplement mechanism, “Enable Continuity of Research Experiences of MD/PhDs during Clinical Training”)

Achieving our vision for BRAIN 2.0 will necessitate realignment of the relationship between the private sector, government, and academia to enable discovery science that finds application in real-world settings (research or clinical). As highlighted in the 2018 congressionally commissioned National Academies report Breaking Through, nearly 80 percent of postdoctoral trainees in the life sciences pursue careers outside of the academic enterprise. We believe that this pool of trainees reflects a major opportunity for the BRAIN Initiative to strategically build the workforce needed to fulfill the full promise of BRAIN 2.0.

  1. The BRAIN Initiative should consider approaches to support training across industrial and academic sectors. This might involve support for training in a BRAIN Initiative-funded academic laboratory coupled with time in an industrial setting and/or a start-up company. We must train scientists to understand and address the barriers to translation that exist across these sectors. Trainees could focus their efforts on issues related to tool development, data management, facilitating data sharing/standardization, the maintenance of technology developed through BRAIN, etc. Similar programs have been implemented through the NSF for placing postdoctoral trainees in startup environments (https://nsfsbir.asee.org/).
  2. The BRAIN Initiative should support the transition of advanced postdocs to independence within the startup space. For example, this mechanism could combine advanced postdoctoral training in an academic/industrial setting with committed industrial support based on clearly specified success metrics. Trainees could focus their program on overcoming issues related to tool development/implementation, data management, facilitating data sharing/standardization, providing broad training for tool usage across academic labs, maintaining technology developed through BRAIN, and others.
  3. Given the necessity of neuroethics dialogue and awareness at all levels of training, NIH and other BRAIN Initiative partners should consider adding neuroethics training opportunities within existing responsible conduct of research (RCR) training requirements for neuroscientists and other BRAIN Initiative-funded researchers. A further step to build neuroethics knowledge and awareness is establishing training grants, career-development awards, and other funding strategies to explore more formalized forms of neuroethics training.
  4. An important area for public engagement of neuroethics work is in the global arena. There are BRAIN Initiative-funded projects in many countries that interact at various levels including around neuroethics. Indeed, different cultures view ethical issues through a range of lenses. Since these divergent opinions are useful in codifying best practices, NIH should encourage and enable global engagement of neuroethicists through travel grants and meeting grants that will enable scientific and cross-cultural communication among the BRAIN Initiative-funded community and beyond on issues of neuroethical relevance. 

 

III. SHARING AND USING BRAIN INITIATIVE TECHNOLOGY

The promise of the BRAIN Initiative rests with understanding how neural circuits produce behavior – as well as how dysfunctional circuits contribute to, or possibly cause, diseases. Realizing this promise hinges upon tools to monitor and control circuits; detailed, multidimensional brain maps of circuits; and use of those tools and maps to connect circuit function with perception, emotion, cognition, and action. 

NIH funding to date: Technology dissemination

The first half of the BRAIN Initiative emphasized development and optimization of new technologies, but as the techniques and resources mature, dissemination to the research community will be critical to the BRAIN Initiative’s success. NIH has taken some initial steps in this direction, including issuing a FOA for individual labs interested in incorporating new technologies (as well as career-enhancement awards for learning new techniques), a recent FOA in fiscal year 2018 for research-resource grants for technology integration and dissemination, and a set of small business FOAs for technology commercialization.

 

Many neurotechnology innovators express frustration with the complexities and demands of disseminating and translating technologies they develop; often, they are scientists who do not have the experience or expertise to function beyond invention. Such activities are also often incompatible with an investigator’s academic position or institutional infrastructure/resources. As with many other types of biomedical discoveries that fail to reach the market, the gap between invention and commercialization can prevent BRAIN Initiative-funded innovations from reaching research or clinical application. To unlock the impact of BRAIN investments in technology development, we believe that strategic investments in BRAIN 2.0 will facilitate rapid, efficient, and effective collaborative dissemination of techniques from innovators to end users.

 

BRAIN-Initiative technologies: Unique challenges

The BRAIN Initiative creates new challenges for technology use and dissemination compared to most NIH-funded research. Beyond the initial spark of invention, many neuroscience-related technologies demand additional focus on deployment, use, and long-term support (life-cycle management). Successful tool deployment for the BRAIN Initiative requires development and funding of processes to promote the use of tools, including development of new skill sets within the neuroscience community.

Although some recently developed tools are being commercially developed and distributed, development timelines for sophisticated methods frequently fail to meet the needs of neuroscientists eager to adopt the latest techniques. Many of the most powerful new techniques exceed the technological abilities of most neuroscience labs, making it difficult for scientists to use prototype forms. Currently, research labs that develop and employ state-of-the-art tools end up being “taxed” by donating their time to train researchers less familiar with the techniques and assist with building, supporting, and troubleshooting prototype systems for others.

One challenge is the broad spectrum of technologies emanating from the BRAIN Initiative investment – including software, viruses and animal models, microscopes, electrodes, and human-use technologies such as implants and MRI-based innovations. No one approach serves the needs of both developers and users (including research participants and patients) of these technologies, given the highly variable range of costs, market size, and urgency for use. Several BRAIN Initiative-funded projects illustrate successes and challenges noted above. To clarify the distinct paths related to BRAIN 2.0, we discuss these examples and related issues for both human-use technologies and lab-use technologies.

 

Human-use technologies 

Various constraints limit the deployment of integrated technologies for use in humans. Translating probes from research use in rodents to research with humans is insurmountable for a typical neural-engineering lab with average resources. Establishing essential collaborations with clinicians remains challenging, based upon perceived risks associated with new technologies. Such constraints point to the need for research investments in less-invasive sensing and operational technologies – and highlight an essential role of PPPs to move neurotechnologies into clinical evaluation in humans.

BRAIN 1.0 helped establish a foundation for translational neuroscience to flourish by supporting development and deployment of human-use technologies. Early on, BRAIN 1.0 developed a series of PPP templates to facilitate confidentiality and partnership agreements for NIH-funded studies. Development of these templates saved hundreds of days of negotiations for each grant, making it feasible for companies to support BRAIN Initiative technologies. The agreements reflected compromise on intellectual-property rights, assigning a balanced share of value to participants. The PPPs provide key access to human networks through proven technology – a difficult task for an academic lab, given regulatory requirements. Private-sector partners during BRAIN 1.0 included modest-sized companies such as Blackrock Microsystems and NeuroPace, Inc., as well as large entities such as Medtronic and Boston Scientific. The PPP structure had broad benefit: researchers gained access to next-generation tools years ahead of their formal release, while companies received early feedback on prototype versions and a window into the most promising clinical areas for further exploration. Research is underway across a variety of disease areas, ranging from Parkinson’s disease to epilepsy to mood disorders to dementia. This varied investment thus reflects a balanced portfolio of iterative research that advances today’s commercial interests while exploring high-risk, high-reward concepts of interest to NIH and its stakeholders. For ethical reasons, these technologies are all being tested in individuals with an underlying condition that adequately justifies procedure-associated risk.

BRAIN 1.0 can claim several success stories related to the use of neurotechnologies in humans. For example, closed-loop brain stimulation that measures physiological signals and adjusts stimulation accordingly is being used in the context of movement disorders and epilepsy. An apt example of translational research, these systems apply newly discovered neuroscience principles to improve patient care. A practical point in reviewing this work is that the technology used for these systems – in particular, chronic implantable bidirectional interfaces – was already largely in place before the BRAIN Initiative. Human-use technologies, especially implantable ones, can take many years to develop. Indeed, many of the programs highlighted during BRAIN 1.0 used existing technology. However, it is worth noting that the BRAIN Initiative facilitated better use of these technologies, and new technologies are currently being used with patients as a result of this investment of time and resources.

 

Lab-use technologies 

Lab-use BRAIN Initiative technologies face very different challenges from neurotechnologies used in humans, encountering none of the hurdles associated with testing therapies in people. But the unrestrained and rapid proliferation of new lab-based methods can have both unexpected and unintended consequences. For example, innovation and utility are not necessarily connected, and academic scientists are typically rewarded mainly for innovation. Broad utility of tools among end users requires thoughtful and resource-intensive steps involving product development, manufacturing, standardization, and documentation – while also looking ahead toward end-user training, support, and product/program sustainability. Other factors such as intellectual-property concerns affect the feasibility of various business models, which can range from an open-source “build it yourself” strategy to a commercialized platform. One good example of the latter is the commercialization of the Neuropixels probe (see text box).

Neuropixels

Neuropixels is a neural-recording technology that can monitor hundreds of neurons simultaneously throughout an individual animal brain. When it debuted, the device offered a leap ahead of existing recording systems. Funded outside of the BRAIN Initiative, Neuropixels development hinged on substantial, sustained funding ($10M or more over 5 or more years), a partnership with an industrial contributor with manufacturing and operational knowledge (IMEC in Belgium), and a business model for sustainability. In this case, private entities (the Howard Hughes Medical Institute, the Allen Brain Institute, the Gatsby Charitable Foundation, and the Wellcome Trust) subsidized efforts to rapidly share Neuropixels with scientists, including providing substantial infrastructure and personnel support for well-controlled device manufacturing. Currently, Neuropixels supplies devices to researchers at “cost-plus,” adding a modest mark-up to cover these expenses. Neuropixels supports semi-annual formal training meetings to promote end-user support within the scientific community. This approach has been intentionally gradual, starting with a few large labs and scaling up carefully, to ensure that technology dissemination occurs at a sustainable rate. The case of Neuropixels, and other technologies such as Medtronic’s Brain Radio™ and Blackrock Microsystems’ neurophysiological systems, highlights a necessary departure from standard business models for dissemination of lab-use neuroscience tools. It also highlights the necessity of end-user training and empowerment to enable wide adoption without prohibitive cost. As a result, however, the project is revenue-neutral.

 

BRAIN Initiative neurotechnology: Where are we now?

Many practical obstacles still prevent widespread dissemination of technologies. We suggest that BRAIN 2.0 should address these issues directly, with frank community input and carefully conceived strategies to ensure that BRAIN Initiative investments are fully leveraged to generate breakthrough insights into brain function. Novel ways to support technology development and dissemination are urgently needed, and these efforts need to operate beyond conventional commercialization timelines. BRAIN 2.0 should establish firmer requirements for technology developers to work iteratively with end users, to ensure relevance. Establishing support infrastructure to aid the collaborations between experimentalists and data scientists (or other experimentalists) will help remove perceived barriers. Many of the roadblocks are ingrained within neuroscience culture, but they are worsened by our current hypercompetitive biomedical research environment as well as financial and legal aspects of commercialization. Addressing these issues more directly through improved interdisciplinary training as well as collaboration incentives and community education could remove many of the tensions and barriers that are restraining our progress toward understanding the brain. 

Technology dissemination is arguably a new area for NIH, in which the BRAIN Initiative is taking on the role of a technology incubator for new ideas, hoping to see them propagate into the scientific marketplace. Uptake of BRAIN Initiative-funded imaging tools by the research community has been slowed by impediments such as incompatibility with two existing commercial models (private-sector collaborations vs. small-business grants). We consider these below.

 

NIH Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) program

Although BRAIN 1.0 funded many SBIR grants, several went to existing, established companies. As such, it is unclear whether projects conducted via these grants supported new innovations attributable to the BRAIN Initiative. BRAIN 2.0 could leverage its initial investment through a matchmaking role to foster additional collaborations between academic scientists and existing companies comfortable within the SBIR-funding ecosystem. Academic researchers could also form a new company to qualify for SBIR funding. However, this carries substantial risk, requiring a sound business plan to handle (and resource) intellectual-property matters as well as establish a corporate and development team to map research, development, and eventual profit. Most academic scientists are unfamiliar with these processes.

Other neurotechnology techniques and tools are likely to support a niche user base. These relatively low-profit or small-market products call for very different dissemination approaches compared to paths generally supported by university technology-transfer offices, SBIRs, and venture-capital funding. Even open-source dissemination, which is popular in principle, is still costly in practice. BRAIN 2.0 might address these challenges through innovative strategies – perhaps non-profit models that subsidize technology development via established industrial partners with necessary expertise to rapidly develop products beyond the abilities and resources of academic inventors. In summary, SBIR/STTR funding models alone are insufficient in many cases to permit academic inventors to successfully commercialize their technologies. 

 

Technology sharing and training grants

Previously, BRAIN 1.0 issued one round of U24 dissemination grants applicable to a broad range of BRAIN Initiative-funded neurotechnologies but did not target the BRAIN community directly. Potential steps to increase awareness of these opportunities include issuing better guidance on topical areas; using a continuous cycle of opportunities instead of a perceived “one-off” grant; focusing on proper business models; requesting end-user projections and sustainability plans; ensuring compliance with applicable regulations; and adopting appropriate ethical protocols. Supplemental funding could assist tool-developing labs in supporting technical staff fully dedicated to training and sharing.

 

Annual BRAIN Investigator meeting

Each year, the BRAIN Initiative hosts an investigators’ meeting to convene the community, foster knowledge exchange, raise awareness about neurotechnology developments, and highlight successes. This meeting presents a compelling opportunity to address issues related to training and dissemination. BRAIN 2.0 might consider inviting leaders from a spectrum of companies to help raise awareness about early-stage opportunities. Other uses of this convening might include matchmaking sessions for technology makers and users, boot camps on innovation and business principles, and sessions on neuroethics. To encourage diverse participation, meeting planners should personally invite potential industry partners.

 

Gaps and opportunities: Next steps for BRAIN 2.0

A recurring issue in any type of technology development is mission misalignment. Academic scientists are not trained to bridge the gap from invention to market, and they are rewarded for tool innovation – not tool utility. For example, tenure decisions rarely consider tool deployment explicitly. Can we raise the profile of successful tool builders in the community? At a minimum, we need to equip entrepreneurial scientists to take their ideas from the bench and propagate them in the broader marketplace. At the same time, great scientists may not make great business people. Start-up companies often face “founder’s syndrome,” in which the founder’s initial vision collides with the leadership and development abilities needed as the company expands. What structures should we put in place to allow for a smooth hand-off from vision to leadership and expansion? Alternatively, how can we provide training and resources for inventors to learn and grow, toward supporting their innovation successfully?

 

Capital flows and time-value of money

Supporting tool development requires substantial resources. In the case of Neuropixels, producing 1,000 units costs more than $2 million, well beyond the reach of most academic labs. Modeling cash flows from an operational perspective is important to define a realistic level of available support given the funding environment. Time is another challenge: biomedical research tolerates long latencies before application, but business is not so forgiving. The time-value of money – the concept that money today is worth more than the identical sum in the future because of its potential earning capacity – works against neurotechnology translation compared to other investments (yearly discount rates of 15 percent or more are common). Such realities motivate the need for investment capital from non-corporate sources during the protracted translation timeframe (often called the valley of death for this reason). Neuropixels is an excellent example of successful investment from non-corporate sources.
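To make the discounting arithmetic concrete, consider a minimal worked example, assuming the 15 percent annual discount rate cited above and a hypothetical five- or ten-year translation timeline (these timelines are illustrative, not figures from this report). The present value PV of a future payoff FV received after t years is:

\[ PV = \frac{FV}{(1+r)^{t}}, \qquad \left.\frac{PV}{FV}\right|_{r=0.15,\ t=5} = \frac{1}{1.15^{5}} \approx 0.50, \qquad \left.\frac{PV}{FV}\right|_{r=0.15,\ t=10} = \frac{1}{1.15^{10}} \approx 0.25. \]

Under such a rate, a payoff expected five years out is worth roughly half its face value today, and one a decade out roughly a quarter – one reason patient, non-corporate capital is needed to bridge the valley of death.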

 

Defining a business model

Many types of business models can support neurotechnology development and scale-up. These include open-sourcing and nonprofits (like Neuropixels); small companies focused on research (like Blackrock Microsystems); and large entities (like Boston Scientific and Medtronic) aiming to fill a long-term funnel. Common threads running through each of these models, however, are cash-flow management and basic management theory.

 

What will success look like? 

The BRAIN Initiative is not a typical NIH program – by virtue of the topic of study (our own intellectual, cognitive, and emotional state); its broad appeal to many fields of inquiry; its cross-application to academia, industry, societal institutions, and international collaboration; and its promise for ending human suffering from disorders of the brain. Given these characteristics, what sort of return on investment should we expect?

Innovative companies like Google and medical-technology leaders like Medtronic keep target percentages for monitoring idea-to-product progress. These thresholds are meant to balance innovation and risk. Should the BRAIN Initiative strive for around 15 percent, a typical threshold used in high-risk, high-reward industries? Some loose target would help frame strategic discussions and help ensure that the publicly funded BRAIN Initiative remains relevant and successful.

Setting expectations raises the point of dual purpose. We believe the BRAIN Initiative should stimulate the U.S. economy; thus, new companies and industrial growth are important. However, a more concrete goal would be to simply ensure that scientists conducting neuroscience research have access to the latest technologies. This conveys a multifaceted and very difficult challenge in which innovators must develop useful, reliable tools that become rapidly available to others. Challenges for innovators are matched by valid concerns about introducing risk for research participants (in particular NHPs and humans). One possibility is for NIH to adopt an investment mentality akin to that of the Wellcome Trust, which precludes “paying twice” for innovations: first to build them, then to access them for research.

 

Suggested 5-year technology goals 

Human-use technologies

  1. Translation council. NIH receives guidance from senior thought leaders on its research program via scientific councils; a similar group that advises on the portfolio of BRAIN Initiative technologies would help maximize dissemination of the most promising ideas. The team would consist of scientists and technologists with industry experience (not-for-profit and for-profit) and proven capability for translating a new idea to a broader environment. This group could help define metrics of success (see above) and ensure projects have sufficient infrastructure and support for success. Note that members of this team might also serve as mentors to new entrepreneurs, in a model similar to a venture-capital firm or incubator. 
  2. Training boot camp for entrepreneurial academic scientists. The National Science Foundation hosts I-Corps, which trains academic researchers in business processes and helps them refine their business plans. We propose that an I-Corps short course be included at the next annual BRAIN Investigators’ meeting to gauge community interest in such a resource (possibly also for lab-use technologies).
  3. Continue to improve the capability of resources to serve biomedical research, with an emphasis on reducing foundational frictions in technology deployment. Shifting this program to a continuous funding cycle is important, but other forms of leverage are needed to facilitate translation. One example is a researcher-focused quality-management system to help develop technology intended for human use. In the short term, partnering with FDA to create a database that streamlines this process for researchers and includes exemplars of successful translation to human use (through an investigational device exemption, IDE) would likely facilitate successful dissemination.
  4. Contribute to improving the capability of resources for biomedical research by also requiring usage projections and a sustainable financial model. Basic marketing considerations should also be required, such as a strengths/weaknesses/opportunities/threats (SWOT) analysis of the impact of a resource in a particular geographic location or scientific space. One key challenge is culture: overall, the NIH model is very scientist-driven, with little opportunity for the agency to shape what is proposed or to ensure (enforce) team cohesion. 
  5. Balance the neurotechnology pipeline. The BRAIN Initiative PPP program continually faces challenges in balancing industry’s near-term focus with the future-looking aims of NIH-funded research. Human trials are expensive (more than $100,000/protocol/year), and companies that participate assume liability (even if implicitly) in these studies. Providing some financial compensation to help offset these costs would attract more industry participation, since the probability of a short-term win for translation is low, and the time-value of money and opportunity cost can make the NIH BRAIN Initiative look unattractive. BRAIN 2.0 might consider an analysis to determine why the pharmaceutical industry and other private sectors have not participated extensively. We are unaware of a good model to incentivize a company to license a BRAIN Initiative-funded technology – a company will do so only if it deems the technology financially viable, assuming no NIH support. Ironically, BRAIN U01 grants for technology development have budgets that far exceed many corporate development budgets. Addressing this imbalance could deliver more, better-produced products to neuroscientists more rapidly. 
  6. Expand investment beyond invasive devices. The PPP and BRAIN Initiative should emphasize neurotechnologies beyond bi-directional medical implants. Other important technologies for investment include imaging, surgical procedures, and molecular approaches. In particular, we advocate for an explicit grant sequence that leverages acute and subacute procedures to maximize potential learning from the daily procedures that give unique access to the human nervous system. The translatability of minimally invasive systems might make them more attractive for the BRAIN Initiative, especially in the short term. For example, compared to the roughly 6 million Apple Watches sold in a single quarter, only about 200,000 DBS implants have been sold in the past 20 years.
  7. More critical assessment of teams developing human-use tools. We believe a better credentialing process is needed for this work, which demands diligence in good-design practices, quality-management systems, and basic program management. Many of these skills are not found in a typical academic lab, and scientists struggle to deploy systems at scale without appropriate resources. During the initial 5 years, many tens of millions of dollars were spent on development of new advanced medical devices led by teams of academics with no product-development, translational, or management experience. NIH (and the government) should consider engaging industry collaborators to improve the probability of a positive return on investment; such collaborators can provide translational know-how that complements academic research.
  8. Clearinghouses for BRAIN tools. BRAIN 2.0 could identify entities to support use and sharing of tools to ensure they are widely available to the community. The translation council (described above) could establish a structure for such a system. Similar concepts might also apply to manufacturers or suppliers: BRAIN 2.0 could establish a preferred vendor list aligned with objectives of the BRAIN Initiative.
  9. Increased, transparent links among federal agencies. Many federal agencies appear to be working in parallel to invest in tools for BRAIN Initiative applications. While coordination plans are in place, they could be optimized, and BRAIN 2.0 could ensure that these plans are transparent to the public. This behavior also exists in academia, where scientists have incentives both to collaborate and to compete.
  10. Advertise success stories for both industry and academia. For many industry participants in the PPP, the only short-term reward is a “halo effect” arising from discoveries that support the public good. NIH press announcements often do not acknowledge industry partners, which can frustrate cooperation between partners. Similarly, what can NIH do to raise the profile of successful tool builders so that early-stage academic scientists are recognized within their departments and beyond? 
  11. Develop expanded neuroethical considerations. Several opportunities exist, such as:
  • Develop guidelines to limit inappropriate use of brain data, including what data can be shared, who has access to the data, and for what purpose.  
  • Devote more critical review to funding proposals for devices that might remain implanted in an individual for long periods. NIH might consider setting aside funds to help support long-term care for such individuals, based upon prior suggestions [ref].
  • Encourage and provide incentives to include neuroethics components in research grants as well as in training and technology development.
  • Expand dialogue between the neuroscience research community and social institutions such as law, education, business, marketing, and public policy that increasingly rely on neuroscience evidence to craft social and legal policy.
  • As neuroscience allows for data acquisition beyond lab and clinical settings, ensure that mobile neuroimaging is conducted with proper attention to eliciting informed consent, returning incidental findings, and respecting cultural differences within diverse populations of human research participants. 
  • Ensure that the results obtained by machine-learning systems and data-analysis algorithms are not biased by a lack of diversity in the data used to train the algorithms.
  • Identify and respond to regulatory gaps in direct-to-consumer marketing and consumer use of novel neurotechnologies such as transcranial direct-current stimulation devices. 
  • Ensure equitable use of advances in neurotechnology across populations.
  • Establish a neuroethics network, consisting of people to consider issues on an ongoing basis for a range of stakeholders (neuroscience researchers and trainees, IRBs, health-care providers, and the non-scientific public). Researchers and policy makers could consult this network for help with projects and other issues that arise.

 

Lab-use technologies

Technologies in this category are diverse – ranging from reagents, fluorescent dyes, and viruses, to microscopes, electrodes, and recording and actuation devices. These tools can also include algorithms and analysis methods.

  1. Establish a roadmap of viable strategies for dissemination of technologies based on their likely market. These might include:
    • Open-source sharing (simple technologies, relatively small market, low-cost components or easily accessible production/replication). These could be either facilitated by online resources, or simply replicated from published work. 
    • Subsidized “build-it-yourself” dissemination via collaboration (more complex technologies requiring only modest productization with a small market and modestly priced components). This requires personnel and infrastructure at the originating lab, and training and support for end-user labs. 
    • Direct-sale dissemination via universities or start-ups (more complex technologies requiring professional manufacture, small market and modestly priced components). This will introduce intellectual-property and conflict-of-interest considerations. 
    • Commercial development by an independent private company, accessing SBIR funding (simple-to-complex technologies with a market sizable enough for financial sustainability).
    • Licensing and commercial development by a public or private company (technologies with a sufficient market to offset more significant development and post-sale support). This approach can lead to long lead times before technologies can become available. 
    • Dissemination by a service-model “technology hub” to enable efficient sharing among diverse laboratories (rare or expensive equipment such as state-of-the-art MRI systems that could be supported by fee-for-use or other models). The business models for such hubs should be assessed as part of establishing these resources. 
  2. Analyze resources needed by labs to identify the most suitable dissemination model. Labs would benefit from knowing the most relevant shared services for their needs such as intellectual-property consultation, market-research assistance, or matchmaking to suitable corporate entities. Collating and sharing expertise from successful tool disseminators could propagate a network of peer mentoring (this same network could provide support for enhancing recognition of technology dissemination as an academic achievement for junior faculty).
  3. Consider establishing dedicated training programs for scientists who wish to disseminate their technologies, focusing on the unique considerations of technologies for lab (rather than human) use.  
  4. Consider strengthening a culture of close collaboration between innovators and end users in the context of the most relevant market. Collaborative, iterative refinement of technologies ensures impact and accessibility. Tool disseminators should be accountable for utility.
  5. Consider mechanisms to subsidize important, but financially unattractive, technologies to enable dissemination. Supplementing production costs to make such technologies viable might be the most cost-effective option to ensuring return on the NIH-funding investment. 
  6. Develop an alternative approach to offer investigators rapid funding for new-tool adoption, especially collaborations between innovators and new end-user labs. Despite the perceived failure of the BRAIN Initiative RFA “Technology Sharing and Propagation (R03)”, a revised funding scheme that supports constructing a system, as well as the labor or training needed to bring sufficient new expertise into the lab (e.g., new analysis techniques, new labeling strategies), would be valuable for removing barriers to adoption. 
  7. Fund training courses. Several “neurotechnology methods” courses already exist (e.g., those at Woods Hole Marine Biological Laboratory and Cold Spring Harbor Laboratory), but iterations could more directly address a shift to building and using non-commercial techniques. Since continued training of new users may extend beyond the development phase of a technology (or may move from the innovator lab to the lab of a super-user), such training grants could be separated from innovation grants and thus fill the gap between expertise and personnel.
  8. Support dissemination of BRAIN Initiative technologies to researchers addressing disease states. Many tools now developed for lab-based neuroscience research have been driven by applications to understanding the healthy brain. However, such techniques have extraordinary, untapped potential for helping us understand disease. Combining novel technology applications with disease studies could yield new mechanistic insights into both normal and abnormal brain function (and effects on behavior). These platforms could also be used to screen for and evaluate candidate therapies. It should be noted that the challenges of extending BRAIN Initiative technologies to a wider audience of scientists exploring pathology may be significant.

 

Beyond the Vision of BRAIN 1.0: TRANSFORMATIVE PROJECTS

While the preceding sections include many important findings for future implementation of the BRAIN Initiative, none represents a notable departure from the vision presented by BRAIN 2025. However, one domain where we believe the BRAIN Initiative could flourish is the development of several large-scale projects that will yield particularly important resources and data to propel neuroscience far into the future.

The BRAIN Initiative Cell Census Network (BICCN), which was described in Priority Area 1: Discovering Diversity, stands as a model for such projects. Building on early BRAIN Initiative successes to identify cell types in the mouse brain, the BICCN has brought together centers that are compiling a comprehensive mouse brain-cell atlas. Teams are launching corresponding efforts in NHPs and in humans, as well as establishing a Cell Data Center that will integrate, visualize, and disseminate the cell-census data. This lasting resource will be relevant to countless neuroscience research projects for decades to come and provide necessary tools to conceive cell-based therapies for use in humans. It is likely that the high cost of the BICCN will be returned many times over by motivating and systematizing future research. 

We believe that directing resources to transformative projects of this scale will accelerate the goals of the BRAIN Initiative. We recognize that resources are finite and that small-scale research projects are the lifeblood of BRAIN Initiative research. However, the BRAIN Initiative offers a unique opportunity to accomplish huge projects like the BICCN that otherwise would not have access to major, sustained support.

The next pages highlight examples of large-scale transformative projects that we believe could change the course of neuroscience research and provide exceptional returns both for fundamental understanding of brain function and for building powerful technological and knowledge-based platforms for ameliorating neurological and neuropsychiatric disorders. These examples support our view that BRAIN 2.0 should mobilize efforts to identify and support more large-scale projects – often involving various levels of team science – that have potential for exceptional impact.

 

1. A Cell Type-Specific Armamentarium for Understanding Brain Function and Dysfunction 

Ramón y Cajal’s original microscopic studies of brain structure provided a visual display of the extraordinary beauty of this organ, but anatomy by itself could not provide a mechanistic understanding of the role of different cell types in human behavior and thought. With a detailed cell census in hand, we now have an opportunity to manipulate brain-cell function and resolve questions of cause and effect between cell types, their functional outputs, and illnesses of the brain.

This large-scale, high-throughput transformative project would generate and implement methods to specifically access, manipulate, and model a few hundred clinically-relevant cell types across multiple species, including NHPs and humans, in a manner not requiring germline genome modification.

Central to achieving this transformative goal is the ability to permanently label and reversibly alter function of groups of cells from any organism, employing strategies that enable access to specific cell types for the purpose of experimentally manipulating them. Example technologies might include CRISPR-based methods, high-throughput use of compact (adeno-associated virus (AAV)-sized) enhancers that can control hundreds or thousands of specific cell types; monoclonal antibodies and/or nanobodies against cell type-specific surface proteins for pseudotyping lentiviruses; AAV serotypes with novel cell specificities; permanent, activity-dependent cell-marking methods; and methods that combine approaches and targets (e.g., split-GAL4 with two enhancers, split-GAL4 with pseudotyped lentivirus). 

Such reversible, cell type-based manipulation of brain activity would not only advance understanding of fundamental principles of brain function but also guide novel therapies for brain disorders through the use of animal models. These therapies would encompass cell type-specific manipulations of electrical activity, chemical neuromodulation, and gene expression. A preview of the power of this type of approach in other tissues comes from studies with Perturb-seq and CROP-seq.

At the molecular level, these tools – together with proteomic data and antibody reagents – will enable functional studies and empower disease-oriented research. At the circuit level, the ability to manipulate neuronal function with increased cell type-specificity will enable potential therapeutic interventions to override brain dysfunctions that have complex and/or early developmental genetic etiologies, and which would otherwise resist gene-level therapies (see Reaching Circuit Cures, below). Together these molecular- and circuit-level approaches have the potential to transform treatment of brain disorders.

 

2. The Human Brain Cell Atlas

Work now underway in the BICCN aims to provide a cell-type census that will characterize cell diversity in the human brain. That effort, however, will not provide detailed information about the morphology, connections, or location of cell types within cortical layers or other brain structures. This transformative project builds on technological and conceptual advances from model systems and aims to generate a comprehensive cell-type atlas of the human brain, including an anatomically informed, highly granular cell census of the whole human brain. 

Attaining this goal entails significant changes in current scales of tissue-processing capabilities, which await progress in automation, serial analyses, and coordination efforts across large collaborative research groups. This is an ambitious goal, but current technology paves the way for making it a reality over the next several years, especially given expected emergence of improved methods.

 

3. The Mouse Brain Connectome 

This transformative project aims to comprehensively map the entire mouse brain, enabling study of brain circuitry from synapses to coordinated function and behavior. 

Imaging an entire mouse brain is a daunting prospect, as it requires detailed, nanometer-level electron-microscopic (EM) imaging of roughly 300,000 brain slices. The resulting three-dimensional dataset will be massive – thousands of times larger than previous endeavors – and will require additional processing to identify cell boundaries, provide molecular information, and map connectivity across six orders of magnitude spatially. 

Accomplishing such an ambitious goal in a 5-year period will require continued advances in high-throughput tissue handling, EM devices with nearly 100 parallel beams, and modern machine-learning algorithms implemented on supercomputers. Despite the enormity of the challenge, we believe it is feasible. New histologies and tissue chemistries have changed the landscape of the field. Sample handling is increasingly automated, and maps of several complete small organisms are now appearing. Multibeam-EM devices with 64 parallel beams are already available. Manufacturing and deploying the required 20 to 30 100-parallel-beam devices over a 5-year period in a time- and cost-efficient manner will require academic-industrial partnerships on a scale atypical for neuroscience but not unfamiliar in high-energy physics and astronomy. Public-private partnerships (PPPs) for image analysis, including the most advanced industrial platforms for large-scale data analytics, have already demonstrated the feasibility of training deep networks to perform tasks that would require impossible amounts of human effort. Hence, despite both the complexity and cost associated with this transformative project, its ultimate success is not in question.

While many researchers see reconstruction of the entire neural network of a human brain as the ultimate goal of connectomics, that goal is still out of reach – perhaps several decades away – for several reasons: postmortem brain tissue is not sufficiently preserved to allow full circuit reconstruction; EM staining for a volume the size of the human brain is unattainable with known technologies; and EM scans of an entire human brain would require approximately one zettabyte of digital storage (10²¹ bytes, or 1 billion terabytes) – roughly the total amount of Internet traffic worldwide in one year. For these reasons we need to begin with a more tractable mammalian brain to learn how to acquire, process, and share the extraordinarily large datasets that connectomics generates. The mouse brain provides a roadmap to the organization of the human brain at a scale that is achievable with current technologies.
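A back-of-envelope calculation illustrates these scales (our own illustration, assuming a roughly 1,200 cm³ human brain, a roughly 500 mm³ mouse brain, and a commonly used EM voxel of 4 × 4 × 40 nm stored at one byte per voxel; none of these figures is taken from this report):

\[ \frac{1.2 \times 10^{6}\ \mathrm{mm^{3}}}{(4 \times 10^{-6}\,\mathrm{mm})(4 \times 10^{-6}\,\mathrm{mm})(4 \times 10^{-5}\,\mathrm{mm})} \approx 1.9 \times 10^{21}\ \text{voxels} \approx 2\ \mathrm{ZB}. \]

The same arithmetic for a ~500 mm³ mouse brain gives roughly 8 × 10¹⁷ bytes, on the order of an exabyte – about a thousandth of the human figure, consistent with the zettabyte estimate above.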

The value of the mouse connectome for neuroscience could be immense. Because equivalent data have never been collected at this scale, the full implications of its ability to define brain function remain uncertain. However, we have already learned important lessons from brain-mapping efforts in other species. More than 30 years after its publication, the roundworm map continues to be a widely cited resource, showing the durability of such canonical datasets. Efforts to map the fruit-fly brain are nearing completion, and dense reconstructions of vertebrate zebrafish larvae have recently been released. Although the volumes involved were orders of magnitude smaller than for the mouse brain, they have given us network-topology principles and defined functional modules – with clear importance for understanding neural function and behavior. However, none of these past and current efforts approximates the value of a comprehensive study of a mammalian brain, whose organizing structures, from the laminar structure of the cortex to detailed subcortical and brainstem nuclei, are widely preserved across mammalian species, including humans. 

Importantly, in addition to providing essential data to begin to address how structure subserves function in a mammalian brain at synaptic and circuit levels, we will undoubtedly find surprises in these data. As a field, we have focused excessively on familiar or accessible brain structures (cerebral cortex, hippocampus, cerebellum). A whole-brain connectome for even one mammalian individual is likely to reveal worlds of circuitry that have been overlooked and will be a treasure trove for theorists. Dense EM reconstructions contain detailed information from all cell types within the brain’s tissue; hence, such data will provide a wealth of knowledge on the anatomical underpinnings of the essential roles of various cell types in brain function. Combining EM data with functional, molecular, and metabolic readouts obtained in a living brain before sectioning will allow data scientists an opportunity to see just how much of this information is learnable from EM data alone. Doing so will teach us whether inferring cell type or metabolic state requires molecular methods, or whether it can be achieved from structural data alone.

 

4. Reaching Circuit Cures

As neuroscience seeks to perturb brain function, researchers will discover many new ways to modulate brain activity in a manner that influences thoughts, feelings, and actions. Most of these strategies will undoubtedly include a capability to move beyond normal physiological ranges of brain activity, and conversely, they will enable approaches to drive atypical brain circuits and systems back toward healthy, adaptive function. Thus, there also exists an enormous opportunity for the BRAIN Initiative research community to make breakthrough insights into understanding and moving toward cures for pathological conditions operating at the circuit level. 

This transformative project aims to protect or correct a vulnerable circuit, through achieving specific circuit-level understanding of, and truly specific interventions for, a major human neuropsychiatric or neurological disease symptom. 

  1. A circuit in this context is not a brain region or group of cells (the target of most interventional devices), but rather the next level up in organizational complexity and scale. This, for example, might imply time-varying electrical traffic along a set of molecularly defined cells projecting from one part of the brain to another. By selectively controlling the activity of specific circuits with high spatiotemporal precision, it may be possible to achieve long-lasting changes in brain function and reduce the significant human suffering common to most neuropsychiatric disorders. Multiple brain-stimulation studies in animal models have revealed that it is possible to induce long-term changes in affective and cognitive behavior by altering the functional relationship (synaptic strength) between populations of cells deep in the brain, combining the projection specificity and temporal precision of optogenetics with activity-dependent long-term synaptic plasticity rules. This endeavor could include developing an approach to drive long-term circuit-specific changes using precisely timed noninvasive stimulation – potentially in combination with safe, plasticity-facilitating medications or sensory stimuli, also delivered noninvasively, if needed. This technology could work at the surface of the brain, for example with transcranial magnetic stimulation, but could also reach deep targets using knowledge of axonal-tract trajectories known to be causal by virtue of work described earlier in this priority area narrative, as well as spatially localizable for each individual using imaging approaches that visually represent nerve tracts. Simultaneous stimulation at multiple locations could leverage natural wiring anatomy to perturb circuits at convergence points of multiple tracts deeper within the brain. Strategies for developing such a technology could begin in model species to discover optimal spatiotemporal stimulation patterns needed to achieve desired long-term functional strengthening or weakening of deep circuitry. This approach could then be translated into NHPs for monitoring studies of circuit and behavioral impact, followed ultimately by testing and validation in humans. 
  2. Another strategy for a Circuit Cure would leverage new knowledge arising from the intersection of high-speed causal neuroscience experiments and high-speed recording methods. Some invasive DBS designs for epilepsy are closed-loop systems that are triggered into action by detection of abnormal rhythms, not unlike an implanted cardioverter-defibrillator in a heart. The novelty of this transformative approach is the development of a closed-loop or triggered, noninvasive neurotechnology that delivers a circuit manipulation with a temporal precision tolerant of the reduced spatial precision typical of noninvasive devices (see the sketch following this list). This technology would be temporally and spatially targeted based on an activity map generated from monitoring the brain in real time. An example might be a pathological reward state, as observed in the manic state of bipolar disorder or during substance-use craving. In such scenarios, the aberrant brain state would first be sensed, followed by delivery of a stimulus to suppress the reward circuit. When the abnormal state subsides, the neurotechnology would deactivate, allowing the individual to continue to enjoy experiences that are rewarding under healthy circumstances. 
  3. Another Circuit Cure model would entail discovery and control of brain circuits that carry risk for future development of disease, or which instead provide resilience against future chronic processes or life events that may trigger disease symptoms. The first step in achieving this transformative project would be identification of specific, neural-circuit activity patterns that create vulnerability to, or resilience against, neuropsychiatric disorders in sensitive model species. Causal interventions, ideally minimally invasive as with the above examples, would be tested and identified in these circuits to prevent emergence of behavioral dysfunction in response to environmental exposures that would otherwise bring about dysfunction. These technologies would then be translated to NHPs, and ultimately to humans, yielding methods for preventing human disorders, such as opioid abuse, Alzheimer’s disease, and others.
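To make the sense-stimulate-release logic of the closed-loop strategy in item 2 concrete, below is a minimal sketch in Python. It is our illustration only, not any device’s actual control algorithm; the biomarker stream, thresholds, and smoothing window are hypothetical placeholders.

```python
# Minimal sketch of closed-loop, state-triggered stimulation logic
# (illustrative only; thresholds, window size, and the biomarker feed
# are hypothetical placeholders, not parameters of any real device).
from collections import deque

DETECT_THRESHOLD = 0.8   # smoothed biomarker level that triggers stimulation
RELEASE_THRESHOLD = 0.5  # lower level at which stimulation stops (hysteresis)
WINDOW = 4               # number of samples averaged to smooth the estimate

def closed_loop_step(sample, history, stim_on):
    """Update the smoothed biomarker estimate and decide whether to stimulate."""
    history.append(sample)
    level = sum(history) / len(history)   # moving-average state estimate
    if not stim_on and level > DETECT_THRESHOLD:
        stim_on = True    # aberrant state sensed: deliver suppressive stimulus
    elif stim_on and level < RELEASE_THRESHOLD:
        stim_on = False   # state has subsided: deactivate the device
    return stim_on

# Usage with synthetic samples standing in for a real-time biomarker feed:
history, stim_on = deque(maxlen=WINDOW), False
for t, sample in enumerate([0.2, 0.4, 0.9, 1.1, 1.0, 0.6, 0.3, 0.2, 0.2, 0.1]):
    stim_on = closed_loop_step(sample, history, stim_on)
    print(f"t={t}: biomarker={sample:.1f}, stimulation={'ON' if stim_on else 'off'}")
```

The gap between the detect and release thresholds (hysteresis) prevents rapid on-off chattering when the biomarker hovers near a single cutoff, mirroring the requirement above that stimulation deactivate only once the abnormal state subsides.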

While the proposed Circuit Cures project advances novel treatments, along the way neuroscientists will likely uncover fundamental principles underlying neural and behavioral mechanisms, such as distributed-circuit dynamics, plasticity, state, and decision making in the human brain. These observations will be valuable for validating and further revising theoretical principles derived from model systems to optimize brain stimulation-based treatments for neural-circuit disorders in humans. For all of these proposed Circuit Cures, access to brain circuits in a reliable and predictable manner that yields long-term changes is very likely to result in novel, safe, and effective treatments for brain disorders currently devastating to patients and their families.

 

5. Memory and the Offline Brain

We propose a large-scale project with the goal of answering the following question: How does the brain retrieve and leverage information from internal models and diverse memory systems?

This is a project that truly spans multiple scales, from synapses to brainwide networks, and it will require building bridges between neural activity and behavior. Expertise from theorists will support experimentalists’ efforts to interpret the data in the context of relevant theoretical frameworks such as adaptation, attractor networks, Bayesian computation, and reinforcement learning.

This project would likely include coordinated efforts by many investigators aiming to record activity in multiple brain areas during different kinds of memory tasks. These large-scale maps of neural activity will be key for understanding how multiple systems coordinate to support memory and retrieval. Further, they will afford an opportunity to investigate the role of feedback in distributed neural circuits, as retrieval of internal knowledge likely unfolds in a top-down manner. Thus, these maps would optimally incorporate cell-type and projection information. Three steps will encourage collaboration among groups. First, a meeting will convene all interested participants before any awards are made, to facilitate setting synergistic goals. Second, applicants will be asked to describe explicitly how their proposed project links to at least two other proposed projects. Third, all groups will meet throughout the grant period to share and discuss ideas and progress across systems and models. At the end of this effort, we hope to have developed a coherent model, supported by data across several species, of how internally stored knowledge is generated, retrieved, and deployed to guide behavior. We recognize that many groups are already working on this problem. Although existing efforts have made inroads, major outstanding questions remain about memory formation, representation, and retrieval. We hope this project will unify existing efforts and reveal common motifs or uncover unknown master control mechanisms.

APPENDIX I: ROSTER

BRAIN ACD WG 2.0 Members 

 

Catherine Dulac, PhD (co-chair)                   Harvard University

John Maunsell, PhD (co-chair)                      University of Chicago

David Anderson, PhD                                    California Institute of Technology

Polina Anikeeva, PhD                                    Massachusetts Institute of Technology

Paola Arlotta, PhD                                          Harvard University

Anne Churchland, PhD                                  Cold Spring Harbor Laboratory

Karl Deisseroth, MD/PhD                              Stanford University

Timothy Denison, PhD                                  Oxford University

James Deshler, PhD (Ex officio)                    National Science Foundation

Kafui Dzirasa, MD/PhD                                 Duke University

Alfred Emondi, PhD (Ex officio)                      Defense Advanced Research Projects Agency

Adrienne Fairhall, PhD                                  University of Washington

Christine Grady, RN, PhD (Ex officio)            Bioethics, National Institutes of Health

Elizabeth Hillman, PhD                                  Columbia University

Lyric Jorgenson, PhD (Ex officio)                  National Institutes of Health

David Markowitz, PhD (Ex officio)                  Intelligence Advanced Research Projects Activity

Lisa Monteggia, PhD                                     University of Texas Southwestern

Carlos Peña, PhD (Ex officio)                        Food and Drug Administration

Krishna Shenoy, PhD                                    Stanford University

Doris Tsao, PhD                                            California Institute of Technology

Huda Zoghbi, MD                                          Baylor College of Medicine

 

NIH ACD BRAIN Initiative Neuroethics Subgroup

 

James Eberwine, PhD (co-chair)                   University of Pennsylvania

Jeffrey Kahn, PhD, MPH (co-chair)                Johns Hopkins University

Adrienne Fairhall, PhD                                   University of Washington

Elizabeth Hillman, PhD                                   Columbia University

Christine Grady, MSN, PhD                           National Institutes of Health

Karen Rommelfanger, PhD                            Emory University

Insoo Hyun, PhD                                            Case Western Reserve University

Andre Machado, MD                                      Cleveland Clinic

Laura Roberts, MD                                         Stanford University

Francis Shen, JD, PhD                                   University of Minnesota

 

NIH Staff:

 

Alison Davis, PhD 

Nina Hsu, PhD

Sam White, PhD