The BRAIN Neuroethics Working Group (NEWG), a group of neuroethics and neuroscience experts that advises the NIH BRAIN Initiative on neuroethics, held a virtual meeting on Thursday, January 29, 2026. The meeting videocast is available for public viewing.
| Time (ET) | Agenda Item | Presenter(s) |
|---|---|---|
| 12:00 pm | Welcome | Dr. Nina Hsu, NEWG Designated Federal Official |
| 12:05 pm | Update from the BRAIN Director | Dr. John Ngai |
| 12:15 pm | Potential Ethical Considerations in Neuroscience and AI | Drs. Saskia Hendriks and Nina Hsu |
| 12:30 pm | Primers with Scientific / Technical Grounding | Drs. Shreya Saxena, Josue Ortega Caro, and Shailee Jain |
| 1:15 pm | Topic 1: Contextual Privacy | Dr. Margot Hanley |
| 1:45 pm | Topic 2: Informed Consent | Dr. Barbara Evans |
| 2:15 pm | Topic 3: Bias | Dr. Michael Romano |
| 2:45 pm | Break | |
| 3:00 pm | NEWG Discussion of Next Steps | Drs. Nita Farahany and Amy McGuire |
| 3:30 pm | Roundtable Updates | Dr. Amy McGuire |
| 4:00 pm | Adjourn | |
Meeting Summary
Brain Research Through Advancing Innovative Neurotechnologies® (BRAIN)
Neuroethics Working Group (NEWG) Meeting
January 29, 2026
On January 29, 2026, the National Institutes of Health (NIH) Brain Research Through Advancing Innovative Neurotechnologies® (BRAIN) Initiative Neuroethics Working Group (NEWG) convened for a virtual meeting to share updates and discuss potential ethical considerations related to brain foundation models.
Nina Hsu, PhD, Designated Federal Official of the NEWG, welcomed the working group meeting attendees and provided an overview of the meeting. Next, John Ngai, PhD, Director of the NIH BRAIN Initiative, provided neuroethics-related program updates. He welcomed Amy McGuire, JD, PhD, as the new NEWG Co-chair and recognized Walter Koroshetz, MD, FAAN, for his tenure as Director of the National Institute of Neurological Disorders and Stroke (NINDS). He also shared a recently published Highlighted Topic focused on advancing bioethics research projects that promote trust in and strengthen biomedical research. Additionally, he invited attendees to respond to a request for information on a draft NIH controlled-access data policy and proposed revisions to the NIH genomic data sharing policy.
To set the stage for the meeting, Saskia Hendriks, MD, PhD, Staff Bioethicist at the NIH, and Dr. Hsu offered a refresher on recent NEWG discussions. They shared insights from the BRAIN Initiative’s 2023 data-sharing workshop, the 2024 exploration of brain foundation models, the 2024 NeuroAI workshop, and the 2025 exploration of digital brain twins. They emphasized the potential ethical considerations raised across these past events, including data collection, privacy, and misuse; the added value of updated ethics and governance guidelines for emerging technologies; and the challenge of developing data protections while ensuring that data retain scientific utility.
Next, several guest speakers provided updates in the field of brain foundation models. Shreya Saxena, PhD, Assistant Professor of Biomedical Engineering at Yale University, provided an overview of how brain foundation models work. Brain foundation models are large neural networks built and trained on diverse, multimodal data—such as neural signals, imaging, and behavioral input (e.g., speech)—to support tasks such as decoding behavior, detecting disease, and predicting clinical outcomes. These models are helpful for processing high-volume, high-dimensional neural data, which are often noisy and highly variable.
Josue Ortega Caro, PhD, Wu Tsai Postdoctoral Fellow at Yale University, described his work on BrainLM, a large-scale functional magnetic resonance imaging (fMRI)-based brain foundation model trained on UK Biobank and Human Connectome Project datasets. The full BrainLM dataset includes nearly 80,000 recordings, or about 6,700 hours, of data. BrainLM was trained using a self-supervised method called masked autoencoding, and its performance continues to improve as the size of its training dataset increases. BrainLM outperforms other models at predicting clinical metadata such as age, post-traumatic stress disorder (PTSD), anxiety, and neuroticism. Dr. Ortega Caro concluded his presentation by sharing ethical considerations related to BrainLM. Recent studies show that large language models may allow users to reconstruct training data through the model, raising data privacy risks. Unrepresentative training data may also introduce bias into the model (e.g., if a model for clinical prediction is trained primarily on data from healthy subjects). Additionally, researchers may need to navigate distinct Institutional Review Board requirements for different recording modalities.
Next, Shailee Jain, PhD, Postdoctoral Researcher at the University of California, San Francisco (UCSF), discussed the use of invasive neural data in brain foundation models. Invasive tools such as electrocorticography and Neuropixels probes have enabled the collection of highly accurate, high-resolution measurements of brain activity. Foundation models allow researchers to integrate these data across individuals, cognitive tasks, recording modalities, and datasets to uncover insights about neural mechanisms, dynamics, and information pathways. Different types of brain foundation models—such as neural modeling, brain-computer interfaces (BCIs), in silico neural generation, and behavior modeling—ingest specific data inputs for specific downstream uses. Dr. Jain highlighted technical and ethical questions related to data use and ownership as well as to assessing the accuracy of AI algorithms.
Following these presentations, NEWG members engaged in focused discussions about contextual privacy, informed consent, and bias. Margot Hanley, PhD, Postdoctoral Researcher at Duke University, provided an overview of contextual privacy, a framework that emphasizes the importance of context in determining expectations regarding how personal information is shared and used. She underscored how brain foundation models complicate traditional privacy norms due to data repurposing, dataset integration, and unanticipated downstream data reuse. NEWG members discussed how to effectively identify what is most meaningful as contextual norms shift and considered possible scenarios in which individuals can remove their data from large models if they choose to withdraw consent after models are trained.
Next, Barbara Evans, PhD, JD, LLM, Stephen C. O’Connell Chair and Professor of Law at the University of Florida, outlined the strengths and limits of informed consent as a primary privacy safeguard and described a shift toward duty-based protections. Such protections include setting and enforcing standards around data deidentification, access, and use while retaining the scientific utility of datasets. She emphasized that public trust is necessary for research to be effective and can be built through reciprocity, non-exploitation, and service for the public good. The working group then discussed how to communicate the risks and benefits of their work, especially when harms such as data breaches occur, and emphasized the importance of reciprocity and of prioritizing the voices of people with lived experience as decisions are made.
In a session led by Michael Romano, MD, PhD, Radiology Resident at UCSF, NEWG members examined sources of bias in brain foundation models, including sampling, measurement, and reporting. Dr. Romano proposed approaches to mitigate bias, including collecting representative data, disclosing the demographics and characteristics of training data, oversampling poorly represented subpopulations, and training clinicians to assess potential sources of bias. NEWG members suggested additional ways to address bias, including engaging with patient advocacy groups and community partners, developing unique foundation models for different subpopulations (e.g., institution-specific models trained to that institution’s patient population), and developing a standard of performance for more generalizable models to guide appropriate use.
Next, Drs. Farahany and McGuire facilitated a conversation to identify next steps for NEWG. Members reflected on AI’s role in heightening ongoing ethical dilemmas in research and discussed the potential unique ethical challenges raised by brain foundation models. They agreed that robust data governance is critical to ensure ethical use of foundation models. Data governance must strike a balance between safeguarding training data and ensuring that the data can be used for scientific good. Proposed approaches to data governance include establishing a data fiduciary, engaging with impacted parties such as community organizations and technology developers, and publishing considerations to inform policy decisions.
The meeting closed with roundtable updates from NEWG members. Winston Chiong, MD, PhD, shared news of an upcoming publication on BCIs and reminded NEWG members about the Neuroethics 2026 meeting on April 15–17, 2026. Jen French, MBA, shared that the ethics workgroup of the Implantable BCI Collaborative Community (iBCI-CC) is developing a checklist for informed consent. The iBCI-CC is also working with the American Brain Coalition on an engagement project to be released in the second quarter of 2026. Caroline Montojo, PhD, shared the Dana Foundation’s recent call for letters of interest, which seeks pilot projects that connect brain research to real-world needs. She also updated the NEWG on a new board member at the Dana Foundation and shared the application for the Neurotech Justice Accelerator internship program. Dr. McGuire shared a recently published paper on data sharing and privacy.[1] Dr. Farahany shared several updates, including a new preprint[2] exploring training data governance for brain foundation models; recent initiatives from the Uniform Laws Commission; a workshop on the World Economic Forum’s NeuroTrust Index on March 30, 2026; and an upcoming public comment period for the American and European Law Institutes on the principles of biometrics, scheduled to open in early March.
For more on the NEWG meeting, view the video recording. The next NEWG meeting will be held in August 2026.
[1] Guerrini CJ, Robinson JO, Crossnohere NL, Majumder MA, Jones KM, Brooks WB, Sheth SA, McGuire AL. Privacy in perspective: research participants' priorities and concerns related to sharing data generated in human neuroscience studies. Neuroethics. 2025 Aug;18(2):37. doi: 10.1007/s12152-025-09609-1. Epub 2025 Aug 4. PMID: 40821358; PMCID: PMC12356284.
[2] Hanley M, Yeh JT, Rodriguez R, Pilkington J, Farahany N. Training Data Governance for Brain Foundation Models. arXiv preprint arXiv:2602.02511. 2026 Jan 23.