The blobfish, once considered the ugliest animal in the world, has since had quite the redemption arc. Years after it was first discovered, scientists realized that the deep-sea creature appeared so unnervingly blobby only because it went through an extreme change in pressure when it was brought up to the surface. In its natural environment, 4,000 feet underwater, the fish looks perfectly handsome.
Structural biologists, whose goal is to deduce a molecule’s structure and function within a cell, face the risk of making a similar mistake. If biomolecular complexes are extracted from the cell, better-quality images can be obtained, but the molecules may not look natural. On the other hand, studying molecules without disrupting their environment at all is technically challenging, like filming deep underwater.
A new method, called purification-free ribosome imaging from subcellular mixtures (cryoPRISM), offers an appealing compromise. Developed by graduate students Mira May and Gabriela López-Pérez in the Davis lab in the MIT Department of Biology and recently published in PNAS, the technique allows biologists to visualize molecular complexes without taking them too far out of their natural context.
CryoPRISM captures molecular structures in cells that have just been broken open. This comes as close to preserving the natural interactions between molecules as possible, short of the extremely resource-intensive in-cell structural imaging, according to associate professor of biology Joey Davis, the faculty lead of the study.
“We think that the cryoPRISM method is a sweet spot where we preserve much of the native cellular contacts, but still have the resolution that lets us actually see molecular details,” Davis says. “Even in the extremely well-trodden system of translation in E. coli, which people have worked on for over 50 years, we are still finding new states that had just escaped people’s attention.”
A negative control that was not so negative
The development of cryoPRISM, like many discoveries in science, resulted from an unexpected observation that Mira May, the co-first author of the study, made while working on a different project.
Like all living organisms, bacteria rely on a process called translation to manufacture the proteins that carry out essential functions within the cell, from copying DNA to digesting nutrients. A key machine involved in translation is the ribosome — a biomolecular complex that assembles proteins based on instructions encoded by another molecule called mRNA. To regulate its activity, cells employ additional proteins that can change the shape of the ribosome, thus guiding its function.
May sought to identify new players in ribosomal regulation using cryogenic electron microscopy (cryoEM), which involves rapidly freezing large numbers of purified molecules and collecting thousands of 2D images to reconstruct their 3D structures. She was trying to pull ribosomes out of cells to visualize them together with their regulators. For her experiments, she designed a negative control containing unpurified bacterial lysate — a mixture of everything spilled from burst cells.
May expected noisy, low-quality images from this sample. Instead, to her surprise, she saw intact ribosomes together with their natural interacting partners.
In just a few days, this technique experimentally validated data that would have taken months to acquire using other approaches.
“As I found more and more ribosomal states, this project became a method, not just a one-off finding,” May recalls.
Discovering new biology in a saturated field
Once May and her colleagues were confident that cryoPRISM could detect known ribosomal states, they began searching for ones that had previously escaped detection.
“It’s not just that we can recapitulate things that have been previously observed, but we can actually also discover novel ribosomal biology,” May says.
One of the novel states May identified has important implications for our understanding of the evolution of translation regulation.
During active translation, bacterial ribosomes are accompanied by a group of helper proteins called elongation factors. These factors bring in the materials for protein synthesis, like tRNAs and amino acids.
When cells encounter unfavorable conditions, such as colder temperatures, they reduce translation, which means that many ribosomes are out of work. These idle, hibernating ribosomes stop decoding mRNA, and the interface where they usually interact with helper molecules gets blocked by a hibernation factor called RaiA. This protein helps idle ribosomes avoid reactivation, like a sleeping mask that prevents a person from being woken up by light.
May observed the idle ribosomal state in her data, which on its own did not surprise her – this state had been described before. What surprised her was that some inactive ribosomes were interacting not only with RaiA, but also with an elongation factor called EF-G, which in bacteria was previously believed to only interact with active ribosomes.
A similar phenomenon has been seen before in more complex organisms, but observing it in a microbe suggests that its evolutionary origin may be older than previously thought.
“It fits an emerging model in the field, that elongation factors might bind to hibernating ribosomes to protect both the ribosome and themselves from degradation during periods of stress,” May explains. “Think of it like short-term storage.”
An unstressed cell might quickly eliminate unneeded inactive ribosomes, but because any stressor that puts ribosomes to sleep could be temporary, the cell may prefer to hold off on destroying them. That way, the ribosomes can be quickly reactivated if conditions improve.
The future of cryoPRISM
May has already teamed up with other MIT researchers to use cryoPRISM to visualize ribosomes in cells that are notoriously difficult to work with, including pathogenic organisms, which can be challenging to culture at the scale required for particle purification, and red blood cells isolated from patients, which cannot be cultured at all.
Besides its immediate application for translation research, cryoPRISM is a stepping stone toward the broader goal of structural biology: studying biomolecules in their natural environment.
To truly learn about deep-sea fish, scientists need to look at them in the deep sea; and to learn about cellular machines, scientists need to look at them in cells. According to Davis, cryoPRISM perfectly fits into the “theme of structural biology moving closer and closer to cellular context.”
Lasers, robots, action: MIT workshop explores Raman spectroscopy
Participants learn how laser “fingerprinting” can help identify materials in fields ranging from law enforcement to art restoration.
Could a three-hour workshop on an advanced materials analysis technique turn someone into a detective — or perhaps an art restorer?
At MIT’s Center for Bits and Atoms in late January, about a dozen students explored that possibility during an Independent Activities Period (IAP) workshop on Raman spectroscopy, a technique that uses laser light to “fingerprint” materials. The session even featured a robotic dog equipped with sensing equipment, demonstrating how chemical analysis can be done remotely.
The workshop, led by MIT postdoc Lamyaa Almehmadi in collaboration with the CBA, introduced participants to a powerful technique now used by law enforcement and first responders to identify narcotics and explosives, by gemologists to authenticate precious stones, and by pharmaceutical companies to verify raw materials and ensure product quality. CBA graduate researcher Jiaming Liu co-hosted, delivering lectures, demonstrating Raman equipment, and contributing to the curriculum and hands-on demonstrations.
“It can open up new possibilities for innovation across many fields,” said Almehmadi, an analytical chemist in the Department of Materials Science and Engineering (DMSE). After attendees learned the fundamentals, she encouraged them to think creatively about new applications: “My hope is to inspire all of you to think about doing something with Raman spectroscopy that no one has done before.”
Fingerprinting materials
Participants brought items to class to analyze using handheld devices, which fire laser light and measure how it bounces back. The resulting pattern behaves like a molecular fingerprint, identifying the materials in the item — whether it’s a paper clip, a piece of tree bark, or a mixing bowl.
Workshop attendee Sarah Ciriello, an administrative assistant at DMSE who brought a stone she found at the beach, was taken aback by the results. The Raman device suggested a 39 percent probability that the sample contained concrete-like material, with the remaining readings matching synthetic compounds — blurring the line between natural and manufactured materials.
“It’s man-made — I was surprised,” Ciriello said.
Developed in 1928 by Indian scientist C.V. Raman, who later won the Nobel Prize in Physics, Raman spectroscopy was groundbreaking because it used visible light to probe materials without destroying them, a major advantage over other techniques at the time, such as chromatography or mass spectrometry. But for decades, the Raman signal — the light scattered back from a sample — was weak, and the instruments were big and bulky, limiting its practical use.
Advances in lasers, computing power, and miniaturized optics have transformed Raman spectroscopy into a portable tool. Today’s handheld devices can instantly compare a sample’s molecular fingerprint against vast digital libraries, allowing users to identify thousands of materials in seconds. Because it doesn’t destroy the sample, Raman is especially useful in fields that require preserving materials — such as law enforcement, where evidence must remain intact, and art restoration.
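To make the library-matching step concrete, here is a minimal, illustrative sketch of how a handheld instrument might score a measured spectrum against reference spectra. It assumes all spectra are resampled to a common wavenumber grid and uses simple cosine similarity; the material names, toy spectra, and match metric are placeholders, not the algorithm of any particular commercial device.

```python
import numpy as np

def match_spectrum(measured, library):
    """Rank reference materials by cosine similarity to a measured Raman spectrum.

    measured: 1D intensity array on a fixed wavenumber grid.
    library:  dict of material name -> 1D intensity array on the same grid.
    Returns (name, score) pairs, best match first.
    """
    m = measured / (np.linalg.norm(measured) + 1e-12)  # normalize so brightness doesn't dominate
    scores = []
    for name, ref in library.items():
        r = ref / (np.linalg.norm(ref) + 1e-12)
        scores.append((name, float(np.dot(m, r))))
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Toy example with made-up three-point spectra.
library = {
    "calcite": np.array([0.1, 1.0, 0.2]),
    "quartz":  np.array([0.8, 0.1, 0.3]),
}
print(match_spectrum(np.array([0.15, 0.9, 0.25]), library))
```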
Almehmadi’s own research focuses on advancing Raman spectroscopy by developing highly sensitive, semiconductor-based sensors that make portable chemical analysis possible, with applications ranging from medical diagnostics to forensic and environmental monitoring.
“Raman can be used to analyze any material,” Almehmadi says. “That’s why I decided to introduce it to students from diverse backgrounds.”
IAP classes are open to students and staff across MIT, and the Raman workshop reflected that range — from administrative staff to graduate and undergraduate students and postdocs in departments and labs including DMSE, the Department of Mechanical Engineering, the Media Lab, and the Broad Institute.
Walking the robot dog
A crowd-pleasing element in the workshop was the integration of a robot dog that belongs to the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). The demonstration highlighted how Raman technology can be used in dangerous environments, such as crime scenes or toxic industrial sites.
The handheld device was secured to the robot using tape, and Almehmadi showed how she could steer the dog to a plastic bag filled with a white powder — baking soda.
But in a real-world scenario, “How can we know if it is baking soda or not?” she says. “So we just shined the light, and then the instrument told us what it was.”
Participants used a Wi-Fi app on their phones to view the results and a small remote controller to operate the robotic dog themselves.
“I loved the robot dog,” Ciriello says. “I was able to control it a bit, but it was challenging because the gauge was really sensitive.”
Michael Kitcher, a postdoc in DMSE, also praises the robot demonstration.
“Given that we just duct taped the device onto the dog — it was cool to see it actually worked,” he says.
Looking ahead
Kitcher, who researches magnetic materials for electronic applications, joined the workshop to learn more about Raman spectroscopy, which he had read about but never used. He was impressed by its versatility — in addition to the beach stone and baking soda, the device identified materials in a contact lens, cosmetics, and even a diamond.
Although it struggled to analyze a piece of chocolate he brought — other signals from the chocolate interfered — Kitcher sees strong potential for his own research. One area he’s interested in is unconventional magnetic materials, such as altermagnets, with unusual magnetic behavior that researchers hope to better understand and control for more energy-efficient electronics.
“Over the last couple of years, people have been trying to get a better sense of why these materials behave the way they do — how we can control this unconventional magnetic order,” he says. Raman spectroscopy can probe the vibrations of atoms in a material, helping researchers detect patterns in the crystal structure that underlie unusual magnetic behaviors. By understanding these vibrations, scientists could unlock material design rules that enable ultra-fast, low-energy computing.
Hands-on workshops like this one, which inspire innovative future applications, are at the heart of an MIT education, Almehmadi says.
“I’ve always learned best by doing,” she says. “Lectures and reading are important, but real understanding comes from hands-on experience.”
Weekends@MIT offers connection through varied activities
Regular campus events build community and promote student wellbeing.
Weekends at MIT are often a time for students to catch up on sleep or finish p-sets, lab work, and other school assignments. But for more than two decades, through a student-driven initiative supported by the Division of Student Life (DSL), students have been able to find welcoming activities designed to build community on Friday and Saturday nights through Weekends@MIT. All events are open to both graduate and undergraduate students.
At the heart of Weekends@MIT is a leadership team within the Wellbeing Ambassadors program. Ten leadership team members plan and host a variety of events from 9 to 11 p.m. in the MIT Wellbeing Lab, transforming the space into a hub for connection and creativity. While DSL staff provide advising, logistical support, and funding, event ideas come from students. Club members are committed to facilitating student social activities, all while increasing health awareness.
Student-led activities
Student ownership is intentional, says Robyn Priest, an assistant dean in the Division of Student Life. “All the ideas for activities come from the students. Leaders brainstorm themes, vote on their favorite concepts, and spearhead events in small teams. The only criterion is that it be substance-free. The students involved are dedicated, and the time commitment can be significant, so they are paid. But our students consistently step up, motivated by the opportunity to create experiences for their peers.”
Past events have included craft nights with boba tea, yoga, trivia competitions, bracelet-making workshops, waffle nights with customizable toppings, and even Spooky Skate, a Halloween costume ice-skating event hosted by the club in the Z Center.
Priest notes that just this past fall semester, more than 2,000 students attended the Friday night events, with many programs designed as drop-in experiences so students can participate around their busy schedules.
“I joined Weekends@MIT because I really liked the idea of helping organize activities on campus that promoted well-being for students and provided them with chill events that they can attend to build community and feel good on Friday nights,” says junior Emily Crespin Guerra.
Senior Ruting Hung adds, “I wanted to become more involved in promoting wellness on campus. Since then, I've found that it has also served as a way for me to recharge after a long week.”
Expanding Saturday events
Saturdays bring additional variety through collaborations with student clubs and groups. Organizations can apply for funding — typically several hundred dollars — to host events between 9 and 11 p.m. that are open to all students.
Undergraduate and graduate organizations, cultural groups, and hobby-based clubs have all contributed to programming. The partnerships also introduce new audiences to the Wellbeing Lab, helping the space become a familiar and welcoming destination across campus communities.
Connecting the campus through communication
Another key component of Weekends@MIT is a weekly newsletter distributed to thousands of students. The newsletter highlights upcoming programs in the Wellbeing Lab, along with other campus events that align with the initiative’s goals of connection and community without alcohol.
First-year student Vivian Dinh notes, “I love how the events provide a fun escape from the stress of classes and problem sets. The Wellbeing Lab is such a nice facility on campus for students to relax and enjoy themselves.”
A long tradition, evolving for the future
The current initiative builds on a long history of student-led weekend programming that began more than 20 years ago. Over time, the effort has evolved — from early safety campaigns to today’s comprehensive model focused on well-being, belonging, and social connection — but the core idea remains the same: students creating healthy spaces for other students.
Looking ahead, Weekends@MIT aims to continue expanding collaborations and exploring new ways to bring communities together on weekends. Additional events for this semester include pupusas, a blitz chess tournament with the Chess Club, craft night, movies and waffles, mocktails and latte art, a Bob Ross paint night, and much more.
What’s the right path for AI?
Conference speakers discussed the unfolding trajectory of AI and the benefits of shaping technology to meet people’s needs.
Who benefits from artificial intelligence? This basic question, which has been especially salient during the AI surge of the last few years, was front and center at a conference at MIT on Wednesday, as speakers and audience members grappled with the many dimensions of AI’s impact.
In one of the conference’s keynote talks, journalist Karen Hao ’15 called for an altered trajectory of AI development, including a move away from the massive scale-up of data use, data centers, and models being used to develop tools under the rubric of “artificial general intelligence.”
“This scale is unnecessary,” said Hao, who has become a prominent voice in AI discussions. “You do not need this scale of AI and compute to realize the benefits.” Indeed, she added, “If we really want AI to be broadly beneficial, we urgently need to shift away from this approach.”
Hao is a former staff member at The Wall Street Journal and MIT Technology Review, and author of the 2025 book, “Empire of AI.” She has reported extensively on the growth of the AI industry.
In her remarks, Hao outlined the astonishing size of datasets now being used by the biggest AI firms to develop large language models. She also emphasized some of the tradeoffs in this scale-up, such as the massive energy consumption and emissions of hyper-scale data centers, which also consume large amounts of water. Drawing on her own reporting, she noted the human toll of the manual data work that global gig-economy workers perform for the hyper-scale models.
By contrast, Hao offered, an alternate path for AI might exist in the example of AlphaFold, the Nobel Prize-winning tool used to identify protein structures. This represents the concept of the “small, task-specific AI model tackling a well-scoped problem that lends itself to the computational strengths of AI,” Hao said.
She added: “It’s trained on highly curated data sets that only have to do with the problem at hand: protein folding and amino acid sequences. … There’s no need for fast supercomputing because the datasets are small, the model is small, and it’s still unlocking enormous benefit.”
In a second keynote address, scholar Paola Ricaurte underscored the desirability of purpose-driven AI approaches, outlining a number of conceptual keys to evaluating the usefulness of AI.
“There is no sense in having technologies that are not going to respond to the communities that are going to use them,” said Ricaurte.
She is a professor at Tecnologico de Monterrey in Mexico and a faculty associate at Harvard University’s Berkman Klein Center for Internet and Society. Ricaurte has also served on expert committees such as the Global Partnership for AI, UNESCO’s AI Ethics Experts Without Borders, and the Women for Ethical AI project.
The event was hosted by the MIT Program in Women’s and Gender Studies. Manduhai Buyandelger, the program’s director and a professor of anthropology, provided introductory remarks.
Titled “Gender, Empire, and AI: Symposium and Design Workshop,” the event was held in the conference space at the MIT Schwarzman College of Computing, with over 300 people in attendance for the keynote talks. There was also a segment of the event devoted to discussion groups, and an afternoon session on design, in a half-dozen different subject areas.
In her talk, Hao decried the often-vague nature of AI discourse, suggesting it impedes a more thoughtful discussion about the industry’s direction.
“Part of the challenge in talking about AI is the complete lack of specificity in the term ‘artificial intelligence,’” Hao said. “It’s like the word ‘transportation.’ You could be referring to anything from a bicycle to a rocket.” As a result, she said, “when we talk about accessing its benefits, we actually have to be very specific. Which AI technologies are we talking about, and which ones do we want more of?”
In her view, the smaller-sized tools — more akin to the bicycle, by analogy — are more useful on an everyday basis. As another example, Hao mentioned the project Climate Change AI, focused on tools that can help improve the energy efficiency of buildings, track emissions, optimize supply chains, forecast extreme weather, and more.
“This is the vision of AI that we should be building towards,” Hao said.
In conclusion, Hao encouraged audience members to be active participants in AI-related discourse and projects, saying the trajectory of the technology was not yet fixed, and that public interventions matter.
Citing the writer Rebecca Solnit, Hao suggested to the audience that “Hope locates itself in the premise that we don’t know what will happen, and that in the spaciousness of uncertainty is room to act.” She also noted, “Each and every one of you has an active role to play in shaping technology development.”
Ricaurte, similarly, encouraged attendees to be proactive participants in AI matters, noting that technologies will work best when the pressing everyday needs of all citizens are addressed.
“We have the responsibility to make hope possible,” Ricaurte said.
After 16 years leading Picower Institute, Li-Huei Tsai will sharpen focus on research, teaching
Tsai, who has grown the MIT neuroscience institute, will increase focus on research including Alzheimer’s disease and Down syndrome.
MIT Picower Professor Li-Huei Tsai, who has led The Picower Institute for Learning and Memory since 2009, will step down from the role of director at the end of the academic year in May. Her decision frees her to focus exclusively on her academic work, including her continued leadership of MIT’s Aging Brain Initiative and the Alana Down Syndrome Center. Meanwhile, the search for the Picower Institute’s next director has begun.
“During her exceptional 16-year tenure in the role of director, Li-Huei has led substantial growth at the Picower Institute,” says Nergis Mavalvala, dean of the MIT School of Science and the Curtis and Kathleen Marble Professor of Astrophysics. “She has markedly expanded the faculty — eight of the current 16 labs joined Picower under her directorship — through successful recruitment of highly talented neuroscientists. She has done this, and more, all while leading one of our most productive and influential labs, working on a quintessentially grand challenge in human health: combating Alzheimer’s disease.”
To conduct the search for a new Picower Institute director, Mavalvala has appointed a committee led by Sherman Fairchild Professor Matthew Wilson, associate director of the institute. Serving with Wilson are Picower Professor and former institute director Mark Bear, Menicon Professor Troy Littleton, Assistant Professor Sara Prescott, and Professor Fan Wang. They will identify and interview candidates, producing a report to Mavalvala later this spring.
Growing an institute
Tsai, a professor in MIT’s Department of Brain and Cognitive Sciences and a member of The Broad Institute of MIT and Harvard, says she is grateful to have had the opportunity to build the Picower Institute into a preeminent center for neuroscience research.
“I’m immensely proud of what our institute represents: world-renowned neuroscience research that is creative, rigorous, novel, and impactful,” Tsai says. “Our labs produce innovations, discoveries, and often translational strategies that have broken new ground and pushed science, medicine, and technology forward. We also provide excellent training that has enabled us to launch the careers of many of the field’s new and future leaders. It has been a tremendous honor to be able to build on the incredible foundation and inspiration provided by my predecessors Susumu Tonegawa and Mark Bear to enable the institute’s growth and success.”
Founded by Tonegawa as the Center for Learning and Memory in 1994, and then renamed The Picower Institute for Learning and Memory after a transformative gift by Barbara and Jeffry Picower in 2002, the institute now comprises about 400 scientists, students, and staff across 16 labs in MIT’s buildings 46 and 68.
But when Tsai became director in July 2009, just three years after coming to MIT from Harvard Medical School, the Picower Institute was a smaller enterprise of 11 labs, and a community closer to 200 members. Over the ensuing years, Tsai worked closely with the Picowers’ foundation, formerly the JPB Foundation and now the Freedom Together Foundation, to develop several strategic initiatives to accelerate growth and enhance research productivity. These have included programs specifically designed to support junior faculty, to catalyze applications for private grant funding, and to sustain fellowships for more than 18 postdocs and graduate students. Working with the foundation, she has also expanded the scope of research support provided by the Picower Institute Innovation Fund begun under Bear.
Eager to galvanize colleagues across MIT in fighting neurodegenerative diseases and neurological disorders affecting cognition, Tsai also built and launched two campus-wide initiatives: The Aging Brain Initiative, founded in 2015 and sustained by a broad coalition of donors, and the Alana Down Syndrome Center, established in 2019 with a gift from The Alana Foundation.
Research focus
As the Picower Institute has grown, Tsai’s research has, too. In work spanning molecular, cellular, circuit, and network scales in the brain, Tsai has led numerous highly cited discoveries about the neurobiology of Alzheimer’s disease and has translated several of these insights into specific therapeutic strategies, including one now undergoing a national phase III clinical trial. In all, she has published more than 230 peer-reviewed neuroscience studies, generated numerous patents, and helped launch several startups. She has been named a fellow of the National Academy of Medicine, the American Academy of Arts and Sciences, and the National Academy of Inventors, and received awards including the Society for Neuroscience Mika Salpeter Lifetime Achievement Award and the Hans Wigzell Prize.
Tsai’s earliest discoveries identified key roles in neurodegeneration for the enzyme CDK5. She has pioneered understanding of how epigenetic changes in brain cells affect Alzheimer’s pathology and memory. Her work has also highlighted a critical role for DNA double-strand breaks in disease.
In more recent work, Tsai’s lab has conducted several studies using innovative human stem-cell-based cultures to advance understanding of how the biggest genetic risk factor for Alzheimer’s (a gene variant called APOE4) contributes to pathology, and how some existing medications and supplements might help. In collaboration with MIT professor of computer science Manolis Kellis, she has also published several sweeping atlases documenting how gene expression and epigenetics differ in Alzheimer’s disease. These studies have provided the field with troves of new data and have yielded new insights into what makes the brain vulnerable to disease, and what helps some people remain resilient.
Tsai has also led a collaboration with professors Emery N. Brown and Edward S. Boyden that has discovered a potential noninvasive, device-based treatment for Alzheimer’s and possibly other neurological disorders. Called “Gamma Entrainment Using Sensory Stimuli” (GENUS), the technology stimulates the senses (vision, hearing, or touch) to increase the power and synchrony of 40Hz frequency “gamma” waves in the brain. Numerous studies by her group and others, involving either lab animals or human volunteers, have shown that the approach can preserve brain volume, learning, and memory, and reduce signs of Alzheimer’s pathology. Via an MIT spinoff company, the technology has now advanced to a pivotal clinical trial enrolling hundreds of people around the country.
“After 16 years leading the Picower Institute, I’m now eager to sharpen my focus on advancing human health through the work in my lab, the Aging Brain Initiative, and the Alana Center,” Tsai says.
MIT and Hasso Plattner Institute establish collaborative hub for AI and creativity
Jointly led by the MIT Morningside Academy for Design, MIT Schwarzman College of Computing, and the Hasso Plattner Institute in Potsdam, the hub will foster a dynamic community where computing, creativity, and human-centered innovation meet.
The following is a joint announcement from the MIT School of Architecture and Planning, MIT Schwarzman College of Computing, Hasso Plattner Institute, and Hasso Plattner Foundation.
The MIT Morningside Academy for Design (MAD), MIT Schwarzman College of Computing, Hasso Plattner Institute (HPI), and Hasso Plattner Foundation celebrated the launch of the MIT and HPI AI and Creativity Hub (MHACH) at a signing ceremony this week. This 10-year initiative aims to deepen ties between computing and design as advances in artificial intelligence are reshaping how ideas are conceived and shared.
Funded by the Hasso Plattner Foundation, MIT and HPI will work together to foster collaborative interdisciplinary research and support a portfolio of educational programs, fellowships, and faculty engagement focused on AI and creativity, expanding scholarly inquiry into AI applications across disciplines, industries, and societal challenges. The collaboration begins with an inaugural two-day workshop March 19-20 at MIT, bringing together faculty, students, and researchers to set early priorities.
“As we hear from our faculty, as the Information Age gives way to an era of imagination, we expect a new emphasis on human creativity,” reflects MIT President Sally Kornbluth. “Through this collaboration, MIT and HPI are creating a shared space where students and faculty will come together across disciplines to explore new ideas, experiment with emerging tools, and invent new frontiers at the intersection of human creativity and AI.”
“The best minds need the right environment to do their most creative work,” says Rouven Westphal, from the Hasso Plattner Foundation. “When HPI and MIT come together across disciplines and borders, they create exactly that. The Hasso Plattner Foundation is committed to supporting this collaboration for the long term, building on Hasso Plattner’s vision of uniting technological excellence with human-centered design and creativity.”
Deepening collaboration at the intersection of technology, creativity, and societal impact
Building on the success of the Hasso Plattner Institute-MIT Research Program on Designing for Sustainability, established in 2022 between MIT MAD and HPI, the new MHACH hub represents a commitment to deepen collaboration at the intersection of technology, creativity, and societal impact.
“MIT and HPI share a common commitment to turning scientific excellence into real-world impact. Through this collaboration, we will create an environment where students and researchers from both sides of the Atlantic can work together, experiment across disciplines, and learn from one another — at a time when artificial intelligence is set to profoundly shape our lives. We are convinced that this collaboration will generate ideas with impact far beyond both institutions and inspire international cooperation and innovation,” says Professor Tobias Friedrich, dean and managing director of the Hasso Plattner Institute.
“HPI and MIT exist at the nexus of technology and creativity. Expanding this dynamic relationship will generate new paths for the infusion of AI, design, and creativity, enabling students, faculty, and researchers to dream and discover novel solutions, moving more quickly than ever from idea to implementation. MAD was established to connect thinkers across and beyond the Institute, and this new era of collaboration with HPI advances that mission on a global scale,” comments Hashim Sarkis, dean of the MIT School of Architecture and Planning and the Elizabeth and James Killian (1926) Professor.
Academic leadership from MIT and HPI will jointly shape the hub’s research and teaching agenda. Based in Potsdam, Germany, HPI is a center of excellence for digital engineering advancing research, education, and societal transfer in IT systems engineering, data engineering, cybersecurity, entrepreneurship, and digital health. Through its globally recognized HPI d-school and pioneering work in design thinking methodology, HPI brings a distinctive perspective on human-centered innovation to the collaboration, alongside a strong record in AI and data science research and technology transfer.
Expanding research and education on AI and creativity
The efforts of this multifaceted initiative are intended to foster a dynamic academic community spanning MIT and HPI, anchored by Hasso Plattner–named professorships and graduate fellowships whose recipients will be actively engaged in the hub. The long-term framework is designed to provide continuity for faculty appointments, doctoral training, and cross-campus research.
The agreement also includes the development of classes and educational programs in areas of shared AI focus, along with expanded experiential opportunities through AI-focused workshops, hackathons, and summer exchanges. A steering committee composed of representatives from the MIT School of Architecture and Planning, MIT Schwarzman College of Computing, and Hasso Plattner Institute will facilitate the shared governance of MHACH.
“Creativity has always been about extending human capability. At its core, this collaboration asks what it truly means to create something new. The question isn’t whether AI diminishes creativity, but how new forms of intelligence can deepen and enrich that process. Our goal is to explore that intersection with rigor and build a cross-disciplinary scholarly and research community that shapes how AI supports the creation of new ideas and knowledge,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science.
This collaboration is made possible by the Hasso Plattner Foundation’s long-term philanthropic commitment to institutions that connect technological innovation with design thinking and education. The Hasso Plattner Foundation has played a central role in establishing and supporting institutions such as the Hasso Plattner Institute and international design thinking programs that bridge disciplines and geographies.
Preserving Keres
Linguistics graduate student William Pacheco hopes to preserve his endangered native language, while also becoming a better learner and educator.
Growing up in the village of Kewa — located between Santa Fe and Albuquerque in New Mexico — William Pacheco, a member of the Santo Domingo Pueblo, learned the value of his language, its history, and the traditions it carries.
“We speak Keres, a language isolate found in seven villages and communities in central New Mexico,” he says. “It’s an endangered language with fewer than 10,000 speakers.” The Pueblos’ conception of ‘language,’ according to Pacheco, evokes the idea that speaking “comes from deep within.”
Pacheco is a graduate student in the MIT Indigenous Languages Initiative, a special master’s program in linguistics for members of communities whose languages are threatened. The two-year program provides its graduates with the linguistic knowledge to help them keep their communities’ languages alive. The initiative also offers expanded opportunities for students and faculty to become involved in Indigenous and endangered languages, working with both native speaker linguists in the master’s program and outside groups, ideas that appealed to him.
“There’s some complexity to our language that defies traditional instruction,” says Pacheco, who will complete his studies this spring. “I want to develop the linguistic tools I need to improve my understanding of its construction and how best to teach and preserve it.” Pacheco is keenly aware of cultural differences in how language transmission occurs. Language, he believes, evolves over time and is best learned experientially; the Western model of language learning prioritizes immediacy and test-taking.
A variety of factors complicate efforts to preserve and potentially teach Keres. Each of the villages where it’s spoken has its own distinct dialect. These dialects are mutually intelligible to varying degrees depending on where they’re spoken. Additionally, the last three decades have seen a significant increase in English usage by young Pueblos, which further endangers Keres’ existence.
Furthermore, Keres isn’t a written language. For centuries, the Pueblo have relied on daily use within their homes and communities to maintain its vitality. “The community doesn’t want it written,” Pacheco says.
Contact with the wider world has previously imperiled Indigenous ideas, an outcome Pacheco wants to avoid. “We believe [Keres] is a form of intellectual property, a tradition and artifact that is best served by empowering our people to preserve it,” he says.
From the Southwest to MIT
While he’s now passionate about linguistics, languages weren’t Pacheco’s first choice when considering an educational path. “I always admired [MIT alumnus and Nobel laureate] Richard Feynman,” he recalls. “I wanted to study physics.”
After earning an undergraduate degree from the University of New Mexico, Pacheco, who’d been working as a K-12 educator, began efforts to preserve Keres, increasing the language’s vitality and safeguarding its usefulness for, and value to, future generations. He sought permission and certification from the tribe to teach the language at the Santa Fe Indian School, an off-reservation boarding school. He soon discovered that a traditional Western approach to language learning wouldn’t suffice.
“Students weren’t taking the course to be scholars of the language; they wanted to learn it to build community and create opportunities to connect with elders,” Pacheco says. It was students’ advocacy, he notes, that led to the Keres learning initiative. While designing the course, however, he found gaps in his knowledge that led him to consider graduate study.
“There are fascinating idiosyncrasies in Keres, including, for example, verb morphology — the ways in which verbs and verb sounds change,” he notes. “I wasn’t sure about how to teach them.” He sought to improve his understanding and ability by earning a master’s degree in learning design, innovation, and technology from Harvard University. While completing his studies there, he had another burst of inspiration.
“I thought a background in linguistics would prove useful,” he says. “An advisor told me about the Indigenous Languages Initiative at MIT and recommended I apply.” Pacheco knew of Professor Emeritus Noam Chomsky’s pioneering work in generative linguistics at the Institute and sought to learn more about the field’s potential to help him become a better, more effective educator and linguist.
Upon arriving at MIT in 2024, Pacheco found himself embraced by faculty and students alike. “[MIT linguists] Adam Albright and Norvin Richards have been wonderfully supportive mentors, offering enthusiasm and expertise” he says. “I’ve benefited from MIT’s approach to linguistics and its use of scientific inquiry as a tool to explore language.” Engaging with other students working to preserve languages at risk of extinction continues to drive his work.
“MIT continually encourages us to use its resources, to collaborate, and to help one another find solutions to our unique challenges,” he says. “Networking, gathering good ideas, and having access to professors and students from a variety of disciplines is incredibly valuable.”
MIT’s scholars, Pacheco says, are experienced with Indigenous language learning, education, and pedagogy.
Developing an organized approach to Keres research and instruction
While gratified that his work created opportunities for him to preserve and teach Keres, Pacheco marvels at his path to the Institute and its impact on his life. “It was my language, not my interest in physics, which led me to Harvard and MIT,” he says. “How did I end up at these places?”
An advantage of language and linguistics education at MIT is the rigor with which it explores language acquisition modeling and allows for alternatives to established systems. Pacheco is after new ideas for Keres language learning and education, working to develop an effective course based on generative linguistics that both preserves the Pueblos’ approach to community and offers an educational model students are likely to embrace. He’s already had opportunities to test novel theories and practices as an educator back home.
“I was teaching students to use Keres as a programming tool,” he says. “We modeled a robot as a member of the community navigating a maze, and students would have to teach it to accept commands in Keres.”
Pacheco also wants to explore community-centered language issues. He wants to standardize the development and education of community linguists, creating a cohort of scholars, deeply invested in Keres’ preservation and instruction, who are trained to use the tools he designs.
“We want to drive inquiries into Keres and how it’s taught,” he says, “while also centering Indigenous knowledge systems and expanding access to linguistics study for Indigenous scholars.”
Pacheco believes there’s value in exposing scholars and communities to the cultural and ideological exchanges he’s enjoyed between the sciences, humanities, Indigenous ideas, and experiences. “Indigenous scholars exist at MIT,” he says. “We’re here, and the Institute’s support helps preserve languages like Keres as important communal and cultural artifacts.”
Pacheco is grateful for the opportunities his research at MIT has afforded him. While his education as a linguist and scholar continues, Pacheco’s community, culture, and support for Keres language learning remain top priorities.
“I want to amplify the impact in tribal language policy and Indigenous-centered education,” he says. “Language, its study, and its transmission is both science and art.”
Improving cartilage repair through cell therapy
SMART breakthrough offers a promising pathway toward improved manufacturing of high‑quality cells for regenerative therapies to treat joint diseases.
Researchers have developed a new method for monitoring iron flux — the movement and rate at which cells take in, store, use, and release iron — in stem cells known as mesenchymal stromal cells (MSCs). The system can provide insights within a minute about a cell’s ability to grow cartilage tissue for cartilage repair.
The breakthrough offers a promising pathway toward more consistent and efficient manufacturing of high‑quality MSCs for regenerative therapies to treat joint diseases such as osteoarthritis, chronic joint degeneration conditions, and cartilage injuries.
The work was led by researchers from the Critical Analytics for Manufacturing Personalized-Medicine (CAMP) group within the Singapore-MIT Alliance for Research and Technology (SMART), and was supported by the SMART Antimicrobial Resistance (AMR) research group, in collaboration with MIT and the National University of Singapore (NUS).
A paper describing the work, “Cellular iron flux measurement by micromagnetic resonance relaxometry as a critical quality attribute of mesenchymal stromal cells,” was published in February in the journal Stem Cells Translational Medicine.
Regenerative therapies hold significant promise for patients with the potential to repair damaged tissues rather than simply manage symptoms. However, one of the biggest challenges in bringing these therapies to patients lies in the unpredictable quality of the MSC’s chondrogenic potential — a cell’s ability to develop and form cartilage tissue — during the in vitro manufacturing process.
Even when grown under controlled laboratory conditions, MSCs are prone to losing some of their potential and ability to form cartilage tissue, leading to inconsistent cartilage repair outcomes due to the varying quality of MSC batches. Existing tests that evaluate the quality of MSCs’ cartilage‑forming potential are destructive in nature, which causes irreversible damage to the cells being tested and renders them unusable for further therapeutic or manufacturing purposes.
In addition, the tests require a prolonged — up to 21-day — period for cells to grow. This slows decision‑making, extends production timelines, and can hinder the timely translation of MSC-based therapies into clinical use and delay treatment for patients. As MSCs can lose chondrogenic potential during this process, early assessment is essential for manufacturers to determine whether a batch should be continued or discontinued. Hence, there is a need for a reliable and rapid method to predict MSCs’ chondrogenic potential during the cell manufacturing process.
The new development is a rapid, non-destructive method to monitor iron flux in MSCs by measuring iron changes in spent media — residual components in the culture medium after cell growth. Using an inexpensive benchtop micromagnetic resonance relaxometry (µMRR) device, the approach enables real‑time monitoring of cellular iron changes without damaging the cells. The device can be easily integrated into existing laboratories and manufacturing workflows, enabling routine, real‑time quality monitoring without significant infrastructure or cost barriers.
Iron homeostasis is a critical process that keeps iron at the levels cells need to function, balancing the supply of iron for essential processes against the risk of toxic accumulation. The study found that iron homeostasis is highly correlated with MSCs’ chondrogenic potential: significant iron uptake and accumulation reduce the cells’ ability to form cartilage. The researchers also found that supplementing the cell growth process with ascorbic acid (AA) helps regulate iron homeostasis by limiting iron flux, thereby improving the MSCs’ chondrogenic potential.
Using this novel method, spent media are collected as samples and treated with AA. The µMRR device is then used to track and provide real-time insights into small iron concentration changes within the spent media. These iron concentration changes reflect how MSCs take up and release iron and can provide an early indicator of whether a batch is likely to succeed in forming good cartilage.
These findings allow manufacturers not only to monitor MSC quality for cartilage repair in real time, but also to assess when, and to what extent, interventions such as AA supplementation are likely to be beneficial, supporting efficient manufacturing of more effective and consistent MSC‑based therapies.
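As a purely illustrative sketch of how such an early batch-level decision might look, the toy function below compares iron concentrations in spent media before and after a culture interval and flags batches with excessive uptake. The threshold, units, and function name are hypothetical placeholders, not values or code from the paper.

```python
def iron_flux_qc(media_iron_before_uM, media_iron_after_uM, uptake_threshold_uM=1.0):
    """Toy batch-level quality check based on iron depletion from spent media.

    A large drop in media iron implies high cellular iron uptake, which the study
    associates with reduced chondrogenic potential. The threshold here is a
    placeholder, not a value from the paper.
    """
    uptake_uM = media_iron_before_uM - media_iron_after_uM
    return {"iron_uptake_uM": uptake_uM, "continue_batch": uptake_uM < uptake_threshold_uM}

# Hypothetical readings from a µMRR measurement of spent media.
print(iron_flux_qc(media_iron_before_uM=5.0, media_iron_after_uM=4.6))
```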
“One of the key challenges in cartilage regeneration is the inability to reliably predict whether MSCs will retain their chondrogenic potential during manufacturing. Our study addresses this by introducing a rapid, non-destructive method to monitor iron flux dynamics as a novel critical quality attribute (CQA) of MSCs' chondrogenic capacity. This approach enables early identification of suboptimal cell batches during culture, enhancing quality control efficiency, reducing manufacturing costs, and accelerating clinical translation,” says Yanmeng Yang, CAMP postdoc and first author of the paper.
“Our research sheds light on a fundamental biological process that, until now, has been extremely difficult to measure. By monitoring iron flux in real-time without destroying the cells, we can gain actionable insights into a cell batch’s chondrogenic potential, which allows for early decision-making during the manufacturing process. The findings support µMRR‑based iron monitoring as an effective quality control strategy for MSC-based therapy manufacturing, paving the way for more consistent and clinically viable regenerative medicine for cartilage regeneration,” says MIT Professor Jongyoon Han, co-head CAMP PI, AMP PI, and corresponding author of the paper.
This method represents a promising step toward improving manufacturing consistency and functional characterization of MSC-based cellular products. Beyond advancing cell therapy manufacturing, it contributes to the scientific community studying iron biology by providing real-time iron flux measurements that were previously unavailable. The research also advances clinical translation of high-quality cell therapies for cartilage regeneration, bringing these closer to patients with joint degeneration conditions and cartilage injuries.
Building on these findings, the researchers plan to carry out future preclinical and clinical studies to expand this approach beyond quality control in manufacturing, with the aim of establishing µMRR as a validated method for the clinical translation of MSC-based therapies in patients for cartilage repair.
The research, conducted at SMART, was supported by the National Research Foundation Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) program.
MIT researchers have spent more than a decade studying techniques that enable robots to find and manipulate hidden objects by “seeing” through obstacles. Their methods utilize surface-penetrating wireless signals that reflect off concealed items.
Now, the researchers are leveraging generative artificial intelligence models to overcome a longstanding bottleneck that limited the precision of prior approaches. The result is a new method that produces more accurate shape reconstructions, which could improve a robot’s ability to reliably grasp and manipulate objects that are blocked from view.
This new technique builds a partial reconstruction of a hidden object from reflected wireless signals and fills in the missing parts of its shape using a specially trained generative AI model.
The researchers also introduced an expanded system that uses generative AI to accurately reconstruct an entire room, including all the furniture. The system utilizes wireless signals sent from one stationary radar, which reflect off humans moving in the space.
This overcomes one key challenge of many existing methods, which require a wireless sensor to be mounted on a mobile robot to scan the environment. And unlike some popular camera-based techniques, their method preserves the privacy of people in the environment.
These innovations could enable warehouse robots to verify packed items before shipping, eliminating waste from product returns. They could also allow smart home robots to understand someone’s location in a room, improving the safety and efficiency of human-robot interaction.
“What we’ve done now is develop generative AI models that help us understand wireless reflections. This opens up a lot of interesting new applications, but technically it is also a qualitative leap in capabilities, from being able to fill in gaps we were not able to see before to being able to interpret reflections and reconstruct entire scenes,” says Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science, director of the Signal Kinetics group in the MIT Media Lab, and senior author of two papers on these techniques. “We are using AI to finally unlock wireless vision.”
Adib is joined on the first paper by lead author and research assistant Laura Dodds; as well as research assistants Maisy Lam, Waleed Akbar, and Yibo Cheng; and on the second paper by lead author and former postdoc Kaichen Zhou; Dodds; and research assistant Sayed Saad Afzal. Both papers will be presented at the IEEE Conference on Computer Vision and Pattern Recognition.
Surmounting specularity
The Adib Group previously demonstrated the use of millimeter wave (mmWave) signals to create accurate reconstructions of 3D objects that are hidden from view, like a lost wallet buried under a pile.
These waves, which are the same type of signals used in Wi-Fi, can pass through common obstructions like drywall, plastic, and cardboard, and reflect off hidden objects.
But mmWaves usually reflect in a specular manner, which means a wave reflects in a single direction after striking a surface. So large portions of the surface will reflect signals away from the mmWave sensor, making those areas effectively invisible.
“When we want to reconstruct an object, we are only able to see the top surface and we can’t see any of the bottom or sides,” Dodds explains.
The researchers previously used principles from physics to interpret reflected signals, but this limits the accuracy of the reconstructed 3D shape.
In the new papers, they overcame that limitation by using a generative AI model to fill in parts that are missing from a partial reconstruction.
“But the challenge then becomes: How do you train these models to fill in these gaps?” Adib says.
Usually, researchers use extremely large datasets to train a generative AI model, which is one reason models like Claude and Llama exhibit such impressive performance. But no mmWave datasets are large enough for training.
Instead, the researchers adapted the images in large computer vision datasets to mimic the properties in mmWave reflections.
“We were simulating the property of specularity and the noise we get from these reflections so we can apply existing datasets to our domain. It would have taken years for us to collect enough new data to do this,” Lam says.
The researchers embed the physics of mmWave reflections directly into these adapted data, creating a synthetic dataset they use to teach a generative AI model to perform plausible shape reconstructions.
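As a rough illustration of that idea (and not the authors’ actual pipeline), the sketch below takes a 3D point cloud with surface normals and keeps only points whose normals point back toward the sensor, mimicking specular visibility, then adds noise. The angle threshold, noise level, and function name are assumptions made for this example.

```python
import numpy as np

def simulate_specular_view(points, normals, sensor_dir, keep_angle_deg=25.0, noise_std=0.005):
    """Crudely mimic specular mmWave visibility on a 3D point cloud.

    Keeps only points whose surface normal points back toward the sensor within
    keep_angle_deg (a specular reflection from those surfaces returns to the
    sensor), then adds Gaussian noise. points, normals: (N, 3) arrays;
    sensor_dir: vector pointing from the scene toward the sensor.
    """
    sensor_dir = sensor_dir / np.linalg.norm(sensor_dir)
    alignment = normals @ sensor_dir                     # cosine between each normal and the sensor direction
    visible = alignment > np.cos(np.radians(keep_angle_deg))
    partial = points[visible] + np.random.normal(0.0, noise_std, size=points[visible].shape)
    return partial  # the partial, noisy "reflection"; the full shape is the training target

# Toy usage: a flat plate facing straight up, viewed by a sensor directly overhead.
pts = np.random.rand(1000, 3)
nrm = np.tile([0.0, 0.0, 1.0], (1000, 1))
partial = simulate_specular_view(pts, nrm, sensor_dir=np.array([0.0, 0.0, 1.0]))
```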
The complete system, called Wave-Former, proposes a set of potential object surfaces based on mmWave reflections, feeds them to the generative AI model to complete the shape, and then refines the surfaces until it achieves a full reconstruction.
Wave-Former was able to generate faithful reconstructions of about 70 everyday objects, such as cans, boxes, utensils, and fruit, boosting accuracy by nearly 20 percent over state-of-the-art baselines. The objects were hidden behind or under cardboard, wood, drywall, plastic, and fabric.
Seeing “ghosts”
The team used this same approach to build an expanded system that fully reconstructs entire indoor scenes by leveraging mmWave reflections off humans moving in a room.
Human motion generates multipath reflections. Some mmWaves reflect off the human, then reflect again off a wall or object, and then arrive back at the sensor, Dodds explains.
These secondary reflections create so-called “ghost signals,” which are reflected copies of the original signal that change location as a human moves. These ghost signals are usually discarded as noise, but they also hold information about the layout of the room.
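A toy geometric illustration of why ghost signals encode layout (this is not the RISE algorithm itself): a single-bounce reflection off a flat wall appears to the radar as if it came from the person’s mirror image across the wall plane, so tracking how the ghost moves relative to the real target constrains where the wall is. The function and example values below are assumptions for illustration.

```python
import numpy as np

def mirror_across_wall(point, wall_point, wall_normal):
    """Return the 'ghost' image of a target reflected across a planar wall.

    A single-bounce multipath reflection off the wall looks, to the radar,
    like a signal arriving from this mirror-image location.
    """
    n = wall_normal / np.linalg.norm(wall_normal)
    signed_dist = np.dot(point - wall_point, n)   # signed distance from the target to the wall plane
    return point - 2.0 * signed_dist * n

# Toy example: a wall at x = 3 meters and a person standing at x = 1 meter.
person = np.array([1.0, 0.0, 1.5])
ghost = mirror_across_wall(person,
                           wall_point=np.array([3.0, 0.0, 0.0]),
                           wall_normal=np.array([1.0, 0.0, 0.0]))
# ghost == [5.0, 0.0, 1.5]; as the person moves, the ghost track mirrors them,
# and the plane midway between the two tracks is the wall.
```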
“By analyzing how these reflections change over time, we can start to get a coarse understanding of the environment around us. But trying to directly interpret these signals is going to be limited in accuracy and resolution,” Dodds says.
They used a similar training method to teach a generative AI model to interpret those coarse scene reconstructions and understand the behavior of multipath mmWave reflections. This model fills in the gaps, refining the initial reconstruction until it completes the scene.
They tested their scene reconstruction system, called RISE, using more than 100 human trajectories captured by a single mmWave radar. On average, RISE generated reconstructions that were about twice as precise as those from existing techniques.
In the future, the researchers want to improve the granularity and detail in their reconstructions. They also want to build large foundation models for wireless signals, like the foundation models GPT, Claude, and Gemini for language and vision, which could open new applications.
This work is supported, in part, by the National Science Foundation (NSF), the MIT Media Lab, and Amazon.
A better method for identifying overconfident large language models
This new metric for measuring uncertainty could flag hallucinations and help users know whether to trust an AI model.
Large language models (LLMs) can generate credible but inaccurate responses, so researchers have developed uncertainty quantification methods to check the reliability of predictions. One popular method involves submitting the same prompt multiple times to see if the model generates the same answer.
But this method measures self-confidence, and even the most impressive LLM might be confidently wrong. Overconfidence can mislead users about the accuracy of a prediction, which might result in devastating consequences in high-stakes settings like health care or finance.
To address this shortcoming, MIT researchers introduced a new method for measuring a different type of uncertainty that more reliably identifies confident but incorrect LLM responses.
Their method involves comparing a target model’s response to responses from a group of similar LLMs. They found that measuring cross-model disagreement more accurately captures this type of uncertainty than traditional approaches.
They combined their approach with a measure of LLM self-consistency to create a total uncertainty metric, and evaluated it on 10 realistic tasks, such as question-answering and math reasoning. This total uncertainty metric consistently outperformed other measures and was better at identifying unreliable predictions.
“Self-consistency is being used in a lot of different approaches for uncertainty quantification, but if your estimate of uncertainty only relies on a single model’s outcome, it is not necessarily trustable. We went back to the beginning to understand the limitations of current approaches and used those as a starting point to design a complementary method that can empirically improve the results,” says Kimia Hamidieh, an electrical engineering and computer science (EECS) graduate student at MIT and lead author of a paper on this technique.
She is joined on the paper by Veronika Thost, a research scientist at the MIT-IBM Watson AI Lab; Walter Gerych, a former MIT postdoc who is now an assistant professor at Worcester Polytechnic Institute; Mikhail Yurochkin, a staff research scientist at the MIT-IBM Watson AI Lab; and senior author Marzyeh Ghassemi, an associate professor in EECS and a member of the Institute for Medical Engineering and Science and the Laboratory for Information and Decision Systems.
Understanding overconfidence
Many popular methods for uncertainty quantification involve asking a model for a confidence score or testing the consistency of its responses to the same prompt. These methods estimate aleatoric uncertainty, or how internally confident a model is in its own prediction.
However, LLMs can be confident when they are completely wrong. Research has shown that epistemic uncertainty, or uncertainty about whether one is using the right model, can be a better way to assess true uncertainty when a model is overconfident.
The MIT researchers estimate epistemic uncertainty by measuring disagreement across a similar group of LLMs.
“If I ask ChatGPT the same question multiple times and it gives me the same answer over and over again, that doesn’t mean the answer is necessarily correct. If I switch to Claude or Gemini and ask them the same question, and I get a different answer, that is going to give me a sense of the epistemic uncertainty,” Hamidieh explains.
Epistemic uncertainty attempts to capture how far a target model diverges from the ideal model for that task. But since it is impossible to build an ideal model, researchers use surrogates or approximations that often rely on faulty assumptions.
To improve uncertainty quantification, the MIT researchers needed a more accurate way to estimate epistemic uncertainty.
An ensemble approach
The method they developed involves measuring the divergence between the target model and a small ensemble of models with similar size and architecture. They found that comparing semantic similarity, or how closely the meanings of the responses match, could provide a better estimate of epistemic uncertainty.
To achieve the most accurate estimate, the researchers needed a set of LLMs that covered diverse responses, weren’t too similar to the target model, and were weighted based on credibility.
“We found that the easiest way to satisfy all these properties is to take models that are trained by different companies. We tried many different approaches that were more complex, but this very simple approach ended up working best,” Hamidieh says.
Once they had developed this method for estimating epistemic uncertainty, they combined it with a standard approach that measures aleatoric uncertainty. This total uncertainty metric (TU) offered the most accurate reflection of whether a model’s confidence level is trustworthy.
“Uncertainty depends on the uncertainty of the given prompt as well as how close our model is to the optimal model. This is why summing up these two uncertainty metrics is going to give us the best estimate,” Hamidieh says.
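The arithmetic behind that combination is simple to sketch. The snippet below is a minimal, self-contained illustration rather than the authors' code: it stands in a crude word-overlap score for the semantic-similarity measure the paper relies on, and it simply sums the two components.

import itertools

def semantic_distance(a: str, b: str) -> float:
    """Crude stand-in for semantic distance: 1 minus the Jaccard overlap of word sets.
    A real implementation would use a sentence encoder, which is assumed rather than shown."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa and not wb:
        return 0.0
    return 1.0 - len(wa & wb) / len(wa | wb)

def aleatoric_uncertainty(target_samples):
    """Self-consistency: average disagreement among repeated answers from the same model."""
    pairs = list(itertools.combinations(target_samples, 2))
    if not pairs:
        return 0.0
    return sum(semantic_distance(a, b) for a, b in pairs) / len(pairs)

def epistemic_uncertainty(target_answer, ensemble_answers):
    """Cross-model disagreement: how far the target model's answer sits from similar LLMs."""
    return sum(semantic_distance(target_answer, ans) for ans in ensemble_answers) / len(ensemble_answers)

def total_uncertainty(target_samples, ensemble_answers):
    """Combine the two signals; a plain sum is used here purely for illustration."""
    return aleatoric_uncertainty(target_samples) + epistemic_uncertainty(target_samples[0], ensemble_answers)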
TU could more effectively identify situations where an LLM is hallucinating, since epistemic uncertainty can flag confidently wrong outputs that aleatoric uncertainty might miss. It could also enable researchers to reinforce an LLM’s confidently correct answers during training, which may improve performance.
They tested TU using multiple LLMs on 10 common tasks, such as question-answering, summarization, translation, and math reasoning. Their method more effectively identified unreliable predictions than either measure on its own.
Measuring total uncertainty often required fewer queries than calculating aleatoric uncertainty, which could reduce computational costs and save energy.
Their experiments also revealed that epistemic uncertainty is most effective on tasks with a unique correct answer, like factual question-answering, but may underperform on more open-ended tasks.
In the future, the researchers could adapt their technique to improve its performance on open-ended queries. They may also build on this work by exploring other forms of aleatoric uncertainty.
This work is funded, in part, by the MIT-IBM Watson AI Lab.
New model predicts how mosquitoes will fly
Their flight patterns change in response to different sensory cues, a new study finds. The work could lead to more effective traps and mosquito control strategies.
A mosquito finds its target with the help of certain cues in its environment, such as a person’s silhouette and the carbon dioxide they exhale.
Now researchers at MIT and Georgia Tech have found that these visual and chemical cues help determine the insects’ flight paths. The team has developed the first three-dimensional model of mosquito flight, based on experiments with mosquitoes flying in the presence of different sensory cues.
Their model, reported today in the journal Science Advances, identifies three flight patterns that mosquitoes exhibit in response to sensory stimuli.
When they can only see a potential target, mosquitoes take a “fly-by” approach, quickly diving in toward the target, then flying back out if they do not detect any other host-confirming cues.
When they can’t see a target but can smell a chemical cue such as carbon dioxide, mosquitoes will do “double-takes,” slowing down and flitting back and forth to keep close to the source.
Interestingly, when mosquitoes receive both visual and chemical cues, such as seeing a silhouette and smelling carbon dioxide, they switch to an “orbiting” pattern, flying around a target at a steady speed as they prepare to land, much like a shark circling its prey.
The researchers say the new model can be used to predict how mosquitoes will fly in response to other cues, such as heat, humidity, and certain odors. Such predictions could help to design more effective traps and mosquito control strategies.
“Our work suggests that mosquito traps need specifically calibrated, multisensory lures to keep mosquitoes engaged long enough to be captured,” says study author Jörn Dunkel, MathWorks Professor of Mathematics at MIT. “We hope this establishes a new paradigm for studying pest behavior by using 3D tracking and data-driven modeling to decode their movement and solve major public health challenges.”
The study’s MIT co-authors are Chenyi Fei, a postdoc in MIT’s Department of Mathematics, and Alexander Cohen PhD ’26, a recent MIT chemical engineering PhD student advised by Dunkel and Professor Martin Bazant, along with Christopher Zuo, Soohwan Kim, and David L. Hu ’01, PhD ’06 of Georgia Tech, and Ring Carde of the University of California at Riverside.
Flight by numbers
Mosquitoes are considered to be the most dangerous animals in the world, given their collective impact on human health. The blood-sucking insects transmit malaria, dengue fever, West Nile virus, and other deadly diseases that together cause over 770,000 deaths each year.
Of the 3,500 known species of mosquitoes, around 100 have evolved to specifically target humans, including Aedes aegypti, a species that uses a variety of cues to seek out human hosts. Scientists have studied how certain cues attract mosquitoes, mainly by setting up experiments in wind tunnels, where they can waft cues such as carbon dioxide and study how mosquitoes respond. Such experiments have mainly recorded data such as where and when the insects land. The researchers say no study has explored how mosquitoes fly as they hunt for a host.
“The big question was: How do mosquitoes find a human target?” says Fei. “There were previous experimental studies on what kind of cues might be important. But nothing has been especially quantitative.”
At MIT, Dunkel’s group develops mathematical models to describe and predict the behavior of complex living systems, such as how worms untangle, how starfish embryos develop and swim, and how microbes evolve their community structure over time.
Dunkel looked to apply similar quantitative techniques to predict flight patterns of mosquitoes after giving a talk at Georgia Tech. David Hu, a former MIT graduate student who is now a professor of mechanical engineering at Georgia Tech, proposed a collaboration; Hu’s lab was carrying out experiments with mosquitoes at a facility at the Centers for Disease Control and Prevention in Atlanta, where they were studying the insects’ behavior in response to sensory cues. Could Dunkel’s group use the collected data to identify significant flight behavior that could ultimately help scientists control mosquito populations?
“One of the original motivations was designing better traps for mosquitoes,” says Cohen. “Figuring out how they fly around a human gives insights on how we can avoid them.”
Taking cues
For their new study, Hu and his colleagues at Georgia Tech carried out experiments with 50 to 100 mosquitoes of the Aedes aegypti species. The insects flew inside a long, white, slightly angled rectangular room while cameras positioned around the room captured detailed three-dimensional trajectories of each mosquito. In the center of the room, the researchers placed an object to represent a certain visual or chemical cue.
In some trials, they placed a black Styrofoam sphere on a stand to represent a simple visual cue. (Mosquitoes would be able to see the black sphere against the room’s white background.) In other trials, they set up a white sphere with a tube running through it to pump out carbon dioxide at rates similar to what humans breathe out. These trials represented the presence of a chemical cue, but not a visual cue.
The researchers also studied the mosquitoes’ response to both visual and chemical cues, using a black sphere that emitted carbon dioxide. Finally, they observed how mosquitoes behaved around a human volunteer who wore protective clothing that was black on one side and white on the other.
Across 20 experiments, the team generated more than 53 million data points and over 477,220 mosquito flight paths. Hu shared the data with Dunkel, whose group used the measurements to develop a model for mosquito flight behavior.
“We are proposing a very broad range of dynamical equations, and when you start out, the equation to predict a mosquito’s flight path is very complicated, with a lot of terms, including the relative importance of a visual versus a chemical cue,” Dunkel explains. “Then through iteration against data, we reduce the complexity of that equation until we get the simplest model that still agrees with the data.”
In the end, the group whittled the equations down to a simple model that accurately predicts how a mosquito will fly, given the presence of a visual cue, a chemical cue, or both. The flight paths in response to one or the other cue are markedly different. And interestingly, when both cues are present, the researchers noted that the resulting path is not “additive.” In other words, a mosquito that can both see and smell a target does not simply combine the paths it would take in response to each cue alone. Instead, the insects take a distinct path, circling rather than diving or darting around their target.
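The study's candidate equations aren't reproduced here, but the pare-down loop Dunkel describes can be sketched in rough form as follows. The candidate-term names, tolerance, and backward-elimination strategy are illustrative assumptions, not the study's actual procedure.

import numpy as np

def simplify_dynamics(library, dv_dt, tol=0.05):
    """Generic sketch of paring a broad candidate model down against data.

    `library` is an (n_samples, n_terms) matrix of candidate terms (for example,
    attraction toward the visual cue, response to carbon dioxide, drag on current
    velocity) evaluated along measured trajectories, and `dv_dt` holds the observed
    accelerations. Terms are removed one at a time as long as the fit barely
    degrades, leaving the simplest model that still agrees with the data.
    """
    def fit_error(cols):
        coeffs, *_ = np.linalg.lstsq(library[:, cols], dv_dt, rcond=None)
        return np.linalg.norm(library[:, cols] @ coeffs - dv_dt)

    active = list(range(library.shape[1]))
    baseline = fit_error(active)
    while len(active) > 1:
        errors = {t: fit_error([c for c in active if c != t]) for t in active}
        drop = min(errors, key=errors.get)
        if errors[drop] > baseline * (1.0 + tol):
            break  # every further removal degrades agreement with the data
        active.remove(drop)
    return active  # indices of the surviving candidate terms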
“Obviously there are additional cues that humans emit, like odor, heat, and humidity,” Cohen notes. “For the species we study, visual and carbon dioxide cues are the most important. But we can apply this model to study different species and how they respond to other sensory cues.”
The researchers have developed an interactive app that incorporates the new mosquito flight model. Users can experiment with different objects and set parameters such as the number of mosquitoes around the object and the type of sensory cue that is present. The model then visualizes how the mosquitoes would fly in response.
“The original hope was to have a quantitative model that can simulate mosquito behavior around various trap designs,” Cohen says. “Now that we have a model, we can start to design more intelligent traps.”
This work was supported, in part, by the National Science Foundation, Schmidt Sciences, LLC, the NDSEG Fellowship Program, and the MIT MathWorks Professorship Fund.
Pursuing a passion for public health
MIT senior Srihitha Dasari reflects on the power of experiential learning through the PKG Center for Social Impact.
MIT senior Srihitha Dasari never imagined she would be speaking in front of the United Nations about health care, technology, and the power of co-designing public health interventions in collaboration with impacted communities.
But when she stepped up to the podium to speak about digital well-being and community-centered health care design, she carried with her more than research findings. She brought several years of experiential learning in public health environments, ranging from visiting exam rooms of New England’s largest safety net hospital to collaborating with nurses in rural Argentina and working on maternal health in India and Nepal.
Dasari arrived at MIT intending to major in brain and cognitive sciences and follow a pre-med track. Like many aspiring physicians, she pictured her MIT years filled with lab work, shadowing doctors, and preparing for medical school. Instead, during her first Independent Activities Period (IAP), she enrolled in the PKG Center for Social Impact’s IAP Health Program and began to broaden her understanding of practicing medicine.
“What was really incredible about IAP Health,” says Dasari, is that “I did it so early in not only my academic career, but just in the beginning of when I was actually formulating a lot of my career aspirations, [and] it really immersed me into what public health looks like.”
Through IAP Health, Dasari worked as an intern at the Boston Medical Center Autism Program. There, she provided in-clinic support to children with autism and their families, helping guide them through appointments and collaborating with physicians to adapt exam techniques to meet patients’ needs.
“When you think about how medicine is delivered, it can feel very systematic — like there are boxes you have to check,” she says. “But working in that clinic showed me … you can modify the experience to truly care for the whole person.”
The program exposed her not only to clinical care, but to the broader forces that shape health outcomes. “I didn’t envision myself doing public health when I entered college,” Dasari says. “But looking back, public health is the through line of everything I’ve done.”
She remained at Boston Medical Center as an intern for over a year with continued support and funding from the PKG Center’s Federal Work-Study and Social Impact Internship programs. The sustained engagement deepened her understanding of how health-care systems can either reinforce or reduce disparities — a systems-level perspective that carried into her global work.
During her second-year IAP, Dasari received a PKG Fellowship to develop an electronic health record system for a maternal ward in a rural hospital in Argentina. The project grew out of a relationship she developed through the student group MIT Global Health Alliance, which supports co-designing public health interventions with impacted communities.
Dasari’s collaboration with the hospital evolved into a social enterprise that she co-founded: PuntoSalud, an AI-powered chatbot designed to bridge health information gaps in rural Argentina. Dasari and her co-founders received a $5,000 award and seed funding to prototype and develop PuntoSalud through the PKG IDEAS Social Innovation Incubator, MIT’s only entrepreneurship program focused solely on social impact.
Speaking at the United Nations underscored a lesson she absorbed throughout her varied experience: Meaningful health innovation begins with relationships.
“I’ve been able to meet people from so many different facets of the health-care pipeline that I didn’t envision myself meeting,” Dasari says.
The mindset she developed through PKG programming has informed her experience beyond the center. Through MIT D-Lab, Dasari conducted maternal and neonatal health needs assessments in rural Nepal, interviewing community members to better understand gaps in care. The findings informed efforts to retrofit birthing centers with improved heating systems in cold climates. Later, supported by the MIT International Science and Technology Initiatives, she traveled to India to interview health-care providers about strategies to reduce non-medical cesarean section rates, with the goal of developing policy recommendations for other health systems.
“I came in thinking I would practice medicine one-on-one,” Dasari says. “Now I want to increase my impact in the health care field. I see that as clinical medicine intersected with public health, relieving health disparities for a wider population.”
As Dasari prepares to leave MIT for a year in clinical research, she does so with a systems lens on science and health care, and a commitment to social impact.
“The path I’ve taken in health care as an undergrad student has given me both a sense of purpose and fulfillment as I prepare to leave MIT,” she says. “It’s shown me that meaningful impact can begin long before medical school, and that I want to carry forward the values these experiences instilled in me.”
For Dasari, experiential learning didn’t redirect her ambitions, but enhanced them.
“I feel like the PKG Center … it’s not changing your goals,” she says. “It’s shaping them into their fullest potential.”
Brain circuit needed to incorporate new information may be linked to schizophrenia
Impairments of this circuit may help to explain why some people with schizophrenia lose touch with reality.
One of the symptoms of schizophrenia is difficulty incorporating new information about the world. This can lead people with schizophrenia to struggle with making decisions and, eventually, to lose touch with reality.
MIT neuroscientists have now identified a gene mutation that appears to give rise to this type of difficulty. In a study of mice, the researchers found that the mutated gene impairs the function of a brain circuit that is responsible for updating beliefs based on new input.
This mutation, in a gene called grin2a, was originally identified in a large-scale screen of patients with schizophrenia. The new study suggests that drugs targeting this brain circuit could help with some of the cognitive impairments seen in people with schizophrenia.
“If this circuit doesn’t work well, you cannot quickly integrate information,” says Guoping Feng, the James W. and Patricia T. Poitras Professor in Brain and Cognitive Sciences at MIT, a member of the Broad Institute of Harvard and MIT, and the associate director of the McGovern Institute for Brain Research at MIT. “We are quite confident this circuit is one of the mechanisms that contributes to the cognitive impairment that is a major part of the pathology of schizophrenia.”
Feng and Michael Halassa, a professor of psychiatry and neuroscience and director of translational research at Tufts University School of Medicine, are the senior authors of the new study, which appears today in Nature Neuroscience. Tingting Zhou, a research scientist at the McGovern Institute, and Yi-Yun Ho, a former MIT postdoc, are the lead authors of the paper.
Adapting to new information
Schizophrenia is known to have a strong genetic component. For the general population, the risk of developing the disease is about 1 percent, but that goes up to 10 percent for those who have a parent or sibling with the disease, and 50 percent for people who have an identical twin with the disease.
Researchers at the Stanley Center for Psychiatric Research at the Broad Institute have identified more than 100 gene variants linked to schizophrenia, using genome-wide association studies. However, many of those variants are located in non-coding regions of the genome, making it difficult to figure out how they might influence development of the disease.
More recently, researchers at the Stanley Center used a different strategy, known as whole-exome sequencing, to reveal gene mutations linked to schizophrenia. This technique sequences only the protein-coding regions of the genome, so it can reveal mutations that are located in known genes.
Using this approach on about 25,000 sequences from people with schizophrenia and 100,000 sequences from control subjects, the researchers identified 10 genes in which mutations significantly increase the risk of developing schizophrenia.
In the new Nature Neuroscience study, Feng and his students created a mouse model with a mutation in one of those genes, grin2a. This gene encodes a protein that forms part of the NMDA receptor — a receptor that is activated by the neurotransmitter glutamate and is often found on the surface of neurons.
Zhou then investigated whether these mice displayed any of the characteristic behaviors seen in people with schizophrenia. These individuals show many complex symptoms, including psychoses such as hallucinations and delusions (loss of contact with reality). Those are difficult to study in mice, but it is possible to study related symptoms such as difficulty in interpreting new sensory input.
Over the past two decades, schizophrenia researchers have hypothesized that psychosis may stem from an impaired ability to update beliefs based on new information.
“Our brain can form a prior belief of reality, and when sensory input comes into the brain, a neurotypical brain can use this new input to update the prior belief. This allows us to generate a new belief that’s close to what the reality is,” Zhou says. “What happens in schizophrenia patients is that they weigh too heavily on the prior belief. They don’t use as much current input to update what they believed before, so the new belief is detached from reality.”
To study this, Zhou designed an experiment that required mice to choose between two levers to press to earn a food reward. One lever was low-reward — mice had to push it six times to get one drop of milk. A high-reward lever dispensed three drops per push.
At the beginning of the study, all of the mice learned to prefer the high-reward lever. However, as the experiment went on, the number of presses required to dispense the higher reward gradually went up, while there were no changes to the low-reward lever.
As the effort required went up, healthy mice started to switch back and forth between the two levers. Once they had to press the high-reward lever around 18 times for three drops of milk, making the effort per drop about the same for each lever, they eventually switched permanently to the low-reward lever. However, mice with a mutation in grin2a showed a different behavior pattern. They spent more time switching back and forth between the two levers, and they made the switch to the low-reward side much later.
“We find that neurotypical animals make adaptive decisions in this changing environment,” Zhou says. “They can switch from the high-reward side to the low-reward side around the equal value point, while for the animals with the mutation, the switch happens much later. Their adaptive decision-making is much slower compared to the wild-type animals.”
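The equal-value point follows from the numbers above: the low-reward lever always costs six presses per drop, so the two options break even once the high-reward lever demands the same effort per drop.

\[
\frac{n \text{ presses}}{3 \text{ drops}} = \frac{6 \text{ presses}}{1 \text{ drop}}
\quad\Longrightarrow\quad n = 18 \text{ presses},
\]

after which any further increase in required presses makes the low-reward lever the better choice.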
An impaired circuit
Using functional ultrasound imaging and electrical recordings, the researchers found that the brain region affected most by the grin2a mutation was the mediodorsal thalamus. This part of the brain connects with the prefrontal cortex to form a thalamocortical circuit that is responsible for regulating cognitive functions such as executive control and decision-making.
The researchers found that neuronal activity in the mediodorsal thalamus appears to keep track of the changes in value of the two reward options. Additionally, the mice showed different patterns of neural activity depending on whether they were in an exploratory state or committed to one side.
The researchers also showed that they could use optogenetics to reverse the behavioral symptoms of the mice with mutated grin2a. They engineered the neurons of the mediodorsal thalamus so that they could be activated by light, and when these neurons were activated, the mice began behaving similarly to mice without the grin2a mutation.
While only a very small percentage of schizophrenia patients have mutations in the grin2a gene, it’s possible that this circuit dysfunction is a converging mechanism of cognitive impairment for a subset of schizophrenia patients with different causes.
Targeting this circuit could offer a way to overcome some of the cognitive impairments seen in people with schizophrenia, the researchers say. To do that, they are now working on identifying targets within the circuit that could be potentially druggable.
The research was funded by the National Institute of Mental Health, the Poitras Center for Psychiatric Disorders Research at MIT, the Yang Tan Collective at MIT, the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT, the Stelling Family Research Fund at MIT, the Stanley Center for Psychiatric Research, and the Brain and Behavior Research Foundation.
Turning extreme heat into large-scale energy storage
Fourth Power, founded by Professor Asegun Henry, is developing thermal batteries for efficiently storing excess electricity from utility grids and power producers.
Thermal batteries can efficiently store energy as heat. But building them requires a carefully designed system with materials that can withstand cycles of extremely high temperatures, without succumbing to problems like corrosion, thermal expansion, and structural fatigue.
Many thermal battery systems move high-temperature gas or molten salt around through metal pipes. Fourth Power, founded by MIT Professor Asegun Henry, is turning these materials inside out, using molten metal to transport the heat, which is stored in carbon bricks.
“The idea was, instead of making the system from metal, let’s move liquid metals,” says Henry SM ’06, PhD ’09.
Henry’s approach earned him a Guinness World Record for the hottest liquid pump back in 2017 — important because when you double the absolute temperature of a material, to the point where it glows white-hot, the amount of light it emits doesn’t just double, it increases 16 times (or to the fourth power).
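That scaling is the fourth-power law of thermal radiation: the power a hot surface emits grows with the fourth power of its absolute temperature, so doubling the temperature multiplies the emitted light by sixteen.

\[
P \propto T^{4} \quad\Longrightarrow\quad \frac{P(2T)}{P(T)} = \frac{(2T)^{4}}{T^{4}} = 2^{4} = 16.
\]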
The company is harvesting all that light with thermophotovoltaic cells, which work like solar cells to convert light into electricity. Henry and his collaborators broke another record when they demonstrated a thermophotovoltaic cell that could convert light to electricity with an efficiency above 40 percent.
Fourth Power is working to use those record-breaking innovations to provide energy for power grids, power producers, and technology companies building power-hungry infrastructure like data centers. Henry says the batteries can provide anywhere from 10 to over 100 hours of electricity at a storage cost that is significantly cheaper than lithium-ion batteries at grid scale. The company is currently cycling each section of its system through relevant operating temperatures — which are nearly half as hot as the sun — and plans to have a fully integrated demonstration unit operating later this year.
“Explaining why our system is such a huge improvement over everything else centers around power density,” explains Henry, who serves as Fourth Power’s chief technologist. “We realized if you push the temperature higher, you will transfer heat at a higher rate and shrink the system. Then everything gets cheaper. That’s why we pursue such high temperatures at Fourth Power. We operate our thermal battery between 1,900 and 2,400 degrees Celsius, which allows us to save a tremendous amount on the balance of system costs.”
A career in heat
Henry earned his master’s and PhD degrees from MIT before working in faculty positions at Georgia Tech and MIT. As a professor at both schools, his research has focused on thermal transport, storage, renewable energy, and other technologies that could lead to improvements in sustainability and decarbonization. Today, he is the George N. Hatsopoulos Professor in Thermodynamics in MIT’s Department of Mechanical Engineering.
Heat transfer systems are usually made out of metals like iron and nickel. Generally, the higher the temperature you want to reach, the more expensive the metal. Henry noticed ceramics can get much hotter than metals, but they’re not used nearly as often. He started asking why.
“The answer is often pretty straightforward: You can’t weld ceramics,” Henry says. “Ceramics aren’t ductile. They generally fail in a catastrophically brittle way, and that’s not how we like large systems to behave. But I couldn’t find many problems beyond that.”
After receiving funding from the Department of Energy and the MIT Energy Initiative, Henry spent years developing a pump made from ceramics and graphite (which is similar to a ceramic). In 2017, his pump set the record for the highest recorded operating temperature for a liquid pump, at 1,200 degrees Celsius. The pump moved white-hot liquid tin. He chose tin because it doesn’t react with carbon, eliminating corrosion. It also has a relatively low melting point and high boiling point, which keeps it liquid across a large temperature range.
The challenge then became designing the system.
“Typically, a mechanical engineer would come up with a design and say, ‘Give me the best materials to do this,’” Henry says. “We flipped the problem, so we were saying, ‘We know what materials will work, now we need to figure out how to make a system out of it.’”
In 2023, Henry met Arvin Ganesan, who had previously led global energy work at Apple. At first, Ganesan wasn’t interested in joining a startup — he had two young kids and wanted to prioritize his family — but he was intrigued by the potential of the technology. At their first meeting, the two connected over shared values and fatherhood, as Henry surprised Ganesan by bringing his own young children.
“I had a sense this technology had the promise to tackle the twin crises of affordability and climate change at the same time,” says Ganesan, who is now Fourth Power’s CEO. “As energy demand becomes more pronounced, we either need to deploy harder and deeper tech, which is also important, or improve existing tech. Fourth Power is trying to simplify the physics and thermodynamic principles to deliver an approach that has been very well-studied for a very long time.”
Since 2023, Fourth Power has been conducting sponsored research at the LNS Bates Research and Engineering Center to validate the durability and reliability of its components ahead of a fully integrated demonstration.
The system Fourth Power designed takes in excess electricity from sources like the grid and uses it to heat a series of 6-foot-long, 20-inch-thick graphite bricks until they reach about 2,400 degrees Celsius. At that point the system is considered fully charged.
When the customer wants the electricity back, the bricks are used to heat up liquid tin, which flows through a series of graphite pipes, pumps, and flow meters to thermophotovoltaic cells, which turn the light from the glowing hot infrastructure back into electricity.
“You can basically dip the cells into the light and get power, or you can pull them back out and shut it off,” Henry explains. “The liquid metal starts at 2,400 Celsius and then cools as it’s going through the system because it’s giving a bunch of its energy to the photovoltaic, and then it circulates back through the graphite blocks, which act as a furnace, to retrieve more heat.”
From concept to company
Later this year, Fourth Power plans to turn on a 1-megawatt-hour system in its new headquarters in Bedford, Massachusetts. A full-scale system would offer 25 megawatts of power and 250 megawatt hours of storage and take up about half a football field.
“Most technologies you’ll see in storage are around 10 megawatts an acre or less,” Henry explains. “Fourth Power is more like 100 megawatts per acre. It’s very power-dense.”
The power and storage units of Fourth Power’s system are modular, which will allow customers to start with a smaller system and add storage units to extend storage length later. The company expects to lose about 1 percent of total heat stored per day.
“Customers can buy one storage and one power module, and that’s a 10-hour battery,” Henry explains. “But if they want one power module and two storage modules, that’s a 20-hour battery. Customers can mix and match, which is really advantageous for utilities as renewables scale and storage needs change.”
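The mix-and-match arithmetic is straightforward, since storage duration is simply the energy held in the bricks divided by the power the system delivers.

\[
\frac{250\ \text{MWh}}{25\ \text{MW}} = 10\ \text{hours},
\qquad
\frac{2 \times 250\ \text{MWh}}{25\ \text{MW}} = 20\ \text{hours}.
\]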
Down the line, the system could also be run as a power plant, converting fuel into electricity or using fuel to charge its batteries during stretches with little wind or sun. It could also be used to provide industrial heat.
But for now, Fourth Power is focused on the battery application.
“Utilities need something cheap and they need something reliable,” Henry says. “The only technology that has managed to reach at least one of those requirements is lithium ion. But the world is waiting for something that’s much cheaper than lithium ion and just as reliable, if not better. That’s what we’re focused on demonstrating to the world.”
John Ochsendorf named associate dean for research for the School of Architecture and Planning
The newly created role will shape the infrastructure needed to nurture the school’s growing research goals.
Professor John Ochsendorf, a member of the MIT faculty since 2002, is taking on a new role in support of the research efforts of faculty and students in the MIT School of Architecture and Planning (SA+P). At the start of this year, Ochsendorf was appointed to lead an initiative strengthening research strategy, support, and funding across the school.
“John is a bridge-builder by instinct and practice, and we look forward to the bridges he will build between our school and industry, our school and MIT, and between research and pedagogy in our school,” says SA+P Dean Hashim Sarkis. The appointment comes as sponsored research across SA+P continues to grow, expanding opportunities for graduate research assistantships and interdisciplinary collaboration across MIT.
Ochsendorf is the Class of 1942 Professor with dual appointments in the departments of Architecture and Civil and Environmental Engineering in the MIT School of Engineering. At the center of his work is a deep commitment to students and education through research and making. For example, in close collaboration with students and alumni, he has contributed to projects ranging from the Sean Collier Memorial on campus to a recent Martin Puryear sculpture at Storm King Art Center. Since 2022, Ochsendorf has served as the founding director of the MIT Morningside Academy for Design, where he helped establish new models for design research, interdisciplinary collaboration, and student engagement across the Institute.
Ochsendorf describes the new role as both a “challenge and an opportunity” to support the considerable and increasingly broad portfolio of research across SA+P.
“We want to understand the current landscape of our research funding and identify the challenges and inefficiencies impacting faculty,” he notes. “The ultimate goal is to grow our research capacity for a world that needs the best ideas from MIT.”
The effort is consistent with SA+P’s history of pioneering research and pedagogic exploration. The Department of Architecture was among the first in the United States to establish doctoral programs within a school of architecture, including PhDs in history, theory, and criticism and in building technology. The Department of Urban Studies and Planning is home to the largest urban planning faculty in the country and maintains a variety of research labs, while Media Arts and Sciences and the Media Lab have a broad and deep research culture. Each of the school’s departments enjoys the advantage of operating within the context of MIT’s culture of innovation and interdisciplinary study. With new faculty hires increasingly research-driven, the time to develop and support robust research portfolios is now.
The research of Ochsendorf and his students has bridged the spectrum from humanistic work supported by organizations such as the National Endowment for the Humanities and the Graham Foundation for Advanced Studies in the Fine Arts to more scientific research supported by the National Science Foundation. In his new role, he will build on that experience, working with faculty and Institute partners to strengthen grant development, clarify research priorities, and expand research capacity across SA+P.
“I’ve always loved being at MIT because of the team spirit here,” says Ochsendorf. “We’re a place where we try to support each other, and it’s because of this environment that I am excited about this new role.”
Sustaining diplomacy amid competition in US-China relations
At MIT, former U.S. ambassador to China Nicholas Burns highlights climate change as an area for diplomatic engagement, while exploring areas including China’s emphasis on STEM education.
The United States and China “are the two largest emitters of carbon in the world,” said Nicholas Burns, former U.S. ambassador to the People’s Republic of China, at a recent MIT seminar. “We need to work with each other for the good of both of our countries.”
During the MITEI Presents: Advancing the Energy Transition presentation, Burns gave insight into the evolving state of U.S.-China relations, its implications for the global order, and its impact on global efforts to advance the energy transition and address climate change.
“We are the two largest global economies,” said Burns, who is now the Goodman Professor of the Practice of Diplomacy and International Relations at Harvard University’s Kennedy School of Government. “These are the only two countries that affect everybody else in the international system because of our weight.”
The relationship between the United States and China can be summarized in three words, according to Burns: competitive, tough, and adversarial — a description that rings true on both sides. He listed four primary areas for this competition: military, technology, trade and economics, and values.
Burns described the especially complicated area of trade and economics. “We both want to be number one. Neither of us — to be honest — is willing to be number two,” said Burns. Outside of North America, China is the United States’ largest trade partner. Outright trade wars — like those in April and October 2025 — create friction. “At one point, you’ll remember, 145 percent tariffs by the United States, and 125 percent by China on the United States. That just grinds a relationship. Those level of tariffs, had they been sustained, would have meant zero trade between the two countries.”
The energy field can be significantly impacted by this area of competition, Burns added. China is dominant in the production and processing of rare earth elements, many of which are critical to products like lithium batteries, solar panels, and electric vehicles. In 2024 and 2025, the United States was not the only country to place tariffs on these products; India, Turkey, South Africa, Mexico, Canada, the EU, and others followed suit. “I think the Trump administration is right, as President Biden was, to try to diversify sources on rare earths,” Burns said.
Burns also noted with interest the dichotomy in the Chinese energy sector between its lead in clean-energy technology and its continued reliance on coal, which stands out as an inconsistency in China’s efforts. Burns believes that climate change could be a key area of cooperation between China and the United States, emphasizing the importance of the United States’ participation, both technologically and diplomatically.
Burns also described the significant technological competition between the United States and China — an area of central importance. Throughout his presentation, Burns was quick to praise the emphasis that China puts on education and academic achievement, particularly in STEM fields. Pulling from a recent article in The Economist, he compared the 36 percent of Chinese first-year university students majoring in STEM fields to the 5 percent of American first-year students in STEM. “Think about the volume of graduates and the disparity between our country and China,” he said. “Then think about the percentage of those graduates who go into science and technology.”
Currently, areas like artificial intelligence, quantum computing, and biotechnology are taking center stage in technological innovation. “The Chinese are very skilled in terms of industrial processes and doctrine of adapting quickly,” said Burns. He explained that holding a competitive edge lies not only in who is first to market, but also in who adopts the technology first and who is able to unite that technological progress with policy.
“This is the most important relationship that we have in the world,” said Burns. He believes that the true test is whether the United States and China can manage competition so that interests are protected, while avoiding the use of the massive destructive power both countries possess. “We’ve got to normalize the communication and engagement to prevent the worst from happening,” said Burns.
“We’re at a stage of human history where we’re all linked together, and the fate of everybody in this room and all of our countries is linked together by these huge transnational challenges,” said Burns. “We’ve got to learn to compete and yet live in peace with each other in the process.”
This speaker series highlights energy experts and leaders at the forefront of the scientific, technological, and policy solutions needed to transform our energy systems. Visit MITEI’s Events page for more information on this and additional events.
MIT-IBM Watson AI Lab seed to signal: Amplifying early-career faculty impact
Academia-industry relationship is an early-stage accelerator, supporting professional progress and research.
The early years of faculty members’ careers are a formative and exciting time in which to establish a firm footing that helps determine the trajectory of researchers’ studies. This includes building a research team, which demands innovative ideas and direction, creative collaborators, and reliable resources.
For a group of MIT faculty working with and on artificial intelligence, early engagement with the MIT-IBM Watson AI Lab through projects has played an important role in promoting ambitious lines of inquiry and shaping prolific research groups.
Building momentum
“The MIT-IBM Watson AI Lab has been hugely important for my success, especially when I was starting out,” says Jacob Andreas — associate professor in the Department of Electrical Engineering and Computer Science (EECS), a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), and a researcher with the MIT-IBM Watson AI Lab — who studies natural language processing (NLP). Shortly after joining MIT, Andreas jump-started his first major project through the MIT-IBM Watson AI Lab, working on language representation and structured data augmentation methods for low-resource languages. “It really was the thing that let me launch my lab and start recruiting students.”
Andreas notes that this occurred during a “pivotal moment” when the field of NLP was undergoing significant shifts to understand language models — a task that required significantly more compute, which was available through the MIT-IBM Watson AI Lab. “I feel like the kind of the work that we did under that [first] project, and in collaboration with all of our people on the IBM side, was pretty helpful in figuring out just how to navigate that transition.” Further, the Andreas group was able to pursue multi-year projects on pre-training, reinforcement learning, and calibration for trustworthy responses, thanks to the computing resources and expertise within the MIT-IBM community.
For several other faculty members, timely participation with the MIT-IBM Watson AI Lab proved to be highly advantageous as well. “Having both intellectual support and also being able to leverage some of the computational resources that are within MIT-IBM, that’s been completely transformative and incredibly important for my research program,” says Yoon Kim — associate professor in EECS, CSAIL, and a researcher with the MIT-IBM Watson AI Lab — who has also seen his research field alter trajectory. Before joining MIT, Kim met his future collaborators during an MIT-IBM postdoctoral position, where he pursued neuro-symbolic model development; now, Kim’s team develops methods to improve large language model (LLM) capabilities and efficiency.
One factor he points to that led to his group’s success is a seamless research process with intellectual partners. This has allowed his MIT-IBM team to apply for a project, experiment at scale, identify bottlenecks, validate techniques, and adapt as necessary to develop cutting-edge methods for potential inclusion in real-world applications. “This is an impetus for new ideas, and that’s, I think, what’s unique about this relationship,” says Kim.
Merging expertise
The nature of the MIT-IBM Watson AI Lab is that it not only brings together researchers in the AI realm to accelerate research, but also blends work across disciplines. Lab researcher Justin Solomon, an MIT associate professor in EECS and a member of CSAIL, describes his research group as growing up with the lab, and the collaboration as being “crucial … from its beginning until now.” Solomon’s research team focuses on theoretically oriented, geometric problems as they pertain to computer graphics, vision, and machine learning.
Solomon credits the MIT-IBM collaboration with expanding his skill set as well as applications of his group’s work — a sentiment that’s also shared by lab researchers Chuchu Fan, an associate professor of aeronautics and astronautics and a member of the Laboratory for Information and Decision Systems, and Faez Ahmed, associate professor of mechanical engineering. “They [IBM] are able to translate some of these really messy problems from engineering into the sort of mathematical assets that our team can work on, and close the loop,” says Solomon. This, for Solomon, includes fusing distinct AI models that were trained on different datasets for separate tasks. “I think these are all really exciting spaces,” he says.
“I think these early-career projects [with the MIT-IBM Watson AI Lab] largely shaped my own research agenda,” says Fan, whose research intersects robotics, control theory, and safety-critical systems. Like Kim, Solomon, and Andreas, Fan and Ahmed began projects through the collaboration the first year they were able to at MIT. Constraints and optimization govern the problems that Fan and Ahmed address, and so require deep domain knowledge outside of AI.
Working with the MIT-IBM Watson AI Lab enabled Fan’s group to combine formal methods with natural language processing, which she says, allowed the team to go from developing autoregressive task and motion planning for robots to creating LLM-based agents for travel planning, decision-making, and verification. “That work was the first exploration of using an LLM to translate any free-form natural language into some specification that robot can understand, can execute. That’s something that I’m very proud of, and very difficult at the time,” says Fan. Further, through joint investigation, her team has been able to improve LLM reasoning — work that “would be impossible without the IBM support,” she says.
Through the lab, Faez Ahmed’s collaboration has facilitated the development of machine-learning methods to accelerate discovery and design within complex mechanical systems. Their Linkages work, for instance, employs “generative optimization” to solve engineering problems in a way that is both data-driven and precise; more recently, the group has been applying multi-modal data and LLMs to computer-aided design. Ahmed states that AI is frequently applied to problems that are already solvable, but could benefit from increased speed or efficiency; however, challenges like mechanical linkages that were once deemed “almost unsolvable” are now within reach. “I do think that is definitely the hallmark [of our MIT-IBM team],” says Ahmed, praising the achievements of his MIT-IBM group, which is co-led by Akash Srivastava and Dan Gutfreund of IBM.
What began as initial collaborations for each MIT faculty member has evolved into a lasting intellectual relationship, where both parties are “excited about the science,” and “student-driven,” Ahmed adds. Taken together, the experiences of Jacob Andreas, Yoon Kim, Justin Solomon, Chuchu Fan, and Faez Ahmed speak to the impact that a durable, hands-on, academia-industry relationship can have on establishing research groups and ambitious scientific exploration.
Three anesthesia drugs all have the same effect in the brain, MIT researchers find
Discovering this common mechanism could lead to a universal anesthesia-delivery system to monitor patients more effectively.
When patients undergo general anesthesia, doctors can choose among several drugs. Although each of these drugs acts on neurons in different ways, they all lead to the same result: a disruption of the brain’s balance between stability and excitability, according to a new MIT study.
This disruption causes neural activity to become increasingly unstable, until the brain loses consciousness, the researchers found. The discovery of this common mechanism could make it easier to develop new technologies for monitoring patients while they are undergoing anesthesia.
“What’s exciting about that is the possibility of a universal anesthesia-delivery system that can measure this one signal and tell how unconscious you are, regardless of which drugs they’re using in the operating room,” says Earl Miller, the Picower Professor of Neuroscience and a member of MIT’s Picower Institute for Learning and Memory.
Miller, Emery Brown, who is the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience, and their colleagues are now working on an automated control system for delivery of anesthesia drugs, which would measure the brain’s stability using EEG and then automatically adjust the drug dose. This could help doctors ensure that patients stay unconscious throughout surgery without becoming too deeply unconscious, which can have negative side effects following the procedure.
Miller and Ila Fiete, a professor of brain and cognitive sciences, the director of the K. Lisa Yang Integrative Computational Neuroscience Center (ICoN), and a member of MIT’s McGovern Institute for Brain Research, are the senior authors of the new study, which appears today in Cell Reports. MIT graduate student Adam Eisen is the paper’s lead author.
Destabilizing the brain
Exactly how anesthesia drugs cause the brain to lose consciousness has been a longstanding question in neuroscience. In 2024, a study from Miller’s and Fiete’s labs suggested that, at least for propofol, anesthesia works by disrupting the balance between stability and excitability in the brain.
When someone is awake, their brain is able to maintain this delicate balance, responding to sensory information or other input and then returning to a stable baseline.
“The nervous system has to operate on a knife’s edge in this narrow range of excitability,” Miller says. “It has to be excitable enough so different parts can influence one another, but if it gets too excited it goes off into chaotic activity.”
In that 2024 study, the researchers found that propofol knocks the brain out of this state, known as “dynamic stability.” As doses of the drug increased, the brain took longer and longer to return to its baseline state after responding to new input. This effect became increasingly pronounced until consciousness was lost.
For that study, the researchers devised a computational model that analyzes neural activity recorded from the brain. This technique allowed them to determine how the brain responds to perturbations such as an auditory tone or other sensory input, and how long it takes to return to its baseline stability.
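The published analysis is more sophisticated, but its central quantity, how quickly activity settles back toward baseline after a perturbation, can be illustrated with a simple exponential fit. The trace format, sampling step, and fitting choice below are assumptions made for the sketch, not the study's method.

import numpy as np

def recovery_time_constant(evoked_activity, dt=0.001):
    """Estimate how long neural activity takes to return to baseline after a perturbation.

    `evoked_activity` is a 1D array of baseline-subtracted population activity
    following a stimulus such as an auditory tone, sampled every `dt` seconds.
    Fitting log|activity| against time gives a decay time constant; longer time
    constants indicate a less stable system, the signature that deepens as the
    anesthetic dose increases.
    """
    t = np.arange(len(evoked_activity)) * dt
    amplitude = np.abs(evoked_activity) + 1e-12  # avoid log(0)
    slope, _ = np.polyfit(t, np.log(amplitude), 1)
    return -1.0 / slope if slope < 0 else np.inf  # non-decaying responses have no finite recovery time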
In their new study, the researchers used the same technique to measure how the brain responds not only to propofol but also to two additional anesthesia drugs — ketamine and dexmedetomidine. Animals were given one of the three drugs while their brain activity was analyzed, including their responses to auditory tones.
This study showed that the same destabilization induced by propofol also appears during administration of the other two drugs. This “universal signature” appears even though the three drugs have different molecular mechanisms: propofol binds to GABA receptors, inhibiting neurons that have those receptors; dexmedetomidine blocks the release of norepinephrine; and ketamine blocks NMDA receptors, suppressing neurons with those receptors.
Each of these pathways, the researchers hypothesize, affects the brain’s balance of stability and excitability in a different way, and each leads to an overall destabilization of this balance.
“All three of these drugs appear to do the exact same thing,” Miller says. “In fact, you could look at the destabilization measure we use and you can’t tell which drug is being applied.”
The researchers now plan to further investigate how each of these drugs may give rise to the same patterns of brain destabilization.
“The molecular mechanisms of ketamine and dexmedetomidine are a bit more involved than propofol mechanisms,” Eisen says. “A future direction is to do a meaningful model of what the biophysical effects of those are and see how that could lead to destabilization.”
Monitoring anesthesia
Now that the researchers have shown that three different anesthesia drugs produce similar destabilization patterns in the brain, they believe that measuring those patterns could offer a valuable way to monitor patients during anesthesia. While anesthesia is overall a very safe procedure, it does carry some risks, especially for very young children and for people over 65.
For adults suffering from dementia, anesthesia can make the condition worse, and it can also exacerbate neuropsychiatric disorders such as depression. These risks are higher if patients go into a deeper state of unconsciousness known as burst suppression.
To help reduce those risks, Miller and Brown, who is also an anesthesiologist at MGH, are developing a prototype device that can measure patients’ EEG readings while under anesthesia and adjust their dose accordingly. Currently, doctors monitor patients’ heart rate, blood pressure, and other vital signs during surgery, but these don’t give as accurate a reading of how deeply unconscious the patient is.
“If you can limit people’s exposure to anesthesia, if you give just enough and no more, you can reduce risks across the board,” Miller says.
Working with researchers at Brown University, the MIT team is now planning to run a small clinical trial of their monitoring device with patients undergoing surgery.
The research was funded by the U.S. Office of Naval Research, the National Institute of Mental Health, the Simons Center for the Social Brain, the Freedom Together Foundation, the Picower Institute, the National Science Foundation Computer and Information Science and Engineering Directorate, the Simons Collaboration on the Global Brain, the McGovern Institute, and the National Institutes of Health.
“We the People” depicts inventors, dreamers, and innovators in all 50 states
For the 250th anniversary of the US, Joshua Bennett’s epic poem set celebrates unexpected lives forged across the nation.
Zora Neale Hurston remains one of America’s best-known authors. Charles Henry Turner developed landmark studies about the behavior of bees and spiders. Brian Wilson founded the Beach Boys. George Nissen invented the trampoline. What do they all have in common?
Well, for one thing, they were all innovative Americans — creators and discoverers, producing work no one anticipated. For another, they are all now celebrated as such, in verse, by Joshua Bennett.
That’s right. Bennett — an MIT professor, lauded poet, and literary scholar — is marking the 250th anniversary of the founding of the U.S. with a book-length work of poetry about the country and some of its distinctive figures. In fact, 50 of them: Bennett has written a substantial work featuring remarkable people or inventions from each of the 50 states, meditating on their place in the cultural fabric of the U.S.
“There’s so much to be said for a country where you and I are possible, and the things we do are possible,” Bennett says.
The book, “We (The People of the United States),” is published today by Penguin Books. Bennett is a professor and the Distinguished Chair of the Humanities at MIT.
Bennett’s new work has some prominent Americans in it, but is no gauzy listing of familiar icons. Many of the 50 people in his book overcame hardship, poverty, rejection, or discrimination; some have already been rescued from obscurity, but others have not received proper acclaim. Few of them had a straightforward, simple connection with their times.
“It’s about feeling that you have a life in this country which is undeniably complex, but also has this remarkable beauty to it,” Bennett says of the work. “A beauty you helped to create, and that no one can take away from you.”
The figures that Bennett writes about are sources of fascination, and inspiration, demonstrating the kinds of lives it is possible to invent in the U.S.
“We’re in a moment that calls for compelling, historically grounded stories about what America is, what it has been, and what it can be,” Bennett adds. “Can we build a life-affirming vision for the future and those who will inherit it? I’m trying to. I work on it every day.”
Taking flight
“We (The People of the United States)” is inspired, in part, by Virgil’s “Georgics,” pastoral poems by the great Roman poet. Bennett encountered them while a PhD student in literature at Princeton University.
“The poet Susan Stewart, my professor at Princeton, introduced me to Virgil’s Georgics,” Bennett says. “I eventually started to think: What would it look like for me to cover Virgil?” Adding to his interest in the concept, one of his favorite poets, Gwendolyn Brooks, had spent time recasting Virgil’s ancient epic, “The Aeneid,” for her Pulitzer Prize-winning work, “Annie Allen.” She also translated the original work from Latin as a teenager. Moreover, Bennett’s writing has long engaged with the subject of people working the land in America.
“I decided to start writing all these poems about agriculture,” Bennett says. “But then I thought, this would be interesting as an epic poem about America.” As he launched the project, its focus shifted some more: “I started to think about the book as an ode to invention.”
Soon Bennett had worked out the structure. An opening section of the work is about his own family background, becoming a father, and the process of building a life here in Massachusetts.
“Where does my influence, my aspiration, end and the child begin?” Bennett writes in one poem. That section prefigures further themes in the collection about the domestic environments many of its figures emerged from. For the rest of the work, with one innovator or innovation for each of the 50 states, Bennett adopted a regular writing schedule, producing at least one new poem per week until he was finished.
Hurston, one of several famous authors and artists featured in the book, represents Florida. From Ohio, entomologist Charles Henry Turner was the first Black person to receive a PhD from the University of Chicago, in 1907, before conducting a wide range of studies about the cognition and behavior of spiders and bees, among other things.
George Nissen, for his part, was a University of Iowa gymnast who built the first trampoline in the 1930s in his home state — something Bennett calls a “magical device” that brings to life “the scene in your mind of the leap/and of the leap itself, where you are airborne, illuminated/quickly immortal.” Whether these innovations emerged through rigorous academic exploration or became mass-market goods that produce flights of fancy, Bennett has a keen eye for people who break new ground and fire our own feelings of wonder.
“We actually are all bound up in it together,” Bennett says. “These different figures, from various fields, eras, and lifelong pursuits are in here together precisely because they helped weave the story of this country together. It’s a story that is still unfolding.”
Bennett is straightforward about the struggles many of his subjects faced. His choice to represent North Carolina is the poet George Moses Horton, an enslaved man who not only learned to read and write in the early 1800s — the state later made that illegal for enslaved persons, in 1830 — but made money selling poems to University of North Carolina students. Indeed, Horton’s work was published in the 1820s. Bennett writes that Horton’s public performance of his poetry was “an ancient art revived in the flesh of a prodigy in chains.”
Bennett’s unblinking regard for historical reality is a motif throughout the work. “To me it’s not only about exploring a history that a reader might feel connected to or want to learn more about,” he says. “It’s about honoring those who lived that history, who helped make some of the most beautiful parts of the present possible, through an engagement with the substance of their lives.”
Just my imagination
Many figures in “We (The People of the United States)” are artists, but of many forms. From watching VH1 as a child, Bennett got into the Beach Boys, and he devotes the California entry in the poem to them. Or as Bennett puts it, he was “newly initiated into a sound/I do not understand until I am old enough to be nostalgic/for windswept locales, and singular moments in time/I never lived through.”
Bennett was learning about the Beach Boys while growing up in Yonkers, New York, far from any California beaches. But then, Brian Wilson wasn’t a surfer either — he grew up in an industrial suburb of Los Angeles. Imagination was the coin of the realm for Wilson, something Bennett understood when Beach Boys songs would veer off in unexpected directions.
“I’ve always been drawn to moments of great surprise, or revelation, in the works of art I love,” Bennett says. “Which is part of why I’ve dedicated my life to poetry. You think one thing is happening in a poem, and suddenly that shock comes, that unexpected turn, or volta. Brian Wilson always had a great understanding of that. It works in pop music. Surprise, sometimes, is a shift in register that takes you higher.”
Various poems in the collection have down-to-earth origins. Bennett remembers his father often fixing things in the family home, from toys to the boiler, saying, “Pass me the Phillips-head,” when he needed a screwdriver. Thus Oregon appears in the book: Portland is where the Phillips-head screwdriver was invented.
In conversation, Bennett notes the hopeful disposition of his father, who, after living through Jim Crow and serving in the Vietnam War, worked 10-hour shifts at the U.S. Postal Service to support his family. Even with all the difficulty he experienced in his life, Bennett’s father always encouraged his son to pursue his dreams.
“I’m grateful that I inherited a profound sense of belonging, and dignity, from my parents,” Bennett says. “There was always this feeling that we were part of a much larger story, and that we had a responsibility to tell the truth about the world as we knew it.”
And that’s really what Bennett’s new book is about.
“We can reckon with our history in its fullness and work, tirelessly, toward a world that’s worthy of the most vulnerable among us,” Bennett says. “Like Toni Morrison, we can ‘dream the world as it ought to be.’ And then make it real. That’s my vision.”
Ocean bacteria team up to break down biodegradable plastic
MIT researchers uncovered the roles of bacterial species from the environment as they consume biodegradable plastic.
Biodegradable plastics could help alleviate the plastic waste crisis that is polluting the environment and harming our health. But how long plastics take to degrade, and how environmental bacteria work together to break them down, remain largely unknown.
Understanding how plastics are broken down by microbes could help scientists create more sustainable materials and even new microbial recycling systems that convert plastic waste into useful materials.
Now MIT researchers have taken an important first step toward understanding how bacteria work together to break down plastic. In a new paper, the researchers uncovered the role of individual ocean bacteria in the breakdown of a widely used biodegradable plastic. They also showed the complementary processes microbes use to fully consume the plastic, with one microbe cleaving the plastic into its component chemicals and others consuming each chemical.
The researchers say it’s one of the first studies to illuminate the roles of specific bacterial species in the breakdown of plastic, and it indicates that the speed of plastic degradation can vary widely depending on a few key factors.
“There is a lot of ambiguity about how long these materials actually exist in the environment,” says lead author Marc Foster, a PhD student in the MIT-WHOI Joint Program. “This shows plastic biodegradation is highly dependent on the microbial community where the plastic ends up. It’s also dependent on the plastics — the chemistry of the polymer and how they’re made as a product. It’s important to understand these processes because we’re trying to constrain the environmental lifetime of these materials.”
Joining Foster on the paper are MIT PhD candidate Philip Wasson; former MIT postdoc Andreas Sichert; MIT undergraduate Deborah Madden; Woods Hole Oceanographic Institution researchers Matthew Hayden and Adam Subhas; Chong Becker and Sebastian Gross of the international chemical and plastic company BASF; Otto Cordero, an MIT associate professor of civil and environmental engineering; Darcy McRose, MIT’s Thomas D. and Virginia W. Cabot Career Development Professor; and Desirée Plata, MIT’s School of Engineering Distinguished Climate and Energy Professor. The paper appears in the journal Environmental Science and Technology.
Uncovering collaboration
Scientists hope biodegradable plastic can be used to address the mountains of plastic waste piling up in our oceans and landfills.
“More than half of produced plastic is either sent to landfills or directly released into the environment,” Foster says. “But without knowing the specifics of different degradation processes, we won’t be able to accurately predict the lifetime of these materials and better control that degradation.”
To date, many studies into the biodegradation of plastics have focused on single microbial organisms, but Foster says that’s not representative of how most plastics are broken down in the environment.
“It’s really rare for a single bacterium to carry out the full degradation process because it requires a significant metabolic burden to carry all of the enzymatic functions to depolymerize the polymer and then use those chemical subunits as a carbon and energy source,” Foster says.
Other studies have sought to capture the molecular footprints of groups of bacteria as they degrade plastic, which gives a snapshot of the species involved without uncovering the mechanisms of action.
For this study, the researchers wanted to uncover the roles of specific bacterial species as they fully degraded plastic. They started with a type of biodegradable plastic known as an aromatic aliphatic co-polyester. Such plastic is used in shopping bags and food packaging. It’s also often laid across the soil of farms to prevent weeds and retain moisture.
To begin the study, researchers at BASF, which produces that type of plastic, first placed samples of the product at different depths in the Mediterranean Sea to let bacteria grow as a thin biofilm around the plastic. The company then shipped the samples to researchers at MIT, who isolated as many species of bacteria as possible from the samples. The researchers mixed those isolates and identified 30 bacterial species that continued to grow in abundance on the plastic.
Using carbon dioxide as a measure of plastic degradation, the researchers isolated each bacterium and found one, Pseudomonas pachastrellae, that could depolymerize the plastic, breaking it into its three chemical components: terephthalic acid, sebacic acid, and butanediol.
But that bacterium couldn’t consume all three components on its own. One by one, the researchers exposed each bacterium to each chemical, finding no bacteria that could consume all three, although they did find some species that could consume one or two chemicals on their own.
Finally, the researchers selected five bacterial species based on their complementary breakdown abilities and showed the small group exhibited the same ability to fully degrade the plastic as the 30-member bacteria community.
“I was able to minimize the degradation process to this simplistic set of specific metabolic functions,” Foster says. “And then when I took out one bacterium, the mineralization dropped, which indicated the organism was controlling the degradation of the polymer. Then when I had each one of the bacteria alone in a culture, none of them could reach the same degradation as all five together, indicating there was this complementary function required. It worked much better than I thought it would.”
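The complementarity the team describes can be pictured with a small, purely hypothetical sketch — the species names and abilities below are made up, not the study’s data — that searches for the smallest group of microbes whose combined abilities cover all three breakdown products.

```python
from itertools import combinations

# A minimal sketch of the complementarity logic described above, with made-up
# species names and abilities (not the study's actual data). Each species can
# consume some subset of the three breakdown products; we look for the
# smallest group that together covers all of them.

products = {"terephthalic acid", "sebacic acid", "butanediol"}
can_consume = {            # hypothetical abilities, for illustration only
    "species_A": {"terephthalic acid"},
    "species_B": {"sebacic acid", "butanediol"},
    "species_C": {"butanediol"},
    "species_D": {"terephthalic acid", "sebacic acid"},
}

def smallest_covering_group(abilities, targets):
    """Return the smallest set of species whose combined abilities cover all targets."""
    for size in range(1, len(abilities) + 1):
        for group in combinations(abilities, size):
            covered = set().union(*(abilities[s] for s in group))
            if targets <= covered:
                return group
    return None

print(smallest_covering_group(can_consume, products))
# e.g. ('species_A', 'species_B') — together they consume all three products
```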
The researchers also found the five-member bacteria community couldn’t mineralize a different plastic, showing groups of bacteria may only be able to mineralize specific plastics.
“It highlights that the microbes living where this plastic ends up are going to dictate the plastic’s lifetime,” Foster says.
Faster plastic degradation
Foster notes the bacteria in his study are likely specific to the Mediterranean Sea. The study also only involved bacteria that could survive in his lab environment. Still, Foster says it’s one of the first papers that identifies the roles of bacteria in consuming plastic.
“Most studies wouldn’t be able to identify the specific bacteria that’s controlling each complementary mineralization process,” Foster says. “Here we can say this bacteria controls degradation, these bacteria handle mineralization, and then we show the function of each bacteria and show that together, they can remove the entire polymer.”
Foster says the work is an important first step toward creating microbial systems that are better at breaking down plastic or converting it into something useful. In follow-up work for his PhD, he is exploring what makes successful bacterial pairs for faster plastic consumption and how enzymes dock on plastic particles to initiate and continue degradation.
The work was supported by the MIT Climate and Sustainability Consortium and BASF SE. Partial support was provided by the U.S. National Science Foundation Graduate Research Fellowship Program.
New sensor sniffs out pneumonia on a patient’s breath
The technology could enable fast, point-of-care diagnoses for pneumonia and other lung conditions.
Diagnosing some diseases could be as easy as breathing into a tube. MIT engineers have developed a test to detect disease-related compounds in a patient’s breath. The new test could provide a faster way to diagnose pneumonia and other lung conditions. Rather than sit for a chest X-ray or wait hours for a lab result, a patient may one day take a breath test and get a diagnosis within minutes.
The new breath test is a portable, chip-scale sensor that traps and detects synthetic compounds, or “biomarkers,” of disease, which are initially attached to inhalable nanoparticles. The biomarkers serve as tiny tags that can only be unlocked and detached from the nanoparticle by a very particular key, such as a disease-related enzyme.
The idea is that a person would first breathe in the nanoparticles, similar to inhaling asthma medicine. If the person is healthy, the nanoparticles would eventually circulate out of the body intact. If a disease such as pneumonia is present, however, enzymes produced as a result of the infection would snip off the nanoparticles’ biomarkers. These untethered biomarkers would be exhaled and measured, confirming the presence of the disease.
Until now, detecting such exhaled biomarkers required laboratory-grade instruments that are not available in most doctor’s offices. The MIT team has now shown they can detect exhaled biomarkers of pneumonia at extremely low concentrations using the new portable, chip-scale breath test, which they’ve dubbed “PlasmoSniff.”
They plan to incorporate the new sensor into a handheld instrument that could be used in clinical or at-home settings to quickly diagnose pneumonia and other diseases.
“In practice, we envision that a patient would inhale nanoparticles and, within about 10 minutes, exhale a synthetic biomarker that reports on lung status,” says Aditya Garg, a postdoc in MIT’s Department of Mechanical Engineering. “Our new PlasmoSniff technology would enable detection of these exhaled biomarkers within minutes at the point of care.”
Garg is the first author of a study that details the team’s new sensor design. The study appears online in the journal Nano Letters. MIT co-authors include Marissa Morales, Aashini Shah, Daniel Kim, Ming Lei, Jia Dong, Seleem Badawy, Sahil Patel, Sangeeta Bhatia, and Loza Tadesse.
Tailored tags
PlasmoSniff is a project led by Loza Tadesse, an assistant professor of mechanical engineering at MIT. Tadesse’s group builds diagnostic devices that can be used directly in doctors’ offices and other point-of-care settings. Her work specializes in spectroscopy, using light to identify key fingerprints in a chemical or molecule.
Several years ago, Tadesse teamed up with Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and of Electrical Engineering and Computer Science at MIT. Bhatia’s group focuses in part on developing nanoparticle sensors — tiny particles that can be tagged with a synthetic biomarker. Bhatia can tailor these biomarkers to cleave from their nanoparticle only in the presence of specific “protease” enzymes that are produced by certain diseases.
In work that was reported in 2020, Bhatia’s group demonstrated they could detect cleaved biomarkers of pneumonia from the breath of infected mice. The biomarkers were exhaled at extremely low concentrations, of about 10 parts per billion. Nevertheless, the researchers were able to detect the compounds using mass spectrometry — a technology that is highly sensitive but requires bulky and expensive instrumentation that is not widely available in clinical settings.
“We thought, ‘How can we achieve that same sensitivity, in a way that’s accessible, at the point of need, and in a chip format that can be scalable in terms of cost?’” Tadesse says.
A fingerprint trap
For their new study, Tadesse’s group looked to design a sensitive, portable breath test to quickly detect Bhatia’s biomarkers. Their new design centers on “plasmonics” — the study and manipulation of light and how it interacts with matter at the nanoscale.
The researchers noted that molecules exhibit characteristic vibrational modes, corresponding to the motions of atoms within their chemical bonds. These vibrations can be detected using Raman spectroscopy, an optical technique in which molecules are illuminated with light. A small fraction of the scattered light shifts in energy due to interactions with a molecule’s vibrations. By measuring these energy shifts, researchers can identify molecules based on their distinctive vibrational fingerprints.
To detect Bhatia’s biomarkers, however, they would need to isolate the comparatively few biomarker molecules from the dense cloud of other exhaled molecules. They would also need to boost the biomarkers’ vibrational signal, as the light Raman-scattered by an individual molecule is inherently extremely weak.
“This is a needle-in-a-haystack problem,” Tadesse says. “Our method detects that needle that would otherwise be embedded in the noise.”
The team’s new sensor is designed to trap target biomarkers and boost their vibrational signal. The core of the sensor is made from a thin gold film, above which the researchers suspended a layer of gold nanoparticles. The gold nanoparticles are coated with a porous silica shell, generating a 5-nanometer-wide gap between the gold nanoparticles and the gold film. The silica is modified to strongly bond with molecules of water. The hydrogen in water can in turn bond with the target biomarkers. If any biomarkers pass through the sensor’s gap, they stick to the water molecules like Velcro.
The sensor’s gap is engineered to strongly amplify light due to plasmonic resonance, where electrons in the nearby gold structures collectively oscillate in response to incoming light, concentrating the electromagnetic field into the gap. Biomarkers trapped in these gaps experience a greatly enhanced electromagnetic field, which amplifies their Raman scattering signal. The researchers can then measure the Raman scattered light, and compare the pattern to the biomarker’s known “fingerprint,” to confirm its presence.
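As a loose illustration of the fingerprint-matching step — not the team’s actual analysis pipeline — the sketch below builds synthetic spectra from Gaussian peaks and scores how well a noisy measurement matches a reference fingerprint. The peak positions and widths are invented for the example.

```python
import numpy as np

# Illustrative sketch only: matching a measured Raman spectrum against a known
# biomarker "fingerprint" using a simple similarity score. The spectra here are
# synthetic Gaussian peaks, not real data.

def gaussian_peaks(shifts, centers, widths, heights):
    """Build a synthetic spectrum as a sum of Gaussian peaks over Raman shifts (cm^-1)."""
    spectrum = np.zeros_like(shifts, dtype=float)
    for c, w, h in zip(centers, widths, heights):
        spectrum += h * np.exp(-((shifts - c) ** 2) / (2 * w ** 2))
    return spectrum

def cosine_similarity(a, b):
    """Similarity between two spectra after removing their mean baselines."""
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

shifts = np.linspace(400, 1800, 1400)                  # Raman shift axis, cm^-1
fingerprint = gaussian_peaks(shifts, [620, 1003, 1450], [8, 6, 10], [0.4, 1.0, 0.6])
measured = fingerprint + 0.05 * np.random.default_rng(1).standard_normal(shifts.size)

score = cosine_similarity(measured, fingerprint)
print(f"match score: {score:.2f}")  # a score near 1.0 suggests the fingerprint is present
```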
The team worked with Daniel Kim, a graduate student in Bhatia’s lab, and tested the sensor’s performance on samples of lung fluid that they obtained from healthy mice. They spiked these samples with biomarkers of pneumonia that Bhatia’s group previously designed. They then placed the spiked fluid in a vial and heated it to evaporate the fluid, to simulate exhaled breath. They placed the new sensor on the underside of the vial’s cap and used a Raman spectrometer to measure the scattered light as the fluid vapor passed through the sensor.
Through these experiments, they showed the sensor quickly detected biomarkers of pneumonia at extremely low, clinically relevant concentrations.
“Our next goal is to have a breath collection system, like a mask you can breathe into,” Garg says. “A patient would first use something like an asthma inhaler to inhale the nanoparticles. They could then breathe through the mask sensor for five minutes. We could then integrate a handheld Raman spectrometer to detect whatever biomarker is breathed out, within minutes.”
Breath tests for disease, sometimes referred to as disease breathalyzers, are an emerging technology. Most designs are still in the experimental stage, and take different approaches to detect various conditions such as certain cancers, intestinal infections, and viruses such as Covid-19. The MIT team notes that its design can be used to detect diseases beyond pneumonia, as well as biomarkers that are not related to disease, as long as the biomarker of interest has a known vibrational “fingerprint.”
“It’s not just limited to these biomarkers or even diagnostic applications,” Tadesse says. “It can sniff out industrial chemicals or airborne pollutants as well. If a molecule can form hydrogen bonds with water, we can use its vibrational fingerprint to detect it. It’s a pretty universal platform.”
This work was supported, in part, by funding from Open Philanthropy (now Coefficient Giving). Several characterization and fabrication steps were conducted at MIT.nano.
From Idaho to MIT, on a quest to cut methane emissions
PhD student Audrey Parker studies methane mitigation strategies in dairy farms and coal mines, to reduce emissions of the potent greenhouse gas.
Amid the hum of milking equipment and the shuffle of cow hooves, PhD student Audrey Parker and her collaborators pull a wagon along a dusty path through a dairy barn, measuring an invisible greenhouse gas drifting through the air. Most engineering students wouldn’t expect their graduate research to take them to a dairy farm, but for Parker, this is where some of the most impactful climate solutions are hiding in plain sight.
The scene was part of the civil and environmental engineering student’s PhD work exploring advanced yet practical technologies to mitigate methane emissions. Methane is far more effective at trapping heat in the atmosphere than carbon dioxide. Dairy farms are a major source of the gas, and Parker’s wagon carried sensors to measure methane concentrations.
Now in her fourth year in the lab of Professor Desirée Plata, Parker looks forward to visiting such farms. When she’s not taking measurements, she can look across the rolling fields and think of home.
Parker grew up in Boise, Idaho. Her childhood was filled with backpacking trips, skiing, horseback riding, and otherwise enjoying what her natural surroundings had to offer.
“Growing up, we were always outside,” she says. “I knew how to cast a fly rod before I knew how to ride a bike.”
That experience motivated Parker to pursue studies related to preserving the environment she loved. She attended Boise State University as an undergraduate, where she studied sustainable materials development under the mentorship of Assistant Dean Paul Davis. In the summer before her senior year, she was accepted to the MIT Summer Research Program (MSRP), which equips students for graduate school by bringing them to MIT to conduct cutting-edge research. That’s where she began working with Plata, MIT’s Distinguished Climate and Energy Professor.
“They do a great job bringing in people of different backgrounds,” Parker says. “It wasn’t until I started working with Desirée that I started applying materials science as a tool to reduce greenhouse gas emissions. That was a profound insight.”
Parker graduated from Boise State University as a Top Ten Scholar, the highest academic honor granted to graduating seniors, before driving across the country to begin her studies at MIT. She decided to devote her PhD to exploring methane mitigation strategies, building on her experience from MSRP.
Her focus is on methane emissions from two sources: air being vented from coal mines, and dairy farms. Those two areas alone account for a large portion of human-driven methane emissions. Both sources are dilute compared to the average oil or gas well, which makes the methane challenging to capture and convert into less environmentally harmful molecules.
Parker also wanted to work with community members in the field during her PhD to ensure whatever technical solutions she developed are practical enough to implement at scale.
“Desirée’s approach is to make sure industry is aware of affordable and sustainable ways to remove methane from their operations, while also incorporating the nuanced expertise stakeholders offer,” Parker says. “I appreciate that she is focused on not just doing work for the chapter of a PhD thesis, but also making our work lead to real-world change.”
Parker’s research explores both quantifying methane at emission sources and designing technologies that could be used to convert methane into carbon dioxide, a molecule with significantly less climate warming potential.
“Methane naturally converts into carbon dioxide over the course of about 12 years in the atmosphere,” Parker explains. “The technology we work on simply speeds up this natural process to achieve near-term climate benefits.”
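A rough back-of-the-envelope sketch of that 12-year figure, assuming simple exponential decay with the atmospheric lifetime Parker cites, looks like this:

```python
import math

# Back-of-the-envelope illustration of the ~12-year figure quoted above:
# with an atmospheric lifetime of about 12 years, the fraction of a methane
# pulse remaining after t years is roughly exp(-t / 12).

LIFETIME_YEARS = 12.0

def fraction_remaining(t_years, lifetime=LIFETIME_YEARS):
    return math.exp(-t_years / lifetime)

for t in (1, 12, 24, 50):
    print(f"after {t:>2} years: ~{fraction_remaining(t) * 100:.0f}% of the methane remains")
```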
The main technology Parker studies is a catalyst made from zeolites, a class of abundant, inexpensive minerals with complex, honeycomb-like internal structures. Parker dopes the zeolites with copper and explores ways to apply external heat to facilitate complete methane conversion.
Parker and her collaborators assess the durability of the material and its performance under different conditions. Recognizing that real-world deployment environments can often be difficult to replicate in the lab, they test catalyst performance on operating dairy farms. In a 2025 paper, she analyzed the use of thermal energy to sustain methane combustion in catalyst materials, detailing when the approach actually brings net climate benefits.
“If your methane concentrations are low and you’re having to provide so much energy into your system, you could become climate-harmful, but there’s also a context where it’s beneficial,” Parker explains. “Understanding where that trade-off occurs is critical to making sure your mitigation technologies are having the benefits you’re anticipating.”
That kind of systems-level thinking is necessary to understand the long-term impacts of interconnected climate systems.
“It lays a framework that other people can use for their mitigation technologies,” Parker says. “There are trade-offs with every technology, and being transparent about that is important. I think as academics it’s easy to get tunnel vision based on our research. There’s such limited funding for mitigation technologies overall and so making sure those few funding dollars are allocated appropriately is critical for achieving our climate goals.”
Some of Parker’s research findings have informed the design of a pilot-scale methane mitigation system in a coal mine, although she hasn’t gotten a chance to visit it just yet.
Outside of her research, Parker co-chairs the MIT Congressional Visit Days, a program run by the Science Policy Initiative that sends MIT students to Washington to meet with lawmakers and advocate for science-based policies.
“On-the-Hill advocacy teaches you about the policy landscape in unparalleled ways,” Parker says. “Those conversations you have with lawmakers can drive transformational change to bridge the gap between science and policy. It is our job as scientists to communicate our findings clearly so policymakers can design regulations that enable effective solutions.”
This spring, Parker is also leading a workshop for the MIT Climate and Sustainability Consortium around financing the voluntary carbon market. Here, she plans to leverage industry insights to catalyze private capital at the scale needed to meet our climate goals.
Parker also still gets plenty of outdoor time, hiking outside Boston and skiing a bit, though she says the New England ski mountains don’t compare to those out west.
Parker, who expects to complete her PhD next year, says it’s gratifying to be able to devote her research to protecting the environment she loves so much.
“For me it’s about preserving the world I grew up in,” Parker says. “Especially in Idaho, where communities are experiencing more frequent wildfires and more intense droughts. As a child, the natural world provided so much wonder. Today, that same sense of wonder is what drives me to protect it.”
Financial Times ranks MIT Sloan No. 1 in 2026 Global MBA Ranking
The school tops the list for the first time as MIT Sloan is recognized for preparing students to navigate today’s tech-driven workplaces.
The Financial Times has placed the MIT Sloan School of Management at the top of its recently released 2026 Global MBA Ranking. It is the first time the school has held the No. 1 spot on the list.
In its announcement of the rankings, the publication noted MIT’s school of management tops the list “at a time of sharpening focus from students on the importance of technology, including artificial intelligence, as they prepare for disruptions in the workplace.”
Global education editor Andrew Jack said in the Financial Times News Briefing podcast that MIT is “very much at the center of the tech revolution that we are seeing.” He added, “there’s no question that we’re talking more and more about artificial intelligence and expertise around some of the technical skills related and notably how you might apply AI in the workplace. That certainly reflects both its technical and engineering computer science skills historically. And [MIT Sloan] is doing a lot with those other departments in the university. So I think that says something very much about how the wider job market and the aspirations of students are evolving.”
“MIT Sloan operates at the intersection of management and technology,” says Richard Locke, the John C Head III Dean of the MIT Sloan School of Management. “Our students and alumni are employing artificial intelligence to solve complex problems in the world and across industries. At MIT Sloan, we focus on doing that work in a way that centers human capabilities, ensuring artificial intelligence extends what humans can do to improve organizations and the world.”
To determine its rankings, the Financial Times considers 21 criteria. Eight of those — accounting for 56 percent of the ranking’s weight — are determined by surveying alumni three years after they have completed their MBA program. School data account for 34 percent of the ranking. The remaining 10 percent measures how often full-time faculty publish in top journals.
MIT Sloan ranked fourth for its alumni network, which measures how effectively alumni support one another through career advice, internships, job opportunities, and recruiting efforts.
“This ranking underscores the strength of our global alumni community,” says Kathy Hawkes, senior associate dean of external engagement. “'Sloanies Helping Sloanies' isn’t just a phrase — it’s a lived experience. Our 31,000 alumni actively open doors, share expertise, and invest in each other’s success.”
Scientists discover genetics behind leaky brain blood vessels in Rett syndrome
By showing the problem derives from genetic mutations that lead to overexpression of a microRNA, MIT researchers’ study points to potential treatment.
MIT researchers have discovered that two common genetic mutations that cause Rett syndrome each set off a molecular chain of events that compromises the structural integrity of developing brain blood vessels, making them leaky. The study traces the problem to overexpression of a particular microRNA (miRNA-126-3p), and shows that tamping down the miRNA’s levels helps to rescue the vascular defect.
Rett syndrome is a severe developmental disorder affecting both the brain and body. It is caused by various mutations in the widely expressed MECP2 gene, but the first symptoms don’t become apparent until affected children (mostly girls) reach 2-3 years of age. Because that’s a critical time in development for the brain’s blood vessels, neuroscientists in The Picower Institute for Learning and Memory at MIT embarked on a study to model how two common but distinct MeCP2 mutations may affect vascular development and contribute to the disease’s profound neurological pathology.
To conduct the research published recently in Molecular Psychiatry, lead author Tatsuya Osaki and senior author Mriganka Sur developed advanced human tissue cultures to model vessel development, with and without the MeCP2 mutations. The cultures not only enabled them to model and closely observe how the mutations affected the vessels, but also allowed them to molecularly dissect the problems they observed and then to test an intervention that helped.
“A role for microRNAs in Rett syndrome has been shown, but now demonstrating that miRNA-126-3p is actually downstream of MeCP2 and directly implicated in the endothelial cell dysfunction is an important piece of the Rett syndrome puzzle,” says Sur, the Newton Professor of Neuroscience in the Picower Institute and MIT’s Department of Brain and Cognitive Sciences.
Building vessels and spotting leaks
Building on years of tissue engineering experience, including time as a postdoc in the lab of co-author and MIT mechanical engineering and biological engineering Professor Roger D. Kamm, Osaki built “3-dimensional microvascular networks” using induced pluripotent stem cells (iPS cells) derived from cells donated by patients with Rett syndrome. The donated cells were reprogrammed into stem cells and then coaxed into becoming endothelial cells (the backbone of blood vessels). Embedded in a gel and mixed with fibroblast cells, the endothelial cells self-assembled into networks of tubes, which Osaki then hooked up to microfluidics to provide circulation.
One set of cultures harbored the mutation R306C; Osaki created a control microvasculature that was genetically identical except that it lacked the mutation. Another set of cultures had the R168X mutation, which Osaki again paired with a CRISPR-edited control culture identical except for the mutation.
The research team chose these two mutations because they are each relatively common but affect the MeCP2 gene differently, Sur says. The finding that each of these distinct Rett-causing mutations ultimately led to upregulating miRNA-126-3p and undermining blood vessel integrity suggests that vascular problems are indeed a central feature of the disease.
“There is something common across these mutations,” Sur says.
In particular, lab tests showed that the vessels harboring either mutation showed reduced expression of a protein called ZO-1, which is critical for ensuring that the junctions among endothelial cells in blood vessels form a tight seal (like the grout in a tile floor). ZO-1 also didn’t localize to those junctions as well. Sure enough, further tests showed that the Rett-mutation vessel cultures were relatively leaky compared to the controls.
Similar deficiencies were evident in another cell culture the team created, in which they added astrocyte cells to even more closely simulate the blood-brain barrier (BBB), which tightly regulates what can go in or out of blood vessels and into the brain. BBB problems are widely suspected of contributing to neurodegenerative diseases such as Alzheimer’s, Huntington’s, ALS, and frontotemporal dementia.
To gain some insight into how the vascular problems might undermine neural function in Rett syndrome, the researchers exposed neurons to medium from their Rett vasculature cultures. Those nerve cells showed reduced electrical activity, a possible sign that secretions from the Rett endothelial cells disrupted the neurons.
Catching a culprit
Generally speaking, the role of MeCP2 is to repress the expression of other genes. The scientists’ expectation, therefore, was that when MeCP2 is compromised by mutations the result would be overexpression of many genes. Yet ZO-1 was downregulated. Something had to account for that and miRNAs were a suspect, Osaki says, because they function as regulators of gene expression.
“That’s why we hypothesized that we should have some mediator between the MeCP2 mutation and ZO-1 downregulation and the BBB permeability increase,” Osaki says. “We focused on the microRNAs.”
Indeed, by profiling miRNAs in the Rett cultures and the controls, the scientists found that miRNA-126-3p was overexpressed. And by sequencing RNA, the team identified more molecular pathways needed to support vascular integrity that were dysregulated in the Rett cultures.
While the sequencing and profile associated miRNA-126-3p upregulation with the altered molecular chain of events, Osaki and Sur sought more definitive proof. To obtain it, they treated the Rett-mutation cultures with an “antisense” — a molecule that reduces miRNA-126-3p levels. Doing that resulted in an increase in ZO-1 expression and a partial restoration of endothelial cell barrier function — meaning less leakiness — in the vessel cultures. Knocking down the miRNA’s expression also restored the molecular pathways the scientists were tracking to more healthy states.
It turns out that there is a drug that inhibits miR-126, called miRisten, which is undergoing clinical testing for leukemia. Osaki and Sur say they plan to administer it to mice modeling Rett syndrome to see whether it helps.
In addition to Osaki, Sur, and Kamm, the paper’s co-authors are Zhengpeng Wan, Koji Haratani, Ylliah Jin, Marco Campisi, and David Barbie.
Funding for the study came from sources including the National Institutes of Health, a MURI grant, The Freedom Together Foundation, and the Simons Center for the Social Brain.
Next-generation geothermal energy: Promise, progress, and challenges
Geothermal innovators at MIT and elsewhere are seeking deeper and hotter rocks to generate electricity at scale.
Geothermal energy, a clean, continuous energy source accessible in many locations, has been slow to catch on. Nearly 2,000 years ago, the Romans made extensive use of geothermal energy — heat from the Earth — including at the spa complex at present-day Bath, England. Electricity was first produced from geothermal sources in the early 1900s in Italy. In the United States, the Geysers geothermal field in California began generating electricity at scale in 1960, and routinely produces more than 725 megawatts of baseload power today.
According to the International Energy Agency (IEA), geothermal energy still supplies less than 1 percent of global electricity demand, although countries like Kenya (more than 40 percent of electricity generation) and Iceland (nearly 30 percent of electricity and 90 percent of the heating) have seen widespread adoption.
In recent years, technological advances, an influx of private capital, and shifting energy and environmental policies have driven renewed interest in expanding development of geothermal energy. If project costs continue to decline, the IEA predicts that geothermal energy could meet 15 percent of the growth in global electricity demand between 2024 and 2050. Many countries, including the United States, Indonesia, New Zealand, and Turkey, are prioritizing an expansion of geothermal energy as part of their broader energy strategies.
Achieving large-scale electricity generation from geothermal sources will depend on a significant expansion of so-called next-generation geothermal. This refers to tapping heat from source rocks at temperatures of 100 degrees Celsius to more than 400 C, often at depths of several kilometers below the surface. Last month, U.S. Congressional Rep. Jake Auchincloss (D-MA) and Rep. Mark Amodei (R-NV) introduced bipartisan legislation to promote research, testing, and development of one type of next-generation geothermal energy known as superhot rock.
Geothermal energy at MIT
Through the influential 2006 “The Future of Geothermal Energy” report, led by former MIT professor Jeff Tester, MIT and the predecessor of the MIT Energy Initiative (MITEI) played an important role in national geothermal strategy two decades ago. In 2008, researchers at the Plasma Science and Fusion Center (PSFC) invented millimeter-wave drilling with support from one of the first MITEI seed innovation grants. The technology, which could be particularly useful for geothermal installations in superhot and deep rock, is being commercialized by MIT spinout Quaise Energy.
MITEI is sponsoring next-generation geothermal projects through its Future Energy Systems Center. A project led by MITEI Research Scientist Pablo Duenas-Martinez focuses on the techno-economics of electricity generation from a geothermal plant co-located with a data center, a timely topic given the proliferation of data center power purchase agreements for electricity generated by geothermal energy. MITEI’s March 4 Spring Symposium focused on next-generation geothermal energy for the generation of firm power, and many of the leading exploration, drilling, reservoir development, and advanced technology companies working in this area sent panelists and speakers. On March 5, MITEI collaborated with the Clean Air Task Force (CATF) to co-host the GeoTech Summit, which explored accelerating technology development for and investment in next-generation geothermal.
To prepare for the recent symposium, MITEI organized a geothermal bootcamp during MIT’s Independent Activities Period (IAP) that introduced more than 40 members of the MIT community to geothermal basics, key technologies, and related MIT research. Carolyn Ruppel, MITEI’s deputy director of science and technology and the organizer of the IAP bootcamp and Spring Symposium, says, “MITEI’s member companies, which represent leading voices on energy, power generation, infrastructure, heavy industry, and digital technology, are increasingly approaching us about their interest in next-generation geothermal. There is also good momentum building across MIT, ranging from projects at the Earth Resources Laboratory to the millimeter-wave testbed being developed by PSFC and its MIT collaborators, individual projects in academic departments, and of course the work MITEI has been funding.”
Geothermal basics
Temperatures a few tens of meters below the ground are typically stable year-round. In some locations, these temperatures are warmer than the surface in winter and cooler in summer, making it possible to use geothermal heat pumps to moderate temperatures in buildings throughout the year. Overlooking the Charles River, Boston University’s 19-story Center for Computing and Data Science meets an estimated 90 percent of its heating and cooling needs using this kind of geothermal system. At the scale of large institutions or whole towns, thermal networks, district heating, and other approaches can efficiently supply heat from shallow geothermal sources without producing greenhouse gas emissions.
Tapping hotter and usually deeper geothermal sources could generate large amounts of electricity for decades at a single site. Next-generation geothermal is the term applied to these higher-temperature systems developed using enhanced, advanced, and superhot technologies. Enhanced geothermal refers to circulating fluids through engineered fracture systems in deep, dry rock with relatively low native permeability. Advanced geothermal adopts a closed loop approach, in which a working fluid is heated by circulating it through pipes embedded in the subsurface. Superhot geothermal, which is in its infancy, will likely use enhanced geothermal technology to circulate supercritical water through rock at almost 400 C.
Next-generation geothermal
Drill deep enough and higher-temperature resources are nearly ubiquitous beneath the continents, but early-stage development must focus on the most promising sites, where the methods and technologies to routinely reach these hotter rocks can be tested and refined. Locations like Iceland and the southwestern U.S. state of Nevada, where tectonic plates are separating or the Earth’s outer layer is thinning, have hotter temperatures closer to the surface than areas like the northeastern United States, where the Earth’s crust is old, thick, and cooler. Even in the southwestern United States, though, reaching the high temperatures required for generating electricity via geothermal systems will require routinely drilling to depths of greater than 4 kilometers in crystalline rock. This is significantly more challenging than drilling in the sedimentary basins that host most of the world’s oil and gas reserves.
A location suitable for a next-generation geothermal installation requires not only heat, but also a fluid (usually water) to carry that heat. Water circulated through the rock formation to extract heat can be present naturally or brought from elsewhere and injected into the reservoir. This type of system also requires connected permeability, such as an engineered fracture network oriented to prevent significant fluid losses and to channel fluid toward the extraction well. Closed-loop (advanced) systems replace the freely circulating water with a working fluid that has favorable thermal characteristics and that is confined in piping.
Various geophysical methods are used to find sites with sufficient heat within a few kilometers of the surface, a prerequisite for their development as next-generation geothermal installations. Apart from direct measurements of temperatures in test boreholes, electrical resistivity and magnetotelluric surveys are among the most useful for inferring subsurface temperature regimes. Both techniques infer the electrical conductivity structure beneath the ground, permitting the identification of relatively warmer and more permeable rocks.
Drilling is often the most time-consuming and expensive part of preparing a site for a geothermal plant. This is particularly true for next-generation geothermal, where the targets can be deep, or the system design may require large-scale horizontal drilling. Over the past few years, numerous innovations have increased drilling rates and attainable depths and temperatures and also lowered costs. Nonetheless, even with high-quality geophysical surveys, “you may spend $10 million on an exploratory well and find no heat,” says Andrew Inglis, the geothermal channel venture builder at MIT Proto Ventures.
Superhot geothermal, a next-generation geothermal approach that is advancing rapidly, presents special challenges. The metal drilling tools, the rocks in the formation, and circulating fluids all behave differently at temperatures of several hundred degrees, and standard practices, materials, and sensors must be significantly modified to tolerate the tough conditions. Once temperatures exceed 374 C in a borehole, even one only about a kilometer deep, water can reach a supercritical state. This presents substantial advantages for extracting heat from the formation, but introduces the specter of rapid metal corrosion and precipitation of salts and silica that can quickly foul a borehole. Researchers are investigating substituting supercritical carbon dioxide for water as the working fluid for superhot geothermal.
MIT innovations advancing next-generation geothermal
The millimeter-wave drilling technology invented at PSFC and being commercialized by Quaise Energy is the highest-profile next-generation geothermal innovation to emerge from MIT so far. Millimeter-wave technology uses microwave energy to vaporize rock and could prove to be several times faster than conventional drilling. PSFC and a multidisciplinary MIT team are devising a dedicated laboratory to study how millimeter-wave drilling interacts with crystalline rock at realistic pressure and temperature conditions, and to test improvements to the existing technology. Steve Wukitch, interim director and principal research scientist at PSFC, notes that “the facility we are building at MIT will allow us to test samples 500 times larger than is currently possible. This is an important step for investigating technologies that could unlock superhot geothermal energy."
MIT Proto Ventures, which focuses on creating startups based on technology invented at MIT, currently hosts a dedicated geothermal energy channel led by Inglis. Since arriving at MIT in late 2024, Inglis has identified inventions and research that could advance next-generation geothermal from disciplines as disparate as mechanical and materials engineering, earth sciences, and chemistry. Examples of technologies originating with MIT researchers include sensors that measure micro-cracking in high-temperature rock, advanced metal alloys that could handle superhot fluids at a fraction of the cost of titanium, and anti-fouling coatings to protect pipes from the caustic geofluids common in hot, deep systems.
MITEI Spring Symposium
At the recent MITEI Spring Symposium, these MIT innovators introduced their technology to MITEI member companies in a session led by Inglis. Wukitch, who moderated a panel on advanced drilling, described the planned millimeter-wave testbed, and Duenas-Martinez led a panel on power generation and storage. Terra Rogers, director for superhot rock geothermal energy at the CATF and the organizer of the joint CATF-MITEI GeoTech Summit on March 5, led a discussion of international and U.S. policies and the regulatory environment for expansion of next-generation geothermal.
Poster presenters included MIT graduate students and researchers, MIT’s D-Lab, and the Geo@MIT geothermal-focused MIT student group, which was recognized with a 2024 bonus award by the U.S. Department of Energy’s Geothermal Technologies Office in the nationwide EnergyTech University Prize competition.
How the brain handles the “cocktail party problem”
Using a computational model, neuroscientists showed how the brain can selectively focus attention on one voice among others in a noisy environment.
MIT neuroscientists have figured out how the brain is able to focus on a single voice among a cacophony of many voices, shedding light on a longstanding neuroscientific phenomenon known as the cocktail party problem.
This attentional focus becomes necessary when you’re in any crowded environment, such as a cocktail party, with many conversations going on at once. Somehow, your brain is able to follow the voice of the person you’re talking to, despite all the other voices that you’re hearing in the background.
Using a computational model of the auditory system, the MIT team found that amplifying the activity of the neural processing units that respond to features of a target voice, such as its pitch, allows that voice to be boosted to the forefront of attention.
“That simple motif is enough to cause much of the phenotype of human auditory attention to emerge, and the model ends up reproducing a very wide range of human attentional behaviors for sound,” says Josh McDermott, a professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines, and the senior author of the study.
The findings are consistent with previous studies showing that when people or animals focus on a specific auditory input, neurons in the auditory cortex that respond to features of the target stimulus amplify their activity. This is the first study to show that this extra boost is enough to explain how the brain solves the cocktail party problem.
Ian Griffith, a graduate student in the Harvard Program in Speech and Hearing Biosciences and Technology, who is advised by McDermott, is the lead author of the paper. MIT graduate student R. Preston Hess is also an author of the paper, which appears today in Nature Human Behaviour.
Modeling attention
Neuroscientists have been studying the phenomenon of selective attention for decades. Many studies in people and animals have shown that when focusing on a particular stimulus like the sound of someone’s voice, neurons that are tuned to features of that voice — for example, high pitch — amplify their activity.
When this amplification occurs, neurons’ firing rates are scaled upward, as though multiplied by a number greater than one. It has been proposed that these “multiplicative gains” allow the brain to focus its attention on certain stimuli. Neurons that aren’t tuned to the target feature exhibit a corresponding reduction in activity.
“The responses of neurons tuned to features that are in the target of attention get scaled up,” Griffith says. “Those effects have been known for a very long time, but what’s been unclear is whether that effect is sufficient to explain what happens when you’re trying to pay attention to a voice or selectively attend to one object.”
This question has remained unanswered because computational models of perception haven’t been able to perform attentional tasks such as picking one voice out of many. Such models can readily perform auditory tasks when there is an unambiguous target sound to identify, but they haven’t been able to perform those tasks when other stimuli are competing for their attention.
“None of our models has had the ability that humans have, to be cued to a particular object or a particular sound and then to base their response on that object or that sound. That’s been a real limitation,” McDermott says.
In this study, the MIT team wanted to see if they could train models to perform those types of tasks by enabling the model to produce neuronal activity boosts like those seen in the human brain.
To do that, they began with a neural network that they and other researchers have used to model audition, and then modified the model to allow each of its stages to implement multiplicative gains. Under this architecture, the activation of processing units within the model can be boosted up or down depending on the specific features they represent, such as pitch.
To train the model, on each trial the researchers first fed it a “cue”: an audio clip of the voice that they wanted the model to pay attention to. The unit activations produced by the cue then determined the multiplicative gains that were applied when the model heard a subsequent stimulus.
“Imagine the cue is an excerpt of a voice that has a low pitch. Then, the units in the model that represent low pitch would get multiplied by a large gain, whereas the units that represent high pitch would get attenuated,” Griffith says.
Then, the model was given clips featuring a mix of voices, including the target voice, and asked to identify the second word said by the target voice. The model activations to this mixture were multiplied by the gains that resulted from the previous cue stimulus. This was expected to cause the target voice to be “amplified” within the model, but it was not clear whether this effect would be enough to yield human-like attentional behavior.
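To make the mechanism concrete, here is a minimal sketch in Python of how cue-derived multiplicative gains might be applied to a model's feature-unit activations. It illustrates the general idea rather than the authors' architecture or code; the number of units, the normalization, and the "sharpness" parameter are all assumptions.

```python
import numpy as np

def cue_to_gains(cue_activations, sharpness=2.0):
    """Turn the activations evoked by a cue (the target voice heard alone)
    into per-unit multiplicative gains: units that responded strongly to the
    cue get gains above 1, weakly responding units get gains below 1.
    The normalization and 'sharpness' exponent are hypothetical choices."""
    relative = cue_activations / (cue_activations.mean() + 1e-8)
    return relative ** sharpness

def attend(mixture_activations, gains):
    """Scale the activations evoked by a mixture of voices by the cue-derived
    gains, boosting features shared with the target voice."""
    return mixture_activations * gains

# Toy example with 8 feature units (think of them as coarse pitch channels).
rng = np.random.default_rng(0)
cue_response = rng.random(8)      # response to the cue clip
mixture_response = rng.random(8)  # response to the subsequent mixture of voices

gains = cue_to_gains(cue_response)
attended = attend(mixture_response, gains)
print(np.round(gains, 2))
print(np.round(attended, 2))
```

In the actual study, gains like these were applied at every stage of a trained auditory network, which was then asked to report a word spoken by the cued voice.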
The researchers found that under a variety of conditions, the model performed very similarly to humans, and it tended to make errors similar to those that humans make. For example, like humans, it sometimes made mistakes when trying to focus on one of two male voices or one of two female voices, which are more likely to have similar pitches.
“We did experiments measuring how well people can select voices across a pretty wide range of conditions, and the model reproduces the pattern of behavior pretty well,” Griffith says.
Effects of location
Previous research has shown that in addition to pitch, spatial location is a key factor that helps people focus on a particular voice or sound. The MIT team found that the model also learned to use spatial location for attentional selection, performing better when the target voice was at a different location from distractor voices.
The researchers then used the model to discover new properties of human spatial attention. Using their computational model, the researchers were able to test all possible combinations of target locations and distractor locations, an undertaking that would be hugely time-consuming with human subjects.
“You can use the model as a way to screen large numbers of conditions to look for interesting patterns, and then once you find something interesting, you can go and do the experiment in humans,” McDermott says.
These experiments revealed that the model was much better at correctly selecting the target voice when the target and distractor were at different locations in the horizontal plane. When the sounds were instead separated in the vertical plane, this task became much more difficult. When the researchers ran a similar experiment with human subjects, they observed the same result.
“That was just one example where we were able to use the model as an engine for discovery, which I think is an exciting application for this kind of model,” McDermott says.
Another application the researchers are pursuing is using this kind of model to simulate listening through a cochlear implant. These studies, they hope, could lead to improvements in cochlear implants that could help people with such implants focus their attention more successfully in noisy environments.
The research was funded by the National Institutes of Health.
3 Questions: Fortifying our planetary defenses
MIT astronomers are developing a new way to detect, monitor, and mitigate the threats posed by smaller asteroids to our critical space infrastructure.
When people think of asteroids, they tend to picture rare, civilization-ending impacts like those depicted in movies such as “Armageddon.” In reality, the asteroids most likely to affect modern society are much smaller. While kilometer-scale impacts occur only every tens of millions of years, decameter-scale (building-sized) objects strike Earth far more frequently: roughly every couple of decades. As astronomers develop new ways to detect and track these smaller asteroids, planetary defense becomes increasingly relevant for protecting the space-based infrastructure that underpins modern life, from GPS navigation to global communications.
The good news for us earthlings is that a team of MIT researchers is on this space-case. Associate Professor Julien de Wit, Research Scientist Artem Burdanov, and their colleagues recently developed a new asteroid-detection method that could be used to track potential asteroid impactors and help protect our planet. They have now applied this new technique to the James Webb Space Telescope (JWST), demonstrating that JWST can be used to detect and characterize decameter-scale asteroids all the way out to the main belt, a crucial step in fortifying our planetary safety and security. De Wit and his colleagues recently co-led, with Andrew Rivkin PhD ’91, new observations of an asteroid called 2024 YR4, which made headlines last year when it was first discovered. They were able to determine that the asteroid will not collide with the Moon, a collision that could have had impacts on Earth’s critical satellite systems.
De Wit, Burdanov, Assistant Professor Richard Teague, and Research Scientist Saverio Cambioni spoke to MIT News about the importance of planetary defense and how MIT astronomers are helping to lead the charge to ensure our planet’s safety.
Q: What is planetary defense and how is the field changing?
Burdanov: Planetary defense is a field of science and engineering that’s focused on preventing asteroids and comets from hitting the Earth. While traditionally the field has been focused on much larger asteroids, thanks to new observational capabilities the field is growing to include monitoring much smaller asteroids that could also have an impact.
De Wit: When people think about asteroids they tend to think of impacts along the lines of these rare, civilization-ending “dinosaur killer” asteroids — objects that are scientifically fascinating but, happily, statistically unlikely on human timescales. But as soon as you move to smaller asteroids, there are so many of them that you’re looking at impacts happening every few decades or less. That becomes much more relevant on human timescales.
Now that our society has become increasingly reliant on space-based infrastructure for communication, navigation technologies like GPS and satellite-based security systems, we can be affected by different populations of smaller asteroids. These smaller asteroids will probably lead to zero direct human casualties but would have very different consequences on our space infrastructure. At the same time, because they are smaller, they require different technologies to monitor and understand them, both for the detection and for the characterization. At MIT, we are working to redefine planetary defense in a way that is far more pertinent, personable, and practical — focusing on these much smaller asteroids that could have real consequences. In other words, planetary defense is no longer just about avoiding extinction-level events. It is about protecting the systems we depend on in the near term.
Q: Why are observations with telescopes like the James Webb Space Telescope (JWST) so important to keeping our planet safe?
Teague: We’re entering a time now where we have these large-scale sky surveys that are going to be producing an incredible amount of data. We’re trying to develop the framework here at MIT where we can sift through that data as quickly and efficiently as possible, and then use the resources that we have available, such as the optical and radio observatories that we run like the MIT Haystack and Wallace Observatories, to follow up on those potential threats as quickly as possible and determine whether they could be problematic.
We’ve been doing trial observations to try and piece together how fast we can do this. The challenging thing is that the smaller objects that we’ve been talking about, the decameter ones, they’re really hard to detect from the ground. They’re just so small, and so that’s why we really need to use space-based facilities like JWST to help keep our planet safe. JWST is just incomparable, really, for detecting these very small, faint objects. A lot of our work at the moment at MIT is trying to understand how we build that entire pipeline — from detection to risk assessment to mitigation — under one roof to make it as efficient as possible. And I think this is a really MIT-type of problem to solve. There’s not many places that have the same range of experts in astronomy and engineering and technology to really tackle this properly. It’s really exciting that MIT hosts all these sorts of experts that we’re bringing together to solve this problem and keep our planet safer.
Cambioni: There is going to be what I like to call an asteroid revolution coming up because in addition to JWST’s observational capabilities, there is a new observatory in Chile called the Vera Rubin Observatory that could increase the detection of known small objects in space by a factor of 10. The most important thing to keep in mind, though, is that this observatory will detect the objects but may lose a lot of them. This is where a part of our work is coming in, to basically follow that object and map it as soon as possible. Additionally, Vera Rubin only looks at the reflected light, and it doesn’t get a precise estimate of an asteroid’s size. This gap between detection and characterization is a fundamental problem of asteroid science, between how many objects we discover and how fast we can characterize them. At MIT, we are using our in-house capabilities to help characterize these objects. That includes the MIT Wallace Observatory and the MIT Haystack Observatory.
Q: What role can MIT play in this new era of planetary defense?
De Wit: The reality is that, given the occurrence rate of these smaller asteroids and the new observational capabilities now coming online — from the Rubin Observatory to space-based facilities like JWST — we expect that within the next decade we will identify a handful of decameter-scale objects whose trajectories place them on course to impact the Earth-Moon system within this century. At that point, society will face a very practical question: whether, and how, to respond. Because these are much smaller objects than the dinosaur-killing asteroids, the types of mitigation strategies that we may envision are different. This is also where I think MIT might have an important role to play in the development, design, and potentially even construction of cost-effective, rapid-response asteroid-mitigation strategies. To help organize that effort, we have begun bringing together researchers across the Institute through the Planetary Defense at MIT project, working closely with colleagues on the engineering side.
Teague: What I’m particularly excited about is the way we’ve managed to engage students at MIT in this research as well. We’ve really focused on the impactful research and the way we’re bridging departments and labs within MIT, and this has been a fantastic way to engage students with practical astronomy and research. Saverio has run an IAP [Independent Activities Period] course, and we’re also running a student observing lab with the Wallace Observatory, where we hire a cohort of students every semester, and they’re taught how to use these observatories remotely. They take the data, do the analysis, and this semester, we've got on the order of 10 undergraduate students that are going to be working throughout the semester to take these observations and help us build this observation pipeline.
It's great that here at MIT we’re not only pushing the forefront of the research, but we’re also training the next generation of astronomers that is going to come in and carry this project through and into the future.
Two outstanding MIT educators have been named MacVicar Faculty Fellows: professor of mechanical engineering Amos Winter and professor of electrical engineering and computer science Nickolai Zeldovich.
For more than 30 years, the MacVicar Faculty Fellows Program has recognized exemplary and sustained contributions to undergraduate education at MIT. The program is named in honor of Margaret MacVicar, MIT’s first dean for undergraduate education and founder of the Undergraduate Research Opportunities Program (UROP). Fellows are chosen through an annual and highly competitive nomination process. The Registrar’s Office coordinates and administers the award on behalf of the Division of Graduate and Undergraduate Education. Nominations are reviewed by an advisory committee, and the provost selects the fellows.
Amos Winter: Bringing excitement to the classroom
Amos Winter is the Germeshausen Professor in the Department of Mechanical Engineering (MechE). He joined the faculty in 2012 and is best known for teaching class 2.007 (Design and Manufacturing I).
A hallmark of Winter’s pedagogy is the way he connects technical learning and core engineering science with real-world impacts. His approach keeps students actively engaged and encourages critical thinking while developing their competence and confidence as design engineers. Current graduate student Ariel Mobius ’24 writes, “Professor Winter is a transformative educator. He successfully blends rigorous technical instruction with lessons on problem scoping and hands-on learning and backs it all up with personalized mentorship. He is a committed advocate for his students and has fundamentally shaped my path as a mechanical engineer.”
Especially notable is Winter’s energetic style and use of interactive materials and demonstrations to make fundamental topics tangible. “He wheels in a large steamer trunk filled with demos he has built or collected to illustrate the day’s topic,” writes Class of 1948 Career Development Professor and assistant professor of mechanical engineering Kaitlyn Becker. “Some demos are enduring classics and others newly designed each year.” Through his “Gearhead Moment of Zen,” Winter shares an astonishing car stunt and then explains its mechanics using course material. “The theatrics stay in students’ minds,” says Becker, highlighting how Winter’s dramatic examples reinforce learning.
These techniques, combined with a supportive culture, allowed Winter to transform 2.007 from a core class and first subject in engineering design into a celebration of student effort and learning. Throughout the term, students learn how to design and build objects, culminating in a robot competition in which their creations tackle themed challenges on a life-size game board. In the past, fewer than half the students were able to compete; today, boosted by Winter’s mentorship and enthusiasm, nearly 97 percent finish a competition-ready robot.
Ralph E. and Eloise F. Cross Professor of Mechanical Engineering David Hardt writes, “Thanks to Amos, this subject has become transformative for many MechE undergraduates.” Becker concurs: “He is the heart and captain of the 2.007 ‘cheer squad,’ cultivating a caring and motivated teaching team.”
Current graduate student Aidan Salazar ’25 notes, “His teaching philosophy is grounded in empowerment: he encourages students to take risks when designing while giving them the confidence and support needed to do so with thoughtful engineering analysis.”
Winter is also deeply invested in students’ growth outside the classroom. He serves as faculty supervisor for MIT’s Formula SAE (Society of Automotive Engineers) and Solar Car teams and guides related UROP projects. In fall 2025 alone, he advised nearly 50 UROP students from the teams, demonstrating his commitment to experiential learning and ability to mentor students at scale.
Salazar continues: “He has offered extraordinary contributions in helping MIT undergraduates embody the Institute’s ‘mens-et-manus’ [‘mind-and-hand’] motto, and I am grateful to be one of the individuals shaped by his teaching.”
“I have always looked up to my colleagues who are MacVicar Fellows as the best educators at the Institute,” writes Winter. “What makes this acknowledgement even more special to me is by earning it from teaching 2.007, which I often cite as one of the best parts of my job. The class is where most mechanical engineering undergraduates gain their first real engineering experience by physically realizing a machine of their own conception. It has been extremely gratifying to watch a generation of students translate their knowledge of engineering and design from the class into their careers … I am honored to have played a role in their intellectual growth and done so meaningfully enough to be recognized as a MacVicar Fellow.”
Nickolai Zeldovich: Inspiring independent thinkers and future teachers
Nickolai Zeldovich is the Joan and Irwin M. (1957) Jacobs Professor of Electrical Engineering and Computer Science (EECS). Student testimonials highlight his unique ability to activate their problem-solving skills, cultivate their intellectual curiosity, and infuse learning with joy.
Katarina Cheng ’25 writes, “From my first day of lecture in the course, I was immediately drawn in by Professor Zeldovich’s joy and enthusiasm for every facet of security and its power,” and Rotem Hemo ’17, ’18 says that Zeldovich “empowers students to find solutions themselves.”
Yael Tauman Kalai, the Ellen Swallow Richards (1873) Professor and professor of EECS, concurs. She notes that his lectures — with back-and-forth discussion and probing questions — encourage independent thinking and ensure that “everyone feels a little smarter at the end. It is not surprising that students love him.”
Zeldovich’s affinity for problem-solving translates to his curricular work as well. When he arrived at MIT in 2008, Course 6 offered classes in theoretical and applied cryptography, but lacked a dedicated systems security subject. Recognizing this as a significant gap, Zeldovich took it upon himself to create class 6.566/6.858 (Computer Systems Security) in 2009. Since then, the subject has become a central part of the curriculum, but sustained interest from undergraduates revealed another need, and in 2021 he partnered with colleagues to create a dedicated introductory course: 6.1600 (Foundations of Computer Security).
Edwin Sibley Webster Professor of EECS Srini Devadas writes: “What our curriculum was sorely in need of was a systems security class, and Nickolai immediately and single-handedly created [it],” and has “taught this class to rave reviews ever since.”
The impact of Zeldovich’s thoughtful, inquiry-driven approach to pedagogy extends beyond the walls of his classroom, inspiring future educators, teaching assistants (TAs), and even his faculty colleagues at MIT.
Henry Corrigan-Gibbs, the Douglas Ross (1954) Career Development Professor of Software Technology and associate professor of computer science, writes that Zeldovich has “proven himself to be a dedicated teacher of teachers … One of the things that makes teaching with Nickolai so much fun is that he shares his passion with the undergraduates and MEng students who join the course staff as TAs.”
“[He] encourages the TAs to contribute their own creative ideas to the course,” continues Corrigan-Gibbs. “It should not be a surprise then that 100% of the TAs that we have had in our class have signed up to teach with Nickolai again.”
“Due, in no small part, to how I saw Nickolai lead his classroom, I was inspired to become an educator myself,” writes MIT alumna Anna Arpaci-Dusseau ’23, SM ’24. “I saw that the role of an instructor is not only to teach, but to innovate by thinking of creative projects, and to connect by listening to students’ concerns. As I go forward in my career, I am grateful to have such a wonderful example of an educator to look up to.”
Kalai adds, “I have learned a great deal from the two times that I have ‘taken’ (part of) the class from Nickolai. His extensive knowledge and experience are evident in every lecture. There is so much variety to Nickolai’s teaching.”
Nickolai Zeldovich is the recipient of numerous awards including the EECS Spira Teaching Award (2013), the Edgerton Faculty Achievement Award (2014), the EECS Faculty Research Innovation Fellowship (2018), and the EECS Jamieson Award for Excellence in Teaching (2024).
On receiving this award, Zeldovich says, “MIT has a culture of strong undergraduate education, so being selected as a MacVicar Fellow was truly an honor. It’s a joy to teach smart students about computer systems, and the tradition of co-teaching classes in the EECS department helped me improve as a teacher. Most of all, I look forward to continuing to teach MIT’s students!”
Learn more about the MacVicar Faculty Fellows Program on the Registrar’s Office website.
New MIT class uses anthropology to improve chatbots
MIT computer science students design AI chatbots to help young users become more social, and socially confident.
Young adults growing up in the attention economy — preparing for adult life, with social media and chatbots competing for their attention — can easily fall into unhealthy relationships with digital platforms. But what if chatbots weren’t mere distractions from real life? Could they be designed humanely, as moral partners whose digital goal is to be a social guide rather than an addictive escape?
At MIT, a friendship between two professors — one an anthropologist, the other a computer scientist — led to the creation of an undergraduate class that set out to find the answer to those questions. Combining the two seemingly disparate disciplines, the class encourages students to design artificial intelligence chatbots in humane ways that help users improve themselves.
The class, 6.S061/21A.S02 (Humane User Experience Design, a.k.a. Humane UXD), is an upper-level computer science class cross-listed with anthropology. This unique cross-listing allows computer science majors to fulfill a humanities requirement while also pursuing their career objectives. The two professors use methods from linguistic anthropology to teach students how to integrate the interactional and interpersonal needs of humans into programming.
Professor Arvind Satyanarayan, a computer scientist whose research develops tools for interactive data visualization and user interfaces, and Professor Graham Jones, an anthropologist whose research focuses on communication, created Humane UXD last summer with a grant from the MIT Morningside Academy for Design (MAD). The MIT MAD Design Curriculum Program provides funding for faculty to develop new classes or enhance existing classes using innovative pedagogical approaches that transcend departmental boundaries. Alongside the grant provided by MAD, Jones and Satyanarayan received funding to develop Humane UXD under the auspices of the Common Ground for Computing Education, an initiative of the MIT Schwarzman College of Computing that brings together departments to create courses integrating computing with other disciplines.
The Design Curriculum Program is currently accepting applications for the 2026-27 academic year; the deadline is Friday, March 20.
Jones and Satyanarayan met several years ago when they co-advised a doctoral student’s research on data visualization for visually impaired people. They’ve since become close friends who can pretty much finish one another’s sentences.
“There’s a way in which you don’t really fully externalize what you know or how you think until you’re teaching,” Jones says. “So, it’s been really fun for me to see Arvind unfurl his expertise as a teacher in a way that lets me see how the pieces fit together — and discover underlying commonalities between our disciplines and our ways of thinking.”
Satyanarayan continues that thought: “One of the things I really enjoyed is the reciprocal version of what Graham said, which is that my field — human-computer interaction — inherited a lot of methods from anthropology, such as interviews and user studies and observation studies. And over the decades, those methods have gotten more and more watered down. As a result, a lot of things have been lost.
“For instance, it was very exciting for me to see how an anthropologist teaches students to interview people. It’s completely different than how I would do it. With my way, we lose the rapport and connection you need to build with your interview participant. Instead, we just extract data from them.”
For Jones’ part, teaching with a computer scientist holds another kind of allure: design. He says that human speech and interaction are organized into underlying genres with stable sets of rules that differentiate an interview at a cocktail party from a conversation at a funeral.
“ChatGPT and other large language models are trained on naturally occurring human communication, so they have all those genres inside them in a latent state, waiting to be activated,” he says.
“As a social scientist, I teach methods for analyzing human conversation, and give students very powerful tools to do that. But it ends up usually being an exercise in pure research, whereas this is a design class, where students are building real-world systems.”
The curriculum appears to be on target for preparing students for jobs after graduation. One student sought permission to miss class for a week because he had a trial internship at a chatbot startup; when he returned, he said his work at the startup was just like what he was learning in class. He got the job.
The sampling of group projects below, built with Google’s Gemini, demonstrates some of what’s possible when, as Jones says, “there’s a really deep intertwining of the technology piece with the humanities piece.” The students’ design work shows that entirely new ways of programming can be conceptualized when the humane is made a priority.
The bots demonstrate clearly that an interdisciplinary class can be designed in such a way that everyone benefits: Students learn more and differently; they can fulfill a non-major course requirement by taking a class that is directly beneficial to their careers; and long-term faculty partnerships can be forged or strengthened.
Team Pond
One project promises to be particularly useful for graduating seniors. Pond is designed to help young college graduates adapt to the challenges of independent adult life. Team Pond configured the chatbot not to simply parrot the user, or to sycophantically praise wrong answers. Instead, Pond provides advice to help with “adulting” (behaving as a responsible adult).
“Pond is built to be your companion from college life into post-college life, to help you in your transition from being a small fish in a small pond to being a small fish in a very big pond,” says sophomore Mary Feliz.
“College is very much a high-proximity and high-context environment, in the sense that everybody around you is going through the same thing, and it’s easy to build relationships or find opportunities, because there are structured pathways that you have access to,” explains graduate student Emaan Khan. “Post-grad life is low-context. You’re not always surrounded by your peers or your professors. It’s no-proximity also, in the sense that you don’t have opportunities at your doorstep. Pond is a tool to help empower you to access certain opportunities, or learn how to navigate.”
Pond’s developers are graduate student Claire Camacho, Harvard University graduate student Charlotte Cheah, Feliz, and Khan. They trained Pond to offer expertise in three areas: social life, professional life, and adult skills.
The students demonstrated Pond’s social communication by asking how to start conversations in a hobby group. Next, they exhibited Pond’s adulting communication by typing in a question.
“My landlord has asked me personal questions that make me uncomfortable. What should I do?”
Appearing in Pond’s textbox was this response: “Politely set boundaries. You are not obligated to answer personal questions. You can gently redirect the conversation back to property-related topics. Know your rights: familiarize yourself with tenant rights in your city or state regarding privacy and landlord conduct.”
When asked how to do that, Pond offered specific instructions: “You can try saying phrases like: ‘I prefer to keep our conversations focused on the property,’ and ‘let’s stick to topics related to the apartment, please.’ Always redirect the conversation back to the apartment or a legitimate maintenance issue. Keep your tone polite but firm. Document any conversations if needed.”
Pond also offered a role-playing scenario to help the user learn what polite-but-firm language might be in that situation.
“The ethos of the practice mode is that you are actively building a skill, so that after using Pond for some time, you feel confident that you can swim on your own,” Khan says. The chatbot uses a point system that allows users to graduate from a topic, and a treasure chest to store prizes, elements added to boost the bot’s appeal.
Team News Nest
Another of the projects, News Nest, provides a sophisticated means of helping young people engage with credible news sources in a way that makes it fun. The name is derived from the program’s 10 appealing and colorful birds, each of which focuses on a particular area of news. If you want the headlines, you ask Polly the Parrot, the main news carrier; if you’re interested in science, Gaia the Goose guides you. The flock also includes Flynn the Falcon, sports reporter; Credo the Crow, for crime and legal news; Edwin the Eagle, a business and economics news guide; Pizzazz the Peacock for pop and entertainment stories; and Pixel the Pigeon, a technology news specialist.
News Nest’s development team is made up of MIT seniors Tiana Jiang and Krystal Montgomery, and junior Natalie Tan. They intentionally built News Nest to prevent “doomscrolling” and to provide media transparency (sources and political leanings are always shown), and they created a clever, healthy buffer against emotional manipulation and engagement traps by employing birds rather than human characters.
Team M^3 (Multi-Agent Murder Mystery)
A third team, M^3, decided to experiment with making AI humane by keeping it fun. MIT senior Rodis Aguilar, junior David De La Torre, and second-year Deeraj Pothapragada developed M^3, a social deduction multi-agent murder mystery that incorporates four chatbots as different personalities: Gemini, OpenAI’s ChatGPT, xAI’s Grok, and Anthropic’s Claude. The user is the fifth player.
Like a regular murder mystery, there are locations, weapons, and lies. The user has to guess who committed the murder. It’s very similar to a board or online game played with real players, only these are enhanced AI opponents you can’t see, who may or may not tell the truth in response to questions. Users can’t get too involved with one chatbot, because they’re playing all four. Also, as in a real life murder mystery game, the user is sometimes guilty.
New photonic device efficiently beams light into free space
Light-emitting structures that curl off the chip surface could enable advanced displays, high-speed optical communications, and larger-scale quantum computers.
Photonic chips use light to process data instead of electricity, enabling faster communication speeds and greater bandwidth. Most of that light typically stays on the chip, trapped in optical wires, and is difficult to transmit to the outside world in an efficient manner.
If a lot of light could be rapidly and precisely beamed off the chip, free from the confines of the wiring, it could open the door to higher-resolution displays, smaller Lidar systems, more precise 3D printers, or larger-scale quantum computers.
Now, researchers from MIT and elsewhere have developed a new class of photonic devices that enable the precise broadcasting of light from the chip into free space in a scalable way.
Their chip uses an array of microscopic structures that curl upward, resembling tiny, glowing ski jumps. The researchers can carefully control how light is emitted from thousands of these tiny structures at once.
They used this new platform to project detailed, full-color images that are roughly half the size of a grain of table salt. Used in this way, the technology could aid in the development of lightweight augmented reality glasses or compact displays.
They also demonstrated how photonic “ski jumps” could be used to precisely control quantum bits, or qubits, in a quantum computing system.
“On a chip, light travels in wires, but in our normal, free-space world, light travels wherever it wants. Interfacing between these two worlds has long been a challenge. But now, with this new platform, we can create thousands of individually controllable laser beams that can interact with the world outside the chip in a single shot,” says Henry Wen, a visiting research scientist in the Research Laboratory of Electronics (RLE) at MIT, research scientist at MITRE, and co-lead author of a paper on the new platform.
He is joined on the paper by co-lead authors Matt Saha, of MITRE; Andrew S. Greenspon, a visiting scientist in RLE and MITRE; Matthew Zimmermann, of MITRE; Matt Eichenfeld, a professor at the University of Arizona; senior author Dirk Englund, a professor in the MIT Department of Electrical Engineering and Computer Science and principal investigator in the Quantum Photonics and Artificial Intelligence Group and the RLE; as well as others at MIT, MITRE, Sandia National Laboratories, and the University of Arizona. The research appears today in Nature.
A scalable platform
This work grew out of the Quantum Moonshot Program, a collaboration between MIT, the University of Colorado at Boulder, the MITRE Corporation, and Sandia National Laboratories to develop a novel quantum computing platform using the diamond-based qubits being developed in the Englund lab.
These diamond-based qubits are controlled using laser beams, and the researchers needed a way to interact with millions of qubits at once.
“We can’t control a million laser beams, but we may need to control a million qubits. So, we needed something that can shoot laser beams into free space and scan them over a large area, kind of like firing a T-shirt gun into the crowd at a sports stadium,” Wen says.
Existing methods used to broadcast and steer light off a photonic chip typically work with only a few beams at once and can’t scale up enough to interact with millions of qubits.
To create a scalable platform, the researchers developed a new fabrication technique. Their method produces photonic chips with tiny structures that curve upward off the chip’s surface to shine laser beams into free space.
They built these tiny “ski jumps” for light by creating two-layer structures from two different materials. The two materials contract by different amounts as they cool down from the high fabrication temperatures.
The researchers designed the structures with special patterns in each layer so that, when the temperature changes, the difference in strain between the materials causes the entire structure to curve upward as it cools.
This is the same effect as in an old-fashioned thermostat, which utilizes a coil of two metallic materials that curl and uncurl based on the temperature in the room, triggering the HVAC system. “Both of these materials, silicon nitride and aluminum nitride, were separate technologies. Finding a way to put them together was really the fabrication innovation that enables the ski jumps. This wouldn’t have been possible without the pioneering contributions of Matt Eichenfield and Andrew Leenheer at Sandia National Labs,” Wen says.
On the chip, connected waveguides funnel light to the ski jump structures. The researchers use a series of modulators to rapidly and precisely control how that light is turned on and off, enabling them to project light off the chip and move it around in free space.
Painting with light
They can broadcast light in different colors and, by tweaking the frequencies of light, adjust the density of the pattern that is emitted. In this way, they can essentially paint pictures in free space using light.
“This system is so stable we don’t even need to correct for errors. The pattern stays perfectly still on its own. We just calculate what color lasers need to be on at a given time and then turn it on,” he says.
Because the individual points of light, or pixels, are so tiny, the researchers can use this platform to generate extremely high-resolution displays. For instance, with their technique, about 30,000 pixels can fit into the area occupied by just two of the pixels used in smartphone displays, Wen says.
“Our platform is the ideal optical engine because our pixels are at the physical limit of how small a pixel can be,” he adds.
Beyond high-resolution displays and larger quantum computers with diamond-based qubits, the method could be used to produce Lidars that are small enough to fit on tiny robots.
It could also be utilized in 3D printing processes that fabricate objects using lasers to cure layers of resin. Because their chip generates controllable beams of light so rapidly, it could greatly increase the speed of these printing processes, allowing users to create more complex objects.
In the future, the researchers want to scale their system up and conduct additional experiments on the yield and uniformity of the light, design a larger system to capture light from an array of photonic chips with “ski jumps,” and conduct robustness tests to see how long the devices last.
“We envision this opening the door to a new class of lab-on-chip capabilities and lithographically defined micro-opto-robotic agents,” Wen says.
This research was funded, in part, by the MITRE Quantum Moonshot Program, the U.S. Department of Energy, and the Center for Integrated Nanotechnologies.
A better method for planning complex visual tasks
A new hybrid system could help robots navigate in changing environments or increase the efficiency of multirobot assembly teams.
MIT researchers have developed a generative artificial intelligence-driven approach for planning long-term visual tasks, like robot navigation, that is about twice as effective as some existing techniques.
Their method uses a specialized vision-language model to perceive the scenario in an image and simulate actions needed to reach a goal. Then a second model translates those simulations into a standard programming language for planning problems, and refines the solution.
In the end, the system automatically generates a set of files that can be fed into classical planning software, which computes a plan to achieve the goal. This two-step system generated plans with an average success rate of about 70 percent, outperforming the best baseline methods that could only reach about 30 percent.
Importantly, the system can solve new problems it hasn’t encountered before, making it well-suited for real environments where conditions can change at a moment’s notice.
“Our framework combines the advantages of vision-language models, like their ability to understand images, with the strong planning capabilities of a formal solver,” says Yilun Hao, an aeronautics and astronautics (AeroAstro) graduate student at MIT and lead author of an open-access paper on this technique. “It can take a single image and move it through simulation and then to a reliable, long-horizon plan that could be useful in many real-life applications.”
She is joined on the paper by Yongchao Chen, a graduate student in the MIT Laboratory for Information and Decision Systems (LIDS); Chuchu Fan, an associate professor in AeroAstro and a principal investigator in LIDS; and Yang Zhang, a research scientist at the MIT-IBM Watson AI Lab. The paper will be presented at the International Conference on Learning Representations.
Tackling visual tasks
For the past few years, Fan and her colleagues have studied the use of generative AI models to perform complex reasoning and planning, often employing large language models (LLMs) to process text inputs.
Many real-world planning problems, like robotic assembly and autonomous driving, have visual inputs that an LLM can’t handle well on its own. The researchers sought to expand into the visual domain by utilizing vision-language models (VLMs), powerful AI systems that can process images and text.
But VLMs struggle to understand spatial relationships between objects in a scene and often fail to reason correctly over many steps. This makes it difficult to use VLMs for long-range planning.
On the other hand, scientists have developed robust, formal planners that can generate effective long-horizon plans for complex situations. However, these software systems can’t process visual inputs and require expert knowledge to encode a problem into language the solver can understand.
Fan and her team built an automatic planning system that takes the best of both methods. The system, called VLM-guided formal planning (VLMFP), utilizes two specialized VLMs that work together to turn visual planning problems into ready-to-use files for formal planning software.
The researchers first carefully trained a small model they call SimVLM to specialize in describing the scenario in an image using natural language and simulating a sequence of actions in that scenario. Then a much larger model, which they call GenVLM, uses the description from SimVLM to generate a set of initial files in a formal planning language known as the Planning Domain Definition Language (PDDL).
The files are ready to be fed into a classical PDDL solver, which computes a step-by-step plan to solve the task. GenVLM compares the results of the solver with those of the simulator and iteratively refines the PDDL files.
“The generator and simulator work together to be able to reach the exact same result, which is an action simulation that achieves the goal,” Hao says.
Because GenVLM is a large generative AI model, it has seen many examples of PDDL during training and learned how this formal language can solve a wide range of problems. This existing knowledge enables the model to generate accurate PDDL files.
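The division of labor between the two models and the solver can be summarized in a short sketch. The Python below is a hypothetical outline of the generate-solve-check-refine cycle described above, not the authors' code; every function name, signature, and the stopping rule are assumptions.

```python
from typing import Callable, Optional

def plan_from_image(
    image,
    goal_text: str,
    describe_and_simulate: Callable,  # stands in for the SimVLM-like model
    generate_pddl: Callable,          # stands in for the GenVLM-like model
    solve_pddl: Callable,             # stands in for a classical PDDL solver
    max_rounds: int = 5,
) -> Optional[list]:
    """Hypothetical outline of a VLMFP-style loop: describe the scene,
    generate PDDL domain and problem files, solve, then check the solver's
    plan against the simulator and refine the files if they disagree."""
    scene = describe_and_simulate(image, actions=None)           # natural-language description
    domain_pddl, problem_pddl = generate_pddl(scene, goal_text)  # initial PDDL files

    for _ in range(max_rounds):
        plan = solve_pddl(domain_pddl, problem_pddl)             # step-by-step plan
        outcome = describe_and_simulate(image, actions=plan)     # replay the plan
        if outcome.get("goal_reached"):
            return plan                                          # plan agrees with the simulator
        # Otherwise, pass the mismatch back so the generator can revise the files.
        domain_pddl, problem_pddl = generate_pddl(scene, goal_text, feedback=outcome)
    return None  # no verified plan found within the iteration budget
```

In the article's terms, SimVLM plays the role of describe_and_simulate and GenVLM plays the role of generate_pddl.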
A flexible approach
VLMFP generates two separate PDDL files. The first is a domain file that defines the environment, valid actions, and domain rules. The second is a problem file that defines the initial state and the goal of the particular problem at hand.
“One advantage of PDDL is the domain file is the same for all instances in that environment. This makes our framework good at generalizing to unseen instances under the same domain,” Hao explains.
To enable the system to generalize effectively, the researchers needed to carefully design just enough training data for SimVLM so the model learned to understand the problem and goal without memorizing patterns in the scenario. When tested, SimVLM successfully described the scenario, simulated actions, and detected if the goal was reached in about 85 percent of experiments.
Overall, the VLMFP framework achieved a success rate of about 60 percent on six 2D planning tasks and greater than 80 percent on two 3D tasks, including multirobot collaboration and robotic assembly. It also generated valid plans for more than 50 percent of scenarios it hadn’t seen before, far outpacing the baseline methods.
“Our framework can generalize when the rules change in different situations. This gives our system the flexibility to solve many types of visual-based planning problems,” Fan adds.
In the future, the researchers want to enable VLMFP to handle more complex scenarios and explore methods to identify and mitigate hallucinations by the VLMs.
“In the long term, generative AI models could act as agents and make use of the right tools to solve much more complicated problems. But what does it mean to have the right tools, and how do we incorporate those tools? There is still a long way to go, but by bringing visual-based planning into the picture, this work is an important piece of the puzzle,” Fan says.
This work was funded, in part, by the MIT-IBM Watson AI Lab.
2026 MIT Sloan Sports Analytics Conference shows why data make a difference
Over 2,500 — including coaches and players from Team USA, the NBA, WNBA, and more — attended MIT’s industry-leading event, now in its 20th year.
With time dwindling in the Olympic women’s ice hockey gold medal game on Feb. 19, players for Team USA and Team Canada lined up for a key faceoff in Canada’s end. Canada had a 1-0 lead. USA had 2:23 left, and an ace up their sleeve: analytics.
USA Coach John Wroblewski pulled the goalkeeper, to get a player advantage, and had forward Alex Carpenter take the faceoff. Statistics show that Carpenter is not only very good at winning faceoffs; she also wins a lot of them cleanly. That allows her team to quickly regain possession, without too many teammates nearby. Knowing that, Wroblewski directed the USA players to spread out, largely away from the faceoff circle, in position to circulate the puck as soon as they got it back.
Carpenter won the faceoff, and Team USA quickly started a passing move. Laila Edwards soon launched a shot that longtime star Hilary Knight deflected in for the crucial, game-tying goal with 2:04 left. Team USA then won in overtime. And data-driven decision-making had also won big; indeed, it helped change the Olympics.
“What it does for a coach, the other thing these analytics do, is … it allows you to move forward with this confidence level,” Wroblewski said on Saturday at the 20th annual MIT Sloan Sports Analytics Conference (SSAC), during a hockey analytics panel where he detailed his decision-making for that faceoff, and in the gold medal game generally.
Using the data, he added, lets coaches “limit the emotion” that might cloud their in-game decisions.
“By the time you get to that decision, you’re then allowed the freedom to step away from the decision, to allow the players to go earn their medal,” Wroblewski added.
You don’t usually find coaches divulging their tactical secrets just three weeks after a big game has been played. But then, this is the MIT Sloan conference, a trailblazing forum that has helped analytics ideas spread throughout sports. Coaches, players, and analysts know any data-driven discussion will find an interested audience.
“Analytics was massive for us going into the gold medal game,” Wroblewski said.
20 years on: From classrooms to convention halls
The 20th edition of SSAC was a strong one, with many substantive panel discussions and interviews; the annual research paper, hackathon, and case study contests; mentorship events and informal networking opportunities; and more. Over 2,500 people attended the two-day event, held at Boston’s Menino Conference and Exhibition Center (MCEC). The conference was founded in 2007 by Daryl Morey, now president of basketball operations for the NBA Philadelphia 76ers, and Jessica Gelman, now CEO of the Kraft Analytics Group.
The first three editions of the conference were held on the MIT campus. In 2010, it first moved to the MCEC (one of two regular convention-center sites it uses), and starting in 2011, the conference became a two-day event.
Today people attend for the panels, the career opportunities, and, in some cases, to make news. NBA Commissioner Adam Silver was on hand this year, engaging in an on-stage conversation with former WNBA great Sue Bird, publicly addressing some of the key issues facing his league, and drawing wide media coverage.
First, though, Silver reflected about attending the second edition of the conference on the MIT campus in 2008, when he was deputy commissioner.
“It was literally a classroom of 20 people we were talking to,” Silver recalled. “I think it was the beginning of the moment when people were taking sports as a discipline more seriously. … I give Jessica and Daryl a lot of credit [for that].”
Addressing tanking and gambling
A core part of Silver’s comments focused on two big issues in pro basketball: tanking and gambling. About eight NBA teams appear to be tanking this season, that is, losing games in order to increase their chances of getting a high draft pick.
“We are going to make substantial changes for next year,” Silver said, although he also added: “I am an incrementalist. I think we’ve got to be a little bit careful about how huge a change we make at once. I’m not ruling anything out. But I am paying attention to that.”
To be sure, tanking has long been a part of professional basketball, as Bird noted during the conversation.
“We did it in Seattle, to be honest,” Bird said. “Breanna Stewart was coming out of college. We were in a ‘rebuild.’”
Still, in this NBA season, tanking has become an epidemic, in “a little bit of a perfect storm,” as Silver put it on Friday. And almost every proposed solution seems to have drawbacks. Perhaps the simplest cure for tanking, actually, would be robust analytical studies showing that it is not a very effective team-building strategy. If that is what the numbers reveal, of course.
Meanwhile, multiple arrests of NBA players and coaches at the beginning of the season show further that sports gambling continues to present challenges to professional sports leagues.
“I personally think there should be more regulation now, not less,” Silver said on Friday, suggesting that federal rules would simplify things in the U.S., where 39 states allow sports gambling to some extent. He also said the NBA can continue to work on monitoring data to protect against gambling scandals.
“I think there are some large-platform companies that are looking at a business opportunity to come in and, in a much more sophisticated way, work as a detection service with the league,” Silver said.
Through it all, Silver said, the NBA will continue to be a data-driven operation. Have you watched a game with a long instant-replay review, and gotten a little impatient? Still, have you kept watching that game? So does almost everyone.
“For years people would tell us, ‘Don’t use instant replay, because you’ll turn fans off,’” Silver said. However, he added, “The data suggests, in terms of ratings and what servers tell us, you almost never lose a fan when you’re going to replay. Because they want to see the replay and they want to see what happened.”
The minnows got big
Sports analytics took root in baseball, with its discrete pitcher-hitter actions. Legendary MLB general manager Branch Rickey employed a statistician for the great Brooklyn Dodgers of the 1950s; the famous manager Earl Weaver thought analytically with the Baltimore Orioles in the 1970s. Baseball analyst Bill James made sports analytics a viable pursuit with his annual “Baseball Abstract” bestsellers in the 1980s, and Michael Lewis’ “Moneyball” popularized it.
But data can be applied to all sports — and sometimes is most valuable when only some teams are interested in it. Take soccer. In the English Premier League, about three clubs have been heavily oriented around analytics over the last decade: Liverpool FC, Brighton FC, and Brentford FC. That has helped Liverpool win multiple titles, while Brighton and Brentford, smaller clubs, have startled many with their success.
Saturday at SSAC, Brentford’s majority owner Matthew Benham made one of his most visible public appearances, in an onstage interview with podcaster Roger Bennett. Benham first made money wagering on soccer, then invested in Brentford, his childhood club.
“The information we used in the early days was really, really rudimentary,” Benham said. In his account, his success building an analytics-based club has only partly been about the numbers.
“A lot of the success has just been in running things efficiently,” Benham said. He prefers to have management discussions that are an “exchange of views, rather than debate,” since the latter implies an interaction with a clear winner and loser. Instead, compiling independent-minded views from his executives is more important.
Brentford also uses “a combination of old-style scouting and data” for its player acquisition decisions, Benham said. Not every decision works. Brentford could have signed current Arsenal FC star Eberechi Eze for a mere 4 million pounds in 2019, and passed; Crystal Palace FC acquired Eze, then realized a windfall when Arsenal purchased his services.
Still, pressed by Bennett to specify a little more about his analytical thinking, Benham implied that strikers are valuable not only for their finishing skills, but for consistently getting open for shots on goal. Fans tend to focus too much on a player’s misses, rather than how many chances are created by their off-ball work.
“Getting in position is way, way more informative than finishing,” Benham said.
A similar insight seems to have guided Liverpool’s thinking. As it happens, a Friday panel at SSAC featured Ian Graham, who ran Liverpool’s analytics operations from 2012 to 2023, and weighed in on a number of subjects. Among other things, Graham noted, teams are too cautious when tied late in a match; soccer grants three points for a win, one for a draw, and zero for a loss, so from a tied position, the reward for winning is twice as great as the penalty for losing.
“Teams don’t go for it enough,” Graham said. “Teams think a draw is an okay result.”
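As a rough, hypothetical illustration of that asymmetry (the probabilities below are made up, not figures from the panel), a quick expected-points calculation shows why pushing for a win from a tied position can pay off:

```python
def expected_points(p_win: float, p_loss: float) -> float:
    """Expected league points from a tied position under 3/1/0 scoring."""
    p_draw = 1.0 - p_win - p_loss
    return 3.0 * p_win + 1.0 * p_draw

# Made-up probabilities for illustration only.
settle = expected_points(p_win=0.05, p_loss=0.05)  # play it safe: 1.05 points
push = expected_points(p_win=0.30, p_loss=0.25)    # go for the win: 1.35 points
print(settle, push)

# Algebraically, expected points = 1 + 2*p_win - p_loss, so pushing beats a
# sure draw whenever the chance of winning exceeds half the chance of losing:
# the "reward is twice the penalty" asymmetry described above.
```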
The limits of knowledge
Sports, of course, are ultimately played by imperfect, injury-prone, and sometimes exhausted athletes. One consistent lesson from the MIT Sloan conference involves the limits of data and plans.
“We think the data is giving us an answer, when actually it’s giving us some information, and we still have to make a choice,” said Ariana Andonian, vice president of player personnel for the Philadelphia 76ers, during a basketball panel on Saturday.
Asked about the promise of artificial intelligence for sports analytics, Sonia Raman, head coach of the WNBA’s Seattle Storm, noted that its insights might always be limited by circumstances.
“It’s not like you can just get an AI report in the middle of the game that says, ‘Get some shooting in,’” said Raman, who, prior to coaching in the WNBA and NBA served for 12 years as head coach of the MIT women’s basketball team.
“You can have a great plan, but if it’s poorly executed, it’s way worse than a poor plan that’s well executed,” added Steven Adams, a center for the NBA’s Houston Rockets (who is currently not playing due to injury), during the same panel.
And yet, in some games and matches, the analytics do work, the plans do come to fruition, and the numbers do make a difference. When that happens, as John Wroblewski can now attest, the results are golden.
MIT undergraduates help US high schoolers tackle calculus
The MIT4America Calculus Project is a growing source of tutoring support on a topic that’s a “gateway” to many STEM careers.
This year in a rural school district in southeastern Montana, one high school student is taking calculus. For many people, calculus is daunting enough, even when teachers are used to offering it and peers are around to help. Studying it solo can be even harder. Yet this lone student has an unusual source of support: weekly tutoring directly from an MIT undergraduate, by Zoom, a long-distance but helpful way to stay on track.
It's part of a new program called the MIT4America Calculus Project, launched from the Institute last summer, in which MIT undergraduates and alumni work with school districts across the U.S., from Montana to Texas to New York, to tutor high school students. The logic is compelling: Students are highly proficient at calculus at MIT, where it is almost a requirement for admissions and success. The new civic-minded outreach program lets those MIT people share their knowledge and skills, getting high schoolers ready for further studies and even jobs, especially in STEM fields.
“Calculus is a gateway for many students into STEM higher education and careers,” says MIT Professor Eric Klopfer, a co-director of the MIT4America Calculus Project. “We can help more students, in more places, fulfill requirements and get into great universities across the country, whether MIT or others, and then into STEM careers. We want to make sure they have the skills to do that.”
At this point, the project is working closely with 14 school districts across the U.S., deploying 30 current MIT undergraduates and seven alumni as tutors. The weekly sessions are carefully coordinated with school administrators and teachers, and the MIT tutors have all received training. The program started with an in-person summer calculus camp in 2025; by next summer, the goal is to be collaborating with about 20 school districts.
“We want it to have a lasting impact,” says Claudia Urrea, an education scholar and co-director of the MIT4America Calculus Project. “It’s not just about students passing an exam, but having tutors who look like what the students want to be in the future, who are mentors, have conversations, and make sure the high school students are learning.”
Klopfer and Urrea bring substantial experience to the project. Klopfer is a professor and director of the Scheller Teacher Education Program and the Education Arcade at MIT; Urrea is executive director for the PreK-12 Initiative at MIT Open Learning.
The MIT4America Calculus Project is supported through a gift from the Siegel Family Endowment and was developed as a project in consultation with David Siegel SM ’86, PhD ’91, a computer scientist and entrepreneur who is chairman of the firm Two Sigma.
“David Siegel came to us with two powerful questions: How can we spread the educational impact of MIT beyond our walls? And how can we open doors to STEM careers for U.S. high school students who don’t have access to calculus?” says MIT President Sally Kornbluth.
She adds: “The MIT4America Calculus Project answers those questions in a perfectly MIT way: Reflecting the Institute’s longstanding commitment to national service, the MIT4America Calculus Project supplies an innovative answer to a hard practical problem, and it taps the uncommon skill of the people of MIT to create opportunity for others. We’re enormously grateful to David for his inspiration and guidance, and to the Siegel Family Endowment for the financial support that brought this idea to life.”
The U.S. has more than 13,000 school districts, and about half of them offer calculus classes. The MIT effort aims to work with districts that already offer calculus but are striving to add educational support for those programs, often while facing funding constraints or other limitations.
In contrast to the one-student calculus situation in Montana, the project is also working with a 5,000-student district in Texas, south of Dallas, where about 60 high school students take calculus; currently five Institute undergraduates are tutoring 15 students from the district’s schools.
“Other organizations are involved in efforts like this, but I think MIT brings some unique things to it,” Klopfer says. “I think involving our undergraduates in this is an awesome contribution. Our students really do come from all over the place, and are sometimes connecting back to their home states and communities, and that makes a difference on both sides.”
He adds: “I see benefits for our students, too. They develop good ways of communicating, working with other people and building skills. They can gain a lot of great experience.”
In addition to the in-person summer calculus camp, which is expected to continue, and the weekly video tutoring, the MIT4America Calculus Project is working on developing online tools that help guide high school students as well. Still, Urrea emphasizes, the project is built around “the importance of people. A community of support is very important, to have connections that build over time. The human aspect of the program is irreplaceable.”
The MIT tutors must complete rigorous training sessions that cover pedagogy and other aspects of working with high school students, and they know they are making a substantial commitment of time and effort.
It has been worth it, as teachers say their high school students have been responding very well to the MIT tutors.
“For students to be able to see themselves in their tutors is a really cool thing,” says Shilpa Agrawal ’15, director of computer science and an AP calculus AB teacher at Comp Sci High in the Bronx, New York, where 15 students are participating in the project.
“It’s led to a lot of success for my students,” adds Agrawal, who majored in computer science at MIT. She is part of the national network of MIT-connected teachers who have been helping the program grow organically, having reached out to Jenny Gardony, manager of the MIT4America Calculus Project.
Gardony, who is also the math project manager in MIT’s Scheller Teacher Education program, has been receiving enthusiastic emails from teachers in other participating districts since the project started.
“I have to start by saying thank you,” one teacher wrote to Gardony, adding that one student “was so excited in class today. The session she had with you made her so confident. She’s always nervous, but today she was smiling and helping others, and that was 100 percent because of you.”
Gardony adds: “The fact that a busy teacher takes the time to send that email, I’m touched they would do that.”
Understanding how “marine snow” acts as a carbon sink
A new study finds hitchhiking bacteria dissolve essential ballast in ubiquitous “snow” particles, which could counteract the ocean’s ability to sequester carbon.
In some parts of the deep ocean, it can look like it’s snowing. This “marine snow” is the dust and detritus that organisms slough off as they die and decompose. Marine snow can fall several kilometers to the deepest parts of the ocean, where the particles are buried in the seafloor for millennia.
Now, researchers at MIT and their collaborators have found that as marine snow falls, tiny hitchhikers may limit how deep the particles can sink before dissolving away. The team shows that when bacteria hitch a ride on marine snow particles, the microbes can eat away at calcium carbonate, which is an essential ballast that helps particles sink.
The findings, which appear this week in the Proceedings of the National Academy of Sciences, could explain how calcium carbonate dissolves in shallow layers of the ocean, where scientists had assumed it should remain intact. The results could also change scientists’ understanding of how quickly the ocean can sequester carbon from the atmosphere.
Marine snow is a main vehicle by which the ocean stores carbon. At the ocean’s surface, phytoplankton absorb carbon dioxide from the atmosphere and convert the gas into other forms of carbon, including calcium carbonate — the same stuff that’s found in shells and corals. When they die, bits of phytoplankton drift down through the ocean as marine snow, carrying the carbon with them. If the particles make it to the deep ocean, the carbon they carry can be buried and locked away for hundreds to thousands of years.
But the new study suggests bacteria may be working against the ocean’s ability to sequester carbon. By eroding the particles’ calcium carbonate, bacteria can significantly slow the sinking of marine snow. The longer the particles linger in shallow water, the more likely they are to be respired, releasing carbon dioxide into the shallow ocean, and possibly back into the atmosphere.
“What we’ve shown is that carbon may not sink as deep or as fast as one may expect,” says study co-author Andrew Babbin, an associate professor in the Department of Earth, Atmospheric and Planetary Sciences and a mission director at the Climate Project at MIT. “As humanity tries to design our way out of the problem of having so much CO2 in the atmosphere, we have to take into account these natural microbial mechanisms and feedbacks.”
The study’s primary author is Benedict Borer, a former MIT postdoc who is now an assistant professor of marine and coastal sciences at the Rutgers School of Environmental and Biological Sciences; co-authors include Adam Subhas and Matthew Hayden at the Woods Hole Oceanographic Institution and Ryan Woosley, a principal research scientist at MIT’s Center for Sustainability Science and Strategy.
Losing weight
Marine snow acts as the ocean’s main “biological pump,” the process by which the ocean pulls carbon from the surface down into the deep ocean. Scientists estimate that marine snow is responsible for drawing down billions of tons of carbon each year. Marine snow’s ability to sink comes mainly from minerals such as calcium carbonate embedded within the particles. The mineral is a dense ballast that weighs down the particle. The more calcium carbonate a particle has, the faster it sinks.
Scientists had assumed based on thermodynamics that calcium carbonate should not dissolve within the ocean’s upper layers, given the general temperature and pH conditions in the surface ocean. Any calcium carbonate that is bound up in marine snow should then safely sink to depths greater than 1,000 meters without dissolving along the way.
But oceanographers have long observed signs of dissolved calcium carbonate in the upper layers of the ocean, suggesting that something other than the ocean’s macroscale conditions was dissolving the mineral and slowing down the ocean’s biological pump.
And indeed, the MIT team has found that what is dissolving calcium carbonate in shallow waters is a microscale process that occurs within the immediate environment of an individual particle.
“Most oceanographers think about the macroscale, and in this instance what’s happening in microscopic particles is what is actually controlling bulk seawater chemistry,” Borer says. “Consequences abound for the ocean’s carbon dioxide sequestration capacity.”
A sinking sweet spot
In their new study, the researchers set up an experiment to simulate a sinking particle of marine snow and its interactions at the microscale. The team synthesized marine-snow-like particles from varying concentrations of calcium carbonate and bacteria — organisms that are often found feasting on such particles in the ocean.
“The ocean is a fairly dilute medium with respect to organic matter,” Babbin says. “So organisms like bacteria have to search for food. And particles of marine snow are like cheeseburgers for bacteria.”
The team designed a small microfluidic chip to contain the particles, and flowed seawater through the chip at various rates to simulate different sinking speeds in the ocean. Their experiments revealed that whenever particles hosted any bacteria, they also rapidly lost some calcium carbonate, which dissolved into the surrounding seawater. As bacteria feed on the particles’ organic material, the microbes excrete acidic waste products that act to dissolve the particles’ inorganic, ballasting calcium carbonate.
The researchers also found that the amount of calcium carbonate that dissolves depends on how fast the particles sink. They flowed seawater around the particles at slow, intermediate, and fast speeds and found that both slow and fast sinking limit the amount of calcium carbonate that’s dissolved. With slow sinking, particles don’t receive as much oxygen from their surroundings, which essentially suffocates any hitchhiking bacteria. When particles sink quickly, bacteria may be sufficiently oxygenated, but any waste products that they produce can be easily flushed away before they can dissolve the particles’ calcium carbonate.
At intermediate speeds, there is a sweet spot: Bacteria are sufficiently oxygenated and can also build up enough waste, enabling the microbes to efficiently dissolve calcium carbonate.
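One way to see why a peak appears at intermediate speeds is with a toy model; this is an illustration of the qualitative argument above, not the study’s analysis. Oxygen delivery is assumed to rise with flow speed while waste retention falls with it, so their product, a stand-in for the dissolution rate, is largest somewhere in between.

```python
import numpy as np

# Hypothetical sinking speeds, in arbitrary units.
speeds = np.linspace(0.1, 10, 200)

oxygen_supply = speeds / (1.0 + speeds)   # saturating increase with flow
waste_retention = 1.0 / (1.0 + speeds)    # acidic waste is flushed away faster at high flow
dissolution = oxygen_supply * waste_retention

best = speeds[np.argmax(dissolution)]
print(f"Dissolution peaks at an intermediate speed of ~{best:.1f} (arbitrary units)")
```

Too slow, and the bacteria are starved of oxygen; too fast, and their waste is washed away; the toy model peaks in the middle, echoing what the experiments showed.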
Overall, the work shows that bacteria can have a significant effect on marine snow’s ability to sink and sequester carbon in the deep ocean. Bacteria can be found everywhere, and particularly in the shallower ocean regions. Even if macroscale conditions in these upper layers should not dissolve calcium carbonate, the study finds bacteria working at the microscale most likely do.
The findings could explain oceanographers’ observations of dissolved calcium carbonate in shallow ocean regions. They also illustrate that bacteria and other microbes may be working against the ocean’s natural ability to sequester carbon, by dissolving marine snow’s ballast and slowing its descent into the deep ocean. As humans consider climate solutions that involve enhancing the ocean’s biological pump, the researchers emphasize that bacteria’s role must be taken into account.
“Insights from this work are vital to predict how ecosystems will respond to marine carbon dioxide removal attempts, and overall how the oceans will change in response to future climate scenarios,” says Benedict Borer, who carried out the study’s experiments as a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences.
This work was supported, in part, by the Simons Foundation, the National Science Foundation, and the Climate Project at MIT.
Improving AI models’ ability to explain their predictions
A new approach could help users know whether to trust a model’s predictions in safety-critical applications like health care and autonomous driving.
In high-stakes settings like medical diagnostics, users often want to know what led a computer vision model to make a certain prediction, so they can determine whether to trust its output.
Concept bottleneck models are one approach that enables artificial intelligence systems to explain their decision-making process. These models are forced to use a set of concepts, which can be understood by humans, to make a prediction. In new research, MIT computer scientists developed a method that coaxes the model to achieve better accuracy and clearer, more concise explanations.
The concepts the model uses are usually defined in advance by human experts. For instance, a clinician could suggest the use of concepts like “clustered brown dots” and “variegated pigmentation” to predict that a medical image shows melanoma.
But previously defined concepts could be irrelevant or lack sufficient detail for a specific task, reducing the model’s accuracy. The new method extracts concepts the model has already learned while it was trained to perform that particular task, and forces the model to use those, producing better explanations than standard concept bottleneck models.
The approach utilizes a pair of specialized machine-learning models that automatically extract knowledge from a target model and translate it into plain-language concepts. In the end, their technique can convert any pretrained computer vision model into one that can use concepts to explain its reasoning.
“In a sense, we want to be able to read the minds of these computer vision models. A concept bottleneck model is one way for users to tell what the model is thinking and why it made a certain prediction. Because our method uses better concepts, it can lead to higher accuracy and ultimately improve the accountability of black-box AI models,” says lead author Antonio De Santis, a graduate student at Polytechnic University of Milan who completed this research while a visiting graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT.
He is joined on a paper about the work by Schrasing Tong SM ’20, PhD ’26; Marco Brambilla, professor of computer science and engineering at Polytechnic University of Milan; and senior author Lalana Kagal, a principal research scientist in CSAIL. The research will be presented at the International Conference on Learning Representations.
Building a better bottleneck
Concept bottleneck models (CBMs) are a popular approach for improving AI explainability. These techniques add an intermediate step by forcing a computer vision model to predict the concepts present in an image, then use those concepts to make a final prediction.
This intermediate step, or “bottleneck,” helps users understand the model’s reasoning.
For example, a model that identifies bird species could select concepts like “yellow legs” and “blue wings” before predicting a barn swallow.
But because these concepts are often generated in advance by humans or large language models (LLMs), they might not fit the specific task. In addition, even if given a set of pre-defined concepts, the model sometimes utilizes undesirable learned information anyway, which is a problem known as information leakage.
“These models are trained to maximize performance, so the model might secretly use concepts we are unaware of,” De Santis explains.
The MIT researchers had a different idea: Since the model has been trained on a vast amount of data, it may have learned the concepts needed to generate accurate predictions for the particular task at hand. They sought to build a CBM by extracting this existing knowledge and converting it into text a human can understand.
In the first step of their method, a specialized deep-learning model called a sparse autoencoder selectively takes the most relevant features the model learned and reconstructs them into a handful of concepts. Then, a multimodal LLM describes each concept in plain language.
This multimodal LLM also annotates images in the dataset by identifying which concepts are present and absent in each image. The researchers use this annotated dataset to train a concept bottleneck module to recognize the concepts.
They incorporate this module into the target model, forcing it to make predictions using only the set of learned concepts the researchers extracted.
Controlling the concepts
They overcame many challenges as they developed this method, from ensuring the LLM annotated concepts correctly to determining whether the sparse autoencoder had identified human-understandable concepts.
To prevent the model from using unknown or unwanted concepts, they restrict it to use only five concepts for each prediction. This also forces the model to choose the most relevant concepts and makes the explanations more understandable.
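To make the architecture concrete, here is a minimal sketch of a concept bottleneck head in PyTorch. The dimensions and names are assumptions chosen for illustration (a frozen 512-dimensional backbone, 120 extracted concepts, 200 classes), and this is not the authors’ implementation; it only shows the two key ideas: the final prediction is computed solely from concept scores, and each prediction is restricted to the five most activated concepts.

```python
import torch
import torch.nn as nn

class ConceptBottleneckHead(nn.Module):
    """Illustrative concept bottleneck: features -> concept scores -> class prediction."""

    def __init__(self, feature_dim: int, num_concepts: int, num_classes: int, top_k: int = 5):
        super().__init__()
        self.concept_scorer = nn.Linear(feature_dim, num_concepts)  # predicts concept presence
        self.classifier = nn.Linear(num_concepts, num_classes)      # prediction uses concepts only
        self.top_k = top_k

    def forward(self, features: torch.Tensor):
        concept_probs = torch.sigmoid(self.concept_scorer(features))

        # Keep only the top-k most activated concepts per image, zeroing out the rest,
        # so each prediction is explained by a handful of concepts.
        _, topk_idx = concept_probs.topk(self.top_k, dim=-1)
        mask = torch.zeros_like(concept_probs).scatter_(-1, topk_idx, 1.0)
        sparse_concepts = concept_probs * mask

        return self.classifier(sparse_concepts), sparse_concepts

# Example: backbone features for a batch of four images.
head = ConceptBottleneckHead(feature_dim=512, num_concepts=120, num_classes=200)
class_logits, concepts_used = head(torch.randn(4, 512))
```

Because the classifier sees only the sparse concept vector, a user can read off which handful of concepts drove each prediction.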
When they compared their approach to state-of-the-art CBMs on tasks like predicting bird species and identifying skin lesions in medical images, their method achieved the highest accuracy while providing more precise explanations.
Their approach also generated concepts that were more applicable to the images in the dataset.
“We’ve shown that extracting concepts from the original model can outperform other CBMs, but there is still a tradeoff between interpretability and accuracy that needs to be addressed. Black-box models that are not interpretable still outperform ours,” De Santis says.
In the future, the researchers want to study potential solutions to the information leakage problem, perhaps by adding additional concept bottleneck modules so unwanted concepts can’t leak through. They also plan to scale up their method by using a larger multimodal LLM to annotate a bigger training dataset, which could boost performance.
“I’m excited by this work because it pushes interpretable AI in a very promising direction and creates a natural bridge to symbolic AI and knowledge graphs,” says Andreas Hotho, professor and head of the Data Science Chair at the University of Würzburg, who was not involved with this work. “By deriving concept bottlenecks from the model’s own internal mechanisms rather than only from human-defined concepts, it offers a path toward explanations that are more faithful to the model and opens many opportunities for follow-up work with structured knowledge.”
This research was supported by the Progetto Rocca Doctoral Fellowship, the Italian Ministry of University and Research under the National Recovery and Resilience Plan, Thales Alenia Space, and the European Union under the NextGenerationEU project.
Personal tech, social media, and the “decline of humanity”
In Compton Lecture at MIT, social psychologist Jonathan Haidt warns of dramatic global decay in cognition, attention spans, and civic life, and urges curbs to tech use.
In the latest of MIT’s Compton Lectures on Wednesday, social psychologist Jonathan Haidt presented a forceful analysis of the damage smartphones and social media are doing to our cognition, our civic fabric, and our children’s wellbeing, while calling for renewed action to ward off their effects.
“Around the world, people are getting diminished,” Haidt said. “Less intelligent, less happy, less competent. And it’s happening very fast … My argument is that if we continue with current trends as AI is coming in, it’s going to accelerate. The decline of humanity is going to accelerate.”
Haidt is the Thomas Cooley Professor of Ethical Leadership at New York University’s Stern School of Business and the author of the recent bestseller “The Anxious Generation,” which suggests that the widespread adoption of social media in the 2010s has been especially damaging to young women, making them prone to anxiety and depression.
But as Haidt has continued to examine the effects of social media on society, he has started focusing on additional issues. Our inability to put our phones away, our compulsion to check social media, and the way we spend hours a day watching short-form videos may be causing problems that go far beyond any rise in anxiety and depression.
“It turns out, it’s not the biggest thing,” Haidt said. “There’s something bigger. It is the destruction of the human capacity to pay attention. Because this is affecting most people, including most adults. And if you imagine humanity with 10 to 50 percent of its attentional ability sucked out of it, there’s not much left. We’re not very capable of doing things if we can’t focus or stay on a task for more than 30 seconds.”
Whatever solution may emerge to these problems, Haidt declared, is going to have to come from “human agency. People see a problem, they figure out a way around it. That’s what I’m hoping to promote here [to] this very important audience. So please consider what I’m saying, these trends, and then work to change them.”
Haidt’s lecture, titled, “Life After Babel: Democracy and Human Development in the Fractured, Lonely World That Technology Gave Us,” was delivered before a capacity audience of over 400 people in MIT’s Huntington Hall (Room 10-250).
The lecture spanned a variety of related topics, with Haidt presenting chart after chart showing declines in cognition, educational achievement, and happiness, all of which appear to have begun soon after the widespread adoption of smartphones in the 2010s. The individual adoption of smartphones, he noted, has been compounded by the way schools brought internet-connected computing devices into classrooms around the same time.
“The biggest, the most costly mistake we’ve ever made in the history of American education [was] to put computers and high tech on people’s desks,” Haidt said.
Distractible students with shorter attention spans are reading fewer books, he noted; some cinema students cannot sit through films. The top quartile of students is continuing to do well, but for most students, proficiency levels have dipped notably since the 2010s.
“Fifty years of progress in education, 50 years of progress, up in smoke, gone,” Haidt said. “We’re back to where we were 50 years ago. That’s pretty big, that’s pretty serious.”
As Haidt mentioned multiple times in his remarks, he is not an opponent of all forms of technology, or even personal communication technology, but rather is seeking to mitigate its harmful effects.
“I love tech, I love modernity, we’re all dependent on it, I love my iPhone,” Haidt said. Just as he finished that sentence, an audience member’s cellphone started ringing loudly — drawing a huge laugh from the audience.
“I did not plant that, that was a truly spontaneous demonstration of what I’m talking about,” Haidt said.
Haidt was introduced by MIT President Sally A. Kornbluth, who called him “a leading voice for reforming society’s relationship with technology.” She praised Haidt’s work, noting that he wants to “encourage us to imagine a more positive role for technology in humanity’s future.”
The Karl Taylor Compton Lecture Series was introduced in 1957. It is named for MIT’s ninth president, who led the Institute from 1930 to 1948 and also served as chair of the MIT Corporation from 1948 to 1954.
Compton, as Kornbluth observed, helped MIT evolve from being more strictly an engineering school into “a great global university” with “a new focus on fundamental scientific research.” During World War II, she added, Compton “helped invent the longstanding partnership between the federal government and America’s research universities.”
Haidt received his undergraduate degree from Yale University and his PhD from the University of Pennsylvania. He taught on the faculty at the University of Virginia for 16 years before joining New York University. He has written several widely discussed books about contemporary civic life. Haidt observed that the problems stemming from device distraction and compulsion appear to have hit so-called Gen Z — those born from roughly the mid-1990s to the early 2010s — especially hard, though he emphasized that people in that cohort are essentially victims of circumstance.
“I am not blaming Gen Z,” Haidt said. “I am saying we raised our kids in a way — we allowed the technology companies to take over childhood. We allowed a few giant companies to own our children’s attention, to show them millions of short videos, to destroy their ability to pay attention, to stop them from reading books, and this is the result.”
For a portion of his remarks, Haidt also examined the consequences of social media for politics, showing data that chart the global diminishment of democracy since the 2010s, while the world has become soaked in misinformation and conflictual online interactions.
“That, I think, is what digital technology has done to us,” Haidt said. “It was supposed to connect us, but instead it has broken things, divided us, and made it very, very hard to ever have common facts, common truths, common stories again.”
Towards the end of his remarks, Haidt also speculated that the effects of using AI will be corrosive as well, intellectually and psychologically.
“AI is not exactly going to make us better at interacting with human beings,” Haidt said.
With all this in mind, what is to be done, to limit the intellectual and social damage from tech devices and social media? For one thing, Haidt suggested, we should be less impressed by high-tech innovations and social media.
“We need to disenthrall ourselves from technology,” Haidt said, paraphrasing a line written by President Abraham Lincoln. He added: “I suggest that we have a generally negative view … of social media and of AI.” This kind of “more emotionally negative or ambivalent view” will make it easier for us to reverse the way technology seems to control us.
As a practical matter, Haidt suggested, that means taking steps to limit our exposure to technology. His own public-advocacy group, The Anxious Generation Movement, suggests a set of four reforms: No smartphones for kids before they are high-school age; no social media before age 16; making schools phone-free, from bell to bell; and giving kids more independence, free play, and responsibility in the world.
Certainly there is movement toward some of these concepts. Some school districts in the U.S. are banning or limiting phone usage; Australia has also instituted a ban on social media for anyone under 16, while a handful of other countries have announced similar plans.
“There’s a gigantic techlash happening right now,” Haidt suggested. For all the sudden changes technology has introduced within the last 15 years, it is still possible, for now, for people to find a way out of our tech-induced predicament.
“The good news is, there is human agency,” Haidt said.
Seeds of something different
Kate Brown’s book, “Tiny Gardens Everywhere,” examines the hidden history of urban farming, its extensive use, and the politics of growing food.
In Berlin in the early 1870s, tourists began visiting a neighborhood called Barackia. It did not have museums, palaces, or any other typical attractions. Barackia was a working-class neighborhood where people grew their own food, lived in small dwellings, and established communal arrangements outside the normal reach of government. For a while, anyway: In 1872, authorities moved in and cleared out Barackia.
Still, the concept of small urban farming caught on, and by 1900, about 50,000 Berlin households were growing food, often in so-called arbor colonies. The practice has never really been abandoned: Today, by law, Germany provides residents the right to garden, still a very popular activity in urban areas.
“In a little space, you can grow a lot of produce,” says MIT Professor Kate Brown, author of a new history of urban gardening. “Once you set things up, it need not take too much of your time. You can have another job and still grow food. You go to Berlin, and many German cities, and you’re surrounded by these allotment gardens.”
But as the residents of Barackia found out, there is a politics that comes with growing your own food on common land. Other interests may want to claim or at least control the land themselves. Or they may want to tap into the labor being applied to gardening. One way or another, when many people start gardening for themselves, core questions about the organization of society seem to sprout up, too.
Brown examines urban gardening and its politics in her book, “Tiny Gardens Everywhere: The Past, Present, and Future of the Self-Provisioning City,” published by W.W. Norton. Brown is the Thomas M. Siebel Distinguished Professor in History of Science within MIT’s Program in Science, Technology, and Society. In a book with global scope, ranging from Estonia to Amsterdam and Washington, Brown contends that urban gardening has many positive spillover effects, from health and environmental benefits to community-building — apart from periods of pushback when others are trying to eliminate it.
“Community after community, people work together to create food provisioning practices,” Brown says. “And after people come together for food and gardening, then they start to solve other problems they have.”
Whose land?
“Tiny Gardens Everywhere” was several years in the making, featuring extensive archival research interspersed with firsthand material. Brown’s story begins in England, which had a very long tradition of people farming on common land, often in ingenious, productive ways. “Every bit of space was used,” Brown says.
Then in the late 18th century, the advent of “enclosures” for wealthy landowners privatized much land and changed social life for many. Poorer residents, even when given allotments, found them too small for self-sustaining farming.
“Private property is largely an English invention of the late 18th century,” Brown says. “Before that, and in many parts of the world to this day, people live with a communal sense of the ownership of the land.”
In Brown’s interpretation, the enclosure movement did not just claim more land for Britain’s upper class. In an industrializing society, it forced peasants into the factory labor force, whether in cities or in rural mills.
“Really what they were doing when they were enclosing land was trying to control labor, as much as controlling land,” Brown says. “Because of their reliance on the commons, peasants were self-sufficient. Who wants to go work in a factory when you could be out having fun in the forest? Expelling people was a way to force them to become homeless, the landless proletariat, with nothing to sell but their labor, for 10 or 18 hours a day.”
As Brown chronicles in detail, conflicts between communal agriculture and propertied classes have often arisen since then, in varying forms. And sometimes, in now-surprising places, because urban gardening has been more extensive than we realize.
A core section of “Tiny Gardens Everywhere” focuses on Washington, in the middle of the 20th century. During the Great Migration, which started a few decades earlier, African Americans moved north en masse, resettling in cities. They brought extensive knowledge with them about agricultural practices. In the part of Washington east of the Anacostia River, Black neighborhoods relied heavily on local gardening.
“They set up workers’ cooperatives and food cooperatives,” Brown observes. Despite often living in difficult circumstances, she adds, “I think it’s very interesting that people found really smart ways to adapt. If the neighborhood had no garbage collection, they’ll compost. No sewers, they’ll compost.”
Over time, though, authorities started claiming more land, designating homes to be torn down, and restricting the ability of residents to garden. And as Brown chronicles in the book, local officials have used restrictions on urban gardening as a form of social control, with one outcome being a homogenized social and physical landscape characterized by grass lawns for the affluent.
How much food?
Even if urban gardening has been fairly common in the past, it is natural to ask: How much food can it really provide? As Brown sees it, there is not one simple answer to that question. During World War II, for instance, victory gardens provided about 40 percent of all produce grown in the U.S. More recently, in 1996, 91 percent of the potatoes Russians ate came from urban allotment gardens on 1.5 percent of the country’s arable land.
As Brown also points out in the book, we may not be growing as much produce on giant farms as we think. Only 2 percent of agricultural land in the U.S. is used to produce fruit and vegetables, for instance. The U.S., as a variety of analysts and writers have observed, has corn- and soy-heavy agricultural systems at its largest scales, principally yielding corn-based products. That means, Brown says, “They’re really inefficiently [working] to produce ethanol, corn syrup, chips, and cookies.”
In sum, she adds, “Yes, I do think it’s possible to take an urban space and grow a good part of the fruits and vegetables that people need there.”
It is possible, Brown believes, for things to change on this front. For instance, Florida, Illinois, and Maine, three fairly different states in terms of politics, all have laws providing the right to garden. Oklahoma has a similar bill in the works.
“I think this approach to looking at our right to grow food, to self-provision, to step outside of markets for our most essential needs, is something that represents a unifying set of desires in our hyperpolarized political landscape,” Brown says.
Other scholars have praised “Tiny Gardens Everywhere.” Sunil Amrith, a professor of history at Yale University, has said that Brown uses “enviable skill, craft, and insight” to show “that the past of small-scale urban provisioning contains the seeds of a more resilient future for us all.”
For her part, Brown hopes the book will not only appeal to readers, but spur them to become more active about the issue, as gardeners, local policy advocates, or both.
“One of the drumbeats of this book is that people do — and maybe we all should — win the right to garden,” Brown says.
Studying the genetic basis of disease to explore fundamental biological questions
Eliezer Calo’s studies of craniofacial malformations have yielded insight into protein synthesis and embryonic development.
When Associate Professor Eliezer Calo PhD ’11 was applying for faculty positions, he was drawn to MIT not only because it’s his alma mater, but also because the Department of Biology places high value on exploring fundamental questions in biology.
In his own lab, Calo studies how craniofacial malformations arise. One motivation is to seek new treatments for those conditions, but another is to learn more about fundamental biological processes such as protein synthesis and embryonic development.
“We use genes that are mutated in disease to uncover fundamental biology,” Calo says. “Mutations that happen in disease are an experiment of nature, telling us that those are the important genes, and then we follow them up not only to understand the disease, but to fundamentally understand what the genes are doing.”
Calo’s work has led to new insights into how ribosomes form and how they control protein synthesis, as well as how the nucleolus, the birthplace of ribosomes in eukaryotic cells, has evolved over hundreds of millions of years.
In addition to earning his PhD at MIT, Calo is also an alumnus of MIT’s Summer Research Program (MSRP), which helps to prepare undergraduate students to pursue graduate education. Since starting his lab at MIT, Calo has made a point to serve as a research mentor for the program every summer.
“I feel that it’s important to pay back to the program that helped me realize what I wanted to do,” he says.
A nontraditional path
Growing up in a mountainous region of Puerto Rico, Calo was the first person from his family to finish high school. While attending the University of Puerto Rico at Rio Piedras, the largest university in Puerto Rico, he explored a few different majors before settling on chemistry.
One of Calo’s chemistry professors invited him to work in her lab, where he did a research project studying the pharmacokinetics of cell receptors found on the surface of astrocytes, a type of brain cell.
“It was a good mix of biology and chemistry,” he says. “I think that that was the catalyst to my pursuit of a career in the sciences.”
He learned about MSRP from Mandana Sassanfar, a senior lecturer in biology at MIT and director of outreach for several MIT departments, at an event hosted by the University of Puerto Rico for students interested in careers in science. He was accepted into the program, and during the summer after his junior year, he worked in the lab of Stephen Bell, an MIT professor of biology. That experience, he says, was transformative.
“Without that experience, I would have probably chosen another career,” Calo says. In Puerto Rico, “science was fun, but it was a struggle. We had to make everything from scratch, and then you spend more time making reagents than doing the experiments. When I came to MIT, I was always doing experiments.”
During that time, he realized he liked working in biology labs more than chemistry labs, so when he applied to graduate school, he decided to move into biology. He applied to five schools, including MIT. “Once MIT sent me the acceptance, I just had to say yes. There was no saying no.”
At MIT, Calo thought he might study biochemistry, but he ended up focusing on cancer biology instead, working with Jacqueline Lees, an MIT biology professor, to study the role of the tumor suppressor protein Rb.
After finishing his PhD, Calo felt burnt out and wasn’t sure if he wanted to continue along the academic track. His thesis committee advisors encouraged him to do a postdoc just to try it out, and he ended up going to Stanford University, where he fell in love with California and switched to a new research focus. Working with Joanna Wysocka, a professor of developmental biology at Stanford, he began investigating how development is affected by the regulation of proteins that make up cellular ribosomes — a topic his lab still studies today.
Returning to MIT
When searching for faculty jobs, Calo focused mainly on schools in California, but also sent an application to MIT. As he was deciding between offers from MIT and the University of California at Berkeley, a phone call from Angelika Amon, the late MIT professor of biology, convinced him to take the cross-country leap back to MIT.
“She had me on the phone for more than one hour telling me why I should come to MIT,” he recalls. “And that was so heartwarming that I could not say no.”
Since starting his lab in 2017, Calo has been studying how defects in the production of ribosomes give rise to diseases, in particular craniofacial malformations such as cleft palate.
Ribosomes, the organelles where protein synthesis occurs, consist of two subunits made of about 80 proteins. A longstanding question in biology has been why mutations that affect ribosome formation appear to primarily affect the development of the face, but not the rest of the body.
In a 2018 study, Calo discovered that this is because the mutations that affect ribosomes can have secondary effects that influence craniofacial development. In embryonic cells that form the face, a mutation in a gene called TCOF1 activates p53 at a higher level than in other embryonic cells. High levels of p53 cause some of those cells to undergo programmed cell death, leading to Treacher-Collins Syndrome, a disorder that produces underdeveloped bones in the jaw and cheek.
His lab has shown that p53 overactivation is also responsible for craniofacial disorders caused by mutations in RNA splicing factors.
Calo’s work on ribosome formation also led him to explore another cell organelle known as the nucleolus, whose role is to help build ribosomes. In 2023, he found that TCOF1, the same gene implicated in craniofacial malformations, is also critical for forming the three compartments that make up the nucleolus.
That finding, he says, could help to explain a major evolutionary shift that occurred around 300 million years ago, when the nucleolus transitioned from two to three compartments. This “tripartite” nucleolus is found in all reptiles, birds, and mammals.
“That was quite surprising,” Calo says. “Studying disease-related genes allowed us to understand a very fundamental biological process of how the nucleolus evolved, which has been a question in the field that nobody could figure out the answer for.”
X-raying rocks reveals their carbon-storing capacity
New research by MIT geophysicists could assist efforts to remove carbon from the atmosphere and store it underground.
To avoid the worst effects of climate change, many billions of metric tons of industrially generated carbon dioxide will have to be captured and stored away by the end of this century. One place to store such an enormous amount of greenhouse gas is in the Earth itself. If carbon dioxide were pumped into the cracks and crevices of certain underground rocks, the fluid would react with the rocks and solidify carbon into minerals. In this way, carbon dioxide could potentially be locked in the rocks in stable form for millions of years without escaping back into the atmosphere.
Some pilot projects are already underway to demonstrate such “carbon mineralization.” These efforts have shown promising results in terms of successfully mineralizing a large fraction of injected CO2. However, it’s less clear how the rocks will evolve in response. As carbonate minerals build up, could they clog up cracks and crevices, and ultimately limit the amount of CO2 that can be stored there?
In a new study appearing today in the journal AGU Advances, MIT geophysicists explored this question by injecting fluid into rocks and using X-ray imaging to reveal how the rocks’ pores and cracks changed as the fluid mineralized over time.
Their experiments showed that as fluid was pumped into a rock, the rock’s permeability (the ability of fluid to flow through the rock) dropped sharply. Meanwhile, the rock’s porosity (its total amount of empty space, in the form of pores, cracks, and crevices) remained roughly the same.
The researchers found that the minerals were precipitating out of the fluid in the narrower tunnels connecting larger pores, preventing the fluid from flowing into larger pore spaces. Even so, the fluid did keep flowing through the rock, albeit at a lower rate, and minerals continued to form in some cracks and crevices.
“This study gives you information about what the rock does during this complex mineralization process, which could give you ideas of how to engineer it in your favor,” says study co-author Matěj Peč, an associate professor of geophysics at MIT.
“If you were injecting CO2 into the Earth and saw a massive drop in permeability, some operators might think they clogged up the well,” adds co-author Jonathan Simpson, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “But as this study shows, in some cases, it might not matter that much. As long as you maintain some flow rate, you could still form minerals and sequester carbon.”
The study’s co-authors include EAPS Research Scientist Hoagy O’Ghaffari as well as Sharath Mahavadi and Jean Elkhoury of the Schlumberger-Doll Research Center.
Drilling down
Basalt is a type of volcanic rock, formed from erupted lava, that is found in places such as Hawaii and Iceland. When fresh, it is highly porous, with many pores, cracks, and fractures running through it. The rock is also rich in iron, calcium, and magnesium. When these elements come in contact with fluid that is rich in carbon dioxide, they can dissolve and mix with the CO2, eventually forming new carbonate minerals such as calcite or dolomite.
A project based in Iceland and piloted by the company CarbFix is currently injecting CO2-rich water into the region’s underground basalt to see how much of the gas can be converted and stored as minerals in the rock. The company’s runs have shown that more than 95 percent of the CO2 injected into the ground turns into minerals within two years. The project is proving that the chemistry works: CO2 can be stored as stone.
But the MIT team wondered how this mineralization process would change the basalt itself and its capacity to store carbon over time.
“Most studies investigating carbon mineralization have focused on optimizing the geochemistry, but we wanted to know how mineralization would affect real reservoir rocks,” Peč says.
Rocky X-rays
The team set out to study how the permeability and porosity of basalt changes as carbonate-rich fluid is pumped into and mineralized throughout the rock.
“Porosity refers to the total amount of open space in the rock, which could be in the form of vesicles, or fractures that connect vesicles, or even areas between sand grains,” Simpson explains. “Because there is so much variability in porosity patterns, there is no one-to-one relationship between porosity and permeability. You could have a lot of pores that are not necessarily connected. So, even if 20 percent of the rock is porous, if they’re not connected, then permeability would be zero.”
“The details of that are important to understand for all these problems of injecting fluids into the subsurface,” Peč emphasizes.
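A toy illustration of that distinction, using hypothetical 4-by-4 grids rather than anything from the study: two samples with identical porosity can differ completely in permeability, depending on whether their open cells form a connected path from one side to the other.

```python
from collections import deque

# 1 = open pore space, 0 = solid rock. Both grids are 50 percent porous,
# but only the first has open cells connecting the left edge to the right edge.
connected = [
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
]
disconnected = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]

def porosity(grid):
    cells = [c for row in grid for c in row]
    return sum(cells) / len(cells)

def permeable(grid):
    """True if open cells connect the left edge to the right edge (4-neighbor BFS)."""
    rows, cols = len(grid), len(grid[0])
    queue = deque((r, 0) for r in range(rows) if grid[r][0])
    seen = set(queue)
    while queue:
        r, c = queue.popleft()
        if c == cols - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

for name, grid in (("connected", connected), ("disconnected", disconnected)):
    flow = "permeable" if permeable(grid) else "impermeable"
    print(f"{name}: porosity {porosity(grid):.0%}, {flow}")
```

In the experiments, mineralization plays the role of plugging a few of those narrow connecting cells: porosity barely changes, but permeability collapses.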
For their experiments, the team used samples of basalt that Peč and others collected during a trip to Iceland in 2023. They placed small samples of basalt in a custom-built holder connected to two tubes, through which they flowed two fluids containing solutions that, when mixed, quickly form carbonate minerals. The team chose this combination of fluids in order to speed up the mineralization process.
In the actual process of injecting CO2 into the ground, CO2 is mixed with water. When it is pumped through rock, the fluid first goes through a “dissolution” phase, in which it draws elements such as iron, calcium, and magnesium out from the basalt and into the CO2-rich fluid. This dissolution process can take some time, before the mineralization process, in which CO2 mixes with the drawn-out elements, can proceed.
The researchers used two different fluids that quickly mineralize when combined, in order to skip over the dissolution phase and efficiently study the effects of mineralization. By running the experiments inside an X-ray CT scanner (similar to the ones used for medical imaging in hospitals), the team was able to watch the mineralization process occur within the rock at an unprecedented level of detail, taking high-resolution, three-dimensional snapshots of the basalt periodically over several days to weeks as the fluids flowed through.
Their imaging revealed how the pores, cracks, and crevices in the rock evolved, and filled in with minerals as the fluid flowed through over time. Over multiple experiments, they found that the rock’s permeability quickly dropped within a day, by an order of magnitude. The rock’s porosity, however, decreased at a much slower rate. At the end of the longest-duration experiments, only about 5 percent of the original pore space was filled with new minerals.
“Our findings tell us that the minerals are initially forming in really small microcracks that connect the bigger pore spaces, and clogging up those spaces,” Simpson says. “You don’t need much to clog up the tiny microfractures. But when you do clog them up, that really drops the permeability.”
Even after the initial drop in permeability, however, the team could continue to flow fluid through, and minerals continued to form in tight spaces within the rock. This suggests that even when it seems like an underground reservoir is full, it might still be able to store more carbon.
The researchers also monitored the rock with ultrasonic sensors during each experiment and found that the sensor could track even small changes in the rock’s porosity. The less porous, or more filled in the rock was with minerals, the faster sound waves traveled through the material. These results suggest that seismic waves could be a reliable way to monitor the porosity of underground rocks and ultimately their capacity to store carbon.
“Overall, we think that carbon mineralization seems like a promising avenue to permanently store large volumes of CO2,” Peč concludes. “There are plenty of reservoirs and they should be injectable over extended periods of time if our results can be extrapolated.”
This work was supported by MIT’s Advanced Carbon Mineralization Initiative funded by Beth Siegelman SM ’84 and Russ Siegelman ’84, with additional funding from the Chan-Zuckerberg Foundation.
New catalog more than doubles the number of gravitational-wave detections made by LIGO, Virgo, and KAGRA observatories
The latest crop of space-time wobbles includes a variety of heavy, fast-spinning, and lopsided colliding black holes.
When the densest objects in the universe collide and merge, the violence sets off ripples, in the form of gravitational waves, that reverberate across space and time, over hundreds of millions and even billions of years. By the time they pass through Earth, such cosmic ripples are barely discernible.
And yet, scientists are able to detect them, thanks to a global network of gravitational-wave observatories: the U.S.-based National Science Foundation Laser Interferometer Gravitational-Wave Observatory (NSF LIGO), the Virgo interferometer in Italy, and the Kamioka Gravitational Wave Detector (KAGRA) in Japan. Together, the observatories “listen” for faint wobbles in the gravitational field that could have come from far-off astrophysical smash-ups.
Now the LIGO-Virgo-KAGRA (LVK) Collaboration is publishing its latest compilation of gravitational-wave detections, presented in a forthcoming special issue of Astrophysical Journal Letters. From the findings, it appears that the universe is echoing all over with a kaleidoscope of cosmic collisions.
The LVK’s Gravitational-Wave Transient Catalog-4.0 (GWTC-4) comprises detections of gravitational waves from a portion of the observatories’ fourth and most recent observing run, covering May 2023 through January 2024. During this nine-month period, the observatories detected 128 new gravitational-wave “candidates,” meaning that the signals are likely from extreme, far-off astrophysical sources. (The LVK has detected about 300 mergers so far in the fourth run, but not all of these appear yet in the LVK catalog.)
This newest crop more than doubles the size of the gravitational-wave catalog, which previously contained 90 candidates compiled from all three previous observing runs.
“The beautiful science that we are able to do with this catalog is enabled by significant improvements in the sensitivity of the gravitational-wave detectors as well as more powerful analysis techniques,” says LVK member Nergis Mavalvala, who is dean of the MIT School of Science and the Curtis and Kathleen Marble Professor of Astrophysics.
“In the past decade, gravitational wave astronomy has progressed from the first detection to the observation of hundreds of black hole mergers,” says Stephen Fairhurst, a professor at Cardiff University and LIGO Scientific Collaboration spokesperson. “These observations enable us to better understand how black holes form from the collapse of massive stars, probe the cosmological evolution of the universe and provide increasingly rigorous confirmations of the theory of general relativity.”
“Pushing the edges”
Black holes are created when all the matter in a dying star collapses into a single point, which makes them among the densest objects in the universe. They often form in pairs, bound together by gravitational attraction. As they spiral toward each other, they emit enormous amounts of energy in the form of gravitational waves, before merging into a single, more massive black hole.
A binary black hole was the source of the very first gravitational-wave detection, made by NSF’s LIGO observatories in 2015, and colliding black holes are the source of many of the gravitational waves detected since then. Such “bread-and-butter” binaries typically consist of two black holes of similar size (usually several tens of times more massive than the sun) that merge into one larger black hole.
Gravitational waves can also be produced by the collision of a black hole with a neutron star, which is an extremely dense remnant core of a massive star. While the collision of two black holes only produces gravitational waves, a smash-up involving a neutron star can also generate light, which provides more information about the event that scientists can probe. In its first three observing runs, the LVK observatories detected signals from a handful of collisions involving a black hole and neutron star, as well as two collisions between two neutron stars.
The newest detections published today reveal a greater variety of binaries that produce gravitational waves. Beyond the bread-and-butter pairings, the updated catalog includes the heaviest black hole binary detected to date; a binary with asymmetric, lopsided masses; and a binary in which both black holes have exceptionally high spins. The catalog also holds two black hole-neutron star binaries.
“The message from this catalog is: We are expanding into new parts of what we call ‘parameter space’ and a whole new variety of black holes,” says co-author Daniel Williams, a research fellow at the University of Glasgow and a member of the LVK. “We are really pushing the edges, and are seeing things that are more massive, spinning faster, and are more astrophysically interesting and unusual.”
Unusual signals
The LIGO, Virgo, and KAGRA observatories detect gravitational waves using L-shaped, kilometer-scale instruments, called interferometers. Scientists send laser light down the length of each tunnel and precisely measure the time it takes each beam to return to its source. Any slight difference in their timing can mean that a gravitational wave passed through and minutely wobbled the laser’s light.
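For a rough sense of scale, a passing gravitational wave with strain h changes an arm of length L by about h times L. The numbers below are illustrative, not values drawn from this catalog.

```python
# Back-of-the-envelope arm-length change for a LIGO-scale interferometer.
arm_length_m = 4_000      # each LIGO arm is about 4 kilometers long
strain = 1e-21            # representative strain, h = delta_L / L

delta_L = strain * arm_length_m
proton_diameter_m = 1.7e-15

print(f"Arm length change: {delta_L:.1e} m")                          # ~4e-18 m
print(f"Roughly {delta_L / proton_diameter_m:.4f} proton diameters")  # a few thousandths
```

The displacement amounts to a tiny fraction of a proton’s width, which is why the timing comparison between the two laser beams has to be so exquisitely precise.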
For the first segment of the LVK’s fourth observing run, gravitational-wave detections were made using only LIGO’s identical interferometers — one located in Hanford, Washington, and the other in Livingston, Louisiana. Recent upgrades to LIGO’s detectors enabled them to search for signals from binary neutron stars as far out as 360 megaparsecs, or about 1 billion light-years away, and for signals from binaries that include black holes tens of times farther away.
“You can’t ever predict when a gravitational wave is going to come into your detector,” says co-author and LVK member Amanda Baylor, a graduate student at the University of Wisconsin at Milwaukee who was involved in the signal search process. “We could have five detections in one day, or one detection every 20 days. The universe is just so random.”
Among the more unusual signals that LIGO detected in the first phase of the O4 observing run was GW231123_135430, which is the heaviest black hole binary detected to date. Scientists estimate that the signal arose from the collision of two heavier-than-normal black holes, each roughly 130 times as massive as the sun. (Most of the detected merging black holes are around 30 solar masses.) The much heavier black holes of GW231123_135430 suggest that each may be a product of a prior collision of lighter “progenitor” black holes.
Another standout is GW231028_153006, which is a black hole binary with the highest inspiral spin, meaning that both black holes appear to be spinning very fast, at about 40 percent the speed of light. Again, scientists suspect that these black holes were also products of previous mergers that spun them up as they were created from two smaller, inspiraling black holes.
The O4 run also detected GW231118_005626 — an unusually lopsided pair, with one black hole twice as massive as the other.
“One of the striking things about our collection of black holes is their broad range of properties,” says co-author and LVK member Jack Heinzel, an MIT graduate student who contributed to the catalog’s analysis. “Some of them are over 100 times the mass of our sun, others are as small as only a few times the mass of the sun. Some black holes are rapidly spinning, others have no measurable spin. We still don’t completely understand how black holes form in the universe, but our observations offer a crucial insight into these questions.”
Cosmic connections
From the newest gravitational-wave detections, scientists have begun to make connections about the properties of black holes as a population.
“For instance, this dataset has increased our belief that black holes that collided earlier in the history of the universe could more easily have had larger spins than the ones that collided later,” says LVK member Salvatore Vitale, associate professor of physics at MIT and member of the MIT LIGO Lab.
This idea raises interesting questions about what sort of conditions could have spun up black holes in the early universe.
The new detections have also allowed scientists to test Albert Einstein’s general theory of relativity, which describes gravity as a geometric property of space and time.
“Black holes are one of the most iconic and mind-bending predictions of general relativity,” says co-author and LVK member Aaron Zimmerman, associate professor of physics at the University of Texas at Austin, adding that when black holes collide, they “shake up space and time more intensely than almost any other process we can imagine observing. When testing our physical theories, it’s good to look at the most extreme situations we can, since this is where our theories are most likely to break down, and where we have the best chance of discovery.”
Scientists put Einstein’s theory to the test using GW230814_230901, which is one of the “loudest” gravitational-wave signals observed to date. The surprisingly clear signal gave scientists a chance to probe it in detail, to see if any aspects of the signal might deviate from what Einstein’s theory predicts. This signal pushed the limits of their tests of general relativity, passing most with flying colors but illustrating how environmental noise can challenge others in such an extreme scenario.
“So far, the theory is passing all our tests,” Zimmerman says. “But we’re also learning that we have to make even more accurate predictions to keep up with all the data the universe is giving us.”
The updated catalog is also helping scientists to nail down a key mystery in cosmology: How fast is the universe expanding today? Scientists have tried to answer this by measuring a rate known as the Hubble constant. Various methods, using different astrophysical sources, have given conflicting answers.
Gravitational waves offer an alternative way to measure the Hubble constant, since scientists are able to work out, in relatively straightforward fashion, how far these waves traveled from their source.
“Merging black holes have a really unique property: We can tell how far away they are from Earth just from analyzing their signals,” says co-author and LVK member Rachel Gray, a lecturer at the University of Glasgow who was involved in the cosmological interpretations of the catalog’s data. “So, every merging black hole gives us a measurement of the Hubble constant, and by combining all of the gravitational wave sources together, we can vastly improve how accurate this measurement is.”
By analyzing all the gravitational-wave detections in the LVK’s entire catalog, scientists have come up with a new, independent estimate of the Hubble constant, suggesting that the universe is expanding at a rate of 76 kilometers per second per megaparsec (a megaparsec is a distance of about 3.26 million light-years).
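To make that rate concrete, here is a worked example (illustrative arithmetic only, not a result from the catalog): the Hubble constant relates a galaxy's distance to how fast it recedes.

```python
# Illustrative arithmetic: v = H0 * d relates recession speed to distance.
H0 = 76.0             # km/s per megaparsec, the catalog's estimate
distance_mpc = 100.0  # a hypothetical galaxy 100 megaparsecs away

recession_speed_km_s = H0 * distance_mpc
print(f"Recession speed: {recession_speed_km_s:.0f} km/s")  # ~7,600 km/s
```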
“It’s still early days for this method, and we expect to significantly improve our precision as we detect more gravitational wave sources,” Gray says.
“Each new gravitational-wave detection allows us to unlock another piece of the universe’s puzzle in ways we couldn’t just a decade ago,” says Lucy Thomas, who led part of the catalog’s analysis, and is a postdoc in the Caltech LIGO Lab. “It’s incredibly exciting to think about what astrophysical mysteries and surprises we can uncover with future observing runs."
Nitrous oxide, a product of fertilizer use, may harm some soil bacteria
While some N2O is produced naturally at the plant root, agricultural practices can increase its levels, to the detriment of some microbes that support plant growth.
Plant growth is supported by millions of tiny soil microbes competing and cooperating with each other as they perform important roles at the plant root, including improving access to nutrients and protecting against pathogens. As a byproduct of their metabolism, soil microbes can also produce nitrous oxide, or N2O, a potent greenhouse gas that has mostly been studied for its impact on the climate. While some N2O occurs naturally, its production can spike due to fertilizer application and other factors.
While it has long been believed that nitrous oxide doesn’t meaningfully interact with living organisms, a new paper by two MIT researchers shows that it may in fact shape microbial communities, making some bacterial strains more likely to grow than others.
Based on the prevalence of the biological processes disrupted by nitrous oxide, the researchers estimate about 30 percent of all bacteria with sequenced genomes are susceptible to nitrous oxide toxicity, suggesting the substance could play an important and underappreciated role in the intricate microbial ecosystems that influence plant growth.
The researchers have published their findings today in mBio, a journal of the American Society for Microbiology. If their lab findings carry over to agricultural settings, it could influence the way farmers go about everyday tasks that expose crops to spikes in nitrous oxide, such as watering and fertilization.
“This work suggests N2O production in agricultural settings is worth paying attention to for plant health,” says senior author Darcy McRose, MIT’s Thomas D. and Virginia W. Cabot Career Development Professor, who wrote the paper with lead author and PhD student Philip Wasson. “It hasn’t been on people’s radar, but it is particularly harmful for certain microbes. This could be another knock against N2O in addition to its climate impact. With more research, you might be able to understand how the timing of N2O production influences these microbial relationships, and that timing could be managed to improve crop health.”
A toxic gas
Nitrous oxide was shown to be toxic decades ago when researchers realized it can deactivate vitamin B12 in the human body. Since then, it has mostly drawn attention as a long-lived greenhouse gas that can eat away at the ozone layer. But when it comes to agricultural settings, most people have assumed it doesn’t interact with organisms growing in the soil around the plant root, a region called the rhizosphere.
“In general, there’s an assumption that N2O is not harmful at all despite this history of published studies showing that it can be toxic in specific contexts,” says McRose, who joined the faculty of the Department of Civil and Environmental Engineering in 2022. “People have not extended that understanding to microbial communities in the rhizosphere.”
While some studies have shown nitrous oxide sensitivity in a handful of microorganisms, less is known about how it impacts the distribution of microbial communities at the plant root. McRose and Wasson sought to fill that research gap.
They started by looking at a ubiquitous process that cells use to grow called methionine biosynthesis. Methionine biosynthesis can be carried out by enzymes that are dependent on B12 — and by other enzymes that are not. Many bacteria have both types.
Using a well-studied microbe named Pseudomonas aeruginosa, the researchers genetically removed the enzyme that isn’t dependent on B12 and found the microbe became sensitive to nitrous oxide, with its growth harmed even by nitrous oxide it produced itself.
Next the researchers looked at a synthetic microbial community from the plant Arabidopsis thaliana, finding many root-based microbes were also sensitive to nitrous oxide. Combining sensitive microbes with nitrous oxide-producing bacteria hampered their growth.
“This suggests that N2O-producing bacteria can affect the survival of their immediate neighbors,” Wasson explains. Together, the experiments confirmed the researchers’ suspicion that the production of nitrous oxide can hamper the growth of soil bacteria dependent on vitamin B12 to make methionine.
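The 30 percent estimate cited above follows from this kind of logic: bacteria whose methionine synthesis relies only on the B12-dependent enzyme (commonly annotated metH), with no B12-independent alternative (metE), would be expected to suffer when N2O inactivates B12. The sketch below is a hypothetical illustration of that classification; the genome entries and the simple two-gene rule are assumptions, not the study's actual bioinformatic pipeline.

```python
# Hypothetical sketch: flag genomes as potentially N2O-sensitive if they encode only
# the B12-dependent methionine synthase (metH) and lack the B12-independent enzyme
# (metE). The genome entries below are invented for illustration.
genomes = {
    "Pseudomonas_aeruginosa_wildtype": {"metH", "metE"},  # has both routes -> tolerant
    "Pseudomonas_aeruginosa_delta_metE": {"metH"},        # metE removed -> sensitive
    "soil_isolate_A": {"metE"},                           # B12-independent only -> tolerant
    "soil_isolate_B": {"metH"},                           # B12-dependent only -> sensitive
}

def n2o_sensitive(gene_set):
    """Sensitive if methionine synthesis depends exclusively on the B12 route."""
    return "metH" in gene_set and "metE" not in gene_set

sensitive = [name for name, genes in genomes.items() if n2o_sensitive(genes)]
fraction = len(sensitive) / len(genomes)
print(sensitive, f"-> {fraction:.0%} of this toy set flagged as sensitive")
```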
“These results suggest nitrous oxide producers shape microbial communities,” McRose says. “In the lab the result is very clear, and the work goes beyond just looking at a single organism. The co-culture experiments aren’t the same as a study in the field, but it’s a strong demonstration.”
From the lab to the farm
In farms, soil commonly experiences spikes of nitrous oxide for days or weeks from the addition of nitrogen fertilizer, rainfall, thawing, and other events. The researchers caution that their lab experiments are only the first step toward understanding how nitrous oxide affects microbial populations in agricultural settings.
Wasson calls the paper a proof of concept and plans to study agricultural soil next.
“In agricultural environments, N2O levels have historically been high,” Wasson says. “We want to see if we can detect a signature for this N2O exposure through genome sequencing studies, where the only microbes sticking around are not sensitive to N2O. This is the obvious next step.”
McRose says the findings could lead to a new way for researchers and farmers to think about nitrous oxide.
“What’s important and exciting about this case is it predicts that microbes with one version of an enzyme are going to be sensitive to N2O and those with a different version of the enzyme are not going to be sensitive,” McRose says. “This suggests that in the environment, exposure to N2O is going to select for certain types of organisms based on their genomic content, which is a highly testable hypothesis.”
The work was supported, in part, by the MIT Research Support Committee and an MIT Health and Life Sciences Collaborative Graduate Fellowship (HEALS).
How some skills become second nature
Patterns of gaze and attention can reveal how some people unconsciously figure out how to master a task, new research shows.
Expertise isn’t easy to pass down. Take riding a bike: A seasoned cyclist might talk a beginner through the basics of how to sit and when to push off. But other skills, like how hard to pedal to keep balanced, are more intuitive and harder to articulate. This implicit know-how is known as tacit knowledge, and very often, it can only be learned with experience and time.
But a team of MIT engineers wondered: Could an expert’s unconscious know-how be accessed, and even taught, to quickly bring a novice up to an expert’s level?
The answer appears to be “yes,” at least for a particular type of visual-learning task.
In a study published today in the Journal of Neural Engineering, the engineers identified tacit knowledge in volunteers who were tasked with classifying images of various shapes and patterns. As the volunteers were shown images to organize, the team recorded their eye movements and brain activity to measure their visual focus and cognitive attention, respectively.
The measurements showed that, over time, the volunteers shifted their focus and attention to a part of each image that made it easier to classify. However, when asked directly, the volunteers were not aware that they had made such a shift. The researchers concluded that this unconscious shift in attention and focus was a form of tacit knowledge that the volunteers possessed, even if they could not articulate it. What’s more, when the volunteers were made aware of this tacit knowledge, their accuracy in classifying images improved significantly.
The study is the first to directly show that visual attention can reveal unconscious, tacit knowledge during image classification tasks. It also finds for the first time that bringing this concealed knowledge to the surface can enhance experts’ performance.
While the results are specific to the study’s experiment, the researchers say they suggest that some forms of hidden know-how can be made explicit and applied to boost one’s learning experience. They suspect that tacit knowledge could be accessed for disciplines that require keen observation skills, including certain physical trades and crafts, sports, and image analysis, such as medical X-ray diagnoses.
“We as humans have a lot of knowledge, some that is explicit that we can translate into books, encyclopedias, manuals, equations. The tacit knowledge is what we cannot verbalize, that’s hidden in our unconscious,” says study author Alex Armengol-Urpi, a research scientist in MIT’s Department of Mechanical Engineering. “If we can make that knowledge explicit, we can then allow for it to be transferred easier, which can help in education and learning in general.”
The study’s co-authors include Andrés F. Salazar-Gomez, research scientist at the MIT Media Lab; Pawan Sinha, professor of vision and computational neuroscience in MIT’s Department of Brain and Cognitive Sciences; and Sanjay Sarma, the Fred Fort Flowers (1941) and Daniel Fort Flowers (1941) Professor in Mechanical Engineering.
Hidden gaze
The concept of tacit knowledge is credited to the scientist and philosopher Michael Polanyi, who in the mid-20th century was the first to investigate the notion that “we know more than we can tell.” His insights revealed that humans can hold a form of knowledge that is internalized, almost second nature, and often difficult to express or translate to others.
Since Polanyi’s work, many studies have highlighted how tacit knowledge may play a part in perfecting certain skills, spanning everything from diagnosing medical images to discerning the sex of cats from images of their faces.
For Armengol-Urpi, these studies raised a question: Could a person’s tacit knowledge be revealed through unconscious signals, such as patterns in their eye movements? His PhD work focused on visual attention, and he had developed methods to study how humans focus their attention, by using cameras to follow the direction of their gaze, and electroencephalography (EEG) monitors to record their brain activity. In his research, he learned of a previous study that used similar methods to investigate how radiologists diagnose nodules in X-ray images. That study showed that the doctors unconsciously focused on areas of an image that helped them to correctly detect the nodules.
“That paper didn’t focus on tacit knowledge, but it suggested that there are some hidden clues in our gaze that could be explored further,” Armengol-Urpi says.
The shape of knowledge
For their new study, the team looked at whether they could identify signs of tacit knowledge from measurements of visual focus and attention. In their experiment, they asked 30 volunteers to look sequentially at over 120 images. The volunteers could look at each image for several seconds and were then asked to classify it as belonging to either group A or group B before being shown the next image.
Each image contained two simple shapes, one on each side — a square, a triangle, or a circle, in any combination, with different colors and patterns for each shape. The researchers designed the images so that each could be classified into one of two groups based on an intricate combination of shape, color, and pattern. Importantly, only one side of each image was relevant for the classification.
The volunteers, however, were given no guidelines on how to classify the images. Therefore, for about the first half of the experiment, they were considered “novices,” and more or less guessed at their classifications. Over time, and many more images, their accuracy improved to a level that the researchers considered “expert.” Throughout the experiment, the team used cameras to follow each participant’s eye movements, as a measure of visual focus.
They also outfitted volunteers with EEG sensors to record their brain waves, which they used as a measure of cognitive attention. They designed each image to show two shapes, each of which flickered at different, imperceptible frequencies. They found they could identify where a volunteer’s attention landed, based on which shape’s flicker their brain waves synced up with.
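This frequency-tagging approach can be illustrated with a simple analysis: compare the EEG power at the two flicker frequencies and attribute attention to whichever shape's frequency dominates. The sketch below is a hypothetical illustration; the sampling rate, flicker frequencies, and simulated signal are assumptions, not details from the study.

```python
import numpy as np

# Hypothetical frequency-tagging illustration: two shapes flicker at different rates,
# and attention is attributed to the flicker frequency with more power in the EEG.
fs = 256.0                          # sampling rate in Hz (assumed)
t = np.arange(0, 4.0, 1.0 / fs)     # 4 seconds of data
freq_left, freq_right = 12.0, 15.0  # flicker rates of the two shapes (assumed)

# Simulated EEG: a stronger response at the attended (left) shape's frequency, plus noise.
rng = np.random.default_rng(0)
eeg = (1.0 * np.sin(2 * np.pi * freq_left * t)
       + 0.3 * np.sin(2 * np.pi * freq_right * t)
       + 0.5 * rng.standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(eeg))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

def power_at(f):
    """Spectral magnitude at the frequency bin closest to f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

attended = "left" if power_at(freq_left) > power_at(freq_right) else "right"
print(f"Estimated focus of attention: {attended} shape")
```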
For each volunteer, the team created maps of where their gaze and attention were focused, both during their novice and expert phases. Overall, these maps showed that in the beginning, the volunteers focused on all parts of an image as they tried to make sense of how to classify it. Toward the end, as they got a grasp of the exercise and improved their accuracy, their attention shifted to just one side of each image. This side happened to be the side that the researchers designed to be most relevant, while the other side was just random noise.
The maps showed that the volunteers picked up some knowledge of how to accurately classify the images. But when they were given a survey and asked to articulate how they learned the task, they always maintained that they focused on each entire image. It seemed their actual shift in focus was an unconscious, tacit skill.
“They were unconsciously focusing their attention on the part of the image that was actually informative,” Armengol-Urpi says. “So the tacit knowledge they had was hidden inside them.”
Going a step further, the team then showed each participant the maps of their gaze and attention, and how the maps changed from their novice to expert phases. When they were then shown additional images, the volunteers seemed to use this once-tacit knowledge, and further improved their classification accuracy.
“We are currently extending this approach to other domains where tacit knowledge plays a central role,” says Armengol-Urpi, who is exploring tacit knowledge in skilled crafts and sports such as glassblowing and table tennis, as well as in diagnosing medical imaging. “We believe the underlying principle — capturing and reinforcing implicit expertise through physiological signals — can generalize to a wide range of perceptual and skill-based domains.”
This research was supported, in part, by Takeda Pharmaceutical Company.
A “ChatGPT for spreadsheets” helps solve difficult engineering challenges faster
The approach could help engineers tackle extremely complex design problems, from power grid optimization to vehicle design.
Many engineering challenges come down to the same headache — too many knobs to turn and too few chances to test them. Whether tuning a power grid or designing a safer vehicle, each evaluation can be costly, and there may be hundreds of variables that could matter.
Consider car safety design. Engineers must integrate thousands of parts, and many design choices can affect how a vehicle performs in a collision. Classic optimization tools can struggle when searching for the best combination.
MIT researchers developed a new approach that rethinks how a classic method, known as Bayesian optimization, can be used to solve problems with hundreds of variables. In tests on realistic engineering-style benchmarks, like power-system optimization, the approach found top solutions 10 to 100 times faster than widely used methods.
Their technique leverages a foundation model trained on tabular data that automatically identifies the variables that matter most for improving performance, repeating the process to home in on better and better solutions. Foundation models are huge artificial intelligence systems trained on vast, general datasets. This allows them to adapt to different applications.
The researchers’ tabular foundation model does not need to be constantly retrained as it works toward a solution, increasing the efficiency of the optimization process. The technique also delivers greater speedups for more complicated problems, so it could be especially useful in demanding applications like materials development or drug discovery.
“Modern AI and machine-learning models can fundamentally change the way engineers and scientists create complex systems. We came up with one algorithm that can not only solve high-dimensional problems, but is also reusable so it can be applied to many problems without the need to start everything from scratch,” says Rosen Yu, a graduate student in computational science and engineering and lead author of a paper on this technique.
Yu is joined on the paper by Cyril Picard, a former MIT postdoc and research scientist, and Faez Ahmed, associate professor of mechanical engineering and a core member of the MIT Center for Computational Science and Engineering. The research will be presented at the International Conference on Learning Representations.
Improving a proven method
When scientists seek to solve a multifaceted problem but have expensive methods to evaluate success, like crash testing a car to know how good each design is, they often use a tried-and-true method called Bayesian optimization. This iterative method finds the best configuration for a complicated system by building a surrogate model that helps estimate what to explore next while considering the uncertainty of its predictions.
But the surrogate model must be retrained after each iteration, which can quickly become computationally intractable when the space of potential solutions is very large. In addition, scientists need to build a new model from scratch any time they want to tackle a different scenario.
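For readers unfamiliar with the method, here is a minimal sketch of a standard Bayesian optimization loop with a Gaussian-process surrogate (a generic illustration, not the MIT team's algorithm); the toy objective, candidate sampling, and acquisition rule are assumptions. Note how the surrogate is refit after every evaluation, which is the retraining cost the new approach avoids.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Generic Bayesian optimization sketch: minimize an expensive black-box function
# by alternating surrogate fitting with an acquisition-guided choice of the next design.
def expensive_evaluation(x):
    """Stand-in for a costly simulation, e.g., scoring one crash-test design."""
    return float(np.sum((x - 0.3) ** 2))

rng = np.random.default_rng(0)
dim = 5
X = rng.uniform(0, 1, size=(8, dim))                 # initial random designs
y = np.array([expensive_evaluation(x) for x in X])

for _ in range(20):
    surrogate = GaussianProcessRegressor().fit(X, y)  # retrained at every iteration
    candidates = rng.uniform(0, 1, size=(500, dim))
    mean, std = surrogate.predict(candidates, return_std=True)
    scores = mean - 1.0 * std                         # lower-confidence-bound acquisition
    x_next = candidates[np.argmin(scores)]
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_evaluation(x_next))

print("Best value found:", y.min())
```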
To address both shortcomings, the MIT researchers utilized a generative AI system known as a tabular foundation model as the surrogate model inside a Bayesian optimization algorithm.
“A tabular foundation model is like a ChatGPT for spreadsheets. The input and output of these models are tabular data, which in the engineering domain is much more common to see and use than language,” Yu says.
Just like large language models such as ChatGPT, Claude, and Gemini, the model has been pre-trained on an enormous amount of tabular data. This makes it well-equipped to tackle a range of prediction problems. In addition, the model can be deployed as-is, without the need for any retraining.
To make their system more accurate and efficient for optimization, the researchers employed a trick that enables the model to identify features of the design space that will have the biggest impact on the solution.
“A car might have 300 design criteria, but not all of them are the main driver of the best design if you are trying to increase some safety parameters. Our algorithm can smartly select the most critical features to focus on,” Yu says.
It does this by using a tabular foundation model to estimate which variables (or combinations of variables) most influence the outcome.
It then focuses the search on those high-impact variables instead of wasting time exploring everything equally. For instance, if the size of the front crumple zone significantly increased and the car’s safety rating improved, that feature likely played a role in the enhancement.
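The paper's exact mechanism is not spelled out in this article, but the general idea can be sketched as follows: estimate each variable's influence from the surrogate's predictions, then sample new candidates mainly along the top-ranked dimensions while holding the rest near the current best design. Everything below, including the toy surrogate and the perturbation-based sensitivity estimate, is an assumption for illustration.

```python
import numpy as np

# Illustrative sketch of focusing a high-dimensional search on influential variables.
# `surrogate_predict` stands in for a pretrained tabular model's predictions.
def surrogate_predict(X):
    # Toy surrogate: only the first three of many variables actually matter.
    return (X[:, 0] - 0.5) ** 2 + 2 * X[:, 1] + 0.5 * X[:, 2]

def sensitivity(x_best, dim, eps=0.05):
    """Estimate each variable's influence by perturbing it around the best design."""
    base = surrogate_predict(x_best[None, :])[0]
    scores = np.zeros(dim)
    for j in range(dim):
        x_pert = x_best.copy()
        x_pert[j] += eps
        scores[j] = abs(surrogate_predict(x_pert[None, :])[0] - base)
    return scores

dim = 300                                   # e.g., hundreds of design criteria
rng = np.random.default_rng(1)
x_best = rng.uniform(0, 1, dim)             # current best design

top_k = np.argsort(sensitivity(x_best, dim))[::-1][:10]   # ~10 most influential variables
candidates = np.tile(x_best, (200, 1))
candidates[:, top_k] = rng.uniform(0, 1, size=(200, top_k.size))  # vary only those
print("Searching along dimensions:", top_k)
```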
Bigger problems, better solutions
One of their biggest challenges was finding the best tabular foundation model for this task, Yu says. Then they had to connect it with a Bayesian optimization algorithm in such a way that it could identify the most prominent design features.
“Finding the most prominent dimension is a well-known problem in math and computer science, but coming up with a way that leveraged the properties of a tabular foundation model was a real challenge,” Yu says.
With the algorithmic framework in place, the researchers tested their method by comparing it to five state-of-the-art optimization algorithms.
On 60 benchmark problems, including realistic situations like power grid design and car crash testing, their method consistently found the best solution between 10 and 100 times faster than the other algorithms.
“When an optimization problem gets more and more dimensions, our algorithm really shines,” Yu says.
But their method did not outperform the baselines on all problems, such as robotic path planning. This likely indicates that such scenarios were not well represented in the model’s training data, Yu says.
In the future, the researchers want to study methods that could boost the performance of tabular foundation models. They also want to apply their technique to problems with thousands or even millions of dimensions, like the design of a naval ship.
“At a higher level, this work points to a broader shift: using foundation models not just for perception or language, but as algorithmic engines inside scientific and engineering tools, allowing classical methods like Bayesian optimization to scale to regimes that were previously impractical,” says Ahmed.
“The approach presented in this work, using a pretrained foundation model together with high‑dimensional Bayesian optimization, is a creative and promising way to reduce the heavy data requirements of simulation‑based design. Overall, this work is a practical and powerful step toward making advanced design optimization more accessible and easier to apply in real-world settings,” says Wei Chen, the Wilson-Cook Professor in Engineering Design and chair of the Department of Mechanical Engineering at Northwestern University, who was not involved in this research.
Injectable “satellite livers” could offer an alternative to liver transplantation
The engineered tissue grafts could take on the liver’s function and help thousands of people with liver failure.
More than 10,000 Americans who suffer from chronic liver disease are on a waitlist for a liver transplant, but there are not enough donated organs for all of those patients. Additionally, many people with liver failure aren’t eligible for a transplant if they are not healthy enough to tolerate the surgery.
To help those patients, MIT engineers have developed “mini livers” that could be injected into the body and take over the functions of the failing liver.
In a new study in mice, the researchers showed that these injected liver cells could remain viable in the body for at least two months, and they were able to generate many of the enzymes and other proteins that the liver produces.
“We think of these as satellite livers. If we could deliver these cells into the body, while leaving the sick organ in place, that would provide booster function,” says Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and of Electrical Engineering and Computer Science at MIT, and a member of MIT’s Koch Institute for Integrative Cancer Research and the Institute for Medical Engineering and Science (IMES).
Bhatia is the senior author of the new study, which appears today in the journal Cell Biomaterials. MIT postdoc Vardhman Kumar is the paper’s lead author.
Restoring liver function
The human liver plays a role in about 500 essential functions, including regulation of blood clotting, removing bacteria from the bloodstream, and metabolizing drugs. Most of these functions are performed by cells called hepatocytes.
Over the past decade, Bhatia’s lab has been working on ways to restore hepatocyte function without a surgical liver transplant. One possible approach is to embed hepatocytes into a biomaterial such as a hydrogel, but these gels also have to be surgically implanted.
Another option is to inject hepatocytes into the body, which eliminates the need for surgery. In this study, Bhatia’s lab sought to improve on this strategy by providing an engineered niche that could enhance the cells’ survival and facilitate noninvasive monitoring of graft health.

To achieve that, the researchers came up with the idea of injecting cells along with hydrogel microspheres that would help them stay together and form connections with nearby blood vessels. These spheres have special properties that allow them to act like a liquid when they are closely packed together, so they can be injected through a syringe and then regain their solid structure once inside the body.
In recent years, researchers have explored using hydrogel microspheres to promote wound healing, as they help cells to migrate into the spaces between the spheres and build new tissue. In the new study, the MIT team adapted them to help hepatocytes form a stable tissue graft after injection.
“What we did is use this technology to create an engineered niche for cell transplantation,” Kumar says. “If the cells are injected in the absence of these spheres, they would not integrate efficiently with the host, but these microspheres provide the hepatocytes with a niche where they can stay localized and become connected to the host circulation much faster.”
The injected mixture also includes fibroblast cells — supportive cells that help the hepatocytes survive and promote the growth of blood vessels into the tissue.
Working with Nicole Henning, an ultrasound research specialist at the Koch Institute, the researchers developed a way to inject the cell mixture using a syringe guided by ultrasound. After injection, the researchers can also use ultrasound to monitor the long-term stability of the implant.
In this study, the mini livers were injected into the fat tissue in the belly. In the future, similar grafts could be delivered to other sites in the body, such as into the spleen or near the kidneys. As long as they have enough space and access to blood vessels, the injected hepatocytes can function similarly to hepatocytes in the liver.
“For a vast majority of liver disorders, the graft does not need to sit close to the liver,” Kumar says.
An alternative to transplantation
In tests in mice, the researchers injected the mixture of liver cells and microspheres into an area of fatty tissue known as the perigonadal adipose tissue. Once the cells are localized in the body, they form a stable, compact structure. Over time, blood vessels begin to grow into the graft area, helping the injected hepatocytes to stay healthy.
“The new blood vessels formed right next to the hepatocytes, which is why they were able to survive,” Kumar says. “They were able to get the nutrients delivered right to them, they were able to function the way they're supposed to, and they produced the proteins that we expect them to.”
After injection, the cells remained viable and able to secrete specialized proteins into the host circulation for eight weeks, the length of the study. That suggests that the therapy could potentially work as a long-term treatment for liver disease, the researchers say.
“The way we see this technology is it can provide an alternative to surgery, but it can also serve as a bridge to transplantation where these grafts can provide support until a donor organ becomes available,” Kumar says. “And if we think they might need another therapy or more grafts, the barriers to do that are much less with this injectable technology than undergoing another surgery.”
With the current version of this technology, patients would likely need to take immunosuppressive drugs, but the researchers are exploring the possibility of developing “stealthy” hepatocytes that could evade the immune system, or using the hydrogel microspheres to deliver immunosuppressants locally.
The research was funded by the Koch Institute Support (core) grant from the National Cancer Institute, the National Institutes of Health, the Wellcome Leap HOPE Program, a National Science Foundation Graduate Research Fellowship, and the Howard Hughes Medical Institute.
Les Perelman, expert in writing assessment and champion of writing education, dies at 77
The longtime MIT faculty member and former dean established an influential writing program at the Institute and was known for his fierce criticism of automated essay grading.
Leslie “Les” Perelman, an influential figure in college writing assessment; a champion of writing instruction across all subject matters for over three decades at MIT; and a former MIT associate dean for undergraduate education, died on Nov. 12, 2025, at home in Lexington, Massachusetts. He was 77.
A Los Angeles native, Perelman attended the University of California at Berkeley, joining in its lively activist years, and in 1980 received his PhD in English from the University of Massachusetts at Amherst. After stints at the University of Southern California and Tulane University, he returned to Massachusetts — to MIT — in 1987, and stayed for the next 35 years.
Perelman became best known for his dogged critique of autograding systems and writing assessments that didn’t assess actual college writing. The Boston Globe dubbed him “The man who killed the SAT essay.” He told NPR that colleges “spend the first year deprogramming [students] from the five-paragraph essay.”
His widow, MIT Professor Emerita Elizabeth Garrels, says that while attending a conference, Perelman — who was practically blind without his glasses — arranged to stand at one end of a room in order to “grade” essays held up for him on the other side. “He would call out the grade that each essay would likely receive on standardized scoring,” Garrels says. “And he was consistently right.” Perelman was doing what automated scorers do: He was, he said in the NPR interview, “mirroring how automated or formulaic grading systems often reward form over substance.”
Perelman also “ruffled a lot of feathers” in industry, says Garrels, with his 2020 paper documenting his BABEL (“Basic Automatic B.S. Essay Language”) Generator, which output nonsense that commercial autograders nevertheless gave top marks to. He saved some of his most systematic criticism for autograders’ defenders in academia, at one point calling out peers at the University of Akron for the methodology in their widely touted paper claiming autograders performed just as well as human graders.
At least one testing service, ETS, partly welcomed Perelman’s critique by making its autograder available to him for testing. (Others, like Pearson and Vantage Learning, declined.) He discovered he could ace the tests, even when his essay included non-factual gibberish and typographical errors:
Teaching assistants are paid an excessive amount of money. The average teaching assistant makes six times as much money as college presidents. In addition, they often receive a plethora of extra benefits such as private jets, vacations in the south seas, a staring roles in motion pictures. Moreover, in the Dickens novel Great Expectation, Pip makes his fortune by being a teaching assistant. It doesn’t matter what the subject is, since there are three parts to everything you can think of.
MIT career
Within MIT, Perelman’s legacy was his push to embed writing instruction into the whole of MIT’s curriculum, rather than confining it to standalone expository writing subjects, let alone to a writing exam that incoming students could use to pass out of writing subjects altogether. Supported by a $325,000 National Science Foundation grant, he convinced MIT to hire writing instructors who were also subject matter experts, often with STEM PhDs. They were tasked with collaborating with departments to plant writing instruction into both existing curricula and new subjects. That effort eventually became the Writing Across the Curriculum program (today named Writing, Rhetoric, and Professional Communication) with a staff of more than 30 instructors.
Building out the infrastructure wasn’t quick, however. Perelman’s successor, Suzanne Lane ’85, says it took him almost 15 years. It started with proving to others just how uneven writing instruction at MIT actually was. “A whole cohort of students who took a lot of writing classes or got communication instruction in various places would make great progress,” Lane says. “But it was definitely possible to get through all of MIT without doing much writing at all.”
To bolster his case, Perelman turned to alumni surveys. “The surveys asked how well MIT prepared you for your career,” says Lane. “The technical skills scored really high, but — what is horribly termed, sometimes, as ‘soft skills’ — communication skills, collaboration, etc., these scored really high on importance to career, but really low on how well MIT had prepared them.”
In other words, MIT alumni knew their stuff but were bad at communicating it, at a cost to their careers.
This led Perelman and others to push for a new undergraduate communication requirement. That NSF grant supported a 1997 pilot of communication-intensive courses. It was a huge success: every department participated, across 24 subjects and roughly 300 students. Following “lively” discussion at an April 1999 faculty meeting, MIT faculty approved a proposal for a report on implementing the communication requirement, followed a year later by the requirement’s formal passage, effective fall 2001.
From that initial pilot of 24, there are now nearly 300 subjects that count toward the requirement, from class 1.013 (Senior Civil and Environmental Engineering Design) to 24.918 (Workshop in Linguistic Research).
Connections beyond MIT
Early in his career, Perelman worked with Vincent DiMarco, a literature scholar at the University of Massachusetts at Amherst, to publish “The Middle English Letter of Alexander to Aristotle” (Brill, 1978). With Wang Computers as publisher, he was a technical writer and project leader on the “DOS Release 3.30 User’s Reference Guide.” He edited a book and chapter on writing studies and assessment with New Jersey Institute of Technology professor Norbert Elliot. And in a project he was particularly proud of, he worked with the New South Wales Teachers Federation in 2018 to convince Australia to reject the adoption of an automated essay grading regime.
“Les was brilliant, with a Talmudic way of asking questions and entering academic debates,” says Nancy Sommers, whose work on undergraduate writing assessment at Harvard University paralleled Perelman’s. “I loved the way his eyes sparkled when he was ready to rip an adversary or a colleague who wasn’t up to his quick mind and vast, encyclopedic knowledge.”
Openness to rhetorical combat didn’t keep Perelman from being a wonderful friend, Sommers says, saying he once waited for her at the airline gate with a sandwich and a smile after a canceled flight. “That was Les, so gracious, generous, anticipating the needs of friends, always there to offer sustenance and friendship.”
Donations in Perelman’s name can be made to UNICEF’s work supporting children in Ukraine, the Lexington Refugee Assistance Program, Doctors Without Borders, and the Ash Grove Movie Finishing Fund.
Coping with catastrophe
Japan incorporates more disaster planning into its buildings and public spaces than any other nation. Miho Mazereeuw’s new book explains how they do it.
Each April in Japan, people participate in a tradition called “hanami,” or cherry-blossom viewing, where they picnic under the blooming trees. The tradition has a second purpose: The presence of people at these gatherings, often by water, helps solidify riverbanks and protect them from spring floods. The celebration thus addresses, however incrementally, the threat of natural disaster.
The practice of creating things that also protect against disasters can be seen all over Japan, where many new or renovated school buildings have design features unfamiliar to students elsewhere. In Tokyo, one elementary school has a roof swimming pool that stores water and is used to help the building’s toilets flush, plus an additional rainwater catchment tank and exterior stairs leading to a large balcony that wraps around one side of the building.
Why? Well, Japan is prone to natural disasters, such as tsunamis, earthquakes, and flooding. The country’s schools often double as evacuation sites for local residents, and design practices increasingly reflect this. In normal times, the roof pool is where students learn to swim and helps keep the school cool, and the large balcony is used by spectators watching the adjacent school athletics field. In emergencies, water storage is crucial, and the exterior stairs help people ascend quickly to the gymnasium, which is built on the second floor to keep evacuees safer during flooding.
Meanwhile, in one Tokyo district, rooftop solar power is now common. Some schools feature skylights and courtyards to bring in natural light. Again, these architectural features serve dual purposes. Solar power, for one, lowers annual operating costs, and it provides electricity even in case of grid troubles.
These are examples of what MIT scholar Miho Mazereeuw has termed “anticipatory design,” in which structures and spaces are built with dual uses, for daily living and for when crisis strikes.
“The idea is to have these proactive measures in place rather than being reactionary and jumping into action only after something has happened,” says Mazereeuw, an associate professor in MIT’s Department of Architecture and a leading expert on resilient design.
Now Mazereeuw has a new book on the subject, “Design Before Disaster: Japan’s Culture of Preparedness,” published by the University of Virginia Press. Based on many years of research and extensively illustrated, the book examines scores of successful design examples from Japan, covering both the architectural features themselves and the civic processes that created them.
“I’m hoping there can be a culture shift,” Mazereeuw says. “Wherever you can invent design outcomes to help society be more resilient beforehand, it is not at exorbitant cost. You can design for exceptional everyday spaces but embed other infrastructure and flexibility in there, so when there is a flood event or earthquake, those buildings have more capability.”
Bosai and barbecue
Mazereeuw, who is also the head of MIT’s Urban Risk Lab, has been studying disaster preparedness for over 30 years. As part of the Climate Project at MIT, she is also one of the mission directors and has worked with communities around the world on resiliency planning.
Japan has a particularly well-established culture of preparedness, often referred to through the Japanese word “bosai.” Mazereeuw has been studying the country’s practices carefully since the 1990s. In researching the book, she has visited hundreds of sites in the country and talked to many officials, designers, and citizens along the way.
Indeed, Mazereeuw emphasizes, “A major theme in the book is connecting the top-down and bottom-up.” Some good design ideas come from planners and architects. Others have come from community groups and local residents. All these sources are important.
“The Japanese government does invest a lot in disaster research and recovery,” Mazereeuw says. “But I would hate for people in other countries to think this isn’t possible elsewhere. It’s the opposite. There are a lot of examples in here that don’t cost extra, because of careful design through community participation.”
As one example, Mazereeuw devotes a chapter of the book to public parks, which are often primary evacuation spaces for residents in case of emergency. Some have outdoor cooking facilities, which in normal times are used for, say, a weekend barbecue or local community events but are also there in case of emergency. Some parks also have water storage, or restroom facilities designed to expand if needed, and many serve as flood reservoirs, protecting the surrounding neighborhood.
“The barbecue facilities are a great example of dual use, connecting the everyday with disaster preparedness,” Mazereeuw says. “You can bring food into this beautiful park, so you’re used to using this space for cooking already. The idea is that your cognitive map of where you should go is connected to fun things you have done in the past.”
Some of the parks Mazereeuw surveys in the book are tiny pocket parks, which are also filled with useful resilience tools.
“Anticipatory design does not have to be monumental,” Mazereeuw writes in the book.
Negotiating through design
To be sure, some disaster mitigation measures are difficult to enact. In the Naiwan district of Kesennuma, as Mazereeuw outlines in the book, much of the local port area was destroyed in the 2011 tsunami, and the government wanted to build a seawall as part of the reconstruction plan. Some local residents and fishermen were unenthusiastic; a seawall could limit ocean access. Finally, after extended negotiations, designers created a seawall integrated into a new commercial district with cafes and stores, as well as new areas of public water access.
“This project used the power of design to negotiate between prefectural and local regulations, structural integrity and aesthetics, ocean access and safety,” Mazereeuw says.
Ultimately, working to build a coalition in support of resilience measures can help create more interesting and useful designs.
Other scholars have praised “Design Before Disaster.” Daniel P. Aldrich, a professor at Northeastern University, has called the book a “well-researched, clearly written investigation” into Japanese disaster-management practices, adding that any officials or citizens around the world “who seek to keep residents and communities safe from shocks of all kinds will learn something important from this book. It sets a high bar for future scholarship in the field.”
For her part, Mazereeuw emphasizes, “We can learn from the Japanese example, but it’s not a copy-paste thing. The book is so people can understand the essence of it and then create their own disaster preparedness culture and approach. This should be an all-hands process. Emergency management is not about relying on managers. It’s figuring out how we all play a part.”
Designing a more resilient future for plants, from the cell up
Foray Bioscience, founded by Ashley Beckwith SM ’18, PhD ’22, is engineering single plant cells to create new materials and meet growing demand.
In a narrow strip of land along the Andes mountain range in central Chile, an Indigenous community has long celebrated the bark of a rare tree for its medicinal properties. Modern science only recently caught up to the tradition, finding the so-called soapbark tree contains potent compounds for boosting the human immune system.
The molecules have since been harnessed to make the world’s first malaria vaccine and to boost the effectiveness of vaccines for everything from shingles to Covid-19 and cancer. Unfortunately, unsustainable harvesting has threatened the existence of the tree species, leading the Chilean government to severely restrict lumbering.
The soapbark tree’s story is not unique. Plants are the foundation of industries such as pharmaceuticals, beauty, agriculture, and forestry, yet around 45 percent of plant species are in danger of going extinct. At the same time, human demand for plant products continues to rise. Ashley Beckwith SM ’18, PhD ’22 believes meeting that demand requires rethinking how plants are grown. Her company, Foray Bioscience, aims to make plant production faster, more adaptable, and less damaging to fragile natural supply chains.
The company is working to make it possible to grow any plant or plant product from single cells using biomanufacturing powered by artificial intelligence. Foray has already developed molecules, materials, and fabricated seeds with various partners, including academic researchers, nurseries, conservationists, and companies.
In one new partnership, Foray is working with the nursery West Coast Chestnut to deploy a more disease-resistant version of the chestnut trees that once filled forests across the eastern U.S. but have since been wiped out. The project is just one example of how AI and plant science can be leveraged to protect the plant populations that bring so much value to humans and the planet.
“Plant systems underpin every aspect of our daily lives, from the air we breathe to the food we eat, the clothes we wear, the homes we live in, and more,” Beckwith says. “But these plant systems are fragile and in decline. We need new strategies to ensure lasting access to the plant products and ecosystems we depend on.”
From human cells to plants
Beckwith focused on biology and materials manufacturing as a master’s student in MIT’s Department of Mechanical Engineering. Her research involved building platforms to enable precision treatments for human diseases. After graduating, she worked on a regenerative, self-sufficient farm that mimicked natural ecosystems, and began thinking about applying her work to address the fragility of plant systems.
Beckwith returned to MIT for her PhD to explore the idea of regenerative plant systems, studying in the lab of Research Scientist Luis Fernando Velásquez-García.
“To address organ shortages for transplants, scientists aspire to grow kidneys that don’t have to be harvested from a human using tissue engineering,” Beckwith says. “What if we could do something similar for our plant systems?”
Beckwith went on to publish papers showing she could grow wood-like plant material in a lab. By adjusting certain chemicals, the researchers could precisely control properties like stiffness and density.
“I was thinking about how we build products, like wood, from the cell up instead of extracting from the top down,” Beckwith recalls. “It led to some foundational demonstrations that underpin the work we do at Foray today, but it also opened up questions: Where are these new approaches most urgently needed? What would it take to apply these tools where they’re needed, fast?”
Beckwith began exploring the idea of starting a company in 2021, participating in accelerator programs run by the E14 Fund and The Engine — both MIT-affiliated initiatives designed to support breakthrough science ventures. She officially founded Foray in February of 2022 after completing her PhD.
“Our early research showed that we could grow wood-like material directly from plant cells,” she says. “We are now able to grow not just wood without the tree, but also produce harvest-free molecules, materials, and even seeds by steering single cells to develop precisely into the products we need without ever having to grow the whole plant.”
Beckwith describes her lab-grown wood innovation as analogous to Uber if there were no internet — a powerful idea without the digital backbone to scale. To create the data foundation and ecosystem to scale plant innovation, Foray is now building the Pando AI platform to enable rapid discovery and deployment of these novel plant solutions.
“Pando functions like a Google Maps for plant growth,” Beckwith says. “It helps scientists navigate a really complex field of variables and arrive at a research destination efficiently — because to steer a cell to produce a particular product, there might be 50 different variables to tweak. It would take a lifetime to explore each of those, and that’s one reason why plant research is so slow today.”
The “operating system for plant science”
Foray’s team includes experts in plant biology, artificial intelligence, machine learning, computational biology, and process engineering.
“This is a very intersectional problem,” Beckwith says. “One of the most exciting things for me is building this highly capable team that is able to deliver solutions that could never be created in a silo.”
After a year of pilot collaborations with select researchers, Foray is preparing for a broader public launch of its Pando platform early this year.
Over the next several years, Beckwith hopes Foray will serve as an innovation engine for researchers and companies working across agriculture, materials, pharmaceuticals, and conservation. Foray already uses Pando internally to create plant solutions that overcome limitations in natural production.
“Fabricated seeds are one capability that we’re really excited about,” Beckwith says. “Being able to grow seeds from cells lets you create really timely and scalable seed supplies to address gaps in restoration, or shorten the path to market for new, resilient crop varieties. There’s a lot to be gained by making our plant systems more adaptive.”
“We want to shorten plant development timelines, so solutions can be built in months, not decades,” Beckwith says. “We’re excited to be building tools that represent a step change in the way plant production can be done.”
As Foray’s products scale and more researchers use its platform, the company is hoping to help the plant science industry respond to some of our planet’s most pressing challenges.
“Right now, we’re focused on plants in labs,” Beckwith says. “In five years, we aim to be the operating system for all of plant science, making it possible to build anything from a single plant cell.”
Tackling industry’s burdensome bubble problem
MIT researchers uncovered the physics behind bubble-removing membranes that could improve bioreactors, chemical production, and more.
In industrial plants around the world, tiny bubbles cause big problems. Bubbles clog filters, disrupt chemical reactions, reduce throughput during biomanufacturing, and can even cause overheating in electronics and nuclear power plants.
MIT Professor Kripa Varanasi has long studied methods to reduce bubble disruption. In a new study, Varanasi, along with PhD candidate Bert Vandereydt and former postdoc Saurabh Nath, have uncovered the physics behind a promising type of debubbling membrane material that is “aerophilic” — Greek for “air-loving.” The material can be used in systems of all types, allowing anyone to optimize their machine’s performance by breaking free from bubble-borne disruptions.
“We have figured out the structure of these bubble-attracting membrane materials to allow gas to evacuate in the fastest possible manner,” says Varanasi, the senior author of the study. “Think of trying to push honey through a coffee strainer: It’s not going to go through easily, whereas water will move through, and gas will move through even more easily. But even gas will reach a throughput limit, which depends on the properties of the gas and the liquid involved. By uncovering those limits, our research allows engineers to build better membranes for their systems.”
In the paper, which appears in the journal PNAS this week, the researchers distill their findings into a graph that allows anyone to plot a few characteristics of their system — like the viscosity of their gas and the surrounding liquid — and find the best membrane to make bubble removal near-instantaneous. Using their approach, the research team demonstrated a 1,000-fold acceleration in bubble removal in a bioreactor that’s used in the pharmaceutical industry, food and beverage manufacturing, cosmetics, chemical production, and more.
The researchers say the membranes, which repel water, could be used to improve the throughput of a wide range of advanced systems whose operation has been plagued to date by bubbles.
Better bubble breakers
Companies today try everything to burst bubbles. They deploy foam breakers that physically shear them, chemicals that act as antifoaming agents, even ultrasound. Such approaches have drawbacks in tightly controlled environments like bioreactors, where chemical defoamers can be toxic to cells, while mechanical agitation can damage delicate biological materials. Similar limitations apply to other industries where contamination or physical disturbance is unacceptable. As a result, many applications that cannot tolerate chemical defoamers or mechanical intervention remain fundamentally bottlenecked by foam formation.
“Biomanufacturing has really taken off in the last 10 years,” Vandereydt says. “We’re making a lot more out of biologic systems like cells and bacteria, and our reactors have increased in throughput from 5 million cells per milliliter of solution to 100 million cells per milliliter. However, the bubble evacuation and defoaming haven’t kept up — it’s becoming a significant rate-limiting step.”
To better understand the interaction between aerophilic membranes and bubbles, the MIT researchers used MIT.nano facilities to create a series of tiny porous silicon membranes with holes ranging in size from 10 microns to 200 microns. They coated the membranes with hydrophobic silica nanoparticles.
Placing them on the surface of different liquids, the researchers released single bubbles with varying viscosity and recorded the interaction using high-speed imaging as each collided with the membranes.
“We started by trying to take a very complicated system, like foam being generated in a bioreactor, and study it in the simplest form to understand what’s happening,” Vandereydt says.
At first, the bigger the holes, the faster the bubbles disappeared. The researchers also changed the bubble gas from air to hydrogen, which has half the viscosity, and found the speed of bubble destruction doubled.
But after about a 1,000-fold acceleration in bubble destruction, the researchers hit a wall no matter how big the membrane holes were. They had run up against a different physical limit, which they then set out to investigate.
The researchers then tried changing the viscosity of their liquid, from water to something closer to honey. They found that viscosity only plays a role in the speed of bubble destruction when the liquid is more than about 200 times as viscous as water. Further experiments revealed the biggest factor for slowing bubble evacuation was inertial resistance in the liquid.

“Through experimentation, we showed there are three different limits [to the speed of bubble destruction],” Vandereydt says. “There is the viscous limit of the gas in a low-viscosity, low-permeability setup. Then there’s the viscous resistance of the liquid in the high-permeability, high-viscosity regime. Then we have the inertial limit of the liquid.”
The team used a bioreactor to experimentally validate their findings and charted them in a map into which engineers can enter the characteristics of their system to find both the best membrane for their situation and the biggest factor slowing bubble evacuation.
The science of bubbles
The research should be useful for anyone trying to accelerate the destruction of bubbles in their industrial device, but it also improves our understanding of the physics underpinning bubble dynamics.
“We have identified three different throughput limits, and the physics behind those limits, and we have reduced it to very simple laws,” Nath explains. “How fast you can go is first dictated by the balance between surface tension and inertia. But you may also hit a different limit, where the pores are extremely small, so the gas finds it difficult to move through them. In that case, the viscosity of the gas is meaningful. But you may also have a bubble that was originally in something like honey, which means it’s not enough that the gas is moving; the liquid also must refill the space behind it. No matter what your conditions are, you will be switching between these three limits.”
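Those regimes can be summarized in a few lines of logic. The sketch below is purely illustrative and is not from the paper: the function name and the pore-size cutoff are invented placeholders, and only the roughly-200-times-water viscosity figure comes from the study.

```python
# Hypothetical illustration only: a toy classifier for the three limiting
# regimes described above. Cutoff values are placeholders, not results.
def limiting_regime(pore_size_um, liquid_viscosity_rel_water):
    """Guess which physical effect caps how fast a membrane can drain a bubble."""
    SMALL_PORE_UM = 20.0     # placeholder cutoff for a "low-permeability" membrane
    VISCOUS_LIQUID = 200.0   # per the article: liquid viscosity matters above ~200x water

    if pore_size_um < SMALL_PORE_UM:
        # Low permeability: the gas itself struggles to squeeze through the pores,
        # so the gas's viscosity sets the limit (halving it doubled the speed).
        return "gas viscous limit"
    if liquid_viscosity_rel_water > VISCOUS_LIQUID:
        # Honey-like liquid: it cannot refill the space behind the escaping gas fast enough.
        return "liquid viscous limit"
    # Otherwise, the balance of surface tension and liquid inertia sets the ceiling.
    return "liquid inertial limit"
```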
Varanasi says health care companies, chemical manufacturers, and even breweries have expressed interest in the work. His team plans to commercially develop the membranes for industrial use.
“These physical insights allowed us to design membranes that, quite surprisingly, evacuate bubbles even faster than a free liquid-gas interface,” says Varanasi.
The researchers’ design map could also be used to model natural systems and even liquid-liquid systems, which could be used to create membranes that remove oil spills from water or help efficiently extract hydrogen from water-splitting electrodes. Ultimately the biggest beneficiaries of the findings will be companies grappling with bubbles.
“Though small, bubbles quietly dictate the performance limits of many advanced technologies,” says Varanasi. “Our results provide a way to eliminate that bottleneck and unlock entirely new levels of performance across industries. These membranes can be readily retrofitted into existing systems, and our framework allows them to be rapidly designed and optimized for specific applications. We’re excited to work with industry to translate these insights into impact.”
The work was supported, in part, by MIT Lincoln Laboratory and used MIT.nano facilities.
New method could increase LLM training efficiency
By leveraging idle computing time, researchers can double the speed of model training while preserving accuracy.
Reasoning large language models (LLMs) are designed to solve complex problems by breaking them down into a series of smaller steps. These powerful models are particularly good at challenging tasks like advanced programming and multistep planning.
But developing reasoning models demands an enormous amount of computation and energy due to inefficiencies in the training process: while a few high-power processors continuously work through complicated queries, others in the group sit idle.
Researchers from MIT and elsewhere found a way to use this computational downtime to efficiently accelerate reasoning-model training.
Their new method automatically trains a smaller, faster model to predict the outputs of the larger reasoning LLM, which the larger model verifies. This reduces the amount of work the reasoning model must do, accelerating the training process.
The key to this system is its ability to train and deploy the smaller model adaptively, so it kicks in only when some processors are idle. By leveraging computational resources that would otherwise have been wasted, it accelerates training without incurring additional overhead.
When tested on multiple reasoning LLMs, the method doubled the training speed while preserving accuracy. This could reduce the cost and increase the energy efficiency of developing advanced LLMs for applications such as forecasting financial trends or detecting risks in power grids.
“People want models that can handle more complex tasks. But if that is the goal of model development, then we need to prioritize efficiency. We found a lossless solution to this problem and then developed a full-stack system that can deliver quite dramatic speedups in practice,” says Qinghao Hu, an MIT postdoc and co-lead author of a paper on this technique.
He is joined on the paper by co-lead author Shang Yang, an electrical engineering and computer science (EECS) graduate student; Junxian Guo, an EECS graduate student; senior author Song Han, an associate professor in EECS, a member of the Research Laboratory of Electronics, and a distinguished scientist at NVIDIA; as well as others at NVIDIA, ETH Zurich, the MIT-IBM Watson AI Lab, and the University of Massachusetts at Amherst. The research will be presented at the ACM International Conference on Architectural Support for Programming Languages and Operating Systems.
Training bottleneck
Developers want reasoning LLMs to identify and correct mistakes in their critical thinking process. This capability allows them to ace complicated queries that would trip up a standard LLM.
To teach them this skill, developers train reasoning LLMs using a technique called reinforcement learning (RL). The model generates multiple potential answers to a query, receives a reward for the best candidate, and is updated based on the top answer. These steps repeat thousands of times as the model learns.
But the researchers found that the process of generating multiple answers, called rollout, can consume as much as 85 percent of the execution time needed for RL training.
“Updating the model — which is the actual ‘training’ part — consumes very little time by comparison,” Hu says.
This bottleneck occurs in standard RL algorithms because all processors in the training group must finish their responses before they can move on to the next step. Because some processors might be working on very long responses, others that generated shorter responses wait for them to finish.
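To see why this barrier hurts, consider a toy calculation; it is not from the paper, and the timings below are made up. If one worker's response takes much longer than the rest, most of the group's compute time in that rollout step is spent waiting.

```python
# Toy illustration (not the authors' code): in synchronous RL rollout, every
# worker must wait for the slowest response before the update step can begin,
# so workers with short responses sit idle. Times below are invented.
per_worker_seconds = [12, 15, 9, 90, 11, 14, 10, 13]   # one long "straggler" response

step_time = max(per_worker_seconds)                     # the barrier: wait for the slowest
busy_time = sum(per_worker_seconds)
idle_time = step_time * len(per_worker_seconds) - busy_time

print(f"rollout step takes {step_time}s; "
      f"{idle_time / (step_time * len(per_worker_seconds)):.0%} of worker-time is idle")
```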
“Our goal was to turn this idle time into speedup without any wasted costs,” Hu adds.
They sought to use an existing technique, called speculative decoding, to speed things up. Speculative decoding involves training a smaller model called a drafter to rapidly guess the future outputs of the larger model.
The larger model verifies the drafter’s guesses, and the responses it accepts are used for training.
Because the larger model can verify all the drafter’s guesses at once, rather than generating each output sequentially, it accelerates the process.
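For readers who want the mechanics, here is a minimal sketch of the draft-then-verify loop that speculative decoding relies on. It is a simplified greedy-acceptance variant, not the authors' implementation, and `drafter_sample` and `target_argmax_batch` are hypothetical stand-ins for the actual model calls.

```python
def speculative_step(prefix, drafter_sample, target_argmax_batch, k=4):
    # 1. The small drafter cheaply proposes the next k tokens, one at a time.
    draft = []
    ctx = list(prefix)
    for _ in range(k):
        tok = drafter_sample(ctx)
        draft.append(tok)
        ctx.append(tok)

    # 2. The large model checks the whole draft in one batched pass and returns
    #    its own preferred token at each of the k+1 positions.
    target_tokens = target_argmax_batch(prefix, draft)

    # 3. Keep the drafter's tokens up to the first disagreement; at the first
    #    mismatch, substitute the large model's token, so the output matches
    #    what the large model would have produced on its own (greedy decoding).
    accepted = []
    for d, t in zip(draft, target_tokens):
        if d == t:
            accepted.append(d)
        else:
            accepted.append(t)
            break
    else:
        # All k guesses accepted: the verification pass also yields one extra token.
        accepted.append(target_tokens[k])
    return list(prefix) + accepted
```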
An adaptive solution
But in speculative decoding, the drafter model is typically trained only once and remains static. This makes the technique infeasible for reinforcement learning, since the reasoning model is updated thousands of times during training.
A static drafter would quickly become stale and useless after a few steps.
To overcome this problem, the researchers created a flexible system known as “Taming the Long Tail,” or TLT.
The first part of TLT is an adaptive drafter trainer, which uses free time on idle processors to train the drafter model on the fly, keeping it well-aligned with the target model without using extra computational resources.
The second component, an adaptive rollout engine, manages speculative decoding to automatically select the optimal strategy for each new batch of inputs. This mechanism changes the speculative decoding configuration based on the training workload features, such as the number of inputs processed by the draft model and the number of inputs accepted by the target model during verification.
In addition, the researchers designed the draft model to be lightweight so it can be trained quickly. TLT reuses some components of the reasoning model training process to train the drafter, leading to extra gains in acceleration.
“As soon as some processors finish their short queries and become idle, we immediately switch them to do draft model training using the same data they are using for the rollout process. The key mechanism is our adaptive speculative decoding — these gains wouldn’t be possible without it,” Hu says.
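As a rough illustration of the adaptive rollout engine described above, and not the authors' code, one can imagine adjusting how far ahead the drafter speculates based on how often the target model has recently accepted its guesses. The thresholds below are invented placeholders.

```python
# Hypothetical sketch: tune the speculation length from recent acceptance rates.
def choose_speculation_length(recent_accept_rate, current_k, k_min=1, k_max=8):
    # recent_accept_rate: fraction of recent drafter guesses the target model accepted.
    if recent_accept_rate > 0.8 and current_k < k_max:
        return current_k + 1   # drafter is guessing well: speculate further ahead
    if recent_accept_rate < 0.4 and current_k > k_min:
        return current_k - 1   # drafter is often wrong: verification effort is wasted
    return current_k           # otherwise keep the current configuration
```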
They tested TLT across multiple reasoning LLMs that were trained using real-world datasets. The system accelerated training by between 70 and 210 percent while preserving the accuracy of each model.
As an added bonus, the trained drafter emerges from the process for free and can readily be reused for efficient deployment.
In the future, the researchers want to integrate TLT into more types of training and inference frameworks and find new reinforcement learning applications that could be accelerated using this approach.
“As reasoning continues to become the major workload driving the demand for inference, Qinghao’s TLT is great work to cope with the computation bottleneck of training these reasoning models. I think this method will be very helpful in the context of efficient AI computing,” Han says.
This work is funded by the MIT-IBM Watson AI Lab, the MIT AI Hardware Program, the MIT Amazon Science Hub, Hyundai Motor Company, and the National Science Foundation.
AI to help researchers see the bigger picture in cell biology
By providing holistic information on a cell, an AI-driven method could help scientists better understand disease mechanisms and plan experiments.
Studying gene expression in a cancer patient’s cells can help clinical biologists understand the cancer’s origin and predict the success of different treatments. But cells are complex and contain many layers, so how the biologist conducts measurements affects which data they can obtain. For instance, measuring proteins in a cell could yield different information about the effects of cancer than measuring gene expression or cell morphology.
Where in the cell the information comes from matters. But to capture complete information about the state of the cell, scientists often must conduct many measurements using different techniques and analyze them one at a time. Machine-learning methods can speed up the process, but existing methods lump all the information from each measurement modality together, making it difficult to figure out which data came from which part of the cell.
To overcome this problem, researchers at the Broad Institute of MIT and Harvard and ETH Zurich/Paul Scherrer Institute (PSI) developed an artificial intelligence-driven framework that learns which information about a cell’s state is shared across different measurement modalities and which information is unique to a particular measurement type.
By pinpointing which information came from which cell parts, the approach provides a more holistic view of the cell’s state, making it easier for a biologist to see the complete picture of cellular interactions. This could help scientists understand disease mechanisms and track the progression of cancer, neurodegenerative disorders such as Alzheimer’s, and metabolic diseases like diabetes.
“When we study cells, one measurement is often not sufficient, so scientists develop new technologies to measure different aspects of cells. While we have many ways of looking at a cell, at the end of the day we only have one underlying cell state. By putting the information from all these measurement modalities together in a smarter way, we could have a fuller picture of the state of the cell,” says lead author Xinyi Zhang SM ’22, PhD ’25, a former graduate student in the MIT Department of Electrical Engineering and Computer Science (EECS) and an affiliate of the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard, who is now a group leader at AITHYRA in Vienna, Austria.
Zhang is joined on a paper about the work by G.V. Shivashankar, a professor in the Department of Health Sciences and Technology at ETH Zurich and head of the Laboratory of Multiscale Bioimaging at PSI; and senior author Caroline Uhler, a professor in EECS and the Institute for Data, Systems, and Society (IDSS) at MIT, member of MIT’s Laboratory for Information and Decision Systems (LIDS), and director of the Eric and Wendy Schmidt Center at the Broad Institute. The research appears today in Nature Computational Science.
Manipulating multiple measurements
There are many tools scientists can use to capture information about a cell’s state. For instance, they can measure RNA to see if the cell is growing, or they can measure chromatin morphology to see if the cell is dealing with external physical or chemical signals.
“When scientists perform multimodal analysis, they gather information using multiple measurement modalities and integrate it to better understand the underlying state of the cell. Some information is captured by one modality only, while other information is shared across modalities. To fully understand what is happening inside the cell, it is important to know where the information came from,” says Shivashankar.
Often, for scientists, the only way to sort this out is to conduct multiple individual experiments and compare the results. This slow and cumbersome process limits the amount of information they can gather.
In the new work, the researchers built a machine-learning framework that specifically understands which information overlaps between different modalities, and which information is unique to a particular modality but not captured by others.
“As a user, you can simply input your cell data and it automatically tells you which data are shared and which data are modality-specific,” Zhang says.
To build this framework, the researchers rethought the typical way machine-learning models are designed to capture and interpret multimodal cellular measurements.
Usually these methods, known as autoencoders, have one model for each measurement modality, and each model encodes a separate representation for the data captured by that modality. The representation is a compressed version of the input data that discards any irrelevant details.
The MIT method has a shared representation space where data that overlap between multiple modalities are encoded, as well as separate spaces where unique data from each modality are encoded.
In essence, one can think of it as a Venn diagram of cellular data.
The researchers also used a special, two-step training procedure that helps their model handle the complexity involved in deciding which data are shared across multiple data modalities. After training, the model can identify which data are shared and which are unique when fed cell data it has never seen before.
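To make the idea concrete, here is a minimal architectural sketch written under assumptions rather than taken from the published model: two modality-specific autoencoders whose latent codes are split into a shared block and a private block. The class name and dimensions are placeholders, and the paper's losses and two-step training procedure are not shown.

```python
import torch
import torch.nn as nn

class TwoModalityAutoencoder(nn.Module):
    """Hypothetical sketch: shared + modality-specific latent blocks for two modalities."""
    def __init__(self, dim_a, dim_b, shared_dim=16, private_dim=8):
        super().__init__()
        self.enc_a = nn.Linear(dim_a, shared_dim + private_dim)
        self.enc_b = nn.Linear(dim_b, shared_dim + private_dim)
        # Each decoder sees the shared block plus only its own private block.
        self.dec_a = nn.Linear(shared_dim + private_dim, dim_a)
        self.dec_b = nn.Linear(shared_dim + private_dim, dim_b)
        self.shared_dim = shared_dim

    def forward(self, x_a, x_b):
        z_a = self.enc_a(x_a)
        z_b = self.enc_b(x_b)
        shared_a, private_a = z_a[:, :self.shared_dim], z_a[:, self.shared_dim:]
        shared_b, private_b = z_b[:, :self.shared_dim], z_b[:, self.shared_dim:]
        # Averaging the shared blocks pushes them to carry the same information
        # from both modalities (the overlap of the Venn diagram), while the
        # private blocks keep whatever only one modality can see.
        shared = 0.5 * (shared_a + shared_b)
        recon_a = self.dec_a(torch.cat([shared, private_a], dim=1))
        recon_b = self.dec_b(torch.cat([shared, private_b], dim=1))
        return recon_a, recon_b
```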
Distinguishing data
In tests on synthetic datasets, the framework correctly captured known shared and modality-specific information. When they applied their method to real-world single-cell datasets, it comprehensively and automatically distinguished between gene activity captured jointly by two measurement modalities, such as transcriptomics and chromatin accessibility, while also correctly identifying which information came from only one of those modalities.
In addition, the researchers used their method to identify which measurement modality captured a certain protein marker that indicates DNA damage in cancer patients. Knowing where this information came from would help a clinical scientist determine which technique they should use to measure that marker.
“There are too many modalities in a cell and we can’t possibly measure them all, so we need a prediction tool. But then the question is: Which modalities should we measure and which modalities should we predict? Our method can answer that question,” Uhler says.
In the future, the researchers want to enable the model to provide more interpretable information about the state of the cell. They also want to conduct additional experiments to ensure it correctly disentangles cellular information and apply the model to a wider range of clinical questions.
“It is not sufficient to just integrate the information from all these modalities,” Uhler says. “We can learn a lot about the state of a cell if we carefully compare the different modalities to understand how different components of cells regulate each other.”
This research is funded, in part, by the Eric and Wendy Schmidt Center at the Broad Institute, the Swiss National Science Foundation, the U.S. National Institutes of Health, the U.S. Office of Naval Research, AstraZeneca, the MIT-IBM Watson AI Lab, the MIT J-Clinic for Machine Learning and Health, and a Simons Investigator Award.