MIT Professor Michael T. Laub and 21 MIT alumni have been elected as fellows of the American Association for the Advancement of Science (AAAS).
The 2025 class of AAAS Fellows includes 449 scientists, engineers, and innovators, spanning all 24 of AAAS disciplinary sections, who are recognized for their scientific achievements.
Laub, the Salvador E. Luria Professor in the MIT Department of Biology and an HHMI Investigator, studies the biological mechanisms and evolution of how cells process information to regulate their own growth and proliferation, using bacteria as a model organism to develop a deeper, fundamental understanding of how they function and evolve. Laub was honored as an AAAS Fellow for distinguished contributions to the field of bacterial information processing, particularly to the understanding of the coevolution of host-pathogen response and immunity.
“This year’s AAAS Fellows have demonstrated research excellence, made notable contributions to advance science, and delivered important services to their communities,” said Sudip S. Parikh, AAAS chief executive officer and executive publisher of the Science family of journals. “These fellows and their accomplishments validate the importance of investing in science and technology for the benefit of all.”
The following alumni were also named fellows of the AAAS:
Picture this scenario in a business: An employee, Brad, disclosed some information that wound up in the hands of a competitor. He may not have meant to, but he did, and a few people at the firm know this. So, at the next company meeting, another employee, Linda, looks pointedly at Brad and says, “I know that no one would ever dream of leaking information, intentionally or otherwise, from our discussions.”
Linda means the opposite of what she says, of course. She is letting people know that Brad is to blame. However, while Linda is making her message public, she also wants what we often call “plausible deniability” for her statement. If anyone asks later if she was insinuating anything about Brad, she can claim she was just making a general comment about the firm.
From the boardroom to the courtroom, the talk show, and beyond, people frequently seek plausible deniability for their statements. It seems to work, too. Indeed, to have plausible deniability, the denial need not be plausible.
“People can say, ‘That’s not what I meant,’ and completely get away with it, even though it’s totally obvious they’re lying,” says MIT philosopher Sam Berstler. “They wouldn’t be getting away with it in the same respect by putting the content in explicit words.”
She adds: “This should be very puzzling to us, because in both cases the intent is maximally obvious.”
So why does plausible deniability work, and work like this? And what does it tell us about how we interact? Berstler, who studies language and communication, has published a new paper on plausible deniability, examining these issues. It is part of a larger body of work Berstler is generating, focused on everyday interactions involving deception.
To understand plausible deniability, Berstler thinks we should recognize that our conversations cannot be understood simply by analyzing the words we use. Our interactions always take place in social contexts, often have a performative aspect, and occasionally intersect with “non-acknowledgement norms,” the practice of keeping quiet about what we all know. Plausible deniability is bound up with social practices that incentivize us to not be fully transparent.
“A lot of indirect speech is designed, as it were, to facilitate this kind of deniability,” Berstler says.
The paper, “Non-Epistemic Deniability,” is published in the journal MIND. Berstler, the Laurance S. Rockefeller Career Development Chair and assistant professor of philosophy at MIT, is the sole author.
Managing a personal “Cold War”
In Berstler’s view, there are multiple ways to create plausible deniability. One is through the practice of open secrets, the subject of one of her previous papers. An open secret is widely known information that is never acknowledged, for reasons of power or in-group identification, among other things. Indeed, no one even acknowledges that they are not acknowledging the open secret.
Examining open secrets led Berstler directly to her analysis of plausible deniability. However, the new paper focuses more on another way of creating plausible deniability, which she calls “two-tracking norms.” Two-tracking is when a group divides its communications into two parts: One track consists of official, limited, courteous interaction, and the second track consists more of informal, resentful, uncooperative interactions. Linda, in our example, is engaging in two-tracking.
But why do we two-track at all? Why not just be fully transparent? Well, in an office scenario, if Linda is mad that Brad divulged some company secrets, calling out Brad directly might lead to recriminations and conflict beyond what Linda is willing to tolerate for the sake of criticizing Brad on the record.
“It's like a Cold War situation where we each have an interest in not letting the conflict go to a state where we’re firing warheads at each other, but we can’t just purely manage relations around the negotiating table because we’re adversaries,” Berstler says. “We’re going to aggress against each other, but in a limited way. In a two-track conversation, communicating in the second track is like fighting a proxy battle, but we’re also providing evidence to each other that we’re only going to engage in a proxy battle.”
In this way, Linda takes Brad to task and some people pick up on it, but Brad is not explicitly publicly shamed. And though he might be unhappy, he is less likely to wreck all company norms in an attempt to retaliate. The firm more or less rolls on as usual.
Waiting for Goffman
Where Berstler differs in part from other philosophers is in her emphasis on the extent to which social practices are integral to our ways of deploying deniability. Our interactions are not just limited to rhetoric, but have additional layers.
“What we mean can often be different from what we say, or enhanced from what we say,” Berstler says. “Sometimes we figure out what others mean by relying on what they say in literal language. But sometimes we’re relying on other things, like the context.”
So, back at the firm, the colleagues of Linda and Brad might have some knowledge of a confidentiality breach, or they might know that Linda does not usually speak up at meetings, or they might read things into her tone of voice and the way she appeared to look at Brad. There is more to be gleaned than her literal words.
In this kind of analysis, Berstler finds illumination in the work of the midcentury sociologist Erving Goffman, who studied in minute detail the performative parts of our everyday interactions and speech. Goffman, as Berstler notes in the paper, proposed that we have a ritualized, social self (or “face”) and that normal, everyday behavior generally allows us, and others, to keep this face intact.
Relatedly, Goffman and some of his intellectual followers concluded that habits such as two-tracking are very common in everyday life; the price we pay for saving face is a bit less transparency, and a bit more secrecy and deniability.
“What I’m suggesting is we have these other established practices like two-tracking and open secrecy, where the deniability is just a byproduct,” Berstler says.
What’s the solution?
By bringing sociological ideas into her work, Berstler is moving beyond the normal philosophical discussion of the subject. On the other hand, she is not directly disputing core ideas in linguistics or the philosophy of language; she is just suggesting we add another layer to our analysis of communication and meaning.
Digging into issues of plausible deniability also raises the question of what to do about it. There may be something pernicious in the practice, but calling out plausible deniability threatens to dismantle our social guardrails and break the “Cold War” norms used to help people co-exist.
Berstler, though, has another suggestion: Instead of calling out such subterfuge, we can become verbally and performatively skilled enough to counteract it.
“I think the actual answer is becoming rhetorically clever,” Berstler says. “It’s being the person who uses indirect speech to respond strategically, without violating these norms. That is possible. It also means you have agency. You could become very good at verbal sparring.”
Besides, Berstler says, “Often that can be more powerful than just calling them out, and demonstrates your own verbal fluency. I think we admire it when we see it. Conversational skill is an important component of being morally good, in these cases by reprimanding someone in a way that’s not going to be counterproductive.”
She adds: “People who buy into the rhetoric of transparency can be setting back their own interests. Maybe speaking transparently is morally virtuous in some respects, but given the reality of our speech practices, transparency is not necessarily going to be the most effective way of handling things.”
Jacob Andreas and Brett McGuire named Edgerton Award winners
The associate professors of EECS and chemistry, respectively, are honored for exceptional contributions to teaching, research, and service at MIT.
MIT Associate Professor Jacob Andreas of the Department of Electrical Engineering and Computer Science (EECS) and MIT Associate Professor Brett McGuire of the Department of Chemistry have been selected as the winners of the 2026 Harold E. Edgerton Faculty Achievement Award. Established in 1982 as a permanent tribute to Institute Professor Emeritus Harold E. Edgerton’s great and enduring support for younger faculty members, this award is given annually in recognition of exceptional distinction in teaching, research, and service.
“The Department of Chemistry is extremely delighted to see Brett recognized for science that has changed how we think about carbon in space,” says Class of 1942 Professor of Chemistry and Department Head Matthew D. Shoulders. “Brett’s lab combines laboratory spectroscopy, radio astronomy, and sophisticated signal-analysis methods to pull definitive molecular fingerprints out of extraordinarily faint data. His discovery of polycyclic aromatic hydrocarbons in the cold interstellar medium has opened a powerful new window on astrochemistry. Moreover, Brett is inventing the creative and unique tools that make discoveries like this possible.”
“Jacob Andreas represents the very best of MIT EECS,” says Asu Ozdaglar, EECS department head. “He is an innovative researcher whose work combines computational and linguistically informed approaches to build foundations of language learning. He is an extraordinary educator who has brought these forefront ideas into our core classes in natural language processing and machine learning. His ability to bridge foundational theory with real-world impact, while also advancing the social and ethical dimensions of computing, makes him truly deserving of the Edgerton Faculty Achievement Award.”
Andreas joined the MIT faculty in July 2019, and is affiliated with the Computer Science and Artificial Intelligence Laboratory. His work is in natural language processing (NLP), and more broadly in AI. He aims to understand the computational foundations of language learning, and to build intelligent systems that can learn from human guidance. Among other honors, Andreas has received Samsung’s AI Researcher of the Year award, MIT’s Kolokotrones and Junior Bose teaching awards, a 2024 Sloan Research Fellowship, and paper awards at the North American Chapter of the Association for Computational Linguistics, the International Conference on Machine Learning, and the Association for Computational Linguistics.
Andreas received his BS from Columbia University, his MPhil from Cambridge University (where he studied as a Churchill scholar), and his PhD in natural language processing from the University of California at Berkeley. His work in natural language processing has taken on thorny problems in the capability gap between humans and computers. “The defining feature of human language use is our capacity for compositional generalization,” explains Antonio Torralba, Delta Electronics Professor and faculty head of Artificial Intelligence and Decision-Making in the Department of EECS. “Many of the core challenges in natural language processing are addressed by simply training larger and larger neural models, but this kind of compositional generalization remains a persistent difficulty, and without the ability to generalize compositionally, the deep learning toolkit will never be robust enough for the most challenging real-world NLP tasks. Jacob’s work on compositional modeling draws new connections between NLP and work in computer vision and physics aimed at modeling systems governed by symmetries and other algebraic structures and, using them, they have been able to build NLP models exhibiting a number of new, human-like language acquisition behaviors, including one-shot word learning, learning via mutual exclusivity constraints, and learning of grammatical rules in extremely low-resource settings.”
Within EECS, Andreas has developed multiple advanced courses in natural language processing, as well as new exercises designed to get students to grapple with important social and ethical considerations in machine learning deployment. “Jacob has taken a leading role in completely modernizing and extending our course offerings in natural language processing,” says award nominator Leslie Pack Kaelbling, Panasonic Professor in the Department of EECS. “He has led the development of a modern two-course sequence, which is a cornerstone of the new AI+D [artificial intelligence and decision-making] major, routinely enrolling several hundred students each semester. His command of the area is broad and deep, and his classes integrate classical structural understanding of language with the most modern learning-based approaches. He has put MIT EECS on the worldwide map as a place to study natural language at every level.”
Brett McGuire joined the MIT faculty in 2020 and was promoted to associate professor in 2025. His research operates at the intersection of physical chemistry, molecular spectroscopy, and observational astrophysics, where he seeks to uncover how the chemical building blocks of life evolve alongside and help shape the birth of stars and planets. A former Jansky Fellow and then Hubble Postdoctoral Fellow at the National Radio Astronomy Observatory, McGuire has a BS in chemistry from the University of Illinois and a PhD in physical chemistry from Caltech. His honors include a 2026 Sloan Fellowship, the Beckman Young Investigator Award, the Helen B. Warner Prize for Astronomy, and the MIT Award for Teaching with Digital Technology.
The faculty who nominated McGuire for this award praised his extraordinary public outreach, his immediate willingness to take on teaching class 5.111 (Principles of Chemical Science), a General Institute Requirement (GIR) course comprising 150–500 students, and his service to both the MIT and astrochemical communities.
“Brett is at the very top of astrochemical scientists in his age group due to his discovery of fused carbon ring compounds in the cold region of the ISM [interstellar medium], an observation that provides a route for carbon incorporation in planets,” says Sylvia Ceyer, the John C. Sheehan Professor of Chemistry, in her nomination statement. “His extensive involvement in service-oriented activities within the astrochemical/physical community is highly unusual for a junior scientist, and is a testament to the value that the astronomical community places on his wisdom and judgment. His phenomenal organizational skills have made his contributions to graduate admission protocols and seminar administration at MIT the envy of the department. And most importantly, Brett is a superb teacher, who cares deeply about students’ understanding and success, not only in his course, but in their future endeavors.”
“As an assistant professor, Brett volunteered to teach 5.111, a large GIR course with 150–500 students, and has received some of the best teaching evaluations among all faculty who have led the subject,” says Mei Hong, the David A. Leighty Professor of Chemistry. “He has a natural talent in explaining abstract physical chemistry concepts in an engaging manner. His slides, which he prepared from scratch instead of modifying from previous years’ material from other professors, are clear, and … the combination of lucid explanation and humor has generated great enthusiasm and interest in chemistry among students.”
Subject evaluations from McGuire’s courses praised his humor, the clarity of his explanations, and his ability to transform a lecture into a “science show.” “I haven't felt this sort of desire for the depth of understanding in a subject beyond just a straight grade [in some time],” says one student. “Brett definitely stimulated that love of learning for me.”
“Brett is an outstanding faculty member who is dedicated to fostering student learning and success,” says Jennifer Weisman, assistant director of academic programs in chemistry. “He is thoughtful, caring, and goes above and beyond to help his colleagues, students, and staff.”
“I’m thrilled to be selected for the Edgerton Award this year,” says McGuire. “The award is nominally for teaching, research, and service; MIT and the chemistry department in particular have been an incredible place to learn and grow in all these areas. I’m incredibly grateful for the mentorship, enthusiasm, and support I have received from my colleagues, from my students both in the lab and in the classroom, and from the MIT community during my time here. I look forward to many more years of exciting discovery together with this one-of-a-kind community.”
Bringing AI-driven protein-design tools to biologists everywhere
Founded by Tristan Bepler PhD ’20 and former MIT professor Tim Lu PhD ’07, OpenProtein.AI offers researchers open-source models and other tools for protein engineering.
Artificial intelligence is already proving it can accelerate drug development and improve our understanding of disease. But to turn AI into novel treatments we need to get the latest, most powerful models into the hands of scientists.
The problem is that most scientists aren’t machine-learning experts. Now the company OpenProtein.AI is helping scientists stay on the cutting edge of AI with a no-code platform that gives them access to powerful foundation models and a suite of tools for designing proteins, predicting protein structure and function, and training models.
The company, founded by Tristan Bepler PhD ’20 and former MIT associate professor Tim Lu PhD ’07, is already equipping researchers in pharmaceutical and biotech companies of all sizes with its tools, including internally developed foundation models for protein engineering. OpenProtein.AI also offers its platform to scientists in academia for free.
“It’s a really exciting time right now because these models can not only make protein engineering more efficient — which shortens development cycles for therapeutics and industrial uses — they can also enhance our ability to design new proteins with specific traits,” Bepler says. “We’re also thinking about applying these approaches to non-protein modalities. The big picture is we’re creating a language for describing biological systems.”
Advancing biology with AI
Bepler came to MIT in 2014 as part of the Computational and Systems Biology PhD Program, studying under Bonnie Berger, MIT’s Simons Professor of Applied Mathematics. It was there that he realized how little we understand about the molecules that make up the building blocks of biology.
“We hadn’t characterized biomolecules and proteins well enough to create good predictive models of what, say, a whole genome circuit will do, or how a protein interaction network will behave,” Bepler recalls. “It got me interested in understanding proteins at a more fine-grained level.”
Bepler began exploring ways to predict the chains of amino acids that make up proteins by analyzing evolutionary data. This was before Google released AlphaFold, a powerful prediction model for protein structure. The work led to one of the first generative AI models for understanding and designing proteins — what the team calls a protein language model.
“I was really excited about the classical framework of proteins and the relationships between their sequence, structure, and function. We don’t understand those links well,” Bepler says. “So how could we use these foundation models to skip the ‘structure’ component and go straight from sequence to function?”
After earning his PhD in 2020, Bepler entered Lu’s lab in MIT’s Department of Biological Engineering as a postdoc.
“This was around the time when the idea of integrating AI with biology was starting to pick up,” Lu recalls. “Tristan helped us build better computational models for biologic design. We also realized there’s a disconnect between the most cutting-edge tools available and the biologists, who would love to use these things but don’t know how to code. OpenProtein came from the idea of broadening access to these tools.”
Bepler had worked at the forefront of AI as part of his PhD. He knew the technology could help scientists accelerate their work.
“We started with the idea to build a general-purpose platform for doing machine learning-in-the-loop protein engineering,” Bepler says. “We wanted to build something that was user friendly because machine-learning ideas are kind of esoteric. They require implementation, GPUs, fine-tuning, designing libraries of sequences. Especially at that time, it was a lot for biologists to learn.”
OpenProtein’s platform, in contrast, features an intuitive web interface for biologists to upload data and conduct protein engineering work with machine learning. It features a range of open-source models, including PoET, OpenProtein’s flagship protein language model.
PoET, short for Protein Evolutionary Transformer, was trained on protein groups to generate sets of related proteins. Bepler and his collaborators showed it could generalize about evolutionary constraints on proteins and incorporate new information on protein sequences without retraining, allowing other researchers to add experimental data to improve the model.
“Researchers can use their own data to train models and optimize protein sequences, and then they can use our other tools to analyze those proteins,” Bepler says. “People are generating libraries of protein sequences in silico [on computers] and then running them through predictive models to get validation and structural predictors. It’s basically a no-code front-end, but we also have APIs for people who want to access it with code.”
The models help researchers design proteins faster, then decide which ones are promising enough for further lab testing. Researchers can also input proteins of interest, and the models can generate new ones with similar properties.
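To make that workflow concrete, here is a rough sketch of what machine-learning-in-the-loop protein design can look like in code. It is a generic, hypothetical illustration rather than OpenProtein.AI’s actual API, and the scoring function is a stand-in for a trained protein language model:

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def mutate(seq: str, n_mutations: int = 2) -> str:
    """Propose a variant by substituting random positions with random residues."""
    chars = list(seq)
    for _ in range(n_mutations):
        pos = random.randrange(len(chars))
        chars[pos] = random.choice(AMINO_ACIDS)
    return "".join(chars)

def score(seq: str) -> float:
    """Stand-in for a learned fitness predictor.

    In a real workflow this would be a protein language model, optionally
    fine-tuned on the user's own assay measurements, returning a predicted
    fitness or log-likelihood for the sequence.
    """
    return seq.count("W") - abs(len(seq) - 120) * 0.01  # toy heuristic only

def design_round(parent: str, library_size: int = 1000, top_k: int = 10) -> list[str]:
    """Generate an in-silico library around a parent sequence and keep the
    top-scoring candidates for experimental validation."""
    library = {mutate(parent) for _ in range(library_size)}
    return sorted(library, key=score, reverse=True)[:top_k]

parent_sequence = "M" + "A" * 119  # hypothetical 120-residue starting protein
candidates = design_round(parent_sequence)
print(candidates[:3])
```

In practice, the top-ranked candidates would be synthesized and assayed, and the new measurements fed back to retrain or fine-tune the predictor for the next design round.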
Since its founding, OpenProtein’s team has continued to add tools to its platform for researchers regardless of their lab size or resources.
“We’ve tried really hard to make the platform an open-ended toolbox,” Bepler says. “It has specific workflows, but it’s not tied specifically to one protein function or class of proteins. One of the great things about these models is they are very good at understanding proteins broadly. They learn about the whole space of possible proteins.”
Enabling the next generation of therapies
The large pharmaceutical company Boehringer Ingelheim began using OpenProtein’s platform in early 2025. Recently, the companies announced an expanded collaboration that will see OpenProtein’s platform and models embedded into Boehringer Ingelheim’s work as it engineers proteins to treat diseases like cancer and autoimmune or inflammatory conditions.
Last year, OpenProtein also released a new version of its protein language model, PoET-2, that outperforms much larger models while using a small fraction of the computing resources and experimental data.
“We really want to solve the question of how we describe proteins,” Bepler says. “What’s the meaningful, domain-specific language of protein constraints we use as we generate them? How can we bring in more evolutionary constraints? How can we describe an enzymatic reaction a protein carries out such that a model can generate sequences to do that reaction?”
Moving forward, the founders are hoping to make models that factor in the changing, interconnected nature of protein function.
“The area I am excited about is going beyond protein binding events to use these models to predict and design dynamic features, where the protein has to engage two, three, or four biological mechanisms at the same time, or change its function after binding,” says Lu, who currently serves in an advisory role for the company.
As progress in AI races forward, OpenProtein continues to see its mission as giving scientists the best tools to develop new treatments faster.
“As work gets more complex, with approaches incorporating things like protein logic and dynamic therapies, the existing experimental toolsets become limiting,” Lu says. “It’s really important to create open ecosystems around AI and biology. There’s a risk that AI resources could get so concentrated that the average researcher can’t use them. Open access is super important for the scientific field to make progress.”
With navigating nematodes, scientists map out how brains implement behaviors
MIT scientists create a detailed map of exactly what happens in the brains of C. elegans worms when they “follow their nose” to savor attractive odors or avoid unappealing ones.
Animal behavior reflects a complex interplay between an animal’s brain and its sensory surroundings. Only rarely have scientists been able to discern how actions emerge from this interaction. A new open-access study in Nature Neuroscience by researchers in The Picower Institute for Learning and Memory at MIT offers one example by revealing how circuits of neurons within C. elegans nematode worms respond to odors and generate movement as the worms pursue smells they like and evade ones they don’t.
“Across the animal kingdom, there are just so many remarkable behaviors,” says study senior author Steven Flavell, associate professor in the Picower Institute and MIT’s Department of Brain and Cognitive Sciences and an investigator of the Howard Hughes Medical Institute. “With modern neuroscience tools, we are finally gaining the ability to map their mechanistic underpinnings.”
By the end of the study, which former graduate student Talya Kramer PhD ’25 led as her doctoral thesis research, the team was able to show exactly which neurons in the worm’s brain did which of the jobs needed to sense where smells were coming from, plan turns toward or away from them, shift to reverse (like old-fashioned radio-controlled cars, C. elegans worms turn in reverse), execute the turns, and then go back to moving forward. Not only did the study reveal the sequence and each neuron’s role in it, but it also demonstrated that worms are more skillful and intentional in these actions than perhaps they’ve received credit for. And finally, the study demonstrated that it’s all coordinated by the neuromodulatory chemical tyramine.
“One thing that really excited us about this study is that we were able to see what a sensorimotor arc looks like at the scale of a whole nervous system: all the bits and pieces, from responses to the sensory cue until the behavioral response is implemented,” Flavell says.
Seeing the sequence
To do the research, Kramer put worms in dishes with spots of odors they’d either want to navigate toward or slither away from. With the lab’s custom microscopes and software, she and her co-authors could track how the worms navigated and all the electrical activity of more than 100 neurons in their brains during those behaviors (the worms only have 302 neurons total).
The surveillance enabled Kramer, Flavell, and their colleagues to observe that the worms weren’t just ambling randomly until they happened to get where they’d want to be. Instead, the worms would execute turns with advantageous timing and at well-chosen angles. The worms seemed to know what they were doing as they navigated along the gradients of the odors.
Inside their heads, patterns of electrical activity among a cohort of 10 neurons (indicated by flashing green light tied to the flux of calcium ions in the cells) revealed the sequence of neural activation that enabled the worms to execute these sensible sensory-guided motions: forward, then into reverse, then into the turn, and then back to forward. Particular neurons guided each of these steps, including detecting the odors, planning the turn, switching into reverse, and then executing the turns.
A couple of neurons stood out as key gears in the sequence. A neuron called SAA proved pivotal for integrating odor detection with planning movement, as its activity predicted the direction of the eventual turn. Several neurons were flexible enough to show different activity patterns depending on factors such as where the odors were and whether the worm was moving forward or in reverse.
And if the neurons are indeed turning and shifting gears, then the neuromodulator tyramine (the worm analog of norepinephrine) was the signal essential to switch their gears. After the worms started moving in reverse, tyramine from the neuron RIM enabled other neurons in the sequence to change their activity appropriately to execute the turns. In several experiments the scientists knocked out RIM tyramine and saw that the navigation behaviors and the sequence of neural activity largely fell apart.
“The neuromodulator tyramine plays a central role in organizing these sequential brain activity patterns,” Flavell says.
In addition to Flavell and Kramer, the paper’s other authors are Flossie Wan, Sara Pugliese, Adam Atanas, Sreeparna Pradhan, Alex Hiser, Lillie Godinez, Jinyue Luo, Eric Bueno, and Thomas Felt.
A MathWorks Science Fellowship, the National Institutes of Health, the National Science Foundation, The McKnight Foundation, The Alfred P. Sloan Foundation, the Freedom Together Foundation, and HHMI provided funding to support the work.
Understanding community effects of Asian immigrants’ US housing purchases
Findings suggest that, at the county level, the rise in prices is due in part to the fact that new neighbors have a positive impact on K-12 education.
Asian immigrants are both the fastest-growing and highest-earning immigrant ethnic group in the United States, facts that have caught the attention of many economists interested in how these groups — whether investors or residents — impact housing prices, K-12 education, and other important aspects of community life.
A new study by economists at MIT and the University of Cincinnati delves into this trend, focusing on the potential mechanisms at work behind the correlation of rising home prices and subsequent improvements in education at the county level. Their findings suggest that home prices rise not simply due to increased demand, but because the new neighbors have a positive influence on the quality of K-12 education, which in turn increases desirability.
The study focuses on 2008 to 2019, a period that saw a relative spike in US immigration from six Asian countries in particular — China, India, Japan, Korea, the Philippines, and Vietnam. Among this group, the economists focused specifically on those who arrived on non-permanent visas for study or work — a cohort that represents a distinct and growing channel of new immigrant inflow, and is often pre-selected by universities and employers.
“We’re looking at a window when the influx of Asian immigrants has a particularly strong preference for education, and who themselves were also highly educated,” says Eunjee Kwon, the West Shell, Jr. Assistant Professor of Real Estate in the Department of Finance at the University of Cincinnati, a co-author on the study published in the May issue of the Journal of Urban Economics. “This period also marks a notable shift in the socioeconomic profile of Asian immigrants to the U.S., with this cohort arriving with higher levels of education and income relative to earlier waves of Asian immigrants and, in many cases, relative to the native-born population.”
While county data is not granular to the neighborhood or even municipality level, the researchers found that 30 to 40 percent of the rise in the value of homes purchased in areas where Asian immigrant buyers have school-age children correlates with improved quality of education, as indicated by the average rise in standardized test scores of all children in the county.
“Maybe some Asian buyers are pure investors, but many of them become residents who buy homes for themselves and their families, and transform the neighborhoods,” says co-author Siqi Zheng, the Samuel Tak Lee Professor of Urban and Real Estate Sustainability at the MIT Center for Real Estate and the Department of Urban Studies and Planning. “We show that this is not negligible; it is a big component. We can attribute at least one-third of housing price increases to improved education.”
Amanda Ang, a postdoc in the Department of Economics at Aalto University in Helsinki, is the third co-author of the paper. The work is somewhat personal for the scientists, who undertook the study without funding in order to see for themselves what impact this particular group of immigrants had on neighborhoods.
“We wanted to understand what this group contributes to the communities where they settle,” Kwon says. “We found that their presence benefits children of all other backgrounds, too.”
Ang, Kwon, and Zheng use an econometric approach called an instrumental variable to home in on a causal relationship, not just an association. To help ensure accuracy, they carefully omitted counties that have long been home to large Asian communities — such as San Francisco, Los Angeles, and New York — in order to capture the impact of recent immigrants on other counties.
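The core of that instrumental-variable logic can be shown with a few lines of two-stage least squares on synthetic data. This is a minimal sketch with invented variables and an invented instrument, not the authors’ actual specification or dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic county-level data (illustrative only; not the study's variables).
instrument = rng.normal(size=n)       # something that shifts immigrant inflows but not school quality
confounder = rng.normal(size=n)       # unobserved local conditions
immigrant_inflow = 0.8 * instrument + 0.5 * confounder + rng.normal(size=n)   # endogenous regressor
test_scores = 0.3 * immigrant_inflow + 0.7 * confounder + rng.normal(size=n)  # outcome

def ols(X, y):
    """Ordinary least squares coefficients via lstsq."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: predict the endogenous regressor from the instrument.
X1 = np.column_stack([np.ones(n), instrument])
inflow_hat = X1 @ ols(X1, immigrant_inflow)

# Stage 2: regress the outcome on the predicted (exogenous) part of the regressor.
X2 = np.column_stack([np.ones(n), inflow_hat])
beta_iv = ols(X2, test_scores)[1]

beta_naive = ols(np.column_stack([np.ones(n), immigrant_inflow]), test_scores)[1]
print(f"naive OLS: {beta_naive:.2f}   2SLS/IV: {beta_iv:.2f}   (true effect: 0.30)")
```

The naive regression is biased by the unobserved confounder; the two-stage estimate recovers the true effect only if the instrument shifts immigrant inflows without directly affecting school quality, which is the identifying assumption such studies must defend.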
“I believe that this will be a highly influential paper because it asks a very important question and uses credible statistical methods to try to disentangle selection effects from treatment effects, using a subtle analysis accounting for displacement,” says Matthew Kahn, the Provost Professor of Economics and Spatial Sciences at the University of Southern California, who was not involved with the research.
“What really interests me about this paper is that it suggests that there can be a positive spillover effect: that U.S. areas that attract Asian immigrants also gain from improved school quality,” Kahn says. “It’s the first I’ve seen undertaken on this very important hypothesis, which certainly merits additional future research, possibly using school-level and individual-level data.”
Light-activated gel could impact wearables, soft robotics, and more
New MIT work advances the growing field of ionotronics, in which data are transferred through ions, potentially providing a bridge between electronics and biological tissue.
Consider the chief difference between living systems and electronics: The former is generally soft and squishy, while the latter is hard and rigid. Now, in work that could impact human-machine interfaces, biocompatible devices, soft robotics, and more, MIT engineers and colleagues have developed a soft, flexible gel that dramatically changes its conductivity upon the application of light.
Enter the growing field of ionotronics, which involves transferring data through ions, or charged molecules. Electronics does the same, with electrons. But while the latter is well established, ionotronics is still being developed, with one huge exception: living systems. The cells in our bodies communicate with a variety of ions, from potassium to sodium.
Ionotronics, in turn, can provide a bridge between electronics and biological tissues. Potential applications range from soft wearable technology to human-machine interfaces.
“We’ve found a mechanism to dynamically control local ion population in a soft material,” says Thomas J. Wallin, the John F. Elliott Career Development Professor in MIT’s Department of Materials Science and Engineering and leader of the work. “That could allow a system that is self-adaptive to environmental stimuli, in this case light.” In other words, the system could automatically change in response to changes in light, which could allow complex signal processing in soft materials.
An open-access paper about the work was published online recently in Nature Communications.
A growing field
Although others have developed ionotronic materials with high conductivities that allow the quick movement of ions, those conductivities cannot be controlled. “What we’re doing is using light to switch a soft material from insulating to something that is 400 times more conductive,” says Xu Liu, first author of the paper and former MIT postdoc in materials science and engineering who is now an incoming assistant professor at King’s College London.
Key to the work is a class of materials known as photo-ion generators (PIGs). These can become some 1,000 times more conductive upon the application of light. The MIT team optimized a way to incorporate a PIG into polyurethane rubber by first dissolving a PIG powder into a solvent, and then using a swelling method to get it into the rubber.
Much potential
In the material reported in the current work, the change in conductivity is irreversible. But Liu is confident that future versions could switch back and forth between insulating and conducting states.
She notes that the current material was developed using only one kind of PIG, polymer (the polyurethane rubber), and solvent, but there are many other kinds of all three. So there is great potential for creating even better light-responsive soft materials.
Liu also notes the potential for developing soft materials that respond to other environmental stimuli, such as heat or magnetism. “We’re inspired to do more work in this field by changing the driving force from light to other forms of environmental stimuli,” she says.
“Our work has the potential to lead to the creation of a subfield that we call soft photo-ionotronics,” Liu continues. “We are also very excited about the opportunities from our work to create new soft machines impacting soft wearable technology, human-machine interfaces, robotics, biomedicine, and other fields.”
Additional authors of the paper are Steven M. Adelmund, Shahriar Safaee, and Wenyang Pan of Reality Labs at Meta.
3 Questions: A running shoe that adapts to the runner
Associate Professor Skylar Tibbits discusses a new technology that uses granular convection to deliver individualized performance.
Granular convection takes place everywhere: candy in a box, sand on the beach, foam in a cushion. Often referred to as the “Brazil nut effect,” granular convection occurs when solid, independent, irregularly shaped particles reorder themselves following agitation. One might think, intuitively, that the larger pieces fall to the bottom, but it is their size, not their density, that determines their location, and the larger pieces end up on top.
In the world of competitive running, elite athletes have their footwear individually designed for needs such as foot shape and pressure points. Comfortable and supportive footwear can assist optimal performance. However, most footwear is standardized and doesn’t offer personalized performance.
MIT associate professor of architecture Skylar Tibbits, founder and co-director of the Self-Assembly Lab in the MIT School of Architecture and Planning, along with various MIT colleagues, has been developing tests surrounding the phenomenon of granular convection within the midsole — or middle layer, between the outsole (bottom) and insole (top) — of running shoes to create a shoe that evolves over time to provide an individualized product. As we approach the running of the 130th Boston Marathon — one of the world's most prominent displays of footwear supporting athletes — Tibbits answers three questions about bead-based technologies as applied to running shoes.
Q: What are the advantages of an adaptive midsole over the current bead-based midsole technology?
A: Currently, the standard midsoles in running shoes are static. They aren’t customized to the shape of our foot or the force we deliver when running or walking. They also don’t change or improve over time as we run in them. Some products — blue jeans, baseball gloves, and hats, for example — get more comfortable as you wear them. We were exploring how this could be taken even further with a running shoe so that you would have the cushion, support, and stiffness where you need it and have it improve these features as you use it so that, over time, the actual performance of the shoe gets better. It’s not a personalized fit; it’s a performance-driven adaptation.
There are three advantages to this technology. The first is that customization is not only for elite athletes. Most elite athletes are already getting gear personalized for their specific needs by their sponsoring brands. Now, customized gear can be available for everyone. Second, customized gear currently does not adapt to an athlete’s performance. But you need your footwear to evolve because your needs as a runner evolve. You need the comfort, cushioning, and protection to support your performance.
A third advantage is the manufacturability of this type of shoe. Custom shoes are now made in a factory for the specifications of a single athlete. That doesn’t scale. You can’t produce a manufacturing process where every single person’s shoe is going to be custom-made for them. We’ve shown that every shoe can be the same and mass produced, but, over time, the shoe will evolve to your personal needs. That is a way to get customization without having to change the manufacturing process.
Q: Why the interest in granular systems, and granular convection in particular?
A: We’ve worked on reversible construction techniques with granular jamming over the years, which is at the opposite end of the spectrum. Granular convection promotes the movement of particles; the more they are mixed, the more they separate. Our vision was looking at footwear that adapts with you over time. We thought we could use granular convection as a mechanism for the footwear to evolve.
We put in particles with different stiffnesses, different material properties, and different sizes, so that over time the softer particles, which are the larger ones, rise to the top, and the stiffer, smaller particles sink to the bottom, toward the outsole. We designed how these particles moved based on the vibration and the impact of walking and running.
We also designed the container. We had three different particle sizes, and we conducted tests to dial in the right number of steps for the midsole to evolve: about 20,000 steps, roughly the length of a marathon. We could either speed up or slow down that process.
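A toy simulation conveys the size-sorting intuition behind that tuning: treat the midsole as a vertical stack of particles and give a smaller particle some chance of slipping beneath a larger neighbor on each simulated footstrike. This is a deliberately crude sketch of the Brazil nut effect with made-up sizes and probabilities, not the Self-Assembly Lab’s actual design model:

```python
import random

random.seed(1)

# A vertical column of particle diameters (mm); index 0 is the bottom of the midsole.
# Larger (softer) and smaller (stiffer) particles start out mixed together.
column = [random.choice([2.0, 4.0, 6.0]) for _ in range(30)]

def footstrike(column, slip_probability=0.3):
    """One agitation event: a smaller particle above a larger one may slip below it."""
    for i in range(len(column) - 1):
        lower, upper = column[i], column[i + 1]
        if upper < lower and random.random() < slip_probability:
            column[i], column[i + 1] = upper, lower

for _ in range(20_000):   # roughly the number of steps cited above
    footstrike(column)

print("average size, bottom third:", sum(column[:10]) / 10)
print("average size, top third:   ", sum(column[-10:]) / 10)
# After many agitations the larger (softer) particles have drifted toward the
# top and the smaller (stiffer) ones toward the bottom, as described above.
```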
Q: Are there future applications of customization for granular convection? If so, where do you see your research going next?
A: Any products that need cushioning systems that improve over time would benefit from this technology. With custom packaging, you have molded foam that fits around a product — a flat-screen television, for example — that is tossed out after it has been shipped from factory to distributor to customer. I worked with a furniture company that wrapped blankets around chairs for transport, but there were still some chairs that sustained damage. Maybe we could develop a blanket or some kind of material that adapts over the journey so that it creates just the right amount of cushion for the shape and property of that product and, once it’s delivered, its shape could be “released” and then reused. How can we reset this product in a timely manner so it can be used again?
Wheelchairs are another product where we would want seat cushions that can adapt to how a person sits, the force distribution, and the environment in which they are being used, such as a sidewalk or a gravel path. We considered this as it relates to footwear. You might want to reset your shoes because you’re going to be running road races on a given day and trail races another day. How can we empty and refill the midsole with different particles so it can adapt again? More importantly, how can we upgrade or change our shoes without throwing them away? This is exciting future work for us to explore.
A regulatory loophole could delay ozone recovery by years
Scientists say an exception in the Montreal Protocol for the use of ozone-depleting feedstocks could set the ozone recovery back seven years.
Often hailed as the most successful international environmental agreement of all time, the 1987 Montreal Protocol continues to successfully phase out the global production of chemicals that were creating a growing hole in the ozone layer, causing skin cancer and other adverse health effects.
MIT-led studies have since shown the subsequent reduction in ozone-depleting substances is helping stratospheric ozone to recover. (It could return to 1980 levels as early as 2040, according to some estimates.) But the Montreal Protocol made an exception in its rules for the use of ozone-depleting substances as feedstocks in the production of other materials. That’s because it was thought that only a small amount — just 0.5 percent — of the ozone-depleting substances used for this purpose would leak into the atmosphere.
In recent years, however, scientists have observed more ozone-depleting substances in the atmosphere than expected, and have increased their estimates of leakage from feedstocks.
Now an international group of scientists, including researchers from MIT, has calculated the impact of different feedstock leakage rates on the ozone’s fragile recovery. They find that the higher leakage rates, if not addressed by the Montreal Protocol, could delay ozone recovery by about seven years.
“We’ve realized in the last few years that these feedstock chemicals are a bug in the system,” says author Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies and Chemistry, who was part of the original research team that linked the chemicals to the ozone hole. “Production of ozone-depleting substances has pretty much ceased around the world except for this one use, which is when you have a chemical you convert into something else.”
The paper, which was published in Nature Communications today, is the first to comprehensively quantify the impact of leaked feedstocks, which are currently used to make plastics and nonstick chemicals. They are also used to make substitute chemicals for the ones regulated under the Montreal Protocol. The researchers say it shows the importance of curbing use and preventing leakage of such feedstocks, especially as the production of their end products, like plastic, is projected to grow.
“We’ve gotten to the point where, if we want the protocol to be as successful in the future as it has been in the past, the parties really need to think about how to tighten up the emissions of these industrial processes,” says first author Stefan Reimann of the Swiss Federal Laboratories for Materials Science and Technology.
“To me, it’s only fair, because so many other things have already been completely discontinued. So why should this exemption exist if it’s going to be damaging?” says Solomon.
Joining Reimann on the paper are his colleagues Martin K. Vollmer and Lukas Emmenegger; Luke Western and Susan Solomon of the MIT Center for Sustainability Science and Strategy and the Department of Earth, Atmospheric and Planetary Sciences; David Sherry of Nolan-Sherry and Associates Ltd; Megan Lickley of Georgetown University; Lambert Kuijpers of the A/gent Consultancy b.v.; Stephen A. Montzka and John Daniel of the National Oceanic and Atmospheric Administration; Matthew Rigby of the University of Bristol; Guus J.M. Velders of Utrecht University; Qing Liang of the NASA Goddard Space Flight Center; and Sunyoung Park of Kyungpook National University.
Repairing the ozone
In 1985, scientists discovered a growing hole in the ozone layer over Antarctica that was allowing more of the sun’s harmful ultraviolet radiation to reach Earth’s surface. The following year, researchers including Solomon traveled to Antarctica and discovered the cause of the ozone deterioration: a class of chemicals called chlorofluorocarbons, or CFCs, which were then used in refrigeration, air conditioning, and aerosols.
The revelations led to the Montreal Protocol, an international treaty involving 197 countries and the European Union restricting the use of CFCs. The subsequent decision to exempt the use of ozone-depleting substances for use as feedstocks was based partially on industry estimates of how much of their feedstocks leaked.
“It was thought that the emissions of these substances as a feedstock were minor compared to things like refrigerants and foams,” Western says. “It was also believed that leakage from these sources was minor — around half a percent of what went in — because people would essentially be leaking their profits if their feedstocks were released into the atmosphere.”
Unfortunately, some of those assumptions are no longer true. Western and Reimann are part of the Advanced Global Atmospheric Gases Experiment (AGAGE), a global monitoring network co-founded by Ronald Prinn, MIT’s TEPCO Professor of Atmospheric Science. AGAGE monitors emissions of ozone-depleting substances around the world, and in recent years researchers have revised their estimates of feedstock leakage upwards, to about 3.6 percent. For some chemicals, the number was even higher.
In the new paper, the researchers estimated a 3.6 percent feedstock leakage as the baseline for most chemicals. They compared that with a scenario where 0.5 percent of feedstocks are leaked from 2025 onward and a scenario with zero feedstock-related emissions. The researchers also looked at production trends between 2014 and 2024 to project how much of each specific ozone-depleting chemical would be used as feedstock between 2025 and 2100.
The analysis shows that until 2050, total ozone-depleting chemical emissions decrease in all scenarios as rising feedstock emissions are offset by declining uses enforced by the Montreal Protocol. In the scenario with continued 3.6 percent leakage, however, emissions level off around 2045, and total emissions only decrease by 50 percent overall by 2100.
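As a back-of-the-envelope illustration of how the assumed leakage fraction enters such projections, consider the toy calculation below. The production figure and growth rate are invented placeholders; the study’s actual scenarios use chemical-by-chemical production trends and account for the Montreal Protocol’s phase-down schedules:

```python
# Toy comparison of cumulative feedstock-related emissions under different
# leakage rates. All numbers are placeholders for illustration; the paper's
# projections use chemical-by-chemical production trends from 2014-2024.

production_2025 = 1_000.0   # hypothetical feedstock use in 2025 (kilotonnes per year)
growth_rate = 0.02          # hypothetical annual growth in feedstock demand

def cumulative_emissions(leak_fraction: float, start_year: int = 2025, end_year: int = 2100) -> float:
    """Sum leaked emissions over the projection window for a given leakage rate."""
    total = 0.0
    production = production_2025
    for _ in range(start_year, end_year + 1):
        total += production * leak_fraction
        production *= 1 + growth_rate
    return total

for leak in (0.0, 0.005, 0.036):
    print(f"leakage {leak:5.1%}: cumulative emissions ~ {cumulative_emissions(leak):,.0f} kilotonnes")
```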
The researchers then evaluated the impact of feedstock-related emissions on stratospheric ozone depletion. In the scenario where feedstock leakage is 0.5 percent, the ozone returns to its 1980 status by 2066. In the scenario with zero feedstock leakage, the ozone reclaims its 1980 health in 2065. But in the baseline scenario, the recovery is delayed about seven years, to 2073.
“This paper sends an important message that these emissions are too high and we have to find a way to reduce them,” Reimann says. “Either that means no longer using these substances as feedstocks, swapping out chemicals, or reducing the leakage emissions when they are used.”
A global response
Solomon is confident industries will be able to adjust to the latest findings.
“There are a lot of innovators in the chemical industry,” Solomon says. “They make new chemicals and improve chemicals for a living. It’s true they can perhaps get too entrenched with certain chemicals, but it doesn’t happen that often. Actually, they’re usually quite willing to consider alternatives. There are thousands of other chemicals that could be used instead, so why not switch? That’s been the attitude.”
Solomon says the fact that AGAGE can detect the impact of feedstock emissions is a testament to the progress the world has made in reducing emissions from other sources up to this point. She believes raising awareness of the feedstock problem is the first step.
“This isn’t the first time that the AGAGE Network has made measurements that have allowed the world to see we need to do a little better here or there,” Western says. “Often, it’s just a mistake. Sometimes all it takes is making people more aware of these things to tighten up some processes.”
Members of the Montreal Protocol meet every year. In those meetings, they split into working groups around different topics. Feedstock emissions are already one of those topics, so participants will review the evidence together. Typically, they release a statement about mitigation strategies if needed.
“We wanted to raise the warning flag that something is wrong here,” Reimann says. “We could reduce the period of ozone depletion by years. It might not sound like a long time, but if you could count the skin cancer cases you’d avoid in that time, it would seem quite significant.”
The work was supported, in part, by the U.S. National Science Foundation, the U.S. National Aeronautics and Space Administration (NASA), the Swiss Federal Office for the Environment, the VoLo Foundation, the United Kingdom Natural Environment Research Council, and the Korea Meteorological Administration Research and Development Program.
Youth may increase vulnerability to a carcinogen found in contaminated water and some drugs
A new study suggests that the chemical NDMA is much more likely to cause cancerous mutations after exposure early in life.
A new study from MIT suggests that a carcinogen that has been found in medications and in drinking water contaminated by chemical plants may have a much more severe impact on children than adults.
In a study of mice, the researchers found that juveniles exposed to drinking water containing this compound, known as NDMA, showed dramatically higher rates of DNA damage and cancer than adults.
The findings may help to explain an epidemiological association between childhood cancer and prenatal exposure to NDMA in people living near a contaminated site in Wilmington, Massachusetts, the researchers say. The study also suggests that it is critical to evaluate the impact of potential carcinogens across all ages.
“We really hope that groups that do safety testing will change their paradigm and start looking at young animals, so that we can catch potential carcinogens before people are exposed,” says Bevin Engelward, an MIT professor of biological engineering. “As a solution to cancer, cancer prevention is clearly much better than cancer treatment, so we hope we can spot dangerous chemicals before people are exposed, and therefore prevent extensive cancer risk.”
MIT postdoc Lindsay Volk is the lead author of the paper. Engelward is the senior author of the study, which appears in Nature Communications.
From DNA damage to cancer
NDMA (N-Nitrosodimethylamine) can be generated as a byproduct of many industrial chemical processes, and it is also found in cigarette smoke and processed meats. In recent years, NDMA has been detected in some formulations of the drugs valsartan, ranitidine, and metformin. It was also found in drinking water in Wilmington, Massachusetts, in the 1990s, as a result of contamination from the Olin Chemical site.
In 2021, a study from the Massachusetts Department of Health suggested a link between that water contamination and an elevated incidence of childhood cancer in Wilmington. Between 1990 and 2000, 22 Wilmington children were diagnosed with cancer. The contaminated wells were closed in 2003.
Also in 2021, Engelward and others at MIT published a study on the mechanism of how NDMA can lead to cancer. In the new Nature Communications paper, Engelward and her colleagues set out to see if they could determine why the compound appears to affect children more than adults.
Most studies that evaluate potential carcinogens are performed in mice that are at least 4 to 6 weeks old, and often older. For this study, the researchers studied two groups of mice — one 3 weeks old (juvenile), and one 3 months old (adult). Each group was given drinking water with low levels of NDMA, about five parts per million, for two weeks.
Inside the body, NDMA is metabolized by a liver enzyme called CYP2E1. This produces toxic metabolites that can damage DNA by adding a small chemical group known as a methyl group to DNA bases, creating lesions known as adducts.
When the researchers examined the livers of the mice, they found that juveniles and adults showed similar levels of DNA adducts. However, there were dramatic differences in what happened after that initial damage. In juvenile mice, DNA adducts led to significant accumulation of double-stranded DNA breaks, which occur when cells try to repair adducts. These breaks produce mutations that eventually lead to the development of liver cancer.
In the adult mice, the researchers saw essentially no double-stranded breaks and significantly fewer mutations compared to juveniles. Furthermore, the adult livers did not develop severe pathology such as tumors, even though they experienced the same initial level of DNA adducts.
“The initial structural changes to the DNA had very different consequences depending on age,” Engelward says. “The double-stranded breaks were exclusively observed in the young.”
Further experiments revealed that these differences stem from differences in the rates of cell proliferation. Cells in the juvenile liver divide rapidly, giving them more opportunity to turn DNA adducts into mutations, while cells of the adult liver rarely divide.
“This really emphasizes the overall problem that we’re trying to highlight in the paper,” Volk says. “With toxicological studies, oftentimes the standard is to use fully grown mice. At that point, they’re already slowing down cell division, so if we are testing the harmful effects of NDMA in adult mice, then we’re completely missing how vulnerable particular groups are, such as younger animals.”
While most of these effects were seen in the liver, because that is where NDMA is metabolized, a few of the mice developed other types of cancer, including lung cancer and lymphoma.
Adult risk is not zero
For most of these studies, the researchers used mice that had two of their DNA repair systems knocked out. This speeds up the mutation process, allowing the researchers to see the effects of NDMA exposure more easily, without needing to study a large population of mice.
However, a small study in mice with normal DNA repair showed that juveniles experienced NDMA-induced double-strand breaks, regenerative proliferation, and large-scale mutations that were completely absent in adults. This occurs because the fast-growing juveniles possess highly active DNA replication machinery that encounters the DNA adducts before the cell has time to repair them.
The researchers also found that if they treated adult mice with thyroid hormone, which stimulates proliferation of liver cells, those cells began accumulating mutations as quickly as the juvenile liver cells. Previous work done in the Engelward laboratory has shown that inflammation can also stimulate cell proliferation-driven vulnerability to DNA damage, so the findings of this study suggest that anything that causes liver inflammation could make the adult liver more vulnerable to damage caused by agents such as NDMA.
“We certainly don’t want to say that adults are completely resistant to NDMA,” Volk says. “Everything impacts your susceptibility to a carcinogen, whether that’s your genetics, your age, your diet, and so forth. In adults, if they have a viral infection, or a high fat diet, or chronic binge alcohol drinking, this can impact proliferation within the liver and potentially make them susceptible to NDMA.”
The researchers are now investigating how a high-fat diet might influence cancer development in mice that also have exposure to NDMA.
This collaborative effort across several MIT labs was funded by the National Institute of Environmental Health Sciences (NIEHS) Superfund Research Program, an NIEHS Core Center Grant, a National Institutes of Health Training Grant, and the Anonymous Fund for Climate Action.
MIT study reveals a new role for cell membranes
Long thought to be mainly a structural support, the cell membrane also influences how cells respond to signals and may contribute to the growth of cancer cells.
Cells are enveloped by a lipid membrane that gives them structure and provides a barrier between the cell and its environment. However, evidence has recently emerged suggesting that these membranes do more than simply provide protection — they also influence the behavior of the protein receptors embedded in them.
A new study from MIT chemists adds further support to that idea. The researchers found that changing the composition of the cell membrane can alter the function of a membrane receptor that promotes proliferation.
Epidermal growth factor receptor (EGFR) can be locked into an overactive state when the cell membrane has a higher than normal concentration of negatively charged lipids, the researchers found. This may help to explain why cancer cells with high levels of those lipids enter a highly proliferative state that allows them to divide uncontrollably.
“The longstanding dogma of what a membrane does is that it’s just a scaffold, an organizational structure. However, there have been increasing observations that suggest that maybe these membrane lipids are actually playing a role in receptor function,” says Gabriela Schlau-Cohen, the Robert T. Haslam and Bradley Dewey Professor of Chemistry at MIT and the senior author of the study.
The findings open up the possibility of discovering new ways to treat tumors by neutralizing the negative charge, which might turn down EGFR signaling, she adds.
Shwetha Srinivasan PhD ’22 is the lead author of the paper, which appears in the journal eLife. Other authors include former MIT postdocs Xingcheng Lin and Raju Regmi, Xuyan Chen PhD ’25, and Bin Zhang, an associate professor of chemistry at MIT.
Receptor dynamics
The EGF receptor, which is found on cells that line body surfaces and organs, is one of many receptors that help control cell growth. Some types of cancer, especially lung cancer and glioblastoma, overexpress the EGF receptor, which can lead to uncontrolled growth.
Like most receptor proteins, EGFR spans the entire cell membrane. Until recently, it has been challenging to study how signals are conveyed across the entire receptor, because of the difficulty of creating membranes that have proteins going all the way through them and then studying both ends of those proteins.
To make it easier to study these signaling processes, Schlau-Cohen’s lab uses nanodiscs, a special type of self-assembling membrane that mimics the cell membrane. When making these discs, the researchers can embed receptors in them, allowing the team to study the function of the full-length receptor.
Using a technique called single molecule FRET (fluorescence resonance energy transfer), the researchers can study how the shape of the receptor changes under different conditions. Single molecule FRET allows them to measure the distance between different parts of the protein by labeling them with fluorescent tags and then measuring how fast energy travels between the tags.
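As a rough illustration of why FRET is so sensitive to distance, the standard efficiency-versus-separation relation falls off with the sixth power of the distance between the tags. The short sketch below is a generic illustration, not the team's analysis code, and the Förster radius it uses is a placeholder value rather than a number from the study.

```python
# Illustrative sketch of the standard FRET distance relation (not the team's
# analysis code). E = 1 / (1 + (r / R0)**6), where R0 (the Forster radius,
# typically a few nanometers) depends on the dye pair; the value below is a
# placeholder assumption.
def fret_efficiency(r_nm, r0_nm=5.0):
    """Energy-transfer efficiency for donor-acceptor separation r_nm."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

def distance_from_efficiency(e, r0_nm=5.0):
    """Invert the relation to estimate separation from a measured efficiency."""
    return r0_nm * ((1.0 / e) - 1.0) ** (1.0 / 6.0)

if __name__ == "__main__":
    for e in (0.2, 0.5, 0.8):
        print(f"E = {e:.1f}  ->  r ~ {distance_from_efficiency(e):.2f} nm")
```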
In previous work, Schlau-Cohen and Zhang used single molecule FRET and molecular dynamics simulations to reveal what happens when EGFR binds to EGF. They found that this binding causes the transmembrane section of the receptor to change shape, and that shape-shift triggers the section of the receptor that extends inside the cell to activate cellular machinery that stimulates growth.
Stuck in an overactive state
In the new study, the researchers used a similar approach to investigate how altering the composition of the membrane affects the function of the receptor. First, they explored how elevated levels of negatively charged lipids would affect the cell membrane and EGFR function.
Normally, about 15 percent of the cell membrane is made up of negatively charged lipids. The researchers found that membranes with negatively charged lipids in the range of 15 to 30 percent behaved normally, but if that level reached 60 percent, then the EGFR receptor would become locked into an active state.
In that state, the pro-growth signaling pathway is turned on all the time, even when no EGF is bound to the receptor. Many cancer cells show increased levels of these lipids, and this mechanism could help to explain why those cells are able to grow unchecked, Schlau-Cohen says.
“If the membrane has high levels of negatively charged lipids, then it’s always in that open conformation. It doesn’t matter if ligand is bound or unbound,” she says. “It’s always in the conformation that’s telling the cell to grow, not just when EGF binds.”
The researchers also used this system to explore the role of cholesterol in EGFR function. When the researchers created nanodiscs with elevated cholesterol levels, they found that the membranes became more rigid, and this rigidity suppressed EGFR signaling.
The research was funded by the National Institutes of Health and MIT’s Department of Chemistry.
Waves hit different on other planets
From lazy ripples to towering breakers, waves should vary widely from one planet to another, according to a new model.
On a calm day, a light breeze might barely ripple the surface of a lake on Earth. But on Saturn’s largest moon Titan, a similar mild wind would kick up 10-foot-tall waves.
This otherworldly behavior is one prediction from a new wave model developed by scientists at MIT. The model is the first to capture the full dynamics of waves and what it takes to whip them up under different planetary conditions.
In a study published in the Journal of Geophysical Research: Planets, the MIT team introduces the model, which they’ve aptly coined “PlanetWaves.” They apply the model to predict how waves behave on planetary bodies that might host liquid lakes and oceans, including Titan, ancient Mars, and three planets beyond the solar system.
The model predicts that a gentle wind would be enough to stir up huge waves on Titan, where lakes are filled with light liquid hydrocarbons. In contrast, it would take hurricane-force winds to barely move the surface of a lake on the exoplanet 55-Cancri e, which is thought to be a lava world covered in hot, dense liquid rock.
“On Earth, we get accustomed to certain wave dynamics,” says study author Andrew Ashton, associate scientist at the Woods Hole Oceanographic Institution (WHOI) and faculty member of the MIT-WHOI Joint Program. “But with this model, we can see how waves behave on planets with different liquids, atmospheres, and gravity, which can kind of challenge our intuition.”
The team is particularly keen to understand how waves form on Titan. The large moon is the only planetary body in the solar system other than Earth that is known to currently host liquid lakes.
“Anywhere there’s a liquid surface with wind moving over it, there’s potential to make waves,” says Taylor Perron, the Cecil and Ida Green Professor of Earth, Atmospheric and Planetary Sciences at MIT. “For Titan, the tantalizing thing is that we don’t have any direct observation of what these lakes look like. So we don’t know for sure what kind of waves might exist there. Now this model gives us an idea.”
If humans were one day to send a probe to Titan’s lakes, the team’s new model could inform the design of wave-resilient spacecraft.
“You would want to build something that can withstand the energy of the waves,” says lead author Una Schneck, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “So it’s important to know what kind of waves these instruments would be up against.”
The study’s co-authors include Charlene Detelich and Alexander Hayes of Cornell University and Milan Curcic of the University of Miami.
“The first puff”
When wind blows over water, it creates waves that can be strong enough to carve out coastlines and redistribute sediment brought to the coast by rivers. Through this process, waves can be a significant force in shaping a landscape over time. Schneck and her colleagues, who study landscape evolution on Earth and other planets, wondered how waves might behave on other worlds where gravity, atmospheric conditions, and liquid compositions can be very different from what is found on Earth.
“There have been attempts in the past to predict how gravity will affect waves on other planets,” Schneck says. “But they don’t quantify other factors such as the composition of the liquid that is making waves. That was the big leap with this project.”
She and her colleagues developed a full wave model that takes into account not just a planet’s gravity, but also properties of its surface liquid, such as its density, viscosity, and surface tension, or how resistant a liquid is to rippling. The team also incorporated the effect of a planet’s atmospheric pressure. With this model, they aimed to predict how a planet’s liquid surface would evolve in response to winds of a given speed.
“Imagine a completely still lake,” Ashton offers. “We’re trying to figure out the first puff that will make those first little tiny ripples, on up to a full ocean wave.”
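As a back-of-the-envelope illustration of that “first puff” threshold (and not the PlanetWaves model itself), the slowest capillary-gravity wave a liquid can support depends on gravity, density, and surface tension. The Titan figures below are rough literature values assumed here for illustration only.

```python
# Back-of-the-envelope sketch (not the PlanetWaves model): the slowest
# capillary-gravity wave sets a rough threshold for the "first puff" of wind
# to raise ripples.  c_min = (4 * g * sigma / rho) ** 0.25.
# Titan figures below are rough literature values for liquid methane, assumed
# here for illustration only.

def min_ripple_speed(g, sigma, rho):
    """Minimum phase speed (m/s) of capillary-gravity waves on a deep liquid."""
    return (4.0 * g * sigma / rho) ** 0.25

bodies = {
    "Earth (water)":          dict(g=9.81, sigma=0.072, rho=1000.0),
    "Titan (liquid methane)": dict(g=1.35, sigma=0.017, rho=450.0),
}

for name, p in bodies.items():
    print(f"{name}: c_min ~ {min_ripple_speed(**p):.2f} m/s")
```

Under these rough numbers, the threshold speed on Titan comes out roughly half that on Earth, consistent with the article's point that gentle winds there can stir the surface.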
Making waves
The team first tested their new model with wave data on Earth. They used measurements of waves that were collected by buoys across Lake Superior over 20 years. They found that the model, which took into account Earth’s gravity, the composition of the liquid (water), and atmospheric conditions, was able to accurately predict what wind speeds it would take to generate waves across the lake, and how high the waves grew with a given wind strength.
The researchers then applied the model to predict how waves would behave on other planetary bodies that are known to host liquid on their surface. They looked first to Titan, where NASA’s Cassini mission previously captured radar images of lake formations, which scientists suspect are currently filled with liquid methane and ethane. The team used the new model to calculate the moon’s wave dynamics given its gravity, atmospheric pressure, and liquid composition.
They found that on Titan, it’s surprisingly easy to make waves. The relatively light liquid, combined with low gravity and atmospheric pressure, means that even a gentle wind can stir up huge waves.
“It kind of looks like tall waves moving in slow motion,” Schneck says. “If you were standing on the shore of this lake, you might feel only a soft breeze but you would see these enormous waves flowing toward you, which is not what we would expect on Earth.”
The researchers also considered wave activity on ancient Mars. The Red Planet hosts many impact basins that may have once been filled with water, before the planet’s atmosphere dissipated and the water evaporated away. One of those basins is Jezero Crater, which is currently being explored by NASA’s Perseverance rover. With the new model, the team showed that as Mars’ atmosphere gradually disappeared, reducing its pressure over time, it would have required stronger winds to make the same waves.
Beyond the solar system, the researchers applied the model to three different exoplanets. The first, LHS1140b, is a “cool super-Earth,” meaning that it is colder and larger than Earth. The planet hosts liquid water, though because it is so large, it has stronger gravity. The model showed that winds of the same speed as on Earth would generate much smaller water waves on the super-Earth, due to the difference in gravity.
The team also considered Kepler 1649b, a Venus-like planet with gravity similar to Earth’s and lakes of sulfuric acid, a liquid about twice as dense as water. Under these conditions, the researchers found that it would take strong winds to make even a ripple on the exo-Venus, compared to on Earth.
This effect is even more pronounced for the third planet, 55-Cancri e — a lava world that has both a higher gravity than Earth and a much denser, more viscous surface liquid. Scientists suspect that the planet hosts oceans of liquefied rock. In this environment, the model predicts that hurricane-force winds on Earth, of about 80 miles per hour, would generate only small waves of a few centimeters in height on the lava world.
Aside from illuminating new ways that waves can behave on other planets, Perron hopes the model will answer longstanding questions of planetary landscape formation.
“Unlike on Earth where there is often a delta where a river meets the coast, on Titan there are very few things that look like deltas, even though there are plenty of rivers and coasts. Could waves be responsible for this?” Perron wonders. “These are the kinds of mysteries that this model will help us solve.”
This work was supported, in part, by NASA and the National Science Foundation.
Geothermal energy turns red hot
MIT Energy Initiative symposium maps a path to tap the planet’s heat-rich rocks for clean power at scale.
Drill deep and drill differently. That’s what’s needed to exploit the nearly bottomless promise of geothermal energy in the United States and around the globe, according to participants at the 2026 Spring Symposium, titled “Next-generation geothermal energy for firm power.”
Sponsored by the MIT Energy Initiative (MITEI), the March 4 event drew 120 people, including MIT faculty and students, investors, and representatives from startups, multinational energy companies, and zero-carbon advocacy groups.
“The time feels right to pull together good policy, great corporate partners, and the research and technological innovations … to make significant advances in the widespread utilization of this incredible resource,” said Karen Knutson, the vice president for government affairs at MIT, in welcoming attendees.
Technology from the oil and gas industry helped usher in a first wave of geothermal energy. But chewing vertical holes through rocks in traditional ways can’t deliver on the full potential of this resource. And the real treasure — geologic formations radiating heat at 374 degrees Celsius and above — lies kilometers beneath Earth’s surface, far beyond the reach of most conventional drilling rigs.
Panelists explored the many innovations in accessing and circulating subsurface heat, as well as digging to unprecedented depths through extremely challenging geological conditions, discussing advanced drilling technologies, materials, and subsurface imaging.
This work is needed urgently, as demand for firm (always-on) power skyrockets in response to the electrification of industry and rise of data centers, said Pablo Dueñas‑Martínez, a MITEI research scientist. “We cannot get through this only with solar and wind; we need dense, deployable energy like geothermal.”
From “minuscule” to “almost inexhaustible” energy
In her opening remarks, Carolyn Ruppel, MITEI’s deputy director of science and technology, noted that despite decades of successful projects in places like the United States, Kenya, Iceland, Indonesia, and Turkey, geothermal still contributes only a “minuscule” share of global electricity. “The tremendous heat beneath our feet remains largely untouched,” she said.
Citing MIT’s milestone 2006 study “The Future of Geothermal Energy,” keynote speaker John McLennan, a professor at the University of Utah and co–principal investigator of the U.S. Department of Energy’s Utah FORGE enhanced geothermal systems (EGS) field laboratory, reminded attendees that the continental crust holds enough accessible heat to supply power for generations. “For practical purposes, it’s almost inexhaustible,” he said.
The question now, he said, is how to access that resource economically and responsibly.
At the Utah FORGE test site, McLennan has been part of a team investigating one method — adapting the oil and gas industry’s drilling and reservoir engineering expertise for hot, relatively impermeable rocks.
The project has drilled multiple deep wells into crystalline granitic rock, including a pair of wells that have been hydraulically stimulated and connected. In a recent circulation test, cold water was pumped down one well, flowed through fractures, and returned hot through the other.
“On a commercial basis … this hot water would be converted to electricity at the surface,” McLennan said. “This has now been demonstrated at Utah FORGE.”
The basic physics, in other words, work. The harder problems now are cost, repeatability, and scale.
Geothermal on the grid
Several panels highlighted the fact that next-generation geothermal is already beginning to deliver firm power.
At Lightning Dock, New Mexico, geothermal company Zanskar used a probabilistic modeling framework that simulated thousands of possible subsurface configurations to identify where to drill a new production well at an underperforming geothermal field. By thermal power delivered, the resulting well is now “the most-productive pumped geothermal well in the country,” said Joel Edwards, Zanskar’s co-founder and chief technology officer — powering the entire 15 megawatt (MW) Lightning Dock plant from a single well.
This data-driven approach enables the company to find and develop new resources faster and more cheaply than traditional methods, said Edwards.
José Bona, the director of next-generation geothermal at Turboden, explained how his company’s technology circulates organic fluids that conserve heat better than water and uses specialized turbines to convert that heat efficiently into electrical power. This closed-cycle technology can utilize low- to medium-temperature heat sources. Turboden is supplying its technology both to the Lightning Dock geothermal facility in New Mexico and to Fervo Energy’s Cape Station in southwest Utah, an EGS project that will begin delivering 100 MW of baseload, clean electricity to the grid this year, aiming for 500 MW by 2028.
In Geretsried, Germany, Eavor has developed its own proprietary closed-loop system by creating a kind of underground radiator.
“We drilled to about 4.5 kilometers vertical depth, completed six horizontal multilateral pairs, and we delivered the first power to the grid in December,” said Christian Besoiu, the team lead of technology development at Eavor. The project will ultimately be capable of supplying 8.2 MW of electricity to the 32,000 households in the Bavarian town of Geretsried and 64 MW of thermal energy to the district in which the town lies, prioritizing heat when needed.
Beyond oil and gas technology
Early geothermal exploration typically targeted preexisting faults using vertical wells left by oil and gas drilling. Today, companies are experimenting with rock fracturing at multiple subsurface levels and creating heat reservoirs in previously untenable formations by using propping materials.
“Instead of vertical wells, we’re going to horizontal wells, we’re going to cased wells, we’re introducing proppants [solid materials that hold open hydraulically fractured rock] … we do dozens of stages with these designs,” said Koenraad Beckers, the geothermal engineering lead at ResFrac. This shale-style approach has already yielded much higher flow rates and more-reliable performance than earlier EGS.
Some current geothermal wells manage to achieve depths close to 15,000 feet using the oil and gas industry’s polycrystalline diamond compact drill bits, which can bore through hard rock like granite at more than 100 feet per hour. But these bits and the rigs that drive them are no match for conditions six or more kilometers down — and it is at those depths that the heat on hand begins to make an overwhelming economic case for geothermal.
“If we go to around 300 to 350 degrees, your power potential increases 10 times,” said Lev Ring, CEO of Sage Geosystems. “At that point, with reasonable CAPEX [capital expenditure] assumptions, levelized cost of electricity [a metric for comparing the cost of electricity across different generation technologies] is around 4 cents, and geothermal becomes cheaper than any other alternative.”
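For readers unfamiliar with the metric Ring cites, a minimal levelized-cost calculation has the form sketched below; the plant figures are placeholder assumptions, not numbers from the symposium.

```python
# Simple levelized-cost-of-electricity (LCOE) sketch using a capital recovery
# factor; all inputs are placeholder assumptions, not figures from the talk.
def lcoe_cents_per_kwh(capex_usd, annual_om_usd, capacity_mw,
                       capacity_factor, rate, years):
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    annual_cost = crf * capex_usd + annual_om_usd
    annual_kwh = capacity_mw * 1000 * 8760 * capacity_factor
    return 100 * annual_cost / annual_kwh

# Hypothetical deep-geothermal plant: $400M CAPEX, $10M/yr O&M, 100 MW,
# 90% capacity factor, 7% discount rate, 30-year life.
print(f"{lcoe_cents_per_kwh(400e6, 10e6, 100, 0.9, 0.07, 30):.1f} cents/kWh")
```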
But “at 10 kilometers down … the largest land rigs in existence today cannot handle it,” Ring added. “We need alternatives — new materials, new ways to handle pressure, maybe even welding on the rig … a whole space that has not been addressed yet.”
One panel, featuring Quaise Energy, an MIT spinout with MITEI roots, spotlighted just how radically drilling might change. Co-founder Matt Houde described the company’s millimeter-wave drilling approach, which uses high-frequency electromagnetic waves derived from fusion research to vaporize rock instead of grinding it, as with conventional drilling. In a recent Texas field test, the team drilled 100 meters of hard basement rock in about a month, and is now planning kilometer-scale trials aimed at reaching superhot rock temperatures around 400 C, where each well could deliver many times the power of today’s geothermal projects.
Innovations for deep drilling
Moderating a panel on “MIT innovations for next-generation geothermal,” Andrew Inglis, the venture builder in residence with MIT Proto Ventures, whose position is sponsored by the U.S. Department of Energy GEODE program, framed the Institute’s role in getting such hard-tech ideas out of the lab and into the field. “The way MIT thinks about tech development, uniquely from other universities, can play a very singular role in geothermal commercial liftoff,” he said.
Materials researchers on that panel illustrated the point. Matěj Peč, an associate professor of geophysics in the Department of Earth, Atmospheric and Planetary Sciences, outlined work to build sensors that survive up to 900 C so that rock deformation and fracturing can be studied at supercritical conditions. Michael Short, the Class of 1941 Professor in the Department of Nuclear Science and Engineering, and C. Cem Tasan, the POSCO Associate Professor of Metallurgy in the Department of Materials Science and Engineering, respectively described coatings and alloys designed to resist corrosion, fouling, and cracking in extreme environments. In response to audience questions after their talks, Tasan made an important point, highlighting how academics need input from industry to understand the real-world problems (e.g., corrosion of pipes by geofluids) that require engineering solutions.
Other researchers are rethinking how to detect geothermal resources: Wanju Yuan, a research scientist with the Geological Survey of Canada at Natural Resources Canada, is using satellite imagery and thermal infrared sensing to screen vast regions for subtle hot spots and structures, processing thousands of images to identify promising sites in just a few months of work. “It’s a very efficient way to screen potential areas before more expensive exploration, thus reducing exploration and drilling risks,” he said.
Policy as backdrop, not center stage
Policy loomed in the background of many discussions — from bipartisan support for geothermal exploration and tax incentives to issues of regulation and permitting.
For Ruppel, that was by design.
“We wanted this meeting to showcase what’s technically possible and what’s already happening on the ground,” she said. “The policy world is starting to pay attention. Our job is to make sure that when that spotlight turns our way, next-generation geothermal is ready.”
MITEI’s Spring Symposium was followed by a gathering of geothermal entrepreneurs, investors, and energy industry experts co-hosted by MITEI and the Clean Air Task Force. “GeoTech Summit: Accelerating geothermal technology, projects, and deal flow” explored the financing challenges and opportunities of geothermal energy today.
MIT faculty, alumni receive 2025-26 American Physical Society honors
Two faculty and six additional alumni win top APS awards and prizes; four faculty and 12 additional alumni named APS Fellows.
The American Physical Society (APS) recently honored two MIT faculty members — professors Yoel Fink PhD ’00 and Mehran Kardar PhD ’83 — as well as six alumni with prizes and awards for their contributions to physics and academic leadership.
In addition, several MIT faculty members — Professor Jorn Dunkel, Professor Yen-Jie Lee PhD ’11, Associate Professor Mingda Li PhD ’15, and Associate Professor Julien Tailleur — as well as 12 additional alumni were named APS Fellows.
Yoel Fink PhD ’00, the Danae and Vasilis (1961) Salapatas Professor in the Department of Materials Science and Engineering, received the Andrei Sakharov Prize “for defending the academic freedom and human rights of scientists working in the U.S.”
The prize, named for physicist and human rights advocate Andrei Sakharov, recognizes scientists whose leadership and impact advance the principles of intellectual freedom and human dignity. Fink’s research focuses on “computing fabrics” — fibers and textiles that sense, communicate, store, and process information. By embedding functionality at the fiber level, fabrics become computing systems that can infer human activity and context while keeping the traditional qualities of garments. These textiles enable noninvasive monitoring of physiological and health conditions, with applications ranging from fetal and maternal health to human performance analytics, injury prevention in challenging environments, and defense.
Mehran Kardar PhD ’83, the Francis Friedman Professor of Physics, received the Lars Onsager Prize “for ground-breaking contributions to statistical physics, including the Kardar-Parisi-Zhang equation, Casimir forces, active matter, and aspects of biological physics.”
Kardar’s research focuses on how complex behavior emerges from simple interactions in systems both in and far from equilibrium, including stable ones like a still pond and rapidly changing ones such as growing surfaces. The Kardar-Parisi-Zhang equation, which he helped develop, provides a unifying framework for understanding how randomness and fluctuations shape evolving phenomena, from fluids and interfaces to biological and quantum systems. His work has also advanced the theoretical understanding of disordered materials, soft matter such as polymers and gels, and fluctuation-induced forces — including Casimir forces arising from quantum and thermal effects. More recently, he has applied these ideas to active matter — systems of self-driven units — and biological systems, helping reveal patterns in living and evolving systems.
Alumni receiving awards
Joel Butler PhD ’75 was presented the W.K.H. Panofsky Prize in Experimental Particle Physics “for wide-ranging scientific, technical, and strategic contributions to particle physics, particularly exceptional leadership in fixed-target quark flavor experiments at Fermilab and collider physics at the Large Hadron Collider.”
Anthony Duncan PhD ’75 is the recipient of the Abraham Pais Prize for History of Physics “for research on the history of quantum physics between 1900 and 1927 that culminated in 'Constructing Quantum Mechanics,' an exemplary work that uses primary sources masterfully and employs scaffold and arch metaphors to describe developments in the quantum revolution.”
Laura A. Lopez ’04 was presented the Edward A. Bouchet Award “for pioneering contributions to X-ray astronomy, including foundational studies of supernova remnants, compact objects, and stellar feedback in galaxies, and for transformative leadership in advancing equity and inclusion in physics through innovative mentorship programs, national advocacy, and unwavering support for students from historically marginalized communities.”
Zhiquan Sun PhD ’25 is the recipient of the J.J. and Noriko Sakurai Dissertation Award in Theoretical Particle Physics “for applying effective field theory to advance our understanding of QCD [quantum chromodynamics], including establishing a new formalism to study heavy quark fragmentation, determining how confinement affects energy correlators, and revealing an overlooked complexity of the axion solution to the strong CP [charge conjugation symmetry and parity symmetry] problem.”
Charles B. Thorn III ’68 received the Dannie Heineman Prize for Mathematical Physics for “fundamental contributions to elementary particle physics, primarily the theory of strong interactions and the development of string theory.”
Christina Wang ’19 received the Mitsuyoshi Tanaka Dissertation Award in Experimental Particle Physics “for pioneering a novel technique using CMS [Compact Muon Solenoid] muon chambers to search for weakly-coupled sub-GeV [giga-electronvolt] mass dark matter using long-lived particle searches, and for groundbreaking work in quantum sensing to enable new probes of dark matter.”
APS Fellows
Several MIT faculty were elected 2025 APS Fellows:
Jorn Dunkel, MathWorks Professor of Mathematics, is the recipient of the Division of Statistical and Nonlinear Physics Fellowship “for pioneering contributions to statistical, nonlinear, and biological physics, notably in understanding pattern formation in soft matter and biology, cell positioning in tissues, and turbulence in active media.”
Yen-Jie Lee PhD '11, professor of physics, received the Division of Nuclear Physics Fellowship “for pioneering measurements of jet quenching, medium response and heavy-quark diffusion in the quark-gluon plasma, and for using electron-positron collisions as an innovative control to understand collectivity in small collision systems.”
Mingda Li PhD '15, associate professor of nuclear science and engineering, is the recipient of the Topical Group on Data Science Fellowship “for pioneering the integration of artificial intelligence with scattering and spectroscopy, enabling breakthroughs in phonons, topological states, optical and time-resolved spectra, and data-driven discovery for quantum and energy applications.”
Julien Tailleur, associate professor of physics, is the recipient of the Division of Soft Matter Fellowship “for foundational theoretical work on motility-induced phase separation and emergent collective behavior in scalar active matter.”
The following additional MIT alumni were also honored as APS Fellows:
Andrew Cross SM ’05, PhD ’08 (EECS), Division of Quantum Information Fellowship
Kevin D. Dorfman SM '01, PhD '02 (ChemE), Division of Polymer Physics Fellowship
Geoffroy Hautier PhD '11 (DMSE), Division of Computational Physics Fellowship
Douglas J. Jerolmack PhD '06 (EAPS), Division of Statistical and Nonlinear Physics Fellowship
Brian Lantz '92, PhD '99 (Physics), Division of Gravitational Physics Fellowship
Valerio Lucarini SM '03 (EAPS), Topical Group on Physics of Climate Fellowship
Giles Novak '81 (Physics), Division of Astrophysics Fellowship
Steve Presse PhD '08 (Physics), Division of Biological Physics Fellowship
Jonathan Rothstein PhD '01 (MechE), Division of Fluid Dynamics Fellowship
Gray Rybka PhD '07 (Physics), Division of Particles and Fields Fellowship
Sarah Sheldon '08, PhD '13 (Physics, NSE), Forum on Industrial and Applied Physics Fellowship
Lian Shen ScD '01 (MechE), Division of Fluid Dynamics Fellowship
Multitasking quantum sensors can measure several properties at once
The devices represent a key step toward practical quantum sensing, with applications in biomedical sensing, materials characterization, and more.
A special class of sensors leverages quantum properties to measure tiny signals at levels that would be impossible using classical sensors alone. Such quantum sensors are currently being used to study the inner workings of cells and the outer depths of our universe.
Particularly promising are solid-state quantum sensors, which can operate at room temperature. Unfortunately, most solid-state quantum sensors today only measure one physical quantity at a time — such as the magnetic field, temperature, or strain in a material. Trying to measure both the magnetic field and temperature of a material at the same time causes their signals to get mixed up and measurements to become unreliable.
Now, MIT researchers have created a way to simultaneously measure multiple physical quantities with a solid-state quantum sensor. They achieved this by exploiting entanglement, where particles become correlated into a single quantum state. In a new paper, the team demonstrated its approach in a commonly used quantum sensor at room temperature, measuring the amplitude, frequency, and phase of a microwave field in a single measurement. They also showed the approach works better than sequentially measuring each property or using traditional sensors.
The researchers say the approach could enable quantum sensors that can deepen our understanding of the behavior of atoms and electrons inside materials and living systems like cancer cells.
“Quantum multiparameter estimation has been mostly theoretical to date,” says co-lead author of the paper Takuya Isogawa, a graduate student in nuclear science and engineering. “There have been very few experiments that actually demonstrate it, and that work focused on photons. We wanted to demonstrate multiparameter estimation in a more application-oriented setup: a solid-state quantum sensor in use today.”
Joining Isogawa on the paper are co-lead authors Guoqing Wang PhD ’23 and MIT PhD candidate Boning Li. The other authors on the paper are former MIT visiting students Zhiyao Hu and Ayumi Kanamoto; University of Tokyo PhD candidate Shunsuke Nishimura; Chinese University of Hong Kong Professor Haidong Yuan; and Paola Cappellaro, MIT’s Ford Professor of Engineering, a professor of nuclear science and engineering and of physics, and a member of the Research Laboratory of Electronics.
Quantum effects for measurement
Quantum sensors exploit quantum effects like entanglement, spin states, and superposition to measure changes in magnetic fields, electric fields, gravity, acceleration, and more. As such, they can be used to measure the activity of single molecules in ways that are useful for understanding biology and space, like tracking the activity of metabolites or enzymes inside cells.
One particularly useful sensor in biology leverages what’s known as nitrogen-vacancy (NV) centers in diamonds, a defect where a carbon atom in the diamond’s crystal lattice is replaced by a nitrogen atom, and a neighboring lattice site is missing, or vacant. The defect hosts an electronic spin whose transition frequencies can be read out optically. The NV center’s spin state is extremely sensitive to external effects, such as magnetic fields and temperature, which can shift the spin state in ways that can be measured at extremely high resolution.
Unfortunately, different external effects change the energy resonances of the spin in similar ways, making it difficult to measure multiple effects at once. The result is that most solid-state quantum sensor applications measure a single physical quantity at one time.
“If you can only measure one quantity at a time, you have to repeat experiments to measure quantities one by one,” Isogawa says. “That takes more time, which means less sensitivity. It also makes experiments more susceptible to errors.”
For their experiment, the researchers used NV centers inside of a 5-square-millimeter diamond. They pointed a laser into the diamond and studied its fluorescence to make their measurements, a common approach for such sensors. To study the electronic spin of the NV center, they used a microwave antenna. To study the spin of the nitrogen atom they used a radio frequency field.
“We used those two spins as two qubits,” Isogawa says, referring to the building blocks of quantum computing systems. “If you have only one qubit, you can only measure one outcome: basically, 0 or 1. It’s the probability that it spins up or down. Think of it like a coin toss, with the probability of getting heads or tails. With two qubits, we increased the parameters that we could extract.”
The system worked because the spins of the sensor qubit and auxiliary qubit were entangled, a quantum property where the state of one particle is dependent on another. With one qubit, you get a binary outcome. With two, you get four possible outcomes with a total of three possible parameters.
The two qubits allowed researchers to measure those three quantities simultaneously using a technique known as the Bell state measurement.
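One way to see why two entangled qubits are enough is a simple counting argument: a Bell-basis measurement has four outcomes whose probabilities sum to one, leaving three independent numbers per readout. The sketch below is a schematic illustration of that idea, not the team's NV-center pulse sequence; the toy state and its three angles are assumptions standing in for the real signal parameters.

```python
import numpy as np

# Schematic sketch (not the team's actual NV protocol): a joint measurement of
# two qubits in the Bell basis has four outcomes whose probabilities sum to 1,
# so a single readout distribution carries three independent numbers -- enough
# to encode three signal parameters at once.

# The four Bell basis states.
phi_p = np.array([1, 0, 0, 1]) / np.sqrt(2)
phi_m = np.array([1, 0, 0, -1]) / np.sqrt(2)
psi_p = np.array([0, 1, 1, 0]) / np.sqrt(2)
psi_m = np.array([0, 1, -1, 0]) / np.sqrt(2)
bell_basis = [phi_p, phi_m, psi_p, psi_m]

def bell_probabilities(state):
    """Outcome probabilities of a Bell-basis measurement on a 2-qubit state."""
    return np.array([abs(np.vdot(b, state)) ** 2 for b in bell_basis])

def probe_state(a, b, c):
    """Toy entangled probe whose Bell-basis distribution encodes three angles
    (standing in, loosely, for a signal's amplitude, detuning, and phase)."""
    return (np.cos(a) * phi_p
            + np.sin(a) * np.cos(b) * phi_m
            + np.sin(a) * np.sin(b) * np.cos(c) * psi_p
            + np.sin(a) * np.sin(b) * np.sin(c) * psi_m)

p = bell_probabilities(probe_state(0.4, 0.7, 1.1))
print(p, p.sum())  # four probabilities, summing to 1
```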
Other researchers had used the Bell state measurement at extremely low temperatures before, but the MIT researchers developed a new technique to perform the measurement at room temperature. That technique was first proposed by Wang, who was previously a graduate student in Professor Cappellaro’s lab.
The researchers used the approach to simultaneously measure the amplitude, detuning, and phase of a microwave magnetic field. The researchers also say the approach could be used to measure electric fields, temperature, pressure, and strain.
“Measuring these parameters simultaneously can help us explore spin waves in materials, which is an important topic in condensed matter physics,” Isogawa says. “NV center sensors have extremely high spatial resolution and versatility. They can measure a lot of different physical quantities.”
More practical quantum sensing
The researchers say this work is an important step toward using solid-state quantum sensors to more fully characterize systems in biomedical research and materials characterization. That’s because multiparameter estimation had never been achieved in realistic settings or in widely used quantum sensors.
“What makes the NV center quantum sensors so special is they can operate at room temperature,” Isogawa says. “It’s very suitable for biological measurements or condensed matter physics experiments.”
The researchers note that their sensor did not measure each quantity at the highest possible precision; in future work, they plan to explore whether their approach can achieve higher precision for each parameter.
They also plan to explore how their approach works to characterize heterogeneous materials.
“In an extremely uniform environment, you could use many different classical and quantum sensors and measure each physical quantity at the same time,” Isogawa says. “But if the physical quantities change at different locations, you need high spatial sensors, and you need a sensor that can measure multiple physical quantities. This approach has major advantages in such situations.”
The work was supported, in part, by the U.S. National Science Foundation, the National Research Foundation of Korea, and the Research Grants Council of Hong Kong.
Human-machine teaming dives underwater
Researchers are developing hardware and algorithms to improve collaboration between divers and autonomous underwater vehicles engaged in maritime missions.
The electricity to an island goes out. To find the break in the underwater power cable, a ship pulls up the entire line or deploys remotely operated vehicles (ROVs) to traverse the line. But what if an autonomous underwater vehicle (AUV) could map the line and pinpoint the location of the fault for a diver to fix?
Such underwater human-robot teaming is the focus of an MIT Lincoln Laboratory project funded through an internally administered R&D portfolio on autonomous systems and carried out by the Advanced Undersea Systems and Technology Group. The project seeks to leverage the respective strengths of humans and robots to optimize maritime missions for the U.S. military, including critical infrastructure inspection and repair, search and rescue, harbor entry, and countermine operations.
"Divers and AUVs generally don't team at all underwater," says principal investigator Madeline Miller. "Underwater missions requiring humans typically do so because they involve some sort of manipulation a robot can't do, like repairing infrastructure or deactivating a mine. Even ROVs are challenging to work with underwater in very skilled manipulation tasks because the manipulators themselves aren't agile enough."
Beyond their superior dexterity, humans excel at recognizing objects underwater. But humans working underwater can't perform complex computations or move very quickly, especially if they are carrying heavy equipment; robots have an edge over humans in processing power, high-speed mobility, and endurance. To combine these strengths, Miller and her team are developing hardware and algorithms for underwater navigation and perception — two key capabilities for effective human-robot teaming.
As Miller explains, divers may only have a compass and fin-kick counts to guide them. With few landmarks and potentially murky conditions caused by a lack of light at depth or the presence of biological matter in the water column, they can easily become disoriented and lost. For robots to help divers navigate, they need to perceive their environment. However, in the presence of darkness and turbidity, optical sensors (cameras) cannot generate images, while acoustic sensors (sonar) generate images that lack color and only show the shapes and shadows of objects in the scene. The historical lack of large, labeled sonar image datasets has hindered training of underwater perception algorithms. Even if data were available, the dynamic ocean can obscure the true nature of objects, confusing artificial intelligence. For instance, a downed aircraft broken into multiple pieces, or a tire covered in an overgrowth of mussels, may no longer resemble an aircraft or tire, respectively.
"Ultimately, we want to devise solutions for navigation and perception in expeditionary environments," Miller says. "For the missions we're thinking about, there is limited or no opportunity to map out the area in advance. For the harbor entry mission, maybe you have a satellite map but no underwater map, for example."
On the navigation side, Miller's team picked up on work started by the MIT Marine Robotics Group, led by John Leonard, to develop diver-AUV teaming algorithms. With their navigation algorithms, Leonard's group ran simulations under optimal conditions and performed field testing in calm waters using human-paddled kayaks as proxies for both divers and AUVs. Miller's team then integrated these algorithms into a mission-relevant AUV and began testing them under more realistic ocean conditions, initially with a support boat acting as a diver surrogate, and then with actual divers.
"We quickly learned that you need more sensing capabilities on the diver when you factor in ocean currents," Miller explains. "With the algorithms demonstrated by MIT, the vehicle only needed to calculate the distance, or range, to the diver at regular intervals to solve the optimization problem of estimating the positions of both the vehicle and diver over time. But with the real ocean forces pushing everything around, this optimization problem blows up quickly."
On the perception side, Miller's team has been developing an AI classifier that can process both optical and sonar data mid-mission and solicit human input for any objects classified with uncertainty.
"The idea is for the classifier to pass along some information — say, a bounding box around an image — to the diver and indicate, "I think this is a tire, but I'm not sure. What do you think?" Then, the diver can respond, "Yes, you've got it right, or no, look over here in the image to improve your classification," Miller says.
This feedback loop requires an underwater acoustic modem to support diver-AUV communication. State-of-the-art data rates in underwater acoustic communications would require tens of minutes to send an uncompressed image from the AUV to the diver. So, one aspect the team is investigating is how to compress information into a minimum amount to be useful, working within the constraints of the low bandwidth and high latency of underwater communications and the low size, weight, and power of the commercial off-the-shelf (COTS) hardware they're using. For their prototype system, the team procured mostly COTS sensors and built a sensor payload that would easily integrate into an AUV routinely employed by the U.S. Navy, with the goal of facilitating technology transition. Beyond sonar and optical sensors, the payload features an acoustic modem for ranging to the diver and several data processing and compute boards.
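The arithmetic behind that bandwidth constraint is simple to sketch; the image sizes and data rates below are illustrative assumptions rather than measured figures from the project.

```python
# Quick arithmetic behind the bandwidth problem (rates below are illustrative
# assumptions, not measured figures): even a modest uncompressed image takes
# a long time over a few-kbit/s acoustic link.
def transmit_minutes(image_bytes, bits_per_second):
    return image_bytes * 8 / bits_per_second / 60

# ~2 MB uncompressed image vs. a heavily compressed 20 kB
# thumbnail-plus-bounding-box message, both over a 5 kbit/s modem.
print(f"uncompressed: {transmit_minutes(2_000_000, 5_000):.0f} min")
print(f"compressed:   {transmit_minutes(20_000, 5_000):.1f} min")
```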
Miller's team has tested the sensor-equipped AUV and algorithms around coastal New England — including in the open ocean near Portsmouth, New Hampshire, with the University of New Hampshire's (UNH) Gulf Surveyor and Gulf Challenger coastal research vessels as diver surrogates, and on the Boston-area Charles River, with an MIT Sailing Pavilion skiff as the surrogate.
"The UNH boats are well-equipped and can access realistic ocean conditions. But pretending to be a diver with a large boat is hard. With the skiff, we can move more slowly and get the relative motion in tune with how a diver and AUV would navigate together."
Last summer, the team started testing equipment with human divers at Michigan Technological University's Great Lakes Research Center. Although the divers lacked an interface to feed back information to the AUV, each swam holding the team's tube-shaped prototype tablet, dubbed a "tube-let." The tube-let was equipped with a pressure and depth sensor, inertial measurement unit (to track relative motion), and ranging modem — all necessary components for the navigation algorithms to solve the optimization problem.
"A challenge during testing was coordinating the motion of the diver and vehicle, because they don't yet collaborate," Miller says. "Once the divers go underwater, there is no communication with the team on the surface. So, you have to plan where to put the diver and vehicle so they don't collide."
The team also worked on the perception problem. The water clarity of the Great Lakes at that time of year allowed for underwater imaging with an optical sensor. Caroline Keenan, a Lincoln Scholars Program PhD student jointly working in the laboratory's Advanced Undersea Systems and Technology Group and Leonard's research group at MIT, took the opportunity to advance her work on knowledge transfer from optical sensors to sonar sensors. She is exploring whether optical classifiers can train sonar classifiers to recognize objects for which sonar data doesn't exist. The motivation is to reduce the human operator load associated with labeling sonar data and training sonar classifiers.
With the internally funded research program coming to an end, Miller's team is now seeking external sponsorship to refine and transition the technology to military or commercial partners.
"The modern world runs on undersea telecommunication and power cables, which are vulnerable to attack by disruptive actors. The undersea domain is becoming increasingly contested as more nations develop and advance the capabilities of autonomous maritime systems. Maintaining global economic security and U.S. strategic advantage in the undersea domain will require leveraging and combining the best of AI and human capabilities," Miller says.
Q&A: MIT SHASS and the future of education in the age of AIAs the School of Humanities, Arts, and Social Sciences marks 75 years, Dean Agustín Rayo reflects on how AI is reshaping higher education and why SHASS disciplines continue to be central to MIT’s mission.The MIT School of Humanities, Arts, and Social Sciences (SHASS) was founded in 1950 in response to “a new era emerging from social upheaval and the disasters of war,” as outlined in the 1949 Lewis Committee Report.
The report’s findings emphasized MIT’s role and responsibility in the new nuclear age, which called for doubling down on genuine “integration” of scientific and technical topics with humanistic scholarship and teaching. Only that way, the committee wrote, could MIT tackle “the most difficult and complicated problems confronting our generation.”
As SHASS marks its 75th anniversary, Dean Agustín Rayo answers questions about why the need for developing students with broad minds and human understanding is as urgent as ever, given pressing challenges in the midst of a new technological revolution.
Q: Many universities are responding to artificial intelligence by launching new technical programs or updating curricula. You’ve suggested the change is deeper than that. Why?
A: Artificial intelligence isn’t just changing the way students learn — it’s transforming every aspect of society. The labor market is experiencing a dramatic shift, upending traditional paths to financial stability. And AI is changing the ways we bring meaning to our lives: the ways we build relationships, the ways we pay attention, and the things we enjoy doing.
The upshot is that the most important question universities need to ask is not how to adapt our pedagogy to AI — although we certainly need to address that. The most important question we need to ask is how to provide an education that brings real value to students in the age of AI.
We need to ensure that universities provide students with the tools they need to find a path to financial security and to build meaningful lives.
We need to produce students with minds that are both nimble and broad. We need our students to not only be able to execute tasks effectively, but also have the judgment to determine which tasks are worth executing. We need students who have a moral compass, and who understand how the world works, in all of its political, economic, and human complexity. We need students who know how to think critically, and who have excellent communication and leadership skills.
Q: What role do the humanities, arts, and social sciences play in preparing MIT students for that future?
A: They’re essential, and are rightly a core part of an MIT education: MIT has long required its undergraduates to take at least eight courses in HASS disciplines to graduate.
Fields like philosophy, political science, economics, literature, history, music, and anthropology are crucial to developing the parts of our lives that are essentially human — the parts that will not be replaced by AI.
They are crucial to developing critical thinking and a moral compass. They are crucial to understanding people — our values, institutions, cultures, and ways of thinking. They are crucial to creating students who are broad thinkers who understand the way the world works. They are crucial to developing students who are excellent communicators and are able to describe their projects — and their lives — in a way that endows them with meaning.
Our students understand this. Here is how one of them put the point: “Engineering gives me the tools to measure the world; the humanities teach me how to interpret it. That balance has shaped both how I do science and why I do it.”
Q: Some people worry that emphasizing humanistic study could dilute MIT’s technological edge. How do you respond to that concern?
A: I think the opposite is true.
MIT is an important engine for social mobility in the United States, and a catalyst for entrepreneurship, which has added billions of dollars to the American economy. That cannot be separated from the fact that we are a technical institution, which brings together the country’s most talented undergraduates — regardless of socioeconomic background — and transforms them into the next generation of our country's top scientific and engineering leaders.
MIT plays an incredibly important role in our country. So, the last thing I want to do is mess with our secret sauce.
But I also think that the age of AI is forcing us to rethink what it means to be a top engineer.
Think about artificial intelligence itself. The challenges we face are not just technical. Issues like bias, accountability, governance, and the societal impact of automation are no less important. Understanding those dimensions helps technologists design better systems and anticipate real-world consequences.
Strengthening the humanities at MIT isn’t a departure from our core mission — it’s a way of ensuring that our technical leadership continues to matter in the world.
Q: What kinds of changes is MIT SHASS pursuing to support this vision?
A: There’s a lot going on!
We’ve launched the MIT Human Insight Collaborative (MITHIC) as a way of strengthening research in the humanities, arts, and social sciences, and of deepening collaboration with colleagues across MIT.
We’re shaping the undergraduate experience to ensure that every MIT student engages with the big societal questions shaping our time, from democratic resilience to climate change to the ethics of new technologies.
We’re building stronger connections through initiatives like the creation of shared faculty positions with the MIT Schwarzman College of Computing (SCC). And we recently launched a new Music Technology and Computation Graduate Program with the School of Engineering.
We’re partnering with SERC (the SCC’s Social and Ethical Responsibilities of Computing) to design new classes on the intersection of computing and human-centered issues, such as ethics.
And we’re elevating the humanities — for their own sake, and as a space for experimentation, bringing together students, faculty, and partners to explore new forms of research, teaching, and public engagement.
This is a very exciting time for SHASS.
Flying at the edge of the stratosphere
MIT students see the Earth's curvature in reborn AeroAstro intro course.
All the ingredients to leave the first layer of the atmosphere were lying on a picnic table. T-minus 30 minutes before launch from the New York Catskills, students in MIT's reborn 16.00 (Introduction to Aerospace Engineering) course tore open hand warmers to fight the December morning chill. One hot pack for cold hands. One for the electronics payload, which would need the warmth on the way up. The balloons in this series of launches would rise to more than 20 kilometers above the surface.
Five student teams completed stratospheric balloon launches for a final project in the first-year exploratory course offered by the MIT Department of Aeronautics and Astronautics (AeroAstro). This fall semester was the first iteration of the reimagined 16.00. The course was co-taught by MIT professors Jeffrey Hoffman, a former NASA astronaut, and Olivier de Weck, Apollo Program Professor of Astronautics and Engineering Systems. The course was reintroduced to the curriculum in 2025 to give first-year students a design-build experience from the very start, says de Weck, who is also AeroAstro's associate department head.
"This course had been taught for more than 25 years. And then the pandemic came," he explains. "We felt that it was time to bring the course back, to revive it, give it new life."
De Weck taught a version of this hands-on project from 2012 to 2016 in Unified Engineering, with 20 balloon launches over that time. Hoffman taught a version that focused on blimps, indoor flights, and achieving neutral buoyancy and control. Those prior courses inspired the new program. The current 16.00 course is an early introduction to design-build flying, offered before the well-known Unified Engineering course for Course 16 sophomores.
"Students don't want to sit through long lectures, with lots of PowerPoints and notes and blackboards," says de Weck. He referenced feedback from students that is framing the department's upcoming strategic plan. "Those hands-on visceral experiences is what we want to provide them."
The AeroAstro program adds about 60 undergraduates per year. Future students can expect to see different versions of the 16.00 course, including those focused on fixed-wing aircraft, quadcopter drones, and rockets. Future balloon courses will be called 16.00B. A fixed-wing remote-controlled aircraft course will be 16.00A.
Over 13 weeks, the students attended lectures on subjects including atmospheric composition, radio waves, and flight planning and regulations. In labs, they practiced building Arduino-based pressure and temperature sensors, and testing communication systems.
On that cold launch day, Jackson Lunfelt kept his grip against the pull of an oversized helium balloon moments before his team's launch. His team worked for weeks configuring GPS and radio communications and testing balloon buoyancy. Among their trials and errors, they had to find the right weight for a 3D-printed frame used to attach the balloon and parachute. It was too heavy at first. They figured out how to reduce the weight of the plastic to keep the payload buoyant.
"Fortunately, a lot of preparation had helped us," he says.
Lunfelt, a first-year student, grew up just a few hours away from the Catskills in upstate New York. In high school, he was active in Future Farmers of America, welding, and robotics. On launch day, his team was worried their onboard GoPro would shut off from the cold high-altitude temperatures. They got the green light to add a battery bank. They would need to re-calculate the weight and helium needed at the final hour.
"It was one of those things that if you don't do this, you're not gonna launch,” says Lunfelt.
That first week of December brought frigid air, gusts, and wind patterns that meant the class would have to rethink its launch site. The team aimed to fly east, over Massachusetts, and land before reaching the ocean. The new weather pattern pushed the launch site even farther west, across the New York border.
The balloon lifted the 3.5-pound payload from the Catskills while the mission control group monitored progress from Cambridge, Massachusetts. It rose hundreds of feet per minute. It climbed out of the troposphere and flew across western Massachusetts at 100 miles per hour, pushed by the strong upper-level winds of the jet stream. It reached an estimated 22 kilometers above the surface. At that height, an onboard GoPro camera recorded the curvature of the Earth.
"Every single moment of that video was amazing. It was truly a story in itself," says Lunfelt.
Then the latex balloon burst, as designed, and the payload descended back down, aided by a parachute. The GoPros captured that spectacular moment, too. The winds carried the payloads just north of the Massachusetts-New Hampshire border, and they landed in a neighborhood around Nashua, New Hampshire. Locals saw the MIT identifiers written on the side of the payloads and helped the teams recover them. The landing made it onto the local news.
After a very early morning and late evening monitoring the launch returns, de Weck, alongside teaching assistant Jonathan Stoppani and Senior Technical Instructor Dave Robertson, agreed that the feeling of pride from the whole class was palpable. The payloads all came back in one piece, a testament to successful design-build work and last-minute adjustments. The AeroAstro flying tradition is back for first-year students.
Carbon removal project supports Maine’s blue economy, broader marine health
A chemical-free approach to balancing ocean acidity protects marine life and could dramatically impact the global aquaculture market.
Oceans absorb roughly 25 to 30 percent of the carbon dioxide (CO2) that is released into the atmosphere. When this CO2 dissolves in seawater, it forms carbonic acid, making the water more acidic and altering its chemistry. Elevated levels of acidity are harmful to marine life like corals, oysters, and certain plankton that rely on calcium carbonate to build shells and skeletons.
“As the oceans absorb more CO2, the chemistry shifts — increasing bicarbonate while reducing carbonate ion availability — which means shellfish have less carbonate to form shells,” explains Kripa Varanasi, professor of mechanical engineering at MIT. “These changes can propagate through marine ecosystems, affecting organism health and, over time, broader food webs.”
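The shift Varanasi describes is the standard seawater carbonate system, written here in simplified form rather than the team's own notation: dissolved CO2 forms carbonic acid, the acid dissociates into protons and bicarbonate, and the extra protons consume the carbonate ions that shell-builders need.

$$
\mathrm{CO_2 + H_2O \;\rightleftharpoons\; H_2CO_3 \;\rightleftharpoons\; H^+ + HCO_3^-}
\qquad
\mathrm{H^+ + CO_3^{2-} \;\rightleftharpoons\; HCO_3^-}
$$

The net effect is more bicarbonate, fewer carbonate ions, and a lower pH.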
Loss of shellfish can lead to water quality decline, coastal erosion, and other ecosystem disruptions, including significant economic consequences for coastal communities. “The U.S. has such an extensive coastline, and shellfish aquaculture is globally valued at roughly $60 billion,” says Varanasi. “With the right innovations, there is a substantial opportunity to expand domestic production.”
“One might think, ‘this [depletion] could happen in 100 years or something,’ but what we’re finding is that they are already affecting hatcheries and coastal systems today,” he adds. “Without intervention, these trends could significantly alter marine ecosystems and the coastal economies that rely on them over time.”
Varanasi and T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering, Post-Tenure, at MIT, have been collaborating for years to develop methods for removing carbon dioxide from seawater and turning acidic water back to alkaline. In recent years, they’ve partnered with researchers at the University of Maine’s Darling Marine Center to deploy the method in hatcheries.
“The way we farm oysters, we spawn them in special tanks and rear them through about a two-week larval period … until they’re big enough so that they can be transferred out into the river as the water warms up,” explains Bill Mook, founder of Mook Sea Farm. Around 2009, he noticed problems with production of early-stage larvae. “It was a catastrophe. We lost several hundred thousand dollars’ worth of production,” he says.
Ultimately, the problem was identified as the low pH of the water that was being brought in: The water was too acidic. The farm’s initial strategy, a common practice in oyster farming, was to buffer the water by adding sodium bicarbonate. The new approach avoids the use of chemicals or minerals.
“A lot of researchers are studying direct air capture, but very few are working in the ocean-capture space,” explains Hatton. “Our approach is to use electricity, in an electrochemical manner, rather than add chemicals to manipulate the solution pH.”
The method feeds collected seawater into electrochemical cells, where reactive electrodes release protons into the water, driving off the dissolved carbon dioxide. The cyclic process acidifies the water, converting dissolved inorganic bicarbonates to molecular carbon dioxide, which is collected as a gas under vacuum. The water is then fed to a second set of cells with a reversed voltage to recover the protons and turn the acidic water back to alkaline before it is released back to the sea.
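In carbonate terms, the acidifying half of the cycle can be read (in simplified form, not the team's exact cell chemistry) as protons converting bicarbonate into dissolved CO2, which then comes out of solution under vacuum; the second set of cells pulls those protons back out of the effluent, raising its pH before the water is returned.

$$
\mathrm{H^+ + HCO_3^- \;\longrightarrow\; H_2O + CO_2\!\uparrow}
$$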
Maine’s Damariscotta River Estuary, where Mook Sea Farm is located, provides about 70 percent of the state’s oyster crop. Damian Brady, a professor of oceanography at the University of Maine and a key collaborator on the project, says the Damariscotta community has “grown into an oyster-producing powerhouse … [that is] not only part of the economy, but part of the culture.” He adds, “there’s actually a huge amount that we could learn if we couple the engineering at MIT with the aquaculture science here at the University of Maine.”
“The scientific underpinning of our hypothesis was that these bivalve shellfish, including oysters, need calcium carbonate in order to form their shells,” says Simon Rufer PhD ’25, a former student in Varanasi’s lab and now CEO and co-founder of CoFlo Medical. “By alkalizing the water, we actually make it easier for the oysters to form and maintain their shells.”
In trials conducted by the team, results first showed that the approach is biocompatible and doesn't kill the larvae, and later showed that oysters treated with MIT's electrochemical approach fared better than those treated with mineral or chemical buffers. Importantly, Hatton also notes, the process creates no waste products. Ocean water goes in, CO2 comes out. This captured CO2 can potentially be used for other applications, including to grow algae to be used as food for shellfish.
Varanasi and Hatton first introduced their approach in 2023. Their most recent paper, “Thermodynamics of Electrochemical Marine Inorganic Carbon Removal,” published last year in the journal Environmental Science & Technology, outlines the overall thermodynamics of the process and presents a design tool for comparing different carbon removal processes. The team received a “plus-up award” from ARPA-E to collaborate with the University of Maine and further develop and scale the technology for application in aquaculture environments.
Brady says the project represents another avenue for aquaculture to contribute to climate change mitigation and adaptation. “It pushes a new technology for removing carbon dioxide from ocean environments forward simultaneously,” says Brady. “If they can be coupled, aquaculture and carbon dioxide removal improve each other’s bottom line."
Through the collaboration, the team is improving the robustness of the cells and learning how they function in real ocean environments. The project aims to scale up the technology and to have a significant impact on climate and the environment, but it also has another big focus.
“It’s also about jobs,” says Varanasi. “It’s about supporting the local economy and coastal communities who rely on aquaculture for their livelihood. We could usher in a whole new resilient blue economy. We think that this is only the beginning. What we have developed can really be scaled.”
Mook says the work is very much an applied science, “[and] because it’s applied science, it means that we benefit hugely from being connected and plugged into academic institutions that are doing research very relevant to our livelihoods. Without science, we don’t have a prayer of continuing this industry.”
Jazz in the key of life
Saxophonist Miguel Zenón, a Grammy-winning MIT faculty member, creates a distinctive blend of jazz and traditional Puerto Rican music.
It is not hard to find glowing reviews of saxophonist Miguel Zenón, a creative jazz artist whose compositions incorporate musical elements from his native Puerto Rico.
For instance, The Jazz Times called “Jibaro,” Zenón’s breakthrough 2005 album, “profound yet joyful.” The New York Times called the same music “strong and light,” adding that we have “rarely seen a jazz composer step forward with a project so impressively organized, intellectually powerful and well played from the start.”
In 2009, when Zenón won a prestigious MacArthur Fellowship, the MacArthur Foundation called Zenón’s work “elegant and innovative,” with “a high degree of daring and sophistication.” In 2012, The New York Times reviewed another Zenón work, “Puerto Rico Nació en Mi: Tales From the Diaspora,” by calling the music “deeply hybridized and original, complex but clear.”
As you may have noticed, these reviews all contain multiple descriptive terms. That’s because Zenón’s work is many things at once: jazz, combined with other musical genres; technically rigorous, and supple; novel, yet steeped in tradition. Indeed, Zenón has always seen jazz as being multifaceted.
“What I discovered, when I first encountered jazz, was this idea that you were using improvisation to portray your personality directly to your listeners,” Zenón explains. “And it was connected to a very interesting and intricate improvisational language. That provided something I hadn’t encountered in music before, this idea that you could have something personal and heartfelt walking hand in hand with something that was intellectual and brainy. That balance spoke to me.”
It is still speaking. In 2024, Zenón won the Grammy Award for Best Latin Jazz Album for “El Arte Del Bolero Vol. 2,” a collaboration with Venezuelan pianist Luis Perdomo, a musical partner in the Miguel Zenón Quartet.
Zenón has taught at MIT for three years now. He became a tenured faculty member last year, in MIT’s Music and Theater Arts program, where he helps students find the same satisfaction in music that he does.
“When I first got into music, I was looking for fulfillment,” Zenón says. “It wasn’t about success. I was just looking for music to fulfill something within me. And I still search for that now. And sometimes it still feels like it did 25 or 30 years ago, when I first encountered that feeling. It’s nice to have that in your pocket, to say, this is what I’m looking for, that initial feeling.”
Paradise in the Back Bay
Zenón grew up in San Juan, Puerto Rico. Around age 11, he started attending a performing arts school and playing the saxophone. In his last year of school, Zenón was admitted into college to study engineering. However, a few years before, he had encountered something new: jazz. Zenón’s training had been in classical music. But jazz felt different.
“Discovering jazz music ignited a passion for music in me that had not existed up to that point,” says Zenón, who decided to pursue music in college. “I kind of jumped ship, and it was a blind jump. I didn’t know what to expect, I didn’t know what was on the other side, I didn’t have any artists or any musicians in my family. I just followed a hunch, followed my heart.”
After teachers recommended he study at the renowned Berklee College of Music in Boston, Zenón worked to find a scholarship and funding.
“This was way before the internet. I was looking at catalogs,” Zenón recalls. “I had never been to Boston in my life, I didn’t even know what Berklee looked like. But at Berklee it was the first time I was able to connect with a jazz teacher in a formal way, to learn about history, theory, harmony, and I soaked in it. Also, I was surrounded by young people like myself, who were as enamored and passionate about music as I was. It really felt like paradise.”
After earning his BA from Berklee in 1998, Zenón then moved to New York City. He earned an MA from the Manhattan School of Music in 2001 and began playing more extensively with new bandmates.
“I just wanted to be able to play with people who were better than me, and learn from the experience,” Zenón says. He started generating new ideas, writing music, and performing publicly. With Antonio Sánchez, Hans Glawischnig, and Perdomo, he founded the Miguel Zenón Quartet.
“That led to going into the studio and making an album,” Zenón recounts. “And that led to more experience, and more albums.”
Did it ever. Zenón has now been the leader for about 20 albums, mostly featuring the quartet. (After several years, Henry Cole replaced Sánchez as the group’s drummer.) Zenón has played on many recordings by other artists, and helped found the SFJAZZ Collective.
Not many prolific musicians will name any one recording as their best, and Zenón is the same way, but he is willing to cite a few that were milestones for him.
“Jibaro” draws on the music of Puerto Rico’s jibaro singers, troubadours who use 10-line stanzas with eight-syllable lines, a form Zenón adopted for jazz-quartet use. “Esta Plena,” a 2009 record, fuses jazz and the structures of “plena,” a traditional percussion-based Puerto Rican song form. “Alma Adentro,” a 2011 album, covers classic songs from Puerto Rico.
“It would be impossible for me to pick one favorite, but what I would say is, there are a couple of albums in the earlier part of my career that explored a balance between things coming from a jazz world and things coming from traditional Puerto Rican music and folklore. When I was able to feel like that balance was right, it felt like me,” Zenón says. “This is what I have to give. This is my persona.”
In 2008, Zenón was also honored with a Guggenheim Fellowship, which helped him conduct music research, another facet of his career. Zenón has often extensively interviewed traditional Puerto Rican musicians about the intricacies of their works before writing material in those forms.
And Zenón has made a point of giving back, founding the Caravana Cultural, a project that brings free jazz concerts to rural Puerto Rico.
Work, joy, and love
Zenón is now settled in at MIT, which boasts a vibrant music program. More than 1,500 MIT students take a music class each year, and over 500 students participate in one of 30 campus ensembles. Last year, MIT opened its new Edward and Joyce Linde Music Building, a purpose-built performance, rehearsal, and teaching space.
“There are definitely students at MIT who could be at some of the best music schools in the world,” Zenón says. “That’s not in question.”
Moreover, among MIT students, Zenón says, “There is a communal approach to music. Everything they do, they do for each other. They look out for each other, they work together. And that has been one of the most rewarding things to see.”
He continues: “Of course the students are brilliant and the faculty are too. In terms of what I like to teach, it’s been a good fit for me personally, and I couldn’t be happier about the opportunity. There’s more and more interest in jazz, more and more interest in creating things together, and there’s a unique mindset being built in front of our eyes.”
He is also pleased to work in the Linde Music Building: “It’s amazing to have the building, not only in terms of the facilities, but it’s also a symbol of the place music has within the Institute. We’re not just talking about music, we’re creating it. It’s a great commitment from the school and says a lot about our leadership.”
Meanwhile, along with teaching, Zenón’s own recording career continues at full speed. With Luis Perdomo, he is working on “El Arte Del Bolero Vol. 3,” the follow-up to his Grammy-winning album. And Zenón has plans for still another album, to be recorded in Puerto Rico with a large ensemble, based on music he is writing about Puerto Rico’s history and present.
“Things are always linked,” Zenón explains. “Once you finish one project, the next one starts. It feels natural for me to do it that way.”
In conversation, Zenón is engaging, genial, and reflective. So what advice does he have for younger musicians? Not everyone who plays an instrument will become Miguel Zenón. But what about people who want to pursue music, not knowing how far it will take them?
“If you find something you enjoy, just enjoy it for the sake of it,” Zenón says. “Find what brings joy, and make sure you don’t lose that. Having said that, with music, like any art form, or anything else in life, in order to make progress, it takes work and commitment. There’s no hiding that. So if music is something you’re serious about, set goals you can achieve over time, so you always have something to work for. In my experience, that’s key. But I always pair that with the idea of joy and love for music — keeping that love close to your heart.”
Professor Emeritus Jack Dennis, pioneering developer of dataflow models of computation, dies at 94
The influential first leader of the Computation Structures Group at MIT played a key role in the development of asynchronous computing.
Jack Dennis, an influential MIT professor emeritus of computer science and engineering, died on March 14 at age 94. The original leader of the Computation Structures Group within the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), he pioneered the development of dataflow models of computation, and, subsequently, many novel principles of computer architecture inspired by dataflow models.
The second child of an engineer and a textile designer, Dennis showed early interest in both engineering and music, rewriting Gilbert and Sullivan lyrics with his parents and playing piano with the Norwalk Symphony Orchestra in Connecticut as a teen, while building a canoe at home with his father. As an undergraduate at MIT, he developed his wide array of interests further, joining the VI-A Cooperative Program in Electrical Engineering; working at the Air Force Cambridge Research Laboratories on projects in speech processing and novel radar systems; participating in the model railroad club; and joining the MIT Symphony Orchestra, where he met his first wife, Jane Hodgson ’55, SM ’56, PhD ’61. (The two later separated when she went to study medicine in Florida.)
Dennis earned his BS (1953), MS (1954), and ScD (1958) from MIT before joining the then-Department of Electrical Engineering as a faculty member. He was promoted to full professor in 1969. His doctoral thesis, “Mathematical Programming and Electrical Networks,” explored analogies between electric circuit theory and quadratic programming problems. Ideas he developed in that work further crystallized in his 1964 paper, “Distributed solution of network programming problems,” which introduced an important early class of digital distributed optimization solvers.
In a 2003 piece that Dennis wrote for his undergraduate class’s 50th reunion, he remembered his earliest encounters with computers at the Institute: “I prepared programs written in assembly language on punched paper tape using Frieden 'Flexowriters,' and stood aside watching the myriad lights blink and flash while operator Mike Solamita fed the tapes [...] That was 1954. Fifty years later, much has changed: A room full of vacuum tubes has become a tiny chip with millions of transistors. A phenomenon once limited to research laboratories has become an industry producing commodity products that anyone can own and use beneficially.”
Dennis’ influence in steering that change was profound. As a collaborator with the teams behind both Project MAC and Multics, the earliest attempts to allow multiple users to work with a single computer seemingly simultaneously (i.e., a time-shared operating system), Dennis helped to specify the unique segment addressing and paging mechanisms that became a fundamental part of the General Electric Model 645 computer. His insights stemmed from a tendency to pay equal attention to both hard- and software when others considered themselves specialists in one or the other.
“I formed the Computation Structures Group [within CSAIL] and focused on architectural concepts that could narrow the acknowledged gap between programming concepts and the organization of computer hardware,” Dennis explained in his 2003 recollection. “I found myself dismayed that people would consider themselves to be either hardware or software experts, but paid little heed to how joint advances in programming and architecture could lead to a synergistic outcome that might revolutionize computing practice.”
Dennis’ emphasis on synergy did not go unnoticed. Gerald Sussman, the Panasonic Professor of Electrical Engineering, points out “the relationship of [Dennis’] dataflow architecture to single-assignment programs, and thus to pure functional programs. This coupled the virtue of referential transparency in programming to the effective use of hardware parallelism. Dennis also pioneered the use of self-timed circuits in digital systems. The ideas from that work generalize to much of the work on highly distributed systems.”
The Computation Structures Group attracted multiple scholars interested in developing asynchronous computing and dataflow architecture, many of whom became lifelong friends and collaborators. These included Peter Denning, with whom Dennis and Joseph Qualitz co-authored the textbook “Machines, Languages, and Computation” (1978); the late Arvind, who became faculty head of computer science for the Department of Electrical Engineering and Computer Science (EECS); and the late Guang R. Gao, who became distinguished professor of electrical and computer engineering at the University of Delaware.
In recognition of his contributions to the Multics project, Dennis was elected fellow of the Institute of Electrical and Electronics Engineers (IEEE). Many additional honors would follow: He received the Association for Computing Machinery (ACM)/IEEE Eckert-Mauchly Award in 1984; was inducted as a fellow of the ACM (1994); was named to the National Academy of Engineering (2009); was elected to the (ACM) Special Interest Group on Operating Systems (SIGOPS) Hall of Fame (2012); and was awarded the IEEE John von Neumann Medal (2013).
A successful researcher, Dennis was perhaps equally influential in the development of EECS’ curriculum, developing six subjects in areas of computer theory and systems: Theoretical Models for Computation; Computation Structures; Structure of Computer Systems; Semantic Theory for Computer Systems; Semantics of Parallel Computation; and Computer System Architecture (taught in collaboration with Arvind). Several of the courses that Dennis developed continue to be taught, in updated form, to this day.
Following his retirement from teaching in 1987, he consulted on projects relating to parallel computer hardware and software for such varied groups as NASA Research Institute for Advanced Computer Science; Boeing Aerospace; McGill University; the Architecture Group of Carlstedt Elektronik in Gothenburg, Sweden; and Acorn Networks, Inc. His fruitful relationship with former student Guang Gao continued in the form of a lecture tour through China, as well as co-authorship of a book, “Dataflow Architecture,” currently in progress at MIT Press.
A voracious lifelong learner, Dennis was fond of repeating a friend’s observation that “a scholar is just a book’s way of making another book.” In a full and active retirement, he still made room for music, trying his hand at composing; performing at Tanglewood as a tenor in Chorus Pro Musica; playing piano at the marriage of Guang Gao’s son Nick; and joining the chorus at the First Church in Belmont, Massachusetts, where his celebration of life (with concurrent livestreaming) will be held on Monday, June 8, at 2 p.m.
Dennis is survived by his wife Therese Smith ’75; children David Hodgson Dennis of North Miami, Florida; Randall Dennis of Connecticut; and Galen Dennis, a resident of Australia.
Learning with audiobooks
A new study finds that audiobooks help students learn new words — especially when paired with one-on-one instruction.
Millions of students nationwide use text-supplemented audiobooks, learning tools that are thought to help those who struggle with reading keep up in the classroom. A new study from scientists at MIT’s McGovern Institute for Brain Research finds that many students do benefit from the audiobooks, gaining new vocabulary through the stories they hear. But study participants learned significantly more when audiobooks were paired with explicit one-on-one instruction — and this was especially true for students who were poor readers. The group’s findings were reported on March 17 in the journal Developmental Science.
“It is an exciting moment in this ed-tech space,” says Grover Hermann Professor of Health Sciences and Technology John Gabrieli, noting a rapid expansion of online resources meant to support students and educators. “The admirable goal in all this is: Can we use technology to help kids progress, especially kids who are behind for one reason or another?” His team’s study — one of few randomized, controlled trials to evaluate educational technology — suggests a nuanced approach is needed as these tools are deployed in the classroom. “What you can get out of a software package will be great for some people, but not so great for other people,” Gabrieli says. “Different people need different levels of support.” Gabrieli is also a professor of brain and cognitive sciences and an investigator at the McGovern Institute.
Ola Ozernov-Palchik and Halie Olson, scientists in Gabrieli’s lab, launched the audiobook study in 2020, when most schools in the United States had closed to slow the spread of Covid-19. The pandemic meant the researchers would not be able to ask families to visit an MIT lab to participate in the study — but it also underscored the urgency of understanding which educational technologies are effective, and for whom.
“What we were really concerned about as the pandemic hit is that the types of gaps that we see widen through the summers — the summer slide that affects poor readers and disadvantaged children to a greater extent — would be amplified by the pandemic,” says Ozernov-Palchik. Many educational technologies purport to ameliorate these gaps. But, Ozernov-Palchik says, “fewer than 10 percent of educational technology tools have undergone any type of research. And we know that when we use unproven methods in education, the students who are most vulnerable are the ones who are left further and further behind.”
So the team designed a study that could be done remotely, involving hundreds of third- and fourth-graders around the country. They focused on evaluating the impact of audiobooks on children’s vocabularies, because vocabulary knowledge is so important for educational success. Ozernov-Palchik explains that books are important for exposing children to new words, and when children miss out on that experience because they struggle to read, they can fall further behind in school.
Audiobooks allow students to access similar content in a different way. For their study, the researchers partnered with Learning Ally, an organization that produces audiobooks synchronized with highlighted text on a computer screen, so students can follow along as they listen.
“The idea is, they’re going to learn vocabulary implicitly through accessing those linguistically rich materials,” Ozernov-Palchik says. But that idea was untested. In contrast, she says, “we know that really what works in education, especially for the most vulnerable students, is explicit instruction.”
Before beginning their study, Ozernov-Palchik and Olson trained a team of online tutors to provide that explicit instruction. The tutors — college students with no educational expertise — learned how to apply proven educational methods to support students’ learning and understanding of challenging new words they encountered in their audiobooks.
Students in the study were randomly assigned to an eight-week intervention. Some were asked to listen to Learning Ally audiobooks for about 90 minutes a week. Another group received one-on-one tutoring twice a week, in addition to listening to audiobooks. A third group, in which students participated in mindfulness practice without using audiobooks or receiving tutoring, served as a control.
A diverse group of students participated, spanning different reading abilities and socioeconomic backgrounds. The study’s remote design — with flexibly scheduled testing and tutoring sessions conducted over Zoom — helped make that possible. “I think the pandemic pushed researchers to rethink how we might use these technologies to make our research more accessible and better represent the people that we’re actually trying to learn about,” says Olson, a postdoc who was a graduate student in Gabrieli’s lab.
Testing before and after the intervention showed that overall, students in the audiobooks-only group gained vocabulary. But on their own, the books did not benefit everyone. Children who were poor readers showed no improvement from audiobooks alone, but did make significant gains in vocabulary when the audiobooks were paired with one-on-one instruction. Even good readers learned more vocabulary when they received tutoring, although the differences for this group were less dramatic.
Individualized, one-on-one instruction can be time-consuming, and may not be routinely paired with audiobooks in the classroom. But the researchers say their study shows that effective instruction can be provided remotely, and you don’t need highly trained professionals to do it.
For students from households with lower socioeconomic status, the researchers found no evidence of significant gains, even when audiobooks were paired with explicit instruction — further emphasizing that different students have different needs. “I think this carefully done study is a note of caution about who benefits from what,” Gabrieli says.
The researchers say their study highlights the value and feasibility of objectively evaluating educational technologies — and that effort will continue. At Boston University, where she is a research assistant professor, Ozernov-Palchik has launched a new initiative to evaluate artificial intelligence-based educational tools’ impacts on student learning.
A new type of electrically driven artificial muscle fiber
Electrofluidic fibers mimic how natural muscle fibers bundle, and could enable compact, silent robotic and prosthetic systems.
Muscles are remarkably effective systems for generating controlled force, and engineers developing hardware for robots or prosthetics have long struggled to create analogs that can approach their unique combination of strength, rapid response, scalability, and control. But now, researchers at the MIT Media Lab and Politecnico di Bari in Italy have developed artificial muscle fibers that come closer to matching many of these qualities.
Like the fibers that bundle together to form biological muscles, these fibers can be arranged in different configurations to meet the demands of a given task. Unlike conventional robotic actuation systems, they are compliant enough to interface comfortably with the human body and operate silently without motors, external pumps, or other bulky supporting hardware.
The new electrofluidic fiber muscles — electrically driven actuators built in fiber format — are described in a recent paper published in Science Robotics. The work is led by Media Lab PhD candidate Ozgun Kilic Afsar; Vito Cacucciolo, a professor at the Politecnico di Bari; and four co-authors.
The new system brings together two technologies, Afsar explains. One is a fluidically driven artificial muscle known as a thin McKibben actuator, and the other is a miniaturized solid-state pump based on electrohydrodynamics (EHD), which can generate pressure inside a sealed fluid compartment without moving parts or an external fluid supply.
Until now, most fluid-driven soft actuators have relied on external “heavy, bulky, oftentimes noisy hydraulic infrastructure,” Afsar says, “which makes them difficult to integrate into systems where mobility or compact, lightweight design is important.” This has created a fundamental bottleneck in the practical use of fluidic actuators in real-world applications.
The key to breaking through that bottleneck was the use of integrated pumps based on electrohydrodynamic principles. These millimeter-scale, electrically driven pumps generate pressure and flow by injecting charge into a dielectric fluid, creating ions that drag the fluid along with them. Weighing just a few grams each and not much thicker than a toothpick, they can be fabricated continuously and scaled easily. “We integrated these fiber pumps into a closed fluidic circuit with the thin McKibben actuators,” Afsar says, noting that this was not a simple task given the different dynamics of the two components.
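The textbook picture of such charge-injection pumping (a general relation, not the specific model in the paper) is that the injected space charge of density \(\rho_q\) feels the applied electric field \(\mathbf{E}\) and drags the surrounding dielectric liquid with it; the pressure the pump can develop scales roughly with that body force integrated along the channel:

$$
\mathbf{f} = \rho_q \,\mathbf{E},
\qquad
\Delta p \;\sim\; \int_{\text{channel}} \rho_q E \, dx
$$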
A key design strategy was to pair these fibers in what are known as antagonistic configurations. Cacucciolo explains that this is where “one muscle contracts while another elongates,” as when you bend your arm and your biceps contract while your triceps stretch. In their system, a millimeter-scale fiber pump sits between two similarly scaled McKibben actuators, driving fluid into one actuator to contract it while simultaneously relaxing the other.
“This is very much reminiscent of how biological muscles are configured and organized,” Afsar says. “We didn’t choose this configuration simply for the sake of biomimicry, but because we needed a way to store the fluid within the muscle design.” The need for an external reservoir open to the atmosphere has been one of the main factors limiting the practical use of EHD pumps in robotic systems outside the lab. By pairing two McKibben fibers in line, with a fiber pump between them to form a closed circuit, the team eliminated that need entirely.
Another key finding was that the muscle fibers needed to be pre-pressurized, rather than simply filled. “There is a minimum internal system pressure that the system can tolerate,” Afsar says, “below which the pump can degrade or temporarily stop working.” This happens because of cavitation, in which vapor bubbles form when the pressure at the pump inlet drops below the vapor pressure of the liquid, eventually leading to dielectric breakdown.
To prevent cavitation, they applied a “bias” pressure from the outset so that the pressure at the fiber pump inlet never falls below the liquid’s vapor pressure. The magnitude of this bias pressure can be adjusted depending on the application. “To achieve the maximum contraction the muscle can generate, we found there is a specific bias pressure range that is optimal,” she says. “If you want to configure the system for faster response, you might increase that bias pressure, though with some reduction in maximum contraction.”
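Written schematically (in notation of my own, not the authors'), the pre-pressurization requirement is simply that the pump inlet must stay above the liquid's vapor pressure even after the suction the pump itself produces:

$$
p_{\text{inlet}} \;=\; p_{\text{bias}} - \Delta p_{\text{suction}} \;\ge\; p_{\text{vap}}
\quad\Longleftrightarrow\quad
p_{\text{bias}} \;\ge\; p_{\text{vap}} + \Delta p_{\text{suction}}
$$

Raising the bias pressure gives faster response but, as Afsar notes, trades away some of the maximum contraction.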
Cacucciolo adds that most of today’s robotic limbs and hands are built around electric servo motors, whose configuration differs fundamentally from that of natural muscles. Servo motors generate rotational motion on a shaft that must be converted into linear movement, whereas muscle fibers naturally contract and extend linearly, as do these electrofluidic fibers.
“Most robotic arms and humanoid robots are designed around the servo motors that drive them,” he says. “That creates integration constraints, because servo motors are hard to package densely and tend to concentrate mass near the joints they drive. By contrast, artificial muscles in fiber form can be packed tightly inside a robot or exoskeleton and distributed throughout the structure, rather than concentrated near a joint.”
These electrofluidic muscles may be especially useful for wearable applications, such as exoskeletons that help a person lift heavier loads or assistive devices that restore or augment dexterity. But the underlying principles could also apply more broadly. “Our findings extend to fluid-driven robotic systems in general,” Cacucciolo says. “Wherever fluidic actuators are used, or where engineers want to replace external pumps with internal ones, these design principles could apply across a wide range of fluid-driven robotic systems.”
This work “presents a major advancement in fiber-format soft actuation,” which “addresses several long-standing hurdles in the field, particularly regarding portability and power density,” says Herbert Shea, a professor in the Soft Transducers Laboratory at the École Polytechnique Fédérale de Lausanne in Switzerland, who was not associated with this research. “The lack of moving parts in the pump makes these muscles silent, a major advantage for prosthetic devices and assistive clothing,” he says.
Shea adds that “this high-quality and rigorous work bridges the gap between fundamental fluid dynamics and practical robotic applications. The authors provide a complete system-level solution — characterizing the individual components, developing a predictive physical model, and validating it through a range of demonstrators.”
In addition to Afsar and Cacucciolo, the team also included Gabriele Pupillo and Gennaro Vitucci at Politecnico di Bari and Wedyan Babatain and Professor Hiroshi Ishii at the MIT Media Lab. The work was supported by the European Research Council and the Media Lab’s multi-sponsored consortium.
New technique makes AI models leaner and faster while they’re still learning
Researchers use control theory to shed unnecessary complexity from AI models during training, cutting compute costs without sacrificing performance.
Training a large artificial intelligence model is expensive, not just in dollars, but in time, energy, and computational resources. Traditionally, obtaining a smaller, faster model either requires training a massive one first and then trimming it down, or training a small one from scratch and accepting weaker performance.
Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), the Max Planck Institute for Intelligent Systems, the European Laboratory for Learning and Intelligent Systems, ETH Zurich, and Liquid AI have now developed a new method that sidesteps this trade-off entirely, compressing models during training, rather than after.
The technique, called CompreSSM, targets a family of AI architectures known as state-space models, which power applications ranging from language processing to audio generation and robotics. By borrowing mathematical tools from control theory, the researchers can identify which parts of a model are pulling their weight and which are dead weight, before surgically removing the unnecessary components early in the training process.
"It's essentially a technique to make models grow smaller and faster as they are training," says Makram Chahine, a PhD student in electrical engineering and computer science, CSAIL affiliate, and lead author of the paper. "During learning, they're also getting rid of parts that are not useful to their development."
The key insight is that the relative importance of different components within these models stabilizes surprisingly early during training. Using a mathematical quantity called Hankel singular values, which measure how much each internal state contributes to the model's overall behavior, the team showed they can reliably rank which dimensions matter and which don't after only about 10 percent of the training process. Once those rankings are established, the less-important components can be safely discarded, and the remaining 90 percent of training proceeds at the speed of a much smaller model.
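For readers who want the flavor of the control-theory tool involved, the sketch below ranks the states of a toy linear state-space model by their Hankel singular values, using standard SciPy Lyapunov solvers. It is a minimal illustration of how such a ranking flags negligible state dimensions, not the CompreSSM training procedure itself, and it assumes a simple continuous-time, time-invariant system rather than the discretized, layered models used in practice.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def hankel_singular_values(A, B, C):
    """Rank how much each state of dx/dt = Ax + Bu, y = Cx contributes
    to input-output behavior (A must be stable)."""
    # Controllability Gramian P solves: A P + P A^T + B B^T = 0
    P = solve_continuous_lyapunov(A, -B @ B.T)
    # Observability Gramian Q solves: A^T Q + Q A + C^T C = 0
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    # Hankel singular values are the square roots of the eigenvalues of P @ Q
    sigma = np.sqrt(np.abs(np.linalg.eigvals(P @ Q).real))
    return np.sort(sigma)[::-1]

# Toy stable system with 64 states, only a few of which matter much
rng = np.random.default_rng(0)
n, m, p = 64, 4, 4
A = -np.diag(np.linspace(1.0, 50.0, n)) + 0.01 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

sigma = hankel_singular_values(A, B, C)
energy = np.cumsum(sigma) / sigma.sum()
r = int(np.searchsorted(energy, 0.99)) + 1  # smallest r capturing 99% of the Hankel "energy"
print(f"keep {r} of {n} state dimensions; the discarded ones carry at most 1% of that measure")
```

By the team's description, CompreSSM computes a ranking of this kind early in training and drops the low-ranked state dimensions, so the remaining roughly 90 percent of training runs at the reduced size.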
"What's exciting about this work is that it turns compression from an afterthought into part of the learning process itself,” says senior author Daniela Rus, MIT professor and director of CSAIL. “Instead of training a large model and then figuring out how to make it smaller, CompreSSM lets the model discover its own efficient structure as it learns. That's a fundamentally different way to think about building AI systems.”
The results are striking. On image classification benchmarks, compressed models maintained nearly the same accuracy as their full-sized counterparts while training up to 1.5 times faster. A compressed model reduced to roughly a quarter of its original state dimension achieved 85.7 percent accuracy on the CIFAR-10 benchmark, compared to just 81.8 percent for a model trained at that smaller size from scratch. On Mamba, one of the most widely used state-space architectures, the method achieved approximately 4x training speedups, compressing a 128-dimensional model down to around 12 dimensions while maintaining competitive performance.
"You get the performance of the larger model, because you capture most of the complex dynamics during the warm-up phase, then only keep the most-useful states," Chahine says. "The model is still able to perform at a higher level than training a small model from the start."
What makes CompreSSM distinct from existing approaches is its theoretical grounding. Conventional pruning methods train a full model and then strip away parameters after the fact, meaning you still pay the full computational cost of training the big model. Knowledge distillation, another popular technique, requires training a large "teacher" model to completion and then training a second, smaller "student" model on top of it, essentially doubling the training effort. CompreSSM avoids both of these costs by making informed compression decisions mid-stream.
The team benchmarked CompreSSM head-to-head against both alternatives. Compared to Hankel nuclear norm regularization, a recently proposed spectral technique for encouraging compact state-space models, CompreSSM was more than 40 times faster, while also achieving higher accuracy. The regularization approach slowed training by roughly 16 times because it required expensive eigenvalue computations at every single gradient step, and even then, the resulting models underperformed. Against knowledge distillation on CIFAR-10, CompreSSM held a clear advantage for heavily compressed models: At smaller state dimensions, distilled models saw significant accuracy drops, while CompreSSM-compressed models maintained near-full performance. And because distillation requires a forward pass through both the teacher and student at every training step, even its smaller student models trained slower than the full-sized baseline.
The researchers proved mathematically that the importance of individual model states changes smoothly during training, thanks to an application of Weyl's theorem, and showed empirically that the relative rankings of those states remain stable. Together, these findings give practitioners confidence that dimensions identified as negligible early on won't suddenly become critical later.
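The standard form of the bound being invoked is Weyl's inequality for singular values (stated generically here; the paper's precise statement may differ): perturbing a matrix A by E moves each singular value by at most the spectral norm of E, so the small parameter change of a single training step can only nudge the Hankel singular values by a correspondingly small amount.

$$
\left|\,\sigma_i(A + E) - \sigma_i(A)\,\right| \;\le\; \lVert E \rVert_2 \qquad \text{for all } i
$$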
The method also comes with a pragmatic safety net. If a compression step causes an unexpected performance drop, practitioners can revert to a previously saved checkpoint. "It gives people control over how much they're willing to pay in terms of performance, rather than having to define a less-intuitive energy threshold," Chahine explains.
There are some practical boundaries to the technique. CompreSSM works best on models that exhibit a strong correlation between the internal state dimension and overall performance, a property that varies across tasks and architectures. The method is particularly effective on multi-input, multi-output (MIMO) models, where the relationship between state size and expressivity is strongest. For per-channel, single-input, single-output architectures, the gains are more modest, since those models are less sensitive to state dimension changes in the first place.
The theory applies most cleanly to linear time-invariant systems, although the team has developed extensions for the increasingly popular input-dependent, time-varying architectures. And because the family of state-space models extends to architectures like linear attention, a growing area of interest as an alternative to traditional transformers, the potential scope of application is broad.
Chahine and his collaborators see the work as a stepping stone. The team has already demonstrated an extension to linear time-varying systems like Mamba, and future directions include pushing CompreSSM further into matrix-valued dynamical systems used in linear attention mechanisms, which would bring the technique closer to the transformer architectures that underpin most of today's largest AI systems.
"This had to be the first step, because this is where the theory is neat and the approach can stay principled," Chahine says. "It's the stepping stone to then extend to other architectures that people are using in industry today."
"The work of Chahine and his colleagues provides an intriguing, theoretically grounded perspective on compression for modern state-space models (SSMs)," says Antonio Orvieto, ELLIS Institute Tübingen principal investigator and MPI for Intelligent Systems independent group leader, who wasn't involved in the research. "The method provides evidence that the state dimension of these models can be effectively reduced during training and that a control-theoretic perspective can successfully guide this procedure. The work opens new avenues for future research, and the proposed algorithm has the potential to become a standard approach when pre-training large SSM-based models."
The work, which was accepted as a conference paper at the International Conference on Learning Representations 2026, will be presented later this month. It was supported, in part, by the Max Planck ETH Center for Learning Systems, the Hector Foundation, Boeing, and the U.S. Office of Naval Research.
The flawed fundamentals of failing banks
MIT economist Emil Verner’s historical detective work shows how banking-sector crises develop out of bad business practices.
Bank runs are dramatic: Picture Depression-era footage of customers lined up, trying to get their deposits back. Or recall Lehman Brothers emptying out in 2008 or Silicon Valley Bank collapsing in 2023.
But what causes these runs in the first place? One viewpoint is that something of a self-fulfilling prophecy is involved. Panic spreads, and suddenly many customers are seeking their money back, until an otherwise solid institution is run into the ground.
That is not exactly Emil Verner’s position, however. Verner, an MIT economist, has been studying bank failures empirically for years and now has a different perspective. Verner and his collaborators have produced extensive evidence suggesting that when banks fail, it is usually because they are in a fundamentally shaky position. A bank run generally finishes off an already flawed business rather than upending a viable one.
“What we essentially find is that banks that fail are almost always very weak, and are in trouble,” says Verner, who is the Jerome and Dorothy Lemelson Professor of Management and Financial Economics at the MIT Sloan School of Management. “Most banks that have been subject to runs have been pretty insolvent. Runs are more the final spasm that brings down weak banks, rather than the causes of indiscriminate failures.”
This conclusion has plenty of policy relevance for the banking sector and follows a lengthy analysis of historical data. In one forthcoming paper, in the Quarterly Journal of Economics, Verner and two colleagues reviewed U.S. bank data from 1863 to 2024, concluding that “the primary cause of bank failures and banking crises is almost always and everywhere a deterioration of bank fundamentals.” In a 2021 paper in the same journal, Verner and two other colleagues studied banking data from 46 countries covering 1870-2016, and found that declining bank fundamentals usually preceded runs. And currently, Verner is working to make more historical U.S. bank data publicly available to scholars.
Seen in this light, sure, bank runs are damaging, but bank failures likely have more to do with bad portfolios, poor risk management, and minimal assets in reserve, rather than sentiment-driven client behavior.
“From the idea that bank crises are really about sudden runs on bank debt, we’re moving to thinking that runs are one symptom of crisis that runs deeper,” Verner says. “For most people, we’re saying something reasonable, refining our knowledge, and just shifting the emphasis.”
For his research and teaching, Verner received tenure at MIT last year.
Landing in a “great place”
Verner is a native of Denmark who also lived in the U.S. for several years while growing up. Around the time he was finishing school, the U.S. housing market imploded, taking some financial institutions with it.
“Everything came crashing down,” Verner says. “I got obsessed with understanding it.”
As an undergraduate, he studied economics at the University of Copenhagen. After three years, Verner was unconvinced the discipline had fully explained financial crises. He decided to keep studying economics in graduate school, and was accepted into the PhD program at Princeton University.
Along the way, Verner became a historically minded economist, digging into data and cases from past decades to shed light on larger patterns about crises and bank insolvency.
“I’ve always thought history was extremely fascinating in itself,” Verner says. And while history may not repeat, he notes, it is “a really valuable tool. It helps you think through what could happen, what are similar scenarios, and how agents acted when facing similar constraints and incentives in the past.”
For studying financial crises in particular, he adds, history helps in multiple ways. Crises are rare, so historical cases add data. Changes over time, like more financial regulations and more complex investment tools, provide different settings to examine the same cause-and-effect issues. “History is a useful laboratory to study these questions,” Verner says.
After earning his PhD from Princeton, Verner went on the job market and landed his faculty position at MIT Sloan. Many aspects of Institute life — the classroom experience, the collegiality, the campus — have strongly resonated with him.
“MIT is a great place,” Verner says simply. “Great colleagues, great students.”
Focused on fundamentals
Over the last decade, Verner has published papers on numerous topics in addition to banking crises. As an outgrowth of his doctoral work, for instance, he published innovative papers examining the dampening effect that household debt has on economic growth in many countries. He also co-authored the lead paper in an issue of the American Economic Review last year examining the way German hyperinflation after World War I reallocated wealth to large businesses with substantial debt, leading them to grow faster.
Still, the main focus of Verner’s work right now is on banking crises and bank failures — including their causes. In a 2024 paper looking at private lending in 117 countries since 1940, Verner and economist Karsten Müller showed that financial crises are often preceded by credit booms in what scholars call the “non-tradeable” sector of the economy. That includes industries such as retail or construction, which do not produce easily tradeable goods. Firms in the non-tradeable sector tend to rely more heavily on loans secured by real estate; during real estate booms, such firms use high valuations to borrow more, and they become more vulnerable to crashes — which helps explain why bank portfolios, in turn, can crater as well.
In recent years, in the process of studying these topics, Verner has helped expand the domain of known U.S. historical data in the field. Working with economists Sergio Correia and Stephan Luck, Verner has helped apply large language models to historical newspaper collections, unearthing information about 3,421 runs on individual banks from 1863 to 1934; they are making that data freely available to other scholars.
This topic has important policy implications. If runs are a contagion bringing down worthy banks, then one solution is to provide banks with more liquidity to get through the crisis — something that has indeed been tried in the U.S. However, if bank failures are more based in fundamentals about risk and not keeping enough capital on hand, more systemic policy options about best practices might be logical. At a minimum, substantive new research can help alter the contents of those discussions.
“When banks fail, it’s usually because these banks have taken a lot of risk and have big losses,” Verner says. “It’s rarely unjustified. So that means these types of liquidity interventions alone are not enough to stop a crisis.”
The expansive research Verner has helped conduct includes a number of specific indicators that fundamentals are a big factor in failure. For instance, examining how rarely failed banks’ assets recover their full value shows how shaky those banks’ foundations were.
“The recovery rate on assets is informative about how solvent a bank was,” Verner says. “This is where I think we’ve contributed something new.” Some economists in the past have cited particular examples of struggling banks making depositors whole, but those are exceptions, not the rule. “Sometimes people argue this or that bank was actually solvent because depositors ended up getting all their money back, and that might be true of one bank, but on aggregate it’s not the case,” Verner says.
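A generic accounting identity, not the paper's exact measure, shows why recovery rates speak to solvency: a bank is solvent at failure only if the value ultimately realized from its assets covers what depositors and other creditors are owed, so persistently low recovery rates imply that net worth was already negative before the run.

$$
\text{recovery rate} \;=\; \frac{\text{realized value of assets}}{\text{book value of assets}},
\qquad
\text{solvent at failure} \iff \text{realized value of assets} \;\ge\; \text{liabilities}
$$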
Overall, Verner intends to keep following the facts, digging up more evidence, and seeing where it leads.
“While there is this notion that liquidity problems can arise pretty much out of nowhere, I think we are changing that emphasis by showing that financial crises happen basically because banks become insolvent,” Verner underscores. “And then the bank run is that final dramatic spasm — which slightly shifts how we teach and talk about it, and perhaps think about the policy response.”
Desirée Plata appointed associate dean of engineering
Faculty member in civil and environmental engineering will advance research and entrepreneurial initiatives across the School of Engineering.
Desirée Plata, the School of Engineering Distinguished Climate and Energy Professor in the MIT Department of Civil and Environmental Engineering, has been named associate dean of engineering, effective July 1.
In her new role, Plata will focus on fostering early-stage research initiatives across the school’s faculty and on strengthening entrepreneurial and innovation efforts. She will also support the school’s Technical Leadership and Communication (TLC) Programs, including: the Gordon Engineering Leadership Program, the Daniel J. Riccio Graduate Engineering Leadership Program, the School of Engineering Communication Lab, and the Undergraduate Practice Opportunities Program.
Plata will join Associate Dean Hamsa Balakrishnan, who continues to lead faculty searches, fellowships, and outreach programs. Together, the two associate deans will serve on key leadership groups including Engineering Council and the Dean’s Advisory Council to shape the school’s strategic priorities.
“Desirée’s leadership, scholarship, and commitment to excellence have already had a meaningful impact on the MIT community, and I look forward to the perspective and energy she will bring to this role,” says Paula T. Hammond, dean of the School of Engineering and Institute Professor in the Department of Chemical Engineering.
Plata’s research centers on the sustainable design of industrial processes and materials through environmental chemistry, with an emphasis on clean energy technologies. She develops ways to make industrial processes more environmentally sustainable, incorporating environmental objectives into the design phase of processes and materials. Her work spans nanomaterials and carbon-based materials for pollution reduction, as well as advanced methods for environmental cleanup and energy conversion. Plata directs MIT’s Parsons Laboratory, which conducts interdisciplinary research on natural systems and human adaptation to environmental change.
Plata is a leader on campus and beyond in climate and sustainability initiatives. She serves as director of the MIT Climate and Sustainability Consortium (MCSC), an industry–academia collaboration launched to accelerate solutions for global climate challenges. She founded and directs the MIT Methane Network, a multi-institution effort to cut global methane emissions within this decade. Plata also co-directs the National Institute of Environmental Health Sciences MIT Superfund Research Program, which focuses on strategies to protect communities concerned about hazardous chemicals, pollutants, and other contaminants in their environment.
Beyond academia, Plata has co-founded two climate and energy startups, Nth Cycle and Moxair. Nth Cycle is redefining metal refining and the domestic battery supply chain. Earlier this month, the company signed a $1.1 billion off-take agreement to help establish a secure and circular technology for battery minerals.
Her company Moxair specializes in advanced approaches for low-level methane monitoring and destruction. In 2026, with support from the U.S. Department of Energy and collaboration with MIT, Moxair will build and demonstrate a first-of-a-kind dilute methane oxidation technology to tackle methane emissions using transition metal catalysts.
As an educator, Plata has helped develop programs that enhance research experience for students and postdocs. She played a pivotal role in the founding of the MIT Postdoctoral Fellowship Program for Engineering Excellence, serving on its faculty steering committee, overseeing admissions, and leading both the academic track and entrepreneurship track. She also helped design the MCSC Climate and Sustainability Scholars Program, a yearlong program open to juniors and seniors across MIT.
Plata earned a BS in chemistry from Union College in 2003 and a PhD in the joint MIT-Woods Hole Oceanographic Institution program in oceanography and applied ocean science in 2009. After completing her doctorate, she held faculty positions at Mount Holyoke College, Duke University, and Yale University. While at Yale, she served as associate director of research at the university’s Center for Green Chemistry and Green Engineering. In 2018, Plata joined MIT’s faculty in the Department of Civil and Environmental Engineering.
Her work as a scholar and educator has earned numerous awards and honors. She received MIT’s Harold E. Edgerton Faculty Achievement Award in 2020, recognizing her excellence in research, teaching, and service. She has also been honored with an NSF CAREER Award and the Odebrecht Award for Sustainable Development. Plata is a fellow of the American Chemical Society and was a Young Investigator Sustainability Fellow at Caltech.
Plata is a two-time National Academy of Engineering Frontiers of Engineering Fellow and a two-time National Academy of Sciences Kavli Frontiers of Science Fellow. Her dedication to mentoring was recognized with MIT’s Junior Bose Award for Excellence in Teaching and the Frank Perkins Graduate Advising Award.
Physicists zero in on the mass of the fundamental W boson particle
The team’s ultra-precise measurement confirms the Standard Model’s predictions.
When fundamental particles are heavier or lighter than expected, physicists’ understanding of the universe can tip into the unknown. A particle that is just beyond its predicted mass can unravel scientists’ assumptions about the forces that make up all of matter and space. But now, a new precision measurement has reset the balance and confirmed scientists’ theories, at least for one of the universe’s core building blocks.
In a paper appearing today in the journal Nature, an international team including MIT physicists reports a new, ultraprecise measurement of the mass of the W boson.
The W boson is one of two elementary particles that embody the weak force, which is one of the four fundamental forces of nature. The weak force enables certain particles to change identities, such as from protons to neutrons and vice versa. This morphing is what drives radioactive decay, as well as nuclear fusion, which powers the sun.
Now, scientists have determined the mass of the W boson by analyzing more than 1 billion proton-colliding events produced by the Large Hadron Collider (LHC) at CERN (the European Organization for Nuclear Research) in Switzerland. The LHC accelerates protons toward each other at close to the speed of light. When they collide, two protons can produce a W boson, among a shower of other particles.
Catching a W boson is nearly impossible, as it decays almost immediately into two types of particles, one of which, a neutrino, is so elusive that it cannot be detected. Scientists are left to measure the other particle, known as a muon, and model how it might add up to the total mass of its parent, the W boson. In the new study, scientists used the Compact Muon Solenoid (CMS) experiment, a particle detector at the LHC that precisely tracks muons and other particles produced in the aftermath of proton collisions.
From billions of proton-proton collisions, the team identified 100 million events that produced a W boson decaying to a muon and a neutrino. For each of these events, they carried out detailed analyses to narrow in on a precise mass measurement. In the end, they determined that the W boson has a mass of 80360.2 ± 9.9 megaelectron volts (MeV). This new mass is in line with predictions of the Standard Model, which is physicists’ best rulebook for describing the fundamental particles and forces of nature.
The precision of the new measurement is on par with a previous measurement made in 2022 by the Collider Detector at Fermilab (CDF). That measurement took physicists by surprise, as it was significantly heavier than what the Standard Model predicted, and therefore raised the possibility of “new physics,” such as particles and forces that have yet to be discovered.
Because the new CMS measurement is just as precise as the CDF result and agrees with the Standard Model along with a number of other experiments, it is more likely that physicists are on solid ground in terms of how they understand the W boson.
“It’s just a huge relief, to be honest,” says Kenneth Long, a lead author of the study, who is a senior postdoc in MIT’s Laboratory for Nuclear Science. “This new measurement is a strong confirmation that we can trust the Standard Model.”
The study is authored by more than 3,000 members of CERN’s CMS Collaboration. The core group who worked on the new measurement includes about 30 scientists from 10 institutions, led by a team at MIT that includes Long; Tianyu Justin Yang PhD ’24; David Walter and Jan Eysermans, who are both MIT postdocs in physics; Guillelmo Gomez-Ceballos, a principal research scientist in the Particle Physics Collaboration; Josh Bendavid, a former research scientist; and Christoph Paus, a professor of physics at MIT and principal investigator with the Particle Physics Collaboration.
Piecing together
The W boson was first discovered in 1983 and is predicted to be the fourth heaviest among all the fundamental particles. Multiple experiments have aimed to narrow in on the particle’s mass, with varying degrees of precision. For the most part, these experiments have produced measurements that agree with the Standard Model’s predictions. The 2022 measurement by Fermilab’s CDF experiment is the one significant outlier. It also happens to be the most precise experiment to date.
“If you take the CDF measurement at face value, you would say there must be physics beyond the Standard Model,” says co-author Christoph Paus. “And of course that was the big mystery.”
Paus and his colleagues sought to either support or refute the CDF’s findings by making an independent measurement, with an experiment that matches CDF’s precision. Their new W boson mass measurement is a product of 10 years’ worth of work, both to analyze actual particle collision events and to simulate all the scenarios that could produce those events.
For their new study, the physicists analyzed proton collision events that were produced at the LHC in 2016. When it is running, the particle collider generates proton collisions at a furious rate of about one every 25 nanoseconds. The team analyzed a portion of the LHC’s 2016 dataset that encompasses billions of proton-proton collisions. Among these, they identified about 100 million events that produced a very short-lived W boson.
“A particle like the W boson exists for a teeny tiny moment — something like 10⁻²⁴ seconds — before decaying to two particles, one of which is a neutrino that can’t be measured directly,” Long explains. “That’s the tricky part: You have to measure the other particle — a muon — really well, and be able to piece things together with only one piece of the puzzle.”
Gathering momentum
When a muon is produced from the decay of a W boson, it carries half of the W boson’s mass, which is converted into momentum that carries the muon away from the original collision. Due to the strong magnetic field inside the CMS detector, the electrically charged muon follows a path whose curvature is a function of its momentum. Scientists’ challenge is to track the muon’s path and every interaction it may have with other particles and its surroundings, in order to estimate its initial momentum.
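The relation between curvature and momentum is standard accelerator physics rather than anything specific to this analysis: for a singly charged particle in a uniform field, the transverse momentum in GeV/c is roughly 0.3 times the field in tesla times the bending radius in meters. A quick sketch, using CMS's roughly 3.8-tesla solenoid field and a round 40 GeV muon as illustrative inputs:

```python
# Standard relation for a singly charged particle in a uniform magnetic field:
# pT [GeV/c] ≈ 0.3 * B [T] * R [m]. CMS's solenoid provides B ≈ 3.8 T.
B_TESLA = 3.8

def radius_of_curvature(pt_gev):
    """Bending radius (meters) of a muon track with transverse momentum pt_gev."""
    return pt_gev / (0.3 * B_TESLA)

def pt_from_radius(radius_m):
    """Invert the relation: infer transverse momentum from the measured curvature."""
    return 0.3 * B_TESLA * radius_m

# A muon from W decay carries roughly half the W mass, ~40 GeV:
print(radius_of_curvature(40.0))  # ≈ 35 m: a very gently curved track,
                                  # which is why tiny tracking errors matter so much
```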
The muon’s momentum is also influenced by the momentum of the W boson before it decays. Disentangling the effect of the W boson’s motion from the effect of its mass presented a major challenge. To infer the W boson mass, the team first carried out simulations of every scenario they could think of that a muon might experience after a proton-proton collision in the chaotic environment of the particle collider. In all, the team produced 4 billion such simulated events described by state-of-the-art theoretical calculations. The simulations encoded diverse hypotheses about how the muon momentum is affected by the physical features of the CMS detector, as well as uncertainties in the predictions that govern W boson production in LHC collisions.
The researchers compared their simulations with data from the 2016 LHC run. For every proton-proton collision event that occurs in the collider, scientists can use the CMS detector at CERN’s LHC to precisely measure the energy and momentum of resulting particles such as muons. The team analyzed CMS measurements of muons that were produced from over 100 million W boson events. They then matched this data against their simulations of the muon momentum and converted the best-fitting result into a new mass for the W boson.
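Conceptually, this is a template comparison: the measured muon-momentum spectrum is matched against simulated spectra generated under different W-mass hypotheses, and the hypothesis that fits best is taken as the measurement. The sketch below is only a cartoon of that idea, using a simple chi-square score; the collaboration's actual statistical treatment, with its full accounting of systematic uncertainties, is far more elaborate.

```python
import numpy as np

def best_fit_mass(data_hist, templates):
    """Pick the W-mass hypothesis whose simulated muon-pT spectrum best
    matches the observed one, via a simple chi-square comparison.
    `templates` maps a hypothesized mass (MeV) to a simulated histogram
    with the same binning as `data_hist`. Purely illustrative."""
    chi2 = {}
    observed = np.asarray(data_hist, dtype=float)
    for mass, template in templates.items():
        expected = np.asarray(template, dtype=float)
        # Avoid division by zero in empty bins; those bins contribute (obs)^2.
        denom = np.where(expected > 0, expected, 1.0)
        chi2[mass] = np.sum((observed - expected) ** 2 / denom)
    best = min(chi2, key=chi2.get)
    return best, chi2
```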
That mass — 80360.2 ± 9.9 megaelectron volts — is significantly lighter than the CDF experiment’s measurement. What’s more, the new estimate is within the range of what the Standard Model predicts for the W boson’s mass, bolstering physicists’ confidence in the Standard Model and its descriptions of the major particles and forces of nature.
“With the combination of our really precise result and other experiments that line up with the Standard Model’s predictions, I think that most people would place their bets on the Standard Model,” Long says. “Though I do think people should continue doing this measurement. We are not done.”
“We want to add more data, make our analysis techniques more precise, and basically squeeze the lemon a little harder. There is always some juice left,” Paus adds. “With a better look, then we can say for certain whether we truly understand this one fundamental building block.”
This work was supported, in part, by multiple funding agencies, including the U.S. Department of Energy, and by the SubMIT computing facility, sponsored by the MIT Department of Physics.
MIT graduate engineering and business programs ranked highly by U.S. News for 2026-27
Graduate engineering program is No. 1 in the nation; MIT Sloan is No. 6.
U.S. News and World Report has again placed MIT’s graduate program in engineering at the top of its annual rankings, released today. The Institute has held the No. 1 spot since 1990, when the magazine first ranked such programs.
The MIT Sloan School of Management also placed highly, occupying the No. 6 spot for the best graduate business programs.
Among individual engineering disciplines, MIT placed first in six areas: aerospace/aeronautical/astronautical engineering, chemical engineering, computer engineering (tied with the University of California at Berkeley), electrical/electronic/communications engineering (tied with Stanford University and Berkeley), materials engineering, and mechanical engineering. It placed second in nuclear engineering.
In the rankings of individual MBA specialties, MIT placed first in four areas: business analytics, entrepreneurship (with Stanford), production/operations, and supply chain/logistics. It placed second in executive MBA programs (with the University of Chicago).
U.S. News bases its rankings of graduate schools of engineering and business on two types of data: reputational surveys of deans and other academic officials, and statistical indicators that measure the quality of a school’s faculty, research, and students. The magazine’s less-frequent rankings of graduate programs in the sciences, social sciences, and humanities are based solely on reputational surveys.
In the sciences, ranked by U.S. News for the first time in four years, MIT’s doctoral programs placed first in four areas: biology (with Scripps Research Institute), chemistry (with Berkeley and Caltech), computer science (with Carnegie Mellon University and Stanford), and physics (with Caltech, Princeton University, and Stanford). The Institute placed second in mathematics (with Harvard University, Stanford, and Berkeley).
Helping data centers deliver higher performance with less hardware
Researchers developed a system that intelligently balances workloads to improve the efficiency of flash storage hardware in a data center.
To improve data center efficiency, multiple storage devices are often pooled together over a network so many applications can share them. But even with pooling, significant device capacity remains underutilized due to performance variability across the devices.
MIT researchers have now developed a system that boosts the performance of storage devices by handling three major sources of variability simultaneously. Their approach delivers significant speed improvements over traditional methods that tackle only one source of variability at a time.
The system uses a two-tier architecture, with a central controller that makes big-picture decisions about which tasks each storage device performs, and local controllers for each machine that rapidly reroute data if that device is struggling.
The method, which can adapt in real-time to shifting workloads, does not require specialized hardware. When the researchers tested this system on realistic tasks like AI model training and image compression, it nearly doubled the performance delivered by traditional approaches. By intelligently balancing the workloads of multiple storage devices, the system can increase overall data center efficiency.
“There is a tendency to want to throw more resources at a problem to solve it, but that is not sustainable in many ways. We want to be able to maximize the longevity of these very expensive and carbon-intensive resources,” says Gohar Chaudhry, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this technique. “With our adaptive software solution, you can still squeeze a lot of performance out of your existing devices before you need to throw them away and buy new ones.”
Chaudhry is joined on the paper by Ankit Bhardwaj, an assistant professor at Tufts University; Zhenyuan Ruan PhD ’24; and senior author Adam Belay, an associate professor of EECS and a member of the MIT Computer Science and Artificial Intelligence Laboratory. The research will be presented at the USENIX Symposium on Networked Systems Design and Implementation.
Leveraging untapped performance
Solid-state drives (SSDs) are high-performance digital storage devices that allow applications to read and write data. For instance, an SSD can store vast datasets and rapidly send data to a processor for machine-learning model training.
Pooling multiple SSDs together so many applications can share them improves efficiency, since not every application needs to use the entire capacity of an SSD at a given time. But not all SSDs perform equally, and the slowest device can limit the overall performance of the pool.
These inefficiencies arise from variability in SSD hardware and the tasks they perform.
To utilize this untapped SSD performance, the researchers developed Sandook, a software-based system that tackles three major forms of performance-hampering variability simultaneously. “Sandook” is an Urdu word that means “box,” to signify “storage.”
One type of variability is caused by differences in the age, amount of wear, and capacity of SSDs that may have been purchased at different times from multiple vendors.
The second type of variability is due to the mismatch between read and write operations occurring on the same SSD. To write new data to the device, the SSD must erase some existing data. This process can slow down data reads, or retrievals, happening at the same time.
The third source of variability is garbage collection, a process of gathering and removing outdated data to free up space. This process, which slows SSD operations, is triggered at random intervals that a data center operator cannot control.
“I can’t assume all SSDs will behave identically through my entire deployment cycle. Even if I give them all the same workload, some of them will be stragglers, which hurts the net throughput I can achieve,” Chaudhry explains.
Plan globally, react locally
To handle all three sources of variability, Sandook utilizes a two-tier structure. A global scheduler optimizes the distribution of tasks for the overall pool, while faster schedulers on each SSD react to urgent events and shift operations away from congested devices.
The system overcomes delays from read-write interference by rotating which SSDs an application can use for reads and writes. This reduces the chance reads and writes happen simultaneously on the same machine.
Sandook also profiles the typical performance of each SSD. It uses this information to detect when garbage collection is likely slowing operations down. Once detected, Sandook reduces the workload on that SSD by diverting some tasks until garbage collection is finished.
“If that SSD is doing garbage collection and can’t handle the same workload anymore, I want to give it a smaller workload and slowly ramp things back up. We want to find the sweet spot where it is still doing some work, and tap into that performance,” Chaudhry says.
The SSD profiles also allow Sandook’s global controller to assign workloads in a weighted fashion that considers the characteristics and capacity of each device.
Because the global controller sees the overall picture and the local controllers react on the fly, Sandook can simultaneously manage forms of variability that happen over different time scales. For instance, delays from garbage collection occur suddenly, while latency caused by wear and tear builds up over many months.
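In pseudocode terms, the division of labor might look something like the sketch below. The class names, the 50-sample latency window, and the factor-of-two slowdown threshold are all illustrative assumptions, not Sandook's actual parameters.

```python
import statistics

class LocalController:
    """Per-SSD controller: compares live latency to the device's profiled
    baseline and flags likely garbage collection so work can be diverted."""
    def __init__(self, ssd_id, baseline_ms, slowdown_factor=2.0):
        self.ssd_id = ssd_id
        self.baseline_ms = baseline_ms          # from offline profiling
        self.slowdown_factor = slowdown_factor  # hypothetical threshold
        self.recent = []

    def record_latency(self, ms):
        self.recent = (self.recent + [ms])[-50:]  # keep a short sliding window

    def is_congested(self):
        if not self.recent:
            return False
        return statistics.median(self.recent) > self.slowdown_factor * self.baseline_ms


class GlobalController:
    """Pool-wide controller: assigns work in proportion to each SSD's
    profiled capability, skipping devices the local controllers flag."""
    def __init__(self, local_controllers):
        self.local_controllers = local_controllers

    def assign(self, num_requests):
        healthy = [c for c in self.local_controllers if not c.is_congested()]
        targets = healthy or self.local_controllers   # never stall entirely
        weights = [1.0 / c.baseline_ms for c in targets]
        total = sum(weights)
        # Rounded proportional shares; a real scheduler would reconcile remainders.
        return {c.ssd_id: round(num_requests * w / total)
                for c, w in zip(targets, weights)}
```

The key design choice this sketch tries to capture is the split in time scales: the global controller reshuffles weighted assignments occasionally, while each local controller can react to a garbage-collection stall within a window of a few dozen requests.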
The researchers tested Sandook on a pool of 10 SSDs and evaluated the system on four tasks: running a database, training a machine-learning model, compressing images, and storing user data. Sandook boosted the throughput of each application between 12 and 94 percent when compared to static methods, and improved the overall utilization of SSD capacity by 23 percent.
The system enabled SSDs to achieve 95 percent of their theoretical maximum performance, without the need for specialized hardware or application-specific updates.
“Our dynamic solution can unlock more performance for all the SSDs and really push them to the limit. Every bit of capacity you can save really counts at this scale,” Chaudhry says.
In the future, the researchers want to incorporate new protocols available on the latest SSDs that give operators more control over data placement. They also want to leverage the predictability in AI workloads to increase the efficiency of SSD operations.
“Flash storage is a powerful technology that underpins modern datacenter applications, but sharing this resource across workloads with widely varying performance demands remains an outstanding challenge. This work moves the needle meaningfully forward with an elegant and practical solution ready for deployment, bringing flash storage closer to its full potential in production clouds,” says Josh Fried, a software engineer at Google and incoming assistant professor at the University of Pennsylvania, who was not involved with this work.
This research was funded, in part, by the National Science Foundation, the U.S. Defense Advanced Research Projects Agency, and the Semiconductor Research Corporation.
Toward cheaper, cleaner hydrogen production
Co-founded by Dan Sobek ’88, SM ’92, PhD ’97, 1s1 Energy has developed electrochemical cell materials for hydrogen electrolyzers that it says reduce energy use by 30 percent.
Hydrogen sits at the center of some of the world’s most important industrial processes, but its production still comes with a heavy environmental cost. Today, most hydrogen is produced through high-emissions processes like steam methane reforming and coal gasification.
But hydrogen can also be made by splitting water molecules using renewable electricity, eliminating fossil fuel emissions and other toxic byproducts. Such “green hydrogen” is made by running an electric current through water in an electrolyzer.
Green hydrogen won’t scale through decarbonization alone. It also has to be cost-competitive with the traditional methods of production.
1s1 Energy thinks it has the technology to finally make green hydrogen go mainstream. The company says its boron-based membrane material unlocks previously unachievable performance and durability in electrolyzers.
In tests with partners, 1s1 says, electrolyzers with its membranes needed just 70 percent of the energy to produce each kilogram of hydrogen, compared to incumbent devices.
“Green hydrogen has been a hard industry to have success in so far,” acknowledges 1s1 co-founder Dan Sobek ’88, SM ’92, PhD ’97. “The difference with us is we’ve done very targeted customer discovery. We have a very strong value proposition that’s not just about decarbonization. We have a pipeline of potential customers that see around a 60 percent reduction in operating costs with our technology. That’s a nice point of entry.”
Although 1s1 is focused on hydrogen production now, its technology could also be used in fuel cells and solid-state batteries, and to extract critical metals from mining waste. The company is beginning trials in some of those applications, and it is working with a large materials company to scale up production of its membranes for hydrogen production.
“We’re at an inflection point for the company,” Sobek says. “The plan is, by 2030, to have a solid business in several segments: electrolyzers, mineral extraction, and in collaborations with several large companies. But right now, we have to be judicious and focused.”
Improving electrolyzers
Sobek was born and raised in Argentina, but he also grew up at MIT over the course of three degrees and more than a decade. He first studied aeronautics and astronautics at MIT, then jumped to mechanical engineering as a graduate student, then moved to the Department of Electrical Engineering and Computer Science, where he worked under PhD advisors and MIT professors Martha Gray and Stephen Senturia. His thesis focused on a technique for quickly measuring optical properties of large numbers of biological cells.
“A lot of my learnings around microfabrication and materials chemistry ended up being really relevant for 1s1,” Sobek says. “A class that was very important to me was taught by Professor Amar Bose. I was a teaching assistant for him for a couple of semesters, and that had an incredible influence on my thinking.”
Following graduation, Sobek worked in microelectronics and microfluidics before founding his own company, Zymera, in 2004. The company developed deep-tissue imaging technology for detecting cancer and other serious diseases.
Around 2013, Sobek started talking to his Zymera co-founder, Sukanta Bhattacharyya, about making electrolysis more efficient, focusing on “proton exchange membrane” electrolyzers. Such electrolyzers employ a large amount of electricity to split water into hydrogen and oxygen. At their center is a membrane that can lose efficiency through electrical resistance.
On top of the efficiency challenge, electricity is often more expensive than fossil fuels in many parts of the world. Traditional hydrogen production also has the benefit of existing infrastructure, making it that much more difficult for green hydrogen production to scale.
Sobek and Bhattacharyya knew the most important part of such electrolyzers is their proton-conducting membrane, which shuttles hydrogen ions from the anode to the cathode in the electrolyzer’s electrochemical cell.
“I asked Sukanta how we could improve the efficiency and durability of that element,” Sobek recalls. “He gave me a one-word answer: boron.”
Boron can be given a negative charge, which makes hydrogen ions, or protons, bond to it more quickly. The hydrogen ions can then be filtered through the membrane and released as they move through the cell. Boron-based materials are also more stable and resistant to corrosion, further improving the long-term performance of electrolyzers.
The company was officially founded in late 2019. After years of development, today 1s1 attaches a chemically tailored version of boron onto polymer materials to create its membranes for exchanging protons.
“These are first-of-a-kind membranes with stable and durable, super-acid proton exchange groups that do not poison catalysts,” Sobek says.
Tiny membranes with big impact
In 2021, the U.S. Department of Energy set a goal for proton exchange membrane electrolysis to achieve 77 percent electrical efficiency by 2031. Sobek says 1s1 is already reaching that milestone in tests.
“It’s not just the technology, but the way we’re applying it,” Sobek says. “We’re making hydrogen viable for use in the production of different industrial chemicals.”
1s1 is currently conducting pilots with partners, including an electrical utility owned by a large steel company in Brazil. The company is also actively exploring other applications for its technology. Last year, 1s1 announced a project to produce green ammonia with the company Nitrofix through joint funding from the U.S. Department of Energy and the Israeli Ministry of Energy and Infrastructure. It’s also working with a large mine in Brazil to extract a material called niobium, which is useful for high-strength steel as well as fast-charging batteries. A similar process could even be used to extract gold.
“We can do that without using harsh chemicals, because the standard processes used to extract niobium and gold use extremely strong acids at high temperatures or extremely toxic chemicals,” Sobek says. “It’s gratifying for me because my home country of Argentina has had a lot of problems with the use of toxic chemicals to extract gold. We’re trying to enable low-cost, responsible mining.”
As 1s1 scales its membrane technology, Sobek says the goal is to deploy wherever the technology can improve processes.
“We have a large number of potential customers because this technology is really foundational,” Sobek says. “Creating high-impact technologies is always fun.”
Lincoln Laboratory laser communications terminal launches on historic Artemis II moon mission
High-definition video and data sent from the lunar vicinity to Earth will demonstrate the first use of laser communications on a crewed mission.
In 1969, Apollo 11 astronaut Neil Armstrong stepped onto the moon’s surface — a momentous engineering and science feat marked by his iconic words: "That’s one small step for man, one giant leap for mankind." Now, NASA is making history again.
With the successful launch of NASA’s Artemis II mission yesterday, four astronauts are set to become the first humans to travel to the moon in more than 50 years. In 2022, the uncrewed Artemis I mission demonstrated that NASA’s new Orion spacecraft could travel farther into space than ever before and return safely to Earth. Building on that success, the 10-day Artemis II mission will pave the way for future Artemis missions, which aim to land astronauts on the moon to prepare for a lasting lunar presence, and eventually human missions to Mars.
As it orbits the moon, the Orion spacecraft will carry an optical (laser) communications system developed at MIT Lincoln Laboratory in collaboration with NASA Goddard Space Flight Center. Called the Orion Artemis II Optical Communications System (O2O), the system is capable of higher-bandwidth data transmissions from space compared to traditional radio-frequency (RF) systems. During the Artemis II mission, O2O will use laser beams to send high-resolution video and images of the lunar surface down to Earth.
"Space-based communications has always been a big challenge," says lead systems engineer Farzana Khatri, a senior staff member in the laboratory’s Optical and Quantum Communications Group. "RF communications have served their purpose well. However, the RF spectrum is highly congested now, and RF does not scale well to longer distances across space. Laser communication [lasercom] is a solution that could solve this problem, and the laboratory is an expert in the field, which was really pioneered here."
Artemis II is historic not only for renewing human exploration beyond Earth, but also for being the first crewed lunar flight to demonstrate lasercom technologies, which are poised to revolutionize how spacecraft communicate. Lincoln Laboratory has been developing such technologies for more than two decades, and NASA has been infusing them into its missions to meet the growing demands of long-distance and data-intensive space exploration.
"The Orion spacecraft collects a huge amount of data during the first day of a mission, and typically these data sit on the spacecraft until it splashes down and can take months to be offloaded," Khatri says. "With an optical link running at the highest rate, we should be able to get all the data down to Earth within a few hours for immediate analysis. Furthermore, astronauts will be able to communicate in real-time over the optical link to stay in touch with Earth during their journey, inspiring the public and the next generation of deep-space explorers, much like the Apollo 11 astronauts who first landed on the moon 57 years ago."
At the heart of O2O is the laboratory-developed Modular, Agile, Scalable Optical Terminal (MAScOT). About the size of a house cat, MAScOT features a 4-inch telescope mounted on a two-axis pivoted support (gimbal) with fixed backend optics. The gimbal precisely points the telescope and tracks the laser beam through which communications signals are emitted and received in the direction of the desired data recipient or sender. Underneath the gimbal, in a separate assembly, are the backend optics, which contain light-focusing lenses, tracking sensors, fast-steering mirrors, and other components to finely point the laser beam.
MAScOT made its debut in space as part of the laboratory’s Integrated Laser Communications Relay Demonstration (LCRD) LEO User Modem and Amplifier Terminal (ILLUMA-T), which launched to the International Space Station in November 2023. Over the following six months, the laboratory team performed experiments to test and characterize the system's basic functionality, performance, and utility for human crews and user applications. Initially, the team checked whether the ILLUMA-T-to-LCRD optical link was operating at the intended data rates in both directions: 622 Mbps down and 51 Mbps up. In fact, even higher data rates were achieved: 1.2 Gbps down and 155 Mbps up. MAScOT’s lasercom terminal architecture, which was recognized with a 2025 R&D 100 Award, is now being used for Artemis II and will support future space missions.
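A back-of-the-envelope calculation shows why those data rates matter for getting mission data home within hours rather than months. The data volume below is a hypothetical placeholder; the actual amount of data Orion collects is not specified here.

```python
# Back-of-envelope check of the "within a few hours" claim, with a
# hypothetical onboard data volume.
data_volume_gb = 1000   # hypothetical: 1 TB of mission data
downlink_gbps = 1.2     # downlink rate demonstrated by ILLUMA-T

seconds = data_volume_gb * 8 / downlink_gbps  # gigabits / (gigabits per second)
print(f"{seconds / 3600:.1f} hours")          # ≈ 1.9 hours at the full rate
```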
"Our success with ILLUMA-T laid the foundation for streaming HD [high-definition] video to and from the moon," says co-principal investigator Jade Wang, an assistant leader of the Optical and Quantum Communications Group. "You can imagine the Artemis astronauts using videoconferencing to connect with physicians, coordinate mission activities, and livestream their lunar trips."
A dedicated operations team from Lincoln Laboratory is following the 10-day Artemis II mission from ground stations in Houston, Texas, and White Sands, New Mexico, and even as far as an experimental ground station in Australia, which allows for a better view of the spacecraft from the Southern Hemisphere. Leading up to the launch, the operations team had been making monthly trips to the Houston and White Sands ground stations to perform maintenance and simulations of various stages of the Artemis mission — from prelaunch to launch to the journey to the moon and back to the splashdown at the end of the mission.
"Doing these monthly simulations is important so we all stay fresh and engaged, especially when there is a launch delay," says Khatri, who adds that team members have had the opportunity to meet and speak with the four astronauts several times during these trips.
Lessons learned throughout the Artemis II mission will pave the way for humans to return to the lunar surface and beyond, eventually to Mars. Through the Artemis program, NASA will travel farther into space and explore more of the moon while creating an enduring presence in deep space and a legacy for future generations.
O2O is funded by the Space Communication and Navigation (SCaN) program at NASA Headquarters in Washington. O2O was developed by a team of engineers from NASA’s Goddard Space Flight Center and Lincoln Laboratory. This partnership has led to multiple lasercom missions, such as the 2013 Lunar Laser Communication Demonstration (LLCD), the 2021 LCRD, the 2022 TeraByte Infrared Delivery (TBIRD), and the 2023 ILLUMA-T.
MIT researchers measure traffic emissions, to the block, in real-time
A new study pieces together existing data sources in order to develop a detailed, dynamic picture of auto emissions.
In a study focused on New York City, MIT researchers have shown that existing sensors and mobile data can be used to generate a near real-time, high-resolution picture of auto emissions, which could be used to develop local transportation and decarbonization policies.
The new method produces much more detailed data than some other common approaches, which use intermittent samples of vehicle emissions. The researchers say it is also more practical and scales up better than some studies that have aimed for very granular emissions data from a small number of automobiles at once. The work helps bridge the gap between less-detailed citywide emissions inventories and highly detailed analyses based on individual vehicles.
“Our model, by combining real-time traffic cameras with multiple data sources, allows extrapolating very detailed emission maps, down to a single road and hour of the day,” says Paolo Santi, a principal research scientist in the MIT Senseable City Lab and co-author of a new paper detailing the project’s results. “Such detailed information can prove very helpful to support decision-making and understand effects of traffic and mobility interventions.”
Carlo Ratti, director of the MIT Senseable City Lab, notes that the research “is part of our lab’s ongoing quest into hyperlocal measurements of air quality and other environmental factors. By integrating multiple streams of data, we can reach a level of precision that was unthinkable just a few years ago — giving policymakers powerful new tools to understand and protect human health.”
The new method also protects privacy, since it uses computer vision techniques to recognize types of vehicles, but without compiling license plate numbers. The study leverages technologies, including those already installed at intersections, to yield richer data about vehicle movement and pollution.
“The very basic idea is just to estimate traffic emissions using existing data sources in a cost-effective way,” says Songhua Hu, a former postdoc in the Senseable City Lab, and now an assistant professor at City University of Hong Kong.
The paper, “Ubiquitous Data-driven Framework for Traffic Emission Estimation and Policy Evaluation,” is published in Nature Sustainability.
The authors are Hu; Santi; Tom Benson, a researcher in the Senseable City Lab; Xuesong Zhou, a professor of transportation engineering at Arizona State University; An Wang, an assistant professor at Hong Kong Polytechnic University; Ashutosh Kumar, a visiting doctoral student at the Senseable City Lab; and Ratti. The MIT Senseable City Lab is part of MIT’s Department of Urban Studies and Planning.
Manhattan measurements
To conduct the study, the researchers used images from 331 cameras already in use in Manhattan intersections, along with anonymized location records from over 1.75 million mobile phones. Applying vehicle-recognition programs and defining 12 broad categories of automobiles, the scholars found they could correctly place 93 percent of vehicles in the right category. The imaging also yielded important information about the specific ways traffic signals affect traffic flow. That matters because traffic signals are a major reason for stop-and-go driving patterns, which strongly affect urban emissions but are often omitted in conventional inventories.
The mobile phone data then provided rich information about the overall patterns of traffic and movement of individual vehicles throughout the city. The scholars combined the camera and phone data with known information about emissions rates to arrive at their own emissions estimates for New York City.
“We just need to input all emission-related information based on existing urban data sources, and we can estimate the traffic emissions,” Hu says.
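In its simplest bottom-up form, such an estimate multiplies vehicle counts by distance traveled and by per-category emission factors, summed over vehicle types; the study's actual model is richer, folding in speeds and the stop-and-go effects of traffic signals. The emission factors and counts in the sketch below are illustrative placeholders, not values from the paper.

```python
# Minimal bottom-up estimate for one road segment over one hour.
# Emission factors are hypothetical placeholders (grams of CO2 per
# vehicle-kilometer); real factors also depend on speed and idling.
EMISSION_FACTOR_G_PER_KM = {
    "passenger_car": 192,
    "bus": 822,
    "delivery_truck": 515,
}

def segment_emissions_kg(vehicle_counts, segment_km):
    """vehicle_counts: {category: vehicles observed this hour} from camera data."""
    grams = sum(
        count * segment_km * EMISSION_FACTOR_G_PER_KM[category]
        for category, count in vehicle_counts.items()
    )
    return grams / 1000.0

# Example: counts inferred from one intersection camera over an hour
print(segment_emissions_kg(
    {"passenger_car": 900, "bus": 25, "delivery_truck": 60}, 0.4))  # ≈ 90 kg CO2
```

A scenario like the bus-shift analysis described below can then be approximated by moving a share of the car counts into the bus category and recomputing.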
Moreover, the researchers evaluated the changes in emissions that might occur in different scenarios when traffic patterns, or vehicle types, also change.
For one, they modeled what would happen to emissions if a certain percentage of travel demand shifted from private vehicles to buses. In another scenario, they looked at what would happen if morning and evening rush hour times were spread out a bit longer, leaving fewer vehicles on the road at once. They also modeled the effects of replacing fine-grained emissions inputs with citywide averages — finding that the rougher estimates could deviate from the fine-grained results by anywhere from −49 percent to 25 percent. That underscores how seemingly small simplifications can introduce large errors into emission estimates.
Major emissions drop
On one level, this work involved altering inputs into the model and seeing what emerged. But one scenario the researchers studied is based on a real-world change: In January 2025, New York City implemented congestion pricing south of 60th Street in Manhattan.
To study that, the researchers looked at what happened to vehicle traffic at intervals of two, four, six, and eight weeks after the program began. Overall, congestion pricing lowered traffic volume by about 10 percent — and emissions fell even more, by 16 to 22 percent.
This finding aligns with a previous study by researchers at Cornell University, which reported a 22 percent reduction in particulate matter (PM2.5) levels within the pricing zone. The MIT team also found that these reductions were not evenly distributed across the network, with larger declines on some major streets and more mixed effects outside the pricing zone.
“We see these kinds of huge changes after the congestion pricing began,” Hu says. “I think that’s a demonstration that our model can be very helpful if a government really wants to know if a new policy converts into real-world impact.”
There are additional forms of data that could be fed into the researchers’ new method. For instance, in related work in Amsterdam, the team leveraged dashboard cams from vehicles to yield rich information about vehicle movement.
“With our model we can make any camera used in cities, from the hundreds of traffic cameras to the thousands of dash cams, a powerful device to estimate traffic emissions in real-time,” says Fábio Duarte, the associate director of research and design at the MIT Senseable City Lab, who has worked on multiple related studies.
The research was supported by the city of Amsterdam, the AMS Institute, and Abu Dhabi’s Department of Municipalities and Transport.
It was also supported by the MIT Senseable City Consortium, which consists of Atlas University, the city of Laval, the city of Rio de Janeiro, Consiglio per la Ricerca in Agricoltura e l’Analisi dell’Economia Agraria, the Dubai Future Foundation, FAE Technology, KAIST Center for Advanced Urban Systems, Sondotecnica, Toyota, and Volkswagen Group America.
Artificial intelligence is increasingly being used to help optimize decision-making in high-stakes settings. For instance, an autonomous system can identify a power distribution strategy that minimizes costs while keeping voltages stable.
But while these AI-driven outputs may be technically optimal, are they fair? What if a low-cost power distribution strategy leaves disadvantaged neighborhoods more vulnerable to outages than higher-income areas?
To help stakeholders quickly pinpoint potential ethical dilemmas before deployment, MIT researchers developed an automated evaluation method that balances the interplay between measurable outcomes, like cost or reliability, and qualitative or subjective values, such as fairness.
The system separates objective evaluations from user-defined human values, using a large language model (LLM) as a proxy for humans to capture and incorporate stakeholder preferences.
The adaptive framework selects the best scenarios for further evaluation, streamlining a process that typically requires costly and time-consuming manual effort. These test cases can show situations where autonomous systems align well with human values, as well as scenarios that unexpectedly fall short of ethical criteria.
“We can insert a lot of rules and guardrails into AI systems, but those safeguards can only prevent the things we can imagine happening. It is not enough to say, ‘Let’s just use AI because it has been trained on this information.’ We wanted to develop a more systematic way to discover the unknown unknowns and have a way to predict them before anything bad happens,” says senior author Chuchu Fan, an associate professor in the MIT Department of Aeronautics and Astronautics (AeroAstro) and a principal investigator in the MIT Laboratory for Information and Decision Systems (LIDS).
Fan is joined on the paper by lead author Anjali Parashar, a mechanical engineering graduate student; Yingke Li, an AeroAstro postdoc; and others at MIT and Saab. The research will be presented at the International Conference on Learning Representations.
Evaluating ethics
In a large system like a power grid, evaluating the ethical alignment of an AI model’s recommendations in a way that considers all objectives is especially difficult.
Most testing frameworks rely on pre-collected data, but labeled data on subjective ethical criteria are often hard to come by. In addition, because ethical values and AI systems are both constantly evolving, static evaluation methods based on written codes or regulatory documents require frequent updates.
Fan and her team approached this problem from a different perspective. Drawing on their prior work evaluating robotic systems, they developed an experimental design framework to identify the most informative scenarios, which human stakeholders would then evaluate more closely.
Their two-part system, called Scalable Experimental Design for System-level Ethical Testing (SEED-SET), incorporates quantitative metrics and ethical criteria. It can identify scenarios that effectively meet measurable requirements and align well with human values, and vice versa.
“We don’t want to spend all our resources on random evaluations. So, it is very important to guide the framework toward the test cases we care the most about,” Li says.
Importantly, SEED-SET does not need pre-existing evaluation data, and it adapts to multiple objectives.
For instance, a power grid may have several user groups, including a large rural community and a data center. While both groups may want low-cost and reliable power, each group’s priority from an ethical perspective may vary widely.
These ethical criteria may not be well-specified, so they can’t be measured analytically.
The power grid operator wants to find the most cost-effective strategy that best meets the subjective ethical preferences of all stakeholders.
SEED-SET tackles this challenge by splitting the problem into two, following a hierarchical structure. An objective model considers how the system performs on tangible metrics like cost. Then a subjective model that considers stakeholder judgements, like perceived fairness, builds on the objective evaluation.
“The objective part of our approach is tied to the AI system, while the subjective part is tied to the users who are evaluating it. By decomposing the preferences in a hierarchical fashion, we can generate the desired scenarios with fewer evaluations,” Parashar says.
Encoding subjectivity
To perform the subjective assessment, the system uses an LLM as a proxy for human evaluators. The researchers encode the preferences of each user group into a natural language prompt for the model.
The LLM uses these instructions to compare two scenarios, selecting the preferred design based on the ethical criteria.
“After seeing hundreds or thousands of scenarios, a human evaluator can suffer from fatigue and become inconsistent in their evaluations, so we use an LLM-based strategy instead,” Parashar explains.
SEED-SET uses the selected scenario to simulate the overall system (in this case, a power distribution strategy). These simulation results guide its search for the next best candidate scenario to test.
In the end, SEED-SET intelligently selects the most representative scenarios, both those that satisfy the objective metrics and ethical criteria and those that fall short of them. In this way, users can analyze the performance of the AI system and adjust its strategy.
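One way to picture the loop is the sketch below: an objective simulator scores each candidate scenario on measurable metrics, and a language model, prompted with the stakeholders' stated values, acts as the subjective judge in pairwise comparisons. The function names, the prompt, and the tournament-style selection are illustrative assumptions, not the paper's SEED-SET implementation.

```python
def evaluate_scenarios(candidates, simulate, llm_choose, stakeholder_values):
    """Conceptual two-tier evaluation in the spirit described above.
    simulate(scenario) -> dict of measurable metrics (e.g., cost, reliability);
    llm_choose(prompt) -> "A" or "B", a placeholder for an LLM acting as a
    proxy for human stakeholders. All names are illustrative."""
    # Tier 1: objective model. Score every candidate on measurable metrics.
    scored = [(scenario, simulate(scenario)) for scenario in candidates]

    # Tier 2: subjective model. Pairwise LLM comparisons under stated values.
    best_scenario, best_metrics = scored[0]
    for scenario, metrics in scored[1:]:
        prompt = (
            f"Stakeholder values: {stakeholder_values}\n"
            f"Scenario A metrics: {best_metrics}\n"
            f"Scenario B metrics: {metrics}\n"
            "Which scenario better respects these values? Answer A or B."
        )
        if llm_choose(prompt).strip().upper().startswith("B"):
            best_scenario, best_metrics = scenario, metrics
    return best_scenario, best_metrics  # candidate worth closer human review
```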
For instance, SEED-SET can pinpoint cases of power distribution that prioritize higher-income areas during periods of peak demand, leaving underprivileged neighborhoods more prone to outages.
To test SEED-SET, the researchers evaluated realistic autonomous systems, like an AI-driven power grid and an urban traffic routing system. They measured how well the generated scenarios aligned with ethical criteria.
The system generated more than twice as many optimal test cases as the baseline strategies in the same amount of time, while uncovering many scenarios other approaches overlooked.
“As we shifted the user preferences, the set of scenarios SEED-SET generated changed drastically. This tells us the evaluation strategy responds well to the preferences of the user,” Parashar says.
To measure how useful SEED-SET would be in practice, the researchers will need to conduct a user study to see if the scenarios it generates help with real decision-making.
In addition to running such a study, the researchers plan to explore the use of more efficient models that can scale up to larger problems with more criteria, such as evaluating LLM decision-making.
This research was funded, in part, by the U.S. Defense Advanced Research Projects Agency.
Preview tool helps makers visualize 3D-printed objects
By quickly generating aesthetically accurate previews of fabricated objects, the VisiPrint system could make prototyping faster and less wasteful.
Designers, makers, and others often use 3D printing to rapidly prototype a range of functional objects, from movie props to medical devices. Accurate print previews are essential so users know a fabricated object will perform as expected.
But previews generated by most 3D-printing software focus on function rather than aesthetics. A printed object may end up with a different color, texture, or shading than the user expected, resulting in multiple reprints that waste time, effort, and material.
To help users envision how a fabricated object will look, researchers from MIT and elsewhere developed an easy-to-use preview tool that puts appearance first.
Users upload a screenshot of the object from their 3D-printing software, along with a single image of the print material. From these inputs, the system automatically generates a rendering of how the fabricated object is likely to look.
The artificial intelligence-powered system, called VisiPrint, is designed to work with a range of 3D-printing software and can handle any material example. It considers not only the color of the material, but also gloss, translucency, and how nuances of the fabrication process affect the object’s appearance.
Such aesthetics-focused previews could be especially useful in areas like dentistry, by helping clinicians ensure temporary crowns and bridges match the appearance of a patient’s teeth, or in architecture, to aid designers in assessing the visual impact of models.
“3D printing can be a very wasteful process. Some studies estimate that as much as a third of the material used goes straight to the landfill, often from prototypes the user ends up discarding. To make 3D printing more sustainable, we want to reduce the number of tries it takes to get the prototype you want. The user shouldn’t have to try out every printing material they have before they settle on a design,” says Maxine Perroni-Scharf, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on VisiPrint.
She is joined on the paper by Faraz Faruqi, a fellow EECS graduate student; Raul Hernandez, an MIT undergraduate; SooYeon Ahn, a graduate student at the Gwangju Institute of Science and Technology; Szymon Rusinkiewicz, a professor of computer science at Princeton University; William Freeman, the Thomas and Gerd Perkins Professor of EECS at MIT and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Stefanie Mueller, an associate professor of EECS and Mechanical Engineering at MIT, and a member of CSAIL. The research will be presented at the ACM CHI Conference on Human Factors in Computing Systems.
Accurate aesthetics
The researchers focused on fused deposition modeling (FDM), the most common type of 3D printing. In FDM, print material filament is melted and then squirted through a nozzle to fabricate an object one layer at a time.
Generating accurate aesthetic previews is challenging because the melting and extrusion process can change the appearance of a material, as can the height of each deposited layer and the path the nozzle follows during fabrication.
VisiPrint uses two AI models that work together to overcome those challenges.
The VisiPrint preview is based on two inputs: a screenshot of the digital design from a user’s 3D-printing software (called “slicer” software), and an image of the print material, which can be taken from an online source or captured from a printed sample.
From these inputs, a computer vision model extracts features from the material sample that are important for the object’s appearance.
It feeds those features to a generative AI model that computes the geometry and structure of the object, while incorporating the so-called “slicing” pattern the nozzle will follow as it extrudes each layer.
The key to the researchers’ approach is a special conditioning method. This involves carefully adjusting the inner workings of the model to guide it, so it follows the slicing pattern and obeys the constraints of the 3D-printing process.
Their conditioning method utilizes a depth map that preserves the shape and shading of the object, along with a map of the edges that reflects the internal contours and structural boundaries.
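Schematically, the pipeline chains those pieces together as in the sketch below. Every function on the `models` object is a hypothetical placeholder standing in for the paper's vision and generative models, which are not detailed here.

```python
def render_preview(slicer_screenshot, material_photo, models):
    """Schematic outline of the pipeline described above; every method on
    `models` is a hypothetical placeholder, not VisiPrint's actual code."""
    # 1. Vision model: pull appearance cues (color, gloss, translucency)
    #    from a single image of the print material.
    material = models.extract_material_features(material_photo)

    # 2. Conditioning maps: a depth map preserves shape and shading, while
    #    an edge map preserves internal contours and structural boundaries.
    depth = models.depth_map(slicer_screenshot)
    edges = models.edge_map(slicer_screenshot)

    # 3. Generative model: produce the preview, constrained to follow the
    #    slicing pattern and the geometry encoded in the two maps.
    return models.generate_preview(
        material=material,
        depth_condition=depth,
        edge_condition=edges,
    )
```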
“If you don’t have the right balance of these two things, you could end up with bad geometry or an incorrect slicing pattern. We had to be careful to combine them in the right way,” Perroni-Scharf says.
A user-focused system
The team also produced an easy-to-use interface where one can upload the required images and evaluate the preview.
The VisiPrint interface enables more advanced makers to adjust multiple settings, such as the influence of certain colors on the final appearance.
In the end, the aesthetic preview is intended to complement the functional preview generated by slicer software, since VisiPrint does not estimate printability, mechanical feasibility, or likelihood of failure.
To evaluate VisiPrint, the researchers conducted a user study that asked participants to compare the system to other approaches. Nearly all participants said it provided better overall appearance as well as more textural similarity with printed objects.
In addition, the VisiPrint preview process took about a minute on average, which was more than twice as fast as any competing method.
“VisiPrint really shined when compared to other AI interfaces. If you give a more general AI model the same screenshots, it might randomly change the shape or use the wrong slicing pattern because it had no direct conditioning,” she says.
In the future, the researchers want to address artifacts that can occur when model previews have extremely fine details. They also want to add features that allow users to optimize parts of the printing process beyond color of the material.
“It is important to think about the way that we fabricate objects. We need to continue striving to develop methods that reduce waste. To that end, this marriage of AI with the physical making process is an exciting area of future work,” Perroni-Scharf says.
“‘What you see is what you get’ has been the main thing that made desktop publishing ‘happen’ in the 1980s, as it allowed users to get what they wanted at first try. It is time to get WYSIWYG for 3D printing as well. VisiPrint is a great step in this direction,” says Patrick Baudisch, a professor of computer science at the Hasso Plattner Institute, who was not involved with this work.
This research was funded, in part, by an MIT Morningside Academy for Design Fellowship and an MIT MathWorks Fellowship.
Two physicists and a curious host walk into a studio…
On GBH’s new show The Curiosity Desk, MIT LIGO researchers revel in the beauties of fundamental discovery science and MIT astronomers talk planetary defense.
This March on The Curiosity Desk, GBH’s daily science show with host Edgar B. Herwick III, MIT scientists dropped by to address the questions: “How close are we to observing the dark universe?” (Thursday, March 12 episode) and “Is Earth prepared for asteroids?” (Thursday, March 26 episode).
Up first, Prof. Nergis Mavalvala, dean of the MIT School of Science, and Prof. Salvatore Vitale joined the host live in studio to talk about the science behind the Laser Interferometer Gravitational-wave Observatory (LIGO) and how LIGO has provided the ability to observe the universe in ways that have never been done before.
Beyond the chance to learn something new, Mavalvala explained, experimental work brings an added thrill: “pushing the technology, the precision of the instrument, requires you to be very inventive. There’s almost nothing in these experiments that you can go buy off a shelf. Everything you’re designing, everything is from scratch. You’re meeting very stringent requirements.”
Herwick likened how they might tweak or tinker with the experiment to souping up a car engine, and the LIGO scientists nodded – adding that in the most complex experiments, each bite-sized part on its own works well, and it’s the interfaces between them that scientists must get right.
While there, the two longtime colleagues also took a detour to explain how, in physics, experimentalists benefit from the work of theorists and vice versa. Mavalvala, whose work focuses on building the world’s most precise instruments to study physical phenomena, described the synergy between ideas that come from theory (work that Vitale does) and how you measure. (No, they assure Herwick, they don’t get into a lot of fights.)
In fact, it’s fantastic to have people from both worlds at MIT, said Vitale. Mavalvala agreed. “One of the things that’s really important about theory in science is that ultimately, in physics especially, it’s a bunch of math. And the important thing that you have to ask is, ‘does nature really behave that way?’ And how do you answer that question? You have to go out and measure. You have to go observe nature,” said Mavalvala.
As scientists fine-tune the gravitational wave detectors, they will inform what data are collected, what astrophysical objects they might find or hope to find – and the search for certain fainter, farther away, or more exotic objects can inform what enhancements they prioritize.
But what if I’m not interested in any of that, Herwick asked. Why should I care?
“To me, it falls in the category of for the betterment of humankind. You never know what is going to be useful. A lot of fundamental research was very far at the beginning from what turned out to be fundamental applications,” said Vitale, adding, “What they do on the instrument side has already now very important applications.”
Mavalvala was unequivocal, underscoring how pursuing curiosity is put to good use:
“When you’re making instruments that achieve that kind of precision, you’re inventing new technologies. [With LIGO] We’ve invented vibration isolation technologies to keep our mirrors really still. We’ve invented lasers that are quieter than any that were ever made before. We’ve invented photonic techniques that are allowing us to make applications even to far off things like quantum computing.
“So, this is one of the beauties of fundamental discovery science. A, you’ll discover something. But B you’ll be doing two things: you’ll be inventing the technologies of the future, and you’ll be training the generations of scientists who may go off to do completely different things, but this is what inspires them.”
Watch the full conversation below and on YouTube:
Planetary defense
Turning to objects beyond Earth – specifically, asteroids – Associate Professor Julien de Wit, along with research scientists Artem Burdanov and Saverio Cambioni, joined Herwick at the Curiosity Desk later in the month. They talked about their ongoing research to identify smaller asteroids (about the size of a school bus) using the James Webb Space Telescope and why planetary defense goes beyond thinking about the massive asteroids featured in movies like Armageddon. Notably, a lot of technology on Earth depends on satellites, and asteroids pose the biggest threat to those satellites.
“Dinosaurs didn’t need to care about an asteroid hitting the moon. Humanity a century ago didn’t care. Now, if [an asteroid] hits the moon, a lot of debris will be expelled and all those particles – big and small – they will affect the fleet of satellites around Earth. That’s a big potential problem, so we need to take that into account in our future,” said Burdanov.
There’s also a potential upside to being better able to detect and potentially “capture” asteroids, explained de Wit, all of it benefitted by new instruments. “It’s really an asteroid revolution going on… Our situational awareness of what’s out there is really about to change dramatically.”
He explains that one dream is to mine asteroids themselves for material to build or power next generation technologies or stations in space. “The way to reliably move into space is to use resources from space. We can’t just move stuff to build a full city. We use stuff from space.”
Echoing the sentiments expressed earlier in the month by MIT’s dean of science, the trio of asteroid explorers also described how the pursuits of planetary scientists can lead to unexpected rewards along the way. “We are swimming in an era that is data rich, and so what we do in our group and at MIT is mine that data to reveal the universe like never before,” says de Wit. “Revealing new populations of asteroids, new populations of planets, and making sense of our universe like we have never done.”
Watch the full conversation below and on the GBH YouTube channel:
Tune in to The Curiosity Desk on select Thursdays to hear from MIT researchers as they visit Herwick and the production team.
Tomás Palacios named director of the Institute for Soldier Nanotechnologies
The electrical engineering and nanotechnology leader will guide the U.S. Army-sponsored research center as it advances next-generation materials, electronics, and photonics for national security.
Tomás Palacios, the Clarence J. LeBel Professor of Electrical Engineering at MIT, has been appointed director of the MIT Institute for Soldier Nanotechnologies (ISN). Palacios assumed the role on Feb. 4, and will continue to serve as the director of the MIT Microsystems Technology Laboratories (MTL).
Founded in 2002, ISN is a U.S. Army-sponsored University Affiliated Research Center focused on advancing fundamental science and engineering to enable next-generation capabilities for protection, survivability, sensing, and system performance. ISN brings together researchers from across MIT to address challenges at the intersection of materials, devices, and systems. In collaboration with industry, MIT Lincoln Laboratory, the U.S. Army, and other U.S. military services, ISN works to transition promising technologies for both commercial and defense applications.
As director, Palacios will oversee ISN’s research portfolio, facilities, and strategic partnerships, working closely with the ISN leadership team, MIT administration, U.S. Army, and other research sponsors to guide the institute’s next phase of research and collaboration.
“Tomás Palacios brings exceptional energy, vision, and leadership to the Institute for Soldier Nanotechnologies,” says Ian A. Waitz, MIT’s vice president for research, who announced the appointment in a recent letter. “As director of Microsystems Technology Laboratories, he has demonstrated a rare ability to build strong research communities and partnerships across academia, industry, and government. I am confident he will guide ISN’s next phase with momentum, scientific excellence, and a deep sense of service to MIT and the nation.”
Palacios brings deep leadership experience within MIT and across national research collaborations. As director of MTL, he leads one of MIT’s flagship interdisciplinary research laboratories supporting work in micro- and nano-scale materials, devices, and systems. He is a member of the MIT.nano Leadership Council and, since 2023, has served as associate director of the multi-university SUPeRior Energy-efficient Materials and dEvices (SUPREME) Center, a Semiconductor Research Corp. JUMP 2.0 program focused on next-generation energy-efficient semiconductor technologies. Palacios is also the co-founder of several technology companies, including Vertical Semiconductor, Finwave Semiconductor, and CDimension, Inc.
“MIT’s motto, ‘mens et manus’ — ‘mind and hand’ — reminds us that fundamental research and real-world impact must go hand-in-hand,” says Palacios. “At ISN, our mission is to help protect and empower those who defend our nation. That responsibility demands urgency, creativity, and deep collaboration. I look forward to building on ISN’s strong partnership with the U.S. Army, industry, and colleagues across MIT to push the frontiers of nanotechnology and translate discovery into meaningful impact at the speed of relevance.”
Palacios is internationally recognized for his work on wide-bandgap semiconductors, nanoelectronics, and advanced electronic materials. An IEEE Fellow, his research spans fundamental device physics through system-level integration, with applications in high-power and high-frequency electronics, sensing, and energy systems. He is widely recognized for his research contributions, as well as for his leadership in education and mentoring.
Palacios succeeds John Joannopoulos, who served as ISN director from 2006 until his death in August 2025. During his nearly two decades of ISN leadership, Joannopoulos strengthened ISN’s interdisciplinary culture, devoting significant effort to fostering collaborations among ISN-funded principal investigators, building partnerships that extend across MIT and beyond to the Army research community. Joannopoulos, an extraordinary researcher and a generous mentor, was also a co-founder of companies such as WiTricity and OmniGuide, helping to translate many of ISN’s foundational scientific discoveries into commercial technologies. Raúl Radovitzky, ISN’s associate director, served as interim director during the search for a new director, providing continuity to ISN’s research programs, facilities, and partnerships.
“It is an honor to serve as director of the Institute for Soldier Nanotechnologies at such an important moment in time,” says Palacios. “ISN has built an extraordinary foundation of interdisciplinary excellence under Professor John Joannopoulos’ leadership and, more recently, Prof. Radovitzky’s. I look forward to working with the ISN community to advance breakthrough research at the intersection of materials, devices, and systems — research that not only strengthens national security, but also translates into technologies that benefit society more broadly.”
Climate change may produce “fast-food” phytoplankton
With warmer ocean temperatures, the composition of marine plankton could shift from protein-rich to carb-heavy, a new study suggests.
We are what we eat. And in the ocean, most life-forms source their food from phytoplankton. These microscopic, plant-like algae are the primary food source for krill, sea snails, some small fish, and jellyfish, which in turn feed larger marine animals that are prey for the ocean’s top predators, including humans.
Now MIT scientists are finding that phytoplankton's composition, and the basic diet of the ocean, will shift significantly with climate change.
In an open-access study appearing today in the journal Nature Climate Change, the team reports that as sea surface temperatures rise over the next century, phytoplankton in polar regions will adapt to be less rich in proteins, heavier in carbohydrates, and lower in nutrients overall.
The conclusions are based on results from the team’s new model, which simulates the composition of phytoplankton in response to changes in ocean temperature, circulation, and sea ice coverage. In a scenario in which humans continue to emit greenhouse gases through the year 2100, the team found that changing ocean conditions, particularly in the polar regions, will shift phytoplankton’s balance of proteins to carbohydrates and lipids by approximately 20 percent. The researchers analyzed observations from the past several decades, and already have found a signature of this change in the real world.
“We’re moving in the poles toward a sort of fast-food ocean,” says lead author and MIT postdoc Shlomit Sharoni. “Based on this prediction, the nutritional composition of the surface ocean will look very different by the end of the century.”
The study’s MIT co-authors are Mick Follows, Stephanie Dutkiewicz, and Oliver Jahn; along with Keisuke Inomura of the University of Rhode Island; Zoe Finkel, Andrew Irwin, and Mohammad Amirian of Dalhousie University in Halifax, Canada; and Erwan Monier of the University of California at Davis.
Nutritional information
Phytoplankton drift through the upper, sun-lit layers of the ocean. Like plants on land, the marine microalgae are photosynthetic. Their growth depends on light from the sun, carbon dioxide from the atmosphere, and nutrients such as nitrogen and iron that well up from the deep ocean.
When studying how phytoplankton will respond to climate change, scientists have primarily focused on how rising ocean temperatures will affect phytoplankton populations. Whether and how the plankton’s composition will change is less well-understood.
“There’s been an awareness that the nutritional value of phytoplankton can shift with climate change,” says Sharoni. “But there has been very little work in directly addressing that question.”
She and her colleagues set out to understand how ocean conditions influence phytoplankton macromolecular composition. Macromolecules are large molecules that are essential for life. The main types of macromolecules include proteins, lipids, carbohydrates, and nucleic acids (the building blocks of DNA and RNA). Every form of life, including phytoplankton, is composed of a balance of macromolecules that helps it to survive in its particular environment.
“Nearly all the material in a living organism is in these broad molecular forms, each having a particular physiological function, depending on the circumstances that the organism finds itself in,” says Follows, a professor in the Department of Earth, Atmospheric and Planetary Sciences.
An unbalanced diet
In their new study, the researchers first looked at how today’s ocean conditions influence phytoplankton’s macromolecular composition. The team used data from lab experiments carried out by their collaborators at Dalhousie. These experiments revealed ways in which phytoplankton’s balance of macromolecules, such as proteins to carbohydrates, shifted in response to changes in water temperature and the availability of light and nutrients.
With these lab-based data, the group developed a quantitative model that simulates how plankton in the lab would readjust its balance of proteins to carbohydrates under different light and nutrient conditions. Sharoni and Inomura then paired this new model with an established model of ocean circulation and dynamics developed previously at MIT. With this modeling combination, they simulated how phytoplankton composition shifts in response to ocean conditions in different parts of the world and under different climate scenarios.
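As a rough illustration of the kind of allocation logic such a model captures, the toy Python sketch below shifts a cell’s protein, carbohydrate, and lipid fractions with light and nutrient availability; the coefficients are invented for demonstration and do not come from the study.

```python
# Toy illustration (not the authors' model): macromolecular fractions that
# shift with light and nutrient availability, normalized to sum to one.
def macromolecule_fractions(light: float, nutrients: float):
    """light and nutrients are scaled to [0, 1]; returns (protein, carb, lipid)."""
    # Dim light favors light-harvesting proteins; scarce nutrients favor
    # carbon-rich storage compounds. Coefficients are placeholders.
    protein = 0.35 + 0.25 * (1.0 - light) + 0.15 * nutrients
    carb = 0.25 + 0.20 * light + 0.10 * (1.0 - nutrients)
    lipid = 0.15 + 0.10 * light
    total = protein + carb + lipid
    return protein / total, carb / total, lipid / total

# Ice-covered polar water (dim, nutrient-rich) vs. warm, ice-free water:
print(macromolecule_fractions(light=0.2, nutrients=0.8))  # protein-rich
print(macromolecule_fractions(light=0.9, nutrients=0.3))  # carb- and lipid-heavy
```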
The team first modeled today’s climate conditions. Consistent with observations, their model predicts that a little more than half of the average phytoplankton cell today is composed of proteins. The rest is a mix of carbohydrates and lipids.
Interestingly, in polar regions, phytoplankton are slightly more protein-rich. At the poles, the cover of sea ice limits the amount of sunlight phytoplankton can absorb. The researchers surmise that phytoplankton may have adapted by making more light-harvesting proteins to help the organisms efficiently absorb the weak sunlight.
However, when they modeled a future climate change scenario, the team found a significant shift in phytoplankton composition. They simulated a scenario in which humans continue to emit greenhouse gases through the year 2100. In this scenario, sea surface temperatures will rise by 3 degrees Celsius, substantially reducing sea ice coverage. Warmer temperatures will also limit the ocean’s circulation, as well as the amount of nutrients that can circulate up from the deep ocean.
Under these conditions, the model predicts that phytoplankton growth in polar regions will increase significantly, consistent with earlier studies. Uniquely, this model predicts that phytoplankton in polar regions will shift from a protein-rich to a carb- and lipid-heavy composition. They found that plankton will not need as much light-harvesting protein, since less sea ice will make sunlight more easily available for the organisms to absorb. Total protein levels in these polar phytoplankton will decline by up to 30 percent, with a corresponding increase in the contribution of carbs and lipids.
It’s unclear what impact a larger population of carb- and lipid-heavy phytoplankton may have on the rest of the marine food web. While some organisms may be stressed by a reduction in protein, others that make lipid stores to survive through the winter might thrive.
The team also simulated phytoplankton in subtropical, lower-latitude regions. In these ocean areas, it’s expected that phytoplankton populations will decline by 50 percent. And the team’s modeling shows that their composition will also shift.
With warmer temperatures, the ocean’s circulation will slow down, limiting the amount of nutrients that can upwell from the deep ocean. In response, subtropical phytoplankton may have to find ways to live at deeper depths, to strike a balance between getting enough sunlight and nutrients. Under these conditions, the organisms will likely shift to a slightly more protein-rich composition, making use of the same photosynthetic proteins that their polar counterparts will require less of.
On balance, given the projected changes in phytoplankton populations with climate change, their average composition around the world will shift to a more carb-heavy, low-nutrient composition.
The researchers went a step further and found that their modeling agrees with the small set of available phytoplankton field samples that other scientists previously collected from Arctic and Antarctic regions. These samples show that phytoplankton compositions have become more carb- and lipid-heavy over the past few decades, as the team’s model predicts under climate warming.
“In these regions, you can already see climate change, because sea ice is already melting,” Sharoni explains. “And our model shows that proteins in polar plankton have been declining, while carbs and lipids are increasing.”
“It turns out that climate change is accelerated in the Arctic, and we have data showing that the composition of phytoplankton has already responded,” Follows adds. “The main message is: The caloric content at the base of the marine food web is already changing. And it’s not a clear story as to how this change will transmit through the food web.”
This work was supported, in part, by the Simons Foundation.
MIT researchers use AI to uncover atomic defects in materials
A new model measures defects that can be leveraged to improve materials’ mechanical strength, heat transfer, and energy-conversion efficiency.
In biology, defects are generally bad. But in materials science, defects can be intentionally tuned to give materials useful new properties. Today, atomic-scale defects are carefully introduced during the manufacturing process of products like steel, semiconductors, and solar cells to help improve strength, control electrical conductivity, optimize performance, and more.
But even as defects have become a powerful tool, accurately measuring different types of defects and their concentrations in finished products has been challenging, especially without cutting open or damaging the final material. Without knowing what defects are in their materials, engineers risk making products that perform poorly or have unintended properties.
Now, MIT researchers have built an AI model capable of classifying and quantifying certain defects using data from a noninvasive neutron-scattering technique. The model, which was trained on 2,000 different semiconductor materials, can detect up to six kinds of point defects in a material simultaneously, something that would be impossible using conventional techniques alone.
“Existing techniques can’t accurately characterize defects in a universal and quantitative way without destroying the material,” says lead author Mouyang Cheng, a PhD candidate in the Department of Materials Science and Engineering. “For conventional techniques without machine learning, detecting six different defects is unthinkable. It’s something you can’t do any other way.”
The researchers say the model is a step toward harnessing defects more precisely in products like semiconductors, microelectronics, solar cells, and battery materials.
“Right now, detecting defects is like the saying about seeing an elephant: Each technique can only see part of it,” says senior author and associate professor of nuclear science and engineering Mingda Li. “Some see the nose, others the trunk or ears. But it is extremely hard to see the full elephant. We need better ways of getting the full picture of defects, because we have to understand them to make materials more useful.”
Joining Cheng and Li on the paper are postdoc Chu-Liang Fu, undergraduate researcher Bowen Yu, master’s student Eunbi Rha, PhD student Abhijatmedhi Chotrattanapituk ’21, and Oak Ridge National Laboratory staff members Douglas L. Abernathy PhD ’93 and Yongqiang Cheng. The paper appears today in the journal Matter.
Detecting defects
Manufacturers have gotten good at tuning defects in their materials, but measuring precise quantities of defects in finished products is still largely a guessing game.
“Engineers have many ways to introduce defects, like through doping, but they still struggle with basic questions like what kind of defect they’ve created and in what concentration,” Fu says. “Sometimes they also have unwanted defects, like oxidation. They don’t always know if they introduced some unwanted defects or impurity during synthesis. It’s a longstanding challenge.”
The result is that there are often multiple defects in each material. Unfortunately, each method for understanding defects has its limits. Techniques like X-ray diffraction and positron annihilation characterize only some types of defects. Raman spectroscopy can discern the type of defect but can’t directly infer the concentration. Another technique, transmission electron microscopy, requires cutting thin slices of samples for scanning.
In a few previous papers, Li and collaborators applied machine learning to experimental spectroscopy data to characterize crystalline materials. For the new paper, they wanted to apply that technique to defects.
For their experiment, the researchers built a computational database of 2,000 semiconductor materials. They made sample pairs of each material, with one doped for defects and one left without defects, then used a neutron-scattering technique that measures the different vibrational frequencies of atoms in solid materials. They trained a machine-learning model on the results.
“That built a foundational model that covers 56 elements in the periodic table,” Cheng says. “The model leverages the multihead attention mechanism, just like what ChatGPT is using. It similarly extracts the difference in the data between materials with and without defects and outputs a prediction of what dopants were used and in what concentrations.”
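The sketch below illustrates that general idea under stated assumptions: a small PyTorch model applies multihead attention to a pair of vibrational spectra, one pristine and one defective, and predicts a concentration for each defect type. The architecture, sizes, and input format are hypothetical, not the authors’ published model.

```python
# Hedged sketch of the general approach; not the authors' architecture.
import torch
import torch.nn as nn

class DefectSpectraSketch(nn.Module):
    def __init__(self, n_bins=512, d_model=64, n_heads=4, n_defect_types=6):
        super().__init__()
        self.embed = nn.Linear(2, d_model)  # (pristine, defective) value per spectral bin
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, n_defect_types)  # one concentration per defect type

    def forward(self, pristine, defective):
        # Pair the two spectra so attention can weigh bin-by-bin differences.
        x = self.embed(torch.stack([pristine, defective], dim=-1))  # (B, n_bins, d_model)
        x, _ = self.attn(x, x, x)
        return self.head(x.mean(dim=1)).relu()  # non-negative concentrations

model = DefectSpectraSketch()
concentrations = model(torch.rand(1, 512), torch.rand(1, 512))  # e.g. 0.002 ~ 0.2 percent
```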
The researchers fine-tuned their model, verified it on experimental data, and showed it could measure defect concentrations in an alloy commonly used in electronics and in a separate superconductor material.
The researchers also doped the materials multiple times to introduce multiple point defects and test the limits of the model, ultimately finding it can make predictions about up to six defects in materials simultaneously, with defect concentrations as low as 0.2 percent.
“We were really surprised it worked that well,” Cheng says. “It’s very challenging to decode the mixed signals from two different types of defects — let alone six.”
A model approach
Typically, manufacturers of things like semiconductors run invasive tests on a small percentage of products as they come off the manufacturing line, a slow process that limits their ability to detect every defect.
“Right now, people largely estimate the quantities of defects in their materials,” Yu says. “It is a painstaking experience to check the estimates by using each individual technique, which only offers local information in a single grain anyway. It creates misunderstandings about what defects people think they have in their material.”
The results were exciting for the researchers, but they note that their technique, which measures vibrational frequencies with neutrons, would be difficult for companies to quickly deploy in their own quality-control processes.
“This method is very powerful, but its availability is limited,” Rha says. “Vibrational spectra is a simple idea, but in certain setups it’s very complicated. There are some simpler experimental setups based on other approaches, like Raman spectroscopy, that could be more quickly adopted.”
Li says companies have already expressed interest in the approach and asked when it will work with Raman spectroscopy, a widely used technique that measures the scattering of light. Li says the researchers’ next step is training a similar model based on Raman spectroscopy data. They also plan to expand their approach to detect features that are larger than point defects, like grains and dislocations.
For now, though, the researchers believe their study demonstrates the inherent advantage of AI techniques for interpreting defect data.
“To the human eye, these defect signals would look essentially the same,” Li says. “But the pattern recognition of AI is good enough to discern different signals and get to the ground truth. Defects are this double-edged sword. There are many good defects, but if there are too many, performance can degrade. This opens up a new paradigm in defect science.”
The work was supported, in part, by the Department of Energy and the National Science Foundation.
Implantable islet cells could control diabetes without insulin injections
The cells can survive in the body for at least three months, producing enough insulin to control blood sugar levels, research shows.
Most diabetes patients must carefully monitor their blood sugar levels and inject insulin multiple times per day, to help keep their blood sugar from getting too high.
As a possible alternative to those injections, MIT researchers are developing an implantable device that contains insulin-producing cells. The device encapsulates the cells, protecting them from immune rejection, and it also carries an on-board oxygen generator to keep the cells healthy.
This device, the researchers hope, could offer a way to achieve long-term control of type 1 diabetes. In a new study, they showed that these encapsulated pancreatic islet cells could survive in the body for at least 90 days. In mice that received the implants, the cells remained functional and produced enough insulin to control the animals’ blood sugar levels.
“Islet cell therapy can be a transformative treatment for patients. However, current methods also require immune suppression, which for some people can be really debilitating,” says Daniel Anderson, a professor in MIT’s Department of Chemical Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science. “Our goal is to find a way to give patients the benefit of cell therapy without the need for immune suppression.”
Anderson is the senior author of the study, which appears today in the journal Device. Former MIT research scientist Siddharth Krishnan, who is now an assistant professor of electrical engineering at Stanford University, and former MIT postdoc Matthew Bochenek are the lead authors of the paper. Robert Langer, the David H. Koch Institute Professor at MIT, is also a co-author.
Insulin on demand
Islet cell transplantation has already been used successfully to treat diabetes in patients. Those islet cells typically come from human cadavers, or more recently, can be generated from stem cells. In either case, patients must take immunosuppressive drugs to prevent their immune system from rejecting the transplanted cells.
Another way to prevent immune rejection is to encapsulate cells in a protective device. However, this raises new challenges, as the coating that surrounds the cells can prevent them from receiving enough oxygen.
In a 2023 study, Anderson and his colleagues reported an islet-encapsulation device that also carries an on-board oxygen generator. This generator consists of a proton-exchange membrane that can split water vapor (found abundantly in the body) into hydrogen and oxygen. The hydrogen diffuses harmlessly away, while oxygen goes into a storage chamber that feeds the islet cells through a thin, oxygen-permeable membrane.
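As a back-of-the-envelope illustration, Faraday’s law links the electrolysis current driving such a membrane to the oxygen it can supply; the current in the example below is hypothetical and not a figure from the study.

```python
# Illustrative only: oxygen produced by water splitting, 2 H2O -> O2 + 4 H+ + 4 e-.
FARADAY = 96485.0  # coulombs per mole of electrons

def oxygen_rate_umol_per_hour(current_mA: float) -> float:
    electrons_per_second = (current_mA / 1000.0) / FARADAY  # mol e- per second
    o2_per_second = electrons_per_second / 4.0              # 4 electrons per O2 molecule
    return o2_per_second * 3600.0 * 1e6                     # micromoles per hour

print(oxygen_rate_umol_per_hour(1.0))  # ~9.3 umol of O2 per hour at a hypothetical 1 mA
```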
Cells encapsulated within this device, they found, could produce insulin for up to a month after being implanted in mice.
“A month is a good timeframe in that it shows basic proof-of-concept. But from a translational standpoint, it’s important to show that you can go quite a bit longer than that,” Krishnan says.
In the new study, the researchers increased the lifespan of the devices by making them more waterproof and more resilient to cracking. They also improved the device electronics to deliver more power to the oxygen generator. The implant is powered wirelessly by an external antenna placed on the skin, which transfers energy to the device. By optimizing the circuitry, the researchers were able to increase the amount of power reaching the oxygen-generating system.
The additional power allowed the device to produce more oxygen, helping the encapsulated cells to survive and function more effectively. As a result, the cells were able to generate much more insulin over time.
Protein factories
In studies in rats and mice, the researchers showed that the new device could function for at least 90 days after being implanted under the skin. During this time, donor islet cells were able to produce enough insulin to keep the animals’ blood sugar levels within a healthy range.
The researchers saw similar results with islet cells derived from induced pluripotent stem cells, which could one day provide an indefinite supply that could be used for any patient who needs them. These islets didn’t fully reverse diabetes, but they did achieve some control of blood sugar levels.
“We’re hoping that in the future, if we can give the cells a little bit longer to fully mature, that they’ll secrete even more insulin to better regulate diabetes in the animals,” Bochenek says.
The researchers now plan to study whether they can get the devices to last for even longer in the body — up to two years, or longer.
“Long-term survival of the islets is an important goal,” Anderson says. “The cells, if they’re in the right environment, seem to be able to survive for a long time. We are excited by the duration we’ve already achieved, and we will be working to extend their function as long as possible.”
The researchers are also exploring the possibility of using this approach to deliver cells that could produce other useful proteins, such as antibodies, enzymes, or clotting factors.
“We think that these technologies could provide a long-term way to treat human disease by making drugs in the body instead of outside of the body,” Anderson says. “There are many protein therapies where patients must receive repeated, lengthy infusions. We think it may be possible to create a device that could continuously create protein therapeutics on demand and as needed by the patient.”
The research was funded, in part, by Breakthrough T1D, the Leona M. and Harry B. Helmsley Charitable Trust, the National Institutes of Health, and a Koch Institute Support (core) Grant from the National Cancer Institute.
Study reveals why some cancer therapies don’t work for all patients
A backup survival pathway can help tumor cells resist certain lung cancer and other drugs. Combining therapies may offer a solution.
Drugs that block enzymes called tyrosine kinases are among the most effective targeted therapies for cancer. However, they typically work for only 40 to 80 percent of the patients who would be expected to respond to them.
In a new study, MIT researchers have figured out why those drugs don’t work in all cases: Many of these tumors have turned on a backup survival pathway that helps them keep growing when the targeted pathway is knocked out.
“This seems to be hardwired into the cells and seems to be providing activation of a critical survival pathway in cancer cells,” says Forest White, the Ned C. and Janet C. Rice Professor of Biological Engineering at MIT. “This pathway allows the cells to be resistant to a wide variety of therapies, including chemotherapies.”
Additionally, the researchers found that they could kill those drug-resistant cancer cells by treating with both a tyrosine kinase inhibitor and a drug that targets the backup pathway. Clinical trials are now underway to test one such combination in lung cancer patients.
White is the senior author of the study, which appears this week in the Proceedings of the National Academy of Sciences. Cameron Flower PhD ’24, who is now a postdoc at Dana-Farber Cancer Institute and Boston Children’s Hospital, is the paper’s lead author.
Tumor survival
Tyrosine kinases are involved in many signaling pathways that allow cells to receive input from the external environment and convert it into a response such as growing or dividing. There are about 90 types of these kinases in human cells, and many of them are overactive in cancer cells.
“These kinases are very important for regulating cell growth and mitosis, and pushing the cell from a nondividing state to a dividing state depends on the activity of a lot of different tyrosine kinases,” Flower says. “We see a lot of mutations and overexpression of these kinases in cancer cells.”
These cancer-associated kinases include EGFR and BCR-ABL. Many cancer drugs targeting these kinases, including imatinib (Gleevec), have been approved to treat leukemia and other cancers. However, these drugs are not effective for all of the patients whose tumors overexpress tyrosine kinases — a phenomenon that has puzzled cancer researchers.
That lower-than-expected success rate motivated the MIT team to look into these drugs and try to figure out why some tumors do not respond to them.
For this study, the researchers examined six different cancer cell lines, which originally came from lung cancer patients. They chose two cell lines with EGFR mutations, two with mutations in a tyrosine kinase called MET, and two with mutations in a tyrosine kinase called ALK. Each pair included one line that responded well to the tyrosine kinase inhibitor targeting the overactive pathway and one line that did not.
Using a technique called phosphoproteomics, the researchers were able to analyze the signaling pathways that were active in each of the cells, before and after treatment. Phosphoproteomics is used to identify proteins that have had a phosphate group added to them by a kinase. This process, known as phosphorylation, can activate or deactivate the target protein.
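The toy sketch below shows the general logic of such a comparison: phosphosite intensities measured before and after treatment reveal which signals the drug shuts down and which stay on. The site names, intensities, and cutoff are hypothetical.

```python
# Hypothetical example: compare phosphosite intensities before and after treatment.
import math

before = {"EGFR_Y1068": 1000.0, "SRC_Y419": 800.0, "MAPK1_Y187": 950.0}
after = {"EGFR_Y1068": 50.0, "SRC_Y419": 780.0, "MAPK1_Y187": 120.0}

for site, baseline in before.items():
    log2_fold_change = math.log2(after[site] / baseline)
    status = "still active" if log2_fold_change > -1.0 else "shut down by drug"
    print(f"{site}: log2 fold change {log2_fold_change:+.2f} -> {status}")
# A SRC-family site that barely moves after treatment hints at a backup survival pathway.
```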
The researchers’ analysis revealed that the drugs were working as intended in all of the cancer cells. Even in resistant cells, the drugs did knock out signaling by their target kinase. However, in the cells that were resistant, an alternative network was already turned on, which helped the cells survive in spite of the treatment.
“Even before the therapy begins, the cells are in a state that intrinsically is resistant to the drug,” Flower says.
This survival network consists of signaling pathways that are regulated by a group of enzymes known as SRC family kinases. Activation of this network appears to help cancer cells proliferate and possibly to migrate to new locations in the body. In addition to lung cancer, researchers from White’s lab have also found SRC family kinases activated in melanoma cells, where they also play a role in drug resistance, and in glioblastoma, a type of brain cancer.
“As inhibitors for SRC kinases are also drugs, the work suggests that combining inhibitors of driver oncogenes with SRC inhibitors could increase the number of patients who would benefit. This strategy merits testing in new clinical trials,” says Benjamin Neel, a professor of medicine at NYU Grossman School of Medicine, who was not involved in the study.
These findings might also explain why some patients who initially respond to tyrosine kinase inhibitors end up having their tumors recur later; the cells may end up activating this same survival pathway, but not until sometime after the initial treatment.
Combating resistance
The researchers also found that treating the resistant cells with both a tyrosine kinase inhibitor and a drug that inhibits SRC family kinases led to much greater cell death rates. By coincidence, a clinical trial testing the combination of a tyrosine kinase inhibitor called osimertinib and an SRC inhibitor is now underway in patients with lung cancer. The MIT team now hopes to work with the same drug company to run a similar trial in pancreatic cancer patients.
The researchers also showed that they could use phosphoproteomics to analyze patient biopsy samples to see which cells already have the SRC pathways turned on.
“We are really excited to watch these clinical trials and to see how well patients do on these combinations. And I really think there’s a future for using tyrosine phosphoproteomics to guide this clinical decision-making,” White says.
This therapy might also be useful for patients whose tumors are originally susceptible to tyrosine kinase inhibitors but then later become resistant by turning on SRC pathways.
“Among the sensitive cells, some of them are able to upregulate this survival pathway and survive, which might be the residual disease that’s still there after treatment,” White says. “One of the interesting avenues here is, could we improve therapy for almost everybody, regardless of whether their tumors have intrinsic or adaptive resistance?”
The research was funded by the National Institutes of Health and the MIT Center for Precision Cancer Medicine.
AI system learns to keep warehouse robot traffic running smoothly
This new approach adapts to decide which robots should get the right of way at every moment, avoiding congestion and increasing throughput.
Inside a giant autonomous warehouse, hundreds of robots dart down aisles as they collect and distribute items to fulfill a steady stream of customer orders. In this busy environment, even small traffic jams or minor collisions can snowball into massive slowdowns.
To avoid such an avalanche of inefficiencies, researchers from MIT and the tech firm Symbotic developed a new method that automatically keeps a fleet of robots moving smoothly. Their method learns which robots should go first at each moment, based on how congestion is forming, and adapts to prioritize robots that are about to get stuck. In this way, the system can reroute robots in advance to avoid bottlenecks.
The hybrid system utilizes deep reinforcement learning, a powerful artificial intelligence method for solving complex problems, to figure out which robots should be prioritized. Then, a fast and reliable planning algorithm feeds instructions to the robots, enabling them to respond rapidly in constantly changing conditions.
In simulations inspired by actual e-commerce warehouse layouts, this new approach achieved about a 25 percent gain in throughput over other methods. Importantly, the system can quickly adapt to new environments with different quantities of robots or varied warehouse layouts.
“There are a lot of decision-making problems in manufacturing and logistics where companies rely on algorithms designed by human experts. But we have shown that, with the power of deep reinforcement learning, we can achieve super-human performance. This is a very promising approach, because in these giant warehouses even a 2 or 3 percent increase in throughput can have a huge impact,” says Han Zheng, a graduate student in the Laboratory for Information and Decision Systems (LIDS) at MIT and lead author of a paper on this new approach.
Zheng is joined on the paper by Yining Ma, a LIDS postdoc; Brandon Araki and Jingkai Chen of Symbotic; and senior author Cathy Wu, the Class of 1954 Career Development Associate Professor in Civil and Environmental Engineering (CEE) and the Institute for Data, Systems, and Society (IDSS) at MIT, and a member of LIDS. The research appears today in the Journal of Artificial Intelligence Research.
Rerouting robots
Coordinating hundreds of robots in an e-commerce warehouse simultaneously is no easy task.
The problem is especially complicated because the warehouse is a dynamic environment, and robots continually receive new tasks after reaching their goals. They need to be rapidly redirected as they leave and enter the warehouse floor.
Companies often leverage algorithms written by human experts to determine where and when robots should move to maximize the number of packages they can handle.
But if there is congestion or a collision, a firm may have no choice but to shut down the entire warehouse for hours to manually sort the problem out.
“In this setting, we don’t have an exact prediction of the future. We only know what the future might hold, in terms of the packages that come in or the distribution of future orders. The planning system needs to be adaptive to these changes as the warehouse operations go on,” Zheng says.
The MIT researchers achieved this adaptability using machine learning. They began by designing a neural network model to take observations of the warehouse environment and decide how to prioritize the robots. They trained this model using deep reinforcement learning, a trial-and-error method in which the model learns to control robots in simulations that mimic actual warehouses. The model is rewarded for making decisions that increase overall throughput while avoiding conflicts.
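A minimal sketch of the kind of reward signal described here might look like the following; the bonus and penalty values are placeholders rather than the paper’s actual reward design.

```python
# Placeholder reward: credit completed deliveries, penalize robot conflicts.
def step_reward(packages_delivered: int, conflicts: int,
                delivery_bonus: float = 1.0, conflict_penalty: float = 5.0) -> float:
    return delivery_bonus * packages_delivered - conflict_penalty * conflicts
```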
Over time, the neural network learns to coordinate many robots efficiently.
“By interacting with simulations inspired by real warehouse layouts, our system receives feedback that we use to make its decision-making more intelligent. The trained neural network can then adapt to warehouses with different layouts,” Zheng explains.
The network is designed to capture the long-term constraints and obstacles in each robot’s path, while also considering dynamic interactions between robots as they move through the warehouse.
By predicting current and future robot interactions, the model plans to avoid congestion before it happens.
After the neural network decides which robots should receive priority, the system employs a tried-and-true planning algorithm to tell each robot how to move from one point to another. This efficient algorithm helps the robots react quickly in the changing warehouse environment.
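The Python sketch below captures that hybrid structure under stated assumptions: a learned model scores each robot’s priority, and a classical planner then routes the robots one at a time in priority order, treating already-planned paths as constraints. The helper names (priority_model, planner) and robot attributes are hypothetical.

```python
# Hypothetical sketch of prioritize-then-plan coordination.
def plan_fleet(robots, warehouse, priority_model, planner):
    # The learned model scores how urgently each robot needs the right of way.
    scores = {robot.id: priority_model(robot.observation(warehouse)) for robot in robots}
    reserved_paths = []
    plans = {}
    # Robots most at risk of getting stuck are planned first.
    for robot in sorted(robots, key=lambda r: scores[r.id], reverse=True):
        path = planner(robot.position, robot.goal, warehouse, avoid=reserved_paths)
        reserved_paths.append(path)  # lower-priority robots must route around this path
        plans[robot.id] = path
    return plans
```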
This combination of methods is key.
“This hybrid approach builds on my group’s work on how to achieve the best of both worlds between machine learning and classical optimization methods. Pure machine-learning methods still struggle to solve complex optimization problems, and yet it is extremely time- and labor-intensive for human experts to design effective methods. But together, using expert-designed methods the right way can tremendously simplify the machine learning task,” says Wu.
Overcoming complexity
Once the researchers trained the neural network, they tested the system in simulated warehouses that were different from those it had seen during training. Since industrial simulations were too inefficient for this complex problem, the researchers designed their own environments to mimic what happens in actual warehouses.
On average, their hybrid learning-based approach achieved 25 percent greater throughput than traditional algorithms as well as a random search method, in terms of number of packages delivered per robot. Their approach could also generate feasible robot path plans that overcame congestion caused by traditional methods.
“Especially when the density of robots in the warehouse goes up, the complexity scales exponentially, and these traditional methods quickly start to break down. In these environments, our method is much more efficient,” Zheng says.
While their system is still far away from real-world deployment, these demonstrations highlight the feasibility and benefits of using a machine learning-guided approach in warehouse automation.
In the future, the researchers want to include task assignments in the problem formulation, since determining which robot will complete each task impacts congestion. They also plan to scale up their system to larger warehouses with thousands of robots.
This research was funded by Symbotic.
Why solid-state batteries keep short-circuiting
New insights into metallic cracks that harm battery performance could advance the longstanding quest to develop energy-dense solid-state batteries.
Batteries that use solids as their charge-carrying electrolyte could potentially be a safer and far more energy-dense alternative to lithium-ion batteries. However, these solid-state batteries have been plagued by the formation of metallic cracks called dendrites that cause them to short circuit.
The problem has so far prevented such batteries from becoming a major player in energy storage. But now, research from MIT could finally help engineers find a way to get past this hurdle.
For decades, many researchers have treated dendrites as largely the result of mechanical stress — like cracks that form on the sidewalk when a tree root grows underneath. But MIT engineers have discovered the exact opposite: Faster dendrite growth was associated with lower stress levels in a commonly used battery electrolyte material. Using a new technique that allowed them to directly measure the stress around growing dendrites, the researchers found cracks formed at stress levels as low as 25 percent of what would be expected under mechanical stress alone.
The experiments, published in Nature today, instead revealed another culprit: chemical reactions caused by high electrical currents that weaken the electrolyte and make it more susceptible to dendrite growth. Researchers had previously proposed that such reactions cause dendrite growth, but the new study provides the first experimental data on the interplay between chemical and mechanical stress in dendrite formation.
“Direct measurement techniques allowed us to see how tough the material is as we cycle the cell,” says Cole Fincher, the paper’s first author and an MIT PhD student in materials science and engineering. “What we saw was that if you just test the ceramic electrolyte on the benchtop, it’s about as tough as your tooth. But during charging, it gets a lot weaker — closer to the brittleness of a lollipop.”
The findings reveal why developing stronger electrolytes alone hasn’t solved the decades-old dendrite problem. It also points to the importance of developing more chemically stable materials to finally fulfill the promise of high-density solid-state batteries.
“There’s a large community of researchers that are constantly trying to discover and design better solid electrolytes to enable the solid-state battery,” says senior author Yet-Ming Chiang, MIT’s Kyocera Professor of Materials Science and Engineering. “This study provides guidance in those efforts. We discovered a new mechanism by which these dendrites grow, allowing us to explore ways to design around it to make solid-state batteries successful.”
Joining Fincher and Chiang on the paper are MIT PhD student Colin Gilgenbach; Thermo Fisher Scientific scientists Christian Roach and Rachel Osmundsen; MIT.nano researcher Aubrey Penn; MIT Toyota Professor in Materials Processing W. Craig Carter; MIT Kyocera Professor of Materials Science and Engineering James LeBeau; University of Michigan Professor Michael Thouless; and Brown University Professor Brian W. Sheldon.
Measuring stress
Dendrites have presented a major roadblock to battery development since the 1970s. One reason lithium-ion batteries have become ubiquitous while other approaches have stalled is that their commonly used graphite anodes are less susceptible to dendrite formation. That’s a shame because solid-state batteries that use lithium metal as an anode and a solid electrolyte could theoretically store far more energy in the same sized package with less weight. They could thus enable longer-lasting phones and laptops, or electric cars with double the range of today’s options.
“There’s no more energy-dense form of lithium than lithium metal,” Chiang says. “But the dendrite problem has limited progress with solid-state batteries.”
Lithium metal is soft like taffy. Fincher, who has been studying the dendrite problem in the labs of Chiang and Carter, says one puzzle is how such a soft material can penetrate into the hard electrolyte materials being explored for use in solid-state batteries.
“The ceramics that have been used in these applications are stiff, like a coffee mug, so it’s been hoped that solid-state batteries would stop this relatively soft dendrite from growing,” Fincher explains.
Believing that mechanical stress causes dendrites, scientists have worked to develop stronger electrolytes that can withstand more mechanical stress. Some researchers have proposed that chemical reactions play a role in dendrite formation, but how those reactions worked with mechanical stress was not known.
For their Nature study, the researchers set out to directly observe mechanical and chemical changes in a commonly used solid-state electrolyte material as dendrites grew. Solid-state batteries are typically organized like a sandwich, which makes it hard to look inside the middle electrolyte layer. For their first experiment, the researchers developed a special solid-state battery cell in which the ceramic layers can be observed from the side, allowing the researchers to watch dendrite growth occurring in the electrolyte.
The researchers also used a measurement technique called birefringence microscopy to precisely measure the stress around the dendrite, which Fincher developed as part of his PhD thesis.
“It works the same way as polarized sunglasses when you look at something like a windshield,” Fincher explains of the technique. “When light comes through, residual stresses in the glass enable light of some orientations to pass faster than others, and that can give rise to observable rainbow patterns. These patterns can be used to measure stress.”
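For intuition, photoelastic measurements of this kind rest on the stress-optic law; the short calculation below uses placeholder material constants rather than values from the paper.

```python
# Stress-optic law (photoelasticity): retardation = C * thickness * (sigma1 - sigma2).
def principal_stress_difference(retardation_nm: float,
                                stress_optic_coeff_per_Pa: float,
                                thickness_m: float) -> float:
    """Solve the stress-optic law for the principal stress difference, in pascals."""
    return (retardation_nm * 1e-9) / (stress_optic_coeff_per_Pa * thickness_m)

# Placeholder numbers: 50 nm retardation, C = 2e-12 1/Pa, 1 mm thickness -> 25 MPa.
print(principal_stress_difference(50.0, 2e-12, 1e-3))
```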
The technique gave the researchers a way to both visualize and quantify stress around actively growing dendrites for the first time, leading to the unexpected findings.
“Normally you would expect that the faster a dendrite grows, the more stress it creates,” Chiang says. “Instead, we observed exactly the opposite. The faster it grew, the lower the stress around it, meaning the solid electrolyte is breaking under a lower stress, and therefore it’s been embrittled.”
In fact, the dendrites grew at stress levels far weaker than expected. Fincher describes the weaker electrolyte as electrochemically corroded.
“Imagine you test a piece of glass one day, and the next day it’s only a quarter as strong,” Chiang says. “It was very surprising.”
Led by LeBeau, the researchers then cooled the electrolyte to extremely low temperatures and applied a powerful imaging technique called cryogenic scanning transmission electron microscopy that allowed them to study the area around the dendrite on nearly atomic scales. The imaging revealed that the passage of ionic current through the material had caused chemical reactions that made it more brittle.
“The electric current drives the flow of lithium ions through the solid electrolyte,” Chiang explains. “That causes a highly concentrated flow of lithium ions at the dendrite tip. We believe that leads to a chemical reduction of the material compound, which leads to its decomposition into new phases. You start with a crystalline phase of the electrolyte, then there’s a volume contraction after the decomposition that is consistent with the embrittlement we see.”
Toward better batteries
The experiment was done on one of the most stable electrolytes used in solid-state batteries, making the researchers confident the findings will carry over to other electrolyte materials.
“This tells us we have to look for electrolyte materials that are even more stable, especially when in contact with lithium metal, which chemically speaking is very reducing,” Chiang says. “This will help direct the search for new materials.”
For instance, Chiang says now that they understand more about the chemical changes causing embrittlement, researchers could explore materials that actually get tougher as cracks grow.
The researchers say it will take more work to figure out what electrochemical reactions are taking place to make the electrolyte so much weaker. But they say their approach for directly observing stresses could also help improve materials for use in devices like fuel cells and electrolyzers.
The work was supported by the Center for Mechano-Chemical Understanding of Solid Ionic Conductors, an Energy Frontier Research Center of the U.S. Department of Energy, the National Science Foundation, and Fincher’s Department of Defense Science and Engineering Graduate Fellowship, and was carried out using MIT.nano facilities.
QS World University Rankings rates MIT No. 1 in 12 subjects for 2026
The Institute also ranks second in seven subject areas.
QS World University Rankings has placed MIT in the No. 1 spot in 12 subject areas for 2026, the organization announced today.
The Institute received a No. 1 ranking in the following QS subject areas: Chemical Engineering; Chemistry; Civil and Structural Engineering; Computer Science and Information Systems; Data Science and Artificial Intelligence; Electrical and Electronic Engineering; Engineering and Technology; Linguistics; Materials Science; Mechanical, Aeronautical, and Manufacturing Engineering; Mathematics; and Physics and Astronomy.
MIT also placed second in seven subject areas: Architecture/Built Environment; History of Art; Biological Sciences; Economics and Econometrics; Marketing; Natural Sciences; and Statistics and Operational Research.
For 2026, universities were evaluated in 55 specific subjects and five broader subject areas.
Quacquarelli Symonds Limited subject rankings, published annually, are designed to help prospective students find the leading schools in their field of interest. Rankings are based on research quality and accomplishments, academic reputation, and graduate employment.
MIT has been ranked as the No. 1 university in the world by QS World University Rankings for 14 straight years.
The next time you’re scrolling your phone, take a moment to appreciate the feat: The seemingly mundane act is possible thanks to the coordination of 34 muscles, 27 joints, and over 100 tendons and ligaments in your hand. Indeed, our hands are the most nimble parts of our bodies. Mimicking their many nuanced gestures has been a longstanding challenge in robotics and virtual reality.
Now, MIT engineers have designed an ultrasound wristband that precisely tracks a wearer’s hand movements in real time. The wristband produces ultrasound images of the wrist’s muscles, tendons, and ligaments as the hand moves, and is paired with an artificial intelligence algorithm that continuously translates the images into the corresponding positions of the five fingers and palm.
The researchers can train the wristband to learn a wearer’s hand motions, which the device can communicate in real-time to a robot or a virtual environment.
In demonstrations, the team has shown that a person wearing the wristband can wirelessly control a robotic hand. As the person gestures or points, the robot does the same. In a sort of wireless marionette interaction, the wearer can manipulate the robot to play a simple tune on the piano and shoot a small basketball into a desktop hoop. With the same wristband, a wearer can also manipulate objects on a computer screen, for instance pinching their fingers together to enlarge and minimize a virtual object.
The team is using the wristband to gather hand motion data from many more users with different hand sizes, finger shapes, and gestures. They envision building a large dataset of hand motions that can be mined, for instance, to train humanoid robots in dexterous tasks, such as performing certain surgical procedures. The ultrasound band could also be used to grasp, manipulate, and interact with objects in video games, design applications, or other virtual settings.
“We think this work has immediate impact in potentially replacing hand tracking techniques with wearable ultrasound bands in virtual and augmented reality,” says Xuanhe Zhao, the Uncas and Helen Whitaker Professor of Mechanical Engineering at MIT. “It could also provide huge amounts of training data for dexterous humanoid robots.”
Zhao, Gengxi Lu, and their colleagues present the wristband’s new design in a paper appearing today in Nature Electronics. Their MIT co-authors are former postdocs Xiaoyu Chen, Shucong Li, and Bolei Deng; graduate students SeongHyeon Kim and Dian Li; postdocs Shu Wang and Runze Li; and Anantha Chandrakasan, MIT provost and the Vannevar Bush Professor of Electrical Engineering and Computer Science. Other co-authors are graduate students Yushun Zheng and Junhang Zhang, Baoqiang Liu, Chen Gong, and Professor Qifa Zhou from the University of Southern California.
Seeing strings
There are currently a number of approaches to capturing and mimicking human hand dexterity in robots. Some approaches use cameras to record a person’s hand movements as they manipulate objects or perform tasks. Others involve having a person wear a glove with sensors, which records the person’s hand movements and transmits the data to a receiving robot. But erecting a complex camera system for different applications is impractical, and cameras are prone to visual obstructions. And sensor-laden gloves can limit a person’s natural hand motions and sensations.
A third approach uses the electrical signals from muscles in the wrist or forearm, which scientists then correlate with specific hand movements. Researchers have made significant advances in this approach; however, these signals are easily affected by noise in the environment. They are also not sensitive enough to distinguish subtle changes in movements. For instance, they may discern whether a thumb and index finger are pinched together or pulled apart, but not much of the in-between path.
Zhao’s team wondered whether ultrasound imaging might capture more dexterous and continuous hand movements. His group has been developing various forms of ultrasound stickers — miniaturized versions of the transducers used in doctors’ offices, paired with a hydrogel material that can safely stick to skin.
In their new study, the team incorporated the ultrasound sticker design into a wearable wristband to continuously image the muscles and tendons in the wrist.
“The tendons and muscles in your wrist are like strings pulling on puppets, which are your fingers,” Lu says. “So the idea is: Each time you take a picture of the state of the strings, you’ll know the state of the hand.”
Mapping manipulation
The team designed a wristband with an ultrasound sticker that is the size of a smartwatch, and added onboard electronics that are about as small as a cellphone. They attached the wristband to a volunteer’s wrist and confirmed that the device produced clear and continuous images of the wrist as the volunteer moved their fingers in various gestures.
The challenge then was to relate the black and white ultrasound images of the wrist to specific positions of the hand. As it turns out, the fingers and thumb are capable of 22 degrees of freedom, or different ways of extending or angling. The researchers found that they could identify specific regions in their ultrasound images of the wrist that correlate to each of these 22 degrees of freedom. For instance, changes in one region relate to thumb extension, while changes in another region correlate with movements of the index finger.
To establish these connections, a volunteer wearing the wristband would move their hand into various positions while the researchers recorded the gestures with multiple cameras surrounding the volunteer. By matching changes in certain regions of the ultrasound images with hand positions recorded by the cameras, the team could label wrist image regions with the corresponding degree of freedom in the hand. But to do this translation continuously, and in real time, would be an impossible task for humans.
So, the team turned to artificial intelligence. They used an AI algorithm that can be trained to recognize image patterns and correlate them with specific labels and, in this case, the hand’s various degrees of freedom. The researchers trained the algorithm with ultrasound images that they meticulously labeled, annotating the image regions associated with a specific degree of freedom. They tested the algorithm on a new set of ultrasound images and found it correctly predicted the corresponding hand gestures.
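The team’s actual model is not described in code in the article, but the pipeline it outlines (labeled ultrasound frames in, 22 joint values out) resembles a standard supervised image-regression setup. The following is a minimal sketch of that idea; the architecture, image size, and training details are illustrative assumptions, not the authors’ implementation.

```python
# Hypothetical sketch (not the authors' code) of the kind of image-to-pose
# regression the article describes: ultrasound frames of the wrist in,
# 22 joint values (degrees of freedom) out. Shapes and hyperparameters are
# illustrative assumptions.
import torch
import torch.nn as nn

class WristUltrasoundRegressor(nn.Module):
    def __init__(self, num_dof: int = 22):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_dof)  # one output per degree of freedom

    def forward(self, x):  # x: (batch, 1, height, width) ultrasound frames
        return self.head(self.features(x).flatten(1))

model = WristUltrasoundRegressor()
loss_fn = nn.MSELoss()  # regress against camera-derived joint labels
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on random tensors standing in for labeled data.
frames = torch.randn(8, 1, 128, 128)   # a batch of ultrasound images
labels = torch.randn(8, 22)            # camera-derived 22-DOF hand poses
optimizer.zero_grad()
loss = loss_fn(model(frames), labels)
loss.backward()
optimizer.step()
```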
Once the researchers successfully paired the AI algorithm with the wristband, they tested the device on more volunteers. For the new study, eight volunteers with different hand and wrist sizes wore the wristband while they formed various hand gestures and grasps, including making the signs for all 26 letters in American Sign Language. They also held objects such as a tennis ball, a plastic bottle, a pair of scissors, and a pencil. In each case, the wristband precisely tracked and predicted the position of the hand.
To demonstrate potential applications, the team developed a simple computer program that they wirelessly paired with the wristband. As a wearer went through the motions of pinching and grasping, the gestures corresponded to zooming in and out on an object on the computer screen, and virtually moving and manipulating it in a smooth and continuous fashion.
The researchers also tested the wristband as a wireless controller of a simple commercial robotic hand. While wearing the wristband, a volunteer went through the motions of playing a keyboard. The robot in turn mimicked the motions in real time to play a simple tune on a piano. The same robot was also able to mimic a person’s finger taps to play a desktop basketball game.
Zhao is planning to further miniaturize the wristband’s hardware, as well as train the AI software on many more gestures and movements from volunteers with a wider range of hand sizes and shapes. Ultimately, the team is building toward a wearable hand tracker that can be worn by anyone, to wirelessly manipulate humanoid robots or virtual objects with high dexterity.
“We believe this is the most advanced way to track dexterous hand motion, through wearable imaging of the wrist,” Zhao says. “We think these wearable ultrasound bands can provide intuitive and versatile controls for virtual reality and robotic hands.”
This research was supported, in part, by MIT, the U.S. National Institutes of Health, the U.S. National Science Foundation, the U.S. Department of Defense, and the Singapore National Research Foundation through the Singapore-MIT Alliance for Research and Technology.
Enduring passions for medicine, journalism, and triathlons
As an aspiring physician-scientist and editor-in-chief of The Tech, MIT senior Alex Tang has found inspiration in the lives of patients and others in his community.
Alex Tang’s dream of becoming a physician started in grade school when he read Lisa Sanders’ “Diagnosis” column in The New York Times Magazine. Although he often encountered unfamiliar medical terms, Tang was captivated by the magic of medicine, as Sanders described how physicians turned puzzling sets of symptoms into concrete diagnoses and treatment plans for patients.
A decade later, Tang is one step closer to achieving his dream. The MIT senior has challenged himself academically, dual-majoring in chemistry and biology and minoring in biomedical engineering. “All of the courses have encouraged me to think about problems through different lenses,” he says.
Tang has also challenged himself as the editor-in-chief of MIT’s student newspaper, The Tech, and as a competitive triathlete. In the fall, he will begin medical school, where he hopes to develop clinical skills and continue honing his scientific abilities. Ultimately, he aspires to pursue a career as a physician-scientist, focusing on how cancers respond to and resist treatment. He wants to help convert those insights into novel therapies that can be tailored to individual cancer patients.
“I want to advance precision oncology, ensuring that each patient receives the most effective, personalized treatment possible,” he says.
Thriving in the lab
Originally from Massachusetts, Tang was eager to make the most of his MIT experience, especially because of its extensive research opportunities. “Both my parents worked in the Cambridge biotech space, and being able to contribute to innovative science here has been a priority,” he says.
Early on, Tang gravitated toward oncology after joining the Nir Hacohen Lab at the Broad Institute, an interest cemented after taking 7.45 (Cancer Biology), which was taught by professors Tyler Jacks and Michael Hemann. Fascinated by how new cancer therapies were changing patients’ lives, he joined a project with implications for patients with difficult prognoses: For the last three-and-a-half years, Tang has been studying the effects of combined immunotherapy and targeted molecular therapy on tumors in patients with metastatic colorectal cancer.
“I hope my work can provide clarity for patients and physicians, and empower them to be confident in their options for care,” Tang says.
Last year, Tang was awarded a prestigious Goldwater Scholarship, which supports undergraduates who show promise of becoming leading scientists, engineers, and mathematicians in their respective fields.
In addition to gaining technical skills, Tang has found working in the Hacohen Lab to be enriching in other important ways.
“What’s been great about research is learning from experts in the field who become your role models,” he says. “They are at the frontiers of investigating the most challenging questions in the field, and iterating through the scientific process with them is such a joy.”
Looking forward to medical school, he hopes to complement his basic science research with work that is more clinically involved.
“I want to bridge the gap between fundamental discoveries and tangible improvements in patient care,” Tang says. He has already set out on this mission, recently leading the development of a prognostic assay in lung cancer.
Breaking news
After stopping by the booth for MIT’s student newspaper, The Tech, during Campus Preview Weekend, Tang knew he wanted to join and contribute to a publication that has long chronicled MIT’s history and culture. Starting as a news writer and later serving as editor-in-chief, he learned how to write under pressure, reported on major campus events, and balanced leadership with collaboration.
“It’s been such an honor and pleasure to document people across the diverse MIT community who are all contributing to the character of the Institute in different ways,” he says.
It’s an activity he’ll drop everything for.
“When we have things come up and we have to do a breaking news story or we have some editorial thing that needs to be managed, I’ll just stop working to sort out whatever’s happening,” he says. “I think that’s what passion really is about.”
His journey with The Tech has not always been easy. In the summer between his first and second years, he found himself solely responsible for producing the paper’s news content amid a staff shortage, while the paper was also facing financial difficulties.
“Coming into sophomore fall, I focused on recruiting more staff and seeking out ways to get more funding,” Tang says. “The paper wouldn’t be here without the people, both students and faculty advisors alike, who bought into The Tech’s mission.”
Though he hopes to pursue a career in medicine, Tang has found journalism to be integral in shaping how he will connect and communicate with patients and colleagues.
“You are responsible for taking someone’s story, breaking it down, and retelling it in your own words in a way that you feel would resonate with the audience and serve the community,” he says.
An outlet through triathlon
Despite his busy schedule, Tang prioritizes staying active and maintaining fitness. A former competitive swimmer in high school and now a triathlete, he still finds himself drawn back to the water when everything around him feels fast-paced.
“Swimming, biking, and running are good ways to de-stress,” Tang says. “It’s therapeutic in the sense that you can just let go. The race is just that culmination of letting it go at a more elevated level.”
He credits MIT’s infrastructure for helping him stay committed to training. “My dorm is steps away from the pool and the track,” he says. “The convenience is superb.”
Tang has found success in competitions, most recently placing third in his age group at the 2025 Boston Triathlon. In fact, it is the feeling of accomplishment that pushes him every day.
“There are many days when you want to take it easy, but you have to remember the joy waiting for you at the end of the race when you’ve put in the work,” he says. “It motivates me to be conscious and aware of what I’m doing in practice.”
During the summer, Tang and his younger brother go out for long runs in the Boston suburbs. “It is great to have my brother push me every day,” Tang says. “There has been no one more supportive of me than my family.”
How to create “humble” AI
An MIT-led team is designing artificial intelligence systems for medical diagnosis that are more collaborative and forthcoming about uncertainty.
Artificial intelligence holds promise for helping doctors diagnose patients and personalize treatment options. However, an international group of scientists led by MIT cautions that AI systems, as currently designed, carry the risk of steering doctors in the wrong direction because they may overconfidently make incorrect decisions.
One way to prevent these mistakes is to program AI systems to be more “humble,” according to the researchers. Such systems would reveal when they are not confident in their diagnoses or recommendations and would encourage users to gather additional information when the diagnosis is uncertain.
“We’re now using AI as an oracle, but we can use AI as a coach. We could use AI as a true co-pilot. That would not only increase our ability to retrieve information but increase our agency to be able to connect the dots,” says Leo Anthony Celi, a senior research scientist at MIT’s Institute for Medical Engineering and Science, a physician at Beth Israel Deaconess Medical Center, and an associate professor at Harvard Medical School.
Celi and his colleagues have created a framework that they say can guide AI developers in designing systems that display curiosity and humility. This new approach could allow doctors and AI systems to work as partners, the researchers say, and help prevent AI from exerting too much influence over doctors’ decisions.
Celi is the senior author of the study, which appears today in BMJ Health and Care Informatics. The paper’s lead author is Sebastián Andrés Cajas Ordoñez, a researcher at MIT Critical Data, a global consortium led by the Laboratory for Computational Physiology within the MIT Institute for Medical Engineering and Science.
Instilling human values
Overconfident AI systems can lead to errors in medical settings, according to the MIT team. Previous studies have found that ICU physicians defer to AI systems that they perceive as reliable even when their own intuition goes against the AI suggestion. Physicians and patients alike are more likely to accept incorrect AI recommendations when they are perceived as authoritative.
In place of systems that offer overconfident but potentially incorrect advice, health care facilities should have access to AI systems that work more collaboratively with clinicians, the researchers say.
“We are trying to include humans in these human-AI systems, so that we are facilitating humans to collectively reflect and reimagine, instead of having isolated AI agents that do everything. We want humans to become more creative through the usage of AI,” Cajas Ordoñez says.
To create such a system, the consortium designed a framework that includes several computational modules that can be incorporated into existing AI systems. The first of these modules requires an AI model to evaluate its own certainty when making diagnostic predictions. Developed by consortium members Janan Arslan and Kurt Benke of the University of Melbourne, the Epistemic Virtue Score acts as a self-awareness check, ensuring the system’s confidence is appropriately tempered by the inherent uncertainty and complexity of each clinical scenario.
With that self-awareness in place, the model can tailor its response to the situation. If the system detects that its confidence exceeds what the available evidence supports, it can pause and flag the mismatch, requesting specific tests or history that would resolve the uncertainty, or recommending specialist consultation. The goal is an AI that not only provides answers but also signals when those answers should be treated with caution.
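The article describes this behavior conceptually rather than in code. A minimal sketch of the pattern, in which the model’s raw confidence is tempered by how complete the clinical evidence is and the system defers when the two do not line up, might look like the following; the threshold, scoring rule, and interface are assumptions for illustration, not the consortium’s actual Epistemic Virtue Score.

```python
# Hypothetical sketch (not the consortium's Epistemic Virtue Score) of a
# "humble" diagnostic wrapper: report a diagnosis only when the model's
# confidence is supported by the available evidence; otherwise flag the
# uncertainty and ask for more information. Threshold and scoring rule
# are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Recommendation:
    diagnosis: str | None
    confident: bool
    message: str

def humble_diagnose(probabilities: dict[str, float],
                    evidence_completeness: float,
                    confidence_threshold: float = 0.85) -> Recommendation:
    """probabilities: the model's class probabilities for candidate diagnoses.
    evidence_completeness: fraction (0-1) of relevant tests/history on file."""
    diagnosis, confidence = max(probabilities.items(), key=lambda kv: kv[1])

    # Temper raw model confidence by how complete the clinical picture is.
    tempered = confidence * evidence_completeness

    if tempered >= confidence_threshold:
        return Recommendation(diagnosis, True,
                              f"Suggest {diagnosis} (confidence {confidence:.2f}).")
    return Recommendation(None, False,
                          "Confidence is not supported by the available evidence: "
                          "recommend additional tests, history, or a specialist consult.")

# Example: a fairly confident model paired with an incomplete workup defers.
print(humble_diagnose({"sepsis": 0.80, "pneumonia": 0.20},
                      evidence_completeness=0.6))
```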
“It’s like having a co-pilot that would tell you that you need to seek a fresh pair of eyes to be able to understand this complex patient better,” Celi says.
Celi and his colleagues have previously developed large-scale databases that can be used to train AI systems, including the Medical Information Mart for Intensive Care (MIMIC) database from Beth Israel Deaconess Medical Center. His team is now working on implementing the new framework into AI systems based on MIMIC and introducing it to clinicians in the Beth Israel Lahey Health system.
This approach could also be implemented in AI systems that are used to analyze X-ray images or to determine the best treatment options for patients in the emergency room, among others, the researchers say.
Toward more inclusive AI
This study is part of a larger effort by Celi and his colleagues to create AI systems that are designed by and for the people who are ultimately going to be most impacted by these tools. Many AI models are trained on publicly available data from the United States, such as the MIMIC database, which can introduce biases toward a certain way of thinking about medical issues and exclude other perspectives.
Bringing in more viewpoints is critical to overcoming these potential biases, says Celi, emphasizing that each member of the global consortium brings a distinct perspective to a broader, collective understanding.
Another problem with existing AI systems used for diagnostics is that they are usually trained on electronic health records, which weren’t originally intended for that purpose. This means that the data lack much of the context that would be useful in making diagnoses and treatment recommendations. Additionally, many patients never get included in those datasets because of lack of access, such as people who live in rural areas.
At data workshops hosted by MIT Critical Data, groups of data scientists, health care professionals, social scientists, patients, and others work together on designing new AI systems. Before beginning, everyone is prompted to think about whether the data they’re using captures all the drivers of whatever they aim to predict, ensuring they don’t inadvertently encode existing structural inequities into their models.
“We make them question the dataset. Are they confident about their training data and validation data? Do they think that there are patients that were excluded, unintentionally or intentionally, and how will that affect the model itself?” he says. “Of course, we cannot stop or even delay the development of AI, not just in health care, but in every sector. But, we must be more deliberate and thoughtful in how we do this.”
The research was funded by the Boston-Korea Innovative Research Project through the Korea Health Industry Development Institute.
A complicated future for a methane-cleansing molecule
A new model shows how levels of the “atmosphere’s detergent” may rise and fall in response to climate change.
Methane is a powerful greenhouse gas that is second only to carbon dioxide in driving up global temperatures. But it doesn’t linger in the atmosphere for long thanks to molecules called hydroxyl radicals, which are known as the “atmosphere’s detergent” for their ability to break down methane. As the planet warms, however, it’s unclear how the air-cleaning agents will respond.
MIT scientists are now shedding some light on this. The team has developed a new model to study different processes that control how levels of hydroxyl radical will shift with warming temperatures.
They find that the picture is complicated. As temperatures increase, so too will water vapor in the atmosphere, which will in turn boost the molecule’s concentrations. But rising temperatures will also increase “biogenic volatile organic compound emissions” — gases that are naturally released by some plants and trees. These natural emissions can reduce hydroxyl radical and dampen water vapor’s boosting effect.
Specifically, the team finds that if the planet’s average temperatures rise by 2 degrees Celsius, the accompanying rise in water vapor will increase hydroxyl radical levels by about 9 percent. But the corresponding increase in biogenic emissions would in turn bring down hydroxyl radical levels by 6 percent. The final accounting could mean a small boost, of about 3 percent, in the atmosphere’s ability to break down methane and other chemical compounds as the planet warms.
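Treating the two competing effects as roughly additive, which is how the reported figures appear to combine, the bookkeeping behind that small net boost is simply:

```latex
\Delta[\mathrm{OH}] \;\approx\; \underbrace{+9\%}_{\text{more water vapor}} \;\underbrace{-\,6\%}_{\text{more biogenic emissions}} \;\approx\; +3\%
```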
“Hydroxyl radicals are important in determining the lifetime of methane and other reactive greenhouse gases, as well as gases that affect public health, including ozone and certain other air pollutants,” says study author Qindan Zhu, who led the work as a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).
“There’s a whole range of environmental reasons why we want to understand what’s going on with this molecule,” adds Arlene Fiore, the Peter H. Stone and Paola Malanotte Stone Professor in EAPS. “We want to make sure it’s around to chemically remove all these gases and pollutants.”
Fiore and Zhu’s new study appears today in the Journal of Advances in Modeling Earth Systems (JAMES). The study’s MIT co-authors include Jian Guan and Paolo Giani, along with Robert Pincus, Nicole Neumann, George Milly, and Clare Singer of Lamont-Doherty Earth Observatory and the Columbia Climate School, and Brian Medeiros at the National Center for Atmospheric Research.
A natural neutralizer
The hydroxyl radical, known chemically as OH, is made up of one oxygen atom and one hydrogen atom, along with an unpaired electron. This configuration makes the molecule extremely reactive. Like a chemical vacuum cleaner, OH easily pulls an electron or hydrogen atom away from other molecules, breaking them down into weaker, more water-soluble forms. In this way, OH helps remove a vast range of chemicals, including some air pollutants, pathogens, and ozone. And changes in OH are a powerful lever on methane.
“For methane, the reaction with OH is considered the most important loss pathway,” Zhu says. “About 90 percent of the methane that’s removed from the atmosphere is due to the reaction with OH.”
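The loss pathway Zhu describes is the standard hydrogen-abstraction reaction of atmospheric chemistry, in which OH strips a hydrogen atom from methane (the reaction is not written out in the article itself):

```latex
\mathrm{CH_4} \;+\; \cdot\mathrm{OH} \;\longrightarrow\; \cdot\mathrm{CH_3} \;+\; \mathrm{H_2O}
```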
Indeed, it’s thanks to reactions with hydroxyl radical that methane can only stick around in the atmosphere for about a decade — far shorter than carbon dioxide, which can linger for 1,000 years or longer. But even as OH breaks down methane already in the atmosphere, more methane continues to accumulate. Rising methane concentrations, in addition to human-derived emissions of carbon dioxide, are driving global warming, and it’s unclear how OH’s methane-clearing power will keep up.
“The questions we’re exploring here are: What are the main processes that control OH concentrations? And how will OH respond to climate change?” Fiore says.
An aquaplanet’s air
For their study, the researchers developed a new model to simulate levels of OH in the atmosphere under a current global climate scenario, compared to a future warmer climate. Their model, dubbed “AquaChem,” is an expansion of a simplified model that is part of a suite of tools developed by the Community Earth System Model (CESM) project. The model that the team chose to build off is one that represents the Earth as a simplified “aquaplanet,” with an entirely ocean-covered surface.
Aquaplanet models allow scientists to study detailed interactions in the atmosphere in response to changes in surface temperatures, without having to also spend computing time and energy on simulating complex dynamics between the land, water, and polar ice caps.
To the aquaplanet model, Zhu added an atmospheric chemistry component that simulates detailed chemical reactions in the atmosphere consistent with the applied surface temperatures. The chemical reactions that she modeled represent those that are known to affect OH concentrations.
OH is primarily produced when ozone interacts with sunlight in the presence of water vapor. Scientists have also found that OH levels can vary depending on certain anthropogenic and natural emissions, all of which Zhu incorporated separately and together into the AquaChem model in order to isolate the impact of each process on OH.
These emissions include carbon monoxide, methane, nitrogen oxides, and volatile organic compounds (VOCs), some of which come from human activities and others from natural processes. Among the naturally derived VOCs are “biogenic” emissions — gases, such as isoprene, that some plants and trees release through tiny pores called stomata during transpiration.
Into the AquaChem model, Zhu plugged in data that were available for each type of emissions from the year 2000 — a year that is generally considered to represent the current climate in a simplified form. She set the aquaplanet’s sea surface temperatures to the zonal annual mean of that year, and found that the model accurately reproduced the major sensitivities of OH chemistry to the underlying chemical processing as simulated in a more complex chemistry-climate model.
Then, Zhu ran the model under a second, globally warming scenario. She set the planet’s sea surface temperatures to warm by 2 degrees Celsius (a warming that is likely to occur unless global anthropogenic carbon emissions are mitigated). The team looked at how this warming would affect the various types of emissions and chemical processes, and how these changes would ultimately affect levels of OH in the atmosphere.
In the end, they found that the two biggest drivers of OH levels were rising water vapor and biogenic emissions. Global warming would increase the amount of water vapor in the atmosphere, which in turn would boost production of OH by 9 percent. However, this same degree of warming would also increase biogenic emissions such as isoprene, which reacts with and breaks down OH, bringing its levels down by 6 percent.
The team recognizes that there are many other factors that affect the response of isoprene emissions to surface warming. Rising CO2, not considered in this study, may dampen this temperature-driven response. Of all the factors that can shift OH levels under global warming, the researchers caution that biogenic emissions are the most uncertain, even though they appear to have a large influence. Going forward, the scientists plan to update AquaChem to continue studying how biogenic emissions, as well as other processes and climate scenarios, could sway OH concentrations.
“We know that changes in atmospheric OH, even of a few percent, can actually matter for interpreting how methane might accumulate in the atmosphere,” Zhu says. “Understanding future trends of OH will allow us to determine future trends of methane.”
This work was supported, in part, by Spark Climate Solutions and the National Oceanic and Atmospheric Administration.
On algorithms, life, and learning
Operations research expert Dimitris Bertsimas delivered the annual Killian Lecture, providing a look at the past and future of his work.
From enhancing international business logistics to freeing up more hospital beds to helping farmers, MIT Professor Dimitris Bertsimas SM ’87, PhD ’88 summarized how his work in operations research has helped drive real-world improvements, while delivering the 54th annual James R. Killian Faculty Achievement Award Lecture at MIT on Thursday, March 19.
Bertsimas also described how artificial intelligence is now being used in some of his scholarly projects and as a tool in MIT Open Learning efforts, which he currently directs — another facet of a highly productive and lauded career over four decades at the Institute. The Killian Award is the highest prize MIT gives its faculty.
“I have tried to improve the human condition,” Bertsimas said, summarizing the breadth of his work and the many applications to everyday living that he has found for it.
At MIT, Bertsimas is the vice provost for open learning, associate dean for online education and artificial intelligence, Boeing Leaders for Global Operations Professor of Management, and professor of operations research in the MIT Sloan School of Management. He also served as the inaugural faculty director of the master of business analytics program at MIT Sloan, and has held the position of associate dean of business analytics.
Bertsimas’ remarks encompassed both his past insights and his ongoing studies, as well as his current efforts to add AI to his research. Describing the concept of “robust optimization,” a highly influential approach that Bertsimas helped develop in the early 2000s, he explained how it has enabled, for instance, more reliable shipping through the Panama Canal. Other approaches to optimization aimed at getting more vessels through the canal every day — up to 48 — but would encounter significant problems at times. Bertsimas’ approach identified that 45 vessels a day was better — a slightly lower number, but one that “was always feasible,” he noted.
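The lecture stayed at the conceptual level, but in its generic textbook form (not Bertsimas’ specific canal model) robust optimization asks for a decision that remains feasible under every realization of uncertain data drawn from an uncertainty set:

```latex
\min_{x} \; c^{\top} x
\quad \text{subject to} \quad
a(u)^{\top} x \;\le\; b(u) \quad \text{for all } u \in \mathcal{U}
```

In the canal example, a schedule of 45 vessels per day that stays feasible for every scenario in the uncertainty set is preferred to a schedule of 48 that becomes infeasible under some of them.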
Over time, Bertsimas’ work has helped structure all kinds of solutions in business logistics; it has even been used for the allocation of school buses in Boston.
More recently, as Bertsimas explained in the lecture, he and his collaborators have been working with Hartford HealthCare in Connecticut on a wide range of issues, and are increasingly incorporating AI into the development of tools for diagnostics, among other things. On the optimization front, their research has suggested ways to reduce the average stay of a hospital patient, from 5.38 days to 4.93 days. In the main Hartford hospital they have studied, given the number of existing beds, that reduction has enabled more than 5,000 additional patient stays per year.
“It’s a very different ballgame,” Bertsimas said.
Bertsimas delivered his lecture, titled “Algorithms for Life: AI and Operations Research Transforming Healthcare, Education, and Agriculture,” to an audience of over 300 MIT community members in Huntington Hall (Room 10-250) on campus.
The award was established in 1971 to honor James Killian, whose distinguished career included serving as MIT’s 10th president, from 1948 to 1959, and subsequently as chair of the MIT Corporation, from 1959 to 1971.
“Professor Bertsimas’ scholarly contributions are both extensive and groundbreaking,” said Roger Levy, chair of the MIT faculty and a professor in the Department of Brain and Cognitive Sciences, while making introductory remarks. “He’s one of the rare individuals who has made significant contributions to both intellectual threads in the field of operations research: one, optimization — combinatorial, linear, and nonlinear — and number two, stochastic processes.”
Indeed, Bertsimas’ work has both advanced the tools for studying and conducting operations and found a wide range of applications. As Bertsimas noted in his lecture, the deaths of both of his parents in 2009 helped propel him to start looking extensively at ways operations research could help health care.
Bertsimas received his BS in electrical engineering and computer science from the National Technical University of Athens in Greece. Moving to MIT for his graduate work, he then earned his MS in operations research and his PhD in applied mathematics and operations research. Bertsimas joined the MIT faculty after receiving his doctorate, and has remained at the Institute ever since.
Bertsimas is also known as an energetic teacher who has been the principal advisor to a remarkable number of PhD students — 106 and counting, at this point.
“It is far and away my favorite activity, to supervise my doctoral students,” Bertsimas said. “It is a privilege, in my opinion, to work with exceptional young people like the ones we have at MIT, in ability and character and aspiration. They actually make me a better scientist, and a better person.”
“MIT is part of my identity,” Bertsimas quipped while noting that he is the only faculty member on campus who has those three letters, in order, in his first name.
In the latter part of the lecture, Bertsimas highlighted work he has been doing as vice provost for open learning at MIT. He has personally developed a large online course based on his own material, “The Analytics Edge.” In his current role, Bertsimas said, he now aspires for MIT to reach a billion learners with online courses, part of his effort to “democratize access to education.”
Bertsimas also demonstrated for the audience some AI tools he and his colleagues are working to bring to online education, including ways of condensing material, and the translation of online material into other languages.
It is just one more chapter in a long and broad-ranging career dedicated to grasping phenomena and developing tools to help us navigate them.
Or as Bertsimas noted while summarizing his scholarship at one point in the lecture, “I try to increase the human understanding of how the world works.”
Bridging medical realities in the study of technology and health
Anthropologist Amy Moran-Thomas studies overlooked insights from people health care is meant to reach.
A few weeks ago, Amy Moran-Thomas and 20 students in her class 21A.311 (The Social Lives of Medical Objects) were gathered around a glucose meter, a jar of test strips, and various spare medical parts in the MIT Museum seminar room, talking about how to make them work better.
The class had just heard a presentation from the president of the Belize Diabetes Association in Dangriga, Norma Flores, a nurse whose hospital had recently received a huge shipment of insulin that, although durable in theory, seemed to have spoiled in a heat wave. Flores and the students discussed whether scientists could develop temperature-stable insulin and design repairable glucose meters and other technologies for hospitals worldwide.
“Whenever people keep saying they are concerned about an issue, but the medical literature doesn’t describe it yet, there is a key question about what’s happening,” says Moran-Thomas. “Ethnography can help us learn about it.”
For Moran-Thomas, an MIT anthropologist, that class session was a way of connecting people and ideas that are too often overlooked. Flores was a central figure in Moran-Thomas’ 2019 book, “Traveling with Sugar: Chronicles of a Global Epidemic,” about diabetes in Belize and the failures of medical technology designed to treat it. (At the end of class, Flores surprised Moran-Thomas with a framed commendation from the Belize Diabetes Association for their nearly 20 years of work together.)
That approach informs all of Moran-Thomas’ work. Currently she is co-leading a group working on a project called the “Sugar Atlas,” mapping the social and economic dimensions of diabetes in the Caribbean, in tandem with scholars Nicole Charles of the University of Toronto and Tonya Haynes of the University of the West Indies. Moran-Thomas has also spent more than a decade following the case of notorious medical experiments that took place in Guatemala in the 1940s, the subject of a recent paper she published with Susan Reverby of Wellesley College.
Closer to home, Moran-Thomas is working on a book about how energy extraction affects chronic conditions and mental health in her native Pennsylvania, at a time of increasing hospital closures. As part of this research, she has been working with MIT seismologist William Frank to develop low-cost sensors that people can use to measure the impact of industrial activity on their home neighborhoods. The research team was recently awarded a grant by the MIT Human Insight Collaborative (MITHIC) for the work. And with another MITHIC grant, Moran-Thomas and several colleagues are working to create a new “Health and Society” educational program at MIT.
“A through line in my work is the question about how to put people at the center of health and medicine,” says Moran-Thomas, an associate professor in MIT’s anthropology program. “Because that’s not how it feels to most people in the world. Care technologies that work for everybody, and health technologies in relation to chronic disease, connect all these different projects.”
The work Moran-Thomas may be best known for occurred in 2020, during the Covid-19 pandemic, when her research recovered an array of neglected clinical studies showing that pulse oximeters functioned differently depending on patients’ skin color. After she published a piece about it in the Boston Review, physicians who found the essay conducted further hospital studies that confirmed a pattern of disproportionately inaccurate readings, leading to subsequent efforts to improve the technology — all stemming from her careful, patient-centric approach.
“What anthropology has to offer the world, and other knowledge systems, is the insights of people that might be missing from many accounts, and highlighting these stories that are getting left out,” Moran-Thomas says. “Those are not footnotes, but the central thing to follow. And those histories are also alive in the material world around us.”
Thinking across medical and climate technologies
After growing up in Pennsylvania, Moran-Thomas majored in literature while earning her BA from American University. She decided to pursue ethnographic research as a graduate student, and entered Princeton University’s program in anthropology, earning an MA in 2008 and her PhD in 2012. After postdoc stints at Princeton and Brown University, Moran-Thomas joined the MIT faculty in 2015.
At Princeton, Moran-Thomas’ dissertation research examined the diabetes epidemic in Belize, forming the basis of her first book, “Traveling with Sugar,” whose title is an expression in Belize for living with diabetes. As she chronicles in the book, plantation-era changes that undermined indigenous agriculture, among other things, contributed to a local economy that made diets sugar-heavy, while medical technologies are often unreliable or ill-suited to local conditions. The book also traces breakdowns in care technologies, such as prosthetic limbs (often sought after diabetes-linked amputations), glucose meters, hyperbaric chambers, insulin supply chains, dialysis machines, and pain management technologies.
“Traveling with Sugar” also develops a critique that has become a theme of Moran-Thomas’ work: that society often shifts the blame for illness onto patients while minimizing the larger-scale factors affecting everyday health.
“There can be this focus on exclusively prevention without care, the implicit assumption that patients need to act differently,” Moran-Thomas says. “Blame falls on individuals and families instead of a focus on other questions. Why are these technologies always breaking down? How are they designed, and by whom, for whom? What role is history playing in the present? And how are people trying to remake those structures?”
Those issues are highlighted in Moran-Thomas’ ongoing project, “Sugar Atlas: Counter-Mapping Diabetes from the Caribbean,” which is backed by a two-year Digital Justice Seed Grant from the American Council of Learned Societies. Whereas international organizations tend to lump North America and the Caribbean together when tracking diabetes, this project zooms in on specific aspects of the disease and its historical and structural contributors in the Caribbean, such as the distance people must travel to buy vegetables, their proximity to insulin supplies, and the ways climate change is affecting sea life and fishing practices.
“We’re trying to create a community platform offering a different vision of these conditions,” Moran-Thomas says of the effort to map otherwise unrecorded aspects of the global diabetes epidemic, while tracing mutual aid networks and people’s “arts of care” in the present.
Better design for common devices
Following her research in Belize, where glucose meters were prone to breaking, Moran-Thomas began focusing more actively on the design of medical technology. At MIT, she began co-teaching a course with tech innovator Jose Gomez-Marquez, 21A.311 (The Social Lives of Medical Objects). The idea was to get students to think about device design that could lead to more durable, fixable, and equitable products.
In turn, Moran-Thomas’ interest in devices led her to question the pulse oximeter readings she started seeing firsthand during the Covid-19 pandemic. Pulse oximeters measure oxygen saturation levels in patients and are a part of even routine appointment check-ins. They work optically, casting beams of light to measure the color of hemoglobin, which varies depending on how much oxygen it contains.
After firsthand encounters with the sensors led to more research, Moran-Thomas learned that some medical professionals had lingering, unanswered questions about pulse oximeters and the way they were calibrated. After she published her essay in the Boston Review, arguing for more data collection, medical researchers examined the issue more closely, finding that patients with darker skin were about three times more likely to have erroneous blood-oxygen readings than patients with lighter skin. Ultimately, an FDA panel recommended changes to the devices.
“A lot of my work has been learning about health and medicine technologies from the perspectives of patients, families, and nurses, rather than beginning with engineers and doctors,” Moran-Thomas says. “Those two projects, about blood sugar and blood oxygen, were about the shortcomings of those devices and how they could be improved. Those are perspectives I can highlight in hopes others will pick up on them and make other kinds of designs and policies possible.”
Moran-Thomas’ interest in device design has continued with her current book project, about the chronic health effects of energy production in Pennsylvania. She has worked with MIT seismologist William Frank, of the Department of Earth, Atmospheric and Planetary Sciences, to construct an inexpensive meter people can use to measure shaking in their homes caused by industrial activities. (Moran-Thomas first got the idea to contact Frank, incidentally, after colleagues in western Pennsylvania reached out with seismic concerns and she read about his work in MIT News.)
The effort is also inspired by guidance from community leaders based at the Center for Coalfield Justice in western Pennsylvania. The research team has received a MITHIC SHASS+ Connectivity grant for their project, “Seismic Collaboratory: Rural Health, Missing Science, and Communicating the Chronic Impacts of Extraction.”
“I’ve met people who have been told by their doctors they must have vertigo, while they thought the walls of their house were really shaking,” Moran-Thomas says. “In a case like that, the device you need is not in the clinic, it’s a monitor at home.”
The book, overall, will examine the effects of energy production on chronic disease and mental health issues in Pennsylvania, something exacerbated by more hospitals being shuttered in the state.
Moran-Thomas is simultaneously working with several co-investigators to create the “Health and Society” educational program at MIT, including Katharina Ribbeck, Erica James, Aleshia Carlsen-Bryan, and Dina Asfaha. Their work was recently awarded an Education Innovation Seed Grant from MITHIC.
From small devices to large-scale changes in health care systems, from the U.S. to other regions, Moran-Thomas remains focused on a core set of issues about how to improve and broaden health care — and lessen the need for it in the first place.
“Thinking across scales is something that’s really useful about anthropology,” Moran-Thomas says. “Even one medical device is a tiny piece of a bigger infrastructure. In order to study that technology or device or sensor, you have to understand the bigger infrastructure it’s attached to, and that there are people involved in all parts of it.”
What’s the right path for AI?
Conference speakers discussed the unfolding trajectory of AI and the benefits of shaping technology to meet people’s needs.
Who benefits from artificial intelligence? This basic question, which has been especially salient during the AI surge of the last few years, was front and center at a conference at MIT on Wednesday, as speakers and audience members grappled with the many dimensions of AI’s impact.
In one of the conference’s keynote talks, journalist Karen Hao ’15 called for an altered trajectory of AI development, including a move away from the massive scale-up of data use, data centers, and models being used to develop tools under the rubric of “artificial general intelligence.”
“This scale is unnecessary,” said Hao, who has become a prominent voice in AI discussions. “You do not need this scale of AI and compute to realize the benefits.” Indeed, she added, “If we really want AI to be broadly beneficial, we urgently need to shift away from this approach.”
Hao is a former staff member at The Wall Street Journal and MIT Technology Review, and author of the 2025 book, “Empire of AI.” She has reported extensively on the growth of the AI industry.
In her remarks, Hao outlined the astonishing size of the datasets now being used by the biggest AI firms to develop large language models. She also emphasized some of the tradeoffs of this scale-up, such as the massive energy consumption and emissions of hyper-scale data centers, which also consume large amounts of water. Drawing on her own reporting, Hao also noted the human toll of the manual data work that global gig-economy workers perform for the hyper-scale models.
By contrast, Hao offered, an alternate path for AI might exist in the example of AlphaFold, the Nobel Prize-winning tool used to identify protein structures. This represents the concept of the “small, task-specific AI model tackling a well-scoped problem that lends itself to the computational strengths of AI,” Hao said.
She added: “It’s trained on highly curated data sets that only have to do with the problem at hand: protein folding and amino acid sequences. … There’s no need for fast supercomputing because the datasets are small, the model is small, and it’s still unlocking enormous benefit.”
In a second keynote address, scholar Paola Ricaurte underscored the desirability of purpose-driven AI approaches, outlining a number of conceptual keys to evaluating the usefulness of AI.
“There is no sense in having technologies that are not going to respond to the communities that are going to use them,” said Ricaurte.
She is a professor at Tecnologico de Monterrey in Mexico and a faculty associate at Harvard University’s Berkman Klein Center for Internet and Society. Ricaurte has also served on expert committees such as the Global Partnership on AI, UNESCO’s AI Ethics Experts Without Borders, and the Women for Ethical AI project.
The event was hosted by the MIT Program in Women’s and Gender Studies. Manduhai Buyandelger, the program’s director and a professor of anthropology, provided introductory remarks.
Titled “Gender, Empire, and AI: Symposium and Design Workshop,” the event was held in the conference space at the MIT Schwarzman College of Computing, with over 300 people in attendance for the keynote talks. There was also a segment of the event devoted to discussion groups, and an afternoon session on design, in a half-dozen different subject areas.
In her talk, Hao decried the often-vague nature of AI discourse, suggesting it impedes a more thoughtful discussion about the industry’s direction.
“Part of the challenge in talking about AI is the complete lack of specificity in the term ‘artificial intelligence,’” Hao said. “It’s like the word ‘transportation.’ You could be referring to anything from a bicycle to a rocket.” As a result, she said, “when we talk about accessing its benefits, we actually have to be very specific. Which AI technologies are we talking about, and which ones do we want more of?”
In her view, the smaller-sized tools — more akin to the bicycle, by analogy — are more useful on an everyday basis. As another example, Hao mentioned the project Climate Change AI, focused on tools that can help improve the energy efficiency of buildings, track emissions, optimize supply chains, forecast extreme weather, and more.
“This is the vision of AI that we should be building towards,” Hao said.
In conclusion, Hao encouraged audience members to be active participants in AI-related discourse and projects, saying the trajectory of the technology was not yet fixed, and that public interventions matter.
Citing the writer Rebecca Solnit, Hao suggested to the audience that “Hope locates itself in the premise that we don’t know what will happen, and that in the spaciousness of uncertainty is room to act.” She also noted, “Each and every one of you has an active role to play in shaping technology development.”
Ricaurte, similarly, encouraged attendees to be proactive participants in AI matters, noting that technologies will work best when the pressing everyday needs of all citizens are addressed.
“We have the responsibility to make hope possible,” Ricaurte said.