The field of tissue engineering aims to replicate the structure and function of real biological tissues. This engineered tissue has potential applications in disease modeling, drug discovery, and implantable grafts.
3D bioprinting, which uses living cells, biocompatible materials, and growth factors to build three-dimensional tissue and organ structures, has emerged as a key tool in the field. To date, one of the most widely used approaches to bioprinting relies on additive manufacturing techniques and digital models: 2D layers of bio-inks, composed of cells suspended in a soft gel, are deposited into a support bath, layer by layer, to build up a 3D structure. While these techniques enable the fabrication of complex architectures with features that are not easy to build manually, current approaches have limitations.
“A major drawback of current 3D bioprinting approaches is that they do not integrate process control methods that limit defects in printed tissues. Incorporating process control could improve inter-tissue reproducibility and enhance resource efficiency, for example limiting material waste,” says Ritu Raman, the Eugene Bell Career Development Chair of Tissue Engineering and an assistant professor of mechanical engineering.
She adds, “Given the diverse array of available 3D bioprinting tools, there is a significant need to develop process optimization techniques that are modular, efficient, and accessible.”
The need motivated Raman to seek the expertise of Professor Bianca Colosimo of the Polytechnic University of Milan, also known as Polimi. Colosimo recently completed a sabbatical at MIT, which was hosted by John Hart, Class of 1922 Professor, co-director of MIT’s Initiative for New Manufacturing, director of the Center for Advanced Production Technologies, and head of the Department of Mechanical Engineering.
“Artificial Intelligence and data mining are already reshaping our daily lives, and their impact will be even more profound in the emerging field of 3D bioprinting, and in manufacturing at large,” says Colosimo. During her MIT sabbatical, she collaborated with Raman and her team to co-develop a solution that represents a first step toward intelligent bioprinting.
“This solution is now available in both our labs at Polimi and MIT, serving as a twin platform to exchange data and results across different environments and paving the way for many new joint projects in the years to come,” Colosimo says.
A new paper by Raman, Colosimo, and lead authors Giovanni Zanderigo, a Rocca Fellow at Polimi, and Ferdows Afghah of MIT published this week in the journal Device presents a novel technique that addresses this challenge. The team built and validated a modular, low-cost, and printer-agnostic monitoring technique that integrates a compact tool for layer-by-layer imaging. In their method, a digital microscope captures high-resolution images of tissues during printing and rapidly compares them to the intended design with an AI-based image analysis pipeline.
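To make the comparison step concrete, here is a minimal Python sketch (illustrative only, not the authors’ pipeline): it assumes the captured layer micrograph has already been registered to the design and binarized into a boolean ink mask, and it reports over- and under-deposition relative to the intended layer.

```python
import numpy as np

def layer_deposition_report(printed_mask: np.ndarray, design_mask: np.ndarray) -> dict:
    """Compare a binarized layer image against the intended design mask.

    Both inputs are boolean arrays of the same shape, where True marks pixels
    that contain (printed_mask) or should contain (design_mask) deposited
    bio-ink. This is a simplified stand-in for the AI-based image analysis
    described in the paper, which also handles registration and lighting.
    """
    design_area = max(int(design_mask.sum()), 1)
    over = int(np.logical_and(printed_mask, ~design_mask).sum())   # ink where none was intended
    under = int(np.logical_and(~printed_mask, design_mask).sum())  # intended regions left unfilled
    union = max(int(np.logical_or(printed_mask, design_mask).sum()), 1)
    overlap = int(np.logical_and(printed_mask, design_mask).sum())
    return {
        "over_deposition_frac": over / design_area,
        "under_deposition_frac": under / design_area,
        "iou": overlap / union,  # intersection-over-union as an overall fidelity score
    }

# Hypothetical example: a square layer that slightly over-extrudes along one edge
design = np.zeros((100, 100), dtype=bool)
design[20:80, 20:80] = True
printed = np.zeros((100, 100), dtype=bool)
printed[20:80, 20:85] = True
print(layer_deposition_report(printed, design))
```

A per-layer report of this kind is the sort of feedback that could inform the choice of print parameters, such as extrusion settings, for subsequent layers or prints.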
“This method enabled us to quickly identify print defects, such as depositing too much or too little bio-ink, thus helping us identify optimal print parameters for a variety of different materials,” says Raman. “The approach is a low-cost — less than $500 — scalable, and adaptable solution that can be readily implemented on any standard 3D bioprinter. Here at MIT, the monitoring platform has already been integrated into the 3D bioprinting facilities in The SHED. Beyond MIT, our research offers a practical path toward greater reproducibility, improved sustainability, and automation in the field of tissue engineering. This research could have a positive impact on human health by improving the quality of the tissues we fabricate to study and treat debilitating injuries and disease.”
The authors indicate that the new method is more than a monitoring tool. It also serves as a foundation for intelligent process control in embedded bioprinting. By enabling real-time inspection, adaptive correction, and automated parameter tuning, the researchers anticipate that the approach can improve reproducibility, reduce material waste, and accelerate process optimization for real-world applications in tissue engineering.
A more precise way to edit the genome
MIT researchers have dramatically lowered the error rate of prime editing, a technique that holds potential for treating many genetic disorders.
A genome-editing technique known as prime editing holds potential for treating many diseases by transforming faulty genes into functional ones. However, the process carries a small chance of inserting errors that could be harmful.
MIT researchers have now found a way to dramatically lower the error rate of prime editing, using modified versions of the proteins involved in the process. This advance could make it easier to develop gene therapy treatments for a variety of diseases, the researchers say.
“This paper outlines a new approach to doing gene editing that doesn’t complicate the delivery system and doesn’t add additional steps, but results in a much more precise edit with fewer unwanted mutations,” says Phillip Sharp, an MIT Institute Professor Emeritus, a member of MIT’s Koch Institute for Integrative Cancer Research, and one of the senior authors of the new study.
With their new strategy, the MIT team was able to improve the error rate of prime editors from about one error in seven edits to one in 101 for the most-used editing mode, or from one error in 121 edits to one in 543 for a high-precision mode.
“For any drug, what you want is something that is effective, but with as few side effects as possible,” says Robert Langer, the David H. Koch Institute Professor at MIT, a member of the Koch Institute, and one of the senior authors of the new study. “For any disease where you might do genome editing, I would think this would ultimately be a safer, better way of doing it.”
Koch Institute research scientist Vikash Chauhan is the lead author of the paper, which appears today in Nature.
The potential for error
The earliest forms of gene therapy, first tested in the 1990s, involved delivering new genes carried by viruses. Subsequently, gene-editing techniques that use enzymes such as zinc finger nucleases to correct genes were developed. These nucleases are difficult to engineer, however, so adapting them to target different DNA sequences is a very laborious process.
Many years later, the CRISPR genome-editing system was discovered in bacteria, offering scientists a potentially much easier way to edit the genome. The CRISPR system consists of an enzyme called Cas9 that can cut double-stranded DNA at a particular spot, along with a guide RNA that tells Cas9 where to cut. Researchers have adapted this approach to cut out faulty gene sequences or to insert new ones, following an RNA template.
In 2019, researchers at the Broad Institute of MIT and Harvard reported the development of prime editing: a new system, based on CRISPR, that is more precise and has fewer off-target effects. A recent study reported that prime editors were successfully used to treat a patient with chronic granulomatous disease (CGD), a rare genetic disease that affects white blood cells.
“In principle, this technology could eventually be used to address many hundreds of genetic diseases by correcting small mutations directly in cells and tissues,” Chauhan says.
One of the advantages of prime editing is that it doesn’t require making a double-stranded cut in the target DNA. Instead, it uses a modified version of Cas9 that cuts just one of the complementary strands, opening up a flap where a new sequence can be inserted. A guide RNA delivered along with the prime editor serves as the template for the new sequence.
Once the new sequence has been copied, however, it must compete with the old DNA strand to be incorporated into the genome. If the old strand outcompetes the new one, the extra flap of new DNA hanging off may accidentally get incorporated somewhere else, giving rise to errors.
Many of these errors might be relatively harmless, but it’s possible that some could eventually lead to tumor development or other complications. With the most recent version of prime editors, this error rate ranges from one per seven edits to one per 121 edits for different editing modes.
“The technologies we have now are really a lot better than earlier gene therapy tools, but there’s always a chance for these unintended consequences,” Chauhan says.
Precise editing
To reduce those error rates, the MIT team decided to take advantage of a phenomenon they had observed in a 2023 study. In that paper, they found that while Cas9 usually cuts in the same DNA location every time, some mutated versions of the protein show a relaxation of those constraints. Instead of always cutting the same location, those Cas9 proteins would sometimes make their cut one or two bases further along the DNA sequence.
This relaxation, the researchers discovered, makes the old DNA strands less stable, so they get degraded, making it easier for the new strands to be incorporated without introducing any errors.
In the new study, the researchers were able to identify Cas9 mutations that dropped the error rate to 1/20th its original value. Then, by combining pairs of those mutations, they created a Cas9 editor that lowered the error rate even further, to 1/36th the original amount.
To make the editors even more accurate, the researchers incorporated their new Cas9 proteins into a prime editing system that has an RNA binding protein that stabilizes the ends of the RNA template more efficiently. This final editor, which the researchers call vPE, had an error rate just 1/60th of the original, ranging from one in 101 edits to one in 543 edits for different editing modes. These tests were performed in mouse and human cells.
The MIT team is now working on further improving the efficiency of prime editors, through further modifications of Cas9 and the RNA template. They are also working on ways to deliver the editors to specific tissues of the body, which is a longstanding challenge in gene therapy.
They also hope that other labs will begin using the new prime editing approach in their research studies. Prime editors are commonly used to explore many different questions, including how tissues develop, how populations of cancer cells evolve, and how cells respond to drug treatment.
“Genome editors are used extensively in research labs,” Chauhan says. “So the therapeutic aspect is exciting, but we are really excited to see how people start to integrate our editors into their research workflows.”
The research was funded by the Life Sciences Research Foundation, the National Institute of Biomedical Imaging and Bioengineering, the National Cancer Institute, and the Koch Institute Support (core) Grant from the National Cancer Institute.
Working to make fusion a viable energy source
As the Norman C. Rasmussen Adjunct Professor, George Tynan is looking forward to addressing the big physics and engineering challenges of fusion plasmas.
George Tynan followed a nonlinear path to fusion.
After his undergraduate degree in aerospace engineering, Tynan's work in the industry spurred his interest in rocket propulsion technology. Because most methods for propulsion involve the manipulation of hot ionized matter, or plasmas, Tynan focused his attention on plasma physics.
It was then that he realized that plasmas could also drive nuclear fusion. “As a potential energy source, it could really be transformative, and the idea that I could work on something that could have that kind of impact on the future was really attractive to me,” he says.
That same drive, to realize the promise of fusion by researching both plasma physics and fusion engineering, drives Tynan today. It’s work he will be pursuing as the Norman C. Rasmussen Adjunct Professor in the Department of Nuclear Science and Engineering (NSE) at MIT.
An early interest in fluid flow
Tynan’s enthusiasm for science and engineering traces back to his childhood. His electrical engineer father found employment in the U.S. space program and moved the family to Cape Canaveral in Florida.
“This was in the ’60s, when we were launching Saturn V to the moon, and I got to watch all the launches from the beach,” Tynan remembers. That experience was formative and Tynan became fascinated with how fluids flow.
“I would stick my hand out the window and pretend it was an airplane wing and tilt it with oncoming wind flow and see how the force would change on my hand,” Tynan laughs. The interest eventually led to an undergraduate degree in aerospace engineering at California State Polytechnic University in Pomona.
The switch to a new career would happen after work in the private sector, when Tynan discovered an interest in the use of plasmas for propulsion systems. He moved to the University of California at Los Angeles for graduate school, and it was here that the realization that plasmas could also anchor fusion moved Tynan into this field.
This was in the ’80s, when climate change was not as much in the public consciousness as it is today. Even so, “I knew there’s not an infinite amount of oil and gas around, and that at some point we would have to have widespread adoption of nuclear-based sources,” Tynan remembers. He was also attracted by the sustained effort it would take to make fusion a reality.
Doctoral work
To create energy from fusion, it’s important to get an accurate measurement of the “energy confinement time,” which is a measure of how long it takes for the hot fuel to cool down when all heat sources are turned off. When Tynan started graduate school, this measure could only be estimated empirically. He decided to focus his research on the physics that sets the observed confinement time.
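For reference, the standard textbook definition (general plasma physics, not specific to Tynan’s experiments) relates the confinement time to the plasma’s stored thermal energy and the power needed to sustain it:

$$\tau_E = \frac{W}{P_{\text{loss}}}$$

where $W$ is the thermal energy stored in the plasma and $P_{\text{loss}}$ is the rate at which that energy leaks away. In steady state, the heating power balances $P_{\text{loss}}$, and when the heating is switched off the stored energy decays on a timescale of roughly $\tau_E$.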
It was during this doctoral research that Tynan was able to study the fundamental differences between the behavior of turbulence in plasmas and in conventional fluids. Typically, when an ordinary fluid is stirred with increasing vigor, the fluid’s motion eventually becomes chaotic, or turbulent. Plasmas, however, can act in a surprising way: when heated sufficiently strongly, confined plasmas spontaneously quench the turbulent transport at the plasma boundary.
An experiment in Germany had unexpectedly discovered this plasma behavior. While subsequent work on other experimental devices confirmed this surprising finding, all earlier experiments lacked the ability to measure the turbulence in detail.
Brian LaBombard, now a senior research scientist at MIT’s Plasma Science and Fusion Center (PSFC), was a postdoc at UCLA at the time. Under LaBombard’s direction, Tynan developed a set of Langmuir probes, which are reasonably simple diagnostics for plasma turbulence studies, to further investigate this unusual phenomenon. It formed the basis for his doctoral dissertation. “I happened to be at the right place at the right time so I could study this turbulence quenching phenomenon in much more detail than anyone else could, up until that time,” Tynan says.
As a PhD student and then postdoc, Tynan studied the phenomenon in depth, shuttling between research facilities in Germany, Princeton University’s Plasma Physics Laboratory, and UCLA.
Fusion at UCSD
After completing his doctorate and postdoctoral work, Tynan had been working at a startup for a few years when he learned that the University of California at San Diego was launching a new fusion research group at its engineering school. When the university reached out, Tynan joined the faculty and built a research program focused on plasma turbulence and plasma-material interactions in fusion systems. Eventually, he became associate dean of engineering, and later, chair of the Department of Mechanical and Aerospace Engineering, serving in these roles for nearly a decade.
Tynan visited MIT on sabbatical in 2023, when his conversations with NSE faculty members Dennis Whyte, Zach Hartwig, and Michael Short excited him about the challenges the private sector faces in making fusion a reality. He saw opportunities to solve important problems at MIT that complemented his work at UC San Diego.
Tynan is excited to tackle what he calls, “the big physics and engineering challenges of fusion plasmas” at NSE: how to remove the heat and exhaust generated by burning plasma so it doesn’t damage the walls of the fusion device and the plasma does not choke on the helium ash. He also hopes to explore robust engineering solutions for practical fusion energy, with a particular focus on developing better materials for use in fusion devices that will make them longer-lasting, while minimizing the production of radioactive waste.
“Ten or 15 years ago, I was somewhat pessimistic that I would ever see commercial exploitation of fusion in my lifetime,” Tynan says. But that outlook has changed, as he has seen collaborations between MIT and Commonwealth Fusion Systems (CFS) and other private-sector firms that seek to accelerate the timeline to the deployment of fusion in the real world.
In 2021, for example, MIT’s PSFC and CFS took a significant step toward commercial carbon-free power generation. They designed and built a high-temperature superconducting magnet, the strongest fusion magnet in the world.
The milestone was especially exciting because the promise of realizing the dream of fusion energy now felt closer. And being at MIT “seemed like a really quick way to get deeply connected with what’s going on in the efforts to develop fusion energy,” Tynan says.
In addition, “while on sabbatical at MIT, I saw how quickly research staff and students can capitalize on a suggestion of a new idea, and that intrigued me,” he adds.
Tynan brings his special blend of expertise to the table. In addition to extensive experience in plasma physics, he has spent a great deal of time on hardcore engineering issues such as materials. “The key is to integrate the whole thing into a workable and viable system,” Tynan says.
Q&A: David Whelihan on the challenges of operating in the Arctic
How do you access and conduct research in one of the world's harshest and most demanding environments?
To most, the Arctic can feel like an abstract place, difficult to imagine beyond images of ice and polar bears. But researcher David Whelihan of MIT Lincoln Laboratory's Advanced Undersea Systems and Technology Group is no stranger to the Arctic. Through Operation Ice Camp, a U.S. Navy–sponsored biennial mission to assess operational readiness in the Arctic region, he has traveled to this vast and remote wilderness twice over the past few years to test low-cost sensor nodes developed by the group to monitor loss in Arctic sea ice extent and thickness. The research team envisions establishing a network of such sensors across the Arctic that will persistently detect ice-fracturing events and correlate these events with environmental conditions to provide insights into why the sea ice is breaking up. Whelihan shared his perspectives on why the Arctic matters and what operating there is like.
Q: Why do we need to be able to operate in the Arctic?
A: Spanning approximately 5.5 million square miles, the Arctic is huge, and one of its salient features is that the ice covering much of the Arctic Ocean is decreasing in volume with every passing year. Melting ice opens up previously impassable areas, resulting in increasing interest from potential adversaries and allies alike for activities such as military operations, commercial shipping, and natural resource extraction. Through Alaska, the United States has approximately 1,060 miles of Arctic coastline that is becoming much more accessible because of reduced ice cover. So, U.S. operation in the Arctic is a matter of national security.
Q: What are the technological limitations to Arctic operations?
A: The Arctic is an incredibly harsh environment. The cold kills battery life, so collecting sensor data at high rates over long periods of time is very difficult. The ice is dynamic and can easily swallow or crush sensors. In addition, most deployments involve "boots-on-the-ice," which is expensive and at times dangerous. One of the technological limitations is how to deploy sensors while keeping humans alive.
Q: How does the group's sensor node R&D work seek to support Arctic operations?
A: A lot of the work we put into our sensors pertains to deployability. Our ultimate goal is to free researchers from going onto the ice to deploy sensors. This goal will become increasingly necessary as the shrinking ice pack becomes more dynamic, unstable, and unpredictable. At the last Operation Ice Camp (OIC) in March 2024, we built and rapidly tested deployable and recoverable sensors, as well as novel concepts such as using UAVs (uncrewed aerial vehicles), or drones, as "data mules" that can fly out to and interrogate the sensors to see what they captured. We also built a prototype wearable system that cues automatic download of sensor data over Wi-Fi so that operators don't have to take off their gloves.
Q: The Arctic Circle is the northernmost region on Earth. How do you reach this remote place?
A: We usually fly on commercial airlines from Boston to Seattle to Anchorage to Prudhoe Bay on the North Slope of Alaska. From there, the Navy flies us on small prop planes, like Single and Twin Otters, about 200 miles north and lands us on an ice runway built by the Navy's Arctic Submarine Lab (ASL). The runway is part of a temporary camp that ASL establishes on floating sea ice for their operational readiness exercises conducted during OIC.
Q: Think back to the first time you stepped foot in the Arctic. Can you paint a picture of what you experienced?
A: My first experience was at Prudhoe Bay, coming out of the airport, which is a corrugated metal building with a single gate. Before you open the door to the outside, a sign warns you to be on the lookout for polar bears. Walking out into the sheer desolation and blinding whiteness of everything made me realize I was experiencing something very new.
When I flew out onto the ice and stepped out of the plane, I was amazed that the area could somehow be even more desolate. Bright white snowy ice goes in every direction, broken up by pressure ridges that form when ice sheets collide. The sun is low, and seems to move horizontally only. It is very hard to tell the time. The air temperature is really variable. On our first trip in 2022, it really wasn't (relatively) that cold — only around minus 5 or 10 degrees during the day. On our second trip in 2024, we were hit by minus 30 almost every day, and with winds of 20 to 25 miles per hour. The last night we were on the ice that year, it warmed up a bit to minus 10 to 20, but the winds kicked up and started blowing snow onto the heaters attached to our tents. Those heaters started failing one by one as the blowing snow covered them, blocking airflow. After our heater failed, I asked myself, while warm in my bed, whether I wanted to go outside to the command tent for help or try to make it until dawn in my thick sleeping bag. I picked the first option, but mostly because the heater control was beeping loudly right next to my bunk, so I couldn’t sleep anyway. Shout-out to the ASL staff who ran around fixing heaters all night!
Q: How do you survive in a place generally inhospitable to humans?
A: In partnership with the native population, ASL brings a lot of gear — from insulated, heated tents and communications equipment to large snowblowers to keep the runway clear. A few months before OIC, participants attend training on what conditions you will be exposed to and how to protect yourself through appropriate clothing, and how to use survival gear in case of an emergency.
Q: Do you have plans to return to the Arctic?
A: We are hoping to go back this winter as part of OIC 2026! We plan to test a through-ice communication device. Communicating through 4 to 12 feet of ice is pretty tricky but could allow us to connect underwater drones and stationary sensors under the ice to the rest of the world. To support the through-ice communication system, we will repurpose our sensor-node boxes deployed during OIC 2024. If this setup works, those same boxes could be used as control centers for all sorts of undersea systems and relay information about the under-ice world back home via satellite.
Q: What lessons learned will you bring to your upcoming trip, and any potential future trips?
A: After the first trip, I had a visceral understanding of how hard operating there is. Prototyping of systems becomes a different game. Prototypes are often fragile, but fragility doesn't go over too well on the ice. So, there is a robustification step, which can take some time.
On this last trip, I realized that you have to really be careful with your energy expenditure and pace yourself. While the average adult may require about 2,000 calories a day, an Arctic explorer may burn several times more than that exerting themselves (we do a lot of walking around camp) and keeping warm. Usually, we live on the same freeze-dried food that you would take on camping trips. Each package only has so many calories, so you find yourself eating multiple of those and supplementing with lots of snacks such as Clif Bars or, my favorite, Babybel cheeses (which I bring myself). You also have to be really careful of dehydration. Your body's reaction to extreme cold is to reduce blood flow to your skin, which generally results in less liquid in your body. We have to drink constantly — water, cocoa, and coffee — to avoid dehydration.
We only have access to the ice every two years with the Navy, so we try to make the most of our time. In the several-day lead-up to our field expedition, my research partner Ben and I were really pushing ourselves to ready our sensor nodes for deployment and probably not eating and drinking as regularly as we should. When we ventured to our sensor deployment site about 5 kilometers outside of camp, I had to learn to slow down so I didn't sweat under my gear, as sweating in the extremely cold conditions can quickly lead to hypothermia. I also learned to pay more attention to exposed places on my face, as I got a bit of frostnip around my goggles.
Operating in the Arctic is a fine balance: you can't spend too much time out there, but you also can't rush.
Decoding the sounds of battery formation and degradation
New findings could provide a way to monitor batteries for sounds that could guide manufacturing, indicate remaining usable life, or flag potential safety issues.
Before batteries lose power, fail suddenly, or burst into flames, they tend to produce faint sounds over time that provide a signature of the degradation processes going on within their structure. But until now, nobody had figured out how to interpret exactly what those sounds meant, and how to distinguish between ordinary background noise and significant signs of possible trouble.
Now, a team of researchers in MIT’s Department of Chemical Engineering has done a detailed analysis of the sounds emanating from lithium-ion batteries, and has been able to correlate particular sound patterns with specific degradation processes taking place inside the cells. The new findings could provide the basis for relatively simple, totally passive, and nondestructive devices that could continuously monitor the health of battery systems, for example in electric vehicles or grid-scale storage facilities, to provide ways of predicting useful operating lifetimes and forecasting failures before they occur.
The findings were reported Sept. 5 in the journal Joule, in a paper by MIT graduate students Yash Samantaray and Alexander Cohen, former MIT research scientist Daniel Cogswell PhD ’10, and Chevron Professor of Chemical Engineering and professor of mathematics Martin Z. Bazant.
“In this study, through some careful scientific work, our team has managed to decode the acoustic emissions,” Bazant says. “We were able to classify them as coming from gas bubbles that are generated by side reactions, or by fractures from the expansion and contraction of the active material, and to find signatures of those signals even in noisy data.”
Samantaray explains: “I think the core of this work is to look at a way to investigate internal battery mechanisms while they’re still charging and discharging, and to do this nondestructively.” He adds, “Out there in the world now, there are a few methods that exist, but most are very expensive and not really conducive to batteries in their normal format.”
To carry out their analysis, the team coupled electrochemical testing with recording of the acoustic emissions, under real-world charging and discharging conditions, using detailed signal processing to correlate the electrical and acoustic data. By doing so, he says, “we were able to come up with a very cost-effective and efficient method of actually understanding gas generation and fracture of materials.”
Gas generation and fracturing are two primary mechanisms of degradation and failure in batteries, so being able to detect and distinguish those processes, just by monitoring the sounds produced by the batteries, could be a significant tool for those managing battery systems.
Previous approaches have simply monitored the sounds and recorded times when the overall sound level exceeded some threshold. But in this work, by simultaneously monitoring the voltage and current as well as the sound characteristics, Bazant says, “We know that [sound] emissions happen at a certain potential [voltage], and that helps us identify what the process might be that is causing that emission.”
After these tests, they would then take the batteries apart and study them under an electron microscope to detect fracturing of the materials.
In addition, they took a wavelet transform — essentially, a way of encoding the frequency and duration of each signal that is captured, providing distinct signatures that can then be more easily extracted from background noise. “No one had done that before,” Bazant says, “so that was another breakthrough.”
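As a generic illustration of that idea (a sketch with assumed values, not the study’s code or instrumentation settings), a continuous wavelet transform can turn a recorded acoustic-emission burst into a time-frequency signature whose dominant frequency and duration can then be lined up against the voltage and current at the moment the event occurred:

```python
import numpy as np
import pywt  # PyWavelets

fs = 1_000_000                      # assumed 1 MHz sampling rate, for illustration
t = np.arange(0, 2e-3, 1 / fs)      # a 2-millisecond recording window

# Synthetic acoustic-emission "hit": a short 150 kHz burst buried in background noise
signal = 0.02 * np.random.randn(t.size)
burst = (t > 0.8e-3) & (t < 1.0e-3)
signal[burst] += np.sin(2 * np.pi * 150e3 * t[burst])

# Continuous wavelet transform with a Morlet wavelet gives power vs. time and frequency
scales = np.arange(1, 64)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)
power = np.abs(coeffs) ** 2

# A simple "signature": the dominant frequency and the approximate event duration
peak_scale, _ = np.unravel_index(power.argmax(), power.shape)
above = power[peak_scale] > 0.5 * power[peak_scale].max()
print(f"dominant frequency ~ {freqs[peak_scale] / 1e3:.0f} kHz")
print(f"approximate duration ~ {above.sum() / fs * 1e6:.0f} microseconds")
```

Signatures like these, time-stamped against the cell’s voltage and current, are the kind of features that help distinguish gas generation from fracturing, rather than simply counting loud events.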
Acoustic emissions are widely used in engineering, he points out, for example to monitor structures such as bridges for signs of incipient failure. “It’s a great way to monitor a system,” he says, “because those emissions are happening whether you’re listening to them or not,” so by listening, you can learn something about internal processes that would otherwise be invisible.
With batteries, he says, “we often have a hard time interpreting the voltage and current information as precisely as we’d like, to know what’s happening inside a cell. And so this offers another window into the cell’s state of health, including its remaining useful life, and safety, too.” In a related paper with Oak Ridge National Laboratory researchers, the team has shown that acoustic emissions can provide an early warning of thermal runaway, a situation that can lead to fires if not caught. The new study suggests that these sounds can be used to detect gas generation prior to combustion, “like seeing the first tiny bubbles in a pot of heated water, long before it boils,” says Bazant.
The next step will be to take this new knowledge of how certain sounds relate to specific conditions, and develop a practical, inexpensive monitoring system based on this understanding. For example, the team has a grant from Tata Motors to develop a battery monitoring system for its electric vehicles. “Now, we know what to look for, and how to correlate that with lifetime and health and safety,” Bazant says.
One possible application of this new understanding, Samantaray says, is “as a lab tool for groups that are trying to develop new materials or test new environments, so they can actually determine gas generation or active material fracturing without having to open up the battery.”
Bazant adds that the system could also be useful for quality control in battery manufacturing. “The most expensive and rate-limiting process in battery production is often the formation cycling,” he says. This is the process where batteries are cycled through charging and discharging to break them in, and part of that process involves chemical reactions that release some gas. The new system would allow detection of these gas formation signatures, he says, “and by sensing them, it may be easier to isolate well-formed cells from poorly formed cells very early, even before the useful life of the battery, when it’s being made.”
The work was supported by the Toyota Research Institute, the Center for Battery Sustainability, the National Science Foundation, and the Department of Defense, and made use of the facilities of MIT.nano.
A new community for computational science and engineering
The stand-alone PhD program is building connections and preparing students to make a difference.
For the past decade, MIT has offered doctoral-level study in computational science and engineering (CSE) exclusively through an interdisciplinary program designed for students applying computation within a specific science or engineering field.
As interest grew among students focused primarily on advancing CSE methodology itself, it became clear that a dedicated academic home for this group — students and faculty deeply invested in the foundations of computational science and engineering — was needed.
Now, with a stand-alone CSE PhD program, they have not only a space for fostering discovery in the cross-cutting methodological dimensions of computational science and engineering, but also a tight-knit community.
“This program recognizes the existence of computational science and engineering as a discipline in and of itself, so you don’t have to be doing this work through the lens of mechanical or chemical engineering, but instead in its own right,” says Nicolas Hadjiconstantinou, co-director of the Center for Computational Science and Engineering (CCSE).
Offered by CCSE and launched in 2023, the stand-alone program blends both coursework and a thesis, much like other MIT PhD programs, yet its methodological focus sets it apart from other Institute offerings.
“What’s unique about this program is that it’s not hosted by one specific department. The stand-alone program is, at its core, about computational science and cross-cutting methodology. We connect this research with people in a lot of different application areas. We have oceanographers, people doing materials science, students with a focus on aeronautics and astronautics, and more,” says outgoing co-director Youssef Marzouk, now the associate dean of the MIT Schwarzman College of Computing.
Expanding horizons
Hadjiconstantinou, the Quentin Berg Professor of Mechanical Engineering, and Marzouk, the Breene M. Kerr Professor of Aeronautics and Astronautics, have led the center’s efforts since 2018, and developed the program and curriculum together. The duo was intentional about crafting a program that fosters students’ individual research while also exposing them to all the field has to offer.
To expand students’ horizons and continue to build a collaborative community, the PhD in CSE program features two popular seminar series: weekly community seminars that focus primarily on internal speakers (current graduate students, postdocs, research scientists, and faculty), and monthly distinguished seminars in CSE, which are Institute-wide and bring external speakers from various institutions and industry roles.
“Something surprising about the program has been the seminars. I thought it would be the same people I see in my classes and labs, but it’s much broader than that,” says Emily Williams, a fourth-year PhD student and a Department of Energy Computational Science graduate fellow. “One of the most interesting seminars was around simulating fluid flow for biomedical applications. My background is in fluids, so I understand that part, but seeing it applied in a totally different domain than what I work in was eye-opening,” says Williams.
That seminar, “Astrophysical Fluid Dynamics at Exascale,” presented by James Stone, a professor in the School of Natural Sciences at the Institute for Advanced Study and at Princeton University, represented one of many opportunities for CSE students to engage with practitioners in small groups, gaining academic insight as well as a wider perspective on future career paths.
Designing for impact
The interdisciplinary PhD program served as a departure point from which Hadjiconstantinou and Marzouk created a new offering that was uniquely its own.
For Marzouk, that meant designing the stand-alone program so it can continually grow and pivot to stay relevant as technology accelerates: “In my view, the vitality of this program is that science and engineering applications nowadays rest on computation in a really foundational way, whether it’s engineering design or scientific discovery. So it’s essential to perform research on the building blocks of this kind of computation. This research also has to be shaped by the way that we apply it so that scientists or engineers will actually use it,” Marzouk says.
The curriculum is structured around six core focus areas, or “ways of thinking,” that are fundamental to CSE.
Students select and build their own thesis committee that consists of faculty from across MIT, not just those associated with CCSE. The combination of a curriculum that’s “modern and applicable to what employers are looking for in industry and academics,” according to Williams, and the ability to build your own group of engaged advisors allows for a level of specialization that’s hard to find elsewhere.
“Academically, I feel like this program is designed in such a flexible and interdisciplinary way. You have a lot of control in terms of which direction you want to go in,” says Rosen Yu, a PhD student. Yu’s research is focused on engineering design optimization, an interest she discovered during her first year of research at MIT with Professor Faez Ahmed. The CSE PhD was about to launch, and it became clear that her research interests skewed more toward computation than the existing mechanical engineering degree; it was a natural fit.
“At other schools, you often see just a pure computer science program or an engineering department with hardly any intersection. But this CSE program, I like to say it’s like a glue between these two communities,” says Yu.
That “glue” is strengthening, with more students matriculating each year and more Institute faculty and staff becoming affiliated with CSE. While students’ thesis topics range from Williams’ stochastic methods for model reduction of multiscale chaotic systems to scalable and robust GPU-based optimization for energy systems, the goal of the program remains the same: develop students and research that will make a difference.
“That's why MIT is an ‘Institute of Technology’ and not a ‘university.’ There’s always this question, no matter what you’re studying: what is it good for? Our students will go on to work in systems biology, simulators of climate models, electrification, hypersonic vehicles, and more, but the whole point is that their research is helping with something,” says Hadjiconstantinou.
How to build AI scaling laws for efficient LLM training and budget maximization
MIT-IBM Watson AI Lab researchers have developed a universal guide for estimating how large language models will perform based on smaller models in the same family.
When researchers are building large language models (LLMs), they aim to maximize performance under a particular computational and financial budget. Since training a model can amount to millions of dollars, developers need to be judicious with cost-impacting decisions about, for instance, the model architecture, optimizers, and training datasets before committing to a model. To anticipate the quality and accuracy of a large model’s predictions, practitioners often turn to scaling laws: using smaller, cheaper models to try to approximate the performance of a much larger target model. The challenge, however, is that there are thousands of ways to create a scaling law.
New work from MIT and MIT-IBM Watson AI Lab researchers addresses this by amassing and releasing a collection of hundreds of models and metrics concerning training and performance to approximate more than a thousand scaling laws. From this, the team developed a meta-analysis and guide for how to select small models and estimate scaling laws for different LLM model families, so that the budget is optimally applied toward generating reliable performance predictions.
“The notion that you might want to try to build mathematical models of the training process is a couple of years old, but I think what was new here is that most of the work that people had been doing before is saying, ‘can we say something post-hoc about what happened when we trained all of these models, so that when we’re trying to figure out how to train a new large-scale model, we can make the best decisions about how to use our compute budget?’” says Jacob Andreas, associate professor in the Department of Electrical Engineering and Computer Science and principal investigator with the MIT-IBM Watson AI Lab.
The research was recently presented at the International Conference on Machine Learning by Andreas, along with MIT-IBM Watson AI Lab researchers Leshem Choshen and Yang Zhang of IBM Research.
Extrapolating performance
No matter how you slice it, developing LLMs is an expensive endeavor: from decision-making regarding the numbers of parameters and tokens, data selection and size, and training techniques to determining output accuracy and tuning to the target applications and tasks. Scaling laws offer a way to forecast model behavior by relating a large model’s loss to the performance of smaller, less-costly models from the same family, avoiding the need to fully train every candidate. Mainly, the differences between the smaller models are the number of parameters and the number of training tokens. According to Choshen, elucidating scaling laws not only enables better pre-training decisions, but also democratizes the field by enabling researchers without vast resources to understand and build effective scaling laws.
The functional form of scaling laws is relatively simple, incorporating components from the small models that capture the number of parameters and their scaling effect, the number of training tokens and their scaling effect, and the baseline performance for the model family of interest. Together, they help researchers estimate a target large model’s performance loss; the smaller the loss, the better the target model’s outputs are likely to be.
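One widely used parametric form (shown here as a representative example; the study fits and compares many variants rather than a single canonical formula) is the Chinchilla-style law

$$\hat{L}(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},$$

where $N$ is the number of parameters, $D$ is the number of training tokens, $E$ is the baseline loss for the model family, and $A$, $\alpha$, $B$, $\beta$ capture how the loss falls as parameters and data grow. The constants are fit to the losses of smaller models in the family, and the law is then evaluated at the target model’s $N$ and $D$.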
These laws allow research teams to weigh trade-offs efficiently and to test how best to allocate limited resources. They’re particularly useful for evaluating scaling of a certain variable, like the number of tokens, and for A/B testing of different pre-training setups.
In general, scaling laws aren’t new; however, in the field of AI, they emerged as models grew and costs skyrocketed. “It’s like scaling laws just appeared at some point in the field,” says Choshen. “They started getting attention, but no one really tested how good they are and what you need to do to make a good scaling law.” Further, scaling laws were themselves also a black box, in a sense. “Whenever people have created scaling laws in the past, it has always just been one model, or one model family, and one dataset, and one developer,” says Andreas. “There hadn’t really been a lot of systematic meta-analysis, as everybody is individually training their own scaling laws. So, [we wanted to know,] are there high-level trends that you see across those things?”
Building better
To investigate this, Choshen, Andreas, and Zhang created a large dataset. They collected LLMs from 40 model families, including Pythia, OPT, OLMO, LLaMA, Bloom, T5-Pile, ModuleFormer mixture-of-experts, GPT, and other families. These included 485 unique, pre-trained models and, where available, data about their training checkpoints, computational cost (FLOPs), training epochs, and the seed, along with 1.9 million performance metrics of loss and downstream tasks. The models differed in their architectures, weights, and so on. Using these models, the researchers fit over 1,000 scaling laws and compared their accuracy across architectures, model sizes, and training regimes, as well as testing how the number of models, the inclusion of intermediate training checkpoints, and partial training impacted the predictive power of scaling laws for target models. They used measurements of absolute relative error (ARE): the gap between the scaling law’s prediction and the observed loss of a large, trained model, expressed as a fraction of that observed loss. With this, the team compared the scaling laws, and after analysis, distilled practical recommendations for AI practitioners about what makes effective scaling laws.
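As a hedged sketch of what fitting such a law and scoring it with ARE might look like (the model sizes, token counts, and losses below are hypothetical, and the paper’s fitting procedure is more careful than a plain least-squares fit), assuming the Chinchilla-style form above:

```python
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(ND, E, A, alpha, B, beta):
    """Chinchilla-style loss prediction from parameter count N and token count D."""
    N, D = ND
    return E + A / N**alpha + B / D**beta

# Hypothetical (N, D, loss) observations from small models in one family
N = np.array([7e7, 1.6e8, 4.1e8, 1.0e9, 2.8e9])
D = np.array([1.5e10, 3.0e10, 6.0e10, 1.2e11, 2.4e11])
loss = np.array([3.9, 3.5, 3.1, 2.8, 2.6])

p0 = [1.8, 400.0, 0.3, 400.0, 0.3]              # rough initial guess for (E, A, alpha, B, beta)
params, _ = curve_fit(scaling_law, (N, D), loss, p0=p0, maxfev=20000)

# Extrapolate to a larger target model and score the prediction with ARE
N_target, D_target = 1.3e10, 2.6e11
predicted = scaling_law((N_target, D_target), *params)
observed = 2.35                                  # would come from actually training the target model
are = abs(predicted - observed) / observed       # absolute relative error
print(f"predicted loss {predicted:.3f}, ARE {are:.1%}")
```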
Their shared guidelines walk the developer through the steps and options to consider, and what to expect. First, it’s critical to decide on a compute budget and a target model accuracy. The team found that 4 percent ARE is about the best achievable accuracy one could expect due to random seed noise, but up to 20 percent ARE is still useful for decision-making. The researchers identified several factors that improve predictions, like including intermediate training checkpoints rather than relying only on final losses; this made scaling laws more reliable. However, data from very early in training, before about 10 billion tokens, are noisy, reduce accuracy, and should be discarded. They recommend prioritizing the training of more models across a spread of sizes, not just larger models, to improve the robustness of the scaling law’s prediction; selecting five models provides a solid starting point.
Generally, including larger models improves prediction, but costs can be saved by partially training the target model to about 30 percent of its dataset and using that for extrapolation. If the budget is considerably constrained, developers should consider training one smaller model within the target model family and borrow scaling law parameters from a model family with similar architecture; however, this may not work for encoder–decoder models. Lastly, the MIT-IBM research group found that when scaling laws were compared across model families, there was strong correlation between two sets of hyperparameters, meaning that three of the five hyperparameters explained nearly all of the variation and could likely capture the model behavior. Together, these guidelines provide a systematic approach to making scaling law estimation more efficient, reliable, and accessible for AI researchers working under varying budget constraints.
Several surprises arose during this work: small models partially trained are still very predictive, and, further, the intermediate training stages of a fully trained model can be used (as if they were individual models) to predict another target model. “Basically, you don’t pay anything in the training, because you already trained the full model, so the half-trained model, for instance, is just a byproduct of what you did,” says Choshen. Another feature Andreas pointed out was that, when aggregated, the variability across model families and different experiments stood out as noisier than expected. Unexpectedly, the researchers found that it’s possible to use scaling laws fit on large models to predict the performance of smaller models. Other research in the field has hypothesized that smaller models were a “different beast” compared to large ones; however, Choshen disagrees. “If they’re totally different, they should have shown totally different behavior, and they don’t.”
While this work focused on model training time, the researchers plan to extend their analysis to model inference. Andreas says it’s not, “how does my model get better as I add more training data or more parameters, but instead as I let it think for longer, draw more samples. I think there are definitely lessons to be learned here about how to also build predictive models of how much thinking you need to do at run time.” He says the theory of inference time scaling laws might become even more critical because, “it’s not like I'm going to train one model and then be done. [Rather,] it’s every time a user comes to me, they’re going to have a new query, and I need to figure out how hard [my model needs] to think to come up with the best answer. So, being able to build those kinds of predictive models, like we’re doing in this paper, is even more important.”
This research was supported, in part, by the MIT-IBM Watson AI Lab and a Sloan Research Fellowship.
MIT geologists discover where energy goes during an earthquake
Based on mini “lab-quakes” in a controlled setting, the findings could help researchers assess the vulnerability of quake-prone regions.
The ground-shaking that an earthquake generates is only a fraction of the total energy that a quake releases. A quake can also generate a flash of heat, along with a domino-like fracturing of underground rocks. But exactly how much energy goes into each of these three processes is exceedingly difficult, if not impossible, to measure in the field.
Now MIT geologists have traced the energy that is released by “lab quakes” — miniature analogs of natural earthquakes that are carefully triggered in a controlled laboratory setting. For the first time, they have quantified the complete energy budget of such quakes, in terms of the fraction of energy that goes into heat, shaking, and fracturing.
They found that only about 10 percent of a lab quake’s energy causes physical shaking. An even smaller fraction — less than 1 percent — goes into breaking up rock and creating new surfaces. The overwhelming portion of a quake’s energy — on average 80 percent — goes into heating up the immediate region around a quake’s epicenter. In fact, the researchers observed that a lab quake can produce a temperature spike hot enough to melt surrounding material and turn it briefly into liquid melt.
The geologists also found that a quake’s energy budget depends on a region’s deformation history — the degree to which rocks have been shifted and disturbed by previous tectonic motions. The fractions of quake energy that produce heat, shaking, and rock fracturing can shift depending on what the region has experienced in the past.
“The deformation history — essentially what the rock remembers — really influences how destructive an earthquake could be,” says Daniel Ortega-Arroyo, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “That history affects a lot of the material properties in the rock, and it dictates to some degree how it is going to slip.”
The team’s lab quakes are a simplified analog of what occurs during a natural earthquake. Down the road, their results could help seismologists predict the likelihood of earthquakes in regions that are prone to seismic events. For instance, if scientists have an idea of how much shaking a quake generated in the past, they might be able to estimate the degree to which the quake’s energy also affected rocks deep underground by melting or breaking them apart. This in turn could reveal how much more or less vulnerable the region is to future quakes.
“We could never reproduce the complexity of the Earth, so we have to isolate the physics of what is happening, in these lab quakes,” says Matěj Peč, associate professor of geophysics at MIT. “We hope to understand these processes and try to extrapolate them to nature.”
Peč (pronounced “Peck”) and Ortega-Arroyo reported their results on Aug. 28 in the journal AGU Advances. Their MIT co-authors are Hoagy O’Ghaffari and Camilla Cattania, along with Zheng Gong and Roger Fu at Harvard University and Markus Ohl and Oliver Plümper at Utrecht University in the Netherlands.
Under the surface
Earthquakes are driven by energy that is stored up in rocks over millions of years. As tectonic plates slowly grind against each other, stress accumulates through the crust. When rocks are pushed past their material strength, they can suddenly slip along a narrow zone, creating a geologic fault. As rocks slip on either side of the fault, they produce seismic waves that ripple outward and upward.
We perceive an earthquake’s energy mainly in the form of ground shaking, which can be measured using seismometers and other ground-based instruments. But the other two major forms of a quake’s energy — heat and underground fracturing — are largely inaccessible with current technologies.
“Unlike the weather, where we can see daily patterns and measure a number of pertinent variables, it’s very hard to do that very deep in the Earth,” Ortega-Arroyo says. “We don’t know what’s happening to the rocks themselves, and the timescales over which earthquakes repeat within a fault zone are on century-to-millennia timescales, making any sort of actionable forecast challenging.”
To get an idea of how an earthquake’s energy is partitioned, and how that energy budget might affect a region’s seismic risk, he and Peč went into the lab. Over the last seven years, Peč’s group at MIT has developed methods and instrumentation to simulate seismic events, at the microscale, in an effort to understand how earthquakes at the macroscale may play out.
“We are focusing on what’s happening on a really small scale, where we can control many aspects of failure and try to understand it before we can do any scaling to nature,” Ortega-Arroyo says.
Microshakes
For their new study, the team generated miniature lab quakes that simulate a seismic slipping of rocks along a fault zone. They worked with small samples of granite, which are representative of rocks in the seismogenic layer — the geologic region in the continental crust where earthquakes typically originate. They ground up the granite into a fine powder and mixed the crushed granite with a much finer powder of magnetic particles, which they used as a sort of internal temperature gauge. (A particle’s magnetic field strength will change in response to a fluctuation in temperature.)
The researchers placed samples of the powdered granite — each about 10 square millimeters in area and 1 millimeter thick — between two small pistons and wrapped the ensemble in a gold jacket. They then applied a strong magnetic field to orient the powder’s magnetic particles in the same initial direction and to the same field strength. They reasoned that any change in the particles’ orientation and field strength afterward should be a sign of how much heat that region experienced as a result of any seismic event.
Once samples were prepared, the team placed them one at a time into a custom-built apparatus that the researchers tuned to apply steadily increasing pressure, similar to the pressures that rocks experience in the Earth’s seismogenic layer, about 10 to 20 kilometers below the surface. They used custom-made piezoelectric sensors, developed by co-author O’Ghaffari, which they attached to either end of a sample to measure any shaking that occurred as they increased the stress on the sample.
They observed that at certain stresses, some samples slipped, producing a microscale seismic event similar to an earthquake. By analyzing the magnetic particles in the samples after the fact, they obtained an estimate of how much each sample was temporarily heated — a method developed in collaboration with Roger Fu’s lab at Harvard University. They also estimated the amount of shaking each sample experienced, using measurements from the piezoelectric sensor and numerical models. The researchers also examined each sample under the microscope, at different magnifications, to assess how the size of the granite grains changed — whether and how many grains broke into smaller pieces, for instance.
From all these measurements, the team was able to estimate each lab quake’s energy budget. On average, they found that about 80 percent of a quake’s energy goes into heat, while 10 percent generates shaking, and less than 1 percent goes into rock fracturing, or creating new, smaller particle surfaces.
“In some instances we saw that, close to the fault, the sample went from room temperature to 1,200 degrees Celsius in a matter of microseconds, and then immediately cooled down once the motion stopped,” Ortega-Arroyo says. “And in one sample, we saw the fault move by about 100 microns, which implies slip velocities essentially about 10 meters per second. It moves very fast, though it doesn’t last very long.”
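For intuition, those two figures are mutually consistent with a slip event lasting on the order of 10 microseconds (a back-of-the-envelope check, not a number reported in the paper):

$$\Delta t \approx \frac{d}{v_{\text{slip}}} = \frac{100\ \mu\text{m}}{10\ \text{m/s}} = 10\ \mu\text{s}.$$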
The researchers suspect that similar processes play out in actual, kilometer-scale quakes.
“Our experiments offer an integrated approach that provides one of the most complete views of the physics of earthquake-like ruptures in rocks to date,” Peč says. “This will provide clues on how to improve our current earthquake models and natural hazard mitigation.”
This research was supported, in part, by the National Science Foundation.
How to get your business into the flow
In a new book, “There’s Got to Be a Better Way,” two MIT management innovators explain how to think flexibly about improving an organization.
In the late 1990s, a Harley-Davidson executive named Donald Kieffer became general manager of a company engine plant near Milwaukee. The iconic motorcycle maker had forged a celebrated comeback, and Kieffer, who learned manufacturing on the shop floor, had been part of it. Now Kieffer wanted to make his facility better. So he arranged for a noted Toyota executive, Hajime Oba, to pay a visit.
The meeting didn’t go as Kieffer expected. Oba walked around the plant for 45 minutes, diagrammed the setup on a whiteboard, and suggested one modest change. As a high-ranking manager, Kieffer figured he had to make far-reaching upgrades. Instead, Oba asked him, “What is the problem you are trying to solve?”
Oba’s point was subtle. Harley-Davidson had a good plant that could get better, but not by imposing grand, top-down plans. The key was to fix workflow issues the employees could identify. Even a small fix can have large effects, and, anyway, a modestly useful change is better than a big, formulaic makeover that derails things. So Kieffer took Oba’s prompt and started making specific, useful changes.
“Organizations are dynamic places, and when we try to impose a strict, static structure on them, we drive all that dynamism underground,” says MIT professor of management Nelson Repenning. “And the waste and chaos it creates is 100 times more expensive than people anticipate.”
Now Kieffer and Repenning have written a book about flexible, sensible organizational improvement, “There’s Got to Be a Better Way,” published by PublicAffairs. They call their approach “dynamic work design,” which aims to help firms refine their workflow — and to stop people from making it worse through overconfident, cookie-cutter prescriptions.
“So much of management theory presumes we can predict the future accurately, including our impact on it,” Repenning says. “And everybody knows that’s not true. Yet we go along with the fiction. The premise underlying dynamic work design is, if we accept that we can’t predict the future perfectly, we might design the world differently.”
Kieffer adds: “Our principles address how work is designed. Not how leaders have to act, but how you design human work, and drive changes.”
One collaboration, five principles
This book is the product of a long collaboration: In 1996, Kieffer first met Repenning, who was then a new MIT faculty member, and they soon recognized they thought similarly about managing work. By 2008, Kieffer also became a lecturer at the MIT Sloan School of Management, where Repenning is now a distinguished professor of system dynamics and organization studies.
The duo began teaching executive education classes together at MIT Sloan, often working with firms tackling tough problems. In the 2010s, they worked extensively with BP executives after the Deepwater Horizon accident, finding ways to combine safety priorities with other operations.
Repenning is an expert on system dynamics, an MIT-developed field emphasizing how parts of a system interact. In a firm, making isolated changes may throw the system as a whole further off kilter. Instead, managers need to grasp the larger dynamics — and recognize that a firm’s problems are not usually its people, since most employees perform similarly when burdened by a faulty system.
Whereas many much-touted management systems prescribe set things in advance — like culling the bottom 10 percent of your employees annually — Repenning and Kieffer believe a firm should study itself empirically and develop improvements from there.
“Managers lose touch with how work actually gets done,” Kieffer says. “We bring managers in touch with real-time work, to see the problems people have, to help them solve it and learn new ways to work.”
Over time, Repenning and Kieffer have codified their ideas about work design into five principles.
No mugs, no t-shirts — just open your eyes
Applying dynamic work design to any given firm may sound simple, but Repenning and Kieffer note that many forces make it hard to implement. For instance, firm leaders may be tempted to opt for technology-based solutions when there are simpler, cheaper fixes available.
Indeed, “resorting to technology before fixing the underlying design risks wasting money and embedding the original problem even deeper in the organization,” they write in the book.
Moreover, dynamic work design is not itself a solution, but a way of trying to find a specific solution.
“One thing that keeps Don and I up at night is a CEO reading our book and thinking, ‘We’re going to be a dynamic work design company,’ and printing t-shirts and coffee mugs and holding two-day conferences where everyone signs the dynamic work design poster, and evaluating everyone every week on how dynamic they are,” Repenning says. “Then you’re being awfully static.”
After all, firms change, and their needs change. Repenning and Kieffer want managers to keep studying their firm’s workflow so they can stay current with those needs. In fairness, a certain number of managers already do this.
“Most people have experienced fleeting moments of good work design,” Repenning says. Building on that, he says, managers and employees can keep driving a process of improvement that is realistic and logical.
“Start small,” he adds. “Pick one problem you can work on in a couple of weeks, and solve that. Most cases, with open eyes, there’s low-hanging fruit. You find the places you can win, and change incrementally, rather than all at once. For senior executives, this is hard. They are used to doing big things. I tell our executive ed students, it’s going to feel uncomfortable at the beginning, but this is a much more sustainable path to progress.”
Climate Action Learning Lab helps state and local leaders identify and implement effective climate mitigation strategies
J-PAL North America’s inaugural Climate Action Learning Lab provided six U.S. cities and states with customized training and resources to leverage data and evaluation to advance climate solutions that work.
This spring, J-PAL North America — a regional office of MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL) — launched its first ever Learning Lab, centered on climate action. The Learning Lab convened a cohort of government leaders who are enacting a broad range of policies and programs to support the transition to a low-carbon economy. Through the Learning Lab, participants explored how to embed randomized evaluation into promising solutions to determine how to maximize changes in behavior — a strategy that can help advance decarbonization in the most cost-effective ways to benefit all communities. The inaugural cohort included more than 25 participants from state agencies and cities, including the Massachusetts Clean Energy Center, the Minnesota Housing Finance Agency, and the cities of Lincoln, Nebraska; Newport News, Virginia; Orlando, Florida; and Philadelphia.
“State and local governments have demonstrated tremendous leadership in designing and implementing decarbonization policies and climate action plans over the past few years,” said Peter Christensen, scientific advisor of the J-PAL North America Environment, Energy, and Climate Change Sector. “And while these are informed by scientific projections on which programs and technologies may effectively and equitably reduce emissions, the projection methods involve a lot of assumptions. It can be challenging for governments to determine whether their programs are actually achieving the expected level of emissions reductions that we desperately need. The Climate Action Learning Lab was designed to support state and local governments in addressing this need — helping them to rigorously evaluate their programs to detect their true impact.”
From May to July, the Learning Lab offered a suite of resources for participants to leverage rigorous evaluation to identify effective and equitable climate mitigation solutions. Offerings included training lectures, one-on-one strategy sessions, peer learning engagements, and researcher collaboration. State and local leaders built skills and knowledge in evidence generation and use, reviewed and applied research insights to their own programmatic areas, and identified priority research questions to guide evidence-building and decision-making practices. Programs prioritized for evaluation covered topics such as compliance with building energy benchmarking policies, take-up rates of energy-efficient home improvement programs such as heat pumps and Solar for All, and scoring criteria for affordable housing development programs.
“We appreciated the chance to learn about randomized evaluation methodology, and how this impact assessment tool could be utilized in our ongoing climate action planning. With so many potential initiatives to pursue, this approach will help us prioritize our time and resources on the most effective solutions,” said Anna Shugoll, program manager at the City of Philadelphia’s Office of Sustainability.
This phase of the Learning Lab was possible thanks to grant funding from J-PAL North America’s longtime supporter and collaborator Arnold Ventures. The work culminated in an in-person summit in Cambridge, Massachusetts, on July 23, where Learning Lab participants delivered a presentation on their jurisdiction’s priority research questions and strategic evaluation plans. They also connected with researchers in the J-PAL network to further explore impact evaluation opportunities for promising decarbonization programs.
“The Climate Action Learning Lab has helped us identify research questions for some of the City of Orlando’s deep decarbonization goals. J-PAL staff, along with researchers in the J-PAL network, worked hard to bridge the gap between behavior change theory and the applied, tangible benefits that we achieve through rigorous evaluation of our programs,” said Brittany Sellers, assistant director for sustainability, resilience and future-ready for Orlando. “Whether we’re discussing an energy-efficiency policy for some of the biggest buildings in the City of Orlando or expanding [electric vehicle] adoption across the city, it’s been very easy to communicate some of these high-level research concepts and what they can help us do to actually pursue our decarbonization goals.”
The next phase of the Climate Action Learning Lab will center on building partnerships between jurisdictions and researchers in the J-PAL network to explore the launch of randomized evaluations, deepening the community of practice among current cohort members, and cultivating a broad culture of evidence building and use in the climate space.
“The Climate Action Learning Lab provided a critical space for our city to collaborate with other cities and states seeking to implement similar decarbonization programs, as well as with researchers in the J-PAL network to help rigorously evaluate these programs,” said Daniel Collins, innovation team director at the City of Newport News. “We look forward to further collaboration and opportunities to learn from evaluations of our mitigation efforts so we, as a city, can better allocate resources to the most effective solutions.”
The Climate Action Learning Lab is one of several offerings under the J-PAL North America Evidence for Climate Action Project. The project’s goal is to convene an influential network of researchers, policymakers, and practitioners to generate rigorous evidence to identify and advance equitable, high-impact policy solutions to climate change in the United States. In addition to the Learning Lab, J-PAL North America will launch a climate special topic request for proposals this fall to fund research on climate mitigation and adaptation initiatives. J-PAL will welcome applications from both research partnerships formed through the Learning Lab as well as other eligible applicants.
Local government leaders, researchers, potential partners, or funders committed to advancing climate solutions that work, and who want to learn more about the Evidence for Climate Action Project, may email na_eecc@povertyactionlab.org or subscribe to the J-PAL North America Climate Action newsletter.
How MIT’s Steel Research Group led to a groundbreaking national materials initiative
Founder Gregory B. Olson reflects on past and continuing high-impact work as the group turns 40.
Traditionally, developing new materials for cutting-edge applications — such as SpaceX’s Raptor engine — has taken a decade or more. But thanks to a breakthrough technology pioneered by an MIT research group now celebrating its 40th year, a key material for the Raptor was delivered in just a few years. The same innovation has accelerated the development of high-performance materials for the Apple Watch, U.S. Air Force jets, and Formula One race cars.
The MIT Steel Research Group (SRG) also led to a national initiative that “has already sparked a paradigm shift in how new materials are discovered, developed, and deployed,” according to a White House story describing the Materials Genome Initiative’s first five years.
Gregory B. Olson founded the SRG in 1985 with the goal of using computers to accelerate the hunt for new materials by plumbing databases of those materials’ fundamental properties. It was the beginning of a new field: computational materials design.
At the time, “nobody knew whether we could really do this,” remembers Olson, a professor of the practice in the Department of Materials Science and Engineering. “I have some documented evidence of agencies resisting the entire concept because, in their opinion, a material could never be designed.”
Eventually, however, Olson and colleagues showed that the approach worked. One of the most important results: In 2011 President Barack Obama made a speech “essentially announcing that this technology is real and it’s what everybody should be doing,” says Olson, who is also affiliated with the Materials Research Laboratory. In the speech, Obama launched the Materials Genome Initiative (MGI).
The MGI is developing “a fundamental database of the parameters that direct the assembly of the structures of materials,” much like the Human Genome Project “is a database that directs the assembly of the structures of life,” says Olson.
The goal is to use the MGI database to discover, manufacture, and deploy advanced materials twice as fast, and at a fraction of the cost, compared to traditional methods, according to the MGI website.
At MIT, the SRG continues to focus on steel, “because it’s the material [the world has] studied the longest, so we have the deepest fundamental understanding of its properties,” says Olson, project principal investigator.
The Cybersteels Project, funded by the Office of Naval Research, brings together eight MIT faculty who are working to expand our knowledge of steel, eventually adding their data to the MGI. Major areas of study include the boundaries between the microscopic grains that make up a steel and the economic modeling of new steels.
Concludes Olson, “It has been tremendously satisfying to see how this technology has really blossomed in the hands of leading corporations and led to a national initiative to take it even further.”
3 Questions: On humanizing scientists
The prolific MIT author and physicist Alan Lightman examines the working lives, contributions, and idealism of researchers.
Alan Lightman has spent much of his authorial career writing about scientific discovery, the boundaries of knowledge, and remarkable findings from the world of research. His latest book “The Shape of Wonder,” co-authored with the lauded English astrophysicist Martin Rees and published this month by Penguin Random House, offers both profiles of scientists and an examination of scientific methods, humanizing researchers and making an affirmative case for the value of their work. Lightman is a professor of the practice of the humanities in MIT’s Comparative Media Studies/Writing Program; Rees is a fellow of Trinity College at Cambridge University and the UK’s Astronomer Royal. Lightman talked with MIT News about the new volume.
Q: What is your new book about?
A: The book tries to show who scientists are and how they think. Martin and I wrote it to address several problems. One is mistrust in scientists and their institutions, which is a worldwide problem. We saw this problem illustrated during the pandemic. That mistrust I think is associated with a belief by some people that scientists and their institutions are part of the elite establishment, a belief that is one feature of the populist movement worldwide. In recent years there’s been considerable misinformation about science. And, many people don’t know who scientists are.
Another thing, which is very important, is a lack of understanding about evidence-based critical thinking. When scientists get new data and information, their theories and recommendations change. But this process, part of the scientific method, is not well-understood outside of science. Those are issues we address in the book. We have profiles of a number of scientists and show them as real people, most of whom work for the benefit of society or out of intellectual curiosity, rather than being driven by political or financial interests. We try to humanize scientists while showing how they think.
Q: You profile some well-known figures in the book, as well as some lesser-known scientists. Who are some of the people you feature in it?
A: One person is a young neuroscientist, Lace Riggs, who works at the McGovern Institute for Brain Research at MIT. She grew up in difficult circumstances in southern California, decided to go into science, got a PhD in neuroscience, and works as a postdoc researching the effect of different compounds on the brain and how that might lead to drugs to combat certain mental illnesses. Another very interesting person is Magdalena Lenda, an ecologist in Poland. When she was growing up, her father sold fish for a living, and took her out in the countryside and would identify plants, which got her interested in ecology. She works on stopping invasive species. The intention is to talk about people’s lives and interests, and show them as full people.
While humanizing scientists in the book, we show how critical thinking works in science. By the way, critical thinking is not owned by scientists. Accountants, doctors, and many others use critical thinking. I’ve talked to my car mechanic about what kinds of problems come into the shop. People don’t know what causes the check engine light to go on — the catalytic converter, corroded spark plugs, etc. — so mechanics often start from the simplest and cheapest possibilities and go to the next potential problem, down the list. That’s a perfect example of critical thinking. In science, it is checking your ideas and hypotheses against data, then updating them if needed.
Q: Are there common threads linking together the many scientists you feature in the book?
A: There are common threads, but also no single scientific stereotype. There’s a wide range of personalities in the sciences. But one common thread is that all the scientists I know are passionate about what they’re doing. They’re working for the benefit of society, and out of sheer intellectual curiosity. That links all the people in the book, as well as other scientists I’ve known. I wish more people in America would realize this: Scientists are working for their overall benefit. Science is a great success story. Thanks to scientific advances, since 1900 the expected lifespan in the U.S. has increased from a little more than 45 years to almost 80 years, in just a century, largely due to our ability to combat diseases. What’s more vital than your lifespan?
This book is just a drop in the bucket in terms of what needs to be done. But we all do what we can.
Lidar helps gas industry find methane leaks and avoid costly losses
Lincoln Laboratory transitioned its optical-amplifier technology to Bridger Photonics for commercialization, enhancing US energy security and efficiency.
Each year, the U.S. energy industry loses an estimated 3 percent of its natural gas production, valued at $1 billion in revenue, to leaky infrastructure. Escaping invisibly into the air, these methane gas plumes can now be detected, imaged, and measured using a specialized lidar flown on small aircraft.
This lidar is a product of Bridger Photonics, a leading methane-sensing company based in Bozeman, Montana. MIT Lincoln Laboratory developed the lidar's optical-power amplifier, a key component of the system, by advancing its existing slab-coupled optical waveguide amplifier (SCOWA) technology. The methane-detecting lidar is 10 to 50 times more capable than other airborne remote sensors on the market.
"This drone-capable sensor for imaging methane is a great example of Lincoln Laboratory technology at work, matched with an impactful commercial application," says Paul Juodawlkis, who pioneered the SCOWA technology with Jason Plant in the Advanced Technology Division and collaborated with Bridger Photonics to enable its commercial application.
Today, the product is being adopted widely, including by nine of the top 10 natural gas producers in the United States. "Keeping gas in the pipe is good for everyone — it helps companies bring the gas to market, improves safety, and protects the outdoors," says Pete Roos, founder and chief innovation officer at Bridger. "The challenge with methane is that you can't see it. We solved a fundamental problem with Lincoln Laboratory."
A laser source "miracle"
In 2014, the Advanced Research Projects Agency-Energy (ARPA-E) was seeking a cost-effective and precise way to detect methane leaks. Highly flammable and a potent pollutant, methane gas (the primary constituent of natural gas) moves through the country via a vast and intricate pipeline network. Bridger submitted a research proposal in response to ARPA-E's call and was awarded funding to develop a small, sensitive aerial lidar.
Aerial lidar sends laser light down to the ground and measures the light that reflects back to the sensor. Such lidar is often used for producing detailed topography maps. Bridger's idea was to merge topography mapping with gas measurements. Methane absorbs light at the infrared wavelength of 1.65 microns. Operating a laser at that wavelength could allow a lidar to sense the invisible plumes and measure leak rates.
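To make the measurement principle concrete, the sketch below runs a toy Beer-Lambert calculation showing how a dip in the returned 1.65-micron signal grows with the amount of methane along the beam path. It is purely illustrative and is not Bridger's gas-mapping algorithm; the absorption cross-section, plume thickness, and concentrations are placeholder assumptions.

```python
import math

# Toy Beer-Lambert estimate of how much 1.65-micron laser light survives a
# round trip through a methane plume. Purely illustrative: the cross-section,
# plume thickness, and concentrations below are placeholder assumptions, not
# Bridger Photonics' parameters or processing.

SIGMA_CH4 = 1.5e-20          # assumed absorption cross-section, cm^2 per molecule
PLUME_THICKNESS_M = 5.0      # assumed plume thickness along the beam, meters
PATH_CM = 2 * PLUME_THICKNESS_M * 100.0   # round trip (down and back), in cm
AIR_DENSITY = 2.5e19         # molecules of air per cm^3 near the surface

def transmitted_fraction(methane_ppm: float) -> float:
    """Fraction of the laser power that returns, from the Beer-Lambert law."""
    methane_per_cm3 = methane_ppm * 1e-6 * AIR_DENSITY
    optical_depth = SIGMA_CH4 * methane_per_cm3 * PATH_CM
    return math.exp(-optical_depth)

if __name__ == "__main__":
    for ppm in (2, 50, 500):  # background air versus increasingly strong plumes
        print(f"{ppm:>3} ppm methane -> {transmitted_fraction(ppm):.4f} of the light returns")
```

In differential-absorption schemes generally, comparing the return at an absorbing wavelength against a nearby wavelength the gas does not absorb is what separates the methane signal from ordinary variations in ground reflectance.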
"This laser source was one of the hardest parts to get right. It's a key element," Roos says. His team needed a laser source with specific characteristics to emit powerfully enough at a wavelength of 1.65 microns to work from useful altitudes. Roos recalled the ARPA-E program manager saying they needed a "miracle" to pull it off.
Through mutual connections, Bridger was introduced to a Lincoln Laboratory technology for optically amplifying laser signals: the SCOWA. When Bridger contacted Juodawlkis and Plant, they had been working on SCOWAs for a decade. Although they had never investigated SCOWAs at 1.65 microns, they thought that the fundamental technology could be extended to operate at that wavelength. Lincoln Laboratory received ARPA-E funding to develop 1.65-micron SCOWAs and provide prototype units to Bridger for incorporation into their gas-mapping lidar systems.
"That was the miracle we needed," Roos says.
A legacy in laser innovation
Lincoln Laboratory has long been a leader in semiconductor laser and optical emitter technology. In 1962, the laboratory was among the first to demonstrate the diode laser, which is now the most widespread laser used globally. Several spinout companies, such as Lasertron and TeraDiode, have commercialized innovations stemming from the laboratory's laser research, including those for fiber-optic telecommunications and metal-cutting applications.
In the early 2000s, Juodawlkis, Plant, and others at the laboratory recognized a need for a stable, powerful, and bright single-mode semiconductor optical amplifier, which could enhance lidar and optical communications. They developed the SCOWA (slab-coupled optical waveguide amplifier) concept by extending earlier work on slab-coupled optical waveguide lasers (SCOWLs). The initial SCOWA was funded under the laboratory's internal technology investment portfolio, a pool of R&D funding provided by the undersecretary of defense for research and engineering to seed new technology ideas. These ideas often mature into sponsored programs or lead to commercialized technology.
"Soon, we developed a semiconductor optical amplifier that was 10 times better than anything that had ever been demonstrated before," Plant says. Like other semiconductor optical amplifiers, the SCOWA guides laser light through semiconductor material. This process increases optical power as the laser light interacts with electrons, causing them to shed photons at the same wavelength as the input laser. The SCOWA's unique light-guiding design enables it to reach much higher output powers, creating a powerful and efficient beam. They demonstrated SCOWAs at various wavelengths and applied the technology to projects for the Department of Defense.
When Bridger Photonics reached out to Lincoln Laboratory, the most impactful application of the device yet emerged. Working iteratively through the ARPA-E funding and a Cooperative Research and Development Agreement (CRADA), the team increased Bridger's laser power by more than tenfold. This power boost enabled them to extend the range of the lidar to elevations over 1,000 feet.
"Lincoln Laboratory had the knowledge of what goes on inside the optical amplifier — they could take our input, adjust the recipe, and make a device that worked very well for us," Roos says.
The Gas Mapping Lidar was commercially released in 2019. That same year, the product won an R&D 100 Award, recognizing it as a revolutionary advancement in the marketplace.
A technology transfer takes off
Today, the United States is the world's largest natural gas supplier, driving growth in the methane-sensing market. Bridger Photonics deploys its Gas Mapping Lidar for customers nationwide, attaching the sensor to planes and drones and pinpointing leaks across the entire supply chain, from the wells where gas is extracted, to the pipelines that carry it across the country, to the businesses and homes where it is delivered. Customers buy the data from these scans to efficiently locate and repair leaks in their gas infrastructure. In January 2025, the Environmental Protection Agency provided regulatory approval for the technology.
According to Bruce Niemeyer, president of Chevron's shale and tight operations, the lidar capability has been game-changing: "Our goal is simple — keep methane in the pipe. This technology helps us assure we are doing that … It can find leaks that are 10 times smaller than other commercial providers are capable of spotting."
At Lincoln Laboratory, researchers continue to innovate new devices in the national interest. The SCOWA is one of many technologies in the toolkit of the laboratory's Microsystems Prototyping Foundry, which will soon be expanded to include a new Compound Semiconductor Laboratory – Microsystem Integration Facility. Government, industry, and academia can access these facilities through government-funded projects, CRADAs, test agreements, and other mechanisms.
At the direction of the U.S. government, the laboratory is also seeking industry transfer partners for a technology that couples SCOWA with a photonic integrated circuit platform. Such a platform could advance quantum computing and sensing, among other applications.
"Lincoln Laboratory is a national resource for semiconductor optical emitter technology," Juodawlkis says.
MIT launches Day of Design to bring hands-on learning to classrooms
Building on Day of AI and Day of Climate, MIT shares free design resources to spark creativity and problem-solving in classrooms.
A new MIT initiative known as Day of Design offers free, open-source, hands-on design activities for all classrooms, in addition to professional development opportunities and signature events. The material engages pK-12 learners in the skills they need to solve complex open-ended problems while also considering user, social, and environmental needs. Inspired by Day of AI and Day of Climate, it is a new collaborative effort by the MIT Morningside Academy for Design (MAD) and the WPS Institute, with support from the MIT Museum and the MIT pK-12 Initiative.
“At MIT, design is practiced across departments — from the more obvious ones, like architecture and mechanical engineering, to less apparent ones, like biology and chemistry. Design skills support students in becoming strong collaborators, idea-makers, and human-centered problem-solvers. The Day of Design initiative seeks to share these skills with the K-12 audience through bite-sized, engaging activities for every classroom,” says Rosa Weinberg, who co-led the development of Day of Design and serves as MAD’s K–12 design education lead.
These interdisciplinary resources are designed collaboratively with feedback from teachers and grounded in exciting themes across science, humanities, art, engineering, and other subject areas, serving educators and learners regardless of their experience with design and making. Activities are scaffolded like “grammar lessons” for design education, including classroom-ready slides, handouts, tutorial videos, and facilitation tips supporting 21st century mindsets. All materials will be shared online, enabling educators to use the content as-is, or modify it as needed for their classrooms and other informal learning settings.
Rachel Adams, a former teacher and head of teaching and learning at the WPS Institute, explains, “There can be a gap between open-ended teaching materials and what teachers actually need in their classrooms. Day of Design classroom materials are piloted and workshopped by an interdisciplinary cohort of teachers who make up our Teacher Innovation Fellowship. This collaborative design process allows us to bridge the gap between cutting-edge MIT research and practical student-centered design lessons. These materials represent a new way of thinking that honors both the innovation happening in the labs at MIT and the real-world needs of educators.”
Day of Design also features signature events and a yearly, real-world challenge that brings all the design skills together. It is intended for educators who want ready-to-use design and making activities that connect to their subject areas and mindsets, and for students eager to develop problem-solving skills, creativity, and hands-on experience. Schools and districts looking to engage learners through interdisciplinary, project-based approaches can adopt the program as a flexible framework, while community partners can use it to provide young people with tools and spaces to create.
Cedric Jacobson, a chemistry teacher at Brooke High School in Boston who participated in MAD’s Teacher Innovation Fellowship and contributed to testing the Day of Design curriculum, emphasizes it “provides opportunities for teachers to practice and interact with design principles in concrete ways through multiple lesson structures. This process empowers them to try design principles in model lessons before preparing to use them in their own curriculum.”
Evan Milstein-Greengart, another Teacher Innovation Fellow, describes how “having this hands-on experience changed the way I thought about education. I felt like a kid again — going back to playground learning — and I want to bring that same spirit into my classroom.”
Closing the skills gap through design education
Technologies such as artificial intelligence, robotics, and biotech are reshaping work and society. The World Economic Forum estimates that 39 percent of key job skills will change by 2030. At the same time, research shows student engagement drops sharply in high school, with a third of students experiencing what is often called the “engagement cliff.” Many do not encounter design until college, if at all.
There is a growing need to foster not just technical literacy, but design fluency — the ability to approach complex problems with empathy, creativity, and critical thinking. Design education helps students prototype solutions, iterate based on feedback, and communicate ideas clearly. Studies have shown it can improve creative thinking, motivation, problem-solving, self-efficacy, and academic achievement.
At MIT, design is a way of thinking and creating that spans disciplines — from bioengineering and architecture to mechanical systems and public policy. It is both creative and analytical, grounded in iteration, user input, and systems thinking. Day of Design reflects MIT’s “mens et manus” (“mind and hand”) motto and extends the tools of design to young learners and educators.
“The workshops help students develop skills that can be applied across multiple subject areas, using topics that draw context from MIT research while remaining exciting and accessible to middle and high school students,” explains Weinberg. “For example, ‘Cosmic Comfort,’ one of our pilot workshops, was inspired by MIT's Space Architecture course (MAS.S66/4.154/16.89). It challenges students to consider how you might make a lunar habitat feel like home, while focusing on developing the crucial design skill of ideation — the ability to generate multiple creative solutions.”
Building on an MIT legacy
Day of Design builds on the model of Day of AI and Day of Climate, two ongoing efforts by MIT RAISE and the MIT pK-12 Initiative. All three initiatives share free, open-source activities, professional development materials, and events that connect MIT research with educators and students worldwide. Since 2021, Day of AI has reached more than 42,000 teachers and 1.5 million students in 170 countries and all 50 U.S. states. Day of Climate, launched in March 2025, has already recorded over 50,000 website visitors, 300 downloads of professional development materials, and an April launch event at the MIT Museum that drew 200 participants.
“Day of Design builds on the spirit of Day of AI and Day of Climate by inviting young people to engage with real-world challenges through creative work, meaningful collaboration, and deep empathy for others. These initiatives reflect MIT’s commitment to hands-on, transdisciplinary learning, empowering future young leaders not just to understand the world, but to shape it,” says Claudia Urrea, executive director for the pK–12 Initiative at MIT Open Learning.
Kicking off with connection
“Learning and creating together in person sparks the kind of ideas and connections that are hard to make any other way. Collective learning helps everyone think bigger and more creatively, while building a more deeply connected community that keeps that growth alive,” observes Caitlin Morris, PhD student in Fluid Interfaces, a 2024 MAD Design Fellow, and co-organizer of Day of Design: Connect, which will kick off Day of Design on Sept. 25.
Following the launch, the first set of classroom resources will be introduced during the 2025–26 school year, starting with activities for grades 7–12. Additional resources for younger learners, along with training opportunities for educators, will be added over time. Each year, new design skills and mindsets will be incorporated, creating a growing library of activities. While initial events will take place at MIT, organizers plan to expand programming globally.
Teacher Innovation Fellow Jessica Toupin, who piloted Day of Design activities in her math classroom, reflects on the impact: “As a math teacher, I don’t always get to focus on design. This material reminded me of the joy of learning — and when I brought it into my classroom, students who had struggled came alive. Just the ability to play and build showed me they were capable of so much more.”
This MIT spinout is taking biomolecule storage out of the freezer
Cache DNA has developed technologies that can preserve biomolecules at room temperature to make storing and transporting samples less expensive and more reliable.
Ever since freezers were invented, the life sciences industry has been reliant on them. That’s because many patient samples, drug candidates, and other biologics must be stored and transported in powerful freezers or surrounded by dry ice to remain stable.
The problem was on full display during the Covid-19 pandemic, when truckloads of vaccines had to be discarded because they had thawed during transport. Today, the stakes are even higher. Precision medicine, from CAR-T cell therapies to tumor DNA sequencing that guides cancer treatment, depends on pristine biological samples. Yet a single power outage, shipping delay, or equipment failure can destroy irreplaceable patient samples, setting back treatment by weeks or halting it entirely. In remote areas and developing nations, the lack of reliable cold storage effectively locks out entire populations from these life-saving advances.
Cache DNA wants to set the industry free from freezers. At MIT, the company’s founders created a new way to store and preserve DNA molecules at room temperature. Now the company is building biomolecule preservation technologies that can be used in applications across health care, from routine blood tests and cancer screening to rare disease research and pandemic preparedness.
“We want to challenge the paradigm,” says Cache DNA co-founder and former MIT postdoc James Banal. “Biotech has been reliant on the cold chain for more than 50 years. Why hasn’t that changed? Meanwhile, the cost of DNA sequencing has plummeted from $3 billion for the first human genome to under $200 today. With DNA sequencing and synthesis becoming so cheap and fast, storage and transport have emerged as the critical bottlenecks. It’s like having a supercomputer that still requires punch cards for data input.”
As the company works to preserve biomolecules beyond DNA and scale the production of its kits, co-founders Banal and MIT Professor Mark Bathe believe their technology has the potential to unlock new health insights by making sample storage accessible to scientists around the world.
“Imagine if every human on Earth could contribute to a global biobank, not just those living near million-dollar freezer facilities,” Banal says. “That’s 8 billion biological stories instead of just a privileged few. The cures we’re missing might be hiding in the biomolecules of someone we’ve never been able to reach.”
From quantum computing to “Jurassic Park”
Banal came to MIT from Australia to work as a postdoc under Bathe, a professor in MIT’s Department of Biological Engineering. Banal primarily studied in the MIT-Harvard Center for Excitonics, through which he collaborated with researchers from across MIT.
“I worked on some really wacky stuff, like DNA nanotechnology and its intersection with quantum computing and artificial photosynthesis,” Banal recalls.
Another project focused on using DNA to store data. While computers store data as 0s and 1s, DNA can store the same information using the nucleotides A, T, G, and C, allowing for extremely dense storage of data: By one estimate, 1 gram of DNA can hold up to 215 petabytes of data.
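As a toy illustration of that mapping (a simplified sketch, not the encoding scheme the MIT team or Cache DNA actually uses), each pair of bits can be assigned to one of the four nucleotides, so every byte of data becomes just four bases:

```python
# Toy illustration of DNA data storage: map each pair of bits to one nucleotide,
# so one byte becomes a four-base sequence. A simplified sketch, not the
# encoding scheme used by the MIT team or Cache DNA.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Convert bytes into a DNA sequence, two bits per nucleotide."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(sequence: str) -> bytes:
    """Convert a DNA sequence back into the original bytes."""
    bits = "".join(BASE_TO_BITS[base] for base in sequence)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

if __name__ == "__main__":
    message = b"MIT"
    strand = encode(message)   # 3 bytes -> 12 nucleotides
    assert decode(strand) == message
    print(message, "->", strand)
```

Practical DNA storage schemes layer error correction on top of this and avoid long runs of a single base, but packing roughly two bits into every nucleotide is what drives the density estimates above.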
After three years of work, in 2021, Banal and Bathe created a system that stored DNA-based data in tiny glass particles. They founded Cache DNA the same year, securing the intellectual property by working with MIT’s Technology Licensing Office, applying the technology to storing clinical nucleic acid samples as well as DNA data. Still, the technology was too nascent to be used for most commercial applications at the time.
Professor of chemistry Jeremiah Johnson had a different approach. His research had shown that certain plastics and rubbers could be made recyclable by adding cleavable molecular bonds. Johnson thought Cache DNA’s technology could be made faster and more reliable by using his amber-like polymers, much as researchers in the “Jurassic Park” movie recover ancient dinosaur DNA from mosquitoes preserved in fossilized tree resin, or amber.
“It started basically as a fun conversation along the halls of Building 16,” Banal recalls. “He’d seen my work, and I was aware of the innovations in his lab.”
Banal immediately saw the potential. He was familiar with the burden of the cold chain. For his MIT experiments, he’d store samples in big freezers kept at -80 degrees Celsius. Samples would sometimes get lost in the freezer or be buried in the inevitable ice build-up. Even when they were perfectly preserved, samples could degrade as they thawed.
As part of a collaboration between Cache DNA and MIT, Banal, Johnson, and two researchers in Johnson’s lab developed a polymer that stores DNA at room temperature. In a nod to their inspiration, they demonstrated the approach by encoding DNA sequences with the “Jurassic Park” theme song.
The researchers’ polymers start out as a liquid that surrounds the material to be stored and then form a solid, glass-like block when heated. To release the DNA, the researchers could add a molecule called cysteamine and a special detergent. They showed the process could store and recover DNA sequences as long as 50,000 base pairs without causing damage.
“Real amber is not great at preservation. It’s porous and lets in moisture and air,” Banal says. “What we built is completely different: a dense polymer network that forms an impenetrable barrier around DNA. Think of it like vacuum-sealing, but at the molecular level. The polymer is so hydrophobic that water and enzymes that would normally destroy DNA simply can’t get through.”
As that research was taking shape, Cache DNA was learning from hospitals and research labs that sample storage was a huge problem. In places like Florida and Singapore, researchers said contending with the effects of humidity on samples was another constant headache. Other researchers across the globe wanted to know if the technology would help them collect samples outside of the lab.
“Hospitals told us they were running out of space,” Banal says. “They had to throw samples out, limit sample collection, and as a last-case scenario, they would use a decades-old storage technology that leads to degradation after a short period of time. It became a north star for us to solve those problems.”
A new tool for precision health
Last year, Cache DNA sent out more than 100 of its first alpha DNA preservation kits to researchers around the world.
“We didn’t tell researchers what to use it for, and our minds were blown by the use cases,” Banal says. “Some used it for collecting samples in the field where cold shipping wasn’t feasible. Others evaluated it for long-term archival storage. The applications were different, but the problem was universal: They all needed reliable storage without the constraint of refrigeration.”
Cache DNA has developed an entire suite of preservation technologies that can be optimized for different storage scenarios. The company also recently received a grant from the National Science Foundation to expand its technology to preserve a broader swath of biomolecules, including RNA and proteins, which could yield new insights into health and disease.
“This important innovation helps eliminate the cold chain and has the potential to unlock millions of genetic samples globally for Cache DNA to empower personalized medicine,” Bathe says. “Eliminating the cold chain is half the equation. The other half is scaling from thousands to millions or even billions of nucleic acid samples. Together, this could enable the equivalent of a ‘Google Books’ for nucleic acids stored at room temperature, either for clinical samples in hospital settings and remote regions of the world, or alternatively to facilitate DNA data storage and retrieval at scale.”
“Freezers have dictated where science could happen,” Banal says. “Remove that constraint, and you start to crack open possibilities: island nations studying their unique genetics without samples dying in transit; every rare disease patient worldwide contributing to research, not just those near major hospitals; the 2 billion people without reliable electricity finally joining global health studies. Room-temperature storage isn’t the whole answer, but every cure starts with a sample that survived the journey.”
New RNA tool to advance cancer and infectious disease research and treatment
Advance from SMART will help to better identify disease markers and develop targeted therapies and personalized treatments for diseases such as cancer and antibiotic-resistant infections.
Researchers at the Antimicrobial Resistance (AMR) interdisciplinary research group of the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, have developed a powerful tool capable of scanning thousands of biological samples to detect transfer ribonucleic acid (tRNA) modifications — tiny chemical changes to RNA molecules that help control how cells grow, adapt to stress, and respond to diseases such as cancer and antibiotic-resistant infections. This tool opens up new possibilities for science, health care, and industry — from accelerating disease research and enabling more precise diagnostics to guiding the development of more effective medical treatments.
For this study, the SMART AMR team worked in collaboration with researchers at MIT, Nanyang Technological University in Singapore, the University of Florida, the University at Albany in New York, and Lodz University of Technology in Poland.
Addressing current limitations in RNA modification profiling
Cancer and infectious diseases are complicated health conditions in which cells are forced to function abnormally by mutations in their genetic material or by instructions from an invading microorganism. The SMART-led research team is among the world’s leaders in understanding how the epitranscriptome — the over 170 different chemical modifications of all forms of RNA — controls growth of normal cells and how cells respond to stressful changes in the environment, such as loss of nutrients or exposure to toxic chemicals. The researchers are also studying how this system is corrupted in cancer or exploited by viruses, bacteria, and parasites in infectious diseases.
Current molecular methods used to study the expansive epitranscriptome and all of the thousands of different types of modified RNA are often slow, labor-intensive, costly, and involve hazardous chemicals, which limits research capacity and speed.
To solve this problem, the SMART team developed a new tool that enables fast, automated profiling of tRNA modifications — molecular changes that regulate how cells survive, adapt to stress, and respond to disease. This capability allows scientists to map cell regulatory networks, discover novel enzymes, and link molecular patterns to disease mechanisms, paving the way for better drug discovery and development, and more accurate disease diagnostics.
Unlocking the complexity of RNA modifications
SMART’s open-access research, recently published in Nucleic Acids Research and titled “tRNA modification profiling reveals epitranscriptome regulatory networks in Pseudomonas aeruginosa,” shows that the tool has already enabled the discovery of previously unknown RNA-modifying enzymes and the mapping of complex gene regulatory networks. These networks are crucial for cellular adaptation to stress and disease, providing important insights into how RNA modifications control bacterial survival mechanisms.
Using robotic liquid handlers, researchers extracted tRNA from more than 5,700 genetically modified strains of Pseudomonas aeruginosa, a bacterium that causes infections such as pneumonia, urinary tract infections, bloodstream infections, and wound infections. Samples were enzymatically digested and analyzed by liquid chromatography-tandem mass spectrometry (LC-MS/MS), a technique that separates molecules based on their physical properties and identifies them with high precision and sensitivity.
As part of the study, the process generated over 200,000 data points in a high-resolution approach that revealed new tRNA-modifying enzymes and simplified gene networks controlling how cells respond and adapt to stress. For example, the data showed that the methylthiotransferase MiaB, one of the enzymes responsible for the tRNA modification ms2i6A, is sensitive to the availability of iron and sulfur and to metabolic changes when oxygen is low. Discoveries like this highlight how cells respond to environmental stresses and could lead to the future development of therapies or diagnostics.
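To make the logic of that kind of screen concrete, the sketch below shows how comparing modification levels in a knockout strain against wild type can point to a candidate enzyme. The strain names, modification labels, and signal values are hypothetical placeholders, not data from the SMART study.

```python
# Illustrative sketch of how knockout-versus-wild-type tRNA modification
# profiles can link an enzyme to the modification it installs: if a
# modification nearly disappears when a gene is deleted, that gene is a
# candidate "writer" of the modification. All names and numbers below are
# hypothetical placeholders, not data from the SMART study.

WILD_TYPE = {"ms2i6A": 1.00, "m1A": 1.00, "Q": 1.00}   # normalized LC-MS/MS signals
KNOCKOUTS = {
    "strain_lacking_miaB": {"ms2i6A": 0.05, "m1A": 0.98, "Q": 1.02},
    "strain_lacking_tgt":  {"ms2i6A": 0.97, "m1A": 1.01, "Q": 0.03},
}

def candidate_writers(threshold: float = 0.25):
    """Flag (strain, modification) pairs where the signal collapses versus wild type."""
    hits = []
    for strain, profile in KNOCKOUTS.items():
        for mod, level in profile.items():
            if level / WILD_TYPE[mod] < threshold:
                hits.append((strain, mod))
    return hits

if __name__ == "__main__":
    for strain, mod in candidate_writers():
        print(f"{strain}: likely required for {mod}")
```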
SMART’s automated system was specially designed to profile tRNA modifications across thousands of samples rapidly and safely. Unlike traditional methods, this tool integrates robotics to automate sample preparation and analysis, eliminating the need for hazardous chemical handling and reducing costs. This advancement increases safety, throughput, and affordability, enabling routine large-scale use in research and clinical labs.
A faster and automated way to study RNA
As the first system capable of quantitative, system‑wide profiling of tRNA modifications at this scale, the tool provides a unique and comprehensive view of the epitranscriptome — the complete set of RNA chemical modifications within cells. This capability allows researchers to validate hypotheses about RNA modifications, uncover novel biology, and identify promising molecular targets for developing new therapies.
“This pioneering tool marks a transformative advance in decoding the complex language of RNA modifications that regulate cellular responses,” says Professor Peter Dedon, co-lead principal investigator at SMART AMR, professor of biological engineering at MIT, and corresponding author of the paper. “Leveraging AMR’s expertise in mass spectrometry and RNA epitranscriptomics, our research uncovers new methods to detect complex gene networks critical for understanding and treating cancer, as well as antibiotic-resistant infections. By enabling rapid, large-scale analysis, the tool accelerates both fundamental scientific discovery and the development of targeted diagnostics and therapies that will address urgent global health challenges.”
Accelerating research, industry, and health-care applications
This versatile tool has broad applications across scientific research, industry, and health care. It enables large-scale studies of gene regulation, RNA biology, and cellular responses to environmental and therapeutic challenges. The pharmaceutical and biotech industry can harness it for drug discovery and biomarker screening, efficiently evaluating how potential drugs affect RNA modifications and cellular behavior. This aids the development of targeted therapies and personalized medical treatments.
“This is the first tool that can rapidly and quantitatively profile RNA modifications across thousands of samples,” says Jingjing Sun, research scientist at SMART AMR and first author of the paper. “It has not only allowed us to discover new RNA-modifying enzymes and gene networks, but also opens the door to identifying biomarkers and therapeutic targets for diseases such as cancer and antibiotic-resistant infections. For the first time, large-scale epitranscriptomic analysis is practical and accessible.”
Looking ahead: advancing clinical and pharmaceutical applications
Moving forward, SMART AMR plans to expand the tool’s capabilities to analyze RNA modifications in human cells and tissues, moving beyond microbial models to deepen understanding of disease mechanisms in humans. Future efforts will focus on integrating the platform into clinical research to accelerate the discovery of biomarkers and therapeutic targets. The translation of the technology into an epitranscriptome-wide analysis tool that can be used in pharmaceutical and health-care settings will drive the development of more effective and personalized treatments.
The research conducted at SMART is supported by the National Research Foundation Singapore under its Campus for Research Excellence and Technological Enterprise program.
Technology originating at MIT leads to approved bladder cancer treatment
A system conceived in Professor Michael Cima’s lab was approved by the Food and Drug Administration after positive results in patients.
At MIT, a few scribbles on a whiteboard can turn into a potentially transformational cancer treatment.
This scenario came to fruition this week when the U.S. Food and Drug Administration approved a system for treating an aggressive form of bladder cancer. More than a decade ago, the system started as an idea in the lab of MIT Professor Michael Cima at the Koch Institute for Integrative Cancer Research, enabled by funding from the National Institutes of Health and MIT’s Deshpande Center.
The work that started with a few researchers at MIT turned into a startup, TARIS Biomedical LLC, that was co-founded by Cima and David H. Koch Institute Professor Robert Langer, and acquired by Johnson & Johnson in 2019. In developing the core concept of a device for local drug delivery to the bladder — which represents a new paradigm in bladder cancer treatment — the MIT team approached drug delivery like an engineering problem.
“We spoke to urologists and sketched out the problems with past treatments to get to a set of design parameters,” says Cima, a David H. Koch Professor of Engineering and professor of materials science and engineering. “Part of our criteria was it had to fit into urologists’ existing procedures. We wanted urologists to know what to do with the system without even reading the instructions for use. That’s pretty much how it came out.”
To date, the system has been used in patients thousands of times. In one study involving people with high-risk, non-muscle-invasive bladder cancer whose disease had proven resistant to standard care, doctors could find no evidence of cancer in 82.4 percent of patients treated with the system. More than 50 percent of those patients were still cancer-free nine months after treatment.
The results are extremely gratifying for the team of researchers that worked on it at MIT, including Langer and Heejin Lee SM ’04, PhD ’09, who developed the system as part of his PhD thesis. And Cima says far more people deserve credit than just the ones who scribbled on his whiteboard all those years ago.
“Drug products like this take an enormous amount of effort,” says Cima. “There are probably more than 1,000 people that have been involved in developing and commercializing the system: the MIT inventors, the urologists they consulted, the scientists at TARIS, the scientists at Johnson & Johnson — and that’s not including all the patients who participated in clinical trials. I also want to emphasize the importance of the MIT ecosystem, and the importance of giving people the resources to pursue arguably crazy ideas. We need to continue to support those kinds of activities.”
In the mid-2000s, Langer connected Cima with a urologist at Boston Children’s Hospital who was seeking a new treatment for a painful bladder disease known as interstitial cystitis. The standard treatment required frequent drug infusions into a patient’s bladder through a catheter, which provided only temporary relief.
A group of researchers including Cima; Lee; Hong Linh Ho Duc SM ’05, PhD ’09; Grace Kim PhD ’08; and Karen Daniel PhD ’09 began speaking with urologists and people who had run failed clinical trials involving bladder treatments to understand what went wrong. All that information went on Cima’s whiteboard over the course of several weeks. Fortunately, Cima also scribbled “Do not erase!”
“We learned a lot in the process of writing everything down,” Cima says. “We learned what not to build and what to avoid.”
With the problem well-defined, Cima received a grant from MIT’s Deshpande Center for Technological Innovation, which allowed Lee to work on designing a better solution as part of his PhD thesis.
One of the key advances the group made was using a special alloy that gave the device “shape memory” so that it could be straightened out and inserted into the bladder through a catheter. Then it would fold up, preventing it from being expelled during urination.
The new design was able to slowly release drugs over a two-week period — far longer than any other approach — and could then be removed using a thin, flexible tube commonly used in urology, called a cystoscope. The progress was enough for Cima and Langer, who are both serial entrepreneurs, to found TARIS Biomedical and license the technology from MIT. Lee and three other MIT graduates joined the company.
“It was a real pleasure working with Mike Cima, our students, and colleagues on this novel drug delivery system, which is already changing patients’ lives,” Langer says. “It’s a great example of how research at the Koch Institute starts with basic science and engineering and ends up with new treatments for cancer patients.”
The FDA’s approval of the system for the treatment of certain patients with high-risk, non-muscle-invasive bladder cancer now means that patients with this disease may have a better treatment option. Moving forward, Cima hopes the system continues to be explored to treat other diseases.
A better understanding of debilitating head pain
Tom Zeller’s new book, “The Headache,” sheds light on one of the world’s most confounding and agonizing ailments.
Everyone gets headaches. But not everyone gets cluster headache attacks, a debilitating malady producing acute pain that lasts an hour or two. Cluster headache attacks come in sets — hence the name — and leave people in complete agony, unable to function. A little under 1 percent of the U.S. population suffers from cluster headache.
But that’s just an outline of the matter. What’s it like to actually have a cluster headache?
“The pain of a cluster headache is such that you can’t sit still,” says MIT-based science journalist Tom Zeller, who has suffered from them for decades. “I’d liken it to putting your hand on a hot burner, except that you can’t take your hand off for an hour or two. Every headache is an emergency. You have to run or pace or rock. Think of another pain you had to dance through, but it just doesn’t stop. It’s that level of intensity, and it’s all happening inside your head.”
And then there is the pain of the migraine headache, which seems slightly less acute than a cluster attack, but longer-lasting, and similarly debilitating. Migraine attacks can be accompanied by extreme sensitivity to light and noise, vision issues, and nausea, among other neurological symptoms, leaving patients alone in dark rooms for hours or days. An estimated 1.2 billion people around the world, including 40 million in the U.S., struggle with migraine attacks.
These are not obscure problems. And yet: We don’t know exactly why migraine and cluster headache disorders occur, nor how to address them. Headaches have never been a prominent topic within modern medical research. How can something so pervasive be so overlooked?
Now Zeller examines these issues in an absorbing book, “The Headache: The Science of a Most Confounding Affliction — and a Search for Relief,” published this summer by Mariner Books. Zeller is the editor-in-chief and co-founder of Undark, a digital magazine on science and society published by the Knight Science Journalism Program at MIT.
One word, but different syndromes
“The Headache,” which is Zeller’s first book, combines a first-person narrative of his own suffering, accounts of the pain and dread that other headache sufferers feel, and thorough reporting on headache-based research in science and medicine. Zeller has experienced cluster headache attacks for 30-plus years, dating to when he was in his 20s.
“In some ways, I suppose I had been writing the book my whole adult life without knowing it,” Zeller says. Indeed, he had collected research material about these conditions for years while grappling with his own headache issues.
A key issue in the book is why society has not taken cluster headache and migraine problems more seriously — and relatedly, why the science of headache disorders is not more advanced. Although in fairness, as Zeller says, “Anything involving the brain or central nervous system is incredibly hard to study.”
More broadly, Zeller suggests in the book, we have conflated regular workaday headaches — the kind you may get from staring at a screen too long — with the far more severe and rather different disorders like cluster headache and migraine. (Some patients refer to cluster headache and migraine in the singular, not plural, to emphasize that this is an ongoing condition, not just successive headaches.)
“Headaches are annoying, and we tough it out,” Zeller says. “But we use the same exact word to talk about these other things,” namely, cluster headache and migraine. This has likely reinforced our general dismissal of severe headache disorders as a pressing and distinct medical problem. Instead, we often consider headache disorders, even severe ones, as something people should simply power through.
“There’s a certain sense of malingering we still attach to a migraine or [other] headache disorder, and I’m not sure that’s going away,” Zeller says.
Then too, about three-quarters of people who experience migraine attacks are women, which has quite plausibly led the ailment to “get short shrift historically,” as Zeller says. Or at least, in recent history: As Zeller chronicles in the book, an awareness of severe headache disorders goes back to ancient times, and it’s possible they have received less relative attention in modernity.
A new shift in medical thinking
In any case, for much of the 20th century, conventional medical wisdom held that migraine and cluster headache stemmed from changes or abnormalities in blood vessels. But in recent decades, as Zeller details, there has been a paradigm shift: These conditions are now seen as more neurological in origin.
A key breakthrough here was the 1980s discovery of a neurotransmitter called calcitonin gene-related peptide, or CGRP. As scientists have discovered, CGRP is released from nerve endings around blood vessels and helps produce migraine symptoms. This offered a new strategy — and target — for combating severe head pain. The first drugs to inhibit the effects of CGRP hit the market in 2018, and most researchers in the field are now focused on idiopathic headache as a neurological disorder, not a vascular problem.
“It’s the way science works,” Zeller says. “Changing course is not easy. It’s like turning a ship on a dime. The same applies to the study of headaches.”
Many medications aimed at blocking these neurotransmitters have since been developed, though only about 20 percent of patients seem to find permanent relief as a result. As Zeller chronicles, other patients feel benefits for about a year, before the effects of a medication wear off; many of them now try complicated combinations of medications.
Severe headache disorders also seem linked to hormonal changes: people often see an onset of these ailments in their teens and a diminishing of symptoms later in life. So, while headache medicine has witnessed a recent breakthrough, much more work lies ahead.
Opening up a discussion
Amid all this, one set of questions still tugging at Zeller is evolutionary in nature: Why do humans experience headache disorders at all? There is no clear evidence that other species get severe headaches — or that the prevalence of severe headache conditions in society has ever diminished.
One hypothesis, Zeller notes, is that “having a highly attuned nervous system could have been a benefit in our more primitive state.” Such a system may have helped us survive, in the past, but at the cost of producing intense disorders in some people when the wiring goes a bit awry. We may learn more about this as neuro-based headache research continues.
“The Headache” has received widespread praise. Writing in The New Yorker, Jerome Groopman heralded the “rich material in the book,” noting that it “weaves together history, biology, a survey of current research, testimony from patients, and an agonizing account of Zeller’s own suffering.”
For his part, Zeller says he is appreciative of the attention “The Headache” has generated, as one of the most widely noted nonfiction books released this summer.
“It’s opened up room for a kind of conversation that doesn’t usually break through into the mainstream,” Zeller says. “I’m hearing from a lot of patients who just are saying, ‘Thank you for writing this.’ And that’s really gratifying. I’m most happy to hear from people who think it’s giving them a voice. I’m also hearing a lot from doctors and scientists. The moment has opened up for this discussion, and I’m grateful for that.”
MIT software tool turns everyday objects into animated, eye-catching displays
The FabObscura system helps users design and print barrier-grid animations without electronics, and can help produce dynamic household, workplace, and artistic objects.
Whether you’re an artist, advertising specialist, or just looking to spruce up your home, turning everyday objects into dynamic displays is a great way to make them more visually engaging. For example, you could turn a kids’ book into a handheld cartoon of sorts, making the reading experience more immersive and memorable for a child.
But now, thanks to MIT researchers, it’s also possible to make dynamic displays without electronics, in the form of barrier-grid animations (or scanimations), which rely on printed materials instead. This visual trick involves sliding a patterned sheet across an image to create the illusion of movement. The secret of a barrier-grid animation lies in its name: An overlay called a barrier (or grid), often resembling a picket fence, moves across, rotates around, or tilts toward an image to reveal frames in an animated sequence. The underlying picture is a combination of all the stills, sliced and interwoven to present a different snapshot depending on the overlay’s position.
While tools exist to help artists create barrier-grid animations, they’re typically limited to barrier patterns made of straight lines. Building on previous work in creating images that appear to move, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a tool that allows users to explore more unconventional designs. From zigzags to circular patterns, the team’s “FabObscura” software turns unique concepts into printable scanimations, helping users add dynamic animations to things like pictures, toys, and decor.
MIT Department of Electrical Engineering and Computer Science (EECS) PhD student and CSAIL researcher Ticha Sethapakdi SM ’19, a lead author on a paper presenting FabObscura, says that the system is a one-size-fits-all tool for customizing barrier-grid animations. This versatility extends to unconventional, elaborate overlay designs, like pointed, angled lines to animate a picture you might put on your desk, or the swirling, hypnotic appearance of a radial pattern you could spin over an image placed on a coin or a Frisbee.
“Our system can turn a seemingly static, abstract image into an attention-catching animation,” says Sethapakdi. “The tool lowers the barrier to entry to creating these barrier-grid animations, while helping users express a variety of designs that would’ve been very time-consuming to explore by hand.”
Behind these novel scanimations is a key finding: Barrier patterns can be expressed as any continuous mathematical function — not just straight lines. Users can type these equations into a text box within the FabObscura program, and then see how it graphs out the shape and movement of a barrier pattern. If you want a traditional horizontal pattern, you’d enter a constant function, where the output is the same no matter the input, much like drawing a straight line across a graph. For a wavy design, you’d use a sine function, which is smooth and resembles a mountain range when plotted. The system’s interface includes helpful examples of these equations to guide users toward their preferred pattern.
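To make the idea concrete, here is a minimal sketch of how frames could be interleaved and a matching barrier generated from a user-supplied curve, assuming a vertically sliding barrier whose slit shape follows that curve. The function and parameter names below are illustrative assumptions, not FabObscura’s actual interface.

```python
# Minimal barrier-grid ("scanimation") sketch. Illustrative only:
# names and parameters are hypothetical, not FabObscura's API.
import numpy as np
from PIL import Image

def interlace(frames, barrier_fn, period):
    """Interleave frames so a barrier whose transparent slits follow
    y = barrier_fn(x) (repeated every `period` pixels and slid
    vertically) reveals one frame at a time."""
    n = len(frames)
    h, w = frames[0].shape[:2]
    out = np.zeros_like(frames[0])
    ys, xs = np.mgrid[0:h, 0:w]
    # Each pixel's phase relative to the barrier curve picks the frame
    # whose pixel gets printed there.
    phase = np.mod(ys - barrier_fn(xs), period)
    idx = (phase * n // period).astype(int)
    for i, frame in enumerate(frames):
        out[idx == i] = frame[idx == i]
    return out

def barrier_mask(shape, barrier_fn, period, n_frames, offset=0):
    """Opaque barrier with one transparent slit per period; shifting
    `offset` by period / n_frames advances the animation one frame."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    phase = np.mod(ys - barrier_fn(xs) - offset, period)
    return phase < period / n_frames  # True = transparent

# Example: a wavy (sine) barrier over four flat gray 200x200 frames.
frames = [np.full((200, 200), 60 * i, dtype=np.uint8) for i in range(4)]
wavy = lambda x: 10 * np.sin(2 * np.pi * x / 50)
sheet = interlace(frames, wavy, period=8)
mask = barrier_mask(sheet.shape, wavy, period=8, n_frames=4)
Image.fromarray(sheet).save("interlaced.png")
Image.fromarray((mask * 255).astype(np.uint8)).save("barrier.png")
```

In this sketch, a constant function would reproduce the classic straight-slit scanimation, while the sine function yields the wavy design described above.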
A simple interface for elaborate ideas
FabObscura works for all known types of barrier-grid animations, supporting a variety of user interactions. The system enables the creation of a display with an appearance that changes depending on your viewpoint. FabObscura also allows you to create displays that you can animate by sliding or rotating a barrier over an image.
To produce these designs, users can upload a folder of frames of an animation (perhaps a few stills of a horse running), or choose from a few preset sequences (like an eye blinking) and specify the angle your barrier will move. After previewing your design, you can fabricate the barrier and picture onto separate transparent sheets (or print the image on paper) using a standard 2D printer, such as an inkjet. Your image can then be placed and secured on flat, handheld items such as picture frames, phones, and books.
You can enter separate equations if you want two sequences on one surface, which the researchers call “nested animations.” Depending on how you move the barrier, you’ll see a different story being told. For example, CSAIL researchers created a car that rotates when you move its sheet vertically, but transforms into a spinning motorcycle when you slide the grid horizontally.
These customizations lead to unique household items, too. The researchers designed an interactive coaster that you can switch from displaying a “coffee” icon to symbols of a martini and a glass of water by pressing your fingers down on the edges of its surface. The team also spruced up a jar of sunflower seeds, producing a flower animation on the lid that blooms when twisted off.
Artists, including graphic designers and printmakers, could also use this tool to make dynamic pieces without needing to connect any wires. The tool saves them crucial time to explore creative, low-power designs, such as a clock with a mouse that runs along as it ticks. FabObscura could produce animated food packaging, or even reconfigurable signage for places like construction sites or stores that notify people when a particular area is closed or a machine isn’t working.
Keep it crisp
FabObscura’s barrier-grid creations do come with certain trade-offs. While nested animations are novel and more dynamic than a single-layer scanimation, their visual quality isn’t as strong. The researchers wrote design guidelines to address these challenges, recommending users upload fewer frames for nested animations to keep the interlaced image simple and stick to high-contrast images for a crisper presentation.
In the future, the researchers intend to expand what users can upload to FabObscura, like being able to drop in a video file that the program can then select the best frames from. This would lead to even more expressive barrier-grid animations.
FabObscura might also step into a new dimension: 3D. While the system is currently optimized for flat, handheld surfaces, CSAIL researchers are considering implementing their work into larger, more complex objects, possibly using 3D printers to fabricate even more elaborate illusions.
Sethapakdi wrote the paper with several CSAIL affiliates: Zhejiang University PhD student and visiting researcher Mingming Li; MIT EECS PhD student Maxine Perroni-Scharf; MIT postdoc Jiaji Li; MIT associate professors Arvind Satyanarayan and Justin Solomon; and senior author and MIT Associate Professor Stefanie Mueller, leader of the Human-Computer Interaction (HCI) Engineering Group at CSAIL. Their work will be presented at the ACM Symposium on User Interface Software and Technology (UIST) this month.
Demo Day features hormone-tracking sensors, desalination systems, and other innovations
MIT student teams celebrate business milestones at the capstone event for the 2025 delta v summer accelerator.
Kresge Auditorium came alive Friday as MIT entrepreneurs took center stage to share their progress in the delta v startup accelerator program.
Now in its 14th year, delta v Demo Day represents the culmination of a summer in which students work full-time on new ventures under the guidance of the Martin Trust Center for MIT Entrepreneurship.
It also doubles as a celebration, with Trust Center Managing Director (and consummate hype man) Bill Aulet setting the tone early with his patented high-five run through the audience and leap on stage for opening remarks.
“All these students have performed a miracle,” Aulet told the crowd. “One year ago, they were sitting in the audience like all of you. One year ago, they probably didn’t even have an idea or a technology. Maybe they did, but they didn’t have a team, a clear vision, customer models, or a clear path to impact. But today they’re going to blow your mind. They have products — real products — a founding team, a clear mission, customer commitments or letters of intent, legitimate business models, and a path to greatness and impact. In short, they will have achieved escape velocity.”
The two-hour event filled Kresge Auditorium, with a line out the door for good measure, and was followed by a party under a tent on the Kresge lawn. Each presentation began with a short video introducing the company before a student took the stage to expand on the problem they were solving and what their team has learned from talks with potential customers.
In total, 22 startups showcased their ventures and early business milestones in rapid-fire presentations.
Rick Locke, the new dean of the MIT Sloan School of Management, said events like Demo Day are why he came back to the Institute after serving in various roles between 1988 and 2013.
“What’s great about this event is how it crystallizes the spirit of MIT: smart people doing important work, doing it by rolling up their sleeves, doing it with a certain humility but also a vision, and really making a difference in the world,” Locke told the audience. “You can feel the positivity, the energy, and the buzz here tonight. That’s what the world needs more of.”
A program with a purpose
This year’s Demo Day featured 70 students from across MIT, with 16 startups working out of the Trust Center on campus and six working from New York City. Through the delta v program, the students were guided by mentors, received funding, and worked through an action-oriented curriculum full-time between June and September. Aulet also noted that the students presenting benefitted from entrepreneurial support resources from across the Institute.
The odds are in the startups’ favor: A 2022 study found that 69 percent of businesses from the program were still operating five years later. Alumni companies had raised roughly $1 billion in funding.
Demo Day marks the end of delta v and serves to inspire next year’s cohort of entrepreneurs.
“Turn on a screen or look anywhere around you, and you'll see issues with climate, sustainability, health care, the future of work, economic disparities, and more,” Aulet said. “It can all be overwhelming. These entrepreneurs bring light to dark times. Entrepreneurs don’t see problems. As the great Biggie Smalls from Brooklyn said, ‘Turn a negative into a positive.’ That’s what entrepreneurs do.”
Startups in action
Startups in this year’s cohort presented solutions in biotech and health care, sustainability, financial services, energy, and more.
One company, Gees, is helping women with hormonal conditions like polycystic ovary syndrome (PCOS) through a saliva-based sensor that tracks key hormones, giving users personalized insights to manage their symptoms.
“Over 200 million women live with PCOS worldwide,” said MIT postdoc and co-founder Walaa Khushaim. “If it goes unmanaged, it can lead to even more serious diseases. The good news is that 80 percent of cases can be managed with lifestyle changes. The problem is women trying to change their lifestyle are left in the dark, unsure if what they are doing is truly helping.”
Gees’ sensor is noninvasive and easier to use than current sensors that track hormones. It provides feedback in minutes from the comfort of users’ homes. The sensor connects to an app that shows results and trends to help women stay on track. The company already has more than 500 sign-ups for its wait list.
Another company, Kira, has created an electrochemical system to make water desalination more efficient and accessible. The company aims to help businesses manage brine wastewater, which is often dumped, pumped underground, or trucked off to be treated.
“At Kira, we’re working toward a system that produces zero liquid waste and only solid salts,” says PhD student Jonathan Bessette SM ’22.
Kira says its system increases the amount of clean water created by industrial processes, reduces the amount of brine wastewater, and optimizes the energy flows of factories. The company says next year it will deploy a system at the largest groundwater desalination plant in the U.S.
A variety of other startups presented at the event:
AutoAce builds AI agents for car dealerships, automating repetitive tasks with a 24/7 voice agent that answers inbound service calls and books appointments.
Carbion uses a thermochemical process to convert biomass into battery-grade graphite at half the temperature of traditional synthetic methods.
Clima Technologies has developed an AI building engineer that enables facilities managers to “talk” to their buildings in real-time, allowing teams to conduct 24/7 commissioning, act on fault diagnostics, minimize equipment downtime, and optimize controls.
Cognify uses AI to predict customer interactions with digital platforms, simulating customer behavior to deliver insights into which designs resonate with customers, where friction exists in user journeys, and how to build a user experience that converts.
Durability uses computer vision and AI to analyze movement, predict injury risks, and guide recovery for athletes.
EggPlan uses a simple blood test and proprietary model to assess eligibility for egg freezing with fertility clinics. If users do not have a baby, their fees are returned, making the process risk-free.
Forma Systems developed optimization software for manufacturers to make smarter, faster decisions about things like materials use while reducing their climate impact.
Ground3d is a social impact organization building a digital tool for crowdsourcing hyperlocal environmental data, beginning with street-level documentation of flooding events in New York City. The platform could help residents with climate resilience and advocacy.
GrowthFactor helps retailers scale their footprint with a fractional real estate analyst while using an AI-powered platform to maximize their chance of commercial success.
Kyma uses AI-powered patient engagement to integrate data from wearables, smart scales, sensors, and continuous glucose monitors to track behaviors and draft physician-approved, timely reminders.
LNK Energies is solving the heavy-duty transport industry’s emissions problem with liquid organic hydrogen carriers (LOHCs): safe, room-temperature liquids compatible with existing diesel infrastructure.
Mendhai Health offers a suite of digital tools to help women improve pelvic health and rehabilitate before and after childbirth.
Nami has developed an automatic, reusable drinkware cleaning station that delivers a hot, soapy, pressurized wash in under 30 seconds.
Pancho helps restaurants improve margins with an AI-powered food procurement platform that uses real-time price comparison, dispute tracking, and smart ordering.
Qadence offers older adults a co-pilot that assesses mobility and fall risk, then delivers tailored guidance to improve balance, track progress, and extend recovery beyond the clinic.
Sensopore offers an at-home diagnostic device to help families test for everyday illnesses at home, get connected with a telehealth doctor, and have prescriptions shipped to their door, reducing clinical visits.
Spheric Bio has developed a personal occlusion device to improve a common surgical procedure used to treat strokes.
Tapestry uses conversational AI to chat with attendees before events and connect them with the right people for more meaningful conversations.
Torque automates financial analysis across private equity portfolios to help investment professionals make better strategic decisions.
Trazo helps interior designers and architects collaborate and iterate on technical drawings and 3D designs for new construction or remodeling projects.
DOE selects MIT to establish a Center for the Exascale Simulation of Coupled High-Enthalpy Fluid–Solid Interactions
The research center, sponsored by the DOE’s National Nuclear Security Administration, will advance the simulation of extreme environments, such as those in hypersonic flight and atmospheric reentry.
The U.S. Department of Energy’s National Nuclear Security Administration (DOE/NNSA) recently announced that it has selected MIT to establish a new research center dedicated to advancing the predictive simulation of extreme environments, such as those encountered in hypersonic flight and atmospheric re-entry. The center will be part of the fourth phase of NNSA's Predictive Science Academic Alliance Program (PSAAP-IV), which supports frontier research advancing the predictive capabilities of high-performance computing for open science and engineering applications relevant to national security mission spaces.
The Center for the Exascale Simulation of Coupled High-Enthalpy Fluid–Solid Interactions (CHEFSI) — a joint effort of the MIT Center for Computational Science and Engineering, the MIT Schwarzman College of Computing, and the MIT Institute for Soldier Nanotechnologies (ISN) — plans to harness cutting-edge exascale supercomputers and next-generation algorithms to simulate with unprecedented detail how extremely hot, fast-moving gaseous and solid materials interact. The understanding of these extreme environments — characterized by temperatures of more than 1,500 degrees Celsius and speeds as high as Mach 25 — and their effect on vehicles is central to national security, space exploration, and the development of advanced thermal protection systems.
“CHEFSI will capitalize on MIT’s deep strengths in predictive modeling, high-performance computing, and STEM education to help ensure the United States remains at the forefront of scientific and technological innovation,” says Ian A. Waitz, MIT’s vice president for research. “The center’s particular relevance to national security and advanced technologies exemplifies MIT’s commitment to advancing research with broad societal benefit.”
CHEFSI is one of five new Predictive Simulation Centers announced by the NNSA as part of a program expected to provide up to $17.5 million to each center over five years.
CHEFSI’s research aims to couple detailed simulations of high-enthalpy gas flows with models of the chemical, thermal, and mechanical behavior of solid materials, capturing phenomena such as oxidation, nitridation, ablation, and fracture. Advanced computational models — validated by carefully designed experiments — can address the limitations of flight testing by providing critical insights into material performance and failure.
“By integrating high-fidelity physics models with artificial intelligence-based surrogate models, experimental validation, and state-of-the-art exascale computational tools, CHEFSI will help us understand and predict how thermal protection systems perform under some of the harshest conditions encountered in engineering systems,” says Raúl Radovitzky, the Jerome C. Hunsaker Professor of Aeronautics and Astronautics, associate director of the ISN, and director of CHEFSI. “This knowledge will help in the design of resilient systems for applications ranging from reusable spacecraft to hypersonic vehicles.”
Radovitzky will be joined on the center’s leadership team by Youssef Marzouk, the Breene M. Kerr (1951) Professor of Aeronautics and Astronautics, co-director of the MIT Center for Computational Science and Engineering (CCSE), and recently named the associate dean of the MIT Schwarzman College of Computing; and Nicolas Hadjiconstantinou, the Quentin Berg (1937) Professor of Mechanical Engineering and co-director of CCSE, who will serve as associate directors. The center co-principal investigators include MIT faculty members across the departments of Aeronautics and Astronautics, Electrical Engineering and Computer Science, Materials Science and Engineering, Mathematics, and Mechanical Engineering. Franklin Hadley will lead center operations, with administration and finance under the purview of Joshua Freedman. Hadley and Freedman are both members of the ISN headquarters team.
CHEFSI expects to collaborate extensively with the DOE/NNSA national laboratories — Lawrence Livermore National Laboratory, Los Alamos National Laboratory, and Sandia National Laboratories — and, in doing so, offer graduate students and postdocs immersive research experiences and internships at these facilities.
Ten years later, LIGO is a black-hole hunting machine
LIGO, Virgo, and KAGRA celebrate the anniversary of the first detection of gravitational waves and announce verification of Stephen Hawking’s black hole area theorem.
The following article is adapted from a press release issued by the Laser Interferometer Gravitational-wave Observatory (LIGO) Laboratory. LIGO is funded by the National Science Foundation and operated by Caltech and MIT, which conceived and built the project.
On Sept. 14, 2015, a signal arrived on Earth, carrying information about a pair of remote black holes that had spiraled together and merged. The signal had traveled about 1.3 billion years to reach us at the speed of light — but it was not made of light. It was a different kind of signal: a quivering of space-time called gravitational waves first predicted by Albert Einstein 100 years prior. On that day 10 years ago, the twin detectors of the U.S. National Science Foundation Laser Interferometer Gravitational-wave Observatory (NSF LIGO) made the first-ever direct detection of gravitational waves, whispers in the cosmos that had gone unheard until that moment.
The historic discovery meant that researchers could now sense the universe through three different means. Light waves, such as X-rays, optical, radio, and other wavelengths of light, as well as high-energy particles called cosmic rays and neutrinos, had been captured before, but this was the first time anyone had witnessed a cosmic event through the gravitational warping of space-time. For this achievement, first dreamed up more than 40 years prior, three of the team’s founders won the 2017 Nobel Prize in Physics: MIT’s Rainer Weiss, professor emeritus of physics (who recently passed away at age 92); Caltech’s Barry Barish, the Ronald and Maxine Linde Professor of Physics, Emeritus; and Caltech’s Kip Thorne, the Richard P. Feynman Professor of Theoretical Physics, Emeritus.
Today, LIGO, which consists of detectors in both Hanford, Washington, and Livingston, Louisiana, routinely observes roughly one black hole merger every three days. LIGO now operates in coordination with two international partners, the Virgo gravitational-wave detector in Italy and KAGRA in Japan. Together, the gravitational-wave-hunting network, known as the LVK (LIGO, Virgo, KAGRA), has captured a total of about 300 black hole mergers, some of which are confirmed while others await further analysis. During the network’s current science run, the fourth since the first run in 2015, the LVK has discovered more than 200 candidate black hole mergers, more than double the number caught in the first three runs.
The dramatic rise in the number of LVK discoveries over the past decade is due to several improvements to the detectors — some of which involve cutting-edge quantum precision engineering. The LVK detectors remain by far the most precise rulers ever created by humans. The space-time distortions induced by gravitational waves are incredibly minuscule. For instance, LIGO detects changes in space-time smaller than 1/10,000 the width of a proton. That’s 1/700 trillionth the width of a human hair.
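As a rough consistency check of those two comparisons (using representative values not quoted in the release — a proton diameter of about $1.7\times10^{-15}$ m and a hair width of about $1.2\times10^{-4}$ m — and writing $\Delta L$ for the detectable change in arm length):

$$\Delta L \approx \frac{1.7\times10^{-15}\,\text{m}}{10^{4}} \approx 1.7\times10^{-19}\,\text{m}, \qquad \frac{1.7\times10^{-19}\,\text{m}}{1.2\times10^{-4}\,\text{m}} \approx 1.4\times10^{-15} \approx \frac{1}{7\times10^{14}},$$

which is indeed about 1/700 trillionth.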
“Rai Weiss proposed the concept of LIGO in 1972, and I thought, ‘This doesn’t have much chance at all of working,’” recalls Thorne, an expert on the theory of black holes. “It took me three years of thinking about it on and off and discussing ideas with Rai and Vladimir Braginsky [a Russian physicist], to be convinced this had a significant possibility of success. The technical difficulty of reducing the unwanted noise that interferes with the desired signal was enormous. We had to invent a whole new technology. NSF was just superb at shepherding this project through technical reviews and hurdles.”
Nergis Mavalvala, the Curtis and Kathleen Marble Professor of Astrophysics at MIT and dean of the MIT School of Science, says that the challenges the team overcame to make the first discovery are still very much at play. “From the exquisite precision of the LIGO detectors to the astrophysical theories of gravitational-wave sources, to the complex data analyses, all these hurdles had to be overcome, and we continue to improve in all of these areas,” Mavalvala says. “As the detectors get better, we hunger for farther, fainter sources. LIGO continues to be a technological marvel.”
The clearest signal yet
LIGO’s improved sensitivity is exemplified in a recent discovery of a black hole merger referred to as GW250114. (The numbers denote the date the gravitational-wave signal arrived at Earth: January 14, 2025.) The event was not that different from LIGO’s first-ever detection (called GW150914) — both involve colliding black holes about 1.3 billion light-years away with masses between 30 and 40 times that of our sun. But thanks to 10 years of technological advances reducing instrumental noise, the GW250114 signal is dramatically clearer.
“We can hear it loud and clear, and that lets us test the fundamental laws of physics,” says LIGO team member Katerina Chatziioannou, Caltech assistant professor of physics and William H. Hurt Scholar, and one of the authors of a new study on GW250114 published in Physical Review Letters.
By analyzing the frequencies of gravitational waves emitted by the merger, the LVK team provided the best observational evidence captured to date for what is known as the black hole area theorem, an idea put forth by Stephen Hawking in 1971 that says the total surface areas of black holes cannot decrease. When black holes merge, their masses combine, increasing the surface area. But they also lose energy in the form of gravitational waves. Additionally, the merger can cause the combined black hole to increase its spin, which leads to it having a smaller area. The black hole area theorem states that despite these competing factors, the total surface area must grow in size.
Later, Hawking and physicist Jacob Bekenstein concluded that a black hole’s area is proportional to its entropy, or degree of disorder. The findings paved the way for later groundbreaking work in the field of quantum gravity, which attempts to unite two pillars of modern physics: general relativity and quantum physics.
In essence, the LIGO detection allowed the team to “hear” two black holes growing as they merged into one, verifying Hawking’s theorem. (Virgo and KAGRA were offline during this particular observation.) The initial black holes had a total surface area of 240,000 square kilometers (roughly the size of Oregon), while the final area was about 400,000 square kilometers (roughly the size of California) — a clear increase. This is the second test of the black hole area theorem; an initial test was performed in 2021 using data from the first GW150914 signal, but because that data were not as clean, the results had a confidence level of 95 percent compared to 99.999 percent for the new data.
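To see roughly how such areas follow from masses and spins, one can use the standard horizon-area formulas for non-spinning (Schwarzschild) and spinning (Kerr) black holes. The numbers below are illustrative stand-ins — two non-spinning 33-solar-mass black holes merging into a 63-solar-mass remnant with dimensionless spin $\chi \approx 0.7$ — not the measured GW250114 parameters:

$$A_{\text{Schw}} = 16\pi\left(\frac{GM}{c^{2}}\right)^{2}, \qquad A_{\text{Kerr}} = 8\pi\left(\frac{GM}{c^{2}}\right)^{2}\left(1+\sqrt{1-\chi^{2}}\right), \qquad \frac{GM_{\odot}}{c^{2}} \approx 1.48\ \text{km}.$$

With those inputs, each initial black hole has $A_{\text{Schw}} \approx 16\pi\,(33\times1.48\ \text{km})^{2} \approx 1.2\times10^{5}\ \text{km}^{2}$, for a combined area of roughly $2.4\times10^{5}\ \text{km}^{2}$, while the remnant has $A_{\text{Kerr}} \approx 8\pi\,(63\times1.48\ \text{km})^{2}\,(1+\sqrt{1-0.7^{2}}) \approx 3.7\times10^{5}\ \text{km}^{2}$ — larger, as the theorem requires.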
Thorne recalls Hawking phoning him to ask whether LIGO might be able to test his theorem immediately after he learned of the 2015 gravitational-wave detection. Hawking died in 2018 and sadly did not live to see his theory observationally verified. “If Hawking were alive, he would have reveled in seeing the area of the merged black holes increase,” Thorne says.
The trickiest part of this type of analysis had to do with determining the final surface area of the merged black hole. The surface areas of pre-merger black holes can be more readily gleaned as the pair spiral together, roiling space-time and producing gravitational waves. But after the black holes coalesce, the signal is not as clear-cut. During this so-called ringdown phase, the final black hole vibrates like a struck bell.
In the new study, the researchers precisely measured the details of the ringdown phase, which allowed them to calculate the mass and spin of the black hole and, subsequently, determine its surface area. More specifically, they were able, for the first time, to confidently pick out two distinct gravitational-wave modes in the ringdown phase. The modes are like characteristic sounds a bell would make when struck; they have somewhat similar frequencies but die out at different rates, which makes them hard to identify. The improved data for GW250114 meant that the team could extract the modes, demonstrating that the black hole’s ringdown occurred exactly as predicted by math models based on the Teukolsky formalism — devised in 1972 by Saul Teukolsky, now a professor at Caltech and Cornell University.
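In schematic terms, the ringdown can be modeled as a sum of damped sinusoids, one per mode (a generic textbook form, not the specific parameterization used in the study):

$$h(t) \approx \sum_{k} A_{k}\, e^{-(t-t_{0})/\tau_{k}} \cos\!\big(2\pi f_{k}(t-t_{0}) + \phi_{k}\big), \qquad t > t_{0},$$

where each mode $k$ has its own frequency $f_{k}$ and damping time $\tau_{k}$. For a black hole described by general relativity, every $f_{k}$ and $\tau_{k}$ is fixed by the remnant’s mass and spin alone, so resolving two distinct modes overdetermines those two numbers — providing both the surface-area estimate and a cross-check of the theory.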
Another study from the LVK, submitted to Physical Review Letters today, places limits on a predicted third, higher-pitched tone in the GW250114 signal, and performs some of the most stringent tests yet of general relativity’s accuracy in describing merging black holes.
“A decade of improvements allowed us to make this exquisite measurement,” Chatziioannou says. “It took both of our detectors, in Washington and Louisiana, to do this. I don’t know what will happen in 10 more years, but in the first 10 years, we have made tremendous improvements to LIGO’s sensitivity. This not only means we are accelerating the rate at which we discover new black holes, but we are also capturing detailed data that expand the scope of what we know about the fundamental properties of black holes.”
Jenne Driggers, detection lead senior scientist at LIGO Hanford, adds, “It takes a global village to achieve our scientific goals. From our exquisite instruments, to calibrating the data very precisely, vetting and providing assurances about the fidelity of the data quality, searching the data for astrophysical signals, and packaging all that into something that telescopes can read and act upon quickly, there are a lot of specialized tasks that come together to make LIGO the great success that it is.”
Pushing the limits
LIGO and Virgo have also observed neutron stars over the past decade. Like black holes, neutron stars form from the explosive deaths of massive stars, but they weigh less and glow with light. Of note, in August 2017, LIGO and Virgo witnessed an epic collision between a pair of neutron stars — a kilonova — that sent gold and other heavy elements flying into space and drew the gaze of dozens of telescopes around the world, which captured light ranging from high-energy gamma rays to low-energy radio waves. The “multi-messenger” astronomy event marked the first time that both light and gravitational waves had been captured in a single cosmic event. Today, the LVK continues to alert the astronomical community to potential neutron star collisions; astronomers then use telescopes to search the skies for signs of kilonovae.
“The LVK has made big strides in recent years to make sure we’re getting high-quality data and alerts out to the public in under a minute, so that astronomers can look for multi-messenger signatures from our gravitational-wave candidates,” Driggers says.
“The global LVK network is essential to gravitational-wave astronomy,” says Gianluca Gemme, Virgo spokesperson and director of research at the National Institute of Nuclear Physics in Italy. “With three or more detectors operating in unison, we can pinpoint cosmic events with greater accuracy, extract richer astrophysical information, and enable rapid alerts for multi-messenger follow-up. Virgo is proud to contribute to this worldwide scientific endeavor.”
Other LVK scientific discoveries include the first detection of collisions between one neutron star and one black hole; asymmetrical mergers, in which one black hole is significantly more massive than its partner black hole; the discovery of the lightest black holes known, challenging the idea that there is a “mass gap” between neutron stars and black holes; and the most massive black hole merger seen yet with a merged mass of 225 solar masses. For reference, the previous record holder for the most massive merger had a combined mass of 140 solar masses.
Even in the decades before LIGO began taking data, scientists were building foundations that made the field of gravitational-wave science possible. Breakthroughs in computer simulations of black hole mergers, for example, allow the team to extract and analyze the feeble gravitational-wave signals generated across the universe.
LIGO’s technological achievements, beginning as far back as the 1980s, include several far-reaching innovations, such as a new way to stabilize lasers using the so-called Pound–Drever–Hall technique. Invented in 1983 and named for contributing physicists Robert Vivian Pound, the late Ronald Drever of Caltech (a founder of LIGO), and John Lewis Hall, this technique is widely used today in other fields, such as the development of atomic clocks and quantum computers. Other innovations include cutting-edge mirror coatings that almost perfectly reflect laser light; “quantum squeezing” tools that enable LIGO to surpass sensitivity limits imposed by quantum physics; and new artificial intelligence methods that could further hush certain types of unwanted noise.
“What we are ultimately doing inside LIGO is protecting quantum information and making sure it doesn’t get destroyed by external factors,” Mavalvala says. “The techniques we are developing are pillars of quantum engineering and have applications across a broad range of devices, such as quantum computers and quantum sensors.”
In the coming years, the scientists and engineers of LVK hope to further fine-tune their machines, expanding their reach deeper and deeper into space. They also plan to use the knowledge they have gained to build another gravitational-wave detector, LIGO India. Having a third LIGO observatory would greatly improve the precision with which the LVK network can localize gravitational-wave sources.
Looking farther into the future, the team is working on a concept for an even larger detector, called Cosmic Explorer, which would have arms 40 kilometers long. (The twin LIGO observatories have 4-kilometer arms.) A European project, called Einstein Telescope, also has plans to build one or two huge underground interferometers with arms more than 10 kilometers long. Observatories on this scale would allow scientists to hear the earliest black hole mergers in the universe.
“Just 10 short years ago, LIGO opened our eyes for the first time to gravitational waves and changed the way humanity sees the cosmos,” says Aamir Ali, a program director in the NSF Division of Physics, which has supported LIGO since its inception. “There’s a whole universe to explore through this completely new lens and these latest discoveries show LIGO is just getting started.”
The LIGO-Virgo-KAGRA Collaboration
LIGO is funded by the U.S. National Science Foundation and operated by Caltech and MIT, which together conceived and built the project. Financial support for the Advanced LIGO project was led by NSF with Germany (Max Planck Society), the United Kingdom (Science and Technology Facilities Council), and Australia (Australian Research Council) making significant commitments and contributions to the project. More than 1,600 scientists from around the world participate in the effort through the LIGO Scientific Collaboration, which includes the GEO Collaboration. Additional partners are listed at my.ligo.org/census.php.
The Virgo Collaboration is currently composed of approximately 1,000 members from 175 institutions in 20 different (mainly European) countries. The European Gravitational Observatory (EGO) hosts the Virgo detector near Pisa, Italy, and is funded by the French National Center for Scientific Research, the National Institute of Nuclear Physics in Italy, the National Institute of Subatomic Physics in the Netherlands, The Research Foundation – Flanders, and the Belgian Fund for Scientific Research. A list of the Virgo Collaboration groups can be found on the project website.
KAGRA is a laser interferometer with 3-kilometer-long arms located in Kamioka, Gifu, Japan. The host institute is the Institute for Cosmic Ray Research of the University of Tokyo, and the project is co-hosted by the National Astronomical Observatory of Japan and the High Energy Accelerator Research Organization. The KAGRA collaboration is composed of more than 400 members from 128 institutes in 17 countries/regions. KAGRA’s information for general audiences is at the website gwcenter.icrr.u-tokyo.ac.jp/en/. Resources for researchers are accessible at gwwiki.icrr.u-tokyo.ac.jp/JGWwiki/KAGRA.
Study explains how a rare gene variant contributes to Alzheimer’s disease
Lipid metabolism and cell membrane function can be disrupted in the neurons of people who carry rare variants of ABCA7.
A new study from MIT neuroscientists reveals how rare variants of a gene called ABCA7 may contribute to the development of Alzheimer’s in some of the people who carry it.
Dysfunctional versions of the ABCA7 gene, which are found in a very small proportion of the population, contribute strongly to Alzheimer’s risk. In the new study, the researchers discovered that these mutations can disrupt the metabolism of lipids that play an important role in cell membranes.
This disruption makes neurons hyperexcitable and leads them into a stressed state that can damage DNA and other cellular components. These effects, the researchers found, could be reversed by treating neurons with choline, an important building block needed to make cell membranes.
“We found pretty strikingly that when we treated these cells with choline, a lot of the transcriptional defects were reversed. We also found that the hyperexcitability phenotype and elevated amyloid beta peptides that we observed in neurons that lost ABCA7 were reduced after treatment,” says Djuna von Maydell, an MIT graduate student and the lead author of the study.
Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory and the Picower Professor in the MIT Department of Brain and Cognitive Sciences, is the senior author of the paper, which appears today in Nature.
Membrane dysfunction
Genomic studies of Alzheimer’s patients have found that people who carry variants of ABCA7 that generate reduced levels of functional ABCA7 protein have about double the odds of developing Alzheimer’s compared with people who don’t have those variants.
ABCA7 encodes a protein that transports lipids across cell membranes. Lipid metabolism is also the primary target of a more common Alzheimer’s risk factor known as APOE4. In previous work, Tsai’s lab has shown that APOE4, which is found in about half of all Alzheimer’s patients, disrupts brain cells’ ability to metabolize lipids and respond to stress.
To explore how ABCA7 variants might contribute to Alzheimer’s risk, the researchers obtained tissue samples from the Religious Orders Study/Memory and Aging Project (ROSMAP), a longitudinal study that has tracked memory, motor, and other age-related changes in older people since 1994. Of about 1,200 samples in the dataset that had genetic information available, the researchers obtained 12 from people who carried a rare variant of ABCA7.
The researchers performed single-cell RNA sequencing of neurons from these ABCA7 carriers, allowing them to determine which other genes are affected when ABCA7 is missing. They found that the most significantly affected genes fell into three clusters related to lipid metabolism, DNA damage, and oxidative phosphorylation (the metabolic process that cells use to capture energy as ATP).
To investigate how those alterations could affect neuron function, the researchers introduced ABCA7 variants into neurons derived from induced pluripotent stem cells.
These cells showed many of the same gene expression changes as the cells from the patient samples, especially among genes linked to oxidative phosphorylation. Further experiments showed that the “safety valve” that normally lets mitochondria limit excess build-up of electrical charge was less active. This can lead to oxidative stress, a state that occurs when too many cell-damaging free radicals build up in tissues.
Using these engineered cells, the researchers also analyzed the effects of ABCA7 variants on lipid metabolism. Cells with the variants showed altered metabolism of a molecule called phosphatidylcholine, which could lead to membrane stiffness and may explain why the mitochondrial membranes of the cells were unable to function normally.
A boost in choline
Those findings raised the possibility that intervening in phosphatidylcholine metabolism might reverse some of the cellular effects of ABCA7 loss. To test that idea, the researchers treated neurons with ABCA7 mutations with a molecule called CDP-choline, a precursor of phosphatidylcholine.
As these cells began producing new phosphatidylcholine (both saturated and unsaturated forms), their mitochondrial membrane potentials also returned to normal, and their oxidative stress levels went down.
The researchers then used induced pluripotent stem cells to generate 3D tissue organoids made of neurons with the ABCA7 variant. These organoids developed higher levels of amyloid beta proteins, which form the plaques seen in the brains of Alzheimer’s patients. However, those levels returned to normal when the organoids were treated with CDP-choline. The treatment also reduced neurons’ hyperexcitability.
In a 2021 paper, Tsai’s lab found that CDP-choline treatment could also reverse many of the effects of another Alzheimer’s-linked gene variant, APOE4, in mice. She is now working with researchers at the University of Texas and MD Anderson Cancer Center on a clinical trial exploring how choline supplements affect people who carry the APOE4 gene.
Choline is naturally found in foods such as eggs, meat, fish, and some beans and nuts. Boosting choline intake with supplements may offer a way for many people to reduce their risk of Alzheimer’s disease, Tsai says.
“From APOE4 to ABCA7 loss of function, my lab demonstrates that disruption of lipid homeostasis leads to the development of Alzheimer’s-related pathology, and that restoring lipid homeostasis, such as through choline supplementation, can ameliorate these pathological phenotypes,” she says.
In addition to the rare variants of ABCA7 that the researchers studied in this paper, there is also a more common variant that is found at a frequency of about 18 percent in the population. This variant was thought to be harmless, but the MIT team showed that cells with this variant exhibited many of the same gene alterations in lipid metabolism that they found in cells with the rare ABCA7 variants.
“There’s more work to be done in this direction, but this suggests that ABCA7 dysfunction might play an important role in a much larger part of the population than just people who carry the rare variants,” von Maydell says.
The research was funded, in part, by the Cure Alzheimer’s Fund, the Freedom Together Foundation, the Carol and Gene Ludwig Family Foundation, James D. Cook, and the National Institutes of Health.
Lincoln Laboratory technologies win seven R&D 100 Awards for 2025
Inventions that protect U.S. service members, advance computing, and enhance communications are recognized among the year's most significant new products.
Seven technologies developed at MIT Lincoln Laboratory, either wholly or with collaborators, have earned 2025 R&D 100 Awards. This annual awards competition recognizes the year's most significant new technologies, products, and materials available on the marketplace or transitioned to use. An independent panel of technology experts and industry professionals selects the winners.
"Winning an R&D 100 Award is a recognition of the exceptional creativity and effort of our scientists and engineers. The awarded technologies reflect Lincoln Laboratory's mission to transform innovative ideas into real-world solutions for U.S. national security, industry, and society," says Melissa Choi, director of Lincoln Laboratory.
Lincoln Laboratory's winning technologies enhance national security in a range of ways, from securing satellite communication links and identifying nearby emitting devices to providing a layer of defense for U.S. Army vehicles and protecting service members from chemical threats. Other technologies are pushing frontiers in computing, enabling the 3D integration of chips and the close inspection of superconducting electronics. Industry is also benefiting from these developments — for example, by adopting an architecture that streamlines the development of laser communications terminals.
The online publication R&D World manages the awards program. Recipients span Fortune 500 companies, federally funded research institutions, academic and government labs, and small companies. Since 2010, Lincoln Laboratory has received 108 R&D 100 Awards.
Protecting lives
Tactical Optical Spherical Sensor for Interrogating Threats (TOSSIT) is a throwable, baseball-sized sensor that remotely detects hazardous vapors and aerosols. It is designed to alert soldiers, first responders, and law enforcement to the presence of chemical threats, like nerve and blister agents, industrial chemical accidents, or fentanyl dust. Users can simply toss, drone-drop, or launch TOSSIT into an area of concern. To detect specific chemicals, the sensor samples the air with a built-in fan and uses an internal camera to observe color changes on a removable dye card. If chemicals are present, TOSSIT alerts users wirelessly on an app or via audible, light-up, or vibrational alarms in the sensor.
"TOSSIT fills an unmet need for a chemical-vapor point sensor, one that senses the immediate environment around it, that can be kinetically deployed ahead of service personnel. It provides a low-cost sensing option for vapors and solid aerosol threats — think toxic dust particles — that would otherwise not be detectable by small deployed sensor systems,” says principal investigator Richard Kingsborough. TOSSIT has been tested extensively in the field and is currently being transferred to the military.
Wideband Selective Propagation Radar (WiSPR) is an advanced radar and communications system developed to protect U.S. Army armored vehicles. The system's active electronically scanned antenna array extends signal range at millimeter-wave frequencies, steering thousands of beams per second to detect incoming kinetic threats while enabling covert communications between vehicles. WiSPR is engineered to have a low probability of detection, helping U.S. Army units evade adversaries seeking to detect radio-frequency (RF) energy emitting from radars. The system is currently in production.
"Current global conflicts are highlighting the susceptibility of armored vehicles to adversary anti-tank weapons. By combining custom technologies and commercial off-the-shelf hardware, the Lincoln Laboratory team produced a WiSPR prototype as quickly and efficiently as possible," says program manager Christopher Serino, who oversaw WiSPR development with principal investigator David Conway.
Advancing computing
Bumpless Integration of Chiplets to AI-Optimized Fabric is an approach that enables the fabrication of next-generation 2D, 2.5D, and 3D integrated circuits. As data-processing demands increase, designers are exploring 3D stacked assemblies of small specialized chips (chiplets) to pack more power into devices. Tiny bumps of conductive material are used to electrically connect these stacks, but these microbumps cannot accommodate the extremely dense, massively interconnected components needed for future microcomputers. To address this issue, Lincoln Laboratory developed a technique that eliminates microbumps. Key to this technique is a lithographically produced fabric that allows electrical bonding of chiplet stack layers. Researchers used an AI-driven decision-tree approach to optimize the design of this fabric. This bumpless feature can integrate hundreds of chiplets that perform like a single chip, improving data-processing speed and power efficiency, especially for high-performance AI applications.
"Our novel, bumpless, heterogeneous chiplet integration is a transformative approach addressing two semiconductor industry challenges: expanding chip yield and reducing cost and time to develop systems," says principal investigator Rabindra Das.
Quantum Diamond Magnetic Cryomicroscope is a breakthrough in magnetic field imaging for characterizing superconducting electronics, a promising frontier in high-performance computing. Unlike traditional techniques, this system delivers fast, wide-field, high-resolution imaging at the cryogenic temperatures required for superconducting devices. The instrument combines an optical microscopy system with a cryogenic sensor head containing a diamond engineered with nitrogen-vacancy centers — atomic-scale defects highly sensitive to magnetic fields. The cryomicroscope enables researchers to directly visualize trapped magnetic vortices that interfere with critical circuit components, helping to overcome a major obstacle to scaling superconducting electronics.
“The cryomicroscope gives us an unprecedented window into magnetic behavior in superconducting devices, accelerating progress toward next-generation computing technologies,” says Pauli Kehayias, joint principal investigator with Jennifer Schloss. The instrument is currently advancing superconducting electronics development at Lincoln Laboratory and is poised to impact materials science and quantum technology more broadly.
Enhancing communications
Lincoln Laboratory Radio Frequency Situational Awareness Model (LL RF-SAM) utilizes advances in AI to enhance U.S. service members' vigilance over the electromagnetic spectrum. The modern spectrum can be described as a swamp of mixed signals originating from civilian, military, or enemy sources. In near-real time, LL RF-SAM inspects these signals to disentangle and identify nearby waveforms and their originating devices. For example, LL RF-SAM can help a user identify a particular packet of energy as a drone transmission protocol and then classify whether that drone is part of a corpus of friendly or enemy drones.
"This type of enhanced context helps military operators make data-driven decisions. The future adoption of this technology will have profound impact across communications, signals intelligence, spectrum management, and wireless infrastructure security," says principal investigator Joey Botero.
Modular, Agile, Scalable Optical Terminal (MAScOT) is a laser communications (lasercom) terminal architecture that facilitates mission-enabling lasercom solutions adaptable to various space platforms and operating environments. Lasercom is rapidly becoming the go-to technology for space-to-space links in low Earth orbit because of its ability to support significantly higher data rates compared to radio frequency terminals. However, it has yet to be used operationally or commercially for longer-range space-to-ground links, as such systems often require custom designs for specific missions. MAScOT's modular, agile, and scalable design streamlines the process for building lasercom terminals suitable for a range of missions, from near Earth to deep space. MAScOT made its debut on the International Space Station in 2023 to demonstrate NASA's first two-way lasercom relay system, and is now being prepared to serve in an operational capacity on Artemis II, NASA's moon flyby mission scheduled for 2026. Two industry-built terminals have adopted the MAScOT architecture, and technology transfer to additional industry partners is ongoing.
"MAScOT is the latest lasercom terminal designed by Lincoln Laboratory engineers following decades of pioneering lasercom work with NASA, and it is poised to support lasercom for decades to come," says Bryan Robinson, who co-led MAScOT development with Tina Shih.
Protected Anti-jam Tactical SATCOM (PATS) Key Management System (KMS) Prototype addresses the critical challenge of securely distributing cryptographic keys for military satellite communications (SATCOM) during terminal jamming, compromise, or disconnection. Realizing the U.S. Space Systems Command's vision for resilient, protected tactical SATCOM, the PATS KMS Prototype leverages innovative, bandwidth-efficient protocols and algorithms to enable real-time, scalable key distribution over wireless links, even under attack, so that warfighters can communicate securely in contested environments. PATS KMS is now being adopted as the core of the Department of Defense's next-generation SATCOM architecture.
"PATS KMS is not just a technology — it's a linchpin enabler of resilient, modern SATCOM, built for the realities of today's contested battlefield. We worked hand-in-hand with government stakeholders, operational users, and industry partners across a multiyear, multiphase journey to bring this capability to life," says Joseph Sobchuk, co-principal investigator with Nancy List. The R&D 100 Award is shared with the U.S. Space Force Space Systems Command, whose “visionary leadership has been instrumental in shaping the future of protected tactical SATCOM,” Sobchuk adds.
Study finds cell memory can be more like a dimmer dial than an on/off switch
The findings may redefine how cell identity is established and enable the creation of more sophisticated engineered tissues.
When cells are healthy, we don’t expect them to suddenly change cell types. A skin cell on your hand won’t naturally morph into a brain cell, and vice versa. That’s thanks to epigenetic memory, which enables the expression of various genes to “lock in” throughout a cell’s lifetime. Failure of this memory can lead to diseases, such as cancer.
Traditionally, scientists have thought that epigenetic memory locks genes either “on” or “off” — either fully activated or fully repressed, like a permanent Lite-Brite pattern. But MIT engineers have found that the picture has many more shades.
In a new study appearing today in Cell Genomics, the team reports that a cell’s memory is set not by on/off switching but through a more graded, dimmer-like dial of gene expression.
The researchers carried out experiments in which they set the expression of a single gene at different levels in different cells. While conventional wisdom would assume the gene should eventually switch on or off, the researchers found that the gene’s original expression persisted: Cells whose gene expression was set along a spectrum between on and off remained in this in-between state.
The results suggest that epigenetic memory — the process by which cells retain gene expression and “remember” their identity — is not binary but instead analog, which allows for a spectrum of gene expression and associated cell identities.
“Our finding opens the possibility that cells commit to their final identity by locking genes at specific levels of gene expression instead of just on and off,” says study author Domitilla Del Vecchio, professor of mechanical and biological engineering at MIT. “The consequence is that there may be many more cell types in our body than we know and recognize today, that may have important functions and could underlie healthy or diseased states.”
The study’s MIT lead authors are Sebastian Palacios and Simone Bruno, with additional co-authors.
Beyond binary
Every cell shares the same genome, which can be thought of as the starting ingredient for life. As a cell takes shape, it differentiates into one type or another, through the expression of genes in its genome. Some genes are activated, while others are repressed. The combination steers a cell toward one identity versus another.
A process called DNA methylation, by which certain molecules attach to the genes’ DNA, helps lock their expression in place. DNA methylation helps a cell “remember” its unique pattern of gene expression, which ultimately establishes the cell’s identity.
Del Vecchio’s group at MIT applies mathematics and genetic engineering to understand cellular molecular processes and to engineer cells with new capabilities. In previous work, her group was experimenting with DNA methylation and ways to lock the expression of certain genes in ovarian cells.
“The textbook understanding was that DNA methylation had a role to lock genes in either an on or off state,” Del Vecchio says. “We thought this was the dogma. But then we started seeing results that were not consistent with that.”
While many of the cells in their experiment exhibited an all-or-nothing expression of genes, a significant number of cells appeared to freeze genes in an in-between state — neither entirely on nor off.
“We found there was a spectrum of cells that expressed any level between on and off,” Palacios says. “And we thought, how is this possible?”
Shades of blue
In their new study, the team aimed to see whether the in-between gene expression they observed was a fluke or a more established property of cells that until now has gone unnoticed.
“It could be that scientists disregarded cells that don’t have a clear commitment, because they assumed this was a transient state,” Del Vecchio says. “But actually these in-between cell types may be permanent states that could have important functions.”
To test their idea, the researchers ran experiments with hamster ovarian cells — a line of cells commonly used in the laboratory. In each cell, an engineered gene was initially set to a different level of expression. The gene was turned fully on in some cells, completely off in others, and set somewhere in between on and off for the remaining cells.
The team paired the engineered gene with a fluorescent marker that lights up with a brightness corresponding to the gene’s level of expression. The researchers introduced, for a short time, an enzyme that triggers the gene’s DNA methylation, a natural gene-locking mechanism. They then monitored the cells over five months to see whether the modification would lock the genes in place at their in-between expression levels, or whether the genes would migrate toward fully on or off states before locking in.
“Our fluorescent marker is blue, and we see cells glow across the entire spectrum, from really shiny blue, to dimmer and dimmer, to no blue at all,” Del Vecchio says. “Every intensity level is maintained over time, which means gene expression is graded, or analog, and not binary. We were very surprised, because we thought after such a long time, the gene would veer off, to be either fully on or off, but it did not.”
The findings open new avenues for engineering more complex artificial tissues and organs by tuning the expression of certain genes in a cell’s genome, like a dial on a radio rather than a switch. The results also complicate the picture of how a cell’s epigenetic memory works to establish its identity, and they raise the possibility that cell states such as those seen in therapy-resistant tumors could be targeted more precisely.
“Del Vecchio and colleagues have beautifully shown how analog memory arises through chemical modifications to the DNA itself,” says Michael Elowitz, professor of biology and biological engineering at the California Institute of Technology, who was not involved in the study. “As a result, we can now imagine repurposing this natural analog memory mechanism, invented by evolution, in the field of synthetic biology, where it could help allow us to program permanent and precise multicellular behaviors.”
“One of the things that enables the complexity in humans is epigenetic memory,” Palacios says. “And we find that it is not what we thought. For me, that’s actually mind-blowing. And I think we’re going to find that this analog memory is relevant for many different processes across biology.”
This research was supported, in part, by the National Science Foundation, MODULUS, and a Vannevar Bush Faculty Fellowship through the U.S. Office of Naval Research.
“Bottlebrush” particles deliver big chemotherapy payloads directly to cancer cells
Outfitted with antibodies that guide them to the tumor site, the new nanoparticles could reduce the side effects of treatment.
Using tiny particles shaped like bottlebrushes, MIT chemists have found a way to deliver a large range of chemotherapy drugs directly to tumor cells.
To guide them to the right location, each particle contains an antibody that targets a specific tumor protein. This antibody is tethered to bottlebrush-shaped polymer chains carrying dozens or hundreds of drug molecules — a much larger payload than can be delivered by any existing antibody-drug conjugates.
In mouse models of breast and ovarian cancer, the researchers found that treatment with these conjugated particles could eliminate most tumors. In the future, the particles could be modified to target other types of cancer, by swapping in different antibodies.
“We are excited about the potential to open up a new landscape of payloads and payload combinations with this technology, that could ultimately provide more effective therapies for cancer patients,” says Jeremiah Johnson, the A. Thomas Geurtin Professor of Chemistry at MIT, a member of the Koch Institute for Integrative Cancer Research, and the senior author of the new study.
MIT postdoc Bin Liu is the lead author of the paper, which appears today in Nature Biotechnology.
A bigger drug payload
Antibody-drug conjugates (ADCs) are a promising type of cancer treatment that consist of a cancer-targeting antibody attached to a chemotherapy drug. At least 15 ADCs have been approved by the FDA to treat several different types of cancer.
This approach allows specific targeting of a cancer drug to a tumor, which helps to prevent some of the side effects that occur when chemotherapy drugs are given intravenously. However, one drawback to currently approved ADCs is that only a handful of drug molecules can be attached to each antibody. That means they can only be used with very potent drugs — usually DNA-damaging agents or drugs that interfere with cell division.
To try to use a broader range of drugs, which are often less potent, Johnson and his colleagues decided to adapt bottlebrush particles that they had previously invented. These particles consist of a polymer backbone carrying tens to hundreds of “prodrug” molecules — inactive drug molecules that are activated upon release within the body. This structure allows the particles to deliver a wide range of drug molecules, and the particles can be designed to carry multiple drugs in specific ratios.
Using a technique called click chemistry, the researchers showed that they could attach one, two, or three of their bottlebrush polymers to a single tumor-targeting antibody, creating an antibody-bottlebrush conjugate (ABC). This means that just one antibody can carry hundreds of prodrug molecules. The currently approved ADCs can carry a maximum of about eight drug molecules.
The huge number of payloads in the ABC particles allows the researchers to incorporate less potent cancer drugs such as doxorubicin or paclitaxel, which enhances the customizability of the particles and the variety of drug combinations that can be used.
“We can use antibody-bottlebrush conjugates to increase the drug loading, and in that case, we can use less potent drugs,” Liu says. “In the future, we can very easily copolymerize with multiple drugs together to achieve combination therapy.”
The prodrug molecules are attached to the polymer backbone by cleavable linkers. After the particles reach a tumor site, some of these linkers are broken right away, allowing the drugs to kill nearby cancer cells even if those cells don’t express the target protein. Other particles are absorbed into cells that do express the target protein before releasing their toxic payload.
Effective treatment
For this study, the researchers created ABC particles carrying a few different types of drugs: microtubule inhibitors called MMAE and paclitaxel, and two DNA-damaging agents, doxorubicin and SN-38. They also designed ABC particles carrying an experimental type of drug known as PROTAC (proteolysis-targeting chimera), which can selectively degrade disease-causing proteins inside cells.
Each bottlebrush was tethered to an antibody targeting either HER2, a protein often overexpressed in breast cancer, or MUC1, which is commonly found in ovarian, lung, and other types of cancer.
The researchers tested each of the ABCs in mouse models of breast or ovarian cancer and found that in most cases, the ABC particles were able to eradicate the tumors. This treatment was significantly more effective than giving the same bottlebrush prodrugs by injection, without being conjugated to a targeting antibody.
“We used a very low dose, almost 100 times lower compared to the traditional small-molecule drug, and the ABC still can achieve much better efficacy compared to the small-molecule drug given on its own,” Liu says.
These ABCs also performed better than two FDA-approved ADCs, T-DXd and TDM-1, which both use HER2 to target cells. T-DXd carries deruxtecan, which interferes with DNA replication, and TDM-1 carries emtansine, a microtubule inhibitor.
In future work, the MIT team plans to try delivering combinations of drugs that work by different mechanisms, which could enhance their overall effectiveness. Among these could be immunotherapy drugs such as STING activators.
The researchers are also working on swapping in different antibodies, such as antibodies targeting EGFR, which is widely expressed in many tumors. More than 100 antibodies have been approved to treat cancer and other diseases, and in theory any of those could be conjugated to cancer drugs to create a targeted therapy.
The research was funded in part by the National Institutes of Health, the Ludwig Center at MIT, and the Koch Institute Frontier Research Program.
Remembering David Baltimore, influential biologist and founding director of the Whitehead Institute
The longtime MIT professor and Nobel laureate was a globally respected researcher, academic leader, and science policy visionary who guided the careers of generations of scientists.
The Whitehead Institute for Biomedical Research fondly remembers its founding director, David Baltimore, a former MIT Institute Professor and Nobel laureate who died Sept. 6 at age 87.
With discovery after discovery, Baltimore brought to light key features of biology with direct implications for human health. His work at MIT earned him a share of the 1975 Nobel Prize in Physiology or Medicine (along with Howard Temin and Renato Dulbecco) for discovering reverse transcriptase and identifying retroviruses, which use RNA to synthesize viral DNA.
Following the award, Baltimore reoriented his laboratory’s focus to pursue a mix of immunology and virology. Among the lab’s most significant subsequent discoveries were the identification of a pair of proteins that play an essential role in enabling the immune system to create antibodies for so many different molecules, and investigations into how certain viruses can cause cell transformation and cancer. Work from Baltimore’s lab also helped lead to the development of the important cancer drug Gleevec — the first small molecule to target an oncoprotein inside of cells.
In 1982, Baltimore partnered with philanthropist Edwin C. “Jack” Whitehead to conceive and launch the Whitehead Institute and then served as its founding director until 1990. Within a decade of its founding, the Baltimore-led Whitehead Institute was named the world’s top research institution in molecular biology and genetics.
“More than 40 years later, Whitehead Institute is thriving, still guided by the strategic vision that David Baltimore and Jack Whitehead articulated,” says Phillip Sharp, MIT Institute Professor Emeritus, former Whitehead board member, and fellow Nobel laureate. “Of all David’s myriad and significant contributions to science, his role in building the first independent biomedical research institute associated with MIT and guiding it to extraordinary success may well prove to have had the broadest and longest-term impact.”
Ruth Lehmann, director and president of the Whitehead Institute, and professor of biology at MIT, says: “I, like many others, owe my career to David Baltimore. He recruited me to Whitehead Institute and MIT in 1988 as a faculty member, taking a risk on an unproven, freshly-minted PhD graduate from Germany. As director, David was incredibly skilled at bringing together talented scientists at different stages of their careers and facilitating their collaboration so that the whole would be greater than the sum of its parts. This approach remains a core strength of Whitehead Institute.”
As part of the Whitehead Institute’s mission to cultivate the next generation of scientific leaders, Baltimore founded the Whitehead Fellows program, which provides extraordinarily talented recent PhD and MD graduates with the opportunity to launch their own labs, rather than going into traditional postdoctoral positions. The program has been a huge success, with former fellows going on to excel as leaders in research, education, and industry.
David Page, MIT professor of biology, Whitehead Institute member, and former director who was the Whitehead's first fellow, recalls, “David was both an amazing scientist and a peerless leader of aspiring scientists. The launching of the Whitehead Fellows program reflected his recipe for institutional success: gather up the resources to allow young scientists to realize their dreams, recruit with an eye toward potential for outsized impact, and quietly mentor and support without taking credit for others’ successes — all while treating junior colleagues as equals. It is a beautiful strategy that David designed and executed magnificently.”
Sally Kornbluth, president of MIT and a member of the Whitehead Institute Board of Directors, says that “David was a scientific hero for so many. He was one of those remarkable individuals who could make stellar scientific breakthroughs and lead major institutions with extreme thoughtfulness and grace. He will be missed by the whole scientific community.”
“David was a wise giant. He was brilliant. He was an extraordinarily effective, ethical leader and institution builder who influenced and inspired generations of scientists and premier institutions,” says Susan Whitehead, member of the board of directors and daughter of Jack Whitehead.
Gerald R. Fink, the Margaret and Herman Sokol Professor Emeritus at MIT who was recruited by Baltimore from Cornell University as one of four founding members of the Whitehead Institute, and who succeeded him as director in 1990, observes: “David became my hero and friend. He upheld the highest scientific ideals and instilled trust and admiration in all around him.”
Video: “David Baltimore - Infinite History” (2010), MIT
Baltimore was born in New York City in 1938. His scientific career began at Swarthmore College, where he earned a bachelor’s degree with high honors in chemistry in 1960. He then began doctoral studies in biophysics at MIT, but in 1961 shifted his focus to animal viruses and moved to what is now the Rockefeller University, where he did his thesis work in the lab of Richard Franklin.
After completing postdoctoral fellowships with James Darnell at MIT and Jerard Hurwitz at the Albert Einstein College of Medicine, Baltimore ran his own lab at the Salk Institute for Biological Studies from 1965 to 1968. Then, in 1968, he returned to MIT as a member of its biology faculty, where he remained until 1990. (Whitehead Institute’s members hold parallel appointments as faculty in the MIT Department of Biology.)
In 1990, Baltimore left the Whitehead Institute and MIT to become the president of Rockefeller University. He returned to MIT from 1994 to 1997, serving as an Institute Professor, after which he was named president of Caltech. Baltimore held that position until 2006, when he was elected to a three-year term as president of the American Association for the Advancement of Science.
For decades, Baltimore was viewed not just as a brilliant scientist and talented academic leader, but also as a wise counsel to the scientific community. For example, he helped organize the 1975 Asilomar Conference on Recombinant DNA, which created stringent safety guidelines for the study and use of recombinant DNA technology. He played a leadership role in the development of policies on AIDS research and treatment, and on genomic editing. Serving as an advisor to both organizations and individual scientists, he helped to shape the strategic direction of dozens of institutions and to advance the careers of generations of researchers. As founding member Robert Weinberg summarized it, “He had no tolerance for nonsense and weak science.”
In 2023, the Whitehead Institute established the endowed David Baltimore Chair in Biomedical Research, honoring Baltimore’s six decades of scientific, academic, and policy leadership and his impact on advancing innovative basic biomedical research.
“David was a visionary leader in science and the institutions that sustain it. He devoted his career to advancing scientific knowledge and strengthening the communities that make discovery possible, and his leadership of Whitehead Institute exemplified this,” says Richard Young, MIT professor of biology and Whitehead Institute member. “David approached life with keen observation, boundless curiosity, and a gift for insight that made him both a brilliant scientist and a delightful companion. His commitment to mentoring and supporting young scientists left a lasting legacy, inspiring the next generation to pursue impactful contributions to biomedical research. Many of us found in him not only a mentor and role model, but also a steadfast friend whose presence enriched our lives and whose absence will be profoundly felt.”
Alzheimer’s erodes brain cells’ control of gene expression, undermining function, cognition
Study of 3.5 million cells from more than 100 human brains finds Alzheimer’s progression — and resilience to disease — depends on preserving epigenomic stability.
Most people recognize Alzheimer’s disease from its devastating symptoms such as memory loss, while new drugs target pathological aspects of disease manifestations, such as plaques of amyloid proteins. Now, a sweeping new open-access study in the Sept. 4 edition of Cell by MIT researchers shows the importance of understanding the disease as a battle over how well brain cells control the expression of their genes. The study paints a high-resolution picture of a desperate struggle to maintain healthy gene expression and gene regulation, where the consequences of failure or success are nothing less than the loss or preservation of cell function and cognition.
The study presents a first-of-its-kind, multimodal atlas of combined gene expression and gene regulation spanning 3.5 million cells from six brain regions, obtained by profiling 384 post-mortem brain samples across 111 donors. The researchers profiled both the “transcriptome,” showing which genes are expressed into RNA, and the “epigenome,” the set of chromosomal modifications that establish which DNA regions are accessible, and thus usable, in different cell types.
The resulting atlas revealed many insights showing that the progression of Alzheimer’s is characterized by two major epigenomic trends. The first is that vulnerable cells in key brain regions suffer a breakdown of the rigorous nuclear “compartments” they normally maintain to ensure some parts of the genome are open for expression but others remain locked away. The second major finding is that susceptible cells experience a loss of “epigenomic information,” meaning they lose their grip on the unique pattern of gene regulation and expression that gives them their specific identity and enables their healthy function.
Accompanying the evidence of compromised compartmentalization and the erosion of epigenomic information are many specific findings pinpointing molecular circuitry that breaks down by cell type, brain region, and gene network. The researchers found, for instance, that when epigenomic conditions deteriorate, the door opens to expression of many genes associated with disease, whereas if cells manage to keep their epigenomic house in order, they can keep disease-associated genes in check. Moreover, the researchers clearly saw that where epigenomic breakdowns occurred, people lost cognitive ability, but where epigenomic stability remained, so did cognition.
“To understand the circuitry, the logic responsible for gene expression changes in Alzheimer’s disease [AD], we needed to understand the regulation and upstream control of all the changes that are happening, and that’s where the epigenome comes in,” says senior author Manolis Kellis, a professor in the Computer Science and Artificial Intelligence Lab and head of MIT’s Computational Biology Group. “This is the first large-scale, single-cell, multi-region gene-regulatory atlas of AD, systematically dissecting the dynamics of epigenomic and transcriptomic programs across disease progression and resilience.”
By providing that detailed examination of the epigenomic mechanisms of Alzheimer’s progression, the study provides a blueprint for devising new Alzheimer’s treatments that can target factors underlying the broad erosion of epigenomic control or the specific manifestations that affect key cell types such as neurons and supporting glial cells.
“The key to developing new and more effective treatments for Alzheimer’s disease depends on deepening our understanding of the mechanisms that contribute to the breakdowns of cellular and network function in the brain,” says Picower Professor and co-corresponding author Li-Huei Tsai, director of The Picower Institute for Learning and Memory and a founding member of MIT’s Aging Brain Initiative, along with Kellis. “This new data advances our understanding of how epigenomic factors drive disease.”
Kellis Lab members Zunpeng Liu and Shanshan Zhang are the study’s co-lead authors.
Compromised compartments and eroded information
Among the post-mortem brain samples in the study, 57 came from donors to the Religious Orders Study or the Rush Memory and Aging Project (collectively known as “ROSMAP”) who did not have AD pathology or symptoms, while 33 came from donors with early-stage pathology and 21 came from donors at a late stage. The samples therefore provided rich information about the symptoms and pathology each donor was experiencing before death.
In the new study, Liu and Zhang combined analyses of single-cell RNA sequencing of the samples, which measures which genes are being expressed in each cell, and ATAC-seq, which measures whether chromosomal regions are accessible for gene expression. Considered together, these transcriptomic and epigenomic measures enabled the researchers to understand the molecular details of how gene expression is regulated across seven broad classes of brain cells (e.g., neurons and glial cells) and 67 cell subtypes (e.g., 17 kinds of excitatory neurons and six kinds of inhibitory ones).
Using these epigenomic markings, the researchers annotated more than 1 million gene-regulatory control regions that different cells employ to establish their specific identities and functions. Then, by comparing the cells from Alzheimer’s brains to the ones without, and accounting for stage of pathology and cognitive symptoms, they could produce rigorous associations between the erosion of these epigenomic markings and, ultimately, loss of function.
For instance, they saw that among people who advanced to late-stage AD, normally repressive compartments opened up for more expression, and compartments that were normally more open during health became more repressed. Worryingly, when the normally repressive compartments of brain cells opened up, those cells became more afflicted by disease.
“For Alzheimer’s patients, repressive compartments opened up, and gene expression levels increased, which was associated with decreased cognitive function,” explains Liu.
But when cells managed to keep their compartments in order such that they expressed the genes they were supposed to, people remained cognitively intact.
Meanwhile, based on the cells’ expression of their regulatory elements, the researchers created an epigenomic information score for each cell. Generally, information declined as pathology progressed, but that was particularly notable among cells in the two brain regions affected earliest in Alzheimer’s: the entorhinal cortex and the hippocampus. The analyses also highlighted specific cell types that were especially vulnerable including microglia that play immune and other roles, oligodendrocytes that produce myelin insulation for neurons, and particular kinds of excitatory neurons.
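The study’s precise definition of the per-cell score is not spelled out here; as a purely illustrative stand-in, one can imagine scoring each cell by how closely its chromatin-accessibility profile tracks a healthy reference profile for its cell type, as in the minimal sketch below. All data and the correlation-based metric are assumptions for demonstration, not the paper’s actual method.

```python
# Illustrative sketch only: one plausible way to score how well a cell's
# chromatin-accessibility profile matches the healthy reference for its cell
# type. This is NOT the study's actual metric, just a stand-in to make the
# idea of a per-cell "epigenomic information" score concrete.
import numpy as np

rng = np.random.default_rng(0)
n_regions = 1000

# Reference accessibility profile for one cell type (fraction of cells open per region).
reference = rng.beta(0.5, 0.5, size=n_regions)

def information_score(cell_profile: np.ndarray, reference: np.ndarray) -> float:
    """Correlation with the reference; higher = cell identity better preserved."""
    return float(np.corrcoef(cell_profile, reference)[0, 1])

healthy_cell = (rng.random(n_regions) < reference).astype(float)  # tracks the reference
eroded_cell = (rng.random(n_regions) < 0.5).astype(float)         # identity degraded

print("healthy:", round(information_score(healthy_cell, reference), 2))
print("eroded: ", round(information_score(eroded_cell, reference), 2))
```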
Risk genes and “chromatin guardians”
Detailed analyses in the paper highlighted how epigenomic regulation tracked with disease-related problems, Liu notes. The e4 variant of the APOE gene, for instance, is widely understood to be the single biggest genetic risk factor for Alzheimer’s. In APOE4 brains, microglia initially responded to the emerging disease pathology with an increase in their epigenomic information, suggesting that they were stepping up to their unique responsibility to fight off disease. But as the disease progressed, the cells exhibited a sharp drop off in information, a sign of deterioration and degeneration. This turnabout was strongest in people who had two copies of APOE4, rather than just one. The findings, Kellis said, suggest that APOE4 might destabilize the genome of microglia, causing them to burn out.
Another example is the fate of neurons expressing the gene RELN and its protein Reelin. Prior studies, including by Kellis and Tsai, have shown that RELN-expressing neurons in the entorhinal cortex and hippocampus are especially vulnerable in Alzheimer’s, but promote resilience if they survive. The new study sheds light on their fate by demonstrating that they exhibit early and severe epigenomic information loss as disease advances, but that in people who remained cognitively resilient, these neurons maintained their epigenomic information.
In yet another example, the researchers tracked what they colloquially call “chromatin guardians” because their expression sustains and regulates cells’ epigenomic programs. For instance, cells with greater epigenomic erosion and advanced AD progression displayed increased chromatin accessibility in areas that were supposed to be locked down by Polycomb repression genes or other gene expression silencers. While resilient cells expressed genes promoting neural connectivity, epigenomically eroded cells expressed genes linked to inflammation and oxidative stress.
“The message is clear: Alzheimer’s is not only about plaques and tangles, but about the erosion of nuclear order itself,” Kellis says. “Cognitive decline emerges when chromatin guardians lose ground to the forces of erosion, switching from resilience to vulnerability at the most fundamental level of genome regulation.
“And when our brain cells lose their epigenomic memory marks and epigenomic information at the lowest level deep inside our neurons and microglia, it seems that Alzheimer’s patients also lose their memory and cognition at the highest level.”
Other authors of the paper are Benjamin T. James, Kyriaki Galani, Riley J. Mangan, Stuart Benjamin Fass, Chuqian Liang, Manoj M. Wagle, Carles A. Boix, Yosuke Tanigawa, Sukwon Yun, Yena Sung, Xushen Xiong, Na Sun, Lei Hou, Martin Wohlwend, Mufan Qiu, Xikun Han, Lei Xiong, Efthalia Preka, Lei Huang, William F. Li, Li-Lun Ho, Amy Grayson, Julio Mantero, Alexey Kozlenkov, Hansruedi Mathys, Tianlong Chen, Stella Dracheva, and David A. Bennett.
Funding for the research came from the National Institutes of Health, the National Science Foundation, the Cure Alzheimer’s Fund, the Freedom Together Foundation, the Robert A. and Renee E. Belfer Family Foundation, Eduardo Eurnekian, and Joseph P. DiSabato.
Physicists devise an idea for lasers that shoot beams of neutrinos
Super-cooling radioactive atoms could produce a laser-like neutrino beam, offering a new way to study these ghostly particles — and possibly a new form of communication.
At any given moment, trillions of particles called neutrinos are streaming through our bodies and every material in our surroundings, without noticeable effect. Far lighter than electrons and carrying no electric charge, these ghostly entities are the most abundant particles with mass in the universe.
The exact mass of a neutrino is a big unknown. The particle is so small, and interacts so rarely with matter, that it is incredibly difficult to measure. Scientists attempt to do so by harnessing nuclear reactors and massive particle accelerators to generate unstable atoms, which then decay into various byproducts including neutrinos. In this way, physicists can manufacture beams of neutrinos that they can probe for properties including the particle’s mass.
Now MIT physicists propose a much more compact and efficient way to generate neutrinos that could be realized in a tabletop experiment.
In a paper appearing in Physical Review Letters, the physicists introduce the concept for a “neutrino laser” — a burst of neutrinos that could be produced by laser-cooling a gas of radioactive atoms down to temperatures colder than interstellar space. At such frigid temperatures, the team predicts, the atoms should behave as one quantum entity and radioactively decay in sync.
The decay of radioactive atoms naturally releases neutrinos, and the physicists say that in a coherent, quantum state this decay should accelerate, along with the production of neutrinos. This quantum effect should produce an amplified beam of neutrinos, broadly similar to how photons are amplified to produce conventional laser light.
“In our concept for a neutrino laser, the neutrinos would be emitted at a much faster rate than they normally would, sort of like a laser emits photons very fast,” says study co-author Ben Jones PhD ’15, an associate professor of physics at the University of Texas at Arlington.
As an example, the team calculated that such a neutrino laser could be realized by trapping 1 million atoms of rubidium-83. Normally, the radioactive atoms have a half-life of about 82 days, meaning that half the atoms decay, shedding an equivalent number of neutrinos, every 82 days. The physicists show that, by cooling rubidium-83 to a coherent, quantum state, the atoms should undergo radioactive decay in mere minutes.
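A back-of-envelope sketch of those numbers, assuming simple exponential decay, shows why a collective speed-up matters: with an 82-day half-life, a million atoms yield only a handful of decays, and hence neutrinos, per minute. The enhancement factor in the last lines is an arbitrary placeholder for illustration, not a value from the study.

```python
# Back-of-envelope sketch of the numbers quoted above (not the paper's model).
import math

HALF_LIFE_DAYS = 82
N_ATOMS = 1_000_000

decay_const_per_min = math.log(2) / (HALF_LIFE_DAYS * 24 * 60)  # per atom, per minute
normal_rate = N_ATOMS * decay_const_per_min                      # expected neutrinos per minute
print(f"ordinary decay: ~{normal_rate:.1f} neutrinos per minute")

# Hypothetical collective enhancement, purely for illustration: if a
# superradiant-like effect sped decay up by a factor f, the effective
# half-life would shrink from months to minutes.
f = 40_000  # assumed placeholder, not from the study
print(f"with a {f}x speed-up, effective half-life ~ {HALF_LIFE_DAYS * 24 * 60 / f:.1f} minutes")
```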
“This is a novel way to accelerate radioactive decay and the production of neutrinos, which to my knowledge, has never been done,” says co-author Joseph Formaggio, professor of physics at MIT.
The team hopes to build a small tabletop demonstration to test their idea. If it works, they envision a neutrino laser could be used as a new form of communication, by which the particles could be sent directly through the Earth to underground stations and habitats. The neutrino laser could also be an efficient source of radioisotopes, which, along with neutrinos, are byproducts of radioactive decay. Such radioisotopes could be used to enhance medical imaging and cancer diagnostics.
Coherent condensate
For every atom in the universe, there are about a billion neutrinos. A large fraction of these invisible particles may have formed in the first moments following the Big Bang, and they persist in what physicists call the “cosmic neutrino background.” Neutrinos are also produced whenever atomic nuclei fuse together or break apart, such as in the fusion reactions in the sun’s core, and in the normal decay of radioactive materials.
Several years ago, Formaggio and Jones separately considered a novel possibility: What if a natural process of neutrino production could be enhanced through quantum coherence? Initial explorations revealed fundamental roadblocks in realizing this. Years later, while discussing the properties of ultracold tritium (an unstable isotope of hydrogen that undergoes radioactive decay), they asked: Could the production of neutrinos be enhanced if radioactive atoms such as tritium could be made so cold that they could be brought into a quantum state known as a Bose-Einstein condensate?
A Bose-Einstein condensate, or BEC, is a state of matter that forms when a gas of certain particles is cooled down to near absolute zero. At this point, the particles are brought down to their lowest energy level and stop moving as individuals. In this deep freeze, the particles can start to “feel” each other’s quantum effects, and can act as one coherent entity — a unique phase that can result in exotic physics.
BECs have been realized in a number of atomic species. (One of the first instances was with sodium atoms, by MIT’s Wolfgang Ketterle, who shared the 2001 Nobel Prize in Physics for the result.) However, no one has made a BEC from radioactive atoms. To do so would be exceptionally challenging, as most radioisotopes have short half-lives and would decay entirely before they could be sufficiently cooled to form a BEC.
Nevertheless, Formaggio wondered, if radioactive atoms could be made into a BEC, would this enhance the production of neutrinos in some way? In trying to work out the quantum mechanical calculations, he found initially that no such effect was likely.
“It turned out to be a red herring — we can’t accelerate the process of radioactive decay, and neutrino production, just by making a Bose-Einstein condensate,” Formaggio says.
In sync with optics
Several years later, Jones revisited the idea, with an added ingredient: superradiance — a phenomenon of quantum optics that occurs when a collection of light-emitting atoms is stimulated to behave in sync. In this coherent phase, it’s predicted that the atoms should emit a burst of photons that is “superradiant,” or more radiant than when the atoms are normally out of sync.
Jones proposed to Formaggio that perhaps a similar superradiant effect is possible in a radioactive Bose-Einstein condensate, which could then result in a similar burst of neutrinos. The physicists went to the drawing board to work out the equations of quantum mechanics governing how light-emitting atoms morph from a coherent starting state into a superradiant state. They used the same equations to work out what radioactive atoms in a coherent BEC state would do.
“The outcome is: You get a lot more photons more quickly, and when you apply the same rules to something that gives you neutrinos, it will give you a whole bunch more neutrinos more quickly,” Formaggio explains. “That’s when the pieces clicked together, that superradiance in a radioactive condensate could enable this accelerated, laser-like neutrino emission.”
To test their concept in theory, the team calculated how neutrinos would be produced from a cloud of 1 million super-cooled rubidium-83 atoms. They found that, in the coherent BEC state, the atoms radioactively decayed at an accelerating rate, releasing a laser-like beam of neutrinos within minutes.
Now that the physicists have shown in theory that a neutrino laser is possible, they plan to test the idea with a small tabletop setup.
“It should be enough to take this radioactive material, vaporize it, trap it with lasers, cool it down, and then turn it into a Bose-Einstein condensate,” Jones says. “Then it should start doing this superradiance spontaneously.”
The pair acknowledge that such an experiment will require a number of precautions and careful manipulation.
“If it turns out that we can show it in the lab, then people can think about: Can we use this as a neutrino detector? Or a new form of communication?” Formaggio says. “That’s when the fun really starts.”
Study finds exoplanet TRAPPIST-1e is unlikely to have a Venus- or Mars-like atmosphere
Astronomers led by EAPS postdoc Ana Glidden ruled out several atmospheric scenarios for the planet, narrowing ideas of what habitability there might look like.
In the search for habitable exoplanets, atmospheric conditions play a key role in determining if a planet can sustain liquid water. Suitable candidates often sit in the “Goldilocks zone,” a distance that is neither too close nor too far from their host star to allow liquid water. With the launch of the James Webb Space Telescope (JWST), astronomers are collecting improved observations of exoplanet atmospheres that will help determine which exoplanets are good candidates for further study.
In an open-access paper published today in The Astrophysical Journal Letters, astronomers used JWST to take a closer look at the atmosphere of the exoplanet TRAPPIST-1e, located in the TRAPPIST-1 system. While they haven’t found definitive proof of what it is made of — or if it even has an atmosphere — they were able to rule out several possibilities.
“The idea is: If we assume that the planet is not airless, can we constrain different atmospheric scenarios? Do those scenarios still allow for liquid water at the surface?” says Ana Glidden, a postdoc in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) and the MIT Kavli Institute for Astrophysics and Space Research, and the first author on the paper. The answer to both questions, the team found, was yes.
The new data rule out a hydrogen-dominated atmosphere and place tighter constraints on secondary atmospheres, which are commonly generated by processes such as volcanic eruptions and outgassing from the planet’s interior. The data remain consistent with the possibility of a surface ocean.
“TRAPPIST-1e remains one of our most compelling habitable-zone planets, and these new results take us a step closer to knowing what kind of world it is,” says Sara Seager, Class of 1941 Professor of Planetary Science at MIT and co-author on the study. “The evidence pointing away from Venus- and Mars-like atmospheres sharpens our focus on the scenarios still in play.”
The study’s co-authors also include collaborators from the University of Arizona, Johns Hopkins University, University of Michigan, the Space Telescope Science Institute, and members of the JWST-TST DREAMS Team.
Improved observations
Exoplanet atmospheres are studied using a technique called transmission spectroscopy. When a planet passes in front of its host star, the starlight is filtered through the planet’s atmosphere. Astronomers can determine which molecules are present in the atmosphere by seeing how the light changes at different wavelengths.
“Each molecule has a spectral fingerprint. You can compare your observations with those fingerprints to suss out which molecules may be present,” says Glidden.
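For a sense of scale, the sketch below works through the basic transit geometry: the baseline dip in starlight is (Rp/Rs)², and an atmosphere adds a small wavelength-dependent annulus of a few scale heights. The planet and star radii are rough literature values, and the scale height is an assumed Earth-like figure, used only for illustration.

```python
# Sketch of the transmission-spectroscopy geometry described above.
# Baseline transit depth is (Rp/Rs)^2; an atmosphere adds an extra annulus of
# roughly a few scale heights. Values below are approximate and illustrative.
R_EARTH = 6.371e6   # m
R_SUN = 6.957e8     # m

Rp = 0.92 * R_EARTH  # approx. TRAPPIST-1e radius
Rs = 0.12 * R_SUN    # approx. TRAPPIST-1 (M-dwarf) radius
H = 10e3             # assumed atmospheric scale height, m (Earth-like)

depth = (Rp / Rs) ** 2                 # fraction of starlight blocked by the planet
extra = 2 * Rp * (5 * H) / Rs ** 2     # extra blocking from ~5 scale heights of atmosphere
print(f"baseline transit depth: {depth * 1e6:.0f} ppm")
print(f"atmospheric signal:     {extra * 1e6:.1f} ppm")
```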
JWST has a larger wavelength coverage and higher spectral resolution than its predecessor, the Hubble Space Telescope, which makes it possible to observe molecules like carbon dioxide and methane that are more commonly found in our own solar system. However, the improved observations have also highlighted the problem of stellar contamination, where changes in the host star’s temperature due to things like sunspots and solar flares make it difficult to interpret data.
“Stellar activity strongly interferes with the planetary interpretation of the data because we can only observe a potential atmosphere through starlight,” says Glidden. “It is challenging to separate out which signals come from the star versus from the planet itself.”
Ruling out atmospheric conditions
The researchers used a novel approach to mitigate stellar activity. As a result, “any signal you can see varying visit-to-visit is most likely from the star, while anything that’s consistent between the visits is most likely the planet,” says Glidden.
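A minimal sketch of that visit-to-visit logic on synthetic data follows; this is an assumed workflow for illustration, not the team's actual pipeline. Wavelength bins whose measured depths scatter strongly between visits are set aside as stellar contamination, while repeatable bins are kept as candidate planetary signal.

```python
# Minimal sketch (assumed workflow, not the team's pipeline) of separating
# repeatable planetary signal from visit-to-visit stellar variability.
import numpy as np

rng = np.random.default_rng(1)
n_bins, n_visits = 200, 4

# Synthetic transit spectra in ppm: a flat planetary signal plus measurement noise,
# with extra visit-to-visit wander in the first 50 bins standing in for stellar activity.
spectra = rng.normal(5000, 10, size=(n_visits, n_bins))
spectra[:, :50] += rng.normal(0, 80, size=(n_visits, 50))

scatter = spectra.std(axis=0)               # varies between visits -> likely stellar
stable = scatter < 2 * np.median(scatter)   # assumed cut, for illustration only
planet_signal = spectra.mean(axis=0)[stable]  # repeatable part, kept for modeling

print(f"kept {stable.sum()} of {n_bins} wavelength bins as repeatable")
```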
The researchers were then able to compare the results to several different possible atmospheric scenarios. They found that carbon dioxide-rich atmospheres, like those of Mars and Venus, are unlikely, while a warm, nitrogen-rich atmosphere similar to that of Saturn’s moon Titan remains possible. The evidence, however, is too weak to determine whether any atmosphere is present at all, let alone to identify a specific gas. Additional observations already in the works will help to narrow down the possibilities.
“With our initial observations, we have showcased the gains made with JWST. Our follow-up program will help us to further refine our understanding of one of our best habitable-zone planets,” says Glidden.
AI and machine learning for engineering design
Popular mechanical engineering course applies machine learning and AI theory to real-world engineering design.
Artificial intelligence optimization offers a host of benefits for mechanical engineers, including faster and more accurate designs and simulations, improved efficiency, reduced development costs through process automation, and enhanced predictive maintenance and quality control.
“When people think about mechanical engineering, they're thinking about basic mechanical tools like hammers and … hardware like cars, robots, cranes, but mechanical engineering is very broad,” says Faez Ahmed, the Doherty Chair in Ocean Utilization and associate professor of mechanical engineering at MIT. “Within mechanical engineering, machine learning, AI, and optimization are playing a big role.”
In Ahmed’s course, 2.155/156 (AI and Machine Learning for Engineering Design), students use tools and techniques from artificial intelligence and machine learning for mechanical engineering design, focusing on the creation of new products and addressing engineering design challenges.
“There’s a lot of reason for mechanical engineers to think about machine learning and AI to essentially expedite the design process,” says Lyle Regenwetter, a teaching assistant for the course and a PhD candidate in Ahmed’s Design Computation and Digital Engineering Lab (DeCoDE), where research focuses on developing new machine learning and optimization methods to study complex engineering design problems.
First offered in 2021, the class has quickly become one of the Department of Mechanical Engineering (MechE)’s most popular non-core offerings, attracting students from across the Institute, including mechanical engineering, civil and environmental engineering, aeronautics and astronautics, nuclear science, computer science, and the MIT Sloan School of Management, along with cross-registered students from Harvard University and other schools.
The course, which is open to both undergraduate and graduate students, focuses on the implementation of advanced machine learning and optimization strategies in the context of real-world mechanical design problems. From designing bike frames to city grids, students participate in contests related to AI for physical systems and tackle optimization challenges in a class environment fueled by friendly competition.
Students are given challenge problems and starter code that “gave a solution, but [not] the best solution …” explains Ilan Moyer, a graduate student in MechE. “Our task was to [determine], how can we do better?” Live leaderboards encourage students to continually refine their methods.
Em Lauber, a system design and management graduate student, says the process gave space to explore the application of what students were learning and to practice the skill of “literally how to code it.”
The curriculum incorporates discussions on research papers, and students also pursue hands-on exercises in machine learning tailored to specific engineering issues including robotics, aircraft, structures, and metamaterials. For their final project, students work together on a team project that employs AI techniques for design on a complex problem of their choice.
“It is wonderful to see the diverse breadth and high quality of class projects,” says Ahmed. “Student projects from this course often lead to research publications, and have even led to awards.” He cites the example of a recent paper, titled “GenCAD-Self-Repairing,” that went on to win the American Society of Mechanical Engineers Systems Engineering, Information and Knowledge Management 2025 Best Paper Award.
“The best part about the final project was that it gave every student the opportunity to apply what they’ve learned in the class to an area that interests them a lot,” says Malia Smith, a graduate student in MechE. Her project used “markered motion capture data” to predict ground forces for runners, an effort she called “really gratifying” because it worked so much better than expected.
Lauber took the framework of a “cat tree” design with different modules of poles, platforms, and ramps to create customized solutions for individual cat households, while Moyer created software that is designing a new type of 3D printer architecture.
“When you see machine learning in popular culture, it’s very abstracted, and you have the sense that there’s something very complicated going on,” says Moyer. “This class has opened the curtains.”
A human-centered approach to data visualization
Balancing automation and agency, Associate Professor Arvind Satyanarayan develops interactive data visualizations that amplify human creativity and cognition.
The world is awash in data visualizations, from charts accompanying news stories on the economy to graphs tracking the weekly temperature to scatterplots showing relationships between baseball statistics.
At their core, data visualizations convey information, and everyone consumes that information differently. One person might scan the axes, while another may focus on an outlying data point or examine the magnitude of each colored bar.
But how do you consume that information if you can’t see it?
Making a data visualization accessible for blind and low-vision readers often involves writing a descriptive caption that captures some key points in a succinct paragraph.
“But that means blind and low-vision readers don’t get the ability to interpret the data for themselves. What if they had a different question about the data? Suddenly a simple caption doesn’t give them that. The core idea behind our group’s work in accessibility has been to maintain agency for blind and low-vision people,” says Arvind Satyanarayan, a newly tenured associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS) and member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).
Satyanarayan’s group has explored making data visualizations accessible for screen readers, which narrate content on a computer screen. His team created a hierarchical platform that allows screen reader users to explore various levels of detail in a visualization with their keyboard, drilling down from high-level information to individual data points.
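The sketch below illustrates the hierarchical idea in miniature, with a chart exposed as a tree that a screen-reader user could walk from a high-level summary down to individual points. The node labels and traversal are invented for demonstration and do not represent the group's actual platform.

```python
# Illustrative sketch (not the group's actual system) of exposing a chart to a
# screen reader as a navigable hierarchy: summary -> axes -> data points.
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str                              # what the screen reader announces
    children: list = field(default_factory=list)

chart = Node("Line chart: monthly temperature, 12 points", [
    Node("X axis: month, January to December"),
    Node("Y axis: temperature, 20 to 90 degrees F", [
        Node("Data point: July, 88 degrees F"),
        Node("Data point: January, 25 degrees F"),
    ]),
])

def announce(node: Node, depth: int = 0) -> None:
    """Depth-first walk standing in for keyboard navigation (arrow keys)."""
    print("  " * depth + node.label)
    for child in node.children:
        announce(child, depth + 1)

announce(chart)
```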
Under the umbrella of human-computer interaction (HCI) research, Satyanarayan’s Visualization Group also develops programming languages and authoring tools for visualizations, studies the sociocultural elements of visualization design, and uses visualizations to analyze machine-learning models.
For Satyanarayan, HCI is about promoting human agency, whether that means enabling a blind reader to interpret data trends or ensuring designers still feel in control of AI-driven visualization systems.
“We really take a human-centered approach to data visualization,” he says.
An eye for technology
Satyanarayan found the field of data visualization almost by accident.
As a child growing up in India, Bahrain, and Abu Dhabi, his initial interest in science sprouted from his love for tinkering.
Satyanarayan recalls his father bringing home a laptop, which he loaded with simple games. The internet grew up along with him, and as a teenager he became heavily engaged in the popular blogging platform Movable Type.
A teacher at heart even as a teenager, Satyanarayan offered tutorials on how to use the platform and ran a contest for people to style their blog. Along the way, he taught himself the skills to develop plugins and extensions.
He enjoyed designing eye-catching and user-friendly blogs, laying the foundation for his studies in human-computer interaction.
When he arrived at the University of California at San Diego for college, he was interested enough in the HCI field to take an introductory class.
“I’d always been a student of history, and this intro class really appealed to me because it was more about the history of user interfaces, and tracing the provenance and development of the ideas behind them,” he says.
Almost as an afterthought, he spoke with the professor, Jim Hollan — a pioneer of the field. Even though he hadn’t thought much about research beforehand, Satyanarayan ended up spending the summer in Hollan’s lab, studying how people interact with wall-sized displays.
As he prepared to pursue graduate studies (Satyanarayan split his PhD between Stanford University and the University of Washington), he was unsure whether to focus on programming languages or HCI. When it came time to choose, the human-centered focus of HCI and the interdisciplinarity of data visualization drew him in.
“Data visualization is deeply technical, but it also draws from cognitive science, perceptual psychology, and visual arts and aesthetics, and then it also has a big stake in civic and social responsibility,” he says.
He saw how visualization plays a role in civic and social responsibility through his first project with his PhD advisor, Jeffrey Heer. Satyanarayan and his collaborators built a data visualization interface for journalists at newsrooms that couldn’t afford to hire data departments. That drag-and-drop tool allowed journalists to design the visualization and all the data storytelling they wanted to do around it.
That project seeded many elements that became his thesis, for which he studied new programming languages for visualization and developed interactive graphical systems on top of them.
After earning his PhD, Satyanarayan sought a faculty job and spent an exhausting interview season crisscrossing the country, participating in 15 interviews in only two months.
MIT was his very last stop.
“I remember being exhausted and on autopilot, thinking that this is not going well. But then, the first day of my interview at MIT was filled with some of the best conversations I had. People were so eager and interested in understanding my research and how it connected to theirs,” he says.
Charting a collaborative course
The collaborative nature of MIT remained important as he built his research group; one of the group’s first graduate students was pursuing a PhD in MIT’s program in History, Anthropology, and Science, Technology, and Society. They continue to work closely with faculty who study anthropology, topics in the humanities, and clinical machine learning.
With interdisciplinary collaborators, the Visualization Group has explored the sociotechnical implications of data visualizations. For instance, charts are frequently shared, disseminated, and discussed on social media, where they are stripped of their context.
“What happens as a result is they can become vectors for misinformation or misunderstanding. But that is not because they are poorly designed to begin with. We spent a lot of time unpacking those details,” Satyanarayan says.
His group is also studying tactile graphics, which are common in museums to help blind and low-vision individuals interact with exhibits. Often, making a tactile graphic boils down to 3D-printing a chart.
“But a chart was designed to be read with our eyes, and our eyes work very differently than our fingers. We are now drilling into what it means to design tactile-first visualizations,” he says.
Co-design is a driving principle behind all his group’s accessibility work. On many projects, they work closely with Daniel Hajas, a researcher at University College London who has been blind since the age of 16.
“That has been really important for us, to make sure as people who are not blind, that we are developing tools and platforms that are actually useful for blind and low-vision people,” he says.
His group is also studying the sociocultural implications of data visualization. For instance, during the height of the Covid-19 pandemic, data visualizations were often turned into memes and social artifacts that were used to support or contest data from experts.
“In reality, neither data nor visualizations are neutral. We’ve been thinking about the data you use to visualize, and the design choices behind specific visualizations, and what that is communicating besides insights about the data,” he says.
Visualizing a real-world impact
Interdisciplinarity is also a theme of Satyanarayan’s interactive data visualization class, which he co-teaches with faculty members Sarah Williams and Catherine D'Ignazio in the Department of Urban Studies and Planning; and Crystal Lee in Comparative Media Studies/Writing, with shared appointments in the School of Humanities, Arts, and Social Sciences and the MIT Schwarzman College of Computing.
In the popular course, students not only learn the technical skills to make data visualizations, but they also build final projects centered on an area of social importance. For the past two years, students have focused on the housing affordability crisis in the Boston area, in partnership with the Metropolitan Area Planning Council. The students enjoy the opportunity to make a real-world impact with their work, Satyanarayan says.
And he enjoys the course as much as they do.
“I love teaching. I really enjoy getting to interact with the students. Our students are so intellectually curious and committed. It reassures me that our future is in good hands,” he says.
One of Satyanarayan’s personal interests is running along the Charles River Esplanade in Boston, which he does almost every day. He also enjoys cooking, especially with ingredients he has never used before.
Satyanarayan and his wife, who met while they were graduate students at Stanford (her PhD is in microbiology), also delight in tending their plot in the Fenway Victory Gardens, which is overflowing with lilies, lavender, lilacs, peonies, and roses.
Their newest addition is a miniature poodle puppy named Fen, which they got when Satyanarayan earned tenure earlier this year.
Thinking toward the future of his research, Satyanarayan is keen to further explore how generative AI might effectively assist people in building visualizations, and its implications for human creativity.
“In the world of generative AI, this question of agency applies to all of us,” he says. “How do we make sure, for these AI-driven systems, that we haven’t lost the parts of the work we find most interesting?”
J-WAFS welcomes Daniela Giardina as new executive director
Succeeding founding executive director Renee Robins, Giardina will help shape and implement the goals and initiatives of MIT’s eminent water and food program.
The Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) announced that Daniela Giardina has been named the new J-WAFS executive director. Giardina stepped into the role at the start of the fall semester, replacing founding executive director Renee J. Robins ’83, who is retiring after leading the program since its launch in 2014.
“Daniela brings a deep background in water and food security, along with excellent management and leadership skills,” says Robins. “Since I first met her nearly 10 years ago, I have been impressed with her commitment to working on global water and food challenges through research and innovation. I am so happy to know that I will be leaving J-WAFS in her experienced and capable hands.”
A decade of impact
J-WAFS fuels research, innovation, and collaboration to solve global water and food systems challenges. The mission of J-WAFS is to ensure safe and resilient supplies of water and food to meet the local and global needs of a dramatically growing population on a rapidly changing planet. J-WAFS funding opportunities are open to researchers in every MIT department, lab, and center, spanning all disciplines. Supported research projects include those involving engineering, science, technology, business, social science, economics, architecture, urban planning, and more. J-WAFS research and related activities include early-stage projects, sponsored research, commercialization efforts, student activities and mentorship, events that convene local and global experts, and international-scale collaborations.
The global water, food, and climate emergency makes J-WAFS’ work both timely and urgent. J-WAFS-funded researchers are achieving tangible, real-time solutions and results. Since its inception, J-WAFS has distributed nearly $26 million in grants, fellowships, and awards to the MIT community, supporting roughly 10 percent of MIT’s faculty and 300 students, postdocs, and research staff from 40 MIT departments, labs, and centers. J-WAFS grants have also helped researchers launch 13 startups and receive over $25 million in follow-on funding.
Giardina joins J-WAFS at an exciting time in the program’s history; in the spring, J-WAFS celebrated 10 years of supporting water and food research at MIT. The milestone was commemorated at a special event attended by MIT leadership, researchers, students, staff, donors, and others in the J-WAFS community. As J-WAFS enters its second decade, interest and opportunities for water and food research continue to grow. “I am truly honored to join J-WAFS at such a pivotal moment,” Giardina says.
Putting research into real-world practice
Giardina has nearly two decades of experience working with nongovernmental organizations and research institutions on humanitarian and development projects. Her work has taken her to Africa, Latin America, the Caribbean, and Central and Southeast Asia, where she has focused on water and food security projects. She has conducted technical trainings and assessments, and managed projects from design to implementation, including monitoring and evaluation.
Giardina comes to MIT from Oxfam America, where she directed disaster risk reduction and climate resilience initiatives, working on approaches to strengthen local leadership, community-based disaster risk reduction, and anticipatory action. Her role at Oxfam required her to oversee multimillion-dollar initiatives, supervising international teams, managing complex donor portfolios, and ensuring rigorous monitoring across programs. She connected hands-on research with community-oriented implementation, for example, by partnering with MIT’s D-Lab to launch an innovation lab in rural El Salvador. Her experience will help guide J-WAFS as it pursues impactful research that will make a difference on the ground.
Beyond program delivery, Giardina has played a strategic leadership role in shaping Oxfam’s global disaster risk reduction strategy and representing the organization at high-level U.N. and academic forums. She is multilingual and adept at building partnerships across cultures, having worked with governments, funders, and community-based organizations to strengthen resilience and advance equitable access to water and food.
Giardina holds a PhD in sustainable development from the University of Brescia in Italy. She also holds a master’s degree in environmental engineering from the Politecnico di Milano in Italy and has been a chartered engineer since 2005 (equivalent to a professional engineering license in the United States). She also serves as vice chair of the Boston Network for International Development, a nonprofit that connects and strengthens Boston’s global development community.
“I have seen first-hand how climate change, misuse of resources, and inequality are undermining water and food security around the globe,” says Giardina. “What particularly excites me about J-WAFS is its interdisciplinary approach in facilitating meaningful partnerships to solve many of these problems through research and innovation. I am eager to help expand J-WAFS’ impact by strengthening existing programs, developing new initiatives, and building strategic partnerships that translate MIT's groundbreaking research into real-world solutions,” she adds.
A legacy of leadership
Renee Robins will retire with over 23 years of service to MIT. Years before joining the staff, she graduated from MIT with dual bachelor’s degrees in biology and in humanities/anthropology. She then went on to earn a master’s degree in public policy from Carnegie Mellon University. In 1998, she came back to MIT to serve in various roles across campus, including with the Cambridge-MIT Institute, the MIT Portugal Program, the Mexico City Program, the Program on Emerging Technologies, and the Technology and Policy Program. She also worked at the Harvard Graduate School of Education, where she managed a $15 million research program as it scaled from implementation in one public school district to 59 schools in seven districts across North Carolina.
In late 2014, Robins joined J-WAFS as its founding executive director, playing a pivotal role in building it from the ground up and expanding the team to six full-time professionals. She worked closely with J-WAFS founding director Professor John H. Lienhard V to develop and implement funding initiatives, develop and shepherd corporate-sponsored research partnerships, and mentor students in the Water Club and the Food and Agriculture Club, as well as numerous other students. Throughout the years, Robins has inspired a diverse range of researchers to consider how their capabilities and expertise can be applied to water and food challenges. Perhaps most importantly, her leadership has helped cultivate a vibrant community, bringing together faculty, students, and research staff to be exposed to unfamiliar problems and new methodologies, to explore how their expertise might be applied, to learn from one another, and to collaborate.
At the J-WAFS 10th anniversary event in May, Robins noted, “it has been a true privilege to work alongside John Lienhard, our dedicated staff, and so many others. It’s been particularly rewarding to see the growth of an MIT network of water and food researchers that J-WAFS has nurtured, which grew out of those few individuals who saw themselves to be working in solitude on these critical challenges.”
Lienhard also spoke, thanking Robins by saying she “was my primary partner in building J-WAFS and [she is] a strong leader and strategic thinker.”
Not only is Robins a respected leader, she is also a dear friend to so many at MIT and beyond. In 2021, she was recognized for her outstanding leadership and commitment to J-WAFS and the Institute with an MIT Infinite Mile Award in the area of the Offices of the Provost and Vice President for Research.
Outside of MIT, Robins has served on the Board of Trustees for the International Honors Program — a comparative multi-site study abroad program, where she previously studied comparative culture and anthropology in seven countries around the world. Robins has also acted as an independent consultant, including work on program design and strategy around the launch of the Université Mohammed VI Polytechnique in Morocco.
Continuing the tradition of excellence
Giardina will report to J-WAFS director Rohit Karnik, the Abdul Latif Jameel Professor of Water and Food in the MIT Department of Mechanical Engineering. Karnik was named the director of J-WAFS in January, succeeding John Lienhard, who retired earlier this year.
As executive director, Giardina will be instrumental in driving J-WAFS’ mission and impact. She will work with Karnik to help shape J-WAFS’ programs, long-term strategy, and goals. She will also be responsible for supervising J-WAFS staff, managing grant administration, and overseeing and advising on financial decisions.
“I am very grateful to John and Renee, who have helped to establish J-WAFS as the Institute’s preeminent program for water and food research and significantly expanded MIT’s research efforts and impact in the water and food space,” says Karnik. “I am confident that with Daniela as executive director, J-WAFS will continue in the tradition of excellence that Renee and John put into place, as we move into the program’s second decade,” he notes.
Giardina adds, “I am inspired by the lab’s legacy of Renee Robins and Professor Lienhard, and I look forward to working with Professor Karnik and the J-WAFS staff.”
A comprehensive cellular-resolution map of brain activity
An international collaboration of neuroscientists, including MIT Professor Ila Fiete, developed a brain-wide map of decision-making at cellular resolution in mice.
The first comprehensive map of mouse brain activity has been unveiled by a large international collaboration of neuroscientists.
Researchers from the International Brain Laboratory (IBL), including MIT neuroscientist Ila Fiete, published their open-access findings today in two papers in Nature, revealing insights into how decision-making unfolds across the entire brain in mice at single-cell resolution. This brain-wide activity map challenges the traditional hierarchical view of information processing in the brain and shows that decision-making is distributed across many regions in a highly coordinated way.
“This is the first time anyone has produced a full, brain-wide map of the activity of single neurons during decision-making,” explains co-founder of IBL Alexandre Pouget. “The scale is unprecedented as we recorded from over half-a-million neurons across mice in 12 labs, covering 279 brain areas, which together represent 95 percent of the mouse brain volume. The decision-making activity, and particularly reward, lit up the brain like a Christmas tree,” adds Pouget, who is also a group leader at the University of Geneva in Switzerland.
Modeling decision-making
The brain map was made possible by a major international collaboration of neuroscientists from multiple universities, including MIT. Researchers across 12 labs used state-of-the-art silicon electrodes, called Neuropixels probes, for simultaneous neural recordings to measure brain activity while mice were carrying out a decision-making task.
“Participating in the International Brain Laboratory has added new ways for our group to contribute to science,” says Fiete, who is also a professor of brain and cognitive sciences, an associate investigator at the McGovern Institute for Brain Research, and director of the K. Lisa Yang ICoN Center at MIT. “Our lab has helped standardize methods to analyze and generate robust conclusions from data. As computational neuroscientists interested in building models of how the brain works, access to brain-wide recordings is incredible: the traditional approach of recording from one or a few brain areas limited our ability to build and test theories, resulting in fragmented models. Now, we have the delightful but formidable task to make sense of how all parts of the brain coordinate to perform a behavior. Surprisingly, having a full view of the brain leads to simplifications in the models of decision-making,” says Fiete.
The labs collected data from mice performing a decision-making task with sensory, motor, and cognitive components. In the task, a mouse sits in front of a screen and a light appears on the left or right side. If the mouse then responds by moving a small wheel in the correct direction, it receives a reward.
In some trials, the light is so faint that the animal must guess which way to turn the wheel, for which it can use prior knowledge: the light tends to appear more frequently on one side for a number of trials, before the high-frequency side switches. Well-trained mice learn to use this information to help them make correct guesses. These challenging trials therefore allowed the researchers to study how prior expectations influence perception and decision-making.
Brain-wide results
The first paper, “A brain-wide map of neural activity during complex behaviour,” showed that decision-making signals are surprisingly distributed across the brain, not localized to specific regions. This adds brain-wide evidence to a growing number of studies that challenge the traditional hierarchical model of brain function, and emphasizes that there is constant communication across brain areas during decision-making, movement onset, and even reward. This means that neuroscientists will need to take a more holistic, brain-wide approach when studying complex behaviors in the future.
“The unprecedented breadth of our recordings pulls back the curtain on how the entire brain performs the whole arc of sensory processing, cognitive decision-making, and movement generation,” says Fiete. “Structuring a collaboration that collects a large standardized dataset which single labs could not assemble is a revolutionary new direction for systems neuroscience, initiating the field into the hyper-collaborative mode that has contributed to leaps forward in particle physics and human genetics. Beyond our own conclusions, the dataset and associated technologies, which were released much earlier as part of the IBL mission, have already become a massively used resource for the entire neuroscience community.”
The second paper, “Brain-wide representations of prior information,” showed that prior expectations — our beliefs about what is likely to happen based on our recent experience — are encoded throughout the brain. Surprisingly, these expectations are found not only in cognitive areas, but also in brain areas that process sensory information and control actions. For example, expectations are even encoded in early sensory areas such as the thalamus, the brain’s first relay for visual input from the eye. This supports the view that the brain acts as a prediction machine, but with expectations encoded across multiple brain structures playing a central role in guiding behavioral responses. These findings could have implications for understanding conditions such as schizophrenia and autism, which are thought to be caused by differences in the way expectations are updated in the brain.
“Much remains to be unpacked: If it is possible to find a signal in a brain area, does it mean that this area is generating the signal, or simply reflecting a signal generated somewhere else? How strongly is our perception of the world shaped by our expectations? Now we can generate some quantitative answers and begin the next phase of experiments to learn about the origins of the expectation signals by intervening to modulate their activity,” says Fiete.
Looking ahead, the team at IBL plans to expand beyond its initial focus on decision-making to explore a broader range of neuroscience questions. With renewed funding in hand, IBL aims to expand its research scope and continue to support large-scale, standardized experiments.
New model of collaborative neuroscience
Officially launched in 2017, IBL introduced a new model of collaboration in neuroscience that uses a standardized set of tools and data processing pipelines shared across multiple labs, enabling the collection of massive datasets while ensuring data alignment and reproducibility. This approach to democratize and accelerate science draws inspiration from large-scale collaborations in physics and biology, such as CERN and the Human Genome Project.
All data from these studies, along with detailed specifications of the tools and protocols used for data collection, are openly accessible to the global scientific community for further analysis and research. Summaries of these resources can be viewed and downloaded on the IBL website under the sections: Data, Tools, Protocols.
This research was supported by grants from Wellcome, the Simons Foundation, the National Institutes of Health, the National Science Foundation, the Gatsby Charitable Foundation, and by the Max Planck Society and the Humboldt Foundation.
A greener way to 3D print stronger stuff
MIT CSAIL researchers developed SustainaPrint, a system that reinforces only the weakest zones of eco-friendly 3D prints, achieving strong results with less plastic.
3D printing has come a long way since its invention in 1983 by Chuck Hull, who pioneered stereolithography, a technique that solidifies liquid resin into solid objects using ultraviolet lasers. Over the decades, 3D printers have evolved from experimental curiosities into tools capable of producing everything from custom prosthetics to complex food designs, architectural models, and even functioning human organs.
But as the technology matures, its environmental footprint has become increasingly difficult to set aside. The vast majority of consumer and industrial 3D printing still relies on petroleum-based plastic filament. And while “greener” alternatives made from biodegradable or recycled materials exist, they come with a serious trade-off: they’re often not as strong. These eco-friendly filaments tend to become brittle under stress, making them ill-suited for structural applications or load-bearing parts — exactly where strength matters most.
This trade-off between sustainability and mechanical performance prompted researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Hasso Plattner Institute to ask: Is it possible to build objects that are mostly eco-friendly, but still strong where it counts?
Their answer is SustainaPrint, a new software and hardware toolkit designed to help users strategically combine strong and weak filaments to get the best of both worlds. Instead of printing an entire object with high-performance plastic, the system analyzes a model through finite element analysis simulations, predicts where the object is most likely to experience stress, and then reinforces just those zones with stronger material. The rest of the part can be printed using greener, weaker filament, reducing plastic use while preserving structural integrity.
“Our hope is that SustainaPrint can be used in industrial and distributed manufacturing settings one day, where local material stocks may vary in quality and composition,” says MIT PhD student and CSAIL researcher Maxine Perroni-Scharf, who is a lead author on a paper presenting the project. “In these contexts, the testing toolkit could help ensure the reliability of available filaments, while the software’s reinforcement strategy could reduce overall material consumption without sacrificing function.”
For their experiments, the team used Polymaker’s PolyTerra PLA as the eco-friendly filament, and standard or Tough PLA from Ultimaker for reinforcement. They used a 20 percent reinforcement threshold to show that even a small amount of strong plastic goes a long way. Using this ratio, SustainaPrint was able to recover up to 70 percent of the strength of an object printed entirely with high-performance plastic.
They printed dozens of objects, from simple mechanical shapes like rings and beams to more functional household items such as headphone stands, wall hooks, and plant pots. Each object was printed three ways: once using only eco-friendly filament, once using only strong PLA, and once with the hybrid SustainaPrint configuration. The printed parts were then mechanically tested by pulling, bending, or otherwise breaking them to measure how much force each configuration could withstand.
In many cases, the hybrid prints held up nearly as well as the full-strength versions. For example, in one test involving a dome-like shape, the hybrid version outperformed the version printed entirely in Tough PLA. The team believes this may be due to the reinforced version’s ability to distribute stress more evenly, avoiding the brittle failure sometimes caused by excessive stiffness.
“This indicates that in certain geometries and loading conditions, mixing materials strategically may actually outperform a single homogeneous material,” says Perroni-Scharf. “It’s a reminder that real-world mechanical behavior is full of complexity, especially in 3D printing, where interlayer adhesion and tool path decisions can affect performance in unexpected ways.”
A lean, green, eco-friendly printing machine
SustainaPrint starts by letting a user upload their 3D model into a custom interface. The user marks fixed regions and the areas where forces will be applied, and the software then uses finite element analysis to simulate how the object will deform under stress. It then creates a map of the stress distribution inside the structure, highlighting areas under compression or tension, and applies heuristics to segment the object into two categories of regions: those that need reinforcement, and those that don’t. A simplified sketch of that final segmentation step appears below.
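To make that segmentation step concrete, here is a minimal Python sketch of the kind of thresholding heuristic described above. It assumes per-element stress values have already been computed by an external FEA solver; the function name and the 20 percent default mirror the ratio the team reports, but the code is an illustration of the idea rather than the authors’ implementation.

import numpy as np

def segment_for_reinforcement(element_stress, reinforce_fraction=0.20):
    # element_stress: 1D array of per-element stress magnitudes
    # (e.g., von Mises stress) produced by an FEA solver.
    # reinforce_fraction: share of elements to print in the stronger
    # filament; 0.20 mirrors the 20 percent threshold in the paper.
    # Returns a boolean mask: True = strong PLA, False = eco filament.
    cutoff = np.quantile(element_stress, 1.0 - reinforce_fraction)
    return element_stress >= cutoff

# Illustrative use with made-up stress values for a 10-element mesh
stresses = np.array([0.2, 1.5, 0.3, 2.8, 0.1, 0.4, 3.1, 0.2, 0.6, 0.5])
print(segment_for_reinforcement(stresses))  # True marks the highest-stress 20 percent

In the real system this mask would then drive the dual-extrusion tool paths, but any per-element stress field can be thresholded the same way.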
Recognizing the need for accessible and low-cost testing, the team also developed a DIY testing toolkit to help users assess strength before printing. The kit has a 3D-printable device with modules for measuring both tensile and flexural strength. Users can pair the device with common items like pull-up bars or digital scales to get rough, but reliable performance metrics. The team benchmarked their results against manufacturer data and found that their measurements consistently fell within one standard deviation, even for filaments that had undergone multiple recycling cycles.
Although the current system is designed for dual-extrusion printers, the researchers believe that with some manual filament swapping and calibration, it could be adapted for single-extruder setups, too. In its current form, the system simplifies the modeling process by allowing just one force and one fixed boundary per simulation. While this covers a wide range of common use cases, the team sees future work expanding the software to support more complex and dynamic loading conditions. The team also sees potential in using AI to infer the object’s intended use based on its geometry, which could allow for fully automated stress modeling without manual input of forces or boundaries.
3D for free
The researchers plan to release SustainaPrint open-source, making both the software and testing toolkit available for public use and modification. Another initiative they aspire to bring to life in the future: education. “In a classroom, SustainaPrint isn’t just a tool, it’s a way to teach students about material science, structural engineering, and sustainable design, all in one project,” says Perroni-Scharf. “It turns these abstract concepts into something tangible.”
As 3D printing becomes more embedded in how we manufacture and prototype everything from consumer goods to emergency equipment, sustainability concerns will only grow. With tools like SustainaPrint, those concerns no longer need to come at the expense of performance. Instead, they can become part of the design process: built into the very geometry of the things we make.
Co-author Patrick Baudisch, who is a professor at the Hasso Plattner Institute, adds that “the project addresses a key question: What is the point of collecting material for the purpose of recycling, when there is no plan to actually ever use that material? Maxine presents the missing link between the theoretical/abstract idea of 3D printing material recycling and what it actually takes to make this idea relevant.”
Perroni-Scharf and Baudisch wrote the paper with CSAIL research assistant Jennifer Xiao; MIT Department of Electrical Engineering and Computer Science master’s student Cole Paulin ’24; master’s student Ray Wang SM ’25 and PhD student Ticha Sethapakdi SM ’19 (both CSAIL members); Hasso Plattner Institute PhD student Muhammad Abdullah; and Associate Professor Stefanie Mueller, lead of the Human-Computer Interaction Engineering Group at CSAIL.
The researchers’ work was supported by a Designing for Sustainability Grant from the Designing for Sustainability MIT-HPI Research Program. Their work will be presented at the ACM Symposium on User Interface Software and Technology in September.
A new generative AI approach to predicting chemical reactions
System developed at MIT could provide realistic predictions for a wide variety of reactions, while maintaining real-world physical constraints.
Many attempts have been made to harness the power of new artificial intelligence and large language models (LLMs) to try to predict the outcomes of new chemical reactions. These have had limited success, in part because until now they have not been grounded in an understanding of fundamental physical principles, such as the laws of conservation of mass. Now, a team of researchers at MIT has come up with a way of incorporating these physical constraints on a reaction prediction model, and thus greatly improving the accuracy and reliability of its outputs.
The new work was reported Aug. 20 in the journal Nature, in a paper by recent postdoc Joonyoung Joung (now an assistant professor at Kookmin University, South Korea); former software engineer Mun Hong Fong (now at Duke University); chemical engineering graduate student Nicholas Casetti; postdoc Jordan Liles; physics undergraduate student Ne Dassanayake; and senior author Connor Coley, who is the Class of 1957 Career Development Professor in the MIT departments of Chemical Engineering and Electrical Engineering and Computer Science.
“The prediction of reaction outcomes is a very important task,” Joung explains. For example, if you want to make a new drug, “you need to know how to make it. So, this requires us to know what product is likely” to result from a given set of chemical inputs to a reaction. But most previous efforts to carry out such predictions look only at a set of inputs and a set of outputs, without examining the intermediate steps or enforcing the constraint that no mass is gained or lost along the way, as it cannot be in an actual reaction.
Joung points out that while large language models such as ChatGPT have been very successful in many areas of research, these models do not provide a way to limit their outputs to physically realistic possibilities, such as by requiring them to adhere to conservation of mass. These models use computational “tokens,” which in this case represent individual atoms, but “if you don’t conserve the tokens, the LLM model starts to make new atoms, or deletes atoms in the reaction.” Instead of being grounded in real scientific understanding, “this is kind of like alchemy,” he says. While many attempts at reaction prediction only look at the final products, “we want to track all the chemicals, and how the chemicals are transformed” throughout the reaction process from start to end, he says.
In order to address the problem, the team made use of a method developed back in the 1970s by chemist Ivar Ugi, which uses a bond-electron matrix to represent the electrons in a reaction. They used this system as the basis for their new program, called FlowER (Flow matching for Electron Redistribution), which allows them to explicitly keep track of all the electrons in the reaction to ensure that none are spuriously added or deleted in the process.
The system uses a matrix to represent the electrons in a reaction, and uses nonzero values to represent bonds or lone electron pairs and zeros to represent a lack thereof. “That helps us to conserve both atoms and electrons at the same time,” says Fong. This representation, he says, was one of the key elements to including mass conservation in their prediction system.
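As a rough illustration of Ugi’s bookkeeping (not the FlowER code itself), the sketch below writes out the bond-electron matrix for water: bond orders sit off the diagonal, free valence electrons sit on the diagonal, and the sum of all entries equals the molecule’s total number of valence electrons. Comparing that sum before and after a reaction step is one simple conservation check; the helper functions here are hypothetical and purely for illustration.

import numpy as np

# Bond-electron (BE) matrix for water, atoms ordered [O, H, H]:
# off-diagonal entries are bond orders, diagonal entries are the
# free (nonbonded) valence electrons on each atom.
water = np.array([
    [4, 1, 1],  # O: two lone pairs (4 electrons), one bond to each H
    [1, 0, 0],  # H
    [1, 0, 0],  # H
])

def total_valence_electrons(be_matrix):
    # Each bond appears twice in the symmetric matrix, accounting for
    # its two shared electrons; the diagonal adds the free electrons.
    return int(be_matrix.sum())

def conserves_electrons(reactant_be, product_be):
    # A valid elementary step redistributes electrons over the same
    # atoms without creating or destroying any.
    return total_valence_electrons(reactant_be) == total_valence_electrons(product_be)

print(total_valence_electrons(water))  # 8 valence electrons in H2O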
The system they developed is still at an early stage, Coley says. “The system as it stands is a demonstration — a proof of concept that this generative approach of flow matching is very well suited to the task of chemical reaction prediction.” While the team is excited about this promising approach, he says, “we’re aware that it does have specific limitations as far as the breadth of different chemistries that it’s seen.” Although the model was trained using data on more than a million chemical reactions, obtained from a U.S. Patent Office database, those data do not include certain metals and some kinds of catalytic reactions, he says.
“We’re incredibly excited about the fact that we can get such reliable predictions of chemical mechanisms” from the existing system, he says. “It conserves mass, it conserves electrons, but we certainly acknowledge that there’s a lot more expansion and robustness to work on in the coming years as well.”
But even in its present form, which is being made freely available through the online platform GitHub, “we think it will make accurate predictions and be helpful as a tool for assessing reactivity and mapping out reaction pathways,” Coley says. “If we’re looking toward the future of really advancing the state of the art of mechanistic understanding and helping to invent new reactions, we’re not quite there. But we hope this will be a steppingstone toward that.”
“It’s all open source,” says Fong. “The models, the data, all of them are up there,” including a previous dataset developed by Joung that exhaustively lists the mechanistic steps of known reactions. “I think we are one of the pioneering groups making this dataset, and making it available open-source, and making this usable for everyone,” he says.
The FlowER model matches or outperforms existing approaches in finding standard mechanistic pathways, the team says, and makes it possible to generalize to previously unseen reaction types. They say the model could potentially be relevant for predicting reactions for medicinal chemistry, materials discovery, combustion, atmospheric chemistry, and electrochemical systems.
In their comparisons with existing reaction prediction systems, Coley says, “using the architecture choices that we’ve made, we get this massive increase in validity and conservation, and we get a matching or a little bit better accuracy in terms of performance.”
He adds that “what’s unique about our approach is that while we are using these textbook understandings of mechanisms to generate this dataset, we’re anchoring the reactants and products of the overall reaction in experimentally validated data from the patent literature.” They are inferring the underlying mechanisms, he says, rather than just making them up. “We’re imputing them from experimental data, and that’s not something that has been done and shared at this kind of scale before.”
The next step, he says, is “we are quite interested in expanding the model’s understanding of metals and catalytic cycles. We’ve just scratched the surface in this first paper,” and most of the reactions included so far don’t include metals or catalysts, “so that’s a direction we’re quite interested in.”
In the long term, he says, “a lot of the excitement is in using this kind of system to help discover new complex reactions and help elucidate new mechanisms. I think that the long-term potential impact is big, but this is of course just a first step.”
The work was supported by the Machine Learning for Pharmaceutical Discovery and Synthesis consortium and the National Science Foundation.
3 Questions: The pros and cons of synthetic data in AI
Artificially created data offer benefits from cost savings to privacy preservation, but their limitations require careful planning and evaluation, Kalyan Veeramachaneni says.
Synthetic data are artificially generated by algorithms to mimic the statistical properties of actual data, without containing any information from real-world sources. While concrete numbers are hard to pin down, some estimates suggest that more than 60 percent of data used for AI applications in 2024 was synthetic, and this figure is expected to grow across industries.
Because synthetic data don’t contain real-world information, they hold the promise of safeguarding privacy while reducing the cost and increasing the speed at which new AI models are developed. But using synthetic data requires careful evaluation, planning, and checks and balances to prevent loss of performance when AI models are deployed.
To unpack some pros and cons of using synthetic data, MIT News spoke with Kalyan Veeramachaneni, a principal research scientist in the Laboratory for Information and Decision Systems and co-founder of DataCebo whose open-core platform, the Synthetic Data Vault, helps users generate and test synthetic data.
Q: How are synthetic data created?
A: Synthetic data are algorithmically generated but do not come from a real situation. Their value lies in their statistical similarity to real data. If we’re talking about language, for instance, synthetic data look very much as if a human had written those sentences. While researchers have created synthetic data for a long time, what has changed in the past few years is our ability to build generative models out of data and use them to create realistic synthetic data. We can take a little bit of real data and build a generative model from that, which we can use to create as much synthetic data as we want. Plus, the model creates synthetic data in a way that captures all the underlying rules and infinite patterns that exist in the real data.
There are essentially four different data modalities: language, video or images, audio, and tabular data. All four of them have slightly different ways of building the generative models to create synthetic data. An LLM, for instance, is nothing but a generative model from which you are sampling synthetic data when you ask it a question.
A lot of language and image data are publicly available on the internet. But tabular data, which is the data collected when we interact with physical and social systems, is often locked up behind enterprise firewalls. Much of it is sensitive or private, such as customer transactions stored by a bank. For this type of data, platforms like the Synthetic Data Vault provide software that can be used to build generative models. Those models then create synthetic data that preserve customer privacy and can be shared more widely.
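As a concrete illustration of that workflow, a minimal sketch might look like the following. It assumes the open-source SDV library’s single-table API (version 1.x) and substitutes a tiny, made-up pandas table for the kind of data that would normally sit behind an enterprise firewall; the column names are purely illustrative.

import pandas as pd
from sdv.metadata import SingleTableMetadata
from sdv.single_table import GaussianCopulaSynthesizer

# Stand-in for sensitive real records
real_data = pd.DataFrame({
    "age": [34, 45, 23, 51, 38],
    "state": ["OH", "MA", "OH", "CA", "MA"],
    "purchase_amount": [120.0, 35.5, 240.0, 18.75, 99.0],
})

# Describe the table, fit a generative model, then sample synthetic rows
metadata = SingleTableMetadata()
metadata.detect_from_dataframe(real_data)

synthesizer = GaussianCopulaSynthesizer(metadata)
synthesizer.fit(real_data)

synthetic_data = synthesizer.sample(num_rows=1000)
print(synthetic_data.head())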
One powerful thing about this generative modeling approach for synthesizing data is that enterprises can now build a customized, local model for their own data. Generative AI automates what used to be a manual process.
Q: What are some benefits of using synthetic data, and which use-cases and applications are they particularly well-suited for?
A: One fundamental application which has grown tremendously over the past decade is using synthetic data to test software applications. There is data-driven logic behind many software applications, so you need data to test that software and its functionality. In the past, people have resorted to manually generating data, but now we can use generative models to create as much data as we need.
Users can also create specific data for application testing. Say I work for an e-commerce company. I can generate synthetic data that mimics real customers who live in Ohio and made transactions pertaining to one particular product in February or March.
Because synthetic data aren’t drawn from real situations, they are also privacy-preserving. One of the biggest problems in software testing has been getting access to sensitive real data for testing software in non-production environments, due to privacy concerns. Another immediate benefit is in performance testing. You can create a billion transactions from a generative model and test how fast your system can process them.
Another application where synthetic data hold a lot of promise is in training machine-learning models. Sometimes, we want an AI model to help us predict an event that is less frequent. A bank may want to use an AI model to predict fraudulent transactions, but there may be too few real examples to train a model that can identify fraud accurately. Synthetic data provide data augmentation — additional data examples that are similar to the real data. These can significantly improve the accuracy of AI models.
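A minimal sketch of that augmentation pattern, written with generic pandas and scikit-learn rather than any particular bank’s pipeline (the function, column names, and model choice are illustrative assumptions), might look like this:

import pandas as pd
from sklearn.linear_model import LogisticRegression

def augment_and_train(real_transactions, synthetic_fraud, feature_cols, label_col="is_fraud"):
    # real_transactions: labeled DataFrame with very few fraud rows.
    # synthetic_fraud: extra fraud-like rows sampled from a generative
    # model fit on the real fraud examples (as in the SDV sketch above).
    augmented = pd.concat([real_transactions, synthetic_fraud], ignore_index=True)
    model = LogisticRegression(max_iter=1000)
    model.fit(augmented[feature_cols], augmented[label_col])
    return model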
Also, sometimes users don’t have time or the financial resources to collect all the data. For instance, collecting data about customer intent would require conducting many surveys. If you end up with limited data and then try to train a model, it won’t perform well. You can augment by adding synthetic data to train those models better.
Q: What are some of the risks or potential pitfalls of using synthetic data, and are there steps users can take to prevent or mitigate those problems?
A: One of the biggest questions people often have in their mind is, if the data are synthetically created, why should I trust them? Determining whether you can trust the data often comes down to evaluating the overall system where you are using them.
There are a lot of aspects of synthetic data we have been able to evaluate for a long time. For instance, there are existing methods to measure how close synthetic data are to real data, and we can measure their quality and whether they preserve privacy. But there are other important considerations if you are using those synthetic data to train a machine-learning model for a new use case. How would you know the data are going to lead to models that still make valid conclusions?
New efficacy metrics are emerging, and the emphasis is now on efficacy for a particular task. You must really dig into your workflow to ensure the synthetic data you add to the system still allow you to draw valid conclusions. That is something that must be done carefully on an application-by-application basis.
Bias can also be an issue. Since it is created from a small amount of real data, the same bias that exists in the real data can carry over into the synthetic data. Just like with real data, you would need to purposefully make sure the bias is removed through different sampling techniques, which can create balanced datasets. It takes some careful planning, but you can calibrate the data generation to prevent the proliferation of bias.
To help with the evaluation process, our group created the Synthetic Data Metrics Library. We worried that people would use synthetic data in their environment and it would give different conclusions in the real world. We created a metrics and evaluation library to ensure checks and balances. The machine learning community has faced a lot of challenges in ensuring models can generalize to new situations. The use of synthetic data adds a whole new dimension to that problem.
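If that library corresponds to the open-source SDMetrics package that accompanies the Synthetic Data Vault, a minimal evaluation pass, continuing from the earlier SDV sketch, might look like the code below; the exact API details here are an assumption rather than a guarantee.

from sdmetrics.reports.single_table import QualityReport

# Compare the synthetic sample against the real table it was modeled on
report = QualityReport()
report.generate(real_data, synthetic_data, metadata.to_dict())

print(report.get_score())  # overall statistical-similarity score in [0, 1]
print(report.get_details(property_name="Column Shapes"))  # per-column distribution comparison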
I expect that the old systems of working with data, whether to build software applications, answer analytical questions, or train models, will dramatically change as we get more sophisticated at building these generative models. A lot of things we have never been able to do before will now be possible.
Soft materials hold onto “memories” of their past, for longer than previously thought
New findings could help manufacturers design gels, lotions, or even paving materials that last longer and perform more predictably.
If your hand lotion is a bit runnier than usual coming out of the bottle, it might have something to do with the goop’s “mechanical memory.”
Soft gels and lotions are made by mixing ingredients until they form a stable and uniform substance. But even after a gel has set, it can hold onto “memories,” or residual stress, from the mixing process. Over time, the material can give in to these embedded stresses and slide back into its former, premixed state. Mechanical memory is, in part, why hand lotion separates and gets runny over time.
Now, an MIT engineer has devised a simple way to measure the degree of residual stress in soft materials after they have been mixed, and found that common products like hair gel and shaving cream have longer mechanical memories, holding onto residual stresses for longer periods of time than manufacturers might have assumed.
In a study appearing today in Physical Review Letters, Crystal Owens, a postdoc in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), presents a new protocol for measuring residual stress in soft, gel-like materials, using a standard benchtop rheometer.
Applying this protocol to everyday soft materials, Owens found that if a gel is made by mixing it in one direction, once it settles into a stable and uniform state, it effectively holds onto the memory of the direction in which it is mixed. Even after several days, the gel will hold some internal stress that, if released, will cause the gel to shift in the direction opposite to how it was initially mixed, reverting back to its earlier state.
“This is one reason different batches of cosmetics or food behave differently even if they underwent ‘identical’ manufacturing,” Owens says. “Understanding and measuring these hidden stresses during processing could help manufacturers design better products that last longer and perform more predictably.”
A soft glass
Hand lotion, hair gel, and shaving cream all fall under the category of “soft glassy materials” — materials that exhibit properties of both solids and liquids.
“Anything you can pour into your hand and it forms a soft mound is going to be considered a soft glass,” Owens explains. “In materials science, it’s considered a soft version of something that has the same amorphous structure as glass.”
In other words, a soft glassy material is a strange amalgam of a solid and a liquid. It can be poured out like a liquid, and it can hold its shape like a solid. Once they are made, these materials exist in a delicate balance between solid and liquid. And Owens wondered: For how long?
“What happens to these materials after very long times? Do they finally relax or do they never relax?” Owens says. “From a physics perspective, that’s a very interesting concept: What is the essential state of these materials?”
Twist and hold
In the manufacturing of soft glassy materials such as hair gel and shampoo, ingredients are first mixed into a uniform product. Quality control engineers then let a sample sit for about a minute — a period of time that they assume is enough to allow any residual stresses from the mixing process to dissipate. In that time, the material should settle into a steady, stable state, ready for use.
But Owens suspected that the materials may hold some degree of stress from the production process long after they’ve appeared to settle.
“Residual stress is a low level of stress that’s trapped inside a material after it’s come to a steady state,” Owens says. “This sort of stress has not been measured in these sorts of materials.”
To test her hypothesis, she carried out experiments with two common soft glassy materials: hair gel and shaving cream. She made measurements of each material in a rheometer — an instrument consisting of two rotating plates that can twist and press a material together at precisely controlled pressures and forces that relate directly to the material’s internal stresses and strains.
In her experiments, she placed each material in the rheometer and spun the instrument’s top plate around to mix the material. Then she let the material settle, and then settle some more — much longer than one minute. During this time, she observed the amount of force it took the rheometer to hold the material in place. She reasoned that the greater the rheometer’s force, the more it must be counteracting any stress within the material that would otherwise cause it to shift out of its current state.
Over multiple experiments using this new protocol, Owens found that different types of soft glassy materials held a significant amount of residual stress, long after most researchers would assume the stress had dissipated. What’s more, she found that the degree of stress that a material retained was a reflection of the direction in which it was initially mixed, and when it was mixed.
“The material can effectively ‘remember’ which direction it was mixed, and how long ago,” Owens says. “And it turns out they hold this memory of their past, a lot longer than we used to think.”
In addition to the protocol she has developed to measure residual stress, Owens has developed a model to estimate how a material will change over time, given the degree of residual stress that it holds. Using this model, she says scientists might design materials with “short-term memory,” or very little residual stress, such that they remain stable over longer periods.
One material where she sees room for such improvement is asphalt — a substance that is first mixed, then poured in molten form over a surface where it then cools and settles over time. She suspects that residual stresses from the mixing of asphalt may contribute to cracks forming in pavement over time. Reducing these stresses at the start of the process could lead to longer-lasting, more resilient roads.
“People are inventing new types of asphalt all the time to be more eco-friendly, and all of these will have different levels of residual stress that will need some control,” she says. “There’s plenty of room to explore.”
This research was supported, in part, by MIT’s Postdoctoral Fellowship for Engineering Excellence and an MIT MathWorks Fellowship.
3 Questions: On biology and medicine’s “data revolution”
Professor Caroline Uhler discusses her work at the Schmidt Center, thorny problems in math, and the ongoing quest to understand some of the most complex interactions in biology.
Caroline Uhler is an Andrew (1956) and Erna Viterbi Professor of Engineering at MIT; a professor of electrical engineering and computer science in the Institute for Data, Systems, and Society (IDSS); and director of the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard, where she is also a core institute member and a member of the scientific leadership team.
Uhler is interested in all the methods by which scientists can uncover causality in biological systems, ranging from causal discovery on observed variables to causal feature learning and representation learning. In this interview, she discusses machine learning in biology, areas that are ripe for problem-solving, and cutting-edge research coming out of the Schmidt Center.
Q: The Eric and Wendy Schmidt Center has four distinct areas of focus structured around four natural levels of biological organization: proteins, cells, tissues, and organisms. What, within the current landscape of machine learning, makes now the right time to work on these specific problem classes?
A: Biology and medicine are currently undergoing a “data revolution.” The availability of large-scale, diverse datasets — ranging from genomics and multi-omics to high-resolution imaging and electronic health records — makes this an opportune time. Inexpensive and accurate DNA sequencing is a reality, advanced molecular imaging has become routine, and single-cell genomics is allowing the profiling of millions of cells. These innovations — and the massive datasets they produce — have brought us to the threshold of a new era in biology, one where we will be able to move beyond characterizing the units of life (such as all proteins, genes, and cell types) to understanding the “programs of life,” such as the logic of gene circuits and cell-cell communication that underlies tissue patterning and the molecular mechanisms that underlie the genotype-phenotype map.
At the same time, in the past decade, machine learning has seen remarkable progress with models like BERT, GPT-3, and ChatGPT demonstrating advanced capabilities in text understanding and generation, while vision transformers and multimodal models like CLIP have achieved human-level performance in image-related tasks. These breakthroughs provide powerful architectural blueprints and training strategies that can be adapted to biological data. For instance, transformers can model genomic sequences similar to language, and vision models can analyze medical and microscopy images.
Importantly, biology is poised to be not just a beneficiary of machine learning, but also a significant source of inspiration for new ML research. Much like agriculture and breeding spurred modern statistics, biology has the potential to inspire new and perhaps even more profound avenues of ML research. Unlike fields such as recommender systems and internet advertising, where there are no natural laws to discover and predictive accuracy is the ultimate measure of value, in biology, phenomena are physically interpretable, and causal mechanisms are the ultimate goal. Additionally, biology boasts genetic and chemical tools that enable perturbational screens on an unparalleled scale compared to other fields. These combined features make biology uniquely suited to both benefit greatly from ML and serve as a profound wellspring of inspiration for it.
Q: Taking a somewhat different tack, what problems in biology are still really resistant to our current tool set? Are there areas, perhaps specific challenges in disease or in wellness, which you feel are ripe for problem-solving?
A: Machine learning has demonstrated remarkable success in predictive tasks across domains such as image classification, natural language processing, and clinical risk modeling. However, in the biological sciences, predictive accuracy is often insufficient. The fundamental questions in these fields are inherently causal: How does a perturbation to a specific gene or pathway affect downstream cellular processes? What is the mechanism by which an intervention leads to a phenotypic change? Traditional machine learning models, which are primarily optimized for capturing statistical associations in observational data, often fail to answer such interventional queries. There is a strong need for biology and medicine to also inspire new foundational developments in machine learning.
The field is now equipped with high-throughput perturbation technologies — such as pooled CRISPR screens, single-cell transcriptomics, and spatial profiling — that generate rich datasets under systematic interventions. These data modalities naturally call for the development of models that go beyond pattern recognition to support causal inference, active experimental design, and representation learning in settings with complex, structured latent variables. From a mathematical perspective, this requires tackling core questions of identifiability, sample efficiency, and the integration of combinatorial, geometric, and probabilistic tools. I believe that addressing these challenges will not only unlock new insights into the mechanisms of cellular systems, but also push the theoretical boundaries of machine learning.
With respect to foundation models, a consensus in the field is that we are still far from creating a holistic foundation model for biology across scales, similar to what ChatGPT represents in the language domain — a sort of digital organism capable of simulating all biological phenomena. While new foundation models emerge almost weekly, these models have thus far been specialized for a specific scale and question, and focus on one or a few modalities.
Significant progress has been made in predicting protein structures from their sequences. This success has highlighted the importance of iterative machine learning challenges, such as CASP (Critical Assessment of Structure Prediction), which have been instrumental in benchmarking state-of-the-art algorithms for protein structure prediction and driving their improvement.
The Schmidt Center is organizing challenges to increase awareness in the ML field and make progress in the development of methods to solve causal prediction problems that are so critical for the biomedical sciences. With the increasing availability of single-gene perturbation data at the single-cell level, I believe predicting the effect of single or combinatorial perturbations, and which perturbations could drive a desired phenotype, are solvable problems. With our Cell Perturbation Prediction Challenge (CPPC), we aim to provide the means to objectively test and benchmark algorithms for predicting the effect of new perturbations.
Another area where the field has made remarkable strides is disease diagnostics and patient triage. Machine learning algorithms can integrate different sources of patient information (data modalities), generate missing modalities, identify patterns that may be difficult for us to detect, and help stratify patients based on their disease risk. While we must remain cautious about potential biases in model predictions, the danger of models learning shortcuts instead of true correlations, and the risk of automation bias in clinical decision-making, I believe this is an area where machine learning is already having a significant impact.
Q: Let’s talk about some of the headlines coming out of the Schmidt Center recently. What current research do you think people should be particularly excited about, and why?
A: In collaboration with Dr. Fei Chen at the Broad Institute, we have recently developed a method for the prediction of unseen proteins’ subcellular location, called PUPS. Many existing methods can only make predictions based on the specific protein and cell data on which they were trained. PUPS, however, combines a protein language model with an image in-painting model to utilize both protein sequences and cellular images. We demonstrate that the protein sequence input enables generalization to unseen proteins, and the cellular image input captures single-cell variability, enabling cell-type-specific predictions. The model learns how relevant each amino acid residue is for the predicted sub-cellular localization, and it can predict changes in localization due to mutations in the protein sequences. Since a protein’s function is closely tied to its subcellular localization, our predictions could provide insights into potential mechanisms of disease. In the future, we aim to extend this method to predict the localization of multiple proteins in a cell and possibly understand protein-protein interactions.
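To make the general pattern concrete, here is a deliberately tiny sketch, in Python with PyTorch, of the idea of conditioning an image model on a representation of the protein sequence so that predictions can extend to proteins the model has never seen. Everything in it, including the ToyLocalizer module, the composition-based sequence_embedding stand-in, and all of the dimensions, is a hypothetical illustration rather than the actual PUPS architecture.

```python
import torch
import torch.nn as nn

# Toy sketch: predict a per-pixel localization map from a cell image,
# conditioned on a vector derived from the protein sequence.
# In PUPS the sequence representation comes from a protein language model
# and the image branch performs in-painting; this sketch only mirrors
# the conditioning structure, nothing more.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def sequence_embedding(seq, dim=len(AMINO_ACIDS)):
    """Stand-in for a protein language model: amino-acid composition vector."""
    vec = torch.zeros(dim)
    for ch in seq:
        idx = AMINO_ACIDS.find(ch)
        if idx >= 0:
            vec[idx] += 1.0
    return vec / max(len(seq), 1)

class ToyLocalizer(nn.Module):
    """Predicts per-pixel localization probabilities from an image and a sequence vector."""
    def __init__(self, seq_dim=len(AMINO_ACIDS)):
        super().__init__()
        self.img_enc = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.seq_proj = nn.Linear(seq_dim, 8)
        self.head = nn.Conv2d(8, 1, kernel_size=1)

    def forward(self, image, seq_vec):
        h = torch.relu(self.img_enc(image))                      # (B, 8, H, W)
        s = self.seq_proj(seq_vec).unsqueeze(-1).unsqueeze(-1)   # (B, 8, 1, 1)
        return torch.sigmoid(self.head(h * s))                   # (B, 1, H, W)

model = ToyLocalizer()
image = torch.rand(1, 1, 64, 64)                  # one single-channel cell image
seq = sequence_embedding("MKTAYIAKQR").unsqueeze(0)  # hypothetical sequence
prediction = model(image, seq)                    # per-pixel localization probabilities
print(prediction.shape)  # torch.Size([1, 1, 64, 64])
```

Because the sequence enters only through a learned vector, the same trained weights can, in principle, be queried with a sequence that never appeared in training, which is the property the prose above emphasizes.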
Together with Professor G.V. Shivashankar, a long-time collaborator at ETH Zürich, we have previously shown how simple images of cells stained with fluorescent DNA-intercalating dyes to label the chromatin can yield a lot of information about the state and fate of a cell in health and disease, when combined with machine learning algorithms. Recently, we have furthered this observation and proved the deep link between chromatin organization and gene regulation by developing Image2Reg, a method that enables the prediction of unseen genetically or chemically perturbed genes from chromatin images. Image2Reg utilizes convolutional neural networks to learn an informative representation of the chromatin images of perturbed cells. It also employs a graph convolutional network to create a gene embedding that captures the regulatory effects of genes based on protein-protein interaction data, integrated with cell-type-specific transcriptomic data. Finally, it learns a map between the resulting physical and biochemical representation of cells, allowing us to predict the perturbed gene modules based on chromatin images.
We also recently finalized the development of MORPH, a method for predicting the outcomes of unseen combinatorial gene perturbations and identifying the types of interactions occurring between the perturbed genes. MORPH can guide the design of the most informative perturbations for lab-in-a-loop experiments. In addition, its attention-based framework provably enables the method to identify causal relations among the genes, providing insights into the underlying gene regulatory programs. Finally, thanks to its modular structure, we can apply MORPH to perturbation data measured in various modalities, including not only transcriptomics but also imaging. We are very excited about the potential of this method to enable efficient exploration of the perturbation space, bridging causal theory and important applications to advance our understanding of cellular programs, with implications for both basic research and therapeutics.
New gift expands mental illness studies at Poitras Center for Psychiatric Disorders Research
A commitment from longtime supporters Patricia and James Poitras ’63 initiates multidisciplinary efforts to understand and treat complex psychiatric disorders.
One in every eight people — 970 million globally — lives with mental illness, according to the World Health Organization, with depression and anxiety being the most common mental health conditions worldwide. Existing therapies for complex psychiatric disorders like depression, anxiety, and schizophrenia have limitations, and federal funding to address these shortcomings is growing increasingly uncertain.
Patricia and James Poitras ’63 have committed $8 million to the Poitras Center for Psychiatric Disorders Research to launch pioneering research initiatives aimed at uncovering the brain basis of major mental illness and accelerating the development of novel treatments.
“Federal funding rarely supports the kind of bold, early-stage research that has the potential to transform our understanding of psychiatric illness. Pat and I want to help fill that gap — giving researchers the freedom to follow their most promising leads, even when the path forward isn’t guaranteed,” says James Poitras, who is chair of the McGovern Institute for Brain Research board.
Their latest gift builds upon their legacy of philanthropic support for psychiatric disorders research at MIT, which now exceeds $46 million.
“With deep gratitude for Jim and Pat’s visionary support, we are eager to launch a bold set of studies aimed at unraveling the neural and cognitive underpinnings of major mental illnesses,” says Professor Robert Desimone, director of the McGovern Institute, home to the Poitras Center. “Together, these projects represent a powerful step toward transforming how we understand and treat mental illness.”
A legacy of support
Soon after joining the McGovern Institute Leadership Board in 2006, the Poitrases made a $20 million commitment to establish the Poitras Center for Psychiatric Disorders Research at MIT. The center’s goal, to improve human health by addressing the root causes of complex psychiatric disorders, is deeply personal to them both.
“We had decided many years ago that our philanthropic efforts would be directed towards psychiatric research. We could not have imagined then that this perfect synergy between research at MIT’s McGovern Institute and our own philanthropic goals would develop,” recalls Patricia.
The center supports research at the McGovern Institute and collaborative projects with institutions such as the Broad Institute of MIT and Harvard, McLean Hospital, Mass General Brigham, and other clinical research centers. Since its establishment in 2007, the center has enabled advances in psychiatric research including the development of a machine learning “risk calculator” for bipolar disorder, the use of brain imaging to predict treatment outcomes for anxiety, and studies demonstrating that mindfulness can improve mental health in adolescents.
For the past decade, the Poitrases have also fueled breakthroughs in the lab of McGovern investigator and MIT Professor Feng Zhang, backing the invention of powerful CRISPR systems and other molecular tools that are transforming biology and medicine. Their support has enabled the Zhang team to engineer new delivery vehicles for gene therapy, including vehicles capable of carrying genetic payloads that were once out of reach. The lab has also advanced innovative RNA-guided gene engineering tools such as NovaIscB, published in Nature Biotechnology in May 2025. These revolutionary genome editing and delivery technologies hold promise for the next generation of therapies needed for serious psychiatric illness.
In addition to fueling research in the center, the Poitras family has gifted two endowed professorships — the James and Patricia Poitras Professor of Neuroscience at MIT, currently held by Feng Zhang, and the James W. (1963) and Patricia T. Poitras Professor of Brain and Cognitive Sciences at MIT, held by Guoping Feng — and an annual postdoctoral fellowship at the McGovern Institute.
New initiatives at the Poitras Center
The Poitras family’s latest commitment to the Poitras Center will launch an ambitious set of new projects that bring together neuroscientists, clinicians, and computational experts to probe underpinnings of complex psychiatric disorders including schizophrenia, anxiety, and depression. These efforts reflect the center’s core mission: to speed scientific discovery and therapeutic innovation in the field of psychiatric brain disorders research.
McGovern cognitive neuroscientists Evelina Fedorenko PhD ’07, an associate professor, and Nancy Kanwisher ’80, PhD ’86, the Walter A. Rosenblith Professor of Cognitive Neuroscience — in collaboration with psychiatrist Ann Shinn of McLean Hospital — will explore how altered inner speech and reasoning contribute to the symptoms of schizophrenia. They will collect functional MRI data from individuals diagnosed with schizophrenia and matched controls as they perform reasoning tasks. The goal is to identify the brain activity patterns that underlie impaired reasoning in schizophrenia, a core cognitive disruption in the disorder.
A complementary line of investigation will focus on the role of inner speech — the “voice in our head” that shapes thought and self-awareness. The team will conduct a large-scale online behavioral study of neurotypical individuals to analyze how inner speech characteristics correlate with schizophrenia-spectrum traits. This will be followed by neuroimaging work comparing brain architecture among individuals with strong or weak inner voices and people with schizophrenia, with the aim of discovering neural markers linked to self-talk and disrupted cognition.
A different project led by McGovern neuroscientist and MIT Associate Professor Mark Harnett and 2024–2026 Poitras Center Postdoctoral Fellow Cynthia Rais focuses on how ketamine — an increasingly used antidepressant — alters brain circuits to produce rapid and sustained improvements in mood. Despite its clinical success, ketamine’s mechanisms of action remain poorly understood. The Harnett lab is using sophisticated tools to track how ketamine affects synaptic communication and large-scale brain network dynamics, particularly in models of treatment-resistant depression. By mapping these changes at both the cellular and systems levels, the team hopes to reveal how ketamine lifts mood so quickly — and inform the development of safer, longer-lasting antidepressants.
Guoping Feng is leveraging a new animal model of depression to uncover the brain circuits that drive major depressive disorder. The new animal model provides a powerful system for studying the intricacies of mood regulation. Feng’s team is using state-of-the-art molecular tools to identify the specific genes and cell types involved in this circuit, with the goal of developing targeted treatments that can fine-tune these emotional pathways.
“This is one of the most promising models we have for understanding depression at a mechanistic level,” says Feng, who is also associate director of the McGovern Institute. “It gives us a clear target for future therapies.”
Another novel approach to treating mood disorders comes from the lab of James DiCarlo, the Peter de Florez Professor of Neuroscience at MIT, who is exploring the brain’s visual-emotional interface as a therapeutic tool for anxiety. The amygdala, a key emotional center in the brain, is heavily influenced by visual input. DiCarlo’s lab is using advanced computational models to design visual scenes that may subtly shift emotional processing in the brain — essentially using sight to regulate mood. Unlike traditional therapies, this strategy could offer a noninvasive, drug-free option for individuals suffering from anxiety.
Together, these projects exemplify the kind of interdisciplinary, high-impact research that the Poitras Center was established to support.
“Mental illness affects not just individuals, but entire families who often struggle in silence and uncertainty,” adds Patricia Poitras. “Our hope is that Poitras Center scientists will continue to make important advancements and spark novel treatments for complex mental health disorders and, most of all, give families living with these conditions a renewed sense of hope for the future.”
New particle detector passes the “standard candle” test
The sPHENIX detector is on track to reveal properties of primordial quark-gluon plasma.
A new and powerful particle detector just passed a critical test in its goal to decipher the ingredients of the early universe.
The sPHENIX detector is the newest experiment at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC) and is designed to precisely measure products of high-speed particle collisions. From the aftermath, scientists hope to reconstruct the properties of quark-gluon plasma (QGP) — a white-hot soup of subatomic particles known as quarks and gluons that is thought to have sprung into existence in the few microseconds following the Big Bang. Just as quickly, the mysterious plasma disappeared, cooling and combining to form the protons and neutrons that make up today’s ordinary matter.
Now, the sPHENIX detector has made a key measurement that proves it has the precision to help piece together the primordial properties of quark-gluon plasma.
In a paper in the Journal of High Energy Physics, scientists including physicists at MIT report that sPHENIX precisely measured the number and energy of particles that streamed out from gold ions that collided at close to the speed of light.
Straight ahead
This test is considered in physics to be a “standard candle,” meaning that the measurement is a well-established constant that can be used to gauge a detector’s precision.
In particular, sPHENIX successfully measured the number of charged particles that are produced when two gold ions collide, and determined how this number changes when the ions collide head-on, versus just glancing by. The detector’s measurements revealed that head-on collisions produced 10 times more charged particles, which were also 10 times more energetic, compared to less straight-on collisions.
“This indicates the detector works as it should,” says Gunther Roland, professor of physics at MIT, who is a member and former spokesperson for the sPHENIX Collaboration. “It’s as if you sent a new telescope up in space after you’ve spent 10 years building it, and it snaps the first picture. It’s not necessarily a picture of something completely new, but it proves that it’s now ready to start doing new science.”
“With this strong foundation, sPHENIX is well-positioned to advance the study of the quark-gluon plasma with greater precision and improved resolution,” adds Hao-Ren Jheng, a graduate student in physics at MIT and a lead co-author of the new paper. “Probing the evolution, structure, and properties of the QGP will help us reconstruct the conditions of the early universe.”
The paper’s co-authors are all members of the sPHENIX Collaboration, which comprises over 300 scientists from multiple institutions around the world, including Roland, Jheng, and physicists at MIT’s Bates Research and Engineering Center.
“Gone in an instant”
Particle colliders such as Brookhaven’s RHIC are designed to accelerate particles at “relativistic” speeds, meaning close to the speed of light. When these particles are flung around in opposite, circulating beams and brought back together, any smash-ups that occur can release an enormous amount of energy. In the right conditions, this energy can very briefly exist in the form of quark-gluon plasma — the same stuff that sprung out of the Big Bang.
Just as in the early universe, quark-gluon plasma doesn’t hang around for very long in particle colliders. If and when QGP is produced, it exists for just 10 to the minus 22 seconds, less than a sextillionth of a second. In this moment, quark-gluon plasma is incredibly hot, up to several trillion degrees Celsius, and behaves as a “perfect fluid,” moving as one entity rather than as a collection of random particles. Almost immediately, this exotic behavior disappears, and the plasma cools and transitions into more ordinary particles such as protons and neutrons, which stream out from the main collision.
“You never see the QGP itself — you just see its ashes, so to speak, in the form of the particles that come from its decay,” Roland says. “With sPHENIX, we want to measure these particles to reconstruct the properties of the QGP, which is essentially gone in an instant.”
“One in a billion”
The sPHENIX detector is the next generation of Brookhaven’s original Pioneering High Energy Nuclear Interaction eXperiment, or PHENIX, which measured collisions of heavy ions generated by RHIC. In 2021, sPHENIX was installed in place of its predecessor, as a faster and more powerful version, designed to detect quark-gluon plasma’s more subtle and ephemeral signatures.
The detector itself is about the size of a two-story house and weighs around 1,000 tons. It sits at the intersection of RHIC’s two main collider beams, where relativistic particles, accelerated from opposite directions, meet and collide, producing particles that fly out into the detector. The sPHENIX detector is able to catch and measure 15,000 particle collisions per second, thanks to its novel, layered components, including the MVTX, or micro-vertex — a subdetector that was designed, built, and installed by scientists at MIT’s Bates Research and Engineering Center.
Together, the detector’s systems enable sPHENIX to act as a giant 3D camera that can track the number, energy, and paths of individual particles during an explosion of particles generated by a single collision.
“SPHENIX takes advantage of developments in detector technology since RHIC switched on 25 years ago, to collect data at the fastest possible rate,” says MIT postdoc Cameron Dean, who was a main contributor to the new study’s analysis. “This allows us to probe incredibly rare processes for the first time.”
In the fall of 2024, scientists ran the detector through the “standard candle” test to gauge its speed and precision. Over three weeks, they gathered data from sPHENIX as the main collider accelerated and smashed together beams of gold ions traveling at the speed of light. Their analysis of the data showed that sPHENIX accurately measured the number of charged particles produced in individual gold ion collisions, as well as the particles’ energies. What’s more, the detector was sensitive to a collision’s “head-on-ness,” and could observe that head-on collisions produced more particles with greater energy, compared to less direct collisions.
“This measurement provides clear evidence that the detector is functioning as intended,” Jheng says.
“The fun for sPHENIX is just beginning,” Dean adds. “We are currently back colliding particles and expect to do so for several more months. With all our data, we can look for the one-in-a-billion rare process that could give us insights on things like the density of QGP, the diffusion of particles through ultra-dense matter, and how much energy it takes to bind different particles together.”
This work was supported, in part, by the U.S. Department of Energy Office of Science, and the National Science Foundation.
MIT researchers develop AI tool to improve flu vaccine strain selection
VaxSeer uses machine learning to predict virus evolution and antigenicity, aiming to make vaccine selection more accurate and less reliant on guesswork.
Every year, global health experts are faced with a high-stakes decision: Which influenza strains should go into the next seasonal vaccine? The choice must be made months in advance, long before flu season even begins, and it can often feel like a race against the clock. If the selected strains match those that circulate, the vaccine will likely be highly effective. But if the prediction is off, protection can drop significantly, leading to (potentially preventable) illness and strain on health care systems.
This challenge became even more familiar to scientists during the Covid-19 pandemic. Think back to the times, again and again, when new variants emerged just as vaccines were being rolled out. Influenza behaves like a similarly rowdy cousin, mutating constantly and unpredictably. That makes it hard to stay ahead, and therefore harder to design vaccines that remain protective.
To reduce this uncertainty, scientists at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT Abdul Latif Jameel Clinic for Machine Learning in Health set out to make vaccine selection more accurate and less reliant on guesswork. They created an AI system called VaxSeer, designed to predict dominant flu strains and identify the most protective vaccine candidates, months ahead of time. The tool uses deep learning models trained on decades of viral sequences and lab test results to simulate how the flu virus might evolve and how the vaccines will respond.
Traditional evolution models often analyze the effect of single amino acid mutations independently. “VaxSeer adopts a large protein language model to learn the relationship between dominance and the combinatorial effects of mutations,” explains Wenxian Shi, a PhD student in MIT’s Department of Electrical Engineering and Computer Science, researcher at CSAIL, and lead author of a new paper on the work. “Unlike existing protein language models that assume a static distribution of viral variants, we model dynamic dominance shifts, making it better suited for rapidly evolving viruses like influenza.”
An open-access report on the study was published today in Nature Medicine.
The future of flu
VaxSeer has two core prediction engines: one that estimates how likely each viral strain is to spread (dominance), and another that estimates how effectively a vaccine will neutralize that strain (antigenicity). Together, they produce a predicted coverage score: a forward-looking measure of how well a given vaccine is likely to perform against future viruses.
The score ranges from negative infinity to 0. The closer the score is to 0, the better the antigenic match between the vaccine strains and the circulating viruses. (You can think of it as the negative of a kind of “distance.”)
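As a rough illustration of how a score bounded above by 0 can behave, the sketch below weights a per-strain antigenic-match term (0 for a perfect match, more negative for a worse one) by each strain's predicted dominance. The weighting scheme, the coverage_score function, and the toy numbers are assumptions made for this example, not the exact formula used by VaxSeer.

```python
# Hypothetical illustration of a dominance-weighted coverage score.
# A value of 0 would mean a perfect antigenic match to every strain
# expected to circulate; more negative values mean worse coverage.

def coverage_score(dominance, antigenic_match):
    """dominance: dict strain -> predicted probability of circulating (sums to 1).
    antigenic_match: dict strain -> log-scale match of the vaccine to that strain,
        where 0 is a perfect match and more negative is worse."""
    return sum(dominance[s] * antigenic_match[s] for s in dominance)

# Toy example: the vaccine matches the strain expected to dominate.
dominance = {"clade-X": 0.7, "clade-Y": 0.3}
match = {"clade-X": -0.2, "clade-Y": -1.5}
print(coverage_score(dominance, match))  # -0.59; closer to 0 means better coverage
```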
In a 10-year retrospective study, the researchers evaluated VaxSeer’s recommendations against those made by the World Health Organization (WHO) for two major flu subtypes: A/H3N2 and A/H1N1. For A/H3N2, VaxSeer’s choices outperformed the WHO’s in nine out of 10 seasons, based on retrospective empirical coverage scores, a surrogate for vaccine effectiveness calculated from the observed dominance in past seasons and experimental hemagglutination inhibition (HI) test results. The team used this surrogate to evaluate vaccine selections because effectiveness can only be measured for vaccines actually given to the population.
For A/H1N1, it outperformed or matched the WHO in six out of 10 seasons. In one notable case, for the 2016 flu season, VaxSeer identified a strain that wasn’t chosen by the WHO until the following year. The model’s predictions also showed strong correlation with real-world vaccine effectiveness estimates, as reported by the CDC, Canada’s Sentinel Practitioner Surveillance Network, and Europe’s I-MOVE program. VaxSeer’s predicted coverage scores aligned closely with public health data on flu-related illnesses and medical visits prevented by vaccination.
So how exactly does VaxSeer make sense of all these data? Intuitively, the model first estimates how rapidly a viral strain spreads over time using a protein language model, and then determines its dominance by accounting for competition among different strains.
Once the model has calculated its insights, they’re plugged into a mathematical framework based on ordinary differential equations to simulate viral spread over time. For antigenicity, the system estimates how well a given vaccine strain will perform in a common lab test called the hemagglutination inhibition assay. This measures how effectively antibodies can inhibit the virus from binding to human red blood cells, and is a widely used proxy for antigenicity.
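As a minimal sketch of what an ordinary-differential-equation view of strain competition can look like, the following snippet evolves the shares of a few hypothetical strains under replicator-style dynamics, in which a strain's share grows when its fitness exceeds the population average. The fitness numbers and the specific equations are illustrative assumptions; they stand in for, but do not reproduce, the model in the paper, where per-strain spreading rates come from the protein language model.

```python
import numpy as np

# Replicator-style competition among strains: each strain i has a fitness f[i];
# its share x[i] grows or shrinks relative to the population-average fitness.
# Integrated here with a simple Euler loop for illustration only.

def simulate_dominance(fitness, x0, dt=0.1, steps=500):
    x = np.array(x0, dtype=float)
    f = np.array(fitness, dtype=float)
    for _ in range(steps):
        mean_fitness = x @ f
        x = x + dt * x * (f - mean_fitness)   # replicator dynamics step
        x = np.clip(x, 0, None)
        x = x / x.sum()                       # keep shares normalized
    return x

# Three hypothetical strains; the fittest one comes to dominate over time.
print(simulate_dominance(fitness=[1.0, 1.2, 0.9], x0=[0.5, 0.3, 0.2]))
```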
Outpacing evolution
“By modeling how viruses evolve and how vaccines interact with them, AI tools like VaxSeer could help health officials make better, faster decisions — and stay one step ahead in the race between infection and immunity,” says Shi.
VaxSeer currently focuses only on the flu virus’s HA (hemagglutinin) protein, the major antigen of influenza. Future versions could incorporate other proteins like NA (neuraminidase), and factors like immune history, manufacturing constraints, or dosage levels. Applying the system to other viruses would also require large, high-quality datasets that track both viral evolution and immune responses — data that aren’t always publicly available. The team, however, is currently working on methods that can predict viral evolution in low-data regimes by building on relationships between viral families.
“Given the speed of viral evolution, current therapeutic development often lags behind. VaxSeer is our attempt to catch up,” says Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health at MIT, AI lead of Jameel Clinic, and CSAIL principal investigator.
“This paper is impressive, but what excites me perhaps even more is the team’s ongoing work on predicting viral evolution in low-data settings,” says Assistant Professor Jon Stokes of the Department of Biochemistry and Biomedical Sciences at McMaster University in Hamilton, Ontario. “The implications go far beyond influenza. Imagine being able to anticipate how antibiotic-resistant bacteria or drug-resistant cancers might evolve, both of which can adapt rapidly. This kind of predictive modeling opens up a powerful new way of thinking about how diseases change, giving us the opportunity to stay one step ahead and design clinical interventions before escape becomes a major problem.”
Shi and Barzilay wrote the paper with MIT CSAIL postdoc Jeremy Wohlwend ’16, MEng ’17, PhD ’25 and recent CSAIL affiliate Menghua Wu ’19, MEng ’20, PhD ’25. Their work was supported, in part, by the U.S. Defense Threat Reduction Agency and MIT Jameel Clinic.
New self-assembling material could be the key to recyclable EV batteries
MIT researchers designed an electrolyte that can break apart at the end of a battery’s life, allowing for easier recycling of components.
Today’s electric vehicle boom is tomorrow’s mountain of electronic waste. And while myriad efforts are underway to improve battery recycling, many EV batteries still end up in landfills.
A research team from MIT wants to help change that with a new kind of self-assembling battery material that quickly breaks apart when submerged in a simple organic liquid. In a new paper published in Nature Chemistry, the researchers showed the material can work as the electrolyte in a functioning, solid-state battery cell and then revert back to its original molecular components in minutes.
The approach offers an alternative to shredding the battery into a mixed, hard-to-recycle mass. Because the electrolyte serves as the battery’s connecting layer, the entire battery comes apart when the new material returns to its original molecular form, which accelerates the recycling process.
“So far in the battery industry, we’ve focused on high-performing materials and designs, and only later tried to figure out how to recycle batteries made with complex structures and hard-to-recycle materials,” says the paper’s first author Yukio Cho PhD ’23. “Our approach is to start with easily recyclable materials and figure out how to make them battery-compatible. Designing batteries for recyclability from the beginning is a new approach.”
Joining Cho on the paper are PhD candidate Cole Fincher, Ty Christoff-Tempesta PhD ’22, Kyocera Professor of Ceramics Yet-Ming Chiang, Visiting Associate Professor Julia Ortony, Xiaobing Zuo, and Guillaume Lamour.
Better batteries
There’s a scene in one of the “Harry Potter” films where Professor Dumbledore cleans a dilapidated home with the flick of the wrist and a spell. Cho says that image stuck with him as a kid. (What better way to clean your room?) When he saw a talk by Ortony on engineering molecules so that they could assemble into complex structures and then revert back to their original form, he wondered if it could be used to make battery recycling work like magic.
That would be a paradigm shift for the battery industry. Today, batteries require harsh chemicals, high heat, and complex processing to recycle. There are three main parts of a battery: the positively charged cathode, the negatively charged anode, and the electrolyte that shuttles lithium ions between them. The electrolytes in most lithium-ion batteries are highly flammable and degrade over time into toxic byproducts that require specialized handling.
To simplify the recycling process, the researchers decided to make a more sustainable electrolyte. For that, they turned to a class of molecules that self-assemble in water, named aramid amphiphiles (AAs), whose chemical structures and stability mimic that of Kevlar. The researchers further designed the AAs to contain polyethylene glycol (PEG), which can conduct lithium ions, on one end of each molecule. When the molecules are exposed to water, they spontaneously form nanoribbons with ion-conducting PEG surfaces and bases that imitate the robustness of Kevlar through tight hydrogen bonding. The result is a mechanically stable nanoribbon structure that conducts ions across its surface.
“The material is composed of two parts,” Cho explains. “The first part is this flexible chain that gives us a nest, or host, for lithium ions to jump around. The second part is this strong organic material component that is used in the Kevlar, which is a bulletproof material. Those make the whole structure stable.”
When added to water, the molecules self-assemble into millions of nanoribbons, which can then be hot-pressed into a solid-state material.
“Within five minutes of being added to water, the solution becomes gel-like, indicating there are so many nanofibers formed in the liquid that they start to entangle each other,” Cho says. “What’s exciting is we can make this material at scale because of the self-assembly behavior.”
The team tested the material’s strength and toughness, finding it could endure the stresses associated with making and running the battery. They also constructed a solid-state battery cell that used lithium iron phosphate for the cathode and lithium titanium oxide as the anode, both common materials in today’s batteries. The nanoribbons moved lithium ions successfully between the electrodes, but a side-effect known as polarization limited the movement of lithium ions into the battery’s electrodes during fast bouts of charging and discharging, hampering its performance compared to today’s gold-standard commercial batteries.
“The lithium ions moved along the nanofiber all right, but getting the lithium ion from the nanofibers to the metal oxide seems to be the most sluggish point of the process,” Cho says.
When they immersed the battery cell into organic solvents, the material immediately dissolved, with each part of the battery falling away for easier recycling. Cho compared the materials’ reaction to cotton candy being submerged in water.
“The electrolyte holds the two battery electrodes together and provides the lithium-ion pathways,” Cho says. “So, when you want to recycle the battery, the entire electrolyte layer can fall off naturally and you can recycle the electrodes separately.”
Validating a new approach
Cho says the material is a proof of concept that demonstrates the recycle-first approach.
“We don’t want to say we solved all the problems with this material,” Cho says. “Our battery performance was not fantastic because we used only this material as the entire electrolyte for the paper, but what we’re picturing is using this material as one layer in the battery electrolyte. It doesn’t have to be the entire electrolyte to kick off the recycling process.”
Cho also sees a lot of room for optimizing the material’s performance with further experiments.
Now, the researchers are exploring ways to integrate these kinds of materials into existing battery designs as well as implementing the ideas into new battery chemistries.
“It’s very challenging to convince existing vendors to do something very differently,” Cho says. “But with new battery materials that may come out in five or 10 years, it could be easier to integrate this into new designs in the beginning.”
Cho also believes the approach could help reshore lithium supplies by reusing materials from batteries that are already in the U.S.
“People are starting to realize how important this is,” Cho says. “If we can start to recycle lithium-ion batteries from battery waste at scale, it’ll have the same effect as opening lithium mines in the U.S. Also, each battery requires a certain amount of lithium, so extrapolating out the growth of electric vehicles, we need to reuse this material to avoid massive lithium price spikes.”
The work was supported, in part, by the National Science Foundation and the U.S. Department of Energy. This work was performed, in part, using the MIT.nano Characterization facilities.
Why countries trade with each other while fighting
Mariya Grinberg’s new book, “Trade in War,” examines the curious phenomenon of economic trade during military conflict.
In World War II, Britain was fighting for its survival against German aerial bombardment. Yet Britain was importing dyes from Germany at the same time. This sounds curious, to put it mildly. How can two countries at war with each other also be trading goods?
Examples of this abound, actually. Britain also traded with its enemies for almost all of World War I. India and Pakistan conducted trade with each other during the First Kashmir War, from 1947 to 1949, and during the India-Pakistan War of 1965. Croatia and then-Yugoslavia traded with each other while fighting in 1992.
“States do in fact trade with their enemies during wars,” says MIT political scientist Mariya Grinberg. “There is a lot of variation in which products get traded, and in which wars, and there are differences in how long trade lasts into a war. But it does happen.”
Indeed, as Grinberg has found, state leaders tend to calculate whether trade can give them an advantage by boosting their own economies while not supplying their enemies with anything too useful in the near term.
“At its heart, wartime trade is all about the tradeoff between military benefits and economic costs,” Grinberg says. “Severing trade denies the enemy access to your products that could increase their military capabilities, but it also incurs a cost to you because you’re losing trade and neutral states could take over your long-term market share.” Therefore, many countries try trading with their wartime foes.
Grinberg explores this topic in a groundbreaking new book, the first one on the subject, “Trade in War: Economic Cooperation Across Enemy Lines,” published this month by Cornell University Press. It is also the first book by Grinberg, an assistant professor of political science at MIT.
Calculating time and utility
“Trade in War” has its roots in research Grinberg started as a doctoral student at the University of Chicago, where she noticed that wartime trade was a phenomenon not yet incorporated into theories of state behavior.
Grinberg wanted to learn about it comprehensively, so, as she quips, “I did what academics usually do: I went to the work of historians and said, ‘Historians, what have you got for me?’”
Modern wartime trading began during the Crimean War, which pitted Russia against France, Britain, the Ottoman Empire, and other allies. Before the war’s start in 1854, France had paid for many Russian goods that could not be shipped because ice in the Baltic Sea was late to thaw. To recover the goods it had already paid for, France then persuaded Britain and Russia to adopt “neutral rights,” codified in the 1856 Declaration of Paris, which formalized the idea that goods in wartime could be shipped via neutral parties (sometimes acting as intermediaries for warring countries).
“This mental image that everyone has, that we don’t trade with our enemies during war, is actually an artifact of the world without any neutral rights,” Grinberg says. “Once we develop neutral rights, all bets are off, and now we have wartime trade.”
Overall, Grinberg’s systematic analysis of wartime trade shows that it needs to be understood on the level of particular goods. During wartime, states calculate how much it would hurt their own economies to stop trade of certain items; how useful specific products would be to enemies during war, and in what time frame; and how long a war is going to last.
“There are two conditions under which we can see wartime trade,” Grinberg says. “Trade is permitted when it does not help the enemy win the war, and it’s permitted when ending it would damage the state’s long-term economic security, beyond the current war.”
Therefore a state might export diamonds, knowing an adversary would need to resell such products over time to finance any military activities. Conversely, states will not trade products that can quickly convert into military use.
“The tradeoff is not the same for all products,” Grinberg says. “All products can be converted into something of military utility, but they vary in how long that takes. If I’m expecting to fight a short war, things that take a long time for my opponent to convert into military capabilities won’t help them win the current war, so they’re safer to trade.” Moreover, she adds, “States tend to prioritize maintaining their long-term economic stability, as long as the stakes don’t hit too close to home.”
This calculus helps explain some seemingly inexplicable wartime trade decisions. In 1917, three years into World War I, Germany started trading dyes to Britain. As it happens, dyes have military uses, for example as coatings for equipment. And World War I, infamously, was lasting far beyond initial expectations. But as of 1917, German planners thought the introduction of unrestricted submarine warfare would bring the war to a halt in their favor within a few months, so they approved the dye exports. That calculation was wrong, but it fits the framework Grinberg has developed.
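Read as a stylized decision rule, the calculus might look something like the sketch below: a product is traded if the enemy cannot convert it into military capability before the war is expected to end, or if severing trade would sacrifice long-term market share. The permit_trade function, its threshold, and the example values are hypothetical simplifications of the argument, not a formal model from the book.

```python
# Stylized reading of the tradeoff described above, for illustration only.

def permit_trade(conversion_time_months, expected_war_months,
                 long_term_cost, cost_threshold=0.5):
    """Trade is permitted if the product cannot be converted into military
    capability within the expected war, or if cutting it off would impose a
    high long-term economic cost (e.g., ceding market share to neutral states)."""
    helps_enemy_win = conversion_time_months <= expected_war_months
    return (not helps_enemy_win) or (long_term_cost > cost_threshold)

# Diamonds: slow to convert into military capability, so safe to trade in a short war.
print(permit_trade(conversion_time_months=24, expected_war_months=6, long_term_cost=0.7))  # True
# A quickly convertible good with little long-term cost of severing trade is embargoed.
print(permit_trade(conversion_time_months=1, expected_war_months=6, long_term_cost=0.2))   # False
```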
States: Usually wrong about the length of wars
“Trade in War” has received praise from other scholars in the field. Michael Mastanduno of Dartmouth College has said the book “is a masterful contribution to our understanding of how states manage trade-offs across economics and security in foreign policy.”
For her part, Grinberg notes that her work holds multiple implications for international relations — one being that trade relationships do not prevent hostilities from unfolding, as some have theorized.
“We can’t expect even strong trade relations to deter a conflict,” Grinberg says. “On the other hand, when we learn our assumptions about the world are not necessarily correct, we can try to find different levers to deter war.”
Grinberg has also observed that states are not good, by any measure, at projecting how long they will be at war.
“States very infrequently get forecasts about the length of war right,” Grinberg says. That fact has formed the basis of a second, ongoing Grinberg book project.
“Now I’m studying why states go to war unprepared, why they think their wars are going to end quickly,” Grinberg says. “If people just read history, they will learn almost all of human history works against this assumption.”
At the same time, Grinberg thinks there is much more that scholars could learn specifically about trade and economic relations among warring countries — and hopes her book will spur additional work on the subject.
“I’m almost certain that I’ve only just begun to scratch the surface with this book,” she says.
New method could monitor corrosion and cracking in a nuclear reactor
By directly imaging material failure in 3D, this real-time technique could help scientists improve reactor safety and longevity.
MIT researchers have developed a technique that enables real-time, 3D monitoring of corrosion, cracking, and other material failure processes inside a nuclear reactor environment.
This could allow engineers and scientists to design safer nuclear reactors that also deliver higher performance for applications like electricity generation and naval vessel propulsion.
During their experiments, the researchers utilized extremely powerful X-rays to mimic the behavior of neutrons interacting with a material inside a nuclear reactor.
They found that adding a buffer layer of silicon dioxide between the material and its substrate, and keeping the material under the X-ray beam for a longer period of time, improves the stability of the sample. This allows for real-time monitoring of material failure processes.
By reconstructing 3D image data on the structure of a material as it fails, researchers could design more resilient materials that can better withstand the stress caused by irradiation inside a nuclear reactor.
“If we can improve materials for a nuclear reactor, it means we can extend the life of that reactor. It also means the materials will take longer to fail, so we can get more use out of a nuclear reactor than we do now. The technique we’ve demonstrated here allows us to push the boundary of understanding how materials fail in real time,” says Ericmoore Jossou, who holds shared appointments in the Department of Nuclear Science and Engineering (NSE), where he is the John Clark Hardwick Professor, the Department of Electrical Engineering and Computer Science (EECS), and the MIT Schwarzman College of Computing.
Jossou, senior author of a study on this technique, is joined on the paper by lead author David Simonne, an NSE postdoc; Riley Hultquist, a graduate student in NSE; Jiangtao Zhao, of the European Synchrotron; and Andrea Resta, of Synchrotron SOLEIL. The research was published Tuesday in the journal Scripta Materialia.
“Only with this technique can we measure strain with a nanoscale resolution during corrosion processes. Our goal is to bring such novel ideas to the nuclear science community while using synchrotrons both as an X-ray probe and radiation source,” adds Simonne.
Real-time imaging
Studying real-time failure of materials used in advanced nuclear reactors has long been a goal of Jossou’s research group.
Usually, researchers can only learn about such material failures after the fact, by removing the material from its environment and imaging it with a high-resolution instrument.
“We are interested in watching the process as it happens. If we can do that, we can follow the material from beginning to end and see when and how it fails. That helps us understand a material much better,” he says.
They simulate the process by firing an extremely focused X-ray beam at a sample to mimic the environment inside a nuclear reactor. The researchers must use a special type of high-intensity X-ray, which is only found in a handful of experimental facilities worldwide.
For these experiments they studied nickel, a material incorporated into alloys that are commonly used in advanced nuclear reactors. But before they could start the X-ray equipment, they had to prepare a sample.
To do this, the researchers used a process called solid state dewetting, which involves putting a thin film of the material onto a substrate and heating it to an extremely high temperature in a furnace until it transforms into single crystals.
“We thought making the samples was going to be a walk in the park, but it wasn’t,” Jossou says.
As the nickel heated up, it interacted with the silicon substrate and formed a new chemical compound, essentially derailing the entire experiment. After much trial-and-error, the researchers found that adding a thin layer of silicon dioxide between the nickel and substrate prevented this reaction.
But when crystals formed on top of the buffer layer, they were highly strained. This means the individual atoms had moved slightly to new positions, causing distortions in the crystal structure.
Phase retrieval algorithms can typically recover the 3D size and shape of a crystal in real-time, but if there is too much strain in the material, the algorithms will fail.
However, the team was surprised to find that keeping the X-ray beam trained on the sample for a longer period of time caused the strain to slowly relax, due to the silicon buffer layer. After a few extra minutes of X-rays, the sample was stable enough that they could utilize phase retrieval algorithms to accurately recover the 3D shape and size of the crystal.
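For readers unfamiliar with phase retrieval, the snippet below runs a generic error-reduction loop on a synthetic 2D object: it repeatedly enforces the measured diffraction magnitudes in Fourier space and a known support with positivity in real space. This is a textbook-style stand-in, under simplifying assumptions, for the far more sophisticated 3D reconstructions the researchers perform; the object, support, and iteration count here are invented for illustration.

```python
import numpy as np

# Minimal error-reduction (Gerchberg-Saxton style) phase retrieval on a toy object.

rng = np.random.default_rng(0)

# Toy "crystal": a small bright square on a dark background; its footprint is the support.
obj = np.zeros((64, 64))
obj[24:40, 24:40] = 1.0
support = obj > 0

# In an experiment only the diffraction magnitudes are measured; the phases are lost.
measured_mag = np.abs(np.fft.fft2(obj))

# Start from a random guess inside the support and iterate between the two constraints.
guess = rng.random(obj.shape) * support
for _ in range(200):
    F = np.fft.fft2(guess)
    F = measured_mag * np.exp(1j * np.angle(F))               # enforce measured magnitudes
    guess = np.real(np.fft.ifft2(F))
    guess = np.where(support, np.clip(guess, 0, None), 0.0)   # enforce support and positivity

error = np.linalg.norm(guess - obj) / np.linalg.norm(obj)
print(f"relative reconstruction error: {error:.3f}")
```

As the article notes, when the real-space object carries too much strain (i.e., strong phase structure of its own), iterative schemes like this struggle to converge, which is why relaxing the strain under the beam mattered for the experiment.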
“No one had been able to do that before. Now that we can make this crystal, we can image electrochemical processes like corrosion in real time, watching the crystal fail in 3D under conditions that are very similar to inside a nuclear reactor. This has far-reaching impacts,” he says.
They also experimented with other substrates, such as niobium-doped strontium titanate, and found that only a silicon dioxide-buffered silicon wafer produced this unique effect.
An unexpected result
As they fine-tuned the experiment, the researchers discovered something else.
They could also use the X-ray beam to precisely control the amount of strain in the material, which could have implications for the development of microelectronics.
In the microelectronics community, engineers often introduce strain to deform a material’s crystal structure in a way that boosts its electrical or optical properties.
“With our technique, engineers can use X-rays to tune the strain in microelectronics while they are manufacturing them. While this was not our goal with these experiments, it is like getting two results for the price of one,” he adds.
In the future, the researchers want to apply this technique to more complex materials like steel and other metal alloys used in nuclear reactors and aerospace applications. They also want to see how changing the thickness of the silicon dioxide buffer layer impacts their ability to control the strain in a crystal sample.
“This discovery is significant for two reasons. First, it provides fundamental insight into how nanoscale materials respond to radiation — a question of growing importance for energy technologies, microelectronics, and quantum materials. Second, it highlights the critical role of the substrate in strain relaxation, showing that the supporting surface can determine whether particles retain or release strain when exposed to focused X-ray beams,” says Edwin Fohtung, an associate professor at the Rensselaer Polytechnic Institute, who was not involved with this work.
This work was funded, in part, by the MIT Faculty Startup Fund and the U.S. Department of Energy. The sample preparation was carried out, in part, at the MIT.nano facilities.
Professor Emeritus Rainer Weiss, influential physicist who forged new paths to understanding the universe, dies at 92
The longtime MIT professor shared a Nobel Prize for his role in developing the LIGO observatory and detecting gravitational waves.
MIT Professor Emeritus Rainer Weiss ’55, PhD ’62, a renowned experimental physicist and Nobel laureate whose groundbreaking work confirmed a longstanding prediction about the nature of the universe, passed away on Aug. 25. He was 92.
Weiss conceived of the Laser Interferometer Gravitational-Wave Observatory (LIGO) for detecting ripples in space-time known as gravitational waves, and was later a leader of the team that built LIGO and achieved the first-ever detection of gravitational waves. He shared the Nobel Prize in Physics for this work in 2017. Together with international collaborators, he and his colleagues at LIGO would go on to detect many more of these cosmic reverberations, opening up a new way for scientists to view the universe.
During his remarkable career, Weiss also developed a more precise atomic clock and figured out how to measure the spectrum of the cosmic microwave background via a weather balloon. He later co-founded and advanced the NASA Cosmic Background Explorer project, whose measurements helped support the Big Bang theory describing the expansion of the universe.
“Rai leaves an indelible mark on science and a gaping hole in our lives,” says Nergis Mavalvala PhD ’97, dean of the MIT School of Science and the Curtis and Kathleen Marble Professor of Astrophysics. As a doctoral student with Weiss in the 1990s, Mavalvala worked with him to build an early prototype of a gravitational-wave detector as part of her PhD thesis. “He will be so missed but has also gifted us a singular legacy. Every gravitational wave event we observe will remind us of him, and we will smile. I am indeed heartbroken, but also so grateful for having him in my life, and for the incredible gifts he has given us — of passion for science and discovery, but most of all to always put people first,” she says.
A member of the MIT physics faculty since 1964, Weiss was known as a committed mentor and teacher, as well as a dedicated researcher.
“Rai’s ingenuity and insight as an experimentalist and a physicist were legendary,” says Deepto Chakrabarty, the William A. M. Burden Professor in Astrophysics and head of the Department of Physics. “His no-nonsense style and gruff manner belied a very close, supportive and collaborative relationship with his students, postdocs, and other mentees. Rai was a thoroughly MIT product.”
“Rai held a singular position in science: He was the creator of two fields — measurements of the cosmic microwave background and of gravitational waves. His students have gone on to lead both fields and carried Rai’s rigor and decency to both. He not only created a huge part of important science, he also populated them with people of the highest caliber and integrity,” says Peter Fisher, the Thomas A. Frank Professor of Physics and former head of the physics department.
Enabling a new era in astrophysics
LIGO is a system of two identical detectors located 1,865 miles apart. By sending finely tuned lasers back and forth through the detectors, scientists can detect perturbations caused by gravitational waves, whose existence was proposed by Albert Einstein. These discoveries illuminate ancient collisions and other events in the early universe, and have confirmed Einstein’s theory of general relativity. Today, the LIGO Scientific Collaboration involves hundreds of scientists at MIT, Caltech, and other universities, and, together with the Virgo and KAGRA observatories in Italy and Japan, makes up the global LVK Collaboration — but five decades ago, the instrument concept was an MIT class exercise conceived by Weiss.
As he told MIT News in 2017, in generating the initial idea, Weiss wondered: “What’s the simplest thing I can think of to show these students that you could detect the influence of a gravitational wave?”
To realize the audacious design, Weiss teamed up in 1976 with physicist Kip Thorne, who, based in part on conversations with Weiss, soon seeded the creation of a gravitational wave experiment group at Caltech. The two formed a collaboration between MIT and Caltech, and in 1979, the late Scottish physicist Ronald Drever, then of the University of Glasgow, joined the effort at Caltech. The three scientists — who became the co-founders of LIGO — worked to refine the dimensions and scientific requirements for an instrument sensitive enough to detect a gravitational wave. Barry Barish later joined the team at Caltech, helping to secure funding and bring the detectors to completion.
After receiving support from the National Science Foundation, LIGO broke ground in the mid-1990s, constructing interferometric detectors in Hanford, Washington, and in Livingston, Louisiana.
Years later, when he shared the Nobel Prize with Thorne and Barish for his work on LIGO, Weiss noted that hundreds of colleagues had helped to push forward the search for gravitational waves.
“The discovery has been the work of a large number of people, many of whom played crucial roles,” Weiss said at an MIT press conference. “I view receiving this [award] as sort of a symbol of the various other people who have worked on this.”
He continued: “This prize and others that are given to scientists is an affirmation by our society of [the importance of] gaining information about the world around us from reasoned understanding of evidence.”
“While I have always been amazed and guided by Rai’s ingenuity, integrity, and humility, I was most impressed by his breadth of vision and ability to move between worlds,” says Matthew Evans, the MathWorks Professor of Physics. “He could seamlessly shift from the smallest technical detail of an instrument to the global vision for a future observatory. In the last few years, as the idea for a next-generation gravitational-wave observatory grew, Rai would often be at my door, sharing ideas for how to move the project forward on all levels. These discussions ranged from quantum mechanics to global politics, and Rai’s insights and efforts have set the stage for the future.”
A lifelong fascination with hard problems
Weiss was born in 1932 in Berlin. The young family fled Nazi Germany to Prague and then emigrated to New York City, where Weiss grew up with a love for classical music and electronics, earning money by fixing radios.
He enrolled at MIT, then dropped out of school in his junior year, only to return shortly after, taking a job as a technician in the former Building 20. There, Weiss met physicist Jerrold Zacharias, who encouraged him to complete his undergraduate degree in 1955 and his PhD in 1962.
Weiss spent some time at Princeton University as a postdoc in the legendary group led by Robert Dicke, where he developed experiments to test gravity. He returned to MIT as an assistant professor in 1964, starting a new research group in the Research Laboratory of Electronics dedicated to research in cosmology and gravitation.
With the money he received from the Nobel Prize, Weiss established the Barish-Weiss Fellowship to support student research in the MIT Department of Physics.
Weiss received numerous awards and honors in addition to the Nobel Prize, including the Medaille de l’ADION, the 2006 Gruber Prize in Cosmology, and the 2007 Einstein Prize of the American Physical Society. He was a fellow of the American Association for the Advancement of Science, the American Academy of Arts and Sciences, and the American Physical Society, as well as a member of the National Academy of Sciences. In 2016, Weiss received a Special Breakthrough Prize in Fundamental Physics, the Gruber Prize in Cosmology, the Shaw Prize in Astronomy, and the Kavli Prize in Astrophysics, all shared with Drever and Thorne. He also shared the Princess of Asturias Award for Technical and Scientific Research with Thorne, Barry Barish of Caltech, and the LIGO Scientific Collaboration.
Weiss is survived by his wife, Rebecca; his daughter, Sarah, and her husband, Tony; his son, Benjamin, and his wife, Carla; and a grandson, Sam, and his wife, Constance. Details about a memorial are forthcoming.
This article may be updated.
Simpler models can outperform deep learning at climate prediction
New research shows the natural variability in climate data can cause AI models to struggle at predicting local temperature and rainfall.
Environmental scientists are increasingly using enormous artificial intelligence models to make predictions about changes in weather and climate, but a new study by MIT researchers shows that bigger models are not always better.
The team demonstrates that, in certain climate scenarios, much simpler, physics-based models can generate more accurate predictions than state-of-the-art deep-learning models.
Their analysis also reveals that a benchmarking technique commonly used to evaluate machine-learning techniques for climate predictions can be distorted by natural variations in the data, like fluctuations in weather patterns. This could lead someone to believe a deep-learning model makes more accurate predictions when that is not the case.
The researchers developed a more robust way of evaluating these techniques, which shows that, while simple models are more accurate when estimating regional surface temperatures, deep-learning approaches can be the best choice for estimating local rainfall.
They used these results to enhance a simulation tool known as a climate emulator, which can rapidly simulate the effect of human activities on future climate.
The researchers see their work as a “cautionary tale” about the risk of deploying large AI models for climate science. While deep-learning models have shown incredible success in domains such as natural language, climate science is grounded in a proven set of physical laws and approximations, and the challenge becomes how to incorporate those into AI models.
“We are trying to develop models that are going to be useful and relevant for the kinds of things that decision-makers need going forward when making climate policy choices. While it might be attractive to use the latest, big-picture machine-learning model on a climate problem, what this study shows is that stepping back and really thinking about the problem fundamentals is important and useful,” says study senior author Noelle Selin, a professor in the MIT Institute for Data, Systems, and Society (IDSS) and the Department of Earth, Atmospheric and Planetary Sciences (EAPS), and director of the Center for Sustainability Science and Strategy.
Selin’s co-authors are lead author Björn Lütjens, a former EAPS postdoc who is now a research scientist at IBM Research; senior author Raffaele Ferrari, the Cecil and Ida Green Professor of Oceanography in EAPS and co-director of the Lorenz Center; and Duncan Watson-Parris, assistant professor at the University of California at San Diego. Selin and Ferrari are also co-principal investigators of the Bringing Computation to the Climate Challenge project, out of which this research emerged. The paper appears today in the Journal of Advances in Modeling Earth Systems.
Comparing emulators
Because the Earth’s climate is so complex, running a state-of-the-art climate model to predict how pollution levels will impact environmental factors like temperature can take weeks on the world’s most powerful supercomputers.
Scientists often create climate emulators, simpler approximations of a state-of-the-art climate model, which are faster and more accessible. A policymaker could use a climate emulator to see how alternative assumptions about greenhouse gas emissions would affect future temperatures, helping them develop regulations.
But an emulator isn’t very useful if it makes inaccurate predictions about the local impacts of climate change. While deep learning has become increasingly popular for emulation, few studies have explored whether these models perform better than tried-and-true approaches.
The MIT researchers performed such a study. They compared a traditional technique called linear pattern scaling (LPS) with a deep-learning model using a common benchmark dataset for evaluating climate emulators.
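For readers unfamiliar with the traditional technique, the sketch below illustrates the standard formulation of linear pattern scaling: each grid cell’s response is modeled as a linear function of global-mean surface temperature, fit by least squares. This is a minimal illustration of the general method; the function and variable names are assumptions for this example and are not taken from the study’s code or the benchmark dataset.

```python
# Minimal sketch of linear pattern scaling (LPS): fit a per-grid-cell
# intercept and slope against global-mean temperature, then reuse them
# to emulate new scenarios. Names here are illustrative assumptions.
import numpy as np

def fit_lps(global_mean_temp, local_field):
    """Fit an intercept and slope for every grid cell.

    global_mean_temp: shape (n_years,), global-mean temperature of the training runs
    local_field: shape (n_years, n_lat, n_lon), local variable (e.g., surface temperature)
    """
    n_years, n_lat, n_lon = local_field.shape
    X = np.column_stack([np.ones(n_years), global_mean_temp])  # (n_years, 2) design matrix
    Y = local_field.reshape(n_years, -1)                        # grid cells become columns
    coeffs, *_ = np.linalg.lstsq(X, Y, rcond=None)              # (2, n_cells) least-squares fit
    return coeffs.reshape(2, n_lat, n_lon)

def predict_lps(coeffs, global_mean_temp):
    """Emulate the local field for new global-mean temperatures."""
    intercept, slope = coeffs[0], coeffs[1]
    gmt = np.asarray(global_mean_temp)[:, None, None]           # broadcast over the grid
    return intercept + slope * gmt
```

Once the per-cell slopes are fit on a handful of training scenarios, emulating a new emissions pathway reduces to a multiply-and-add per grid cell, which is part of why pattern scaling is so much cheaper than running a full climate model.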
Their results showed that LPS outperformed deep-learning models on predicting nearly all parameters they tested, including temperature and precipitation.
“Large AI methods are very appealing to scientists, but they rarely solve a completely new problem, so implementing an existing solution first is necessary to find out whether the complex machine-learning approach actually improves upon it,” says Lütjens.
Some initial results seemed to fly in the face of the researchers’ domain knowledge. The powerful deep-learning model should have been more accurate when making predictions about precipitation, since those data don’t follow a linear pattern.
They found that the high amount of natural variability in climate model runs can cause the deep learning model to perform poorly on unpredictable long-term oscillations, like El Niño/La Niña. This skews the benchmarking scores in favor of LPS, which averages out those oscillations.
Constructing a new evaluation
From there, the researchers constructed a new evaluation that uses more data to account for natural climate variability. With this new evaluation, the deep-learning model performed slightly better than LPS for local precipitation, but LPS was still more accurate for temperature predictions.
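One generic way to keep internal variability from skewing emulator scores is to compare predictions against the average of several climate-model ensemble members rather than a single run, so that El Niño-like fluctuations are damped in the target. The sketch below illustrates only that general idea; it is an assumption for illustration, not the evaluation protocol used in the paper.

```python
# Illustrative only: score an emulator against an ensemble-mean target so
# that internal variability (e.g., El Niño/La Niña swings) is averaged out.
import numpy as np

def ensemble_mean_rmse(predictions, ensemble_runs):
    """RMSE of emulator output against an ensemble-mean target.

    predictions: shape (n_years, n_lat, n_lon), emulator output
    ensemble_runs: shape (n_members, n_years, n_lat, n_lon), climate-model runs
        that differ only in initial conditions, i.e., in internal variability
    """
    target = ensemble_runs.mean(axis=0)      # averaging damps the unpredictable oscillations
    err = predictions - target
    return float(np.sqrt(np.mean(err ** 2)))
```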
“It is important to use the modeling tool that is right for the problem, but in order to do that you also have to set up the problem the right way in the first place,” Selin says.
Based on these results, the researchers incorporated LPS into a climate emulation platform to predict local temperature changes in different emission scenarios.
“We are not advocating that LPS should always be the goal. It still has limitations. For instance, LPS doesn’t predict variability or extreme weather events,” Ferrari adds.
Rather, they hope their results emphasize the need to develop better benchmarking techniques, which could provide a fuller picture of which climate emulation technique is best suited for a particular situation.
“With an improved climate emulation benchmark, we could use more complex machine-learning methods to explore problems that are currently very hard to address, like the impacts of aerosols or estimations of extreme precipitation,” Lütjens says.
Ultimately, more accurate benchmarking techniques will help ensure policymakers are making decisions based on the best available information.
The researchers hope others build on their analysis, perhaps by studying additional improvements to climate emulation methods and benchmarks. Such research could explore impact-oriented metrics like drought indicators and wildfire risks, or new variables like regional wind speeds.
This research is funded, in part, by Schmidt Sciences, LLC, and is part of the MIT Climate Grand Challenges team for “Bringing Computation to the Climate Challenge.”
Engineering fantasy into reality
PhD student Erik Ballesteros is building “Doc Ock” arms for future astronauts.
Growing up in the suburban town of Spring, Texas, just outside of Houston, Erik Ballesteros couldn’t help but be drawn in by the possibilities for humans in space.
It was the early 2000s, and NASA’s space shuttle program was the main transport for astronauts to the International Space Station (ISS). Ballesteros’ hometown was less than an hour from Johnson Space Center (JSC), where NASA’s mission control center and astronaut training facility are based. And as often as they could, he and his family would drive to JSC to check out the center’s public exhibits and presentations on human space exploration.
For Ballesteros, the highlight of these visits was always the tram tour, which brings visitors to JSC’s Astronaut Training Facility. There, the public can watch astronauts test out spaceflight prototypes and practice various operations in preparation for living and working on the International Space Station.
“It was a really inspiring place to be, and sometimes we would meet astronauts when they were doing signings,” he recalls. “I’d always see the gates where the astronauts would go back into the training facility, and I would think: One day I’ll be on the other side of that gate.”
Today, Ballesteros is a PhD student in mechanical engineering at MIT, and has already made good on his childhood goal. Before coming to MIT, he interned on multiple projects at JSC, working in the training facility to help test new spacesuit materials, portable life support systems, and a propulsion system for a prototype Mars rocket. He also helped train astronauts to operate the ISS’ emergency response systems.
Those early experiences steered him to MIT, where he hopes to make a more direct impact on human spaceflight. He and his advisor, Harry Asada, are building a system that will quite literally provide helping hands to future astronauts. The system, dubbed SuperLimbs, consists of a pair of wearable robotic arms that extend out from a backpack, similar to the fictional Inspector Gadget, or Doctor Octopus (“Doc Ock,” to comic book fans). Ballesteros and Asada are designing the robotic arms to be strong enough to lift an astronaut back up if they fall. The arms could also crab-walk around a spacecraft’s exterior as an astronaut inspects or makes repairs.
Ballesteros is collaborating with engineers at the NASA Jet Propulsion Laboratory to refine the design, which he plans to introduce to astronauts at JSC in the next year or two, for practical testing and user feedback. He says his time at MIT has helped him make connections across academia and in industry that have fueled his life and work.
“Success isn’t built by the actions of one, but rather it’s built on the shoulders of many,” Ballesteros says. “Connections — ones that you not just have, but maintain — are so vital to being able to open new doors and keep great ones open.”
Getting a jumpstart
Ballesteros didn’t always seek out those connections. As a kid, he counted down the minutes until the end of school, when he could go home to play video games and watch movies, “Star Wars” being a favorite. He also loved to create and had a talent for cosplay, tailoring intricate, life-like costumes inspired by cartoon and movie characters.
In high school, he took an introductory class in engineering that challenged students to build robots from kits that they would then pit against each other, BattleBots-style. Ballesteros built a robotic ball that moved by shifting an internal weight, similar to the fictional, sphere-shaped BB-8 from “Star Wars.”
“It was a good introduction, and I remember thinking, this engineering thing could be fun,” he says.
After graduating high school, Ballesteros attended the University of Texas at Austin, where he pursued a bachelor’s degree in aerospace engineering. What would typically be a four-year degree stretched into an eight-year period during which Ballesteros combined college with multiple work experiences, taking on internships at NASA and elsewhere.
In 2013, he interned at Lockheed Martin, where he contributed to various aspects of jet engine development. That experience unlocked a number of other aerospace opportunities. After a stint at NASA’s Kennedy Space Center, he went on to Johnson Space Center, where, as part of a co-op program called Pathways, he returned every spring or summer over the next five years, to intern in various departments across the center.
While the time at JSC gave him a huge amount of practical engineering experience, Ballesteros still wasn’t sure if it was the right fit. Along with his childhood fascination with astronauts and space, he had always loved cinema and the special effects that bring movies to life. In 2018, he took a year off from the NASA Pathways program to intern at Disney, where he spent the spring semester working as a safety engineer, performing safety checks on Disney rides and attractions.
During this time, he got to know a few people in Imagineering — the research and development group that creates, designs, and builds rides, theme parks, and attractions. That summer, the group took him on as an intern, and he worked on the animatronics for upcoming rides, which involved translating certain scenes in a Disney movie into practical, safe, and functional scenes in an attraction.
“In animation, a lot of things they do are fantastical, and it was our job to find a way to make them real,” says Ballesteros, who loved every moment of the experience and hoped to be hired as an Imagineer after the internship came to an end. But he had one year left in his undergraduate degree and had to move on.
After graduating from UT Austin in December 2019, Ballesteros accepted a position at NASA’s Jet Propulsion Laboratory in Pasadena, California. He started at JPL in February of 2020, working on some last adjustments to the Mars Perseverance rover. After a few months during which JPL shifted to remote work during the Covid pandemic, Ballesteros was assigned to a project to develop a self-diagnosing spacecraft monitoring system. While working with that team, he met an engineer who was a former lecturer at MIT. As a practical suggestion, she nudged Ballesteros to consider pursuing a master’s degree, to add more value to his CV.
“She opened up the idea of going to grad school, which I hadn’t ever considered,” he says.
Full circle
In 2021, Ballesteros arrived at MIT to begin a master’s program in mechanical engineering. In interviewing with potential advisors, he immediately hit it off with Harry Asada, the Ford Professor of Engineering and director of the d'Arbeloff Laboratory for Information Systems and Technology. Years ago, Asada had pitched JPL an idea for wearable robotic arms to aid astronauts, which they quickly turned down. But Asada held onto the idea, and proposed that Ballesteros take it on as a feasibility study for his master’s thesis.
The project would require bringing a seemingly sci-fi idea into practical, functional form, for use by astronauts in future space missions. For Ballesteros, it was the perfect challenge. SuperLimbs became the focus of his master’s degree, which he earned in 2023. His initial plan was to return to industry, degree in hand. But he chose to stay at MIT to pursue a PhD, so that he could continue his work with SuperLimbs in an environment where he felt free to explore and try new things.
“MIT is like nerd Hogwarts,” he says. “One of the dreams I had as a kid was about the first day of school, and being able to build and be creative, and it was the happiest day of my life. And at MIT, I felt like that dream became reality.”
Ballesteros and Asada are now further developing SuperLimbs. The team recently re-pitched the idea to engineers at JPL, who reconsidered, and have since struck up a partnership to help test and refine the robot. In the next year or two, Ballesteros hopes to bring a fully functional, wearable design to Johnson Space Center, where astronauts can test it out in space-simulated settings.
In addition to his formal graduate work, Ballesteros has found a way to have a bit of Imagineer-like fun. He is a member of the MIT Robotics Team, which designs, builds, and runs robots in various competitions and challenges. Within this club, Ballesteros has formed a sub-club of sorts, called the Droid Builders, which aims to build animatronic droids from popular movies and franchises.
“I thought I could use what I learned from Imagineering and teach undergrads how to build robots from the ground up,” he says. “Now we’re building a full-scale WALL-E that could be fully autonomous. It’s cool to see everything come full circle.”
At convocation, President Kornbluth greets the Class of 2029
“We believe in all of you,” MIT’s president said at the welcoming ceremony for new undergraduates.
In welcoming the undergraduate Class of 2029 to campus in Cambridge, Massachusetts, MIT President Sally Kornbluth began the Institute’s convocation on Sunday with a greeting that underscored MIT’s confidence in its new students.
“We believe in all of you, in the learning, making, discovering, and inventing that you all have come here to do,” Kornbluth said. “And in your boundless potential as future leaders who will help solve real problems that people face in their daily lives.”
She added: “If you’re out there feeling really lucky to be joining this incredible community, I want you to know that we feel even more lucky. We’re delighted and grateful that you chose to bring your talent, your energy, your curiosity, creativity, and drive here to MIT. And we’re thrilled to be starting this new year with all of you.”
The event, officially called the President’s Convocation for First-years and Families, was held at the Johnson Ice Rink on campus.
While recognizing that academic life can be “intense” at MIT, Kornbluth highlighted the many opportunities available to students outside the classroom, too. A biologist and cancer researcher herself, Kornbluth observed that students can participate in the Undergraduate Research Opportunities Program (UROP), which Kornbluth called “an unmissable opportunity to work side by side with MIT faculty at the front lines of research.” She also noted that MIT offers abundant opportunities for entrepreneurship, as well as 450 official student organizations.
“It’s okay to be a beginner,” Kornbluth said. “Join a group you wouldn’t have had time for in high school. Explore a new skill. Volunteer in the neighborhoods around campus.”
And if the transition to college feels daunting at any point, she added, MIT provides considerable resources to students for well-being and academic help.
“Sometimes the only way to succeed in facing a big challenge or solving a tough problem is to admit there’s no way you can do it all yourself,” Kornbluth observed. “You’re surrounded by a community of caring people. So please don’t be shy about asking for guidance and help.”
The large audience heard additional remarks from two faculty members who themselves have MIT degrees, reflecting on student life at the Institute.
As a student, “The most important things I had were a willingness to take risks and put hard work into the things I cared about,” said Ankur Moitra SM ’09, PhD ’11, the Norbert Wiener Professor of Mathematics.
He emphasized to students the importance of staying grounded and being true to themselves, especially in the face of, say, social media pressures.
“These are the things that make it harder to find your own way and what you really care about,” Moitra said. “Because the rest of the world’s opinion is right there staring you in the face, and it’s impossible to avoid it. And how will you discover what’s important to you, what’s worth pouring yourself into?”
Moitra also advised students to be wary of the tech tools “that want to do the thinking for you, but take away your agency” in the process. He added: “I worry about this because it’s going to become too easy to rely on these tools, and there are going to be many times you’re going to be tempted, especially late at night, with looming p-set deadlines. As educators, we don’t always have fixes for these kinds of things, and all we can do is open the door and hope you walk through it.”
Beyond that, he suggested, “Periodically remind yourself about what’s been important to you all along, what brought you here. For your next four years, you’re going to be surrounded by creative, clever, passionate people every day, who are going to challenge you. Rise to that challenge.”
Christopher Palmer PhD ’14, an associate professor of finance in the MIT Sloan School of Management, began his remarks by revealing that his MIT undergraduate application was not accepted — although he later received his doctorate at the Institute and is now a tenured professor at MIT.
“I played the long game,” he quipped, drawing laughs.
Indeed, Palmer’s remarks focused on cultivating the resilience, focus, and concentration needed to flourish in the long run.
While being at MIT is “thrilling,” Palmer advised students to “build enough slack into your system to handle both the stress and take advantage of the opportunities” on campus. Much like a bank conducts a “stress test” to see if it can withstand changes, Palmer suggested, we can try the same with our workloads: “If you build a schedule that passes the stress test, that means time for curiosity and meaningful creativity.”
Students should also avoid the “false equivalency that your worth is determined by your achievements,” he added. “You have inherent, immutable, intrinsic, eternal value. Be discerning with your commitments. Future you will be so grateful that you have built in the capacity to sleep, to catch up, to say ‘Yes’ to cool invitations, and to attend to your mental health.”
Additionally, Palmer recommended that students pursue “deep work,” involving “the hard thinking where progress actually happens” — a concept, he noted, that has been elevated by computer scientist Cal Newport SM ’06, PhD ’09. As research shows, Palmer explained, “We can’t actually multitask. What we’re really doing is switching tasks at high frequency and incurring a small cost every single time we switch our focus.”
It might help students, he added, to try some structural changes: Put the phone away, turn off alerts, pause notifications, and cultivate sleep. A healthy blend of academic work, activities, and community fun can emerge.
Concluding her own remarks, Kornbluth also emphasized that attending MIT means being part of a community that is respectful of varying viewpoints and all people, and sustains an ethos of fair-minded understanding.
“I know you have extremely high expectations for yourselves,” Kornbluth said, adding: “We have high expectations for you, too, in all kinds of ways. But I want to emphasize one that’s more important than all the others — and that’s an expectation for how we treat each other. At MIT, the work we do is so important, and so hard, that it’s essential we treat each other with empathy, understanding and compassion. That we take care to express our own ideas with clarity and respect, and make room for sharply different points of view. And above all, that we keep engaging in conversation, even when it’s difficult, frustrating or painful.”
Marcus Stergio named ombudsperson
Offering confidential, impartial support, the Ombuds Office helps faculty, students, and staff resolve issues affecting their work and studies at MIT.
Marcus Stergio will join the MIT Ombuds Office on Aug. 25, bringing over a decade of experience as a mediator and conflict-management specialist. Previously an ombuds at the U.S. Department of Labor, Stergio will be part of MIT’s ombuds team, working alongside Judi Segall.
The MIT Ombuds Office provides a confidential, independent resource for all members of the MIT community to constructively manage concerns and conflicts related to their experiences at MIT.
Established in 1980, the office played a key role in the early development of the profession, helping to develop and establish standards of practice for organizational ombuds offices. The ombudspersons help MIT community members analyze concerns, clarify policies and procedures, and identify options to constructively manage conflicts.
“There’s this aura and legend around MIT’s Ombuds Office that is really exciting,” Stergio says.
Among other types of conflict resolution, the work of an ombuds is particularly appealing for its versatility, according to Stergio. “We can be creative and flexible in figuring out which types of processes work for the people seeking support, whether that’s having one-on-one, informal, confidential conversations or exploring more active and involved ways of getting their issues addressed,” he says.
Prior to coming to MIT, Stergio worked for six years at the Department of Labor, where he established a new externally facing ombuds office for the Office of Federal Contract Compliance Programs (OFCCP). There, he operated in accordance with the International Ombuds Association’s standards of practice, offering ombuds services to both external stakeholders and OFCCP employees.
He has also served as ombudsperson or in other conflict-management roles for a variety of organizations across multiple sectors. These included the Centers for Disease Control and Prevention, the United Nations Population Fund, General Motors, BMW of North America, and the U.S. Department of the Treasury, among others. From 2013 to 2019, Stergio was a mediator and the manager of commercial and corporate programs for the Boston-based dispute resolution firm MWI.
Stergio has taught conflict resolution courses and delivered mediation and negotiation workshops at multiple universities, including MIT, where he says the interest in his subject matter was palpable. “There was something about the MIT community, whether it was students or staff or faculty. People seemed really energized by the conflict management skills that I was presenting to them,” he recalls. “There was this eagerness to perfect things that was inspiring and contagious.”
“I’m honored to be joining such a prestigious institution, especially one with such a rich history in the ombuds field,” Stergio adds. “I look forward to building on that legacy and working with the MIT community to navigate challenges together.”
Stergio earned a bachelor’s degree from Northeastern University in 2008 and a master’s in conflict resolution from the University of Massachusetts at Boston in 2012. He has served on the executive committee of the Coalition of Federal Ombuds since 2022, as co-chair of the American Bar Association’s ombuds day subcommittee, and as an editor for the newsletter of the ABA’s Dispute Resolution Section. He is also a member of the International Ombuds Association.