General news from MIT (Massachusetts Institute of Technology)

Here you will find the recent daily general news from MIT (Massachusetts Institute of Technology).

MIT News
MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.
Accelerating science with AI and simulations

Associate Professor Rafael Gómez-Bombarelli has spent his career applying AI to improve scientific discovery. Now he believes we are at an inflection point.


For more than a decade, MIT Associate Professor Rafael Gómez-Bombarelli has used artificial intelligence to create new materials. As the technology has expanded, so have his ambitions.

Now, the newly tenured professor in materials science and engineering believes AI is poised to transform science in ways never before possible. His work at MIT and beyond is devoted to accelerating that future.

“We’re at a second inflection point,” Gómez-Bombarelli says. “The first one was around 2015 with the first wave of representation learning, generative AI, and high-throughput data in some areas of science. Those are some of the techniques I first brought into my lab at MIT. Now I think we’re at a second inflection point, mixing language and merging multiple modalities into general scientific intelligence. We’re going to have all the model classes and scaling laws needed to reason about language, reason over material structures, and reason over synthesis recipes.”

Gómez-Bombarelli’s research combines physics-based simulations with approaches like machine learning and generative AI to discover new materials with promising real-world applications. His work has led to new materials for batteries, catalysts, plastics, and organic light-emitting diodes (OLEDs). He has also co-founded multiple companies and served on scientific advisory boards for startups applying AI to drug discovery, robotics, and more. His latest company, Lila Sciences, is working to build a scientific superintelligence platform for the life sciences, chemical, and materials science industries.

All of that work is designed to ensure the future of scientific research is more seamless and productive than research today.

“AI for science is one of the most exciting and aspirational uses of AI,” Gómez-Bombarelli says. “Other applications for AI have more downsides and ambiguity. AI for science is about bringing a better future forward in time.”

From experiments to simulations

Gómez-Bombarelli grew up in Spain and gravitated toward the physical sciences from an early age. In 2001, he won a Chemistry Olympics competition, setting him on an academic track in chemistry, which he studied as an undergraduate at his hometown college, the University of Salamanca. Gómez-Bombarelli stuck around for his PhD, where he investigated the function of DNA-damaging chemicals.

“My PhD started out experimental, and then I got bitten by the bug of simulation and computer science about halfway through,” he says. “I started simulating the same chemical reactions I was measuring in the lab. I like the way programming organizes your brain; it felt like a natural way to organize one’s thinking. Programming is also a lot less limited by what you can do with your hands or with scientific instruments.”

Next, Gómez-Bombarelli went to Scotland for a postdoctoral position, where he studied quantum effects in biology. Through that work, he connected with Alán Aspuru-Guzik, a chemistry professor at Harvard University, whom he joined for his next postdoc in 2014.

“I was one of the first people to use generative AI for chemistry in 2016, and I was on the first team to use neural networks to understand molecules in 2015,” Gómez-Bombarelli says. “It was the early, early days of deep learning for science.”

Gómez-Bombarelli also began working to eliminate manual parts of molecular simulations to run more high-throughput experiments. He and his collaborators ended up running hundreds of thousands of calculations across materials, discovering hundreds of promising materials for testing.

After two years in the lab, Gómez-Bombarelli and Aspuru-Guzik started a general-purpose materials computation company, which eventually pivoted to focus on producing organic light-emitting diodes. Gómez-Bombarelli joined the company full-time and calls it the hardest thing he’s ever done in his career.

“It was amazing to make something tangible,” he says. “Also, after seeing Aspuru-Guzik run a lab, I didn’t want to become a professor. My dad was a professor in linguistics, and I thought it was a mellow job. Then I saw Aspuru-Guzik with a 40-person group, and he was on the road 120 days a year. It was insane. I didn’t think I had that type of energy and creativity in me.”

In 2018, Aspuru-Guzik suggested Gómez-Bombarelli apply for a new position in MIT’s Department of Materials Science and Engineering. But, with his trepidation about a faculty job, Gómez-Bombarelli let the deadline pass. Aspuru-Guzik confronted him in his office, slammed his hands on the table, and told him, “You need to apply for this.” It was enough to get Gómez-Bombarelli to put together a formal application.

Fortunately at his startup, Gómez-Bombarelli had spent a lot of time thinking about how to create value from computational materials discovery. During the interview process, he says, he was attracted to the energy and collaborative spirit at MIT. He also began to appreciate the research possibilities.

“Everything I had been doing as a postdoc and at the company was going to be a subset of what I could do at MIT,” he says. “I was making products, and I still get to do that. Suddenly, my universe of work was a subset of this new universe of things I could explore and do.”

It’s been nine years since Gómez-Bombarelli joined MIT. Today his lab focuses on how the composition, structure, and reactivity of atoms impact material performance. He has also used high-throughput simulations to create new materials and helped develop tools for merging deep learning with physics-based modeling.

“Physics-based simulations make data, and AI algorithms get better the more data you give them,” Gómez-Bombarelli says. “There are all sorts of virtuous cycles between AI and simulations.”

The research group he has built is solely computational — they don’t run physical experiments.

“It’s a blessing because we can have a huge amount of breadth and do lots of things at once,” he says. “We love working with experimentalists and try to be good partners with them. We also love to create computational tools that help experimentalists triage the ideas coming from AI.”

Gómez-Bombarelli is also still focused on the real-world applications of the materials he invents. His lab works closely with companies and organizations like MIT’s Industrial Liaison Program to understand the material needs of the private sector and the practical hurdles of commercial development.

Accelerating science

As excitement around artificial intelligence has exploded, Gómez-Bombarelli has seen the field mature. Companies like Meta, Microsoft, and Google’s DeepMind now regularly conduct physics-based simulations reminiscent of what he was working on back in 2016. In November, the U.S. Department of Energy launched the Genesis Mission to accelerate scientific discovery, national security, and energy dominance using AI.

“AI for simulations has gone from something that maybe could work to a consensus scientific view,” Gómez-Bombarelli says. “We’re at an inflection point. Humans think in natural language, we write papers in natural language, and it turns out these large language models that have mastered natural language have opened up the ability to accelerate science. We’ve seen that scaling works for simulations. We’ve seen that scaling works for language. Now we’re going to see how scaling works for science.”

When he first came to MIT, Gómez-Bombarelli says he was blown away by how non-competitive things were between researchers. He tries to bring that same positive-sum thinking to his research group, which is made up of about 25 graduate students and postdocs.

“We’ve naturally grown into a really diverse group, with a diverse set of mentalities,” Gómez-Bombarelli says. “Everyone has their own career aspirations and strengths and weaknesses. Figuring out how to help people be the best versions of themselves is fun. Now I’ve become the one insisting that people apply to faculty positions after the deadline. I guess I’ve passed that baton.”


Using synthetic biology and AI to address global antimicrobial resistance threat

Driven by overuse and misuse of antibiotics, drug-resistant infections are on the rise, while development of new antibacterial tools has slowed.


James J. Collins, the Termeer Professor of Medical Engineering and Science at MIT and faculty co-lead of the Abdul Latif Jameel Clinic for Machine Learning in Health, is embarking on a multidisciplinary research project that applies synthetic biology and generative artificial intelligence to the growing global threat of antimicrobial resistance (AMR).

The research project is sponsored by Jameel Research, part of the Abdul Latif Jameel International network. The initial three-year, $3 million research project in MIT’s Department of Biological Engineering and Institute for Medical Engineering and Science focuses on developing and validating programmable antibacterials against key pathogens.

AMR — driven by the overuse and misuse of antibiotics — has accelerated the rise of drug-resistant infections, while the development of new antibacterial tools has slowed. The impact is felt worldwide, especially in low- and middle-income countries, where limited diagnostic infrastructure causes delays or ineffective treatment.

The project centers on developing a new generation of targeted antibacterials using AI to design small proteins to disable specific bacterial functions. These designer molecules would be produced and delivered by engineered microbes, providing a more precise and adaptable approach than traditional antibiotics.

“This project reflects my belief that tackling AMR requires both bold scientific ideas and a pathway to real-world impact,” Collins says. “Jameel Research is keen to address this crisis by supporting innovative, translatable research at MIT.”

Mohammed Abdul Latif Jameel ’78, chair of Abdul Latif Jameel, says, “Antimicrobial resistance is one of the most urgent challenges we face today, and addressing it will require ambitious science and sustained collaboration. We are pleased to support this new research, building on our long-standing relationship with MIT and our commitment to advancing research across the world, to strengthen global health and contribute to a more resilient future.”


AI algorithm enables tracking of vital white matter pathways

Opening a new window on the brainstem, a new tool reliably and finely resolves distinct nerve bundles in live diffusion MRI scans, revealing signs of injury or disease.


The signals that drive many of the brain and body’s most essential functions — consciousness, sleep, breathing, heart rate, and motion — course through bundles of “white matter” fibers in the brainstem, but imaging systems so far have been unable to finely resolve these crucial neural cables. That has left researchers and doctors with little capability to assess how they are affected by trauma or neurodegeneration. 

In a new study, a team of MIT, Harvard University, and Massachusetts General Hospital researchers unveil AI-powered software capable of automatically segmenting eight distinct bundles in any diffusion MRI sequence.

In the open-access study, published Feb. 6 in the Proceedings of the National Academy of Sciences, the research team led by MIT graduate student Mark Olchanyi reports that their BrainStem Bundle Tool (BSBT), which they’ve made publicly available, revealed distinct patterns of structural changes in patients with Parkinson’s disease, multiple sclerosis, and traumatic brain injury, and shed light on Alzheimer’s disease as well. Moreover, the study shows, BSBT retrospectively enabled tracking of bundle healing in a coma patient that reflected the patient’s seven-month road to recovery.

“The brainstem is a region of the brain that is essentially not explored because it is tough to image,” says Olchanyi, a doctoral candidate in MIT’s Medical Engineering and Medical Physics Program. “People don't really understand its makeup from an imaging perspective. We need to understand what the organization of the white matter is in humans and how this organization breaks down in certain disorders.”

Adds Professor Emery N. Brown, Olchanyi’s thesis supervisor and co-senior author of the study, “The brainstem is one of the body’s most important control centers. Mark’s algorithms are a significant contribution to imaging research and to our ability to understand the regulation of fundamental physiology. By enhancing our capacity to image the brainstem, he offers us new access to vital physiological functions such as control of the respiratory and cardiovascular systems, temperature regulation, how we stay awake during the day and how we sleep at night.”

Brown is the Edward Hood Taplin Professor of Computational Neuroscience and Medical Engineering in The Picower Institute for Learning and Memory, the Institute for Medical Engineering and Science, and the Department of Brain and Cognitive Sciences at MIT. He is also an anesthesiologist at MGH and a professor at Harvard Medical School.

Building the algorithm

Diffusion MRI helps trace the long branches, or “axons,” that neurons extend to communicate with each other. Axons are typically clad in a sheath of fat called myelin, and water diffuses along the axons within the myelin, which is also called the brain’s “white matter.” Diffusion MRI can highlight this very directed displacement of water. But segmenting the distinct bundles of axons in the brainstem has proved challenging, because they are small and masked by flows of brain fluids and the motions produced by breathing and heart beats.

As part of his thesis work to better understand the neural mechanisms that underpin consciousness, Olchanyi wanted to develop an AI algorithm to overcome these obstacles. BSBT works by tracing fiber bundles that plunge into the brainstem from neighboring areas higher in the brain, such as the thalamus and the cerebellum, to produce a “probabilistic fiber map.” An artificial intelligence module called a “convolutional neural network” then combines the map with several channels of imaging information from within the brainstem to distinguish eight individual bundles.
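For readers who want a concrete picture of that kind of pipeline, here is a minimal sketch of a multi-channel 3D convolutional segmenter along the lines described above. It is not the released BSBT code; the channel mix, volume size, and network depth are illustrative assumptions.

```python
# Minimal sketch (PyTorch) of a multi-channel 3D CNN segmenter of the kind
# described above -- NOT the released BSBT code. The channel mix (diffusion
# channels plus a probabilistic fiber map) and the tiny network are assumptions.
import torch
import torch.nn as nn

N_DIFFUSION_CHANNELS = 4   # assumed: e.g., anisotropy, diffusivity, direction maps
N_BUNDLES = 8              # eight brainstem bundles, plus a background class

class TinyBundleSegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        in_ch = N_DIFFUSION_CHANNELS + 1  # +1 channel for the probabilistic fiber map
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, N_BUNDLES + 1, kernel_size=1),  # per-voxel class logits
        )

    def forward(self, x):
        return self.net(x)

# One synthetic brainstem-sized volume: (batch, channels, depth, height, width)
volume = torch.randn(1, N_DIFFUSION_CHANNELS + 1, 32, 32, 32)
labels = TinyBundleSegmenter()(volume).argmax(dim=1)  # 0 = background, 1..8 = bundles
print(labels.shape)  # torch.Size([1, 32, 32, 32])
```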

To train the neural network to segment the bundles, Olchanyi “showed” it 30 live diffusion MRI scans from volunteers in the Human Connectome Project (HCP). The scans were manually annotated to teach the neural network how to identify the bundles. Then he validated BSBT by testing its output against “ground truth” dissections of post-mortem human brains where the bundles were well delineated via microscopic inspection or very slow but ultra-high-resolution imaging. After training, BSBT became proficient in automatically identifying the eight distinct fiber bundles in new scans.

In an experiment to test its consistency and reliability, Olchanyi tasked BSBT with finding the bundles in 40 volunteers who underwent separate scans two months apart. In each case, the tool was able to find the same bundles in the same patients in each of their two scans. Olchanyi also tested BSBT with multiple datasets (not just the HCP), and even inspected how each component of the neural network contributed to BSBT’s analysis by hobbling them one by one.

“We put the neural network through the wringer,” Olchanyi says. “We wanted to make sure that it’s actually doing these plausible segmentations and it is leveraging each of its individual components in a way that improves the accuracy.”

Potential novel biomarkers

Once the algorithm was properly trained and validated, the research team moved on to testing whether the ability to segment distinct fiber bundles in diffusion MRI scans could enable tracking of how each bundle’s volume and structure varied with disease or injury, creating a novel kind of biomarker. Although the brainstem has been difficult to examine in detail, many studies show that neurodegenerative diseases affect the brainstem, often early on in their progression.

Olchanyi, Brown and their co-authors applied BSBT to scores of datasets of diffusion MRI scans from patients with Alzheimer’s, Parkinson’s, MS, and traumatic brain injury (TBI). Patients were compared to controls and sometimes to themselves over time. In the scans, the tool measured bundle volume and “fractional anisotropy” (FA), which tracks how much water is flowing along the myelinated axons versus how much is diffusing in other directions, a proxy for white matter structural integrity.
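As a concrete illustration of the FA metric itself (not of the study’s processing pipeline), the standard definition computes FA from the three eigenvalues of a voxel’s diffusion tensor:

```python
# Standard fractional-anisotropy (FA) definition from the three eigenvalues of
# a voxel's diffusion tensor; an illustration of the metric, not the study's code.
import numpy as np

def fractional_anisotropy(eigenvalues):
    """FA in [0, 1]: 0 = isotropic diffusion, ~1 = diffusion along a single axis."""
    lam = np.asarray(eigenvalues, dtype=float)
    md = lam.mean()                                # mean diffusivity
    num = np.sqrt(1.5 * np.sum((lam - md) ** 2))
    den = np.sqrt(np.sum(lam ** 2))
    return num / den if den > 0 else 0.0

# Strongly directional diffusion (eigenvalues in mm^2/s) gives high FA
print(round(fractional_anisotropy([1.7e-3, 0.3e-3, 0.3e-3]), 2))  # ~0.80
# Diffusion closer to isotropic gives low FA
print(round(fractional_anisotropy([1.0e-3, 0.8e-3, 0.8e-3]), 2))  # ~0.13
```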

In each condition, the tool found consistent patterns of changes in the bundles. While only one bundle showed significant decline in Alzheimer’s, in Parkinson’s the tool revealed a reduction in FA in three of the eight bundles. It also revealed volume loss in another bundle in patients between a baseline scan and a two-year follow-up. Patients with MS showed their greatest FA reductions in four bundles and volume loss in three. Meanwhile, TBI patients didn’t show significant volume loss in any bundles, but FA reductions were apparent in the majority of bundles.

Testing in the study showed that BSBT proved more accurate than other classifier methods in discriminating between patients with health conditions versus controls.

BSBT, therefore, can be “a key adjunct that aids current diagnostic imaging methods by providing a fine-grained assessment of brainstem white matter structure and, in some cases, longitudinal information,” the authors wrote.

Finally, in the case of a 29-year-old man who suffered a severe TBI, Olchanyi applied BSBT to scans taken during the man’s seven-month coma. The tool showed that the man’s brainstem bundles had been displaced, but not cut, and showed that over his coma, the lesions on the nerve bundles decreased by a factor of three in volume. As they healed, the bundles moved back into place as well.

The authors wrote that BSBT “has substantial prognostic potential by identifying preserved brainstem bundles that can facilitate coma recovery.”

The study’s other senior authors are Juan Eugenio Iglesias and Brian Edlow. Other co-authors are David Schreier, Jian Li, Chiara Maffei, Annabel Sorby-Adams, Hannah Kinney, Brian Healy, Holly Freeman, Jared Shless, Christophe Destrieux, and Hendry Tregidgo.

Funding for the study came from the National Institutes of Health, U.S. Department of Defense, James S. McDonnell Foundation, Rappaport Foundation, American SIDS Institute, American Brain Foundation, American Academy of Neurology, Center for Integration of Medicine and Innovative Technology, Blueprint for Neuroscience Research, and Massachusetts Life Sciences Center.


Magnetic mixer improves 3D bioprinting

MagMix, an onboard mixing device, enables scalable manufacturing of 3D-printed tissues.


3D bioprinting, in which living tissues are printed with cells mixed into soft hydrogels, or “bio-inks,” is widely used in the field of bioengineering for modeling or replacing the tissues in our bodies. The print quality and reproducibility of tissues, however, can face challenges. One of the most significant challenges is created simply by gravity — cells naturally sink to the bottom of the bioink-extruding printer syringe because the cells are heavier than the hydrogel around them.

“This cell settling, which becomes worse during the long print sessions required to print large tissues, leads to clogged nozzles, uneven cell distribution, and inconsistencies between printed tissues,” explains Ritu Raman, the Eugene Bell Career Development Professor of Tissue Engineering and assistant professor of mechanical engineering at MIT. “Existing solutions, such as manually stirring bioinks before loading them into the printer, or using passive mixers, cannot maintain uniformity once printing begins.”
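A rough Stokes’-law estimate illustrates why settling adds up over a long print session. Bioinks are non-Newtonian, so this is only an order-of-magnitude sketch, and every parameter value below is an assumption rather than data from the study.

```python
# Back-of-the-envelope Stokes'-law estimate of why cells settle in a syringe.
# Real bioinks are non-Newtonian, so this is only a rough illustration; every
# parameter value below is an assumption, not a measurement from the study.
G = 9.81            # gravity, m/s^2
R = 10e-6           # assumed cell radius, m (~10 micrometers)
RHO_CELL = 1100.0   # assumed cell density, kg/m^3
RHO_GEL = 1000.0    # assumed hydrogel density, kg/m^3
MU = 0.05           # assumed effective viscosity of a low-viscosity bioink, Pa*s

# Stokes terminal velocity for a small sphere sinking through a viscous fluid
v = 2 * R**2 * (RHO_CELL - RHO_GEL) * G / (9 * MU)
print(f"settling velocity ~ {v*1e6:.2f} micrometers/s")
print(f"drift over a 45-minute print ~ {v*45*60*1e3:.2f} mm")
```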

In a study published Feb. 2 in the journal Device, Raman’s team introduces a new approach that aims to solve this core limitation by actively preventing cell sedimentation within bioinks during printing, allowing for more reliable and biologically consistent 3D printed tissues.

“Precise control over the bioink’s physical and biological properties is essential for recreating the structure and function of native tissues,” says Ferdows Afghah, a postdoc in mechanical engineering at MIT and lead author of the study.

“If we can print tissues that more closely mimic those in our bodies, we can use them as models to understand more about human diseases, or to test the safety and efficacy of new therapeutic drugs,” adds Raman. Such models could help researchers move away from techniques like animal testing, which supports recent interest from the U.S. Food and Drug Administration in developing faster, less expensive, and more informative new approaches to establish the safety and efficacy of new treatment paths.

“Eventually, we are working towards regenerative medicine applications such as replacing diseased or injured tissues in our bodies with 3D printed tissues that can help restore healthy function,” says Raman.

MagMix, a magnetically actuated mixer, is composed of two parts: a small magnetic propeller that fits inside the syringes used by bioprinters to deposit bioinks, layer by layer, into 3D tissues, and a permanent magnet attached to a motor that moves up and down near the syringe, controlling the movement of the propeller inside. Together, this compact system can be mounted onto any standard 3D bioprinter, keeping bioinks uniformly mixed during printing without changing the bioink formulation or interfering with the printer’s normal operation. To test the approach, the team used computer simulations to design the optimal mixing propeller geometry and speed and then validated its performance experimentally.

“Across multiple bioink types, MagMix prevented cell settling for more than 45 minutes of continuous printing, reducing clogging and preserving high cell viability,” says Raman. “Importantly, we showed that mixing speeds could be adjusted to balance effective homogenization for different bioinks while inducing minimal stress on the cells. As a proof-of-concept, we demonstrated that MagMix could be used to 3D print cells that could mature into muscle tissues over the course of several days.”

By maintaining uniform cell distribution throughout long or complex print jobs, MagMix enables the fabrication of high-quality tissues with more consistent biological function. Because the device is compact, low-cost, customizable, and easily integrated into existing 3D printers, it offers a broadly accessible solution for laboratories and industries working toward reproducible engineered tissues for applications in human health including disease modeling, drug screening, and regenerative medicine.

This work was supported, in part, by the Safety, Health, and Environmental Discovery Lab (SHED) at MIT, which provides infrastructure and interdisciplinary expertise to help translate biofabrication innovations from lab-scale demonstrations to scalable, reproducible applications.

“At the SHED, we focus on accelerating the translation of innovative methods into practical tools that researchers can reliably adopt,” says Tolga Durak, the SHED’s founding director. “MagMix is a strong example of how the right combination of technical infrastructure and interdisciplinary support can move biofabrication technologies toward scalable, real-world impact.”

The SHED’s involvement reflects a broader vision of strengthening technology pathways that enhance reproducibility and accessibility across engineering and the life sciences by providing equitable access to advanced equipment and fostering cross-disciplinary collaboration.

“As the field advances toward larger-scale and more standardized systems, integrated labs like SHED are essential for building sustainable capacity,” Durak adds. “Our goal is not only to enable discovery, but to ensure that new technologies can be reliably adopted and sustained over time.”

The team is also interested in non-medical applications of engineered tissues, such as using printed muscles to power safer and more efficient “biohybrid” robots.

The researchers believe this work can improve the reliability and scalability of 3D bioprinting, with significant potential impacts on the field and on human health. Their paper, “Advancing Bioink Homogeneity in Extrusion 3D Bioprinting with Active In Situ Magnetic Mixing,” is available now from the journal Device.


3 Questions: Using AI to help Olympic skaters land a quint

MIT Sports Lab researchers are applying AI technologies to help figure skaters improve. They also have thoughts on whether five-rotation jumps are humanly possible.


Olympic figure skating looks effortless. Athletes sail across the ice, then soar into the air, spinning like a top, before landing on a single blade just 4-5 millimeters wide. To help figure skaters land quadruple axels, Salchows, Lutzes, and maybe even the elusive quintuple without looking the least bit stressed, Jerry Lu MFin ’24 developed an optical tracking system called OOFSkate that uses artificial intelligence to analyze video of a figure skater’s jump and make recommendations on how to improve. Lu, a former researcher at the MIT Sports Lab, has been aiding elite skaters on Team USA with their technical performance and will be working with NBC Sports during the 2026 Winter Olympics to help commentators and TV viewers make better sense of the complex scoring system in figure skating, snowboarding, and skiing. He’ll be applying AI technologies to explain nuanced judging decisions and demonstrate just how technically challenging these sports can be.

Meanwhile, Professor Anette “Peko” Hosoi, co-founder and faculty director of the MIT Sports Lab, is embarking on new research aimed at understanding how AI systems evaluate aesthetic performance in figure skating. Hosoi and Lu recently chatted with MIT News about applying AI to sports, whether AI systems could ever be used to judge Olympic figure skating, and when we might see a skater land a quint.

Q: Why apply AI to figure skating?

Lu: Skaters can always keep pushing, higher, faster, stronger. OOFSkate is all about helping skaters figure out a way to rotate a little bit faster in their jumps or jump a little bit higher. The system helps skaters catch things that perhaps could pass an eye test, but that might allow them to target some high-value areas of opportunity. The artistic side of skating is much harder to evaluate than the technical elements because it’s subjective.

To use the mobile training app, you just need to take a video of an athlete’s jump, and it will spit out the physical metrics that drive how many rotations you can do. It tracks those metrics and builds in data from all of the other current elite and former elite athletes. You can see your data and then see, “This is how an Olympic champion did this element, perhaps I should try that.” You get the comparison and the automated classifier, which shows you that if you did this trick at World Championships and it were judged by an international panel, this is approximately the grade of execution score they would give you.

Hosoi: There are a lot of AI tools that are coming online, especially things like pose estimators, where you can approximate skeletal configurations from video. The challenge with these pose estimators is that if you only have one camera angle, they do very well in the plane of the camera, but they do very poorly with depth. For example, if you’re trying to critique somebody’s form in fencing, and they’re moving toward the camera, you get very bad data. But with figure skating, Jerry has found one of the few areas where depth challenges don’t really matter. In figure skating, you need to understand: How high did this person jump, how many times did they go around, and how well did they land? None of those rely on depth. He’s found an application that pose estimators do really well, and that doesn’t pay a penalty for the things they do badly.
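As an illustration of the kind of depth-free metrics such a pipeline can extract, the sketch below estimates flight time, jump height, and average spin rate from nothing more than takeoff and landing frames and a rotation count. It is not OOFSkate’s actual code, and the input format is an assumption.

```python
# Hedged sketch of depth-free jump metrics from a single-camera pose pipeline:
# flight time from takeoff/landing frames, height from projectile motion, and
# spin rate from the rotation count. Not OOFSkate's code; inputs are assumed.
G = 9.81  # m/s^2

def jump_metrics(takeoff_frame, landing_frame, fps, rotations):
    """Estimate flight time, peak height, and average spin rate for one jump."""
    flight_time = (landing_frame - takeoff_frame) / fps   # seconds in the air
    height = G * flight_time**2 / 8                       # peak rise of the center of mass
    spin_rate = rotations / flight_time                   # revolutions per second
    return flight_time, height, spin_rate

# Example: a jump spanning frames 100-142 at 60 fps with 4 rotations (a quad)
t, h, w = jump_metrics(100, 142, 60.0, 4)
print(f"flight time {t:.2f} s, height {h:.2f} m, spin {w:.1f} rev/s")
# -> flight time 0.70 s, height 0.60 m, spin 5.7 rev/s
```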

Q: Could you ever see a world in which AI is used to evaluate the artistic side of figure skating?

Hosoi: When it comes to AI and aesthetic evaluation, we have new work underway thanks to an MIT Human Insight Collaborative (MITHIC) grant. This work is in collaboration with Professor Arthur Bahr and IDSS graduate student Eric Liu. When you ask an AI platform for an aesthetic evaluation such as “What do you think of this painting?” it will respond with something that sounds like it came from a human. What we want to understand is, to get to that assessment, are the AIs going through the same sort of reasoning pathways or using the same intuitive concepts that humans go through to arrive at, “I like that painting,” or “I don’t like that painting”? Or are they just parrots? Are they just mimicking what they heard a person say? Or is there some concept map of aesthetic appeal? Figure skating is a perfect place to look for this map because skating is aesthetically judged. And there are numbers. You can’t go around a museum and find scores, “This painting is a 35.” But in skating, you’ve got the data.

That brings up another even more interesting question, which is the difference between novices and experts. It’s known that expert humans and novice humans will react differently to seeing the same thing. Somebody who is an expert judge may have a different opinion of a skating performance than a member of the general population. We’re trying to understand differences between reactions from experts, novices, and AI. Do these reactions have some common ground in where they are coming from, or is the AI coming from a different place than both the expert and the novice?

Lu: Figure skating is interesting because everybody working in the field of AI is trying to figure out AGI or artificial general intelligence and trying to build this extremely sound AI that replicates human beings. Working on applying AI to sports like figure skating helps us understand how humans think and approach judging. This has down-the-line impacts for AI research and companies that are developing AI models. By gaining a deeper understanding of how current state-of-the-art AI models work with these sports, and how you need to do training and fine-tuning of these models to make them work for specific sports, it helps you understand how AI needs to advance.

Q: What will you be watching for in the Milan Cortina Olympics figure skating competitions, now that you’ve been studying and working in this area? Do you think someone will land a quint?

Lu: For the winter games, I am working with NBC for the figure skating, ski, and snowboarding competitions to help them tell a data-driven story for the American people. The goal is to make these sports more relatable. Skating looks slow on television, but it’s not. Everything is supposed to look effortless. If it looks hard, you are probably going to get penalized. Skaters need to learn how to spin very fast, jump extremely high, float in the air, and land beautifully on one foot. The data we are gathering can help showcase how hard skating actually is, even though it is supposed to look easy.

I’m glad we are working in the Olympic sports realm because the world watches once every four years, and these are traditionally coaching-intensive and talent-driven sports, unlike a sport like baseball, where if you don’t have an elite-level optical tracking system you are not maximizing the value that you currently have. I’m glad we get to work with these Olympic sports and athletes and make an impact here.

Hosoi: I have always watched Olympic figure skating competitions, ever since I could turn on the TV. They’re always incredible. One of the things that I’m going to be practicing is identifying the jumps, which is very hard to do if you’re an amateur “judge.”

I have also done some back-of-the-envelope calculations to see if a quint is possible. I am now totally convinced it’s possible. We will see one in our lifetime, if not relatively soon. Not in this Olympics, but soon. When I saw we were so close on the quint, I thought, what about six? Can we do six rotations? Probably not. That’s where we start to come up against the limits of human physical capability. But five, I think, is in reach.
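For readers curious about that back-of-the-envelope arithmetic, the sketch below (using assumed, not measured, jump heights) shows how quickly the required spin rate climbs from a quad to a quint:

```python
# One way to reproduce the kind of back-of-the-envelope check described above,
# with assumed (not measured) numbers: how fast would a skater need to spin to
# fit five rotations into a realistic flight time?
G = 9.81  # m/s^2

for height in (0.55, 0.65, 0.75):                 # assumed peak jump heights, m
    flight_time = 2 * (2 * height / G) ** 0.5     # up-and-down time for that height
    for rotations in (4, 5):
        print(f"h={height:.2f} m, t={flight_time:.2f} s, "
              f"{rotations} rotations -> {rotations / flight_time:.1f} rev/s")
```

Under these assumptions, a quint at the same jump height demands roughly one and a half extra revolutions per second compared with a quad.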


Times Higher Education ranks MIT No. 1 in arts and humanities, business and economics, and social sciences for 2026

Top worldwide honors span disciplines across three MIT schools for the second year in a row.


The 2026 Times Higher Education World University Ranking has ranked MIT first in three subject categories: Arts and Humanities, Business and Economics, and Social Sciences, repeating the Institute’s top spot in the same subjects in 2025.

The Times Higher Education World University Ranking is an annual publication of university rankings by Times Higher Education, a leading British education magazine. The subject rankings are based on 18 rigorous performance indicators categorized under five core pillars: teaching, research environment, research quality, industry, and international outlook.

Disciplines included in MIT’s top-ranked subjects are housed in the School of Humanities, Arts, and Social Sciences (SHASS), the School of Architecture and Planning (SA+P), and the MIT Sloan School of Management.

“SHASS is a vibrant crossroads of ideas, bringing together extraordinary people,” says Agustín Rayo, the Kenan Sahin Dean of SHASS. “These rankings reflect the strength of this remarkable community and MIT’s ongoing commitment to the humanities, arts, and social sciences.” 

“The human dimension is capital to our school's mission and programs, be they architecture, planning, media arts and sciences, or the arts, and whether at the scale of individuals, communities, or societies,” says Hashim Sarkis, dean of SA+P. “The acknowledgment and celebration of their centrality by the Times Higher Education only renews our deep commitment to human values.”

“MIT and MIT Sloan are providing students with an education that ensures they have the skills, experience, and problem-solving abilities they need in order to succeed in our world today,” says Richard M. Locke, the John C Head III Dean at the MIT Sloan School of Management. “It’s not just what we teach them, but how we teach them. The interdisciplinary nature of a school like MIT combines analytical reasoning skills, deep functional knowledge, and, at MIT Sloan, a hands-on management education that teaches students how to collaborate, lead teams, and navigate challenges, now and in the future."

The Arts and Humanities ranking evaluated 817 universities from 74 countries in the disciplines of languages; literature and linguistics; history; philosophy; theology; architecture; archaeology; and art, performing arts, and design. This is the second consecutive year MIT has earned the top spot in this subject.

The ranking for Business and Economics evaluated 1,067 institutions from 91 countries and territories across three core disciplines: business and management; accounting and finance; and economics and econometrics. This is the fifth consecutive year MIT has been ranked first in this subject.

The Social Sciences ranking evaluated 1,202 institutions from 104 countries and territories in the disciplines of political science and international studies, sociology, geography, communication and media studies, and anthropology. MIT claimed the top spot in this subject for the second consecutive year.

In other subjects, MIT was also named among the top universities, ranking third in Engineering and Life Sciences, and fourth in Computer Science and Physical Sciences. Overall, MIT ranked second in the Times Higher Education 2026 World University Ranking.


A quick stretch switches this polymer’s capacity to transport heat

The flexible material could enable on-demand heat dissipation for electronics, fabrics, and buildings.


Most materials have an inherent capacity to handle heat. Plastic, for instance, is typically a poor thermal conductor, whereas materials like marble move heat more efficiently. If you were to place one hand on a marble countertop and the other on a plastic cutting board, the marble would conduct more heat away from your hand, creating a colder sensation compared to the plastic.

Typically, a material’s thermal conductivity cannot be changed without re-manufacturing it. But MIT engineers have now found that a relatively common material can switch its thermal conductivity. Simply stretching the material quickly dials up its heat conductance, from a baseline similar to that of plastic to a higher capacity closer to that of marble. When the material springs back to its unstretched form, it returns to its plastic-like properties.

The thermally reversible material is an olefin block copolymer — a soft and flexible polymer that is used in a wide range of commercial products. The team found that when the material is quickly stretched, its ability to conduct heat more than doubles. This transition occurs within just 0.22 seconds, which is the fastest thermal switching that has been observed in any material.
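To put that switch in everyday terms, a simple Fourier’s-law estimate compares steady heat flow through a thin polymer layer before and after stretching. The conductivity values here are generic polymer figures assumed for illustration, not the study’s measurements.

```python
# Fourier's-law illustration of what "more than doubles" means for heat
# dissipation through a thin polymer layer. The conductivity values are
# assumed, generic polymer figures -- not measurements from this study.
def heat_flow_watts(k, area_m2, thickness_m, delta_T):
    """Steady-state conduction: Q = k * A * dT / L."""
    return k * area_m2 * delta_T / thickness_m

AREA = 0.01       # 10 cm x 10 cm patch, m^2
THICKNESS = 1e-3  # 1 mm layer, m
DELTA_T = 10.0    # skin-to-air temperature difference, K

k_relaxed, k_stretched = 0.2, 0.45  # assumed W/(m*K), relaxed vs. stretched
for label, k in (("relaxed", k_relaxed), ("stretched", k_stretched)):
    print(f"{label:>9}: {heat_flow_watts(k, AREA, THICKNESS, DELTA_T):.1f} W")
# The stretched/relaxed ratio mirrors the >2x conductivity switch described above.
```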

This material could be used to engineer systems that adapt to changing temperatures in real time. For instance, switchable fibers could be woven into apparel that normally retains heat. When stretched, the fabric would instantly conduct heat away from a person’s body to cool them down. Similar fibers can be built into laptops and infrastructure to keep devices and buildings from overheating. The researchers are working on further optimizing the polymer and on engineering new materials with similar properties.

“We need cheap and abundant materials that can quickly adapt to environmental temperature changes,” says Svetlana Boriskina, principal research scientist in MIT’s Department of Mechanical Engineering. “Now that we’ve seen this thermal switching, this changes the direction where we can look for and build new adaptive materials.”

Boriskina and her colleagues have published their results in a study appearing today in the journal Advanced Materials. The study’s co-authors include Duo Xu, Buxuan Li, You Lyu, and Vivian Santamaria-Garcia of MIT, and Yuan Zhu of Southern University of Science and Technology in Shenzhen, China.

Elastic chains

The key to the new phenomenon is that when the material is stretched, its microscopic structures align in ways that suddenly allow heat to travel through easily, increasing the material’s thermal conductivity. In its unstretched state, the same microstructures are tangled and bunched, effectively blocking heat’s path.

As it happens, Boriskina and her colleagues didn’t set out to find a heat-switching material. They were initially looking for more sustainable alternatives to spandex, which is a synthetic fabric made from petroleum-based plastics that is traditionally difficult to recycle. As a potential replacement, the team was investigating fibers made from a different polymer known as polyethylene.

“Once we started working with the material, we realized it had other properties that were more interesting than the fact that it was elastic,” Boriskina says. “What makes polyethylene unique is it has this backbone of carbon atoms arranged along a simple chain. And carbon is a very good conductor of heat.”

The microstructure of most polymer materials, including polyethylene, contains many carbon chains. However, these chains exist in a messy, spaghetti-like tangle known as an amorphous phase. Despite the fact that carbon is a good heat conductor, the disordered arrangement of chains typically impedes heat flow. Polyethylene and most other polymers, therefore, generally have low thermal conductivity.

In previous work, MIT Professor Gang Chen and his collaborators found ways to untangle the mess of carbon chains and push polyethylene to shift from a disordered amorphous state to a more aligned, crystalline phase. This transition effectively straightened the carbon chains, providing clear highways for heat to flow through and increasing the material’s thermal conductivity. In those experiments however, the switch was permanent; once the material’s phase changed, it could not be reversed.

As Boriskina’s team explored polyethylene, they also considered other closely related materials, including olefin block copolymer (OBC). OBC is predominantly an amorphous material, made from highly tangled chains of carbon and hydrogen atoms. Scientists had therefore assumed that OBC would exhibit low thermal conductivity. If its conductance could be increased, it would likely be permanent, similar to polyethylene.

But when the team carried out experiments to test the elasticity of OBC, they found something quite different.

“As we stretched and released the material, we realized that its thermal conductivity was really high when it was stretched and lower when it was relaxed, over thousands of cycles,” says study co-author and MIT graduate student Duo Xu. “This switch was reversible, while the material stayed mostly amorphous. That was unexpected.”

A stretchy mess

The team then took a closer look at OBC, and how it might be changing as it was stretched. The researchers used a combination of X-ray and Raman spectroscopy to observe the material’s microscopic structure as they stretched and relaxed it repeatedly. They observed that, in its unstretched state, the material consists mainly of amorphous tangles of carbon chains, with just a few islands of ordered, crystalline domains scattered here and there. When stretched, the crystalline domains seemed to align and the amorphous tangles straightened out, similar to what Gang Chen observed in polyethylene.

However, rather than transitioning entirely into a crystalline phase, the straightened tangles stayed in their amorphous state. In this way, the team found that the tangles were able to switch back and forth, from straightened to bunched and back again, as the material was stretched and relaxed repeatedly.

“Our material is always in a mostly amorphous state; it never crystallizes under strain,” Xu notes. “So it leaves you this opportunity to go back and forth in thermal conductivity a thousand times. It’s very reversible.”

The team also found that this thermal switching happens extremely fast: The material’s thermal conductivity more than doubled within just 0.22 seconds of being stretched.

“The resulting difference in heat dissipation through this material is comparable to a tactile difference between touching a plastic cutting board versus a marble countertop,” Boriskina says.

She and her colleagues are now taking the results of their experiments and working them into models to see how they can tweak a material’s amorphous structure, to trigger an even bigger change when stretched.

“Our fibers can quickly react to dissipate heat, for electronics, fabrics, and building infrastructure,” Boriskina says. “If we could make further improvements to switch their thermal conductivity from that of plastic to that closer to diamond, it would have a huge industrial and societal impact.”

This research was supported, in part, by the U.S. Department of Energy, the Office of Naval Research Global via Tec de Monterrey, MIT Evergreen Graduate Innovation Fellowship, MathWorks MechE Graduate Fellowship, and the MIT-SUSTech Centers for Mechanical Engineering Research and Education, and carried out, in part, with the use of MIT.nano and ISN facilities.


Study: Platforms that rank the latest LLMs can be unreliable

Removing just a tiny fraction of the crowdsourced data that informs online ranking platforms can significantly change the results.


A firm that wants to use a large language model (LLM) to summarize sales reports or triage customer inquiries can choose between hundreds of unique LLMs with dozens of model variations, each with slightly different performance.

To narrow down the choice, companies often rely on LLM ranking platforms, which gather user feedback on model interactions to rank the latest LLMs based on how they perform on certain tasks.

But MIT researchers found that a handful of user interactions can skew the results, leading someone to mistakenly believe one LLM is the ideal choice for a particular use case. Their study reveals that removing a tiny fraction of crowdsourced data can change which models are top-ranked.

They developed a fast method to test ranking platforms and determine whether they are susceptible to this problem. The evaluation technique identifies the individual votes most responsible for skewing the results so users can inspect these influential votes.

The researchers say this work underscores the need for more rigorous strategies to evaluate model rankings. While they didn’t focus on mitigation in this study, they provide suggestions that may improve the robustness of these platforms, such as gathering more detailed feedback to create the rankings.

The study also offers a word of warning to users who may rely on rankings when making decisions about LLMs that could have far-reaching and costly impacts on a business or organization.

“We were surprised that these ranking platforms were so sensitive to this problem. If it turns out the top-ranked LLM depends on only two or three pieces of user feedback out of tens of thousands, then one can’t assume the top-ranked LLM is going to be consistently outperforming all the other LLMs when it is deployed,” says Tamara Broderick, an associate professor in MIT’s Department of Electrical Engineering and Computer Science (EECS); a member of the Laboratory for Information and Decision Systems (LIDS) and the Institute for Data, Systems, and Society; an affiliate of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author of this study.

She is joined on the paper by lead authors and EECS graduate students Jenny Huang and Yunyi Shen as well as Dennis Wei, a senior research scientist at IBM Research. The study will be presented at the International Conference on Learning Representations.

Dropping data

While there are many types of LLM ranking platforms, the most popular variations ask users to submit a query to two models and pick which LLM provides the better response.

The platforms aggregate the results of these matchups to produce rankings that show which LLM performed best on certain tasks, such as coding or visual understanding.
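The article does not specify how any given platform aggregates votes, but many public leaderboards use Elo- or Bradley-Terry-style models. Under that assumption, here is a minimal Bradley-Terry sketch; the model names are placeholders, not real leaderboard entries.

```python
# Minimal Bradley-Terry aggregation of pairwise votes into a ranking -- an
# assumed, generic approach, not any specific platform's method. Model names
# are placeholders.
from collections import defaultdict

votes = [  # (model_a, model_b, winner) triples from pairwise matchups
    ("gpt-x", "llama-y", "gpt-x"),
    ("gpt-x", "mistral-z", "gpt-x"),
    ("llama-y", "mistral-z", "llama-y"),
    ("gpt-x", "llama-y", "llama-y"),
    ("llama-y", "mistral-z", "mistral-z"),
]

def bradley_terry(votes, iters=200):
    """Fit Bradley-Terry strengths with the standard minorization-maximization update."""
    models = sorted({m for a, b, _ in votes for m in (a, b)})
    wins = defaultdict(float)    # total wins per model
    pairs = defaultdict(float)   # games played per unordered pair
    for a, b, w in votes:
        wins[w] += 1
        pairs[frozenset((a, b))] += 1
    p = {m: 1.0 for m in models}
    for _ in range(iters):
        new_p = {}
        for i in models:
            denom = sum(n / (p[i] + p[j])
                        for pair, n in pairs.items() if i in pair
                        for j in pair if j != i)
            new_p[i] = wins[i] / denom if denom else p[i]
        total = sum(new_p.values())
        p = {m: v / total for m, v in new_p.items()}  # normalize for stability
    return p

ranking = sorted(bradley_terry(votes).items(), key=lambda kv: -kv[1])
print(ranking)  # models ordered from strongest to weakest under this model
```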

By choosing a top-performing LLM, a user likely expects that model’s top ranking to generalize, meaning it should outperform other models on their similar, but not identical, application with a set of new data.

The MIT researchers previously studied generalization in areas like statistics and economics. That work revealed certain cases where dropping a small percentage of data can change a model’s results, indicating that those studies’ conclusions might not hold beyond their narrow setting.

The researchers wanted to see if the same analysis could be applied to LLM ranking platforms.

“At the end of the day, a user wants to know whether they are choosing the best LLM. If only a few prompts are driving this ranking, that suggests the ranking might not be the end-all-be-all,” Broderick says.

But it would be impossible to test the data-dropping phenomenon manually. For instance, one ranking they evaluated had more than 57,000 votes. Testing a data drop of 0.1 percent means removing each possible subset of 57 votes out of the 57,000 (there are more than 10^194 such subsets) and then recalculating the ranking.

Instead, the researchers developed an efficient approximation method, based on their prior work, and adapted it to fit LLM ranking systems.

“While we have theory to prove the approximation works under certain assumptions, the user doesn’t need to trust that. Our method tells the user the problematic data points at the end, so they can just drop those data points, re-run the analysis, and check to see if they get a change in the rankings,” she says.
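To make the data-dropping check concrete, the toy sketch below scores models by simple win rate, finds the individual votes whose removal most shrinks the leader’s margin, and re-ranks without them. It is a brute-force stand-in for illustration only, not the authors’ approximation method, which is what makes the analysis tractable at the scale of real leaderboards.

```python
# Toy illustration of the data-dropping check: score models by win rate, find
# the single votes whose removal most shrinks the gap between the top two
# models, then drop that handful together and re-rank. A brute-force stand-in,
# not the authors' approximation method.
from collections import Counter

def top_two_gap(votes):
    """Return (leader, runner-up, score gap) under a simple win-rate score."""
    wins, games = Counter(), Counter()
    for a, b, w in votes:
        games[a] += 1
        games[b] += 1
        wins[w] += 1
    scores = {m: wins[m] / games[m] for m in games}
    (m1, s1), (m2, s2) = sorted(scores.items(), key=lambda kv: -kv[1])[:2]
    return m1, m2, s1 - s2

def most_influential_votes(votes, k=2):
    """Rank votes by how much their individual removal shrinks the top-two gap."""
    base = top_two_gap(votes)[2]
    influence = []
    for idx in range(len(votes)):
        reduced = votes[:idx] + votes[idx + 1:]
        influence.append((base - top_two_gap(reduced)[2], idx))
    return [idx for _, idx in sorted(influence, reverse=True)[:k]]

votes = [("A", "B", "A")] * 6 + [("A", "B", "B")] * 5   # A leads B, 6 wins to 5
drop = most_influential_votes(votes, k=2)
pruned = [v for i, v in enumerate(votes) if i not in drop]
print("before:", top_two_gap(votes)[0], "after:", top_two_gap(pruned)[0])
# Removing just two of A's wins flips the leader from A to B.
```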

Surprisingly sensitive

When the researchers applied their technique to popular ranking platforms, they were surprised to see how few data points they needed to drop to cause significant changes in the top LLMs. In one instance, removing just two votes out of more than 57,000, which is 0.0035 percent, changed which model is top-ranked.

A different ranking platform, which uses expert annotators and higher quality prompts, was more robust. Here, removing 83 out of 2,575 evaluations (about 3 percent) flipped the top models.

Their examination revealed that many influential votes may have been a result of user error. In some cases, it appeared there was a clear answer as to which LLM performed better, but the user chose the other model instead, Broderick says.

“We can never know what was in the user’s mind at that time, but maybe they mis-clicked or weren’t paying attention, or they honestly didn’t know which one was better. The big takeaway here is that you don’t want noise, user error, or some outlier determining which is the top-ranked LLM,” she adds.

The researchers suggest that gathering additional feedback from users, such as confidence levels in each vote, would provide richer information that could help mitigate this problem. Ranking platforms could also use human mediators to assess crowdsourced responses.

For the researchers’ part, they want to continue exploring generalization in other contexts while also developing better approximation methods that can capture more examples of non-robustness.

“Broderick and her students’ work shows how you can get valid estimates of the influence of specific data on downstream processes, despite the intractability of exhaustive calculations given the size of modern machine-learning models and datasets,” says Jessica Hullman, the Ginni Rometty Professor of Computer Science at Northwestern University, who was not involved with this work.  “The recent work provides a glimpse into the strong data dependencies in routinely applied — but also very fragile — methods for aggregating human preferences and using them to update a model. Seeing how few preferences could really change the behavior of a fine-tuned model could inspire more thoughtful methods for collecting these data.”

This research is funded, in part, by the Office of Naval Research, the MIT-IBM Watson AI Lab, the National Science Foundation, Amazon, and a CSAIL seed award.


How MIT’s 10th president shaped the Cold War

For several decades beginning in the 1950s, the Killian Report set the frontiers of military technology, intelligence gathering, national security policy, and global affairs.


Today, MIT plays a key role in maintaining U.S. competitiveness, technological leadership, and national defense — and much of the Institute’s work to support the nation’s standing in these areas can be traced back to 1953.

Two months after he took office that year, U.S. President Dwight Eisenhower received a startling report from the military: The USSR had successfully exploded a nuclear bomb nine months sooner than intelligence sources had predicted. The rising Communist power had also detonated a hydrogen bomb using development technology more sophisticated than that of the U.S. And lastly, there was evidence of a new Soviet bomber that rivaled the B-52 in size and range — and the aircraft was of an entirely original design from within the USSR. There was, the report concluded, a significant chance of a surprise nuclear attack on the United States.

Eisenhower’s understanding of national security was vast (he had led the Allies to victory in World War II and served as the first supreme commander of NATO), but the connections he’d made during his two-year stint as president of Columbia University would prove critical to navigating the emerging challenges of the Cold War. He sent his advisors in search of a plan for managing this threat, and he suggested they start with James Killian, then president of MIT.

Killian had an unlikely path to the presidency of MIT. “He was neither a scientist nor an engineer,” says David Mindell, the Dibner Professor of the History of Engineering and Manufacturing and a professor of aeronautics and astronautics at MIT. “But Killian turned out to be a truly gifted administrator.”

While he was serving as editor of MIT Technology Review (where he founded what became the MIT Press), Killian was tapped by then-president Karl Compton to join his staff. As the war effort ramped up on the MIT campus in the 1940s, Compton deputized Killian to lead the RadLab — a 4,000-person effort to develop and deploy the radar systems that proved decisive in the Allied victory.

Killian was named MIT’s 10th president in 1948. In 1951, he launched MIT Lincoln Laboratory, a federally funded research center where MIT and U.S. Air Force scientists and engineers collaborated on new air defense technologies to protect the nation against a nuclear attack.

Two years later, within weeks of Eisenhower’s 1953 request, Killian convened a group of leading scientists at MIT. The group proposed a three-part study: The U.S. needed to reassess its offensive capabilities, its continental defense, and its intelligence operations. Eisenhower agreed.

Killian mobilized 42 engineers and scientists from across the country into three panels matching the committee’s charge. Between September 1954 and February 1955, the panels held 307 meetings with every major defense and intelligence organization in the U.S. government. They had unrestricted access to every project, plan, and program involving national defense. The result, a 190-page report titled “Meeting the Threat of a Surprise Attack,” was delivered to Eisenhower’s desk on Feb. 14, 1955.

The Killian Report, as it came to be known, would go on to play a dramatic role in defining the frontiers of military technology, intelligence gathering, national security policy, and global affairs over the next several decades. Killian’s input would also have dramatic impacts on Eisenhower’s presidency and the relationship between the federal government and higher education.

Foreseeing an evolving competition

The Killian Report opens by anticipating four projected “periods” in the shifting balance of power between the U.S. and the Soviet Union.

In 1955, the U.S. had a decided offensive advantage over the USSR, but it was overly vulnerable to surprise attack. In 1956 and 1957, the U.S. would have an even larger offensive advantage and be only somewhat less vulnerable to surprise. By 1960, the U.S.’ offensive advantage would be narrower, but it would be in a better position to anticipate an attack. Within a decade, the report stated, the two nations would enter “Period IV” — during which “an attack by either side would result in mutual destruction … [a period] so fraught with danger to the U.S. that we should push all promising technological development so that we may stay in Periods II and III as long as possible.”

The report went on to make extensive, detailed recommendations — accelerated development of intercontinental ballistic missiles and high-energy aircraft fuels, expansion and increased ground security for “delivery system” facilities, increased cooperation with Canada and more studies about establishing monitoring stations on polar pack ice, and “studies directed toward better understanding of the radiological hazards that may result from the detonation of large numbers of nuclear weapons,” among others.

“Eisenhower really wanted to draw the perspectives of scientists and engineers into his decision-making,” says Mindell. “Generals and admirals tend to ask for more arms and more boots on the ground. The president didn’t want to be held captive by these views — and Killian’s report really delivered this for him.”

On the day it arrived, President Eisenhower circulated the Killian Report to the head of every department and agency in the federal government and asked them to comment on its recommendations. The Cold War arms race was on — and it would be between scientists and engineers in the United States and those in the Soviet Union.

An odd couple

The Killian Report made many recommendations based on “the correctness of the current national intelligence estimates” — even though “Eisenhower was frustrated with his whole intelligence apparatus,” says Will Hitchcock, the James Madison Professor of History at the University of Virginia and author of “The Age of Eisenhower.” “He felt it was still too much World War II ‘exploding-cigar’ stuff. There wasn’t enough work on advance warning, on seeing what’s over the hill. But that’s what Eisenhower really wanted to know.” The surprise attack on Pearl Harbor still lingered in the minds of many Americans, Hitchcock notes, and “that needed to be avoided.”

Killian needed an aggressive, innovative thinker to assess U.S. intelligence, so he turned to Edwin Land. The cofounder of Polaroid, Land was an astonishingly bold engineer and inventor. He also had military experience, having developed new ordnance targeting systems, aerial photography devices, and other photographic and visual surveillance technologies during World War II. Killian approached Land knowing their methods and work style were quite different. (When the offer to lead the intelligence panel was made, Land was in Hollywood advising filmmakers on the development of 3D movies; Land told Killian he had a personal rule that any committee he served on “must fit into a taxicab.”)

In fall 1954, Land and his five-person panel quickly confirmed Killian and Eisenhower’s suspicions: “We would go in and interview generals and admirals in charge of intelligence and come away worried,” Land reported to Killian later. “We were [young scientists] asking questions — and they couldn’t answer them.” Killian and Land realized this would set their report and its recommendations on a complicated path: While they needed to acknowledge and address the challenges of broadly upgrading intelligence activities, they also needed to make rapid progress on responding to the Soviet threat.

As work on the report progressed, Land and Killian held briefings with Eisenhower. They used these meetings to make two additional proposals — neither of which, President Eisenhower decided, would be spelled out in the final report for security reasons. The first was the development of missile-firing submarines, a long-term prospect that would take a decade to complete. (The technology developed for Polaris-class submarines, Mindell notes, transferred directly to the rockets that powered the Apollo program to the moon.)

The second proposal — to fast-track development of the U-2, a new high-altitude spy plane —could be accomplished within a year, Land told Eisenhower. The president agreed to both ideas, but he put a condition on the U-2 program. As Killian later wrote: “The president asked that it should be handled in an unconventional way so that it would not become entangled in the bureaucracy of the Defense Department or troubled by rivalries among the services.”

Powered by Land’s revolutionary imaging devices, the U-2 would become a critical tool in the U.S.’ ability to assess and understand the Soviet Union’s nuclear capacity. But the spy plane would also go on to have disastrous consequences for the peace process and for Eisenhower.

The aftermath(s)

The Killian Report has a very complex legacy, says Christopher Capozzola, the Elting Morison Professor of History. “There is a series of ironies about the whole undertaking,” he says. “For example, Eisenhower was trying to tamp down interservice rivalries by getting scientists to decide things. But within a couple of years those rivalries have all gotten worse.” Similarly, Capozzola notes, Eisenhower — who famously coined the phrase “military-industrial complex” and warned against it — amplified the militarization of scientific research “more than anyone else.”

Another especially painful irony emerged on May 1, 1960. Two weeks before a meeting between Eisenhower and Khrushchev in Paris to discuss how the U.S. and USSR could ease Cold War tensions and slow the arms race, a U-2 was shot down in Soviet airspace. After a public denial by the U.S. that the aircraft was being used for espionage, the Soviets produced the plane’s wreckage, cameras, and pilot — who admitted he was working for the CIA. The peace process, which had become the centerpiece of Eisenhower’s intended legacy, collapsed.

There were also some brighter outcomes of the Killian Report, Capozzola says. It marked a dramatic reset of the national government’s relationship with academic scientists and engineers — and with MIT specifically. “The report really greased the wheels between MIT scientists and Washington,” he notes. “Perhaps more than the report itself, the deep structures and relationships that Killian set up had implications for MIT and other research universities. They started to orient their missions toward the national interest,” he adds.

The report also cemented Eisenhower’s relationship with Killian. After the launch of Sputnik, which induced a broad public panic in the U.S. about Soviet scientific capabilities, the president called on Killian to guide the national response. Eisenhower later named Killian the first special assistant to the president for science and technology. In the years that followed, Killian would go on to help launch NASA, and MIT engineers would play a critical role in the Apollo mission that landed the first person on the moon. To this day, researchers at MIT and Lincoln Laboratory uphold this legacy of service, advancing knowledge in areas vital to national security, economic competitiveness, and quality of life for all Americans.

As Eisenhower’s special assistant, Killian met with him almost daily and became one of his most trusted advisors. “Killian could talk to the president, and Eisenhower really took his advice,” says Capozzola. “Not very many people can do that. The fact that Killian had that and used it was different.”

A key to their relationship, Capozzola notes, was Killian’s approach to his work. “He exemplified the notion that if you want to get something done, don’t take the credit. At no point did Killian think he was setting science policy. He was advising people on their best options, including decision-makers who would have to make very difficult decisions. That’s it.”

In 1977, after many tours of duty in Washington and his retirement from MIT, Killian summarized his experience working for Eisenhower in his memoir, “Sputnik, Scientists, and Eisenhower.” Killian said of his colleagues: “They were held together in close harmony not only by the challenge of the scientific and technical work they were asked to undertake but by their abiding sense of the opportunity they had to serve a president they admired and the country they loved. They entered the corridors of power in a moment of crisis and served there with a sense of privilege and of admiration for the integrity and high purpose of the White House.”


“This is science!” – MIT president talks about the importance of America’s research enterprise on GBH’s Boston Public Radio

MIT faculty join The Curiosity Desk to discuss football, math, Olympic figure skating, AI and the quest to cure ovarian cancer.


In a wide-ranging conversation, MIT President Sally Kornbluth joined Jim Braude and Margery Eagan live in studio for GBH’s Boston Public Radio on Thursday, February 5. They talked about MIT, the pressures facing America’s research enterprise, the importance of science, the 2023 congressional hearing on antisemitism, and more – including her experience as a Type 1 diabetic.

Reflecting on how research and innovation in the treatment of diabetes have advanced over decades of work, leading to markedly better patient care, Kornbluth exclaims: “This is science!”

With new financial pressures facing universities, increased competition from outside the U.S. for talented students and scholars, and unprecedented strains on university leaders and campuses, co-host Eagan asks Kornbluth what she thinks will happen in the years to come.

“For us, one of the hardest things now is the endowment tax,” remarks Kornbluth. “That is $240 million a year. Think about how much science you can get for $240 million a year. Are we managing it? Yes. Are we still forging ahead on all of our exciting initiatives? Yes. But we’ve had to reconfigure things. We’ve had to merge things. And it’s not the way we should be spending our time and money.”   

Watch and listen to the full episode on YouTube. President Kornbluth appears one hour and seven minutes into the broadcast.

Following Kornbluth’s appearance, MIT Assistant Professor John Urschel – also a former offensive lineman for the Baltimore Ravens – joined Edgar B. Herwick III, host of GBH’s newest show, The Curiosity Desk, to talk about his love of his family, linear algebra, and football.

On how he eventually chose math over football, Urschel quips: “Well, I hate to break it to you, I like math better… let me tell you, when I started my PhD at MIT, I just fell in love with the place. I fell in love with this idea of being in this environment [where] everyone loves math, everyone wants to learn. I was just constantly excited every day showing up.”

Prof. Urschel appears about 2 hours and 40 minutes into the webcast on YouTube.

Coming up on Curiosity Desk later this month…

Airing weekday afternoons from 1-2 p.m., The Curiosity Desk will welcome additional MIT guests in the coming weeks. On Thursday, Feb. 12, Professors Sangeeta Bhatia and Angela Belcher talk with Herwick about their research to improve diagnostics for ovarian cancer. We learn that ovarian cancer starts in the fallopian tubes about 80 percent of the time, and how this points the way to a whole new approach to diagnosing and treating the disease.

Then, on Tuesday, Feb. 17, Anette “Peko” Hosoi, Pappalardo Professor of Mechanical Engineering, and Jerry Lu MFin ’24, a former researcher at the MIT Sports Lab, visit The Curiosity Desk to discuss their work using AI to help Olympic figure skaters improve their jumps.

Source: GBH 

I’m walking here! A new model maps foot traffic in New York City

The first complete charting of foot traffic in any US city can be used for infrastructure decisions and safety improvements.


Early in the 1969 film “Midnight Cowboy,” Dustin Hoffman, playing the character of Ratso Rizzo, crosses a Manhattan street and angrily bangs on the hood of an encroaching taxi. Hoffman’s line — “I’m walking here!” — has since been repeated by thousands of New Yorkers. Where cars and people mix, tensions rise.

And yet, governments and planners across the U.S. haven’t thoroughly tracked where cars and people mix. Officials have long measured vehicle traffic closely while largely ignoring pedestrian traffic. Now, an MIT research group has assembled a routable dataset of sidewalks, crosswalks, and footpaths for all of New York City — a massive mapping project and the first complete model of pedestrian activity in any U.S. city.

The model could help planners decide where to make pedestrian infrastructure and public space investments, and illuminate how development decisions could affect non-motorized travel in the city. The study also helps pinpoint locations throughout the city where there are both lots of pedestrians and high pedestrian hazards, such as traffic crashes, and where streets or intersections are most in need of upgrades.

“We now have a first view of foot traffic all over New York City and can check planning decisions against it,” says Andres Sevtsuk, an associate professor in MIT’s Department of Urban Studies and Planning (DUSP), who led the study. “New York has very high densities of foot traffic outside of its most well-known areas.”

Indeed, one upshot of the model is that while Manhattan has the most foot traffic per block, the city’s other boroughs contain plenty of pedestrian-heavy stretches of sidewalk and could probably use more investment on behalf of walkers.

“Midtown Manhattan has by far the most foot traffic, but we found there is a probably unintentional Manhattan bias when it comes to policies that support pedestrian infrastructure,” Sevtsuk says. “There are a whole lot of streets in New York with very high pedestrian volumes outside of Manhattan, whether in Queens or the Bronx or Brooklyn, and we’re able to show, based on data, that a lot of these streets have foot-traffic levels similar to many parts of Manhattan.”

And, in an advance that could help cities anywhere, the model was used to quantify vehicle crashes involving pedestrians not only as raw totals, but on a per-pedestrian basis.

“A lot of cities put real investments behind keeping pedestrians safe from vehicles by prioritizing dangerous locations,” Sevtsuk says. “But that’s not only where the most crashes occur. Here we are able to calculate accidents per pedestrian, the risk people face, and that broadens the picture in terms of where the most dangerous intersections for pedestrians really are.”
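To make the per-pedestrian idea concrete, here is a minimal Python sketch — not the study’s actual methodology, and with entirely made-up numbers for three hypothetical intersections — showing how normalizing crash totals by modeled foot traffic can reorder which locations look most dangerous:

```python
# Illustrative only: hypothetical crash counts and modeled pedestrian volumes.
crashes_per_year = {"A": 25, "B": 6, "C": 5}            # pedestrian-involved crashes
pedestrians_per_hour = {"A": 1700, "B": 240, "C": 90}   # modeled foot traffic

HOURS_PER_YEAR = 365 * 18  # assume an 18-hour walking day (an arbitrary assumption)

for name in crashes_per_year:
    annual_pedestrians = pedestrians_per_hour[name] * HOURS_PER_YEAR
    risk = crashes_per_year[name] / annual_pedestrians  # crashes per pedestrian passing through
    print(f"Intersection {name}: {risk:.2e} crashes per pedestrian")
```

In this toy example the busiest intersection has the most total crashes but the lowest risk per pedestrian, while the quietest one carries the highest per-pedestrian risk — the kind of reordering the model makes visible.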

The paper, “Spatial Distribution of Foot-traffic in New York City and Applications for Urban Planning,” is published today in Nature Cities.

The authors are Sevtsuk, the Charles and Ann Spaulding Associate Professor of Urban Science and Planning in DUSP and head of the City Design and Development Group; Rounaq Basu, an assistant professor at Georgia Tech; Liu Liu, a PhD student at the City Form Lab in DUSP; Abdulaziz Alhassan, a PhD student at MIT’s Center for Complex Engineering Systems; and Justin Kollar, a PhD student at MIT’s Leventhal Center for Advanced Urbanism in DUSP.

Walking everywhere

The current study continues work Sevtsuk and his colleagues have conducted charting and modeling pedestrian traffic around the world, from Melbourne to MIT’s Kendall Square neighborhood in Cambridge, Massachusetts. Many cities collect some pedestrian count data — but not much. And while officials usually request vehicle traffic impact assessments for new development plans, they rarely study how new developments or infrastructure proposals affect pedestrians.

However, New York City does devote part of its Department of Transportation (DOT) to pedestrian issues, and about 41 percent of trips city-wide are made on foot, compared to just 28 percent by vehicle, likely the highest such ratio in any big U.S. city. To calibrate the model, the MIT team used pedestrian counts that New York City’s DOT recorded in 2018 and 2019, covering up to 1,000 city sidewalk segments on weekdays and up to roughly 450 segments on weekends.

The researchers were able to test the model — which incorporates a wide range of factors — against New York City’s pedestrian-count data. Once calibrated, the model could extend foot-traffic estimates across the whole city, not just to the points where pedestrian counts were observed.
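As a rough illustration of what calibrating against observed counts can look like, the sketch below fits a single scale factor between hypothetical modeled flows and observed counts by least squares. The study’s actual model incorporates many more factors and a richer calibration, so this is only a conceptual sketch under invented data:

```python
import numpy as np

# Hypothetical modeled (uncalibrated) flows and observed counts for a few segments;
# the real calibration drew on roughly 1,000 counted sidewalk segments.
modeled = np.array([120.0, 80.0, 300.0, 45.0, 510.0])       # relative flow from a network model
observed = np.array([310.0, 190.0, 760.0, 120.0, 1290.0])   # pedestrians/hour counted on those segments

# Fit a single scaling factor k that minimizes ||observed - k * modeled||^2.
k = float(modeled @ observed) / float(modeled @ modeled)

# Apply the calibrated factor to segments that were never counted directly.
uncounted_segments = np.array([60.0, 220.0, 15.0])
estimated_counts = k * uncounted_segments
print(f"calibration factor k = {k:.2f}")
print("estimated pedestrians/hour:", np.round(estimated_counts, 1))
```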

The results showed that in Midtown Manhattan, there are about 1,697 pedestrians, on average, per sidewalk segment per hour during the evening peak of foot traffic, the highest in the city. The financial district in lower Manhattan comes in second, at 740 pedestrians per hour, with Greenwich Village third at 656.

Other parts of Manhattan register lower levels of foot traffic, however. Morningside Heights and East Harlem register 226 and 227 pedestrians per block per hour. And that’s similar to, or lower than, some parts of other boroughs. Brooklyn Heights has 277 pedestrians per sidewalk segment per hour; University Heights in the Bronx has 263; Borough Park in Brooklyn and the Grand Concourse in the Bronx average 236; and a slice of Queens in the Corona area averages 222. Many other spots are over 200.

The model overlays many different types of pedestrian journeys for each time period and shows that people are generally headed to work and schools in the morning, but conduct more varied types of trips in mid-day and the evening, as they seek out amenities or conduct social or recreational visits.

“Because of jobs, transit stops are the biggest generators of foot traffic in the morning peak,” Liu observes. “In the evening peak, of course people need to get home too, but patterns are much more varied, and people are not just returning from work or school. More social and recreational travel happens after work, whether it’s getting together with friends or running errands for family or family care trips, and that’s what the model detects too.”

On the safety front, pedestrians face danger in many places, not just at the intersections with the most total accidents. On a per-pedestrian basis, many parts of the city turn out to be riskier than the locations with the highest raw numbers of pedestrian-related crashes.

“Places like Times Square and Herald Square in Manhattan may have numerous crashes, but they have very high pedestrian volumes, and it’s actually relatively safe to walk there,” Basu says. “There are other parts of the city, around highway off-ramps and heavy car-infrastructure, including the relatively low-density borough of Staten Island, which turn out to have a disproportionate number of crashes per pedestrian.”

Taking the model across the U.S.

The MIT model stands a solid chance of being applied in New York City policy and planning circles, since officials there are aware of the research and have been regularly communicating with the MIT team about it.

For his part, Sevtsuk emphasizes that, as distinct as New York City might be, the MIT model can be applied to cities and towns anywhere in the U.S. As it happens, the team is working with municipal officials in two other places at the moment. One is Los Angeles, where city officials are not only trying to upgrade pedestrian and public transit mobility for regular daily trips, but also making plans to handle an influx of visitors for the 2028 Summer Olympics.

Meanwhile, the state of Maine is working with the MIT team to evaluate pedestrian movement in over 140 of its cities and towns, to better understand the kinds of upgrades and safety improvements it could make for pedestrians across the state. Sevtsuk hopes that still other places will take notice of the New York City study and recognize that the tools are in place to analyze foot traffic more broadly in U.S. cities, to address the urgent need to decarbonize cities, and to start balancing what he views as the disproportionate focus on car travel prevalent in 20th-century urban planning.

“I hope this can inspire other cities to invest in modeling foot traffic and mapping pedestrian infrastructure as well,” Sevtsuk says. “Very few cities make plans for pedestrian mobility or examine rigorously how future developments will impact foot-traffic. But they can. Our models serve as a test bed for making future changes.” 


Some early life forms may have breathed oxygen well before it filled the atmosphere

A new study suggests aerobic respiration began hundreds of millions of years earlier than previously thought.


Oxygen is a vital and constant presence on Earth today. But that hasn’t always been the case. It wasn’t until around 2.3 billion years ago that oxygen became a permanent fixture in the atmosphere, during a pivotal period known as the Great Oxidation Event (GOE), which set the evolutionary course for oxygen-breathing life as we know it today.

A new study by MIT researchers suggests some early forms of life may have evolved the ability to use oxygen hundreds of millions of years before the GOE. The findings may represent some of the earliest evidence of aerobic respiration on Earth.

In a study appearing today in the journal Palaeogeography, Palaeoclimatology, Palaeoecology, MIT geobiologists traced the evolutionary origins of a key enzyme that enables organisms to use oxygen. The enzyme is found in the vast majority of aerobic, oxygen-breathing life forms today. The team discovered that this enzyme evolved during the Mesoarchean — a geological period that predates the Great Oxidation Event by hundreds of millions of years.

The team’s results may help to explain a longstanding puzzle in Earth’s history: Why did it take so long for oxygen to build up in the atmosphere?

The very first producers of oxygen on the planet were cyanobacteria — microbes that evolved the ability to use sunlight and water to photosynthesize, releasing oxygen as a byproduct. Scientists have determined that cyanobacteria emerged around 2.9 billion years ago. The microbes, then, were presumably churning out oxygen for hundreds of millions of years before the Great Oxidation Event. So, where did all of cyanobacteria’s early oxygen go?

Scientists suspect that rocks may have drawn down a large portion of oxygen early on, through various geochemical reactions. The MIT team’s new study now suggests that biology may have also played a role.

The researchers found that some organisms may have evolved the enzyme to use oxygen hundreds of millions of years before the Great Oxidation Event. This enzyme may have enabled the organisms living near cyanobacteria to gobble up any small amounts of oxygen that the microbes produced, in turn delaying oxygen’s accumulation in the atmosphere for hundreds of millions of years.

“This does dramatically change the story of aerobic respiration,” says study co-author Fatima Husain, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “Our study adds to this very recently emerging story that life may have used oxygen much earlier than previously thought. It shows us how incredibly innovative life is at all periods in Earth’s history.”

The study’s other co-authors include Gregory Fournier, associate professor of geobiology at MIT, along with Haitao Shang and Stilianos Louca of the University of Oregon.

First respirers

The new study adds to a long line of work at MIT aiming to piece together oxygen’s history on Earth. This body of research has helped to pin down the timing of the Great Oxidation Event as well as the first evidence of oxygen-producing cyanobacteria. The overall understanding that has emerged is that oxygen was first produced by cyanobacteria around 2.9 billion years ago, while the Great Oxidation Event — when oxygen finally accumulated enough to persist in the atmosphere — took place much later, around 2.33 billion years ago.

For Husain and her colleagues, this apparent delay between oxygen’s first production and its eventual persistence inspired a question.

“We know that the microorganisms that produce oxygen were around well before the Great Oxidation Event,” Husain says. “So it was natural to ask, was there any life around at that time that could have been capable of using that oxygen for aerobic respiration?”

If there were in fact some life forms that were using oxygen, even in small amounts, they might have played a role in keeping oxygen from building up in the atmosphere, at least for a while.

To investigate this possibility, the MIT team looked to heme-copper oxygen reductases, a set of enzymes that are essential for aerobic respiration. The enzymes reduce oxygen to water, and they are found in the majority of aerobic, oxygen-breathing organisms today, from bacteria to humans.

“We targeted the core of this enzyme for our analyses because that’s where the reaction with oxygen is actually taking place,” Husain explains.

Tree dates

The team aimed to trace the enzyme’s evolution backward in time to see when the enzyme first emerged to enable organisms to use oxygen. They first identified the enzyme’s genetic sequence and then used an automated search tool to look for this same sequence in databases containing the genomes of millions of different species of organisms.

“The hardest part of this work was that we had too much data,” Fournier says. “This enzyme is just everywhere and is present in most modern living organisms. So we had to sample and filter the data down to a dataset that was representative of the diversity of modern life and also small enough to do computation with, which is not trivial.”

The team ultimately isolated the enzyme’s sequence from several thousand modern species and mapped these sequences onto an evolutionary tree of life, based on what scientists know about when each respective species likely evolved and branched off. They then looked through this tree for species that might offer additional information about the timing of their origins.

If, for instance, there is a fossil record for a particular organism on the tree, that record would include an estimate of when that organism appeared on Earth. The team would use that fossil’s age to “pin” a date to that organism on the tree. In a similar way, they could place pins across the tree to effectively tighten their estimates for when in time the enzyme evolved from one species to the next.
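The logic of pinning can be illustrated with a toy example. The sketch below uses an invented three-species tree and invented fossil ages, not the study’s data or its molecular-clock methods; it simply shows how dated tips place a minimum age on the ancestral node where the enzyme would already have had to exist:

```python
# Toy illustration of "pinning" fossil dates onto a tree. Each dated tip constrains
# how old its ancestors must be: an ancestor is at least as old as its oldest dated descendant.

# Hypothetical tree: parent -> children (ages in billions of years for dated tips).
children = {
    "root": ["cladeA", "cladeB"],
    "cladeA": ["species1", "species2"],
    "cladeB": ["species3"],
}
fossil_age = {"species1": 2.5, "species2": 1.9, "species3": 2.9}  # hypothetical pins

def minimum_age(node):
    """Oldest fossil pin found among this node's descendants (its minimum age)."""
    if node in fossil_age:
        return fossil_age[node]
    return max(minimum_age(child) for child in children[node])

print(f"root (enzyme origin) is at least {minimum_age('root')} billion years old")
```

Layering many such pins across a tree with thousands of tips is what lets the estimates tighten toward a specific window in time.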

In the end, the researchers were able to trace the enzyme as far back as the Mesoarchean — a geological era that lasted from 3.2 to 2.8 billion years ago. It’s around this time that the team suspects the enzyme — and organisms’ ability to use oxygen — first emerged. This period predates the Great Oxidation Event by several hundred million years.

The new findings suggest that, shortly after cyanobacteria evolved the ability to produce oxygen, other living things evolved the enzyme to use that oxygen. Any such organism that happened to live near cyanobacteria would have been able to quickly take up the oxygen that the bacteria churned out. These early aerobic organisms may have then played some role in preventing oxygen from escaping to the atmosphere, delaying its accumulation for hundreds of millions of years.

“Considered all together, MIT research has filled in the gaps in our knowledge of how Earth’s oxygenation proceeded,” Husain says. “The puzzle pieces are fitting together and really underscore how life was able to diversify and live in this new, oxygenated world.”

This research was supported, in part, by the Research Corporation for Science Advancement Scialog program.


T. Alan Hatton receives Bernard M. Gordon Prize for Innovation in Engineering and Technology Education

Former Chemical Engineering Practice School director recognized by the National Academy of Engineering for decades of leadership advancing immersive, industry-centered learning at MIT.


The National Academy of Engineering (NAE) has announced T. Alan Hatton, MIT’s Ralph Landau Professor of Chemical Engineering Practice, Post-Tenure, as the recipient of the 2026 Bernard M. Gordon Prize for Innovation in Engineering and Technology Education, recognizing his transformative leadership of the Institute’s David H. Koch School of Chemical Engineering Practice. The award citation highlights his efforts to advance “an immersive, industry-integrated educational model that has produced thousands of engineering leaders, strengthening U.S. technological competitiveness and workforce readiness.”

The Gordon Prize recognizes “new modalities and experiments in education that develop effective engineering leaders.” The prize is awarded annually and carries a $500,000 cash award, half granted to the recipient and the remainder granted to their institution to support the recognized innovation.

“As engineering challenges become more complex and interdisciplinary, education must evolve alongside them,” says Paula Hammond, Institute Professor and dean of the School of Engineering. “Under Alan’s leadership, the Practice School has demonstrated how rigorous academics, real industrial problems, and student responsibility can be woven together into an educational experience that is both powerful and adaptable. His work offers a compelling blueprint for the future of engineering education.”

Hatton served as director of the Practice School for 36 years, from 1989 until his retirement in 2025. When he assumed the role, the program worked with a limited number of host companies, largely within traditional chemical industries. Over time, Hatton reshaped the program’s scope and structure, enabling it to operate across continents and sectors to offer students exposure to diverse technologies, organizational cultures, and geographic settings.

“The MIT Chemical Engineering Practice School represents a level of experiential learning that few programs anywhere can match,” says Kristala L. J. Prather, the Arthur D. Little Professor and head of the Department of Chemical Engineering. “This recognition reflects not only Alan’s extraordinary personal contributions, but also the enduring value of a program that prepares students to deliver impact from their very first day as engineers.”

Central to Hatton’s approach was a deliberate strategy of adaptability. He introduced a model in which new companies are recruited regularly as Practice School hosts, broadening participation while keeping the program aligned with emerging technologies and industry needs. He also strengthened on-campus preparation by launching an intensive project management course during MIT’s Independent Activities Period (IAP) — training that has since become foundational for students entering complex, team-based industrial environments.

This forward-looking vision is shared by current Practice School leadership. Fikile Brushett, Ralph Landau Professor of Chemical Engineering Practice and director of the program, emphasizes that Hatton’s legacy is not a static one. “Alan consistently positioned the Practice School to respond to change — whether in technology, industry expectations, or educational practice,” Brushett says. “The Gordon Prize provides an opportunity to further evolve the program while staying true to its core principles of immersion, rigor, and partnership.”

In recognition of Hatton’s service, the department established the T. Alan Hatton Fund in fall 2025 with support from Practice School alumni. The fund is dedicated to helping launch new Practice School stations, lowering barriers for emerging partners and sustaining the program’s ability to engage with a broad and diverse set of industries.

Learning that delivers value on both sides

The Practice School’s impact extends well beyond the classroom. Student teams are embedded directly within host organizations — often in manufacturing plants or research and development centers — where they tackle open-ended technical problems under real operational constraints. Sponsors routinely cite tangible outcomes from these projects, including improved processes, reduced costs, and new technical directions informed by MIT-level analysis.

For students, the experience offers something difficult to replicate in traditional academic settings: sustained responsibility for complex work, direct interaction with industry professionals, and repeated opportunities to present, defend, and refine their ideas. The result is a training environment that closely mirrors professional engineering practice, while retaining the reflective depth of an academic program.

A program shaped by history — and by change

The Practice School was established in 1916 to complement classroom instruction with hands-on industrial experience, an idea that was unconventional at the time. More than a century later, the program has not only endured but continually reinvented itself, expanding far beyond its early focus on regional chemical manufacturing.

Today, Practice School students work with companies around the world in fields that include pharmaceuticals, food production, energy, advanced materials, software, and finance. The program remains a defining feature of graduate education in MIT’s Department of Chemical Engineering, linking research strengths with the practical demands of industry.

Participation in the Practice School is a required component of the department’s Master of Science in Chemical Engineering Practice (MSCEP) and PhD/ScD Chemical Engineering Practice (CEP) programs. After completing coursework, students attend two off-campus stations, spending two months at each site. Teams of two or three students work on month-long projects, culminating in formal presentations and written reports delivered to host organizations. Recent stations have included placements with Evonik in Germany, AstraZeneca in Maryland, EGA in the United Arab Emirates, AspenTech in Massachusetts, and Shell Technology Center and Dimensional Energy in Texas.

“I’m deeply honored by this recognition,” Hatton says. “The Practice School has always been about learning through responsibility — placing students in situations where their work matters. This award will help MIT build on that foundation and explore ways to extend the model so it can serve even more students and partners in the years ahead.”

Hatton obtained his BS and MS degrees in chemical engineering at the University of Natal in Durban, South Africa, before spending three years as a researcher at the Council for Scientific and Industrial Research in Pretoria. He later earned his PhD at the University of Wisconsin at Madison and joined the MIT faculty in 1982 as an assistant professor.

Over the course of his career at MIT, Hatton helped extend the Practice School model beyond campus through his involvement in the Singapore–MIT Alliance for Research and Technology and the Cambridge–MIT Institute, contributing to the development of practice-based engineering education in international settings. He also served as co-director of the MIT Energy Initiative’s Low-Carbon Energy Center focused on carbon capture, utilization, and storage.

Hatton has long been recognized for his commitment to education and service. From 1983 to 1986, he served as a junior faculty housemaster (now known as an associate head of house) in MacGregor House and received MIT’s Everett Moore Baker Teaching Award in 1983. His professional honors include being named a founding fellow of the American Institute of Medical and Biological Engineering and an honorary professorial fellow at the University of Melbourne in Australia.

In addition to his educational leadership, Hatton has made substantial contributions to the broader engineering community, chairing multiple national and international conferences in the areas of colloids and separation processes and delivering numerous plenary, keynote, and invited lectures worldwide.

Hatton will formally receive the Bernard M. Gordon Prize at a ceremony hosted by the National Academy of Engineering at MIT on April 30.


A satellite language network in the brain

Researchers find a component of the brain’s dedicated language network in the cerebellum, a region better known for coordinating movement.


The ability to use language to communicate is one of the things that makes us human. At MIT’s McGovern Institute for Brain Research, scientists led by Evelina Fedorenko have defined an entire network of areas within the brain dedicated to this ability, which work together when we speak, listen, read, write, or sign.

Much of the language network lies within the brain’s neocortex, where many of our most sophisticated cognitive functions are carried out. Now, Fedorenko’s lab, which is part of MIT's Department of Brain and Cognitive Sciences, has identified language-processing regions within the cerebellum, extending the language network to a part of the brain better known for helping to coordinate the body’s movements. Their findings are reported Jan. 21 in the journal Neuron.

“It’s like there’s this region in the cerebellum that we’ve been forgetting about for a long time,” says Colton Casto, a graduate student at Harvard and MIT who works in Fedorenko’s lab. “If you’re a language researcher, you should be paying attention to the cerebellum.”

Imaging the language network

There have been hints that the cerebellum makes important contributions to language. Some functional imaging studies detected activity in this area during language use, and people who suffer damage to the cerebellum sometimes experience language impairments. But no one had been able to pin down exactly which parts of the cerebellum were involved, or tease out their roles in language processing.

To get some answers, Fedorenko’s lab took a systematic approach, using methods they have used to map the language network in the neocortex. For 15 years, the lab has captured functional brain imaging data as volunteers carried out various tasks inside an MRI scanner. By monitoring brain activity as people engaged in different kinds of language tasks, like reading sentences or listening to spoken words, as well as non-linguistic tasks, like listening to noise or memorizing spatial patterns, the team has been able to identify parts of the brain that are exclusively dedicated to language processing.

Their work shows that everyone’s language network uses the same neocortical regions. The precise anatomical location of these regions varies, however, so to study the language network in any individual, Fedorenko and her team must map that person’s network inside an MRI scanner using their language-localizer tasks.

Satellite language network

While the Fedorenko lab has largely focused on how the neocortex contributes to language processing, their brain scans also capture activity in the cerebellum. So Casto revisited those scans, analyzing cerebellar activity from more than 800 people to look for regions involved in language processing. Fedorenko points out that teasing out the individual anatomy of the language network turned out to be particularly vital in the cerebellum, where neurons are densely packed and areas with different functional specializations sit very close to one another. Ultimately, Casto was able to identify four cerebellar areas that were consistently engaged during language use.

Three of these regions were clearly involved in language use, but also reliably became engaged during certain kinds of non-linguistic tasks. Casto says this was a surprise, because all the core language areas in the neocortex are dedicated exclusively to language processing. The researchers speculate that the cerebellum may be integrating information from different parts of the cortex — a function that could be important for many cognitive tasks.

“We’ve found that language is distinct from many, many other things — but at some point, complex cognition requires everything to work together,” Fedorenko says. “How do these different kinds of information get connected? Maybe parts of the cerebellum serve that function.”

The researchers also found a spot in the right posterior cerebellum with activity patterns that more closely echoed those of the language network in the neocortex. This region stayed silent during non-linguistic tasks, but became active during language use. For all of the linguistic activities that Casto analyzed, this region exhibited patterns of activity that were very similar to what the lab has seen in neocortical components of the language network. “Its contribution to language seems pretty similar,” Casto says. The team describes this area as a “cerebellar satellite” of the language network.

Still, the researchers think it’s unlikely that neurons in the cerebellum, which are organized very differently than those in the neocortex, replicate the precise function of other parts of the language network. Fedorenko’s team plans to explore the function of this satellite region more deeply, investigating whether it may participate in different kinds of tasks.

The researchers are also exploring the possibility that the cerebellum is particularly important for language learning — playing an outsized role during development, or when people learn languages later in life.

Fedorenko says the discovery may also have implications for treating language impairments caused when an injury or disease damages the brain’s neocortical language network. “This area may provide a very interesting potential target to help recovery from aphasia,” Fedorenko says.

Currently, researchers are exploring the possibility that non-invasively stimulating language-associated parts of the brain might promote language recovery. “This right cerebellar region may be just the right thing to potentially stimulate to up-regulate some of that function that’s lost,” Fedorenko says.


Helping AI agents search to get the best results out of large language models

EnCompass executes AI agent programs by backtracking and making multiple attempts, finding the best set of outputs generated by an LLM. It could help coders work with AI agents more efficiently.


Whether you’re a scientist brainstorming research ideas or a CEO hoping to automate a task in human resources or finance, you’ll find that artificial intelligence tools are becoming the assistants you didn’t know you needed. In particular, many professionals are tapping into the talents of semi-autonomous software systems called AI agents, which can call on AI at specific points to solve problems and complete tasks.

AI agents are particularly effective when they use large language models (LLMs) because those systems are powerful, efficient, and adaptable. One way to program such technology is by describing in code what you want your system to do (the “workflow”), including when it should use an LLM. If you were a software company trying to revamp your old codebase to use a more modern programming language for better optimizations and safety, you might build a system that uses an LLM to translate the codebase one file at a time, testing each file as you go.

But what happens when LLMs make mistakes? You’ll want the agent to backtrack to make another attempt, incorporating lessons it learned from previous mistakes. Coding this up can take as much effort as implementing the original agent; if your system for translating a codebase contained thousands of lines of code, then you’d be making thousands of lines of code changes or additions to support the logic for backtracking when LLMs make mistakes. 

To save programmers time and effort, researchers with MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Asari AI have developed a framework called “EnCompass.” 

With EnCompass, you no longer have to make these changes yourself. Instead, when EnCompass runs your program, it automatically backtracks if LLMs make mistakes. EnCompass can also make clones of the program runtime to make multiple attempts in parallel in search of the best solution. In full generality, EnCompass searches over the different possible paths your agent could take as a result of the different possible outputs of all the LLM calls, looking for the path where the LLM finds the best solution.

Then, all you have to do is annotate the locations where you may want to backtrack or clone the program runtime, and record any information that may be useful to the strategy for searching over your agent’s possible execution paths (the search strategy). You can then specify the search strategy separately, either using one that EnCompass provides out of the box or, if desired, implementing your own.

“With EnCompass, we’ve separated the search strategy from the underlying workflow of an AI agent,” says lead author Zhening Li ’25, MEng ’25, who is an MIT electrical engineering and computer science (EECS) PhD student, CSAIL researcher, and research consultant at Asari AI. “Our framework lets programmers easily experiment with different search strategies to find the one that makes the AI agent perform the best.” 

EnCompass was used for agents implemented as Python programs that call LLMs, where it demonstrated noticeable code savings. It reduced the coding effort of implementing search by up to 80 percent across agents, including one that translates code repositories and another that discovers transformation rules for digital grids. In the future, EnCompass could enable agents to tackle large-scale tasks, including managing massive code libraries, designing and carrying out science experiments, and creating blueprints for rockets and other hardware.

Branching out

When programming your agent, you mark particular operations — such as calls to an LLM — where results may vary. These annotations are called “branchpoints.” If you imagine your agent program as generating a single plot line of a story, then adding branchpoints turns the story into a choose-your-own-adventure story game, where branchpoints are locations where the plot branches into multiple future plot lines. 

You can then specify the strategy that EnCompass uses to navigate that story game, in search of the best possible ending to the story. This can include launching parallel threads of execution or backtracking to a previous branchpoint when you get stuck in a dead end.

Users can also plug in a few common search strategies that EnCompass provides out of the box, or define their own custom strategy. For example, you could opt for Monte Carlo tree search, which builds a search tree by balancing exploration and exploitation, or beam search, which keeps the best few outputs from every step. EnCompass makes it easy to experiment with different approaches to find the strategy that maximizes the likelihood of successfully completing your task.
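The article does not spell out EnCompass’s API, so the sketch below should not be read as its actual interface. It is a generic, self-contained Python illustration of the underlying idea: expanding several candidate LLM outputs at each branchpoint and keeping only the best few paths (a simple beam search), with stub functions standing in for the LLM call and the scoring signal:

```python
import random

def llm_candidates(prompt, n=3):
    """Stand-in for sampling n candidate outputs from an LLM at a branchpoint."""
    return [f"{prompt} -> draft {random.randint(0, 99)}" for _ in range(n)]

def score(output):
    """Stand-in for a quality signal, e.g. how many unit tests a translated file passes."""
    return random.random()

def beam_search(initial_prompt, steps=3, beam_width=2):
    # Each beam entry is one "plot line": the sequence of outputs chosen so far.
    beam = [[initial_prompt]]
    for _ in range(steps):
        candidates = []
        for path in beam:                        # branchpoint: expand every surviving path
            for output in llm_candidates(path[-1]):
                candidates.append(path + [output])
        # Keep only the best few paths; discarded paths are, in effect, backtracked.
        candidates.sort(key=lambda path: score(path[-1]), reverse=True)
        beam = candidates[:beam_width]
    return beam[0]

print(beam_search("translate module Foo.java"))
```

Swapping the keep-the-best-few rule for a different policy, such as a tree search, is what it means to change the search strategy without touching the workflow itself.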

The coding efficiency of EnCompass

So just how code-efficient is EnCompass for adding search to agent programs? According to the researchers’ findings, the framework drastically cut down how much code programmers needed to add to their agent programs to implement search, helping them experiment with different strategies to find the one that performs best.

For example, the researchers applied EnCompass to an agent that translates a repository of code from the Java programming language, which is commonly used to program apps and enterprise software, to Python. They found that implementing search with EnCompass — mainly involving adding branchpoint annotations and annotations that record how well each step did — required 348 fewer lines of code (about 82 percent) than implementing it by hand. They also demonstrated how EnCompass enabled them to easily try out different search strategies, identifying the best strategy to be a two-level beam search algorithm, achieving an accuracy boost of 15 to 40 percent across five different repositories at a search budget of 16 times the LLM calls made by the agent without search.

“As LLMs become a more integral part of everyday software, it becomes more important to understand how to efficiently build software that leverages their strengths and works around their limitations,” says co-author Armando Solar-Lezama, who is an MIT professor of EECS and CSAIL principal investigator. “EnCompass is an important step in that direction.”

The researchers add that EnCompass targets agents where a program specifies the steps of the high-level workflow; the current iteration of their framework is less applicable to agents that are entirely controlled by an LLM. “In those agents, instead of having a program that specifies the steps and then using an LLM to carry out those steps, the LLM itself decides everything,” says Li. “There is no underlying programmatic workflow, so you can execute inference-time search on whatever the LLM invents on the fly. In this case, there’s less need for a tool like EnCompass that modifies how a program executes with search and backtracking.”

Li and his colleagues plan to extend EnCompass to more general search frameworks for AI agents. They also plan to test their system on more complex tasks to refine it for real-world uses, including at companies. What’s more, they’re evaluating how well EnCompass helps agents work with humans on tasks like brainstorming hardware designs or translating much larger code libraries. For now, EnCompass is a powerful building block that enables humans to tinker with AI agents more easily, improving their performance.

“EnCompass arrives at a timely moment, as AI-driven agents and search-based techniques are beginning to reshape workflows in software engineering,” says Carnegie Mellon University Professor Yiming Yang, who wasn’t involved in the research. “By cleanly separating an agent’s programming logic from its inference-time search strategy, the framework offers a principled way to explore how structured search can enhance code generation, translation, and analysis. This abstraction provides a solid foundation for more systematic and reliable search-driven approaches to software development.”  

Li and Solar-Lezama wrote the paper with two Asari AI researchers: Caltech Professor Yisong Yue, an advisor at the company; and senior author Stephan Zheng, who is the founder and CEO. Their work was supported by Asari AI.

The team’s work was presented at the Conference on Neural Information Processing Systems (NeurIPS) in December.


New vaccine platform promotes rare protective B cells

Based on a virus-like particle built with a DNA scaffold, the approach could generate broadly neutralizing antibody responses against HIV or influenza.


A longstanding goal of immunotherapies and vaccine research is to induce antibodies in humans that neutralize deadly viruses such as HIV and influenza. Of particular interest are antibodies that are “broadly neutralizing,” meaning they can in principle eliminate multiple strains of a virus such as HIV, which mutates rapidly to evade the human immune system.

Researchers at MIT and the Scripps Research Institute have now developed a vaccine that generates a significant population of rare precursor B cells that are capable of evolving to produce broadly neutralizing antibodies. Expanding these cells is the first step toward a successful HIV vaccine.

The researchers’ vaccine design uses DNA instead of protein as a scaffold to fabricate a virus-like particle (VLP) displaying numerous copies of an engineered HIV immunogen called eOD-GT8, which was developed at Scripps. This vaccine generated substantially more precursor B cells in a humanized mouse model compared to a protein-based virus-like particle that has shown significant success in human clinical trials.

Preclinical studies showed that the DNA-VLP generated eight times more of the desired, or “on-target,” B cells than the clinical product, which was already shown to be highly potent.

“We were all surprised that this already outstanding VLP from Scripps was significantly outperformed by the DNA-based VLP,” says Mark Bathe, an MIT professor of biological engineering and an associate member of the Broad Institute of MIT and Harvard. “These early preclinical results suggest a potential breakthrough as an entirely new, first-in-class VLP that could transform the way we think about active immunotherapies, and vaccine design, across a variety of indications.”

The researchers also showed that the DNA scaffold doesn’t itself induce an immune response when used to deliver the engineered HIV antigen. This means the DNA VLP might be used to deliver multiple antigens when boosting strategies are needed, such as for challenging diseases like HIV.

“The DNA-VLP allowed us for the first time to assess whether B cells targeting the VLP itself limit the development of ‘on target’ B cell responses — a longstanding question in vaccine immunology,” says Darrell Irvine, a professor of immunology and microbiology at the Scripps Research Institute and a Howard Hughes Medical Institute Investigator.

Bathe and Irvine are the senior authors of the study, which appears today in Science. The paper’s lead author is Anna Romanov PhD ’25.

Priming B cells

The new study is part of a major ongoing global effort to develop active immunotherapies and vaccines that expand specific lineages of B cells. All humans have the necessary genes to produce the right B cells that can neutralize HIV, but they are exceptionally rare and require many mutations to become broadly neutralizing. If exposed to the right series of antigens, however, these cells can in principle evolve to eventually produce the requisite broadly neutralizing antibodies.

In the case of HIV, one such target antibody, called VRC01, was discovered by National Institutes of Health researchers in 2010 when they studied humans living with HIV who did not develop AIDS. This set off a major worldwide effort to develop an HIV vaccine that would induce this target antibody, but this remains an outstanding challenge.

Generating HIV-neutralizing antibodies is believed to require three stages of vaccination, each one initiated by a different antigen that helps guide B cell evolution toward the correct target, the native HIV envelope protein gp120.

In 2013, William Schief, a professor of immunology and microbiology at Scripps, reported an engineered antigen called eOD-GT6 that could be used for the first step in this process, known as priming. His team subsequently upgraded the antigen to eOD-GT8. Vaccination with eOD-GT8 arrayed on a protein VLP generated early antibody precursors to VRC01 both in mice and more recently in humans, a key first step toward an HIV vaccine.

However, the protein VLP also generated substantial “off-target” antibodies that bound the irrelevant, and potentially highly distracting, protein VLP itself. This could have unknown consequences on propagating target B cells of interest for HIV, as well as other challenging immunotherapy applications.

The Bathe and Irvine labs set out to test if they could use a particle made from DNA, instead of protein, to deliver the priming antigen. These nanoscale particles are made using DNA origami, a method that offers precise control over the structure of synthetic DNA and allows researchers to attach viral antigens at specific locations.

In 2024, Bathe and Daniel Lingwood, an associate professor at Harvard Medical School and a principal investigator at the Ragon Institute, showed this DNA VLP could be used to deliver a SARS-CoV-2 vaccine in mice to generate neutralizing antibodies. From that study, the researchers learned that the DNA scaffold does not induce antibodies to the VLP itself, unlike proteins. They wondered whether this might also enable a more focused antibody response.

Building on these results, Romanov, co-advised by Bathe and Irvine, set off to apply the DNA VLP to the Scripps HIV priming vaccine, based on eOD-GT8.

“Our earlier work with SARS-CoV-2 antigens on DNA-VLPs showed that DNA-VLPs can be used to focus the immune response on an antigen of interest. This property seemed especially useful for a case like HIV, where the B cells of interest are exceptionally rare. Thus, we hypothesized that reducing the competition among other irrelevant B cells (by delivering the vaccine on a silent DNA nanoparticle) may help these rare cells have a better chance to survive,”  Romanov says.

Initial studies in mice, however, showed the vaccine did not induce a sufficient early B cell response to the first, priming dose.

After redesigning the DNA VLPs, Romanov and colleagues found that a smaller diameter version with 60 instead of 30 copies of the engineered antigen dramatically out-performed the clinical protein VLP construct, both in overall number of antigen-specific B cells and the fraction of B cells that were on-target to the specific HIV domain of interest. This was a result of improved retention of the particles in B cell follicles in lymph nodes and better collaboration with helper T cells, which promote B cell survival.

Overall, these improvements enabled the particles to generate eightfold more on-target B cells than the vaccine consisting of eOD-GT8 carried by a protein scaffold. Another key finding, elucidated by the Lingwood lab, was that the DNA particles promoted VRC01 precursor B cells toward the VRC01 antibody more efficiently than the protein VLP.

“In the field of vaccine immunology, the question of whether B cell responses to a targeted protective epitope on a vaccine antigen might be hindered by responses to neighboring off-target epitopes on the same antigen has been under intense investigation,” says Schief, who is also vice president for protein design at Moderna. “There are some data from other studies suggesting that off-target responses might not have much impact, but this study shows quite convincingly that reducing off-target responses by using a DNA VLP can improve desired on-target responses.”

“While nanoparticle formulations have been great at boosting antibody responses to various antigens, there is always this nagging question of whether competition from B cells specific for the particle’s own structural antigens won’t get in the way of antibody responses to targeted epitopes,” says Gabriel Victora, a professor of immunology, virology, and microbiology at Rockefeller University, who was not involved in the study. “DNA-based particles that leverage B cells’ natural tolerance to nucleic acids are a clever idea to circumvent this problem, and the research team’s elegant experiments clearly show that this strategy can be used to make difficult epitopes easier to target.”

A “silent” scaffold

The fact that the DNA-VLP scaffold doesn’t induce scaffold-specific antibodies means that it could be used to carry second and potentially third antigens needed in the vaccine series, as the researchers are currently investigating. It also might offer significantly improved on-target antibodies for numerous antigens that are outcompeted and dominated by off-target, irrelevant protein VLP scaffolds in this or other applications.

“A breakthrough of this paper is the rigorous, mechanistic quantification of how DNA-VLPs can ‘focus’ antibody responses on target antigens of interest, which is a consequence of the silent nature of this DNA-based scaffold we’ve previously shown is stealth to the immune system,” Bathe says.

More broadly, this new type of VLP could be used to generate other kinds of protective antibody responses against pandemic threats such as flu, or potentially against chemical warfare agents, the researchers suggest. Alternatively, it might be used as an active immunotherapy to generate antibodies that target amyloid beta or tau protein to treat degenerative diseases such as Alzheimer’s, or to generate antibodies that target noxious chemicals such as opioids or nicotine to help people suffering from addiction.

The research was funded by the National Institutes of Health; the Ragon Institute of MGH, MIT, and Harvard; the Howard Hughes Medical Institute; the National Science Foundation; the Novo Nordisk Foundation; a Koch Institute Support (core) Grant from the National Cancer Institute; the National Institute of Environmental Health Sciences; the Gates Foundation Collaboration for AIDS Vaccine Discovery; the IAVI Neutralizing Antibody Center; the National Institute of Allergy and Infectious Diseases; and the U.S. Army Research Office through MIT’s Institute for Soldier Nanotechnologies.


“Essential” torch heralds the start of the 2026 Winter Olympics

Professor of the practice Carlo Ratti designed this year’s Olympic torch with the ethos and principles he brings to his work at MIT.


Before the thrill of victory; before the agony of defeat; before the gold medalist’s national anthem plays, there is the Olympic torch. A symbol of unity, friendship, and the spirit of competition, the torch links today’s Olympic Games to its heritage in ancient Greece.

The torch for the 2026 Milano Cortina Olympic Games and Paralympic Games was designed by Carlo Ratti, a professor of the practice in the MIT Department of Urban Studies and Planning and the director of the Senseable City Lab in the MIT School of Architecture and Planning.

A native of Turin, Italy, and a designer and architect respected worldwide, Ratti has seen his work and that of his firm, Carlo Ratti Associati, featured at international expositions such as the French Pavilion at the Osaka Expo (World’s Fair) in 2025 and the Italian Pavilion at the Dubai Expo in 2020. Their design for The Cloud, a 400-foot-tall spherical structure that would have served as a unique observation deck, was a finalist for the 2012 Olympic Games in London but was ultimately not built.

Ratti relishes the opportunity to participate in these events.

“You can push the boundaries more at these [venues] because you are building something that is temporary,” says Ratti. “They allow for more creativity, so it’s a good moment to experiment.”

Based on his previous work, Ratti was invited to design the torch by the Olympic organizers. He approached the project much as he instructs his students working in his lab.

“It is about what the object or the design is to convey,” Ratti says. “How it can touch people, how it can relate to people, how it can transmit emotions. That’s the most important thing.”

To Ratti, the fundamental aspect of the torch is the flame. A few months before the games begin, the torch is lit in Olympia, Greece, using a parabolic mirror reflecting the sun’s rays. In ancient Greece, the flame was considered “sacred” and was to remain lit throughout the competition. Ratti, familiar with the history of the Olympic torch, is less impressed with designs that he deems overwrought. Many torches added superfluous ornamentation to their exteriors, much as cars are designed around their engines, he says. Instead, he decided to strip away everything that wasn’t essential to the flame itself.

What is “essential”

“Essential” — the official name for the 2026 Winter Olympic torch — was designed to perform regardless of the weather, wind, or altitude it would encounter on its journey from Olympia to Milan. The process took three years with many designs created, considered, and discussed with the local and global Olympic committees and Olympic sponsor Versalis. And, as with Ratti’s work at MIT, researchers and engineers collaborated in the effort.

“Each design pushed the boundaries in different directions, but all of them with the key principle to put the flame at the center,” says Ratti, who wanted the torch to embody “an ethos of frugality.”

At the core of Ratti’s torch is a high-performance burner powered by bio-LPG produced by energy company ENI from 100 percent renewable feedstocks. Furthermore, the torch can be recharged 10 times; in previous years, torches were used only once. This reusability allowed for a 10-fold reduction in the number of torches created.

Also unique to this torch is its internal mechanism, which is visible via a vertical opening along its side, allowing audiences to see the burner in action. This reinforces the desire to keep the emphasis on the flame instead of the object.

In keeping with the requisite for minimalism and sustainability, the torch is primarily composed of recycled aluminum. It is the lightest torch created for the Olympics, weighing just under 2.5 pounds. The body is finished with a PVD coating that is heat resistant, letting it shift colors by reflecting the environments — such as the mountains and the city lights — through which it is carried. The Olympic torch is a blue-green shade, while the Paralympic torch is gold.

The torch won an honorable mention in Italy’s most prestigious industrial design award, the Compasso d’Oro.

The Olympic relay

The torch relay is considered an event itself, drawing thousands as it is carried to the host city by hundreds of volunteers. Its journey for the 2026 Olympics started in late November and, after visiting cities across Greece, will have covered all 110 Italian provinces before arriving in Milan for the opening ceremony on Feb. 6.

Ratti carried the torch for a portion of its journey through Turin in mid-January — another joyful invitation to this quadrennial event. He says winter sports are his favorite; he grew up skiing where these games are being held, and has since skied around the world — from Utah to the Himalayas.

In addition to a highly sustainable torch, there was another statement Ratti wanted to make: He wanted to showcase the Italy of today and of the future. It is the same issue he confronted as the curator of the 2025 Biennale Architettura in Venice, titled “Intelligens. Natural. Artificial. Collective”: an architecture exhibition, but one infused with technology for the future.

“When people think about Italy, they often think about the past, from ancient Romans to the Renaissance or Baroque period,” he says. “Italy does indeed have a significant past. But the reality is that it is also the second-largest industrial powerhouse in Europe and is leading in innovation and tech in many fields. So, the 2026 torch aims to combine both past and future. It draws on Italian design from the past, but also on future-forward technologies.”

“There should be some kind of architectural design always translating into form some kind of ethical principles or ideals. It’s not just about a physical thing. Ultimately, it’s about the human dimension. That applies to the work we do at MIT or the Olympic torch.”


Brian Hedden named co-associate dean of Social and Ethical Responsibilities of Computing

He joins Nikos Trichakis in guiding the cross-cutting initiative of the MIT Schwarzman College of Computing.


Brian Hedden PhD ’12 has been appointed co-associate dean of the Social and Ethical Responsibilities of Computing (SERC) at MIT, a cross-cutting initiative in the MIT Schwarzman College of Computing, effective Jan. 16.

Hedden is a professor in the Department of Linguistics and Philosophy, holding an MIT Schwarzman College of Computing shared position with the Department of Electrical Engineering and Computer Science (EECS). He joined the MIT faculty last fall after serving as a faculty member at the Australian National University and the University of Sydney. He earned his BA from Princeton University and his PhD from MIT, both in philosophy.

“Brian is a natural and compelling choice for SERC, as a philosopher whose work speaks directly to the intellectual challenges facing education and research today, particularly in computing and AI. His expertise in epistemology, decision theory, and ethics addresses questions that have become increasingly urgent in an era defined by information abundance and artificial intelligence. His scholarship exemplifies the kind of interdisciplinary inquiry that SERC exists to advance,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science.

Hedden’s research focuses on how we ought to form beliefs and make decisions, and it explores how philosophical thinking about rationality can yield insights into contemporary ethical issues, including ethics of AI. He is the author of “Reasons without Persons: Rationality, Identity, and Time” (Oxford University Press, 2015) and articles on topics such as collective action problems, legal standards of proof, algorithmic fairness, and political polarization.

Joining co-associate dean Nikos Trichakis, the J.C. Penney Professor of Management at the MIT Sloan School of Management, Hedden will help lead SERC and advance the initiative’s ongoing research, teaching, and engagement efforts. He succeeds professor of philosophy Caspar Hare, who stepped down at the conclusion of his three-year term on Sept. 1, 2025.

Since its inception in 2020, SERC has launched a range of programs and activities designed to cultivate responsible “habits of mind and action” among those who create and deploy computing technologies, while fostering the development of technologies in the public interest.

The SERC Scholars Program invites undergraduate and graduate students to work alongside postdoctoral mentors to explore interdisciplinary ethical challenges in computing. The initiative also hosts an annual prize competition that challenges MIT students to envision the future of computing, publishes a twice-yearly series of case studies, and collaborates on coordinated curricular materials, including active-learning projects, homework assignments, and in-class demonstrations. In 2024, SERC introduced a new seed grant program to support MIT researchers investigating ethical technology development; to date, two rounds of grants have been awarded to 24 projects.


Antonio Torralba, three MIT alumni named 2025 ACM fellows

Torralba’s research focuses on computer vision, machine learning, and human visual perception.


Antonio Torralba, Delta Electronics Professor of Electrical Engineering and Computer Science and faculty head of artificial intelligence and decision-making at MIT, has been named to the 2025 cohort of Association for Computing Machinery (ACM) Fellows. He shares the honor of an ACM Fellowship with three MIT alumni: Eytan Adar ’97, MEng ’98; George Candea ’97, MEng ’98; and Gookwon Edward Suh SM ’01, PhD ’05.

A principal investigator within the Computer Science and Artificial Intelligence Laboratory, Torralba received his BS in telecommunications engineering from the Universitat Politècnica de Catalunya, in Spain, in 1994, and a PhD in signal, image, and speech processing from the Institut National Polytechnique de Grenoble, in France, in 2000. At different points in his MIT career, he has been director of both the MIT Quest for Intelligence (now the MIT Siegel Family Quest for Intelligence) and the MIT-IBM Watson AI Lab. 

Torralba’s research focuses on computer vision, machine learning, and human visual perception; as he puts it, “I am interested in building systems that can perceive the world like humans do.” Alongside Phillip Isola and William Freeman, he recently co-authored “Foundations of Computer Vision,” an 800-plus page textbook exploring the foundations and core principles of the field. 

Among other awards and recognitions, he is the recipient of the 2008 National Science Foundation CAREER Award; the 2010 J. K. Aggarwal Prize from the International Association for Pattern Recognition; the 2017 Frank Quick Faculty Research Innovation Fellowship; the Louis D. Smullin (’39) Award for Teaching Excellence; and the 2020 PAMI Mark Everingham Prize. In 2021, he was awarded the inaugural Thomas Huang Memorial Prize by the Pattern Analysis and Machine Intelligence Technical Committee and was named a fellow of the Association for the Advancement of Artificial Intelligence. In 2022, he received an honorary doctoral degree from the Universitat Politècnica de Catalunya.

The rank of ACM fellow, the highest honor bestowed by the professional organization, recognizes registered members of the society selected by their peers for outstanding accomplishments in computing and information technology and/or outstanding service to ACM and the larger computing community.


3 Questions: Using AI to accelerate the discovery and design of therapeutic drugs

Professor James Collins discusses how collaboration has been central to his research into combining computational predictions with new experimental platforms.


In the pursuit of solutions to complex global challenges including disease, energy demands, and climate change, scientific researchers, including at MIT, have turned to artificial intelligence, and to quantitative analysis and modeling, to design and construct engineered cells with novel properties. The engineered cells can be programmed to become new therapeutics — battling, and perhaps eradicating, diseases.

James J. Collins is one of the founders of the field of synthetic biology, and is also a leading researcher in systems biology, the interdisciplinary approach that uses mathematical analysis and modeling of complex systems to better understand biological systems. His research has led to the development of new classes of diagnostics and therapeutics, including in the detection and treatment of pathogens like Ebola, Zika, SARS-CoV-2, and antibiotic-resistant bacteria. Collins, the Termeer Professor of Medical Engineering and Science and professor of biological engineering at MIT, is a core faculty member of the Institute for Medical Engineering and Science (IMES), the director of the MIT Abdul Latif Jameel Clinic for Machine Learning in Health, as well as an institute member of the Broad Institute of MIT and Harvard, and core founding faculty at the Wyss Institute for Biologically Inspired Engineering, Harvard.

In this Q&A, Collins speaks about his latest work and goals for this research.

Q: You’re known for collaborating with colleagues across MIT and at other institutions. How have these collaborations and affiliations helped you with your research?

A: Collaboration has been central to the work in my lab. At the MIT Jameel Clinic for Machine Learning in Health, I formed a collaboration with Regina Barzilay [the Delta Electronics Professor in the MIT Department of Electrical Engineering and Computer Science and affiliate faculty member at IMES] and Tommi Jaakkola [the Thomas Siebel Professor of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society] to use deep learning to discover new antibiotics. This effort combined our expertise in artificial intelligence, network biology, and systems microbiology, leading to the discovery of halicin, a potent new antibiotic effective against a broad range of multidrug-resistant bacterial pathogens. Our results were published in Cell in 2020 and showcased the power of bringing together complementary skill sets to tackle a global health challenge.

At the Wyss Institute, I’ve worked closely with Donald Ingber [the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children’s Hospital, and Hansjörg Wyss Professor of Biologically Inspired Engineering at Harvard], leveraging his organs-on-chips technology to test the efficacy of AI-discovered and AI-generated antibiotics. These platforms allow us to study how drugs behave in human tissue-like environments, complementing traditional animal experiments and providing a more nuanced view of their therapeutic potential.

The common thread across our many collaborations is the ability to combine computational predictions with cutting-edge experimental platforms, accelerating the path from ideas to validated new therapies.

Q: Your research has led to many advances in designing novel antibiotics using generative AI and deep learning. Can you talk about some of the advances you’ve been a part of in the development of drugs that can battle multidrug-resistant pathogens, and what you see on the horizon for breakthroughs in this arena?

A: In 2025, our lab published a study in Cell demonstrating how generative AI can be used to design completely new antibiotics from scratch. We used genetic algorithms and variational autoencoders to generate millions of candidate molecules, exploring both fragment-based designs and entirely unconstrained chemical space. After computational filtering, retrosynthetic modeling, and medicinal chemistry review, we synthesized 24 compounds and tested them experimentally. Seven showed selective antibacterial activity. One lead, NG1, was highly narrow-spectrum, eradicating multi-drug-resistant Neisseria gonorrhoeae, including strains resistant to first-line therapies, while sparing commensal species. Another, DN1, targeted methicillin-resistant Staphylococcus aureus (MRSA) and cleared infections in mice through broad membrane disruption. Both were non-toxic and showed low rates of resistance.

Looking ahead, we are using deep learning to design antibiotics with drug-like properties that make them stronger candidates for clinical development. By integrating AI with high-throughput biological testing, we aim to accelerate the discovery and design of antibiotics that are novel, safe, and effective, ready for real-world therapeutic use. This approach could transform how we respond to drug-resistant bacterial pathogens, moving from a reactive to a proactive strategy in antibiotic development.
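The pipeline Collins describes follows a generate, filter, and rank structure. The toy sketch below illustrates that loop in Python; the mutation, scoring, and filtering functions are placeholders standing in for the lab’s actual variational autoencoders, genetic algorithms, and retrosynthetic and medicinal-chemistry checks, so it shows the shape of the workflow rather than the real models.

```python
import random

def mutate(smiles: str) -> str:
    """Placeholder 'mutation': a real genetic algorithm would apply
    chemically valid edits (e.g., swapping molecular fragments);
    here the edit is purely symbolic."""
    chars = list(smiles)
    i = random.randrange(len(chars))
    chars[i] = random.choice("CNOcno")
    return "".join(chars)

def predicted_activity(smiles: str) -> float:
    """Placeholder for a trained activity model (e.g., a neural network
    scoring antibacterial potential); returns a random score here."""
    return random.random()

def passes_filters(smiles: str) -> bool:
    """Placeholder for the computational filters mentioned above:
    novelty, toxicity, and synthesizability checks in a real pipeline."""
    return len(smiles) > 5

def evolve(seed_population, generations=50, survivors=100):
    """Generate-filter-rank loop: propose candidates, discard failures,
    keep the top scorers, and repeat."""
    population = list(seed_population)
    for _ in range(generations):
        candidates = population + [mutate(s) for s in population]
        candidates = [s for s in candidates if passes_filters(s)]
        candidates.sort(key=predicted_activity, reverse=True)
        population = candidates[:survivors]
    return population

# Example: start from a couple of seed molecules written as SMILES
# strings (illustrative only, not actual antibiotic leads).
leads = evolve(["CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CCO"], generations=10)
```

Only after a loop like this would a handful of top-ranked candidates move on to synthesis and laboratory testing, as in the 24 compounds described above.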

Q: You’re a co-founder of Phare Bio, a nonprofit organization that uses AI to discover new antibiotics, and the Collins Lab has helped to launch the Antibiotics-AI Project in collaboration with Phare Bio. Can you tell us more about what you hope to accomplish with these collaborations, and how they tie back to your research goals?

A: We founded Phare Bio as a nonprofit to take the most promising antibiotic candidates emerging from the Antibiotics-AI Project at MIT and advance them toward the clinic. The idea is to bridge the gap between discovery and development by collaborating with biotech companies, pharmaceutical partners, AI companies, philanthropies, other nonprofits, and even nation states. Akhila Kosaraju has been doing a brilliant job leading Phare Bio, coordinating these efforts and moving candidates forward efficiently.

Recently, we received a grant from ARPA-H to use generative AI to design 15 new antibiotics and develop them as pre-clinical candidates. This project builds directly on our lab’s research, combining computational design with experimental testing to create novel antibiotics that are ready for further development. By integrating generative AI, biology, and translational partnerships, we hope to create a pipeline that can respond more rapidly to the global threat of antibiotic resistance, ultimately delivering new therapies to patients who need them most.


3D-printed metamaterials that stretch and fail by design

New framework supports design and fabrication of compliant materials such as printable textiles and functional foams, letting users predict deformation and material failure.


Metamaterials — materials whose properties are primarily dictated by their internal microstructure, and not their chemical makeup — have been redefining the engineering materials space for the last decade. To date, however, most metamaterials have been lightweight options designed for stiffness and strength.

New research from the MIT Department of Mechanical Engineering introduces a computational design framework to support the creation of a new class of soft, compliant, and deformable metamaterials. These metamaterials, termed 3D woven metamaterials, consist of building blocks that are composed of intertwined fibers that self-contact and entangle to endow the material with unique properties.

“Soft materials are required for emerging engineering challenges in areas such as soft robotics, biomedical devices, or even for wearable devices and functional textiles,” explains Carlos Portela, the Robert N. Noyce Career Development Professor and associate professor of mechanical engineering.

In an open-access paper published Jan. 26 in the journal Nature Communications, researchers from Portela’s lab provide a universal design framework that generates complex 3D woven metamaterials with a wide range of properties. The work also provides open-source code that allows users to create designs to fit their specifications and to generate a file for 3D printing or for simulating the material.

“Normal knitting or weaving have been constrained by the hardware for hundreds of years — there’s only a few patterns that you can make clothes out of, for example — but that changes if hardware is no longer a limitation,” Portela says. “With this framework, you can come up with interesting patterns that completely change the way the textile is going to behave.”

Possible applications include wearable sensors that move with human skin, fabrics for aerospace or defense needs, flexible electronic devices, and a variety of other printable textiles.

The team developed general design rules — in the form of an algorithm — that first provide a graph representation of the metamaterial. The attributes of this graph eventually dictate how each fiber is placed and connected within the metamaterial. The fundamental building blocks are woven unit cells that can be functionally graded via control of various design parameters, such as the radius and pitch of the fibers that make up the woven struts.
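As a rough illustration of that idea, here is a minimal sketch, using hypothetical names, of how a woven unit cell could be represented as a graph whose edges carry the fiber parameters the article mentions (strut radius and pitch). The team’s released open-source framework and its actual data structures will differ.

```python
from dataclasses import dataclass, field

@dataclass
class WovenStrut:
    start: int        # index of the first node (fiber junction)
    end: int          # index of the second node
    radius: float     # fiber radius, e.g., in micrometers
    pitch: float      # helical pitch of the intertwined fibers

@dataclass
class WovenUnitCell:
    nodes: list = field(default_factory=list)    # 3D coordinates of junctions
    struts: list = field(default_factory=list)   # edges carrying fiber parameters

    def add_node(self, xyz):
        self.nodes.append(xyz)
        return len(self.nodes) - 1

    def grade(self, scale_radius: float):
        """Functionally grade the cell by scaling every strut radius,
        making one region of the lattice softer or stiffer than another."""
        for s in self.struts:
            s.radius *= scale_radius

# Example: a single woven strut between two corner nodes of a cubic cell.
cell = WovenUnitCell()
a = cell.add_node((0.0, 0.0, 0.0))
b = cell.add_node((1.0, 0.0, 0.0))
cell.struts.append(WovenStrut(a, b, radius=5.0, pitch=50.0))
cell.grade(scale_radius=0.8)   # soften this cell relative to its neighbors
```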

“Because this framework allows these metamaterials to be tailored to be softer in one place and stiffer in another, or to change shape as they stretch, they can exhibit an exceptional range of behaviors that would be hard to design using conventional soft materials,” says Molly Carton, lead author of the study. Carton, a former postdoc in Portela’s lab, is now an assistant research professor in mechanical engineering at the University of Maryland.

Further, the simulation framework allows users to predict the deformation response of these materials, capturing complex phenomena such as fiber self-contact and entanglement, and to design structures that anticipate and resist particular deformation or tearing patterns.

“The most exciting part was being able to tailor failure in these materials and design arbitrary combinations,” says Portela. “Based on the simulations, we were able to fabricate these spatially varying geometries and experiment on them at the microscale.”

This work is the first to provide a tool for users to design, print, and simulate an emerging class of metamaterials that are extensible and tough. It also demonstrates that through tuning of geometric parameters, users can control and predict how these materials will deform and fail, and presents several new design building blocks that substantially expand the property space of woven metamaterials.

“Until now, these complex 3D lattices have been designed manually, painstakingly, which limits the number of designs that anyone has tested,” says Carton. “We’ve been able to describe how these woven lattices work and use that to create a design tool for arbitrary woven lattices. With that design freedom, we’re able to design the way that a lattice changes shape as it stretches, how the fibers entangle and knot with each other, as well as how it tears when stretched to the limit.”

Carton says she believes the framework will be useful across many disciplines. “In releasing this framework as a software tool, our hope is that other researchers will explore what’s possible using woven lattices and find new ways to use this design flexibility,” she says. “I’m looking forward to seeing what doors our work can open.”

The paper, “Design framework for programmable three-dimensional woven metamaterials,” is available now in the journal Nature Communications. Its other MIT-affiliated authors are James Utama Surjadi, Bastien F. G. Aymon, and Ling Xu.

This work was performed, in part, through the use of MIT.nano’s fabrication and characterization facilities.


Terahertz microscope reveals the motion of superconducting electrons

For the first time, the new scope allowed physicists to observe terahertz “jiggles” in a superconducting fluid.


You can tell a lot about a material based on the type of light you shine at it: Optical light illuminates a material’s surface, while X-rays reveal its internal structures and infrared captures a material’s radiating heat.

Now, MIT physicists have used terahertz light to reveal inherent, quantum vibrations in a superconducting material, which have not been observable until now.

Terahertz light is a form of energy that lies between microwaves and infrared radiation on the electromagnetic spectrum. It oscillates over a trillion times per second — just the right pace to match how atoms and electrons naturally vibrate inside materials. Ideally, this makes terahertz light the perfect tool to probe these motions.

But while the frequency is right, the wavelength — the distance over which the wave repeats in space — is not. Terahertz waves have wavelengths hundreds of microns long. Because the smallest spot that any kind of light can be focused into is limited by its wavelength, terahertz beams cannot be tightly confined. As a result, a focused terahertz beam is physically too large to interact effectively with microscopic samples, simply washing over these tiny structures without revealing fine detail.
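As a quick back-of-the-envelope check of those numbers (the arithmetic below is ours, not the paper’s): a wave oscillating a trillion times per second has a free-space wavelength of about 300 micrometers, so a diffraction-limited spot is tens of times larger than a roughly 10-micron sample.

```python
# Back-of-the-envelope check: free-space wavelength of a 1 THz wave.
c = 3.0e8            # speed of light in m/s (approximate)
f = 1.0e12           # 1 terahertz, i.e., a trillion cycles per second
wavelength_m = c / f
print(f"{wavelength_m * 1e6:.0f} micrometers")   # prints "300 micrometers"
```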

In a paper appearing today in the journal Nature, the scientists report that they have developed a new terahertz microscope that compresses terahertz light down to microscopic dimensions. This pinpoint of terahertz light can resolve quantum details in materials that were previously inaccessible.

The team used the new microscope to send terahertz light into a sample of bismuth strontium calcium copper oxide, or BSCCO (pronounced “BIS-co”) — a material that superconducts at relatively high temperatures. With the terahertz scope, the team observed a frictionless “superfluid” of superconducting electrons that were collectively jiggling back and forth at terahertz frequencies within the BSCCO material.

“This new microscope now allows us to see a new mode of superconducting electrons that nobody has ever seen before,” says Nuh Gedik, the Donner Professor of Physics at MIT.

By using terahertz light to probe BSCCO and other superconductors, scientists can gain a better understanding of properties that could lead to long-coveted room-temperature superconductors. The new microscope can also help to identify materials that emit and receive terahertz radiation. Such materials could be the foundation of future wireless, terahertz-based communications, which could potentially transmit more data at faster rates than today’s microwave-based communications.

“There’s a huge push to take Wi-Fi or telecommunications to the next level, to terahertz frequencies,” says Alexander von Hoegen, a postdoc in MIT’s Materials Research Laboratory and lead author of the study. “If you have a terahertz microscope, you could study how terahertz light interacts with microscopically small devices that could serve as future antennas or receivers.”

In addition to Gedik and von Hoegen, the study’s MIT co-authors include Tommy Tai, Clifford Allington, Matthew Yeung, Jacob Pettine, Alexander Kossak, Byunghun Lee, and Geoffrey Beach, along with collaborators at Harvard University, the Max Planck Institute for the Structure and Dynamics of Matter, the Max Planck Institute for the Physics of Complex Systems, and Brookhaven National Laboratory.

Hitting a limit

Terahertz light is a promising yet largely untapped imaging tool. It occupies a unique spectral “sweet spot”: Like microwaves, radio waves, and visible light, terahertz radiation is nonionizing and therefore does not carry enough energy to cause harmful radiation effects, making it safe for use in humans and biological tissues. At the same time, much like X-rays, terahertz waves can penetrate a wide range of materials, including fabric, wood, cardboard, plastic, ceramics, and even thin brick walls.

Owing to these distinctive properties, terahertz light is being actively explored for applications in security screening, medical imaging, and wireless communications. In contrast, far less effort has been devoted to applying terahertz radiation to microscopy and the illumination of microscopic phenomena. The primary reason is a fundamental limitation shared by all forms of light: the diffraction limit, which restricts spatial resolution to roughly the wavelength of the radiation used.

With wavelengths on the order of hundreds of microns, terahertz radiation is far larger than atoms, molecules, and many other microscopic structures. As a result, its ability to directly resolve microscale features is fundamentally constrained.

“Our main motivation is this problem that, you might have a 10-micron sample, but your terahertz light has a 100-micron wavelength, so what you would mostly be measuring is air, or the vacuum around your sample,” von Hoegen explains. “You would be missing all these quantum phases that have characteristic fingerprints in the terahertz regime.”

Zooming in

The team found a way around the terahertz diffraction limit by using spintronic emitters — a recent technology that produces sharp pulses of terahertz light. Spintronic emitters are made from multiple ultrathin metallic layers. When a laser illuminates the multilayered structure, the light triggers a cascade of effects in the electrons within each layer, such that the structure ultimately emits a pulse of energy at terahertz frequencies.

By holding a sample close to the emitter, the team trapped the terahertz light before it had a chance to spread, essentially squeezing it into a space much smaller than its wavelength. In this regime, the light can bypass the diffraction limit to resolve features that were previously too small to see.

The MIT team adapted this technology to observe microscopic, quantum-scale phenomena. For their new study, the team developed a terahertz microscope using spintronic emitters interfaced with a Bragg mirror. This multilayered structure of reflective films successively filters out certain undesired wavelengths of light while letting others through, protecting the sample from the “harmful” laser that triggers the terahertz emission.

As a demonstration, the team used the new microscope to image a small, atomically thin sample of BSCCO. They placed the sample very close to the terahertz source and imaged it at temperatures close to absolute zero — cold enough for the material to become a superconductor. To create the image, they scanned the laser beam, sending terahertz light through the sample and looking for the specific signatures left by the superconducting electrons.

“We see the terahertz field gets dramatically distorted, with little oscillations following the main pulse,” von Hoegen says. “That tells us that something in the sample is emitting terahertz light, after it got kicked by our initial terahertz pulse.”

With further analysis, the team concluded that the terahertz microscope was observing the natural, collective terahertz oscillations of superconducting electrons within the material.

“It’s this superconducting gel that we’re sort of seeing jiggle,” von Hoegen says.

This jiggling superfluid was expected, but never directly visualized until now. The team is now applying the microscope to other two-dimensional materials, where they hope to capture more terahertz phenomena.

“There are a lot of the fundamental excitations, like lattice vibrations and magnetic processes, and all these collective modes that happen at terahertz frequencies,” von Hoegen says. “We can now resonantly zoom in on these interesting physics with our terahertz microscope.”

This research was supported, in part, by the MIT Research Laboratory of Electronics, the U.S. Department of Energy, and the Gordon and Betty Moore Foundation. Fabrication was carried out with the use of MIT.nano.


MIT winter club sports energized by the Olympics

Members of the MIT curling and figure skating clubs are embracing the 2026 Winter Olympics, an international showcase for their — and many other — cherished winter sports.


With the Milano Cortina 2026 Winter Olympics officially kicking off today, several of MIT’s winter sports clubs are hosting watch parties to cheer on their favorite players, events, and teams.

Members of MIT’s Curling Club are hosting a gathering to support their favorite teams. Co-presidents Polly Harrington and Gabi Wojcik are rooting for the United States.

“I’m looking forward to watching the Olympics and cheering for Team USA. I grew up in Seattle, and during the Vancouver Olympics, we took a family trip to the games. The most affordable tickets were to the curling events, and that was my first exposure to the sport. Seeing it live was really cool. I was hooked,” says Harrington.

Wojcik says, “It’s a very analytical and strategic sport, so it’s perfect for MIT students. Physicists still don't entirely agree on why the rocks behave the way they do. Everyone in the club is welcoming and open to teaching new people to play. I’d never played before and learned from scratch. The other advantage of playing is that it is a lifelong sport.”

The two say the biggest misconception about curling, other than that it is easy, is that it is played on ice skates. It’s neither easy nor played on skates. The stone, or rock, as it is often called, weighs 43 pounds, and is always made from the same weathered granite from Scotland so that the playing field, or in this case, ice, is even.

Both agree that playing is a great way to meet other students from MIT whom they might not otherwise have the chance to meet.

Having seen the American team at a recent tournament, Wojcik is hoping the team does well, but admits that if Scotland wins, she’ll also be happy. Harrington met members of the U.S. men's curling team, Luc Violette and Ben Richardson, when curling in Seattle in high school, and will be cheering for them.

The Curling Club team practices and competes in tournaments in the New England area from late September until mid-March and always welcomes new members; no previous experience is necessary to join.

Figure Skating Club

The MIT Figure Skating Club is also excited for the 2026 Olympics and has been watching preliminary events (nationals) leading up to the games with great anticipation. Eleanor Li, the current club president, and Amanda (Mandy) Paredes Rioboo, former president, say holding small gatherings to watch the Olympics is a great way for the team to bond further.

Li began taking skating lessons at age 14, fell in love with the sport right away, and has been skating ever since. Paredes Rioboo started lessons at age 5 and practices in the mornings with other club members, saying, “there is no better way to start the day.”

The Figure Skating Club currently has 120 members and offers a great way to meet friends who share the same passion. Any MIT student, regardless of skill level, is welcome to join the club.

Li says, “We have members ranging from former national and international competitors to people who are completely new to the ice.” She adds that her favorite part of skating is “the freeing feeling of wind coming at you when you’re gliding across the ice! And all the life lessons learned — time management, falling again and again, and getting up again and again, the artistry and expressiveness of this beautiful sport, and most of all the community.”

Paredes Rioboo agrees. “The sport taught me discipline, to work at something and struggle with it until I got good at it. It taught me to be patient with myself and to be unafraid of failure.”

“The Olympics always bring a lot of buzz and curiosity around skating, and we’re excited to hopefully see more people come to our Saturday free group lessons, try skating for the first time, and maybe even join the club,” says Li.

Li and Paredes Rioboo are ready to watch the games with other club members. Li says, “I’m especially excited for women’s singles skating. All of the athletes have trained so hard to get there, and I’m really looking forward to watching all the beautiful skating. Especially Kaori Sakamoto.”

“I’m excited to watch Alysa Liu and Ami Nakai,” adds Paredes Rioboo.

Students interested in joining the Figure Skating Club can find more information on the club’s website.


Katie Spivakovsky wins 2026 Churchill Scholarship

The MIT senior will pursue a master’s degree at Cambridge University in the U.K. this fall.


MIT senior Katie Spivakovsky has been selected as a 2026-27 Churchill Scholar and will undertake an MPhil in biological sciences at the Wellcome Sanger Institute at Cambridge University in the U.K. this fall.

Spivakovsky, who is double-majoring in biological engineering and artificial intelligence, with minors in mathematics and biology, aims to integrate computation and bioengineering in an academic research career focused on developing robust, scalable solutions that promote equitable health outcomes.

At MIT’s Bathe BioNanoLab, Spivakovsky investigates therapeutic applications of DNA origami, DNA-scaffolded nanoparticles for gene and mRNA delivery, and has co-authored a manuscript in press at Science. She leads the development of an immune therapy for cancer cachexia with a team supported by MIT’s BioMakerSpace; this work earned a silver medal at the international synthetic biology competition iGEM and was published in the MIT Undergraduate Research Journal. Previously, she worked on Merck’s Modeling & Informatics team, characterizing a cancer-associated protein mutation, and at the New York Structural Biology Center, where she improved cryogenic electron microscopy particle detection models.

On campus, Spivakovsky serves as director of the Undergraduate Initiative in the MIT Biotech Group. She is deeply committed to teaching and mentoring, and has served as a lecturer and co-director for class 6.S095 (Probability Problem Solving), a teaching assistant for classes 20.309 (Bioinstrumentation) and 20.A06 (Hands-on Making in Biological Engineering), a lab assistant for 6.300 (Signal Processing), and as an associate advisor.

“Katie is a brilliant researcher who has a keen intellectual curiosity that will make her a leader in biological engineering in the future. We are proud that she will be representing MIT at Cambridge University,” says Kim Benard, associate dean of distinguished fellowships.

The Churchill Scholarship is a highly competitive fellowship that annually offers 16 American students the opportunity to pursue a funded graduate degree in science, mathematics, or engineering at Churchill College within Cambridge University. The scholarship, established in 1963, honors former British Prime Minister Winston Churchill’s vision for U.S.-U.K. scientific exchange. Since 2017, two Kanders Churchill Scholarships have also been awarded each year for studies in science policy.

MIT students interested in learning more about the Churchill Scholarship should contact Kim Benard in MIT Career Advising and Professional Development.


Counter intelligence

Architecture students bring new forms of human-machine interaction into the kitchen.


How can artificial intelligence step out of a screen and become something we can physically touch and interact with?

That question formed the foundation of class 4.043/4.044 (Interaction Intelligence), an MIT course focused on designing a new category of AI-driven interactive objects. Known as large language objects (LLOs), these physical interfaces extend large language models into the real world. Their behaviors can be deliberately generated for specific people or applications, and their interactions can evolve from simple to increasingly sophisticated — providing meaningful support for both novice and expert users.

“I came to the realization that, while powerful, these new forms of intelligence still remain largely ignorant of the world outside of language,” says Marcelo Coelho, associate professor of the practice in the MIT Department of Architecture, who has been teaching the design studio for several years and directs the Design Intelligence Lab. “They lack real-time, contextual understanding of our physical surroundings, bodily experiences, and social relationships to be truly intelligent. In contrast, LLOs are physically situated and interact in real time with their physical environment. The course is an attempt to both address this gap and develop a new kind of design discipline for the age of AI.”

Given the assignment to design an interactive device that they would want in their lives, students Jacob Payne and Ayah Mahmoud focused on the kitchen. While they each enjoy cooking and baking, their design inspiration came from the first home computer: the Honeywell 316 Kitchen Computer, marketed by Neiman Marcus in 1969. Priced at $10,000, it is not known to have ever been sold.

“It was an ambitious but impractical early attempt at a home kitchen computer,” says Payne, an architecture graduate student. “It made an intriguing historical reference for the project.”

“As somebody who likes learning to cook — especially now, in college as an undergrad — the thought of designing something that makes cooking easy for those who might not have a cooking background and just want a nice meal that satisfies their cravings was a great starting point for me,” says Mahmoud, a senior design major.

“We thought about the leftover ingredients you have in the refrigerator or pantry, and how AI could help you find new creative uses for things that you may otherwise throw away,” says Payne.

Generative cuisine

The students designed their device — named Kitchen Cosmo — with instructions to function as a “recipe generator.” One challenge was prompting the LLM to consistently acknowledge real-world cooking parameters, such as heating, timing, or temperature. Another was having the LLM recognize flavor profiles and spices accurate to regional and cultural dishes around the world, to support a wider range of cuisines. Troubleshooting included taste-testing recipes Kitchen Cosmo generated. Not every early recipe produced a winning dish.

“There were lots of small things that AI wasn't great at conceptually understanding,” says Mahmoud. “An LLM needs to fundamentally understand human taste to make a great meal.”

They fine-tuned their device to allow for the myriad ways people approach preparing a meal. Is this breakfast, lunch, dinner, or a snack? How advanced of a cook are you? How much meal prep time do you have? How many servings will you make? Dietary preferences were also programmed, as well as the type of mood or vibe you want to achieve. Are you feeling nostalgic, or are you in a celebratory mood? There’s a dial for that.

“These selections were the focal point of the device because we were curious to see how the LLM would interpret subjective adjectives as inputs and use them to transform the type of recipe outputs we would get,” says Payne.

Unlike most AI interactions, which tend to be invisible, Payne and Mahmoud wanted their device to be more of a “partner” in the kitchen. The tactile interface was intentionally designed to structure the interaction, giving users physical control over how the AI responded.
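As a minimal sketch of how such a tactile interface could drive an LLM, the snippet below maps dial settings and scanned ingredients into a text prompt. The setting names, defaults, and wording are illustrative assumptions, not the students’ actual firmware or prompts.

```python
from dataclasses import dataclass

@dataclass
class CosmoSettings:
    meal: str = "dinner"          # breakfast / lunch / dinner / snack
    skill: str = "beginner"       # how advanced a cook the user is
    prep_minutes: int = 30        # available meal-prep time
    servings: int = 2
    diet: str = "vegetarian"      # dietary preference
    vibe: str = "nostalgic"       # the mood or "vibe" dial

def build_prompt(settings: CosmoSettings, scanned_ingredients: list) -> str:
    """Turn dial positions and scanned ingredients into an LLM prompt."""
    return (
        f"Create a {settings.vibe} {settings.meal} recipe for a "
        f"{settings.skill} cook, {settings.servings} servings, "
        f"ready in under {settings.prep_minutes} minutes, "
        f"{settings.diet}, using mainly: {', '.join(scanned_ingredients)}. "
        "Assume common household spices and condiments are available."
    )

# Example: the defaults above plus three scanned leftover ingredients.
print(build_prompt(CosmoSettings(), ["leftover rice", "spinach", "feta"]))
```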

“While I’ve worked with electronics and hardware before, this project pushed me to integrate the components with a level of precision and refinement that felt much closer to a product-ready device,” says Payne of the course work.

Retro and red

After their electronic work was completed, the students designed a series of models using cardboard until settling on the final look, which Payne describes as “retro.” The body was designed in 3D modeling software and printed. In a nod to the original Honeywell computer, they painted it red.

A thin, rectangular device about 18 inches in height, Kitchen Cosmo has a webcam that hinges open to scan ingredients set on a counter. It translates these into a recipe that takes into consideration general spices and condiments common in most households. An integrated thermal printer delivers a printed recipe that can be torn off. Recipes can be stored in a plastic receptacle on its base.

While Kitchen Cosmo made a modest splash in design magazines, both students have ideas where they will take future iterations.

Payne would like to see it “take advantage of a lot of the data we have in the kitchen and use AI as a mediator, offering tips for how to improve on what you’re cooking at that moment.”

Mahmoud is looking at how to optimize Kitchen Cosmo for her thesis. Classmates have given feedback to upgrade its abilities. One suggestion is to provide multi-person instructions that give several people tasks needed to complete a recipe. Another idea is to create a “learning mode” in which a kitchen tool — for example, a paring knife — is set in front of Kitchen Cosmo, and it delivers instructions on how to use the tool. Mahmoud has been researching food science history as well.

“I’d like to get a better handle on how to train AI to fully understand food so it can tailor recipes to a user’s liking,” she says.

Having begun her MIT education as a geologist, Mahmoud says her pivot to design has been a revelation. Each design class has been inspiring. Coelho’s course was her first class to include designing with AI. Referencing the often-mentioned analogy of “drinking from a firehose” as a student at MIT, Mahmoud says the course helped define a path for her in product design.

“For the first time, in that class, I felt like I was finally drinking as much as I could and not feeling overwhelmed. I see myself doing design long-term, which is something I didn’t think I would have said previously about technology.” 


SMART launches new Wearable Imaging for Transforming Elderly Care research group

WITEC is working to develop the first wearable ultrasound imaging system to monitor chronic conditions in real time, with the goal of enabling earlier detection and timely intervention.


What if ultrasound imaging were no longer confined to hospitals? Patients with chronic conditions, such as hypertension and heart failure, could be monitored continuously in real time at home or on the move, giving health care practitioners ongoing clinical insights instead of occasional snapshots — a scan here and a check-up there. This shift from reactive, hospital-based care to preventative, community- and home-based care could enable earlier detection, timely intervention, and truly personalized care.

Bringing this vision to reality, the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, has launched a new collaborative research project: Wearable Imaging for Transforming Elderly Care (WITEC). 

WITEC marks a pioneering effort spanning wearable technology, medical imaging, and materials science research. It will be dedicated to foundational research and development of the world’s first wearable ultrasound imaging system capable of 48-hour intermittent cardiovascular imaging for continuous and real-time monitoring and diagnosis of chronic conditions such as hypertension and heart failure.

This multi-million dollar, multi-year research program, supported by the National Research Foundation (NRF) Singapore under its Campus for Research Excellence and Technological Enterprise program, brings together top researchers and expertise from MIT, Nanyang Technological University (NTU Singapore), and the National University of Singapore (NUS). Tan Tock Seng Hospital (TTSH) is WITEC’s clinical collaborator and will conduct patient trials to validate long-term heart imaging for chronic cardiovascular disease management.

“Addressing society’s most pressing challenges requires innovative, interdisciplinary thinking. Building on SMART’s long legacy in Singapore as a hub for research and innovation, WITEC will harness interdisciplinary expertise — from MIT and leading institutions in Singapore — to advance transformative research that creates real-world impact and benefits Singapore, the U.S., and societies all over. This is the kind of collaborative research that not only pushes the boundaries of knowledge, but also redefines what is possible for the future of health care,” says Bruce Tidor, chief executive officer and interim director of SMART, who is also an MIT professor of biological engineering and electrical engineering and computer science.

Industry-leading precision equipment and capabilities

To support this work, WITEC’s laboratory is equipped with advanced tools, including Southeast Asia’s first sub-micrometer 3D printer and the latest Verasonics Vantage NXT 256 ultrasonic imaging system, which is the first unit of its kind in Singapore.

Unlike conventional 3D printers that operate at millimeter or micrometer scales, WITEC’s 3D printer can achieve sub‑micrometer resolution, allowing components to be fabricated at the level of single cells or tissue structures. With this capability, WITEC researchers can prototype bioadhesive materials and device interfaces with unprecedented accuracy — essential to ensuring skin‑safe adhesion and stable, long‑term imaging quality.

Complementing this is the latest Verasonics ultrasonic imaging system. Equipped with a new transducer adapter and supporting a significantly larger number of probe control channels than existing systems, it gives researchers the freedom to test highly customized imaging methods. This allows more complex beamforming, higher‑resolution image capture, and integration with AI‑based diagnostic models — opening the door to long‑duration, real‑time cardiovascular imaging not possible with standard hospital equipment.

Together, these technologies allow WITEC to accelerate the design, prototyping, and testing of its wearable ultrasound imaging system, and to demonstrate imaging quality on phantoms and healthy subjects.

Transforming chronic disease care through wearable innovation 

Chronic diseases are rising rapidly in Singapore and globally, especially among the aging population and individuals with multiple long-term conditions. This trend highlights the urgent need for effective home-based care and easy-to-use monitoring tools that go beyond basic wellness tracking.

Current consumer wearables, such as smartwatches and fitness bands, offer limited physiological data like heart rate or step count. While useful for general health, they lack the depth needed to support chronic disease management. Traditional ultrasound systems, although clinically powerful, are bulky, operator-dependent, can be deployed only episodically within hospitals, and are limited to snapshots in time, making them unsuitable for long-term, everyday use.

WITEC aims to bridge this gap with its wearable ultrasound imaging system that uses bioadhesive technology to enable up to 48 hours of uninterrupted imaging. Combined with AI-enhanced diagnostics, the innovation is aimed at supporting early detection, home-based pre-diagnosis, and continuous monitoring of chronic diseases.

Beyond improving patient outcomes, this innovation could help ease labor shortages by freeing up ultrasound operators, nurses, and doctors to focus on more complex care, while reducing demand for hospital beds and resources. By shifting monitoring to homes and communities, WITEC’s technology will enable patient self-management and timely intervention, potentially lowering health-care costs and alleviating the increasing financial and manpower pressures of an aging population.

Driving innovation through interdisciplinary collaboration

WITEC is led by the following co-lead principal investigators: Xuanhe Zhao, professor of mechanical engineering and professor of civil and environmental engineering at MIT; Joseph Sung, senior vice president of health and life sciences at NTU Singapore and dean of the Lee Kong Chian School of Medicine (LKCMedicine); Cher Heng Tan, assistant dean of clinical research at LKCMedicine; Chwee Teck Lim, NUS Society Professor of Biomedical Engineering at NUS and director of the Institute for Health Innovation and Technology at NUS; and Xiaodong Chen, distinguished university professor at the School of Materials Science and Engineering within NTU. 

“We’re extremely proud to bring together an exceptional team of researchers from Singapore and the U.S. to pioneer core technologies that will make wearable ultrasound imaging a reality. This endeavor combines deep expertise in materials science, data science, AI diagnostics, biomedical engineering, and clinical medicine. Our phased approach will accelerate translation into a fully wearable platform that reshapes how chronic diseases are monitored, diagnosed and managed,” says Zhao, who serves as a co-lead PI of WITEC.

Research roadmap with broad impact across health care, science, industry, and economy

Bringing together leading experts across interdisciplinary fields, WITEC will advance foundational work in soft materials, transducers, microelectronics, data science and AI diagnostics, clinical medicine, and biomedical engineering. As a deep-tech R&D group, its breakthroughs will have the potential to drive innovation in health-care technology and manufacturing, wearable ultrasonic imaging, metamaterials, diagnostics, and AI-powered health analytics. WITEC’s work is also expected to accelerate growth in high-value jobs across research, engineering, clinical validation, and health-care services, and attract strategic investments that foster biomedical innovation and industry partnerships in Singapore, the United States, and beyond.

“Chronic diseases present significant challenges for patients, families, and health-care systems, and with aging populations such as Singapore, those challenges will only grow without new solutions. Our research into a wearable ultrasound imaging system aims to transform daily care for those living with cardiovascular and other chronic conditions — providing clinicians with richer, continuous insights to guide treatment, while giving patients greater confidence and control over their own health. WITEC’s pioneering work marks an important step toward shifting care from episodic, hospital-based interventions to more proactive, everyday management in the community,” says Sung, who serves as co‑lead PI of WITEC.

Led by Violet Hoon, senior consultant at TTSH, clinical trials are expected to commence this year to validate long-term heart monitoring in the management of chronic cardiovascular disease. Over the next three years, WITEC aims to develop a fully integrated platform capable of 48-hour intermittent imaging through innovations in bioadhesive couplants, nanostructured metamaterials, and ultrasonic transducers.

As MIT’s research enterprise in Singapore, SMART is committed to advancing breakthrough technologies that address pressing global challenges. WITEC adds to SMART’s existing research endeavors that foster a rich exchange of ideas through collaboration with leading researchers and academics from the United States, Singapore, and around the world in key areas such as antimicrobial resistance, cell therapy development, precision agriculture, AI, and 3D-sensing technologies.


New tissue models could help researchers develop drugs for liver disease

Two models more accurately replicate the physiology of the liver, offering a new way to test treatments for fat buildup.


More than 100 million people in the United States suffer from metabolic dysfunction-associated steatotic liver disease (MASLD), characterized by a buildup of fat in the liver. This condition can lead to the development of more severe liver disease that causes inflammation and fibrosis.

In hopes of discovering new treatments for these liver diseases, MIT engineers have designed a new type of tissue model that more accurately mimics the architecture of the liver, including blood vessels and immune cells.

Reporting their findings today in Nature Communications, the researchers showed that this model could accurately replicate the inflammation and metabolic dysfunction that occur in the early stages of liver disease. Such a device could help researchers identify and test new drugs to treat those conditions.

This is the latest study in a larger effort by this team to use these types of tissue models, also known as microphysiological systems, to explore human liver biology, which cannot be easily replicated in mice or other animals.

In another recent paper, the researchers used an earlier version of their liver tissue model to explore how the liver responds to resmetirom. This drug is used to treat an advanced form of liver disease called metabolic dysfunction-associated steatohepatitis (MASH), but it is only effective in about 30 percent of patients. The team found that the drug can induce an inflammatory response in liver tissue, which may help to explain why it doesn’t help all patients.

“There are already tissue models that can make good preclinical predictions of liver toxicity for certain drugs, but we really need to better model disease states, because now we want to identify drug targets, we want to validate targets. We want to look at whether a particular drug may be more useful early or later in the disease,” says Linda Griffith, the School of Engineering Professor of Teaching Innovation at MIT, a professor of biological engineering and mechanical engineering, and the senior author of both studies.

Former MIT postdoc Dominick Hellen is the lead author of the resmetirom paper, which appeared Jan. 14 in Communications Biology. Erin Tevonian PhD ’25 and PhD candidate Ellen Kan, both in the Department of Biological Engineering, are the lead authors of today’s Nature Communications paper on the new microphysiological system.

Modeling drug response

In the Communications Biology paper, Griffith’s lab worked with a microfluidic device that she originally developed in the 1990s, known as the LiverChip. This chip offers a simple scaffold for growing 3D models of liver tissue from hepatocytes, the primary cell type in the liver.

This chip is widely used by pharmaceutical companies to test whether their new drugs have adverse effects on the liver, which is an important step in drug development because most drugs are metabolized by the liver.

For the new study, Griffith and her students modified the chip so that it could be used to study MASLD.

Patients with MASLD, a buildup of fat in the liver, can eventually develop MASH, a more severe disease that occurs when scar tissue called fibrosis forms in the liver. Currently, resmetirom and the GLP-1 drug semaglutide are the only medications that are FDA-approved to treat MASH. Finding new drugs is a priority, Griffith says.

“You’re never declaring victory with liver disease with one drug or one class of drugs, because over the long term there may be patients who can’t use them, or they may not be effective for all patients,” she says.

To create a model of MASLD, the researchers exposed the tissue to high levels of insulin, along with large quantities of glucose and fatty acids. This led to a buildup of fatty tissue and the development of insulin resistance, a trait that is often seen in MASLD patients and can lead to type 2 diabetes.

Once that model was established, the researchers treated the tissue with resmetirom, a drug that works by mimicking the effects of thyroid hormone, which stimulates the breakdown of fat.

To their surprise, the researchers found that this treatment could also lead to an increase in immune signaling and markers of inflammation.

“Because resmetirom is primarily intended to reduce hepatic fibrosis in MASH, we found the result quite paradoxical,” Hellen says. “We suspect this finding may help clinicians and scientists alike understand why only a subset of patients respond positively to the thyromimetic drug. However, additional experiments are needed to further elucidate the underlying mechanism.”

A more realistic liver model


In the Nature Communications paper, the researchers reported a new type of chip that allows them to more accurately reproduce the architecture of the human liver. The key advance was developing a way to induce blood vessels to grow into the tissue. These vessels can deliver nutrients and also allow immune cells to flow through the tissue.

“Making more sophisticated models of liver that incorporate features of vascularity and immune cell trafficking that can be maintained over a long time in culture is very valuable,” Griffith says. “The real advance here was showing that we could get an intimate microvascular network through liver tissue and that we could circulate immune cells. This helped us to establish differences between how immune cells interact with the liver cells in a type two diabetes state and a healthy state.”

As the liver tissue matured, the researchers induced insulin resistance by exposing the tissue to increased levels of insulin, glucose, and fatty acids.

As this disease state developed, the researchers observed changes in how hepatocytes clear insulin and metabolize glucose, as well as narrower, leakier blood vessels that reflect microvascular complications often seen in diabetic patients. They also found that insulin resistance leads to an increase in markers of inflammation that attract monocytes into the tissue. Monocytes are the precursors of macrophages, immune cells that help with tissue repair during inflammation and are also observed in the liver of patients with early-stage liver disease.

“This really shows that we can model the immune features of a disease like MASLD, in a way that is all based on human cells,” Griffith says.

The research was funded by the National Institutes of Health, the National Science Foundation Graduate Research Fellowship program, Novo Nordisk, the Massachusetts Life Sciences Center, and the Siebel Scholars Foundation.


Your future home might be framed with printed plastic

MIT engineers are using recycled plastic to 3D print construction-grade floor trusses.


The plastic bottle you just tossed in the recycling bin could provide structural support for your future house.

MIT engineers are using recycled plastic to 3D print construction-grade beams, trusses, and other structural elements that could one day offer lighter, modular, and more sustainable alternatives to traditional wood-based framing.

In a paper published in the Solid FreeForm Fabrication Symposium Proceedings, the MIT team presents the design for a 3D-printed floor truss system made from recycled plastic.

A traditional floor truss is made from wood beams that connect via metal plates in a pattern resembling a ladder with diagonal rungs. Set on its edge and combined with other parallel trusses, the resulting structure provides support for flooring material such as plywood that lies over the trusses.

The MIT team printed four long trusses out of recycled plastic and configured them into a conventional plywood-topped floor frame, then tested the structure’s load-bearing capacity. The printed flooring held over 4,000 pounds, exceeding key building standards set by the U.S. Department of Housing and Urban Development.

The plastic-printed trusses weigh about 13 pounds each, which is lighter than a comparable wood-based truss, and they can be printed on a large-scale industrial printer in under 13 minutes. In addition to floor trusses, the group is working on printing other elements and combining them into a full frame for a modest-sized home.

The researchers envision that as global demand for housing eclipses the supply of wood in the coming years, single-use plastics such as water bottles and food containers could get a second life as recycled framing material to alleviate both a global housing crisis and the overwhelming demand for timber.

“We’ve estimated that the world needs about 1 billion new homes by 2050. If we try to make that many homes using wood, we would need to clear-cut the equivalent of the Amazon rainforest three times over,” says AJ Perez, a lecturer in the MIT School of Engineering and research scientist in the MIT Office of Innovation. “The key here is: We recycle dirty plastic into building products for homes that are lighter, more durable, and sustainable.”

Perez’s co-authors on the study are graduate students Tyler Godfrey, Kenan Sehnawi, Arjun Chandar, and professor of mechanical engineering David Hardt, who are all members of the MIT Laboratory for Manufacturing and Productivity.

Printing dirty

In 2019, Perez and Hardt started MIT HAUS, a group within the Laboratory for Manufacturing and Productivity that aims to produce homes from recycled polymer products, using large-scale additive manufacturing, which encompasses technologies that are capable of producing big structures, layer-by-layer, in relatively short timescales.

Today, some companies are exploring large-scale additive manufacturing to 3D-print modest-sized homes. These efforts mainly focus on printing with concrete or clay — materials whose production carries a large negative environmental impact. The house structures that have been printed so far are largely walls. The MIT HAUS group is among the first to consider printing structural framing elements such as foundation pilings, floor trusses, stair stringers, roof trusses, wall studs, and joists.

What’s more, they are seeking to do so not with cement, but with recycled “dirty” plastic — plastic that doesn’t have to be cleaned and preprocessed before reuse. The researchers envision that one day, used bottles and food containers could be fed directly into a shredder, pelletized, then fed into a large-scale additive manufacturing machine to become structural composite construction components. The plastic composite parts would be light enough to transport via pickup truck rather than a traditional lumber-hauling 18-wheeler. At the construction site, the elements could be quickly fitted into a lightweight yet sturdy home frame.

“We are starting to crack the code on the ability to process and print really dirty plastic,” Perez says. “The questions we’ve been asking are, what is the dirty, unwanted plastic good for, and how do we use the dirty plastic as-is?”

Weight class

The team’s new study is one step toward that overall goal of sustainable, recycled construction. In this work, they developed a design for a printed floor truss made from recycled plastic. They designed the truss with a high stiffness-to-weight ratio, meaning that it should be able to support a given amount of weight with minimal deflection, or bending. (Think of being able to walk across a floor without it sagging between the joists.)

The researchers first explored a handful of possible truss designs in simulation, and put each design through a simulated load-bearing test. Their modeling showed that one design in particular exhibited the highest stiffness-to-weight ratio and was therefore the most promising pattern to print and physically test. The design is close to the traditional wood-based floor truss pattern resembling a ladder with diagonal, triangular rungs. The team made a slight adjustment to this design, adding small reinforcing elements to each node where a “rung” met the main truss frame.
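
The article does not show the team’s actual simulation tooling; purely to illustrate the kind of screening described above, the short Python sketch below ranks a handful of hypothetical truss designs by stiffness-to-weight ratio, approximating stiffness as applied load divided by simulated midspan deflection. The design names, masses, loads, and deflection values are invented placeholders, not figures from the study.

```python
# Illustrative sketch only: ranking candidate truss designs by
# stiffness-to-weight ratio, where stiffness is approximated as the
# applied load divided by the simulated midspan deflection.
# All numbers are hypothetical placeholders, not values from the MIT study.
from dataclasses import dataclass


@dataclass
class TrussDesign:
    name: str
    mass_lb: float          # printed mass of the truss
    test_load_lb: float     # load applied in the simulated test
    deflection_in: float    # simulated midspan deflection under that load

    @property
    def stiffness_lb_per_in(self) -> float:
        return self.test_load_lb / self.deflection_in

    @property
    def stiffness_to_weight(self) -> float:
        return self.stiffness_lb_per_in / self.mass_lb


candidates = [
    TrussDesign("plain ladder", 12.0, 300, 0.40),
    TrussDesign("diagonal web", 13.0, 300, 0.22),
    TrussDesign("diagonal web + reinforced nodes", 13.5, 300, 0.18),
]

for design in sorted(candidates, key=lambda d: d.stiffness_to_weight, reverse=True):
    print(f"{design.name:35s} stiffness/weight = {design.stiffness_to_weight:6.1f}")

best = max(candidates, key=lambda d: d.stiffness_to_weight)
print(f"Most promising design to print and test: {best.name}")
```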

To print the design, Perez and his colleagues went to MIT’s Bates Research and Engineering Center, which houses the group’s industrial-scale 3D printer — a room-sized machine capable of printing large structures at up to 80 pounds of material per hour. For their preliminary study, the researchers used pellets made of a combination of recycled PET polymers and glass fibers — a mixture that improves the material’s printability and durability. They obtained the material from an aerospace materials company, and then fed the pellets into the printer as composite “ink.”

The team printed four trusses, each measuring 8 feet long, 1 foot high, and about 1 inch wide. Each truss took about 13 minutes to print. Perez and Godfrey spaced the trusses apart in a parallel configuration similar to traditional wood-based trusses, and screwed them into a sheet of plywood to mimic a 4-x-8-foot floor frame. They placed bags of sand and concrete of increasing weight in the center of the flooring system and measured the amount of deflection that the trusses experienced underneath.

The trusses easily withstood loads of 300 pounds, well above the deflection standards set by the U.S. Department of Housing and Urban Development. The researchers didn’t stop there, continuing to add weight. Only when the loads reached over 4,000 pounds did the trusses finally buckle and crack.

In terms of stiffness, the printed trusses meet existing building codes in the U.S. To make them ready for wide adoption, Perez says the cost of producing the structures will have to be brought down to compete with the price of wood. The trusses in the new study were printed using recycled plastic, but from a source that he describes as the “crème de la crème of recycled feedstocks.” The plastic is factory-discarded material, but is not quite the “dirty” plastic that he aims ultimately to shred, print, and build.

The current study demonstrates that it is possible to print structural building elements from recycled plastic. Perez is now working with dirtier plastics, such as used soda bottles that still hold a bit of liquid residue, to see how such contaminants affect the quality of the printed product.

If dirty plastics can be made into durable housing structures, Perez says “the idea is to bring shipping containers close to where you know you’ll have a lot of plastic, like next to a football stadium. Then you could use off-the-shelf shredding technology and feed that dirty shredded plastic into a large-scale additive manufacturing system, which could exist in micro-factories, just like bottling centers, around the world. You could print the parts for entire buildings that would be light enough to transport on a moped or pickup truck to where homes are most needed.”

This research was supported, in part, by Gerstner Philanthropies, the Chandler Health of the Planet grant, and Cincinnati Incorporated.


Young and gifted

Joshua Bennett’s new book profiles American prodigies, examining the personal and social dimensions of cultivating promise.


James Baldwin was a prodigy. That is not the first thing most people associate with a writer who once declared that he “had no childhood” and whose work often elides the details of his early life in New York, in the 1920s and 1930s. Still, by the time Baldwin was 14, he was a successful church preacher, excelling in a role otherwise occupied by adults.

Throw in the fact that Baldwin was reading Dostoyevsky by the fifth grade, wrote “like an angel” according to his elementary school principal, edited his middle school periodical, and wrote for his high school magazine, and it’s clear he was a precocious wordsmith.

These matters are complicated, of course. To MIT scholar Joshua Bennett, Baldwin’s writings reveal enough for us to conclude that his childhood was marked by a “relentless introspection” as he sought to come to terms with the world. Beyond that, Bennett thinks, some of Baldwin’s work, and even the one children’s book he wrote, yields “messages of persistence,” recognizing the need for any child to receive encouragement and education.

And if someone as precocious as Baldwin still needed cultivation, then virtually everyone does. If we act as if talent blossoms on its own, we are ignoring the vital role communities, teachers, and families play in helping artists — or anyone — develop their skills.

“We talk as if these people emerged ex nihilo,” Bennett says. “When all along the way, there were people who cultivated them, and our children deserve the same — all of the children of the world. We have a dominant model of genius that is fundamentally flawed, in that it often elides the role of communities and cultural institutions.”

Bennett explores these issues in a new book, “The People Can Fly: American Promise, Black Prodigies, and the Greatest Miracle of All Time,” published this week by Hachette. A literary scholar and poet himself, Bennett is the Distinguished Chair of the Humanities at MIT and a professor of literature.

“The People Can Fly” accomplishes many kinds of work at once: Bennett offers a series of profiles, carefully wrought to see how some prominent figures were able to flourish from childhood forward. And he closely reads their works for indications about how they understood the shape of their own lives. In so doing, Bennett underscores the significance of the social settings that prodigious talents grow up in. For good measure, he also offers reflections on his own career trajectory and encounters with these artists, driving home their influence and meaning.

Reading about these many prodigies, one by one, helps readers build a picture of the realities, and complications, of trying to sustain early promise.

“It’s part of what I tell my students — the individual is how you get to the universal,” Bennett says. “It doesn’t mean I need to share certain autobiographical impulses with, say, Hemingway. It’s just that I think those touchpoints exist in all great works of art.”

Space odyssey

For Bennett, the idea of writing about prodigies grew naturally from his research and teaching, which ranges broadly in American and global literature. Bennett began contemplating “the idea of promise as this strange, idiosyncratic quality, this thing we see through various acts, perhaps something as simple as a little riff you hear a child sing, an element of their drawings, or poems.” At the same time, he notes, people struggle with “the weight of promise. There is a peril that can come along with promise. Promise can be taken away.”

Ultimately, Bennett adds, “I started thinking a little more about what promise has meant in African American communities,” in particular. Ranging widely in the book, Bennett consistently loops back to a core focus on the ideals, communities, and obstacles many Black artists grew up with. These artists and intellectuals include Malcolm X, Gwendolyn Brooks, Stevie Wonder, and the late poet and scholar Nikki Giovanni.

Bennett’s chapter on Giovanni shows his own interest in placing an artist’s life in historical context, and picks up on motifs relating back to childhood and personal promise.

Giovanni attended Fisk University early, enrolling at 17. Later she enrolled in Columbia University’s Master of Fine Arts program, where poetry students were supposed to produce publishable work in a two-year program. In her first year, Giovanni’s poetry collection, “Black Feeling, Black Talk,” not only got published but became a hit, selling 10,000 copies. She left the program early — without a degree, since it required two years of residency. In short, she was always going places.

Giovanni went on to become one of the most celebrated poets of her time, and spent decades on the faculty at Virginia Tech. One idea that kept recurring in her work: dreams of space exploration. Giovanni’s work transmitted a clear enthusiasm for exploring the stars.

“Looking through her work, you see space travel everywhere,” Bennett says. “Even in her most prominent poem, ‘Ego Tripping (there may be a reason why),’ there is this sense of someone who’s soaring over the landscape — ‘I’m so hip even my errors are correct.’ There is this idea of an almost divine being.”

That enthusiasm was accompanied by the recognition that astronauts, at least at one time, emerged from a particular slice of society. Indeed, Giovanni repeatedly and publicly called for more opportunities for more Americans to become astronauts. A pressing issue, for her, was making dreams achievable for more people.

“Nikki Giovanni is very invested in these sorts of questions, as a writer, as an educator, and as a big thinker,” Bennett says. “This kind of thinking about the cosmos is everywhere in her work. But inside of that is a critique, that everyone should have a chance to expand the orbit of their dreaming. And dream of whatever they need to.”

And as Bennett draws out in “The People Can Fly,” stories and visions of flying have run deep in Black culture, offering a potent symbolism and a mode of “holding on to a deeper sense that the constraints of this present world are not all-powerful or everlasting. The miraculous is yet available. The people could fly, and still can.”

Children with promise, families with dreams

Other artists have praised “The People Can Fly.” The actor, producer, and screenwriter Lena Waithe has said that “Bennett’s poetic nature shines through on every page. … This book is a masterclass in literature and a necessary reminder to cherish the child in all of us.”

Certainly Bennett brings a vast sense of scope to “The People Can Fly,” ranging across centuries of history. Phillis Wheatley, a formerly enslaved woman whose 1773 poetry collection was later praised by George Washington, was an early American prodigy, studying the classics as a teenager and releasing her work at age 20. Mae Jemison, the first Black female astronaut, enrolled in Stanford University at age 16, spurred by family members who taught her about the stars. All told, Bennett weaves together a scholarly tapestry about hope, ambition, and, at times, opportunity.

Often, that hope and ambition belong to whole families, not just one gifted child. As Nikki Giovanni herself quipped, while giving the main address at MIT’s annual Martin Luther King convocation in 1990, “the reason you go to college is that it makes your mother happy.”

Bennett can relate, having come from a family where his mother was the only relative before him to have attended college. As a kid in the 1990s, growing up in Yonkers, New York, he had a Princeton University sweatshirt, inspired by his love of the television program “The Fresh Prince of Bel Air.” The program featured a character named Phillip Banks — popularly known as “Uncle Phil” — who was, within the world of the show, a Princeton alumnus.

“I would ask my Mom, ‘How do I get into Princeton?’” Bennett recalls. “She would just say, ‘Study hard, honey.’ No one but her had even been to college in my family. No one had been to Princeton. No one had set foot on Princeton University’s campus. But the idea that was possible in the country we lived in, for a woman who was the daughter of two sharecroppers, and herself grew up in a tenement with her brothers and sister, and nonetheless went on to play at Carnegie Hall and get a college degree and buy her mother a color TV — it’s fascinating to me.”

The postscript to that anecdote is that Bennett did go on to earn his PhD from Princeton. Behind many children with promise are families and communities with dreams for those kids.

“There’s something to it I refuse to relinquish,” Bennett says. “My mother’s vision was a powerful and persistent one — she believed that the future also belonged to her children.”


How a unique class of neurons may set the table for brain development

Somatostatin-expressing neurons follow a unique trajectory when forming connections in the visual cortex that may help establish the conditions needed for sensory experience to refine circuits.


The way the brain develops can shape us throughout our lives, so neuroscientists are intensely curious about how it happens. A new study by researchers in The Picower Institute for Learning and Memory at MIT that focused on visual cortex development in mice reveals that an important class of neurons follows a set of rules that, while surprising, might just create the right conditions for circuit optimization.

During early brain development, multiple types of neurons emerge in the visual cortex (where the brain processes vision). Many are “excitatory,” driving the activity of brain circuits, and others are “inhibitory,” meaning they control that activity. Just like a car needs not only an engine and a gas pedal, but also a steering wheel and brakes, a healthy balance between excitation and inhibition is required for proper brain function. During a “critical period” of development in the visual cortex, soon after the eyes first open, excitatory and inhibitory neurons forge and edit millions of connections, or synapses, to adapt nascent circuits to the incoming flood of visual experience. Over many days, in other words, the brain optimizes its attunement to the world.

In the new study in The Journal of Neuroscience, a team led by MIT research scientist Josiah Boivin and Professor Elly Nedivi visually tracked somatostatin (SST)-expressing inhibitory neurons forging synapses with excitatory cells along their sprawling dendrite branches, illustrating the action before, during, and after the critical period with unprecedented resolution. Several of the rules the SST cells appeared to follow were unexpected — for instance, unlike other cell types, their activity did not depend on visual input — but now that the scientists know these neurons’ unique trajectory, they have a new idea about how it may enable sensory activity to influence development: SST cells might help usher in the critical period by establishing the baseline level of inhibition needed to ensure that only certain types of sensory input will trigger circuit refinement.

“Why would you need part of the circuit that’s not really sensitive to experience? It could be that it’s setting things up for the experience-dependent components to do their thing,” says Nedivi, the William R. and Linda R. Young Professor in the Picower Institute and MIT’s departments of Biology and Brain and Cognitive Sciences.

Boivin adds: “We don’t yet know whether SST neurons play a causal role in the opening of the critical period, but they are certainly in the right place at the right time to sculpt cortical circuitry at a crucial developmental stage.”

A unique trajectory

To visualize SST-to-excitatory synapse development, Nedivi and Boivin’s team used a genetic technique that pairs expression of synaptic proteins with fluorescent molecules to resolve the appearance of the “boutons” SST cells use to reach out to excitatory neurons. They then performed a technique called eMAP, developed by Kwanghun Chung’s lab in the Picower Institute, that expands and clears brain tissue to increase magnification, allowing super-resolution visualization of the actual synapses those boutons ultimately formed with excitatory cells along their dendrites. Co-author and postdoc Bettina Schmerl helped lead the eMAP work.

These new techniques revealed that SST bouton appearance and then synapse formation surged dramatically when the eyes opened, and then as the critical period got underway. But while excitatory neurons during this time frame are still maturing, first in the deepest layers of the cortex and later in its more superficial layers, the SST boutons blanketed all layers simultaneously, meaning that, perhaps counterintuitively, they sought to establish their inhibitory influence regardless of the maturation stage of their intended partners.

Many studies have shown that eye opening and the onset of visual experience set in motion the development and elaboration of excitatory cells and another major inhibitory neuron type (parvalbumin-expressing cells). Raising mice in the dark for different lengths of time, for instance, can distinctly alter what happens with these cells. Not so for the SST neurons. The new study showed that varying lengths of darkness had no effect on the trajectory of SST bouton and synapse appearance; it remained invariant, suggesting it is preordained by a genetic program or an age-related molecular signal, rather than experience.

Moreover, after the initial frenzy of synapse formation during development, many synapses are then edited, or pruned away, so that only the ones needed for appropriate sensory responses endure. Again, the SST boutons and synapses proved to be exempt from these redactions. Although the pace of new SST synapse formation slowed at the peak of the critical period, the net number of synapses never declined, and even continued increasing into adulthood.

“While a lot of people think that the only difference between inhibition and excitation is their valence, this demonstrates that inhibition works by a totally different set of rules,” Nedivi says.

In all, while other cell types were tailoring their synaptic populations to incoming experience, the SST neurons appeared to provide an early but steady inhibitory influence across all layers of the cortex. After excitatory synapses have been pruned back by the time of adulthood, the continued upward trickle of SST inhibition may contribute to the increase in the inhibition-to-excitation ratio that still allows the adult brain to learn, but not as dramatically or as flexibly as during early childhood.

A platform for future studies

In addition to shedding light on typical brain development, Nedivi says, the study’s techniques can enable side-by-side comparisons in mouse models of neurodevelopmental disorders such as autism or epilepsy, where aberrations of excitation and inhibition balance are implicated.

Future studies using the techniques can also look at how different cell types connect with each other in brain regions other than the visual cortex, she adds.

Boivin, who will soon open his own lab as a faculty member at Amherst College, says he is eager to apply the work in new ways.

“I’m excited to continue investigating inhibitory synapse formation on genetically defined cell types in my future lab,” Boivin says. “I plan to focus on the development of limbic brain regions that regulate behaviors relevant to adolescent mental health.”

In addition to Nedivi, Boivin, and Schmerl, the paper’s other authors are Kendyll Martin and Chia-Fang Lee.

Funding for the study came from the National Institutes of Health, the Office of Naval Research, and the Freedom Together Foundation.


How generative AI can help scientists synthesize complex materials

MIT researchers’ DiffSyn model offers recipes for synthesizing new materials, enabling faster experimentation and a shorter journey from hypothesis to use.


Generative artificial intelligence models have been used to create enormous libraries of theoretical materials that could help solve all kinds of problems. Now, scientists just have to figure out how to make them.

In many cases, materials synthesis is not as simple as following a recipe in the kitchen. Factors like the temperature and length of processing can yield huge changes in a material’s properties that make or break its performance. That has limited researchers’ ability to test millions of promising model-generated materials.

Now, MIT researchers have created an AI model that guides scientists through the process of making materials by suggesting promising synthesis routes. In a new paper, they showed the model delivers state-of-the-art accuracy in predicting effective synthesis pathways for a class of materials called zeolites, which could be used to improve catalysis, adsorption, and ion exchange processes. Following its suggestions, the team synthesized a new zeolite material that showed improved thermal stability.

The researchers believe their new model could break the biggest bottleneck in the materials discovery process.

“To use an analogy, we know what kind of cake we want to make, but right now we don’t know how to bake the cake,” says lead author Elton Pan, a PhD candidate in MIT’s Department of Materials Science and Engineering (DMSE). “Materials synthesis is currently done through domain expertise and trial and error.”

The paper describing the work appears today in Nature Computational Science. Joining Pan on the paper are Soonhyoung Kwon ’20, PhD ’24; DMSE postdoc Sulin Liu; chemical engineering PhD student Mingrou Xie; DMSE postdoc Alexander J. Hoffman; Research Assistant Yifei Duan SM ’25; DMSE visiting student Thorben Prein; DMSE PhD candidate Killian Sheriff; MIT Robert T. Haslam Professor in Chemical Engineering Yuriy Roman-Leshkov; Valencia Polytechnic University Professor Manuel Moliner; MIT Paul M. Cook Career Development Professor Rafael Gómez-Bombarelli; and MIT Jerry McAfee Professor in Engineering Elsa Olivetti.

Learning to bake

Massive investments in generative AI have led companies like Google and Meta to create huge databases filled with material recipes that, at least theoretically, have properties like high thermal stability and selective adsorption of gases. But making those materials can require weeks or months of careful experiments that test specific reaction temperatures, times, precursor ratios, and other factors.

“People rely on their chemical intuition to guide the process,” Pan says. “Humans are linear. If there are five parameters, we might keep four of them constant and vary one of them linearly. But machines are much better at reasoning in a high-dimensional space.”

Synthesis is now often the most time-consuming step in a material’s journey from hypothesis to use.

To help scientists navigate that process, the MIT researchers trained a generative AI model on over 23,000 material synthesis recipes described over 50 years of scientific papers. The researchers iteratively added random “noise” to the recipes during training, and the model learned to de-noise and sample from the random noise to find promising synthesis routes.

The result is DiffSyn, which uses an approach in AI known as diffusion.

“Diffusion models are basically a generative AI model like ChatGPT, but more like the DALL-E image generation model,” Pan says. “During inference, it converts noise into meaningful structure by subtracting a little bit of noise at each step. In this case, the ‘structure’ is the synthesis route for a desired material.”

When a scientist using DiffSyn enters a desired material structure, the model offers some promising combinations of reaction temperatures, reaction times, precursor ratios, and more.

“It basically tells you how to bake your cake,” Pan says. “You have a cake in mind, you feed it into the model, the model spits out the synthesis recipes. The scientist can pick whichever synthesis path they want, and there are simple ways to quantify the most promising synthesis path from what we provide, which we show in our paper.”
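
DiffSyn’s actual code and interface are not reproduced in the article; the Python sketch below is only meant to illustrate the general diffusion idea Pan describes, starting from random noise and repeatedly subtracting a small predicted noise component, applied here to a toy vector of synthesis parameters, then sampling many candidate recipes for one target structure and ranking them. The recipe fields, the `denoise_step` stand-in, and the scoring heuristic are assumptions made for illustration, not components of the real model.

```python
# Toy illustration of diffusion-style sampling for synthesis recipes.
# This is NOT DiffSyn's model or API: `denoise_step` is a placeholder for a
# trained network, and the recipe fields and scoring rule are assumed.
import numpy as np

RECIPE_FIELDS = ["temperature_C", "time_hours", "precursor_ratio", "water_ratio"]


def denoise_step(x: np.ndarray, t: int, target_structure: str) -> np.ndarray:
    """Stand-in for a trained denoiser: predicts the bit of noise to subtract
    at step t, conditioned on the desired material structure."""
    rng = np.random.default_rng(hash((t, target_structure)) % (2**32))
    return 0.1 * rng.standard_normal(x.shape)  # placeholder prediction


def sample_recipe(target_structure: str, steps: int = 50) -> dict:
    x = np.random.standard_normal(len(RECIPE_FIELDS))  # start from pure noise
    for t in reversed(range(steps)):                   # subtract a little noise each step
        x = x - denoise_step(x, t, target_structure)
    return dict(zip(RECIPE_FIELDS, x.round(3)))


def score_recipe(recipe: dict) -> float:
    """Stand-in for a ranking rule (the paper has its own way of quantifying
    which sampled synthesis paths are most promising)."""
    return -sum(abs(v) for v in recipe.values())


# Sample many candidate routes for one target structure, then rank them.
candidates = [sample_recipe("hypothetical-zeolite-framework") for _ in range(1000)]
best = max(candidates, key=score_recipe)
print("Most promising candidate recipe (toy units):", best)
```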

To test their system, the researchers used DiffSyn to suggest novel synthesis paths for a zeolite, a class of materials that is complex and slow to form into testable samples.

“Zeolites have a very high-dimensional synthesis space,” Pan says. “Zeolites also tend to take days or weeks to crystallize, so the impact [of finding the best synthesis pathway faster] is much higher than other materials that crystallize in hours.”

The researchers were able to make the new zeolite material using synthesis pathways suggested by DiffSyn. Subsequent testing revealed the material had a promising morphology for catalytic applications.

“Scientists have been trying out different synthesis recipes one by one,” Pan says. “That makes them very time-consuming. This model can sample 1,000 of them in under a minute. It gives you a very good initial guess on synthesis recipes for completely new materials.”

Accounting for complexity

Previously, researchers have built machine-learning models that mapped a material to a single recipe. Those approaches do not take into account that there are different ways to make the same material.

DiffSyn is trained to map material structures to many different possible synthesis paths. Pan says that is better aligned with experimental reality.

“This is a paradigm shift away from one-to-one mapping between structure and synthesis to one-to-many mapping,” Pan says. “That’s a big reason why we achieved strong gains on the benchmarks.”

Moving forward, the researchers believe the approach should work to train other models that guide the synthesis of materials outside of zeolites, including metal-organic frameworks, inorganic solids, and other materials that have more than one possible synthesis pathway.

“This approach could be extended to other materials,” Pan says. “Now, the bottleneck is finding high-quality data for different material classes. But zeolites are complicated, so I can imagine they are close to the upper-bound of difficulty. Eventually, the goal would be interfacing these intelligent systems with autonomous real-world experiments, and agentic reasoning on experimental feedback to dramatically accelerate the process of materials design.”

The work was supported by MIT International Science and Technology Initiatives (MISTI), the National Science Foundation, Generalitat Valenciana, the Office of Naval Research, ExxonMobil, and the Agency for Science, Technology and Research in Singapore.


A portable ultrasound sensor may enable earlier detection of breast cancer

The new system could be used at home or in doctors’ offices to scan people who are at high risk for breast cancer.


For people who are at high risk of developing breast cancer, frequent screenings with ultrasound can help detect tumors early. MIT researchers have now developed a miniaturized ultrasound system that could make it easier for breast ultrasounds to be performed more often, either at home or at a doctor’s office.

The new system consists of a small ultrasound probe attached to an acquisition and processing module that is a little larger than a smartphone. This system can be used on the go when connected to a laptop computer to reconstruct and view wide-angle 3D images in real time.

“Everything is more compact, and that can make it easier to be used in rural areas or for people who may have barriers to this kind of technology,” says Canan Dagdeviren, an associate professor of media arts and sciences at MIT and the senior author of the study.

With this system, she says, more tumors could potentially be detected earlier, which increases the chances of successful treatment.

Colin Marcus PhD ’25 and former MIT postdoc Md Osman Goni Nayeem are the lead authors of the paper, which appears in the journal Advanced Healthcare Materials. Other authors of the paper are MIT graduate students Aastha Shah, Jason Hou, and Shrihari Viswanath; MIT summer intern and University of Central Florida undergraduate Maya Eusebio; MIT Media Lab Research Specialist David Sadat; MIT Provost Anantha Chandrakasan; and Massachusetts General Hospital breast cancer surgeon Tolga Ozmen.

Frequent monitoring

While many breast tumors are detected through routine mammograms, which use X-rays, tumors can develop in between yearly mammograms. These tumors, known as interval cancers, account for 20 to 30 percent of all breast cancer cases, and they tend to be more aggressive than those found during routine scans.

Detecting these tumors early is critical: When breast cancer is diagnosed in the earliest stages, the survival rate is nearly 100 percent. However, for tumors detected in later stages, that rate drops to around 25 percent.

For some individuals, more frequent ultrasound scanning in addition to regular mammograms could help to boost the number of tumors that are detected early. Currently, ultrasound is usually done only as a follow-up if a mammogram reveals any areas of concern. Ultrasound machines used for this purpose are large and expensive, and they require highly trained technicians to use them.

“You need skilled ultrasound technicians to use those machines, which is a major obstacle to getting ultrasound access to rural communities, or to developing countries where there aren’t as many skilled radiologists,” Viswanath says.

By creating ultrasound systems that are portable and easier to use, the MIT team hopes to make frequent ultrasound scanning accessible to many more people.

In 2023, Dagdeviren and her colleagues developed an array of ultrasound transducers that were incorporated into a flexible patch that can be attached to a bra, allowing the wearer to move an ultrasound tracker along the patch and image the breast tissue from different angles.

Those 2D images could be combined to generate a 3D representation of the tissue, but there could be small gaps in coverage, making it possible that small abnormalities could be missed. Also, that array of transducers had to be connected to a traditional, costly, refrigerator-sized processing machine to view the images.

In their new study, the researchers set out to develop a modified ultrasound array that would be fully portable and could create a 3D image of the entire breast by scanning just two or three locations.

The new system they developed is a chirped data acquisition system (cDAQ) that consists of an ultrasound probe and a motherboard that processes the data. The probe, which is a little smaller than a deck of cards, contains an ultrasound array arranged in the shape of an empty square, a configuration that allows the array to take 3D images of the tissue below.

This data is processed by the motherboard, which is a little bit larger than a smartphone and costs only about $300 to make. All of the electronics used in the motherboard are commercially available. To view the images, the motherboard can be connected to a laptop computer, so the entire system is portable.

“Traditional 3D ultrasound systems require power-hungry and bulky electronics, which limits their use to high-end hospitals and clinics,” Chandrakasan says. “By redesigning the system to be ultra-sparse and energy-efficient, this powerful diagnostic tool can be moved out of the imaging suite and into a wearable form factor that is accessible for patients everywhere.”

This system also uses much less power than a traditional ultrasound machine, so it can be powered with a 5V DC supply (a battery or an AC/DC adapter used to plug in small electronic devices such as modems or portable speakers).

“Ultrasound imaging has long been confined to hospitals,” says Nayeem. “To move ultrasound beyond the hospital setting, we reengineered the entire architecture, introducing a new ultrasound fabrication process, to make the technology both scalable and practical.”

Earlier diagnosis

The researchers tested the new system on one human subject, a 71-year-old woman with a history of breast cysts. They found that the system could accurately image the cysts and create a 3D image of the tissue, with no gaps.

The system can image as deep as 15 centimeters into the tissue, and it can image the entire breast from two or three locations. And, because the ultrasound device sits on top of the skin without having to be pressed into the tissue like a typical ultrasound probe, the images are not distorted.

“With our technology, you simply place it gently on top of the tissue and it can visualize the cysts in their original location and with their original sizes,” Dagdeviren says.

The research team is now conducting a larger clinical trial at the MIT Center for Clinical and Translational Research and at MGH.

The researchers are also working on an even smaller version of the data processing system, which will be about the size of a fingernail. They hope to connect this to a smartphone that could be used to visualize the images, making the entire system smaller and easier to use. They also plan to develop a smartphone app that would use an AI algorithm to help guide the patient to the best location to place the ultrasound probe.

While the current version of the device could be readily adapted for use in a doctor’s office, the researchers hope that in the future, a smaller version can be incorporated into a wearable sensor that could be used at home by people at high risk for developing breast cancer.

Dagdeviren is now working on launching a company to help commercialize the technology, with assistance from an MIT HEALS Deshpande Momentum Grant, the Martin Trust Center for MIT Entrepreneurship, and the MIT Media Lab WHx Women’s Health Innovation Fund.

The research was funded by a National Science Foundation CAREER Award, a 3M Non-Tenured Faculty Award, Lyda Hill Philanthropies, and the MIT Media Lab Consortium.


The philosophical puzzle of rational artificial intelligence

As AI technology advances, a new interdisciplinary course seeks to equip students with foundational critical thinking skills in computing.


To what extent can an artificial system be rational?

A new MIT course, 6.S044/24.S00 (AI and Rationality), doesn’t seek to answer this question. Instead, it challenges students to explore this and other philosophical problems through the lens of AI research. For the next generation of scholars, concepts of rationality and agency could prove integral in AI decision-making, especially when influenced by how humans understand their own cognitive limits and their constrained, subjective views of what is or isn’t rational.

This inquiry is rooted in a deep relationship between computer science and philosophy, which have long collaborated in formalizing what it is to form rational beliefs, learn from experience, and make rational decisions in pursuit of one's goals.

“You’d imagine computer science and philosophy are pretty far apart, but they’ve always intersected. The technical parts of philosophy really overlap with AI, especially early AI,” says course instructor Leslie Kaelbling, the Panasonic Professor of Computer Science and Engineering at MIT, calling to mind Alan Turing, who was both a computer scientist and a philosopher. Kaelbling herself holds an undergraduate degree in philosophy from Stanford University, noting that computer science wasn’t available as a major at the time.

Brian Hedden, a professor in the Department of Linguistics and Philosophy who holds an MIT Schwarzman College of Computing shared position with the Department of Electrical Engineering and Computer Science (EECS) and teaches the class with Kaelbling, notes that the two disciplines are more aligned than people might imagine, adding that the “differences are in emphasis and perspective.”

Tools for further theoretical thinking

Kaelbling and Hedden created AI and Rationality, offered for the first time in fall 2025, as part of the Common Ground for Computing Education, a cross-cutting initiative of the MIT Schwarzman College of Computing that brings multiple departments together to develop and teach new courses and launch new programs that blend computing with other disciplines.

With over two dozen students registered, AI and Rationality is one of two Common Ground classes with a foundation in philosophy, the other being 6.C40/24.C40 (Ethics of Computing).

While Ethics of Computing explores concerns about the societal impacts of rapidly advancing technology, AI and Rationality examines the disputed definition of rationality by considering several components: the nature of rational agency, the concept of a fully autonomous and intelligent agent, and the ascription of beliefs and desires onto these systems.

Because AI is extremely broad in its implementation and each use case raises different issues, Kaelbling and Hedden brainstormed topics that could provide fruitful discussion and engagement between the two perspectives of computer science and philosophy.

“It's important when I work with students studying machine learning or robotics that they step back a bit and examine the assumptions they’re making,” Kaelbling says. “Thinking about things from a philosophical perspective helps people back up and understand better how to situate their work in actual context.”

Both instructors stress that this isn’t a course that provides concrete answers to questions on what it means to engineer a rational agent.

Hedden says, “I see the course as building their foundations. We’re not giving them a body of doctrine to learn and memorize and then apply. We’re equipping them with tools to think about things in a critical way as they go out into their chosen careers, whether they’re in research or industry or government.”

The rapid progress of AI also presents a new set of challenges in academia. Predicting what students may need to know five years from now is something Kaelbling sees as an impossible task. “What we need to do is give them the tools at a higher level — the habits of mind, the ways of thinking — that will help them approach the stuff that we really can’t anticipate right now,” she says.

Blending disciplines and questioning assumptions

So far, the class has drawn students from a wide range of disciplines — from those firmly grounded in computing to others interested in exploring how AI intersects with their own fields of study.

Throughout the semester’s readings and discussions, students grappled with different definitions of rationality and how those definitions pushed back against assumptions in their fields.

On what surprised her about the course, Amanda Paredes Rioboo, a senior in EECS, says, “We’re kind of taught that math and logic are this golden standard or truth. This class showed us a variety of examples that humans act inconsistently with these mathematical and logical frameworks. We opened up this whole can of worms as to whether, is it humans that are irrational? Is it the machine learning systems that we designed that are irrational? Is it math and logic itself?”

Junior Okoroafor, a PhD student in the Department of Brain and Cognitive Sciences, was appreciative of the class’s challenges and the ways in which the definition of a rational agent could change depending on the discipline. “Representing what each field means by rationality in a formal framework makes it clear exactly which assumptions were shared, and which were different, across fields.”

The co-teaching, collaborative structure of the course, as with all Common Ground endeavors, gave students and the instructors opportunities to hear different perspectives in real time.

For Paredes Rioboo, this is her third Common Ground course. She says, “I really like the interdisciplinary aspect. They’ve always felt like a nice mix of theoretical and applied from the fact that they need to cut across fields.”

According to Okoroafor, Kaelbling and Hedden demonstrated an obvious synergy between fields; it felt, he says, as if they were engaging and learning along with the class. Seeing how computer science and philosophy can inform each other helped him understand their common ground and the valuable perspectives each brings to intersecting issues.

He adds, “Philosophy also has a way of surprising you.”


Designing the future of metabolic health through tissue-selective drug delivery

Founded by three MIT alumni, Gensaic uses AI-guided protein design to deliver RNA and other therapeutic molecules to specific cells or areas of the body.


New treatments based on biological molecules like RNA give scientists unprecedented control over how cells function. But delivering those drugs to the right tissues remains one of the biggest obstacles to turning these promising yet fragile molecules into powerful new treatments.

Now Gensaic, founded by Lavi Erisson MBA ’19; Uyanga Tsedev SM ’15, PhD ’21; and Jonathan Hsu PhD ’22, is building an artificial intelligence-powered discovery engine to develop protein shuttles that can deliver therapeutic molecules like RNA to specific tissues and cells in the body. The company is using its platform to create advanced treatments for metabolic diseases and other conditions. It is also developing treatments in partnership with Novo Nordisk and exploring additional collaborations to amplify the speed and scale of its impact.

The founders believe their delivery technology — combined with advanced therapies that precisely control gene expression, like RNA interference (RNAi) and small activating RNA (saRNA) — will enable new ways of improving health and treating disease.

“I think the therapeutic space in general is going to explode with the possibilities our approach unlocks,” Erisson says. “RNA has become a clinical-grade commodity that we know is safe. It is easy to synthesize, and it has unparalleled specificity and reversibility. By taking that and combining it with our targeting and delivery, we can change the therapeutic landscape.”

Drinking from the firehose

Erisson worked on drug development at the pharmaceutical giant Teva before coming to MIT for his Sloan Fellows MBA in 2018.

“I came to MIT in large part because I was looking to stretch the boundaries of how I apply critical thinking,” Erisson says. “At that point in my career, I had taken about 10 drug programs into clinical development, with products on the market now. But what I didn’t have were the intellectual and quantitative tools for interrogating finance strategy and other disciplines that aren’t purely scientific. I knew I’d be drinking from the firehose coming to MIT.”

Erisson met Hsu and Tsedev, then PhD students at MIT, in a class taught by professors Harvey Lodish and Andrew Lo. The group started holding weekly meetings to discuss their research and the prospect of starting a business.

After Erisson completed his MBA program in 2019, he became chief medical and business officer at the MIT spinout Iterative Health, a company using AI to improve screening for colorectal cancer and inflammatory bowel disease that has raised over $200 million to date. There, Erisson ran a 1,400-patient study and led the development and clearance of the company’s software product.

During that time, the eventual founders continued to meet at Erisson’s house to discuss promising research avenues, including Tsedev’s work in the lab of Angela Belcher, MIT’s James Mason Crafts Professor of Biological Engineering. Tsedev’s research involved using bacteriophages, fast-replicating viruses built largely from protein, to deliver treatments into hard-to-drug places like the brain.

As Hsu and Tsedev neared completion of their PhDs, the team decided to commercialize the technology, founding Gensaic at the end of 2021. Gensaic’s approach uses a method called unbiased directed evolution to find the best protein scaffolding to reach target tissues in the body.

“Directed evolution means having a lot of different species of proteins competing together for a certain function,” Erisson says. “The proteins are competing for the ability to reach the right cell, and we are then able to look at the genetic code of the protein that has ‘won’ that competition. When we do that process repeatedly, we find extremely adaptable proteins that can achieve the function we’re looking for.”
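
Gensaic’s FORGE platform is proprietary, and the article gives only the high-level description quoted above; as a purely conceptual sketch of that kind of selection loop, the toy Python example below evolves a pool of protein variants by repeatedly scoring them, keeping the “winners,” and mutating them to refill the pool. The sequence representation, scoring function, and mutation scheme are all made up for illustration and do not reflect Gensaic’s actual methods.

```python
# Conceptual sketch of a directed-evolution selection loop, loosely following
# the description above. The fitness and mutation functions are toy stand-ins;
# in practice the "score" would come from an experimental readout such as
# phage display panning against the target tissue.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"


def random_variant(length: int = 12) -> str:
    return "".join(random.choice(AMINO_ACIDS) for _ in range(length))


def mutate(seq: str, rate: float = 0.1) -> str:
    return "".join(random.choice(AMINO_ACIDS) if random.random() < rate else aa
                   for aa in seq)


def reaches_target_cell(seq: str) -> float:
    """Toy proxy for how well a displayed protein reaches the target cell."""
    return sum(1 for aa in seq if aa in "KRH") / len(seq)  # arbitrary rule


def directed_evolution(rounds: int = 5, pool_size: int = 1000, keep: int = 50) -> str:
    pool = [random_variant() for _ in range(pool_size)]
    for r in range(rounds):
        # Selection: keep the variants that "won" this round's competition.
        winners = sorted(pool, key=reaches_target_cell, reverse=True)[:keep]
        print(f"round {r + 1}: best score = {reaches_target_cell(winners[0]):.2f}")
        # Diversification: re-expand the pool by mutating the winners.
        pool = [mutate(random.choice(winners)) for _ in range(pool_size)]
    return max(pool, key=reaches_target_cell)


best_variant = directed_evolution()
print("best variant after evolution:", best_variant)
```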

Initially, the founders focused on developing protein scaffolds to deliver gene therapies. Gensaic has since pivoted to focus on delivering molecules like siRNA and other RNAi therapeutics, which have been hard to deliver outside of the liver.

Today Gensaic has screened more than 500 billion different proteins using phage display and directed evolution. It calls its platform FORGE, for Functional Optimization by Recursive Genetic Evolution.

Erisson says Gensaic’s delivery vehicles can also carry multiple RNA molecules into cells at the same time, giving doctors a novel and powerful set of tools to treat and prevent diseases.

“Today FORGE is built into the idea of multifunctional medicines,” Erisson says. “We are moving into a future where we can extract multiple therapeutic mechanisms from a single molecule. We can combine proteins with multiple tissue selectivity and multiple molecules of siRNA or other therapeutic modalities, and affect complex disease system biology with a single molecule.”

A “universe of opportunity”

The founders believe their approach will enable new ways of improving health by delivering advanced therapies directly to new places in the body. Precise delivery of drugs to anywhere in the body could not only unlock new therapeutic targets but also boost the effectiveness of existing treatments and reduce side effects.

“We’ve found we can get to the brain, and we can get to specific tissues like skeletal and adipose tissue,” Erisson says. “We’re the only company, to my knowledge, that has a protein-based delivery mechanism to get to adipose tissue.”

Delivering drugs into fat and muscle cells could be used to help people lose weight, retain muscle, and prevent conditions like fatty liver disease or osteoporosis.

Erisson says combining RNA therapeutics is another differentiator for Gensaic.

“The idea of multiplexed medicines is just emerging,” Erisson says. “There are no clinically approved drugs using dual-targeted siRNAs, especially ones that have multi-tissue targeting. We are focused on metabolic indications that have two targets at the same time and can take on unique tissues or combinations of tissues.”

Gensaic’s collaboration with Novo Nordisk, announced last year, targets cardiometabolic diseases and includes up to $354 million in upfront and milestone payments per disease target.

“We already know we can deliver multiple types of payloads, and Novo Nordisk is not limited to siRNA, so we can go after diseases in ways that aren’t available to other companies,” Erisson says. “We are too small to try to swallow this universe of opportunity on our own, but the potential of this platform is incredibly large. Patients deserve safer medicines and better outcomes than what are available now.”


Taking the heat out of industrial chemical separations

The gas-filtering membranes developed by MIT spinout Osmoses offer an alternative to energy-hungry thermal separation for chemicals and fuels.


The modern world runs on chemicals and fuels that require a huge amount of energy to produce: Industrial chemical separation accounts for 10 to 15 percent of the world’s total energy consumption. That’s because most separations today rely on heat to boil off unwanted materials and isolate compounds.

The MIT spinout Osmoses is making industrial chemical separations more efficient by reducing the need for all that heat. The company, founded by former MIT postdoc Francesco Maria Benedetti; Katherine Mizrahi Rodriguez ’17, PhD ’22; Professor Zachary Smith; and Holden Lai, has developed a polymer technology capable of filtering gases with unprecedented selectivity.

Gases — consisting of some of the smallest molecules in the world — have historically been the hardest to separate. Osmoses says its membranes enable industrial customers to increase production, use less energy, and operate in a smaller footprint than is possible using conventional heat-based separation processes.

Osmoses has already begun working with partners to demonstrate its technology’s performance, including its ability to upgrade biogas, which involves separating CO2 and methane. The company also has projects in the works to recover hydrogen from large chemical facilities and, in a partnership with the U.S. Department of Energy, to pull helium from underground hydrogen wells.

“Chemical separations really matter, and they are a bottleneck to innovation and progress in an industry where innovation is challenging, yet an existential need,” Benedetti says. “We want to make it easier for our customers to reach their revenue targets, their decarbonization goals, and expand their markets to move the industry forward.”

Better separations

Benedetti joined Smith’s lab in MIT’s Department of Chemical Engineering in 2017. He was joined by Mizrahi Rodriguez the following year, and the pair spent the next few years conducting fundamental research into membrane materials for gas separations, collaborating with chemists at MIT and beyond, including Lai as he conducted his PhD at Stanford University with Professor Yan Xia.

“I was fascinated by the projects [Smith] was thinking about,” Benedetti says. “It was high-risk, high-reward, and that’s something I love. I had the opportunity to work with talented chemists, and they were synthesizing amazing polymers. The idea was for us chemical engineers at MIT to study those polymers, support chemists in taking next steps, and find an application in the separations world.”

The researchers slowly iterated on the membranes, gradually achieving better performance until, in 2020, a group including Lai, Benedetti, Xia, and Smith broke records for gas separation selectivity with a class of three-dimensional polymers whose structural backbone could be tuned to optimize performance. They filed patents with Stanford and MIT over the next two years, publishing their results in the journal Science in 2022.

“We were facing a decision of what to do with this incredible innovation,” Benedetti recalls. “By then, we’d published a lot of papers where, as the introduction, we described the huge energy footprint of thermal gas separations and the potential of membranes to solve that. We thought rather than wait for somebody to pick up the paper and do something with it, we wanted to lead the effort to commercialize the technology.”

Benedetti joined forces with Mizrahi Rodriguez, Lai, and industrial advisor Xinjin Zhao PhD ’92 to go through the National Science Foundation’s I-Corps Program, which challenges researchers to speak to potential customers in industry. The researchers interviewed more than 100 people, which confirmed for them the huge impact their technology could have.

Benedetti received grants from the MIT Deshpande Center for Technological Innovation and MIT Sandbox, and was a fellow with the MIT Energy Initiative. Osmoses also won the MIT $100K Entrepreneurship Competition in 2021, the same year the founders launched the company.

“I spent a lot of time talking to stakeholders of companies, and it was a window into the challenges the industry is facing,” Benedetti says. “It helped me determine this was a problem they were facing, and showed me the problem was massive. We realized if we could solve the problem, we could change the world.”

Today, Benedetti says more than 90 percent of energy in the chemicals industry is used to thermally separate gases. One study in Nature found that replacing thermal distillation could reduce annual U.S. energy costs by $4 billion and save 100 million tons of carbon dioxide emissions.

Made up of a class of molecules with tunable structures called hydrocarbon ladder polymers, Osmoses’ membranes are capable of filtering gas molecules with high levels of selectivity, at scale. The technology reduces the size of separation systems, making it easier to add to existing spaces and lowering upfront costs for customers.

“This technology is a paradigm shift with respect to how most separations are happening in industry today,” Benedetti says. “It doesn’t require any thermal processes, which is the reason why the chemical and petrochemical industries have such high energy consumption. There are huge inefficiencies in how separations are done today because of the traditional systems used.”

From the lab to the world

In the lab, the founders were making single grams of their membrane polymers for experiments. Since then, they’ve scaled up production dramatically, reducing the cost of the material with an eye toward producing potentially hundreds of kilograms in the future.

The company is currently working toward its first pilot project upgrading biogas at a landfill operated by a large utility in North America. It is also planning a pilot at a dairy farm in North America. Mizrahi Rodriguez says waste gas from landfills and agricultural operations makes up over 80 percent of the biogas upgrading market and represents a promising alternative source of renewable methane for customers.

“In the near term, our goal is to validate this technology at scale,” Benedetti says, noting Osmoses aims to scale up its pilot projects. “It has been a big accomplishment to secure funded pilots in all of the verticals that will serve as a springboard for our next commercial phase.”

Osmoses’ other two pilot projects focus on recovering valuable gas, including helium with the Department of Energy.

“Helium is a scarce resource that we need for a variety of applications, like MRIs, and our membranes’ high performance can be used to extract small amounts of it from underground wells,” Mizrahi Rodriguez explains. “Helium is very important in the semiconductor industry to build chips and graphical processing units that are powering the AI revolution. It’s a strategic resource that the U.S. has a growing interest to produce domestically.”

Benedetti says that further down the line, Osmoses’ technology could be used for carbon capture, for gas “sweetening” to remove acid gases from natural gas, for separating oxygen and nitrogen, for reusing refrigerants, and more.

“There will be a progressive expansion of our capabilities and markets to deliver on our mission of redefining the backbone of the chemical, petrochemical, and energy industries,” Benedetti says. “Separations should not be a bottleneck to innovation and progress anymore.”


Q&A: A simpler way to understand syntax

A new book by Professor Ted Gibson brings together his years of teaching and research to detail the rules of how words combine.



For decades, MIT Professor Ted Gibson has taught the meaning of language to first-year graduate students in the Department of Brain and Cognitive Sciences (BCS). A new book, Gibson’s first, brings together his years of teaching and research to detail the rules of how words combine.

“Syntax: A Cognitive Approach,” released by MIT Press on Dec. 16, lays out the grammar of a language from the perspective of a cognitive scientist, outlining the components of language structure and the model of syntax that Gibson advocates: dependency grammar.

It was his research collaborator and wife, associate professor of BCS and McGovern Institute for Brain Research investigator Ev Fedorenko, who encouraged him to put pen to paper. Here, Gibson takes some time to discuss the book.

Q: Where did the process for “Syntax” begin?

A: I think it started with my teaching. Course 9.012 (Cognitive Science), which I teach with Josh Tenenbaum and Pawan Sinha, divides language into three components: sound, structure, and meaning. I work on the structure and meaning parts of language: words and how they get put together. That’s called syntax.

I’ve spent a lot of time over the last 30 years trying to understand the compositional rules of syntax, and even though there are many grammar rules in any language, I actually don’t think the form for grammar rules is that complicated. I’ve taught it in a very simple way for many years, but I’ve never written it all down in one place. My wife, Ev, is a longtime collaborator, and she suggested I write a paper. It turned into a book.

Q: How do you like to explain syntax?

A: For any sentence, for any utterance in any human language, there’s always going to be a word that serves as the head of that sentence, and every other word will somehow depend on that headword, maybe as an immediate dependent, or further away, through some other dependent words. This is called dependency grammar; it means there’s a root word in each sentence, and dependents of that root, on down, for all the words in the sentence, form a simple tree structure. I have cognitive reasons to suggest that this model is correct, but it isn’t my model; it was first proposed in the 1950s. I adopted it because it aligns with human cognitive phenomena.

That very simple framework gives you the following observation: that longer-distance connections between words are harder to produce and understand than shorter-distance ones. This is because of limitations in human memory. The closer the words are together, the easier it is for me to produce them in a sentence, and the easier it is for you to understand them. If they’re far apart, then it’s a complicated memory problem to produce and understand them.

This gives rise to a cool observation: Languages optimize their rules in order to keep the words close together. We can have very different orders of the same elements across languages, such as the difference in word orders for English versus Japanese, where the order of the words in the English sentence “Mary eats an apple” is “Mary apple eats” in Japanese. But then the ordering rules in English and Japanese are aligned within themselves in order to minimize dependency lengths on average for the language.
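The dependency-length idea can be made concrete with a short calculation. The sketch below hand-codes simplified dependency trees for the English order and the Japanese-style verb-final order of “Mary eats an apple,” and sums the distances between each word and its head; the trees and the measure are a minimal illustration of the idea, not Gibson’s formal analysis.

```python
# Toy illustration of dependency length (hand-coded, simplified trees).

def total_dependency_length(words, heads):
    """Sum of distances between each word and its head (the root has head None)."""
    return sum(abs(i - head) for i, head in enumerate(heads) if head is not None)

# English order: the verb "eats" is the root; "Mary" and "apple" depend on it,
# and "an" depends on "apple".
english = ["Mary", "eats", "an", "apple"]
english_heads = [1, None, 3, 1]

# Japanese-style verb-final order (article dropped for simplicity): both nouns
# depend on the sentence-final verb.
japanese_style = ["Mary", "apple", "eats"]
japanese_heads = [2, 2, None]

print(total_dependency_length(english, english_heads))          # 4
print(total_dependency_length(japanese_style, japanese_heads))  # 3
```

In both orders the dependents sit close to their heads, so the totals stay small — the kind of outcome the dependency-length-minimization account predicts each language’s ordering rules will favor.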

Q: How does the book challenge some longstanding ideas in the field of linguistics?

A: In 1957, a book called “Syntactic Structures” by Noam Chomsky was published. It is a wonderful book that provides mathematical approaches to describe what human language is. It is very influential in the field of linguistics, and for good reason.

One of the key components of the theory that Chomsky proposed was the “transformation,” such that words and phrases can move from a deep structure to the structure that we produce. He thought it was self-evident from examples in English that transformations must be part of a human language. But then this concept of transformations eventually led him to conclude that grammar is unlearnable, that it has to be built into the human mind.  

In my view of grammar, there are no transformations. Instead, there are just two different versions of some words, or they can be underspecified for their grammar usage. The different usages may be related in meaning, and they can point to a similar meaning, but they have different dependency structures.

I think the advent of large language models suggests that language is learnable and that syntax isn’t as complicated as we used to think it was, because LLMs are successful at producing language. A large language model is almost the same as an adult speaker of a language in what it can produce. There are subtle ways in which they differ, but on the surface, they look the same in many ways, which suggests that these models do very well with learning language, even with human-like quantities of data.

I get pushback from some people who say, well, researchers can still use transformations to account for some phenomena. My reaction is: Unless you can show me that transformations are necessary, then I don’t think we need them.

Q: This book is open access. Why did you decide to publish it that way?

A: I am all for free knowledge for everyone. I am one of the editors of “Open Mind,” a journal established several years ago that is completely free and open access. I felt my book should be the same way, and MIT Press is a fantastic university press that is nonprofit and supportive of open-access publishing. It means I make less money, but it also means it can reach more people. For me, it is really about trying to get the information out there. I want more people to read it, to learn things. I think that’s how science is supposed to be.


Rhea Vedro brings community wishes to life in Boston sculpture

The MIT lecturer and artist-in-residence transformed hundreds of inscribed and hammered steel plates into “Amulet,” a soaring public artwork at City Hall Plaza.


Boston recently got its own good luck charm, “Amulet,” a 19-foot-tall tangle of organic spires installed in City Hall Plaza and embedded with the wishes, hopes, and prayers of residents from across the city.

The public artwork, by artist Rhea Vedro — also a lecturer and metals artist-in-residence in MIT’s Department of Materials Science and Engineering (DMSE) — was installed on the north side of City Hall, in a newly renovated stretch of the plaza along Congress Street, in October and dedicated with a ribbon cutting on Dec. 19.

“I’m really interested in this idea of protective objects worn on the skin by humans across cultures, across time,” said Vedro at the event in the Civic Pavilion, across the plaza from the sculpture. “And then, how do you take those ideas off the body and turn them into a blown-up version — a stand-in for the body?”

Vedro started exploring that question in 2021, when she was awarded a Boston Triennial Public Art Accelerator fellowship and later commissioned by the city to create the piece — the first artwork installed in the refurbished section of the plaza. She invited people to workshops and community centers to create hundreds of “wishmarks” — steel panels with hammered indentations and words, each representing a personal wish or reflection.

The plates were later used to form the metal skin of the sculpture — three bird-like forms designed to be, in Vedro’s words, a “protective amulet for the landscape.”

“I didn’t ask anyone to share what their actual wishes were, but I met people going into surgery, people who were homeless and looking for housing, people who had just lost a loved one, people dealing with immigration issues,” Vedro said. She asked participants to meditate on the idea of a journey and safe passage. “That could be a literal journey with ideas around immigration and migration,” she said, “or it could be your own internal journey.”

Large-scale art, fine-scale detail

Vedro, who has several public artworks to her name, said in a video about making “Amulet” that the project was “the biggest thing I’ve ever done.” While artworks of this scale are often handed off to fabrication teams, she handled the construction herself, starting on her driveway until zoning rules forced her to move to her father-in-law’s warehouse. Sections were also welded at Artisans Asylum, a community workshop in Boston, where she was an artist in residence, and then moved to a large industrial studio in Rhode Island.

At the ribbon-cutting event, Vedro thanked friends, family members, and city officials who helped bring the project to life. The celebration ended with a concert by musician Veronica Robles and her mariachi band. Robles runs the Veronica Robles Cultural Center in East Boston, which served as the main site for wishmark workshops. The sculpture is expected to remain in City Hall Plaza for up to five years.

Vedro’s background is in fine arts metalsmithing, a discipline that involves shaping and manipulating metals like silver, gold, and copper through forging, casting, and soldering. She began working at a very different scale, making jewelry, and then later moved primarily to welded steel sculpture — both techniques she now teaches at MIT. When working with steel, Vedro applies the same sensitivity a jeweler brings to small objects, paying close attention to small undulations and surface texture.

She loves working with steel, Vedro says — “shaping and forming and texturing and fighting with it” — because it allows her to engage physically with the material, with her hands involved in every millimeter.

The sculpture’s fluid design began with loose, free-form bird drawings on a cement floor and rubber panels with soapstone, oil pastels, and paint sticks. Vedro then built the forms in metal, welding three-dimensional armatures from round steel bars. The organic shapes and flourishes emerged through a responsive, intuitive process.

“I’m someone who works in real-time, changing my mind and responding to the material,” Vedro says. She likens her process to making a patchwork quilt of steel pieces: forming patterns in a shapeable material like tar paper, transferring them to steel sheets, cutting and shaping and texturing the pieces, and welding them together. “So I can get lots of curvatures that way that are not at all modular.”

From steel plates to soaring form

The sculpture’s outer skin is made from thin, 20-gauge mild steel — a low-carbon steel that’s relatively soft and easy to work with — the same material used for the wishmarks. Those plates were fitted over an internal armature constructed from heavier structural steel.

Because there were more wishmark panels than surface area, Vedro slipped some of them into the hollow space inside the sculpture before welding the piece closed. She compares them to treasures in a locket, “loose, rattling around, which freaked out the team when they were installing.” Any written text on the panels was burned off when the pieces were welded together.

“I believe the stuff’s all alchemized up into smoke, which to me is wonderful because it traverses realms just like a bird,” she says.

The surface of the sculpture is coated with a sealant — necessary because the outer skin material is prone to rust — along with spray paints, patinas, and accents including gold leaf. Its appearance will change over time, something Vedro embraces.

“The idea of transformation is actually integral to my work,” she says.

Standing outside the warmth of the Civic Pavilion on a windy, rainy day, artist Matt Bajor described the sculpture as “gorgeous,” attributing its impact in part to Vedro’s fluency in working across vastly different scales.

“The attention to detail — paying attention to the smaller things so that as it comes together as a whole, you have that fineness throughout the whole sculpture,” he said. “To do that at such a large scale is just crazy. It takes a lot of skill, a lot of effort, and a lot of time.”

Suveena Sreenilayam, a DMSE graduate student who has worked closely with Vedro, said Vedro’s understanding of the relationship between art and craft brings a unique dimension to her work.

“Metal is hard to work with — and to build that on such small and large scales indicates real versatility,” Sreenilayam said. “To make something so artistic at this scale reflects her physical talent, and also her eye for detail and expression.”

Bajor said “Amulet” is a striking addition to the plaza, where the clean lines of City Hall’s Brutalist architecture contrast with the sculpture’s sinuous curves — and to Boston itself.

“I’m looking forward to seeing it in different conditions — in snow and bright sun — as the metal changes over time and as the patina develops,” he said. “It’s just a really great addition to the city.”


MIT engineers design structures that compute with heat

By leveraging excess heat instead of electricity, microscopic silicon structures could enable more energy-efficient thermal sensing and signal processing.


MIT researchers have designed silicon structures that can perform calculations in an electronic device using excess heat instead of electricity. These tiny structures could someday enable more energy-efficient computation.

In this computing method, input data are encoded as a set of temperatures using the waste heat already present in a device. The flow and distribution of heat through a specially designed material forms the basis of the calculation. The output is then read out as the power collected at the other end of the structure, which is held at a fixed temperature.

The researchers used these structures to perform matrix-vector multiplication with more than 99 percent accuracy. Matrix-vector multiplication is the fundamental mathematical operation that machine-learning models, including large language models, use to process information and make predictions.
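In the linear heat-conduction regime, the power collected at each fixed-temperature output terminal is, to a good approximation, a weighted sum of the input temperatures — which is exactly a matrix-vector product. The numerical sketch below shows that relationship; the conductance values and temperatures are invented for illustration and are not taken from the MIT devices.

```python
import numpy as np

# Minimal sketch of matrix-vector multiplication in the linear heat-conduction
# regime. The effective conductance matrix below is made up for illustration;
# in the real devices it is set by the geometry of the silicon structure.

G = np.array([[0.8, 0.1, 0.3],
              [0.2, 0.9, 0.4]])    # effective thermal conductances (W/K)

T_in = np.array([5.0, 2.0, 7.0])   # input temperatures above the reference (K)

# With the output terminals held at the reference temperature, the power
# collected at each output is linear in the input temperatures:
P_out = G @ T_in                   # the "computed" matrix-vector product

print(P_out)                       # [6.3 5.6], identical to the direct product
```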

While the researchers still have to overcome many challenges to scale up this computing method for modern deep-learning models, the technique could be applied to detect heat sources and measure temperature changes in electronics without consuming extra energy. This would also eliminate the need for multiple temperature sensors that take up space on a chip.

“Most of the time, when you are performing computations in an electronic device, heat is the waste product. You often want to get rid of as much heat as you can. But here, we’ve taken the opposite approach by using heat as a form of information itself and showing that computing with heat is possible,” says Caio Silva, an undergraduate student in the Department of Physics and lead author of a paper on the new computing paradigm.

Silva is joined on the paper by senior author Giuseppe Romano, a research scientist at MIT’s Institute for Soldier Nanotechnologies. The research appears today in Physical Review Applied.

Turning up the heat

This work was enabled by a software system the researchers previously developed that allows them to automatically design a material that can conduct heat in a specific manner.

Using a technique called inverse design, this system flips the traditional engineering approach on its head. The researchers define the functionality they want first, then the system uses powerful algorithms to iteratively design the best geometry for the task.

They used this system to design complex silicon structures, each roughly the same size as a dust particle, that can perform computations using heat conduction. This is a form of analog computing, in which data are encoded and signals are processed using continuous values, rather than digital bits that are either 0s or 1s.

The researchers feed their software system the specifications of a matrix of numbers that represents a particular calculation. Using a grid, the system designs a set of rectangular silicon structures filled with tiny pores. The system continually adjusts each pixel in the grid until it arrives at the desired mathematical function.
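A minimal caricature of that loop is sketched below: start from a random grid of solid and porous pixels, score how close a stand-in thermal response is to the target matrix, and keep single-pixel changes that reduce the error. The surrogate “solver” here simply averages quadrants of the grid and is purely illustrative — the actual system optimizes each candidate geometry against a real heat-conduction simulation.

```python
import numpy as np

# Toy inverse-design loop: flip pixels (solid <-> pore) one at a time and keep
# changes that move a surrogate thermal response closer to a target matrix.

rng = np.random.default_rng(0)
TARGET = np.array([[0.6, 0.2],
                   [0.1, 0.7]])            # desired (positive) coefficients

grid = rng.integers(0, 2, size=(8, 8))     # 1 = solid silicon, 0 = pore

def surrogate_response(g):
    """Stand-in for a heat-conduction solver: map a pixel grid to a 2x2
    effective matrix by averaging the four quadrants of the grid."""
    h, w = g.shape
    quadrants = [g[:h//2, :w//2], g[:h//2, w//2:],
                 g[h//2:, :w//2], g[h//2:, w//2:]]
    return np.array([q.mean() for q in quadrants]).reshape(2, 2)

def loss(g):
    return np.sum((surrogate_response(g) - TARGET) ** 2)

best = loss(grid)
for _ in range(5000):
    i, j = rng.integers(0, grid.shape[0]), rng.integers(0, grid.shape[1])
    grid[i, j] ^= 1                        # flip one pixel
    new = loss(grid)
    if new <= best:
        best = new                         # keep the improvement
    else:
        grid[i, j] ^= 1                    # revert the flip

print(surrogate_response(grid))            # approaches TARGET
```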

Heat diffuses through the silicon in a way that performs the matrix multiplication, with the geometry of the structure encoding the coefficients.


“These structures are far too complicated for us to come up with just through our own intuition. We need to teach a computer to design them for us. That is what makes inverse design a very powerful technique,” Romano says.

But the researchers ran into a problem. Because the laws of heat conduction dictate that heat flows only from hot to cold regions, these structures can encode only positive coefficients.

They overcame this problem by splitting the target matrix into its positive and negative parts and representing each part with a separately optimized silicon structure that encodes only positive entries. Subtracting one structure’s output from the other’s at a later stage allows them to compute matrices with negative values.
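Numerically, the workaround is the familiar decomposition of a matrix into a positive part and a negative part, each containing only non-negative entries, whose outputs are then subtracted. A minimal sketch with an invented matrix and input vector:

```python
import numpy as np

# Sign workaround: each physical structure can realize only non-negative
# coefficients, so the target matrix is split into positive and negative parts
# and the two structures' outputs are subtracted.

M = np.array([[ 1.5, -0.5],
              [-2.0,  3.0]])       # target matrix with mixed signs

M_plus = np.clip(M, 0, None)       # positive entries, zeros elsewhere
M_minus = np.clip(-M, 0, None)     # magnitudes of the negative entries

x = np.array([2.0, 4.0])           # input vector (encoded as temperatures)

y = M_plus @ x - M_minus @ x       # subtract the two structures' outputs
print(y)                           # [1. 8.]
print(np.allclose(y, M @ x))       # True: the full signed product is recovered
```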

They can also tune the thickness of the structures, which allows them to realize a greater variety of matrices. Thicker structures have greater heat conduction.

“Finding the right topology for a given matrix is challenging. We beat this problem by developing an optimization algorithm that ensures the topology being developed is as close as possible to the desired matrix without having any weird parts,” Silva explains.

Microelectronic applications

The researchers used simulations to test the structures on simple matrices with two or three columns. While simple, these small matrices are relevant for important applications, such as sensor fusion and diagnostics in microelectronics.

The structures performed computations with more than 99 percent accuracy in many cases.

However, there is still a long way to go before this technique could be used for large-scale applications such as deep learning, since millions of structures would need to be tiled together. As the matrices become more complicated, the structures become less accurate, especially when there is a large distance between the input and output terminals. In addition, the devices have limited bandwidth, which would need to be greatly expanded if they were to be used for deep learning.

But because the structures rely on excess heat, they could be directly applied for tasks like thermal management, as well as heat source or temperature gradient detection in microelectronics.

“This information is critical. Temperature gradients can cause thermal expansion and damage a circuit or even cause an entire device to fail. If we have a localized  heat source where we don’t want a heat source, it means we have a problem. We could directly detect such heat sources with these structures, and we can just plug them in without needing any digital components,” Romano says.

Building on this proof-of-concept, the researchers want to design structures that can perform sequential operations, where the output of one structure becomes an input for the next. This is how machine-learning models perform computations. They also plan to develop programmable structures, enabling them to encode different matrices without starting from scratch with a new structure each time.


Keeril Makan named vice provost for the arts

An acclaimed composer and longtime MIT faculty member, Makan will direct the next act in MIT’s story of artistic leadership.



Keeril Makan has been appointed vice provost for the arts at MIT, effective Feb. 1. In this role, Makan, who is the Michael (1949) and Sonja Koerner Music Composition Professor at MIT, will provide leadership and strategic direction for the arts across the Institute.

Provost Anantha Chandrakasan announced Makan’s appointment in an email to the MIT community today.

“Keeril’s record of accomplishment both as an artist and an administrative leader makes him exceedingly qualified to take on this important role,” Chandrakasan wrote, noting that Makan “has repeatedly taken on new leadership assignments with skill and enthusiasm.”

Makan’s appointment follows the publication last September of the final report of the Future of the Arts at MIT Committee. At MIT, the report noted, “the arts thrive as a constellation of recognized disciplines while penetrating and illuminating countless aspects of the Institute’s scientific and technological enterprise.” Makan will build on this foundation as MIT continues to strengthen the role of the arts in research, education, and community life.

As vice provost for the arts, Makan will provide Institute-wide leadership and strategic direction for the arts, working in close partnership with academic leaders, arts units, and administrative colleagues across MIT, including the Office of the Arts; the MIT Center for Art, Science and Technology; the MIT Museum; the List Visual Arts Center; and the Council for the Arts at MIT. His role will focus on strengthening connections between artistic practice, research, education, and community life, and on supporting public engagement and interdisciplinary collaboration.

“At MIT, the arts are a vital way of thinking, making, and convening,” Makan says. “As vice provost, my priority is to support and strengthen the extraordinary artistic work already happening across the Institute, while listening carefully to faculty, students, and staff as we shape what comes next. I’m excited to build on MIT’s distinctive, only-at-MIT approach to the arts and to help ensure that artistic practice remains central to MIT’s intellectual and community life.”

Makan says he will begin his new role with a period of listening and learning across MIT’s arts ecosystem, informed by the Future of the Arts at MIT report. His initial focus will be on understanding how artistic practice intersects with research, education, and community life, and on identifying opportunities to strengthen connections, visibility, and coordination across MIT’s many arts activities.

Over time, Makan says he will work with the arts community to advance MIT’s long-standing commitment to artistic excellence and experimentation, while supporting student participation and public engagement in the arts. He said his approach will “emphasize collaboration, clarity, and sustainability, reflecting MIT’s distinctive integration of the arts with science and technology.”

Makan came to MIT in 2006 as an assistant professor of music. From 2018 to 2024, he served as head of the Music and Theater Arts (MTA) Section in the School of Humanities, Arts, and Social Sciences (SHASS). In 2023, he was appointed associate dean for strategic initiatives in SHASS, where he helped guide the school’s response to recent fiscal pressures and led Institute-wide strategic initiatives.

With colleagues from MTA and the School of Engineering, Makan helped launch a new, multidisciplinary graduate program in music technology and computation. He was intimately involved in the project to develop the new Edward and Joyce Linde Music Building (Building 18), a state-of-the-art facility that opened in 2025. 

Makan was a member of the Future of the Arts at MIT Committee and chaired a working group on the creation of a center for the humanities, which ultimately became the MIT Human Insight Collaborative (MITHIC), one of the Institute’s strategic initiatives. Since last year, he has served as MITHIC’s faculty lead. Under his leadership, MITHIC has awarded $4.7 million in funding to 56 projects across 28 units at MIT, supporting interdisciplinary, human-centered research and teaching.

Trained initially as a violinist, Makan earned undergraduate degrees in music composition and religion from Oberlin and a PhD in music composition from the University of California at Berkeley.

A critically-acclaimed composer, Makan is the recipient of a Guggenheim Fellowship and the Luciano Berio Rome Prize from the American Academy in Rome. His music has been recorded by the Kronos Quartet, the Boston Modern Orchestra Project, and the International Contemporary Ensemble, and performed at Carnegie Hall, the Lincoln Center for the Performing Arts, and Tanglewood. His opera, “Persona,” premiered at National Sawdust and was performed at the Isabella Stewart Gardner Museum in Boston and by the Los Angeles Opera. The Los Angeles Times described the music from “Persona” as “brilliant.”

Makan succeeds Philip Khoury, the Ford International Professor of History, who served as vice provost for the arts from 2006 until stepping down in 2025. Khoury will return to the MIT faculty following a sabbatical.


Study: The infant universe’s “primordial soup” was actually soupy

MIT physicists observed the first clear evidence that quarks create a wake as they speed through quark-gluon plasma, confirming the plasma behaves like a liquid.


In its first moments, the infant universe was a trillion-degree-hot soup of quarks and gluons. These elementary particles zinged around at light speed, creating a “quark-gluon plasma” that lasted for only a few millionths of a second. The primordial goo then quickly cooled, and its individual quarks and gluons fused to form the protons, neutrons, and other fundamental particles that exist today.

Physicists at CERN’s Large Hadron Collider in Switzerland are recreating quark-gluon plasma (QGP) to better understand the universe’s starting ingredients. By smashing together heavy ions at close to light speeds, scientists can briefly dislodge quarks and gluons to create and study the same material that existed during the first microseconds of the early universe.

Now, a team at CERN led by MIT physicists has observed clear signs that quarks create wakes as they speed through the plasma, similar to a duck trailing ripples through water. The findings are the first direct evidence that quark-gluon plasma reacts to speeding particles as a single fluid, sloshing and splashing in response, rather than scattering randomly like individual particles.

“It has been a long debate in our field, on whether the plasma should respond to a quark,” says Yen-Jie Lee, professor of physics at MIT. “Now we see the plasma is incredibly dense, such that it is able to slow down a quark, and produces splashes and swirls like a liquid. So quark-gluon plasma really is a primordial soup.”

To see a quark’s wake effects, Lee and his colleagues developed a new technique that they report in the study. They plan to apply the approach to more particle-collision data to zero in on other quark wakes. Measuring the size, speed, and extent of these wakes, and how long it takes for them to ebb and dissipate, can give scientists an idea of the properties of the plasma itself, and how quark-gluon plasma might have behaved in the universe’s first microseconds.

“Studying how quark wakes bounce back and forth will give us new insights on the quark-gluon plasma’s properties,” Lee says. “With this experiment, we are taking a snapshot of this primordial quark soup.”

The study’s co-authors are members of the CMS Collaboration — a team of particle physicists from around the world who work together to carry out and analyze data from the Compact Muon Solenoid (CMS) experiment, which is one of the general-purpose particle detectors at CERN’s Large Hadron Collider. The CMS experiment was used to detect signs of quark wake effects for this study. The open-access study appears in the journal Physics Letters B.

Quark shadows

Quark-gluon plasma is the first liquid to have ever existed in the universe. It is also the hottest liquid ever, as scientists estimate that during its brief existence, the QGP was around a few trillion degrees Celsius. This boiling stew is also thought to have been a near-“perfect” liquid, meaning that the individual quarks and gluons in the plasma flowed together as a smooth, frictionless fluid.

This picture of the QGP is based on many independent experiments and theoretical models. One such model, derived by Krishna Rajagopal, the William A. M. Burden Professor of Physics at MIT, and his collaborators, predicts that the quark-gluon plasma should respond like a fluid to any particles speeding through it. His theory, known as the hybrid model, suggests that when a jet of quarks is zinging through the QGP, it should produce a wake behind it, inducing the plasma to ripple and splash in response.

Physicists have looked for such wake effects in experiments at the Large Hadron Collider and other high-energy particle accelerators. These experiments accelerate heavy ions, such as lead, to close to the speed of light, at which point they can collide and produce a short-lived droplet of primordial soup, typically lasting for less than a quadrillionth of a second. Scientists essentially take a snapshot of the moment to try and identify characteristics of the QGP.

To identify quark wakes, physicists have looked for pairs of quarks and “antiquarks” — particles that are identical to their quark counterparts, except that certain properties are equal in magnitude but opposite in sign. For instance, when a quark is speeding through plasma, there is likely an antiquark that is traveling at exactly the same speed, but in the opposite direction.

For this reason, physicists have looked for quark/antiquark pairs in the QGP produced in heavy-ion collisions, assuming that the particles might produce identical, detectable wakes through the plasma.

“When you have two quarks produced, the problem is that, when the two quarks go in opposite directions, the one quark overshadows the wake of the second quark,” Lee says.

He and his colleagues realized that looking for the wake of the first quark would be easier if there were no second quark obscuring its effects.

“We have figured out a new technique that allows us to see the effects of a single quark in the QGP, through a different pair of particles,” Lee says.

A wake tag

Rather than search for pairs of quarks and antiquarks in the aftermath of lead ion collisions, Lee’s team instead looked for events with only one quark moving through the plasma, essentially back-to-back with a “Z boson.” A Z boson is an electrically neutral elementary particle of the weak interaction that has virtually no effect on the surrounding environment. However, because they exist at a very specific energy, Z bosons are relatively straightforward to detect.

“In this soup of quark-gluon plasma, there are numerous quarks and gluons passing by and colliding with each other,” Lee explains. “Sometimes when we are lucky, one of these collisions creates a Z boson and a quark, with high momentum.”

In such a collision, the two particles should hit each other and fly off in exact opposite directions. While the quark could leave a wake, the Z boson should have no effect on the surrounding plasma. Whatever ripples are observed in the droplet of primordial soup would have been made entirely by the single quark zipping through it.

The team, in collaboration with Professor Yi Chen’s group at Vanderbilt University, reasoned that they could use Z bosons as a “tag” to locate and trace the wake effects of single quarks. For their new study, the researchers looked through data from the Large Hadron Collider’s heavy-ion collision experiments. From 13 billion collisions, they identified about 2,000 events that produced a Z boson. For each of these events, they mapped the energies throughout the short-lived quark-gluon plasma, and consistently observed a fluid-like pattern of splashes and swirls — a wake effect — in the direction opposite the Z bosons, which the team could directly attribute to the effect of single quarks zooming through the plasma.
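The logic of that analysis — keep only Z-tagged events, rotate each event so the Z boson points in a common direction, and average the energy flow on the opposite side — can be caricatured in a short script. Everything below (the event format, rates, bin counts, and energies) is invented for illustration; it is not the CMS analysis code.

```python
import numpy as np

# Toy sketch of the Z-boson "tag" idea: select events containing a Z, align
# them on the Z direction, and average the energy deposited opposite the Z,
# where the recoiling quark (and any wake it drags) should sit.

rng = np.random.default_rng(1)
N_BINS = 36                               # azimuthal-angle bins around the beam

def fake_event():
    """Invented event: maybe a Z boson (with its azimuthal angle) plus an
    energy profile over the azimuthal bins."""
    has_z = rng.random() < 0.001          # Z-tagged events are rare
    phi_z = rng.uniform(0, 2 * np.pi)
    energy = rng.normal(10.0, 2.0, N_BINS)
    if has_z:
        # extra energy roughly opposite the Z, mimicking the recoiling quark
        opposite_bin = int(((phi_z + np.pi) % (2 * np.pi)) / (2 * np.pi) * N_BINS)
        energy[opposite_bin] += 30.0
    return has_z, phi_z, energy

accumulated = np.zeros(N_BINS)
n_tagged = 0
for _ in range(100_000):
    has_z, phi_z, energy = fake_event()
    if not has_z:
        continue                          # selection cut: require a Z boson
    shift = int(phi_z / (2 * np.pi) * N_BINS)
    accumulated += np.roll(energy, -shift)  # rotate so the Z sits at bin 0
    n_tagged += 1

average = accumulated / max(n_tagged, 1)
print(n_tagged, int(np.argmax(average)))  # excess sits near bin N_BINS // 2,
                                          # i.e., opposite the Z direction
```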

What’s more, the physicists found that the wake effects they observed in the data were consistent with what Rajagopal’s hybrid model predicts. In other words, quark-gluon plasma does in fact flow and ripple like a fluid when particles speed through it.

“This is something that many of us have argued must be there for a good many years, and that many experiments have looked for,” says Rajagopal, who was not directly involved with the new study.

“What Yen-Jie and CMS have done is to devise and execute a measurement that has brought them and us the first clean, clear, unambiguous evidence for this foundational phenomenon,” says Daniel Pablos, professor of physics at Oviedo University in Spain and a collaborator of Rajagopal’s who was not involved in the current study.

“We’ve gained the first direct evidence that the quark indeed drags more plasma with it as it travels,” Lee adds. “This will enable us to study the properties and behavior of this exotic fluid in unprecedented detail.”

This work was supported, in part, by the U.S. Department of Energy.


Welcome to the “most wicked” apprentice program on campus

With a focus on metallurgy and fabrication, Pappalardo Apprentices assist their peers with machining, hand-tool use, brainstorming, and more, while expanding their own skills.


The Pappalardo Apprentice program pushes the boundaries of the traditional lab experience, inviting a selected group of juniors and seniors to advance their fabrication skills while also providing mentor training and peer-to-peer mentoring opportunities in an environment fueled by creativity, safety, and fun.

“This apprenticeship was largely born of my need for additional lab help during our larger sophomore-level design course, and the desire of third- and fourth-year students to advance their fabrication knowledge and skills,” says Daniel Braunstein, senior lecturer in mechanical engineering (MechE) and director of the Pappalardo Undergraduate Teaching Laboratories. “Though these needs and wants were nothing particularly new, it had not occurred to me that we could combine these interests into a manageable and meaningful program.”

Apprentices serve as undergraduate lab assistants for class 2.007 (Design and Manufacturing I), joining lab sessions and assisting 2.007 students with various aspects of the learning experience including machining, hand-tool use, brainstorming, and peer support. Apprentices also participate in a series of seminars and clinics designed to further their fabrication knowledge and hands-on skills, including advancing understanding of mill and lathe use, computer-aided design and manufacturing (CAD/CAM) and pattern-making.

Putting this learning into practice, junior apprentices fabricate Stirling engines (a closed-cycle heat engine that converts thermal energy into mechanical work), while returning senior apprentices take on more ambitious group projects involving casting. Previous years’ projects included an early 20th-century single-cylinder marine engine and a 19th-century torpedo boat steam engine, on permanent exhibit at the MIT Museum. This spring will focus on copper alloys and fabrication of a replica of an 1899 anchor windlass from the Herreshoff Manufacturing Co., used on the famous New York 70 class sloops.

The sloops, designed by MIT Class of 1870 alumnus Nathanael Greene Herreshoff for wealthy New York Yacht Club members, were short-lived, single-design racing vessels meant for exclusive competition. The historic racing yachts used robust manual windlasses — mechanical devices used to haul large loads — to manage their substantial anchors.

“The more we got into casting, I was modestly surprised that [the students’] exposure to metals was very limited. So that really launched not just a project, but also a more specific curriculum around metallurgy,” says Braunstein.

Metallurgy is not a traditional part of the curriculum. “I think [the project] really opened up my eyes to how much material choice is an important thing for engineering in general,” says apprentice Jade Durham.

In casting the windlasses, students are working from century-old drawings. “[Looking at these old drawings,] we don't know how they made [the parts],” says Braunstein. “So, there is an element of the discovery of what they may or may not have done. It’s like technical archaeology.”

“You’re really just relying on your knowledge of the windlass system, how it’s meant to work, which surfaces are really critical, and kind of just applying your intuition,” says apprentice Saechow Yap. “I learned a lot about applying my art skills and my ability to judge and shape aesthetic.”

Learning by doing is an important hallmark of an MIT MechE education. The Pappalardo Apprentice Program, which celebrated its 10th year last spring, is housed in the Pappalardo Lab. The lab, established through a gift from Neil Pappalardo ’64, is the self-proclaimed “most wicked lab on campus” — “wicked,” for readers outside of Greater Boston, is slang used in a variety of ways, but generally meaning something is pretty awesome.

“Pappalardo is my favorite place on campus, I had never set foot in any sort of like makerspace or lab before I came to MIT,” says apprentice Wilhem Hector. “I did not just learn how to make things. I got empowered ... [to] make anything.”

Braunstein developed the Pappalardo Apprentice program to reinforce the learning of the older students while building community. In a 2023 interview, he said he called the seminar an apprenticeship to emphasize MIT’s relationship with the art — and industrial character — of engineering.

“I did want to borrow from the language of the trades,” Braunstein said. “MIT has a strong heritage in industrial work; that’s why we were founded. It was not a science institution; it was about the mechanical arts. And I think the blend of the industrial, plus the academic, is what makes this lab particularly meaningful.”

Today, he says the most enjoyable part of the program, for him, is watching relationships develop. “They come in, bright-eyed, bushy-tailed, and then to see them go to people who are capable of pouring iron, tramming mills, teaching other people how to do it and having this tight group of friends … that's fun to watch.”


Bryan Bryson: Engineering solutions to the tough problem of tuberculosis

By analyzing how Mycobacterium tuberculosis interacts with the immune system, the associate professor hopes to find new vaccine targets to help eliminate the disease.


On his desk, Bryan Bryson ’07, PhD ’13 still has the notes he used for the talk he gave at MIT when he interviewed for a faculty position in biological engineering. On that sheet, he outlined the main question he wanted to address in his lab: How do immune cells kill bacteria?

Since starting his lab in 2018, Bryson has continued to pursue that question, which he sees as critical for finding new ways to target infectious diseases that have plagued humanity for centuries, especially tuberculosis. To make significant progress against TB, researchers need to understand how immune cells respond to the disease, he says.

“Here is a pathogen that has probably killed more people in human history than any other pathogen, so you want to learn how to kill it,” says Bryson, now an associate professor at MIT. “That has really been the core of our scientific mission since I started my lab. How does the immune system see this bacterium and how does the immune system kill the bacterium? If we can unlock that, then we can unlock new therapies and unlock new vaccines.”

The only TB vaccine now available, the BCG vaccine, is a weakened version of a bacterium that causes TB in cows. This vaccine is widely administered in some parts of the world, but it offers adults only poor protection against pulmonary TB. Although some treatments are available, tuberculosis still kills more than a million people every year.

“To me, making a better TB vaccine comes down to a question of measurement, and so we have really tried to tackle that problem head-on. The mission of my lab is to develop new measurement modalities and concepts that can help us accelerate a better TB vaccine,” says Bryson, who is also a member of the Ragon Institute of Mass General Brigham, MIT, and Harvard.

From engineering to immunology

Engineering has deep roots in Bryson’s family: His great-grandfather was an engineer who worked on the Panama Canal, and his grandmother loved to build things and would likely have become an engineer if she had had the educational opportunity, Bryson says.

The oldest of four sons, Bryson was raised primarily by his mother and grandparents, who encouraged his interest in science. When he was three years old, his family moved from Worcester, Massachusetts, to Miami, Florida, where he began tinkering with engineering himself, building robots out of Styrofoam cups and light bulbs. After moving to Houston, Texas, at the beginning of seventh grade, Bryson joined his school’s math team.

As a high school student, Bryson had his heart set on studying biomedical engineering in college. However, MIT, one of his top choices, didn’t have a biomedical engineering program, and biological engineering wasn’t yet offered as an undergraduate major. After he was accepted to MIT, his family urged him to attend and then figure out what he would study.

Throughout his first year, Bryson deliberated over his decision, with electrical engineering and computer science (EECS) and aeronautics and astronautics both leading contenders. As he recalls, he thought he might study aero/astro with a minor in biomedical engineering and work on spacesuit design.

However, during an internship the summer after his first year, his mentor gave him a valuable piece of advice: “You should study something that will let you have a lot of options, because you don’t know how the world is going to change.”

When he came back to MIT for his sophomore year, Bryson switched his major to mechanical engineering, with a bioengineering track. He also started looking for undergraduate research positions. A poster in the hallway grabbed his attention, and he ended up working with the professor whose work was featured: Linda Griffith, a professor of biological engineering and mechanical engineering.

Bryson’s experience in the lab “changed the trajectory of my life,” he says. There, he worked on building microfluidic devices that could be used to grow liver tissue from hepatocytes. He enjoyed the engineering aspects of the project, but he realized that he also wanted to learn more about the cells and why they behaved the way they did. He ended up staying at MIT to earn a PhD in biological engineering, working with Forest White.

In White’s lab, Bryson studied cell signaling processes and how they are altered in diseases such as cancer and diabetes. While doing his PhD research, he also became interested in studying infectious diseases. After earning his degree, he went to work with a professor of immunology at the Harvard School of Public Health, Sarah Fortune.

Fortune studies tuberculosis, and in her lab, Bryson began investigating how Mycobacterium tuberculosis interacts with host cells. During that time, Fortune instilled in him a desire to seek solutions to tuberculosis that could be transformative — not just identifying a new antibiotic, for example, but finding a way to dramatically reduce the incidence of the disease. This, he thought, could be done by vaccination, and in order to do that, he needed to understand how immune cells respond to the disease.

“That postdoc really taught me how to think bravely about what you could do if you were not limited by the measurements you could make today,” Bryson says. “What are the problems we really need to solve? There are so many things you could think about with TB, but what’s the thing that’s going to change history?”

Pursuing vaccine targets

Since joining the MIT faculty eight years ago, Bryson and his students have developed new ways to answer the question he posed in his faculty interviews: How does the immune system kill bacteria?

One key step in this process is that immune cells must be able to recognize bacterial proteins that are displayed on the surfaces of infected cells. Mycobacterium tuberculosis produces more than 4,000 proteins, but only a small subset of those end up displayed by infected cells. Those proteins would likely make the best candidates for a new TB vaccine, Bryson says.

Bryson’s lab has developed ways to identify those proteins, and so far, their studies have revealed that many of the TB antigens displayed to the immune system belong to a class of proteins known as type 7 secretion system substrates. Mycobacterium tuberculosis expresses about 100 of these proteins, but which of these 100 are displayed by infected cells varies from person to person, depending on their genetic background.

By studying blood samples from people of different genetic backgrounds, Bryson’s lab has identified the TB proteins displayed by infected cells in about 50 percent of the human population. He is now working on the remaining 50 percent and believes that once those studies are finished, he’ll have a very good idea of which proteins could be used to make a TB vaccine that would work for nearly everyone.
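The coverage question behind that effort can be illustrated with a toy calculation: given which antigens each genetic background displays on infected cells, what fraction of the population would a candidate antigen set reach? All of the backgrounds, frequencies, and antigen names below are hypothetical — this is a sketch of the bookkeeping, not Bryson’s data or method.

```python
# Hypothetical illustration of vaccine-antigen population coverage.

POPULATION = {
    # genetic background: (population frequency, antigens displayed on infected cells)
    "background_A": (0.30, {"antigen_1", "antigen_2"}),
    "background_B": (0.25, {"antigen_1", "antigen_3"}),
    "background_C": (0.25, {"antigen_2"}),
    "background_D": (0.20, {"antigen_4"}),
}

def coverage(candidates: set) -> float:
    """Fraction of the population whose infected cells display at least one
    of the candidate antigens."""
    return sum(
        freq
        for freq, displayed in POPULATION.values()
        if displayed & candidates
    )

print(coverage({"antigen_1"}))                             # 0.55
print(coverage({"antigen_1", "antigen_2", "antigen_4"}))   # 1.0 -- covers everyone
```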

Once those proteins are chosen, his team can work on designing the vaccine and then testing it in animals, with hopes of being ready for clinical trials in about six years.

In spite of the challenges ahead, Bryson remains optimistic about the possibility of success, and credits his mother for instilling a positive attitude in him while he was growing up.

“My mom decided to raise all four of her children by herself, and she made it look so flawless,” Bryson says. “She instilled a sense of ‘you can do what you want to do,’ and a sense of optimism. There are so many ways that you can say that something will fail, but why don’t we look to find the reasons to continue?”

One of the things he loves about MIT is that he has found a similar can-do attitude across the Institute.

“The engineer ethos of MIT is that yes, this is possible, and what we’re trying to find is the way to make this possible,” he says. “I think engineering and infectious disease go really hand-in-hand, because engineers love a problem, and tuberculosis is a really hard problem.”

When not tackling hard problems, Bryson likes to lighten things up with ice cream study breaks at Simmons Hall, where he is an associate head of house. Using an ice cream machine he has had since 2009, Bryson makes gallons of ice cream for dorm residents several times a year. Nontraditional flavors such as passion fruit or jalapeno strawberry have proven especially popular.

“Recently I did flavors of fall, so I did a cinnamon ice cream, I did a pear sorbet,” he says. “Toasted marshmallow was a huge hit, but that really destroyed my kitchen.”


Pablo Jarillo-Herrero wins BBVA Foundation Frontiers of Knowledge Award

MIT physicist shares 400,000-euro award for influential work on “magic-angle” graphene.


Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics at MIT, has won the 2025 BBVA Foundation Frontiers of Knowledge Award in Basic Sciences for “discoveries concerning the ‘magic angle’ that allows the behavior of new materials to be transformed and controlled.”

He shares the 400,000-euro award with Allan MacDonald of the University of Texas at Austin. According to the BBVA Foundation, “the pioneering work of the two physicists has achieved both the theoretical foundation and experimental validation of a new field where superconductivity, magnetism, and other properties can be obtained by rotating new two-dimensional materials like graphene.” Graphene is a single layer of carbon atoms arranged in hexagons resembling a honeycomb structure.

Theoretical foundation, experimental validation

In a theoretical model published in 2011, MacDonald predicted that twisting two graphene layers relative to each other by a specific angle of around 1 degree would cause the interaction of their electrons to produce new emergent properties.
 
In 2018, Jarillo-Herrero delivered the experimental confirmation of this “magic angle” by rotating two graphene sheets in a way that transformed the material’s behavior, giving rise to new properties like superconductivity.

The physicists’ work “has opened up new frontiers in physics by demonstrating that rotating matter to a given angle allows us to control its behavior, obtaining properties that could have a major industrial impact,” explained award committee member María José García Borge, a research professor at the Institute for the Structure of Matter. “Superconductivity, for example, could bring about far more sustainable electricity transmission, with virtually no energy loss.”

Almost science fiction

MacDonald’s initial discovery had little immediate impact. It was not until some years later, when it was confirmed in the laboratory by Jarillo-Herrero, that its true importance was revealed. 

“The community would never have been so interested in my subject, if there hadn’t been an experimental program that realized that original vision,” observes MacDonald, who refers to his co-laureate’s achievement as “almost science fiction.”

Jarillo-Herrero had been intrigued by the possible effects of placing two graphene sheets on top of each other with a precise rotational alignment, because “it was uncharted territory, beyond the reach of the physics of the past, so was bound to produce some interesting results.”

But the scientist was still unsure of how to make it work in the lab. For years, he had been stacking together layers of the super-thin material, but without being able to specify the angle between them. Finally, he devised a way to do so, making the angle smaller and smaller until he got to the “magic” angle of 1.1 degrees at which the graphene revealed some extraordinary behavior.

“It was a big surprise, because the technique we used, though conceptually straightforward, was hard to pull off in the lab,” says Jarillo-Herrero, who is also affiliated with the Materials Research Laboratory.

Since 2009, the BBVA Foundation has given Frontiers of Knowledge Awards to more than a dozen MIT faculty members. The Frontiers of Knowledge Awards, spanning eight prize categories, recognize world-class research and cultural creation and aim to celebrate and promote the value of knowledge as a global public good. The BBVA Foundation works to support scientific research and cultural creation, disseminate knowledge and culture, and recognize talent and innovation.


Richard Hynes, a pioneer in the biology of cellular adhesion, dies at 81

Professor, mentor, and leader at MIT for more than 50 years shaped fundamental understandings of cell adhesion, the extracellular matrix, and molecular mechanisms of metastasis.


MIT Professor Emeritus Richard O. Hynes PhD ’71, a cancer biologist whose discoveries reshaped modern understandings of how cells interact with each other and their environment, passed away on Jan. 6. He was 81.

Hynes is best known for his discovery of integrins, a family of cell-surface receptors essential to cell–cell and cell–matrix adhesion. He played a critical role in establishing the field of cell adhesion biology, and his continuing research revealed mechanisms central to embryonic development, tissue integrity, and diseases including cancer, fibrosis, thrombosis, and immune disorders.

Hynes was the Daniel K. Ludwig Professor for Cancer Research, Emeritus, an emeritus professor of biology, and a member of the Koch Institute for Integrative Cancer Research at MIT and the Broad Institute of MIT and Harvard. During his more than 50 years on the faculty at MIT, he was deeply respected for his academic leadership at the Institute and internationally, as well as his intellectual rigor and contributions as an educator and mentor.

“Richard had an enormous impact in his career. He was a visionary leader of the MIT Cancer Center, what is now the Koch Institute, during a time when the progress in understanding cancer was just starting to be translated into new therapies,” reflects Matthew Vander Heiden, director of the Koch Institute and the Lester Wolfe (1919) Professor of Molecular Biology. “The research from his laboratory launched an entirely new field by defining the molecules that mediate interactions between cells and between cells and their environment. This laid the groundwork for better understanding the immune system and metastasis.”

Pond skipper

Born in Kenya, Hynes grew up during the 1950s in Liverpool, in the United Kingdom. While he sometimes recounted stories of being schoolmates with two of the Beatles and of being in the same Boy Scout troop as Paul McCartney, his academic interests lay elsewhere, and he specialized in the sciences at a young age. Both of his parents were scientists: His father was a freshwater ecologist, and his mother a physics teacher. Hynes and all three of his siblings followed their parents into scientific fields.

"We talked science at home, and if we asked questions, we got questions back, not answers. So that conditioned me into being a scientist, for sure," Hynes said of his youth.

He described his time as an undergraduate and master’s student at Cambridge University during the 1960s as “just fantastic,” noting that it was shortly after two 1962 Nobel Prizes were awarded to Cambridge researchers — one to Francis Crick and James Watson for the structure of DNA, the other to John Kendrew and Max Perutz for the structures of proteins — and Cambridge was “the place to be” to study biology.

Newly married, Hynes and his wife traded Cambridge, U.K. for Cambridge, Massachusetts, so that he could conduct doctoral work at MIT under the direction of Paul Gross. He tried (and by his own assessment, failed) to differentiate maternal messages among the three germ layers of sea urchin embryos. However, he did make early successful attempts to isolate the globular protein tubulin, a building block for essential cellular structures, from sea urchins.

Inspired by a course he had taken with Watson in the United States, Hynes began work during his postdoc at the Institute of Cancer Research in the U.K. on the early steps of oncogenic transformation and the role of cell migration and adhesion; it was here that he made his earliest discovery and characterizations of the fibronectin protein.

Recruited back to MIT by Salvador Luria, founding director of the MIT Center for Cancer Research, whom he had met during a summer at the Woods Hole Oceanographic Institution on Cape Cod, Hynes returned to the Institute in 1975 as a founding faculty member of the center and an assistant professor in the Department of Biology.

Big questions about tiny cells

To his own research, Hynes brought the same spirit of inquiry that had characterized his upbringing, asking fundamental questions: How do cells interact with each other? How do they stick together to form tissues?

His research focused on proteins that allow cells to adhere to each other and to the extracellular matrix — a mesh-like network that surrounds cells, providing structural support, as well as biochemical and mechanical cues from the local microenvironment. These proteins include integrins, a type of cell surface receptor, and fibronectins, a family of extracellular adhesive proteins. Integrins are the major adhesion receptors connecting the extracellular matrix to the intracellular cytoskeleton, or main architectural support within the cell.

Hynes began his career as a developmental biologist, studying how cells move to the correct locations during embryonic development, a process in which proper modulation of cell adhesion is critical.

Hynes’ work also revealed that dysregulation of cell-to-matrix contact plays an important role in cancer cells’ ability to detach from a tumor and spread to other parts of the body, key steps in metastasis.

As a postdoc, Hynes had begun studying the differences in the surface landscapes of healthy cells and tumor cells. It was this work that led to the discovery of fibronectin, which is often lost when cells become cancerous.

He and others found that fibronectin is an important part of the extracellular matrix. When fibronectin is lost, cancer cells can more easily free themselves from their original location and metastasize to other sites in the body. By studying how fibronectin normally interacts with cells, Hynes and others discovered a family of cell surface receptors known as integrins, which function as important physical links with the extracellular matrix. In humans, 24 integrin proteins have been identified. These proteins help give tissues their structure, enable blood to clot, and are essential for embryonic development.

“Richard’s discoveries, along with others’, of cell surface integrins led to the development of a number of life-altering treatments. Among these are treatment of autoimmune diseases such as multiple sclerosis,” notes longtime colleague Phillip Sharp, MIT Institute professor emeritus.

As research technologies advanced, including proteomic and extracellular matrix isolation methods developed directly in Hynes’ laboratory, he and his group were able to uncover increasingly detailed information about specific cell adhesion proteins, the biological mechanisms by which they operate, and the roles they play in normal biology and disease.

In cancer, their work helped to uncover how cell adhesion (and the loss thereof) and the extracellular matrix contribute not only to fundamental early steps in the metastatic process, but also tumor progression, therapeutic response, and patient prognosis. This included studies that mapped matrix protein signatures associated with cancer and non-cancer cells and tissues, followed by investigations into how differentially expressed matrix proteins can promote or suppress cancer progression. 

Hynes and his colleagues also demonstrated how extracellular matrix composition can influence immunotherapy, such as the importance of a family of cell adhesion proteins called selectins for recruiting natural killer cells to tumors. Further, Hynes revealed links between fibronectin, integrins, and other matrix proteins with tumor angiogenesis, or blood vessel development, and also showed how interaction with platelets can stimulate tumor cells to remodel the extracellular matrix to support invasion and metastasis. In pursuing these insights into the oncogenic mechanisms of matrix proteins, Hynes and members of his laboratory have identified useful diagnostic and prognostic biomarkers, as well as therapeutic targets.

Along the way, Hynes shaped not only the research field, but also the careers of generations of trainees.

“There was much to emulate in Richard’s gentle, patient, and generous approach to mentorship. He centered the goals and interests of his trainees, fostered an inclusive and intellectually rigorous environment, and cared deeply about the well-being of his lab members. Richard was a role model for integrity in both personal and professional interactions and set high expectations for intellectual excellence,” recalls Noor Jailkhani, a former Hynes Lab postdoc.

Jailkhani is CEO and co-founder, with Hynes, of Matrisome Bio, a biotech company developing first-in-class targeted therapies for cancer and fibrosis by leveraging the extracellular matrix. “The impact of his long and distinguished scientific career was magnified through the generations of trainees he mentored, whose influence spans academia and the biotechnology industry worldwide. I believe that his dedication to mentorship stands among his most far-reaching and enduring contributions,” she says.

A guiding light

Widely sought for his guidance, Hynes served in a number of key roles at MIT and in the broader scientific community. As head of MIT’s Department of Biology from 1989 to 1991, and then for a decade as director of the MIT Center for Cancer Research, he helped shape the Institute’s programs in both areas.

“Words can’t capture what a fabulous human being Richard was. I left every interaction with him with new insights and the warm glow that comes from a good conversation,” says Amy Keating, the Jay A. Stein (1968) Professor, professor of biology and biological engineering, and head of the Department of Biology. “Richard was happy to share stories, perspectives, and advice, always with a twinkle in his eye that conveyed his infinite interest in and delight with science, scientists, and life itself. The calm support that he offered me, during my years as department head, meant a lot and helped me do my job with confidence.”

Hynes served as director of the MIT Center for Cancer Research from 1991 until 2001, positioning the center’s distinguished cancer biology program for expansion into its current, interdisciplinary research model as MIT’s Koch Institute for Integrative Cancer Research. “He recruited and strongly supported Tyler Jacks to the faculty, who subsequently became director and headed efforts to establish the Koch Institute,” recalls Sharp.

Jacks, a David H. Koch (1962) Professor of Biology and founding director of the Koch Institute, remembers Hynes as a thoughtful, caring, and highly effective leader in the Center for Cancer Research, or CCR, and in the Department of Biology. “I was fortunate to be able to lean on him when I took over as CCR director. He encouraged me to drop in — unannounced — with questions and concerns, which I did regularly. I learned a great deal from Richard, at every level,” he says.

Hynes’ leadership and recognition extended well beyond MIT to national and international contexts, helping to shape policy and strengthen connections between MIT researchers and the wider field. He served as a scientific governor of the Wellcome Trust, a global health research and advocacy foundation based in the United Kingdom, and co-chaired U.S. National Academy committees establishing guidelines for stem cell and genome editing research.

“Richard was an esteemed scientist, a stimulating colleague, a beloved mentor, a role model, and to me a partner in many endeavors both within and beyond MIT,” notes H. Robert Horvitz, a David H. Koch (1962) Professor of Biology. “He was a wonderful human being, and a good friend. I am sad beyond words at his passing.”

Named a Howard Hughes Medical Institute investigator in 1988, Hynes was subsequently recognized with a number of other notable honors. Most recently, he received the 2022 Albert Lasker Basic Medical Research Award, which he shared with Erkki Ruoslahti of Sanford Burnham Prebys and Timothy Springer of Harvard University, for his discovery of integrins and pioneering work in cell adhesion.

His other awards include the Canada Gairdner International Award, the Distinguished Investigator Award from the International Society for Matrix Biology, the Robert and Claire Pasarow Medical Research Award, the E.B. Wilson Medal from the American Society for Cell Biology, the David Rall Medal from the National Academy of Medicine and the Paget-Ewing Award from the Metastasis Research Society. Hynes was a member of the National Academy of Sciences, the National Academy of Medicine, the Royal Society of London, the American Association for the Advancement of Science, and the American Academy of Arts and Sciences.

Easily recognized by a commanding stature that belied his soft-spoken nature, Hynes was known around MIT’s campus not only for his acuity, integrity, and wise counsel, but also for his community spirit and service. From serving food at community socials to moderating events and meetings or recognizing the success of colleagues and trainees, his willingness to help spanned roles of every size.

“Richard was a phenomenal friend and colleague. He approached complex problems with a thoughtfulness and clarity that few can achieve,” notes Vander Heiden. “He was also so generous in his willingness to provide help and advice, and did so with a genuine kindness that was appreciated by everyone.”

Hynes is survived by his wife Fleur, their sons Hugh and Colin and their partners, and four grandchildren.


Professor of the practice Robert Liebeck, leading expert on aircraft design, dies at 87

A giant in aviation, Liebeck had taught at MIT since 2000 and was a pioneer of the famed blended wing body experimental aircraft.


Robert Liebeck, a professor of the practice in the MIT Department of Aeronautics and Astronautics and one of the world’s leading experts on aircraft design, aerodynamics, and hydrodynamics, died on Jan. 12 at age 87.

“Bob was a mentor and dear friend to so many faculty, alumni, and researchers at AeroAstro over the course of 25 years,” says Julie Shah, department head and the H.N. Slater Professor of Aeronautics and Astronautics at MIT. “He’ll be deeply missed by all who were fortunate enough to know him.”

Liebeck’s long and distinguished career in aerospace engineering included a number of foundational contributions to aerodynamics and aircraft design, beginning with his graduate research into high-lift airfoils. His novel designs came to be known as “Liebeck airfoils” and are used primarily for high-altitude reconnaissance airplanes; Liebeck airfoils have also been adapted for use in Formula One racing cars, racing sailboats, and even a flying replica of a giant pterosaur.

He was perhaps best known for his groundbreaking work on blended wing body (BWB) aircraft. He oversaw the BWB project at Boeing during his celebrated five-decade tenure at the company, working closely with NASA on the X-48 experimental aircraft. After retiring as senior technical fellow at Boeing in 2020, Liebeck remained active in BWB research. He served as technical advisor at BWB startup JetZero, which is aiming to build a more fuel-efficient aircraft for both military and commercial use and has set a target date of 2027 for its demonstration flight. 

Liebeck was appointed a professor of the practice at MIT in 2000, and taught classes on aircraft design and aerodynamics. 

“Bob contributed to the department both in aircraft capstones and also in advising students and mentoring faculty, including myself,” says John Hansman, the T. Wilson Professor of Aeronautics and Astronautics. “In addition to aviation, Bob was very significant in car racing and developed the downforce wing and flap system which has become standard on F1, IndyCar, and NASCAR cars.”

He was a major contributor to the Silent Aircraft Project, a collaboration between MIT and Cambridge University led by Dame Ann Dowling. Liebeck also worked closely with Professor Woody Hoburg ’08 and his research group, advising on students’ research into efficient methods for designing aerospace vehicles. Before Hoburg was accepted into the NASA astronaut corps in 2017, the group produced an open-source Python package, GPkit, for geometric programming, which was used to design a five-day endurance unmanned aerial vehicle for the U.S. Air Force.

“Bob was universally respected in aviation and he was a good friend to the department,” remembers Professor Ed Greitzer.

Liebeck was an AIAA honorary fellow and Boeing senior technical fellow, as well as a member of the National Academy of Engineering, Royal Aeronautical Society, and Academy of Model Aeronautics. He was a recipient of the Guggenheim Medal and ASME Spirit of St. Louis Medal, among many other awards, and was inducted into the International Air and Space Hall of Fame.

An avid runner and motorcyclist, Liebeck is remembered by friends and colleagues for his adventurous nature and generosity of spirit. Throughout a career punctuated by honors and achievements, Liebeck found his greatest satisfaction in teaching. In addition to his role at MIT, he was an adjunct faculty member at the University of California at Irvine and served as a faculty member for that university’s Design/Build/Fly and Human-Powered Airplane teams.

“It is the one job where I feel I have done some good — even after a bad lecture,” he told AeroAstro Magazine in 2007. “I have decided that I am finally beginning to understand aeronautical engineering, and I want to share that understanding with our youth.”


Electrifying boilers to decarbonize industry

AtmosZero, co-founded by Addison Stark SM ’10, PhD ’14, developed a modular heat pump to electrify the centuries-old steam boiler.


More than 200 years ago, the steam boiler helped spark the Industrial Revolution. Since then, steam has been the lifeblood of industrial activity around the world. Today the production of steam — created by burning gas, oil, or coal to boil water — accounts for a significant percentage of global energy use in manufacturing, powering the creation of paper, chemicals, pharmaceuticals, food, and more.

Now, the startup AtmosZero, founded by Addison Stark SM ’10, PhD ’14; Todd Bandhauer; and Ashwin Salvi, is taking a new approach to electrify the centuries-old steam boiler. The company has developed a modular heat pump capable of delivering industrial steam at temperatures up to 150 degrees Celsius to serve as a drop-in replacement for combustion boilers.

The company says its first 1-megawatt steam system is far cheaper to operate than commercially available electric solutions thanks to ultra-efficient compressor technology, which uses 50 percent less electricity than electric resistive boilers. The founders are hoping that’s enough to make decarbonized steam boilers drive the next industrial revolution.

“Steam is the most important working fluid ever,” says Stark, who serves as AtmosZero’s CEO. “Today everything is built around the ubiquitous availability of steam. Cost-effectively electrifying that requires innovation that can scale. In other words, it requires a mass-produced product — not one-off projects.”

Tapping into steam

Stark joined the Technology and Policy Program when he came to MIT in 2007. He ultimately completed a dual master’s degree by adding mechanical engineering to his studies.

“I was interested in the energy transition and in accelerating solutions to enable that,” Stark says. “The transition isn’t happening in a vacuum. You need to align economics, policy, and technology to drive that change.”

Stark stayed at MIT to earn his PhD in mechanical engineering, studying thermochemical biofuels.

After MIT, Stark began working on early-stage energy technologies with the Department of Energy’s Advanced Research Projects Agency-Energy (ARPA-E), with a focus on manufacturing efficiency, the energy-water nexus, and electrification.

“Part of that work involved applying my training at MIT to things that hadn’t really been innovated on for 50 years,” Stark says. “I was looking at the heat exchanger. It’s so fundamental. I thought, ‘How might we reimagine it in the context of modern advances in manufacturing technology?’”

The problem is as difficult as it is consequential, touching nearly every corner of the global industrial economy. More than 2.2 gigatons of CO2 emissions are generated each year to turn water into steam — accounting for more than 5 percent of global energy-related emissions.

In 2020, Stark co-authored an article in the journal Joule with Gregory Thiel SM ’12, PhD ’15 titled, “To decarbonize industry, we must decarbonize heat.” The article examined opportunities for industrial heat decarbonization, and it got Stark excited about the potential impact of a standardized, scalable electric heat pump.

Most electric boiler options today bring huge increases in operating costs. Many also make use of factory waste heat, which requires pricey retrofits. Stark never imagined he’d become an entrepreneur, but he soon realized no one was going to act on his findings for him.

“The only path to seeing this invention brought out into the world was to found and run the company,” Stark says. “It’s something I didn’t anticipate or necessarily want, but here I am.”

Stark partnered with former ARPA-E awardee Todd Bandhauer, who had been inventing new refrigerant compressor technology in his lab at Colorado State University, and former ARPA-E colleague Ashwin Salvi. The team officially founded AtmosZero in 2022.

“The compressor is the engine of the heat pump and defines the efficiency, cost, and performance,” Stark says. “The fundamental challenge of delivering heat is that the higher your heat pump is raising the air temperature, the lower your maximum efficiency. It runs into thermodynamic limitations. By designing for optimum efficiency in the operational windows that matter for the refrigerants we’re using, and for the precision manufacturing of our compressors, we’re able to maximize the individual stages of compression to maximize operational efficiency.”
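
For a sense of the thermodynamic ceiling Stark is describing, the ideal (Carnot) heating coefficient of performance can be estimated from the source and delivery temperatures alone. The sketch below is a back-of-the-envelope illustration with assumed temperatures, not AtmosZero’s design data.

```python
# Back-of-the-envelope sketch with assumed temperatures (not AtmosZero's design data):
# the ideal (Carnot) heating COP sets the ceiling, and it falls as the temperature
# lift between the heat source and the delivered steam grows.
def carnot_heating_cop(t_hot_c: float, t_cold_c: float) -> float:
    """Ideal COP for delivering heat at t_hot_c from a source at t_cold_c (Celsius)."""
    t_hot = t_hot_c + 273.15
    t_cold = t_cold_c + 273.15
    return t_hot / (t_hot - t_cold)

# Raising roughly 20 C ambient air to 150 C steam:
print(round(carnot_heating_cop(150.0, 20.0), 2))  # ~3.25, the theoretical ceiling

# A real machine achieves only a fraction of the ideal value; a practical COP near 2
# would be consistent with the article's claim of roughly half the electricity of a
# resistive boiler, whose COP is effectively 1.
```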

The system can work with waste heat from air or water, but it doesn’t need waste heat to work. Many other electric boilers rely on waste heat, but Stark thinks that adds too much complexity to installation and operations.

Instead, in AtmosZero’s novel heat pump cycle, heat drawn from ambient-temperature air warms a liquid heat-transfer material, which evaporates a refrigerant. The refrigerant then flows through the system’s series of compressors and heat exchangers, reaching temperatures high enough to boil water, and its remaining heat is recovered once it returns to lower temperatures. The system can be ramped up and down to fit seamlessly into existing industrial processes.

“We can work just like a combustion boiler,” Stark says. “At the end of the day, customers don’t want to change how their manufacturing facilities operate in order to electrify. You can’t change or increase complexity on-site.”

That approach means the boiler can be deployed in a range of industrial contexts without unique project costs or other changes.

“What we really offer is flexibility and something that can drop in with ease and minimize total capital costs,” Stark says.

From 1 to 1,000

AtmosZero already has a pilot 650-kilowatt system operating at a customer facility near its headquarters in Loveland, Colorado. The company is currently focused on demonstrating the system’s year-round durability and reliability as it works to build out its backlog of orders and prepares to scale.

Stark says once the system is brought to a customer’s facility, it can be installed in an afternoon and deployed in a matter of days, with zero downtime.

AtmosZero is aiming to deliver a handful of units to customers over the next year or two, with plans to deploy hundreds of units a year after that. The company is currently targeting manufacturing plants using under 10 megawatts of thermal energy at peak demand, which represents most U.S. manufacturing facilities.

Stark is proud to be part of a growing group of MIT-affiliated decarbonization startups, some of which are targeting specific verticals, like Boston Metal for steel and Sublime Systems for cement. But he says beyond the most common materials, the industry gets very fragmented, with one of the only common threads being the use of steam.

“If we look across industrial segments, we see the ubiquity of steam,” Stark says. “It’s a tremendously ripe opportunity to have impact at scale. Steam cannot be removed from industry. So much of every industrial process that we’ve designed over the last 160 years has been around the availability of steam. So, we need to focus on ways to deliver low-emissions steam rather than removing it from the equation.”


Polar weather on Jupiter and Saturn hints at the planets’ interior details

New research may explain the striking differences between the two planets’ polar vortex patterns.


Over the years, passing spacecraft have observed mystifying weather patterns at the poles of Jupiter and Saturn. The two planets host very different types of polar vortices, which are huge atmospheric whirlpools that rotate over a planet’s polar region. On Saturn, a single massive polar vortex appears to cap the north pole in a curiously hexagonal shape, while on Jupiter, a central polar vortex is surrounded by eight smaller vortices, like a pan of swirling cinnamon rolls.

Given that both planets are similar in many ways — they are roughly the same size and made from the same gaseous elements — the stark difference in their polar weather patterns has been a longstanding mystery.

Now, MIT scientists have identified a possible explanation for how the two different systems may have evolved. Their findings could help scientists understand not only the planets’ surface weather patterns, but also what might lie beneath the clouds, deep within their interiors.

In a study appearing this week in the Proceedings of the National Academy of Sciences, the team simulates various ways in which well-organized vortex patterns may form out of random disturbances on a gas giant. A gas giant, such as Jupiter or Saturn, is a large planet made mostly of gaseous elements. Among a wide range of plausible planetary configurations, the team found that, in some cases, the simulated flows coalesced into a single large vortex, similar to Saturn’s pattern, whereas other simulations produced multiple large circulations, akin to Jupiter’s vortices.

After comparing simulations, the team found that the resulting vortex pattern, and whether a planet develops one or multiple polar vortices, comes down to one main property: the “softness” of a vortex’s base, which is related to the interior composition. The scientists liken an individual vortex to a whirling cylinder spinning through a planet’s many atmospheric layers. When the base of this swirling cylinder is made of softer, lighter materials, any vortex that evolves can only grow so large. The final pattern can then allow for multiple smaller vortices, similar to those on Jupiter. In contrast, if a vortex’s base is made of harder, denser stuff, it can grow much larger and subsequently engulf other vortices to form one single, massive vortex, akin to the monster cyclone on Saturn.

“Our study shows that, depending on the interior properties and the softness of the bottom of the vortex, this will influence the kind of fluid pattern you observe at the surface,” says study author Wanying Kang, assistant professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “I don’t think anyone’s made this connection between the surface fluid pattern and the interior properties of these planets. One possible scenario could be that Saturn has a harder bottom than Jupiter.”

The study’s first author is MIT graduate student Jiaru Shi.

Spinning up

Kang and Shi’s new work was inspired by images of Jupiter and Saturn that have been taken by the Juno and Cassini missions. NASA’s Juno spacecraft has been orbiting around Jupiter since 2016, and has captured stunning images of the planet’s north pole and its multiple swirling vortices. From these images, scientists have estimated that each of Jupiter’s vortices is immense, spanning about 3,000 miles across — almost half as wide as the Earth itself.

The Cassini spacecraft, prior to intentionally burning up in Saturn’s atmosphere in 2017, orbited the ringed planet for 13 years. Its observations of Saturn’s north pole recorded a single, hexagonal-shaped polar vortex, about 18,000 miles wide.

“People have spent a lot of time deciphering the differences between Jupiter and Saturn,” Shi says. “The planets are about the same size and are both made mostly of hydrogen and helium. It’s unclear why their polar vortices are so different.”

Shi and Kang set out to identify a physical mechanism that would explain why one planet might evolve a single vortex, while the other hosts multiple vortices. To do so, they worked with a two-dimensional model of surface fluid dynamics. While a polar vortex is three-dimensional in nature, the team reasoned that they could accurately represent vortex evolution in two dimensions, as the fast rotation of Jupiter and Saturn enforces uniform motion along the rotating axis.

“In a fast-rotating system, fluid motion tends to be uniform along the rotating axis,” Kang explains. “So, we were motivated by this idea that we can reduce a 3D dynamical problem to a 2D problem because the fluid pattern does not change in 3D. This makes the problem hundreds of times faster and cheaper to simulate and study.”

Getting to the bottom

Following this reasoning, the team developed a two-dimensional model of vortex evolution on a gas giant, based on an existing equation that describes how swirling fluid evolves over time.

“This equation has been used in many contexts, including to model midlatitude cyclones on Earth,” Kang says. “We adapted the equation to the polar regions of Jupiter and Saturn.”

The team applied their two-dimensional model to simulate how fluid would evolve over time on a gas giant under different scenarios. In each scenario, the team varied the planet’s size, its rate of rotation, its internal heating, and the softness or hardness of the rotating fluid, among other parameters. They then set a random “noise” condition, in which fluid initially flowed in random patterns across the planet’s surface. Finally, they observed how the fluid evolved over time given the scenario’s specific conditions.
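
To make that setup concrete, the sketch below is a minimal, single-layer toy version of such a simulation: a vorticity field evolves from random initial noise on a doubly periodic domain, and a deformation length Ld stands in, loosely, for the “softness” of the vortex base. It is an illustrative simplification with invented parameters, not the team’s actual model.

```python
import numpy as np

# Toy 2D simulation (not the authors' model): potential vorticity q = lap(psi) - psi/Ld^2
# evolves from random "noise" on a periodic grid; a smaller Ld (a "softer" base, in the
# loose sense used here) limits how large a coherent vortex can grow.
N, L, Ld = 128, 2 * np.pi, 0.5           # grid points, domain size, deformation length (assumed)
nu, dt, nsteps = 1e-4, 2e-3, 2000        # dissipation, time step, number of steps (assumed)

k1d = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
kx, ky = np.meshgrid(k1d, k1d)
k2 = kx**2 + ky**2
denom = k2 + 1.0 / Ld**2                 # from q_hat = -(k^2 + 1/Ld^2) * psi_hat

rng = np.random.default_rng(0)
q_hat = np.fft.fft2(rng.standard_normal((N, N)))   # random initial vorticity ("noise")

def tendency(q_hat):
    psi_hat = -q_hat / denom
    u = -np.real(np.fft.ifft2(1j * ky * psi_hat))  # u = -d(psi)/dy
    v = np.real(np.fft.ifft2(1j * kx * psi_hat))   # v =  d(psi)/dx
    qx = np.real(np.fft.ifft2(1j * kx * q_hat))
    qy = np.real(np.fft.ifft2(1j * ky * q_hat))
    adv_hat = np.fft.fft2(u * qx + v * qy)         # advection of vorticity by the flow
    return -adv_hat - nu * k2 * q_hat              # weak dissipation keeps the run stable

for _ in range(nsteps):                            # crude forward-Euler time stepping
    q_hat = q_hat + dt * tendency(q_hat)

q = np.real(np.fft.ifft2(q_hat))
print("final vorticity range:", float(q.min()), float(q.max()))
```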

Across the different simulations, they observed that some scenarios evolved to form a single large polar vortex, like Saturn, whereas others formed multiple smaller vortices, like Jupiter. After analyzing the combinations of parameters and variables in each scenario and how they related to the final outcome, they landed on a single mechanism that explains whether one or multiple vortices evolve: As random fluid motions start to coalesce into individual vortices, the size to which a vortex can grow is limited by how soft the bottom of the vortex is. The softer, or lighter, the gas rotating at the bottom of a vortex, the smaller the vortex can ultimately grow, allowing multiple smaller vortices to coexist at a planet’s pole, similar to those on Jupiter.


Conversely, the harder or denser a vortex bottom is, the larger the system can grow, to a size where eventually it can follow the planet’s curvature as a single, planetary-scale vortex, like the one on Saturn.

If this mechanism is indeed what is at play on both gas giants, it would suggest that Jupiter could be made of softer, lighter material, while Saturn may harbor heavier stuff in its interior.

“What we see from the surface, the fluid pattern on Jupiter and Saturn, may tell us something about the interior, like how soft the bottom is,” Shi says. “And that is important because maybe beneath Saturn’s surface, the interior is more metal-enriched and has more condensable material which allows it to provide stronger stratification than Jupiter.”

"Because Jupiter and Saturn are otherwise so similar, their different polar weather has been a puzzle,” says Yohai Kaspi, a professor of geophysical fluid dynamics at the Weizmann Institute of Science, and a member of the Juno mission’s science team, who was not involved in the new study. “The work by Shi and Kang reveals a surprising link between these differences and the planets’ deep interior ‘softness’, offering a new way to map the key internal properties that shape their atmospheres."

This research was supported, in part, by a Mathworks Fellowship and endowed funding from MIT’s Department of Earth, Atmospheric and Planetary Sciences.


How collective memory of the Rwandan genocide was preserved

Delia Wendel’s new book illuminates a painful and painstaking effort by citizens to bear witness to atrocities.


The 1994 genocide in Rwanda took place over a little more than three months, during which militias representing the Hutu ethnic group conducted a mass murder of members of the Tutsi ethnic group along with some politically moderate members of the Hutu and Twa groups. Soon after, local citizens and aid workers began to document the atrocities that had occurred in the country.

They were establishing evidence of a genocide that many outsiders were slow to acknowledge; other countries and the U.N. did not recognize it until 1998. By preserving scenes of massacre and victims’ remains, this effort allowed foreigners, journalists, and neighbors to witness what had happened. Though the citizens’ work was emotionally and physically challenging, they used these sites of memory to seek justice for victims who had been killed and harmed.

In so doing, these efforts turned memory into officially recognized history. Now, in a new book, MIT scholar Delia Wendel carefully explores this work, shedding new light on the people who created the state’s genocide memorials, and the decisions they made in the process — such as making the remains of the dead available for public viewing. She also examines how the state gained control of the effort and has chosen to represent the past through these memorials.

“I’m seeking to recuperate this forgotten history of the ethics of the work, while also contending with the motivations of state sovereignty that has sustained it,” says Wendel, who is the Class of 1922 Career Development Associate Professor of Urban Studies and International Development in MIT’s Department of Urban Studies and Planning (DUSP).

That book, “Rwanda’s Genocide Heritage: Between Justice and Sovereignty,” is published by Duke University Press and is freely available through the MIT Libraries. In it, Wendel uncovers new details about the first efforts to preserve the memory of the genocide, analyzes the social and political dynamics, and examines their impact on people and public spaces.

“The shift from memory to history is important because it also requires recognition that is official or more public in nature,” Wendel says. “Survivors, their kin, their relatives, they know their histories. What they’re wishing to happen is a form of repair, or justice, or empowerment, that comes with disclosing those histories. That truth-telling aspect is really important.”

Conversations and memory

Wendel’s book was well over a decade in the making — and emerged from a related set of scholarly inquiries about peace-building activities in the wake of genocide. For this project, about memorializing genocide, Wendel visited over 30 villages in Rwanda over a span of many years, gradually making connections and building dialogues with citizens, in addition to conducting more conventional social science research.

“Speaking with rural residents started to unlock a lot of different types of conversations,” Wendel says of those visits. “A good deal of those conversations had to do with memory, and with relationships to place, neighbors, and authority.” She adds: “These are topics that people are very hesitant to speak about, and rightly so. This has been a book that took a long time to research and build some semblance of trust.”

During her research, Wendel also talked at length with some key figures involved in the process, including Louis Kanamugire, a Rwandan who became the first head of the country’s post-war Genocide Memorial Commission. Kanamugire, who lost his parents in the genocide, felt it was necessary to preserve and display the remains of genocide victims, including at four key sites that later became official state memorials.

This process involved, as Wendel puts it, the “gruesome” work of cleaning and preserving bodies and bones to provide both material evidence of genocide and the grounds for beginning the work of societal repair and individual healing.

Wendel also uncovers, in detail for the first time, the work done by Mario Ibarra, a Chilean aid worker for the U.N. who investigated atrocities, photographed evidence extensively, conducted preservation work, and contributed to the country’s Genocide Memorial Commission as well. The relationship between global human rights practice and genocide survivors seeking justice, in terms of preserving and documenting evidence, is at the core of the book and, Wendel believes, a previously underappreciated aspect of this topic.

“The story of Rwanda memorialization that has typically been told is one of state control,” Wendel says. “But in the beginning, the government followed independent initiatives by this human rights worker and local residents who really spurred this on.”

In the book, Wendel also examines how Rwanda’s memorialization practices relate to those of other countries, often in the so-called Global South. She terms this phenomenon “trauma heritage,” and notes that it has followed similar trajectories across countries in Africa and South America, for instance.

“Trauma heritage is the act of making visible the violence that had been actively hidden, and intervening in the dynamics of power,” she says. “Making such public spaces for silenced pain is a way of seeking recognition of those harms, and [seeking] forms of justice and repair.”

The tensions of memorialization

To be clear, Rwanda has been able to construct genocide memorials in the first place because, in the mid-1990s, Tutsi troops regained power in the country by defeating their Hutu adversaries. Subsequently, in a state without unlimited free expression, the government has considerable control over the content and forms of memorialization that take place.

Meanwhile, there have always been differing views about, say, displaying victims’ remains, and to what degree such a practice underlines their humanity or emphasizes the dehumanizing treatment they suffered. Then too, atrocities can produce a wide range of psychological responses among the living, including survivors’ guilt and the sheer difficulty many experience in expressing what they have witnessed. The process of memorialization, in such circumstances, will likely be fraught.

“The book is about the tensions and paradoxes between the ethics of this work and its politics, which have a lot to do with state sovereignty and control,” Wendel says. “It’s rooted in the tension between what’s invisible and what’s visible, between this bid to be seen and to recognize the humanity of the victims and yet represent this dehumanizing violence. These are irresolvable dilemmas that were felt by the people doing this work.”

Or, as Wendel writes in the book, Rwandans and others immersed in similar struggles for justice around the world have had to grapple with the “messy politics of repair, searching for seemingly impossible redress for injustice.”

Other experts have praised Wendel’s book, such as Pumla Gobodo-Madikizela, a professor at Stellenbosch University in South Africa, who studies the psychological effects of mass violence. Gobodo-Madikizela has cited Wendel’s “extraordinary narratives” about the book’s principal figures, observing that they “not only preserve the remains but also reclaim the victims’ humanity. … Wendel shows how their labor becomes a defiant insistence on visibility that transforms the act of cleaning into a form of truth-telling, making injustice materially and spatially undeniable.”

For her part, Wendel hopes the book will engage readers interested in multiple related issues, including Rwandan and African history, the practices and politics of public memory, human rights and peace-building, and the design of public memorials and related spaces, including those built in the aftermath of traumatic historical episodes.

“Rwanda’s genocide heritage remains an important endeavor in memory justice, even if its politics need to be contended with at the same time,” Wendel says. 


Helping companies with physical operations around the world run more intelligently

Founded by two MIT alumni, Samsara’s platform gives companies a central hub to learn from their workers, equipment, and other infrastructure.


Running large companies in construction, logistics, energy, and manufacturing requires careful coordination between millions of people, devices, and systems. For more than a decade, Samsara has helped those companies connect their assets to get work done more intelligently.

Founded by John Bicket SM ’05 and Sanjit Biswas SM ’05, Samsara’s platform gives companies with physical operations a central hub to track and learn from workers, equipment, and other infrastructure. Layered on top of that platform are real-time analytics and notifications designed to prevent accidents, reduce risks, save fuel, and more.

Tens of thousands of customers have used Samsara’s platform to improve their operations since its founding in 2015. Home Depot, for instance, used Samsara’s artificial intelligence-equipped dashcams to reduce their total auto liability claims by 65 percent in one year. Maxim Crane Works saved more than $13 million in maintenance costs using Samsara’s equipment and vehicle diagnostic data in 2024. Mohawk Industries, the world’s largest flooring manufacturer, improved their route efficiency and saved $7.75 million annually.

“It’s all about real-world impact,” says Biswas, Samsara’s CEO. “These organizations have complex operations and are functioning at a massive scale. Workers are driving millions of miles and consuming tons of fuel. If you can understand what’s happening and run analysis in the cloud, you can find big efficiency improvements. In terms of safety, these workers are putting their lives at risk every day to keep this infrastructure running. You can literally save lives if you can reduce risk.”

Finding big problems

Biswas and Bicket started PhD programs at MIT in 2002, both conducting research around networking in the Computer Science and Artificial Intelligence Laboratory (CSAIL). They eventually applied their studies to build a wireless network called MIT RoofNet.

Upon graduating with master’s degrees, Biswas and Bicket decided to commercialize the technologies they worked on, founding the company Meraki in 2006.

“How do you get big Wi-Fi networks out in the world?” Biswas asks. “With MIT RoofNet, we covered Cambridge in Wi-Fi. We wanted to enable other people to build big Wi-Fi networks and make Wi-Fi go mainstream for larger campuses and offices.”

Over the next six years, Meraki’s technology was used to create millions of Wi-Fi networks around the world. In 2012, Meraki was acquired by Cisco. Biswas and Bicket left Cisco in 2015, unsure of what they’d work on next.

“The way we found ourselves to Samsara was through the same curiosity we had as graduate students,” Biswas says. “This time it dealt more with the planet’s infrastructure. We were thinking about how utilities work, and how construction happens at the scale of cities and states. It drew us into operations, which is the infrastructure backbone of the planet.”

As the founders learned about industries like logistics, utilities, and construction, they realized they could use their technical background to improve safety and efficiency.

“All these industries have a lot in common,” Biswas says. “They have a lot of field workers — often thousands of them — they have a lot of assets like trucks and equipment, and they’re trying to orchestrate it all. The throughline was the importance of data.”

When they founded Samsara 10 years ago, many people were still collecting field data with pen and paper.

“Because of our technical background, we knew that if you could collect the data and run sophisticated algorithms like AI over it, you could get a ton of insights and improve the way those operations run,” Biswas says.

Biswas says extracting insights from data is easy. Making field-ready products and getting them into the hands of frontline workers took longer.

Samsara started by tapping into existing sensors in buildings, cars, and other assets. The company also built its own, including AI-equipped cameras and GPS trackers that can monitor driving behavior. That formed the foundation of Samsara’s Connected Operations Platform. On top of that, Samsara Intelligence processes data in the cloud and provides insights, such as the best routes for commercial vehicles, opportunities for more proactive maintenance, and ways to reduce fuel consumption.

Samsara’s platform can be used to detect if a commercial vehicle or snowplow driver is on their phone and send an audio message nudging them to stay safe and focused. The platform can also deliver training and coaching.

“That’s the kind of thing that reduces risk, because workers are way less likely to be distracted,” Biswas says. “If you do [that] for millions of workers, you reduce risk at scale.”

The platform also allows managers to query their data in a ChatGPT-style interface, asking questions such as: Who are my safest drivers? Which vehicles need maintenance? And what are my least fuel-efficient trucks?
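
As a purely hypothetical illustration of what such a question computes once the data is centralized (this is not Samsara’s API or data model), a “safest drivers” query amounts to ranking drivers by a normalized safety metric:

```python
from collections import defaultdict

# Hypothetical records: (driver, harsh_driving_events, miles_driven) per trip.
records = [
    ("driver_a", 2, 1200.0),
    ("driver_b", 9, 950.0),
    ("driver_c", 1, 400.0),
    ("driver_a", 3, 1100.0),
]

totals = defaultdict(lambda: [0, 0.0])   # driver -> [total events, total miles]
for driver, events, miles in records:
    totals[driver][0] += events
    totals[driver][1] += miles

# Rank by harsh events per mile driven (lower is safer), reported per 1,000 miles.
ranked = sorted(totals.items(), key=lambda kv: kv[1][0] / kv[1][1])
for driver, (events, miles) in ranked:
    print(f"{driver}: {events / miles * 1000:.2f} harsh events per 1,000 miles")
```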

“Our platform helps recognize frontline workers who are safe and efficient in their job,” Biswas says. “These people are largely unsung heroes. They keep our planet running, but they don’t hear ‘thank you’ very often. Samsara helps companies recognize the safest workers on the field and give them recognition and rewards. So, it’s about modernizing equipment but also improving the experience of millions of people that help run this vital infrastructure.”

Continuing to grow

Today Samsara processes 20 trillion data points a year and monitors 90 billion miles of driving. The company employs about 4,000 people across North America and Europe.

“It still feels early for us,” Biswas says. “We’ve been around for 10 years and gotten some scale, but we needed to build this platform to be able to build more products and have more impact. If you step back, operations is 40 percent of the world’s GDP, so we see a lot of opportunities to do more with this data. For instance, weather is part of Samsara Intelligence, and weather is 20 to 25 percent of the risk, and so we’re training AI models to reduce risk from the weather. And on the sustainability side, the more data we have, the more we can help optimize for things like fuel consumption or transitioning to electric vehicles. Maintenance is another fascinating data problem.”

The founders have also maintained a connection with MIT — and not just because the City of Boston’s Department of Public Works and the MBTA are customers. Last year, the Biswas Family Foundation announced funding for a four-year postdoctoral fellowship program at MIT for early-stage researchers working to improve health care.

Biswas says Samsara’s journey has been incredibly rewarding and notes the company is well-positioned to leverage advances in AI to further its impact going forward.

“It’s been a lot of fun and also a lot of hard work,” Biswas says. “What’s exciting is that each decade of the company feels different. It’s almost like a new chapter — or a whole new book. Right now, there’s so many incredible things happening with data and AI. It feels as exciting as it did in the early days of the company. It feels very much like a startup.”


Efficient cooling method could enable chip-based trapped-ion quantum computers

New technique could improve the scalability of trapped-ion quantum computers, an essential step toward making them practically useful.


Quantum computers could rapidly solve complex problems that would take the most powerful classical supercomputers decades to unravel. But they’ll need to be large and stable enough to efficiently perform operations. To meet this challenge, researchers at MIT and elsewhere are developing trapped-ion quantum computers based on ultra-compact photonic chips. These chip-based systems offer a scalable alternative to existing trapped-ion quantum computers, which rely on bulky optical equipment.

The ions in these quantum computers must be cooled to extremely low temperatures to minimize vibrations and prevent errors. So far, such trapped-ion systems based on photonic chips have been limited to inefficient and slow cooling methods.

Now, a team of researchers at MIT and MIT Lincoln Laboratory has implemented a much faster and more energy-efficient method for cooling trapped ions using photonic chips. Their approach achieved cooling to about 10 times below the limit of standard laser cooling.

Key to this technique is a photonic chip that incorporates precisely designed antennas to manipulate beams of tightly focused, intersecting light.

The researchers’ initial demonstration takes a key step toward scalable chip-based architectures that could someday enable quantum computing systems with greater efficiency and stability.

“We were able to design polarization-diverse integrated-photonics devices, utilize them to develop a variety of novel integrated-photonics-based systems, and apply them to show very efficient ion cooling. However, this is just the beginning of what we can do using these devices. By introducing polarization diversity to integrated-photonics-based trapped-ion systems, this work opens the door to a variety of advanced operations for trapped ions that weren’t previously attainable, even beyond efficient ion cooling — all research directions we are excited to explore in the future,” says Jelena Notaros, the Robert J. Shillman Career Development Associate Professor of Electrical Engineering and Computer Science (EECS) at MIT, a member of the Research Laboratory of Electronics, and senior author of a paper on this architecture.

She is joined on the paper by lead authors Sabrina Corsetti, an EECS graduate student; Ethan Clements, a former postdoc who is now a staff scientist at MIT Lincoln Laboratory; Felix Knollmann, a graduate student in the Department of Physics; John Chiaverini, senior member of the technical staff at Lincoln Laboratory and a principal investigator in MIT’s Center for Quantum Engineering; as well as others at Lincoln Laboratory and MIT. The research appears today in two joint publications in Light: Science and Applications and Physical Review Letters.

Seeking scalability

While there are many types of quantum systems, this research is focused on trapped-ion quantum computing. In this application, a charged particle called an ion is formed by peeling an electron from an atom, and then trapped using radio-frequency signals and manipulated using optical signals.

Researchers use lasers to encode information in the trapped ion by changing its state. In this way, the ion can be used as a quantum bit, or qubit. Qubits are the building blocks of a quantum computer.

To prevent collisions between ions and gas molecules in the air, the ions are held in vacuum, often created with a device known as a cryostat. Traditionally, bulky lasers sit outside the cryostat and shoot different light beams through the cryostat’s windows toward the chip. These systems require a room full of optical components to address just a few dozen ions, making it difficult to scale to the large numbers of ions needed for advanced quantum computing. Slight vibrations outside the cryostat can also disrupt the light beams, ultimately reducing the accuracy of the quantum computer.

To get around these challenges, MIT researchers have been developing integrated-photonics-based systems. In this case, the light is emitted from the same chip that traps the ion. This improves scalability by eliminating the need for external optical components.

“Now, we can envision having thousands of sites on a single chip that all interface up to many ions, all working together in a scalable way,” Knollmann says.

But integrated-photonics-based demonstrations to date have achieved limited cooling efficiencies.

Keeping their cool

To enable fast and accurate quantum operations, researchers use optical fields to reduce the kinetic energy of the trapped ion. This causes the ion to cool to nearly absolute zero, an effective temperature even colder than cryostats can achieve.

But common methods have a higher cooling floor, so the ion retains substantial vibrational energy after the cooling process is complete. That residual motion would make it hard to use the qubits for high-quality computations.

The MIT researchers utilized a more complex approach, known as polarization-gradient cooling, which involves the precise interaction of two beams of light.

Each light beam has a different polarization, which means the field in each beam is oscillating in a different direction (up and down, side to side, etc.). Where these beams intersect, they form a rotating vortex of light that can force the ion to stop vibrating even more efficiently.

Although this approach had been shown previously using bulk optics, it hadn’t been shown before using integrated photonics.

To enable this more complex interaction, the researchers designed a chip with two nanoscale antennas, which emit beams of light out of the chip to manipulate the ion above it.

These antennas are connected by waveguides that route light to the antennas. The waveguides are designed to stabilize the optical routing, which improves the stability of the vortex pattern generated by the beams.

“When we emit light from integrated antennas, it behaves differently than with bulk optics. The beams, and generated light patterns, become extremely stable. Having these stable patterns allows us to explore ion behaviors with significantly more control,” Clements says.

The researchers also designed the antennas to maximize the amount of light that reaches the ion. Each antenna has tiny curved notches that scatter light upward, spaced just right to direct light toward the ion.

“We built upon many years of development at Lincoln Laboratory to design these gratings to emit diverse polarizations of light,” Corsetti says.

They experimented with several architectures, characterizing each to better understand how it emitted light.

With their final design in place, the researchers demonstrated ion cooling to nearly 10 times below the limit of standard laser cooling, referred to as the Doppler limit. Their chip reached this temperature in about 100 microseconds, several times faster than other techniques.
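
For scale, the Doppler limit itself can be estimated from the natural linewidth of the cooling transition. The numbers below are a rough illustration using an assumed, representative linewidth, not values from the papers.

```python
import math

# Rough illustration (assumed linewidth, not from the papers): standard Doppler cooling
# bottoms out near T_D = hbar * Gamma / (2 * k_B), where Gamma is the natural linewidth
# of the cooling transition.
hbar = 1.054571817e-34       # reduced Planck constant, J*s
k_B = 1.380649e-23           # Boltzmann constant, J/K
gamma = 2 * math.pi * 20e6   # angular linewidth in rad/s (assumed ~20 MHz transition)

T_doppler = hbar * gamma / (2 * k_B)
print(f"Doppler limit ~ {T_doppler * 1e3:.2f} mK")   # roughly half a millikelvin

# Sub-Doppler schemes such as polarization-gradient cooling can push well below this
# floor, consistent with the roughly 10x improvement the researchers report.
```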

“The demonstration of enhanced performance using optics integrated in the ion-trap chip lays the foundation for further integration that can allow new approaches for quantum-state manipulation, and that could improve the prospects for practical quantum-information processing,” adds Chiaverini. “Key to achieving this advance was the cross-Institute collaboration between the MIT campus and Lincoln groups, a model that we can build on as we take these next steps.”

In the future, the team plans to conduct characterization experiments on different chip architectures and demonstrate polarization-gradient cooling with multiple ions. In addition, they hope to explore other applications that could benefit from the stable light beams they can generate with this architecture.

Other authors who contributed to this research are Ashton Hattori (MIT), Zhaoyi Li (MIT), Milica Notaros (MIT), Reuel Swint (Lincoln Laboratory), Tal Sneh (MIT), Patrick Callahan (Lincoln Laboratory), May Kim (Lincoln Laboratory), Aaron Leu (MIT), Gavin West (MIT), Dave Kharas (Lincoln Laboratory), Thomas Mahony (Lincoln Laboratory), Colin Bruzewicz (Lincoln Laboratory), Cheryl Sorace-Agaskar (Lincoln Laboratory), Robert McConnell (Lincoln Laboratory), and Isaac Chuang (MIT).

This work is funded, in part, by the U.S. Department of Energy, the U.S. National Science Foundation, the MIT Center for Quantum Engineering, the U.S. Department of Defense, an MIT Rolf G. Locher Endowed Fellowship, and an MIT Frederick and Barbara Cronin Fellowship.