General news from MIT (Massachusetts Institute of Technology)

Here you will find recent daily general news from MIT (Massachusetts Institute of Technology).

MIT News
MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.
The brain power behind sustainable AI

PhD student Miranda Schwacke explores how computing inspired by the human brain can fuel energy-efficient artificial intelligence.


How can you use science to build a better gingerbread house?

That was something Miranda Schwacke spent a lot of time thinking about. The MIT graduate student in the Department of Materials Science and Engineering (DMSE) is part of Kitchen Matters, a group of grad students who use food and kitchen tools to explain scientific concepts through short videos and outreach events. Past topics included why chocolate “seizes,” or becomes difficult to work with when melting (spoiler: water gets in), and how to make isomalt, the sugar glass that stunt performers jump through in action movies.

Two years ago, when the group was making a video on how to build a structurally sound gingerbread house, Schwacke scoured cookbooks for a variable that would produce the most dramatic difference in the cookies.

“I was reading about what determines the texture of cookies, and then tried several recipes in my kitchen until I got two gingerbread recipes that I was happy with,” Schwacke says.

She focused on butter, which contains water that turns to steam at high baking temperatures, creating air pockets in cookies. Schwacke predicted that decreasing the amount of butter would yield denser gingerbread, strong enough to hold together as a house.

“This hypothesis is an example of how changing the structure can influence the properties and performance of a material,” Schwacke said in the eight-minute video.

That same curiosity about materials properties and performance drives her research on the high energy cost of computing, especially for artificial intelligence. Schwacke develops new materials and devices for neuromorphic computing, which mimics the brain by processing and storing information in the same place. She studies electrochemical ionic synapses — tiny devices that can be “tuned” to adjust conductivity, much like neurons strengthening or weakening connections in the brain.

“If you look at AI in particular — to train these really large models — that consumes a lot of energy. And if you compare that to the amount of energy that we consume as humans when we’re learning things, the brain consumes a lot less energy,” Schwacke says. “That’s what led to this idea to find more brain-inspired, energy-efficient ways of doing AI.”

Her advisor, Bilge Yildiz, underscores the point: One reason the brain is so efficient is that data doesn’t need to be moved back and forth.

“In the brain, the connections between our neurons, called synapses, are where we process information. Signal transmission is there. It is processed, programmed, and also stored in the same place,” says Yildiz, the Breene M. Kerr (1951) Professor in the Department of Nuclear Science and Engineering and DMSE. Schwacke’s devices aim to replicate that efficiency.
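
As a loose software analogy (purely illustrative; the actual devices are physical, electrochemical elements, and this sketch is not from the research), an array of such synapses can be pictured as a grid of programmable conductances: programming pulses nudge each conductance up or down, and the array computes its output right where the weights are stored:

```python
import numpy as np

# Toy model of an array of tunable "synapses" (an illustrative analogy only).
# Each element holds a conductance G; programming pulses nudge G up or down,
# and the array computes output currents I = G @ V in place, where the
# "weights" (conductances) live.
rng = np.random.default_rng(42)

class SynapseArray:
    def __init__(self, rows, cols, g_min=0.0, g_max=1.0):
        self.G = rng.uniform(g_min, g_max, size=(rows, cols))  # conductances
        self.g_min, self.g_max = g_min, g_max

    def program(self, pulses, step=0.01):
        """Apply +1/-1 programming pulses; each nudges a conductance slightly,
        like ions shuttling into or out of the channel material."""
        self.G = np.clip(self.G + step * pulses, self.g_min, self.g_max)

    def forward(self, v):
        """Ohm's and Kirchhoff's laws: currents sum along each output row."""
        return self.G @ v

arr = SynapseArray(4, 8)
v = rng.normal(size=8)                                 # input voltages
before = arr.forward(v)
arr.program(pulses=np.sign(rng.normal(size=(4, 8))))   # strengthen/weaken
after = arr.forward(v)
print("output currents shifted by:", np.round(after - before, 4))
```

Because the multiplication happens in the same elements that store the weights, no data shuttles between a separate memory and processor, which is the efficiency Yildiz describes.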

Scientific roots

The daughter of a marine biologist mom and an electrical engineer dad, Schwacke was immersed in science from a young age. Science, she says, was “always a part of how I understood the world.”

“I was obsessed with dinosaurs. I wanted to be a paleontologist when I grew up,” she says. But her interests broadened. At her middle school in Charleston, South Carolina, she joined a FIRST Lego League robotics competition, building robots to complete tasks like pushing or pulling objects. “My parents, my dad especially, got very involved in the school team and helping us design and build our little robot for the competition.”

Her mother, meanwhile, studied how dolphin populations are affected by pollution for the National Oceanic and Atmospheric Administration. That had a lasting impact.

“That was an example of how science can be used to understand the world, and also to figure out how we can improve the world,” Schwacke says. “And that’s what I’ve always wanted to do with science.”

Her interest in materials science came later, in her high school magnet program. There, she was introduced to the interdisciplinary subject, a blend of physics, chemistry, and engineering that studies the structure and properties of materials and uses that knowledge to design new ones.

“I always liked that it goes from this very basic science, where we’re studying how atoms are ordering, all the way up to these solid materials that we interact with in our everyday lives — and how that gives them their properties that we can see and play with,” Schwacke says.

As a senior, she participated in a research program with a thesis project on dye-sensitized solar cells, a low-cost, lightweight solar technology that uses dye molecules to absorb light and generate electricity.

“What drove me was really understanding, this is how we go from light to energy that we can use — and also seeing how this could help us with having more renewable energy sources,” Schwacke says.

After high school, she headed across the country to Caltech. “I wanted to try a totally new place,” she says. There, she studied materials science, including nanostructured materials thousands of times thinner than a human hair. She focused on materials properties and microstructure — the tiny internal structure that governs how materials behave — which led her to electrochemical systems like batteries and fuel cells.

AI energy challenge

At MIT, she continued exploring energy technologies. She met Yildiz during a Zoom meeting in her first year of graduate school, in fall 2020, when the campus was still operating under strict Covid-19 protocols. Yildiz’s lab studies how charged atoms, or ions, move through materials in technologies like fuel cells, batteries, and electrolyzers.

The lab’s research into brain-inspired computing fired Schwacke’s imagination, but she was equally drawn to Yildiz’s way of talking about science.

“It wasn’t based on jargon and emphasized a very basic understanding of what was going on — that ions are going here, and electrons are going here — to understand fundamentally what’s happening in the system,” Schwacke says.

That mindset shaped her approach to research. Her early projects focused on the properties these devices need to work well — fast operation, low energy use, and compatibility with semiconductor technology — and on using magnesium ions instead of hydrogen, which can escape into the environment and make devices unstable.

Her current project, the focus of her PhD thesis, centers on understanding how the insertion of magnesium ions into tungsten oxide, a metal oxide whose electrical properties can be precisely tuned, changes its electrical resistance. In these devices, tungsten oxide serves as a channel layer, where resistance controls signal strength, much like synapses regulate signals in the brain.

“I am trying to understand exactly how these devices change the channel conductance,” Schwacke says.

Schwacke’s research was recognized with a MathWorks Fellowship from the School of Engineering in 2023 and 2024. The fellowship supports graduate students who leverage tools like MATLAB or Simulink in their work; Schwacke used MATLAB for critical data analysis and visualization.

Yildiz describes Schwacke’s research as a novel step toward solving one of AI’s biggest challenges.

“This is electrochemistry for brain-inspired computing,” Yildiz says. “It’s a new context for electrochemistry, but also with an energy implication, because the energy consumption of computing is unsustainably increasing. We have to find new ways of doing computing with much lower energy, and this is one way that can help us move in that direction.”

Like any pioneering work, it comes with challenges, especially in bridging the concepts between electrochemistry and semiconductor physics.

“Our group comes from a solid-state chemistry background, and when we started this work looking into magnesium, no one had used magnesium in these kinds of devices before,” Schwacke says. “So we were looking at the magnesium battery literature for inspiration and different materials and strategies we could use. When I started this, I wasn’t just learning the language and norms for one field — I was trying to learn it for two fields, and also translate between the two.”

She also grapples with a challenge familiar to all scientists: how to make sense of messy data.

“The main challenge is being able to take my data and know that I’m interpreting it in a way that’s correct, and that I understand what it actually means,” Schwacke says.

She overcomes hurdles by collaborating closely with colleagues across fields, including neuroscience and electrical engineering, and sometimes by just making small changes to her experiments and watching what happens next.

Community matters

Schwacke is not just active in the lab. In Kitchen Matters, she and her fellow DMSE grad students set up booths at local events like the Cambridge Science Fair and Steam It Up, an after-school program with hands-on activities for kids.

“We did ‘pHun with Food’ with ‘fun’ spelled with a pH, so we had cabbage juice as a pH indicator,” Schwacke says. “We let the kids test the pH of lemon juice and vinegar and dish soap, and they had a lot of fun mixing the different liquids and seeing all the different colors.”

She has also served as the social chair and treasurer for DMSE’s graduate student group, the Graduate Materials Council. As an undergraduate at Caltech, she led workshops in science and technology for Robogals, a student-run group that encourages young women to pursue careers in science, and assisted students in applying for the school’s Summer Undergraduate Research Fellowships.

For Schwacke, these experiences sharpened her ability to explain science to different audiences, a skill she sees as vital whether she’s presenting at a kids’ fair or at a research conference.

“I always think, where is my audience starting from, and what do I need to explain before I can get into what I’m doing so that it’ll all make sense to them?” she says.

Schwacke sees the ability to communicate as central to building community, which she considers an important part of doing research. “It helps with spreading ideas. It always helps to get a new perspective on what you’re working on,” she says. “I also think it keeps us sane during our PhD.”

Yildiz sees Schwacke’s community involvement as an important part of her resume. “She’s doing all these activities to motivate the broader community to do research, to be interested in science, to pursue science and technology, but that ability will help her also progress in her own research and academic endeavors.”

After her PhD, Schwacke wants to take that ability to communicate with her to academia, where she’d like to inspire the next generation of scientists and engineers. Yildiz has no doubt she’ll thrive.

“I think she’s a perfect fit,” Yildiz says. “She’s brilliant, but brilliance by itself is not enough. She’s persistent, resilient. You really need those on top of that.”


With a new molecule-based method, physicists peer inside an atom’s nucleus

An alternative to massive particle colliders, the approach could reveal insights into the universe’s starting ingredients.


Physicists at MIT have developed a new way to probe inside an atom’s nucleus, using the atom’s own electrons as “messengers” within a molecule.

In a study appearing today in the journal Science, the physicists precisely measured the energy of electrons whizzing around a radium atom that had been paired with a fluoride atom to make a molecule of radium monofluoride. They used the environment within the molecule as a sort of microscopic particle collider, which confined the radium atom’s electrons and encouraged them to briefly penetrate the atom’s nucleus.

Typically, experiments to probe the inside of atomic nuclei involve massive, kilometers-long facilities that accelerate beams of electrons to speeds fast enough to collide with and break apart nuclei. The team’s new molecule-based method offers a table-top alternative to directly probe the inside of an atom’s nucleus.

Within molecules of radium monofluoride, the team measured the energies of a radium atom’s electrons as they pinged around inside the molecule. They discerned a slight energy shift and determined that electrons must have briefly penetrated the radium atom’s nucleus and interacted with its contents. As the electrons winged back out, they retained this energy shift, providing a nuclear “message” that could be analyzed to sense the internal structure of the atom’s nucleus.

The team’s method offers a new way to measure the nuclear “magnetic distribution.” In a nucleus, each proton and neutron acts like a small magnet, and they align differently depending on how the nucleus’ protons and neutrons are spread out. The team plans to apply their method to precisely map this property of the radium nucleus for the first time. What they find could help to answer one of the biggest mysteries in cosmology: Why do we see much more matter than antimatter in the universe?

“Our results lay the groundwork for subsequent studies aiming to measure violations of fundamental symmetries at the nuclear level,” says study co-author Ronald Fernando Garcia Ruiz, who is the Thomas A. Franck Associate Professor of Physics at MIT. “This could provide answers to some of the most pressing questions in modern physics.”

The study’s MIT co-authors include Shane Wilkins, Silviu-Marian Udrescu, and Alex Brinson, along with collaborators from multiple institutions including the Collinear Resonance Ionization Spectroscopy Experiment (CRIS) at CERN in Switzerland, where the experiments were performed.

Molecular trap

According to scientists’ best understanding, there must have been almost equal amounts of matter and antimatter when the universe first came into existence. However, the overwhelming majority of what scientists can measure and observe in the universe is made from matter, whose building blocks are the protons and neutrons within atomic nuclei.

This observation is in stark contrast to what our best theory of nature, the Standard Model, predicts, and it is thought that additional sources of fundamental symmetry violation are required to explain the almost complete absence of antimatter in our universe. Such violations could be seen within the nuclei of certain atoms such as radium.

Unlike most atomic nuclei, which are spherical in shape, the radium atom’s nucleus has a more asymmetrical configuration, similar to a pear. Scientists predict that this pear shape could significantly amplify the signatures of fundamental symmetry violations, potentially to the point where they become observable.

“The radium nucleus is predicted to be an amplifier of this symmetry breaking, because its nucleus is asymmetric in charge and mass, which is quite unusual,” says Garcia Ruiz, whose group has focused on developing methods to probe radium nuclei for signs of fundamental symmetry violation.

Peering inside the nucleus of a radium atom to investigate fundamental symmetries is an incredibly tricky exercise.

“Radium is naturally radioactive, with a short lifetime, and we can currently only produce radium monofluoride molecules in tiny quantities,” says study lead author Shane Wilkins, a former postdoc at MIT. “We therefore need incredibly sensitive techniques to be able to measure them.”

The team realized that by placing a radium atom in a molecule, they could contain and amplify the behavior of its electrons.

“When you put this radioactive atom inside of a molecule, the internal electric field that its electrons experience is orders of magnitude larger compared to the fields we can produce and apply in a lab,” explains Silviu-Marian Udrescu PhD ’24, a study co-author. “In a way, the molecule acts like a giant particle collider and gives us a better chance to probe the radium’s nucleus.”

Energy shift

In their new study, the team first paired radium atoms with fluoride atoms to create molecules of radium monofluoride. They found that in this molecule, the radium atom’s electrons were effectively squeezed, increasing the chance for electrons to interact with and briefly penetrate the radium nucleus.

The team then trapped and cooled the molecules and sent them through a system of vacuum chambers, where lasers interacted with the molecules. In this way, the researchers were able to precisely measure the energies of electrons inside each molecule.

When they tallied the energies, they found that the electrons appeared to have a slightly different energy compared to what physicists would expect if they had not penetrated the nucleus. Although this energy shift was small — just a millionth of the energy of the laser photon used to excite the molecules — it gave unambiguous evidence of the molecules’ electrons interacting with the protons and neutrons inside the radium nucleus.
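
For a rough sense of the scale involved (a back-of-the-envelope illustration with an assumed photon energy, not figures from the paper): a visible laser photon carries on the order of 2 electron-volts, so a millionth of that is a few microelectron-volts, corresponding to a frequency shift of a few hundred megahertz, which precision laser spectroscopy can resolve:

```latex
% Back-of-the-envelope scale (assumed E_photon ~ 2 eV; illustrative only):
\Delta E \approx 10^{-6} \, E_{\text{photon}} \approx 2\,\mu\text{eV},
\qquad
\Delta\nu = \frac{\Delta E}{h}
  \approx \frac{2\times10^{-6}\,\text{eV}}{4.14\times10^{-15}\,\text{eV·s}}
  \approx 500\,\text{MHz}.
```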

“There are many experiments measuring interactions between nuclei and electrons outside the nucleus, and we know what those interactions look like,” Wilkins explains. “When we went to measure these electron energies very precisely, it didn’t quite add up to what we expected assuming they interacted only outside of the nucleus. That told us the difference must be due to electron interactions inside the nucleus.”

“We now have proof that we can sample inside the nucleus,” Garcia Ruiz says. “It’s like being able to measure a battery’s electric field. People can measure its field outside, but to measure inside the battery is far more challenging. And that’s what we can do now.”

Going forward, the team plans to apply the new technique to map the distribution of forces inside the nucleus. Their experiments have so far involved radium nuclei that sit in random orientations inside each molecule at high temperature. Garcia Ruiz and his collaborators would like to be able to cool these molecules and control the orientations of their pear-shaped nuclei such that they can precisely map their contents and hunt for the violation of fundamental symmetries.

“Radium-containing molecules are predicted to be exceptionally sensitive systems in which to search for violations of the fundamental symmetries of nature,” Garcia Ruiz says. “We now have a way to carry out that search.”

This research was supported, in part, by the U.S. Department of Energy. 


At MIT, a day of hands-on, kid-friendly learning

Organized by the MIT Museum, the 2025 Cambridge Science Carnival included activities with air cannons, sea bots, and electron microscopes.


Back and better than ever, the Cambridge Science Carnival, an annual free family-friendly science extravaganza, was held on Sunday, Sept. 21, at the Kendall/MIT Open Space.

Founded by the MIT Museum in 2007, and organized with the support of MIT and the City of Cambridge, the 2025 event drew approximately 20,000 attendees and featured more than 140 activities, demonstrations, and installations tied to the topics of science, technology, engineering, arts, and mathematics (STEAM).

Among the carnival’s wide variety of activities was the popular robot petting zoo, an annual showcase involving more than a dozen companies and local robotics clubs, including FIRST Tech Challenge and FIRST Robotics Competition. Participants were invited to engage in a range of robotics activities, from building with LEGOs and erector sets to piloting underwater robots to learning about the science of automation.

“Every exhibit and every moment of discovery today reinforces why Cambridge remains a global leader in STEAM,” Cambridge Mayor Denise Simmons said in her remarks at the event. “The creativity, ingenuity, and joy on display here today are a powerful reminder that science isn’t just for labs and lecture halls — it’s for everyone.”

Other activities included an appearance from the popular kid-friendly podcast “Tumble Science,” with co-host Marshall Escamilla testing fans’ knowledge of different STEAM topics drawn from “Tumble Science.” Clark University’s smoke-ring air cannons were a particular hit with the under-7-year-old set, while “Cycle To Science” showed off a gravity-defying bicycle wheel that, while spinning, was suspended on one side by a simple piece of string. Attendees also enjoyed live music, food trucks, and activities exploring everything from pipette art to the chemistry of glass. 

At the robot petting zoo, FIRST Robotics volunteer mentor Dominique Regli reflected on the event as someone who was herself first inspired by similar festivals more than a decade earlier. 

“Seeing kids of all ages interact with the robots made me think back to when I was a seventh grader, and how getting to see some of these robots for the first time was truly life-changing for me,” said Regli, who has been involved with FIRST Robotics since 2018 and is now an MIT computer science PhD student and affiliate of the Computer Science and Artificial Intelligence Laboratory (CSAIL). “These types of events are so important to expose students to what's possible.”

Throughout its history, a key aspect of the carnival has been MIT’s close collaboration with the City of Cambridge, which ran several activities. Cambridge Public Schools teachers led a “Trash or Treasure” activity hosted by the Public Works Department, which helped teach kids about recycling and composting. The carnival is a major contribution to the Institute’s objective of connecting the MIT ecosystem with Cambridge residents and local communities.

“Cambridge is one of the world’s leading science cities, with more Nobel laureates per capita than any other city on the planet,” says Michael John Gorman, director of the MIT Museum. “The Cambridge Science Carnival is a beloved day in the Cambridge calendar which brings science out of the labs and onto the streets.” 

With a focus on engaging families and kids from kindergarten through eighth grade, the carnival also gave undergraduate and graduate students the opportunity to showcase their work and hone their skills in clearly communicating science concepts to the public. More than 50 activities were led by MIT students, along with participants from other local schools such as Boston College and Boston, Clark, Harvard, Northeastern, and Tufts universities.

Typically organized as part of the annual Cambridge Science Festival, this year the Cambridge Science Carnival returned as a standalone event while the larger festival undergoes a strategic transition for its relaunch in 2026. The MIT Museum offered free admission during the carnival and is always free to Cambridge residents, as well as active military, EBT cardholders, members of the Massachusetts Teachers Association, and MIT ID holders.

“For MIT researchers, discovery often happens in a lab or a classroom, but the truth is, the spark of discovery can happen anywhere,” said Alfred Ironside, MIT vice president for communications, in remarks at the event. “That’s really what today is about: feeding curiosity, encouraging questions, and showing that science is not locked away behind closed doors. It’s for everyone.”


Startup’s tablets deliver cancer drugs more evenly over time

An MIT team’s technology could allow cancer drugs to be delivered more steadily into the bloodstream, to improve effectiveness and reduce side effects.


Pills are by far the most convenient form of cancer treatment, but most oral cancer drugs quickly dissolve in the stomach, delivering a burst of chemicals into the bloodstream all at once. That can cause side effects. It also may limit the drug’s effectiveness because its concentration in the blood may become too low after the initial burst.

Now, the startup Enzian Pharmaceutics, founded by Aron Blaesi PhD ’14 and former principal research scientist Nannaji Saka ScD ’74, is developing an oral tablet that delivers drugs into the gastric fluid and the blood steadily over time. The company’s tablets use tiny 3D-printed fibers that turn into a gel-like substance when exposed to water. The tablets have been shown to stay in the stomach of animals for up to a day, slowly degrading while releasing the drug in controlled quantities.

The company is currently validating its tablets’ ability to stay in place in a small number of healthy human volunteers. In about a year, it plans to begin testing the technology’s ability to improve the effectiveness and safety of cancer drugs in patients.

“A lot of orally delivered cancer drugs could benefit from this,” says Blaesi, who incorporated the company in 2016. “Right now, soon after someone has taken a cancer drug, its concentration in the blood can be up to 50 times greater than when they are supposed to take the next pill. During the peak, the drug goes into the heart, it goes into the liver, the brain, and it can cause a lot of problems, while at the end of the dosing interval the concentration in the blood may be too low. By taking out that peak and increasing the time the drug is released, we could improve the effectiveness of treatments and mitigate certain side effects.”
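
To see why stretching out the release matters, here is a toy one-compartment pharmacokinetic sketch (invented parameters for illustration, not Enzian data or the company’s model). The same dose is absorbed either as a quick burst or steadily over 12 hours:

```python
import numpy as np

# Toy one-compartment pharmacokinetic model (invented parameters, for
# illustration only). dC/dt = absorption_rate(t)/V - k_e * C.
V = 40.0      # volume of distribution (L)
k_e = 0.3     # elimination rate constant (1/h)
dose = 100.0  # total dose (mg)
dt = 0.01     # time step (h)
t = np.arange(0.0, 24.0, dt)

def simulate(release_hours):
    """Euler integration with the dose absorbed at a constant rate
    over `release_hours`, then no further absorption."""
    C = np.zeros_like(t)
    rate = dose / release_hours                    # mg/h while releasing
    for i in range(1, len(t)):
        absorbing = rate if t[i] <= release_hours else 0.0
        C[i] = C[i - 1] + (absorbing / V - k_e * C[i - 1]) * dt
    return C

burst = simulate(0.5)    # immediate release: absorbed in about 30 minutes
steady = simulate(12.0)  # gastroretentive tablet: absorbed over 12 hours

i12 = int(12.0 / dt)     # index of the 12-hour mark (next dose due)
print(f"burst:  peak {burst.max():.2f} mg/L, at 12 h {burst[i12]:.3f} mg/L")
print(f"steady: peak {steady.max():.2f} mg/L, at 12 h {steady[i12]:.3f} mg/L")
```

With these invented numbers, the burst concentration swings by more than an order of magnitude over the dosing interval, while the steady release holds it nearly flat.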

In search of innovation

When Blaesi came to MIT, he knew he wanted his mechanical engineering PhD work to form the basis of a company. Early on, as part of the Novartis-MIT Center for Continuous Manufacturing, he worked on manufacturing pills with an injection molding machine that melted and solidified the material, in contrast to the traditional process of compacting powder. He noticed injection molding made the pills far less porous.

“If you put a typical pill into a fluid or into the stomach, the fluid percolates through the pores and quickly dissolves it,” Blaesi explains. “That’s not the case when you have an injection molded product. That’s when Dr. Saka, whom I met almost daily to discuss my research, and I started to realize that microstructure is very important.”

The researchers began exploring how different tablet microstructures changed the rate at which drugs are released. For more precision, they moved from injection molding to 3D printing.

Using MIT machine shops, Blaesi built a 3D printer and produced tightly wound microstructures that could carry the drugs. He focused on fibrous structures with space between the fibers, because they would allow gastrointestinal fluid to percolate through the pill and dissolve it rapidly. He tested the structures in both his Cambridge, Massachusetts, apartment and at MIT’s shared facilities.

Blaesi then experimented with different carrier materials, finding that the higher the molecular weight, the longer it took the pill to dissolve because the material would absorb water and expand before degrading.

“Initially I thought, ‘Oh no, the drug isn’t being dissolved fast enough anymore,’” Blaesi recalls. “Then we thought, ‘Everything has its place.’ This could stay in the stomach for longer because of the expansion. Then it could release the drug over time. We realized this wouldn’t just improve manufacturing, it would improve the product.”

In 2019, Blaesi and Saka published the first paper on their expandable fibrous tablets for prolonged drug delivery. It received a mixed reception.

“Some reviewers said, ‘Research on similar gastroretentive dosage forms has been done for 40 years and no one’s really succeeded,’” Blaesi recalls. “People said, ‘It will never work. Do experiments in animals and then we’ll talk.’”

Blaesi moved back to Switzerland during the Covid-19 pandemic and ran his animal experiments there.

“The reviewers were right: What we had didn’t work,” Blaesi says. “But we adjusted the design and showed we could make the pill stay in the stomach for longer.”

Inside Enzian’s final tablet design, tiny fibers are arranged in a grid. When water flows into the spaces between the fibers, they expand to form a strong gel-like substance that slowly erodes in the stomach, steadily releasing the drug. In animal studies, Enzian’s team showed its technology allowed tablets to remain in the stomach for 12 to 24 hours before being safely excreted.

The team soon found cancer drugs would be a good fit for their technology.

“A lot of cancer drugs are only soluble in acidic solutions, so they can only be absorbed while the drug is in the stomach,” Blaesi explains. “But on an empty stomach, the drug may be in the stomach for just 30 or 40 minutes at present. For a full stomach, it’s a few hours. And because you have a short time to deliver the drug, you need to release a high dose immediately. That shoots up the blood concentration, and if you dose every 12 hours, the concentration is going down during the other 10 hours.”

From the lab to patients

In upcoming human trials, Enzian plans to use its tablets to deliver a drug for prostate cancer that Blaesi says is currently dosed at several hundred milligrams a day. He hopes to get down to about a tenth of that with a better therapeutic effect.

Enzian also believes its technology could improve treatments for blood, skin, and breast cancers.

“This could really be used to improve treatment for a variety of cancers,” Blaesi says. “We believe this is a more efficient and effective way to deliver drugs.”

Maximizing effectiveness and minimizing side effects is also important in clinical trials, where a new drug’s superiority over existing treatments must be shown, and a single adverse event can end its development.

The upcoming move into patients is the culmination of more than a decade of work for Blaesi, who is confident Enzian can deliver on its promise of improving treatments.

“The opportunity is enormous,” Blaesi says. “So many oral cancer drugs have this delivery problem. We still have to do the efficacy and safety studies on patients, but we expect this to be a game changer.”


Five with MIT ties elected to National Academy of Medicine for 2025

Professors Facundo Batista and Dina Katabi, along with three additional MIT alumni, are honored for their outstanding professional achievement and commitment to service.


On Oct. 20 during its annual meeting, the National Academy of Medicine announced the election of 100 new members, including MIT faculty members Dina Katabi and Facundo Batista, along with three additional MIT alumni.

Election to the National Academy of Medicine (NAM) is considered one of the highest honors in the fields of health and medicine, recognizing individuals who have demonstrated outstanding professional achievement and commitment to service.

Facundo Batista is the associate director and scientific director of the Ragon Institute of MGH, MIT and Harvard, as well as the first Phillip T. and Susan M. Ragon Professor in the MIT Department of Biology. The National Academy of Medicine recognized Batista for “his work unraveling the biology of antibody-producing B cells to better understand how our body’s immune system responds to infectious disease.” More recently, Batista’s research has advanced preclinical vaccine and therapeutic development for globally important diseases including HIV, malaria, and influenza.

Batista earned a PhD from the International School of Advanced Studies and established his lab in 2002 as a member of the Francis Crick Institute (formerly the London Research Institute), simultaneously holding a professorship at Imperial College London. In 2016, he joined the Ragon Institute to pursue a new research program applying his expertise in B cells and antibody responses to vaccine development, and preclinical vaccinology for diseases including SARS-CoV-2 and HIV. Batista is an elected fellow or member of the U.K. Academy of Medical Sciences, the American Academy of Microbiology, the Academia de Ciencias de América Latina, and the European Molecular Biology Organization, and he is chief editor of The EMBO Journal.

Dina Katabi SM ’99, PhD ’03 is the Thuan (1990) and Nicole Pham Professor in the Department of Electrical Engineering and Computer Science at MIT. Her research spans digital health, wireless sensing, mobile computing, machine learning, and computer vision. Katabi’s contributions include efficient communication protocols for the internet, advanced contactless biosensors, and novel AI models that interpret physiological signals. The NAM recognized Katabi for “pioneering digital health technology that enables non-invasive, off-body remote health monitoring via AI and wireless signals, and for developing digital biomarkers for Parkinson’s progression and detection. She has translated this technology to advance objective, sensitive measures of disease trajectory and treatment response in clinical trials.”

Katabi is director of the MIT Center for Wireless Networks and Mobile Computing. She is also a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL), where she leads the Networks at MIT Research Group. Katabi received a bachelor’s degree from the University of Damascus and MS and PhD degrees in computer science from MIT. She is a MacArthur Fellow; a member of the American Academy of Arts and Sciences, National Academy of Sciences, and National Academy of Engineering; and a recipient of the ACM Prize in Computing.

Three additional MIT alumni were also elected to the NAM for 2025.

Established originally as the Institute of Medicine in 1970 by the National Academy of Sciences, the National Academy of Medicine addresses critical issues in health, science, medicine, and related policy, and inspires positive actions across sectors.

“I am deeply honored to welcome these extraordinary health and medicine leaders and researchers into the National Academy of Medicine,” says NAM President Victor J. Dzau. “Their demonstrated excellence in tackling public health challenges, leading major discoveries, improving health care, advancing health policy, and addressing health equity will critically strengthen our collective ability to tackle the most pressing health challenges of our time.” 


A “seating chart” for atoms helps locate their positions in materials

The DIGIT imaging tool could enable the design of quantum devices and shed light on atomic-scale processes in cells and tissues.


If you think of a single atom as a grain of sand, then a wavelength of visible light — which is a thousand times larger than the atom’s width — is comparable to an ocean wave. The light wave can dwarf an atom, missing it entirely as it passes by. This gulf in size has long made it impossible for scientists to see and resolve individual atoms using optical microscopes alone.

Only recently have scientists found ways to break this “diffraction limit,” to see features that are smaller than the wavelength of light. With new techniques known as super-resolution microscopy, scientists can see down to the scale of a single molecule.

And yet, individual atoms have still been too small for optical microscopes — which are much simpler and less expensive than super-resolution techniques — to distinguish, until now.

In an open-access paper appearing today in Nature Communications, MIT scientists present a new computational method that enables optical microscopes to resolve individual atoms and zero in on their exact locations in a crystal structure.

The team’s new “discrete grid imaging technique,” or DIGIT, is a computational imaging approach that scientists can apply to optical data to calculate the most probable location of individual atoms based on a very important clue: the material’s known atomic configuration. As long as scientists have an idea of what a material’s physical atomic layout should be, they can use this layout as a sort of map to determine where specific atoms or features must be located.

“It’s like you know there’s a seating chart,” says lead author Yuqin “Sophia” Duan, a graduate student in MIT’s Department of Electrical Engineering and Computer Science (EECS). “Previous methods could tell you what section an atom is in. But now we can take this seating chart as prior knowledge, and can pinpoint exactly which seat the atom is in.”

With DIGIT, the team can now pinpoint individual atoms with a resolution of 0.178 angstroms. (One angstrom is one-tenth of a nanometer, less than half the width of a single atom.) The technique enables optical microscopes to localize atomic-scale features in any material that has a known atomic pattern, such as crystalline materials or certain proteins with repeating molecular chains.

The team says the method could help guide the design of quantum devices, which often require placing individual atoms precisely within a crystal. Beyond quantum technologies, DIGIT can also provide new insights into how defects and impurities shape the behavior of advanced materials — from semiconductors to superconductors.

Duan’s co-authors at MIT are Qiushi Gu, Hanfeng Wang, Yong Hu, Kevin Chen, Matthew Trusheim, and EECS Professor Dirk Englund.

Grid support

Scientists can image features smaller than a nanometer, and sometimes as small as a single atom, but not with optical microscopes. In these cases, they use transmission or scanning electron microscopes, which send high-energy beams of electrons into a sample to generate an image based on the pattern in which the electrons scatter. These electron-based methods produce highly detailed, near-atomic-scale images, but they require imaging in a vacuum and at high energies, and only work in ultrathin, synthetic, or solid-state materials. Electron-based imaging methods are too harsh for more delicate living specimens.

In contrast, optical microscopes work at lower energies, in ambient conditions, and are safe to apply to biological samples. But they cannot discern features past the diffraction limit. Essentially, a microscope is unable to see features that are smaller than half the wavelength of the visible light (about 200 to 300 nanometers) it sends in to probe a sample. Atoms, then, have long eluded optical microscopes.

In 2014, however, the Nobel Prize in Chemistry was awarded to developers of a technique to overcome the diffraction limit. Super-resolution microscopy works by shining laser light on a sample at a specific frequency that is known to resonate with a feature of interest, such as a certain molecule. When that molecule resonates, it effectively announces its presence in the material. With this optical manipulation, scientists can visualize features as small as 10 nanometers, on the scale of a single molecule.
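
The statistical idea behind localization-based super-resolution (a generic sketch, not tied to this study’s methods): although each detected photon lands somewhere in a diffraction-blurred spot hundreds of nanometers wide, averaging many photons pins down the spot’s center to roughly the spot width divided by the square root of the photon count:

```python
import numpy as np

# Generic illustration of localization-microscopy statistics (not from the
# paper): a point emitter is blurred into a ~250 nm spot, but the center of
# the spot can be estimated to ~ width / sqrt(N) with N detected photons.
rng = np.random.default_rng(7)
sigma_nm = 250.0   # diffraction-limited spot width (standard deviation)
true_pos = 123.0   # emitter position along one axis (nm)

for n_photons in (100, 10_000):
    photons = rng.normal(true_pos, sigma_nm, size=n_photons)
    estimate = photons.mean()   # centroid estimate of the spot center
    print(f"N={n_photons:>6}: estimate {estimate:7.1f} nm, "
          f"theoretical precision ~{sigma_nm / np.sqrt(n_photons):.1f} nm")
```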

Duan and Englund looked to resolve even smaller features by combining super-resolution techniques with statistical analysis and knowledge of materials that has often been overlooked.

“One thing that gets ignored in imaging optical systems is the physical configuration of your system,” Duan says. “For example, if you want to visualize defects in a diamond system, these defects can only be at certain positions, since they have to follow the grid of the atomic diamond structure. In proteins, there are some structures that grow in an organized grid, and their location must be somewhere along that physical grid.”

The researchers suspected that if they had a reasonably accurate map of a material’s atomic structure (imagine the ball-and-stick models of molecules in a chemistry classroom), they might use such maps as a template and try out many different orientations and rotation angles to find the closest match to whatever features are initially visualized using super-resolution microscopy.

“No one has ever done this before, to include the physical constraints or system information into the resolution technique,” Duan says.

Blurriness, collapsed

To test their idea, the researchers worked with a sample of diamond — a crystal whose microstructure is well-understood and resembles an organized grid, or lattice, of repeating carbon atoms. The researchers blindly knocked out some carbon atoms in the lattice and replaced them with silicon atoms using facilities at MIT.nano. Their goal was to identify and determine the precise locations of the errant silicon atoms.

To do so, they first used established techniques of super-resolution microscopy to probe the diamond sample, using lasers set to specific wavelengths at frequencies known to resonate with the silicon atoms but not the carbon atoms. With this technique, researchers produced images that depicted the silicon atoms, but only as a uniform blur.

The team then applied DIGIT to further resolve the picture. Knowing that diamond in general has a grid-like configuration of carbon atoms, the researchers took this configuration as a map, or seating chart of sorts, and assumed that any silicon atoms that took the place of a carbon atom must sit within the grid, which has a known spacing between atoms.

“Because the silicon atoms are substituting carbon atoms in the lattice, that means they must obey some integer multiple of the atomic spacing of the crystal lattice, separating any two silicon atoms,” Englund says. “That prior knowledge makes the localization different than if you had a purely amorphous material.”

The researchers essentially simulated many possibilities of orientations and rotation angles of the diamond lattice, superimposed on the blurry image of atoms that the super-resolution microscopy technique produced.

“The trick is that, in certain materials, atoms aren’t spread out randomly — they sit on a grid inside a crystal,” Duan explains. “We used that prior knowledge to sharpen the microscope’s picture. Once we factored in that ‘atomic grid,’ the blurriness collapsed, and we could pinpoint exact positions.”
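
A minimal sketch of that kind of grid-constrained fit (our toy illustration of the general idea; the actual DIGIT code is on GitHub): given noisy localizations of emitters known to sit on a square lattice of known spacing, search over the lattice rotation, then snap each measurement to the nearest lattice site. Lattice offset is omitted here for brevity but is handled the same way:

```python
import numpy as np

# Toy grid-constrained localization (illustrative only; not the DIGIT code).
# Emitters sit on a square lattice of known spacing; measurements are noisy.
rng = np.random.default_rng(0)
spacing = 1.0
true_theta = 0.3   # unknown lattice rotation (radians)

def R(theta):
    """2D rotation matrix."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

sites = np.array([[i, j] for i in range(5) for j in range(5)], float) * spacing
truth = sites @ R(true_theta).T                          # rotated lattice sites
meas = truth + rng.normal(scale=0.15, size=truth.shape)  # "blurry" positions

def cost(theta):
    """Total squared distance from measurements to the nearest lattice site,
    after rotating the measurements back into the lattice frame."""
    pts = meas @ R(theta)          # row-vector convention: rotates by -theta
    return np.sum((pts - np.round(pts / spacing) * spacing) ** 2)

thetas = np.linspace(0.0, np.pi / 2, 2000)   # square lattice: 90-deg symmetry
best = thetas[np.argmin([cost(th) for th in thetas])]

# Snap each noisy localization to its most probable lattice site.
pts = meas @ R(best)
snapped = (np.round(pts / spacing) * spacing) @ R(best).T  # back to lab frame

print(f"recovered rotation {best:.3f} rad (true {true_theta})")
print(f"rms error, raw:     {np.sqrt(np.mean((meas - truth) ** 2)):.4f}")
print(f"rms error, snapped: {np.sqrt(np.mean((snapped - truth) ** 2)):.4f}")
```

In this 25-site toy, the snapping step drives the error essentially to zero whenever the noise is small compared to the lattice spacing, mirroring how the blurry super-resolution estimates “collapse” onto the crystal grid.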

In the end, they found the technique could pinpoint the location of individual silicon atoms within the diamond lattice, with a precision of 0.178 angstroms — the sharpest resolution of any optical-based imaging technique. The team has made the DIGIT code available on GitHub for anyone to apply to their optical measurements, provided their sample of interest has a well-understood atomic structure. Then, they hope that scientists will start to see much finer and detailed features and processes using light.

“It’s a big step — it takes optical microscopes into the realm of atomic scale, something people thought only electron microscopes or X-rays could do,” Duan says. “That opens up a whole new way of studying materials and biology.”


Charts can be social artifacts that communicate more than just data

Researchers find that design elements of data visualizations influence viewers’ assumptions about the source of the information and its trustworthiness.


The degree to which someone trusts the information depicted in a chart can depend on their assumptions about who made the data visualization, according to a pair of studies by MIT researchers.

For instance, if someone infers that a graph about a controversial topic like gun violence was produced by an organization they feel is in opposition to their beliefs or political views, they may discredit the information or dismiss the visualization altogether.

The researchers found that even the clearest visualizations often communicate more than the data they explicitly depict, and can elicit strong judgments from viewers about the social contexts, identities, and characteristics of those who made the chart.

Readers make these assessments about the social context of a visualization primarily from its design features, like the color palette or the way information is arranged, rather than the underlying data. Often, these inferences are unintended by the designers.

Qualitative and quantitative studies revealed that these social inferences aren’t restricted to certain subgroups, nor are they caused by limited data literacy.

The researchers consolidate their findings into a framework that scientists and communicators can use to think critically about how design choices might affect these social assumptions. Ultimately, they hope this work leads to better strategies for scientific communication.

“If you are scrolling through social media and you see a chart, and you immediately dismiss it as something an influencer has produced just to get attention, that shapes your entire experience with the chart before you even dig into the data. We’ve shown in these papers that visualizations do more than just communicate the data they are depicting — they also communicate other social signals,” says Arvind Satyanarayan, an associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS) and member of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-senior author of this research.

He is joined on the paper by co-lead authors Amy Rae Fox, a former CSAIL postdoc, and Michelle Morgenstern, a current postdoc in MIT’s anthropology program; and co-senior author Graham M. Jones, professor of anthropology. Two related papers on this research will be presented at the IEEE Visualization Conference.

Charts as social artifacts

During the height of the Covid-19 pandemic, social media was awash in charts from organizations like the World Health Organization and Centers for Disease Control and Prevention, which were designed to convey information about the spread of disease.

The MIT researchers studied how these visualizations were being used to discuss the pandemic. They found that some citizen scientists were using the underlying data to make visualizations of their own, challenging the findings of mainstream science.

“This was an unexpected discovery as, previously, citizen scientists were typically aligned with mainstream scientists. It took us a few years to figure out how to study this phenomenon more deeply,” Satyanarayan says.

Most research into data visualization studies how charts communicate data. Instead, the researchers wanted to explore visualizations from a social and linguistic perspective to assess the information they convey beyond the data.

Linguistic anthropologists have found that, while language allows people to communicate ideas, it also holds social meaning beyond the words people use. For instance, an accent or dialect can indicate that someone is part of a particular community.

By “pointing” to certain social meanings, identities, and characteristics, language serves what is known as a socio-indexical function.

“We wanted to see if things in the visual language of data communication might point to certain institutions, or the kinds of people in those institutions, that carry a meaning that could be unintended by the makers of the visualization,” Jones says.

To do this, the researchers conducted an initial, qualitative study of users on the social media platform Tumblr. During one-on-one interviews, the researchers showed users a variety of real visualizations from online sources, as well as modified visualizations where they removed the textual information, like titles and axes labels.

Stripping out the textual information was particularly important, since it mimics the way people often interact with online visualizations.

“Our engagement with social media is a few quick seconds. People aren’t taking the time to read the title of a chart or look at the data very carefully,” Satyanarayan says.

The interviews revealed that users made detailed inferences about the people or organizations who created the visualizations based on what they called “vibes”: design elements like colors or the use of certain graphics. These inferences in turn impacted their trust in the data.

For instance, after seeing a chart with the flags of Georgia and Texas and a graph with two lines in red and black, but no text, one user said, “This kind of looks like something a Texas Republican (legislator) would put on Twitter or on their website, or as part of a campaign presentation.”

A quantitative approach

Building on this initial work, the researchers used the same methodology in three quantitative studies involving surveys sent to larger groups of people from a variety of backgrounds.

They found the same phenomenon: People make inferences about the social context of a visualization based on its design, which can lead to misunderstandings about, and mistrust in, the data it depicts.

For instance, users felt some visualizations were so neatly arranged they believed them to be advertisements, and therefore not trustworthy. In another example, one user dismissed a chart by a Pulitzer-prize winning designer because they felt the hand-drawn graphical style indicated it was made by “some female Instagram influencer who is just trying to look for attention.”

“If that is the first reaction someone has to a chart, it is going to massively impact the degree to which they trust it,” Satyanarayan says.

Moreover, when the researchers reintroduced text in the visualizations from which it had been removed, users still made these social inferences.

Typically, in data visualization, the solution to such a problem would be to create clearer charts or educate people about data literacy. But this research points to a completely different kind of data literacy, Jones says.

“It is not erroneous for people to be drawing these inferences. It requires a lot of cultural knowledge about where visualizations come from, how they are made, and how they circulate. Drawing these inferences is a feature, not a bug, of the way we use signs,” he says.

From these results, they created a classification framework to organize the social inferences users made and the design elements that contributed to them. They hope the typology serves as a tool designers can use to develop more effective visualizations, as well as a starting point for additional studies.

Moving forward, the researchers want to continue exploring the role of data visualizations as social artifacts, perhaps by drilling down on each design feature they identified in the typology. They also want to expand the scope of their study to include visualizations in research papers and scientific journals.

“Part of the value of this work is a methodological contribution to render a set of phenomena amenable to experimental study. But this work is also important because it showcases an interdisciplinary cross-pollination that is powerful and unique to MIT,” Jones says.

This work was supported, in part, by MIT METEOR and PFPFEE fellowships, an Amar G. Bose Fellowship, an Alfred P. Sloan Fellowship, and the National Science Foundation.


The student becomes the teacher

Titus Roesler was ready to drop his class in signal processing. Now, he hopes to become an expert in the field.


Coming from a small high school in rural South Dakota that didn’t offer advanced placement (AP) classes, Titus Roesler ’25 didn’t have the easiest start at MIT. But when his efforts to catch up academically to his peers led to a job as a teaching assistant, it changed everything.

Roesler, who graduated last spring with a bachelor’s degree in electrical engineering and is now working on a master’s, has built a reputation for himself as a student-teacher at MIT. Since discovering his affinity for teaching and mentoring, he’s been a teaching assistant for four different classes and designed two seminars from scratch.

Through teaching, Roesler has not only helped other students, but also improved his own grasp of complex subjects. That includes signal processing, which involves manipulating signals, such as radio waves, to make them more useful for applications like wireless communications. He has become fascinated by the topic and hopes to continue working in the field.

Roesler lights up when talking about teaching, but he didn’t always think it was in the cards.

“I don't know that anyone who knew me pre-MIT would believe that I do things like give recitations to crowded rooms, because I think everyone thought, ‘Titus is that quiet kid, he never talked at all.’”

Learning through teaching

Growing up in Marion, South Dakota, a town with a population around 800, Roesler didn’t have MIT on his radar, but he knew he liked math. His high school capstone project involved helping classmates with the math section of the ACT, and he tutored several of them. His teacher let him teach trigonometry one day, and he toured local colleges with the plan of becoming a high school math teacher.

But that changed after he self-studied calculus through MIT’s OpenCourseWare offerings and set his sights on the Institute.

Roesler worked overtime during his first year at MIT to catch up with what his peers had learned back in high school. On his first physics exam, he answered only one question correctly — a multiple-choice question he had guessed on. But MIT’s Experimental Study Group (ESG) kept him afloat during his first year, and it quickly led to more opportunities.

When, in the spring of his first year, his multivariable calculus instructor asked him to stay after class one day, Roesler was sure he was in trouble. She actually wanted to see if he could TA for her next year.

“I was flattered because there was still a month left in the class. Plenty of time for me to fail,” Roesler jokes.

He loved the job. During a Friday night office hour session, he stayed extra hours to help a student in whom he saw a lot of himself — someone who was also from a rural background and had also entered MIT without a strong mathematics background. He went on to become the student’s tutor. The position gave him the opportunity to be the teacher he’d always wanted to have.

As a TA, “I wasn't coming at things from the perspective of ‘Everyone already knows A, B, C’ before I explained. I would always try to start from the ground up and give my perspective on it,” Roesler says.

From his mentorship and teaching work, he received the Undergraduate Teaching Award from the Department of Electrical Engineering and Computer Science and the Outstanding Associate Advisor Award from the Office of the First Year. After joining ESG during his first year, Roesler stayed on as an associate advisor in the learning community for the next three years. His work earned him the Fiekowsky Award for Excellence in Teaching and the Fiekowsky Award for Community Service.

The right blend

Signal processing, the focus of his graduate work, “is where calculus, geometry, linear algebra, probability, statistics, algorithms, and numerical analysis all come into play on practical problems of real-world interest,” Roesler says. “For me, it’s the right blend of theory and application.”

Due to the field’s wide scope, Roesler notices potential applications for signal processing everywhere, and how different fields intersect within the discipline. “Everything comes together in just the right way,” he says.

He is especially interested in signal-processing problems such as source separation, which aims to recover a set of source signals from a set of mixed signals. During his senior year, he spent two semesters on a project where he wrote a Python program to separate harmonies in Bach chorales.
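
A toy version of the idea (not Roesler’s chorale program; real voices overlap in frequency and need subtler methods): when two sources occupy different frequency bands, a mixture can be pulled apart with masks in the frequency domain:

```python
import numpy as np

# Toy source separation (illustrative only): two tones mixed into one
# signal, recovered by masking their bands in the frequency domain.
fs = 8000                                   # sample rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
low = np.sin(2 * np.pi * 220 * t)           # lower "voice" at 220 Hz
high = np.sin(2 * np.pi * 880 * t)          # higher "voice" at 880 Hz
mixed = low + high

spectrum = np.fft.rfft(mixed)
freqs = np.fft.rfftfreq(len(mixed), 1.0 / fs)
low_hat = np.fft.irfft(np.where(freqs < 500, spectrum, 0), n=len(mixed))
high_hat = np.fft.irfft(np.where(freqs >= 500, spectrum, 0), n=len(mixed))

print(f"low-voice max error:  {np.abs(low_hat - low).max():.2e}")
print(f"high-voice max error: {np.abs(high_hat - high).max():.2e}")
```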

For his master’s degree, following a summer research internship at MIT Lincoln Laboratory, Roesler has stayed at the laboratory, this time venturing into high-frequency radio communications. He’s currently working on a research project that applies the theory of compressed sensing (which states that, under certain conditions, it is possible to reconstruct signals from very few measurements) to communications.

What fascinates Roesler are “something-from-nothing” problems.

“The kind of problems I’m interested in are underdetermined, inverse problems,” he says. For example, imagine trying to reconstruct a full image from only a handful of pixels. While on the surface this seems impossible, researchers have recovered quality images by applying the techniques of compressed sensing.
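
Here is a minimal compressed-sensing demonstration (a generic sketch of the textbook setup, unrelated to the Lincoln Laboratory project): a signal with only a few nonzero entries is recovered exactly from far fewer random measurements than its length, using greedy orthogonal matching pursuit:

```python
import numpy as np

# Toy compressed sensing: recover a k-sparse signal x (length n) from
# m << n random linear measurements y = A @ x, via orthogonal matching
# pursuit (OMP). Illustrative textbook setup only.
rng = np.random.default_rng(1)
n, m, k = 256, 64, 5
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)  # sparse ground truth
A = rng.normal(size=(m, n)) / np.sqrt(m)                 # random sensing matrix
y = A @ x

def omp(A, y, k):
    """Greedy OMP: pick the column most correlated with the residual,
    refit a least-squares solution on the chosen support, repeat."""
    support, residual = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, k)
print(f"recovered {np.count_nonzero(np.abs(x_hat) > 1e-8)} nonzeros; "
      f"max error {np.abs(x_hat - x).max():.2e}")
```

Sparsity is the “certain conditions” doing the work here: with only k nonzeros and randomly mixed measurements, m can be far smaller than n and recovery still succeeds with high probability.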

Running and serving

Roesler has also spent extensive time running, a sport he’s loved since fifth grade. In 2023, he raced a marathon in 2 hours and 46 minutes and went on to run the Boston Marathon in both 2024 and 2025. To prepare, he spent a lot of time reading up on the psychology of running, which he says was the first time he used the scientific method. Now, he just runs for fun and uses it as a way to focus and collect his thoughts.

He has also served on the executive team of the Undergraduate Mathematics Association, as a resident peer mentor at Baker House, and as a tutor for two classes. At the PKG Center, he’s been a program lead and counselor for its pre-orientation program.

Roesler still gets excited about seeing the impact of his teaching. At the end of one semester teaching a tutorial, he took his class on a picnic. They surprised him with a card and a bag of goodies. 

Recalling the moment, he says: “I thought, How does it get better? It was wonderful.”


Neural activity helps circuit connections mature into optimal signal transmitters

Scientists identified how circuit connections in fruit flies tune to the right size and degree of signal transmission capability. Understanding this could lead to a way to tweak abnormal signal transmission in certain disorders.


Nervous system functions, from motion to perception to cognition, depend on the active zones of neural circuit connections, or “synapses,” sending out the right amount of their chemical signals at the right times. By tracking how synaptic active zones form and mature in fruit flies, researchers at The Picower Institute for Learning and Memory at MIT have revealed a fundamental model for how neural activity during development builds properly working connections.

Understanding how that happens is important, not only for advancing fundamental knowledge about how nervous systems develop, but also because many disorders such as epilepsy, autism, or intellectual disability can arise from aberrations of synaptic transmission, says senior author Troy Littleton, the Menicon Professor in The Picower Institute and MIT’s Department of Biology. The new findings, funded in part by a 2021 grant from the National Institutes of Health, provide insights into how active zones develop the ability to send neurotransmitters across synapses to their circuit targets. It’s not instant or predestined, the study shows. It can take days to fully mature, and that is regulated by neural activity.

If scientists can fully understand the process, Littleton says, then they can develop molecular strategies to intervene to tweak synaptic transmission when it’s happening too much or too little in disease.

“We’d like to have the levers to push to make synapses stronger or weaker, that’s for sure,” Littleton says. “And so knowing the full range of levers we can tug on to potentially change output would be exciting.”

Littleton Lab research scientist Yuliya Akbergenova led the study published Oct. 14 in the Journal of Neuroscience.

How newborn synapses grow up

In the study, the researchers examined neurons that send the neurotransmitter glutamate across synapses to control muscles in the fly larvae. To study how the active zones in the animals matured, the scientists needed to keep track of their age. That hasn’t been possible before, but Akbergenova overcame the barrier by cleverly engineering the fluorescent protein mMaple, which changes its glow from green to red when zapped with 15 seconds of ultraviolet light, into a component of the glutamate receptors on the receiving side of the synapse. Then, whenever she wanted, she could shine light and all the synapses already formed before that time would glow red, and any new ones that formed subsequently would glow green.

With the ability to track each active zone’s birthday, the authors could then document how active zones developed their ability to increase output over the course of days after birth. The researchers actually watched as synapses were built over many hours by tagging each of eight kinds of proteins that make up an active zone. At first, the active zones couldn’t transmit anything. Then, as some essential early proteins accumulated, they could send out glutamate spontaneously, but not if evoked by electrical stimulation of their host neuron (simulating how that neuron might be signaled naturally in a circuit). Only after several more proteins arrived did active zones possess the mature structure for calcium ions to trigger the fusion of glutamate vesicles to the cell membrane for evoked release across the synapse.

Activity matters

Of course, construction does not go on forever. At some point, the fly larva stops building one synapse and then builds new ones further down the line as the neuronal axon expands to keep up with growing muscles. The researchers wondered whether neural activity had a role in driving that process of finishing up one active zone and moving on to build the next.

To find out, they employed two different interventions to block active zones from being able to release glutamate, thereby preventing synaptic activity. Notably, one of the methods they chose was blocking the action of a protein called Synaptotagmin 1. That’s important because mutations that disrupt the protein in humans are associated with severe intellectual disability and autism. Moreover, the researchers tailored the activity-blocking interventions to just one neuron in each larva because blocking activity in all their neurons would have proved lethal.

In neurons where the researchers blocked activity, they observed two consequences: the neurons stopped building new active zones and instead kept making existing active zones larger and larger. It was as if the neuron could tell the active zone wasn’t releasing glutamate and tried to make it work by giving it more protein material to work with. That effort came at the expense of starting construction on new active zones.

“I think that what it’s trying to do is compensate for the loss of activity,” Littleton says.

Testing indicated that the enlarged active zones the neurons built in hopes of restarting activity were functional (or would have been, had the researchers not been artificially blocking them). This suggested that the neuron likely senses the absence of glutamate release through a feedback signal from the muscle side of the synapse. To test that idea, the scientists knocked out a glutamate receptor component in the muscle; when they did, the neurons no longer made their active zones larger.

Littleton says the lab is already looking into the new questions the discoveries raise. In particular: What are the molecular pathways that initiate synapse formation in the first place, and what are the signals that tell an active zone it has finished growing? Finding those answers will bring researchers closer to understanding how to intervene when synaptic active zones aren’t developing properly.

In addition to Littleton and Akbergenova, the paper’s other authors are Jessica Matthias and Sofya Makeyeva.

In addition to the National Institutes of Health, The Freedom Together Foundation provided funding for the study.


Creating AI that matters

How the MIT-IBM Watson AI Lab is shaping AI-sociotechnical systems for the future.


When it comes to artificial intelligence, MIT and IBM were there at the beginning: laying foundational work and creating some of the first programs — AI predecessors — and theorizing how machine “intelligence” might come to be.

Today, collaborations like the MIT-IBM Watson AI Lab, which launched eight years ago, are continuing to deliver expertise for the promise of tomorrow’s AI technology. This is critical for industries and the labor force that stand to benefit, particularly in the short term: from $3 trillion to $4 trillion in forecast global economic benefits and 80 percent productivity gains for knowledge workers and creative tasks, to significant incorporation of generative AI into business processes (80 percent) and software applications (70 percent) over the next three years.

While industry has seen a boom in notable models, chiefly in the past year, academia continues to drive innovation, contributing most of the highly cited research. At the MIT-IBM Watson AI Lab, success takes the form of 54 patent disclosures, more than 128,000 citations with an h-index of 162, and more than 50 industry-driven use cases. Some of the lab’s many achievements include improved stent placement with AI imaging techniques, reduced computational overhead, smaller models that maintain performance, and modeling of interatomic potentials for silicate chemistry.

“The lab is uniquely positioned to identify the ‘right’ problems to solve, setting us apart from other entities,” says Aude Oliva, lab MIT director and director of strategic industry engagement in the MIT Schwarzman College of Computing. “Further, the experience our students gain from working on these challenges for enterprise AI translates to their competitiveness in the job market and the promotion of a competitive industry.”

“The MIT-IBM Watson AI Lab has had tremendous impact by bringing together a rich set of collaborations between IBM and MIT’s researchers and students,” says Provost Anantha Chandrakasan, who is the lab’s MIT co-chair and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “By supporting cross-cutting research at the intersection of AI and many other disciplines, the lab is advancing foundational work and accelerating the development of transformative solutions for our nation and the world.”

Long-horizon work

As AI continues to garner interest, many organizations struggle to channel the technology into meaningful outcomes. A 2024 Gartner study finds that, “at least 30% of generative AI projects will be abandoned after proof of concept by the end of 2025,” demonstrating widespread ambition and hunger for AI, but a lack of knowledge about how to develop and apply it to create immediate value.

Here, the lab shines, bridging research and deployment. The majority of the lab’s current-year research portfolio is aligned to use and develop new features, capabilities, or products for IBM, the lab’s corporate members, or real-world applications. The last of these comprises large language models, AI hardware, and foundation models, including multimodal, biomedical, and geospatial ones. Inquiry-driven students and interns are invaluable in this pursuit, offering enthusiasm and new perspectives while accumulating the domain knowledge to help derive and engineer advancements in the field, as well as opening up new frontiers for exploration with AI as a tool.

Findings from the AAAI 2025 Presidential panel on the Future of AI Research support the need for contributions from academia-industry collaborations like the lab in the AI arena: “Academics have a role to play in providing independent advice and interpretations of these results [from industry] and their consequences. The private sector focuses more on the short term, and universities and society more on a longer-term perspective.”

Bringing these strengths together, along with the push for open sourcing and open science, can spark innovation that neither could achieve alone. History shows that embracing these principles, and sharing code and making research accessible, has long-term benefits for both the sector and society. In line with IBM and MIT’s missions, the lab contributes technologies, findings, governance, and standards to the public sphere through this collaboration, thereby enhancing transparency, accelerating reproducibility, and ensuring trustworthy advances.

The lab was created to merge MIT’s deep research expertise with IBM’s industrial R&D capacity, aiming for breakthroughs in core AI methods and hardware, as well as new applications in areas like health care, chemistry, finance, cybersecurity, and robust planning and decision-making for business.

Bigger isn’t always better

Today, large foundation models are giving way to smaller, more task-specific models that yield better performance. Contributions from lab members like Song Han, associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS), and IBM Research’s Chuang Gan help make this possible, through work such as Once-for-All and AWQ. Innovations such as these improve efficiency with better architectures, algorithmic shrinking, and activation-aware weight quantization, letting models for tasks like language processing run on edge devices at faster speeds and reduced latency.
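As a rough illustration of what weight quantization buys, consider mapping float32 weights to int8 with one scale per output channel. This is a generic sketch of the idea, not the AWQ algorithm itself, which additionally uses activation statistics to decide which weight channels to protect:

```python
# Generic per-channel int8 weight quantization (illustrative sketch only):
# shrinks weight storage roughly 4x relative to float32.
import numpy as np

def quantize_per_channel(W):
    """Symmetric int8 quantization with one scale per output channel (row)."""
    # Guard against all-zero rows to avoid division by zero.
    scales = np.maximum(np.abs(W).max(axis=1, keepdims=True), 1e-12) / 127.0
    W_q = np.clip(np.round(W / scales), -127, 127).astype(np.int8)
    return W_q, scales

def dequantize(W_q, scales):
    return W_q.astype(np.float32) * scales

W = np.random.default_rng(0).standard_normal((4, 8)).astype(np.float32)
W_q, scales = quantize_per_channel(W)
print("max abs error:", np.abs(dequantize(W_q, scales) - W).max())
```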

Consequently, foundation, vision, multimodal, and large language models have seen benefits, allowing the lab research groups of Oliva, MIT EECS Associate Professor Yoon Kim, and IBM Research members Rameswar Panda, Yang Zhang, and Rogerio Feris to build on the work. This includes techniques to imbue models with external knowledge and the development of linear attention transformer methods for higher throughput compared to other state-of-the-art systems.

Understanding and reasoning in vision and multimodal systems have also seen a boost. Works like “Task2Sim” and “AdaFuse” demonstrate how vision model performance can improve when pre-training takes place on synthetic data, and how video action recognition can be boosted by fusing channels from past and current feature maps.

As part of a commitment to leaner AI, the lab teams of Gregory Wornell, the MIT EECS Sumitomo Electric Industries Professor in Engineering, IBM Research’s Chuang Gan, and David Cox, VP for foundational AI at IBM Research and the lab’s IBM director, have shown that model adaptability and data efficiency can go hand in hand. Two approaches, EvoScale and Chain-of-Action-Thought reasoning (COAT), enable language models to make the most of limited data and computation by improving on prior generation attempts through structured iteration, narrowing in on a better response. COAT uses a meta-action framework and reinforcement learning to tackle reasoning-intensive tasks via self-correction, while EvoScale brings a similar philosophy to code generation, evolving high-quality candidate solutions. These techniques help to enable resource-conscious, targeted, real-world deployment.

“The impact of MIT-IBM research on our large language model development efforts cannot be overstated,” says Cox. “We’re seeing that smaller, more specialized models and tools are having an outsized impact, especially when they are combined. Innovations from the MIT-IBM Watson AI Lab help shape these technical directions and influence the strategy we are taking in the market through platforms like watsonx.”

For example, numerous lab projects have contributed features, capabilities, and uses to IBM’s Granite Vision, which provides impressive computer vision designed for document understanding, despite its compact size. This comes at a time when there’s a growing need for extraction, interpretation, and trustworthy summarization of information and data contained in long formats for enterprise purposes.

Other achievements that extend beyond direct research on AI and across disciplines are not only beneficial, but necessary for advancing the technology and lifting up society, concludes the 2025 AAAI panel.

Work from the lab’s Caroline Uhler and Devavrat Shah — both Andrew (1956) and Erna Viterbi Professors in EECS and the Institute for Data, Systems, and Society (IDSS) — along with IBM Research’s Kristjan Greenewald, transcends specializations. They are developing causal discovery methods to uncover how interventions affect outcomes and to identify which interventions achieve desired results. The studies include a framework that can elucidate how “treatments” may play out for different sub-populations, such as interventions on an e-commerce platform or mobility restrictions on morbidity outcomes. Findings from this body of work could influence fields from marketing and medicine to education and risk management.

“Advances in AI and other areas of computing are influencing how people formulate and tackle challenges in nearly every discipline. At the MIT-IBM Watson AI Lab, researchers recognize this cross-cutting nature of their work and its impact, interrogating problems from multiple viewpoints and bringing real-world problems from industry, in order to develop novel solutions,” says Dan Huttenlocher, MIT lab co-chair, dean of the MIT Schwarzman College of Computing, and the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science.

A significant piece of what makes this research ecosystem thrive is the steady influx of student talent and their contributions through MIT’s Undergraduate Research Opportunities Program (UROP), the MIT EECS 6-A Program, and the new MIT-IBM Watson AI Lab Internship Program. Altogether, more than 70 young researchers have not only accelerated their technical skill development but, through guidance and support from the lab’s mentors, gained knowledge in AI domains to become emerging practitioners themselves. This is why the lab continually seeks to identify promising students at all stages in their exploration of AI’s potential.

“In order to unlock the full economic and societal potential of AI, we need to foster ‘useful and efficient intelligence,’” says Sriram Raghavan, IBM Research VP for AI and IBM chair of the lab. “To translate AI promise into progress, it’s crucial that we continue to focus on innovations to develop efficient, optimized, and fit-for-purpose models that can easily be adapted to specific domains and use cases. Academic-industry collaborations, such as the MIT-IBM Watson AI Lab, help drive the breakthroughs that make this possible.”


Over 1,000 MIT students inspired to work toward climate solutions

Incoming students tested the climate simulation tool En-ROADS with the goal of creating “a healthier, safer, more prosperous, and more sustainable world.”


Recently, more than 1,000 MIT students stepped into the shoes of global decision-makers by trying out En-ROADS, a simulation tool developed to test climate policies, explore solutions, and envision a cleaner and safer environmental future.

MIT is committed to climate action, and this year’s new student orientation showcased that commitment. For the first time ever, incoming Leaders for Global Operations (LGO), Executive MBA, Sloan Fellows MBA, MBA, and undergraduate students all explored the capabilities of En-ROADS.

“The goal is for MIT to become one of the world’s most prolific, collaborative, and interdisciplinary sources of technological, behavioral, and policy solutions for the global climate challenge over the next decade,” MIT Provost Anantha P. Chandrakasan told an audience of about 300 undergraduates from the Class of 2029. “It is something we need to do urgently, and today is your opportunity to play a role in that bold mission.”

Connecting passion with science for change

In group workshop sessions, students collaborated to create a world in which global warming stays well below 2 degrees Celsius above preindustrial levels — the goal of the 2015 Paris Agreement. Backed by the latest science, the En-ROADS simulator let them explore firsthand how policies like carbon pricing and clean energy investments affect our climate, economy, and health. Over 450 incoming MBA students even role-played as delegates at a global climate summit, tasked with negotiating a global agreement to address the harm caused by climate change.

For first-year MBA student Allison Somuk, who played the role of President Xi Jinping of China, the workshop was not only eye-opening about climate, but also altered how she plans to approach her future work and advocacy.

“Before the simulation, I didn’t have data on climate change, so I was surprised to see how close we are to catastrophic temperature increases. What surprised me most was how difficult it was to slow that trajectory. It required significant action and compromise from nearly every sector, not just a few. As someone passionate about improving maternal health care in developing nations, my view of contributing factors has broadened. I now see how maternal health may be affected by a larger system where climate policy decisions directly affect women’s health outcomes.”

MIT Sloan Research Affiliate Andrew Jones, who is also executive director and co-founder of Climate Interactive and co-creator of the En-ROADS tool, presented several sessions during orientation. Looking back on the week, he found the experience personally rewarding.  

“Engaging with hundreds of students, I was inspired by the powerful alignment between their passion for climate action and MIT’s increased commitment to delivering on climate goals. This is a pivotal moment for breakthroughs on our campus.”

Other presenters included Jennifer Graham, MIT Sustainability Initiative senior associate director; Jason Jay, MIT Sustainability Initiative director; Krystal Noiseux, MIT Climate Pathways Project associate director; Bethany Patten, MIT Climate Policy Center executive director; and John Sterman, Jay W. Forrester Professor of Management, professor in the MIT Institute for Data, Systems, and Society, and director of the MIT System Dynamics Group.

Chris Rabe, the MIT Climate Project’s Education Program director, was impressed, but not surprised, by how much students learned so quickly as they worked together to solve the problem with En-ROADS.

“By integrating reflection, emotional dynamics, multi-generational perspectives, group work, and inquiry, the En-ROADS simulation provides an ideal foundation for first-year students to explore the breadth of climate and sustainability opportunities at MIT. In the process, students came to recognize the many levers and multi-solving approaches required to address the complex challenges of climate change.”

Inspiring climate leaders

The En-ROADS workshops were a true team effort, made possible with the help of senior staff at MIT Sloan School of Management and the MBA program office, and members of the MIT Sloan Sustainability Initiative, Climate Pathways Project, Climate Policy Center, the Climate Project, Office of the First Year, and entire undergraduate Orientation team.

“Altogether, over a thousand of the newest members of the MIT community have now had a chance to learn for themselves about the climate crisis,” says Sterman, “and what we can do to create a healthier, safer, more prosperous, and more sustainable world — and how they can get involved to bring that world into being, even as first-year undergrads and MBAs.” 

By the end of the workshops, the students’ spirits were buoyed. They all had successfully found ways to keep global warming below 2 degrees Celsius. When asked, “What would you love about being part of this new future you’ve created?,” students responded with a more positive, optimistic word cloud.

First-year MBA student Ruby Eisenbud sums up the sentiment many new MIT students came away with after their workshop.

“Coming to Sloan, one of the questions on my mind was: How can we, as future leaders, make a positive impact related to climate change? While En-ROADS is a simulation, it felt like we experienced, in the smallest way, what it could be like to be a leader navigating the diverging interests of all stakeholders involved in mitigating the impacts of the climate crisis. While the simulation prompted us to face the difficult reality of climate change, it also reinforced my motivation to emphasize climate in my work at Sloan and beyond.”


A new advising neighborhood takes shape along the Infinite Corridor

The Undergraduate Advising Center’s new home in Building 11 creates a bright, welcoming, and functional destination for MIT undergraduate students.


On any given day, MIT’s famed 825-foot Infinite Corridor serves as a busy, buzzing pedestrian highway, offering campus commuters a quick, if congested, route from point A to B. With the possible exception of MIT Henge twice a year, it doesn’t exactly invite lingering.

Thanks to a recent renovation on the first floor of Building 11, the former location of Student Financial Services, there’s now a compelling reason for students to step off the busy thoroughfare and pause for conversation or respite.

Dubbed by one onlooker as “the spaceport,” the area has been transformed into an airy, multi-functional hub. Nestled inside is the Undergraduate Advising Center (UAC), which launched in 2023 to provide holistic support for students’ personal and academic growth by providing individualized advising for all four years, offering guidance about and connections to MIT resources, and partnering with faculty and departments to ensure a comprehensive advising experience.

Students can now find another key service conveniently located close by: Career Advising and Professional Development has moved into renovated office suites just down the hall, in Building 7.

“It’s just stunning!” marvels Diep Luu, senior associate dean and director of the UAC. “You can’t help but notice the contrast between the historic architecture and the contemporary design. The space is filled with natural light thanks to the floor-to-ceiling windows, and it makes the environment both energizing and comfortable.”

Designed by Merge Architects, the 5,000-square-foot space opens off the Infinite with several informal public spaces for students and community members. These include a series of soaring, vaulted booths with a variety of tables and seating to support socializing as well as work, a cozy lounge lined with pi wallpaper (carried out to 10,638 digits after 3.14), and the “social stairs” for informal gatherings and workshops. Beyond that, glass doors lead to the UAC office space, which features open workstations, private advising rooms, and conference rooms with Zoom capability.

“We wanted to incorporate as many different kinds of spaces to accommodate as many different kinds of interactions as we could,” explains Kate Trimble, senior associate dean and chief of staff of the Division of Graduate and Undergraduate Education (GUE), who helped guide the renovation project. “After all, the UAC will support all undergraduate students for their entire four-year MIT journey, through a wide variety of experiences, challenges, and celebrations.”

Homing in on the “Boardwalk or Park Place” of MIT real estate

The vision for the new district began to percolate in 2022. At the time, GUE (then known as the Office of the Vice Chancellor, or OVC) was focusing on two separate, key priorities: reconfiguring office space in a post-pandemic, flex-work world; and creating a new undergraduate advising center, in accordance with one of the Task Force 2021 recommendations.

A faculty and staff working group gathered information and ideas from offices and programs that had already implemented “flex-space” strategies, such as Human Resources, IS&T, and the MIT Innovation Headquarters. In thinking about an advising center of the size and scope envisioned, Trimble notes, “we quickly zeroed in on the Building 11 space. It’s such a prominent location. Former Vice Chancellor (and current Vice President for Research) Ian A. Waitz referred to it as the ‘Boardwalk or Park Place of MIT real estate.’ And if you’re thinking about a center that’s going to serve all undergraduates, you really want it to be convenient and centrally located — and boy, that’s a perfect space.”

As plans were made to relocate Student Financial Services to a new home in Building E17, the renovation team engaged undergraduate students and advising staff in the design process through a series of charrette-style workshops and focus groups. Students shared feedback about spaces on campus where they felt most comfortable, as well as those they disliked. From staff, the team learned which design elements would make the space as functional as possible, allowing for the variety of interactions they typically have with students.

The team selected Merge Architects for the project, Trimble says, because “they understood that we were not looking to build something that was an architectural temple, but rather a functional and fun space that meets the needs of our students and staff. They’ve been creative and responsive partners.” She also credits the MIT Campus Construction group and the Office of Campus Planning for their crucial role in the renovation. “I can’t say enough good things about them. They’ve been superb guides through a long and complicated process.”

A more student-centric Infinite Corridor

Construction wrapped up in late summer, and the UAC held an open house for students on Registration Day, Sept. 3. It buzzed with activity as students admired the space, chatted with UAC staff, took photos, and met the office mascot, Winni, a friendly chocolate Labrador retriever.

“Students have been amazed by the transformation,” says Luu. “We wanted a space that encourages community and collaboration, one that feels alive and dynamic, and the early feedback suggests that’s exactly what’s happening. It also gives us a chance to better connect students not only with what the UAC offers, but also with support across the Institute.”

“Last year, the UAC offices were behind these two wooden doors in the Infinite Corridor and you had to know that they were there to get to them,” says junior Caleb Mathewos, who has been a UAC orientation leader and captain over the past two years. “The space is very inviting now. I’ve seen people sitting there and working, or just relaxing between classes. I see my friends every now and then, and I’ll stop by and chat with them. Because it’s so much more open, it makes the UAC feel a lot more accessible to students.”

Senior Calvin Macatantan, who’s been involved with the UAC’s First Generation/Low Income Program since his first year and served as an associate advisor and orientation leader, thinks the new space will make it easier for students — especially first-years — to find what they need to navigate MIT. “Before, resources felt scattered across different parts of the Infinite, even though they had similar missions of advising and supporting students. It’s nice that there’s a central, welcoming space where those supports connect, and I think that will make a big difference in how students experience MIT.”

The transformation adds significantly to a trend toward creating more student-centric spaces along the Infinite. In the past few years, MIT has added two new study lounges in Building 3, the DEN and the LODGE, and the Department of Materials Science and Engineering built the DMSE Breakerspace in Building 4. This fall, another office suite along the Infinite will be remodeled into a new tutoring hub.

"It’s wonderful to see the UAC space and the whole advising ‘neighborhood,’ if you will, come to fruition,” says Vice Chancellor for Graduate and Undergraduate Education David L. Darmofal. “The need to strengthen undergraduate advising and the opportunity to do so through an Institute advising hub was an outcome of the Task Force 2021 effort, and it’s taken years of thoughtful reflection by many stakeholders to lay the foundation for such a significant sea change in advising. This space is a tangible, visible commitment to putting students first.”


MIT Maritime Consortium releases “Nuclear Ship Safety Handbook”

First-of-its-kind handbook serves as a guide for design safety for civilian nuclear ships.


Commercial shipping accounts for 3 percent of all greenhouse gas emissions globally. As the sector sets climate goals and chases a carbon-free future, nuclear power — long used as a source for military vessels — presents an enticing solution. To date, however, there has been no clear, unified public document available to guide design safety for certain components of civilian nuclear ships. A new “Nuclear Ship Safety Handbook” by the MIT Maritime Consortium aims to change that and set the standard for safe maritime nuclear propulsion.

“This handbook is a critical tool in efforts to support the adoption of nuclear in the maritime industry,” explains Themis Sapsis, the William I. Koch Professor of Mechanical Engineering at MIT, director of the MIT Center for Ocean Engineering, and co-director of the MIT Maritime Consortium. “The goal is to provide a strong basis for initial safety on key areas that require nuclear and maritime regulatory research and development in the coming years to prepare for nuclear propulsion in the maritime industry.”

Using research data and standards, combined with operational experiences during civilian maritime nuclear operations, the handbook provides unique insights into potential issues and resolutions in the design efficacy of maritime nuclear operations, a topic of growing importance on the national and international stage. 

“Right now, the nuclear-maritime policies that exist are outdated and often tied only to specific technologies, like pressurized water reactors,” says Jose Izurieta, a graduate student in the Department of Mechanical Engineering (MechE) Naval Construction and Engineering (2N) Program, and one of the handbook authors. “With the recent U.K.-U.S. Technology Prosperity Deal now including civil maritime nuclear applications, I hope the handbook can serve as a foundation for creating a clear, modern regulatory framework for nuclear-powered commercial ships.”

The recent memorandum of understanding signed by the U.S. and U.K. calls for the exploration of “novel applications of advanced nuclear energy, including civil maritime applications,” and for the parties to play “a leading role informing the establishment of international standards, potential establishment of a maritime shipping corridor between the Participants’ territories, and strengthening energy resilience for the Participants’ defense facilities.”

“The U.S.-U.K. nuclear shipping corridor offers a great opportunity to collaborate with legislators on establishing the critical framework that will enable the United States to invest in nuclear-powered merchant vessels — an achievement that will reestablish America in the shipbuilding space,” says Fotini Christia, the Ford International Professor of the Social Sciences, director of the Institute for Data, Systems, and Society (IDSS), and co-director of the MIT Maritime Consortium.

“With over 30 nations now building or planning their first reactors, nuclear energy’s global acceptance is unprecedented — and that momentum is key to aligning safety rules across borders for nuclear-powered ships and the respective ports,” says Koroush Shirvan, the Atlantic Richfield Career Development Professor in Energy Studies at MIT and director of the Reactor Technology Course for Utility Executives.

The handbook, which is divided into chapters covering the overlapping nuclear and maritime safety design decisions that engineers will encounter, balances technical and practical guidance with policy considerations.

Commander Christopher MacLean, MIT associate professor of the practice in mechanical engineering and naval construction and engineering, says the handbook will significantly benefit the entire maritime community, specifically naval architects and marine engineers, by providing standardized guidelines for the design and operation of nuclear-powered commercial vessels.

“This will assist in enhancing safety protocols, improving risk assessments, and ensuring consistent compliance with international regulations,” MacLean says. “This will also help foster collaboration among engineers and regulators. Overall, this will further strengthen the reliability, sustainability, and public trust in nuclear-powered maritime systems.”

Anthony Valiaveedu, the handbook’s lead author, and co-author Nat Edmonds, are both students in the MIT Master’s Program in Technology and Policy (TPP) within the IDSS. The pair are also co-authors of a paper published in Science Policy Review earlier this year that offered structured advice on the development of nuclear regulatory policies.

“It is important for safety and technology to go hand-in-hand,” Valiaveedu explains. “What we have done is provide a risk-informed process to begin these discussions for engineers and policymakers.”

“Ultimately, I hope this framework can be used to build strong bilateral agreements between nations that will allow nuclear propulsion to thrive,” says fellow co-author Izurieta.

Impact on industry

“Maritime designers needed a source of information to improve their ability to understand and design the reactor primary components, and development of the 'Nuclear Ship Safety Handbook' was a good step to bridge this knowledge gap,” says Christopher J. Wiernicki, American Bureau of Shipping (ABS) chair and CEO. “For this reason, it is an important document for the industry.”

The ABS, which is the American classification society for the maritime industry, develops criteria and provides safety certification for all ocean-going vessels. ABS is among the founding members of the MIT Maritime Consortium. Capital Clean Energy Carriers Corp., HD Korea Shipbuilding and Offshore Engineering, and Delos Navigation Ltd. are also consortium founding members. Innovation members are Foresight-Group, Navios Maritime Partners L.P., Singapore Maritime Institute, and Dorian LPG.

“As we consider a net-zero framework for the shipping industry, nuclear propulsion represents a potential solution. Careful investigation remains the priority, with safety and regulatory standards at the forefront,” says Jerry Kalogiratos, CEO of Capital Clean Energy Carriers Corp. “As first movers, we are exploring all options. This handbook lays the technical foundation for the development of nuclear-powered commercial vessels.”

Sangmin Park, senior vice president at HD Korea Shipbuilding and Offshore Engineering, says, “The 'Nuclear Ship Safety Handbook' marks a groundbreaking milestone that bridges shipbuilding excellence and nuclear safety. It drives global collaboration between industry and academia, and paves the way for the safe advancement of the nuclear maritime era.”

Maritime at MIT

MIT has been a leading center of ship research and design for over a century, with work at the Institute today representing significant advancements in fluid mechanics and hydrodynamics, acoustics, offshore mechanics, marine robotics and sensors, and ocean sensing and forecasting. Maritime Consortium projects, including the handbook, reflect national priorities aimed at revitalizing the U.S. shipbuilding and commercial maritime industries.

The MIT Maritime Consortium, which launched in 2024, brings together MIT and maritime industry leaders to explore data-powered strategies to reduce harmful emissions, optimize vessel operations, and support economic priorities.

“One of our most important efforts is the development of technologies, policies, and regulations to make nuclear propulsion for commercial ships a reality,” says Sapsis. “Over the last year, we have put together an interdisciplinary team with faculty and students from across the Institute. One of the outcomes of this effort is this document, which provides detailed guidance on how such an effort can be implemented safely.”

Handbook contributors come from multiple disciplines and MIT departments, labs, and research centers, including the Center for Ocean Engineering, IDSS, MechE’s Course 2N Program, the MIT Technology and Policy Program, and the Department of Nuclear Science and Engineering.

MIT faculty members and research advisors on the project include Sapsis; Christia; Shirvan; MacLean; Jacopo Buongiorno, the Battelle Energy Alliance Professor in Nuclear Science and Engineering, director of the Center for Advanced Nuclear Energy Systems, and director of science and technology for the Nuclear Reactor Laboratory; and Captain Andrew Gillespy, professor of the practice and director of the Naval Construction and Engineering (2N) Program.

“Proving the viability of nuclear propulsion for civilian ships will entail getting the technologies, the economics and the regulations right,” says Buongiorno. “This handbook is a meaningful initial contribution to the development of a sound regulatory framework.”

“We were lucky to have a team of students and knowledgeable professors from so many fields,” says Edmonds. “Before even beginning the outline of the handbook, we did significant archival and historical research to understand the existing regulations and the overarching story of nuclear ships. Some of the most relevant documents we found were written before 1975, and many of them were stored in the bowels of the NS Savannah.”

The NS Savannah, built in the late 1950s as a demonstration project for the potential peacetime uses of nuclear energy, was the first nuclear-powered merchant ship. It was launched on July 21, 1959, two years after the first nuclear-powered civilian vessel, the Soviet icebreaker Lenin, and was retired in 1971.

Historical context for this project is important, because the reactor technologies envisioned for maritime propulsion today are quite different from the traditional pressurized water reactors used by the U.S. Navy. These new reactors are being developed not just in the maritime context, but also to power ports and data centers on land; they all use low-enriched uranium and are passively cooled. For the maritime industry, Sapsis says, “the technology is there, it’s safe, and it’s ready.”

The Nuclear Ship Safety Handbook is publicly available on the MIT Maritime Consortium website and from the MIT Libraries. 


Solar energy startup Active Surfaces wins inaugural PITCH.nano competition

Twelve START.nano companies competed for the grand prize of nanoBucks to be used at MIT.nano’s facilities.


The inaugural PITCH.nano competition, hosted by MIT.nano’s hard technology accelerator START.nano, provided a platform for early-stage startups to present their innovations to MIT and Boston’s hard-tech startup ecosystem.

The grand prize winner was Active Surfaces, a startup that generates renewable energy exactly where it will be used through lightweight, flexible solar cells. Active Surfaces says its ultralight, peel-and-stick panels will reimagine how we deploy photovoltaics in the built environment.

Shiv Bhakta MBA ’24, SM ’24, CEO and co-founder, delivered the winning presentation to an audience of entrepreneurs, investors, startup incubators, and industry partners at PITCH.nano on Sept. 30. Active Surfaces received the grand prize of 25,000 nanoBucks — equivalent to $25,000 that can be spent at MIT.nano facilities.

“Why has MIT.nano chosen to embrace startup activity as much as we do?” asked Vladimir Bulović, MIT.nano faculty director, at the start of PITCH.nano. “We need to make sure that entrepreneurs can be born out of MIT and can take the next technical ideas developed in the lab out into the market, so they can make the next millions of jobs that the world needs.”

The journey of a hard-tech entrepreneur takes at least 10 years and $100 million, explained Bulović. By linking open tool facilities to startup needs, MIT.nano can make those first few years a little easier, bringing more startups to the scale-up stage.

“Getting VCs [venture capitalists] to invest in hard tech is challenging,” explained Joyce Wu SM ’00, PhD ’07, START.nano program manager. “Through START.nano, we provide discounted access to MIT.nano’s cleanrooms, characterization tools, and laboratories for startups to build their prototypes and attract investment earlier and with reduced spend. Our goal is to support the translation of fundamental research to real-world solutions in hard tech.”

In addition to discounted access to tools, START.nano helps early-stage companies become part of the MIT and Cambridge innovation network. PITCH.nano, inspired by the MIT 100K Competition, was launched as a new opportunity this year to introduce these hard-tech ventures to the investor and industry community. Twelve startups delivered presentations that were evaluated by a panel of four judges who are, themselves, venture capitalists and startup founders.

“It is amazing to see the quality, diversity, and ingenuity of this inspiring group of startups,” said judge Brendan Smith PhD ’18, CEO of SiTration, a company that was part of the inaugural START.nano cohort. “Together, these founders are demonstrating the power of fundamental hard-tech innovation to solve the world’s greatest challenges, in a way that is both scalable and profitable.”

Startups that presented at PITCH.nano spanned a wide range of focus areas. In the fields of climate, energy, and materials, the audience heard from Addis Energy, Copernic Catalysts, Daqus Energy, VioNano Innovations, Active Surfaces, and Metal Fuels; in life sciences, Acorn Genetics, Advanced Silicon Group, and BioSens8; and in quantum and photonics, Qunett, nOhm Devices, and Brightlight Photonics. The common thread for these companies: They are all using MIT.nano to advance their innovations.

“MIT.nano has been instrumental in compressing our time to market, especially as a company building a novel, physical product,” said Bhakta. “Access to world-class characterization tools — normally out of reach for startups — lets us validate scale-up much faster. The START.nano community accelerates problem-solving, and the nanoBucks award is directly supporting the development of our next prototypes headed to pilot.”

In addition to the grand prize, a 5,000 nanoBucks audience choice award went to Advanced Silicon Group, a startup that is developing a next-generation biosensor to improve testing in pharma and health tech.

Now in its fifth year, START.nano has supported 40 companies spanning a diverse set of market areas — life sciences, clean tech, semiconductors, photonics, quantum, materials, and software. Fourteen START.nano companies have graduated from the program, proving that START.nano is indeed succeeding in its mission to help early-stage ventures advance from prototype to manufacturing. “I believe MIT.nano has a fantastic opportunity here,” said judge Davide Marini, PhD ’03, co-founder and CEO of Inkbit, “to create the leading incubator for hard tech entrepreneurs worldwide.”

START.nano accepts applications on a monthly basis. The program is made possible through the generous support of FEMSA.


MIT Global Seed Funds catalyze research in over 20 countries

Launched in 2008, the program has expanded steadily and has awarded roughly $30 million for high-impact research.


Since launching in 2008, the MIT Global Seed Funds (GSF) program has awarded roughly $30 million to more than 1,300 high-impact faculty research projects across the world, spurring consequential collaborations on topics that include swine-fever vaccines, deforestation of the Amazon, the impact of “coral mucus” on the Japanese island of Okinawa, and the creation of an AI-driven STEM-education lab within Nigeria’s oldest university.

Administered by the MIT Center for International Studies (CIS) and open to MIT faculty and principal investigators, GSF boasts a unique funding structure consisting of both a general fund for unrestricted geographical use and more than 20 different specific funds for individual universities, regions, and countries.

GSF projects often tackle critical challenges that require international solutions, culminating in patents, policy changes, and papers published in journals such as Nature and Science. Some faculty-led projects from this year include Professor Hugh Herr’s modular crutches for people with disabilities in Sierra Leone, Research Scientist Paolo Santi’s large language models to predict energy consumption in grocery stores, and Professor Ernest Fraenkel’s development of mRNA therapies for the neurodegenerative disease amyotrophic lateral sclerosis (ALS).

GSF Assistant Director Justin Leahey, who is managing director of the MIT-Germany and MIT-Switzerland programs, says that GSF has expanded significantly over the years, most recently into the Czech Republic, Norway, Slovakia, and — starting in fall 2025 — Hungary. This year, roughly 300 research proposals were submitted for consideration, with many of the accepted proposals involving the active participation of students at both the graduate and undergraduate levels.

Central to GSF’s work is “reciprocal exchange” — the concept of collaborators inside and outside MIT sharing their work and exchanging ideas in an egalitarian way, rather than bringing a one-sided approach to different research challenges. Frequent collaborator Raffaella Gozzelino, a neurology researcher and principal investigator at NOVA Medical School in Portugal who works closely with Jacquin Niles, an MIT professor of biological engineering, says that research is more impactful “when specialized knowledge integrates local realities and reveals potential solutions to national challenges,” and views the spirit of reciprocal exchange as something that revolves around “sharing knowledge and co-creating solutions that empower one another and build bridges across borders.”

Cindy Xie ’24, MCP ’25 developed her master’s thesis from the first-ever GSF-supported research internship in Cape Verde, where she worked with Niles and Gozzelino to explore the impact of climate change on anemia in the country of 500,000 people, focusing specifically on its largest island, Santiago. Xie says she was struck by the intertwined nature of the issues of nutrition, climate, and infection in Santiago, home to the nation’s capital city of Praia. For example, Xie and Gozzelino’s team found that respondents perceived a rise in the cost of fresh produce over time, exacerbated by drought and unpredictable agricultural conditions, which in turn worsened existing nutritional deficiencies and increased residents’ susceptibility to mosquito-borne diseases.

“Though this multidisciplinary research lens is challenging in terms of actual project implementation, it was meaningful in that it generated insights and connections across fields that allow our research to be better contextualized within the experiences of the communities that it impacts,” Xie says.

Gozzelino says that it has been meaningful to witness how scientific research can transcend academic boundaries and generate real impact. She says that, by examining the effects of climate change on infectious diseases and nutrition in Cape Verde, the team will be able to build a framework that can directly inform public policy.

“Contributing to a project that underscores the importance of integrating scientific knowledge into decision-making will safeguard vulnerable populations and make them feel included in the society they belong,” Gozzelino says. “This collaboration has revealed the enormous potential of international partnerships to strengthen local research capacity and address global challenges.”

During her time in Cape Verde working with Xie and Gozzelino, Amulya Aluru ’23, MEng ’24 met with 20 local officials and connected with people in a wide range of roles across the country, helping her “recognize the power of interpersonal relationships and collaboration” in public health research. She says the structure of the GSF grant gave her the unique experience of having mentors and coworkers in three countries: Cape Verde, the United States, and Portugal.

Aluru says that this kind of cross-pollination “enabled me to strengthen my research with different perspectives and challenged me to approach my work in a way that I’d never done before, with a more global mindset.”

Xie similarly expresses her deep appreciation for the long-term relationships she has built through the project and the linkages between Santiago and Boston, which itself is home to one of the world’s largest Cape Verdean diasporas. “As a student, this was a valuable experience to inform the approaches to collaboration that I would like to implement in my own future work,” Xie says.

More broadly, Gozzelino sees GSF grants like the Cape Verde one as being not simply a vehicle for financial support, but “a catalyst for turning partnerships into long-term impactful collaborations, demonstrating how global networks can aid the development of human capital.”

GSF’s long history of reaching across departments and borders has led to multiple meaningful academic collaborations that have since come to span continents — and decades. In 2015, Professor Jörn Dunkel — an applied mathematician at MIT — kicked off work on a data-sharing repository for bacterial biofilms with the interdisciplinary German microbiologist Knut Drescher, then a professor of biophysics at Philipps-Universität Marburg in Germany. Dunkel and Drescher have since co-authored more than 15 papers in publications like Nature Physics and Science Advances alongside their teams of graduate students and postdocs, even after Drescher moved to Switzerland to join the University of Basel’s Biozentrum Center for Molecular Life Sciences as a faculty member.

“Our collaboration often creates great synergy by combining my team’s experiments with the theory from Jörn’s team,” says Drescher. “It is a great joy to see his perspective on the experimental systems we are working on. He is able to really understand and engage with experimental biological data, identifying patterns in seemingly distant biological systems.”

In explaining the CIS initiative’s success, Leahey points to the synergistic, academically eclectic, cross-disciplinary nature of the program. “[GSF] is a research fund that doesn’t ‘fund research’ in the conventional sense,” he says. “It seeds early-stage collaboration and lets people explore.”

The MIT Global Seed Funds applications are now open, with a deadline of Dec. 16.


Alan Whitney, MIT Haystack Observatory radio astronomer who pioneered very long baseline interferometry, dies at 81

Longtime research scientist who served as associate director and interim director helped guide Haystack to decades of influential leadership in the development and refinement of the VLBI technique.


Alan Robert Whitney ’66, SM ’67, PhD ’74, a longtime research scientist at the MIT Haystack Observatory who also served as its associate director and interim director, died on Sept. 28 at age 81.

Whitney was a key contributor to the accomplishments and reputation of Haystack Observatory, having led the development of innovative technologies to advance the powerful radio science technique of very long baseline interferometry (VLBI). He rose to the rank of MIT principal research scientist, served for many years as associate director of the observatory, and in 2007-08 took the reins as interim director. In 2011, he received an MIT Excellence Award.

From an early age, Whitney displayed extraordinary talent. Raised in Wyoming, he won the state science fair as a high schooler in 1962 with a satellite telemetry receiver he designed and built from transistors and other discrete components in a barn on his family’s dairy farm. He enrolled at MIT and completed a five-year master’s degree via a cooperative internship program with Bell Laboratories, subsequently earning his PhD in electrical engineering.

Haystack Director Phil Erickson says, “Alan’s personality and enthusiasm were infectious, and his work represented the best ideals of the Haystack and MIT research enterprise — innovative, curious, and exploring the frontiers of basic and applied science and technology.”

In the late 1960s, as part of his PhD work, he was heavily involved in the pioneering development of VLBI, an extraordinary technique that yielded direct measurements of continental drift and information on distant radio sources at unprecedented angular resolution. A landmark paper led by Whitney demonstrated the presence of apparent superluminal (faster than light) motion of radio sources, which was explained as highly relativistic motion aligned toward the Earth. He spent the rest of his long and productive career at Haystack, pushing forward VLBI technology to ever-greater heights and ever-more impactful scientific capabilities.

“Alan was a technology pillar, a stalwart builder and worldwide ambassador of Haystack, and a leading figure of the VLBI geodetic community who inspired generations of scientists and engineers,” says Pedro Elosegui, leader of the Haystack geodesy group. “He contributed fundamentally to the vision and design of the VLBI Geodetic Observing System, outlining a path to a next-generation VLBI system with unprecedented new capabilities to address emerging space geodesy science needs such as global sea-level rise.”

The early days of VLBI demanded heroic and grueling efforts, traveling the world with exotic devices in hand-carried luggage, mounting and dismounting thousands of magnetic tapes every couple of minutes for hours on end, troubleshooting complex and sensitive instrumentation, and writing highly specialized software for the mainframe computers of the day. Whitney was fully engaged on all these fronts. By the early 1980s, the Mark III recording and correlation systems, whose development was led by Whitney, were established as the state of the art in VLBI technology, and a standard around which the global VLBI community coalesced.

Whitney later led the transition to VLBI disk-based recording. Specialized and robust Mark V systems optimized for shipping logistics and handling were transferred to industry for commercialization, leading once again to widespread global adoption of Haystack-developed VLBI technology. Consistently across all these developments, Whitney identified and exploited the most relevant and practical emerging technologies for the Haystack VLBI mission in hardware, software, and computing infrastructure.

In the latter part of his career, Whitney continued to innovate, pushing the technical boundaries of VLBI. A key advance was the Mark 6 (Mk6) recording system, capable of yet faster recording, higher sensitivity, and greater robustness. The Mk6 recorders’ essential capability allowed the creation of the Event Horizon Telescope, which famously yielded the first image of the shadow of a black hole. Mk6 recorders are now used to routinely record data roughly 100,000 times faster than the computer tapes used at the start of his career.

As a senior technical and scientific leader, Whitney provided broad leadership and consultation to Haystack, and worked on a number of projects outside of the VLBI world. He served as interim Haystack director from January 2007 until a permanent director was appointed in September 2008. He also engaged with the development project for the international Murchison Widefield Array (MWA) in Australia, focused on frontier research studying early universe development. Whitney assumed the role of MWA project director from 2008 until groups in Australia took over the construction phase of the project a few years later. Until his full retirement in 2012, Whitney continued to provide invaluable technical insights and support at Haystack, and was a trusted and wise counsel to the Haystack Director’s Office. In 2020, Whitney was a co-recipient of the Breakthrough Prize in Fundamental Physics awarded to the Event Horizon Telescope Collaboration.

Alan Whitney was a top-notch technologist with a broad perspective that allowed him to guide Haystack to decades of influential leadership in the development and refinement of the VLBI technique. His dedication at MIT to the observatory, its people, and its mission was a source of inspiration to many at Haystack and well beyond. He was widely admired for the clarity of his thought, the sharpness of his intellect, and his genial and friendly nature. His numerous local, national, and global colleagues will feel his absence.


School of Engineering welcomes new faculty in 2024-25

The newest MIT engineering faculty are conducting research across a diverse range of subject areas.


The MIT School of Engineering welcomes new faculty members across six of its academic units. This new cohort of faculty members, who have recently started their roles at MIT, conduct research across a diverse range of disciplines.

“We are thrilled to welcome these accomplished scholars to the School of Engineering,” says Maria C. Yang, interim dean of engineering and William E. Leonhard (1940) Professor in the Department of Mechanical Engineering. “Each brings unique expertise across a wide range of fields and is advancing knowledge with real-world impact. They all share a deep commitment to research excellence and a passion for teaching and mentorship.”

Faculty with appointments in the Department of Electrical Engineering and Computer Science (EECS) and the Institute for Data, Systems, and Society (IDSS) report into both the School of Engineering and the MIT Stephen A. Schwarzman College of Computing.

The new engineering faculty include:

Masha Folk joined the Department of Aeronautics and Astronautics as an assistant professor in July 2024 and is currently the Charles Stark Draper Career Development Professor. Her research focuses on sustainable aerospace technology driven by a deep desire to accelerate carbon-neutral aviation. She previously worked as an aerodynamics specialist for Rolls-Royce. Folk received her BS in aerospace engineering from Ohio State University, her MS in aerospace engineering from Purdue University, and her PhD in energy, fluids, and turbomachinery from the University of Cambridge.

Sophia Henneberg joined the Department of Nuclear Science and Engineering (NSE) as an assistant professor in September. Her research focuses on developing, utilizing, and extending optimization tools to identify new stellarator designs; stellarators are a promising path toward fusion energy. Previously, she was the principal investigator of EUROfusion’s Stellarator Optimization Theory, Simulation, Validation, and Verification group. Henneberg received a BS in physics at the Goethe-Universität, an MA in physics at the University of Wisconsin at Madison, and a PhD in physics at the University of York.

Omar Khattab joined the Department of Electrical Engineering and Computer Science as an assistant professor in July. He is also affiliated with the Computer Science and Artificial Intelligence Laboratory (CSAIL). His research develops new algorithms and abstractions for declarative AI programming and for composing retrieval and reasoning. Khattab previously worked as a research scientist at Databricks. He received a BS in computer science from Carnegie Mellon University and a PhD in computer science from Stanford University.

Tania Lopez-Silva joined the Department of Materials Science and Engineering as an assistant professor in July. Her research focuses on supramolecular hydrogels — soft materials made from self-assembling molecules, primarily peptides. Previously, she served as a postdoc at the National Cancer Institute. Lopez-Silva earned her BS in chemistry from Tecnológico de Monterrey and her MA and PhD in chemistry from Rice University.

Ethan Peterson ’13 joined the Department of Nuclear Science and Engineering as an assistant professor in July 2024. His research focuses on improving radiation transport and transmutation methods for the design of fusion technologies, as well as whole-facility modeling for fusion power plants. Previously, he worked as a research scientist at MIT’s Plasma Science and Fusion Center. Peterson received his BS in nuclear engineering and physics from MIT and his PhD in plasma physics from the University of Wisconsin at Madison.

Dean Price joined the Department of Nuclear Science and Engineering as the Atlantic Richfield Career Development Professor in Energy Studies and an assistant professor in September. His work focuses on the simulation and control of advanced reactors, with expertise in uncertainty quantification, scientific machine learning, and artificial intelligence for nuclear applications. Previously, he was the Russell L. Heath Distinguished Postdoctoral Fellow at Idaho National Laboratory. He earned his BS in nuclear engineering from the University of Illinois and his PhD in nuclear engineering from the University of Michigan.

Daniel Varon joined the Department of Aeronautics and Astronautics as the Boeing Assistant Professor, holding an MIT Schwarzman College of Computing shared position with IDSS, in July. Varon’s research focuses on using satellite observations of atmospheric composition to better understand human impacts on the environment and identify opportunities to reduce them. Previously, he held a visiting postdoctoral fellowship at the Princeton School of Public and International Affairs. Varon earned a BS in physics and a BA in English literature from McGill University, and an MS in applied mathematics and PhD in atmospheric chemistry from Harvard University.

Raphael Zufferey joined the Department of Mechanical Engineering as an assistant professor in January. He studies bioinspired methods and unconventional designs to achieve seamless aerial and aquatic locomotion for applications in ocean sciences. Zufferey previously worked as a Marie Curie postdoc at the École Polytechnique Fédérale de Lausanne (EPFL). He received his BA in micro-engineering and MS in robotics from EPFL and a PhD in robotics and aeronautics from Imperial College London.

The School of Engineering is also welcoming a number of faculty in the Department of EECS and the IDSS who hold shared positions with the MIT Schwarzman College of Computing and other departments. These include: Bailey Flanigan, Brian Hedden, Yunha Hwang, Benjamin Lindquist, Paris Smaragdis, Pu “Paul” Liang, Mariana Popescu, and Daniel Varon. For more information about these faculty members, read the Schwarzman College of Computing’s recent article.

Additionally, the School of Engineering has adopted the shared faculty search model to hire its first shared faculty member: Mark Rau. For more information, read the School of Humanities, Arts, and Social Sciences’ recent article.


MIT Schwarzman College of Computing welcomes 11 new faculty for 2025

The faculty members occupy core computing and shared positions, bringing varied backgrounds and expertise to the MIT community.


The MIT Schwarzman College of Computing welcomes 11 new faculty members in core computing and shared positions to the MIT community. They bring varied backgrounds and expertise spanning sustainable design, satellite remote sensing, decision theory, and the development of new algorithms for declarative artificial intelligence programming, among others.

“I warmly welcome this talented group of new faculty members. Their work lies at the forefront of computing and its broader impact in the world,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science.

College faculty include those with appointments in the Department of Electrical Engineering and Computer Science (EECS) or in the Institute for Data, Systems, and Society (IDSS), which report into both the MIT Schwarzman College of Computing and the School of Engineering. There are also several new faculty members in shared positions between the college and other MIT departments and sections, including Political Science, Linguistics and Philosophy, History, and Architecture.

“Thanks to another successful year of collaborative searches, we have hired six additional faculty in shared positions, bringing the total to 20,” says Huttenlocher.

The new shared faculty include:

Bailey Flanigan is an assistant professor in the Department of Political Science, holding an MIT Schwarzman College of Computing shared position with EECS. Her research combines tools from social choice theory, game theory, algorithms, statistics, and survey methods to advance political methodology and strengthen democratic participation. She is interested in sampling algorithms, opinion measurement, and the design of democratic innovations like deliberative minipublics and participatory budgeting. Flanigan was a postdoc at Harvard University’s Data Science Initiative, and she earned her PhD in computer science from Carnegie Mellon University.

Brian Hedden PhD ’12 is a professor in the Department of Linguistics and Philosophy, holding an MIT Schwarzman College of Computing shared position with EECS. His research focuses on how we ought to form beliefs and make decisions. His work spans epistemology, decision theory, and ethics, including the ethics of AI. He is the author of “Reasons without Persons: Rationality, Identity, and Time” (Oxford University Press, 2015) and articles on topics such as collective action problems, legal standards of proof, algorithmic fairness, and political polarization. Prior to joining MIT, he was a faculty member at the Australian National University and the University of Sydney, and a junior research fellow at Oxford University. He received his BA from Princeton University and his PhD from MIT, both in philosophy.

Yunha Hwang is an assistant professor in the Department of Biology, holding an MIT Schwarzman College of Computing shared position with EECS. She is also a member of the Laboratory for Information and Decision Systems. Her research interests span machine learning for sustainable biomanufacturing, microbial evolution, and open science. She serves as the co-founder and chief scientist at Tatta Bio, a scientific nonprofit dedicated to advancing genomic AI for biological discovery. She holds a BS in computer science from Stanford University and a PhD in biology from Harvard University.

Ben Lindquist is an assistant professor in the History Section, holding an MIT Schwarzman College of Computing shared position with EECS. Through a historical lens, his work examines the ways that computing has intertwined with ideas of religion, emotion, and divergent thinking. His book, “The Feeling Machine” (University of Chicago Press, forthcoming), follows the history of synthetic speech to examine how emotion became a subject of computer science. He was a postdoc in the Science in Human Culture Program at Northwestern University and earned his PhD in history from Princeton University.

Mariana Popescu is an assistant professor in the Department of Architecture, holding an MIT Schwarzman College of Computing shared position with EECS. She is also a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). A computational architect and structural designer, Popescu has a strong interest and experience in innovative ways of approaching the fabrication process and use of materials in construction. Her area of expertise is computational and parametric design, with a focus on digital fabrication and sustainable design. Popescu earned her doctorate at ETH Zurich.

Paris Smaragdis SM ’97, PhD ’01 is a professor in the Music and Theater Arts Section, holding an MIT Schwarzman College of Computing shared position with EECS. His research focus lies at the intersection of signal processing and machine learning, especially as it relates to sound and music. Prior to coming to MIT, he worked as a research scientist at Mitsubishi Electric Research Labs, a senior research scientist at Adobe Research, and an Amazon Scholar with Amazon’s AWS. He spent 15 years as a professor at the University of Illinois Urbana-Champaign in the Computer Science Department, where he spearheaded the design of the CS+Music program, and served as an associate director of the School of Computer and Data Science. He holds a BMus from Berklee College of Music and earned his PhD in perceptual computing from MIT.

Daniel Varon is an assistant professor in the Department of Aeronautics and Astronautics, holding an MIT Schwarzman College of Computing shared position with IDSS. His work focuses on using satellite observations of atmospheric composition to better understand human impacts on the environment and identify opportunities to reduce them. An atmospheric scientist, Varon is particularly interested in greenhouse gases, air pollution, and satellite remote sensing. He holds an MS in applied mathematics and a PhD in atmospheric chemistry, both from Harvard University.

In addition, the School of Engineering has adopted the shared faculty search model to hire its first shared faculty member:

Mark Rau is an assistant professor in the Music and Theater Arts Section, holding a School of Engineering shared position with EECS. He is involved in developing graduate programming focused on music technology. He has an interest in musical acoustics, vibration and acoustic measurement, audio signal processing, and physical modeling synthesis. His work focuses on musical instruments and creative audio effects. He holds an MA in music, science, and technology from Stanford, as well as a BS in physics and BMus in jazz from McGill University. He earned his PhD at Stanford’s Center for Computer Research in Music and Acoustics.

The new core faculty are:

Mitchell Gordon is an assistant professor in EECS. He is also a member of CSAIL. In his research, Gordon designs interactive systems and evaluation approaches that bridge principles of human-computer interaction with the realities of machine learning. His work has won awards at conferences in human-computer interaction and artificial intelligence, including a best paper award at CHI and an oral presentation at NeurIPS. Gordon received a BS from the University of Rochester, and MS and PhD from Stanford University, all in computer science.

Omar Khattab is an assistant professor in EECS. He is also a member of CSAIL. His work focuses on natural language processing, information retrieval, and AI systems. His research includes developing new algorithms and abstractions for declarative AI programming and for composing retrieval and reasoning. He received his BS from Carnegie Mellon University and his PhD from Stanford University, both in computer science.

Rachit Nigam will join EECS as an assistant professor in January 2026. He will also be a member of CSAIL and the Microsystems Technology Laboratories. He works on programming languages and computer architecture to address the design, verification, and usability challenges of specialized hardware. He was previously a visiting scholar at MIT. Nigam earned an MS and PhD in computer science from Cornell University.


Lincoln Laboratory and Haystack Observatory team up to unveil hidden parts of the galaxy

A proposed telescope made of thousands of tiny, identical satellites will work together to reveal low-frequency radio waves in space.


For centuries, humans have sought to study the stars and celestial bodies, whether through observations made by naked eye or by telescopes on the ground and in space that can view the universe across nearly the entire electromagnetic spectrum. Each view unlocks new information about the denizens of space — X-ray pulsars, gamma-ray bursts — but one is still missing: the low-frequency radio sky.

Researchers from MIT Lincoln Laboratory, the MIT Haystack Observatory, and Lowell Observatory are working on a NASA-funded concept study called the Great Observatory for Long Wavelengths, or GO-LoW, that outlines a method to view the universe at as-yet-unseen low frequencies using a constellation of thousands of small satellites. The wavelengths of these frequencies range from 15 meters to several kilometers, which means a very large telescope is required to see them clearly.

"GO-LoW will be a new kind of telescope, made up of many thousands of spacecraft that work together semi-autonomously, with limited input from Earth," says Mary Knapp, the principal investigator for GO-LoW at the MIT Haystack Observatory. "GO-LoW will allow humans to see the universe in a new light, opening up one of the very last frontiers in the electromagnetic spectrum."

The difficulty in viewing the low-frequency radio sky comes from Earth's ionosphere, a layer of the atmosphere that contains charged particles that prevent very low-frequency radio waves from passing through. Therefore, a space-based instrument is required to observe these wavelengths. Another challenge is that long-wavelength observations require correspondingly large telescopes, which would need to be many kilometers in length if built using traditional dish antenna designs. GO-LoW will use interferometry — a technique that combines signals from many spatially separated receivers that, when put together, will function as one large telescope — to obtain highly detailed data from exoplanets and other sources in space. A similar technique was used to make the first image of a black hole and, more recently, an image of the first known extrasolar radiation belts.
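
To make the core idea concrete, here is a minimal, illustrative sketch of the correlation step at the heart of interferometry: estimating the arrival-time difference of one signal as seen by two receivers. This is a toy Python example under simplified assumptions, not GO-LoW’s actual processing pipeline, and every name in it is invented for illustration.

```python
import numpy as np

# Toy interferometry: two receivers observe the same noise-like source,
# one with a geometric delay. Cross-correlating the two recordings
# recovers that delay, which is the basic operation used to combine
# spatially separated receivers into one large synthetic telescope.
rng = np.random.default_rng(0)
n = 4096
source = rng.normal(size=n)                    # broadband source signal

true_delay = 37                                # delay at receiver B, in samples
rx_a = source + 0.5 * rng.normal(size=n)       # receiver A: source + noise
rx_b = np.roll(source, true_delay) + 0.5 * rng.normal(size=n)

# Circular cross-correlation via FFT; the peak lag estimates the delay.
xcorr = np.fft.irfft(np.fft.rfft(rx_a) * np.conj(np.fft.rfft(rx_b)))
estimated = (n - int(np.argmax(xcorr))) % n
print(f"estimated delay: {estimated} samples (true: {true_delay})")
```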

Melodie Kao, a member of the team from Lowell Observatory, says the data could reveal details about an exoplanet's makeup and potential for life. "[The radio wave aurora around an exoplanet] carries important information, such as whether or not the planet has a magnetic field, how strong it is, how fast the planet is rotating, and even hints about what's inside," she says. "Studying exoplanet radio aurorae and the magnetic fields that they trace is an important piece of the habitability puzzle, and it's a key science goal for GO-LoW."

Several recent trends and technology developments will make GO-LoW possible in the near future, such as the declining cost of mass-produced small satellites, the rise of mega-constellations, and the return of large, high-capacity launch vehicles like NASA's Space Launch System. GO-LoW would be the first mega-constellation that uses interferometry for scientific purposes.

The GO-LoW constellation will be built through several successive launches, each containing thousands of spacecraft. Once they reach low-Earth orbit, the spacecraft will be refueled before journeying on to their final destination — an Earth-sun Lagrange point where they will then be deployed. Lagrange points are regions in space where the gravitational forces of two large celestial bodies (like the sun and Earth) are in equilibrium, such that a spacecraft requires minimal fuel to maintain its position relative to the two larger bodies. At this long distance from Earth (1 astronomical unit, or approximately 93 million miles), there will also be much less radio-frequency interference that would otherwise obscure GO-LoW’s sensitive measurements.

"GO-LoW will have a hierarchical architecture consisting of thousands of small listener nodes and a smaller number of larger communication and computation nodes (CCNs)," says Kat Kononov, a team member from Lincoln Laboratory's Applied Space Systems Group, who has been working with MIT Haystack staff since 2020, with Knapp serving as her mentor during graduate school. A node refers to an individual small satellite within the constellation. "The listener nodes are small, relatively simple 3U CubeSats — about the size of a loaf of bread — that collect data with their low-frequency antennas, store it in memory, and periodically send it to their communication and computation node via a radio link." In comparison, the CCNs are about the size of a mini-fridge.

Each CCN will keep track of the positions of the listener nodes in its neighborhood; collect and reduce the data from its roughly 100 listener nodes; and then transmit that data back to Earth, where more intensive data processing can be performed.
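
The hierarchy lends itself to a simple software model. The sketch below uses invented names and toy numbers, not anything from the GO-LoW design documents, to show roughly how about 100 listener nodes might buffer samples and hand a reduced data product to their CCN for downlink.

```python
from dataclasses import dataclass, field
from typing import List
import random

@dataclass
class ListenerNode:
    node_id: int
    buffer: List[float] = field(default_factory=list)

    def record(self, n_samples: int = 8) -> None:
        # Stand-in for the low-frequency antenna filling local memory.
        self.buffer.extend(random.gauss(0.0, 1.0) for _ in range(n_samples))

@dataclass
class CCN:
    listeners: List[ListenerNode]

    def collect_and_reduce(self) -> List[float]:
        # Gather each listener's buffer and apply a simple reduction
        # (here: per-node mean) standing in for onboard processing.
        reduced = []
        for node in self.listeners:
            if node.buffer:
                reduced.append(sum(node.buffer) / len(node.buffer))
                node.buffer.clear()
        return reduced

listeners = [ListenerNode(i) for i in range(100)]   # ~100 per CCN
ccn = CCN(listeners)
for node in listeners:
    node.record()
downlink = ccn.collect_and_reduce()                 # product sent to Earth
print(f"{len(downlink)} reduced measurements queued for downlink")
```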

At full strength, with approximately 100,000 listener nodes, the GO-LoW constellation should be able to see exoplanets with magnetic fields in the solar neighborhood — within 5 to 10 parsecs — many for the very first time.

The GO-LoW research team recently published their findings from Phase I of the study, which identified a type of advanced antenna called a vector sensor as the best type for this application. In 2024, Lincoln Laboratory designed a compact deployable version of the sensor suitable for use in space.

The team is now working on Phase II of the program, which is to build a multi-agent simulation of constellation operations.

"What we learned during the Phase I study is that the hard part for GO-LoW is not any specific technology … the hard part is the system: the system engineering and the autonomy to run the system," says Knapp. "So, how do we build this constellation such that it's a tractable problem? That's what we’re exploring in this next part of the study."

GO-LoW is one of many civil space programs at Lincoln Laboratory that aim to harness advanced technologies originally developed for national security to enable new space missions that support science and society. "By adapting these capabilities to serve new stakeholders, the laboratory helps open novel frontiers of discovery while building resilient, cost-effective systems that benefit the nation and the world," says Laura Kennedy, who is the deputy lead of Lincoln Laboratory's Civil Space Systems and Technology Office.

"Like landing on the moon in 1969, or launching Hubble in the 1990s, GO-LoW is envisioned to let us see something we've never seen before and generate scientific breakthroughs," says Kononov.

GO-LoW is a collaboration between Lincoln Laboratory, Haystack Observatory, and Lowell Observatory, as well as Lenny Paritsky from LeafLabs and Jacob Turner from Cornell University.


New software designs eco-friendly clothing that can reassemble into new items

To reduce waste, the Refashion program helps users create outlines for adaptable clothing, such as pants that can be reconfigured into a dress. Each component of these pieces can be replaced, rearranged, or restyled.


It’s hard to keep up with the ever-changing trends of the fashion world. What’s “in” one minute is often out of style the next season, potentially causing you to re-evaluate your wardrobe.

Staying current with the latest fashion styles can be wasteful and expensive, though. Roughly 92 million tons of textile waste are produced annually, including the clothes we discard when they go out of style or no longer fit. But what if we could simply reassemble our clothes into whatever outfits we wanted, adapting to trends and the ways our bodies change?

A team of researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Adobe is attempting to bring eco-friendly, versatile garments to life. Their new “Refashion” software system breaks down fashion design into modules — essentially, smaller building blocks — by allowing users to draw, plan, and visualize each element of a clothing item. The tool turns fashion ideas into a blueprint that outlines how to assemble each component into reconfigurable clothing, such as a pair of pants that can be transformed into a dress.

With Refashion, users simply draw shapes and place them together to develop an outline for adaptable fashion pieces. It’s a visual diagram that shows how to cut garments, providing a straightforward way to design things like a shirt with an attachable hood for rainy days. One could also create a skirt that can then be reconfigured into a dress for a formal dinner, or maternity wear that fits during different stages of pregnancy.

“We wanted to create garments that consider reuse from the start,” says Rebecca Lin, an MIT Department of Electrical Engineering and Computer Science (EECS) PhD student, a CSAIL and Media Lab researcher, and lead author on a paper presenting the project. “Most clothes you buy today are static, and are discarded when you no longer want them. Refashion instead makes the most of our garments by helping us design items that can be easily resized, repaired, or restyled into different outfits.”

Modules à la mode

The researchers conducted a preliminary user study where both designers and novices explored Refashion and were able to create garment prototypes. Participants assembled pieces such as an asymmetric top that could be extended into a jumpsuit, or remade into a formal dress, often within 30 minutes. These results suggest that Refashion has the potential to make prototyping garments more approachable and efficient. But what features might contribute to this ease of use?

Its interface first presents a simple grid in its “Pattern Editor” mode, where users can connect dots to outline the boundaries of a clothing item. It’s essentially drawing rectangular panels and specifying how different modules will connect to each other.

Users can customize the shape of each component, create a straight design for garments (which might be useful for less form-fitting items, like chinos), or tinker with one of Refashion’s templates. A user can edit pre-designed blueprints for things like a T-shirt, fitted blouse, or trousers.

Another, more creative route is to change the design of individual modules. One can choose the “pleat” feature to fold a garment over itself, similar to an accordion, for starters. It’s a useful way to design something like a maxi dress. The “gather” option adds an artsy flourish, where a garment is crumpled together to create puffy skirts or sleeves. A user might even go with the “dart” module, which removes a triangular piece from the fabric. It allows for shaping a garment at the waist (perhaps for a pencil skirt) or tailoring it to the upper body (fitted shirts, for instance).

While it might seem that each of these components needs to be sewn together, Refashion enables users to connect garments through more flexible, efficient means. Edges can be seamed together via double-sided connectors such as metal snaps (like the buttons used to close a denim jacket) or Velcro dots. A user could also fasten them with pins called brads, which have a pointed side that sticks through a hole and splits into two “legs” to attach to another surface; it’s a handy way to secure, say, a picture on a poster board. Both connective methods make it easy to reconfigure modules, should they be damaged or should a “fit check” call for a new look.
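
As a rough illustration of how such a modular garment could be represented in software, here is a hypothetical sketch; Refashion’s internal data model has not been published, so every name below is invented.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical representation of modules and connectors, invented for
# illustration only (not Refashion's actual schema).

class ModuleKind(Enum):
    PANEL = "panel"
    PLEAT = "pleat"    # accordion-style fold, e.g. for a maxi dress
    GATHER = "gather"  # crumpled fabric for puffy skirts or sleeves
    DART = "dart"      # triangular cutout for shaping at the waist

class Connector(Enum):
    SNAP = "metal snap"
    VELCRO = "velcro dot"
    BRAD = "brad pin"

@dataclass
class Module:
    name: str
    kind: ModuleKind
    width_cm: float
    height_cm: float

@dataclass
class Seam:
    a: Module
    b: Module
    connector: Connector

# Pants today, a dress tomorrow: swapping seams, not stitches,
# reconfigures the garment.
leg = Module("left leg", ModuleKind.PANEL, 30, 90)
waist = Module("waistband", ModuleKind.DART, 70, 8)
outfit = [Seam(waist, leg, Connector.SNAP)]
```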

As a user designs their clothing piece, the system automatically creates a simplified diagram of how it can be assembled. The pattern is divided into numbered blocks, which are dragged onto different parts of a 2D mannequin to specify the position of each component. The user can then simulate how their sustainable clothing will look on 3D models of a range of body types (one can also upload a model).

Finally, a digital blueprint for sustainable clothing can be extended, shortened, or combined with other pieces. Thanks to Refashion, a new piece could be emblematic of a potential shift in fashion: Instead of buying new clothes every time we want a new outfit, we can simply reconfigure existing ones. Yesterday’s scarf could be today’s hat, and today’s T-shirt could be tomorrow’s jacket.

“Rebecca’s work is at an exciting intersection between computation and art, craft, and design,” says MIT EECS professor and CSAIL principal investigator Erik Demaine, who advises Lin. “I’m excited to see how Refashion can make custom fashion design accessible to the wearer, while also making clothes more reusable and sustainable.”

Constant change

While Refashion presents a greener vision for the future of fashion, the researchers note that they’re actively improving the system. They intend to revise the interface to support more durable items, stepping beyond standard prototyping fabrics. Refashion may soon support other modules, like curved panels, as well. The CSAIL-Adobe team may also evaluate whether their system can use as few materials as possible to minimize waste, and whether it can help “remix” old store-bought outfits.

Lin also plans to develop new computational tools that help designers create unique, personalized outfits using colors and textures. She’s exploring how to design clothing by patchwork — essentially, cutting out small pieces from materials like decorative fabrics, recycled denim, and crochet blocks and assembling them into a larger item.

“This is a great example of how computer-aided design can also be key in supporting more sustainable practices in the fashion industry,” says Adrien Bousseau, a senior researcher at Inria Centre at Université Côte d'Azur who wasn’t involved in the paper. “By promoting garment alteration from the ground up, they developed a novel design interface and accompanying optimization algorithm that helps designers create garments that can undergo a longer lifetime through reconfiguration. While sustainability often imposes additional constraints on industrial production, I am confident that research like the one by Lin and her colleagues will empower designers in innovating despite these constraints.”

Lin wrote the paper with Adobe Research scientists Michal Lukáč and Mackenzie Leake, who is the paper’s senior author and a former CSAIL postdoc. Their work was supported, in part, by the MIT Morningside Academy for Design, an MIT MAKE Design-2-Making Mini-Grant, and the Natural Sciences and Engineering Research Council of Canada. The researchers presented their work recently at the ACM Symposium on User Interface Software and Technology.


In a surprising discovery, scientists find tiny loops in the genomes of dividing cells

Enabled by a new high-resolution mapping technique, the findings overturn a long-held belief that the genome loses its 3D structure when cells divide.


Before cells can divide, they first need to replicate all of their chromosomes, so that each of the daughter cells can receive a full set of genetic material. Until now, scientists had believed that as division occurs, the genome loses the distinctive 3D internal structure that it typically forms.

Once division is complete, it was thought, the genome gradually regains that complex, globular structure, which plays an essential role in controlling which genes are turned on in a given cell.

However, a new study from MIT shows that in fact, this picture is not fully accurate. Using a higher-resolution genome mapping technique, the research team discovered that small 3D loops connecting regulatory elements and genes persist in the genome during cell division, or mitosis.

“This study really helps to clarify how we should think about mitosis. In the past, mitosis was thought of as a blank slate, with no transcription and no structure related to gene activity. And we now know that that’s not quite the case,” says Anders Sejr Hansen, an associate professor of biological engineering at MIT. “What we see is that there’s always structure. It never goes away.”

The researchers also discovered that these regulatory loops appear to strengthen when chromosomes become more compact in preparation for cell division. This compaction brings genetic regulatory elements closer together and encourages them to stick together. This may help cells “remember” interactions present in one cell cycle and carry them into the next one.

“The findings help to bridge the structure of the genome to its function in managing how genes are turned on and off, which has been an outstanding challenge in the field for decades,” says Viraat Goel PhD ’25, the lead author of the study.

Hansen and Edward Banigan, a research scientist in MIT’s Institute for Medical Engineering and Science, are the senior authors of the paper, which appears today in Nature Structural and Molecular Biology. Leonid Mirny, a professor in MIT’s Institute for Medical Engineering and Science and the Department of Physics, and Gerd Blobel, a professor at the Perelman School of Medicine at the University of Pennsylvania, are also authors of the study.

A surprising finding

Over the past 20 years, scientists have discovered that inside the cell nucleus, DNA organizes itself into 3D loops. While many loops enable interactions between genes and regulatory regions that may be millions of base pairs away from each other, others are formed during cell division to compact chromosomes. Much of the mapping of these 3D structures has been done using a technique called Hi-C, originally developed by a team that included MIT researchers and was led by Job Dekker at the University of Massachusetts Chan Medical School. To perform Hi-C, researchers use enzymes to chop the genome into many small pieces and biochemically link pieces that are near each other in 3D space within the cell’s nucleus. They then determine the identities of the interacting pieces by sequencing them.
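
The sequenced pairs are commonly summarized as a contact matrix, in which each entry counts how often two genomic bins were found together. Below is a simplified sketch of that bookkeeping, with toy coordinates and none of the filtering or normalization a real Hi-C pipeline would apply.

```python
import numpy as np

# Hi-C output is, in essence, a list of genomic coordinate pairs that
# were ligated because they sat close together in 3D. Binning the
# pairs yields the familiar contact matrix.
bin_size = 10_000          # bases per bin (toy value)
n_bins = 50
contacts = np.zeros((n_bins, n_bins))

ligation_pairs = [(12_500, 18_000), (12_700, 480_000), (305_000, 310_000)]
for pos_a, pos_b in ligation_pairs:
    i, j = pos_a // bin_size, pos_b // bin_size
    contacts[i, j] += 1
    contacts[j, i] += 1    # the matrix is symmetric

# High values near the diagonal reflect linear proximity along the
# chromosome; off-diagonal enrichment hints at 3D features such as
# loops between distant loci.
print(contacts[1, 1], contacts[30, 31])
```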

However, that technique doesn’t have high enough resolution to pick out all specific interactions between genes and regulatory elements such as enhancers. Enhancers are short sequences of DNA that can help to activate the transcription of a gene by binding to the gene’s promoter — the site where transcription begins.

In 2023, Hansen and others developed a new technique that allows them to analyze 3D genome structures with 100 to 1,000 times greater resolution than was previously possible. This technique, known as Region-Capture Micro-C (RC-MC), uses a different enzyme that cuts the genome into small fragments of similar size. It also focuses on a smaller segment of the genome, allowing for high-resolution 3D mapping of a targeted genome region.

Using this technique, the researchers were able to identify a new kind of genome structure that hadn’t been seen before, which they called “microcompartments.” These are tiny, highly connected loops that form when enhancers and promoters located near each other stick together.

In that paper, experiments revealed that these loops were not formed by the same mechanisms that form other genome structures, but the researchers were unable to determine exactly how they do form. In hopes of answering that question, the team set out to study cells as they undergo cell division. During mitosis, chromosomes become much more compact, so that they can be duplicated, sorted, and divvied up between two daughter cells. As this happens, larger genome structures called A/B compartments and topologically associating domains (TADs) disappear completely.

The researchers believed that the microcompartments they had discovered would also disappear during mitosis. By tracking cells through the entire cell division process, they hoped to learn how the microcompartments appear after mitosis is completed.

“During mitosis, it has been thought that almost all gene transcription is shut off. And before our paper, it was also thought that all 3D structure related to gene regulation was lost and replaced by compaction. It’s a complete reset every cell cycle,” Hansen says.

However, to their surprise, the researchers found that microcompartments could still be seen during mitosis, and in fact they become more prominent as the cell goes through cell division.

“We went into this study thinking, well, the one thing we know for sure is that there’s no regulatory structure in mitosis, and then we accidentally found structure in mitosis,” Hansen says.

Using their technique, the researchers also confirmed that larger structures such as A/B compartments and TADs do disappear during mitosis, as had been seen before.

“This study leverages the unprecedented genomic resolution of the RC-MC assay to reveal new and surprising aspects of mitotic chromatin organization, which we have overlooked in the past using traditional 3C-based assays. The authors reveal that, contrary to the well-described dramatic loss of TADs and compartmentalization during mitosis, fine-scale “microcompartments” — nested interactions between active regulatory elements — are maintained or even transiently strengthened,” says Effie Apostolou, an associate professor of molecular biology in medicine at Weill Cornell Medicine, who was not involved in the study.

A spike in transcription

The findings may offer an explanation for a spike in gene transcription that usually occurs near the end of mitosis, the researchers say. Since the 1960s, it had been thought that transcription ceased completely during mitosis, but in 2016 and 2017, a few studies showed that cells undergo a brief spike of transcription, which is quickly suppressed until the cell finishes dividing.

In their new study, the MIT team found that during mitosis, microcompartments are more likely to be found near the genes that spike during cell division. They also discovered that these loops appear to form as a result of the genome compaction that occurs during mitosis. This compaction brings enhancers and promoters closer together, allowing them to stick together to form microcompartments.

Once formed, the loops that constitute microcompartments may activate gene transcription somewhat by accident, which is then shut off by the cell. When the cell finishes dividing, entering a state known as G1, many of these small loops become weaker or disappear.

“It almost seems like this transcriptional spiking in mitosis is an undesirable accident that arises from generating a uniquely favorable environment for microcompartments to form during mitosis,” Hansen says. “Then, the cell quickly prunes and filters many of those loops out when it enters G1.”

Because chromosome compaction can also be influenced by a cell’s size and shape, the researchers are now exploring how variations in those features affect the structure of the genome and in turn, gene regulation.

“We are thinking about some natural biological settings where cells change shape and size, and whether we can perhaps explain some 3D genome changes that previously lack an explanation,” Hansen says. “Another key question is how does the cell then pick what are the microcompartments to keep and what are the microcompartments to remove when you enter G1, to ensure fidelity of gene expression?”

The research was funded in part by the National Institutes of Health, a National Science Foundation CAREER Award, the Gene Regulation Observatory of the Broad Institute, a Pew-Steward Scholar Award for Cancer Research, the Mathers Foundation, the MIT Westaway Fund, the Bridge Project of the Koch Institute and Dana-Farber/Harvard Cancer Center, and the Koch Institute Support (core) Grant from the National Cancer Institute.


Book reviews technologies aiming to remove carbon from the atmosphere

In “Carbon Removal,” Howard Herzog and Niall Mac Dowell assess proposed methods of removing carbon already in the atmosphere as a means of mitigating climate change.


Two leading experts in the field of carbon capture and sequestration (CCS) — Howard J. Herzog, a senior research engineer in the MIT Energy Initiative, and Niall Mac Dowell, a professor in energy systems engineering at Imperial College London — explore methods for removing carbon dioxide already in the atmosphere in their new book, “Carbon Removal.” Published in October, the book is part of the Essential Knowledge series from the MIT Press, which consists of volumes “synthesizing specialized subject matter for nonspecialists” and includes Herzog’s 2018 book, “Carbon Capture.”

Burning fossil fuels, as well as other human activities, cause the release of carbon dioxide (CO2) into the atmosphere, where it acts like a blanket that warms the Earth, resulting in climate change. Much attention has focused on mitigation technologies that reduce emissions, but in their book, Herzog and Mac Dowell have turned their attention to “carbon dioxide removal” (CDR), an approach that removes carbon already present in the atmosphere.

In this new volume, the authors explain how CO2 naturally moves into and out of the atmosphere and present a brief history of carbon removal as a concept for dealing with climate change. They also describe the full range of “pathways” that have been proposed for removing CO2 from the atmosphere. Those pathways include engineered systems designed for “direct air capture” (DAC), as well as various “nature-based” approaches that call for planting trees or taking steps to enhance removal by biomass or the oceans. The book offers easily accessible explanations of the fundamental science and engineering behind each approach.

The authors compare the “quality” of the different pathways based on the following metrics:

Accounting. For public acceptance of any carbon-removal strategy, the authors note, the developers need to get the accounting right — and that’s not always easy. “If you’re going to spend money to get CO2 out of the atmosphere, you want to get paid for doing it,” notes Herzog. It can be tricky to measure how much you have removed, because there’s a lot of CO2 going in and out of the atmosphere all the time. Also, if your approach involves, say, burning fossil fuels, you must subtract the amount of CO2 that’s emitted from the total amount you claim to have removed. Then there’s the timing of the removal. With a DAC device, the removal happens right now, and the removed CO2 can be measured. “But if I plant a tree, it’s going to remove CO2 for decades. Is that equivalent to removing it right now?” Herzog queries. How to take that factor into account hasn’t yet been resolved.

Permanence. Different approaches keep the CO2 out of the atmosphere for different durations of time. How long is long enough? As the authors explain, this is one of the biggest issues, especially with nature-based solutions, where events such as wildfires or pestilence or land-use changes can release the stored CO2 back into the atmosphere. How do we deal with that?

Cost. Cost is another key factor. Using a DAC device to remove CO2 costs far more than planting trees, but it yields immediate removal of a measurable amount of CO2 that can then be locked away forever. How does one monetize that trade-off?

Additionality. “You’re doing this project, but would what you’re doing have been done anyway?” asks Herzog. “Is your effort additional to business as usual?” This question comes into play with many of the nature-based approaches involving trees, soils, and so on.

Permitting and governance. These issues are especially important — and complicated — with approaches that involve doing things in the ocean. In addition, Herzog points out that some CCS projects could also achieve carbon removal, but they would have a hard time getting permits to build the pipelines and other needed infrastructure.

The authors conclude that none of the CDR strategies now being proposed is a clear winner on all the metrics. However, they stress that carbon removal has the potential to play an important role in meeting our climate change goals — not by replacing our emissions-reduction efforts, but rather by supplementing them. Still, as Herzog and Mac Dowell make clear in their book, many challenges must be addressed to move CDR from today’s speculation to deployment at scale, and the book supports the wider discussion about how to move forward. Indeed, the authors have fulfilled their stated goal: “to provide an objective analysis of the opportunities and challenges for CDR and to separate myth from reality.”


Breaking the old model of education with MIT Open Learning

Free MIT study materials enabled 16-year-old Vivan Mirchandani’s nontraditional learning path, opening up scientific research and academic opportunities.


At an age when many kids prefer to play games on their phones, 11-year-old Vivan Mirchandani wanted to explore physics videos. Little did he know that MIT Open Learning’s free online resources would change the course of his life. 

Now, at 16, Mirchandani is well on his way to a career as a physics scholar — all because he forged his own unconventional educational journey.

Nontraditional education has granted Mirchandani the freedom to pursue topics he’s personally interested in. This year, he wrote a paper on cosmology that proposes a new framework for understanding Einstein’s general theory of relativity. Other projects include expanding on fluid dynamics laws for cats, training an AI model to resemble the consciousness of his late grandmother, and creating his own digital twin. That’s in addition to his regular studies, regional science fairs, Model United Nations delegation, and a TEDEd Talk.

Mirchandani started down this path between the ages of 10 and 12, when he decided to read books and find online content about physics during the early Covid-19 lockdown in India. He was shocked to find that MIT Open Learning offers free course videos, lecture notes, exams, and other resources from the Institute on sites like MIT OpenCourseWare and the newly launched MIT Learn.

“My first course was 8.01 (Classical Mechanics), and it completely changed how I saw physics,” Mirchandani says. “Physics sounded like elegance. It’s the closest we’ve ever come to have a theory of everything.”

Experiencing “real learning”

Mirchandani discovered MIT Open Learning through OpenCourseWare, which offers free, online, open educational resources from MIT undergraduate and graduate courses. He says MIT Open Learning’s “academically rigorous” content prepares learners to ask questions and think like a scientist.

“Instead of rote memorization, I finally experienced real learning,” Mirchandani says. “OpenCourseWare was a holy grail. Without it, I would still be stuck on the basic concepts.”

Wanting to follow in the footsteps of physicists like Sir Isaac Newton, Albert Einstein, and Stephen Hawking, Mirchandani decided at age 12 he would sacrifice his grade point average to pursue a nontraditional educational path that gave him hands-on experience in science.

“The education system doesn’t prepare you for actual scientific research, it prepares you for exams,” Mirchandani says. “What draws me to MIT Open Learning and OpenCourseWare is it breaks the old model of education. It’s not about sitting in a lecture hall, it’s about access and experimentation.”

With guidance from his physics teacher, Mirchandani built his own curriculum using educational materials on MIT OpenCourseWare to progress from classical physics to computer science to quantum physics. He has completed more than 27 online MIT courses to date.

“The best part of OpenCourseWare is you get to study from the greatest institution in the world, and you don’t have to pay for it,” he says.

Innovating in the real world

6.0001 (Introduction to Computer Science and Programming Using Python) and slides from 2.06 (Fluid Dynamics) gave Mirchandani the foundation to help with the family business, Dynamech Engineers, which sells machinery for commercial snack production. Some of the recent innovations he has assisted with include a zero-oil frying technology that cuts 300 calories per kilogram, a gas-based heat exchange system, and a simplified, singular machine combining the processes of two separate machines. Using the modeling techniques he learned through MIT OpenCourseWare, Mirchandani designed how these products would work without losing efficiency.

But when you ask Mirchandani which achievement he is most proud of, he’ll say it’s being one of 35 students accepted for the inaugural RSI-India cohort, an academic program for high school students modeled after the Research Science Institute program co-sponsored by MIT and the Center for Excellence in Education. Competing against other Indian students who had perfect scores on their board exams and SATs, he didn’t expect to get in, but the program valued the practical research experience he was able to pursue thanks to the knowledge he gained from his external studies.

“None of it would have happened without MIT OpenCourseWare,” he says. “It’s basically letting curiosity get the better of us. If everybody does that, we’d have a better scientific community.”


Method teaches generative AI models to locate personalized objects

After being trained with this technique, vision-language models can better identify a unique item in a new scene.


Say a person takes their French Bulldog, Bowser, to the dog park. Identifying Bowser as he plays among the other canines is easy for the dog owner to do while onsite.

But if someone wants to use a generative AI model like GPT-5 to monitor their pet while they are at work, the model could fail at this basic task. Vision-language models like GPT-5 often excel at recognizing general objects, like a dog, but they perform poorly at locating personalized objects, like Bowser the French Bulldog.    

To address this shortcoming, researchers from MIT, the MIT-IBM Watson AI Lab, the Weizmann Institute of Science, and elsewhere have introduced a new training method that teaches vision-language models to localize personalized objects in a scene.

Their method uses carefully prepared video-tracking data in which the same object is tracked across multiple frames. They designed the dataset so the model must focus on contextual clues to identify the personalized object, rather than relying on knowledge it previously memorized.

When given a few example images showing a personalized object, like someone’s pet, the retrained model is better able to identify the location of that same pet in a new image.

Models retrained with their method outperformed state-of-the-art systems at this task. Importantly, their technique leaves the rest of the model’s general abilities intact.

This new approach could help future AI systems track specific objects across time, like a child’s backpack, or localize objects of interest, such as a species of animal in ecological monitoring. It could also aid in the development of AI-driven assistive technologies that help visually impaired users find certain items in a room.

“Ultimately, we want these models to be able to learn from context, just like humans do. If a model can do this well, rather than retraining it for each new task, we could just provide a few examples and it would infer how to perform the task from that context. This is a very powerful ability,” says Jehanzeb Mirza, an MIT postdoc and senior author of a paper on this technique.

Mirza is joined on the paper by co-lead authors Sivan Doveh, a postdoc at Stanford University who was a graduate student at the Weizmann Institute of Science when this research was conducted, and Nimrod Shabtay, a researcher at IBM Research; James Glass, a senior research scientist and the head of the Spoken Language Systems Group in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL); and others. The work will be presented at the International Conference on Computer Vision.

An unexpected shortcoming

Researchers have found that large language models (LLMs) can excel at learning from context. If researchers feed an LLM a few examples of a task, like addition problems, it can learn to answer new addition problems based on the context that has been provided.
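
In practice, this means the task specification lives entirely in the prompt; no weights are updated. A generic sketch, not tied to any particular model or API:

```python
# In-context learning: the worked examples in the prompt define the
# task, and the model infers the pattern at inference time.
few_shot_prompt = """\
Q: 12 + 7   A: 19
Q: 33 + 45  A: 78
Q: 21 + 9   A:"""
# A capable LLM completes this prompt with "30", inferring the task
# purely from the two examples above.
```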

A vision-language model (VLM) is essentially an LLM with a visual component connected to it, so the MIT researchers thought it would inherit the LLM’s in-context learning capabilities. But this is not the case.

“The research community has not been able to find a black-and-white answer to this particular problem yet. The bottleneck could arise from the fact that some visual information is lost in the process of merging the two components together, but we just don’t know,” Mirza says.

The researchers set out to improve VLMs’ ability to do in-context localization, which involves finding a specific object in a new image. They focused on the data used to retrain existing VLMs for a new task, a process called fine-tuning.

Typical fine-tuning data are gathered from random sources and depict collections of everyday objects. One image might contain cars parked on a street, while another includes a bouquet of flowers.

“There is no real coherence in these data, so the model never learns to recognize the same object in multiple images,” he says.

To fix this problem, the researchers developed a new dataset by curating samples from existing video-tracking data. These data are video clips showing the same object moving through a scene, like a tiger walking across a grassland.

They cut frames from these videos and structured the dataset so each input would consist of multiple images showing the same object in different contexts, with example questions and answers about its location.

“By using multiple images of the same object in different contexts, we encourage the model to consistently localize that object of interest by focusing on the context,” Mirza explains.
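
A single training sample from such a dataset might look like the following hypothetical sketch; the field names and values are illustrative, not the authors’ actual schema.

```python
# One fine-tuning sample built from a video-tracking clip: several
# frames of the same object in different contexts, plus a localization
# question answered with a bounding box in the query frame.
sample = {
    "context_frames": ["clip0/frame_004.jpg", "clip0/frame_120.jpg",
                       "clip0/frame_233.jpg"],
    "context_boxes": [[412, 96, 580, 240],   # [x0, y0, x1, y1] pixels
                      [105, 130, 268, 290],
                      [300, 210, 462, 371]],
    "query_frame": "clip0/frame_310.jpg",
    "question": "Locate the same object shown in the earlier frames.",
    "answer_box": [88, 64, 251, 228],
}
```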

Forcing the focus

But the researchers found that VLMs tend to cheat. Instead of answering based on context clues, they will identify the object using knowledge gained during pretraining.

For instance, since the model already learned that an image of a tiger and the label “tiger” are correlated, it could identify the tiger crossing the grassland based on this pretrained knowledge, instead of inferring from context.

To solve this problem, the researchers used pseudo-names rather than actual object category names in the dataset. In this case, they changed the name of the tiger to “Charlie.”

“It took us a while to figure out how to prevent the model from cheating. But we changed the game for the model. The model does not know that ‘Charlie’ can be a tiger, so it is forced to look at the context,” he says.
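
One plausible way to implement that substitution is sketched below; this is a hedged illustration, and the paper’s actual preprocessing may differ.

```python
import random

# Anti-cheating step: strip real category names from the annotations
# so the model cannot fall back on pretrained label associations and
# must rely on the visual context instead.
PSEUDO_NAMES = ["Charlie", "Momo", "Kiwi", "Biscuit"]

def anonymize(annotation: dict) -> dict:
    alias = random.choice(PSEUDO_NAMES)
    out = dict(annotation)
    # e.g. "Where is the tiger?" becomes "Where is Charlie?"
    out["question"] = annotation["question"].replace(
        annotation["category"], alias)
    out["category"] = alias
    return out

print(anonymize({"category": "tiger", "question": "Where is the tiger?"}))
```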

The researchers also faced challenges in finding the best way to prepare the data. If the frames were too close together, the background would not change enough to provide data diversity.

In the end, fine-tuning VLMs with this new dataset improved accuracy at personalized localization by about 12 percent on average. When they included the dataset with pseudo-names, the performance gains reached 21 percent.

As model size increases, their technique leads to greater performance gains.

In the future, the researchers want to study possible reasons VLMs don’t inherit in-context learning capabilities from their base LLMs. In addition, they plan to explore additional mechanisms to improve the performance of a VLM without the need to retrain it with new data.

“This work reframes few-shot personalized object localization — adapting on the fly to the same object across new scenes — as an instruction-tuning problem and uses video-tracking sequences to teach VLMs to localize based on visual context rather than class priors. It also introduces the first benchmark for this setting with solid gains across open and proprietary VLMs. Given the immense significance of quick, instance-specific grounding — often without finetuning — for users of real-world workflows (such as robotics, augmented reality assistants, creative tools, etc.), the practical, data-centric recipe offered by this work can help enhance the widespread adoption of vision-language foundation models,” says Saurav Jha, a postdoc at the Mila-Quebec Artificial Intelligence Institute, who was not involved with this work.

Additional co-authors are Wei Lin, a research associate at Johannes Kepler University; Eli Schwartz, a research scientist at IBM Research; Hilde Kuehne, professor of computer science at Tuebingen AI Center and an affiliated professor at the MIT-IBM Watson AI Lab; Raja Giryes, an associate professor at Tel Aviv University; Rogerio Feris, a principal scientist and manager at the MIT-IBM Watson AI Lab; Leonid Karlinsky, a principal research scientist at IBM Research; Assaf Arbelle, a senior research scientist at IBM Research; and Shimon Ullman, the Samy and Ruth Cohn Professor of Computer Science at the Weizmann Institute of Science.

This research was funded, in part, by the MIT-IBM Watson AI Lab.


Darcy McRose and Mehtaab Sawhney ’20, PhD ’24 named 2025 Packard Fellows for Science and Engineering

McRose, an environmental microbiologist, is recognized for researching the ecological roles of antibiotics in shaping ecosystems, agriculture, and health.


The David and Lucile Packard Foundation has announced that two MIT affiliates have been named 2025 Packard Fellows for Science and Engineering. Darcy McRose, the Thomas D. and Virginia W. Cabot Career Development Assistant Professor in the MIT Department of Civil and Environmental Engineering, has been honored, along with Mehtaab Sawhney ’20, PhD ’24, a graduate of the Department of Mathematics who is now at Columbia University.

The honorees are among 20 junior faculty members recognized as the nation’s most innovative early-career scientists and engineers. Each Packard Fellow receives an unrestricted research grant of $875,000 over five years to support their pursuit of pioneering research and bold new ideas.

“I’m incredibly grateful and honored to be awarded a Packard Fellowship,” says McRose. “It will allow us to continue our work exploring how small molecules control microbial communities in soils and on plant roots, with much-appreciated flexibility to follow our imagination wherever it leads us.”

McRose and her lab study secondary metabolites — small organic molecules that microbes and plants release into soils. Often known as antibiotics, these compounds do far more than fight infections; they can help unlock soil nutrients, shape microbial communities around plant roots, and influence soil fertility.

“Antibiotics made by soil microorganisms are widely used in medicine, but we know surprisingly little about what they do in nature,” explains McRose. “Just as healthy microbiomes support human health, plant microbiomes support plant health, and secondary metabolites can help to regulate the microbial community, suppressing pathogens and promoting beneficial microbes.” 

Her lab integrates techniques from genetics, chemistry, and geosciences to investigate how these molecules shape interactions between microbes and plants in soil — one of Earth’s most complex and least-understood environments. By using secondary metabolites as experimental tools, McRose aims to uncover the molecular mechanisms that govern processes like soil fertility and nutrient cycling that are foundational to sustainable agriculture and ecosystem health.

Studying antibiotics in the environments where they evolved could also yield new strategies for combating soil-borne pathogens and improving crop resilience. “Soil is a true scientific frontier,” McRose says. “Studying these environments has the potential to reveal fascinating, fundamental insights into microbial life — many of which we can’t even imagine yet.”

A native of California, McRose earned her bachelor’s and master’s degrees from Stanford University, followed by a PhD in geosciences from Princeton University. Her graduate thesis focused on how bacteria acquire trace metals from the environment. Her postdoctoral research on secondary metabolites at Caltech was supported by multiple fellowships, including the Simons Foundation Marine Microbial Ecology Postdoctoral Fellowship, the L’Oréal USA For Women in Science Fellowship, and a Division Fellowship from Biology and Biological Engineering at Caltech.

McRose joined the MIT faculty in 2022. In 2025, she was named a Sloan Foundation Research Fellow in Earth System Science and awarded the Maseeh Excellence in Teaching Award.

Past Packard Fellows have gone on to earn the highest honors, including Nobel Prizes in chemistry and physics, the Fields Medal, Alan T. Waterman Awards, Breakthrough Prizes, Kavli Prizes, and election to the National Academies of Sciences, Engineering, and Medicine. Each year, the foundation reviews 100 nominations from 50 invited institutions. The Packard Fellowships Advisory Panel, a group of 12 internationally recognized scientists and engineers, evaluates the nominations and recommends 20 fellows for approval by the Packard Foundation Board of Trustees.


MIT-Toyota collaboration powers driver assistance in millions of vehicles

A decade-plus alliance between MIT’s AgeLab and Toyota’s Collaborative Safety Research Center is recognized as a key contributor to advancements in automotive safety and human-machine interaction.


A decade-plus collaboration between MIT’s AgeLab and the Toyota Motor Corporation is recognized as a key contributor to advancements in automotive safety and human-machine interaction. Through the AgeLab at the MIT Center for Transportation and Logistics (CTL), researchers have collected and analyzed vast real-world driving datasets that have helped inform Toyota’s vehicle design and safety systems.

Toyota recently marked the completion of its 100th project through the Collaborative Safety Research Center (CSRC), celebrating MIT’s role in shaping technologies that enhance driver-assistance features and continue to forge the path for automated mobility. A key foundation for the 100th project is CSRC’s ongoing support for MIT CTL’s Advanced Vehicle Technology (AVT) Consortium.

Real-world data, real-world impact

“AVT was conceptualized over a decade ago as an academic-industry partnership to promote shared investment in real-world, naturalistic data collection, analysis, and collaboration — efforts aimed at advancing safer, more convenient, and more comfortable automobility,” says Bryan Reimer, founder and co-director of AVT. “Since its founding, AVT has drawn together over 25 organizations — including vehicle manufacturers, suppliers, insurers, and consumer research groups — to invest in understanding how automotive technologies function, how they influence driver behavior, and where further innovation is needed. This work has enabled stakeholders like Toyota to make more-informed decisions in product development and deployment.”

“CSRC’s 100th project marks a significant milestone in our collaboration,” Reimer adds. “We deeply value CSRC’s sustained investment, and commend the organization’s commitment to global industry impact and the open dissemination of research to advance societal benefit.”

“Toyota, through its Collaborative Safety Research Center, is proud to be a founding member of the AVT Consortium,” says Jason Hallman, senior manager of Toyota CSRC. “Since 2011, CSRC has collaborated with researchers such as AVT and MIT AgeLab on projects that help inform future products and policy, and to promote a future safe mobility society for all. The AVT specifically has helped us to study the real-world use of several vehicle technologies now available.”

Among these technologies are lane-centering assistance and adaptive cruise control — widely used features that benefit from an understanding of how drivers interact with automation. “AVT uniquely combines vehicle and driver data to help inform future products and highlight the interplay between the performance of these features and the drivers using them,” says Josh Domeyer, principal scientist at CSRC.

Influencing global standards and Olympic-scale innovation

Insights from MIT’s pedestrian-driver interaction research with CSRC also helped shape Toyota’s automated vehicle communication systems. “These data helped develop our foundational understanding that drivers and pedestrians use their movements to communicate during routine traffic encounters,” says Domeyer. “This concept informed the deployment of Toyota’s e-Palette at the Tokyo Olympics, and it has been captured as a best practice in an ISO standard for automated driving system communication.”

The AVT Consortium’s naturalistic driving datasets continue to serve as a foundation for behavioral safety strategies. From identifying moments of distraction to understanding how drivers multitask behind the wheel, the work is guiding subtle but impactful design considerations.

“By studying the natural behaviors of drivers and their contexts in the AVT datasets, we hope to identify new ways to encourage safe habits that align with customer preferences,” Domeyer says. “These can include subtle nudges, or modifications to existing vehicle features, or even communication and education partnerships outside of Toyota that reinforce these safe driving habits.”

Professor Yossi Sheffi, director of MIT CTL, comments, “This partnership exemplifies the impact of MIT collaborative research on industry to make real, practical innovation possible.” 

A model for industry-academic collaboration

Founded in 2015, the AVT Consortium brings together automotive manufacturers, suppliers, and insurers to accelerate research in driver behavior, safety, and the transition toward automated systems. The consortium’s interdisciplinary approach — integrating engineering, human factors, and data science — has helped generate one of the world’s most distinctive and actionable real-world driving datasets.

As Toyota celebrates its research milestone, MIT reflects on a partnership that exemplifies the power of industry-academic collaboration to shape safer, smarter mobility.


MIT engineers solve the sticky-cell problem in bioreactors and other industries

Their system uses electrochemically generated bubbles to detach cells from surfaces, which could accelerate the growth of carbon-absorbing algae and lifesaving cell therapies.


To help mitigate climate change, companies are using bioreactors to grow algae and other microorganisms that are hundreds of times more efficient at absorbing CO2 than trees. Meanwhile, in the pharmaceutical industry, cell culture is used to manufacture biologic drugs and other advanced treatments, including lifesaving gene and cell therapies.

Both processes are hampered by cells’ tendency to stick to surfaces, which leads to a huge amount of waste and downtime for cleaning. A similar problem slows down biofuel production, interferes with biosensors and implants, and makes the food and beverage industry less efficient.

Now, MIT researchers have developed an approach for detaching cells from surfaces on demand, using electrochemically generated bubbles. In an open-access paper published in Science Advances, the researchers demonstrated their approach in a lab prototype and showed it could work across a range of cells and surfaces without harming the cells.

“We wanted to develop a technology that could be high-throughput and plug-and-play, and that would allow cells to attach and detach on demand to improve the workflow in these industrial processes,” says Professor Kripa Varanasi, senior author of the study. “This is a fundamental issue with cells, and we’ve solved it with a process that can scale. It lends itself to many different applications.”

Joining Varanasi on the study are co-first authors Bert Vandereydt, a PhD student in mechanical engineering, and former postdoc Baptiste Blanc.

Solving a sticky problem

The researchers began with a mission.

“We’ve been working on figuring out how we can efficiently capture CO2 across different sources and convert it into valuable products for various end markets,” Varanasi says. “That’s where this photobioreactor and cell detachment comes into the picture.”

Photobioreactors are used to grow carbon-absorbing algae cells by creating tightly controlled environments involving water and sunlight. They feature long, winding tubes with clear surfaces to let in the light algae need to grow. When algae stick to those surfaces, they block out the light, requiring cleaning.

“You have to shut down and clean up the entire reactor as frequently as every two weeks,” Varanasi says. “It’s a huge operational challenge.”

The researchers realized other industries have a similar problem due to many cells’ natural adhesion, or stickiness. Each industry has its own solution for cell adhesion depending on how important it is that the cells survive. Some people scrape the surfaces clean, while others use special coatings that are toxic to cells.

In the pharmaceutical and biotech industries, cell detachment is typically carried out using enzymes. However, this method poses several challenges — it can damage cell membranes, is time-consuming, and requires large amounts of consumables, resulting in millions of liters of biowaste.

To create a better solution, the researchers began by studying other efforts to clear surfaces with bubbles, which mainly involved spraying bubbles onto surfaces and had been largely ineffective.

“We realized we needed the bubbles to form on the surfaces where we don’t want these cells to stick, so when the bubbles detach it creates a local fluid flow that creates shear stress at the interface and removes the cells,” Varanasi explains.

Electric currents generate bubbles by splitting water into hydrogen and oxygen. But previous attempts at using electricity to detach cells were hampered because cell culture media contain sodium chloride, which forms bleach when an electric current passes through. The bleach damages the cells, making the approach impractical for many applications.

“The culprit is the anode — that’s where the sodium chloride turns to bleach,” Vandereydt says. “We figured if we could separate that electrode from the rest of the system, we could prevent bleach from being generated.”

To make a better system, the researchers built a 3-square-inch glass surface and deposited a gold electrode on top of it. The layer of gold is so thin it doesn’t block out light. To keep the other electrode separate, the researchers integrated a special membrane that only allows protons to pass through. The setup allowed the researchers to send a current through without generating bleach.

To test their setup, they allowed algae cells from a concentrated solution to stick to the surfaces. When they applied a voltage, the bubbles separated the cells from the surfaces without harming them.

The researchers also studied the interaction between the bubbles and cells, finding that the higher the current density, the more bubbles were created and the more algae were removed. They developed a model for understanding how much current would be needed to remove algae in different settings and matched it with results from experiments involving algae as well as ovarian cancer and bone cells.
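The link between current and bubbles follows from basic electrochemistry: each hydrogen molecule produced at the cathode consumes two electrons, so gas output scales with current. The back-of-the-envelope sketch below illustrates that scaling; the paper’s actual model of bubble-driven shear is more detailed and is not reproduced here.

```python
# Back-of-the-envelope estimate of bubble generation: hydrogen output
# at a cathode scales with current via Faraday's law (2 electrons per
# H2 molecule). Illustrates why higher current density means more
# bubbles; the study's shear-stress model is more detailed.

F = 96485.0  # Faraday constant, C/mol

def h2_generation_rate(current_density):
    """Moles of H2 per square meter per second for a current density
    given in A/m^2 (reaction: 2 H2O + 2 e- -> H2 + 2 OH-)."""
    return current_density / (2 * F)

for j in (10.0, 50.0, 200.0):  # illustrative current densities, A/m^2
    print(f"{j:6.1f} A/m^2 -> {h2_generation_rate(j):.2e} mol H2/(m^2*s)")
```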

“Mammalian cells are orders of magnitude more sensitive than algae cells, but even with those cells, we were able to detach them with no impact to the viability of the cell,” Vandereydt says.

Getting to scale

The researchers say their system could represent a breakthrough in applications where bleach or other chemicals would harm cells. That includes pharmaceutical and food production.

“If we can keep these systems running without fouling and other problems, then we can make them much more economical,” Varanasi says.

For cell culture plates used in the pharmaceutical industry, the team envisions their system comprising an electrode that could be robotically moved from one culture plate to the next, to detach cells as they’re grown. It could also be coiled around algae harvesting systems.

“This has general applicability because it doesn’t rely on any specific biological or chemical treatments, but on a physical force that is system-agnostic,” Varanasi says. “It’s also highly scalable to a lot of different processes, including particle removal.”

Varanasi cautions there is much work to be done to scale up the system. But he hopes it can one day make algae and other cell harvesting more efficient.

“The burning problem of our time is to somehow capture CO2 in a way that’s economically feasible,” Varanasi says. “These photobioreactors could be used for that, but we have to overcome the cell adhesion problem.”

The work was supported, in part, by Eni S.p.A. through the MIT Energy Initiative, the Belgian American Educational Foundation Fellowship, and the Maria Zambrano Fellowship.


Why some quantum materials stall while others scale

In a new study, MIT researchers evaluated quantum materials’ potential for scalable commercial success — and identified promising candidates.


People tend to think of quantum materials — whose properties arise from quantum mechanical effects — as exotic curiosities. But some quantum materials have become a ubiquitous part of our computer hard drives, TV screens, and medical devices. Still, the vast majority of quantum materials never accomplish much outside of the lab.

What makes certain quantum materials commercial successes and others commercially irrelevant? If researchers knew, they could direct their efforts toward more promising materials — a big deal since they may spend years studying a single material.

Now, MIT researchers have developed a system for evaluating the scale-up potential of quantum materials. Their framework combines a material’s quantum behavior with its cost, supply chain resilience, environmental footprint, and other factors. The researchers used their framework to evaluate over 16,000 materials, finding that the materials with the highest quantum fluctuations in their electrons also tend to be more expensive and environmentally damaging. The researchers also identified a set of materials that achieve a balance between quantum functionality and sustainability for further study.

The team hopes their approach will help guide the development of more commercially viable quantum materials that could be used for next-generation microelectronics, energy harvesting applications, medical diagnostics, and more.

“People studying quantum materials are very focused on their properties and quantum mechanics,” says Mingda Li, associate professor of nuclear science and engineering and the senior author of the work. “For some reason, they have a natural resistance during fundamental materials research to thinking about the costs and other factors. Some told me they think those factors are too ‘soft’ or not related to science. But I think within 10 years, people will routinely be thinking about cost and environmental impact at every stage of development.”

The paper appears in Materials Today. Joining Li on the paper are co-first authors and PhD students Artittaya Boonkird, Mouyang Cheng, and Abhijatmedhi Chotrattanapituk, along with PhD students Denisse Cordova Carrizales and Ryotaro Okabe; former graduate research assistants Thanh Nguyen and Nathan Drucker; postdoc Manasi Mandal; Instructor Ellan Spero of the Department of Materials Science and Engineering (DMSE); Professor Christine Ortiz of DMSE; Professor Liang Fu of the Department of Physics; Professor Tomas Palacios of the Department of Electrical Engineering and Computer Science (EECS); Associate Professor Farnaz Niroui of EECS; Assistant Professor Jingjie Yeo of Cornell University; and PhD student Vsevolod Belosevich and Assistant Professor Qiong Ma of Boston College.

Materials with impact

Cheng and Boonkird say that materials science researchers often gravitate toward quantum materials with the most exotic quantum properties rather than the ones most likely to be used in products that change the world.

“Researchers don’t always think about the costs or environmental impacts of the materials they study,” Cheng says. “But those factors can make them impossible to do anything with.”

Li and his collaborators wanted to help researchers focus on quantum materials with more potential to be adopted by industry. For this study, they developed methods for evaluating factors like a material’s price and environmental impact based on its constituent elements and common practices for mining and processing those elements. At the same time, they quantified each material’s level of “quantumness” using an AI model created by the same group last year, built on a concept termed quantum weight that was proposed by MIT professor of physics Liang Fu.

“For a long time, it’s been unclear how to quantify the quantumness of a material,” Fu says. “Quantum weight is very useful for this purpose. Basically, the higher the quantum weight of a material, the more quantum it is.”

The researchers focused on a class of quantum materials with exotic electronic properties known as topological materials, eventually assigning over 16,000 materials scores on environmental impact, price, import resilience, and more.

For the first time, the researchers found a strong correlation between a material’s quantum weight and how expensive and environmentally damaging it is.

“That’s useful information because the industry really wants something very low-cost,” Spero says. “We know what we should be looking for: high quantum weight, low-cost materials. Very few materials being developed meet those criteria, and that likely explains why they don’t scale to industry.”

The researchers identified 200 environmentally sustainable materials and further refined the list down to 31 material candidates that achieved an optimal balance of quantum functionality and high-potential impact.
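The screening step can be pictured as a multi-criteria filter and ranking. The sketch below uses invented names, weights, and thresholds purely to illustrate the shape of such a computation; the study’s actual scoring functions differ.

```python
# Illustrative sketch of the screening idea: combine quantum weight
# with sustainability metrics and keep materials that balance both.
# Names, weights, and thresholds here are invented for clarity; the
# study's actual scoring functions are different.

materials = [
    {"name": "A", "quantum_weight": 8.2, "cost": 0.9, "env_impact": 0.8},
    {"name": "B", "quantum_weight": 6.5, "cost": 0.2, "env_impact": 0.3},
    {"name": "C", "quantum_weight": 2.1, "cost": 0.1, "env_impact": 0.1},
]

def balanced_score(m, w_quantum=1.0, w_cost=0.5, w_env=0.5):
    # Higher quantum weight is better; higher cost and impact are worse.
    return (w_quantum * m["quantum_weight"]
            - w_cost * m["cost"]
            - w_env * m["env_impact"])

# Apply a sustainability cutoff, then rank the survivors.
shortlist = sorted(
    (m for m in materials if m["env_impact"] < 0.5),
    key=balanced_score,
    reverse=True,
)
print([m["name"] for m in shortlist])  # e.g. ['B', 'C']
```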

The researchers also found that several widely studied materials exhibit high environmental impact scores, indicating they will be hard to scale sustainably. “Considering the scalability of manufacturing and environmental availability and impact is critical to ensuring practical adoption of these materials in emerging technologies,” says Niroui.

Guiding research

Many of the topological materials evaluated in the paper have never been synthesized, which limited the accuracy of the study’s environmental and cost predictions. But the authors are already working with companies to study some of the promising materials identified in the paper.

“We talked with people at semiconductor companies that said some of these materials were really interesting to them, and our chemist collaborators also identified some materials they find really interesting through this work,” Palacios says. “Now we want to experimentally study these cheaper topological materials to understand their performance better.”

“Solar cells have an efficiency limit of 34 percent, but many topological materials have a theoretical limit of 89 percent. Plus, you can harvest energy across all electromagnetic bands, including our body heat,” Fu says. “If we could reach those limits, you could easily charge your cell phone using body heat. These are performances that have been demonstrated in labs, but could never scale up. That’s the kind of thing we’re trying to push forward.”

This work was supported, in part, by the National Science Foundation and the U.S. Department of Energy.


Earthquake damage at deeper depths occurs long after initial activity

While the Earth’s upper crust recovers quickly from seismic activity, new research finds the mid-crust recovers much more slowly, if at all.


Earthquakes often bring to mind images of destruction, of the Earth breaking open and altering landscapes. But after an earthquake, the area around it undergoes a period of post-seismic deformation, where areas that didn’t break experience new stress as a result of the sudden change in the surroundings. Once it has adjusted to this new stress, it reaches a state of recovery.

Geologists have often thought that this recovery period was a smooth, continuous process. But MIT research published recently in Science has found evidence that while healing occurs quickly at shallow depths — roughly above 10 km — deeper depths recover more slowly, if at all.

“If you were to look before and after in the shallow crust, you wouldn’t see any permanent change. But there’s this very permanent change that persists in the mid-crust,” says Jared Bryan, a graduate student in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) and lead author on the paper.

The paper’s other authors include EAPS Professor William Frank and Pascal Audet from the University of Ottawa.

Everything but the quakes

In order to assemble a full understanding of how the crust behaves before, during, and after an earthquake sequence, the researchers looked at seismic data from the 2019 Ridgecrest earthquakes in California. This immature fault zone experienced the largest earthquake in the state in 20 years, and tens of thousands of aftershocks over the following year. They then removed seismic data created by the sequence and only looked at waves generated by other seismic activity around the world to see how their paths through the Earth changed before and after the sequence.

“One person’s signal is another person’s noise,” says Bryan. They also used general ambient noise from sources like ocean waves and traffic, which is also picked up by seismometers. Then, using a technique called receiver function analysis, they were able to see the speed of the waves as they traveled and how it changed due to conditions in the Earth, such as rock density and porosity, much in the same way we use sonar to see how acoustic waves change when they interact with objects. With all this information, they were able to construct basic maps of the Earth around the Ridgecrest fault zone before and after the sequence.
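One building block of this kind of before-and-after comparison is measuring how a repeating arrival shifts in time between records, for example by cross-correlation; a delayed arrival along the same path implies a slower wave speed. The toy sketch below illustrates only that core idea, not the authors’ receiver-function pipeline.

```python
import numpy as np

# Toy illustration: estimate the time shift of a repeating arrival
# between a "before" and "after" record via cross-correlation. A later
# arrival along the same path implies slower wave speeds. The study's
# receiver-function analysis is more involved; this is just the core idea.

fs = 100.0                       # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)

def pulse(t0):
    # Synthetic arrival: a narrow Gaussian pulse centered at time t0.
    return np.exp(-((t - t0) ** 2) / 0.02)

before = pulse(4.00)             # arrival at 4.00 s
after = pulse(4.05)              # same arrival, delayed by 50 ms

corr = np.correlate(after, before, mode="full")
lag = (np.argmax(corr) - (len(t) - 1)) / fs
print(f"Measured delay: {lag * 1e3:.0f} ms")  # ~50 ms -> slower medium
```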

What they found was that the shallow crust, extending about 10 km into the Earth, recovered over the course of a few months. In contrast, deeper depths in the mid-crust didn’t experience immediate damage, but rather changed over the same timescale as shallow depths recovered.

“What was surprising is that the healing in the shallow crust was so quick, and then you have this complementary accumulation occurring, not at the time of the earthquake, but instead over the post-seismic phase,” says Bryan.

Balancing the energy budget

Understanding how recovery plays out at different depths is crucial for determining how energy is spent during different parts of the seismic process, which includes activities such as the release of energy as waves, the creation of new fractures, or energy being stored elastically in the surrounding areas. Together, these are known as the energy budget, a useful tool for understanding how damage accumulates and recovers over time.
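Schematically, that accounting can be written as a partition of the released energy (an illustrative form, not the paper’s formal treatment):

```latex
% Illustrative energy-budget partition for an earthquake: released
% strain energy goes into radiated waves, the creation of new fracture
% surfaces, elastic energy stored in the surroundings, and other sinks.
\Delta W = E_{\mathrm{radiated}} + E_{\mathrm{fracture}} + E_{\mathrm{stored}} + \cdots
```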

What remains unclear is the timescales at which deeper depths recover, if at all. The paper presents two possible scenarios to explain why that might be: one in which the deep crust recovers over a much longer timescale than they observed, or one where it never recovers at all.

“Either of those are not what we expected,” says Frank. “And both of them are interesting.”

Further research will require more observations to build out a more detailed picture to see at what depth the change becomes more pronounced. In addition, Bryan wants to look at other areas, such as more mature faults that experience higher levels of seismic activity, to see if it changes the results.

“We’ll let you know in 1,000 years whether it’s recovered,” says Bryan.


Engineering next-generation fertilizers

MIT postdoc Giorgio Rizzo harnesses plant chemistry to design sustainable fertilizers that could reshape modern farming.


Born in Palermo, Sicily, Giorgio Rizzo spent his childhood curious about the natural world. “I have always been fascinated by nature and how plants and animals can adapt and survive in extreme environments,” he says. “Their highly tuned biochemistry, and their incredible ability to create some of the most complex and beautiful structures in chemistry that we still can’t even achieve in our laboratories.”

As an undergraduate student, he watched as a researcher mounted a towering chromatography column layered with colorful plant chemicals in a laboratory. When the researcher switched on a UV light, the colors turned into fluorescent shades of blue, green, red, and pink. “I realized in that exact moment that I wanted to be the same person, separating new unknown compounds from a rare plant with potential pharmaceutical properties,” he recalls.

These experiences set him on a path from a master’s degree in organic chemistry to his current work as a postdoc in the MIT Department of Civil and Environmental Engineering, where he focuses on developing sustainable fertilizers and studying how rare earth elements can boost plant resilience, with the aim of reducing agriculture’s environmental impact.

In the lab of MIT Professor Benedetto Marelli, Rizzo studies plant responses to environmental stressors, such as heat, drought, and prolonged UV irradiation. This includes developing new fertilizers that can be applied as seed coating to help plants grow stronger and enhance their resistance.

“We are working on new formulations of fertilizers that aim to reduce the huge environmental impact of classical practices in agriculture based on NPK inorganic fertilizers,” Rizzo explains. Although these fertilizers are fundamental to crop yields, their tendency to accumulate in soil is detrimental to soil health and the microbiome living in it. In addition, producing NPK (nitrogen, phosphorus, and potassium) fertilizers is one of the most energy-consuming and polluting chemical processes in the world.

“It is mandatory to reshape our conception of fertilizers and try to rely, at least in part, on alternative products that are safer, cheaper, and more sustainable,” he says.

Recently, Rizzo was awarded a Kavanaugh Fellowship, a program administered by the Department of Materials Science and Engineering that gives MIT graduate students and postdocs entrepreneurial training and resources to bring their research from the lab to the market. “This prestigious fellowship will help me build a concrete product for a company, adding more value to our research,” he says.

Rizzo hopes their work will help farmers increase their crop yields without compromising soil quality or plant health. A major barrier to adopting new fertilizers is cost, as many farmers rely heavily on each growing season’s output and cannot risk investing in products that may underperform compared to traditional NPK fertilizers. The fertilizers being developed in the Marelli Lab address this challenge by using chitin and chitosan, abundant natural materials that make them far less expensive to produce, which Rizzo hopes will encourage farmers to try them.

“Through the Kavanaugh Fellowship, I will spend this year trying to bring the technology outside the lab to impact the world and meet the need for farmers to support their prosperity,” he says.

Mentorship has been a defining part of his postdoc experience. Rizzo describes Professor Benedetto Marelli as “an incredible mentor” who values his research interests and supports him through every stage of his work. The lab spans a wide range of projects — from plant growth enhancement and precision chemical delivery to wastewater treatment, vaccine development for fish, and advanced biochemical processes. “My colleagues created a stimulating environment with different research topics,” he notes. He is also grateful for the work he does with international institutions, which has helped him build a network of researchers and academics around the world.

Rizzo enjoys the opportunity to mentor students in the lab and appreciates their curiosity and willingness to learn. “It is one of the greatest qualities you can have as a scientist because you must be driven by curiosity to discover the unexpected,” he says.

He describes MIT as a “dynamic and stimulating experience,” but also acknowledges how overwhelming it can be. “You will feel like a small fish in a big ocean,” he says. “But that is exactly what MIT is: an ocean full of opportunities and challenges that are waiting to be solved.”

Beyond his professional work, Rizzo enjoys nature and the arts. An avid reader, he balances his scientific work with literature and history. “I never read about science-related topics — I read about them a lot already for my job,” he says. “I like classic literature, novels, essays, history of nations, and biographies. Often you can find me wandering in museums’ art collections.” Classical art, the Renaissance, and the Pre-Raphaelites are his favorite artistic movements.

Looking ahead, Rizzo hopes to shift his professional pathway toward startups or companies focused on agrotechnical improvement. His immediate goal is to contribute to initiatives where research has a direct, tangible impact on everyday life.

“I want to pursue the option of being part of a spinout process that would enable my research to have a direct impact in everyday life and help solve agricultural issues,” he adds.


Optimizing food subsidies: Applying digital platforms to maximize nutrition

An algorithm can change the face of food assistance policy in the Global South, says MIT assistant professor and J-WAFS researcher Ali Aouad.


Oct. 16 is World Food Day, a global campaign to celebrate the founding of the Food and Agriculture Organization 80 years ago, and to work toward a healthy, sustainable, food-secure future. More than 670 million people in the world are facing hunger. Millions of others face rising obesity rates and struggle to get the healthy food needed for proper nutrition.

World Food Day calls on not only world governments, but also business, academia, the media, and young people to take action to promote resilient food systems and combat hunger. This year, the Abdul Latif Jameel Water and Food Systems Laboratory (J-WAFS) is spotlighting an MIT researcher who is working toward this goal by studying food and water systems in the Global South.

J-WAFS seed grants provide funding to early-stage research projects that are distinct from prior work. In the 11th round of seed grant funding in 2025, 10 MIT faculty members received support to carry out their cutting-edge water and food research. Ali Aouad PhD ’17, assistant professor of operations management at the MIT Sloan School of Management, was one of those grantees. “I had searched, before joining MIT, what kind of research centers and initiatives were available that tried to coalesce research on food systems,” Aouad says. “And so, I was very excited about J-WAFS.”

Aouad gathered more information about J-WAFS at the new faculty orientation session in August 2024, where he spoke to J-WAFS staff and learned about the program’s grant opportunities for water and food research. Later that fall semester, he attended a few J-WAFS seminars on agricultural economics and water resource management. That’s when Aouad knew that his project was perfectly aligned with the J-WAFS mission of securing humankind’s water and food.

Aouad’s seed project focuses on food subsidies. He has a background in operations research and an interest in digital platforms, and much of his work has centered on aligning supply-side operations with heterogeneous customer preferences. Past projects include work on retail and matching systems. “I started thinking that these types of demand-driven approaches may be also very relevant to important social challenges, particularly as they relate to food security,” Aouad says. Before starting his PhD at MIT, Aouad worked on projects that looked at subsidies for smallholder farmers in low- and middle-income countries. “I think in the back of my mind, I’ve always been fascinated by trying to solve these issues,” he notes.

His seed grant project, “Optimal subsidy design: Application to food assistance programs,” aims to leverage data on preferences and purchasing habits from local grocery stores in India to inform food assistance policy and optimize the design of subsidies. Typical data collection systems, like point-of-sale terminals, are not as readily available in India’s local groceries, making this type of data on low-income individuals hard to come by. “Mom-and-pop stores are extremely important last-mile operators when it comes to nutrition,” he explains.

For this project, the research team gave local grocers point-of-sale scanners to track purchasing habits. “We aim to develop an algorithm that converts these transactions into some sort of ‘revelation’ of the individuals’ latent preferences,” says Aouad. “As such, we can model and optimize the food assistance programs — how much variety and flexibility is offered, taking into account the expected demand uptake.” He continues, “Now, of course, our ability to answer detailed design questions [across various products and prices] depends on the quality of our inference from the data, and so this is where we need more sophisticated and robust algorithms.”
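As a highly simplified illustration of that inference step, purchase shares can be read as revealed preferences under a multinomial-logit view, in which log-shares recover utilities up to an additive constant. The sketch below uses invented product names and counts; the team’s algorithm is substantially more sophisticated, accounting for prices, assortments, and noise.

```python
import math

# Highly simplified sketch of the inference idea: recover a latent
# "preference weight" per product from point-of-sale purchase counts.
# Under a multinomial-logit view, observed shares are a softmax of
# utilities, so log-shares recover utilities up to a constant.
# Product names and counts are invented for illustration.

purchases = {"rice": 120, "lentils": 80, "cooking_oil": 45, "millet": 15}

total = sum(purchases.values())
shares = {item: n / total for item, n in purchases.items()}

# Utility estimates up to an additive constant; anchor rice at zero.
u_rice = math.log(shares["rice"])
utilities = {item: math.log(s) - u_rice for item, s in shares.items()}

# A subsidy design could then weigh these latent preferences against
# nutritional value when choosing which products to subsidize.
print(utilities)
```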

Following the data collection and model development, the ultimate goal of this research is to inform policy surrounding food assistance programs through an “optimization approach.” Aouad describes the complexities of using optimization to guide policy. “Policies are often informed by domain expertise, legacy systems, or political deliberation. A lot of researchers build rigorous evidence to inform food policy, but it’s fair to say that the kind of approach that I’m proposing in this research is not something that is commonly used. I see an opportunity for bringing a new approach and methodological tradition to a problem that has been central for policy for many decades.” 

The overall health of consumers is the reason food assistance programs exist, yet measuring long-term nutritional impacts and shifts in purchase behavior is difficult. In past research, Aouad notes that the short-term effects of food assistance interventions can be significant. However, these effects are often short-lived. “This is a fascinating question that I don’t think we will be able to address within the space of interventions that we will be considering. However, I think it is something I would like to capture in the research, and maybe develop hypotheses for future work around how we can shift nutrition-related behaviors in the long run.”

While his project develops a new methodology to calibrate food assistance programs, large-scale application is not guaranteed. “A lot of what drives subsidy mechanisms and food assistance programs is also, quite frankly, how easy it is and how cost-effective it is to implement these policies in the first place,” comments Aouad. Cost and infrastructure barriers are unavoidable in this kind of policy research, as is the challenge of sustaining these programs. Aouad’s effort will provide insights into customer preferences and subsidy optimization in a pilot setup, but replicating this approach at real scale may be costly. Aouad hopes to gather proxy information from customers that would both feed into the model and point to a more cost-effective way to collect data for large-scale implementation.

There is still much work to be done to ensure food security for all, whether it’s advances in agriculture, food-assistance programs, or ways to boost adequate nutrition. As the 2026 seed grant deadline approaches, J-WAFS will continue its mission of supporting MIT faculty as they pursue innovative projects that have practical and real impacts on water and food system challenges.


Checking the quality of materials just got easier with a new AI tool

Acting as a “virtual spectrometer,” SpectroGen generates spectroscopic data in any modality, such as X-ray or infrared, to quickly assess a material’s quality.


Manufacturing better batteries, faster electronics, and more effective pharmaceuticals depends on the discovery of new materials and the verification of their quality. Artificial intelligence is helping with the former, with tools that comb through catalogs of materials to quickly tag promising candidates.

But once a material is made, verifying its quality still involves scanning it with specialized instruments to validate its performance — an expensive and time-consuming step that can hold up the development and distribution of new technologies.

Now, a new AI tool developed by MIT engineers could help clear the quality-control bottleneck, offering a faster and cheaper option for certain materials-driven industries.

In a study appearing today in the journal Matter, the researchers present “SpectroGen,” a generative AI tool that turbocharges scanning capabilities by serving as a virtual spectrometer. The tool takes in “spectra,” or measurements of a material in one scanning modality, such as infrared, and generates what that material’s spectra would look like if it were scanned in an entirely different modality, such as X-ray. The AI-generated spectral results match, with 99 percent accuracy, the results obtained from physically scanning the material with the new instrument.

Certain spectroscopic modalities reveal specific properties in a material: Infrared reveals a material’s molecular groups, while X-ray diffraction visualizes the material’s crystal structures, and Raman scattering illuminates a material’s molecular vibrations. Each of these properties is essential in gauging a material’s quality and typically requires tedious workflows on multiple expensive and distinct instruments to measure.

With SpectroGen, the researchers envision that a diversity of measurements can be made using a single and cheaper physical scope. For instance, a manufacturing line could carry out quality control of materials by scanning them with a single infrared camera. Those infrared spectra could then be fed into SpectroGen to automatically generate the material’s X-ray spectra, without the factory having to house and operate a separate, often more expensive X-ray-scanning laboratory.

The new AI tool generates spectra in less than one minute — a thousand times faster than traditional approaches, which can take several hours to days to measure and validate.

“We think that you don’t have to do the physical measurements in all the modalities you need, but perhaps just in a single, simple, and cheap modality,” says study lead Loza Tadesse, assistant professor of mechanical engineering at MIT. “Then you can use SpectroGen to generate the rest. And this could improve productivity, efficiency, and quality of manufacturing.”

The study was led by Tadesse, with former MIT postdoc Yanmin Zhu serving as first author.

Beyond bonds

Tadesse’s interdisciplinary group at MIT pioneers technologies that advance human and planetary health, developing innovations for applications ranging from rapid disease diagnostics to sustainable agriculture.

“Diagnosing diseases, and material analysis in general, usually involves scanning samples and collecting spectra in different modalities, with different instruments that are bulky and expensive and that you might not all find in one lab,” Tadesse says. “So, we were brainstorming about how to miniaturize all this equipment and how to streamline the experimental pipeline.”

Zhu noted the increasing use of generative AI tools for discovering new materials and drug candidates, and wondered whether AI could also be harnessed to generate spectral data. In other words, could AI act as a virtual spectrometer?

A spectroscope probes a material’s properties by sending light of a certain wavelength into the material. That light causes molecular bonds in the material to vibrate in ways that scatter the light back out to the scope, where the light is recorded as a pattern of waves, or spectra, that can then be read as a signature of the material’s structure.

For AI to generate spectral data, the conventional approach would involve training an algorithm to recognize connections between physical atoms and features in a material, and the spectra they produce. Given the complexity of molecular structures within just one material, Tadesse says such an approach can quickly become intractable.

“Doing this even for just one material is impossible,” she says. “So, we thought, is there another way to interpret spectra?”

The team found an answer in math. They realized that a spectral pattern, which is a sequence of waveforms, can be represented mathematically. For instance, a spectrum made up of a series of bell curves follows a “Gaussian” distribution, which has a particular mathematical expression, while a series of narrower peaks follows a “Lorentzian” distribution, described by a separate, distinct expression. As it turns out, for most materials, infrared spectra characteristically contain more Lorentzian waveforms, Raman spectra are more Gaussian, and X-ray spectra are a mix of the two.
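For concreteness, the two peak shapes mentioned here have standard textbook forms, sketched below; this is not code from the paper, and the parameters are illustrative.

```python
import numpy as np

# Standard peak shapes mentioned in the text (textbook formulas, not
# code from the paper). A spectrum can be viewed as a sum of such
# curves, which is the mathematical reading SpectroGen exploits.

def gaussian(x, center, width, amplitude=1.0):
    """Bell curve: amplitude * exp(-(x - center)^2 / (2 * width^2))."""
    return amplitude * np.exp(-((x - center) ** 2) / (2 * width ** 2))

def lorentzian(x, center, gamma, amplitude=1.0):
    """Lorentzian peak: amplitude * gamma^2 / ((x - center)^2 + gamma^2)."""
    return amplitude * gamma ** 2 / ((x - center) ** 2 + gamma ** 2)

x = np.linspace(0, 100, 1001)
# A toy "spectrum": one Gaussian peak plus one Lorentzian peak.
spectrum = gaussian(x, 30, 3) + lorentzian(x, 70, 2, amplitude=0.8)
```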

Tadesse and Zhu worked this mathematical interpretation of spectral data into an algorithm that they then incorporated into a generative AI model.

“It’s a physics-savvy generative AI that understands what spectra are,” Tadesse says. “And the key novelty is, we interpreted spectra not as how they come about from chemicals and bonds, but as math — curves and graphs, which an AI tool can understand and interpret.”

Data co-pilot

The team demonstrated their SpectroGen AI tool on a large, publicly available dataset of over 6,000 mineral samples. Each sample includes information on the mineral’s properties, such as its elemental composition and crystal structure. Many samples in the dataset also include spectral data in different modalities, such as X-ray, Raman, and infrared. Of these samples, the team fed several hundred to SpectroGen, in a process that trained the AI tool, a neural network, to learn correlations between a mineral’s different spectral modalities. This training enabled SpectroGen to take in spectra of a material in one modality, such as infrared, and generate what a spectrum in a totally different modality, such as X-ray, should look like.

Once they trained the AI tool, the researchers fed SpectroGen a spectrum from a mineral in the dataset that was not included in the training process. They asked the tool to generate a spectrum in a different modality, based on this “new” input. The AI-generated spectrum, they found, was a close match to the mineral’s real spectrum, which was originally recorded by a physical instrument. The researchers carried out similar tests with a number of other minerals and found that the AI tool quickly generated spectra with 99 percent correlation.

“We can feed spectral data into the network and can get another totally different kind of spectral data, with very high accuracy, in less than a minute,” Zhu says.

The team says that SpectroGen can generate spectra for any type of mineral. In a manufacturing setting, for instance, mineral-based materials that are used to make semiconductors and battery technologies could first be quickly scanned by an infrared laser. The spectra from this infrared scanning could be fed into SpectroGen, which would then generate a spectra in X-ray, which operators or a multiagent AI platform can check to assess the material’s quality.

“I think of it as having an agent or co-pilot, supporting researchers, technicians, pipelines and industry,” Tadesse says. “We plan to customize this for different industries’ needs.”

The team is exploring ways to adapt the AI tool for disease diagnostics, and for agricultural monitoring through an upcoming project funded by Google. Tadesse is also advancing the technology to the field through a new startup and envisions making SpectroGen available for a wide range of sectors, from pharmaceuticals to semiconductors to defense.


Helping scientists run complex data analyses without writing code

Co-founded by an MIT alumnus, Watershed Bio offers researchers who aren’t software engineers a way to run large-scale analyses to accelerate biology.


As costs for diagnostic and sequencing technologies have plummeted in recent years, researchers have collected an unprecedented amount of data around disease and biology. Unfortunately, scientists hoping to go from data to new cures often require help from someone with experience in software engineering.

Now, Watershed Bio is helping scientists and bioinformaticians run experiments and get insights with a platform that lets users analyze complex datasets regardless of their computational skills. The cloud-based platform provides workflow templates and a customizable interface to help users explore and share data of all types, including whole-genome sequencing, transcriptomics, proteomics, metabolomics, high-content imaging, protein folding, and more.

“Scientists want to learn about the software and data science parts of the field, but they don’t want to become software engineers writing code just to understand their data,” says co-founder and CEO Jonathan Wang ’13, SM ’15. “With Watershed, they don’t have to.”

Watershed is being used by large and small research teams across industry and academia to drive discovery and decision-making. When new advanced analytic techniques are described in scientific journals, they can be added to Watershed’s platform immediately as templates, making cutting-edge tools more accessible and collaborative for researchers of all backgrounds.

“The data in biology is growing exponentially, and the sequencing technologies generating this data are only getting better and cheaper,” Wang says. “Coming from MIT, this issue was right in my wheelhouse: It’s a tough technical problem. It’s also a meaningful problem because these people are working to treat diseases. They know all this data has value, but they struggle to use it. We want to help them unlock more insights faster.”

No code discovery

Wang expected to major in biology at MIT, but he quickly got excited by computer science and the possibility of building solutions that scale to millions of people. He ended up earning both his bachelor’s and master’s degrees from the Department of Electrical Engineering and Computer Science (EECS). Wang also interned in a biology lab at MIT, where he was surprised by how slow and labor-intensive experiments were.

“I saw the difference between biology and computer science, where you had these dynamic environments [in computer science] that let you get feedback immediately,” Wang says. “Even as a single person writing code, you have so much at your fingertips to play with.”

While working on machine learning and high-performance computing at MIT, Wang also co-founded a high-frequency trading firm with some classmates. His team hired researchers with PhD backgrounds in areas like math and physics to develop new trading strategies, but they quickly saw a bottleneck in their process.

“Things were moving slowly because the researchers were used to building prototypes,” Wang says. “These were small approximations of models they could run locally on their machines. To put those approaches into production, they needed engineers to make them work in a high-throughput way on a computing cluster. But the engineers didn’t understand the nature of the research, so there was a lot of back and forth. It meant ideas you thought could have been implemented in a day took weeks.”

To solve the problem, Wang’s team developed a software layer that made building production-ready models as easy as building prototypes on a laptop. Then, a few years after graduating from MIT, Wang noticed technologies like DNA sequencing had become cheap and ubiquitous.

“The bottleneck wasn’t sequencing anymore, so people said, ‘Let’s sequence everything,’” Wang recalls. “The limiting factor became computation. People didn’t know what to do with all the data being generated. Biologists were waiting for data scientists and bioinformaticians to help them, but those people didn’t always understand the biology at a deep enough level.”

The situation looked familiar to Wang.

“It was exactly like what we saw in finance, where researchers were trying to work with engineers, but the engineers never fully understood, and you had all this inefficiency with people waiting on the engineers,” Wang says. “Meanwhile, I learned the biologists are hungry to run these experiments, but there is such a big gap they felt they had to become a software engineer or just focus on the science.”

Wang officially founded Watershed in 2019 with physician Mark Kalinich ’13, a former classmate at MIT who is no longer involved in day-to-day operations of the company.

Wang has since heard from biotech and pharmaceutical executives about the growing complexity of biology research. Unlocking new insights increasingly involves analyzing data from entire genomes, population studies, RNA sequencing, mass spectrometry, and more. Developing personalized treatments or selecting patient populations for a clinical study can also require huge datasets, and there are new ways to analyze data being published in scientific journals all the time.

Today, companies can run large-scale analyses on Watershed without having to set up their own servers or cloud computing accounts. Researchers can use ready-made templates that work with all the most common data types to accelerate their work. Popular AI-based tools like AlphaFold and Geneformer are also available, and Watershed’s platform makes sharing workflows and digging deeper into results easy.

“The platform hits a sweet spot of usability and customizability for people of all backgrounds,” Wang says. “No science is ever truly the same. I avoid the word product because that implies you deploy something and then you just run it at scale forever. Research isn’t like that. Research is about coming up with an idea, testing it, and using the outcome to come up with another idea. The faster you can design, implement, and execute experiments, the faster you can move on to the next one.”

Accelerating biology

Wang believes Watershed is helping biologists keep up with the latest advances in biology and accelerating scientific discovery in the process.

“If you can help scientists unlock insights not a little bit faster, but 10 or 20 times faster, it can really make a difference,” Wang says.

Watershed is being used by researchers in academia and in companies of all sizes. Executives at biotech and pharmaceutical companies also use Watershed to make decisions about new experiments and drug candidates.

“We’ve seen success in all those areas, and the common thread is people understanding research but not being an expert in computer science or software engineering,” Wang says. “It’s exciting to see this industry develop. For me, it’s great being from MIT and now to be back in Kendall Square where Watershed is based. This is where so much of the cutting-edge progress is happening. We’re trying to do our part to enable the future of biology.”


New MIT initiative seeks to transform rare brain disorders research

The Rare Brain Disorders Nexus aims to accelerate the development of novel therapies for a spectrum of uncommon brain diseases.


More than 300 million people worldwide are living with rare disorders — many of which have a genetic cause and affect the brain and nervous system — yet the vast majority of these conditions lack an approved therapy. Because each rare disorder affects fewer than 65 out of every 100,000 people, studying these disorders and creating new treatments for them is especially challenging.

Thanks to a generous philanthropic gift from Ana Méndez ’91 and Rajeev Jayavant ’86, EE ’88, SM ’88, MIT is now poised to fill gaps in this research landscape. By establishing the Rare Brain Disorders Nexus — or RareNet — at MIT’s McGovern Institute for Brain Research, the alumni aim to convene leaders in neuroscience research, clinical medicine, patient advocacy, and industry to streamline the lab-to-clinic pipeline for rare brain disorder treatments.

“Ana and Rajeev’s commitment to MIT will form crucial partnerships to propel the translation of scientific discoveries into promising therapeutics and expand the Institute’s impact on the rare brain disorders community,” says MIT President Sally Kornbluth. “We are deeply grateful for their pivotal role in advancing such critical science and bringing attention to conditions that have long been overlooked.”

Building new coalitions

Several hurdles have slowed the lab-to-clinic pipeline for rare brain disorder research. It is difficult to secure a sufficient number of patients per study, and current research efforts are fragmented, since each study typically focuses on a single disorder (there are more than 7,000 known rare disorders, according to the World Health Organization). Pharmaceutical companies are often reluctant to invest in emerging treatments due to a limited market size and the high costs associated with preparing drugs for commercialization.

Méndez and Jayavant envision that RareNet will finally break down these barriers. “Our hope is that RareNet will allow leaders in the field to come together under a shared framework and ignite scientific breakthroughs across multiple conditions. A discovery for one rare brain disorder could unlock new insights that are relevant to another,” says Jayavant. “By congregating the best minds in the field, we are confident that MIT will create the right scientific climate to produce drug candidates that may benefit a spectrum of uncommon conditions.”

Guoping Feng, the James W. (1963) and Patricia T. Poitras Professor in Neuroscience and associate director of the McGovern Institute, will serve as RareNet’s inaugural faculty director. Feng holds a strong record of advancing studies on therapies for neurodevelopmental disorders, including autism spectrum disorders, Williams syndrome, and uncommon forms of epilepsy. His team’s gene therapy for Phelan-McDermid syndrome, a rare and profound autism spectrum disorder, has been licensed to Jaguar Gene Therapy and is currently undergoing clinical trials. “RareNet pioneers a unique model for biomedical research — one that is reimagining the role academia can play in developing therapeutics,” says Feng.

RareNet plans to deploy two major initiatives: a global consortium and a therapeutic pipeline accelerator. The consortium will form an international network of researchers, clinicians, and patient groups from the outset. It seeks to connect siloed research efforts, secure more patient samples, promote data sharing, and drive a strong sense of trust and goal alignment across the RareNet community. Partnerships within the consortium will support the aim of the therapeutic pipeline accelerator: to de-risk early lab discoveries and expedite their translation to clinic. By fostering more targeted collaborations — especially between academia and industry — the accelerator will prepare potential treatments for clinical use as efficiently as possible.

MIT labs are focusing on four uncommon conditions in the first wave of RareNet projects: Rett syndrome, prion disease, disorders linked to SYNGAP1 mutations, and Sturge-Weber syndrome. The teams are working to develop novel therapies that can slow, halt, or reverse dysfunctions in the brain and nervous system.

These efforts will build new bridges to connect key stakeholders across the rare brain disorders community and disrupt conventional research approaches. “Rajeev and I are motivated to seed powerful collaborations between MIT researchers, clinicians, patients, and industry,” says Méndez. “Guoping Feng clearly understands our goal to create an environment where foundational studies can thrive and seamlessly move toward clinical impact.”

“Patient and caregiver experiences, and our foreseeable impact on their lives, will guide us and remain at the forefront of our work,” Feng adds. “For far too long has the rare brain disorders community been deprived of life-changing treatments — and, importantly, hope. RareNet gives us the opportunity to transform how we study these conditions, and to do so at a moment when it’s needed more than ever.”


Geologists discover the first evidence of 4.5-billion-year-old “proto Earth”

Materials from ancient rocks could reveal conditions in the early solar system that shaped the early Earth and other planets.


Scientists at MIT and elsewhere have discovered extremely rare remnants of “proto Earth,” which formed about 4.5 billion years ago, before a colossal collision irreversibly altered the primitive planet’s composition and produced the Earth as we know today. Their findings, reported today in the journal Nature Geoscience, will help scientists piece together the primordial starting ingredients that forged the early Earth and the rest of the solar system.

Billions of years ago, the early solar system was a swirling disk of gas and dust that eventually clumped and accumulated to form the earliest meteorites, which in turn merged to form the proto Earth and its neighboring planets.

In this earliest phase, Earth was likely rocky and bubbling with lava. Then, less than 100 million years later, a Mars-sized body slammed into the infant planet in a singular “giant impact” event that completely scrambled and melted the planet’s interior, effectively resetting its chemistry. Whatever original material the proto Earth was made from was thought to have been altogether transformed.

But the MIT team’s findings suggest otherwise. The researchers have identified a chemical signature in ancient rocks that is distinct from most other materials found in the Earth today. The signature takes the form of a subtle imbalance in potassium isotopes, discovered in samples of very old and very deep rocks. The team determined that the potassium imbalance could not have been produced by any previous large impacts or by geological processes still occurring in the Earth today.

The most likely explanation for the samples’ chemical composition is that they must be leftover material from the proto Earth that somehow remained unchanged, even as most of the early planet was impacted and transformed.

“This is maybe the first direct evidence that we’ve preserved the proto Earth materials,” says Nicole Nie, the Paul M. Cook Career Development Assistant Professor of Earth and Planetary Sciences at MIT. “We see a piece of the very ancient Earth, even before the giant impact. This is amazing because we would expect this very early signature to be slowly erased through Earth’s evolution.”

The study’s other authors include Da Wang of Chengdu University of Technology in China, Steven Shirey and Richard Carlson of the Carnegie Institution for Science in Washington, Bradley Peters of ETH Zürich in Switzerland, and James Day of Scripps Institution of Oceanography in California.

A curious anomaly

In 2023, Nie and her colleagues analyzed many of the major meteorites that have been collected from sites around the world and carefully studied. Before impacting the Earth, these meteorites likely formed at various times and locations throughout the solar system, and therefore represent the solar system’s changing conditions over time. When the researchers compared the chemical compositions of these meteorite samples to Earth, they identified among them a “potassium isotopic anomaly.”

Isotopes are slightly different versions of an element that have the same number of protons but a different number of neutrons. Potassium has three naturally occurring isotopes, with mass numbers (protons plus neutrons) of 39, 40, and 41. Wherever potassium has been found on Earth, it exists in a characteristic combination of these isotopes, with potassium-39 and potassium-41 overwhelmingly dominant. Potassium-40 is present, but at a vanishingly small percentage in comparison.

Nie and her colleagues discovered that the meteorites they studied showed mixes of potassium isotopes different from those of most materials on Earth. This suggested that any material exhibiting a similar anomaly likely predates Earth’s present composition. In other words, a potassium imbalance would be a strong sign of material from the proto Earth, formed before the giant impact reset the planet’s chemical composition.

“In that work, we found that different meteorites have different potassium isotopic signatures, and that means potassium can be used as a tracer of Earth’s building blocks,” Nie explains.

“Built different”

In the current study, the team looked for signs of potassium anomalies not in meteorites, but within the Earth. Their samples include rocks, in powder form, from Greenland and Canada, where some of the oldest preserved rocks are found. They also analyzed lava deposits collected from Hawaii, where volcanoes have brought up some of the Earth’s earliest, deepest materials from the mantle (the planet’s thickest layer of rock that separates the crust from the core).

“If this potassium signature is preserved, we would want to look for it in deep time and deep Earth,” Nie says.

The team first dissolved the various powder samples in acid, then carefully isolated the potassium from the rest of each sample and used a high-precision mass spectrometer to measure the ratio of potassium’s three isotopes. Remarkably, they identified in the samples an isotopic signature different from what’s been found in most materials on Earth.

Specifically, they identified a deficit in the potassium-40 isotope. In most materials on Earth, this isotope is already an insignificant fraction compared to potassium’s other two isotopes. But the researchers were able to discern that their samples contained an even smaller percentage of potassium-40. Detecting this tiny deficit is like spotting a single grain of brown sand in a bucket of yellow sand.
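In isotope geochemistry, deficits this small are commonly expressed as parts-per-million deviations of an isotope ratio from a terrestrial reference. As a rough, purely illustrative sketch of that arithmetic (the ratios below are invented, not the study’s values):

```python
# Illustrative arithmetic only: isotope anomalies of this kind are often
# reported as parts-per-million (ppm) deviations of an isotope ratio from
# a terrestrial reference. All ratios below are invented for the example.

def ppm_anomaly(ratio_sample: float, ratio_reference: float) -> float:
    """Deviation of a sample isotope ratio from a reference, in ppm."""
    return (ratio_sample / ratio_reference - 1.0) * 1e6

reference_40_39 = 1.2500e-4  # stand-in for a modern terrestrial 40K/39K ratio
sample_40_39 = 1.2497e-4     # stand-in for an ancient, K-40-deficient rock

print(f"{ppm_anomaly(sample_40_39, reference_40_39):+.0f} ppm")  # -240 ppm
```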

This deficit showed that the materials “were built different,” says Nie, compared to most of what we see on Earth today.

But could the samples be rare remnants of the proto Earth? To answer this, the researchers took that as a working hypothesis. They reasoned that if the proto Earth were originally made from such potassium-40-deficient materials, then most of that material would have undergone chemical changes, from the giant impact and subsequent smaller meteorite impacts, that ultimately produced the materials with more potassium-40 that we see today.

The team used compositional data from every known meteorite and carried out simulations of how the samples’ potassium-40 deficit would change following impacts by these meteorites and by the giant impact. They also simulated geological processes that the Earth experienced over time, such as the heating and mixing of the mantle. In the end, their simulations produced a composition with a slightly higher fraction of potassium-40 compared to the samples from Canada, Greenland, and Hawaii. More importantly, the simulated compositions matched those of most modern-day materials.
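The mass-balance logic behind such simulations can be sketched in a few lines: mixing K-40-poor starting material with later meteoritic material dilutes the original deficit in proportion to how much potassium each component contributes. A minimal sketch, with invented numbers rather than the study’s data:

```python
# Minimal two-component mixing sketch (invented values, not the study's data):
# how an initial potassium-40 deficit would be diluted by later impactors
# that deliver potassium closer to the modern reference composition.

def mix_anomaly(anomaly_a_ppm, k_mass_a, anomaly_b_ppm, k_mass_b):
    """Potassium-weighted average of two anomalies, in ppm."""
    total_k = k_mass_a + k_mass_b
    return (anomaly_a_ppm * k_mass_a + anomaly_b_ppm * k_mass_b) / total_k

proto_earth = -240.0  # hypothetical K-40 anomaly of proto-Earth material (ppm)
impactors = 0.0       # later material defined here as the modern reference

# If late impacts delivered, say, 3x as much potassium as the surviving
# proto-Earth component, the original deficit is mostly erased:
print(mix_anomaly(proto_earth, 1.0, impactors, 3.0))  # -60.0 ppm
```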

The work suggests that materials with a potassium-40 deficit are likely leftover original material from the proto Earth.

Curiously, the samples’ signature isn’t a precise match with any meteorite in geologists’ collections. While the meteorites in the team’s previous work showed potassium anomalies, none exactly matches the deficit seen in the proto Earth samples. This means that whatever meteorites and materials originally formed the proto Earth have yet to be discovered.

“Scientists have been trying to understand Earth’s original chemical composition by combining the compositions of different groups of meteorites,” Nie says. “But our study shows that the current meteorite inventory is not complete, and there is much more to learn about where our planet came from.”

This work was supported, in part, by NASA and MIT.


A new system can dial expression of synthetic genes up or down

The promoter editing system could be used to fine-tune gene therapy or to more efficiently reprogram cells for therapeutic use.


For decades, synthetic biologists have been developing gene circuits that can be transferred into cells for applications such as reprogramming a stem cell into a neuron or generating a protein that could help treat a disease such as fragile X syndrome.

These gene circuits are typically delivered into cells by carriers such as nonpathogenic viruses. However, it has been difficult to ensure that these cells end up producing the correct amount of the protein encoded by the synthetic gene.

To overcome that obstacle, MIT engineers have designed a new control mechanism that allows them to establish a desired protein level, or set point, for any gene circuit. This approach also allows them to edit the set point after the circuit is delivered.

“This is a really stable and multifunctional tool. The tool is very modular, so there are a lot of transgenes you could control with this system,” says Katie Galloway, an assistant professor of chemical engineering at MIT and the senior author of the new study.

Using this strategy, the researchers showed that they could induce cells to generate consistent levels of target proteins. In one application that they demonstrated, they converted mouse embryonic fibroblasts to motor neurons by delivering high levels of a gene that promotes that conversion.

MIT graduate student Sneha Kabaria is the lead author of the paper, which appears today in Nature Biotechnology. Other authors include Yunbeen Bae ’24; MIT graduate students Mary Ehmann, Brittany Lende-Dorn, Emma Peterman, and Kasey Love; Adam Beitz PhD ’25; and former MIT postdoc Deon Ploessl.

Dialing up gene expression

Synthetic gene circuits are engineered to include not only the gene of interest, but also a promoter region. At this site, transcription factors and other regulators can bind, turning on the expression of the synthetic gene.

However, it’s not always possible to get all of the cells in a population to express the desired gene at a uniform level. One reason is that some cells may take up just one copy of the circuit, while others receive many more. Additionally, cells have natural variation in how much protein they produce.

That has made reprogramming cells challenging because it’s difficult to ensure that every cell in a population of skin cells, for example, will produce enough of the necessary transcription factors to successfully transition into a new cell identity, such as a neuron or induced pluripotent stem cell.

In the new paper, the researchers devised a way to control gene expression levels by changing the distance between the synthetic gene and its promoter. They found that when there was a longer DNA “spacer” between the promoter region and the gene, the gene would be expressed at a lower level. That extra distance, they showed, makes it less likely that transcription factors bound to the promoter will effectively turn on gene transcription.

Then, to create set points that could be edited, the researchers incorporated sites within the spacer that can be excised by an enzyme called Cre recombinase. Cutting parts of the spacer out brings the transcription factors closer to the gene of interest, which turns up gene expression.

The researchers showed they could create spacers with multiple excision points, each targeted by a different recombinase. This allowed them to create a system, called DIAL, that they could use to establish “high,” “med,” “low,” and “off” set points for gene expression.

After the DNA segment carrying the gene and its promoter is delivered into cells, recombinases can be added to the cells, allowing the set point to be edited at any time.
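The logic of that design can be captured in a toy model: expression falls off with the promoter-to-gene distance, and each recombinase removes one spacer segment, stepping expression up. A minimal sketch, in which the segment names, lengths, and decay constant are all invented for illustration:

```python
# Toy model of a DIAL-style editable set point (all numbers invented).
# Expression decays with the distance between promoter and gene; each
# recombinase excises one spacer segment, shortening that distance.

import math

# Hypothetical spacer segments (name -> length in base pairs), each
# flanked by recognition sites for a different recombinase.
spacer = {"segment_cre": 400, "segment_flp": 400}

def expression(spacer_bp: int, decay_bp: float = 300.0) -> float:
    """Relative expression as a function of promoter-gene distance."""
    return math.exp(-spacer_bp / decay_bp)

def apply_recombinase(spacer: dict, segment: str) -> dict:
    """Excise one segment, as a recombinase would, and return the rest."""
    return {k: v for k, v in spacer.items() if k != segment}

state = dict(spacer)
print("low: ", expression(sum(state.values())))  # full spacer
state = apply_recombinase(state, "segment_cre")
print("med: ", expression(sum(state.values())))  # one segment excised
state = apply_recombinase(state, "segment_flp")
print("high:", expression(sum(state.values())))  # spacer fully excised
```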

The researchers demonstrated their system in mouse and human cells by delivering genes for different fluorescent proteins as well as functional genes, and showed that they could get uniform expression across a population of cells at the target level.

“We achieved uniform and stable control. This is very exciting for us because lack of uniform, stable control has been one of the things that’s been limiting our ability to build reliable systems in synthetic biology. When there are too many variables that affect your system, and then you add in normal biological variation, it’s very hard to build stable systems,” Galloway says.

Reprogramming cells

To demonstrate potential applications of the DIAL system, the researchers then used it to deliver different levels of the gene HRasG12V to mouse embryonic fibroblasts. This HRas variant has previously been shown to increase the rate of conversion of fibroblasts to neurons. The MIT team found that in cells that received a higher dose of the gene, a larger percentage of them were able to successfully transform into neurons.

Using this system, researchers now hope to perform more systematic studies of different transcription factors that can induce cells to transition to different cell types. Such studies could reveal how different levels of those factors affect the success rate, and whether changing transcription factor levels might alter the cell type that is generated.

In ongoing work, the researchers have shown that DIAL can be combined with a system they previously developed, known as ComMAND, that uses a feedforward loop to help prevent cells from overexpressing a therapeutic gene.

Using these systems together, it could be possible to tailor gene therapies to produce specific, consistent protein levels in the target cells of individual patients, the researchers say.

“This is something we’re excited about because both DIAL and ComMAND are highly modular, so you could not only have a well-controlled gene therapy that’s somewhat general for a population, but you could, in theory, tailor it for any given person or any given cell type,” Galloway says.

The research was funded, in part, by the National Institute of General Medical Sciences, the National Science Foundation, and the Institute for Collaborative Biotechnologies.


MIT releases financials and endowment figures for 2025

The Institute’s pooled investments returned 14.8 percent last year; endowment stands at $27.4 billion.


The Massachusetts Institute of Technology Investment Management Company (MITIMCo) announced today that MIT’s unitized pool of endowment and other MIT funds generated an investment return of 14.8 percent during the fiscal year ending June 30, 2025, as measured using valuations received within one month of fiscal year end. At the end of the fiscal year, MIT’s endowment funds totaled $27.4 billion, excluding pledges. Over the 10 years ending June 30, 2025, MIT generated an annualized return of 10.7 percent.
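For readers unfamiliar with the convention, an annualized figure is a geometric mean: the single yearly rate that, compounded, reproduces the period’s total growth. A quick illustrative sketch (only the 10.7 percent figure comes from the announcement; the sample yearly returns are invented):

```python
# How an annualized (geometric mean) return relates to yearly returns.
# The yearly figures below are invented for illustration; only the 10.7
# percent 10-year annualized figure comes from the announcement.

def annualized_return(yearly_returns):
    """Geometric mean of a sequence of yearly returns."""
    growth = 1.0
    for r in yearly_returns:
        growth *= 1.0 + r
    return growth ** (1.0 / len(yearly_returns)) - 1.0

# A pool returning 10.7 percent annualized over 10 years grows ~2.76x:
print((1.0 + 0.107) ** 10)  # ~2.76

# Mixed hypothetical years whose geometric mean lands near 11 percent:
print(annualized_return([0.148, 0.08, 0.12, 0.095, 0.11]))
```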

The endowment is the bedrock of MIT’s finances, made possible by gifts from alumni and friends for more than a century. The use of the endowment is governed by a state law that requires MIT to maintain each endowed gift as a permanent fund, preserve its purchasing power, and spend it as directed by its original donor. Most of the endowment’s funds are restricted and must be used for a specific purpose. MIT uses the bulk of the income these endowed gifts generate to support financial aid, research, and education.

The endowment supports 50 percent of undergraduate tuition, helping to enable the Institute’s need-blind undergraduate admissions policy, which ensures that an MIT education is accessible to all qualified candidates regardless of financial resources. MIT works closely with all families of undergraduates who qualify for financial aid to develop an individual affordability plan tailored to their financial circumstances. In 2024-25, the average need-based MIT undergraduate scholarship was $62,127. Fifty-seven percent of MIT undergraduates received need-based financial aid, and 39 percent of MIT undergraduate students received scholarship funding from MIT and other sources sufficient to cover the total cost of tuition.

Effective in fiscal 2026, MIT enhanced undergraduate financial aid, ensuring that all families with incomes below $200,000 and typical assets have tuition fully covered by scholarships, and that families with incomes below $100,000 and typical assets pay nothing at all for their students’ MIT education. Eighty-eight percent of seniors who graduated in academic year 2025 graduated with no debt.

MITIMCo is a unit of MIT, created to manage and oversee the investment of the Institute’s endowment, retirement, and operating funds.

MIT’s Report of the Treasurer for fiscal year 2025, which details the Institute’s annual financial performance, was made publicly available today.


Ray Kurzweil ’70 reinforces his optimism in tech progress

Receiving the Robert A. Muh award, the technologist and author heralded a bright future for AI, breakthroughs in longevity, and more.


Innovator, futurist, and author Ray Kurzweil ’70 emphasized his optimism about artificial intelligence, and technological progress generally, in a lecture on Wednesday while accepting MIT’s Robert A. Muh Alumni Award from the School of Humanities, Arts, and Social Sciences (SHASS).

Kurzweil offered his signature high-profile forecasts about how AI and computing will entirely blend with human functionality, and proposed that AI will lead to monumental gains in longevity, medicine, and other realms of life.

“People do not appreciate that the rate of progress is accelerating,” Kurzweil said, forecasting “incredible breakthroughs” over the next two decades.

Kurzweil delivered his lecture, titled “Reinventing Intelligence,” in the Thomas Tull Concert Hall of the Edward and Joyce Linde Music Building, which opened earlier in 2025 on the MIT campus.

The Muh Award, founded and endowed by Robert A. Muh ’59 and his wife Berit, is one of the leading alumni honors granted by SHASS and MIT. Muh, a life member emeritus of the MIT Corporation, established the award to recognize “extraordinary contributions” by alumni in the humanities, arts, and social sciences; it is granted every two years.

Robert and Berit Muh were both present at the lecture, along with their daughter Carrie Muh ’96, ’97, SM ’97.

Agustín Rayo, dean of SHASS, offered introductory remarks, calling Kurzweil “one of the most prolific thinkers of our time.” Rayo added that Kurzweil “has built his life and career on the belief that ideas change the world, and change it for the better.”

Kurzweil has been an innovator in language recognition technologies, developing advances and founding companies that have served people who are blind or low-vision and aided music creation. He is also a best-selling author who has heralded advances in computing capabilities, and even the merging of humans and machines.

The initial segment of Kurzweil’s lecture was autobiographical in focus, reflecting on his family and early years. The families of both of Kurzweil’s parents fled the Nazis in Europe, seeking refuge in the U.S., with the belief that people could create a brighter future for themselves.

“My parents taught me the power of ideas can really change the world,” Kurzweil said.

Showing an early interest in how things worked, Kurzweil had decided to become an inventor by about the age of 7, he recalled. He also described his mother as being tremendously encouraging to him as a child. The two would take walks together, and the young Kurzweil would talk about all the things he imagined inventing.

“I would tell her my ideas and no matter how fantastical they were, she believed them,” he said. “Now other parents might have simply chuckled … but she actually believed my ideas, and that actually gave me my confidence, and I think confidence is important in succeeding.”

He became interested in computing by the early 1960s and majored in both computer science and literature as an MIT undergraduate.

Kurzweil has a long-running association with MIT extending far beyond his undergraduate studies. He served as a member of the MIT Corporation from 2005 to 2012 and was the 2001 recipient of the $500,000 Lemelson-MIT Prize, an award for innovation, for his development of reading technology.

“MIT has played a major role in my personal and professional life over the years,” Kurzweil said, calling himself “truly honored to receive this award.” Addressing Muh, he added: “Your longstanding commitment to our alma mater is inspiring.”

After graduating from MIT, Kurzweil launched a successful career developing innovative computing products, including one that recognized text across all fonts and could produce an audio reading. He also developed leading-edge music synthesizers, among many other advances.

In a parallel part of his career, Kurzweil has become an energetic author, whose best-known books include “The Age of Intelligent Machines” (1990), “The Age of Spiritual Machines” (1999), “The Singularity Is Near” (2005), and “The Singularity Is Nearer” (2024), among many others.

Kurzweil was recently named chief AI officer of Beyond Imagination, a robotics firm he co-founded; he has also held a position at Google in recent years, working on natural language technologies.

In his remarks, Kurzweil underscored his view that, as exemplified and enabled by the growth of computing power over time, technological innovation moves at an exponential pace.

“People don’t really think about exponential growth; they think about linear growth,” Kurzweil said.

This concept, he said, makes him confident that a string of innovations will continue at remarkable speed.

“One of the bigger transformations we’re going to see from AI in the near term is health and medicine,” Kurzweil said, forecasting that human medical trials will be replaced by simulated “digital trials.”

Kurzweil also believes computing and AI advances will lead to so many medical breakthroughs that human longevity will soon improve drastically.

“These incredible breakthroughs are going to lead to what we’ll call longevity escape velocity,” Kurzweil said. “By roughly 2032, when you live through a year, you’ll get back an entire year from scientific progress, and beyond that point you’ll get back more than a year for every year you live, so you’ll be going back in time as far as your health is concerned.” He did offer that these advances will “start” with people who are the most diligent about their health.

Kurzweil also outlined one of his best-known forecasts, that AI and people will be combined. “As we move forward, the lines between humans and technology will blur, until we are … one and the same,” Kurzweil said. “This is how we learn to merge with AI. In the 2030s, robots the size of molecules will go into our brains, noninvasively, through the capillaries, and will connect our brains directly to the cloud. Think of it like having a phone, but in your brain.”

“By 2045, once we have fully merged with AI, our intelligence will no longer be constrained … it will expand a millionfold,” he said. “This is what we call the singularity.”

To be sure, Kurzweil acknowledged, “Technology has always been a double-edged sword,” given that a drone can deliver either medical supplies or weaponry. “Threats of AI are real, must be taken seriously, [and] I think we are doing that,” he said. In any case, he added, we have “a moral imperative to realize the promise of new technologies while controlling the peril.” He concluded: “We are not doomed to fail to control any of these risks.” 


Gene-Wei Li named associate head of the Department of Biology

The associate professor aims to help the department continue to be a worldwide leader in education, biological sciences, and fundamental research.


Associate Professor Gene-Wei Li has accepted the position of associate head of the MIT Department of Biology, starting in the 2025-26 academic year. 

Li, who has been a member of the department since 2015, brings a history of departmental leadership, service, and research and teaching excellence to his new role. He has received many awards, including a Sloan Research Fellowship (2016), an NSF CAREER Award (2019), Pew and Searle scholarships, and MIT’s Committed to Caring Award (2020). In 2024, he was appointed as a Howard Hughes Medical Institute (HHMI) Investigator.

“I am grateful to Gene-Wei for joining the leadership team,” says department head Amy E. Keating, the Jay A. Stein (1968) Professor of Biology and professor of biological engineering. “Gene will be a key leader in our educational initiatives, both digital and residential, and will be a critical part of keeping our department strong and forward-looking.” 

A great environment to do science

Li says he was inspired to take on the role in part because of the way MIT Biology facilitates career development during every stage — from undergraduate and graduate students to postdocs and junior faculty members, as he was when he started in the department as an assistant professor just 10 years ago. 

“I think we all benefit a lot from our environment, and I think this is a great environment to do science and educate people, and to create a new generation of scientists,” he says. “I want us to keep doing well, and I’m glad to have the opportunity to contribute to this effort.” 

As part of his portfolio as associate department head, Li will continue in the role of scientific director of the Koch Biology Building, Building 68. Over the last year, the previous scientific director, Stephen Bell, Uncas and Helen Whitaker Professor of Biology and HHMI Investigator, has provided support and ensured a steady ramp-up as Li transitions into his new duties. The building, which opened its doors in 1994, is in need of a slate of updates and repairs.

Although Li will be managing more administrative duties, he has provided a stable foundation for his lab to continue its interdisciplinary work on the quantitative biology of gene expression, parsing the mechanisms by which cells control the levels of their proteins and how this enables cells to perform their functions. His recent work includes developing a method that leverages the AI tool AlphaFold to predict whether protein fragments can recapitulate the native interactions of their full-length counterparts.  

“I’m still very heavily involved, and we have a lab environment where everyone helps each other. It’s a team, and so that helps elevate everyone,” he says. “It’s the same with the whole building: nobody is working by themselves, so the science and administrative parts come together really nicely.” 

Teaching for the future

Li is considering how the department can continue to be a global leader in biological sciences while navigating the uncertainty surrounding academia and funding, as well as the likelihood of reduced staff support and tightening budgets.

“The question is: How do you maintain excellence?” Li says. “That involves recruiting great people and giving them the resources that they need, and that’s going to be a priority within the limitations that we have to work with.” 

Li will also serve as faculty advisor for the MIT Biology Teaching and Learning Group, headed by Mary Ellen Wiltrout, and will sit on the Department of Biology Digital Learning Committee and the new Open Learning Biology Advisory Committee; in the latter role he will represent the department and work with new faculty member and HHMI Investigator Ron Vale on Institute-level online learning initiatives. Li will also chair the Biology Academic Planning Committee, which will help develop a longer-term outlook on faculty teaching assignments and course offerings.

Li is looking forward to hearing from faculty and students about the way the Institute teaches, and how it could be improved, both for the students on campus and for the online learners from across the world. 

“There are a lot of things that are changing; what are the core fundamentals that the students need to know, what should we teach them, and how should we teach them?” 

Although the commitment to teaching remains unchanged, there may be big transitions on the horizon. With two young children in school, Li is all too aware that the way that students learn today is very different from what he grew up with, and also very different from how students were learning just five or 10 years ago — writing essays on a computer, researching online, using AI tools, and absorbing information from media like short-form YouTube videos. 

“There’s a lot of appeal to a shorter format, but it’s very different from the lecture-based teaching style that has worked for a long time,” Li says. “I think a challenge we should and will face is figuring out the best way to communicate the core fundamentals, and adapting our teaching styles to the next generation of students.” 

Ultimately, Li is excited about balancing his research goals along with joining the department’s leadership team, and knows he can look to his fellow researchers in Building 68 and beyond for support.

“I’m privileged to be working with a great group of colleagues who are all invested in these efforts,” Li says. “Different people may have different ways of doing things, but we all share the same mission.” 


MIT Schwarzman College of Computing and MBZUAI launch international collaboration to shape the future of AI

The MIT–MBZUAI Collaborative Research Program will unite faculty and students from both institutions to advance AI and accelerate its use in pressing scientific and societal challenges.


The MIT Schwarzman College of Computing and the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) recently celebrated the launch of the MIT–MBZUAI Collaborative Research Program, a new effort to strengthen the building blocks of artificial intelligence and accelerate its use in pressing scientific and societal challenges.

Under the five-year agreement, faculty, students, and research staff from both institutions will collaborate on fundamental research projects to advance the technological foundations of AI and its applications in three core areas: scientific discovery, human thriving, and the health of the planet.

“Artificial intelligence is transforming nearly every aspect of human endeavor. MIT’s leadership in AI is greatly enriched through collaborations with leading academic institutions in the U.S. and around the world,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “Our collaboration with MBZUAI reflects a shared commitment to advancing AI in ways that are responsible, inclusive, and globally impactful. Together, we can explore new horizons in AI and bring broad benefits to society.”

“This agreement will unite the efforts of researchers at two world-class institutions to advance frontier AI research across scientific discovery, human thriving, and the health of the planet. By combining MBZUAI’s focus on foundational models and real-world deployment with MIT’s depth in computing and interdisciplinary innovation, we are creating a transcontinental bridge for discovery. Together, we will not only expand the boundaries of AI science, but also ensure that these breakthroughs are pursued responsibly and applied where they matter most — improving human health, enabling intelligent robotics, and driving sustainable AI at scale,” says Eric Xing, president and university professor at MBZUAI.

Each institution has appointed an academic director to oversee the program on its campus. At MIT, Philip Isola, the Class of 1948 Career Development Professor in the Department of Electrical Engineering and Computer Science, will serve as program lead. At MBZUAI, Le Song, professor of machine learning, will take on the role.

Supported by MBZUAI — the first university dedicated entirely to advancing science through AI, and based in Abu Dhabi, U.A.E. — the collaboration will fund a number of joint research projects per year. The findings will be openly publishable, and each project will be led by a principal investigator from MIT and one from MBZUAI, with project selections made by a steering committee composed of representatives from both institutions.


MIT physicists improve the precision of atomic clocks

A new method turns down quantum noise that obscures the “ticking” of atoms, and could enable stable, transportable atomic clocks.


Every time you check the time on your phone, make an online transaction, or use a navigation app, you are depending on the precision of atomic clocks.

An atomic clock keeps time by relying on the “ticks” of atoms as they naturally oscillate at rock-steady frequencies. Today’s atomic clocks operate by tracking cesium atoms, which tick nearly 10 billion times per second. Each of those ticks is precisely tracked using lasers that oscillate in sync, at microwave frequencies.

Scientists are developing next-generation atomic clocks that rely on even faster-ticking atoms such as ytterbium, which can be tracked with lasers at higher, optical frequencies. If they can be kept stable, optical atomic clocks could track even finer intervals of time, up to 100 trillion times per second.

Now, MIT physicists have found a way to improve the stability of optical atomic clocks, by reducing “quantum noise” — a fundamental measurement limitation due to the effects of quantum mechanics, which obscures the atoms’ pure oscillations. In addition, the team discovered that an effect of a clock’s laser on the atoms, previously considered irrelevant, can be used to further stabilize the laser.

The researchers developed a method to harness a laser-induced “global phase” in ytterbium atoms, and have boosted this effect with a quantum-amplification technique. The new approach doubles the precision of an optical atomic clock, enabling it to discern twice as many ticks per second compared to the same setup without the new method. What’s more, they anticipate that the precision of the method should increase steadily with the number of atoms in an atomic clock.
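The quantum noise in question sets what is known as the standard quantum limit: with N independent atoms, the achievable frequency uncertainty shrinks only as one over the square root of N. A back-of-envelope sketch of that scaling (generic numbers; only the factor-of-two gain reflects the result reported here):

```python
# Back-of-envelope scaling of clock precision with atom number N.
# Unentangled atoms obey the standard quantum limit (SQL), where the
# frequency uncertainty shrinks as 1/sqrt(N); an entanglement-enhanced
# scheme that doubles precision halves that uncertainty at the same N.

import math

def sql_uncertainty(n_atoms: int, single_atom_sigma: float = 1.0) -> float:
    """Relative frequency uncertainty for N independent atoms."""
    return single_atom_sigma / math.sqrt(n_atoms)

for n in (100, 400, 1600):
    sql = sql_uncertainty(n)
    entangled = sql / 2.0  # the doubling demonstrated in this study
    print(f"N={n:5d}  SQL={sql:.3f}  entangled={entangled:.3f}")

# Equivalently, a 2x precision gain at fixed averaging time is what
# 4x as many unentangled atoms would buy under the SQL.
```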

The researchers detail the method, which they call global phase spectroscopy, in a study appearing today in the journal Nature. They envision that the clock-stabilizing technique could one day enable portable optical atomic clocks that can be transported to various locations to measure all manner of phenomena.

“With these clocks, people are trying to detect dark matter and dark energy, and test whether there really are just four fundamental forces, and even to see if these clocks can predict earthquakes,” says study author Vladan Vuletić, the Lester Wolfe Professor of Physics at MIT. “We think our method can help make these clocks transportable and deployable to where they’re needed.”

The paper’s co-authors are Leon Zaporski, Qi Liu, Gustavo Velez, Matthew Radzihovsky, Zeyang Li, Simone Colombo, and Edwin Pedrozo-Peñafiel, who are members of the MIT-Harvard Center for Ultracold Atoms and the MIT Research Laboratory of Electronics.

Ticking time

In 2020, Vuletić and his colleagues demonstrated that an atomic clock could be made more precise by quantumly entangling the clock’s atoms. Quantum entanglement is a phenomenon by which particles can be made to behave in a collective, highly correlated manner. When atoms are quantumly entangled, they redistribute any noise, or uncertainty in measuring the atoms’ oscillations, in a way that reveals a clearer, more measurable “tick.”

In their previous work, the team induced quantum entanglement among several hundred ytterbium atoms that they first cooled and trapped in a cavity formed by two curved mirrors. They sent a laser into the cavity, which bounced thousands of times between the mirrors, interacting with the atoms and causing the ensemble to entangle. They were able to show that quantum entanglement could improve the precision of existing atomic clocks by essentially reducing the noise, or uncertainty between the laser’s and atoms’ tick rates.

At the time, however, they were limited by the ticking instability of the clock’s laser. In 2022, the same team derived a way to further amplify the difference in laser versus atom tick rates with “time reversal” — a trick that relies on entangling and de-entangling the atoms to boost the signal acquired in between.

However, in that work the team was still using traditional microwaves, which oscillate at much lower frequencies than the optical frequency standards ytterbium atoms can provide. It was as if they had painstakingly lifted a film of dust off a painting, only to then photograph it with a low-resolution camera.

“When you have atoms that tick 100 trillion times per second, that’s 10,000 times faster than the frequency of microwaves,” Vuletić says. “We didn’t know at the time how to apply these methods to higher-frequency optical clocks that are much harder to keep stable.”

About phase

In their new study, the team found a way to apply their previously developed time-reversal approach to optical atomic clocks. They sent in a laser that oscillates near the optical frequency of the entangled atoms.

“The laser ultimately inherits the ticking of the atoms,” says first author Zaporski. “But in order for this inheritance to hold for a long time, the laser has to be quite stable.”

The researchers found they were able to improve the stability of an optical atomic clock by taking advantage of a phenomenon that scientists had assumed was inconsequential to its operation. They realized that when light is sent through entangled atoms, the interaction can cause the atoms to jump up in energy, then settle back down into their original energy state while still carrying a memory of their round trip.

“One might think we’ve done nothing,” Vuletić says. “You get this global phase of the atoms, which is usually considered irrelevant. But this global phase contains information about the laser frequency.”

In other words, they realized that the laser was inducing a measurable change in the atoms, despite bringing them back to the original energy state, and that the magnitude of this change depends on the laser’s frequency.

“Ultimately, we are looking for the difference of laser frequency and the atomic transition frequency,” explains co-author Liu. “When that difference is small, it gets drowned by quantum noise. Our method amplifies this difference above this quantum noise.”

In their experiments, the team applied this new approach and found that through entanglement they were able to double the precision of their optical atomic clock.

“We saw that we can now resolve nearly twice as small a difference in the optical frequency, or the clock ticking frequency, without running into the quantum noise limit,” Zaporski says. “Although it’s a hard problem in general to run atomic clocks, the technical benefits of our method will make it easier, and we think this can enable stable, transportable atomic clocks.”

This research was supported, in part, by the U.S. Office of Naval Research, the National Science Foundation, the U.S. Defense Advanced Research Projects Agency, the U.S. Department of Energy, the U.S. Office of Science, the National Quantum Information Science Research Centers, and the Quantum Systems Accelerator.


Uncovering new physics in metals manufacturing

MIT researchers discovered a hidden atomic order that persists in metals even after extreme processing.


For decades, it’s been known that subtle chemical patterns exist in metal alloys, but researchers thought they were too minor to matter — or that they got erased during manufacturing. However, recent studies have shown that in laboratory settings, these patterns can change a metal’s properties, including its mechanical strength, durability, heat capacity, radiation tolerance, and more.

Now, researchers at MIT have found that these chemical patterns also exist in conventionally manufactured metals. The surprising finding revealed a new physical phenomenon that explains the persistent patterns.

In a paper published in Nature Communications today, the researchers describe how they tracked the patterns and discovered the physics that explains them. The authors also developed a simple model to predict chemical patterns in metals, and they show how engineers could use the model to tune the effect of such patterns on metallic properties, for use in aerospace, semiconductors, nuclear reactors, and more.

“The conclusion is: You can never completely randomize the atoms in a metal. It doesn’t matter how you process it,” says Rodrigo Freitas, the TDK Assistant Professor in the Department of Materials Science and Engineering. “This is the first paper showing these non-equilibrium states that are retained in the metal. Right now, this chemical order is not something we’re controlling for or paying attention to when we manufacture metals.”

For Freitas, an early-career researcher, the findings offer vindication for exploring a crowded field that he says few believed would lead to unique or broadly impactful results. He credits the U.S. Air Force Office of Scientific Research, which supported the work through their Young Investigator Program. He also credits the collaborative effort that enabled the paper, which features three MIT PhD students as co-first authors: Mahmudul Islam, Yifan Cao, and Killian Sheriff.

“There was the question of whether I should even be tackling this specific problem because people have been working on it for a long time,” Freitas says. “But the more I learned about it, the more I saw researchers were thinking about this in idealized laboratory scenarios. We wanted to perform simulations that were as realistic as possible to reproduce these manufacturing processes with high fidelity. My favorite part of this project is how non-intuitive the findings are. The fact that you cannot completely mix something together, people didn’t see that coming.”

From surprises to theories

Freitas’ research team began with a practical question: How fast do chemical elements mix during metal processing? Conventional wisdom held that there’s a point where the chemical composition of metals becomes completely uniform from mixing during manufacturing. By finding that point, the researchers thought they could develop a simple way to design alloys with different levels of atomic order, also known as short-range order.

The researchers used machine-learning techniques to track millions of atoms as they moved and rearranged themselves under conditions that mimicked metal processing.

“The first thing we did was to deform a piece of metal,” Freitas explains. “That’s a common step during manufacturing: You roll the metal and deform it and heat it up again and deform it a little more, so it develops the structure you want. We did that and we tracked chemical order. The thought was as you deform the material, its chemical bonds are broken and that randomizes the system. These violent manufacturing processes essentially shuffle the atoms.”

The researchers hit a snag during the mixing process: The alloys never reached a fully random state. That was a surprise, because no known physical mechanism could explain the result.

“It pointed to a new piece of physics in metals,” the researchers write in the paper. “It was one of those cases where applied research led to a fundamental discovery.”

To uncover the new physics, the researchers developed computational tools, including high-fidelity machine-learning models, to capture atomic interactions, along with new statistical methods that quantify how chemical order changes over time. They then applied these tools in large-scale molecular dynamics simulations to track how atoms rearrange during processing.

The researchers found some standard chemical arrangements in their processed metals, but at higher temperatures than would normally be expected. Even more surprisingly, they found completely new chemical patterns, never seen outside of manufacturing processes and observed here for the first time. The researchers referred to these patterns as “far-from-equilibrium states.”

The researchers also built a simple model that reproduced key features of the simulations. The model explains how the chemical patterns arise from defects known as dislocations, which are like three-dimensional scribbles within a metal. As the metal is deformed, those scribbles warp, shuffling nearby atoms along the way. Researchers had previously believed that this shuffling completely erased order in the metals, but the team found that dislocations favor some atomic swaps over others, resulting not in randomness but in the subtle patterns the simulations revealed.

“These defects have chemical preferences that guide how they move,” Freitas says. “They look for low energy pathways, so given a choice between breaking chemical bonds, they tend to break the weakest bonds, and it’s not completely random. This is very exciting because it’s a non-equilibrium state: It’s not something you’d see naturally occurring in materials. It’s the same way our bodies live in non-equilibrium. The temperature outside is always hotter or colder than our bodies, and we’re maintaining that steady state equilibrium to stay alive. That’s why these states exist in metal: the balance between an internal push toward disorder plus this ordering tendency of breaking certain bonds that are always weaker than others.”
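The flavor of that argument can be reproduced with a toy Monte Carlo simulation: shuffle a two-species chain by swapping atoms, but make swaps that break strong unlike-atom bonds less likely to be accepted, and a measurable short-range order survives however long the shuffling runs. A sketch under invented assumptions (a one-dimensional lattice with made-up bond energies and temperature), far simpler than the paper’s machine-learning simulations:

```python
# Toy Monte Carlo: biased swaps on a binary ring leave residual short-range
# order instead of fully randomizing the chain. The lattice, energies, and
# temperature are invented to show the flavor of the argument; the paper's
# simulations are far more detailed (and genuinely non-equilibrium).

import math
import random

random.seed(0)
N = 10_000
chain = ["A", "B"] * (N // 2)
random.shuffle(chain)  # start from a well-shuffled state

def unlike_fraction(chain):
    """Fraction of nearest-neighbor bonds joining unlike atoms."""
    n = len(chain)
    return sum(chain[i] != chain[(i + 1) % n] for i in range(n)) / n

def site_bond_energy(chain, i):
    """Energy of the two bonds touching site i; unlike bonds are assigned
    lower energy, so swaps that break them are accepted less often."""
    n = len(chain)
    return sum(-1.0 for j in (i - 1, i + 1) if chain[i] != chain[j % n])

T = 1.0  # stand-in for the randomizing drive of deformation
for _ in range(200_000):
    i, j = random.randrange(N), random.randrange(N)
    if chain[i] == chain[j] or abs(i - j) <= 1 or abs(i - j) >= N - 1:
        continue  # skip no-op swaps and adjacent sites (keeps energies simple)
    before = site_bond_energy(chain, i) + site_bond_energy(chain, j)
    chain[i], chain[j] = chain[j], chain[i]
    delta = site_bond_energy(chain, i) + site_bond_energy(chain, j) - before
    if delta > 0 and random.random() > math.exp(-delta / T):
        chain[i], chain[j] = chain[j], chain[i]  # reject: undo the swap

# Warren-Cowley-style order parameter: 0 if random; negative when unlike
# bonds are favored. The bias pins it away from zero.
print(1.0 - unlike_fraction(chain) / 0.5)
```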

Applying a new theory

The researchers are now exploring how these chemical patterns develop across a wide range of manufacturing conditions. The result is a map that links various metal processing steps to different chemical patterns in metal.

To date, this chemical order and the properties it tunes have been largely considered an academic subject. With this map, the researchers hope engineers can begin thinking of these patterns as design levers that can be pulled during production to get new properties.

“Researchers have been looking at the ways these atomic arrangements change metallic properties — a big one is catalysis,” Freitas says of the process that drives chemical reactions. “Electrochemistry happens at the surface of the metal, and it’s very sensitive to local atomic arrangements. And there have been other properties that you wouldn’t think would be influenced by these factors. Radiation damage is another big one. That affects these materials’ performance in nuclear reactors.”

Researchers have already told Freitas the paper could help explain other surprise findings about metallic properties, and he’s excited for the field to move from fundamental research into chemical order to more applied work.

“You can think of areas where you need very optimized alloys like aerospace,” Freitas says. “They care about very specific compositions. Advanced manufacturing now makes it possible to combine metals that normally wouldn’t mix through deformation. Understanding how atoms actually shuffle and mix in those processes is crucial, because it’s the key to gaining strength while still keeping the low density. So, this could be a huge deal for them.”

This work was supported, in part, by the U.S. Air Force Office of Scientific Research, MathWorks, and the MIT-Portugal Program.


Engineered “natural killer” cells could help fight cancer

A new study identifies genetic modifications that make these immune cells, known as CAR-NK cells, more effective at destroying cancer cells.


One of the newest weapons that scientists have developed against cancer is a type of engineered immune cell known as CAR-NK (natural killer) cells. Similar to CAR-T cells, these cells can be programmed to attack cancer cells.

MIT and Harvard Medical School researchers have now come up with a new way to engineer CAR-NK cells that makes them much less likely to be rejected by the patient’s immune system, which is a common drawback of this type of treatment.

The new advance may also make it easier to develop “off-the-shelf” CAR-NK cells that could be given to patients as soon as they are diagnosed. Traditional approaches to engineering CAR-NK or CAR-T cells usually take several weeks.

“This enables us to do one-step engineering of CAR-NK cells that can avoid rejection by host T cells and other immune cells. And, they kill cancer cells better and they’re safer,” says Jianzhu Chen, an MIT professor of biology, a member of the Koch Institute for Integrative Cancer Research, and one of the senior authors of the study.

In a study of mice with humanized immune systems, the researchers showed that these CAR-NK cells could destroy most cancer cells while evading the host immune system.

Rizwan Romee, an associate professor of medicine at Harvard Medical School and Dana-Farber Cancer Institute, is also a senior author of the paper, which appears today in Nature Communications. The paper’s lead author is Fuguo Liu, a postdoc at the Koch Institute and a research fellow at Dana-Farber.

Evading the immune system

NK cells are a critical part of the body’s natural immune defenses, and their primary responsibility is to locate and kill cancer cells and virus-infected cells. One of their cell-killing strategies, also used by T cells, is a process called degranulation. Through this process, immune cells release a protein called perforin, which can poke holes in another cell to induce cell death.

To create CAR-NK cells to treat cancer patients, doctors first take a blood sample from the patient. NK cells are isolated from the sample and engineered to express a protein called a chimeric antigen receptor (CAR), which can be designed to target specific proteins found on cancer cells.

Then, the cells spend several weeks proliferating until there are enough to transfuse back into the patient. A similar approach is also used to create CAR-T cells. Several CAR-T cell therapies have been approved to treat blood cancers such as lymphoma and leukemia, but CAR-NK treatments are still in clinical trials.

Because it takes so long to grow a population of engineered cells that can be infused into the patient, and those cells may not be as viable as cells that came from a healthy person, researchers are exploring an alternative approach: using NK cells from a healthy donor.

Such cells could be grown in large quantities and would be ready whenever they were needed. However, the drawback to these cells is that the recipient’s immune system may see them as foreign and attack them before they can start killing cancer cells.

In the new study, the MIT team set out to find a way to help NK cells “hide” from a patient’s immune system. Through studies of immune cell interactions, they showed that NK cells could evade a host T-cell response if they did not carry surface proteins called HLA class 1 proteins. These proteins, usually expressed on NK cell surfaces, can trigger T cells to attack if the immune system doesn’t recognize them as “self.”

To take advantage of this, the researchers engineered the cells to express a sequence of siRNA (short interfering RNA) that interferes with the genes for HLA class 1. They also delivered the CAR gene, as well as the gene for either PD-L1 or single-chain HLA-E (SCE). PD-L1 and SCE are proteins that make NK cells more effective by turning up genes that are involved in killing cancer cells.

All of these genes can be carried on a single piece of DNA, known as a construct, making it simple to transform donor NK cells into immune-evasive CAR-NK cells. The researchers used this construct to create CAR-NK cells targeting a protein called CD19, which is often found on cancerous B cells in lymphoma patients.

NK cells unleashed

The researchers tested these CAR-NK cells in mice with a human-like immune system. These mice were also injected with lymphoma cells.

Mice that received CAR-NK cells with the new construct maintained the NK cell population for at least three weeks, and the NK cells were able to nearly eliminate cancer in those mice. In mice that received either NK cells with no genetic modifications or NK cells with only the CAR gene, the host immune cells attacked the donor NK cells. In these mice, the NK cells died out within two weeks, and the cancer spread unchecked.

The researchers also found that these engineered CAR-NK cells were much less likely to induce cytokine release syndrome — a common side effect of immunotherapy treatments, which can cause life-threatening complications.

Because of CAR-NK cells’ potentially better safety profile, Chen anticipates that they could eventually be used in place of CAR-T cells. For any CAR-NK cells that are now in development to target lymphoma or other types of cancer, it should be possible to adapt them by adding the construct developed in this study, he says.

The researchers now hope to run a clinical trial of this approach, working with colleagues at Dana-Farber. They are also working with a local biotech company to test CAR-NK cells to treat lupus, an autoimmune disorder that causes the immune system to attack healthy tissues and organs.

The research was funded, in part, by Skyline Therapeutics, the Koch Institute Frontier Research Program through the Kathy and Curt Marble Cancer Research Fund and the Elisa Rah (2004, 2006) Memorial Fund, the Claudia Adams Barr Foundation, and the Koch Institute Support (core) Grant from the National Cancer Institute.


New prediction model could improve the reliability of fusion power plants

The approach combines physics and machine learning to avoid damaging disruptions when powering down tokamak fusion machines.


Tokamaks are machines that are meant to hold and harness the power of the sun. These fusion machines use powerful magnets to contain a plasma hotter than the sun’s core and push the plasma’s atoms to fuse and release energy. If tokamaks can operate safely and efficiently, the machines could one day provide clean and limitless fusion energy.

Today, there are a number of experimental tokamaks in operation around the world, with more underway. Most are small-scale research machines built to investigate how the devices can spin up plasma and harness its energy. One of the challenges that tokamaks face is how to safely and reliably turn off a plasma current that is circulating at speeds of up to 100 kilometers per second, at temperatures of over 100 million degrees Celsius.

Such “rampdowns” are necessary when a plasma becomes unstable. To prevent the plasma from further disrupting and potentially damaging the device’s interior, operators ramp down the plasma current. But occasionally the rampdown itself can destabilize the plasma. In some machines, rampdowns have caused scrapes and scarring to the tokamak’s interior — minor damage that still requires considerable time and resources to repair.

Now, scientists at MIT have developed a method to predict how plasma in a tokamak will behave during a rampdown. The team combined machine-learning tools with a physics-based model of plasma dynamics to simulate a plasma’s behavior and any instabilities that may arise as the plasma is ramped down and turned off. The researchers trained and tested the new model on plasma data from an experimental tokamak in Switzerland. They found the method quickly learned how plasma would evolve as it was ramped down in different ways. What’s more, the method achieved a high level of accuracy using a relatively small amount of data. This training efficiency is promising, given that each experimental run of a tokamak is expensive and quality data is limited as a result.

The new model, which the team highlights this week in an open-access Nature Communications paper, could improve the safety and reliability of future fusion power plants.

“For fusion to be a useful energy source it’s going to have to be reliable,” says lead author Allen Wang, a graduate student in aeronautics and astronautics and a member of the Disruptions Group at MIT’s Plasma Science and Fusion Center (PSFC). “To be reliable, we need to get good at managing our plasmas.”

The study’s MIT co-authors include PSFC Principal Research Scientist and Disruptions Group leader Cristina Rea, and members of the Laboratory for Information and Decision Systems (LIDS) Oswin So, Charles Dawson, and Professor Chuchu Fan, along with Mark (Dan) Boyer of Commonwealth Fusion Systems and collaborators from the Swiss Plasma Center in Switzerland.

“A delicate balance”

Tokamaks are experimental fusion devices that were first built in the Soviet Union in the 1950s. The device gets its name from a Russian acronym that translates to a “toroidal chamber with magnetic coils.” Just as its name describes, a tokamak is toroidal, or donut-shaped, and uses powerful magnets to contain and spin up a gas to temperatures and energies high enough that atoms in the resulting plasma can fuse and release energy.

Today, tokamak experiments are relatively low-energy in scale, with few approaching the size and output needed to generate safe, reliable, usable energy. Disruptions in experimental, low-energy tokamaks are generally not an issue. But as fusion machines scale up to grid-scale dimensions, controlling much higher-energy plasmas at all phases will be paramount to maintaining a machine’s safe and efficient operation.

“Uncontrolled plasma terminations, even during rampdown, can generate intense heat fluxes damaging the internal walls,” Wang notes. “Quite often, especially with the high-performance plasmas, rampdowns actually can push the plasma closer to some instability limits. So, it’s a delicate balance. And there’s a lot of focus now on how to manage instabilities so that we can routinely and reliably take these plasmas and safely power them down. And there are relatively few studies done on how to do that well.”

Bringing down the pulse

Wang and his colleagues developed a model to predict how a plasma will behave during tokamak rampdown. While they could have simply applied machine-learning tools such as a neural network to learn signs of instabilities in plasma data, “you would need an ungodly amount of data” for such tools to discern the very subtle and ephemeral changes in extremely high-temperature, high-energy plasmas, Wang says.

Instead, the researchers paired a neural network with an existing model that simulates plasma dynamics according to the fundamental rules of physics. With this combination of machine learning and a physics-based plasma simulation, the team found that only a couple hundred pulses at low performance, and a small handful of pulses at high performance, were sufficient to train and validate the new model.
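In code, the division of labor might look something like the following minimal sketch, written with hypothetical names and a toy stand-in for the physics model. It is meant only to illustrate the pairing the team describes, in which a neural network learns just the residual error that the physics-based simulator leaves behind.

```python
import numpy as np

def physics_step(state, control, dt):
    # Toy stand-in for a physics-based plasma-dynamics model:
    # a simple decay of plasma current and temperature during rampdown.
    current = state["current"] * (1 - control["ramp_rate"] * dt)
    temperature = state["temperature"] * (1 - 0.05 * dt)
    return {"current": current, "temperature": temperature}

class HybridModel:
    """Physics simulator plus a learned residual correction."""

    def __init__(self, residual_net):
        # residual_net: any callable trained on experimental pulses to
        # predict the physics model's one-step error.
        self.residual_net = residual_net

    def predict(self, state, control, dt):
        base = physics_step(state, control, dt)
        features = np.array([state["current"], state["temperature"],
                             control["ramp_rate"]])
        d_current, d_temp = self.residual_net(features)
        # The network corrects only what the physics model gets wrong,
        # which is why far fewer training pulses are needed than for a
        # purely data-driven model.
        return {"current": base["current"] + d_current,
                "temperature": base["temperature"] + d_temp}
```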

The data they used for the new study came from the TCV, the Swiss “variable configuration tokamak” operated by the Swiss Plasma Center at EPFL (the Swiss Federal Institute of Technology Lausanne). The TCV is a small experimental fusion device used for research purposes, often as a test bed for next-generation device solutions. Wang used the data from several hundred TCV plasma pulses that included properties of the plasma such as its temperature and energies during each pulse’s ramp-up, run, and ramp-down. He trained the new model on this data, then tested it and found it was able to accurately predict the plasma’s evolution given the initial conditions of a particular tokamak run.

The researchers also developed an algorithm to translate the model’s predictions into practical “trajectories,” or plasma-managing instructions that a tokamak controller can carry out automatically, for instance adjusting the magnets or temperature to maintain the plasma’s stability. They implemented the algorithm on several TCV runs and found that it produced trajectories that safely ramped down a plasma pulse, in some cases faster and without disruptions compared to runs without the new method.
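A heavily simplified version of such a trajectory-planning step, reusing the hypothetical HybridModel sketch above, might scan candidate ramp rates from fastest to gentlest and keep the first whose predicted trajectory stays within a stability limit. This is illustrative logic, not the authors’ actual controller.

```python
def plan_rampdown(model, state, candidate_rates, temp_limit,
                  dt=0.01, max_steps=500):
    # Try the most aggressive rampdown first; fall back to gentler ones.
    for rate in sorted(candidate_rates, reverse=True):
        s, safe = dict(state), True
        for _ in range(max_steps):
            s = model.predict(s, {"ramp_rate": rate}, dt)
            if s["current"] <= 0:          # plasma fully ramped down
                break
            if s["temperature"] > temp_limit:
                safe = False               # predicted instability
                break
        if safe:
            return rate
    return min(candidate_rates)            # gentlest rate as a fallback
```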

“At some point the plasma will always go away, but we call it a disruption when the plasma goes away at high energy. Here, we ramped the energy down to nothing,” Wang notes. “We did it a number of times. And we did things much better across the board. So, we had statistical confidence that we made things better.”

The work was supported in part by Commonwealth Fusion Systems (CFS), an MIT spinout that intends to build the world’s first compact, grid-scale fusion power plant. The company is developing a demo tokamak, SPARC, designed to produce net-energy plasma, meaning that it should generate more energy than it takes to heat up the plasma. Wang and his colleagues are working with CFS on ways that the new prediction model and tools like it can better predict plasma behavior and prevent costly disruptions to enable safe and reliable fusion power.

“We’re trying to tackle the science questions to make fusion routinely useful,” Wang says. “What we’ve done here is the start of what is still a long journey. But I think we’ve made some nice progress.”

Additional support for the research came within the framework of the EUROfusion Consortium, via the Euratom Research and Training Program, and from the Swiss State Secretariat for Education, Research, and Innovation.


Printable aluminum alloy sets strength records, may enable lighter aircraft parts

Incorporating machine learning, MIT engineers developed a way to 3D print alloys that are much stronger than conventionally manufactured versions.


MIT engineers have developed a printable aluminum alloy that can withstand high temperatures and is five times stronger than traditionally manufactured aluminum.

The new printable metal is made from a mix of aluminum and other elements that the team identified using a combination of simulations and machine learning, which significantly pruned the number of possible combinations of materials to search through. While traditional methods would require simulating over 1 million possible combinations of materials, the team’s new machine learning-based approach needed only to evaluate 40 possible compositions before identifying an ideal mix for a high-strength, printable aluminum alloy.
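The article does not detail the exact algorithm, but the general pattern of such a search, fitting a cheap surrogate model to the handful of compositions evaluated so far and letting it nominate the next candidate to simulate, can be sketched as follows. All names, data, and the stand-in strength function are invented for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def simulate_strength(x):
    # Placeholder for the expensive physics simulation of precipitate
    # formation; returns a made-up strength score.
    return float(-np.sum((x - 0.2) ** 2))

rng = np.random.default_rng(0)
candidates = rng.dirichlet(np.ones(6), size=10_000)  # 6-element mixes

explored, X, y = [], [], []
idx = 0  # arbitrary starting composition
for _ in range(40):  # a budget of ~40 evaluations, as in the study
    x = candidates[idx]
    X.append(x); y.append(simulate_strength(x)); explored.append(idx)
    gp = GaussianProcessRegressor().fit(np.array(X), np.array(y))
    mu, sigma = gp.predict(candidates, return_std=True)
    score = mu + 1.5 * sigma       # favor strong or uncertain mixes
    score[explored] = -np.inf      # never re-evaluate a composition
    idx = int(np.argmax(score))

best = candidates[explored[int(np.argmax(y))]]
print("best composition found:", np.round(best, 3))
```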

When they printed the alloy and tested the resulting material, the team confirmed that, as predicted, the aluminum alloy was as strong as the strongest aluminum alloys that are manufactured today using traditional casting methods.

The researchers envision that the new printable aluminum could be made into stronger, more lightweight and temperature-resistant products, such as fan blades in jet engines. Fan blades are traditionally cast from titanium — a material that is more than 50 percent heavier and up to 10 times costlier than aluminum — or made from advanced composites.

“If we can use lighter, high-strength material, this would save a considerable amount of energy for the transportation industry,” says Mohadeseh Taheri-Mousavi, who led the work as a postdoc at MIT and is now an assistant professor at Carnegie Mellon University.

“Because 3D printing can produce complex geometries, save material, and enable unique designs, we see this printable alloy as something that could also be used in advanced vacuum pumps, high-end automobiles, and cooling devices for data centers,” adds John Hart, the Class of 1922 Professor and head of the Department of Mechanical Engineering at MIT.

Hart and Taheri-Mousavi provide details on the new printable aluminum design in a paper published in the journal Advanced Materials. The paper’s MIT co-authors include Michael Xu, Clay Houser, Shaolou Wei, James LeBeau, and Greg Olson, along with Florian Hengsbach and Mirko Schaper of Paderborn University in Germany, and Zhaoxuan Ge and Benjamin Glaser of Carnegie Mellon University.

Micro-sizing

The new work grew out of an MIT class that Taheri-Mousavi took in 2020, which was taught by Greg Olson, professor of the practice in the Department of Materials Science and Engineering. As part of the class, students learned to use computational simulations to design high-performance alloys. Alloys are materials that are made from a mix of different elements, the combination of which imparts exceptional strength and other unique properties to the material as a whole.

Olson challenged the class to design an aluminum alloy that would be stronger than the strongest printable aluminum alloy designed to date. As with most materials, the strength of aluminum depends in large part on its microstructure: The smaller and more densely packed its microscopic constituents, or “precipitates,” the stronger the alloy would be.
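One standard textbook way to make that size-strength link concrete (illustrative background, not an equation from the paper) is the Orowan bypass relation, in which the strengthening increment grows as the spacing between precipitates shrinks:

```latex
% Orowan strengthening (textbook relation, shown for context only):
% dislocations must bow between precipitates spaced a distance L apart,
% so the added strength scales inversely with that spacing.
\[
  \Delta\tau \approx \frac{G b}{L}
\]
% G: shear modulus, b: Burgers vector magnitude, L: precipitate spacing.
% Smaller, more densely packed precipitates mean a smaller L and hence
% a larger strengthening increment.
```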

With this in mind, the class used computer simulations to methodically combine aluminum with various types and concentrations of elements, to simulate and predict the resulting alloy’s strength. However, the exercise failed to produce a stronger result. At the end of the class, Taheri-Mousavi wondered: Could machine learning do better?

“At some point, there are a lot of things that contribute nonlinearly to a material’s properties, and you are lost,” Taheri-Mousavi says. “With machine-learning tools, they can point you to where you need to focus, and tell you for example, these two elements are controlling this feature. It lets you explore the design space more efficiently.”

Layer by layer

In the new study, Taheri-Mousavi picked up where Olson’s class left off, looking to identify a stronger recipe for aluminum alloy. This time, she used machine-learning techniques designed to efficiently comb through data such as the properties of elements, to identify key connections and correlations that should lead to a more desirable outcome or product.

She found that, using just 40 compositions mixing aluminum with different elements, the machine-learning approach quickly homed in on a recipe for an aluminum alloy with a higher volume fraction of small precipitates, and therefore higher strength, than previous studies had identified. The alloy’s strength was even higher than anything the team could find by simulating over 1 million possibilities without machine learning.

To physically produce this new strong, small-precipitate alloy, the team realized 3D printing would be the way to go, rather than traditional metal casting, in which molten aluminum is poured into a mold and left to cool and harden. The longer this cooling time, the more likely the individual precipitates are to grow.

The researchers showed that 3D printing, also known more broadly as additive manufacturing, can be a faster way to cool and solidify the aluminum alloy. Specifically, they considered laser powder bed fusion (LPBF) — a technique by which a powder is deposited, layer by layer, on a surface in a desired pattern and then quickly melted by a laser that traces over the pattern. The melted pattern is thin enough that it solidifies quickly before another layer is deposited and similarly “printed.” The team found that LPBF’s inherently rapid cooling and solidification enabled the small-precipitate, high-strength aluminum alloy that their machine-learning method predicted.

“Sometimes we have to think about how to get a material to be compatible with 3D printing,” says study co-author John Hart. “Here, 3D printing opens a new door because of the unique characteristics of the process — particularly, the fast cooling rate. Very rapid freezing of the alloy after it’s melted by the laser creates this special set of properties.”

Putting their idea into practice, the researchers ordered a formulation of printable powder, based on their new aluminum alloy recipe. They sent the powder — a mix of aluminum and five other elements — to collaborators in Germany, who printed small samples of the alloy using their in-house LPBF system. The samples were then sent to MIT where the team ran multiple tests to measure the alloy’s strength and image the samples’ microstructure.

Their results confirmed the predictions made by their initial machine-learning search: The printed alloy was five times stronger than a cast counterpart and 50 percent stronger than alloys designed using conventional simulations without machine learning. The new alloy’s microstructure also consisted of a higher volume fraction of small precipitates, and it remained stable at temperatures of up to 400 degrees Celsius — very high for an aluminum alloy.

The researchers are applying similar machine-learning techniques to further optimize other properties of the alloy.

“Our methodology opens new doors for anyone who wants to do 3D printing alloy design,” Taheri-Mousavi says. “My dream is that one day, passengers looking out their airplane window will see fan blades of engines made from our aluminum alloys.”

This work was carried out, in part, using MIT.nano’s characterization facilities.


Matthew Shoulders named head of the Department of Chemistry

A leading researcher in protein folding biochemistry and next-generation protein engineering techniques will advance chemistry research and education.


Matthew D. Shoulders, the Class of 1942 Professor of Chemistry, a MacVicar Faculty Fellow, and an associate member of the Broad Institute of MIT and Harvard, has been named head of the MIT Department of Chemistry, effective Jan. 16, 2026. 

“Matt has made pioneering contributions to the chemistry research community through his research on mechanisms of proteostasis and his development of next-generation techniques to address challenges in biomedicine and agriculture,” says Nergis Mavalvala, dean of the MIT School of Science and the Curtis and Kathleen Marble Professor of Astrophysics. “He is also a dedicated educator, beloved by undergraduates and graduates alike. I know the department will be in good hands as we double down on our commitment to world-leading research and education in the face of financial headwinds.”

Shoulders succeeds Troy Van Voorhis, the Robert T. Haslam and Bradley Dewey Professor of Chemistry, who has been at the helm since October 2019.

“I am tremendously grateful to Troy for his leadership the past six years, building a fantastic community here in our department. We face challenges, but also many exciting opportunities, as a department in the years to come,” says Shoulders. “One thing is certain: Chemistry innovations are critical to solving pressing global challenges. Through the research that we do and the scientists we train, our department has a huge role to play in shaping the future.”

Shoulders studies how cells fold proteins, and he develops and applies novel protein engineering techniques to challenges in biotechnology. His work across chemistry and biochemistry fields, including proteostasis, extracellular matrix biology, virology, evolution, and synthetic biology, is not just yielding important insights into topics like how cells build healthy tissues and how proteins evolve, but also influencing approaches to disease therapy and biotechnology development.

“Matt is an outstanding researcher whose work touches on fundamental questions about how the cell machinery directs the synthesis and folding of proteins. His discoveries about how that machinery breaks down as a result of mutations or in response to stress have a fundamental impact on how we think about and treat human diseases,” says Van Voorhis.

In one part of his current research program, Shoulders is studying how protein folding systems in cells — known as chaperones — shape the evolution of their clients. Among other discoveries, his lab has shown that viral pathogens hijack human chaperones to enable their rapid evolution and escape from host immunity. In related recent work, they have discovered that these same chaperones can promote access to malignancy-driving mutations in tumors. Beyond fundamental insights into evolutionary biology, these findings hold potential to open new therapeutic strategies to target cancer and viral infections.

“Matt’s ability to see both the details and the big picture makes him an outstanding researcher and a natural leader for the department,” says Timothy Swager, the John D. MacArthur Professor of Chemistry. “MIT Chemistry can only benefit from his dedication to understanding and addressing the parts and the whole.” 

Shoulders also leads a food security project through the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS). Shoulders, along with MIT Research Scientist Robbie Wilson, assembled an interdisciplinary team based at MIT to enhance climate resilience in agriculture by improving one of the most inefficient aspects of photosynthesis, the carbon dioxide-fixing plant enzyme RuBisCO. J-WAFS funded this high-risk, high-reward MIT Grand Challenge project in 2023, and it has received further support from federal research agencies and the Grantham Foundation for the Protection of the Environment. 

“Our collaborative team of biochemists and synthetic biologists, computational biologists, and chemists is deeply integrated with plant biologists, creating a robust feedback loop for enzyme engineering,” Shoulders says. “Together, this team is making a concerted effort using state-of-the-art techniques to engineer crop RuBisCO with an eye to helping make meaningful gains in securing a stable crop supply, hopefully with accompanying improvements in both food and water security.”

In addition to his research contributions, Shoulders has taught multiple classes for Course V, including 5.54 (Advances in Chemical Biology) and 5.111 (Principles of Chemical Science), along with a number of other key chemistry classes. His contributions to a 5.111 “bootcamp” through the MITx platform served to address gaps in the classroom curriculum by providing online tools to help undergraduate students better grasp the material in the chemistry General Institute Requirement (GIR). His development of Guided Learning Demonstrations to support first-year chemistry courses at MIT has helped bring the lab to the GIR, and also contributed to the popularity of 5.111 courses offered regularly via MITx.

“I have had the pleasure of teaching with Matt on several occasions, and he is a fantastic educator. He is an innovator both inside and outside the classroom and has an unwavering commitment to his students’ success,” says Van Voorhis of Shoulders, who was named a 2022 MacVicar Faculty Fellow, and who received a Committed to Caring award through the Office of Graduate Education.

Shoulders also founded the MIT Homeschool Internship Program for Science and Technology, which brings high school students to campus for paid summer research experiences in labs across the Institute.

He is a founding member of the Department of Chemistry’s Quality of Life Committee and has chaired it for the last six years, helping to improve all aspects of opportunity, professional development, and experience in the department: “countless changes that have helped make MIT a better place for all,” as Van Voorhis notes, including creating a peer mentoring program for graduate students and establishing universal graduate student exit interviews to collect data for department-wide assessment and improvement.

At the Institute level, Shoulders has served on the Committee on Graduate Programs, Committee on Sexual Misconduct Prevention and Response (in which he co-chaired the provost's working group on the Faculty and Staff Sexual Misconduct Survey), and the Committee on Assessment of Biohazards and Embryonic Stem Cell Research Oversight, among other roles.

Shoulders graduated summa cum laude from Virginia Tech in 2004, earning a BS in chemistry with a minor in biochemistry. He earned a PhD in chemistry at the University of Wisconsin-Madison in 2009 under Professor Ronald Raines. Following an American Cancer Society Postdoctoral Fellowship at Scripps Research Institute, working with professors Jeffery Kelly and Luke Wiseman, Shoulders joined the MIT Department of Chemistry faculty as an assistant professor in 2012. Shoulders also serves as an associate member of the Broad Institute and an investigator at the Center for Musculoskeletal Research at Massachusetts General Hospital.

Among his many awards, Shoulders has received an NIH Director’s New Innovator Award under the NIH High-Risk, High-Reward Research Program; an NSF CAREER Award; an American Cancer Society Research Scholar Award; the Camille Dreyfus Teacher-Scholar Award; and most recently the Ono Pharma Foundation Breakthrough Science Award.


Report: Sustainability in supply chains is still a firm-level priority

Analysis from MIT’s Center for Transportation and Logistics finds companies are still acting to reduce emissions, but often lag in measurement techniques.


Corporations are actively seeking sustainability advances in their supply chains — but many need to improve the business metrics they use in this area to realize more progress, according to a new report by MIT researchers.   

During a time of shifting policies globally and continued economic uncertainty, the survey-based report finds 85 percent of companies say they are continuing supply chain sustainability practices at the same level as in recent years, or are increasing those efforts.

“What we found is strong evidence that sustainability still matters,” says Josué Velázquez Martínez, a research scientist and director of the MIT Sustainable Supply Chain Lab, which helped produce the report. “There are many things that remain to be done to accomplish those goals, but there’s a strong willingness from companies in all parts of the world to do something about sustainability.”

The new analysis, titled “Sustainability Still Matters,” was released today. It is the sixth annual report on the subject prepared by the MIT Sustainable Supply Chain Lab, which is part of MIT’s Center for Transportation and Logistics. The Council of Supply Chain Management Professionals collaborated on the project as well.

The report is based on a global survey, with responses from 1,203 professionals in 97 countries. This year, the report analyzes three issues in depth. The first is regulations and the role they play in corporate approaches to supply chain management. A second core topic is management and mitigation of what industry professionals call “Scope 3” emissions, which are those not from a firm itself, but from a firm’s supply chain. And the third is the future of freight transportation, which by itself accounts for a substantial portion of supply chain emissions.

Broadly, the survey finds that for European-based firms, the principal driver of action in this area remains government mandates, such as the Corporate Sustainability Reporting Directive, which requires companies to publish regular reports on their environmental impact and the risks to society involved. In North America, firm leadership and investor priorities are more likely to be decisive factors in shaping a company’s efforts.

“In Europe the pressure comes primarily from regulation, but in the U.S. it comes more from investors, or from competitors,” Velázquez Martínez says.

The survey responses on Scope 3 emissions reveal a number of opportunities for improvement. In business and sustainability terms, Scope 1 greenhouse gas emissions are those a firm produces directly. Scope 2 emissions come from the energy a firm purchases. And Scope 3 emissions are those produced across a firm’s value chain, including the supply chain activities involved in producing, transporting, using, and disposing of its products.

The report reveals that about 40 percent of firms keep close track of Scope 1 and 2 emissions, but far fewer tabulate Scope 3 on equivalent terms. And yet Scope 3 may account for roughly 75 percent of total firm emissions, on aggregate. About 70 percent of firms in the survey say they do not have enough data from suppliers to accurately tabulate the total greenhouse gas and climate impact of their supply chains.

Certainly it can be hard to calculate the total emissions when a supply chain has many layers, including smaller suppliers lacking data capacity. But firms can upgrade their analytics in this area, too. For instance, 50 percent of North American firms are still using spreadsheets to tabulate emissions data, often making rough estimates that correlate emissions to simple economic activity. An alternative is life cycle assessment software that provides more sophisticated estimates of a product’s emissions, from the extraction of its materials to its post-use disposal. By contrast, only 32 percent of European firms are still using spreadsheets rather than life cycle assessment tools.
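To see why the measurement gap matters, a toy tabulation (all numbers invented) shows how Scope 3 can dominate the two scopes firms track most closely:

```python
# Illustrative tabulation of a firm's emissions by scope, in tCO2e.
# All figures are invented placeholders, not from the report.
firm_emissions = {
    "scope1": 120_000,   # direct emissions
    "scope2": 80_000,    # purchased energy
    "scope3": {          # value-chain emissions, by activity
        "suppliers": 390_000,
        "transport": 110_000,
        "product_use": 90_000,
        "disposal": 10_000,
    },
}

scope3_total = sum(firm_emissions["scope3"].values())
total = firm_emissions["scope1"] + firm_emissions["scope2"] + scope3_total
print(f"Scope 3 share of total: {scope3_total / total:.0%}")  # -> 75%
```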

“You get what you measure,” Velázquez Martínez says. “If you measure poorly, you’re going to get poor decisions that most likely won’t drive the reductions you’re expecting. So we pay a lot of attention to that particular issue, which is decisive to defining an action plan. Firms pay a lot of attention to metrics in their financials, but in sustainability they’re often using simplistic measurements.”

When it comes to transportation, meanwhile, the report shows that firms are still grappling with the best ways to reduce emissions. Some see biofuels as the best short-term alternative to fossil fuels; others are investing in electric vehicles; some are waiting for hydrogen-powered vehicles to gain traction. Supply chains, after all, frequently involve long-haul trips. For firms, as for individual consumers, electric vehicles are more practical with a larger infrastructure of charging stations. There are advances on that front but more work to do as well.

That said, “Transportation has made a lot of progress in general,” Velázquez Martínez says, noting the increased acceptance of new modes of vehicle power.

Even as new technologies loom on the horizon, though, supply chain sustainability does not wholly depend on their introduction. One factor continuing to propel sustainability in supply chains is the incentives companies have to lower costs. In a competitive business environment, spending less on fossil fuels usually means savings. And firms can often find ways to alter their logistics to consume and spend less.

“Along with new technologies, there is another side of supply chain sustainability that is related to better use of the current infrastructure,” Velázquez Martínez observes. “There is always a need to revise traditional ways of operating to find opportunities for more efficiency.” 


Chemists create red fluorescent dyes that may enable clearer biomedical imaging

The new dyes are based on boron-containing molecules that were previously too unstable for practical use.


MIT chemists have designed a new type of fluorescent molecule that they hope could be used for applications such as generating clearer images of tumors.

The new dye is based on a borenium ion — a positively charged form of boron that can emit light in the red to near-infrared range. Until recently, these ions have been too unstable to be used for imaging or other biomedical applications.

In a study appearing today in Nature Chemistry, the researchers showed that they could stabilize borenium ions by attaching them to a ligand. This approach allowed them to create borenium-containing films, powders, and crystals, all of which emit and absorb light in the red and near-infrared range.

That is important because red and near-IR light penetrate deeper into tissue than other wavelengths, which could allow for clearer images of tumors and other structures in the body.

“One of the reasons why we focus on red to near-IR is because those types of dyes penetrate the body and tissue much better than light in the UV and visible range. Stability and brightness of those red dyes are the challenges that we tried to overcome in this study,” says Robert Gilliard, the Novartis Professor of Chemistry at MIT and the senior author of the study.

MIT research scientist Chun-Lin Deng is the lead author of the paper. Other authors include Bi Youan (Eric) Tra PhD ’25, former visiting graduate student Xibao Zhang, and graduate student Chonghe Zhang.

Stabilized borenium

Most fluorescent imaging relies on dyes that emit blue or green light. Those imaging agents work well in cells, but they are not as useful in tissue because low levels of blue and green fluorescence produced by the body interfere with the signal. Blue and green light also scatters in tissue, limiting how deeply it can penetrate.

Imaging agents that emit red fluorescence can produce clearer images, but most red dyes are inherently unstable and don’t produce a bright signal, because of their low quantum yields (the ratio of photons emitted per photon of light absorbed). For many red dyes, the quantum yield is only about 1 percent.
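Written out, the definition in parentheses is simply:

```latex
% Fluorescence quantum yield, as defined in the text.
\[
  \Phi = \frac{N_{\text{emitted photons}}}{N_{\text{absorbed photons}}}
\]
% A typical red dye has \Phi \approx 0.01 (1 percent); the compounds
% reported in this study reach \Phi in the 0.3 range in the red region,
% per the figures quoted later in the article.
```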

Among the molecules that can emit near-infrared light are borenium cations — positively charged ions containing an atom of boron attached to three other atoms.

When these molecules were first discovered in the mid-1980s, they were considered “laboratory curiosities,” Gilliard says. These molecules were so unstable that they had to be handled in a sealed container called a glovebox to protect them from exposure to air, which can lead them to break down.

Later, chemists realized they could make these ions more stable by attaching them to molecules called ligands. Working with these more stable ions, Gilliard’s lab discovered in 2019 that they had some unusual properties: Namely, they could respond to changes in temperature by emitting different colors of light.

However, at that point, “there was a substantial problem in that they were still too reactive to be handled in open air,” Gilliard says.

His lab began working on new ways to further stabilize them using ligands known as carbodicarbenes (CDCs), which they reported in a 2022 study. Due to this stabilization, the compounds can now be studied and handled without using a glovebox. They are also resistant to being broken down by light, unlike many previous borenium-based compounds.

In the new study, Gilliard began experimenting with the anions (negatively charged ions) that are a part of the CDC-borenium compounds. Interactions between these anions and the borenium cation generate a phenomenon known as exciton coupling, the researchers discovered. This coupling, they found, shifted the molecules’ emission and absorption properties toward the infrared end of the color spectrum. These molecules also generated a high quantum yield, allowing them to shine more brightly.

“Not only are we in the correct region, but the efficiency of the molecules is also very suitable,” Gilliard says. “We’re up to percentages in the thirties for the quantum yields in the red region, which is considered to be high for that region of the electromagnetic spectrum.”

Potential applications

The researchers also showed that they could convert their borenium-containing compounds into several different states, including solid crystals, films, powders, and colloidal suspensions.

For biomedical imaging, Gilliard envisions that these borenium-containing materials could be encapsulated in polymers, allowing them to be injected into the body to use as an imaging dye. As a first step, his lab plans to work with researchers in the chemistry department at MIT and at the Broad Institute of MIT and Harvard to explore the potential of imaging these materials within cells.

Because of their temperature responsiveness, these materials could also be deployed as temperature sensors, for example, to monitor whether drugs or vaccines have been exposed to temperatures that are too high or low during shipping.

“For any type of application where temperature tracking is important, these types of ‘molecular thermometers’ can be very useful,” Gilliard says.

If incorporated into thin films, these molecules could also be useful in organic light-emitting diodes (OLEDs), particularly in new types of devices such as flexible screens, Gilliard says.

“The very high quantum yields achieved in the near-IR, combined with the excellent environmental stability, make this class of compounds extremely interesting for biological applications,” says Frieder Jaekle, a professor of chemistry at Rutgers University, who was not involved in the study. “Besides the obvious utility in bioimaging, the strong and tunable near-IR emission also makes these new fluorophores very appealing as smart materials for anticounterfeiting, sensors, switches, and advanced optoelectronic devices.”

In addition to exploring possible applications for these dyes, the researchers are now working on extending their color emission further into the near-infrared region, which they hope to achieve by incorporating additional boron atoms. Those extra boron atoms could make the molecules less stable, so the researchers are also working on new types of carbodicarbenes to help stabilize them.

The research was funded by the Arnold and Mabel Beckman Foundation and the National Institutes of Health.


Secretary of Energy Chris Wright ’85 visits MIT

Panel discussions focused on innovation in many forms of energy, then a tour of campus featured student research.


U.S. Secretary of Energy Chris Wright ’85 visited MIT on Monday, meeting Institute leaders, discussing energy innovation at a campus forum, viewing poster presentations from researchers supported through the MIT-GE Vernova Energy and Climate Alliance, and watching energy research demos in the lab where he used to work as a student. 

“I’ve always been in energy because I think it’s just far and away the world’s most important industry,” Wright said at the forum, which included a panel discussion with business leaders and a fireside chat with MIT Professor Ernest Moniz, who was the U.S. secretary of energy from 2013 to 2017. Wright added: “Not only is it by far the world’s most important industry, because it enables all the others, but it’s also a booming time right now. … It is an awesomely exciting time to be in energy.”

Wright was greeted on campus by MIT President Sally Kornbluth, who also gave introductory remarks at the forum, held in MIT’s Samberg Center. While the Institute has added many research facilities and buildings since Wright was a student, Kornbluth observed, the core MIT ethos remains the same.

“MIT is still MIT,” Kornbluth said. “It’s a community that rewards merit, boldness, and scientific rigor. And it’s a magnet for people with a drive to solve hard problems that matter in the real world, an enthusiasm for working with industry, and an ethic of national service.”

When it comes to energy research, Kornbluth added, “MIT is developing transformational approaches to make American energy more secure, reliable, affordable, and clean — which in turn will strengthen both U.S. competitiveness and national security.”

At the event, Wright, the 17th U.S. secretary of energy, engaged in a fireside chat with Moniz, the 13th U.S. secretary of energy, the Cecil and Ida Green Professor of Physics and Engineering Systems Post-Tenure, a special advisor to the MIT president, and the founding director of the MIT Energy Initiative (MITEI). Wright began his remarks by reflecting on Kornbluth’s description of the Institute.

“Merit, boldness, and scientific rigor,” Wright said. “That is MIT … to me. That hit me hard when I got here, and frankly, it’s a good part of the reason my life has gone the way it’s gone.”

On energy topics, Wright emphasized the need for continued innovation in energy across a range of technologies, including fusion, geothermal, and more, while advocating for the benefits of vigorous market-based progress. Before becoming secretary of energy, Wright most recently served as founder and CEO of Liberty Energy. He also was the founder of Pinnacle Technologies, among other enterprises. Wright was confirmed as secretary by the U.S. Senate in February.

Asked to name promising areas of technological development, Wright focused on three particular areas of interest. Citing artificial intelligence, he noted that the interest in it was “overwhelming,” with many possible applications. Regarding fusion energy, Wright said, “We are going to see meaningful breakthroughs.” And quantum computing, he added, was going to be a “game-changer” as well.

Wright also emphasized the value of federal support for fundamental research, including projects in the national laboratories the Department of Energy oversees.

“The 17 national labs we have in this country are absolute jewels. They are gems of this country,” Wright said. He later noted, “There are things, like this foundational research, that are just an essential part of our country and an essential part of our future.”

Moniz asked Wright a range of questions in the fireside chat, while adding his own perspective at times about the many issues connected to energy abundance globally.

“Climate, energy, security, equity, affordability, have to be recognized as one conversation, and not separate conversations,” Moniz said. “That’s what’s at stake in my view.”

Wright’s appearance was part of the Energy Freedom Tour developed by the American Conservation Coalition (ACC), in coordination with the Hamm Institute for American Energy at Oklahoma State University. Later stops are planned for Stanford University and Texas A&M University.

Ann Bluntzer Pullin, executive director of the Hamm Institute, gave remarks at the forum as well, noting the importance of making students aware of the energy industry and helping to “get them excited about the impact this career can make.” She also praised MIT’s advances in the field, adding, “This is where so many ideas were born and executed that have allowed America to really thrive in this energy abundance in our country that we have [had] for so long.”

The forum also featured remarks from Roger Martella, chief corporate officer, chief sustainability officer, and head of government affairs at GE Vernova. In March, MIT and GE Vernova announced a new five-year joint program, the MIT-GE Vernova Energy and Climate Alliance, featuring research projects, education programs, and career opportunities for MIT students.

“That’s what we’re about, electrification as the lifeblood of prosperity,” Martella said, describing GE Vernova’s work. “When we’re here at MIT we feel like we’re living history every moment when we’re walking down the halls, because no institution has [contributed] to innovation and technology more, doing it every single day to advance prosperity for all people around the world.”

A panel discussion at the forum featured Wright speaking along with three MIT alumni who are active in the energy business: Carlos Araque ’01, SM ’02, CEO of Quaise Energy, a leading-edge firm in geothermal energy solutions; Bob Mumgaard SM ’15, PhD ’15, CEO of Commonwealth Fusion Systems, a leading fusion energy firm and an MIT spinout; and Milo Werner SM ’07, MBA ’07, a general partner at DCVC and expert in energy and climate investments. The panel was moderated by Chris Barnard, president of the ACC.

Mumgaard noted that Commonwealth Fusion Systems launched in 2018 with “an explicit mission, working with MIT still today, of putting fusion onto an industrial trajectory,” although there is “plenty left to do, still, at that intersection of science, technology, innovation, and business.”

Araque said he believes geothermal is “metric-by-metric” more powerful and profitable than many other forms of energy. “This is not a stop-gap,” he added. Quaise is currently developing its first power-plant-scale facility in the U.S.

Werner noted that the process of useful innovation only begins in the lab; making an advance commercially viable is the critical next step. The biggest impact “is not in the breakthrough,” she said. “It’s not in the discovery that you make in the lab. It’s actually once you’ve built a billion of them. That’s when you actually change the world.”

After the forum, Wright took a tour of multiple research centers on the MIT campus, including the MIT.nano facility, guided by Vladimir Bulović, faculty director of MIT.nano and the Fariborz Maseeh Chair in Emerging Technology.

At MIT.nano, Bulović showed Wright the Titan Krios G3i, a nearly room-size electron microscope that enables researchers to take a high-resolution look at the structure of tiny particles, with a variety of research applications. The tour also included one of MIT.nano’s cleanrooms, a shared fabrication facility used by both MIT researchers and users outside of MIT, including many in industry.

On a different note, in an MIT.nano hallway, Bulović showed Wright the One.MIT mosaics, which contain the names of all MIT students and employees past and present — well over 300,000 in all. First etched on a 6-inch wafer, the mosaics are a visual demonstration of the power of nanotechnology — and a searchable display, so Bulović located Wright’s name, which is printed near the chin of one of the figures on the MIT seal.

The tour ended in the basement of Building 10, in what is now the refurbished Grainger Energy Machine Facility, where Wright used to conduct research. After earning his undergraduate degree in mechanical engineering, Wright entered graduate studies at MIT before leaving, as he recounted at the forum, to pursue business opportunities.

At the lab, Wright met with David Perreault, the Ford Foundation Professor of Engineering; Steven Leeb, the Emanuel Landsman Professor and a specialist in power systems; and Sam Coday, the Emanuel E. Landsman Career Development Chair and an assistant professor in the Department of Electrical Engineering and Computer Science. A half-dozen MIT graduate students gave Wright demos of their research projects, all involving energy-generation innovations. Wright readily engaged with all the graduate students about the technologies and the parameters of the devices, and asked the students about their own careers.

Wright was accompanied on the lab tour by MIT Provost Anantha Chandrakasan, himself an expert in developing energy-efficient systems. Chandrakasan delivered closing remarks at the forum in the Samberg Center, noting MIT’s “strong partnership with the Department of Energy” and its “long and proud history of engaging industry.”

As such, Chandrakasan said, MIT has a “role as a resource in service of the nation, so please don’t hesitate to call on us.”


A simple formula could guide the design of faster-charging, longer-lasting batteries

MIT researchers developed a model that explains lithium intercalation rates in lithium-ion batteries.


At the heart of all lithium-ion batteries is a simple reaction: Lithium ions dissolved in an electrolyte solution “intercalate” or insert themselves into a solid electrode during battery discharge. When they de-intercalate and return to the electrolyte, the battery charges.

This process happens thousands of times throughout the life of a battery. The amount of power that the battery can generate, and how quickly it can charge, depend on how fast this reaction happens. However, little is known about the exact mechanism of this reaction, or the factors that control its rate.

In a new study, MIT researchers have measured lithium intercalation rates in a variety of different battery materials and used that data to develop a new model of how the reaction is controlled. Their model suggests that lithium intercalation is governed by a process known as coupled ion-electron transfer, in which an electron is transferred to the electrode along with a lithium ion.

Insights gleaned from this model could guide the design of more powerful, faster-charging lithium-ion batteries, the researchers say.

“What we hope is enabled by this work is to get the reactions to be faster and more controlled, which can speed up charging and discharging,” says Martin Bazant, the Chevron Professor of Chemical Engineering and a professor of mathematics at MIT.

The new model may also help scientists understand why tweaking electrodes and electrolytes in certain ways leads to increased energy, power, and battery life — a process that has mainly been done by trial and error.

“This is one of these papers where now we began to unify the observations of reaction rates that we see with different materials and interfaces, in one theory of coupled electron and ion transfer for intercalation, building on previous work on reaction rates,” says Yang Shao-Horn, the J.R. East Professor of Engineering at MIT and a professor of mechanical engineering, materials science and engineering, and chemistry.

Shao-Horn and Bazant are the senior authors of the paper, which appears today in Science. The paper’s lead authors are Yirui Zhang PhD ’22, who is now an assistant professor at Rice University; Dimitrios Fraggedakis PhD ’21, who is now an assistant professor at Princeton University; Tao Gao, a former MIT postdoc who is now an assistant professor at the University of Utah; and MIT graduate student Shakul Pathak.

Modeling lithium flow

For many decades, scientists have hypothesized that the rate of lithium intercalation at a lithium-ion battery electrode is determined by how quickly lithium ions can diffuse from the electrolyte into the electrode. This reaction, they believed, was governed by a model known as the Butler-Volmer equation, originally developed almost a century ago to describe the rate of charge transfer during an electrochemical reaction.
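For reference, the standard textbook form of the Butler-Volmer equation relates the current density at an electrode to the overpotential driving the reaction:

```latex
% Butler-Volmer equation (standard textbook form).
\[
  i = i_0 \left[ \exp\!\left(\frac{\alpha_a F \eta}{R T}\right)
             - \exp\!\left(-\frac{\alpha_c F \eta}{R T}\right) \right]
\]
% i: current density; i_0: exchange current density; \eta: overpotential;
% \alpha_a, \alpha_c: anodic and cathodic transfer coefficients;
% F: Faraday constant; R: gas constant; T: temperature.
```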

However, when researchers have tried to measure lithium intercalation rates, the measurements they obtained were not always consistent with the rates predicted by the Butler-Volmer equation. Furthermore, obtaining consistent measurements across labs has been difficult, with different research teams reporting measurements for the same reaction that varied by a factor of up to 1 billion.

In the new study, the MIT team measured lithium intercalation rates using an electrochemical technique that involves applying repeated, short bursts of voltage to an electrode. They generated these measurements for more than 50 combinations of electrolytes and electrodes, including lithium nickel manganese cobalt oxide, which is commonly used in electric vehicle batteries, and lithium cobalt oxide, which is found in the batteries that power most cell phones, laptops, and other portable electronics.

For these materials, the measured rates are much lower than previously reported, and they do not correspond to what the traditional Butler-Volmer model predicts.

The researchers used the data to come up with an alternative theory of how lithium intercalation occurs at the surface of an electrode. This theory is based on the assumption that in order for a lithium ion to enter an electrode, an electron from the electrolyte solution must be transferred to the electrode at the same time.

“The electrochemical step is not lithium insertion, which you might think is the main thing, but it’s actually electron transfer to reduce the solid material that is hosting the lithium,” Bazant says. “Lithium is intercalated at the same time that the electron is transferred, and they facilitate one another.”

This coupled ion-electron transfer (CIET) lowers the energy barrier that must be overcome for the intercalation reaction to occur, making it more likely to happen. The mathematical framework of CIET allowed the researchers to make reaction-rate predictions, which were validated by their experiments and substantially different from those made by the Butler-Volmer model.
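CIET builds on Marcus-style charge-transfer kinetics, in which the activation barrier depends on a solvent reorganization energy. The generic Marcus form below is shown for intuition, not as the paper’s exact rate expression:

```latex
% Marcus-type activation barrier (generic form, for intuition only).
\[
  \Delta G^{\ddagger} = \frac{(\lambda + \Delta G)^{2}}{4\lambda},
  \qquad
  k \propto \exp\!\left(-\frac{\Delta G^{\ddagger}}{k_B T}\right)
\]
% \lambda: reorganization energy; \Delta G: thermodynamic driving force.
% Coupling the ion insertion to the electron transfer effectively lowers
% this barrier, raising the intercalation rate.
```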

Faster charging

In this study, the researchers also showed that they could tune intercalation rates by changing the composition of the electrolyte. For example, swapping in different anions can lower the amount of energy needed to transfer the lithium and electron, making the process more efficient.

“Tuning the intercalation kinetics by changing electrolytes offers great opportunities to enhance the reaction rates, alter electrode designs, and therefore enhance the battery power and energy,” Shao-Horn says.

Shao-Horn’s lab and collaborators have been using automated experiments to make and test thousands of different electrolytes, and they are using the resulting data to develop machine-learning models that predict electrolytes with enhanced functions.

The findings could also help researchers to design batteries that would charge faster, by speeding up the lithium intercalation reaction. Another goal is reducing the side reactions that can cause battery degradation when electrons are picked off the electrode and dissolve into the electrolyte.

“If you want to do that rationally, not just by trial and error, you need some kind of theoretical framework to know what are the important material parameters that you can play with,” Bazant says. “That’s what this paper tries to provide.”

The research was funded by Shell International Exploration and Production and the Toyota Research Institute through the D3BATT Center for Data-Driven Design of Rechargeable Batteries.