General News from MIT - Massachusetts Institute of Technology

Latest general updates from MIT.

MIT News
MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.
Championing fusion’s promising underdog

Sophia Henneberg, assistant professor in the Department of Nuclear Science and Engineering, is developing stellarators to harness fusion energy.


Like many people who end up going into physics, Sophia Henneberg had a hard time, when she was young, choosing between that discipline and mathematics. Both subjects came easily to her, and she — unlike many of her peers — thought they were fun. Henneberg grew up in a small town in central Germany, and it was not until one week before applying to college that she decided on physics, reasoning that it would still give her the chance to do plenty of math, while also affording opportunities to connect with a broad range of applications. 

Midway through her undergraduate studies at Goethe University in Frankfurt, she started taking courses in plasma physics and almost instantly knew that she had found her niche. “Most of the visible material in the universe is in the form of hot, ionized gas called plasma, so studying that is really fundamental,” she says. “And there’s this amazing application, fusion, which has the potential to become an unlimited energy source.”

Early on, Henneberg resolved to try to make that potential a reality, and she’s been pursuing that goal at MIT since becoming the Norman Rasmussen Career Development Assistant Professor in the Department of Nuclear Science and Engineering in fall 2025. Her research focus is on stellarators — a kind of fusion machine that has been overshadowed for many decades by another fusion device called the tokamak. Both of these machines rely on magnetic confinement — using powerful magnetic fields to compress a plasma into a tiny volume, causing some of the atoms within this dense cluster to fuse together, unleashing energy in the process. In the tokamak, the plasma assumes the shape of a donut. In a stellarator, the plasma is also contained within a rounded loop, only this one resembles a twisted donut.

As a PhD candidate at the University of York (in the United Kingdom), Henneberg studied the instabilities that can arise in tokamaks, where plasma temperatures often exceed 100 million degrees Celsius and currents induced within the plasma can attain speeds of roughly 100 kilometers per second. In such an ultra-extreme setting — more than six times hotter than the core of the sun — sudden surges of energy, leading to something akin to small-scale solar flares, can breach the magnetic cage enclosing the plasma, thereby disrupting the fusion process and possibly damaging the reactor itself. Henneberg started hearing about stellarators in her classes and, after a bit of research, she came to realize that “they could be much more stable if you design them in the right way.”

Striking a favorable balance

In 2016, she began a postdoctoral fellowship at the Max Planck Institute (MPI) for Plasma Physics in Greifswald, Germany, joining the Stellarator Theory Group. Greifswald may well have been the best place for her to carry out stellarator research, given that the world’s biggest and most advanced reactor of this type, Wendelstein 7-X (W7-X), was based there, and experiments were just starting in the year she arrived.

Her main assignment at MPI was to work on stellarator optimization, figuring out the best way to design the reactor to meet the engineering and physics goals — a task not unlike that of tuning a car to achieve maximum fuel efficiency or, for a racecar, maximum speed. Henneberg’s interest in optimization continues to this day, remaining central to her research agenda at MIT.

“If you want to design a stellarator, there are two principal components you can look at,” she says. The first relates to the shape of the boundary, or cage, into which the plasma will ultimately be confined. This shape is constrained by magnetic fields that are generated, in turn, by a series of superconducting coils that might range in number anywhere from around 4 to 50. In stellarators, the coils tend to be bent rather than circular. That gives rise to twists in the magnetic fields, but it also makes the coils more complicated and likely more expensive. Henneberg has come up with ways to simplify the optimization process — one of which involves designing the plasma boundary and the shape of the coils in the same step rather than looking at them separately. 
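Henneberg’s actual optimization codes are far more involved, but the combined-step idea can be illustrated with a toy calculation. In this hypothetical sketch, quadratic surrogates stand in for the real physics and engineering objectives, and a fixed linear map stands in for the Biot-Savart calculation that real frameworks (e.g., SIMSOPT) use to relate coil currents to the magnetic field at the plasma boundary:

```python
import numpy as np

# Toy sketch (illustrative stand-ins, not a real stellarator code):
# optimize the plasma-boundary shape and the coil variables in one
# combined step rather than in two separate stages.

n_b, n_c = 6, 4                             # boundary modes / coil parameters
rng = np.random.default_rng(0)
b_target = rng.normal(size=n_b)             # desired boundary (confinement proxy)
Bmat = rng.normal(size=(n_b, n_c)) / n_c    # stand-in for the coil-to-field map

def objective(b, c):
    physics = np.sum((b - b_target) ** 2)             # plasma-shape quality
    engineering = 0.1 * np.sum(c ** 2)                # coil-complexity penalty
    consistency = 10.0 * np.sum((Bmat @ c - b) ** 2)  # coils must produce b
    return physics + engineering + consistency

# Plain gradient descent on the joint variables (b, c).
b, c = np.zeros(n_b), np.zeros(n_c)
lr = 0.005
for _ in range(2000):
    r = Bmat @ c - b
    grad_b = 2 * (b - b_target) - 20.0 * r
    grad_c = 0.2 * c + 20.0 * Bmat.T @ r
    b, c = b - lr * grad_b, c - lr * grad_c

print(f"objective after joint optimization: {objective(b, c):.3f}")
```

Optimizing the boundary and coil variables together, as in this sketch, avoids the classic two-stage pitfall in which an attractive plasma shape turns out to require impractically contorted coils.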

“We’ve now reached the point where stellarator performances can exceed those of tokamaks, because we’re able to optimize them very well, but you have to put the effort in,” she says. “You can’t get good performance out of just any twisty donut.”

The best of both worlds

In a 2024 paper, Henneberg and her former Greifswald colleague, Gabriel Plunk, introduced the notion of a stellarator-tokamak hybrid reactor. The goal, they wrote, is both “simple and compelling: to combine the strengths of the two concepts into a single device” that outperforms either of the existing modes.

One of Henneberg’s major preoccupations at present is exploring ways of converting a tokamak into a stellarator, a conversion that basically entails adding just a few coils — of the bent variety — that can be turned on or off. “This can be an easy way for people in the tokamak community to think more about the possible benefits of the stellarator,” she says. While no one has yet built a hybrid, at least one university has secured funding to do so.

Interest in stellarators has been steadily mounting in recent years, a fact that delights Henneberg. When she started working in this area almost a decade ago, the field of stellarator optimization was tiny and there were very few people she could converse with. There’s much more research going on today, which means that more ideas are coming out, along with some exciting results. Commercial interest is growing as well, and Henneberg has been in contact with several stellarator startup companies, including Type One Energy and Thea Energy in the United States and Proxima Fusion and Gauss Fusion in Germany.

“It seems to me that most new startups these days are focusing on stellarators,” Henneberg says. “With so many companies now entering the field, it can seem like the technical issues involved in fusion are already solved, but there are still many interesting open questions. I’m working on improved designs that advance both the physics and the economic feasibility.”

That’s where her students come in. She believes that one part of her role as an MIT professor is to train the next generation of stellarator experts — people who will help, for instance, to design effective coils that are easy to make, as well as to improve reactor performance overall. 

During her first term, she co-taught the renowned Fusion Design (22.63) course alongside MIT Professor Dennis Whyte. This course has had a remarkable influence on the fusion community, leading to nine published papers with over 1,000 citations and inspiring the creation of several companies. In the fall 2025 version of this course, students were charged with comparing designs for stellarators with machines that relied on a different way of confining the plasma called magnetic mirrors.

After just a few months at MIT, Henneberg has been impressed with her students, calling them “highly motivated and a lot of fun to work with.” She’s confident that her research group will soon be making progress.

She is also happy to be affiliated with MIT’s Plasma Science and Fusion Center, which is internationally recognized as a leading university laboratory in this field. “It’s great to have so many experts [primarily in tokamaks] in one place that I can work with and learn from,” Henneberg says. “Because of my interest in hybrid reactors, my research will really benefit from all the expertise here on the tokamak side.”


Augmenting citizen science with computer vision for fish monitoring

MIT Sea Grant works with the Woodwell Climate Research Center and other collaborators to demonstrate a deep learning-based system for fish monitoring.


Each spring, river herring populations migrate from Massachusetts coastal waters to begin their annual journey up rivers and streams to freshwater spawning habitat. River herring have faced severe population declines over the past several decades, and their migration is extensively monitored across the region, primarily through traditional visual counting and volunteer-based programs. 

Monitoring fish movement and understanding population dynamics are essential for informing conservation efforts and supporting fisheries management. With the annual herring run getting underway this month, researchers and resource managers once again take on the challenge of counting and estimating the migrating fish population as accurately as possible. 

A team of researchers from the Woodwell Climate Research Center, MIT Sea Grant, the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), MIT Lincoln Laboratory, and Intuit explored a new monitoring method using underwater video and computer vision to supplement citizen science efforts. The researchers — Zhongqi Chen and Linda Deegan from the Woodwell Climate Research Center, Robert Vincent and Kevin Bennett from MIT Sea Grant, Sara Beery and Timm Haucke from MIT CSAIL, Austin Powell from Intuit, and Lydia Zuehsow from MIT Lincoln Laboratory — published a paper describing this work in the journal Remote Sensing in Ecology and Conservation this February. 

The open-access paper, “From snapshots to continuous estimates: Augmenting citizen science with computer vision for fish monitoring,” outlines how recent advancements in computer vision and deep learning, from object detection and tracking to species classification, offer promising real-world solutions for automating fish counting with improved efficiency and data quality. 

Traditional monitoring methods are constrained by time, environmental conditions, and labor intensity. Volunteer visual counts are limited to brief daytime sampling windows, missing nighttime movement and short migration pulses, when hundreds of fish pass by within the span of a few minutes. While technologies like passive acoustic monitoring and imaging sonar have advanced continuous fish monitoring under certain conditions, the most promising and low-cost option — manual review of underwater video — is still labor-intensive and time-consuming. With the growing demand for automated video processing solutions, this study presents a scalable, cost-effective, and efficient deep learning-based system for reliable automated fish monitoring. 

The team built an end-to-end pipeline — from in-field underwater cameras to video labeling and model training — to achieve automated, computer vision-powered fish counting. Videos were collected from three rivers in Massachusetts: the Coonamessett River in Falmouth, the Ipswich River in Ipswich, and the Santuit River in Mashpee. 

To prepare the training dataset, the team selected video clips with variations in lighting, water clarity, fish species and density, time of day, and season to ensure that the computer vision model would work reliably across diverse real-world scenarios. They used an open-source web platform to manually label the videos frame-by-frame with bounding boxes to track fish movement. In total, they labeled 1,435 video clips and annotated 59,850 frames. 

The researchers compared and validated the computer vision counts with human video reviews, stream-side visual counts, and data from passive integrated transponder (PIT) tagging. They concluded that models trained on diverse multi-site and multi-year data performed best and produced season-long, high-resolution counts consistent with traditionally established estimates. Going one step further, the system provided insights into migration behavior, timing, and movement patterns linked to environmental factors. Using video from the 2024 Coonamessett River migration, the system counted 42,510 river herring and revealed that upstream migration peaked at dawn, while downstream migration was largely nocturnal, with fish using darker, quieter periods to avoid predators.
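The paper’s detection and species classification rely on deep learning, but the counting logic downstream of a detector can be sketched simply. The following toy example (an illustrative assumption, not the authors’ code) links per-frame bounding boxes into tracks by greedy intersection-over-union matching, then counts a fish when its track crosses a virtual line, separating upstream from downstream passages:

```python
# Toy sketch of detection-to-count logic: greedy IoU tracking plus
# line-crossing counts. Boxes are (x1, y1, x2, y2) tuples.

def iou(a, b):
    """Intersection over union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def count_crossings(frames, line_x=50, iou_thresh=0.3):
    """Link per-frame detections into tracks; count line crossings."""
    tracks = []  # each track: {"box": latest box, "xs": centre-x history}
    for boxes in frames:
        unmatched = list(boxes)
        for tr in tracks:
            best = max(unmatched, key=lambda b: iou(tr["box"], b), default=None)
            if best is not None and iou(tr["box"], best) >= iou_thresh:
                tr["box"] = best
                tr["xs"].append((best[0] + best[2]) / 2)
                unmatched.remove(best)
        for b in unmatched:  # unmatched detections start new tracks
            tracks.append({"box": b, "xs": [(b[0] + b[2]) / 2]})
    up = down = 0
    for tr in tracks:
        if tr["xs"][0] < line_x <= tr["xs"][-1]:
            up += 1       # crossed left-to-right: upstream
        elif tr["xs"][0] >= line_x > tr["xs"][-1]:
            down += 1     # crossed right-to-left: downstream
    return up, down

# Two synthetic fish: one swims left-to-right past x = 50, one right-to-left.
frames = [
    [(30, 0, 50, 10), (50, 20, 70, 30)],
    [(40, 0, 60, 10), (40, 20, 60, 30)],
    [(50, 0, 70, 10), (30, 20, 50, 30)],
]
print(count_crossings(frames))  # → (1, 1)
```

A production system would add a real detector, track-termination logic, and species classification, but the same directional-count idea underlies automated run estimates.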

With this real-world application, the researchers aim to advance computer vision in fisheries management and provide a framework and best practices for integrating the technology into conservation efforts for a wide range of aquatic species. “MIT Sea Grant has been funding work on this topic for some time now, and this excellent work by Zhongqi Chen and colleagues will advance fisheries monitoring capabilities and improve fish population assessments for fisheries managers and conservation groups,” Vincent says. “It will also provide education and training for students, the public, and citizen science groups in support of the ecologically and culturally important river herring populations along our coasts.”

Still, continued traditional monitoring is essential for maintaining consistency in long-term datasets until fisheries management agencies fully implement automated counting systems. Even then, computer vision and citizen science should be seen as complementary. Volunteers will be necessary for camera maintenance and for contributing directly to the computer vision workflow, from video annotation to model verification. The researchers envision that integrating citizen observations and computer vision-generated data will help create a more comprehensive and holistic approach to environmental monitoring.

This work was funded by MIT Sea Grant, with additional support provided by the Northeast Climate Adaptation Science Center, an MIT Abdul Latif Jameel Water and Food Systems seed grant, the AI and Biodiversity Change Global Center (supported by the National Science Foundation and the Natural Sciences and Engineering Research Council of Canada), and the MIT Undergraduate Research Opportunities Program.


Why solid-state batteries keep short-circuiting

New insights into the metal filaments that harm battery performance could advance the longstanding quest to develop energy-dense solid-state batteries.


Batteries that use a solid material as their charge-carrying electrolyte could potentially be a safer and far more energy-dense alternative to lithium-ion batteries. However, these solid-state batteries have been plagued by the formation of metal filaments called dendrites that cause them to short-circuit.

The problem has so far prevented such batteries from becoming a major player in energy storage. But now, research from MIT could finally help engineers find a way to get past this hurdle.

For decades, many researchers have treated dendrites as largely the result of mechanical stress — like cracks that form on the sidewalk when a tree root grows underneath. But MIT engineers have discovered the exact opposite: Faster dendrite growth was associated with lower stress levels in a commonly used battery electrolyte material. Using a new technique that allowed them to directly measure the stress around growing dendrites, the researchers found cracks formed at stress levels as low as 25 percent of what would be expected under mechanical stress alone.

The experiments, published in Nature today, instead revealed another culprit: chemical reactions caused by high electrical currents that weaken the electrolyte and make it more susceptible to dendrite growth. Researchers had previously proposed that such reactions cause dendrite growth, but the new study provides the first experimental data on the interplay between chemical and mechanical stress in dendrite formation.

“Direct measurement techniques allowed us to see how tough the material is as we cycle the cell,” says Cole Fincher, the paper’s first author and an MIT PhD student in materials science and engineering. “What we saw was that if you just test the ceramic electrolyte on the benchtop, it’s about as tough as your tooth. But during charging, it gets a lot weaker — closer to the brittleness of a lollipop.”

The findings reveal why developing stronger electrolytes alone hasn’t solved the dendrite problem. They also point to the importance of developing more chemically stable materials to finally fulfill the promise of energy-dense solid-state batteries.

“There’s a large community of researchers that are constantly trying to discover and design better solid electrolytes to enable the solid-state battery,” says senior author Yet-Ming Chiang, MIT’s Kyocera Professor of Materials Science and Engineering. “This study provides guidance in those efforts. We discovered a new mechanism by which these dendrites grow, allowing us to explore ways to design around it to make solid-state batteries successful.”

Joining Fincher and Chiang on the paper are MIT PhD student Colin Gilgenbach; Thermo Fisher Scientific scientists Christian Roach and Rachel Osmundsen; MIT.nano researcher Aubrey Penn; MIT Toyota Professor in Materials Processing W. Craig Carter; MIT Kyocera Professor of Materials Science and Engineering James LeBeau; University of Michigan Professor Michael Thouless; and Brown University Professor Brian W. Sheldon.

Measuring stress

Dendrites have presented a major roadblock to battery development since the 1970s. One reason lithium-ion batteries have become ubiquitous while other approaches have stalled is that their commonly used graphite anodes are less susceptible to dendrite formation. That’s a shame, because solid-state batteries that use lithium metal as an anode and a solid electrolyte could theoretically store far more energy in the same-sized package with less weight. They could thus enable longer-lasting phones and laptops, or electric cars with double the range of today’s options.

“There’s no more energy-dense form of lithium than lithium metal,” Chiang says. “But the dendrite problem has limited progress with solid-state batteries.”

Lithium metal is soft like taffy. Fincher, who has been studying the dendrite problem in the labs of Chiang and Carter, says one puzzle is how such a soft material can penetrate into the hard electrolyte materials being explored for use in solid-state batteries.

“The ceramics that have been used in these applications are stiff, like a coffee mug, so it’s been hoped that solid-state batteries would stop this relatively soft dendrite from growing,” Fincher explains.

Believing that mechanical stress causes dendrites, scientists have worked to develop stronger electrolytes that can withstand more mechanical stress. Some researchers have proposed that chemical reactions play a role in dendrite formation, but how those reactions worked with mechanical stress was not known.

For their Nature study, the researchers set out to directly observe mechanical and chemical changes in a commonly used solid-state electrolyte material as dendrites grew. Solid-state batteries are typically organized like a sandwich, which makes it hard to look inside the middle electrolyte layer. For their first experiment, the researchers developed a special solid-state battery cell in which the ceramic layers can be observed from the side, allowing the researchers to watch dendrite growth occurring in the electrolyte.

The researchers also used birefringence microscopy, a measurement technique Fincher developed as part of his PhD thesis, to precisely measure the stress around the dendrite.

“It works the same way as polarized sunglasses when you look at something like a windshield,” Fincher explains of the technique. “When light comes through, residual stresses in the glass enable light of some orientations to pass faster than others, and that can give rise to observable rainbow patterns. These patterns can be used to measure stress.”
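For context, the relation underlying this technique is the standard stress-optic law of photoelasticity (background, not taken from the paper): stress makes the refractive index direction-dependent, and the resulting birefringence is proportional to the difference of the in-plane principal stresses.

```latex
% Stress-optic law (standard photoelasticity background):
% C is the material's stress-optic coefficient; \sigma_1, \sigma_2
% are the in-plane principal stresses.
\Delta n = n_1 - n_2 = C\,(\sigma_1 - \sigma_2)
```

Measuring the fringe patterns thus yields the local stress difference in the electrolyte.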

The technique gave the researchers a way to both visualize and quantify stress around actively growing dendrites for the first time, leading to the unexpected findings.

“Normally you would expect that the faster a dendrite grows, the more stress it creates,” Chiang says. “Instead, we observed exactly the opposite. The faster it grew, the lower the stress around it, meaning the solid electrolyte is breaking under a lower stress, and therefore it’s been embrittled.”

In fact, the dendrites grew at stress levels far weaker than expected. Fincher describes the weaker electrolyte as electrochemically corroded.

“Imagine you test a piece of glass one day, and the next day it’s only a quarter as strong,” Chiang says. “It was very surprising.”

Led by LeBeau, the researchers then cooled the electrolyte to extremely low temperatures and applied a powerful imaging technique called cryogenic scanning transmission electron microscopy that allowed them to study the area around the dendrite on nearly atomic scales. The imaging revealed that the passage of ionic current through the material had caused chemical reactions that made it more brittle.

“The electric current drives the flow of lithium ions through the solid electrolyte,” Chiang explains. “That causes a highly concentrated flow of lithium ions at the dendrite tip. We believe that leads to a chemical reduction of the material compound, which leads to its decomposition into new phases. You start with a crystalline phase of the electrolyte, then there’s a volume contraction after the deposition that is consistent with the embrittlement we see.”

Toward better batteries

The experiment was done on one of the most stable electrolytes used in solid-state batteries, making the researchers confident the findings will carry over to other electrolyte materials.

“This tells us we have to look for electrolyte materials that are even more stable, especially when in contact with lithium metal, which chemically speaking is very reducing,” Chiang says. “This will help direct the search for new materials.”

For instance, Chiang says now that they understand more about the chemical changes causing embrittlement, researchers could explore materials that actually get tougher as cracks grow.

The researchers say it will take more work to figure out what electrochemical reactions are taking place to make the electrolyte so much weaker. But they say their approach for directly observing stresses could also help improve materials for use in devices like fuel cells and electrolyzers.

The work was supported by the Center for Mechano-Chemical Understanding of Solid Ionic Conductors (a U.S. Department of Energy Energy Frontier Research Center), the National Science Foundation, and Fincher’s Department of Defense science and engineering graduate fellowship, and was carried out using MIT.nano facilities.


QS World University Rankings rates MIT No. 1 in 12 subjects for 2026

The Institute also ranks second in seven subject areas.


QS World University Rankings has placed MIT in the No. 1 spot in 12 subject areas for 2026, the organization announced today.

The Institute received a No. 1 ranking in the following QS subject areas: Chemical Engineering; Chemistry; Civil and Structural Engineering; Computer Science and Information Systems; Data Science and Artificial Intelligence; Electrical and Electronic Engineering; Engineering and Technology; Linguistics; Materials Science; Mechanical, Aeronautical, and Manufacturing Engineering; Mathematics; and Physics and Astronomy.

MIT also placed second in seven subject areas: Architecture/Built Environment; History of Art; Biological Sciences; Economics and Econometrics; Marketing; Natural Sciences; and Statistics and Operational Research.

For 2026, universities were evaluated in 55 specific subjects and five broader subject areas.

Quacquarelli Symonds Limited subject rankings, published annually, are designed to help prospective students find the leading schools in their field of interest. Rankings are based on research quality and accomplishments, academic reputation, and graduate employment.

MIT has been ranked as the No. 1 university in the world by QS World University Rankings for 14 straight years.


Wristband enables wearers to control a robotic hand with their own movements

By moving their hands and fingers, users can direct a robot to play piano or shoot a basketball, or they can manipulate objects in a virtual environment.


The next time you’re scrolling your phone, take a moment to appreciate the feat: The seemingly mundane act is possible thanks to the coordination of 34 muscles, 27 joints, and over 100 tendons and ligaments in your hand. Indeed, our hands are the most nimble parts of our bodies. Mimicking their many nuanced gestures has been a longstanding challenge in robotics and virtual reality.

Now, MIT engineers have designed an ultrasound wristband that precisely tracks a wearer’s hand movements in real time. The wristband produces ultrasound images of the wrist’s muscles, tendons, and ligaments as the hand moves, and is paired with an artificial intelligence algorithm that continuously translates the images into the corresponding positions of the five fingers and palm.

The researchers can train the wristband to learn a wearer’s hand motions, which the device can communicate in real time to a robot or a virtual environment.

In demonstrations, the team has shown that a person wearing the wristband can wirelessly control a robotic hand. As the person gestures or points, the robot does the same. In a sort of wireless marionette interaction, the wearer can manipulate the robot to play a simple tune on the piano and shoot a small basketball into a desktop hoop. With the same wristband, a wearer can also manipulate objects on a computer screen, for instance pinching their fingers together to enlarge and minimize a virtual object.

The team is using the wristband to gather hand motion data from many more users with different hand sizes, finger shapes, and gestures. They envision building a large dataset of hand motions that can be plumbed, for instance, to train humanoid robots in dexterity tasks, such as performing certain surgical procedures. The ultrasound band could also be used to grasp, manipulate, and interact with objects in video games, design applications, or other virtual settings.

“We think this work has immediate impact in potentially replacing hand tracking techniques with wearable ultrasound bands in virtual and augmented reality,” says Xuanhe Zhao, the Uncas and Helen Whitaker Professor of Mechanical Engineering at MIT. “It could also provide huge amounts of training data for dexterous humanoid robots.”

Zhao, Gengxi Lu, and their colleagues present the wristband’s new design in a paper appearing today in Nature Electronics. Their MIT co-authors are former postdocs Xiaoyu Chen, Shucong Li, and Bolei Deng; graduate students SeongHyeon Kim and Dian Li; postdocs Shu Wang and Runze Li; and Anantha Chandrakasan, MIT provost and the Vannevar Bush Professor of Electrical Engineering and Computer Science. Other co-authors are graduate students Yushun Zheng and Junhang Zhang, Baoqiang Liu, Chen Gong, and Professor Qifa Zhou from the University of Southern California.

Seeing strings

There are currently a number of approaches to capturing and mimicking human hand dexterity in robots. Some approaches use cameras to record a person’s hand movements as they manipulate objects or perform tasks. Others involve having a person wear a glove with sensors, which records the person’s hand movements and transmits the data to a receiving robot. But setting up a complex camera system for different applications is impractical, and cameras are easily blocked by visual obstructions. And sensor-laden gloves can limit a person’s natural hand motions and sensations.

A third approach uses the electrical signals from muscles in the wrist or forearm that scientists then correlate with specific hand movements. Researchers have made significant advances with this approach; however, these signals are easily affected by noise in the environment. They are also not sensitive enough to distinguish subtle changes in movement. For instance, they may discern whether a thumb and index finger are pinched together or pulled apart, but little of the path in between.

Zhao’s team wondered whether ultrasound imaging might capture more dexterous and continuous hand movements. His group has been developing various forms of ultrasound stickers — miniaturized versions of the transducers used in doctor’s offices that are paired with hydrogel material that can safely stick to skin.

In their new study, the team incorporated the ultrasound sticker design into a wearable wristband to continuously image the muscles and tendons in the wrist.

“The tendons and muscles in your wrist are like strings pulling on puppets, which are your fingers,” Lu says. “So the idea is: Each time you take a picture of the state of the strings, you’ll know the state of the hand.”

Mapping manipulation

The team designed a wristband with an ultrasound sticker that is the size of a smartwatch, and added onboard electronics that are about as small as a cellphone. They attached the wristband to a volunteer’s wrist and confirmed that the device produced clear and continuous images of the wrist as the volunteer moved their fingers in various gestures.

The challenge then was to relate the black and white ultrasound images of the wrist to specific positions of the hand. As it turns out, the fingers and thumb are capable of 22 degrees of freedom, or different ways of extending or angling. The researchers found that they could identify specific regions in their ultrasound images of the wrist that correlate to each of these 22 degrees of freedom. For instance, changes in one region relate to thumb extension, while changes in another region correlate with movements of the index finger.

To establish these connections, a volunteer wearing the wristband would move their hand in various positions while the researchers recorded the gestures with multiple cameras surrounding the volunteer. By matching changes in certain regions of the ultrasound images with hand positions recorded by the cameras, the team could label wrist image regions with the corresponding degree of freedom in the hand. But to do this translation continuously, and in real time, would be an impossible task for humans.

So, the team turned to artificial intelligence. They used an AI algorithm that can be trained to recognize image patterns and correlate them with specific labels and, in this case, the hand’s various degrees of freedom. The researchers trained the algorithm with ultrasound images that they meticulously labeled, annotating the image regions associated with a specific degree of freedom. They tested the algorithm on a new set of ultrasound images and found it correctly predicted the corresponding hand gestures.
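The team’s actual model is a deep network trained on labeled ultrasound frames, but the core idea of regressing hand pose from wrist images can be sketched with synthetic data and ordinary ridge regression (all names and numbers below are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Minimal stand-in for image-to-pose regression: learn a linear map from
# flattened "ultrasound" features to the hand's 22 joint angles, then
# predict the pose for a new frame.

rng = np.random.default_rng(1)
n_frames, n_pixels, n_dof = 200, 64, 22     # training frames, features, DOF

W_true = rng.normal(size=(n_pixels, n_dof)) # hidden image-to-pose relation
X = rng.normal(size=(n_frames, n_pixels))   # synthetic image features
Y = X @ W_true + 0.01 * rng.normal(size=(n_frames, n_dof))  # labeled angles

# Ridge regression: W = (X^T X + alpha * I)^{-1} X^T Y
alpha = 1e-3
W = np.linalg.solve(X.T @ X + alpha * np.eye(n_pixels), X.T @ Y)

x_new = rng.normal(size=n_pixels)           # one held-out frame
pred = x_new @ W                            # predicted 22-DOF hand pose
print("first three predicted joint angles:", pred[:3])
```

In practice a convolutional network replaces the linear map, since real ultrasound textures relate to joint angles nonlinearly, but the training setup — labeled image-pose pairs in, a continuous 22-dimensional pose out — is the same.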

Once the researchers successfully paired the AI algorithm with the wristband, they tested the device on more volunteers. For the new study, eight volunteers with different hand and wrist sizes wore the wristband while they formed various hand gestures and grasps, including making the signs for all 26 letters in American Sign Language. They also held objects such as a tennis ball, a plastic bottle, a pair of scissors, and a pencil. In each case, the wristband precisely tracked and predicted the position of the hand.

To demonstrate potential applications, the team developed a simple computer program that they wirelessly paired with the wristband. As a wearer went through the motions of pinching and grasping, the gestures corresponded to zooming in and out on an object on the computer screen, and virtually moving and manipulating it in a smooth and continuous fashion.

The researchers also tested the wristband as a wireless controller of a simple commercial robotic hand. While wearing the wristband, a volunteer went through the motions of playing a keyboard. The robot in turn mimicked the motions in real time to play a simple tune on a piano. The same robot was also able to mimic a person’s finger taps to play a desktop basketball game.

Zhao is planning to further miniaturize the wristband’s hardware, as well as train the AI software on many more gestures and movements from volunteers with a wider range of hand sizes and shapes. Ultimately, the team is building toward a wearable hand tracker that anyone can use to wirelessly manipulate humanoid robots or virtual objects with high dexterity.

“We believe this is the most advanced way to track dexterous hand motion, through wearable imaging of the wrist,” Zhao says. “We think these wearable ultrasound bands can provide intuitive and versatile controls for virtual reality and robotic hands.”

This research was supported, in part, by MIT, the U.S. National Institutes of Health, the U.S. National Science Foundation, the U.S. Department of Defense, and Singapore National Research Foundation through the Singapore-MIT Alliance for Research and Technology.


Enduring passions for medicine, journalism, and triathlons

As an aspiring physician-scientist and editor-in-chief of The Tech, MIT senior Alex Tang has found inspiration in the lives of patients and others in his community.


Alex Tang’s dream of becoming a physician started in grade school when he read Lisa Sanders’ “Diagnosis” column in The New York Times Magazine. Although he often encountered unfamiliar medical terms, Tang was captivated by the magic of medicine, as Sanders described how physicians turned puzzling sets of symptoms into concrete diagnoses and treatment plans for patients.

A decade later, Tang is one step closer to achieving his dream. The MIT senior has challenged himself academically, dual-majoring in chemistry and biology and minoring in biomedical engineering. “All of the courses have encouraged me to think about problems through different lenses,” he says.

Tang has also challenged himself as the editor-in-chief of MIT’s student newspaper, The Tech, and as a competitive triathlete. In the fall, he will begin medical school, where he hopes to develop clinical skills and continue honing his scientific abilities. Ultimately, he aspires to pursue a career as a physician-scientist, focusing on how cancers respond to and resist treatment. He wants to help convert those insights into novel therapies that can be tailored to individual cancer patients.

“I want to advance precision oncology, ensuring that each patient receives the most effective, personalized treatment possible,” he says.

Thriving in the lab

Originally from Massachusetts, Tang was eager to make the most of his MIT experience, especially because of its extensive research opportunities. “Both my parents worked in the Cambridge biotech space, and being able to contribute to innovative science here has been a priority,” he says.

Early on, Tang gravitated toward oncology after joining the Nir Hacohen Lab at the Broad Institute, an interest cemented after taking 7.45 (Cancer Biology), which was taught by professors Tyler Jacks and Michael Hemann. Fascinated by how new cancer therapies were changing patients’ lives, he joined a project with implications for patients with difficult prognoses: For the last three and a half years, Tang has been studying the effects of combined immunotherapy and targeted molecular therapy on tumors in patients with metastatic colorectal cancer.

“I hope my work can provide clarity for patients and physicians, and empower them to be confident in their options for care,” Tang says.

Last year, Tang was awarded a prestigious Goldwater Scholarship, which supports undergraduates who go on to become leading scientists, engineers, and mathematicians in their respective fields.

In addition to gaining technical skills, Tang has found working in the Hacohen Lab to be enriching in other important ways.

“What’s been great about research is learning from experts in the field who become your role models,” he says. “They are at the frontiers of investigating the most challenging questions in the field, and iterating through the scientific process with them is such a joy.”

Looking forward to medical school, he hopes to complement his basic science research with work that is more clinically involved.

“I want to bridge the gap between fundamental discoveries and tangible improvements in patient care,” Tang says. He has already set out on this mission, recently leading the development of a prognostic assay in lung cancer.

Breaking news

After stopping by the booth for MIT’s student newspaper, The Tech, during Campus Preview Weekend, Tang knew he wanted to join and contribute to a publication that has long chronicled MIT’s history and culture. Starting as a news writer and later serving as editor-in-chief, he learned how to write under pressure, reported on major campus events, and balanced leadership with collaboration.

“It’s been such an honor and pleasure to document people across the diverse MIT community who are all contributing to the character of the Institute in different ways,” he says.

It’s an activity he’ll drop everything for.

“When we have things come up and we have to do a breaking news story or we have some editorial thing that needs to be managed, I’ll just stop working to sort out whatever’s happening,” he says. “I think that’s what passion really is about.”

His journey with The Tech has not always been easy. In the summer between his first and second year, he found himself solely responsible for producing the paper’s news content amid a staff shortage, while the paper also faced financial difficulties.

“Coming into sophomore fall, I focused on recruiting more staff and seeking out ways to get more funding,” Tang says. “The paper wouldn’t be here without the people, both students and faculty advisors alike, who bought into The Tech’s mission.”

Though he hopes to pursue a career in medicine, Tang has found journalism to be integral in shaping how he will connect and communicate with patients and colleagues.

“You are responsible for taking someone’s story, breaking it down, and retelling it in your own words in a way that you feel would resonate with the audience and serve the community,” he says.

An outlet through triathlon

Despite his busy schedule, Tang prioritizes staying active and maintaining fitness. A former competitive swimmer in high school and now a triathlete, he still finds himself drawn back to the water when everything around him feels fast-paced.

“Swimming, biking, and running are good ways to de-stress,” Tang says. “It’s therapeutic in the sense that you can just let go. The race is just that culmination of letting it go at a more elevated level.”

He credits MIT’s infrastructure for helping him stay committed to training. “My dorm is steps away from the pool and the track,” he says. “The convenience is superb.”

Tang has found success in competitions, most recently placing third in his age group at the 2025 Boston Triathlon. In fact, it is the feeling of accomplishment that pushes him every day.

“There are many days when you want to take it easy, but you have to remember the joy waiting for you at the end of the race when you’ve put in the work,” he says. “It motivates me to be conscious and aware of what I’m doing in practice.”

During the summer, Tang and his younger brother go out for long runs in the Boston suburbs. “It is great to have my brother push me every day,” Tang says. “There has been no one more supportive of me than my family.”


Active Surfaces aims to install peel-and-stick solar panels everywhere

This award-winning startup with roots at the MIT Energy Initiative is developing lightweight, flexible, high-efficiency solar energy films designed to be used on roofs, walls, and any curved surface.


Active Surfaces, a startup based on solar-energy technologies rooted in MIT research, is well on its way to developing what co-founder Richard Swartwout SM ’18, PhD ’21 calls “solar 2.0.” The company’s technology responds to a need Swartwout recognized while observing energy challenges in India during an MIT Energy Initiative (MITEI) fellowship.

Within the last two years, the company has raised more than $10 million in venture capital, corporate investment, and state grants, most recently announcing in October an investment from the Tokyo-based electric utility Electric Power Development Co. In 2024, Active Surfaces also opened its current manufacturing development site, a 5,000-square-foot facility in Woburn, Massachusetts, now filled with industrial roll-to-roll printers and other equipment being cost-optimized before the equipment is scaled up for a first-of-its-kind commercial-scale manufacturing plant.

Based on more than 10 years of MIT research and resulting patents — three held by Swartwout — collaborators at Active Surfaces have developed a novel approach to solar. Instead of silicon, the “solar 1.0” technology that dominates today, their solar cells are made of perovskite, a class of materials that are cheap, abundant, lightweight, flexible, and highly efficient at absorbing and emitting light.

“We need to start thinking about more and more places to put solar,” Swartwout says, “and we need to dramatically cut the cost of manufacturing and installing it.” Active Surfaces is now designing the solar technology that can meet those goals.

In recent years, homeowners, electric utilities, and others have adopted silicon-based systems, and in 2024 installed solar capacity worldwide exceeded 2 terawatts. However, some experts believe that by 2050 the world will need 20 terawatts of installed solar capacity in order to meet rapidly increasing demand for electricity while also reducing carbon emissions.

A long-standing target

Silicon technology was fine for its original purpose — generating electricity for NASA’s early spacecraft — and later for utility setups in remote locations. No matter that the silicon solar cells are brittle and require heavy racks to support them. Swartwout first became aware of the limitations of silicon solar in 2016, during a trip to India to observe energy challenges encountered by people in remote areas as part of his fellowship from MITEI’s Tata Center for Technology and Design. In talking with residents, Swartwout heard repeatedly that people didn’t trust solar sources of electricity because the brittle panels “fail very prematurely in those sorts of locations.”

Motivated by that early experience, plus the need for rapid worldwide growth in solar generation, Swartwout and Shiv Bhakta MBA ’24, SM ’24 co-founded Active Surfaces in 2022. The pair provides an unusual blend of expertise: Bhakta, the CEO and former civil and environmental engineering and business student through the Leaders for Global Operations program, offers strong strategic market experience, while Swartwout, the CTO and a former student in electrical engineering and computer science, spent a decade at MIT working on solar R&D and printed electronics innovation.

Other research groups have worked with perovskites, but the most promising compositions and manufacturing techniques were toxic, and managing their toxicity made large-scale manufacturing impractical. The Active Surfaces process instead uses a novel perovskite ink consisting entirely of nontoxic components. Layers of electronic material are deposited onto a thin substrate, and an electrode is deposited onto the surface to make a module. The solar modules are then protected from the environment using an epoxy that dries within seconds under an ultraviolet lamp. The module, now as thin as 15 microns, can readily be attached to any surface.

The finished solar film generates as much electricity as an equivalent surface area of silicon cells, and its confirmed durability under realistic temperatures and humidity exceeds 10 years. The lightweight, mechanically robust solar film is easy to install — an advantage that brings the overall cost way down compared to the cost of silicon solar. For a conventional rooftop silicon system, as much as half of the total cost is often for installation. “That’s because those panels are not designed to be easily deployed through general construction,” says Swartwout. “A flexible solar panel is much more in line with how we do construction. To put it on your roof, you would just unroll it like you would unroll an asphalt shingle or a roofing membrane.”

In addition, the flexible films can be fabricated by a cost-effective mass-production method called roll-to-roll manufacturing, in which material is continuously unrolled from one spool and rewound onto another. The machines operate at high speed, and the capital investment required is low. As a result, says Swartwout, “there isn’t much benefit to having centralized manufacturing, so you can think about a distributed manufacturing model.” That solves another problem with the current silicon solar technology: China now manufactures almost all solar cells, and, notes Swartwout, “many countries don’t want to have their energy supply chains totally dependent on China. With our technology, you can have regionalized manufacturing locally … more like today’s auto market.”

Growing up but not cutting ties

While Active Surfaces’ films are not yet full-sized, they have been growing rapidly, Swartwout says. “Within three months, our product went from lab-scale to 6 inches by 6 inches, and then within another four months or so, it went from that size to 6 inches by 2 feet — the biggest size that our current machines can process.” But, he adds, the 6-inch-by-6-inch sample is “representative of what a minimum viable manufacturing process would be.”

The company continues to maintain its close ties to MIT. Several MIT professors are among the startup’s advisors. And the company is located just 15 miles from MIT, so staff members are frequently at MIT.nano, especially to make use of inspection tools like scanning electron microscopes and occasionally to use fabrication facilities not available at their own lab. In addition, the startup sometimes sponsors work at MIT.nano, in particular when they need a next-generation extension on one of the MIT patents. Swartwout calls the startup–MIT relationship a “good synergy,” and comments that they set up Active Surfaces “with that in mind.”

Swartwout is optimistic about what’s ahead for Active Surfaces. “We think that we have a really huge market. So the upfront capital that our investors are committing is worth the end-stage growth of what [our technology] could actually do for the future energy landscape as a whole.”


MIT hosts its first High School Regional Science Bowl

At a daylong science competition, high school students gathered from across New England to test their science knowledge for a shot at nationals in Washington.


“Guys, have the buzzers been tested?”

On Saturday, Feb. 21, volunteers for the 2026 MIT Science Bowl High School Regional hustled around the spacious auditorium, setting up chairs and buzzers and laying out sharpened pencils. The room slowly quieted as the high schoolers filed in, dressed in matching dark-green Science Bowl T-shirts.

By late afternoon, after rounds and rounds of fast-paced questioning, the auditorium pulsed with tension and anxiously bouncing knees as the final seconds of the competition ticked down.

“Patients with Tay–Sachs disease —” began the moderator, Gideon Tzafriri, president of the Science Bowl and a senior at MIT.

A buzzer cut him off.

“Interrupt,” Tzafriri announced.

The entire audience seemed to hold their breath. A student from Lexington High School Team 1 offered their answer: “lysosome.”

“Correct.”

Moments later, the Lexington, Massachusetts, team sealed the match. The room erupted into cheers, with students vaulting from their seats and rushing down to hug and congratulate their teammates. The final score of the 2026 MIT Science Bowl was 148 to 52, with Lexington High School Team 1 winning against Phillips Exeter Team 1.

“I think I can speak for all of us when I say we feel ecstatic,” said Jerry Xu, one of the members of the winning team. “It’s been a long-term collaborative effort; we’ve been practicing for many years. We’ve worked together as a team for so long, it’s just such a great feeling to be here with my friends.”

Around Xu, the rest of his teammates proudly nodded.

The 2026 MIT High School Regional Science Bowl marked the Institute’s first time hosting a regional competition, expanding its long-running involvement with the national tournament. While MIT has hosted the national high school competition for eight years, this regional event created a new qualifying pathway for New England schools vying for a place at the National Science Bowl in Washington.

The competition involves round-robin-style rounds of complex biology, chemistry, and physics questions, some of which lie well beyond the scope of regular high school classes. Over a long day of tough science questions and rapidly beeping buzzers, the event brought together 26 teams from 14 schools across Massachusetts, New Hampshire, and Rhode Island.

“The whole team put immense effort into learning about science, enjoying themselves, having fun, and trusting the process,” said Nicholas Gould, the Lexington High School team’s coach and their physics teacher. “It’s not about the win, it’s the process of getting there, the experiences they take with them and what they learn about themselves and each other.”

For many competitors, the draw wasn’t just the chance to win a medal, but to further their knowledge.

“I came here because I wanted to be on a science team just because I like science, and my experience has been pretty amazing,” said Vritti Mehra, a student at Portsmouth High School in New Hampshire.

Others spoke of the importance of representation.

“I’m proud to be a girl in this tournament because as you can see, there are not a lot of females here. But I’m very glad that I’m part of this community because of the friendliness, the competition, and this fostered a love for science for me,” said Katherine Wang, from Lexington High School Team 3, who has been competing since sixth grade. “My mom has a PhD, so she really inspires me to become the best.”

The regional marked a beginning for MIT, and an end for many graduating seniors, both competitors and volunteers.

“Most of us have been doing Science Bowl since middle school, so this feels like a culmination of everything we’ve done,” said William Jung, another member of the winning team.

For Tzafriri, the president of the bowl, the event carried a similar resonance: he competed in the event himself when he was in high school.

“It’s nice to finally finish off something that I started in high school,” said Tzafriri.

As the event came to an end, the winning team lined up at the front of the auditorium, with proud grins and the golden medals around their necks glistening under fluorescent lights. Cameras flashed in quick succession as the event’s organizers and volunteers watched proudly from either side.

“I get to help kids have fun with science and actively participate in science,” said Jiaxing Wang, one of the event’s organizers. “The Science Bowl is something I discovered in my junior year of high school: It was very late in the cycle, so I want to be able to help kids like me to compete and have the experience they deserve and desire.”

For Lexington’s seniors, this event sends them to Washington. For MIT, it signals something larger: a continuing investment in young scientists, encouraging a future full of possibility.


How to create “humble” AI

An MIT-led team is designing artificial intelligence systems for medical diagnosis that are more collaborative and forthcoming about uncertainty.


Artificial intelligence holds promise for helping doctors diagnose patients and personalize treatment options. However, an international group of scientists led by MIT cautions that AI systems, as currently designed, carry the risk of steering doctors in the wrong direction because they may overconfidently make incorrect decisions.

One way to prevent these mistakes is to program AI systems to be more “humble,” according to the researchers. Such systems would reveal when they are not confident in their diagnoses or recommendations and would encourage users to gather additional information when the diagnosis is uncertain.

“We’re now using AI as an oracle, but we can use AI as a coach. We could use AI as a true co-pilot. That would not only increase our ability to retrieve information but increase our agency to be able to connect the dots,” says Leo Anthony Celi, a senior research scientist at MIT’s Institute for Medical Engineering and Science, a physician at Beth Israel Deaconess Medical Center, and an associate professor at Harvard Medical School.

Celi and his colleagues have created a framework that they say can guide AI developers in designing systems that display curiosity and humility. This new approach could allow doctors and AI systems to work as partners, the researchers say, and help prevent AI from exerting too much influence over doctors’ decisions.

Celi is the senior author of the study, which appears today in BMJ Health and Care Informatics. The paper’s lead author is Sebastián Andrés Cajas Ordoñez, a researcher at MIT Critical Data, a global consortium led by the Laboratory for Computational Physiology within the MIT Institute for Medical Engineering and Science.

Instilling human values

Overconfident AI systems can lead to errors in medical settings, according to the MIT team. Previous studies have found that ICU physicians defer to AI systems that they perceive as reliable even when their own intuition goes against the AI suggestion. Physicians and patients alike are more likely to accept incorrect AI recommendations when they are perceived as authoritative.

In place of systems that offer overconfident but potentially incorrect advice, health care facilities should have access to AI systems that work more collaboratively with clinicians, the researchers say.

“We are trying to include humans in these human-AI systems, so that we are facilitating humans to collectively reflect and reimagine, instead of having isolated AI agents that do everything. We want humans to become more creative through the usage of AI,” Cajas Ordoñez says.

To create such a system, the consortium designed a framework that includes several computational modules that can be incorporated into existing AI systems. The first of these modules requires an AI model to evaluate its own certainty when making diagnostic predictions. Developed by consortium members Janan Arslan and Kurt Benke of the University of Melbourne, the Epistemic Virtue Score acts as a self-awareness check, ensuring the system’s confidence is appropriately tempered by the inherent uncertainty and complexity of each clinical scenario.

With that self-awareness in place, the model can tailor its response to the situation. If the system detects that its confidence exceeds what the available evidence supports, it can pause and flag the mismatch, requesting specific tests or history that would resolve the uncertainty, or recommending specialist consultation. The goal is an AI that not only provides answers but also signals when those answers should be treated with caution.
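The article names the Epistemic Virtue Score but does not publish its formula, so the following is a purely hypothetical sketch of the behavior described above: a check that flags predictions whose confidence outruns the available evidence, and that chooses between proceeding, deferring, and requesting more information. The `evidence_strength` proxy and the `margin` threshold are assumptions of this sketch, not published metrics:

```python
# Hypothetical sketch only: illustrates the confidence-vs-evidence gating
# behavior described in the article, not the actual Epistemic Virtue Score.
def review_prediction(confidence, evidence_strength, margin=0.1):
    """Return an action for a diagnostic prediction.

    confidence        -- model's self-reported probability (0..1)
    evidence_strength -- crude 0..1 proxy for how complete and consistent
                         the available clinical data are (an assumption of
                         this sketch, not a published metric)
    """
    if confidence > evidence_strength + margin:
        # Overconfident relative to the evidence: pause and ask for more.
        return "flag: request additional tests or specialist consultation"
    if confidence < 0.5:
        # Genuinely uncertain: offer a differential, not a single answer.
        return "defer: present differential diagnoses"
    return "proceed: report diagnosis with stated confidence"

print(review_prediction(0.95, 0.4))  # overconfident -> flagged
print(review_prediction(0.7, 0.8))   # well supported -> proceed
print(review_prediction(0.3, 0.6))   # low confidence -> defer
```

The design point is that the "flag" branch is a feature, not a failure mode: it is exactly the moment the researchers want the AI to hand agency back to the clinician.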

“It’s like having a co-pilot that would tell you that you need to seek a fresh pair of eyes to be able to understand this complex patient better,” Celi says.

Celi and his colleagues have previously developed large-scale databases that can be used to train AI systems, including the Medical Information Mart for Intensive Care (MIMIC) database from Beth Israel Deaconess Medical Center. His team is now working on implementing the new framework into AI systems based on MIMIC and introducing it to clinicians in the Beth Israel Lahey Health system.

This approach could also be implemented in AI systems that are used to analyze X-ray images or to determine the best treatment options for patients in the emergency room, among others, the researchers say.

Toward more inclusive AI

This study is part of a larger effort by Celi and his colleagues to create AI systems that are designed by and for the people who are ultimately going to be most impacted by these tools. Many AI models are trained on publicly available data from the United States, such as the MIMIC database, which can lead to the introduction of biases toward a certain way of thinking about medical issues, and the exclusion of others.

Bringing in more viewpoints is critical to overcoming these potential biases, says Celi, emphasizing that each member of the global consortium brings a distinct perspective to a broader, collective understanding.

Another problem with existing AI systems used for diagnostics is that they are usually trained on electronic health records, which weren’t originally intended for that purpose. This means that the data lack much of the context that would be useful in making diagnoses and treatment recommendations. Additionally, many patients never get included in those datasets because of lack of access, such as people who live in rural areas.

At data workshops hosted by MIT Critical Data, groups of data scientists, health care professionals, social scientists, patients, and others work together on designing new AI systems. Before beginning, everyone is prompted to think about whether the data they’re using captures all the drivers of whatever they aim to predict, ensuring they don’t inadvertently encode existing structural inequities into their models.

“We make them question the dataset. Are they confident about their training data and validation data? Do they think that there are patients that were excluded, unintentionally or intentionally, and how will that affect the model itself?” he says. “Of course, we cannot stop or even delay the development of AI, not just in health care, but in every sector. But, we must be more deliberate and thoughtful in how we do this.”

The research was funded by the Boston-Korea Innovative Research Project through the Korea Health Industry Development Institute.


A complicated future for a methane-cleansing molecule

A new model shows how levels of the “atmosphere’s detergent” may rise and fall in response to climate change.


Methane is a powerful greenhouse gas that is second only to carbon dioxide in driving up global temperatures. But it doesn’t linger in the atmosphere for long thanks to molecules called hydroxyl radicals, which are known as the “atmosphere’s detergent” for their ability to break down methane. As the planet warms, however, it’s unclear how the air-cleaning agents will respond.

MIT scientists are now shedding some light on this. The team has developed a new model to study different processes that control how levels of hydroxyl radical will shift with warming temperatures.

They find that the picture is complicated. As temperatures increase, so too will water vapor in the atmosphere, which will in turn boost the molecule’s concentrations. But rising temperatures will also increase “biogenic volatile organic compound emissions” — gases that are naturally released by some plants and trees. These natural emissions can reduce hydroxyl radical and dampen water vapor’s boosting effect.

Specifically, the team finds that if the planet’s average temperatures rise by 2 degrees Celsius, the accompanying rise in water vapor will increase hydroxyl radical levels by about 9 percent. But the corresponding increase in biogenic emissions would in turn bring down hydroxyl radical levels by 6 percent. The final accounting could mean a small boost, of about 3 percent, in the atmosphere’s ability to break down methane and other chemical compounds as the planet warms.
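The net figure above can be sanity-checked with back-of-envelope arithmetic. Whether the two effects combine additively or multiplicatively is an assumption here (the paper’s full model does the real accounting), but either way the result lands near the quoted 3 percent:

```python
# Combine the two quoted effects on OH concentration:
# +9 percent from added water vapor, -6 percent from biogenic emissions.
water_vapor_boost = 0.09
biogenic_drop = -0.06

# Simple additive combination of the fractional changes:
additive_net = water_vapor_boost + biogenic_drop

# Multiplicative combination (effects applied in sequence):
multiplicative_net = (1 + water_vapor_boost) * (1 + biogenic_drop) - 1

print(f"additive net change:       {additive_net:+.1%}")        # +3.0%
print(f"multiplicative net change: {multiplicative_net:+.1%}")  # +2.5%
```

Either convention supports the article’s “small boost, of about 3 percent” in the atmosphere’s methane-clearing capacity.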

“Hydroxyl radicals are important in determining the lifetime of methane and other reactive greenhouse gases, as well as gases that affect public health, including ozone and certain other air pollutants,” says study author Qindan Zhu, who led the work as a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).

“There’s a whole range of environmental reasons why we want to understand what’s going on with this molecule,” adds Arlene Fiore, the Peter H. Stone and Paola Malanotte Stone Professor in EAPS. “We want to make sure it’s around to chemically remove all these gases and pollutants.”

Fiore and Zhu’s new study appears today in the Journal of Advances in Modeling Earth Systems (JAMES). The study’s MIT co-authors include Jian Guan and Paolo Giani, along with Robert Pincus, Nicole Neumann, George Milly, and Clare Singer of Lamont-Doherty Earth Observatory and the Columbia Climate School, and Brian Medeiros at the National Center for Atmospheric Research.

A natural neutralizer

The hydroxyl radical, known chemically as OH, is made up of one oxygen atom and one hydrogen atom, along with an unpaired electron. This configuration makes the molecule extremely reactive. Like a chemical vacuum cleaner, OH easily pulls an electron or hydrogen atom away from other molecules, breaking them down into weaker, more water-soluble forms. In this way, OH reduces a vast range of chemicals, including some air pollutants, pathogens, and ozone. And changes in OH are a powerful lever on methane.

“For methane, the reaction with OH is considered the most important loss pathway,” Zhu says. “About 90 percent of the methane that’s removed from the atmosphere is due to the reaction with OH.”

Indeed, it’s thanks to reactions with hydroxyl radical that methane can only stick around in the atmosphere for about a decade — far shorter than carbon dioxide, which can linger for 1,000 years or longer. But even as OH breaks down methane already in the atmosphere, more methane continues to accumulate. Rising methane concentrations, in addition to human-derived emissions of carbon dioxide, are driving global warming, and it’s unclear how OH’s methane-clearing power will keep up.

“The questions we’re exploring here are: What are the main processes that control OH concentrations? And how will OH respond to climate change?” Fiore says.

An aquaplanet’s air

For their study, the researchers developed a new model to simulate levels of OH in the atmosphere under a current global climate scenario, compared to a future warmer climate. Their model, dubbed “AquaChem,” is an expansion of a simplified model that is part of a suite of tools developed by the Community Earth System Model (CESM) project. The model that the team chose to build off is one that represents the Earth as a simplified “aquaplanet,” with an entirely ocean-covered surface.

Aquaplanet models allow scientists to study detailed interactions in the atmosphere in response to changes in surface temperatures, without having to also spend computing time and energy on simulating complex dynamics between the land, water, and polar ice caps.

To the aquaplanet model, Zhu added an atmospheric chemistry component that simulates detailed chemical reactions in the atmosphere consistent with the applied surface temperatures. The chemical reactions that she modeled represent those that are known to affect OH concentrations.

OH is primarily produced when ozone interacts with sunlight in the presence of water vapor. For instance, scientists have found that OH levels can vary depending on certain anthropogenic and natural emissions, all of which Zhu incorporated separately and together into the AquaChem model in order to isolate the impact of each process on OH.

The emissions in particular include carbon monoxide, methane, nitrogen oxides, and volatile organic compounds (VOCs), some of which are emitted through human practices, and others that are given off by natural processes. One type of naturally derived VOC is “biogenic” emissions — gases, such as isoprene, that some plants and trees emit through tiny pores called stomata during transpiration.

Into the AquaChem model, Zhu plugged in data that were available for each type of emissions from the year 2000 — a year that is generally considered to represent the current climate in a simplified form. She set the aquaplanet’s sea surface temperatures to the zonal annual mean of that year, and found that the model accurately reproduced the major sensitivities of OH chemistry to the underlying chemical processing as simulated in a more complex chemistry-climate model.

Then, Zhu ran the model under a second, globally warming scenario. She set the planet’s sea surface temperatures to warm by 2 degrees Celsius (a warming that is likely to occur unless global anthropogenic carbon emissions are mitigated). The team looked at how this warming would affect the various types of emissions and chemical processes, and how these changes would ultimately affect levels of OH in the atmosphere.

In the end, they found the two biggest drivers of OH levels were rising water vapor and biogenic emissions. They found that global warming would increase the amount of water vapor in the atmosphere, which in turn would boost production of OH by 9 percent. However, this same degree of warming would also increase biogenic emissions such as isoprene, which reacts with and breaks down OH, bringing down its levels by 6 percent.
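
As a rough sanity check on how these two effects stack up (an illustration only; AquaChem simulates the chemistry jointly rather than multiplying independent factors), the competing changes can be combined like so:

```python
# Back-of-envelope combination of the two competing OH drivers.
# Assumption (ours, not the study's): the effects act independently
# and therefore combine multiplicatively.

water_vapor_effect = +0.09   # +9% OH from increased water vapor
isoprene_effect    = -0.06   # -6% OH from increased biogenic isoprene

net_change = (1 + water_vapor_effect) * (1 + isoprene_effect) - 1
print(f"Net OH change: {net_change * 100:+.1f}%")
```

Under that independence assumption the two drivers nearly cancel, leaving a net change of only a few percent.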

The team recognizes that there are many other factors that affect the response of isoprene emissions to surface warming. Rising CO2, not considered in this study, may dampen this temperature-driven response. Of all the factors that can shift OH levels under global warming, the researchers caution that biogenic emissions are the most uncertain, even though they appear to have a large influence. Going forward, the scientists plan to update AquaChem to continue studying how biogenic emissions, as well as other processes and climate scenarios, could sway OH concentrations.

“We know that changes in atmospheric OH, even of a few percent, can actually matter for interpreting how methane might accumulate in the atmosphere,” Zhu says. “Understanding future trends of OH will allow us to determine future trends of methane.”

This work was supported, in part, by Spark Climate Solutions and the National Oceanic and Atmospheric Administration. 


Advancing international trade research and finding community

Sojun Park, a postdoc at the Center for International Studies, has learned much from his research on intellectual property as well as his interactions with students and mentors at MIT.


The sense of support and community was palpable when Sojun Park, a postdoc at the MIT Center for International Studies (CIS), delivered a recent presentation on “The Global Diffusion of AI Technologies and Its Political Drivers.” The event, part of the CIS Global Research and Policy Seminar, filled the venue with audience members from across MIT.

“My work is directly connected to what CIS faculty have previously done on international trade and security,” Park said afterwards. “If I hadn’t received a postdoctoral fellowship and come to MIT, I wouldn’t have been able to think through the security implications of my intellectual property research. I’ve been tremendously motivated by these scholars.”

Park’s time at CIS has been both grounding and transformative, offering him a scholarly home that has shaped his research and helped broaden his intellectual horizons.

Pursuing interdisciplinary research and connections 

Before pursuing a tenure-track position, Park set his sights on conducting research at MIT. When he came across a public posting about the CIS Postdoctoral Associate Program, he took a chance and applied.

“My own research is interdisciplinary, and I knew that I could really benefit from the interdisciplinary environment at MIT, and specifically at CIS, where faculty are coming not only from political science, but also affiliated with the Department of Economics and MIT Sloan [School of Management],” he says.

Park was thrilled to receive the paid fellowship, which offers an academic year at MIT and dedicated office space at CIS. At MIT, he is free to use his time toward his own research, and has found value in pursuing topics that are of interest to the CIS community — whether it’s AI or global governance. He’s published prolifically along the way, including two articles in the Review of International Organizations and the Review of International Political Economy.

He’s also continued to work on his forthcoming book, “From Privilege to Prosperity: Knowledge Diffusion and the Global Governance of Intellectual Property,” which examines how technologies can be transferred legitimately across borders. “By ‘legitimately,’ I am asking under what circumstances would firms volunteer to share their technologies? I’m interested in institutions and institutional environments that allow large businesses to share their technologies with smaller businesses based in the developing world that may not possess the ability to come up with their own technologies,” he explains.

During the spring 2026 semester, he is collaborating with the center’s Undergraduate Fellows Program. This program enables postdocs to work on their research projects with MIT undergraduates. Park is working with two CIS undergraduate fellows to develop a new dataset examining international trade in green technologies. This opportunity reconnects Park to his early academic experiences in South Korea that set him on the path to MIT.

Path to MIT

“Students in South Korea are trained to be problem-solvers,” explains Park, who was born and raised in Seoul. The country’s rigorous college entrance exams reward those who can answer the most questions quickly and accurately in a limited amount of time.

While taking a test in high school, Park stumbled over a question that he couldn’t answer, regardless of how much time he spent concentrating on it. He handed in the exam, but took the problem home and spent hours puzzling over it — he just couldn’t let it go. “In hindsight, I see this as the moment I decided that I wanted to become a scholar,” Park says.

While majoring in international studies and economics (statistics) at Korea University, he had the opportunity to participate in a semester-long exchange program at the University of Texas at Austin. There, Park enrolled in a political science course on game theory that explored how individual state actors’ decisions influenced one another’s choices and outcomes in trade, conflict, and diplomacy. The instructor used the ongoing war between North and South Korea as a case study, demonstrating the unique circumstances for escalation or de-escalation depending upon how the key actors made choices along the way.

“I saw for the first time how quantitative methods could be applied to international relations and political economy,” Park says — and he knew that his next step was going to be graduate work in the United States. He began a joint MA and PhD program in political science at Princeton University the following year, supported by a Fulbright Fellowship.

Park’s 2025 dissertation examined the global governance of intellectual property rights — and it was timely. He began his PhD program in 2018, “the point at which the U.S. and China trade war had just begun.” During the pandemic, he was moved by the ongoing debates regarding vaccine inequality. “I realized then that intellectual property was at the center of these global economic challenges.” With little political science research on the topic, he “set out to create a systemic framework” to study it.

Simultaneously, he served as a teaching assistant in undergraduate courses in statistical analysis and realized that he deeply enjoyed the experience of teaching and interacting with students. It was a very different experience from his own college years. 

“In South Korea, it’s common for the learning environment to be one in which the professor just delivers lectures, but I found that in the United States’ higher education system, the classroom is truly interactive. I learned something from each of my students.” Soon, Park was certain that he not only wanted to build a career in academic research, but also a future that heavily incorporated teaching and mentoring students.

Before graduating, he spent a year at Georgetown University as a predoctoral fellow affiliated with the Mortara Center for International Studies. This experience enabled him to explore the policy implications of his research and engage with policymakers in Washington — skills he will draw on in his new position.

Lasting lessons from CIS

Park recently accepted a position as assistant professor at the National University of Singapore. Beginning fall 2026, he will be teaching graduate students affiliated with the school of public policy — most of whom will have career experience as practitioners in the public or private sectors. 

He’ll take many lessons from MIT to his new academic home, he says. “Based on what I learned in the United States, I’ll make the learning environment in the graduate courses I teach much more interactive and collaborative.”

At CIS, Mihaela Papa, director of research and principal research scientist, and Evan Lieberman, the center’s director and professor of political science, connected Park to associated faculty whose research interests were related to his own. “Meeting with all of these scholars whose research relates in some way to intellectual property rights made me think about how my own interests can expand to other topics,” Park explains.

But the biggest takeaway of all is that he learned how to share his own research with scholars who study unfamiliar topics, to exchange ideas and discover commonality. “I’ll never stop using the communication skills that I got here at MIT,” Park says.


Investigating Antarctic ice shelf melting with global navigation satellite systems

Observations suggest a major melting event at the Ross Ice Shelf was connected to atmospheric turbulence.


Global navigation satellite systems (GNSS), which include GPS, are traditionally used for positioning, timing, and mapping information. In an open-access study published Feb. 27 in Geophysical Research Letters, MIT Haystack Observatory scientists report using existing GNSS satellites, in conjunction with 13 stations installed on the Ross Ice Shelf (RIS) in Antarctica, to measure atmospheric turbulence above the ice shelf that may have contributed to an unusually extensive surface melting event in January 2016.

The RIS is a large, floating ice structure that fringes the western coast of Antarctica, buttressing the continental ice sheet. Normally, the RIS melts from underneath as warmer ocean water flows into its cavity underwater; in January 2016, warm, humid air caused an unusual melting event on the top side of the shelf. RIS stability is crucial to track, given that it regulates the amount of ice discharged into the ocean from Antarctica and thus significantly affects globally rising sea levels. 

Understanding atmospheric conditions above the RIS helps to explain its surface melting events, but it is challenging to monitor these in situ due to dangerous conditions and the remote location. 

Haystack scientists determined that a network of GNSS stations on the ice can be used to track atmospheric conditions above each station and across the network; water vapor in the lower atmosphere induces a delay in the GNSS signal that can be slightly different between stations, and changes over time. These spatial and temporal variations of water vapor allow researchers to track weather over the RIS and can be used to infer the strength (also called “rockiness”) of atmospheric turbulence.
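
As a sketch of the idea (the station IDs and delay values below are invented, and the study’s actual analysis is more sophisticated), the wet-delay fluctuations at each station can be reduced to a simple turbulence index:

```python
import statistics

# Hypothetical zenith wet delay (ZWD) time series, in millimeters,
# for three Ross Ice Shelf GNSS stations (made-up names and values).
zwd_mm = {
    "RIS01": [62.1, 62.4, 63.0, 61.8, 62.9, 63.5],
    "RIS02": [60.5, 61.9, 59.8, 62.7, 60.1, 63.0],
    "RIS03": [64.0, 63.8, 64.1, 63.9, 64.2, 63.7],
}

def turbulence_index(series):
    """Standard deviation of sample-to-sample ZWD changes: a crude
    proxy for how 'rocky' the moist air above a station is."""
    steps = [b - a for a, b in zip(series, series[1:])]
    return statistics.pstdev(steps)

for station, series in zwd_mm.items():
    print(f"{station}: turbulence index = {turbulence_index(series):.2f} mm")
```

A calm, well-mixed atmosphere produces small sample-to-sample changes and a low index; rocky air over a station shows up as a high one.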

During the unusual RIS surface melting event, the GNSS station data indicated turbulence at a level four times greater than usual. This novel application of GNSS networks to measure atmospheric conditions allows scientists to remotely monitor distant, hazardous locations.

“In January 2016, Antarctica experienced a significant widespread summer melting, driven by the warm air intrusion from the Southern Ocean. Our study showed that atmospheric turbulence may have helped mix the air mass and aggravated the surface melting,” says Haystack Research Scientist Dhiman Mondal. “We can use a GNSS network as an atmospheric turbulence sensor and monitor the health of the ice sheets where meteorological measurements are sparse.” 

MIT Haystack Observatory also recently developed and tested an instrument, the seismogeodetic ice penetrator, which will contribute to monitoring the atmospheric turbulence in Antarctica. Haystack scientists also plan to use this method of GNSS systems to monitor ice melt above the Greenland Ice Sheet. 

Pedro Elosegui, head of the Haystack geodesy department, says, “The colossal Antarctic ice shelves, such as the RIS, are (generally) thinning and retreating. They lose mass by calving icebergs — some rather spectacularly, by collapsing — and by basal melting due to the interaction of warm and salty ocean waters. We found that the RIS can also lose mass to surface melting caused by warm and humid air from the Ross Sea, which brought about enhanced atmospheric turbulence and may have further strengthened the melting.”


3 Questions: Communicating about climate, in audio and beyond

Madison Goldberg, the new host of the Ask MIT Climate podcast, talks about her career as a science communicator as well as ideas she thinks it’s important for climate communicators to convey.


Since her first journalism fellowship covering energy and the environment at the NPR station in Harrisburg, Pennsylvania, Madison Goldberg has been drawn to science communication and audio storytelling. Now, after reporting on topics from solar storms to sewer systems to cryptography, she’s bringing her passions to MIT as the new host of the Institute’s climate change podcast.

Launched in 2019 as TILclimate, the show began its eighth season this year with a new name: Ask MIT Climate. But the podcast’s mission remains the same: teaming up with scientists and subject matter experts to bring listeners clear, accessible information on climate change topics in 15 minutes or less.

In this interview, Goldberg talks about her path to science communication, the ideas she thinks it’s important for climate communicators to convey, and what makes MIT an exciting place to share knowledge with the world.

Q: Did you always know that you wanted to be a science communicator? 

A: I didn’t! My first love in science was astronomy. I grew up looking at the stars a lot, and I was very lucky to do an internship in high school at UC Santa Cruz with a professor in their astronomy department. Space kind of puts everything in the biggest possible perspective, and for me, that’s a very calming thing.

And then in college, I wanted to do something closer to home, so to speak. I found that Earth science was very exciting to learn about, because pretty much all the sciences are somehow involved. You know, you’ve got chemistry, biology, physics ... everything all rolled into one. Also, I still got to tap into a lot of what I loved about astronomy, in terms of exploring deep time and big scales. And I was very motivated by a lot of the problems in Earth and climate science, because they tie so closely to people’s lives.

I expected to continue with research, but I discovered that what was especially compelling to me was learning about this stuff and then talking to people about it. And in my senior year of college I learned that science communication, and science journalism, was a field that you could be in. 

I took a science podcasting course that year — which I still can’t believe even existed — and I got my first taste of interviewing people and working in audio, which was just incredible. I had loved podcasts for so long, and so the medium felt really familiar.

Q: What is important for science communicators to convey about climate change?

A: One of the ideas that I try to always keep in mind, and that I think is really important to convey, is that climate change affects every single aspect of our lives. And we need to communicate about it accordingly.

I think it’s crucial to consider the ways climate change intertwines with all these other realms of people’s experiences; it affects where we live, it affects what we eat, it affects the economy, it affects our health. Approaching it in isolation doesn’t seem to be the most productive framework. As communicators, we have a responsibility to listen and learn and talk about all these many and varied ways that climate change shows up in people’s lives.

This idea of things intertwining also reminds me of a really central theme in Ask MIT Climate: that working towards climate solutions not only allows us to avoid the worst impacts of climate change, but it can also help make people’s lives better in other ways. And we get to think expansively about the future we want to build.

Q: What makes MIT an exciting place to be engaged in climate communication?

A: The folks that I’ve talked to at MIT are just so kind and generous with their time. And these people are so busy! They have so much on their plates, and yet, somehow, even when I have a million follow-up questions, extremely prominent researchers will hop on a Zoom or exchange emails to answer them. I feel so lucky to be part of this community.

Related to what I mentioned earlier, I also appreciate the interdisciplinary climate work that happens at MIT. Tackling climate change is a generational challenge, and it requires inputs from all kinds of fields. And at MIT we have, for example, the Climate Project, the Climate Policy Center, the Center for Sustainability Science and Strategy, the Living Climate Futures Lab — all of these ways to approach the issue and bring folks into the conversation who have different expertise, experiences, and perspectives. I think it’s really special to be at MIT, to see that happen in real time, and to see students, faculty, and staff working to bridge across subject matter boundaries.

Above all, I’ve been shown such generosity, and I’m so grateful. I feel like I can never express enough gratitude for the people inside and outside of MIT who have spoken to me about their work and about their lives. All I can hope to do is to communicate that information faithfully. Because I think there’s a huge number of people who are curious about climate change and what we can do about it, and who want to learn.


Stamping high-res imagery onto everyday items to “reprogram” their appearance

The portable “ChromoLCD” device combines LCD and LED lighting to customize high-quality designs onto things like shirts and whiteboards.


Imagine a world where you could change the designs you see on bags, shirts, and walls whenever you want. Typical clothes would become customizable fashion pieces, while your humble abode could turn into a smart home. That’s the vision of scientists like MIT electrical engineering and computer science PhD student Yunyi Zhu ’20, MEng ’21: technology that can “reprogram” the appearance of personal accessories, home decor, and office items. 

At MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), she’s created clever hardware that can add, say, artwork to a sweater, then swap in a new illustration later. To do this, she coats items with an invisible ink called photochromic dye, which transforms into different colors when exposed to intense light. Her colleagues previously built a device called “PhotoChromeleon” that used a projector to activate this ink, but the system wasn’t portable, so Zhu built the LED-based tool “PortaChrome” to reprogram lower-resolution imagery on the go.

Zhu and her team now have the best of both worlds: a portable device called “ChromoLCD” that programs clear pictures onto T-shirts, tables, and whiteboards. It looks like a small printer on the outside, but inside, ChromoLCD combines the sharpness of liquid-crystal displays (LCDs) with the precision lighting of LEDs. The collective powers of these lights help users stamp designs onto flat surfaces (like walls) and soft ones (like clothes) after they’ve been coated with photochromic dye.

ChromoLCD can embed a digital rose onto a hoodie, for example. Once you’ve painted photochromic ink onto the surface you’d like to redesign, you upload your picture to the device via Bluetooth or USB. You can select and preview your design from ChromoLCD’s display menu, then stamp the device onto your item. Within about 15 minutes, you’ll have a personalized piece, and if you’d like to change it, you can program a new design onto your object.

“We see ChromoLCD as a bridge between consumers and photochromic dyes,” says Zhu, who is also co-lead author on a paper presenting this work. “It’s basically a stamp, and it’s very easy to use. There are no alignment requirements, no 3D object texture creation. You just upload the image you’d like to put on your bag, place it on there, and then you’d have a personalized accessory.”

ChromoLCD showed it could add a personalized touch to accessories such as a handbag by stamping on colorful drawings of things like fish and flowers. It also embedded an augmented reality (AR) tag (much like a QR code) on a tiled kitchen counter, which linked to a cooking tutorial a user could watch while preparing a meal. The tool even reprogrammed a whiteboard to display high-resolution reference images, and could potentially turn any whiteboard into an interactive canvas that blends digital visuals with physical sketching.

Welcome to the light show

At its core, ChromoLCD is a tower of power. Its display screen sits atop a white shell, which houses a computer chip, a backlight made up of bright ultraviolet (UV) and red, green, and blue (RGB) LEDs, and an LCD panel. In other words, while ChromoLCD works its magic to customize an object, a light show takes place behind the scenes.

The system first produces a black-and-white video that outlines the brightness of particular pixels in the image you select. For example, a picture of a parrot will have some areas that are darker than others, such as the shadows cast under its wing. Then, a UV light darkens (or saturates) the dye on your object, followed by the RGB lights that brighten it up and color in each pixel. It’s kind of like when you open the shades in the morning — what starts as a blast of bright light soon becomes a more colorful visual. These lights are produced at precise frequencies that the LCD maps onto your target object.
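
The saturate-then-bleach logic can be sketched as a toy per-pixel schedule. Everything below is a simplification of ours for illustration: the bleach rates are invented, bleaching is assumed linear in exposure time, and the real dye response is more involved.

```python
# Toy model of ChromoLCD's saturate-then-bleach sequence.
# Assumptions (ours): bleaching is linear in exposure time, and each
# channel has a single made-up bleach rate. Real dye behavior differs.

BLEACH_RATE = {"R": 0.02, "G": 0.025, "B": 0.03}  # fraction per second (hypothetical)

def exposure_schedule(target_rgb):
    """Seconds of R/G/B light per channel, after a UV pass has
    saturated (darkened) the dye everywhere."""
    schedule = {}
    for channel, target in zip("RGB", target_rgb):
        # A brighter target channel needs more bleaching, so more time.
        schedule[channel] = target / BLEACH_RATE[channel]
    return schedule

# Example: schedule for a warm orange pixel (R=1.0, G=0.6, B=0.1).
print(exposure_schedule((1.0, 0.6, 0.1)))
```

The point of the sketch is the ordering: one uniform UV saturation pass, then per-channel light doses that vary pixel by pixel with the target image.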

Zhu and her colleagues note that these components are fairly easy to purchase, in case you want to make your own ChromoLCD at home. Recreating ChromoLCD could help you turn often-overlooked items into interactive displays that you can modify as you please. “A wall in your office can show your family’s pictures when you miss them, or perhaps a doormat can show a customized greeting for each of your guests,” says Zhu. “It’s sort of like turning the world into your canvas.”

What next?

With ChromoLCD joining PortaChrome and PhotoChromeleon, CSAIL researchers have developed a family of systems that help us digitize our surroundings. The next step for them is to find a way to help with the creative process of what to put there. Currently, you still need to upload a picture or even create a texture image for a 3D object. With the recent advancements we’ve seen from AI in texture generation, though, users could make requests without as much effort. By simply turning on your phone’s camera (or wearing an AR helmet) and pointing it at a particular object, you could ask your generative system to “turn a cup into a medieval-style tankard.” Voilà: you’d have programmed drinkware.

In the meantime, Zhu and her colleagues are bringing photochromic material to larger surfaces by developing a reprogrammer in the shape of a wall-roller. The machine works much like painting a wall, allowing you to place larger designs onto a surface. CSAIL researchers are also exploring swiping and ironing motions, and even building their current technology into robots to help them communicate with humans and other machines. The machines would be able to essentially write what they’re doing onto a surface — for example, a Roomba vacuum could tell its robotic counterparts that it cleaned specific areas of a large floor by stamping a clearly displayed, high-resolution message on the ground.

Narges Pourjafarian, a postdoc at Northeastern University who wasn’t involved in the paper, says that ChromoLCD is more than a resolution upgrade over prior MIT projects. “It reframes monochromatic LCD panels as wavelength-selective fabrication tools, rather than merely display endpoints. This approach expands how we think about reprogrammable surface appearance, enabling high-resolution, reconfigurable graphics to be embedded directly into physical environments without the need for stationary projection enclosures. It opens a path toward compact, portable augmentation of garments, countertops, and shared surfaces.”

Zhu wrote the paper with six CSAIL affiliates. They are: MIT undergraduates Qingyuan Li (who is a co-lead author), Katherine Yan, Alex Luchianov, and Eden Hen; Harvard University graduate student and former visiting researcher Emily Guan; and MIT Associate Professor Stefanie Mueller, who is a CSAIL principal investigator and senior author on the work. The researchers will present their paper at the ACM International Conference on Tangible, Embedded, and Embodied Interaction.


On algorithms, life, and learning

Operations research expert Dimitris Bertsimas delivered the annual Killian Lecture, providing a look at the past and future of his work.


From enhancing international business logistics to freeing up more hospital beds to helping farmers, MIT Professor Dimitris Bertsimas SM ’87, PhD ’88 summarized how his work in operations research has helped drive real-world improvements, while delivering the 54th annual James R. Killian Faculty Achievement Award Lecture at MIT on Thursday, March 19.

Bertsimas also described how artificial intelligence is now being used in some of his scholarly projects and as a tool in MIT Open Learning efforts, which he currently directs — another facet of a highly productive and lauded career over four decades at the Institute. The Killian Award is the highest prize MIT gives its faculty.

“I have tried to improve the human condition,” Bertsimas said, summarizing the breadth of his work and the many applications to everyday living that he has found for it.

At MIT, Bertsimas is the vice provost for open learning, associate dean for online education and artificial intelligence, Boeing Leaders for Global Operations Professor of Management, and professor of operations research in the MIT Sloan School of Management. He also served as the inaugural faculty director of the master of business analytics program at MIT Sloan, and has held the position of associate dean of business analytics.

Bertsimas’ remarks encompassed both his past insights and his ongoing studies, as well as his current efforts to add AI to his research. Describing the concept of “robust optimization,” a highly influential approach that Bertsimas helped develop in the early 2000s, he explained how it has enabled, for instance, more reliable shipping through the Panama Canal. Other approaches to optimization aimed at getting more vessels through the canal every day — up to 48 — but would encounter significant problems at times. Bertsimas’ approach identified that 45 vessels a day was better — a slightly lower number, but one that “was always feasible,” he noted.

Over time, Bertsimas’ work has helped structure all kinds of solutions in business logistics; it has even been used for the allocation of school buses in Boston.

More recently, as Bertsimas explained in the lecture, he and his collaborators have been working with Hartford HealthCare in Connecticut on a wide range of issues, and are increasingly incorporating AI into the development of tools for diagnostics, among other things. On the optimization front, their research has suggested ways to reduce the average stay of a hospital patient, from 5.38 days to 4.93 days. In the main Hartford hospital they have studied, given the number of existing beds, that reduction has enabled more than 5,000 additional patient stays per year.
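
The throughput arithmetic behind that figure is straightforward. In the sketch below, the bed count is our assumption for illustration; the article reports only the length-of-stay figures and the roughly 5,000-stay result.

```python
# Shorter stays mean more patients per bed per year.
# Assumptions (ours): beds are fully occupied year-round, and the
# 800-bed figure is illustrative, not from the article.

beds = 800
old_stay_days = 5.38
new_stay_days = 4.93

bed_days_per_year = beds * 365
extra_stays = (bed_days_per_year / new_stay_days
               - bed_days_per_year / old_stay_days)
print(f"Additional patient stays per year: {extra_stays:,.0f}")
```

With roughly 800 fully occupied beds, trimming the average stay by about half a day frees capacity for several thousand additional stays a year, consistent with the scale reported above.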

“It’s a very different ballgame,” Bertsimas said.

Bertsimas delivered his lecture, titled “Algorithms for Life: AI and Operations Research Transforming Healthcare, Education, and Agriculture,” to an audience of over 300 MIT community members in Huntington Hall (Room 10-250) on campus.

The award was established in 1971 to honor James Killian, whose distinguished career included serving as MIT’s 10th president, from 1948 to 1959, and subsequently as chair of the MIT Corporation, from 1959 to 1971.

“Professor Bertsimas’ scholarly contributions are both extensive and groundbreaking,” said Roger Levy, chair of the MIT faculty and a professor in the Department of Brain and Cognitive Sciences, while making introductory remarks. “He’s one of the rare individuals who has made significant contributions to both intellectual threads in the field of operations research: one, optimization — combinatorial, linear, and nonlinear — and number two, stochastic processes.”

Indeed, Bertsimas’ work has both developed better tools for studying and conducting operations and found a wide range of applications. As Bertsimas noted in his lecture, the deaths of both of his parents in 2009 helped propel him to start looking extensively at ways operations research could help health care.

Bertsimas received his BS in electrical engineering and computer science from the National Technical University of Athens in Greece. Moving to MIT for his graduate work, he then earned his MS in operations research and his PhD in applied mathematics and operations research. Bertsimas joined the MIT faculty after receiving his doctorate, and has remained at the Institute ever since.

Bertsimas is also known as an energetic teacher who has been the principal advisor to a remarkable number of PhD students — 106 and counting, at this point.

“It is far and away my favorite activity, to supervise my doctoral students,” Bertsimas said. “It is a privilege, in my opinion, to work with exceptional young people like the ones we have at MIT, in ability and character and aspiration. They actually make me a better scientist, and a better person.”

“MIT is part of my identity,” Bertsimas quipped while noting that he is the only faculty member on campus who has those three letters, in order, in his first name.

In the latter part of the lecture, Bertsimas highlighted work he has been doing as vice provost for open learning at MIT. He has personally developed a large online course based on his own material, “The Analytics Edge.” In his current role, Bertsimas said, he now aspires for MIT to reach a billion learners with online courses, part of his effort to “democratize access to education.”

Bertsimas also demonstrated for the audience some AI tools he and his colleagues are working to bring to online education, including ways of condensing material, and the translation of online material into other languages.

It is just one more chapter in a long and broad-ranging career dedicated to grasping phenomena and developing tools to help us navigate them.

Or as Bertsimas noted while summarizing his scholarship at one point in the lecture, “I try to increase the human understanding of how the world works.”


Bridging medical realities in the study of technology and health

Anthropologist Amy Moran-Thomas studies overlooked insights from people health care is meant to reach.


A few weeks ago, Amy Moran-Thomas and 20 students in her class 21A.311 (The Social Lives of Medical Objects) were gathered around a glucose meter, a jar of test strips, and various spare medical parts in the MIT Museum seminar room, talking about how to make them work better.

The class had just heard a presentation from the president of the Belize Diabetes Association in Dangriga, Norma Flores, a nurse whose hospital had recently received a huge shipment of insulin that, although durable in theory, seemed to have spoiled in a heat wave. Flores and the students discussed whether scientists could develop temperature-stable insulin and design repairable glucose meters and other technologies for hospitals worldwide.

“Whenever people keep saying they are concerned about an issue, but the medical literature doesn’t describe it yet, there is a key question about what’s happening,” says Moran-Thomas. “Ethnography can help us learn about it.”

For Moran-Thomas, an MIT anthropologist, that class session was a way of connecting people and ideas that are too often overlooked. Flores was a central figure in Moran-Thomas’ 2019 book, “Traveling with Sugar: Chronicles of a Global Epidemic,” about diabetes in Belize and the failures of medical technology designed to treat it. (At the end of class, Flores surprised Moran-Thomas with a framed commendation from the Belize Diabetes Association for their nearly 20 years of work together.)

That approach informs all of Moran-Thomas’ work. Currently she is co-leading a group working on a project called the “Sugar Atlas,” mapping the social and economic dimensions of diabetes in the Caribbean, in tandem with scholars Nicole Charles of the University of Toronto and Tonya Haynes of the University of the West Indies. Moran-Thomas has also spent more than a decade following the case of notorious medical experiments that took place in Guatemala in the 1940s, the subject of a recent paper she published with Susan Reverby of Wellesley College.

Closer to home, Moran-Thomas is working on a book about how energy extraction affects chronic conditions and mental health in her native Pennsylvania, at a time of increasing hospital closures. As part of this research, she has been working with MIT seismologist William Frank to develop low-cost sensors that people can use to measure the impact of industrial activity on their home neighborhoods. The research team was recently awarded a grant by the MIT Human Insight Collaborative (MITHIC) for the work. And with another MITHIC grant, Moran-Thomas and several colleagues are working to create a new “Health and Society” educational program at MIT.

“A through line in my work is the question about how to put people at the center of health and medicine,” says Moran-Thomas, an associate professor in MIT’s anthropology program. “Because that’s not how it feels to most people in the world. Care technologies that work for everybody, and health technologies in relation to chronic disease, connect all these different projects.”

The work Moran-Thomas may be best known for occurred in 2020, during the Covid-19 pandemic, when her research recovered an array of neglected clinical studies showing that pulse oximeters functioned differently depending on patients’ skin color. After she published a piece about it in the Boston Review, further hospital studies by physicians who had read the essay confirmed a pattern of disproportionately inaccurate readings, leading to subsequent efforts to improve the technology, all stemming from her careful, patient-centric approach.

“What anthropology has to offer the world, and other knowledge systems, is the insights of people that might be missing from many accounts, and highlighting these stories that are getting left out,” Moran-Thomas says. “Those are not footnotes, but the central thing to follow. And those histories are also alive in the material world around us.”

Thinking across medical and climate technologies

After growing up in Pennsylvania, Moran-Thomas majored in literature while earning her BA from American University. She decided to pursue ethnographic research as a graduate student, and entered Princeton University’s program in anthropology, earning an MA in 2008 and her PhD in 2012. After postdoc stints at Princeton and Brown University, Moran-Thomas joined the MIT faculty in 2015.

At Princeton, Moran-Thomas’ dissertation research examined the diabetes epidemic in Belize, forming the basis of her first book, “Traveling with Sugar,” whose title is an expression in Belize for living with diabetes. As she chronicles in the book, plantation-era changes that undermined indigenous agriculture, among other things, contributed to a local economy that made diets sugar-heavy, while medical technologies are often unreliable or ill-suited to local conditions. The book also traces breakdowns in care technologies, such as prosthetic limbs (often sought after diabetes-linked amputations), glucose meters, hyperbaric chambers, insulin supply chains, dialysis machines, and pain management technologies.

“Traveling with Sugar” also develops a critique that has become a theme of Moran-Thomas’ work: that society often shifts the blame for illness onto patients while minimizing the larger-scale factors affecting everyday health.

“There can be this focus on exclusively prevention without care, the implicit assumption that patients need to act differently,” Moran-Thomas says. “Blame falls on individuals and families instead of a focus on other questions. Why are these technologies always breaking down? How are they designed, and by whom, for whom? What role is history playing in the present? And how are people trying to remake those structures?”

Those issues are highlighted in Moran-Thomas’ ongoing project, “Sugar Atlas: Counter-Mapping Diabetes from the Caribbean,” which is backed by a two-year Digital Justice Seed Grant from the American Council of Learned Societies. Whereas international organizations tend to lump North America and the Caribbean together when tracking diabetes, this project zooms in on specific aspects of the disease and its historical and structural contributors in the Caribbean, such as the distance people must travel to buy vegetables, their proximity to insulin supplies, and the ways climate change is affecting sea life and fishing practices.

“We’re trying to create a community platform offering a different vision of these conditions,” Moran-Thomas says of the effort to map otherwise unrecorded aspects of the global diabetes epidemic, while tracing mutual aid networks and people’s “arts of care” in the present.

Better design for common devices

Following her research in Belize, where glucose meters were prone to breaking, Moran-Thomas began taking a more active focus on the design of medical technology. At MIT, she began co-teaching a course with tech innovator Jose Gomez-Marquez, 21A.311 (The Social Lives of Medical Objects). The idea was to get students to think about device design that could lead to more durable, fixable, and equitable products.

In turn, Moran-Thomas’ interest in devices led her to question the pulse oximeter readings she started seeing firsthand during the Covid-19 pandemic. Pulse oximeters measure oxygen saturation levels in patients and are part of even routine appointment check-ins. They work optically, casting beams of light to measure the color of hemoglobin, which varies depending on how much oxygen it contains.
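The optical principle can be illustrated with the classic “ratio of ratios” calculation used in many pulse oximeters: compare the pulsatile (AC) and baseline (DC) absorption at the red and infrared wavelengths. This is a minimal sketch; the linear calibration constants below are textbook placeholders, not values from any real device, and real oximeters fit that calibration curve to clinical data, which is precisely where the bias Moran-Thomas documented can enter.

```python
import numpy as np

def spo2_estimate(red, infrared):
    """Rough blood-oxygen estimate from red and infrared
    photoplethysmography signals via the 'ratio of ratios'."""
    def ac_dc(signal):
        signal = np.asarray(signal, dtype=float)
        dc = np.mean(signal)   # baseline absorption
        ac = np.ptp(signal)    # pulsatile peak-to-peak swing
        return ac, dc

    ac_r, dc_r = ac_dc(red)
    ac_ir, dc_ir = ac_dc(infrared)
    r = (ac_r / dc_r) / (ac_ir / dc_ir)  # ratio of ratios
    # Illustrative linear calibration; commercial devices derive
    # theirs empirically from volunteer studies.
    return 110.0 - 25.0 * r
```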

After firsthand encounters with the sensors led to more research, Moran-Thomas learned that some medical professionals had lingering, unanswered questions about pulse oximeters and the way they were calibrated. After she published her essay in the Boston Review, arguing for more data collection, medical researchers examined the issue more closely, finding that patients with darker skin were about three times more likely to have erroneous blood-oxygen readings than patients with lighter skin. Ultimately, an FDA panel recommended changes to the devices.

“A lot of my work has been learning about health and medicine technologies from the perspectives of patients, families, and nurses, rather than beginning with engineers and doctors,” Moran-Thomas says. “Those two projects, about blood sugar and blood oxygen, were about the shortcomings of those devices and how they could be improved. Those are perspectives I can highlight in hopes others will pick up on them and make other kinds of designs and policies possible.”

Moran-Thomas’ interest in device design has continued with her current book project, about the chronic health effects of energy production in Pennsylvania. She has worked with MIT seismologist William Frank, of the Department of Earth, Atmospheric and Planetary Sciences, to construct an inexpensive meter people can use to measure shaking in their homes caused by industrial activities. (After colleagues in western Pennsylvania reached out with seismic concerns, Moran-Thomas got the idea to contact Frank, whose work she had happened to read about in MIT News.)

The effort is also inspired by guidance from community leaders based at the Center for Coalfield Justice in western Pennsylvania. The research team has received a MITHIC SHASS+ Connectivity grant for their project, “Seismic Collaboratory: Rural Health, Missing Science, and Communicating the Chronic Impacts of Extraction.”

“I’ve met people who have been told by their doctors they must have vertigo, while they thought the walls of their house were really shaking,” Moran-Thomas says. “In a case like that, the device you need is not in the clinic, it’s a monitor at home.”
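As a rough illustration of what such a home monitor might compute, here is a minimal sketch that summarizes a window of accelerometer readings. The function name and the baseline threshold are hypothetical, invented for this example rather than taken from the MIT project.

```python
import numpy as np

def shaking_report(samples, quiet_rms=0.002):
    """Summarize a window of accelerometer readings (in g) from a
    hypothetical low-cost home vibration sensor.

    Returns the RMS vibration level and whether it exceeds a
    'quiet house' baseline; the threshold here is a placeholder.
    """
    x = np.asarray(samples, dtype=float)
    x = x - np.mean(x)  # remove gravity / DC offset
    rms = float(np.sqrt(np.mean(x ** 2)))
    return {"rms_g": rms, "above_baseline": rms > quiet_rms}
```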

The book, overall, will examine the effects of energy production on chronic disease and mental health issues in Pennsylvania, problems exacerbated by the growing number of hospital closures in the state.

Moran-Thomas is simultaneously working with several co-investigators to create the “Health and Society” educational program at MIT, including Katharina Ribbeck, Erica James, Aleshia Carlsen-Bryan, and Dina Asfaha. Their work was recently awarded an Education Innovation Seed Grant from MITHIC.

From small devices to large-scale changes in health care systems, from the U.S. to other regions, Moran-Thomas remains focused on a core set of issues about how to improve and broaden health care — and lessen the need for it in the first place.

“Thinking across scales is something that’s really useful about anthropology,” Moran-Thomas says. “Even one medical device is a tiny piece of a bigger infrastructure. In order to study that technology or device or sensor, you have to understand the bigger infrastructure it’s attached to, and that there are people involved in all parts of it.” 


Lasers, robots, action: MIT workshop explores Raman spectroscopy

Participants learn how laser “fingerprinting” can help identify materials in fields ranging from law enforcement to art restoration.


Could a three-hour workshop on an advanced materials analysis technique turn someone into a detective — or perhaps an art restorer?

At MIT’s Center for Bits and Atoms (CBA) in late January, about a dozen students explored that possibility during an Independent Activities Period (IAP) workshop on Raman spectroscopy, a technique that uses laser light to “fingerprint” materials. The session even featured a robotic dog equipped with sensing equipment, demonstrating how chemical analysis can be done remotely.

The workshop, led by MIT postdoc Lamyaa Almehmadi in collaboration with the CBA, introduced participants to a powerful technique now used by law enforcement and first responders to identify narcotics and explosives, by gemologists to authenticate precious stones, and by pharmaceutical companies to verify raw materials and ensure product quality. CBA graduate researcher Jiaming Liu co-hosted, delivering lectures, demonstrating Raman equipment, and contributing to the curriculum and hands-on demonstrations.

“It can open up new possibilities for innovation across many fields,” said Almehmadi, an analytical chemist in the Department of Materials Science and Engineering (DMSE). After attendees learned the fundamentals, she encouraged them to think creatively about new applications: “My hope is to inspire all of you to think about doing something with Raman spectroscopy that no one has done before.”

Fingerprinting materials

Participants brought items to class to analyze using handheld devices, which fire laser light and measure how it bounces back. The resulting pattern behaves like a molecular fingerprint, identifying the materials in the item — whether it’s a paper clip, a piece of tree bark, or a mixing bowl.

Workshop attendee Sarah Ciriello, an administrative assistant at DMSE who brought a stone she found at the beach, was taken aback by the results. The Raman device suggested a 39 percent probability that the sample contained concrete-like material, with the remaining readings matching synthetic compounds — blurring the line between natural and manufactured materials.

“It’s man-made — I was surprised,” Ciriello said.

Developed in 1928 by Indian scientist C.V. Raman, who later won the Nobel Prize in Physics, Raman spectroscopy was groundbreaking because it used visible light to probe materials without destroying them, a major advantage over other techniques at the time, such as chromatography or mass spectrometry. But for decades, the Raman signal — the light scattered back from a sample — was weak, and the instruments were bulky, limiting the technique’s practical use.

Advances in lasers, computing power, and miniaturized optics have transformed Raman spectroscopy into a portable tool. Today’s handheld devices can instantly compare a sample’s molecular fingerprint against vast digital libraries, allowing users to identify thousands of materials in seconds. Because it doesn’t destroy the sample, Raman is especially useful in fields that require preserving materials — such as law enforcement, where evidence must remain intact, and art restoration.
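The library-matching step these handheld devices perform can be illustrated with a toy sketch: score a measured spectrum against reference spectra and rank the matches. The material names and intensity values below are invented for demonstration, not real Raman data, and real instruments use far larger libraries and more sophisticated scoring.

```python
import numpy as np

# Toy reference library: each "spectrum" is intensity sampled at
# shared Raman-shift bins. Entries are illustrative only.
LIBRARY = {
    "baking soda": np.array([0.1, 0.9, 0.2, 0.05, 0.3]),
    "table salt":  np.array([0.0, 0.1, 0.1, 0.8, 0.2]),
    "quartz":      np.array([0.7, 0.2, 0.1, 0.1, 0.6]),
}

def identify(sample):
    """Rank library entries by cosine similarity to a measured
    spectrum, mimicking how a handheld device matches a
    molecular fingerprint against its digital library."""
    sample = np.asarray(sample, dtype=float)
    scores = {}
    for name, ref in LIBRARY.items():
        scores[name] = float(
            sample @ ref / (np.linalg.norm(sample) * np.linalg.norm(ref))
        )
    # Best match first; a real instrument also reports a match
    # probability, like the 39 percent reading on the beach stone.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```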

Almehmadi’s own research focuses on advancing Raman spectroscopy by developing highly sensitive, semiconductor-based sensors that make portable chemical analysis possible, with applications ranging from medical diagnostics to forensic and environmental monitoring.

“Raman can be used to analyze any material,” Almehmadi says. “That’s why I decided to introduce it to students from diverse backgrounds.”

IAP classes are open to students and staff across MIT, and the Raman workshop reflected that range — from administrative staff to graduate and undergraduate students and postdocs in departments and labs including DMSE, the Department of Mechanical Engineering, the Media Lab, and the Broad Institute.

Walking the robot dog

A crowd-pleasing element in the workshop was the integration of a robot dog that belongs to the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). The demonstration highlighted how Raman technology can be used in dangerous environments, such as crime scenes or toxic industrial sites.

The handheld device was secured to the robot using tape, and Almehmadi showed how she could navigate the dog to a plastic bag filled with a white powder — baking soda.

But in a real-world scenario, “How can we know if it is baking soda or not?” she says. “So we just shined the light, and then the instrument told us what it was.”

Participants used a Wi-Fi app on their phones to view the results and a small remote controller to operate the robotic dog themselves.

“I loved the robot dog,” Ciriello says. “I was able to control it a bit, but it was challenging because the gauge was really sensitive.”

Michael Kitcher, a postdoc in DMSE, also praises the robot demonstration.

“Given that we just duct taped the device onto the dog — it was cool to see it actually worked,” he says.

Looking ahead

Kitcher, who researches magnetic materials for electronic applications, joined the workshop to learn more about Raman spectroscopy, which he had read about but never used. He was impressed by its versatility — in addition to the beach stone and baking soda, the device identified materials in a contact lens, cosmetics, and even a diamond.

Although it struggled to analyze a piece of chocolate he brought — other signals from the chocolate interfered — Kitcher sees strong potential for his own research. One area he’s interested in is unconventional magnetic materials, such as altermagnets, with unusual magnetic behavior that researchers hope to better understand and control for more energy-efficient electronics.

“Over the last couple of years, people have been trying to get a better sense of why these materials behave the way they do — how we can control this unconventional magnetic order,” he says. Raman spectroscopy can probe the vibrations of atoms in a material, helping researchers detect patterns in the crystal structure that underlie unusual magnetic behaviors. By understanding these vibrations, scientists could unlock material design rules that enable ultra-fast, low-energy computing.

Hands-on workshops like this — that inspire innovative future applications — Almehmadi says, are at the heart of an MIT education.

“I’ve always learned best by doing,” she says. “Lectures and reading are important, but real understanding comes from hands-on experience.”


What’s the right path for AI?

Conference speakers discussed the unfolding trajectory of AI and the benefits of shaping technology to meet people’s needs.


Who benefits from artificial intelligence? This basic question, which has been especially salient during the AI surge of the last few years, was front and center at a conference at MIT on Wednesday, as speakers and audience members grappled with the many dimensions of AI’s impact.

In one of the conference’s keynote talks, journalist Karen Hao ’15 called for an altered trajectory of AI development, including a move away from the massive scale-up of data use, data centers, and models being used to develop tools under the rubric of “artificial general intelligence.”

“This scale is unnecessary,” said Hao, who has become a prominent voice in AI discussions. “You do not need this scale of AI and compute to realize the benefits.” Indeed, she added, “If we really want AI to be broadly beneficial, we urgently need to shift away from this approach.”

Hao is a former staff member at The Wall Street Journal and MIT Technology Review, and author of the 2025 book, “Empire of AI.” She has reported extensively on the growth of the AI industry.

In her remarks, Hao outlined the astonishing size of the datasets now being used by the biggest AI firms to develop large language models. She also emphasized some of the tradeoffs of this scale-up, such as the massive energy consumption and emissions of hyper-scale data centers, which also consume large amounts of water. Drawing on her own reporting, Hao also noted the human toll of the manual data-labeling work that gig-economy workers around the world perform for the hyper-scale models.

By contrast, Hao offered, an alternate path for AI might exist in the example of AlphaFold, the Nobel Prize-winning tool used to identify protein structures. This represents the concept of the “small, task-specific AI model tackling a well-scoped problem that lends itself to the computational strengths of AI,” Hao said.

She added: “It’s trained on highly curated data sets that only have to do with the problem at hand: protein folding and amino acid sequences. … There’s no need for fast supercomputing because the datasets are small, the model is small, and it’s still unlocking enormous benefit.”

In a second keynote address, scholar Paola Ricaurte underscored the desirability of purpose-driven AI approaches, outlining a number of conceptual keys to evaluating the usefulness of AI.

“There is no sense in having technologies that are not going to respond to the communities that are going to use them,” said Ricaurte.

She is a professor at Tecnologico de Monterrey in Mexico and a faculty associate at Harvard University’s Berkman Klein Center for Internet and Society. Ricaurte has also served on expert committees such as the Global Partnership for AI, UNESCO’s AI Ethics Experts Without Borders, and the Women for Ethical AI project.

The event was hosted by the MIT Program in Women’s and Gender Studies. Manduhai Buyandelger, the program’s director and a professor of anthropology, provided introductory remarks.

Titled “Gender, Empire, and AI: Symposium and Design Workshop,” the event was held in the conference space at the MIT Schwarzman College of Computing, with over 300 people in attendance for the keynote talks. There was also a segment of the event devoted to discussion groups, and an afternoon session on design, in a half-dozen different subject areas.

In her talk, Hao decried the often-vague nature of AI discourse, suggesting it impedes a more thoughtful discussion about the industry’s direction.

“Part of the challenge in talking about AI is the complete lack of specificity in the term ‘artificial intelligence,’” Hao said. “It’s like the word ‘transportation.’ You could be referring to anything from a bicycle to a rocket.” As a result, she said, “when we talk about accessing its benefits, we actually have to be very specific. Which AI technologies are we talking about, and which ones do we want more of?”

In her view, the smaller-sized tools — more akin to the bicycle, by analogy — are more useful on an everyday basis. As another example, Hao mentioned the project Climate Change AI, focused on tools that can help improve the energy efficiency of buildings, track emissions, optimize supply chains, forecast extreme weather, and more.

“This is the vision of AI that we should be building towards,” Hao said.

In conclusion, Hao encouraged audience members to be active participants in AI-related discourse and projects, saying the trajectory of the technology was not yet fixed, and that public interventions matter.

Citing the writer Rebecca Solnit, Hao suggested to the audience that “Hope locates itself in the premise that we don’t know what will happen, and that in the spaciousness of uncertainty is room to act.” She also noted, “Each and every one of you has an active role to play in shaping technology development.”

Ricaurte, similarly, encouraged attendees to be proactive participants in AI matters, noting that technologies will work best when the pressing everyday needs of all citizens are addressed.

“We have the responsibility to make hope possible,” Ricaurte said.


After 16 years leading Picower Institute, Li-Huei Tsai will sharpen focus on research, teaching

Tsai, who has grown the MIT neuroscience institute, will increase focus on research including Alzheimer’s disease and Down syndrome.


MIT Picower Professor Li-Huei Tsai, who has led The Picower Institute for Learning and Memory since 2009, will step down from the role of director at the end of the academic year in May. Her decision frees her to focus exclusively on her academic work, including her continued leadership of MIT’s Aging Brain Initiative and the Alana Down Syndrome Center. Meanwhile, the search for the Picower Institute’s next director has begun.

“During her exceptional 16-year tenure in the role of director, Li-Huei has led substantial growth at the Picower Institute,” says Nergis Mavalvala, dean of the MIT School of Science and the Curtis and Kathleen Marble professor of astrophysics. “She has markedly expanded the faculty — eight of the current 16 labs joined Picower under her directorship — through successful recruitment of highly talented neuroscientists. She has done this, and more, all while leading one of our most productive and influential labs, working on a quintessentially grand challenge in human health: combating Alzheimer’s disease.”

To conduct the search for a new Picower Institute director, Mavalvala has appointed a committee led by Sherman Fairchild Professor Matthew Wilson, associate director of the institute. Serving with Wilson are Picower Professor and former institute director Mark Bear, Menicon Professor Troy Littleton, Assistant Professor Sara Prescott, and Professor Fan Wang. They will identify and interview candidates, producing a report to Mavalvala later this spring.

Growing an institute

Tsai, a professor in MIT’s Department of Brain and Cognitive Sciences and a member of The Broad Institute of MIT and Harvard, says she is grateful to have had the opportunity to build the Picower Institute into a preeminent center for neuroscience research.

“I’m immensely proud of what our institute represents: world-renowned neuroscience research that is creative, rigorous, novel, and impactful,” Tsai says. “Our labs produce innovations, discoveries, and often translational strategies that have broken new ground and pushed science, medicine, and technology forward. We also provide excellent training that has enabled us to launch the careers of many of the field’s new and future leaders. It has been a tremendous honor to be able to build on the incredible foundation and inspiration provided by my predecessors Susumu Tonegawa and Mark Bear to enable the institute’s growth and success.”

Founded by Tonegawa as the Center for Learning and Memory in 1994, and then renamed The Picower Institute for Learning and Memory after a transformative gift by Barbara and Jeffry Picower in 2002, the institute now comprises about 400 scientists, students, and staff across 16 labs in MIT’s buildings 46 and 68.

But when Tsai became director in July 2009, just three years after coming to MIT from Harvard Medical School, the Picower Institute was a smaller enterprise of 11 labs, and a community closer to 200 members. Over the ensuing years, Tsai worked closely with the Picowers’ foundation, formerly the JPB Foundation and now the Freedom Together Foundation, to develop several strategic initiatives to accelerate growth and enhance research productivity. These have included programs specifically designed to support junior faculty, to catalyze more applications for private grant funding, and to sustain fellowships for more than 18 postdocs and graduate students. Working with the foundation, she has also expanded the scope of research support provided by the Picower Institute Innovation Fund begun under Bear.

Eager to galvanize colleagues across MIT in fighting neurodegenerative diseases and neurological disorders affecting cognition, Tsai also built and launched two campus-wide initiatives: The Aging Brain Initiative, founded in 2015 and sustained by a broad coalition of donors, and the Alana Down Syndrome Center, established in 2019 with a gift from The Alana Foundation.

Research focus

As the Picower Institute has grown, Tsai’s research has, too. In work spanning molecular, cellular, circuit, and network scales in the brain, Tsai has led numerous highly cited discoveries about the neurobiology of Alzheimer’s disease and has translated several of these insights into specific therapeutic strategies, including one now undergoing a national phase III clinical trial. In all, she has published more than 230 peer-reviewed neuroscience studies, generated numerous patents, and helped launch several startups. She has been named a fellow of the National Academy of Medicine, the American Academy of Arts and Sciences, and the National Academy of Inventors, and received awards including the Society for Neuroscience Mika Salpeter Lifetime Achievement Award and the Hans Wigzell Prize.

Tsai’s earliest discoveries identified key roles in neurodegeneration for the enzyme CDK5. She has pioneered understanding of how epigenetic changes in brain cells affect Alzheimer’s pathology and memory. Her work has also highlighted a critical role for DNA double-strand breaks in disease.

In more recent work, Tsai’s lab has conducted several studies using innovative human stem-cell-based cultures to advance understanding of how the biggest genetic risk factor for Alzheimer’s (a gene variant called APOE4) contributes to pathology, and how some existing medications and supplements might help. In collaboration with MIT professor of computer science Manolis Kellis, she has also published several sweeping atlases documenting how gene expression and epigenetics differ in Alzheimer’s disease. These studies have provided the field with troves of new data and have yielded new insights into what makes the brain vulnerable to disease, and what helps some people remain resilient.

Tsai has also led a collaboration with professors Emery N. Brown and Edward S. Boyden that has discovered a potential noninvasive, device-based treatment for Alzheimer’s and possibly other neurological disorders. Called “Gamma Entrainment Using Sensory Stimuli” (GENUS), the technology stimulates the senses (vision, hearing, or touch) to increase the power and synchrony of 40 Hz “gamma” waves in the brain. Numerous studies by her group and others, involving either lab animals or human volunteers, have shown that the approach can preserve brain volume, learning, and memory and reduce signs of Alzheimer’s pathology. Via an MIT spinoff company, the technology has now advanced to a pivotal clinical trial enrolling hundreds of people around the country.
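The stimulus side of this approach is simple to illustrate: a short waveform that pulses at 40 Hz. The sketch below generates a generic 40 Hz auditory click train of the broad kind used in sensory-entrainment studies; the sample rate, click width, and function name are illustrative choices, not the actual GENUS protocol.

```python
import numpy as np

def gamma_click_train(duration_s=1.0, rate_hz=40, fs=44100, click_ms=1.0):
    """Generate a click train pulsing at rate_hz (40 Hz by default),
    a generic example of a gamma-frequency sensory stimulus."""
    n = int(duration_s * fs)
    signal = np.zeros(n)
    click_len = int(click_ms / 1000 * fs)  # samples per click
    period = int(fs / rate_hz)             # samples between click onsets
    for onset in range(0, n, period):
        signal[onset:onset + click_len] = 1.0
    return signal
```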

“After 16 years leading the Picower Institute, I’m now eager to sharpen my focus on advancing human health through the work in my lab, the Aging Brain Initiative, and the Alana Center,” Tsai says.


MIT and Hasso Plattner Institute establish collaborative hub for AI and creativity

Jointly led by the MIT Morningside Academy for Design, MIT Schwarzman College of Computing, and the Hasso Plattner Institute in Potsdam, the hub will foster a dynamic community where computing, creativity, and human-centered innovation meet.


The following is a joint announcement from the MIT School of Architecture and Planning, MIT Schwarzman College of Computing, Hasso Plattner Institute, and Hasso Plattner Foundation.

The MIT Morningside Academy for Design (MAD), MIT Schwarzman College of Computing, Hasso Plattner Institute (HPI), and Hasso Plattner Foundation celebrated the launch of the MIT and HPI AI and Creativity Hub (MHACH) at a signing ceremony this week. This 10-year initiative aims to deepen ties between computing and design as advances in artificial intelligence are reshaping how ideas are conceived and shared.

Funded by the Hasso Plattner Foundation, MIT and HPI will work together to foster collaborative interdisciplinary research and support a portfolio of educational programs, fellowships, and faculty engagement focused on AI and creativity, expanding scholarly inquiry into AI applications across disciplines, industries, and societal challenges. The collaboration begins with an inaugural two-day workshop March 19-20 at MIT, bringing together faculty, students, and researchers to set early priorities.

“As we hear from our faculty, as the Information Age gives way to an era of imagination, we expect a new emphasis on human creativity,” reflects MIT President Sally Kornbluth. “Through this collaboration, MIT and HPI are creating a shared space where students and faculty will come together across disciplines to explore new ideas, experiment with emerging tools, and invent new frontiers at the intersection of human creativity and AI.”

“The best minds need the right environment to do their most creative work,” says Rouven Westphal, from the Hasso Plattner Foundation. “When HPI and MIT come together across disciplines and borders, they create exactly that. The Hasso Plattner Foundation is committed to supporting this collaboration for the long term, building on Hasso Plattner’s vision of uniting technological excellence with human-centered design and creativity.”

Deepening collaboration at the intersection of technology, creativity, and societal impact

Building on the success of the Hasso Plattner Institute-MIT Research Program on Designing for Sustainability, established in 2022 between MIT MAD and HPI, the new MHACH hub represents a commitment to deepen collaboration at the intersection of technology, creativity, and societal impact.

“MIT and HPI share a common commitment to turning scientific excellence into real-world impact. Through this collaboration, we will create an environment where students and researchers from both sides of the Atlantic can work together, experiment across disciplines, and learn from one another — at a time when artificial intelligence is set to profoundly shape our lives. We are convinced that this collaboration will generate ideas with impact far beyond both institutions and inspire international cooperation and innovation,” says Professor Tobias Friedrich, dean and managing director of the Hasso Plattner Institute.

“HPI and MIT exist at the nexus of technology and creativity. Expanding this dynamic relationship will generate new paths for the infusion of AI, design, and creativity, enabling students, faculty, and researchers to dream and discover novel solutions, moving more quickly than ever from idea to implementation. MAD was established to connect thinkers across and beyond the Institute, and this new era of collaboration with HPI advances that mission on a global scale,” comments Hashim Sarkis, dean of the MIT School of Architecture and Planning and the Elizabeth and James Killian (1926) Professor.

Academic leadership from MIT and HPI will jointly shape the hub’s research and teaching agenda. Based in Potsdam, Germany, HPI is a center of excellence for digital engineering advancing research, education, and societal transfer in IT systems engineering, data engineering, cybersecurity, entrepreneurship, and digital health. Through its globally recognized HPI d-school and pioneering work in design thinking methodology, HPI brings a distinctive perspective on human-centered innovation to the collaboration, alongside a strong record in AI and data science research and technology transfer.

Expanding research and education on AI and creativity

The efforts of this multifaceted initiative are intended to foster a dynamic academic community spanning MIT and HPI, anchored by Hasso Plattner–named professorships and graduate fellowships whose recipients will be actively engaged in the hub. The long-term framework is designed to provide continuity for faculty appointments, doctoral training, and cross-campus research.

The agreement also includes the development of classes and educational programs in areas of shared AI focus, along with expanded experiential opportunities through AI-focused workshops, hackathons, and summer exchanges. A steering committee composed of representatives from the MIT School of Architecture and Planning, MIT Schwarzman College of Computing, and Hasso Plattner Institute will facilitate the shared governance of MHACH.

“Creativity has always been about extending human capability. At its core, this collaboration asks what it truly means to create something new. The question isn’t whether AI diminishes creativity, but how new forms of intelligence can deepen and enrich that process. Our goal is to explore that intersection with rigor and build a cross-disciplinary scholarly and research community that shapes how AI supports the creation of new ideas and knowledge,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science.

This collaboration is made possible by the Hasso Plattner Foundation’s long-term philanthropic commitment to institutions that connect technological innovation with design thinking and education. The Hasso Plattner Foundation has played a central role in establishing and supporting institutions such as the Hasso Plattner Institute and international design thinking programs that bridge disciplines and geographies.


Generative AI improves a wireless vision system that sees through obstructions

With this new technique, a robot could more accurately detect hidden objects or understand an indoor scene using reflected Wi-Fi signals.


MIT researchers have spent more than a decade studying techniques that enable robots to find and manipulate hidden objects by “seeing” through obstacles. Their methods utilize surface-penetrating wireless signals that reflect off concealed items.

Now, the researchers are leveraging generative artificial intelligence models to overcome a longstanding bottleneck that limited the precision of prior approaches. The result is a new method that produces more accurate shape reconstructions, which could improve a robot’s ability to reliably grasp and manipulate objects that are blocked from view.

This new technique builds a partial reconstruction of a hidden object from reflected wireless signals and fills in the missing parts of its shape using a specially trained generative AI model.

The researchers also introduced an expanded system that uses generative AI to accurately reconstruct an entire room, including all the furniture. The system utilizes wireless signals sent from one stationary radar, which reflect off humans moving in the space.  

This overcomes one key challenge of many existing methods, which require a wireless sensor to be mounted on a mobile robot to scan the environment. And unlike some popular camera-based techniques, their method preserves the privacy of people in the environment.

These innovations could enable warehouse robots to verify packed items before shipping, eliminating waste from product returns. They could also allow smart home robots to understand someone’s location in a room, improving the safety and efficiency of human-robot interaction.

“What we’ve done now is develop generative AI models that help us understand wireless reflections. This opens up a lot of interesting new applications, but technically it is also a qualitative leap in capabilities, from being able to fill in gaps we were not able to see before to being able to interpret reflections and reconstruct entire scenes,” says Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science, director of the Signal Kinetics group in the MIT Media Lab, and senior author of two papers on these techniques. “We are using AI to finally unlock wireless vision.”

Adib is joined on the first paper by lead author and research assistant Laura Dodds; as well as research assistants Maisy Lam, Waleed Akbar, and Yibo Cheng; and on the second paper by lead author and former postdoc Kaichen Zhou; Dodds; and research assistant Sayed Saad Afzal. Both papers will be presented at the IEEE Conference on Computer Vision and Pattern Recognition.

Surmounting specularity

The Adib Group previously demonstrated the use of millimeter wave (mmWave) signals to create accurate reconstructions of 3D objects that are hidden from view, like a lost wallet buried under a pile.

These waves, which are the same type of signals used in Wi-Fi, can pass through common obstructions like drywall, plastic, and cardboard, and reflect off hidden objects.

But mmWaves usually reflect in a specular manner, which means a wave reflects in a single direction after striking a surface. So large portions of the surface will reflect signals away from the mmWave sensor, making those areas effectively invisible.

“When we want to reconstruct an object, we are only able to see the top surface and we can’t see any of the bottom or sides,” Dodds explains.
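The mirror law behind specularity is simple to state: a reflected ray is the incident ray flipped about the surface normal, r = d − 2(d·n)n. A minimal sketch (toy 2D geometry, not the team's code) shows why only sensor-facing patches send energy back:

```python
import math

def reflect(d, n):
    """Mirror-law reflection r = d - 2(d.n)n for 2-D direction vectors."""
    dot = d[0] * n[0] + d[1] * n[1]
    return (d[0] - 2 * dot * n[0], d[1] - 2 * dot * n[1])

down = (0.0, -1.0)  # a wave traveling straight down from an overhead sensor

# sensor-facing surface (normal points back up): the echo returns to the sensor
print(reflect(down, (0.0, 1.0)))   # (0.0, 1.0)

# surface tilted 45 degrees: the echo goes sideways and never reaches
# the sensor, so that patch of the object is effectively invisible
s = 1 / math.sqrt(2)
print(reflect(down, (s, s)))       # ~ (1.0, 0.0)
```

The second case is exactly the "invisible sides" problem Dodds describes: any patch whose normal tilts far enough away from the sensor steers the echo elsewhere.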

The researchers previously used principles from physics to interpret reflected signals, but this approach limited the accuracy of the reconstructed 3D shape.

In the new papers, they overcame that limitation by using a generative AI model to fill in parts that are missing from a partial reconstruction.

“But the challenge then becomes: How do you train these models to fill in these gaps?” Adib says.

Usually, researchers use extremely large datasets to train a generative AI model, which is one reason models like Claude and Llama exhibit such impressive performance. But no mmWave datasets are large enough for training.

Instead, the researchers adapted the images in large computer vision datasets to mimic the properties in mmWave reflections.

“We were simulating the property of specularity and the noise we get from these reflections so we can apply existing datasets to our domain. It would have taken years for us to collect enough new data to do this,” Lam says.

The researchers embed the physics of mmWave reflections directly into these adapted data, creating a synthetic dataset they use to teach a generative AI model to perform plausible shape reconstructions.
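One way to picture this adaptation step (a hypothetical sketch, not the Wave-Former pipeline itself): take a dense optical height profile, discard the samples whose local slope is too steep to bounce a signal back toward an overhead sensor, and add measurement noise. What remains is a gappy, noisy "mmWave-like" view of the same object:

```python
import math
import random

def simulate_mmwave_view(heightmap, max_slope=0.2, noise_sd=0.005):
    """Toy specularity filter: keep only samples whose local slope is
    shallow enough to reflect an overhead signal back to the sensor,
    then perturb the survivors with Gaussian measurement noise."""
    visible = []
    for i in range(1, len(heightmap) - 1):
        slope = (heightmap[i + 1] - heightmap[i - 1]) / 2.0  # central difference
        if abs(slope) <= max_slope:  # roughly sensor-facing patch
            visible.append((i, heightmap[i] + random.gauss(0, noise_sd)))
        # steeper patches reflect away -> dropped, leaving gaps to fill in
    return visible

random.seed(0)
# a "dome" profile: the flat top survives, the steep sides vanish
profile = [math.sqrt(max(0.0, 1 - (x / 10 - 1) ** 2)) for x in range(21)]
seen = simulate_mmwave_view(profile)
print(len(seen), "of", len(profile), "samples survive the specularity mask")
```

A generative model trained on many such masked profiles, paired with the originals, learns to complete the shapes it can only partially see.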

The complete system, called Wave-Former, proposes a set of potential object surfaces based on mmWave reflections, feeds them to the generative AI model to complete the shape, and then refines the surfaces until it achieves a full reconstruction.

Wave-Former was able to generate faithful reconstructions of about 70 everyday objects, such as cans, boxes, utensils, and fruit, boosting accuracy by nearly 20 percent over state-of-the-art baselines. The objects were hidden behind or under cardboard, wood, drywall, plastic, and fabric.

Seeing “ghosts”

The team used this same approach to build an expanded system that fully reconstructs entire indoor scenes by leveraging mmWave reflections off humans moving in a room.

Human motion generates multipath reflections. Some mmWaves reflect off the human, then reflect again off a wall or object, and then arrive back at the sensor, Dodds explains.

These secondary reflections create so-called “ghost signals,” which are reflected copies of the original signal that change location as a human moves. These ghost signals are usually discarded as noise, but they also hold information about the layout of the room.

“By analyzing how these reflections change over time, we can start to get a coarse understanding of the environment around us. But trying to directly interpret these signals is going to be limited in accuracy and resolution,” Dodds says.
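The geometry of a ghost signal can be sketched with a toy mirror model (an illustration of the idea, not the RISE system): a signal that bounces off a person and then off a wall appears to come from the person's mirror image behind the wall, so a real detection and its ghost together pin down where the wall is.

```python
def ghost_across_wall(person, wall_x):
    """A signal bouncing person -> wall -> sensor looks like it came from
    the person's mirror image behind a vertical wall at x = wall_x."""
    x, y = person
    return (2 * wall_x - x, y)

def wall_from_pair(person, ghost):
    """The wall sits midway between a detection and its ghost, so tracking
    both as the person moves reveals the wall's position."""
    return (person[0] + ghost[0]) / 2

wall_x = 4.0
for person in [(1.0, 0.5), (2.0, 1.5), (3.0, 0.0)]:  # a person walking around
    g = ghost_across_wall(person, wall_x)
    print(person, "ghost:", g, "inferred wall x:", wall_from_pair(person, g))
```

The ghost tracks the person's motion, and every person/ghost pair points to the same wall, which is why these "noise" signals carry information about the room's layout.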

They used a similar training method to teach a generative AI model to interpret those coarse scene reconstructions and understand the behavior of multipath mmWave reflections. This model fills in the gaps, refining the initial reconstruction until it completes the scene.

They tested their scene reconstruction system, called RISE, using more than 100 human trajectories captured by a single mmWave radar. On average, RISE generated reconstructions that were about twice as precise as those of existing techniques.

In the future, the researchers want to improve the granularity and detail in their reconstructions. They also want to build large foundation models for wireless signals, like the foundation models GPT, Claude, and Gemini for language and vision, which could open new applications.

This work is supported, in part, by the National Science Foundation (NSF), the MIT Media Lab, and Amazon.


A better method for identifying overconfident large language models

This new metric for measuring uncertainty could flag hallucinations and help users know whether to trust an AI model.


Large language models (LLMs) can generate credible but inaccurate responses, so researchers have developed uncertainty quantification methods to check the reliability of predictions. One popular method involves submitting the same prompt multiple times to see if the model generates the same answer.

But this method measures self-confidence, and even the most impressive LLM might be confidently wrong. Overconfidence can mislead users about the accuracy of a prediction, which might result in devastating consequences in high-stakes settings like health care or finance.   

To address this shortcoming, MIT researchers introduced a new method for measuring a different type of uncertainty that more reliably identifies confident but incorrect LLM responses.

Their method involves comparing a target model’s response to responses from a group of similar LLMs. They found that measuring cross-model disagreement more accurately captures this type of uncertainty than traditional approaches.

They combined their approach with a measure of LLM self-consistency to create a total uncertainty metric, and evaluated it on 10 realistic tasks, such as question-answering and math reasoning. This total uncertainty metric consistently outperformed other measures and was better at identifying unreliable predictions.

“Self-consistency is being used in a lot of different approaches for uncertainty quantification, but if your estimate of uncertainty only relies on a single model’s outcome, it is not necessarily trustable. We went back to the beginning to understand the limitations of current approaches and used those as a starting point to design a complementary method that can empirically improve the results,” says Kimia Hamidieh, an electrical engineering and computer science (EECS) graduate student at MIT and lead author of a paper on this technique.

She is joined on the paper by Veronika Thost, a research scientist at the MIT-IBM Watson AI Lab; Walter Gerych, a former MIT postdoc who is now an assistant professor at Worcester Polytechnic Institute; Mikhail Yurochkin, a staff research scientist at the MIT-IBM Watson AI Lab; and senior author Marzyeh Ghassemi, an associate professor in EECS and a member of the Institute of Medical Engineering Sciences and the Laboratory for Information and Decision Systems.

Understanding overconfidence

Many popular methods for uncertainty quantification involve asking a model for a confidence score or testing the consistency of its responses to the same prompt. These methods estimate aleatoric uncertainty, or how internally confident a model is in its own prediction.

However, LLMs can be confident when they are completely wrong. Research has shown that epistemic uncertainty, or uncertainty about whether one is using the right model, can be a better way to assess true uncertainty when a model is overconfident.

The MIT researchers estimate epistemic uncertainty by measuring disagreement across a similar group of LLMs.    

“If I ask ChatGPT the same question multiple times and it gives me the same answer over and over again, that doesn’t mean the answer is necessarily correct. If I switch to Claude or Gemini and ask them the same question, and I get a different answer, that is going to give me a sense of the epistemic uncertainty,” Hamidieh explains.

Epistemic uncertainty attempts to capture how far a target model diverges from the ideal model for that task. But since it is impossible to build an ideal model, researchers use surrogates or approximations that often rely on faulty assumptions.

To improve uncertainty quantification, the MIT researchers needed a more accurate way to estimate epistemic uncertainty.

An ensemble approach

The method they developed involves measuring the divergence between the target model and a small ensemble of models with similar size and architecture. They found that comparing semantic similarity, or how closely the meanings of the responses match, could provide a better estimate of epistemic uncertainty.

To achieve the most accurate estimate, the researchers needed a set of LLMs that covered diverse responses, weren’t too similar to the target model, and were weighted based on credibility.

“We found that the easiest way to satisfy all these properties is to take models that are trained by different companies. We tried many different approaches that were more complex, but this very simple approach ended up working best,” Hamidieh says.

Once they had developed this method for estimating epistemic uncertainty, they combined it with a standard approach that measures aleatoric uncertainty. This total uncertainty metric (TU) offered the most accurate reflection of whether a model’s confidence level is trustworthy.

“Uncertainty depends on the uncertainty of the given prompt as well as how close our model is to the optimal model. This is why summing up these two uncertainty metrics is going to give us the best estimate,” Hamidieh says.
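The two-part recipe can be sketched in a few lines (a toy version with hypothetical details; the paper compares responses by semantic similarity, for which exact string match is a crude stand-in here):

```python
def disagreement(answer, others, same):
    """Fraction of `others` whose answer differs from `answer`."""
    return sum(0 if same(answer, o) else 1 for o in others) / len(others)

def total_uncertainty(target_samples, ensemble_answers, same):
    """Toy total-uncertainty metric: aleatoric uncertainty from the target
    model's self-inconsistency across repeated samples, plus epistemic
    uncertainty from disagreement with other models."""
    answer = target_samples[0]
    aleatoric = disagreement(answer, target_samples[1:], same)
    epistemic = disagreement(answer, ensemble_answers, same)
    return aleatoric + epistemic

# exact match stands in for the semantic comparison the paper uses
same = lambda a, b: a.strip().lower() == b.strip().lower()

# a confidently wrong model: perfectly self-consistent (aleatoric = 0),
# yet two of three other models disagree (epistemic = 2/3) -- exactly the
# overconfident case that self-consistency alone would miss
samples = ["Paris"] * 5
ensemble = ["Lyon", "Lyon", "Paris"]
print(total_uncertainty(samples, ensemble, same))  # 0.666...
```

Self-consistency alone would score this prediction as fully reliable; the cross-model term is what flags it.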

TU could more effectively identify situations where an LLM is hallucinating, since epistemic uncertainty can flag confidently wrong outputs that aleatoric uncertainty might miss. It could also enable researchers to reinforce an LLM’s confidently correct answers during training, which may improve performance.

They tested TU using multiple LLMs on 10 common tasks, such as question-answering, summarization, translation, and math reasoning. Their method more effectively identified unreliable predictions than either measure on its own.

Measuring total uncertainty often required fewer queries than calculating aleatoric uncertainty, which could reduce computational costs and save energy.

Their experiments also revealed that epistemic uncertainty is most effective on tasks with a unique correct answer, like factual question-answering, but may underperform on more open-ended tasks.

In the future, the researchers could adapt their technique to improve its performance on open-ended queries. They may also build on this work by exploring other forms of aleatoric uncertainty.

This work is funded, in part, by the MIT-IBM Watson AI Lab.


New model predicts how mosquitoes will fly

Their flight patterns change in response to different sensory cues, a new study finds. The work could lead to more effective traps and mosquito control strategies.


A mosquito finds its target with the help of certain cues in its environment, such as a person’s silhouette and the carbon dioxide they exhale.

Now researchers at MIT and Georgia Tech have found that these visual and chemical cues help determine the insects’ flight paths. The team has developed the first three-dimensional model of mosquito flight, based on experiments with mosquitoes flying in the presence of different sensory cues.

Their model, reported today in the journal Science Advances, identifies three flight patterns that mosquitoes exhibit in response to sensory stimuli.

When they can only see a potential target, mosquitoes take a “fly-by” approach, quickly diving in toward the target, then flying back out if they do not detect any other host-confirming cues.

When they can’t see a target but can smell a chemical cue such as carbon dioxide, mosquitoes will do “double-takes,” slowing down and flitting back and forth to keep close to the source.

Interestingly, when mosquitoes receive both visual and chemical cues, such as seeing a silhouette and smelling carbon dioxide, they switch to an “orbiting” pattern, flying around a target at a steady speed as they prepare to land, much like a shark circling its prey.

The researchers say the new model can be used to predict how mosquitoes will fly in response to other cues, such as heat, humidity, and certain odors. Such predictions could help to design more effective traps and mosquito control strategies.

“Our work suggests that mosquito traps need specifically calibrated, multisensory lures to keep mosquitoes engaged long enough to be captured,” says study author Jörn Dunkel, MathWorks Professor of Mathematics at MIT. “We hope this establishes a new paradigm for studying pest behavior by using 3D tracking and data-driven modeling to decode their movement and solve major public health challenges.”

The study’s MIT co-authors are Chenyi Fei, a postdoc in MIT’s Department of Mathematics, and Alexander Cohen PhD ’26, a recent MIT chemical engineering PhD student advised by Dunkel and Professor Martin Bazant, along with Christopher Zuo, Soohwan Kim, and David L. Hu ’01, PhD ’06 of Georgia Tech, and Ring Carde of the University of California at Riverside.

Flight by numbers

Mosquitoes are considered to be the most dangerous animals in the world, given their collective impact on human health. The blood-sucking insects transmit malaria, dengue fever, West Nile virus, and other deadly diseases that together cause over 770,000 deaths each year.

Of the 3,500 known species of mosquitoes, around 100 have evolved to specifically target humans, including Aedes aegypti, a species that uses a variety of cues to seek out human hosts. Scientists have studied how certain cues attract mosquitoes, mainly by setting up experiments in wind tunnels, where they can waft cues such as carbon dioxide and study how mosquitoes respond. Such experiments have mainly recorded data such as where and when the insects land. The researchers say no study has explored how mosquitoes fly as they hunt for a host.

“The big question was: How do mosquitoes find a human target?” says Fei. “There were previous experimental studies on what kind of cues might be important. But nothing has been especially quantitative.”

At MIT, Dunkel’s group develops mathematical models to describe and predict the behavior of complex living systems, such as how worms untangle, how starfish embryos develop and swim, and how microbes evolve their community structure over time.

Dunkel looked to apply similar quantitative techniques to predict flight patterns of mosquitoes after giving a talk at Georgia Tech. David Hu, a former MIT graduate student who is now a professor of mechanical engineering at Georgia Tech, proposed a collaboration; Hu’s lab was carrying out experiments with mosquitoes at a facility at the Centers for Disease Control and Prevention in Atlanta, where they were studying the insects’ behavior in response to sensory cues. Could Dunkel’s group use the collected data to identify significant flight behavior that could ultimately help scientists control mosquito populations?

“One of the original motivations was designing better traps for mosquitoes,” says Cohen. “Figuring out how they fly around a human gives insights on how we can avoid them.”

Taking cues

For their new study, Hu and his colleagues at Georgia Tech carried out experiments with 50 to 100 mosquitoes of the Aedes aegypti species. The insects flew around inside a long, white, slightly angled rectangular room while cameras positioned around the room captured a detailed three-dimensional trajectory for each one. In the center of the room, the researchers placed an object representing a particular visual or chemical cue.

In some trials, they placed a black Styrofoam sphere on a stand to represent a simple visual cue. (Mosquitoes would be able to see the black sphere against the room’s white background). In other trials, they set up a white sphere with a tube running through to pump out carbon dioxide at rates similar to what humans breathe out. These trials represented the presence of a chemical cue, but not a visual cue.

The researchers also studied the mosquitoes’ response to both visual and chemical cues, using a black sphere that emitted carbon dioxide. Finally, they observed how mosquitoes behaved around a human volunteer who wore protective clothing that was black on one side and white on the other.

Across 20 experiments, the team generated more than 53 million data points and over 477,220 mosquito flight paths. Hu shared the data with Dunkel, whose group used the measurements to develop a model for mosquito flight behavior.

“We are proposing a very broad range of dynamical equations, and when you start out, the equation to predict a mosquito’s flight path is very complicated, with a lot of terms, including the relative importance of a visual versus a chemical cue,” Dunkel explains. “Then through iteration against data, we reduce the complexity of that equation until we get the simplest model that still agrees with the data.”

In the end, the group whittled the equations down to a simple model that accurately predicts how a mosquito will fly, given the presence of a visual cue, a chemical cue, or both. The flight paths in response to one or the other cue are markedly different. And interestingly, when both cues are present, the researchers noted that the resulting path is not “additive.” In other words, a mosquito does not simply combine the paths that it would separately take when it can both see and smell a target. Instead, the insects take a distinct path, circling, rather than diving or darting around their target.
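The fit-then-prune loop Dunkel describes can be illustrated on made-up dynamics (a toy sketch, not the paper's equations): simulate data from a known simple law, fit a broad library of candidate terms by least squares, then drop the negligible ones and refit.

```python
import numpy as np

# toy "flight speed" data from true dynamics dv/dt = -0.8 v + 1.2 cue
dt, steps = 0.01, 2000
t = dt * np.arange(steps)
cue = 1.5 + np.sin(0.5 * t)              # a time-varying chemical cue
v = np.zeros(steps)
for k in range(steps - 1):               # forward-Euler simulation
    v[k + 1] = v[k] + dt * (-0.8 * v[k] + 1.2 * cue[k])

dv = np.gradient(v, dt)                  # numerical derivative of the data
names = ["v", "cue", "v^2", "v*cue", "1"]
library = np.column_stack([v, cue, v * v, v * cue, np.ones(steps)])

coef, *_ = np.linalg.lstsq(library, dv, rcond=None)            # full fit
keep = np.abs(coef) > 0.05                                     # prune tiny terms
coef2, *_ = np.linalg.lstsq(library[:, keep], dv, rcond=None)  # refit survivors

for name, c in zip(np.array(names)[keep], coef2):
    print(f"{name}: {c:+.2f}")           # recovers roughly -0.80 v and +1.20 cue
```

Iterating the prune-and-refit step strips away the spurious candidate terms until only the simplest model consistent with the data remains, which is the spirit of the reduction the team performed on the mosquito trajectories.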

“Our work suggests that mosquito traps need specifically calibrated ‘multisensory’ lures to keep mosquitoes engaged long enough to be captured,” Dunkel says.

“Obviously there are additional cues that humans emit, like odor, heat, and humidity,” Cohen notes. “For the species we study, visual and carbon dioxide cues are the most important. But we can apply this model to study different species and how they respond to other sensory cues.”

The researchers have developed an interactive app that incorporates the new mosquito flight model. Users can experiment with different objects and set parameters such as the number of mosquitoes around the object and the type of sensory cue that is present. The model then visualizes how the mosquitoes would fly in response.

“The original hope was to have a quantitative model that can simulate mosquito behavior around various trap designs,” Cohen says. “Now that we have a model, we can start to design more intelligent traps.”

This work was supported, in part, by the National Science Foundation, Schmidt Sciences, LLC, the NDSEG Fellowship Program, and the MIT MathWorks Professorship Fund. 


Brain circuit needed to incorporate new information may be linked to schizophrenia

Impairments of this circuit may help to explain why some people with schizophrenia lose touch with reality.


One of the symptoms of schizophrenia is difficulty incorporating new information about the world. This can lead people with schizophrenia to struggle with making decisions and, eventually, to lose touch with reality.

MIT neuroscientists have now identified a gene mutation that appears to give rise to this type of difficulty. In a study of mice, the researchers found that the mutated gene impairs the function of a brain circuit that is responsible for updating beliefs based on new input.

This mutation, in a gene called grin2a, was originally identified in a large-scale screen of patients with schizophrenia. The new study suggests that drugs targeting this brain circuit could help with some of the cognitive impairments seen in people with schizophrenia.

“If this circuit doesn’t work well, you cannot quickly integrate information,” says Guoping Feng, the James W. and Patricia T. Poitras Professor in Brain and Cognitive Sciences at MIT, a member of the Broad Institute of Harvard and MIT, and the associate director of the McGovern Institute for Brain Research at MIT. “We are quite confident this circuit is one of the mechanisms that contributes to the cognitive impairment that is a major part of the pathology of schizophrenia.”

Feng and Michael Halassa, a professor of psychiatry and neuroscience and director of translational research at Tufts University School of Medicine, are the senior authors of the new study, which appears today in Nature Neuroscience. Tingting Zhou, a research scientist at the McGovern Institute, and Yi-Yun Ho, a former MIT postdoc, are the lead authors of the paper.

Adapting to new information

Schizophrenia is known to have a strong genetic component. For the general population, the risk of developing the disease is about 1 percent, but that goes up to 10 percent for those who have a parent or sibling with the disease, and 50 percent for people who have an identical twin with the disease.

Researchers at the Stanley Center for Psychiatric Research at the Broad Institute have identified more than 100 gene variants linked to schizophrenia, using genome-wide association studies. However, many of those variants are located in non-coding regions of the genome, making it difficult to figure out how they might influence development of the disease.

More recently, researchers at the Stanley Center used a different strategy, known as whole-exome sequencing, to reveal gene mutations linked to schizophrenia. This technique sequences only the protein-coding regions of the genome, so it can reveal mutations that are located in known genes.

Using this approach on about 25,000 sequences from people with schizophrenia and 100,000 sequences from control subjects, the researchers identified 10 genes in which mutations significantly increase the risk of developing schizophrenia.

In the new Nature Neuroscience study, Feng and his students created a mouse model with a mutation in one of those genes, grin2a. This gene encodes a protein that forms part of the NMDA receptor — a receptor that is activated by the neurotransmitter glutamate and is often found on the surface of neurons.

Zhou then investigated whether these mice displayed any of the characteristic behaviors seen in people with schizophrenia. These individuals show many complex symptoms, including psychoses such as hallucinations and delusions (loss of contact with reality). Those are difficult to study in mice, but it is possible to study related symptoms such as difficulty in interpreting new sensory input.

Over the past two decades, schizophrenia researchers have hypothesized that psychosis may stem from an impaired ability to update beliefs based on new information.

“Our brain can form a prior belief of reality, and when sensory input comes into the brain, a neurotypical brain can use this new input to update the prior belief. This allows us to generate a new belief that’s close to what the reality is,” Zhou says. “What happens in schizophrenia patients is that they weigh too heavily on the prior belief. They don’t use as much current input to update what they believed before, so the new belief is detached from reality.”

To study this, Zhou designed an experiment that required mice to choose between two levers to press to earn a food reward. One lever was low-reward — mice had to push it six times to get one drop of milk. A high-reward lever dispensed three drops per push.

At the beginning of the study, all of the mice learned to prefer the high-reward lever. However, as the experiment went on, the number of presses required to dispense the higher reward gradually went up, while there were no changes to the low-reward lever.

As the effort required went up, healthy mice started to switch back and forth between the two levers. Once they had to press the high-reward lever around 18 times for three drops of milk, making the effort per drop about the same for each lever, they eventually switched permanently to the low-reward lever. However, mice with a mutation in grin2a showed a different behavior pattern. They spent more time switching back and forth between the two levers, and they made the switch to the low-reward side much later.

“We find that neurotypical animals make adaptive decisions in this changing environment,” Zhou says. “They can switch from the high-reward side to the low-reward side around the equal value point, while for the animals with the mutation, the switch happens much later. Their adaptive decision-making is much slower compared to the wild-type animals.”
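The equal-value point falls out of simple arithmetic, sketched here with the numbers from the experiment (the intermediate press counts are illustrative):

```python
def presses_per_drop(presses, drops):
    """Effort per unit reward: presses required divided by drops earned."""
    return presses / drops

# the low lever always costs 6 presses per drop; the high lever starts
# cheap and its cost climbs over the session
low = presses_per_drop(6, 1)
for n in (1, 6, 12, 18, 24):
    high = presses_per_drop(n, 3)
    better = "high" if high < low else "equal" if high == low else "low"
    print(f"{n:2d} presses / 3 drops = {high:4.1f} per drop -> {better}")
```

At 18 presses for three drops, both levers cost 6 presses per drop, which is the point where well-adapted mice abandon the high-reward lever and grin2a-mutant mice lag behind.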

An impaired circuit

Using functional ultrasound imaging and electrical recordings, the researchers found that the brain region affected most by the grin2a mutation was the mediodorsal thalamus. This part of the brain connects with the prefrontal cortex to form a thalamocortical circuit that is responsible for regulating cognitive functions such as executive control and decision-making.

The researchers found that neuronal activity in the mediodorsal thalamus appears to keep track of the changes in value of the two reward options. Additionally, the mice showed different patterns of neural activity depending on which state they were in: exploring the two options, or committed to one side.

The researchers also showed that they could use optogenetics to reverse the behavioral symptoms of the mice with mutated grin2a. They engineered the neurons of the mediodorsal thalamus so that they could be activated by light, and when these neurons were activated, the mice began behaving similarly to mice without the grin2a mutation.

While only a very small percentage of schizophrenia patients have mutations in the grin2a gene, it’s possible that this circuit dysfunction is a converging mechanism of cognitive impairment for a subset of schizophrenia patients with different causes.

Targeting this circuit could offer a way to overcome some of the cognitive impairments seen in people with schizophrenia, the researchers say. To do that, they are now working on identifying targets within the circuit that could be potentially druggable.

The research was funded by the National Institute of Mental Health, the Poitras Center for Psychiatric Disorders Research at MIT, the Yang Tan Collective at MIT, the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT, the Stelling Family Research Fund at MIT, the Stanley Center for Psychiatric Research, and the Brain and Behavior Research Foundation.


Turning extreme heat into large-scale energy storage

Fourth Power, founded by Professor Asegun Henry, is developing thermal batteries for efficiently storing excess electricity from utility grids and power producers.


Thermal batteries can efficiently store energy as heat. But building them requires a carefully designed system with materials that can withstand cycles of extremely high temperatures, without succumbing to problems like corrosion, thermal expansion, and structural fatigue.

Many thermal battery systems move high-temperature gas or molten salt around through metal pipes. Fourth Power, founded by MIT Professor Asegun Henry, is turning these materials inside out, using molten metal to transport the heat, which is stored in carbon bricks.

“The idea was, instead of making the system from metal, let’s move liquid metals,” says Henry SM ’06, PhD ’09.

Henry’s approach earned him a Guinness World Record for the hottest liquid pump back in 2017 — important because when you double the absolute temperature of a material, to the point where it glows white-hot, the amount of light it emits doesn’t just double, it increases 16 times (or to the fourth power).
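The fourth-power scaling Henry references is the Stefan-Boltzmann law for blackbody radiation: emitted power grows as the fourth power of absolute temperature. A minimal sketch of that relation (illustrative physics only, not Fourth Power's engineering model):

```python
# Stefan-Boltzmann law: an ideal blackbody radiates power proportional
# to the fourth power of its absolute temperature (in kelvin).
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiated_power_per_m2(temp_kelvin: float) -> float:
    """Power radiated per square meter by an ideal blackbody."""
    return SIGMA * temp_kelvin ** 4

t = 1350.0  # an arbitrary absolute temperature, K
ratio = radiated_power_per_m2(2 * t) / radiated_power_per_m2(t)
print(ratio)  # ~16: doubling absolute temperature multiplies emission by 2**4
```

The same relation explains the company's pursuit of extreme temperatures: a hotter store radiates disproportionately more light for the thermophotovoltaic cells to harvest.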

The company is harvesting all that light with thermophotovoltaic cells, which work like solar cells to convert light into electricity. Henry and his collaborators broke another record when they demonstrated a thermophotovoltaic cell that could convert light to electricity with an efficiency above 40 percent.

Fourth Power is working to use those record-breaking innovations to provide energy for power grids, power producers, and technology companies building power-hungry infrastructure like data centers. Henry says the batteries can provide anywhere from 10 to over 100 hours of electricity at a storage cost that is significantly cheaper than lithium-ion batteries at grid scale. The company is currently cycling each section of its system through relevant operating temperatures — which are nearly half as hot as the sun — and plans to have a fully integrated demonstration unit operating later this year.

“Explaining why our system is such a huge improvement over everything else centers around power density,” explains Henry, who serves as Fourth Power’s chief technologist. “We realized if you push the temperature higher, you will transfer heat at a higher rate and shrink the system. Then everything gets cheaper. That’s why we pursue such high temperatures at Fourth Power. We operate our thermal battery between 1,900 and 2,400 degrees Celsius, which allows us to save a tremendous amount on the balance of system costs.”

A career in heat

Henry earned his master’s and PhD degrees from MIT before working in faculty positions at Georgia Tech and MIT. As a professor at both schools, his research has focused on thermal transport, storage, renewable energy, and other technologies that could lead to improvements in sustainability and decarbonization. Today, he is the George N. Hatsopoulos Professor in Thermodynamics in MIT’s Department of Mechanical Engineering.

Heat transfer systems are usually made out of metals like iron and nickel. Generally, the higher the temperature you want to reach, the more expensive the metal. Henry noticed ceramics can get much hotter than metals, but they’re not used nearly as often. He started asking why.

“The answer is often pretty straightforward: You can’t weld ceramics,” Henry says. “Ceramics aren’t ductile. They generally fail in a catastrophically brittle way, and that’s not how we like large systems to behave. But I couldn’t find many problems beyond that.”

After receiving funding from the Department of Energy and the MIT Energy Initiative, Henry spent years developing a pump made from ceramics and graphite (which is similar to a ceramic). In 2017, his pump set the record for the highest recorded operating temperature for a liquid pump, at 1,200 degrees Celsius. The pump used white-hot liquid tin as its working fluid. He chose tin because it doesn’t react with carbon, eliminating corrosion. It also has a relatively low melting point and a high boiling point, which keeps it liquid across a large temperature range.

The challenge then became designing the system.

“Typically, a mechanical engineer would come up with a design and say, ‘Give me the best materials to do this,’” Henry says. “We flipped the problem, so we were saying, ‘We know what materials will work, now we need to figure out how to make a system out of it.’”

In 2023, Henry met Arvin Ganesan, who had previously led global energy work at Apple. At first, Ganesan wasn’t interested in joining a startup — he had two young kids and wanted to prioritize his family — but he was intrigued by the potential of the technology. At their first meeting, the two connected over shared values and fatherhood, as Henry surprised Ganesan by bringing his own young children.

“I had a sense this technology had the promise to tackle the twin crises of affordability and climate change at the same time,” says Ganesan, who is now Fourth Power’s CEO. “As energy demand becomes more pronounced, we either need to deploy harder and deeper tech, which is also important, or improve existing tech. Fourth Power is trying to simplify the physics and thermodynamic principles to deliver an approach that has been very well-studied for a very long time.”

Since 2023, Fourth Power has been conducting sponsored research at the LNS Bates Research and Engineering Center to validate the durability and reliability of its components ahead of a fully integrated demonstration.

The system Fourth Power designed takes in excess electricity from sources like the grid and uses it to heat a series of 6-foot-long, 20-inch-thick graphite bricks until they reach about 2,400 degrees Celsius. At that point the system is considered fully charged.

When the customer wants the electricity back, the bricks are used to heat up liquid tin, which flows through a series of graphite pipes, pumps, and flow meters to thermophotovoltaic cells, which turn the light from the glowing hot infrastructure back into electricity.

“You can basically dip the cells into the light and get power, or you can pull them back out and shut it off,” Henry explains. “The liquid metal starts at 2,400 Celsius and then cools as it’s going through the system because it’s giving a bunch of its energy to the photovoltaic, and then it circulates back through the graphite blocks, which act as a furnace, to retrieve more heat.”

From concept to company

Later this year, Fourth Power plans to turn on a 1-megawatt-hour system in its new headquarters in Bedford, Massachusetts. A full-scale system would offer 25 megawatts of power and 250 megawatt hours of storage and take up about half a football field.

“Most technologies you’ll see in storage are around 10 megawatts an acre or less,” Henry explains. “Fourth Power is more like 100 megawatts per acre. It’s very power-dense.”

The power and storage units of Fourth Power’s system are modular, which will allow customers to start with a smaller system and add storage units to extend storage length later. The company expects to lose about 1 percent of total heat stored per day.

“Customers can buy one storage and one power module, and that’s a 10-hour battery,” Henry explains. “But if they want one power module and two storage modules, that’s a 20-hour battery. Customers can mix and match, which is really advantageous for utilities as renewables scale and storage needs change.”
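The mix-and-match arithmetic Henry describes, together with the roughly 1 percent daily heat loss cited above, can be sketched as a toy model. The per-module figures come from the article; treating the daily loss as compounding is an assumption for illustration:

```python
def battery_hours(power_modules: int, storage_modules: int) -> float:
    """Storage duration implied by the modular pairing Henry describes:
    one storage module per power module yields a 10-hour battery."""
    HOURS_PER_STORAGE_MODULE = 10
    return HOURS_PER_STORAGE_MODULE * storage_modules / power_modules

def heat_remaining(days: float, daily_loss: float = 0.01) -> float:
    """Fraction of stored heat left after `days`, assuming the quoted
    ~1 percent per-day loss compounds daily (an assumption)."""
    return (1 - daily_loss) ** days

print(battery_hours(1, 1))  # 10.0 — one power, one storage module
print(battery_hours(1, 2))  # 20.0 — add a storage module, double the hours
```

Under this toy model, a fully charged store still holds about 90 percent of its heat after 10 days, which is consistent with the company's pitch of multi-day storage.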

Down the line, the system could also be run as a power plant, converting fuel into electricity or using fuel to charge its batteries during stretches with little wind or sun. It could also be used to provide industrial heat.

But for now, Fourth Power is focused on the battery application.

“Utilities need something cheap and they need something reliable,” Henry says. “The only technology that has managed to reach at least one of those requirements is lithium ion. But the world is waiting for something that’s much cheaper than lithium ion and just as reliable, if not better. That’s what we’re focused on demonstrating to the world.”


John Ochsendorf named associate dean for research for the School of Architecture and Planning

The newly created role will shape the infrastructure needed to nurture the school’s growing research goals.


Professor John Ochsendorf, a member of the MIT faculty since 2002, is taking on a new role in support of the research efforts of faculty and students in the MIT School of Architecture and Planning (SA+P). At the start of this year, Ochsendorf was appointed to lead an initiative strengthening research strategy, support, and funding across the school.

“John is a bridge-builder by instinct and practice, and we look forward to the bridges he will build between our school and industry, our school and MIT, and between research and pedagogy in our school,” says SA+P Dean Hashim Sarkis. The appointment comes as sponsored research across SA+P continues to grow, expanding opportunities for graduate research assistantships and interdisciplinary collaboration across MIT.

Ochsendorf is the Class of 1942 Professor with dual appointments in the departments of Architecture and Civil and Environmental Engineering in the MIT School of Engineering. At the center of his work is a deep commitment to students and education through research and making. For example, in close collaboration with students and alumni, he has contributed to projects ranging from the Sean Collier Memorial on campus to a recent Martin Puryear sculpture at Storm King Art Center. Since 2022, Ochsendorf has served as the founding director of the MIT Morningside Academy for Design, where he helped establish new models for design research, interdisciplinary collaboration, and student engagement across the Institute.

Ochsendorf describes the new role as both a “challenge and an opportunity” to support the considerable and increasingly broad portfolio of research across SA+P.

“We want to understand the current landscape of our research funding and identify the challenges and inefficiencies impacting faculty,” he notes. “The ultimate goal is to grow our research capacity for a world that needs the best ideas from MIT.”

The effort is consistent with SA+P’s history of pioneering research and pedagogic exploration. The Department of Architecture was among the first in the United States to establish doctoral programs within a school of architecture, including PhDs in history, theory, and criticism and in building technology. The Department of Urban Studies and Planning is home to the largest urban planning faculty in the country and maintains a variety of research labs, while Media Arts and Sciences and the Media Lab has a broad and deep research culture. Each of the school’s departments enjoys the advantage of operating within the context of MIT’s culture of innovation and interdisciplinary study. As new faculty hires have been increasingly research-driven, the time for developing and supporting robust research portfolios is now. 

Ochsendorf and his students’ research has bridged the spectrum from humanistic research supported by organizations such as the National Endowment for the Humanities and the Graham Foundation for Advanced Studies in the Fine Arts to more scientific research supported by the National Science Foundation. In his new role, he will build on that experience to work with faculty and Institute partners to strengthen grant development, clarify research priorities, and expand research capacity across SA+P.

“I’ve always loved being at MIT because of the team spirit here,” says Ochsendorf. “We’re a place where we try to support each other, and it’s because of this environment that I am excited about this new role.”


Three anesthesia drugs all have the same effect in the brain, MIT researchers find

Discovering this common mechanism could lead to a universal anesthesia-delivery system to monitor patients more effectively.


When patients undergo general anesthesia, doctors can choose among several drugs. Although each of these drugs acts on neurons in different ways, they all lead to the same result: a disruption of the brain’s balance between stability and excitability, according to a new MIT study.

This disruption causes neural activity to become increasingly unstable, until the brain loses consciousness, the researchers found. The discovery of this common mechanism could make it easier to develop new technologies for monitoring patients while they are undergoing anesthesia.

“What’s exciting about that is the possibility of a universal anesthesia-delivery system that can measure this one signal and tell how unconscious you are, regardless of which drugs they’re using in the operating room,” says Earl Miller, the Picower Professor of Neuroscience and a member of MIT’s Picower Institute for Learning and Memory.

Miller, Emery Brown, who is the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience, and their colleagues are now working on an automated control system for delivery of anesthesia drugs, which would measure the brain’s stability using EEG and then automatically adjust the drug dose. This could help doctors ensure that patients stay unconscious throughout surgery without becoming too deeply unconscious, which can have negative side effects following the procedure.

Miller and Ila Fiete, a professor of brain and cognitive sciences, the director of the K. Lisa Yang Integrative Computational Neuroscience Center (ICoN), and a member of MIT’s McGovern Institute for Brain Research, are the senior authors of the new study, which appears today in Cell Reports. MIT graduate student Adam Eisen is the paper’s lead author.

Destabilizing the brain

Exactly how anesthesia drugs cause the brain to lose consciousness has been a longstanding question in neuroscience. In 2024, a study from Miller’s and Fiete’s labs suggested an answer for propofol: the drug works by disrupting the balance between stability and excitability in the brain.

When someone is awake, their brain is able to maintain this delicate balance, responding to sensory information or other input and then returning to a stable baseline.

“The nervous system has to operate on a knife’s edge in this narrow range of excitability,” Miller says. “It has to be excitable enough so different parts can influence one another, but if it gets too excited it goes off into chaotic activity.”

In that 2024 study, the researchers found that propofol knocks the brain out of this state, known as “dynamic stability.” As doses of the drug increased, the brain took longer and longer to return to its baseline state after responding to new input. This effect became increasingly pronounced until consciousness was lost.

For that study, the researchers devised a computational model that analyzes neural activity recorded from the brain. This technique allowed them to determine how the brain responds to perturbations such as an auditory tone or other sensory input, and how long it takes to return to its baseline stability.
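One standard way to quantify this kind of dynamic stability — a generic sketch, not the authors' actual pipeline — is to fit a linear dynamical model to the recorded activity and read recovery speed off the model's slowest-decaying mode:

```python
import numpy as np

# Sketch: fit x[t+1] ≈ A @ x[t] to multichannel activity. The largest
# eigenvalue magnitude of A sets stability: values near 1 mean
# perturbations linger, i.e. a slower return to baseline.
rng = np.random.default_rng(0)

def fit_dynamics(x: np.ndarray) -> np.ndarray:
    """Least-squares fit of A in x[t+1] ≈ A @ x[t]; x is (time, channels)."""
    past, future = x[:-1], x[1:]
    sol, *_ = np.linalg.lstsq(past, future, rcond=None)
    return sol.T

def recovery_timescale(A: np.ndarray) -> float:
    """Time steps for a perturbation to shrink by a factor of e,
    from the slowest-decaying mode of the fitted dynamics."""
    lam = np.max(np.abs(np.linalg.eigvals(A)))
    return -1.0 / np.log(lam)

def simulate(decay: float, steps: int = 2000) -> np.ndarray:
    """Toy 2-channel system whose true dynamics decay by `decay` per step."""
    A = decay * np.eye(2)
    x = np.zeros((steps, 2))
    x[0] = [1.0, -1.0]
    for t in range(steps - 1):
        x[t + 1] = A @ x[t] + 0.01 * rng.standard_normal(2)
    return x

fast = recovery_timescale(fit_dynamics(simulate(0.8)))
slow = recovery_timescale(fit_dynamics(simulate(0.98)))
print(fast < slow)  # the less stable system takes longer to return to baseline
```

In this toy setting, the "more anesthetized" system (decay closer to 1) yields a roughly tenfold longer recovery timescale, mirroring the dose-dependent slowing the researchers observed.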

In their new study, the researchers used the same technique to measure how the brain responds not only to propofol but also to two additional anesthesia drugs — ketamine and dexmedetomidine. Animals were given one of the three drugs while their brain activity, including their responses to auditory tones, was analyzed.

This study showed that the same destabilization induced by propofol also appears during administration of the other two drugs. This “universal signature” appears even though the three drugs have different molecular mechanisms: propofol binds to GABA receptors, inhibiting neurons that have those receptors; dexmedetomidine blocks the release of norepinephrine; and ketamine blocks NMDA receptors, suppressing neurons with those receptors.

Each of these pathways, the researchers hypothesize, affects the brain’s balance of stability and excitability in a different way, and each leads to an overall destabilization of that balance.

“All three of these drugs appear to do the exact same thing,” Miller says. “In fact, you could look at the destabilization measure we use and you can’t tell which drug is being applied.”

The researchers now plan to further investigate how each of these drugs may give rise to the same patterns of brain destabilization.

“The molecular mechanisms of ketamine and dexmedetomidine are a bit more involved than propofol mechanisms,” Eisen says. “A future direction is to do a meaningful model of what the biophysical effects of those are and see how that could lead to destabilization.”

Monitoring anesthesia

Now that the researchers have shown that three different anesthesia drugs produce similar destabilization patterns in the brain, they believe that measuring those patterns could offer a valuable way to monitor patients during anesthesia. While anesthesia is overall a very safe procedure, it does carry some risks, especially for very young children and for people over 65.

For adults suffering from dementia, anesthesia can make the condition worse, and it can also exacerbate neuropsychiatric disorders such as depression. These risks are higher if patients go into a deeper state of unconsciousness known as burst suppression.

To help reduce those risks, Miller and Brown, who is also an anesthesiologist at Massachusetts General Hospital, are developing a prototype device that can measure patients’ EEG readings while under anesthesia and adjust their dose accordingly. Currently, doctors monitor patients’ heart rate, blood pressure, and other vital signs during surgery, but these don’t give as accurate a reading of how deeply unconscious the patient is.

“If you can limit people’s exposure to anesthesia, if you give just enough and no more, you can reduce risks across the board,” Miller says.

Working with researchers at Brown University, the MIT team is now planning to run a small clinical trial of their monitoring device with patients undergoing surgery.

The research was funded by the U.S. Office of Naval Research, the National Institute of Mental Health, the Simons Center for the Social Brain, the Freedom Together Foundation, the Picower Institute, the National Science Foundation Computer and Information Science and Engineering Directorate, the Simons Collaboration on the Global Brain, the McGovern Institute, and the National Institutes of Health.


“We the People” depicts inventors, dreamers, and innovators in all 50 states

For the 250th anniversary of the US, Joshua Bennett’s epic poem set celebrates unexpected lives forged across the nation.


Zora Neale Hurston remains one of America’s best-known authors. Charles Henry Turner developed landmark studies about the behavior of bees and spiders. Brian Wilson founded the Beach Boys. George Nissen invented the trampoline. What do they all have in common?

Well, for one thing, they were all innovative Americans — creators and discoverers, producing work no one anticipated. For another, they are all now celebrated as such, in verse, by Joshua Bennett.

That’s right. Bennett — an MIT professor, lauded poet, and literary scholar — is marking the 250th anniversary of the founding of the U.S. with a book-length work of poetry about the country and some of its distinctive figures. In fact, 50 of them: Bennett has written a substantial work featuring remarkable people or inventions from each of the 50 states, meditating on their place in the cultural fabric of the U.S.

“There’s so much to be said for a country where you and I are possible, and the things we do are possible,” Bennett says.

The book, “We (The People of the United States),” is published today by Penguin Books. Bennett is a professor and the Distinguished Chair of the Humanities at MIT.

Bennett’s new work has some prominent Americans in it, but is no gauzy listing of familiar icons. Many of the 50 people in his book overcame hardship, poverty, rejection, or discrimination; some have already been rescued from obscurity, but others have not received proper acclaim. Few of them had a straightforward, simple connection with their times.

“It’s about feeling that you have a life in this country which is undeniably complex, but also has this remarkable beauty to it,” Bennett says of the work. “A beauty you helped to create, and that no one can take away from you.”

The figures that Bennett writes about are sources of fascination, and inspiration, demonstrating the kinds of lives it is possible to invent in the U.S.

“We’re in a moment that calls for compelling, historically grounded stories about what America is, what it has been, and what it can be,” Bennett adds. “Can we build a life-affirming vision for the future and those who will inherit it? I’m trying to. I work on it every day.”

Taking flight

“We (The People of the United States)” is inspired, in part, by Virgil’s “Georgics,” pastoral poems by the great Roman poet. Bennett encountered them while a PhD student in literature at Princeton University.

“The poet Susan Stewart, my professor at Princeton, introduced me to Virgil’s Georgics,” Bennett says. “I eventually started to think: What would it look like for me to cover Virgil?” Adding to his interest in the concept, one of his favorite poets, Gwendolyn Brooks, had spent time recasting Virgil’s ancient epic, “The Aeneid,” for her Pulitzer Prize-winning work, “Annie Allen.” She also translated the original work from Latin as a teenager. Moreover, Bennett’s writing has long engaged with the subject of people working the land in America.

“I decided to start writing all these poems about agriculture,” Bennett says. “But then I thought, this would be interesting as an epic poem about America.” As he launched the project, its focus shifted some more: “I started to think about the book as an ode to invention.”

Soon Bennett had worked out the structure. An opening section of the work is about his own family background, becoming a father, and the process of building a life here in Massachusetts.

“Where does my influence, my aspiration, end and the child begin?” Bennett writes in one poem. That section prefigures further themes in the collection about the domestic environments many of its figures emerged from. For the rest of the work, with one innovator or innovation for each of the 50 states, Bennett adopted a regular writing schedule, producing at least one new poem per week until he was finished. 

Hurston, one of several famous authors and artists featured in the book, represents Florida. From Ohio, entomologist Charles Henry Turner was the first Black person to receive a PhD from the University of Chicago, in 1907, before conducting a wide range of studies about the cognition and behavior of spiders and bees, among other things.

George Nissen, meanwhile, was a University of Iowa gymnast who built the first trampoline in the 1930s in his home state — something Bennett calls a “magical device” that brings to life “the scene in your mind of the leap/and of the leap itself, where you are airborne, illuminated/quickly immortal.” Whether these innovations emerged from rigorous academic exploration or became mass-market goods that produce flights of fancy, Bennett has a keen eye for people who break new ground and fire our own feelings of wonder.

“We actually are all bound up in it together,” Bennett says. “These different figures, from various fields, eras, and lifelong pursuits are in here together precisely because they helped weave the story of this country together. It’s a story that is still unfolding.”

Bennett is straightforward about the struggles many of his subjects faced. His choice to represent North Carolina is the poet George Moses Horton, an enslaved man who not only learned to read and write in the early 1800s — the state later made that illegal for enslaved persons, in 1830 — but made money selling poems to University of North Carolina students. Indeed, Horton’s work was published in the 1820s. Bennett writes that Horton’s public performance of his poetry was “an ancient art revived in the flesh of a prodigy in chains.”

Bennett’s unblinking regard for historical reality is a motif throughout the work. “To me it’s not only about exploring a history that a reader might feel connected to or want to learn more about,” he says. “It’s about honoring those who lived that history, who helped make some of the most beautiful parts of the present possible, through an engagement with the substance of their lives.”

Just my imagination

Many figures in “We (The People of the United States)” are artists, but of many forms. From watching VH1 as a child, Bennett got into the Beach Boys, and he devotes the California entry in the poem to them. Or as Bennett puts it, he was “newly initiated into a sound/I do not understand until I am old enough to be nostalgic/for windswept locales, and singular moments in time/I never lived through.”

Bennett was learning about the Beach Boys while growing up in Yonkers, New York, far from any California beaches. But then, Brian Wilson wasn’t a surfer either — he grew up in an industrial suburb of Los Angeles. Imagination was the coin of the realm for Wilson, something Bennett understood when Beach Boys songs would veer off in unexpected directions.

“I’ve always been drawn to moments of great surprise, or revelation, in the works of art I love,” Bennett says. “Which is part of why I’ve dedicated my life to poetry. You think one thing is happening in a poem, and suddenly that shock comes, that unexpected turn, or volta. Brian Wilson always had a great understanding of that. It works in pop music. Surprise, sometimes, is a shift in register that takes you higher.”

Various poems in the collection have down-to-earth origins. Bennett remembers his father often fixing things in the family home, from toys to the boiler, saying, “Pass me the Phillips-head,” when he needed a screwdriver. Thus Oregon appears in the book: Portland is where the Phillips-head screwdriver was invented.

In conversation, Bennett notes the hopeful disposition of his father, who after living through Jim Crow and serving in the Vietnam War, worked 10-hour shifts at the U.S. Postal Service to support his family. Even with all the difficulty he experienced in his life, Bennett’s father always encouraged his son to pursue his dreams.

“I’m grateful that I inherited a profound sense of belonging, and dignity, from my parents,” Bennett says. “There was always this feeling that we were part of a much larger story, and that we had a responsibility to tell the truth about the world as we knew it.”

And that’s really what Bennett’s new book is about.

“We can reckon with our history in its fullness and work, tirelessly, toward a world that’s worthy of the most vulnerable among us,” Bennett says. “Like Toni Morrison, we can ‘dream the world as it ought to be.’ And then make it real. That’s my vision.”


Ocean bacteria team up to break down biodegradable plastic

MIT researchers uncovered the roles of bacterial species from the environment as they consume biodegradable plastic.


Biodegradable plastics could help alleviate the plastic waste crisis that is polluting the environment and harming our health. But how long plastics take to degrade and how environmental bacteria work together to break them down is still largely unknown.

Understanding how plastics are broken down by microbes could help scientists create more sustainable materials and even new microbial recycling systems that convert plastic waste into useful materials.

Now MIT researchers have taken an important first step toward understanding how bacteria work together to break down plastic. In a new paper, the researchers uncovered the role of individual ocean bacteria in the breakdown of a widely used biodegradable plastic. They also showed the complementary processes microbes use to fully consume the plastic, with one microbe cleaving the plastic into its component chemicals and others consuming each chemical.

The researchers say it’s one of the first studies to illuminate the roles of specific bacterial species in the breakdown of plastic, and it indicates that the speed of plastic degradation can vary widely depending on a few key factors.

“There is a lot of ambiguity about how long these materials actually exist in the environment,” says lead author Marc Foster, a PhD student in the MIT-WHOI Joint Program. “This shows plastic biodegradation is highly dependent on the microbial community where the plastic ends up. It’s also dependent on the plastics — the chemistry of the polymer and how they’re made as a product. It’s important to understand these processes because we’re trying to constrain the environmental lifetime of these materials.”

Joining Foster on the paper are MIT PhD candidate Philip Wasson; former MIT postdoc Andreas Sichert; MIT undergraduate Deborah Madden; Woods Hole Oceanographic Institution researchers Matthew Hayden and Adam Subhas; Chong Becker and Sebastian Gross of the international chemical and plastic company BASF; Otto Cordero, an MIT associate professor of civil and environmental engineering; Darcy McRose, MIT’s Thomas D. and Virginia W. Cabot Career Development Professor; and Desirée Plata, MIT’s School of Engineering Distinguished Climate and Energy Professor. The paper appears in the journal Environmental Science and Technology.

Uncovering collaboration

Scientists hope biodegradable plastic can be used to address the mountains of plastic waste piling up in our oceans and landfills.

“More than half of produced plastic is either sent to landfills or directly released into the environment,” Foster says. “But without knowing the specifics of different degradation processes, we won’t be able to accurately predict the lifetime of these materials and better control that degradation.”

To date, many studies into the biodegradation of plastics have focused on single microbial organisms, but Foster says that’s not representative of how most plastics are broken down in the environment.

“It’s really rare for a single bacterium to carry out the full degradation process because it requires a significant metabolic burden to carry all of the enzymatic functions to depolymerize the polymer and then use those chemical subunits as a carbon and energy source,” Foster says.

Other studies have sought to capture the molecular footprints of groups of bacteria as they degrade plastic, which gives a snapshot of the species involved without uncovering the mechanisms of action.

For this study, the researchers wanted to uncover the roles of specific bacterial species as they fully degraded plastic. They started with a type of biodegradable plastic known as an aromatic aliphatic co-polyester. Such plastic is used in shopping bags and food packaging. It’s also often laid across the soil of farms to prevent weeds and retain moisture.

To begin the study, researchers at BASF, which produces that type of plastic, first placed samples of the product into different depths of the Mediterranean Sea to let bacteria grow as a thin biofilm around the plastic. The company then shipped the samples to researchers at MIT, who isolated as many species of bacteria as possible from the samples. The researchers mixed those isolates and identified 30 bacterial species that continued to grow in abundance on the plastic.

Using carbon dioxide as a measure of plastic degradation, the researchers isolated each bacterium and found one, Pseudomonas pachastrellae, that could depolymerize the plastic compounds, breaking them into the three chemical components of the plastic: terephthalic acid, sebacic acid, and butanediol.

But that bacterium couldn’t consume all three components on its own. One by one, the researchers exposed each bacterium to each chemical, finding no single species that could consume all three, although some could consume one or two of the chemicals on their own.

Finally, the researchers selected five bacterial species based on their complementary breakdown abilities and showed the small group exhibited the same ability to fully degrade the plastic as the 30-member bacteria community.

“I was able to minimize the degradation process to this simplistic set of specific metabolic functions,” Foster says. “And then when I took out one bacterium, the mineralization dropped, which indicated the organism was controlling the degradation of the polymer. Then when I had each one of the bacteria alone in a culture, none of them could reach the same degradation as all five together, indicating there was this complementary function required. It worked much better than I thought it would.”

The researchers also found the five-member bacterial community couldn’t mineralize a different plastic, suggesting that a given group of bacteria may be able to mineralize only specific plastics.

“It highlights that the microbes living where this plastic ends up are going to dictate the plastic’s lifetime,” Foster says.

Faster plastic degradation

Foster notes the bacteria in his study are likely specific to the Mediterranean Sea. The study also only involved bacteria that could survive in his lab environment. Still, Foster says it’s one of the first papers that identifies the roles of bacteria in consuming plastic.

“Most studies wouldn’t be able to identify the specific bacteria that’s controlling each complementary mineralization process,” Foster says. “Here we can say this bacteria controls degradation, these bacteria handle mineralization, and then we show the function of each bacteria and show that together, they can remove the entire polymer.”

Foster says the work is an important first step toward creating microbial systems that are better at breaking down plastic or converting it into something useful. In follow-up work for his PhD, he is exploring what makes successful bacterial pairs for faster plastic consumption and how enzymes dock on plastic particles to initiate and continue degradation.

The work was supported by the MIT Climate and Sustainability Consortium and BASF SE. Partial support was provided by the U.S. National Science Foundation Graduate Research Fellowship Program.


New sensor sniffs out pneumonia on a patient’s breath

The technology could enable fast, point-of-care diagnoses for pneumonia and other lung conditions.


Diagnosing some diseases could be as easy as breathing into a tube. MIT engineers have developed a test to detect disease-related compounds in a patient’s breath. The new test could provide a faster way to diagnose pneumonia and other lung conditions. Rather than sit for a chest X-ray or wait hours for a lab result, a patient may one day take a breath test and get a diagnosis within minutes.

The new breath test is a portable, chip-scale sensor that traps and detects synthetic compounds, or “biomarkers,” of disease, which are initially attached to inhalable nanoparticles. The biomarkers serve as tiny tags that can only be unlocked and detached from the nanoparticle by a very particular key, such as a disease-related enzyme.

The idea is that a person would first breathe in the nanoparticles, similar to inhaling asthma medicine. If the person is healthy, the nanoparticles would eventually circulate out of the body intact. If a disease such as pneumonia is present, however, enzymes produced as a result of the infection would snip off the nanoparticles’ biomarkers. These untethered biomarkers would be exhaled and measured, confirming the presence of the disease.

Until now, detecting such exhaled biomarkers required laboratory-grade instruments that are not available in most doctors’ offices. The MIT team has now shown they can detect exhaled biomarkers of pneumonia at extremely low concentrations using the new portable, chip-scale breath test, which they’ve dubbed “PlasmoSniff.”

They plan to incorporate the new sensor into a handheld instrument that could be used in clinical or at-home settings to quickly diagnose pneumonia and other diseases.

“In practice, we envision that a patient would inhale nanoparticles and, within about 10 minutes, exhale a synthetic biomarker that reports on lung status,” says Aditya Garg, a postdoc in MIT’s Department of Mechanical Engineering. “Our new PlasmoSniff technology would enable detection of these exhaled biomarkers within minutes at the point of care.”

Garg is the first author of a study that details the team’s new sensor design. The study appears online in the journal Nano Letters. MIT co-authors include Marissa Morales, Aashini Shah, Daniel Kim, Ming Lei, Jia Dong, Seleem Badawy, Sahil Patel, Sangeeta Bhatia, and Loza Tadesse.

Tailored tags

PlasmoSniff is a project led by Loza Tadesse, an assistant professor of mechanical engineering at MIT. Tadesse’s group builds diagnostic devices that can be used directly in doctors’ offices and other point-of-care settings. Her work specializes in spectroscopy, using light to identify the characteristic fingerprints of a chemical or molecule.

Several years ago, Tadesse teamed up with Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and of Electrical Engineering and Computer Science at MIT. Bhatia’s group focuses in part on developing nanoparticle sensors — tiny particles that can be tagged with a synthetic biomarker. Bhatia can tailor these biomarkers to cleave from their nanoparticle only in the presence of specific “protease” enzymes that are produced by certain diseases.

In work that was reported in 2020, Bhatia’s group demonstrated they could detect cleaved biomarkers of pneumonia from the breath of infected mice. The biomarkers were exhaled at extremely low concentrations, of about 10 parts per billion. Nevertheless, the researchers were able to detect the compounds using mass spectrometry — a technology that is highly sensitive but requires bulky and expensive instrumentation that is not widely available in clinical settings.

“We thought, ‘How can we achieve that same sensitivity, in a way that’s accessible, at the point of need, and in a chip format that can be scalable in terms of cost?’” Tadesse says. 

A fingerprint trap

For their new study, Tadesse’s group looked to design a sensitive, portable breath test to quickly detect Bhatia’s biomarkers. Their new design centers on “plasmonics” — the study and manipulation of light and how it interacts with matter at the nanoscale.

The researchers noted that molecules exhibit characteristic vibrational modes, corresponding to the motions of atoms within their chemical bonds. These vibrations can be detected using Raman spectroscopy, an optical technique in which molecules are illuminated with light. A small fraction of the scattered light shifts in energy due to interactions with a molecule’s vibrations. By measuring these energy shifts, researchers can identify molecules based on their distinctive vibrational fingerprints.
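The energy shifts described above are conventionally reported as a Raman shift in wavenumbers (cm⁻¹), computed from the excitation and scattered wavelengths. A minimal calculation is sketched below; the 532 nm laser wavelength is an assumed example value for illustration, not a detail from the study:

```python
def raman_shift_cm1(lambda_excitation_nm: float, lambda_scattered_nm: float) -> float:
    """Raman shift in wavenumbers: delta_nu = 1/lambda_exc - 1/lambda_scat.
    Wavelengths are in nm; 1/lambda[nm] * 1e7 converts to cm^-1."""
    return (1.0 / lambda_excitation_nm - 1.0 / lambda_scattered_nm) * 1e7

# Example: 532 nm excitation with Stokes-shifted scattered light at 563 nm
# corresponds to a vibrational mode near 1035 cm^-1.
shift = raman_shift_cm1(532.0, 563.0)
print(round(shift))  # → 1035
```

Each vibrational mode of a molecule produces a peak at a characteristic shift, and the set of peak positions forms the “fingerprint” used for identification.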

To detect Bhatia’s biomarkers, however, they would need to isolate those comparatively few molecules from the dense cloud of other exhaled molecules. They would also need to boost the biomarkers’ vibrational signal, as the Raman signal scattered by an individual molecule is inherently extremely weak.

“This is a needle-in-a-haystack problem,” Tadesse says. “Our method detects that needle that would otherwise be embedded in the noise.”

The team’s new sensor is designed to trap target biomarkers and boost their vibrational signal. The core of the sensor is made from a thin gold film, above which the researchers suspended a layer of gold nanoparticles. The gold nanoparticles are coated with a porous silica shell, generating a 5-nanometer-wide gap between the gold nanoparticles and the gold film. The silica is modified to strongly bond with molecules of water. The hydrogen in water can in turn bond with the target biomarkers. If any biomarkers pass through the sensor’s gap, they stick to the water molecules like Velcro.

The sensor’s gap is engineered to strongly amplify light due to plasmonic resonance, where electrons in the nearby gold structures collectively oscillate in response to incoming light, concentrating the electromagnetic field into the gap. Biomarkers trapped in these gaps experience a greatly enhanced electromagnetic field, which amplifies their Raman scattering signal. The researchers can then measure the Raman scattered light, and compare the pattern to the biomarker’s known “fingerprint,” to confirm its presence.
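Comparing a measured spectrum against a known fingerprint can be done in several ways; a simple sketch uses cosine similarity between intensity vectors sampled at the same Raman-shift bins. The spectra and threshold below are made-up toy values, not the team's actual matching procedure:

```python
from math import sqrt

def cosine_similarity(a: list, b: list) -> float:
    """Cosine similarity between two equal-length spectra (intensity vectors)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy spectra: intensities at a few Raman-shift bins (illustrative numbers).
reference  = [0.1, 0.9, 0.2, 0.7, 0.1]     # known biomarker fingerprint
measured   = [0.12, 0.85, 0.25, 0.65, 0.15]  # sensor reading, close match
background = [0.5, 0.1, 0.6, 0.1, 0.5]     # dissimilar ambient spectrum

print(cosine_similarity(reference, measured) > 0.95)   # → True
print(cosine_similarity(reference, background) > 0.95)  # → False
```

A high similarity to the reference fingerprint, together with a sufficiently strong signal, would indicate the biomarker's presence.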

The team worked with Daniel Kim, a graduate student in Bhatia’s lab, and tested the sensor’s performance on samples of lung fluid that they obtained from healthy mice. They spiked these samples with biomarkers of pneumonia that Bhatia’s group previously designed. They then placed the spiked fluid in a vial and heated it to evaporate the fluid, to simulate exhaled breath. They placed the new sensor on the underside of the vial’s cap and used a Raman spectrometer to measure the scattered light as the fluid vapor passed through the sensor.

Through these experiments, they showed the sensor quickly detected biomarkers of pneumonia at extremely low, clinically relevant concentrations.

“Our next goal is to have a breath collection system, like a mask you can breathe into,” Garg says. “A patient would first use something like an asthma inhaler to inhale the nanoparticles. They could then breathe through the mask sensor for five minutes. We could then integrate a handheld Raman spectrometer to detect whatever biomarker is breathed out, within minutes.”

Breath tests for disease, sometimes referred to as disease breathalyzers, are an emerging technology. Most designs are still in the experimental stage, and take different approaches to detect various conditions such as certain cancers, intestinal infections, and viruses such as Covid-19. The MIT team notes that its design can be used to detect diseases beyond pneumonia, as well as biomarkers that are not related to disease, as long as the biomarker of interest has a known vibrational “fingerprint.”

“It’s not just limited to these biomarkers or even diagnostic applications,” Tadesse says. “It can sniff out industrial chemicals or airborne pollutants as well. If a molecule can form hydrogen bonds with water, we can use its vibrational fingerprint to detect it. It’s a pretty universal platform.”

This work was supported, in part, by funding from Open Philanthropy (now Coefficient Giving). Several characterization and fabrication steps were conducted at MIT.nano.


From Idaho to MIT, on a quest to cut methane emissions

PhD student Audrey Parker studies methane mitigation strategies in dairy farms and coal mines, to reduce emissions of the potent greenhouse gas.


Amid the hum of milking equipment and the shuffle of cow hooves, PhD student Audrey Parker and her collaborators pull a wagon along a dusty path through a dairy barn, measuring an invisible greenhouse gas drifting through the air. Most engineering students wouldn’t expect their graduate research to take them to a dairy farm, but for Parker, this is where some of the most impactful climate solutions are hiding in plain sight.

The scene was part of the civil and environmental engineering student’s PhD work exploring advanced yet practical technologies to mitigate methane emissions. Methane is far more effective at trapping heat in the atmosphere than carbon dioxide. Dairy farms are a major source of methane, and Parker’s wagon carried sensors to measure methane concentrations.

Now in her fourth year in the lab of Professor Desirée Plata, Parker looks forward to visiting such farms. When she’s not taking measurements, she can look across the rolling fields and think of home.

Parker grew up in Boise, Idaho. Her childhood was filled with backpacking trips, skiing, horseback riding, and otherwise enjoying what her natural surroundings had to offer.

“Growing up, we were always outside,” she says. “I knew how to cast a fly rod before I knew how to ride a bike.”

That experience motivated Parker to pursue studies related to preserving the environment she loved. She attended Boise State University as an undergraduate, where she studied sustainable materials development under the mentorship of Assistant Dean Paul Davis. In the summer before her senior year, she was accepted to the MIT Summer Research Program (MSRP), which equips students for graduate school by bringing them to MIT to conduct cutting-edge research. That’s where she began working with Plata, MIT’s Distinguished Climate and Energy Professor.

“They do a great job bringing in people of different backgrounds,” Parker says. “It wasn’t until I started working with Desirée that I started applying materials science as a tool to reduce greenhouse gas emissions. That was a profound insight.”

Parker graduated from Boise State University as a Top Ten Scholar, the highest academic honor granted to graduating seniors, before driving across the country to begin her studies at MIT. She decided to devote her PhD to exploring methane mitigation strategies, building on her experience from MSRP.

Her focus is on methane emissions from two sources: air being vented from coal mines, and dairy farms. Those two areas alone account for a large portion of human-driven methane emissions. Both sources are dilute compared to the average oil or gas well, which makes the methane challenging to capture and convert into less environmentally harmful molecules.

Parker also wanted to work with community members in the field during her PhD to ensure whatever technical solutions she developed are practical enough to implement at scale.

“Desirée’s approach is to make sure industry is aware of affordable and sustainable ways to remove methane from their operations, while also incorporating the nuanced expertise stakeholders offer,” Parker says. “I appreciate that she is focused on not just doing work for the chapter of a PhD thesis, but also making our work lead to real-world change.”

Parker’s research explores both quantifying methane at emission sources and designing technologies that could be used to convert methane into carbon dioxide, a molecule with significantly lower global warming potential.

“Methane naturally converts into carbon dioxide over the course of about 12 years in the atmosphere,” Parker explains. “The technology we work on simply speeds up this natural process to achieve near-term climate benefits.”
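As a rough illustration of the roughly 12-year lifetime cited above, atmospheric methane removal is commonly approximated as first-order (exponential) decay; treating 12 years as the e-folding time is a simplification for this sketch:

```python
from math import exp

METHANE_LIFETIME_YEARS = 12.0  # approximate atmospheric e-folding time

def fraction_remaining(years: float) -> float:
    """Fraction of an initial methane pulse still in the atmosphere after
    `years`, under simple first-order (exponential) decay."""
    return exp(-years / METHANE_LIFETIME_YEARS)

# After one lifetime (~12 years), about 37% of a methane pulse remains;
# the rest has oxidized, ultimately ending up as carbon dioxide.
print(round(fraction_remaining(12.0), 2))  # → 0.37
```

Catalytic conversion shortcuts this slow natural oxidation, which is why it delivers near-term rather than decades-delayed climate benefits.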

The main technology Parker studies is a catalyst made from zeolites, abundant and inexpensive minerals with complex, honeycomb-like internal structures. Parker dopes the zeolites with copper and explores ways to apply external heat to facilitate complete methane conversion.

Parker and her collaborators assess the durability of the material and its performance under different conditions. Recognizing that real-world deployment environments can often be difficult to replicate in the lab, they test catalyst performance in operating dairy farms. In a 2025 paper, she analyzed the use of thermal energy to sustain methane combustion in catalyst materials, detailing when the approach actually brings net-climate benefits.

“If your methane concentrations are low and you’re having to provide so much energy into your system, you could become climate-harmful, but there’s also a context where it’s beneficial,” Parker explains. “Understanding where that trade-off occurs is critical to making sure your mitigation technologies are having the benefits you’re anticipating.”

That kind of systems-level thinking is necessary to understand the long-term impacts of interconnected climate systems.

“It lays a framework that other people can use for their mitigation technologies,” Parker says. “There are trade-offs with every technology, and being transparent about that is important. I think as academics it’s easy to get tunnel vision based on our research. There’s such limited funding for mitigation technologies overall and so making sure those few funding dollars are allocated appropriately is critical for achieving our climate goals.”

Some of Parker’s research findings have informed the design of a pilot-scale methane mitigation system in a coal mine, although she hasn’t gotten a chance to visit it just yet.

Outside of her research, Parker co-chairs the MIT Congressional Visit Days, a program run by the Science Policy Initiative that sends MIT students to Washington to meet with lawmakers and advocate for science-based policies.

“On-the-Hill advocacy teaches you about the policy landscape in unparalleled ways,” Parker says. “Those conversations you have with lawmakers can drive transformational change to bridge the gap between science and policy. It is our job as scientists to communicate our findings clearly so policymakers can design regulations that enable effective solutions.”

This spring, Parker is also leading a workshop for the MIT Climate and Sustainability Consortium around financing the voluntary carbon market. Here, she plans to leverage industry insights to catalyze private capital at the scale needed to meet our climate goals.

Parker also still gets plenty of outdoor time, hiking outside Boston and skiing a bit, though she says the New England ski mountains don’t compare to those out west.

Parker, who expects to complete her PhD next year, says it’s gratifying to be able to devote her research to protecting the environment she loves so much.

“For me it’s about preserving the world I grew up in,” Parker says. “Especially in Idaho, where communities are experiencing more frequent wildfires and more intense droughts. As a child, the natural world provided so much wonder. Today, that same sense of wonder is what drives me to protect it.”


How the brain handles the “cocktail party problem”

Using a computational model, neuroscientists showed how the brain can selectively focus attention on one voice among others in a noisy environment.


MIT neuroscientists have figured out how the brain is able to focus on a single voice among a cacophony of many voices, shedding light on a longstanding neuroscientific phenomenon known as the cocktail party problem.

This attentional focus becomes necessary when you’re in any crowded environment, such as a cocktail party, with many conversations going on at once. Somehow, your brain is able to follow the voice of the person you’re talking to, despite all the other voices that you’re hearing in the background.

Using a computational model of the auditory system, the MIT team found that amplifying the activity of the neural processing units that respond to features of a target voice, such as its pitch, allows that voice to be boosted to the forefront of attention.

“That simple motif is enough to cause much of the phenotype of human auditory attention to emerge, and the model ends up reproducing a very wide range of human attentional behaviors for sound,” says Josh McDermott, a professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines, and the senior author of the study.

The findings are consistent with previous studies showing that when people or animals focus on a specific auditory input, neurons in the auditory cortex that respond to features of the target stimulus amplify their activity. This is the first study to show that this extra boost is enough to explain how the brain solves the cocktail party problem.

Ian Griffith, a graduate student in the Harvard Program in Speech and Hearing Biosciences and Technology, who is advised by McDermott, is the lead author of the paper. MIT graduate student R. Preston Hess is also an author of the paper, which appears today in Nature Human Behaviour.

Modeling attention

Neuroscientists have been studying the phenomenon of selective attention for decades. Many studies in people and animals have shown that when focusing on a particular stimulus like the sound of someone’s voice, neurons that are tuned to features of that voice — for example, high pitch — amplify their activity.

When this amplification occurs, neurons’ firing rates are scaled upward, as though multiplied by a number greater than one. It has been proposed that these “multiplicative gains” allow the brain to focus its attention on certain stimuli. Neurons that aren’t tuned to the target feature exhibit a corresponding reduction in activity.

“The responses of neurons tuned to features that are in the target of attention get scaled up,” Griffith says. “Those effects have been known for a very long time, but what’s been unclear is whether that effect is sufficient to explain what happens when you’re trying to pay attention to a voice or selectively attend to one object.”

This question has remained unanswered because computational models of perception haven’t been able to perform attentional tasks such as picking one voice out of many. Such models can readily perform auditory tasks when there is an unambiguous target sound to identify, but they haven’t been able to perform those tasks when other stimuli are competing for their attention.

“None of our models has had the ability that humans have, to be cued to a particular object or a particular sound and then to base their response on that object or that sound. That’s been a real limitation,” McDermott says.

In this study, the MIT team wanted to see if they could train models to perform those types of tasks by enabling the model to produce neuronal activity boosts like those seen in the human brain.

To do that, they began with a neural network that they and other researchers have used to model audition, and then modified the model to allow each of its stages to implement multiplicative gains. Under this architecture, the activation of processing units within the model can be boosted up or down depending on the specific features they represent, such as pitch.

To train the model, on each trial the researchers first fed it a “cue”: an audio clip of the voice that they wanted the model to pay attention to. The unit activations produced by the cue then determined the multiplicative gains that were applied when the model heard a subsequent stimulus.

“Imagine the cue is an excerpt of a voice that has a low pitch. Then, the units in the model that represent low pitch would get multiplied by a large gain, whereas the units that represent high pitch would get attenuated,” Griffith says.

Then, the model was given clips featuring a mix of voices, including the target voice, and asked to identify the second word said by the target voice. The model activations to this mixture were multiplied by the gains that resulted from the previous cue stimulus. This was expected to cause the target voice to be “amplified” within the model, but it was not clear whether this effect would be enough to yield human-like attentional behavior.
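The cue-then-gain scheme described above can be sketched in a few lines. This is a toy illustration of multiplicative gain modulation with made-up tuning curves, gain values, and pitches, not the authors' actual network:

```python
import math

# Toy multiplicative-gain attention: four units tuned to different pitches.
unit_pitches = [100.0, 200.0, 300.0, 400.0]  # preferred pitch (Hz) of each unit

def unit_response(unit_pitch: float, sound_pitch: float, width: float = 80.0) -> float:
    """Bell-shaped tuning curve: strongest response near the preferred pitch."""
    return math.exp(-((sound_pitch - unit_pitch) / width) ** 2)

# 1) Cue: an excerpt of the low-pitched target voice (120 Hz).
cue_activations = [unit_response(p, 120.0) for p in unit_pitches]

# 2) Gains derived from the cue: boost strongly cue-driven units, attenuate the rest.
gains = [2.0 if a > 0.5 else 0.5 for a in cue_activations]

# 3) Mixture of target (120 Hz) and distractor (350 Hz) voices; responses add.
mixture = [unit_response(p, 120.0) + unit_response(p, 350.0) for p in unit_pitches]
attended = [g * m for g, m in zip(gains, mixture)]

# After gain modulation, the low-pitch (target-tuned) unit dominates the
# representation, effectively bringing the target voice to the forefront.
print(attended.index(max(attended)))  # → 0
```

Scaling up cue-matched units while attenuating the others is the "multiplicative gain" motif the study tests; the finding is that this motif alone reproduces much of human attentional behavior.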

The researchers found that under a variety of conditions, the model performed very similarly to humans, and it tended to make errors similar to those that humans make. For example, like humans, it sometimes made mistakes when trying to focus on one of two male voices or one of two female voices, which are more likely to have similar pitches.

“We did experiments measuring how well people can select voices across a pretty wide range of conditions, and the model reproduces the pattern of behavior pretty well,” Griffith says.

Effects of location

Previous research has shown that in addition to pitch, spatial location is a key factor that helps people focus on a particular voice or sound. The MIT team found that the model also learned to use spatial location for attentional selection, performing better when the target voice was at a different location from distractor voices.

The researchers then used the model to discover new properties of human spatial attention. Using their computational model, the researchers were able to test all possible combinations of target locations and distractor locations, an undertaking that would be hugely time-consuming with human subjects.

“You can use the model as a way to screen large numbers of conditions to look for interesting patterns, and then once you find something interesting, you can go and do the experiment in humans,” McDermott says.

These experiments revealed that the model was much better at correctly selecting the target voice when the target and distractor were at different locations in the horizontal plane. When the sounds were instead separated in the vertical plane, this task became much more difficult. When the researchers ran a similar experiment with human subjects, they observed the same result.

“That was just one example where we were able to use the model as an engine for discovery, which I think is an exciting application for this kind of model,” McDermott says.

Another application the researchers are pursuing is using this kind of model to simulate listening through a cochlear implant. These studies, they hope, could lead to improvements in cochlear implants that could help people with such implants focus their attention more successfully in noisy environments.

The research was funded by the National Institutes of Health.


3 Questions: Fortifying our planetary defenses

MIT astronomers are developing a new way to detect, monitor, and mitigate the threats posed by smaller asteroids to our critical space infrastructure.


When people think of asteroids, they tend to picture rare, civilization-ending impacts like those depicted in movies such as “Armageddon.” In reality, the asteroids most likely to affect modern society are much smaller. While kilometer-scale impacts occur only every tens of millions of years, decameter-scale (building-sized) objects strike Earth far more frequently: roughly every couple of decades. As astronomers develop new ways to detect and track these smaller asteroids, planetary defense becomes increasingly relevant for protecting the space-based infrastructure that underpins modern life, from GPS navigation to global communications.

The good news for us earthlings is that a team of MIT researchers is on this space-case. Associate Professor Julien de Wit, Research Scientist Artem Burdanov, and their colleagues recently developed a new asteroid-detection method that could be used to track potential asteroid impactors and help protect our planet. They have now applied this new technique to the James Webb Space Telescope (JWST), demonstrating that JWST can be used to detect and characterize decameter-scale asteroids all the way out to the main belt, a crucial step in fortifying our planetary safety and security. De Wit and his colleagues, along with Andrew Rivkin PhD ’91, recently co-led new observations of an asteroid called 2024 YR4, which made headlines last year when it was first discovered. They were able to determine that the asteroid will not collide with the Moon, a collision that could have affected Earth’s critical satellite systems.

De Wit, Burdanov, Assistant Professor Richard Teague, and Research Scientist Saverio Cambioni spoke to MIT News about the importance of planetary defense and how MIT astronomers are helping to lead the charge to ensure our planet’s safety.

Q: What is planetary defense and how is the field changing?

Burdanov: Planetary defense is a field of science and engineering that’s focused on preventing asteroids and comets from hitting the Earth. While traditionally the field has been focused on much larger asteroids, thanks to new observational capabilities the field is growing to include monitoring much smaller asteroids that could also have an impact.

De Wit: When people think about asteroids they tend to think of impacts along the lines of these rare, civilization-ending “dinosaur killer” asteroids — objects that are scientifically fascinating but, happily, statistically unlikely on human timescales. But as soon as you move to smaller asteroids, there are so many of them that you’re looking at impacts happening every few decades or less. That becomes much more relevant on human timescales.

Now that our society has become increasingly reliant on space-based infrastructure for communication, navigation technologies like GPS and satellite-based security systems, we can be affected by different populations of smaller asteroids. These smaller asteroids will probably lead to zero direct human casualties but would have very different consequences on our space infrastructure. At the same time, because they are smaller, they require different technologies to monitor and understand them, both for the detection and for the characterization. At MIT, we are working to redefine planetary defense in a way that is far more pertinent, personable, and practical — focusing on these much smaller asteroids that could have real consequences. In other words, planetary defense is no longer just about avoiding extinction-level events. It is about protecting the systems we depend on in the near term.

Q: Why are observations with telescopes like the James Webb Space Telescope (JWST) so important to keeping our planet safe?

Teague: We’re entering a time now where we have these large-scale sky surveys that are going to be producing an incredible amount of data. We’re trying to develop the framework here at MIT where we can sift through that data as quickly and efficiently as possible, and then use the resources that we have available, such as the optical and radio observatories that we run like the MIT Haystack and Wallace Observatories, to follow up on those potential threats as quickly as possible and determine whether they could be problematic.

We’ve been doing trial observations to try and piece together how fast we can do this. The challenging thing is that the smaller objects that we’ve been talking about, the decameter ones, they’re really hard to detect from the ground. They’re just so small, and so that’s why we really need to use space-based facilities like JWST to help keep our planet safe. JWST is just incomparable, really, for detecting these very small, faint objects. A lot of our work at the moment at MIT is trying to understand how we build that entire pipeline, from detection to risk assessment to mitigation, under one roof to make it as efficient as possible. And I think this is a really MIT-type of problem to solve. There’s not many places that have the same range of experts in astronomy and engineering and technology to really tackle this properly. It’s really exciting that MIT hosts all these sorts of experts that we’re bringing together to solve this problem and keep our planet safer.

Cambioni: There is going to be what I like to call an asteroid revolution coming up because in addition to JWST’s observational capabilities, there is a new observatory in Chile called the Vera Rubin Observatory that could increase the detection of known small objects in space by a factor of 10. The most important thing to keep in mind, though, is that this observatory will detect the objects but may lose a lot of them. This is where a part of our work is coming in, to basically follow that object and map it as soon as possible. Additionally, Vera Rubin only looks at the reflected light, and it doesn’t get a precise estimate of an asteroid’s size. This gap between detection and characterization is a fundamental problem of asteroid science, between how many objects we discover and how fast we can characterize them. At MIT, we are using our in-house capabilities to help characterize these objects. That includes the MIT Wallace Observatory and the MIT Haystack Observatory.

Q: What role can MIT play in this new era of planetary defense?

De Wit: The reality is that, given the occurrence rate of these smaller asteroids and the new observational capabilities now coming online — from the Rubin Observatory to space-based facilities like JWST — we expect that within the next decade we will identify a handful of decameter-scale objects whose trajectories place them on course to impact the Earth-Moon system within this century. At that point, society will face a very practical question: whether, and how, to respond. Because these are much smaller objects than the dinosaur-killing asteroids, the types of mitigation strategies that we may envision are different. This is also where I think MIT might have an important role to play in the development, design, and potentially even construction of cost-effective, rapid-response asteroid-mitigation strategies. To help organize that effort, we have begun bringing together researchers across the Institute through the Planetary Defense at MIT project, working closely with colleagues on the engineering side.

Teague: What I’m particularly excited about is the way we’ve managed to engage students at MIT in this research as well. We’ve really focused on impactful research and on bridging departments and labs within MIT, and this has been a fantastic way to engage students with practical astronomy and research. Saverio has run an IAP [Independent Activities Period] course, and we’re also running a student observing lab with the Wallace Observatory, where we hire a cohort of students every semester and teach them how to use these observatories remotely. They take the data and do the analysis, and this semester we’ve got on the order of 10 undergraduate students who will be working throughout the semester to take these observations and help us build this observation pipeline.

It’s great that here at MIT we’re not only pushing the forefront of the research, but we’re also training the next generation of astronomers who will come in and carry this project through and into the future.


2026 MacVicar Faculty Fellows named

MIT professors Amos Winter and Nickolai Zeldovich are honored for exceptional undergraduate teaching.


Two outstanding MIT educators have been named MacVicar Faculty Fellows: professor of mechanical engineering Amos Winter and professor of electrical engineering and computer science Nickolai Zeldovich.

For more than 30 years, the MacVicar Faculty Fellows Program has recognized exemplary and sustained contributions to undergraduate education at MIT. The program is named in honor of Margaret MacVicar, MIT’s first dean for undergraduate education and founder of the Undergraduate Research Opportunities Program (UROP). Fellows are chosen through an annual and highly competitive nomination process. The Registrar’s Office coordinates and administers the award on behalf of the Division of Graduate and Undergraduate Education. Nominations are reviewed by an advisory committee, and the provost selects the fellows.

Amos Winter: Bringing excitement to the classroom

Amos Winter is the Germeshausen Professor in the Department of Mechanical Engineering (MechE). He joined the faculty in 2012 and is best known for teaching class 2.007 (Design and Manufacturing I).

A hallmark of Winter’s pedagogy is the way he connects technical learning and core engineering science with real-world impacts. His approach keeps students actively engaged and encourages critical thinking while developing their competence and confidence as design engineers. Current graduate student Ariel Mobius ’24 writes, “Professor Winter is a transformative educator. He successfully blends rigorous technical instruction with lessons on problem scoping and hands-on learning and backs it all up with personalized mentorship. He is a committed advocate for his students and has fundamentally shaped my path as a mechanical engineer.”

Especially notable is Winter’s energetic style and his use of interactive materials and demonstrations to make fundamental topics tangible. “He wheels in a large steamer trunk filled with demos he has built or collected to illustrate the day’s topic,” writes Class of 1948 Career Development Professor and assistant professor of mechanical engineering Kaitlyn Becker. “Some demos are enduring classics and others newly designed each year.” In his “Gearhead Moment of Zen,” Winter shares an astonishing car stunt and explains its mechanics using course material. “The theatrics stay in students’ minds,” says Becker, highlighting how Winter’s dramatic examples reinforce learning.

These techniques, combined with a supportive culture, have allowed Winter to transform 2.007 from a core class and first subject in engineering design into a celebration of student effort and learning. Throughout the term, students learn how to design and build objects, culminating in a robot competition in which their creations tackle themed challenges on a life-size game board. In the past, fewer than half the students were able to compete; today, boosted by Winter’s mentorship and enthusiasm, nearly 97 percent finish a competition-ready robot.

Ralph E. and Eloise F. Cross Professor of Mechanical Engineering David Hardt writes, “Thanks to Amos, this subject has become transformative for many MechE undergraduates.” Becker concurs: “He is the heart and captain of the 2.007 ‘cheer squad,’ cultivating a caring and motivated teaching team.”

Current graduate student Aidan Salazar ’25 notes, “His teaching philosophy is grounded in empowerment: he encourages students to take risks when designing while giving them the confidence and support needed to do so with thoughtful engineering analysis.”

Winter is also deeply invested in students’ growth outside the classroom. He serves as faculty supervisor for MIT’s Formula SAE (Society of Automotive Engineers) and Solar Car teams and guides related UROP projects. In fall 2025 alone, he advised nearly 50 UROP students from the teams, demonstrating his commitment to experiential learning and ability to mentor students at scale.

Salazar continues: “He has offered extraordinary contributions in helping MIT undergraduates embody the Institute’s ‘mens-et-manus’ [‘mind-and-hand’] motto, and I am grateful to be one of the individuals shaped by his teaching.”

“I have always looked up to my colleagues who are MacVicar Fellows as the best educators at the Institute,” writes Winter. “What makes this acknowledgement even more special to me is earning it from teaching 2.007, which I often cite as one of the best parts of my job. The class is where most mechanical engineering undergraduates gain their first real engineering experience by physically realizing a machine of their own conception. It has been extremely gratifying to watch a generation of students translate their knowledge of engineering and design from the class into their careers … I am honored to have played a role in their intellectual growth and done so meaningfully enough to be recognized as a MacVicar Fellow.”

Nickolai Zeldovich: Inspiring independent thinkers and future teachers

Nickolai Zeldovich is the Joan and Irwin M. (1957) Jacobs Professor of Electrical Engineering and Computer Science (EECS). Student testimonials highlight his unique ability to activate their problem-solving skills, cultivate their intellectual curiosity, and infuse learning with joy.

Katarina Cheng ’25 writes, “From my first day of lecture in the course, I was immediately drawn in by Professor Zeldovich’s joy and enthusiasm for every facet of security and its power,” and Rotem Hemo ’17, ’18 says that Zeldovich “empowers students to find solutions themselves.”

Yael Tauman Kalai, the Ellen Swallow Richards (1873) Professor and professor of EECS, concurs. She notes that his lectures — with back-and-forth discussion and probing questions — encourage independent thinking and ensure that “everyone feels a little smarter at the end. It is not surprising that students love him.”

Zeldovich’s affinity for problem-solving translates to his curricular work as well. When he arrived at MIT in 2008, Course 6 offered classes in theoretical and applied cryptography, but lacked a dedicated systems security subject. Recognizing this as a significant gap, Zeldovich took it upon himself to create class 6.566/6.858 (Computer Systems Security) in 2009. Since then, the subject has become a central part of the curriculum, but sustained interest from undergraduates revealed another need, and in 2021 he partnered with colleagues to create a dedicated introductory course: 6.1600 (Foundations of Computer Security).

Edwin Sibley Webster Professor of EECS Srini Devadas writes: “What our curriculum was sorely in need of was a systems security class, and Nickolai immediately and single-handedly created [it],” and has “taught this class to rave reviews ever since.”

The impact of Zeldovich’s thoughtful, inquiry-driven approach to pedagogy extends beyond the walls of his classroom, inspiring future educators, teaching assistants (TAs), and even his faculty colleagues at MIT.

Henry Corrigan-Gibbs, the Douglas Ross (1954) Career Development Professor of Software Technology and associate professor of computer science, writes that Zeldovich has “proven himself to be a dedicated teacher of teachers … One of the things that makes teaching with Nickolai so much fun is that he shares his passion with the undergraduates and MEng students who join the course staff as TAs.”

“[He] encourages the TAs to contribute their own creative ideas to the course,” continues Corrigan-Gibbs. “It should not be a surprise then that 100% of the TAs that we have had in our class have signed up to teach with Nickolai again.”

“Due, in no small part, to how I saw Nickolai lead his classroom, I was inspired to become an educator myself,” writes MIT alumna Anna Arpaci-Dusseau ’23, SM ’24. “I saw that the role of an instructor is not only to teach, but to innovate by thinking of creative projects, and to connect by listening to students’ concerns. As I go forward in my career, I am grateful to have such a wonderful example of an educator to look up to.”

Kalai adds, “I have learned a great deal from the two times that I have ‘taken’ (part of) the class from Nickolai. His extensive knowledge and experience are evident in every lecture. There is so much variety to Nickolai’s teaching.”

Nickolai Zeldovich is the recipient of numerous awards including the EECS Spira Teaching Award (2013), the Edgerton Faculty Achievement Award (2014), the EECS Faculty Research Innovation Fellowship (2018), and the EECS Jamieson Award for Excellence in Teaching (2024).

On receiving this award, Zeldovich says, “MIT has a culture of strong undergraduate education, so being selected as a MacVicar Fellow was truly an honor. It’s a joy to teach smart students about computer systems, and the tradition of co-teaching classes in the EECS department helped me improve as a teacher. Most of all, I look forward to continuing to teach MIT’s students!”

Learn more about the MacVicar Faculty Fellows Program on the Registrar’s Office website. 


New photonic device efficiently beams light into free space

Light-emitting structures that curl off the chip surface could enable advanced displays, high-speed optical communications, and larger-scale quantum computers.


Photonic chips use light to process data instead of electricity, enabling faster communication speeds and greater bandwidth. Most of that light typically stays on the chip, trapped in optical wires, and is difficult to transmit to the outside world in an efficient manner.

If a lot of light could be rapidly and precisely beamed off the chip, free from the confines of the wiring, it could open the door to higher-resolution displays, smaller Lidar systems, more precise 3D printers, or larger-scale quantum computers.

Now, researchers from MIT and elsewhere have developed a new class of photonic devices that enable the precise broadcasting of light from the chip into free space in a scalable way.

Their chip uses an array of microscopic structures that curl upward, resembling tiny, glowing ski jumps. The researchers can carefully control how light is emitted from thousands of these tiny structures at once.

They used this new platform to project detailed, full-color images that are roughly half the size of a grain of table salt. Used in this way, the technology could aid in the development of lightweight augmented reality glasses or compact displays.

They also demonstrated how photonic “ski jumps” could be used to precisely control quantum bits, or qubits, in a quantum computing system.

“On a chip, light travels in wires, but in our normal, free-space world, light travels wherever it wants. Interfacing between these two worlds has long been a challenge. But now, with this new platform, we can create thousands of individually controllable laser beams that can interact with the world outside the chip in a single shot,” says Henry Wen, a visiting research scientist in the Research Laboratory of Electronics (RLE) at MIT, research scientist at MITRE, and co-lead author of a paper on the new platform.

He is joined on the paper by co-lead authors Matt Saha, of MITRE; Andrew S. Greenspon, a visiting scientist in RLE and MITRE; Matthew Zimmermann, of MITRE; Matt Eichenfield, a professor at the University of Arizona; senior author Dirk Englund, a professor in the MIT Department of Electrical Engineering and Computer Science and principal investigator in the Quantum Photonics and Artificial Intelligence Group and the RLE; as well as others at MIT, MITRE, Sandia National Laboratories, and the University of Arizona. The research appears today in Nature.

A scalable platform

This work grew out of the Quantum Moonshot Program, a collaboration between MIT, the University of Colorado at Boulder, the MITRE Corporation, and Sandia National Laboratories to develop a novel quantum computing platform using the diamond-based qubits being developed in the Englund lab.

These diamond-based qubits are controlled using laser beams, and the researchers needed a way to interact with millions of qubits at once.

“We can’t control a million laser beams, but we may need to control a million qubits. So, we needed something that can shoot laser beams into free space and scan them over a large area, kind of like firing a T-shirt gun into the crowd at a sports stadium,” Wen says.

Existing methods used to broadcast and steer light off a photonic chip typically work with only a few beams at once and can’t scale up enough to interact with millions of qubits.

To create a scalable platform, the researchers developed a new fabrication technique. Their method produces photonic chips with tiny structures that curve upward off the chip’s surface to shine laser beams into free space.

They built these tiny “ski jumps” for light by creating two-layer structures from two different materials. Each material expands differently when it cools down from the high fabrication temperatures.

The researchers designed the structures with special patterns in each layer so that, when the temperature changes, the difference in strain between the materials causes the entire structure to curve upward as it cools.

This is the same effect as in an old-fashioned thermostat, which utilizes a coil of two metallic materials that curl and uncurl based on the temperature in the room, triggering the HVAC system. “Both of these materials, silicon nitride and aluminum nitride, were separate technologies. Finding a way to put them together was really the fabrication innovation that enables the ski jumps. This wouldn’t have been possible without the pioneering contributions of Matt Eichenfield and Andrew Leenheer at Sandia National Labs,” Wen says.
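The amount of curl can be estimated with the classic bimetallic-strip analysis. As an illustrative textbook result (not a formula from the paper), for two bonded layers of equal thickness and equal elastic modulus, cooling by a temperature change ΔT bends the stack to a curvature

```latex
% Timoshenko's bimetal result, specialized to equal layer thickness and
% equal elastic modulus (an illustrative simplification, not from the paper):
\kappa = \frac{1}{\rho} = \frac{3\,(\alpha_2 - \alpha_1)\,\Delta T}{2h}
```

where α₁ and α₂ are the two layers’ thermal-expansion coefficients, h is the total thickness, and ρ is the radius of curvature: the larger the expansion mismatch and the thinner the stack, the tighter the curl.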

On the chip, connected waveguides funnel light to the ski jump structures. The researchers use a series of modulators to rapidly and precisely control how that light is turned on and off, enabling them to project light off the chip and move it around in free space.

Painting with light

They can broadcast light in different colors and, by tweaking the frequencies of light, adjust the density of the pattern that is emitted. In this way, they can essentially paint pictures in free space using light.

“This system is so stable we don’t even need to correct for errors. The pattern stays perfectly still on its own. We just calculate what color lasers need to be on at a given time and then turn it on,” he says.

Because the individual points of light, or pixels, are so tiny, the researchers can use this platform to generate extremely high-resolution displays. For instance, with their technique, 30,000 pixels can fit into the area occupied by just two smartphone-display pixels, Wen says.

“Our platform is the ideal optical engine because our pixels are at the physical limit of how small a pixel can be,” he adds.

Beyond high-resolution displays and larger quantum computers with diamond-based qubits, the method could be used to produce Lidars that are small enough to fit on tiny robots.

It could also be utilized in 3D printing processes that fabricate objects using lasers to cure layers of resin. Because their chip generates controllable beams of light so rapidly, it could greatly increase the speed of these printing processes, allowing users to create more complex objects.

In the future, the researchers want to scale their system up and conduct additional experiments on the yield and uniformity of the light, design a larger system to capture light from an array of photonic chips with “ski jumps,” and conduct robustness tests to see how long the devices last.

“We envision this opening the door to a new class of lab-on-chip capabilities and lithographically defined micro-opto-robotic agents,” Wen says.

This research was funded, in part, by the MITRE Quantum Moonshot Program, the U.S. Department of Energy, and the Center for Integrated Nanotechnologies.


A better method for planning complex visual tasks

A new hybrid system could help robots navigate in changing environments or increase the efficiency of multirobot assembly teams.


MIT researchers have developed a generative artificial intelligence-driven approach for planning long-term visual tasks, like robot navigation, that is about twice as effective as some existing techniques.

Their method uses a specialized vision-language model to perceive the scenario in an image and simulate actions needed to reach a goal. Then a second model translates those simulations into a standard programming language for planning problems, and refines the solution.

In the end, the system automatically generates a set of files that can be fed into classical planning software, which computes a plan to achieve the goal. This two-step system generated plans with an average success rate of about 70 percent, outperforming the best baseline methods that could only reach about 30 percent.

Importantly, the system can solve new problems it hasn’t encountered before, making it well-suited for real environments where conditions can change at a moment’s notice.

“Our framework combines the advantages of vision-language models, like their ability to understand images, with the strong planning capabilities of a formal solver,” says Yilun Hao, an aeronautics and astronautics (AeroAstro) graduate student at MIT and lead author of an open-access paper on this technique. “It can take a single image and move it through simulation and then to a reliable, long-horizon plan that could be useful in many real-life applications.”

She is joined on the paper by Yongchao Chen, a graduate student in the MIT Laboratory for Information and Decision Systems (LIDS); Chuchu Fan, an associate professor in AeroAstro and a principal investigator in LIDS; and Yang Zhang, a research scientist at the MIT-IBM Watson AI Lab. The paper will be presented at the International Conference on Learning Representations.

Tackling visual tasks

For the past few years, Fan and her colleagues have studied the use of generative AI models to perform complex reasoning and planning, often employing large language models (LLMs) to process text inputs.

Many real-world planning problems, like robotic assembly and autonomous driving, have visual inputs that an LLM can’t handle well on its own. The researchers sought to expand into the visual domain by utilizing vision-language models (VLMs), powerful AI systems that can process images and text.

But VLMs struggle to understand spatial relationships between objects in a scene and often fail to reason correctly over many steps. This makes it difficult to use VLMs for long-range planning.

On the other hand, scientists have developed robust, formal planners that can generate effective long-horizon plans for complex situations. However, these software systems can’t process visual inputs and require expert knowledge to encode a problem into language the solver can understand.

Fan and her team built an automatic planning system that takes the best of both methods. The system, called VLM-guided formal planning (VLMFP), utilizes two specialized VLMs that work together to turn visual planning problems into ready-to-use files for formal planning software.

The researchers first carefully trained a small model they call SimVLM to specialize in describing the scenario in an image using natural language and simulating a sequence of actions in that scenario. Then a much larger model, which they call GenVLM, uses the description from SimVLM to generate a set of initial files in a formal planning language known as the Planning Domain Definition Language (PDDL).

The files are ready to be fed into a classical PDDL solver, which computes a step-by-step plan to solve the task. GenVLM compares the results of the solver with those of the simulator and iteratively refines the PDDL files.
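As a heavily simplified illustration of that generate-solve-simulate-refine loop, the sketch below runs it on a toy one-dimensional navigation task. All names and the task itself are hypothetical stand-ins, not the authors’ code: a stub “solver” plans from a proposed problem description, a stub “simulator” replays the plan, and a mismatch triggers one round of refinement.

```python
# Illustrative sketch (not the authors' code) of a VLMFP-style loop:
# propose a problem file, solve it, replay the plan in simulation, and
# repair the problem file until plan and simulation agree on the goal.

def simulate(plan, start=0):
    """Stand-in for SimVLM: replay each action and report the final state."""
    state = start
    for action in plan:
        state += {"step_forward": 1, "step_back": -1}[action]
    return state

def solve(problem):
    """Stand-in for a classical PDDL solver on the toy 1-D task."""
    delta = problem["goal"] - problem["init"]
    action = "step_forward" if delta >= 0 else "step_back"
    return [action] * abs(delta)

def refine(problem, true_goal, max_iters=3):
    """Stand-in for GenVLM: keep correcting the problem file until the
    solver's plan, replayed in simulation, actually reaches the goal."""
    for _ in range(max_iters):
        plan = solve(problem)
        if simulate(plan, problem["init"]) == true_goal:
            return plan
        problem = {**problem, "goal": true_goal}  # repair the problem file
    return None

# A deliberately wrong initial problem file is repaired in one iteration.
print(refine({"init": 0, "goal": 2}, true_goal=3))
# → ['step_forward', 'step_forward', 'step_forward']
```

In the real system the “repair” step is far richer, with a large VLM rewriting both PDDL files, but the agreement check between solver output and simulated outcome is the same basic idea.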

“The generator and simulator work together to be able to reach the exact same result, which is an action simulation that achieves the goal,” Hao says.

Because GenVLM is a large generative AI model, it has seen many examples of PDDL during training and learned how this formal language can solve a wide range of problems. This existing knowledge enables the model to generate accurate PDDL files.

A flexible approach

VLMFP generates two separate PDDL files. The first is a domain file that defines the environment, valid actions, and domain rules. It also produces a problem file that defines the initial states and the goal of a particular problem at hand.

“One advantage of PDDL is the domain file is the same for all instances in that environment. This makes our framework good at generalizing to unseen instances under the same domain,” Hao explains.

To enable the system to generalize effectively, the researchers needed to carefully design just enough training data for SimVLM so the model learned to understand the problem and goal without memorizing patterns in the scenario. When tested, SimVLM successfully described the scenario, simulated actions, and detected if the goal was reached in about 85 percent of experiments.

Overall, the VLMFP framework achieved a success rate of about 60 percent on six 2D planning tasks and greater than 80 percent on two 3D tasks, including multirobot collaboration and robotic assembly. It also generated valid plans for more than 50 percent of scenarios it hadn’t seen before, far outpacing the baseline methods.

“Our framework can generalize when the rules change in different situations. This gives our system the flexibility to solve many types of visual-based planning problems,” Fan adds.

In the future, the researchers want to enable VLMFP to handle more complex scenarios and explore methods to identify and mitigate hallucinations by the VLMs.

“In the long term, generative AI models could act as agents and make use of the right tools to solve much more complicated problems. But what does it mean to have the right tools, and how do we incorporate those tools? There is still a long way to go, but by bringing visual-based planning into the picture, this work is an important piece of the puzzle,” Fan says.

This work was funded, in part, by the MIT-IBM Watson AI Lab.


2026 MIT Sloan Sports Analytics Conference shows why data make a difference

Over 2,500 — including coaches and players from Team USA, the NBA, WNBA, and more — attended MIT’s industry-leading event, now in its 20th year.


With time dwindling in the Olympic women’s ice hockey gold medal game on Feb. 19, players for Team USA and Team Canada lined up for a key faceoff in Canada’s end. Canada had a 1-0 lead. USA had 2:23 left, and an ace up their sleeve: analytics.

USA Coach John Wroblewski pulled the goalie to gain an extra attacker, and had forward Alex Carpenter take the faceoff. Statistics show that Carpenter is not only very good at winning faceoffs; she also wins a lot of them cleanly. That allows her team to quickly regain possession, without too many teammates nearby. Knowing that, Wroblewski directed the USA players to spread out, largely away from the faceoff circle, in position to circulate the puck as soon as they got it back.

Carpenter won the faceoff, and Team USA quickly started a passing move. Laila Edwards soon launched a shot that longtime star Hilary Knight deflected in for the crucial, game-tying goal with 2:04 left. Team USA then won in overtime. And data-driven decision-making had also won big; indeed, it helped change the Olympics.

“What it does for a coach, the other thing these analytics do, is … it allows you to move forward with this confidence level,” Wroblewski said on Saturday at the 20th annual MIT Sloan Sports Analytics Conference (SSAC), during a hockey analytics panel where he detailed his decision-making for that faceoff and throughout the gold medal game.

Using the data, he added, lets coaches “limit the emotion” that might cloud their in-game decisions.

“By the time you get to that decision, you’re then allowed the freedom to step away from the decision, to allow the players to go earn their medal,” Wroblewski added.

You don’t usually find coaches divulging their tactical secrets just three weeks after a big game has been played. But then, this is the MIT Sloan conference, a trailblazing forum that has helped analytics ideas spread throughout sports. Coaches, players, and analysts know any data-driven discussion will find an interested audience.

“Analytics was massive for us going into the gold medal game,” Wroblewski said.

20 years on: From classrooms to convention halls

The 20th edition of SSAC was a strong one, with many substantive panel discussions and interviews; the annual research paper, hackathon, and case study contests; mentorship events and informal networking opportunities; and more. Over 2,500 people attended the two-day event, held at Boston’s Menino Conference and Exhibition Center (MCEC). The conference was founded in 2007 by Daryl Morey, now president of basketball operations for the NBA Philadelphia 76ers, and Jessica Gelman, now CEO of the Kraft Analytics Group.

The first three editions of the conference were held on the MIT campus. In 2010, it first moved to the MCEC (one of two regular convention-center sites it uses), and starting in 2011, the conference became a two-day event.

Today people attend for the panels, the career opportunities, and, in some cases, to make news. NBA Commissioner Adam Silver was on hand this year, engaging in an on-stage conversation with former WNBA great Sue Bird, publicly addressing some of the key issues facing his league, and drawing wide media coverage.

First, though, Silver reflected about attending the second edition of the conference on the MIT campus in 2008, when he was deputy commissioner.

“It was literally a classroom of 20 people we were talking to,” Silver recalled. “I think it was the beginning of the moment when people were taking sports as a discipline more seriously. … I give Jessica and Daryl a lot of credit [for that].”

Addressing tanking and gambling

A core part of Silver’s comments focused on two big issues in pro basketball: tanking and gambling. About eight NBA teams appear to be tanking this season, that is, losing games in order to increase their chances of getting a high draft pick.

“We are going to make substantial changes for next year,” Silver said, although he also added: “I am an incrementalist. I think we’ve got to be a little bit careful about how huge a change we make at once. I’m not ruling anything out. But I am paying attention to that.”

To be sure, tanking has long been a part of professional basketball, as Bird noted during the conversation.

“We did it in Seattle, to be honest,” Bird said. “Breanna Stewart was coming out of college. We were in a ‘rebuild.’”

Still, in this NBA season, tanking has become an epidemic, in “a little bit of a perfect storm,” as Silver put it on Friday. And almost every proposed solution seems to have drawbacks. Perhaps the simplest cure for tanking, actually, would be robust analytical studies showing that it is not a very effective team-building strategy. If that is what the numbers reveal, of course.

Meanwhile, multiple arrests of NBA players and coaches at the beginning of the season show further that sports gambling continues to present challenges to professional sports leagues.

“I personally think there should be more regulation now, not less,” Silver said on Friday, suggesting that federal rules would simplify things in the U.S., where 39 states allow sports gambling to some extent. He also said the NBA can continue to work on monitoring data to protect against gambling scandals.

“I think there are some large-platform companies that are looking at a business opportunity to come in and, in a much more sophisticated way, work as a detection service with the league,” Silver said.

Through it all, Silver said, the NBA will continue to be a data-driven operation. Have you watched a game with a long instant-replay review, and gotten a little impatient? Still, have you kept watching that game? So does almost everyone.

“For years people would tell us, ‘Don’t use instant replay, because you’ll turn fans off,’” Silver said. However, he added, “The data suggests, in terms of ratings and what servers tell us, you almost never lose a fan when you’re going to replay. Because they want to see the replay and they want to see what happened.”

The minnows got big

Sports analytics took root in baseball, with its discrete pitcher-hitter actions. Legendary MLB general manager Branch Rickey employed a statistician for the great Brooklyn Dodgers of the 1950s; the famous manager Earl Weaver thought analytically with the Baltimore Orioles in the 1970s. Baseball analyst Bill James made sports analytics a viable pursuit with his annual “Baseball Abstract” bestsellers in the 1980s, and Michael Lewis’ “Moneyball” popularized it.

But data can be applied to all sports — and sometimes is most valuable when only some teams are interested in it. Take soccer. In the English Premier League, about three clubs have been heavily oriented around analytics over the last decade: Liverpool FC, Brighton FC, and Brentford FC. That has helped Liverpool win multiple titles, while Brighton and Brentford, smaller clubs, have startled many with their success.

Saturday at SSAC, Brentford’s majority owner Matthew Benham made one of his most visible public appearances, in an onstage interview with podcaster Roger Bennett. Benham first made money wagering on soccer, then invested in Brentford, his childhood club.

“The information we used in the early days was really, really rudimentary,” Benham said. In his account, his success building an analytics-based club has only partly been about the numbers.

“A lot of the success has just been in running things efficiently,” Benham said. He prefers to have management discussions that are an “exchange of views, rather than debate,” since the latter implies an interaction with a clear winner and loser. Instead, compiling independent-minded views from his executives is more important.

Brentford also uses “a combination of old-style scouting and data” for its player acquisition decisions, Benham said. Not every decision works. Brentford could have signed current Arsenal FC star Eberechi Eze for a mere £4 million in 2019, and passed; Crystal Palace FC acquired Eze, then realized a windfall when Arsenal purchased his services.

Still, pressed by Bennett to specify a little more about his analytical thinking, Benham implied that strikers are valuable not only for their finishing skills, but for consistently getting open for shots on goal. Fans tend to focus too much on a player’s misses, rather than how many chances are created by their off-ball work.

“Getting in position is way, way more informative than finishing,” Benham said.

A similar insight seems to have guided Liverpool’s thinking. As it happens, a Friday panel at SSAC featured Ian Graham, who ran Liverpool’s analytics operations from 2012 to 2023, and weighed in on a number of subjects. Among other things, Graham noted, teams are too cautious when tied late in a match; soccer grants three points for a win, one for a draw, and zero for a loss, so from a tied position, the reward for winning is twice as great as the penalty for losing.
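The arithmetic behind Graham’s point can be made concrete with a back-of-the-envelope expected-points calculation (the probabilities below are illustrative assumptions, not figures from the panel):

```python
# Back-of-the-envelope expected points under soccer's 3/1/0 scoring:
# 3 for a win, 1 for a draw, 0 for a loss. Probabilities are illustrative.

def expected_points(p_win, p_draw):
    """Expected league points given win and draw probabilities."""
    return 3 * p_win + 1 * p_draw  # a loss contributes 0

# Sitting on the tie: hold the draw almost every time.
safe = expected_points(p_win=0.05, p_draw=0.90)

# Going for the win: both win and loss chances rise sharply.
bold = expected_points(p_win=0.30, p_draw=0.40)

print(round(safe, 2), round(bold, 2))  # → 1.05 1.3
```

In this toy example, going for the win raises the chance of losing sixfold, from 5 to 30 percent, yet still yields more expected points, because a win gains two points over a draw while a loss gives up only one.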

“Teams don’t go for it enough,” Graham said. “Teams think a draw is an okay result.”
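Graham’s point about asymmetric payoffs reduces to simple expected-value arithmetic. The sketch below uses hypothetical win/draw probabilities, not figures from Liverpool or the panel:

```python
def expected_points(p_win, p_draw):
    """Expected league points: 3 for a win, 1 for a draw, 0 for a loss."""
    assert 0 <= p_win + p_draw <= 1
    return 3 * p_win + 1 * p_draw

# Settling for the draw late in a tied match: a near-certain single point.
settle = expected_points(p_win=0.05, p_draw=0.90)  # 1.05 expected points

# Pushing for a winner: the upside (+2 over a draw) is twice the downside (-1),
# so even a modest chance of winning can justify the risk.
push = expected_points(p_win=0.30, p_draw=0.40)    # 1.30 expected points
```

Under these illustrative numbers, the aggressive strategy comes out ahead even though it loses the match more often.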

The limits of knowledge

Sports, of course, are ultimately played by imperfect, injury-prone, and sometimes exhausted athletes. One consistent lesson from the MIT Sloan conference involves the limits of data and plans.

“We think the data is giving us an answer, when actually it’s giving us some information, and we still have to make a choice,” said Ariana Andonian, vice president of player personnel for the Philadelphia 76ers, during a basketball panel on Saturday.

Asked about the promise of artificial intelligence for sports analytics, Sonia Raman, head coach of the WNBA’s Seattle Storm, noted that its insights might always be limited by circumstances.

“It’s not like you can just get an AI report in the middle of the game that says, ‘Get some shooting in,’” said Raman, who, prior to coaching in the WNBA and NBA, served for 12 years as head coach of the MIT women’s basketball team.

“You can have a great plan, but if it’s poorly executed, it’s way worse than a poor plan that’s well executed,” added Steven Adams, a center for the NBA’s Houston Rockets (who is currently not playing due to injury), during the same panel.

And yet, in some games and matches, the analytics do work, the plans do come to fruition, and the numbers do make a difference. When that happens, as John Wroblewski can now attest, the results are golden. 


MIT undergraduates help US high schoolers tackle calculus

The MIT4America Calculus Project is a growing source of tutoring support on a topic that’s a “gateway” to many STEM careers.


This year in a rural school district in southeastern Montana, one high school student is taking calculus. For many people, calculus is daunting enough, even when teachers are used to offering it and peers are around to help. Studying it solo can be even harder. Yet this lone student has an unusual source of support: weekly tutoring directly from an MIT undergraduate, by Zoom, a long-distance but helpful way to stay on track.

It’s part of a new program called the MIT4America Calculus Project, launched from the Institute last summer, in which MIT undergraduates and alumni work with school districts across the U.S., from Montana to Texas to New York, to tutor high school students. The logic is compelling: MIT students are highly proficient in calculus, which is all but a requirement for admission to and success at the Institute. The new civic-minded outreach program lets them share that knowledge, preparing high schoolers for further studies and even jobs, especially in STEM fields.

“Calculus is a gateway for many students into STEM higher education and careers,” says MIT Professor Eric Klopfer, a co-director of the MIT4America Calculus Project. “We can help more students, in more places, fulfill requirements and get into great universities across the country, whether MIT or others, and then into STEM careers. We want to make sure they have the skills to do that.”

At this point, the project is working closely with 14 school districts across the U.S., deploying 30 current MIT undergraduates and seven alumni as tutors. The weekly sessions are carefully coordinated with school administrators and teachers, and the MIT tutors have all received training. The program started with an in-person summer calculus camp in 2025; by next summer, the goal is to be collaborating with about 20 school districts.

“We want it to have a lasting impact,” says Claudia Urrea, an education scholar and co-director of the MIT4America Calculus Project. “It’s not just about students passing an exam, but having tutors who look like what the students want to be in the future, who are mentors, have conversations, and make sure the high school students are learning.”

Klopfer and Urrea bring substantial experience to the project. Klopfer is a professor and director of the Scheller Teacher Education Program and the Education Arcade at MIT; Urrea is executive director for the PreK-12 Initiative at MIT Open Learning.

The MIT4America Calculus Project is supported through a gift from the Siegel Family Endowment and was developed as a project in consultation with David Siegel SM ’86, PhD ’91, a computer scientist and entrepreneur who is chairman of the firm Two Sigma.

“David Siegel came to us with two powerful questions: How can we spread the educational impact of MIT beyond our walls? And how can we open doors to STEM careers for U.S. high school students who don’t have access to calculus?” says MIT President Sally Kornbluth.

She adds: “The MIT4America Calculus Project answers those questions in a perfectly MIT way: Reflecting the Institute’s longstanding commitment to national service, the MIT4America Calculus Project supplies an innovative answer to a hard practical problem, and it taps the uncommon skill of the people of MIT to create opportunity for others. We’re enormously grateful to David for his inspiration and guidance, and to the Siegel Family Endowment for the financial support that brought this idea to life.”

The U.S. has more than 13,000 school districts, and about half of them offer calculus classes. The MIT effort aims to work with districts that already offer calculus but are striving to add support for those programs, often while facing funding constraints or other limitations.

In contrast to the one-student calculus situation in Montana, the project is also working with a 5,000-student district in Texas, south of Dallas, where about 60 high school students take calculus; currently five Institute undergraduates are tutoring 15 students from the district’s schools.

“Other organizations are involved in efforts like this, but I think MIT brings some unique things to it,” Klopfer says. “I think involving our undergraduates in this is an awesome contribution. Our students really do come from all over the place, and are sometimes connecting back to their home states and communities, and that makes a difference on both sides.”

He adds: “I see benefits for our students, too. They develop good ways of communicating, working with other people and building skills. They can gain a lot of great experience.”

In addition to the in-person summer calculus camp, which is expected to continue, and the weekly video tutoring, the MIT4America Calculus Project is working on developing online tools that help guide high school students as well. Still, Urrea emphasizes, the project is built around “the importance of people. A community of support is very important, to have connections that build over time. The human aspect of the program is irreplaceable.”

The MIT tutors must complete rigorous training sessions that cover pedagogy and other aspects of working with high school students, and they know they are making a substantial commitment of time and effort.

It has been worth it, as teachers say their high school students have been responding very well to the MIT tutors.

“For students to be able to see themselves in their tutors is a really cool thing,” says Shilpa Agrawal ’15, director of computer science and an AP calculus AB teacher at Comp Sci High in the Bronx, New York, where 15 students are participating in the project.

“It’s led to a lot of success for my students,” adds Agrawal, who majored in computer science at MIT. She is part of the national network of MIT-connected teachers who have been helping the program grow organically, having reached out to Jenny Gardony, manager of the MIT4America Calculus Project.

Gardony, who is also the math project manager in MIT’s Scheller Teacher Education program, has been receiving enthusiastic emails from teachers in other participating districts since the project started.

“I have to start by saying thank you,” one teacher wrote to Gardony, adding that one student “was so excited in class today. The session she had with you made her so confident. She’s always nervous, but today she was smiling and helping others, and that was 100 percent because of you.”

Gardony adds: “The fact that a busy teacher takes the time to send that email, I’m touched they would do that.” 


Understanding how “marine snow” acts as a carbon sink

A new study finds hitchhiking bacteria dissolve essential ballast in ubiquitous “snow” particles, which could counteract the ocean’s ability to sequester carbon.


In some parts of the deep ocean, it can look like it’s snowing. This “marine snow” is the dust and detritus that organisms slough off as they die and decompose. Marine snow can fall several kilometers to the deepest parts of the ocean, where the particles are buried in the seafloor for millennia.

Now, researchers at MIT and their collaborators have found that as marine snow falls, tiny hitchhikers may limit how deep the particles can sink before dissolving away. The team shows that when bacteria hitch a ride on marine snow particles, the microbes can eat away at calcium carbonate, which is an essential ballast that helps particles sink.

The findings, which appear this week in the Proceedings of the National Academy of Sciences, could explain how calcium carbonate dissolves in shallow layers of the ocean, where scientists had assumed it should remain intact. The results could also change scientists’ understanding of how quickly the ocean can sequester carbon from the atmosphere.

Marine snow is a main vehicle by which the ocean stores carbon. At the ocean’s surface, phytoplankton absorb carbon dioxide from the atmosphere and convert the gas into other forms of carbon, including calcium carbonate — the same stuff that’s found in shells and corals. When they die, bits of phytoplankton drift down through the ocean as marine snow, carrying the carbon with them. If the particles make it to the deep ocean, the carbon they carry can be buried and locked away for hundreds to thousands of years.

But the new study suggests bacteria may be working against the ocean’s ability to sequester carbon. By eroding the particles’ calcium carbonate, bacteria can significantly slow the sinking of marine snow. The more they linger, the more likely the particles are to be respired quickly, releasing carbon dioxide into the shallow ocean, and possibly back into the atmosphere.

“What we’ve shown is that carbon may not sink as deep or as fast as one may expect,” says study co-author Andrew Babbin, an associate professor in the Department of Earth, Atmospheric and Planetary Sciences and a mission director at the Climate Project at MIT. “As humanity tries to design our way out of the problem of having so much CO2 in the atmosphere, we have to take into account these natural microbial mechanisms and feedbacks.”

The study’s primary author is Benedict Borer, a former MIT postdoc who is now an assistant professor of marine and coastal sciences at the Rutgers School of Environmental and Biological Sciences; co-authors include Adam Subhas and Matthew Hayden at the Woods Hole Oceanographic Institution and Ryan Woosley, a principal research scientist at MIT’s Center for Sustainability Science and Strategy.

Losing weight

Marine snow acts as the ocean’s main “biological pump,” the process by which the ocean pulls carbon from the surface down into the deep ocean. Scientists estimate that marine snow is responsible for drawing down billions of tons of carbon each year. Marine snow’s ability to sink comes mainly from minerals such as calcium carbonate embedded within the particles. The mineral is a dense ballast that weighs down the particle. The more calcium carbonate a particle has, the faster it sinks.
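The ballast effect can be illustrated with textbook Stokes settling. This is not the study’s own model, and all parameter values below are rough, hypothetical choices; it simply shows why a denser, carbonate-rich particle sinks faster:

```python
def stokes_velocity(radius_m, rho_particle, rho_seawater=1025.0,
                    viscosity=1.4e-3, g=9.81):
    """Stokes settling speed (m/s) of a small sphere in seawater:
    v = (2/9) * (rho_p - rho_f) * g * r^2 / mu
    """
    return 2.0 / 9.0 * (rho_particle - rho_seawater) * g * radius_m**2 / viscosity

# A 100-micron-diameter particle with little vs. lots of carbonate ballast:
lightly_ballasted = stokes_velocity(50e-6, rho_particle=1100.0)
heavily_ballasted = stokes_velocity(50e-6, rho_particle=1400.0)

# Dissolving ballast lowers the particle's density excess over seawater
# (here 375 vs. 75 kg/m^3), so the heavily ballasted particle sinks 5x faster.
```

In this sketch, bacterial dissolution of calcium carbonate acts by shrinking the density difference in the numerator, which is why losing ballast slows the descent.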

Scientists had assumed based on thermodynamics that calcium carbonate should not dissolve within the ocean’s upper layers, given the general temperature and pH conditions in the surface ocean. Any calcium carbonate that is bound up in marine snow should then safely sink to depths greater than 1,000 meters without dissolving along the way.

But oceanographers have long observed signs of dissolved calcium carbonate in the upper layers of the ocean, suggesting that something other than the ocean’s macroscale conditions was dissolving the mineral and slowing down the ocean’s biological pump.

And indeed, the MIT team has found that what is dissolving calcium carbonate in shallow waters is a microscale process that occurs within the immediate environment of an individual particle.

“Most oceanographers think about the macroscale, and in this instance what’s happening in microscopic particles is what is actually controlling bulk seawater chemistry,” Borer says. “Consequences abound for the ocean’s carbon dioxide sequestration capacity.”

A sinking sweet spot

In their new study, the researchers set up an experiment to simulate a sinking particle of marine snow and its interactions at the microscale. The team synthesized particles similar to marine snow that they made from varying concentrations of calcium carbonate and bacteria — organisms that are often found feasting on the particles in the ocean.

“The ocean is a fairly dilute medium with respect to organic matter,” Babbin says. “So organisms like bacteria have to search for food. And particles of marine snow are like cheeseburgers for bacteria.”

The team designed a small microfluidic chip to contain the particles, and flowed seawater through the chip at various rates to simulate different sinking speeds in the ocean. Their experiments revealed that whenever particles hosted any bacteria, they also rapidly lost some calcium carbonate, which dissolved into the surrounding seawater. As bacteria feed on the particles’ organic material, the microbes excrete acidic waste products that act to dissolve the particles’ inorganic, ballasting calcium carbonate.

The researchers also found that the amount of calcium carbonate that dissolves depends on how fast the particles sink. They flowed seawater around the particles at slow, intermediate, and fast speeds and found that both slow and fast sinking limit the amount of calcium carbonate that’s dissolved. With slow sinking, particles don’t receive as much oxygen from their surroundings, which essentially suffocates any hitchhiking bacteria. When particles sink quickly, bacteria may be sufficiently oxygenated, but any waste products that they produce can be easily flushed away before they can dissolve the particles’ calcium carbonate.

At intermediate speeds, there is a sweet spot: Bacteria are sufficiently oxygenated and can also build up enough waste, enabling the microbes to efficiently dissolve calcium carbonate.

Overall, the work shows that bacteria can have a significant effect on marine snow’s ability to sink and sequester carbon in the deep ocean. Bacteria can be found everywhere, and particularly in the shallower ocean regions. Even if macroscale conditions in these upper layers should not dissolve calcium carbonate, the study finds bacteria working at the microscale most likely do.

The findings could explain oceanographers’ observations of dissolved calcium carbonate in shallow ocean regions. They also illustrate that bacteria and other microbes may be working against the ocean’s natural ability to sequester carbon, by dissolving marine snow’s ballast and slowing its descent into the deep ocean. As humans consider climate solutions that involve enhancing the ocean’s biological pump, the researchers emphasize that bacteria’s role must be taken into account.

“Insights from this work are vital to predict how ecosystems will respond to marine carbon dioxide removal attempts, and overall how the oceans will change in response to future climate scenarios,” says Borer, who carried out the study’s experiments as a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences.

This work was supported, in part, by the Simons Foundation, the National Science Foundation, and the Climate Project at MIT.


Improving AI models’ ability to explain their predictions

A new approach could help users know whether to trust a model’s predictions in safety-critical applications like health care and autonomous driving.


In high-stakes settings like medical diagnostics, users often want to know what led a computer vision model to make a certain prediction, so they can determine whether to trust its output.

Concept bottleneck modeling is one method that enables artificial intelligence systems to explain their decision-making process. These methods force a deep-learning model to use a set of concepts, which can be understood by humans, to make a prediction. In new research, MIT computer scientists developed a method that coaxes the model to achieve better accuracy and clearer, more concise explanations.

The concepts the model uses are usually defined in advance by human experts. For instance, a clinician could suggest the use of concepts like “clustered brown dots” and “variegated pigmentation” to predict that a medical image shows melanoma.

But previously defined concepts could be irrelevant or lack sufficient detail for a specific task, reducing the model’s accuracy. The new method extracts concepts the model has already learned while it was trained to perform that particular task, and forces the model to use those, producing better explanations than standard concept bottleneck models.

The approach utilizes a pair of specialized machine-learning models that automatically extract knowledge from a target model and translate it into plain-language concepts. In the end, their technique can convert any pretrained computer vision model into one that can use concepts to explain its reasoning.

“In a sense, we want to be able to read the minds of these computer vision models. A concept bottleneck model is one way for users to tell what the model is thinking and why it made a certain prediction. Because our method uses better concepts, it can lead to higher accuracy and ultimately improve the accountability of black-box AI models,” says lead author Antonio De Santis, a graduate student at Polytechnic University of Milan who completed this research while a visiting graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT.

He is joined on a paper about the work by Schrasing Tong SM ’20, PhD ’26; Marco Brambilla, professor of computer science and engineering at Polytechnic University of Milan; and senior author Lalana Kagal, a principal research scientist in CSAIL. The research will be presented at the International Conference on Learning Representations.

Building a better bottleneck

Concept bottleneck models (CBMs) are a popular approach for improving AI explainability. These techniques add an intermediate step by forcing a computer vision model to predict the concepts present in an image, then use those concepts to make a final prediction.

This intermediate step, or “bottleneck,” helps users understand the model’s reasoning.

For example, a model that identifies bird species could select concepts like “yellow legs” and “blue wings” before predicting a barn swallow.

But because these concepts are often generated in advance by humans or large language models (LLMs), they might not fit the specific task. In addition, even if given a set of pre-defined concepts, the model sometimes utilizes undesirable learned information anyway, which is a problem known as information leakage.

“These models are trained to maximize performance, so the model might secretly use concepts we are unaware of,” De Santis explains.

The MIT researchers had a different idea: Since the model has been trained on a vast amount of data, it may have learned the concepts needed to generate accurate predictions for the particular task at hand. They sought to build a CBM by extracting this existing knowledge and converting it into text a human can understand.

In the first step of their method, a specialized deep-learning model called a sparse autoencoder selectively takes the most relevant features the model learned and reconstructs them into a handful of concepts. Then, a multimodal LLM describes each concept in plain language.

This multimodal LLM also annotates images in the dataset by identifying which concepts are present and absent in each image. The researchers use this annotated dataset to train a concept bottleneck module to recognize the concepts.

They incorporate this module into the target model, forcing it to make predictions using only the set of learned concepts the researchers extracted.
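The final bottleneck step can be sketched as follows. The weights, dimensions, concept names, and the five-concept limit here are all illustrative assumptions, not the paper’s actual architecture; the point is that the prediction can only flow through a small set of concept activations:

```python
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES, N_CONCEPTS, N_CLASSES, TOP_K = 64, 20, 4, 5

# Hypothetical learned weights: features -> concept scores, concepts -> classes.
W_concept = rng.normal(size=(N_CONCEPTS, N_FEATURES))
W_class = rng.normal(size=(N_CLASSES, N_CONCEPTS))
CONCEPT_NAMES = [f"concept_{i}" for i in range(N_CONCEPTS)]

def predict_with_bottleneck(features):
    """Classify using only the top-k concept activations, so every
    prediction comes with a short, human-readable explanation."""
    scores = W_concept @ features
    keep = np.argsort(np.abs(scores))[-TOP_K:]  # the 5 strongest concepts
    bottleneck = np.zeros_like(scores)
    bottleneck[keep] = scores[keep]             # zero out all other concepts
    logits = W_class @ bottleneck
    explanation = [CONCEPT_NAMES[i] for i in sorted(keep)]
    return int(np.argmax(logits)), explanation

label, used_concepts = predict_with_bottleneck(rng.normal(size=N_FEATURES))
```

Because the classifier sees only the five retained concept scores, the returned list is a complete account of what the prediction was based on.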

Controlling the concepts

They overcame many challenges as they developed this method, from ensuring the LLM annotated concepts correctly to determining whether the sparse autoencoder had identified human-understandable concepts.

To prevent the model from using unknown or unwanted concepts, they restrict it to use only five concepts for each prediction. This also forces the model to choose the most relevant concepts and makes the explanations more understandable.

When they compared their approach to state-of-the-art CBMs on tasks like predicting bird species and identifying skin lesions in medical images, their method achieved the highest accuracy while providing more precise explanations.

Their approach also generated concepts that were more applicable to the images in the dataset. 

“We’ve shown that extracting concepts from the original model can outperform other CBMs, but there is still a tradeoff between interpretability and accuracy that needs to be addressed. Black-box models that are not interpretable still outperform ours,” De Santis says.

In the future, the researchers want to study potential solutions to the information leakage problem, perhaps by adding additional concept bottleneck modules so unwanted concepts can’t leak through. They also plan to scale up their method by using a larger multimodal LLM to annotate a bigger training dataset, which could boost performance.

“I’m excited by this work because it pushes interpretable AI in a very promising direction and creates a natural bridge to symbolic AI and knowledge graphs,” says Andreas Hotho, professor and head of the Data Science Chair at the University of Würzburg, who was not involved with this work. “By deriving concept bottlenecks from the model’s own internal mechanisms rather than only from human-defined concepts, it offers a path toward explanations that are more faithful to the model and opens many opportunities for follow-up work with structured knowledge.”

This research was supported by the Progetto Rocca Doctoral Fellowship, the Italian Ministry of University and Research under the National Recovery and Resilience Plan, Thales Alenia Space, and the European Union under the NextGenerationEU project.


Personal tech, social media, and the “decline of humanity”

In Compton Lecture at MIT, social psychologist Jonathan Haidt warns of dramatic global decay in cognition, attention spans, and civic life, and urges curbs to tech use.


In the latest of MIT’s Compton Lectures on Wednesday, social psychologist Jonathan Haidt presented a forceful analysis of the damage smartphones and social media are doing to our cognition, our civic fabric, and our children’s wellbeing, and called for renewed action to ward off their effects.

“Around the world, people are getting diminished,” Haidt said. “Less intelligent, less happy, less competent. And it’s happening very fast … My argument is that if we continue with current trends as AI is coming in, it’s going to accelerate. The decline of humanity is going to accelerate.”

Haidt is the Thomas Cooley Professor of Ethical Leadership at New York University’s Stern School of Business and the author of the recent bestseller “The Anxious Generation,” which suggests that the widespread adoption of social media in the 2010s has been especially damaging to young women, making them prone to anxiety and depression.

But as Haidt has continued to examine the effects of social media on society, he has started focusing on additional issues. Our inability to put our phones away, our compulsion to check social media, and the way we spend hours a day watching short-form videos may be causing problems that go far beyond any rise in anxiety and depression.

“It turns out, it’s not the biggest thing,” Haidt said. “There’s something bigger. It is the destruction of the human capacity to pay attention. Because this is affecting most people, including most adults. And if you imagine humanity with 10 to 50 percent of its attentional ability sucked out of it, there’s not much left. We’re not very capable of doing things if we can’t focus or stay on a task for more than 30 seconds.”

Whatever solution may emerge to these problems, Haidt declared, is going to have to come from “human agency. People see a problem, they figure out a way around it. That’s what I’m hoping to promote here [to] this very important audience. So please consider what I’m saying, these trends, and then work to change them.”

Haidt’s lecture, titled, “Life After Babel: Democracy and Human Development in the Fractured, Lonely World That Technology Gave Us,” was delivered before a capacity audience of over 400 people in MIT’s Huntington Hall (Room 10-250).

The lecture spanned a variety of related topics, with Haidt presenting chart after chart showing the onset of declines in cognition, educational achievement, and happiness, all of which seemed to begin soon after the widespread adoption of smartphones in the 2010s. The individual adoption of smartphones, he noted, has been compounded by the way schools brought internet-connected computing devices into classrooms around the same time.

“The biggest, the most costly mistake we’ve ever made in the history of American education [was] to put computers and high tech on people’s desks,” Haidt said.

Distractible students with shorter attention spans are reading fewer books, he noted; some cinema students cannot sit through films. The top quartile of students is continuing to do well, he noted, but for most students, proficiency levels have dipped notably since the 2010s.

“Fifty years of progress in education, 50 years of progress, up in smoke, gone,” Haidt said. “We’re back to where we were 50 years ago. That’s pretty big, that’s pretty serious.”

As Haidt mentioned multiple times in his remarks, he is not an opponent of all forms of technology, or even personal communication technology, but rather is seeking to mitigate its harmful effects.

“I love tech, I love modernity, we’re all dependent on it, I love my iPhone,” Haidt said. Just as he finished that sentence, an audience member’s cellphone started ringing loudly — drawing a huge laugh from the audience.

“I did not plant that, that was a truly spontaneous demonstration of what I’m talking about,” Haidt said.

Haidt was introduced by MIT President Sally A. Kornbluth, who called him “a leading voice for reforming society’s relationship with technology.” She praised Haidt’s work, noting that he wants to “encourage us to imagine a more positive role for technology in humanity’s future.”

The Karl Taylor Compton Lecture Series was introduced in 1957. It is named for MIT’s ninth president, who led the Institute from 1930 to 1948 and also served as chair of the MIT Corporation from 1948 to 1954.

Compton, as Kornbluth observed, helped MIT evolve from being more strictly an engineering school into “a great global university” with “a new focus on fundamental scientific research.” During World War II, she added, Compton “helped invent the longstanding partnership between the federal government and America’s research universities.”

Haidt received his undergraduate degree from Yale University and his PhD from the University of Pennsylvania. He taught on the faculty at the University of Virginia for 16 years before joining New York University. He has written several widely discussed books about contemporary civic life. Haidt observed that the problems stemming from device distraction and compulsion appear to have hit so-called Gen Z — those born from roughly the mid 1990s to the early 2010s — especially hard, though he emphasized that people in that cohort are essentially victims of circumstance.

“I am not blaming Gen Z,” Haidt said. “I am saying we raised our kids in a way — we allowed the technology companies to take over childhood. We allowed a few giant companies to own our children’s attention, to show them millions of short videos, to destroy their ability to pay attention, to stop them from reading books, and this is the result.”

For a portion of his remarks, Haidt also examined the consequences of social media for politics, showing data that chart the global diminishment of democracy since the 2010s, while the world has become soaked in misinformation and conflictual online interactions.

“That, I think, is what digital technology has done to us,” Haidt said. “It was supposed to connect us, but instead it has broken things, divided us, and made it very, very hard to ever have common facts, common truths, common stories again.”

Towards the end of his remarks, Haidt also speculated that the effects of using AI will be corrosive as well, intellectually and psychologically.

“AI is not exactly going to make us better at interacting with human beings,” Haidt said.

With all this in mind, what is to be done, to limit the intellectual and social damage from tech devices and social media? For one thing, Haidt suggested, we should be less impressed by high-tech innovations and social media.

“We need to disenthrall ourselves from technology,” Haidt said, paraphrasing a line written by President Abraham Lincoln. He added: “I suggest that we have a generally negative view … of social media and of AI.” This kind of “more emotionally negative or ambivalent view” will make it easier for us to reverse the way technology seems to control us.

As a practical matter, Haidt suggested, that means taking steps to limit our exposure to technology. His own public-advocacy group, The Anxious Generation Movement, suggests a set of four reforms: No smartphones for kids before they are high-school age; no social media before age 16; making schools phone-free, from bell to bell; and giving kids more independence, free play, and responsibility in the world.

Certainly there is movement toward some of these concepts. Some school districts in the U.S. are banning or limiting phone usage; Australia has also instituted a ban on social media for anyone under 16, while a handful of other countries have announced similar plans.

“There’s a gigantic techlash happening right now,” Haidt suggested. For all the sudden changes technology has introduced within the last 15 years, it is still possible, for now, for people to find a way out of our tech-induced predicament.

“The good news is, there is human agency,” Haidt said.


Seeds of something different

Kate Brown’s book, “Tiny Gardens Everywhere,” examines the hidden history of urban farming, its extensive use, and the politics of growing food.


In Berlin in the early 1870s, tourists began visiting a neighborhood called Barackia. It did not have museums, palaces, or any other typical attractions. Barackia was a working-class neighborhood where people grew their own food, lived in small dwellings, and established communal arrangements outside the normal reach of government. For a while, anyway: In 1872, authorities moved in and cleared out Barackia.

Still, the concept of small urban farming caught on, and by 1900, about 50,000 Berlin households were growing food, often in so-called arbor colonies. The practice has never really been abandoned: Today, by law, Germany provides residents the right to garden, still a very popular activity in urban areas.

“In a little space, you can grow a lot of produce,” says MIT Professor Kate Brown, author of a new history of urban gardening. “Once you set things up, it need not take too much of your time. You can have another job and still grow food. You go to Berlin, and many German cities, and you’re surrounded by these allotment gardens.”

But as the residents of Barackia found out, there is a politics that comes with growing your own food on common land. Other interests may want to claim or at least control the land themselves. Or they may want to tap into the labor being applied to gardening. One way or another, when many people start gardening for themselves, core questions about the organization of society seem to sprout up, too.

Brown examines urban gardening and its politics in her book, “Tiny Gardens Everywhere: The Past, Present, and Future of the Self-Provisioning City,” published by W.W. Norton. Brown is the Thomas M. Siebel Distinguished Professor in History of Science within MIT’s Program in Science, Technology, and Society. In a book with global scope, ranging from Estonia to Amsterdam and Washington, Brown contends that urban gardening has many positive spillover effects, from health and environmental benefits to community-building — apart from periods of pushback when others are trying to eliminate it.

“Community after community, people work together to create food provisioning practices,” Brown says. “And after people come together for food and gardening, then they start to solve other problems they have.”

Whose land?

“Tiny Gardens Everywhere” was several years in the making, featuring extensive archival research interspersed with firsthand material. Brown’s story begins in England, which had a very long tradition of people farming on common land, often in ingenious, productive ways. “Every bit of space was used,” Brown says.

Then in the late 18th century, the advent of “enclosures” for wealthy landowners privatized much land and changed social life for many. Poorer residents, even when given allotments, found them too small for self-sustaining farming.

“Private property is largely an English invention of the late 18th century,” Brown says. “Before that, and in many parts of the world to this day, people live with a communal sense of the ownership of the land.”

In Brown’s interpretation, the enclosure movement did not just claim more land for Britain’s upper class. In an industrializing society, it forced peasants into the factory labor force, whether in cities or in rural mills.

“Really what they were doing when they were enclosing land was trying to control labor, as much as controlling land,” Brown says. “Because of their reliance on the commons, peasants were self-sufficient. Who wants to go work in a factory when you could be out having fun in the forest? Expelling people was a way to force them to become homeless, the landless proletariat, with nothing to sell but their labor, for 10 or 18 hours a day.”

As Brown chronicles in detail, conflicts between communal agriculture and propertied classes have often arisen since then, in varying forms. And sometimes, in now-surprising places, because urban gardening has been more extensive than we realize.

A core section of “Tiny Gardens Everywhere” focuses on Washington, in the middle of the 20th century. During the Great Migration, which started a few decades earlier, African Americans moved north en masse, resettling in cities. They brought extensive knowledge with them about agricultural practices. In the part of Washington east of the Anacostia River, Black neighborhoods relied heavily on local gardening.

“They set up workers’ cooperatives and food cooperatives,” Brown observes. Despite often living in difficult circumstances, she adds, “I think it’s very interesting that people found really smart ways to adapt. If the neighborhood had no garbage collection, they’ll compost. No sewers, they’ll compost.”

Over time, though, authorities started claiming more land, designating homes to be torn down, and restricting the ability of residents to garden. And as Brown chronicles in the book, local officials have used restrictions on urban gardening as a form of social control, with one outcome being a homogenized social and physical landscape characterized by grass lawns for the affluent.

How much food?

Even if urban gardening has been fairly common in the past, it is natural to ask: How much food can it really provide? As Brown sees it, there is not one simple answer to that question. During World War II, for instance, victory gardens provided about 40 percent of all produce grown in the U.S. More recently, in 1996, 91 percent of the potatoes Russians ate came from urban allotment gardens occupying just 1.5 percent of the country’s arable land.

As Brown also points out in the book, we may not be growing as much produce on giant farms as we think. Only 2 percent of agricultural land in the U.S. is used to produce fruit and vegetables, for instance. The U.S., as a variety of analysts and writers have observed, has corn- and soy-heavy agricultural systems at its largest scales, principally yielding corn-based products. That means, Brown says, “They’re really inefficiently [working] to produce ethanol, corn syrup, chips, and cookies.”

In sum, she adds, “Yes, I do think it’s possible to take an urban space and grow a good part of the fruits and vegetables that people need there.”

It is possible, Brown believes, for things to change on this front. For instance, Florida, Illinois, and Maine, three fairly different states in terms of politics, all have laws providing the right to garden. Oklahoma has a similar bill in the works.

“I think this approach to looking at our right to grow food, to self-provision, to step outside of markets for our most essential needs, is something that represents a unifying set of desires in our hyperpolarized political landscape,” Brown says.

Other scholars have praised “Tiny Gardens Everywhere.” Sunil Amrith, a professor of history at Yale University, has said that Brown uses “enviable skill, craft, and insight” to show “that the past of small-scale urban provisioning contains the seeds of a more resilient future for us all.”

For her part, Brown hopes the book will not only appeal to readers, but spur them to become more active about the issue, as gardeners, local policy advocates, or both.

“One of the drumbeats of this book is that people do — and maybe we all should — win the right to garden,” Brown says. 


Studying the genetic basis of disease to explore fundamental biological questions

Eliezer Calo’s studies of craniofacial malformations have yielded insight into protein synthesis and embryonic development.


When Associate Professor Eliezer Calo PhD ’11 was applying for faculty positions, he was drawn to MIT not only because it’s his alma mater, but also because the Department of Biology places high value on exploring fundamental questions in biology.

In his own lab, Calo studies how craniofacial malformations arise. One motivation is to seek new treatments for those conditions, but another is to learn more about fundamental biological processes such as protein synthesis and embryonic development.

“We use genes that are mutated in disease to uncover fundamental biology,” Calo says. “Mutations that happen in disease are an experiment of nature, telling us that those are the important genes, and then we follow them up not only to understand the disease, but to fundamentally understand what the genes are doing.”

Calo’s work has led to new insights into how ribosomes form and how they control protein synthesis, as well as how the nucleolus, the birthplace of ribosomes in eukaryotic cells, has evolved over hundreds of millions of years.

In addition to earning his PhD at MIT, Calo is also an alumnus of MIT’s Summer Research Program (MSRP), which helps to prepare undergraduate students to pursue graduate education. Since starting his lab at MIT, Calo has made a point of serving as a research mentor for the program every summer.

“I feel that it’s important to pay back to the program that helped me realize what I wanted to do,” he says.

A nontraditional path

Growing up in a mountainous region of Puerto Rico, Calo was the first person from his family to finish high school. While attending the University of Puerto Rico at Rio Piedras, the largest university in Puerto Rico, he explored a few different majors before settling on chemistry.

One of Calo’s chemistry professors invited him to work in her lab, where he did a research project studying the pharmacokinetics of cell receptors found on the surface of astrocytes, a type of brain cell.

“It was a good mix of biology and chemistry,” he says. “I think that that was the catalyst to my pursuit of a career in the sciences.”

He learned about MSRP from Mandana Sassanfar, a senior lecturer in biology at MIT and director of outreach for several MIT departments, at an event hosted by the University of Puerto Rico for students interested in careers in science. He was accepted into the program, and during the summer after his junior year, he worked in the lab of Stephen Bell, an MIT professor of biology. That experience, he says, was transformative.

“Without that experience, I would have probably chosen another career,” Calo says. In Puerto Rico, “science was fun, but it was a struggle. We had to make everything from scratch, and then you spend more time making reagents than doing the experiments. When I came to MIT, I was always doing experiments.”

During that time, he realized he liked working in biology labs more than chemistry labs, so when he applied to graduate school, he decided to move into biology. He applied to five schools, including MIT. “Once MIT sent me the acceptance, I just had to say yes. There was no saying no.”

At MIT, Calo thought he might study biochemistry, but he ended up focusing on cancer biology instead, working with Jacqueline Lees, an MIT biology professor, to study the role of the tumor suppressor protein Rb.

After finishing his PhD, Calo felt burnt out and wasn’t sure if he wanted to continue along the academic track. His thesis committee advisors encouraged him to do a postdoc just to try it out, and he ended up going to Stanford University, where he fell in love with California and switched to a new research focus. Working with Joanna Wysocka, a professor of developmental biology at Stanford, he began investigating how development is affected by the regulation of proteins that make up cellular ribosomes — a topic his lab still studies today.

Returning to MIT

When searching for faculty jobs, Calo focused mainly on schools in California, but also sent an application to MIT. As he was deciding between offers from MIT and the University of California at Berkeley, a phone call from Angelika Amon, the late MIT professor of biology, convinced him to take the cross-country leap back to MIT.

“She had me on the phone for more than one hour telling me why I should come to MIT,” he recalls. “And that was so heartwarming that I could not say no.”

Since starting his lab in 2017, Calo has been studying how defects in the production of ribosomes give rise to diseases, in particular craniofacial malformations such as cleft palate.

Ribosomes, the organelles where protein synthesis occurs, consist of two subunits built from about 80 proteins along with ribosomal RNA. A longstanding question in biology has been why mutations that affect ribosome formation appear to primarily affect the development of the face, but not the rest of the body.

In a 2018 study, Calo discovered that this is because mutations that affect ribosomes can have secondary effects that influence craniofacial development. In embryonic cells that form the face, a mutation in a gene called TCOF1 activates p53 at a higher level than in other embryonic cells. High levels of p53 cause some of those cells to undergo programmed cell death, leading to Treacher Collins syndrome, a disorder that produces underdeveloped bones in the jaw and cheek.

His lab has shown that p53 overactivation is also responsible for craniofacial disorders caused by mutations in RNA splicing factors.

Calo’s work on ribosome formation also led him to explore another cell organelle known as the nucleolus, whose role is to help build ribosomes. In 2023, he found that TCOF1 is also critical for forming the three compartments that make up the nucleolus.

That finding, he says, could help to explain a major evolutionary shift that occurred around 300 million years ago, when the nucleolus transitioned from two to three compartments. This “tripartite” nucleolus is found in all reptiles, birds, and mammals.

“That was quite surprising,” Calo says. “Studying disease-related genes allowed us to understand a very fundamental biological process of how the nucleolus evolved, which has been a question in the field that nobody could figure out the answer for.”


X-raying rocks reveals their carbon-storing capacity

New research by MIT geophysicists could assist efforts to remove carbon from the atmosphere and store it underground.


To avoid the worst effects of climate change, many billions of metric tons of industrially generated carbon dioxide will have to be captured and stored away by the end of this century. One place to store such an enormous amount of greenhouse gas is in the Earth itself. If carbon dioxide were pumped into the cracks and crevices of certain underground rocks, the fluid would react with the rocks and solidify carbon into minerals. In this way, carbon dioxide could potentially be locked in the rocks in stable form for millions of years without escaping back into the atmosphere.

Some pilot projects are already underway to demonstrate such “carbon mineralization.” These efforts have shown promising results in terms of successfully mineralizing a large fraction of injected CO2. However, it’s less clear how the rocks will evolve in response. As carbonate minerals build up, could they clog up cracks and crevices, and ultimately limit the amount of CO2 that can be stored there?

In a new study appearing today in the journal AGU Advances, MIT geophysicists explored this question by injecting fluid into rocks and using X-ray imaging to reveal how the rocks’ pores and cracks changed as the fluid mineralized over time.

Their experiments showed that as fluid was pumped into a rock, the rock’s permeability (the ability of fluid to flow through the rock) dropped sharply. Meanwhile, the rock’s porosity (its total amount of empty space, in the form of pores, cracks, and crevices) remained relatively unchanged.

The researchers found that the minerals were precipitating out of the fluid in the narrower tunnels connecting larger pores, preventing the fluid from flowing into larger pore spaces. Even so, the fluid did keep flowing through the rock, albeit at a lower rate, and minerals continued to form in some cracks and crevices.

“This study gives you information about what the rock does during this complex mineralization process, which could give you ideas of how to engineer it in your favor,” says study co-author Matěj Peč, an associate professor of geophysics at MIT.

“If you were injecting CO2 into the Earth and saw a massive drop in permeability, some operators might think they clogged up the well,” adds co-author Jonathan Simpson, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “But as this study shows, in some cases, it might not matter that much. As long as you maintain some flow rate, you could still form minerals and sequester carbon.”

The study’s co-authors include EAPS Research Scientist Hoagy O’Ghaffari as well as Sharath Mahavadi and Jean Elkhoury of the Schlumberger-Doll Research Center.

Drilling down

Basalt is a type of erupted volcanic rock found in places such as Hawaii and Iceland. When fresh, it’s highly porous, with many pores, cracks, and fractures running through it. The material is also rich in iron, calcium, and magnesium. When these elements come in contact with fluid that is rich in carbon dioxide, they can dissolve and mix with the CO2, eventually forming a new carbonate mineral such as calcite or dolomite.

A project based in Iceland and piloted by the company CarbFix is currently injecting CO2-rich water into the region’s underground basalt to see how much of the gas can be converted and stored as minerals in the rock. The company’s runs have shown that more than 95 percent of the CO2 injected into the ground turns into minerals within two years. The project is proving that the chemistry works: CO2 can be stored as stone.

But the MIT team wondered how this mineralization process would change the basalt itself and its capacity to store carbon over time.

“Most studies investigating carbon mineralization have focused on optimizing the geochemistry, but we wanted to know how mineralization would affect real reservoir rocks,” Peč says.

Rocky X-rays

The team set out to study how the permeability and porosity of basalt changes as carbonate-rich fluid is pumped into and mineralized throughout the rock.

“Porosity refers to the total amount of open space in the rock, which could be in the form of vesicles, or fractures that connect vesicles, or even areas between sand grains,” Simpson explains. “Because there is so much variability in porosity patterns, there is no one-to-one relationship between porosity and permeability. You could have a lot of pores that are not necessarily connected. So, even if 20 percent of the rock is porous, if they’re not connected, then permeability would be zero.”

“The details of that are important to understand for all these problems of injecting fluids into the subsurface,” Peč emphasizes.

For their experiments, the team used samples of basalt that Peč and others collected during a trip to Iceland in 2023. They placed small samples of basalt in a custom-built holder connected to two tubes, through which they flowed two different fluids that, when mixed, quickly form carbonate minerals. The team chose this combination of fluids in order to speed up the mineralization process.

In the actual process of injecting CO2 into the ground, CO2 is mixed with water. When it is pumped through rock, the fluid first goes through a “dissolution” phase, in which it draws elements such as iron, calcium, and magnesium out from the basalt and into the CO2-rich fluid. This dissolution process can take some time, before the mineralization process, in which CO2 mixes with the drawn-out elements, can proceed.

The researchers used two different fluids that quickly mineralize when combined, in order to skip over the dissolution phase and efficiently study the effects of the mineralization process. By setting up their experiment inside an X-ray CT scanner (similar to the ones used for medical imaging in hospitals), the team could watch the mineralization occurring within the rock at an unprecedented level of detail, taking high-resolution, three-dimensional snapshots of the basalt at regular intervals over several days to weeks as they flowed the fluids through.

Their imaging revealed how the pores, cracks, and crevices in the rock evolved, and filled in with minerals as the fluid flowed through over time. Over multiple experiments, they found that the rock’s permeability quickly dropped within a day, by an order of magnitude. The rock’s porosity, however, decreased at a much slower rate. At the end of the longest-duration experiments, only about 5 percent of the original pore space was filled with new minerals.

“Our findings tell us that the minerals are initially forming in really small microcracks that connect the bigger pore spaces, and clogging up those spaces,” Simpson says. “You don’t need much to clog up the tiny microfractures. But when you do clog them up, that really drops the permeability.”
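The clogging effect Simpson describes can be made concrete with a toy pore-network calculation. Under Hagen-Poiseuille flow, the flow rate through a cylindrical channel scales as the fourth power of its radius, so the narrow throats dominate permeability while the large pores dominate porosity. The geometry and numbers below are hypothetical, purely a sketch of the scaling argument, not the study's actual model:

```python
import math

def tube_volume(r, L):
    # Pore volume of a cylindrical channel
    return math.pi * r**2 * L

def series_conductance(tubes):
    # Hagen-Poiseuille: conductance of a tube ~ r^4 / L (constants dropped);
    # for tubes in series, resistances (L / r^4) add.
    return 1.0 / sum(L / r**4 for r, L in tubes)

# Hypothetical geometry: one big pore (radius 100 µm) fed by a narrow throat (radius 10 µm)
pore_r, pore_L = 100e-6, 1e-3
throat_r, throat_L = 10e-6, 1e-3

porosity_before = tube_volume(pore_r, pore_L) + tube_volume(throat_r, throat_L)
perm_before = series_conductance([(pore_r, pore_L), (throat_r, throat_L)])

# Mineral growth halves the throat radius; the big pore is untouched.
clogged_r = throat_r / 2
porosity_after = tube_volume(pore_r, pore_L) + tube_volume(clogged_r, throat_L)
perm_after = series_conductance([(pore_r, pore_L), (clogged_r, throat_L)])

print(f"porosity drop: {100 * (1 - porosity_after / porosity_before):.2f}%")   # under 1%
print(f"permeability drop: {100 * (1 - perm_after / perm_before):.1f}%")       # over 90%
```

In this sketch, filling a tiny fraction of the total pore volume cuts the flow rate by more than an order of magnitude, mirroring the pattern the team observed.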

Even after the initial drop in permeability, however, the team could continue to flow fluid through, and minerals continued to form in tight spaces within the rock. This suggests that even when it seems like an underground reservoir is full, it might still be able to store more carbon.

The researchers also monitored the rock with ultrasonic sensors during each experiment and found that the sensors could track even small changes in the rock’s porosity. The less porous the rock was, as minerals filled it in, the faster sound waves traveled through the material. These results suggest that seismic waves could be a reliable way to monitor the porosity of underground rocks and ultimately their capacity to store carbon.

“Overall, we think that carbon mineralization seems like a promising avenue to permanently store large volumes of CO2,” Peč concludes. “There are plenty of reservoirs and they should be injectable over extended periods of time if our results can be extrapolated.”

This work was supported by MIT’s Advanced Carbon Mineralization Initiative funded by Beth Siegelman SM ’84 and Russ Siegelman ’84, with additional funding from the Chan-Zuckerberg Foundation.


New catalog more than doubles the number of gravitational-wave detections made by LIGO, Virgo, and KAGRA observatories

The latest crop of space-time wobbles includes a variety of heavy, fast-spinning, and lopsided colliding black holes.


When the densest objects in the universe collide and merge, the violence sets off ripples, in the form of gravitational waves, that reverberate across space and time, over hundreds of millions and even billions of years. By the time they pass through Earth, such cosmic ripples are barely discernible.

And yet, scientists are able to detect them, thanks to a global network of gravitational-wave observatories: the U.S.-based National Science Foundation Laser Interferometer Gravitational-Wave Observatory (NSF LIGO), the Virgo interferometer in Italy, and the Kamioka Gravitational Wave Detector (KAGRA) in Japan. Together, the observatories “listen” for faint wobbles in the gravitational field that could have come from far-off astrophysical smash-ups.

Now the LIGO-Virgo-KAGRA (LVK) Collaboration is publishing its latest compilation of gravitational-wave detections, presented in a forthcoming special issue of Astrophysical Journal Letters. From the findings, it appears that the universe is echoing all over with a kaleidoscope of cosmic collisions.

The LVK’s Gravitational-Wave Transient Catalog-4.0 (GWTC-4) comprises detections of gravitational waves from a portion of the observatories’ fourth and most recent observing run, which occurred between May 2023 and January 2024. During this nine-month period, the observatories detected 128 new gravitational-wave “candidates,” meaning that the signals are likely from extreme, far-off astrophysical sources. (The LVK has detected about 300 mergers so far in the fourth run, but not all of these appear yet in the catalog.)

This newest crop more than doubles the size of the gravitational-wave catalog, which previously contained 90 candidates compiled from all three previous observing runs.

“The beautiful science that we are able to do with this catalog is enabled by significant improvements in the sensitivity of the gravitational-wave detectors as well as more powerful analysis techniques,” says LVK member Nergis Mavalvala, who is dean of the MIT School of Science and the Curtis and Kathleen Marble Professor of Astrophysics.

“In the past decade, gravitational wave astronomy has progressed from the first detection to the observation of hundreds of black hole mergers,” says Stephen Fairhurst, a professor at Cardiff University and LIGO Scientific Collaboration spokesperson. “These observations enable us to better understand how black holes form from the collapse of massive stars, probe the cosmological evolution of the universe and provide increasingly rigorous confirmations of the theory of general relativity.”

“Pushing the edges”

Black holes are created when all the matter in a dying star collapses into a single point, making them among the densest objects in the universe. They often form in pairs, bound together by gravitational attraction. As the two spiral toward each other, they emit enormous amounts of energy in the form of gravitational waves before merging into a single, more massive black hole.

A binary black hole was the source of the very first gravitational-wave detection, made by NSF’s LIGO observatories in 2015, and colliding black holes are the source of many of the gravitational waves detected since then. Such “bread-and-butter” binaries typically consist of two black holes of similar size (usually several tens of times more massive than the sun) that merge into one larger black hole.

Gravitational waves can also be produced by the collision of a black hole with a neutron star, which is an extremely dense remnant core of a massive star. While the collision of two black holes only produces gravitational waves, a smash-up involving a neutron star can also generate light, which provides more information about the event that scientists can probe. In its first three observing runs, the LVK observatories detected signals from a handful of collisions involving a black hole and neutron star, as well as two collisions between two neutron stars.

The newest detections published today reveal a greater variety of binaries that produce gravitational waves. In addition to bread-and-butter black hole binaries, the updated catalog includes the heaviest black hole binary yet; a binary with asymmetric, lopsided masses; and a binary in which both black holes have exceptionally high spins. The catalog also holds two black hole-neutron star binaries.

“The message from this catalog is: We are expanding into new parts of what we call ‘parameter space’ and a whole new variety of black holes,” says co-author Daniel Williams, a research fellow at the University of Glasgow and a member of the LVK. “We are really pushing the edges, and are seeing things that are more massive, spinning faster, and are more astrophysically interesting and unusual.”

Unusual signals

The LIGO, Virgo, and KAGRA observatories detect gravitational waves using L-shaped, kilometer-scale instruments called interferometers. Scientists send laser light down the length of each arm and precisely measure the time it takes each beam to return to its source. Any slight difference in their timing can mean that a gravitational wave passed through and minutely wobbled the laser’s light.

For the first segment of the LVK’s fourth observing run, gravitational-wave detections were made using only LIGO’s identical interferometers — one located in Hanford, Washington, and the other in Livingston, Louisiana. Recent upgrades to LIGO’s detectors enabled them to search for signals from binary neutron stars as far out as 360 megaparsecs, or about 1 billion light-years away, and for signals from binaries including black holes tens of times farther away.
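The conversion behind the distance quoted above is straightforward: one parsec is about 3.26 light-years, so one megaparsec is about 3.26 million light-years. A quick sanity check:

```python
# Convert the quoted detector range of 360 megaparsecs to light-years.
LY_PER_PC = 3.2616  # standard astronomical conversion: light-years per parsec

def mpc_to_lightyears(d_mpc: float) -> float:
    return d_mpc * 1e6 * LY_PER_PC

d_ly = mpc_to_lightyears(360)
print(f"360 Mpc ≈ {d_ly / 1e9:.2f} billion light-years")  # ≈ 1.17 billion
```

The "about 1 billion light-years" in the text is the round-number version of this figure.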

“You can’t ever predict when a gravitational wave is going to come into your detector,” says co-author and LVK member Amanda Baylor, a graduate student at the University of Wisconsin at Milwaukee who was involved in the signal search process. “We could have five detections in one day, or one detection every 20 days. The universe is just so random.”

Among the more unusual signals that LIGO detected in the first phase of the O4 observing run was GW231123_135430, which is the heaviest black hole binary detected to date. Scientists estimate that the signal arose from the collision of two heavier-than-normal black holes, each roughly 130 times as massive as the sun. (Most of the detected merging black holes are around 30 solar masses.) The much heavier black holes of GW231123_135430 suggest that each may be a product of a prior collision of lighter “progenitor” black holes.

Another standout is GW231028_153006, which is a black hole binary with the highest inspiral spin, meaning that both black holes appear to be spinning very fast, at about 40 percent the speed of light. Again, scientists suspect that these black holes were also products of previous mergers that spun them up as they were created from two smaller, inspiraling black holes.

The O4 run also detected GW231118_005626 — an unusually lopsided pair, with one black hole twice as massive as the other. 
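The signal names above are not arbitrary: under the LVK naming convention, "GW" is followed by the UTC date and time of detection, in the form GWYYMMDD_HHMMSS. A small parser, written here purely for illustration:

```python
from datetime import datetime, timezone

def parse_gw_name(name: str) -> datetime:
    """Recover the UTC detection time encoded in an LVK signal name."""
    stamp = name.removeprefix("GW")  # e.g. "231123_135430"
    return datetime.strptime(stamp, "%y%m%d_%H%M%S").replace(tzinfo=timezone.utc)

t = parse_gw_name("GW231123_135430")
print(t.isoformat())  # 2023-11-23T13:54:30+00:00
```

So GW231123_135430 was detected on Nov. 23, 2023, and GW231028_153006 five days earlier, on Oct. 28.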

“One of the striking things about our collection of black holes is their broad range of properties,” says co-author LVK member Jack Heinzel, an MIT graduate student who contributed to the catalog’s analysis. “Some of them are over 100 times the mass of our sun, others are as small as only a few times the mass of the sun. Some black holes are rapidly spinning, others have no measurable spin. We still don’t completely understand how black holes form in the universe, but our observations offer a crucial insight into these questions.”

Cosmic connections

From the newest gravitational-wave detections, scientists have begun to make connections about the properties of black holes as a population.

“For instance, this dataset has increased our belief that black holes that collided earlier in the history of the universe could more easily have had larger spins than the ones that collided later,” says LVK member Salvatore Vitale, associate professor of physics at MIT and member of the MIT LIGO Lab.

This idea raises interesting questions about what sort of conditions could have spun up black holes in the early universe.

The new detections have also allowed scientists to test Albert Einstein’s general theory of relativity, which describes gravity as a geometric property of space and time.

“Black holes are one of the most iconic and mind-bending predictions of general relativity,” says co-author and LVK member Aaron Zimmerman, associate professor of physics at the University of Texas at Austin, adding that when black holes collide, they “shake up space and time more intensely than almost any other process we can imagine observing. When testing our physical theories, it’s good to look at the most extreme situations we can, since this is where our theories are most likely to break down, and where we have the best chance of discovery.”

Scientists put Einstein’s theory to the test using GW230814_230901, which is one of the “loudest” gravitational-wave signals observed to date. The surprisingly clear signal gave scientists a chance to probe it in detail, to see if any aspects of the signal might deviate from what Einstein’s theory predicts. This signal pushed the limits of their tests of general relativity, passing most with flying colors but illustrating how environmental noise can challenge others in such an extreme scenario.

“So far, the theory is passing all our tests,” Zimmerman says. “But we’re also learning that we have to make even more accurate predictions to keep up with all the data the universe is giving us.”

The updated catalog is also helping scientists to nail down a key mystery in cosmology: How fast is the universe expanding today? Scientists have tried to answer this by measuring a rate known as the Hubble constant. Various methods, using different astrophysical sources, have given conflicting answers.

Gravitational waves offer an alternative way to measure the Hubble constant, since scientists are able to work out, in relatively straightforward fashion, how far these waves traveled from their source.

“Merging black holes have a really unique property: We can tell how far away they are from Earth just from analyzing their signals,” says co-author and LVK member Rachel Gray, a lecturer at the University of Glasgow who was involved in the cosmological interpretations of the catalog’s data. “So, every merging black hole gives us a measurement of the Hubble constant, and by combining all of the gravitational wave sources together, we can vastly improve how accurate this measurement is.”

By analyzing all the gravitational-wave detections in the LVK’s entire catalog, scientists have come up with a new, independent estimate of the Hubble constant, which suggests the universe is expanding at a rate of 76 kilometers per second per megaparsec (a megaparsec is about 3.26 million light-years).
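Hubble's law relates a distant source's recession velocity to its distance, v = H0 × d. A minimal sketch using the catalog's estimate (the 100 Mpc example distance is arbitrary, chosen just to show the scale):

```python
# Hubble's law: recession velocity grows linearly with distance, v = H0 * d.
H0 = 76.0  # km/s per megaparsec, the estimate quoted above

def recession_velocity(d_mpc: float, h0: float = H0) -> float:
    """Recession velocity (km/s) of a source d_mpc megaparsecs away."""
    return h0 * d_mpc

print(recession_velocity(100))  # a source 100 Mpc away recedes at 7600.0 km/s
```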

“It’s still early days for this method, and we expect to significantly improve our precision as we detect more gravitational wave sources,” Gray says.

“Each new gravitational-wave detection allows us to unlock another piece of the universe’s puzzle in ways we couldn’t just a decade ago,” says Lucy Thomas, who led part of the catalog’s analysis, and is a postdoc in the Caltech LIGO Lab. “It’s incredibly exciting to think about what astrophysical mysteries and surprises we can uncover with future observing runs.”


Nitrous oxide, a product of fertilizer use, may harm some soil bacteria

While some N2O is produced naturally at the plant root, agricultural practices can increase its levels, to the detriment of some microbes that support plant growth.


Plant growth is supported by millions of tiny soil microbes competing and cooperating with each other as they perform important roles at the plant root, including improving access to nutrients and protecting against pathogens. As a byproduct of their metabolism, soil microbes can also produce nitrous oxide, or N2O, a potent greenhouse gas that has mostly been studied for its impact on the climate. While some N2O occurs naturally, its production can spike due to fertilizer application and other factors.

While it has long been believed that nitrous oxide doesn’t meaningfully interact with living organisms, a new paper by two MIT researchers shows that it may in fact shape microbial communities, making some bacterial strains more likely to grow than others.

Based on the prevalence of the biological processes disrupted by nitrous oxide, the researchers estimate about 30 percent of all bacteria with sequenced genomes are susceptible to nitrous oxide toxicity, suggesting the substance could play an important and underappreciated role in the intricate microbial ecosystems that influence plant growth.

The researchers have published their findings today in mBio, a journal of the American Society for Microbiology. If their lab findings carry over to agricultural settings, it could influence the way farmers go about everyday tasks that expose crops to spikes in nitrous oxide, such as watering and fertilization.

“This work suggests N2O production in agricultural settings is worth paying attention to for plant health,” says senior author Darcy McRose, MIT’s Thomas D. and Virginia W. Cabot Career Development Professor, who wrote the paper with lead author and PhD student Philip Wasson. “It hasn’t been on people’s radar, but it is particularly harmful for certain microbes. This could be another knock against N2O in addition to its climate impact. With more research, you might be able to understand how the timing of N2O production influences these microbial relationships, and that timing could be managed to improve crop health.”

A toxic gas

Nitrous oxide was shown to be toxic decades ago when researchers realized it can deactivate vitamin B12 in the human body. Since then, it has mostly drawn attention as a long-lived greenhouse gas that can eat away at the ozone layer. But when it comes to agricultural settings, most people have assumed it doesn’t interact with organisms growing in the soil around the plant root, a region called the rhizosphere.

“In general, there’s an assumption that N2O is not harmful at all despite this history of published studies showing that it can be toxic in specific contexts,” says McRose, who joined the faculty of the Department of Civil and Environmental Engineering in 2022. “People have not extended that understanding to microbial communities in the rhizosphere.”

While some studies have shown nitrous oxide sensitivity in a handful of microorganisms, less is known about how it impacts the distribution of microbial communities at the plant root. McRose and Wasson sought to fill that research gap.

They started by looking at a ubiquitous process that cells use to grow called methionine biosynthesis. Methionine biosynthesis can be carried out by enzymes that are dependent on B12 — and by other enzymes that are not. Many bacteria have both types.

Using a well-studied microbe named Pseudomonas aeruginosa, the researchers genetically removed the enzyme that isn’t dependent on B12 and found the microbe became sensitive to nitrous oxide, with its growth harmed even by nitrous oxide it produced itself.

Next the researchers looked at a synthetic microbial community from the plant Arabidopsis thaliana, finding many root-based microbes were also sensitive to nitrous oxide. Combining sensitive microbes with nitrous oxide-producing bacteria hampered their growth.

“This suggests that N2O-producing bacteria can affect the survival of their immediate neighbors,” Wasson explains. Together, the experiments confirmed the researchers’ suspicion that the production of nitrous oxide can hamper the growth of soil bacteria dependent on vitamin B12 to make methionine.

“These results suggest nitrous oxide producers shape microbial communities,” McRose says. “In the lab the result is very clear, and the work goes beyond just looking at a single organism. The co-culture experiments aren’t the same as a study in the field, but it’s a strong demonstration.”

From the lab to the farm

On farms, soil commonly experiences spikes of nitrous oxide lasting days or weeks after the addition of nitrogen fertilizer, rainfall, thawing, and other events. The researchers caution that their lab experiments are only the first step toward understanding how nitrous oxide affects microbial populations in agricultural settings.

Wasson calls the paper a proof of concept and plans to study agricultural soil next.

“In agricultural environments, N2O has been historically high,” Wasson says. “We want to see if we can detect a signature for this N2O exposure through genome sequencing studies, where the only microbes sticking around are not sensitive to N2O. This is the obvious next step.”

McRose says the findings could lead to a new way for researchers and farmers to think about nitrous oxide.

“What’s important and exciting about this case is it predicts that microbes with one version of an enzyme are going to be sensitive to N2O and those with a different version of the enzyme are not going to be sensitive,” McRose says. “This suggests that in the environment, exposure to N2O is going to select for certain types of organisms based on their genomic content, which is a highly testable hypothesis.”

The work was supported, in part, by the MIT Research Support Committee and an MIT Health and Life Sciences Collaborative Graduate Fellowship (HEALS).


How some skills become second nature

Patterns of gaze and attention can reveal how some people unconsciously figure out how to master a task, new research shows.


Expertise isn’t easy to pass down. Take riding a bike: A seasoned cyclist might talk a beginner through the basics of how to sit and when to push off. But other skills, like how hard to pedal to keep balanced, are more intuitive and harder to articulate. This implicit know-how is known as tacit knowledge, and very often, it can only be learned with experience and time.

But a team of MIT engineers wondered: Could an expert’s unconscious know-how be accessed, and even taught, to quickly bring a novice up to an expert’s level?

The answer appears to be “yes,” at least for a particular type of visual-learning task.

In a study published today in the Journal of Neural Engineering, the engineers identified tacit knowledge in volunteers who were tasked with classifying images of various shapes and patterns. As the volunteers were shown images to organize, the team recorded their eye movements and brain activity to measure their visual focus and cognitive attention, respectively.

The measurements showed that, over time, the volunteers shifted their focus and attention to a part of each image that made it easier to classify. However, when asked directly, the volunteers were not aware that they had made such a shift. The researchers concluded that this unconscious shift in attention and focus was a form of tacit knowledge that the volunteers possessed, even if they could not articulate it. What’s more, when the volunteers were made aware of this tacit knowledge, their accuracy in classifying images improved significantly.

The study is the first to directly show that visual attention can reveal unconscious, tacit knowledge during image classification tasks. It also finds for the first time that bringing this concealed knowledge to the surface can enhance experts’ performance.

While the results are specific to the study’s experiment, the researchers say they suggest that some forms of hidden know-how can be made explicit and applied to boost one’s learning experience. They suspect that tacit knowledge could be accessed for disciplines that require keen observation skills, including certain physical trades and crafts, sports, and image analysis, such as medical X-ray diagnoses.

“We as humans have a lot of knowledge, some that is explicit that we can translate into books, encyclopedias, manuals, equations. The tacit knowledge is what we cannot verbalize, that’s hidden in our unconscious,” says study author Alex Armengol-Urpi, a research scientist in MIT’s Department of Mechanical Engineering. “If we can make that knowledge explicit, we can then allow for it to be transferred easier, which can help in education and learning in general.”

The study’s co-authors include Andrés F. Salazar-Gomez, research scientist at the MIT Media Lab; Pawan Sinha, professor of vision and computational neuroscience in MIT’s Department of Brain and Cognitive Sciences; and Sanjay Sarma, the Fred Fort Flowers (1941) and Daniel Fort Flowers (1941) Professor in Mechanical Engineering.

Hidden gaze

The concept of tacit knowledge is credited to the scientist and philosopher Michael Polanyi, who in the mid-20th century was the first to investigate the notion that “we know more than we can tell.” His insights revealed that humans can hold a form of knowledge that is internalized, almost second nature, and often difficult to express or translate to others.

Since Polanyi’s work, many studies have highlighted how tacit knowledge may play a part in perfecting certain skills, spanning everything from diagnosing medical images to discerning the sex of cats from images of their faces.

For Armengol-Urpi, these studies raised a question: Could a person’s tacit knowledge be revealed through unconscious signals, such as patterns in their eye movements? His PhD work focused on visual attention, and he had developed methods to study how humans focus their attention, by using cameras to follow the direction of their gaze, and electroencephalography (EEG) monitors to record their brain activity. In his research, he learned of a previous study that used similar methods to investigate how radiologists diagnose nodules in X-ray images. That study showed that the doctors unconsciously focused on areas of an image that helped them to correctly detect the nodules.

“That paper didn’t focus on tacit knowledge, but it suggested that there are some hidden clues in our gaze that could be explored further,” Armengol-Urpi says.

The shape of knowledge

For their new study, the team looked at whether they could identify signs of tacit knowledge from measurements of visual focus and attention. In their experiment, they asked 30 volunteers to look sequentially at over 120 images. The volunteers could look at each image for several seconds and were then asked to classify it as belonging to either group A or group B before being shown the next image.

Each image contained two simple shapes, one on each side: some combination of squares, triangles, and circles, with different colors and patterns for each shape. The researchers designed the images such that they should be classified into one of two groups, based on an intricate combination of shape, color, and pattern. Importantly, only one side of each image was relevant for the classification.

The volunteers, however, were given no guidelines on how to classify the images. Therefore, for about the first half of the experiment, they were considered “novices,” and more or less guessed at their classifications. Over time, and many more images, their accuracy improved to a level that the researchers considered “expert.” Throughout the experiment, the team used cameras to follow each participant’s eye movements, as a measure of visual focus.

They also outfitted volunteers with EEG sensors to record their brain waves, which they used as a measure of cognitive attention. They designed each image to show two shapes, each of which flickered at different, imperceptible frequencies. They found they could identify where a volunteer’s attention landed, based on which shape’s flicker their brain waves synced up with.
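
The flicker trick is a standard EEG method (often called steady-state visual evoked potentials): a stimulus flickering at a fixed rate drives measurable brain activity at that same rate. The sketch below is a simplified, synthetic-data illustration of the idea, not the study’s actual pipeline. It embeds one of two hypothetical flicker frequencies in a noisy signal, then uses an FFT to see which frequency carries more spectral power:

```python
import numpy as np

# Simplified frequency-tagging sketch (synthetic data, not the study's pipeline).
# Two shapes flicker at different rates; the attended shape's frequency shows
# up more strongly in the recorded signal's spectrum.
rng = np.random.default_rng(0)
fs = 250.0                    # sampling rate, Hz
t = np.arange(0, 4.0, 1 / fs)
f_left, f_right = 7.5, 12.0   # hypothetical flicker frequencies, Hz

# Simulate a recording in which attention is on the 12 Hz shape
eeg = (0.3 * np.sin(2 * np.pi * f_left * t)
       + 1.0 * np.sin(2 * np.pi * f_right * t)
       + 0.5 * rng.standard_normal(t.size))

# Power at each candidate frequency via the FFT
spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def power_at(f):
    return spectrum[np.argmin(np.abs(freqs - f))]

attended = f_right if power_at(f_right) > power_at(f_left) else f_left
print(f"Attention inferred on the {attended} Hz shape")
```

The flicker rates, amplitudes, and noise level here are invented; the real study's imperceptible frequencies and electrode processing are more involved, but the decision rule is the same comparison of tagged frequencies.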

For each volunteer, the team created maps of where their gaze and attention were focused, both during their novice and expert phases. Overall, these maps showed that in the beginning, the volunteers focused on all parts of an image as they tried to make sense of how to classify it. Toward the end, as they got a grasp of the exercise and improved their accuracy, their attention shifted to just one side of each image. This side happened to be the side that the researchers designed to be most relevant, while the other side was just random noise.

The maps showed that the volunteers picked up some knowledge of how to accurately classify the images. But when they were given a survey and asked to articulate how they learned the task, they always maintained that they focused on each entire image. It seemed their actual shift in focus was an unconscious, tacit skill.

“They were unconsciously focusing their attention on the part of the image that was actually informative,” Armengol-Urpi says. “So the tacit knowledge they had was hidden inside them.”

Going a step further, the team then showed each participant the maps of their gaze and attention, and how the maps changed from their novice to expert phases. When they were then shown additional images, the volunteers seemed to use this once-tacit knowledge, and further improved their classification accuracy.

“We are currently extending this approach to other domains where tacit knowledge plays a central role,” says Armengol-Urpi, who is exploring tacit knowledge in skilled crafts and sports such as glassblowing and table tennis, as well as in diagnosing medical imaging. “We believe the underlying principle — capturing and reinforcing implicit expertise through physiological signals — can generalize to a wide range of perceptual and skill-based domains.”

This research was supported, in part, by Takeda Pharmaceutical Company.


A “ChatGPT for spreadsheets” helps solve difficult engineering challenges faster

The approach could help engineers tackle extremely complex design problems, from power grid optimization to vehicle design.


Many engineering challenges come down to the same headache — too many knobs to turn and too few chances to test them. Whether tuning a power grid or designing a safer vehicle, each evaluation can be costly, and there may be hundreds of variables that could matter.

Consider car safety design. Engineers must integrate thousands of parts, and many design choices can affect how a vehicle performs in a collision. Classic optimization tools can struggle when searching for the best combination.

MIT researchers developed a new approach that rethinks how a classic method, known as Bayesian optimization, can be used to solve problems with hundreds of variables. In tests on realistic engineering-style benchmarks, like power-system optimization, the approach found top solutions 10 to 100 times faster than widely used methods.

Their technique leverages a foundation model trained on tabular data that automatically identifies the variables that matter most for improving performance, repeating the process to home in on better and better solutions. Foundation models are huge artificial intelligence systems trained on vast, general datasets, which allows them to adapt to many different applications.

The researchers’ tabular foundation model does not need to be constantly retrained as it works toward a solution, increasing the efficiency of the optimization process. The technique also delivers greater speedups for more complicated problems, so it could be especially useful in demanding applications like materials development or drug discovery.

“Modern AI and machine-learning models can fundamentally change the way engineers and scientists create complex systems. We came up with one algorithm that can not only solve high-dimensional problems, but is also reusable so it can be applied to many problems without the need to start everything from scratch,” says Rosen Yu, a graduate student in computational science and engineering and lead author of a paper on this technique.

Yu is joined on the paper by Cyril Picard, a former MIT postdoc and research scientist, and Faez Ahmed, associate professor of mechanical engineering and a core member of the MIT Center for Computational Science and Engineering. The research will be presented at the International Conference on Learning Representations.

Improving a proven method

When scientists seek to solve a multifaceted problem but have expensive methods to evaluate success, like crash testing a car to know how good each design is, they often use a tried-and-true method called Bayesian optimization. This iterative method finds the best configuration for a complicated system by building a surrogate model that helps estimate what to explore next while considering the uncertainty of its predictions.

But the surrogate model must be retrained after each iteration, which can quickly become computationally intractable when the space of potential solutions is very large. In addition, scientists need to build a new model from scratch any time they want to tackle a different scenario.
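
The classic loop can be sketched in a few lines. The toy below is illustrative, not the researchers’ system: it minimizes a stand-in objective with a tiny Gaussian-process surrogate that is refit at every iteration, which is exactly the retraining step that becomes expensive at scale:

```python
import numpy as np

# Minimal Bayesian-optimization sketch (illustrative toy, not the paper's method).
# Surrogate: a small Gaussian process with an RBF kernel, refit each iteration.
# Acquisition: lower confidence bound (mean - 2 * std) over a candidate grid.

def objective(x):
    # Stand-in for an expensive evaluation, e.g. one crash-test simulation
    return (x - 2.0) ** 2

def rbf(a, b, length=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

grid = np.linspace(0.0, 5.0, 201)   # candidate designs
X = np.array([0.0, 5.0])            # initial evaluations
y = objective(X)

for _ in range(10):
    # Refit the GP posterior on all evaluations so far (jitter for stability)
    K = rbf(X, X) + 1e-8 * np.eye(X.size)
    Ks = rbf(grid, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    std = np.sqrt(np.clip(var, 0.0, None))

    # Evaluate wherever the lower confidence bound is smallest
    x_next = grid[np.argmin(mean - 2.0 * std)]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

best = X[np.argmin(y)]
print(f"best design found: x = {best:.2f}")  # true optimum is x = 2
```

Each pass refits the surrogate on every evaluation made so far, which is cheap for a dozen points in one dimension but grows quickly as evaluations and dimensions accumulate.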

To address both shortcomings, the MIT researchers utilized a generative AI system known as a tabular foundation model as the surrogate model inside a Bayesian optimization algorithm.

“A tabular foundation model is like a ChatGPT for spreadsheets. The input and output of these models are tabular data, which in the engineering domain is much more common to see and use than language,” Yu says.

Just like large language models such as ChatGPT, Claude, and Gemini, the model has been pre-trained on an enormous amount of tabular data. This makes it well-equipped to tackle a range of prediction problems. In addition, the model can be deployed as-is, without the need for any retraining.

To make their system more accurate and efficient for optimization, the researchers employed a trick that enables the model to identify features of the design space that will have the biggest impact on the solution.

“A car might have 300 design criteria, but not all of them are the main driver of the best design if you are trying to increase some safety parameters. Our algorithm can smartly select the most critical features to focus on,” Yu says.

It does this by using a tabular foundation model to estimate which variables (or combinations of variables) most influence the outcome.

It then focuses the search on those high-impact variables instead of wasting time exploring everything equally. For instance, if the size of the front crumple zone significantly increased and the car’s safety rating improved, that feature likely played a role in the enhancement.
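
In the paper, that influence estimate comes from the tabular foundation model itself. As a stand-in, the sketch below uses a simple finite-difference screen on a hypothetical "safety score" (the weights and variable count are invented) to show the idea of ranking variables and searching only over the most influential ones:

```python
import random

# Illustrative variable-screening sketch (not the paper's algorithm): rank
# design variables by how much perturbing each one changes the outcome,
# then restrict the search to the top-ranked variables.

def safety_score(x):
    # Hypothetical stand-in objective: only variables 0 and 3 really matter
    return 5.0 * x[0] - 3.0 * x[3] + 0.01 * sum(x[1:3]) + 0.01 * sum(x[4:])

random.seed(0)
n_vars = 10
base = [random.random() for _ in range(n_vars)]

# Finite-difference sensitivity: |f(x + h*e_i) - f(x)| / h for each variable i
h = 0.1
sensitivity = []
for i in range(n_vars):
    bumped = list(base)
    bumped[i] += h
    sensitivity.append(abs(safety_score(bumped) - safety_score(base)) / h)

# Keep only the two most influential variables for the focused search
top = sorted(range(n_vars), key=lambda i: sensitivity[i], reverse=True)[:2]
print("most influential variables:", sorted(top))  # -> [0, 3]
```

A finite-difference screen needs one extra evaluation per variable, which is exactly what becomes unaffordable when each evaluation is a crash test; the appeal of a pretrained surrogate is getting a comparable ranking from data already collected.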

Bigger problems, better solutions

One of their biggest challenges was finding the best tabular foundation model for this task, Yu says. Then they had to connect it with a Bayesian optimization algorithm in such a way that it could identify the most prominent design features.

“Finding the most prominent dimension is a well-known problem in math and computer science, but coming up with a way that leveraged the properties of a tabular foundation model was a real challenge,” Yu says.

With the algorithmic framework in place, the researchers tested their method by comparing it to five state-of-the-art optimization algorithms.

On 60 benchmark problems, including realistic situations like power grid design and car crash testing, their method consistently found the best solution between 10 and 100 times faster than the other algorithms.

“When an optimization problem gets more and more dimensions, our algorithm really shines,” Yu added.

But their method did not outperform the baselines on all problems, such as robotic path planning. This likely indicates that such scenarios were not well-represented in the model’s training data, Yu says.

In the future, the researchers want to study methods that could boost the performance of tabular foundation models. They also want to apply their technique to problems with thousands or even millions of dimensions, like the design of a naval ship.

“At a higher level, this work points to a broader shift: using foundation models not just for perception or language, but as algorithmic engines inside scientific and engineering tools, allowing classical methods like Bayesian optimization to scale to regimes that were previously impractical,” says Ahmed.

“The approach presented in this work, using a pretrained foundation model together with high‑dimensional Bayesian optimization, is a creative and promising way to reduce the heavy data requirements of simulation‑based design. Overall, this work is a practical and powerful step toward making advanced design optimization more accessible and easier to apply in real-world settings,” says Wei Chen, the Wilson-Cook Professor in Engineering Design and chair of the Department of Mechanical Engineering at Northwestern University, who was not involved in this research.


Injectable “satellite livers” could offer an alternative to liver transplantation

The engineered tissue grafts could take on the liver’s function and help thousands of people with liver failure.


More than 10,000 Americans who suffer from chronic liver disease are on a waitlist for a liver transplant, but there are not enough donated organs for all of those patients. Additionally, many people with liver failure aren’t eligible for a transplant if they are not healthy enough to tolerate the surgery.

To help those patients, MIT engineers have developed “mini livers” that could be injected into the body and take over the functions of the failing liver.

In a new study in mice, the researchers showed that these injected liver cells could remain viable in the body for at least two months, and they were able to generate many of the enzymes and other proteins that the liver produces.

“We think of these as satellite livers. If we could deliver these cells into the body, while leaving the sick organ in place, that would provide booster function,” says Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and of Electrical Engineering and Computer Science at MIT, and a member of MIT’s Koch Institute for Integrative Cancer Research and the Institute for Medical Engineering and Science (IMES).

Bhatia is the senior author of the new study, which appears today in the journal Cell Biomaterials. MIT postdoc Vardhman Kumar is the paper’s lead author.

Restoring liver function

The human liver plays a role in about 500 essential functions, including regulation of blood clotting, removing bacteria from the bloodstream, and metabolizing drugs. Most of these functions are performed by cells called hepatocytes.

Over the past decade, Bhatia’s lab has been working on ways to restore hepatocyte function without a surgical liver transplant. One possible approach is to embed hepatocytes into a biomaterial such as a hydrogel, but these gels also have to be surgically implanted.

Another option is to inject hepatocytes into the body, which eliminates the need for surgery. In this study, Bhatia’s lab sought to improve on this strategy by providing an engineered niche that could enhance the cells’ survival and facilitate noninvasive monitoring of graft health.

To achieve that, the researchers came up with the idea of injecting cells along with hydrogel microspheres that would help them stay together and form connections with nearby blood vessels. These spheres have special properties that allow them to act like a liquid when they are closely packed together, so they can be injected through a syringe and then regain their solid structure once inside the body.

In recent years, researchers have explored using hydrogel microspheres to promote wound healing, as they help cells to migrate into the spaces between the spheres and build new tissue. In the new study, the MIT team adapted them to help hepatocytes form a stable tissue graft after injection.

“What we did is use this technology to create an engineered niche for cell transplantation,” Kumar says. “If the cells are injected in the absence of these spheres, they would not integrate efficiently with the host, but these microspheres provide the hepatocytes with a niche where they can stay localized and become connected to the host circulation much faster.”

The injected mixture also includes fibroblast cells — supportive cells that help the hepatocytes survive and promote the growth of blood vessels into the tissue.

Working with Nicole Henning, an ultrasound research specialist at the Koch Institute, the researchers developed a way to inject the cell mixture using a syringe guided by ultrasound. After injection, the researchers can also use ultrasound to monitor the long-term stability of the implant.

In this study, the mini livers were injected into the fat tissue in the belly. In the future, similar grafts could be delivered to other sites in the body, such as into the spleen or near the kidneys. As long as they have enough space and access to blood vessels, the injected hepatocytes can function similarly to hepatocytes in the liver.

“For a vast majority of liver disorders, the graft does not need to sit close to the liver,” Kumar says.

An alternative to transplantation

In tests in mice, the researchers injected the mixture of liver cells and microspheres into an area of fatty tissue known as the perigonadal adipose tissue. Once the cells are localized in the body, they form a stable, compact structure. Over time, blood vessels begin to grow into the graft area, helping the injected hepatocytes to stay healthy.

“The new blood vessels formed right next to the hepatocytes, which is why they were able to survive,” Kumar says. “They were able to get the nutrients delivered right to them, they were able to function the way they're supposed to, and they produced the proteins that we expect them to.”

After injection, the cells remained viable and able to secrete specialized proteins into the host circulation for eight weeks, the length of the study. That suggests that the therapy could potentially work as a long-term treatment for liver disease, the researchers say.

“The way we see this technology is it can provide an alternative to surgery, but it can also serve as a bridge to transplantation where these grafts can provide support until a donor organ becomes available,” Kumar says. “And if we think they might need another therapy or more grafts, the barriers to do that are much less with this injectable technology than undergoing another surgery.”

With the current version of this technology, patients would likely need to take immunosuppressive drugs, but the researchers are exploring the possibility of developing “stealthy” hepatocytes that could evade the immune system, or using the hydrogel microspheres to deliver immunosuppressants locally.

The research was funded by the Koch Institute Support (core) grant from the National Cancer Institute, the National Institutes of Health, the Wellcome Leap HOPE Program, a National Science Foundation Graduate Research Fellowship, and the Howard Hughes Medical Institute.


Coping with catastrophe

Japan incorporates more disaster planning into its buildings and public spaces than any other nation. Miho Mazereeuw’s new book explains how they do it.


Each April in Japan, people participate in a tradition called “hanami,” or cherry-blossom viewing, where they picnic under the blooming trees. The gatherings serve a second purpose: the presence of so many people, often along waterways, helps solidify riverbanks and protect them from spring floods, addressing, however incrementally, the threat of natural disaster.

The practice of creating things that also protect against disasters can be seen all over Japan, where many new or renovated school buildings have design features unfamiliar to students elsewhere. In Tokyo, one elementary school has a roof swimming pool that stores water and is used to help the building’s toilets flush, plus an additional rainwater catchment tank and exterior stairs leading to a large balcony that wraps around one side of the building.

Why? Well, Japan is prone to natural disasters, such as tsunamis, earthquakes, and flooding. The country’s schools often double as evacuation sites for local residents, and design practices increasingly reflect this. In normal times, the roof pool is where students learn to swim and helps keep the school cool, and the large balcony is used by spectators watching the adjacent school athletics field. In emergencies, the stored water is crucial, and the exterior stairs help people ascend quickly to the gymnasium, which is built on the second floor to keep evacuees safer during flooding.

Meanwhile, in one Tokyo district, rooftop solar power is now common. Some schools feature skylights and courtyards to bring in natural light. Again, these architectural features serve dual purposes. Solar power, for one, lowers annual operating costs, and it provides electricity even in case of grid troubles.

These are examples of what MIT scholar Miho Mazereeuw has termed “anticipatory design,” in which structures and spaces are built with dual uses, for daily living and for when crisis strikes.

“The idea is to have these proactive measures in place rather than being reactionary and jumping into action only after something has happened,” says Mazereeuw, an associate professor in MIT’s Department of Architecture and a leading expert on resilient design.

Now Mazereeuw has a new book on the subject, “Design Before Disaster: Japan’s Culture of Preparedness,” published by the University of Virginia Press. Based on many years of research and extensively illustrated, the book examines scores of successful design examples from Japan, both in terms of architectural features and the civic process that created them.

“I’m hoping there can be a culture shift,” Mazereeuw says. “Wherever you can invent design outcomes to help society be more resilient beforehand, it is not at exorbitant cost. You can design for exceptional everyday spaces but embed other infrastructure and flexibility in there, so when there is a flood event or earthquake, those buildings have more capability.”

Bosai and barbecue

Mazereeuw, who is also the head of MIT’s Urban Risk Lab, has been studying disaster preparedness for over 30 years. She is also one of the mission directors of the Climate Project at MIT and has worked with communities around the world on resiliency planning.

Japan has a particularly well-established culture of preparedness, often referred to through the Japanese word “bosai.” Mazereeuw has been studying the country’s practices carefully since the 1990s. In researching the book, she has visited hundreds of sites in the country and talked to many officials, designers, and citizens along the way.

Indeed, Mazereeuw emphasizes, “A major theme in the book is connecting the top-down and bottom-up.” Some good design ideas come from planners and architects. Others have come from community groups and local residents. All these sources are important.

“The Japanese government does invest a lot in disaster research and recovery,” Mazereeuw says. “But I would hate for people in other countries to think this isn’t possible elsewhere. It’s the opposite. There are a lot of examples in here that don’t cost extra, because of careful design through community participation.”

As one example, Mazereeuw devotes a chapter of the book to public parks, which are often primary evacuation spaces for residents in case of emergency. Some have outdoor cooking facilities, used in normal times for, say, a weekend barbecue or local community events, but ready when disaster strikes. Some parks also have water storage, or restroom facilities designed to expand if needed, and many serve as flood reservoirs, protecting the surrounding neighborhood.

“The barbecue facilities are a great example of dual use, connecting the everyday with disaster preparedness,” Mazereeuw says. “You can bring food into this beautiful park, so you’re used to using this space for cooking already. The idea is that your cognitive map of where you should go is connected to fun things you have done in the past.”

Some of the parks Mazereeuw surveys in the book are tiny pocket parks, which are also filled with useful resilience tools.

“Anticipatory design does not have to be monumental,” Mazereeuw writes in the book.

Negotiating through design

To be sure, some disaster mitigation measures are difficult to enact. In the Naiwan district of Kesennuma, as Mazereeuw outlines in the book, much of the local port area was destroyed in the 2011 tsunami, and the government wanted to build a seawall as part of the reconstruction plan. Some local residents and fishermen were unenthusiastic; a seawall could limit ocean access. Finally, after extended negotiations, designers created a seawall integrated into a new commercial district with cafes and stores, as well as new areas of public water access.

“This project used the power of design to negotiate between prefectural and local regulations, structural integrity and aesthetics, ocean access and safety,” Mazereeuw says.

Ultimately, working to build a coalition in support of resilience measures can help create more interesting and useful designs.

Other scholars have praised “Design Before Disaster.” Daniel P. Aldrich, a professor at Northeastern University, has called the book a “well-researched, clearly written investigation” into Japanese disaster-management practices, adding that any officials or citizens around the world “who seek to keep residents and communities safe from shocks of all kinds will learn something important from this book. It sets a high bar for future scholarship in the field.”

For her part, Mazereeuw emphasizes, “We can learn from the Japanese example, but it’s not a copy-paste thing. The book is so people can understand the essence of it and then create their own disaster preparedness culture and approach. This should be an all-hands process. Emergency management is not about relying on managers. It’s figuring out how we all play a part.”