General News from MIT - Massachusetts Institute of Technology

Latest general updates from MIT.

Method for stress-testing cloud computing algorithms helps avoid network failures

The “MetaEase” technique gives engineers a heads-up about potential scenarios that could cause long wait times or outages.


Researchers from MIT and elsewhere have developed a more user-friendly and efficient method to help networking engineers identify potential system failures before they cause major problems, like a cloud service outage that leaves millions of users unable to access applications. 

The technique uncovers hidden blind spots that might cause a shortcut algorithm to fail unexpectedly when it is deployed. 

This new approach can identify worst-case scenarios that an engineer might miss if they use a traditional method that compares an algorithm against a set of human-designed past test cases. It is also less labor-intensive than other verification tools that require engineers to rewrite an algorithm as a complex mathematical model each time they want to test it.

Instead of needing a mathematical reformulation, the new method reads the algorithm’s source code directly and automatically searches for the worst-case scenarios that lead to the highest level of underperformance.

By helping engineers quickly and easily stress-test a networking algorithm before deployment, the method could catch failure modes that might otherwise only appear in a real outage. The technique could also be used to analyze the risks of deploying AI-generated code.

“We need to have good tools to measure the worst-case performance of our algorithms so we know what could happen before we put them into production. This is an easy-to-use tool that can be plugged into current systems so we can find the best algorithm to use and ensure the worst-case scenarios are identified in advance,” says Pantea Karimi, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this new technique.

She is joined on the paper by senior authors Mohammad Alizadeh, an associate professor of EECS and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Behnaz Arzani, a principal researcher at Microsoft Research; along with Ryan Beckett, Siva Kesava Reddy Karkarla, and Pooria Namyar, researchers at Microsoft Research; and Santiago Segarra, a professor at Rice University. The research will be presented at the USENIX Symposium on Networked Systems Design and Implementation. 

Assessing algorithms

In large systems like cloud servers, the tried-and-true algorithms that route data from one place to another are often too computationally intensive to run in a feasible amount of time.

So, engineers and researchers develop suboptimal algorithms called heuristics that can run much faster. However, there could be unexpected but plausible circumstances that will cause a heuristic to underperform or fail when deployed.

A heuristic can route millions of data requests across a cloud network in seconds, but under the wrong conditions — like an unusual traffic pattern or a sudden spike in demand — the shortcut can break down in ways the designer never anticipated.

When these problems occur, a company may have no choice but to drop some requests that can’t be processed. 

The firm could also deliberately allocate more resources in advance to head off a potential disaster, leading to higher overall costs and wasted electricity from underutilization.

“This is really bad for a company because, either way, they are going to lose a lot of money. If this particular scenario hasn’t happened before and was never tested, how would a developer know in advance before it happens?” Karimi says.

Stress-testing heuristics typically involves running a new algorithm in simulation using a set of human-designed test cases and manually comparing the performance with a previous algorithm. But this is time-consuming and can leave blind spots if an engineer doesn’t know to test for certain situations.

Alternatively, engineers could use a verification tool to evaluate the performance of their heuristic more systematically. However, these tools require the engineer to encode the algorithm into a complex mathematical formula that can take days to flesh out. The process, which doesn’t work for every type of heuristic, must be repeated each time the engineer changes the code.

Instead, the researchers developed a more user-friendly and efficient verification tool, called MetaEase, that analyzes the heuristic’s existing implementation code directly to identify the biggest risks of deploying it.

“This would reduce the friction of using these heuristic analysis tools,” Karimi says.

She began this work during an internship at Microsoft Research, where the team previously developed MetaOpt, a heuristic analyzer that requires engineers to rewrite their algorithms as formal optimization models. MetaEase grew out of the desire to remove that barrier.

Maximizing the gap

MetaEase is driven by two key innovations. First, it uses a technique called symbolic execution to map out the different decision points in the heuristic's code. These are places where the algorithm might behave differently depending on the input.

This technique produces a set of representative starting points, each corresponding to a distinct behavior the heuristic could exhibit.

Second, from these starting points, MetaEase utilizes a guided search to systematically move toward inputs that make the heuristic perform as poorly as possible, compared to the optimal algorithm.

In machine learning, for instance, an input could be a set of user queries to an AI chatbot at a given time.

“In this way, we have exploited every possible heuristic behavior and used special techniques to move in the direction where we think the performance gap is going to increase,” Karimi explains.

In the end, MetaEase identifies the input that maximizes the performance gap between the heuristic and an optimal benchmark.
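
To make the idea of “maximizing the performance gap” concrete, the toy sketch below hill-climbs toward a bad input for a classic shortcut — first-fit bin packing — by hunting for item lists where the heuristic uses more bins than a brute-force optimum. It only illustrates the gap-maximization idea under simple assumptions; it is not MetaEase, which combines symbolic execution with a guided search over the heuristic’s own source code.

```python
import itertools
import random

def first_fit(items, cap=1.0):
    """Heuristic: put each item into the first bin that still has room."""
    bins = []
    for x in items:
        for b in bins:
            if sum(b) + x <= cap:
                b.append(x)
                break
        else:
            bins.append([x])
    return len(bins)

def optimal(items, cap=1.0):
    """Brute-force optimum; only feasible for very small inputs."""
    n = len(items)
    best = n
    for assignment in itertools.product(range(n), repeat=n):
        loads = {}
        for item, b in zip(items, assignment):
            loads[b] = loads.get(b, 0.0) + item
        if all(load <= cap for load in loads.values()):
            best = min(best, len(loads))
    return best

def search_worst_case(n_items=5, steps=300, seed=0):
    """Hill-climb toward inputs that maximize the heuristic-vs-optimal gap."""
    rng = random.Random(seed)
    best_items = [rng.uniform(0.1, 0.7) for _ in range(n_items)]
    best_gap = first_fit(best_items) - optimal(best_items)
    for _ in range(steps):
        candidate = [min(0.99, max(0.01, x + rng.uniform(-0.1, 0.1)))
                     for x in best_items]
        gap = first_fit(candidate) - optimal(candidate)
        if gap > best_gap:
            best_gap, best_items = gap, candidate
    return best_gap, best_items

gap, items = search_worst_case()
print(f"worst gap found: {gap} extra bin(s) on input {[round(x, 2) for x in items]}")
```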

With this information, a heuristic developer could inspect the input to understand what went wrong and incorporate safeguards that will prevent the problem from happening during deployment.

In simulated experiments, MetaEase often identified inputs with larger performance gaps than traditional methods — pinpointing more catastrophic worst-case scenarios. And it did so much more efficiently.

It was also able to analyze a recent networking heuristic that no state-of-the-art method could handle.

In the future, the researchers want to enhance MetaEase so it can process additional types of data, like categorical inputs. They also want to improve the scalability of their method and adapt MetaEase to evaluate more complex heuristics.

“Reasoning about the worst-case performance of deployed heuristics is a hard and longstanding problem. MetaEase makes tangible progress by analyzing heuristics directly from source code, eliminating the need for formal models that have historically limited who can use such analysis tools. I was pleasantly surprised that it handles non-convex and randomized heuristics by combining symbolic execution with gradient-based search in a practical and effective way,” says Ratul Mahajan of the University of Washington Paul G. Allen School of Computer Science and Engineering, who was not involved with this research.

This research was funded, in part, by a Microsoft Research internship and the U.S. National Science Foundation (NSF).


Games people — and machines — play: Untangling strategic reasoning to advance AI

Assistant Professor Gabriele Farina mines the foundations of decision-making in complex multi-agent scenarios.


Gabriele Farina grew up in a small town in a hilly winemaking region of northern Italy. Neither of his parents had college degrees, and although both were convinced they “didn’t understand math,” Farina says, they bought him the technical books he wanted and didn’t discourage him from attending the science-oriented, rather than the classical, high school.

By around age 14, Farina had focused on an idea that would prove foundational to his career.

“I was fascinated very early by the idea that a machine could make predictions or decisions so much better than humans,” he says. “The fact that human-made mathematics and algorithms could create systems that, in some sense, outperform their creators, all while building on simple building blocks, has always been a major source of awe for me.”

At age 16, Farina wrote code to solve a board game he played with his 13-year-old sister.

“I used game after game to compute the optimal move and prove to my sister that she had already lost long before either of us could see it ourselves,” Farina says, adding that his sister was less enthralled with his new system.

Now an assistant professor in MIT’s Department of Electrical Engineering and Computer Science (EECS) and a principal investigator at the Laboratory for Information and Decision Systems (LIDS), Farina combines concepts from game theory with such tools as machine learning, optimization, and statistics to advance theoretical and algorithmic foundations for decision-making.

Enrolling at Politecnico di Milano for college, Farina studied automation and control engineering. Over time, however, he realized that what activated his interest was not “just applying known techniques, but understanding and extending their foundations,” he says. “I gradually shifted more and more toward theory, while still caring deeply about demonstrating concrete applications of that theory.”

Farina’s advisor at Politecnico di Milano, Nicola Gatti, professor and researcher in computer science and engineering, introduced Farina to research questions in computational game theory and encouraged him to apply for a PhD. At the time, being the first in his immediate family to earn a college degree and living in Italy, where doctoral degrees are handled differently, Farina says he didn’t even know what a PhD was.

Nevertheless, one month after graduating with his undergraduate degree, Farina began a doctoral degree in computer science at Carnegie Mellon University. There, he won distinctions for his research and dissertation, as well as a Facebook Fellowship in Economics and Computation.

As he was finishing his doctorate, Farina worked for a year as a research scientist in Meta’s Fundamental AI Research Labs. One of his major projects was helping to develop Cicero, an AI that was able to beat human players in a game that involves forming alliances, negotiating, and detecting when other players are bluffing.

Farina says, “When we built Cicero, we designed it so that it would not agree to form an alliance if it was not in its interest, and it likewise understood whether a player was likely lying, because for them to do as they proposed would be against their own incentives.”

A 2022 article in the MIT Technology Review said Cicero could represent advancement toward AIs that can solve complex problems requiring compromise.

After his year at Meta, Farina joined the MIT faculty. In 2025, he received the National Science Foundation CAREER Award. His work — grounded in game theory, the mathematical language for describing what happens when different parties have different objectives and for quantifying the “equilibrium” where no one has a reason to change their strategy — aims to simplify massive, complex real-world scenarios where calculating such an equilibrium could take a billion years.

“I research how we can use optimization and algorithms to actually find these stable points efficiently,” he says. “Our work tries to shed new light on the mathematical underpinnings of the theory, better control and predict these complex dynamical systems, and uses these ideas to compute good solutions to large multi-agent interactions.”
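
As a small, simplified illustration of what finding such a stable point can look like algorithmically, the sketch below uses regret matching — one of the standard no-regret learning methods in this area — in self-play on rock-paper-scissors; the players’ time-averaged strategies drift toward the uniform equilibrium of playing each move a third of the time. This is only a teaching example, not a representation of Farina’s own algorithms, which target vastly larger games with imperfect information.

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
# PAYOFF[a][b] = payoff to a player choosing a against an opponent choosing b
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def strategy_from_regrets(regrets):
    # Play each action with probability proportional to its positive regret.
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total > 0:
        return [p / total for p in positive]
    return [1.0 / ACTIONS] * ACTIONS  # no positive regret: play uniformly

def sample(strategy, rng):
    r, acc = rng.random(), 0.0
    for action, prob in enumerate(strategy):
        acc += prob
        if r < acc:
            return action
    return ACTIONS - 1

def train(iterations=20_000, seed=0):
    rng = random.Random(seed)
    regrets = [[0.0] * ACTIONS for _ in range(2)]
    strategy_sums = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iterations):
        strategies = [strategy_from_regrets(r) for r in regrets]
        moves = [sample(s, rng) for s in strategies]
        for p in range(2):
            me, opp = moves[p], moves[1 - p]
            for a in range(ACTIONS):
                # Regret: how much better action a would have done this round.
                regrets[p][a] += PAYOFF[a][opp] - PAYOFF[me][opp]
            for a in range(ACTIONS):
                strategy_sums[p][a] += strategies[p][a]
    # The time-averaged strategies approximate the equilibrium.
    return [[s / iterations for s in sums] for sums in strategy_sums]

print(train())  # both averages approach [0.333, 0.333, 0.333]
```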

Farina is especially interested in settings with “imperfect information,” which means that some agents have information that is unknown to other participants. In such scenarios, information has value, and participants must be strategic about acting on the information they possess so as not to reveal it and reduce its value. An everyday example occurs in the game of poker, where players bluff in order to conceal information about their cards.

According to Farina, “we now live in a world in which machines are far better at bluffing than humans.”

A situation with “massive amounts of imperfect information” has brought Farina back to his board-game beginnings. Stratego is a military strategy game that has inspired research efforts costing millions of dollars to produce systems capable of beating human players. Requiring complex risk calculation and misdirection, or bluffing, it was possibly the only classical game for which major efforts had failed to produce superhuman performance, Farina says.

With new algorithms and training costing less than $10,000, rather than millions, Farina and his research team were able to beat the best player of all time — with 15 wins, four draws, and one loss. Farina says he is thrilled to have produced such results so economically, and hopes “these new techniques will be incorporated into future pipelines.”

“We have seen constant progress towards constructing algorithms that can reason strategically and make sound decisions despite large action spaces or imperfect information. I am excited about seeing these algorithms incorporated into the broader AI revolution that’s happening around us.”


MIT marks first Robert R. Taylor Day with Tuskegee University

A day of conversations and archival access at the MIT Museum reflects an ongoing exchange rooted in the work and ideas of the Institute’s first Black graduate.


On April 10, MIT marked its first official Robert R. Taylor Day with a program centered on the life and work of Robert Robinson Taylor (Class of 1892), the Institute’s first Black graduate and the first academically trained Black architect in the United States.

After graduating from MIT, Taylor joined Tuskegee Institute (now Tuskegee University), where he designed campus buildings, developed a curriculum, and helped establish an approach to architectural education grounded in making and community life — an orientation that continues to shape the relationship between MIT and Tuskegee today. 

Taylor returned to MIT on April 10, 1911, to speak at the 50th anniversary of the Institute’s founding — the date now observed as Robert R. Taylor Day. Reflecting on his education, he credited MIT with the “methods and plans” he carried to Tuskegee Institute. “Certainly the spirit,” he said, was found “in the love of doing things correctly, of putting logical ways of thinking into the humblest task … to build up the immediate community in which the persons live.”

One hundred fifteen years later, at the MIT Museum, students and faculty gathered around Taylor’s original thesis, “A Soldiers Home.” The work was presented alongside archival materials from Taylor’s time at MIT by Jonathan Duval, assistant curator of architecture and design. Rather than framing Taylor as a distant historical figure, the encounter with the work itself — its drawings, assumptions, and ambitions — set the terms for the day, bringing forward not only his accomplishments but the ideas and methods that continue to inform teaching and collaboration today. Attendees then gathered for a lunch-and-learn session including a hybrid panel involving MIT and Tuskegee University faculty. 

“It is so important to continue to develop the MIT-Tuskegee relationship begun by Robert R. Taylor,” says Kwesi Daniels, associate professor and head of the architecture department at Tuskegee University. “MIT students are provided an opportunity to experience the campus Taylor designed and his ethos of social architecture. For the Tuskegee students, they are able to appreciate the foundation Taylor received at MIT. The engagement epitomizes the ‘mind and hand’ philosophy of MIT and the head, hand, heart philosophy of Tuskegee.”

An ongoing exchange

Student and faculty exchanges, launched by the architecture departments at both institutions, have extended these connections in recent years. MIT students travel to Tuskegee for work in historic preservation and community engagement, sampling Daniels’ scanning and drone equipment, while Tuskegee students come to MIT to engage with digital fabrication and entrepreneurship.

For Nicholas de Monchaux, professor and head of the Department of Architecture at MIT, the relationship reflects continuity. “We are not uniting. We’re reuniting,” he says. “This year’s celebration should really be seen as the kickoff of a year of reflecting on Robert Taylor’s legacy and imagining what the day, and his legacy, can become over time.”

The day’s program — the vision for which originally emerged from a suggestion made by MIT literature professor Joshua Bennett during a meeting at Tuskegee with de Monchaux, Daniels, and Tuskegee President Mark Brown — grew into a broader effort among faculty and collaborators across architecture, history, and the humanities. As Bennett put it, “The primary aim of Robert R. Taylor Day is to lift up not only Taylor’s accomplishments, but his ideas — and the fact that his ideas live on in those of us who have inherited his legacy.”

That emphasis is also visible in the dedicated coursework and research that has accompanied the exchange since 2022. In class 4.s12 (Brick x Brick: Drawing a Particular Survey), taught by Carrie Norman, assistant professor in architecture at MIT, students document buildings on the Tuskegee campus through measured drawings and archival interpretation. Working from limited historical material, they reconstruct both form and intent.

“My role has been to structure this work pedagogically,” Norman says, “guiding students in methods of close looking, measured drawing, and archival interpretation.” She describes Taylor’s work as “an ongoing research agenda,” adding that “the broader aim is not only to deepen engagement with Taylor’s legacy, but to build on it through new forms of design research.”

Related work has contributed to a recent exhibition on the Tuskegee Chapel at the National Building Museum, curated by Helen Bechtel of the Yale School of Architecture. Building on research conducted in Norman’s course, students developed large-scale models that form part of the exhibition. New 3D fabrications use a limited set of archival materials to reconstruct the chapel originally designed by Taylor as the first electrified building in Alabama’s Macon County, which was destroyed by fire in 1957.

Looking ahead

Timothy Hyde, professor in the MIT Department of Architecture, has also been involved in the ongoing MIT–Tuskegee collaboration and in efforts to situate Taylor’s work within a broader historical context. He notes that Taylor’s training at MIT helped shape the curriculum he later developed at Tuskegee. “The other influence I would like to mention is the city of Boston itself,” Hyde adds. “Boston was a prosperous city with a wealth of civic architecture that Taylor would have seen and studied.” 

A documentary project on Taylor’s life, supported by the MIT Human Insight Collaborative and led by Hyde and historian Christopher Capozzola, senior associate dean for MIT Open Learning, is currently in development.

For some students, these encounters shape longer trajectories. As an undergraduate at Tuskegee, Myles Sampson participated in the MIT Summer Research Program (MSRP), where he began to connect architecture with a growing interest in computation. He later enrolled in MIT’s Master of Science in Architecture Studies (SMArchS) computation program, working with Professor Larry Sass, who introduced him to robotic fabrication.

“I never looked back,” Sampson says. “Without that hands-on research experience, I would never have looked past contemporary architectural practice.” He is now pursuing a doctorate in computational design at Carnegie Mellon University, focused on the role of automation in architecture and construction.

Sampson contributed significant work to the National Building Museum’s exhibition. His installation, Brick Parable, brings together historical reference and robotic construction. As de Monchaux notes, the project reflects the long arc of Taylor’s legacy: “bricks were fired by students as part of Taylor’s training program … Myles [Sampson]’s piece, made with a robotic assembly of bricks, explores the architectural idea of the chapel in contemporary form.”

For Daniels, the continued circulation of students between the two institutions remains central. Viewing Taylor’s thesis in particular offers a shared point of reference. “Whether the student is from Tuskegee or MIT, they are able to appreciate the quality of work Taylor completed as a student,” he says, “and how he built on that work by creating a college campus, beginning at age 25.”

Across these activities, Taylor’s work is approached not as a fixed legacy, but as a set of methods and commitments that continue to be tested. As Catherine Armwood, dean of Tuskegee University’s Robert R. Taylor School of Architecture and Construction Science, describes it: “While our students leverage [the design and entrepreneurship program] MITdesignX to turn architectural concepts into social enterprises through advanced fabrication and venture mentorship, MIT students come to Tuskegee for an immersion in historic preservation. By surveying buildings handcrafted by our founding students, they learn a legacy of self-reliance and community impact that can’t be found anywhere else,” she says. “Together, we are bridging technical innovation with deep-rooted heritage to train a new generation of visionary leaders.”


Astronomers pin down the origins of a planetary odd couple

New measurements of a hot Jupiter and its mini-Neptune companion suggest both planets formed surprisingly far away from their host star.


Across the Milky Way galaxy, a planetary odd couple is circling a star some 190 light years from Earth. A normally “lonely” hot Jupiter is sharing space with a mini-Neptune, in a rare and unlikely pairing that’s had astronomers puzzled since the system’s discovery in 2020.

Now MIT scientists have caught a glimpse into the atmosphere of the mini-Neptune, which is circling inside the orbit of its Jupiter-sized companion, and discovered clues to explain the origins of this unusual planetary system.

In a study appearing today in Astrophysical Journal Letters, the scientists report on new measurements of the mini-Neptune’s atmosphere, made using NASA’s James Webb Space Telescope (JWST). It is the first time astronomers have measured the composition of a mini-Neptune that resides inside the orbit of a hot Jupiter.

Their measurements reveal that the smaller planet has a “heavy” atmosphere that is rich with water vapor, carbon dioxide, sulfur dioxide, and hints of methane. Such a heavy atmosphere would not have been acquired by the planet if it had formed in its current location, very close to its star.

Instead, the scientists say their findings point to an alternate origin story: Both the mini-Neptune and the hot Jupiter may have formed much farther away, in the colder region of the protoplanetary disk. There, the planets could slowly build up atmospheres of ice and other volatiles. Over time, the planets were likely drawn in toward the star in a gradual process that kept them close, with their atmospheres intact.

The team’s results are the first to show that mini-Neptunes can form beyond a star’s “frost line.” This boundary refers to the minimum distance from a star where the temperature is low enough that water instantly condenses into ice.

“This is the first time we’ve observed the atmosphere of a planet that is inside the orbit of a hot Jupiter,” says Saugata Barat, a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research and the lead author of the study. “This measurement tells us this mini-Neptune indeed formed beyond the frost line, giving confirmation that this formation channel does exist.”

The team consists of astronomers around the world, including Andrew Vanderburg, a visiting assistant professor at MIT, and co-authors from multiple other institutions including the Harvard and Smithsonian Center for Astrophysics, the University of Southern Queensland, the University of Texas at Austin, and Lund University.

A “one-of-a-kind” system

As their name implies, mini-Neptunes are planets that are less massive than Neptune. They are considered to be gas dwarfs, which are made mostly of gas, with an inner, rocky core. Mini-Neptunes are the most commonly found type of planet in the Milky Way, though, interestingly, no such world exists in our own solar system. Astronomers have observed many such planets circling a wide variety of stars in a range of planetary systems; mini-Neptunes, then, are generally considered to be garden-variety planets.

But in 2020, Chelsea X. Huang, then a Torres Postdoctoral Fellow at MIT (now on the faculty at the University of Southern Queensland), discovered a mini-Neptune in a rare and puzzling circumstance: The planet appeared to be circling its star with an unlikely companion — a hot Jupiter.

The astronomers made their discovery using NASA’s Transiting Exoplanet Survey Satellite (TESS). They analyzed TESS’ measurements of TOI-1130, a star located 190 light years from Earth, and detected signs of a mini-Neptune and a hot Jupiter, orbiting the star every four and eight days respectively.

“This was a one-of-a-kind system,” says Huang. “Hot Jupiters are ‘lonely,’ meaning they don’t have companion planets inside their orbits. They are so massive, and their gravity is so strong, that whatever is inside their orbit just gets scattered away. But somehow, with this hot Jupiter, an inner companion has survived. And that raises questions about how such a system could form.”

A spot-on snapshot

The 2020 discovery of TOI-1130 and its odd planetary pair inspired Huang, Vanderburg, and their colleagues to take a closer look at the planets, and specifically, their atmospheres, with JWST. In its new study, the team reports its analysis of TOI-1130b — the inner-orbiting mini-Neptune.

Catching the planet at just the right time was their first challenge. Most planets circle their star with a regular, predictable period, like the tick of a clock. But the mini-Neptune and the hot Jupiter were found to be in “mean motion resonance,” meaning that each can affect the other’s motion, pulling and tugging, and slightly varying the time each takes to orbit their star. This made it tricky to predict when JWST could get a clear view.

The team, led by Judith Korth of Lund University, assembled as many past observations of the system as they could, and developed a model to predict when each planet would pass by the star at an angle that JWST could observe.

“It was a challenging prediction, and we had to be spot-on,” Barat says.
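
For a rough sense of the bookkeeping involved, the sketch below fits a simple linear ephemeris (transit time = T0 + n × P) to a handful of invented transit times by least squares. The team’s actual model was far more involved, since the near-2:1 resonance between the four- and eight-day orbits makes each planet’s transits drift away from this straight line, but the fit shows the baseline against which such transit-timing variations are measured. All numbers here are made up for illustration.

```python
import numpy as np

# Hypothetical transit numbers and mid-transit times (days), invented
# purely to illustrate the fit.
epochs = np.array([0, 1, 2, 5, 9, 14])
times = np.array([0.02, 4.10, 8.15, 20.33, 36.62, 56.95])

# Least-squares fit of times ~ T0 + P * epochs (a linear ephemeris).
A = np.vstack([np.ones_like(epochs), epochs]).T
(T0, P), *_ = np.linalg.lstsq(A, times, rcond=None)

print(f"fitted period P = {P:.3f} days, reference time T0 = {T0:.3f} days")
print(f"predicted transit at epoch 15: {T0 + 15 * P:.2f} days")
```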

In the end, the team was able to catch a direct and detailed snapshot of both planets.

“The beauty of JWST is that it does not observe just in one color, but at different colors, or wavelengths,” Barat explains. “And the specific wavelengths that a planet absorbs can tell you a lot about the composition of its atmosphere.”

From JWST’s measurements, the team found that the planet’s atmosphere absorbed light at wavelengths characteristic of water, carbon dioxide, sulfur dioxide, and, to a lesser degree, methane. These molecules are heavier than hydrogen and helium, which make up lighter atmospheres. Astronomers had assumed that, if mini-Neptunes formed very close to their star, they should have light atmospheres.

But the team’s new results counter that assumption and offer a new way that mini-Neptunes could form. Since heavier molecules were found in the atmosphere of TOI-1130b, which resides very close to its star, the scientists say the only possible explanation for its composition is that the planet formed much farther out than its current location.

The planet likely accumulated its heavy atmosphere of water and other volatiles such as carbon dioxide and sulfur dioxide in the icy region beyond the star’s frost line. In this much colder environment, water condenses onto bits of dust to form icy pebbles, which an infant planet can draw into its atmosphere. That ice then evaporates as the planet slowly migrates in closer to its star.

Barat says the team’s detection of heavy molecules in the atmosphere of TOI-1130b confirms that the planet — and likely its hot Jupiter companion — formed in the outskirts of the system. Through gradual migration, the two planets would be able to stay close together and keep their atmospheres intact.

“This system represents one of the rarest architectures that astronomers have ever found,” Barat says. “The observations of TOI-1130b provide the first hint that such mini-Neptunes that form beyond the water/ice line are indeed present in nature.”

This work was supported, in part, by NASA.


The tech revolution that wasn’t

Associate Professor Dwai Banerjee’s new book examines the visionaries who wanted to turn India into a world power at making computers.


In 1960, engineers at India’s Tata Institute of Fundamental Research (TIFR) built what they called an “Automatic Calculator,” the country’s first working computer. It had the same type of ferrite-core memory as IBM’s world-leading machines, and at a glance, appeared to herald a new age of tech advances in India.

Constructed with a fraction of the resources Western computer engineers had, the TIFRAC, as they called it, was a remarkable feat.

“The people working on it had never really seen an actual functioning computer,” says Dwai Banerjee, an associate professor of science, technology, and society, and the author of a new book about computing in India. “You had this ambitious group of engineers building a state-of-the-art machine with very, very, limited resources. The fact they could build this is staggering.”

However, the TIFRAC was never even replicated, let alone produced at scale. The visionaries behind it wanted to turn India into an independent computing nation: a place that would produce its own equipment and become an industry power. Instead, the TIFRAC became a technological cul-de-sac, and India’s tech industry took on a very different shape. Instead of exporting equipment, it exports talent, sending skilled engineers and executives around the globe.

Now Banerjee explores those issues in the book, “Computing in the Age of Decolonization: India’s Lost Technological Revolution,” published by Princeton University Press. In it, he examines the country’s pursuit of technological self-sufficiency, and the global forces that prevailed against this vision. As a result, the country is “the world’s leading provider of inexpensive outsourcing and offshoring services, yet enjoys minimal benefits from more profitable advances in research, manufacturing, and development,” Banerjee writes.

“This book is about understanding how the current landscape of technological power came to be and the unequal way in which power is distributed across the world when it comes to anything to do with computing,” Banerjee says. “Basically, the historical conditions of the mid-20th century period are essential to understanding why the world of computing looks the way it does today.”

Computing and the geopolitics of knowledge

When India became a sovereign nation in 1947, many of its leaders believed “rapid technology-driven industrialization was the only way out of centuries of colonial underdevelopment,” as Banerjee writes. Some leapt into action, such as the remarkable nuclear physicist Homi J. Bhabha, who helped establish the TIFR.

Initially, Indian leaders hoped to gain cooperation from the U.S. and international organizations in making technological advances, but quickly ran into Cold War politics. Computing was heavily bound up with defense matters; India was not always fully aligned with U.S. political interests, so the flow of knowledge from the U.S. to India was distinctly limited.

“This is very much an external constraint story,” Banerjee says. “You need blueprints and not just working papers, and that’s what was guarded by the U.S. for a very long time.”

Still, the TIFR research team toiled away at its computing projects until the TIFRAC was up and running — making national headlines.

“The achievement it represents is mind-boggling,” Banerjee emphasizes. “A computer in the U.S. would have cost more to run than this entire institute in India.”

As Banerjee details in the book, the TIFRAC machine was built to grow. Its engineers matched the speed of IBM machines and planned to import larger ferrite-core memory stacks as their workload expanded. But when IBM released the FORTRAN programming language in 1957, it required four times the memory the TIFRAC machine was equipped with. India’s 1958 foreign exchange crisis then shaped the machine’s fate: The World Bank convened a U.S.-led creditor consortium that conditioned rescue loans on the opening of Indian markets to Western capital. Importing larger memory stacks became unaffordable, rendering the TIFRAC obsolete almost as soon as it was completed.

“It’s a geopolitics-of-knowledge question, not that they made a mistake,” Banerjee says of the Indian engineers. “They didn’t know IBM was about to reshape software.”

Exit IBM, enter services

Though IBM’s jump forward after the release of FORTRAN left the TIFRAC project stalled out, Indian advocates for computer manufacturing did not give up their dream. For one thing, they looked around for partnerships and other ways of moving their domestic tech industry forward. And then in 1978, India, uniquely, banned IBM from the country, on account of its business practices.

That might have set the stage for India’s computer manufacturing industry to flourish. But at the same moment, countervailing forces took hold, including a widespread turn toward the private sector as an increasing source of activity, rather than public-private enterprises.

“For a moment you have this imagination come to a sort of fruition,” Banerjee observes. “But by the late 1970s and 1980s, there is a new group of people arguing for quick profits through software services, saying that this route feels less painful than setting up manufacturing, R&D, and firms for a decade or more.”

This turn toward private-sector services rather than government-involved manufacturing ultimately became a decisive factor in shaping India’s tech-sector trajectory. Rather than seeking to make machines domestically, the country became part of the global tech-services sector, while many of its engineers migrated to Silicon Valley and other tech hotspots. Global tech firms, meanwhile, used their reach in ways that worked against the idea that many countries would develop independent industries. This is not the outcome India’s leaders and technologists once envisioned.

“It still surprises me because of the one thing India did that no other country in the world managed to do, and that’s kick out IBM,” Banerjee says. “The fact that this vision fades is part of changing government ambition.”

Beyond the mavericks

In writing the book, Banerjee has multiple goals. One is simply shedding more light on the rich details of India’s initial computing efforts. Another is contesting the idea that India somehow naturally found a role providing services and exporting talent; that is not what many people once hoped.

Still another motif in Banerjee’s work is that the history of computing too often centers on innovators who are cast as mavericks, shrugging off conventions to upend business and society — whereas the large-scale forces of global capital and geopolitics matter greatly in technological development.

“This book suggests we often overplay those stories of individual genius, because you can be a genius with all the right ideas, but if you don’t have all the institutions supporting you, it means nothing,” Banerjee says.

Other scholars have praised “Computing in the Age of Decolonization.” Matthew L. Jones, a professor of history at Princeton University, has stated that Banerjee’s book is a “scrupulous accounting of ultimately failed Indian efforts to secure technological sovereignty in the wake of independence,” which “joins the best recent accounts of computing worldwide and transforms how we think through diverse national trajectories through the Cold War and beyond.”

For his part, Banerjee hopes a wide variety of readers will be interested in the book — and recognize that the specific case of India and computing can tell us a lot about the challenges of new types of economic growth in many places.

“India stands in for a lot of countries in the mid-20th century that had recently gained formal political independence and were thinking of ways to catch up with the rest of the advanced industrialized world,” Banerjee says. “But the power structures tied to technological and scientific advancement did not disappear. They were replaced by newer structures, including foreign policy with very specific ideas about what different countries should be doing with regard to technology. That’s where the story starts.”


Biologist Joey Davis explores how cells build complex structures

His studies have shed light on the assembly instructions that govern ribosomes, the critical protein-building machines of the cell.


Ribosomes, the cellular machines that assemble proteins, are made from dozens of proteins and RNA molecules. Putting all of those pieces together is a complex puzzle — one that MIT Associate Professor Joey Davis PhD ’10 revels in trying to solve.

Understanding how these structures form and later break down could help researchers learn more about how disruptions of these fundamental processes can lead to disease. But, as Davis points out, it’s also an interesting biological question.

“Our long-term goal is to really understand how the natural world assembles these huge complexes rapidly and efficiently. It’s a fundamentally interesting question to think about how these things get put together,” he says.

His work has helped reveal that unlike building a house, which happens in a prescribed sequence of steps — pouring the foundation, building the frame, putting on the roof, then doing electrical and plumbing work — ribosomes can be assembled in a more flexible way. Cells can even skip an assembly step and then come back to it later.

“In these natural systems, it seems like the assembly pathways are much more dynamic and flexible,” he says. “It appears that evolution has selected pathways that aren’t strictly ordered in the way we would think about an assembly line, where you always put in one component, then the next, and then the next. We’re excited to understand the selective advantages of such approaches.”

A love of discovery

Davis’ interest in how things are put together developed early in life, inspired by his father, a carpenter who framed houses. During the mid-1980s, the family moved from Colorado to Southern California, where his father worked in construction during a housing boom there.

“I was always interested in building things, which I think probably came from being around my dad and other builders,” Davis says.

As an undergraduate at the University of California at Berkeley, where he majored in computer science and biological engineering, Davis’ interests turned toward smaller scales, in the realm of cells and molecules. During his junior year, he started working in the lab of chemistry professor Michael Marletta, who studies molecular-level biological interactions.

In the lab, Davis investigated how enzymes that contain heme are able to preferentially bind to either oxygen or nitric oxide, two gases that are very similar in structure. That work kindled a love of studying the natural world and pursuing discoveries in fundamental science.

“Being in the Marletta lab and seeing students and postdocs that were really passionate about these problems had a big impact on me,” Davis says. “The goal was to understand the fundamentals of how molecular discrimination works, and the idea of discovery for the sake of discovery was thrilling.”

After graduating from Berkeley, Davis spent another year working in Marletta’s lab, and then a year working odd jobs, before heading to MIT to pursue a PhD in biology. There, he worked with Professor Bob Sauer, now emeritus, who studied the relationship between protein structure and function, with a particular focus on the molecular machines that degrade or remodel proteins.

Davis’ thesis research centered on enzymes called AAA proteases, which remove damaged proteins from cellular membranes and send them to cell organelles that break them down. In addition to studying the structure and function of the proteases, Davis worked on ways to engineer them to tag specific proteins for destruction.

That work led him into synthetic biology, which he used to develop genetic parts that drive production of proteins of interest. Some of those parts ended up being used by the biotech startup Ginkgo Bioworks, where Davis took a job as a senior scientist after graduating.

Working at Ginkgo Bioworks allowed Davis to stay in Boston while his partner finished her PhD. The couple then moved back to California, where Davis worked as a postdoc at Scripps Research, which was home to one of the first direct electron detection cameras for cryo-electron microscopy (cryo-EM). These detectors allow researchers to generate structures with near atomic resolution. At Scripps, Davis began using them to study ribosomes as they were being assembled.

Peering into the ribosome

After joining the MIT faculty in 2017, Davis continued his work on ribosomes and assembled a lab group that includes students from a variety of backgrounds who work together to develop new ways to explore biological phenomena.

“I have a mix of method developers and biologists in the group, and the work from each of them informs each other,” Davis says. “My lab goes back and forth between building sets of tools to answer biological questions, and then as we’re answering those questions, it motivates the next generation of tool development.”

During ribosome assembly, RNA molecules fold themselves into the correct shapes, creating docking sites for proteins to attach. Then, more RNA molecules come in and fold themselves into the structure.

“It’s a beautifully coupled process by which the cell folds hundreds of RNA helices and binds on the order of 50 proteins, and it does it in two minutes from start to finish. E. coli does this 100,000 times per hour, and it’s amazing how rapid and efficient the process is,” Davis says.

Cryo-EM allows scientists to capture this process in minute detail. It can be used to take hundreds of thousands of two-dimensional images of ribosome samples frozen in a thin layer of ice, from different angles. Computer algorithms then piece together these images into a three-dimensional representation of the ribosome.
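
As a loose analogy for how such reconstructions work — in two dimensions rather than three, and without the unknown particle orientations and heavy noise that make cryo-EM hard — the sketch below simulates projections of a standard test image and recovers it with filtered back-projection using scikit-image. It is meant only to convey the principle of rebuilding an object from its projections, not to mirror any cryo-EM software.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

image = resize(shepp_logan_phantom(), (128, 128))      # "true" 2D structure
angles = np.linspace(0.0, 180.0, 90, endpoint=False)   # viewing directions
sinogram = radon(image, theta=angles)                   # simulated 1D projections
reconstruction = iradon(sinogram, theta=angles)         # filtered back-projection

rms_error = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f"RMS reconstruction error: {rms_error:.4f}")
```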

To gain insight into how ribosomes are assembled, researchers can stall the process at different points and then analyze the resulting structures. In 2021, Davis’s lab developed a new method called CryoDRGN, which uses neural networks to analyze cryo-EM data and generate the full ensemble of structures that were present in the sample.

This work has shown that when certain steps of ribosome assembly are blocked, many different structures result, suggesting that the assembly can occur in a variety of ways.

In future work, Davis aims to dramatically increase the throughput of cryo-EM to generate datasets of protein structures that could help improve the AI-based models that are now used to predict protein structures.

“There are still huge swaths of sequence space that these models are very poor at predicting, but if we could collect data on those sequences en masse, that could potentially serve as key training data for a next-generation protein structure prediction method that could fill out that space,” he says.


Rett syndrome study highlights potential for personalized treatments

Using advanced human cell cultures, MIT researchers tracked how two different mutations alter neural circuit development, and how each could be addressed with distinct potential therapeutics.


Although many studies approach the developmental disorder Rett syndrome as a single condition arising from general loss of function in the gene MECP2, a new study by neuroscientists in The Picower Institute for Learning and Memory at MIT shows that two different mutations of the gene caused many distinct abnormalities in lab cultures. Moreover, correcting key differences made by each mutation required different treatments.

“Individual mutations matter,” says Mriganka Sur, senior author of the new open-access study in Nature Communications and the Newton Professor in the Picower Institute and the Department of Brain and Cognitive Sciences. “This is an approach to personalizing treatment, even for a single-gene disorder.”

The study employed advanced 3D human brain tissue cultures called “organoids” or “minibrains” derived from skin cells or blood cells donated by Rett syndrome patients with each mutation. Lead author Tatsuya Osaki, a Picower Institute research scientist, says that the organoids’ ability to model the specific consequences of each mutation enabled him to gain mutation-specific insights that haven’t emerged in prior studies, where scientists just knocked out MECP2 overall. The organoids also provided a novel opportunity to understand how each mutation affected different cell types and their interactions.

Distinct effects

More than 800 mutations in MECP2 can cause Rett syndrome, but just eight account for more than 60 percent of cases. Sur and Osaki chose one of these, R306C, which involves a difference of just one DNA base pair (916C>T), because it represents 7-8 percent of Rett syndrome cases. The other mutation they chose, V247X, is much more rare and severe because it cuts off production of the gene’s protein product by a single DNA base deletion (705Gdel), leaving the protein not just errant, but incomplete.

In organoids cultured for three months, each mutation produced some common but also sometimes distinct consequences compared to control organoids with non-mutated MECP2. For many of their experiments, the team used “three-photon” microscopes capable of cellular-level resolution all the way through the organoids’ approximate 1 millimeter thickness, resolving both their structure (via “third-harmonic generation” imaging), and the live activity patterns of their neurons (via calcium fluorescence).

For instance, the scientists observed that the V247X organoids exhibited several structural differences from their controls — they were larger and had different thicknesses of various layers — but the R306C ones were much more like their controls. Organoids harboring either mutation exhibited less-developed axon projections from their neurons compared to controls.

Looking at properties of neural activity and connectivity in the organoids, the scientists found some similar deficits across both mutations. Both showed reduced spiking activity and synchronicity between neurons compared to their controls.

But when the scientists looked at other properties, the organoids started to diverge from each other. In particular, an indication of the efficiency of their network structure called “small-world propensity” (SWP) was decreased in R306C organoids, and increased in V247X ones, compared to controls. This means that both mutations altered the development of typical network structures for information processing, but in different directions.
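
For readers unfamiliar with small-world measures, the sketch below computes a related (but not identical) quantity — the small-world coefficient sigma provided by NetworkX — on a toy graph. Like SWP, it compares a network’s clustering and shortest path lengths against randomized references, with values well above 1 indicating small-world organization. The graph and parameters are illustrative assumptions, not the study’s connectivity data or its exact metric.

```python
import networkx as nx

# A toy "connectivity" graph: a ring lattice with a few random shortcuts,
# the classic Watts-Strogatz small-world construction.
G = nx.watts_strogatz_graph(n=60, k=6, p=0.1, seed=1)

# sigma compares clustering and path length to randomized reference graphs;
# sigma > 1 suggests small-world structure.
sigma = nx.sigma(G, niter=20, nrand=5, seed=1)
print(f"small-world coefficient sigma = {sigma:.2f}")
```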

To ensure that their results were meaningful for Rett syndrome patients, the team collaborated with Charles Nelson at Boston Children’s Hospital, whose team measured EEG in several children with different Rett mutations. Although the sample was small, the researchers found indications that SWP was altered in the volunteers’ EEG readings, much as it was in the organoids.

Finally, by labeling excitatory neurons to flash in one color and inhibitory neurons to flash in a different color, the scientists were able to see that connectivity between the different neural types differed significantly from controls in the V247X organoids.

Treatment tests

All the testing showed that each mutation caused several changes in organoid structure, activity, and connectivity, and that the deviations were often particular to the specific mutation.

To understand how these differences emerged, and how they might be corrected, Sur and Osaki’s team turned to examining how the cells in each kind of organoid might be expressing their genes differently than controls. Differences in gene expression often lead to alterations of key molecular pathways in cells that can disrupt their activity and function. Analysis with a technique called single cell RNA sequencing indeed yielded hundreds of differences in each organoid type, where some genes were expressed more than in controls while others were underexpressed.

For instance, the analyses revealed that in R306C organoids a gene called HDAC2 was overexpressed. That protein is known for repressing expression of other genes. Meanwhile, in the V247X organoids, the scientists found reduced expression of genes for some receptors of the inhibitory neurotransmitter GABA. These organoids also showed defects in the function of astrocyte cells, which support many aspects of neural function.

Organoids with either mutation also exhibited aberrations in molecular pathways that enable the development of circuit connections between neurons, called synapses.

Given the specific defects they observed, the scientists decided to treat the organoids with a drug that can inhibit HDAC2 activity and another that increases GABA’s efficacy. The HDAC2 inhibitor restored neuronal activity and SWP to normal levels in the R306C organoids, and the GABA “agonist” baclofen restored SWP to control levels in the V247X organoids.

Osaki notes each of the treatment drugs has already been studied in other disease contexts, meaning they are well-understood drugs that could be repurposed.

Now that the researchers have developed an organoid platform for dissecting the consequences of individual mutations, identifying their roots and testing treatments, they plan to apply it to studying four more mutations, Sur says, comparing all of them against a standardized control organoid.

In addition to Sur, Osaki, and Nelson, the paper’s other authors are Chloe Delepine, Yuma Osako, Devorah Kranz, April Levin, and Michela Fagiolini.

The National Institutes of Health, a MURI grant, The Freedom Together Foundation, and the Simons Foundation provided support for the research.


Powering 160,000 hours of discovery at MIT.nano

NanoFab Equipment Management and Operations (NEMO) system streamlines shared facilities management via tool trainings, reservations, and lab communications.


Each year, more than 1,500 researchers rely on over 200 tools and instruments at MIT.nano to pursue experiments that span MIT’s disciplines, collectively generating 160,000 hours of work across 88,000 instances of tool use. Behind this activity is an operational framework that must discreetly coordinate access, maintain fairness, and keep research moving without friction.

Managing such a dynamic environment requires more than a scheduling calendar. An automated reservation system serves as the connective tissue of the facility, balancing demand across diverse user needs while supporting the practical realities of a shared lab space. Researchers arrive at MIT.nano with different workflows, safety requirements, and administrative needs, yet the system must present a seamless experience. Integration with MIT’s broader digital infrastructure, from onboarding and authentication to safety training and billing, ensures that access is both efficient and compliant, reducing barriers so researchers can focus on their work.

A system for the modern era

Over the past three years, during a period of rapid growth in both equipment and facility usage, MIT.nano undertook a transition to a new platform designed to scale with demand while maintaining operational continuity. The effort reflects an ongoing commitment to evolving infrastructure that supports the pace, complexity, and collaborative spirit of modern research.

The importance of robust laboratory management systems has long been recognized at MIT. For decades, researchers in the Microsystems Technology Laboratories (MTL) and the Materials Research Laboratory relied on the CORAL lab management platform to reserve and manage shared instrumentation. Jointly developed by MIT and Stanford University and introduced in 2003, CORAL represented a significant advance over the text-based system it replaced. But by the time MIT.nano adopted CORAL in 2018, active development had slowed, and the platform was beginning to show its age, most visibly through the absence of modern web and mobile interfaces expected by today’s users.

To address these limitations, MIT.nano has transitioned to NEMO, an open-source laboratory management system originally developed at the National Institute of Standards and Technology. NEMO centralizes scheduling, communication, and operational logistics into a single platform that manages tool reservations and user access while supporting facility growth. Its modular architecture and plugin framework allow for extensive customization, enabling the system to evolve alongside the needs of a large, shared research environment.

“Over time, NEMO was replicating core functionalities of CORAL while introducing new features that CORAL simply could not support,” explains Thomas Lohman, senior software and systems manager at MTL and a long-time contributor to CORAL’s development. “The question became whether to continue patching the old system or adopt this new platform that already had a lot of the features we use daily, as well as an active community continually improving it.”

For MIT.nano leadership, modernization was about more than replacing an aging tool. “We needed a system that centralizes everything a facility user depends on — policies, tool documentation, training workflows, and communications — within a user-friendly, mobile-accessible environment,” says Anna Osherov, associate director for Characterization.nano, who led the evaluation and transition effort. “Just as important was making sure the platform enhances the experience for both users and staff.”

Collaborating at MIT and with shared access facilities

MIT.nano collaborated closely with Mathieu Rampant, NEMO project lead and CEO of Atlantis Labs, to adopt the community edition of NEMO, an extended version enriched by contributions from a growing global user base. The open-source model ensures that improvements developed at MIT.nano benefit the broader research community, reinforcing a shared ecosystem of innovation. “The NEMO community is expanding rapidly, and many new features originate directly from facility users and administrators,” says Rampant. “That collaborative model allows improvements to propagate quickly while giving institutions a sense of ownership in the platform’s evolution.”

NEMO introduces modern features long requested by MIT.nano researchers, including mobile access, improved transparency, and streamlined workflows. Facility users can now monitor their own tool usage and consumables, customize notifications, register for training, join real-time equipment waitlists, report issues, and communicate with staff, all through a unified dashboard. What was once distributed across multiple systems is now centralized, reducing friction in day-to-day lab operations.

Launching a new platform at the scale of MIT.nano required careful planning and sustained collaboration. The system needed to support multiple facility types, integrate with existing MIT infrastructure, and accommodate a diverse set of instrumentation workflows. “Features that work well in a typical characterization lab can quickly become a burden in a more chemically active environment like the cleanroom,” explains Jorg Scholvin, associate director of Fab.nano. “Relying on researchers to log in using personal devices and Duo authentication, for example, would be impractical in that setting.”

To address these challenges, MIT.nano collaborated with MIT Information Systems and Technology Associate Vice President Olu Brown and Senior Director for Infrastructure Operations Marco Gomes and their teams to streamline integration between MIT systems and NEMO for cleanroom users. “The availability of modern APIs allowed us to connect very different systems efficiently and deliver a convenient, seamless, and productive experience in the lab,” says Scholvin.
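
As a purely hypothetical illustration of the kind of glue code such integrations involve, the sketch below polls a lab-management REST endpoint for upcoming reservations so that another campus system could act on them. The base URL, endpoint path, query parameters, field names, and token header are assumptions made for the example; they are not NEMO’s documented API or MIT.nano’s actual configuration.

```python
import datetime
import requests

BASE_URL = "https://nemo.example.edu/api"    # hypothetical deployment URL
API_TOKEN = "replace-with-a-facility-issued-token"

def upcoming_reservations():
    """Fetch reservations starting today or later (illustrative fields only)."""
    today = datetime.date.today().isoformat()
    response = requests.get(
        f"{BASE_URL}/reservations/",
        headers={"Authorization": f"Token {API_TOKEN}"},
        params={"start__gte": today},        # assumed query-filter syntax
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    for reservation in upcoming_reservations():
        print(reservation.get("tool"), reservation.get("user"),
              reservation.get("start"))
```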

The result is a platform that now processes thousands of reservations, communications, and operational actions daily. “We truly value the partnership with MIT.nano and appreciate the collaboration throughout this effort,” says Gomes. “It’s been a great example of teams working together to deliver something meaningful for the research community.”

As one of the largest shared-access facilities deploying NEMO, MIT.nano has played a central role in advancing the platform’s capabilities, both by helping shape its development and by demonstrating a model that is scalable and effective for other facilities and research centers nationwide. Enhancements first created to meet MIT.nano’s needs are now leveraged by other facilities adopting NEMO across the globe. 


It took 40 years for technology to catch up to this zipper design

An old patent from MIT Professor Bill Freeman inspired the new “Y-zipper,” a three-sided fastener that snaps gear, robots, and art into shape at the push of a button.


In 1985, the Innovative Design Fund placed an ad in Scientific American offering up to $10,000 to support clever prototypes for clothing, home decor, and textiles. William Freeman PhD ’92, then an electrical engineer at Polaroid and now an MIT professor, saw it and submitted a novel idea: a three-sided zipper. Instead of fastening pants, it’d be like a switch that seamlessly flips chairs, tents, and purses between soft and rigid states, making them easier to pack and put together.

Freeman’s blueprint was much like a regular zipper, except triangular. On each side, he nailed a belt to connect narrow wooden “teeth” together. A slider wrapping around the device could be moved up to fasten the three strips into place, straightening them into a triangular tube. His proposal was rejected, but Freeman patented his prototype and stored it in his garage in the hopes it might come in handy one day.

Nearly 40 years later, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers wanted to revive the project to create items with “tunable stiffness.” Prior approaches to tuning stiffness either weren’t easily reversible or required manual assembly, so CSAIL built an automated design tool and an adaptable fastener called the “Y-zipper.” The scientists’ software program helps users customize three-sided zippers, which it then builds on its own in a 3D printer using plastics. These devices can be attached or embedded into camping equipment, medical gear, robots, and art installations for more convenient assembly.

“A regular zipper is great for closing up flat objects, like a jacket, but Freeman ideated something more dynamic. Using current fabrication technology, his mechanism can transform more complex items,” says MIT postdoc and CSAIL researcher Jiaji Li, who is a lead author on an open-access paper presenting the project. “We’ve developed a process that builds objects you can rapidly shift from flexible to rigid, and you can be confident they’ll work in the real world.”

Why zippers?

In CSAIL’s software program, users can customize how a fastener will look once it’s zipped up: they can select the length of each strip, as well as the direction and angle at which it will bend. They can also choose one of four motion “primitives” for the zipped-up shape: straight, bent (similar to an arch), coiled (resembling a spring), or twisted (resembling a screw).
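
To make those options concrete, here is a minimal, hypothetical sketch of how such a design specification might be represented in code; the class and field names are invented for illustration and do not reflect the CSAIL tool’s actual interface.

from dataclasses import dataclass
from typing import Literal

# Hypothetical parameters for one strip of a three-sided zipper
# (illustrative only; not the CSAIL tool's API).
@dataclass
class StripSpec:
    length_mm: float           # length of the strip
    bend_direction_deg: float  # direction the strip bends toward when zipped
    bend_angle_deg: float      # how sharply it bends

# The four motion "primitives" described in the article.
Primitive = Literal["straight", "bent", "coiled", "twisted"]

@dataclass
class YZipperDesign:
    strips: tuple[StripSpec, StripSpec, StripSpec]  # one spec per side
    primitive: Primitive                            # overall zipped-up shape

# Example: an arch-like zipper that could brace a tent pole.
design = YZipperDesign(
    strips=(
        StripSpec(length_mm=300, bend_direction_deg=0, bend_angle_deg=30),
        StripSpec(length_mm=300, bend_direction_deg=120, bend_angle_deg=30),
        StripSpec(length_mm=300, bend_direction_deg=240, bend_angle_deg=30),
    ),
    primitive="bent",
)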

The Y-zipper that results will appear to “shape-shift” in the real world. When unzipped, it can look like a squid with three sprawling tentacles, and when you close it up, it becomes a more compact structure (like a rod, for instance). This flexibility could be useful when you’re traveling — take pitching a tent, for example. The process can take up to six minutes to do alone, but with the Y-zipper’s help, it can be done in one minute and 20 seconds. You simply attach each arm to a side of the tent, supporting the structure from the top so that the zipper seemingly pops the canopy into place. 

This seamless transition could also unlock more flexible wearables, often useful in medical scenarios. The team wrapped the Y-zipper around a wrist cast, so that a user could loosen it during the day, and zip it up at night to prevent further injuries. In turn, a seemingly stiff device can be made more comfortable, adjusting to a patient’s needs.

The system can also aid users in crafting technology that moves at the push of a button. One can attach a motor to the Y-zipper after fabrication to automate the zipping process, which helps build things like an adaptive robotic quadruped. The robot could potentially change the size of its legs, tightening up into taller limbs and unzipping when it needs to be lower to the ground. Eventually, such rapid adjustments could help the robot explore the uneven terrain of places like canyons or forests. Actuated Y-zippers can also build dynamic art installations — for example, the team created a long, winding flower that “bloomed” thanks to a static motor zipping up the device.

Mastering the material

While Li and his colleagues saw the creative potential of the Y-zipper, it wasn’t yet clear how durable it would be. Could the zippers sustain daily use?

The team ran a series of stress tests to find out. First, they evaluated the strength and flexibility of polylactic acid (PLA) and thermoplastic polyurethane (TPU), two plastics commonly used in 3D printing. Using a machine that bent the Y-zippers down, they found that PLA could handle heavier loads, while TPU was more pliable.

In another experiment, CSAIL researchers used an actuator to continuously open and close the Y-zipper to see how long it would take to snap. Some 18,000 cycles of zipping and unzipping later, it finally broke. The Y-zipper’s secret to durability, according to 3D simulations, is its elastic structure, which helps distribute the stress of heavy loads.

Even so, Li envisions an even more durable three-sided zipper made from stronger materials, like metal. The team may also make the zippers bigger for larger-scale projects, but that’s not yet possible with their current 3D printing platform.

Li also notes that some applications remain unexplored, like space exploration, where the Y-zipper’s tentacle-like arms could be built into a spacecraft to grab nearby rock samples. Likewise, the zippers could be embedded into structures that can be assembled rapidly, helping relief workers quickly set up shelters or medical tents during natural disasters and rescues.

“Reimagining an everyday zipper to tackle 3D morphological transitions is a brilliant approach to dynamic assembly,” says Zhejiang University assistant professor Guanyun Wang, who wasn’t involved in the paper. “More importantly, it effectively bridges the gap between soft and rigid states, offering a highly scalable and innovative fabrication approach that will greatly benefit the future design of embodied intelligence.”

Li and Freeman wrote the paper with Tianjin University PhD student Xiang Chang and MIT CSAIL colleagues: PhD student Maxine Perroni-Scharf; undergraduate Dingning Cao; recent visiting researchers Mingming Li (Zhejiang University), Jeremy Mrzyglocki (Technical University of Munich), and Takumi Yamamoto (Keio University); and MIT Associate Professor Stefanie Mueller, who is a CSAIL principal investigator and senior author on the work. Their research was supported, in part, by a postdoctoral research fellowship from Zhejiang University and the MIT-GIST Program.

The researchers’ work was presented at the ACM CHI Conference on Human Factors in Computing Systems in April.


How chromatin movement helps control gene expression

By monitoring these chromosomal structures over many timescales, MIT researchers found that chromatin helps bring genes closer to their regulatory elements.


Gene expression is controlled, in part, by the interactions between genes and regulatory elements located along the genome. Those interactions depend on the ability of chromatin — a mix of DNA and proteins — to move around within a crowded space.

In a new study, MIT researchers have measured chromatin movement at timescales ranging from hundreds of microseconds to hours, allowing them to rigorously quantify those dynamics for the first time.

Their analysis revealed two distinct classes of chromatin dynamics: In one, chromatin moves in a constrained way that allows it to contact primarily neighboring regions of the genome; in the other, chromatin moves more freely and contacts regions that are farther away, but only over longer timescales.

The findings offer insight into how gene expression is regulated, as well as how chromatin segments come together for other processes such as DNA repair, the researchers say.

“Because we were able to look at chromatin dynamics for the first time at these very fast timescales, and also for the first time across the full dynamic range, we were able to observe chromatin motion over a range that just wasn’t possible before,” says Anders Sejr Hansen, an associate professor of biological engineering at MIT and the senior author of the new study, which appears today in Nature Structural and Molecular Biology.

The paper’s lead authors are MIT postdoc Matteo Mazzocca, Domenic Narducci PhD ’25, and Simon Grosse-Holz PhD ’23. Jessica Matthias, chief commercial officer of Abberior Instruments, and Tatiana Karpova, manager of the National Cancer Institute Optical Microscopy Core, are also authors of the paper.

Constrained movement

In textbooks, chromatin is often depicted as a static structure within the cell nucleus, but in reality, it is constantly moving. Those movements are necessary for genes to interact with DNA regulatory sequences such as enhancers, which can be as far as 1 million base pairs away. They also ensure that when DNA breaks occur, the two ends of DNA can encounter each other to be repaired.

“Chromatin dynamics are foundational to all processes in the nucleus, and especially processes that involve two things finding each other. That’s important in DNA repair, gene regulation, recombination, or moving a particular gene to the right compartment of the nucleus,” Hansen says.

The movement of any particular location on the genome, or locus, is constrained by the fact that DNA is a polymer. After moving in any direction, a locus will be pulled back by the DNA on either side of it.

“Chromosomes are polymers. They’re held together by many nucleotides of DNA. Being part of DNA is a little bit like running while holding hands with other people. If a hundred people are holding hands and you, in the middle of the chain, try to run in one direction, you’ll get pulled back,” Hansen says.

This type of behavior is known as subdiffusive movement. Previous studies have yielded conflicting reports on how subdiffusive chromatin is, mainly because the studies were not able to track the movement over a long enough period of time to obtain statistically robust measurements. Because the movements are so small, on the order of nanometers, data needs to be obtained over long dynamic ranges — from milliseconds to hours.
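
A standard way to quantify this behavior (a textbook definition, not something introduced by the new study) is the mean squared displacement of a locus, which grows more slowly than linearly with time for subdiffusive motion:

\[
\mathrm{MSD}(t) \;=\; \big\langle\, \lvert \mathbf{r}(\tau + t) - \mathbf{r}(\tau) \rvert^{2} \,\big\rangle \;\propto\; t^{\alpha}, \qquad 0 < \alpha < 1 .
\]

An exponent of α = 1 corresponds to ordinary free diffusion; the classic Rouse polymer model, discussed later in the article, predicts α = 1/2 for a locus on an ideal chain, and smaller values indicate an even stronger pull from the surrounding polymer.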

In those earlier studies, researchers used imaging techniques that can track the position of a single molecule over time by comparing images frame by frame. These are useful but can only be used over a small dynamic range because of the limitations of conventional microscopy.

To generate more statistically robust data, the MIT team used MINFLUX — a super-resolution light microscopy technique that can track the movement of tiny objects such as proteins over longer periods of time. This technique was recently developed by Stefan Hell of the Max Planck Institute, a Nobel laureate for his work in super-resolution microscopy. In this study, the MIT team became the first to apply this technique to chromatin in living cells.

“MINFLUX allowed us to get around the limitations of conventional microscopy, letting us measure chromatin movement faster and for a longer period of time than ever before,” Narducci says. “To our knowledge, it’s the first time this technique has been used this way.”

Using MINFLUX, the researchers were able to study cells over timescales that covered four orders of magnitude — from 200 microseconds to 10 seconds. And by combining MINFLUX with two traditional imaging techniques, they could track chromatin movement over seven orders of magnitude across time, from hundreds of microseconds to several hours.

“Region of influence”

These studies, performed across several different mouse and human cell types, allowed the researchers to identify two distinct classes of chromatin dynamics. In both classes, over short and intermediate timescales (up to 200 seconds), any given locus tends to move only within about 200 nanometers. This suggests that the subdiffusive pull is stronger than had been previously thought.

“One of the main takeaways is that you have this region of influence where a genomic locus has access to other genomic loci, and this is roughly a couple hundred nanometers large,” Grosse-Holz says. “If loci are much closer together than a couple hundred nanometers, they’re effectively in contact all the time. You get a cutoff at a couple hundred nanometers where everything within that region around a given locus can see that locus, and everything outside cannot.”

This constant contact is likely beneficial for DNA repair, as the broken strands remain in close proximity to each other. The findings also suggest that for genes and regulatory elements that are within about 100,000 base pairs, they don’t need any extra help to find each other — they will do so routinely through their normal movement.

“If they are closer than 100,000 bases, and most regulatory elements are, then those elements are going to find their target gene within a few milliseconds or a few minutes,” Mazzocca says. “These are timescales that are completely consistent with transcription.”

In the other class of chromatin dynamics that the researchers identified, chromatin is able to move over a wider range, but only at longer timescales (a few minutes to hours). This class of chromatin appeared in some types of cells but not others, for reasons that are not yet understood.

“It would be reasonable to assume that the behavior would be more or less the same in all cell types, but that’s not at all what we found,” Hansen says. “It’s very different in different cell types, with no obvious way of categorizing things.”

He adds that the strength of the subdiffusive pull that the researchers found in this study can’t be explained with existing models that have been developed to study chromatin dynamics — the Rouse model and the fractal globule model. This suggests that the models may need to incorporate factors that were previously left out, such as the interactions between chromatin and the crowded nucleoplasm it sits within.

“These findings are significant for two key reasons,” says Luca Giorgetti, a group leader at the Friedrich Miescher Institute for Biomedical Research in Switzerland, who was not involved in the study. “First, they rigorously confirm longstanding but anecdotal observations that chromatin motion is strongly subdiffusive. Second, they demonstrate that this behavior is consistent across multiple cell types and persists across all measured timescales.”

The research was funded, in part, by the National Institutes of Health, a National Science Foundation CAREER Award, a Pew-Stewart Scholar for Cancer Research Award, and the Bridge Project, a partnership between the Koch Institute for Integrative Cancer Research at MIT and the Dana-Farber/Harvard Cancer Center.


Found Industries aims to strengthen America’s industrial supply chains

Founded by Peter Godart ’15, SM ’19, PhD ’21, the company has developed technologies for extracting critical metals and making fuel out of aluminum.


Found Industries has gone through several distinct phases in the four years since it was originally formed as Found Energy. There was the scrappy startup stage, in which the company operated primarily out of the basement of founder Peter Godart ’15, SM ’19, PhD ’21. Then there was the demonstration phase, in which the company worked to productize its technology for transforming aluminum into high-density fuel for industrial operations.

Now, after confronting supply chain vulnerabilities related to critical metals in its aluminum fuel business, the company is launching a new division, Found Metals, to extract the critical metal gallium from mineral refineries — a move that builds on its original technology while addressing a major national security need.

Gallium is a critical material in the defense, semiconductor, and energy sectors. In 2024, China produced 99 percent of the world’s primary supply — market dominance the country takes advantage of through export controls.

Godart’s company developed an electrochemical gallium extraction technology for internal use after realizing how dependent it would be on China for the catalyst material at the center of its aluminum fuel reactors. Now, with support from the U.S. Department of Energy, Found is hoping to use that technology to create a new domestic supply chain for gallium and a host of other important metals.

Found Industries is still committed to its aluminum fuel operations, now under its Found Energy division. It is already running a 100-kilowatt-class demonstration plant and is preparing for industrial pilot deployments next year. But with its expansion, which was announced April 21, the company is also working to meet the moment for critical metals production.

“Gallium is the world’s most critical metal, as it’s 99 percent controlled by China,” Godart says. “When you produce 99 percent of something, you also produce 99 percent of the tools required to extract it. We couldn’t get our hands on some of those tools, so we were forced to come up with a new technology. Now we believe we can deploy this at scale to become one of the first major Western suppliers of these metals.”

From fuel to metals

Godart focused on robotics as an undergraduate in MIT’s Department of Mechanical Engineering and Department of Electrical Engineering and Computer Science. Following graduation, he worked at NASA’s Jet Propulsion Laboratory, where he explored systems for tapping into high-density fuels like aluminum on other planets.

“I had this crazy idea that you could use aluminum, which is already a common construction material for aerospace, as a fuel on other planets,” Godart says. “You don’t need most of the aluminum on a spacecraft once you land on another planet. Aluminum is around 40 times more energy-dense than lithium-ion batteries, and if you have an oxidizer, like water on an icy moon for example, then you can react that aluminum with water and extract energy as heat and hydrogen.”
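
The overall reaction Godart is describing is the standard aluminum-water reaction (a textbook equation; Found’s proprietary catalyst chemistry is not described here):

\[
2\,\mathrm{Al} + 6\,\mathrm{H_2O} \;\longrightarrow\; 2\,\mathrm{Al(OH)_3} + 3\,\mathrm{H_2} + \text{heat} .
\]

Aluminum stores roughly 31 megajoules per kilogram, released roughly half as direct heat and half as the chemical energy of the hydrogen — the basis for the comparison with lithium-ion batteries above.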

Luckily for people who might spill water on aluminum while cooking, the metal is normally very stable when exposed to air. In order to tap into aluminum’s stored energy, it needs to undergo a chemical reaction. Godart began exploring catalyst materials to create that reaction at NASA. He continued that work with professor of mechanical engineering Douglas Hart when he returned to MIT in 2017, this time for applications a little closer to home.

“If we want to think about moving humanity to other planets, we have some problems to solve here first,” Godart says. “That was the impetus for me to go back to MIT to study using aluminum as a fuel for energy distribution on Earth.”

Around 70 million tons of aluminum are already transported around the globe every year. Godart says that gives aluminum an easier path to scale. During his PhD, he created a process for coating aluminum with a gallium-containing alloy to help tap into aluminum’s embodied energy.

“We found a catalyst that, when mixed with aluminum scraps, enabled aluminum to react with water very rapidly and at orders of magnitude higher power density than what had been possible before,” Godart says. “That meant you could use aluminum as a fuel and get megawatt-scale power from compact reactor systems.”

By the time he finished his PhD in 2021, Godart and his collaborators had developed a system that mixes aluminum fuel with those catalysts to continuously produce electricity at the kilowatt scale through a hydrogen fuel cell.

Godart launched Found Energy in 2022, licensing part of his research through MIT’s Technology Licensing Office and receiving support from MIT’s Venture Mentoring Service. The company received an Activate fellowship, and after quickly outgrowing Godart’s basement, moved into its current 20,000-square-foot facility in Charlestown, Massachusetts.

Today, Found Energy is working with industrial companies that have abundant aluminum scrap.

“When you invent a fuel, you then have to invent the engine,” Godart says. “Our engine is called a catalyzed aluminum water reactor. You feed in aluminum that’s been treated with the catalyst and water, and you get a steam-hydrogen gas mixture. We call that our power stream. We use it to cogenerate industrial heat and electricity. The reaction byproduct is a hydrated aluminum oxide that can be sold into various industries or recycled back into aluminum, which is the long-term vision.”

As Godart worked to build more of the systems, he became concerned about Found’s reliance on Chinese supply chains for its catalyst material. So, in 2024, he developed a new way to extract gallium from Bayer liquor, an industrial process stream used to produce aluminum. Traditional methods for extracting gallium rely on foreign-controlled organic chemicals or resins to bind and concentrate the gallium.

Found uses a continuous electrochemical process to recover the gallium directly from Bayer liquor and other industrial feedstocks, even at low concentrations.

“We thought of it as a way to future-proof what we were doing,” Godart says. “Necessity was the mother of invention.”

Then, toward the end of 2024, China began restricting the export of critical metals including gallium.

“We realized we had already developed a technique for producing these restricted metals that could be very quickly adapted,” Godart recalls.

Scaling for national security

On April 14, the Department of Energy’s Office of Critical Minerals and Energy Innovation selected Found as part of its $5.4 million program to recover gallium from domestic feedstocks. The company plans to start extracting gallium, along with other critical metals like indium and germanium, by the end of 2027.

Meanwhile, Found is already running a 100-kilowatt-class aluminum fuel demonstration system in Charlestown and is working through orders of several megawatts from large public companies.

“For our fuel technology, the vision is to go as big as possible,” Godart says. “We envision major power plants. Aluminum refineries today, for example, consume hundreds of megawatts of continuous thermal power. That’s what we aim to deliver.”

Godart says he spends most of his time now on gallium extraction, but both branches of the business could make supply chains more secure in the future.

“The big focus now is critical metals, because the government needs this,” Godart says. “We’re also making these metals for ourselves, so we’re vertically integrating our own supply chain, which is table stakes now for companies that deal in physical goods. You need to be able to control your inputs. By focusing on metals, it improves the likelihood of success for our aluminum fuel business.”


MIT affiliates awarded 2026 Guggenheim Fellowships

Afreen Siddiqi, Kathleen Thelen, and Vinod Vaikuntanathan, along with alumna Kate Manne, are appointed to the 2026 class of “trail-blazing fellows.”


MIT Research Scientist Afreen Siddiqi ’99, SM ’01, PhD ’06; MIT professors Kathleen Thelen and Vinod Vaikuntanathan SM ’05, PhD ’09; as well as Kate Manne PhD ’11 are among 223 scientists, artists, and scholars awarded 2026 fellowships from the John Simon Guggenheim Memorial Foundation. Working across 55 disciplines, the fellows were selected from almost 5,000 applicants for “prior career achievement and exceptional promise.”

Each fellow receives a monetary stipend to pursue independent work at the highest level under “the freest possible conditions.” Since its founding in 1925, the Guggenheim Foundation has awarded nearly $450 million in fellowships to more than 19,000 fellows. This year, MIT faculty and staff were recognized in the categories of geography and environmental studies, political science, and computer science.

Afreen Siddiqi is a research scientist in the Engineering Systems Laboratory in the Department of Aeronautics and Astronautics. Her expertise is in the development of systems-theoretic analytical methods and quantitative modeling for technical systems in space and on Earth that need to operate and adapt in changing environments. Her work has focused on space exploration, satellite Earth observation for informing decisions, and critical infrastructure planning. She served as a contributing author to the 2022 sixth assessment report of the Intergovernmental Panel on Climate Change (IPCC) on the implications of water, energy, and food interconnections for climate change adaptation. She has received teaching awards and fellowships including the Amelia Earhart Fellowship, the Richard D. DuPont Fellowship, and the Rene H. Miller Prize in Systems Engineering.

Kathleen Thelen is the Ford International Professor of Political Science. Her work focuses on the political economy of the rich democracies, with a current emphasis on the study of American capitalism in comparative perspective. Her most recent book, “Attention Shoppers! American Retail Capitalism and the Origins of the Amazon Economy,” was published by Princeton University Press in 2025. Her awards include the Friedrich Schiedel-Award for Politics and Technology, the Aaron Wildavsky Enduring Contribution Prize, and the Michael Endres Research Prize (2019). She was elected to the American Academy of Arts and Sciences in 2015.

Vinod Vaikuntanathan is the Ford Foundation Professor of Engineering in the Department of Electrical Engineering and Computer Science. A principal investigator at the Computer Science and Artificial Intelligence Laboratory, he focuses his research on the foundations of cryptography and its applications to theoretical computer science at large. He is known for his work on fully homomorphic encryption (a powerful cryptographic primitive that enables complex computations on encrypted data), as well as lattice-based cryptography (which lays down a new mathematical foundation for cryptography in the post-quantum world). His honors include the Harold E. Edgerton Faculty Award, the Gödel Prize, a Simons Investigator Award, the Distinguished Alumnus Award from the Indian Institute of Technology Madras, a Best Paper Award from CRYPTO 2024, and test-of-time awards from the IEEE Symposium on Foundations of Computer Science and the CRYPTO conference; he was also named a MacVicar Faculty Fellow in 2024 and an International Association for Cryptologic Research Fellow in 2026.

Kate Manne, who earned her PhD in philosophy at MIT in 2011, is now a professor at Cornell University.

“Our new class of Guggenheim Fellows is representative of the world’s best thinkers, innovators, and creators in art, science, and scholarship,” says Edward Hirsch, award-winning poet and president of the Guggenheim Foundation. “As the foundation enters its second century and looks to the future, I feel confident that this new class of 223 individuals will do bold and inspiring work, undaunted by the challenges ahead. We are honored to support their visionary contributions.”


Testing sustainable agriculture in Barcelona

Students in a MISTI Global Classroom confronted the challenges of climate change, one farm and co-op visit at a time.


A dozen MIT students recently set out for Barcelona — not just to study climate resilience, but to experience it firsthand. As part of STS.S22 (How to Grow Resilient Futures: Regenerative Agriculture and Economies in Catalunya, Spain), an Independent Activities Period course taught by Kate Brown, the Thomas M. Siebel Distinguished Professor in the History of Science, they stepped beyond the classroom and into living systems of sustainability.

Offered as a Global Classroom through MIT International Science and Technology Initiatives (MISTI), the course reimagined what learning could look like. Instead of working their way through a syllabus of texts about sustainable farming and the power of cooperatives, Brown’s students got their hands dirty.

In fact, quite literally: They visited local farms and slaughterhouses; prepped, cooked, and served a cooperative dinner to migrants; and constructed a working greenhouse. In the process, they built a lasting community and forged their own visions about sustainability and how they are compelled to confront climate change — as MIT students now, and eventually as alumni. 

“I wanted the students to think about alternatives to the notion of capitalist development, where the latest technology is seen as the solution to every social problem that emerges. I wanted them to see ways people are solving problems in a place like Barcelona, where communities and ecologies are centered as part of the solution,” Brown says.

Through Brown’s partnerships at the Barcelona Urban Research Institute and Research and Degrowth (R&D)  — and MISTI Spain’s infrastructure — the group of eight undergraduates and four graduate students had the opportunity to examine the historical roots of cooperative movements in the region while simultaneously experiencing today’s iteration of co-op work. 

Brown intentionally designed the three-week syllabus to push students beyond the classroom walls and get them face-to-face with local MISTI Spain collaborators from across the farming and ecology sectors. For example, the class met with Pino Delàs, a pig farmer who left the industrial system to start his own localized, cyclical operation, called Llavora, which supported community farming and generated significantly less waste. 

Rooted in community 

With more than a century of creating cooperatives — both worker and farm co-ops — Barcelona and its Catalan roots provided an ideal environment for the students to consider Brown’s questions through fieldwork rooted in community.

Within their first week on the ground, they collaborated with volunteers at the Agora Squat. The small “pocket park” was initially slated to be developed into a luxury hotel, but a local group of 200 neighborhood residents came together to protest the plan, instead exercising their legal right to use the land, a provision in Spanish law that allows neighbors to make a case for possessing land that isn’t being used productively. Now, the lush green square boasts a community kitchen and gardens. One night a week, volunteers provide dinner for upward of 60 recent North African migrants, using ingredients sourced from local fruiterias and shops that would have otherwise gone to waste at the close of business.

On this particular Thursday, Brown’s students became nonprofit managers and chefs, but they also became community members themselves. In just a few hours from start to finish, the students had to source donated produce from the local vendors, come up with a recipe using what they’d gathered, and then prepare a meal in the rudimentary kitchen. “They received a lot of turnips and had to create a recipe to use them,” Brown says. In the end, a flavorful stew simmered in a massive metal pan over propane burners, brought alive with fresh chilies picked from the garden. 

“This was way outside some students’ comfort zones,” Brown says. Yet, that was exactly the point of the activity. By the end of the evening, the students discovered that sometimes the most profound educational moments take shape only after challenging the limits of learning. 

“Many of us do not consider ourselves chefs, so it was empowering to discover that, together, we had the capacity to create a nourishing meal for 70 people, with produce that would have otherwise gone to waste. This meal that we created on the spot, in combination with many of the other workshops during the program, was a strong reminder of how much agency each of us has to effect change within isolating and constraining systems, especially in community with like-minded individuals,” says Sonia Torres Rodriguez, a first-year PhD student in urban studies and planning.

Torres Rodriguez focuses her doctoral research on affordable and climate-resilient housing. She was drawn to the IAP program's exploration of innovative approaches to more equitably distributing the means of producing housing and food, and was excited to be learning in person in Spain. “Cooking together, admiring healthy regenerative soil, foraging, learning traditional methods to braid grass, installing mini solar panels, and hosting poetry circles, would have been impossible to replicate on Zoom,” she says. 

Calvin Macatantan, a senior in computer science and urban studies and planning, was initially drawn to the program because of his interest in resilient economies and how they support the communities they serve. Other than visiting family in the Philippines, he’d never left the United States before. He was especially moved by the group’s stay at La Bruguera, an eco-resort partnered with R&D that serves as a “living laboratory.” The cohort heard from local experts in regenerative agriculture, soil health, and low-tech agroforestry, alongside hands-on activities such as eco-art sessions, weaving lessons, and the rebuilding of a greenhouse. 

As part of a final project for the course, Macatantan and another student wrote and illustrated a children’s book that explains La Bruguera’s work by making the soil come to life as the main protagonist for young readers. 

Brown’s course pushed Sofia Espindola de La Mora to think more critically about everyday systems and their environmental impact. Originally from Puerto Rico, the first-year student has watched helplessly in recent years as climate change has increased the frequency and magnitude of natural disasters at home.

She came to MIT looking for answers and wanting to make a difference, and signed up for Brown’s course as part of that quest. “It was fascinating to see firsthand that the degrowth movement doesn’t mean slowing down is a bad thing, but instead that the constant striving for more is what has led us to many of the predicaments we now face as a society. It forced me to think about whether it would even be possible for me to sustain the life I have now using renewable energy,” Espindola de La Mora says. The course convinced her to focus her studies on climate system science and engineering. 

A climate context

Broadening students’ perspectives was a priority for Brown, whose research lies at the intersection of history, science, technology, and bio-politics. She’s known on campus for courses like STS.038 (Risky Business: Food Production, Environment, and Health). Her 2026 book, “Tiny Gardens Everywhere: The Past, Present and Future of the Self-Provisioning City,” examines urban systems, including gardens. 

When Brown was designing the Global Classroom — made possible through MISTI, with additional support from the MIT Energy Initiative — she centered a value she considers imperative in any course today: addressing climate and other human-driven environmental challenges.

“I’m focused on training students to approach these problems at the local level, so they see what happens when they’re working through communities, rather than prescribing to them something to scale all over the world,” Brown says.

That localized, individualized approach helped expand on what the students initially believed was possible, and compelled them to become part of the solution through their studies and in their professional lives. 

Since their return to campus, Brown’s students have continued to lean on one another and build community, one meal at a time. Many Tuesday nights, they come together to cook dinner, Barcelona squat style. Each individual brings their ingredients, and together they create a recipe that nourishes and sustains.  

“I was losing a lot of faith in the world before this trip,” Macatantan admits. “We’re constantly surrounded by consumption and the drive to do more. This experience helped me realize that I want to do something that impacts people. For me, that will look like research. I want to become an expert in a subject and become someone who can help communicate that knowledge to people who need it.” 

“MISTI Global Classrooms like this show what happens when learning extends beyond the MIT campus,” says Alicia Goldstein Raun, associate director of MISTI and managing director of the MIT-Spain Program at the Center for International Studies. “I was excited when Professor Brown approached me to help shape this new class, knowing it would resonate with students. The students tackled global challenges like climate change and explored the degrowth movement while immersing themselves in Spanish communities and culture.”

Faculty interested in designing a MISTI Global Classroom can find more information through MISTI.


Beacon Biosignals is mapping the brain during sleep

Founded by Jake Donoghue PhD ’19 and former MIT researcher Jarrett Revels, the company is creating an AI-driven platform to help diagnose and treat disease.


The human brain remains one of the most fascinating and perplexing mysteries in medicine. Scientists still struggle to match neurological activity with brain function and detect problems early, slowing efforts to treat neurological disorders and other diseases.

Beacon Biosignals is working to make sense of the brain by monitoring its activity while people sleep. The company, which was founded by Jake Donoghue PhD ’19 and former MIT researcher Jarrett Revels, developed a lightweight headband that uses electroencephalogram (EEG) technology to measure brain activity while people enjoy their normal sleep routines at home. Those data are processed by machine-learning algorithms to monitor the effects of novel treatments, find new signs of disease progression, and create patient cohorts for clinical trials.

“There’s a step-change in what becomes possible when you remove the sleep lab and bring clinical-grade EEG into the home,” says Donoghue, who serves as Beacon’s CEO. “It turns sleep from a constrained, facility-based test into a scalable source of high-quality data for diagnostics, drug development, and longitudinal brain health.”

Beacon partners with pharmaceutical companies to accelerate its path to patients. The company’s FDA 510(k)-cleared medical device has already been used in over 40 clinical trials across the globe as part of studies aimed at treating conditions including major depressive disorder, schizophrenia, narcolepsy, idiopathic hypersomnia, Alzheimer’s disease, and Parkinson’s disease.

With each deployment, Beacon learns more about how the brain works — insights it is using to create a “foundation model” of the brain.

“It’s our belief that the dataset that’s going to transform brain health doesn’t exist yet — but we are rapidly creating it,” Donoghue says. “Our platform can characterize the heterogeneity of disease progression, generating dynamic insights that are impossible to fully capture through static modalities like sequencing or imaging. The brain is an electric organ and changes through synaptic plasticity, so tracking brain function across many diseases at scale will allow us to discover novel subgroups of diseases and map them over time.”

Illuminating the brain

Donoghue trained in the Harvard-MIT Program in Health Sciences and Technology, conducting clinical training for an MD while completing his PhD in neuroscience at MIT under the guidance of Earl Miller, the Picower Professor of Neuroscience in the Department of Brain and Cognitive Sciences and the Picower Institute for Learning and Memory. While in the program, Donoghue trained at Massachusetts General Hospital and Boston Children’s Hospital, where he helped care for patients, including in oncology, during the rise of genomic sequencing to guide precision cancer therapies. He later worked in neurology and psychiatry, where care often relied on more iterative approaches — highlighting an opportunity to bring similarly data-driven precision to brain health.

“What struck me most was the inability to measure brain function in the ways that cardiologists can longitudinally monitor cardiac function in patients from home,” Donoghue says. “At MIT, I built this conviction that processing a lot of brain data and working to correlate that with brain function would be transformative to how these neurological diseases are identified and treated.”

Toward the end of his training, Donoghue began developing his ideas further, engaging with mentors including HST and Harvard Medical School professors Sydney Cash and Brandon Westover. He had met Revels, who was working as a research software engineer in MIT’s Julia Lab, during his PhD, and convinced him to co-found Beacon with him in 2019.

“We decided building a business to understand the organ of interest — the brain — would be a great start to understanding heterogeneous neuropsychiatric diseases and building better treatments,” Donoghue recalls.

Beacon began as a computation and analytics company before building wearable devices to expand its clinical impact and reach. From its early days, Beacon has partnered with large pharmaceutical companies running clinical trials, offering a less invasive way to watch brain activity and learn how their drugs affect the brain, as well as how patients sleep.

“It was clear sleep was the right window to understand the brain,” Donoghue says. “Neural activity during sleep can be an order of magnitude higher and more structured, almost like a language. It’s a great surface area for understanding brain function and how different drugs affect the brain.”

Donoghue says Beacon’s devices can collect lab-grade data on each patient for multiple sequential nights, resulting in higher quality assessment. The company uses machine learning to extract insights, such as the time patients spend in different sleep stages and the number of small awakenings that occur throughout the night. It can also detect subtle sleep architecture changes that might lead to cognitive decline.

“We’re starting to take features of sleep activity and link them to outcomes in a way that’s never been done with this level of precision,” Donoghue says.

To date, Beacon has taken part in clinical trials for sleep and psychiatric disorders as well as neurodegenerative diseases, where sleep changes can emerge years before the presentation of symptoms.

“We do a lot of work in areas like Alzheimer’s disease and Parkinson’s, which affected my grandfather,” Donoghue says. “We’re analyzing features of rapid-eye-movement and slow-wave sleep to detect early changes that precede clinical symptoms. It’s an opportunity to move these diseases from late recognition to much earlier, data-driven detection.”

Improving brain treatments for millions

Last year, Beacon acquired an at-home sleep apnea testing company that serves more than 100,000 patients each year across the U.S., accelerating access to high-quality, comprehensive testing in the home and expanding the reach of its platform. Then in November, the company raised $97 million to accelerate that expansion.

“The vision has always been to reach patients and help people at scale,” Donoghue says. “What’s powerful is that we’re building a longitudinal record of brain function over time. A patient might come in for sleep apnea screening, but if they develop Parkinson’s years later, that earlier data becomes a window into the disease before symptoms emerged. That turns routine testing into a foundation for entirely new prognostic biomarkers — and a path to detecting and intervening in brain disease earlier, potentially before symptoms ever begin.”


Unlocking mysteries of the universe through math

Mathematician Amanda Burcroff is developing frameworks for understanding algebraic and geometric spaces in science as part of the School of Science Dean’s Postdoctoral Fellowship.


GPS navigation, cryptography, quantum computing — while some of humankind’s greatest advancements have been invented by pioneers from various cultures, they were founded upon one common grammar: mathematics.

“Mathematics is the language with which God wrote the universe,” said the famous Italian astronomer, physicist, and philosopher Galileo Galilei, who, among his various scientific contributions, helped provide evidence for the idea that the sun is at the center of the solar system.

Although mostly conveyed through combinations of numbers, letters, and signs that may seem enigmatic to many, math equations hold within them countless stories — playbooks that generations of wonderers and inventors have crafted, refined, and shared in an attempt to make sense of a world full of unknown variables.

“I have faith in mathematics that, when there seems to be something special happening, when there’s some coincidence, that it’s not just a coincidence,” says mathematician Amanda Burcroff, “but that there’s actually some really deep, interesting, and involved reason for why that should be true.”

Burcroff’s research is focused on algebraic combinatorics, an area that provides discrete frameworks for understanding algebraic and geometric spaces that ubiquitously arise across science. This year, she joins MIT’s Department of Mathematics as a postdoc as part of the School of Science Dean’s Fellowship. Working with Professor Alexander Postnikov, Burcroff is building upon her techniques with the goal of applying them to other areas such as theoretical physics — a field that seeks to uncover the fundamental laws governing everything from subatomic particles to the cosmos itself.

“I have trust that if you keep following the path, eventually you’ll find the treasure — that is, whatever theorem or proof — that you’re looking for,” she says.

Exploring possibilities and redefining rules

Like many children, Burcroff once saw math as a subject that entailed lots of memorizing. Although she felt that it came naturally to her, she didn’t always find math very interesting.

In high school, as she came to learn about areas like calculus and geometry, Burcroff started to see the discipline in a different light — a creative approach to exploring what’s possible.

“[In] most other fields, the rules are imposed on you by the world,” she says, “but in math, you get full freedom to lay down those rules and then figure out what the implications of those rules are by using logical consequence.”

In 2015, Burcroff began her bachelor’s degree at the University of Michigan with a major in math and a minor in computer science. There, she entered the world of combinatorics — a branch of math dealing with counting, arranging, and combining objects that forms a crucial basis for understanding the complexity of problems, as well as the limits of computer algorithms.

“When I was starting out, I was just happy to have any mystery that anyone gave me,” she says.

Math was, to Burcroff, like a fun game with levels to complete. But during a study abroad program in Budapest, Hungary — the hometown of Paul Erdős, who is considered one of the most prolific mathematicians of the 20th century — the game became even more exciting when she was handed puzzles no one had yet solved.

“It turns out that if you put down the right set of rules, there’s an infinite number of beautiful things that you can do with it,” she says.

A journey of endless mysteries to unlock

In 2019, Burcroff embarked on a journey to pursue further research in England, later completing a master’s degree in pure mathematics at the University of Cambridge, then a research master’s degree at Durham University. In 2021, she returned to the United States and began her PhD at Harvard University, with the guidance of Professor Lauren Williams.

Among the several riddles she has unraveled over the years, Burcroff helped unify different mathematical approaches to understanding why certain systems work so reliably. Think of it as discovering that two seemingly different sets of instructions actually lead to the same place. By demonstrating their connections, her work revealed an underlying, overarching mathematical architecture — a finding that later helped Burcroff and her collaborators tackle one of the many enduring riddles in her field.

Generalized cluster algebras form the basis for describing geometries that appear throughout physics. For more than a decade, mathematicians suspected these building blocks were created only by adding up ingredients and never subtracting, although no one was able to prove it. In 2024, Burcroff and her collaborators published a paper demonstrating that these spaces have nice positivity properties by developing a new way to count and organize patterns — helping untangle a long-standing conjecture, whose potential implications span from predicting particle collision outcomes to describing the spaces appearing in string theory.

These findings have earned Burcroff numerous prestigious awards including a National Science Foundation Graduate Research Fellowship, a British Marshall Scholarship, and a Jack Kent Cooke Graduate Fellowship.

Despite the tremendous number of problems she has answered, new ones keep arising.

“Every time you unlock one of them, it gives you a bunch of paths to new connected mysteries,” Burcroff says.

At MIT, she is working with Postnikov, whose research on combinatorics and positivity-type problems has presented a radically different way to calculate fundamental quantities in quantum field theory.

“Burcroff is conducting research across disciplinary boundaries,” says Postnikov.

He adds: “I am sure that she will have a lot of fruitful interactions with researchers in other MIT departments.”

Burcroff’s goal is to apply combinatorial techniques to broader physical contexts and direct applications, especially those with implications to topics like mirror symmetry, a principle in string theory suggesting that very different-looking geometric spaces can be mathematically equivalent.

While “doing math is 99 percent trying something and failing,” Burcroff says it is this same challenge that keeps her motivated. To her, it is not about reaching a destination, but rather about the continuous “process of discovery,” one she hopes to share beyond the typical classroom.

To make math more accessible, especially among underrepresented groups, Burcroff has worked with mentorship programs including Harvard’s Real Representations and Math Includes, Cambridge Girls’ Angle, and MIT PRIMES. During her time as a postdoc, she hopes to continue this outreach and explore ways to get involved with other support groups at MIT’s Department of Mathematics.


Study: Gene circuits reshape DNA folding and affect how genes are expressed

When genes are transcribed, they can suppress or activate their neighbors, coupling the expression of neighboring genes.


When a gene is turned on in a cell, it creates a ripple effect along the DNA strand, changing the physical structure of the strand. A new study by MIT researchers shows that these ripples can stimulate or suppress neighboring genes.

These effects, which result from the winding or unwinding of neighboring DNA, are determined by the order of genes along a strand of DNA. Genes upstream of the active gene are usually turned up, while those downstream are inhibited.

The new findings offer guidance that could make it easier to control the output of synthetic gene circuits. By altering the relative ordering and arrangement of genes, or “gene syntax,” researchers could create circuits that synergize to maximize their output, or that alternate the output of two different genes.

“This is really exciting because we can coordinate gene expression in ways that just weren’t possible before,” says Katie Galloway, an assistant professor of chemical engineering at MIT. “Syntax will be really useful for dynamic circuits. Now we have the ability to select not only the biochemistry of circuits, but also the physical design to support dynamics.”

Galloway is the senior author of the study, which appears today in Science. MIT postdoc Christopher Johnstone PhD ’26 is the paper’s lead author. Other authors include MIT graduate student Kasey Love, members of the lab of Brandon DeKosky, an MIT associate professor of chemical engineering, and researchers from Peter Zandstra’s lab at the University of British Columbia and the labs of Christine Mummery and Richard Davis at Leiden University Medical Center in the Netherlands.

Gene syntax

When a gene is copied into messenger RNA, or “transcribed,” the double-stranded DNA helix must be unwound so that an enzyme called RNA polymerase can access the DNA and start copying it. That unwinding leads to physical changes in the structure of the DNA strand.

Upstream of the gene, DNA becomes looser, while downstream, it becomes more tightly wound. These changes affect RNA polymerase’s ability to access the DNA: Upstream of an active gene, it’s easier for the enzyme to attach; downstream, it’s more difficult.

In a study published in 2022, Galloway and Johnstone performed computational modeling that explored how these biophysical changes might influence gene expression. They studied three different arrangements, or types of syntax: tandem, divergent, and convergent.

Most synthetic gene circuits are designed in a tandem arrangement, with one gene followed by another downstream. In a divergent arrangement, neighboring genes are transcribed in opposite directions (away from each other), and in convergent syntax, they are transcribed toward each other.

The modeling suggested that the divergent arrangement was most likely to produce circuits where both genes are expressed at a high level. Tandem arrangements were predicted to result in the downstream gene being suppressed by the upstream gene.

In the new study, the researchers wanted to see if they could observe these predicted phenomena in human cells.

“Normally, we think about gene circuits and pieces of DNA as these lines that we draw, but they’re polymers that have physical characteristics,” Galloway says. “The thing that we were trying to solve in this paper was: When you put two genes on the same piece of DNA, how does their physical interaction become coupled?”

The researchers built circuits that each contained two genes, in either a tandem, divergent, or convergent configuration, and introduced them into human cell lines and human induced pluripotent stem cells.

The results confirmed what their modeling had predicted: In divergent circuits, expression of both genes was amplified. In tandem circuits, turning on the upstream gene suppressed the expression of the downstream gene.

These effects produced as much as a 25-fold increase or decrease in gene expression, and they could be seen at distances of up to 2,000 base pairs between genes.

Using a high-resolution genome mapping technique called Region Capture Micro-C, the researchers were also able to analyze how the DNA structure changed when nearby genes were being transcribed.

As predicted, they found that the DNA regions downstream from an active gene formed tightly twisted structures known as plectonemes, similar to the tangles seen in a twisted telephone cord. These structures make it harder for RNA polymerase to bind to DNA.

To engineer these cells, the researchers used a new system, called STRAIGHT-IN Dual, that they developed with the Leiden University Medical Center team; it allows them to efficiently insert two genes into the same DNA strand at both alleles. This system is reported in a second paper published today in Nature Biomedical Engineering.

Precise control

The new findings could help guide the design of synthetic gene circuits, which are usually designed to be controlled by biochemical interactions with activator or repressor molecules. Now, circuit designers can also use biophysical manipulations to enhance or repress gene expression.

“Everyone thinks about the components they need, and the biochemical properties they need to build a circuit,” Galloway says. “Now, we have added the physical construction of those components, which is going to change how those biochemical units are interpreted.”

As a demonstration of one potential application, the researchers built synthetic circuits containing the genes for two segments of a novel antibody, discovered by the DeKosky lab, that is used to treat yellow fever, and incorporated them into human cells. As they expected, the divergent syntax produced larger quantities of the yellow fever antibody.

Galloway’s lab has also used this approach to optimize the output of synthetic gene circuits they previously reported that could be used to deliver gene therapy or to reprogram adult cells into other cell types.

This strategy could also be used to build a variety of other types of dynamic synthetic circuits, such as toggle switches, oscillators, or pulse generators, for any application that requires precise control over gene expression.

“If you want coordinated expression, a divergent circuit is great. If you want something that’s either/or, you can imagine using a convergent or tandem circuit, so when one turns on, the other turns off, and you can alternate pulses,” Galloway says. “Now that we understand the syntax, I think this will pave the way for us to program dynamic behaviors.”

The research was funded, in part, by the National Institutes of Health, the National Institute for General Medical Sciences, a National Science Foundation CAREER Award, the Pershing Square Foundation, the Air Force Research Laboratory, and the Koch Institute Support (core) Grant from the National Cancer Institute.


The hidden structure behind a widely used class of materials

Relaxor ferroelectrics have been used in electronics and sensors for decades, but the source of their unique properties was a mystery until now.


Materials called relaxor ferroelectrics have been used for decades in technologies like ultrasounds, microphones, and sonar systems. Their unique properties come from their atomic structure, but that structure has stubbornly eluded direct measurement.

Now a team of researchers from MIT and elsewhere has directly characterized the three-dimensional atomic structure of a relaxor ferroelectric for the first time. The findings, reported today in Science, provide a framework for refining models used to design next-generation computing, energy, and sensing devices.

“Now that we have a better understanding of exactly what’s going on, we can better predict and engineer the properties we want materials to achieve,” says corresponding author James LeBeau, MIT’s Kyocera Professor of Materials Science and Engineering. “The research community is still developing methods to engineer these materials, but in order to predict the properties those materials will have, you have to know if your model is right.”

In their paper, the researchers describe how they used an emerging technique to reveal the distribution of electric charges in the material, with a surprising result.

“We realized the chemical disorder we observed in our experiments was not fully considered previously,” say co-first authors Michael Xu PhD ’25 and Menglin Zhu, who are both postdocs at MIT. “Working with our collaborators, we were able to merge the experimental observations with simulations to refine the models and better predict what we see in experiments.”

Joining Zhu, Xu, and LeBeau on the paper are Colin Gilgenbach and Bridget R. Denzer, MIT PhD students in materials science and engineering; Yubo Qi, an assistant professor at the University of Alabama at Birmingham; Jieun Kim, an assistant professor at the Korea Advanced Institute of Science and Technology; Jiahao Zhang, a former PhD student at the University of Pennsylvania; Lane W. Martin, a professor at Rice University; and Andrew M. Rappe, a professor at the University of Pennsylvania.

Probing disordered materials

Leading simulations of relaxor ferroelectrics suggest that when an electric field is applied, the interactions of positively and negatively charged atoms in different nanoregions of the material help give rise to exceptional energy storage and sensing capabilities. The details of those nanoregions have been impossible to directly measure to date.

For their Science paper, the researchers studied a lead magnesium niobate-lead titanate alloy, a relaxor ferroelectric used in sensors, actuators, and defense systems. They used an emerging measurement technique, called multi-slice electron ptychography (MEP), in which researchers move a nanoscale-sized probe of high-energy electrons over a material and measure the resulting electron diffraction patterns.

[Image: A green laser scans through a boxed lattice of atoms.]

“We do this in a sequential way, and at each position, we acquire a diffraction pattern,” Zhu explains. “That creates regions of overlap, and that overlap has enough information to use an algorithm to iteratively reconstruct three-dimensional information about the object and the electron wave function.”
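To make that idea concrete, here is a minimal, single-slice toy sketch of ptychographic reconstruction in Python, in the spirit of the standard ePIE update: overlapping scan positions each produce a diffraction intensity, and an iterative loop recovers the object from those overlaps. It is not the multislice electron ptychography pipeline used in the study; the object, probe, scan step, and iteration count are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy complex "object" to recover, and a known Gaussian probe.
N = 64
obj_true = np.exp(1j * 0.5 * rng.standard_normal((N, N)))      # phase object
yy, xx = np.mgrid[:32, :32]
probe = np.exp(-(((xx - 16) ** 2 + (yy - 16) ** 2) / (2 * 6.0 ** 2)))

# Overlapping scan positions (top-left corners of each 32x32 patch).
positions = [(r, c) for r in range(0, N - 32, 8) for c in range(0, N - 32, 8)]

def patch(o, p):
    r, c = p
    return o[r:r + 32, c:c + 32]

# "Measured" diffraction intensities: |FFT(probe * object patch)|^2 at each position.
intensities = [np.abs(np.fft.fft2(probe * patch(obj_true, p))) ** 2 for p in positions]

# ePIE-style iterative reconstruction of the object (probe assumed known).
obj = np.ones((N, N), dtype=complex)    # initial guess
alpha = 0.5                             # update step size
for it in range(100):
    for p, I in zip(positions, intensities):
        o_patch = patch(obj, p)
        exit_wave = probe * o_patch
        Psi = np.fft.fft2(exit_wave)
        # Keep the measured magnitudes, retain the current phases.
        Psi_corrected = np.sqrt(I) * Psi / (np.abs(Psi) + 1e-12)
        exit_new = np.fft.ifft2(Psi_corrected)
        # Object update drawn from the overlap region.
        update = alpha * np.conj(probe) / (np.abs(probe).max() ** 2) * (exit_new - exit_wave)
        r, c = p
        obj[r:r + 32, c:c + 32] += update
```

Because neighboring scan positions share pixels, each patch update is constrained by its neighbors, which is what lets the loop converge on a consistent picture of the object.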

The technique revealed a hierarchy of chemical and polar structures that spanned from atomic to mesoscopic scales. The researchers also found that many regions of differing polarization in the material were much smaller than predicted by the leading simulations. The researchers then fed their new data back into those computer simulations and refined the models to better reflect their findings under different conditions.

“Previously, these models basically had random regions of polarization, but they didn’t tell you how those regions correlate with each other,” Xu says. “Now we can tell you that information, and we can see how individual chemical species modulate polarization depending on the charge state of atoms.”

Toward better materials

Zhu says the paper demonstrates the potential of electron ptychography and opens up new avenues of research into complex, disordered materials.

“This study is the first time in the electron microscope that we’ve been able to directly connect the three-dimensional polar structure of relaxor ferroelectrics with molecular dynamics calculations,” Xu says. “It further proves you can get three-dimensional information out of the sample using this technique.”

The researchers also believe the approach could one day help engineer materials with advanced electronic behaviors for a range of improved memory storage, sensing, and energy technologies.

“Materials science is incorporating more complexity into the material design process — whether that’s for metal alloys or semiconductors — as AI has improved and our computational tools have become more advanced,” LeBeau says. “But if our models aren’t accurate enough and we have no way to validate them, it’s garbage in garbage out. This technique helps us understand why the material behaves the way it does and validate our models.”

The work was supported, in part, by the U.S. Army Research Laboratory, the U.S. Office of Naval Research, the U.S. Department of War, and a National Science Foundation Graduate Research Fellowship. The researchers also used MIT.nano facilities.


How neurons sense bacteria in the gut

Interactions between neurons and bacteria have important effects on animal brains. A new study investigates how neurons sense bacteria by revealing, in nematodes, the bacterial signals that a key neuron detects.


Recent studies suggest animals and people alike have close and complex relationships with the bacteria around and within them. The human gut microbiome, for instance, has been associated with both depression and Parkinson’s disease. To go beyond association toward understanding of the actual mechanisms that enable the bacterial microbiome to influence brain function, a new study by neuroscientists in The Picower Institute for Learning and Memory at MIT examines the mechanisms at work in a model “bacterial specialist,” the nematode Caenorhabditis elegans.

In the new open-access study in Current Biology, the team, led by Picower Fellow Cassi Estrem in the lab of Associate Professor Steven Flavell, identifies the specific chemicals that a key neuron in C. elegans senses, both in the bacteria that it eats and in the bacteria that it needs to avoid ingesting.

“In our bodies, our own cells are outnumbered by the bacterial cells living in and on us. There’s an increasing recognition that this has a profound impact on human health,” says Flavell, an investigator of the Howard Hughes Medical Institute and faculty member of MIT’s Department of Brain and Cognitive Sciences. “It’s been clear that there are links for some time. Our study aimed to identify the hard mechanisms of how a host nervous system is affected by bacteria in the alimentary canal.”

Achieving a fundamental mechanistic understanding of how neurons interact with bacteria could help improve attempts to intervene in or manipulate those interactions with therapeutic drugs or supplements, Flavell says.

Mmm … sugar

Flavell calls C. elegans a “bacterial specialist” because the tiny, transparent worm has evolved to eat bacteria as its diet, while also needing to avoid pathogenic bacteria that can prove to be its undoing. This has led it to develop a nervous system especially well-attuned to sorting out what is food and what is foe. In 2019, the lab discovered that the neuron NSM, which projects into the worm’s alimentary canal, employs two “acid sensing ion channels” (ASICs) to detect when certain bacteria have been ingested. Notably, those ion channels are analogous to ones found in neurons in humans. When NSM detects yummy bacteria, it releases serotonin that causes the worm to increase its feeding rate and slow its slithering so that it can stay to dine on the surrounding meal.

To really understand how this works, Flavell and Estrem realized they needed to know exactly what the ion channels are detecting in the bacteria. To get started, they exposed worms to 20 different kinds of bacteria the worms are known to encounter and found that all of them activated NSM to varying extents. Then they broke the bacteria down into more and more specific chemical components to see which one or ones triggered NSM. The experiments ruled out many components, including DNA, lipids, proteins, and simple sugars, and instead found that it’s specifically the polysaccharide sugars that coat many bacteria that drive NSM activation. In particular, in gram-positive bacteria, a chemical called peptidoglycan activated NSM. In gram-negative bacteria, a different polysaccharide was apparently in play.

Estrem and Flavell’s team also ran experiments showing that polysaccharides from bacteria in general, and peptidoglycan in particular, not only trigger NSM electrical activity, but actually promote the feeding and slowing behaviors. They also showed that genetically knocking out the ASICs abolished these responses. In all, they demonstrated that detection of polysaccharides and peptidoglycan is sufficient to trigger the worm’s behaviors, and that it requires the ASICs.

Better not eat this

Having shown what exactly triggers the worms to recognize their bacterial food, the researchers wondered whether they could also pinpoint a danger sign the worm finds in harmful bacteria. For these experiments, they carefully used Serratia marcescens, a bacterium that’s also infectious for humans. Some strains of the bacteria have a red color, while others do not. The red ones, which have a pigment called prodigiosin, tend to be much more lethal for worms. In their testing, the researchers found that when NSM detected the non-pigmented bacteria, the neuron still activated and the worms still ingested the bacteria, but when prodigiosin was present, NSM did not activate and the worm did not pump it in or slow down to eat.

Adding prodigiosin to normally yummy bacteria also suppressed NSM’s usual response. In other words, the worms have evolved their digestive behavior (and the detectors within NSM) to avoid ingesting a chemical specifically associated with danger.

Flavell says it’s likely that some of the fundamental mechanisms highlighted in the new paper will inform studies of similar mechanisms in other animals.

“We developed a way of identifying these pathways by studying this organism that specializes in bacterial detection and displays robust responses,” Flavell explains. “But there’s no reason these pathways should be limited to C. elegans. The molecular players we identified are found in many species, including mammals.”

In addition to Estrem and Flavell, the paper’s other authors are Malvika Dua, Colby Fees, Greg Hoeprich, Matthew Au, Bruce Goode, and Lingyi Deng.

The National Institutes of Health, the McKnight Foundation, the Alfred P. Sloan Foundation, the Howard Hughes Medical Institute, and The Freedom Together Foundation provided support for the study.


A materials scientist’s playground

New system at MIT.nano will support quantum technology research.


Scientists and engineers around the world are working to improve quantum bits, or qubits, the minuscule building blocks of the quantum computer. Qubits are incredibly sensitive, which makes it easy for errors to be introduced and lowers device yield. But a new cluster tool at MIT.nano introduces capabilities that will allow researchers to continue advancing qubit performance.

Passersby outside MIT.nano may have recently noticed a complex-looking piece of equipment being installed in the first-floor cleanroom. What looks like a sci-fi movie prop is actually a state-of-the-art, custom-built molecular beam epitaxy (MBE) system: a physical vapor deposition tool that operates under ultra-high vacuum to produce high-quality thin films. With the ability to grow different crystalline materials on a wafer, the tool will support quantum researchers and materials scientists by allowing them to study how film growth affects the properties of the materials used in making qubits.

“To realize the full promise of quantum computing, we need to build qubits that are robust, reproducible, and extensible,” says William D. Oliver, the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science and professor of physics at MIT. “To date, most of the improvements to superconducting qubit performance are traceable to circuit design — essentially, designing qubit circuits that are less sensitive to their environmental noise. However, those improvements have largely run their course. Going forward, we need to address the fundamental materials science and fabrication engineering required to reduce the sources of environmental noise. This multi-chamber, cassette-loaded, 200-millimeter wafer MBE system is exactly the right tool at the right time. And there’s no place better to do this research than at MIT.nano.”

That is because MIT.nano is well prepared to receive this type of system, with the physical space, climate controls, policies and procedures for researchers, and expert staff to manage the lab. Through an equipment support plan, Oliver’s Engineering Quantum Systems (EQuS) group is able to install and run the tool inside MIT.nano, a high-performance, safe, and reliable environment.

A controlled environment is essential for the MBE. “Think of this system like an inverted International Space Station (ISS),” explains Patrick Strohbeen, research scientist in the EQuS group. “The ISS is a small chamber of atmosphere surrounded by the vacuum of space. This MBE system is a chamber of space-level vacuum surrounded by atmosphere.” That vacuum of space is kept at a steady negative 90 degrees Celsius, which enables precise growth of thin films on an atomic scale. It is the largest single deposition chamber (1-meter diameter) the manufacturer, DCA, has sold in the United States.

The journey of a wafer

The system, which in total takes up 600 square feet, is made up of six chambers. First is the load lock, where the wafer is placed into the system and brought down from atmospheric pressure to near the vacuum level of space. Then, the wafer enters the distribution center. This space acts like a central hub, transferring the wafers to other chambers. Next is the deposition, or “growth,” chamber. This is where the system’s primary function takes place — depositing materials, specifically atoms of superconducting metal, onto a substrate, typically silicon. From there, it moves to the oxidation chamber, which facilitates the growth of key ceramic materials for qubits. A fifth storage chamber can hold an additional 10 wafers within the vacuum.

A unique aspect of this system is its sixth chamber, designed for X-ray photoelectron spectroscopy (XPS). Using this chamber, researchers direct X-ray photons at the surface; when a photon hits, it excites an electron inside the material so that the electron is ejected and picked up by a sensor, which then tells the researcher about the environment the electron came from. As individual layers of atoms are put down in the growth chamber, scientists can move the wafer to the XPS chamber to measure changes in the material structure of the film and back again, all while keeping it inside the vacuum space.
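As general background (this is standard XPS physics rather than a detail reported here), the kinetic energy recorded by that sensor maps to a binding energy, which is what identifies the element and its chemical state:

```latex
% Textbook XPS energy balance (general background, not from the article):
% binding energy = X-ray photon energy - measured kinetic energy - spectrometer work function
E_{\mathrm{binding}} = h\nu - E_{\mathrm{kinetic}} - \phi
```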

Why is this important? “The quantum community has excellent device physicists and device engineers,” says Strohbeen. “The last piece of the puzzle is: We need to understand the materials platform that we’re using for these devices.” The buried interfaces, so far, have been understudied due to the difficulty in probing them, he explains.

For those of us who are not MBE experts, think of the snow that fell in Massachusetts this winter. How can you tell how much ice is on the pavement without removing all of the snow on top of it? And without changing the natural setting where the snow, ice, and pavement meet? With this system, specifically the XPS chamber, scientists can study the interfaces of buried materials without disturbing the physical or chemical environments. “It is a materials scientist’s playground,” jokes Strohbeen — a controlled space where researchers can learn about and explore materials’ interactions within layers of atoms.

Why MIT.nano?

When Oliver, who is also the director of the MIT Center for Quantum Engineering, secured the MBE Quantum, the next question was where to put it. Enter MIT.nano. Housing 45,000 square feet of cleanroom, this facility exists at MIT to support complex, sensitive equipment with both the infrastructure and the staff needed to maintain it.

“MIT.nano’s ultra-stable building utilities and lab environment are exactly what is needed to support a system that demands extreme repeatability and purity,” says Nick Menounos, MIT.nano associate director of infrastructure. “The success of this installation grew from the early collaboration. Professor Oliver engaged the MIT.nano team in the procurement process almost two years in advance. That foresight, combined with the infrastructure momentum we gained from the recent CHIPS Act project, meant that we could prepare the cleanroom perfectly. We compressed the installation process that normally takes several months and had this extraordinary machine running in under three weeks.”

“From the very beginning, the MIT.nano staff were helpful, knowledgeable, and willing to go above and beyond to make this happen,” says Oliver. “While the MIT.nano facility is certainly an infrastructural crown jewel at MIT, it’s the MIT.nano staff who make it the national treasure it is today.”

Positioning the MBE Quantum in the cleanroom helps the team focus on scalability and device yield. Humidity and particle count, two things carefully measured and maintained at MIT.nano, can affect the output of the device. Minimizing as many variables as possible is key to improving qubit performance. The cleanroom also allows for new device research because an array of fabrication and metrology tools are available without having to leave the clean environment.

“We’re really excited to see what we can do with it,” says Strohbeen. “We bought it as a materials science tool, and it will also be a device development tool due to the flexibility of having it in the cleanroom.”

The MBE system was purchased through a combination of grants from the Army Research Office (ARO) and from the Laboratory for Physical Sciences (LPS). The ARO grant, a Defense University Research Instrumentation Program grant, is the premier grant from ARO for funding large capital equipment purchases that should prove disruptive in technologically relevant areas. It arrives at an important time on campus, as one of MIT’s strategic initiatives — the MIT Quantum Initiative — aims to apply quantum breakthroughs to the most consequential challenges in science, technology, industry, and national security.


Making the case for curiosity-driven science

President Sally Kornbluth spoke in front of a packed crowd about growing challenges to the U.S. research ecosystem as funding for America’s top research universities becomes increasingly strained.


“The thing that really struck me when I came to MIT and strikes me every single day is the stuff that’s going on here is amazing. The science, the engineering … every day I hear something that makes my jaw drop,” remarked President Sally Kornbluth during a live discussion with Lizzie O’Leary of Slate’s “What Next: TBD” podcast.

Kornbluth spoke about everything from the importance of curiosity-driven science and why basic science is critical to our nation’s future, to AI and education, and even bravely joined O’Leary in a rendition of the Williams College song, “The Mountains,” in honor of their shared alma mater.

“We are in this time of incredible uncertainty,” said Kornbluth of the current state of higher education and funding for scientific research. “What we are trying to do is keep the science robust.”

Harking back to her time at Duke and her love of college basketball, she noted that addressing skepticism about higher education in Washington takes a combination of zone coverage and man-to-man defense. She emphasized: “As one of the top institutions in the world it’s part of our responsibility to articulate the importance of science. Behind the scenes, I am — along with many other [university] presidents — I am in D.C. all the time now. I want to speak to Congressmen and women, Senators, people in the executive branch to explain the importance of what we are doing.”

Kornbluth emphasized that the pipeline of basic science that flows from U.S. universities is a critical asset for our country, cautioning that to keep straining this pipeline could have enormous negative ramifications for the U.S. down the line.

“If you think about research done in this country, it’s done in universities, it’s done in national labs, and it’s done in industry,” said Kornbluth. Universities are where most of the science with a long pathway to impact, requiring patience, starts. She pointed to immunotherapy for cancer, which began 30-40 years ago in basic immunotherapy research, as an example. With that pipeline being drained, what does the future hold for new cancer therapies or new AI and quantum technologies?

Kornbluth also underscored that uncertainty and lost funding are having a “huge impact on the talent pipeline,” delving into the unique role universities play in training graduate students, who are the next generation of scientific researchers. “We hear, ‘Oh it would be okay if research was more in industry.’ I say, ‘Would you fly on a plane with a pilot who had never flown?’ How do they think people learn how to do research? We are training the next generation … and we are losing funding for them.” She added: “I think we are going to see reverberations for many decades if we don’t rectify that issue.”

When asked how she and her colleagues are working to keep research moving forward, Kornbluth explained that at MIT, “we have tried to find alternative ways to elevate the science. We have a series of presidential initiatives that cut across the whole campus in things like health and life sciences, quantum, humanities and social sciences. The notion is that we are trying to create new opportunities.”

Still, she acknowledged that losses from the endowment tax and diminished federal funding are painful. “There are only four schools right now that are subject to the 8 percent endowment tax, which is a tax on our earnings. For us, that means $240 million dollars a year plus other losses in grants. So, let’s say the whole thing is, we budgeted for a loss of $300 million a year on a $1.7 billion budget. … That has definitely had an impact on us. No question about it. 

“The other thing about it is again there’s all this uncertainty. Our investigators are writing a ton of grants. They don’t know if they’re going off into the void or they really have the sort of competitive opportunities they’ve always had in the past.”

Asked why universities did not see this moment coming, Kornbluth offered a few thoughts. “Look at MIT — 30,000 companies have come from MIT. When you look at something like that, why would you think any government that wants economic flourishing in their country would come after MIT?” she reflected. “It just never would have occurred to us.”

Turning toward the rapid advances in AI, and how the field is impacting education, Kornbluth noted that at MIT and other universities, “we have to focus on the human element, we have to educate our students, they need to know how to write and do mathematics … they have to view AI as a tool to augment their capabilities. That is how we are thinking about it.”

In the course of the conversation, Kornbluth also expressed her unwavering support for international students, noting that most want the opportunity to stay and contribute to research in the U.S. after graduation. “The talent brought to us through our international community is unbelievable. We can attract the very best in the world. You can bet when they talk about competitiveness with China, for example, in AI, quantum, etc., they are not sitting around in China saying, ‘Oh it’s great America is taking all our students.’ They’re thinking, ‘It’s great that America doesn’t want to take as many of our students anymore because we can train them.’ It’s a competitive issue that we really should lean into.”


Study: Immigrants help address the US eldercare shortage

Economists find that in metro areas with more immigration, nurses are spending more time with elderly patients.


Good caregivers are often in short supply, but after the Covid-19 pandemic hit the U.S. in early 2020, staff levels at nursing homes dropped by 10 percent. What was a simple personnel shortage has moved closer to being a nursing-care crisis.

“We have an aging population, care for them is labor-intensive, and there are shortages everywhere in that supply chain,” says MIT economist Jonathan Gruber.

As it happens, about one-fifth of health care support workers in the U.S. are immigrants. And as a newly published study of the nation’s metro areas shows, changes in immigration levels can affect how much nursing care the elderly receive.

“When immigration rises in a city, it significantly increases the health care workforce,” says Gruber, a co-author of the new paper detailing the study’s findings.

Overall, Gruber and his colleagues determined that when there is more immigration, registered nurses and other aides work more hours at nursing homes, without displacing already-employed caregivers, while patient outcomes improve. Essentially, a 10 percent increase in female immigrants in a given metro area leads to a 1.1 percent increase in hours that registered nurses spend with elderly patients, while hospitalizations for those patients drop, among other things.

“Even if immigration actually increases labor supply to the medical sector, it was an open question if that would improve outcomes, and it does,” adds Gruber, the Ford Professor of Economics and head of the MIT Department of Economics.

The paper, “Immigration, the Long-Term Care Workforce, and Elder Outcomes in the U.S.,” appears in the American Journal of Health Economics. The authors are Gruber; David C. Grabowski, a professor in the Department of Health Care Policy at Harvard Medical School; and Brian E. McGarry, an assistant professor in the Department of Medicine and the Department of Public Health Sciences at the University of Rochester.

More care, fewer hospitalizations

To conduct the study, the researchers tapped into multiple data sources, including immigration information from 2000 to 2018 appearing in the U.S. Census Bureau’s American Community Survey. Extensive nursing home data came from different types of reports that facilities are required to file in order to maintain Medicare and Medicaid eligibility, allowing the scholars to examine care staffing levels and patient outcomes.

All told, the study encompasses 16 million Medicare beneficiaries in over 13,000 nursing homes in metropolitan statistical areas of the U.S., and evaluates immigration flows across two decades.

“One of the key groups that’s taking care of our nation’s elders is immigrants,” Gruber says. “So I thought it would be fascinating to understand how much does immigration actually matter for elder care.”

More specifically, the scholars find that for every 10 percent increase in immigration above the norm in metro areas, in addition to the 1.1 percent increase in registered nurse hours, there is a 0.7 percent increase in hours of care provided by certified nurse assistants. There is a 0.6 percent decline in hospitalizations for patients making short-term stays, of up to a month, in nursing homes.

Beyond that, the study yielded other markers showing that patient outcomes improve in these situations. The roughly 1 percent increase in hours of care was accompanied by a decline in the use of physical restraints, while patients also needed fewer psychiatric medication prescriptions and had fewer urinary tract infections, among other things.

The fact that those outcomes improved in more immigrant-staffed situations is among the new insights provided by the research.

“There’s a lot of evidence that providing more labor supply to the elderly sector improves patient outcomes,” Gruber says. “But it wasn’t clear whether more immigrants would work the same way, because of language issues or other factors.”

A new lens

The study comes as immigration policy has become a major issue in the U.S., something that Gruber says helped spur his curiosity about its health care implications — although he did not know what the study would reveal, one way or another. In this case, he notes, the impact of immigration on eldercare may be another factor to be considered in the larger debates about the subject.

“I think it provides a new lens on the debate over immigration,” Gruber says. “The debate over immigration has been solely about what will it do to native workers, what will it do to the crime rate, what will it do to tax collection. This adds a new element, which is: What will it do to our citizens’ care? By having more immigration, we provide more care.”

Gruber, Grabowski, and McGarry are continuing to study this issue. In a new working paper, released in February, they found that increases in immigration are consistent with a reduction in the mortality rate, in part by allowing more elderly people the opportunity to receive care at home.

Gruber recognizes that there will continue to be sharp policy disagreements over immigration. Still, as the just-published paper states, to this point, when it comes to nursing care, the “results paint a consistent picture of improved quality of care resulting from increased immigration.”


Solving the “Whac-a-mole dilemma”: A smarter way to debias AI vision models

A new debiasing technique called WRING avoids creating or amplifying biases that can occur with existing debiasing approaches.


In today’s hospitals and clinics, a dermatologist may use an artificial intelligence model for classifying skin lesions to assess if the lesion is at risk of developing into a cancer or if it is benign. But if the model is biased toward certain skin tones, it could fail to identify a high-risk patient.

Perhaps one of the best known and most persistent challenges that AI research continues to reckon with is bias. Bias is often discussed in relation to training data, but model architecture can also contain and amplify bias, negatively influencing model performance in real-world settings. In high-stakes medical scenarios, the very real consequences of poor performance have made bias into a quintessential safety issue.

A new paper from researchers at MIT, Worcester Polytechnic Institute, and Google that was accepted to the 2026 International Conference on Learning Representations proposes a novel debiasing approach called “Weighted Rotational DebiasING” (i.e., WRING) that can be applied to vision language models (VLMs), like OpenCLIP, an open-source implementation of OpenAI’s CLIP.

VLMs are multi-modal models that can understand and interpret different data modalities like video, image, and text simultaneously. While debiasing approaches for VLMs do exist, the most commonly used approach is known as “projection debiasing,” which leads to what has been termed the “Whac-A-Mole dilemma”, an empirical observation that was formally introduced to AI research in 2023.

Projection debiasing is a post-processing approach that removes the undesirable, biased information from model embeddings by “projecting” the subspace out of a representation space of relationships, thereby cutting out the bias. But this approach has its drawbacks.

“When you do that, you inadvertently squish everything around,” says Walter Gerych, the paper’s first author, who conducted this research last year as a postdoc at MIT. “All the other relationships that the model learns change when you do that.”
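To illustrate what Gerych describes, here is a minimal NumPy sketch of plain projection debiasing, the baseline approach: removing one bias direction also shifts relationships that have nothing to do with the bias. The embeddings and bias direction are synthetic, and this is not the WRING method itself.

```python
import numpy as np

rng = np.random.default_rng(1)

d = 512                                   # embedding dimension (made up for the example)
E = rng.standard_normal((1000, d))        # synthetic image/text embeddings
E /= np.linalg.norm(E, axis=1, keepdims=True)

# A bias direction, e.g., the difference between two attribute prototypes.
v = rng.standard_normal(d)
v /= np.linalg.norm(v)

# Projection debiasing: remove each embedding's component along v.
E_debiased = E - (E @ v)[:, None] * v[None, :]

# Side effect: relationships *not* involving the bias attribute also shift.
def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

before = cos(E[0], E[1])
after = cos(E_debiased[0], E_debiased[1])
print(f"cosine similarity between two unrelated embeddings: {before:.3f} -> {after:.3f}")
```

A rotation, by contrast, is an orthogonal transform, so angles and distances among the vectors it acts on are preserved; WRING builds on that property by rotating only the coordinates implicated in the bias, with the exact construction and weighting detailed in the paper.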

Gerych, who is now an assistant professor of computer science at Worcester Polytechnic Institute, is joined on the paper by MIT graduate students Cassandra Parent and Quinn Perian; Google’s Rafiya Javed; and MIT associate professors of electrical engineering Justin Solomon and Marzyeh Ghassemi, who is an affiliate of the Abdul Latif Jameel Clinic for Machine Learning and Health and the Laboratory for Information and Decision Systems. 

While projection debiasing stops the model from acting upon the bias that’s been projected out of the subspace, it can end up amplifying and creating other biases, hence the Whac-A-Mole dilemma. According to Ghassemi, the unintended amplification of model biases is “both a technical and practical challenge. For instance, when debiasing a VLM that retrieves images of clinical staff — if racial bias is removed — it could have the unintended consequence of amplifying gender bias.” 

WRING works by moving certain coordinates within the high-dimensional space of a model — the ones that appear to be responsible for bias — to a different angle, so the model can no longer distinguish between different groups within a certain concept. This changes the representation within a specific space while leaving the model’s other relationships intact. And like projection debiasing, WRING is a post-processing approach, which means it can be applied “on the fly” to a pre-trained VLM. 

“People already spent a lot of resources, a lot of money, training these huge models, and we don’t really want to go in and modify something during training because then you have to start from scratch,” Gerych explains. “[WRING is] very efficient. It doesn’t require more training of the model and it’s minimally invasive.”

In their results, the researchers found that WRING significantly reduced bias for a target concept without increasing bias in other areas. But for now, the approach is somewhat limited to Contrastive Language-Image Pre-training (CLIP) models, a type of VLM that connects images to language for search or classification.

“Extending this to ChatGPT-style generative language models is the reasonable next step for us,” says Gerych.

This work was supported, in part, by a National Science Foundation CAREER Award, an AI2050 Early Career Fellowship, a Sloan Research Fellowship, a Gordon and Betty Moore Foundation Award, and an MIT-Google Computing Innovation Award.


The MIT-IBM Computing Research Lab launches to shape the future of AI and quantum computing

Building on a long-standing MIT–IBM collaboration, the new lab will chart the convergence of AI, algorithms, and quantum computing.


The following is a joint announcement by the MIT Schwarzman College of Computing and IBM.

IBM and MIT today announced the launch of the MIT-IBM Computing Research Lab, advancing their long-standing collaboration to shape the next era of computing. The new lab expands its scope to include quantum computing, alongside foundational artificial intelligence research, with the goal of unlocking new computational approaches that go beyond the limits of today’s classical systems.

The MIT-IBM Computing Research Lab builds on a distinguished history of scientific excellence at the intersection of research and academia. Evolving from the MIT-IBM Watson AI Lab, which originated in 2017 on MIT’s campus, the new lab reflects a transformed technology landscape — one in which AI has entered mainstream deployment, and quantum computing is rapidly advancing toward practical impact. Together, MIT and IBM aim to help lead research in AI and quantum and to redefine mathematical foundations across both domains.

“We expect the MIT-IBM Computing Research Lab to emerge as one of the world’s premier academic and industrial hubs accelerating the future of computing,” says Jay Gambetta, director of IBM Research and IBM Fellow, and IBM chair of the MIT-IBM Computing Research Lab. “Together, the brightest minds at MIT and IBM will rethink how models, algorithms, and systems are designed for an era that will be defined by the sum of what’s possible when AI and quantum computing come together.”

“For a decade, the collaboration between MIT and IBM has produced leading-edge research and innovation, and provided mentorship and supported the professional growth of researchers both at MIT and IBM,” says Anantha Chandrakasan, MIT’s provost, who, as then-dean of the School of Engineering, spearheaded the creation of the MIT-IBM Watson AI Lab and will continue as MIT chair of the lab. “The incredible technical achievements set the bar high for our work together over the next 10 years. I look forward to another decade of impact.”

Addressing the next frontiers in computation

The MIT-IBM Computing Research Lab will serve as a focal point for joint research between MIT and IBM in AI, algorithms, and quantum computing, as well as the integration of these technologies into hybrid computing systems. The lab is designed to accelerate progress toward powerful new computational approaches that take advantage of rapid advances in AI and quantum-centric supercomputing, including those that combine maturing quantum hardware with classical systems and advanced AI methods.

This research initiative will include improving capabilities and integrating AI with traditional computing, alongside pursuing advances in small, efficient, modular language model architectures, novel AI computing paradigms, and enterprise-focused AI systems designed for deployment in real-world environments, where reliability, transparency, and trust are essential.

In parallel, the lab will rethink the mathematical and algorithmic foundations that underpin the next era of computing by accelerating the development of novel quantum algorithms for complex problems, with impacts in areas such as materials science, chemistry, and biology.

Additionally, the lab will investigate mathematical and algorithmic foundations of machine learning, optimization, Hamiltonian simulations, and partial differential equations, which are used to approximate the behaviors of dynamical systems that currently stump classical systems beyond limited scales and accuracy. Innovations from the lab could have wide implications for global industries, from more accurate weather and air turbulence prediction to better forecasts of financial market performance. Similarly, with improved optimization approaches, research from the lab could help lower risks in areas like finance, predict protein structures for more targeted medicine, and streamline global supply chains.

With its focus on AI, algorithms, and quantum, the MIT-IBM Computing Research Lab will complement and enhance the work of two of MIT’s strategic initiatives, the MIT Generative AI Impact Consortium and the MIT Quantum Initiative. MIT President Sally Kornbluth launched these strategic initiatives to broaden and deepen MIT’s impact in developing solutions to serious global challenges. The MIT-IBM Computing Research Lab will also leverage IBM’s longtime leadership and expertise in quantum computing. As part of its ambitious roadmap, IBM has laid out a clear path to delivering the world’s first fault-tolerant quantum computer by 2029, and is working across industries to drive value from quantum-centric supercomputing, tightly integrating quantum computers with high-performance computing and AI accelerators to solve the world’s toughest problems.

Deep integration with scientific domains

The MIT-IBM Computing Research Lab will also continue to serve as a foundation for training the next generation of computational scientists and innovators. It will do so by engaging faculty and students across MIT departments, enabling new computational approaches to accelerate discoveries in the physical and life sciences.

The lab will continue to be co-directed by Aude Oliva, senior research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory, and David Cox, vice president of AI Foundations at IBM Research. MIT and IBM have appointed leads for each of the lab’s three focus areas — AI, algorithms, and quantum. Jacob Andreas, associate professor in the Department of Electrical Engineering and Computer Science (EECS), and Kenney Ng, principal research scientist at IBM Research and the MIT-IBM science program manager, will co-lead AI; Vinod Vaikuntanathan, the Ford Foundation Professor of Engineering in EECS, and Vasileios Kalantzis, IBM Research senior research scientist, will co-lead algorithms; and Aram Harrow, professor of physics, and Hanhee Paik, IBM director of Quantum Algorithm Centers, will co-lead quantum.

“The MIT-IBM Computing Research Lab reflects an important expansion of the collaboration between MIT and IBM and the increasing connections across AI, algorithms, and quantum. This deepened focus also underscores a strong alignment with the MIT Schwarzman College of Computing’s mission to advance the forefront of computing and its integration across disciplines,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and MIT co-chair of the lab. “I’m excited about what this next chapter will enable in these three areas, and their impact broadly.”

Building on nearly a decade of collaboration

The MIT-IBM Watson AI Lab helped pioneer a model for academic-industry research collaboration, aligning long-term scientific inquiry with real-world impact. Since its inception, the lab has funded over 210 research projects involving over 150 MIT faculty members and over 200 IBM researchers. Collectively, the projects have led to over 1,500 peer-reviewed articles. The lab also helped shape the career growth of a number of MIT students and junior researchers, funding more than 500 students and postdocs.

“The true measure of this lab is not just innovation, but transformation of a field. Hundreds of students have contributed to thousands of publications in top conferences and journals, demonstrating their capabilities to address meaningful problems,” says Oliva. “The MIT-IBM Computing Research Lab builds on an extraordinary legacy of impact to advance a trusted collaboration that will redefine the future of AI and quantum computing in a way never seen before.”

“By coupling academic rigor with industrial scale, the lab aims to define the computational foundations that will power the next generation of AI, quantum, and scientific breakthroughs,” says Cox. “By bringing together advances in AI, algorithms, and quantum computing under one integrated research effort, we’re creating the conditions to rethink the mathematical and computational foundations of science and engineering.”

The MIT-IBM Computing Research Lab will capitalize on this foundation, expanding both the scientific scope and the ecosystem of collaborators across the Cambridge-Boston region and beyond.


MIT engineers’ virtual violin produces realistic sounds

Based on the physics of how the instrument produces sound, the model could help violin makers in the design process.


There is no question that violin-making is an art form. It requires a musician’s ear, a craftsperson’s skill, and an historian’s appreciation of lessons learned over time. Making a violin also takes trust: Violin makers, or luthiers, often must wait until the instrument is finished before they can hear how all their hard work will sound.

But a new tool developed by MIT engineers could help luthiers play around with a violin’s design and tweak its sound even before a single part is carved.

In a study appearing today in the journal npj Acoustics, the MIT team reports on a new “computational violin” — a computer simulation that captures the detailed physics of the instrument and realistically produces the sound of a violin when its strings are plucked.

While there are software programs and plug-ins that enable users to play around with virtual violins, their sounds are typically the result of sampling and averaging over thousands of notes played by actual violins.

In contrast, the new computational violin takes a physics-based approach: It produces sound based on the way the instrument, including its vibrating strings, physically interacts with the surrounding air.

As a demonstration, the researchers applied the computational violin to play two short excerpts: one from “Bach’s Fugue in G Minor,” and another from “Daisy Bell” — a nod to the first song that was ever produced by a computer-synthesized voice.

The computational violin currently simulates the sound of plucked strings — a type of playing that musicians know as “pizzicato.” Violin bowing, the researchers say, is a much more complicated interaction to model. However, the computational violin represents the first physics-based foundation of a strung violin sound that could one day be paired with a model of bowing to produce realistic, bowed violin music.

For now, the team says the new virtual violin could be used in the initial stages of violin design. Luthiers can tweak certain parameters such as a violin’s wood type or the thickness of its body, and then listen to the sound that the instrument would make in response.

“These days, people try to improve designs little by little by building a violin, comparing the sound, then making a change to the next instrument,” says Yuming Liu, senior research scientist at MIT. “It’s very slow and expensive. Now they can make a change virtually and see what the sound would be.”

“We’re not saying that we can reproduce the artisan’s magic,” adds Nicholas Makris, professor of mechanical engineering at MIT. “We’re just trying to understand the physics of violin sound, and perhaps help luthiers in the design process.”

Makris and Liu’s MIT co-authors include Arun Krishnadas PhD ’23 and former postdoc Bryce Campbell, along with Roman Barnas of the North Bennet Street School.

Sound matrix

The quality of a violin’s sound is determined by its dimensions and design. The instrument is made from thoughtfully crafted parts and materials that all work to generate and amplify sound. In recent years, scientists have sought to understand what artisans have intuited for centuries, in terms of what specific parameters shape a violin’s sound.

In one early effort in 2006, scientists, as part of the Strad3D project, put a rare Stradivarius violin through a CT scanner. The violin was crafted in 1715 by the master violinmaker Antonio Stradivari, during what is considered the “Golden Age” of violin making. To better understand the violin’s anatomy and its relation to sound, the scientists scanned the instrument and produced 600 “slices,” or views, of the violin.

The CT scans are available online for people to view and use as data for their own experiments. For their study, Makris and his colleagues first imported the CT scans into a solid modeling software program to generate a detailed three-dimensional model of the violin. They then ran a finite element simulation, essentially dividing the violin into millions of tiny individual cubes, or “elements.”

For each cube, they noted its material type, such as if a cube from the violin’s back plate is made from maple or spruce, or if a string is made from steel or natural fibers. They then applied physics-based equations of stress and motion to predict how each material element would move in relation to every other element across the instrument.

They also carried out a similar process for the air surrounding the violin, dividing up a roughly cubic-meter volume of air and applying acoustic wave equations to predict how each tiny parcel of air would move and contribute to generating sound.

“The entire thing is a matrix of millions of individual elements,” explains Krishnadas. “And ultimately, you see this whole three-dimensional being, which is the violin and the air all connected and interacting with each other.”

A plucky model

The team then simulated how the new computational violin would sound when plucked. When a violinist plucks a string, they pull the string sideways and let it go, causing the string to vibrate. These vibrations travel across the instrument and inside it; the air’s vibrations are amplified as they travel out of the violin and into the surroundings, where a listener hears the vibrations as sound.

For their purposes, the engineers simulated a simple string pluck by directing one of the virtual violin’s strings to stretch out and then rebound. The simulation computed all the resulting motions and vibrations of the millions of elements in the violin, and the sound that the pluck would produce.
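The paper’s model couples millions of three-dimensional elements with the surrounding air, but the basic idea of releasing a displaced string and letting the physics evolve can be illustrated with a much simpler toy: a one-dimensional finite-difference simulation of a plucked string with fixed ends. The string length, wave speed, grid resolution, and pluck shape below are arbitrary stand-ins, not values from the study.

```python
import numpy as np

# Toy 1D plucked string: u_tt = c^2 u_xx with fixed ends (illustrative values only).
L = 0.33            # string length in meters
c = 300.0           # wave speed in m/s
nx = 200            # spatial grid points
dx = L / (nx - 1)
dt = 0.9 * dx / c   # time step satisfying the CFL stability condition
steps = 5000

x = np.linspace(0.0, L, nx)

# Initial "pluck": pull the string sideways into a triangle, then release at rest.
pluck_point = 0.25 * L
u_prev = np.where(x < pluck_point,
                  x / pluck_point,
                  (L - x) / (L - pluck_point)) * 1e-3   # 1 mm peak displacement
u = u_prev.copy()   # zero initial velocity: first two time levels are equal

r2 = (c * dt / dx) ** 2
bridge_motion = []   # displacement near one end, a crude proxy for the radiated sound
for _ in range(steps):
    u_next = np.zeros_like(u)   # endpoints stay fixed at zero
    u_next[1:-1] = 2 * u[1:-1] - u_prev[1:-1] + r2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    u_prev, u = u, u_next
    bridge_motion.append(u[-2])  # sample the point next to the "bridge" end
```

In the real model, the string’s motion drives the body and the surrounding air rather than being read off directly, but the time-stepping logic is the same in spirit.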

For notes that require pressing down on a violin’s fingerboard, they simulated the same plucking, and in addition, included a condition in which the string is held fixed in the section of the fingerboard where a violinist’s finger would press down.

The researchers carried out this computational process to virtually pluck out the notes in several measures of “Daisy Bell” and “Bach’s Fugue in G Minor.”

“If there’s anything that’s sounding mechanical to it, it’s because we’re using the exact same time function, or standard way of plucking, for each note,” says Makris, who is himself a lute player. “A musician will adapt the way they’re plucking, to put a little more feeling on certain notes than others. But there could be subtleties which we could incorporate and refine.”

As it is, the new computational model is the first to generate realistic sound based on the laws of physics and acoustics. The researchers say that violin makers could use the model to test how a violin might sound when certain dimensions or properties are changed. For instance, when the researchers varied the thickness of the virtual violin’s back plate or changed its wood type, they could hear clear differences in the resulting sounds.

“You can tweak the model, to hear the effect on the sound,” Makris says. “Since everything obeys the laws of physics, including a violin and the music it makes, this approach can add an appreciation to what makes violin sound. But ultimately, we get most of our inspiration from the artisans.”

This work was supported, in part, by an MIT Bose Research Fellowship.


Enabling privacy-preserving AI training on everyday devices

A new method could bring more accurate and efficient AI models to high-stakes applications like health care and finance, even in under-resourced settings.


A new method developed by MIT researchers can accelerate a privacy-preserving artificial intelligence training method by about 81 percent. This advance could enable a wider array of resource-constrained edge devices, like sensors and smartwatches, to deploy more accurate AI models while keeping user data secure.

The MIT researchers boosted the efficiency of a technique known as federated learning, which involves a network of connected devices that work together to train a shared AI model.

In federated learning, the model is broadcast from a central server to wireless devices. Each device trains the model using its local data and then transfers model updates back to the server. Data are kept secure because they remain on each device.
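In its simplest synchronous form, that loop can be sketched as plain federated averaging. The sketch below uses a toy logistic-regression model and synthetic per-device data as stand-ins; it is the standard baseline, not the researchers’ method.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(w, X, y, lr=0.1, epochs=5):
    """Train a copy of the global weights on one device's private data."""
    w = w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # logistic-regression predictions
        w -= lr * X.T @ (p - y) / len(y)      # gradient step; raw data never leaves the device
    return w

# Synthetic private datasets on three devices (placeholders for real on-device data).
d = 20
w_true = rng.standard_normal(d)
devices = []
for _ in range(3):
    X = rng.standard_normal((200, d))
    y = (X @ w_true + 0.1 * rng.standard_normal(200) > 0).astype(float)
    devices.append((X, y))

# Synchronous federated averaging: broadcast, train locally, average the updates.
w_global = np.zeros(d)
for round_ in range(20):
    updates = [local_train(w_global, X, y) for X, y in devices]
    w_global = np.mean(updates, axis=0)       # server waits for all devices, then averages
```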

But not all devices in the network have enough memory, computational capability, and connectivity to store, train, and transfer the model back and forth with the server in a timely manner. This causes delays that worsen training performance.

The MIT researchers developed a technique to overcome these memory constraints and communication bottlenecks. Their method is designed to handle a heterogeneous network of wireless devices with varied limitations.

This new approach could make it more feasible for AI models to be used in high-stakes applications with strict security and privacy standards, like health care and finance.

“This work is about bringing AI to small devices where it is not currently possible to run these kinds of powerful models. We carry these devices around with us in our daily lives. We need AI to be able to run on these devices, not just on giant servers and GPUs, and this work is an important step toward enabling that,” says Irene Tenison, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this technique.

Her co-authors include Anna Murphy ’25, a machine-learning engineer at Lincoln Laboratory; Charles Beauville, a visiting student from Ecole Polytechnique Fédérale de Lausanne (EPFL) in Switzerland and a machine-learning engineer at Flower Labs; and senior author Lalana Kagal, a principal research scientist in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. The research will be presented at the IEEE International Joint Conference on Neural Networks. 

Reducing lag time

Many federated learning approaches assume all devices in the network have enough memory to train the full AI model, and stable connectivity to transmit updates back to the server quickly.

But these assumptions fall short with a network of heterogeneous devices, like smartwatches, wireless sensors, and mobile phones. These edge devices have limited memory and computational power, and often face intermittent network connectivity.

The central server usually waits to receive model updates from all devices, then averages them to complete the training round. This process repeats until training is complete.

“This lag time can slow down the training procedure or even cause it to fail,” Tenison says.

To overcome these limitations, the MIT researchers developed a new framework called FTTE (Federated Tiny Training Engine) that reduces the memory and communication overhead needed by each mobile device.

Their framework involves three main innovations.

First, rather than broadcasting the entire model to all devices, FTTE sends a smaller subset of model parameters instead, reducing the memory requirement for each device. Parameters are internal variables the model adjusts during training.

FTTE uses a special search procedure to identify parameters that will maximize the model’s accuracy while staying within a certain memory budget. That limit is set based on the most memory-constrained device.

Second, the server updates the model using an asynchronous approach. Rather than waiting for responses from all devices, the server accumulates incoming updates until it reaches a fixed capacity, then proceeds with the training round.

Third, the server weights updates from each device based on when it received them. In this way, older updates don’t contribute as much to the training process. These outdated data can hold the model back, slowing the training process and reducing accuracy.
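The server-side portion of these ideas can be sketched roughly as follows; the staleness-weighting rule, capacity value, and subset-selection criterion here are illustrative placeholders, not the exact formulas from the paper.

```python
import numpy as np

def choose_subset(importance, memory_budget):
    """First innovation, schematically: pick the highest-scoring parameters that fit
    the most memory-constrained device (the scoring rule here is a placeholder)."""
    order = np.argsort(importance)[::-1]
    return order[:memory_budget]              # indices of parameters to broadcast

def aggregate_when_ready(w_global, incoming, capacity=4, decay=0.5):
    """Second and third innovations, schematically: fold in updates once `capacity`
    have arrived, weighting each by how stale it is (older updates count less).

    `incoming` is a list of (update_vector, rounds_old) pairs from devices.
    """
    if len(incoming) < capacity:
        return w_global, incoming             # keep waiting; faster devices keep training

    batch, rest = incoming[:capacity], incoming[capacity:]
    weights = np.array([decay ** age for _, age in batch])
    weights /= weights.sum()
    delta = sum(w * upd for (upd, _), w in zip(batch, weights))
    return w_global + delta, rest

# Tiny usage example with made-up update vectors and staleness ages.
w = np.zeros(5)
pending = [(np.ones(5) * 0.1, age) for age in (0, 1, 3, 0, 2)]
w, pending = aggregate_when_ready(w, pending, capacity=4)
```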

“We use this semi-asynchronous approach because we want to involve the least powerful devices in the training process so they can contribute their data to the model, but we don’t want the more powerful devices in the network to stay idle for a long time and waste resources,” Tenison says.

Achieving acceleration

The researchers tested their framework in simulations with hundreds of heterogeneous devices and a variety of models and datasets. On average, FTTE enabled the training procedure to reach completion 81 percent faster than standard federated learning approaches.

Their method reduced the on-device memory overhead by 80 percent and the communication payload by 69 percent, while achieving accuracy close to that of other techniques.

“Because we want the model to train as fast as possible to save the battery life of these resource-constrained devices, we do have a tradeoff in accuracy. But a small drop in accuracy could be acceptable in some applications, especially since our method performs so much faster,” she says.

FTTE also demonstrated effective scalability and delivered higher performance gains for larger groups of devices.

In addition to these simulations, the researchers tested FTTE on a small network of real devices with varying computational capabilities.

“Not everyone has the latest Apple iPhone. In many developing countries, for instance, users might have less powerful mobile phones. With our technique, we can bring the benefits of federated learning to these settings,” she says.

In the future, the researchers want to study how their method could be used to increase the personalized performance of AI models on each device, rather than focusing on the average performance of the model. They also want to conduct larger experiments on real hardware.

This work was funded, in part, by a Takeda PhD Fellowship.


With a swipe of a magnet, microscopic “magno-bots” perform complex maneuvers

MIT researchers’ new fabrication technique can produce soft, microscopic structures with magnetically activated moving parts.


Under a microscope, a bouquet of lollipop-like structures, each smaller than a grain of sand, waves gently in a petri dish of liquid. Suddenly, they snap together, like the jaws of a Venus flytrap, as a scientist waves a small magnet over the dish. What was previously an assemblage of tiny passive structures has transformed instantly into an active robotic gripper.

The lollipop gripper is one demonstration of a new type of soft magnetic hydrogel developed by engineers at MIT and their collaborators at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland and the University of Cincinnati. In a study appearing today in the journal Matter, the MIT team reports on a new method to print and fabricate the gel, which can be made into complex, magnetically activated three-dimensional structures.

The new gel could be the basis for soft, microscopic, magnetically responsive robots and materials. Such magno-bots could be used in medicine, for instance to release drugs or grab biopsies when directed by an external magnet.

Making objects move with magnets is nothing new, at least at the macroscale. We can, for example, wave a refrigerator magnet over a pile of paper clips that will trail the magnet in response. And at the microscale, scientists have designed a variety of magnetic “micro-swimmers” — components that are smaller than a millimeter and can be directed remotely by a magnet to squeeze through small spaces. For the most part, these designs work by mixing magnetic particles into a printable resin and pulling the entire swimmer in the direction of an external magnet.

In contrast, the MIT team’s new material can be made into even more complex and deformable structures with micron-scale precision. These capabilities could enable a magnetic millibot to move its individual features and perform more complex maneuvers.

“We can now make a soft, intricate 3D architecture with components that can move and deform in complex ways within the same microscopic structure,” says study author Carlos Portela, the Robert N. Noyce Career Development Associate Professor of Mechanical Engineering at MIT. “For soft microscopic robotics, or stimuli-responsive matter, that could be a game-changing capability.”

The study’s MIT co-authors include graduate students Rachel Sun and Andrew Chen, along with Yiming Ji and Daryl Yee of EPFL and Eric Stewart of the University of Cincinnati.

In a flash

At MIT, Portela’s group develops new metamaterials — materials engineered with unique, microscopic architectures that give rise to beyond-normal material properties. Portela has fabricated a variety of such metamaterials, including extremely tough and stretchy architectures and designs that can manipulate sound and withstand violent impacts.

Most recently, he’s expanded his research to “programmable” materials, which can be engineered to change their properties in response to stimuli, such as certain chemicals, light, and electric and magnetic fields.

From the team’s perspective, magnetic stimuli stand out from the rest.

“With a magnetically responsive material, we have control at a distance and the response is instantaneous,” says co-lead author Andrew Chen. “We don’t have to wait for a slow chemical reaction or physical process, and we can manipulate the material without touching it.”

For the new study, the team aimed to create a magnetically responsive metamaterial that can be made into structures smaller than a millimeter. Researchers typically fabricate microstructures by using two-photon lithography — a high-resolution 3D printing technique that flashes a laser into a small pool of resin. With repeated flashes, the laser traces a microscopic pattern into the resin, which solidifies into the same pattern, ultimately creating a tiny, three-dimensional structure, layer by layer.

While 3D resin printing produces intricate microstructures, using the same process to print magnetic structures has been a challenge. Researchers have tried to combine the resin with magnetic nanoparticles before printing the mixture. But magnetic particles are essentially bits of metal that inherently scatter light away or agglomerate and sediment unintentionally. Scientists have found that any magnetic particles in the resin can reduce the laser’s power at a given spot and weaken the resulting structure or prevent its printing altogether.

“Directly 3D printing deformable micron-scale structures with a high fraction of magnetic particles is extremely difficult, often involving a tradeoff between magnetic functionality and structural integrity,” says Sun, a co-lead author on the work.

A printed double-dip

The researchers created a new way to fabricate magnetic microstructures, by combining 3D resin printing with a double-dip process. The researchers first applied conventional resin printing to create a microstructure using a typical polymer gel, with no added magnetic particles. Then they dipped the printed gel into a solution containing iron ions, which the gel can absorb. The iron-soaked structure is then dipped again in a second solution of hydroxide ions. The iron ions in the gel bond with the hydroxide ions, creating iron-oxide nanoparticles that are inherently magnetic.

With this new process, the team can print intricate structures smaller than a millimeter, and add magnetic properties to the structures after printing. What’s more, they are able to control how magnetic a structure’s individual features can be. They found that, by tuning the laser’s power as they print certain features, they can set how cross-linked, or “tight,” the gel is when printed. The tighter the gel, the fewer magnetic particles it can form. In this way, the researchers can determine how magnetic each tiny feature can be.

“This provides unprecedented design freedom to print multifunctional structures and materials at the microscale,” Sun says.

As a demonstration, the team fabricated ball-and-stick structures resembling tiny lollipops. The structures were less than a millimeter in height, with balls that were smaller than a grain of sand. The researchers printed the lollipops out of polymer gel and infused each ball with different amounts of magnetic particles, giving them various degrees of magnetism. Under a microscope, they observed that when they passed an ordinary refrigerator magnet over the structures, the lollipops pulled toward the magnet to various degrees, in a configuration that mimicked gripping fingers.

“You could imagine a magnetic architecture like this could act as a small robot that you could guide through the body with an external magnet, and it could latch onto something, for instance to take a biopsy,” Portela says. “That is a vision that others can take from this work.”

The team also fabricated a magnetically responsive, “bistable” switch. They first printed a small, millimeter-long rectangle of polymer gel and attached four tiny, oar-like magnetic structures to either side. Each oar measured about 8 microns thick — about the size of a red blood cell. When the team applied a magnet on one end of the rectangle, the oars flipped toward the magnet, pulling the rectangle in the same direction and locking it in that position. When the magnet was applied to the other side, the oars flipped again, pulling the rectangle, like a switch, in the opposite direction.

“We think this is a new kind of bistable mechanism that could be used, for instance, in a microfluidic device, as a magnetic valve to open or shut some flow,” Portela says. “For now, we’ve figured out how to fabricate magnetic complex architectures at the microscale and also spatially tune their properties. That opens up a lot of interesting ideas for soft miniature robots going forward.”

This research was supported, in part, by the National Science Foundation and the MathWorks seed grant program.

This work was performed, in part, in the MIT.nano fabrication and characterization facilities.


Robotically assembled building blocks could make construction more efficient and sustainable

New research suggests constructing a simple building from interlocking subunits should be mechanically feasible and have a much smaller carbon footprint.


Robotically assembled building blocks could be a more environmentally friendly method for erecting large-scale structures than some existing construction techniques, according to a new study by MIT researchers.

The team conducted a feasibility study to evaluate the efficiency of constructing a simple building using “voxels,” which are modular 3D subunits that assemble into complex, durable structures.

After studying the performance of multiple voxels, the researchers developed three new designs intended to streamline building construction. They also produced a robotic assembler and a user-friendly interface for generating voxel-based building layouts and feeding instructions to the robots.

Their results indicate this voxel-based robotic assembly system could reduce embodied carbon — all of the carbon emitted during the lifecycle of building materials — by as much as 82 percent, compared with popular techniques like 3D concrete printing, precast modular concrete, and steel framing. The system would also be competitive in terms of cost and construction time. However, the choice of materials used to manufacture the voxels does play a major role in their carbon footprint and cost.

While scalability, durability, long-term robustness, and important considerations like fire resistance remain to be explored before such a system could be widely deployed, the researchers say these initial results highlight the potential of this approach for automated, on-site construction.

“I’m particularly excited about how the robotic assembly of discrete lattices can enable a practical way to apply digital fabrication to the built environment in a way that can let us build much more efficiently and sustainably,” says Miana Smith, a graduate student in the Center for Bits and Atoms (CBA) at MIT and lead author of the study.

She is joined on the paper by Paul Richard, a graduate student at École Polytechnique Fédérale de Lausanne in Switzerland and former visiting researcher at MIT; Alfonso Parra Rubio, a CBA graduate student; and senior author Neil Gershenfeld, an MIT professor and the director of the CBA. The research appears in Automation in Construction.

Designing better building blocks

Over the past several years, researchers in the Center for Bits and Atoms have been developing voxels, which are lattice-structured building blocks that can be assembled into objects with high strength and stiffness, like airplane wings, wind turbine blades, and space structures.

“Here, we are taking aerospace principles and applying them to buildings. Why don’t we make buildings as efficiently as we make airplanes?” says Gershenfeld, whose lab has previously worked on voxel assembly with NASA, Airbus, and Boeing.

To explore the feasibility of voxel-based assembly strategies for buildings, the researchers first evaluated the mechanical performance and sustainability of eight existing voxel designs, including a cuboctahedron made from glass-reinforced nylon and a Kelvin lattice made from steel.

Based on those evaluations, they developed a set of three voxels using a new geometry that could be more easily assembled robotically into a larger structure. The new design, based on a high-strength and high-stiffness octet lattice, mechanically self-aligns into rigid structures.

“The interlocking nature of these voxels means we can get nice mechanical properties without needing to have a lot of connectors in the system, so the construction process can run a lot faster,” Smith says.

To accelerate construction, they designed a robotic assembly system based on inchworm-like robots that crawl across a voxel structure by anchoring and extending their bodies. These Modular Inchworm Lattice Assembler robots, or MILAbots, use grippers on each end to place voxel building blocks and engage the snap-fit connections.

“The robots can assemble the voxels by dropping them into place and then stepping on them to have the pieces interlock. We can do precise maneuvers based on the mechanical relationship between the robots and the voxels,” Smith explains.

The team studied the embodied carbon needed to fabricate their new voxel designs using three materials: plastic, plywood, and steel. Then they evaluated the throughput and cost of using the robotic assembly system to build a simple, one-story building. The researchers compared these estimates with the performance of other construction methods.

Potential environmental benefits

They found that most existing voxels, and especially those made from plastics, performed poorly compared to existing methods in terms of sustainability, but the steel and wood voxels they designed offered significant environmental benefits.

For instance, utilizing their steel voxels would generate only 36 percent of the embodied carbon required for 3D concrete printing and 52 percent of the embodied carbon of precast concrete. The plywood voxels had the lowest carbon footprint, requiring about 17 percent and 24 percent of the embodied carbon of those two methods, respectively.

“There is still a potentially viable option for a plastics-based voxel approach, we just have to be a bit more strategic about which types of plastics, infills, and geometries we use,” Smith says.

In addition, projected on-site assembly time for the steel and wood voxel approaches averaged 99 hours, whereas existing construction methods averaged 155 hours.

These speed benefits rely on the distributed nature of voxel-based assembly. While one MILAbot working alone is far slower than existing techniques, with a team of 20 robots working in parallel, the system catches up to or surpasses existing automation methods at a lower cost.

“One benefit of this method is how incremental it is. You can start building, and if it turns out you need a new room, you can just add onto the structure. It is also reversible, so if your use changes, you can disassemble the voxels and change the structure,” Gershenfeld says.

The researchers also developed an interface that enables users to input or hand-design a voxelized structure. The automatic system determines the paths the MILAbots should follow for construction and sends commands to the assemblers.
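
Purely as an illustration of that pipeline, the sketch below orders a hand-designed voxel layout bottom-up and emits simple placement commands; the real MILAbot planner, its command format, and any path or collision handling are assumptions, not details from the paper.

```python
def assembly_order(occupied):
    """Order voxel placements layer by layer so each block rests on the layer below.

    occupied: set of (x, y, z) integer coordinates describing the designed structure
    """
    return sorted(occupied, key=lambda v: (v[2], v[1], v[0]))  # lowest layer first

def to_commands(order):
    """Turn the placement order into simple text commands for an assembler robot."""
    return [f"PLACE voxel at x={x} y={y} z={z}" for (x, y, z) in order]

# Example: a 2x2x2 cube of eight voxels
plan = assembly_order({(x, y, z) for x in range(2) for y in range(2) for z in range(2)})
print(to_commands(plan)[0])  # "PLACE voxel at x=0 y=0 z=0"
```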

The next step in this project will be a larger testbed in Bhutan, using the “super fab lab” that CBA helped set up there to replicate the robots to test construction for a planned sustainable city, Gershenfeld says.

Additional areas of future work include studying the stability of voxel structures under lateral loads, improving the design tool to account for the physics of the system, enhancing the MILAbots, and evaluating voxels that have integrated sheeting, insulation, or electrical and plumbing routing.

“Our work helps support why doing this type of distributed robot assembly might be a practical way to bring digital fabrication into building construction,” Smith says.

“This is yet another visionary example from Neil Gershenfeld and his team, of finding ways for buildings to build themselves with the help of tiny robotic machines. I’m now fascinated by how we can harness an idea like this to make it more affordable to make the outsides of buildings more engaging and joyful,” says Thomas Heatherwick, founder of the design and architecture firm Heatherwick Studio, who was not involved with this research.

This work was funded, in part, by the MIT Center for Bits and Atoms Consortia.


Mapping molecular markers of physical fitness

A new study reveals cellular pathways that appear to underlie some differences in physical fitness.


Patterns of molecular activity in the blood may hold clues not only to how fit someone is, but also to the biological processes that support physical performance. Researchers at MIT, GE HealthCare, and the U.S. Military Academy at West Point have developed a computational model that links thousands of these molecular signals to fitness levels, revealing pathways that could inform future studies to improve fitness training and speed injury or disease recovery.

To develop their model, the researchers analyzed more than 50,000 biomarkers in 86 cadets at the U.S. Military Academy who were training for a military competition. Using these data, the researchers were able to identify molecular pathways that appear to contribute to higher levels of physical fitness.

“We had 50,000 measurements, and we wanted to get it down to about 100 where there’s some likelihood that the markers that we’re measuring are mechanistically linked to physical fitness. So, not just a statistical correlation, of which there will be many, but markers where there’s a likelihood that there is a causal relationship,” says Ernest Fraenkel, the Grover M. Hermann Professor in Health Sciences and Technology in MIT’s Department of Biological Engineering.

These biomarkers can be measured from blood samples, which could offer a simple way to give an athlete, or perhaps someone with a chronic illness or long-term injury, additional information about where to focus their efforts to reduce the risk of injury, accelerate recovery, or raise their performance ceiling beyond what conventional measures show.

Azar Alizadeh, a principal scientist with GE HealthCare’s Healthcare Technology and Innovation Center, is the paper’s lead author. Fraenkel and Luca Marinelli, a senior principal scientist with GE HealthCare, are the senior authors of the new study, which appears in the journal Communications Biology.

Mapping fitness

To find the genetic basis of a simple trait such as height, scientists can perform large-scale studies known as genome-wide association studies (GWAS), in which genetic markers from thousands of people can be linked with height. However, the picture becomes much more complicated for traits such as physical fitness, which is determined by the interplay of many different genetic, physiological, and environmental factors.

The researchers set out to try to identify some of those factors, working with a group of 86 volunteers at the U.S. Military Academy at West Point who were training for the Sandhurst Military Skills Competition. Alizadeh led the experimental study design and execution, in collaboration with GE HealthCare, GE Research, West Point, and MIT scientists. During the three-month study period, volunteers participated in up to five sessions. At each session, blood samples were taken before and after intense exercise. The researchers also measured other traits such as lean muscle mass and VO2 max (the maximum rate of oxygen consumption during exercise).

From the blood samples, the researchers were able to measure more than 50,000 biomarkers, which they obtained by analyzing DNA methylation patterns, sequencing messenger RNA transcripts, and analyzing thousands of the proteins and small molecules found in the samples.

From their set of 50,000 biomarkers, the researchers hoped to identify a smaller number that could predict overall physical fitness, as measured by performance on the Army Combat Fitness Test (ACFT). This test includes a 2-mile run, maximum deadlift (the heaviest weight a person can lift for a single repetition up to 340 pounds), and sprint-drag-carry, a test that involves sprinting, dragging a sled, and carrying kettlebells.

One way to do this would be to simply train a computational model to identify correlations between fitness and biomarkers. However, with only 86 subjects in the study, that approach would likely yield correlations that were random and did not actually contribute to physical fitness, Fraenkel says.

To take a more targeted approach, the researchers first created a network model that represents the interactions between the markers, based on existing databases that catalog those interactions. These connections might represent proteins that interact with each other in a signaling pathway, or a transcription factor that turns on a set of genes.

“We built a network that you can think of as a city map. You want to find the places in the city map that are lighting up — not just one light going on, but a whole bunch of houses or street lamps going on in the same neighborhood,” Fraenkel says. “We can find neighborhoods on this enormous molecular map that are active at the same time, in a way that correlates with the phenotype that we measure.”

“We built upon the network bioinformatics from the Fraenkel lab to create an end-to-end predictive modeling framework to discover biological expression circuits that drive groups of physical characteristics predictive of ACFT scores, for example, body composition or exercise physiology metrics like VO2 max,” Marinelli says.
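
A minimal sketch of that “neighborhoods lighting up” idea, assuming a hypothetical interaction-network edge list and per-marker correlation scores (the actual PhenoMol framework is considerably more sophisticated):

```python
import networkx as nx

def active_neighborhoods(interactions, phenotype_corr, threshold=0.3):
    """Return connected groups of biomarkers that "light up" together.

    interactions:   iterable of (marker_a, marker_b) edges from pathway databases
    phenotype_corr: dict mapping marker name -> absolute correlation with the phenotype
    """
    g = nx.Graph(interactions)
    # Keep only markers whose signal correlates with the fitness phenotype...
    active = [m for m in g.nodes if phenotype_corr.get(m, 0.0) >= threshold]
    sub = g.subgraph(active)
    # ...then report whole neighborhoods (connected components) rather than isolated hits.
    return [sorted(c) for c in nx.connected_components(sub) if len(c) > 1]
```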

After feeding the measurements from the study participants into this predictive model, known as PhenoMol, the researchers were able to identify more than 100 biomarkers linked to performance on the ACFT. Fitness predictions based on these biomarkers were much more accurate than those of a model that correlated biomarkers with ACFT performance without taking network connections into account. Additionally, PhenoMol performed similarly to a model that predicted participants’ fitness based on measurements of their VO2 max and lean muscle mass.

Cellular pathways

The researchers found that the biomarkers identified by PhenoMol clustered into several different cellular pathways. Those include pathways involved in blood coagulation and the complement cascade — a part of the immune system involved in clearing damaged cells. Those systems likely help with recovery from tissue injury and stress response during exercise, Fraenkel says.

Another prominent cluster involves molecules related to the urea cycle, which is responsible for eliminating the ammonia that results from the breakdown of proteins. The model also identified biomarkers that are linked with the function of mitochondria (the organelles that generate energy within cells).

Fraenkel now hopes to dig deeper into which markers show someone’s current fitness, and which might reveal what their potential fitness levels could be. This could help to reveal potential strengths that might not show up in traditional fitness tests, he says.

That kind of prediction could be useful not only for athletic training, but also for other people who are recovering from an injury or disease, or people experiencing the effects of aging. For example, using this approach in different populations might provide useful information for an elderly person after a stroke, since such events often require months of therapy to regain significant mobility.

“This has relevance for the military and for sports teams, but also in a lot of normal life situations where maybe someone is going through rehabilitation for some injury or disease and they’ve hit a wall,” Fraenkel says. “Or during aging, you may be able to see when somebody’s losing capacity or when they have more capacity than they’ve been able to actualize.”

Molecular markers of fitness could also be used in clinical trials to rigorously test the potential benefits of popular food supplements and fitness programs, he adds.

To make the testing process simpler, the researchers would like to narrow down their pool of biomarkers to a handful that could be easily measured from a blood sample using a single method suitable for widespread use.

The research was developed with funding from the Defense Advanced Research Projects Agency (DARPA), which states that the views, opinions, or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the U.S. government.


Self-organizing “pencil beam” laser could help scientists design brain-targeted therapies

MIT researchers leveraged a surprise discovery to devise a faster and more precise biomedical imaging technique.


MIT researchers discovered a paradoxical phenomenon in optical physics that could enable a new bioimaging method that’s faster and higher-resolution than existing technology.

They discovered that, under the right conditions, a chaotic mess of laser light can spontaneously self-organize into a highly focused “pencil beam.”

Using this self-organized pencil beam, the researchers captured 3D images of the human blood-brain barrier 25 times faster than the gold-standard method, while maintaining comparable resolution.

By showing individual cells absorbing drugs in real time, this technology could help scientists test whether new drugs for neurodegenerative diseases like Alzheimer’s or ALS reach their targets in the brain, with greater speed and resolution.

“The common belief in the field is that if you crank up the power in this type of laser, the light will inevitably become chaotic. But we proved that this is not the case. We followed the evidence, embraced the uncertainty, and found a way to let the light organize itself into a novel solution for bioimaging,” says Sixian You, assistant professor in the MIT Department of Electrical Engineering and Computer Science (EECS), a member of the Research Laboratory for Electronics, and senior author of a paper on this imaging technique.

She is joined on the paper by lead author Honghao Cao, an EECS graduate student; EECS graduate students Li-Yu Yu and Kunzan Liu; postdocs Sarah Spitz, Francesca Michela Pramotton, and Federico Presutti; Zhengyu Zhang PhD ’24; Subhash Kulkarni, an assistant professor at Harvard University and the Beth Israel Deaconess Medical Center; and Roger Kamm, the Cecil and Ida Green Distinguished Professor of Biological and Mechanical Engineering at MIT. The paper appears today in Nature Methods.

A surprising finding

The discovery began with an observation that initially puzzled the researchers.

The team previously developed a precise fiber shaper, a device that enables them to carefully tune the laser light shining through a multimode optical fiber. This type of optical fiber can carry a significant amount of power.

Cao was pushing the multimode fiber toward its limit to see how much power it could take.

Typically, the more power one pumps into the laser, the more disordered and scattered the beam of light becomes due to imperfections in the fiber.

But Cao observed that, as he increased the power almost to the point where it would burn the fiber, the light did the opposite of what was expected: It collapsed into a single, needle-sharp beam.

“Disorder is intrinsic to these fibers. The light engineering you typically need to do to overcome that disorder, especially at high power, is a longstanding hassle. But with this self-organization, you can get a stable, ultrafast pencil beam without the need for custom beam-shaping components,” You says.

To replicate this phenomenon, the researchers found they had to satisfy two simple, but precise conditions.

First, the laser must enter the fiber at a perfect, zero-degree angle. This is a more stringent alignment requirement than is typically imposed for these types of fibers. Second, the power must be dialed up until the light begins to interact with the glass of the fiber itself.

“At this critical power, the nonlinearity can counter the intrinsic disorder, creating a balance that transforms the input beam into a self-organized pencil beam,” Cao explains.

Typically, researchers conduct these experiments at much lower power levels for fear of destroying the fiber, in which case they wouldn’t see this self-organization. In addition, such precise on-axis alignment isn’t typically necessary since a multimode fiber can carry so much power.

But taken together, these two conditions can generate a stable pencil-beam without any complicated light engineering methods.

“That is the charm of this method — you could do this with a normal, optical setup and without much domain expertise,” You says.

A better beam

When the researchers performed characterization experiments on this pencil beam, it proved more stable and higher-resolution than many similar beams. Other beams often suffer from “sidelobes” — blurry halos of light that can distort images.

Their beam was more pristine and tightly focused.

Building on those experiments, the researchers demonstrated the use of this pencil-beam in biomedical imaging of the human blood-brain barrier.

This barrier is a tightly packed layer of cells that protects the brain from toxins, but it also blocks many medicines. Scientists and clinicians often want to see how drugs flow inside the vasculature of the blood-brain barrier and whether they reach their targets within the brain.

But with standard optical settings, the best one can do is capture one 2D section of the vasculature at a time, and then repeat the process multiple times to generate a fuller image, You explains.

Using this new technique, the researchers created an ultrafast, high-precision pencil beam that enabled them to dynamically track how cells absorb proteins in real time.

“The pharmaceutical industry is especially interested in using human-based models to screen for drugs that effectively cross the barrier, as animal models often fail to predict what happens in humans. That this new method doesn’t require the cells to have a fluorescent tag is a game-changer. For the first time, we can now visualize the time-dependent entry of drugs into the brain and even identify the rate at which specific cell types internalize the drug,” says Kamm.

“Importantly, however, this approach is not limited to the blood-brain barrier but enables time-resolved tracking of diverse compounds and molecular targets across engineered tissue models, providing a powerful tool for biological engineering,” Spitz adds.

The team captured cellular-level 3D images that were higher quality than with other methods, and generated these images about 25 times faster.

“Usually, you have a tradeoff between image resolution and depth of focus — you can only probe so far at a time. But with our method, we can overcome this tradeoff by creating a pencil-beam with both high resolution and a large depth of focus,” You says.

In the future, the researchers want to better understand the fundamental physics of the pencil-beam and the mechanisms behind its self-organization. They also plan to apply the technique to other scenarios, such as imaging neurons in the brain, and work toward commercializing the technology.

“You’s group realized this beam that concentrates energy in time and space could be valuable for microscopy techniques that depend on the intensity of the light that illuminates the sample. They demonstrated just that and found advantages over ordinary laser beams for imaging. It will be scientifically interesting to fully understand the creation of the new pencil beams, which could find use in a variety of imaging applications,” says Frank Wise, the Samuel B. Eckert Professor of Engineering Emeritus at Cornell University, who was not involved with this work.

The work was supported by MIT startup funds, Novo Nordisk Research Development, a National Science Foundation (NSF) CAREER Award, CZI Dynamic Imaging from the Chan Zuckerberg donor-advised fund through the Silicon Valley Community Foundation, the Manton Foundation, and the Fairbairn Menstruation Science Fund.


A faster way to estimate AI power consumption

The “EnergAIzer” method generates reliable results in seconds, enabling data center operators to efficiently allocate resources and reduce wasted energy.


Due to the explosive growth of artificial intelligence, it is estimated that data centers will consume up to 12 percent of total U.S. electricity by 2028, according to the Lawrence Berkeley National Laboratory. Improving data center energy efficiency is one way scientists are striving to make AI more sustainable.

Toward that goal, researchers from MIT and the MIT-IBM Watson AI Lab developed a rapid prediction tool that tells data center operators how much power will be consumed by running a particular AI workload on a certain processor or AI accelerator chip.

Their method produces reliable power estimates in a few seconds, unlike traditional modeling techniques that can take hours or even days to yield results. Moreover, their prediction tool can be applied to a wide range of hardware configurations — even emerging designs that haven’t been deployed yet.

Data center operators could use these estimates to effectively allocate limited resources across multiple AI models and processors, improving energy efficiency. In addition, this tool could allow algorithm developers and model providers to assess potential energy consumption of a new model before they deploy it.

“The AI sustainability challenge is a pressing question we have to answer. Because our estimation method is fast, convenient, and provides direct feedback, we hope it makes algorithm developers and data center operators more likely to think about reducing energy consumption,” says Kyungmi Lee, an MIT postdoc and lead author of a paper on this technique.

She is joined on the paper by Zhiye Song, an electrical engineering and computer science (EECS) graduate student; Eun Kyung Lee and Xin Zhang, research managers at IBM Research and the MIT-IBM Watson AI Lab; Tamar Eilam, IBM Fellow, chief scientist of sustainable computing at IBM Research, and a member of the MIT-IBM Watson AI Lab; and senior author Anantha P. Chandrakasan, MIT provost, Vannevar Bush Professor of Electrical Engineering and Computer Science, and a member of the MIT-IBM Watson AI Lab. The research is being presented this week at the IEEE International Symposium on Performance Analysis of Systems and Software.

Expediting energy estimation

Inside a data center, thousands of powerful graphics processing units (GPUs) perform operations to train and deploy AI models. The power consumption of a particular GPU will vary based on its configuration and the workload it is handling.

Many traditional methods used to predict energy consumption involve breaking a workload into individual steps and emulating how each module inside the GPU is being utilized one step at a time. But AI workloads like model training and data preprocessing are extremely large and can take hours or even days to simulate in this manner.

“As an operator, if I want to compare different algorithms or configurations to find the most energy-efficient manner to proceed, if a single emulation is going to take days, that is going to become very impractical,” Lee says.

To speed up the prediction process, the MIT researchers sought to use less-detailed information that could be estimated faster. They found that AI workloads often have many repeatable patterns. They could use these patterns to generate the information needed for reliable but quick power estimation.

In many cases, algorithm developers write programs to run as efficiently as possible on a GPU. For instance, they use well-structured optimizations to distribute the work across parallel processing cores and move chunks of data around in the most efficient manner.

“These optimizations that software developers use create a regular structure, and that is what we are trying to leverage,” explains Lee.

The researchers developed a lightweight estimation model, called EnergAIzer, that captures the power usage pattern of a GPU from those optimizations.

An accurate assessment

But while their estimation was fast, the researchers found that it didn’t take all energy costs into account. For instance, every time a GPU runs a program, there is a fixed energy cost required for setting up and configuring that program. Then each time the GPU runs an operation on a chunk of data, an additional energy cost must be paid.

Due to fluctuations in the hardware or conflicts in accessing or moving data, a GPU might not be able to use all available bandwidth, slowing operations down and drawing more energy over time.

To include these additional costs and variances, the researchers gathered real measurements from GPUs to generate correction terms they applied to their estimation model.
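
A rough sketch of this style of estimate, combining per-kernel pattern counts with a fixed launch overhead and a measured bandwidth correction, is shown below; every field name and coefficient is an illustrative assumption, not EnergAIzer’s actual model.

```python
def estimate_energy_joules(kernels, launch_overhead_j=0.05, bandwidth_derating=0.85):
    """Estimate workload energy from repeated kernel patterns.

    kernels: list of dicts like
        {"name": "attention", "calls": 96, "energy_per_call_j": 0.4}
    launch_overhead_j:   fixed setup/configuration cost paid per kernel launch
    bandwidth_derating:  fraction of peak bandwidth actually achieved, a correction
                         term derived from real GPU measurements; lower values
                         stretch runtime and increase energy
    """
    total = 0.0
    for k in kernels:
        dynamic = k["calls"] * k["energy_per_call_j"]   # per-operation cost
        overhead = k["calls"] * launch_overhead_j       # fixed per-launch cost
        total += (dynamic + overhead) / bandwidth_derating
    return total
```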

“This way, we can get a fast estimation that is also very accurate,” she says.

In the end, a user can provide their workload information, like the AI model they want to run and the number and length of user inputs to process, and EnergAIzer will output an energy consumption estimation in a matter of seconds.

The user can also change the GPU configuration or adjust the operating speed to see how such design choices impact the overall power consumption.

When the researchers tested EnergAIzer using real AI workload information from actual GPUs, it could estimate the power consumption with only about 8 percent error, which is comparable to traditional methods that can take hours to produce results.

Their method could also be used to predict the power consumption of future GPUs and emerging device configurations, as long as the hardware doesn’t change drastically in a short amount of time.

In the future, the researchers want to test EnergAIzer on the newest GPU configurations and scale the model up so it can be applied to many GPUs that are collaborating to run a workload.

“To really make an impact on sustainability, we need a tool that can provide a fast energy estimation solution across the stack, for hardware designers, data center operators, and algorithm developers, so they can all be more aware of power consumption. With this tool, we’ve taken one step toward that goal,” Lee says.

This research was funded, in part, by the MIT-IBM Watson AI Lab.


MIT scientists build the world’s largest collection of Olympiad-level math problems, and open it to everyone

New dataset of 30,000-plus competition math problems from 47 countries gives AI researchers a harder test — and students worldwide a better training ground.


Every year, the countries competing in the International Mathematical Olympiad (IMO) arrive with a booklet of their best, most original problems. Those booklets get shared among delegations, then quietly disappear. No one had ever collected them systematically, cleaned them, and made them available, not for AI researchers testing the limits of mathematical reasoning, and not for the students around the world training for these competitions largely on their own.

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), King Abdullah University of Science and Technology (KAUST), and the company HUMAIN have now done exactly that.

MathNet is the largest high-quality dataset of proof-based math problems ever created. Comprising more than 30,000 expert-authored problems and solutions spanning 47 countries, 17 languages, and 143 competitions, it is five times larger than the next-biggest dataset of its kind. The work will be presented at the International Conference on Learning Representations (ICLR) in Brazil later this month.

What makes MathNet different is not only its size, but its breadth. Previous Olympiad-level datasets draw almost exclusively from competitions in the United States and China. MathNet spans dozens of countries across six continents, covers 17 languages, includes both text- and image-based problems and solutions, and stretches across four decades of competition mathematics. The goal is to capture the full range of mathematical perspectives and problem-solving traditions that exist across the global math community, not just the most visible ones.

"Every country brings a booklet of its most novel and most creative problems," says Shaden Alshammari, an MIT PhD student and lead author on the paper. "They share the booklets with each other, but no one had made the effort to collect them, clean them, and upload them online."

Building MathNet required tracking down 1,595 PDF volumes totaling more than 25,000 pages, spanning digital documents and decades-old scans in more than a dozen languages. A significant portion of that archive came from an unlikely source: Navid Safaei, a longtime IMO community figure and co-author who had been collecting and scanning those booklets by hand since 2006. His personal archive formed much of the backbone of the dataset.

The sourcing matters as much as the scale. Where most existing math datasets pull problems from community forums like Art of Problem Solving (AoPS), MathNet draws exclusively from official national competition booklets. The solutions in those booklets are expert-written and peer-reviewed, and they often run to multiple pages, with authors walking through several approaches to the same problem. That depth gives AI models a far richer signal for learning mathematical reasoning than the shorter, informal solutions typical of community-sourced datasets. It also means the dataset is genuinely useful for students: Anyone preparing for the IMO or a national competition now has access to a centralized, searchable collection of high-quality problems and worked solutions from traditions around the world.

"I remember so many students for whom it was an individual effort. No one in their country was training them for this kind of competition," says Alshammari, who competed in the IMO as a student herself. "We hope this gives them a centralized place with high-quality problems and solutions to learn from."

The team has deep roots in the IMO community. Sultan Albarakati, a co-author, currently serves on the IMO board, and the researchers are working to share the dataset with the IMO foundation directly. To validate the dataset, they assembled a grading group of more than 30 human evaluators from countries including Armenia, Russia, Ukraine, Vietnam, and Poland, who coordinated together to verify thousands of solutions.

"The MathNet database has the potential to be an excellent resource for both students and leaders seeking new problems to work on or looking for the solution to a difficult question," says Tanish Patil, deputy leader of Switzerland's IMO. "Whilst other archives of Olympiad problems do exist (notably, the Contest Collections forums on AoPS), these resources lack standardized formatting system, verified solutions, and important problem metadata that topics and theory require. It will also be interesting to see how this dataset is used to improve the performance of reasoning models, and if we will soon be able to reliably answer an important issue when creating novel Olympiad questions: determining if a problem is truly original."

MathNet also functions as a rigorous benchmark for AI performance, and the results reveal a more complicated picture than recent headlines about AI math prowess might suggest. Frontier models have made extraordinary progress: Some have reportedly achieved gold-medal performance at the IMO, and on standard benchmarks they now solve problems that would stump most humans. But MathNet shows that progress is uneven. Even GPT-5, the top-performing model tested, averaged around 69.3 percent on MathNet's main benchmark of 6,400 problems, failing nearly one-in-three Olympiad-level problems. And when problems include figures, performance drops significantly across the board, exposing visual reasoning as a consistent weak point for even the most capable models.

Several open-source models scored 0 percent on Mongolian-language problems, highlighting another dimension where current AI systems fall short despite their overall strength.

"GPT models are equally good in English and other languages," Alshammari says. "But many of the open-source models fail completely at less-common languages, such as Mongolian."

The diversity of MathNet is also designed to address a deeper limitation in how AI models learn mathematics. When training data skews toward English and Chinese problems, models absorb a narrow slice of mathematical culture. A Romanian combinatorics problem or a Brazilian number theory problem may approach the same underlying concept from a completely different angle. Exposure to that range, the researchers argue, makes both humans and AI systems better mathematical thinkers.

Beyond problem-solving, MathNet introduces a retrieval benchmark that asks whether models can recognize when two problems share the same underlying mathematical structure, a capability that matters both for AI development and for the math community itself. Near-duplicate problems have appeared in real IMO exams over the years because finding mathematical equivalences across different notations, languages, and formats is genuinely hard, even for expert human committees. Testing eight state-of-the-art embedding models, the researchers found that even the strongest identified the correct match only about 5 percent of the time on the first try, with models frequently ranking structurally unrelated problems as more similar than equivalent ones.
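
For illustration, a top-1 retrieval score of that kind could be computed as in the sketch below, assuming precomputed problem embeddings and annotated duplicate pairs; the benchmark’s exact scoring protocol may differ.

```python
import numpy as np

def recall_at_1(query_emb, corpus_emb, gold_idx):
    """Fraction of queries whose top cosine-similarity match is the annotated duplicate.

    query_emb:  (n_queries, d) array of problem embeddings
    corpus_emb: (n_corpus, d) array of candidate-problem embeddings
    gold_idx:   (n_queries,) index of the structurally equivalent problem for each query
    """
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    c = corpus_emb / np.linalg.norm(corpus_emb, axis=1, keepdims=True)
    top1 = (q @ c.T).argmax(axis=1)
    return float((top1 == np.asarray(gold_idx)).mean())
```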

The dataset also includes a retrieval-augmented generation benchmark, testing whether giving a model a structurally related problem before asking it to solve a new one improves performance. It does, but only when the retrieved problem is genuinely relevant. DeepSeek-V3.2-Speciale gained up to 12 percentage points with well-matched retrieval, while irrelevant retrieval degraded performance in roughly 22 percent of cases.
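
A sketch of how such a retrieval-augmented prompt might be assembled appears below; the benchmark’s actual prompt template is not specified in this article, so the formatting is an assumption.

```python
def build_prompt(problem, retrieved=None):
    """Optionally prepend a structurally related problem and its worked solution."""
    parts = []
    if retrieved is not None:
        parts.append("A related problem and its full solution:\n"
                     f"{retrieved['problem']}\n\n{retrieved['solution']}")
    parts.append("Now solve the following problem and give a complete proof:\n" + problem)
    return "\n\n".join(parts)
```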

Alshammari wrote the paper with Safaei, HUMAIN AI engineer Abrar Zainal, KAUST Academy Director Sultan Albarakati, and MIT CSAIL colleagues: master's student Kevin Wen SB ’25; Microsoft Principal Engineering Manager Mark Hamilton SM ’22, PhD ’25; and professors William Freeman and Antonio Torralba. Their work was funded, in part, by the Schwarzman College of Computing Fellowship and the National Science Foundation.

MathNet is publicly available at mathnet.csail.mit.edu.


Three from MIT named 2026 Goldwater Scholars

Rising seniors Deeksha Kumaresh, Anna Liu, and Charlotte Myers are honored for their academic achievements.


Three MIT rising seniors have been selected to receive a 2026 Barry Goldwater Scholarship, including Deeksha Kumaresh in the School of Engineering and Anna Liu and Charlotte Myers in the School of Science. An estimated 5,000 college sophomores and juniors from across the United States were nominated for the scholarships, of whom only 454 were selected.

The Goldwater Scholarships have been conferred since 1989 by the Barry Goldwater Scholarship and Excellence in Education Foundation. These scholarships have supported undergraduates who go on to become leading scientists, engineers, and mathematicians in their respective fields.

Deeksha Kumaresh, a third-year biological engineering major, is an undergraduate researcher at the Hammond Lab. The Hammond Research Group at the MIT Koch Institute for Integrative Cancer Research focuses on the self-assembly of polymeric nanomaterials, with a major emphasis on the use of electrostatics and other complementary interactions to generate multifunctional materials with highly controlled architecture.

“Hands down, the mentors I’ve encountered have been the most significant part of my MIT journey,” Kumaresh says. “I’m also extremely grateful to the Hammond Lab, which has provided a supportive environment where I can make mistakes, learn, and grow as a researcher. I treasure the spontaneous conversations with lab members (about science or life) and their willingness to treat me seriously as an independent researcher, even as an undergraduate.”

Kumaresh is mentored by Paula Hammond, dean of the School of Engineering, Institute Professor, and professor of chemical engineering. Kumaresh’s career goal is to pursue an MD/PhD. In the long term, she seeks to lead a bioengineering research lab to predict the efficacy and side effects of cancer therapies by developing systems-level computational and biological preclinical models.

“Receiving this scholarship has been incredibly meaningful, because it offered me the chance to reflect critically on my post-graduate goals and receive recognition for my journey toward them,” Kumaresh says. “Earning this scholarship has welcomed me into a tight-knit community where I’ve already found so much guidance. Everyone is genuinely curious about everyone else’s interests and is eager to lend a hand however they can.”

Anna Liu, a third-year chemistry major, is an undergraduate researcher in the Radosevich Group. The overarching objective of the group’s research is to develop new catalysts, strategies, and reagents for synthetic chemistry. By designing and synthesizing new molecular compounds with unknown structure and function, the group hopes to learn more about the general principles enabling new chemical transformations.

Liu is mentored by professor of chemistry Alexander Radosevich. She plans to pursue a PhD in organic or inorganic chemistry and eventually lead research developing sustainable synthetic transformations informed by fundamental mechanistic and reactivity studies, and teach at the university level.

“Going through the Goldwater application process gave me a deeper understanding of my research project and helped me reflect on my intrinsic motivations to pursue research. I’m excited to use what I’ve learned to keep growing as a researcher,” Liu says. “I am so grateful for the countless mentors, teachers, labmates, classmates, friends, and family in my life who have believed in me, fostered my passion for chemistry, and taught me so much. Receiving this scholarship is truly a testament to their outstanding support!"

Charlotte Myers, a third-year physics and astronomy major, conducts research at the Kavli Institute for Astrophysics and Space Research, where she applies machine learning to model galactic structure, and at the Center for Theoretical Physics, where she studies theoretical models of dark matter. Her research interests center on the physics of dark matter, which she approaches from multiple perspectives — from its distribution on galactic scales to particle-level models.

Myers is mentored by Lina Necib, an assistant professor in the Department of Physics. She plans to pursue a PhD in theoretical physics and conduct research in cosmology and astroparticle physics, with a focus on the fundamental physics of dark matter, and teach at the university level.

“I am very grateful to my research advisors, Professor Necib, Dr. Starkman, and Professor Slatyer, for their guidance and support in helping me develop as a researcher,” Myers says. “I find it deeply rewarding to engage with open questions in physics, and I am excited to continue pursuing this work in graduate school and beyond. Receiving this scholarship has given me both the resources and the confidence to continue on that path, even when progress is not always linear.”

The scholarship program honoring Senator Barry Goldwater was designed to identify, encourage, and financially support outstanding undergraduates interested in pursuing research careers in the sciences, engineering, and mathematics. The Goldwater Scholarship is the preeminent undergraduate award of its type in these fields.


MIT takes top team honors in 86th Putnam Math Competition

The undergraduate team topped the scoreboard for the sixth year in a row and also took the Elizabeth Lowell Putnam Prize again.


In an outstanding performance at the 86th William Lowell Putnam Mathematical Competition, MIT’s team once again took the top spot for the sixth consecutive year. MIT secured four of the five Putnam Fellows, who are the five highest-ranking students, and the Elizabeth Lowell Putnam Prize, which is given to a woman whose “performance in the competition is particularly meritorious.”

The members of the winning team, consisting of junior Cheng Jiang, senior Luke Robitaille, and first-year Chunji Wang, were all awarded as Putnam Fellows alongside senior Zixiang Zhou, each receiving a $2,500 award for their performance. Notably, Robitaille is a four-time Putnam Fellow, having received the award for each year of his studies. For a second consecutive year, sophomore Jessica Wan was awarded the Elizabeth Lowell Putnam Prize and received $1,000.

Wan was also among the top 25 scorers, alongside 16 others from MIT: Warren Bei, Reagan Choi, Pico Gilman, Henry Jiang, Zhicheng Jiang, Papon Lapate, Gyudong Lee, Derek Liu, Maximus Lu, Krishna Pothapragada, Pitchayut Saengrungkongka, Qiao Sun, Allen Wang, Kevin Wang, and Yichen Xiao.

A legacy of success

“I was delighted to see how well the MIT students did on the Putnam exam this year, which reflects their hard work, talent, and enthusiasm,” says Professor Henry Cohn, who led class 18.A34 (Mathematical Problem Solving) this year, also informally known as the Putnam seminar.

MIT’s continued success in the Putnam competition stems from a variety of sources. Some of this is built on things like the seminar, where students get together to sharpen their skills by diving deep into tough problems and discussing solutions.

Cohn, a former participant in the Putnam, comments on the joy of teaching the seminar and seeing students’ progress. “When you spend a semester watching students present solutions to difficult problems, you start to understand how they think,” says Cohn. “It’s exciting to see them apply their abilities to new, difficult problems."

Professor Bjorn Poonen, who also led the seminar in previous years (and is a four-time Putnam Fellow), describes it as an opportunity to hone a spectrum of skills in competition preparation. “Knowing how to explain things well is really important for doing well on the Putnam and for everything else, and for this it really helps to have experience communicating with others, which is what the problem-solving seminar is all about.”

A shared passion for problem-solving

The students who take the Putnam thrive on all aspects of the competition, from the social to the exam itself.

“It’s not a school day, and we still get to do math,” Jiang says, describing his excitement for the competition. Indeed, getting to “do math” extends beyond formally sitting for the exam, to breaks and opportunities for discussion that are interspersed throughout the day. The students take each opportunity to come together as seriously as they do the competition, and it is this collective passion for problem-solving that builds a strong sense of community and brings students back year after year.

“The competition brings together hundreds of students from across campus representing many majors, years of graduation, and degrees of math contest experience, but what brings everyone together is a shared love of solving problems,” Cohn says. “You can see this in the clusters of students who stay to discuss the problems long after the exam has ended. Mathematics can sometimes feel like a solitary pursuit, but at this level, collaboration is key.”

Community complements the passion these math enthusiasts share for problems and puzzles. “You get a kind of satisfaction similar to when you get unstuck while doing a crossword puzzle and everything falls into place,” Poonen says of his own experience solving Putnam problems.

Consistency in certainty

The competition is also an opportunity to see familiar faces. Robitaille recalls his experiences in high school math olympiads, and highlights the friendly atmosphere at the Putnam. “Throughout college, I have stayed close with people I met at competitions,” Robitaille says. “There’s the whole background of times spent together, not just on contest day.”

An event for both community and challenge, the consistency and certainty of competition day is what brought Robitaille and Zhou back year after year. “Each time, you have a set amount of time to sit in the room and work on the problems,” Robitaille says. “If you were the type of person for whom that would be a fun thing, like me, it’s nice to have an opportunity to do it again occasionally.”

“It’s more fun than the real world, where everything is complicated,” Zhou adds with a smile.

The full list of 2025 winners can be found on the Putnam website.


New chip can protect wireless biomedical devices from quantum attacks

Ultra-efficient chip design enables extremely strong cryptography algorithms to run on energy-constrained edge devices.


As quantum computers advance, they are expected to be able to break tried-and-true security schemes that currently keep most sensitive data secure from attackers. Scientists and policymakers are working to design and implement post-quantum cryptography to defend against these future attacks.

MIT researchers have developed an ultra-efficient microchip that can bring post-quantum cryptography techniques to wireless biomedical devices, like pacemakers and insulin pumps. Such wearable, ingestible, or implantable devices are usually too power-constrained to implement these computationally demanding security protocols.

Their tiny chip, which is about the size of a very fine needle tip, also includes built-in protections against physical hacking attempts that can bypass encryption to steal user data, such as a patient’s social security number or device credentials. Compared to prior designs, the new technology is more than an order of magnitude more energy-efficient.

In the long run, the new chip could enable next-generation wireless medical devices to maintain strong security even as quantum computing becomes more prevalent. In addition, it could be applied to many types of resource-constrained edge devices, like industrial sensors and smart inventory tags.

“Tiny edge devices are everywhere, and biomedical devices are often the most vulnerable attack targets because power constraints prevent them from having the most advanced levels of security. We’ve demonstrated a very practical hardware solution to secure the privacy of patients,” says Seoyoon Jang, an MIT electrical engineering and computer science (EECS) graduate student and lead author of a paper on the chip.

Jang is joined on the paper by Saurav Maji PhD ’23; visiting scholar Rashmi Agrawal; EECS graduate students Hyemin Stella Lee and Eunseok Lee; Giovanni Traverso, an associate professor of mechanical engineering at MIT, a gastroenterologist at Brigham and Women’s Hospital, and an associate member of the Broad Institute of MIT and Harvard; and senior author Anantha Chandrakasan, MIT provost and the Vannevar Bush Professor of Electrical Engineering and Computer Science. The research was recently presented at the IEEE Custom Integrated Circuits Conference.

Stronger security

A large percentage of wireless biomedical devices, like ingestible biosensors for health monitoring, currently lack strong protection due to the computational demands of existing security protocols, Jang says.

But the complexity of post-quantum cryptography (PQC) can increase power consumption by two or three orders of magnitude.

Implementing PQC is of paramount importance, since agencies like the National Institute of Standards and Technology (NIST) will soon begin phasing out traditional cryptography protocols in favor of stronger PQC algorithms. In addition, some industry leaders believe rapid advances in quantum hardware make PQC implementation even more urgent.

To bring these power-hungry PQC protocols to wireless biomedical devices, the MIT researchers designed a customized microchip, known as an application-specific integrated circuit (ASIC), that greatly reduces energy overhead while guaranteeing the highest level of security.

“PQC is very secure algorithmically, but making a device resilient against physical attacks usually requires additional countermeasures that pump up the energy consumption at least two or three times. We want our chip to be robust to both security threats in a very lightweight manner,” Jang says.

A multi-pronged approach

To accomplish these goals, the researchers incorporated several design features into the chip.

First, they implemented two different PQC schemes to enhance robustness and “future-proof” their device in case one scheme is later proven to be insecure. To boost energy efficiency, they applied techniques that enable the PQC algorithms to share as much of the chip’s computational resources as possible.

Second, the researchers designed a highly efficient, on-chip true random number generator. This device continually generates random numbers to use for secret keys, which is essential to implement PQC.

Their on-chip design improves energy efficiency and security over standard approaches that usually receive random numbers from an external chip.

Third, they implemented countermeasures that prevent a type of physical hacking attempt, called a power side-channel attack, but only on the most vulnerable parts of the PQC protocols.

In power side-channel attacks, hackers steal secret information by analyzing the power consumption of a device while it processes data. The MIT researchers added just enough redundancy to the PQC operations to ensure the chip is protected from these types of attacks.

Fourth, they designed an early fault-detection mechanism so the chip will abort operations early if it detects a voltage glitch.

Wireless biomedical devices often have erratic power supplies, so they are susceptible to glitches that can cause an entire security procedure to fail. The MIT approach saves energy by stopping the chip from running a doomed procedure to completion.

“At the end of the day, because of the techniques we utilized, we can apply these post-quantum cryptography primitives while adding nothing to the overhead, with the added benefit of robustness to side-channel attacks,” Jang says.

Their device achieved 20 to 60 times higher energy efficiency than all other PQC security techniques they compared it to, with a more compact area than many existing chips.

“As we transition into post-quantum approaches, providing strong security for even the most resource-limited devices is essential. This work shows that robust cryptographic protection for biomedical and edge devices can be achieved alongside energy efficiency and programmability,” says Chandrakasan.

In the future, the researchers want to apply these techniques to other vulnerable applications and energy-constrained devices.

This research was funded, in part, by the U.S. Advanced Research Projects Agency for Health.


MIT affiliates elected to the American Academy of Arts and Sciences for 2026

The prestigious honorary society elects four MIT faculty members and 13 additional MIT alumni among more than 250 new members.


Four MIT faculty members are among the roughly 250 leaders from academia, the arts, industry, public policy, and research elected to the American Academy of Arts and Sciences, the academy announced April 22. Thirteen additional MIT alumni were also honored.

One of the nation’s most prestigious honorary societies, the academy is also a leading center for independent policy research. Members contribute to academy publications, as well as studies of science and technology policy, energy and global security, social policy and American institutions, the humanities and culture, and education.

The MIT faculty members elected in 2026 are:

MIT alumni elected this year include Mark Aguiar PhD ’99 (Economics); Mark G. Allen SM ’86, PhD ’89 (Chemical Engineering); Magdalena Balazinska PhD ’06 (EECS); Keren Bergman SM ’91, PhD ’94 (EECS); Sara Cherry PhD ’00 (Biology); Cynthia J. Ebinger SM ’86, PhD ’88 (EAPS); Charles L. Epstein ’78 (Mathematics); Shanhui Fan PhD ’97 (Physics); Atif Mian ’96, PhD ’01 (Mathematics with Computer Science and Economics); Sarah E. O'Connor PhD ’01 (Chemistry); Darryll J. Pines SM ’88, PhD ’92 (Mechanical Engineering); Phillip (Terry) Ragon ’72 (Physics); and Mansour Shayegan ’79, EE ’81, SM ’81, PhD ’83 (Electrical Engineering).

“We celebrate the achievement of each new member and the collective breadth and depth of their excellence – this is a fitting commemoration of the nation’s 250th anniversary,” said Academy President Laurie Patton.

Since its founding in 1780, the academy has elected leading thinkers from each generation, including George Washington and Benjamin Franklin in the 18th century, Maria Mitchell and Daniel Webster in the 19th century, and Toni Morrison and Albert Einstein in the 20th century. The current membership includes more than 250 Nobel and Pulitzer Prize winners.


Teaching AI models to say “I’m not sure”

A new training method improves the reliability of AI confidence estimates without sacrificing performance, addressing a root cause of hallucination in reasoning models.


Confidence is persuasive. In artificial intelligence systems, it is often misleading.

Today's most capable reasoning models share a trait with the loudest voice in the room: They deliver every answer with the same unshakable certainty, whether they're right or guessing. Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have now traced that overconfidence to a specific flaw in how these models are trained, and developed a method that fixes it without giving up any accuracy.

The technique, called RLCR (Reinforcement Learning with Calibration Rewards), trains language models to produce calibrated confidence estimates alongside their answers. In addition to coming up with an answer, the model thinks about its uncertainty in that answer, and outputs a confidence score. In experiments across multiple benchmarks, RLCR reduced calibration error by up to 90 percent while maintaining or improving accuracy, both on the tasks the model was trained on and on entirely new ones it had never seen. The work will be presented at the International Conference on Learning Representations later this month.

The problem traces to a surprisingly simple source. The reinforcement learning (RL) methods behind recent breakthroughs in AI reasoning, including the training approach used in systems like OpenAI's o1, reward models for getting the right answer, and penalize them for getting it wrong. Nothing in between. A model that arrives at the correct answer through careful reasoning receives the same reward as one that guesses correctly by chance. Over time, this trains models to confidently answer every question they are asked, whether they have strong evidence or are effectively flipping a coin.

That overconfidence has consequences. When models are deployed in medicine, law, finance, or any setting where users make decisions based on AI outputs, a system that expresses high confidence regardless of its actual certainty becomes unreliable in ways that are difficult to detect from the outside. A model that says "I'm 95 percent sure" when it is right only half the time is more dangerous than one that simply gets the answer wrong, because users have no signal to seek a second opinion.

"The standard training approach is simple and powerful, but it gives the model no incentive to express uncertainty or say I don’t know," says Mehul Damani, an MIT PhD student and co-lead author on the paper. "So the model naturally learns to guess when it is unsure." 

RLCR addresses this by adding a single term to the reward function: a Brier score, a well-established measure that penalizes the gap between a model's stated confidence and its actual accuracy. During training, models learn to reason about both the problem and their own uncertainty, producing an answer and a confidence estimate together. Confidently wrong answers are penalized. So are unnecessarily uncertain correct ones.
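
As a rough illustration of how a calibration-aware reward of this kind can be computed, the sketch below (in Python) combines a correctness term with a Brier-score penalty. The function name, the equal weighting of the two terms, and the example values are illustrative assumptions, not the paper's exact formulation.

    def rlcr_style_reward(is_correct: bool, stated_confidence: float) -> float:
        """Illustrative reward: correctness minus a Brier-score penalty.

        is_correct: whether the model's answer matched the reference answer.
        stated_confidence: the model's self-reported probability (0.0 to 1.0)
        that its answer is correct.
        """
        correctness = 1.0 if is_correct else 0.0
        # Brier score: squared gap between stated confidence and the actual outcome.
        brier_penalty = (stated_confidence - correctness) ** 2
        # Reward correct answers, and penalize miscalibrated confidence.
        return correctness - brier_penalty

    # A confidently wrong answer is penalized heavily ...
    print(rlcr_style_reward(False, 0.95))  # -0.9025
    # ... and an unnecessarily unsure correct answer also loses reward.
    print(rlcr_style_reward(True, 0.55))   # 0.7975

Under a reward of this shape, the model's best strategy is to report a confidence that matches how often it is actually right.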

The math backs it up: the team proved formally that this type of reward structure guarantees models that are both accurate and well-calibrated. They then tested the approach on a 7-billion-parameter model across a range of question-answering and math benchmarks, including six datasets the model had never been trained on.

The results showed a consistent pattern. Standard RL training actively degraded calibration compared to the base model, making models worse at estimating their own uncertainty. RLCR reversed that effect, substantially improving calibration with no loss in accuracy. The method also outperformed post-hoc approaches, in which a separate classifier is trained to assign confidence scores after the fact. "What’s striking is that ordinary RL training doesn't just fail to help calibration. It actively hurts it," says Isha Puri, an MIT PhD student and co-lead author. "The models become more capable and more overconfident at the same time."

The team also demonstrated that the confidence estimates produced by RLCR are practically useful at inference time. When models generate multiple candidate answers, selecting the one with the highest self-reported confidence, or weighting votes by confidence in a majority-voting scheme, improves both accuracy and calibration as compute scales.
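
A minimal sketch of the confidence-weighted voting idea described above, assuming each sampled answer comes with a self-reported confidence; the function and example values are illustrative, not code from the paper.

    from collections import defaultdict

    def confidence_weighted_vote(candidates):
        """Pick an answer by summing the self-reported confidence of each
        distinct answer across multiple sampled generations (illustrative)."""
        totals = defaultdict(float)
        for answer, confidence in candidates:
            totals[answer] += confidence
        # Return the answer with the largest total confidence weight.
        return max(totals, key=totals.get)

    # Four sampled answers to the same question, with stated confidences.
    samples = [("42", 0.9), ("41", 0.6), ("42", 0.7), ("40", 0.3)]
    print(confidence_weighted_vote(samples))  # prints 42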

An additional finding suggests that the act of reasoning about uncertainty itself has value. The researchers trained classifiers on model outputs and found that including the model's explicit uncertainty reasoning in the input improved the classifier's performance, particularly for smaller models. The model's self-reflective reasoning about what it does and doesn’t know contains real information, not just decoration.

In addition to Damani and Puri, other authors on the paper are Stewart Slocum, Idan Shenfeld, Leshem Choshen, and senior authors Jacob Andreas and Yoon Kim.


Plants can sense the sound of rain, a new study finds

Experiments by MIT engineers show rice seeds sprout faster to the sound of rain.


The next time you find yourself lulled by the patter of rain outside your window, think how that same sprinkle might sound if you were a tiny seed planted directly below a free-falling droplet. Would you still be similarly soothed?

In fact, MIT engineers have found the opposite to be the case: Some seeds may come alive to the sound of rain. In experiments with rice seeds, the team found that the sound of falling droplets effectively shook the seeds out of a dormant state, stimulating them to germinate at a faster rate compared with seeds that were not exposed to the same sound vibrations.

The team’s findings, which are published today in the journal Scientific Reports, are the first direct evidence that plant seeds and seedlings can sense sounds in nature. Their experiments involved rice seeds that they submerged in shallow water. Rice can germinate in both soil and shallow water. The researchers suspect that many similar seed types may also respond to the sound of rain.

The team worked out a hypothesis to explain how the seeds might be doing this. They found that when a raindrop hits the surface of a puddle or the ground, it generates a sound wave that makes the surroundings vibrate, including any shallowly submerged seeds. These vibrations can be strong enough to dislodge a seed’s “statoliths,” which are tiny gravity-sensing organelles within certain cells of a seed. When these statoliths are jostled, their movement is a signal for seeds and seedlings to grow and sprout.

“What this study is saying is that seeds can sense sound in ways that can help them survive,” says study author Nicholas Makris, a professor of mechanical engineering at MIT. “The energy of the rain sound is enough to accelerate a seed’s growth.”

Makris and his co-author, Cadine Navarro, a former graduate student in MIT’s Department of Urban Studies and Planning, suspect that the sound of rain is similar to the vibrations generated by other natural phenomena such as wind. They plan to follow up this work to investigate other natural vibrations and sounds plants may perceive.

Sound vibration

Plants are surprisingly perceptive. To help them survive, plants have evolved to sense and respond to stimuli in their surroundings. Some plants snap shut when touched, while others curl inward when exposed to toxic smells. And of course, most plants respond to light, reaching toward the sun to help them grow.

Plants can also sense gravity. A plant’s roots grow down, while its shoots push up against gravity’s pull. One way that plants sense and respond to gravity is through their statoliths. Statoliths are denser than a cell’s cytoplasm and can drift and sink through the cell, like a bit of sand in a jar of water. When a statolith finally settles to the bottom, its resting place on the cell’s membrane is a reflection of gravity’s direction and a signal for where a seed’s root or shoot should grow. If the statolith is dislodged, scientists have found that this can also trigger the seed to grow more.

Makris, whose work focuses on acoustics across a range of disciplines, became curious when Navarro asked him questions about seeds and sound. They wondered: Could sound be enough to jostle the statoliths and stimulate a seed to grow? And if so, what sounds in nature could be strong enough to have such an effect?

“I went back to look at work done by colleagues in the 1980s, who measured the sound of rain underwater. If you check, you’ll see it’s much greater than in the air,” Makris says. “It has to do with the fact that water is denser than air, so the same drop makes larger pressure waves underwater. So if you’re a seed that’s within a few centimeters of a raindrop’s impact, the kind of sound pressures that you would experience in water or in the ground are equivalent to what you’d be subject to within a few meters of a jet engine in the air.”

Such rain-induced soundwaves, Makris and Navarro suspected, might be enough to jostle statoliths and subsequently stimulate a seed’s growth.

Connecting a droplet’s dots

To test this idea, the researchers carried out experiments with rice seeds, which naturally grow in shallow watery fields. Over a large number of repeated experiments, the team submerged roughly 8,000 individual seeds of rice in shallow tubs of water and exposed sections of them to dripping water. The seeds were placed sufficiently far away from the falling droplets that only sound waves would reach them. The team varied the size and height of each water droplet to mimic raindrops during light, moderate, and heavy rainstorms.

[Audio: The sound of rain, recorded by MIT researchers underwater in a rain puddle in Massachusetts during a moderate-to-heavy rainstorm. Credit: Courtesy of the researchers]

They also used a hydrophone to measure the acoustic vibrations created underwater by the water droplets. They compared these measurements to recordings they took in the field, such as in puddles, ponds, wetlands, and soils during rainstorms. The comparisons confirmed that their water droplets in the lab were generating rain-induced acoustic vibrations as in nature.

As they observed the rice seeds, the researchers found that the groups of seeds that were exposed to the sound of water were able to germinate 30 to 40 percent faster than the seed groups that were not exposed to rain sounds but were otherwise in identical conditions. They also found that seeds that were closer to the surface could better sense the droplets’ sounds and grow faster, compared to more submerged or more distant seeds.

These experiments showed that there is a connection between the sound of a water droplet and a seed’s ability to grow. The researchers propose that there may be a biological advantage to seeds that can sense rain: If they are close enough to the surface to respond to the sound of rain, they are likely at an optimal depth to soak up moisture and safely grow to the surface.

The team then worked out calculations to see whether the physical vibrations of the droplets would be enough to jostle the seeds’ microscopic statoliths. If so, this would point to the mechanism by which sound can directly stimulate a plant’s growth.

In their calculations, the researchers factored in a rain droplet’s size and terminal velocity (the constant speed that a falling object eventually reaches), and worked out the amplitude of sound vibration the droplet would generate. From this, they determined to what degree these vibrations in water or soil would displace, or shake, a submerged or buried seed, and how a shaking seed would affect the microscopic statoliths within its individual cells.

Makris and Navarro found that the experiments they performed on rice seeds were consistent with their calculations: The sound of rain can indeed dislodge and jostle a seed’s statoliths. This mechanism is likely at the root of a plant’s ability to “sense” the sound of rain and grow in response.

“Brilliant research has been done around the world to reveal the mechanisms behind the ability of plants to sense gravity,” Makris notes. “Our study has shown that these same mechanisms seem to be providing plant seeds a means of perceiving submergence depths in the soil or water that are beneficial to their survival by sensing the sound of rain. It gives new meaning to the fourth Japanese microseason, entitled ‘Falling rain awakens the soil.’”

This work was supported, in part, by the MIT Bose Fellowship and the MIT Koch Chair.


New study bridges the worlds of classical and quantum physics

The weird quantum behavior of subatomic particles can be understood through everyday classical ideas, MIT researchers show.


When you throw a ball in the air, the equations of classical physics will tell you exactly what path the ball will take as it falls, and when and where it will land. But if you were to squeeze that same ball down to the size of an atom or smaller, it would behave in ways beyond anything that classical physics can predict.

Or so we’ve thought.

MIT scientists have now shown that certain mathematical ideas from everyday classical physics can be used to describe the often weird and nonintuitive behavior that occurs at the quantum, subatomic scale.

In a paper appearing today in the journal Proceedings of the Royal Society, the team shows that the motion of a quantum object can be calculated by applying an idea from classical physics known as “least action.” With their new formulation, they show they can arrive at exactly the same solution as the Schrödinger equation — the main description of quantum mechanics — for a number of textbook quantum-mechanical scenarios, including the double-slit experiment and quantum tunneling.

Such mysterious phenomena, which previously could be understood only through the equations of quantum mechanics, can now also be described using the team’s new classical formulation. In essence, the researchers have built an exact mathematical bridge between the classical, everyday physical world and the world at dimensions smaller than an atom.

“Before, there was a very tenuous bridge that worked only for reasonably large [quantum] particles,” says study co-author Winfried Lohmiller, a research associate in the Nonlinear Systems Laboratory at MIT. “Now we have a strong bridge — a common way to describe quantum mechanics, classical mechanics, and relativity, that holds at all scales.”

“We’re not saying there’s anything wrong with quantum mechanics,” emphasizes co-author Jean-Jacques Slotine, an MIT professor of mechanical engineering and information sciences, and of brain and cognitive sciences. “We’re just showing a different way to compute quantum mechanics, which is based on well-known classical ideas that we put together in a simple way.”

To infinity and far below

Slotine and Lohmiller derived the quantum bridge while working on solidly classical problems. The researchers are members of the MIT Nonlinear Systems Laboratory, which Slotine directs. He and his colleagues develop models to describe complex behavior in problems of robotic and aircraft control, neuroscience, and machine learning. To predict the behavior of such systems, engineers often look to the Hamilton-Jacobi equation, which is one of the major formulations of classical mechanics and is related to Newton’s famous laws of motion.

The Hamilton-Jacobi equation essentially represents an object’s motion as minimizing a quantity called the action. Take, for instance, a simple scenario in which a ball is thrown from point A to point B. Theoretically, the ball could take any number of zigzagging paths between the two points. But the equation states that the actual path should be one where the ball’s “action” is minimized at every single point along that path.

In this case, the term “action” refers to the sum over time of the difference between an object’s kinetic energy (the energy of its motion) and its potential energy (its stored energy). The actual path that a ball takes between points A and B should then be a sequence of positions where the overall difference between kinetic and potential energy is minimized.
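
In standard textbook notation, the action described here and the Hamilton-Jacobi equation built on it can be written as follows (this is the conventional classical statement, not the team's extended formulation):

    S = \int_{t_A}^{t_B} \bigl( T - V \bigr)\, dt,
    \qquad
    \frac{\partial S}{\partial t} + H\!\left(q, \frac{\partial S}{\partial q}\right) = 0,

where T is the kinetic energy, V the potential energy, q the position, and H the Hamiltonian (the total energy written in terms of position and the momentum ∂S/∂q). The path actually taken is the one that makes the action S stationary.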

Slotine and Lohmiller were applying the Hamilton-Jacobi equation, and the principle of least action, to a number of classical mechanics problems with constraints when they realized that the equation, with some mathematical extensions, could solve a famous problem in quantum mechanics known as the double-slit experiment.

The double-slit experiment illustrates one of the weird, nonclassical behaviors that arises at quantum scales. In the experiment, two slits are cut out of a metal wall. When a single photon — a quantum-scale particle of light — is shot toward the wall, classical physics predicts that you should see a spot of light on the other side of the wall, assuming that the photon flew straight through either one of the holes, following a single path.

But experimentalists have instead observed alternating bright and dark stripes. The reality-bending pattern is a result of a quantum mechanical phenomenon by which a photon takes more than one path simultaneously. In this context, when a single photon is shot toward the wall, it can pass through both holes at the same time, along two paths that end up interfering with each other. The pattern of stripes that results means that the photon’s two interfering paths must be wave-like. The experiment therefore demonstrates how a quantum particle can also behave, however improbably, like a wave.

Since the discovery of quantum mechanics, physicists have tried to explain the double-slit experiment using tools from classical, everyday physics. But they’ve only ever been able to approximate the experiment’s results.

Even the noted physicist Richard Feynman ’39 found the task impossible. He assumed that one would have to consider and average over every single theoretical path that a photon could take, whether it be a straight line or any variation of a zigzagging path through either of the two holes. Such an exercise would require calculating an infinite number of possible zigzag paths, which all contradict the classical smooth paths one would expect. 

This last point is what Slotine and Lohmiller realized could be tweaked. Where classical physics assumes that an object takes only a single path from point A to point B, quantum mechanics allows an object to take multiple paths and occupy multiple states simultaneously — a fundamental quantum property known as superposition.

The team wondered: What if classical physics could also entertain, at least mathematically, this notion of multiple paths? Then, they reasoned that an infinite number of paths wouldn’t have to be calculated. Instead, a much smaller number of “least action” classical paths might produce the exact same quantum result.

With this idea in mind, they looked back to the Hamilton-Jacobi equation to see how they might adapt its principles of least action to predict the double-slit experiment and other quantum phenomena.

“For a while we thought it was a little too good to be true,” Slotine says.

A particle’s destiny is in its density

In their new study, the team adds another ingredient of classical physics: “density,” which is, essentially, a probability that a given path is taken.

“We think of density in terms of fluid dynamics,” Lohmiller explains. “For the double-slit experiment, imagine pumping a hose toward the wall. What will happen is, most of the water will hit the center, but some droplets will also go toward the sides. A high density of water at the center means there is a high probability of finding a droplet along that path. And there will be a distribution, which we can compute.”

He and Slotine tweaked the Hamilton-Jacobi equation to include terms of density and multiple least action paths, and applied it to the double-slit experiment. They found that with this formulation, they only had to consider two classical paths through the two slits, as compared to Feynman’s infinity of zigzag paths. Ultimately, their calculations of classical density and action produced a wave function, or distribution of most probable paths that a photon could take, that was exactly the same as what was predicted by the Schrödinger equation, which is the central equation used to describe quantum-mechanical behavior.

“We show that the Schrödinger equation of quantum mechanics and the Hamilton-Jacobi equation of classical physics are actually identical given a suitable computation of density,” Slotine says. “That’s a purely mathematical result. We’re not saying that quantum phenomena happen at classical scales. We’re saying you can compute this quantum behavior with very simple classical tools.”
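
One standard way to express such a correspondence, which may differ in detail from the team's own construction, is to write the quantum wave function in terms of a classical probability density ρ and an action S:

    \psi = \sqrt{\rho}\; e^{\, i S / \hbar}

Substituting this form into the Schrödinger equation yields a continuity equation for ρ and a Hamilton-Jacobi-like equation for S with an extra density-dependent term, which is the sense in which the two descriptions can be made to match.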

In addition to the double-slit experiment, the researchers showed the reworked equation can also predict other quantum mechanical behavior, such as quantum tunneling, in which particles such as electrons pass through energy barriers in a way that classical physics says should be impossible. They could also derive the exact quantum wave of the electron in a hydrogen atom from the classical orbit of a planet. Finally, they revisited from this perspective the famous Einstein-Podolsky-Rosen experiment, which started the modern study of quantum entanglement.

The researchers envision that scientists could use the new formula as a simple method to predict how certain quantum systems and devices will perform.

“There could be important implications for quantum computing, where quantum bits have these nonlinear energies that physicists must approximate, or for better understanding problems involving both quantum physics and general relativity,” Slotine offers. “In principle at least, we should now be able to characterize this quantum behavior exactly, with simple classical tools, and show that it’s not so mysterious after all.”


Two MIT alumnae named 2026 Gates Cambridge Scholars

Mitali Chowdhury ’24 and Christina Kim ’24 will pursue graduate studies at Cambridge University in the U.K.


Mitali Chowdhury ’24 and Christina Kim ’24 have been selected as 2026 Gates Cambridge Scholars. The highly competitive fellowship offers fully funded opportunities for postgraduate study in any field at Cambridge University in the U.K. Kim is a second-time Gates Cambridge Scholar.

MIT students interested in the Gates Cambridge Scholar program should contact Kim Benard, associate dean of distinguished fellowships in Career Advising and Professional Development.

Mitali Chowdhury

Chowdhury graduated from MIT with a BS in biological engineering and minors in both urban planning and environment and sustainability. She has had a longstanding interest in reducing inequities in global health. At MIT, she pursued research in point-of-care diagnostics to identify and treat disease with accessible biotechnologies. She also helped develop low-cost testing for bacterial contamination in water in South Asia.

Chowdhury currently works at a startup advancing sequencing-based diagnostics. At Cambridge University, she will study for MPhil and PhD degrees in the Centre for Doctoral Training in Sensor Technologies. Her research will focus on CRISPR-based diagnostics to address antimicrobial resistance and expand equitable access to care.

Christina Kim

After graduating from MIT with a bachelor’s degree in chemistry and biology, Kim worked as a researcher in women’s health at the Wellcome Sanger Institute in Cambridge, U.K. 

As a 2025 Gates Cambridge Scholar, Kim pursued an MPhil in research at the institute, focusing on using bioinformatics and tissue engineering to design novel in vitro models. Her second Gates Cambridge scholarship will fund her PhD studies.


How to expand the US economy

In “Priority Technologies,” MIT faculty examine key areas of innovation that can drive American prosperity and security — now and in the decades ahead.


It’s an essential insight about our world: Innovation drives economic growth. For the U.S. to thrive, it must keep innovating. But how, and in what areas?

A new book co-authored by MIT faculty members focuses on six key areas where technology advances can drive the economy and support national security.

Those sectors — semiconductors, biotechnology, critical minerals, drones, quantum computing, and advanced manufacturing — are all built on U.S. know-how but are also areas where the country has either yielded a lead in production or innovation, or could yet fall behind.

As the book explains, a roadmap for U.S. prosperity and security involves sustaining notable areas of innovation and the national research ecosystem behind them, while rebuilding domestic manufacturing.

“In each of these areas, there are breakthroughs to be had, where the U.S. can leapfrog competitors and gain an advantage,” says Elisabeth Reynolds, an MIT expert on industrial innovation and editor of the new volume. “That’s a very exciting part of this.” She adds: “These areas are front and center for U.S. national economic and security policy.”

The book, “Priority Technologies: Ensuring U.S. Security and Shared Prosperity,” is published this week by the MIT Press. It features chapters by MIT faculty with expertise on the industrial sectors in question. Reynolds, a professor of the practice in MIT’s Department of Urban Studies and Planning, is a leading expert on industrial innovation and has long advocated for innovation-based growth that helps the U.S. workforce.

“All of this can be good for everyone,” says MIT economist Simon Johnson, who wrote the foreword to the book. “Out of that flow of innovations and ideas, we can create more good jobs for all Americans. Pushing the technological frontier and turning that into jobs is definitely going to help.”

Making more chips

“Priority Technologies” grew out of an ongoing MIT seminar by the same name, which Reynolds and Johnson began holding in 2023, often with appearances by other MIT faculty.

Both Reynolds and Johnson bring vast experience to the subject of innovation and production. Among other things, Reynolds headed MIT’s Industrial Performance Center for over a decade and was executive director of the MIT Task Force on the Work of the Future. She served in the White House National Economic Council as special assistant to the president for manufacturing and development.

Johnson, the Ronald A. Kurtz (1954) Professor of Entrepreneurship at the MIT Sloan School of Management, shared the 2024 Nobel Prize in economics, with MIT’s Daron Acemoglu and the University of Chicago’s James Robinson, for work about the historical relationship between institutions and economic growth. He has co-authored numerous books, including, with Acemoglu, the 2023 book “Power and Progress,” about the trajectory and implications of artificial intelligence.

As it happens, “Priority Technologies” does not focus on AI, instead opting to examine other vital, and often related, areas of innovation.

“We do not think this is the entire list of priority technologies,” Johnson says. “This is a partial list, and there are lots of other ideas.”

In the chapter on semiconductors, Jesús A. del Alamo, the Donner Professor of Science in MIT’s Department of Electrical Engineering and Computer Science, calls them “the oxygen of modern society.” This U.S.-born industry has seen a large manufacturing shift away from the country, however, leaving it vulnerable in terms of security and the economy; about one-third of inflation experienced in 2021 stemmed from a chip shortage. As he notes, the U.S. is now in the process of rebuilding its capacity to make leading-edge logic chips, for one thing.

“With semiconductors, people thought the U.S. could lose the manufacturing, stay on top of the innovation and design side, and would be fine,” Reynolds says. “But it’s turned out to make the country quite vulnerable. So we’ve had a massive shift to rebuild semiconductor manufacturing capabilities here in the U.S., and I would argue that’s been a successful strategy in recent years.”

Bringing biotech back home

In biotechnology, relocating manufacturing in the U.S. is also key, using new technologies in the process. As J. Christopher Love, the Laurent Professor of Chemical Engineering, puts it in his chapter, while the U.S. is the leader in biotech research, it “lacks the manufacturing infrastructure and expertise necessary to bring these ideas to the market at the same pace as it generates innovative new products.” Among other remedies, he suggests that smaller, more flexible production facilities can help the U.S. “leapfrog” other countries on the manufacturing side. Love is also co-director of MIT’s Initiative for New Manufacturing, which aims to drive advances in U.S. production across industries.

“We have tremendous biotech innovation, we’re the leaders, but we have a bottleneck when it comes to manufacturing,” Reynolds observes. “If we can break through that with new technologies, new production processes, we’re in a position to make us less vulnerable, from a supply chain point of view, and capture more of what is going to be a $4 trillion market over the next 15 years.”

A similar story holds in other areas. Many drone innovations were developed in the U.S., while much manufacturing has shifted to China. Fiona Murray, the William Porter (1967) Professor of Entrepreneurship, writes that the U.S. has an “opportunity to rebuild its production at scale,” although that will also require significant strengthening of its supply chains.

Elsa Olivetti, the Jerry McAfee (1940) Professor of Engineering and a professor of materials science and engineering, recommends a multifaceted approach to help the U.S. regain traction in the production of critical minerals, including better forms of extraction, manufacturing, and recycling, to reduce potential scarcities.

And in the quantum computing chapter, two MIT co-authors — William D. Oliver, the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science and a professor of physics; and Jonathan Ruane, a senior lecturer at MIT Sloan — note that the sector could help accelerate drug discovery, materials science, and energy applications. Noting that the U.S. still leads in private-sector investment in the field but trails China in public-sector investment, they urge more research support and stronger supply chains for quantum computing components, among other recommendations.

“The country that achieves quantum leadership will gain decisive advantages in these strategically important industries,” they write.

The university engine

From industry to industry, the book makes clear that certain key issues are broadly important to U.S. competitiveness and growth. The partnership between the federal government and the world-leading research capacities of U.S. universities, for one thing, has given the country an initial lead in many economic sectors and promises to continue driving innovation.

At the same time, the U.S. would benefit from expanding and strengthening its domestic supply chains as it builds up more domestic manufacturing, and it needs capital investment that supports hardware-intensive, physically substantial industrial growth.

“These common themes include supply chain resilience and manufacturing capability,” Reynolds says. “Can we help drive the country’s innovation ecosystem through expansion of our industrial system and manufacturing? That’s a big question.”

On the research front, she reflects, over the years, “It’s been amazing how much MIT-led research has aligned with national priorities — or maybe that’s not so surprising.”

The partnership between the U.S. federal government and universities as research engines was formalized in the 1940s, thanks in part to then-MIT president Vannevar Bush. According to some estimates, government investment in non-defense research and development alone has accounted for up to 25 percent of U.S. economic growth since World War II.

“Vannevar Bush realized it wasn’t about a stock of technology, it was about a flow of innovation,” Johnson says. “And that brilliant insight is still relevant today. I think that is the insight of the last century. And that’s what we’re trying to capture and reiterate and repeat.”

“This is not even the future. This is current.”

Scholars and industry leaders have praised “Priority Technologies.” Erica Fuchs, a professor of engineering and public policy at Carnegie Mellon University, has stated that when it comes to “ensuring American national security, economic competitiveness, and societal well-being,” the book underscores “the positive role technology can play in those outcomes.” Hemant Taneja, CEO of the venture capital firm General Catalyst, calls the volume “required reading for anyone interested in building the abundant, resilient future America deserves.”

For their part, Reynolds and Johnson hope the book will draw many kinds of readers interested in the economy, innovation, prosperity, and national security.

“We tried to make the volume accessible,” Reynolds says, noting that the book directly lays out “challenges for the country, and what we see as recommendations for next steps in how we position the country to succeed, and lead globally. Each of these chapters has something important to say.”

Johnson also notes the MIT scholars participating in the project want to enhance the ongoing policy conversation, in Washington and across the country, about supporting innovation and using it to drive U.S. economic and technological leadership.

“One reason to write a book is, you can’t pound the table with a podcast,” quips Johnson, who co-hosts a podcast, “Power and Consequences,” on major policy issues. In conversations with political leaders and their staffs, he adds, there is a core message to be transmitted about America and technology-driven growth: We have the knowledge and resources, but need to focus on supporting innovation while trying to increase domestic production.

“Here are the technologies we currently need,” Johnson says. “This is not imagination, this is not fanciful, this is not science fiction. This is not even the future. This is current. These are the technologies needed to defend the country and its interests. And we need to invest in these, and in everything we need to drive them forward.”


Managing traffic in space

Associate Professor Richard Linares is helping satellites safely navigate in increasingly congested orbits.


Chances are, you’ve already used a satellite today. Satellites make it possible for us to stream our favorite shows, call and text a friend, check weather and navigation apps, and make an online purchase. Satellites also monitor the Earth’s climate, the extent of agricultural crops, wildlife habitats, and impacts from natural disasters.

As we’ve found more uses for them, satellites have exploded in number. Today, there are more than 10,000 satellites operating in low-Earth orbit. Another 5,000 decommissioned satellites drift through this region, along with over 100 million pieces of debris comprising everything from spent rocket stages to flecks of spacecraft paint.

For MIT’s Richard Linares, the rapid ballooning of satellites raises pressing questions: How can we safely manage traffic and growing congestion in space? And at what point will we reach orbital capacity, where adding more satellites is not sustainable, and may in fact compromise spacecraft and the services that we rely on?

“It is a judgement that society has to make, of what value do we derive from launching more satellites,” says Linares, who recently received tenure as an associate professor in MIT’s Department of Aeronautics and Astronautics (AeroAstro). “One of the things we try to do is approach these questions of traffic management and orbital capacity as engineering problems.”

Linares leads the MIT Astrodynamics, Space Robotics, and Controls Lab (ARCLab), a research group that applies astrodynamics (the motion and trajectory of orbiting objects) to help track and manage the millions of objects in orbit around the Earth. The group also develops tools to predict how space traffic and debris will change as operators launch large satellite “mega-constellations” into space.

He is also exploring the effects of space weather on satellites, as well as how climate change on Earth may limit the number of satellites that can safely orbit in space. And, anticipating that satellites will have to be smarter and faster to navigate a more cluttered environment, Linares is looking into artificial intelligence to help satellites autonomously learn and reason to adapt to changing conditions and fix issues onboard.

“Our research is pretty diverse,” Linares says. “But overall, we want to enable all these economic opportunities that satellites give us. And we are figuring out engineering solutions to make that possible.”

Grounding practical problems

Linares was born and raised in Yonkers, New York. His parents both worked as school bus drivers to support their children, Linares being the youngest of six. He was an active kid and loved sports, playing football throughout high school.

“Sports was a way to stay focused and organized, and to develop a work ethic,” Linares says. “It taught me to work hard.”

When applying for colleges, rather than aim for Division I schools like some of his teammates, Linares looked for programs that were strong in science, specifically in aerospace. Growing up, he was fascinated with Carl Sagan’s “Cosmos” docuseries. And being close to Manhattan, he took regular trips to the Hayden Planetarium to take in the center’s immersive projections of space and the technologies used to explore it.

“My interest in science came from the universe and trying to understand our place within it,” Linares recalls.

Choosing to stay close to home, he applied to in-state schools with strong aeronautical engineering departments, and happily landed at the State University of New York at Buffalo (SUNY Buffalo), where he would ultimately earn his bachelor’s, master’s, and doctoral degrees, all in aerospace engineering.

As an undergraduate, Linares took on a research project in astrodynamics, looking to solve the problem of how to determine the relative orientation of satellites flying in formation.

“Formation flying was a big topic in the early 2000s,” Linares says. “I liked the flavor of the math involved, which allowed me to go a layer deeper toward a solution.”

He worked out the math to show that when three satellites fly together, they essentially form a triangle, the angles of which can be calculated to determine where each satellite is in relation to the other two at any moment in time. His work introduced a new controls approach to enable satellites to fly safely together. The research had direct applications for the U.S. Air Force, which helped to sponsor the work.
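
As a minimal geometric illustration of that idea (not the controls approach from the thesis), the interior angles of the triangle formed by three satellites can be computed directly from their position vectors:

    import numpy as np

    def formation_angles(p1, p2, p3):
        """Return the interior angles (in degrees) of the triangle formed by
        three satellite positions, each given as a 3D coordinate array."""
        positions = [np.asarray(p, dtype=float) for p in (p1, p2, p3)]
        angles = []
        for i in range(3):
            here = positions[i]
            u = positions[(i + 1) % 3] - here  # vector toward the next satellite
            v = positions[(i + 2) % 3] - here  # vector toward the other satellite
            cos_angle = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            angles.append(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))
        return angles

    # Three satellites at illustrative positions (in kilometers).
    print(formation_angles([0, 0, 0], [10, 0, 0], [0, 10, 0]))
    # Approximately [90.0, 45.0, 45.0]; the angles always sum to 180 degrees.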

As he expanded the research into a master’s thesis, Linares also took opportunities to work directly with the Air Force on issues of satellite tracking and orientation. He served two internships with the U.S. Air Force Research Lab, one at Kirtland Air Force Base in Albuquerque, New Mexico, and the other in Maui, Hawaii.

“Being able to collaborate with the Air Force back then kind of grounded the research in practical problems,” Linares says.

For his PhD, he turned to another practical problem: “uncorrelated tracks.” At the time, the Air Force operated a network of telescopes to observe more than 20,000 objects in space, which it was working to label and record in a catalog in order to track the objects over time. But while detecting objects was relatively straightforward, the challenge came in correlating a detected object with what was already in the catalog. In other words, was what they were seeing something they had already seen?

Linares developed image analysis techniques to identify key characteristics of objects such as their shape and orientation, which helped the Air Force “fingerprint” satellites and pieces of space debris, and track their activity — and potential for collisions — over time.

After completing his PhD, Linares worked as a postdoc at Los Alamos National Laboratory and the U.S. Naval Observatory. During that time he expanded his aerospace work to other areas including space weather, using satellite measurements to model how Earth’s ionosphere — the upper layer of the atmosphere that is ionized by the sun’s radiation — affects satellite drag.

He then accepted a position as assistant professor of aerospace engineering at the University of Minnesota at Minneapolis. For the next three years, he continued his research in modeling space weather, tracking space objects and coordinating satellites to fly in swarms.

Making space

In 2018, Linares made the move to MIT.

“I had a lot of respect for the people and for the history of the work that was done here,” says Linares, who was especially inspired by the legendary Charles Stark “Doc” Draper, who developed the first inertial guidance systems in the 1940s that would enable the self-navigation of airplanes, submarines, satellites, and spacecraft for decades to come. “This was essentially my field, and I knew MIT was the best place to continue my career.”

As a junior faculty member in AeroAstro, Linares spent his first years focused on an emerging challenge: space sustainability. Around that time, the first large satellite constellations were launching into low-Earth orbit, including SpaceX’s Starlink, which aimed to provide global internet coverage via a huge network of several thousand coordinating satellites. Launching so many satellites into orbits that already held other active and inactive satellites, along with millions of pieces of space debris, raised questions about how to safely manage satellite traffic and how much traffic an orbit can sustain.

“At what level do we reach a tipping point, where we have too many satellites in certain orbital regimes?” Linares says. “It was kind of a known problem at the time, but there weren’t many solutions.”

Linares’ group applied an understanding of astrodynamics, and the physics of how objects move in space, to figure out the best way to pack satellites into orbital “shells,” or lanes that would most likely prevent collisions. They also developed a state-of-the-art model of orbital traffic that was able to simulate the trajectories of more than 10 million individual objects in space. Previous models were much more limited in the number of objects they could accurately simulate. Linares’ open-source model, called the MIT Orbital Capacity Assessment Tool, or MOCAT, could account for the millions of pieces of space debris, in addition to the many intact satellites in orbit.

The tools that his group has developed are used today by satellite operators to plan and predict safe spacecraft trajectories. His team is continuing to work on problems of space traffic management and orbital capacity. They are also branching out into space robotics. The team is testing ways to teleoperate a humanoid robot, which could potentially help to build future infrastructure and carry out long-duration tasks in space.

Linares is also exploring artificial intelligence, including ways that a satellite can autonomously “learn” from its experience and safely adapt to uncertain environments.

“Imagine if each satellite had a virtual Doc Draper onboard that could do the de-bugging that we did from the ground during the Apollo missions,” Linares says. “That way, satellites would become instantaneously more robust. And it’s not taking the human out of the equation. It’s allowing the human to be amplified. I think that’s within reach.”


Why bother with plausible deniability?

Philosopher Sam Berstler explains why we have social norms that let people engage in open deception.


Picture this scenario in a business: An employee, Brad, disclosed some information that wound up in the hands of a competitor. He may not have meant to, but he did, and a few people at the firm know this. So, at the next company meeting, another employee, Linda, looks pointedly at Brad and says, “I know that no one would ever dream of leaking information, intentionally or otherwise, from our discussions.”

Linda means the opposite of what she says, of course. She is letting people know that Brad is to blame. However, while Linda is making her message public, she also wants what we often call “plausible deniability” for her statement. If anyone asks later if she was insinuating anything about Brad, she can claim she was just making a general comment about the firm.

From the boardroom to the courtroom, the talk show, and beyond, people frequently seek plausible deniability for their statements. It seems to work, too. Indeed, to have plausible deniability, the denial need not be plausible.

“People can say, ‘That’s not what I meant,’ and completely get away with it, even though it’s totally obvious they’re lying,” says MIT philosopher Sam Berstler. “They wouldn’t be getting away with it in the same respect by putting the content in explicit words.”

She adds: “This should be very puzzling to us, because in both cases the intent is maximally obvious.”

So why does plausible deniability work, and work like this? And what does it tell us about how we interact? Berstler, who studies language and communication, has published a new paper on plausible deniability, examining these issues. It is part of a larger body of work Berstler is generating, focused on everyday interactions involving deception.

To understand plausible deniability, Berstler thinks we should recognize that our conversations cannot be understood simply by analyzing the words we use. Our interactions always take place in social contexts, often have a performative aspect, and occasionally intersect with “non-acknowledgement norms,” the practice of keeping quiet about what we all know. Plausible deniability is bound up with social practices that incentivize us not to be fully transparent.

“A lot of indirect speech is designed, as it were, to facilitate this kind of deniability,” Berstler says.

The paper, “Non-Epistemic Deniability,” is published in the journal MIND. Berstler, the Laurance S. Rockefeller Career Development Chair and assistant professor of philosophy at MIT, is the sole author.

Managing a personal “Cold War”

In Berstler’s view, there are multiple ways to create plausible deniability. One is through the practice of open secrets, the subject of one of her previous papers. An open secret is widely known information that is never acknowledged, for reasons of power or in-group identification, among other things. Indeed, no one even acknowledges that they are not acknowledging the open secret.

Examining open secrets led Berstler directly to her analysis of plausible deniability. However, the new paper focuses more on another way of creating plausible deniability, which she calls “two-tracking norms.” Two-tracking is when a group divides its communications into two parts: One track consists of official, limited, courteous interaction, and the second track consists more of informal, resentful, uncooperative interactions. Linda, in our example, is engaging in two-tracking.

But why do we two-track at all? Why not just be fully transparent? Well, in an office scenario, if Linda is mad that Brad divulged some company secrets, calling out Brad directly might lead to recriminations and conflict beyond what Linda is willing to tolerate for the sake of criticizing Brad on the record.

“It's like a Cold War situation where we each have an interest in not letting the conflict go to a state where we’re firing warheads at each other, but we can’t just purely manage relations around the negotiating table because we’re adversaries,” Berstler says. “We’re going to aggress against each other, but in a limited way. In a two-track conversation, communicating in the second track is like fighting a proxy battle, but we’re also providing evidence to each other that we’re only going to engage in a proxy battle.”

In this way, Linda takes Brad to task and some people pick up on it, but Brad is not explicitly publicly shamed. And though he might be unhappy, he is less likely to wreck all company norms in an attempt to retaliate. The firm more or less rolls on as usual.

Waiting for Goffman

Where Berstler differs in part from other philosophers is in her emphasis on the extent to which social practices are integral to our ways of deploying deniability. Our interactions are not just limited to rhetoric, but have additional layers.

“What we mean can often be different from what we say, or enhanced from what we say,” Berstler says. “Sometimes we figure out what others mean by relying on what they say in literal language. But sometimes we’re relying on other things, like the context.”

So, back at the firm, the colleagues of Linda and Brad might have some knowledge of a confidentiality breach, or they might know that Linda does not usually speak up at meetings, or they might read things into her tone of voice and the way she appeared to look at Brad. There is more to be gleaned than her literal words.

In this kind of analysis, Berstler finds illumination in the work of the midcentury sociologist Erving Goffman, who studied in minute detail the performative parts of our everyday interactions and speech. Goffman, as Berstler notes in the paper, proposed that we have a ritualized, social self (or “face”) and that normal, everyday behavior generally allows us, and others, to keep this face intact.

Relatedly, Goffman and some of his intellectual followers concluded that habits such as two-tracking are very common in everyday life; the price we pay for saving face is a bit less transparency, and a bit more secrecy and deniability.

“What I’m suggesting is we have these other established practices like two-tracking and open secrecy, where the deniability is just a byproduct,” Berstler says.

What’s the solution?

By bringing sociological ideas into her work, Berstler is moving beyond the normal philosophical discussion of the subject. On the other hand, she is not directly disputing core ideas in linguistics or the philosophy of language; she is just suggesting we add another layer to our analysis of communication and meaning.

Digging into issues of plausible deniability also raises the question of what to do about it. There may be something pernicious in the practice, but calling out plausible deniability threatens to dismantle our social guardrails and break the “Cold War” norms used to help people co-exist.

Berstler, though, has another suggestion: Instead of calling out such subterfuge, we can become verbally and performatively skilled enough to counteract it.

“I think the actual answer is becoming rhetorically clever,” Berstler says. “It’s being the person who uses indirect speech to respond strategically, without violating these norms. That is possible. It also means you have agency. You could become very good at verbal sparring.”

Besides, Berstler says, “Often that can be more powerful than just calling them out, and demonstrates your own verbal fluency. I think we admire it when we see it. Conversational skill is an important component of being morally good, in these cases by reprimanding someone in a way that’s not going to be counterproductive.”

She adds: “People who buy into the rhetoric of transparency can be setting back their own interests. Maybe speaking transparently is morally virtuous in some respects, but given the reality of our speech practices, transparency is not necessarily going to be the most effective way of handling things.”


Jacob Andreas and Brett McGuire named Edgerton Award winners

The associate professors of EECS and chemistry, respectively, are honored for exceptional contributions to teaching, research, and service at MIT.


MIT Associate Professor Jacob Andreas of the Department of Electrical Engineering and Computer Science (EECS) and MIT Associate Professor Brett McGuire of the Department of Chemistry have been selected as the winners of the 2026 Harold E. Edgerton Faculty Achievement Award. Established in 1982 as a permanent tribute to Institute Professor Emeritus Harold E. Edgerton’s great and enduring support for younger faculty members, this award is given annually in recognition of exceptional distinction in teaching, research, and service.

“The Department of Chemistry is extremely delighted to see Brett recognized for science that has changed how we think about carbon in space,” says Class of 1942 Professor of Chemistry and Department Head Matthew D. Shoulders. “Brett’s lab combines laboratory spectroscopy, radio astronomy, and sophisticated signal-analysis methods to pull definitive molecular fingerprints out of extraordinarily faint data. His discovery of polycyclic aromatic hydrocarbons in the cold interstellar medium has opened a powerful new window on astrochemistry. Moreover, Brett is inventing the creative and unique tools that make discoveries like this possible.”

“Jacob Andreas represents the very best of MIT EECS,” says Asu Ozdaglar, EECS department head. “He is an innovative researcher whose work combines computational and linguistically informed approaches to build foundations of language learning. He is an extraordinary educator who has brought these forefront ideas into our core classes in natural language processing and machine learning. His ability to bridge foundational theory with real-world impact, while also advancing the social and ethical dimensions of computing, makes him truly deserving of the Edgerton Faculty Achievement Award.”

Andreas joined the MIT faculty in July 2019, and is affiliated with the Computer Science and Artificial Intelligence Laboratory. His work is in natural language processing (NLP), and more broadly in AI. He aims to understand the computational foundations of language learning, and to build intelligent systems that can learn from human guidance. Among other honors, Andreas has received Samsung’s AI Researcher of the Year award, MIT’s Kolokotrones and Junior Bose teaching awards, a 2024 Sloan Research Fellowship, and paper awards at the International Conference on Machine Learning and the Association for Computational Linguistics.

Andreas received his BS from Columbia University, his MPhil from Cambridge University (where he studied as a Churchill scholar), and his PhD in natural language processing from the University of California at Berkeley. His work in natural language processing has taken on thorny problems in the capability gap between humans and computers. “The defining feature of human language use is our capacity for compositional generalization,” explains Antonio Torralba, Delta Electronics Professor and faculty head of Artificial Intelligence and Decision-Making in the Department of EECS. “Many of the core challenges in natural language processing are addressed by simply training larger and larger neural models, but this kind of compositional generalization remains a persistent difficulty, and without the ability to generalize compositionally, the deep learning toolkit will never be robust enough for the most challenging real-world NLP tasks. Jacob’s work on compositional modeling draws new connections between NLP and work in computer vision and physics aimed at modeling systems governed by symmetries and other algebraic structures, and, using these connections, his group has been able to build NLP models exhibiting a number of new, human-like language acquisition behaviors, including one-shot word learning, learning via mutual exclusivity constraints, and learning of grammatical rules in extremely low-resource settings.”

Within EECS, Andreas has developed multiple advanced courses in natural language processing, as well as new exercises designed to get students to grapple with important social and ethical considerations in machine learning deployment. “Jacob has taken a leading role in completely modernizing and extending our course offerings in natural language processing,” says award nominator Leslie Pack Kaelbling, Panasonic Professor in the Department of EECS. “He has led the development of a modern two-course sequence, which is a cornerstone of the new AI+D [artificial intelligence and decision-making] major, routinely enrolling several hundred students each semester. His command of the area is broad and deep, and his classes integrate classical structural understanding of language with the most modern learning-based approaches. He has put MIT EECS on the worldwide map as a place to study natural language at every level.”

Brett McGuire joined the MIT faculty in 2020 and was promoted to associate professor in 2025. His research operates at the intersection of physical chemistry, molecular spectroscopy, and observational astrophysics, where he seeks to uncover how the chemical building blocks of life evolve alongside and help shape the birth of stars and planets. A former Jansky Fellow and then Hubble Postdoctoral Fellow at the National Radio Astronomy Observatory, McGuire has a BS in chemistry from the University of Illinois and a PhD in physical chemistry from Caltech. His honors include a 2026 Sloan Fellowship, the Beckman Young Investigator Award, the Helen B. Warner Prize for Astronomy, and the MIT Award for Teaching with Digital Technology.

The faculty who nominated McGuire for this award praised his extraordinary public outreach, his immediate willingness to take on teaching class 5.111 (Principles of Chemical Science), a General Institute Requirement (GIR) course enrolling 150–500 students, and his service to both the MIT and astrochemical communities.

“Brett is at the very top of astrochemical scientists in his age group due to his discovery of fused carbon ring compounds in the cold region of the ISM [interstellar medium], an observation that provides a route for carbon incorporation in planets,” says Sylvia Ceyer, the John C. Sheehan Professor of Chemistry, in her nomination statement. “His extensive involvement in service-oriented activities within the astrochemical/physical community is highly unusual for a junior scientist, and is a testament to the value that the astronomical community places on his wisdom and judgment. His phenomenal organizational skills have made his contributions to graduate admission protocols and seminar administration at MIT the envy of the department. And most importantly, Brett is a superb teacher, who cares deeply about students’ understanding and success, not only in his course, but in their future endeavors.”

“As an assistant professor, Brett volunteered to teach 5.111, a large GIR course with 150–500 students, and has received some of the best teaching evaluations among all faculty who have led the subject,” says Mei Hong, the David A. Leighty Professor of Chemistry. “He has a natural talent in explaining abstract physical chemistry concepts in an engaging manner. His slides, which he prepared from scratch instead of modifying from previous years’ material from other professors, are clear, and … the combination of lucid explanation and humor has generated great enthusiasm and interest in chemistry among students.”

Subject evaluations from McGuire’s courses praised his humor, the clarity of his explanations, and his ability to transform a lecture into a “science show.” “I haven't felt this sort of desire for the depth of understanding in a subject beyond just a straight grade [in some time],” says one student. “Brett definitely stimulated that love of learning for me.” 

“Brett is an outstanding faculty member who is dedicated to fostering student learning and success,” says Jennifer Weisman, assistant director of academic programs in chemistry. “He is thoughtful, caring, and goes above and beyond to help his colleagues, students, and staff.”

“I’m thrilled to be selected for the Edgerton Award this year,” says McGuire. “The award is nominally for teaching, research, and service; MIT and the chemistry department in particular have been an incredible place to learn and grow in all these areas. I’m incredibly grateful for the mentorship, enthusiasm, and support I have received from my colleagues, from my students both in the lab and in the classroom, and from the MIT community during my time here. I look forward to many more years of exciting discovery together with this one-of-a-kind community.”


Bringing AI-driven protein-design tools to biologists everywhere

Founded by Tristan Bepler PhD ’20 and former MIT professor Tim Lu PhD ’07, OpenProtein.AI offers researchers open-source models and other tools for protein engineering.


Artificial intelligence is already proving it can accelerate drug development and improve our understanding of disease. But to turn AI into novel treatments we need to get the latest, most powerful models into the hands of scientists.

The problem is that most scientists aren’t machine-learning experts. Now the company OpenProtein.AI is helping scientists stay on the cutting edge of AI with a no-code platform that gives them access to powerful foundation models and a suite of tools for designing proteins, predicting protein structure and function, and training models.

The company, founded by Tristan Bepler PhD ’20 and former MIT associate professor Tim Lu PhD ’07, is already equipping researchers in pharmaceutical and biotech companies of all sizes with its tools, including internally developed foundation models for protein engineering. OpenProtein.AI also offers its platform to scientists in academia for free.

“It’s a really exciting time right now because these models can not only make protein engineering more efficient — which shortens development cycles for therapeutics and industrial uses — they can also enhance our ability to design new proteins with specific traits,” Bepler says. “We’re also thinking about applying these approaches to non-protein modalities. The big picture is we’re creating a language for describing biological systems.”

Advancing biology with AI

Bepler came to MIT in 2014 as part of the Computational and Systems Biology PhD Program, studying under Bonnie Berger, MIT’s Simons Professor of Applied Mathematics. It was there that he realized how little we understand about the molecules that make up the building blocks of biology.

“We hadn’t characterized biomolecules and proteins well enough to create good predictive models of what, say, a whole genome circuit will do, or how a protein interaction network will behave,” Bepler recalls. “It got me interested in understanding proteins at a more fine-grained level.”

Bepler began exploring ways to predict the chains of amino acids that make up proteins by analyzing evolutionary data. This was before Google DeepMind released AlphaFold, a powerful prediction model for protein structure. The work led to one of the first generative AI models for understanding and designing proteins — what the team calls a protein language model.

“I was really excited about the classical framework of proteins and the relationships between their sequence, structure, and function. We don’t understand those links well,” Bepler says. “So how could we use these foundation models to skip the ‘structure’ component and go straight from sequence to function?”

After earning his PhD in 2020, Bepler entered Lu’s lab in MIT’s Department of Biological Engineering as a postdoc.

“This was around the time when the idea of integrating AI with biology was starting to pick up,” Lu recalls. “Tristan helped us build better computational models for biologic design. We also realized there’s a disconnect between the most cutting-edge tools available and the biologists, who would love to use these things but don’t know how to code. OpenProtein came from the idea of broadening access to these tools.”

Bepler had worked at the forefront of AI as part of his PhD. He knew the technology could help scientists accelerate their work.

“We started with the idea to build a general-purpose platform for doing machine learning-in-the-loop protein engineering,” Bepler says. “We wanted to build something that was user friendly because machine-learning ideas are kind of esoteric. They require implementation, GPUs, fine-tuning, designing libraries of sequences. Especially at that time, it was a lot for biologists to learn.”

OpenProtein’s platform, in contrast, features an intuitive web interface for biologists to upload data and conduct protein engineering work with machine learning. It features a range of open-source models, including PoET, OpenProtein’s flagship protein language model.

PoET, short for Protein Evolutionary Transformer, was trained on protein groups to generate sets of related proteins. Bepler and his collaborators showed it could generalize about evolutionary constraints on proteins and incorporate new information on protein sequences without retraining, allowing other researchers to add experimental data to improve the model.

“Researchers can use their own data to train models and optimize protein sequences, and then they can use our other tools to analyze those proteins,” Bepler says. “People are generating libraries of protein sequences in silico [on computers] and then running them through predictive models to get validation and structural predictors. It’s basically a no-code front-end, but we also have APIs for people who want to access it with code.”

The models help researchers design proteins faster, then decide which ones are promising enough for further lab testing. Researchers can also input proteins of interest, and the models can generate new ones with similar properties.
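
As a rough illustration of the design loop described above, the toy sketch below fits a simple surrogate model to hypothetical sequence-activity data and ranks candidate variants for follow-up testing. It is not OpenProtein.AI’s platform, models, or API; the sequences, assay values, and ridge-regression surrogate are invented purely for illustration.

```python
# Toy sketch of machine-learning-in-the-loop protein engineering.
# NOT OpenProtein.AI's models or API: the sequences, assay values, and the
# ridge-regression surrogate below are hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import Ridge

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot(seq: str) -> np.ndarray:
    """Flatten a fixed-length sequence into a one-hot feature vector."""
    x = np.zeros((len(seq), len(AMINO_ACIDS)))
    for pos, aa in enumerate(seq):
        x[pos, AA_INDEX[aa]] = 1.0
    return x.ravel()

# Hypothetical training data: short variants with measured activities.
train_seqs = ["MKTAY", "MKSAY", "MRTAY", "MKTAF", "MKTVY"]
train_activity = np.array([1.0, 0.6, 0.8, 0.4, 0.9])  # made-up assay values

model = Ridge(alpha=1.0).fit(np.stack([one_hot(s) for s in train_seqs]),
                             train_activity)

# Score a small in-silico library, then rank it to pick what to test next.
candidates = ["MRTVY", "MKSVY", "MRSAY"]
scores = model.predict(np.stack([one_hot(s) for s in candidates]))
for seq, score in sorted(zip(candidates, scores), key=lambda t: -t[1]):
    print(f"{seq}: predicted activity {score:.2f}")
```

In the platform itself, a protein language model such as PoET would stand in for the toy regression, and the top-ranked candidates would move on to wet-lab validation.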

Since its founding, OpenProtein’s team has continued to add tools to its platform for researchers regardless of their lab size or resources.

“We’ve tried really hard to make the platform an open-ended toolbox,” Bepler says. “It has specific workflows, but it’s not tied specifically to one protein function or class of proteins. One of the great things about these models is they are very good at understanding proteins broadly. They learn about the whole space of possible proteins.”

Enabling the next generation of therapies

The large pharmaceutical company Boehringer Ingelheim began using OpenProtein’s platform in early 2025. Recently, the companies announced an expanded collaboration that will see OpenProtein’s platform and models embedded into Boehringer Ingelheim’s work as it engineers proteins to treat diseases like cancer and autoimmune or inflammatory conditions.

Last year, OpenProtein also released a new version of its protein language model, PoET-2, that outperforms much larger models while using a small fraction of the computing resources and experimental data.

“We really want to solve the question of how we describe proteins,” Bepler says. “What’s the meaningful, domain-specific language of protein constraints we use as we generate them? How can we bring in more evolutionary constraints? How can we describe an enzymatic reaction a protein carries out such that a model can generate sequences to do that reaction?”

Moving forward, the founders are hoping to make models that factor in the changing, interconnected nature of protein function.

“The area I am excited about is going beyond protein binding events to use these models to predict and design dynamic features, where the protein has to engage two, three, or four biological mechanisms at the same time, or change its function after binding,” says Lu, who currently serves in an advisory role for the company.

As progress in AI races forward, OpenProtein continues to see its mission as giving scientists the best tools to develop new treatments faster.

“As work gets more complex, with approaches incorporating things like protein logic and dynamic therapies, the existing experimental toolsets become limiting,” Lu says. “It’s really important to create open ecosystems around AI and biology. There’s a risk that AI resources could get so concentrated that the average researcher can’t use them. Open access is super important for the scientific field to make progress.”


A regulatory loophole could delay ozone recovery by years

Scientists say an exception in the Montreal Protocol for the use of ozone-depleting feedstocks could set the ozone recovery back seven years.


Often hailed as the most successful international environmental agreement of all time, the 1987 Montreal Protocol continues to phase out the global production of chemicals that were creating a growing hole in the ozone layer, which allowed more of the sun’s harmful ultraviolet radiation to reach Earth’s surface, raising the risk of skin cancer and other adverse health effects.

MIT-led studies have since shown the subsequent reduction in ozone-depleting substances is helping stratospheric ozone to recover. (It could return to 1980 levels by as early as 2040, according to some estimates.) But the Montreal Protocol made an exception in its rules for the use of ozone-depleting substances as feedstocks in the production of other materials. That’s because it was thought that only a small amount — just 0.5 percent — of the ozone-depleting substances used for this purpose would leak into the atmosphere.

In recent years, however, scientists have observed more ozone-depleting substances in the atmosphere than expected, and have increased their estimates of leakage from feedstocks.

Now an international group of scientists, including researchers from MIT, has calculated the impact of different feedstock leakage rates on the ozone layer’s fragile recovery. They find that the higher leakage rates, if not addressed by the Montreal Protocol, could delay ozone recovery by about seven years.

“We’ve realized in the last few years that these feedstock chemicals are a bug in the system,” says author Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies and Chemistry, who was part of the original research team that linked the chemicals to the ozone hole. “Production of ozone-depleting substances has pretty much ceased around the world except for this one use, which is when you have a chemical you convert into something else.”

The paper, which was published in Nature Communications today, is the first to comprehensively quantify the impact of leaked feedstocks, which are currently used to make plastics and nonstick chemicals. They are also used to make substitute chemicals for the ones regulated under the Montreal Protocol. The researchers say it shows the importance of curbing use and preventing leakage of such feedstocks, especially as the production of their end products, like plastic, is projected to grow.

“We’ve gotten to the point where, if we want the protocol to be as successful in the future as it has been in the past, the parties really need to think about how to tighten up the emissions of these industrial processes,” says first author Stefan Reimann of the Swiss Federal Laboratories for Materials Science and Technology.

“To me, it’s only fair, because so many other things have already been completely discontinued. So why should this exemption exist if it’s going to be damaging?” says Solomon.

Joining Reimann on the paper are his colleagues Martin K. Vollmer and Lukas Emmenegger; Luke Western and Susan Solomon of the MIT Center for Sustainability Science and Strategy and the Department of Earth, Atmospheric and Planetary Sciences; David Sherry of Nolan-Sherry and Associates Ltd; Megan Lickley of Georgetown University; Lambert Kuijpers of the A/gent Consultancy b.v.; Stephen A. Montzka and John Daniel of the National Oceanic and Atmospheric Administration; Matthew Rigby of the University of Bristol; Guus J.M. Velders of Utrecht University; Qing Liang of the NASA Goddard Space Flight Center; and Sunyoung Park of Kyungpook National University.

Repairing the ozone

In 1985, scientists discovered a growing hole in the ozone layer over Antarctica that was allowing more of the sun’s harmful ultraviolet radiation to reach Earth’s surface. The following year, researchers including Solomon traveled to Antarctica and discovered the cause of the ozone deterioration: a class of chemicals called chlorofluorocarbons, or CFCs, which were then used in refrigeration, air conditioning, and aerosols.

The revelations led to the Montreal Protocol, an international treaty involving 197 countries and the European Union restricting the use of CFCs. The subsequent decision to exempt the use of ozone-depleting substances for use as feedstocks was based partially on industry estimates of how much of their feedstocks leaked.

“It was thought that the emissions of these substances as a feedstock were minor compared to things like refrigerants and foams,” Western says. “It was also believed that leakage from these sources was minor — around half a percent of what went in — because people would essentially be leaking their profits if their feedstocks were released into the atmosphere.”

Unfortunately, some of those assumptions are no longer true. Western and Reimann are part of the Advanced Global Atmospheric Gases Experiment (AGAGE), a global monitoring network co-founded by Ronald Prinn, MIT’s TEPCO Professor of Atmospheric Science. AGAGE monitors emissions of ozone-depleting substances around the world, and in recent years researchers have revised their estimates of feedstock leakage upwards, to about 3.6 percent. For some chemicals, the number was even higher.

In the new paper, the researchers estimated a 3.6 percent feedstock leakage as the baseline for most chemicals. They compared that with a scenario where 0.5 percent of feedstocks are leaked from 2025 onward and a scenario with zero feedstock-related emissions. The researchers also looked at production trends between 2014 and 2024 to project how much of each specific ozone-depleting chemical would be used as feedstock between 2025 and 2100.

The analysis shows that until 2050, total ozone-depleting chemical emissions decrease in all scenarios as rising feedstock emissions are offset by declining uses enforced by the Montreal Protocol. In the scenario with continued 3.6 percent leakage, however, emissions level off around 2045, and total emissions only decrease by 50 percent overall by 2100.

The researchers then evaluated the impact of feedstock-related emissions on stratospheric ozone depletion. In the scenario where feedstock leakage is 0.5 percent, the ozone returns to its 1980 status by 2066. In the scenario with zero feedstock leakage, the ozone reclaims its 1980 health in 2065. But in the baseline scenario, the recovery is delayed about seven years, to 2073.
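
To see why the leakage fraction matters so much, the toy calculation below compares cumulative feedstock-related emissions under the three leakage scenarios. The production trajectory and units are invented for illustration; only the leakage fractions come from the study, and this is not the authors’ model.

```python
# Toy comparison of feedstock-emission scenarios. The production numbers are
# made up; only the leakage fractions (3.6%, 0.5%, 0%) come from the article.
# This is not the study's projection model or data.
import numpy as np

years = np.arange(2025, 2101)
# Hypothetical feedstock production, growing slowly with demand for end
# products such as plastics (arbitrary units).
production = 100.0 * 1.01 ** (years - 2025)

for label, leak in [("baseline, 3.6% leakage", 0.036),
                    ("tightened, 0.5% leakage", 0.005),
                    ("zero leakage", 0.0)]:
    cumulative = (production * leak).sum()
    print(f"{label}: cumulative 2025-2100 emissions = {cumulative:.0f} units")
```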

“This paper sends an important message that these emissions are too high and we have to find a way to reduce them,” Reimann says. “Either that means no longer using these substances as feedstocks, swapping out chemicals, or reducing the leakage emissions when they are used.”

A global response

Solomon is confident industries will be able to adjust to the latest findings.

“There are a lot of innovators in the chemical industry,” Solomon says. “They make new chemicals and improve chemicals for a living. It’s true they can perhaps get too entrenched with certain chemicals, but it doesn’t happen that often. Actually, they’re usually quite willing to consider alternatives. There are thousands of other chemicals that could be used instead, so why not switch? That’s been the attitude.”

Solomon says the fact that AGAGE can detect the impact of feedstock emissions is a testament to the progress the world has made in reducing emissions from other sources up to this point. She believes raising awareness of the feedstock problem is the first step.

“This isn’t the first time that the AGAGE Network has made measurements that have allowed the world to see we need to do a little better here or there,” Western says. “Often, it’s just a mistake. Sometimes all it takes is making people more aware of these things to tighten up some processes.”

Members of the Montreal Protocol meet every year. In those meetings, they split into working groups around different topics. Feedstock emissions are already one of those topics, so participants will review the evidence together. Typically, they release a statement about mitigation strategies if needed.

“We wanted to raise the warning flag that something is wrong here,” Reimann says. “We could reduce the period of ozone depletion by years. It might not sound like a long time, but if you could count the skin cancer cases you’d avoid in that time, it would seem quite significant.”

The work was supported, in part, by the U.S. National Science Foundation, the U.S. National Aeronautics and Space Administration (NASA), the Swiss Federal Office for the Environment, the VoLo Foundation, the United Kingdom Natural Environment Research Council, and the Korea Meteorological Administration Research and Development Program.


Youth may increase vulnerability to a carcinogen found in contaminated water and some drugs

A new study suggests that the chemical NDMA is much more likely to cause cancerous mutations after exposure early in life.


A new study from MIT suggests that a carcinogen that has been found in medications and in drinking water contaminated by chemical plants may have a much more severe impact on children than adults.

In a study of mice, the researchers found that juveniles exposed to drinking water containing this compound, known as NDMA, showed dramatically higher rates of DNA damage and cancer than adults.

The findings may help to explain an epidemiological association between childhood cancer and prenatal exposure to NDMA in people living near a contaminated site in Wilmington, Massachusetts, the researchers say. The study also suggests that it is critical to evaluate the impact of potential carcinogens across all ages.

“We really hope that groups that do safety testing will change their paradigm and start looking at young animals, so that we can catch potential carcinogens before people are exposed,” says Bevin Engelward, an MIT professor of biological engineering. “As a solution to cancer, cancer prevention is clearly much better than cancer treatment, so we hope we can spot dangerous chemicals before people are exposed, and therefore prevent extensive cancer risk.”

MIT postdoc Lindsay Volk is the lead author of the paper. Engelward is the senior author of the study, which appears in Nature Communications.

From DNA damage to cancer

NDMA (N-Nitrosodimethylamine) can be generated as a byproduct of many industrial chemical processes, and it is also found in cigarette smoke and processed meats. In recent years, NDMA has been detected in some formulations of the drugs valsartan, ranitidine, and metformin. It was also found in drinking water in Wilmington, Massachusetts, in the 1990s, as a result of contamination from the Olin Chemical site.

In 2021, a study from the Massachusetts Department of Health suggested a link between that water contamination and an elevated incidence of childhood cancer in Wilmington. Between 1990 and 2000, 22 Wilmington children were diagnosed with cancer. The contaminated wells were closed in 2003.

Also in 2021, Engelward and others at MIT published a study on the mechanism of how NDMA can lead to cancer. In the new Nature Communications paper, Engelward and her colleagues set out to see if they could determine why the compound appears to affect children more than adults.

Most studies that evaluate potential carcinogens are performed in mice that are at least 4 to 6 weeks old, and often older. For this study, the researchers studied two groups of mice — one 3 weeks old (juvenile), and one 3 months old (adult). Each group was given drinking water with low levels of NDMA, about five parts per million, for two weeks.

Inside the body, NDMA is metabolized by a liver enzyme called CYP2E1. This produces toxic metabolites that can damage DNA by adding a small chemical group known as a methyl group to DNA bases, creating lesions known as adducts.

When the researchers examined the livers of the mice, they found that juveniles and adults showed similar levels of DNA adducts. However, there were dramatic differences in what happened after that initial damage. In juvenile mice, DNA adducts led to significant accumulation of double-stranded DNA breaks, which occur when cells try to repair adducts. These breaks produce mutations that eventually lead to the development of liver cancer.

In the adult mice, the researchers saw essentially no double-stranded breaks and significantly fewer mutations compared to juveniles. Furthermore, the livers did not develop severe pathology, including tumors, even though they experienced the same initial level of DNA adducts.

“The initial structural changes to the DNA had very different consequences depending on age,” Engelward says. “The double-stranded breaks were exclusively observed in the young.”

Further experiments revealed that these differences stem from differences in the rates of cell proliferation. Cells in the juvenile liver divide rapidly, giving them more opportunity to turn DNA adducts into mutations, while cells of the adult liver rarely divide.

“This really emphasizes the overall problem that we’re trying to highlight in the paper,” Volk says. “With toxicological studies, oftentimes the standard is to use fully grown mice. At that point, they’re already slowing down cell division, so if we are testing the harmful effects of NDMA in adult mice, then we’re completely missing how vulnerable particular groups are, such as younger animals.”

While most of these effects were seen in the liver, because that is where NDMA is metabolized, a few of the mice developed other types of cancer, including lung cancer and lymphoma.

Adult risk is not zero

For most of these studies, the researchers used mice that had two of their DNA repair systems knocked out. This speeds up the mutation process, allowing the researchers to see the effects of NDMA exposure more easily, without needing to study a large population of mice.

However, a small study in mice with normal DNA repair showed that juveniles experienced NDMA-induced double-strand breaks, regenerative proliferation, and large-scale mutations that were completely absent in adults. This occurs because the fast-growing juveniles possess highly active DNA replication machinery that encounters the DNA adducts before the cell has time to repair them.

The researchers also found that if they treated adult mice with thyroid hormone, which stimulates proliferation of liver cells, those cells began accumulating mutations as quickly as the juvenile liver cells. Previous work done in the Engelward laboratory has shown that inflammation can also stimulate cell proliferation-driven vulnerability to DNA damage, so the findings of this study suggest that anything that causes liver inflammation could make the adult liver more vulnerable to damage caused by agents such as NDMA.

“We certainly don’t want to say that adults are completely resistant to NDMA,” Volk says. “Everything impacts your susceptibility to a carcinogen, whether that’s your genetics, your age, your diet, and so forth. In adults, if they have a viral infection, or a high fat diet, or chronic binge alcohol drinking, this can impact proliferation within the liver and potentially make them susceptible to NDMA.”

The researchers are now investigating how a high-fat diet might influence cancer development in mice that also have exposure to NDMA.

This collaborative effort across several MIT labs was funded by the National Institute of Environmental Health Sciences (NIEHS) Superfund Research Program, a NIEHS Core Center Grant, a National Institutes of Health Training Grant, and the Anonymous Fund for Climate Action.


MIT study reveals a new role for cell membranes

Long thought to be mainly a structural support, the cell membrane also influences how cells respond to signals and may contribute to the growth of cancer cells.


Cells are enveloped by a lipid membrane that gives them structure and provides a barrier between the cell and its environment. However, evidence has recently emerged suggesting that these membranes do more than simply provide protection — they also influence the behavior of the protein receptors embedded in them.

A new study from MIT chemists adds further support to that idea. The researchers found that changing the composition of the cell membrane can alter the function of a membrane receptor that promotes proliferation.

Epidermal growth factor receptor (EGFR) can be locked into an overactive state when the cell membrane has a higher than normal concentration of negatively charged lipids, the researchers found. This may help to explain why cancer cells with high levels of those lipids enter a highly proliferative state that allows them to divide uncontrollably.

“The longstanding dogma of what a membrane does is that it’s just a scaffold, an organizational structure. However, there have been increasing observations that suggest that maybe these membrane lipids are actually playing a role in receptor function,” says Gabriela Schlau-Cohen, the Robert T. Haslam and Bradley Dewey Professor of Chemistry at MIT and the senior author of the study.

The findings open up the possibility of discovering new ways to treat tumors by neutralizing the negative charge, which might turn down EGFR signaling, she adds.

Shwetha Srinivasan PhD ’22 is the lead author of the paper, which appears in the journal eLife. Other authors include former MIT postdocs Xingcheng Lin and Raju Regmi, Xuyan Chen PhD ’25, and Bin Zhang, an associate professor of chemistry at MIT.

Receptor dynamics

The EGF receptor, which is found on cells that line body surfaces and organs, is one of many receptors that help control cell growth. Some types of cancer, especially lung cancer and glioblastoma, overexpress the EGF receptor, which can lead to uncontrolled growth.

Like most receptor proteins, EGFR spans the entire cell membrane. Until recently, it has been challenging to study how signals are conveyed across the entire receptor, because of the difficulty of creating membranes that have proteins going all the way through them and then studying both ends of those proteins.

To make it easier to study these signaling processes, Schlau-Cohen’s lab uses nanodiscs, a special type of self-assembling membrane that mimics the cell membrane. When making these discs, the researchers can embed receptors in them, allowing the team to study the function of the full-length receptor.

Using a technique called single molecule FRET (fluorescence resonance energy transfer), the researchers can study how the shape of the receptor changes under different conditions. Single molecule FRET allows them to measure the distance between different parts of the protein by labeling them with fluorescent tags and then measuring how fast energy travels between the tags.
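
That distance readout rests on the standard Förster relation, in which the transfer efficiency falls off with the sixth power of the separation between the two fluorescent tags. The short sketch below evaluates that relation for a hypothetical Förster radius of 5 nanometers; the value is illustrative and not specific to the labels used in this study.

```python
# Standard Förster (FRET) relation: efficiency E = 1 / (1 + (r / R0)^6),
# where r is the donor-acceptor distance and R0 is the Förster radius.
# R0 = 5 nm here is a hypothetical, dye-pair-dependent value.

def fret_efficiency(r_nm: float, r0_nm: float = 5.0) -> float:
    """Fraction of excitation energy transferred at separation r_nm."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

for r in (3.0, 5.0, 8.0):  # separations in nanometers
    print(f"r = {r:.0f} nm -> E = {fret_efficiency(r):.2f}")
```

Because the efficiency changes steeply near the Förster radius, small shifts in a receptor’s conformation translate into easily measurable changes in the FRET signal.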

In previous work, Schlau-Cohen and Zhang used single molecule FRET and molecular dynamics simulations to reveal what happens when EGFR binds to EGF. They found that this binding causes the transmembrane section of the receptor to change shape, and that shape-shift triggers the section of the receptor that extends inside the cell to activate cellular machinery that stimulates growth.

Stuck in an overactive state

In the new study, the researchers used a similar approach to investigate how altering the composition of the membrane affects the function of the receptor. First, they explored how elevated levels of negatively charged lipids would affect the cell membrane and EGFR function.

Normally, about 15 percent of the cell membrane is made up of negatively charged lipids. The researchers found that membranes with negatively charged lipids in the range of 15 to 30 percent behaved normally, but if that level reached 60 percent, then the EGFR receptor would become locked into an active state.

In that state, the pro-growth signaling pathway is turned on all the time, even when no EGF is bound to the receptor. Many cancer cells show increased levels of these lipids, and this mechanism could help to explain why those cells are able to grow unchecked, Schlau-Cohen says.

“If the membrane has high levels of negatively charged lipids, then it’s always in that open conformation. It doesn’t matter if ligand is bound or unbound,” she says. “It’s always in the conformation that’s telling the cell to grow, not just when EGF binds.”

The researchers also used this system to explore the role of cholesterol in EGFR function. When the researchers created nanodiscs with elevated cholesterol levels, they found that the membranes became more rigid, and this rigidity suppressed EGFR signaling.

The research was funded by the National Institutes of Health and MIT’s Department of Chemistry.


Waves hit different on other planets

From lazy ripples to towering breakers, waves should vary widely from one planet to another, according to a new model.


On a calm day, a light breeze might barely ripple the surface of a lake on Earth. But on Saturn’s largest moon Titan, a similar mild wind would kick up 10-foot-tall waves.

This otherworldly behavior is one prediction from a new wave model developed by scientists at MIT. The model is the first to capture the full dynamics of waves and what it takes to whip them up under different planetary conditions.

In a study published in the Journal of Geophysical Research: Planets, the MIT team introduces the model, which they’ve aptly named “PlanetWaves.” They apply the model to predict how waves behave on planetary bodies that might host liquid lakes and oceans, including Titan, ancient Mars, and three planets beyond the solar system.

The model predicts that a gentle wind would be enough to stir up huge waves on Titan, where lakes are filled with light liquid hydrocarbons. In contrast, it would take hurricane-force winds to barely move the surface of a lake on the exoplanet 55-Cancri e, which is thought to be a lava world covered in hot, dense liquid rock. 

“On Earth, we get accustomed to certain wave dynamics,” says study author Andrew Ashton, associate scientist at the Woods Hole Oceanographic Institution (WHOI) and faculty member of the MIT-WHOI Joint Program. “But with this model, we can see how waves behave on planets with different liquids, atmospheres, and gravity, which can kind of challenge our intuition.”

The team is particularly keen to understand how waves form on Titan. The large moon is the only planetary body in the solar system other than Earth that is known to currently host liquid lakes.

“Anywhere there’s a liquid surface with wind moving over it, there’s potential to make waves,” says Taylor Perron, the Cecil and Ida Green Professor of Earth, Atmospheric and Planetary Sciences at MIT. “For Titan, the tantalizing thing is that we don’t have any direct observation of what these lakes look like. So we don’t know for sure what kind of waves might exist there. Now this model gives us an idea.”

If humans were one day to send a probe to Titan’s lakes, the team’s new model could inform the design of wave-resilient spacecraft.

“You would want to build something that can withstand the energy of the waves,” says lead author Una Schneck, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “So it’s important to know what kind of waves these instruments would be up against.”

The study’s co-authors include Charlene Detelich and Alexander Hayes of Cornell University and Milan Curcic of the University of Miami.

“The first puff”

When wind blows over water, it creates waves that can be strong enough to carve out coastlines and redistribute sediment brought to the coast by rivers. Through this process, waves can be a significant force in shaping a landscape over time. Schneck and her colleagues, who study landscape evolution on Earth and other planets, wondered how waves might behave on other worlds where gravity, atmospheric conditions, and liquid compositions can be very different from what is found on Earth.

“There have been attempts in the past to predict how gravity will affect waves on other planets,” Schneck says. “But they don’t quantify other factors such as the composition of the liquid that is making waves. That was the big leap with this project.”

She and her colleagues developed a full wave model that takes into account not just a planet’s gravity, but also properties of its surface liquid, such as its density, viscosity, and surface tension, or how resistant a liquid is to rippling. The team also incorporated the effect of a planet’s atmospheric pressure. With this model, they aimed to predict how a planet’s liquid surface would evolve in response to winds of a given speed.
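
For a rough sense of how those liquid properties enter the problem, the sketch below evaluates the textbook deep-water gravity-capillary dispersion relation for Earth water and for an approximate Titan hydrocarbon lake. It is not the PlanetWaves model, which also treats wind forcing, viscosity, and atmospheric pressure, and the Titan liquid values are rough illustrative numbers rather than the study’s inputs.

```python
# Textbook deep-water gravity-capillary dispersion: omega^2 = g*k + (sigma/rho)*k^3.
# This is NOT the PlanetWaves model; the Titan liquid properties below are
# rough illustrative values, not the study's inputs.
import numpy as np

def phase_speed(wavelength_m, g, rho, sigma):
    """Phase speed c = omega / k of a deep-water gravity-capillary wave."""
    k = 2.0 * np.pi / wavelength_m
    omega = np.sqrt(g * k + (sigma / rho) * k ** 3)
    return omega / k

wavelength = 10.0  # meters
cases = [
    # name, gravity [m/s^2], liquid density [kg/m^3], surface tension [N/m]
    ("Earth, water", 9.81, 1000.0, 0.072),
    ("Titan, hydrocarbon lake (approx.)", 1.35, 550.0, 0.017),
]
for name, g, rho, sigma in cases:
    print(f"{name}: c = {phase_speed(wavelength, g, rho, sigma):.2f} m/s")
```

Even this simplified relation captures the qualitative picture: with Titan’s weak gravity and light liquid, a wave of a given wavelength travels noticeably more slowly than its counterpart on Earth.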

“Imagine a completely still lake,” Ashton offers. “We’re trying to figure out the first puff that will make those first little tiny ripples, on up to a full ocean wave.”

Making waves

The team first tested their new model with wave data on Earth. They used measurements of waves that were collected by buoys across Lake Superior over 20 years. They found that the model, which took into account Earth’s gravity, the composition of liquid (water), and atmospheric conditions, was able to accurately predict what windspeeds it would take to generate waves across the lake, and how high the waves grew with a given wind strength.

The researchers then applied the model to predict how waves would behave on other planetary bodies that are known to host liquid on their surface. They looked first to Titan, where NASA’s Cassini mission previously captured radar images of lake formations, which scientists suspect are currently filled with liquid methane and ethane. The team used the new model to calculate the moon’s wave dynamics given its gravity, atmospheric pressure, and liquid composition.

They found that on Titan, it’s surprisingly easy to make waves. The relatively light liquid, combined with low gravity and atmospheric pressure, means that even a gentle wind can stir up huge waves.

“It kind of looks like tall waves moving in slow motion,” Schneck says. “If you were standing on the shore of this lake, you might feel only a soft breeze but you would see these enormous waves flowing toward you, which is not what we would expect on Earth.”

The researchers also considered wave activity on ancient Mars. The Red Planet hosts many impact basins that may have once been filled with water, before the planet’s atmosphere dissipated and the water evaporated away. One of those basins is Jezero Crater, which is currently being explored by NASA’s Perseverance rover. With the new model, the team showed that as Mars’ atmosphere gradually disappeared, reducing its pressure over time, it would have required stronger winds to make the same waves.

Beyond the solar system, the researchers applied the model to three different exoplanets. The first, LHS1140b, is a “cool super-Earth,” meaning that it is colder and larger than Earth. The planet hosts liquid water, though because it is so large, it has stronger gravity than Earth. The model showed that the same wind would generate much smaller waves of water on the super-Earth than on Earth, due to that difference in gravity.

The team also considered Kepler 1649b, a Venus-like planet with a gravity similar to Earth’s and lakes of sulfuric acid, a liquid about twice as dense as water. Under these conditions, the researchers found that it would take much stronger winds than on Earth to make even a ripple on the exo-Venus.

This effect is even more pronounced for the third planet, 55-Cancri e — a lava world that has both a higher gravity than Earth and a much denser, more viscous surface liquid. Scientists suspect that the planet hosts oceans of liquefied rock. In this environment, the model predicts that hurricane-force winds on Earth, of about 80 miles per hour, would generate only small waves of a few centimeters in height on the lava world.

Aside from illuminating new ways that waves can behave on other planets, Perron hopes the model will answer longstanding questions of planetary landscape formation.

“Unlike on Earth where there is often a delta where a river meets the coast, on Titan there are very few things that look like deltas, even though there are plenty of rivers and coasts. Could waves be responsible for this?” Perron wonders. “These are the kinds of mysteries that this model will help us solve.”

This work was supported, in part, by NASA and the National Science Foundation.


Multitasking quantum sensors can measure several properties at once

The devices represent a key step toward practical quantum sensing, with applications in biomedical sensing, materials characterization, and more.


A special class of sensors leverages quantum properties to measure tiny signals at levels that would be impossible using classical sensors alone. Such quantum sensors are currently being used to study the inner workings of cells and the outer depths of our universe.

Particularly promising are solid-state quantum sensors, which can operate at room temperature. Unfortunately, most solid-state quantum sensors today only measure one physical quantity at a time — such as the magnetic field, temperature, or strain in a material. Trying to measure both the magnetic field and temperature of a material at the same time causes their signals to get mixed up and measurements to become unreliable.

Now, MIT researchers have created a way to simultaneously measure multiple physical quantities with a solid-state quantum sensor. They achieved this by exploiting entanglement, where particles become correlated into a single quantum state. In a new paper, the team demonstrated its approach in a commonly used quantum sensor at room temperature, measuring the amplitude, frequency, and phase of a microwave field in a single measurement. They also showed the approach works better than sequentially measuring each property or using traditional sensors.

The researchers say the approach could enable quantum sensors that can deepen our understanding of the behavior of atoms and electrons inside materials and living systems like cancer cells.

“Quantum multiparameter estimation has been mostly theoretical to date,” says co-lead author of the paper Takuya Isogawa, a graduate student in nuclear science and engineering. “There have been very few experiments that actually demonstrate it, and that work focused on photons. We wanted to demonstrate multiparameter estimation in a more application-oriented setup: a solid-state quantum sensor in use today.”

Joining Isogawa on the paper are co-lead authors Guoqing Wang PhD ’23 and MIT PhD candidate Boning Li. The other authors on the paper are former MIT visiting students Zhiyao Hu and Ayumi Kanamoto; University of Tokyo PhD candidate Shunsuke Nishimura; Chinese University of Hong Kong Professor Haidong Yuan; and Paola Cappellaro, MIT’s Ford Professor of Engineering, a professor of nuclear science and engineering and of physics, and a member of the Research Laboratory of Electronics.

Quantum effects for measurement

Quantum sensors exploit quantum effects like entanglement, spin states, and superposition to measure changes in magnetic fields, electric fields, gravity, acceleration, and more. As such, they can be used to measure the activity of single molecules in ways that are useful for understanding biology and space, like tracking the activity of metabolites or enzymes inside cells.

One particularly useful sensor in biology leverages what’s known as nitrogen-vacancy (NV) centers in diamonds, a defect where a carbon atom in the diamond’s crystal lattice is replaced by a nitrogen atom, and a neighboring lattice site is missing, or vacant. The defect hosts an electronic spin whose transition frequencies can be read out optically. The NV center’s spin state is extremely sensitive to external effects, such as magnetic fields and temperature, which can shift the spin state in ways that can be measured at extremely high resolution.

Unfortunately, different external effects change the energy resonances of the spin in similar ways, making it difficult to measure multiple effects at once. The result is that most solid-state quantum sensor applications measure a single physical quantity at one time.

“If you can only measure one quantity at a time, you have to repeat experiments to measure quantities one by one,” Isogawa says. “That takes more time, which means less sensitivity. It also makes experiments more susceptible to errors.”

For their experiment, the researchers used NV centers inside of a 5-square-millimeter diamond. They pointed a laser into the diamond and studied its fluorescence to make their measurements, a common approach for such sensors. To study the electronic spin of the NV center, they used a microwave antenna. To study the spin of the nitrogen atom they used a radio frequency field.

“We used those two spins as two qubits,” Isogawa says, referring to the building blocks of quantum computing systems. “If you have only one qubit, you can only measure one outcome: basically, 0 or 1. It’s the probability that it spins up or down. Think of it like a coin toss, with the probability of getting heads or tails. With two qubits, we increased the parameters that we could extract.”

The system worked because the spins of the sensor qubit and auxiliary qubit were entangled, a quantum property in which the state of one particle depends on the state of the other. With one qubit, a measurement gives a binary outcome described by a single probability. With two, a measurement gives four possible outcomes whose probabilities sum to one, encoding three independent parameters.

The two qubits allowed researchers to measure those three quantities simultaneously using a technique known as the Bell state measurement.
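
The counting argument can be made concrete with a small sketch: measuring a two-qubit state in the Bell basis returns four outcome probabilities that sum to one, so a single measurement setting carries three independent numbers. The state below is arbitrary, and the sketch is a conceptual illustration, not the NV-center pulse sequence used in the experiment.

```python
# Conceptual sketch: a Bell-basis measurement on two qubits gives four
# outcome probabilities summing to one, i.e., three independent parameters.
# The input state is arbitrary; this is not the NV-center experiment itself.
import numpy as np

# Columns are the four Bell states in the |00>, |01>, |10>, |11> basis.
bell = np.array([[1,  0,  0,  1],
                 [0,  1,  1,  0],
                 [0,  1, -1,  0],
                 [1,  0,  0, -1]]) / np.sqrt(2)

# An arbitrary normalized two-qubit state standing in for the entangled
# sensor spin and auxiliary nuclear spin after sensing a field.
psi = np.array([0.6, 0.3 + 0.2j, 0.1j, 0.7])
psi = psi / np.linalg.norm(psi)

probs = np.abs(bell.conj().T @ psi) ** 2  # four Bell-outcome probabilities
print("Bell-basis probabilities:", np.round(probs, 3))
print("Sum:", round(float(probs.sum()), 6))  # = 1, so three are independent
```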

Other researchers had used the Bell state measurement at extremely low temperatures before, but the MIT researchers developed a new technique to perform the measurement at room temperature. That technique was first proposed by Wang, who was previously a graduate student in Professor Cappellaro’s lab.

The researchers used the approach to simultaneously measure the amplitude, detuning, and phase of a microwave magnetic field. The researchers also say the approach could be used to measure electric fields, temperature, pressure, and strain.

“Measuring these parameters simultaneously can help us explore spin waves in materials, which is an important topic in condensed matter physics,” Isogawa says. “NV center sensors have extremely high spatial resolution and versatility. They can measure a lot of different physical quantities.”

More practical quantum sensing

The researchers say this work is an important step toward using solid-state quantum sensors to more fully characterize systems in biomedical research and materials characterization. That’s because multiparameter estimation had never been achieved in realistic settings or in widely used quantum sensors.

“What makes the NV center quantum sensors so special is they can operate at room temperature,” Isogawa says. “It’s very suitable for biological measurements or condensed matter physics experiments.”

Although the researchers say their sensor didn’t measure each quantity at the highest possible precision, in future work they plan to explore if their approach can achieve higher precision for each parameter.

They also plan to explore how their approach works for characterizing heterogeneous materials.

“In an extremely uniform environment, you could use many different classical and quantum sensors and measure each physical quantity at the same time,” Isogawa says. “But if the physical quantities change at different locations, you need sensors with high spatial resolution, and you need a sensor that can measure multiple physical quantities. This approach has major advantages in such situations.”

The work was supported, in part, by the U.S. National Science Foundation, the National Research Foundation of Korea, and the Research Grants Council of Hong Kong.


Jazz in the key of life

Saxophonist Miguel Zenón, a Grammy-winning MIT faculty member, creates a distinctive blend of jazz and traditional Puerto Rican music.


It is not hard to find glowing reviews of saxophonist Miguel Zenón, a creative jazz artist whose compositions incorporate musical elements from his native Puerto Rico.

For instance, The Jazz Times called “Jibaro,” Zenón’s breakthrough 2005 album, “profound yet joyful.” The New York Times called the same music “strong and light,” adding that we have “rarely seen a jazz composer step forward with a project so impressively organized, intellectually powerful and well played from the start.”

In 2009, when Zenón won a prestigious MacArthur Fellowship, the MacArthur Foundation called Zenón’s work “elegant and innovative,” with “a high degree of daring and sophistication.” In 2012, The New York Times reviewed another Zenón work, “Puerto Rico Nació en Mi: Tales From the Diaspora,” by calling the music “deeply hybridized and original, complex but clear.”

As you may have noticed, these notices all contain multiple descriptive terms. That’s because Zenón’s work is many things at once: jazz, combined with other musical genres; technically rigorous, and supple; novel, yet steeped in tradition. Indeed, Zenón has always seen jazz as being multifaceted.

“What I discovered, when I first encountered jazz, was this idea that you were using improvisation to portray your personality directly to your listeners,” Zenón explains. “And it was connected to a very interesting and intricate improvisational language. That provided something I hadn’t encountered in music before, this idea that you could have something personal and heartfelt walking hand in hand with something that was intellectual and brainy. That balance spoke to me.”

It is still speaking. In 2024, Zenón won the Grammy Award for Best Latin Jazz Album for “El Arte Del Bolero Vol. 2,” a collaboration with Venezuelan pianist Luis Perdomo, a musical partner in the Miguel Zenón Quartet.

Zenón has taught at MIT for three years now. He became a tenured faculty member last year, in MIT’s Music and Theater Arts program, where he helps students find the same satisfaction in music that he does.

“When I first got into music, I was looking for fulfillment,” Zenón says. “It wasn’t about success. I was just looking for music to fulfill something within me. And I still search for that now. And sometimes it still feels like it did 25 or 30 years ago, when I first encountered that feeling. It’s nice to have that in your pocket, to say, this is what I’m looking for, that initial feeling.”

Paradise in the Back Bay

Zenón grew up in San Juan, Puerto Rico. Around age 11, he started attending a performing arts school and playing the saxophone. In his last year of school, Zenón was admitted into college to study engineering. However, a few years before, he had encountered something new: jazz. Zenón’s training had been in classical music. But jazz felt different.

“Discovering jazz music ignited a passion for music in me that had not existed up to that point,” says Zenón, who decided to pursue music in college. “I kind of jumped ship, and it was a blind jump. I didn’t know what to expect, I didn’t know what was on the other side, I didn’t have any artists or any musicians in my family. I just followed a hunch, followed my heart.”

After teachers recommended he study at the renowned Berklee College of Music in Boston, Zenón worked to find a scholarship and funding.

“This was way before the internet. I was looking at catalogs,” Zenón recalls. “I had never been to Boston in my life, I didn’t even know what Berklee looked like. But at Berklee it was the first time I was able to connect with a jazz teacher in a formal way, to learn about history, theory, harmony, and I soaked in it. Also, I was surrounded by young people like myself, who were as enamored and passionate about music as I was. It really felt like paradise.”

After earning his BA from Berklee in 1998, Zenón moved to New York City. He earned an MA from the Manhattan School of Music in 2001 and began playing more extensively with new bandmates.

“I just wanted to be able to play with people who were better than me, and learn from the experience,” Zenón says. He started generating new ideas, writing music, and performing publicly. With Antonio Sánchez, Hans Glawischnig, and Perdomo, he founded the Miguel Zenón Quartet.

“That led to going into the studio and making an album,” Zenón recounts. “And that led to more experience, and more albums.”

Did it ever. Zenón has now led about 20 albums, mostly featuring the quartet. (After several years, Henry Cole replaced Sánchez as the group’s drummer.) Zenón has played on many recordings by other artists, and helped found the SFJAZZ Collective.

Few prolific musicians will name any one recording as their best, and Zenón is no exception, but he is willing to cite a few that were milestones for him.

“Jibaro” draws on the music of Puerto Rico’s jibaro singers, troubadours who use 10-line stanzas with eight-syllable lines, a form Zenón adapted for jazz-quartet use. “Esta Plena,” a 2009 record, fuses jazz with the structures of “plena,” a traditional percussion-based Puerto Rican song form. “Alma Adentro,” a 2011 album, covers classic songs from Puerto Rico.

“It would be impossible for me to pick one favorite, but what I would say is, there are a couple of albums in the earlier part of my career that explored a balance between things coming from the jazz world and from traditional Puerto Rican music and folklore. When I was able to feel like that balance was right, it felt like me,” Zenón says. “This is what I have to give. This is my persona.”

In 2008, Zenón was also honored with a Guggenheim Fellowship, which helped him conduct music research, another facet of his career. Zenón has often extensively interviewed traditional Puerto Rican musicians about the intricacies of their works before writing material in those forms.

And Zenón has made a point of giving back, founding the Caravana Cultural, a project that brings free jazz concerts to rural Puerto Rico.

Work, joy, and love

Zenón is now settled in at MIT, which boasts a vibrant music program. More than 1,500 MIT students take a music class each year, and over 500 students participate in one of 30 campus ensembles. Last year, MIT opened its new Edward and Joyce Linde Music Building, a purpose-built performance, rehearsal, and teaching space.

“There are definitely students at MIT who could be at some of the best music schools in the world,” Zenón says. “That’s not in question.”

Moreover, among MIT students, Zenón says, “There is a communal approach to music. Everything they do, they do for each other. They look out for each other, they work together. And that has been one of the most rewarding things to see.”

He continues: “Of course the students are brilliant and the faculty are too. In terms of what I like to teach, it’s been a good fit for me personally, and I couldn’t be happier about the opportunity. There’s more and more interest in jazz, more and more interest in creating things together, and there’s a unique mindset being built in front of our eyes.”

He is also pleased to work in the Linde Music Building: “It’s amazing to have the building, not only in terms of the facilities, but it’s also a symbol of the place music has within the Institute. We’re not just talking about music, we’re creating it. It’s a great commitment from the school and says a lot about our leadership.”

Meanwhile, along with teaching, Zenón’s own recording career continues at full speed. With Luis Perdomo, he is working on “El Arte Del Bolero Vol. 3,” the follow-up to his Grammy-winning album. And Zenón has plans for still another album, to be recorded in Puerto Rico with a large ensemble, based on music he is writing about Puerto Rico’s history and present.

“Things are always linked,” Zenón explains. “Once you finish one project, the next one starts. It feels natural for me to do it that way.”

In conversation, Zenón is engaging, genial, and reflective. So what advice does he have for younger musicians? Not everyone who plays an instrument will become Miguel Zenón. But what about people who want to pursue music, not knowing how far it will take them?

“If you find something you enjoy, just enjoy it for the sake of it,” Zenón says. “Find what brings joy, and make sure you don’t lose that. Having said that, with music, like any art form, or anything else in life, in order to make progress, it takes work and commitment. There’s no hiding that. So if music is something you’re serious about, set goals you can achieve over time, so you always have something to work for. In my experience, that’s key. But I always pair that with the idea of joy and love for music — keeping that love close to your heart.”