General News from MIT - Massachusetts Institute of Technology

Latest general updates from MIT.

Jazz in the key of life

Saxophonist Miguel Zenón, a Grammy-winning MIT faculty member, creates a distinctive blend of jazz and traditional Puerto Rican music.


It is not hard to find glowing reviews of saxophonist Miguel Zenón, a creative jazz artist whose compositions incorporate musical elements from his native Puerto Rico.

For instance, JazzTimes called “Jibaro,” Zenón’s breakthrough 2005 album, “profound yet joyful.” The New York Times called the same music “strong and light,” adding that we have “rarely seen a jazz composer step forward with a project so impressively organized, intellectually powerful and well played from the start.”

In 2009, when Zenón won a prestigious MacArthur Fellowship, the MacArthur Foundation called Zenón’s work “elegant and innovative,” with “a high degree of daring and sophistication.” In 2012, The New York Times reviewed another Zenón work, “Puerto Rico Nació en Mi: Tales From the Diaspora,” by calling the music “deeply hybridized and original, complex but clear.”

As you may have noticed, these notices all contain multiple descriptive terms. That’s because Zenón’s work is many things at once: jazz combined with other musical genres; technically rigorous yet supple; novel yet steeped in tradition. Indeed, Zenón has always seen jazz as multifaceted.

“What I discovered, when I first encountered jazz, was this idea that you were using improvisation to portray your personality directly to your listeners,” Zenón explains. “And it was connected to a very interesting and intricate improvisational language. That provided something I hadn’t encountered in music before, this idea that you could have something personal and heartfelt walking hand in hand with something that was intellectual and brainy. That balance spoke to me.”

It is still speaking. In 2024, Zenón won the Grammy Award for Best Latin Jazz Album for “El Arte Del Bolero Vol. 2,” a collaboration with Venezuelan pianist Luis Perdomo, a musical partner in the Miguel Zenón Quartet.

Zenón has taught at MIT for three years now. Last year he became a tenured faculty member in MIT’s Music and Theater Arts program, where he helps students find the same satisfaction in music that he does.

“When I first got into music, I was looking for fulfillment,” Zenón says. “It wasn’t about success. I was just looking for music to fulfill something within me. And I still search for that now. And sometimes it still feels like it did 25 or 30 years ago, when I first encountered that feeling. It’s nice to have that in your pocket, to say, this is what I’m looking for, that initial feeling.”

Paradise in the Back Bay

Zenón grew up in San Juan, Puerto Rico. Around age 11, he started attending a performing arts school and playing the saxophone. In his last year of school, Zenón was admitted into college to study engineering. However, a few years before, he had encountered something new: jazz. Zenón’s training had been in classical music. But jazz felt different.

“Discovering jazz music ignited a passion for music in me that had not existed up to that point,” says Zenón, who decided to pursue music in college. “I kind of jumped ship, and it was a blind jump. I didn’t know what to expect, I didn’t know what was on the other side, I didn’t have any artists or any musicians in my family. I just followed a hunch, followed my heart.”

After teachers recommended he study at the renowned Berklee College of Music in Boston, Zenón worked to find a scholarship and funding.

“This was way before the internet. I was looking at catalogs,” Zenón recalls. “I had never been to Boston in my life, I didn’t even know what Berklee looked like. But at Berklee it was the first time I was able to connect with a jazz teacher in a formal way, to learn about history, theory, harmony, and I soaked in it. Also, I was surrounded by young people like myself, who were as enamored and passionate about music as I was. It really felt like paradise.”

After earning his BA from Berklee in 1998, Zenón moved to New York City. He earned an MA from the Manhattan School of Music in 2001 and began playing more extensively with new bandmates.

“I just wanted to be able to play with people who were better than me, and learn from the experience,” Zenón says. He started generating new ideas, writing music, and performing publicly. With Antonio Sánchez, Hans Glawischnig, and Perdomo, he founded the Miguel Zenón Quartet.

“That led to going into the studio and making an album,” Zenón recounts. “And that led to more experience, and more albums.”

Did it ever. Zenón has now led about 20 albums, mostly featuring the quartet. (After several years, Henry Cole replaced Sánchez as the group’s drummer.) Zenón has played on many recordings by other artists, and helped found the SFJAZZ Collective.

Few prolific musicians will name any one recording as their best, and Zenón is no exception, but he is willing to cite a few that were milestones for him.

“Jibaro” draws on the music of Puerto Rico’s jibaro singers, troubadours using 10-line stanzas with eight-syllable lines, something Zenón adopted for jazz-quartet use. “Esta Plena,” a 2009 record, fuses jazz and the structures of “plena,” a traditional percussion-based Puerto Rican song form. “Alma Adentro,” a 2011 album, covers classic songs from Puerto Rico.

“It would be impossible for me to pick one favorite, but what I would say is, there are a couple of albums in the earlier part of my career that explored a balance between things coming from a jazz world and coming from traditional Puerto Rican traditional music and folklore, when I was able to feel like that balance was right, it felt like me,” Zenón says. “This is what I have to give. This is my persona.”

In 2008, Zenón was also honored with a Guggenheim Fellowship, which helped him conduct music research, another facet of his career. Zenón has often extensively interviewed traditional Puerto Rican musicians about the intricacies of their works before writing material in those forms.

And Zenón has made a point of giving back, founding the Caravana Cultural, a project that brings free jazz concerts to rural Puerto Rico.

Work, joy, and love

Zenón is now settled in at MIT, which boasts a vibrant music program. More than 1,500 MIT students take a music class each year, and over 500 students participate in one of 30 campus ensembles. Last year, MIT opened its new Edward and Joyce Linde Music Building, a purpose-built performance, rehearsal, and teaching space.

“There are definitely students at MIT who could be at some of the best music schools in the world,” Zenón says. “That’s not in question.”

Moreover, among MIT students, Zenón says, “There is a communal approach to music. Everything they do, they do for each other. They look out for each other, they work together. And that has been one of the most rewarding things to see.”

He continues: “Of course the students are brilliant and the faculty are too. In terms of what I like to teach, it’s been a good fit for me personally, and I couldn’t be happier about the opportunity. There’s more and more interest in jazz, more and more interest in creating things together, and there’s a unique mindset being built in front of our eyes.”

He is also pleased to work in the Linde Music Building: “It’s amazing to have the building, not only in terms of the facilities, but it’s also a symbol of the place music has within the Institute. We’re not just talking about music, we’re creating it. It’s a great commitment from the school and says a lot about our leadership.”

Meanwhile, along with teaching, Zenón’s own recording career continues at full speed. With Luis Perdomo, he is working on “El Arte Del Bolero Vol. 3,” the follow-up to his Grammy-winning album. And Zenón has plans for still another album, to be recorded in Puerto Rico with a large ensemble, based on music he is writing about Puerto Rico’s history and present.

“Things are always linked,” Zenón explains. “Once you finish one project, the next one starts. It feels natural for me to do it that way.”

In conversation, Zenón is engaging, genial, and reflective. So what advice does he have for younger musicians? Not everyone who plays an instrument will become Miguel Zenón. But what about people who want to pursue music, not knowing how far it will take them?

“If you find something you enjoy, just enjoy it for the sake of it,” Zenón says. “Find what brings joy, and make sure you don’t lose that. Having said that, with music, like any art form, or anything else in life, in order to make progress, it takes work and commitment. There’s no hiding that. So if music is something you’re serious about, set goals you can achieve over time, so you always have something to work for. In my experience, that’s key. But I always pair that with the idea of joy and love for music — keeping that love close to your heart.”


Professor Emeritus Jack Dennis, pioneering developer of dataflow models of computation, dies at 94

The influential first leader of the Computation Structures Group at MIT played a key role in the development of asynchronous computing.


Jack Dennis, an influential MIT professor emeritus of computer science and engineering, died on March 14 at age 94. The original leader of the Computation Structures Group within the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), he pioneered the development of dataflow models of computation, and, subsequently, many novel principles of computer architecture inspired by dataflow models.

The second child of an engineer and a textile designer, Dennis showed early interest in both engineering and music, rewriting Gilbert and Sullivan lyrics with his parents and playing piano with the Norwalk Symphony Orchestra in Connecticut as a teen, while building a canoe at home with his father. As an undergraduate at MIT, he developed his wide array of interests further, joining the VI-A Cooperative Program in Electrical Engineering; working at the Air Force Cambridge Research Laboratories on projects in speech processing and novel radar systems; participating in the model railroad club; and joining the MIT Symphony Orchestra, where he met his first wife, Jane Hodgson ’55, SM ’56, PhD ’61. (The two later separated when she went to study medicine in Florida.) 

Dennis earned his BS (1953), MS (1954), and ScD (1958) from MIT before joining the then-Department of Electrical Engineering as a faculty member. He was promoted to full professor in 1969. His doctoral thesis, “Mathematical Programming and Electrical Networks,” explored analogies between electric circuit theory and quadratic programming problems. Ideas he developed in that thesis further crystallized in his 1964 paper, “Distributed solution of network programming problems,” which created an important early class of digital distributed optimization solvers.

In a 2003 piece that Dennis wrote for his undergraduate class’s 50th reunion, he remembered his earliest encounters with computers at the Institute: “I prepared programs written in assembly language on punched paper tape using Frieden 'Flexowriters,' and stood aside watching the myriad lights blink and flash while operator Mike Solamita fed the tapes [...] That was 1954. Fifty years later, much has changed: A room full of vacuum tubes has become a tiny chip with millions of transistors. A phenomenon once limited to research laboratories has become an industry producing commodity products that anyone can own and use beneficially.”

Dennis’ influence in steering that change was profound. As a collaborator with the teams behind both Project MAC and Multics, the earliest attempts to allow multiple users to work with a single computer seemingly simultaneously (i.e., a time-shared operating system), Dennis helped to specify the unique segment addressing and paging mechanisms that became a fundamental part of the General Electric Model 645 computer. His insights stemmed from a tendency to pay equal attention to both hard- and software when others considered themselves specialists in one or the other. 

“I formed the Computation Structures Group [within CSAIL] and focused on architectural concepts that could narrow the acknowledged gap between programming concepts and the organization of computer hardware,” Dennis explained in his 2003 recollection. “I found myself dismayed that people would consider themselves to be either hardware or software experts, but paid little heed to how joint advances in programming and architecture could lead to a synergistic outcome that might revolutionize computing practice.”

Dennis’ emphasis on synergy did not go unnoticed. Gerald Sussman, the Panasonic Professor of Electrical Engineering, points out “the relationship of [Dennis’] dataflow architecture to single-assignment programs, and thus to pure functional programs. This coupled the virtue of referential transparency in programming to the effective use of hardware parallelism. Dennis also pioneered the use of self-timed circuits in digital systems. The ideas from that work generalize to much of the work on highly distributed systems.” 

The Computation Structures Group attracted multiple scholars interested in developing asynchronous computing and dataflow architecture, many of whom became lifelong friends and collaborators. These included Peter Denning, with whom Dennis and Joseph Qualitz co-authored the textbook “Machines, Languages, and Computation” (1978); the late Arvind, who became faculty head of computer science for the Department of Electrical Engineering and Computer Science (EECS); and the late Guang R. Gao, who became distinguished professor of electrical and computer engineering at the University of Delaware.

In recognition of his contributions to the Multics project, Dennis was elected a fellow of the Institute of Electrical and Electronics Engineers (IEEE). Many additional honors would follow: He received the Association for Computing Machinery (ACM)/IEEE Eckert-Mauchly Award in 1984; was inducted as a fellow of the ACM (1994); was named to the National Academy of Engineering (2009); was elected to the ACM Special Interest Group on Operating Systems (SIGOPS) Hall of Fame (2012); and was awarded the IEEE John von Neumann Medal (2013).

A successful researcher, Dennis was perhaps equally influential in the development of EECS’ curriculum, developing six subjects in areas of computer theory and systems: Theoretical Models for Computation; Computation Structures; Structure of Computer Systems; Semantic Theory for Computer Systems; Semantics of Parallel Computation; and Computer System Architecture (taught in collaboration with Arvind). Several of the courses that Dennis developed continue to be taught, in updated form, to this day.

Following his retirement from teaching in 1987, he consulted on projects relating to parallel computer hardware and software for such varied groups as the NASA Research Institute for Advanced Computer Science; Boeing Aerospace; McGill University; the Architecture Group of Carlstedt Elektronik in Gothenburg, Sweden; and Acorn Networks, Inc. His fruitful relationship with former student Guang Gao continued in the form of a lecture tour through China, as well as co-authorship of a book, “Dataflow Architecture,” currently in progress at MIT Press.

A voracious lifelong learner, Dennis was fond of repeating a friend’s observation that “a scholar is just a book’s way of making another book.” In a full and active retirement, he still made room for music, trying his hand at composing; performing at Tanglewood as a tenor in Chorus Pro Musica; playing piano at the marriage of Guang Gao’s son Nick; and joining the chorus at the First Church in Belmont, Massachusetts, where his celebration of life (with concurrent livestreaming) will be held on Monday, June 8, at 2 p.m. 

Dennis is survived by his wife Therese Smith ’75; children David Hodgson Dennis of North Miami, Florida; Randall Dennis of Connecticut; and Galen Dennis, a resident of Australia. 


Learning with audiobooks

A new study finds that audiobooks help students learn new words — especially when paired with one-on-one instruction.


Millions of students nationwide use text-supplemented audiobooks, learning tools that are thought to help those who struggle with reading keep up in the classroom. A new study from scientists at MIT’s McGovern Institute for Brain Research finds that many students do benefit from the audiobooks, gaining new vocabulary through the stories they hear. But study participants learned significantly more when audiobooks were paired with explicit one-on-one instruction — and this was especially true for students who were poor readers. The group’s findings were reported on March 17 in the journal Developmental Science.

“It is an exciting moment in this ed-tech space,” says Grover Hermann Professor of Health Sciences and Technology John Gabrieli, noting a rapid expansion of online resources meant to support students and educators. “The admirable goal in all this is: Can we use technology to help kids progress, especially kids who are behind for one reason or another?” His team’s study — one of few randomized, controlled trials to evaluate educational technology — suggests a nuanced approach is needed as these tools are deployed in the classroom. “What you can get out of a software package will be great for some people, but not so great for other people,” Gabrieli says. “Different people need different levels of support.” Gabrieli is also a professor of brain and cognitive sciences and an investigator at the McGovern Institute. 

Ola Ozernov-Palchik and Halie Olson, scientists in Gabrieli’s lab, launched the audiobook study in 2020, when most schools in the United States had closed to slow the spread of Covid-19. The pandemic meant the researchers would not be able to ask families to visit an MIT lab to participate in the study — but it also underscored the urgency of understanding which educational technologies are effective, and for whom.

“What we were really concerned about as the pandemic hit is that the types of gaps that we see widen through the summers — the summer slide that affects poor readers and disadvantaged children to a greater extent — would be amplified by the pandemic,” says Ozernov-Palchik. Many educational technologies purport to ameliorate these gaps. But, Ozernov-Palchik says, “fewer than 10 percent of educational technology tools have undergone any type of research. And we know that when we use unproven methods in education, the students who are most vulnerable are the ones who are left further and further behind.”

So the team designed a study that could be done remotely, involving hundreds of third- and fourth-graders around the country. They focused on evaluating the impact of audiobooks on children’s vocabularies, because vocabulary knowledge is so important for educational success. Ozernov-Palchik explains that books are important for exposing children to new words, and when children miss out on that experience because they struggle to read, they can fall further behind in school.

Audiobooks allow students to access similar content in a different way. For their study, the researchers partnered with Learning Ally, an organization that produces audiobooks synchronized with highlighted text on a computer screen, so students can follow along as they listen.

“The idea is, they’re going to learn vocabulary implicitly through accessing those linguistically rich materials,” Ozernov-Palchik says. But that idea was untested. In contrast, she says, “we know that really what works in education, especially for the most vulnerable students, is explicit instruction.”

Before beginning their study, Ozernov-Palchik and Olson trained a team of online tutors to provide that explicit instruction. The tutors — college students with no educational expertise — learned how to apply proven educational methods to support students’ learning and understanding of challenging new words they encountered in their audiobooks.

Students in the study were randomly assigned to an eight-week intervention. Some were asked to listen to Learning Ally audiobooks for about 90 minutes a week. Another group received one-on-one tutoring twice a week, in addition to listening to audiobooks. A third group, in which students participated in mindfulness practice without using audiobooks or receiving tutoring, served as a control.

A diverse group of students participated, spanning different reading abilities and socioeconomic backgrounds. The study’s remote design — with flexibly scheduled testing and tutoring sessions conducted over Zoom — helped make that possible. “I think the pandemic pushed researchers to rethink how we might use these technologies to make our research more accessible and better represent the people that we’re actually trying to learn about,” says Olson, a postdoc who was a graduate student in Gabrieli’s lab.

Testing before and after the intervention showed that overall, students in the audiobooks-only group gained vocabulary. But on their own, the books did not benefit everyone. Children who were poor readers showed no improvement from audiobooks alone, but did make significant gains in vocabulary when the audiobooks were paired with one-on-one instruction. Even good readers learned more vocabulary when they received tutoring, although the differences for this group were less dramatic.

Individualized, one-on-one instruction can be time-consuming, and may not be routinely paired with audiobooks in the classroom. But the researchers say their study shows that effective instruction can be provided remotely, and you don’t need highly trained professionals to do it.

For students from households with lower socioeconomic status, the researchers found no evidence of significant gains, even when audiobooks were paired with explicit instruction — further emphasizing that different students have different needs. “I think this carefully done study is a note of caution about who benefits from what,” Gabrieli says.

The researchers say their study highlights the value and feasibility of objectively evaluating educational technologies — and that effort will continue. At Boston University, where she is a research assistant professor, Ozernov-Palchik has launched a new initiative to evaluate artificial intelligence-based educational tools’ impacts on student learning. 


A philosophy of work

As the NC Ethics of Technology Postdoctoral Fellow, Michal Masny is advancing dialogue, teaching, and research into the social and ethical dimensions of new computing technologies.


What makes work valuable? Michal Masny, the NC Ethics of Technology Postdoctoral Fellow in the MIT Department of Philosophy, investigates the role work plays in our lives and its impact on our well-being. 

Masny sees numerous benefits to work, beyond a paycheck. It’s a space for people to develop excellence at something, make a social contribution, gain social recognition, and create and sustain community. 

“Consider a future in which we shorten the work week, or one in which we eliminate work altogether,” Masny says. “I don’t believe either of these scenarios would be unambiguously good for everyone.”

“Work is both necessary and positively valuable,” he argues, further suggesting that our lives might be worsened if we were to eliminate work completely. “There can be optimal combinations of work and leisure time.”

Masny is completing his two-year term in the NC Ethics of Technology Fellowship at the end of the spring semester. In addition to advancing his research, Masny has been working to foster dialogue and educate students on issues at the intersection of philosophy and computing. This semester, Masny is teaching an undergraduate course, 24.131 (Ethics of Technology).

Masny advocates for an updated approach to educating complete, socially aware students. “I want to create scientists who think about their projects and potential outcomes as lawyers and philosophers might, and vice versa,” he says. Masny argues for the importance of eliminating the “wisdom gap” between these groups, citing scientist Carl Sagan’s warning about the dangers of becoming “powerful without becoming commensurately wise” as scientific and technological advances continue.

“The traditional division of labor is that scientists and engineers invent new technologies, and then philosophers and lawyers evaluate and regulate them,” he continues. “But the pace at which new technologies are invented and deployed has made this division of labor untenable.” 

Established in 2021 with support from the NC Cultural Foundation, the fellowship was created with the goal of advancing critical discourse and research in the ethics of technology and AI at MIT, and of making important research and information available to the global community.

Venture capitalist Songyee Yoon, founder and managing partner of AI-focused investment firm Principal Venture Partners and a supporter of the NC Ethics of Technology Fellowship, believes technology and scientific discovery are among humanity’s most valuable public goods, and artificial intelligence represents the most consequential technology of our time. 

“If we want the fabric of our society to be built responsibly, we must train our builders upstream, at the very moment they begin learning to design and scale technology. There is no better place to begin this work than MIT,” she says. “Supporting the Ethics of Technology Fellows Program was born from that conviction, and I am deeply encouraged to see it embraced at MIT.”

“In philosophy, you’re supposed to question everything”

Masny arrived at MIT in fall 2024, following a year as a postdoc at the Kavli Center for Ethics, Science, and the Public at the University of California at Berkeley. Originally from Poland, Masny received his PhD in philosophy from Princeton University after completing studies at Oxford University and the University of Warwick in the United Kingdom. 

He works mainly in value theory, ethics of technology, and social and political philosophy. His current research interests include the nature of human and animal well-being, our obligations to future generations, the risk of human extinction, the future of work, and anti-aging technology. 

During his tenure in the fellowship, Masny has published several research articles on ethical issues concerning the future of humanity — a topic closely relevant to thinking about the existential risks of AI development and deployment. 

“In philosophy, you’re supposed to question everything,” he says.  

Masny’s work in the fellowship continues a tradition of collaborative investigation and exploration that MIT encourages and celebrates. In fall 2024, Masny co-taught an introductory undergraduate course, STS.006J/24.06J (Bioethics), with Robin Scheffler, an associate professor in the Program in Science, Technology, and Society.

During the 2024-25 academic year, Masny led a student research group, “Deepfakes: Ethical, Political, and Epistemological Issues,” as a part of the Social and Ethical Responsibilities of Computing (SERC) Scholars Program. The group explored the ethical, political, and epistemological dimensions of concerns over misleading deepfakes, and how they can be mitigated.

Students in Masny’s cohort spent spring 2025 working in small groups on a number of projects and presented their findings in a poster session during the MIT Ethics of Computing Research Symposium at the MIT Schwarzman College of Computing.

In summer 2025, Masny assisted with a summer course in philosophy, 24.133/134 (Experiential Ethics), in which students subject their computer science and engineering projects to ethical scrutiny with the help of trained philosophers. 

He’s encouraged by the opportunities to test his ideas and share them with people who can help refine and improve them. 

Communities of practice and engagement

When considering the value of his experience at MIT, Masny lauds the philosophy department and the opportunities to collaborate with so many different kinds of scholars. To answer the kinds of questions his research uncovers, he says, you must range further afield. He values the space MIT creates for broad inquiry while also seeking connections between his findings on work, its value, and the human impact of technology on our social lives. 

“Typically, undergraduate philosophy courses include two hour-long lectures followed by discussion; a lecture is like an audiobook,” he says. Instead, he believes, they should be more like listening to a podcast or watching a talk show.

“I want the class to be an event in a student’s schedule,” he continues. 

Masny is also considering how to integrate valuable philosophical tools into life outside the classroom. Philosophy and research can support other kinds of inquiry. Developing philosophers’ mindsets is a net positive, by his reckoning. Designing better questions, for example, can lead to better, more insightful, more accurate answers. It can also improve students’ abilities to identify challenges.

Masny will begin teaching at the University of Colorado at Boulder in fall 2026, and wants to test new ideas while continuing his research into the value of work. 

Kieran Setiya, the Peter de Florez Professor in Philosophy and head of the Department of Linguistics and Philosophy, says the NC Ethics of Technology Postdoctoral Fellowship has allowed MIT to bring in a series of exceptional young philosophers working at the intersection of ethics and AI, studying the systemic effects of new computing technologies and the moral, social, and political challenges they pose.

“This is just the kind of applied interdisciplinary thinking we need to support and sustain at MIT,” he adds.


Slice and dice

SNIPE, a newly characterized biological defense system, directly protects bacteria by chopping up invading viral DNA.


What if the Trojan horse had been pulled to pieces, revealing the ruse and fending off the invasion, just as it entered the gates of Troy?

That’s an apt description of a newly characterized bacterial defense system that chops up foreign DNA.

Bacteria and the viruses that infect them, bacteriophages — phages for short — are ceaselessly at odds, with bacteria developing methods to protect themselves against phages that are constantly striving to overcome those safeguards.

New research from the Department of Biology at MIT, recently published in Nature, describes a defense system that is integrated into the protective membrane that encapsulates bacteria. SNIPE, which stands for surface-associated nuclease inhibiting phage entry, contains a nuclease domain that cleaves genetic material, chopping the invading phage genome into harmless fragments before it can appropriate the host’s molecular machinery to make more phages. 

Daniel Saxton, a postdoc in the Laub Lab and the paper’s first author, was initially drawn to studying this bacterial defense system in E. coli, in part because it is highly unusual to have a nuclease that localizes to the membrane, as most nucleases are free-floating in the cytoplasm, the gelatinous fluid that fills the space inside cells.

“The other thing that caught my attention is that this is something we call a direct defense system, meaning that when a phage infects a cell, that cell will actually survive the attack,” Saxton says. “It’s hard to fend off a phage directly in a cell and survive — but this defense system can do it.” 

Light it up

For Saxton, the project came into focus during a fluorescence-based experiment in which viral genetic material would light up if it successfully penetrated the bacteria. 

“SNIPE was obliterating the phage DNA so fast that we couldn’t even see a fluorescent spot,” Saxton recalls. “I don’t think I’ve ever seen such an effective defense system before — you can barrage the bacteria with hundreds of phage per cell, but SNIPE is like god-tier protection.”

When the nuclease domain of SNIPE was mutated so it couldn’t chop up DNA, fluorescent spots appeared as usual, and the bacteria succumbed to the phage infection. 

Bacteria maintain tight control over all their defense systems, lest they be turned against their host. Some systems remain dormant until they flare up, for example by halting all protein translation in the cell, while others can distinguish between bacterial DNA and foreign, invading phage DNA. Only two mechanisms in the latter category had been characterized before researchers uncovered SNIPE.

“Right now, the phage field is at a really interesting spot where people are discovering phage defense systems at a breakneck pace,” Saxton says. 

Problems at the periphery

Saxton says they had to approach the work in a somewhat roundabout way because there are currently no published structures depicting all the steps of phage genome injection. Studying processes at the membrane is challenging: Membranes are dense and chaotic, and phage genome injection is a highly transient process, lasting only a few minutes. 

SNIPE seems to discern viral DNA by interacting with proteins the phage uses to tunnel through the bacteria’s protective membrane. This “subcellular localization,” according to Saxton, may also prevent SNIPE from inadvertently chopping up the bacteria’s own genetic material.

The model outlined in the paper is that one region of SNIPE binds to a bacterial membrane protein called ManYZ, while another region likely binds to the tape measure protein from the phage. 

The tape measure protein got its name because it determines the length of the phage tail — the part of the phage between the small, leglike protrusions and the bulbous head, which contains the phage’s genetic material. The researchers revealed that the phage’s tape measure protein enters the cytoplasm during injection, a phenomenon that had not been physically demonstrated before. 

There may also be other proteins or interactions involved. 

“If you shunt the phage genome injection through an alternate pathway that isn’t ManYZ, suddenly SNIPE doesn’t defend against the phage nearly as well,” Saxton says. “It’s unclear exactly how these proteins interact, but we do know that these two proteins are involved in this genome injection process.” 

Future directions

Saxton hopes that future work will expand our understanding of what occurs during phage genome injection and uncover the structures of the proteins involved, especially the tunnel complex in the membrane through which phages insert their genome.

Members of the Laub Lab are already collaborating with another lab to determine the structure of SNIPE. In the meantime, Saxton has been working on a new defense system in which molecular mimicry — bacterial proteins imitating phage proteins — may play a role. 

Michael T. Laub, the Salvador E. Luria Professor of Biology and a Howard Hughes Medical Institute investigator, notes that one of the breakthrough experiments for demonstrating how SNIPE works came from a brainstorming session at a lab retreat.

“Daniel and I were kind of stuck with how to directly measure the effect of SNIPE during infection, but another postdoc in the lab, Ian Roney, who is a co-author on the paper, came up with a very clever idea that ultimately worked perfectly,” Laub recalls. “It’s a great example of how powerful internal collaborations can be in pushing our science forward.”


A new type of electrically driven artificial muscle fiber

Electrofluidic fibers mimic how natural muscle fibers bundle, and could enable compact, silent robotic and prosthetic systems.


Muscles are remarkably effective systems for generating controlled force, and engineers developing hardware for robots or prosthetics have long struggled to create analogs that can approach their unique combination of strength, rapid response, scalability, and control. But now, researchers at the MIT Media Lab and Politecnico di Bari in Italy have developed artificial muscle fibers that come closer to matching many of these qualities.

Like the fibers that bundle together to form biological muscles, these fibers can be arranged in different configurations to meet the demands of a given task. Unlike conventional robotic actuation systems, they are compliant enough to interface comfortably with the human body and operate silently without motors, external pumps, or other bulky supporting hardware.

The new electrofluidic fiber muscles — electrically driven actuators built in fiber format — are described in a recent paper published in Science Robotics. The work is led by Media Lab PhD candidate Ozgun Kilic Afsar; Vito Cacucciolo, a professor at the Politecnico di Bari; and four co-authors.

The new system brings together two technologies, Afsar explains. One is a fluidically driven artificial muscle known as a thin McKibben actuator, and the other is a miniaturized solid-state pump based on electrohydrodynamics (EHD), which can generate pressure inside a sealed fluid compartment without moving parts or an external fluid supply.

Until now, most fluid-driven soft actuators have relied on external “heavy, bulky, oftentimes noisy hydraulic infrastructure,” Afsar says, “which makes them difficult to integrate into systems where mobility or compact, lightweight design is important.” This has created a fundamental bottleneck in the practical use of fluidic actuators in real-world applications.

The key to breaking through that bottleneck was the use of integrated pumps based on electrohydrodynamic principles. These millimeter-scale, electrically driven pumps generate pressure and flow by injecting charge into a dielectric fluid, creating ions that drag the fluid along with them. Weighing just a few grams each and not much thicker than a toothpick, they can be fabricated continuously and scaled easily. “We integrated these fiber pumps into a closed fluidic circuit with the thin McKibben actuators,” Afsar says, noting that this was not a simple task given the different dynamics of the two components.

A key design strategy was to pair these fibers in what are known as antagonistic configurations. Cacucciolo explains that this is where “one muscle contracts while another elongates,” as when you bend your arm and your biceps contract while your triceps stretch. In their system, a millimeter-scale fiber pump sits between two similarly scaled McKibben actuators, driving fluid into one actuator to contract it while simultaneously relaxing the other.

“This is very much reminiscent of how biological muscles are configured and organized,” Afsar says. “We didn’t choose this configuration simply for the sake of biomimicry, but because we needed a way to store the fluid within the muscle design.” The need for an external reservoir open to the atmosphere has been one of the main factors limiting the practical use of EHD pumps in robotic systems outside the lab. By pairing two McKibben fibers in line, with a fiber pump between them to form a closed circuit, the team eliminated that need entirely.

Another key finding was that the muscle fibers needed to be pre-pressurized, rather than simply filled. “There is a minimum internal system pressure that the system can tolerate,” Afsar says, “below which the pump can degrade or temporarily stop working.” This happens because of cavitation, in which vapor bubbles form when the pressure at the pump inlet drops below the vapor pressure of the liquid, eventually leading to dielectric breakdown.

To prevent cavitation, they applied a “bias” pressure from the outset so that the pressure at the fiber pump inlet never falls below the liquid’s vapor pressure. The magnitude of this bias pressure can be adjusted depending on the application. “To achieve the maximum contraction the muscle can generate, we found there is a specific bias pressure range that is optimal,” she says. “If you want to configure the system for faster response, you might increase that bias pressure, though with some reduction in maximum contraction.”

Cacucciolo adds that most of today’s robotic limbs and hands are built around electric servo motors, whose configuration differs fundamentally from that of natural muscles. Servo motors generate rotational motion on a shaft that must be converted into linear movement, whereas muscle fibers naturally contract and extend linearly, as do these electrofluidic fibers. 

“Most robotic arms and humanoid robots are designed around the servo motors that drive them,” he says. “That creates integration constraints, because servo motors are hard to package densely and tend to concentrate mass near the joints they drive. By contrast, artificial muscles in fiber form can be packed tightly inside a robot or exoskeleton and distributed throughout the structure, rather than concentrated near a joint.”

These electrofluidic muscles may be especially useful for wearable applications, such as exoskeletons that help a person lift heavier loads or assistive devices that restore or augment dexterity. But the underlying principles could also apply more broadly. “Our findings extend to fluid-driven robotic systems in general,” Cacucciolo says. “Wherever fluidic actuators are used, or where engineers want to replace external pumps with internal ones, these design principles could apply across a wide range of fluid-driven robotic systems.”

This work “presents a major advancement in fiber-format soft actuation,” which “addresses several long-standing hurdles in the field, particularly regarding portability and power density,” says Herbert Shea, a professor in the Soft Transducers Laboratory at Ecole Polytechnique Federale de Lausanne in Switzerland, who was not associated with this research. “The lack of moving parts in the pump makes these muscles silent, a major advantage for prosthetic devices and assistive clothing,” he says.

Shea adds that “this high-quality and rigorous work bridges the gap between fundamental fluid dynamics and practical robotic applications. The authors provide a complete system-level solution — characterizing the individual components, developing a predictive physical model, and validating it through a range of demonstrators.”

In addition to Afsar and Cacucciolo, the team also included Gabriele Pupillo and Gennaro Vitucci at Politecnico di Bari and Wedyan Babatain and Professor Hiroshi Ishii at the MIT Media Lab. The work was supported by the European Research Council and the Media Lab’s multi-sponsored consortium.


Bridging space research and policy

PhD student Carissma McGee studies exoplanets and examines intellectual property frameworks for space collaborations.


While earning her dual master’s degrees in aeronautics and astronautics and public policy, Carissma McGee SM ’25 learned to navigate between two seemingly distinct worlds, bridging rigorous technical analysis and policy decisions.

As an undergraduate congressional intern and researcher, she saw a persistent gap in space policymaking. Policymakers often lacked technical expertise, while researchers were rarely involved in increasingly complex questions surrounding intellectual property and international collaboration in space.

Her work on intellectual property frameworks for space collaborations directly addresses that gap, combining expertise in gravitational microlensing and space telescope operations with policy analysis to tackle emerging governance challenges.

“I want to bring an expert level of science into the rooms where policy decisions are made,” says McGee, now a doctoral student in aeronautics and astronautics. “That perspective is critical for shaping the future of research and exploration.”

Likewise, she wants to bring her expertise in public policy into the lab.

“I enjoy being able to ask questions about intellectual property, territorial claims, knowledge transfer, or allocation of resources early on in a research project,” adds McGee.

McGee’s fascination with space started during her high school years in Delaware, when she first volunteered at a local observatory and then interned at the NASA Goddard Space Flight Center in Maryland.

Following high school, McGee attended Howard University. She was selected to participate in the Karsh STEM Scholars Program, a full-ride scholarship track for students committed to working continuously toward earning doctoral degrees. Howard, which holds an R1 research classification from the Carnegie Foundation, is in close proximity to the Goddard Space Flight Center, as well as the American Astronomical Society and the D.C. Space Grant Consortium.

In 2020, after her first year at Howard, the Covid-19 pandemic sent McGee back to her hometown in Delaware. As it turned out, that gave her an opportunity to work with her local congresswoman, Lisa Blunt Rochester, then a U.S. representative. In addition to supporting the congresswoman’s constituents, she drafted dozens of letters related to STEM education and energy reform.

Working in government gave McGee an opportunity to use her voice to “advocate for astronomy and astrophysics with the American Astronomical Society, advocate for space sciences, and for science representation.”

As an undergraduate, McGee also conducted research linking computational physics and astronomy, working with both NASA’s Jet Propulsion Laboratory and Yale University’s Department of Astronomy. She also continued research begun in 2021 with the Harvard and Smithsonian Center for Astrophysics’ Black Hole Initiative, contributing to work associated with the Event Horizon Telescope.

When she visited MIT in 2023, McGee was struck by the Institute’s openness to interdisciplinary work and support of her interest in combining aeronautics and astronautics with policy.

Once at MIT, she started working in the Space, Telecommunications, Astronomy, and Radiation Laboratory (STAR Lab) with advisor Kerri Cahoy, professor of aeronautics and astronautics. McGee says she experienced a great deal of freedom to craft her own program.

“I was drawn to the lab’s work on satellite missions and CubeSats, and excited to discover that I could pursue exoplanet astrophysics research within this framework and that submitting a dual thesis or focusing on astrophysics applications was possible,” says McGee. “When I expressed interest in participating in the Technology [and] Policy Program for a dual thesis in a framework for space policy, my advisors encouraged me to explore how we could integrate these diverse interests into a path forward.”

In 2024, McGee was awarded a MathWorks Fellowship to pursue research associated with the Nancy Grace Roman Space Telescope and join a NASA mission.

“It was just amazing to join the exoplanet group at NASA,” she says. “I had a front-row seat to see how real researchers and workers navigate complex problems.”

McGee credits MathWorks with helping fellows to “be at the forefront of knowledge and shaping innovation.”

One of her proudest academic accomplishments is PyLIMASS, a software system she developed with collaborators at Louisiana State University, the Ohio State University, and NASA’s Goddard Space Flight Center. The tool enables more accurate mass and distance estimates in gravitational microlensing events, helping the Roman Space Telescope project meet its precision goals for studying exoplanets.

“To build software that didn’t previously exist — and to know it will be used for the Roman mission — is incredibly exciting,” McGee says.

In May 2025, McGee graduated with dual master’s degrees in aeronautics and astronautics and technology and policy. That same month, she presented her research at the American Astronomical Society meeting in Anchorage, Alaska, and at the Technology Management and Policy Conference in Portugal.

McGee remained at MIT to pursue her doctoral degree. Last fall, as an MIT BAMIT Community Advancement Program and Fund Fellow, she hosted a daylong conference for STEM students focused on how intellectual property frameworks shape technical fields.

McGee’s accomplishments and contributions have been celebrated with a number of honors recently. In 2026, she was named Miss Black Massachusetts United States, was recognized among MIT’s Graduate Students of Excellence, and received the MIT MLK Leadership Award in recognition of her service, integrity, and community impact.

Beyond her academic work, McGee is active across campus. She teaches Pilates with MIT Recreation, participates in the Graduate Women in Aerospace Engineering group, and serves as a graduate resident assistant in an undergraduate dorm on East Campus.

She credits the AeroAstro graduate community with keeping her momentum going.

“Even if we’re tired, there’s this powerful camaraderie among AeroAstro graduate students working together. Seeing my peers push through similar research milestones and solve daunting problems motivates you to advance beyond the finish line to further developments in the field.”


New technique makes AI models leaner and faster while they’re still learning

Researchers use control theory to shed unnecessary complexity from AI models during training, cutting compute costs without sacrificing performance.


Training a large artificial intelligence model is expensive, not just in dollars, but in time, energy, and computational resources. Traditionally, obtaining a smaller, faster model either requires training a massive one first and then trimming it down, or training a small one from scratch and accepting weaker performance. 

Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), Max Planck Institute for Intelligent Systems, European Laboratory for Learning and Intelligent Systems, ETH, and Liquid AI have now developed a new method that sidesteps this trade-off entirely, compressing models during training, rather than after.

The technique, called CompreSSM, targets a family of AI architectures known as state-space models, which power applications ranging from language processing to audio generation and robotics. By borrowing mathematical tools from control theory, the researchers can identify which parts of a model are pulling their weight and which are dead weight, before surgically removing the unnecessary components early in the training process.

"It's essentially a technique to make models grow smaller and faster as they are training," says Makram Chahine, a PhD student in electrical engineering and computer science, CSAIL affiliate, and lead author of the paper. "During learning, they're also getting rid of parts that are not useful to their development."

The key insight is that the relative importance of different components within these models stabilizes surprisingly early during training. Using a mathematical quantity called Hankel singular values, which measure how much each internal state contributes to the model's overall behavior, the team showed they can reliably rank which dimensions matter and which don't after only about 10 percent of the training process. Once those rankings are established, the less-important components can be safely discarded, and the remaining 90 percent of training proceeds at the speed of a much smaller model.
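The ranking step can be illustrated outside any training loop. The sketch below is illustrative code, not the authors' implementation: for a small linear state-space model, it computes Hankel singular values from the controllability and observability Gramians, then decides how many state dimensions to keep based on how much of the input-output energy they capture. All dimensions and thresholds here are made-up example values.

```python
# Illustrative sketch (not the CompreSSM code): rank the state dimensions of a
# discrete-time linear state-space model by Hankel singular values.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

rng = np.random.default_rng(0)
n, m, p = 16, 2, 2                    # state, input, output dimensions (example values)

A = rng.standard_normal((n, n))
A *= 0.9 / max(abs(np.linalg.eigvals(A)))   # rescale to spectral radius 0.9, so the system is stable
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

# Gramians of the system x_{k+1} = A x_k + B u_k, y_k = C x_k:
P = solve_discrete_lyapunov(A, B @ B.T)      # controllability: A P A^T - P + B B^T = 0
Q = solve_discrete_lyapunov(A.T, C.T @ C)    # observability:   A^T Q A - Q + C^T C = 0

# Hankel singular values are the square roots of the eigenvalues of P Q,
# sorted from most to least important state direction.
hsv = np.sort(np.sqrt(np.abs(np.linalg.eigvals(P @ Q))))[::-1]

# Keep only the dimensions carrying (here) 99% of the Hankel "energy".
energy = np.cumsum(hsv) / hsv.sum()
keep = int(np.searchsorted(energy, 0.99)) + 1
print(f"retain {keep} of {n} state dimensions")
```

In the paper's setting this ranking is computed partway through training rather than on a fixed system, and the discarded dimensions are removed so the rest of training runs on the smaller model.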

"What's exciting about this work is that it turns compression from an afterthought into part of the learning process itself,” says senior author Daniela Rus, MIT professor and director of CSAIL. “Instead of training a large model and then figuring out how to make it smaller, CompreSSM lets the model discover its own efficient structure as it learns. That's a fundamentally different way to think about building AI systems.”

The results are striking. On image classification benchmarks, compressed models maintained nearly the same accuracy as their full-sized counterparts while training up to 1.5 times faster. A compressed model reduced to roughly a quarter of its original state dimension achieved 85.7 percent accuracy on the CIFAR-10 benchmark, compared to just 81.8 percent for a model trained at that smaller size from scratch. On Mamba, one of the most widely used state-space architectures, the method achieved approximately 4x training speedups, compressing a 128-dimensional model down to around 12 dimensions while maintaining competitive performance.

"You get the performance of the larger model, because you capture most of the complex dynamics during the warm-up phase, then only keep the most-useful states," Chahine says. "The model is still able to perform at a higher level than training a small model from the start."

What makes CompreSSM distinct from existing approaches is its theoretical grounding. Conventional pruning methods train a full model and then strip away parameters after the fact, meaning you still pay the full computational cost of training the big model. Knowledge distillation, another popular technique, requires training a large "teacher" model to completion and then training a second, smaller "student" model on top of it, essentially doubling the training effort. CompreSSM avoids both of these costs by making informed compression decisions mid-stream.
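The compute argument can be made concrete with back-of-the-envelope arithmetic. The 10 percent warm-up figure comes from the article; the 4x per-step speedup for the compressed model is an assumed, illustrative number, as is the optimistic accounting for distillation:

```python
# Normalized training-cost comparison; the 4x per-step speedup is an
# assumption for illustration, not a figure from the paper.
full = 1.0        # cost of training the full-sized model end to end
warmup = 0.10     # CompreSSM compresses after roughly 10% of training
speedup = 4.0     # assumed per-step speedup once the model is compressed

compressm_cost = warmup * full + (1 - warmup) * full / speedup  # 0.1 + 0.225
post_hoc_pruning = full                 # pruning after training pays the full cost
distillation = full + full / speedup    # teacher plus student; optimistic, since
                                        # it ignores teacher forward passes

print(f"CompreSSM:        {compressm_cost:.3f}")
print(f"Post-hoc pruning: {post_hoc_pruning:.3f}")
print(f"Distillation:    >{distillation:.3f}")
```

Under these assumptions, compressing mid-training costs about a third of full training, while both alternatives pay for the large model at least once.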

The team benchmarked CompreSSM head-to-head against both alternatives. Compared to Hankel nuclear norm regularization, a recently proposed spectral technique for encouraging compact state-space models, CompreSSM was more than 40 times faster, while also achieving higher accuracy. The regularization approach slowed training by roughly 16 times because it required expensive eigenvalue computations at every single gradient step, and even then, the resulting models underperformed. Against knowledge distillation on CIFAR-10, CompreSSM held a clear advantage for heavily compressed models: At smaller state dimensions, distilled models saw significant accuracy drops, while CompreSSM-compressed models maintained near-full performance. And because distillation requires a forward pass through both the teacher and student at every training step, even its smaller student models trained slower than the full-sized baseline.

The researchers proved mathematically that the importance of individual model states changes smoothly during training, thanks to an application of Weyl's theorem, and showed empirically that the relative rankings of those states remain stable. Together, these findings give practitioners confidence that dimensions identified as negligible early on won't suddenly become critical later.

The method also comes with a pragmatic safety net. If a compression step causes an unexpected performance drop, practitioners can revert to a previously saved checkpoint. "It gives people control over how much they're willing to pay in terms of performance, rather than having to define a less-intuitive energy threshold," Chahine explains.

There are some practical boundaries to the technique. CompreSSM works best on models that exhibit a strong correlation between the internal state dimension and overall performance, a property that varies across tasks and architectures. The method is particularly effective on multi-input, multi-output (MIMO) models, where the relationship between state size and expressivity is strongest. For per-channel, single-input, single-output architectures, the gains are more modest, since those models are less sensitive to state dimension changes in the first place.

The theory applies most cleanly to linear time-invariant systems, although the team has developed extensions for the increasingly popular input-dependent, time-varying architectures. And because the family of state-space models extends to architectures like linear attention, a growing area of interest as an alternative to traditional transformers, the potential scope of application is broad.

Chahine and his collaborators see the work as a stepping stone. The team has already demonstrated an extension to linear time-varying systems like Mamba, and future directions include pushing CompreSSM further into matrix-valued dynamical systems used in linear attention mechanisms, which would bring the technique closer to the transformer architectures that underpin most of today's largest AI systems.

"This had to be the first step, because this is where the theory is neat and the approach can stay principled," Chahine says. "It's the stepping stone to then extend to other architectures that people are using in industry today."

"The work of Chahine and his colleagues provides an intriguing, theoretically grounded perspective on compression for modern state-space models (SSMs)," says Antonio Orvieto, ELLIS Institute Tübingen principal investigator and MPI for Intelligent Systems independent group leader, who wasn't involved in the research. "The method provides evidence that the state dimension of these models can be effectively reduced during training and that a control-theoretic perspective can successfully guide this procedure. The work opens new avenues for future research, and the proposed algorithm has the potential to become a standard approach when pre-training large SSM-based models."

The work, which was accepted as a conference paper at the International Conference on Learning Representations 2026, will be presented later this month. It was supported, in part, by the Max Planck ETH Center for Learning Systems, the Hector Foundation, Boeing, and the U.S. Office of Naval Research.


The flawed fundamentals of failing banks

MIT economist Emil Verner’s historical detective work shows how banking-sector crises develop out of bad business practices.


Bank runs are dramatic: Picture Depression-era footage of customers lined up, trying to get their deposits back. Or recall Lehman Brothers emptying out in 2008 or Silicon Valley Bank collapsing in 2023.

But what causes these runs in the first place? One viewpoint is that something of a self-fulfilling prophecy is involved. Panic spreads, and suddenly many customers are seeking their money back, until an otherwise solid institution is run into the ground.

That is not exactly Emil Verner’s position, however. Verner, an MIT economist, has been studying bank failures empirically for years and now has a different perspective. Verner and his collaborators have produced extensive evidence suggesting that when banks fail, it is usually because they are in a fundamentally shaky position. A bank run generally finishes off an already flawed business rather than upending a viable one.

“What we essentially find is that banks that fail are almost always very weak, and are in trouble,” says Verner, who is the Jerome and Dorothy Lemelson Professor of Management and Financial Economics at the MIT Sloan School of Management. “Most banks that have been subject to runs have been pretty insolvent. Runs are more the final spasm that brings down weak banks, rather than the causes of indiscriminate failures.”

This conclusion has plenty of policy relevance for the banking sector and follows a lengthy analysis of historical data. In one forthcoming paper, in the Quarterly Journal of Economics, Verner and two colleagues reviewed U.S. bank data from 1863 to 2024, concluding that “the primary cause of bank failures and banking crises is almost always and everywhere a deterioration of bank fundamentals.” In a 2021 paper in the same journal, Verner and two other colleagues studied banking data from 46 countries covering 1870-2016, and found that declining bank fundamentals usually preceded runs. And currently, Verner is working to make more historical U.S. bank data publicly available to scholars.

Seen in this light, sure, bank runs are damaging, but bank failures likely have more to do with bad portfolios, poor risk management, and minimal assets in reserve than with sentiment-driven client behavior.

“From the idea that bank crises are really about sudden runs on bank debt, we’re moving to thinking that runs are one symptom of a crisis that runs deeper,” Verner says. “For most people, we’re saying something reasonable, refining our knowledge, and just shifting the emphasis.”

For his research and teaching, Verner received tenure at MIT last year.

Landing in a “great place”

Verner is a native of Denmark who also lived in the U.S. for several years while growing up. Around the time he was finishing school, the U.S. housing market imploded, taking some financial institutions with it.

“Everything came crashing down,” Verner says. “I got obsessed with understanding it.”

As an undergraduate, he studied economics at the University of Copenhagen. After three years, Verner was unconvinced the discipline had fully explained financial crises. He decided to keep studying economics in graduate school, and was accepted into the PhD program at Princeton University.

Along the way, Verner became a historically minded economist, digging into data and cases from past decades to shed light on larger patterns about crises and bank insolvency.

“I’ve always thought history was extremely fascinating in itself,” Verner says. And while history may not repeat, he notes, it is “a really valuable tool. It helps you think through what could happen, what are similar scenarios, and how agents acted when facing similar constraints and incentives in the past.”

For studying financial crises in particular, he adds, history helps in multiple ways. Crises are rare, so historical cases add data. Changes over time, like more financial regulations and more complex investment tools, provide different settings to examine the same cause-and-effect issues. “History is a useful laboratory to study these questions,” Verner says.

After earning his PhD from Princeton, Verner went on the job market and landed his faculty position at MIT Sloan. Many aspects of Institute life — the classroom experience, the collegiality, the campus — have strongly resonated with him.

“MIT is a great place,” Verner says simply. “Great colleagues, great students.”

Focused on fundamentals

Over the last decade, Verner has published papers on numerous topics in addition to banking crises. As an outgrowth of his doctoral work, for instance, he published innovative papers examining the dampening effect that household debt has on economic growth in many countries. He also co-authored the lead paper in an issue of the American Economic Review last year examining the way German hyperinflation after World War I reallocated wealth to large businesses with substantial debt, leading them to grow faster.

Still, the main focus of Verner’s work right now is on banking crises and bank failures — including their causes. In a 2024 paper looking at private lending in 117 countries since 1940, Verner and economist Karsten Müller showed that financial crises are often preceded by credit booms in what scholars call the “non-tradeable” sector of the economy. That includes industries such as retail or construction, which do not produce easily tradeable goods. Firms in the non-tradeable sector tend to rely more heavily on loans secured by real estate; during real estate booms, such firms use high valuations to borrow more, and they become more vulnerable to crashes — which helps explain why bank portfolios, in turn, can crater as well.

In recent years, in the process of studying these topics, Verner has helped expand the domain of known U.S. historical data in the field. Working with economists Sergio Correia and Stephan Luck, he has helped apply large language models to historical newspaper collections, unearthing information about 3,421 runs on individual banks from 1863 to 1934; they are making that data freely available to other scholars.

This topic has important policy implications. If runs are a contagion bringing down otherwise healthy banks, then one solution is to provide banks with more liquidity to get through the crisis, something that has indeed been tried in the U.S. However, if bank failures are rooted in fundamentals, such as excessive risk-taking and too little capital on hand, then more systemic policies targeting best practices might be logical. At a minimum, substantive new research can help alter the contents of those discussions.

“When banks fail, it’s usually because these banks have taken a lot of risk and have big losses,” Verner says. “It’s rarely unjustified. So that means these types of liquidity interventions alone are not enough to stop a crisis.”

The expansive research Verner has helped conduct includes a number of specific indicators that fundamentals are a major factor in failures. For instance, examining how infrequently failed banks recover all their assets shows how shaky their foundations were.

“The recovery rate on assets is informative about how solvent a bank was,” Verner says. “This is where I think we’ve contributed something new.” Some economists in the past have cited particular examples of struggling banks making depositors whole, but those are exceptions, not the rule. “Sometimes people argue this or that bank was actually solvent because depositors ended up getting all their money back, and that might be true of one bank, but on aggregate it’s not the case,” Verner says.

Overall, Verner intends to keep following the facts, digging up more evidence, and seeing where it leads.

“While there is this notion that liquidity problems can arise pretty much out of nowhere, I think we are changing that emphasis by showing that financial crises happen basically because banks become insolvent,” Verner underscores. “And then the bank run is that final dramatic spasm — which slightly shifts how we teach and talk about it, and perhaps think about the policy response.”


Desirée Plata appointed associate dean of engineering

Faculty member in civil and environmental engineering will advance research and entrepreneurial initiatives across the School of Engineering.


Desirée Plata, the School of Engineering Distinguished Climate and Energy Professor in the MIT Department of Civil and Environmental Engineering, has been named associate dean of engineering, effective July 1.

In her new role, Plata will focus on fostering early-stage research initiatives across the school’s faculty and on strengthening entrepreneurial and innovation efforts. She will also support the school’s Technical Leadership and Communication (TLC) Programs, including the Gordon Engineering Leadership Program, the Daniel J. Riccio Graduate Engineering Leadership Program, the School of Engineering Communication Lab, and the Undergraduate Practice Opportunities Program.

Plata will join Associate Dean Hamsa Balakrishnan, who continues to lead faculty searches, fellowships, and outreach programs. Together, the two associate deans will serve on key leadership groups including Engineering Council and the Dean’s Advisory Council to shape the school’s strategic priorities.

“Desirée’s leadership, scholarship, and commitment to excellence have already had a meaningful impact on the MIT community, and I look forward to the perspective and energy she will bring to this role,” says Paula T. Hammond, dean of the School of Engineering and Institute Professor in the Department of Chemical Engineering.

Plata’s research centers on the sustainable design of industrial processes and materials through environmental chemistry, with an emphasis on clean energy technologies. She develops ways to make industrial processes more environmentally sustainable, incorporating environmental objectives into the design phase of processes and materials. Her work spans nanomaterials and carbon-based materials for pollution reduction, as well as advanced methods for environmental cleanup and energy conversion. Plata directs MIT’s Parsons Laboratory, which conducts interdisciplinary research on natural systems and human adaptation to environmental change.

Plata is a leader on campus and beyond in climate and sustainability initiatives. She serves as director of the MIT Climate and Sustainability Consortium (MCSC), an industry–academia collaboration launched to accelerate solutions for global climate challenges. She founded and directs the MIT Methane Network, a multi-institution effort to cut global methane emissions within this decade. Plata also co-directs the National Institute of Environmental Health Sciences MIT Superfund Research Program, which focuses on strategies to protect communities concerned about hazardous chemicals, pollutants, and other contaminants in their environment.

Beyond academia, Plata has co-founded two climate and energy startups, Nth Cycle and Moxair. Nth Cycle is redefining metal refining and the domestic battery supply chain. Earlier this month, the company signed a $1.1 billion off-take agreement to help establish a secure and circular technology for battery minerals.

Her company Moxair specializes in advanced approaches for low-level methane monitoring and destruction. In 2026, with support from the U.S. Department of Energy and in collaboration with MIT, Moxair will build and demonstrate a first-of-its-kind dilute methane oxidation technology to tackle methane emissions using transition metal catalysts.

As an educator, Plata has helped develop programs that enhance research experience for students and postdocs. She played a pivotal role in the founding of the MIT Postdoctoral Fellowship Program for Engineering Excellence, serving on its faculty steering committee, overseeing admissions, and leading both the academic track and entrepreneurship track. She also helped design the MCSC Climate and Sustainability Scholars Program, a yearlong program open to juniors and seniors across MIT.

Plata earned a BS in chemistry from Union College in 2003 and a PhD in the joint MIT-Woods Hole Oceanographic Institution program in oceanography and applied ocean science in 2009. After completing her doctorate, she held faculty positions at Mount Holyoke College, Duke University, and Yale University. While at Yale, she served as associate director of research at the university’s Center for Green Chemistry and Green Engineering. In 2018, Plata joined MIT’s faculty in the Department of Civil and Environmental Engineering.

Her work as a scholar and educator has earned numerous awards and honors. She received MIT’s Harold E. Edgerton Faculty Achievement Award in 2020, recognizing her excellence in research, teaching, and service. She has also been honored with an NSF CAREER Award and the Odebrecht Award for Sustainable Development. Plata is a fellow of the American Chemical Society and was a Young Investigator Sustainability Fellow at Caltech.

Plata is a two-time National Academy of Engineering Frontiers of Engineering Fellow and a two-time National Academy of Sciences Kavli Frontiers of Science Fellow. Her dedication to mentoring was recognized with MIT’s Junior Bose Award for Excellence in Teaching and the Frank Perkins Graduate Advising Award.


Physicists zero in on the mass of the fundamental W boson particle

The team’s ultra-precise measurement confirms the Standard Model’s predictions.


When fundamental particles are heavier or lighter than expected, physicists’ understanding of the universe can tip into the unknown. A particle that is just beyond its predicted mass can unravel scientists’ assumptions about the forces that make up all of matter and space. But now, a new precision measurement has reset the balance and confirmed scientists’ theories, at least for one of the universe’s core building blocks.

In a paper appearing today in the journal Nature, an international team including MIT physicists reports a new, ultraprecise measurement of the mass of the W boson.

The W boson is one of two elementary particles that embody the weak force, which is one of the four fundamental forces of nature. The weak force enables certain particles to change identities, such as from protons to neutrons and vice versa. This morphing is what drives radioactive decay, as well as nuclear fusion, which powers the sun.

Now, scientists have determined the mass of the W boson by analyzing more than 1 billion proton-collision events produced by the Large Hadron Collider (LHC) at CERN (the European Organization for Nuclear Research) in Switzerland. The LHC accelerates protons toward each other at close to the speed of light. When they collide, two protons can produce a W boson, among a shower of other particles.

Catching a W boson is nearly impossible, as it decays almost immediately into two particles, one of which, a neutrino, is so elusive that it cannot be detected. Scientists are left to measure the other particle, known as a muon, and model how it and the unseen neutrino add up to the mass of their parent, the W boson. In the new study, scientists used the Compact Muon Solenoid (CMS) experiment, a particle detector at the LHC that precisely tracks muons and other particles produced in the aftermath of proton collisions.

From billions of proton-proton collisions, the team identified 100 million events that produced a W boson decaying to a muon and a neutrino. For each of these events, they carried out detailed analyses to zero in on a precise mass measurement. In the end, they determined that the W boson has a mass of 80360.2 ± 9.9 megaelectron volts (MeV). This new mass is in line with predictions of the Standard Model, which is physicists’ best rulebook for describing the fundamental particles and forces of nature.

The precision of the new measurement is on par with a previous measurement made in 2022 by the Collider Detector at Fermilab (CDF). That result took physicists by surprise, as it was significantly heavier than what the Standard Model predicted, and therefore raised the possibility of “new physics,” such as particles and forces that have yet to be discovered.

Because the new CMS measurement is just as precise as the CDF result and agrees with the Standard Model along with a number of other experiments, it is more likely that physicists are on solid ground in terms of how they understand the W boson.
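For a back-of-the-envelope sense of what "agrees" and "disagrees" mean here, one can express the gap between two measurements in units of their combined uncertainty. The CDF and Standard Model values below are not quoted in this article; they are the commonly cited published figures (CDF 2022: 80,433.5 ± 9.4 MeV; electroweak-fit prediction: roughly 80,357 ± 6 MeV), so treat this as an illustrative comparison:

```python
import math

def tension_sigma(m1, s1, m2, s2):
    """Difference between two measurements in units of combined uncertainty."""
    return abs(m1 - m2) / math.hypot(s1, s2)

cms = (80360.2, 9.9)   # CMS result from the article, in MeV
cdf = (80433.5, 9.4)   # published CDF 2022 value (not quoted in the article)
sm  = (80357.0, 6.0)   # approximate Standard Model electroweak-fit prediction

print(f"CMS vs CDF: {tension_sigma(*cms, *cdf):.1f} sigma")  # CMS vs CDF: 5.4 sigma
print(f"CMS vs SM:  {tension_sigma(*cms, *sm):.1f} sigma")   # CMS vs SM:  0.3 sigma
```

A separation of several sigma is very unlikely to arise by chance, while a fraction of a sigma is indistinguishable from statistical noise, which is why the CMS result reads as agreement with the Standard Model and disagreement with CDF.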

“It’s just a huge relief, to be honest,” says Kenneth Long, a lead author of the study, who is a senior postdoc in MIT’s Laboratory for Nuclear Science. “This new measurement is a strong confirmation that we can trust the Standard Model.”

The study is authored by more than 3,000 members of CERN’s CMS Collaboration. The core group who worked on the new measurement includes about 30 scientists from 10 institutions, led by a team at MIT that includes Long; Tianyu Justin Yang PhD ’24; David Walter and Jan Eysermans, who are both MIT postdocs in physics; Guillelmo Gomez-Ceballos, a principal research scientist in the Particle Physics Collaboration; Josh Bendavid, a former research scientist; and Christoph Paus, a professor of physics at MIT and principal investigator with the Particle Physics Collaboration.

Piecing together

The W boson was first discovered in 1983 and is predicted to be the fourth heaviest among all the fundamental particles. Multiple experiments have aimed to zero in on the particle’s mass, with varying degrees of precision. For the most part, these experiments have produced measurements that agree with the Standard Model’s predictions. The 2022 measurement by Fermilab’s CDF experiment is the one significant outlier. It also happens to be the most precise measurement to date.

“If you take the CDF measurement at face value, you would say there must be physics beyond the Standard Model,” says co-author Christoph Paus. “And of course that was the big mystery.”

Paus and his colleagues sought to either support or refute the CDF’s findings by making an independent measurement, with an experiment that matches CDF’s precision. Their new W boson mass measurement is a product of 10 years’ worth of work, both to analyze actual particle collision events and to simulate all the scenarios that could produce those events.

For their new study, the physicists analyzed proton collision events that were produced at the LHC in 2016. When it is running, the particle collider generates proton collisions at a furious rate of about one every 25 nanoseconds. The team analyzed a portion of the LHC’s 2016 dataset that encompasses billions of proton-proton collisions. Among these, they identified about 100 million events that produced a very short-lived W boson.
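As a quick sanity check on that cadence, one bunch crossing every 25 nanoseconds corresponds to a 40 MHz crossing rate, so a billion crossings accumulate in well under a minute of continuous running (the detector records only a fraction of them):

```python
crossing_interval_s = 25e-9          # one bunch crossing every 25 ns
rate_hz = 1 / crossing_interval_s    # 4.0e7 crossings per second (40 MHz)
seconds_for_1e9 = 1e9 / rate_hz      # time to accumulate a billion crossings
print(f"rate: {rate_hz:.1e} Hz, 1e9 crossings in {seconds_for_1e9:.0f} s")
```

The hard part, then, is not producing collisions but filtering, recording, and precisely reconstructing the rare events of interest among them.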

“A particle like the W boson exists for a teeny tiny moment — something like 10⁻²⁴ seconds — before decaying to two particles, one of which is a neutrino that can’t be measured directly,” Long explains. “That’s the tricky part: You have to measure the other particle — a muon — really well, and be able to piece things together with only one piece of the puzzle.”

Gathering momentum

When a muon is produced from the decay of a W boson, it carries half of the W boson’s mass, which is converted into momentum that carries the muon away from the original collision. Due to the strong magnetic field inside the CMS detector, the electrically charged muon follows a path whose curvature is a function of its momentum. Scientists’ challenge is to track the muon’s path and every interaction it may have with other particles and its surroundings, in order to estimate its initial momentum.
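The curvature-to-momentum relation the detector exploits is the textbook formula for a unit-charge track in a solenoid, p_T [GeV/c] ≈ 0.3 · B [T] · R [m]. Using the CMS solenoid's roughly 3.8-tesla field (a published figure, not stated in this article), a quick sketch:

```python
# Textbook relation: p_T [GeV/c] ≈ 0.3 * B [T] * R [m] for a unit-charge track.
B_TESLA = 3.8  # approximate CMS solenoid field strength

def transverse_momentum_gev(radius_m, b_tesla=B_TESLA):
    """Transverse momentum of a unit-charge track from its curvature radius."""
    return 0.3 * b_tesla * radius_m

# A muon from W decay carries roughly half the W mass, ~40 GeV.
# Its curvature radius in the CMS field is tens of meters, so the track
# bends only gently inside a detector a few meters across.
p_t = 40.0
radius = p_t / (0.3 * B_TESLA)
print(f"~{radius:.0f} m radius for a {p_t:.0f} GeV muon")
```

That gentle curvature is what makes the momentum measurement so demanding: a tiny error in reconstructing the arc translates directly into an error on the inferred mass.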

The muon’s momentum is also influenced by the momentum of the W boson before it decays. Disentangling the effects of the W boson’s motion from those of its mass presented a major challenge. To infer the W boson mass, the team first carried out simulations of every scenario they could think of that a muon might experience after a proton-proton collision in the chaotic environment of the particle collider. In all, the team produced 4 billion such simulated events described by state-of-the-art theoretical calculations. The simulations encoded diverse hypotheses about how the muon momentum is affected by the physical features of the CMS detector, as well as uncertainties in the predictions that govern W boson production in LHC collisions.

The researchers compared their simulations with data from the 2016 LHC run. For every proton-proton collision event that occurs in the collider, scientists can use the CMS detector at CERN’s LHC to precisely measure the energy and momentum of resulting particles such as muons. The team analyzed CMS measurements of muons that were produced from over 100 million W boson events. They then overlaid these data onto their simulations of the muon momentum, which they converted into a new measurement of the W boson’s mass.
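This overlay-and-convert step is, in spirit, a template fit: simulated muon spectra for different mass hypotheses are compared against the data, and the best-matching hypothesis wins. A deliberately crude sketch of the idea, with a toy spectrum shape and a chi-square scan (not the collaboration's actual detector simulation or likelihood), might look like:

```python
import numpy as np

def expected_counts(m_w, n=1_000_000, bins=np.linspace(25, 45, 51)):
    """Toy binned muon p_T template: the density rises linearly to a
    Jacobian-style edge at m_W/2 (a crude stand-in for full simulation).
    Masses and momenta are in GeV."""
    edge = m_w / 2
    cdf = np.clip(bins, 0, edge) ** 2 / edge ** 2  # toy P(p_T < x)
    return n * np.diff(cdf)

rng = np.random.default_rng(42)
true_template = expected_counts(80.36)
data = rng.poisson(true_template)  # pseudo-data generated at the "true" mass

# Scan mass hypotheses and pick the one whose template best matches the data.
masses = np.round(np.arange(80.00, 80.80, 0.04), 2)
chi2 = [np.sum((expected_counts(m) - data) ** 2 / np.maximum(expected_counts(m), 1))
        for m in masses]
best = masses[int(np.argmin(chi2))]
print(f"best-fit W mass: {best} GeV")
```

The real analysis does the same thing with billions of fully simulated events, thousands of systematic-uncertainty variations, and a proper likelihood instead of this toy chi-square, but the logic is the same: the mass is whatever value makes simulation and data line up.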

That mass — 80360.2 ± 9.9 megaelectron volts — is significantly lighter than the CDF experiment’s measurement. What’s more, the new estimate is within the range of what the Standard Model predicts for the W boson’s mass, bolstering physicists’ confidence in the Standard Model and its descriptions of the major particles and forces of nature.

“With the combination of our really precise result and other experiments that line up with the Standard Model’s predictions, I think that most people would place their bets on the Standard Model,” Long says. “Though I do think people should continue doing this measurement. We are not done.”

“We want to add more data, make our analysis techniques more precise, and basically squeeze the lemon a little harder. There is always some juice left,” Paus adds. “With a better look, then we can say for certain whether we truly understand this one fundamental building block.”

This work was supported, in part, by multiple funding agencies, including the U.S. Department of Energy, and made use of the SubMIT computing facility, sponsored by the MIT Department of Physics.


Sixteen new START.nano companies are developing hard-tech solutions with the support of MIT.nano

Startup accelerator program grows to over 30 companies, almost half of them with MIT pedigrees.


MIT.nano has announced that 16 startups became active participants in its START.nano program in 2025, more than doubling the number of new companies from the previous year. Aimed at speeding the transition of hard-tech innovation to market, START.nano supports new ventures through the discounted use of MIT.nano shared facilities and guided access to the MIT innovation ecosystem. The newly engaged startups are developing solutions for some of the world’s greatest challenges in health, climate, energy, semiconductors, novel materials, and quantum computing.

“The unique resources of MIT.nano enable not just the foundational research of academia, but the translation of that research into commercial innovations through startups,” says START.nano Program Manager Joyce Wu SM ’00, PhD ’07. “The START.nano accelerator supports early-stage companies from MIT and beyond with the tools and network they need for success.”

Launched in 2021, START.nano aims to increase the survival rate of hard-tech startups by easing their journey from the lab to the real world. In addition to receiving access to MIT.nano’s laboratories, program participants are invited to present at startup exhibits at MIT conferences and at exclusive events, including the newly launched PITCH.nano competition.

“For an early-stage startup working at the frontier of superconductor discovery, the combination of infrastructure and community has been irreplaceable,” says Jason Gibson, CEO and co-founder of Quantum Formatics. “START.nano isn’t just a resource,” adds Cynthia Liao MBA ’24, CEO and co-founder of Vertical Semiconductor. “It’s a strategic advantage that accelerates our roadmap, allowing us to iterate quickly to meet customer needs and strengthen our competitive edge.”

Although an MIT affiliation is not required, five of the 16 companies in the new cohort are led by MIT alumni, and an additional three have MIT affiliation. In total, 49 percent of the startups in START.nano are founded by MIT graduates.

Here are the intended impacts of the 16 new START.nano companies:

Acorn Genetics is developing a "smartphone of sequencing," bringing the power of genetic analysis out of slow, centralized labs and into the hands of consumers for fast, portable, and affordable sequencing.

Addis Energy leverages oil, gas, and geothermal drilling technologies to unlock the chemical potential of iron-rich rocks. By injecting engineered fluids, they harness the earth’s natural energy to produce ammonia that is both abundant and cost-effective.

Augmend Health uses virtual reality and AI to deliver clinical data intelligence services for specialty care that turn incomplete documentation into revenue, compliance, and better treatment decisions.

Brightlight Photonics is building high-performance laser infrastructure at chip scale, integrating Titanium:Sapphire gain to deliver broadband, high-power, low-noise optical sources for advanced photonic systems.

Cahira Technologies is creating the new paradigm of brain-computer symbiosis for treating intractable diseases and human augmentation through autonomous, nonsurgical neural implants.

Copernic Catalysts is leveraging computational modeling to develop and commercialize transformational catalysts for low-cost and sustainable production of bulk chemicals and e-fuels.

Daqus Energy is unlocking high-energy lithium-ion batteries using critical metal-free organic cathodes.

Electrified Thermal Solutions is reinventing the firebrick to electrify industrial heat.

Guardion is making analytical instruments, chemical detectors, and radiation detectors more sensitive, portable, and easier to scale with nanomaterial-based ion detectors.

Mantel Capture is designing carbon capture materials to operate at the high temperatures found inside boilers, kilns, and furnaces — enabling highly efficient carbon capture that has not been possible until now.

nOhm Devices is developing highly efficient cryogenic electronics for quantum computers and sensors.

Quantum Formatics is speeding discovery of the world’s next superconductors using proprietary AI.

Qunett is building the foundational hardware stack for deployable quantum networks to power the next era of global connectivity.

Rheyo is developing new ways to make dental care more effective, efficient, and easy through advanced materials and technology.

Vertical Semiconductor is commercializing high-voltage, high-density, high-efficiency vertical GaN (gallium nitride) to power the next era of compute.

VioNano Innovations is developing specialty material solutions that reduce variability and improve precision in semiconductor manufacturing, allowing chipmakers to build even smaller, faster, and more cost-effective chips.

START.nano now comprises 32 companies and 11 graduates — ventures that have moved beyond the prototyping stage, and some into commercialization.


Researchers develop molecular editing tool to relocate alcohol groups

This new technique will allow chemists to efficiently fine-tune the chemical structure of an organic molecule.


A significant challenge for researchers in materials science and drug discovery is that even the most minor change to a molecule’s structure can completely alter its function. Historically, making these adjustments meant researchers had to re-synthesize the target molecule from scratch — a time-consuming and expensive bottleneck akin to tearing down a house just to move a lamp.

In an exciting discovery recently published in Nature, MIT chemists led by Professor Alison Wendlandt have developed a precision technique that allows scientists to seamlessly relocate alcohol functional groups from one spot on a molecule to a neighboring site. This process bypasses the need to rebuild the entire structure and is the result of a multi-year collaboration with Bristol Myers Squibb.

Functional group repositioning

The reaction uses a special light-sensitive molecule called decatungstate as a catalyst to trigger a highly controlled “migration” of the alcohol group. The process is remarkably predictable, ensuring the molecule retains its precise 3D shape and orientation throughout the move.

The ability to implement subtle structural tweaks without the waste of “from-scratch” synthesis eliminates a primary hurdle that has long plagued the field. Furthermore, because the reaction is gentle enough to work on complex, nearly finished structures, it serves as a powerful fine-tuning tool for late-stage drug candidates.

Precision editing to unlock new chemical designs

When combined with existing chemical methods, this tool provides new pathways to create challenging molecular architectures and oxygenation patterns that were previously out of reach.

“This alcohol migration strategy allows for precise, molecular-level tuning of oxygen atom positions,” says Qian Xu, the co-first author of the paper and a postdoc in the Wendlandt Group. “With predictable stereo- and regioselectivity and late-stage operability, it presents an enticing chance to modify natural products and drug molecules through ‘editing.’”

Ultimately, this precision editing tool holds the potential to dramatically improve the efficiency of molecular design campaigns, accelerating the development of new pharmaceuticals, materials, and agrochemicals.

In addition to Wendlandt and Xu, MIT contributors include co-lead author and graduate student Yichen Nie, recent postdoc Ronghua Zhang, and professor of chemistry Jeremiah A. Johnson. Other authors include Jacob-Jan Haaksma of the University of Groningen in The Netherlands; Natalie Holmberg-Douglas, Farid van der Mei, and Chloe Williams of Bristol Myers Squibb; and Paul M. Scola of Actithera.


Study reveals “two-factor authentication” system that controls microRNA destruction

Researchers uncovered how cells selectively destroy certain microRNAs — key gene regulators — through a mechanism that requires two RNA signals working together.


Cells rely on tiny molecules called microRNAs to tune which genes are active and when, and they must carefully control the lifespan of these microRNAs to prevent widespread disruption to gene regulation.

A new study led by researchers at MIT’s Whitehead Institute for Biomedical Research and Germany’s Max Planck Institute of Biochemistry reveals how cells selectively eliminate certain microRNAs through an unexpectedly intricate molecular recognition system. The open-access work, published on March 18 in Nature, shows that the process requires two separate RNA signals, similar to how many digital systems require two forms of identity verification before granting access.

The findings explain how cells use this “two-factor authentication” system to ensure that only intended microRNAs are destroyed, leaving the rest of the gene regulation machinery in operation.

MicroRNAs are short strands of RNA that help control gene expression. Working together with a protein called Argonaute, they bind to specific messenger RNAs — the molecules that carry genetic instructions from DNA to the cell’s protein-making machinery — and trigger their destruction. In this way, microRNAs can reduce the production of specific proteins.

While scientists recognized that microRNAs could be destroyed through a pathway known as target-directed microRNA degradation, or TDMD, the details of how cells recognized which microRNAs to eliminate remained unclear.

“We knew there was a pathway that could target microRNAs for degradation, but the biochemical mechanism behind it wasn’t understood,” says MIT Professor David Bartel, a Whitehead Institute member and co-senior author of the study.

Earlier work from Bartel’s lab and others had identified a key player in this pathway: the ZSWIM8 E3 ubiquitin ligase. E3 ubiquitin ligases are involved in the cell’s recycling system and attach a small molecular tag called ubiquitin to other proteins, marking them for destruction.

The researchers first showed that the ZSWIM8 E3 ligase specifically binds and tags Argonaute, the protein that holds microRNAs and helps regulate genes. The researchers’ next challenge was to understand how this machinery recognized only Argonaute complexes carrying specific microRNAs that should be degraded.

The answer turned out to be surprisingly sophisticated.

Using a combination of biochemistry and cryo-electron microscopy — an imaging technique that reveals molecular structures at near-atomic resolution — the researchers discovered that the degradation system relies on a dual-RNA recognition process. First, Argonaute must carry a specific microRNA. Second, another RNA molecule called a “trigger RNA” must bind to that microRNA in a particular way.

The degradation machinery activates only when both signals are present.
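The two-factor logic can be caricatured in a few lines of code (a toy model of the decision rule, not the biochemistry; "Cyrano" is a known trigger RNA for miR-7, while the other names here are made up for illustration):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ArgonauteComplex:
    """Toy model of an Argonaute complex (illustrative only)."""
    loaded_mirna: Optional[str]    # microRNA carried by Argonaute, if any
    paired_trigger: Optional[str]  # trigger RNA base-paired to it, if any

def marked_for_degradation(c: ArgonauteComplex, target_mirna: str) -> bool:
    """Both 'factors' must check out: the right microRNA AND a bound trigger RNA."""
    return c.loaded_mirna == target_mirna and c.paired_trigger is not None

complexes = [
    ArgonauteComplex("miR-7", None),       # right microRNA, no trigger: spared
    ArgonauteComplex("miR-21", "trig-X"),  # wrong microRNA: spared
    ArgonauteComplex("miR-7", "Cyrano"),   # both signals present: degraded
]
flagged = [c for c in complexes if marked_for_degradation(c, "miR-7")]
print(len(flagged), "complex flagged for ubiquitin tagging")
```

The cell's actual check is structural rather than symbolic — the ligase reads conformational changes in the complex — but the AND-gate behavior is the same: either signal alone is not enough.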

This dual requirement ensures exquisite specificity. Each cell contains over a hundred thousand Argonaute–microRNA complexes regulating many genes, and destroying them indiscriminately would disrupt essential biological processes.

“The vast majority of Argonaute molecules in the cell are doing useful work regulating gene expression,” says Bartel, who is a professor of biology at MIT and also a Howard Hughes Medical Institute investigator. “You only want to degrade the ones carrying a particular microRNA and bound to the right trigger RNA. Without that specificity, the cell would lose its microRNAs and the essential regulation that they provide.”

The structural images revealed complex molecular interactions. The ZSWIM8 ligase detects multiple structural changes that occur when the two RNAs bind together within the Argonaute protein.

“When we saw the structure, everything clicked,” says Elena Slobodyanyuk, a graduate student in Bartel’s lab and co-first author of the study. “You could see how the pairing of the trigger RNA with the microRNA reshapes the Argonaute complex in a way that the ligase can recognize.”

Beyond explaining how TDMD works, the findings may impact how scientists think about the regulation of RNA molecules more broadly.

“A lot of E3 ligases recognize their targets through simpler signals,” says Jakob Farnung, co-first author and researcher in the Department of Molecular Machines and Signaling at the Max Planck Institute of Biochemistry. “It was like opening a treasure chest where every detail revealed something new and mesmerizing.”

MicroRNAs typically persist in cells much longer than most messenger RNAs, but some degrade far more quickly, and the TDMD pathway appears to account for many of these unusually short-lived microRNAs.

The researchers are now investigating whether other RNAs can trigger similar degradation pathways and whether additional microRNAs are regulated through variations of the mechanism shown in this study.

“This opens up a whole new way of thinking about how RNA molecules can control protein degradation,” says Brenda Schulman, study co-senior author and director of the Department of Molecular Machines and Signaling at the Max Planck Institute of Biochemistry. “Here, the recognition was far more elaborate than expected. There’s likely much more left to discover.”

Uncovering the details of this intricate regulatory system required interdisciplinary collaboration, combining expertise in RNA biochemistry, structural biology, and ubiquitin enzymology to solve this long-standing molecular puzzle.

“This was a project that required the strengths of two labs working at the forefront of their fields,” says Schulman, who is also an alum of Whitehead Institute. “It was an incredible team effort.”


How bacteria suppress immune defenses in stubborn wound infections

Study finds a common bacterium can suppress the body’s early warning system in wounds, causing infections to persist and create an environment that allows other bacteria to take hold.


Chronic wound infections are notoriously difficult to manage because some bacteria can actively interfere with the body’s immune defenses. In wounds, Enterococcus faecalis (E. faecalis) is particularly resilient — it can survive inside tissues, alter the wound environment, and weaken immune signals at the injury site. This disruption creates conditions where other microbes can easily establish themselves, resulting in multi-species infections that are complex and slow to resolve. Such persistent wounds, including diabetic foot ulcers and post-surgical infections, place a heavy burden on patients and health care systems, and sometimes lead to serious complications such as amputations.

Now, researchers have discovered how E. faecalis releases lactic acid to acidify its surroundings and suppresses the immune-cell signal needed to start a proper response to infection. By silencing the body’s defenses, the bacterium can cause persistent and hard-to-treat wound infections. This explains why some wounds struggle to heal, even with treatment, and why infections involving multiple bacteria are especially difficult to eradicate.

The work was led by researchers from the Singapore-MIT Alliance for Research and Technology (SMART) Antimicrobial Resistance (AMR) interdisciplinary research group, alongside collaborators from the Singapore Centre for Environmental Life Sciences Engineering at Nanyang Technological University (NTU Singapore), MIT, and the University of Geneva in Switzerland.

In a paper titled “Enterococcus faecalis-derived lactic acid suppresses macrophage activation to facilitate persistent and polymicrobial wound infections,” recently published in Cell Host & Microbe, the researchers documented how E. faecalis releases large amounts of lactic acid during infection. This acidity suppresses the activation of macrophages — immune cells that normally help to clear infections — and interferes with several important internal processes that help the cell recognize and respond to infection. As a result, the mechanisms that cells rely on to send out “danger” signals are suppressed, leaving the macrophages unable to fully activate.

Researchers found that E. faecalis uses a two-step mechanism to achieve this. Lactic acid enters the macrophages through a lactate transporter called MCT-1 and also binds to a lactate-sensing receptor, GPR81, on the cell surface. By engaging both pathways, the bacterium effectively shuts down downstream immune signaling and blocks the macrophage’s inflammatory response, allowing E. faecalis to persist in the wound much longer than it should. Specifically, the lactic acid prevents a key immune alarm signal, known as NF-κB, from switching on inside these cells.

The researchers demonstrated this in a mouse wound model, where strains of E. faecalis that could not make lactic acid were cleared much more quickly, and the wounds also showed stronger immune activity. In wounds infected with both E. faecalis and Escherichia coli, the weakened immune response caused by lactic acid also allowed E. coli to grow better. This explains why wound infections often involve multiple species of bacteria and become harder to treat over time, particularly since E. faecalis is among the most common bacteria found in chronic wounds.

“Chronic wound infections often fail not because antibiotics are powerless, but because the immune system has effectively been ‘switched off’ at the infection site. We found that E. faecalis floods the wound with lactic acid, lowering pH and muting the NF-κB alarm inside macrophages — the very cells that should be calling for help. By pinpointing how acidity rewires immune signaling, we now have clear targets to reactivate the immune response,” says first author Ronni da Silva, research scientist at SMART AMR, former postdoc in the lab of co-author and MIT professor of biology Jianzhu Chen, and SCELSE-NTU visiting researcher.

“This discovery strengthens our understanding of host-pathogen interactions and offers new directions for developing treatments and wound care that target the bacteria’s immunosuppressive strategies. By revealing how the immune response is shut down, this research may help improve infection management and support better recovery outcomes for patients, especially those with chronic wounds or weakened immunity,” says Kimberly Kline, principal investigator at SMART AMR, SCELSE-NTU visiting academic, professor at the University of Geneva, and corresponding author of the paper.

By identifying lactic-acid-driven immune suppression as a root cause of persistent wound infections, this work highlights the potential of treatment approaches that support the immune system, rather than rely on antibiotics alone. This could lead to therapies that help wounds heal more reliably and reduce the risk of complications. Potential directions include reducing acidity in the wound or blocking the signals that lactic acid uses to switch off immune cells.

Building on their study, the researchers plan to explore validation in additional pathogens and human wound samples, followed by assessments in advanced preclinical models ahead of any potential clinical trials.

The research was partially supported by the National Research Foundation Singapore under its Campus for Research Excellence and Technological Enterprise program.


MIT graduate engineering and business programs ranked highly by U.S. News for 2026-27

Graduate engineering program is No. 1 in the nation; MIT Sloan is No. 6.


U.S. News & World Report has again placed MIT’s graduate program in engineering at the top of its annual rankings, released today. The Institute has held the No. 1 spot since 1990, when the magazine first ranked such programs.

The MIT Sloan School of Management also placed highly, occupying the No. 6 spot for the best graduate business programs.

Among individual engineering disciplines, MIT placed first in six areas: aerospace/aeronautical/astronautical engineering, chemical engineering, computer engineering (tied with the University of California at Berkeley), electrical/electronic/communications engineering (tied with Stanford University and Berkeley), materials engineering, and mechanical engineering. It placed second in nuclear engineering.

In the rankings of individual MBA specialties, MIT placed first in four areas: business analytics, entrepreneurship (with Stanford), production/operations, and supply chain/logistics. It placed second in executive MBA programs (with the University of Chicago).

U.S. News bases its rankings of graduate schools of engineering and business on two types of data: reputational surveys of deans and other academic officials, and statistical indicators that measure the quality of a school’s faculty, research, and students. The magazine’s less-frequent rankings of graduate programs in the sciences, social sciences, and humanities are based solely on reputational surveys.

In the sciences, ranked by U.S. News for the first time in four years, MIT’s doctoral programs placed first in four areas: biology (with Scripps Research Institute), chemistry (with Berkeley and Caltech), computer science (with Carnegie Mellon University and Stanford), and physics (with Caltech, Princeton University, and Stanford). The Institute placed second in mathematics (with Harvard University, Stanford, and Berkeley).


Helping data centers deliver higher performance with less hardware

Researchers developed a system that intelligently balances workloads to improve the efficiency of flash storage hardware in a data center.


To improve data center efficiency, multiple storage devices are often pooled together over a network so many applications can share them. But even with pooling, significant device capacity remains underutilized due to performance variability across the devices.

MIT researchers have now developed a system that boosts the performance of storage devices by handling three major sources of variability simultaneously. Their approach delivers significant speed improvements over traditional methods that tackle only one source of variability at a time.

The system uses a two-tier architecture, with a central controller that makes big-picture decisions about which tasks each storage device performs, and local controllers for each machine that rapidly reroute data if that device is struggling.

The method, which can adapt in real time to shifting workloads, does not require specialized hardware. When the researchers tested this system on realistic tasks like AI model training and image compression, it nearly doubled the performance delivered by traditional approaches. By intelligently balancing the workloads of multiple storage devices, the system can increase overall data center efficiency.

“There is a tendency to want to throw more resources at a problem to solve it, but that is not sustainable in many ways. We want to be able to maximize the longevity of these very expensive and carbon-intensive resources,” says Gohar Chaudhry, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this technique. “With our adaptive software solution, you can still squeeze a lot of performance out of your existing devices before you need to throw them away and buy new ones.”

Chaudhry is joined on the paper by Ankit Bhardwaj, an assistant professor at Tufts University; Zhenyuan Ruan PhD ’24; and senior author Adam Belay, an associate professor of EECS and a member of the MIT Computer Science and Artificial Intelligence Laboratory. The research will be presented at the USENIX Symposium on Networked Systems Design and Implementation.

Leveraging untapped performance

Solid-state drives (SSDs) are high-performance digital storage devices that allow applications to read and write data. For instance, an SSD can store vast datasets and rapidly send data to a processor for machine-learning model training.   

Pooling multiple SSDs together so many applications can share them improves efficiency, since not every application needs to use the entire capacity of an SSD at a given time. But not all SSDs perform equally, and the slowest device can limit the overall performance of the pool.

These inefficiencies arise from variability in SSD hardware and the tasks they perform.

To utilize this untapped SSD performance, the researchers developed Sandook, a software-based system that tackles three major forms of performance-hampering variability simultaneously. “Sandook” is an Urdu word that means “box,” to signify “storage.”

One type of variability is caused by differences in the age, amount of wear, and capacity of SSDs that may have been purchased at different times from multiple vendors.

The second type of variability is due to the mismatch between read and write operations occurring on the same SSD. To write new data to the device, the SSD must erase some existing data. This process can slow down data reads, or retrievals, happening at the same time.

The third source of variability is garbage collection, a process of gathering and removing outdated data to free up space. This process, which slows SSD operations, is triggered at random intervals that a data center operator cannot control.

“I can’t assume all SSDs will behave identically through my entire deployment cycle. Even if I give them all the same workload, some of them will be stragglers, which hurts the net throughput I can achieve,” Chaudhry explains.

Plan globally, react locally

To handle all three sources of variability, Sandook utilizes a two-tier structure. A global scheduler optimizes the distribution of tasks for the overall pool, while faster schedulers on each SSD react to urgent events and shift operations away from congested devices.

The system overcomes delays from read-write interference by rotating which SSDs an application can use for reads and writes. This reduces the chance reads and writes happen simultaneously on the same machine.

Sandook also profiles the typical performance of each SSD. It uses this information to detect when garbage collection is likely slowing operations down. Once detected, Sandook reduces the workload on that SSD by diverting some tasks until garbage collection is finished.

“If that SSD is doing garbage collection and can’t handle the same workload anymore, I want to give it a smaller workload and slowly ramp things back up. We want to find the sweet spot where it is still doing some work, and tap into that performance,” Chaudhry says.

The SSD profiles also allow Sandook’s global controller to assign workloads in a weighted fashion that considers the characteristics and capacity of each device.

Because the global controller sees the overall picture and the local controllers react on the fly, Sandook can simultaneously manage forms of variability that happen over different time scales. For instance, delays from garbage collection occur suddenly, while latency caused by wear and tear builds up over many months.
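The division of labor described above can be illustrated with a toy sketch. This is our own illustration, not Sandook's actual code; all names, numbers, and thresholds are hypothetical. A global tier splits work in proportion to each device's profiled weight, and a local tier sheds load when observed latency climbs well above a drive's profiled baseline, as happens during garbage collection:

```python
from dataclasses import dataclass

@dataclass
class SSDProfile:
    name: str
    baseline_latency_us: float  # profiled typical latency for this drive
    capacity_weight: float      # relative share based on capacity, age, wear

def global_assign(profiles, total_ops):
    """Global tier: split the workload in proportion to each device's weight."""
    total_weight = sum(p.capacity_weight for p in profiles)
    return {p.name: total_ops * p.capacity_weight / total_weight for p in profiles}

def local_adjust(profile, assigned_ops, observed_latency_us, slowdown_factor=2.0):
    """Local tier: if latency is well above the profiled baseline (e.g. the
    drive is garbage-collecting), shed half the load; the shed portion would
    be rerouted to healthy devices and ramped back up afterward."""
    if observed_latency_us > slowdown_factor * profile.baseline_latency_us:
        return assigned_ops / 2, assigned_ops / 2  # (kept, diverted)
    return assigned_ops, 0.0

profiles = [SSDProfile("ssd0", 80.0, 1.0), SSDProfile("ssd1", 120.0, 0.5)]
plan = global_assign(profiles, total_ops=3000)
kept, diverted = local_adjust(profiles[0], plan["ssd0"], observed_latency_us=400.0)
print(plan)            # weighted split across the pool
print(kept, diverted)  # the congested device keeps half and diverts half
```

The point of the two tiers is the two time scales: the weighted split changes slowly as profiles drift, while the latency check runs per-device and reacts within moments.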

The researchers tested Sandook on a pool of 10 SSDs and evaluated the system on four tasks: running a database, training a machine-learning model, compressing images, and storing user data. Sandook boosted the throughput of each application between 12 and 94 percent when compared to static methods, and improved the overall utilization of SSD capacity by 23 percent.

The system enabled SSDs to achieve 95 percent of their theoretical maximum performance, without the need for specialized hardware or application-specific updates.

“Our dynamic solution can unlock more performance for all the SSDs and really push them to the limit. Every bit of capacity you can save really counts at this scale,” Chaudhry says.

In the future, the researchers want to incorporate new protocols available on the latest SSDs that give operators more control over data placement. They also want to leverage the predictability in AI workloads to increase the efficiency of SSD operations.

“Flash storage is a powerful technology that underpins modern datacenter applications, but sharing this resource across workloads with widely varying performance demands remains an outstanding challenge. This work moves the needle meaningfully forward with an elegant and practical solution ready for deployment, bringing flash storage closer to its full potential in production clouds,” says Josh Fried, a software engineer at Google and incoming assistant professor at the University of Pennsylvania, who was not involved with this work.

This research was funded, in part, by the National Science Foundation, the U.S. Defense Advanced Research Projects Agency, and the Semiconductor Research Corporation.


Why does wealth inequality matter?

An MIT Stone Center event examined the origins, mechanisms, and political consequences of high inequality.


The MIT James M. and Cathleen D. Stone Center on Inequality and Shaping the Future of Work recently hosted a half-day symposium at the Institute on “Why Wealth Inequality Matters.”

Three panel discussions convened experts from economics, philosophy, sociology, and political science to explore the origins, mechanisms, and political consequences of wealth inequality.

Richard Locke, John C Head III Dean of the MIT Sloan School of Management, welcomed attendees to the symposium, emphasizing how the event reflects MIT’s commitments to interdisciplinary collaboration and to addressing “society's most pressing issues.”

Here are three key takeaways from the afternoon’s panels.

When wealth buys political influence and legal immunity, democracy is threatened

Hélène Landemore of Yale University argued that wealth inequality isn’t inherently problematic, but becomes dangerous when wealth offers disproportionate influence in other spheres, including political power.

Wojciech Kopczuk of Columbia University echoed this, emphasizing that wealth is a complicated and often ambiguous measure of inequality. Wealth reflects institutional contexts — for example, weak safety nets drive precautionary saving. Still, he agreed that wealth is a relevant metric at the very top, where it correlates with political capture and corporate power.

Landemore explained that when the wealthy dominate policy discussions, “some groups are systematically disbelieved or ignored, and the result is policy failure.” For example, French carbon taxes disproportionately burdened working-class people who were more dependent on cars, which led to the yellow vests protests.

Elizabeth Anderson of the University of Michigan extended this point to corporate power, warning that extreme concentration gives powerful firms de facto immunity from the rule of law — the wealthiest companies can hire hundreds of lawyers to swamp the legal system.

To counteract these negative consequences of high inequality, Oren Cass of American Compass argued that strengthening worker power is key. Redistribution, he said, is a way to improve living standards, but “it is not a solution to the kinds of problems that actually plague democratic capitalism.”

The roots of the racial wealth gap are so deep that equal opportunity alone won’t close it

Ellora Derenoncourt of Princeton University explained that in the United States today, the wealth gap between Black and white Americans is 6:1. In other words, for every dollar of wealth held by an average white American, the average Black American holds about $0.17. She noted that this racial wealth gap has largely remained unchanged for the past 50 years.

“Even if we were to equalize differences in wealth accumulating opportunities — equal savings rates, equal capital gains rates going forward — we’re still hundreds of years away from convergence,” she explained, due to the magnitude of the original gap.

Alexandra Killewald of the University of Michigan added that the racial wealth gap is actively rebuilt each generation through unequal schools, unequal pay, and unequal access to homeownership.

“The past matters, but it’s not just about the past,” she explained. Even if a massive reparations plan were implemented, “if we just let things go on as they are, we will start to recreate inequality from Day 1.”

High inequality and authoritarianism reinforce each other

Daron Acemoglu of MIT described how increasing inequality goes hand-in-hand with the weakening of democracy: “Once inequality starts building up, it also naturally erodes democracies’ claim for legitimacy.”

High inequality, he argued, is both a cause and an effect of liberal democracy failing to deliver on its promise of shared prosperity. This failure, in turn, weakens public support for democracy.

Building on this argument, Sheri Berman of Barnard College examined why economically disadvantaged voters in the United States and Europe have increasingly voted for right-wing populist parties, despite holding economically progressive views.

She described how center-left parties have transformed since the late 20th century, converging with the right on economic policy (embracing free trade and market deregulation) while moving left on social and cultural issues. As a result, she argued, working-class and rural voters no longer saw center-left parties as champions of their economic interests, or as reflecting their social and cultural preferences.

David Yang of Harvard University explained that once authoritarianism takes hold, regimes continue to produce inequality. For example, non-democratic regimes are most responsive not to the average citizen, but to whoever poses the greatest threat to regime survival. In China, this tends to be the wealthier urban population capable of organizing large-scale collective action.


Toward cheaper, cleaner hydrogen production

Co-founded by Dan Sobek ’88, SM ’92, PhD ’97, 1s1 Energy has developed electrochemical cell materials for hydrogen electrolyzers that it says reduce energy use by 30 percent.


Hydrogen sits at the center of some of the world’s most important industrial processes, but its production still comes with a heavy environmental cost. Today, most hydrogen is produced through high-emissions processes like steam methane reforming and coal gasification.

But hydrogen can also be made by splitting water molecules using renewable electricity, eliminating fossil fuel emissions and other toxic byproducts. Such “green hydrogen” is made by running an electric current through water in an electrolyzer.

Green hydrogen won’t scale through decarbonization alone. It also has to be cost-competitive with the traditional methods of production.

1s1 Energy thinks it has the technology to finally make green hydrogen go mainstream. The company says its boron-based membrane material unlocks previously unachievable performance and durability in electrolyzers.

In tests with partners, 1s1 says, electrolyzers with its membranes needed just 70 percent of the energy to produce each kilogram of hydrogen, compared to incumbent devices.

“Green hydrogen has been a hard industry to have success in so far,” acknowledges 1s1 co-founder Dan Sobek ’88, SM ’92, PhD ’97. “The difference with us is we’ve done very targeted customer discovery. We have a very strong value proposition that’s not just about decarbonization. We have a pipeline of potential customers that see around a 60 percent reduction in operating costs with our technology. That’s a nice point of entry.”

Although 1s1 is focused on hydrogen production now, its technology could also be used in fuel cells and solid-state batteries, and to extract critical metals from mining waste. The company is beginning trials in some of those applications, and it is working with a large materials company to scale up production of its membranes for hydrogen production.

“We’re at an inflection point for the company,” Sobek says. “The plan is, by 2030, to have a solid business in several segments: electrolyzers, mineral extraction, and in collaborations with several large companies. But right now, we have to be judicious and focused.”

Improving electrolyzers

Sobek was born and raised in Argentina, but he also grew up at MIT over the course of three degrees and more than a decade. He first studied aeronautics and astronautics at MIT, then jumped to mechanical engineering as a graduate student, then moved to the Department of Electrical Engineering and Computer Science, where he worked under PhD advisors and MIT professors Martha Gray and Stephen Senturia. His thesis focused on a technique for quickly measuring optical properties of large numbers of biological cells.

“A lot of my learnings around microfabrication and materials chemistry ended up being really relevant for 1s1,” Sobek says. “A class that was very important to me was taught by Professor Amar Bose. I was a teaching assistant for him for a couple of semesters, and that had an incredible influence on my thinking.”

Following graduation, Sobek worked in microelectronics and microfluidics before founding his own company, Zymera, in 2004. The company developed deep-tissue imaging technology for detecting cancer and other serious diseases.

Around 2013, Sobek started talking to his Zymera co-founder, Sukanta Bhattacharyya, about making electrolysis more efficient, focusing on “proton exchange membrane” electrolyzers. Such electrolyzers employ a large amount of electricity to split water into hydrogen and oxygen. At their center is a membrane whose electrical resistance causes voltage losses that reduce efficiency.

On top of the efficiency challenge, electricity is often more expensive than fossil fuels in many parts of the world. Traditional hydrogen production also has the benefit of existing infrastructure, making it that much more difficult for green hydrogen production to scale.

Sobek and Bhattacharyya knew the most important part of such electrolyzers is their proton-conducting membrane, which shuttles hydrogen ions from the anode to the cathode in the electrolyzer’s electrochemical cell.

“I asked Sukanta how we could improve the efficiency and durability of that element,” Sobek recalls. “He gave me a one-word answer: boron.”

Boron can be given a negative charge, which makes hydrogen ions, or protons, bond to it more quickly. The hydrogen ions can then be filtered through the membrane and released as they move through the cell. Boron-based materials are also more stable and resistant to corrosion, further improving the long-term performance of electrolyzers.

The company was officially founded in late 2019. After years of development, today 1s1 attaches a chemically tailored version of boron onto polymer materials to create its membranes for exchanging protons.

“These are first-of-a-kind membranes with stable and durable, super-acid proton exchange groups that do not poison catalysts,” Sobek says.

Tiny membranes with big impact

In 2021, the U.S. Department of Energy set a goal for proton exchange membrane electrolysis to achieve 77 percent electrical efficiency by 2031. Sobek says 1s1 is already reaching that milestone in tests.
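As a rough sanity check (our arithmetic, not from the article), the energy needed per kilogram of hydrogen follows directly from an efficiency figure, assuming the DOE target is measured against hydrogen's higher heating value of about 39.4 kWh/kg:

```python
HHV_KWH_PER_KG = 39.4  # higher heating value of hydrogen, ~39.4 kWh per kg

def energy_per_kg(efficiency):
    """Electrical energy (kWh) needed per kg of H2 at a given HHV efficiency."""
    return HHV_KWH_PER_KG / efficiency

doe_target = energy_per_kg(0.77)  # the DOE's 2031 efficiency target
print(round(doe_target, 1))       # ~51.2 kWh per kg of hydrogen
```

Under this assumption, hitting 77 percent efficiency means producing a kilogram of hydrogen for roughly 51 kWh of electricity.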

“It’s not just the technology, but the way we’re applying it,” Sobek says. “We’re making hydrogen viable for use in the production of different industrial chemicals.”

1s1 is currently conducting pilots with partners, including an electrical utility owned by a large steel company in Brazil. The company is also actively exploring other applications for its technology. Last year, 1s1 announced a project to produce green ammonia with the company Nitrofix through joint funding from the U.S. Department of Energy and the Israeli Ministry of Energy and Infrastructure. It’s also working with a large mine in Brazil to extract a material called niobium, which is useful for high-strength steel as well as fast-charging batteries. A similar process could even be used to extract gold.

“We can do that without using harsh chemicals, because the standard processes used to extract niobium and gold use extremely strong acids at high temperatures or extremely toxic chemicals,” Sobek says. “It’s gratifying for me because my home country of Argentina has had a lot of problems with the use of toxic chemicals to extract gold. We’re trying to enable low-cost, responsible mining.”

As 1s1 scales its membrane technology, Sobek says the goal is to deploy wherever the technology can improve processes.

“We have a large number of potential customers because this technology is really foundational,” Sobek says. “Creating high-impact technologies is always fun.”


Lincoln Laboratory laser communications terminal launches on historic Artemis II moon mission

High-definition video and data sent from the lunar vicinity to Earth will demonstrate the first use of laser communications on a crewed mission.


In 1969, Apollo 11 astronaut Neil Armstrong stepped onto the moon’s surface — a momentous engineering and science feat marked by his iconic words: "That’s one small step for man, one giant leap for mankind." Now, NASA is making history again.

With the successful launch of NASA’s Artemis II mission yesterday, four astronauts are set to become the first humans to travel to the moon in more than 50 years. In 2022, the uncrewed Artemis I mission demonstrated that NASA’s new Orion spacecraft could travel farther into space than ever before and return safely to Earth. Building on that success, the 10-day Artemis II mission will pave the way for future Artemis missions, which aim to land astronauts on the moon to prepare for a lasting lunar presence, and eventually human missions to Mars.

As it orbits the moon, the Orion spacecraft will carry an optical (laser) communications system developed at MIT Lincoln Laboratory in collaboration with NASA Goddard Space Flight Center. Called the Orion Artemis II Optical Communications System (O2O), the system is capable of higher-bandwidth data transmissions from space compared to traditional radio-frequency (RF) systems. During the Artemis II mission, O2O will use laser beams to send high-resolution video and images of the lunar surface down to Earth.

"Space-based communications has always been a big challenge," says lead systems engineer Farzana Khatri, a senior staff member in the laboratory’s Optical and Quantum Communications Group. "RF communications have served their purpose well. However, the RF spectrum is highly congested now, and RF does not scale well to longer distances across space. Laser communication [lasercom] is a solution that could solve this problem, and the laboratory is an expert in the field, which was really pioneered here."

Artemis II is historic not only for renewing human exploration beyond Earth, but also for being the first crewed lunar flight to demonstrate lasercom technologies, which are poised to revolutionize how spacecraft communicate. Lincoln Laboratory has been developing such technologies for more than two decades, and NASA has been infusing them into its missions to meet the growing demands of long-distance and data-intensive space exploration.

"The Orion spacecraft collects a huge amount of data during the first day of a mission, and typically these data sit on the spacecraft until it splashes down and can take months to be offloaded," Khatri says. "With an optical link running at the highest rate, we should be able to get all the data down to Earth within a few hours for immediate analysis. Furthermore, astronauts will be able to communicate in real time over the optical link to stay in touch with Earth during their journey, inspiring the public and the next generation of deep-space explorers, much like the Apollo 11 astronauts who first landed on the moon 57 years ago."

At the heart of O2O is the laboratory-developed Modular, Agile, Scalable Optical Terminal (MAScOT). About the size of a house cat, MAScOT features a 4-inch telescope mounted on a two-axis pivoted support (gimbal) with fixed backend optics. The gimbal precisely points the telescope and tracks the laser beam through which communications signals are emitted and received in the direction of the desired data recipient or sender. Underneath the gimbal, in a separate assembly, are the backend optics, which contain light-focusing lenses, tracking sensors, fast-steering mirrors, and other components to finely point the laser beam.

MAScOT made its debut in space as part of the laboratory’s Integrated Laser Communications Relay Demonstration (LCRD) LEO User Modem and Amplifier Terminal (ILLUMA-T), which launched to the International Space Station in November 2023. Over the following six months, the laboratory team performed experiments to test and characterize the system's basic functionality, performance, and utility for human crews and user applications. Initially, the team checked whether the ILLUMA-T-to-LCRD optical link was operating at the intended data rates in both directions: 622 Mbps down and 51 Mbps up. In fact, even higher data rates were achieved: 1.2 Gbps down and 155 Mbps up. MAScOT’s lasercom terminal architecture, which was recognized with a 2025 R&D 100 Award, is now being used for Artemis II and will support future space missions.
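Those demonstrated rates put the "within a few hours" estimate in perspective. As a back-of-the-envelope illustration (the 2-terabyte data volume is our assumption, not a mission figure):

```python
def transfer_hours(data_tb, rate_gbps):
    """Hours needed to downlink `data_tb` terabytes at `rate_gbps` gigabits/second."""
    bits = data_tb * 8e12               # terabytes to bits
    return bits / (rate_gbps * 1e9) / 3600

# Illustrative: 2 TB of mission data at the 1.2 Gbps rate ILLUMA-T demonstrated
print(round(transfer_hours(2, 1.2), 1))  # ~3.7 hours
```

At typical RF downlink rates an order of magnitude lower, the same transfer would stretch from hours into days.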

"Our success with ILLUMA-T laid the foundation for streaming HD [high-definition] video to and from the moon," says co-principal investigator Jade Wang, an assistant leader of the Optical and Quantum Communications Group. "You can imagine the Artemis astronauts using videoconferencing to connect with physicians, coordinate mission activities, and livestream their lunar trips."

A dedicated operations team from Lincoln Laboratory is following the 10-day Artemis II mission from ground stations in Houston, Texas, and White Sands, New Mexico, and even as far as an experimental ground station in Australia, which allows for a better view of the spacecraft from the Southern Hemisphere. Leading up to the launch, the operations team had been making monthly trips to the Houston and White Sands ground stations to perform maintenance and simulations of various stages of the Artemis mission — from prelaunch to launch to the journey to the moon and back to the splashdown at the end of the mission. 

"Doing these monthly simulations is important so we all stay fresh and engaged, especially when there is a launch delay," says Khatri, who adds that team members have had the opportunity to meet and speak with the four astronauts several times during these trips.

Lessons learned throughout the Artemis II mission will pave the way for humans to return to the lunar surface and beyond, eventually to Mars. Through the Artemis program, NASA will travel farther into space and explore more of the moon while creating an enduring presence in deep space and a legacy for future generations.

O2O is funded by the Space Communication and Navigation (SCaN) program at NASA Headquarters in Washington. O2O was developed by a team of engineers from NASA’s Goddard Space Flight Center and Lincoln Laboratory. This partnership has led to multiple lasercom missions, such as the 2013 Lunar Laser Communication Demonstration (LLCD), the 2021 LCRD, the 2022 TeraByte Infrared Delivery (TBIRD), and the 2023 ILLUMA-T.


MIT researchers measure traffic emissions, to the block, in real time

A new study pieces together existing data sources in order to develop a detailed, dynamic picture of auto emissions.


In a study focused on New York City, MIT researchers have shown that existing sensors and mobile data can be used to generate a near real-time, high-resolution picture of auto emissions, which could be used to develop local transportation and decarbonization policies.

The new method produces much more detailed data than some other common approaches, which use intermittent samples of vehicle emissions. The researchers say it is also more practical and scales up better than some studies that have aimed for very granular emissions data from a small number of automobiles at once. The work helps bridge the gap between less-detailed citywide emissions inventories and highly detailed analyses based on individual vehicles.

“Our model, by combining real-time traffic cameras with multiple data sources, allows extrapolating very detailed emission maps, down to a single road and hour of the day,” says Paolo Santi, a principal research scientist in the MIT Senseable City Lab and co-author of a new paper detailing the project’s results. “Such detailed information can prove very helpful to support decision-making and understand effects of traffic and mobility interventions.”

Carlo Ratti, director of the MIT Senseable City Lab, notes that the research “is part of our lab’s ongoing quest into hyperlocal measurements of air quality and other environmental factors. By integrating multiple streams of data, we can reach a level of precision that was unthinkable just a few years ago — giving policymakers powerful new tools to understand and protect human health.”

The new method also protects privacy, since it uses computer vision techniques to recognize types of vehicles, but without compiling license plate numbers. The study leverages technologies, including those already installed at intersections, to yield richer data about vehicle movement and pollution.

“The very basic idea is just to estimate traffic emissions using existing data sources in a cost-effective way,” says Songhua Hu, a former postdoc in the Senseable City Lab, and now an assistant professor at City University of Hong Kong.

The paper, “Ubiquitous Data-driven Framework for Traffic Emission Estimation and Policy Evaluation,” is published in Nature Sustainability.

The authors are Hu; Santi; Tom Benson, a researcher in the Senseable City Lab; Xuesong Zhou, a professor of transportation engineering at Arizona State University; An Wang, an assistant professor at Hong Kong Polytechnic University; Ashutosh Kumar, a visiting doctoral student at the Senseable City Lab; and Ratti. The MIT Senseable City Lab is part of MIT’s Department of Urban Studies and Planning.

Manhattan measurements

To conduct the study, the researchers used images from 331 cameras already in use in Manhattan intersections, along with anonymized location records from over 1.75 million mobile phones. Applying vehicle-recognition programs and defining 12 broad categories of automobiles, the scholars found they could correctly place 93 percent of vehicles in the right category. The imaging also yielded important information about the specific ways traffic signals affect traffic flow. That matters because traffic signals are a major reason for stop-and-go driving patterns, which strongly affect urban emissions but are often omitted in conventional inventories.

The mobile phone data then provided rich information about the overall patterns of traffic and movement of individual vehicles throughout the city. The scholars combined the camera and phone data with known information about emissions rates to arrive at their own emissions estimates for New York City.

“We just need to input all emission-related information based on existing urban data sources, and we can estimate the traffic emissions,” Hu says.
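The basic bottom-up calculation behind such an estimate can be sketched in a few lines: count vehicles by class on a road segment, multiply by a per-class emission factor and the segment length, and sum. The vehicle classes and factor values below are hypothetical placeholders for illustration, not the study's calibrated inputs.

```python
# Hypothetical emission factors, in grams of CO2 per vehicle-kilometer.
EMISSION_FACTORS = {"car": 192.0, "truck": 760.0, "bus": 1300.0}

def road_hour_emissions(counts, road_km):
    """Estimate grams of CO2 emitted on one road segment in one hour.

    counts  -- dict mapping vehicle class to vehicles observed that hour
    road_km -- length of the road segment in kilometers
    """
    return sum(EMISSION_FACTORS[cls] * n * road_km
               for cls, n in counts.items())

# Example: one block-long segment (0.2 km) during a single morning hour.
counts = {"car": 450, "truck": 30, "bus": 12}
total = road_hour_emissions(counts, road_km=0.2)
print(round(total, 1))  # grams of CO2 for this segment-hour
```

In the paper's framework, the vehicle counts come from camera-based recognition and the traffic volumes from mobility data, at a much finer classification than this three-class toy.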

Moreover, the researchers evaluated the changes in emissions that might occur in different scenarios when traffic patterns, or vehicle types, also change.

For one, they modeled what would happen to emissions if a certain percentage of travel demand shifted from private vehicles to buses. In another scenario, they looked at what would happen if morning and evening rush hours were spread out over longer windows, leaving fewer vehicles on the road at once. They also modeled the effects of replacing fine-grained emissions inputs with citywide averages, finding that the rougher estimates could deviate from the fine-grained results by anywhere from −49 percent to 25 percent. That underscores how seemingly small simplifications can introduce large errors into emission estimates.

Major emissions drop

On one level, this work involved altering inputs into the model and seeing what emerged. But one scenario the researchers studied is based on a real-world change: In January 2025, New York City implemented congestion pricing south of 60th Street in Manhattan.

To study that, the researchers looked at what happened to vehicle traffic at intervals of two, four, six, and eight weeks after the program began. Overall, congestion pricing lowered traffic volume by about 10 percent, but produced an even larger drop in emissions, of 16 to 22 percent.

This finding aligns with a previous study by researchers at Cornell University, which reported a 22 percent reduction in particulate matter (PM2.5) levels within the pricing zone. The MIT team also found that these reductions were not evenly distributed across the network, with larger declines on some major streets and more mixed effects outside the pricing zone.

“We see these kinds of huge changes after the congestion pricing began,” Hu says. “I think that’s a demonstration that our model can be very helpful if a government really wants to know if a new policy converts into real-world impact.”

There are additional forms of data that could be fed into the researchers’ new method. For instance, in related work in Amsterdam, the team leveraged dashboard cams from vehicles to yield rich information about vehicle movement.

“With our model we can make any camera used in cities, from the hundreds of traffic cameras to the thousands of dash cams, a powerful device to estimate traffic emissions in real-time,” says Fábio Duarte, the associate director of research and design at the MIT Senseable City Lab, who has worked on multiple related studies.

The research was supported by the city of Amsterdam, the AMS Institute, and Abu Dhabi’s Department of Municipalities and Transport.

It was also supported by the MIT Senseable City Consortium, which consists of Atlas University, the city of Laval, the city of Rio de Janeiro, Consiglio per la Ricerca in Agricoltura e l’Analisi dell’Economia Agraria, the Dubai Future Foundation, FAE Technology, KAIST Center for Advanced Urban Systems, Sondotecnica, Toyota, and Volkswagen Group America.


Evaluating the ethics of autonomous systems

MIT researchers developed a testing framework that pinpoints situations where AI decision-support systems are not treating people and communities fairly.


Artificial intelligence is increasingly being used to help optimize decision-making in high-stakes settings. For instance, an autonomous system can identify a power distribution strategy that minimizes costs while keeping voltages stable.

But while these AI-driven outputs may be technically optimal, are they fair? What if a low-cost power distribution strategy leaves disadvantaged neighborhoods more vulnerable to outages than higher-income areas?

To help stakeholders quickly pinpoint potential ethical dilemmas before deployment, MIT researchers developed an automated evaluation method that balances the interplay between measurable outcomes, like cost or reliability, and qualitative or subjective values, such as fairness.   

The system separates objective evaluations from user-defined human values, using a large language model (LLM) as a proxy for humans to capture and incorporate stakeholder preferences. 

The adaptive framework selects the best scenarios for further evaluation, streamlining a process that typically requires costly and time-consuming manual effort. These test cases can show situations where autonomous systems align well with human values, as well as scenarios that unexpectedly fall short of ethical criteria.

“We can insert a lot of rules and guardrails into AI systems, but those safeguards can only prevent the things we can imagine happening. It is not enough to say, ‘Let’s just use AI because it has been trained on this information.’ We wanted to develop a more systematic way to discover the unknown unknowns and have a way to predict them before anything bad happens,” says senior author Chuchu Fan, an associate professor in the MIT Department of Aeronautics and Astronautics (AeroAstro) and a principal investigator in the MIT Laboratory for Information and Decision Systems (LIDS).

Fan is joined on the paper by lead author Anjali Parashar, a mechanical engineering graduate student; Yingke Li, an AeroAstro postdoc; and others at MIT and Saab. The research will be presented at the International Conference on Learning Representations.

Evaluating ethics

In a large system like a power grid, evaluating the ethical alignment of an AI model’s recommendations in a way that considers all objectives is especially difficult.

Most testing frameworks rely on pre-collected data, but labeled data on subjective ethical criteria are often hard to come by. In addition, because ethical values and AI systems are both constantly evolving, static evaluation methods based on written codes or regulatory documents require frequent updates.

Fan and her team approached this problem from a different perspective. Drawing on their prior work evaluating robotic systems, they developed an experimental design framework to identify the most informative scenarios, which human stakeholders would then evaluate more closely.

Their two-part system, called Scalable Experimental Design for System-level Ethical Testing (SEED-SET), incorporates quantitative metrics and ethical criteria. It can identify scenarios that effectively meet measurable requirements and align well with human values, and vice versa.   

“We don’t want to spend all our resources on random evaluations. So, it is very important to guide the framework toward the test cases we care the most about,” Li says.

Importantly, SEED-SET does not need pre-existing evaluation data, and it adapts to multiple objectives.

For instance, a power grid may have several user groups, including a large rural community and a data center. While both groups may want low-cost and reliable power, each group’s priority from an ethical perspective may vary widely.

These ethical criteria may not be well-specified, so they can’t be measured analytically.

The power grid operator wants to find the most cost-effective strategy that best meets the subjective ethical preferences of all stakeholders.

SEED-SET tackles this challenge by splitting the problem into two, following a hierarchical structure. An objective model considers how the system performs on tangible metrics like cost. Then a subjective model that considers stakeholder judgements, like perceived fairness, builds on the objective evaluation.

“The objective part of our approach is tied to the AI system, while the subjective part is tied to the users who are evaluating it. By decomposing the preferences in a hierarchical fashion, we can generate the desired scenarios with fewer evaluations,” Parashar says.

Encoding subjectivity

To perform the subjective assessment, the system uses an LLM as a proxy for human evaluators. The researchers encode the preferences of each user group into a natural language prompt for the model.

The LLM uses these instructions to compare two scenarios, selecting the preferred design based on the ethical criteria.

“After seeing hundreds or thousands of scenarios, a human evaluator can suffer from fatigue and become inconsistent in their evaluations, so we use an LLM-based strategy instead,” Parashar explains.

SEED-SET uses the selected scenario to simulate the overall system (in this case, a power distribution strategy). These simulation results guide its search for the next best candidate scenario to test.
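As a toy illustration of this two-level loop, the sketch below scores randomly generated power-distribution scenarios with an objective model, then runs pairwise preference comparisons over a shortlist, using a simple rule-based function as a stand-in for the LLM proxy (in SEED-SET itself, stakeholder preferences are encoded as a natural-language prompt and the LLM makes the comparison). The scenario fields, scoring weights, and selection rule here are all invented for illustration.

```python
import random

random.seed(0)

def objective_score(scenario):
    # Objective model: measurable metrics only (reliability minus cost).
    return scenario["reliability"] - 0.5 * scenario["cost"]

def prefers(a, b):
    # Stand-in for the LLM preference judge: prefer the scenario whose
    # outages are spread more evenly across neighborhoods (a fairness
    # criterion a stakeholder prompt might encode).
    return a if a["outage_gap"] < b["outage_gap"] else b

# Randomly generated candidate power-distribution scenarios.
candidates = [{"cost": random.uniform(0, 1),
               "reliability": random.uniform(0, 1),
               "outage_gap": random.uniform(0, 1)} for _ in range(50)]

# Level 1: keep the scenarios that do well on objective metrics.
shortlist = sorted(candidates, key=objective_score, reverse=True)[:10]

# Level 2: pairwise "LLM" comparisons pick the preferred scenario.
best = shortlist[0]
for sc in shortlist[1:]:
    best = prefers(best, sc)

print(best["outage_gap"] == min(s["outage_gap"] for s in shortlist))
```

The hierarchical split mirrors the paper's decomposition: the objective pass prunes the search space cheaply, so the expensive subjective comparisons run only on the scenarios that matter.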

In the end, SEED-SET intelligently selects the most representative scenarios that either meet or are not aligned with objective metrics and ethical criteria. In this way, users can analyze the performance of the AI system and adjust its strategy.

For instance, SEED-SET can pinpoint cases of power distribution that prioritize higher-income areas during periods of peak demand, leaving underprivileged neighborhoods more prone to outages.

To test SEED-SET, the researchers evaluated realistic autonomous systems, like an AI-driven power grid and an urban traffic routing system. They measured how well the generated scenarios aligned with ethical criteria.

The system generated more than twice as many optimal test cases as the baseline strategies in the same amount of time, while uncovering many scenarios other approaches overlooked.

“As we shifted the user preferences, the set of scenarios SEED-SET generated changed drastically. This tells us the evaluation strategy responds well to the preferences of the user,” Parashar says.

To measure how useful SEED-SET would be in practice, the researchers will need to conduct a user study to see if the scenarios it generates help with real decision-making.

In addition to running such a study, the researchers plan to explore the use of more efficient models that can scale up to larger problems with more criteria, such as evaluating LLM decision-making.

This research was funded, in part, by the U.S. Defense Advanced Research Projects Agency.


Preview tool helps makers visualize 3D-printed objects

By quickly generating aesthetically accurate previews of fabricated objects, the VisiPrint system could make prototyping faster and less wasteful.


Designers, makers, and others often use 3D printing to rapidly prototype a range of functional objects, from movie props to medical devices. Accurate print previews are essential so users know a fabricated object will perform as expected.

But previews generated by most 3D-printing software focus on function rather than aesthetics. A printed object may end up with a different color, texture, or shading than the user expected, resulting in multiple reprints that waste time, effort, and material.

To help users envision how a fabricated object will look, researchers from MIT and elsewhere developed an easy-to-use preview tool that puts appearance first.

Users upload a screenshot of the object from their 3D-printing software, along with a single image of the print material. From these inputs, the system automatically generates a rendering of how the fabricated object is likely to look.

The artificial intelligence-powered system, called VisiPrint, is designed to work with a range of 3D-printing software and can handle any material example. It considers not only the color of the material, but also gloss, translucency, and how nuances of the fabrication process affect the object’s appearance.

Such aesthetics-focused previews could be especially useful in areas like dentistry, by helping clinicians ensure temporary crowns and bridges match the appearance of a patient’s teeth, or in architecture, to aid designers in assessing the visual impact of models.

“3D printing can be a very wasteful process. Some studies estimate that as much as a third of the material used goes straight to the landfill, often from prototypes the user ends up discarding. To make 3D printing more sustainable, we want to reduce the number of tries it takes to get the prototype you want. The user shouldn’t have to try out every printing material they have before they settle on a design,” says Maxine Perroni-Scharf, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on VisiPrint.

She is joined on the paper by Faraz Faruqi, a fellow EECS graduate student; Raul Hernandez, an MIT undergraduate; SooYeon Ahn, a graduate student at the Gwangju Institute of Science and Technology; Szymon Rusinkiewicz, a professor of computer science at Princeton University; William Freeman, the Thomas and Gerd Perkins Professor of EECS at MIT and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Stefanie Mueller, an associate professor of EECS and Mechanical Engineering at MIT, and a member of CSAIL. The research will be presented at the ACM CHI Conference on Human Factors in Computing Systems.

Accurate aesthetics

The researchers focused on fused deposition modeling (FDM), the most common type of 3D printing. In FDM, print material filament is melted and then squirted through a nozzle to fabricate an object one layer at a time.

Generating accurate aesthetic previews is challenging because the melting and extrusion process can change the appearance of a material, as can the height of each deposited layer and the path the nozzle follows during fabrication.

VisiPrint uses two AI models that work together to overcome those challenges.

The VisiPrint preview is based on two inputs: a screenshot of the digital design from a user’s 3D-printing software (called “slicer” software), and an image of the print material, which can be taken from an online source or captured from a printed sample.

From these inputs, a computer vision model extracts features from the material sample that are important for the object’s appearance.

It feeds those features to a generative AI model that computes the geometry and structure of the object, while incorporating the so-called “slicing” pattern the nozzle will follow as it extrudes each layer.

The key to the researchers’ approach is a special conditioning method. This involves carefully adjusting the inner workings of the model to guide it, so it follows the slicing pattern and obeys the constraints of the 3D-printing process.

Their conditioning method utilizes a depth map that preserves the shape and shading of the object, along with a map of the edges that reflects the internal contours and structural boundaries.

“If you don’t have the right balance of these two things, you could end up with bad geometry or an incorrect slicing pattern. We had to be careful to combine them in the right way,” Perroni-Scharf says.
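A rough sketch of what combining those two conditioning signals might look like at the pixel level, assuming simple 2D maps with values in [0, 1]. The weighted blend below is an illustrative simplification with invented weights; VisiPrint's actual conditioning adjusts the generative model's internals rather than averaging images.

```python
def combine_conditions(depth, edges, w_depth=0.6, w_edges=0.4):
    """Blend two same-sized 2D maps (values in [0, 1]) into one map.

    depth -- preserves the object's shape and shading
    edges -- reflects internal contours from the slicing pattern
    """
    return [[min(1.0, max(0.0, w_depth * d + w_edges * e))
             for d, e in zip(drow, erow)]
            for drow, erow in zip(depth, edges)]

# Toy 4x4 inputs: a depth gradient and an internal-contour (edge) mask.
depth = [[(r * 4 + c) / 15 for c in range(4)] for r in range(4)]
edges = [[1.0 if 1 <= r <= 2 and 1 <= c <= 2 else 0.0 for c in range(4)]
         for r in range(4)]
cond = combine_conditions(depth, edges)
print(max(max(row) for row in cond) <= 1.0)  # stays within [0, 1]
```

The relative weights stand in for the "right balance" the researchers describe: too much edge weight distorts the geometry, too little loses the slicing pattern.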

A user-focused system

The team also produced an easy-to-use interface where one can upload the required images and evaluate the preview.

The VisiPrint interface enables more advanced makers to adjust multiple settings, such as the influence of certain colors on the final appearance.

In the end, the aesthetic preview is intended to complement the functional preview generated by slicer software, since VisiPrint does not estimate printability, mechanical feasibility, or likelihood of failure.

To evaluate VisiPrint, the researchers conducted a user study that asked participants to compare the system to other approaches. Nearly all participants said it provided better overall appearance as well as more textural similarity with printed objects.

In addition, the VisiPrint preview process took about a minute on average, which was more than twice as fast as any competing method.

“VisiPrint really shined when compared to other AI interfaces. If you give a more general AI model the same screenshots, it might randomly change the shape or use the wrong slicing pattern because it had no direct conditioning,” she says.

In the future, the researchers want to address artifacts that can occur when model previews have extremely fine details. They also want to add features that allow users to optimize parts of the printing process beyond color of the material.

“It is important to think about the way that we fabricate objects. We need to continue striving to develop methods that reduce waste. To that end, this marriage of AI with the physical making process is an exciting area of future work,” Perroni-Scharf says.

“‘What you see is what you get’ has been the main thing that made desktop publishing ‘happen’ in the 1980s, as it allowed users to get what they wanted at first try. It is time to get WYSIWYG for 3D printing as well. VisiPrint is a great step in this direction,” says Patrick Baudisch, a professor of computer science at the Hasso Plattner Institute, who was not involved with this work.

This research was funded, in part, by an MIT Morningside Academy for Design Fellowship and an MIT MathWorks Fellowship.


Two physicists and a curious host walk into a studio…

On GBH’s new show The Curiosity Desk, MIT LIGO researchers revel in the beauties of fundamental discovery science and MIT astronomers talk planetary defense.


This March on The Curiosity Desk, GBH’s daily science show with host Edgar B. Herwick III, MIT scientists dropped by to address the questions: “How close are we to observing the dark universe?” (Thursday, March 12 episode) and “Is Earth prepared for asteroids?” (Thursday, March 26 episode).

Up first, Prof. Nergis Mavalvala, dean of the MIT School of Science, and Prof. Salvatore Vitale joined the host live in studio to talk about the science behind the Laser Interferometer Gravitational-wave Observatory (LIGO) and how LIGO has provided the ability to observe the universe in ways that have never been done before.

Beyond the chance to learn something new, Mavalvala explained, experimenting delivers an added excitement: “pushing the technology, the precision of the instrument, requires you to be very inventive. There’s almost nothing in these experiments that you can go buy off a shelf. Everything you’re designing, everything is from scratch. You’re meeting very stringent requirements.”

Herwick likened how they might tweak or tinker with the experiment to souping up a car engine, and the LIGO scientists nodded – adding that in the most complex experiments, each bite-sized part on its own works well, and it’s the interfaces between them that scientists must get right.

While there, the two long-time colleagues also took a detour to explain how in physics experimentalists benefit from the work of theorists and vice versa. Mavalvala, whose work focuses on building the world’s most precise instruments to study physical phenomena, described the synergy between ideas that come from theory (work that Vitale does) and how you measure. (No, they assure Herwick, they don’t get into a lot of fights.)

In fact, it’s fantastic to have people from both worlds at MIT, said Vitale.  Mavalvala agreed. “One of the things that’s really important about theory in science is that ultimately, in physics especially, it’s a bunch of math. And the important thing that you have to ask is, ‘does nature really behave that way?’ And how do you answer that question? You have to go out and measure. You have to go observe nature,” said Mavalvala.

As scientists fine-tune the gravitational wave detectors, they will inform what data are collected, what astrophysical objects they might find or hope to find – and the search for certain fainter, farther away, or more exotic objects can inform what enhancements they prioritize.

But what if I’m not interested in any of that, asked Herwick. Why should I care?

“To me, it falls in the category of for the betterment of humankind. You never know what is going to be useful. A lot of fundamental research was very far at the beginning from what turned out to be fundamental applications,” said Vitale, adding, “What they do on the instrument side has already now very important applications.”

Mavalvala was unequivocal, underscoring how pursuing curiosity is put to good use:

“When you’re making instruments that achieve that kind of precision, you’re inventing new technologies. [With LIGO] We’ve invented vibration isolation technologies to keep our mirrors really still. We’ve invented lasers that are quieter than any that were ever made before. We’ve invented photonic techniques that are allowing us to make applications even to far off things like quantum computing. 

“So, this is one of the beauties of fundamental discovery science. A, you’ll discover something. But B you’ll be doing two things: you’ll be inventing the technologies of the future, and you’ll be training the generations of scientists who may go off to do completely different things, but this is what inspires them.”

Watch the full conversation below and on YouTube:

 

Planetary defense

Turning to objects beyond Earth – specifically, asteroids – Associate Professor Julien de Wit, along with research scientists Artem Burdanov and Saverio Cambioni, joined Herwick at the Curiosity Desk later in the month. They talked about their ongoing research to identify smaller asteroids (about the size of a school bus) using the James Webb Space Telescope and why planetary defense goes beyond thinking about the massive asteroids featured in movies like Armageddon. Notably, a lot of technology on Earth depends on satellites, and asteroids pose the biggest threat to those satellites.

“Dinosaurs didn’t need to care about an asteroid hitting the moon. Humanity a century ago didn’t care. Now, if [an asteroid] hits the moon, a lot of debris will be expelled and all those particles – big and small – they will affect the fleet of satellites around Earth. That’s a big potential problem, so we need to take that into account in our future,” said Burdanov.

There’s also a potential upside to being better able to detect and potentially “capture” asteroids, explained de Wit, all of it benefitted by new instruments. “It’s really an asteroid revolution going on… Our situational awareness of what’s out there is really about to change dramatically.”

He explains that one dream is to mine asteroids themselves for material to build or power next generation technologies or stations in space. “The way to reliably move into space is to use resources from space. We can’t just move stuff to build a full city. We use stuff from space.”

Echoing the sentiments expressed earlier in the month by MIT’s dean of science, the trio of asteroid explorers also described how the pursuits of planetary scientists can lead to unexpected rewards along the way. “We are swimming in an era that is data rich, and so what we do in our group and at MIT is mine that data to reveal the universe like never before,” says de Wit. “Revealing new populations of asteroids, new populations of planets, and making sense of our universe like we have never done.”

Watch the full conversation below and on the GBH YouTube channel: 

Tune in to the Curiosity Desk some Thursdays to hear from MIT researchers as they visit Herwick and the production team. 


Tomás Palacios named director of the Institute for Soldier Nanotechnologies

The electrical engineering and nanotechnology leader will guide the U.S. Army-sponsored research center as it advances next-generation materials, electronics, and photonics for national security.


Tomás Palacios, the Clarence J. LeBel Professor of Electrical Engineering at MIT, has been appointed director of the MIT Institute for Soldier Nanotechnologies (ISN). Palacios assumed the role on Feb. 4, and will continue to serve as the director of the MIT Microsystems Technology Laboratories (MTL).

Founded in 2002, ISN is a U.S. Army-sponsored University Affiliated Research Center focused on advancing fundamental science and engineering to enable next-generation capabilities for protection, survivability, sensing, and system performance. ISN brings together researchers from across MIT to address challenges at the intersection of materials, devices, and systems. In collaboration with industry, MIT Lincoln Laboratory, the U.S. Army, and other U.S. military services, ISN works to transition promising technologies for both commercial and defense applications.

As director, Palacios will oversee ISN’s research portfolio, facilities, and strategic partnerships, working closely with the ISN leadership team, MIT administration, U.S. Army, and other research sponsors to guide the institute’s next phase of research and collaboration.

“Tomás Palacios brings exceptional energy, vision, and leadership to the Institute for Soldier Nanotechnologies,” says Ian A. Waitz, MIT’s vice president for research, who announced the appointment in a recent letter. “As director of Microsystems Technology Laboratories, he has demonstrated a rare ability to build strong research communities and partnerships across academia, industry, and government. I am confident he will guide ISN’s next phase with momentum, scientific excellence, and a deep sense of service to MIT and the nation.”

Palacios brings deep leadership experience within MIT and across national research collaborations. As director of MTL, he leads one of MIT’s flagship interdisciplinary research laboratories supporting work in micro- and nano-scale materials, devices, and systems. He is a member of the MIT.nano Leadership Council and, since 2023, has served as associate director of the multi-university SUPeRior Energy-efficient Materials and dEvices (SUPREME) Center, a Semiconductor Research Corp. JUMP 2.0 program focused on next-generation energy-efficient semiconductor technologies. Palacios is also the co-founder of several technology companies, including Vertical Semiconductor, Finwave Semiconductor, and CDimension, Inc.

“MIT’s motto, ‘mens et manus’ — ‘mind and hand’ — reminds us that fundamental research and real-world impact must go hand-in-hand,” says Palacios. “At ISN, our mission is to help protect and empower those who defend our nation. That responsibility demands urgency, creativity, and deep collaboration. I look forward to building on ISN’s strong partnership with the U.S. Army, industry, and colleagues across MIT to push the frontiers of nanotechnology and translate discovery into meaningful impact at the speed of relevance.”

Palacios is internationally recognized for his work on wide-bandgap semiconductors, nanoelectronics, and advanced electronic materials. An IEEE Fellow, his research spans fundamental device physics through system-level integration, with applications in high-power and high-frequency electronics, sensing, and energy systems. He is widely recognized for his research contributions, as well as for his leadership in education and mentoring.

Palacios succeeds John Joannopoulos, who served as ISN director from 2006 until his death in August 2025. During his nearly two decades of ISN leadership, Joannopoulos strengthened ISN’s interdisciplinary culture, devoting significant effort to fostering collaborations among ISN-funded principal investigators, building partnerships that extend across MIT and beyond to the Army research community. Joannopoulos, an extraordinary researcher and a generous mentor, was also a co-founder of companies such as WiTricity and OmniGuide, helping to translate many of ISN’s foundational scientific discoveries into commercial technologies. Raúl Radovitzky, ISN’s associate director, served as interim director during the search for a new director, providing continuity to ISN’s research programs, facilities, and partnerships.

“It is an honor to serve as director of the Institute for Soldier Nanotechnologies at such an important moment in time,” says Palacios. “ISN has built an extraordinary foundation of interdisciplinary excellence under Professor John Joannopoulos’ leadership and, more recently, Prof. Radovitzky’s. I look forward to working with the ISN community to advance breakthrough research at the intersection of materials, devices, and systems — research that not only strengthens national security, but also translates into technologies that benefit society more broadly.” 


Climate change may produce “fast-food” phytoplankton

With warmer ocean temperatures, the composition of marine plankton could shift from protein-rich to carb-heavy, a new study suggests.


We are what we eat. And in the ocean, most life-forms source their food from phytoplankton. These microscopic, plant-like algae are the primary food source for krill, sea snails, some small fish, and jellyfish, which in turn feed larger marine animals that are prey for the ocean’s top predators, including humans.

Now MIT scientists are finding that phytoplankton's composition, and the basic diet of the ocean, will shift significantly with climate change.

In an open-access study appearing today in the journal Nature Climate Change, the team reports that as sea surface temperatures rise over the next century, phytoplankton in polar regions will adapt to be less rich in proteins, heavier in carbohydrates, and lower in nutrients overall.

The conclusions are based on results from the team’s new model, which simulates the composition of phytoplankton in response to changes in ocean temperature, circulation, and sea ice coverage. In a scenario in which humans continue to emit greenhouse gases through the year 2100, the team found that changing ocean conditions, particularly in the polar regions, will shift phytoplankton’s balance of proteins to carbohydrates and lipids by approximately 20 percent. The researchers analyzed observations from the past several decades, and already have found a signature of this change in the real world.

“We’re moving in the poles toward a sort of fast-food ocean,” says lead author and MIT postdoc Shlomit Sharoni. “Based on this prediction, the nutritional composition of the surface ocean will look very different by the end of the century.”

The study’s MIT co-authors are Mick Follows, Stephanie Dutkiewicz, and Oliver Jahn; along with Keisuke Inomura of the University of Rhode Island; Zoe Finkel, Andrew Irwin, and Mohammad Amirian of Dalhousie University in Halifax, Canada; and Erwan Monier of the University of California at Davis.

Nutritional information

Phytoplankton drift through the upper, sun-lit layers of the ocean. Like plants on land, the marine microalgae are photosynthetic. Their growth depends on light from the sun, carbon dioxide from the atmosphere, and nutrients such as nitrogen and iron that well up from the deep ocean.

When studying how phytoplankton will respond to climate change, scientists have primarily focused on how rising ocean temperatures will affect phytoplankton populations. Whether and how the plankton’s composition will change is less well-understood.

“There’s been an awareness that the nutritional value of phytoplankton can shift with climate change,” says Sharoni. “But there has been very little work directly addressing that question.”

She and her colleagues set out to understand how ocean conditions influence phytoplankton macromolecular composition. Macromolecules are large molecules that are essential for life. The main types of macromolecules include proteins, lipids, carbohydrates, and nucleic acids (the building blocks of DNA and RNA). Every form of life, including phytoplankton, is composed of a balance of macromolecules that helps it to survive in its particular environment.

“Nearly all the material in a living organism is in these broad molecular forms, each having a particular physiological function, depending on the circumstances that the organism finds itself in,” says Follows, a professor in the Department of Earth, Atmospheric and Planetary Sciences.

An unbalanced diet

In their new study, the researchers first looked at how today’s ocean conditions influence phytoplankton’s macromolecular composition. The team used data from lab experiments carried out by their collaborators at Dalhousie. These experiments revealed ways in which phytoplankton’s balance of macromolecules, such as proteins to carbohydrates, shifted in response to changes in water temperature and the availability of light and nutrients.

With these lab-based data, the group developed a quantitative model that simulates how plankton in the lab would readjust its balance of proteins to carbohydrates under different light and nutrient conditions. Sharoni and Inomura then paired this new model with an established model of ocean circulation and dynamics developed previously at MIT. With this modeling combination, they simulated how phytoplankton composition shifts in response to ocean conditions in different parts of the world and under different climate scenarios.

The team first modeled today’s current climate conditions. Consistent with observations, their model predicts that a little more than half of the average phytoplankton cell today is composed of proteins. The rest is a mix of carbohydrates and lipids.

Interestingly, in polar regions, phytoplankton are slightly more protein-rich. At the poles, the cover of sea ice limits the amount of sunlight phytoplankton can absorb. The researchers surmise that phytoplankton may have adapted by making more light-harvesting proteins to help the organisms efficiently absorb the weak sunlight.

However, when they modeled a future climate change scenario, the team found a significant shift in phytoplankton composition. They simulated a scenario in which humans continue to emit greenhouse gases through the year 2100. In this scenario, sea surface temperatures will rise by 3 degrees Celsius, substantially reducing sea ice coverage. Warmer temperatures will also slow the ocean’s circulation, limiting the amount of nutrients that can circulate up from the deep ocean.

Under these conditions, the model predicts that the phytoplankton population in polar regions will increase significantly, consistent with earlier studies. Uniquely, this model predicts that phytoplankton in polar regions will shift from a protein-rich to a carb- and lipid-heavy composition. They found that the plankton will not need as much light-harvesting protein, since less sea ice will make sunlight more easily available for the organisms to absorb. Total protein levels in these polar phytoplankton will decline by up to 30 percent, with a corresponding increase in the contribution of carbs and lipids.

It’s unclear what impact a larger population of carb- and lipid-heavy phytoplankton may have on the rest of the marine food web. While some organisms may be stressed by a reduction in protein, others that make lipid stores to survive through the winter might thrive.

The team also simulated phytoplankton in lower-latitude, subtropical regions. In these ocean areas, it’s expected that phytoplankton populations will decline by 50 percent. And the team’s modeling shows that their composition will also shift.

With warmer temperatures, the ocean’s circulation will slow down, limiting the amount of nutrients that can upwell from the deep ocean. In response, subtropical phytoplankton may have to find ways to live at deeper depths, to strike a balance between getting enough sunlight and nutrients. Under these conditions, the organisms will likely shift to a slightly more protein-rich composition, making use of the same photosynthetic proteins that their polar counterparts will require less of.

On balance, given the projected changes in phytoplankton populations with climate change, their average composition around the world will shift to a more carb-heavy, low-nutrient composition.

The researchers went a step further and found that their modeling agrees with the small set of available phytoplankton field samples that other scientists previously collected from Arctic and Antarctic regions. These samples show that phytoplankton compositions have become more carb- and lipid-heavy over the past few decades, as the team’s model predicts under climate warming.

“In these regions, you can already see climate change, because sea ice is already melting,” Sharoni explains. “And our model shows that proteins in polar plankton have been declining, while carbs and lipids are increasing.”

“It turns out that climate change is accelerated in the Arctic, and we have data showing that the composition of phytoplankton has already responded,” Follows adds. “The main message is: The caloric content at the base of the marine food web is already changing. And it’s not a clear story as to how this change will transmit through the food web.”

This work was supported, in part, by the Simons Foundation.


MIT researchers use AI to uncover atomic defects in materials

A new model measures defects that can be leveraged to improve materials’ mechanical strength, heat transfer, and energy-conversion efficiency.


In biology, defects are generally bad. But in materials science, defects can be intentionally tuned to give materials useful new properties. Today, atomic-scale defects are carefully introduced during the manufacturing process of products like steel, semiconductors, and solar cells to help improve strength, control electrical conductivity, optimize performance, and more.

But even as defects have become a powerful tool, accurately measuring different types of defects and their concentrations in finished products has been challenging, especially without cutting open or damaging the final material. Without knowing what defects are in their materials, engineers risk making products that perform poorly or have unintended properties.

Now, MIT researchers have built an AI model capable of classifying and quantifying certain defects using data from a noninvasive neutron-scattering technique. The model, which was trained on 2,000 different semiconductor materials, can detect up to six kinds of point defects in a material simultaneously, something that would be impossible using conventional techniques alone.

“Existing techniques can’t accurately characterize defects in a universal and quantitative way without destroying the material,” says lead author Mouyang Cheng, a PhD candidate in the Department of Materials Science and Engineering. “For conventional techniques without machine learning, detecting six different defects is unthinkable. It’s something you can’t do any other way.”

The researchers say the model is a step toward harnessing defects more precisely in products like semiconductors, microelectronics, solar cells, and battery materials.

“Right now, detecting defects is like the saying about seeing an elephant: Each technique can only see part of it,” says senior author and associate professor of nuclear science and engineering Mingda Li. “Some see the nose, others the trunk or ears. But it is extremely hard to see the full elephant. We need better ways of getting the full picture of defects, because we have to understand them to make materials more useful.”

Joining Cheng and Li on the paper are postdoc Chu-Liang Fu, undergraduate researcher Bowen Yu, master’s student Eunbi Rha, PhD student Abhijatmedhi Chotrattanapituk ’21, and Oak Ridge National Laboratory staff members Douglas L. Abernathy PhD ’93 and Yongqiang Cheng. The paper appears today in the journal Matter.

Detecting defects

Manufacturers have gotten good at tuning defects in their materials, but measuring precise quantities of defects in finished products is still largely a guessing game.

“Engineers have many ways to introduce defects, like through doping, but they still struggle with basic questions like what kind of defect they’ve created and in what concentration,” Fu says. “Sometimes they also have unwanted defects, like oxidation. They don’t always know if they introduced some unwanted defects or impurity during synthesis. It’s a longstanding challenge.”

The result is that there are often multiple defects in each material. Unfortunately, each method for understanding defects has its limits. Techniques like X-ray diffraction and positron annihilation characterize only some types of defects. Raman spectroscopy can discern the type of defect but can’t directly infer the concentration. Another technique, transmission electron microscopy, requires researchers to cut thin slices of samples for imaging.

In a few previous papers, Li and collaborators applied machine learning to experimental spectroscopy data to characterize crystalline materials. For the new paper, they wanted to apply that technique to defects.

For their experiment, the researchers built a computational database of 2,000 semiconductor materials. They made sample pairs of each material, with one doped for defects and one left without defects, then used a neutron-scattering technique that measures the different vibrational frequencies of atoms in solid materials. They trained a machine-learning model on the results.

“That built a foundational model that covers 56 elements in the periodic table,” Cheng says. “The model leverages the multihead attention mechanism, just like what ChatGPT is using. It similarly extracts the difference in the data between materials with and without defects and outputs a prediction of what dopants were used and in what concentrations.”

The researchers fine-tuned their model, verified it on experimental data, and showed it could measure defect concentrations in an alloy commonly used in electronics and in a separate superconductor material.

The researchers also doped the materials multiple times to introduce multiple point defects and test the limits of the model, ultimately finding it can make predictions about up to six defects in materials simultaneously, with defect concentrations as low as 0.2 percent.

“We were really surprised it worked that well,” Cheng says. “It’s very challenging to decode the mixed signals from two different types of defects — let alone six.”

A model approach

Typically, manufacturers of things like semiconductors run invasive tests on a small percentage of products as they come off the manufacturing line, a slow process that limits their ability to detect every defect.

“Right now, people largely estimate the quantities of defects in their materials,” Yu says. “It is a painstaking experience to check the estimates by using each individual technique, which only offers local information in a single grain anyway. It creates misunderstandings about what defects people think they have in their material.”

The results were exciting for the researchers, but they note their technique measuring the vibrational frequencies with neutrons would be difficult for companies to quickly deploy in their own quality-control processes.

“This method is very powerful, but its availability is limited,” Rha says. “Vibrational spectra is a simple idea, but in certain setups it’s very complicated. There are some simpler experimental setups based on other approaches, like Raman spectroscopy, that could be more quickly adopted.”

Li says companies have already expressed interest in the approach and asked when it will work with Raman spectroscopy, a widely used technique that measures the scattering of light. Li says the researchers’ next step is training a similar model based on Raman spectroscopy data. They also plan to expand their approach to detect features that are larger than point defects, like grains and dislocations.

For now, though, the researchers believe their study demonstrates the inherent advantage of AI techniques for interpreting defect data.

“To the human eye, these defect signals would look essentially the same,” Li says. “But the pattern recognition of AI is good enough to discern different signals and get to the ground truth. Defects are this double-edged sword. There are many good defects, but if there are too many, performance can degrade. This opens up a new paradigm in defect science.”

The work was supported, in part, by the Department of Energy and the National Science Foundation.


G. Anthony Grant named a 2025-26 NACDA Athletics Director of the Year

MIT director of athletics and DAPER department head has supported remarkable success by MIT’s student-athletes and coaches while strengthening department culture and initiatives.


The National Association of Collegiate Directors of Athletics (NACDA) has announced that MIT Director of Athletics G. Anthony Grant, head of the MIT Department of Athletics, Physical Education, and Recreation, is among 28 winners of the 2025-26 NACDA Athletics Director of the Year (ADOY) Award.

The ADOY Award highlights the efforts of athletics directors at all levels for their commitment and positive contributions to student-athletes, campuses, and their surrounding communities. Grant is currently in his sixth year at MIT, leading one of the most comprehensive Division III athletics programs in the country. In his role, he directs a department featuring 33 intercollegiate teams, including four Division I rowing programs, while providing opportunities for over 800 student-athletes.

MIT achieved remarkable success under Grant's leadership during the 2024-25 academic year, winning four NCAA championships. Women's swimming and diving captured the first national title in program history, while the women's cross country and track and field program swept all three NCAA championships in 2024-25, a historic first for an NCAA Division III women's program and the first MIT women's titles in cross country, as well as women's indoor and outdoor track and field. 

The year also saw MIT crown 13 individual national champions, with 158 student-athletes earning All-American honors, 166 named All-Region, 227 named All-Conference, and 24 named CSC Academic All-America. Multiple head and assistant coaches claimed national, regional, and conference recognition. Nine teams claimed conference titles, while MIT earned seven NCAA/national top-10 finishes, as men’s indoor track and field (7th), men’s swimming and diving (9th), and men’s lightweight crew joined the four national title winners.

Although Grant began his tenure at MIT just weeks prior to the start of the Covid-19 pandemic, the Institute has continued to excel and grow under his leadership. The Engineers have won six NCAA team national championships, finishing in the top seven of the NACDA LEARFIELD Directors’ Cup standings every year since MIT returned to play following the pandemic. Most recently, MIT finished sixth in the final LEARFIELD Directors’ Cup standings for the 2024-25 academic year, marking the 10th time the Engineers finished in the top 10, while MIT captured the NEWMAC Women’s Presidents Cup for the 10th straight season and 11th time overall in 2024-25.

Grant was instrumental in negotiating a re-branding effort that transitioned team uniforms and other apparel to Nike, working in conjunction with BSN Sports as the official apparel provider. He has also overseen several key initiatives, including expanded fundraising that produced a record-breaking year for annual gifts in 2022, and a $5 million renovation of the varsity athletics Sports Performance Center, which reopened in 2024-25. Most recently, Grant announced a state-of-the-art facility upgrade and turf renovation of the Fran O’Brien Baseball Field and Briggs Softball Field, with work currently underway.

In addition to the on- and off-field accomplishments of MIT's student-athletes and coaches, Grant has intentionally strengthened department culture by focusing on MIT's mission and shared values and behaviors, which were re-branded in 2020 under his leadership. Grant embodies an open-door leadership style, creating an environment where staff at all levels feel comfortable engaging with him. He values feedback and open communication, and fosters a supportive, respectful, and inclusive environment. He actively supports employee initiatives and has worked with student-athlete leaders to enhance the Student-Athlete Advisory Committee to improve real-time feedback collection and engagement at meetings. 

Grant came to MIT from Metropolitan State University of Denver, where he also served as the director of athletics. Prior to MSU Denver, Grant served as the interim director of athletics at Millersville University in Pennsylvania, where he also worked as associate director of athletics for seven years. In addition, Grant has served as the athletic academic coordinator at the University of Iowa. 

He earned his master's degree from Temple University in sport and recreation, along with a PhD in health and sport studies with a specialization in athletic administration from the University of Iowa. His leadership extends beyond MIT, as he is also involved with the National Association of Collegiate Directors of Athletics, the National Association of Division III Athletics Administrators (NADIIIAA), and the Minority Opportunities Athletic Association. Most recently, he was named to the NADIIIAA Board of Directors for 2025-26. 

The ADOY Award program is in its 28th year and has recognized a total of 633 deserving athletics directors to date. The award spans seven divisions (NCAA FBS, FCS, Division I-AAA, II, III, NAIA/Other Four-Year Institutions, and Junior College/Community Colleges). Winners will be recognized in conjunction with the 61st Annual NACDA and Affiliates Convention at Mandalay Bay Resort in Las Vegas, Nevada, at the beginning of the Association-Wide Featured Session on Tuesday, June 9. Additional history surrounding the ADOY Award, including a list of past winners, is available from NACDA.


Implantable islet cells could control diabetes without insulin injections

The cells can survive in the body for at least three months, producing enough insulin to control blood sugar levels, research shows.


Most diabetes patients must carefully monitor their blood sugar levels and inject insulin multiple times per day, to help keep their blood sugar from getting too high.

As a possible alternative to those injections, MIT researchers are developing an implantable device that contains insulin-producing cells. The device encapsulates the cells, protecting them from immune rejection, and it also carries an on-board oxygen generator to keep the cells healthy.

This device, the researchers hope, could offer a way to achieve long-term control of type 1 diabetes. In a new study, they showed that these encapsulated pancreatic islet cells could survive in the body for at least 90 days. In mice that received the implants, the cells remained functional and produced enough insulin to control the animals’ blood sugar levels.

“Islet cell therapy can be a transformative treatment for patients. However, current methods also require immune suppression, which for some people can be really debilitating,” says Daniel Anderson, a professor in MIT’s Department of Chemical Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science. “Our goal is to find a way to give patients the benefit of cell therapy without the need for immune suppression.”

Anderson is the senior author of the study, which appears today in the journal Device. Former MIT research scientist Siddharth Krishnan, who is now an assistant professor of electrical engineering at Stanford University, and former MIT postdoc Matthew Bochenek are the lead authors of the paper. Robert Langer, the David H. Koch Institute Professor at MIT, is also a co-author.

Insulin on demand

Islet cell transplantation has already been used successfully to treat diabetes in patients. Those islet cells typically come from human cadavers, or more recently, can be generated from stem cells. In either case, patients must take immunosuppressive drugs to prevent their immune system from rejecting the transplanted cells.

Another way to prevent immune rejection is to encapsulate cells in a protective device. However, this raises new challenges, as the coating that surrounds the cells can prevent them from receiving enough oxygen.

In a 2023 study, Anderson and his colleagues reported an islet-encapsulation device that also carries an on-board oxygen generator. This generator consists of a proton-exchange membrane that can split water vapor (found abundantly in the body) into hydrogen and oxygen. The hydrogen diffuses harmlessly away, while oxygen goes into a storage chamber that feeds the islet cells through a thin, oxygen-permeable membrane.

Cells encapsulated within this device, they found, could produce insulin for up to a month after being implanted in mice.

“A month is a good timeframe in that it shows basic proof-of-concept. But from a translational standpoint, it’s important to show that you can go quite a bit longer than that,” Krishnan says.

In the new study, the researchers increased the lifespan of the devices by making them more waterproof and more resilient to cracking. They also improved the device electronics to deliver more power to the oxygen generator. The implant is powered wirelessly by an external antenna placed on the skin, which transfers energy to the device. By optimizing the circuitry, the researchers were able to increase the amount of power reaching the oxygen-generating system.

The additional power allowed the device to produce more oxygen, helping the encapsulated cells to survive and function more effectively. As a result, the cells were able to generate much more insulin over time.

Protein factories

In studies in rats and mice, the researchers showed that the new device could function for at least 90 days after being implanted under the skin. During this time, donor islet cells were able to produce enough insulin to keep the animals’ blood sugar levels within a healthy range.

The researchers saw similar results with islet cells derived from induced pluripotent stem cells, which could one day provide an indefinite supply that could be used for any patient who needs them. These islets didn’t fully reverse diabetes, but they did achieve some control of blood sugar levels.

“We’re hoping that in the future, if we can give the cells a little bit longer to fully mature, that they’ll secrete even more insulin to better regulate diabetes in the animals,” Bochenek says.

The researchers now plan to study whether they can get the devices to last for even longer in the body — up to two years, or longer.

“Long-term survival of the islets is an important goal,” Anderson says. “The cells, if they’re in the right environment, seem to be able to survive for a long time. We are excited by the duration we’ve already achieved, and we will be working to extend their function as long as possible.”

The researchers are also exploring the possibility of using this approach to deliver cells that could produce other useful proteins, such as antibodies, enzymes, or clotting factors.

“We think that these technologies could provide a long-term way to treat human disease by making drugs in the body instead of outside of the body,” Anderson says. “There are many protein therapies where patients must receive repeated, lengthy infusions. We think it may be possible to create a device that could continuously create protein therapeutics on demand and as needed by the patient.”

The research was funded, in part, by Breakthrough T1D, the Leona M. and Harry B. Helmsley Charitable Trust, the National Institutes of Health, and a Koch Institute Support (core) Grant from the National Cancer Institute.


Study reveals why some cancer therapies don’t work for all patients

A backup survival pathway can help tumor cells resist certain lung cancer and other drugs. Combining therapies may offer a solution.


Drugs that block enzymes called tyrosine kinases are among the most effective targeted therapies for cancer. However, they typically work for only 40 to 80 percent of the patients who would be expected to respond to them.

In a new study, MIT researchers have figured out why those drugs don’t work in all cases: Many of these tumors have turned on a backup survival pathway that helps them keep growing when the targeted pathway is knocked out.

“This seems to be hardwired into the cells and seems to be providing activation of a critical survival pathway in cancer cells,” says Forest White, the Ned C. and Janet C. Rice Professor of Biological Engineering at MIT. “This pathway allows the cells to be resistant to a wide variety of therapies, including chemotherapies.”

Additionally, the researchers found that they could kill those drug-resistant cancer cells by treating with both a tyrosine kinase inhibitor and a drug that targets the backup pathway. Clinical trials are now underway to test one such combination in lung cancer patients.

White is the senior author of the study, which appears this week in the Proceedings of the National Academy of Sciences. Cameron Flower PhD ’24, who is now a postdoc at Dana-Farber Cancer Institute and Boston Children’s Hospital, is the paper’s lead author.

Tumor survival

Tyrosine kinases are involved in many signaling pathways that allow cells to receive input from the external environment and convert it into a response such as growing or dividing. There are about 90 types of these kinases in human cells, and many of them are overactive in cancer cells.

“These kinases are very important for regulating cell growth and mitosis, and pushing the cell from a nondividing state to a dividing state depends on the activity of a lot of different tyrosine kinases,” Flower says. “We see a lot of mutations and overexpression of these kinases in cancer cells.”

These cancer-associated kinases include EGFR and BCR-ABL. Many cancer drugs targeting these kinases, including imatinib (Gleevec), have been approved to treat leukemia and other cancers. However, these drugs are not effective for all of the patients whose tumors overexpress tyrosine kinases — a phenomenon that has puzzled cancer researchers.

That lower-than-expected success rate motivated the MIT team to look into these drugs and try to figure out why some tumors do not respond to them.

For this study, the researchers examined six different cancer cell lines, which originally came from lung cancer patients. They chose two cell lines with EGFR mutations, two with mutations in a tyrosine kinase called MET, and two with mutations in a tyrosine kinase called ALK. Each pair included one line that responded well to the tyrosine kinase inhibitor targeting the overactive pathway and one line that did not.

Using a technique called phosphoproteomics, the researchers were able to analyze the signaling pathways that were active in each of the cells, before and after treatment. Phosphoproteomics is used to identify proteins that have had a phosphate group added to them by a kinase. This process, known as phosphorylation, can activate or deactivate the target protein.

The researchers’ analysis revealed that the drugs were working as intended in all of the cancer cells. Even in resistant cells, the drugs did knock out signaling by their target kinase. However, in the cells that were resistant, an alternative network was already turned on, which helped the cells survive in spite of the treatment.

“Even before the therapy begins, the cells are in a state that intrinsically is resistant to the drug,” Flower says.

This survival network consists of signaling pathways that are regulated by a group of enzymes known as SRC family kinases. Activation of this network appears to help cancer cells proliferate and possibly to migrate to new locations in the body. In addition to lung cancer, researchers from White’s lab have also found SRC family kinases activated in melanoma cells, where they also play a role in drug resistance, and in glioblastoma, a type of brain cancer.

“As inhibitors for SRC kinases are also drugs, the work suggests that combining inhibitors of driver oncogenes with SRC inhibitors could increase the number of patients who would benefit. This strategy merits testing in new clinical trials,” says Benjamin Neel, a professor of medicine at NYU Grossman School of Medicine, who was not involved in the study.

These findings might also explain why some patients who initially respond to tyrosine kinase inhibitors end up having their tumors recur later; the cells may end up activating this same survival pathway, but not until sometime after the initial treatment.

Combating resistance

The researchers also found that treating the resistant cells with both a tyrosine kinase inhibitor and a drug that inhibits SRC family kinases led to much greater cell death rates. By coincidence, a clinical trial testing the combination of a tyrosine kinase inhibitor called osimertinib and an SRC inhibitor is now underway, in patients with lung cancer. The MIT team now hopes to work with the same drug company to run a similar trial in pancreatic cancer patients.

The researchers also showed that they could use phosphoproteomics to analyze patient biopsy samples to see which cells already have the SRC pathways turned on.

“We are really excited to watch these clinical trials and to see how well patients do on these combinations. And I really think there’s a future for using tyrosine phosphoproteomics to guide this clinical decision-making,” White says.

This therapy might also be useful for patients whose tumors are originally susceptible to tyrosine kinase inhibitors but then later become resistant by turning on SRC pathways.

“Among the sensitive cells, some of them are able to upregulate this survival pathway and survive, which might be the residual disease that’s still there after treatment,” White says. “One of the interesting avenues here is, could we improve therapy for almost everybody, regardless of whether their tumors have intrinsic or adaptive resistance?”

The research was funded by the National Institutes of Health and the MIT Center for Precision Cancer Medicine.


“Near-misses” in particle accelerators can illuminate new physics, study finds

Physicists discovered new properties of the strong force by analyzing what happens when light-speed particles skim by each other.


Particle accelerators reveal the heart of nuclear matter by smashing together atoms at close to the speed of light. The high-energy collisions produce a shower of subatomic fragments that scientists can then study to reconstruct the core building blocks of matter.

An MIT-led team has now used the world’s most powerful particle accelerator to discover new properties of matter, through particles’ “near-misses.” The approach has turned the particle accelerator into a new kind of microscope — and led to the discovery of new behavior in the forces that hold matter together.

In a study appearing this week in the journal Physical Review Letters, the team reports results from the Large Hadron Collider (LHC) — a massive underground, ring-shaped accelerator in Geneva, Switzerland. Rather than focus on the accelerator’s particle collisions, the MIT team searched for instances when particles barely glanced by each other.

When particles travel at close to the speed of light, they are surrounded by an electromagnetic halo that flattens when particles pass close but don’t collide. The pancaked energy fields produce extremely high-energy photons. Occasionally, a photon from one particle can ping off another particle, like an intense, quantum-sized pinprick of light.

The MIT team was able to pick out such near-miss pinpricks, or what scientists call “photonuclear interactions,” from the LHC’s particle-collision data. They found that when some photons pinged off a particle, they kicked out a type of subatomic particle, known as a D0 meson, that the scientists could measure for the first time.

D0 mesons are subatomic particles that contain a charm quark, a rare type of quark not normally found in ordinary nuclear matter. Quarks are the fundamental building blocks of all matter, and are bound by gluons, which are massless particles that are the invisible glue, or “strong force” that holds matter together. The rare charm quarks can only be created in high-energy interactions. As such, they provide an especially clean, unambiguous probe of quarks and gluons inside a nucleus.

Through their measurements of D0 mesons, the researchers could estimate how tightly gluons are packed and, essentially, how strong the strong force is within a particle’s nucleus.

“Our result gives an indication that when nuclear matter is squeezed together, then gluons start behaving in a funny way,” says lead author Gian Michele Innocenti, an assistant professor of physics at MIT. “We need to know how these gluons behave in these extreme conditions because gluons keep the universe together. And at this point, photonuclear interactions are the best way we have to study gluon behavior.”

The study’s co-authors include members of the CMS Collaboration — a global consortium of physicists who operate and maintain the Compact Muon Solenoid (CMS) experiment, one of the largest detectors at the LHC, which was used to collect the study’s data.

Bringing a “background” into focus

With each run, the Large Hadron Collider fires off needle-thin beams of particles in opposite directions around a 27-kilometer-long underground ring. When the beams cross paths, particles can collide. If the collisions happen to take place in a region of the ring where the CMS detector is set up, the detector can record the collisions, and scientists can then analyze the aftermath to reconstruct the fragments that make up the original particles.

Since the LHC began operations in 2008, the focus has been overwhelmingly on the detection and analysis of “head-on” collisions. Physicists have known that by accelerating particle beams, they would also produce photonuclear interactions — near-miss events where a particle might collide not with another particle, but with its cloud of photons. But such light-nucleus interactions were thought to be simply noise.

“These photonuclear events were considered a background that people wanted to cancel,” Innocenti says. “But now people want to use it as a signal because a collision between a photon and a nucleus can essentially be like a super-high-accuracy microscope for nuclear matter.”

When a photon pings off a particle, the abundance, direction, and energy of the produced D0 meson relate directly to the energy and density of the gluons in the nucleus. If scientists could detect and measure this photon interaction, it would be like using an extremely small and powerful flashlight to illuminate the nuclear structures. But until now, it was assumed that photonuclear interactions would be impossible to pick out amid the various physics processes that can occur in such collisions.

“People didn’t think it was possible to remove the huge mess of all these other collisions, to zoom in on single photons hitting single nuclei producing a D0 meson,” Innocenti says. “We had to devise a system to recognize those very rare photonuclear interactions while data was being taken of particle collisions.”

Illuminating charm

For their new study, Innocenti and his colleagues first simulated what a photonuclear interaction would look like amid a shower of other particle collisions. In particular, they simulated a scenario in which a photon pings off a nucleus and produces a D0 meson. Although these events are rare, D0 mesons are among the most abundant particles that contain a charm quark. The team reasoned that if they could detect signs of a charm quark in D0 mesons that are produced in a photonuclear interaction, it could give valuable information about the gluons that hold the nucleus together.

With their simulations, the researchers then developed an algorithm to detect photonuclear interactions. They implemented the algorithm at the CMS detector to search for signals in real time during the LHC’s particle-colliding runs.

“We had to collect tens of billions of collisions in order to extract a few hundred of these rare instances where a photon hits a nucleus and produces one of these exotic D0 meson particles,” Innocenti explains.
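The rarity Innocenti describes works out to roughly one usable event per hundred million recorded collisions. A quick back-of-the-envelope check, using placeholder numbers at the order of magnitude of the quote (not the paper’s exact counts):

```python
# Placeholder counts at the order of magnitude quoted above, not the
# paper's actual dataset figures.
collisions = 30e9   # "tens of billions" of recorded collisions
events = 300        # "a few hundred" photonuclear D0 events
rate = events / collisions
print(f"roughly 1 event per {collisions / events:.0e} collisions")
```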

From this enormous dataset, the team identified a clean sample of these rare events by exploiting CMS’s advanced detector capabilities to select near-miss events and reconstruct the properties of the D0 mesons.

Through this process, the team detected instances of D0 meson production and then worked back to calculate properties of the particles’ charm quarks and the gluons that would have held them together in the original nucleus. 

“We are constraining what happens to gluons when they are squeezed in ions that are very large that are traveling very fast,” Innocenti says. “So far, our data confirms what people expect in terms of high-density nuclear matter. In reality, this is the first time we’ve shown this kind of measurement is feasible.”

The team is working to improve the measurement’s accuracy in order to provide a clearer picture of how quarks and gluons are arranged inside a nucleus.

“Gluons are a very strong force that keeps the universe together,” Innocenti says. “The description of the strong force is at the basis of everything we see in nature. Now we have a way to either fully confirm, or show deviations from, that description.”

This work was supported, in part, by the U.S. Department of Energy, including support from a DOE Early Career Research Program award, and it builds on the contributions of a large MIT team of graduate students, undergraduate researchers, scientists, and postdocs.


AI system learns to keep warehouse robot traffic running smoothly

This new approach adapts to decide which robots should get the right of way at every moment, avoiding congestion and increasing throughput.


Inside a giant autonomous warehouse, hundreds of robots dart down aisles as they collect and distribute items to fulfill a steady stream of customer orders. In this busy environment, even small traffic jams or minor collisions can snowball into massive slowdowns.

To avoid such an avalanche of inefficiencies, researchers from MIT and the tech firm Symbotic developed a new method that automatically keeps a fleet of robots moving smoothly. Their method learns which robots should go first at each moment, based on how congestion is forming, and adapts to prioritize robots that are about to get stuck. In this way, the system can reroute robots in advance to avoid bottlenecks.

The hybrid system utilizes deep reinforcement learning, a powerful artificial intelligence method for solving complex problems, to figure out which robots should be prioritized. Then, a fast and reliable planning algorithm feeds instructions to the robots, enabling them to respond rapidly in constantly changing conditions.

In simulations inspired by actual e-commerce warehouse layouts, this new approach achieved about a 25 percent gain in throughput over other methods. Importantly, the system can quickly adapt to new environments with different quantities of robots or varied warehouse layouts.

“There are a lot of decision-making problems in manufacturing and logistics where companies rely on algorithms designed by human experts. But we have shown that, with the power of deep reinforcement learning, we can achieve super-human performance. This is a very promising approach, because in these giant warehouses even a 2 or 3 percent increase in throughput can have a huge impact,” says Han Zheng, a graduate student in the Laboratory for Information and Decision Systems (LIDS) at MIT and lead author of a paper on this new approach.

Zheng is joined on the paper by Yining Ma, a LIDS postdoc; Brandon Araki and Jingkai Chen of Symbotic; and senior author Cathy Wu, the Class of 1954 Career Development Associate Professor in Civil and Environmental Engineering (CEE) and the Institute for Data, Systems, and Society (IDSS) at MIT, and a member of LIDS. The research appears today in the Journal of Artificial Intelligence Research.

Rerouting robots

Simultaneously coordinating hundreds of robots in an e-commerce warehouse is no easy task.

The problem is especially complicated because the warehouse is a dynamic environment, and robots continually receive new tasks after reaching their goals. They need to be rapidly redirected as they leave and enter the warehouse floor.

Companies often leverage algorithms written by human experts to determine where and when robots should move to maximize the number of packages they can handle.

But if there is congestion or a collision, a firm may have no choice but to shut down the entire warehouse for hours to manually sort the problem out.

“In this setting, we don’t have an exact prediction of the future. We only know what the future might hold, in terms of the packages that come in or the distribution of future orders. The planning system needs to be adaptive to these changes as the warehouse operations go on,” Zheng says.

The MIT researchers achieved this adaptability using machine learning. They began by designing a neural network model to take observations of the warehouse environment and decide how to prioritize the robots. They trained this model using deep reinforcement learning, a trial-and-error method in which the model learns to control robots in simulations that mimic actual warehouses. The model is rewarded for making decisions that increase overall throughput while avoiding conflicts.

Over time, the neural network learns to coordinate many robots efficiently.

“By interacting with simulations inspired by real warehouse layouts, our system receives feedback that we use to make its decision-making more intelligent. The trained neural network can then adapt to warehouses with different layouts,” Zheng explains.

The model is designed to capture the long-term constraints and obstacles in each robot’s path, while also considering dynamic interactions between robots as they move through the warehouse.

By predicting current and future robot interactions, the model plans to avoid congestion before it happens.

After the neural network decides which robots should receive priority, the system employs a tried-and-true planning algorithm to tell each robot how to move from one point to another. This efficient algorithm helps the robots react quickly in the changing warehouse environment.
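The division of labor described above — a learned ranking feeding a classical planner — can be sketched in a few lines. This is a simplified illustration, not the authors’ system: `priority_scores` is a hypothetical stand-in for the trained neural network, and the planner is a basic space-time A* that only avoids same-cell conflicts (it ignores edge swaps and other real-world constraints).

```python
from heapq import heappush, heappop

def priority_scores(robots):
    # Hypothetical stand-in for the trained network: here, robots farther
    # from their goals simply get higher priority.
    return {rid: abs(gx - sx) + abs(gy - sy)
            for rid, ((sx, sy), (gx, gy)) in robots.items()}

def plan(start, goal, reserved, width, height, max_t=50):
    # Space-time A*: find a path that avoids (cell, time) pairs already
    # reserved by higher-priority robots. Waiting in place is allowed.
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, t, pos, path = heappop(frontier)
        if pos == goal:
            return path
        if (pos, t) in seen or t >= max_t:
            continue
        seen.add((pos, t))
        x, y = pos
        for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1), (x, y)]:
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and (nxt, t + 1) not in reserved):
                heappush(frontier, (t + 1 + h(nxt), t + 1, nxt, path + [nxt]))
    return None

def route_fleet(robots, width=5, height=5):
    # Plan robots in learned-priority order, reserving each planned cell-time.
    scores = priority_scores(robots)
    order = sorted(robots, key=lambda rid: -scores[rid])
    reserved, paths = set(), {}
    for rid in order:
        start, goal = robots[rid]
        path = plan(start, goal, reserved, width, height)
        if path is None:
            continue  # no conflict-free path within the horizon
        paths[rid] = path
        reserved.update((cell, t) for t, cell in enumerate(path))
    return paths
```

Planning robots one at a time against a shared reservation table is classical prioritized planning; the learned component only chooses the order, which is what keeps the hybrid fast enough for constantly changing conditions.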

This combination of methods is key.

“This hybrid approach builds on my group’s work on how to achieve the best of both worlds between machine learning and classical optimization methods. Pure machine-learning methods still struggle to solve complex optimization problems, and yet it is extremely time- and labor-intensive for human experts to design effective methods. But together, using expert-designed methods the right way can tremendously simplify the machine learning task,” says Wu.

Overcoming complexity

Once the researchers trained the neural network, they tested the system in simulated warehouses that were different from those it had seen during training. Since industrial simulations were too inefficient for this complex problem, the researchers designed their own environments to mimic what happens in actual warehouses.

On average, their hybrid learning-based approach achieved 25 percent greater throughput than traditional algorithms as well as a random search method, in terms of number of packages delivered per robot. Their approach could also generate feasible robot path plans that overcame congestion caused by traditional methods.

“Especially when the density of robots in the warehouse goes up, the complexity scales exponentially, and these traditional methods quickly start to break down. In these environments, our method is much more efficient,” Zheng says.

While their system is still far away from real-world deployment, these demonstrations highlight the feasibility and benefits of using a machine learning-guided approach in warehouse automation.

In the future, the researchers want to include task assignments in the problem formulation, since determining which robot will complete each task impacts congestion. They also plan to scale up their system to larger warehouses with thousands of robots.

This research was funded by Symbotic.


Why solid-state batteries keep short-circuiting

New insights into metallic cracks that harm battery performance could advance the longstanding quest to develop energy-dense solid-state batteries.


Batteries that use solids as their charge-carrying electrolyte could potentially be a safer and far more energy-dense alternative to lithium-ion batteries. However, these solid-state batteries have been plagued by the formation of metallic cracks called dendrites that cause them to short circuit.

The problem has so far prevented such batteries from becoming a major player in energy storage. But now, research from MIT could finally help engineers find a way to get past this hurdle.

For decades, many researchers have treated dendrites as largely the result of mechanical stress — like cracks that form on the sidewalk when a tree root grows underneath. But MIT engineers have discovered the exact opposite: Faster dendrite growth was associated with lower stress levels in a commonly used battery electrolyte material. Using a new technique that allowed them to directly measure the stress around growing dendrites, the researchers found cracks formed at stress levels as low as 25 percent of what would be expected under mechanical stress alone.

The experiments, published in Nature today, instead revealed another culprit: chemical reactions caused by high electrical currents that weaken the electrolyte and make it more susceptible to dendrite growth. Researchers had previously proposed that such reactions cause dendrite growth, but the new study provides the first experimental data on the interplay between chemical and mechanical stress in dendrite formation.

“Direct measurement techniques allowed us to see how tough the material is as we cycle the cell,” says Cole Fincher, the paper’s first author and an MIT PhD student in materials science and engineering. “What we saw was that if you just test the ceramic electrolyte on the benchtop, it’s about as tough as your tooth. But during charging, it gets a lot weaker — closer to the brittleness of a lollipop.”

The findings reveal why developing stronger electrolytes alone hasn’t solved the decades-old dendrite problem. They also point to the importance of developing more chemically stable materials to finally fulfill the promise of high-density solid-state batteries.

“There’s a large community of researchers that are constantly trying to discover and design better solid electrolytes to enable the solid-state battery,” says senior author Yet-Ming Chiang, MIT’s Kyocera Professor of Materials Science and Engineering. “This study provides guidance in those efforts. We discovered a new mechanism by which these dendrites grow, allowing us to explore ways to design around it to make solid-state batteries successful.”

Joining Fincher and Chiang on the paper are MIT PhD student Colin Gilgenbach; Thermo Fisher Scientific scientists Christian Roach and Rachel Osmundsen; MIT.nano researcher Aubrey Penn; MIT Toyota Professor in Materials Processing W. Craig Carter; MIT Kyocera Professor of Materials Science and Engineering James LeBeau; University of Michigan Professor Michael Thouless; and Brown University Professor Brian W. Sheldon.

Measuring stress

Dendrites have presented a major roadblock to battery development since the 1970s. One reason lithium-ion batteries have become ubiquitous while other approaches have stalled is that their commonly used graphite anodes are less susceptible to dendrite formation. That’s a shame because solid-state batteries that use lithium metal as an anode and a solid electrolyte could theoretically store far more energy in the same-sized package with less weight. They could thus enable longer-lasting phones and laptops, or electric cars with double the range of today’s options.

“There’s no more energy-dense form of lithium than lithium metal,” Chiang says. “But the dendrite problem has limited progress with solid-state batteries.”

Lithium metal is soft like taffy. Fincher, who has been studying the dendrite problem in the labs of Chiang and Carter, says one puzzle is how such a soft material can penetrate into the hard electrolyte materials being explored for use in solid-state batteries.

“The ceramics that have been used in these applications are stiff, like a coffee mug, so it’s been hoped that solid-state batteries would stop this relatively soft dendrite from growing,” Fincher explains.

Believing that mechanical stress causes dendrites, scientists have worked to develop stronger electrolytes that can withstand more mechanical stress. Some researchers have proposed that chemical reactions play a role in dendrite formation, but how those reactions worked with mechanical stress was not known.

For their Nature study, the researchers set out to directly observe mechanical and chemical changes in a commonly used solid-state electrolyte material as dendrites grew. Solid-state batteries are typically organized like a sandwich, which makes it hard to look inside the middle electrolyte layer. For their first experiment, the researchers developed a special solid-state battery cell in which the ceramic layers can be observed from the side, allowing the researchers to watch dendrite growth occurring in the electrolyte.

The researchers also used a measurement technique called birefringence microscopy to precisely measure the stress around the dendrite, which Fincher developed as part of his PhD thesis.

“It works the same way as polarized sunglasses when you look at something like a windshield,” Fincher explains of the technique. “When light comes through, residual stresses in the glass enable light of some orientations to pass faster than others, and that can give rise to observable rainbow patterns. These patterns can be used to measure stress.”
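The rainbow fringes Fincher describes are quantified in photoelasticity by the stress-optic law, which relates the observed fringe order to the difference between the two in-plane principal stresses. A minimal sketch, using illustrative material values rather than numbers from the study:

```python
def principal_stress_difference(fringe_order, fringe_value, thickness):
    """Stress-optic law: sigma_1 - sigma_2 = N * f_sigma / h, where N is the
    observed fringe order, f_sigma the material fringe constant (N/m per
    fringe), and h the specimen thickness (m). Returns stress in pascals."""
    return fringe_order * fringe_value / thickness

# Illustrative values only (not measurements from the study): a fringe order
# of 2 in a 10-mm-thick specimen with a fringe constant of 7 kN/m.
stress_pa = principal_stress_difference(2.0, 7000.0, 0.01)
print(f"{stress_pa / 1e6:.1f} MPa")  # 1.4 MPa
```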

The technique gave the researchers a way to both visualize and quantify stress around actively growing dendrites for the first time, leading to the unexpected findings.

“Normally you would expect that the faster a dendrite grows, the more stress it creates,” Chiang says. “Instead, we observed exactly the opposite. The faster it grew, the lower the stress around it, meaning the solid electrolyte is breaking under a lower stress, and therefore it’s been embrittled.”

In fact, the dendrites grew at stress levels far weaker than expected. Fincher describes the weaker electrolyte as electrochemically corroded.

“Imagine you test a piece of glass one day, and the next day it’s only a quarter as strong,” Chiang says. “It was very surprising.”

Led by LeBeau, the researchers then cooled the electrolyte to extremely low temperatures and applied a powerful imaging technique called cryogenic scanning transmission electron microscopy that allowed them to study the area around the dendrite on nearly atomic scales. The imaging revealed that the passage of ionic current through the material had caused chemical reactions that made it more brittle.

“The electric current drives the flow of lithium ions through the solid electrolyte,” Chiang explains. “That causes a highly concentrated flow of lithium ions at the dendrite tip. We believe that leads to a chemical reduction of the material compound, which leads to its decomposition into new phases. You start with a crystalline phase of the electrolyte, then there’s a volume contraction after the deposition that is consistent with the embrittlement we see.”

Toward better batteries

The experiment was done on one of the most stable electrolytes used in solid-state batteries, making the researchers confident the findings will carry over to other electrolyte materials.

“This tells us we have to look for electrolyte materials that are even more stable, especially when in contact with lithium metal, which chemically speaking is very reducing,” Chiang says. “This will help direct the search for new materials.”

For instance, Chiang says now that they understand more about the chemical changes causing embrittlement, researchers could explore materials that actually get tougher as cracks grow.

The researchers say it will take more work to figure out what electrochemical reactions are taking place to make the electrolyte so much weaker. But they say their approach for directly observing stresses could also help improve materials for use in devices like fuel cells and electrolyzers.

The work was supported by the Center for Mechano-Chemical Understanding of Solid Ionic Conductors, an Energy Frontier Research Center funded by the Department of Energy, the National Science Foundation, and Fincher’s Department of Defense Science and Engineering Graduate Fellowship, and was carried out using MIT.nano facilities.


QS World University Rankings rates MIT No. 1 in 12 subjects for 2026

The Institute also ranks second in seven subject areas.


QS World University Rankings has placed MIT in the No. 1 spot in 12 subject areas for 2026, the organization announced today.

The Institute received a No. 1 ranking in the following QS subject areas: Chemical Engineering; Chemistry; Civil and Structural Engineering; Computer Science and Information Systems; Data Science and Artificial Intelligence; Electrical and Electronic Engineering; Engineering and Technology; Linguistics; Materials Science; Mechanical, Aeronautical, and Manufacturing Engineering; Mathematics; and Physics and Astronomy.

MIT also placed second in seven subject areas: Architecture/Built Environment; History of Art; Biological Sciences; Economics and Econometrics; Marketing; Natural Sciences; and Statistics and Operational Research.

For 2026, universities were evaluated in 55 specific subjects and five broader subject areas.

Quacquarelli Symonds Limited subject rankings, published annually, are designed to help prospective students find the leading schools in their field of interest. Rankings are based on research quality and accomplishments, academic reputation, and graduate employment.

MIT has been ranked as the No. 1 university in the world by QS World University Rankings for 14 straight years.


Wristband enables wearers to control a robotic hand with their own movements

By moving their hands and fingers, users can direct a robot to play piano or shoot a basketball, or they can manipulate objects in a virtual environment.


The next time you’re scrolling your phone, take a moment to appreciate the feat: The seemingly mundane act is possible thanks to the coordination of 34 muscles, 27 joints, and over 100 tendons and ligaments in your hand. Indeed, our hands are the most nimble parts of our bodies. Mimicking their many nuanced gestures has been a longstanding challenge in robotics and virtual reality.

Now, MIT engineers have designed an ultrasound wristband that precisely tracks a wearer’s hand movements in real time. The wristband produces ultrasound images of the wrist’s muscles, tendons, and ligaments as the hand moves, and is paired with an artificial intelligence algorithm that continuously translates the images into the corresponding positions of the five fingers and palm.

The researchers can train the wristband to learn a wearer’s hand motions, which the device can communicate in real time to a robot or a virtual environment.

In demonstrations, the team has shown that a person wearing the wristband can wirelessly control a robotic hand. As the person gestures or points, the robot does the same. In a sort of wireless marionette interaction, the wearer can manipulate the robot to play a simple tune on the piano and shoot a small basketball into a desktop hoop. With the same wristband, a wearer can also manipulate objects on a computer screen, for instance pinching their fingers together to enlarge and minimize a virtual object.

The team is using the wristband to gather hand motion data from many more users with different hand sizes, finger shapes, and gestures. They envision building a large dataset of hand motions that can be plumbed, for instance, to train humanoid robots in dexterity tasks, such as performing certain surgical procedures. The ultrasound band could also be used to grasp, manipulate, and interact with objects in video games, design applications, or other virtual settings.

“We think this work has immediate impact in potentially replacing hand tracking techniques with wearable ultrasound bands in virtual and augmented reality,” says Xuanhe Zhao, the Uncas and Helen Whitaker Professor of Mechanical Engineering at MIT. “It could also provide huge amounts of training data for dexterous humanoid robots.”

Zhao, Gengxi Lu, and their colleagues present the wristband’s new design in a paper appearing today in Nature Electronics. Their MIT co-authors are former postdocs Xiaoyu Chen, Shucong Li, and Bolei Deng; graduate students SeongHyeon Kim and Dian Li; postdocs Shu Wang and Runze Li; and Anantha Chandrakasan, MIT provost and the Vannevar Bush Professor of Electrical Engineering and Computer Science. Other co-authors are graduate students Yushun Zheng and Junhang Zhang, Baoqiang Liu, Chen Gong, and Professor Qifa Zhou from the University of Southern California.

Seeing strings

There are currently a number of approaches to capturing and mimicking human hand dexterity in robots. Some approaches use cameras to record a person’s hand movements as they manipulate objects or perform tasks. Others involve having a person wear a glove with sensors, which records the person’s hand movements and transmits the data to a receiving robot. But erecting a complex camera system for different applications is impractical and prone to visual obstacles. And sensor-laden gloves could limit a person’s natural hand motions and sensations.

A third approach uses the electrical signals from muscles in the wrist or forearm that scientists then correlate with specific hand movements. Researchers have made significant advances with this approach; however, these signals are easily affected by noise in the environment. They are also not sensitive enough to distinguish subtle changes in movements. For instance, they may discern whether a thumb and index finger are pinched together or pulled apart, but not much of the in-between path.

Zhao’s team wondered whether ultrasound imaging might capture more dexterous and continuous hand movements. His group has been developing various forms of ultrasound stickers — miniaturized versions of the transducers used in doctor’s offices that are paired with hydrogel material that can safely stick to skin.

In their new study, the team incorporated the ultrasound sticker design into a wearable wristband to continuously image the muscles and tendons in the wrist.

“The tendons and muscles in your wrist are like strings pulling on puppets, which are your fingers,” Lu says. “So the idea is: Each time you take a picture of the state of the strings, you’ll know the state of the hand.”

Mapping manipulation

The team designed a wristband with an ultrasound sticker that is the size of a smartwatch, and added onboard electronics that are about as small as a cellphone. They attached the wristband to a volunteer’s wrist and confirmed that the device produced clear and continuous images of the wrist as the volunteer moved their fingers in various gestures.

The challenge then was to relate the black and white ultrasound images of the wrist to specific positions of the hand. As it turns out, the fingers and thumb are capable of 22 degrees of freedom, or different ways of extending or angling. The researchers found that they could identify specific regions in their ultrasound images of the wrist that correlate to each of these 22 degrees of freedom. For instance, changes in one region relate to thumb extension, while changes in another region correlate with movements of the index finger.

To establish these connections, a volunteer wearing the wristband would move their hand in various positions while the researchers recorded the gestures with multiple cameras surrounding the volunteer. By matching changes in certain regions of the ultrasound images with hand positions recorded by the cameras, the team could label wrist image regions with the corresponding degree of freedom in the hand. But to do this translation continuously, and in real time, would be an impossible task for humans.

So, the team turned to artificial intelligence. They used an AI algorithm that can be trained to recognize image patterns and correlate them with specific labels and, in this case, the hand’s various degrees of freedom. The researchers trained the algorithm with ultrasound images that they meticulously labeled, annotating the image regions associated with a specific degree of freedom. They tested the algorithm on a new set of ultrasound images and found it correctly predicted the corresponding hand gestures.
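The image-to-pose mapping can be illustrated at toy scale. The study’s model is a deep network trained on labeled ultrasound frames; the sketch below substitutes synthetic features and a closed-form linear ridge regression purely to show the shape of the problem — many image-derived inputs in, 22 joint values out. All names and numbers here are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 200 ultrasound frames, each reduced to 64
# features, paired with camera-derived labels for the hand's 22 degrees of
# freedom. (The actual study trains a deep network on full images; a linear
# ridge regression keeps the image-to-pose mapping visible in a few lines.)
n_frames, n_features, n_dof = 200, 64, 22
true_map = rng.normal(size=(n_features, n_dof))
X = rng.normal(size=(n_frames, n_features))                   # wrist-image features
Y = X @ true_map + 0.01 * rng.normal(size=(n_frames, n_dof))  # joint values

# Closed-form ridge regression: W = (X^T X + lam*I)^-1 X^T Y
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

# Predicting all 22 degrees of freedom for a new, unseen frame
x_new = rng.normal(size=(1, n_features))
dof_pred = x_new @ W
print(dof_pred.shape)  # (1, 22)
```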

Once the researchers successfully paired the AI algorithm with the wristband, they tested the device on more volunteers. For the new study, eight volunteers with different hand and wrist sizes wore the wristband while they formed various hand gestures and grasps, including making the signs for all 26 letters in American Sign Language. They also held objects such as a tennis ball, a plastic bottle, a pair of scissors, and a pencil. In each case, the wristband precisely tracked and predicted the position of the hand.

To demonstrate potential applications, the team developed a simple computer program that they wirelessly paired with the wristband. As a wearer went through the motions of pinching and grasping, the gestures corresponded to zooming in and out on an object on the computer screen, and virtually moving and manipulating it in a smooth and continuous fashion.

The researchers also tested the wristband as a wireless controller of a simple commercial robotic hand. While wearing the wristband, a volunteer went through the motions of playing a keyboard. The robot in turn mimicked the motions in real time to play a simple tune on a piano. The same robot was also able to mimic a person’s finger taps to play a desktop basketball game.

Zhao is planning to further miniaturize the wristband’s hardware, as well as train the AI software on many more gestures and movements from volunteers with wider ranging hand sizes and shapes. Ultimately, the team is building toward a wearable hand tracker that can be worn by anyone, to wirelessly manipulate humanoid robots or virtual objects with high dexterity.

“We believe this is the most advanced way to track dexterous hand motion, through wearable imaging of the wrist,” Zhao says. “We think these wearable ultrasound bands can provide intuitive and versatile controls for virtual reality and robotic hands.”

This research was supported, in part, by MIT, the U.S. National Institutes of Health, the U.S. National Science Foundation, the U.S. Department of Defense, and Singapore National Research Foundation through the Singapore-MIT Alliance for Research and Technology.


Enduring passions for medicine, journalism, and triathlons

As an aspiring physician-scientist and editor-in-chief of The Tech, MIT senior Alex Tang has found inspiration in the lives of patients and others in his community.


Alex Tang’s dream of becoming a physician started in grade school when he read Lisa Sanders’ “Diagnosis” column in The New York Times Magazine. Although he often encountered unfamiliar medical terms, Tang was captivated by the magic of medicine, as Sanders described how physicians turned puzzling sets of symptoms into concrete diagnoses and treatment plans for patients.

A decade later, Tang is one step closer to achieving his dream. The MIT senior has challenged himself academically, dual-majoring in chemistry and biology and minoring in biomedical engineering. “All of the courses have encouraged me to think about problems through different lenses,” he says.

Tang has also challenged himself as the editor-in-chief of MIT’s student newspaper, The Tech, and as a competitive triathlete. In the fall, he will begin medical school, where he hopes to develop clinical skills and continue honing his scientific abilities. Ultimately, he aspires to pursue a career as a physician-scientist, focusing on how cancers respond to and resist treatment. He wants to help convert those insights into novel therapies that can be tailored to individual cancer patients.

“I want to advance precision oncology, ensuring that each patient receives the most effective, personalized treatment possible,” he says.

Thriving in the lab

Originally from Massachusetts, Tang was eager to make the most of his MIT experience, especially because of its extensive research opportunities. “Both my parents worked in the Cambridge biotech space, and being able to contribute to innovative science here has been a priority,” he says.

Early on, Tang gravitated toward oncology after joining the Nir Hacohen Lab at the Broad Institute, an interest cemented after taking 7.45 (Cancer Biology), which was taught by professors Tyler Jacks and Michael Hemann. Fascinated by how new cancer therapies were changing patients’ lives, he joined a project with implications for patients with difficult prognoses: For the last three and a half years, Tang has been studying the effects of combined immunotherapy and targeted molecular therapy on tumors in patients with metastatic colorectal cancer.

“I hope my work can provide clarity for patients and physicians, and empower them to be confident in their options for care,” Tang says.

Last year, Tang was awarded a prestigious Goldwater Scholarship, which supports undergraduates who go on to become leading scientists, engineers, and mathematicians in their respective fields.

In addition to gaining technical skills, Tang has found working in the Hacohen Lab to be enriching in other important ways.

“What’s been great about research is learning from experts in the field who become your role models,” he says. “They are at the frontiers of investigating the most challenging questions in the field, and iterating through the scientific process with them is such a joy.”

Looking forward to medical school, he hopes to complement his basic science research with work that is more clinically involved.

“I want to bridge the gap between fundamental discoveries and tangible improvements in patient care,” Tang says. He has already set out on this mission, recently leading the development of a prognostic assay in lung cancer.

Breaking news

After stopping by the booth for MIT’s student newspaper, The Tech, during Campus Preview Weekend, Tang knew he wanted to join and contribute to a publication that has long chronicled MIT’s history and culture. Starting as a news writer and later serving as editor-in-chief, he learned how to write under pressure, reported on major campus events, and balanced leadership with collaboration.

“It’s been such an honor and pleasure to document people across the diverse MIT community who are all contributing to the character of the Institute in different ways,” he says.

It’s an activity he’ll drop everything for.

“When we have things come up and we have to do a breaking news story or we have some editorial thing that needs to be managed, I’ll just stop working to sort out whatever’s happening,” he says. “I think that’s what passion really is about.”

His journey with The Tech has not always been easy. In the summer between his first and second year, he found himself solely responsible for producing the paper’s news content amidst a staff shortage while the paper was facing financial difficulties.

“Coming into sophomore fall, I focused on recruiting more staff and seeking out ways to get more funding,” Tang says. “The paper wouldn’t be here without the people, both students and faculty advisors alike, who bought into The Tech’s mission.”

Though he hopes to pursue a career in medicine, Tang has found journalism to be integral in shaping how he will connect and communicate with patients and colleagues.

“You are responsible for taking someone’s story, breaking it down, and retelling it in your own words in a way that you feel would resonate with the audience and serve the community,” he says.

An outlet through triathlon

Despite his busy schedule, Tang prioritizes staying active and maintaining fitness. A former competitive swimmer in high school and now a triathlete, he still finds himself drawn back to the water when everything around him feels fast-paced.

“Swimming, biking, and running are good ways to de-stress,” Tang says. “It’s therapeutic in the sense that you can just let go. The race is just that culmination of letting it go at a more elevated level.”

He credits MIT’s infrastructure for helping him stay committed to training. “My dorm is steps away from the pool and the track,” he says. “The convenience is superb.”

Tang has found success in competitions, most recently placing third in his age group at the 2025 Boston Triathlon. In fact, it is the feeling of accomplishment that pushes him every day.

“There are many days when you want to take it easy, but you have to remember the joy waiting for you at the end of the race when you’ve put in the work,” he says. “It motivates me to be conscious and aware of what I’m doing in practice.”

During the summer, Tang and his younger brother go out for long runs in the Boston suburbs. “It is great to have my brother push me every day,” Tang says. “There has been no one more supportive of me than my family.”


How to create “humble” AI

An MIT-led team is designing artificial intelligence systems for medical diagnosis that are more collaborative and forthcoming about uncertainty.


Artificial intelligence holds promise for helping doctors diagnose patients and personalize treatment options. However, an international group of scientists led by MIT cautions that AI systems, as currently designed, carry the risk of steering doctors in the wrong direction because they may overconfidently make incorrect decisions.

One way to prevent these mistakes is to program AI systems to be more “humble,” according to the researchers. Such systems would reveal when they are not confident in their diagnoses or recommendations and would encourage users to gather additional information when the diagnosis is uncertain.

“We’re now using AI as an oracle, but we can use AI as a coach. We could use AI as a true co-pilot. That would not only increase our ability to retrieve information but increase our agency to be able to connect the dots,” says Leo Anthony Celi, a senior research scientist at MIT’s Institute for Medical Engineering and Science, a physician at Beth Israel Deaconess Medical Center, and an associate professor at Harvard Medical School.

Celi and his colleagues have created a framework that they say can guide AI developers in designing systems that display curiosity and humility. This new approach could allow doctors and AI systems to work as partners, the researchers say, and help prevent AI from exerting too much influence over doctors’ decisions.

Celi is the senior author of the study, which appears today in BMJ Health and Care Informatics. The paper’s lead author is Sebastián Andrés Cajas Ordoñez, a researcher at MIT Critical Data, a global consortium led by the Laboratory for Computational Physiology within the MIT Institute for Medical Engineering and Science.

Instilling human values

Overconfident AI systems can lead to errors in medical settings, according to the MIT team. Previous studies have found that ICU physicians defer to AI systems that they perceive as reliable even when their own intuition goes against the AI suggestion. Physicians and patients alike are more likely to accept incorrect AI recommendations when they are perceived as authoritative.

In place of systems that offer overconfident but potentially incorrect advice, health care facilities should have access to AI systems that work more collaboratively with clinicians, the researchers say.

“We are trying to include humans in these human-AI systems, so that we are facilitating humans to collectively reflect and reimagine, instead of having isolated AI agents that do everything. We want humans to become more creative through the usage of AI,” Cajas Ordoñez says.

To create such a system, the consortium designed a framework that includes several computational modules that can be incorporated into existing AI systems. The first of these modules requires an AI model to evaluate its own certainty when making diagnostic predictions. Developed by consortium members Janan Arslan and Kurt Benke of the University of Melbourne, the Epistemic Virtue Score acts as a self-awareness check, ensuring the system’s confidence is appropriately tempered by the inherent uncertainty and complexity of each clinical scenario.

With that self-awareness in place, the model can tailor its response to the situation. If the system detects that its confidence exceeds what the available evidence supports, it can pause and flag the mismatch, requesting specific tests or history that would resolve the uncertainty, or recommending specialist consultation. The goal is an AI that not only provides answers but also signals when those answers should be treated with caution.
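The Epistemic Virtue Score itself isn't published in this article, but the pause-and-flag behavior it drives can be sketched as a simple triage rule. Everything below (the function name, the threshold logic, and the messages) is a hypothetical illustration, not the consortium's implementation:

```python
def triage(prediction: str, confidence: float, evidence_strength: float):
    """Release a diagnosis only when the model's confidence is backed by
    the available evidence; otherwise flag the case instead of guessing."""
    if confidence > evidence_strength:
        # Confidence outruns the evidence: pause and ask for more input.
        return ("flag", "confidence exceeds evidence support; "
                        "request more tests or a specialist consult")
    return ("answer", prediction)
```

A call like `triage("pneumonia", 0.95, 0.4)` would flag the case rather than return the overconfident answer, which is the "humble" behavior the framework aims for.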

“It’s like having a co-pilot that would tell you that you need to seek a fresh pair of eyes to be able to understand this complex patient better,” Celi says.

Celi and his colleagues have previously developed large-scale databases that can be used to train AI systems, including the Medical Information Mart for Intensive Care (MIMIC) database from Beth Israel Deaconess Medical Center. His team is now working on implementing the new framework into AI systems based on MIMIC and introducing it to clinicians in the Beth Israel Lahey Health system.

This approach could also be implemented in AI systems that are used to analyze X-ray images or to determine the best treatment options for patients in the emergency room, among others, the researchers say.

Toward more inclusive AI

This study is part of a larger effort by Celi and his colleagues to create AI systems that are designed by and for the people who are ultimately going to be most impacted by these tools. Many AI models, including those built on the MIMIC database, are trained on publicly available data from the United States, which can introduce biases toward a certain way of thinking about medical issues and exclude others.

Bringing in more viewpoints is critical to overcoming these potential biases, says Celi, emphasizing that each member of the global consortium brings a distinct perspective to a broader, collective understanding.

Another problem with existing AI systems used for diagnostics is that they are usually trained on electronic health records, which weren’t originally intended for that purpose. This means that the data lack much of the context that would be useful in making diagnoses and treatment recommendations. Additionally, many patients never get included in those datasets because of lack of access, such as people who live in rural areas.

At data workshops hosted by MIT Critical Data, groups of data scientists, health care professionals, social scientists, patients, and others work together on designing new AI systems. Before beginning, everyone is prompted to think about whether the data they’re using captures all the drivers of whatever they aim to predict, ensuring they don’t inadvertently encode existing structural inequities into their models.

“We make them question the dataset. Are they confident about their training data and validation data? Do they think that there are patients that were excluded, unintentionally or intentionally, and how will that affect the model itself?” he says. “Of course, we cannot stop or even delay the development of AI, not just in health care, but in every sector. But, we must be more deliberate and thoughtful in how we do this.”

The research was funded by the Boston-Korea Innovative Research Project through the Korea Health Industry Development Institute.


A complicated future for a methane-cleansing molecule

A new model shows how levels of the “atmosphere’s detergent” may rise and fall in response to climate change.


Methane is a powerful greenhouse gas that is second only to carbon dioxide in driving up global temperatures. But it doesn’t linger in the atmosphere for long thanks to molecules called hydroxyl radicals, which are known as the “atmosphere’s detergent” for their ability to break down methane. As the planet warms, however, it’s unclear how the air-cleaning agents will respond.

MIT scientists are now shedding some light on this. The team has developed a new model to study different processes that control how levels of hydroxyl radical will shift with warming temperatures.

They find that the picture is complicated. As temperatures increase, so too will water vapor in the atmosphere, which will in turn boost the molecule’s concentrations. But rising temperatures will also increase “biogenic volatile organic compound emissions” — gases that are naturally released by some plants and trees. These natural emissions can reduce hydroxyl radical and dampen water vapor’s boosting effect.

Specifically, the team finds that if the planet’s average temperatures rise by 2 degrees Celsius, the accompanying rise in water vapor will increase hydroxyl radical levels by about 9 percent. But the corresponding increase in biogenic emissions would in turn bring down hydroxyl radical levels by 6 percent. The final accounting could mean a small boost, of about 3 percent, in the atmosphere’s ability to break down methane and other chemical compounds as the planet warms.
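To first order, the two small competing effects combine additively, which is where the net figure comes from:

```python
water_vapor_effect = +0.09   # ~9% more OH from added water vapor at +2 C
biogenic_effect = -0.06      # ~6% less OH from added biogenic VOC emissions

# For fractional changes this small, the effects roughly add.
net_change = water_vapor_effect + biogenic_effect   # ~ +3% net boost in OH
```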

“Hydroxyl radicals are important in determining the lifetime of methane and other reactive greenhouse gases, as well as gases that affect public health, including ozone and certain other air pollutants,” says study author Qindan Zhu, who led the work as a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).

“There’s a whole range of environmental reasons why we want to understand what’s going on with this molecule,” adds Arlene Fiore, the Peter H. Stone and Paola Malanotte Stone Professor in EAPS. “We want to make sure it’s around to chemically remove all these gases and pollutants.”

Fiore and Zhu’s new study appears today in the Journal of Advances in Modeling Earth Systems (JAMES). The study’s MIT co-authors include Jian Guan and Paolo Giani, along with Robert Pincus, Nicole Neumann, George Milly, and Clare Singer of Lamont-Doherty Earth Observatory and the Columbia Climate School, and Brian Medeiros at the National Center for Atmospheric Research.

A natural neutralizer

The hydroxyl radical, known chemically as OH, is made up of one oxygen atom and one hydrogen atom, along with an unpaired electron. This configuration makes the molecule extremely reactive. Like a chemical vacuum cleaner, OH easily pulls an electron or hydrogen atom away from other molecules, breaking them down into weaker, more water-soluble forms. In this way, OH reduces a vast range of chemicals, including some air pollutants, pathogens, and ozone. And changes in OH are a powerful lever on methane.

“For methane, the reaction with OH is considered the most important loss pathway,” Zhu says. “About 90 percent of the methane that’s removed from the atmosphere is due to the reaction with OH.”

Indeed, it’s thanks to reactions with hydroxyl radical that methane can only stick around in the atmosphere for about a decade — far shorter than carbon dioxide, which can linger for 1,000 years or longer. But even as OH breaks down methane already in the atmosphere, more methane continues to accumulate. Rising methane concentrations, in addition to human-derived emissions of carbon dioxide, are driving global warming, and it’s unclear how OH’s methane-clearing power will keep up.

“The questions we’re exploring here are: What are the main processes that control OH concentrations? And how will OH respond to climate change?” Fiore says.

An aquaplanet’s air

For their study, the researchers developed a new model to simulate levels of OH in the atmosphere under a current global climate scenario, compared to a future warmer climate. Their model, dubbed “AquaChem,” is an expansion of a simplified model that is part of a suite of tools developed by the Community Earth System Model (CESM) project. The model that the team chose to build off is one that represents the Earth as a simplified “aquaplanet,” with an entirely ocean-covered surface.

Aquaplanet models allow scientists to study detailed interactions in the atmosphere in response to changes in surface temperatures, without having to also spend computing time and energy on simulating complex dynamics between the land, water, and polar ice caps.

To the aquaplanet model, Zhu added an atmospheric chemistry component that simulates detailed chemical reactions in the atmosphere consistent with the applied surface temperatures. The chemical reactions that she modeled represent those that are known to affect OH concentrations.

OH is primarily produced when ozone interacts with sunlight in the presence of water vapor. Scientists have also found that OH levels can vary depending on certain anthropogenic and natural emissions, all of which Zhu incorporated separately and together into the AquaChem model in order to isolate the impact of each process on OH.

The emissions in particular include carbon monoxide, methane, nitrogen oxides, and volatile organic compounds (VOCs), some of which are emitted through human practices, and others that are given off by natural processes. One type of naturally derived VOC is “biogenic” emissions — gases, such as isoprene, that some plants and trees emit through tiny pores called stomata during transpiration.

Into the AquaChem model, Zhu plugged in data that were available for each type of emissions from the year 2000 — a year that is generally considered to represent the current climate in a simplified form. She set the aquaplanet’s sea surface temperatures to the zonal annual mean of that year, and found that the model accurately reproduced the major sensitivities of OH chemistry to the underlying chemical processing as simulated in a more complex chemistry-climate model.

Then, Zhu ran the model under a second, globally warming scenario. She set the planet’s sea surface temperatures to warm by 2 degrees Celsius (a warming that is likely to occur unless global anthropogenic carbon emissions are mitigated). The team looked at how this warming would affect the various types of emissions and chemical processes, and how these changes would ultimately affect levels of OH in the atmosphere.

In the end, they found the two biggest drivers of OH levels were rising water vapor and biogenic emissions. They found that global warming would increase the amount of water vapor in the atmosphere, which in turn would boost production of OH by 9 percent. However, this same degree of warming would also increase biogenic emissions such as isoprene, which reacts with and breaks down OH, bringing down its levels by 6 percent.

The team recognizes that there are many other factors that affect the response of isoprene emissions to surface warming. Rising CO2, not considered in this study, may dampen this temperature-driven response. Of all the factors that can shift OH levels under global warming, the researchers caution that biogenic emissions are the most uncertain, even though they appear to have a large influence. Going forward, the scientists plan to update AquaChem to continue studying how biogenic emissions, as well as other processes and climate scenarios, could sway OH concentrations.

“We know that changes in atmospheric OH, even of a few percent, can actually matter for interpreting how methane might accumulate in the atmosphere,” Zhu says. “Understanding future trends of OH will allow us to determine future trends of methane.”

This work was supported, in part, by Spark Climate Solutions and the National Oceanic and Atmospheric Administration. 


On algorithms, life, and learning

Operations research expert Dimitris Bertsimas delivered the annual Killian Lecture, providing a look at the past and future of his work.


From enhancing international business logistics to freeing up more hospital beds to helping farmers, MIT Professor Dimitris Bertsimas SM ’87, PhD ’88 summarized how his work in operations research has helped drive real-world improvements, while delivering the 54th annual James R. Killian Faculty Achievement Award Lecture at MIT on Thursday, March 19.

Bertsimas also described how artificial intelligence is now being used in some of his scholarly projects and as a tool in efforts at MIT Open Learning, which he currently directs — another facet of a highly productive and lauded career over four decades at the Institute. The Killian Award is the highest prize MIT gives its faculty.

“I have tried to improve the human condition,” Bertsimas said, summarizing the breadth of his work and the many applications to everyday living that he has found for it.

At MIT, Bertsimas is the vice provost for open learning, associate dean for online education and artificial intelligence, Boeing Leaders for Global Operations Professor of Management, and professor of operations research in the MIT Sloan School of Management. He also served as the inaugural faculty director of the master of business analytics program at MIT Sloan, and has held the position of associate dean of business analytics.

Bertsimas’ remarks encompassed both his past insights and his ongoing studies, as well as his current efforts to add AI to his research. Describing the concept of “robust optimization,” a highly influential approach that Bertsimas helped develop in the early 2000s, he explained how it has enabled, for instance, more reliable shipping through the Panama Canal. Other approaches to optimization aimed at getting more vessels through the canal every day — up to 48 — but would encounter significant problems at times. Bertsimas’ approach identified that 45 vessels a day was better — a slightly lower number, but one that “was always feasible,” he noted.
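The Panama Canal example captures the core idea of robust optimization: plan against the worst realization of uncertain data rather than the nominal best case. A toy sketch (the scenario list is made up around the article's 45-versus-48 figures):

```python
# Daily throughput capacity is uncertain; suppose history gives these scenarios.
capacity_scenarios = [48, 47, 45, 48, 46, 45, 47]

# Planning for the best case maximizes throughput but fails on low-capacity days.
nominal_plan = max(capacity_scenarios)   # 48 vessels, sometimes infeasible

# The robust plan is the largest schedule feasible in every scenario.
robust_plan = min(capacity_scenarios)    # 45 vessels, "always feasible"
```

Real robust optimization works over structured uncertainty sets inside a full optimization model, but the trade-off is the same: give up a little nominal performance for guaranteed feasibility.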

Over time, Bertsimas’ work has helped structure all kinds of solutions in business logistics; it has even been used for the allocation of school buses in Boston.

More recently, as Bertsimas explained in the lecture, he and his collaborators have been working with Hartford HealthCare in Connecticut on a wide range of issues, and are increasingly incorporating AI into the development of tools for diagnostics, among other things. On the optimization front, their research has suggested ways to reduce the average stay of a hospital patient, from 5.38 days to 4.93 days. In the main Hartford hospital they have studied, given the number of existing beds, that reduction has enabled more than 5,000 additional patient stays per year.
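The arithmetic behind the extra capacity is straightforward: with a fixed bed pool, shortening the average stay raises annual patient turnover. The bed count below is a hypothetical figure chosen only to illustrate the scale, not a number from the lecture:

```python
beds = 800  # hypothetical staffed-bed count for illustration

def annual_stays(avg_los_days: float) -> float:
    """Annual patient stays the bed pool can support at full occupancy."""
    return 365 * beds / avg_los_days

# Shortening the average stay from 5.38 to 4.93 days frees up capacity.
extra_stays = annual_stays(4.93) - annual_stays(5.38)
# At this scale, roughly 5,000 additional patient stays per year.
```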

“It’s a very different ballgame,” Bertsimas said.

Bertsimas delivered his lecture, titled “Algorithms for Life: AI and Operations Research Transforming Healthcare, Education, and Agriculture,” to an audience of over 300 MIT community members in Huntington Hall (Room 10-250) on campus.

The award was established in 1971 to honor James Killian, whose distinguished career included serving as MIT’s 10th president, from 1948 to 1959, and subsequently as chair of the MIT Corporation, from 1959 to 1971.

“Professor Bertsimas’ scholarly contributions are both extensive and groundbreaking,” said Roger Levy, chair of the MIT faculty and a professor in the Department of Brain and Cognitive Sciences, while making introductory remarks. “He’s one of the rare individuals who has made significant contributions to both intellectual threads in the field of operations research: one, optimization — combinatorial, linear, and nonlinear — and number two, stochastic processes.”

Indeed, Bertsimas’ work has both produced better tools for studying and conducting operations and found a wide range of applications. As Bertsimas noted in his lecture, the deaths of both of his parents in 2009 helped propel him to look extensively at ways operations research could help health care.

Bertsimas received his BS in electrical engineering and computer science from the National Technical University of Athens in Greece. Moving to MIT for his graduate work, he then earned his MS in operations research and his PhD in applied mathematics and operations research. Bertsimas joined the MIT faculty after receiving his doctorate, and has remained at the Institute ever since.

Bertsimas is also known as an energetic teacher who has been the principal advisor to a remarkable number of PhD students — 106 and counting, at this point.

“It is far and away my favorite activity, to supervise my doctoral students,” Bertsimas said. “It is a privilege, in my opinion, to work with exceptional young people like the ones we have at MIT, in ability and character and aspiration. They actually make me a better scientist, and a better person.”

“MIT is part of my identity,” Bertsimas quipped while noting that he is the only faculty member on campus who has those three letters, in order, in his first name.

In the latter part of the lecture, Bertsimas highlighted work he has been doing as vice provost of open learning at MIT. He has personally developed a large online course based on his own material, “The Analytics Edge.” In his current role, Bertsimas said, he now aspires for MIT to reach a billion learners with online courses, part of his effort to “democratize access to education.”

Bertsimas also demonstrated for the audience some AI tools he and his colleagues are working to bring to online education, including ways of condensing material, and the translation of online material into other languages.

It is just one more chapter in a long and broad-ranging career dedicated to grasping phenomena and developing tools to help us navigate them.

Or as Bertsimas noted while summarizing his scholarship at one point in the lecture, “I try to increase the human understanding of how the world works.”


Bridging medical realities in the study of technology and health

Anthropologist Amy Moran-Thomas studies overlooked insights from people health care is meant to reach.


A few weeks ago, Amy Moran-Thomas and 20 students in her class 21A.311 (The Social Lives of Medical Objects) were gathered around a glucose meter, a jar of test strips, and various spare medical parts in the MIT Museum seminar room, talking about how to make them work better.

The class had just heard a presentation from the president of the Belize Diabetes Association in Dangriga, Norma Flores, a nurse whose hospital had recently received a huge shipment of insulin that, although durable in theory, seemed to have spoiled in a heat wave. Flores and the students discussed whether scientists could develop temperature-stable insulin and design repairable glucose meters and other technologies for hospitals worldwide.

“Whenever people keep saying they are concerned about an issue, but the medical literature doesn’t describe it yet, there is a key question about what’s happening,” says Moran-Thomas. “Ethnography can help us learn about it.”

For Moran-Thomas, an MIT anthropologist, that class session was a way of connecting people and ideas that are too often overlooked. Flores was a central figure in Moran-Thomas’ 2019 book, “Traveling with Sugar: Chronicles of a Global Epidemic,” about diabetes in Belize and the failures of medical technology designed to treat it. (At the end of class, Flores surprised Moran-Thomas with a framed commendation from the Belize Diabetes Association for their nearly 20 years of work together.)

That approach informs all of Moran-Thomas’ work. Currently she is co-leading a group working on a project called the “Sugar Atlas,” mapping the social and economic dimensions of diabetes in the Caribbean, in tandem with scholars Nicole Charles of the University of Toronto and Tonya Haynes of the University of the West Indies. Moran-Thomas has also spent more than a decade following the case of notorious medical experiments that took place in Guatemala in the 1940s, the subject of a recent paper she published with Susan Reverby of Wellesley College.

Closer to home, Moran-Thomas is working on a book about how energy extraction affects chronic conditions and mental health in her native Pennsylvania, at a time of increasing hospital closures. As part of this research, she has been working with MIT seismologist William Frank to develop low-cost sensors that people can use to measure the impact of industrial activity on their home neighborhoods. The research team was recently awarded a grant by the MIT Human Insight Collaborative (MITHIC) for the work. And with another MITHIC grant, Moran-Thomas and several colleagues are working to create a new “Health and Society” educational program at MIT.

“A through line in my work is the question about how to put people at the center of health and medicine,” says Moran-Thomas, an associate professor in MIT’s anthropology program. “Because that’s not how it feels to most people in the world. Care technologies that work for everybody, and health technologies in relation to chronic disease, connect all these different projects.”

The work Moran-Thomas may be best known for occurred in 2020, during the Covid-19 pandemic, when her research recovered an array of neglected clinical studies showing that oximeters functioned differently depending on the skin color of patients. After she published a piece about it in the Boston Review, physicians who found the essay conducted further hospital studies that confirmed a pattern of disproportionately inaccurate readings, leading to subsequent efforts to improve the technology — all stemming from her careful, patient-centric approach.

“What anthropology has to offer the world, and other knowledge systems, is the insights of people that might be missing from many accounts, and highlighting these stories that are getting left out,” Moran-Thomas says. “Those are not footnotes, but the central thing to follow. And those histories are also alive in the material world around us.”

Thinking across medical and climate technologies

After growing up in Pennsylvania, Moran-Thomas majored in literature while earning her BA from American University. She decided to pursue ethnographic research as a graduate student, and entered Princeton University’s program in anthropology, earning an MA in 2008 and her PhD in 2012. After postdoc stints at Princeton and Brown University, Moran-Thomas joined the MIT faculty in 2015.

At Princeton, Moran-Thomas’ dissertation research examined the diabetes epidemic in Belize, forming the basis of her first book, “Traveling with Sugar,” whose title is an expression in Belize for living with diabetes. As she chronicles in the book, plantation-era changes that undermined indigenous agriculture, among other things, contributed to a local economy that made diets sugar-heavy, while medical technologies are often unreliable or ill-suited to local conditions. The book also traces breakdowns in care technologies, such as prosthetic limbs (often sought after diabetes-linked amputations), glucose meters, hyperbaric chambers, insulin supply chains, dialysis machines, and pain management technologies.

“Traveling with Sugar” also develops a critique that has become a theme of Moran-Thomas’ work: that society often shifts the blame for illness onto patients while minimizing the larger-scale factors affecting everyday health.

“There can be this focus on exclusively prevention without care, the implicit assumption that patients need to act differently,” Moran-Thomas says. “Blame falls on individuals and families instead of a focus on other questions. Why are these technologies always breaking down? How are they designed, and by whom, for whom? What role is history playing in the present? And how are people trying to remake those structures?”

Those issues are highlighted in Moran-Thomas’ ongoing project, “Sugar Atlas: Counter-Mapping Diabetes from the Caribbean,” which is backed by a two-year Digital Justice Seed Grant from the American Council of Learned Societies. Whereas international organizations tend to lump North America and the Caribbean together when tracking diabetes, this project zooms in on specific aspects of the disease and its historical and structural contributors in the Caribbean, such as the distance people must travel to buy vegetables, their proximity to insulin supplies, and the ways climate change is affecting sea life and fishing practices.

“We’re trying to create a community platform offering a different vision of these conditions,” Moran-Thomas says of the effort to map otherwise unrecorded aspects of the global diabetes epidemic, while tracing mutual aid networks and people’s “arts of care” in the present.

Better design for common devices

Following her research in Belize, where glucose meters were prone to breaking, Moran-Thomas began taking a more active focus on the design of medical technology. At MIT, she began co-teaching a course with tech innovator Jose Gomez-Marquez, 21A.311 (The Social Lives of Medical Objects). The idea was to get students to think about device design that could lead to more durable, fixable, and equitable products.

In turn, Moran-Thomas’ interest in devices led her to question the pulse oximeter readings she started seeing first-hand during the Covid-19 pandemic. Pulse oximeters measure oxygen saturation levels in patients and are a part of even routine appointment check-ins. They work visually, casting beams of light to measure the color of hemoglobin, which varies depending on how much oxygen it contains. 
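The optical principle behind those readings can be sketched with the standard "ratio of ratios" computation used in pulse oximetry. The linear calibration constants below are illustrative placeholders only; real devices rely on empirically derived calibration curves, which is precisely the step where skin-tone bias can enter.

```python
def spo2_estimate(ac_red, dc_red, ac_ir, dc_ir):
    """Estimate blood-oxygen saturation from red and infrared
    light-absorption signals using the 'ratio of ratios'.

    Each pair is the pulsatile (AC) and baseline (DC) absorption
    measured at that wavelength.
    """
    # Normalize each wavelength's pulsatile signal by its baseline,
    # then compare the two wavelengths.
    r = (ac_red / dc_red) / (ac_ir / dc_ir)
    # Map the ratio to a saturation percentage with an empirical
    # linear fit. The constants 110 and 25 are illustrative; real
    # devices use calibration curves fit to clinical study data.
    return 110.0 - 25.0 * r
```

Because hemoglobin's color changes with oxygen content, the red/infrared ratio tracks saturation; everything hinges on how that final empirical mapping was calibrated, and on whom.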

After firsthand encounters with the sensors led to more research, Moran-Thomas learned that some medical professionals had lingering, unanswered questions about pulse oximeters and the way they were calibrated. After she published her essay in the Boston Review, arguing for more data collection, medical researchers examined the issue more closely, finding that patients with darker skin were about three times more likely to have erroneous blood-oxygen readings than patients with lighter skin. Ultimately, an FDA panel recommended changes to the devices.

“A lot of my work has been learning about health and medicine technologies from the perspectives of patients, families, and nurses, rather than beginning with engineers and doctors,” Moran-Thomas says. “Those two projects, about blood sugar and blood oxygen, were about the shortcomings of those devices and how they could be improved. Those are perspectives I can highlight in hopes others will pick up on them and make other kinds of designs and policies possible.”

Moran-Thomas’ interest in device design has continued with her current book project, about the chronic health effects of energy production in Pennsylvania. She has worked with MIT seismologist William Frank, of the Department of Earth, Atmospheric and Planetary Sciences, to construct an inexpensive meter people can use to measure shaking in their homes caused by industrial activities. (Moran-Thomas first got the idea to contact Frank, incidentally, after colleagues in western Pennsylvania reached out with seismic concerns and she read about his work in MIT News.)

The effort is also inspired by guidance from community leaders based at the Center for Coalfield Justice in western Pennsylvania. The research team has received a MITHIC SHASS+ Connectivity grant for their project, “Seismic Collaboratory: Rural Health, Missing Science, and Communicating the Chronic Impacts of Extraction.”

“I’ve met people who have been told by their doctors they must have vertigo, while they thought the walls of their house were really shaking,” Moran-Thomas says. “In a case like that, the device you need is not in the clinic, it’s a monitor at home.”

The book, overall, will examine the effects of energy production on chronic disease and mental health issues in Pennsylvania, something exacerbated by more hospitals being shuttered in the state.

Moran-Thomas is simultaneously working with several co-investigators, including Katharina Ribbeck, Erica James, Aleshia Carlsen-Bryan, and Dina Asfaha, to create the “Health and Society” educational program at MIT. Their work was recently awarded an Education Innovation Seed Grant from MITHIC.

From small devices to large-scale changes in health care systems, from the U.S. to other regions, Moran-Thomas remains focused on a core set of issues about how to improve and broaden health care — and lessen the need for it in the first place.

“Thinking across scales is something that’s really useful about anthropology,” Moran-Thomas says. “Even one medical device is a tiny piece of a bigger infrastructure. In order to study that technology or device or sensor, you have to understand the bigger infrastructure it’s attached to, and that there are people involved in all parts of it.” 


What’s the right path for AI?

Conference speakers discussed the unfolding trajectory of AI and the benefits of shaping technology to meet people’s needs.


Who benefits from artificial intelligence? This basic question, which has been especially salient during the AI surge of the last few years, was front and center at a conference at MIT on Wednesday, as speakers and audience members grappled with the many dimensions of AI’s impact.

In one of the conference’s keynote talks, journalist Karen Hao ’15 called for an altered trajectory of AI development, including a move away from the massive scale-up of data use, data centers, and models being used to develop tools under the rubric of “artificial general intelligence.”

“This scale is unnecessary,” said Hao, who has become a prominent voice in AI discussions. “You do not need this scale of AI and compute to realize the benefits.” Indeed, she added, “If we really want AI to be broadly beneficial, we urgently need to shift away from this approach.”

Hao is a former staff member at The Wall Street Journal and MIT Technology Review, and author of the 2025 book, “Empire of AI.” She has reported extensively on the growth of the AI industry.

In her remarks, Hao outlined the astonishing size of the datasets now being used by the biggest AI firms to develop large language models. She also emphasized some of the tradeoffs of this scale-up, such as the massive energy consumption and emissions of hyper-scale data centers, which also consume large amounts of water. Drawing on her own reporting, Hao also noted the human toll on the global gig-economy workers who manually label and input data for the hyper-scale models.

By contrast, Hao offered, an alternate path for AI might exist in the example of AlphaFold, the Nobel Prize-winning tool used to identify protein structures. This represents the concept of the “small, task-specific AI model tackling a well-scoped problem that lends itself to the computational strengths of AI,” Hao said.

She added: “It’s trained on highly curated data sets that only have to do with the problem at hand: protein folding and amino acid sequences. … There’s no need for vast supercomputing because the datasets are small, the model is small, and it’s still unlocking enormous benefit.”

In a second keynote address, scholar Paola Ricaurte underscored the desirability of purpose-driven AI approaches, outlining a number of conceptual keys to evaluating the usefulness of AI.

“There is no sense in having technologies that are not going to respond to the communities that are going to use them,” said Ricaurte.

She is a professor at Tecnologico de Monterrey in Mexico and a faculty associate at Harvard University’s Berkman Klein Center for Internet and Society. Ricaurte has also served on expert committees such as the Global Partnership for AI, UNESCO’s AI Ethics Experts Without Borders, and the Women for Ethical AI project.

The event was hosted by the MIT Program in Women’s and Gender Studies. Manduhai Buyandelger, the program’s director and a professor of anthropology, provided introductory remarks.

Titled “Gender, Empire, and AI: Symposium and Design Workshop,” the event was held in the conference space at the MIT Schwarzman College of Computing, with over 300 people in attendance for the keynote talks. There was also a segment of the event devoted to discussion groups, and an afternoon session on design, in a half-dozen different subject areas.

In her talk, Hao decried the often-vague nature of AI discourse, suggesting it impedes a more thoughtful discussion about the industry’s direction.

“Part of the challenge in talking about AI is the complete lack of specificity in the term ‘artificial intelligence,’” Hao said. “It’s like the word ‘transportation.’ You could be referring to anything from a bicycle to a rocket.” As a result, she said, “when we talk about accessing its benefits, we actually have to be very specific. Which AI technologies are we talking about, and which ones do we want more of?”

In her view, the smaller-sized tools — more akin to the bicycle, by analogy — are more useful on an everyday basis. As another example, Hao mentioned the project Climate Change AI, focused on tools that can help improve the energy efficiency of buildings, track emissions, optimize supply chains, forecast extreme weather, and more.

“This is the vision of AI that we should be building towards,” Hao said.

In conclusion, Hao encouraged audience members to be active participants in AI-related discourse and projects, saying the trajectory of the technology was not yet fixed, and that public interventions matter.

Citing the writer Rebecca Solnit, Hao suggested to the audience that “Hope locates itself in the premise that we don’t know what will happen, and that in the spaciousness of uncertainty is room to act.” She also noted, “Each and every one of you has an active role to play in shaping technology development.”

Ricaurte, similarly, encouraged attendees to be proactive participants in AI matters, noting that technologies will work best when the pressing everyday needs of all citizens are addressed.

“We have the responsibility to make hope possible,” Ricaurte said.


MIT and Hasso Plattner Institute establish collaborative hub for AI and creativity

Jointly led by the MIT Morningside Academy for Design, MIT Schwarzman College of Computing, and the Hasso Plattner Institute in Potsdam, the hub will foster a dynamic community where computing, creativity, and human-centered innovation meet.


The following is a joint announcement from the MIT School of Architecture and Planning, MIT Schwarzman College of Computing, Hasso Plattner Institute, and Hasso Plattner Foundation.

The MIT Morningside Academy for Design (MAD), MIT Schwarzman College of Computing, Hasso Plattner Institute (HPI), and Hasso Plattner Foundation celebrated the launch of the MIT and HPI AI and Creativity Hub (MHACH) at a signing ceremony this week. This 10-year initiative aims to deepen ties between computing and design as advances in artificial intelligence are reshaping how ideas are conceived and shared.

Funded by the Hasso Plattner Foundation, MIT and HPI will work together to foster collaborative interdisciplinary research and support a portfolio of educational programs, fellowships, and faculty engagement focused on AI and creativity, expanding scholarly inquiry into AI applications across disciplines, industries, and societal challenges. The collaboration begins with an inaugural two-day workshop March 19-20 at MIT, bringing together faculty, students, and researchers to set early priorities.

“As we hear from our faculty, as the Information Age gives way to an era of imagination, we expect a new emphasis on human creativity,” reflects MIT President Sally Kornbluth. “Through this collaboration, MIT and HPI are creating a shared space where students and faculty will come together across disciplines to explore new ideas, experiment with emerging tools, and invent new frontiers at the intersection of human creativity and AI.”

“The best minds need the right environment to do their most creative work,” says Rouven Westphal, from the Hasso Plattner Foundation. “When HPI and MIT come together across disciplines and borders, they create exactly that. The Hasso Plattner Foundation is committed to supporting this collaboration for the long term, building on Hasso Plattner’s vision of uniting technological excellence with human-centered design and creativity.”

Deepening collaboration at the intersection of technology, creativity, and societal impact

Building on the success of the Hasso Plattner Institute-MIT Research Program on Designing for Sustainability, established in 2022 between MIT MAD and HPI, the new MHACH hub represents a commitment to deepen collaboration at the intersection of technology, creativity, and societal impact.

“MIT and HPI share a common commitment to turning scientific excellence into real-world impact. Through this collaboration, we will create an environment where students and researchers from both sides of the Atlantic can work together, experiment across disciplines, and learn from one another — at a time when artificial intelligence is set to profoundly shape our lives. We are convinced that this collaboration will generate ideas with impact far beyond both institutions and inspire international cooperation and innovation,” says Professor Tobias Friedrich, dean and managing director of the Hasso Plattner Institute.

“HPI and MIT exist at the nexus of technology and creativity. Expanding this dynamic relationship will generate new paths for the infusion of AI, design, and creativity, enabling students, faculty, and researchers to dream and discover novel solutions, moving more quickly than ever from idea to implementation. MAD was established to connect thinkers across and beyond the Institute, and this new era of collaboration with HPI advances that mission on a global scale,” comments Hashim Sarkis, dean of the MIT School of Architecture and Planning and the Elizabeth and James Killian (1926) Professor.

Academic leadership from MIT and HPI will jointly shape the hub’s research and teaching agenda. Based in Potsdam, Germany, HPI is a center of excellence for digital engineering advancing research, education, and societal transfer in IT systems engineering, data engineering, cybersecurity, entrepreneurship, and digital health. Through its globally recognized HPI d-school and pioneering work in design thinking methodology, HPI brings a distinctive perspective on human-centered innovation to the collaboration, alongside a strong record in AI and data science research and technology transfer.

Expanding research and education on AI and creativity

The efforts of this multifaceted initiative are intended to foster a dynamic academic community spanning MIT and HPI, anchored by Hasso Plattner–named professorships and graduate fellowships whose recipients will be actively engaged in the hub. The long-term framework is designed to provide continuity for faculty appointments, doctoral training, and cross-campus research.

The agreement also includes the development of classes and educational programs in areas of shared AI focus, along with expanded experiential opportunities through AI-focused workshops, hackathons, and summer exchanges. A steering committee composed of representatives from the MIT School of Architecture and Planning, MIT Schwarzman College of Computing, and Hasso Plattner Institute will facilitate the shared governance of MHACH.

“Creativity has always been about extending human capability. At its core, this collaboration asks what it truly means to create something new. The question isn’t whether AI diminishes creativity, but how new forms of intelligence can deepen and enrich that process. Our goal is to explore that intersection with rigor and build a cross-disciplinary scholarly and research community that shapes how AI supports the creation of new ideas and knowledge,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science.

This collaboration is made possible by the Hasso Plattner Foundation’s long-term philanthropic commitment to institutions that connect technological innovation with design thinking and education. The Hasso Plattner Foundation has played a central role in establishing and supporting institutions such as the Hasso Plattner Institute and international design thinking programs that bridge disciplines and geographies.


Generative AI improves a wireless vision system that sees through obstructions

With this new technique, a robot could more accurately detect hidden objects or understand an indoor scene using reflected Wi-Fi signals.


MIT researchers have spent more than a decade studying techniques that enable robots to find and manipulate hidden objects by “seeing” through obstacles. Their methods utilize surface-penetrating wireless signals that reflect off concealed items.

Now, the researchers are leveraging generative artificial intelligence models to overcome a longstanding bottleneck that limited the precision of prior approaches. The result is a new method that produces more accurate shape reconstructions, which could improve a robot’s ability to reliably grasp and manipulate objects that are blocked from view.

This new technique builds a partial reconstruction of a hidden object from reflected wireless signals and fills in the missing parts of its shape using a specially trained generative AI model.

The researchers also introduced an expanded system that uses generative AI to accurately reconstruct an entire room, including all the furniture. The system utilizes wireless signals sent from one stationary radar, which reflect off humans moving in the space.  

This overcomes one key challenge of many existing methods, which require a wireless sensor to be mounted on a mobile robot to scan the environment. And unlike some popular camera-based techniques, their method preserves the privacy of people in the environment.

These innovations could enable warehouse robots to verify packed items before shipping, eliminating waste from product returns. They could also allow smart home robots to understand someone’s location in a room, improving the safety and efficiency of human-robot interaction.

“What we’ve done now is develop generative AI models that help us understand wireless reflections. This opens up a lot of interesting new applications, but technically it is also a qualitative leap in capabilities, from being able to fill in gaps we were not able to see before to being able to interpret reflections and reconstruct entire scenes,” says Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science, director of the Signal Kinetics group in the MIT Media Lab, and senior author of two papers on these techniques. “We are using AI to finally unlock wireless vision.”

Adib is joined on the first paper by lead author and research assistant Laura Dodds; as well as research assistants Maisy Lam, Waleed Akbar, and Yibo Cheng; and on the second paper by lead author and former postdoc Kaichen Zhou; Dodds; and research assistant Sayed Saad Afzal. Both papers will be presented at the IEEE Conference on Computer Vision and Pattern Recognition.

Surmounting specularity

The Adib Group previously demonstrated the use of millimeter wave (mmWave) signals to create accurate reconstructions of 3D objects that are hidden from view, like a lost wallet buried under a pile.

These waves, which are the same type of signals used in Wi-Fi, can pass through common obstructions like drywall, plastic, and cardboard, and reflect off hidden objects.

But mmWaves usually reflect in a specular manner, which means a wave reflects in a single direction after striking a surface. So large portions of the surface will reflect signals away from the mmWave sensor, making those areas effectively invisible.

“When we want to reconstruct an object, we are only able to see the top surface and we can’t see any of the bottom or sides,” Dodds explains.

The researchers previously used principles from physics to interpret reflected signals, but this limits the accuracy of the reconstructed 3D shape.

In the new papers, they overcame that limitation by using a generative AI model to fill in parts that are missing from a partial reconstruction.

“But the challenge then becomes: How do you train these models to fill in these gaps?” Adib says.

Usually, researchers use extremely large datasets to train a generative AI model, which is one reason models like Claude and Llama exhibit such impressive performance. But no mmWave datasets are large enough for training.

Instead, the researchers adapted the images in large computer vision datasets to mimic the properties in mmWave reflections.

“We were simulating the property of specularity and the noise we get from these reflections so we can apply existing datasets to our domain. It would have taken years for us to collect enough new data to do this,” Lam says.

The researchers embed the physics of mmWave reflections directly into these adapted data, creating a synthetic dataset they use to teach a generative AI model to perform plausible shape reconstructions.
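That adaptation step can be illustrated in rough form: impose a specular-visibility mask and sensor noise on an ordinary point cloud, keeping only surface patches whose normals face the sensor. The angular threshold and noise level here are invented for the sketch; this is not the team's actual simulation pipeline.

```python
import numpy as np

def simulate_mmwave_view(points, normals, sensor_pos,
                         max_angle_deg=30.0, noise_std=0.005, rng=None):
    """Turn an optically captured point cloud into a crude mmWave-like
    partial observation by simulating specular reflection.

    A surface patch reflects back to the sensor only if its normal
    points roughly toward the sensor; everything else is dropped,
    mimicking the 'invisible sides' problem. Gaussian jitter stands
    in for sensor noise. All thresholds are illustrative.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Unit vectors from each surface point toward the sensor.
    to_sensor = sensor_pos - points
    to_sensor /= np.linalg.norm(to_sensor, axis=1, keepdims=True)
    # Cosine of the angle between the surface normal and that direction.
    cos_angle = np.sum(normals * to_sensor, axis=1)
    # Keep only patches oriented within max_angle_deg of the sensor.
    visible = cos_angle > np.cos(np.radians(max_angle_deg))
    observed = points[visible] + rng.normal(0.0, noise_std,
                                            (int(visible.sum()), 3))
    return observed, visible
```

Applying a transformation like this to a large optical dataset yields partial, noisy shapes paired with their complete originals, exactly the kind of input/target pairs a shape-completion model needs.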

The complete system, called Wave-Former, proposes a set of potential object surfaces based on mmWave reflections, feeds them to the generative AI model to complete the shape, and then refines the surfaces until it achieves a full reconstruction.

Wave-Former was able to generate faithful reconstructions of about 70 everyday objects, such as cans, boxes, utensils, and fruit, boosting accuracy by nearly 20 percent over state-of-the-art baselines. The objects were hidden behind or under cardboard, wood, drywall, plastic, and fabric.

Seeing “ghosts”

The team used this same approach to build an expanded system that fully reconstructs entire indoor scenes by leveraging mmWave reflections off humans moving in a room.

Human motion generates multipath reflections. Some mmWaves reflect off the human, then reflect again off a wall or object, and then arrive back at the sensor, Dodds explains.

These secondary reflections create so-called “ghost signals,” which are reflected copies of the original signal that change location as a human moves. These ghost signals are usually discarded as noise, but they also hold information about the layout of the room.
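The geometry of such a single-bounce ghost follows the classic image-source model: a reflection off a flat wall makes the moving person appear at their mirror image across the wall. This is a toy 2D version of that geometry, not the researchers' system; the coordinates are illustrative.

```python
def ghost_position(person, wall_x):
    """Image-source model of a single-bounce 'ghost': a bounce off a
    vertical wall at x = wall_x makes the person appear at their
    mirror image across the wall (2D sketch, illustrative only).
    """
    x, y = person
    return (2.0 * wall_x - x, y)

# As a person walks parallel to a wall at x = 5, the ghost traces a
# parallel path on the far side; the wall lies exactly midway, so its
# position is recoverable from how the ghost moves with the person.
path = [(1.0, float(t)) for t in range(3)]
ghosts = [ghost_position(p, wall_x=5.0) for p in path]
recovered_wall = [(p[0] + g[0]) / 2.0 for p, g in zip(path, ghosts)]
```

This is why the "noise" is informative: the way each ghost moves in lockstep with the real person encodes the position and orientation of the surface that produced it.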

“By analyzing how these reflections change over time, we can start to get a coarse understanding of the environment around us. But trying to directly interpret these signals is going to be limited in accuracy and resolution,” Dodds says.

They used a similar training method to teach a generative AI model to interpret those coarse scene reconstructions and understand the behavior of multipath mmWave reflections. This model fills in the gaps, refining the initial reconstruction until it completes the scene.

They tested their scene reconstruction system, called RISE, using more than 100 human trajectories captured by a single mmWave radar. On average, RISE generated reconstructions that were about twice as precise as those of existing techniques.

In the future, the researchers want to improve the granularity and detail in their reconstructions. They also want to build large foundation models for wireless signals, akin to foundation models like GPT, Claude, and Gemini for language and vision, which could open new applications.

This work is supported, in part, by the National Science Foundation (NSF), the MIT Media Lab, and Amazon.


A better method for identifying overconfident large language models

This new metric for measuring uncertainty could flag hallucinations and help users know whether to trust an AI model.


Large language models (LLMs) can generate credible but inaccurate responses, so researchers have developed uncertainty quantification methods to check the reliability of predictions. One popular method involves submitting the same prompt multiple times to see if the model generates the same answer.

But this method measures self-confidence, and even the most impressive LLM might be confidently wrong. Overconfidence can mislead users about the accuracy of a prediction, which might result in devastating consequences in high-stakes settings like health care or finance.   

To address this shortcoming, MIT researchers introduced a new method for measuring a different type of uncertainty that more reliably identifies confident but incorrect LLM responses.

Their method involves comparing a target model’s response to responses from a group of similar LLMs. They found that measuring cross-model disagreement more accurately captures this type of uncertainty than traditional approaches.

They combined their approach with a measure of LLM self-consistency to create a total uncertainty metric, and evaluated it on 10 realistic tasks, such as question-answering and math reasoning. This total uncertainty metric consistently outperformed other measures and was better at identifying unreliable predictions.

“Self-consistency is being used in a lot of different approaches for uncertainty quantification, but if your estimate of uncertainty only relies on a single model’s outcome, it is not necessarily trustable. We went back to the beginning to understand the limitations of current approaches and used those as a starting point to design a complementary method that can empirically improve the results,” says Kimia Hamidieh, an electrical engineering and computer science (EECS) graduate student at MIT and lead author of a paper on this technique.

She is joined on the paper by Veronika Thost, a research scientist at the MIT-IBM Watson AI Lab; Walter Gerych, a former MIT postdoc who is now an assistant professor at Worcester Polytechnic Institute; Mikhail Yurochkin, a staff research scientist at the MIT-IBM Watson AI Lab; and senior author Marzyeh Ghassemi, an associate professor in EECS and a member of the Institute of Medical Engineering Sciences and the Laboratory for Information and Decision Systems.

Understanding overconfidence

Many popular methods for uncertainty quantification involve asking a model for a confidence score or testing the consistency of its responses to the same prompt. These methods estimate aleatoric uncertainty, or how internally confident a model is in its own prediction.

However, LLMs can be confident when they are completely wrong. Research has shown that epistemic uncertainty, or uncertainty about whether one is using the right model, can be a better way to assess true uncertainty when a model is overconfident.

The MIT researchers estimate epistemic uncertainty by measuring disagreement across a similar group of LLMs.    

“If I ask ChatGPT the same question multiple times and it gives me the same answer over and over again, that doesn’t mean the answer is necessarily correct. If I switch to Claude or Gemini and ask them the same question, and I get a different answer, that is going to give me a sense of the epistemic uncertainty,” Hamidieh explains.

Epistemic uncertainty attempts to capture how far a target model diverges from the ideal model for that task. But since it is impossible to build an ideal model, researchers use surrogates or approximations that often rely on faulty assumptions.

To improve uncertainty quantification, the MIT researchers needed a more accurate way to estimate epistemic uncertainty.

An ensemble approach

The method they developed involves measuring the divergence between the target model and a small ensemble of models with similar size and architecture. They found that comparing semantic similarity, or how closely the meanings of the responses match, could provide a better estimate of epistemic uncertainty.

To achieve the most accurate estimate, the researchers needed a set of LLMs that covered diverse responses, weren’t too similar to the target model, and were weighted based on credibility.

“We found that the easiest way to satisfy all these properties is to take models that are trained by different companies. We tried many different approaches that were more complex, but this very simple approach ended up working best,” Hamidieh says.

Once they had developed this method for estimating epistemic uncertainty, they combined it with a standard approach that measures aleatoric uncertainty. This total uncertainty metric (TU) offered the most accurate reflection of whether a model’s confidence level is trustworthy.

“Uncertainty depends on the uncertainty of the given prompt as well as how close our model is to the optimal model. This is why summing up these two uncertainty metrics is going to give us the best estimate,” Hamidieh says.
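In simplified form, the two components and their sum might look like the sketch below. This is a toy illustration: exact-match comparison stands in for the semantic-similarity and credibility weighting the researchers describe, and the function names are invented for the example.

```python
def self_consistency(answers):
    """Aleatoric proxy: disagreement among repeated answers from the
    SAME model to the same prompt (1 minus the modal answer's share)."""
    top = max(answers.count(a) for a in set(answers))
    return 1.0 - top / len(answers)

def cross_model_disagreement(target_answer, peer_answers, same):
    """Epistemic proxy: fraction of peer models (ideally trained by
    different companies) whose answer differs in meaning from the
    target model's. `same` is a semantic-equivalence check; here it
    is exact match, but it could be an embedding or NLI comparison."""
    diffs = [0.0 if same(target_answer, a) else 1.0 for a in peer_answers]
    return sum(diffs) / len(diffs)

def total_uncertainty(target_answers, peer_answers,
                      same=lambda a, b: a == b):
    """Sum the two components, echoing the combined metric: a model
    that repeats itself confidently (low aleatoric) can still be
    flagged if its peers disagree (high epistemic)."""
    aleatoric = self_consistency(target_answers)
    epistemic = cross_model_disagreement(target_answers[0],
                                         peer_answers, same)
    return aleatoric + epistemic
```

Under this toy rule, a model that answers "Paris" five times in a row scores zero aleatoric uncertainty, but if a third of its peers answer differently, the total is still nonzero, which is exactly the confidently-wrong case self-consistency alone misses.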

TU could more effectively identify situations where an LLM is hallucinating, since epistemic uncertainty can flag confidently wrong outputs that aleatoric uncertainty might miss. It could also enable researchers to reinforce an LLM’s confidently correct answers during training, which may improve performance.

They tested TU using multiple LLMs on 10 common tasks, such as question-answering, summarization, translation, and math reasoning. Their method more effectively identified unreliable predictions than either measure on its own.

Measuring total uncertainty often required fewer queries than calculating aleatoric uncertainty, which could reduce computational costs and save energy.

Their experiments also revealed that epistemic uncertainty is most effective on tasks with a unique correct answer, like factual question-answering, but may underperform on more open-ended tasks.

In the future, the researchers could adapt their technique to improve its performance on open-ended queries. They may also build on this work by exploring other forms of aleatoric uncertainty.

This work is funded, in part, by the MIT-IBM Watson AI Lab.


New model predicts how mosquitoes will fly

Their flight patterns change in response to different sensory cues, a new study finds. The work could lead to more effective traps and mosquito control strategies.


A mosquito finds its target with the help of certain cues in its environment, such as a person’s silhouette and the carbon dioxide they exhale.

Now researchers at MIT and Georgia Tech have found that these visual and chemical cues help determine the insects’ flight paths. The team has developed the first three-dimensional model of mosquito flight, based on experiments with mosquitoes flying in the presence of different sensory cues.

Their model, reported today in the journal Science Advances, identifies three flight patterns that mosquitoes exhibit in response to sensory stimuli.

When they can only see a potential target, mosquitoes take a “fly-by” approach, quickly diving in toward the target, then flying back out if they do not detect any other host-confirming cues.

When they can’t see a target but can smell a chemical cue such as carbon dioxide, mosquitoes will do “double-takes,” slowing down and flitting back and forth to keep close to the source.

Interestingly, when mosquitoes receive both visual and chemical cues, such as seeing a silhouette and smelling carbon dioxide, they switch to an “orbiting” pattern, flying around a target at a steady speed as they prepare to land, much like a shark circling its prey.

The researchers say the new model can be used to predict how mosquitoes will fly in response to other cues, such as heat, humidity, and certain odors. Such predictions could help to design more effective traps and mosquito control strategies.

“Our work suggests that mosquito traps need specifically calibrated, multisensory lures to keep mosquitoes engaged long enough to be captured,” says study author Jörn Dunkel, MathWorks Professor of Mathematics at MIT. “We hope this establishes a new paradigm for studying pest behavior by using 3D tracking and data-driven modeling to decode their movement and solve major public health challenges.”

The study’s MIT co-authors are Chenyi Fei, a postdoc in MIT’s Department of Mathematics, and Alexander Cohen PhD ’26, a recent MIT chemical engineering PhD graduate advised by Dunkel and Professor Martin Bazant, along with Christopher Zuo, Soohwan Kim, and David L. Hu ’01, PhD ’06 of Georgia Tech, and Ring Carde of the University of California at Riverside.

Flight by numbers

Mosquitoes are considered to be the most dangerous animals in the world, given their collective impact on human health. The blood-sucking insects transmit malaria, dengue fever, West Nile virus, and other deadly diseases that together cause over 770,000 deaths each year.

Of the 3,500 known species of mosquitoes, around 100 have evolved to specifically target humans, including Aedes aegypti, a species that uses a variety of cues to seek out human hosts. Scientists have studied how certain cues attract mosquitoes, mainly by setting up experiments in wind tunnels, where they can waft cues such as carbon dioxide and study how mosquitoes respond. Such experiments have mainly recorded data such as where and when the insects land. The researchers say no study has explored how mosquitoes fly as they hunt for a host.

“The big question was: How do mosquitoes find a human target?” says Fei. “There were previous experimental studies on what kind of cues might be important. But nothing has been especially quantitative.”

At MIT, Dunkel’s group develops mathematical models to describe and predict the behavior of complex living systems, such as how worms untangle, how starfish embryos develop and swim, and how microbes evolve their community structure over time.

Dunkel looked to apply similar quantitative techniques to predict flight patterns of mosquitoes after giving a talk at Georgia Tech. David Hu, a former MIT graduate student who is now a professor of mechanical engineering at Georgia Tech, proposed a collaboration; Hu’s lab was carrying out experiments with mosquitoes at a facility at the Centers for Disease Control and Prevention in Atlanta, where they were studying the insects’ behavior in response to sensory cues. Could Dunkel’s group use the collected data to identify significant flight behaviors that could ultimately help scientists control mosquito populations?

“One of the original motivations was designing better traps for mosquitoes,” says Cohen. “Figuring out how they fly around a human gives insights on how we can avoid them.”

Taking cues

For their new study, Hu and his colleagues at Georgia Tech carried out experiments with 50 to 100 mosquitoes of the Aedes aegypti species. The insects flew around inside a long, white, slightly angled rectangular room while cameras captured a detailed three-dimensional trajectory for each mosquito. In the center of the room, the researchers placed an object representing a particular visual or chemical cue.

In some trials, they placed a black Styrofoam sphere on a stand to represent a simple visual cue. (Mosquitoes would be able to see the black sphere against the room’s white background.) In other trials, they set up a white sphere with a tube running through it to pump out carbon dioxide at rates similar to what humans breathe out. These trials represented the presence of a chemical cue, but not a visual cue.

The researchers also studied the mosquitoes’ response to both visual and chemical cues, using a black sphere that emitted carbon dioxide. Finally, they observed how mosquitoes behaved around a human volunteer who wore protective clothing that was black on one side and white on the other.

Across 20 experiments, the team generated more than 53 million data points and over 477,220 mosquito flight paths. Hu shared the data with Dunkel, whose group used the measurements to develop a model for mosquito flight behavior.

“We are proposing a very broad range of dynamical equations, and when you start out, the equation to predict a mosquito’s flight path is very complicated, with a lot of terms, including the relative importance of a visual versus a chemical cue,” Dunkel explains. “Then through iteration against data, we reduce the complexity of that equation until we get the simplest model that still agrees with the data.”

In the end, the group whittled the equations down to a simple model that accurately predicts how a mosquito will fly, given the presence of a visual cue, a chemical cue, or both. The flight paths in response to one cue or the other are markedly different. And interestingly, when both cues are present, the researchers noted that the resulting path is not “additive.” In other words, a mosquito that can both see and smell a target does not simply combine the paths it would take in response to each cue separately. Instead, the insects take a distinct path, circling rather than diving or darting around their target.
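The iterative reduction Dunkel describes, fitting a broad candidate model and then discarding negligible terms, can be sketched roughly as follows; the term names and coefficient values are invented for illustration and are not the study’s actual equations.

```python
# Illustrative sketch of model reduction: start from a candidate
# dynamical model with many terms, then iteratively drop terms whose
# fitted coefficients are negligible (in the spirit of sparse
# regression; not the authors' code).
def prune_terms(coeffs, threshold=0.05):
    """Keep only terms whose coefficient magnitude exceeds threshold."""
    return {name: c for name, c in coeffs.items() if abs(c) >= threshold}

# Hypothetical fitted coefficients for a mosquito's acceleration:
candidate = {
    "visual_attraction": 0.8,
    "chemical_attraction": 0.6,
    "drag": -0.3,
    "noise_term_1": 0.01,   # negligible -> pruned
    "noise_term_2": -0.02,  # negligible -> pruned
}
print(prune_terms(candidate))
```

In practice the pruning and refitting alternate until only terms that the data genuinely supports remain, which is how a complicated candidate equation becomes the simplest model that still agrees with the measurements.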

“Obviously there are additional cues that humans emit, like odor, heat, and humidity,” Cohen notes. “For the species we study, visual and carbon dioxide cues are the most important. But we can apply this model to study different species and how they respond to other sensory cues.”

The researchers have developed an interactive app that incorporates the new mosquito flight model. Users can experiment with different objects and set parameters such as the number of mosquitoes around the object and the type of sensory cue that is present. The model then visualizes how the mosquitoes would fly in response.

“The original hope was to have a quantitative model that can simulate mosquito behavior around various trap designs,” Cohen says. “Now that we have a model, we can start to design more intelligent traps.”

This work was supported, in part, by the National Science Foundation, Schmidt Sciences, LLC, the NDSEG Fellowship Program, and the MIT MathWorks Professorship Fund. 


Brain circuit needed to incorporate new information may be linked to schizophrenia

Impairments of this circuit may help to explain why some people with schizophrenia lose touch with reality.


One of the symptoms of schizophrenia is difficulty incorporating new information about the world. This can lead people with schizophrenia to struggle with making decisions and, eventually, to lose touch with reality.

MIT neuroscientists have now identified a gene mutation that appears to give rise to this type of difficulty. In a study of mice, the researchers found that the mutated gene impairs the function of a brain circuit that is responsible for updating beliefs based on new input.

This mutation, in a gene called grin2a, was originally identified in a large-scale screen of patients with schizophrenia. The new study suggests that drugs targeting this brain circuit could help with some of the cognitive impairments seen in people with schizophrenia.

“If this circuit doesn’t work well, you cannot quickly integrate information,” says Guoping Feng, the James W. and Patricia T. Poitras Professor in Brain and Cognitive Sciences at MIT, a member of the Broad Institute of Harvard and MIT, and the associate director of the McGovern Institute for Brain Research at MIT. “We are quite confident this circuit is one of the mechanisms that contributes to the cognitive impairment that is a major part of the pathology of schizophrenia.”

Feng and Michael Halassa, a professor of psychiatry and neuroscience and director of translational research at Tufts University School of Medicine, are the senior authors of the new study, which appears today in Nature Neuroscience. Tingting Zhou, a research scientist at the McGovern Institute, and Yi-Yun Ho, a former MIT postdoc, are the lead authors of the paper.

Adapting to new information

Schizophrenia is known to have a strong genetic component. For the general population, the risk of developing the disease is about 1 percent, but that goes up to 10 percent for those who have a parent or sibling with the disease, and 50 percent for people who have an identical twin with the disease.

Researchers at the Stanley Center for Psychiatric Research at the Broad Institute have identified more than 100 gene variants linked to schizophrenia, using genome-wide association studies. However, many of those variants are located in non-coding regions of the genome, making it difficult to figure out how they might influence development of the disease.

More recently, researchers at the Stanley Center used a different strategy, known as whole-exome sequencing, to reveal gene mutations linked to schizophrenia. This technique sequences only the protein-coding regions of the genome, so it can reveal mutations that are located in known genes.

Using this approach on about 25,000 sequences from people with schizophrenia and 100,000 sequences from control subjects, the researchers identified 10 genes in which mutations significantly increase the risk of developing schizophrenia.

In the new Nature Neuroscience study, Feng and his students created a mouse model with a mutation in one of those genes, grin2a. This gene encodes a protein that forms part of the NMDA receptor — a receptor that is activated by the neurotransmitter glutamate and is often found on the surface of neurons.

Zhou then investigated whether these mice displayed any of the characteristic behaviors seen in people with schizophrenia. These individuals show many complex symptoms, including psychotic symptoms such as hallucinations and delusions (a loss of contact with reality). Those are difficult to study in mice, but it is possible to study related symptoms such as difficulty interpreting new sensory input.

Over the past two decades, schizophrenia researchers have hypothesized that psychosis may stem from an impaired ability to update beliefs based on new information.

“Our brain can form a prior belief of reality, and when sensory input comes into the brain, a neurotypical brain can use this new input to update the prior belief. This allows us to generate a new belief that’s close to what the reality is,” Zhou says. “What happens in schizophrenia patients is that they weigh the prior belief too heavily. They don’t use as much current input to update what they believed before, so the new belief is detached from reality.”

To study this, Zhou designed an experiment that required mice to choose between two levers to press to earn a food reward. One lever was low-reward — mice had to push it six times to get one drop of milk. A high-reward lever dispensed three drops per push.

At the beginning of the study, all of the mice learned to prefer the high-reward lever. However, as the experiment went on, the number of presses required to dispense the higher reward gradually went up, while there were no changes to the low-reward lever.

As the effort required went up, healthy mice started to switch back and forth between the two levers. Once they had to press the high-reward lever around 18 times for three drops of milk, making the effort per drop about the same for each lever, they eventually switched permanently to the low-reward lever. However, mice with a mutation in grin2a showed a different behavior pattern: they spent more time switching back and forth between the two levers, and they made the switch to the low-reward side much later.

“We find that neurotypical animals make adaptive decisions in this changing environment,” Zhou says. “They can switch from the high-reward side to the low-reward side around the equal value point, while for the animals with the mutation, the switch happens much later. Their adaptive decision-making is much slower compared to the wild-type animals.”
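The “equal value point” in this task is simple arithmetic: at roughly 18 presses for three drops, the high-reward lever costs the same effort per drop as the low-reward lever. A minimal sketch, using the press counts from the article (the helper itself is illustrative, not the researchers’ analysis code):

```python
# Compare effort per drop of milk on each lever.
def effort_per_drop(presses, drops):
    """Presses required per drop of reward."""
    return presses / drops

low_reward = effort_per_drop(6, 1)    # 6 presses for 1 drop
high_reward = effort_per_drop(18, 3)  # 18 presses for 3 drops

# At ~18 presses the two levers cost the same per drop (6 presses each),
# which is where healthy mice switch permanently to the low-reward lever.
print(low_reward, high_reward)  # 6.0 6.0
```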

An impaired circuit

Using functional ultrasound imaging and electrical recordings, the researchers found that the brain region affected most by the grin2a mutation was the mediodorsal thalamus. This part of the brain connects with the prefrontal cortex to form a thalamocortical circuit that is responsible for regulating cognitive functions such as executive control and decision-making.

The researchers found that neuronal activity in the mediodorsal thalamus appears to keep track of the changes in value of the two reward options. Additionally, the mice showed different patterns of neural activity depending on which state they were in — an exploratory state or a state of commitment to one side.

The researchers also showed that they could use optogenetics to reverse the behavioral symptoms of the mice with mutated grin2a. They engineered the neurons of the mediodorsal thalamus so that they could be activated by light, and when these neurons were activated, the mice began behaving similarly to mice without the grin2a mutation.

While only a very small percentage of schizophrenia patients have mutations in the grin2a gene, it’s possible that this circuit dysfunction is a converging mechanism of cognitive impairment for a subset of schizophrenia patients with different causes.

Targeting this circuit could offer a way to overcome some of the cognitive impairments seen in people with schizophrenia, the researchers say. To that end, they are now working to identify potentially druggable targets within the circuit.

The research was funded by the National Institute of Mental Health, the Poitras Center for Psychiatric Disorders Research at MIT, the Yang Tan Collective at MIT, the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT, the Stelling Family Research Fund at MIT, the Stanley Center for Psychiatric Research, and the Brain and Behavior Research Foundation.


Turning extreme heat into large-scale energy storage

Fourth Power, founded by Professor Asegun Henry, is developing thermal batteries for efficiently storing excess electricity from utility grids and power producers.


Thermal batteries can efficiently store energy as heat. But building them requires a carefully designed system with materials that can withstand cycles of extremely high temperatures, without succumbing to problems like corrosion, thermal expansion, and structural fatigue.

Many thermal battery systems move high-temperature gas or molten salt around through metal pipes. Fourth Power, founded by MIT Professor Asegun Henry, is turning these materials inside out, using molten metal to transport the heat, which is stored in carbon bricks.

“The idea was, instead of making the system from metal, let’s move liquid metals,” says Henry SM ’06, PhD ’09.

Henry’s approach earned him a Guinness World Record for the hottest liquid pump back in 2017 — important because when you double the absolute temperature of a material, to the point where it glows white-hot, the amount of light it emits doesn’t just double, it increases 16 times (or to the fourth power).
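The 16-fold figure comes from the Stefan-Boltzmann law, under which a body’s thermal radiation scales with the fourth power of its absolute temperature. A minimal sketch of that scaling (the helper function is illustrative):

```python
# Radiated power from a hot body scales with absolute temperature to
# the fourth power (Stefan-Boltzmann law), so doubling T gives
# 2**4 = 16x the emitted light -- the scaling behind the company name.
def radiated_power_ratio(t_hot_k, t_ref_k):
    """Ratio of thermal radiation at t_hot_k vs t_ref_k (Kelvin)."""
    return (t_hot_k / t_ref_k) ** 4

print(radiated_power_ratio(2400, 1200))  # doubling temperature -> 16.0
```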

The company is harvesting all that light with thermophotovoltaic cells, which work like solar cells to convert light into electricity. Henry and his collaborators broke another record when they demonstrated a thermophotovoltaic cell that could convert light to electricity with an efficiency above 40 percent.

Fourth Power is working to use those record-breaking innovations to provide energy for power grids, power producers, and technology companies building power-hungry infrastructure like data centers. Henry says the batteries can provide anywhere from 10 to over 100 hours of electricity at a storage cost that is significantly cheaper than lithium-ion batteries at grid scale. The company is currently cycling each section of its system through relevant operating temperatures — which are nearly half as hot as the sun — and plans to have a fully integrated demonstration unit operating later this year.

“Explaining why our system is such a huge improvement over everything else centers around power density,” explains Henry, who serves as Fourth Power’s chief technologist. “We realized if you push the temperature higher, you will transfer heat at a higher rate and shrink the system. Then everything gets cheaper. That’s why we pursue such high temperatures at Fourth Power. We operate our thermal battery between 1,900 and 2,400 degrees Celsius, which allows us to save a tremendous amount on the balance of system costs.”

A career in heat

Henry earned his master’s and PhD degrees from MIT before working in faculty positions at Georgia Tech and MIT. As a professor at both schools, his research has focused on thermal transport, storage, renewable energy, and other technologies that could lead to improvements in sustainability and decarbonization. Today, he is the George N. Hatsopoulos Professor in Thermodynamics in MIT’s Department of Mechanical Engineering.

Heat transfer systems are usually made out of metals like iron and nickel. Generally, the higher the temperature you want to reach, the more expensive the metal. Henry noticed that ceramics can get much hotter than metals, but they’re not used nearly as often. He started asking why.

“The answer is often pretty straightforward: You can’t weld ceramics,” Henry says. “Ceramics aren’t ductile. They generally fail in a catastrophically brittle way, and that’s not how we like large systems to behave. But I couldn’t find many problems beyond that.”

After receiving funding from the Department of Energy and the MIT Energy Initiative, Henry spent years developing a pump made from ceramics and graphite (which is similar to a ceramic). In 2017, his pump set the record for the highest recorded operating temperature for a liquid pump, at 1,200 Celsius. The pump used white-hot liquid tin as its working fluid. He chose tin because it doesn’t react with carbon, eliminating corrosion. It also has a relatively low melting point and a high boiling point, which keeps it liquid over a large temperature range.

The challenge then became designing the system.

“Typically, a mechanical engineer would come up with a design and say, ‘Give me the best materials to do this,’” Henry says. “We flipped the problem, so we were saying, ‘We know what materials will work, now we need to figure out how to make a system out of it.’”

In 2023, Henry met Arvin Ganesan, who had previously led global energy work at Apple. At first, Ganesan wasn’t interested in joining a startup — he had two young kids and wanted to prioritize his family — but he was intrigued by the potential of the technology. At their first meeting, the two connected over shared values and fatherhood, as Henry surprised Ganesan by bringing his own young children.

“I had a sense this technology had the promise to tackle the twin crises of affordability and climate change at the same time,” says Ganesan, who is now Fourth Power’s CEO. “As energy demand becomes more pronounced, we either need to deploy harder and deeper tech, which is also important, or improve existing tech. Fourth Power is trying to simplify the physics and thermodynamic principles to deliver an approach that has been very well-studied for a very long time.”

Since 2023, Fourth Power has been conducting sponsored research at the LNS Bates Research and Engineering Center to validate the durability and reliability of its components ahead of a fully integrated demonstration.

The system Fourth Power designed takes in excess electricity from sources like the grid and uses it to heat a series of 6-foot-long, 20-inch-thick graphite bricks until they reach about 2,400 Celsius. At that point the system is considered fully charged.

When the customer wants the electricity back, the bricks are used to heat up liquid tin, which flows through a series of graphite pipes, pumps, and flow meters to thermophotovoltaic cells, which turn the light from the glowing hot infrastructure back into electricity.

“You can basically dip the cells into the light and get power, or you can pull them back out and shut it off,” Henry explains. “The liquid metal starts at 2,400 Celsius and then cools as it’s going through the system because it’s giving a bunch of its energy to the photovoltaic, and then it circulates back through the graphite blocks, which act as a furnace, to retrieve more heat.”

From concept to company

Later this year, Fourth Power plans to turn on a 1-megawatt-hour system in its new headquarters in Bedford, Massachusetts. A full-scale system would offer 25 megawatts of power and 250 megawatt hours of storage and take up about half a football field.

“Most technologies you’ll see in storage are around 10 megawatts an acre or less,” Henry explains. “Fourth Power is more like 100 megawatts per acre. It’s very power-dense.”

The power and storage units of Fourth Power’s system are modular, which will allow customers to start with a smaller system and add storage units to extend storage length later. The company expects to lose about 1 percent of total heat stored per day.

“Customers can buy one storage and one power module, and that’s a 10-hour battery,” Henry explains. “But if they want one power module and two storage modules, that’s a 20-hour battery. Customers can mix and match, which is really advantageous for utilities as renewables scale and storage needs change.”
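The mix-and-match sizing Henry describes follows directly from the numbers above: discharge duration is total storage divided by power. A small sketch, assuming the article’s figures of a 25-megawatt power module and 250 megawatt-hours per storage module:

```python
# Illustrative sizing helper; the module figures come from the article,
# but the function itself is not Fourth Power's design tool.
def battery_hours(power_mw, storage_modules, mwh_per_module=250):
    """Discharge duration in hours for a given module mix."""
    return storage_modules * mwh_per_module / power_mw

print(battery_hours(25, 1))  # one storage module  -> 10.0 hours
print(battery_hours(25, 2))  # two storage modules -> 20.0 hours
```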

Down the line, the system could also be run as a power plant, converting fuel into electricity or using fuel to charge its batteries during stretches with little wind or sun. It could also be used to provide industrial heat.

But for now, Fourth Power is focused on the battery application.

“Utilities need something cheap and they need something reliable,” Henry says. “The only technology that has managed to reach at least one of those requirements is lithium ion. But the world is waiting for something that’s much cheaper than lithium ion and just as reliable, if not better. That’s what we’re focused on demonstrating to the world.”


Three anesthesia drugs all have the same effect in the brain, MIT researchers find

Discovering this common mechanism could lead to a universal anesthesia-delivery system to monitor patients more effectively.


When patients undergo general anesthesia, doctors can choose among several drugs. Although each of these drugs acts on neurons in different ways, they all lead to the same result: a disruption of the brain’s balance between stability and excitability, according to a new MIT study.

This disruption causes neural activity to become increasingly unstable, until the brain loses consciousness, the researchers found. The discovery of this common mechanism could make it easier to develop new technologies for monitoring patients while they are undergoing anesthesia.

“What’s exciting about that is the possibility of a universal anesthesia-delivery system that can measure this one signal and tell how unconscious you are, regardless of which drugs they’re using in the operating room,” says Earl Miller, the Picower Professor of Neuroscience and a member of MIT’s Picower Institute for Learning and Memory.

Miller, Emery Brown, who is the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience, and their colleagues are now working on an automated control system for delivery of anesthesia drugs, which would measure the brain’s stability using EEG and then automatically adjust the drug dose. This could help doctors ensure that patients stay unconscious throughout surgery without becoming too deeply unconscious, which can have negative side effects following the procedure.

Miller and Ila Fiete, a professor of brain and cognitive sciences, the director of the K. Lisa Yang Integrative Computational Neuroscience Center (ICoN), and a member of MIT’s McGovern Institute for Brain Research, are the senior authors of the new study, which appears today in Cell Reports. MIT graduate student Adam Eisen is the paper’s lead author.

Destabilizing the brain

Exactly how anesthesia drugs cause the brain to lose consciousness has been a longstanding question in neuroscience. In 2024, a study from Miller’s and Fiete’s labs suggested that, at least for propofol, the answer is that the drug disrupts the balance between stability and excitability in the brain.

When someone is awake, their brain is able to maintain this delicate balance, responding to sensory information or other input and then returning to a stable baseline.

“The nervous system has to operate on a knife’s edge in this narrow range of excitability,” Miller says. “It has to be excitable enough so different parts can influence one another, but if it gets too excited it goes off into chaotic activity.”

In that 2024 study, the researchers found that propofol knocks the brain out of this state, known as “dynamic stability.” As doses of the drug increased, the brain took longer and longer to return to its baseline state after responding to new input. This effect became increasingly pronounced until consciousness was lost.

For that study, the researchers devised a computational model that analyzes neural activity recorded from the brain. This technique allowed them to determine how the brain responds to perturbations such as an auditory tone or other sensory input, and how long it takes to return to its baseline stability.
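One simple way to quantify “how long the brain takes to return to baseline” is to fit an exponential decay to the perturbation response and read off its time constant. The sketch below is only a toy proxy for the study’s computational model:

```python
import math

def recovery_time_constant(samples, dt):
    """Fit x(t) = x0 * exp(-t / tau) by log-linear least squares;
    a larger tau means a slower return to baseline."""
    ts = [i * dt for i in range(len(samples))]
    logs = [math.log(x) for x in samples]
    n = len(ts)
    mean_t = sum(ts) / n
    mean_y = sum(logs) / n
    slope = sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, logs))
    slope /= sum((t - mean_t) ** 2 for t in ts)
    return -1.0 / slope

# A toy perturbation response that decays with tau = 0.5 seconds:
response = [math.exp(-t / 0.5) for t in (0.0, 0.1, 0.2, 0.3, 0.4)]
print(recovery_time_constant(response, 0.1))
```

In this framing, deeper anesthesia would correspond to a growing time constant: the response to a tone takes longer and longer to die away.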

In their new study, the researchers used the same technique to measure how the brain responds not only to propofol but also to two other anesthesia drugs — ketamine and dexmedetomidine. Animals were given one of the three drugs while their brain activity was analyzed, including their response to auditory tones.

This study showed that the same destabilization induced by propofol also appears during administration of the other two drugs. This “universal signature” appears even though the three drugs have different molecular mechanisms: propofol binds to GABA receptors, inhibiting neurons that have those receptors; dexmedetomidine blocks the release of norepinephrine; and ketamine blocks NMDA receptors, suppressing neurons with those receptors.

Each of these pathways, the researchers hypothesize, affects the brain’s balance of stability and excitability in a different way, yet each leads to an overall destabilization of this balance.

“All three of these drugs appear to do the exact same thing,” Miller says. “In fact, you could look at the destabilization measure we use and you can’t tell which drug is being applied.”

The researchers now plan to further investigate how each of these drugs may give rise to the same patterns of brain destabilization.

“The molecular mechanisms of ketamine and dexmedetomidine are a bit more involved than propofol mechanisms,” Eisen says. “A future direction is to do a meaningful model of what the biophysical effects of those are and see how that could lead to destabilization.”

Monitoring anesthesia

Now that the researchers have shown that three different anesthesia drugs produce similar destabilization patterns in the brain, they believe that measuring those patterns could offer a valuable way to monitor patients during anesthesia. While anesthesia is overall a very safe procedure, it does carry some risks, especially for very young children and for people over 65.

For adults suffering from dementia, anesthesia can make the condition worse, and it can also exacerbate neuropsychiatric disorders such as depression. These risks are higher if patients go into a deeper state of unconsciousness known as burst suppression.

To help reduce those risks, Miller and Brown, who is also an anesthesiologist at Massachusetts General Hospital, are developing a prototype device that can measure patients’ EEG readings while under anesthesia and adjust their dose accordingly. Currently, doctors monitor patients’ heart rate, blood pressure, and other vital signs during surgery, but these don’t give as accurate a reading of how deeply the patient is unconscious.

“If you can limit people’s exposure to anesthesia, if you give just enough and no more, you can reduce risks across the board,” Miller says.

Working with researchers at Brown University, the MIT team is now planning to run a small clinical trial of their monitoring device with patients undergoing surgery.

The research was funded by the U.S. Office of Naval Research, the National Institute of Mental Health, the Simons Center for the Social Brain, the Freedom Together Foundation, the Picower Institute, the National Science Foundation Computer and Information Science and Engineering Directorate, the Simons Collaboration on the Global Brain, the McGovern Institute, and the National Institutes of Health.


“We the People” depicts inventors, dreamers, and innovators in all 50 states

For the 250th anniversary of the US, Joshua Bennett’s epic poem set celebrates unexpected lives forged across the nation.


Zora Neale Hurston remains one of America’s best-known authors. Charles Henry Turner conducted landmark studies of the behavior of bees and spiders. Brian Wilson founded the Beach Boys. George Nissen invented the trampoline. What do they all have in common?

Well, for one thing, they were all innovative Americans — creators and discoverers, producing work no one anticipated. For another, they are all now celebrated as such, in verse, by Joshua Bennett.

That’s right. Bennett — an MIT professor, lauded poet, and literary scholar — is marking the 250th anniversary of the founding of the U.S. with a book-length work of poetry about the country and some of its distinctive figures. In fact, 50 of them: Bennett has written a substantial work featuring remarkable people or inventions from each of the 50 states, meditating on their place in the cultural fabric of the U.S.

“There’s so much to be said for a country where you and I are possible, and the things we do are possible,” Bennett says.

The book, “We (The People of the United States),” is published today by Penguin Books. Bennett is a professor and the Distinguished Chair of the Humanities at MIT.

Bennett’s new work has some prominent Americans in it, but is no gauzy listing of familiar icons. Many of the 50 people in his book overcame hardship, poverty, rejection, or discrimination; some have already been rescued from obscurity, but others have not received proper acclaim. Few of them had a straightforward, simple connection with their times.

“It’s about feeling that you have a life in this country which is undeniably complex, but also has this remarkable beauty to it,” Bennett says of the work. “A beauty you helped to create, and that no one can take away from you.”

The figures Bennett writes about are sources of fascination and inspiration, demonstrating the kinds of lives it is possible to invent in the U.S.

“We’re in a moment that calls for compelling, historically grounded stories about what America is, what it has been, and what it can be,” Bennett adds. “Can we build a life-affirming vision for the future and those who will inherit it? I’m trying to. I work on it every day.”

Taking flight

“We (The People of the United States)” is inspired, in part, by Virgil’s “Georgics,” a cycle of poems about agrarian life by the great Roman poet. Bennett encountered them while a PhD student in literature at Princeton University.

“The poet Susan Stewart, my professor at Princeton, introduced me to Virgil’s Georgics,” Bennett says. “I eventually started to think: What would it look like for me to cover Virgil?” Adding to his interest in the concept, one of his favorite poets, Gwendolyn Brooks, had spent time recasting Virgil’s ancient epic, “The Aeneid,” for her Pulitzer Prize-winning work, “Annie Allen.” She also translated the original work from Latin as a teenager. Moreover, Bennett’s writing has long engaged with the subject of people working the land in America.

“I decided to start writing all these poems about agriculture,” Bennett says. “But then I thought, this would be interesting as an epic poem about America.” As he launched the project, its focus shifted some more: “I started to think about the book as an ode to invention.”

Soon Bennett had worked out the structure. An opening section of the work is about his own family background, becoming a father, and the process of building a life here in Massachusetts.

“Where does my influence, my aspiration, end and the child begin?” Bennett writes in one poem. That section prefigures further themes in the collection about the domestic environments many of its figures emerged from. For the rest of the work, with one innovator or innovation for each of the 50 states, Bennett adopted a regular writing schedule, producing at least one new poem per week until he was finished. 

Hurston, one of several famous authors and artists featured in the book, represents Florida. From Ohio, entomologist Charles Henry Turner was the first Black person to receive a PhD from the University of Chicago, in 1907, before conducting a wide range of studies about the cognition and behavior of spiders and bees, among other things.

George Nissen, meanwhile, was a University of Iowa gymnast who built the first trampoline in the 1930s in his home state — something Bennett calls a “magical device” that brings to life “the scene in your mind of the leap/and of the leap itself, where you are airborne, illuminated/quickly immortal.” Whether these innovations emerged through rigorous academic exploration or became mass-market goods that produce flights of fancy, Bennett has a keen eye for people who break new ground and fire our own feelings of wonder.

“We actually are all bound up in it together,” Bennett says. “These different figures, from various fields, eras, and lifelong pursuits are in here together precisely because they helped weave the story of this country together. It’s a story that is still unfolding.”

Bennett is straightforward about the struggles many of his subjects faced. His choice to represent North Carolina is the poet George Moses Horton, an enslaved man who not only learned to read and write in the early 1800s — the state later made that illegal for enslaved persons, in 1830 — but made money selling poems to University of North Carolina students. Indeed, Horton’s work was published in the 1820s. Bennett writes that Horton’s public performance of his poetry was “an ancient art revived in the flesh of a prodigy in chains.”

Bennett’s unblinking regard for historical reality is a motif throughout the work. “To me it’s not only about exploring a history that a reader might feel connected to or want to learn more about,” he says. “It’s about honoring those who lived that history, who helped make some of the most beautiful parts of the present possible, through an engagement with the substance of their lives.”

Just my imagination

Many figures in “We (The People of the United States)” are artists, but of many forms. From watching VH1 as a child, Bennett got into the Beach Boys, and he devotes the California entry in the poem to them. Or as Bennett puts it, he was “newly initiated into a sound/I do not understand until I am old enough to be nostalgic/for windswept locales, and singular moments in time/I never lived through.”

Bennett was learning about the Beach Boys while growing up in Yonkers, New York, far from any California beaches. But then, Brian Wilson wasn’t a surfer either — he grew up in an industrial suburb of Los Angeles. Imagination was the coin of the realm for Wilson, something Bennett understood when Beach Boys songs would veer off in unexpected directions.

“I’ve always been drawn to moments of great surprise, or revelation, in the works of art I love,” Bennett says. “Which is part of why I’ve dedicated my life to poetry. You think one thing is happening in a poem, and suddenly that shock comes, that unexpected turn, or volta. Brian Wilson always had a great understanding of that. It works in pop music. Surprise, sometimes, is a shift in register that takes you higher.”

Various poems in the collection have down-to-earth origins. Bennett remembers his father often fixing things in the family home, from toys to the boiler, saying, “Pass me the Phillips-head,” when he needed a screwdriver. Thus Oregon appears in the book: Portland is where the Phillips-head screwdriver was invented.

In conversation, Bennett notes the hopeful disposition of his father, who after living through Jim Crow and serving in the Vietnam War, worked 10-hour shifts at the U.S. Postal Service to support his family. Even with all the difficulty he experienced in his life, Bennett’s father always encouraged his son to pursue his dreams.

“I’m grateful that I inherited a profound sense of belonging, and dignity, from my parents,” Bennett says. “There was always this feeling that we were part of a much larger story, and that we had a responsibility to tell the truth about the world as we knew it.”

And that’s really what Bennett’s new book is about.

“We can reckon with our history in its fullness and work, tirelessly, toward a world that’s worthy of the most vulnerable among us,” Bennett says. “Like Toni Morrison, we can ‘dream the world as it ought to be.’ And then make it real. That’s my vision.”


Ocean bacteria team up to break down biodegradable plastic

MIT researchers uncovered the roles of bacterial species from the environment as they consume biodegradable plastic.


Biodegradable plastics could help alleviate the plastic waste crisis that is polluting the environment and harming our health. But how long plastics take to degrade, and how environmental bacteria work together to break them down, remain largely unknown.

Understanding how plastics are broken down by microbes could help scientists create more sustainable materials and even new microbial recycling systems that convert plastic waste into useful materials.

Now MIT researchers have taken an important first step toward understanding how bacteria work together to break down plastic. In a new paper, the researchers uncovered the role of individual ocean bacteria in the breakdown of a widely used biodegradable plastic. They also showed the complementary processes microbes use to fully consume the plastic, with one microbe cleaving the plastic into its component chemicals and others consuming each chemical.

The researchers say it’s one of the first studies illuminating specific bacterial species’ role in the breakdown of plastic and indicates the speed of plastic degradation can vary widely depending on a few key factors.

“There is a lot of ambiguity about how long these materials actually exist in the environment,” says lead author Marc Foster, a PhD student in the MIT-WHOI Joint Program. “This shows plastic biodegradation is highly dependent on the microbial community where the plastic ends up. It’s also dependent on the plastics — the chemistry of the polymer and how they’re made as a product. It’s important to understand these processes because we’re trying to constrain the environmental lifetime of these materials.”

Joining Foster on the paper are MIT PhD candidate Philip Wasson; former MIT postdoc Andreas Sichert; MIT undergraduate Deborah Madden; Woods Hole Oceanographic Institution researchers Matthew Hayden and Adam Subhas; Chong Becker and Sebastian Gross of the international chemical and plastic company BASF; Otto Cordero, an MIT associate professor of civil and environmental engineering; Darcy McRose, MIT’s Thomas D. and Virginia W. Cabot Career Development Professor; and Desirée Plata, MIT’s School of Engineering Distinguished Climate and Energy Professor. The paper appears in the journal Environmental Science & Technology.

Uncovering collaboration

Scientists hope biodegradable plastic can be used to address the mountains of plastic waste piling up in our oceans and landfills.

“More than half of produced plastic is either sent to landfills or directly released into the environment,” Foster says. “But without knowing the specifics of different degradation processes, we won’t be able to accurately predict the lifetime of these materials and better control that degradation.”

To date, many studies into the biodegradation of plastics have focused on single microbial organisms, but Foster says that’s not representative of how most plastics are broken down in the environment.

“It’s really rare for a single bacterium to carry out the full degradation process because it requires a significant metabolic burden to carry all of the enzymatic functions to depolymerize the polymer and then use those chemical subunits as a carbon and energy source,” Foster says.

Other studies have sought to capture the molecular footprints of groups of bacteria as they degrade plastic, which gives a snapshot of the species involved without uncovering the mechanisms of action.

For this study, the researchers wanted to uncover the roles of specific bacterial species as they fully degraded plastic. They started with a type of biodegradable plastic known as an aromatic-aliphatic copolyester. Such plastic is used in shopping bags and food packaging. It’s also often laid across the soil of farms to prevent weeds and retain moisture.

To begin the study, researchers at BASF, which produces that type of plastic, first placed samples of the product into different depths of the Mediterranean Sea to let bacteria grow as a thin biofilm around the plastic. The company then shipped the samples to researchers at MIT, who isolated as many species of bacteria as possible from the samples. The researchers mixed those isolates and identified 30 bacterial species that continued to grow in abundance on the plastic.

Using carbon dioxide as a measure of plastic degradation, the researchers isolated each bacterium and found one, Pseudomonas pachastrellae, that could depolymerize the plastic compounds, breaking them into the three chemical components of the plastic: terephthalic acid, sebacic acid, and butanediol.

But that bacterium couldn’t consume all three components on its own. One by one, the researchers exposed each bacterium to each chemical, finding no bacteria that could consume all three, although they did find some species that could consume one or two chemicals on their own.

Finally, the researchers selected five bacterial species based on their complementary breakdown abilities and showed the small group exhibited the same ability to fully degrade the plastic as the 30-member bacteria community.

“I was able to minimize the degradation process to this simplistic set of specific metabolic functions,” Foster says. “And then when I took out one bacterium, the mineralization dropped, which indicated the organism was controlling the degradation of the polymer. Then when I had each one of the bacteria alone in a culture, none of them could reach the same degradation as all five together, indicating there was this complementary function required. It worked much better than I thought it would.”
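The division of labor described above can be captured in a toy model: full degradation requires one species that depolymerizes the plastic plus a set of species that together cover all three monomers. The sketch below is purely illustrative — the capability assignments for the hypothetical "consumer" species are assumptions for demonstration, not data from the study.

```python
# Toy model of complementary plastic degradation, as described in the article:
# one depolymerizer cleaves the polymer into three monomers, and other
# species consume the monomers. Capability assignments are hypothetical.

MONOMERS = {"terephthalic acid", "sebacic acid", "butanediol"}

# Hypothetical capability map: P. pachastrellae is the depolymerizer
# (per the article); the consumer species and their diets are invented.
community = {
    "P. pachastrellae": {"depolymerize"},
    "consumer_A": {"terephthalic acid"},
    "consumer_B": {"sebacic acid", "butanediol"},
}

def can_fully_degrade(members):
    """Full mineralization needs a depolymerizer AND coverage of every monomer."""
    caps = set().union(*(community[m] for m in members))
    return "depolymerize" in caps and MONOMERS <= caps

# The whole community succeeds...
assert can_fully_degrade(list(community))
# ...but dropping any single member breaks the chain, mirroring the
# study's observation that removing one bacterium halts mineralization.
for dropped in community:
    remaining = [m for m in community if m != dropped]
    assert not can_fully_degrade(remaining)
```

The point of the sketch is the same one Foster makes: no single member carries every function, so degradation is a property of the community, not of any one organism.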

The researchers also found the five-member bacteria community couldn’t mineralize a different plastic, showing groups of bacteria may only be able to mineralize specific plastics.

“It highlights that the microbes living where this plastic ends up are going to dictate the plastic’s lifetime,” Foster says.

Faster plastic degradation

Foster notes the bacteria in his study are likely specific to the Mediterranean Sea. The study also only involved bacteria that could survive in his lab environment. Still, Foster says it’s one of the first papers that identifies the roles of bacteria in consuming plastic.

“Most studies wouldn’t be able to identify the specific bacteria that’s controlling each complementary mineralization process,” Foster says. “Here we can say this bacteria controls degradation, these bacteria handle mineralization, and then we show the function of each bacteria and show that together, they can remove the entire polymer.”

Foster says the work is an important first step toward creating microbial systems that are better at breaking down plastic or converting it into something useful. In follow-up work for his PhD, he is exploring what makes successful bacterial pairs for faster plastic consumption and how enzymes dock on plastic particles to initiate and continue degradation.

The work was supported by the MIT Climate and Sustainability Consortium and BASF SE. Partial support was provided by the U.S. National Science Foundation Graduate Research Fellowship Program.