It is a deep question, from deep in our history: When did human language as we know it emerge? A new survey of genomic evidence suggests our unique language capacity was present at least 135,000 years ago. Language may then have entered social use around 100,000 years ago.
Our species, Homo sapiens, is about 230,000 years old. Estimates of when language originated vary widely, based on different forms of evidence, from fossils to cultural artifacts. The authors of the new analysis took a different approach. They reasoned that since all human languages likely share a common origin — as the researchers strongly believe — the key question is how far back in time regional groups began spreading around the world.
“The logic is very simple,” says Shigeru Miyagawa, an MIT professor and co-author of a new paper summarizing the results. “Every population branching across the globe has human language, and all languages are related.” Based on what the genomics data indicate about the geographic divergence of early human populations, he adds, “I think we can say with a fair amount of certainty that the first split occurred about 135,000 years ago, so human language capacity must have been present by then, or before.”
The paper, “Linguistic capacity was present in the Homo sapiens population 135 thousand years ago,” appears in Frontiers in Psychology. The co-authors are Miyagawa, who is a professor emeritus of linguistics and the Kochi-Manjiro Professor of Japanese Language and Culture at MIT; Rob DeSalle, a principal investigator at the American Museum of Natural History’s Institute for Comparative Genomics; Vitor Augusto Nóbrega, a faculty member in linguistics at the University of São Paulo; Remo Nitschke, of the University of Zurich, who worked on the project while at the University of Arizona linguistics department; Mercedes Okumura of the Department of Genetics and Evolutionary Biology at the University of São Paulo; and Ian Tattersall, curator emeritus of human origins at the American Museum of Natural History.
The new paper examines 15 genetic studies of different varieties, published over the past 18 years: Three used data about the inherited Y chromosome, three examined mitochondrial DNA, and nine were whole-genome studies.
All told, the data from these studies suggest an initial regional branching of humans about 135,000 years ago. That is, after the emergence of Homo sapiens, groups of people moved apart geographically, and genetic variations developed over time among the resulting regional subpopulations. The amount of genetic variation shown in the studies allows researchers to estimate the point in time at which Homo sapiens was still one regionally undivided group.
Miyagawa says the studies collectively provide increasingly convergent evidence about when these geographic splits started taking place. The first survey of this type was performed by other scholars in 2017, but they had fewer existing genetic studies to draw upon. Now, far more published data are available, and considered together they point to 135,000 years ago as the likely time of the first split.
The new meta-analysis was possible because “quantity-wise we have more studies, and quality-wise, it’s a narrower window [of time],” says Miyagawa, who also holds an appointment at the University of São Paulo.
Like many linguists, Miyagawa believes all human languages are demonstrably related to each other, something he has examined in his own work. For instance, in his 2010 book, “Why Agree? Why Move?” he analyzed previously unexplored similarities between English, Japanese, and some of the Bantu languages. There are more than 7,000 identified human languages around the globe.
Some scholars have proposed that language capacity dates back a couple of million years, based on the physiological characteristics of other primates. But to Miyagawa, the question is not when primates could utter certain sounds; it is when humans had the cognitive ability to develop language as we know it, combining vocabulary and grammar into a system that generates an unbounded range of rule-based expressions.
“Human language is qualitatively different because there are two things, words and syntax, working together to create this very complex system,” Miyagawa says. “No other animal has a parallel structure in their communication system. And that gives us the ability to generate very sophisticated thoughts and to communicate them to others.”
This conception of human language origins also holds that humans had the cognitive capacity for language for some period of time before we constructed our first languages.
“Language is both a cognitive system and a communication system,” Miyagawa says. “My guess is prior to 135,000 years ago, it did start out as a private cognitive system, but relatively quickly that turned into a communications system.”
So, how can we know when distinctively human language was first used? The archaeological record is invaluable in this regard. Roughly 100,000 years ago, the evidence shows, there was a widespread appearance of symbolic activity, from meaningful markings on objects to the use of fire to produce ochre, a decorative red color.
Like our complex, highly generative language, these symbolic activities are engaged in by people, and no other creatures. As the paper notes, “behaviors compatible with language and the consistent exercise of symbolic thinking are detectable only in the archaeological record of H. sapiens.”
Among the co-authors, Tattersall has most prominently propounded the view that language served as a kind of ignition for symbolic thinking and other organized activities.
“Language was the trigger for modern human behavior,” Miyagawa says. “Somehow it stimulated human thinking and helped create these kinds of behaviors. If we are right, people were learning from each other [due to language] and encouraging innovations of the types we saw 100,000 years ago.”
To be sure, as the authors acknowledge in the paper, other scholars believe there was a more incremental and broad-based development of new activities around 100,000 years ago, involving materials, tools, and social coordination, with language playing a role in this, but not necessarily being the central force.
For his part, Miyagawa recognizes that there is considerable room for further progress in this area of research, but thinks efforts like the current paper are at least steps toward filling out a more detailed picture of language’s emergence.
“Our approach is very empirically based, grounded in the latest genetic understanding of early Homo sapiens,” Miyagawa says. “I think we are on a good research arc, and I hope this will encourage people to look more at human language and evolution.”
This research was, in part, supported by the São Paulo Excellence Chair awarded to Miyagawa by the São Paulo Research Foundation.
A collaboration across continents to solve a plastics problem
MIT students travel to the Amazon, working with locals to address the plastics sustainability crisis.
More than 60,000 tons of plastic makes the journey down the Amazon River to the Atlantic Ocean every year. And that doesn’t include what finds its way to the river’s banks, or the microplastics ingested by the region’s abundant and diverse wildlife.
It’s easy to demonize plastic, but it has been crucial in developing the society we live in today. Creating materials that have the benefits of plastics while reducing the harms of traditional production methods is a goal of chemical engineering and materials science labs the world over, including that of Bradley Olsen, the Alexander and I. Michael Kasser (1960) Professor of Chemical Engineering at MIT.
Olsen, a Fulbright Amazonia scholar and the faculty lead of MIT-Brazil, works with communities to develop alternative plastics solutions that can be derived from resources within their own environments.
“The word that we use for this is co-design,” says Olsen. “The idea is, instead of engineers just designing something independently, they engage and jointly design the solution with the stakeholders.”
In this case, the stakeholders were small businesses around Manaus in the Brazilian state of Amazonas curious about the feasibility of bioplastics and other alternative packaging.
“Plastics are inherent to modern life and actually perform key functions and have a really beautiful chemistry that we want to be able to continue to leverage, but we want to do it in a way that is more earth-compatible,” says Desirée Plata, MIT associate professor of civil and environmental engineering.
That’s why Plata joined Olsen in creating the course 1.096/10.496 (Design of Sustainable Polymer Systems) in 2021. Now, as a Global Classroom offering under the umbrella of MISTI since 2023, the class brings MIT students to Manaus during the three weeks of Independent Activities Period (IAP).
“In my work running the Global Teaching Labs in Brazil since 2016, MIT students collaborate closely with Brazilian undergraduates,” says Rosabelli Coelho-Keyssar, managing director of MIT-Brazil and MIT-Amazonia. “This peer-learning model was incorporated into the Global Classroom in Manaus, ensuring that MIT and Brazilian students worked together throughout the course.”
The class leadership worked with climate scientist and MIT alumnus Carlos Nobre PhD ’83, who facilitated introductions to faculty at the Universidade do Estado do Amazonas (UEA), the state university of Amazonas. The group then scouted businesses in the Amazonas region that would be interested in partnering with the students.
“In the first year, it was Comunidade Julião, a community of people living on the edge of the Tarumã Mirim River west of Manaus,” says Olsen. “This year, we worked with Comunidade Para Maravilha, a community living in the dry land forest east of Manaus.”
A tailored solution
Plastic, by definition, is made up of many small carbon-based molecules, called monomers, linked by strong bonds into larger molecules called polymers. Linking different monomers and polymers in different ways creates different plastics — from trash bags to a swimming pool float to the dashboard of a car. Plastics are traditionally made from petroleum byproducts that are easy to link together, stable, and plentiful.
But there are ways to reduce the use of petroleum-based plastics. Packaging can be made from materials found within the local ecosystem, as was the focus of the 2024 class. Or carbon-based monomers can be extracted from high-starch plant matter through a number of techniques, the goal of the 2025 cohort. But plants that grow well in one location might not in another. And bioplastic production facilities can be tricky to install if the necessary resources aren’t immediately available.
“We can design a whole bunch of new sustainable chemical processes, use brand new top-of-the-line catalysts, but if you can’t actually implement them sustainably inside an environment, it falls short on a lot of the overall goals,” says Brian Carrick, a PhD candidate in the Olsen lab and a teaching assistant for the 2025 course offering.
So, identifying local candidates and tailoring the process is key. The 2025 MIT cohort collaborated with students from throughout the Amazonas state to explore the local flora, study its starch content in the lab, and develop a new plastic-making process — all within the three weeks of IAP.
“It’s easy when you have projects like this to get really locked into the MIT vacuum of just doing what sounds really cool, which isn’t always effective or constructive for people actually living in that environment,” says Claire Underwood, a junior chemical-biological engineering major who took the class. “That’s what really drew me into the project, being able to work with people in Brazil.”
The 31 students visited a protected area of the Amazon rainforest on Day One. They also had chances throughout IAP to visit the Amazon River, where the potential impact of their work became clear as they saw plastic waste collecting on its banks.
“That was a really cool aspect to the class, for sure, being able to actually see what we were working towards protecting and what the goal was,” says Underwood.
They interviewed stakeholders, such as farmers who could provide the feedstock and plastics manufacturers who could incorporate new techniques. Then, they got into the classroom, where massive intellectual ground was covered in a crash course on the sustainable design process, the nitty gritty of plastic production, and the Brazilian cultural context on how building such an industry would affect the community. For the final project, they separated into teams to craft preliminary designs of process and plant using a simplified model of these systems.
Connecting across boundaries
Working in another country brought to the fore how interlinked policy, culture, and technical solutions are.
“I know nothing about economics, and especially not Brazilian economics and politics,” says Underwood. But one of the Brazilian students in her group was a management and finance major. “He was super helpful when we were trying to source things and account for inflation and things like that — knowing what was feasible, and not just academically feasible.”
Before they parted at the end of IAP, each team presented their proposals to a panel of company representatives and Brazilian MIT alumni who chose first-, second-, and third-place winners. While more research is needed before comfortably implementing the ideas, the experience seemed to generate legitimate interest in creating a local bioplastics production facility.
Understanding sustainable design concepts and how to do interdisciplinary work is an important skill to learn. Even if these students don’t wind up working on bioplastics in the heart of the Amazon, being able to work with people of different perspectives — be it a different discipline or a different culture — is valuable in virtually every field.
“The exchange of knowledge across different fields and cultures is essential for developing innovative and sustainable solutions to global challenges such as climate change, waste management, and the development of eco-friendly materials,” says Taisa Sampaio, a PhD candidate in materials chemistry at UEA and a co-instructor for the course. “Programs like this are crucial in preparing professionals who are more aware and better equipped to tackle future challenges.”
Right now, Olsen and Plata are focused on harnessing the deep well of connections and resources they have around Manaus, but they hope to develop that kind of network elsewhere to expand this sustainable design exploration to other regions of the world.
“A lot of sustainability solutions are hyperlocal,” says Plata. “Understanding that not all locales are exactly the same is really powerful and important when we’re thinking about sustainability challenges. And it’s probably where we’ve gone wrong with the one-size-fits-all, silver-bullet solution-seeking that we’ve been doing for the past many decades.”
Collaborations for the 2026 trip are still in development but, as Olsen says, “we hope this is an experience we can continue to offer long into the future, based on how positive it has been for our students and our Brazilian partners.”
High-performance computing, with much less code
The Exo 2 programming language enables reusable scheduling libraries external to compilers.
Many companies invest heavily in hiring talent to create the high-performance library code that underpins modern artificial intelligence systems. NVIDIA, for instance, developed some of the most advanced high-performance computing (HPC) libraries, creating a competitive moat that has proven difficult for others to breach.
But what if a couple of students could, within a few months, match state-of-the-art HPC libraries using a few hundred lines of code, instead of tens or hundreds of thousands?
That’s what researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have shown with a new programming language called Exo 2.
Exo 2 belongs to a new category of programming languages that MIT Professor Jonathan Ragan-Kelley calls “user-schedulable languages” (USLs). Instead of hoping that an opaque compiler will auto-generate the fastest possible code, USLs put programmers in the driver's seat, allowing them to write “schedules” that explicitly control how the compiler generates code. This enables performance engineers to transform simple programs that specify what they want to compute into complex programs that do the same thing as the original specification, but much, much faster.
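The split described above, a plain specification of the computation plus an explicit, user-authored schedule of rewrites, can be sketched in ordinary Python. This is only a conceptual illustration, not Exo 2’s actual syntax or API; the Loop type, the split operation, and spec_matmul are invented here for demonstration.

```python
# Conceptual sketch of a "user-schedulable" workflow (not the Exo 2 API):
# the spec says WHAT to compute; the schedule explicitly rewrites HOW the
# loops are structured, without changing the computation's meaning.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Loop:
    var: str      # loop variable name
    extent: int   # trip count
    body: object  # a nested Loop or a statement string

def spec_matmul(n):
    """Specification: a plain triple loop for C[i][j] += A[i][k] * B[k][j]."""
    stmt = "C[i][j] += A[i][k] * B[k][j]"
    return Loop("i", n, Loop("j", n, Loop("k", n, stmt)))

def split(loop, var, size):
    """Scheduling operation: split one loop into an outer/inner pair."""
    if isinstance(loop, Loop):
        if loop.var == var:
            return Loop(var + "_out", loop.extent // size,
                        Loop(var + "_in", size, loop.body))
        return replace(loop, body=split(loop.body, var, size))
    return loop  # reached the innermost statement

def render(node, depth=0):
    """Print the (transformed) loop nest as pseudo-code."""
    pad = "  " * depth
    if isinstance(node, Loop):
        return f"{pad}for {node.var} in range({node.extent}):\n" + render(node.body, depth + 1)
    return pad + node + "\n"

# The user-written "schedule": an explicit sequence of rewrites.
schedule = [lambda p: split(p, "i", 8), lambda p: split(p, "j", 8)]

program = spec_matmul(64)
for transform in schedule:
    program = transform(program)
print(render(program))
```

Running the sketch prints the restructured loop nest; the specification itself never changes, only the schedule does.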
One of the limitations of existing USLs (like the original Exo) is their relatively fixed set of scheduling operations, which makes it difficult to reuse scheduling code across different “kernels” (the individual components in a high-performance library).
In contrast, Exo 2 enables users to define new scheduling operations externally to the compiler, facilitating the creation of reusable scheduling libraries. Lead author Yuka Ikarashi, an MIT PhD student in electrical engineering and computer science and CSAIL affiliate, says that Exo 2 can reduce total schedule code by a factor of 100 and deliver performance competitive with state-of-the-art implementations on multiple different platforms, including Basic Linear Algebra Subprograms (BLAS) that power many machine learning applications. This makes it an attractive option for engineers in HPC focused on optimizing kernels across different operations, data types, and target architectures.
“It’s a bottom-up approach to automation, rather than doing an ML/AI search over high-performance code,” says Ikarashi. “What that means is that performance engineers and hardware implementers can write their own scheduling library, which is a set of optimization techniques to apply on their hardware to reach the peak performance.”
One major advantage of Exo 2 is that it reduces the amount of coding effort needed at any one time by reusing the scheduling code across applications and hardware targets. The researchers implemented a scheduling library with roughly 2,000 lines of code in Exo 2, encapsulating reusable optimizations that are linear-algebra specific and target-specific (AVX512, AVX2, Neon, and Gemmini hardware accelerators). This library consolidates scheduling efforts across more than 80 high-performance kernels with up to a dozen lines of code each, delivering performance comparable to, or better than, MKL, OpenBLAS, BLIS, and Halide.
Exo 2 includes a novel mechanism called “Cursors” that provides what the researchers call a “stable reference” for pointing at the object code throughout the scheduling process. Ikarashi says that a stable reference is essential for users to encapsulate schedules within a library function, as it renders the scheduling code independent of object-code transformations.
“We believe that USLs should be designed to be user-extensible, rather than having a fixed set of operations,” says Ikarashi. “In this way, a language can grow to support large projects through the implementation of libraries that accommodate diverse optimization requirements and application domains.”
Exo 2’s design allows performance engineers to focus on high-level optimization strategies while ensuring that the underlying object code remains functionally equivalent through the use of safe primitives. In the future, the team hopes to expand Exo 2’s support for different types of hardware accelerators, like GPUs. Several ongoing projects aim to improve the compiler analysis itself, in terms of correctness, compilation time, and expressivity.
Ikarashi and Ragan-Kelley co-authored the paper with graduate students Kevin Qian and Samir Droubi, Alex Reinking of Adobe, and former CSAIL postdoc Gilbert Bernstein, now a professor at the University of Washington. This research was funded, in part, by the U.S. Defense Advanced Research Projects Agency (DARPA) and the U.S. National Science Foundation, while the first author was also supported by Masason, Funai, and Quad Fellowships.
MIT engineers turn skin cells directly into neurons for cell therapy
A new, highly efficient process for performing this conversion could make it easier to develop therapies for spinal cord injuries or diseases like ALS.
Converting one type of cell to another — for example, a skin cell to a neuron — can be done through a process that requires the skin cell to be induced into a “pluripotent” stem cell, then differentiated into a neuron. Researchers at MIT have now devised a simplified process that bypasses the stem cell stage, converting a skin cell directly into a neuron.
Working with mouse cells, the researchers developed a conversion method that is highly efficient and can produce more than 10 neurons from a single skin cell. If replicated in human cells, this approach could enable the generation of large quantities of motor neurons, which could potentially be used to treat patients with spinal cord injuries or diseases that impair mobility.
“We were able to get to yields where we could ask questions about whether these cells can be viable candidates for the cell replacement therapies, which we hope they could be. That’s where these types of reprogramming technologies can take us,” says Katie Galloway, the W. M. Keck Career Development Professor in Biomedical Engineering and Chemical Engineering.
As a first step toward developing these cells as a therapy, the researchers showed that they could generate motor neurons and engraft them into the brains of mice, where they integrated with host tissue.
Galloway is the senior author of two papers describing the new method, which appear today in Cell Systems. MIT graduate student Nathan Wang is the lead author of both papers.
From skin to neurons
Nearly 20 years ago, scientists in Japan showed that by delivering four transcription factors to skin cells, they could coax them to become induced pluripotent stem cells (iPSCs). Similar to embryonic stem cells, iPSCs can be differentiated into many other cell types. This technique works well, but it takes several weeks, and many of the cells don’t end up fully transitioning to mature cell types.
“Oftentimes, one of the challenges in reprogramming is that cells can get stuck in intermediate states,” Galloway says. “So, we’re using direct conversion, where instead of going through an iPSC intermediate, we’re going directly from a somatic cell to a motor neuron.”
Galloway’s research group and others have demonstrated this type of direct conversion before, but with very low yields — fewer than 1 percent. In Galloway’s previous work, she used a combination of six transcription factors plus two other proteins that stimulate cell proliferation. Each of those eight genes was delivered using a separate viral vector, making it difficult to ensure that each was expressed at the correct level in each cell.
In the first of the new Cell Systems papers, Galloway and her students reported a way to streamline the process so that skin cells can be converted to motor neurons using just three transcription factors, plus the two genes that drive cells into a highly proliferative state.
Using mouse cells, the researchers started with the original six transcription factors and experimented with dropping them out, one at a time, until they reached a combination of three — NGN2, ISL1, and LHX3 — that could successfully complete the conversion to neurons.
Once the number of genes was down to three, the researchers could use a single modified virus to deliver all three of them, allowing them to ensure that each cell expresses each gene at the correct levels.
Using a separate virus, the researchers also delivered genes encoding p53DD and a mutated version of HRAS. These genes drive the skin cells to divide many times before they start converting to neurons, allowing for a much higher yield of neurons, about 1,100 percent.
“If you were to express the transcription factors at really high levels in nonproliferative cells, the reprogramming rates would be really low, but hyperproliferative cells are more receptive. It’s like they’ve been potentiated for conversion, and then they become much more receptive to the levels of the transcription factors,” Galloway says.
The researchers also developed a slightly different combination of transcription factors that allowed them to perform the same direct conversion using human cells, but with a lower efficiency rate — between 10 and 30 percent, the researchers estimate. This process takes about five weeks, which is slightly faster than converting the cells to iPSCs first and then turning them into neurons.
Implanting cells
Once the researchers identified the optimal combination of genes to deliver, they began working on the best ways to deliver them, which was the focus of the second Cell Systems paper.
They tried out three different delivery viruses and found that a retrovirus achieved the most efficient rate of conversion. Reducing the density of cells grown in the dish also helped to improve the overall yield of motor neurons. This optimized process, which takes about two weeks in mouse cells, achieved a yield of more than 1,000 percent.
Working with colleagues at Boston University, the researchers then tested whether these motor neurons could be successfully engrafted into mice. They delivered the cells to a part of the brain known as the striatum, which is involved in motor control and other functions.
After two weeks, the researchers found that many of the neurons had survived and seemed to be forming connections with other brain cells. When grown in a dish, these cells showed measurable electrical activity and calcium signaling, suggesting the ability to communicate with other neurons. The researchers now hope to explore the possibility of implanting these neurons into the spinal cord.
The MIT team also hopes to increase the efficiency of this process for human cell conversion, which could allow for the generation of large quantities of neurons that could be used to treat spinal cord injuries or diseases that affect motor control, such as ALS. Clinical trials using neurons derived from iPSCs to treat ALS are now underway, but expanding the number of cells available for such treatments could make it easier to test and develop them for more widespread use in humans, Galloway says.
The research was funded by the National Institute of General Medical Sciences and the National Science Foundation Graduate Research Fellowship Program.
Five ways to succeed in sports analytics
The 19th annual MIT Sloan Sports Analytics Conference spotlighted a thriving industry. Here are a handful of ideas for getting ahead in it.
Sports analytics is fueled by fans, and funded by teams. The 19th annual MIT Sloan Sports Analytics Conference (SSAC), held last Friday and Saturday, showed more clearly than ever how both groups can join forces.
After all, for decades, the industry’s main energy source has been fans weary of bad strategies: too much bunting in baseball, too much punting in football, and more. The most enduring analytics icon, Bill James, was a teacher and night watchman until his annual “Baseball Abstract” books began to upend a century of conventional wisdom, in the 1980s. After that, sports analytics became a profession.
Meanwhile, franchise valuations keep rising, women’s sports are booming, and U.S. college sports are professionalizing. All of it should create more analytics jobs, as “Moneyball” author Michael Lewis noted during a Friday panel.
“This whole analytics movement is a byproduct of the decisions becoming really expensive decisions,” Lewis said. “It didn’t matter if you got it wrong if you were paying someone $50,000 a year. But if you’re going to pay them $50 million, you better get it right. So, all of a sudden, someone who can give you a little bit more of an edge in that decision-making has more value.”
Would you like to be a valued sports analytics professional? Here are five ideas, gleaned from MIT’s industry-leading event, about how to gain traction in the field.
1. You can jump into this industry.
Bill James, as it happens, was the first speaker on the opening Friday-morning panel at SSAC, held at the Hynes Convention Center in Boston. His theme: the value of everyone’s work, since today’s amateurs become tomorrow’s professionals.
“Time will reveal that the people doing really important work here are not the people sitting on the stages, but the people in the audience,” James said.
This year, that audience numbered 2,500, drawn from 44 U.S. states, 42 countries, and over 220 academic institutions; the conference itself featured dozens of panels, a research paper competition, and thousands of hallway conversations among networking attendees. SSAC was co-founded in 2007 by Daryl Morey SM ’00, president of basketball operations for the Philadelphia 76ers, and Jessica Gelman, CEO of KAGR, the Kraft Analytics Group. The first three conferences were held in MIT classrooms.
But even now, sports analytics remains largely a grassroots thing. Why? Because fans can observe sports intensively, without being bound to its conventions, then study it quantitatively.
“The driving thing for a lot of people is they want to take this [analytical] way of thinking and apply it to sports,” soccer journalist Ryan O’Hanlon of ESPN said to MIT News, in one of those hallway conversations.
O’Hanlon’s 2022 book, “Net Gains,” chronicles the work of several people who held non-sports jobs, made useful advances in soccer analytics, then jumped into the industry. Soon, the sport may have more landing spots, between the growth of Major League Soccer in the U.S. and women’s soccer everywhere. Also, in O’Hanlon’s estimation, only three of the 20 clubs in England’s Premier League are deeply invested in analytics: Brentford, Brighton, and (league-leading) Liverpool. That could change.
In any case, most of the people who leap from fandom to professional status are willing to examine issues that others take for granted.
“I think it’s not being afraid to question the way everyone is doing things,” O’Hanlon added. “Whether that’s how a game is played, how we acquire players, how we think about anything. Pretty much anyone who gets to a high level and has impact [in analytics] has asked those questions and found a way to answer some.”
2. Make friends with the video team.
Suppose you love a sport, start analyzing it, produce good work that gets some attention, and — jackpot! — get hired by a pro team to do analytics.
Well, as former NBA player Shane Battier pointed out during a basketball panel at SSAC, you still won’t spend any time talking to players about your beloved data. That just isn’t how professional teams work, not even stat-savvy ones.
But there is good news: Analysts can still reach coaches and athletes through skilled use of video clips. Most European soccer managers ignore data, but will pay attention to the team’s video analysts. Basketball coaches love video. In American football, film study is essential. And technology has made it easier than ever to link data to video clips.
So analysts should become buddies with the video group. Importantly, analytics professionals now grasp this better than ever, something evident at SSAC across sports.
“Video in football [soccer] is the best way to communicate and get on the same page,” said Sarah Rudd, co-founder and CTO of src | ftbl, and a former analyst for Arsenal, at Friday’s panel on soccer analytics.
3. Seek opportunities in women’s sports analytics.
Have we mentioned that women’s sports is booming? The WNBA is expanding, the size of the U.S. transfer market in women’s soccer has doubled for three straight years, and you can now find women’s college volleyball in a basic cable package.
That growth is starting to fund greater data collection, in the WNBA and elsewhere, a frequent conversation topic at SSAC.
As Jennifer Rizzotti, president of the WNBA’s Connecticut Sun, noted of her own playing days in the 1990s: “We didn’t have statistics, we didn’t have [opponents’] tendencies that were being explained to us. So, when I think of what players have access to now and how far we’ve come, it’s really impressive.” And yet, she added, the amount of data in men’s basketball remains well ahead of the women’s game: “It gives you an awareness of how far we have to go.”
Some women’s sports still lack the cash needed for basic analytics infrastructure. One Friday panelist, LPGA golfer Stacy Lewis, a 13-time winner on tour, noted that the popular ball-tracking analytics system used in men’s golf costs $1 million per week, beyond budget for the women’s game.
And at a Saturday panel, Gelman said that full data parity between men’s and women’s sports was not imminent. “Sadly, I think we’re years away because we just need more investment into it,” she said.
But there is movement. At one Saturday talk, data developer Charlotte Eisenberg detailed how the website Sports Reference — a key source of free public data — has been adding play-by-play data for WNBA games. That data can help in evaluating individual players, particularly over long time periods, and has long been available for NBA games.
In short, as women’s sports grow, their analytics opportunities will, too.
4. Don’t be daunted by someone’s blurry “eye test.”
A subtle trip-wire in sports analytics, even at SSAC, is the idea that analytics should match the so-called “eye test,” or seemingly intuitive sports observations.
Here’s the problem: There is no one “eye test” in any sport, because people’s intuitions differ. For some basketball coaches, an unselfish role player stands out. To others, a flashy off-the-dribble shooter passes the eye test, even without a high shooting percentage. That tension would exist even if statistics did not.
Enter analytics, which confirms the high value of efficient shooting (as well as old-school virtues like defense, rebounding, and avoiding turnovers). But in a twist, the definition of a good shot in basketball has famously changed. In 1979-80, the NBA introduced the three-point line; in 1985, teams were taking 3.1 three-pointers per game; now in 2024-25, teams are averaging 37.5 three-pointers per game, with great efficiency. What happened?
“People didn’t use [the three-point shot] well at the beginning,” Morey said on a Saturday panel, quipping that “they were too dumb to know that three is greater than two.”
Granted, players weren’t used to shooting threes in 1980. But it also took a long time to change intuitions in the sport. Today, analytics shows that a contested three-pointer is a higher-value shot than an open 18-foot two-pointer. That might still run counter to someone’s “eye test.”
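The arithmetic behind that claim is simple expected value: points per attempt equal the make probability times the shot’s point value. The percentages below are hypothetical, chosen only to illustrate the comparison.

```python
# Expected points per attempt = make probability x shot value.
# The make rates here are illustrative assumptions, not league data.
shots = {
    "contested three-pointer": (0.34, 3),
    "open 18-foot two-pointer": (0.42, 2),
}

for name, (make_rate, value) in shots.items():
    print(f"{name}: {make_rate * value:.2f} expected points per attempt")
# Even the lower-percentage three comes out ahead here (1.02 vs. 0.84).
```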
Incidentally, always following analytically informed coaching might also lead to a more standardized, less interesting game, as Morey and basketball legend Sue Bird suggested at the same panel.
“There’s a little bit of instinct that is now removed from the game,” Bird said. Shooting threes makes sense, she concurred, but “You’re only focused on the three-point line, and it takes away all the other things.”
5. Think about absolute truths, but solve for current tactics.
Bill James set the bar high for sports analytics: His breakthrough equation, “runs created,” described how baseball works with almost Newtonian simplicity. In its basic form, a team’s runs are closely approximated by hits plus walks, multiplied by total bases, divided by at-bats plus walks. This applies to individual players, too.
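As a quick sketch, the basic runs-created formula can be applied to a team’s season totals; the totals below are invented for illustration.

```python
# Basic runs created: RC = (H + BB) * TB / (AB + BB).
# Season totals below are hypothetical, used only to show the calculation.
hits, walks, total_bases, at_bats = 1400, 550, 2300, 5500

runs_created = (hits + walks) * total_bases / (at_bats + walks)
print(f"Estimated runs created: {runs_created:.0f}")  # about 741 for this line
```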
But it’s almost impossible to replicate that kind of fundamental formula in other sports.
“I think in soccer there’s still a ton to learn about how the game works,” O’Hanlon told MIT News. Should a team patiently build possession, play long balls, or press up high? And how do we value players with wildly varying roles?
That sometimes leads to situations where, O’Hanlon notes, “No one really knows the right questions that the data should be asking, because no one really knows the right way to play soccer.”
Happily, the search for underlying truths can also produce some tactical insights. Consider one of the three finalists in the conference’s research paper competition, “A Machine Learning Approach to Player Value and Decision Making in Professional Ultimate Frisbee,” by Braden Eberhard, Jacob Miller, and Nathan Sandholtz.
In it, the authors examine playing patterns in ultimate, seeing if teams score more by using a longer string of higher-percentage short-range passes, or by trying longer, high-risk throws. They found that players tend to try higher-percentage passes, although there is some variation, including among star players. That suggests tactical flexibility matters. If the defense is trying to take away short passes, throw long sometimes.
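The underlying trade-off is probabilistic: a chain of high-percentage passes scores only if every link completes, while a single long throw risks everything on one attempt. The completion rates below are hypothetical, meant only to make that trade-off concrete; they are not values from the paper.

```python
# Probability of completing a possession two different ways.
# Completion rates are illustrative assumptions, not data from the paper.
short_pass_rate = 0.95   # per-pass completion probability
passes_needed = 6        # short passes needed to cover the same distance
long_throw_rate = 0.60   # completion probability of one long throw

p_short_chain = short_pass_rate ** passes_needed
print(f"Chain of {passes_needed} short passes completes: {p_short_chain:.2f}")  # ~0.74
print(f"Single long throw completes: {long_throw_rate:.2f}")
```

Under these assumptions the short chain still wins, but the gap narrows quickly as defensive pressure drives the per-pass rate down, which is why occasionally throwing long keeps a defense honest.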
It is a classic sports issue: The right way to play often depends on how your opponent is playing. In the search for ultimate truths, analysts can reveal the usefulness of short-term tactics. That helps teams win, which helps analytics types stay employed. But none of this would come to light if analysts weren’t digging into the sports they love, searching for answers and trying to let the world know what they find.
“There is nothing happening here that will change your life if you don’t follow through on it,” James said. “But there are many things happening here that will change your life if you do.”
Making airfield assessments automatic, remote, and safe
U.S. Air Force engineer and PhD student Randall Pietersen is using AI and next-generation imaging technology to detect pavement damage and unexploded munitions.
In 2022, Randall Pietersen, a civil engineer in the U.S. Air Force, set out on a training mission to assess damage at an airfield runway, practicing “base recovery” protocol after a simulated attack. For hours, his team walked over the area in chemical protection gear, radioing in geocoordinates as they documented damage and looked for threats like unexploded munitions.
The work is standard for all Air Force engineers before they deploy, but it held special significance for Pietersen, who has spent the last five years developing faster, safer approaches for assessing airfields as a master’s student and now a PhD candidate and MathWorks Fellow at MIT. For Pietersen, the time-intensive, painstaking, and potentially dangerous work underscored the potential for his research to enable remote airfield assessments.
“That experience was really eye-opening,” Pietersen says. “We’ve been told for almost a decade that a new, drone-based system is in the works, but it is still limited by an inability to identify unexploded ordnances; from the air, they look too much like rocks or debris. Even ultra-high-resolution cameras just don’t perform well enough. Rapid and remote airfield assessment is not the standard practice yet. We’re still only prepared to do this on foot, and that’s where my research comes in.”
Pietersen’s goal is to create drone-based automated systems for assessing airfield damage and detecting unexploded munitions. This has taken him down a number of research paths, from deep learning to small uncrewed aerial systems to “hyperspectral” imaging, which captures passive electromagnetic radiation across a broad spectrum of wavelengths. Hyperspectral imaging is getting cheaper, faster, and more durable, which could make Pietersen’s research increasingly useful in a range of applications including agriculture, emergency response, mining, and building assessments.
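To give a sense of why that extra spectral detail matters, here is a minimal sketch of per-pixel classification on a hyperspectral cube using the standard spectral-angle measure. It is a generic illustration with made-up data, not Pietersen’s detection pipeline.

```python
import numpy as np

# A hyperspectral image is a cube: height x width x spectral bands, so every
# pixel carries a full spectrum rather than just three RGB values. That is
# what lets per-pixel classifiers separate materials that look identical to
# an ordinary camera. The cube and reference spectra below are random
# placeholders for illustration only.
rng = np.random.default_rng(0)
cube = rng.random((64, 64, 100))        # 64 x 64 pixels, 100 spectral bands

references = {                           # hypothetical material signatures
    "soil": rng.random(100),
    "metal": rng.random(100),
}

def spectral_angle(pixel, ref):
    """Spectral angle mapper: a smaller angle means a more similar spectrum."""
    cos = np.dot(pixel, ref) / (np.linalg.norm(pixel) * np.linalg.norm(ref))
    return np.arccos(np.clip(cos, -1.0, 1.0))

pixel = cube[10, 20]                     # the full spectrum of one pixel
best_match = min(references, key=lambda name: spectral_angle(pixel, references[name]))
print(f"Closest reference material: {best_match}")
```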
Finding computer science and community
Growing up in a suburb of Sacramento, California, Pietersen gravitated toward math and physics in school. But he was also a cross country athlete and an Eagle Scout, and he wanted a way to put his interests together.
“I liked the multifaceted challenge the Air Force Academy presented,” Pietersen says. “My family doesn’t have a history of serving, but the recruiters talked about the holistic education, where academics were one part, but so was athletic fitness and leadership. That well-rounded approach to the college experience appealed to me.”
Pietersen majored in civil engineering as an undergrad at the Air Force Academy, where he first began learning how to conduct academic research. This required him to learn a little bit of computer programming.
“In my senior year, the Air Force research labs had some pavement-related projects that fell into my scope as a civil engineer,” Pietersen recalls. “While my domain knowledge helped define the initial problems, it was very clear that developing the right solutions would require a deeper understanding of computer vision and remote sensing.”
The projects, which dealt with airfield pavement assessments and threat detection, also led Pietersen to start using hyperspectral imaging and machine learning, which he built on when he came to MIT to pursue his master’s and PhD in 2020.
“MIT was a clear choice for my research because the school has such a strong history of research partnerships and multidisciplinary thinking that helps you solve these unconventional problems,” Pietersen says. “There’s no better place in the world than MIT for cutting-edge work like this.”
By the time Pietersen got to MIT, he’d also embraced extreme sports like ultra-marathons, skydiving, and rock climbing. Some of that stemmed from his participation in infantry skills competitions as an undergrad. The multiday competitions are military-focused races in which teams from around the world traverse mountains and perform graded activities like tactical combat casualty care, orienteering, and marksmanship.
“The crowd I ran with in college was really into that stuff, so it was sort of a natural consequence of relationship-building,” Pietersen says. “These events would run you around for 48 or 72 hours, sometimes with some sleep mixed in, and you get to compete with your buddies and have a good time.”
Since coming to MIT with his wife and two children, Pietersen has embraced the local running community and even worked as an indoor skydiving instructor in New Hampshire, though he admits the East Coast winters have been tough for him and his family to adjust to.
Pietersen went remote from 2022 to 2024, but he wasn’t doing his research from the comfort of a home office. The training that showed him the reality of airfield assessments took place in Florida, and then he was deployed to Saudi Arabia. He happened to write one of his PhD journal publications from a tent in the desert.
Now back at MIT and nearing the completion of his doctorate this spring, Pietersen is thankful for all the people who have supported him throughout his journey.
“It has been fun exploring all sorts of different engineering disciplines, trying to figure things out with the help of all the mentors at MIT and the resources available to work on these really niche problems,” Pietersen says.
Research with a purpose
In the summer of 2020, Pietersen did an internship with the HALO Trust, a humanitarian organization working to clear landmines and other explosives from areas impacted by war. The experience demonstrated another powerful application for his work at MIT.
“We have post-conflict regions around the world where kids are trying to play and there are landmines and unexploded ordnances in their backyards,” Pietersen says. “Ukraine is a good example of this in the news today. There are always remnants of war left behind. Right now, people have to go into these potentially dangerous areas and clear them, but new remote-sensing techniques could speed that process up and make it far safer.”
Although Pietersen’s master’s work primarily revolved around assessing normal wear and tear of pavement structures, his PhD has focused on ways to detect unexploded ordnances and more severe damage.
“If the runway is attacked, there would be bombs and craters all over it,” Pietersen says. “This makes for a challenging environment to assess. Different types of sensors extract different kinds of information and each has its pros and cons. There is still a lot of work to be done on both the hardware and software side of things, but so far, hyperspectral data appears to be a promising discriminator for deep learning object detectors.”
After graduation, Pietersen will be stationed in Guam, where Air Force engineers regularly perform the same airfield assessment simulations he participated in in Florida. He hopes someday soon, those assessments will be done not by humans in protective gear, but by drones.
“Right now, we rely on visible lines of sight,” Pietersen says. “If we can move to spectral imaging and deep-learning solutions, we can finally conduct remote assessments that make everyone safer.”
2025 MacVicar Faculty Fellows named
MIT professors Paloma Duong, Frank Schilbach, and Justin Steil are honored for exceptional undergraduate teaching.
Three outstanding educators have been named MacVicar Faculty Fellows: associate professor in comparative media studies/writing Paloma Duong, associate professor of economics Frank Schilbach, and associate professor of urban studies and planning Justin Steil.
For more than 30 years, the MacVicar Faculty Fellows Program has recognized exemplary and sustained contributions to undergraduate education at MIT. The program is named in honor of Margaret MacVicar, MIT’s first dean for undergraduate education and founder of the Undergraduate Research Opportunities Program. Fellows are chosen through a highly competitive, annual nomination process. The MIT Registrar’s Office coordinates and administers the award on behalf of the Office of the Vice Chancellor; nominations are reviewed by an advisory committee, and final selections are made by the provost.
Paloma Duong: Equipping students with a holistic, global worldview
Paloma Duong is the Ford International Career Development Associate Professor of Latin American and Media Studies. Her work has helped to reinvigorate Latin American subject offerings, increase the number of Spanish minors, and build community at the Institute.
Duong brings an interdisciplinary perspective to teaching Latin American culture in dialogue with media theory and political philosophy in the Comparative Media Studies/Writing (CMS/W) program. Her approach is built on a foundation of respect for each student’s unique academic journey and underscores the importance of caring for the whole student, honoring where they can go as intellectuals, and connecting them to a world bigger than themselves.
Senior Alex Wardle says that Professor Duong “broadened my worldview and made me more receptive to new concepts and ideas … her class has deepened my critical thinking skills in a way that very few other classes at MIT have even attempted to.”
Duong’s Spanish language classes and seminars incorporate a wide range of practices — including cultural analyses, artifacts, guest speakers, and hands-on multimedia projects — to help students engage with the material, think critically, and challenge preconceived notions while learning about Latin American history. CMS/W head and professor of science writing Seth Mnookin notes, “students become conversant with region-specific vocabularies, worldviews, and challenges.” This approach makes students feel “deeply respected” and treats them as “learning partners — interlocutors in their own right,” observes Bruno Perreau, the Cynthia L. Reed Professor of French Studies and Language.
Outside the classroom, Duong takes the time to mentor and get to know students by supporting and attending programs connected to MIT Cubanos, Cena a las Seis, and Global Health Alliance. She also serves as an advisor for comparative media studies and Spanish majors, is the undergraduate officer for CMS/W, and is a member of the School of Humanities, Arts, and Social Sciences Education Advisory Committee and the Committee on Curricula.
“Subject areas like Spanish and Latin American Studies play an important role at MIT,” writes T.L. Taylor, professor in comparative media studies/writing and MacVicar Faculty Fellow. “Students find a sense of community and support in these spaces, something that should be at the heart of our attention more than ever these days. We are lucky to have such a dynamic and engaged educator like Professor Duong.”
On receiving this award, Duong says, “I’m positively elated! I’m very grateful to my students and colleagues for the nomination and am honored to become part of such a remarkable group of fellow teachers and mentors. Teaching undergraduates at MIT is always a beautiful challenge and an endless source of learning; I feel super lucky to be in this position.”
Frank Schilbach: Bringing energy and excitement to the curriculum
Frank Schilbach is an associate professor in the Department of Economics. His connection and dedication to undergraduates, combined with his efforts in communicating the importance of economics as a field of study, were key components in the revitalization of Course 14.
When Schilbach arrived at MIT in 2015, there were only three sophomore economics majors. “A less committed teacher would have probably just taken it as a given and got on with their research,” writes professor of economics Abhijit Banerjee. “Frank, instead, took it as a challenge … his patient efforts in convincing students that they need to make economics a part of their general education was a key reason why innovations [to broaden the major] succeeded. The department now has more than 40 sophomores.”
In addition to bolstering enrollment, Schilbach had a hand in curricular improvements. Among them, he created a “next step” for students completing class 14.01 (Principles of Microeconomics) with a revised class 14.13 (Psychology and Economics) that goes beyond classic topics in behavioral economics to explore links with poverty, mental health, happiness, and identity.
Even more significant is the thoughtful and inclusive approach to teaching that Schilbach brings. “He is considerate and careful, listening to everyone, explaining concepts while making students understand that we care about them … it is just a joy to see how the students revel in the activities and the learning,” writes Esther Duflo, the Abdul Latif Jameel Professor of Poverty Alleviation and Development Economics. Erin Grela ’20 notes, “Professor Schilbach goes above and beyond to solicit student feedback so that he can make real-time changes to ensure that his classes are serving his students as best they can.”
His impacts extend beyond MIT as well. Professor of economics David Atkin writes: “Many of these students are inspired by their work with Frank to continue their studies at the graduate level, with an incredible 29 of his students going on to PhD studies at many of the best programs in the country. For someone who has only recently been promoted to a tenured professor, this is a remarkable record of advising.”
“I am delighted to be selected as a MacVicar Fellow,” says Schilbach. “I am thrilled that students find my courses valuable, and it brings me great joy to think that my teaching may help some students improve their well-being and inspire them to use their incredible talents to better the lives of others.”
Justin Steil: Experiential learning meets public service
“I am honored to join the MacVicar Faculty Fellows,” writes associate professor of law and urban planning Justin Steil. “I am deeply grateful to have the chance to teach and learn with such hard-working and creative students who are enthusiastic about collaborating to discover new knowledge and solve hard problems, in the classroom and beyond.”
Professor Steil uses his background as a lawyer, a sociologist, and an urban planner to combine experiential learning with opportunities for public service. In class 11.469 (Urban Sociology in Theory and Practice), he connects students with incarcerated individuals to examine inequality at one of the state’s largest prisons, MCI Norfolk. In another undergraduate seminar, students meet with leaders of local groups like GreenRoots in Chelsea, Massachusetts; Alternatives for Community and Environment in Roxbury, Massachusetts; and the Dudley Street Neighborhood Initiative in Roxbury to work on urban environmental hazards. Ford Professor of Urban Design and Planning and MacVicar Faculty Fellow Lawrence Vale calls Steil’s classes “life-altering.”
In addition to teaching, Steil is also a paramedic and has volunteered as an EMT for MIT Emergency Medical Service (EMS), where he continues to transform routine activities into teachable moments. “There are numerous opportunities at MIT to receive mentorship and perform research. Justin went beyond that. My conversations with Justin have inspired me to go to graduate school to research medical devices in the EMS context,” says Abigail Schipper ’24.
“Justin is truly devoted to the complete education of our undergraduate students in ways that meaningfully serve the broader MIT community as well as the residents of Cambridge and Boston,” says Andrew (1956) and Erna Viterbi Professor of Biological Engineering Katharina Ribbeck. Miho Mazereeuw, associate professor of architecture and urbanism and director of the Urban Risk Lab, concurs: “through his teaching, advising, mentoring, and connections with community-based organizations and public agencies, Justin has knit together diverse threads into a coherent undergraduate experience.”
Student testimonials also highlight Steil’s ability to make each student feel special by delivering undivided attention and individualized mentorship. A former student writes: “I was so grateful to have met an instructor who believed in his students so earnestly … despite being one of the busiest people I’ve ever known, [he] … unerringly made the students he works with feel certain that he always has time for them.”
Since joining MIT in 2015, Steil has received a Committed to Caring award in 2018; the Harold E. Edgerton Award for exceptional contributions in research, teaching, and service in 2021; and a First Year Advising Award from the Office of the First Year in 2022.
Learn more about the MacVicar Faculty Fellows Program on the Registrar’s Office website.
QS World University Rankings rates MIT No. 1 in 11 subjects for 2025
The Institute also ranks second in seven subject areas.
QS World University Rankings has placed MIT in the No. 1 spot in 11 subject areas for 2025, the organization announced today.
The Institute received a No. 1 ranking in the following QS subject areas: Chemical Engineering; Civil and Structural Engineering; Computer Science and Information Systems; Data Science and Artificial Intelligence; Electrical and Electronic Engineering; Linguistics; Materials Science; Mechanical, Aeronautical, and Manufacturing Engineering; Mathematics; Physics and Astronomy; and Statistics and Operational Research.
MIT also placed second in seven subject areas: Accounting and Finance; Architecture/Built Environment; Biological Sciences; Business and Management Studies; Chemistry; Earth and Marine Sciences; and Economics and Econometrics.
For 2025, universities were evaluated in 55 specific subjects and five broader subject areas. MIT was ranked No. 1 in the broader subject area of Engineering and Technology and No. 2 in Natural Sciences.
Quacquarelli Symonds Limited subject rankings, published annually, are designed to help prospective students find the leading schools in their field of interest. Rankings are based on research quality and accomplishments, academic reputation, and graduate employment.
MIT has been ranked as the No. 1 university in the world by QS World University Rankings for 13 straight years.
How do we foster trust in science in an increasingly polarized world? A group including scientists, journalists, policymakers and more gathered at MIT on March 10 to discuss how to bridge the gap between scientific expertise and understanding.
The conference, titled “Building Trust in Science for a More Informed Future,” was organized by the MIT Press and the nonprofit Aspen Institute’s Science and Society Program. It featured talks about the power of storytelling, the role of social media and generative artificial intelligence in our information landscape, and why discussions about certain science topics can become so emotionally heated.
A common theme was the importance of empathy between science communicators and the public.
“The idea that disagreement is often seen as disrespect is insightful,” said MIT’s Ford Professor of Political Science Lily Tsai. “One way to communicate respect is genuine curiosity along with the willingness to change one’s mind. We’re often focused on the facts and evidence and saying, ‘Don’t you understand the facts?’ But the ideal conversation is more like, ‘You value ‘x.’ Tell me why you value ‘x’ and let’s see if we can connect on how the science and research helps you to fulfill those values, even if I don’t agree with them.’”
Many participants discussed the threat of misinformation, a problem exacerbated by the emergence of social media and generative AI. But it’s not all bad news for the scientific community. MIT Provost Cindy Barnhart opened the event by citing surveys showing a high level of trust broadly in scientists across the globe. Still, she also pointed to a U.S. survey showing communication was seen as an area of relative weakness for scientists.
Barnhart noted MIT’s long commitment to science communication and commended communication efforts affiliated with MIT including MIT Press, MIT Technology Review, and MIT News.
“We’re working hard to communicate the value of science to society as we fight to build public support for the scientific research, discovery, and evidence that is needed in our society,” Barnhart said. “At MIT, an essential way we do that is by shining a bright light on the groundbreaking work of our faculty, research scientists, staff, postdocs, and students.”
Another theme was the importance of storytelling in science communication, and participants including the two keynote speakers offered plenty of their own stories. Francis Collins, who directed the National Institutes of Health between 2009 and 2021, and Sudanese climate journalist Lina Yassin delivered a joint keynote address moderated by MIT Vice President for Communications Alfred Ironside.
Recalling his time leading the NIH through the Covid-19 pandemic, Collins said the Covid-19 vaccine development was a major success, but the scientific community failed to explain to the public the way science evolves based on new evidence.
“We missed a chance to use the pandemic as a teachable moment,” Collins said. “In March of 2020, we were just starting to learn about the virus and how it spread, but we had to make recommendations to the public, which would often change a month or two later. So people began to doubt the information they were getting was reliable because it kept changing. If you’re in a circumstance where you’re communicating scientific evidence, start by saying, ‘This is a work in progress.’”
Collins said the government should have had a better plan for communicating information to the public when the pandemic started.
“Our health system was badly broken at the time because it had been underinvested in for far too long, so community-based education wasn’t really possible,” Collins said, noting his agency should have done more to empower physicians who were trusted voices in rural communities. “Far too much of our communication was top down.”
In her keynote address, Yassin shared her experience trying to get people in her home country to evacuate ahead of natural disasters. She said many people initially ignored her advice, citing their faith in God’s plan for them. But when she reframed her messaging to incorporate the teachings of Islam, a religion most of the country practices, she said people were much more receptive.
That was another recurring lesson participants shared: Science discussions don’t occur in a vacuum. Any conversation that ignores a person’s existing values and experiences will be less effective.
“Personal experience, as well as personal faith and belief, are critically important filters that we encounter every time we talk to people about science,” Ironside said.
Want to climb the leadership ladder? Try debate training
Experiments find debate training boosts careers by enhancing assertiveness and communications techniques.
For those looking to climb the corporate ladder in the U.S., here’s an idea you might not have considered: debate training.
According to a new research paper, people who learn the basics of debate are more likely to advance to leadership roles in U.S. organizations, compared to those who do not receive this training. One key reason is that being equipped with debate skills makes people more assertive in the workplace.
“Debate training can promote leadership emergence and advancement by fostering individuals’ assertiveness, which is a key, valued leadership characteristic in U.S. organizations,” says MIT Associate Professor Jackson Lu, one of the scholars who conducted the study.
The research is based on two experiments and provides empirical insights into leadership development, a subject more often discussed anecdotally than studied systematically.
“Leadership development is a multi-billion-dollar industry, where people spend a lot of money trying to help individuals emerge as leaders,” Lu says. “But the public doesn’t actually know what would be effective, because there hasn’t been a lot of causal evidence. That’s exactly what we provide.”
The paper, “Breaking Ceilings: Debate Training Promotes Leadership Emergence by Increasing Assertiveness,” was published Monday in the Journal of Applied Psychology. The authors are Lu, an associate professor at the MIT Sloan School of Management; Michelle X. Zhao, an undergraduate student at the Olin Business School of Washington University in St. Louis; Hui Liao, a professor and assistant dean at the University of Maryland’s Robert H. Smith School of Business; and Lu Doris Zhang, a doctoral student at MIT Sloan.
Assertiveness in the attention economy
The researchers conducted two experiments. In the first, 471 employees in a Fortune 100 firm were randomly assigned to receive either nine weeks of debate training or no training. When examined 18 months later, those who had received debate training were about 12 percentage points more likely to have advanced to leadership roles. This effect was statistically explained by increased assertiveness among those with debate training.
The second experiment, conducted with 975 university participants, further tested the causal effects of debate training in a controlled setting. Participants were randomly assigned to receive debate training, an alternative non-debate training, or no training. Consistent with the first experiment, participants receiving the debate training were more likely to emerge as leaders in subsequent group activities, an effect statistically explained by their increased assertiveness.
“The inclusion of a non-debate training condition allowed us to causally claim that debate training, rather than just any training, improved assertiveness and increased leadership emergence,” Zhang says.
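The phrase “statistically explained by increased assertiveness” refers to a mediation analysis. For readers unfamiliar with the technique, a minimal sketch follows; the simulated data, variable names, and effect sizes are illustrative assumptions, not the study’s dataset or code.

```python
# Minimal sketch of a mediation analysis of the kind described (simulated
# data; variable names, effect sizes, and model choices are illustrative,
# not the study's): training -> assertiveness -> leadership emergence.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
training = rng.integers(0, 2, n)                         # 1 = debate training
assertiveness = 0.5 * training + rng.normal(size=n)      # mediator
leadership = 0.4 * assertiveness + rng.normal(size=n)    # outcome

# Path a: effect of training on the mediator
a = sm.OLS(assertiveness, sm.add_constant(training)).fit().params[1]
# Path b: effect of the mediator on the outcome, controlling for training
X = sm.add_constant(np.column_stack([training, assertiveness]))
b = sm.OLS(leadership, X).fit().params[2]

print(f"indirect (mediated) effect a*b ≈ {a * b:.2f}")
```

The product of the two paths is the indirect effect, the portion of the training-to-leadership link that runs through assertiveness.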
To some people, increasing assertiveness might not seem like an ideal recipe for success in an organizational setting, as it might seem likely to increase tensions or decrease cooperation. But as the authors note, the American Psychological Association conceptualizes assertiveness as “an adaptive style of communication in which individuals express their feelings and needs directly, while maintaining respect for others.”
Lu adds: “Assertiveness is conceptually different from aggressiveness. To speak up in meetings or classrooms, people don’t need to be aggressive jerks. You can ask questions politely, yet still effectively express opinions. Of course, that’s different from not saying anything at all.”
Moreover, in the contemporary world where we all must compete for attention, refined communication skills may be more important than ever.
“Whether it is cutting filler or mastering pacing, knowing how to assert our opinions helps us sound more leader-like,” Zhang says.
How firms identify leaders
The research also finds that debate training benefits people across demographics: Its impact was not significantly different for men or women, for those born in the U.S. or outside it, or for different ethnic groups.
However, the findings raise still other questions about how firms identify leaders. As the results show, individuals might have incentive to seek debate training and other general workplace skills. But how much responsibility do firms have to understand and recognize the many kinds of skills, beyond assertiveness, that employees may have?
“We emphasize that the onus of breaking leadership barriers should not fall on individuals themselves,” Lu says. “Organizations should also recognize and appreciate different communication and leadership styles in the workplace.”
Lu also notes that ongoing work is needed to understand whether firms are properly valuing the attributes of their own leaders.
“There is an important distinction between leadership emergence and leadership effectiveness,” Lu says. “Our paper looks at leadership emergence. It’s possible that people who are better listeners, who are more cooperative, and humbler, should also be selected for leadership positions because they are more effective leaders.”
This research was partly funded by the Society for Personality and Social Psychology.
Making solar projects cheaper and faster with portable factories
Charge Robotics, founded by MIT alumni, has created a system that automatically assembles and installs completed sections of large solar farms.
As the price of solar panels has plummeted in recent decades, installation costs have taken up a greater share of the technology’s overall price tag. The long installation process for solar farms is also emerging as a key bottleneck in the deployment of solar energy.
Now the startup Charge Robotics is developing solar installation factories to speed up the process of building large-scale solar farms. The company’s factories are shipped to the site of utility solar projects, where components including tracks, mounting brackets, and panels are fed into the system and automatically assembled. A robotic vehicle autonomously puts the finished product — which amounts to a completed section of solar farm — in its final place.
“We think of this as the Henry Ford moment for solar,” says CEO Banks Hunter ’15, who founded Charge Robotics with fellow MIT alumnus Max Justicz ’17. “We’re going from a very bespoke, hands-on, manual installation process to something much more streamlined and set up for mass manufacturing. There are all kinds of benefits that come along with that, including consistency, quality, speed, cost, and safety.”
Last year, solar energy accounted for 81 percent of new electric capacity in the U.S., and Hunter and Justicz see their factories as necessary for continued acceleration in the industry.
The founders say they were met with skepticism when they first unveiled their plans. But in the beginning of last year, they deployed a prototype system that successfully built a solar farm with SOLV Energy, one of the largest solar installers in the U.S. Now, Charge has raised $22 million for its first commercial deployments later this year.
From surgical robots to solar robots
While majoring in mechanical engineering at MIT, Hunter found plenty of excuses to build things. One such excuse was Course 2.009 (Product Engineering Processes), where he and his classmates built a smart watch for communication in remote areas.
After graduation, Hunter worked for the MIT alumni-founded startups Shaper Tools and Vicarious Surgical. Vicarious Surgical is a medical robotics company that has raised more than $450 million to date. Hunter was the second employee and worked there for five years.
“A lot of really hands-on, project-based classes at MIT translated directly into my first roles coming out of school and set me up to be very independent and run large engineering projects,” Hunter says. “Course 2.009, in particular, was a big launch point for me. The founders of Vicarious Surgical got in touch with me through the 2.009 network.”
As early as 2017, Hunter and Justicz, who majored in mechanical engineering and computer science, had discussed starting a company together. But they had to decide where to apply their broad engineering and product skillsets.
“Both of us care a lot about climate change. We see climate change as the biggest problem impacting the greatest number of people on the planet,” Hunter says. “Our mentality was if we can build anything, we might as well build something that really matters.”
In the process of cold calling hundreds of people in the energy industry, the founders decided solar was the future of energy production because its price was decreasing so quickly.
“It’s becoming cheaper faster than any other form of energy production in human history,” Hunter says.
When the founders began visiting construction sites for the large, utility-scale solar farms that make up the bulk of solar energy generation, it wasn’t hard to find the bottlenecks. The first site they traveled to was in the Mojave Desert in California. Hunter describes it as a massive dust bowl where thousands of workers spent months repeating tasks like moving material and assembling the same parts, over and over again.
“The site had something like 2 million panels on it, and every single one was assembled and fastened the same way by hand,” Hunter says. “Max and I thought it was insane. There’s no way that can scale to transform the energy grid in a short window of time.”
Hunter says he heard from each of the largest solar companies in the U.S. that their biggest limitation for scaling was labor shortages. The problem was slowing growth and killing projects.
Hunter and Justicz founded Charge Robotics in 2021 to break through that bottleneck. Their first step was to order utility solar parts and assemble them by hand in their backyards.
“From there, we came up with this portable assembly line that we could ship out to construction sites and then feed in the entire solar system, including the steel tracks, mounting brackets, fasteners, and the solar panels,” Hunter explains. “The assembly line robotically assembles all those pieces to produce completed solar bays, which are chunks of a solar farm.”
Each bay represents a 40-foot piece of the solar farm and weighs about 800 pounds. A robotic vehicle brings it to its final location in the field. Hunter says Charge’s system automates all mechanical installation except for the process of pile driving the first metal stakes into the ground.
Charge’s assembly lines also have machine-vision systems that scan each part to ensure quality, and the systems work with the most common solar parts and panel sizes.
From pilot to product
When the founders started pitching their plans to investors and construction companies, people didn’t believe it was possible.
“The initial feedback was basically, ‘This will never work,’” Hunter says. “But as soon as we took our first system out into the field and people saw it operating, they got much more excited and started believing it was real.”
Since that first deployment, Charge’s team has been making its system faster and easier to operate. The company plans to set up its factories at project sites and run them in partnership with solar construction companies. The factories could even run alongside human workers.
“With our system, people are operating robotic equipment remotely rather than putting in the screws themselves,” Hunter explains. “We can essentially deliver the assembled solar to customers. Their only responsibility is to deliver the materials and parts on big pallets that we feed into our system.”
Hunter says multiple factories could be deployed at the same site and could also operate 24/7 to dramatically speed up projects.
“We are hitting the limits of solar growth because these companies don’t have enough people,” Hunter says. “We can build much bigger sites much faster with the same number of people by just shipping out more of our factories. It’s a fundamentally new way of scaling solar energy.”
Compassionate leadership
Professors Emery Brown and Hamsa Balakrishnan are honored as “Committed to Caring” for their guidance of graduate students.
Professors Emery Brown and Hamsa Balakrishnan work in vastly different fields, but are united by their deep commitment to mentoring students. While each has contributed to major advancements in their respective areas — statistical neuroscience for Brown, and large-scale transportation systems for Balakrishnan — their students might argue that their greatest impact comes from the guidance, empathy, and personal support they provide.
Emery Brown: Holistic mentorship
Brown is the Edward Hood Professor of Medical Engineering and Computational Neuroscience at MIT and a practicing anesthesiologist at Massachusetts General Hospital. Brown’s experimental research has made important contributions toward understanding the neuroscience of how anesthetics act in the brain to create the states of general anesthesia.
One of the biggest challenges in academic environments is knowing how to chart a course. Brown takes the time to connect with students individually, helping them identify meaningful pathways that they may not have considered for themselves. In addition to mentoring his graduate students and postdocs, Brown also hosts clinicians and faculty from around the world. Their presence in the lab exposes students to a number of career opportunities and connections outside of MIT’s academic environment.
Brown also continues to support former students beyond their time in his lab, offering guidance on personal and professional development even after they have moved on to other roles. “Knowing that I have Emery at my back as someone I can always turn to … is such a source of confidence and strength as I go forward into my own career,” one nominator wrote.
When Brown faced a major career decision recently, he turned to his students to ask how his choice might affect them. He met with students individually to understand the personal impact that each might experience. Brown was adamant in ensuring that his professional advancement would not jeopardize his students, and invested a great deal of thought and effort in ensuring a positive outcome for them.
Brown is deeply committed to the health and well-being of his students, with many nominators sharing examples of his constant support through challenging personal circumstances. When one student reached out to Brown, overwhelmed by research, recent personal loss, and career uncertainty, Brown created a safe space for vulnerable conversations.
“He listened, supported me, and encouraged me to reflect on my aspirations for the next five years, assuring me that I should pursue them regardless of any obstacles,” the nominator shared. “Following our conversation, I felt more grounded and regained momentum in my research project.”
In sum, his student felt that Brown’s advice was “simple, yet enlightening, and exactly what I needed to hear at that moment.”
Hamsa Balakrishnan: Unequivocal advocacy
Balakrishnan is the William E. Leonhard Professor of Aeronautics and Astronautics at MIT. She leads the Dynamics, Infrastructure Networks, and Mobility (DINaMo) Research Group. Her current research interests are in the design, analysis, and implementation of control and optimization algorithms for large-scale cyber-physical infrastructures, with an emphasis on air transportation systems.
Her nominators commended Balakrishnan for her efforts to support and advocate for all of her students. In particular, she connects her students to academic mentors within the community, which contributes to their sense of acceptance within the field.
Balakrishnan’s mindfulness in respecting personal expression and her proactive approach to making everyone feel welcome have made a lasting impact on her students. “Hamsa’s efforts have encouraged me to bring my full self to the workplace,” one student wrote; “I will be forever grateful for her mentorship and kindness as an advisor.”
One student shared their experience of moving from a difficult advising situation to working with Balakrishnan, describing how her mentorship was crucial in the nominator’s successful return to research: “Hamsa’s mentorship has been vital to building up my confidence as a researcher, as she [often] provides helpful guidance and positive affirmation.”
Balakrishnan frequently gives her students freedom to independently explore and develop their research interests. When students wanted to delve into new areas like space research — far removed from her expertise in air traffic management and uncrewed aerial vehicles — Balakrishnan embraced the challenge and learned about these topics in order to provide better guidance.
One student described how Balakrishnan consistently encouraged the lab to work on topics that interested them. This led the student to develop a novel research topic and publish a first-author paper within months of joining the lab.
Balakrishnan is deeply committed to promoting a healthy work-life balance for her students. She ensures that mentees do not feel compelled to overwork by encouraging them to take time off. Even if students do not have significant updates, Balakrishnan encourages weekly meetings to foster an open line of communication. She helps them set attainable goals, especially when it comes to tasks like paper reading and writing, and never pressures them to work late hours in order to meet paper or conference deadlines.
How nature organizes itself, from brain cells to ecosystems
McGovern Institute researchers develop a mathematical model to help define how modularity occurs in the brain — and across nature.
Look around, and you’ll see it everywhere: the way trees form branches, the way cities divide into neighborhoods, the way the brain organizes into regions. Nature loves modularity — a limited number of self-contained units that combine in different ways to perform many functions. But how does this organization arise? Does it follow a detailed genetic blueprint, or can these structures emerge on their own?
A new study from MIT Professor Ila Fiete suggests a surprising answer.
In findings published Feb. 18 in Nature, Fiete, an associate investigator in the McGovern Institute for Brain Research and director of the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT, reports that a mathematical model called peak selection can explain how modules emerge without strict genetic instructions. Her team’s findings, which apply to brain systems and ecosystems, help explain how modularity occurs across nature, no matter the scale.
Joining two big ideas
“Scientists have debated how modular structures form. One hypothesis suggests that various genes are turned on at different locations to begin or end a structure. This explains how insect embryos develop body segments, with genes turning on or off at specific concentrations of a smooth chemical gradient in the insect egg,” says Fiete, who is the senior author of the paper. Mikail Khona PhD '25, a former graduate student and K. Lisa Yang ICoN Center graduate fellow, and postdoc Sarthak Chandra also led the study.
Another idea, inspired by mathematician Alan Turing, suggests that a structure could emerge from competition — small-scale interactions can create repeating patterns, like the spots on a cheetah or the ripples in sand dunes.
Both ideas work well in some cases, but fail in others. The new research suggests that nature need not pick one approach over the other. The authors propose a simple mathematical principle called peak selection, showing that when a smooth gradient is paired with local interactions that are competitive, modular structures emerge naturally. “In this way, biological systems can organize themselves into sharp modules without detailed top-down instruction,” says Chandra.
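To make the principle concrete, here is a toy sketch, not the published model: a smoothly varying property along a line, combined with local winner-take-all competition, settles into a handful of discrete groups with sharp boundaries. The inhibition radius, noise level, and number of units are arbitrary choices for illustration.

```python
# Minimal toy of the "peak selection" idea (illustration only; the
# published model is a dynamical neural model, not this sketch).
# A smooth gradient plus local winner-take-all competition yields a
# small number of discrete modules with sharp boundaries.
import numpy as np

rng = np.random.default_rng(0)
n = 300
gradient = np.linspace(1.0, 2.0, n)              # smoothly varying cell property
activity = 1.0 + 0.05 * rng.standard_normal(n)   # near-uniform drive plus noise
inhibition_radius = 40                           # range of local competition

# Greedy winner-take-all: the most active remaining unit becomes a "peak"
# and suppresses its neighbors, so the selected peaks end up spread out.
remaining = np.ones(n, dtype=bool)
peaks = []
while remaining.any():
    i = int(np.argmax(np.where(remaining, activity, -np.inf)))
    peaks.append(i)
    lo, hi = max(0, i - inhibition_radius), min(n, i + inhibition_radius + 1)
    remaining[lo:hi] = False

# Each unit adopts the gradient value of its nearest peak, quantizing the
# smooth gradient into discrete modules with abrupt jumps at the boundaries.
peaks = np.sort(np.array(peaks))
nearest = np.argmin(np.abs(np.arange(n)[:, None] - peaks[None, :]), axis=1)
module_scale = gradient[peaks][nearest]

print("module scales:", np.round(np.unique(module_scale), 2))
print("boundaries at units:", np.flatnonzero(np.diff(module_scale) != 0))
```

The point of the toy is simply that nothing in the gradient itself marks where one module ends and the next begins; the sharp jumps come from the competition.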
Modular systems in the brain
The researchers tested their idea on grid cells, which play a critical role in spatial navigation as well as the storage of episodic memories. Grid cells fire in a repeating triangular pattern as animals move through space, but they don’t all work at the same scale — they are organized into distinct modules, each responsible for mapping space at slightly different resolutions.
No one knows how these modules form, but Fiete’s model shows that gradual variations in cellular properties along one dimension in the brain, combined with local neural interactions, could explain the entire structure. The grid cells naturally sort themselves into distinct groups with clear boundaries, without external maps or genetic programs telling them where to go. “Our work explains how grid cell modules could emerge. The explanation tips the balance toward the possibility of self-organization. It predicts that there might be no gene or intrinsic cell property that jumps when the grid cell scale jumps to another module,” notes Khona.
Modular systems in nature
The same principle applies beyond neuroscience. Imagine a landscape where temperatures and rainfall vary gradually over a space. You might expect species to spread, and to vary, smoothly across this region. But in reality, ecosystems often form species clusters with sharp boundaries — distinct ecological “neighborhoods” that don’t overlap.
Fiete’s study suggests why: local competition, cooperation, and predation between species interact with the global environmental gradients to create natural separations, even when the underlying conditions change gradually. This phenomenon can be explained using peak selection — and suggests that the same principle that shapes brain circuits could also be at play in forests and oceans.
A self-organizing world
One of the researchers’ most striking findings is that modularity in these systems is remarkably robust. Change the size of the system, and the number of modules stays the same — they just scale up or down. That means a mouse brain and a human brain could use the same fundamental rules to form their navigation circuits, just at different sizes.
The model also makes testable predictions. If it’s correct, grid cell modules should follow simple spacing ratios. In ecosystems, species distributions should form distinct clusters even without sharp environmental shifts.
Fiete notes that their work adds another conceptual framework to biology. “Peak selection can inform future experiments, not only in grid cell research but across developmental biology.”
Study: Climate change will reduce the number of satellites that can safely orbit in space
Increasing greenhouse gas emissions will reduce the atmosphere’s ability to burn up old space junk, MIT scientists report.
MIT aerospace engineers have found that greenhouse gas emissions are changing the environment of near-Earth space in ways that, over time, will reduce the number of satellites that can sustainably operate there.
In a study appearing today in Nature Sustainability, the researchers report that carbon dioxide and other greenhouse gases can cause the upper atmosphere to shrink. An atmospheric layer of special interest is the thermosphere, where the International Space Station and most satellites orbit today. When the thermosphere contracts, the decreasing density reduces atmospheric drag — a force that pulls old satellites and other debris down to altitudes where they will encounter air molecules and burn up.
Less drag therefore means extended lifetimes for space junk, which will litter sought-after regions for decades and increase the potential for collisions in orbit.
The team carried out simulations of how carbon emissions affect the upper atmosphere and orbital dynamics, in order to estimate the “satellite carrying capacity” of low Earth orbit. These simulations predict that by the year 2100, the carrying capacity of the most popular regions could be reduced by 50-66 percent due to the effects of greenhouse gases.
“Our behavior with greenhouse gases here on Earth over the past 100 years is having an effect on how we operate satellites over the next 100 years,” says study author Richard Linares, associate professor in MIT’s Department of Aeronautics and Astronautics (AeroAstro).
“The upper atmosphere is in a fragile state as climate change disrupts the status quo,” adds lead author William Parker, a graduate student in AeroAstro. “At the same time, there’s been a massive increase in the number of satellites launched, especially for delivering broadband internet from space. If we don’t manage this activity carefully and work to reduce our emissions, space could become too crowded, leading to more collisions and debris.”
The study includes co-author Matthew Brown of the University of Birmingham.
Sky fall
The thermosphere naturally contracts and expands every 11 years in response to the sun’s regular activity cycle. When the sun’s activity is low, the Earth receives less radiation, and its outermost atmosphere temporarily cools and contracts before expanding again during solar maximum.
In the 1990s, scientists wondered what response the thermosphere might have to greenhouse gases. Their preliminary modeling showed that, while the gases trap heat in the lower atmosphere, where we experience global warming and weather, the same gases radiate heat at much higher altitudes, effectively cooling the thermosphere. With this cooling, the researchers predicted that the thermosphere should shrink, reducing atmospheric density at high altitudes.
In the last decade, scientists have been able to measure changes in drag on satellites, which has provided some evidence that the thermosphere is contracting in response to something more than the sun’s natural, 11-year cycle.
“The sky is quite literally falling — just at a rate that’s on the scale of decades,” Parker says. “And we can see this by how the drag on our satellites is changing.”
The MIT team wondered how that response will affect the number of satellites that can safely operate in Earth’s orbit. Today, there are over 10,000 satellites drifting through low Earth orbit, which describes the region of space up to 1,200 miles (2,000 kilometers) from Earth’s surface. These satellites deliver essential services, including internet, communications, navigation, weather forecasting, and banking. The satellite population has ballooned in recent years, requiring operators to perform regular collision-avoidance maneuvers to keep safe. Any collisions that do occur can generate debris that remains in orbit for decades or centuries, increasing the chance for follow-on collisions with satellites, both old and new.
“More satellites have been launched in the last five years than in the preceding 60 years combined,” Parker says. “One of the key things we’re trying to understand is whether the path we’re on today is sustainable.”
Crowded shells
In their new study, the researchers simulated different greenhouse gas emissions scenarios over the next century to investigate impacts on atmospheric density and drag. For each “shell,” or altitude range of interest, they then modeled the orbital dynamics and the risk of satellite collisions based on the number of objects within the shell. They used this approach to identify each shell’s “carrying capacity” — a term that is typically used in studies of ecology to describe the number of individuals that an ecosystem can support.
“We’re taking that carrying capacity idea and translating it to this space sustainability problem, to understand how many satellites low Earth orbit can sustain,” Parker explains.
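The article does not give the study’s equations, but a back-of-the-envelope sketch can convey the carrying-capacity idea. Everything below, including the functional forms, the constants, and the shell_capacity helper, is an assumption for illustration rather than the researchers’ model: debris created by collisions is assumed to be removed by drag at a rate proportional to atmospheric density.

```python
# Back-of-the-envelope sketch of "carrying capacity" for one orbital shell.
# All functional forms and numbers are illustrative assumptions, not the
# study's model: debris from collisions is removed by drag, drag scales
# with atmospheric density, and capacity is the satellite count at which
# debris creation balances removal below a tolerable debris level.

def shell_capacity(density_rel, k_collision=1e-9, debris_per_collision=100.0,
                   removal_rate_at_baseline=0.2, max_debris=1e4):
    """Sustainable satellite count for one altitude shell (toy model).

    density_rel: thermospheric density relative to a year-2000 baseline.
    Steady state: k_collision * N**2 * debris_per_collision
                  = removal_rate * debris, with removal_rate ∝ density.
    Capacity is the largest N keeping steady-state debris <= max_debris.
    """
    removal_rate = removal_rate_at_baseline * density_rel
    return (removal_rate * max_debris / (k_collision * debris_per_collision)) ** 0.5

baseline = shell_capacity(1.0)
for density in (1.0, 0.5, 0.25):      # hypothetical density declines
    cap = shell_capacity(density)
    print(f"density x{density}: capacity {cap:,.0f} "
          f"({100 * (1 - cap / baseline):.0f}% below baseline)")
```

In this toy scaling, capacity falls with the square root of density, so a large drop in drag produces a smaller but still substantial drop in capacity; the study itself models per-shell orbital dynamics and collision risk in far greater detail.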
The team compared several scenarios: one in which greenhouse gas concentrations remain at their level from the year 2000 and others where emissions change according to the Intergovernmental Panel on Climate Change (IPCC) Shared Socioeconomic Pathways (SSPs). They found that scenarios with continuing increases in emissions would lead to a significantly reduced carrying capacity throughout low Earth orbit.
In particular, the team estimates that by the end of this century, the number of satellites safely accommodated between altitudes of 200 and 1,000 kilometers could be reduced by 50 to 66 percent compared with a scenario in which emissions remain at year-2000 levels. If satellite capacity is exceeded, even in a local region, the researchers predict that the region will experience a “runaway instability,” or a cascade of collisions that would create so much debris that satellites could no longer safely operate there.
Their predictions forecast out to the year 2100, but the team says that certain shells in the atmosphere today are already crowding up with satellites, particularly from recent “megaconstellations” such as SpaceX’s Starlink, which comprises fleets of thousands of small internet satellites.
“The megaconstellation is a new trend, and we’re showing that because of climate change, we’re going to have a reduced capacity in orbit,” Linares says. “And in local regions, we’re close to approaching this capacity value today.”
“We rely on the atmosphere to clean up our debris. If the atmosphere is changing, then the debris environment will change too,” Parker adds. “We show the long-term outlook on orbital debris is critically dependent on curbing our greenhouse gas emissions.”
This research is supported, in part, by the U.S. National Science Foundation, the U.S. Air Force, and the U.K. Natural Environment Research Council.
Study: Tuberculosis relies on protective genes during airborne transmission
The findings provide new drug targets for stopping the infection’s spread.
Tuberculosis lives and thrives in the lungs. When the bacteria that cause the disease are coughed into the air, they are thrust into a comparatively hostile environment, with drastic changes to their surrounding pH and chemistry. How these bacteria survive their airborne journey is key to their persistence, but very little is known about how they protect themselves as they waft from one host to the next.
Now MIT researchers and their collaborators have discovered a family of genes that becomes essential for survival specifically when the pathogen is exposed to the air, likely protecting the bacterium during its flight.
Many of these genes were previously considered to be nonessential, as they didn’t seem to have any effect on the bacteria’s role in causing disease when injected into a host. The new work suggests that these genes are indeed essential, though for transmission rather than proliferation.
“There is a blind spot that we have toward airborne transmission, in terms of how a pathogen can survive these sudden changes as it circulates in the air,” says Lydia Bourouiba, who is the head of the Fluid Dynamics of Disease Transmission Laboratory, an associate professor of civil and environmental engineering and mechanical engineering, and a core faculty member in the Institute for Medical Engineering and Science at MIT. “Now we have a sense, through these genes, of what tools tuberculosis uses to protect itself.”
The team’s results, appearing this week in the Proceedings of the National Academy of Sciences, could provide new targets for tuberculosis therapies that simultaneously treat infection and prevent transmission.
“If a drug were to target the product of these same genes, it could effectively treat an individual, and even before that person is cured, it could keep the infection from spreading to others,” says Carl Nathan, chair of the Department of Microbiology and Immunology and R.A. Rees Pritchett Professor of Microbiology at Weill Cornell Medicine.
Nathan and Bourouiba are co-senior authors of the study, which includes MIT co-authors and mentees of Bourouiba in the Fluids and Health Network: co-lead author postdoc Xiaoyi Hu, postdoc Eric Shen, and student mentees Robin Jahn and Luc Geurts. The study also includes collaborators from Weill Cornell Medicine, the University of California at San Diego, Rockefeller University, Hackensack Meridian Health, and the University of Washington.
Pathogen’s perspective
Tuberculosis is a respiratory disease caused by Mycobacterium tuberculosis, a bacterium that most commonly affects the lungs and is transmitted through droplets that an infected individual expels into the air, often through coughing or sneezing. Tuberculosis is the single leading cause of death from infection, except during the major global pandemics caused by viruses.
“In the last 100 years, we have had the 1918 influenza, the 1981 HIV/AIDS epidemic, and the 2019 SARS-CoV-2 pandemic,” Nathan notes. “Each of those viruses has killed an enormous number of people. And as they have settled down, we are left with a ‘permanent pandemic’ of tuberculosis.”
Much of the research on tuberculosis centers on its pathophysiology — the mechanisms by which the bacteria take over and infect a host — as well as ways to diagnose and treat the disease. For their new study, Nathan and Bourouiba focused on transmission of tuberculosis, from the perspective of the bacterium itself, to investigate what defenses it might rely on to help it survive its airborne transmission.
“This is one of the first attempts to look at tuberculosis from the airborne perspective, in terms of what is happening to the organism, at the level of being protected from these sudden changes and very harsh biophysical conditions,” Bourouiba says.
Critical defense
At MIT, Bourouiba studies the physics of fluids and the ways in which droplet dynamics can spread particles and pathogens. She teamed up with Nathan, who studies tuberculosis, and the genes that the bacteria rely on throughout their life cycle.
To get a handle on how tuberculosis can survive in the air, the team aimed to mimic the conditions that the bacterium experiences during transmission. The researchers first looked to develop a fluid that is similar in viscosity and droplet sizes to what a patient would cough or sneeze out into the air. Bourouiba notes that much of the experimental work that has been done on tuberculosis in the past has been based on a liquid solution that scientists use to grow the bacteria. But the team found that this liquid has a chemical composition that is very different from the fluid that tuberculosis patients actually cough and sneeze into the air.
Additionally, Bourouiba notes that fluid commonly sampled from tuberculosis patients is based on sputum that a patient spits out, for instance for a diagnostic test. “The fluid is thick and gooey and it’s what most of the tuberculosis world considers to represent what is happening in the body,” she says. “But it’s extraordinarily inefficient in spreading to others because it’s too sticky to break into inhalable droplets.”
Through Bourouiba’s work with fluid and droplet physics, the team determined the more realistic viscosity and likely size distribution of tuberculosis-carrying microdroplets that would be transmitted through the air. The team also characterized the droplet compositions, based on analyses of patient samples of infected lung tissues. They then created a more realistic fluid, with a composition, viscosity, surface tension and droplet size that is similar to what would be released into the air from exhalations.
Then, the researchers deposited different fluid mixtures onto plates in tiny individual droplets and measured in detail how they evaporate and what internal structure they leave behind. They observed that the new fluid tended to shield the bacteria at the center of the droplet as the droplet evaporated, compared to conventional fluids where bacteria tended to be more exposed to the air. The more realistic fluid was also capable of retaining more water.
Additionally, the team infused each droplet with bacteria carrying knockdowns of various genes, to see whether the absence of certain genes would affect the bacteria’s survival as the droplets evaporated.
In this way, the team assessed the activity of over 4,000 tuberculosis genes and discovered a family of several hundred genes that seemed to become important specifically as the bacteria adapted to airborne conditions. Many of these genes are involved in repairing damage to oxidized proteins, such as proteins that have been exposed to air. Other activated genes have to do with destroying damaged proteins that are beyond repair.
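As an illustration of how such a screen might be read out, the sketch below flags genes whose knockdown strains survive standard culture but are depleted after droplet drying. The read counts, gene names, and threshold are hypothetical, and the article does not describe the study’s actual statistical pipeline.

```python
# Illustrative sketch of flagging conditionally essential genes from a
# knockdown screen (hypothetical counts, gene names, and thresholds; not
# the study's pipeline). A gene is a candidate if its knockdown survives
# standard culture but is strongly depleted after droplet drying.
import numpy as np
import pandas as pd

counts = pd.DataFrame({                 # hypothetical read counts per strain
    "gene":    ["geneA", "geneB", "geneC", "geneD"],
    "culture": [1_000,    950,     1_100,   1_020],
    "dried":   [  980,     40,     1_050,      12],
})

pseudo = 1.0                            # avoid taking the log of zero
counts["log2_fc"] = np.log2((counts["dried"] + pseudo) /
                            (counts["culture"] + pseudo))

# Strong depletion only in the air-exposed (dried) condition suggests the
# gene protects the bacterium during transmission rather than infection.
candidates = counts[counts["log2_fc"] < -2.0]
print(candidates[["gene", "log2_fc"]])
```

In a real screen, thousands of strains would be compared at once and the depletion scores would be tested statistically rather than thresholded by eye.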
“What we turned up was a candidate list that’s very long,” Nathan says. “There are hundreds of genes, some more prominently implicated than others, that may be critically involved in helping tuberculosis survive its transmission phase.”
The team acknowledges the experiments are not a complete analog of the bacteria’s biophysical transmission. In reality, tuberculosis is carried in droplets that fly through the air, evaporating as they go. In order to carry out their genetic analyses, the team had to work with droplets sitting on a plate. Under these constraints, they mimicked the droplet transmission as best they could, by setting the plates in an extremely dry chamber to accelerate the droplets’ evaporation, analogous to what they would experience in flight.
Going forward, the researchers have started experimenting with platforms that allow them to study the droplets in flight, in a range of conditions. They plan to focus on the new family of genes in even more realistic experiments, to confirm whether the genes do indeed shield Mycobacterium tuberculosis as it is transmitted through the air, potentially opening the way to weakening its airborne defenses.
“The idea of waiting to find someone with tuberculosis, then treating and curing them, is a totally inefficient way to stop the pandemic,” Nathan says. “Most people who exhale tuberculosis do not yet have a diagnosis. So we have to interrupt its transmission. And how do you do that, if you don’t know anything about the process itself? We have some ideas now.”
This work was supported, in part, by the National Institutes of Health, the Abby and Howard P. Milstein Program in Chemical Biology and Translational Medicine, the Potts Memorial Foundation, the National Science Foundation Center for Analysis and Prediction of Pandemic Expansion (APPEX), Inditex, the NASA Translational Research Institute for Space Health, and Analog Devices, Inc.
Robotic helper making mistakes? Just nudge it in the right direction
New research could allow a person to correct a robot’s actions in real time, using the kind of feedback they’d give another human.
Imagine that a robot is helping you clean the dishes. You ask it to grab a soapy bowl out of the sink, but its gripper slightly misses the mark.
Using a new framework developed by MIT and NVIDIA researchers, you could correct that robot’s behavior with simple interactions. The method would allow you to point to the bowl or trace a trajectory to it on a screen, or simply give the robot’s arm a nudge in the right direction.
Unlike other methods for correcting robot behavior, this technique does not require users to collect new data and retrain the machine-learning model that powers the robot’s brain. It enables a robot to use intuitive, real-time human feedback to choose a feasible action sequence that gets as close as possible to satisfying the user’s intent.
When the researchers tested their framework, its success rate was 21 percent higher than an alternative method that did not leverage human interventions.
In the long run, this framework could enable a user to more easily guide a factory-trained robot to perform a wide variety of household tasks even though the robot has never seen their home or the objects in it.
“We can’t expect laypeople to perform data collection and fine-tune a neural network model. The consumer will expect the robot to work right out of the box, and if it doesn’t, they would want an intuitive mechanism to customize it. That is the challenge we tackled in this work,” says Felix Yanwei Wang, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this method.
His co-authors include Lirui Wang PhD ’24 and Yilun Du PhD ’24; senior author Julie Shah, an MIT professor of aeronautics and astronautics and the director of the Interactive Robotics Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL); as well as Balakumar Sundaralingam, Xuning Yang, Yu-Wei Chao, Claudia Perez-D’Arpino PhD ’19, and Dieter Fox of NVIDIA. The research will be presented at the International Conference on Robotics and Automation.
Mitigating misalignment
Recently, researchers have begun using pre-trained generative AI models to learn a “policy,” or a set of rules, that a robot follows to complete an action. Generative models can solve multiple complex tasks.
During training, the model only sees feasible robot motions, so it learns to generate valid trajectories for the robot to follow.
While these trajectories are valid, that doesn’t mean they always align with a user’s intent in the real world. The robot might have been trained to grab boxes off a shelf without knocking them over, but it could fail to reach the box on top of someone’s bookshelf if the shelf is oriented differently than those it saw in training.
To overcome these failures, engineers typically collect data demonstrating the new task and re-train the generative model, a costly and time-consuming process that requires machine-learning expertise.
Instead, the MIT researchers wanted to allow users to steer the robot’s behavior during deployment when it makes a mistake.
But if a human interacts with the robot to correct its behavior, that could inadvertently cause the generative model to choose an invalid action. It might reach the box the user wants, but knock books off the shelf in the process.
“We want to allow the user to interact with the robot without introducing those kinds of mistakes, so we get a behavior that is much more aligned with user intent during deployment, but that is also valid and feasible,” Wang says.
Their framework accomplishes this by providing the user with three intuitive ways to correct the robot’s behavior, each of which offers certain advantages.
First, the user can point to the object they want the robot to manipulate in an interface that shows its camera view. Second, they can trace a trajectory in that interface, allowing them to specify how they want the robot to reach the object. Third, they can physically move the robot’s arm in the direction they want it to follow.
“When you are mapping a 2D image of the environment to actions in a 3D space, some information is lost. Physically nudging the robot is the most direct way to specify user intent without losing any of the information,” says Wang.
Sampling for success
To ensure these interactions don’t cause the robot to choose an invalid action, such as colliding with other objects, the researchers use a specific sampling procedure. This technique lets the model choose an action from the set of valid actions that most closely aligns with the user’s goal.
“Rather than just imposing the user’s will, we give the robot an idea of what the user intends but let the sampling procedure oscillate around its own set of learned behaviors,” Wang explains.
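The article does not spell out the sampling procedure, but one way to picture it is sketched below: draw many candidate trajectories from the learned policy, which by construction proposes only feasible motions, and keep the candidate that best matches the user’s correction. The function names, the random stand-in for the policy, and the distance-based score are assumptions for illustration, not the paper’s algorithm.

```python
# Illustrative sketch (not the paper's algorithm): sample candidate
# trajectories from a learned policy -- which only proposes feasible
# motions -- and pick the candidate closest to the user's correction,
# so the executed action is both valid and aligned with user intent.
import numpy as np

def sample_policy(rng, horizon=20, n_candidates=64):
    """Stand-in for a generative policy: returns feasible 2D trajectories.
    (A real policy would be a trained generative model, not random walks.)"""
    steps = rng.normal(scale=0.05, size=(n_candidates, horizon, 2))
    return np.cumsum(steps, axis=1)          # candidate end-effector paths

def align_to_user(candidates, user_point):
    """Score each feasible candidate by how close its endpoint lands to the
    point the user indicated (a click, a traced path, or a nudge direction)
    and return the best-aligned one."""
    endpoints = candidates[:, -1, :]
    dists = np.linalg.norm(endpoints - user_point, axis=1)
    return candidates[int(np.argmin(dists))]

rng = np.random.default_rng(0)
candidates = sample_policy(rng)
user_point = np.array([0.3, -0.2])           # e.g., the bowl the user pointed at
chosen = align_to_user(candidates, user_point)
print("chosen endpoint:", np.round(chosen[-1], 3))
```

Because every candidate comes from the policy’s own distribution, the selected action stays within the robot’s learned, feasible behaviors while bending toward the user’s intent.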
This sampling method enabled the researchers’ framework to outperform the other methods they compared it to during simulations and experiments with a real robot arm in a toy kitchen.
While their method might not always complete the task right away, it offers users the advantage of being able to immediately correct the robot if they see it doing something wrong, rather than waiting for it to finish and then giving it new instructions.
Moreover, after a user nudges the robot a few times until it picks up the correct bowl, it could log that corrective action and incorporate it into its behavior through future training. Then, the next day, the robot could pick up the correct bowl without needing a nudge.
“But the key to that continuous improvement is having a way for the user to interact with the robot, which is what we have shown here,” Wang says.
In the future, the researchers want to boost the speed of the sampling procedure while maintaining or improving its performance. They also want to experiment with robot policy generation in novel environments.
SMART researchers pioneer nanosensor for real-time iron detection in plants
The innovation enables nondestructive iron tracking within plant tissues, helping to optimize plant nutrient management, reduce fertilizer waste, and improve crop health.
Researchers from the Disruptive and Sustainable Technologies for Agricultural Precision (DiSTAP) interdisciplinary research group of the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, in collaboration with Temasek Life Sciences Laboratory (TLL) and MIT, have developed a groundbreaking near-infrared (NIR) fluorescent nanosensor capable of simultaneously detecting and differentiating between iron forms — Fe(II) and Fe(III) — in living plants.
Iron is crucial for plant health, supporting photosynthesis, respiration, and enzyme function. It primarily exists in two forms: Fe(II), which is readily available for plants to absorb and use, and Fe(III), which must first be converted into Fe(II) before plants can utilize it effectively. Traditional methods only measure total iron, missing the distinction between these forms — a key factor in plant nutrition. Distinguishing between Fe(II) and Fe(III) provides insights into iron uptake efficiency, helps diagnose deficiencies or toxicities, and enables precise fertilization strategies in agriculture, reducing waste and environmental impact while improving crop productivity.
The first-of-its-kind nanosensor developed by SMART researchers enables real-time, nondestructive monitoring of iron uptake, transport, and changes between its different forms — providing precise and detailed observations of iron dynamics. Its high spatial resolution allows precise localization of iron in plant tissues or subcellular compartments, enabling the measurement of even minute changes in iron levels within plants — changes that can inform how a plant handles stress and uses nutrients.
Traditional detection methods are destructive, or limited to a single form of iron. This new technology enables the diagnosis of deficiencies and optimization of fertilization strategies. By identifying insufficient or excessive iron intake, adjustments can be made to enhance plant health, reduce waste, and support more sustainable agriculture. While the nanosensor was tested on spinach and bok choy, it is species-agnostic, allowing it to be applied across a diverse range of plant species without genetic modification. This capability enhances our understanding of iron dynamics in various ecological settings, providing comprehensive insights into plant health and nutrient management. As a result, it serves as a valuable tool for both fundamental plant research and agricultural applications, supporting precision nutrient management, reducing fertilizer waste, and improving crop health.
“Iron is essential for plant growth and development, but monitoring its levels in plants has been a challenge. This breakthrough sensor is the first of its kind to detect both Fe(II) and Fe(III) in living plants with real-time, high-resolution imaging. With this technology, we can ensure plants receive the right amount of iron, improving crop health and agricultural sustainability,” says Duc Thinh Khong, DiSTAP research scientist and co-lead author of the paper.
“In enabling non-destructive real-time tracking of iron speciation in plants, this sensor opens new avenues for understanding plant iron metabolism and the implications of different iron variations for plants. Such knowledge will help guide the development of tailored management approaches to improve crop yield and more cost-effective soil fertilization strategies,” says Grace Tan, TLL research scientist and co-lead author of the paper.
The research, recently published in Nano Letters and titled, “Nanosensor for Fe(II) and Fe(III) Allowing Spatiotemporal Sensing in Planta,” builds upon SMART DiSTAP’s established expertise in plant nanobionics, leveraging the Corona Phase Molecular Recognition (CoPhMoRe) platform pioneered by the Strano Lab at SMART DiSTAP and MIT. The new nanosensor features single-walled carbon nanotubes (SWNTs) wrapped in a negatively charged fluorescent polymer, forming a helical corona phase structure that interacts differently with Fe(II) and Fe(III). Upon introduction into plant tissues and interaction with iron, the sensor emits distinct NIR fluorescence signals based on the iron type, enabling real-time tracking of iron movement and chemical changes.
The CoPhMoRe technique was used to develop highly selective fluorescent responses, allowing precise detection of iron oxidation states. The NIR fluorescence of SWNTs offers superior sensitivity, selectivity, and tissue transparency while minimizing interference, making it more effective than conventional fluorescent sensors. This capability allows researchers to track iron movement and chemical changes in real time using NIR imaging.
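As a rough illustration of how distinct responses can be turned into separate Fe(II) and Fe(III) estimates, the sketch below solves a small linear system from two readout signals. The response matrix, the two-channel readout, and all numbers are hypothetical; the sensor’s actual calibration and analysis are not described in this article.

```python
# Illustrative sketch (hypothetical response coefficients; not the sensor's
# calibration): if the sensor's NIR response to Fe(II) and Fe(III) differs,
# the two species' contributions can be separated by solving a small linear
# system from intensity changes measured under two readout conditions.
import numpy as np

# Assumed relative response of two readout channels to each iron form
# (rows: channels, columns: [Fe(II), Fe(III)]); real values would come
# from calibrating the sensor against known iron solutions.
response = np.array([[0.9, 0.2],
                     [0.3, 0.8]])

measured = np.array([0.50, 0.62])        # hypothetical intensity changes
fe2, fe3 = np.linalg.solve(response, measured)
print(f"estimated Fe(II) ≈ {fe2:.2f}, Fe(III) ≈ {fe3:.2f} (arbitrary units)")
```

The point is only that a sensor with distinguishable responses to the two oxidation states lets both be tracked from the same measurement, which is what enables the real-time speciation described above.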
“This sensor provides a powerful tool to study plant metabolism, nutrient transport, and stress responses. It supports optimized fertilizer use, reduces costs and environmental impact, and contributes to more nutritious crops, better food security, and sustainable farming practices,” says Professor Daisuke Urano, TLL senior principal investigator, DiSTAP principal investigator, National University of Singapore adjunct assistant professor, and co-corresponding author of the paper.
“This set of sensors gives us access to an important type of signalling in plants, and a critical nutrient necessary for plants to make chlorophyll. This new tool will not just help farmers to detect nutrient deficiency, but also give access to certain messages within the plant. It expands our ability to understand the plant response to its growth environment,” says Professor Michael Strano, DiSTAP co-lead principal investigator, Carbon P. Dubbs Professor of Chemical Engineering at MIT, and co-corresponding author of the paper.
Beyond agriculture, this nanosensor holds promise for environmental monitoring, food safety, and health sciences, particularly in studying iron metabolism, iron deficiency, and iron-related diseases in humans and animals. Future research will focus on leveraging this nanosensor to advance fundamental plant studies on iron homeostasis, nutrient signaling, and redox dynamics. Efforts are also underway to integrate the nanosensor into automated nutrient management systems for hydroponic and soil-based farming and expand its functionality to detect other essential micronutrients. These advancements aim to enhance sustainability, precision, and efficiency in agriculture.
The research is carried out by SMART, and supported by the National Research Foundation under its Campus for Research Excellence And Technological Enterprise program.
3 Questions: Visualizing research in the age of AI
Felice Frankel discusses the implications of generative AI when communicating science visually.
For over 30 years, science photographer Felice Frankel has helped MIT professors, researchers, and students communicate their work visually. Throughout that time, she has seen the development of various tools to support the creation of compelling images: some helpful, and some antithetical to the effort of producing a trustworthy and complete representation of the research. In a recent opinion piece published in Nature magazine, Frankel discusses the burgeoning use of generative artificial intelligence (GenAI) in images and the challenges and implications it has for communicating research. On a more personal note, she questions whether there will still be a place for a science photographer in the research community.
Q: You’ve mentioned that as soon as a photo is taken, the image can be considered “manipulated.” There are ways you’ve manipulated your own images to create a visual that more successfully communicates the desired message. Where is the line between acceptable and unacceptable manipulation?
A: In the broadest sense, the decisions made on how to frame and structure the content of an image, along with which tools are used to create the image, are already a manipulation of reality. We need to remember the image is merely a representation of the thing, and not the thing itself. Decisions have to be made when creating the image. The critical issue is not to manipulate the data, and in the case of most images, the data is the structure. For example, for an image I made some time ago, I digitally deleted the petri dish in which a yeast colony was growing, to bring attention to the stunning morphology of the colony. The data in the image is the morphology of the colony. I did not manipulate that data. However, I always indicate in the text if I have done something to an image. I discuss the idea of image enhancement in my handbook, “The Visual Elements, Photography.”
Q: What can researchers do to make sure their research is communicated correctly and ethically?
A: With the advent of AI, I see three main issues concerning visual representation: the difference between illustration and documentation, the ethics around digital manipulation, and a continuing need for researchers to be trained in visual communication. For years, I have been trying to develop a visual literacy program for the present and upcoming classes of science and engineering researchers. MIT has a communication requirement which mostly addresses writing, but what about the visual, which is no longer tangential to a journal submission? I will bet that most readers of scientific articles go right to the figures, after they read the abstract.
We need to require students to learn how to critically look at a published graph or image and decide if there is something weird going on with it. We need to discuss the ethics of “nudging” an image to look a certain predetermined way. I describe in the article an incident when a student altered one of my images (without asking me) to match what the student wanted to visually communicate. I didn’t permit it, of course, and was disappointed that the ethics of such an alteration were not considered. We need to develop, at the very least, conversations on campus and, even better, create a visual literacy requirement along with the writing requirement.
Q: Generative AI is not going away. What do you see as the future for communicating science visually?
A: For the Nature article, I decided that a powerful way to question the use of AI in generating images was by example. I used one of the diffusion models to create an image using the following prompt:
“Create a photo of Moungi Bawendi’s nano crystals in vials against a black background, fluorescing at different wavelengths, depending on their size, when excited with UV light.”
The results of my AI experimentation were often cartoon-like images that could hardly pass as reality — let alone documentation — but there will come a time when they will. In conversations with colleagues in the research and computer-science communities, all agree that we should have clear standards on what is and is not allowed. And most importantly, a GenAI visual should never be allowed as documentation.
But AI-generated visuals will, in fact, be useful for illustration purposes. If an AI-generated visual is to be submitted to a journal (or, for that matter, be shown in a presentation), I believe the researcher MUST clearly label it as AI-generated.
Senior Kevin Guo, a computer science major, and junior Erin Hovendon, studying mechanical engineering, are on widely divergent paths at MIT. But their lives do intersect in one dimension: They share an understanding that their political science and public policy minors provide crucial perspectives on their research and future careers.
For Guo, the connection between computer science and policy emerged through his work at MIT's Election Data and Science Lab. “When I started, I was just looking for a place to learn how to code and do data science,” he reflects. “But what I found was this fascinating intersection where technical skills could directly shape democratic processes.”
Hovendon is focused on sustainable methods for addressing climate change. She is currently participating in a multisemester research project at MIT's Environmental Dynamics Lab (ENDLab) developing monitoring technology for marine carbon dioxide removal (mCDR).
She believes the success of her research today and in the future depends on understanding its impact on society. Her academic track in policy provides that grounding. “When you’re developing a new technology, you need to focus as well on how it will be applied,” she says. “This means learning about the policies required to scale it up, and about the best ways to convey the value of what you’re working on to the public.”
Bridging STEM and policy
For both Hovendon and Guo, interdisciplinary study is proving to be a valuable platform for tangibly addressing real-world challenges.
Guo came to MIT from Andover, Massachusetts, the son of parents who specialize in semiconductors and computer science. While math and computer science were a natural track for him, Guo was also keenly interested in geopolitics. He enrolled in class 17.40 (American Foreign Policy). “It was my first engagement with MIT political science and I liked it a lot, because it dealt with historical episodes I wanted to learn more about, like World War II, the Korean War, and Vietnam,” says Guo.
He followed up with classes on American Military History and the Rise of Asia, where he found himself enrolled alongside graduate students and active-duty U.S. military officers. “I liked attending a course with people who had unusual insights,” Guo remarks. “I also liked that these humanities classes were small seminars, and focused a lot on individual students.”
From coding to elections
It was in class 17.835 (Machine Learning and Data Science in Politics) that Guo first realized he could directly connect his computer science and math expertise to the humanities. “They gave us big political science datasets to analyze, which was a pretty cool application of the skills I learned in my major,” he says.
Guo springboarded from this class to a three-year, undergraduate research project in the Election Data and Science Lab. “The hardest part is data collection, which I worked on for an election audit project that looked at whether there were significant differences between original vote counts and audit counts in all the states, at the precinct level,” says Guo. “We had to scrape data, raw PDFs, and create a unified dataset, standardized to our format, that we could publish.”
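The data-wrangling step Guo describes, reconciling scraped precinct-level counts against audit counts in one standardized table, can be sketched roughly as follows. This is an illustrative Python sketch only; the column names and example rows are invented, not the lab's actual data or code.

```python
import pandas as pd

# Hypothetical example rows standing in for data scraped from state reports
# and audit PDFs; the columns and values are made up for illustration.
original = pd.DataFrame({
    "state": ["GA", "GA", "WI"],
    "precinct": ["001", "002", "017"],
    "candidate": ["A", "A", "B"],
    "votes": [1204, 876, 432],
})
audit = pd.DataFrame({
    "state": ["GA", "GA", "WI"],
    "precinct": ["001", "002", "017"],
    "candidate": ["A", "A", "B"],
    "audit_votes": [1204, 879, 432],
})

# Standardize keys, merge into one unified table, and flag discrepancies
# between the original count and the audit count at the precinct level.
for df in (original, audit):
    df["state"] = df["state"].str.upper().str.strip()
    df["precinct"] = df["precinct"].astype(str).str.strip()

merged = original.merge(audit, on=["state", "precinct", "candidate"], how="inner")
merged["discrepancy"] = merged["audit_votes"] - merged["votes"]
print(merged[merged["discrepancy"] != 0])
```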
The data analysis skills he acquired in the lab have come in handy in the professional sphere in which he has begun training: investment finance.
“The workflow is very similar: clean the data to see what you want, analyze it to see if I can find an edge, and then write some code to implement it,” he says. “The biggest difference between finance and the lab research is that the development cycle is a lot faster, where you want to act on a dataset in a few days, rather than weeks or months.”
Engineering environmental solutions
Hovendon, a native of North Carolina with a deep love for the outdoors, arrived at MIT committed “to doing something related to sustainability and having a direct application in the world around me,” she says.
Initially, she headed toward environmental engineering, “but then I realized that pretty much every major can take a different approach to that topic,” she says. “So I ended up switching to mechanical engineering because I really enjoy the hands-on aspects of the field.”
In parallel to her design and manufacturing, and mechanics and materials courses, Hovendon also immersed herself in energy and environmental policy classes. One memorable anthropology class, 21A.404 (Living through Climate Change), asked students to consider whether technological or policy solutions could be fully effective on their own for combating climate change. “It was useful to apply holistic ways of exploring human relations to the environment,” says Hovendon.
Hovendon brings this well-rounded perspective to her research at ENDLab in marine carbon capture and fluid dynamics. She is helping to develop verification methods for mCDR at a pilot treatment plant in California. The facility aims to remove 100 tons of carbon dioxide directly from the ocean by enhancing natural processes. Hovendon hopes to design cost-efficient monitoring systems to demonstrate the efficacy of this new technology. If scaled up, mCDR could enable oceans to store significantly more atmospheric carbon, helping cool the planet.
But Hovendon is well aware that innovation with a major impact cannot emerge on the basis of technical efficacy alone.
“You're going to have people who think that you shouldn't be trying to replicate or interfere with a natural system, and if you're putting one of these facilities somewhere in water, then you're using public spaces and resources,” she says. “It's impossible to come up with any kind of technology, but especially any kind of climate-related technology, without first getting the public to buy into it.”
She recalls class 17.30J (Making Public Policy), which emphasized the importance of both economic and social analysis to the successful passage of highly impactful legislation, such as the Affordable Care Act.
“I think that breakthroughs in science and engineering should be evaluated not just through their technological prowess, but through the success of their implementation for general societal benefit,” she says. “Understanding the policy aspects is vital for improving accessibility for scientific advancements.”
Beyond the dome
Guo will soon set out for a career as a quantitative financial trader, and he views his political science background as essential to his success. While his expertise in data cleaning and analysis will come into play, he believes other skills will as well: “Understanding foreign policy, considering how U.S. policy impacts other places, that's actually very important in finance,” he explains. “Macroeconomic changes and politics affect trading volatility and markets in general, so it's very important to understand what's going on.”
With one year to go, Hovendon is contemplating graduate school in mechanical engineering, perhaps designing renewable energy technologies. “I just really hope that I'm working on something I'm genuinely passionate about, something that has a broader purpose,” she says. “In terms of politics and technology, I also hope that at least some government research and development will still go to climate work, because I'm sure there will be an urgent need for it.”
Knitted microtissue can accelerate healing
Lincoln Laboratory and MIT researchers are creating new types of bioabsorbable fabrics that mimic the unique way soft tissues stretch while nurturing growing cells.
Treating severe or chronic injury to soft tissues such as skin and muscle is a challenge in health care. Current treatment methods can be costly and ineffective, and the frequency of chronic wounds, driven by conditions such as diabetes and vascular disease and by an aging population, is only expected to rise.
One promising treatment method involves implanting biocompatible materials seeded with living cells (i.e., microtissue) into the wound. The materials provide a scaffolding for stem cells, or other precursor cells, to grow into the wounded tissue and aid in repair. However, current techniques to construct these scaffolding materials suffer a recurring setback. Human tissue moves and flexes in a unique way that traditional soft materials struggle to replicate, and if the scaffolds stretch, they can also stretch the embedded cells, often causing those cells to die. The dead cells hinder the healing process and can also trigger an inadvertent immune response in the body.
"The human body has this hierarchical structure that actually un-crimps or unfolds, rather than stretches," says Steve Gillmer, a researcher in MIT Lincoln Laboratory's Mechanical Engineering Group. "That's why if you stretch your own skin or muscles, your cells aren't dying. What's actually happening is your tissues are uncrimping a little bit before they stretch."
Gillmer is part of a multidisciplinary research team that is searching for a solution to this stretching setback. He is working with Professor Ming Guo from MIT's Department of Mechanical Engineering and the laboratory's Defense Fabric Discovery Center (DFDC) to knit new kinds of fabrics that can uncrimp and move just as human tissue does.
The idea for the collaboration came while Gillmer and Guo were teaching a course at MIT. Guo had been researching how to grow stem cells on new forms of materials that could mimic the uncrimping of natural tissue. He chose electrospun nanofibers, which worked well, but were difficult to fabricate at long lengths, preventing him from integrating the fibers into larger knit structures for larger-scale tissue repair.
"Steve mentioned that Lincoln Laboratory had access to industrial knitting machines," Guo says. These machines allowed him to switch focus to designing larger knits, rather than individual yarns. "We immediately started to test new ideas through internal support from the laboratory."
Gillmer and Guo worked with the DFDC to discover which knit patterns could move similarly to different types of soft tissue. They started with three basic knit constructions called interlock, rib, and jersey.
"For jersey, think of your T-shirt. When you stretch your shirt, the yarn loops are doing the stretching," says Emily Holtzman, a textile specialist at the DFDC. "The longer the loop length, the more stretch your fabric can accommodate. For ribbed, think of the cuff on your sweater. This fabric construction has a global stretch that allows the fabric to unfold like an accordion."
Interlock is similar to ribbed but is knitted in a denser pattern and contains twice as much yarn per inch of fabric. By having more yarn, there is more surface area on which to embed the cells. "Knit fabrics can also be designed to have specific porosities, or hydraulic permeability, created by the loops of the fabric and yarn sizes," says Erin Doran, another textile specialist on the team. "These pores can help with the healing process as well."
So far, the team has conducted a number of tests embedding mouse embryonic fibroblast cells and mesenchymal stem cells within the different knit patterns and seeing how they behave when the patterns are stretched. Each pattern had variations that affected how much the fabric could uncrimp, in addition to how stiff it became after it started stretching. All showed a high rate of cell survival, and in 2024 the team received an R&D 100 award for their knit designs.
Gillmer explains that although the project began with treating skin and muscle injuries in mind, their fabrics have the potential to mimic many different types of human soft tissue, such as cartilage or fat. The team recently filed a provisional patent that outlines how to create these patterns and identifies the appropriate materials that should be used to make the yarn. This information can be used as a toolbox to tune different knitted structures to match the mechanical properties of the injured tissue to which they are applied.
"This project has definitely been a learning experience for me," Gillmer says. "Each branch of this team has a unique expertise, and I think the project would be impossible without them all working together. Our collaboration as a whole enables us to expand the scope of the work to solve these larger, more complex problems."
Study: The ozone hole is healing, thanks to global reduction of CFCs
New results show with high statistical confidence that ozone recovery is going strong.
A new MIT-led study confirms that the Antarctic ozone layer is healing, as a direct result of global efforts to reduce ozone-depleting substances.
Scientists including the MIT team have observed signs of ozone recovery in the past. But the new study is the first to show, with high statistical confidence, that this recovery is due primarily to the reduction of ozone-depleting substances, versus other influences such as natural weather variability or increased greenhouse gas emissions to the stratosphere.
“There’s been a lot of qualitative evidence showing that the Antarctic ozone hole is getting better. This is really the first study that has quantified confidence in the recovery of the ozone hole,” says study author Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies and Chemistry. “The conclusion is, with 95 percent confidence, it is recovering. Which is awesome. And it shows we can actually solve environmental problems.”
The new study appears today in the journal Nature. Graduate student Peidong Wang from the Solomon group in the Department of Earth, Atmospheric and Planetary Sciences (EAPS) is the lead author. His co-authors include Solomon and EAPS Research Scientist Kane Stone, along with collaborators from multiple other institutions.
Roots of ozone recovery
Within the Earth’s stratosphere, ozone is a naturally occurring gas that acts as a sort of sunscreen, protecting the planet from the sun’s harmful ultraviolet radiation. In 1985, scientists discovered a “hole” in the ozone layer over Antarctica that opened up during the austral spring, between September and December. This seasonal ozone depletion was suddenly allowing UV rays to filter down to the surface, leading to skin cancer and other adverse health effects.
In 1986, Solomon, who was then working at the National Oceanic and Atmospheric Administration (NOAA), led expeditions to the Antarctic, where she and her colleagues gathered evidence that quickly confirmed the ozone hole’s cause: chlorofluorocarbons, or CFCs — chemicals that were then used in refrigeration, air conditioning, insulation, and aerosol propellants. When CFCs drift up into the stratosphere, they can break down ozone under certain seasonal conditions.
The following year, those revelations led to the drafting of the Montreal Protocol — an international treaty that aimed to phase out the production of CFCs and other ozone-depleting substances, in hopes of healing the ozone hole.
In 2016, Solomon led a study reporting key signs of ozone recovery. The ozone hole seemed to be shrinking with each year, especially in September, the time of year when it opens up. Still, these observations were qualitative. The study showed large uncertainties regarding how much of this recovery was due to concerted efforts to reduce ozone-depleting substances, or if the shrinking ozone hole was a result of other “forcings,” such as year-to-year weather variability from El Niño, La Niña, and the polar vortex.
“While detecting a statistically significant increase in ozone is relatively straightforward, attributing these changes to specific forcings is more challenging,” says Wang.
Anthropogenic healing
In their new study, the MIT team took a quantitative approach to identify the cause of Antarctic ozone recovery. The researchers borrowed a method from the climate change community, known as “fingerprinting,” which was pioneered by Klaus Hasselmann, who was awarded the Nobel Prize in Physics in 2021 for the technique. In the context of climate, fingerprinting refers to a method that isolates the influence of specific climate factors, apart from natural, meteorological noise. Hasselmann applied fingerprinting to identify, confirm, and quantify the anthropogenic fingerprint of climate change.
Solomon and Wang looked to apply the fingerprinting method to identify another anthropogenic signal: the effect of human reductions in ozone-depleting substances on the recovery of the ozone hole.
“The atmosphere has really chaotic variability within it,” Solomon says. “What we’re trying to detect is the emerging signal of ozone recovery against that kind of variability, which also occurs in the stratosphere.”
The researchers started with simulations of the Earth’s atmosphere and generated multiple “parallel worlds,” or simulations of the same global atmosphere, under different starting conditions. For instance, they ran simulations under conditions that assumed no increase in greenhouse gases or ozone-depleting substances. Under these conditions, any changes in ozone should be the result of natural weather variability. They also ran simulations with only increasing greenhouse gases, as well as only decreasing ozone-depleting substances.
They compared these simulations to observe how ozone in the Antarctic stratosphere changed, both with season, and across different altitudes, in response to different starting conditions. From these simulations, they mapped out the times and altitudes where ozone recovered from month to month, over several decades, and identified a key “fingerprint,” or pattern, of ozone recovery that was specifically due to conditions of declining ozone-depleting substances.
The team then looked for this fingerprint in actual satellite observations of the Antarctic ozone hole from 2005 to the present day. They found that, over time, the fingerprint that they identified in simulations became clearer and clearer in observations. In 2018, the fingerprint was at its strongest, and the team could say with 95 percent confidence that ozone recovery was due mainly to reductions in ozone-depleting substances.
“After 15 years of observational records, we see this signal to noise with 95 percent confidence, suggesting there’s only a very small chance that the observed pattern similarity can be explained by variability noise,” Wang says. “This gives us confidence in the fingerprint. It also gives us confidence that we can solve environmental problems. What we can learn from ozone studies is how different countries can swiftly follow these treaties to decrease emissions.”
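In outline, this kind of fingerprint detection projects the observed ozone record onto the pattern produced by the forced simulations and asks whether that projection stands out against an ensemble of noise-only runs. The Python sketch below illustrates the general idea under simplifying assumptions (synthetic data, a unit-norm pattern, a normal approximation for the 95 percent threshold); it is not the study's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: ozone anomalies on a (month, altitude) grid.
# `fingerprint` plays the role of the pattern from simulations with declining
# ozone-depleting substances; `control_runs` are noise-only simulations.
n_month, n_alt = 12, 20
fingerprint = rng.normal(size=(n_month, n_alt))
fingerprint /= np.linalg.norm(fingerprint)             # unit-norm pattern
control_runs = rng.normal(size=(200, n_month, n_alt))  # natural-variability ensemble
observations = 2.5 * fingerprint + rng.normal(size=(n_month, n_alt))

def project(field, pattern):
    """Signal amplitude: projection of a field onto the fingerprint pattern."""
    return float((field * pattern).sum())

obs_signal = project(observations, fingerprint)
noise = np.array([project(run, fingerprint) for run in control_runs])

# Signal-to-noise ratio; under a normal approximation, |S/N| above roughly 1.96
# corresponds to detection at about 95 percent confidence.
snr = (obs_signal - noise.mean()) / noise.std()
print(f"signal-to-noise ratio: {snr:.2f}")
```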
If the trend continues, and the fingerprint of ozone recovery grows stronger, Solomon anticipates that soon there will be a year, here and there, when the ozone layer stays entirely intact. And eventually, the ozone hole should stay shut for good.
“By something like 2035, we might see a year when there’s no ozone hole depletion at all in the Antarctic. And that will be very exciting for me,” she says. “And some of you will see the ozone hole go away completely in your lifetimes. And people did that.”
This research was supported, in part, by the National Science Foundation and NASA.
Why rationality can push people in different directions
Philosopher Kevin Dorst’s work examines how we apply rational thought to everyday life.
It’s not a stretch to suggest that when we disagree with other people, we often regard them as being irrational. Kevin Dorst PhD ’19 has developed a body of research with surprising things to say about that.
Dorst, an associate professor of philosophy at MIT, studies rationality: how we apply it, or think we do, and how that bears out in society. The goal is to help us think clearly and perhaps with fresh eyes about something we may take for granted.
Throughout his work, Dorst specializes in exploring the nuances of rationality. To take just one instance, consider how ambiguity can interact with rationality. Suppose there are two studies about the effect of a new housing subdivision on local traffic patterns: One shows there will be a substantial increase in traffic, and one shows a minor effect. Even if both studies are sound in their methods and data, neither may have a totally airtight case. People who regard themselves as rationally assessing the numbers will likely disagree about which is more valid, and — though this may not be entirely rational — may use their prior beliefs to poke holes in the study that conflicts with those beliefs.
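As a toy numerical illustration of that dynamic (my sketch, not Dorst's model): two observers can both apply Bayes' rule to the same pair of studies and still move apart, if each discounts the evidential weight of the study that cuts against their prior. The priors and likelihood weights below are arbitrary.

```python
# Two observers update on the same pair of conflicting traffic studies.
# Each assigns lower evidential weight (a higher chance of methodological
# flaws) to the study that cuts against their prior: the assumed
# "ambiguity" in how the evidence is read.

def update(prior, likelihood_if_true, likelihood_if_false):
    """One step of Bayes' rule for the hypothesis 'traffic will increase a lot'."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

for name, prior in [("skeptic", 0.3), ("believer", 0.7)]:
    belief = prior
    # Study A supports a big increase; study B supports a minor effect.
    # The weights encode how seriously each observer takes each study,
    # scrutinizing the uncongenial one more heavily.
    if prior < 0.5:
        weights = {"A": (0.6, 0.5), "B": (0.2, 0.8)}  # discounts study A
    else:
        weights = {"A": (0.8, 0.2), "B": (0.5, 0.6)}  # discounts study B
    for study in ("A", "B"):
        like_true, like_false = weights[study]
        belief = update(belief, like_true, like_false)
    print(f"{name}: prior {prior:.2f} -> posterior {belief:.2f}")
```

Running this, the skeptic's belief drops from 0.30 to roughly 0.11 while the believer's rises from 0.70 to roughly 0.89: the same evidence, processed through different readings of its ambiguity, pushes the two apart.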
Among other things, this process also calls into question the widespread “Bayesian” conception that people’s views shift and come into alignment as they’re presented with new evidence. It may be that instead, people apply rationality while their views diverge, not converge.
This is also the kind of phenomenon Dorst explores in the paper “Rational Polarization,” published in The Philosophical Review in 2023; currently Dorst is working on a book about how people can take rational approaches but still wind up with different conclusions about the world. Dorst combines careful argumentation, mathematically structured descriptions of thinking, and even experimental evidence about cognition and people’s views, an increasing trend in philosophy.
“There’s something freeing about how methodologically open philosophy is,” says Dorst, a good-humored and genial conversationalist. “A question can be philosophical if it’s important and we don’t yet have settled methods for answering it, because in philosophy it’s always okay to ask what methods we should be using. It’s one of the exciting things about philosophy.”
For his research and teaching, Dorst was awarded tenure at MIT last year.
Show me your work
Dorst grew up in Missouri, not exactly expecting to become a philosopher, but he started following in the academic trail of his older brother, who had become interested in philosophy.
“We didn’t know what philosophy was growing up, but once my brother started getting interested, there was a little bootstrapping, egging each other on, and having someone to talk to,” Dorst says.
As an undergraduate at Washington University in St. Louis, Dorst majored in philosophy and political science. By graduation, he had become sold on studying philosophy full-time, and was accepted into MIT’s program as a doctoral student.
At the Institute, he started specializing in the problems he now studies full-time, about how we know things and how much we are thinking rationally, while working with Roger White as his primary adviser, along with faculty members Robert Stalnaker and Kieran Setiya of MIT and Branden Fitelson of Northeastern University.
After earning his PhD, Dorst spent a year as a fellow at Oxford University’s Magdalen College, then joined the faculty of the University of Pittsburgh. He returned to MIT, this time as a faculty member, in 2022, and has since sought to continue the department’s tradition of engaged teaching with his students.
“They wrestle like everyone does with the conceptual and philosophical questions, but the speed with which you can get through technical things in a course is astounding,” Dorst says of MIT undergraduates.
New methods, time-honored issues
At present Dorst, who has published widely in philosophy journals, is grinding through the process of writing a book manuscript about the complexity of rationality. Chapter subjects include hindsight bias, confirmation bias, overconfidence, and polarization.
In the process, Dorst is also developing and conducting more experiments than ever before, to look at the way people process information and regard themselves as being rational.
“There’s this whole movement of experimental philosophy, using experimental data, being sensitive to cognitive science and being interested in connecting questions we have to it,” Dorst says.
In his case, he adds, “The big picture is trying to connect the theoretical work on rationality with the more empirical work about what leads to polarization.” The salience of the work, meanwhile, applies to a wide range of subjects: “People have been polarized forever over everything.”
As he explains all of this, Dorst looks up at the whiteboard in his office, where an extensive set of equations represents the output of some experiments and his ongoing effort to comprehend the results, as part of the book project. When he finishes, he hopes to have work broadly useful in philosophy, cognitive science, and other fields.
“We might use some different models in philosophy,” he says, “but let’s all try to figure out how people process information and regard arguments.”
Study suggests new molecular strategy for treating fragile X syndrome
Enhancing activity of a specific component of neurons’ “NMDA” receptors normalized protein synthesis, neural activity, and seizure susceptibility in the hippocampus of fragile X lab mice.
Building on more than two decades of research, a study by MIT neuroscientists at The Picower Institute for Learning and Memory reports a new way to treat pathology and symptoms of fragile X syndrome, the most common genetically caused autism spectrum disorder. The team showed that augmenting a novel type of neurotransmitter signaling reduced hallmarks of fragile X in mouse models of the disorder.
The new approach, described in Cell Reports, works by targeting a specific molecular subunit of “NMDA” receptors that they discovered plays a key role in how neurons synthesize proteins to regulate their connections, or “synapses,” with other neurons in brain circuits. The scientists showed that in fragile X model mice, increasing the receptor’s activity caused neurons in the hippocampus region of the brain to increase molecular signaling that suppressed excessive bulk protein synthesis, leading to other key improvements.
Setting the table
“One of the things I find most satisfying about this study is that the pieces of the puzzle fit so nicely into what had come before,” says study senior author Mark Bear, Picower Professor in MIT’s Department of Brain and Cognitive Sciences. Former postdoc Stephanie Barnes, now a lecturer at the University of Glasgow, is the study’s lead author.
Bear’s lab studies how neurons continually edit their circuit connections, a process called “synaptic plasticity” that scientists believe to underlie the brain’s ability to adapt to experience and to form and process memories. These studies led to two discoveries that set the table for the newly published advance. In 2011, Bear’s lab showed that fragile X and another autism disorder, tuberous sclerosis (Tsc), represented two ends of a continuum of a kind of protein synthesis in the same neurons. In fragile X there was too much. In Tsc there was too little. When lab members crossbred fragile X and Tsc mice, in fact, their offspring emerged healthy, as the mutations of each disorder essentially canceled each other out.
More recently, Bear’s lab showed a different dichotomy. It has long been understood from their influential work in the 1990s that the flow of calcium ions through NMDA receptors can trigger a form of synaptic plasticity called “long-term depression” (LTD). But in 2020, they found that another mode of signaling by the receptor — one that did not require ion flow — altered protein synthesis in the neuron and caused a physical shrinking of the dendritic “spine” structures housing synapses.
For Bear and Barnes, these studies raised the prospect that if they could pinpoint how NMDA receptors affect protein synthesis they might identify a new mechanism that could be manipulated therapeutically to address fragile X (and perhaps tuberous sclerosis) pathology and symptoms. That would be an important advance to complement ongoing work Bear’s lab has done to correct fragile X protein synthesis levels via another receptor called mGluR5.
Receptor dissection
In the new study, Bear and Barnes’ team decided to use the non-ionic effect on spine shrinkage as a readout to dissect how NMDARs signal protein synthesis for synaptic plasticity in hippocampus neurons. They hypothesized that the dichotomy of ionic effects on synaptic function and non-ionic effects on spine structure might derive from the presence of two distinct components of NMDA receptors: “subunits” called GluN2A and GluN2B. To test that, they used genetic manipulations to knock out each of the subunits. When they did so, they found that knocking out “2A” or “2B” could eliminate LTD, but that only knocking out 2B affected spine size. Further experiments clarified that 2A and 2B are required for LTD, but that spine shrinkage solely depends on the 2B subunit.
The next task was to resolve how the 2B subunit signals spine shrinkage. A promising possibility was a part of the subunit called the “carboxyterminal domain,” or CTD. So, in a new experiment Bear and Barnes took advantage of a mouse that had been genetically engineered by researchers at the University of Edinburgh so that the 2A and 2B CTDs could be swapped with one another. A telling result was that when the 2B subunit lacked its proper CTD, the effect on spine structure disappeared. The result affirmed that the 2B subunit signals spine shrinkage via its CTD.
Another consequence of replacing the CTD of the 2B subunit was an increase in bulk protein synthesis that resembled findings in fragile X. Conversely, augmenting the non-ionic signaling through the 2B subunit suppressed bulk protein synthesis, reminiscent of Tsc.
Treating fragile X
Putting the pieces together, the findings indicated that augmenting signaling through the 2B subunit might, like introducing the mutation causing Tsc, rescue aspects of fragile X.
Indeed, when the scientists swapped the 2B subunit’s CTD into NMDA receptors in fragile X model mice, they found correction not only of the excessive bulk protein synthesis, but also of the altered synaptic plasticity and increased electrical excitability that are hallmarks of the disease. To see if a treatment that targets NMDA receptors might be effective in fragile X, they tried an experimental drug called Glyx-13. This drug binds to the 2B subunit of NMDA receptors to augment signaling. The researchers found that this treatment also normalized protein synthesis and reduced sound-induced seizures in the fragile X mice.
The team now hypothesizes, based on another prior study in the lab, that the beneficial effect to fragile X mice of the 2B subunit’s CTD signaling is that it shifts the balance of protein synthesis away from an all-too-efficient translation of short messenger RNAs (which leads to excessive bulk protein synthesis) toward a lower-efficiency translation of longer messenger RNAs.
Bear says he does not know what the prospects are for Glyx-13 as a clinical drug, but he noted that there are some drugs in clinical development that specifically target the 2B subunit of NMDA receptors.
In addition to Bear and Barnes, the study’s other authors are Aurore Thomazeau, Peter Finnie, Max Heinreich, Arnold Heynen, Noboru Komiyama, Seth Grant, Frank Menniti, and Emily Osterweil.
The FRAXA Foundation, The Picower Institute for Learning and Memory, The Freedom Together Foundation, and the National Institutes of Health funded the study.
Developing materials for stellar performance in fusion power plants
Zoe Fisher, a doctoral student in NSE, is researching how defects can alter the fundamental properties of ceramics upon radiation.
When Zoe Fisher was in fourth grade, her art teacher asked her to draw her vision of a dream job on paper. At the time, those goals changed like the flavor of the week in an ice cream shop — “zookeeper” featured prominently for a while — but Zoe immediately knew what she wanted to put down: a mad scientist.
When Fisher stumbled upon the drawing in her parents’ Chicago home recently, it felt serendipitous because, by all measures, she has realized that childhood dream. The second-year doctoral student at MIT's Department of Nuclear Science and Engineering (NSE) is studying materials for fusion power plants at the Plasma Science and Fusion Center (PSFC) under the advisement of Michael Short, associate professor at NSE. Dennis Whyte, Hitachi America Professor of Engineering at NSE, serves as co-advisor.
On track to an MIT education
Growing up in Chicago, Fisher often heard her parents remark on her reasoning abilities. When she was barely a preschooler, she argued that she couldn’t have been found in a purple speckled egg, as her parents claimed they had.
Fisher didn’t put together just how much she had gravitated toward science until a high school physics teacher encouraged her to apply to MIT. Passionate about both the arts and sciences, she initially worried that pursuing science would be very rigid, without room for creativity. But she knows now that exploring solutions to problems requires plenty of creative thinking.
It was a visit to MIT through the Weekend Immersion in Science and Engineering (WISE) that truly opened her eyes to the potential of an MIT education. “It just seemed like the undergraduate experience here is where you can be very unapologetically yourself. There’s no fronting something you don’t want to be like. There’s so much authenticity compared to most other colleges I looked at,” Fisher says. Once admitted, Campus Preview Weekend confirmed that she belonged. “We got to be silly and weird — a version of the Mafia game was a hit — and I was like, ‘These are my people,’” Fisher laughs.
Pursuing fusion at NSE
Before she officially started as a first-year in 2018, Fisher enrolled in the Freshman Pre-Orientation Program (FPOP), which begins a week before orientation. Each FPOP zooms into one field. “I’d applied to the nuclear one simply because it sounded cool and I didn’t know anything about it,” Fisher says. She was intrigued right away. “They really got me with that ‘star in a bottle’ line,” she laughs. (The quest for commercial fusion is to create the energy equivalent of a star in a bottle.) Excited by a talk by Zachary Hartwig, Robert N. Noyce Career Development Professor at NSE, Fisher asked if she could work on fusion as an undergraduate through an Undergraduate Research Opportunities Program (UROP) project. She started with modeling solders for power plants and was hooked. When Fisher requested more experimental work, Hartwig put her in touch with Research Scientist David Fischer at the PSFC. Fisher moved on to explore superconductors, which eventually morphed into research for her master’s thesis.
For her doctoral research, Fisher is extending her master’s work to explore defects in ceramics, specifically in alumina (aluminum oxide). Sapphire coatings are the single-crystal equivalent of alumina, an insulator being explored for use in fusion power plants. “I eventually want to figure out what types of charge defects form in ceramics during radiation damage so we can ultimately engineer radiation-resistant sapphire,” Fisher says.
When a material is introduced into a fusion power plant, stray high-energy neutrons born from the plasma can collide with its atoms and fundamentally reorder the lattice, which is likely to change a range of thermal, electrical, and structural properties. “Think of a scaffolding outside a building, with each one of those joints as a different atom that holds your material in place. If you go in and you pull a joint out, there’s a chance that you pulled out a joint that wasn’t structurally sound, in which case everything would be fine. But there’s also a chance that you pull a joint out and everything alters. And [such unpredictability] is a problem,” Fisher says. “We need to be able to account for exactly how these neutrons are going to alter the lattice property,” she adds, and it’s one of the topics her research explores.
The studies, in turn, can function as a jumping-off point for irradiating superconductors. The goals are two-fold: “I want to figure out how I can make an industry-usable ceramic you can use to insulate the inside of a fusion power plant, and then also figure out if I can take this information that I’m getting with ceramics and make it superconductor-relevant,” Fisher says. “Superconductors are the electromagnets we will use to contain the plasma inside fusion power plants. However, they prove pretty difficult to study. Since they are also ceramic, you can draw a lot of parallels between alumina and yttrium barium copper oxide (YBCO), the specific superconductor we use,” she adds. Fisher is also excited about the many experiments she performs using a particle accelerator, one of which involves measuring exactly how surface thermal properties change during radiation.
Sailing new paths
It’s not just her research that Fisher loves. As an undergrad, and during her master’s, she was on the varsity sailing team. “I worked my way into sailing with literal Olympians, I did not see that coming,” she says. Fisher participates in Chicago’s Race to Mackinac and the Melges 15 Series every chance she gets. Of all the types of boats she has sailed, she prefers dinghy sailing the most. “It’s more physical, you have to throw yourself around a lot and there’s this immediate cause and effect, which I like,” Fisher says. She also teaches sailing lessons in the summer at MIT’s Sailing Pavilion — you can find her on a small motorboat, issuing orders through a speaker.
Teaching has figured prominently throughout Fisher’s time at MIT. Through MISTI, Fisher taught high school classes in Germany and, in her senior year, a radiation and materials class in Armenia. She was delighted by the food and culture in Armenia and by how excited people were to learn new ideas. Her love of teaching continues, as she has reached out to high schools in the Boston area. “I like talking to groups and getting them excited about fusion, or even maybe just the concept of attending graduate school,” Fisher says, adding that teaching the ropes of an experiment one-on-one is “one of the most rewarding things.”
She also learned the value of resilience and quick thinking on various other MISTI trips. Despite her love of travel, Fisher has had a few harrowing experiences with tough situations and plans falling through at the last minute. That’s when she tells herself, “Well, the only thing that you’re gonna do is you’re gonna keep doing what you wanted to do.”
That eyes-on-the-prize focus has stood Fisher in good stead, and continues to serve her well in her research today.
Letterlocking: A new look at a centuries-old practice
A first history of the document security technology, co-authored by MIT Libraries’ Jana Dambrogio, provides new tools for interdisciplinary research.
For as long as people have been communicating through writing, they have found ways to keep their messages private. Before the invention of the gummed envelope in 1830, securing correspondence involved an ingenious process of folding a flat sheet of paper to become its own envelope, often using a combination of folds, tucks, slits, or adhesives such as sealing wax. Letter writers from Erasmus to Catherine de’ Medici to Emily Dickinson employed these techniques, which Jana Dambrogio, the MIT Libraries’ Thomas F. Peterson (1957) Conservator, has named “letterlocking.”
“The study of letterlocking very consciously bridges humanities and sciences,” says Dambrogio, who first became interested in the practice as a fellow in the conservation studio of the Vatican Apostolic Archives, where she discovered examples from the 15th and 16th centuries. “It draws on the perspectives of not only conservators and historians, but also engineers, imaging experts, and scientists.”
Now the rich history of this centuries-old document security technology is the subject of a new book, “Letterlocking: The Hidden History of the Letter,” published by the MIT Press and co-authored by Dambrogio and Daniel Starza Smith, a lecturer in early modern English literature at King’s College London. The two have pioneered the field of letterlocking research over the last 10 years, working with an international and interdisciplinary collection of experts, the Unlocking History Research Group.
With more than 300 images and diagrams, “Letterlocking” explores the practice’s history through real examples from all over the world. It includes a dictionary of 60 technical terms and concepts, systems the authors developed while studying more than 250,000 historic letters. The book aims to be a springboard for new discoveries, whether providing a new lens on history or spurring technological advancements.
In working with the Brienne Collection — a 17th-century postal trunk full of undelivered letters — the Unlocking History Research Group sought to study intact examples of locked letters without destroying them in the process. This stimulated advances in conservation, radiology, and computational algorithms. In 2020, the team collaborated with Amanda Ghassaei SM ’17 and Holly Jackson ’22, working at the MIT Center for Bits and Atoms, and students and faculty from the MIT Computer Science and Artificial Intelligence Laboratory; the School of Humanities, Arts, and Social Sciences; and the Department of Materials Science and Engineering to develop new algorithms that could virtually read an unopened letter, publishing the results in Nature Communications in 2021.
“Letterlocking” also offers a comprehensive guide to making one’s own locked letters. “The best introduction to letterlocking is to make some models,” says Dambrogio. “Feel the shape and the weight; see how easy it would be to conceal or hard to open without being noticed. We’re inviting people to explore and expand this new field of study through ‘mind and hand.’”
Designing better ways to deliver drugs
Graduate student and MathWorks Fellow Louis DeRidder is developing a device to make chemotherapy dosing more accurate for individual patients.
When Louis DeRidder was 12 years old, he had a medical emergency that nearly cost him his life. The terrifying experience gave him a close-up look at medical care and made him eager to learn more.
“You can’t always pinpoint exactly what gets you interested in something, but that was a transformative moment,” says DeRidder.
In high school, he grabbed the chance to participate in a medicine-focused program, spending about half of his days during senior year learning about medical science and shadowing doctors.
DeRidder was hooked. He became fascinated by the technologies that make treatments possible and was particularly interested in how drugs are delivered to the brain, a curiosity that sparked a lifelong passion.
“Here I was, a 17-year-old in high school, and a decade later, that problem still fascinates me,” he says. “That’s what eventually got me into the drug delivery field.”
DeRidder’s interests led him to transfer halfway through his undergraduate studies to Johns Hopkins University, where he carried out research he had proposed in his Goldwater Scholarship application. The research focused on the development of a nanoparticle-drug conjugate to deliver a drug to brain cells in order to transform them from a pro-inflammatory to an anti-inflammatory phenotype. Such a technology could be valuable in the treatment of neurodegenerative diseases, including Alzheimer’s and Parkinson’s.
In 2019, DeRidder entered the joint Harvard-MIT Health Sciences and Technology program, where he has embarked on a somewhat different type of drug delivery project — developing a device that measures the concentration of a chemotherapy drug in the blood while it is being administered and adjusts the infusion rate so the concentration is optimal for the patient. The system is known as CLAUDIA, or Closed-Loop AUtomated Drug Infusion RegulAtor, and can allow for the personalization of drug dosing for a variety of different drugs.
The project stemmed from discussions with his faculty advisors — Robert Langer, the David H. Koch Institute Professor, and Giovanni Traverso, the Karl Van Tassel Career Development Professor and a gastroenterologist at Brigham and Women’s Hospital. They explained to him that chemotherapy dosing is based on a formula developed in 1916 that estimates a patient’s body surface area. The formula doesn’t consider important influences such as differences in body composition and metabolism, or circadian fluctuations that can affect how a drug interacts with a patient.
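For context, the 1916 formula is presumably the Du Bois body-surface-area (BSA) estimate, and BSA-based dosing simply scales a per-square-meter dose by that estimate. The sketch below is illustrative only; the per-BSA dose value is made up.

```python
# Du Bois & Du Bois (1916) body-surface-area estimate, widely used for
# chemotherapy dosing. The per-BSA dose below is a hypothetical value.
def bsa_dubois(height_cm: float, weight_kg: float) -> float:
    return 0.007184 * (height_cm ** 0.725) * (weight_kg ** 0.425)

height_cm, weight_kg = 170.0, 70.0
dose_per_m2 = 100.0  # mg per square meter, illustrative only

bsa = bsa_dubois(height_cm, weight_kg)
print(f"estimated BSA: {bsa:.2f} m^2")          # about 1.81 m^2
print(f"prescribed dose: {bsa * dose_per_m2:.0f} mg")
# Two patients with the same height and weight get the same dose, regardless
# of body composition, metabolism, or time of day -- the gap CLAUDIA targets.
```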
“Once my advisors presented the reality of how chemotherapies are dosed,” DeRidder says, “I thought, ‘This is insane. How is this the clinical reality?’”
He and his advisors agreed this was a great project for his PhD.
“After they gave me the problem statement, we began to brainstorm ways that we could develop a medical device to improve the lives of patients,” DeRidder says, adding, “I love starting with a blank piece of paper and then brainstorming to work out the best solution.”
Almost from the start, DeRidder’s research process involved MATLAB and Simulink, developed by the mathematical computer software company MathWorks.
“MathWorks and Simulink are key to what we do,” DeRidder says. “They enable us to model the drug pharmacokinetics — how the body distributes and metabolizes the drug. We also model the components of our system with their software. That was especially critical for us in the very early days, because it let us know whether it was even possible to control the concentration of the drug. And since then, we’ve continuously improved the control algorithm, using these simulations. You simulate hundreds of different experiments before performing any experiments in the lab.”
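A stripped-down version of that simulate-first workflow might look like the sketch below: a one-compartment pharmacokinetic model driven by an infusion pump, with a model-based feedforward rate plus a proportional correction toward a target plasma concentration. It is written in Python for illustration rather than MATLAB/Simulink, the parameter values are arbitrary, and it is not the actual CLAUDIA control algorithm.

```python
import numpy as np

# One-compartment pharmacokinetic model: dC/dt = rate/V - k_el * C
# All parameter values are illustrative, not CLAUDIA's.
V = 40.0      # volume of distribution (L)
k_el = 0.3    # elimination rate constant (1/h)
target = 2.0  # target plasma concentration (mg/L)
Kp = 15.0     # proportional gain, (mg/h) per (mg/L) of error

feedforward = k_el * target * V   # rate that would hold the target at steady state
dt = 1.0 / 60.0                   # one-minute control loop, in hours
C, rng = 0.0, np.random.default_rng(1)

for step in range(int(8 / dt)):                    # simulate an 8-hour infusion
    measured = C + rng.normal(0.0, 0.05)           # noisy in-line concentration reading
    rate = max(0.0, feedforward + Kp * (target - measured))  # adjust the pump rate
    C += (rate / V - k_el * C) * dt                # advance the PK model one step
    if step % 120 == 0:
        print(f"hour {step * dt:4.1f}: C = {C:.2f} mg/L, rate = {rate:5.1f} mg/h")
```

In simulation, the concentration climbs to the target within a few hours and the controller holds it there despite measurement noise, which is the kind of behavior one would want to verify before any lab experiment.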
With his innovative use of the MATLAB and Simulink tools, DeRidder was awarded MathWorks fellowships both last year and this year. He has also received a National Science Foundation Graduate Research Fellowship.
“The fellowships have been critical to our development of the CLAUDIA drug-delivery system,” DeRidder says, adding that he has “had the pleasure of working with a great team of students and researchers in the lab.”
He says he would like to move CLAUDIA toward clinical use, where he thinks it could have significant impact. “Whatever I can do to help push it toward the clinic, including potentially helping to start a company to help commercialize the system, I’m definitely interested in doing it.”
In addition to developing CLAUDIA, DeRidder is working on developing new nanoparticles to deliver therapeutic nucleic acids. The project involves synthesizing new nucleic acid molecules, as well as developing the new polymeric and lipid nanoparticles to deliver the nucleic acids to targeted tissue and cells.
DeRidder says he likes working on technologies at different scales, from medical devices to molecules — all with the potential to improve the practice of medicine.
Meanwhile, he finds time in his busy schedule to do community service. For the past three years, he has spent time helping the homeless on Boston streets.
“It’s easy to lose track of the concrete, simple ways that we can serve our communities when we’re doing research,” DeRidder says, “which is why I have often sought out ways to serve people I come across every day, whether it is a student I mentor in lab, serving the homeless, or helping out the stranger you meet in the store who is having a bad day.”
Ultimately, DeRidder says, he’ll head back to work that also recalls his early exposure to the medical field in high school, when he interacted with many people with different types of dementia and other neurological diseases at a local nursing home.
“My long-term plan includes working on developing devices and molecular therapies to treat neurological diseases, in addition to continuing to work on cancer,” he says. “Really, I’d say that early experience had a big impact on me.”
Breakfast of champions: MIT hosts top young scientists
At an MIT-led event at AJAS/AAAS, researchers connect with MIT faculty, Nobel laureates, and industry leaders to share their work, gain mentorship, and explore future careers in science.
On Feb. 14, some of the nation’s most talented high school researchers convened in Boston for the annual American Junior Academy of Science (AJAS) conference, held alongside the American Association for the Advancement of Science (AAAS) annual meeting. As a highlight of the event, MIT once again hosted its renowned “Breakfast with Scientists,” offering students a unique opportunity to connect with leading scientific minds from around the world.
The AJAS conference began with an opening reception at the MIT Schwarzman College of Computing, where professor of biology and chemistry Catherine Drennan delivered the keynote address, welcoming 162 high school students from 21 states. Delegates were selected through state Academy of Science competitions, earning the chance to share their work and connect with peers and professionals in science, technology, engineering, and mathematics (STEM).
Over breakfast, students engaged with distinguished scientists, including MIT faculty, Nobel laureates, and industry leaders, discussing research, career paths, and the broader impact of scientific discovery.
Amy Keating, MIT biology department head, sat at a table with students ranging from high school juniors to college sophomores. The group engaged in an open discussion about life as a scientist at a leading institution like MIT. One student expressed concern about the competitive nature of innovative research environments, prompting Keating to reassure them, saying, “MIT has a collaborative philosophy rather than a competitive one.”
At another table, Nobel laureate and former MIT postdoc Gary Ruvkun shared a lighthearted moment with students, laughing at a TikTok video they had created to explain their science fair project. The interaction reflected the innate curiosity and excitement that drives discovery at all stages of a scientific career.
Donna Gerardi, executive director of the National Association of Academies of Science, highlighted the significance of the AJAS program. “These students are not just competing in science fairs; they are becoming part of a larger scientific community. The connections they make here can shape their careers and future contributions to science.”
Alongside the breakfast, AJAS delegates participated in a variety of enriching experiences, including laboratory tours, conference sessions, and hands-on research activities.
“I am so excited to be able to discuss my research with experts and get some guidance on the next steps in my academic trajectory,” said Andrew Wesel, a delegate from California.
A defining feature of the AJAS experience was its emphasis on mentorship and collaboration rather than competition. Delegates were officially inducted as lifetime Fellows of the American Junior Academy of Science at the conclusion of the conference, joining a distinguished network of scientists and researchers.
Sponsored by the MIT School of Science and School of Engineering, the breakfast underscored MIT’s longstanding commitment to fostering young scientific talent. Faculty and researchers took the opportunity to encourage students to pursue careers in STEM fields, providing insights into the pathways available to them.
“It was a joy to spend time with such passionate students,” says Kristala Prather, head of the Department of Chemical Engineering at MIT. “One of the brightest moments for me was sitting next to a young woman who will be joining MIT in the fall — I just have to convince her to study ChemE!”
Markus Buehler receives 2025 Washington Award
Materials scientist is honored for his academic leadership and innovative research that bridge engineering and nature.
MIT Professor Markus J. Buehler has been named the recipient of the 2025 Washington Award, one of the nation’s oldest and most esteemed engineering honors.
The Washington Award is conferred to “an engineer(s) whose professional attainments have preeminently advanced the welfare of humankind,” recognizing those who have made a profound impact on society through engineering innovation. Past recipients of this award include influential figures such as Herbert Hoover, the award’s inaugural recipient in 1919, as well as Orville Wright, Henry Ford, Neil Armstrong, John Bardeen, and renowned MIT affiliates Vannevar Bush, Robert Langer, and software engineer Margaret Hamilton.
Buehler was selected for his “groundbreaking accomplishments in computational modeling and mechanics of biological materials, and his contributions to engineering education and leadership in academia.” Buehler has authored over 500 peer-reviewed publications, pioneering the study of the atomic-level properties and structures of biomaterials such as silk, elastin, and collagen, and using computational modeling to characterize, design, and create sustainable materials with features spanning from the nanoscale to the macroscale. He was the first to explain how hydrogen bonds, molecular confinement, and hierarchical architectures govern the mechanics of biological materials, developing a theory that bridges molecular interactions with macroscale properties.
His innovative research includes the development of physics-aware artificial intelligence methods that integrate computational mechanics, bioinformatics, and generative AI to explore universal design principles of biological and bioinspired materials. His work has advanced the understanding of hierarchical structures in nature, revealing the mechanics by which complex biomaterials achieve remarkable strength, flexibility, and resilience through molecular interactions across scales.
Buehler’s research also includes the use of deep learning models to predict and generate new protein structures, self-assembling peptides, and sustainable biomimetic materials. His work on materiomusic — converting molecular structures into musical compositions — has provided new insights into the hidden patterns within biological systems.
Buehler is the Jerry McAfee (1940) Professor in Engineering in the departments of Civil and Environmental Engineering (CEE) and Mechanical Engineering. He served as the department head of CEE from 2013 to 2020, as well as in other leadership roles, including as president of the Society of Engineering Science.
A dedicated educator, Buehler has played a vital role in mentoring future engineers, leading K-12 STEM summer camps to inspire the next generation and serving as an instructor for MIT Professional Education summer courses.
His achievements have been recognized with numerous prestigious honors, including the Feynman Prize, the Drucker Medal, the Leonardo da Vinci Award, and the J.R. Rice Medal, and election to the National Academy of Engineering. His work continues to push the boundaries of computational science, materials engineering, and biomimetic design.
The Washington Award was presented during National Engineers Week in February, in a ceremony attended by members of prominent engineering societies, including the Western Society of Engineers; the American Institute of Mining, Metallurgical and Petroleum Engineers; the American Society of Civil Engineers; the American Society of Mechanical Engineers; the Institute of Electrical and Electronics Engineers; the National Society of Professional Engineers; and the American Nuclear Society. The event also celebrated nearly 100 pre-college students recognized for their achievements in regional STEM competitions, highlighting the next generation of engineering talent.
Seeing more in expansion microscopy
New methods light up lipid membranes and let researchers see sets of proteins inside cells with high resolution.
In biology, seeing can lead to understanding, and researchers in Professor Edward Boyden’s lab at the McGovern Institute for Brain Research are committed to bringing life into sharper focus. With a pair of new methods, they are expanding the capabilities of expansion microscopy — a high-resolution imaging technique the group introduced in 2015 — so researchers everywhere can see more when they look at cells and tissues under a light microscope.
“We want to see everything, so we’re always trying to improve it,” says Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT. “A snapshot of all life, down to its fundamental building blocks, is really the goal.” Boyden is also a Howard Hughes Medical Institute investigator and a member of the Yang Tan Collective at MIT.
With new ways of staining their samples and processing images, users of expansion microscopy can now see vivid outlines of the shapes of cells in their images and pinpoint the locations of many different proteins inside a single tissue sample with resolution that far exceeds that of conventional light microscopy. These advances, both reported in open-access form in the journal Nature Communications, enable new ways of tracing the slender projections of neurons and visualizing spatial relationships between molecules that contribute to health and disease.
Expansion microscopy uses a water-absorbing hydrogel to physically expand biological tissues. After a tissue sample has been permeated by the hydrogel, it is hydrated. The hydrogel swells as it absorbs water, preserving the relative locations of molecules in the tissue as it gently pulls them away from one another. As a result, crowded cellular components appear separate and distinct when the expanded tissue is viewed under a light microscope. The approach, which can be performed using standard laboratory equipment, has made super-resolution imaging accessible to most research teams.
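For a rough sense of why physical expansion yields super-resolution on ordinary equipment, the arithmetic below is a minimal sketch using assumed, typical numbers for the diffraction limit and expansion factor; these are illustrative values, not figures from the Boyden lab’s papers.

```python
# Illustrative arithmetic only: physical expansion divides the effective feature
# spacing seen by the microscope. The diffraction limit and expansion factor are
# assumed, typical values, not numbers from the studies described here.
diffraction_limit_nm = 250    # approximate lateral resolution of a standard light microscope
expansion_factor = 4.5        # assumed linear expansion of the hydrogel-embedded tissue

effective_resolution_nm = diffraction_limit_nm / expansion_factor
print(f"Effective resolution: about {effective_resolution_nm:.0f} nm")  # ~56 nm
```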
Since first developing expansion microscopy, Boyden and his team have continued to enhance the method — increasing its resolution, simplifying the procedure, devising new features, and integrating it with other tools.
Visualizing cell membranes
One of the team’s latest advances is a method called ultrastructural membrane expansion microscopy (umExM), which they described in the Feb. 12 issue of Nature Communications. With it, biologists can use expansion microscopy to visualize the thin membranes that form the boundaries of cells and enclose the organelles inside them. These membranes, built mostly of molecules called lipids, have been notoriously difficult to densely label in intact tissues for imaging with light microscopy. Now, researchers can use umExM to study cellular ultrastructure and organization within tissues.
Tay Shin SM ’20, PhD ’23, a former graduate student in Boyden’s lab and a J. Douglas Tan Fellow in the Tan-Yang Center for Autism Research at MIT, led the development of umExM. “Our goal was very simple at first: Let’s label membranes in intact tissue, much like how an electron microscope uses osmium tetroxide to label membranes to visualize the membranes in tissue,” he says. “It turns out that it’s extremely hard to achieve this.”
The team first needed to design a label that would make the membranes in tissue samples visible under a light microscope. “We almost had to start from scratch,” Shin says. “We really had to think about the fundamental characteristics of the probe that is going to label the plasma membrane, and then think about how to incorporate them into expansion microscopy.” That meant engineering a molecule that would associate with the lipids that make up the membrane and link it to both the hydrogel used to expand the tissue sample and a fluorescent molecule for visibility.
After optimizing the expansion microscopy protocol for membrane visualization and extensively testing and improving potential probes, Shin found success one late night in the lab. He placed an expanded tissue sample on a microscope and saw sharp outlines of cells.
Because of the high resolution enabled by expansion, the method allowed Boyden’s team to identify even the tiny dendrites that protrude from neurons and clearly see the long extensions of their slender axons. That kind of clarity could help researchers follow individual neurons’ paths within the densely interconnected networks of the brain, the researchers say.
Boyden calls tracing these neural processes “a top priority of our time in brain science.” Such tracing has traditionally relied heavily on electron microscopy, which requires specialized skills and expensive equipment. Shin says that because expansion microscopy uses a standard light microscope, it is far more accessible to laboratories worldwide.
Shin and Boyden point out that users of expansion microscopy can learn even more about their samples when they pair the new ability to reveal lipid membranes with fluorescent labels that show where specific proteins are located. “That’s important, because proteins do a lot of the work of the cell, but you want to know where they are with respect to the cell’s structure,” Boyden says.
One sample, many proteins
To that end, researchers no longer have to choose just a few proteins to see when they use expansion microscopy. With a new method called multiplexed expansion revealing (multiExR), users can now label and see more than 20 different proteins in a single sample. Biologists can use the method to visualize sets of proteins, see how they are organized with respect to one another, and generate new hypotheses about how they might interact.
A key to that new method, reported Nov. 9, 2024, in Nature Communications, is the ability to repeatedly link fluorescently labeled antibodies to specific proteins in an expanded tissue sample, image them, then strip these away and use a new set of antibodies to reveal a new set of proteins. Postdoc Jinyoung Kang fine-tuned each step of this process, assuring tissue samples stayed intact and the labeled proteins produced bright signals in each round of imaging.
After capturing many images of a single sample, Boyden’s team faced another challenge: how to ensure those images were in perfect alignment so they could be overlaid with one another, producing a final picture that showed the precise positions of all of the proteins that had been labeled and visualized one by one.
Expansion microscopy lets biologists visualize some of cells’ tiniest features — but to find the same features over and over again during multiple rounds of imaging, Boyden’s team first needed to home in on a larger structure. “These fields of view are really tiny, and you’re trying to find this really tiny field of view in a gel that’s actually become quite large once you’ve expanded it,” explains Margaret Schroeder, a graduate student in Boyden’s lab who, with Kang, led the development of multiExR.
To navigate to the right spot every time, the team decided to label the blood vessels that pass through each tissue sample and use these as a guide. To enable precise alignment, certain fine details also needed to consistently appear in every image; for this, the team labeled several structural proteins. With these reference points and customized imaging processing software, the team was able to integrate all of their images of a sample into one, revealing how proteins that had been visualized separately were arranged relative to one another.
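The team’s customized image-processing software is not reproduced here; as a minimal sketch of the general alignment step, a shared fiducial channel (such as labeled blood vessels) can be used to estimate and apply a shift between imaging rounds with off-the-shelf tools. The arrays and function below are hypothetical stand-ins, not the multiExR pipeline.

```python
# Minimal sketch of aligning one imaging round to a reference round using a shared
# fiducial channel. This illustrates the general registration step only; it is not
# the multiExR pipeline, and the arrays below are random stand-ins for real images.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def align_round(fiducial_ref, fiducial_new, protein_new):
    # Estimate the translation that maps the new round onto the reference round
    # from the fiducial channel, then apply it to the protein channel.
    estimated_shift, _, _ = phase_cross_correlation(
        fiducial_ref, fiducial_new, upsample_factor=10  # subpixel estimate
    )
    return nd_shift(protein_new, estimated_shift, order=1)

rng = np.random.default_rng(0)
fiducial_ref = rng.random((256, 256))
fiducial_new = nd_shift(fiducial_ref, (3.0, -2.0))   # simulate a drifted acquisition
protein_new = nd_shift(rng.random((256, 256)), (3.0, -2.0))
aligned = align_round(fiducial_ref, fiducial_new, protein_new)
```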
The team used multiExR to look at amyloid plaques — the aberrant protein clusters that notoriously develop in brains affected by Alzheimer’s disease. “We could look inside those amyloid plaques and ask, what’s inside of them? And because we can stain for many different proteins, we could do a high-throughput exploration,” Boyden says. The team chose 23 different proteins to view in their images. The approach revealed some surprises, such as the presence of certain neurotransmitter receptors (AMPARs). “Here’s one of the most famous receptors in all of neuroscience, and there it is, hiding out in one of the most famous molecular hallmarks of pathology in neuroscience,” says Boyden. It’s unclear what role, if any, the receptors play in Alzheimer’s disease — but the finding illustrates how the ability to see more inside cells can expose unexpected aspects of biology and raise new questions for research.
Funding for this work came from MIT, Lisa Yang and Y. Eva Tan, John Doerr, the Open Philanthropy Project, the Howard Hughes Medical Institute, the U.S. Army, Cancer Research U.K., the New York Stem Cell Foundation, the U.S. National Institutes of Health, Lore McGovern, Good Ventures, Schmidt Futures, Samsung, MathWorks, the Collamore-Rogers Fellowship, the U.S. National Science Foundation, Alana Foundation USA, the Halis Family Foundation, Lester A. Gimpelson, Donald and Glenda Mattes, David B. Emmes, Thomas A. Stocky, Avni U. Shah, Kathleen Octavio, Good Ventures/Open Philanthropy, and the European Union’s Horizon 2020 program.
Times Higher Education ranks MIT No. 1 in arts and humanities, business and economics, and social sciences
Worldwide honors for 2025 span disciplines across three schools.
The 2025 Times Higher Education World University Ranking has ranked MIT first in three subject categories: Arts and Humanities, Business and Economics, and Social Sciences.
The Times Higher Education World University Ranking is an annual publication of university rankings by Times Higher Education, a leading British education magazine. The subject rankings are based on 18 rigorous performance indicators. Criteria include teaching, research environment, research volume and influence, industry, and international outlook.
Disciplines included in the 2025 top-ranked subjects are housed in the School of Humanities, Arts, and Social Sciences (SHASS), the School of Architecture and Planning (SA+P), and the MIT Sloan School of Management.
“The rankings are a testament to the extraordinary quality of the research and teaching that takes place in SHASS and across MIT,” says Agustín Rayo, Kenan Sahin Dean of SHASS and professor of philosophy. “There has never been a more important time to ensure that we train students who understand the social, economic, political, and human aspects of the great challenges of our time.”
The Arts and Humanities ranking evaluated 750 universities from 72 countries in the disciplines of languages, literature, and linguistics; history, philosophy, and theology; architecture; archaeology; and art, performing arts, and design. This marks the first time MIT has earned the top spot in this subject since Times Higher Education began publishing rankings in 2011.
The ranking for Business and Economics evaluated 990 institutions from 85 countries and territories across three core disciplines: business and management; accounting and finance; and economics and econometrics. This is the fourth consecutive year MIT has been ranked first in this subject.
The Social Sciences ranking evaluated 1,093 institutions from 100 countries and territories in the disciplines of political science and international studies; sociology; geography; communication and media studies; and anthropology. MIT claimed the top spot alone in this subject, after tying for first in 2024 with Stanford University.
In other subjects, MIT was also named among the top universities, ranking third in Computer Science, Engineering, and Life Sciences, and fourth in Physical Sciences. Overall, MIT ranked second in the Times Higher Education 2025 World University Ranking.
A personalized heart implant wins MIT Sloan health care prize
Spheric Bio’s implants are designed to grow in a channel of the heart to better fit the patient’s anatomy and prevent strokes.
An MIT startup’s personalized heart implants, designed to help prevent strokes, won this year’s MIT Sloan Healthcare Innovation Prize (SHIP) on Thursday.
Spheric Bio’s implants grow inside the body once injected, to fit within the patient’s unique anatomy. This could improve stroke prevention because existing implants are one-size-fits-all devices that can fail to fully block the most at-risk regions, leading to leakages and other complications.
“Our mission is to transform stroke prevention by building personalized medical devices directly inside patients’ hearts,” said Connor Verheyen PhD ’23, a postdoc in the Harvard-MIT Program in Health Sciences and Technology (HST), who made the winning pitch.
Verheyen’s co-founders are MIT Associate Professor Ellen Roche and HST postdoc Markus Horvath PhD ’22.
Spheric Bio was one of seven teams that pitched their solution at the event, which was held in the MIT Media Lab and kicked off the MIT Sloan Healthcare and BioInnovations Conference.
Spheric took home the event’s $25,000 first-place prize. The second-place prize went to nurtur, another MIT alumnus-founded startup, which has developed an artificial intelligence-powered platform designed to detect and prevent postpartum depression. Last summer, nurtur participated in the delta v startup accelerator program organized by the Martin Trust Center for MIT Entrepreneurship.
The audience choice award was given to Merunova, which is using AI and MRI diagnostics to improve the diagnosis and treatment of spinal cord disorders. Merunova was co-founded by Dheera Ananthakrishnan, a former spine surgeon who completed an executive MBA from the MIT Sloan School of Management in 2023.
Personalized stroke prevention
Spheric Bio’s first implants aim to solve the problem of atrial fibrillation, a condition that causes areas of the heart to beat irregularly and rapidly, leading to a dramatic increase in stroke risk. The problem begins when blood pools and clots in the heart. Those clots then move to the brain and cause a stroke.
“This is a problem I’ve witnessed firsthand in my family,” says Verheyen. “It’s so common that millions of families around the world have had to experience a loved one go through a stroke as well.”
Patients with atrial fibrillation today can either go on blood thinners, in many cases for years or even for life, or undergo a procedure in which surgeons insert a device into the heart to close off an area known as the left atrial appendage, where about 90 percent of such clots originate.
The implants on the market today for that procedure are typically prefabricated metal devices that don’t account for the wide variations seen in patient heart anatomy. Verheyen says up to half of the devices fail to seal the appendage. They can also lead to complications and complex care pathways designed to manage those shortcomings.
“There’s a fundamental mismatch between the devices available and what human patients actually look like,” says Verheyen. “Humans are infinitely variable in shape and size, and these tissues in particular are really soft, complex, delicate tissues. It leaves you with a pretty profound incompatibility.”
Spheric Bio’s implants are designed to conform to a patient’s anatomy like water filling a glass. The implant is made of biomaterials developed over years of research at MIT. They are delivered through a catheter and then expand and self-heal to custom fit the patient.
“This gives us complete closure of the appendage for every patient, every time,” said Verheyen, who has successfully tested the device in animals. “It also allows us to reduce device-related complications and simplifies deployment for operators.”
Verheyen conducted his PhD work on medical imaging and medical physics in Roche’s lab. Roche is also the associate head of the Department of Mechanical Engineering at MIT.
Innovations for impact
The 23rd annual pitch competition offered anyone interested in health care innovation a look at the promising new solutions being developed at universities. The event is open to all early-stage health care startups with at least one student or recent graduate co-founder.
The event was the result of a months-long process in which more than 100 applicants were whittled down over the course of three rounds by a group of 20 judges.
The final competition also kicked off the MIT Sloan Healthcare and BioInnovations Conference, which took place Feb. 27 and 28. This year’s conference was titled From Innovation to Impact: The Changing Face of Healthcare, and featured keynotes with health care industry veterans including Chris Boerner, the CEO of Bristol Myers Squibb, and James Davis, the CEO of Quest Diagnostics.
The competition’s keynote was delivered by Iterative Health CEO Jonathan Ng, who was a finalist in the competition in 2017. Ng expressed admiration for this year’s contestants.
“It’s inspiring to look around and see people who want to change the world,” said Ng, whose company is using cameras and AI to improve colorectal cancer screening. “There’s a lot of easier industries to work in, but MIT is such a good place to find your tribe: to find people who want to make the same sort of impact on the world as you.”
Five years, five triumphs in Putnam Math Competition
Undergrads sweep Putnam Fellows for fifth year in a row and continue Elizabeth Lowell Putnam winning streak.
For the fifth time in the history of the annual William Lowell Putnam Mathematical Competition, and for the fifth year in a row, MIT swept all five of the contest’s top spots.
The top five scorers each year are named Putnam Fellows. Senior Brian Liu and juniors Papon Lapate and Luke Robitaille are now three-time Putnam Fellows, sophomore Jiangqi Dai earned his second win, and first-year Qiao Sun earned his first. Each receives a $2,500 award. This is also the fifth time that any school has had all five Putnam Fellows.
MIT’s team also came in first. The team was made up of Lapate, Robitaille, and Sun (in alphabetical order); Lapate and Robitaille were also on last year’s winning team. This is MIT’s ninth first-place win in the past 11 competitions. Teams consist of the three top scorers from each institution. The institution with the first-place team receives a $25,000 award, and each team member receives $1,000.
First-year Jessica Wan was the top-scoring woman, finishing in the top 25, which earned her the $1,000 Elizabeth Lowell Putnam Prize. She is the eighth MIT student to receive this honor since the award was created in 1992. This is the sixth year in a row that an MIT woman has won the prize.
In total, 69 MIT students scored within the top 100. Beyond the top five scorers, MIT took nine of the next 11 spots (each receiving a $1,000 award), and seven of the next nine spots (earning $250 awards). Of the 75 receiving honorable mentions, 48 were from MIT. A total of 3,988 students took the exam in December, including 222 MIT students.
This exam is considered to be the most prestigious university-level mathematics competition in the United States and Canada.
The Putnam is known for its difficulty: While a perfect score is 120, this year’s top score was 90, and the median was just 2. While many MIT students scored well, the department is proud of everyone who took the exam, says Professor Michel Goemans, head of the Department of Mathematics.
“Year after year, I am so impressed by the sheer number of students at MIT that participate in the Putnam competition,” Goemans says. “In no other college or university in the world can one find hundreds of students who get a kick out of thinking about math problems. So refreshing!”
Adds Professor Bjorn Poonen, who helped MIT students prepare for the exam this year, “The incredible competition performance is just one manifestation of MIT’s vibrant community of students who love doing math and discussing math with each other, students who through their hard work in this environment excel in ways beyond competitions, too.”
While the annual Putnam Competition is administered to thousands of undergraduate mathematics students across the United States and Canada, in recent years around 70 of its top 100 performers have been MIT students. Since 2000, MIT has placed among the top five teams 23 times.
MIT’s success in the Putnam exam isn’t surprising. MIT’s recent Putnam coaches are four-time Putnam Fellow Bjorn Poonen and three-time Putnam Fellow Yufei Zhao ’10, PhD ’15.
MIT is also a top destination for medalists participating in the International Mathematics Olympiad (IMO) for high school students. Indeed, over the last decade MIT has enrolled almost every American IMO medalist, and more international IMO gold medalists than the universities of any other single country, according to forthcoming research from the Global Talent Fund (GTF), which offers scholarship and training programs for math Olympiad students and coaches.
IMO participation is a strong predictor of future achievement. According to the International Mathematics Olympiad Foundation, about half of Fields Medal winners are IMO alums — but it’s not the only ingredient.
“Recruiting the most talented students is only the beginning. A top-tier university education — with excellent professors, supportive mentors, and an engaging peer community — is key to unlocking their full potential," says GTF President Ruchir Agarwal. "MIT’s sustained Putnam success shows how the right conditions deliver spectacular results. The catalytic reaction of MIT’s concentration of math talent and the nurturing environment of Building 2 should accelerate advancements in fundamental science for years and decades to come.”
Many MIT mathletes see competitions not only as a way to hone their mathematical aptitude, but also as a way to create a strong sense of community and to help inspire and educate the next generation.
Chris Peterson SM ’13, director of communications and special projects at MIT Admissions and Student Financial Services, points out that many MIT students with competition math experience volunteer to help run programs for K-12 students including HMMT and Math Prize for Girls, and mentor research projects through the Program for Research in Mathematics, Engineering and Science (PRIMES).
Many of the top scorers are also alumni of the PRIMES high school outreach program. Two of this year’s Putnam Fellows, Liu and Robitaille, are PRIMES alumni, as are four of the next top 11, and six out of the next nine winners, along with many of the students receiving honorable mentions. Pavel Etingof, a math professor who is also PRIMES’ chief research advisor, states that among the 25 top winners, 12 (48 percent) are PRIMES alumni.
“We at PRIMES are very proud of our alumni’s fantastic showing at the Putnam Competition,” says PRIMES director Slava Gerovitch PhD ’99. “PRIMES serves as a pipeline of mathematical excellence from high school through undergraduate studies, and beyond.”
Along the same lines, a collaboration between the MIT Department of Mathematics and MISTI-Africa has sent MIT students with Olympiad experience abroad during the Independent Activities Period (IAP) to coach high school students who hope to compete for their national teams.
First-years at MIT also take class 18.A34 (Mathematical Problem Solving), known informally as the Putnam Seminar, not only to hone their Putnam exam skills, but also to make new friends.
“Many people think of math competitions as primarily a way to identify and recognize talent, which of course they are,” says Peterson. “But the community convened by and through these competitions generates educational externalities that collectively exceed the sum of individual accomplishment.”
Math Community and Outreach Officer Michael King also notes the camaraderie that forms around the test.
“My favorite time of the Putnam day is right after the problem session, when the students all jump up, run over to their friends, and begin talking animatedly,” says King, who also took the exam as an undergraduate student. “They cheer each other’s successes, debate problem solutions, commiserate over missed answers, and share funny stories. It’s always amazing to work with the best math students in the world, but the most rewarding aspect is seeing the friendships that develop.”
A full list of the winners can be found on the Putnam website.
Rohit Karnik named director of J-WAFS
The mechanical engineering professor will lead MIT’s only program specifically focused on water and food for human need.
Rohit Karnik, the Tata Professor in the MIT Department of Mechanical Engineering, has been named the new director of the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS), effective March 1. Karnik, who has served as associate director of J-WAFS since 2023, succeeds founding director John H. Lienhard V, Abdul Latif Jameel Professor of Water and Mechanical Engineering.
Karnik assumes the role of director at a pivotal time for J-WAFS, as it celebrates its 10th anniversary. Announcing the appointment today in a letter to the J-WAFS research community, Vice President for Research Ian A. Waitz noted Karnik’s deep involvement with the lab’s research efforts and programming, as well as his accolades as a researcher, teacher, leader, and mentor. “I am delighted that Rohit will bring his talent and vision to bear on the J-WAFS mission, ensuring the program sustains its direct support of research on campus and its important impact around the world,” Waitz wrote.
J-WAFS is the only program at MIT focused exclusively on water and food research. Since 2015, the lab has made grants totaling approximately $25 million to researchers across the Institute, including from all five schools and 40 departments, labs, and centers. It has supported 300 faculty, research staff, and students combined. Furthermore, the J-WAFS Solutions Program, which supports efforts to commercialize innovative water and food technologies, has spun out 12 companies and two open-sourced products.
“We launched J-WAFS with the aim of building a community of water and food researchers at MIT, taking advantage of MIT’s strengths in so many disciplines that contribute to these most essential human needs,” writes Lienhard, who will retire this June. “After a decade’s work, that community is strong and visible. I am delighted that Rohit has agreed to take the reins. He will bring the program to the next level.”
Lienhard has served as director since founding J-WAFS in 2014, along with executive director Renee J. Robins ’83, who last fall shared her intent to retire as well.
“It’s a big change for a program to turn over both the director and executive director roles at the same time,” says Robins. “Having worked alongside Rohit as our associate director for the past couple of years, I am greatly assured that J-WAFS will be in good hands with a new and steady leadership team.”
Karnik became associate director of J-WAFS in July 2023, a move that coincided with the start of a sabbatical for Lienhard. Before that time, Karnik was already well engaged with J-WAFS as a grant recipient, reviewer, and community member. As associate director, Karnik has been integral to J-WAFS operations, planning, and grant management, including the proposal selection process. He was instrumental in planning the second J-WAFS Grand Challenge grant and led workshops at which researchers brainstormed proposal topics and formed teams. Karnik also engaged with J-WAFS’ corporate partners, helped plan lectures and events, and offered project oversight.
“The experience gave me broad exposure to the amazing ideas and research at MIT in the water and food space, and the collaborations and synergies across departments and schools that enable excellence in research,” says Karnik. “The strengths of J-WAFS lie in being able to support principal investigators in pursuing research to address humanity’s water and food needs; in creating a community of students through the fellowship program and support of student clubs; and in bringing people together at seminars, workshops, and other events. All of this is made possible by the endowment and a dedicated team with close involvement in the projects after the grants are awarded.”
J-WAFS was established through a generous gift from Community Jameel, an independent, global organization advancing science to help communities thrive in a rapidly changing world. The lab was named in honor of the late Abdul Latif Jameel, the founder of the Abdul Latif Jameel company and father of MIT alumnus Mohammed Jameel ’78, who founded and chairs Community Jameel.
J-WAFS’ operations are carried out by a small but passionate team of people at MIT who are dedicated to the mission of securing water and food systems. That mission is more important than ever, as climate change, urbanization, and a growing global population are putting tremendous stress on the world’s water and food supplies. These challenges drive J-WAFS’ efforts to mobilize the research, innovation, and technology that can sustainably secure humankind’s most vital resources.
As director, Karnik will help shape the research agenda and key priorities for J-WAFS and usher the program into its second decade.
Karnik originally joined MIT as a postdoc in the departments of Mechanical and Chemical Engineering in October 2006. In September 2007, he became an assistant professor of mechanical engineering at MIT, before being promoted to associate professor in 2012. His research group focuses on the physics of micro- and nanofluidic flows and applying that to the design of micro- and nanofluidic systems for applications in water, healthcare, energy, and the environment. Past projects include ones on membranes for water filtration and chemical separations, sensors for water, and water filters from waste wood. Karnik has served as associate department head and interim co-department head in the Department of Mechanical Engineering. He also serves as faculty director of the New Engineering Education Transformation (NEET) program in the School of Engineering.
Before coming to MIT, Karnik received a bachelor’s degree from the Indian Institute of Technology in Bombay, and a master’s and PhD from the University of California at Berkeley, all in mechanical engineering. He has authored numerous publications, is co-inventor on several patents, and has received awards and honors including the National Science Foundation CAREER Award, the U.S. Department of Energy Early Career Award, the MIT Office of Graduate Education’s Committed to Caring award, and election to the National Academy of Inventors as a senior member.
Lienhard, J-WAFS’ outgoing director, has served on the MIT faculty since 1988. His research and educational efforts have focused on heat and mass transfer, water purification and desalination, thermodynamics, and separation processes. Lienhard has directly supervised more than 90 PhD and master’s theses, and he is the author of over 300 peer-reviewed papers and three textbooks. He holds more than 40 U.S. patents, most commercialized through startup companies with his students. One of these, the water treatment company Gradiant Corporation, is now valued at over $1 billion and employs more than 1,200 people. Lienhard has received many awards, including the 2024 Lifetime Achievement Award of the International Desalination and Reuse Association.
Since 1998, Renee Robins has worked on the conception, launch, and development of a number of large interdisciplinary, international, and partnership-based research and education collaborations at MIT and elsewhere. She served in roles for the Cambridge MIT Institute, the MIT Portugal Program, the Mexico City Program, the Program on Emerging Technologies, and the Technology and Policy Program. She holds two undergraduate degrees from MIT, in biology and humanities/anthropology, and a master’s degree in public policy from Carnegie Mellon University. She has overseen significant growth in J-WAFS’ activities, funding, staffing, and collaborations over the past decade. In 2021, she was awarded an Infinite Mile Award in the area of the Offices of the Provost and Vice President for Research, in recognition of her contributions within her role at J-WAFS to help the Institute carry out its mission.
“John and Renee have done a remarkable job in establishing J-WAFS and bringing it up to its present form,” says Karnik. “I’m committed to making sure that the key aspects of J-WAFS that bring so much value to the MIT community, the nation, and the world continue to function well. MIT researchers and alumni in the J-WAFS community are already having an impact on addressing humanity’s water and food needs, and I believe that there is potential for MIT to have an even greater positive impact on securing humanity’s vital resources in the future.”
Collaborating to advance research and innovation on essential chips for AI
Agreement between MIT Microsystems Technology Laboratories and GlobalFoundries aims to deliver power efficiencies for data centers and ultra-low power consumption for intelligent devices at the edge.
The following is a joint announcement from the MIT Microsystems Technology Laboratories and GlobalFoundries.
MIT and GlobalFoundries (GF), a leading manufacturer of essential semiconductors, have announced a new research agreement to jointly pursue advancements and innovations for enhancing the performance and efficiency of critical semiconductor technologies. The collaboration will be led by MIT’s Microsystems Technology Laboratories (MTL) and GF’s research and development team, GF Labs.
With an initial research focus on artificial intelligence and other applications, the first projects are expected to leverage GF’s differentiated silicon photonics technology, which monolithically integrates radio frequency silicon-on-insulator (RF SOI), CMOS (complementary metal-oxide semiconductor), and optical features on a single chip to realize power efficiencies for data centers, and GF’s 22FDX platform, which delivers ultra-low power consumption for intelligent devices at the edge.
“The collaboration between MIT MTL and GF exemplifies the power of academia-industry cooperation in tackling the most pressing challenges in semiconductor research,” says Tomás Palacios, MTL director and the Clarence J. LeBel Professor of Electrical Engineering and Computer Science. Palacios will serve as the MIT faculty lead for this research initiative.
“By bringing together MIT's world-renowned capabilities with GF's leading semiconductor platforms, we are positioned to drive significant research advancements in GF’s essential chip technologies for AI,” says Gregg Bartlett, chief technology officer at GF. “This collaboration underscores our commitment to innovation and highlights our dedication to developing the next generation of talent in the semiconductor industry. Together, we will research transformative solutions in the industry.”
“Integrated circuit technologies are the core driving a broad spectrum of applications ranging from mobile computing and communication devices to automotive, energy, and cloud computing,” says Anantha P. Chandrakasan, dean of MIT's School of Engineering, chief innovation and strategy officer, and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “This collaboration allows MIT’s exceptional research community to leverage GlobalFoundries’ wide range of industry domain experts and advanced process technologies to drive exciting innovations in microelectronics across domains — while preparing our students to take on leading roles in the workforce of the future.”
The new research agreement was formalized at a signing ceremony on campus at MIT. It builds upon GF’s successful past and ongoing engagements with the university. GF serves on MTL’s Microsystems Industrial Group, which brings together industry and academia to engage in research. MIT faculty are active participants in GF’s University Partnership Program focused on joint semiconductor research and prototyping. Additionally, GF and MIT collaborate on several workforce development initiatives, including through the Northeast Microelectronics Coalition, a U.S. Department of Defense Microelectronics Commons Hub.
Will neutrons compromise the operation of superconducting magnets in a fusion plant?
Tests suggest these powerful magnets will not suffer immediate loss of performance during irradiation.
High-temperature superconducting magnets made from REBCO, an acronym for rare earth barium copper oxide, make it possible to create an intense magnetic field that can confine the extremely hot plasma needed for fusion reactions, which combine two hydrogen isotopes to form an atom of helium, releasing a neutron in the process.
But some early tests suggested that neutron irradiation inside a fusion power plant might instantaneously suppress the superconducting magnets’ ability to carry current without resistance (called critical current), potentially causing a reduction in the fusion power output.
Now, a series of experiments has clearly demonstrated that this instantaneous effect of neutron bombardment, known as the “beam on effect,” should not be an issue during reactor operation, thus clearing the path for projects such as the ARC fusion system being developed by MIT spinoff company Commonwealth Fusion Systems.
The findings were reported in the journal Superconductor Science and Technology, in a paper by MIT graduate student Alexis Devitre and professors Michael Short, Dennis Whyte, and Zachary Hartwig, along with six others.
“Nobody really knew if it would be a concern,” Short explains. He recalls looking at these early findings: “Our group thought, man, somebody should really look into this. But now, luckily, the result of the paper is: It’s conclusively not a concern.”
The possible issue first arose during some initial tests of the REBCO tapes planned for use in the ARC system. “I can remember the night when we first tried the experiment,” Devitre recalls. “We were all down in the accelerator lab, in the basement. It was a big shocker because suddenly the measurement we were looking at, the critical current, just went down by 30 percent” when it was measured under radiation conditions (approximating those of the fusion system), as opposed to when it was only measured after irradiation.
Before that, researchers had irradiated the REBCO tapes and then tested them afterward, Short says. “We had the idea to measure while irradiating, the way it would be when the reactor’s really on,” he says. “And then we observed this giant difference, and we thought, oh, this is a big deal. It’s a margin you’d want to know about if you’re designing a reactor.”
After a series of carefully calibrated tests, it turned out the drop in critical current was not caused by the irradiation at all, but was just an effect of temperature changes brought on by the proton beam used for the irradiation experiments. This is something that would not be a factor in an actual fusion plant, Short says.
“We repeated experiments ‘oh so many times’ and collected about a thousand data points,” Devitre says. They then went through a detailed statistical analysis to show that the effect was exactly the same whether the material was just heated or both heated and irradiated.
This ruled out the possibility that the instantaneous suppression of the critical current was a genuine “beam on effect” of the irradiation, at least within the sensitivity of their tests. “Our experiments are quite sensitive,” Short says. “We can never say there’s no effect, but we can say that there’s no important effect.”
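The paper’s actual statistical treatment is not reproduced here; the sketch below uses simulated data to show the general shape of such a comparison between measurements taken while only heating and while heating plus irradiating. All numbers are invented for illustration.

```python
# Illustrative comparison only: simulated critical-current measurements for a sample
# that is only heated versus one that is heated and irradiated. The paper's own
# analysis used the team's measured data and a more detailed treatment.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
heated_only = rng.normal(loc=0.70, scale=0.02, size=500)            # normalized critical current
heated_and_irradiated = rng.normal(loc=0.70, scale=0.02, size=500)

t_stat, p_value = stats.ttest_ind(heated_only, heated_and_irradiated, equal_var=False)
difference = heated_only.mean() - heated_and_irradiated.mean()
print(f"Welch t-test p = {p_value:.3f}, mean difference = {difference:+.4f}")
# A mean difference consistent with zero, within measurement sensitivity, supports
# the conclusion that the beam-on drop is a thermal artifact rather than a radiation effect.
```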
Carrying out these tests required building a special facility for the purpose. Only a few such facilities exist in the world. “They’re all custom builds, and without this, we wouldn’t have been able to find out the answer,” he says.
The finding that this specific issue is not a concern for the design of fusion plants “illustrates the power of negative results. If you can conclusively prove that something doesn’t happen, you can stop scientists from wasting their time hunting for something that doesn’t exist.” And in this case, Short says, “You can tell the fusion companies: ‘You might have thought this effect would be real, but we’ve proven that it’s not, and you can ignore it in your designs.’ So that’s one more risk retired.”
That could be a relief to not only Commonwealth Fusion Systems but also several other companies that are also pursuing fusion plant designs, Devitre says. “There’s a bunch. And it’s not just fusion companies,” he adds. There remains the important issue of longer-term degradation of the REBCO that would occur over years or decades, which the group is presently investigating. Others are pursuing the use of these magnets for satellite thrusters and particle accelerators to study subatomic physics, where the effect could also have been a concern. For all these uses, “this is now one less thing to be concerned about,” Devitre says.
The research team also included David Fischer, Kevin Woller, Maxwell Rae, Lauryn Kortman, and Zoe Fisher at MIT, and N. Riva at Proxima Fusion in Germany. This research was supported by Eni S.p.A. through the MIT Energy Initiative.
An ancient RNA-guided system could simplify delivery of gene editing therapies
The programmable proteins are compact, modular, and can be directed to modify DNA in human cells.
A vast search of natural diversity has led scientists at MIT’s McGovern Institute for Brain Research and the Broad Institute of MIT and Harvard to uncover ancient systems with potential to expand the genome editing toolbox.
These systems, which the researchers call TIGR (Tandem Interspaced Guide RNA) systems, use RNA to guide them to specific sites on DNA. TIGR systems can be reprogrammed to target any DNA sequence of interest, and they have distinct functional modules that can act on the targeted DNA. In addition to its modularity, TIGR is very compact compared to other RNA-guided systems, like CRISPR, which is a major advantage for delivering it in a therapeutic context.
These findings are reported online Feb. 27 in the journal Science.
“This is a very versatile RNA-guided system with a lot of diverse functionalities,” says Feng Zhang, the James and Patricia Poitras Professor of Neuroscience at MIT, who led the research. The TIGR-associated (Tas) proteins that Zhang’s team found share a characteristic RNA-binding component that interacts with an RNA guide that directs it to a specific site in the genome. Some cut the DNA at that site, using an adjacent DNA-cutting segment of the protein. That modularity could facilitate tool development, allowing researchers to swap useful new features into natural Tas proteins.
“Nature is pretty incredible,” says Zhang, who is also an investigator at the McGovern Institute and the Howard Hughes Medical Institute, a core member of the Broad Institute, a professor of brain and cognitive sciences and biological engineering at MIT, and co-director of the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT. “It’s got a tremendous amount of diversity, and we have been exploring that natural diversity to find new biological mechanisms and harnessing them for different applications to manipulate biological processes,” he says. Previously, Zhang’s team adapted bacterial CRISPR systems into gene editing tools that have transformed modern biology. His team has also found a variety of programmable proteins, both from CRISPR systems and beyond.
In their new work, to find novel programmable systems, the team began by zeroing in on a structural feature of the CRISPR-Cas9 protein that binds to the enzyme’s RNA guide. That is a key feature that has made Cas9 such a powerful tool: “Being RNA-guided makes it relatively easy to reprogram, because we know how RNA binds to other DNA or other RNA,” Zhang explains. His team searched hundreds of millions of biological proteins with known or predicted structures, looking for any that shared a similar domain. To find more distantly related proteins, they used an iterative process: from Cas9, they identified a protein called IS110, which had previously been shown by others to bind RNA. They then zeroed in on the structural features of IS110 that enable RNA binding and repeated their search.
At this point, the search had turned up so many distantly related proteins that the team turned to artificial intelligence to make sense of the list. “When you are doing iterative, deep mining, the resulting hits can be so diverse that they are difficult to analyze using standard phylogenetic methods, which rely on conserved sequence,” explains Guilhem Faure, a computational biologist in Zhang’s lab. With a protein large language model, the team was able to cluster the proteins they had found into groups according to their likely evolutionary relationships. One group stood apart from the rest, and its members were particularly intriguing because they were encoded by genes with regularly spaced repetitive sequences reminiscent of an essential component of CRISPR systems. These were the TIGR-Tas systems.
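The specific language model and clustering workflow are not described in detail here; the sketch below outlines the general clustering step on assumed, precomputed embeddings. It is a generic illustration, not the Zhang lab’s actual pipeline.

```python
# Generic sketch of grouping mined protein hits by clustering their embeddings.
# The embeddings are random placeholders; in practice they would come from a
# protein language model applied to the candidate sequences.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 128))   # placeholder: one vector per candidate protein

clusterer = AgglomerativeClustering(
    n_clusters=None,           # let a distance threshold decide how many groups emerge
    distance_threshold=15.0,   # assumed value; would be tuned on real embeddings
    linkage="average",
)
labels = clusterer.fit_predict(embeddings)
print(f"{labels.max() + 1} clusters found among {len(labels)} candidate proteins")
```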
Zhang’s team discovered more than 20,000 different Tas proteins, mostly occurring in bacteria-infecting viruses. Sequences within each gene’s repetitive region — its TIGR arrays — encode an RNA guide that interacts with the RNA-binding part of the protein. In some, the RNA-binding region is adjacent to a DNA-cutting part of the protein. Others appear to bind to other proteins, which suggests they might help direct those proteins to DNA targets.
Zhang and his team experimented with dozens of Tas proteins, demonstrating that some can be programmed to make targeted cuts to DNA in human cells. As they think about developing TIGR-Tas systems into programmable tools, the researchers are encouraged by features that could make those tools particularly flexible and precise.
They note that CRISPR systems can only be directed to segments of DNA that are flanked by short motifs known as PAMs (protospacer adjacent motifs). TIGR-Tas proteins, in contrast, have no such requirement. “This means theoretically, any site in the genome should be targetable,” says scientific advisor Rhiannon Macrae. The team’s experiments also show that TIGR systems have what Faure calls a “dual-guide system,” interacting with both strands of the DNA double helix to home in on their target sequences, which should ensure they act only where they are directed by their RNA guide. What’s more, Tas proteins are compact — a quarter of the size of Cas9, on average — making them easier to deliver, which could overcome a major obstacle to therapeutic deployment of gene editing tools.
Excited by their discovery, Zhang’s team is now investigating the natural role of TIGR systems in viruses, as well as how they can be adapted for research or therapeutics. They have determined the molecular structure of one of the Tas proteins they found to work in human cells, and will use that information to guide their efforts to make it more efficient. Additionally, they note connections between TIGR-Tas systems and certain RNA-processing proteins in human cells. “I think there’s more there to study in terms of what some of those relationships may be, and it may help us better understand how these systems are used in humans,” Zhang says.
This work was supported by the Helen Hay Whitney Foundation, Howard Hughes Medical Institute, K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics, Broad Institute Programmable Therapeutics Gift Donors, Pershing Square Foundation, William Ackman, Neri Oxman, the Phillips family, J. and P. Poitras, and the BT Charitable Foundation.
Sometimes, when competitors collaborate, everybody wins
Engineers developed a planning tool that can help independent entities decide when they should invest in joint projects.
One large metropolis might have several different train systems, from local intercity lines to commuter trains to longer regional lines.
When designing a system of train tracks, stations, and schedules in this network, should rail operators assume each entity operates independently, seeking only to maximize its own revenue? Or that they fully cooperate all the time with a joint plan, putting their own interest aside?
In the real world, neither assumption is very realistic.
Researchers from MIT and ETH Zurich have developed a new planning tool that mixes competition and cooperation to help operators in a complex, multiregional network strategically determine when and how they should work together.
Their framework is unusual because it incorporates co-investment and payoff-sharing mechanisms that identify which joint infrastructure projects a stakeholder should invest in with other operators to maximize collective benefits. The tool can help mobility stakeholders, such as governments, transport agencies, and firms, determine the right time to collaborate, how much they should invest in cooperative projects, how the profits should be distributed, and what would happen if they withdrew from the negotiations.
“It might seem counterintuitive, but sometimes you want to invest in your opponent so that, at some point, this investment will come back to you. Thanks to game theory, one can formalize this intuition to give rise to an interesting class of problems,” says Gioele Zardini, the Rudge and Nancy Allen Assistant Professor of Civil and Environmental Engineering at MIT, a principal investigator in the Laboratory for Information and Decision Systems (LIDS), an affiliate faculty with the Institute for Data, Systems, and Society (IDSS), and senior author of a paper on this planning framework.
Numerical analysis shows that, by investing a portion of their budget into some shared infrastructure projects, independent operators can earn more revenue than if they operated completely noncooperatively.
In the example of the rail operators, the researchers demonstrate that co-investment also benefits users by improving regional train service. This win-win situation encourages more people to take the train, boosting revenues for operators and reducing emissions from automobiles, says Mingjia He, a graduate student at ETH Zurich and lead author.
“The key point here is that transport network design is not a zero-sum game. One operator’s gain doesn’t have to mean the others’ loss. By shifting the perception from isolated, self-optimization to strategic interaction, cooperation can create greater value for everyone involved,” she says.
Beyond transportation, this planning framework could help companies in a crowded industry or governments of neighboring countries test co-investment strategies.
He and Zardini are joined on the paper by ETH Zurich researchers Andrea Censi and Emilio Frazzoli. The research will be presented at the 2025 American Control Conference (ACC), and the paper has been selected as a Student Best Paper Award finalist.
Mixing cooperation and competition
Building transportation infrastructure in a multiregional network typically requires a huge investment of time and resources. Major infrastructure projects have an outsized impact that can stretch far beyond one region or operator.
Each region has its own priorities and decision-makers, such as local transportation authorities, which often results in a lack of coordination.
“If local systems are designed separately, regional travel may be more difficult, making the whole system less efficient. But if self-interested stakeholders don’t benefit from coordination, they are less likely to support the plan,” He says.
To find the best mix of cooperation and competition, the researchers used game theory to build a framework that enables operators to align interests and improve regional cooperation in a way that benefits all.
For instance, last year the Swiss government agreed to invest 50 million euros to electrify and expand part of a regional rail network in Germany, with the goal of creating a faster rail connection between three Swiss cities.
The researchers’ planning framework could help independent entities, from regional governments to rail operators, identify when and how to undertake such collaborations.
The first step involves simulating the outcomes if operators don’t collaborate. Then, using the co-investment and payoff-sharing mechanisms, the decision-maker can explore cooperative approaches.
To identify a fair way to split revenues from shared projects, the researchers designed a payoff-sharing mechanism based on a game theory concept known as the Nash bargaining solution. This technique determines how much benefit operators would receive in different cooperative scenarios, taking into account the benefits they would achieve with no collaboration.
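For readers unfamiliar with the concept, the Nash bargaining solution picks the split that maximizes the product of each party’s gain over its no-cooperation (disagreement) payoff. The two-operator toy below, with made-up numbers, is a minimal sketch of that idea; it is not the model used in the paper.

```python
# Toy Nash bargaining split between two operators sharing revenue from a joint project.
# All numbers are invented for illustration; the paper's mechanism covers a full
# multiregional network rather than this two-player example.
import numpy as np
from scipy.optimize import minimize

disagreement = np.array([10.0, 6.0])   # payoff each operator earns with no cooperation
total_payoff = 24.0                    # total revenue available if they cooperate

def negative_log_nash_product(x):
    gains = x - disagreement
    return -np.sum(np.log(gains))      # maximizing the product equals maximizing the log-sum

result = minimize(
    negative_log_nash_product,
    x0=disagreement + 2.0,                                    # start strictly above the disagreement point
    bounds=[(d + 1e-6, None) for d in disagreement],          # no one accepts less than walking away
    constraints=[{"type": "eq", "fun": lambda x: np.sum(x) - total_payoff}],
    method="SLSQP",
)
print("Nash bargaining split:", np.round(result.x, 2))
# Expected outcome: each operator keeps its no-cooperation payoff and the surplus
# (24 - 16 = 8) is split equally, giving roughly [14, 10].
```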
The benefits of co-investment
Once they had designed the planning framework, the researchers tested it on a simulated transportation network with multiple competing rail operators. They assessed various co-investment ratios across multiple years to identify the best decisions for operators.
In the end, they found that a semicooperative approach leads to the highest returns for all stakeholders. For instance, in one scenario, by co-investing 50 percent of their total budgets into shared infrastructure projects, all operators maximized their returns.
In another scenario, they show that by investing just 3.3 percent of their total budget in the first year of a multiyear cooperative project, operators can boost outcomes by 30 percent across three metrics: revenue, reduced costs for customers, and lower emissions.
“This proves that a small, up-front investment can lead to significant long-term benefits,” He says.
When they applied their framework to more realistic multiregional networks where all regions weren’t the same size, this semicooperative approach achieved even better results.
However, their analyses indicate that returns don’t increase in a linear way — sometimes increasing the co-investment ratio does not increase the benefit for operators.
Success is a multifaceted issue that depends on how much is invested by all operators, which projects are chosen, when investment happens, and how the budget is distributed over time, He explains.
“These strategic decisions are complex, which is why simulations and optimization are necessary to find the best cooperation and negotiation strategies. Our framework can help operators make smarter investment choices and guide them through the negotiation process,” she says.
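As a purely illustrative toy (not the researchers’ model), sweeping a co-investment ratio against an invented payoff function shows the kind of non-monotonic trade-off such a framework searches over.

```python
# Toy sweep of co-investment ratios, illustrating why the best ratio is not simply
# "more is better." The payoff function is invented for this sketch and is not the
# model used in the paper.
import numpy as np

def operator_return(ratio):
    # Hypothetical return for an operator investing `ratio` of its budget in shared
    # projects: joint projects add value with diminishing returns, while over-committing
    # leaves less for the operator's own network.
    return 1.4 * np.sqrt(ratio) + (1.0 - ratio)

ratios = np.linspace(0.0, 1.0, 101)
returns = np.array([operator_return(r) for r in ratios])
best = ratios[int(np.argmax(returns))]
print(f"Best co-investment ratio in this toy model: {best:.2f}")
```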
The framework could also be applied to other complex network design problems, such as in communications or energy distribution.
In the future, the researchers want to build a user-friendly interface that will allow a stakeholder to easily explore different collaborative options. They also want to consider more complex scenarios, such as the role policy plays in shared infrastructure decisions or the robust cooperative strategies that handle risks and uncertainty.
This work was supported, in part, by the ETH Zurich Mobility Initiative and the ETH Zurich Foundation.
MIT physicists find unexpected crystals of electrons in an ultrathin material
Rhombohedral graphene reveals new exotic interacting electron states.
MIT physicists report the unexpected discovery of electrons forming crystalline structures in a material only billionths of a meter thick. The work adds to a gold mine of discoveries originating from the material, which the same team discovered about three years ago.
In a paper published Jan. 22 in Nature, the team describes how electrons in devices made, in part, of the material can become solid, or form crystals, by changing the voltage applied to the devices when they are kept at a temperature similar to that of outer space. Under the same conditions, they also showed the emergence of two new electronic states that add to work they reported last year showing that electrons can split into fractions of themselves.
The physicists were able to make the discoveries thanks to new custom-made filters for better insulation of the equipment involved in the work. These allowed them to cool their devices to a temperature an order of magnitude colder than they achieved for the earlier results.
The team also observed all of these phenomena using two slightly different “versions” of the material, one composed of five layers of atomically thin carbon; the other composed of four layers. This indicates “that there’s a family of materials where you can get this kind of behavior, which is exciting,” says Long Ju, an assistant professor in the MIT Department of Physics who led the work. Ju is also affiliated with MIT’s Materials Research Laboratory and Research Lab of Electronics.
Referring to the material, known as rhombohedral pentalayer graphene, Ju says, “We found a gold mine, and every scoop is revealing something new.”
New material
Rhombohedral pentalayer graphene is essentially a special form of pencil lead. Pencil lead, or graphite, is composed of graphene, a single layer of carbon atoms arranged in hexagons resembling a honeycomb structure. Rhombohedral pentalayer graphene is composed of five layers of graphene stacked in a specific overlapping order.
Since Ju and colleagues discovered the material, they have tinkered with it by adding layers of another material they thought might accentuate the graphene’s properties, or even produce new phenomena. For example, in 2023 they created a sandwich of rhombohedral pentalayer graphene with “buns” made of hexagonal boron nitride. By applying different voltages, or amounts of electricity, to the sandwich, they discovered three important properties never before seen in natural graphite.
Last year, Ju and colleagues reported yet another important and even more surprising phenomenon: Electrons became fractions of themselves upon applying a current to a new device composed of rhombohedral pentalayer graphene and hexagonal boron nitride. This is important because this “fractional quantum Hall effect” has only been seen in a few systems, usually under very high magnetic fields. The Ju work showed that the phenomenon could occur in a fairly simple material without a magnetic field. As a result, it is called the “fractional quantum anomalous Hall effect” (anomalous indicates that no magnetic field is necessary).
New results
In the current work, the Ju team reports yet more unexpected phenomena from the general rhombohedral graphene/boron nitride system when it is cooled to 30 millikelvins (1 millikelvin is equivalent to -459.668 degrees Fahrenheit). In last year’s paper, Ju and colleagues reported six fractional states of electrons. In the current work, they report discovering two more of these fractional states.
They also found another unusual electronic phenomenon: the integer quantum anomalous Hall effect in a wide range of electron densities. The fractional quantum anomalous Hall effect was understood to emerge in an electron “liquid” phase, analogous to water. In contrast, the new state that the team has now observed can be interpreted as an electron “solid” phase — resembling the formation of electronic “ice” — that can also coexist with the fractional quantum anomalous Hall states when the system’s voltage is carefully tuned at ultra-low temperatures.
One way to think about the relation between the integer and fractional states is to imagine a map created by tuning electric voltages: By tuning the system with different voltages, you can create a “landscape” similar to a river (which represents the liquid-like fractional states) cutting through glaciers (which represent the solid-like integer effect), Ju explains.
Ju notes that his team observed all of these phenomena not only in pentalayer rhombohedral graphene, but also in rhombohedral graphene composed of four layers. This creates a family of materials, and indicates that other “relatives” may exist.
“This work shows how rich this material is in exhibiting exotic phenomena. We’ve just added more flavor to this already very interesting material,” says Zhengguang Lu, a co-first author of the paper. Lu, who conducted the work as a postdoc at MIT, is now on the faculty at Florida State University.
In addition to Ju and Lu, other principal authors of the Nature paper are Tonghang Han and Yuxuan Yao, both of MIT. Lu, Han, and Yao are co-first authors of the paper who contributed equally to the work. Other MIT authors are Jixiang Yang, Junseok Seo, Lihan Shi, and Shenyong Ye. Additional members of the team are Kenji Watanabe and Takashi Taniguchi of the National Institute for Materials Science in Japan.
This work was supported by a Sloan Fellowship, a Mathworks Fellowship, the U.S. Department of Energy, the Japan Society for the Promotion of Science KAKENHI, and the World Premier International Research Initiative of Japan. Device fabrication was performed at the Harvard Center for Nanoscale Systems and MIT.nano.
Fiber computer allows apparel to run apps and “understand” the wearer
MIT researchers developed a fiber computer and networked several of them into a garment that learns to identify physical activities.
What if the clothes you wear could care for your health?
MIT researchers have developed an autonomous programmable computer in the form of an elastic fiber, which could monitor health conditions and physical activity, alerting the wearer to potential health risks in real time. Clothing containing the fiber computer was comfortable and machine washable, and the fibers were nearly imperceptible to the wearer, the researchers report.
Unlike on-body monitoring systems known as “wearables,” which are located at a single point like the chest, wrist, or finger, fabrics and apparel have an advantage of being in contact with large areas of the body close to vital organs. As such, they present a unique opportunity to measure and understand human physiology and health.
The fiber computer contains a series of microdevices, including sensors, a microcontroller, digital memory, Bluetooth modules, optical communications, and a battery, making up all the necessary components of a computer in a single elastic fiber.
The researchers added four fiber computers to a top and a pair of leggings, with the fibers running along each limb. In their experiments, each independently programmable fiber computer operated a machine-learning model that was trained to autonomously recognize exercises performed by the wearer, resulting in an average accuracy of about 70 percent.
Surprisingly, once the researchers allowed the individual fiber computers to communicate among themselves, their collective accuracy increased to nearly 95 percent.
“Our bodies broadcast gigabytes of data through the skin every second in the form of heat, sound, biochemicals, electrical potentials, and light, all of which carry information about our activities, emotions, and health. Unfortunately, most — if not all — of it gets absorbed and then lost in the clothes we wear. Wouldn’t it be great if we could teach clothes to capture, analyze, store, and communicate this important information in the form of valuable health and activity insights?” says Yoel Fink, a professor of materials science and engineering at MIT, a principal investigator in the Research Laboratory of Electronics (RLE) and the Institute for Soldier Nanotechnologies (ISN), and senior author of a paper on the research, which appears today in Nature.
The use of the fiber computer to understand health conditions and help prevent injury will soon undergo a significant real-world test as well. U.S. Army and Navy service members will be conducting a monthlong winter research mission to the Arctic, covering 1,000 kilometers in average temperatures of -40 degrees Fahrenheit. Dozens of base layer merino mesh shirts with fiber computers will be providing real-time information on the health and activity of the individuals participating in this mission, called Musk Ox II.
“In the not-too-distant future, fiber computers will allow us to run apps and get valuable health care and safety services from simple everyday apparel. We are excited to see glimpses of this future in the upcoming Arctic mission through our partners in the U.S. Army, Navy, and DARPA. Helping to keep our service members safe in the harshest environments is an honor and privilege,” Fink says.
He is joined on the paper by co-lead authors Nikhil Gupta, an MIT materials science and engineering graduate student; Henry Cheung MEng ’23; and Syamantak Payra ’22, currently a graduate student at Stanford University; John Joannopoulos, the Francis Wright Davis Professor of Physics at MIT and director of the Institute for Soldier Nanotechnologies; as well as others at MIT, the Rhode Island School of Design, and Brown University.
Fiber focus
The fiber computer builds on more than a decade of work in the Fibers@MIT lab at the RLE and was supported primarily by ISN. In previous papers, the researchers demonstrated methods for incorporating semiconductor devices, optical diodes, memory units, elastic electrical contacts, and sensors into fibers that could be formed into fabrics and garments.
“But we hit a wall in terms of the complexity of the devices we could incorporate into the fiber because of how we were making it. We had to rethink the whole process. At the same time, we wanted to make it elastic and flexible so it would match the properties of traditional fabrics,” says Gupta.
One of the challenges the researchers surmounted was the geometric mismatch between a cylindrical fiber and a planar chip. Connecting wires to small, conductive areas, known as pads, on the outside of each planar microdevice proved difficult and prone to failure, because complex microdevices have many pads, making it increasingly hard to find room to attach each wire reliably.
In this new design, the researchers map the 2D pad alignment of each microdevice to a 3D layout using a flexible circuit board called an interposer, which they wrap into a cylinder. They call this the “maki” design. Then they attach four separate wires to the sides of the “maki” roll and connect all the components together.
“This advance was crucial for us in terms of being able to incorporate higher functionality computing elements, like the microcontroller and Bluetooth sensor, into the fiber,” says Gupta.
This versatile folding technique could be used with a variety of microelectronic devices, enabling them to incorporate additional functionality.
In addition, the researchers fabricated the new fiber computer using a type of thermoplastic elastomer that is several times more flexible than the thermoplastics they used previously. This material enabled them to form a machine-washable, elastic fiber that can stretch more than 60 percent without failure.
They fabricate the fiber computer using a thermal draw process that the Fibers@MIT group pioneered in the early 2000s. The process involves creating a macroscopic version of the fiber computer, called a preform, that contains each connected microdevice.
This preform is hung in a furnace, melted, and pulled down to form a fiber, which also contains embedded lithium-ion batteries so it can power itself.
“A former group member, Juliette Marion, figured out how to create elastic conductors, so even when you stretch the fiber, the conductors don’t break. We can maintain functionality while stretching it, which is crucial for processes like knitting, but also for clothes in general,” Gupta says.
Bring out the vote
Once the fiber computer is fabricated, the researchers use a braiding technique to cover the fiber with traditional yarns, such as polyester, merino wool, nylon, and even silk.
In addition to gathering data on the human body using sensors, each fiber computer incorporates LEDs and light sensors that enable multiple fibers in one garment to communicate, creating a textile network that can perform computation.
Each fiber computer also includes a Bluetooth communication system to wirelessly send data to a device like a smartphone, where it can be read by a user.
The researchers leveraged these communication systems to create a textile network by sewing four fiber computers into a garment, one in each sleeve. Each fiber ran an independent neural network that was trained to identify exercises like squats, planks, arm circles, and lunges.
“What we found is that the ability of a fiber computer to identify human activity was only about 70 percent accurate when located on a single limb, the arms or legs. However, when we allowed the fibers sitting on all four limbs to ‘vote,’ they collectively reached nearly 95 percent accuracy, demonstrating the importance of residing on multiple body areas and forming a network between autonomous fiber computers that does not need wires and interconnects,” Fink says.
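A minimal sketch of why the networked fibers can outperform any single one: if each limb's classifier is right roughly 70 percent of the time and its mistakes are spread over several exercise classes, a simple majority vote across four limbs recovers the true activity far more often. The Monte Carlo simulation below is purely illustrative, with assumed accuracies and class counts; the garment's actual models and voting scheme are not specified in the article, though this toy setup happens to land in the same neighborhood as the reported numbers.

```python
# Illustrative Monte Carlo sketch: four per-limb classifiers, each ~70% accurate,
# whose wrong guesses are spread over several other exercise classes. A majority
# vote across limbs recovers the true label far more often than any single limb.
# Accuracies and class counts are assumptions for illustration, not measured values.
import random
from collections import Counter

N_CLASSES = 8        # e.g., squats, planks, arm circles, lunges, ...
SINGLE_ACC = 0.70    # assumed per-limb accuracy
TRIALS = 100_000

def limb_prediction(true_label):
    """One limb's guess: correct with prob SINGLE_ACC, otherwise a random wrong class."""
    if random.random() < SINGLE_ACC:
        return true_label
    return random.choice([c for c in range(N_CLASSES) if c != true_label])

correct_votes = 0
for _ in range(TRIALS):
    true_label = random.randrange(N_CLASSES)
    votes = Counter(limb_prediction(true_label) for _ in range(4))
    # Majority vote; ties are broken arbitrarily by Counter ordering.
    if votes.most_common(1)[0][0] == true_label:
        correct_votes += 1

print(f"ensemble accuracy ~ {correct_votes / TRIALS:.2%}")
```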
Moving forward, the researchers want to use the interposer technique to incorporate additional microdevices.
Arctic insights
In February, a multinational team equipped with computing fabrics will travel for 30 days and 1,000 kilometers in the Arctic. The fabrics will help keep the team safe, and set the stage for future physiological “digital twinning” models.
“As a leader with more than a decade of Arctic operational experience, one of my main concerns is how to keep my team safe from debilitating cold weather injuries — a primary threat to operators in the extreme cold,” says U.S. Army Major Mathew Hefner, the commander of Musk Ox II. “Conventional systems just don’t provide me with a complete picture. We will be wearing the base layer computing fabrics on us 24/7 to help us better understand the body’s response to extreme cold and ultimately predict and prevent injury.”
Karl Friedl, U.S. Army Research Institute of Environmental Medicine senior research scientist of performance physiology, noted that the MIT programmable computing fabric technology may become a “gamechanger for everyday lives.”
“Imagine near-term fiber computers in fabrics and apparel that sense and respond to the environment and to the physiological status of the individual, increasing comfort and performance, providing real-time health monitoring and providing protection against external threats. Soldiers will be the early adopters and beneficiaries of this new technology, integrated with AI systems using predictive physiological models and mission-relevant tools to enhance survivability in austere environments,” Friedl says.
“The convergence of classical fibers and fabrics with computation and machine learning has only begun. We are exploring this exciting future not only through research and field testing, but importantly in an MIT Department of Materials Science and Engineering course ‘Computing Fabrics,’ taught with Professor Anais Missakian from the Rhode Island School of Design,” adds Fink.
This research was supported, in part, by the U.S. Army Research Office Institute for Soldier Nanotechnology (ISN), the U.S. Defense Threat Reduction Agency, the U.S. National Science Foundation, the Fannie and John Hertz Foundation Fellowship, the Paul and Daisy Soros Foundation Fellowship for New Americans, the Stanford-Knight Hennessy Scholars Program, and the Astronaut Scholarship Foundation.
A protein from tiny tardigrades may help cancer patients tolerate radiation therapy
When scientists stimulated cells to produce a protein that helps “water bears” survive extreme environments, the tissue showed much less DNA damage after radiation treatment.
About 60 percent of all cancer patients in the United States receive radiation therapy as part of their treatment. However, this radiation can have severe side effects that often end up being too difficult for patients to tolerate.
Drawing inspiration from a tiny organism that can withstand huge amounts of radiation, researchers at MIT, Brigham and Women’s Hospital, and the University of Iowa have developed a new strategy that may protect patients from this kind of damage. Their approach makes use of a protein from tardigrades, often also called “water bears,” which are usually less than a millimeter in length.
When the researchers injected messenger RNA encoding this protein into mice, they found that it generated enough protein to protect cells’ DNA from radiation-induced damage. If developed for use in humans, this approach could benefit many cancer patients, the researchers say.
“Radiation can be very helpful for many tumors, but we also recognize that the side effects can be limiting. There’s an unmet need with respect to helping patients mitigate the risk of damaging adjacent tissue,” says Giovanni Traverso, an associate professor of mechanical engineering at MIT and a gastroenterologist at Brigham and Women’s Hospital.
Traverso and James Byrne, an assistant professor of radiation oncology at the University of Iowa, are the senior authors of the study, which appears today in Nature Biomedical Engineering. The paper’s lead authors are Ameya Kirtane, an instructor in medicine at Harvard Medical School and a visiting scientist at MIT’s Koch Institute for Integrative Cancer Research, and Jianling Bi, a research scientist at the University of Iowa.
Extreme survival
Radiation is often used to treat cancers of the head and neck, where it can damage the mouth or throat, making it very painful to eat or drink. It is also commonly used for gastrointestinal cancers, which can lead to rectal bleeding. Many patients end up delaying treatments or stopping them altogether.
“This affects a huge number of patients, and it can manifest as something as simple as mouth sores, which can limit a person’s ability to eat because it’s so painful, to requiring hospitalization because people are suffering so terribly from the pain, weight loss, or bleeding. It can be pretty dangerous, and it’s something that we really wanted to try and address,” Byrne says.
Currently, there are very few ways to prevent radiation damage in cancer patients. There are a handful of drugs that can be given to try to reduce the damage, and for prostate cancer patients, a hydrogel can be used to create a physical barrier between the prostate and the rectum during radiation treatment.
For several years, Traverso and Byrne have been working on developing new ways to prevent radiation damage. In the new study, they were inspired by the extraordinary survival ability of tardigrades. Found all over the world, usually in aquatic environments, these organisms are well known for their resilience to extreme conditions. Scientists have even sent them into space, where they were shown to survive extreme dehydration and cosmic radiation.
One key component of tardigrades’ defense systems is a unique damage suppressor protein called Dsup, which binds to DNA and helps protect it from radiation-induced damage. This protein plays a major role in tardigrades’ ability to survive radiation doses 2,000 to 3,000 times higher than what a human being can tolerate.
When brainstorming ideas for novel ways to protect cancer patients from radiation, the researchers wondered if they might be able to deliver messenger RNA encoding Dsup to patient tissues before radiation treatment. This mRNA would trigger cells to transiently express the protein, protecting DNA during the treatment. After a few hours, the mRNA and protein would disappear.
For this to work, the researchers needed a way to deliver mRNA that would generate large amounts of protein in the target tissues. They screened libraries of delivery particles containing both polymer and lipid components, which have been used separately to achieve efficient mRNA delivery. From these screens, they identified one polymer-lipid particle that was best-suited for delivery to the colon, and another that was optimized to deliver mRNA to mouth tissue.
“We thought that perhaps by combining these two systems — polymers and lipids — we may be able to get the best of both worlds and get highly potent RNA delivery. And that’s essentially what we saw,” Kirtane says. “One of the strengths of our approach is that we are using a messenger RNA, which just temporarily expresses the protein, so it’s considered far safer than something like DNA, which may be incorporated into the cells’ genome.”
Protection from radiation
After showing that these particles could successfully deliver mRNA to cells grown in the lab, the researchers tested whether this approach could effectively protect tissue from radiation in a mouse model.
They injected the particles into either the cheek or the rectum several hours before giving a dose of radiation similar to what cancer patients would receive. In these mice, the researchers saw a 50 percent reduction in the amount of double-stranded DNA breaks caused by radiation.
“This study shows great promise and is a really novel idea leveraging natural mechanisms of protection against DNA damage for the purpose of protecting healthy cells during radiation treatments for cancer,” says Ben Ho Park, director of the Vanderbilt-Ingram Cancer Center at Vanderbilt University Medical Center, who was not involved in the study.
The researchers also showed that the protective effect of the Dsup protein did not spread beyond the injection site, which is important because they don’t want to protect the tumor itself from the effects of radiation. To make this treatment more feasible for potential use in humans, the researchers now plan to work on developing a version of the Dsup protein that would not provoke an immune response, as the original tardigrade protein likely would.
If developed for use in humans, this protein could also potentially be used to protect against DNA damage caused by chemotherapy drugs, the researchers say. Another possible application would be to help prevent radiation damage in astronauts in space.
Other authors of the paper include Netra Rajesh, Chaoyang Tang, Miguel Jimenez, Emily Witt, Megan McGovern, Arielle Cafi, Samual Hatfield, Lauren Rosenstock, Sarah Becker, Nicole Machado, Veena Venkatachalam, Dylan Freitas, Xisha Huang, Alvin Chan, Aaron Lopes, Hyunjoon Kim, Nayoon Kim, Joy Collins, Michelle Howard, Srija Manchkanti, and Theodore Hong.
The research was funded by the Prostate Cancer Foundation Young Investigator Award, the U.S. Department of Defense Prostate Cancer Program Early Investigator Award, a Hope Funds for Cancer Research Fellowship, the American Cancer Society, the National Cancer Institute, MIT’s Department of Mechanical Engineering, and the U.S. Advanced Research Projects Agency for Health.
Helping the immune system attack tumors
Stefani Spranger is working to discover why some cancers don’t respond to immunotherapy, in hopes of making them more vulnerable to it.
In addition to patrolling the body for foreign invaders, the immune system also hunts down and destroys cells that have become cancerous or precancerous. However, some cancer cells end up evading this surveillance and growing into tumors.
Once established, tumor cells often send out immunosuppressive signals, which leads T cells to become “exhausted” and unable to attack the tumor. In recent years, some cancer immunotherapy drugs have shown great success in rejuvenating those T cells so they can begin attacking tumors again.
While this approach has proven effective against cancers such as melanoma, it doesn’t work as well for others, including lung and ovarian cancer. MIT Associate Professor Stefani Spranger is trying to figure out how those tumors are able to suppress immune responses, in hopes of finding new ways to galvanize T cells into attacking them.
“We really want to understand why our immune system fails to recognize cancer,” Spranger says. “And I’m most excited about the really hard-to-treat cancers because I think that’s where we can make the biggest leaps.”
Her work has led to a better understanding of the factors that control T-cell responses to tumors, and raised the possibility of improving those responses through vaccination or treatment with immune-stimulating molecules called cytokines.
“We’re working on understanding what exactly the problem is, and then collaborating with engineers to find a good solution,” she says.
Jumpstarting T cells
As a student in Germany, where students often have to choose their college major while still in high school, Spranger envisioned going into the pharmaceutical industry and chose to major in biology. At Ludwig Maximilian University in Munich, her course of study began with classical biology subjects such as botany and zoology, and she began to doubt her choice. But, once she began taking courses in cell biology and immunology, her interest was revived and she continued into a biology graduate program at the university.
During a paper discussion class early in her graduate school program, Spranger was assigned to a Science paper on a promising new immunotherapy treatment for melanoma. This strategy involves isolating tumor-infiltrating T-cells during surgery, growing them into large numbers, and then returning them to the patient. For more than 50 percent of those patients, the tumors were completely eliminated.
“To me, that changed the world,” Spranger recalls. “You can take the patient’s own immune system, not really do all that much to it, and then the cancer goes away.”
Spranger completed her PhD studies in a lab that worked on further developing that approach, known as adoptive T-cell transfer therapy. At that point, she still was leaning toward going into pharma, but after finishing her PhD in 2011, her husband, also a biologist, convinced her that they should both apply for postdoc positions in the United States.
They ended up at the University of Chicago, where Spranger worked in a lab that studies how the immune system responds to tumors. There, she discovered that while melanoma is usually very responsive to immunotherapy, there is a small fraction of melanoma patients whose T cells don’t respond to the therapy at all. That got her interested in trying to figure out why the immune system doesn’t always respond to cancer the way that it should, and in finding ways to jumpstart it.
During her postdoc, Spranger also discovered that she enjoyed mentoring students, which she hadn’t done as a graduate student in Germany. That experience drew her away from going into the pharmaceutical industry, in favor of a career in academia.
“I had my first mentoring teaching experience having an undergrad in the lab, and seeing that person grow as a scientist, from barely asking questions to running full experiments and coming up with hypotheses, changed how I approached science and my view of what academia should be for,” she says.
Modeling the immune system
When applying for faculty jobs, Spranger was drawn to MIT by the collaborative environment of MIT and its Koch Institute for Integrative Cancer Research, which offered the chance to collaborate with a large community of engineers who work in the field of immunology.
“That community is so vibrant, and it’s amazing to be a part of it,” she says.
Building on the research she had done as a postdoc, Spranger wanted to explore why some tumors respond well to immunotherapy, while others do not. For many of her early studies, she used a mouse model of non-small-cell lung cancer. In human patients, the majority of these tumors do not respond well to immunotherapy.
“We build model systems that resemble each of the different subsets of non-responsive non-small cell lung cancer, and we’re trying to really drill down to the mechanism of why the immune system is not appropriately responding,” she says.
As part of that work, she has investigated why the immune system behaves differently in different types of tissue. While immunotherapy drugs called checkpoint inhibitors can stimulate a strong T-cell response in the skin, they don’t do nearly as much in the lung. However, Spranger has shown that T cell responses in the lung can be improved when immune molecules called cytokines are also given along with the checkpoint inhibitor.
Those cytokines work, in part, by activating dendritic cells — a class of immune cells that help to initiate immune responses, including activation of T cells.
“Dendritic cells are the conductor for the orchestra of all the T cells, although they’re a very sparse cell population,” Spranger says. “They can communicate which type of danger they sense from stressed cells and then instruct the T cells on what they have to do and where they have to go.”
Spranger’s lab is now beginning to study other types of tumors that don’t respond at all to immunotherapy, including ovarian cancer and glioblastoma. Both the brain and the peritoneal cavity appear to suppress T-cell responses to tumors, and Spranger hopes to figure out how to overcome that immunosuppression.
“We’re specifically focusing on ovarian cancer and glioblastoma, because nothing’s working right now for those cancers,” she says. “We want to understand what we have to do in those sites to induce a really good anti-tumor immune response.”
MIT engineers prepare to send three payloads to the moon
Data from the devices will help future astronauts navigate the moon’s south polar region and search for frozen water.
Three MIT payloads will soon hitch a ride to the moon in a step toward establishing a permanent base on the lunar surface.
In the coming days, weather permitting, MIT engineers and scientists will send three payloads into space, on a course set for the moon’s south polar region. Scientists believe this area, with its permanently shadowed regions, could host hidden reservoirs of frozen water, which could serve to sustain future lunar settlements and fuel missions beyond the moon.
NASA plans to send astronauts to the moon’s south pole in 2027 as part of the Artemis III mission, which will be the first time humans touch down on the lunar surface since the Apollo era and the first time any human sets foot on its polar region. In advance of that journey, the MIT payloads will provide data about the area that can help prepare Artemis astronauts for navigating the frozen terrain.
The payloads include two novel technologies — a small depth-mapping camera and a thumb-sized mini-rover — along with a wafer-thin “record,” etched with the voices of people from around the world speaking in their native languages. All three payloads will be carried by a larger, suitcase-sized rover built by the space contractor Lunar Outpost.
As the main rover drives around the moon’s surface, exploring the polar terrain, the MIT camera, mounted on the front of the rover, will take the first-ever 3D images of the lunar landscape captured from the surface of the moon using time-of-flight technology. These images will beam back to Earth, where they can be used to train Artemis astronauts in visual simulations of the polar terrain and can be incorporated into advanced spacesuits with synthetic vision helmets.
Meanwhile, the mini-rover, dubbed “AstroAnt,” will wheel around the roof of the main rover and take temperature readings to monitor the larger vehicle’s operation. If it’s successful, AstroAnt could work as part of a team of miniature helper bots, performing essential tasks in future missions, such as clearing dust from solar panels and checking for cracks in lunar habitats and infrastructure.
All three MIT payloads, along with the Lunar Outpost rover, will launch to the moon aboard a SpaceX Falcon 9 rocket and touch down in the moon’s south polar region in a lander built by space company Intuitive Machines. The mission as a whole, which includes a variety of other payloads in addition to MIT’s, is named IM-2, for Intuitive Machines’ second trip to the moon. IM-2 aims to identify the presence and amount of water-ice on the moon’s south pole, using a combination of instruments, including an ice drill mounted to the lander, and a robotic “hopper” that will bounce along the surface to search for water in hard-to-reach regions.
The lunar landing, which engineers anticipate will be around noon on March 6, will mark the first time MIT has set active technology on the moon’s surface since the Apollo era, when MIT’s Instrumentation Laboratory, now the independent Draper Laboratory, provided the landmark Apollo Guidance Computer that navigated astronauts to the moon and back.
MIT engineers see their part in the new mission, which they’ve named “To the Moon to Stay,” as the first of many on the way to establishing a permanent presence on the lunar surface.
“Our goal is not just to visit the moon but to build a thriving ecosystem that supports humanity’s expansion into space,” says Dava Newman, Apollo Program Professor of Astronautics at MIT, director of the MIT Media Lab, and former NASA deputy administrator.
Institute’s roots
MIT’s part in the lunar mission is led by the Space Exploration Initiative (SEI), a research collaborative within the Media Lab that aims to enable a “sci-fi future” of space exploration. The SEI, which was founded in 2016 by media arts and sciences alumna Ariel Ekblaw SM ’17, PhD ’20, develops, tests, and deploys futuristic space-grade technologies that are intended to help humans establish sustainable settlements in space.
In the spring of 2021, SEI and MIT’s Department of Aeronautics and Astronautics (AeroAstro) offered a course, MAS.839/16.893 (Operating in the Lunar Environment), that tasked teams of students with designing payloads that meet certain objectives related to NASA’s Artemis missions to the moon. The class was taught by Ekblaw and AeroAstro’s Jeffrey Hoffman, MIT professor of the practice and former NASA astronaut, who helped students test their payload designs in the field, including in remote regions of Norway that resemble the moon’s barren landscape, and in parabolic flights that mimic the moon’s weak gravity.
Out of that class, Ekblaw and Hoffman chose to further develop two payload designs: a laser-based 3D camera system and the AstroAnt — a tiny, autonomous inspection robot. Both designs grew out of prior work. AstroAnt was originally a side project as part of Ekblaw’s PhD, based on work originally developed by Artem Dementyev in the Media Lab’s Responsive Environments group, while the 3D camera was a PhD focus for AeroAstro alumna Cody Paige ’23, who helped develop and test the camera design and implement VR/XR technology with Newman, in collaboration with NASA Ames Research Center.
As both designs were fine-tuned, Ekblaw raised funds and established a contract with Lunar Outpost (co-founded by MIT AeroAstro alumnus Forrest Meyen SM ’13, PhD ’17) to pair the payloads with the company’s moon-bound rover. SEI Mission Integrator Sean Auffinger oversaw integration and test efforts, together with Lunar Outpost, to support these payloads for operation in a novel, extreme environment.
“This mission has deep MIT roots,” says Ekblaw, who is the principal investigator for the MIT arm of the IM-2 mission, and a visiting scientist at the Media Lab. “This will be historic in that we’ve never landed technology or a rover in this area of the lunar south pole. It’s a really hard place to land — there are big boulders, and deep dust. So, it’s a bold attempt.”
Systems on
The site of the IM-2 landing is Mons Mouton Plateau — a flat-topped mountain at the moon’s south pole that lies just north of Shackleton Crater, which is a potential landing site for NASA’s Artemis astronauts. After the Intuitive Machines lander touches down, it will effectively open its garage door and let Lunar Outpost’s rover drive out to explore the polar landscape. Once the rover acclimates to its surroundings, it will begin to activate its instruments, including MIT’s 3D camera.
“It will be the first time we’re using this specific imaging technology on the lunar surface,” notes Paige, who is the current SEI director.
The camera, which will be mounted on the front of the main rover, is designed to shine laser light onto a surface and measure the time it takes for the light to bounce back to the camera. This “time-of-flight” is a measurement of distance, which can also be translated into surface topography, such as the depth of individual craters and crevices.
“Because we’re using a laser light, we can look without using sunlight,” Paige explains. “And we don’t know exactly what we’ll find. Some of the things we’re looking for are centimeter-sized holes, in areas that are permanently shadowed or frozen, that might contain water-ice. Those are the kinds of landscapes we’re really excited to see.”
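The time-of-flight principle itself reduces to a round-trip calculation: distance is the speed of light multiplied by half the measured return time. The short Python snippet below illustrates only that conversion; the actual camera's timing resolution, calibration, and processing pipeline are not described in the article, and the example return time is hypothetical.

```python
# Minimal illustration of the time-of-flight principle: a laser pulse's
# round-trip time converts to a one-way distance via d = c * t / 2.
# The real camera's timing resolution and calibration are not specified here.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def time_of_flight_distance(round_trip_seconds: float) -> float:
    """One-way distance (meters) from a measured round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# Example: a return detected 20 nanoseconds after the pulse leaves the camera
# corresponds to a surface roughly 3 meters away.
print(f"{time_of_flight_distance(20e-9):.2f} m")
```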
Paige expects that the camera will send images back to Earth in next-day data packets, which will be processed by custom software developed by the team’s lead software engineer, Don Derek Haddad, allowing the camera’s science team to analyze the images as the rover traverses the terrain.
As the camera maps the moon’s surface, AstroAnt — which is smaller and lighter than an AirPods case — will deploy from a tiny garage atop the main rover’s roof. The AstroAnt will drive around on magnetic wheels that allow it to stick to the rover’s surface without falling off. To the AstroAnt’s undercarriage, Ekblaw and her team, led by Media Lab graduate student Fangzheng Liu, fixed a thermopile — a small sensor that takes measurements of the main rover’s temperature, which can be used to monitor the vehicle’s thermal performance.
“If we can test this one AstroAnt on the moon, then we imagine having these really capable, roving swarms that can help astronauts do autonomous repair, inspection, diagnostics, and servicing,” Ekblaw says. “In the future, we could put little windshield wipers on them to help clear dust from solar panels, or put a pounding bar on them to induce tiny vibrations to detect defects in a habitat. There’s a lot of potential once we get to swarm scale.”
Eyes on the moon
The third MIT payload that will be affixed to the main rover is dubbed the Humanity United with MIT Art and Nanotechnology in Space, or HUMANS project. Led by MIT AeroAstro alumna Maya Nasr ’18, SM ’21, PhD ’23, HUMANS is a 2-inch disc made from a silicon wafer engraved with nanometer-scale etchings using technology provided by MIT.nano. The engravings are inspired by The Golden Record, a phonograph record that was sent into space with NASA’s Voyager probes in 1977. The HUMANS record is engraved with recordings of people from around the world, speaking in their native languages about what space exploration and humanity mean to them.
“We are carrying the hopes, dreams, and stories of people from all backgrounds,” Nasr says. “[It’s a] powerful reminder that space is not the privilege of a few, but the shared legacy of all.”
The MIT Media Lab plans to display the March 6 landing on a screen in the building’s atrium for the public to watch in real time. Researchers from MIT’s Department of Architecture, led by Associate Professor Skylar Tibbits, have also built a lunar mission control room — a circular, architectural space where the engineers will monitor and control the mission’s payloads. If all goes well, the MIT team sees the mission as the first step toward putting permanent boots on the surface of the moon, and even beyond.
“Our return to the Moon is not just about advancing technology — it’s about inspiring the next generation of explorers who are alive today and will travel to the moon in their lifetime,” Ekblaw says. “This historic mission for MIT brings students, staff and faculty together from across the Institute on a foundational mission that will support a future sustainable lunar settlement.”
An “All-American” vision of service to others
Former NFL linebacker Spencer Paysinger keynotes the 51st annual MLK Celebration, with a message focused on building community.
Spencer Paysinger has already been many things in his life, including a Super Bowl-winning linebacker, a writer and producer of the hit television series “All-American,” and local-business entrepreneur. But as he explained during his keynote speech at MIT’s 51st annual event celebrating the life and legacy of Martin Luther King Jr., Paysinger would prefer to think about his journey in additional terms: whether he has been able to serve others along the way.
“As I stand up here today talking about Dr. King’s mission, Dr. King’s dream, why we’re all here today, to me it all leans back into community,” Paysinger said. “I want to be judged by what I have done for others.”
Being able to reach out to others, in good times and bad, was a theme of the annual event, which took place in MIT’s Walker Memorial (Building 50), on Thursday. As Paysinger noted, his own career is marked by being a “team player” and finding reward in shared endeavors.
“For me, I’m at my best when I have people on the right and on the left of me attempting to reach the same dream,” Paysinger said. “We can have different ideologies, we can come from different backgrounds, of race, socioeconomic backgrounds. … At the end of the day it comes back to the mindset we need to have. It’s rooted in community, it’s rooted in togetherness.”
The event featured an array of talks delivered by students, campus leaders, and guests, along with musical interludes, and drew hundreds from the MIT community.
In opening remarks, MIT President Sally A. Kornbluth praised Paysinger, saying his “perseverance and tenacity are a fantastic example to us all.”
Kornbluth also spoke about the values, and value, of MIT itself. American universities and colleges, she noted, have long “been a point of national pride and a source of international envy. … They and we have always been valued as centers of excellence, creativity, innovation, and an infinitely renewable source of leadership.”
Moving forward, Kornbluth noted, the MIT community will continue to pursue excellence and provide mutual respect for others.
“MIT is in the talent business,” Kornbluth said. “Our success, and living up to our great mission, depends on our ability to attract extraordinarily talented people and to create a community in which everyone earns a place here to do their very best work. … Everyone at MIT is here because they deserve to be here. Every staff member, every faculty member, every postdoc, every student, every one of us. Every one of us is a full member of this community, and every member of our community is valued as a human being, and valued for what they contribute to our mission.”
Paysinger lauded the array of speakers as well as the friendly atmosphere at the event, where attendees sat around luncheon tables, talking and getting to know each other before and after the slate of talks.
“You guys actively and literally in 45 minutes have changed my view of what MIT is,” Paysinger said.
In his NFL career, Paysinger was a linebacker who played with the New York Giants, Miami Dolphins, and Carolina Panthers, from 2011 through 2017, appearing in 94 regular-season games and five playoff games. He saw action in Super Bowl XLVI, when the Giants beat the New England Patriots, 21-17, something he joked about a few times for his Massachusetts audience. Paysinger’s former New York Giants teammate, fellow linebacker Mark Herzlich, was also in attendance on Thursday.
Paysinger grew up in South Central Los Angeles, long perceived from the outside as a place of danger and deprivation. And while he experienced those things, Paysinger said, his home neighborhood also had its “all-American” side, as kids raced bikes down the block and grew to know each other. Paysinger attended Beverly Hills High School, starring as a wide receiver, then signed with the University of Oregon, where he converted to linebacker. Oregon and Paysinger reached college football’s national championship game in his senior season, 2010.
In his talk, Paysinger emphasized the twists and turns of his journey through football, from changing positions on the field to changing teams. He noted that, in sports as in life, moving beyond our comfort zone can help us thrive in the long run.
“I was scared, I wasn’t sure of myself, when my coaches decided to make that change for me,” Paysinger said. However, he added, “I knew that [from] leaning into the uncomfortableness of the moment, the other side could be greater for me.”
The NFL soon beckoned, along with a Super Bowl ring. But Paysinger received a jolt beyond the boundaries of sports in 2015, when his former Giants teammate and close friend Tyler Sash died suddenly at age 27. Among other things, Paysinger began thinking about life after football more systematically and began his screenwriting efforts in earnest, even as his football career was still ongoing.
“All-American,” now entering its 7th season on the CW Network, is loosely based on his own background, capturing the dynamics of his experiences as a player and team member. It has become one of the longest-running sports-based shows on television. Paysinger is also an entrepreneur who founded Hilltop Kitchen and Coffee, a chain of eateries in underserved areas around Los Angeles, and has helped develop other local businesses as well.
And while every new venture is a fresh challenge, Paysinger said, we can often accomplish more than we realize: “I’m not coming from a mindset of deciding whether I can or can’t do something, but if I want to or not.”
Sophomore Michael Ewing provided welcoming remarks and introduced Paysinger. He read aloud a quote from King chosen as a central motif of this year’s celebration: “We must come to see that the end we seek is a society at peace with itself, a society that can live with its conscience.”
For his own part, Ewing said, “When I read these words, I think of a society that aspires to improve its circumstances, address existing issues, and create a more positive and just environment for all.” At MIT, Ewing added, there is “a community where students, professors, and others come together to achieve at the highest levels, united by a shared desire to learn and grow. … The process of collaborating, disagreeing, building with others who are different — this is the key to growth and development.”
The annual MLK Celebration featured further reflections from students, including second-year undergraduate Siddhu Pachipala, a political science and economics double-major. Pachipala began his remarks by recounting a social media exchange he once had with a congressional account, the tenor of which he soon regretted.
“Looking back, I think it was a missed opportunity,” Pachipala said. “Why was my first instinct … to turn it into a battle? … We train ourselves to believe that if we’re not scoring hits, we’re losing, and gestures of decency are traps, that an extended hand must be slapped away. Martin Luther King Jr. took politics to be something more substantial. He had a serious vision of justice, one we’ve gathered today to honor. But he knew that justice had a prerequisite: friendship.”
Elshareef Kabbashi, a graduate student in architecture, offered additional remarks, noting that “Dr. King’s dream was never confined to a single movement, nation, or moment in history,” but rather aimed at creating “human dignity everywhere.”
E. Denise Simmons, mayor of the City of Cambridge, also spoke, and lauded “the entire MIT community for keeping this tradition alive for 51 years.” She added: “It’s Dr. King’s wisdom, his courage, his moral clarity, that helped light the path forward. And I ask each of you to continue to shine that light.”
The luncheon included the presentation of the annual Dr. Martin Luther King Jr. Leadership Awards, given this year to Cordelia Price ’78, SM ’80; Pouya Alimagham; Ciarra Ortiz; Sahal Ahmed; William Gibbs; and Maxine Samuels.
On a day full of thoughts about King and his vision, Paysinger underscored the salience of community by highlighting another of his favorite King passages: “Every man must decide whether he will walk in the light of creative altruism or in the darkness of destructive selfishness. This is the judgment. Life’s most persistent and urgent question is, ‘What are you doing for others?’”
High-speed videos show what happens when a droplet splashes into a pool
Findings may help predict how rain and irrigation systems launch particles and pathogens from watery surfaces, with implications for industry, agriculture, and public health.
Rain can freefall at speeds of up to 25 miles per hour. If the droplets land in a puddle or pond, they can form a crown-like splash that, with enough force, can dislodge any surface particles and launch them into the air.
Now MIT scientists have taken high-speed videos of droplets splashing into a deep pool, to track how the fluid evolves, above and below the water line, frame by millisecond frame. Their work could help to predict how splashing droplets, such as from rainstorms and irrigation systems, may impact watery surfaces and aerosolize surface particles, such as pollen on puddles or pesticides in agricultural runoff.
The team carried out experiments in which they dispensed water droplets of various sizes and from various heights into a pool of water. Using high-speed imaging, they measured how the liquid pool deformed as the impacting droplet hit the pool’s surface.
Across all their experiments, they observed a common splash evolution: As a droplet hit the pool, it pushed down below the surface to form a “crater,” or cavity. At nearly the same time, a wall of liquid rose above the surface, forming a crown. Interestingly, the team observed that small, secondary droplets were ejected from the crown before the crown reached its maximum height. This entire evolution happens in a fraction of a second.
Scientists have caught snapshots of droplet splashes in the past, such as the famous “Milk Drop Coronet” — a photo of a drop of milk in mid-splash, taken by the late MIT professor Harold “Doc” Edgerton, who invented a photographic technique to capture quickly moving objects.
The new work represents the first time scientists have used such high-speed images to model the entire splash dynamics of a droplet in a deep pool, combining what happens both above and below the surface. The team used the imaging to gather new data central to building a mathematical model that predicts how a droplet’s shape will morph and merge as it hits a pool’s surface. They plan to use the model as a baseline to explore to what extent a splashing droplet might drag up and launch particles from the water pool.
“Impacts of drops on liquid layers are ubiquitous,” says study author Lydia Bourouiba, a professor in the MIT departments of Civil and Environmental Engineering and Mechanical Engineering, and a core member of the Institute for Medical Engineering and Science (IMES). “Such impacts can produce myriads of secondary droplets that could act as carriers for pathogens, particles, or microbes that are on the surface of impacted pools or contaminated water bodies. This work is key in enabling prediction of droplet size distributions, and potentially also what such drops can carry with them.”
Bourouiba and her mentees have published their results in the Journal of Fluid Mechanics. MIT co-authors include former graduate student Raj Dandekar PhD ’22, postdoc (Eric) Naijian Shen, and student mentee Boris Naar.
Above and below
At MIT, Bourouiba heads up the Fluid Dynamics of Disease Transmission Laboratory, part of the Fluids and Health Network, where she and her team explore the fundamental physics of fluids and droplets in a range of environmental, energy, and health contexts, including disease transmission. For their new study, the team looked to better understand how droplets impact a deep pool — a seemingly simple phenomenon that nevertheless has been tricky to precisely capture and characterize.
Bourouiba notes that there have been recent breakthroughs in modeling the evolution of a splashing droplet below a pool’s surface. As a droplet hits a pool of water, it breaks through the surface and drags air down through the pool to create a short-lived crater. Until now, scientists have focused on the evolution of this underwater cavity, mainly for applications in energy harvesting. What happens above the water, and how a droplet’s crown-like shape evolves with the cavity below, remained less understood.
“The descriptions and understanding of what happens below the surface, and above, have remained very much divorced,” says Bourouiba, who believes such an understanding can help to predict how droplets launch and spread chemicals, particles, and microbes into the air.
Splash in 3D
To study the coupled dynamics between a droplet’s cavity and crown, the team set up an experiment to dispense water droplets into a deep pool. For the purposes of their study, the researchers considered a deep pool to be a body of water that is deep enough that a splashing droplet would remain far away from the pool’s bottom. In these terms, they found that a pool with a depth of at least 20 centimeters was sufficient for their experiments.
They varied each droplet’s size, with an average diameter of about 5 millimeters. They also dispensed droplets from various heights, causing the droplets to hit the pool’s surface at different speeds, which on average was about 5 meters per second. The overall dynamics, Bourouiba says, should be similar to what occurs on the surface of a puddle or pond during an average rainstorm.
“This is capturing the speed at which raindrops fall,” she says. “These wouldn’t be very small, misty drops. This would be rainstorm drops for which one needs an umbrella.”
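For a feel of the impact regime these conditions imply, the reported drop size and speed can be converted into the standard dimensionless groups used in droplet-impact studies. The water properties in the sketch below are textbook values and the resulting numbers are back-of-the-envelope estimates, not figures taken from the paper.

```python
# Back-of-the-envelope estimate of the impact regime for the reported conditions
# (about 5 mm drops hitting at about 5 m/s). Water properties are textbook values;
# these dimensionless numbers are illustrative, not taken from the paper.

rho   = 998.0    # water density, kg/m^3
sigma = 0.072    # surface tension, N/m
mu    = 1.0e-3   # dynamic viscosity, Pa*s
g     = 9.81     # gravitational acceleration, m/s^2

D = 5e-3         # drop diameter, m
U = 5.0          # impact speed, m/s

weber    = rho * U**2 * D / sigma   # inertia vs. surface tension
reynolds = rho * U * D / mu         # inertia vs. viscosity
froude   = U**2 / (g * D)           # inertia vs. gravity

print(f"We ~ {weber:.0f}, Re ~ {reynolds:.0f}, Fr ~ {froude:.0f}")
# Large Weber and Reynolds numbers are consistent with the crown-and-cavity
# splashing behavior described in the article.
```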
Using high-speed imaging techniques inspired by Edgerton’s pioneering photography, the team captured videos of pool-splashing droplets, at rates of up to 12,500 frames per second. They then applied in-house imaging processing methods to extract key measurements from the image sequences, such as the changing width and depth of the underwater cavity, and the evolving diameter and height of the rising crown. The researchers also captured especially tricky measurements, of the crown’s wall thickness profile and inner flow — the cylinder that rises out of the pool, just before it forms a rim and points that are characteristic of a crown.
“This cylinder-like wall of rising liquid, and how it evolves in time and space, is at the heart of everything,” Bourouiba says. “It’s what connects the fluid from the pool to what will go into the rim and then be ejected into the air through smaller, secondary droplets.”
The researchers worked the image data into a set of “evolution equations,” or a mathematical model that relates the various properties of an impacting droplet, such as the width of its cavity and the thickness and speed profiles of its crown wall, and how these properties change over time, given a droplet’s starting size and impact speed.
“We now have a closed-form mathematical expression that people can use to see how all these quantities of a splashing droplet change over space and time,” says co-author Shen, who plans, with Bourouiba, to apply the new model to the behavior of secondary droplets and to understanding how a splash ends up dispersing particles such as pathogens and pesticides. “This opens up the possibility to study all these problems of splash in 3D, with self-contained, closed-form equations, which was not possible before.”
This research was supported, in part, by the Department of Agriculture-National Institute of Food and Agriculture Specialty Crop Research Initiative; the Richard and Susan Smith Family Foundation; the National Science Foundation; the Centers for Disease Control and Prevention-National Institute for Occupational Safety and Health; Inditex; and the National Institute of Allergy and Infectious Diseases of the National Institutes of Health.
Professor Anthony Sinskey, biologist, inventor, entrepreneur, and Center for Biomedical Innovation co-founder, dies at 84
Colleagues remember the longtime MIT professor as a supportive, energetic collaborator who seemed to know everyone at the Institute.
Longtime MIT Professor Anthony “Tony” Sinskey ScD ’67, who was also the co-founder and faculty director of the Center for Biomedical Innovation (CBI), passed away on Feb. 12 at his home in New Hampshire. He was 84.
Deeply engaged with MIT, Sinskey left his mark on the Institute as much through the relationships he built as the research he conducted. Colleagues say that throughout his decades on the faculty, Sinskey’s door was always open.
“He was incredibly generous in so many ways,” says Graham Walker, an American Cancer Society Professor at MIT. “He was so willing to support people, and he did it out of sheer love and commitment. If you could just watch Tony in action, there was so much that was charming about the way he lived. I’ve said for years that after they made Tony, they broke the mold. He was truly one of a kind.”
Sinskey’s lab at MIT explored methods for metabolic engineering and the production of biomolecules. Over the course of his research career, he published more than 350 papers in leading peer-reviewed journals for biology, metabolic engineering, and biopolymer engineering, and filed more than 50 patents. Well-known in the biopharmaceutical industry, Sinskey contributed to the founding of multiple companies, including Metabolix, Tepha, Merrimack Pharmaceuticals, and Genzyme Corporation. Sinskey’s work with CBI also led to impactful research papers, manufacturing initiatives, and educational content since its founding in 2005.
Across all of his work, Sinskey built a reputation as a supportive, collaborative, and highly entertaining friend who seemed to have a story for everything.
“Tony would always ask for my opinions — what did I think?” says Barbara Imperiali, MIT’s Class of 1922 Professor of Biology and Chemistry, who first met Sinskey as a graduate student. “Even though I was younger, he viewed me as an equal. It was exciting to be able to share my academic journey with him. Even later, he was continually opening doors for me, mentoring, connecting. He felt it was his job to get people into a room together to make new connections.”
Sinskey grew up in the small town of Collinsville, Illinois, and spent nights after school working on a farm. For his undergraduate degree, he attended the University of Illinois, where he got a job washing dishes at the dining hall. One day, as he recalled in a 2020 conversation, he complained to his advisor about the dishwashing job, so the advisor offered him a job washing equipment in his microbiology lab.
In a development that would repeat itself throughout Sinskey’s career, he befriended the researchers in the lab and started learning about their work. Soon he was showing up on weekends and helping out. The experience inspired Sinskey to go to graduate school, and he only applied to one place.
Sinskey earned his ScD from MIT in nutrition and food science in 1967. He joined MIT’s faculty a few years later and never left.
“He loved MIT and its excellence in research and education, which were incredibly important to him,” Walker says. “I don’t know of another institution this interdisciplinary — there’s barely a speed bump between departments — so you can collaborate with anybody. He loved that. He also loved the spirit of entrepreneurship, which he thrived on. If you heard somebody wanted to get a project done, you could run around, get 10 people, and put it together. He just loved doing stuff like that.”
Working across departments would become a signature of Sinskey’s research. His original office was on the first floor of MIT’s Building 56, right next to the parking lot, so he’d leave his door open in the mornings and afternoons and colleagues would stop in and chat.
“One of my favorite things to do was to drop in on Tony when I saw that his office door was open,” says Chris Kaiser, MIT’s Amgen Professor of Biology. “We had a whole range of things we liked to catch up on, but they always included his perspectives looking back on his long history at MIT. It also always included hopes for the future, including tracking trajectories of MIT students, whom he doted on.”
Long before the internet existed, colleagues say, Sinskey was a kind of internet unto himself, constantly leveraging his vast web of relationships to make connections and stay on top of the latest science news.
“He was an incredibly gracious person — and he knew everyone,” Imperiali says. “It was as if his Rolodex had no end. You would sit there and he would say, ‘Call this person.’ or ‘Call that person.’ And ‘Did you read this new article?’ He had a wonderful view of science and collaboration, and he always made that a cornerstone of what he did. Whenever I’d see his door open, I’d grab a cup of tea and just sit there and talk to him.”
When the first recombinant DNA molecules were produced in the 1970s, the technique became a hot area of research. Sinskey wanted to learn more about recombinant DNA, so he hosted a large symposium on the topic at MIT that brought in experts from around the world.
“He got his name associated with recombinant DNA for years because of that,” Walker recalls. “People started seeing him as Mr. Recombinant DNA. That kind of thing happened all the time with Tony.”
Sinskey’s research contributions extended beyond recombinant DNA into other microbial techniques to produce amino acids and biodegradable plastics. He co-founded CBI in 2005 to improve global health through the development and dispersion of biomedical innovations. The center adopted Sinskey’s collaborative approach in order to accelerate innovation in biotechnology and biomedical research, bringing together experts from across MIT’s schools.
“Tony was at the forefront of advancing cell culture engineering principles so that making biomedicines could become a reality. He knew early on that biomanufacturing was an important step on the critical path from discovering a drug to delivering it to a patient,” says Stacy Springs, the executive director of CBI. “Tony was not only my boss and mentor, but one of my closest friends. He was always working to help everyone reach their potential, whether that was a colleague, a former or current researcher, or a student. He had a gentle way of encouraging you to do your best.”
“MIT is one of the greatest places to be because you can do anything you want here as long as it’s not a crime,” Sinskey joked in 2020. “You can do science, you can teach, you can interact with people — and the faculty at MIT are spectacular to interact with.”
Sinskey shared his affection for MIT with his family. His wife, the late ChoKyun Rha ’62, SM ’64, SM ’66, ScD ’67, was a professor at MIT for more than four decades and the first woman of Asian descent to receive tenure at MIT. His two sons also attended MIT — Tong-ik Lee Sinskey ’79, SM ’80 and Taeminn Song MBA ’95, who is the director of strategy and strategic initiatives for MIT Information Systems and Technology (IS&T).
Song recalls: “He was driven by the same goal my mother had: to advance knowledge in science and technology by exploring new ideas and pushing everyone around them to be better.”
Around 10 years ago, Sinskey began teaching a class with Walker, Course 7.21/7.62 (Microbial Physiology). Walker says their approach was to treat the students as equals and learn as much from them as they taught. The lessons extended beyond the inner workings of microbes to what it takes to be a good scientist and how to be creative. Sinskey and Rha even started inviting the class over to their home for Thanksgiving dinner each year.
“At some point, we realized the class was turning into a close community,” Walker says. “Tony had this endless supply of stories. It didn’t seem like there was a topic in biology that Tony didn’t have a story about either starting a company or working with somebody who started a company.”
Over the last few years, Walker wasn’t sure they were going to continue teaching the class, but Sinskey remarked it was one of the things that gave his life meaning after his wife’s passing in 2021. That decided it.
After finishing up this past semester with a class-wide lunch at Legal Sea Foods, Sinskey and Walker agreed it was one of the best semesters they’d ever taught.
In addition to his two sons, Sinskey is survived by his daughter-in-law, Hyunmee Elaine Song, five grandchildren, and two great-grandsons. He is also survived by his brother Timothy Sinskey and his sister, Christine Sinskey Braudis; his brother Terry Sinskey died in 1975.
Gifts in Sinskey’s memory can be made to the ChoKyun Rha (1962) and Anthony J Sinskey (1967) Fund.
MIT biologists discover a new type of control over RNA splicing
They identified proteins that influence splicing of about half of all human introns, allowing for more complex types of gene regulation.
RNA splicing is a cellular process that is critical for gene expression. After genes are copied from DNA into messenger RNA, portions of the RNA that don’t code for proteins, called introns, are cut out and the coding portions are spliced back together.
This process is controlled by a large protein-RNA complex called the spliceosome. MIT biologists have now discovered a new layer of regulation that helps to determine which sites on the messenger RNA molecule the spliceosome will target.
The research team discovered that this type of regulation, which appears to influence the expression of about half of all human genes, is found throughout the animal kingdom, as well as in plants. The findings suggest that the control of RNA splicing, a process that is fundamental to gene expression, is more complex than previously known.
“Splicing in more complex organisms, like humans, is more complicated than it is in some model organisms like yeast, even though it’s a very conserved molecular process. There are bells and whistles on the human spliceosome that allow it to process specific introns more efficiently. One of the advantages of a system like this may be that it allows more complex types of gene regulation,” says Connor Kenny, an MIT graduate student and the lead author of the study.
Christopher Burge, the Uncas and Helen Whitaker Professor of Biology at MIT, is the senior author of the study, which appears today in Nature Communications.
Building proteins
RNA splicing, a process discovered in the late 1970s, allows cells to precisely control the content of the mRNA transcripts that carry the instructions for building proteins.
Each mRNA transcript contains coding regions, known as exons, and noncoding regions, known as introns. They also include sites that act as signals for where splicing should occur, allowing the cell to assemble the correct sequence for a desired protein. This process enables a single gene to produce multiple proteins; over evolutionary timescales, splicing can also change the size and content of genes and proteins, when different exons become included or excluded.
The spliceosome, which forms on introns, is composed of proteins and noncoding RNAs called small nuclear RNAs (snRNAs). In the first step of spliceosome assembly, an snRNA molecule known as U1 snRNA binds to the 5’ splice site at the beginning of the intron. Until now, it had been thought that the binding strength between the 5’ splice site and the U1 snRNA was the most important determinant of whether an intron would be spliced out of the mRNA transcript.
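To make “binding strength” at the 5’ splice site concrete, the toy Python scorer below compares a nine-base window around the exon-intron boundary (written in DNA letters) with the canonical consensus, the last three exonic bases MAG followed by the intronic bases GTRAGT, which approximates the positions that pair with the 5’ end of U1 snRNA. This textbook-style position count is only an illustration and is not the method used in the study.

```python
# Toy scorer for 5' splice-site strength: fraction of positions matching the
# canonical consensus MAG|GTRAGT (M = A or C, R = A or G), which approximates
# the bases that base-pair with the 5' end of U1 snRNA. Illustrative only.
IUPAC = {"A": {"A"}, "C": {"C"}, "G": {"G"}, "T": {"T"},
         "M": {"A", "C"}, "R": {"A", "G"}, "N": {"A", "C", "G", "T"}}

CONSENSUS = "MAGGTRAGT"  # last 3 exonic bases + first 6 intronic bases

def five_prime_ss_score(site):
    """Return the fraction of the 9 positions that match the consensus."""
    site = site.upper()
    assert len(site) == len(CONSENSUS), "expected a 9-nt window around the exon/intron boundary"
    matches = sum(base in IUPAC[code] for base, code in zip(site, CONSENSUS))
    return matches / len(CONSENSUS)

print(five_prime_ss_score("CAGGTAAGT"))  # consensus-like site -> 1.0
print(five_prime_ss_score("CAGGTATGT"))  # near-consensus site -> ~0.89
print(five_prime_ss_score("TTGGTCCGT"))  # weaker site -> ~0.56
```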
In the new study, the MIT team discovered that a family of proteins called LUC7 also helps to determine whether splicing will occur, but only for a subset of introns — in human cells, up to 50 percent.
Before this study, it was known that LUC7 proteins associate with U1 snRNA, but the exact function wasn’t clear. There are three different LUC7 proteins in human cells, and Kenny’s experiments revealed that two of these proteins interact specifically with one type of 5’ splice site, which the researchers called “right-handed.” A third human LUC7 protein interacts with a different type, which the researchers call “left-handed.”
The researchers found that about half of human introns contain a right- or left-handed site, while the other half do not appear to be controlled by interaction with LUC7 proteins. This type of control appears to add another layer of regulation that helps remove specific introns more efficiently, the researchers say.
“The paper shows that these two different 5’ splice site subclasses exist and can be regulated independently of one another,” Kenny says. “Some of these core splicing processes are actually more complex than we previously appreciated, which warrants more careful examination of what we believe to be true about these highly conserved molecular processes.”
“Complex splicing machinery”
Previous work has shown that mutation or deletion of one of the LUC7 proteins that bind to right-handed splice sites is linked to blood cancers, including about 10 percent of acute myeloid leukemias (AMLs). In this study, the researchers found that AMLs that lost a copy of the LUC7L2 gene have inefficient splicing of right-handed splice sites. These cancers also developed the same type of altered metabolism seen in earlier work.
“Understanding how the loss of this LUC7 protein in some AMLs alters splicing could help in the design of therapies that exploit these splicing differences to treat AML,” Burge says. “There are also small molecule drugs for other diseases such as spinal muscular atrophy that stabilize the interaction between U1 snRNA and specific 5’ splice sites. So the knowledge that particular LUC7 proteins influence these interactions at specific splice sites could aid in improving the specificity of this class of small molecules.”
Working with a lab led by Sascha Laubinger, a professor at Martin Luther University Halle-Wittenberg, the researchers found that introns in plants also have right- and left-handed 5’ splice sites that are regulated by Luc7 proteins.
The researchers’ analysis suggests that this type of splicing arose in a common ancestor of plants, animals, and fungi, but it was lost from fungi soon after they diverged from plants and animals.
“A lot of what we know about how splicing works and what the core components are actually comes from relatively old yeast genetics work,” Kenny says. “What we see is that humans and plants tend to have more complex splicing machinery, with additional components that can regulate different introns independently.”
The researchers now plan to further analyze the structures formed by the interactions of Luc7 proteins with mRNA and the rest of the spliceosome, which could help them figure out in more detail how different forms of Luc7 bind to different 5’ splice sites.
The research was funded by the U.S. National Institutes of Health and the German Research Foundation.
Rooftop panels, EV chargers, and smart thermostats could chip in to boost power grid resilience
MIT engineers propose a new “local electricity market” to tap into the power potential of homeowners’ grid-edge devices.
There’s a lot of untapped potential in our homes and vehicles that could be harnessed to reinforce local power grids and make them more resilient to unforeseen outages, a new study shows.
In response to a cyber attack or natural disaster, a backup network of decentralized devices — such as residential solar panels, batteries, electric vehicles, heat pumps, and water heaters — could restore electricity or relieve stress on the grid, MIT engineers say.
Such devices are “grid-edge” resources found close to the consumer rather than near central power plants, substations, or transmission lines. Grid-edge devices can independently generate, store, or tune their consumption of power. In their study, the research team shows how such devices could one day be called upon to either pump power into the grid, or rebalance it by dialing down or delaying their power use.
In a paper appearing this week in the Proceedings of the National Academy of Sciences, the engineers present a blueprint for how grid-edge devices could reinforce the power grid through a “local electricity market.” Owners of grid-edge devices could subscribe to a regional market and essentially loan out their device to be part of a microgrid or a local network of on-call energy resources.
In the event that the main power grid is compromised, an algorithm developed by the researchers would kick in for each local electricity market, to quickly determine which devices in the network are trustworthy. The algorithm would then identify the combination of trustworthy devices that would most effectively mitigate the power failure, by either pumping power into the grid or reducing the power they draw from it, by an amount that the algorithm would calculate and communicate to the relevant subscribers. The subscribers could then be compensated through the market, depending on their participation.
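The paper’s algorithm is not reproduced here, but the bookkeeping it has to perform can be sketched simply: given a power shortfall and a pool of subscribed devices, keep only those that pass the trust check and assign injections or demand reductions until the shortfall is covered. The greedy, cheapest-first selection in this Python sketch is a hypothetical simplification, not the researchers’ decision-making algorithm.

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    capacity_kw: float   # power it can inject (battery, EV) or shed (thermostat, water heater)
    cost_per_kw: float   # hypothetical compensation rate paid through the local market
    trusted: bool        # outcome of the market's trust check

def allocate(devices, shortfall_kw):
    """Greedily cover a shortfall using trustworthy devices, cheapest first.

    Returns a list of (device name, kW requested) and any uncovered remainder.
    Illustrative only; the paper's decision-making algorithm is more involved.
    """
    plan, remaining = [], shortfall_kw
    for dev in sorted((d for d in devices if d.trusted), key=lambda d: d.cost_per_kw):
        if remaining <= 0:
            break
        contribution = min(dev.capacity_kw, remaining)
        plan.append((dev.name, contribution))
        remaining -= contribution
    return plan, max(remaining, 0.0)

fleet = [
    Device("rooftop PV + battery, house 12", 5.0, 0.10, True),
    Device("EV charger, house 47 (V2G)",     7.0, 0.12, True),
    Device("heat pump, house 3",             2.0, 0.05, True),
    Device("smart thermostat, house 9",      1.5, 0.04, False),  # failed the trust check
]

plan, unmet = allocate(fleet, shortfall_kw=10.0)
print(plan)   # heat pump covers 2 kW, PV/battery 5 kW, EV charger the remaining 3 kW
print(unmet)  # 0.0
```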
The team illustrated this new framework through a number of grid attack scenarios, in which they considered failures at different levels of a power grid, from sources such as a cyber attack or a natural disaster. Applying their algorithm, they showed that networks of grid-edge devices were able to counteract each type of attack.
The results demonstrate that grid-edge devices such as rooftop solar panels, EV chargers, batteries, and smart thermostats (for HVAC devices or heat pumps) could be tapped to stabilize the power grid in the event of an attack.
“All these small devices can do their little bit in terms of adjusting their consumption,” says study co-author Anu Annaswamy, a research scientist in MIT’s Department of Mechanical Engineering. “If we can harness our smart dishwashers, rooftop panels, and EVs, and put our combined shoulders to the wheel, we can really have a resilient grid.”
The study’s MIT co-authors include lead author Vineet Nair and John Williams, along with collaborators from multiple institutions including the Indian Institute of Technology, the National Renewable Energy Laboratory, and elsewhere.
Power boost
The team’s study is an extension of their broader work in adaptive control theory and designing systems to automatically adapt to changing conditions. Annaswamy, who leads the Active-Adaptive Control Laboratory at MIT, explores ways to boost the reliability of renewable energy sources such as solar power.
“These renewables come with a strong temporal signature, in that we know for sure the sun will set every day, so the solar power will go away,” Annaswamy says. “How do you make up for the shortfall?”
The researchers found the answer could lie in the many grid-edge devices that consumers are increasingly installing in their own homes.
“There are lots of distributed energy resources that are coming up now, closer to the customer rather than near large power plants, and it’s mainly because of individual efforts to decarbonize,” Nair says. “So you have all this capability at the grid edge. Surely we should be able to put them to good use.”
While considering ways to deal with drops in energy from the normal operation of renewable sources, the team also began to look into other causes of power dips, such as from cyber attacks. They wondered, in these malicious instances, whether and how the same grid-edge devices could step in to stabilize the grid following an unforeseen, targeted attack.
Attack mode
In their new work, Annaswamy, Nair, and their colleagues developed a framework for incorporating grid-edge devices, and in particular, internet-of-things (IoT) devices, in a way that would support the larger grid in the event of an attack or disruption. IoT devices are physical objects that contain sensors and software that connect to the internet.
For their new framework, named EUREICA (Efficient, Ultra-REsilient, IoT-Coordinated Assets), the researchers start with the assumption that one day, most grid-edge devices will also be IoT devices, enabling rooftop panels, EV chargers, and smart thermostats to wirelessly connect to a larger network of similarly independent and distributed devices.
The team envisions that for a given region, such as a community of 1,000 homes, there exists a certain number of IoT devices that could potentially be enlisted in the region’s local network, or microgrid. Such a network would be managed by an operator, who would be able to communicate with operators of other nearby microgrids.
If the main power grid is compromised or attacked, operators would run the researchers’ decision-making algorithm to determine trustworthy devices within the network that can pitch in to help mitigate the attack.
The team tested the algorithm on a number of scenarios, such as a cyber attack in which all smart thermostats made by a certain manufacturer are hacked to raise their setpoints simultaneously to a degree that dramatically alters a region’s energy load and destabilizes the grid. The researchers also considered attacks and weather events that would shut off the transmission of energy at various levels and nodes throughout a power grid.
“In our attacks we consider between 5 and 40 percent of the power being lost. We assume some nodes are attacked, and some are still available and have some IoT resources, whether a battery with energy available or an EV or HVAC device that’s controllable,” Nair explains. “So, our algorithm decides which of those houses can step in to either provide extra power generation to inject into the grid or reduce their demand to meet the shortfall.”
In every scenario that they tested, the team found that the algorithm was able to successfully restabilize the grid and mitigate the attack or power failure. They acknowledge that putting such a network of grid-edge devices in place will require buy-in from customers, policymakers, and local officials, as well as innovations such as advanced power inverters that enable EVs to inject power back into the grid.
“This is just the first of many steps that have to happen in quick succession for this idea of local electricity markets to be implemented and expanded upon,” Annaswamy says. “But we believe it’s a good start.”
This work was supported, in part, by the U.S. Department of Energy and the MIT Energy Initiative.
Chip-based system for terahertz waves could enable more efficient, sensitive electronics
Researchers developed a scalable, low-cost device that can generate high-power terahertz waves on a chip, without bulky silicon lenses.
The use of terahertz waves, which have shorter wavelengths and higher frequencies than radio waves, could enable faster data transmission, more precise medical imaging, and higher-resolution radar.
But effectively generating terahertz waves using a semiconductor chip, which is essential for incorporation into electronic devices, is notoriously difficult.
Many current techniques can’t generate waves with enough radiating power for useful applications unless they utilize bulky and expensive silicon lenses. Higher radiating power allows terahertz signals to travel farther. Such lenses, which are often larger than the chip itself, make it hard to integrate the terahertz source into an electronic device.
To overcome these limitations, MIT researchers developed a terahertz amplifier-multiplier system that achieves higher radiating power than existing devices without the need for silicon lenses.
By affixing a thin, patterned sheet of material to the back of the chip and utilizing higher-power Intel transistors, the researchers produced a more efficient, yet scalable, chip-based terahertz wave generator.
This compact chip could be used to make terahertz arrays for applications like improved security scanners for detecting hidden objects or environmental monitors for pinpointing airborne pollutants.
“To take full advantage of a terahertz wave source, we need it to be scalable. A terahertz array might have hundreds of chips, and there is no place to put silicon lenses because the chips are combined with such high density. We need a different package, and here we’ve demonstrated a promising approach that can be used for scalable, low-cost terahertz arrays,” says Jinchen Wang, a graduate student in the Department of Electrical Engineering and Computer Science (EECS) and lead author of a paper on the terahertz radiator.
He is joined on the paper by EECS graduate students Daniel Sheen and Xibi Chen; Steven F. Nagle, managing director of the T.J. Rodgers RLE Laboratory; and senior author Ruonan Han, an associate professor in EECS, who leads the Terahertz Integrated Electronics Group. The research will be presented at the IEEE International Solid-State Circuits Conference.
Making waves
Terahertz waves sit on the electromagnetic spectrum between radio waves and infrared light. Their higher frequencies enable them to carry more information per second than radio waves, while they can safely penetrate a wider range of materials than infrared light.
One way to generate terahertz waves is with a CMOS chip-based amplifier-multiplier chain that increases the frequency of radio waves until they reach the terahertz range. To achieve the best performance, waves go through the silicon chip and are eventually emitted out the back into the open air.
But a property known as the dielectric constant gets in the way of a smooth transmission.
The dielectric constant influences how electromagnetic waves interact with a material. It affects the amount of radiation that is absorbed, reflected, or transmitted. Because the dielectric constant of silicon is much higher than that of air, most terahertz waves are reflected at the silicon-air boundary rather than being cleanly transmitted out the back.
Since most signal strength is lost at this boundary, current approaches often use silicon lenses to boost the power of the remaining signal.
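The scale of that boundary loss can be checked with textbook optics: at normal incidence roughly 30 percent of the power bounces back, and any wave striking the silicon-air interface beyond a critical angle of about 17 degrees is totally internally reflected, which is why so little escapes without a lens. The short Python calculation below uses the standard Fresnel formula and a typical silicon relative permittivity of about 11.7; the numbers are generic, not taken from the paper.

```python
import math

eps_si, eps_air = 11.7, 1.0          # typical relative permittivities
n_si, n_air = math.sqrt(eps_si), math.sqrt(eps_air)

# Fresnel power reflectance at normal incidence between two media.
R_normal = ((n_si - n_air) / (n_si + n_air)) ** 2
print(f"normal-incidence reflectance: {R_normal:.0%}")          # ~30%

# Beyond the critical angle, waves inside the silicon are totally internally reflected.
theta_c = math.degrees(math.asin(n_air / n_si))
print(f"critical angle inside silicon: {theta_c:.1f} degrees")  # ~17 degrees
```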
The MIT researchers approached this problem differently.
They drew on an electromagnetic design concept known as matching. The idea is to bridge the mismatch between the dielectric constants of silicon and air, which minimizes the amount of signal reflected at the boundary.
They accomplish this by sticking a thin sheet of material, with a dielectric constant between those of silicon and air, to the back of the chip. With this matching sheet in place, most waves will be transmitted out the back rather than being reflected.
A scalable approach
They chose a low-cost, commercially available substrate material with a dielectric constant very close to what they needed for matching. To improve performance, they used a laser cutter to punch tiny holes into the sheet until its dielectric constant was exactly right.
“Since the dielectric constant of air is 1, if you just cut some subwavelength holes in the sheet, it is equivalent to injecting some air, which lowers the overall dielectric constant of the matching sheet,” Wang explains.
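In standard quarter-wave anti-reflection design, the ideal matching layer has a dielectric constant equal to the geometric mean of the two media it bridges, and introducing subwavelength air holes lowers a sheet’s effective dielectric constant roughly in proportion to the air fraction. The sketch below works through that arithmetic for a hypothetical substrate with a dielectric constant of 4.4; neither the substrate value nor the simple mixing rule comes from the paper.

```python
import math

eps_si, eps_air = 11.7, 1.0

# Quarter-wave matching: ideal layer permittivity is the geometric mean.
eps_target = math.sqrt(eps_si * eps_air)
print(f"ideal matching-sheet dielectric constant: {eps_target:.2f}")   # ~3.42

# Hypothetical off-the-shelf substrate whose dielectric constant is a bit too high.
eps_sheet = 4.4

# Crude linear mixing rule: eps_eff = f_air * eps_air + (1 - f_air) * eps_sheet.
# Solve for the air fraction the laser-cut subwavelength holes must introduce.
f_air = (eps_sheet - eps_target) / (eps_sheet - eps_air)
print(f"required air fraction from subwavelength holes: {f_air:.0%}")  # ~29%
```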
In addition, they designed their chip with special transistors developed by Intel that have a higher maximum frequency and breakdown voltage than traditional CMOS transistors.
“These two things taken together, the more powerful transistors and the dielectric sheet, plus a few other small innovations, enabled us to outperform several other devices,” he says.
Their chip generated terahertz signals with a peak radiation power of 11.1 decibel-milliwatts, the best among state-of-the-art techniques. Moreover, since the low-cost chip can be fabricated at scale, it could be integrated into real-world electronic devices more readily.
One of the biggest challenges of developing a scalable chip was determining how to manage the power and temperature when generating terahertz waves.
“Because the frequency and the power are so high, many of the standard ways to design a CMOS chip are not applicable here,” Wang says.
The researchers also needed to devise a technique for installing the matching sheet that could be scaled up in a manufacturing facility.
Moving forward, they want to demonstrate this scalability by fabricating a phased array of CMOS terahertz sources, enabling them to steer and focus a powerful terahertz beam with a low-cost, compact device.
This research is supported, in part, by NASA’s Jet Propulsion Laboratory and Strategic University Research Partnerships Program, as well as the MIT Center for Integrated Circuits and Systems. The chip was fabricated through the Intel University Shuttle Program.
Like human brains, large language models reason about diverse data in a general way
A new study shows LLMs represent different data types based on their underlying meaning and reason about data in their dominant language.
While early language models could only process text, contemporary large language models now perform highly diverse tasks on different types of data. For instance, LLMs can understand many languages, generate computer code, solve math problems, or answer questions about images and audio.
MIT researchers probed the inner workings of LLMs to better understand how they process such assorted data, and found evidence that they share some similarities with the human brain.
Neuroscientists believe the human brain has a “semantic hub” in the anterior temporal lobe that integrates semantic information from various modalities, like visual data and tactile inputs. This semantic hub is connected to modality-specific “spokes” that route information to the hub. The MIT researchers found that LLMs use a similar mechanism by abstractly processing data from diverse modalities in a central, generalized way. For instance, a model that has English as its dominant language would rely on English as a central medium to process inputs in Japanese or reason about arithmetic, computer code, etc. Furthermore, the researchers demonstrate that they can intervene in a model’s semantic hub by using text in the model’s dominant language to change its outputs, even when the model is processing data in other languages.
These findings could help scientists train future LLMs that are better able to handle diverse data.
“LLMs are big black boxes. They have achieved very impressive performance, but we have very little knowledge about their internal working mechanisms. I hope this can be an early step to better understand how they work so we can improve upon them and better control them when needed,” says Zhaofeng Wu, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this research.
His co-authors include Xinyan Velocity Yu, a graduate student at the University of Southern California (USC); Dani Yogatama, an associate professor at USC; Jiasen Lu, a research scientist at Apple; and senior author Yoon Kim, an assistant professor of EECS at MIT and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The research will be presented at the International Conference on Learning Representations.
Integrating diverse data
The researchers built the new study on prior work hinting that English-centric LLMs use English to reason about inputs in other languages.
Wu and his collaborators expanded this idea, launching an in-depth study into the mechanisms LLMs use to process diverse data.
An LLM, which is composed of many interconnected layers, splits input text into words or sub-words called tokens. The model assigns a representation to each token, which enables it to explore the relationships between tokens and generate the next word in a sequence. In the case of images or audio, these tokens correspond to particular regions of an image or sections of an audio clip.
The researchers found that the model’s initial layers process data in its specific language or modality, like the modality-specific spokes in the human brain. Then, the LLM converts tokens into modality-agnostic representations as it reasons about them throughout its internal layers, akin to how the brain’s semantic hub integrates diverse information.
The model assigns similar representations to inputs with similar meanings, regardless of their data type, including images, audio, computer code, and arithmetic problems. Even though an image and its text caption are distinct data types, the LLM assigns them similar representations because they share the same meaning.
For instance, an English-dominant LLM “thinks” about a Chinese-text input in English before generating an output in Chinese. The model has a similar reasoning tendency for non-text inputs like computer code, math problems, or even multimodal data.
To test this hypothesis, the researchers passed a pair of sentences with the same meaning but written in two different languages through the model. They measured how similar the model’s representations were for each sentence.
Then they conducted a second set of experiments where they fed an English-dominant model text in a different language, like Chinese, and measured how similar its internal representation was to English versus Chinese. The researchers conducted similar experiments for other data types.
They consistently found that the model’s representations were similar for sentences with similar meanings. In addition, across many data types, the tokens the model processed in its internal layers were more like English-centric tokens than the input data type.
“A lot of these input data types seem extremely different from language, so we were very surprised that we can probe out English-tokens when the model processes, for example, mathematic or coding expressions,” Wu says.
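A stripped-down version of this kind of measurement can be run on any openly available causal language model: feed in a pair of semantically equivalent sentences in two languages, average each layer’s hidden states into a single vector, and compare the vectors with cosine similarity. The snippet below uses the small English-trained gpt2 checkpoint from the Hugging Face transformers library purely as a stand-in; the paper’s models, prompts, and analysis differ.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

def layer_representations(text):
    """Return one mean-pooled vector per layer for the given text."""
    inputs = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return [h.mean(dim=1).squeeze(0) for h in out.hidden_states]

english = "The cat is sleeping on the sofa."
chinese = "猫在沙发上睡觉。"  # the same sentence in Chinese

for layer, (e, c) in enumerate(zip(layer_representations(english),
                                   layer_representations(chinese))):
    sim = torch.nn.functional.cosine_similarity(e, c, dim=0).item()
    print(f"layer {layer:2d}: cosine similarity = {sim:.3f}")
```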
Leveraging the semantic hub
The researchers think LLMs may learn this semantic hub strategy during training because it is an economical way to process varied data.
“There are thousands of languages out there, but a lot of the knowledge is shared, like commonsense knowledge or factual knowledge. The model doesn’t need to duplicate that knowledge across languages,” Wu says.
The researchers also tried intervening in the model’s internal layers using English text when it was processing other languages. They found that they could predictably change the model outputs, even though those outputs were in other languages.
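One common way to perform this kind of intervention on an open model, though not necessarily the procedure used in the paper, is to add a steering vector computed from English text to the hidden states at an intermediate layer using a forward hook. The hypothetical sketch below does this with the same gpt2 stand-in; the layer index, scaling factor, and steering phrases are arbitrary choices for illustration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

LAYER, ALPHA = 6, 4.0  # hypothetical layer index and steering strength

def mean_hidden(text):
    """Mean-pooled hidden state of the text at the chosen layer."""
    with torch.no_grad():
        out = model(**tok(text, return_tensors="pt"))
    return out.hidden_states[LAYER].mean(dim=1).squeeze(0)

# A crude English-derived direction: from "hot" concepts toward "cold" concepts.
steer = mean_hidden("cold winter ice snow frost") - mean_hidden("hot summer sun heat warmth")

def add_steering(module, inputs, output):
    # GPT-2 blocks normally return a tuple whose first element is the hidden states.
    if isinstance(output, tuple):
        return (output[0] + ALPHA * steer,) + output[1:]
    return output + ALPHA * steer

def next_token(prompt):
    """Greedy next-token prediction for a prompt."""
    with torch.no_grad():
        logits = model(**tok(prompt, return_tensors="pt")).logits
    return tok.decode(logits[0, -1].argmax().item())

prompt = "Das Wetter heute ist sehr"  # a German prompt ("The weather today is very")
print("without steering:", next_token(prompt))
handle = model.transformer.h[LAYER - 1].register_forward_hook(add_steering)
print("with steering:   ", next_token(prompt))
handle.remove()
```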
Scientists could leverage this phenomenon to encourage the model to share as much information as possible across diverse data types, potentially boosting efficiency.
But on the other hand, there could be concepts or knowledge that are not translatable across languages or data types, like culturally specific knowledge. Scientists might want LLMs to have some language-specific processing mechanisms in those cases.
“How do you maximally share whenever possible but also allow languages to have some language-specific processing mechanisms? That could be explored in future work on model architectures,” Wu says.
In addition, researchers could use these insights to improve multilingual models. Often, an English-dominant model that learns to speak another language will lose some of its accuracy in English. A better understanding of an LLM’s semantic hub could help researchers prevent this language interference, he says.
“Understanding how language models process inputs across languages and modalities is a key question in artificial intelligence. This paper makes an interesting connection to neuroscience and shows that the proposed ‘semantic hub hypothesis’ holds in modern language models, where semantically similar representations of different data types are created in the model’s intermediate layers,” says Mor Geva Pipek, an assistant professor in the School of Computer Science at Tel Aviv University, who was not involved with this work. “The hypothesis and experiments nicely tie and extend findings from previous works and could be influential for future research on creating better multimodal models and studying links between them and brain function and cognition in humans.”
This research is funded, in part, by the MIT-IBM Watson AI Lab.
MIT spinout maps the body’s metabolites to uncover the hidden drivers of disease
ReviveMed uses AI to gather large-scale data on metabolites — molecules like lipids, cholesterol, and sugar — to match patients with therapeutics.
Biology is never simple. As researchers make strides in reading and editing genes to treat disease, for instance, a growing body of evidence suggests that the proteins and metabolites surrounding those genes can’t be ignored.
The MIT spinout ReviveMed has created a platform for measuring metabolites — products of metabolism like lipids, cholesterol, sugars, and carbohydrates — at scale. The company is using those measurements to uncover why some patients respond to treatments when others don’t and to better understand the drivers of disease.
“Historically, we’ve been able to measure a few hundred metabolites with high accuracy, but that’s a fraction of the metabolites that exist in our bodies,” says ReviveMed CEO Leila Pirhaji PhD ’16, who founded the company with Professor Ernest Fraenkel. “There’s a massive gap between what we’re accurately measuring and what exists in our body, and that’s what we want to tackle. We want to tap into the powerful insights from underutilized metabolite data.”
ReviveMed’s progress comes as the broader medical community is increasingly linking dysregulated metabolites to diseases like cancer, Alzheimer’s, and cardiovascular disease. ReviveMed is using its platform to help some of the largest pharmaceutical companies in the world find patients that stand to benefit from their treatments. It’s also offering software to academic researchers for free to help gain insights from untapped metabolite data.
“With the field of AI booming, we think we can overcome data problems that have limited the study of metabolites,” Pirhaji says. “There’s no foundation model for metabolomics, but we see how these models are changing various fields such as genomics, so we’re starting to pioneer their development.”
Finding a challenge
Pirhaji was born and raised in Iran before coming to MIT in 2010 to pursue her PhD in biological engineering. She had previously read Fraenkel’s research papers and was excited to contribute to the network models he was building, which integrated data from sources like genomes, proteomes, and other molecules.
“We were thinking about the big picture in terms of what you can do when you can measure everything — the genes, the RNA, the proteins, and small molecules like metabolites and lipids,” says Fraenkel, who currently serves on ReviveMed’s board of directors. “We’re probably only able to measure something like 0.1 percent of small molecules in the body. We thought there had to be a way to get as comprehensive a view of those molecules as we have for the other ones. That would allow us to map out all of the changes occurring in the cell, whether it's in the context of cancer or development or degenerative diseases.”
About halfway through her PhD, Pirhaji sent some samples to a collaborator at Harvard University to collect data on the metabolome — the small molecules that are the products of metabolic processes. The collaborator sent Pirhaji back a huge Excel spreadsheet with thousands of lines of data — but told her she was better off ignoring everything beyond the top 100 rows, because they had no idea what the rest of the data meant. She took that as a challenge.
“I started thinking maybe we could use our network models to solve this problem,” Pirhaji recalls. “There was a lot of ambiguity in the data, and it was very interesting to me because no one had tried this before. It seemed like a big gap in the field.”
Pirhaji developed a huge knowledge graph that included millions of interactions between proteins and metabolites. The data was rich but messy — Pirhaji called it a “hair ball” that couldn’t tell researchers anything about disease. To make it more useful, she created a new way to characterize metabolic pathways and features. In a 2016 paper in Nature Methods, she described the system and used it to analyze metabolic changes in a model of Huntington’s disease.
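For a sense of what such a graph looks like in code, the toy networkx sketch below connects a few dysregulated metabolites to proteins and pulls out the small subnetwork that links them. The real knowledge graph described above contains millions of interactions and is analyzed with far more sophisticated network-optimization methods than the shortest-path shortcut used here.

```python
import itertools
import networkx as nx

# Toy protein-metabolite interaction graph (edges weighted by interaction confidence).
G = nx.Graph()
G.add_weighted_edges_from([
    ("metabolite_A", "protein_1",    0.9),
    ("protein_1",    "protein_2",    0.8),
    ("protein_2",    "metabolite_B", 0.7),
    ("protein_2",    "protein_3",    0.6),
    ("protein_3",    "metabolite_C", 0.9),
    ("metabolite_A", "protein_4",    0.2),  # low-confidence edge, likely noise
])

# Metabolites flagged as dysregulated in a hypothetical experiment.
hits = ["metabolite_A", "metabolite_B", "metabolite_C"]

# Crude subnetwork extraction: union of shortest paths between every pair of hits,
# costing edges by 1 - confidence so high-confidence links are preferred.
for u, v, data in G.edges(data=True):
    data["cost"] = 1.0 - data["weight"]

nodes = set()
for a, b in itertools.combinations(hits, 2):
    nodes.update(nx.shortest_path(G, a, b, weight="cost"))

subnetwork = G.subgraph(nodes)
print(sorted(subnetwork.nodes()))
print(sorted(subnetwork.edges()))
```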
Initially, Pirhaji had no intention of starting a company, but she started realizing the technology’s commercial potential in the final years of her PhD.
“There’s no entrepreneurial culture in Iran,” Pirhaji says. “I didn’t know how to start a company or turn science into a startup, so I leveraged everything MIT offered.”
Pirhaji began taking classes at the MIT Sloan School of Management, including Course 15.371 (Innovation Teams), where she teamed up with classmates to think about how to apply her technology. She also used the MIT Venture Mentoring Service and MIT Sandbox, and took part in the Martin Trust Center for MIT Entrepreneurship’s delta v startup accelerator.
When Pirhaji and Fraenkel officially founded ReviveMed, they worked with MIT’s Technology Licensing Office to access the patents around their work. Pirhaji has since further developed the platform to solve other problems she discovered from talks with hundreds of leaders in pharmaceutical companies.
ReviveMed began by working with hospitals to uncover how lipids are dysregulated in a disease known as metabolic dysfunction-associated steatohepatitis. In 2020, ReviveMed worked with Bristol Myers Squibb to predict how subsets of cancer patients would respond to the company’s immunotherapies.
Since then, ReviveMed has worked with several companies, including four of the top 10 global pharmaceutical companies, to help them understand the metabolic mechanisms behind their treatments. Those insights help identify more quickly the patients who stand to benefit most from different therapies.
“If we know which patients will benefit from every drug, it would really decrease the complexity and time associated with clinical trials,” Pirhaji says. “Patients will get the right treatments faster.”
Generative models for metabolomics
Earlier this year, ReviveMed collected a dataset based on 20,000 patient blood samples that it used to create digital twins of patients and generative AI models for metabolomics research. ReviveMed is making its generative models available to nonprofit academic researchers, which could accelerate our understanding of how metabolites influence a range of diseases.
“We’re democratizing the use of metabolomic data,” Pirhaji says. “It’s impossible for us to have data from every single patient in the world, but our digital twins can be used to find patients that could benefit from treatments based on their demographics, for instance, by finding patients that could be at risk of cardiovascular disease.”
The work is part of ReviveMed’s mission to create metabolic foundation models that researchers and pharmaceutical companies can use to understand how diseases and treatments change the metabolites of patients.
“Leila solved a lot of really hard problems you face when you’re trying to take an idea out of the lab and turn it into something that’s robust and reproducible enough to be deployed in biomedicine,” Fraenkel says. “Along the way, she also realized the software that she’s developed is incredibly powerful by itself and could be transformational.”