QS World University Rankings has placed MIT in the No. 1 spot in 11 subject areas for 2025, the organization announced today.
The Institute received a No. 1 ranking in the following QS subject areas: Chemical Engineering; Civil and Structural Engineering; Computer Science and Information Systems; Data Science and Artificial Intelligence; Electrical and Electronic Engineering; Linguistics; Materials Science; Mechanical, Aeronautical, and Manufacturing Engineering; Mathematics; Physics and Astronomy; and Statistics and Operational Research.
MIT also placed second in seven subject areas: Accounting and Finance; Architecture/Built Environment; Biological Sciences; Business and Management Studies; Chemistry; Earth and Marine Sciences; and Economics and Econometrics.
For 2025, universities were evaluated in 55 specific subjects and five broader subject areas. MIT was ranked No. 1 in the broader subject area of Engineering and Technology and No. 2 in Natural Sciences.
Quacquarelli Symonds Limited subject rankings, published annually, are designed to help prospective students find the leading schools in their field of interest. Rankings are based on research quality and accomplishments, academic reputation, and graduate employment.
MIT has been ranked as the No. 1 university in the world by QS World University Rankings for 13 straight years.
Look around, and you’ll see it everywhere: the way trees form branches, the way cities divide into neighborhoods, the way the brain organizes into regions. Nature loves modularity — a limited number of self-contained units that combine in different ways to perform many functions. But how does this organization arise? Does it follow a detailed genetic blueprint, or can these structures emerge on their own?
A new study from MIT Professor Ila Fiete suggests a surprising answer.
In findings published Feb. 18 in Nature, Fiete, an associate investigator in the McGovern Institute for Brain Research and director of the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT, reports that a mathematical model called peak selection can explain how modules emerge without strict genetic instructions. Her team’s findings, which apply to brain systems and ecosystems, help explain how modularity occurs across nature, no matter the scale.
Joining two big ideas
“Scientists have debated how modular structures form. One hypothesis suggests that various genes are turned on at different locations to begin or end a structure. This explains how insect embryos develop body segments, with genes turning on or off at specific concentrations of a smooth chemical gradient in the insect egg,” says Fiete, who is the senior author of the paper. Mikail Khona PhD '25, a former graduate student and K. Lisa Yang ICoN Center graduate fellow, and postdoc Sarthak Chandra also led the study.
Another idea, inspired by mathematician Alan Turing, suggests that a structure could emerge from competition — small-scale interactions can create repeating patterns, like the spots on a cheetah or the ripples in sand dunes.
Both ideas work well in some cases, but fail in others. The new research suggests that nature need not pick one approach over the other. The authors propose a simple mathematical principle called peak selection, showing that when a smooth gradient is paired with local interactions that are competitive, modular structures emerge naturally. “In this way, biological systems can organize themselves into sharp modules without detailed top-down instruction,” says Chandra.
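The principle can be caricatured in a few lines of code. This is our own toy sketch, not the paper's actual model: we assume the local competitive dynamics admit only a discrete set of stable pattern scales (the "peaks," here a hypothetical constant-ratio set), and the smooth gradient selects among them, so sharp modules emerge from smooth inputs.

```python
import numpy as np

# Toy illustration of peak selection (our sketch, not the paper's
# model): local competition admits only a discrete set of stable
# scales, and a smooth gradient selects among them. The gradient
# range and allowed scales below are hypothetical choices.

n = 200
gradient = np.linspace(1.0, 4.0, n)      # smoothly varying cell property

# Discrete scales the local dynamics can stabilize (constant ratio 1.5)
allowed = np.array([1.0, 1.5, 2.25, 3.375])

# Each location adopts the allowed scale nearest its gradient value
states = allowed[np.abs(gradient[:, None] - allowed[None, :]).argmin(axis=1)]

modules = np.unique(states)
boundaries = np.count_nonzero(np.diff(states))
print(len(modules), boundaries)  # 4 sharp modules, 3 abrupt boundaries
```

Even though the input varies perfectly smoothly, the output is piecewise constant: a small number of self-contained modules with sharp edges.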
Modular systems in the brain
The researchers tested their idea on grid cells, which play a critical role in spatial navigation as well as the storage of episodic memories. Grid cells fire in a repeating triangular pattern as animals move through space, but they don’t all work at the same scale — they are organized into distinct modules, each responsible for mapping space at slightly different resolutions.
No one knows how these modules form, but Fiete’s model shows that gradual variations in cellular properties along one dimension in the brain, combined with local neural interactions, could explain the entire structure. The grid cells naturally sort themselves into distinct groups with clear boundaries, without external maps or genetic programs telling them where to go. “Our work explains how grid cell modules could emerge. The explanation tips the balance toward the possibility of self-organization. It predicts that there might be no gene or intrinsic cell property that jumps when the grid cell scale jumps to another module,” notes Khona.
Modular systems in nature
The same principle applies beyond neuroscience. Imagine a landscape where temperature and rainfall vary gradually across space. You might expect species distributions to vary just as smoothly over the region. But in reality, ecosystems often form species clusters with sharp boundaries — distinct ecological “neighborhoods” that don’t overlap.
Fiete’s study suggests why: local competition, cooperation, and predation between species interact with the global environmental gradients to create natural separations, even when the underlying conditions change gradually. This phenomenon can be explained using peak selection — and suggests that the same principle that shapes brain circuits could also be at play in forests and oceans.
A self-organizing world
One of the researchers’ most striking findings is that modularity in these systems is remarkably robust. Change the size of the system, and the number of modules stays the same — they just scale up or down. That means a mouse brain and a human brain could use the same fundamental rules to form their navigation circuits, just at different sizes.
The model also makes testable predictions. If it’s correct, grid cell modules should follow simple spacing ratios. In ecosystems, species distributions should form distinct clusters even without sharp environmental shifts.
Fiete notes that their work adds another conceptual framework to biology. “Peak selection can inform future experiments, not only in grid cell research but across developmental biology.”
Study: The ozone hole is healing, thanks to global reduction of CFCs
New results show with high statistical confidence that ozone recovery is going strong.
A new MIT-led study confirms that the Antarctic ozone layer is healing, as a direct result of global efforts to reduce ozone-depleting substances.
Scientists including the MIT team have observed signs of ozone recovery in the past. But the new study is the first to show, with high statistical confidence, that this recovery is due primarily to the reduction of ozone-depleting substances, versus other influences such as natural weather variability or increased greenhouse gas emissions to the stratosphere.
“There’s been a lot of qualitative evidence showing that the Antarctic ozone hole is getting better. This is really the first study that has quantified confidence in the recovery of the ozone hole,” says study author Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies and Chemistry. “The conclusion is, with 95 percent confidence, it is recovering. Which is awesome. And it shows we can actually solve environmental problems.”
The new study appears today in the journal Nature. Graduate student Peidong Wang from the Solomon group in the Department of Earth, Atmospheric and Planetary Sciences (EAPS) is the lead author. His co-authors include Solomon and EAPS Research Scientist Kane Stone, along with collaborators from multiple other institutions.
Roots of ozone recovery
Within the Earth’s stratosphere, ozone is a naturally occurring gas that acts as a sort of sunscreen, protecting the planet from the sun’s harmful ultraviolet radiation. In 1985, scientists discovered a “hole” in the ozone layer over Antarctica that opened up during the austral spring, between September and December. This seasonal ozone depletion was suddenly allowing UV rays to filter down to the surface, leading to skin cancer and other adverse health effects.
In 1986, Solomon, who was then working at the National Oceanic and Atmospheric Administration (NOAA), led expeditions to the Antarctic, where she and her colleagues gathered evidence that quickly confirmed the ozone hole’s cause: chlorofluorocarbons, or CFCs — chemicals that were then used in refrigeration, air conditioning, insulation, and aerosol propellants. When CFCs drift up into the stratosphere, they can break down ozone under certain seasonal conditions.
The following year, those revelations led to the drafting of the Montreal Protocol — an international treaty that aimed to phase out the production of CFCs and other ozone-depleting substances, in hopes of healing the ozone hole.
In 2016, Solomon led a study reporting key signs of ozone recovery. The ozone hole seemed to be shrinking with each year, especially in September, the time of year when it opens up. Still, these observations were largely qualitative: the study left large uncertainties about how much of the recovery was due to concerted efforts to reduce ozone-depleting substances, and how much was a result of other “forcings,” such as year-to-year weather variability from El Niño, La Niña, and the polar vortex.
“While detecting a statistically significant increase in ozone is relatively straightforward, attributing these changes to specific forcings is more challenging,” says Wang.
Anthropogenic healing
In their new study, the MIT team took a quantitative approach to identify the cause of Antarctic ozone recovery. The researchers borrowed a method from the climate change community, known as “fingerprinting,” which was pioneered by Klaus Hasselmann, who was awarded the Nobel Prize in Physics in 2021 for the technique. In the context of climate, fingerprinting refers to a method that isolates the influence of specific climate factors, apart from natural, meteorological noise. Hasselmann applied fingerprinting to identify, confirm, and quantify the anthropogenic fingerprint of climate change.
Solomon and Wang looked to apply the fingerprinting method to identify another anthropogenic signal: the effect of human reductions in ozone-depleting substances on the recovery of the ozone hole.
“The atmosphere has really chaotic variability within it,” Solomon says. “What we’re trying to detect is the emerging signal of ozone recovery against that kind of variability, which also occurs in the stratosphere.”
The researchers started with simulations of the Earth’s atmosphere and generated multiple “parallel worlds,” or simulations of the same global atmosphere, under different starting conditions. For instance, they ran simulations under conditions that assumed no increase in greenhouse gases or ozone-depleting substances. Under these conditions, any changes in ozone should be the result of natural weather variability. They also ran simulations with only increasing greenhouse gases, as well as only decreasing ozone-depleting substances.
They compared these simulations to observe how ozone in the Antarctic stratosphere changed, both with season and across different altitudes, in response to the different starting conditions. From these simulations, they mapped out the times and altitudes where ozone recovered from month to month, over several decades, and identified a key “fingerprint,” or pattern, of ozone recovery that was specifically due to declining ozone-depleting substances.
The team then looked for this fingerprint in actual satellite observations of the Antarctic ozone hole from 2005 to the present day. They found that, over time, the fingerprint that they identified in simulations became clearer and clearer in observations. In 2018, the fingerprint was at its strongest, and the team could say with 95 percent confidence that ozone recovery was due mainly to reductions in ozone-depleting substances.
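The core of the detection step can be sketched with synthetic data. This is our hedged illustration of the projection-and-threshold logic, not the study's actual code or data: each year's observation map is projected onto the model-derived pattern, and the result is compared against the spread produced by natural variability alone.

```python
import numpy as np

# Hedged sketch of fingerprint detection with synthetic data (our
# illustration, not the study's code). Project each year's observed
# anomaly map onto a model-derived pattern, and ask when that
# projection exceeds what noise-only "parallel worlds" produce.

rng = np.random.default_rng(0)
p = 50                                   # flattened month-by-altitude grid

fingerprint = rng.standard_normal(p)
fingerprint /= np.linalg.norm(fingerprint)   # unit-norm pattern

years = 20
trend = np.linspace(0.0, 2.0, years)     # recovery signal growing over time
obs = trend[:, None] * fingerprint + 0.3 * rng.standard_normal((years, p))

signal = obs @ fingerprint               # projection onto the fingerprint

# Null distribution: projections of noise-only simulations
null = (0.3 * rng.standard_normal((5000, p))) @ fingerprint
threshold = np.quantile(null, 0.95)      # one-sided 95 percent level

print(signal[-1] > threshold)            # recovery emerges above the noise
```

As the underlying trend grows, the projection climbs out of the null distribution, which is the sense in which the fingerprint "became clearer and clearer" in the observations.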
“After 15 years of observational records, we see this signal to noise with 95 percent confidence, suggesting there’s only a very small chance that the observed pattern similarity can be explained by variability noise,” Wang says. “This gives us confidence in the fingerprint. It also gives us confidence that we can solve environmental problems. What we can learn from ozone studies is how different countries can swiftly follow these treaties to decrease emissions.”
If the trend continues, and the fingerprint of ozone recovery grows stronger, Solomon anticipates that soon there will be a year, here and there, when the ozone layer stays entirely intact. And eventually, the ozone hole should stay shut for good.
“By something like 2035, we might see a year when there’s no ozone hole depletion at all in the Antarctic. And that will be very exciting for me,” she says. “And some of you will see the ozone hole go away completely in your lifetimes. And people did that.”
This research was supported, in part, by the National Science Foundation and NASA.
Study suggests new molecular strategy for treating fragile X syndrome
Enhancing activity of a specific component of neurons’ “NMDA” receptors normalized protein synthesis, neural activity, and seizure susceptibility in the hippocampus of fragile X lab mice.
Building on more than two decades of research, a study by MIT neuroscientists at The Picower Institute for Learning and Memory reports a new way to treat pathology and symptoms of fragile X syndrome, the most common genetically caused autism spectrum disorder. The team showed that augmenting a novel type of neurotransmitter signaling reduced hallmarks of fragile X in mouse models of the disorder.
The new approach, described in Cell Reports, works by targeting a specific molecular subunit of “NMDA” receptors that they discovered plays a key role in how neurons synthesize proteins to regulate their connections, or “synapses,” with other neurons in brain circuits. The scientists showed that in fragile X model mice, increasing the receptor’s activity caused neurons in the hippocampus region of the brain to increase molecular signaling that suppressed excessive bulk protein synthesis, leading to other key improvements.
Setting the table
“One of the things I find most satisfying about this study is that the pieces of the puzzle fit so nicely into what had come before,” says study senior author Mark Bear, Picower Professor in MIT’s Department of Brain and Cognitive Sciences. Former postdoc Stephanie Barnes, now a lecturer at the University of Glasgow, is the study’s lead author.
Bear’s lab studies how neurons continually edit their circuit connections, a process called “synaptic plasticity” that scientists believe to underlie the brain’s ability to adapt to experience and to form and process memories. These studies led to two discoveries that set the table for the newly published advance. In 2011, Bear’s lab showed that fragile X and another autism disorder, tuberous sclerosis (Tsc), represented two ends of a continuum of a kind of protein synthesis in the same neurons. In fragile X there was too much. In Tsc there was too little. When lab members crossbred fragile X and Tsc mice, in fact, their offspring emerged healthy, as the mutations of each disorder essentially canceled each other out.
More recently, Bear’s lab showed a different dichotomy. It has long been understood from their influential work in the 1990s that the flow of calcium ions through NMDA receptors can trigger a form of synaptic plasticity called “long-term depression” (LTD). But in 2020, they found that another mode of signaling by the receptor — one that did not require ion flow — altered protein synthesis in the neuron and caused a physical shrinking of the dendritic “spine” structures housing synapses.
For Bear and Barnes, these studies raised the prospect that if they could pinpoint how NMDA receptors affect protein synthesis they might identify a new mechanism that could be manipulated therapeutically to address fragile X (and perhaps tuberous sclerosis) pathology and symptoms. That would be an important advance to complement ongoing work Bear’s lab has done to correct fragile X protein synthesis levels via another receptor called mGluR5.
Receptor dissection
In the new study, Bear and Barnes’ team decided to use the non-ionic effect on spine shrinkage as a readout to dissect how NMDARs signal protein synthesis for synaptic plasticity in hippocampus neurons. They hypothesized that the dichotomy of ionic effects on synaptic function and non-ionic effects on spine structure might derive from the presence of two distinct components of NMDA receptors: “subunits” called GluN2A and GluN2B. To test that, they used genetic manipulations to knock out each of the subunits. When they did so, they found that knocking out “2A” or “2B” could eliminate LTD, but that only knocking out 2B affected spine size. Further experiments clarified that 2A and 2B are required for LTD, but that spine shrinkage solely depends on the 2B subunit.
The next task was to resolve how the 2B subunit signals spine shrinkage. A promising possibility was a part of the subunit called the “carboxyterminal domain,” or CTD. So, in a new experiment Bear and Barnes took advantage of a mouse that had been genetically engineered by researchers at the University of Edinburgh so that the 2A and 2B CTDs could be swapped with one another. A telling result was that when the 2B subunit lacked its proper CTD, the effect on spine structure disappeared. The result affirmed that the 2B subunit signals spine shrinkage via its CTD.
Another consequence of replacing the CTD of the 2B subunit was an increase in bulk protein synthesis that resembled findings in fragile X. Conversely, augmenting the non-ionic signaling through the 2B subunit suppressed bulk protein synthesis, reminiscent of Tsc.
Treating fragile X
Putting the pieces together, the findings indicated that augmenting signaling through the 2B subunit might, like introducing the mutation causing Tsc, rescue aspects of fragile X.
Indeed, when the scientists swapped in the 2B subunit’s CTD in NMDA receptors of fragile X model mice, they found correction not only of the excessive bulk protein synthesis, but also of the altered synaptic plasticity and increased electrical excitability that are hallmarks of the disease. To see if a treatment that targets NMDA receptors might be effective in fragile X, they tried an experimental drug called Glyx-13. This drug binds to the 2B subunit of NMDA receptors to augment signaling. The researchers found that this treatment also normalized protein synthesis and reduced sound-induced seizures in the fragile X mice.
The team now hypothesizes, based on another prior study in the lab, that the beneficial effect to fragile X mice of the 2B subunit’s CTD signaling is that it shifts the balance of protein synthesis away from an all-too-efficient translation of short messenger RNAs (which leads to excessive bulk protein synthesis) toward a lower-efficiency translation of longer messenger RNAs.
Bear says he does not know what the prospects are for Glyx-13 as a clinical drug, but he noted that there are some drugs in clinical development that specifically target the 2B subunit of NMDA receptors.
In addition to Bear and Barnes, the study’s other authors are Aurore Thomazeau, Peter Finnie, Max Heinreich, Arnold Heynen, Noboru Komiyama, Seth Grant, Frank Menniti, and Emily Osterweil.
The FRAXA Foundation, The Picower Institute for Learning and Memory, The Freedom Together Foundation, and the National Institutes of Health funded the study.
Breakfast of champions: MIT hosts top young scientists
At an MIT-led event at AJAS/AAAS, researchers connect with MIT faculty, Nobel laureates, and industry leaders to share their work, gain mentorship, and explore future careers in science.
On Feb. 14, some of the nation’s most talented high school researchers convened in Boston for the annual American Junior Academy of Science (AJAS) conference, held alongside the American Association for the Advancement of Science (AAAS) annual meeting. As a highlight of the event, MIT once again hosted its renowned “Breakfast with Scientists,” offering students a unique opportunity to connect with leading scientific minds from around the world.
The AJAS conference began with an opening reception at the MIT Schwarzman College of Computing, where professor of biology and chemistry Catherine Drennan delivered the keynote address, welcoming 162 high school students from 21 states. Delegates were selected through state Academy of Science competitions, earning the chance to share their work and connect with peers and professionals in science, technology, engineering, and mathematics (STEM).
Over breakfast, students engaged with distinguished scientists, including MIT faculty, Nobel laureates, and industry leaders, discussing research, career paths, and the broader impact of scientific discovery.
Amy Keating, MIT biology department head, sat at a table with students ranging from high school juniors to college sophomores. The group engaged in an open discussion about life as a scientist at a leading institution like MIT. One student expressed concern about the competitive nature of innovative research environments, prompting Keating to reassure them, saying, “MIT has a collaborative philosophy rather than a competitive one.”
At another table, Nobel laureate and former MIT postdoc Gary Ruvkun shared a lighthearted moment with students, laughing at a TikTok video they had created to explain their science fair project. The interaction reflected the innate curiosity and excitement that drives discovery at all stages of a scientific career.
Donna Gerardi, executive director of the National Association of Academies of Science, highlighted the significance of the AJAS program. “These students are not just competing in science fairs; they are becoming part of a larger scientific community. The connections they make here can shape their careers and future contributions to science.”
Alongside the breakfast, AJAS delegates participated in a variety of enriching experiences, including laboratory tours, conference sessions, and hands-on research activities.
“I am so excited to be able to discuss my research with experts and get some guidance on the next steps in my academic trajectory,” said Andrew Wesel, a delegate from California.
A defining feature of the AJAS experience was its emphasis on mentorship and collaboration rather than competition. Delegates were officially inducted as lifetime Fellows of the American Junior Academy of Science at the conclusion of the conference, joining a distinguished network of scientists and researchers.
Sponsored by the MIT School of Science and School of Engineering, the breakfast underscored MIT’s longstanding commitment to fostering young scientific talent. Faculty and researchers took the opportunity to encourage students to pursue careers in STEM fields, providing insights into the pathways available to them.
“It was a joy to spend time with such passionate students,” says Kristala Prather, head of the Department of Chemical Engineering at MIT. “One of the brightest moments for me was sitting next to a young woman who will be joining MIT in the fall — I just have to convince her to study ChemE!”
Seeing more in expansion microscopy
New methods light up lipid membranes and let researchers see sets of proteins inside cells with high resolution.
In biology, seeing can lead to understanding, and researchers in Professor Edward Boyden’s lab at the McGovern Institute for Brain Research are committed to bringing life into sharper focus. With a pair of new methods, they are expanding the capabilities of expansion microscopy — a high-resolution imaging technique the group introduced in 2015 — so researchers everywhere can see more when they look at cells and tissues under a light microscope.
“We want to see everything, so we’re always trying to improve it,” says Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT. “A snapshot of all life, down to its fundamental building blocks, is really the goal.” Boyden is also a Howard Hughes Medical Institute investigator and a member of the Yang Tan Collective at MIT.
With new ways of staining their samples and processing images, users of expansion microscopy can now see vivid outlines of the shapes of cells in their images and pinpoint the locations of many different proteins inside a single tissue sample with resolution that far exceeds that of conventional light microscopy. These advances, both reported in open-access form in the journal Nature Communications, enable new ways of tracing the slender projections of neurons and visualizing spatial relationships between molecules that contribute to health and disease.
Expansion microscopy uses a water-absorbing hydrogel to physically expand biological tissues. After a tissue sample has been permeated by the hydrogel, it is hydrated. The hydrogel swells as it absorbs water, preserving the relative locations of molecules in the tissue as it gently pulls them away from one another. As a result, crowded cellular components appear separate and distinct when the expanded tissue is viewed under a light microscope. The approach, which can be performed using standard laboratory equipment, has made super-resolution imaging accessible to most research teams.
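The geometric idea behind the technique can be illustrated with a toy calculation. This is our sketch with hypothetical numbers, not measurements from the lab: isotropic expansion multiplies every position by the same factor, preserving relative arrangement while pushing close neighbors past a conventional light microscope's roughly 250-nanometer diffraction limit.

```python
import numpy as np

# Toy sketch (hypothetical numbers, our illustration): isotropic
# expansion scales all molecule positions by one factor, preserving
# relative geometry while separating neighbors that were too close
# to resolve with conventional light microscopy.

positions = np.array([[0.00, 0.00],
                      [0.10, 0.00],     # two molecules 100 nm apart
                      [0.10, 0.12]])    # units: micrometers
factor = 4.0                            # roughly fourfold linear expansion
expanded = factor * positions

limit = 0.25                            # ~diffraction limit, micrometers
d_before = np.linalg.norm(positions[1] - positions[0])
d_after = np.linalg.norm(expanded[1] - expanded[0])

print(d_before < limit, d_after > limit)  # unresolvable before, resolved after
```

Because every distance scales by the same factor, the expanded image is a faithful, enlarged copy of the original molecular layout.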
Since first developing expansion microscopy, Boyden and his team have continued to enhance the method — increasing its resolution, simplifying the procedure, devising new features, and integrating it with other tools.
Visualizing cell membranes
One of the team’s latest advances is a method called ultrastructural membrane expansion microscopy (umExM), which they described in the Feb. 12 issue of Nature Communications. With it, biologists can use expansion microscopy to visualize the thin membranes that form the boundaries of cells and enclose the organelles inside them. These membranes, built mostly of molecules called lipids, have been notoriously difficult to densely label in intact tissues for imaging with light microscopy. Now, researchers can use umExM to study cellular ultrastructure and organization within tissues.
Tay Shin SM ’20, PhD ’23, a former graduate student in Boyden’s lab and a J. Douglas Tan Fellow in the Tan-Yang Center for Autism Research at MIT, led the development of umExM. “Our goal was very simple at first: Let’s label membranes in intact tissue, much like how an electron microscope uses osmium tetroxide to label membranes to visualize the membranes in tissue,” he says. “It turns out that it’s extremely hard to achieve this.”
The team first needed to design a label that would make the membranes in tissue samples visible under a light microscope. “We almost had to start from scratch,” Shin says. “We really had to think about the fundamental characteristics of the probe that is going to label the plasma membrane, and then think about how to incorporate them into expansion microscopy.” That meant engineering a molecule that would associate with the lipids that make up the membrane and link it to both the hydrogel used to expand the tissue sample and a fluorescent molecule for visibility.
After optimizing the expansion microscopy protocol for membrane visualization and extensively testing and improving potential probes, Shin found success one late night in the lab. He placed an expanded tissue sample on a microscope and saw sharp outlines of cells.
Because of the high resolution enabled by expansion, the method allowed Boyden’s team to identify even the tiny dendrites that protrude from neurons and clearly see the long extensions of their slender axons. That kind of clarity could help researchers follow individual neurons’ paths within the densely interconnected networks of the brain, the researchers say.
Boyden calls tracing these neural processes “a top priority of our time in brain science.” Such tracing has traditionally relied heavily on electron microscopy, which requires specialized skills and expensive equipment. Shin says that because expansion microscopy uses a standard light microscope, it is far more accessible to laboratories worldwide.
Shin and Boyden point out that users of expansion microscopy can learn even more about their samples when they pair the new ability to reveal lipid membranes with fluorescent labels that show where specific proteins are located. “That’s important, because proteins do a lot of the work of the cell, but you want to know where they are with respect to the cell’s structure,” Boyden says.
One sample, many proteins
To that end, researchers no longer have to choose just a few proteins to see when they use expansion microscopy. With a new method called multiplexed expansion revealing (multiExR), users can now label and see more than 20 different proteins in a single sample. Biologists can use the method to visualize sets of proteins, see how they are organized with respect to one another, and generate new hypotheses about how they might interact.
A key to that new method, reported Nov. 9, 2024, in Nature Communications, is the ability to repeatedly link fluorescently labeled antibodies to specific proteins in an expanded tissue sample, image them, then strip these away and use a new set of antibodies to reveal a new set of proteins. Postdoc Jinyoung Kang fine-tuned each step of this process, assuring tissue samples stayed intact and the labeled proteins produced bright signals in each round of imaging.
After capturing many images of a single sample, Boyden’s team faced another challenge: how to ensure those images were in perfect alignment so they could be overlaid with one another, producing a final picture that showed the precise positions of all of the proteins that had been labeled and visualized one by one.
Expansion microscopy lets biologists visualize some of cells’ tiniest features — but to find the same features over and over again during multiple rounds of imaging, Boyden’s team first needed to home in on a larger structure. “These fields of view are really tiny, and you’re trying to find this really tiny field of view in a gel that’s actually become quite large once you’ve expanded it,” explains Margaret Schroeder, a graduate student in Boyden’s lab who, with Kang, led the development of multiExR.
To navigate to the right spot every time, the team decided to label the blood vessels that pass through each tissue sample and use these as a guide. To enable precise alignment, certain fine details also needed to consistently appear in every image; for this, the team labeled several structural proteins. With these reference points and customized imaging processing software, the team was able to integrate all of their images of a sample into one, revealing how proteins that had been visualized separately were arranged relative to one another.
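The alignment step described above can be sketched in miniature. This is our illustration only; the study used customized image-processing software and real fiducials such as labeled blood vessels and structural proteins. Here we estimate a simple translation between two imaging rounds by phase correlation on a shared reference channel.

```python
import numpy as np

# Minimal sketch of aligning imaging rounds via a shared reference
# channel (our illustration; the study used customized software and
# real fiducials). A translation between two rounds is estimated by
# phase correlation; real pipelines also correct finer distortions.

def estimate_shift(ref, moving):
    """Return (dy, dx) such that np.roll(moving, (dy, dx), axis=(0, 1))
    aligns `moving` with `ref` (integer translations only)."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(moving))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real  # phase correlation
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peak coordinates into signed shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(1)
vessels = rng.random((64, 64))                   # stand-in reference channel
round2 = np.roll(vessels, (3, -5), axis=(0, 1))  # same channel, displaced

print(estimate_shift(vessels, round2))           # → (-3, 5)
```

Once the shift is recovered from the reference channel, the same correction can be applied to every protein channel from that round, so all images stack into one aligned composite.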
The team used multiExR to look at amyloid plaques — the aberrant protein clusters that notoriously develop in brains affected by Alzheimer’s disease. “We could look inside those amyloid plaques and ask, what’s inside of them? And because we can stain for many different proteins, we could do a high-throughput exploration,” Boyden says. The team chose 23 different proteins to view in their images. The approach revealed some surprises, such as the presence of certain neurotransmitter receptors (AMPARs). “Here’s one of the most famous receptors in all of neuroscience, and there it is, hiding out in one of the most famous molecular hallmarks of pathology in neuroscience,” says Boyden. It’s unclear what role, if any, the receptors play in Alzheimer’s disease — but the finding illustrates how the ability to see more inside cells can expose unexpected aspects of biology and raise new questions for research.
Funding for this work came from MIT, Lisa Yang and Y. Eva Tan, John Doerr, the Open Philanthropy Project, the Howard Hughes Medical Institute, the U.S. Army, Cancer Research U.K., the New York Stem Cell Foundation, the U.S. National Institutes of Health, Lore McGovern, Good Ventures, Schmidt Futures, Samsung, MathWorks, the Collamore-Rogers Fellowship, the U.S. National Science Foundation, Alana Foundation USA, the Halis Family Foundation, Lester A. Gimpelson, Donald and Glenda Mattes, David B. Emmes, Thomas A. Stocky, Avni U. Shah, Kathleen Octavio, Good Ventures/Open Philanthropy, and the European Union’s Horizon 2020 program.
Five years, five triumphs in Putnam Math Competition
Undergrads sweep Putnam Fellows for fifth year in a row and continue Elizabeth Lowell Putnam winning streak.
For the fifth time in the history of the annual William Lowell Putnam Mathematical Competition, and for the fifth year in a row, MIT swept all five of the contest’s top spots.
The top five scorers each year are named Putnam Fellows. Senior Brian Liu and juniors Papon Lapate and Luke Robitaille are now three-time Putnam Fellows, sophomore Jiangqi Dai earned his second win, and first-year Qiao Sun earned his first. Each receives a $2,500 award. This is also the fifth time that any school has had all five Putnam Fellows.
MIT’s team also came in first. The team was made up of Lapate, Robitaille, and Sun (in alphabetical order); Lapate and Robitaille were also on last year’s winning team. This is MIT’s ninth first-place win in the past 11 competitions. Teams consist of the three top scorers from each institution. The institution with the first-place team receives a $25,000 award, and each team member receives $1,000.
First-year Jessica Wan was the top-scoring woman, finishing in the top 25, which earned her the $1,000 Elizabeth Lowell Putnam Prize. She is the eighth MIT student to receive this honor since the award was created in 1992. This is the sixth year in a row that an MIT woman has won the prize.
In total, 69 MIT students scored within the top 100. Beyond the top five scorers, MIT took nine of the next 11 spots (each receiving a $1,000 award), and seven of the next nine spots (earning $250 awards). Of the 75 receiving honorable mentions, 48 were from MIT. A total of 3,988 students took the exam in December, including 222 MIT students.
This exam is considered to be the most prestigious university-level mathematics competition in the United States and Canada.
The Putnam is known for its difficulty: A perfect score is 120, but this year’s top score was 90, and the median was just 2. While many MIT students scored well, the department is proud of everyone who sat for the exam, says Professor Michel Goemans, head of the Department of Mathematics.
“Year after year, I am so impressed by the sheer number of students at MIT that participate in the Putnam competition,” Goemans says. “In no other college or university in the world can one find hundreds of students who get a kick out of thinking about math problems. So refreshing!”
Adds Professor Bjorn Poonen, who helped MIT students prepare for the exam this year, “The incredible competition performance is just one manifestation of MIT’s vibrant community of students who love doing math and discussing math with each other, students who through their hard work in this environment excel in ways beyond competitions, too.”
While the annual Putnam Competition is administered to thousands of undergraduate mathematics students across the United States and Canada, in recent years around 70 of its top 100 performers have been MIT students. Since 2000, MIT has placed among the top five teams 23 times.
MIT’s success in the Putnam exam isn’t surprising. MIT’s recent Putnam coaches are four-time Putnam Fellow Bjorn Poonen and three-time Putnam Fellow Yufei Zhao ’10, PhD ’15.
MIT is also a top destination for medalists participating in the International Mathematics Olympiad (IMO) for high school students. Indeed, over the last decade MIT has enrolled almost every American IMO medalist, and more international IMO gold medalists than the universities of any other single country, according to forthcoming research from the Global Talent Fund (GTF), which offers scholarship and training programs for math Olympiad students and coaches.
IMO participation is a strong predictor of future achievement. According to the International Mathematics Olympiad Foundation, about half of Fields Medal winners are IMO alums — but it’s not the only ingredient.
“Recruiting the most talented students is only the beginning. A top-tier university education — with excellent professors, supportive mentors, and an engaging peer community — is key to unlocking their full potential," says GTF President Ruchir Agarwal. "MIT’s sustained Putnam success shows how the right conditions deliver spectacular results. The catalytic reaction of MIT’s concentration of math talent and the nurturing environment of Building 2 should accelerate advancements in fundamental science for years and decades to come.”
Many MIT mathletes see competitions not only as a way to hone their mathematical aptitude, but also as a way to build a strong sense of community and to help inspire and educate the next generation.
Chris Peterson SM ’13, director of communications and special projects at MIT Admissions and Student Financial Services, points out that many MIT students with competition math experience volunteer to help run programs for K-12 students including HMMT and Math Prize for Girls, and mentor research projects through the Program for Research in Mathematics, Engineering and Science (PRIMES).
Many of the top scorers are also alumni of the PRIMES high school outreach program. Two of this year’s Putnam Fellows, Liu and Robitaille, are PRIMES alumni, as are four of the next top 11, and six out of the next nine winners, along with many of the students receiving honorable mentions. Pavel Etingof, a math professor who is also PRIMES’ chief research advisor, states that among the 25 top winners, 12 (48 percent) are PRIMES alumni.
“We at PRIMES are very proud of our alumni’s fantastic showing at the Putnam Competition,” says PRIMES director Slava Gerovitch PhD ’99. “PRIMES serves as a pipeline of mathematical excellence from high school through undergraduate studies, and beyond.”
Along the same lines, a collaboration between the MIT Department of Mathematics and MISTI-Africa has sent MIT students with Olympiad experience abroad during the Independent Activities Period (IAP) to coach high school students who hope to compete for their national teams.
First-years at MIT also take class 18.A34 (Mathematical Problem Solving), known informally as the Putnam Seminar, not only to hone their Putnam exam skills, but also to make new friends.
“Many people think of math competitions as primarily a way to identify and recognize talent, which of course they are,” says Peterson. “But the community convened by and through these competitions generates educational externalities that collectively exceed the sum of individual accomplishment.”
Math Community and Outreach Officer Michael King also notes the camaraderie that forms around the test.
“My favorite time of the Putnam day is right after the problem session, when the students all jump up, run over to their friends, and begin talking animatedly,” says King, who also took the exam as an undergraduate student. “They cheer each other’s successes, debate problem solutions, commiserate over missed answers, and share funny stories. It’s always amazing to work with the best math students in the world, but the most rewarding aspect is seeing the friendships that develop.”
A full list of the winners can be found on the Putnam website.
An ancient RNA-guided system could simplify delivery of gene editing therapies
The programmable proteins are compact, modular, and can be directed to modify DNA in human cells.
A vast search of natural diversity has led scientists at MIT’s McGovern Institute for Brain Research and the Broad Institute of MIT and Harvard to uncover ancient systems with potential to expand the genome editing toolbox.
These systems, which the researchers call TIGR (Tandem Interspaced Guide RNA) systems, use RNA to guide them to specific sites on DNA. TIGR systems can be reprogrammed to target any DNA sequence of interest, and they have distinct functional modules that can act on the targeted DNA. In addition to its modularity, TIGR is very compact compared to other RNA-guided systems, like CRISPR, which is a major advantage for delivering it in a therapeutic context.
These findings are reported online Feb. 27 in the journal Science.
“This is a very versatile RNA-guided system with a lot of diverse functionalities,” says Feng Zhang, the James and Patricia Poitras Professor of Neuroscience at MIT, who led the research. The TIGR-associated (Tas) proteins that Zhang’s team found share a characteristic RNA-binding component that interacts with an RNA guide that directs it to a specific site in the genome. Some cut the DNA at that site, using an adjacent DNA-cutting segment of the protein. That modularity could facilitate tool development, allowing researchers to swap useful new features into natural Tas proteins.
“Nature is pretty incredible,” says Zhang, who is also an investigator at the McGovern Institute and the Howard Hughes Medical Institute, a core member of the Broad Institute, a professor of brain and cognitive sciences and biological engineering at MIT, and co-director of the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT. “It’s got a tremendous amount of diversity, and we have been exploring that natural diversity to find new biological mechanisms and harnessing them for different applications to manipulate biological processes,” he says. Previously, Zhang’s team adapted bacterial CRISPR systems into gene editing tools that have transformed modern biology. His team has also found a variety of programmable proteins, both from CRISPR systems and beyond.
In their new work, to find novel programmable systems, the team began by zeroing in on a structural feature of the CRISPR-Cas9 protein that binds to the enzyme’s RNA guide. That feature is a key part of what has made Cas9 such a powerful tool: “Being RNA-guided makes it relatively easy to reprogram, because we know how RNA binds to other DNA or other RNA,” Zhang explains. His team searched hundreds of millions of biological proteins with known or predicted structures, looking for any that shared a similar domain. To find more distantly related proteins, they used an iterative process: from Cas9, they identified a protein called IS110, which had previously been shown by others to bind RNA. They then zeroed in on the structural features of IS110 that enable RNA binding and repeated their search.
At this point, the search had turned up so many distantly related proteins that the team turned to artificial intelligence to make sense of the list. “When you are doing iterative, deep mining, the resulting hits can be so diverse that they are difficult to analyze using standard phylogenetic methods, which rely on conserved sequence,” explains Guilhem Faure, a computational biologist in Zhang’s lab. With a protein large language model, the team was able to cluster the proteins they had found into groups according to their likely evolutionary relationships. One group stood apart from the rest, and its members were particularly intriguing because they were encoded by genes with regularly spaced repetitive sequences reminiscent of an essential component of CRISPR systems. These were the TIGR-Tas systems.
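The grouping step can be caricatured with a small sketch. The actual analysis embedded proteins with a protein language model; here, purely for illustration, made-up 8-dimensional "embeddings" stand in for model outputs, and a plain k-means with deterministic initialization does the clustering:

```python
import numpy as np

def kmeans(X, init, iters=50):
    """Plain k-means over embedding vectors: a toy stand-in for clustering
    protein-language-model embeddings by likely evolutionary relatedness.
    Deterministic initialization keeps the example reproducible."""
    centers = X[init].astype(float)
    for _ in range(iters):
        # Assign each vector to its nearest center, then recompute centers.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) for j in range(len(init))])
    return labels

# Made-up "embeddings": two well-separated protein families.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (20, 8)),
               rng.normal(3.0, 0.1, (20, 8))])
labels = kmeans(X, init=[0, 39])  # one seed point from each family
print(int(labels[:20].max()), int(labels[20:].min()))  # -> 0 1
```

The real pipeline is far richer (high-dimensional embeddings, many clusters, follow-up structural analysis), but the idea — group by similarity in a learned embedding space when sequence alignment fails — is the same.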
Zhang’s team discovered more than 20,000 different Tas proteins, mostly occurring in bacteria-infecting viruses. Sequences within each gene’s repetitive region — its TIGR arrays — encode an RNA guide that interacts with the RNA-binding part of the protein. In some, the RNA-binding region is adjacent to a DNA-cutting part of the protein. Others appear to bind to other proteins, which suggests they might help direct those proteins to DNA targets.
Zhang and his team experimented with dozens of Tas proteins, demonstrating that some can be programmed to make targeted cuts to DNA in human cells. As they think about developing TIGR-Tas systems into programmable tools, the researchers are encouraged by features that could make those tools particularly flexible and precise.
They note that CRISPR systems can only be directed to segments of DNA that are flanked by short motifs known as PAMs (protospacer adjacent motifs). TIGR-Tas proteins, in contrast, have no such requirement. “This means theoretically, any site in the genome should be targetable,” says scientific advisor Rhiannon Macrae. The team’s experiments also show that TIGR systems have what Faure calls a “dual-guide system,” interacting with both strands of the DNA double helix to home in on their target sequences, which should ensure they act only where they are directed by their RNA guide. What’s more, Tas proteins are compact — a quarter the size of Cas9, on average — making them easier to deliver, which could overcome a major obstacle to therapeutic deployment of gene editing tools.
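The PAM constraint can be made concrete. SpCas9, for instance, requires an "NGG" motif (N = any base) immediately downstream of its roughly 20-nucleotide target, so only some positions in a sequence are addressable; a system without that requirement could in principle target any position. The sequence below is made up for illustration:

```python
import re

def cas9_targetable_sites(dna, guide_len=20):
    """Return start positions of guide_len windows immediately followed by
    an SpCas9-style 'NGG' PAM. Sites lacking the PAM are invisible to Cas9
    even if the guide sequence matches perfectly."""
    sites = []
    # Zero-width lookahead finds overlapping PAM occurrences.
    for m in re.finditer(r"(?=[ACGT]GG)", dna):
        start = m.start() - guide_len
        if start >= 0:
            sites.append(start)
    return sites

dna = "ATGCATGCATGCATGCATGCTGGAAACCCGGGTTT"
print(cas9_targetable_sites(dna))  # -> [0, 8, 9]
```

A PAM-free system would, by contrast, treat every in-range position as a candidate site, which is the flexibility Macrae is pointing to.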
Excited by their discovery, Zhang’s team is now investigating the natural role of TIGR systems in viruses, as well as how they can be adapted for research or therapeutics. They have determined the molecular structure of one of the Tas proteins they found to work in human cells, and will use that information to guide their efforts to make it more efficient. Additionally, they note connections between TIGR-Tas systems and certain RNA-processing proteins in human cells. “I think there’s more there to study in terms of what some of those relationships may be, and it may help us better understand how these systems are used in humans,” Zhang says.
This work was supported by the Helen Hay Whitney Foundation, Howard Hughes Medical Institute, K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics, Broad Institute Programmable Therapeutics Gift Donors, Pershing Square Foundation, William Ackman, Neri Oxman, the Phillips family, J. and P. Poitras, and the BT Charitable Foundation.
MIT physicists find unexpected crystals of electrons in an ultrathin material
Rhombohedral graphene reveals new exotic interacting electron states.
MIT physicists report the unexpected discovery of electrons forming crystalline structures in a material only billionths of a meter thick. The work adds to a gold mine of discoveries originating from the material, which the same team discovered about three years ago.
In a paper published Jan. 22 in Nature, the team describes how electrons in devices made, in part, of the material can become solid, or form crystals, by changing the voltage applied to the devices when they are kept at a temperature similar to that of outer space. Under the same conditions, they also showed the emergence of two new electronic states that add to work they reported last year showing that electrons can split into fractions of themselves.
The physicists were able to make the discoveries thanks to new custom-made filters for better insulation of the equipment involved in the work. These allowed them to cool their devices to a temperature an order of magnitude colder than they achieved for the earlier results.
The team also observed all of these phenomena using two slightly different “versions” of the material, one composed of five layers of atomically thin carbon; the other composed of four layers. This indicates “that there’s a family of materials where you can get this kind of behavior, which is exciting,” says Long Ju, an assistant professor in the MIT Department of Physics who led the work. Ju is also affiliated with MIT’s Materials Research Laboratory and Research Lab of Electronics.
Referring to the material, known as rhombohedral pentalayer graphene, Ju says, “We found a gold mine, and every scoop is revealing something new.”
New material
Rhombohedral pentalayer graphene is essentially a special form of pencil lead. Pencil lead, or graphite, is composed of graphene, a single layer of carbon atoms arranged in hexagons resembling a honeycomb structure. Rhombohedral pentalayer graphene is composed of five layers of graphene stacked in a specific overlapping order.
Since Ju and colleagues discovered the material, they have tinkered with it by adding layers of another material they thought might accentuate the graphene’s properties, or even produce new phenomena. For example, in 2023 they created a sandwich of rhombohedral pentalayer graphene with “buns” made of hexagonal boron nitride. By applying different voltages, or amounts of electricity, to the sandwich, they discovered three important properties never before seen in natural graphite.
Last year, Ju and colleagues reported yet another important and even more surprising phenomenon: Electrons became fractions of themselves upon applying a current to a new device composed of rhombohedral pentalayer graphene and hexagonal boron nitride. This is important because this “fractional quantum Hall effect” has only been seen in a few systems, usually under very high magnetic fields. The Ju work showed that the phenomenon could occur in a fairly simple material without a magnetic field. As a result, it is called the “fractional quantum anomalous Hall effect” (anomalous indicates that no magnetic field is necessary).
New results
In the current work, the Ju team reports yet more unexpected phenomena from the general rhombohedral graphene/boron nitride system when it is cooled to 30 millikelvins (1 millikelvin is equivalent to -459.668 degrees Fahrenheit). In last year’s paper, Ju and colleagues reported six fractional states of electrons. In the current work, they report discovering two more of these fractional states.
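The parenthetical conversion checks out against the standard formula, °F = K × 9/5 − 459.67:

```python
def kelvin_to_fahrenheit(k):
    """Convert a temperature in kelvins to degrees Fahrenheit."""
    return k * 9.0 / 5.0 - 459.67

# 1 millikelvin, as in the parenthetical above:
print(round(kelvin_to_fahrenheit(0.001), 4))  # -> -459.6682
# 30 millikelvins, the experiment's operating temperature:
print(round(kelvin_to_fahrenheit(0.030), 3))  # -> -459.616
```

Either way, the devices sit a few hundredths of a degree above absolute zero.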
They also found another unusual electronic phenomenon: the integer quantum anomalous Hall effect in a wide range of electron densities. The fractional quantum anomalous Hall effect was understood to emerge in an electron “liquid” phase, analogous to water. In contrast, the new state that the team has now observed can be interpreted as an electron “solid” phase — resembling the formation of electronic “ice” — that can also coexist with the fractional quantum anomalous Hall states when the system’s voltage is carefully tuned at ultra-low temperatures.
One way to think about the relation between the integer and fractional states is to imagine a map created by tuning electric voltages: By tuning the system with different voltages, you can create a “landscape” similar to a river (which represents the liquid-like fractional states) cutting through glaciers (which represent the solid-like integer effect), Ju explains.
Ju notes that his team observed all of these phenomena not only in pentalayer rhombohedral graphene, but also in rhombohedral graphene composed of four layers. This creates a family of materials, and indicates that other “relatives” may exist.
“This work shows how rich this material is in exhibiting exotic phenomena. We’ve just added more flavor to this already very interesting material,” says Zhengguang Lu, a co-first author of the paper. Lu, who conducted the work as a postdoc at MIT, is now on the faculty at Florida State University.
In addition to Ju and Lu, other principal authors of the Nature paper are Tonghang Han and Yuxuan Yao, both of MIT. Lu, Han, and Yao are co-first authors of the paper who contributed equally to the work. Other MIT authors are Jixiang Yang, Junseok Seo, Lihan Shi, and Shenyong Ye. Additional members of the team are Kenji Watanabe and Takashi Taniguchi of the National Institute for Materials Science in Japan.
This work was supported by a Sloan Fellowship, a Mathworks Fellowship, the U.S. Department of Energy, the Japan Society for the Promotion of Science KAKENHI, and the World Premier International Research Initiative of Japan. Device fabrication was performed at the Harvard Center for Nanoscale Systems and MIT.nano.
Helping the immune system attack tumors
Stefani Spranger is working to discover why some cancers don’t respond to immunotherapy, in hopes of making them more vulnerable to it.
In addition to patrolling the body for foreign invaders, the immune system also hunts down and destroys cells that have become cancerous or precancerous. However, some cancer cells end up evading this surveillance and growing into tumors.
Once established, tumor cells often send out immunosuppressive signals, which leads T cells to become “exhausted” and unable to attack the tumor. In recent years, some cancer immunotherapy drugs have shown great success in rejuvenating those T cells so they can begin attacking tumors again.
While this approach has proven effective against cancers such as melanoma, it doesn’t work as well for others, including lung and ovarian cancer. MIT Associate Professor Stefani Spranger is trying to figure out how those tumors are able to suppress immune responses, in hopes of finding new ways to galvanize T cells into attacking them.
“We really want to understand why our immune system fails to recognize cancer,” Spranger says. “And I’m most excited about the really hard-to-treat cancers because I think that’s where we can make the biggest leaps.”
Her work has led to a better understanding of the factors that control T-cell responses to tumors, and raised the possibility of improving those responses through vaccination or treatment with immune-stimulating molecules called cytokines.
“We’re working on understanding what exactly the problem is, and then collaborating with engineers to find a good solution,” she says.
Jumpstarting T cells
As a student in Germany, where students often have to choose their college major while still in high school, Spranger envisioned going into the pharmaceutical industry and chose to major in biology. At Ludwig Maximilian University in Munich, her course of study began with classical biology subjects such as botany and zoology, and she began to doubt her choice. But, once she began taking courses in cell biology and immunology, her interest was revived and she continued into a biology graduate program at the university.
During a paper discussion class early in her graduate school program, Spranger was assigned a Science paper on a promising new immunotherapy treatment for melanoma. This strategy involves isolating tumor-infiltrating T cells during surgery, expanding them to large numbers, and then returning them to the patient. For more than 50 percent of those patients, the tumors were completely eliminated.
“To me, that changed the world,” Spranger recalls. “You can take the patient’s own immune system, not really do all that much to it, and then the cancer goes away.”
Spranger completed her PhD studies in a lab that worked on further developing that approach, known as adoptive T-cell transfer therapy. At that point, she still was leaning toward going into pharma, but after finishing her PhD in 2011, her husband, also a biologist, convinced her that they should both apply for postdoc positions in the United States.
They ended up at the University of Chicago, where Spranger worked in a lab that studies how the immune system responds to tumors. There, she discovered that while melanoma is usually very responsive to immunotherapy, there is a small fraction of melanoma patients whose T cells don’t respond to the therapy at all. That got her interested in trying to figure out why the immune system doesn’t always respond to cancer the way that it should, and in finding ways to jumpstart it.
During her postdoc, Spranger also discovered that she enjoyed mentoring students, which she hadn’t done as a graduate student in Germany. That experience drew her away from going into the pharmaceutical industry, in favor of a career in academia.
“I had my first mentoring teaching experience having an undergrad in the lab, and seeing that person grow as a scientist, from barely asking questions to running full experiments and coming up with hypotheses, changed how I approached science and my view of what academia should be for,” she says.
Modeling the immune system
When applying for faculty jobs, Spranger was drawn to MIT by the collaborative environment of the Institute and its Koch Institute for Integrative Cancer Research, which offered the chance to work with a large community of engineers in the field of immunology.
“That community is so vibrant, and it’s amazing to be a part of it,” she says.
Building on the research she had done as a postdoc, Spranger wanted to explore why some tumors respond well to immunotherapy, while others do not. For many of her early studies, she used a mouse model of non-small-cell lung cancer. In human patients, the majority of these tumors do not respond well to immunotherapy.
“We build model systems that resemble each of the different subsets of non-responsive non-small cell lung cancer, and we’re trying to really drill down to the mechanism of why the immune system is not appropriately responding,” she says.
As part of that work, she has investigated why the immune system behaves differently in different types of tissue. While immunotherapy drugs called checkpoint inhibitors can stimulate a strong T-cell response in the skin, they don’t do nearly as much in the lung. However, Spranger has shown that T cell responses in the lung can be improved when immune molecules called cytokines are also given along with the checkpoint inhibitor.
Those cytokines work, in part, by activating dendritic cells — a class of immune cells that help to initiate immune responses, including activation of T cells.
“Dendritic cells are the conductor for the orchestra of all the T cells, although they’re a very sparse cell population,” Spranger says. “They can communicate which type of danger they sense from stressed cells and then instruct the T cells on what they have to do and where they have to go.”
Spranger’s lab is now beginning to study other types of tumors that don’t respond at all to immunotherapy, including ovarian cancer and glioblastoma. Both the brain and the peritoneal cavity appear to suppress T-cell responses to tumors, and Spranger hopes to figure out how to overcome that immunosuppression.
“We’re specifically focusing on ovarian cancer and glioblastoma, because nothing’s working right now for those cancers,” she says. “We want to understand what we have to do in those sites to induce a really good anti-tumor immune response.”
Three from MIT named 2025 Gates Cambridge Scholars
Markey Freudenburg-Puricelli, Abigail Schipper ’24, and Rachel Zhang ’21 will pursue graduate studies at Cambridge University in the U.K.
MIT senior Markey Freudenburg-Puricelli and alumnae Abigail (“Abbie”) Schipper ’24 and Rachel Zhang ’21 have been selected as Gates Cambridge Scholars and will begin graduate studies this fall in the field of their choice at Cambridge University in the U.K.
Now celebrating its 25th year, the Gates Cambridge program provides fully funded post-graduate scholarships to outstanding applicants from countries outside of the U.K. The mission of Gates Cambridge is to build a global network of future leaders committed to changing the world for the better.
Students interested in applying to Gates Cambridge should contact Kim Benard, associate dean of distinguished fellowships in Career Advising and Professional Development.
Markey Freudenburg-Puricelli
Freudenburg-Puricelli is majoring in Earth, atmospheric, and planetary sciences and minoring in Spanish. Her passion for geoscience has led her to travel to different corners of the world to conduct geologic fieldwork. These experiences have motivated her to pursue a career in developing scientific policy and environmental regulation that can protect those most vulnerable to climate change. As a Gates Cambridge Scholar, she will pursue an MPhil in environmental policy.
Arriving at MIT, Freudenburg-Puricelli joined the Terrascope first-year learning community, which focuses on hands-on education relating to global environmental issues. She then became an undergraduate research assistant in the McGee Lab for Paleoclimate and Geochronology, where she gathered and interpreted data used to understand climate features of permafrost across northern Canada.
Following a summer internship in Chile researching volcanoes at the Universidad Católica del Norte, Freudenburg-Puricelli joined the Gehring Lab for Plant Genetics, Epigenetics, and Seed Biology. Last summer, she traveled to Peru to work with the Department of Paleontology at the Universidad Nacional de Piura, conducting fieldwork and preserving and organizing fossil specimens. Freudenburg-Puricelli has also done fieldwork on sedimentology in New Mexico, geological mapping in the Mojave Desert, and field oceanography onboard the SSV Corwith Cramer.
On campus, Freudenburg-Puricelli is an avid glassblower and has been a teaching assistant at the MIT glassblowing lab. She is also a tour guide for the MIT Office of Admissions and has volunteered with the Department of Earth, Atmospheric and Planetary Sciences’ first-year pre-orientation program.
Abigail “Abbie” Schipper ’24
Originally from Portland, Oregon, Schipper graduated from MIT with a BS in mechanical engineering and a minor in biology. At Cambridge, she will pursue an MPhil in engineering, researching medical devices used in pre-hospital trauma systems in low- and middle-income countries with the Cambridge Health Systems Design group.
At MIT, Schipper was a member of MIT Emergency Medical Services, volunteering on the ambulance and serving as the heartsafe officer and director of ambulance operations. Inspired by her work in CPR education, she helped create the LifeSaveHer project, which aims to decrease the gender disparity in out-of-hospital cardiac arrest survival outcomes through the creation of female CPR mannequins and associated research. This team was the first-place winner of the 2023 PKG IDEAS Competition and a recipient of the Eloranta Research Fellowship.
Schipper’s work has also focused on designing medical devices for low-resource or extreme environments. As an undergraduate, she performed research in the lab of Professor Giovanni Traverso, where she worked on a project designing a drug delivery implant for regions with limited access to surgery. During a summer internship at the University College London Collaborative Center for Inclusion Health, she worked with the U.K.’s National Health Service to create durable, low-cost carbon dioxide sensors to approximate the risk of airborne infectious disease transmission in shelters for people experiencing homelessness.
After graduation, Schipper interned at SAGA Space Architecture through MISTI Denmark, designing life support systems for an underwater habitat that will be used for astronaut training and oceanographic research.
Schipper was a member of the Concourse learning community, Sigma Kappa Sorority, and her living group, Burton 3rd. In her free time, she enjoys fixing bicycles and playing the piano.
Rachel Zhang ’21
Zhang graduated from MIT with a BS in physics in 2021. During her senior year, she was a recipient of the Joel Matthews Orloff Award. She then earned an MS in astronomy at Northwestern University. An internship at the Center for Computational Astrophysics at the Flatiron Institute deepened her interest in the applications of machine learning for astronomy. At Cambridge, she will pursue a PhD in applied mathematics and theoretical physics.
Study: Even after learning the right idea, humans and animals still seem to test other approaches
New research adds evidence that learning a successful strategy for approaching a task doesn’t prevent further exploration, even if doing so reduces performance.
Maybe it’s a life hack or a liability, or a little of both. A surprising result in a new MIT study may suggest that people and animals alike share an inherent propensity to keep updating their approach to a task even when they have already learned how they should approach it, and even if the deviations sometimes lead to unnecessary error.
The behavior of “exploring” when one could just be “exploiting” could make sense for at least two reasons, says Mriganka Sur, senior author of the study published Feb. 18 in Current Biology. Just because a task’s rules seem set one moment doesn’t mean they’ll stay that way in this uncertain world, so altering behavior from the optimal condition every so often could help reveal needed adjustments. Moreover, trying new things when you already know what you like is a way of finding out whether there might be something even better out there than the good thing you’ve got going on right now.
“If the goal is to maximize reward, you should never deviate once you have found the perfect solution, yet you keep exploring,” says Sur, the Paul and Lilah Newton Professor in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences at MIT. “Why? It’s like food. We all like certain foods, but we still keep trying different foods because you never know, there might be something you could discover.”
Predicting timing
Former research technician Tudor Dragoi, now a graduate student at Boston University, led the study in which he and fellow members of the Sur Lab explored how humans and marmosets, a small primate, make predictions about event timing.
Three humans and two marmosets were given a simple task. They’d see an image on a screen for some amount of time — the amount of time varied from one trial to the next within a limited range — and they simply had to hit a button (marmosets poked a tablet while humans clicked a mouse) when the image disappeared. Success was defined as reacting as quickly as possible to the image’s disappearance without hitting the button too soon. Marmosets received a juice reward on successful trials.
Though marmosets needed more training time than humans, the subjects all settled into the same reasonable pattern of behavior regarding the task. The longer the image stayed on the screen, the faster their reaction time to its disappearance. This behavior follows the “hazard model” of prediction in which, if the image can only last for so long, the longer it’s still there, the more likely it must be to disappear very soon. The subjects learned this and overall, with more experience, their reaction times became faster.
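The “hazard model” logic can be sketched numerically: if the image’s duration is drawn from a bounded set of possibilities, the conditional probability that it disappears at the next moment (the hazard) rises the longer it has stayed on screen. The durations below are invented for illustration; they are not the values used in the study.

```python
def hazard(t, durations):
    """P(image disappears at time t | it is still on screen at time t)."""
    still_on = [d for d in durations if d >= t]
    if not still_on:
        return 0.0
    ends_now = [d for d in still_on if d == t]
    return len(ends_now) / len(still_on)

possible_durations = [1, 2, 3, 4]  # hypothetical, equally likely durations

for t in possible_durations:
    print(t, hazard(t, possible_durations))
# The hazard grows from 1/4 at t=1 to 1/1 at t=4, so a subject tracking it
# should be increasingly ready to respond the longer the image persists.
```

This increasing hazard is why longer on-screen times predict faster reactions: by the last possible moment, disappearance is certain.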
But as the experiment continued, Sur and Dragoi’s team noticed something surprising was also going on. Mathematical modeling of the reaction time data revealed that both the humans and marmosets were letting the results of the immediate previous trial influence what they did on the next trial, even though they had already learned what to do. If the image was only on the screen briefly in one trial, on the next round subjects would decrease reaction time a bit (presumably expecting a shorter image duration again) whereas if the image lingered, they’d increase reaction time (presumably because they figured they’d have a longer wait).
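One simple way to formalize this trial-history effect is an exponential update: the expected duration is nudged toward whatever was just observed, so a short trial shortens the next anticipated wait and a long trial lengthens it. This update rule is an illustrative assumption, not the model actually fit in the paper.

```python
def update_expectation(expected, observed, rate=0.3):
    """Move the expected duration a fraction `rate` toward the last trial."""
    return expected + rate * (observed - expected)

expected = 2.5  # hypothetical prior expectation of image duration (seconds)
expected = update_expectation(expected, observed=1.0)  # a short trial
print(expected)  # pulled below 2.5: the subject anticipates a shorter wait
```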
Those results add to ones from a similar study Sur’s lab published in 2023, in which they found that even after mice learned the rules of a different cognitive task, they’d arbitrarily deviate from the winning strategy every so often. In that study, like this one, learning the successful strategy didn’t prevent subjects from continuing to test alternatives, even if it meant sacrificing reward.
“The persistence of behavioral changes even after task learning may reflect exploration as a strategy for seeking and settling on an optimal internal model of the environment,” the scientists wrote in the new study.
Relevance for autism
The similarity of the human and marmoset behaviors is an important finding as well, Sur says. That’s because differences in making predictions about one’s environment are posited to be a salient characteristic of autism spectrum disorders. Because marmosets are small, inherently social, and more cognitively complex than mice, some labs have begun establishing marmoset autism models, but a key prerequisite was showing that marmosets model autism-related behaviors well. By demonstrating that marmosets model neurotypical human behavior regarding predictions, the study therefore adds weight to the emerging idea that marmosets can indeed provide informative models for autism studies.
In addition to Dragoi and Sur, other authors of the paper are Hiroki Sugihara, Nhat Le, Elie Adam, Jitendra Sharma, Guoping Feng, and Robert Desimone.
The Simons Foundation Autism Research Initiative supported the research through the Simons Center for the Social Brain at MIT.
AI system predicts protein fragments that can bind to or inhibit a target
FragFold, developed by MIT Biology researchers, is a computational method with potential for impact on biological research and therapeutic applications.
All biological function is dependent on how different proteins interact with each other. Protein-protein interactions facilitate everything from transcribing DNA and controlling cell division to higher-level functions in complex organisms.
Much remains unclear, however, about how these functions are orchestrated on the molecular level, and how proteins interact with each other — either with other proteins or with copies of themselves.
Recent findings have revealed that small protein fragments have a lot of functional potential. Even though they are incomplete pieces, short stretches of amino acids can still bind to interfaces of a target protein, recapitulating native interactions. Through this process, they can alter that protein’s function or disrupt its interactions with other proteins.
Protein fragments could therefore empower both basic research on protein interactions and cellular processes, and could potentially have therapeutic applications.
Recently published in Proceedings of the National Academy of Sciences, a new method developed in the Department of Biology builds on existing artificial intelligence models to computationally predict protein fragments that can bind to and inhibit full-length proteins in E. coli. Theoretically, this tool could lead to genetically encodable inhibitors against any protein.
The work was done in the lab of associate professor of biology and Howard Hughes Medical Institute investigator Gene-Wei Li in collaboration with the lab of Jay A. Stein (1968) Professor of Biology, professor of biological engineering, and department head Amy Keating.
Leveraging machine learning
The program, called FragFold, leverages AlphaFold, an AI model that has led to phenomenal advancements in biology in recent years due to its ability to predict protein folding and protein interactions.
The goal of the project was to predict fragment inhibitors, which is a novel application of AlphaFold. The researchers on this project confirmed experimentally that more than half of FragFold’s predictions for binding or inhibition were accurate, even when researchers had no previous structural data on the mechanisms of those interactions.
“Our results suggest that this is a generalizable approach to find binding modes that are likely to inhibit protein function, including for novel protein targets, and you can use these predictions as a starting point for further experiments,” says co-first and corresponding author Andrew Savinov, a postdoc in the Li Lab. “We can really apply this to proteins without known functions, without known interactions, without even known structures, and we can put some credence in these models we’re developing.”
One example is FtsZ, a protein that is key for cell division. It is well-studied but contains a region that is intrinsically disordered and, therefore, especially challenging to study. Disordered proteins are dynamic, and their functional interactions are very likely fleeting — occurring so briefly that current structural biology tools can’t capture a single structure or interaction.
The researchers leveraged FragFold to explore the activity of fragments of FtsZ, including fragments of the intrinsically disordered region, to identify several new binding interactions with various proteins. This leap in understanding confirms and expands upon previous experiments measuring FtsZ’s biological activity.
This progress is significant in part because it was made without solving the disordered region’s structure, and because it exhibits the potential power of FragFold.
“This is one example of how AlphaFold is fundamentally changing how we can study molecular and cell biology,” Keating says. “Creative applications of AI methods, such as our work on FragFold, open up unexpected capabilities and new research directions.”
Inhibition, and beyond
The researchers accomplished these predictions by computationally fragmenting each protein and then modeling how those fragments would bind to interaction partners they thought were relevant.
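The fragmentation step can be pictured as tiling the protein sequence with fixed-length, overlapping windows, each of which is then modeled against a candidate interaction partner. The fragment length and step size below are arbitrary choices for illustration; the actual FragFold parameters may differ.

```python
def fragment(sequence, length=30, step=5):
    """Yield (start, fragment) pairs tiling the full-length protein."""
    for start in range(0, len(sequence) - length + 1, step):
        yield start, sequence[start:start + length]

protein = "M" + "ACDEFGHIKLMNPQRSTVWY" * 5  # hypothetical 101-residue protein
fragments = list(fragment(protein))
print(len(fragments))  # number of fragments to model against each partner
```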
They compared the maps of predicted binding across the entire sequence to the effects of those same fragments in living cells, determined using high-throughput experimental measurements in which millions of cells each produce one type of protein fragment.
AlphaFold uses co-evolutionary information to predict folding, and typically evaluates the evolutionary history of proteins using multiple sequence alignments (MSAs) for every single prediction run. The MSAs are critical, but are a bottleneck for large-scale predictions — they can take a prohibitive amount of time and computational power.
For FragFold, the researchers instead pre-calculated the MSA for a full-length protein once, and used that result to guide the predictions for each fragment of that full-length protein.
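The reuse trick amounts to computing the alignment once for the full-length protein and then slicing its columns for each fragment, rather than rebuilding an MSA per fragment. The toy alignment below is a list of strings; a real pipeline would use AlphaFold’s MSA formats, and this slicing is a simplification of what FragFold does.

```python
# Toy full-length MSA: the query sequence plus two hypothetical homologs,
# all the same length so columns line up.
full_msa = [
    "MKVLAT",  # query sequence
    "MKILAS",  # homolog 1
    "MRVLAT",  # homolog 2
]

def msa_for_fragment(msa, start, end):
    """Reuse the precomputed full-length MSA by slicing its columns."""
    return [row[start:end] for row in msa]

# A fragment covering residues 2-5 inherits the full protein's alignment:
print(msa_for_fragment(full_msa, 1, 5))  # ['KVLA', 'KILA', 'RVLA']
```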
Savinov, together with Keating Lab alumnus Sebastian Swanson PhD ’23, predicted inhibitory fragments of a diverse set of proteins in addition to FtsZ. Among the interactions they explored was a complex between lipopolysaccharide transport proteins LptF and LptG. A protein fragment of LptG inhibited this interaction, presumably disrupting the delivery of lipopolysaccharide, which is a crucial component of the E. coli outer cell membrane essential for cellular fitness.
“The big surprise was that we can predict binding with such high accuracy and, in fact, often predict binding that corresponds to inhibition,” Savinov says. “For every protein we’ve looked at, we’ve been able to find inhibitors.”
The researchers initially focused on protein fragments as inhibitors because whether a fragment could block an essential function in cells is a relatively simple outcome to measure systematically. Looking forward, Savinov is also interested in exploring fragment function outside inhibition, such as fragments that can stabilize the protein they bind to, enhance or alter its function, or trigger protein degradation.
Design, in principle
This research is a starting point for developing a systematic understanding of cellular design principles, and of what elements deep-learning models may be drawing on to make accurate predictions.
“There’s a broader, further-reaching goal that we’re building towards,” Savinov says. “Now that we can predict them, can we use the data we have from predictions and experiments to pull out the salient features to figure out what AlphaFold has actually learned about what makes a good inhibitor?”
Savinov and collaborators also delved further into how protein fragments bind, exploring other protein interactions and mutating specific residues to see how those interactions change how the fragment interacts with its target.
Experimentally examining the behavior of thousands of mutated fragments within cells, an approach known as deep mutational scanning, revealed key amino acids that are responsible for inhibition. In some cases, the mutated fragments were even more potent inhibitors than their natural, full-length sequences.
“Unlike previous methods, we are not limited to identifying fragments in experimental structural data,” says Swanson. “The core strength of this work is the interplay between high-throughput experimental inhibition data and the predicted structural models: the experimental data guides us towards the fragments that are particularly interesting, while the structural models predicted by FragFold provide a specific, testable hypothesis for how the fragments function on a molecular level.”
Savinov is excited about the future of this approach and its myriad applications.
“By creating compact, genetically encodable binders, FragFold opens a wide range of possibilities to manipulate protein function,” Li agrees. “We can imagine delivering functionalized fragments that can modify native proteins, change their subcellular localization, and even reprogram them to create new tools for studying cell biology and treating diseases.”
MIT faculty, alumni named 2025 Sloan Research Fellows
Annual award honors early-career researchers for creativity, innovation, and research accomplishments.
Seven MIT faculty and 21 additional MIT alumni are among 126 early-career researchers honored with 2025 Sloan Research Fellowships by the Alfred P. Sloan Foundation.
The recipients represent the MIT departments of Biology; Chemical Engineering; Chemistry; Civil and Environmental Engineering; Earth, Atmospheric and Planetary Sciences; Economics; Electrical Engineering and Computer Science; Mathematics; and Physics as well as the Music and Theater Arts Section and the MIT Sloan School of Management.
The fellowships honor exceptional researchers at U.S. and Canadian educational institutions, whose creativity, innovation, and research accomplishments make them stand out as the next generation of leaders. Winners receive a two-year, $75,000 fellowship that can be used flexibly to advance the fellow’s research.
“The Sloan Research Fellows represent the very best of early-career science, embodying the creativity, ambition, and rigor that drive discovery forward,” says Adam F. Falk, president of the Alfred P. Sloan Foundation. “These extraordinary scholars are already making significant contributions, and we are confident they will shape the future of their fields in remarkable ways.”
Including this year’s recipients, a total of 333 MIT faculty have received Sloan Research Fellowships since the program’s inception in 1955. MIT and Northwestern University are tied for having the most faculty in the 2025 cohort of fellows, each with seven. The MIT recipients are:
Ariel L. Furst is the Paul M. Cook Career Development Professor of Chemical Engineering at MIT. Her lab combines biological, chemical, and materials engineering to solve challenges in human health and environmental sustainability, with lab members developing technologies for implementation in low-resource settings to ensure equitable access to technology. Furst completed her PhD in the lab of Professor Jacqueline K. Barton at Caltech developing new cancer diagnostic strategies based on DNA charge transport. She was then an A.O. Beckman Postdoctoral Fellow in the lab of Professor Matthew Francis at the University of California at Berkeley, developing sensors to monitor environmental pollutants. She is the recipient of the NIH New Innovator Award, the NSF CAREER Award, and the Dreyfus Teacher-Scholar Award. She is passionate about STEM outreach and increasing participation of underrepresented groups in engineering.
Mohsen Ghaffari SM ’13, PhD ’17 is an associate professor in the Department of Electrical Engineering and Computer Science (EECS) as well as the Computer Science and Artificial Intelligence Laboratory (CSAIL). His research explores the theory of distributed and parallel computation, and he has done influential work on a range of algorithmic problems, including generic derandomization methods for distributed and parallel computing (which resolved several decades-old open problems), improved distributed algorithms for graph problems, sublinear algorithms derived via distributed techniques, and algorithmic and impossibility results for massively parallel computation. His work has been recognized with best paper awards at the IEEE Symposium on Foundations of Computer Science (FOCS), the ACM-SIAM Symposium on Discrete Algorithms (SODA), the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), the ACM Symposium on Principles of Distributed Computing (PODC), and the International Symposium on Distributed Computing (DISC), as well as with a European Research Council Starting Grant and a Google Faculty Research Award, among other honors.
Marzyeh Ghassemi PhD ’17 is an associate professor within EECS and the Institute for Medical Engineering and Science (IMES). Ghassemi earned two bachelor’s degrees in computer science and electrical engineering from New Mexico State University as a Goldwater Scholar; her MS in biomedical engineering from Oxford University as a Marshall Scholar; and her PhD in computer science from MIT. Following stints as a visiting researcher with Alphabet’s Verily and an assistant professor at University of Toronto, Ghassemi joined EECS and IMES as an assistant professor in July 2021. (IMES is the home of the Harvard-MIT Program in Health Sciences and Technology.) She is affiliated with the Laboratory for Information and Decision Systems (LIDS), the MIT-IBM Watson AI Lab, the Abdul Latif Jameel Clinic for Machine Learning in Health, the Institute for Data, Systems, and Society (IDSS), and CSAIL. Ghassemi’s research in the Healthy ML Group creates a rigorous quantitative framework in which to design, develop, and place machine learning models in a way that is robust and useful, focusing on health settings. Her contributions range from socially-aware model construction to improving subgroup- and shift-robust learning methods to identifying important insights in model deployment scenarios that have implications in policy, health practice, and equity. Among other awards, Ghassemi has been named one of MIT Technology Review’s 35 Innovators Under 35 and an AI2050 Fellow, as well as receiving the 2018 Seth J. Teller Award, the 2023 MIT Prize for Open Data, a 2024 NSF CAREER Award, and the Google Research Scholar Award. She founded the nonprofit Association for Health, Inference and Learning (AHLI) and her work has been featured in popular press such as Forbes, Fortune, MIT News, and The Huffington Post.
Darcy McRose is the Thomas D. and Virginia W. Cabot Career Development Assistant Professor of Civil and Environmental Engineering. She is an environmental microbiologist who draws on techniques from genetics, chemistry, and geosciences to understand the ways microbes control nutrient cycling and plant health. Her laboratory uses small molecules, or “secondary metabolites,” made by plants and microbes as tractable experimental tools to study microbial activity in complex environments like soils and sediments. In the long term, this work aims to uncover fundamental controls on microbial physiology and community assembly that can be used to promote agricultural sustainability, ecosystem health, and human prosperity.
Sarah Millholland, an assistant professor of physics at MIT and member of the Kavli Institute for Astrophysics and Space Research, is a theoretical astrophysicist who studies extrasolar planets, including their formation and evolution, orbital dynamics, and interiors/atmospheres. She studies patterns in the observed planetary orbital architectures, referring to properties like the spacings, eccentricities, inclinations, axial tilts, and planetary size relationships. She specializes in investigating how gravitational interactions such as tides, resonances, and spin dynamics sculpt observable exoplanet properties. She is the 2024 recipient of the Vera Rubin Early Career Award for her contributions to the formation and dynamics of extrasolar planetary systems. She plans to use her Sloan Fellowship to explore how tidal physics shape the diversity of orbits and interiors of exoplanets orbiting close to their stars.
Emil Verner is the Albert F. (1942) and Jeanne P. Clear Career Development Associate Professor of Global Management and an associate professor of finance at the MIT Sloan School of Management. His research lies at the intersection of finance and macroeconomics, with a particular focus on understanding the causes and consequences of financial crises over the past 150 years. Verner’s recent work examines the drivers of bank runs and insolvency during banking crises, the role of debt booms in amplifying macroeconomic fluctuations, the effectiveness of debt relief policies during crises, and how financial crises impact political polarization and support for populist parties. Before joining MIT, he earned a PhD in economics from Princeton University.
Christian Wolf, the Rudi Dornbusch Career Development Assistant Professor of Economics and a faculty research fellow at the National Bureau of Economic Research, works in macroeconomics, monetary economics, and time series econometrics. His work focuses on the development and application of new empirical methods to address classic macroeconomic questions and to evaluate how robust the answers are to a range of common modeling assumptions. His research has provided path-breaking insights on monetary transmission mechanisms and fiscal policy. In a separate strand of work, Wolf has substantially deepened our understanding of the appropriate methods macroeconomists should use to estimate impulse response functions — how key economic variables respond to policy changes or unexpected shocks.
The following MIT alumni also received fellowships:
Jason Altschuler SM ’18, PhD ’22
David Bau III PhD ’21
Rene Boiteau PhD ’16
Lynne Chantranupong PhD ’17
Lydia B. Chilton ’06, ’07, MNG ’09
Jordan Cotler ’15
Alexander Ji PhD ’17
Sarah B. King ’10
Allison Z. Koenecke ’14
Eric Larson PhD ’18
Chen Lian ’15, PhD ’20
Huanqian Loh ’06
Ian J. Moult PhD ’16
Lisa Olshansky PhD ’15
Andrew Owens SM ’13, PhD ’16
Matthew Rognlie PhD ’16
David Rolnick ’12, PhD ’18
Shreya Saxena PhD ’17
Mark Sellke ’18
Amy X. Zhang PhD ’19
Aleksandr V. Zhukhovitskiy PhD ’16
Longtime MIT Professor Anthony “Tony” Sinskey ScD ’67, who was also the co-founder and faculty director of the Center for Biomedical Innovation (CBI), passed away on Feb. 12 at his home in New Hampshire. He was 84.
Deeply engaged with MIT, Sinskey left his mark on the Institute as much through the relationships he built as the research he conducted. Colleagues say that throughout his decades on the faculty, Sinskey’s door was always open.
“He was incredibly generous in so many ways,” says Graham Walker, an American Cancer Society Professor at MIT. “He was so willing to support people, and he did it out of sheer love and commitment. If you could just watch Tony in action, there was so much that was charming about the way he lived. I’ve said for years that after they made Tony, they broke the mold. He was truly one of a kind.”
Sinskey’s lab at MIT explored methods for metabolic engineering and the production of biomolecules. Over the course of his research career, he published more than 350 papers in leading peer-reviewed journals for biology, metabolic engineering, and biopolymer engineering, and filed more than 50 patents. Well-known in the biopharmaceutical industry, Sinskey contributed to the founding of multiple companies, including Metabolix, Tepha, Merrimack Pharmaceuticals, and Genzyme Corporation. His work with CBI has also led to impactful research papers, manufacturing initiatives, and educational content since the center’s founding in 2005.
Across all of his work, Sinskey built a reputation as a supportive, collaborative, and highly entertaining friend who seemed to have a story for everything.
“Tony would always ask for my opinions — what did I think?” says Barbara Imperiali, MIT’s Class of 1922 Professor of Biology and Chemistry, who first met Sinskey as a graduate student. “Even though I was younger, he viewed me as an equal. It was exciting to be able to share my academic journey with him. Even later, he was continually opening doors for me, mentoring, connecting. He felt it was his job to get people into a room together to make new connections.”
Sinskey grew up in the small town of Collinsville, Illinois, and spent nights after school working on a farm. For his undergraduate degree, he attended the University of Illinois, where he got a job washing dishes at the dining hall. One day, as he recalled in a 2020 conversation, he complained to his advisor about the dishwashing job, so the advisor offered him a job washing equipment in his microbiology lab.
In a development that would repeat itself throughout Sinskey’s career, he befriended the researchers in the lab and started learning about their work. Soon he was showing up on weekends and helping out. The experience inspired Sinskey to go to graduate school, and he only applied to one place.
Sinskey earned his ScD from MIT in nutrition and food science in 1967. He joined MIT’s faculty a few years later and never left.
“He loved MIT and its excellence in research and education, which were incredibly important to him,” Walker says. “I don’t know of another institution this interdisciplinary — there’s barely a speed bump between departments — so you can collaborate with anybody. He loved that. He also loved the spirit of entrepreneurship, which he thrived on. If you heard somebody wanted to get a project done, you could run around, get 10 people, and put it together. He just loved doing stuff like that.”
Working across departments would become a signature of Sinskey’s research. His original office was on the first floor of MIT’s Building 56, right next to the parking lot, so he’d leave his door open in the mornings and afternoons and colleagues would stop in and chat.
“One of my favorite things to do was to drop in on Tony when I saw that his office door was open,” says Chris Kaiser, MIT’s Amgen Professor of Biology. “We had a whole range of things we liked to catch up on, but they always included his perspectives looking back on his long history at MIT. It also always included hopes for the future, including tracking trajectories of MIT students, whom he doted on.”
Long before the internet existed, Sinskey was, colleagues say, a kind of internet unto himself, constantly leveraging his vast web of relationships to make connections and stay on top of the latest science news.
“He was an incredibly gracious person — and he knew everyone,” Imperiali says. “It was as if his Rolodex had no end. You would sit there and he would say, ‘Call this person.’ or ‘Call that person.’ And ‘Did you read this new article?’ He had a wonderful view of science and collaboration, and he always made that a cornerstone of what he did. Whenever I’d see his door open, I’d grab a cup of tea and just sit there and talk to him.”
When the first recombinant DNA molecules were produced in the 1970s, it became a hot area of research. Sinskey wanted to learn more about recombinant DNA, so he hosted a large symposium on the topic at MIT that brought in experts from around the world.
“He got his name associated with recombinant DNA for years because of that,” Walker recalls. “People started seeing him as Mr. Recombinant DNA. That kind of thing happened all the time with Tony.”
Sinskey’s research contributions extended beyond recombinant DNA into other microbial techniques to produce amino acids and biodegradable plastics. He co-founded CBI in 2005 to improve global health through the development and dispersion of biomedical innovations. The center adopted Sinskey’s collaborative approach in order to accelerate innovation in biotechnology and biomedical research, bringing together experts from across MIT’s schools.
“Tony was at the forefront of advancing cell culture engineering principles so that making biomedicines could become a reality. He knew early on that biomanufacturing was an important step on the critical path from discovering a drug to delivering it to a patient,” says Stacy Springs, the executive director of CBI. “Tony was not only my boss and mentor, but one of my closest friends. He was always working to help everyone reach their potential, whether that was a colleague, a former or current researcher, or a student. He had a gentle way of encouraging you to do your best.”
“MIT is one of the greatest places to be because you can do anything you want here as long as it’s not a crime,” Sinskey joked in 2020. “You can do science, you can teach, you can interact with people — and the faculty at MIT are spectacular to interact with.”
Sinskey shared his affection for MIT with his family. His wife, the late ChoKyun Rha ’62, SM ’64, SM ’66, ScD ’67, was a professor at MIT for more than four decades and the first woman of Asian descent to receive tenure at MIT. His two sons also attended MIT — Tong-ik Lee Sinskey ’79, SM ’80 and Taeminn Song MBA ’95, who is the director of strategy and strategic initiatives for MIT Information Systems and Technology (IS&T).
Song recalls: “He was driven by the same goal my mother had: to advance knowledge in science and technology by exploring new ideas and pushing everyone around them to be better.”
Around 10 years ago, Sinskey began teaching a class with Walker, Course 7.21/7.62 (Microbial Physiology). Walker says their approach was to treat the students as equals and learn as much from them as they taught. The lessons extended beyond the inner workings of microbes to what it takes to be a good scientist and how to be creative. Sinskey and Rha even started inviting the class over to their home for Thanksgiving dinner each year.
“At some point, we realized the class was turning into a close community,” Walker says. “Tony had this endless supply of stories. It didn’t seem like there was a topic in biology that Tony didn’t have a story about either starting a company or working with somebody who started a company.”
In recent years, Walker wasn’t sure they would continue teaching the class, but Sinskey remarked that it was one of the things that gave his life meaning after his wife’s passing in 2021. That decided it.
After finishing up this past semester with a class-wide lunch at Legal Sea Foods, Sinskey and Walker agreed it was one of the best semesters they’d ever taught.
In addition to his two sons, Sinskey is survived by his daughter-in-law, Hyunmee Elaine Song; five grandchildren; and two great-grandsons. He is also survived by his brother Timothy Sinskey and his sister, Christine Sinskey Braudis; his brother Terry Sinskey died in 1975.
Gifts in Sinskey’s memory can be made to the ChoKyun Rha (1962) and Anthony J Sinskey (1967) Fund.
MIT biologists discover a new type of control over RNA splicing
They identified proteins that influence splicing of about half of all human introns, allowing for more complex types of gene regulation.
RNA splicing is a cellular process that is critical for gene expression. After genes are copied from DNA into messenger RNA, portions of the RNA that don’t code for proteins, called introns, are cut out and the coding portions are spliced back together.
This process is controlled by a large protein-RNA complex called the spliceosome. MIT biologists have now discovered a new layer of regulation that helps to determine which sites on the messenger RNA molecule the spliceosome will target.
The research team discovered that this type of regulation, which appears to influence the expression of about half of all human genes, is found throughout the animal kingdom, as well as in plants. The findings suggest that the control of RNA splicing, a process that is fundamental to gene expression, is more complex than previously known.
“Splicing in more complex organisms, like humans, is more complicated than it is in some model organisms like yeast, even though it’s a very conserved molecular process. There are bells and whistles on the human spliceosome that allow it to process specific introns more efficiently. One of the advantages of a system like this may be that it allows more complex types of gene regulation,” says Connor Kenny, an MIT graduate student and the lead author of the study.
Christopher Burge, the Uncas and Helen Whitaker Professor of Biology at MIT, is the senior author of the study, which appears today in Nature Communications.
Building proteins
RNA splicing, a process discovered in the late 1970s, allows cells to precisely control the content of the mRNA transcripts that carry the instructions for building proteins.
Each mRNA transcript contains coding regions, known as exons, and noncoding regions, known as introns. They also include sites that act as signals for where splicing should occur, allowing the cell to assemble the correct sequence for a desired protein. This process enables a single gene to produce multiple proteins; over evolutionary timescales, splicing can also change the size and content of genes and proteins, when different exons become included or excluded.
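The exon-joining step can be illustrated with a toy example: given the coordinates of the exons, splicing keeps those regions and drops the intervening intron. The sequence and coordinates below are invented for illustration.

```python
def splice(transcript, exons):
    """Join exon regions (start, end) and drop the intervening introns."""
    return "".join(transcript[start:end] for start, end in exons)

pre_mrna = "AAAGTTTTTAGCCC"    # hypothetical pre-mRNA
exons = [(0, 3), (11, 14)]     # two exons flanking one intron (GT...AG)
print(splice(pre_mrna, exons))  # prints AAACCC
```

Different choices of which exons to include from the same transcript would yield different proteins, which is how one gene can encode several products.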
The spliceosome, which forms on introns, is composed of proteins and noncoding RNAs called small nuclear RNAs (snRNAs). In the first step of spliceosome assembly, an snRNA molecule known as U1 snRNA binds to the 5’ splice site at the beginning of the intron. Until now, it had been thought that the binding strength between the 5’ splice site and the U1 snRNA was the most important determinant of whether an intron would be spliced out of the mRNA transcript.
In the new study, the MIT team discovered that a family of proteins called LUC7 also helps to determine whether splicing will occur, but only for a subset of introns — in human cells, up to 50 percent.
Before this study, it was known that LUC7 proteins associate with U1 snRNA, but the exact function wasn’t clear. There are three different LUC7 proteins in human cells, and Kenny’s experiments revealed that two of these proteins interact specifically with one type of 5’ splice site, which the researchers called “right-handed.” A third human LUC7 protein interacts with a different type, which the researchers call “left-handed.”
The researchers found that about half of human introns contain a right- or left-handed site, while the other half do not appear to be controlled by interaction with LUC7 proteins. This type of control appears to add another layer of regulation that helps remove specific introns more efficiently, the researchers say.
“The paper shows that these two different 5’ splice site subclasses exist and can be regulated independently of one another,” Kenny says. “Some of these core splicing processes are actually more complex than we previously appreciated, which warrants more careful examination of what we believe to be true about these highly conserved molecular processes.”
“Complex splicing machinery”
Previous work has shown that mutation or deletion of one of the LUC7 proteins that bind to right-handed splice sites is linked to blood cancers, including about 10 percent of acute myeloid leukemias (AMLs). In this study, the researchers found that AMLs that lost a copy of the LUC7L2 gene have inefficient splicing of right-handed splice sites. These cancers also developed the same type of altered metabolism seen in earlier work.
“Understanding how the loss of this LUC7 protein in some AMLs alters splicing could help in the design of therapies that exploit these splicing differences to treat AML,” Burge says. “There are also small molecule drugs for other diseases such as spinal muscular atrophy that stabilize the interaction between U1 snRNA and specific 5’ splice sites. So the knowledge that particular LUC7 proteins influence these interactions at specific splice sites could aid in improving the specificity of this class of small molecules.”
Working with a lab led by Sascha Laubinger, a professor at Martin Luther University Halle-Wittenberg, the researchers found that introns in plants also have right- and left-handed 5’ splice sites that are regulated by Luc7 proteins.
The researchers’ analysis suggests that this type of splicing arose in a common ancestor of plants, animals, and fungi, but it was lost from fungi soon after they diverged from plants and animals.
“A lot of what we know about how splicing works and what the core components are actually comes from relatively old yeast genetics work,” Kenny says. “What we see is that humans and plants tend to have more complex splicing machinery, with additional components that can regulate different introns independently.”
The researchers now plan to further analyze the structures formed by the interactions of Luc7 proteins with mRNA and the rest of the spliceosome, which could help them figure out in more detail how different forms of Luc7 bind to different 5’ splice sites.
The research was funded by the U.S. National Institutes of Health and the German Research Foundation.
J-WAFS: Supporting food and water research across MIT

For the past decade, the Abdul Latif Jameel Water and Food Systems Lab has strengthened MIT faculty efforts in water and food research and innovation.

MIT’s Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) has transformed the landscape of water and food research at MIT, driving faculty engagement and catalyzing new research and innovation in these critical areas. With philanthropic, corporate, and government support, J-WAFS’ strategic approach spans the entire research life cycle, from support for early-stage research to commercialization grants for more advanced projects.
Over the past decade, J-WAFS has invested approximately $25 million in direct research funding to support MIT faculty pursuing transformative research with the potential for significant impact. “Since awarding our first cohort of seed grants in 2015, it’s remarkable to look back and see that over 10 percent of the MIT faculty have benefited from J-WAFS funding,” observes J-WAFS Executive Director Renee J. Robins ’83. “Many of these professors hadn’t worked on water or food challenges before their first J-WAFS grant.”
By fostering interdisciplinary collaborations and supporting high-risk, high-reward projects, J-WAFS has amplified the capacity of MIT faculty to pursue groundbreaking research that addresses some of the world’s most pressing challenges facing our water and food systems.
Drawing MIT faculty to water and food research
J-WAFS open calls for proposals enable faculty to explore bold ideas and develop impactful approaches to tackling critical water and food system challenges. Professor Patrick Doyle’s work in water purification exemplifies this impact. “Without J-WAFS, I would have never ventured into the field of water purification,” Doyle reflects. While previously focused on pharmaceutical manufacturing and drug delivery, exposure to J-WAFS-funded peers led him to apply his expertise in soft materials to water purification. “Both the funding and the J-WAFS community led me to be deeply engaged in understanding some of the key challenges in water purification and water security,” he explains.
Similarly, Professor Otto Cordero of the Department of Civil and Environmental Engineering (CEE) leveraged J-WAFS funding to pivot his research into aquaculture. Cordero explains that his first J-WAFS seed grant “has been extremely influential for my lab because it allowed me to take a step in a new direction, with no preliminary data in hand.” Cordero’s expertise is in microbial communities. He was previously unfamiliar with aquaculture, but he saw the relevance of microbial communities to the health of farmed aquatic organisms.
Supporting early-career faculty
New assistant professors at MIT have particularly benefited from J-WAFS funding and support. J-WAFS has played a transformative role in shaping the careers and research trajectories of many new faculty members by encouraging them to explore novel research areas, and in many instances providing their first MIT research grant.
Professor Ariel Furst reflects on how pivotal J-WAFS’ investment has been in advancing her research. “This was one of the first grants I received after starting at MIT, and it has truly shaped the development of my group’s research program,” Furst explains. With J-WAFS’ backing, her lab has achieved breakthroughs in chemical detection and remediation technologies for water. “The support of J-WAFS has enabled us to develop the platform funded through this work beyond the initial applications to the general detection of environmental contaminants and degradation of those contaminants,” she elaborates.
Karthish Manthiram, now a professor of chemical engineering and chemistry at Caltech, explains how J-WAFS’ early investment enabled him and other young faculty to pursue ambitious ideas. “J-WAFS took a big risk on us,” Manthiram reflects. His research on breaking the nitrogen triple bond to make ammonia for fertilizer was initially met with skepticism. However, J-WAFS’ seed funding allowed his lab to lay the groundwork for breakthroughs that later attracted significant National Science Foundation (NSF) support. “That early funding from J-WAFS has been pivotal to our long-term success,” he notes.
These stories underscore the broad impact of J-WAFS’ support for early-career faculty, and its commitment to empowering them to address critical global challenges and innovate boldly.
Fueling follow-on funding
J-WAFS seed grants enable faculty to explore nascent research areas, but external funding for continued work is usually necessary to achieve the full potential of these novel ideas. “It’s often hard to get funding for early stage or out-of-the-box ideas,” notes J-WAFS Director Professor John H. Lienhard V. “My hope, when I founded J-WAFS in 2014, was that seed grants would allow PIs [principal investigators] to prove out novel ideas so that they would be attractive for follow-on funding. And after 10 years, J-WAFS-funded research projects have brought more than $21 million in subsequent awards to MIT.”
Professor Retsef Levi led a seed study on how agricultural supply chains affect food safety, with a team of faculty spanning the MIT schools of Engineering and Science as well as the MIT Sloan School of Management. The team parlayed their seed grant research into a multi-million-dollar follow-on initiative. Levi reflects, “The J-WAFS seed funding allowed us to establish the initial credibility of our team, which was key to our success in obtaining large funding from several other agencies.”
Dave Des Marais was an assistant professor in the Department of CEE when he received his first J-WAFS seed grant. The funding supported his research on how plant growth and physiology are controlled by genes and interact with the environment. The seed grant helped launch his lab’s work on enhancing climate change resilience in agricultural systems. The work led to his Faculty Early Career Development (CAREER) Award from the NSF, a prestigious honor for junior faculty members. Now an associate professor, Des Marais’ ongoing project to further investigate the mechanisms and consequences of genomic and environmental interactions is supported by a five-year, $1,490,000 NSF grant. “J-WAFS provided essential funding to get my new research underway,” comments Des Marais.
Stimulating interdisciplinary collaboration
Des Marais’ seed grant was also key to developing new collaborations. He explains, “The J-WAFS grant supported me to develop a collaboration with Professor Caroline Uhler in EECS/IDSS [the Department of Electrical Engineering and Computer Science/Institute for Data, Systems, and Society] that really shaped how I think about framing and testing hypotheses. One of the best things about J-WAFS is facilitating unexpected connections among MIT faculty with diverse yet complementary skill sets.”
Professors A. John Hart of the Department of Mechanical Engineering and Benedetto Marelli of CEE also launched a new interdisciplinary collaboration with J-WAFS funding. They partnered to join expertise in biomaterials, microfabrication, and manufacturing, to create printed silk-based colorimetric sensors that detect food spoilage. “The J-WAFS Seed Grant provided a unique opportunity for multidisciplinary collaboration,” Hart notes.
Professors Stephen Graves in the MIT Sloan School of Management and Bishwapriya Sanyal in the Department of Urban Studies and Planning (DUSP) partnered to pursue new research on agricultural supply chains. With field work in Senegal, their J-WAFS-supported project brought together international development specialists and operations management experts to study how small firms and government agencies influence access to and uptake of irrigation technology by poorer farmers. “We used J-WAFS to spur a collaboration that would have been improbable without this grant,” they explain. Being part of the J-WAFS community also introduced them to researchers in Professor Amos Winter’s lab in the Department of Mechanical Engineering working on irrigation technologies for low-resource settings. DUSP doctoral candidate Mark Brennan notes, “We got to share our understanding of how irrigation markets and irrigation supply chains work in developing economies, and then we got to contrast that with their understanding of how irrigation system models work.”
Timothy Swager, professor of chemistry, and Rohit Karnik, professor of mechanical engineering and J-WAFS associate director, collaborated on a sponsored research project supported by Xylem, Inc. through the J-WAFS Research Affiliate program. The cross-disciplinary research, which targeted the development of ultra-sensitive sensors for toxic PFAS chemicals, was conceived following a series of workshops hosted by J-WAFS. Swager and Karnik were two of the participants, and their involvement led to the collaborative proposal that Xylem funded. “J-WAFS funding allowed us to combine Swager lab’s expertise in sensing with my lab’s expertise in microfluidics to develop a cartridge for field-portable detection of PFAS,” says Karnik. “J-WAFS has enriched my research program in so many ways,” adds Swager, who is now working to commercialize the technology.
Driving global collaboration and impact
J-WAFS has also helped MIT faculty establish and advance international collaboration and impactful global research. By funding and supporting projects that connect MIT researchers with international partners, J-WAFS has not only advanced technological solutions, but also strengthened cross-cultural understanding and engagement.
Professor Matthew Shoulders leads the inaugural J-WAFS Grand Challenge project. In response to the first J-WAFS call for “Grand Challenge” proposals, Shoulders assembled an interdisciplinary team based at MIT to provide climate resilience to agriculture by improving the most inefficient aspect of photosynthesis: the carbon dioxide-fixing plant enzyme RuBisCO. J-WAFS funded this high-risk/high-reward project following a competitive process that engaged external reviewers through several rounds of iterative proposal development. The technical feedback led the team to researchers with complementary expertise at the Australian National University. “Our collaborative team of biochemists and synthetic biologists, computational biologists, and chemists is deeply integrated with plant biologists and field trial experts, yielding a robust feedback loop for enzyme engineering,” Shoulders says. “Together, this team will be able to make a concerted effort using the most modern, state-of-the-art techniques to engineer crop RuBisCO with an eye to helping make meaningful gains in securing a stable crop supply, hopefully with accompanying improvements in both food and water security.”
Professor Leon Glicksman and Research Engineer Eric Verploegen’s team designed a low-cost cooling chamber to preserve fruits and vegetables harvested by smallholder farmers with no access to cold chain storage. J-WAFS’ guidance motivated the team to prioritize practical considerations informed by local collaborators, ensuring market competitiveness. “As our new idea for a forced-air evaporative cooling chamber was taking shape, we continually checked that our solution was evolving in a direction that would be competitive in terms of cost, performance, and usability to existing commercial alternatives,” explains Verploegen, who is currently an MIT D-Lab affiliate. Following the team’s initial seed grant, the team secured a J-WAFS Solutions commercialization grant, which Verploegen says “further motivated us to establish partnerships with local organizations capable of commercializing the technology earlier in the project than we might have done otherwise.” The team has since shared an open-source design as part of its commercialization strategy to maximize accessibility and impact.
Bringing corporate sponsored research opportunities to MIT faculty
J-WAFS also plays a role in driving private partnerships, enabling collaborations that bridge industry and academia. Through its Research Affiliate Program, for example, J-WAFS provides opportunities for faculty to collaborate with industry on sponsored research, helping to convert scientific discoveries into licensable intellectual property (IP) that companies can turn into commercial products and services.
J-WAFS introduced professor of mechanical engineering Alex Slocum to a challenge presented by its research affiliate company, Xylem: how to design a more energy-efficient pump for fluctuating flows. With centrifugal pumps consuming an estimated 6 percent of U.S. electricity annually, Slocum and his then-graduate student Hilary Johnson SM '18, PhD '22 developed an innovative variable volute mechanism that reduces energy usage. “Xylem envisions this as the first in a new category of adaptive pump geometry,” comments Johnson. The research produced a pump prototype and related IP that Xylem is working on commercializing. Johnson notes that these outcomes “would not have been possible without J-WAFS support and facilitation of the Xylem industry partnership.” Slocum adds, “J-WAFS enabled Hilary to begin her work on pumps, and Xylem sponsored the research to bring her to this point … where she has an opportunity to do far more than the original project called for.”
Swager speaks highly of the impact of corporate research sponsorship through J-WAFS on his research and technology translation efforts. His PFAS project with Karnik described above was also supported by Xylem. “Xylem was an excellent sponsor of our research. Their engagement and feedback were instrumental in advancing our PFAS detection technology, now on the path to commercialization,” Swager says.
Looking forward
What J-WAFS has accomplished is more than a collection of research projects; a decade of impact demonstrates how J-WAFS’ approach has been transformative for many MIT faculty members. As Professor Mathias Kolle puts it, his engagement with J-WAFS “had a significant influence on how we think about our research and its broader impacts.” He adds that it “opened my eyes to the challenges in the field of water and food systems and the many different creative ideas that are explored by MIT.”
This thriving ecosystem of innovation, collaboration, and academic growth around water and food research has not only helped faculty build interdisciplinary and international partnerships, but has also led to the commercialization of transformative technologies with real-world applications. C. Cem Taşan, the POSCO Associate Professor of Metallurgy who is leading a J-WAFS Solutions commercialization team that is about to launch a startup company, sums it up by noting, “Without J-WAFS, we wouldn’t be here at all.”
As J-WAFS looks to the future, its continued commitment — supported by the generosity of its donors and partners — builds on a decade of success enabling MIT faculty to advance water and food research that addresses some of the world’s most pressing challenges.
Unlocking the secrets of fusion’s core with AI-enhanced simulations

Fusion’s future depends on decoding plasma’s mysteries. Simulations can help keep research on track and reveal more efficient ways to generate fusion energy.

Creating and sustaining fusion reactions — essentially recreating star-like conditions on Earth — is extremely difficult, and Nathan Howard PhD ’12, a principal research scientist at the MIT Plasma Science and Fusion Center (PSFC), thinks it’s one of the most fascinating scientific challenges of our time. “Both the science and the overall promise of fusion as a clean energy source are really interesting. That motivated me to come to grad school [at MIT] and work at the PSFC,” he says.
Howard is a member of the Magnetic Fusion Experiments Integrated Modeling (MFE-IM) group at the PSFC. Along with MFE-IM group leader Pablo Rodriguez-Fernandez, Howard and the team use simulations and machine learning to predict how plasma will behave in a fusion device. MFE-IM and Howard’s research aims to forecast a given technology or configuration’s performance before it’s piloted in an actual fusion environment, allowing for smarter design choices. To ensure their accuracy, these models are continuously validated using data from previous experiments, keeping their simulations grounded in reality.
In a recent open-access paper titled “Prediction of Performance and Turbulence in ITER Burning Plasmas via Nonlinear Gyrokinetic Profile Prediction,” published in the January issue of Nuclear Fusion, Howard explains how he used high-resolution simulations of the swirling structures present in plasma, called turbulence, to confirm that the world’s largest experimental fusion device, currently under construction in Southern France, will perform as expected when switched on. He also demonstrates how a different operating setup could produce nearly the same amount of energy output but with less energy input, a discovery that could positively affect the efficiency of fusion devices in general.
The biggest and best of what’s never been built
Forty years ago, the United States and six other member nations came together to build ITER (Latin for “the way”), a fusion device that, once operational, would yield 500 megawatts of fusion power, and a plasma able to generate 10 times more energy than it absorbs from external heating. The plasma setup designed to achieve these goals — the most ambitious of any fusion experiment — is called the ITER baseline scenario, and as fusion science and plasma physics have progressed, ways to achieve this plasma have been refined using increasingly more powerful simulations like the modeling framework Howard used.
In his work to verify the baseline scenario, Howard used CGYRO, a computer code developed by Howard’s collaborators at General Atomics. CGYRO applies a complex plasma physics model to a set of defined fusion operating conditions. Although it is time-intensive, CGYRO generates very detailed simulations on how plasma behaves at different locations within a fusion device.
The comprehensive CGYRO simulations were then run through the PORTALS framework, a collection of tools originally developed at MIT by Rodriguez-Fernandez. “PORTALS takes the high-fidelity [CGYRO] runs and uses machine learning to build a quick model called a ‘surrogate’ that can mimic the results of the more complex runs, but much faster,” Rodriguez-Fernandez explains. “Only high-fidelity modeling tools like PORTALS give us a glimpse into the plasma core before it even forms. This predict-first approach allows us to create more efficient plasmas in a device like ITER.”
After the first pass, the surrogates’ accuracy was checked against the high-fidelity runs, and if a surrogate wasn’t producing results in line with CGYRO’s, PORTALS was run again to refine the surrogate until it better mimicked CGYRO’s results. “The nice thing is, once you have built a well-trained [surrogate] model, you can use it to predict conditions that are different, with a very much reduced need for the full complex runs.” Once they were fully trained, the surrogates were used to explore how different combinations of inputs might affect ITER’s predicted performance and how it achieved the baseline scenario. Notably, the surrogate runs took a fraction of the time, and they could be used in conjunction with CGYRO to give it a boost and produce detailed results more quickly.
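The refinement loop described above, in which a cheap surrogate is retrained until it matches the expensive simulator, can be sketched schematically. This is not the PORTALS or CGYRO code; it is a minimal 1-D illustration in which a polynomial "surrogate" is fitted to a stand-in "simulator," and every function, tolerance, and number here is an invented assumption.

```python
# Schematic sketch of surrogate-model refinement (invented toy example,
# not the actual PORTALS/CGYRO workflow): run the expensive simulator at
# a few points, fit a cheap surrogate, check the surrogate against the
# simulator, and add a new expensive run wherever the mismatch is worst.
import numpy as np

def expensive_sim(x):
    """Stand-in for a high-fidelity run (e.g., a turbulence code)."""
    return np.sin(3 * x) + 0.5 * x

def fit_surrogate(xs, ys):
    """Cheap polynomial surrogate fitted to the high-fidelity samples."""
    coeffs = np.polyfit(xs, ys, deg=min(len(xs) - 1, 7))
    return np.poly1d(coeffs)

def refine(n_initial=4, tol=1e-2, max_rounds=10):
    xs = list(np.linspace(0.0, 2.0, n_initial))
    ys = [expensive_sim(x) for x in xs]
    for _ in range(max_rounds):
        surrogate = fit_surrogate(np.array(xs), np.array(ys))
        # Compare surrogate and simulator on a dense grid of inputs
        grid = np.linspace(0.0, 2.0, 200)
        errors = np.abs(surrogate(grid) - expensive_sim(grid))
        if errors.max() < tol:
            break                            # surrogate is trusted
        worst = grid[np.argmax(errors)]      # add a sample where it fails
        xs.append(worst)
        ys.append(expensive_sim(worst))
    return surrogate, len(xs)

surrogate, n_runs = refine()
print(f"surrogate built from {n_runs} expensive runs")
```

Once trained, the surrogate can be evaluated at new operating conditions in microseconds, which is what makes the wide exploration of inputs described above affordable.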
“Just dropped in to see what condition my condition was in”
Howard’s work with CGYRO, PORTALS, and surrogates examined a specific combination of operating conditions that had been predicted to achieve the baseline scenario. Those conditions included the magnetic field used, the methods used to control plasma shape, the external heating applied, and many other variables. Using 14 iterations of CGYRO, Howard was able to confirm that the current baseline scenario configuration could achieve 10 times more power output than input into the plasma. Howard says of the results, “The modeling we performed is maybe the highest fidelity possible at this time, and almost certainly the highest fidelity published.”
The 14 iterations of CGYRO used to confirm the plasma performance included running PORTALS to build surrogate models for the input parameters and then tying the surrogates to CGYRO to work more efficiently. It only took three additional iterations of CGYRO to explore an alternate scenario that predicted ITER could produce almost the same amount of energy with about half the input power. The surrogate-enhanced CGYRO model revealed that the temperature of the plasma core — and thus the fusion reactions — wasn’t overly affected by less power input; less power input equals more efficient operation. Howard’s results are also a reminder that there may be other ways to improve ITER’s performance; they just haven’t been discovered yet.
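The efficiency gain can be illustrated with a quick calculation of the fusion gain factor Q (fusion power out divided by external heating power in). The baseline figures (500 megawatts of fusion power at tenfold gain) come from the article; the alternate-scenario numbers below are rough assumptions chosen only to illustrate "almost the same output at about half the input."

```python
# Fusion gain Q = fusion power out / external heating power in.
# Baseline: 500 MW out at Q = 10 implies 50 MW of heating (per the
# article). The alternate-scenario numbers (480 MW out, 25 MW in) are
# illustrative assumptions, not figures from the study.
def fusion_gain(p_fusion_mw: float, p_input_mw: float) -> float:
    return p_fusion_mw / p_input_mw

baseline = fusion_gain(500, 50)   # ITER baseline scenario
alternate = fusion_gain(480, 25)  # similar output, roughly half the input
print(baseline, alternate)        # → 10.0 19.2
```

Halving the input power while holding output nearly constant roughly doubles Q, which is why a modest change in operating setup can matter so much for efficiency.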
Howard reflects, “The fact that we can use the results of this modeling to influence the planning of experiments like ITER is exciting. For years, I’ve been saying that this was the goal of our research, and now that we actually do it — it’s an amazing arc, and really fulfilling.”
Viewing the universe through ripples in space

Physicist Salvatore Vitale is looking for new sources of gravitational waves, to reach beyond what we can learn about the universe through light alone.

In early September 2015, Salvatore Vitale, who was then a research scientist at MIT, stopped home in Italy for a quick visit with his parents after attending a meeting in Budapest. The meeting had centered on the much-anticipated power-up of Advanced LIGO — a system scientists hoped would finally detect a passing ripple in space-time known as a gravitational wave.
Albert Einstein had predicted the existence of these cosmic reverberations nearly 100 years earlier and thought they would be impossible to measure. But scientists including Vitale believed they might have a shot with their new ripple detector, which was scheduled, finally, to turn on in a few days. At the meeting in Budapest, team members were excited, albeit cautious, acknowledging that it could be months or years before the instruments picked up any promising signs.
However, the day after he arrived for his long-overdue visit with his family, Vitale received a huge surprise.
“The next day, we detect the first gravitational wave, ever,” he remembers. “And of course I had to lock myself in a room and start working on it.”
Vitale and his colleagues had to work in secrecy to prevent the news from getting out before they could scientifically confirm the signal and characterize its source. That meant that no one — not even his parents — could know what he was working on. Vitale departed for MIT and promised that he would come back to visit for Christmas.
“And indeed, I fly back home on the 25th of December, and on the 26th we detect the second gravitational wave! At that point I had to swear them to secrecy and tell them what happened, or they would strike my name from the family record,” he says, only partly in jest.
With the family peace restored, Vitale could focus on the path ahead, which suddenly seemed bright with gravitational discoveries. He and his colleagues, as part of the LIGO Scientific Collaboration, announced the detection of the first gravitational wave in February 2016, confirming Einstein’s prediction. For Vitale, the moment also solidified his professional purpose.
“Had LIGO not detected gravitational waves when it did, I would not be where I am today,” Vitale says. “For sure I was very lucky to be doing this at the right time, for me, and for the instrument and the science.”
A few months after, Vitale joined the MIT faculty as an assistant professor of physics. Today, as a recently tenured associate professor, he is working with his students to analyze a bounty of gravitational signals, from Advanced LIGO as well as Virgo (a similar detector in Italy) and KAGRA, in Japan. The combined power of these observatories is enabling scientists to detect at least one gravitational wave a week, which has revealed a host of extreme sources, from merging black holes to colliding neutron stars.
“Gravitational waves give us a different view of the same universe, which could teach us about things that are very hard to see with just photons,” Vitale says.
Random motion
Vitale is from Reggio di Calabria, a small coastal city in the south of Italy, right at “the tip of the boot,” as he says. His family owned and ran a local grocery store, where he spent so much time as a child that he could recite the names of nearly all the wines in the store.
When he was 9 years old, he remembers stopping in at the local newsstand, which also sold used books. He gathered all the money he had in order to purchase two books, both by Albert Einstein. The first was a collection of letters from the physicist to his friends and family. The second was his theory of relativity.
“I read the letters, and then went through the second book and remember seeing these weird symbols that didn’t mean anything to me,” Vitale recalls.
Nevertheless, the kid was hooked, and continued reading up on physics, and later, quantum mechanics. Toward the end of high school, it wasn’t clear if Vitale could go on to college. Large grocery chains had run his parents’ store out of business, and in the process, the family lost their home and were struggling to recover their losses. But with his parents’ support, Vitale applied and was accepted to the University of Bologna, where he went on to earn a bachelor’s and a master’s in theoretical physics, specializing in general relativity and approximating ways to solve Einstein’s equations. He went on to pursue his PhD in theoretical physics at the Pierre and Marie Curie University in Paris.
“Then, things changed in a very, very random way,” he says.
Vitale’s PhD advisor was hosting a conference, and Vitale volunteered to hand out badges and flyers and help guests get their bearings. That first day, one guest drew his attention.
“I see this guy sitting on the floor, kind of banging his head against his computer because he could not connect his Ubuntu computer to the Wi-Fi, which back then was very common,” Vitale says. “So I tried to help him, and failed miserably, but we started chatting.”
The guest happened to be a professor from Arizona who specialized in analyzing gravitational-wave signals. Over the course of the conference, the two got to know each other, and the professor invited Vitale to Arizona to work with his research group. The unexpected opportunity opened a door to gravitational-wave physics that Vitale might have passed by otherwise.
“When I talk to undergrads and how they can plan their career, I say I don’t know that you can,” Vitale says. “The best you can hope for is a random motion that, overall, goes in the right direction.”
High risk, high reward
Vitale spent two months at Embry-Riddle Aeronautical University in Prescott, Arizona, where he analyzed simulated data of gravitational waves. At that time, around 2009, no one had detected actual signals of gravitational waves. The first iteration of the LIGO detectors began observations in 2002 but had so far come up empty.
“Most of my first few years was working entirely with simulated data because there was no real data in the first place. That led a lot of people to leave the field because it was not an obvious path,” Vitale says.
Nevertheless, the work he did in Arizona only piqued his interest, and Vitale chose to specialize in gravitational-wave physics, returning to Paris to finish up his PhD, then going on to a postdoc position at NIKHEF, the Dutch National Institute for Subatomic Physics at the University of Amsterdam. There, he joined on as a member of the Virgo collaboration, making further connections among the gravitational-wave community.
In 2012, he made the move to Cambridge, Massachusetts, where he started as a postdoc at MIT’s LIGO Laboratory. At that time, scientists there were focused on fine-tuning Advanced LIGO’s detectors and simulating the types of signals that they might pick up. Vitale helped to develop an algorithm to search for signals likely to be gravitational waves.
Just before the detectors turned on for the first observing run, Vitale was promoted to research scientist. And as luck would have it, he was working with MIT students and colleagues on one of the two algorithms that picked up what would later be confirmed as the first direct detection of a gravitational wave.
“It was exciting,” Vitale recalls. “Also, it took us several weeks to convince ourselves that it was real.”
In the whirlwind that followed the official announcement, Vitale became an assistant professor in MIT’s physics department. In 2017, in recognition of the discovery, the Nobel Prize in Physics was awarded to three pivotal members of the LIGO team, including MIT’s Rainer Weiss. Vitale and other members of the LIGO-Virgo collaboration attended the Nobel ceremony in Stockholm, Sweden — a moment captured in a photograph displayed proudly in Vitale’s office.
Vitale was promoted to associate professor in 2022 and earned tenure in 2024. Unfortunately, his father passed away shortly before the tenure announcement. “He would have been very proud,” Vitale reflects.
Now, in addition to analyzing gravitational-wave signals from LIGO, Virgo, and KAGRA, Vitale is pushing ahead on plans for an even bigger, better LIGO successor. He is part of the Cosmic Explorer Project, which aims to build a gravitational-wave detector that is similar in design to LIGO but 10 times bigger. At that scale, scientists believe such an instrument could pick up signals from sources that are much farther away in space and time, even close to the beginning of the universe.
Then, scientists could look for never-before-detected sources, such as the very first black holes formed in the universe. They could also search within the same neighborhood as LIGO and Virgo, but with higher precision. In doing so, they might see gravitational signals that Einstein didn’t predict.
“Einstein developed the theory of relativity to explain everything from the motion of Mercury, which circles the sun every 88 days, to objects such as black holes that are 30 times the mass of the sun and move at half the speed of light,” Vitale says. “There’s no reason the same theory should work for both cases, but so far, it seems so, and we’ve found no departure from relativity. But you never know, and you have to keep looking. It’s high risk, for high reward.”
AI model deciphers the code in proteins that tells them where to go
Whitehead Institute and CSAIL researchers created a machine-learning model to predict and generate protein localization, with implications for understanding and remedying disease.
Proteins are the workhorses that keep our cells running, and there are many thousands of types of proteins in our cells, each performing a specialized function. Researchers have long known that the structure of a protein determines what it can do. More recently, researchers have come to appreciate that a protein’s localization is also critical for its function. Cells are full of compartments that help to organize their many denizens. Along with the well-known organelles that adorn the pages of biology textbooks, these spaces also include a variety of dynamic, membrane-less compartments that concentrate certain molecules together to perform shared functions. Knowing where a given protein localizes, and what it co-localizes with, can therefore be useful for better understanding that protein and its role in the healthy or diseased cell, but researchers have lacked a systematic way to predict this information.
Meanwhile, protein structure has been studied for over half a century, culminating in the artificial intelligence tool AlphaFold, which can predict protein structure from a protein’s amino acid code, the linear string of building blocks within it that folds to create its structure. AlphaFold and models like it have become widely used tools in research.
Proteins also contain regions of amino acids that do not fold into a fixed structure, but are instead important for helping proteins join dynamic compartments in the cell. MIT Professor Richard Young and colleagues wondered whether the code in those regions could be used to predict protein localization in the same way that other regions are used to predict structure. Other researchers have discovered some protein sequences that code for protein localization, and some have begun developing predictive models for protein localization. However, researchers did not know whether a protein’s localization to any dynamic compartment could be predicted based on its sequence, nor did they have a comparable tool to AlphaFold for predicting localization.
Now, Young, also a member of the Whitehead Institute for Biological Research; Young lab postdoc Henry Kilgore; Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health in MIT's Department of Electrical Engineering and Computer Science and principal investigator in the Computer Science and Artificial Intelligence Laboratory (CSAIL); and colleagues have built such a model, which they call ProtGPS. In a paper published on Feb. 6 in the journal Science, with first authors Kilgore and Barzilay lab graduate students Itamar Chinn, Peter Mikhael, and Ilan Mitnikov, the cross-disciplinary team debuts their model. The researchers show that ProtGPS can predict to which of 12 known types of compartments a protein will localize, as well as whether a disease-associated mutation will change that localization. Additionally, the research team developed a generative algorithm that can design novel proteins to localize to specific compartments.
“My hope is that this is a first step towards a powerful platform that enables people studying proteins to do their research,” Young says, “and that it helps us understand how humans develop into the complex organisms that they are, how mutations disrupt those natural processes, and how to generate therapeutic hypotheses and design drugs to treat dysfunction in a cell.”
The researchers also validated many of the model’s predictions with experimental tests in cells.
“It really excited me to be able to go from computational design all the way to trying these things in the lab,” Barzilay says. “There are a lot of exciting papers in this area of AI, but 99.9 percent of those never get tested in real systems. Thanks to our collaboration with the Young lab, we were able to test, and really learn how well our algorithm is doing.”
The researchers trained and tested ProtGPS on two batches of proteins with known localizations. They found that it could correctly predict where proteins end up with high accuracy. The researchers also tested how well ProtGPS could predict changes in protein localization based on disease-associated mutations within a protein. Many mutations — changes to the sequence for a gene and its corresponding protein — have been found to contribute to or cause disease based on association studies, but the ways in which the mutations lead to disease symptoms remain unknown.
Figuring out the mechanism for how a mutation contributes to disease is important because then researchers can develop therapies to fix that mechanism, preventing or treating the disease. Young and colleagues suspected that many disease-associated mutations might contribute to disease by changing protein localization. For example, a mutation could make a protein unable to join a compartment containing essential partners.
They tested this hypothesis by feeding ProtGPS more than 200,000 proteins with disease-associated mutations, asking it to predict where those mutated proteins would localize, and measuring how much its prediction for a given protein changed from the normal to the mutated version. A large shift in the prediction indicates a likely change in localization.
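The scoring idea can be illustrated with a toy calculation. Nothing below is ProtGPS's actual code — the probability values and the choice of total-variation distance as the shift measure are assumptions for illustration — but it shows the gist: compare the predicted compartment distribution for the normal protein against the mutant's, and flag large shifts.

```python
import numpy as np

def prediction_shift(p_wt, p_mut):
    """Total-variation distance between two predicted localization
    distributions; a large value flags a likely localization change."""
    p_wt, p_mut = np.asarray(p_wt, float), np.asarray(p_mut, float)
    return 0.5 * np.abs(p_wt - p_mut).sum()

# Made-up predictions over 12 compartments (rows sum to 1):
wild_type = np.array([0.70] + [0.30 / 11] * 11)        # mostly compartment 0
mutant    = np.array([0.05, 0.75] + [0.20 / 10] * 10)  # shifted to compartment 1

print(prediction_shift(wild_type, wild_type))  # 0.0 — identical predictions
print(prediction_shift(wild_type, mutant))     # large shift: likely mis-localization
```

The distance ranges from 0 (identical predictions) to 1 (completely disjoint), so a simple threshold can rank which mutations most warrant experimental follow-up.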
The researchers found many cases in which a disease-associated mutation appeared to change a protein’s localization. They tested 20 examples in cells, using fluorescence to compare where in the cell a normal protein and the mutated version of it ended up. The experiments confirmed ProtGPS’s predictions. Altogether, the findings support the researchers’ suspicion that mis-localization may be an underappreciated mechanism of disease, and demonstrate the value of ProtGPS as a tool for understanding disease and identifying new therapeutic avenues.
“The cell is such a complicated system, with so many components and complex networks of interactions,” Mitnikov says. “It’s super interesting to think that with this approach, we can perturb the system, see the outcome of that, and so drive discovery of mechanisms in the cell, or even develop therapeutics based on that.”
The researchers hope that others begin using ProtGPS in the same way that they use predictive structural models like AlphaFold, advancing various projects on protein function, dysfunction, and disease.
The researchers were excited about the possible uses of their prediction model, but they also wanted their model to go beyond predicting localizations of existing proteins, and allow them to design completely new proteins. The goal was for the model to make up entirely new amino acid sequences that, when formed in a cell, would localize to a desired location. Generating a novel protein that can actually accomplish a function — in this case, the function of localizing to a specific cellular compartment — is incredibly difficult. In order to improve their model’s chances of success, the researchers constrained their algorithm to only design proteins like those found in nature. This is an approach commonly used in drug design, for logical reasons; nature has had billions of years to figure out which protein sequences work well and which do not.
Because of the collaboration with the Young lab, the machine learning team was able to test whether their protein generator worked. The model had good results. In one round, it generated 10 proteins intended to localize to the nucleolus. When the researchers tested these proteins in the cell, they found that four of them strongly localized to the nucleolus, and others may have had slight biases toward that location as well.
“The collaboration between our labs has been so generative for all of us,” Mikhael says. “We’ve learned how to speak each other’s languages, in our case learned a lot about how cells work, and by having the chance to experimentally test our model, we’ve been able to figure out what we need to do to actually make the model work, and then make it work better.”
Being able to generate functional proteins in this way could improve researchers’ ability to develop therapies. For example, if a drug must interact with a target that localizes within a certain compartment, then researchers could use this model to design a drug to also localize there. This should make the drug more effective and decrease side effects, since the drug will spend more time engaging with its target and less time interacting with other molecules, which can cause off-target effects.
The machine learning team members are enthused about the prospect of using what they have learned from this collaboration to design novel proteins with other functions beyond localization, which would expand the possibilities for therapeutic design and other applications.
“A lot of papers show they can design a protein that can be expressed in a cell, but not that the protein has a particular function,” Chinn says. “We actually had functional protein design, and a relatively huge success rate compared to other generative models. That’s really exciting to us, and something we would like to build on.”
All of the researchers involved see ProtGPS as an exciting beginning. They anticipate that their tool will be used to learn more about the roles of localization in protein function and mis-localization in disease. In addition, they are interested in expanding the model’s localization predictions to include more types of compartments, testing more therapeutic hypotheses, and designing increasingly functional proteins for therapies or other applications.
“Now that we know that this protein code for localization exists, and that machine learning models can make sense of that code and even create functional proteins using its logic, that opens up the door for so many potential studies and applications,” Kilgore says.
Study reveals the Phoenix galaxy cluster in the act of extreme cooling
Observations from NASA’s James Webb Space Telescope help to explain the cluster’s mysterious starburst, usually only seen in younger galaxies.
The core of a massive cluster of galaxies appears to be pumping out far more stars than it should. Now researchers at MIT and elsewhere have discovered a key ingredient within the cluster that explains the core’s prolific starburst.
In a new study published in Nature, the scientists report using NASA’s James Webb Space Telescope (JWST) to observe the Phoenix cluster — a sprawling collection of gravitationally bound galaxies that circle a central massive galaxy some 5.8 billion light years from Earth. The cluster is the largest of its kind that scientists have so far observed. For its size and estimated age, the Phoenix should be what astronomers call “red and dead” — long done with any star formation that is characteristic of younger galaxies.
But astronomers previously discovered that the core of the Phoenix cluster appeared surprisingly bright, and the central galaxy seemed to be churning out stars at an extremely vigorous rate. The observations raised a mystery: How was the Phoenix fueling such rapid star formation?
In younger galaxies, the “fuel” for forging stars is in the form of extremely cold and dense clouds of interstellar gas. For the much older Phoenix cluster, it was unclear whether the central galaxy could undergo the extreme cooling of gas that would be required to explain its stellar production, or whether cold gas migrated in from other, younger galaxies.
Now, the MIT team has gained a much clearer view of the cluster’s core, using JWST’s far-reaching, infrared-measuring capabilities. For the first time, they have been able to map regions within the core where there are pockets of “warm” gas. Astronomers have previously seen hints of both very hot gas, and very cold gas, but nothing in between.
The detection of warm gas confirms that the Phoenix cluster is actively cooling and able to generate a huge amount of stellar fuel on its own.
“For the first time we have a complete picture of the hot-to-warm-to-cold phase in star formation, which has really never been observed in any galaxy,” says study lead author Michael Reefe, a physics graduate student in MIT’s Kavli Institute for Astrophysics and Space Research. “There is a halo of this intermediate gas everywhere that we can see.”
“The question now is, why this system?” adds co-author Michael McDonald, associate professor of physics at MIT. “This huge starburst could be something every cluster goes through at some point, but we’re only seeing it happen currently in one cluster. The other possibility is that there’s something divergent about this system, and the Phoenix went down a path that other systems don’t go. That would be interesting to explore.”
Hot and cold
The Phoenix cluster was first spotted in 2010 by astronomers using the South Pole Telescope in Antarctica. The cluster comprises about 1,000 galaxies and lies in the constellation Phoenix, after which it is named. Two years later, McDonald led an effort to focus in on Phoenix using multiple telescopes, and discovered that the cluster’s central galaxy was extremely bright. The unexpected luminosity was due to a firehose of star formation. He and his colleagues estimated that this central galaxy was turning out stars at a staggering rate of about 1,000 per year.
“Previous to the Phoenix, the most star-forming galaxy cluster in the universe had about 100 stars per year, and even that was an outlier. The typical number is one-ish,” McDonald says. “The Phoenix is really offset from the rest of the population.”
Since that discovery, scientists have checked in on the cluster from time to time for clues to explain the abnormally high stellar production. They have observed pockets of both ultrahot gas, at about 1 million degrees Fahrenheit, and regions of extremely cold gas, at 10 kelvins, or 10 degrees above absolute zero.
The presence of very hot gas is no surprise: Most massive galaxies, young and old, host black holes at their cores that emit jets of extremely energetic particles that can continually heat up the galaxy’s gas and dust throughout a galaxy’s lifetime. Only in a galaxy’s early stages does some of this million-degree gas cool dramatically to ultracold temperatures that can then form stars. For the Phoenix cluster’s central galaxy, which should be well past the stage of extreme cooling, the presence of ultracold gas presented a puzzle.
“The question has been: Where did this cold gas come from?” McDonald says. “It’s not a given that hot gas will ever cool, because there could be black hole or supernova feedback. So, there are a few viable options, the simplest being that this cold gas was flung into the center from other nearby galaxies. The other is that this gas somehow is directly cooling from the hot gas in the core.”
Neon signs
For their new study, the researchers worked under a key assumption: If the Phoenix cluster’s cold, star-forming gas is coming from within the central galaxy, rather than from the surrounding galaxies, the central galaxy should have not only pockets of hot and cold gas, but also gas that’s in a “warm” in-between phase. Detecting such intermediate gas would be like catching the gas in the midst of extreme cooling, serving as proof that the core of the cluster was indeed the source of the cold stellar fuel.
Following this reasoning, the team sought to detect any warm gas within the Phoenix core. They looked for gas that was somewhere between 10 kelvins and 1 million kelvins. To search for this Goldilocks gas in a system that is 5.8 billion light years away, the researchers looked to JWST, which is capable of observing farther and more clearly than any observatory to date.
The team used the Medium-Resolution Spectrometer on JWST’s Mid-Infrared Instrument (MIRI), which enables scientists to map light in the infrared spectrum. In July of 2023, the team focused the instrument on the Phoenix core and collected 12 hours’ worth of infrared images. They looked for a specific wavelength that neon gas emits when it has been stripped of a certain number of electrons. This transition occurs at around 300,000 kelvins, or 540,000 degrees Fahrenheit — a temperature that happens to be within the “warm” range that the researchers looked to detect and map. The team analyzed the images and mapped the locations where warm gas was observed within the central galaxy.
“This 300,000-degree gas is like a neon sign that’s glowing in a specific wavelength of light, and we could see clumps and filaments of it throughout our entire field of view,” Reefe says. “You could see it everywhere.”
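As a quick sanity check on the figures quoted above (this snippet is just arithmetic, not anything from the study's analysis pipeline), the kelvin-to-Fahrenheit conversion confirms the rounding:

```python
def kelvin_to_fahrenheit(k):
    """Convert a temperature from kelvins to degrees Fahrenheit."""
    return k * 9 / 5 - 459.67

# The warm-gas transition temperature quoted in the article:
print(round(kelvin_to_fahrenheit(300_000)))  # 539540 — i.e., roughly 540,000 degrees F
```

The 459.67-degree offset is negligible at these temperatures, which is why astronomers quote the round number.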
Based on the extent of warm gas in the core, the team estimates that the central galaxy is undergoing extreme cooling, generating an amount of ultracold gas each year equal to the mass of about 20,000 suns. With that kind of stellar fuel supply, the team says it’s very likely that the central galaxy is indeed generating its own starburst, rather than using fuel from surrounding galaxies.
“I think we understand pretty completely what is going on, in terms of what is generating all these stars,” McDonald says. “We don’t understand why. But this new work has opened a new way to observe these systems and understand them better.”
This work was funded, in part, by NASA.
Mapping mRNA through its life cycle within a cell
Xiao Wang’s studies of how and where RNA is translated could lead to the development of better RNA therapeutics and vaccines.
When Xiao Wang applied to faculty jobs, many of the institutions where she interviewed thought her research proposal — to study the life cycle of RNA in cells and how it influences normal development and disease — was too broad.
However, that was not the case when she interviewed at MIT, where her future colleagues embraced her ideas and encouraged her to be even more bold.
“What I’m doing now is even broader, even bolder than what I initially proposed,” says Wang, who holds joint appointments in the Department of Chemistry and the Broad Institute of MIT and Harvard. “I got great support from all my colleagues in my department and at Broad so that I could get the resources to conduct what I wanted to do. It’s also a demonstration of how brave the students are. There is a really innovative culture and environment here, so the students are not scared by taking on something that might sound weird or unrealistic.”
Wang’s work on RNA brings together students from chemistry, biology, computer science, neuroscience, and other fields. In her lab, research is focused on developing tools that pinpoint where in a given cell different types of messenger RNA are translated into proteins — information that can offer insight into how cells control their fate and what goes wrong in disease, especially in the brain.
“The joint position between MIT Chemistry and the Broad Institute was very attractive to me because I was trained as a chemist, and I would like to teach and recruit students from chemistry. But meanwhile, I also wanted to get exposure to biomedical topics and have collaborators outside chemistry. I can collaborate with biologists, doctors, as well as computational scientists who analyze all these daunting data,” she says.
Imaging RNA
Wang began her career at MIT in 2019, just before the Covid-19 pandemic began. Until that point, she hardly knew anyone in the Boston area, but she found a warm welcome.
“I wasn’t trained at MIT, and I had never lived in Boston before. At first, I had very small social circles, just with my colleagues and my students, but amazingly, even during the pandemic, I never felt socially isolated. I just felt so plugged in already even though it’s a very close, small circle,” she says.
Growing up in China, Wang became interested in science in middle school, when she was chosen to participate in China’s National Olympiad in math and chemistry. That gave her the chance to learn college-level course material, and she ended up winning a gold medal in the nationwide chemistry competition.
“That exposure was enough to draw me into initially mathematics, but later on more into chemistry. That’s how I got interested in a more science-oriented major and then career path,” Wang says.
At Peking University, she majored in chemistry and molecular engineering. There, she worked with Professor Jian Pei, who gave her the opportunity to work independently on her own research project.
“I really like to do research because every day you have a hypothesis, you have a design, and you make it happen. It’s like playing a video game: You have this roughly daily feedback loop. Sometimes it’s a reward, sometimes it’s not. I feel it’s more interesting than taking a class, so I think that made me decide I should apply for graduate school,” she says.
As a graduate student at the University of Chicago, she became interested in RNA while doing a rotation in the lab of Chuan He, a professor of chemistry. He was studying chemical modifications that affect the function of messenger RNA — the molecules that carry protein-building instructions from DNA to ribosomes, where proteins are assembled.
Wang ended up joining He’s lab, where she studied a common mRNA modification known as m6A, which influences how efficiently mRNA is translated into protein and how fast it gets degraded in the cell. She also began to explore how mRNA modifications affect embryonic development. As a model for these studies, she was using zebrafish, which have transparent embryos that develop from fertilized eggs into free-swimming larvae within two days. That got her interested in developing methods that could reveal where different types of RNA were being expressed, by imaging the entire organism.
Such an approach, she soon realized, could also be useful for studying the brain. As a postdoc at Stanford University, she started to develop RNA imaging methods, working with Professor Karl Deisseroth. There are existing techniques for identifying mRNA molecules that are expressed in individual cells, but those don’t offer information about exactly where in the cells different types of mRNA are located. She began developing a technique called STARmap that could accomplish this type of “spatial transcriptomics.”
Using this technique, researchers first use formaldehyde to crosslink all of the mRNA molecules in place. Then, the tissue is washed with fluorescent DNA probes that are complementary to the target mRNA sequences. These probes can then be imaged and sequenced, revealing the locations of each mRNA sequence within a cell. This allows for the visualization of mRNA molecules that encode thousands of different genes within single cells.
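The complementarity step can be sketched in a few lines. This is purely illustrative — real STARmap probe design involves much more, such as probe pairs, amplification, and specificity screening — but it shows what "complementary to the target mRNA sequence" means in practice:

```python
# Watson-Crick pairing from mRNA bases to DNA probe bases
COMPLEMENT = {"A": "T", "U": "A", "G": "C", "C": "G"}

def probe_for(mrna: str) -> str:
    """Return the DNA sequence (5'->3') that hybridizes to an mRNA
    target: the reverse complement, with U pairing to A."""
    return "".join(COMPLEMENT[base] for base in reversed(mrna))

print(probe_for("AUGGCC"))  # "GGCCAT"
```

Because hybridization is antiparallel, the probe is read in reverse order relative to the target, which is why the helper reverses the sequence before complementing each base.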
“I was leveraging my background in the chemistry of RNA to develop this RNA-centered brain mapping technology, which allows you to use RNA expression profiles to define brain cell types and also visualize their spatial architecture,” Wang says.
Tracking the RNA life cycle
Members of Wang’s lab are now working on expanding the capability of the STARmap technique so that it can be used to analyze brain function and brain wiring. They are also developing tools that will allow them to map the entire life cycle of mRNA molecules, from synthesis to translation to degradation, and track how these molecules are transported within a cell during their lifetime.
One of these tools, known as RIBOmap, pinpoints the locations of mRNA molecules as they are being translated at ribosomes. Another tool allows the researchers to measure how quickly mRNA is degraded after being transcribed.
“We are trying to develop a toolkit that will let us visualize every step of the RNA life cycle inside cells and tissues,” Wang says. “These are newer generations of tool development centered around these RNA biological questions.”
One of these central questions is how different cell types control their RNA life cycles differently, and how that affects their differentiation. Differences in RNA control may also be a factor in diseases such as Alzheimer’s. In a 2023 study, Wang and MIT Professor Morgan Sheng used a version of STARmap to discover how cells called microglia become more inflammatory as amyloid-beta plaques form in the brain. Wang’s lab is also pursuing studies of how differences in mRNA translation might affect schizophrenia and other neurological disorders.
“The reason we think there will be a lot of interesting biology to discover is because the formation of neural circuits is through synapses, and synapse formation and learning and memory are strongly associated with localized RNA translation, which involves multiple steps including RNA transport and recycling,” she says.
In addition to investigating those biological questions, Wang is also working on ways to boost the efficiency of mRNA therapeutics and vaccines by changing their chemical modifications or their topological structure.
“Our goal is to create a toolbox and RNA synthesis strategy where we can precisely tune the chemical modification on every particle of RNA,” Wang says. “We want to establish how those modifications will influence how fast mRNA can produce protein, and in which cell types they could be used to more efficiently produce protein.”
MIT method enables ultrafast protein labeling of tens of millions of densely packed cells
Tissue processing advance can label proteins at the level of individual cells across large samples just as fast and uniformly as in dissociated single cells.
A new technology developed at MIT enables scientists to label proteins across millions of individual cells in fully intact 3D tissues with unprecedented speed, uniformity, and versatility. Using the technology, the team was able to richly label large tissue samples in a single day. In their new study in Nature Biotechnology, they also demonstrate that the ability to label proteins with antibodies at the single-cell level across large tissue samples can reveal insights left hidden by other widely used labeling methods.
Profiling the proteins that cells are making is a staple of studies in biology, neuroscience, and related fields because the proteins a cell is expressing at a given moment can reflect the functions the cell is trying to perform or its response to its circumstances, such as disease or treatment. As much as microscopy and labeling technologies have advanced, enabling innumerable discoveries, scientists have still lacked a reliable and practical way of tracking protein expression at the level of millions of densely packed individual cells in whole, 3D intact tissues. Because their work has often been confined to thin tissue sections mounted on slides, scientists haven’t had tools to thoroughly appreciate cellular protein expression in the whole, connected systems in which it occurs.
“Conventionally, investigating the molecules within cells requires dissociating tissue into single cells or slicing it into thin sections, as light and chemicals required for analysis cannot penetrate deep into tissues. Our lab developed technologies such as CLARITY and SHIELD, which enable investigation of whole organs by rendering them transparent, but we now needed a way to chemically label whole organs to gain useful scientific insights,” says study senior author Kwanghun Chung, associate professor in The Picower Institute for Learning and Memory, the departments of Chemical Engineering and Brain and Cognitive Sciences, and the Institute for Medical Engineering and Science at MIT. “If cells within a tissue are not uniformly processed, they cannot be quantitatively compared. In conventional protein labeling, it can take weeks for these molecules to diffuse into intact organs, making uniform chemical processing of organ-scale tissues virtually impossible and extremely slow.”
The new approach, called “CuRVE,” represents a major advance — years in the making — toward that goal by demonstrating a fundamentally new approach to uniformly processing large and dense tissues whole. In the study, the researchers explain how they overcame the technical barriers via an implementation of CuRVE called “eFLASH,” and provide copious vivid demonstrations of the technology, including how it yielded new neuroscience insights.
“This is a significant leap, especially in terms of the actual performance of the technology,” says co-lead author Dae Hee Yun PhD '24, a recent MIT graduate student who is now a senior application engineer at LifeCanvas Technologies, a startup company Chung founded to disseminate the tools his lab invents. The paper’s other lead author is Young-Gyun Park, a former MIT postdoc who’s now an assistant professor at KAIST in South Korea.
Clever chemistry
The fundamental reason why large, 3D tissue samples are hard to label uniformly is that antibodies seep into tissue very slowly, but are quick to bind to their target proteins. The practical effect of this speed mismatch is that simply soaking a brain in a bath of antibodies will mean that proteins are intensely well labeled on the outer edge of the tissue, but virtually none of the antibodies will find cells and proteins deeper inside.
To improve labeling, the team conceived of a way — the conceptual essence of CuRVE — to resolve the speed mismatch. The strategy was to continuously control the pace of antibody binding while at the same time speeding up antibody permeation throughout the tissue. To figure out how this could work and to optimize the approach, they built and ran a sophisticated computational simulation that enabled them to test different settings and parameters, including different binding rates and tissue densities and compositions.
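A stripped-down, one-dimensional toy model — not the authors' simulation; every parameter here is invented for illustration — captures the speed mismatch the paragraph describes: when binding is fast relative to diffusion, label piles up at the tissue surface, while throttling the binding rate lets antibody reach the center before being captured.

```python
import numpy as np

def label_tissue(k_on, n=31, steps=3000, D=1.0, dt=0.1, sites=4.0):
    """Toy 1D reaction-diffusion model: antibody diffuses in from both
    tissue surfaces (held at bath concentration 1) and binds
    irreversibly to a fixed density of target sites along the depth."""
    free = np.zeros(n)    # free antibody concentration
    bound = np.zeros(n)   # bound antibody, i.e., labeled protein
    for _ in range(steps):
        free[0] = free[-1] = 1.0                      # surfaces bathed in antibody
        lap = np.roll(free, 1) + np.roll(free, -1) - 2 * free
        lap[0] = lap[-1] = 0.0                        # pinned edges: no diffusion update
        rate = k_on * free * (sites - bound)          # binding consumes free antibody
        free += D * dt * lap - dt * rate
        bound += dt * rate
    return bound

fast = label_tissue(k_on=1.0)      # untreated: binding far outpaces diffusion
slow = label_tissue(k_on=0.0005)   # throttled binding, the CuRVE idea

# Uniformity = labeling at the tissue center relative to the edge
print(fast[15] / fast[1])  # near 0: the center is barely labeled
print(slow[15] / slow[1])  # much closer to 1: labeling is far more uniform
```

In the fast-binding run, a sharp labeling front creeps inward from each surface and never reaches the center; in the slow-binding run, the antibody permeates first and the whole depth labels at nearly the same (slower) pace, which is why eFLASH pairs throttled binding with electrotransport to recover speed.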
Then they set out to implement their approach in real tissues. Their starting point was a previous technology, called “SWITCH,” in which Chung’s lab devised a way of temporarily turning off antibody binding, letting the antibodies permeate the tissue, and then turning binding back on. As well as it worked, Yun says, the team realized there could be substantial improvements if antibody binding speed could be controlled constantly, but the chemicals used in SWITCH were too harsh for such ongoing treatment. So the team screened a library of similar chemicals to find one that could more subtly and continuously throttle antibody binding speed. They found that deoxycholic acid was an ideal candidate. Using that chemical, the team could not only modulate antibody binding by varying the chemical’s concentration, but also by varying the labeling bath’s pH (or acidity).
Meanwhile, to speed up antibody movement through tissues, the team used another prior technology invented in the Chung Lab: stochastic electrotransport. That technology accelerates the dispersion of antibodies through tissue by applying electric fields.
Implementing this eFLASH system of accelerated dispersion with continuously modifiable binding speed produced the wide variety of labeling successes demonstrated in the paper. In all, the team reported using more than 60 different antibodies to label proteins in cells across large tissue samples.
Notably, each of these specimens was labeled within a day, an “ultra-fast” speed for whole, intact organs, the authors say. Moreover, different preparations did not require new optimization steps.
Valuable visualizations
Among the ways the team put eFLASH to the test was by comparing their labeling to another often-used method: genetically engineering cells to fluoresce when the gene for a protein of interest is being transcribed. The genetic method doesn’t require dispersing antibodies throughout tissue, but it can be prone to discrepancies because gene transcription and actual protein production are not exactly the same thing. Yun adds that while antibody labeling reliably and immediately reports on the presence of a target protein, the genetic method can be much less immediate, and its signal can persist, still fluorescing even when the actual protein is no longer present.
In the study the team employed both kinds of labeling simultaneously in samples. Visualizing the labels that way, they saw many examples in which antibody labeling and genetic labeling differed widely. In some areas of mouse brains, two-thirds of the neurons expressing PV (a protein prominent in certain inhibitory neurons) according to antibody labeling did not show any genetically based fluorescence. In another example, only a tiny fraction of cells that reported expression of a protein called ChAT via the genetic method also reported it via antibody labeling. In other words, there were cases where genetic labeling severely underreported protein expression compared to antibody labeling, and others where it severely overreported it.
The researchers don’t mean to impugn the clear value of using the genetic reporting methods, but instead suggest that also using organ-wide antibody labeling, as eFLASH allows, can help put that data in a richer, more complete context. “Our discovery of large regionalized loss of PV-immunoreactive neurons in healthy adult mice and with high individual variability emphasizes the importance of holistic and unbiased phenotyping,” the authors write.
Or as Yun puts it, the two different kinds of labeling are “two different tools for the job.”
In addition to Yun, Park, and Chung, the paper’s other authors are Jae Hun Cho, Lee Kamentsky, Nicholas Evans, Nicholas DiNapoli, Katherine Xie, Seo Woo Choi, Alexandre Albanese, Yuxuan Tian, Chang Ho Sohn, Qiangge Zhang, Minyoung Kim, Justin Swaney, Webster Guan, Juhyuk Park, Gabi Drummond, Heejin Choi, Luzdary Ruelas, and Guoping Feng.
Funding for the study came from the Burroughs Wellcome Fund, the Searle Scholars Program, a Packard Award in Science and Engineering, a NARSAD Young Investigator Award, the McKnight Foundation, the Freedom Together Foundation, The Picower Institute for Learning and Memory, the NCSOFT Cultural Foundation, and the National Institutes of Health.
3 Questions: What the laws of physics tell us about CO2 removal
In a report on the feasibility of removing carbon dioxide from the atmosphere, physicists say these technologies are “not a magic bullet, but also not a no-go.”
Human activities continue to pump billions of tons of carbon dioxide into the atmosphere each year, raising global temperatures and driving extreme weather events. As countries grapple with climate impacts and ways to significantly reduce carbon emissions, there have been various efforts to advance carbon dioxide removal (CDR) technologies that directly remove carbon dioxide from the air and sequester it for long periods of time.
Unlike carbon capture and storage technologies, which are designed to remove carbon dioxide at point sources such as fossil-fuel plants, CDR aims to remove carbon dioxide molecules that are already circulating in the atmosphere.
A new report by the American Physical Society and led by an MIT physicist provides an overview of the major experimental CDR approaches and determines their fundamental physical limits. The report focuses on methods that have the biggest potential for removing carbon dioxide, at the scale of gigatons per year, which is the magnitude that would be required to have a climate-stabilizing impact.
The new report was commissioned by the American Physical Society's Panel on Public Affairs, and appeared last week in the journal PRX. The report was chaired by MIT professor of physics Washington Taylor, who spoke with MIT News about CDR’s physical limitations and why it’s worth pursuing in tandem with global efforts to reduce carbon emissions.
Q: What motivated you to look at carbon dioxide removal systems from a physical science perspective?
A: The number one thing driving climate change is the fact that we’re taking carbon that has been stuck in the ground for 100 million years, and putting it in the atmosphere, and that’s causing warming. In the last few years there’s been a lot of interest both by the government and private entities in finding technologies to directly remove the CO2 from the air.
How to manage atmospheric carbon is the critical question in dealing with our impact on Earth’s climate. So, it’s very important for us to understand whether we can affect the carbon levels not just by changing our emissions profile but also by directly taking carbon out of the atmosphere. Physics has a lot to say about this because the possibilities are very strongly constrained by thermodynamics, mass issues, and things like that.
Q: What carbon dioxide removal methods did you evaluate?
A: They’re all at an early stage. It's kind of the Wild West out there in terms of the different ways in which companies are proposing to remove carbon from the atmosphere. In this report, we break down CDR processes into two classes: cyclic and once-through.
Imagine we are in a boat that has a hole in the hull and is rapidly taking on water. Of course, we want to plug the hole as quickly as we can. But even once we have fixed the hole, we need to get the water out so we aren't in danger of sinking or getting swamped. And this is particularly urgent if we haven't completely fixed the hole so we still have a slow leak. Now, imagine we have a couple of options for how to get the water out so we don’t sink.
The first is a sponge that we can use to absorb water, that we can then squeeze out and reuse. That’s a cyclic process in the sense that we have some material that we’re using over and over. There are cyclic CDR processes like chemical “direct air capture” (DAC), which acts basically like a sponge. You set up a big system with fans that blow air past some material that captures carbon dioxide. When the material is saturated, you close off the system and then use energy to essentially squeeze out the carbon and store it in a deep repository. Then you can reuse the material, in a cyclic process.
The second class of approaches is what we call “once-through.” In the boat analogy, it would be as if you soak up the leaking water with rolls of paper towels. You let each roll saturate and then throw it overboard, using each roll only once.
There are once-through CDR approaches, like enhanced rock weathering, that are designed to accelerate a natural process, by which certain rocks, when exposed to air, will absorb carbon from the atmosphere. Worldwide, this natural rock weathering is estimated to remove about 1 gigaton of carbon each year. “Enhanced rock weathering” is a CDR approach where you would dig up a lot of this rock, grind it up really small, to less than the width of a human hair, to get the process to happen much faster. The idea is, you dig up something, spread it out, and absorb CO2 in one go.
The key difference between these two processes is that the cyclic process is subject to the second law of thermodynamics and there’s an energy constraint. You can set an actual limit from physics, saying any cyclic process is going to take a certain amount of energy, and that cannot be avoided. For example, we find that for cyclic direct-air-capture (DAC) plants, based on second law limits, the absolute minimum amount of energy you would need to capture a gigaton of carbon is comparable to the total yearly electric energy consumption of the state of Virginia. Systems currently under development use at least three to 10 times this much energy on a per ton basis (and capture tens of thousands, not billions, of tons). Such systems also need to move a lot of air; the air that would need to pass through a DAC system to capture a gigaton of CO2 is comparable to the amount of air that passes through all the air cooling systems on the planet.
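The second-law floor Taylor describes can be sketched with a short calculation. The version below is an idealized mixing-entropy bound per gigaton of CO2 (the ~420 ppm concentration and ambient temperature are our assumed inputs, not figures from the report, and real systems need several times this energy):

```python
import math

R = 8.314      # gas constant, J/(mol*K)
T = 298.0      # ambient temperature, K
x = 420e-6     # CO2 mole fraction in air (~420 ppm, an assumed value)

# Idealized second-law minimum: separating one mole of CO2 from a dilute
# mixture costs at least R*T*ln(1/x) of work (mixing entropy alone).
w_mol = R * T * math.log(1.0 / x)                 # J per mol of CO2
kwh_per_ton = w_mol / 0.044 * 1000.0 / 3.6e6      # kWh per metric ton of CO2
twh_per_gigaton = kwh_per_ton * 1e9 / 1e9         # TWh per gigaton (same number)

print(f"minimum work: {w_mol / 1000:.1f} kJ/mol")
print(f"about {kwh_per_ton:.0f} kWh per ton, or {twh_per_gigaton:.0f} TWh per gigaton of CO2")
```

The result lands near 120 kWh per ton, so on the order of 100 TWh per gigaton of CO2, which is roughly the annual electricity consumption of a mid-size US state, consistent with the comparison made in the interview.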
On the other hand, if you have a once-through process, you could in some respects avoid the energy constraint, but now you’ve got a materials constraint due to the central laws of chemistry. For once-through processes like enhanced rock weathering, that means that if you want to capture a gigaton of CO2, roughly speaking, you’re going to need a billion tons of rock.
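The "billion tons of rock per gigaton of CO2" figure can be checked with simple stoichiometry. As an illustration (our assumed mineral, not a chemistry specified in the report), take forsterite olivine carbonating completely:

```python
# Back-of-the-envelope mass balance for enhanced rock weathering,
# assuming forsterite olivine (Mg2SiO4) carbonates fully:
#   Mg2SiO4 + 2 CO2 -> 2 MgCO3 + SiO2
M_olivine = 2 * 24.305 + 28.086 + 4 * 15.999  # molar mass, g/mol
M_co2 = 12.011 + 2 * 15.999                   # molar mass, g/mol

rock_per_co2 = M_olivine / (2 * M_co2)        # tons of rock per ton of CO2
print(f"{rock_per_co2:.2f} t of olivine per t of CO2 (ideal, full carbonation)")
print(f"roughly {rock_per_co2:.1f} billion tons of rock per gigaton of CO2")
```

Even under this optimistic, complete-carbonation assumption the mass ratio is above one, which is the materials constraint in a nutshell: once-through capture at gigaton scale means moving gigatons of rock.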
So, to capture gigatons of carbon through engineered methods requires tremendous amounts of physical material, air movement, and energy. On the other hand, everything we’re doing to put that CO2 in the atmosphere is extensive too, so large-scale emissions reductions face comparable challenges.
Q: What does the report conclude, in terms of whether and how to remove carbon dioxide from the atmosphere?
A: Our initial prejudice was, CDR is just going to take so much energy, and there’s no way around that because of the second law of thermodynamics, regardless of the method.
But as we discussed, there is this nuance about cyclic versus once-through systems. And there are two points of view that we ended up threading a needle between. One is the view that CDR is a silver bullet, and we’ll just do CDR and not worry about emissions — we’ll just suck it all out of the atmosphere. And that’s not the case. It will be really expensive, and will take a lot of energy and materials to do large-scale CDR. But there’s another view, where people say, don’t even think about CDR. Even thinking about CDR will compromise our efforts toward emissions reductions. The report comes down somewhere in the middle, saying that CDR is not a magic bullet, but also not a no-go.
If we are serious about managing climate change, we will likely want substantial CDR in addition to aggressive emissions reductions. The report concludes that research and development on CDR methods should be selectively and prudently pursued despite the expected cost and energy and material requirements.
At a policy level, the main message is that we need an economic and policy framework that incentivizes emissions reductions and CDR in a common framework; this would naturally allow the market to optimize climate solutions. Since in many cases it is much easier and cheaper to cut emissions than it will likely ever be to remove atmospheric carbon, clearly understanding the challenges of CDR should help motivate rapid emissions reductions.
For me, I’m optimistic in the sense that scientifically we understand what it will take to reduce emissions and to use CDR to bring CO2 levels down to a slightly lower level. Now, it’s really a societal and economic problem. I think humanity has the potential to solve these problems. I hope that we can find common ground so that we can take actions as a society that will benefit both humanity and the broader ecosystems on the planet, before we end up having bigger problems than we already have.
Seeking climate connections among the oceans’ smallest organisms
MIT oceanographer and biogeochemist Andrew Babbin has voyaged around the globe to investigate marine microbes and their influence on ocean health.
Andrew Babbin tries to pack light for work trips. Along with the travel essentials, though, he also brings a roll each of electrical tape, duct tape, lab tape, a pack of cable ties, and some bungee cords.
“It’s my MacGyver kit: You never know when you have to rig something on the fly in the field or fix a broken bag,” Babbin says.
The trips Babbin takes are far out to sea, on month-long cruises, where he works to sample waters off the Pacific coast and out in the open ocean. In remote locations, repair essentials often come in handy, as when Babbin had to zip-tie a wrench to a sampling device to help it sink through an icy Antarctic lake.
Babbin is an oceanographer and marine biogeochemist who studies marine microbes and the ways in which they control the cycling of nitrogen between the ocean and the atmosphere. This exchange helps maintain healthy ocean ecosystems and supports the ocean’s capacity to store carbon.
By combining measurements that he takes in the ocean with experiments in his MIT lab, Babbin is working to understand the connections between microbes and ocean nitrogen, which could in turn help scientists identify ways to maintain the ocean’s health and productivity. His work has taken him to many coastal and open-ocean regions around the globe.
“You really become an oceanographer and an Earth scientist to see the world,” says Babbin, who recently earned tenure as the Cecil and Ida Green Career Development Professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “We embrace the diversity of places and cultures on this planet. To see just a small fraction of that is special.”
A powerful cycle
The ocean has been a constant presence for Babbin since childhood. His family is from Monmouth County, New Jersey, where he and his twin sister grew up playing along the Jersey shore. When they were teenagers, their parents took the kids on family cruise vacations.
“I always loved being on the water,” he says. “My favorite parts of any of those cruises were the days at sea, where you were just in the middle of some ocean basin with water all around you.”
In school, Babbin gravitated to the sciences, and chemistry in particular. After high school, he attended Columbia University, where a visit to the school’s Earth and environmental engineering department catalyzed a realization.
“For me, it was always this excitement about the water and about chemistry, and it was this pop of, ‘Oh wow, it doesn’t have to be one or the other,’” Babbin says.
He chose to major in Earth and environmental engineering, with a concentration in water resources and climate risks. After graduating in 2008, Babbin returned to his home state, where he attended Princeton University and set a course for a PhD in geosciences, with a focus on chemical oceanography and environmental microbiology. His advisor, oceanographer Bess Ward, took Babbin on as a member of her research group and invited him on several month-long cruises to various parts of the eastern tropical Pacific.
“I still remember that first trip,” Babbin recalls. “It was a whirlwind. Everyone else had been to sea a gazillion times and was loading the boat and strapping things down, and I had no idea of anything. And within a few hours, I was doing an experiment as the ship rocked back and forth!”
Babbin learned to deploy sampling canisters overboard, then haul them back up and analyze the seawater inside for signs of nitrogen — an essential nutrient for all living things on Earth.
As it turns out, the plants and animals that depend on nitrogen to survive are unable to take it up from the atmosphere themselves. They require a sort of go-between, in the form of microbes that “fix” nitrogen, converting it from nitrogen gas to more digestible forms. In the ocean, this nitrogen fixation is done by highly specialized microbial species, which work to make nitrogen available to phytoplankton — microscopic plant-like organisms that are the foundation of the marine food chain. Phytoplankton are also a main route by which the ocean absorbs carbon dioxide from the atmosphere.
Microorganisms may also use these biologically available forms of nitrogen for energy under certain conditions, returning nitrogen to the atmosphere. In doing so, these microbes can release nitrous oxide as a byproduct, a potent greenhouse gas that can also catalyze ozone loss in the stratosphere.
Through his graduate work, at sea and in the lab, Babbin became fascinated with the cycling of nitrogen and the role that nitrogen-fixing microbes play in supporting the ocean’s ecosystems and the climate overall. A balance of nitrogen inputs and outputs sustains phytoplankton and maintains the ocean’s ability to soak up carbon dioxide.
“Some of the really pressing questions in ocean biogeochemistry pertain to this cycling of nitrogen,” Babbin says. “Understanding the ways in which this one element cycles through the ocean, and how it is central to ecosystem health and the planet’s climate, has been really powerful.”
In the lab and out to sea
After completing his PhD in 2014, Babbin arrived at MIT as a postdoc in the Department of Civil and Environmental Engineering.
“My first feeling when I came here was, wow, this really is a nerd’s playground,” Babbin says. “I embraced being part of a culture where we seek to understand the world better, while also doing the things we really want to do.”
In 2017, he accepted a faculty position in MIT’s Department of Earth, Atmospheric and Planetary Sciences. He set up his laboratory space, painted in his favorite brilliant orange, on the top floor of the Green Building.
His group uses 3D printers to fabricate microfluidic devices in which they reproduce the conditions of the ocean environment and study microbe metabolism and its effects on marine chemistry. In the field, Babbin has led research expeditions to the Galapagos Islands and parts of the eastern Pacific, where he has collected and analyzed samples of air and water for signs of nitrogen transformations and microbial activity. His new measuring station in the Galapagos is able to infer marine emissions of nitrous oxide across a large swath of the eastern tropical Pacific Ocean. His group has also sailed to southern Cuba, where the researchers studied interactions of microbes in coral reefs.
Most recently, Babbin traveled to Antarctica, where he set up camp next to frozen lakes and plumbed for samples of pristine ice water that he will analyze for genetic remnants of ancient microbes. Such preserved bacterial DNA could help scientists understand how microbes evolved and influenced the Earth’s climate over billions of years.
“Microbes are the terraformers,” Babbin notes. “They have been, since life evolved more than 3 billion years ago. We have to think about how they shape the natural world and how they will respond to the Anthropocene as humans monkey with the planet ourselves.”
Collective action
Babbin is now charting new research directions. In addition to his work at sea and in the lab, he is venturing into engineering, with a new project to design denitrifying capsules. While nitrogen is an essential nutrient for maintaining a marine ecosystem, too much nitrogen, such as from fertilizer that runs off into lakes and streams, can generate blooms of toxic algae. Babbin is looking to design eco-friendly capsules that scrub excess anthropogenic nitrogen from local waterways.
He’s also beginning the process of designing a new sensor to measure low oxygen concentrations in the ocean. As the planet warms, the oceans are losing oxygen, creating “dead zones” where fish cannot survive. While others, including Babbin, have tried to map these oxygen minimum zones, or OMZs, they have done so sporadically, by dropping sensors into the ocean over limited ranges, depths, and times. Babbin’s sensors could potentially provide a more complete map of OMZs, as they would be deployed on wide-ranging, deep-diving, and naturally propulsive vehicles: sharks.
“We want to measure oxygen. Sharks need oxygen. And if you look at where the sharks don’t go, you might have a sense of where the oxygen is not,” says Babbin, who is working with marine biologists on ways to tag sharks with oxygen sensors. “A number of these large pelagic fish move up and down the water column frequently, so you can map the depth to which they dive, and infer something about their behavior. And my suggestion is, you might also infer something about the ocean’s chemistry.”
When he reflects on what stimulates new ideas and research directions, Babbin credits working with others, in his own group and across MIT.
“My best thoughts come from this collective action,” Babbin says. “Particularly because we all have different upbringings and approach things from a different perspective.”
He’s bringing this collaborative spirit to his new role, as a mission director for MIT’s Climate Project. Along with Jesse Kroll, who is a professor of civil and environmental engineering and of chemical engineering, Babbin co-leads one of the project’s six missions: Restoring the Atmosphere, Protecting the Land and Oceans. Babbin and Kroll are planning a number of workshops across campus that they hope will generate new connections, and spark new ideas, particularly around ways to evaluate the effectiveness of different climate mitigation strategies and better assess the impacts of climate on society.
“One area we want to promote is thinking of climate science and climate interventions as two sides of the same coin,” Babbin says. “There’s so much action that’s trying to be catalyzed. But we want it to be the best action. Because we really have one shot at doing this. Time is of the essence.”
David McGee named head of the Department of Earth, Atmospheric and Planetary Sciences
Specialist in paleoclimate and geochronology is known for contributions to education and community.
David McGee, the William R. Kenan Jr. Professor of Earth and Planetary Sciences at MIT, was recently appointed head of the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS), effective Jan. 15. He assumes the role from Professor Robert van der Hilst, the Schlumberger Professor of Earth and Planetary Sciences, who led the department for 13 years.
McGee specializes in applying isotope geochemistry and geochronology to reconstruct Earth’s climate history, helping to ground-truth our understanding of how the climate system responds during periods of rapid change. He has also been instrumental in the growth of the department’s community and culture, having served as EAPS associate department head since 2020.
“David is an amazing researcher who brings crucial, data-based insights to aid our response to climate change,” says Nergis Mavalvala, dean of the School of Science and the Curtis (1963) and Kathleen Marble Professor of Astrophysics. “He is also a committed and caring educator, providing extraordinary investment in his students’ learning experiences, and through his direction of Terrascope, one of our unique first-year learning communities focused on generating solutions to sustainability challenges.”
“I am energized by the incredible EAPS community, by Rob’s leadership over the last 13 years, and by President Kornbluth’s call for MIT to innovate effective and wise responses to climate change,” says McGee. “EAPS has a unique role in this time of reckoning with planetary boundaries — our collective path forward needs to be guided by a deep understanding of the Earth system and a clear sense of our place in the universe.”
McGee’s research seeks to understand the Earth system’s response to past climate changes. Using geochemical analysis and uranium-series dating, McGee and his group investigate stalagmites, ancient lake deposits, and deep-sea sediments from field sites around the world to trace patterns of wind and precipitation, water availability in drylands, and permafrost stability through space and time. Armed with precise chronologies, he aims to shed light on drivers of historical hydroclimatic shifts and provide quantitative tests of climate model performance.
Beyond research, McGee has helped shape numerous Institute initiatives focused on environment, climate, and sustainability, including serving on the MIT Climate and Sustainability Consortium Faculty Steering Committee and the faculty advisory board for the MIT Environment and Sustainability Minor.
McGee also co-chaired MIT's Climate Education Working Group, one of three working groups established under the Institute's Fast Forward climate action plan. The group identified opportunities to strengthen climate- and sustainability-related education at the Institute, from curricular offerings to experiential learning opportunities and beyond.
In April 2023, the working group hosted the MIT Symposium for Advancing Climate Education, featuring talks by McGee and others on how colleges and universities can innovate and help students develop the skills, capacities, and perspectives they’ll need to live, lead, and thrive in a world being remade by the accelerating climate crisis.
“David is reimagining MIT undergraduate education to include meaningful collaborations with communities outside of MIT, teaching students that scientific discovery is important, but not always enough to make impact for society,” says van der Hilst. “He will help shape the future of the department with this vital perspective.”
From the start of his career, McGee has been dedicated to sharing his love of exploration with students. He earned a master’s degree in teaching and spent seven years as a teacher in middle school and high school classrooms before earning his PhD in Earth and environmental sciences from Columbia University. He joined the MIT faculty in 2012, and in 2018 received the Excellence in Mentoring Award from MIT’s Undergraduate Advising and Academic Programming office. In 2015, he became the director of MIT’s Terrascope first-year learning community.
“David's exemplary teaching in Terrascope comes through his understanding that effective solutions must be found where science intersects with community engagement to forge ethical paths forward,” adds van der Hilst. In 2023, for his work with Terrascope, McGee received the school’s highest award, the School of Science Teaching Prize. In 2022, he was named a Margaret MacVicar Faculty Fellow, the highest teaching honor at MIT.
As associate department head, McGee worked alongside van der Hilst and student leaders to promote EAPS community engagement, improve internal supports and reporting structures, and bolster opportunities for students to pursue advanced degrees and STEM careers.
Superconducting materials are similar to the carpool lane in a congested interstate. Like commuters who ride together, electrons that pair up can bypass the regular traffic, moving through the material with zero friction.
But just as with carpools, how easily electron pairs can flow depends on a number of conditions, including the density of pairs that are moving through the material. This “superfluid stiffness,” or the ease with which a current of electron pairs can flow, is a key measure of a material’s superconductivity.
Physicists at MIT and Harvard University have now directly measured superfluid stiffness for the first time in “magic-angle” graphene — a material made from two or more atomically thin sheets of graphene twisted with respect to each other at just the right angle to enable a host of exceptional properties, including unconventional superconductivity.
This superconductivity makes magic-angle graphene a promising building block for future quantum-computing devices, but exactly how the material superconducts is not well-understood. Knowing the material’s superfluid stiffness will help scientists identify the mechanism of superconductivity in magic-angle graphene.
The team’s measurements suggest that magic-angle graphene’s superconductivity is primarily governed by quantum geometry, which refers to the conceptual “shape” of quantum states that can exist in a given material.
The results, which are reported today in the journal Nature, represent the first time scientists have directly measured superfluid stiffness in a two-dimensional material. To do so, the team developed a new experimental method which can now be used to make similar measurements of other two-dimensional superconducting materials.
“There’s a whole family of 2D superconductors that is waiting to be probed, and we are really just scratching the surface,” says study co-lead author Joel Wang, a research scientist in MIT’s Research Laboratory of Electronics (RLE).
The study’s co-authors from MIT’s main campus and MIT Lincoln Laboratory include co-lead author and former RLE postdoc Miuko Tanaka as well as Thao Dinh, Daniel Rodan-Legrain, Sameia Zaman, Max Hays, Bharath Kannan, Aziza Almanakly, David Kim, Bethany Niedzielski, Kyle Serniak, Mollie Schwartz, Jeffrey Grover, Terry Orlando, Simon Gustavsson, Pablo Jarillo-Herrero, and William D. Oliver, along with Kenji Watanabe and Takashi Taniguchi of the National Institute for Materials Science in Japan.
Magic resonance
Since its first isolation and characterization in 2004, graphene has proven to be a wonder substance of sorts. The material is effectively a single, atom-thin sheet of graphite consisting of a precise, chicken-wire lattice of carbon atoms. This simple configuration can exhibit a host of superlative qualities in terms of graphene’s strength, durability, and ability to conduct electricity and heat.
In 2018, Jarillo-Herrero and colleagues discovered that when two graphene sheets are stacked on top of each other, at a precise “magic” angle, the twisted structure — now known as magic-angle twisted bilayer graphene, or MATBG — exhibits entirely new properties, including superconductivity, in which electrons pair up, rather than repelling each other as they do in everyday materials. These so-called Cooper pairs can form a superfluid, with the potential to superconduct, meaning they could move through a material as an effortless, friction-free current.
“But even though Cooper pairs have no resistance, you have to apply some push, in the form of an electric field, to get the current to move,” Wang explains. “Superfluid stiffness refers to how easy it is to get these particles to move, in order to drive superconductivity.”
Today, scientists can measure superfluid stiffness in superconducting materials through methods that generally involve placing a material in a microwave resonator — a device which has a characteristic resonance frequency at which an electrical signal will oscillate, at microwave frequencies, much like a vibrating violin string. If a superconducting material is placed within a microwave resonator, it can change the device’s resonance frequency, and in particular, its “kinetic inductance,” by an amount that scientists can directly relate to the material’s superfluid stiffness.
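The logic of the resonator measurement can be sketched numerically. The component values below are illustrative placeholders, not those of the actual device; the point is that an added kinetic inductance pulls down the resonance frequency by a predictable fraction, and the superfluid stiffness is inversely related to that kinetic inductance.

```python
import math

def resonance(L, C):
    """Resonance frequency of an ideal LC resonator, in Hz."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Illustrative, assumed values: a resonator near 5 GHz
L_geom = 1.0e-9          # geometric inductance, H
C = 1.0e-12              # capacitance, F
f0 = resonance(L_geom, C)

# A superconducting sample coupled to the resonator adds kinetic
# inductance L_k; a stiffer superfluid means a smaller L_k.
L_k = 0.05e-9            # assumed added kinetic inductance, H
f_shifted = resonance(L_geom + L_k, C)

shift = f0 - f_shifted
print(f"bare resonance: {f0 / 1e9:.3f} GHz, shift: {shift / 1e6:.1f} MHz")
print(f"fractional shift {shift / f0:.4f} vs small-shift estimate "
      f"Lk/(2L) = {L_k / (2 * L_geom):.4f}")
```

Measuring the frequency shift thus gives the kinetic inductance, and from it the superfluid stiffness; the experimental challenge described next is making the sample-to-resonator contact clean enough for that shift to be attributable to the MATBG itself.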
However, to date, such approaches have only been compatible with large, thick material samples. The MIT team realized that to measure superfluid stiffness in atomically thin materials like MATBG would require a new approach.
“Compared to MATBG, the typical superconductor that is probed using resonators is 10 to 100 times thicker and larger in area,” Wang says. “We weren’t sure if such a tiny material would generate any measurable inductance at all.”
A captured signal
The challenge to measuring superfluid stiffness in MATBG has to do with attaching the supremely delicate material to the surface of the microwave resonator as seamlessly as possible.
“To make this work, you want to make an ideally lossless — i.e., superconducting — contact between the two materials,” Wang explains. “Otherwise, the microwave signal you send in will be degraded or even just bounce back instead of going into your target material.”
Will Oliver’s group at MIT has been developing techniques to precisely connect extremely delicate, two-dimensional materials, with the goal of building new types of quantum bits for future quantum-computing devices. For their new study, Tanaka, Wang, and their colleagues applied these techniques to seamlessly connect a tiny sample of MATBG to the end of an aluminum microwave resonator. To do so, the group first used conventional methods to assemble MATBG, then sandwiched the structure between two insulating layers of hexagonal boron nitride, to help maintain MATBG’s atomic structure and properties.
“Aluminum is a material we use regularly in our superconducting quantum computing research, for example, aluminum resonators to read out aluminum quantum bits (qubits),” Oliver explains. “So, we thought, why not make most of the resonator from aluminum, which is relatively straightforward for us, and then add a little MATBG to the end of it? It turned out to be a good idea.”
“To contact the MATBG, we etch it very sharply, like cutting through layers of a cake with a very sharp knife,” Wang says. “We expose a side of the freshly-cut MATBG, onto which we then deposit aluminum — the same material as the resonator — to make a good contact and form an aluminum lead.”
The researchers then connected the aluminum leads of the MATBG structure to the larger aluminum microwave resonator. They sent a microwave signal through the resonator and measured the resulting shift in its resonance frequency, from which they could infer the kinetic inductance of the MATBG.
When they converted the measured inductance to a value of superfluid stiffness, however, the researchers found that it was much larger than what conventional theories of superconductivity would have predicted. They had a hunch that the surplus had to do with MATBG’s quantum geometry — the way the quantum states of electrons correlate to one another.
“We saw a tenfold increase in superfluid stiffness compared to conventional expectations, with a temperature dependence consistent with what the theory of quantum geometry predicts,” Tanaka says. “This was a ‘smoking gun’ that pointed to the role of quantum geometry in governing superfluid stiffness in this two-dimensional material.”
“This work represents a great example of how one can use sophisticated quantum technology currently used in quantum circuits to investigate condensed matter systems consisting of strongly interacting particles,” adds Jarillo-Herrero.
This research was funded, in part, by the U.S. Army Research Office, the National Science Foundation, the U.S. Air Force Office of Scientific Research, and the U.S. Under Secretary of Defense for Research and Engineering. The work was carried out, in part, through the use of MIT.nano’s facilities.
A complementary study on magic-angle twisted trilayer graphene (MATTG), conducted by a collaboration between Philip Kim’s group at Harvard University and Jarillo-Herrero’s group at MIT, appears in the same issue of Nature.
How telecommunications cables can image the ground beneath us
By making use of MIT’s existing fiber optic infrastructure, PhD student Hilary Chang imaged the ground underneath campus, a method that can be used to characterize seismic hazards.
When people think about fiber optic cables, it’s usually about how they’re used for telecommunications and accessing the internet. But fiber optic cables — strands of glass or plastic that allow for the transmission of light — can be used for another purpose: imaging the ground beneath our feet.
MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) PhD student Hilary Chang recently used the MIT fiber optic cable network to successfully image the ground underneath campus using a method known as distributed acoustic sensing (DAS). By using existing infrastructure, DAS can be an efficient and effective way to understand ground composition, a critical component for assessing the seismic hazard of areas, or how at risk they are from earthquake damage.
“We were able to extract very nice, coherent waves from the surroundings, and then use that to get some information about the subsurface,” says Chang, the lead author of a recent paper describing her work that was co-authored with EAPS Principal Research Scientist Nori Nakata.
Dark fibers
The MIT campus fiber optic system, installed from 2000 to 2003, services internal data transport between labs and buildings as well as external transport, such as the campus internet (MITNet). There are three major cable hubs on campus from which lines branch out into buildings and underground, much like a spiderweb.
The network allocates a certain number of strands per building, some of which are “dark fibers,” or cables that are not actively transporting information. Each campus fiber hub has redundant backbone cables between them so that, in the event of a failure, network transmission can switch to the dark fibers without loss of network services.
DAS can use existing telecommunication cables and ambient wavefields to extract information about the materials they pass through, making it a valuable tool for places like cities or the ocean floor, where conventional sensors can’t be deployed. Chang, who studies earthquake waveforms and the information we can extract from them, decided to try it out on the MIT campus.
In order to get access to the fiber optic network for the experiment, Chang reached out to John Morgante, a manager of infrastructure project engineering with MIT Information Systems and Technology (IS&T). Morgante has been at MIT since 1998 and was involved with the original project installing the fiber optic network, and was thus able to provide personal insight into selecting a route.
“It was interesting to listen to what they were trying to accomplish with the testing,” says Morgante. While IS&T has worked with students before on various projects involving the school’s network, he said that “in the physical plant area, this is the first that I can remember that we’ve actually collaborated on an experiment together.”
They decided on a path starting from a hub in Building 24 because it was the longest run that stayed entirely underground; above-ground cables that cut through buildings wouldn’t work for the experiment, since they aren’t in contact with the ground and can’t pick up its vibrations. The path ran from east to west, beginning in Building 24, traveling under a section of Massachusetts Ave., along parts of Amherst and Vassar streets, and ending at Building W92.
“[Morgante] was really helpful,” says Chang, describing it as “a very good experience working with the campus IT team.”
Locating the cables
After renting an interrogator, a device that sends laser pulses to sense ambient vibrations along the fiber optic cables, Chang and a group of volunteers were given special access to connect it to the hub in Building 24. They let it run for five days.
To validate the route and make sure that the interrogator was working, Chang conducted a tap test, in which she hit the ground with a hammer several times to record the precise GPS coordinates of the cable. Conveniently, the underground route is marked by maintenance hole covers that serve as good locations to do the test. And, because she needed the environment to be as quiet as possible to collect clean data, she had to do it around 2 a.m.
“I was hitting it next to a dorm and someone yelled ‘shut up,’ probably because the hammer blows woke them up,” Chang recalls. “I was sorry.” Thankfully, she only had to tap at a few spots and could interpolate the locations for the rest.
During the day, Chang and her fellow students — Denzel Segbefia, Congcong Yuan, and Jared Bryan — performed an additional test with geophones, another instrument that detects seismic waves, out on Briggs Field, where the cable passed underneath, to compare the signals. It was an enjoyable experience for Chang; when the data were collected in 2022, the campus was coming out of pandemic measures, with remote classes sometimes still in place. “It was very nice to have everyone on the field and do something with their hands,” she says.
The noise around us
Once Chang collected the data, she was able to see plenty of environmental activity in the waveforms, including the passing of cars, bikes, and even when the train that runs along the northern edge of campus made its nightly passes.
After identifying the noise sources, Chang and Nakata extracted coherent surface waves from the ambient noise and used the wave speeds associated with different frequencies to understand the properties of the ground the cables passed through. Stiffer materials transmit waves at faster speeds, while softer materials slow them down.
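The correlation step behind this kind of ambient-noise analysis can be illustrated with synthetic data. All numbers below (sampling rate, channel spacing, delay, noise level) are invented for the sketch; real DAS processing adds filtering, long stacking windows, and frequency-by-frequency dispersion analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 500.0              # sampling rate in Hz (illustrative)
n = 5000                # samples per channel
delay_samples = 25      # true propagation delay between channels
channel_spacing = 10.0  # meters between the two DAS channels (illustrative)

# Synthetic ambient wavefield: the far channel records the same signal
# as the near channel, delay_samples later, plus incoherent noise.
source = rng.standard_normal(n + delay_samples)
ch_near = source[delay_samples:] + 0.5 * rng.standard_normal(n)
ch_far = source[:n] + 0.5 * rng.standard_normal(n)

# Cross-correlate the channels; the lag of the peak is the travel time
# of the coherent wave between them.
xcorr = np.correlate(ch_far, ch_near, mode="full")
lag = int(np.argmax(xcorr)) - (n - 1)
travel_time = lag / fs                    # seconds
velocity = channel_spacing / travel_time  # meters per second
```

Repeating this measurement band by band yields the frequency-dependent wave speeds from which ground stiffness can be inferred.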
“We found out that the MIT campus is built on soft materials overlaying a relatively hard bedrock,” Chang says, which confirms previously known, albeit lower-resolution, information about the geology of the area that had been collected using seismometers.
Information like this is critical for regions that are susceptible to destructive earthquakes and other seismic hazards, including the Commonwealth of Massachusetts, which has experienced earthquakes as recently as this past week. Areas of Boston and Cambridge characterized by artificial fill from rapid urbanization are especially at risk, because their subsurface structure is more likely to amplify seismic waves and damage buildings. This non-intrusive method for site characterization can help ensure that buildings meet code for the correct seismic hazard level.
“Destructive seismic events do happen, and we need to be prepared,” she says.
Eleven MIT faculty receive Presidential Early Career Awards
Faculty members and additional MIT alumni are among 400 scientists and engineers recognized for outstanding leadership potential.
Eleven MIT faculty, including nine from the School of Engineering and two from the School of Science, were awarded the Presidential Early Career Award for Scientists and Engineers (PECASE). Fifteen additional MIT alumni were also honored.
Established in 1996 by President Bill Clinton, the PECASE is awarded to scientists and engineers “who show exceptional potential for leadership early in their research careers.” The latest recipients were announced by the White House on Jan. 14 under President Joe Biden. Fourteen government agencies recommended researchers for the award.
The MIT faculty and alumni honorees are among 400 scientists and engineers recognized for innovation and scientific contributions. Those from the School of Engineering and School of Science who were honored are:
Additional MIT alumni who were honored include: Ambika Bajpayee MNG ’07, PhD ’15; Katherine Bouman SM ’13, PhD ’17; Walter Cheng-Wan Lee ’95, MNG ’95, PhD ’05; Ismaila Dabo PhD ’08; Ying Diao SM ’10, PhD ’12; Eno Ebong ’99; Soheil Feizi-Khankandi SM ’10, PhD ’16; Mark Finlayson SM ’01, PhD ’12; Chelsea B. Finn ’14; Grace Xiang Gu SM ’14, PhD ’18; David Michael Isaacson PhD ’06, AF ’16; Lewei Lin ’05; Michelle Sander PhD ’12; Kevin Solomon SM ’08, PhD ’12; and Zhiting Tian PhD ’14.
Introducing the MIT Generative AI Impact Consortium
The consortium will bring researchers and industry together to focus on impact.
From crafting complex code to revolutionizing the hiring process, generative artificial intelligence is reshaping industries faster than ever before — pushing the boundaries of creativity, productivity, and collaboration across countless domains.
Enter the MIT Generative AI Impact Consortium, a collaboration between industry leaders and MIT’s top minds. As MIT President Sally Kornbluth highlighted last year, the Institute is poised to address the societal impacts of generative AI through bold collaborations. Building on this momentum and established through MIT’s Generative AI Week and impact papers, the consortium aims to harness AI’s transformative power for societal good, tackling challenges before they shape the future in unintended ways.
“Generative AI and large language models [LLMs] are reshaping everything, with applications stretching across diverse sectors,” says Anantha Chandrakasan, dean of the School of Engineering and MIT’s chief innovation and strategy officer, who leads the consortium. “As we push forward with newer and more efficient models, MIT is committed to guiding their development and impact on the world.”
Chandrakasan adds that the consortium’s vision is rooted in MIT’s core mission. “I am thrilled and honored to help advance one of President Kornbluth’s strategic priorities around artificial intelligence,” he says. “This initiative is uniquely MIT — it thrives on breaking down barriers, bringing together disciplines, and partnering with industry to create real, lasting impact. The collaborations ahead are something we’re truly excited about.”
Developing the blueprint for generative AI’s next leap
The consortium is guided by three pivotal questions, framed by Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and co-chair of the GenAI Dean’s oversight group, that go beyond AI’s technical capabilities and into its potential to transform industries and lives:
Generative AI continues to advance at lightning speed, but its future depends on building a solid foundation. “Everybody recognizes that large language models will transform entire industries, but there's no strong foundation yet around design principles,” says Tim Kraska, associate professor of electrical engineering and computer science in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-faculty director of the consortium.
“Now is a perfect time to look at the fundamentals — the building blocks that will make generative AI more effective and safer to use,” adds Kraska.
"What excites me is that this consortium isn’t just academic research for the distant future — we’re working on problems where our timelines align with industry needs, driving meaningful progress in real time," says Vivek F. Farias, the Patrick J. McGovern (1959) Professor at the MIT Sloan School of Management, and co-faculty director of the consortium.
A “perfect match” of academia and industry
At the heart of the Generative AI Impact Consortium are six founding members: Analog Devices, The Coca-Cola Co., OpenAI, Tata Group, SK Telecom, and TWG Global. Together, they will work hand-in-hand with MIT researchers to accelerate breakthroughs and address industry-shaping problems.
The consortium taps into MIT’s expertise, working across schools and disciplines — led by MIT’s Office of Innovation and Strategy, in collaboration with the MIT Schwarzman College of Computing and all five of MIT’s schools.
“This initiative is the ideal bridge between academia and industry,” says Chandrakasan. “With companies spanning diverse sectors, the consortium brings together real-world challenges, data, and expertise. MIT researchers will dive into these problems to develop cutting-edge models and applications for these different domains.”
Industry partners: Collaborating on AI’s evolution
At the core of the consortium’s mission is collaboration — bringing MIT researchers and industry partners together to unlock generative AI’s potential while ensuring its benefits are felt across society.
Among the founding members is OpenAI, the creator of the generative AI chatbot ChatGPT.
“This type of collaboration between academics, practitioners, and labs is key to ensuring that generative AI evolves in ways that meaningfully benefit society,” says Anna Makanju, vice president of global impact at OpenAI, adding that OpenAI “is eager to work alongside MIT’s Generative AI Consortium to bridge the gap between cutting-edge AI research and the real-world expertise of diverse industries.”
The Coca-Cola Co. recognizes an opportunity to leverage AI innovation on a global scale. “We see a tremendous opportunity to innovate at the speed of AI and, leveraging The Coca-Cola Company's global footprint, make these cutting-edge solutions accessible to everyone,” says Pratik Thakar, global vice president and head of generative AI. “Both MIT and The Coca-Cola Company are deeply committed to innovation, while also placing equal emphasis on the legally and ethically responsible development and use of technology.”
For TWG Global, the consortium offers the ideal environment to share knowledge and drive advancements. “The strength of the consortium is its unique combination of industry leaders and academia, which fosters the exchange of valuable lessons, technological advancements, and access to pioneering research,” says Drew Cukor, head of data and artificial intelligence transformation. Cukor adds that TWG Global “is keen to share its insights and actively engage with leading executives and academics to gain a broader perspective of how others are configuring and adopting AI, which is why we believe in the work of the consortium.”
The Tata Group views the collaboration as a platform to address some of AI’s most pressing challenges. “The consortium enables Tata to collaborate, share knowledge, and collectively shape the future of generative AI, particularly in addressing urgent challenges such as ethical considerations, data privacy, and algorithmic biases,” says Aparna Ganesh, vice president of Tata Sons Ltd.
Similarly, SK Telecom sees its involvement as a launchpad for growth and innovation. “Joining the consortium presents a significant opportunity for SK Telecom to enhance its AI competitiveness in core business areas, including AI agents, AI semiconductors, data centers (AIDC), and physical AI,” says Suk-geun (SG) Chung, SK Telecom executive vice president and chief AI global officer. “By collaborating with MIT and leveraging the SK AI R&D Center as a technology control tower, we aim to forecast next-generation generative AI technology trends, propose innovative business models, and drive commercialization through academic-industrial collaboration.”
Alan Lee, chief technology officer of Analog Devices (ADI), highlights how the consortium bridges key knowledge gaps for both his company and the industry at large. “ADI can’t hire a world-leading expert in every single corner case, but the consortium will enable us to access top MIT researchers and get them involved in addressing problems we care about, as we also work together with others in the industry towards common goals,” he says.
The consortium will host interactive workshops and discussions to identify and prioritize challenges. “It’s going to be a two-way conversation, with the faculty coming together with industry partners, but also industry partners talking with each other,” says Georgia Perakis, the John C Head III Dean (Interim) of the MIT Sloan School of Management and professor of operations management, operations research and statistics, who serves alongside Huttenlocher as co-chair of the GenAI Dean’s oversight group.
Preparing for the AI-enabled workforce of the future
With AI poised to disrupt industries and create new opportunities, one of the consortium’s core goals is to guide that change in a way that benefits both businesses and society.
“When the first commercial digital computers were introduced [the UNIVAC was delivered to the U.S. Census Bureau in 1951], people were worried about losing their jobs,” says Kraska. “And yes, jobs like large-scale, manual data entry clerks and human ‘computers,’ people tasked with doing manual calculations, largely disappeared over time. But the people impacted by those first computers were trained to do other jobs.”
The consortium aims to play a key role in preparing the workforce of tomorrow by educating global business leaders and employees on generative AI’s evolving uses and applications. With the pace of innovation accelerating, leaders face a flood of information and uncertainty.
“When it comes to educating leaders about generative AI, it’s about helping them navigate the complexity of the space right now, because there’s so much hype and hundreds of papers published daily,” says Kraska. “The hard part is understanding which developments could actually have a chance of changing the field and which are just tiny improvements. There's a kind of FOMO [fear of missing out] for leaders that we can help reduce.”
Defining success: Shared goals for generative AI impact
Success within the initiative is defined by shared progress, open innovation, and mutual growth. “Consortium participants recognize, I think, that when I share my ideas with you, and you share your ideas with me, we’re both fundamentally better off,” explains Farias. “Progress on generative AI is not zero-sum, so it makes sense for this to be an open-source initiative.”
While participants may approach success from different angles, they share a common goal of advancing generative AI for broad societal benefit. “There will be many success metrics,” says Perakis. “We’ll educate students, who will be networking with companies. Companies will come together and learn from each other. Business leaders will come to MIT and have discussions that will help all of us, not just the leaders themselves.”
For Analog Devices’ Alan Lee, success is measured in tangible improvements that drive efficiency and product innovation: “For us at ADI, it’s a better, faster quality of experience for our customers, and that could mean better products. It could mean faster design cycles, faster verification cycles, and faster tuning of equipment that we already have or that we’re going to develop for the future. But beyond that, we want to help the world be a better, more efficient place.”
Ganesh highlights success through the lens of real-world application. “Success will also be defined by accelerating AI adoption within Tata companies, generating actionable knowledge that can be applied in real-world scenarios, and delivering significant advantages to our customers and stakeholders,” she says.
Generative AI is no longer confined to isolated research labs — it’s driving innovation across industries and disciplines. At MIT, the technology has become a campus-wide priority, connecting researchers, students, and industry leaders to solve complex challenges and uncover new opportunities. “It's truly an MIT initiative,” says Farias, “one that’s much larger than any individual or department on campus.”
David Darmofal SM ’91, PhD ’93 named vice chancellor for undergraduate and graduate education
Longtime AeroAstro professor brings deep experience with academic and student life.
David L. Darmofal SM ’91, PhD ’93 will serve as MIT’s next vice chancellor for undergraduate and graduate education, effective Feb. 17. Chancellor Melissa Nobles announced Darmofal’s appointment today in a letter to the MIT community.
Darmofal succeeds Ian A. Waitz, who stepped down in May to become MIT’s vice president for research, and Daniel E. Hastings, who has been serving in an interim capacity.
A creative innovator in research-based teaching and learning, Darmofal is the Jerome C. Hunsaker Professor of Aeronautics and Astronautics. Since 2017, he and his wife Claudia have served as heads of house at The Warehouse, an MIT graduate residence.
“Dave knows the ins and outs of education and student life at MIT in a way that few do,” Nobles says. “He’s a head of house, an alum, and the parent of a graduate. Dave will bring decades of first-hand experience to the role.”
“An MIT education is incredibly special, combining passionate students, staff, and faculty striving to use knowledge and discovery to drive positive change for the world,” says Darmofal. “I am grateful for this opportunity to play a part in supporting MIT’s academic mission.”
Darmofal’s leadership experience includes service from 2008 to 2011 as associate and interim department head in the Department of Aeronautics and Astronautics, overseeing undergraduate and graduate programs. He was the AeroAstro director of digital education from 2020 to 2022, including leading the department’s response to remote learning during the Covid-19 pandemic. He currently serves as director of the MIT Aerospace Computational Science and Engineering Laboratory and is a member of the Center for Computational Science and Engineering (CCSE) in the MIT Stephen A. Schwarzman College of Computing.
As an MIT faculty member and administrator, Darmofal has been involved in designing more flexible degree programs, developing open digital-learning opportunities, creating first-year advising seminars, and enhancing professional and personal development opportunities for students. He also contributed his expertise in engineering pedagogy to the development of the Schwarzman College of Computing’s Common Ground efforts, to address the need for computing education across many disciplines.
“MIT students, staff, and faculty share a common bond as problem solvers. Talk to any of us about an MIT education, and you will get an earful on not only what we need to do better, but also how we can actually do it. The Office of the Vice Chancellor can help bring our community of problem solvers together to enable improvements in our academics,” says Darmofal.
Overseeing the academic arm of the Chancellor’s Office, the vice chancellor’s portfolio is extensive. Darmofal will lead professionals across more than a dozen units, covering areas such as recruitment and admissions, financial aid, student systems, advising, professional and career development, pedagogy, experiential learning, and support for MIT’s more than 100 graduate programs. He will also work collaboratively with many of MIT’s student organizations and groups, including with the leaders of the Undergraduate Association and the Graduate Student Council, and administer the relationship with the graduate student union.
“Dave will be a critical part of my office’s efforts to strengthen and expand critical connections across all areas of student life and learning,” Nobles says. She credits the search advisory group, co-chaired by professors Laurie Boyer and Will Tisdale, with setting the right tenor for such an important role and leading a thorough, inclusive process.
Darmofal’s research is focused on computational methods for partial differential equations, especially fluid dynamics. He earned his SM and PhD degrees in aeronautics and astronautics in 1991 and 1993, respectively, from MIT, and his BS in aerospace engineering in 1989 from the University of Michigan. Prior to joining MIT in 1998, he was an assistant professor in the Department of Aerospace Engineering at Texas A&M University from 1995 to 1998. Currently, he is the chair of AeroAstro’s Undergraduate Committee and the graduate officer for the CCSE PhD program.
“I want to echo something that Dan Hastings said recently,” Darmofal says. “We have a lot to be proud of when it comes to an MIT education. It’s more accessible than it has ever been. It’s innovative, with unmatched learning opportunities here and around the world. It’s home to academic research labs that attract the most talented scholars, creators, experimenters, and engineers. And ultimately, it prepares graduates who do good.”
Every cell in your body contains the same genetic sequence, yet each cell expresses only a subset of those genes. These cell-specific gene expression patterns, which ensure that a brain cell is different from a skin cell, are partly determined by the three-dimensional structure of the genetic material, which controls the accessibility of each gene.
MIT chemists have now come up with a new way to determine those 3D genome structures, using generative artificial intelligence. Their technique can predict thousands of structures in just minutes, making it much speedier than existing experimental methods for analyzing the structures.
Using this technique, researchers could more easily study how the 3D organization of the genome affects individual cells’ gene expression patterns and functions.
“Our goal was to try to predict the three-dimensional genome structure from the underlying DNA sequence,” says Bin Zhang, an associate professor of chemistry and the senior author of the study. “Now that we can do that, which puts this technique on par with the cutting-edge experimental techniques, it can really open up a lot of interesting opportunities.”
MIT graduate students Greg Schuette and Zhuohan Lao are the lead authors of the paper, which appears today in Science Advances.
From sequence to structure
Inside the cell nucleus, DNA and proteins form a complex called chromatin, which has several levels of organization, allowing cells to cram 2 meters of DNA into a nucleus that is only one-hundredth of a millimeter in diameter. Long strands of DNA wind around proteins called histones, giving rise to a structure somewhat like beads on a string.
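The packing claim above is easy to check with back-of-the-envelope arithmetic (units converted by hand):

```python
# Length of the DNA versus the width of the nucleus that holds it.
dna_length_m = 2.0
nucleus_diameter_m = 1e-5   # one-hundredth of a millimeter, in meters

# The DNA is roughly 200,000 times longer than the nucleus is wide,
# which is why chromatin needs several levels of folding.
compaction = dna_length_m / nucleus_diameter_m
```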
Chemical tags known as epigenetic modifications can be attached to DNA at specific locations, and these tags, which vary by cell type, affect the folding of the chromatin and the accessibility of nearby genes. These differences in chromatin conformation help determine which genes are expressed in different cell types, or at different times within a given cell.
Over the past 20 years, scientists have developed experimental techniques for determining chromatin structures. One widely used technique, known as Hi-C, works by linking together neighboring DNA strands in the cell’s nucleus. Researchers can then determine which segments are located near each other by shredding the DNA into many tiny pieces and sequencing it.
This method can be used on large populations of cells to calculate an average structure for a section of chromatin, or on single cells to determine structures within that specific cell. However, Hi-C and similar techniques are labor-intensive, and it can take about a week to generate data from one cell.
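The core bookkeeping of Hi-C, turning ligated pairs into a contact map, can be sketched in a few lines. The positions, bin size, and region length below are toy values, not experimental data; real pipelines also filter artifacts and normalize for coverage.

```python
import numpy as np

# Toy ligation products: each pair records two genomic positions
# (in base pairs) that were captured close together in the nucleus.
pairs = [(1_200, 55_000), (1_800, 54_500), (98_000, 99_500),
         (2_500, 97_000), (55_300, 56_100)]

bin_size = 10_000   # genomic resolution of the map (illustrative)
n_bins = 10         # covers a 100 kb toy region

# Build a symmetric contact-frequency matrix: entry (i, j) counts how
# often a fragment in bin i was ligated to a fragment in bin j.
contacts = np.zeros((n_bins, n_bins), dtype=int)
for pos_a, pos_b in pairs:
    i, j = pos_a // bin_size, pos_b // bin_size
    contacts[i, j] += 1
    contacts[j, i] += 1
```

Bins that contact each other far more often than their genomic separation would suggest are the signature of 3D folding.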
To overcome those limitations, Zhang and his students developed a model that takes advantage of recent advances in generative AI to create a fast, accurate way to predict chromatin structures in single cells. The AI model that they designed can quickly analyze DNA sequences and predict the chromatin structures that those sequences might produce in a cell.
“Deep learning is really good at pattern recognition,” Zhang says. “It allows us to analyze very long DNA segments, thousands of base pairs, and figure out what is the important information encoded in those DNA base pairs.”
ChromoGen, the model that the researchers created, has two components. The first component, a deep learning model taught to “read” the genome, analyzes the information encoded in the underlying DNA sequence and chromatin accessibility data, the latter of which is widely available and cell type-specific.
The second component is a generative AI model that predicts physically accurate chromatin conformations, having been trained on more than 11 million chromatin conformations. These data were generated from experiments using Dip-C (a variant of Hi-C) on 16 cells from a line of human B lymphocytes.
When integrated, the first component informs the generative model how the cell type-specific environment influences the formation of different chromatin structures, and this scheme effectively captures sequence-structure relationships. For each sequence, the researchers use their model to generate many possible structures. That’s because DNA is a very disordered molecule, so a single DNA sequence can give rise to many different possible conformations.
“A major complicating factor of predicting the structure of the genome is that there isn’t a single solution that we’re aiming for. There’s a distribution of structures, no matter what portion of the genome you’re looking at. Predicting that very complicated, high-dimensional statistical distribution is something that is incredibly challenging to do,” Schuette says.
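The idea that one sequence maps to a whole distribution of structures can be sketched with a toy sampler. This is a hypothetical stand-in invented for illustration (ChromoGen's actual architecture is a trained deep generative model), representing each conformation as a symmetric pairwise-distance matrix:

```python
import numpy as np

def toy_conformation_sampler(seq_embedding, n_samples=5, n_beads=20, rng=None):
    """Hypothetical stand-in for a generative model: maps one sequence
    embedding to a distribution of chromatin conformations, each returned
    as a symmetric pairwise-distance matrix (n_beads x n_beads)."""
    rng = np.random.default_rng(rng)
    # A deterministic "mean" structure: distance grows with genomic separation
    base = np.abs(np.subtract.outer(np.arange(n_beads), np.arange(n_beads))).astype(float)
    # Let the sequence embedding modulate overall compaction (a toy choice)
    scale = 1.0 + 0.1 * np.tanh(float(np.mean(seq_embedding)))
    samples = []
    for _ in range(n_samples):
        noise = rng.normal(0.0, 0.2, size=(n_beads, n_beads))
        noise = (noise + noise.T) / 2.0   # keep the matrix symmetric
        d = base * scale + noise
        np.fill_diagonal(d, 0.0)          # zero self-distance
        samples.append(d)
    return np.stack(samples)              # shape: (n_samples, n_beads, n_beads)
```

Calling the sampler repeatedly on the same embedding yields different conformations, mirroring the disorder described above.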
Rapid analysis
Once trained, the model can generate predictions on a much faster timescale than Hi-C or other experimental techniques.
“Whereas you might spend six months running experiments to get a few dozen structures in a given cell type, you can generate a thousand structures in a particular region with our model in 20 minutes on just one GPU,” Schuette says.
After training their model, the researchers used it to generate structure predictions for more than 2,000 DNA sequences, then compared them to the experimentally determined structures for those sequences. They found that the structures generated by the model were the same or very similar to those seen in the experimental data.
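One simple way to score a predicted structure against an experimental one is to correlate their pairwise-distance maps; this metric is an assumption chosen for illustration, not necessarily the comparison used in the paper:

```python
import numpy as np

def distance_map_similarity(pred, exp):
    """Pearson correlation between the upper triangles of two pairwise-
    distance maps: an illustrative way to score a predicted chromatin
    structure against an experimentally determined one."""
    pred, exp = np.asarray(pred), np.asarray(exp)
    iu = np.triu_indices_from(pred, k=1)  # ignore diagonal and duplicates
    return float(np.corrcoef(pred[iu], exp[iu])[0, 1])
```

A score near 1.0 indicates that the predicted and experimental maps share the same large-scale organization.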
“We typically look at hundreds or thousands of conformations for each sequence, and that gives you a reasonable representation of the diversity of the structures that a particular region can have,” Zhang says. “If you repeat your experiment multiple times, in different cells, you will very likely end up with a very different conformation. That’s what our model is trying to predict.”
The researchers also found that the model could make accurate predictions for data from cell types other than the one it was trained on. This suggests that the model could be useful for analyzing how chromatin structures differ between cell types, and how those differences affect their function. The model could also be used to explore different chromatin states that can exist within a single cell, and how those changes affect gene expression.
“ChromoGen provides a new framework for AI-driven discovery of genome folding principles and demonstrates that generative AI can bridge genomic and epigenomic features with 3D genome structure, pointing to future work on studying the variation of genome structure and function across a broad range of biological contexts,” says Jian Ma, a professor of computational biology at Carnegie Mellon University, who was not involved in the research.
Another possible application would be to explore how mutations in a particular DNA sequence change the chromatin conformation, which could shed light on how such mutations may cause disease.
“There are a lot of interesting questions that I think we can address with this type of model,” Zhang says.
The researchers have made all of their data and the model available to others who wish to use it.
The research was funded by the National Institutes of Health.
From bench to bedside, and beyond
In the United States and abroad, Matthew Dolan ’81 has served as a leader in immunology and virology.
In medical school, Matthew Dolan ’81 briefly considered specializing in orthopedic surgery because of the materials science nature of the work — but he soon realized that he didn’t have the innate skills required for that type of work.
“I’ll be honest with you — I can’t parallel park,” he jokes. “You can consider a lot of things, but if you find the things that you’re good at and that excite you, you can hopefully move forward with those.”
Dolan certainly has, tackling problems from bench to bedside and beyond. Both in the United States and abroad through the U.S. Air Force, Dolan has emerged as a leader in immunology and virology, and has served as director of the Defense Institute for Medical Operations. He’s worked on everything from foodborne illnesses and Ebola to biological weapons and Covid-19, and has even been a guest speaker on NPR’s “Science Friday.”
“This is fun and interesting, and I believe that, and I work hard to convey that — and it’s contagious,” he says. “You can affect people with that excitement.”
Pieces of the puzzle
Dolan fondly recalls his years at MIT, and is still in touch with many of the “brilliant” and “interesting” friends he made while in Cambridge.
He notes that the challenges that were most rewarding in his career were also the ones MIT had uniquely prepared him for. Dolan, a Course 7 major, naturally took many classes outside of biology as part of his undergraduate studies: organic chemistry proved foundational for understanding toxicology while studying chemical weapons, while outbreaks of pathogens like Legionella, which causes pneumonia and can spread through water systems such as ice machines or air conditioners, are tackled at the interface between public health and ecology.
“I learned that learning can be a high-intensity experience,” Dolan recalls. “You can be aggressive in your learning; you can learn and excel in a wide variety of things and gather up all the knowledge and knowledgeable people to work together towards solutions.”
Dolan, for example, worked in the Amazon Basin in Peru on a public health crisis of a sharp rise in childhood mortality due to malaria. The cause was a few degrees removed from the immediate problem: human agriculture had affected the Amazon’s tributaries, leading to still and stagnant water where before there had been rushing streams and rivers. This change in the environment allowed a certain mosquito species of “avid human biters” to thrive.
“It can be helpful and important for some people to have a really comprehensive and contextual view of scientific problems and biological problems,” he says. “It’s very rewarding to put the pieces in a puzzle like that together.”
Choosing to serve
Dolan says a key to finding meaning in his work, especially during difficult times, is a sentiment from Alsatian polymath and Nobel Peace Prize winner Albert Schweitzer: “The only ones among you who will be really happy are those who will have sought and found how to serve.”
One of Dolan’s early formative experiences was working in the heart of the HIV/AIDS epidemic, at a time when there was no effective treatment. No matter how hard he worked, the patients would still die.
“Failure is not an option — unless you have to fail. You can’t let the failures destroy you,” he says. “There are a lot of other battles out there, and it’s self-indulgent to ignore them and focus on your woe.”
Lasting impacts
Dolan couldn’t pick a favorite country, but notes that he’s always impressed seeing how people value the chance to excel with science and medicine when offered resources and respect. Ultimately, everyone he’s worked with, no matter their differences, was committed to solving problems and improving lives.
Dolan worked in Russia after the Berlin Wall fell, on HIV/AIDS in Moscow and tuberculosis in the Russian Far East. Although relations with Russia are currently tense, to say the least, Dolan remains optimistic for a brighter future.
“People that were staunch adversaries can go on to do well together,” he says. “Sometimes, peace leads to partnership. Remembering that it was once possible gives me great hope.”
Dolan believes his most lasting impact is likely his teaching: Time marches on, and discoveries can be lost to history, but teaching and training people continues and propagates. In addition to guiding the next generation of health-care specialists, Dolan developed programs in laboratory biosafety and biosecurity with the U.S. departments of State and Defense, and taught those programs around the world.
“Working in prevention gives you the chance to take care of process problems before they become people problems — patient care problems,” he says. “I have been so impressed with the courageous and giving people that have worked with me.”
Rare and mysterious cosmic explosion: Gamma-ray burst or jetted tidal disruption event?
Researchers characterize the peculiar Einstein Probe transient EP240408a.
Highly energetic explosions in the sky are commonly attributed to gamma-ray bursts. We now understand that these bursts originate from either the merger of two neutron stars or the collapse of a massive star. In these scenarios, a newborn black hole is formed, emitting a jet that travels at nearly the speed of light. When these jets are directed toward Earth, we can observe them from vast distances — sometimes billions of light-years away — due to a relativistic effect known as Doppler boosting. Over the past decade, thousands of such gamma-ray bursts have been detected.
Since its launch in 2024, the Einstein Probe — an X-ray space telescope developed by the Chinese Academy of Sciences (CAS) in partnership with the European Space Agency (ESA) and the Max Planck Institute for Extraterrestrial Physics — has been scanning the skies for energetic explosions, and in April the telescope observed an unusual event designated EP240408A. Now an international team of astronomers, including Dheeraj Pasham from MIT, Igor Andreoni from the University of North Carolina at Chapel Hill, and Brendan O’Connor from Carnegie Mellon University, among others, has investigated this explosion using a slew of ground-based and space-based telescopes, including NuSTAR, Swift, Gemini, Keck, DECam, VLA, ATCA, and NICER, which was developed in collaboration with MIT.
An open-access report of their findings, published Jan. 27 in The Astrophysical Journal Letters, indicates that the characteristics of this explosion do not match those of typical gamma-ray bursts. Instead, it may represent a rare new class of powerful cosmic explosion — a jetted tidal disruption event, which occurs when a supermassive black hole tears apart a star.
“NICER’s ability to steer to pretty much any part of the sky and monitor for weeks has been instrumental in our understanding of these unusual cosmic explosions,” says Pasham, a research scientist at the MIT Kavli Institute for Astrophysics and Space Research.
While a jetted tidal disruption event is plausible, the researchers say the lack of radio emissions from this jet is puzzling. O’Connor surmises, “EP240408a ticks some of the boxes for several different kinds of phenomena, but it doesn’t tick all the boxes for anything. In particular, the short duration and high luminosity are hard to explain in other scenarios. The alternative is that we are seeing something entirely new!”
According to Pasham, the Einstein Probe is just beginning to scratch the surface of what seems possible. “I’m excited to chase the next weird explosion from the Einstein Probe,” he says, echoing astronomers worldwide who look forward to the prospect of discovering more unusual explosions from the farthest reaches of the cosmos.
Evelina Fedorenko receives Troland Award from National Academy of Sciences
Cognitive neuroscientist is recognized for her groundbreaking discoveries about the brain’s language system.
The National Academy of Sciences (NAS) recently announced that MIT Associate Professor Evelina Fedorenko will receive a 2025 Troland Research Award for her groundbreaking contributions toward understanding the language network in the human brain.
The Troland Research Award is given annually to recognize unusual achievement by early-career researchers within the broad spectrum of experimental psychology.
Fedorenko, an associate professor of brain and cognitive sciences and a McGovern Institute for Brain Research investigator, is interested in how minds and brains create language. Her lab is unpacking the internal architecture of the brain’s language system and exploring the relationship between language and various cognitive, perceptual, and motor systems. Her novel methods combine precise measures of an individual’s brain organization with innovative computational modeling to make fundamental discoveries about the computations that underlie the uniquely human ability for language.
Fedorenko has shown that the language network is selective for language processing over diverse non-linguistic processes that have been argued to share computational demands with language, such as math, music, and social reasoning. Her work has also demonstrated that syntactic processing is not localized to a particular region within the language network, and every brain region that responds to syntactic processing is at least as sensitive to word meanings.
She has also shown that representations from neural network language models, such as ChatGPT, are similar to those in the human language brain areas. Fedorenko also highlighted that although language models can master linguistic rules and patterns, they are less effective at using language in real-world situations. In the human brain, that kind of functional competence is distinct from formal language competence, she says, requiring not just language-processing circuits but also brain areas that store knowledge of the world, reason, and interpret social interactions. Contrary to a prominent view that language is essential for thinking, Fedorenko argues that language is not the medium of thought and is primarily a tool for communication.
Ultimately, Fedorenko’s cutting-edge work is uncovering the computations and representations that fuel language processing in the brain. She will receive the Troland Award this April, during the annual meeting of the NAS in Washington.
Smart carbon dioxide removal yields economic and environmental benefits
MIT study finds a diversified portfolio of carbon dioxide removal options delivers the best return on investment.
Last year the Earth exceeded 1.5 degrees Celsius of warming above preindustrial times, a threshold beyond which wildfires, droughts, floods, and other climate impacts are expected to escalate in frequency, intensity, and lethality. To cap global warming at 1.5 C and avert that scenario, the nearly 200 signatory nations of the Paris Agreement on climate change will need to not only dramatically lower their greenhouse gas emissions, but also take measures to remove carbon dioxide (CO2) from the atmosphere and durably store it at or below the Earth’s surface.
Past analyses of the climate mitigation potential, costs, benefits, and drawbacks of different carbon dioxide removal (CDR) options have focused primarily on three strategies: bioenergy with carbon capture and storage (BECCS), in which CO2-absorbing plant matter is converted into fuels or directly burned to generate energy, with some of the plant’s carbon content captured and then stored safely and permanently; afforestation/reforestation, in which CO2-absorbing trees are planted in large numbers; and direct air carbon capture and storage (DACCS), a technology that captures and separates CO2 directly from ambient air, and injects it into geological reservoirs or incorporates it into durable products.
To provide a more comprehensive and actionable analysis of CDR, a new study by researchers at the MIT Center for Sustainability Science and Strategy (CS3) first expands the option set to include biochar (charcoal produced from plant matter and stored in soil) and enhanced weathering (EW) (spreading finely ground rock particles on land to accelerate storage of CO2 in soil and water). The study then evaluates portfolios of all five options — in isolation and in combination — to assess their capability to meet the 1.5 C goal, and their potential impacts on land, energy, and policy costs.
The study appears in the journal Environmental Research Letters. Aided by their global multi-region, multi-sector Economic Projection and Policy Analysis (EPPA) model, the MIT CS3 researchers produce three key findings.
First, the most cost-effective, low-impact strategy that policymakers can take to achieve global net-zero emissions — an essential step in meeting the 1.5 C goal — is to diversify their CDR portfolio, rather than rely on any single option. This approach minimizes overall cropland and energy consumption, and negative impacts such as increased food insecurity and decreased energy supplies.
Diversifying across multiple CDR options achieves the highest CDR deployment, around 31.5 gigatons of CO2 per year in 2100, while also proving the most cost-effective net-zero strategy. The study identifies BECCS and biochar as the most cost-competitive options for removing CO2 from the atmosphere, followed by EW; DACCS is uncompetitive due to its high capital and energy requirements. While posing logistical and other challenges, biochar and EW have the potential to improve soil quality and productivity across 45 percent of all croplands by 2100.
“Diversifying CDR portfolios is the most cost-effective net-zero strategy because it avoids relying on a single CDR option, thereby reducing and redistributing negative impacts on agriculture, forestry, and other land uses, as well as on the energy sector,” says Solene Chiquier, lead author of the study who was a CS3 postdoc during its preparation.
The second finding: There is no optimal CDR portfolio that will work well at global and national levels. The ideal CDR portfolio for a particular region will depend on local technological, economic, and geophysical conditions. For example, afforestation and reforestation would be of great benefit in places like Brazil, Latin America, and Africa, by not only sequestering carbon in more acreage of protected forest but also helping to preserve planetary well-being and human health.
“In designing a sustainable, cost-effective CDR portfolio, it is important to account for regional availability of agricultural, energy, and carbon-storage resources,” says Sergey Paltsev, CS3 deputy director, MIT Energy Initiative senior research scientist, and supervising co-author of the study. “Our study highlights the need for enhancing knowledge about local conditions that favor some CDR options over others.”
Finally, the MIT CS3 researchers show that delaying large-scale deployment of CDR portfolios could be very costly, leading to considerably higher carbon prices across the globe — a development sure to deter the climate mitigation efforts needed to achieve the 1.5 C goal. They recommend near-term implementation of policy and financial incentives to help fast-track those efforts.
MIT Press’ Direct to Open opens access to over 80 new monographs
Support for D2O in 2025 includes two new three-year, all-consortium commitments from the Florida Virtual Campus and the Big Ten Academic Alliance.
The MIT Press has announced that Direct to Open (D2O) will open access to over 80 new monographs and edited book collections in the spring and fall publishing seasons, after reaching its full funding goal for 2025.
“It has been one of the greatest privileges of my career to contribute to this program and demonstrate that our academic community can unite to publish high-quality open-access monographs at scale,” says Amy Harris, senior manager of library relations and sales at the MIT Press. “We are deeply grateful to all of the consortia that have partnered with us and to the hundreds of libraries that have invested in this program. Together, we are expanding the public knowledge commons in ways that benefit scholars, the academy, and readers around the world.”
Among the highlights from the MIT Press’s fourth D2O funding cycle is a new three-year, consortium-wide commitment from the Florida Virtual Campus (FLVC) and a renewed three-year commitment from the Big Ten Academic Alliance (BTAA). These long-term collaborations will play a pivotal role in supporting the press’s open-access efforts for years to come.
“The Florida Virtual Campus is honored to participate in D2O in order to provide this collection of high-quality scholarship to more than 1.2 million students and faculty at the 28 state colleges and 12 state universities of Florida,” says Elijah Scott, executive director of library services for the Florida Virtual Campus. “The D2O program allows FLVC to make this research collection available to our member libraries while concurrently fostering the larger global aspiration of sustainable and equitable access to information.”
“The libraries of the Big Ten Academic Alliance are committed to supporting the creation of open-access content,” adds Kate McCready, program director for open publishing at the Big Ten Academic Alliance Library. “We're thrilled that our participation in D2O contributes to the opening of this collection, as well as championing the exploration of new models for opening scholarly monographs.”
In 2025, hundreds of libraries renewed their support thanks to the teams at consortia around the world, including the Council of Australasian University Librarians, the CBB Library Consortium, the California Digital Library, the Canadian Research Knowledge Network, CRL/NERL, the Greater Western Library Alliance, Jisc, Lyrasis, MOBIUS, PALCI, SCELC, and the Tri-College Library Consortium.
Launched in 2021, D2O is an innovative sustainable framework for open-access monographs that shifts publishing from a solely market-based, purchase model where individuals and libraries buy single e-books, to a collaborative, library-supported open-access model.
Many other models offer open-access opportunities on a title-by-title basis or within specific disciplines. D2O’s particular advantage is that it enables a press to provide open access to its entire list of scholarly books at scale, embargo-free, during each funding cycle. Thanks to D2O, all MIT Press monograph authors have the opportunity for their work to be published open access, with equal support to traditionally underserved and underfunded disciplines in the social sciences and humanities.
The MIT Press will now turn its attention to its fifth funding cycle and invites libraries and library consortia to participate. For details, please visit the MIT Press website or contact the Library Relations team.
Professor Emeritus Gerald Schneider, discoverer of the “two visual systems,” dies at 84
An MIT affiliate for some 60 years, Schneider was an authority on the relationships between brain structure and behavior.
Gerald E. Schneider, a professor emeritus of psychology and member of the MIT community for over 60 years, passed away on Dec. 11, 2024. He was 84.
Schneider was an authority on the relationships between brain structure and behavior, concentrating on neuronal development, regeneration or altered growth after brain injury, and the behavioral consequences of altered connections in the brain.
Using the Syrian golden hamster as his test subject of choice, Schneider made numerous contributions to the advancement of neuroscience. He laid out the concept of two visual systems — one for locating objects and one for the identification of objects — in a 1969 issue of Science, a milestone in the study of brain-behavior relationships. In 1973, he described a “pruning effect” in the optic tract axons of adult hamsters who had brain lesions early in life. In 2006, his lab reported a new nanobiomedical technology for tissue repair and restoration in Biological Sciences. The paper showed how a designed self-assembling peptide nanofiber scaffold could create a permissive environment for axons, not only to regenerate through the site of an acute injury in the optic tract of hamsters, but also to knit the brain tissue together.
His work shaped the research and thinking of numerous colleagues and trainees. Mriganka Sur, the Newton Professor of Neuroscience and former Department of Brain and Cognitive Sciences (BCS) department head, recalls how Schneider’s paper, “Is it really better to have your brain lesion early? A revision of the ‘Kennard Principle,’” published in 1979 in the journal Neuropsychologia, influenced his work on rewiring retinal projections to the auditory thalamus, which was used to derive principles of functional plasticity in the cortex.
“Jerry was an extremely innovative thinker. His hypothesis of two visual systems — for detailed spatial processing and for movement processing — based on his analysis of visual pathways in hamsters presaged and inspired later work on form and motion pathways in the primate brain,” Sur says. “His description of conservation of axonal arbor during development laid the foundation for later ideas about homeostatic mechanisms that co-regulate neuronal plasticity.”
Institute Professor Ann Graybiel was a colleague of Schneider’s for over five decades. She recalls early in her career being asked by Schneider to help make a map of the superior colliculus.
“I took it as an honor to be asked, and I worked very hard on this, with great excitement. It was my first such mapping, to be followed by much more in the future,” Graybiel recalls. “Jerry was fascinated by animal behavior, and from early on he made many discoveries using hamsters as his main animals of choice. He found that they could play. He found that they could operate in ways that seemed very sophisticated. And, yes, he mapped out pathways in their brains.”
Schneider was raised in Wheaton, Illinois, and graduated from Wheaton College in 1962 with a degree in physics. He was recruited to MIT by Hans-Lukas Teuber, one of the founders of the Department of Psychology, which eventually became the Department of Brain and Cognitive Sciences. Walle Nauta, another founder of the department, taught Schneider neuroanatomy. The pair were deeply influential in shaping his interests in neuroscience and his research.
“He admired them both very much and was very attached to them,” his daughter, Nimisha Schneider, says. “He was an interdisciplinary scholar and he liked that aspect of neuroscience, and he was fascinated by the mysteries of the human brain.”
He completed his PhD in psychology in 1966 and was hired as an assistant professor the following year. He was named an associate professor in 1970, received tenure in 1975, and was appointed a full professor in 1977.
After his retirement in 2017, Schneider remained involved with the Department of BCS. Professor Pawan Sinha brought Schneider to campus for what would be his last on-campus engagement, as part of the “SilverMinds Series,” an initiative in the Sinha Lab to engage with scientists now in their “silver years.”
Schneider’s research made an indelible impact on Sinha, beginning as a graduate student when he was inspired by Schneider’s work linking brain structure and function. His work on nerve regeneration, which merged fundamental science and real-world impact, served as a “North Star” that guided Sinha’s own work as he established his lab as a junior faculty member.
“Even through the sadness of his loss, I am grateful for the inspiring example he has left for us of a life that so seamlessly combined brilliance, kindness, modesty, and tenacity,” Sinha says. “He will be missed.”
Schneider’s life centered around his research and teaching, but he also had many other skills and hobbies. Early in his life, he enjoyed painting, and as he grew older he was drawn to poetry. He was also skilled in carpentry and making furniture. He built the original hamster cages for his lab himself, along with numerous pieces of home furniture and shelving. He enjoyed nature anywhere it could be found, from the bees in his backyard to hiking and visiting state and national parks.
He was a Type 1 diabetic, and at the time of his death, he was nearing the completion of a book on the effects of hypoglycemia on the brain, which his family hopes to have published in the future. He was also the author of “Brain Structure and Its Origins,” published in 2014 by MIT Press.
He is survived by his wife, Aiping; his children, Cybele, Aniket, and Nimisha; and step-daughter Anna. He was predeceased by a daughter, Brenna. He is also survived by eight grandchildren and 10 great-grandchildren. A memorial in his honor was held on Jan. 11 at Saint James Episcopal Church in Cambridge.
Kingdoms collide as bacteria and cells form captivating connections
Studying the pathogen R. parkeri, researchers discovered the first evidence of extensive and stable interkingdom contacts between a pathogen and a eukaryotic organelle.
In biology textbooks, the endoplasmic reticulum is often portrayed as a distinct, compact organelle near the nucleus, and is commonly known to be responsible for protein trafficking and secretion. In reality, the ER is vast and dynamic, spread throughout the cell and able to establish contact and communication with and between other organelles. These membrane contacts regulate processes as diverse as fat metabolism, sugar metabolism, and immune responses.
Exploring how pathogens manipulate and hijack essential processes to promote their own life cycles can reveal much about fundamental cellular functions and provide insight into viable treatment options for understudied pathogens.
New research from the Lamason Lab in the Department of Biology at MIT recently published in the Journal of Cell Biology has shown that Rickettsia parkeri, a bacterial pathogen that lives freely in the cytosol, can interact in an extensive and stable way with the rough endoplasmic reticulum, forming previously unseen contacts with the organelle.
It’s the first known example of a direct interkingdom contact site between an intracellular bacterial pathogen and a eukaryotic membrane.
The Lamason Lab studies R. parkeri as a model for infection of the more virulent Rickettsia rickettsii. R. rickettsii, carried and transmitted by ticks, causes Rocky Mountain spotted fever. Left untreated, the infection can cause symptoms as severe as organ failure and death.
Rickettsia is difficult to study because it is an obligate pathogen, meaning it can only live and reproduce inside living cells, much like a virus. Researchers must get creative to parse out fundamental questions and molecular players in the R. parkeri life cycle, and much remains unclear about how R. parkeri spreads.
Detour to the junction
First author Yamilex Acevedo-Sánchez, a BSG-MSRP-Bio program alum and a graduate student at the time, stumbled across the ER and R. parkeri interactions while trying to observe Rickettsia reaching a cell junction.
The current model for Rickettsia infection involves R. parkeri spreading cell to cell by traveling to the specialized contact sites between cells and being engulfed by the neighboring cell in order to spread. Listeria monocytogenes, which the Lamason Lab also studies, uses actin tails to forcefully propel itself into a neighboring cell. By contrast, R. parkeri can form an actin tail, but loses it before reaching the cell junction. Somehow, R. parkeri is still able to spread to neighboring cells.
After an MIT seminar about the ER’s lesser-known functions, Acevedo-Sánchez developed a cell line to observe whether Rickettsia might be spreading to neighboring cells by hitching a ride on the ER to reach the cell junction.
Instead, she saw an unexpectedly high percentage of R. parkeri surrounded and enveloped by the ER, at a distance of about 55 nanometers. This distance is significant because the membrane contacts that mediate interorganelle communication in eukaryotic cells span 10 to 80 nanometers. The researchers ruled out an immune response, and the sections of the ER interacting with R. parkeri remained connected to the wider ER network.
“I’m of the mind that if you want to learn new biology, just look at cells,” Acevedo-Sánchez says. “Manipulating the organelle that establishes contact with other organelles could be a great way for a pathogen to gain control during infection.”
The stable connections were unexpected because the ER constantly breaks and reforms its contacts, which typically last only seconds or minutes, so seeing the ER stably associate around the bacteria was surprising. And because R. parkeri is a cytosolic pathogen, living freely in the cytosol of the cells it infects, it was also unexpected to see it surrounded by a membrane at all.
Small margins
Acevedo-Sánchez collaborated with the Center for Nanoscale Systems at Harvard University to view her initial observations at higher resolution using focused ion beam scanning electron microscopy. FIB-SEM involves taking a sample of cells and blasting them with a focused ion beam in order to shave off a section of the block of cells. With each layer, a high-resolution image is taken. The result of this process is a stack of images.
From there, Acevedo-Sánchez annotated different areas of the images — such as the mitochondria, Rickettsia, or the ER — and ORS Dragonfly, a machine-learning program, sorted through the thousand or so images to identify those categories. That information was then used to create 3D models of the samples.
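The annotate-classify-reconstruct workflow described above can be sketched in miniature. The snippet below is an illustrative Python sketch, not the lab's actual Dragonfly pipeline; the category labels, array shapes, and toy data are all hypothetical.

```python
import numpy as np

# Hypothetical label scheme; the study's actual classes are set in ORS Dragonfly.
CATEGORIES = {0: "background", 1: "mitochondria", 2: "rickettsia", 3: "er"}

def stack_to_volume(label_slices):
    """Stack per-slice 2D label images (one per FIB-SEM layer) into a 3D volume."""
    return np.stack(label_slices, axis=0)

def category_mask(volume, label):
    """Boolean 3D mask for one annotated category, e.g. the ER."""
    return volume == label

def voxel_fraction(volume, label):
    """Fraction of the imaged volume occupied by a category."""
    return float((volume == label).mean())

# Toy example: 3 slices of a 4x4 field with a small "rickettsia" region.
slices = [np.zeros((4, 4), dtype=int) for _ in range(3)]
slices[1][1:3, 1:3] = 2  # the bacterium appears only in the middle slice
vol = stack_to_volume(slices)
print(vol.shape)                        # (3, 4, 4)
print(round(voxel_fraction(vol, 2), 4))  # 0.0833 (4 of 48 voxels)
```

The boolean masks produced this way are what a 3D rendering tool would turn into surface models of each organelle.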
Acevedo-Sánchez noted that less than 5 percent of R. parkeri formed connections with the ER — but small subpopulations with particular characteristics are known to be critical for R. parkeri infection. R. parkeri can exist in two states: motile, with an actin tail, and nonmotile, without one. Mutants unable to form actin tails cannot spread to adjacent cells — yet even in nonmutants, the percentage of R. parkeri with tails starts at about 2 percent in early infection and never exceeds 15 percent at its height.
The ER only interacts with nonmotile R. parkeri, and those interactions increased 25-fold in mutants that couldn’t form tails.
Creating connections
Co-authors Acevedo-Sánchez, Patrick Woida, and Caroline Anderson also investigated possible ways the connections with the ER are mediated. VAP proteins, which mediate ER interactions with other organelles, are known to be co-opted by other pathogens during infection.
During infection by R. parkeri, VAP proteins were recruited to the bacteria; when VAP proteins were knocked out, the frequency of interactions between R. parkeri and the ER decreased, indicating R. parkeri may be taking advantage of these cellular mechanisms for its own purposes during infection.
Although Acevedo-Sánchez now works as a senior scientist at AbbVie, the Lamason Lab is continuing the work of exploring the molecular players that may be involved, how these interactions are mediated, and whether the contacts affect the host or bacteria’s life cycle.
Senior author and associate professor of biology Rebecca Lamason noted that these potential interactions are particularly interesting because bacteria and mitochondria are thought to have evolved from a common ancestor. The Lamason Lab has been exploring whether R. parkeri could form the same membrane contacts that mitochondria do, although they haven’t proven that yet. So far, R. parkeri is the only cytosolic pathogen that has been observed behaving this way.
“It’s not just bacteria accidentally bumping into the ER. These interactions are extremely stable. The ER is clearly extensively wrapping around the bacterium, and is still connected to the ER network,” Lamason says. “It seems like it has a purpose — what that purpose is remains a mystery.”
A new vaccine approach could help combat future coronavirus pandemics

The nanoparticle-based vaccine shows promise against many variants of SARS-CoV-2, as well as related sarbecoviruses that could jump to humans.

A new experimental vaccine developed by researchers at MIT and Caltech could offer protection against emerging variants of SARS-CoV-2, as well as related coronaviruses, known as sarbecoviruses, that could spill over from animals to humans.
In addition to SARS-CoV-2, the virus that causes COVID-19, sarbecoviruses — a subgenus of coronaviruses — include the virus that led to the outbreak of the original SARS in the early 2000s. Sarbecoviruses that currently circulate in bats and other mammals may also hold the potential to spread to humans in the future.
By attaching up to eight different versions of sarbecovirus receptor-binding proteins (RBDs) to nanoparticles, the researchers created a vaccine that generates antibodies that recognize regions of RBDs that tend to remain unchanged across all strains of the viruses. That makes it much more difficult for viruses to evolve to escape vaccine-induced antibodies.
“This work is an example of how bringing together computation and immunological experiments can be fruitful,” says Arup K. Chakraborty, the John M. Deutch Institute Professor at MIT and a member of MIT’s Institute for Medical Engineering and Science and the Ragon Institute of MIT, MGH and Harvard University.
Chakraborty and Pamela Bjorkman, a professor of biology and biological engineering at Caltech, are the senior authors of the study, which appears today in Cell. The paper’s lead authors are Eric Wang PhD ’24, Caltech postdoc Alexander Cohen, and Caltech graduate student Luis Caldera.
Mosaic nanoparticles
The new study builds on a project begun in Bjorkman’s lab, in which she and Cohen created a “mosaic” 60-mer nanoparticle that presents eight different sarbecovirus RBD proteins. The RBD is the part of the viral spike protein that helps the virus get into host cells. It is also the region of the coronavirus spike protein that is usually targeted by antibodies against sarbecoviruses.
RBDs contain some regions that are variable and can easily mutate to escape antibodies. Most of the antibodies generated by mRNA COVID-19 vaccines target those variable regions because they are more easily accessible. That is one reason why mRNA vaccines need to be updated to keep up with the emergence of new strains.
If researchers could create a vaccine that stimulates production of antibodies that target RBD regions that can’t easily change and are shared across viral strains, it could offer broader protection against a variety of sarbecoviruses.
Such a vaccine would have to stimulate B cells that have receptors (which then become antibodies) targeting those shared, or “conserved,” regions. When B cells circulating in the body encounter a vaccine or other antigen, their B cell receptors, each of which has two “arms,” are more effectively activated if two copies of the antigen are available to bind each arm. The conserved regions tend to be less accessible to B cell receptors, so if a nanoparticle vaccine presents just one type of RBD, B cells with receptors that bind to the more accessible variable regions are most likely to be activated.
To overcome this, the Caltech researchers designed a nanoparticle vaccine that includes 60 copies of RBDs from eight different related sarbecoviruses, which have different variable regions but similar conserved regions. Because eight different RBDs are displayed on each nanoparticle, it’s unlikely that two identical RBDs will end up next to each other. Therefore, when a B cell receptor encounters the nanoparticle immunogen, the B cell is more likely to become activated if its receptor can recognize the conserved regions of the RBD.
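The combinatorial intuition here is easy to check numerically. The sketch below randomly assigns eight RBD types to 60 display sites and counts how often neighboring sites match. The ring geometry is a simplifying assumption (real neighbor relationships on a 60-mer nanoparticle are three-dimensional), but the roughly 1-in-8 matching rate conveys why identical adjacent pairs are uncommon.

```python
import random

def identical_neighbor_fraction(n_sites=60, n_types=8, trials=2000, seed=0):
    """Estimate how often adjacent display sites carry the same RBD type
    when types are assigned at random. Hypothetical ring geometry: each
    site is compared to one neighbor around a ring of attachment sites."""
    rng = random.Random(seed)
    same = total = 0
    for _ in range(trials):
        sites = [rng.randrange(n_types) for _ in range(n_sites)]
        for i in range(n_sites):
            total += 1
            same += sites[i] == sites[(i + 1) % n_sites]
    return same / total

frac = identical_neighbor_fraction()
print(round(frac, 3))  # close to 1/8 = 0.125
```

With eight types mixed at random, only about one neighbor pair in eight is identical, so a B cell receptor bridging two adjacent RBDs usually has to rely on the conserved regions the two proteins share.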
“The concept behind the vaccine is that by co-displaying all these different RBDs on the nanoparticle, you are selecting for B cells that recognize the conserved regions that are shared between them,” Cohen says. “As a result, you’re selecting for B cells that are more cross-reactive. Therefore, the antibody response would be more cross-reactive and you could potentially get broader protection.”
In studies conducted in animals, the researchers showed that this vaccine, known as mosaic-8, produced strong antibody responses against diverse strains of SARS-CoV-2 and other sarbecoviruses and protected from challenges by both SARS-CoV-2 and SARS-CoV (original SARS).
Broadly neutralizing antibodies
After these studies were published in 2021 and 2022, the Caltech researchers teamed up with Chakraborty’s lab at MIT to pursue computational strategies that could allow them to identify RBD combinations that would generate even better antibody responses against a wider variety of sarbecoviruses.
Led by Wang, the MIT researchers pursued two different strategies — first, a large-scale computational screen of many possible mutations to the RBD of SARS-CoV-2, and second, an analysis of naturally occurring RBD proteins from zoonotic sarbecoviruses.
For the first approach, the researchers began with the original strain of SARS-CoV-2 and generated sequences of about 800,000 RBD candidates by making substitutions in locations that are known to affect antibody binding to variable portions of the RBD. Then, they screened those candidates for their stability and solubility, to make sure they could withstand attachment to the nanoparticle and injection as a vaccine.
From the remaining candidates, the researchers chose 10 based on how different their variable regions were. They then used these to create mosaic nanoparticles coated with either two or five different RBD proteins (mosaic-2COM and mosaic-5COM).
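One common way to pick a small set of maximally different sequences, as this down-selection step describes, is a greedy max-min distance search. The sketch below is hypothetical: the study's actual distance metric, sequence representation, and selection criteria are not given in this article, so toy strings stand in for the variable regions of RBD candidates.

```python
def hamming(a, b):
    """Number of positions at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def greedy_diverse_subset(candidates, k):
    """Greedily choose k sequences that are maximally different from one
    another: each new pick maximizes its minimum Hamming distance to the
    sequences already chosen."""
    chosen = [candidates[0]]
    while len(chosen) < k:
        best = max(
            (c for c in candidates if c not in chosen),
            key=lambda c: min(hamming(c, s) for s in chosen),
        )
        chosen.append(best)
    return chosen

# Toy "variable region" strings standing in for RBD candidates.
pool = ["AAAA", "AAAT", "TTTT", "TTAA", "GGGG", "GGTA"]
chosen3 = greedy_diverse_subset(pool, 3)
print(chosen3)  # ['AAAA', 'TTTT', 'GGGG']
```

Applied to hundreds of thousands of screened candidates, the same max-min idea spreads the chosen RBDs as far apart as possible in their variable regions while their conserved regions remain shared.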
In their second approach, instead of mutating the RBD sequences, the researchers chose seven naturally occurring RBD proteins, using computational techniques to select RBDs that were different from each other in regions that are variable, but retained their conserved regions. They used these to create another vaccine, mosaic-7COM.
Once the researchers produced the RBD-nanoparticles, they evaluated each one in mice. After each mouse received three doses of one of the vaccines, the researchers analyzed how well the resulting antibodies bound to and neutralized seven variants of SARS-CoV-2 and four other sarbecoviruses.
They also compared the mosaic nanoparticle vaccines to a nanoparticle displaying only one type of RBD, and to the original mosaic-8 particle from their 2021, 2022, and 2024 studies. They found that mosaic-2COM and mosaic-5COM outperformed both of those vaccines, and mosaic-7COM showed the best responses of all. Mosaic-7COM elicited antibodies that bound to most of the viruses tested, and these antibodies were also able to prevent the viruses from entering cells.
The researchers saw similar results when they tested the new vaccines in mice that were previously vaccinated with a bivalent mRNA COVID-19 vaccine.
“We wanted to simulate the fact that people have already been infected and/or vaccinated against SARS-CoV-2,” Wang says. “In pre-vaccinated mice, mosaic-7COM is consistently giving the highest binding titers for both SARS-CoV-2 variants and other sarbecoviruses.”
Bjorkman’s lab has received funding from the Coalition for Epidemic Preparedness Innovations to do a clinical trial of the mosaic-8 RBD-nanoparticle. They also hope to move mosaic-7COM, which performed better in the current study, into clinical trials. The researchers plan to work on redesigning the vaccines so that they could be delivered as mRNA, which would make them easier to manufacture.
The research was funded by a National Science Foundation Graduate Research Fellowship, the National Institutes of Health, Wellcome Leap, the Bill and Melinda Gates Foundation, the Coalition for Epidemic Preparedness Innovations, and the Caltech Merkin Institute for Translational Research.
Toward video generative models of the molecular world

Starting with a single frame in a simulation, a new system uses generative AI to emulate the dynamics of molecules, connecting static molecular structures and developing blurry pictures into videos.

As the capabilities of generative AI models have grown, you've probably seen how they can transform simple text prompts into hyperrealistic images and even extended video clips.
More recently, generative AI has shown potential in helping chemists and biologists explore static molecules, like proteins and DNA. Models like AlphaFold can predict molecular structures to accelerate drug discovery, and the MIT-assisted “RFdiffusion,” for example, can help design new proteins. One challenge, though, is that molecules are constantly moving and jiggling, which is important to model when constructing new proteins and drugs. Simulating these motions on a computer using physics — a technique known as molecular dynamics — can be very expensive, requiring billions of time steps on supercomputers.
As a step toward simulating these behaviors more efficiently, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and Department of Mathematics researchers have developed a generative model that learns from prior data. The team’s system, called MDGen, can take a frame of a 3D molecule and simulate what will happen next like a video, connect separate stills, and even fill in missing frames. By hitting the “play button” on molecules, the tool could potentially help chemists design new molecules and closely study how well their drug prototypes for cancer and other diseases would interact with the molecular structure it intends to impact.
Co-lead author Bowen Jing SM ’22 says that MDGen is an early proof of concept, but it suggests the beginning of an exciting new research direction. “Early on, generative AI models produced somewhat simple videos, like a person blinking or a dog wagging its tail,” says Jing, a PhD student at CSAIL. “Fast forward a few years, and now we have amazing models like Sora or Veo that can be useful in all sorts of interesting ways. We hope to instill a similar vision for the molecular world, where dynamics trajectories are the videos. For example, you can give the model the first and 10th frame, and it’ll animate what’s in between, or it can remove noise from a molecular video and guess what was hidden.”
The researchers say that MDGen represents a paradigm shift from previous comparable works with generative AI in a way that enables much broader use cases. Previous approaches were “autoregressive,” meaning they relied on the previous still frame to build the next, starting from the very first frame to create a video sequence. In contrast, MDGen generates the frames in parallel with diffusion. This means MDGen can be used to, for example, connect frames at the endpoints, or “upsample” a low frame-rate trajectory in addition to pressing play on the initial frame.
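The difference between these use cases can be pictured as conditioning masks over a trajectory's frames: which frames the model observes, and which it must generate in parallel. The sketch below is schematic, not MDGen's actual interface; the mode names and stride parameter are illustrative.

```python
def conditioning_mask(n_frames, mode, stride=10):
    """Return a list marking observed (True) vs. to-be-generated (False)
    frames for three trajectory-generation tasks:
      forward     - simulate ahead from the first frame ("press play")
      interpolate - connect two endpoint stills
      upsample    - fill in between frames of a low frame-rate trajectory
    """
    if mode == "forward":
        return [i == 0 for i in range(n_frames)]
    if mode == "interpolate":
        return [i in (0, n_frames - 1) for i in range(n_frames)]
    if mode == "upsample":
        return [i % stride == 0 for i in range(n_frames)]
    raise ValueError(f"unknown mode: {mode}")

mask = conditioning_mask(11, "interpolate")
print([i for i, seen in enumerate(mask) if seen])  # [0, 10]
```

An autoregressive model can only handle the "forward" mask; generating all frames jointly with diffusion is what lets a single model also handle interpolation and upsampling.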
This work was presented in a paper at the Conference on Neural Information Processing Systems (NeurIPS) this past December. Last summer, it received an award for its potential commercial impact at the International Conference on Machine Learning’s ML4LMS Workshop.
Some small steps forward for molecular dynamics
In experiments, Jing and his colleagues found that MDGen’s simulations were similar to running the physical simulations directly, while producing trajectories 10 to 100 times faster.
The team first tested their model’s ability to take in a 3D frame of a molecule and generate the next 100 nanoseconds. Their system pieced together successive 10-nanosecond blocks for these generations to reach that duration. The team found that MDGen was able to compete with the accuracy of a baseline model, while completing the video generation process in roughly a minute — a mere fraction of the three hours that it took the baseline model to simulate the same dynamic.
When given the first and last frame of a one-nanosecond sequence, MDGen also modeled the steps in between. The researchers’ system demonstrated a degree of realism in over 100,000 different predictions: It simulated more likely molecular trajectories than its baselines on clips shorter than 100 nanoseconds. In these tests, MDGen also indicated an ability to generalize on peptides it hadn’t seen before.
MDGen’s capabilities also include simulating frames within frames, “upsampling” the steps between each nanosecond to capture faster molecular phenomena more adequately. It can even “inpaint” structures of molecules, restoring information about them that was removed. These features could eventually be used by researchers to design proteins based on a specification of how different parts of the molecule should move.
Toying around with protein dynamics
Jing and co-lead author Hannes Stärk say that MDGen is an early sign of progress toward generating molecular dynamics more efficiently. Still, they lack the data to make these models immediately impactful in designing drugs or molecules that induce the movements chemists will want to see in a target structure.
The researchers aim to scale MDGen from modeling molecules to predicting how proteins will change over time. “Currently, we’re using toy systems,” says Stärk, also a PhD student at CSAIL. “To enhance MDGen’s predictive capabilities to model proteins, we’ll need to build on the current architecture and data available. We don’t have a YouTube-scale repository for those types of simulations yet, so we’re hoping to develop a separate machine-learning method that can speed up the data collection process for our model.”
For now, MDGen presents an encouraging path forward in modeling molecular changes invisible to the naked eye. Chemists could also use these simulations to delve deeper into the behavior of medicine prototypes for diseases like cancer or tuberculosis.
“Machine learning methods that learn from physical simulation represent a burgeoning new frontier in AI for science,” says Bonnie Berger, MIT Simons Professor of Mathematics, CSAIL principal investigator, and senior author on the paper. “MDGen is a versatile, multipurpose modeling framework that connects these two domains, and we’re very excited to share our early models in this direction.”
“Sampling realistic transition paths between molecular states is a major challenge,” says fellow senior author Tommi Jaakkola, who is the MIT Thomas Siebel Professor of electrical engineering and computer science and the Institute for Data, Systems, and Society, and a CSAIL principal investigator. “This early work shows how we might begin to address such challenges by shifting generative modeling to full simulation runs.”
Researchers across the field of bioinformatics have heralded this system for its ability to simulate molecular transformations. “MDGen models molecular dynamics simulations as a joint distribution of structural embeddings, capturing molecular movements between discrete time steps,” says Chalmers University of Technology associate professor Simon Olsson, who wasn’t involved in the research. “Leveraging a masked learning objective, MDGen enables innovative use cases such as transition path sampling, drawing analogies to inpainting trajectories connecting metastable phases.”
The researchers’ work on MDGen was supported, in part, by the National Institute of General Medical Sciences, the U.S. Department of Energy, the National Science Foundation, the Machine Learning for Pharmaceutical Discovery and Synthesis Consortium, the Abdul Latif Jameel Clinic for Machine Learning in Health, the Defense Threat Reduction Agency, and the Defense Advanced Research Projects Agency.
MIT physicists have created a new ultrathin, two-dimensional material with unusual magnetic properties that initially surprised the researchers before they went on to solve the complicated puzzle behind those properties’ emergence. As a result, the work introduces a new platform for studying how materials behave at the most fundamental level — the world of quantum physics.
Ultrathin materials made of a single layer of atoms have riveted scientists’ attention since the discovery of the first such material — graphene, composed of carbon — about 20 years ago. Among other advances since then, researchers have found that stacking individual sheets of the 2D materials, and sometimes twisting them at a slight angle to each other, can give them new properties, from superconductivity to magnetism. Enter the field of twistronics, which was pioneered at MIT by Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics at MIT.
In the current research, reported in the Jan. 7 issue of Nature Physics, the scientists, led by Jarillo-Herrero, worked with three layers of graphene. Each layer was twisted on top of the next at the same angle, creating a helical structure akin to the DNA helix or a hand of three cards that are fanned apart.
“Helicity is a fundamental concept in science, from basic physics to chemistry and molecular biology. With 2D materials, one can create special helical structures, with novel properties which we are just beginning to understand. This work represents a new twist in the field of twistronics, and the community is very excited to see what else we can discover using this helical materials platform!” says Jarillo-Herrero, who is also affiliated with MIT’s Materials Research Laboratory.
Do the twist
Twistronics can lead to new properties in ultrathin materials because arranging sheets of 2D materials in this way results in a unique pattern called a moiré lattice. And a moiré pattern, in turn, has an impact on the behavior of electrons.
“It changes the spectrum of energy levels available to the electrons and can provide the conditions for interesting phenomena to arise,” says Sergio C. de la Barrera, one of three co-first authors of the recent paper. De la Barrera, who conducted the work while a postdoc at MIT, is now an assistant professor at the University of Toronto.
In the current work, the helical structure created by the three graphene layers forms two moiré lattices. One is created by the first two overlapping sheets; the other is formed between the second and third sheets.
The two moiré patterns together form a third moiré, a supermoiré, or “moiré of a moiré,” says Li-Qiao Xia, a graduate student in MIT physics and another of the three co-first authors of the Nature Physics paper. “It’s like a moiré hierarchy.” While the first two moiré patterns are only nanometers, or billionths of a meter, in scale, the supermoiré appears at a scale of hundreds of nanometers superimposed over the other two. You can only see it if you zoom out to get a much wider view of the system.
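The length scales quoted here follow from the standard small-angle moiré relation, λ ≈ a / (2 sin(θ/2)), where a is the atomic lattice constant and θ the twist angle. A quick check with illustrative numbers (the article does not give the paper's actual twist angles):

```python
import math

def moire_wavelength(lattice_const_nm, twist_deg):
    """Moiré period for two identical lattices twisted by a small angle:
    lambda = a / (2 * sin(theta / 2))."""
    theta = math.radians(twist_deg)
    return lattice_const_nm / (2 * math.sin(theta / 2))

a_graphene = 0.246  # nm, graphene's lattice constant
lam = moire_wavelength(a_graphene, 1.0)
print(round(lam, 1))  # 14.1 nm for a hypothetical 1-degree twist
```

A sub-nanometer atomic lattice thus produces moiré patterns of roughly ten nanometers, and a small mismatch between the two moiré lattices repeats the same magnification again, pushing the supermoiré to hundreds of nanometers.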
A major surprise
The physicists expected to observe signatures of this moiré hierarchy. They got a huge surprise, however, when they applied and varied a magnetic field. The system responded with an experimental signature for magnetism, one that arises from the motion of electrons. In fact, this orbital magnetism persisted to -263 degrees Celsius — the highest temperature reported in carbon-based materials to date.
But that magnetism can only occur in a system that lacks a specific symmetry — one that the team’s new material should have had. “So the fact that we saw this was very puzzling. We didn’t really understand what was going on,” says Aviram Uri, an MIT Pappalardo postdoc in physics and the third co-first author of the new paper.
Other authors of the paper include MIT professor of physics Liang Fu; Aaron Sharpe of Sandia National Laboratories; Yves H. Kwan of Princeton University; Ziyan Zhu, David Goldhaber-Gordon, and Trithep Devakul of Stanford University; and Kenji Watanabe and Takashi Taniguchi of the National Institute for Materials Science in Japan.
What was happening?
It turns out that the new system did indeed break the symmetry that prohibits the orbital magnetism the team observed, but in a very unusual way. “What happens is that the atoms in this system aren’t very comfortable, so they move in a subtle orchestrated way that we call lattice relaxation,” says Xia. And the new structure formed by that relaxation does indeed break the symmetry locally, on the moiré length scale.
This opens the possibility for the orbital magnetism the team observed. However, if you zoom out to view the system on the supermoiré scale, the symmetry is restored. “The moiré hierarchy turns out to support interesting phenomena at different length scales,” says de la Barrera.
Concludes Uri: “It’s a lot of fun when you solve a riddle and it’s such an elegant solution. We’ve gained new insights into how electrons behave in these complex systems, insights that we couldn’t have had unless our experimental observations forced us to think about these things.”
This work was supported by the Army Research Office, the National Science Foundation, the Gordon and Betty Moore Foundation, the Ross M. Brown Family Foundation, an MIT Pappalardo Fellowship, the VATAT Outstanding Postdoctoral Fellowship in Quantum Science and Technology, the JSPS KAKENHI, and a Stanford Science Fellowship. This work was carried out, in part, through the use of MIT.nano facilities.
New START.nano cohort is developing solutions in health, data storage, power, and sustainable energy

With seven new startups, MIT.nano's program for hard-tech ventures expands to more than 20 companies.

MIT.nano has announced seven new companies to join START.nano, a program aimed at speeding the transition of hard-tech innovation to market. The program supports new ventures through discounted use of MIT.nano’s facilities and access to the MIT innovation ecosystem.
The advancements pursued by the newly engaged startups include wearables for health care, green alternatives to fossil fuel-based energy, novel battery technologies, enhancements in data systems, and interconnected nanofabrication knowledge networks, among others.
“The transition of the grand idea that is imagined in the laboratory to something that a million people can use in their hands is a journey fraught with many challenges,” MIT.nano Director Vladimir Bulović said at the 2024 Nano Summit, where nine START.nano companies presented their work. The program provides resources to ease startups over the first two hurdles — finding stakeholders and building a well-developed prototype.
In addition to access to laboratory tools necessary to advance their technologies, START.nano companies receive advice from MIT.nano expert staff, are connected to MIT.nano Consortium companies, gain a broader exposure at MIT conferences and community events, and are eligible to join the MIT Startup Exchange.
“MIT.nano has allowed us to push our project to the frontiers of sensing by implementing advanced fabrication techniques using their machinery,” said Uroš Kuzmanović, CEO and founder of Biosens8. “START.nano has surrounded us with exciting peers, a strong support system, and a spotlight to present our work. By taking advantage of all that the program has to offer, BioSens8 is moving faster than we could anywhere else.”
Here are the seven new START.nano participants:
Analog Photonics is developing lidar and optical communications technology using silicon photonics.
Biosens8 is engineering novel devices to enable health ownership. Their research focuses on multiplexed wearables for hormones, neurotransmitters, organ health markers, and drug use that will give insight into the body's health state, opening the door to personalized medicine and proactive, data-driven health decisions.
Casimir, Inc. is working on power-generating nanotechnology that interacts with quantum fields to create a continuous source of power. The team compares their technology to a solar panel that works in the dark or a battery that never needs to be recharged.
Central Spiral focuses on lossless data compression. Their technology allows for the compression of any type of data, including those that are already compressed, reducing data storage and transmission costs, lowering carbon dioxide emissions, and enhancing efficiency.
FabuBlox connects stakeholders across the nanofabrication ecosystem and resolves issues of scattered, unorganized, and isolated fab knowledge. Their cloud-based platform combines a generative process design and simulation interface with GitHub-like repository building capabilities.
Metal Fuels is converting industrial waste aluminum to onsite energy and high-value aluminum/aluminum-oxide powders. Their approach combines the existing mature technologies of molten metal purification and water atomization to develop a self-sustaining reactor that produces alumina of higher value than the input scrap aluminum feedstock, while also collecting the hydrogen off-gas.
PolyJoule, Inc. is an energy storage startup working on conductive polymer battery technology. The team’s goal is a grid battery of the future that is ultra-safe, sustainable, long living, and low-cost.
In addition to the seven startups that are actively using MIT.nano, nine other companies have been invited to join the latest START.nano cohort.
Launched in 2021, START.nano now comprises over 20 companies and eight graduates — ventures that have moved beyond the initial startup stages and some into commercialization.
Toward sustainable decarbonization of aviation in Latin America

Special report describes targets for advancing technologically feasible and economically viable strategies.

According to the International Energy Agency, aviation accounts for about 2 percent of global carbon dioxide emissions, and aviation emissions are expected to double by mid-century as demand for domestic and international air travel rises. To sharply reduce emissions in alignment with the Paris Agreement’s long-term goal of keeping global warming below 1.5 degrees Celsius, the International Air Transport Association (IATA) has set a goal to achieve net-zero carbon emissions by 2050. That raises the question: Are there technologically feasible and economically viable strategies to reach that goal within the next 25 years?
To begin to address that question, a team of researchers at the MIT Center for Sustainability Science and Strategy (CS3) and the MIT Laboratory for Aviation and the Environment has spent the past year analyzing aviation decarbonization options in Latin America, where air travel is expected to more than triple by 2050 and thereby double today’s aviation-related emissions in the region.
Chief among those options is the development and deployment of sustainable aviation fuel. Currently produced from low- and zero-carbon sources (feedstock) including municipal waste and non-food crops, and requiring practically no alteration of aircraft systems or refueling infrastructure, sustainable aviation fuel (SAF) has the potential to perform just as well as petroleum-based jet fuel with as low as 20 percent of its carbon footprint.
Focused on Brazil, Chile, Colombia, Ecuador, Mexico and Peru, the researchers assessed SAF feedstock availability, the costs of corresponding SAF pathways, and how SAF deployment would likely impact fuel use, prices, emissions, and aviation demand in each country. They also explored how efficiency improvements and market-based mechanisms could help the region to reach decarbonization targets. The team’s findings appear in a CS3 Special Report.
SAF emissions, costs, and sources
Under an ambitious emissions mitigation scenario designed to cap global warming at 1.5 C and raise the rate of SAF use in Latin America to 65 percent by 2050, the researchers projected aviation emissions to be reduced by about 60 percent in 2050 compared to a scenario in which existing climate policies are not strengthened. To achieve net-zero emissions by 2050, other measures would be required, such as improvements in operational and air traffic efficiencies, airplane fleet renewal, alternative forms of propulsion, and carbon offsets and removals.
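A back-of-envelope check, using only the figures quoted in this article, shows that fuel substitution alone accounts for most of that projected reduction: a 65 percent SAF blend, with SAF at as low as one-fifth of fossil jet fuel's footprint, cuts per-liter fuel emissions by about half. The report's 60 percent projection also reflects demand, policy, and efficiency effects not modeled in this sketch.

```python
def blend_emissions_factor(saf_share, saf_footprint=0.2):
    """Per-liter emissions of a jet fuel blend, relative to pure fossil
    jet fuel (1.0). Assumes SAF at 20% of the fossil footprint, the
    lower bound cited in the article; real values vary by feedstock."""
    return (1 - saf_share) * 1.0 + saf_share * saf_footprint

factor = blend_emissions_factor(0.65)
print(round(1 - factor, 2))  # 0.52, i.e. about a 52 percent cut from fuel alone
```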
As of 2024, jet fuel prices in Latin America are around $0.70 per liter. Based on the current availability of feedstocks, the researchers projected SAF costs within the six countries studied to range from $1.11 to $2.86 per liter. They cautioned that increased fuel prices could affect operating costs of the aviation sector and overall aviation demand unless strategies to manage price increases are implemented.
Under the 1.5 C scenario, the total cumulative capital investments required to build new SAF production plants between 2025 and 2050 were estimated at $204 billion for the six countries (ranging from $5 billion in Ecuador to $84 billion in Brazil). The researchers identified sugarcane- and corn-based ethanol-to-jet fuel, along with palm oil- and soybean-based hydro-processed esters and fatty acids, as the most promising near-term feedstock sources for SAF production in Latin America.
“Our findings show that SAF offers a significant decarbonization pathway, which must be combined with an economy-wide emissions mitigation policy that uses market-based mechanisms to offset the remaining emissions,” says Sergey Paltsev, lead author of the report, MIT CS3 deputy director, and senior research scientist at the MIT Energy Initiative.
Recommendations
The researchers concluded the report with recommendations for national policymakers and aviation industry leaders in Latin America.
They stressed that government policy and regulatory mechanisms will be needed to create sufficient conditions to attract SAF investments in the region and make SAF commercially viable as the aviation industry decarbonizes operations. Without appropriate policy frameworks, SAF requirements will affect the cost of air travel. For fuel producers, stable, long-term-oriented policies and regulations will be needed to create robust supply chains, build demand for establishing economies of scale, and develop innovative pathways for producing SAF.
Finally, the research team recommended a region-wide collaboration in designing SAF policies. A unified decarbonization strategy among all countries in the region will help ensure competitiveness, economies of scale, and achievement of long-term carbon emissions-reduction goals.
“Regional feedstock availability and costs make Latin America a potential major player in SAF production,” says Angelo Gurgel, a principal research scientist at MIT CS3 and co-author of the study. “SAF requirements, combined with government support mechanisms, will ensure sustainable decarbonization while enhancing the region’s connectivity and the ability of disadvantaged communities to access air transport.”
Financial support for this study was provided by LATAM Airlines and Airbus.
Modeling complex behavior with a simple organism
By studying the roundworm C. elegans, neuroscientist Steven Flavell explores how neural circuits give rise to behavior.
The roundworm C. elegans is a simple animal whose nervous system has exactly 302 neurons. Each of the connections between those neurons has been comprehensively mapped, allowing researchers to study how they work together to generate the animal’s different behaviors.
Steven Flavell, an MIT associate professor of brain and cognitive sciences and investigator with The Picower Institute for Learning and Memory at MIT and the Howard Hughes Medical Institute, uses the worm as a model to study motivated behaviors such as feeding and navigation, in hopes of shedding light on the fundamental mechanisms that may also determine how similar behaviors are controlled in other animals.
In recent studies, Flavell’s lab has uncovered neural mechanisms underlying adaptive changes in the worms’ feeding behavior, and has mapped how the activity of each neuron in the animal’s nervous system affects the worms’ different behaviors.
Such studies could help researchers gain insight into how brain activity generates behavior in humans. “It is our aim to identify molecular and neural circuit mechanisms that may generalize across organisms,” he says, noting that many fundamental biological discoveries, including those related to programmed cell death, microRNA, and RNA interference, were first made in C. elegans.
“Our lab has mostly studied motivated state-dependent behaviors, like feeding and navigation. The machinery that’s being used to control these states in C. elegans — for example, neuromodulators — are actually the same as in humans. These pathways are evolutionarily ancient,” he says.
Drawn to the lab
Born in London to an English father and a Dutch mother, Flavell came to the United States in 1982 at the age of 2, when his father became chief scientific officer at Biogen. The family lived in Sudbury, Massachusetts, and his mother worked as a computer programmer and math teacher. His father later became a professor of immunology at Yale University.
Though Flavell grew up in a science family, he thought about majoring in English when he arrived at Oberlin College. A musician as well, Flavell took jazz guitar classes at Oberlin’s conservatory, and he also plays the piano and the saxophone. However, taking classes in psychology and physiology led him to discover that the field that most captivated him was neuroscience.
“I was immediately sold on neuroscience. It combined the rigor of the biological sciences with deep questions from psychology,” he says.
While in college, Flavell worked on a summer research project related to Alzheimer’s disease, in a lab at Case Western Reserve University. He then continued the project, which involved analyzing post-mortem Alzheimer’s tissue, during his senior year at Oberlin.
“My earliest research revolved around mechanisms of disease. While my research interests have evolved since then, my earliest research experiences were the ones that really got me hooked on working at the bench: running experiments, looking at brand new results, and trying to understand what they mean,” he says.
By the end of college, Flavell was a self-described lab rat: “I just love being in the lab.” He applied to graduate school and ended up going to Harvard Medical School for a PhD in neuroscience. Working with Michael Greenberg, Flavell studied how sensory experience and resulting neural activity shape brain development. In particular, he focused on a family of gene regulators called MEF2, which play important roles in neuronal development and synaptic plasticity.
All of that work was done using mouse models, but Flavell transitioned to studying C. elegans during a postdoctoral fellowship working with Cori Bargmann at Rockefeller University. He was interested in studying how neural circuits control behavior, which seemed to be more feasible in simpler animal models.
“Studying how neurons across the brain govern behavior felt like it would be nearly intractable in a large brain — to understand all the nuts and bolts of how neurons interact with each other and ultimately generate behavior seemed daunting,” he says. “But I quickly became excited about studying this in C. elegans because at the time it was still the only animal with a full blueprint of its brain: a map of every brain cell and how they are all wired up together.”
That wiring diagram includes about 7,000 synapses in the entire nervous system. By comparison, a single human neuron may form more than 10,000 synapses. “Relative to those larger systems, the C. elegans nervous system is mind-bogglingly simple,” Flavell says.
Despite their much simpler organization, roundworms can execute complex behaviors such as feeding, locomotion, and egg-laying. They even sleep, form memories, and find suitable mating partners. The neuromodulators and cellular machinery that give rise to those behaviors are similar to those found in humans and other mammals.
“C. elegans has a fairly well-defined, smallish set of behaviors, which makes it attractive for research. You can really measure almost everything that the animal is doing and study it,” Flavell says.
How behavior arises
Early in his career, Flavell’s work on C. elegans revealed the neural mechanisms that underlie the animal’s stable behavioral states. When worms are foraging for food, they alternate between stably exploring the environment and pausing to feed. “The transition rates between those states really depend on all these cues in the environment. How good is the food environment? How hungry are they? Are there smells indicating a better nearby food source? The animal integrates all of those things and then adjusts their foraging strategy,” Flavell says.
These stable behavioral states are controlled by neuromodulators like serotonin. By studying serotonergic regulation of the worm’s behavioral states, Flavell’s lab has been able to uncover how this important system is organized. In a recent study, Flavell and his colleagues published an “atlas” of the C. elegans serotonin system. They identified every neuron that produces serotonin, every neuron that has serotonin receptors, and how brain activity and behavior change across the animal as serotonin is released.
“Our studies of how the serotonin system works to control behavior have already revealed basic aspects of serotonin signaling that we think ought to generalize all the way up to mammals,” Flavell says. “By studying the way that the brain implements these long-lasting states, we can tap into these basic features of neuronal function. With the resolution that you can get studying specific C. elegans neurons and the way that they implement behavior, we can uncover fundamental features of the way that neurons act.”
In parallel, Flavell’s lab has also been mapping out how neurons across the C. elegans brain control different aspects of behavior. In a 2023 study, Flavell’s lab mapped how changes in brain-wide activity relate to behavior. His lab uses special microscopes that can move along with the worms as they explore, allowing them to simultaneously track every behavior and measure the activity of every neuron in the brain. Using these data, the researchers created computational models that can accurately capture the relationship between brain activity and behavior.
This type of research requires expertise in many areas, Flavell says. When looking for faculty jobs, he hoped to find a place where he could collaborate with researchers working in different fields of neuroscience, as well as scientists and engineers from other departments.
“Being at MIT has allowed my lab to be much more multidisciplinary than it could have been elsewhere,” he says. “My lab members have had undergrad degrees in physics, math, computer science, biology, neuroscience, and we use tools from all of those disciplines. We engineer microscopes, we build computational models, we come up with molecular tricks to perturb neurons in the C. elegans nervous system. And I think being able to deploy all those kinds of tools leads to exciting research outcomes.”
MIT student encourages all learners to indulge their curiosity with MIT Open Learning's MITx
Junior Shreya Mogulothu says taking an MITx class as a high school student opened her eyes to new possibilities.
Shreya Mogulothu is naturally curious. As a high school student in New Jersey, she was interested in mathematics and theoretical computer science (TCS). So, when her curiosity compelled her to learn more, she turned to MIT Open Learning’s online resources and completed the Paradox and Infinity course on MITx Online.
“Coming from a math and TCS background, the idea of pushing against the limits of assumptions was really interesting,” says Mogulothu, now a junior at MIT. “I mean, who wouldn’t want to learn more about infinity?”
The class, taught by Agustín Rayo, professor of philosophy and the current dean of the School of Humanities, Arts, and Social Sciences, and David Balcarras, a former instructor in philosophy and fellow in the Digital Learning Lab at Open Learning, explores the intersection of math and philosophy and guides learners through thinking about paradoxes and open-ended problems, as well as the boundaries of theorizing and the limits of standard mathematical tools.
“We talked about taking regular assumptions about numbers and objects and pushing them to extremes,” Mogulothu says. “For example, what contradictions arise when you talk about an infinite set of things, like the infinite hats paradox?”
The infinite hats paradox, also known as Bacon’s Puzzle, involves an infinite line of people, each wearing one of two colors of hats. The puzzle posits that each individual can see only the hat of the person in front of them and must guess the color of their own hat. The puzzle challenges students to determine whether there is a strategy that minimizes the number of incorrect answers, and to consider how the strategy changes if there is a finite number of people. Mogulothu was thrilled that a class like this was available to her even though she wasn’t yet affiliated with MIT.
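For the finite version of the puzzle, a well-known parity strategy guarantees at most one wrong guess. (This sketch uses the common variant in which each person sees every hat in front of them; the course's exact setup may differ.) Hat colors are encoded as 0 and 1:

```python
# Parity strategy for the finite hat puzzle: the person at the back
# announces the parity (sum mod 2) of all hats they can see, encoded as
# a color. Everyone else deduces their own hat exactly; only the back
# person can be wrong.
import random

def play(hats):
    """hats[0] is the person at the back, who sees hats[1:]."""
    guesses = []
    announced = sum(hats[1:]) % 2   # back person's guess doubles as a signal
    guesses.append(announced)
    heard_parity = announced        # parity of hats[i:] still unaccounted for
    for i in range(1, len(hats)):
        seen = sum(hats[i + 1:]) % 2          # hats in front of person i
        my_hat = (heard_parity - seen) % 2    # own hat follows by elimination
        guesses.append(my_hat)
        heard_parity = (heard_parity - my_hat) % 2
    return guesses

random.seed(0)
for _ in range(1000):
    hats = [random.randint(0, 1) for _ in range(20)]
    wrong = sum(g != h for g, h in zip(play(hats), hats))
    assert wrong <= 1
print("at most one wrong guess in every trial")
```

No matter how the hats are assigned, everyone except possibly the back person answers correctly, which is the best any strategy can guarantee.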
“My MITx experience was one of the reasons I came to MIT,” she says. “I really liked the course, and I was happy it was shared with people like me, who didn’t even go to the school. I thought that a place that encouraged even people outside of campus to learn like that would be a pretty good place to study.”
Looking back at the course, Balcarras says, “Shreya may have been the most impressive student in our online community of approximately 3,900 learners and 100 verified learners. I cannot single out another student whose performance rivaled hers.”
Because of her excellent performance, Mogulothu was invited to submit her work to the 2021 MITx Philosophy Awards. She won. In fact, Balcarras remembers, both papers she wrote for the course would have won. They demonstrated, he says, “an unusually high degree of precision, formal acumen, and philosophical subtlety for a high school student.”
Completing the course and winning the award was rewarding, Mogulothu says. It motivated her to keep exploring new things as a high school student, and then as a new student enrolled at MIT.
She came to college thinking she would declare a major in math or computer science. But when she looked at the courses she was most interested in, she realized she should pursue a physics major.
She has enjoyed the courses in her major, especially class STS.042J/8.225J (Einstein, Oppenheimer, Feynman: Physics in the 20th Century), taught by David Kaiser, the Germeshausen Professor of the History of Science and professor of physics. She took the course on campus, but it is also available on Open Learning’s MIT OpenCourseWare. As a student, she continues to use MIT Open Learning resources to check out courses and review syllabi as she plans her coursework.
In summer 2024, Mogulothu did research on gravitational wave detection at PIER, the partnership between research center DESY and the University of Hamburg, in Hamburg, Germany. She wants to pursue a PhD in physics to keep researching, expanding her mind, and indulging the curiosity that led her to MITx in the first place. She encourages all learners to feel comfortable and confident trying something entirely new.
“I went into the Paradox and Infinity course thinking, ‘yeah, math is cool, computer science is cool,’” she says. “But, actually taking the course and learning about things you don’t even expect to exist is really powerful. It increases your curiosity and is super rewarding to stick with something and realize how much you can learn and grow.”
Three MIT students — Yutao Gong, Brandon Man, and Andrii Zahorodnii — have been awarded 2025 Schwarzman Scholarships and will join the program’s 10th cohort to pursue a master’s degree in global affairs at Tsinghua University in Beijing, China.
The MIT students were selected from a pool of over 5,000 applicants. This year’s class of 150 scholars represents 38 countries and 105 universities from around the world.
The Schwarzman Scholars program aims to develop leadership skills and deepen understanding of China’s changing role in the world. The fully funded one-year master’s program at Tsinghua University emphasizes leadership, global affairs, and China. Scholars also gain exposure to China through mentoring, internships, and experiential learning.
MIT’s Schwarzman Scholar applicants receive guidance and mentorship from the distinguished fellowships team in Career Advising and Professional Development and the Presidential Committee on Distinguished Fellowships.
Yutao Gong will graduate this spring from the Leaders for Global Operations program at the MIT Sloan School of Management, earning a dual MBA and MS degree in civil and environmental engineering with a focus on manufacturing and operations. Gong, who hails from Shanghai, China, has academic, work, and social engagement experiences in China, the United States, Jordan, and Denmark. She was previously a consultant at Boston Consulting Group working on manufacturing, agriculture, sustainability, and renewable energy-related projects, and spent two years in Chicago and one year in Greater China as a global ambassador. Gong graduated magna cum laude from Duke University with double majors in environmental science and statistics, where she organized the Duke China-U.S. Summit.
Brandon Man, from Canada and Hong Kong, is a master’s student in the Department of Mechanical Engineering at MIT, where he studies generative artificial intelligence (genAI) for engineering design. Previously, he graduated from Cornell University magna cum laude with honors in computer science. With a wealth of experience in robotics — from assistive robots to next-generation spacesuits for NASA to Tencent’s robot dog, Max — he is now a co-founder of Sequestor, a genAI-powered data aggregation platform that enables carbon credit investors to perform faster due diligence. His goal is to bridge the best practices of the Eastern and Western tech worlds.
Andrii Zahorodnii, from Ukraine, will graduate this spring with a bachelor of science and a master of engineering degree in computer science and cognitive sciences. An engineer as well as a neuroscientist, he has conducted research at MIT with Professor Guangyu Robert Yang’s MetaConscious Group and the Fiete Lab. Zahorodnii is passionate about using AI to uncover insights into human cognition, leading to more-informed, empathetic, and effective global decision-making and policy. Besides driving the exchange of ideas as a TEDxMIT organizer, he strives to empower and inspire future leaders internationally and in Ukraine through the Ukraine Leadership and Technology Academy he founded.
How one brain circuit encodes memories of both places and events
A new computational model explains how neurons linked to spatial navigation can also help store episodic memories.
Nearly 50 years ago, neuroscientists discovered cells within the brain’s hippocampus that store memories of specific locations. These cells also play an important role in storing memories of events, known as episodic memories. While the mechanism of how place cells encode spatial memory has been well-characterized, it has remained a puzzle how they encode episodic memories.
A new model developed by MIT researchers explains how those place cells can be recruited to form episodic memories, even when there’s no spatial component. According to this model, place cells, along with grid cells found in the entorhinal cortex, act as a scaffold that can be used to anchor memories as a linked series.
“This model is a first-draft model of the entorhinal-hippocampal episodic memory circuit. It’s a foundation to build on to understand the nature of episodic memory. That’s the thing I’m really excited about,” says Ila Fiete, a professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the new study.
The model accurately replicates several features of biological memory systems, including its large storage capacity, the gradual degradation of older memories, and the “memory palace” technique that competitive mnemonists use to store enormous amounts of information.
MIT Research Scientist Sarthak Chandra and Sugandha Sharma PhD ’24 are the lead authors of the study, which appears today in Nature. Rishidev Chaudhuri, an assistant professor at the University of California at Davis, is also an author of the paper.
An index of memories
To encode spatial memory, place cells in the hippocampus work closely with grid cells — a special type of neuron that fires at many different locations, arranged geometrically in a regular pattern of repeating triangles. Together, a population of grid cells forms a lattice of triangles representing a physical space.
In addition to helping us recall places where we’ve been, these hippocampal-entorhinal circuits also help us navigate new locations. From human patients, it’s known that these circuits are also critical for forming episodic memories, which might have a spatial component but mainly consist of events, such as how you celebrated your last birthday or what you had for lunch yesterday.
“The same hippocampal and entorhinal circuits are used not just for spatial memory, but also for general episodic memory,” Fiete says. “The question you can ask is what is the connection between spatial and episodic memory that makes them live in the same circuit?”
Two hypotheses have been proposed to account for this overlap in function. One is that the circuit is specialized to store spatial memories because those types of memories — remembering where food was located or where predators were seen — are important to survival. Under this hypothesis, this circuit encodes episodic memories as a byproduct of spatial memory.
An alternative hypothesis suggests that the circuit is specialized to store episodic memories, but also encodes spatial memory because location is one aspect of many episodic memories.
In this work, Fiete and her colleagues proposed a third option: that the peculiar tiling structure of grid cells and their interactions with hippocampus are equally important for both types of memory — episodic and spatial. To develop their new model, they built on computational models that her lab has been developing over the past decade, which mimic how grid cells encode spatial information.
“We reached the point where I felt like we understood on some level the mechanisms of the grid cell circuit, so it felt like the time to try to understand the interactions between the grid cells and the larger circuit that includes the hippocampus,” Fiete says.
In the new model, the researchers hypothesized that grid cells interacting with hippocampal cells can act as a scaffold for storing either spatial or episodic memory. Each activation pattern within the grid defines a “well,” and these wells are spaced out at regular intervals. The wells don’t store the content of a specific memory, but each one acts as a pointer to a specific memory, which is stored in the synapses between the hippocampus and the sensory cortex.
When the memory is triggered later from fragmentary pieces, grid and hippocampal cell interactions drive the circuit state into the nearest well, and the state at the bottom of the well connects to the appropriate part of the sensory cortex to fill in the details of the memory. The sensory cortex is much larger than the hippocampus and can store vast amounts of memory.
“Conceptually, we can think about the hippocampus as a pointer network. It’s like an index that can be pattern-completed from a partial input, and that index then points toward sensory cortex, where those inputs were experienced in the first place,” Fiete says. “The scaffold doesn’t contain the content, it only contains this index of abstract scaffold states.”
Furthermore, events that occur in sequence can be linked together: Each well in the grid cell-hippocampal network efficiently stores the information that is needed to activate the next well, allowing memories to be recalled in the right order.
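The scaffold-and-pointer idea can be sketched as a toy program (our own illustrative sketch, not the authors' model: the "wells" here are just random binary codes, and pattern completion is a nearest-neighbor lookup):

```python
# Toy "scaffold + pointer" memory. Scaffold wells index content stored in
# a separate "cortex" table, and each well links to the next in sequence.
import random

random.seed(1)
N = 64  # bits per scaffold state

def random_state():
    return tuple(random.choice((0, 1)) for _ in range(N))

def nearest(wells, cue):
    """Pattern completion: fall into the well closest to the cue."""
    return min(wells, key=lambda w: sum(a != b for a, b in zip(w, cue)))

events = ["woke up", "ate lunch", "gave talk", "went home"]
wells = [random_state() for _ in events]
cortex = dict(zip(wells, events))             # well -> memory content
next_well = dict(zip(wells[:-1], wells[1:]))  # sequence links

# Recall the whole episode from a corrupted fragment of the first state.
cue = list(wells[0])
for i in random.sample(range(N), 10):  # flip 10 of 64 bits
    cue[i] ^= 1
state = nearest(wells, tuple(cue))
recalled = []
while state is not None:
    recalled.append(cortex[state])
    state = next_well.get(state)
print(recalled)  # -> ['woke up', 'ate lunch', 'gave talk', 'went home']
```

Even from a noisy cue, the lookup lands in the correct well, and following the links replays the events in order, mirroring how a partial reminder can trigger recall of a whole episode.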
Modeling memory cliffs and palaces
The researchers’ new model replicates several memory-related phenomena much more accurately than existing models that are based on Hopfield networks — a type of neural network that can store and recall patterns.
While Hopfield networks offer insight into how memories can be formed by strengthening connections between neurons, they don’t perfectly model how biological memory works. In Hopfield models, every memory is recalled in perfect detail until capacity is reached. At that point, no new memories can form, and worse, attempting to add more memories erases all prior ones. This “memory cliff” doesn’t accurately mimic what happens in the biological brain, which tends to gradually forget the details of older memories while new ones are continually added.
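The memory cliff shows up clearly in a few lines of code. This is a minimal textbook Hopfield network (not the MIT model): stored patterns are recalled near-perfectly below the network's capacity of roughly 0.14 patterns per neuron, and recall collapses far above it.

```python
# Minimal Hopfield network: Hebbian storage, sign-threshold recall.
import random

random.seed(0)
N = 100  # binary (+1/-1) neurons; capacity is roughly 0.14 * N patterns

def train(patterns):
    """Hebbian outer-product learning rule; no self-connections."""
    W = [[0.0] * N for _ in range(N)]
    for p in patterns:
        for i in range(N):
            for j in range(N):
                if i != j:
                    W[i][j] += p[i] * p[j] / N
    return W

def recall(W, state, steps=7):
    """Synchronous sign updates until (approximate) convergence."""
    s = list(state)
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(N)) >= 0 else -1
             for i in range(N)]
    return s

def overlap(a, b):
    return abs(sum(x * y for x, y in zip(a, b))) / N

def avg_recall(num_patterns):
    """Store patterns, probe with 10%-corrupted copies, measure recall."""
    patterns = [[random.choice((-1, 1)) for _ in range(N)]
                for _ in range(num_patterns)]
    W = train(patterns)
    scores = []
    for p in patterns:
        probe = list(p)
        for i in random.sample(range(N), N // 10):
            probe[i] *= -1
        scores.append(overlap(recall(W, probe), p))
    return sum(scores) / len(scores)

low = avg_recall(5)    # well below capacity: near-perfect recall
high = avg_recall(40)  # far above capacity: the "memory cliff"
print(f" 5 patterns stored: mean overlap {low:.2f}")
print(f"40 patterns stored: mean overlap {high:.2f}")
```

Below capacity the overlap with the stored pattern is essentially 1.0; above it, recall degrades abruptly for all patterns at once, old and new alike, which is the cliff that biological memory avoids.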
The new MIT model captures findings from decades of recordings of grid and hippocampal cells in rodents made as the animals explore and forage in various environments. It also helps to explain the underlying mechanisms for a memorization strategy known as a memory palace. One of the tasks in memory competitions is to memorize the shuffled sequence of cards in one or several card decks. Competitors usually do this by assigning each card to a particular spot in a memory palace — a memory of a childhood home or other environment they know well. When they need to recall the cards, they mentally stroll through the house, visualizing each card in its spot as they go along. Counterintuitively, adding the memory burden of associating cards with locations makes recall stronger and more reliable.
The MIT team’s computational model was able to perform such tasks very well, suggesting that memory palaces take advantage of the memory circuit’s own strategy of associating inputs with a scaffold in the hippocampus, but one level down: Long-acquired memories reconstructed in the larger sensory cortex can now be pressed into service as a scaffold for new memories. This allows for the storage and recall of many more items in a sequence than would otherwise be possible.
The researchers now plan to build on their model to explore how episodic memories could become converted to cortical “semantic” memory, or the memory of facts dissociated from the specific context in which they were acquired (for example, Paris is the capital of France), how episodes are defined, and how brain-like memory models could be integrated into modern machine learning.
The research was funded by the U.S. Office of Naval Research, the National Science Foundation under the Robust Intelligence program, the ARO-MURI award, the Simons Foundation, and the K. Lisa Yang ICoN Center.
X-ray flashes from a nearby supermassive black hole accelerate mysteriously
Their source could be the core of a dead star that’s teetering at the black hole’s edge, MIT astronomers report.
One supermassive black hole has kept astronomers glued to their scopes for the last several years. First came a surprise disappearance, and now, a precarious spinning act.
The black hole in question is 1ES 1927+654, which is about as massive as a million suns and sits in a galaxy that is 270 million light-years away. In 2018, astronomers at MIT and elsewhere observed that the black hole’s corona — a cloud of whirling, white-hot plasma — suddenly disappeared, before reassembling months later. The brief though dramatic shut-off was a first in black hole astronomy.
Members of the MIT team have now caught the same black hole exhibiting more unprecedented behavior.
The astronomers have detected flashes of X-rays coming from the black hole at a steadily increasing clip. Over a period of two years, the flashes, at millihertz frequencies, increased from every 18 minutes to every seven minutes. This dramatic speed-up in X-rays has not been seen from a black hole until now.
The researchers explored a number of scenarios for what might explain the flashes. They believe the most likely culprit is a spinning white dwarf — an extremely compact core of a dead star that is orbiting around the black hole and getting precariously closer to its event horizon, the boundary beyond which nothing can escape the black hole’s gravitational pull. If this is the case, the white dwarf must be pulling off an impressive balancing act, as it could be coming right up to the black hole’s edge without actually falling in.
“This would be the closest thing we know of around any black hole,” says Megan Masterson, a graduate student in physics at MIT, who co-led the discovery. “This tells us that objects like white dwarfs may be able to live very close to an event horizon for a relatively extended period of time.”
The researchers present their findings today at the 245th meeting of the American Astronomical Society.
If a white dwarf is at the root of the black hole’s mysterious flashing, it would also give off gravitational waves, in a range that would be detectable by next-generation observatories such as the European Space Agency's Laser Interferometer Space Antenna (LISA).
“These new detectors are designed to detect oscillations on the scale of minutes, so this black hole system is in that sweet spot,” says co-author Erin Kara, associate professor of physics at MIT.
The study’s other co-authors include MIT Kavli members Christos Panagiotou, Joheen Chakraborty, Kevin Burdge, Riccardo Arcodia, Ronald Remillard, and Jingyi Wang, along with collaborators from multiple other institutions.
Nothing normal
Kara and Masterson were part of the team that observed 1ES 1927+654 in 2018, as the black hole’s corona went dark, then slowly rebuilt itself over time. For a while, the newly reformed corona — a cloud of highly energetic plasma and X-rays — was the brightest X-ray-emitting object in the sky.
“It was still extremely bright, though it wasn’t doing anything new for a couple years and was kind of gurgling along. But we felt we had to keep monitoring it because it was so beautiful,” Kara says. “Then we noticed something that has never really been seen before.”
In 2022, the team looked through observations of the black hole taken by the European Space Agency’s XMM-Newton, a space-based observatory that detects and measures X-ray emissions from black holes, neutron stars, galactic clusters, and other extreme cosmic sources. They noticed that X-rays from the black hole appeared to pulse with increasing frequency. Such “quasi-periodic oscillations” have only been observed in a handful of other supermassive black holes, where X-ray flashes appear with regular frequency.
In the case of 1ES 1927+654, the flickering seemed to steadily ramp up, from every 18 minutes to every seven minutes over the span of two years.
“We’ve never seen this dramatic variability in the rate at which it’s flashing,” Masterson says. “This looked absolutely nothing like a normal supermassive black hole.”
The fact that the flashing was detected in the X-ray band points to the strong possibility that the source is somewhere very close to the black hole. The innermost regions of a black hole are extremely high-energy environments, where X-rays are produced by fast-moving, hot plasma. X-rays are less likely to be seen at farther distances, where gas can circle more slowly in an accretion disk. The cooler environment of the disk can emit optical and ultraviolet light, but rarely gives off X-rays.
“Seeing something in the X-rays is already telling you you’re pretty close to the black hole,” Kara says. “When you see variability on the timescale of minutes, that’s close to the event horizon, and the first thing your mind goes to is circular motion, and whether something could be orbiting around the black hole.”
X-ray kick-up
Whatever was producing the X-ray flashes was doing so at an extremely close distance from the black hole, which the researchers estimate to be within a few million miles of the event horizon.
Masterson and Kara explored models for various astrophysical phenomena that could explain the X-ray patterns that they observed, including a possibility relating to the black hole’s corona.
“One idea is that this corona is oscillating, maybe blobbing back and forth, and if it starts to shrink, those oscillations get faster as the scales get smaller,” Masterson says. “But we’re in the very early stages of understanding coronal oscillations.”
Another promising scenario, and one that scientists have a better grasp on in terms of the physics involved, has to do with a daredevil of a white dwarf. According to their modeling, the researchers estimate the white dwarf could have been about one-tenth the mass of the sun. In contrast, the supermassive black hole itself is on the order of 1 million solar masses.
When any object gets this close to a supermassive black hole, gravitational waves are expected to be emitted, dragging the object closer to the black hole. As it circles closer, the white dwarf moves at a faster rate, which can explain the increasing frequency of X-ray oscillations that the team observed.
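As a rough check on those numbers, a Newtonian orbit calculation (our own back-of-the-envelope estimate; it ignores the relativistic corrections that matter this close to a black hole) maps the reported 18- and seven-minute periods onto radii of a few million miles for a million-solar-mass black hole:

```python
# Keplerian orbital radius for a given period around a ~10^6 solar-mass
# black hole, compared with its Schwarzschild (event horizon) radius.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
M_bh = 1e6 * M_sun   # mass reported for 1ES 1927+654

def orbital_radius(period_s):
    """Radius of a circular Newtonian orbit with the given period."""
    return (G * M_bh * (period_s / (2 * math.pi)) ** 2) ** (1 / 3)

miles = 1609.34
r18 = orbital_radius(18 * 60) / miles
r7 = orbital_radius(7 * 60) / miles
r_s = 2 * G * M_bh / c**2 / miles  # Schwarzschild radius

print(f"18-min orbit: ~{r18 / 1e6:.1f} million miles")
print(f" 7-min orbit: ~{r7 / 1e6:.1f} million miles")
print(f"event horizon: ~{r_s / 1e6:.1f} million miles")
```

The 18-minute period corresponds to an orbit roughly 10 million miles out, the seven-minute period to about 5 million miles, while the horizon itself sits near 2 million miles, consistent with the researchers' estimate that the source lies within a few million miles of the event horizon.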
The white dwarf is practically at the precipice of no return and is estimated to be just a few million miles from the event horizon. However, the researchers predict that the star will not fall in. While the black hole’s gravity may pull the white dwarf inward, the star is also shedding part of its outer layer into the black hole. This shedding acts as a small kick-back, such that the white dwarf — an incredibly compact object itself — can resist crossing the black hole’s boundary.
“Because white dwarfs are small and compact, they’re very difficult to shred apart, so they can be very close to a black hole,” Kara says. “If this scenario is correct, this white dwarf is right at the turn around point, and we may see it get further away.”
The team plans to continue observing the system, with existing and future telescopes, to better understand the extreme physics at work in a black hole’s innermost environments. They are particularly excited to study the system once the space-based gravitational-wave detector LISA launches — currently planned for the mid-2030s — as the gravitational waves that the system should give off will be in a sweet spot that LISA can clearly detect.
“The one thing I’ve learned with this source is to never stop looking at it because it will probably teach us something new,” Masterson says. “The next step is just to keep our eyes open.”
A new way to determine whether a species will successfully invade an ecosystem

MIT physicists develop a predictive formula, based on bacterial communities, that may also apply to other types of ecosystems, including the human GI tract.

When a new species is introduced into an ecosystem, it may succeed in establishing itself, or it may fail to gain a foothold and die out. Physicists at MIT have now devised a formula that can predict which of those outcomes is most likely.
The researchers created their formula based on analysis of hundreds of different scenarios that they modeled using populations of soil bacteria grown in their laboratory. They now plan to test their formula in larger-scale ecosystems, including forests. This approach could also be helpful in predicting whether probiotics or fecal microbiota treatments (FMT) would successfully combat infections of the human GI tract.
“People eat a lot of probiotics, but many of them can never invade our gut microbiome at all, because if you introduce it, it does not necessarily mean that it can grow and colonize and benefit your health,” says Jiliang Hu SM ’19, PhD ’24, the lead author of the study.
MIT professor of physics Jeff Gore is the senior author of the paper, which appears today in the journal Nature Ecology &amp; Evolution. Matthieu Barbier, a researcher at the Plant Health Institute Montpellier, and Guy Bunin, a professor of physics at Technion, are also authors of the paper.
Population fluctuations
Gore’s lab specializes in using microbes to analyze interspecies interactions in a controlled way, in hopes of learning more about how natural ecosystems behave. In previous work, the team has used bacterial populations to demonstrate how changing the environment in which the microbes live affects the stability of the communities they form.
In this study, the researchers wanted to study what determines whether an invasion by a new species will succeed or fail. In natural communities, ecologists have hypothesized that the more diverse an ecosystem is, the more it will resist an invasion, because most of the ecological niches will already be occupied and few resources are left for an invader.
However, in both natural and experimental systems, scientists have observed that this is not consistently true: While some highly diverse populations are resistant to invasion, other highly diverse populations are more likely to be invaded.
To explore why both of those outcomes can occur, the researchers set up more than 400 communities of soil bacteria, which were all native to the soil around MIT. The researchers established communities of 12 to 20 species of bacteria, and six days later, they added one randomly chosen species as the invader. On the 12th day of the experiment, they sequenced the genomes of all the bacteria to determine if the invader had established itself in the ecosystem.
In each community, the researchers also varied the nutrient levels in the culture medium on which the bacteria were grown. When nutrient levels were high, the microbes displayed strong interactions, characterized by heightened competition for food and other resources, or mutual inhibition through mechanisms such as pH-mediated cross-toxin effects. Some of these populations formed stable states in which the fraction of each microbe did not vary much over time, while others formed communities in which most of the species fluctuated in number.
The researchers found that these fluctuations were the most important factor in the outcome of the invasion. Communities that had more fluctuations tended to be more diverse, but they were also more likely to be invaded successfully.
“The fluctuation is not driven by changes in the environment, but it is internal fluctuation driven by the species interaction. And what we found is that the fluctuating communities are more readily invaded and also more diverse than the stable ones,” Hu says.
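The idea that strong species interactions alone can generate these internal fluctuations can be illustrated with a generalized Lotka-Volterra model, a standard tool for modeling interacting microbial communities. This is only an illustrative sketch, not the model used in the study; the species count, interaction strengths, and growth rates below are all assumed values.

```python
import numpy as np

def glv_step(x, r, A, dt=0.01):
    """One Euler step of generalized Lotka-Volterra dynamics:
    dx_i/dt = x_i * (r_i + sum_j A_ij * x_j).
    Abundances are floored at a tiny value to keep them positive."""
    return np.clip(x + dt * x * (r + A @ x), 1e-9, None)

def simulate(n_species=15, strength=0.5, steps=20000, seed=0):
    """Simulate a community with random, purely competitive interactions.
    `strength` sets how strongly species suppress one another."""
    rng = np.random.default_rng(seed)
    r = np.ones(n_species)                  # equal growth rates (assumption)
    A = -np.eye(n_species)                  # self-limitation on the diagonal
    comp = -np.abs(rng.normal(0.0, strength, (n_species, n_species)))
    np.fill_diagonal(comp, 0.0)
    A = A + comp                            # random competitive off-diagonals
    x = rng.uniform(0.1, 1.0, n_species)    # random initial abundances
    traj = np.empty((steps, n_species))
    for t in range(steps):
        x = glv_step(x, r, A)
        traj[t] = x
    return traj

def mean_cv(traj, tail=5000):
    """Average coefficient of variation over the last `tail` steps —
    a rough measure of how much the community fluctuates internally."""
    end = traj[-tail:]
    return float((end.std(axis=0) / end.mean(axis=0)).mean())

weak = simulate(strength=0.1)    # weak interactions
strong = simulate(strength=0.8)  # strong interactions
```

The environment here is perfectly constant; any sustained fluctuations in the strong-interaction run arise purely from the species interactions themselves, which is the distinction Hu draws above.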
In some of the populations where the invader established itself, the other species remained, but in smaller numbers. In other populations, some of the resident species were outcompeted and disappeared completely. This displacement tended to happen more often in ecosystems where there were stronger competitive interactions between species.
In ecosystems that had more stable, less diverse populations, with stronger interactions between species, invasions were more likely to fail.
Regardless of whether the community was stable or fluctuating, the researchers found that the fraction of the original species that survived in the community before invasion predicts the probability of invasion success. This “survival fraction” could be estimated in natural communities by taking the ratio of the diversity within a local community (measured by the number of species in that area) to the regional diversity (number of species found in the entire region).
“It would be exciting to study whether the local and regional diversity could be used to predict susceptibility to invasion in natural communities,” Gore says.
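The survival-fraction estimate described above is a simple ratio of local to regional diversity. As a back-of-the-envelope illustration (the species counts here are hypothetical, not data from the study):

```python
def survival_fraction(local_species, regional_species):
    """Estimate the survival fraction as the ratio of diversity within a
    local community to the diversity of the entire region.

    local_species: number of species coexisting in one local community
    regional_species: number of species found across the whole region
    """
    if regional_species <= 0 or local_species > regional_species:
        raise ValueError("need 0 <= local_species <= regional_species > 0")
    return local_species / regional_species

# Hypothetical example: 12 of 20 regional species persist locally.
frac = survival_fraction(12, 20)  # -> 0.6
```

Per the study's finding, a higher survival fraction before invasion corresponds to a higher probability that an invader will establish itself.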
Predicting success
The researchers also found that under certain circumstances, the order in which species arrived in the ecosystem played a role in whether an invasion was successful. When the interactions between species were strong, the chances of a species becoming successfully incorporated went down when that species was introduced after other species had already become established.
When the interactions were weak, this “priority effect” disappeared, and the same stable equilibrium was reached no matter what order the microbes arrived in.
“Under a strong interaction regime, we found the invader has some disadvantage because it arrived later. This is of interest in ecology because people have always found that in some cases the order in which species arrived matters a lot, while in the other cases it doesn't matter,” Hu says.
The researchers now plan to try to replicate their findings in ecosystems for which species diversity data is available, including the human gut microbiome. Their formula could allow them to predict the success of probiotic treatment, in which beneficial bacteria are consumed orally, or FMT, an experimental treatment for severe infections such as C. difficile, in which beneficial bacteria from a donor’s stool are transplanted into a patient’s colon.
“Invasions can be harmful or can be good depending on the context,” Hu says. “In some cases, like probiotics, or FMT to treat C. difficile infection, we want the healthy species to invade successfully. Also for soil protection, people introduce probiotics or beneficial species to the soil. In that case people also want the invaders to succeed.”
The research was funded by the Schmidt Polymath Award and the Sloan Foundation.