Senior Madison Wang, a double major in creative writing and chemistry, developed her passion for writing in middle school. Her interest in chemistry fit nicely alongside her commitment to producing engaging narratives.
Wang believes that world-building in stories supported by science and research can make for a more immersive reader experience.
“In science and in writing, you have to tell an effective story,” she says. “People respond well to stories.”
A native of Buffalo, New York, Wang applied early action for admission to MIT and learned quickly that the Institute was where she wanted to be. “It was a really good fit,” she says. “There was positive energy and vibes, and I had a great feeling overall.”
The power of science and good storytelling
“Chemistry is practical, complex, and interesting,” says Wang. “It’s about quantifying natural laws and understanding how reality works.”
Chemistry and writing both help us “see the world’s irregularity,” she continues. Together, they can erase the artificial and arbitrary line separating one from the other and work in concert to tell a more complete story about the world, the ways in which we participate in building it, and how people and objects exist in and move through it.
“Understanding magnetism, material properties, and believing in the power of magic in a good story … these are why we’re drawn to explore,” she says. “Chemistry describes why things are the way they are, and I use it for world-building in my creative writing.”
Wang lauds MIT’s creative writing program and cites a course she took with Comparative Media Studies/Writing Professor and Pulitzer Prize winner Junot Díaz as an affirmation of her choice. Seeing and understanding the world through the eyes of a scientist — its building blocks, the ways the pieces fit and function together — help explain her passion for chemistry, especially inorganic and physical chemistry.
Wang cites the work of authors like Sam Kean and Knight Science Journalism Program Director Deborah Blum as part of her inspiration to study science. The books “The Disappearing Spoon” by Kean and “The Poisoner’s Handbook” by Blum “both present historical perspectives, opting for a story style to discuss the events and people involved,” she says. “They each put a lot of work into bridging the gap between what can sometimes be sterile science and an effective narrative that gets people to care about why the science matters.”
Genres like fantasy and science fiction are complementary, according to Wang. “Constructing an effective world means ensuring readers understand characters’ motivations — the ‘why’ — and ensuring it makes sense,” she says. “It’s also important to show how actions and their consequences influence and motivate characters.”
As she explores the world’s building blocks inside and outside the classroom, Wang works to navigate multiple genres in her writing, as with her studies in chemistry. “I like romance and horror, too,” she says. “I have gripes with committing to a single genre, so I just take whatever I like from each and put them in my stories.”
In chemistry, Wang favors an environment in which scientists can regularly test their ideas. “It’s important to ground chemistry in the real world to create connections for students,” she argues. Advancements in the field have occurred, she notes, because scientists could exit the realm of theory and apply ideas practically.
“Fritz Haber’s work on ammonia synthesis revolutionized approaches to food supply chains,” she says, referring to the German chemist and Nobel laureate. “Converting nitrogen and hydrogen gas to ammonia for fertilizer marked a dramatic shift in how farming could work.” This kind of work could only result from the consistent, controlled, practical application of the theories scientists consider in laboratory environments.
A future built on collaboration and cooperation
Watching the world change dramatically and seeing humanity struggle to grapple with the implications of phenomena like climate change, political unrest, and shifting alliances, Wang emphasizes the importance of deconstructing silos in academia and the workplace. Technology can be a tool for harm, she notes, so inviting more people inside previously segregated spaces helps everyone.
Criticism in both chemistry and writing, Wang believes, is a valuable tool for continuous improvement. Effective communication, explaining complex concepts, and partnering to develop long-term solutions are invaluable when working at the intersection of history, art, and science. In writing, Wang says, criticism can help writers identify areas of their stories to improve and shape interesting ideas.
“We’ve seen the positive results that can occur with effective science writing, which requires rigor and fact-checking,” she says. “MIT’s cross-disciplinary approach to our studies, alongside feedback from teachers and peers, is a great set of tools to carry with us regardless of where we are.”
Wang explores connections between science and stories in her leisure time, too. “I’m a member of MIT’s Anime Club and I enjoy participating in MIT’s Sport Taekwondo Club,” she says. Competing in tae kwon do feeds her competitive drive and gets her out of her head. Her participation in DAAMIT (Digital Art and Animation at MIT) connects her with different groups of people and gives her ideas she can use to tell better stories. “It’s fascinating exploring others’ minds,” she says.
Wang argues that there’s a false divide between science and the humanities and wants the work she does after graduation to bridge that divide. “Writing and learning about science can help,” she asserts. “Fields like conservation and history allow for continued exploration of that intersection.”
Ultimately, Wang believes it’s important to examine narratives carefully and to question notions of science’s inherent superiority over humanities fields. “The humanities and science have equal value,” she says.
A brief history of expansion microscopy

Since an MIT team introduced expansion microscopy in 2015, the technique has powered the science behind kidney disease, plant seeds, the microbiome, Alzheimer’s, viruses, and more.

Nearly 150 years ago, scientists began to imagine how information might flow through the brain based on the shapes of neurons they had seen under the microscopes of the time. With today’s imaging technologies, scientists can zoom in much further, seeing the tiny synapses through which neurons communicate with one another, and even the molecules the cells use to relay their messages. These inside views can spark new ideas about how healthy brains work and reveal important changes that contribute to disease.
This sharper view of biology is not just about the advances that have made microscopes more powerful than ever before. Using methodology developed in the lab of MIT McGovern Institute for Brain Research investigator Edward Boyden, researchers around the world are imaging samples that have been swollen to as much as 20 times their original size so their finest features can be seen more clearly.
“It’s a very different way to do microscopy,” says Boyden, who is also a Howard Hughes Medical Institute (HHMI) investigator, a professor of brain and cognitive sciences and biological engineering, and a member of the Yang Tan Collective at MIT. “In contrast to the last 300 years of bioimaging, where you use a lens to magnify an image of light from an object, we physically magnify objects themselves.” Once a tissue is expanded, Boyden says, researchers can see more even with widely available, conventional microscopy hardware.
Boyden’s team introduced this approach, which they named expansion microscopy (ExM), in 2015. Since then, they have been refining the method and adding to its capabilities, while researchers at MIT and beyond deploy it to learn about life on the smallest of scales.
“It’s spreading very rapidly throughout biology and medicine,” Boyden says. “It’s being applied to kidney disease, the fruit fly brain, plant seeds, the microbiome, Alzheimer’s disease, viruses, and more.”
Origins of ExM
To develop expansion microscopy, Boyden and his team turned to hydrogel, a material with remarkable water-absorbing properties that had already been put to practical use; it’s layered inside disposable diapers to keep babies dry. Boyden’s lab hypothesized that hydrogels could retain their structure while they absorbed hundreds of times their original weight in water, expanding the space between their chemical components as they swell.
After some experimentation, Boyden’s team settled on four key steps to enlarging tissue samples for better imaging. First, the tissue is infused with a hydrogel. Next, the tissue’s biomolecules are anchored to the gel’s web-like matrix, linking them directly to the molecules that make up the gel. Then the tissue is chemically softened, and finally water is added. As the hydrogel absorbs the water, it swells and the tissue expands, growing evenly so the relative positions of its components are preserved.
Boyden and graduate students Fei Chen and Paul Tillberg’s first report on expansion microscopy was published in the journal Science in 2015. In it, the team demonstrated that by spreading apart molecules that had been crowded inside cells, features that would have blurred together under a standard light microscope became separate and distinct. Light microscopes can discriminate between objects that are separated by about 300 nanometers — a limit imposed by the laws of physics. With expansion microscopy, Boyden’s group reported an effective resolution of about 70 nanometers, for a fourfold expansion.
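The arithmetic behind that figure is simple: divide the microscope's diffraction limit by the linear expansion factor. A minimal sketch of the calculation (values from the text; the exact reported resolution also depends on the expansion factor achieved in practice):

```python
def effective_resolution_nm(diffraction_limit_nm: float, expansion_factor: float) -> float:
    """Smallest separable feature size after physically expanding the sample."""
    return diffraction_limit_nm / expansion_factor

# ~300 nm diffraction limit of conventional light microscopy, fourfold expansion
print(effective_resolution_nm(300, 4))  # 75.0 nm, close to the ~70 nm reported
```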
Boyden says this is a level of clarity that biologists need. “Biology is fundamentally, in the end, a nanoscale science,” he says. “Biomolecules are nanoscale, and the interactions between biomolecules are over nanoscale distances. Many of the most important problems in biology and medicine involve nanoscale questions.” Several kinds of sophisticated microscopes, each with their own advantages and disadvantages, can bring this kind of detail to light. But those methods are costly and require specialized skills, making them inaccessible for most researchers. “Expansion microscopy democratizes nanoimaging,” Boyden says. “Now, anybody can go look at the building blocks of life and how they relate to each other.”
Empowering scientists
Since Boyden’s team introduced expansion microscopy in 2015, research groups around the world have published hundreds of papers on discoveries made with the technique. For neuroscientists, it has lit up the intricacies of elaborate neural circuits, exposed how particular proteins organize themselves at and across synapses to facilitate communication between neurons, and uncovered changes associated with aging and disease.
It has been equally empowering for studies beyond the brain. Sabrina Absalon uses expansion microscopy every week in her lab at Indiana University School of Medicine to study the malaria parasite, a single-celled organism packed with specialized structures that enable it to infect and live inside its hosts. The parasite is so small, most of those structures can’t be seen with ordinary light microscopy. “So as a cell biologist, I’m losing the biggest tool to infer protein function, organelle architecture, morphology, linked to function, and all those things — which is my eye,” she says. With expansion, she can not only see the organelles inside a malaria parasite, she can watch them assemble and follow what happens to them when the parasite divides. Understanding those processes, she says, could help drug developers find new ways to interfere with the parasite’s life cycle.
Absalon adds that the accessibility of expansion microscopy is particularly important in the field of parasitology, where a lot of research is happening in parts of the world where resources are limited. Workshops and training programs in Africa, South America, and Asia are ensuring the technology reaches scientists whose communities are directly impacted by malaria and other parasites. “Now they can get super-resolution imaging without very fancy equipment,” Absalon says.
Always improving
Since 2015, Boyden’s interdisciplinary lab group has found a variety of creative ways to improve expansion microscopy and use it in new ways. Their standard technique today enables better labeling, bigger expansion factors, and higher-resolution imaging. Cellular features less than 20 nanometers from one another can now be separated enough to appear distinct under a light microscope.
They’ve also adapted their protocols to work with a range of important sample types, from entire roundworms (popular among neuroscientists, developmental biologists, and other researchers) to clinical samples. In the latter regard, they’ve shown that expansion can help reveal subtle signs of disease, which could enable earlier or less-costly diagnoses.
Originally, the group optimized its protocol for visualizing proteins inside cells, by labeling proteins of interest and anchoring them to the hydrogel prior to expansion. With a new way of processing samples, users can now re-stain their expanded samples with new labels for multiple rounds of imaging, so they can pinpoint the positions of dozens of different proteins in the same tissue. That means researchers can visualize how molecules are organized with respect to one another and how they might interact, or survey large sets of proteins to see, for example, what changes with disease.
But better views of proteins were just the beginning for expansion microscopy. “We want to see everything,” Boyden says. “We’d love to see every biomolecule there is, with precision down to atomic scale.” They’re not there yet — but with new probes and modified procedures, it’s now possible to see not just proteins, but also RNA and lipids in expanded tissue samples.
Labeling lipids, including those that form the membranes surrounding cells, means researchers can now see clear outlines of cells in expanded tissues. With the enhanced resolution afforded by expansion, even the slender projections of neurons can be traced through an image. Typically, researchers have relied on electron microscopy, which generates exquisitely detailed pictures but requires expensive equipment, to map the brain’s circuitry. “Now, you can get images that look a lot like electron microscopy images, but on regular old light microscopes — the kind that everybody has access to,” Boyden says.
Boyden says expansion can be powerful in combination with other cutting-edge tools. When expanded samples are used with an ultra-fast imaging method developed by Eric Betzig, an HHMI investigator at the University of California at Berkeley, called lattice light-sheet microscopy, the entire brain of a fruit fly can be imaged at high resolution in just a few days.
And when RNA molecules are anchored within a hydrogel network and then sequenced in place, scientists can see exactly where inside cells the instructions for building specific proteins are positioned, which Boyden’s team demonstrated in a collaboration with Harvard University geneticist George Church and then-MIT-professor Aviv Regev. “Expansion basically upgrades many other technologies’ resolutions,” Boyden says. “You’re doing mass-spec imaging, X-ray imaging, or Raman imaging? Expansion just improved your instrument.”
Expanding possibilities
Ten years past the first demonstration of expansion microscopy’s power, Boyden and his team are committed to continuing to make expansion microscopy more powerful. “We want to optimize it for different kinds of problems, and making technologies faster, better, and cheaper is always important,” he says. But the future of expansion microscopy will be propelled by innovators outside the Boyden lab, too. “Expansion is not only easy to do, it’s easy to modify — so lots of other people are improving expansion in collaboration with us, or even on their own,” Boyden says.
Boyden points to a group led by Silvio Rizzoli at the University Medical Center Göttingen in Germany that, collaborating with Boyden, has adapted the expansion protocol to discern the physical shapes of proteins. At the Korea Advanced Institute of Science and Technology, researchers led by Jae-Byum Chang, a former postdoc in Boyden’s group, have worked out how to expand entire bodies of mouse embryos and young zebra fish, collaborating with Boyden to set the stage for examining developmental processes and long-distance neural connections with a new level of detail. And mapping connections within the brain’s dense neural circuits could become easier with light-microscopy-based connectomics, an approach developed by Johann Danzl and colleagues at the Institute of Science and Technology Austria that takes advantage of both the high resolution and molecular information that expansion microscopy can reveal.
“The beauty of expansion is that it lets you see a biological system down to its smallest building blocks,” Boyden says.
His team is intent on pushing the method to its physical limits, and anticipates new opportunities for discovery as they do. “If you can map the brain or any biological system at the level of individual molecules, you might be able to see how they all work together as a network — how life really operates,” he says.
New model predicts a chemical reaction’s point of no return

Chemists could use this quick computational method to design more efficient reactions that yield useful compounds, from fuels to pharmaceuticals.

When chemists design new chemical reactions, one useful piece of information involves the reaction’s transition state — the point of no return from which a reaction must proceed.
This information allows chemists to try to produce the right conditions that will allow the desired reaction to occur. However, current methods for predicting the transition state and the path that a chemical reaction will take are complicated and require a huge amount of computational power.
MIT researchers have now developed a machine-learning model that can make these predictions in less than a second, with high accuracy. Their model could make it easier for chemists to design chemical reactions that could generate a variety of useful compounds, such as pharmaceuticals or fuels.
“We’d like to be able to ultimately design processes to take abundant natural resources and turn them into molecules that we need, such as materials and therapeutic drugs. Computational chemistry is really important for figuring out how to design more sustainable processes to get us from reactants to products,” says Heather Kulik, the Lammot du Pont Professor of Chemical Engineering, a professor of chemistry, and the senior author of the new study.
Former MIT graduate student Chenru Duan PhD ’22, who is now at Deep Principle; former Georgia Tech graduate student Guan-Horng Liu, who is now at Meta; and Cornell University graduate student Yuanqi Du are the lead authors of the paper, which appears today in Nature Machine Intelligence.
Better estimates
For any given chemical reaction to occur, it must go through a transition state, which takes place when it reaches the energy threshold needed for the reaction to proceed. These transition states are so fleeting that they’re nearly impossible to observe experimentally.
As an alternative, researchers can calculate the structures of transition states using techniques based on quantum chemistry. However, that process requires a great deal of computing power and can take hours or days to calculate a single transition state.
“Ideally, we’d like to be able to use computational chemistry to design more sustainable processes, but this computation in itself is a huge use of energy and resources in finding these transition states,” Kulik says.
In 2023, Kulik, Duan, and others reported on a machine-learning strategy that they developed to predict the transition states of reactions. This strategy is faster than using quantum chemistry techniques, but still slower than what would be ideal because it requires the model to generate about 40 structures, then run those predictions through a “confidence model” to predict which states were most likely to occur.
One reason why that model needs to be run so many times is that it uses randomly generated guesses for the starting point of the transition state structure, then performs dozens of calculations until it reaches its final, best guess. These randomly generated starting points may be very far from the actual transition state, which is why so many steps are needed.
The researchers’ new model, React-OT, described in the Nature Machine Intelligence paper, uses a different strategy. In this work, the researchers trained their model to begin from an estimate of the transition state generated by linear interpolation — a technique that estimates each atom’s position by moving it halfway between its position in the reactants and in the products, in three-dimensional space.
“A linear guess is a good starting point for approximating where that transition state will end up,” Kulik says. “What the model’s doing is starting from a much better initial guess than just a completely random guess, as in the prior work.”
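The midpoint guess described above is easy to sketch in code. The function name and array layout below are illustrative only, not React-OT's actual implementation:

```python
import numpy as np

def interpolate_ts_guess(reactant_xyz: np.ndarray, product_xyz: np.ndarray) -> np.ndarray:
    """Guess a transition-state geometry: each atom's 3-D position is the
    midpoint of its positions in the reactant and product, shape (n_atoms, 3)."""
    return 0.5 * (reactant_xyz + product_xyz)

# Toy two-atom example (coordinates in angstroms, purely illustrative)
reactant = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
product = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
guess = interpolate_ts_guess(reactant, product)
print(guess[1])  # the second atom starts halfway between 1.0 and 2.0, at x = 1.5
```

Starting the model from this interpolated geometry, rather than a random one, is what cuts the refinement down to a handful of steps.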
Because of this, it takes the model fewer steps and less time to generate a prediction. In the new study, the researchers showed that their model could make predictions with only about five steps, taking about 0.4 seconds. These predictions don’t need to be fed through a confidence model, and they are about 25 percent more accurate than the predictions generated by the previous model.
“That really makes React-OT a practical model that we can directly integrate to the existing computational workflow in high-throughput screening to generate optimal transition state structures,” Duan says.
“A wide array of chemistry”
To create React-OT, the researchers trained it on the same dataset that they used to train their older model. These data contain structures of reactants, products, and transition states, calculated using quantum chemistry methods, for 9,000 different chemical reactions, mostly involving small organic or inorganic molecules.
Once trained, the model performed well on other reactions from this set, which had been held out of the training data. It also performed well on other types of reactions that it hadn’t been trained on, and could make accurate predictions involving reactions with larger reactants, which often have side chains that aren’t directly involved in the reaction.
“This is important because there are a lot of polymerization reactions where you have a big macromolecule, but the reaction is occurring in just one part. Having a model that generalizes across different system sizes means that it can tackle a wide array of chemistry,” Kulik says.
The researchers are now working on training the model so that it can predict transition states for reactions between molecules that include additional elements, including sulfur, phosphorus, chlorine, silicon, and lithium.
“To quickly predict transition state structures is key to all chemical understanding,” says Markus Reiher, a professor of theoretical chemistry at ETH Zurich, who was not involved in the study. “The new approach presented in the paper could very much accelerate our search and optimization processes, bringing us faster to our final result. As a consequence, also less energy will be consumed in these high-performance computing campaigns. Any progress that accelerates this optimization benefits all sorts of computational chemical research.”
The MIT team hopes that other scientists will make use of their approach in designing their own reactions, and have created an app for that purpose.
“Whenever you have a reactant and product, you can put them into the model and it will generate the transition state, from which you can estimate the energy barrier of your intended reaction, and see how likely it is to occur,” Duan says.
The research was funded by the U.S. Army Research Office, the U.S. Department of Defense Basic Research Office, the U.S. Air Force Office of Scientific Research, the National Science Foundation, and the U.S. Office of Naval Research.
Astronomers discover a planet that’s rapidly disintegrating, producing a comet-like tail

The small and rocky lava world sheds an amount of material equivalent to the mass of Mount Everest every 30.5 hours.

MIT astronomers have discovered a planet some 140 light-years from Earth that is rapidly crumbling to pieces.
The disintegrating world is about the mass of Mercury, although it circles about 20 times closer to its star than Mercury does to the sun, completing an orbit every 30.5 hours. At such close proximity to its star, the planet is likely covered in magma that is boiling off into space. As the roasting planet whizzes around its star, it is shedding an enormous amount of surface minerals and effectively evaporating away.
The astronomers spotted the planet using NASA’s Transiting Exoplanet Survey Satellite (TESS), an MIT-led mission that monitors the nearest stars for transits, or periodic dips in starlight that could be signs of orbiting exoplanets. The signal that tipped the astronomers off was a peculiar transit, with a dip that fluctuated in depth every orbit.
The scientists confirmed that the signal is of a tightly orbiting rocky planet that is trailing a long, comet-like tail of debris.
“The extent of the tail is gargantuan, stretching up to 9 million kilometers long, or roughly half of the planet’s entire orbit,” says Marc Hon, a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research.
It appears that the planet is disintegrating at a dramatic rate, shedding an amount of material equivalent to one Mount Everest each time it orbits its star. At this pace, given its small mass, the researchers predict that the planet may completely disintegrate in about 1 million to 2 million years.
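Two of the figures above can be sanity-checked with back-of-envelope arithmetic. All input values below are rough outside estimates chosen for illustration, not numbers taken from the study:

```python
import math

# 1) Tail length vs. orbit: the planet sits roughly 20x closer to its star
#    than Mercury does to the sun.
MERCURY_ORBIT_RADIUS_KM = 5.79e7
orbit_radius_km = MERCURY_ORBIT_RADIUS_KM / 20
half_circumference_km = math.pi * orbit_radius_km
print(f"half orbit ~ {half_circumference_km:.1e} km")  # ~9 million km, matching the tail

# 2) Lifetime: shedding about one Mount Everest of rock per 30.5-hour orbit.
PLANET_MASS_KG = 3.3e23   # "about the mass of Mercury" (assumed value)
EVEREST_MASS_KG = 1.6e15  # commonly quoted estimate; varies by source
orbits = PLANET_MASS_KG / EVEREST_MASS_KG
lifetime_years = orbits * 30.5 / (24 * 365.25)
print(f"lifetime ~ {lifetime_years:.1e} years")  # same order as the predicted 1-2 million years
```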
“We got lucky with catching it exactly when it’s really going away,” says Avi Shporer, a collaborator on the discovery who is also at the TESS Science Office. “It’s like on its last breath.”
Hon and Shporer, along with their colleagues, have published their results today in the Astrophysical Journal Letters. Their MIT co-authors include Saul Rappaport, Andrew Vanderburg, Jeroen Audenaert, William Fong, Jack Haviland, Katharine Hesse, Daniel Muthukrishna, Glen Petitpas, Ellie Schmelzer, Sara Seager, and George Ricker, along with collaborators from multiple other institutions.
Roasting away
The new planet, which scientists have tagged as BD+05 4868 Ab, was detected almost by happenstance.
“We weren’t looking for this kind of planet,” Hon says. “We were doing the typical planet vetting, and I happened to spot this signal that appeared very unusual.”
The typical signal of an orbiting exoplanet looks like a brief dip in a light curve, which repeats regularly, indicating that a compact body such as a planet is briefly passing in front of, and temporarily blocking, the light from its host star.
What Hon and his colleagues detected from the host star BD+05 4868 A, located in the constellation of Pegasus, did not fit this typical pattern. Though a transit appeared every 30.5 hours, the star’s brightness took much longer to return to normal, suggesting a long trailing structure was still blocking starlight. Even more intriguing, the depth of the dip changed with each orbit, suggesting that whatever was passing in front of the star wasn’t always the same shape or blocking the same amount of light.
“The shape of the transit is typical of a comet with a long tail,” Hon explains. “Except that it’s unlikely that this tail contains volatile gases and ice as expected from a real comet — these would not survive long at such close proximity to the host star. Mineral grains evaporated from the planetary surface, however, can linger long enough to present such a distinctive tail.”
Given its proximity to its star, the team estimates that the planet is roasting at around 1,600 degrees Celsius, or close to 3,000 degrees Fahrenheit. As the star roasts the planet, any minerals on its surface are likely boiling away and escaping into space, where they cool into a long and dusty tail.
The dramatic demise of this planet is a consequence of its low mass, which is between that of Mercury and the moon. More massive terrestrial planets like the Earth have a stronger gravitational pull and therefore can hold onto their atmospheres. For BD+05 4868 Ab, the researchers suspect there is very little gravity to hold the planet together.
“This is a very tiny object, with very weak gravity, so it easily loses a lot of mass, which then further weakens its gravity, so it loses even more mass,” Shporer explains. “It’s a runaway process, and it’s only getting worse and worse for the planet.”
Mineral trail
Of the nearly 6,000 planets that astronomers have discovered to date, scientists know of only three other disintegrating planets beyond our solar system. Each of these crumbling worlds was spotted more than 10 years ago in data from NASA’s Kepler Space Telescope, and all three trail similar comet-like tails. BD+05 4868 Ab has the longest tail and the deepest transits of the four known disintegrating planets to date.
“That implies that its evaporation is the most catastrophic, and it will disappear much faster than the other planets,” Hon explains.
The planet’s host star is relatively close, and thus brighter than the stars hosting the other three disintegrating planets, making this system ideal for further observations using NASA’s James Webb Space Telescope (JWST), which can help determine the mineral makeup of the dust tail by identifying which colors of infrared light it absorbs.
This summer, Hon and graduate student Nicholas Tusay from Penn State University will lead observations of BD+05 4868 Ab using JWST. “This will be a unique opportunity to directly measure the interior composition of a rocky planet, which may tell us a lot about the diversity and potential habitability of terrestrial planets outside our solar system,” Hon says.
The researchers also will look through TESS data for signs of other disintegrating worlds.
“Sometimes with the food comes the appetite, and we are now trying to initiate the search for exactly these kinds of objects,” Shporer says. “These are weird objects, and the shape of the signal changes over time, which is something that’s difficult for us to find. But it’s something we’re actively working on.”
This work was supported, in part, by NASA.
MIT’s McGovern Institute is shaping brain science and improving human lives on a global scale

A quarter century after its founding, the McGovern Institute reflects on its discoveries in the areas of neuroscience, neurotechnology, artificial intelligence, brain-body connections, and therapeutics.

In 2000, Patrick J. McGovern ’59 and Lore Harp McGovern made an extraordinary gift to establish the McGovern Institute for Brain Research at MIT, driven by their deep curiosity about the human mind and their belief in the power of science to change lives. Their $350 million pledge began with a simple yet audacious vision: to understand the human brain in all its complexity, and to leverage that understanding for the betterment of humanity.
Twenty-five years later, the McGovern Institute stands as a testament to the power of interdisciplinary collaboration, continuing to shape our understanding of the brain and improve the quality of life for people worldwide.
In the beginning
“This is, by any measure, a truly historic moment for MIT,” said MIT’s 15th president, Charles M. Vest, during his opening remarks at an event in 2000 to celebrate the McGovern gift agreement. “The creation of the McGovern Institute will launch one of the most profound and important scientific ventures of this century in what surely will be a cornerstone of MIT scientific contributions from the decades ahead.”
Vest tapped Phillip A. Sharp, MIT Institute professor emeritus of biology and Nobel laureate, to lead the institute, and appointed six MIT professors — Emilio Bizzi, Martha Constantine-Paton, Ann Graybiel PhD ’71, H. Robert Horvitz ’68, Nancy Kanwisher ’80, PhD ’86, and Tomaso Poggio — to represent its founding faculty. Construction began in 2003 on Building 46, a 376,000-square-foot research complex at the northeastern edge of campus. MIT’s new “gateway from the north” would eventually house the McGovern Institute, the Picower Institute for Learning and Memory, and MIT’s Department of Brain and Cognitive Sciences.
Robert Desimone, the Doris and Don Berkey Professor of Neuroscience at MIT, succeeded Sharp as director of the McGovern Institute in 2005, and assembled a distinguished roster of 22 faculty members, including a Nobel laureate, a Breakthrough Prize winner, two National Medal of Science/Technology awardees, and 15 members of the American Academy of Arts and Sciences.
A quarter century of innovation
On April 11, 2025, the McGovern Institute celebrated its 25th anniversary with a half-day symposium featuring presentations by MIT Institute Professor Robert Langer, alumni speakers from various McGovern labs, and Desimone, who is in his 20th year as director of the institute.
Desimone highlighted the institute’s recent discoveries, including the development of the CRISPR genome-editing system, which has culminated in the world’s first CRISPR gene therapy approved for humans — a remarkable achievement that is ushering in a new era of transformative medicine. In other milestones, McGovern researchers developed the first prosthetic limb fully controlled by the body’s nervous system; a flexible probe that taps into gut-brain communication; an expansion microscopy technique that paves the way for biology labs around the world to perform nanoscale imaging; and advanced computational models that demonstrate how we see, hear, use language, and even think about what others are thinking. Equally transformative has been the McGovern Institute’s work in neuroimaging, uncovering the architecture of human thought and establishing markers that signal the early emergence of mental illness, before symptoms even appear.
Synergy and open science
“I am often asked what makes us different from other neuroscience institutes and programs around the world,” says Desimone. “My answer is simple. At the McGovern Institute, the whole is greater than the sum of its parts.”
Many discoveries at the McGovern Institute have depended on collaborations across multiple labs, ranging from biological engineering to human brain imaging and artificial intelligence. In modern brain research, significant advances often require the joint expertise of people working in neurophysiology, behavior, computational analysis, neuroanatomy, and molecular biology. More than a dozen different MIT departments are represented by McGovern faculty and graduate students, and this synergy has led to insights and innovations that are far greater than what any single discipline could achieve alone.
Also baked into the McGovern ethos is a spirit of open science, in which newly developed technologies are shared with colleagues around the world. Through hospital partnerships, for example, McGovern researchers are testing their tools and therapeutic interventions in clinical settings, accelerating their discoveries into real-world solutions.
The McGovern legacy
Hundreds of scientific papers have emerged from McGovern labs over the past 25 years, but most faculty would argue that it’s the people — the young researchers — that truly define the McGovern Institute. Award-winning faculty often attract the brightest young minds, but many McGovern faculty also serve as mentors, creating a diverse and vibrant scientific community that is setting the global standard for brain research and its applications. Kanwisher, for example, has guided more than 70 doctoral students and postdocs who have gone on to become leading scientists around the world. Three of her former students, Evelina Fedorenko PhD ’07, Josh McDermott PhD ’06, and Rebecca Saxe PhD ’03, the John W. Jarve (1978) Professor of Brain and Cognitive Sciences, are now her colleagues at the McGovern Institute. Other McGovern alumni shared stories of mentorship, science, and real-world impact at the 25th anniversary symposium.
Looking to the future, the McGovern community is more committed than ever to unraveling the mysteries of the brain and making a meaningful difference in the lives of individuals on a global scale.
“By promoting team science, open communication, and cross-discipline partnerships,” says institute co-founder Lore Harp McGovern, “our culture demonstrates how individual expertise can be amplified through collective effort. I am honored to be the co-founder of this incredible institution — onward to the next 25 years!”
Programmers can now use large language models (LLMs) to generate computer code more quickly. However, this only makes programmers’ lives easier if that code follows the rules of the programming language and doesn’t cause a computer to crash.
Some methods exist for ensuring LLMs conform to the rules of whatever language they are generating text in, but many of these methods either distort the model’s intended meaning or are too time-consuming to be feasible for complex tasks.
A new approach developed by researchers at MIT and elsewhere automatically guides an LLM to generate text that adheres to the rules of the relevant language, such as a particular programming language, and is also error-free. Their method allows an LLM to allocate efforts toward outputs that are most likely to be valid and accurate, while discarding unpromising outputs early in the process. This probabilistic approach boosts computational efficiency.
Due to these efficiency gains, the researchers’ architecture enabled small LLMs to outperform much larger models in generating accurate, properly structured outputs for several real-world use cases, including molecular biology and robotics.
In the long run, this new architecture could help nonexperts control AI-generated content. For instance, it could allow businesspeople to write complex queries in SQL, a language for database manipulation, using only natural language prompts.
“This work has implications beyond research. It could improve programming assistants, AI-powered data analysis, and scientific discovery tools by ensuring that AI-generated outputs remain both useful and correct,” says João Loula, an MIT graduate student and co-lead author of a paper on this framework.
Loula is joined on the paper by co-lead authors Benjamin LeBrun, a research assistant at the Mila-Quebec Artificial Intelligence Institute, and Li Du, a graduate student at Johns Hopkins University; co-senior authors Vikash Mansinghka ’05, MEng ’09, PhD ’09, a principal research scientist and leader of the Probabilistic Computing Project in the MIT Department of Brain and Cognitive Sciences; Alexander K. Lew SM ’20, an assistant professor at Yale University; Tim Vieira, a postdoc at ETH Zurich; and Timothy J. O’Donnell, an associate professor at McGill University and a Canada CIFAR AI Chair at Mila, who led the international team; as well as several others. The research will be presented at the International Conference on Learning Representations.
Enforcing structure and meaning
One common approach for controlling the structured text generated by LLMs involves checking an entire output, like a block of computer code, to make sure it is valid and will run error-free. If not, the user must start again, racking up computational resources.
On the other hand, a programmer could stop to check the output along the way. While this can ensure the code adheres to the programming language and is structurally valid, incrementally correcting the code may cause it to drift from the meaning the user intended, hurting its accuracy in the long run.
“It is much easier to enforce structure than meaning. We can quickly check whether something is in the right programming language, but to check its meaning you have to execute the code. Our work is also about dealing with these different types of information,” Loula says.
The researchers’ approach involves engineering knowledge into the LLM to steer it toward the most promising outputs. These outputs are more likely to follow the structural constraints defined by a user, and to have the meaning the user intends.
“We are not trying to train an LLM to do this. Instead, we are engineering some knowledge that an expert would have and combining it with the LLM’s knowledge, which offers a very different approach to scaling than you see in deep learning,” Mansinghka adds.
They accomplish this using a technique called sequential Monte Carlo, which enables multiple threads of parallel generation from an LLM to compete with one another. The model dynamically allocates resources to different threads of parallel computation based on how promising their output appears.
Each output is given a weight that represents how likely it is to be structurally valid and semantically accurate. At each step in the computation, the model focuses on those with higher weights and throws out the rest.
In a sense, it is like the LLM has an expert looking over its shoulder to ensure it makes the right choices at each step, while keeping it focused on the overall goal. The user specifies their desired structure and meaning, as well as how to check the output, then the researchers’ architecture guides the LLM to do the rest.
“We’ve worked out the hard math so that, for any kinds of constraints you’d like to incorporate, you are going to get the proper weights. In the end, you get the right answer,” Loula says.
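The weight-and-resample loop described above can be sketched in a few lines. The snippet below is a toy illustration of the general sequential Monte Carlo idea, not the researchers’ actual system: the “LLM” here is just a random character generator, and the weight function is a trivial structural check, both invented for this example.

```python
import random

random.seed(0)

# Toy "LLM": proposes the next character of a string at random.
def propose_next(prefix):
    return prefix + random.choice("0123456789+")

# Toy structural check: a valid string of digits and '+' signs
# never starts with '+' and never has two '+' in a row.
def weight(candidate):
    if candidate.startswith("+") or "++" in candidate:
        return 0.0   # structurally invalid: weight of zero
    return 1.0       # otherwise keep with uniform weight

def smc_generate(num_particles=20, steps=5):
    particles = [""] * num_particles
    for _ in range(steps):
        # 1. Extend every partial output in parallel.
        particles = [propose_next(p) for p in particles]
        # 2. Weight each partial output by how promising it looks.
        weights = [weight(p) for p in particles]
        if sum(weights) == 0:
            return []  # every thread violated the constraint
        # 3. Resample: high-weight threads are duplicated,
        #    zero-weight threads are discarded early.
        particles = random.choices(particles, weights=weights, k=num_particles)
    return particles

for p in smc_generate()[:3]:
    print(p)
```

Because invalid partial outputs get zero weight at every step, every string that survives all five rounds satisfies the structural constraint; in the real system, the weights also reflect semantic promise rather than a simple syntax check.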
Boosting small models
To test their approach, they applied the framework to LLMs tasked with generating four types of outputs: Python code, SQL database queries, molecular structures, and plans for a robot to follow.
When compared to existing approaches, the researchers’ method performed more accurately while requiring less computation.
In Python code generation, for instance, the researchers’ architecture enabled a small, open-source model to outperform a specialized, commercial closed-source model that is more than double its size.
“We are very excited that we can allow these small models to punch way above their weight,” Loula says.
Moving forward, the researchers want to use their technique to control larger chunks of generated text, rather than working one small piece at a time. They also want to combine their method with learning, so that as they control the outputs a model generates, it learns to be more accurate.
In the long run, this project could have broader applications for non-technical users. For instance, it could be combined with systems for automated data modeling, and querying generative models of databases.
The approach could also enable machine-assisted data analysis systems, where the user can converse with software that accurately models the meaning of the data and the questions asked by the user, adds Mansinghka.
“One of the fundamental questions of linguistics is how the meaning of words, phrases, and sentences can be grounded in models of the world, accounting for uncertainty and vagueness in meaning and reference. LLMs, predicting likely token sequences, don’t address this problem. Our paper shows that, in narrow symbolic domains, it is technically possible to map from words to distributions on grounded meanings. It’s a small step towards deeper questions in cognitive science, linguistics, and artificial intelligence needed to understand how machines can communicate about the world like we do,” says O’Donnell.
This research is funded and supported, in part, by the Canada CIFAR AI Chairs Program, the MIT Quest for Intelligence, and Convergent Research.
Workshop explores new advanced materials for a growing world

Speakers described challenges and potential solutions for producing materials to meet demands associated with data centers, infrastructure, and other technology.

It is clear that humankind needs ever more resources, from computing power to steel and concrete, to meet the growing demands associated with data centers, infrastructure, and other mainstays of society. New, cost-effective approaches for producing the advanced materials key to that growth were the focus of a two-day workshop at MIT on March 11 and 12.
A theme throughout the event was the importance of collaboration between and within universities and industries. The goal is to “develop concepts that everybody can use together, instead of everybody doing something different and then trying to sort it out later at great cost,” said Lionel Kimerling, the Thomas Lord Professor of Materials Science and Engineering at MIT.
The workshop was produced by MIT’s Materials Research Laboratory (MRL), which has an industry collegium, and MIT’s Industrial Liaison Program.
The program included an address by Javier Sanfelix, lead of the Advanced Materials Team for the European Union. Sanfelix gave an overview of the EU’s strategy for developing advanced materials, which he said are “key enablers of the green and digital transition for European industry.”
That strategy has already led to several initiatives. These include a material commons, or shared digital infrastructure for the design and development of advanced materials, and an advanced materials academy for educating new innovators and designers. Sanfelix also described an Advanced Materials Act for 2026 that aims to put in place a legislative framework that supports the entire innovation cycle.
Sanfelix was visiting MIT to learn more about how the Institute is approaching the future of advanced materials. “We see MIT as a leader worldwide in technology, especially on materials, and there is a lot to learn about [your] industry collaborations and technology transfer with industry,” he said.
Innovations in steel and concrete
The workshop began with talks about innovations involving two of the most common human-made materials in the world: steel and cement. We’ll need more of both but must reckon with the huge amounts of energy required to produce them and their impact on the environment due to greenhouse-gas emissions during that production.
One way to address our need for more steel is to reuse what we have, said C. Cem Tasan, the POSCO Associate Professor of Metallurgy in the Department of Materials Science and Engineering (DMSE) and director of the Materials Research Laboratory.
But most of the existing approaches to recycling scrap steel involve melting the metal. “And whenever you are dealing with molten metal, everything goes up, from energy use to carbon-dioxide emissions. Life is more difficult,” Tasan said.
The question he and his team asked is whether they could reuse scrap steel without melting it. Could they consolidate solid scraps, then roll them together using existing equipment to create new sheet metal? From the materials-science perspective, Tasan said, that shouldn’t work, for several reasons.
But it does. “We’ve demonstrated the potential in two papers and two patent applications already,” he said. Tasan noted that the approach focuses on high-quality manufacturing scrap. “This is not junkyard scrap,” he said.
Tasan went on to explain how and why the new process works from a materials-science perspective, then gave examples of how the recycled steel could be used. “My favorite example is the stainless-steel countertops in restaurants. Do you really need the mechanical performance of stainless steel there?” You could use the recycled steel instead.
Hessam Azarijafari addressed another common, indispensable material: concrete. This year marks the 16th anniversary of the MIT Concrete Sustainability Hub (CSHub), which began when a set of industry leaders and politicians reached out to MIT to learn more about the benefits and environmental impacts of concrete.
The hub’s work now centers around three main themes: working toward a carbon-neutral concrete industry; the development of a sustainable infrastructure, with a focus on pavement; and how to make our cities more resilient to natural hazards through investment in stronger, cooler construction.
Azarijafari, the deputy director of the CSHub, went on to give several examples of research results that have come out of the CSHub. These include many models to identify different pathways to decarbonize the cement and concrete sector. Other work involves pavements, which the general public thinks of as inert, Azarijafari said. “But we have [created] a state-of-the-art model that can assess interactions between pavement and vehicles.” It turns out that pavement surface characteristics and structural performance “can influence excess fuel consumption by inducing an additional rolling resistance.”
Azarijafari emphasized the importance of working closely with policymakers and industry. That engagement is key “to sharing the lessons that we have learned so far.”
Toward a resource-efficient microchip industry
Consider the following: In 2020 the number of cell phones, GPS units, and other devices connected to the “cloud,” or large data centers, exceeded 50 billion. And data-center traffic in turn is scaling by 1,000 times every 10 years.
But all of that computation takes energy. And “all of it has to happen at a constant cost of energy, because the gross domestic product isn’t changing at that rate,” said Kimerling. The solution is to either produce much more energy, or make information technology much more energy-efficient. Several speakers at the workshop focused on the materials and components behind the latter.
Key to everything they discussed: adding photonics, or using light to carry information, to the well-established electronics behind today’s microchips. “The bottom line is that integrating photonics with electronics in the same package is the transistor for the 21st century. If we can’t figure out how to do that, then we’re not going to be able to scale forward,” said Kimerling, who is director of the MIT Microphotonics Center.
MIT has long been a leader in the integration of photonics with electronics. For example, Kimerling described the Integrated Photonics System Roadmap – International (IPSR-I), a global network of more than 400 industrial and R&D partners working together to define and create photonic integrated circuit technology. IPSR-I is led by the MIT Microphotonics Center and PhotonDelta. Kimerling began the organization in 1997.
Last year IPSR-I released its latest roadmap for photonics-electronics integration, “which outlines a clear way forward and specifies an innovative learning curve for scaling performance and applications for the next 15 years,” Kimerling said.
Another major MIT program focused on the future of the microchip industry is FUTUR-IC, a new global alliance for sustainable microchip manufacturing. Begun last year, FUTUR-IC is funded by the National Science Foundation.
“Our goal is to build a resource-efficient microchip industry value chain,” said Anuradha Murthy Agarwal, a principal research scientist at the MRL and leader of FUTUR-IC. That includes all of the elements that go into manufacturing future microchips, including workforce education and techniques to mitigate potential environmental effects.
FUTUR-IC is also focused on electronic-photonic integration. “My mantra is to use electronics for computation, [and] shift to photonics for communication to bring this energy crisis in control,” Agarwal said.
But integrating electronic chips with photonic chips is not easy. To that end, Agarwal described some of the challenges involved. For example, currently it is difficult to connect the optical fibers carrying communications to a microchip. That’s because the alignment between the two must be almost perfect or the light will disperse. And the dimensions involved are minuscule. An optical fiber has a diameter of only millionths of a meter. As a result, today each connection must be actively tested with a laser to ensure that the light will come through.
That said, Agarwal went on to describe a new coupler between the fiber and chip that could solve the problem and allow robots to passively assemble the chips (no laser needed). The work, which was conducted by researchers including MIT graduate student Drew Wenninger, Agarwal, and Kimerling, has been patented, and is reported in two papers. A second recent breakthrough in this area involving a printed micro-reflector was described by Juejun “JJ” Hu, John F. Elliott Professor of Materials Science and Engineering.
FUTUR-IC is also leading educational efforts for training a future workforce, as well as techniques for detecting — and potentially destroying — the perfluoroalkyl substances (PFAS, or “forever chemicals”) released during microchip manufacturing. FUTUR-IC educational efforts, including virtual reality and game-based learning, were described by Sajan Saini, education director for FUTUR-IC. PFAS detection and remediation were discussed by Aristide Gumyusenge, an assistant professor in DMSE, and Jesus Castro Esteban, a postdoc in the Department of Chemistry.
Other presenters at the workshop included Antoine Allanore, the Heather N. Lechtman Professor of Materials Science and Engineering; Katrin Daehn, a postdoc in the Allanore lab; Xuanhe Zhao, the Uncas (1923) and Helen Whitaker Professor in the Department of Mechanical Engineering; Richard Otte, CEO of Promex; and Carl Thompson, the Stavros V. Salapatas Professor in Materials Science and Engineering.
Enhancing the future of teaching and learning at MIT

The MIT Festival of Learning sparked discussions on better integrating a sense of purpose and social responsibility into hands-on education.

As technology rapidly propels society forward, MIT is rethinking how it prepares students to face the world and its greatest challenges. Generations of educators have shared knowledge at MIT by connecting lessons to practical applications, but what does the Institute’s motto “mens et manus” (“mind and hand”), referring to hands-on learning, look like in the future?
This was the guiding question of the annual Festival of Learning, co-hosted by MIT Open Learning and the Office of the Vice Chancellor. MIT faculty, instructors, students, and staff engaged in meaningful discussions about teaching and learning as the Institute critically revisits its undergraduate academic program.
“Because the world is changing, we owe it to our students to reflect these realities in our academic experiences,” said Daniel E. Hastings, Cecil and Ida Green Education Professor of Aeronautics and Astronautics and then-interim vice chancellor. “It’s in our DNA to try new things at MIT.”
Fostering a greater sense of purpose
Like many engineering schools, MIT emphasizes hands-on learning. What deeply concerned panelists like Susan Silbey, the Leon and Anne Goldberg Professor of Humanities, Sociology, and Anthropology, was that students are not engaging in enough intellectual thinking through significant reading, textual interpretation, or involvement with uncertain questions.
Christopher Capozzola, senior associate dean for open learning, echoed this, saying, “We have designed a world in which [students] feel enormous pressure to maximize their career outcomes at the end” of their undergraduate education.
Students move in systems of explicit incentives, he said, such as grades and the General Institute Requirements, but also respond to unwritten incentives, like extracurriculars, internships, and prestige. “That’s our fault, not theirs,” Capozzola said, and identified this as an opportunity to improve the MIT curriculum.
How can educators encourage students to connect more with course material, instead of treating it as a means to an end? Adam Martin, professor of biology, always asks his students to challenge the status quo by incorporating test questions with data arguing against the models from the textbook.
“I want them to think,” Martin said. “I want them to challenge what we think is the frontier of the field.”
Considering context
One of the most significant topics of discussion was the importance of context in education. For example, class 7.102 (Introduction to Molecular Biology Techniques) uses story-based problem-solving to show students how the curriculum fits into real-world contexts.
The fictional premise driving 7.102 is that a child fell into the Charles River and caught an antibiotic-resistant bacterial infection. To save the child, students must characterize the bacteria and identify phages that could kill them.
“It really shows the students not only basic techniques, but what it’s like to be in a team and in a discovery situation,” said Martin.
This hands-on approach — collecting water, isolating the phages within, and comparing to more reliable sources — unlocks students’ imaginations, Martin said. In an environment intentionally designed to give students room to fail, the narrative incentivizes students to persist with repeated experimentation.
But Silbey, who is also a professor of behavioral and policy sciences at MIT Sloan School of Management, has noticed the reluctance of students to engage with nontechnical contexts. Students, she concluded, “have minimal understanding of how the action of any individual becomes part of something larger, durable, consequential through invisible but powerful mechanisms of aggregation.”
Educators agreed that contextual understanding is as important to a STEM curriculum as technical instruction. “Teaching and thinking at that interface between technology and society is really crucial for making technologists feel responsible for the things that they create and the things that they use,” added Capozzola.
Amitava Mitra, founding executive director of MIT New Engineering Education Transformation (NEET), highlighted an example where students developed an effective technical solution to decarbonize homes in Ulaanbaatar, Mongolia. Or so they thought.
“Once we saw what was on the ground and understood the context — the social model, the social processes — we realized we had no clue,” the students told Mitra.
One way MIT is trying to bridge these gaps is through the Social and Ethical Responsibilities of Computing program. This curriculum integrates ethical considerations alongside computing courses to help students envision the social and moral consequences of their actions.
In one technical machinery lecture, Silbey’s students had trouble envisioning the negative impacts of autonomous vehicles. But after she shared the history of the regulation of dangerous products, she said many students became more open to examining potential ripple effects.
Creating interdisciplinary opportunities
The panelists viewed interdisciplinary education as critical preparation for the complexities of the real world.
“Whether it’s tackling climate change, creating sustainable infrastructure, creating cutting-edge technologies in life sciences or robotics, we need our engineers, social scientists, and scientists to work in teams cutting across disciplines to create solutions today,” said Mitra.
To expand opportunities for undergraduates to collaborate across academic departments and other campus units, NEET was launched in 2017. NEET is a project-based experiential learning curriculum that requires technical and social expertise. One student group, for example, is designing, building, and installing a solar-powered charging station at MIT Open Space. To introduce a project like this into MIT’s infrastructure, students must coordinate with a variety of Institute offices — such as Campus Planning, Engineering & Energy Management, and Insurance — and city groups, like the Cambridge Fire Department.
“It’s an eye-opener for them,” said Mitra.
Capozzola noted how “para-curricular” activities like NEET, MIT Undergraduate Research Opportunities Program, MISTI, D-Lab, and others prove that effective hands-on education doesn’t have to be a formal credit-bearing program.
“Students put in enormous amounts of time and effort for things that shape them, that speak to their passion and this deep engagement,” Capozzola said. “This is a special area where I think MIT particularly excels.”
Moving forward together
In a panel featuring both MIT instructors and students, educators recognized that designing an effective curriculum requires balancing content across subjects or core topics while organizing materials on Canvas — MIT’s learning management system — in a way that’s intuitive for students. Instructors collaborated directly with students and staff via MIT’s Canvas Innovation Fund to make these improvements.
“There are things that the novice students see in what I’m teaching that I don’t see,” said Sean Robinson, lecturer in physics and associate director of the Helena Foundation Junior Laboratory. “Our class is aimed at taking people who think of themselves as physics students and getting them to think of themselves as physicists. I want junior colleagues.”
The biggest takeaway from student panelists was the importance of minimizing logistical struggles by structuring Canvas to guide students toward learning objectives. Cory Romanov ’24, technical instructor of physics, and McKenzie Dinesen, a senior in aerospace engineering and Russian and Eurasian studies, emphasized that explaining learning goals and organizing course content with clear deadlines were simple improvements that went a long way to enhance the student experience.
Emphasizing the benefit of feedback like this, Capozzola said, “It’s important to give people at MIT — students, staff, and others who are often closed out of conversations — a more democratic voice so that we can be a model for the university that we want to be in 25 years.”
As MIT continues to enhance its educational approach, the insights from the Festival of Learning highlight a crucial evolution in how students engage with knowledge. From rethinking course structures to integrating interdisciplinary and experiential learning, the panelists underscored the need for a curriculum that balances technical expertise with a deep understanding of social and ethical contexts.
“It’s important to equip students on the ‘mens’ side with the kinds of civic knowledge that they need to go out into the world,” said Capozzola, “but also the ‘manus,’ to be able to do the everyday work of getting your hands dirty and building democratic institutions.”
New study reveals how cleft lip and cleft palate can arise

MIT biologists have found that defects in some transfer RNA molecules can lead to the formation of these common conditions.

Cleft lip and cleft palate are among the most common birth defects, occurring in about one in 1,050 births in the United States. These defects, which appear when the tissues that form the lip or the roof of the mouth do not join completely, are believed to be caused by a mix of genetic and environmental factors.
In a new study, MIT biologists have discovered how a genetic variant often found in people with these facial malformations leads to the development of cleft lip and cleft palate.
Their findings suggest that the variant diminishes cells’ supply of transfer RNA, a molecule that is critical for assembling proteins. When this happens, embryonic face cells are unable to fuse to form the lip and roof of the mouth.
“Until now, no one had made the connection that we made. This particular gene was known to be part of the complex involved in the splicing of transfer RNA, but it wasn’t clear that it played such a crucial role for this process and for facial development. Without the gene, known as DDX1, certain transfer RNA can no longer bring amino acids to the ribosome to make new proteins. If the cells can’t process these tRNAs properly, then the ribosomes can’t make protein anymore,” says Michaela Bartusel, an MIT research scientist and the lead author of the study.
Eliezer Calo, an associate professor of biology at MIT, is the senior author of the paper, which appears today in the American Journal of Human Genetics.
Genetic variants
Cleft lip and cleft palate, also known as orofacial clefts, can be caused by genetic mutations, but in many cases, there is no known genetic cause.
“The mechanism for the development of these orofacial clefts is unclear, mostly because they are known to be impacted by both genetic and environmental factors,” Calo says. “Trying to pinpoint what might be affected has been very challenging in this context.”
To discover genetic factors that influence a particular disease, scientists often perform genome-wide association studies (GWAS), which can reveal variants that are found more often in people who have a particular disease than in people who don’t.
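At its core, a GWAS association compares how often a variant allele appears in affected versus unaffected people. The sketch below uses made-up allele counts, not data from this study, to illustrate the basic comparison:

```python
def odds_ratio(case_alt, case_ref, control_alt, control_ref):
    """Allelic odds ratio: how much more often the alternate allele
    is observed in cases than in controls (> 1 suggests association)."""
    return (case_alt / case_ref) / (control_alt / control_ref)

# Hypothetical allele counts for one variant (illustration only):
# 300 of 1,000 case alleles carry the variant, vs. 200 of 1,000 controls.
print(round(odds_ratio(300, 700, 200, 800), 2))  # → 1.71
```

A real GWAS repeats a test like this across millions of variants and applies stringent significance thresholds to account for the multiple comparisons.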
For orofacial clefts, some of the genetic variants that have regularly turned up in GWAS appeared to be in a region of DNA that doesn’t code for proteins. In this study, the MIT team set out to figure out how variants in this region might influence the development of facial malformations.
Their studies revealed that these variants are located in an enhancer region called e2p24.2. Enhancers are segments of DNA that interact with protein-coding genes, helping to activate them by binding to transcription factors that turn on gene expression.
The researchers found that this region is in close proximity to three genes, suggesting that it may control the expression of those genes. One of those genes had already been ruled out as contributing to facial malformations, and another had already been shown to have a connection. In this study, the researchers focused on the third gene, which is known as DDX1.
DDX1, it turned out, is necessary for splicing transfer RNA (tRNA) molecules, which play a critical role in protein synthesis. Each transfer RNA molecule transports a specific amino acid to the ribosome — a cell structure that strings amino acids together to form proteins, based on the instructions carried by messenger RNA.
While there are about 400 different tRNAs encoded in the human genome, only a fraction of them require splicing, and those are the tRNAs most affected by the loss of DDX1. These tRNAs transport four different amino acids, and the researchers hypothesize that those four amino acids may be particularly abundant in the proteins that embryonic facial cells need in order to develop properly.
When a ribosome needs one of those four amino acids but none is available, it can stall, and the protein doesn’t get made.
The researchers are now exploring which proteins might be most affected by the loss of those amino acids. They also plan to investigate what happens inside cells when the ribosomes stall, in hopes of identifying a stress signal that could potentially be blocked and help cells survive.
Malfunctioning tRNA
While this is the first study to link tRNA to craniofacial malformations, previous studies have shown that mutations that impair ribosome formation can also lead to similar defects. Studies have also shown that disruptions of tRNA synthesis — caused by mutations in the enzymes that attach amino acids to tRNA, or in proteins involved in an earlier step in tRNA splicing — can lead to neurodevelopmental disorders.
“Defects in other components of the tRNA pathway have been shown to be associated with neurodevelopmental disease,” Calo says. “One interesting parallel between these two is that the cells that form the face are coming from the same place as the cells that form the neurons, so it seems that these particular cells are very susceptible to tRNA defects.”
The researchers now hope to explore whether environmental factors linked to orofacial birth defects also influence tRNA function. Some of their preliminary work has found that oxidative stress — a buildup of harmful free radicals — can lead to fragmentation of tRNA molecules. Oxidative stress can occur in embryonic cells upon exposure to ethanol, as in fetal alcohol syndrome, or if the mother develops gestational diabetes.
“I think it is worth looking for mutations that might be causing this on the genetic side of things, but then also in the future, we would expand this into which environmental factors have the same effects on tRNA function, and then see which precautions might be able to prevent any effects on tRNAs,” Bartusel says.
The research was funded by the National Science Foundation Graduate Research Program, the National Cancer Institute, the National Institute of General Medical Sciences, and the Pew Charitable Trusts.
A chemist who tinkers with molecules’ structures

By changing how atoms in a molecule are arranged relative to each other, Associate Professor Alison Wendlandt aims to create compounds with new chemical properties.

Many biological molecules exist as “diastereomers” — molecules that have the same chemical structure but different spatial arrangements of their atoms. In some cases, these slight structural differences can lead to significant changes in the molecules’ functions or chemical properties.
As one example, the cancer drug doxorubicin can have heart-damaging side effects in a small percentage of patients. However, a diastereomer of the drug, known as epirubicin, which has a single alcohol group that points in a different direction, is much less toxic to heart cells.
“There are a lot of examples like that in medicinal chemistry where something that seems small, such as the position of a single atom in space, may actually be really profound,” says Alison Wendlandt, an associate professor of chemistry at MIT.
Wendlandt’s lab is focused on designing new tools that can convert these molecules into different forms. Her group is also working on similar tools that can change a molecule into a different constitutional isomer — a molecule that has an atom or chemical group located in a different spot, even though it has the same chemical formula as the original.
“If you have a target molecule and you needed to make it without such a tool, you would have to go back to the beginning and make the whole molecule again to get to the final structure that you wanted,” Wendlandt says.
These tools can also lend themselves to creating entirely new molecules that might be difficult or even impossible to build using traditional chemical synthesis techniques.
“We’re focused on a broad suite of selective transformations, the goal being to make the biggest impact on how you might envision making a molecule,” she says. “If you are able to open up access to the interconversion of molecular structures, you can then think completely differently about how you would make a molecule.”
From math to chemistry
As the daughter of two geologists, Wendlandt found herself immersed in science from a young age. Both of her parents worked at the Colorado School of Mines, and family vacations often involved trips to interesting geological formations.
In high school, she found math more appealing than chemistry, and she headed to the University of Chicago with plans to major in mathematics. However, she soon had second thoughts, after encountering abstract math.
“I was good at calculus and the kind of math you need for engineering, but when I got to college and I encountered topology and N-dimensional geometry, I realized I don’t actually have the skills for abstract math. At that point I became a little bit more open-minded about what I wanted to study,” she says.
Though she didn’t think she liked chemistry, an organic chemistry course in her sophomore year changed her mind.
“I loved the problem-solving aspect of it. I have a very, very bad memory, and I couldn’t memorize my way through the class, so I had to just learn it, and that was just so fun,” she says.
As a chemistry major, she began working in a lab focused on “total synthesis,” a research area that involves developing strategies to synthesize a complex molecule, often a natural compound, from scratch.
Although she loved organic chemistry, a lab accident — an explosion that injured a student in her lab and led to temporary hearing loss for Wendlandt — made her hesitant to pursue it further. When she applied to graduate schools, she decided to go into a different branch of chemistry — chemical biology. She studied at Yale University for a couple of years, but she realized that she didn’t enjoy that type of chemistry and left after receiving a master’s degree.
She worked in a lab at the University of Kentucky for a few years, then applied to graduate school again, this time at the University of Wisconsin. There, she worked in an organic chemistry lab, studying oxidation reactions that could be used to generate pharmaceuticals or other useful compounds from petrochemicals.
After finishing her PhD in 2015, Wendlandt went to Harvard University for a postdoc, working with chemistry professor Eric Jacobsen. There, she became interested in selective chemical reactions that generate a particular isomer, and began studying catalysts that could perform glycosylation — the addition of sugar molecules to other molecules — at specific sites.
Editing molecules
Since joining the MIT faculty in 2018, Wendlandt has worked on developing catalysts that can convert a molecule into its mirror image or an isomer of the original.
In 2022, she and her students developed a tool called a stereo-editor, which can alter the arrangement of chemical groups around a central atom known as a stereocenter. This editor consists of two catalysts that work together to first add enough energy to remove an atom from a stereocenter, then replace it with an atom that has the opposite orientation. That energy input comes from a photocatalyst, which converts captured light into energy.
“If you have a molecule with an existing stereocenter, and you need the other enantiomer, typically you would have to start over and make the other enantiomer. But this new method tries to interconvert them directly, so it gives you a way of thinking about molecules as dynamic,” Wendlandt says. “You could generate any sort of three-dimensional structure of that molecule, and then in an independent step later, you could completely reorganize the 3D structure.”
She has also developed tools that can convert common sugars such as glucose into other isomers, including allose and other sugars that are difficult to isolate from natural sources, and tools that can create new isomers of steroids and alcohols. She is now working on ways to convert six-membered carbon rings to seven- or eight-membered rings, and to add, subtract, or replace some of the chemical groups attached to the rings.
“I’m interested in creating general tools that will allow us to interconvert static structures. So, that may be taking a certain functional group and moving it to another part of the molecule entirely, or taking large rings and making them small rings,” she says. “Instead of thinking of molecules that we assemble as static, we’re thinking about them now as potentially dynamic structures, which could change how we think about making organic molecules.”
This approach also opens up the possibility of creating brand new molecules that haven’t been seen before, Wendlandt says. This could be useful, for example, to create drug molecules that interact with a target enzyme in just the right way.
“There’s a huge amount of chemical space that’s still unknown, bizarre chemical space that just has not been made. That’s in part because maybe no one has been interested in it, or because it’s just too hard to make that specific thing,” she says. “These kinds of tools give you access to isomers that are maybe not easily made.”
Restoring healthy gene expression with programmable therapeutics

CAMP4 Therapeutics is targeting regulatory RNA, whose role in gene expression was first described by co-founder and MIT Professor Richard Young.

Many diseases are caused by dysfunctional gene expression that leads to too much or too little of a given protein. Efforts to cure those diseases include everything from editing genes to inserting new genetic snippets into cells to injecting the missing proteins directly into patients.
CAMP4 is taking a different approach. The company is targeting a lesser-known player in the regulation of gene expression known as regulatory RNA. CAMP4 co-founder and MIT Professor Richard Young has shown that by interacting with molecules called transcription factors, regulatory RNA plays an important role in controlling how genes are expressed. CAMP4’s therapeutics target regulatory RNA to increase the production of proteins and put patients’ levels back into healthy ranges.
The company’s approach holds promise for treating diseases caused by defects in gene expression, such as metabolic diseases, heart conditions, and neurological disorders. Targeting regulatory RNAs as opposed to genes could also offer more precise treatments than existing approaches.
“If I just want to fix a single gene’s defective protein output, I don’t want to introduce something that makes that protein at high, uncontrolled amounts,” says Young, who is also a core member of the Whitehead Institute. “That’s a huge advantage of our approach: It’s more like a correction than a sledgehammer.”
CAMP4’s lead drug candidate targets urea cycle disorders (UCDs), a class of chronic conditions caused by a genetic defect that limits the body’s ability to metabolize and excrete ammonia. A phase 1 clinical trial has shown CAMP4’s treatment is safe and tolerable for humans, and in preclinical studies the company has shown its approach can be used to target specific regulatory RNA in the cells of humans with UCDs to restore gene expression to healthy levels.
“This has the potential to treat very severe symptoms associated with UCDs,” says Young, who co-founded CAMP4 with cancer genetics expert Leonard Zon, a professor at Harvard Medical School. “These diseases can be very damaging to tissues and cause a lot of pain and distress. Even a small effect in gene expression could have a huge benefit to patients, who are generally young.”
Mapping out new therapeutics
Young, who has been a professor at MIT since 1984, has spent decades studying how genes are regulated. It’s long been known that molecules called transcription factors, which orchestrate gene expression, bind to DNA and proteins. Research from Young’s lab uncovered a previously unknown way in which transcription factors can also bind to RNA. The finding indicated RNA plays an underappreciated role in controlling gene expression.
CAMP4 was founded in 2016 with the initial idea of mapping out the signaling pathways that govern the expression of genes linked to various diseases. But as Young’s lab discovered and then began to characterize the role of regulatory RNA in gene expression around 2020, the company pivoted to focus on targeting regulatory RNA using therapeutic molecules known as antisense oligonucleotides (ASOs), which have been used for years to target specific messenger RNA sequences.
CAMP4 began mapping the active regulatory RNAs associated with the expression of every protein-coding gene and built a database, which it calls its RAP Platform, that helps it quickly identify regulatory RNAs to target specific diseases and select ASOs that will most effectively bind to those RNAs.
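CAMP4’s actual selection pipeline is not public. The sketch below only illustrates the base-pairing principle behind antisense design, in which an ASO is the reverse complement of its RNA target; the target sequence here is made up for the example:

```python
# Watson-Crick pairing from an RNA target to a DNA antisense strand:
# A pairs with T, U with A, G with C, C with G.
RNA_TO_DNA_COMPLEMENT = {"A": "T", "U": "A", "G": "C", "C": "G"}

def antisense_dna(rna_target):
    """Return the DNA sequence complementary to an RNA target,
    written 5'->3' (hence the reversal)."""
    return "".join(RNA_TO_DNA_COMPLEMENT[base] for base in reversed(rna_target))

# Hypothetical 12-nt stretch of a regulatory RNA (not a real CAMP4 target)
target = "AUGGCUACGUUC"
print(antisense_dna(target))  # → GAACGTAGCCAT
```

Real ASO design layers many additional criteria on top of complementarity, such as chemical modifications, off-target screening, and binding-energy predictions.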
Today, CAMP4 is using its platform to develop therapeutic candidates it believes can restore healthy protein levels to patients.
“The company has always been focused on modulating gene expression,” says CAMP4 Chief Financial Officer Kelly Gold MBA ’09. “At the simplest level, the foundation of many diseases is too much or too little of something being produced by the body. That is what our approach aims to correct.”
Accelerating impact
CAMP4 is starting by going after diseases of the liver and the central nervous system, where the safety and efficacy of ASOs have already been proven. Young believes correcting genetic expression without modulating the genes themselves will be a powerful approach to treating a range of complex diseases.
“Genetics is a powerful indicator of where a deficiency lies and how you might reverse that problem,” Young says. “There are many syndromes where we don’t have a complete understanding of the underlying mechanism of disease. But when a mutation clearly affects the output of a gene, you can now make a drug that can treat the disease without that complete understanding.”
As the company continues mapping the regulatory RNAs associated with every gene, Gold hopes CAMP4 can eventually minimize its reliance on wet-lab work and lean more heavily on machine learning to leverage its growing database and quickly identify regulatory RNA targets for every disease it wants to treat.
In addition to its trials in urea cycle disorders, the company plans to launch key preclinical safety studies this year for a candidate targeting seizure disorders with a genetic basis. And as it continues exploring drug development efforts around the thousands of genetic diseases where increasing protein levels can have a meaningful impact, it’s also considering collaborating with others to accelerate its impact.
“I can conceive of companies using a platform like this to go after many targets, where partners fund the clinical trials and use CAMP4 as an engine to target any disease where there’s a suspicion that gene upregulation or downregulation is the way to go,” Young says.
A visual pathway in the brain may do more than recognize objects

New research using computational vision models suggests the brain’s “ventral stream” might be more versatile than previously thought.

When visual information enters the brain, it travels through two pathways that process different aspects of the input. For decades, scientists have hypothesized that one of these pathways, the ventral visual stream, is responsible for recognizing objects, and that it might have been optimized by evolution to do just that.
Consistent with this, in the past decade, MIT scientists have found that when computational models of the anatomy of the ventral stream are optimized to solve the task of object recognition, they are remarkably good predictors of the neural activities in the ventral stream.
However, in a new study, MIT researchers have shown that when they train these types of models on spatial tasks instead, the resulting models are also quite good predictors of the ventral stream’s neural activities. This suggests that the ventral stream may not be exclusively optimized for object recognition.
“This leaves wide open the question about what the ventral stream is being optimized for. I think the dominant perspective a lot of people in our field believe is that the ventral stream is optimized for object recognition, but this study provides a new perspective that the ventral stream could be optimized for spatial tasks as well,” says MIT graduate student Yudi Xie.
Xie is the lead author of the study, which will be presented at the International Conference on Learning Representations. Other authors of the paper include Weichen Huang, a visiting student through MIT’s Research Science Institute program; Esther Alter, a software engineer at the MIT Quest for Intelligence; Jeremy Schwartz, a sponsored research technical staff member; Joshua Tenenbaum, a professor of brain and cognitive sciences; and James DiCarlo, the Peter de Florez Professor of Brain and Cognitive Sciences, director of the Quest for Intelligence, and a member of the McGovern Institute for Brain Research at MIT.
Beyond object recognition
When we look at an object, our visual system can not only identify the object, but also determine other features such as its location, its distance from us, and its orientation in space. Since the early 1980s, neuroscientists have hypothesized that the primate visual system is divided into two pathways: the ventral stream, which performs object-recognition tasks, and the dorsal stream, which processes features related to spatial location.
Over the past decade, researchers have worked to model the ventral stream using a type of deep-learning model known as a convolutional neural network (CNN). Researchers can train these models to perform object-recognition tasks by feeding them datasets containing thousands of images along with category labels describing the images.
The state-of-the-art versions of these CNNs have high success rates at categorizing images. Additionally, researchers have found that the internal activations of the models are very similar to the activities of neurons that process visual information in the ventral stream. Furthermore, the more similar these models are to the ventral stream, the better they perform at object-recognition tasks. This has led many researchers to hypothesize that the dominant function of the ventral stream is recognizing objects.
However, experimental studies, especially a study from the DiCarlo lab in 2016, have found that the ventral stream appears to encode spatial features as well. These features include the object’s size, its orientation (how much it is rotated), and its location within the field of view. Based on these studies, the MIT team aimed to investigate whether the ventral stream might serve additional functions beyond object recognition.
“Our central question in this project was, is it possible that we can think about the ventral stream as being optimized for doing these spatial tasks instead of just categorization tasks?” Xie says.
To test this hypothesis, the researchers set out to train a CNN to identify one or more spatial features of an object, including rotation, location, and distance. To train the models, they created a new dataset of synthetic images. These images show objects such as tea kettles or calculators superimposed on different backgrounds, in locations and orientations that are labeled to help the model learn them.
The researchers found that CNNs that were trained on just one of these spatial tasks showed a high level of “neuro-alignment” with the ventral stream — very similar to the levels seen in CNN models trained on object recognition.
The researchers measured neuro-alignment using a technique that DiCarlo’s lab has developed, which involves asking the models, once trained, to predict the neural activity that a particular image would generate in the brain. They found that the better the models performed on the spatial task they had been trained on, the more neuro-alignment they showed.
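The lab’s exact pipeline is its own, but a common way to operationalize “predicting neural activity from model activations” is regularized regression evaluated on held-out images. The sketch below uses synthetic data and plain ridge regression, so it is an illustration of the idea rather than the published method:

```python
import numpy as np

def fit_ridge(X, Y, alpha=1.0):
    """Closed-form ridge regression weights mapping model activations
    X (n_images, n_units) to neural responses Y (n_images, n_sites)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

rng = np.random.default_rng(1)
acts = rng.normal(size=(200, 30))            # model activations per image
true_map = rng.normal(size=(30, 5))          # hidden linear relationship
neural = acts @ true_map + 0.1 * rng.normal(size=(200, 5))  # synthetic "recordings"

W = fit_ridge(acts[:150], neural[:150])      # fit on 150 training images
pred = acts[150:] @ W                        # predict held-out responses
# Alignment score: mean correlation between predicted and actual, per site
scores = [np.corrcoef(pred[:, i], neural[150:, i])[0, 1] for i in range(5)]
print(round(float(np.mean(scores)), 2))
```

A model whose activations carry the information the neurons use will earn high held-out correlations; one whose representation is unrelated to the recordings will not.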
“I think we cannot assume that the ventral stream is just doing object categorization, because many of these other functions, such as spatial tasks, also can lead to this strong correlation between models’ neuro-alignment and their performance,” Xie says. “Our conclusion is that you can optimize either through categorization or doing these spatial tasks, and they both give you a ventral-stream-like model, based on our current metrics to evaluate neuro-alignment.”
Comparing models
The researchers then investigated why these two approaches — training for object recognition and training for spatial features — led to similar degrees of neuro-alignment. To do that, they performed an analysis known as centered kernel alignment (CKA), which allows them to measure the degree of similarity between representations in different CNNs. This analysis showed that in the early to middle layers of the models, the representations that the models learn are nearly indistinguishable.
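Linear CKA itself is a short computation. Assuming representation matrices of shape (examples × features), a minimal NumPy version (not the authors’ code) looks like this:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two representation
    matrices of shape (n_examples, n_features); 1.0 means identical
    up to rotation and isotropic scaling."""
    X = X - X.mean(axis=0)          # center each feature
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return num / den

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 64))                  # layer activations, model 1
Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))  # a random rotation
B = A @ Q                                       # same representation, rotated
C = rng.normal(size=(100, 64))                  # unrelated activations
print(round(linear_cka(A, B), 3))               # 1.0: CKA ignores rotations
print(linear_cka(A, C) < linear_cka(A, B))      # unrelated reps score lower
```

The rotation invariance is the point of the metric: two networks can encode the same information in differently oriented coordinate systems, and CKA still registers them as similar.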
“In these early layers, essentially you cannot tell these models apart by just looking at their representations,” Xie says. “It seems like they learn some very similar or unified representation in the early to middle layers, and in the later stages they diverge to support different tasks.”
The researchers hypothesize that even when models are trained to analyze just one feature, they also take into account “non-target” features — those that they are not trained on. When objects have greater variability in non-target features, the models tend to learn representations more similar to those learned by models trained on other tasks. This suggests that the models are using all of the information available to them, which may result in different models coming up with similar representations, the researchers say.
“More non-target variability actually helps the model learn a better representation, instead of learning a representation that’s ignorant of them,” Xie says. “It’s possible that the models, although they’re trained on one target, are simultaneously learning other things due to the variability of these non-target features.”
In future work, the researchers hope to develop new ways to compare different models, in hopes of learning more about how each one develops internal representations of objects based on differences in training tasks and training data.
“There could be still slight differences between these models, even though our current way of measuring how similar these models are to the brain tells us they’re on a very similar level. That suggests maybe there’s still some work to be done to improve upon how we can compare the model to the brain, so that we can better understand what exactly the ventral stream is optimized for,” Xie says.
The research was funded by the Semiconductor Research Corporation and the U.S. Defense Advanced Research Projects Agency.
Unparalleled student support

Professors Andrew Vanderburg and Ariel White are honored as “Committed to Caring.”

MIT Professors Andrew Vanderburg and Ariel White have been honored as Committed to Caring for their attentiveness to student needs and for creating a welcoming and inclusive culture. The Committed to Caring program recognizes faculty who go above and beyond for MIT graduate students.
Professor Vanderburg “is incredibly generous with his time, resources, and passion for mentoring the next generation of astronomers,” one of his students wrote in praise.
“Professor Ariel White has made my experience at MIT immeasurably better and I hope that one day I will be in a position to pay her kindness forward,” another student wrote.
Andrew Vanderburg: Investing in student growth and development
Vanderburg is the Bruno B. Rossi Career Development Assistant Professor of Physics and is affiliated with the MIT Kavli Institute for Astrophysics and Space Research. His research focuses on studying exoplanets. Vanderburg is interested in developing cutting-edge techniques and methods to discover new planets outside of our solar system, and studying these planets to learn their detailed properties.
Ever respectful of the boundary between students’ research and personal lives, Vanderburg leads by example in striking a healthy balance. A nominator commented that he has recently been working on his wildlife photography skills, and has even shared some of his photos at the group’s meetings.
Balancing personal and work life is something that almost everyone Vanderburg knows struggles with, from undergraduate students to faculty. “I encourage my group members to spend free time doing things they enjoy outside of work,” Vanderburg says, “and I try to model that balanced behavior myself.”
Vanderburg also understands and accepts that sometimes personal lives can completely overwhelm everything else and affect work and studies. He offers, “When times like these inevitably happen, I just have to acknowledge that life is unpredictable, family comes first, and that the astronomy can wait.”
In addition, Vanderburg organizes group outings, such as hiking, apple picking, and Red Sox games, and occasionally hosts group gatherings at his home. An advisee noted that “these efforts make our group feel incredibly welcoming, and foster friendship between all our team members.”
Vanderburg has provided individualized guidance and support to over a dozen students in his first two years as faculty at MIT. His students credit him with “meeting them where they are,” and say that he candidly addresses themes like imposter syndrome and student feelings of belonging in astronomy. Vanderburg is always ready to offer his fresh perspective and unwavering support to his students.
“I try to treat everyone in my group with kindness and support,” Vanderburg says, allowing his students to trust that he has their best interest at heart. Students feel this way as well; another nominator exclaimed that Vanderburg “genuinely and truly is one of the kindest humans I know.”
Vanderburg went above and beyond in offering his students support and insisting that his advisees would accomplish their goals. One nominator said, “his support meant the world to me at a time where I doubted my own abilities and potential.”
The Committed to Caring honor recognizes Vanderburg’s seemingly endless capacity to share his knowledge, support his students through difficult times, and invest in his mentees’ personal growth and development.
Ariel White: Student well-being and advocacy
White is an associate professor of political science who studies voting and voting rights, race, the criminal legal system, and bureaucratic behavior. Her research uses large datasets to measure individual-level experiences, and to shed light on people's everyday interactions with government. Her recent work investigates how potential voters react to experiences with punitive government policies, such as incarceration and immigration enforcement, and how people can make their way back into political life after these experiences.
She cares deeply about student well-being and departmental culture. One of her nominators shared a personal story, describing how they were frequently belittled and insulted early in their graduate school journey. They had wrestled with whether this hurtful treatment was simply part of a typical graduate school experience. It was negatively impacting their academic performance and sense of belonging in the department.
When she learned of it, White immediately expressed concern and reinforced that the student deserved an environment conducive to learning and well-being. She then quickly took steps to talk with the peer to ensure their interactions improved.
“She wants me to feel valued, and is dedicated to both my growth as a scholar and my well-being as a person,” the nominator expressed. “This has been especially valuable as I found the adjustment to the department difficult and isolating.”
Another student commended, “I am constantly in awe of the time and effort that Ariel puts into leading by example, actively fostering an inclusive learning environment, and ensuring students feel heard and empowered.”
White is a radiant example of a professor who can have an outstanding publishing record while still treating graduate students with kindness and respect. She shows compassion and support to students, even those she does not advise. In the words of one nominator, “Ariel is the most caring person in this department.”
White has consistently expressed her desire to support her students and advocate for them. “I think one of the hardest transitions to make is the one from being a consumer of research to a producer of it,” she says. Students face the daunting prospect of developing an idea on their own for a solo project, and it can be hard to know where to start or how to keep going.
To address this, White says that she talks with advisees about what she’s seen work for her and for other students. She also encourages them to talk with their peers for advice and to try different ways of structuring their time or planning out goals.
“I try to help by explicitly highlighting these challenges and validating them: These are difficult things for nearly everyone who goes through the PhD program,” White adds.
One student reflected, “Ariel is the type of advisor that everyone should aspire to be, and that anyone would be lucky to have.”
Hundred-year storm tides will occur every few decades in Bangladesh, scientists report

With projected global warming, the frequency of extreme storms will ramp up by the end of the century, according to a new study.

Tropical cyclones are hurricanes that brew over the tropical ocean and can travel over land, inundating coastal regions. The most extreme cyclones can generate devastating storm tides — seawater that is heightened by the tides and swells onto land, causing catastrophic flooding. A new study by MIT scientists finds that, as the planet warms, the recurrence of destructive storm tides will increase tenfold for one of the hardest-hit regions of the world.
In a study appearing today in One Earth, the scientists report that, for the highly populated coastal country of Bangladesh, what was once a 100-year event could now strike every 10 years — or more often — by the end of the century.
In a future where fossil fuels continue to burn as they do today, what was once considered a catastrophic, once-in-a-century storm tide will hit Bangladesh, on average, once per decade. And the kind of storm tides that have occurred every decade or so will likely batter the country’s coast more frequently, every few years.
Bangladesh is one of the most densely populated countries in the world, with more than 171 million people living in a region roughly the size of New York state. The country has been historically vulnerable to tropical cyclones, as it is a low-lying delta that is easily flooded by storms and experiences a seasonal monsoon. Some of the most destructive floods in the world have occurred in Bangladesh, where it’s been increasingly difficult for agricultural economies to recover.
The study also finds that Bangladesh will likely experience tropical cyclones that overlap with the months-long monsoon season. Until now, cyclones and the monsoon have occurred at separate times during the year. But as the planet warms, the scientists’ modeling shows that cyclones will push into the monsoon season, causing back-to-back flooding events across the country.
“Bangladesh is very active in preparing for climate hazards and risks, but the problem is, everything they’re doing is more or less based on what they’re seeing in the present climate,” says study co-author Sai Ravela, principal research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “We are now seeing an almost tenfold rise in the recurrence of destructive storm tides almost anywhere you look in Bangladesh. This cannot be ignored. So, we think this is timely, to say they have to pause and revisit how they protect against these storms.”
Ravela’s co-authors are Jiangchao Qiu, a postdoc in EAPS, and Kerry Emanuel, professor emeritus of atmospheric science at MIT.
Height of tides
In recent years, Bangladesh has invested significantly in storm preparedness, for instance in improving its early-warning system, fortifying village embankments, and increasing access to community shelters. But such preparations have generally been based on the current frequency of storms.
In this new study, the MIT team aimed to provide detailed projections of extreme storm tide hazards, which are flooding events where tidal effects amplify cyclone-induced storm surge, in Bangladesh under various climate-warming scenarios and sea-level rise projections.
“A lot of these events happen at night, so tides play a really strong role in how much additional water you might get, depending on what the tide is,” Ravela explains.
To evaluate the risk of storm tide, the team first applied a method of physics-based downscaling, which Emanuel’s group first developed over 20 years ago and has been using since to study hurricane activity in different parts of the world. The technique involves a low-resolution model of the global ocean and atmosphere that is embedded with a finer-resolution model that simulates weather patterns as detailed as a single hurricane. The researchers then scatter hurricane “seeds” in a region of interest and run the model forward to observe which seeds grow and make landfall over time.
To the downscaled model, the researchers incorporated a hydrodynamical model, which simulates the height of a storm surge, given the pattern and strength of winds at the time of a given storm. For any given simulated storm, the team also tracked the tides, as well as effects of sea level rise, and incorporated this information into a numerical model that calculated the storm tide, or the height of the water, with tidal effects as a storm makes landfall.
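As a back-of-the-envelope illustration (not the study’s hydrodynamical model, which couples surge and tide nonlinearly), the water level at landfall can be approximated as the sum of the cyclone-driven surge, the tidal height at that moment, and sea-level rise. All numbers below are hypothetical:

```python
def storm_tide_height(surge_m, tide_m, sea_level_rise_m):
    """Naive additive approximation of the total water level (meters)
    at landfall. Real hydrodynamical models couple surge and tide
    nonlinearly; this is only an order-of-magnitude sketch."""
    return surge_m + tide_m + sea_level_rise_m

# Hypothetical numbers: a 2.0 m surge arriving near high tide (0.8 m)
# on top of 0.3 m of sea-level rise yields roughly a 3.1 m storm tide.
total = storm_tide_height(2.0, 0.8, 0.3)
```

This simple sum also shows why Ravela highlights timing below: the same surge arriving at low tide would produce a substantially lower storm tide.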
Extreme overlap
With this framework, the scientists simulated tens of thousands of potential tropical cyclones near Bangladesh, under several future climate scenarios, ranging from one that resembles the current day to one in which the world experiences further warming as a result of continued fossil fuel burning. For each simulation, they recorded the maximum storm tides along the coast of Bangladesh and noted the frequency of storm tides of various heights in a given climate scenario.
“We can look at the entire bucket of simulations and see, for this storm tide of say, 3 meters, we saw this many storms, and from that you can figure out the relative frequency of that kind of storm,” Qiu says. “You can then invert that number to a return period.”
A return period is the average interval between storms of a given magnitude at a given location. A storm considered a “100-year event” is typically more powerful and destructive, and in this case creates more extreme storm tides and therefore more catastrophic flooding, than a 10-year event.
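The frequency-to-return-period inversion Qiu describes can be sketched in a few lines. This is an illustrative toy, not the study’s pipeline; the tide heights and simulation length below are invented:

```python
def return_period_years(tide_maxima, threshold_m, years_simulated):
    """Estimate the return period (in years) of storm tides reaching
    `threshold_m`, from per-storm maximum tide heights collected over
    `years_simulated` simulated years."""
    exceedances = sum(1 for h in tide_maxima if h >= threshold_m)
    if exceedances == 0:
        return float("inf")  # never observed in the simulated record
    annual_rate = exceedances / years_simulated  # relative frequency
    return 1.0 / annual_rate  # invert frequency into a return period

# Invented example: 5 of 100 simulated storms over a 500-year record
# reach a 3-meter storm tide, so the annual rate is 0.01 per year,
# i.e., a 100-year event.
tides = [1.2, 2.8, 3.1, 0.9, 3.4, 2.2, 3.0, 1.7, 3.6, 3.2] + [1.0] * 90
period = return_period_years(tides, 3.0, 500)  # 100.0
```

Under a warmer-climate simulation, more storms would clear the same threshold, raising the annual rate and shrinking the return period — the tenfold change the study reports.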
From their modeling, Ravela and his colleagues found that under a scenario of increased global warming, the storms that previously were considered 100-year events, producing the highest storm tide values, can recur every decade or less by late-century. They also observed that, toward the end of this century, tropical cyclones in Bangladesh will occur across a broader seasonal window, potentially overlapping in certain years with the monsoon season.
“If the monsoon rain has come in and saturated the soil, a cyclone then comes in and it makes the problem much worse,” Ravela says. “People won’t have any reprieve between the extreme storm and the monsoon. There are so many compound and cascading effects between the two. And this only emerges because warming happens.”
Ravela and his colleagues are using their modeling to help experts in Bangladesh better evaluate and prepare for a future of increasing storm risk. And he says that the climate future for Bangladesh is in some ways not unique to this part of the world.
“This climate change story that is playing out in Bangladesh in a certain way will be playing out in a different way elsewhere,” Ravela notes. “Maybe where you are, the story is about heat stress, or amplifying droughts, or wildfires. The peril is different. But the underlying catastrophe story is not that different.”
This research is supported in part by the MIT Climate Resilience Early Warning Systems Climate Grand Challenges project; the Jameel Observatory JO-CREWSNet project; the MIT Weather and Climate Extremes Climate Grand Challenges project; and Schmidt Sciences, LLC.
Four from MIT awarded 2025 Paul and Daisy Soros Fellowships for New Americans
Fellowship honors contributions of immigrants to American society by awarding $90,000 in funding for graduate studies.
MIT graduate students Sreekar Mantena and Arjun Ramani, and recent MIT alumni Rupert Li ’24 and Jupneet Singh ’23, have been named 2025 P.D. Soros Fellows. In addition, Soros Fellow Andre Ye will begin a PhD in computer science at MIT this fall.
Each year, the P.D. Soros Fellowship for New Americans awards 30 outstanding immigrants and children of immigrants $90,000 in graduate school financial support over a two-year period. The merit-based program selects fellows based on their achievements, potential to make meaningful contributions to their fields and communities, and dedication to the ideals of the United States represented in the Bill of Rights and the Constitution. This year’s fellows were selected from a competitive pool of more than 2,600 applicants nationwide.
Rupert Li ’24
The son of Chinese immigrants, Rupert Li was born and raised in Portland, Oregon. He graduated from MIT in 2024 with a double major in mathematics and in computer science, economics, and data science, and earned an MEng in the latter subject.
Li was named a Marshall Scholar in 2023 and is currently pursuing a master’s degree in the Part III mathematics program at Cambridge University. His P.D. Soros Fellowship will support his pursuit of a PhD in mathematics at Stanford University.
Li’s first experience with mathematics research was as a high school student participant in the MIT PRIMES-USA program. He continued research in mathematics as an undergraduate at MIT, where he worked with professors Henry Cohn, Nike Sun, and Elchanan Mossel in the Department of Mathematics. Li also spent two summers at the Duluth REU (Research Experience for Undergraduates) program with Professor Joe Gallian.
Li’s research in probability, discrete geometry, and combinatorics earned him the Barry Goldwater Scholarship, an honorable mention for the Frank and Brennie Morgan Prize for Outstanding Research in Mathematics by an Undergraduate Student, the Marshall Scholarship, and the Hertz Fellowship.
Beyond research, Li finds fulfillment in opportunities to give back to the math community that has supported him throughout his mathematical journey. This year marks the second time he has served as a graduate student mentor for the PRIMES-USA program, which sparked his mathematical career, and his first year as an advisor for the Duluth REU program.
Sreekar Mantena
Sreekar Mantena graduated Phi Beta Kappa from Harvard College with a degree in statistics and molecular biology. He is currently an MD student in biomedical informatics in the Harvard-MIT Program in Health Sciences and Technology (HST), where he works under Professor Soumya Raychaudhuri of the Broad Institute of MIT and Harvard. He is also pursuing a PhD in bioinformatics and integrative genomics at Harvard Medical School. In the future, Mantena hopes to blend compassion with computation as a physician-scientist who harnesses the power of machine learning and statistics to advance equitable health care delivery.
The son of Indian-American immigrants, Mantena was raised in North Carolina, where he grew up as fond of cheese grits as of his mother’s chana masala. Every summer of his childhood, he lived with his grandparents in Southern India, who instilled in him the importance of investing in one’s community and a love of learning.
As an undergraduate at Harvard, Mantena was inspired by the potential of statistics and data science to address gaps in health-care delivery. He founded the Global Alliance for Medical Innovation, a nonprofit organization that has partnered with physicians in six countries to develop data-driven medical technologies for underserved communities, including devices to detect corneal disease.
Mantena also pursued research in Professor Pardis Sabeti’s lab at the Broad Institute, where he built new algorithms to design diagnostic assays that improve the detection of infectious pathogens in resource-limited settings. He has co-authored over 20 scientific publications, and his lead-author work has been published in many journals, including Nature Biotechnology, The Lancet Digital Health, and the Journal of Pediatrics.
Arjun Ramani
Arjun Ramani, from West Lafayette, Indiana, is the son of immigrants from Tamil Nadu, India. He is currently pursuing a PhD in economics at MIT, where he studies technological change and innovation. Also the Carl Shapiro (1976) Fellow in the Department of Economics, Ramani hopes his research can inform policies and business practices that generate broadly shared economic growth.
Ramani’s dual interests in technology and the world led him to Stanford University, where he studied economics as an undergraduate and pursued a master’s in computer science, specializing in artificial intelligence. As data editor of the university’s newspaper, he started the Stanford Open Data Project to improve campus data transparency. During college, Ramani also spent time at the White House working on economic policy, in Ghana helping startups scale, and at Citadel in financial markets — all of which cultivated a broad interest in the economic world.
After graduating from Stanford, Ramani became The Economist’s global business and economics correspondent. He first covered technology and finance and later shifted to covering artificial intelligence after the technology took the world by storm in 2022.
In 2023, Ramani moved to India to cover the Indian economy in the lead-up to its election. There, he gained a much deeper appreciation for the social and institutional barriers that slowed technology adoption and catch-up growth. Ramani wrote or co-wrote six cover stories, was shortlisted for U.K. financial journalist of the year in 2024 for his AI and economics reporting, and co-authored a six-part special report on India’s economy.
Jupneet Singh ’23
Jupneet Singh, the daughter of Indian immigrants, is a Sikh-American who grew up deeply connected to her Punjabi and Sikh heritage in Somis, California. The Soros Fellowship will support her MD studies at Harvard Medical School’s HST program under the U.S. Air Force Health Professions Scholarship Program.
Singh plans to complete her medical residency as an active-duty U.S. Air Force captain, and after serving as a surgeon in the USAF she hopes to enter the United States Public Health Commissioned Corps. While Singh is the first in her family to serve in the U.S. armed services, she is proud to be carrying on a long Sikh military legacy.
Singh graduated from MIT in 2023 with a degree in chemistry and a concentration in history and won a Rhodes Scholarship to pursue two degrees at the University of Oxford: a master’s in public policy and a master’s in translational health sciences. At MIT, she served as the commander (highest-ranked cadet) of the Air Force ROTC Detachment and is now commissioned as a 2nd Lieutenant. She is the first woman Air Force ROTC Rhodes Scholar.
Singh has worked in de-addiction centers in Punjab, India. She also worked at the Ventura County Family Justice Center and Ventura County Medical Center Trauma Center, and published a first-author paper in The American Surgeon. She founded Pathways to Promise, a program to support the health of children affected by domestic violence. She has conducted research on fatty liver disease under Professor Alex Shalek at MIT and on maternal health inequalities at the National Perinatal Epidemiological Unit at Oxford.
Molecules that fight infection also act on the brain, inducing anxiety or sociability
New research on a cytokine called IL-17 adds to growing evidence that immune molecules can influence behavior during illness.
Immune molecules called cytokines play important roles in the body’s defense against infection, helping to control inflammation and coordinating the responses of other immune cells. A growing body of evidence suggests that some of these molecules also influence the brain, leading to behavioral changes during illness.
Two new studies from MIT and Harvard Medical School, focused on a cytokine called IL-17, now add to that evidence. The researchers found that IL-17 acts on two distinct brain regions — the amygdala and the somatosensory cortex — to exert two divergent effects. In the amygdala, IL-17 can elicit feelings of anxiety, while in the cortex it promotes sociable behavior.
These findings suggest that the immune and nervous systems are tightly interconnected, says Gloria Choi, an associate professor of brain and cognitive sciences, a member of MIT’s Picower Institute for Learning and Memory, and one of the senior authors of the studies.
“If you’re sick, there’s so many more things that are happening to your internal states, your mood, and your behavioral states, and that’s not simply you being fatigued physically. It has something to do with the brain,” she says.
Jun Huh, an associate professor of immunology at Harvard Medical School, is also a senior author of both studies, which appear today in Cell. One of the papers was led by Picower Institute Research Scientist Byeongjun Lee and former Picower Institute research scientist Jeong-Tae Kwon, and the other was led by Harvard Medical School postdoc Yunjin Lee and Picower Institute postdoc Tomoe Ishikawa.
Behavioral effects
Choi and Huh became interested in IL-17 several years ago, when they found it was involved in a phenomenon known as the fever effect. Large-scale studies of autistic children have found that for many of them, their behavioral symptoms temporarily diminish when they have a fever.
In a 2019 study in mice, Choi and Huh showed that in some cases of infection, IL-17 is released and suppresses a small region of the brain’s cortex known as S1DZ. Overactivation of neurons in this region can lead to autism-like behavioral symptoms in mice, including repetitive behaviors and reduced sociability.
“This molecule became a link that connects immune system activation, manifested as a fever, to changes in brain function and changes in the animals’ behavior,” Choi says.
IL-17 comes in six different forms, and there are five different receptors that can bind to it. In their two new papers, the researchers set out to map which of these receptors are expressed in different parts of the brain. This mapping revealed that a pair of receptors known as IL-17RA and IL-17RB is found in the cortex, including in the S1DZ region that the researchers had previously identified. The receptors are located in a population of neurons that receive proprioceptive input and are involved in controlling behavior.
When a type of IL-17 known as IL-17E binds to these receptors, the neurons become less excitable, which leads to the behavioral effects seen in the 2019 study.
“IL-17E, which we’ve shown to be necessary for behavioral mitigation, actually does act almost exactly like a neuromodulator in that it will immediately reduce these neurons’ excitability,” Choi says. “So, there is an immune molecule that’s acting as a neuromodulator in the brain, and its main function is to regulate excitability of neurons.”
Choi hypothesizes that IL-17 may have originally evolved as a neuromodulator, and later on was appropriated by the immune system to play a role in promoting inflammation. That idea is consistent with previous work showing that in the worm C. elegans, IL-17 has no role in the immune system but instead acts on neurons. Among its effects in worms, IL-17 promotes aggregation, a form of social behavior. Additionally, in mammals, IL-17E is actually made by neurons in the cortex, including S1DZ.
“There’s a possibility that a couple of forms of IL-17 perhaps evolved first and foremost to act as a neuromodulator in the brain, and maybe later were hijacked by the immune system also to act as immune modulators,” Choi says.
Provoking anxiety
In the other Cell paper, the researchers explored another brain location where they found IL-17 receptors — the amygdala. This almond-shaped structure plays an important role in processing emotions, including fear and anxiety.
That study revealed that in a region known as the basolateral amygdala (BLA), the IL-17RA and IL-17RE receptors, which work as a pair, are expressed in a discrete population of neurons. When these receptors bind to IL-17A and IL-17C, the neurons become more excitable, leading to an increase in anxiety.
The researchers also found that, counterintuitively, if animals are treated with antibodies that block IL-17 receptors, it actually increases the amount of IL-17C circulating in the body. This finding may help to explain unexpected outcomes observed in a clinical trial of a drug targeting the IL-17RA receptor for psoriasis treatment, particularly regarding its potential adverse effects on mental health.
“We hypothesize that there’s a possibility that the IL-17 ligand that is upregulated in this patient cohort might act on the brain to induce suicide ideation, while in animals there is an anxiogenic phenotype,” Choi says.
During infections, this anxiety may be a beneficial response, keeping the sick individual away from others to whom the infection could spread, Choi hypothesizes.
“Other than its main function of fighting pathogens, one of the ways that the immune system works is to control the host behavior, to protect the host itself and also protect the community the host belongs to,” she says. “One of the ways the immune system is doing that is to use cytokines, secreted factors, to go to the brain as communication tools.”
The researchers found that the same BLA neurons that have receptors for IL-17 also have receptors for IL-10, a cytokine that suppresses inflammation. This molecule counteracts the excitability generated by IL-17, giving the body a way to shut off anxiety once it’s no longer useful.
Distinctive behaviors
Together, the two studies suggest that the immune system, and even a single family of cytokines, can exert a variety of effects in the brain.
“We have now different combinations of IL-17 receptors being expressed in different populations of neurons, in two different brain regions, that regulate very distinct behaviors. One is actually somewhat positive and enhances social behaviors, and another is somewhat negative and induces anxiogenic phenotypes,” Choi says.
Her lab is now working on additional mapping of IL-17 receptor locations, as well as the IL-17 molecules that bind to them, focusing on the S1DZ region. Eventually, a better understanding of these neuro-immune interactions may help researchers develop new treatments for neurological conditions such as autism or depression.
“The fact that these molecules are made by the immune system gives us a novel approach to influence brain function as a means of therapeutics,” Choi says. “Instead of thinking about directly going for the brain, can we think about doing something to the immune system?”
The research was funded, in part, by Jeongho Kim and the Brain Impact Foundation Neuro-Immune Fund, the Simons Foundation Autism Research Initiative, the Simons Center for the Social Brain, the Marcus Foundation, the N of One: Autism Research Foundation, the Burroughs Wellcome Fund, the Picower Institute Innovation Fund, the MIT John W. Jarve Seed Fund for Science Innovation, Young Soo Perry and Karen Ha, and the National Institutes of Health.
Surprise discovery could lead to improved catalysts for industrial reactions
Upending a long-held supposition, MIT researchers find a common catalyst works by cycling between two different forms.
The process of catalysis — in which a material speeds up a chemical reaction — is crucial to the production of many of the chemicals used in our everyday lives. But even though these catalytic processes are widespread, researchers often lack a clear understanding of exactly how they work.
A new analysis by researchers at MIT has shown that an important industrial synthesis process, the production of vinyl acetate, requires a catalyst to take two different forms, which cycle back and forth from one to the other as the chemical process unfolds.
Previously, it had been thought that only one of the two forms was needed. The new findings are published today in the journal Science, in a paper by MIT graduate students Deiaa Harraz and Kunal Lodaya, Bryan Tang PhD ’23, and MIT professor of chemistry and chemical engineering Yogesh Surendranath.
There are two broad classes of catalysts: homogeneous catalysts, which consist of dissolved molecules, and heterogeneous catalysts, which are solid materials whose surface provides the site for the chemical reaction. “For the longest time,” Surendranath says, “there’s been a general view that you either have catalysis happening on these surfaces, or you have them happening on these soluble molecules.” But the new research shows that in the case of vinyl acetate — an important material that goes into many polymer products such as the rubber in the soles of your shoes — there is an interplay between both classes of catalysis.
“What we discovered,” Surendranath explains, “is that you actually have these solid metal materials converting into molecules, and then converting back into materials, in a cyclic dance.”
He adds: “This work calls into question this paradigm where there’s either one flavor of catalysis or another. Really, there could be an interplay between both of them in certain cases, and that could be really advantageous for having a process that’s selective and efficient.”
The synthesis of vinyl acetate has been a large-scale industrial reaction since the 1960s, and it has been well-researched and refined over the years to improve efficiency. This has happened largely through a trial-and-error approach, without a precise understanding of the underlying mechanisms, the researchers say.
While chemists are often more familiar with homogeneous catalysis mechanisms, and chemical engineers are often more familiar with surface catalysis mechanisms, fewer researchers study both. This is perhaps part of the reason that the full complexity of this reaction was not previously captured. But Harraz says he and his colleagues are working at the interface between disciplines. “We’ve been able to appreciate both sides of this reaction and find that both types of catalysis are critical,” he says.
The reaction that produces vinyl acetate requires something to activate the oxygen molecules that are one of the constituents of the reaction, and something else to activate the other ingredients, acetic acid and ethylene. The researchers found that the form of the catalyst that worked best for one part of the process was not the best for the other. It turns out that the molecular form of the catalyst does the key chemistry with the ethylene and the acetic acid, while it’s the surface that ends up doing the activation of the oxygen.
They found that the underlying process involved in interconverting the two forms of the catalyst is actually corrosion, similar to the process of rusting. “It turns out that in rusting, you actually go through a soluble molecular species somewhere in the sequence,” Surendranath says.
The team borrowed techniques traditionally used in corrosion research to study the process. They used electrochemical tools to study the reaction, even though the overall reaction does not require a supply of electricity. By making potential measurements, the researchers determined that the corrosion of the palladium catalyst material to soluble palladium ions is driven by an electrochemical reaction with the oxygen, converting it to water. Corrosion is “one of the oldest topics in electrochemistry,” says Lodaya, “but applying the science of corrosion to understand catalysis is much newer, and was essential to our findings.”
By correlating measurements of catalyst corrosion with other measurements of the chemical reaction taking place, the researchers proposed that it was the corrosion rate that was limiting the overall reaction. “That’s the choke point that’s controlling the rate of the overall process,” Surendranath says.
The interplay between the two types of catalysis works efficiently and selectively “because it actually uses the synergy of a material surface doing what it’s good at and a molecule doing what it’s good at,” Surendranath says. The finding suggests that, when designing new catalysts, rather than focusing on either solid materials or soluble molecules alone, researchers should think about how the interplay of both may open up new approaches.
“Now, with an improved understanding of what makes this catalyst so effective, you can try to design specific materials or specific interfaces that promote the desired chemistry,” Harraz says. Since this process has been worked on for so long, these findings may not necessarily lead to improvements in this specific process of making vinyl acetate, but they do provide a better understanding of why the materials work as they do, and could lead to improvements in other catalytic processes.
Understanding that “catalysts can transit between molecule and material and back, and the role that electrochemistry plays in those transformations, is a concept that we are really excited to expand on,” Lodaya says.
Harraz adds: “With this new understanding that both types of catalysis could play a role, what other catalytic processes are out there that actually involve both? Maybe those have a lot of room for improvement that could benefit from this understanding.”
This work is “illuminating, something that will be worth teaching at the undergraduate level,” says Christophe Coperet, a professor of inorganic chemistry at ETH Zurich, who was not associated with the research. “The work highlights new ways of thinking. ... [It] is notable in the sense that it not only reconciles homogeneous and heterogeneous catalysis, but it describes these complex processes as half reactions, where electron transfers can cycle between distinct entities.”
The research was supported, in part, by the National Science Foundation as a Phase I Center for Chemical Innovation; the Center for Interfacial Ionics; and the Gordon and Betty Moore Foundation.
MIT welcomes 2025 Heising-Simons Foundation 51 Pegasi b Fellow Jess Speedie
The fellowship supports research contributing to the field of planetary science and astronomy.
The MIT School of Science welcomes Jess Speedie, one of eight recipients of the 2025 51 Pegasi b Fellowship. The announcement was made March 27 by the Heising-Simons Foundation.
The 51 Pegasi b Fellowship, named after the first exoplanet discovered orbiting a sun-like star, was established in 2017 to provide postdocs with the opportunity to conduct theoretical, observational, and experimental research in planetary astronomy.
Speedie, who expects to complete her PhD in astronomy at the University of Victoria, Canada, this summer, will be hosted by the Department of Earth, Atmospheric and Planetary Sciences (EAPS). She will be mentored by Kerr-McGee Career Development Professor Richard Teague as she uses a combination of observational data and simulations to study the birth of planets and the processes of planetary formation.
“The planetary environment is where all the good stuff collects … it has the greatest potential for the most interesting things in the universe to happen, such as the origin of life,” she says. “Planets, for me, are where the stories happen.”
Speedie’s work has focused on understanding “cosmic nurseries” and the detection and characterization of the youngest planets in the galaxy. Much of this work has made use of the Atacama Large Millimeter/submillimeter Array (ALMA), located in northern Chile. Made up of 66 parabolic dishes, ALMA observes the universe at radio wavelengths, and Speedie has developed a novel approach to finding signals of gravitational instability, a mechanism of planet formation, in ALMA data on protoplanetary disks.
“One of the big, big questions right now in the community focused on planet formation is, where are the planets? It is that simple. We think they’re developing in these disks, but we’ve detected so few of them,” she says.
While working as a fellow, Speedie is aiming to develop an algorithm that carefully aligns and stacks a decade of ALMA observational data to correct for a blurring effect that happens when combining images captured at different times. Doing so should produce the sharpest, most sensitive images of early planetary systems to date.
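The sensitivity gain from stacking can be illustrated with a toy model: if each frame is aligned by a known offset, averaging preserves the common signal while uncorrelated noise shrinks roughly as 1/sqrt(N). The integer-pixel offsets and image values below are invented; real alignment of interferometric data is far more involved, with sub-pixel corrections:

```python
import numpy as np

def align_and_stack(frames, offsets):
    """Undo each frame's known (dy, dx) shift (integer pixels, toy model)
    and average the aligned stack. The common signal is preserved while
    uncorrelated noise falls roughly as 1/sqrt(N)."""
    aligned = [np.roll(f, shift=(-dy, -dx), axis=(0, 1))
               for f, (dy, dx) in zip(frames, offsets)]
    return np.mean(aligned, axis=0)

# Toy demo: a faint point source observed in four noisy, shifted frames.
rng = np.random.default_rng(0)
truth = np.zeros((32, 32))
truth[16, 16] = 5.0  # the "planet" signal
offsets = [(0, 0), (1, 2), (-2, 1), (3, -1)]
frames = [np.roll(truth, shift=(dy, dx), axis=(0, 1))
          + rng.normal(0.0, 1.0, truth.shape)
          for dy, dx in offsets]
stacked = align_and_stack(frames, offsets)
# The source at (16, 16) stands out more clearly in the stack than in
# any single frame, because the noise has been averaged down.
```

If the frames are misaligned (the blurring effect described above), the point source smears across several pixels instead of reinforcing at one, which is why careful registration matters as much as the number of frames.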
She is also interested in studying infant planets, especially ones that may be forming in disks around protoplanets, rather than stars. Modeling how these ingredient materials in orbit behave could give astronomers a way to measure the mass of young planets.
“What’s exciting is the potential for discovery. I have this sense that the universe as a whole is infinitely more creative than human minds — the kinds of things that happen out there, you can’t make that up. It’s better than science fiction,” she says.
The other 51 Pegasi b Fellows and their host institutions this year are Nick Choksi (Caltech), Yan Liang (Yale University), Sagnick Mukherjee (Arizona State University), Matthew Nixon (Arizona State University), Julia Santos (Harvard University), Nour Skaf (University of Hawaii), and Jerry Xuan (University of California at Los Angeles).
The fellowship provides up to $450,000 of support over three years for independent research, a generous salary and discretionary fund, mentorship at host institutions, an annual summit to develop professional networks and foster collaboration, and an option to apply for another grant to support a future position in the United States.
Looking under the hood at the brain’s language system
Associate Professor Evelina Fedorenko is working to decipher the internal structure and functions of the brain’s language-processing machinery.
As a young girl growing up in the former Soviet Union, Evelina Fedorenko PhD ’07 studied several languages, including English, as her mother hoped that it would give her the chance to eventually move abroad for better opportunities.
Her language studies not only helped her establish a new life in the United States as an adult, but also led to a lifelong interest in linguistics and how the brain processes language. Now an associate professor of brain and cognitive sciences at MIT, Fedorenko studies the brain’s language-processing regions: how they arise, whether they are shared with other mental functions, and how each region contributes to language comprehension and production.
Fedorenko’s early work helped to identify the precise locations of the brain’s language-processing regions, and she has been building on that work to generate insight into how different neuronal populations in those regions implement linguistic computations.
“It took a while to develop the approach and figure out how to quickly and reliably find these regions in individual brains, given this standard problem of the brain being a little different across people,” she says. “Then we just kept going, asking questions like: Does language overlap with other functions that are similar to it? How is the system organized internally? Do different parts of this network do different things? There are dozens and dozens of questions you can ask, and many directions that we have pushed on.”
Among some of the more recent directions, she is exploring how the brain’s language-processing regions develop early in life, through studies of very young children, people with unusual brain architecture, and computational models known as large language models.
From Russia to MIT
Fedorenko grew up in the Russian city of Volgograd, which was then part of the Soviet Union. When the Soviet Union broke up in 1991, her mother, a mechanical engineer, lost her job, and the family struggled to make ends meet.
“It was a really intense and painful time,” Fedorenko recalls. “But one thing that was always very stable for me is that I always had a lot of love, from my parents, my grandparents, and my aunt and uncle. That was really important and gave me the confidence that if I worked hard and had a goal, that I could achieve whatever I dreamed about.”
Fedorenko did work hard in school, studying English, French, German, Polish, and Spanish, and she also participated in math competitions. As a 15-year-old, she spent a year attending high school in Alabama, as part of a program that placed students from the former Soviet Union with American families. She had been thinking about applying to universities in Europe but changed her plans when she realized the American higher education system offered more academic flexibility.
After being admitted to Harvard University with a full scholarship, she returned to the United States in 1998 and earned her bachelor’s degree in psychology and linguistics, while also working multiple jobs to send money home to help her family.
While at Harvard, she also took classes at MIT and ended up deciding to apply to the Institute for graduate school. For her PhD research at MIT, she worked with Ted Gibson, a professor of brain and cognitive sciences, and later, Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience. She began by using functional magnetic resonance imaging (fMRI) to study brain regions that appeared to respond preferentially to music, but she soon switched to studying brain responses to language.
Working with Kanwisher, who studies the functional organization of the human brain but had not previously worked much on language, helped Fedorenko build a research program free of potential biases baked into some of the early work on language processing in the brain.
“We really kind of started from scratch,” Fedorenko says, “combining the knowledge of language processing I have gained by working with Gibson and the rigorous neuroscience approaches that Kanwisher had developed when studying the visual system.”
After finishing her PhD in 2007, Fedorenko stayed at MIT for a few years as a postdoc funded by the National Institutes of Health, continuing her research with Kanwisher. During that time, she and Kanwisher developed techniques to identify language-processing regions in different people, and discovered new evidence that certain parts of the brain respond selectively to language. Fedorenko then spent five years as a research faculty member at Massachusetts General Hospital, before receiving an offer to join the faculty at MIT in 2019.
How the brain processes language
Since starting her lab at MIT’s McGovern Institute for Brain Research, Fedorenko and her trainees have made several discoveries that have helped to refine neuroscientists’ understanding of the brain’s language-processing regions, which are spread across the left frontal and temporal lobes of the brain.
In a series of studies, her lab showed that these regions are highly selective for language and are not engaged by activities such as listening to music, reading computer code, or interpreting facial expressions, all of which have been argued to share similarities with language processing.
“We’ve separated the language-processing machinery from various other systems, including the system for general fluid thinking, and the systems for social perception and reasoning, which support the processing of communicative signals, like facial expressions and gestures, and reasoning about others’ beliefs and desires,” Fedorenko says. “So that was a significant finding, that this system really is its own thing.”
More recently, Fedorenko has turned her attention to figuring out, in more detail, the functions of different parts of the language processing network. In one recent study, she identified distinct neuronal populations within these regions that appear to have different temporal windows for processing linguistic content, ranging from just one word up to six words.
She is also studying how language-processing circuits arise in the brain, with ongoing studies in which she and a postdoc in her lab are using fMRI to scan the brains of young children, observing how their language regions behave even before the children have fully learned to speak and understand language.
Large language models (similar to ChatGPT) can help with these types of developmental questions, as the researchers can better control the language inputs to the model and have continuous access to its abilities and representations at different stages of learning.
“You can train models in different ways, on different kinds of language, in different kinds of regimens. For example, training on simpler language first and then more complex language, or on language combined with some visual inputs. Then you can look at the performance of these language models on different tasks, and also examine changes in their internal representations across the training trajectory, to test which model best captures the trajectory of human language learning,” Fedorenko says.
To gain another window into how the brain develops language ability, Fedorenko launched the Interesting Brains Project several years ago. Through this project, she is studying people who experienced some type of brain damage early in life, such as a prenatal stroke, or brain deformation as a result of a congenital cyst. In some of these individuals, their conditions destroyed or significantly deformed the brain’s typical language-processing areas, but all of these individuals are cognitively indistinguishable from individuals with typical brains: They still learned to speak and understand language normally, and in some cases, they didn’t even realize that their brains were in some way atypical until they were adults.
“That study is all about plasticity and redundancy in the brain, trying to figure out what brains can cope with, and how,” Fedorenko says. “Are there many solutions to build a human mind, even when the neural infrastructure is so different-looking?”
Deep-dive dinners are the norm for tuna and swordfish, MIT oceanographers find
These big fish get most of their food from the ocean’s “twilight zone,” a deep, dark region the commercial fishing industry is eyeing with interest.
How far would you go for a good meal? For some of the ocean’s top predators, maintaining a decent diet requires some surprisingly long-distance dives.
MIT oceanographers have found that big fish like tuna and swordfish get a large fraction of their food from the ocean’s twilight zone — a cold and dark layer of the ocean about half a mile below the surface, where sunlight rarely penetrates. Tuna and swordfish have been known to take extreme plunges, but it was unclear whether these deep dives were for food, and to what extent the fishes’ diet depends on prey in the twilight zone.
In a study published recently in the ICES Journal of Marine Science, the MIT student-led team reports that the twilight zone is a major food destination for three predatory fish — bigeye tuna, yellowfin tuna, and swordfish. While the three species swim primarily in the shallow open ocean, the scientists found these fish are sourcing between 50 and 60 percent of their diet from the twilight zone.
The findings suggest that tuna and swordfish rely more heavily on the twilight zone than scientists had assumed. This implies that any change to the twilight zone’s food web, such as through increased fishing, could negatively impact fisheries for tuna and swordfish in shallower waters.
“There is increasing interest in commercial fishing in the ocean’s twilight zone,” says Ciara Willis, the study’s lead author, who was a PhD student in the MIT-Woods Hole Oceanographic Institution (WHOI) Joint Program when conducting the research and is now a postdoc at WHOI. “If we start heavily fishing that layer of the ocean, our study suggests that could have profound implications for tuna and swordfish, which are very reliant on the twilight zone and are highly valuable existing fisheries.”
The study’s co-authors include Kayla Gardener of MIT-WHOI, and WHOI researchers Martin Arostegui, Camrin Braun, Leah Houghton, Joel Llopiz, Annette Govindarajan, and Simon Thorrold, along with Walt Golet at the University of Maine.
Deep-ocean buffet
The ocean’s twilight zone is a vast and dim layer that lies between the sunlit surface waters and the ocean’s permanently dark, midnight zone. Also known as the midwater, or mesopelagic layer, the twilight zone stretches between 200 and 1,000 meters below the ocean’s surface and is home to a huge variety of organisms that have adapted to live in the darkness.
“This is a really understudied region of the ocean, and it’s filled with all these fantastic, weird animals,” Willis says.
In fact, it’s estimated that the biomass of fish in the twilight zone is somewhere close to 10 billion tons, much of which is concentrated in layers at certain depths. By comparison, the marine life that lives closer to the surface, Willis says, is “a thin soup,” which is slim pickings for large predators.
“It’s important for predators in the open ocean to find concentrated layers of food. And I think that’s what drives them to be interested in the ocean’s twilight zone,” Willis says. “We call it the ‘deep ocean buffet.’”
And much of this buffet is on the move. Many kinds of fish, squid, and other deep-sea organisms in the twilight zone will swim up to the surface each night to find food. This twilight community will descend back into darkness at dawn to avoid detection.
Scientists have observed that many large predatory fish will make regular dives into the twilight zone, presumably to feast on the deep-sea bounty. For instance, bigeye tuna spend much of their day making multiple short, quick plunges into the twilight zone, while yellowfin tuna dive down every few days to weeks. Swordfish, in contrast, appear to follow the daily twilight migration, feeding on the community as it rises and falls each day.
“We’ve known for a long time that these fish and many other predators feed on twilight zone prey,” Willis says. “But the extent to which they rely on this deep-sea food web for their forage has been unclear.”
Twilight signal
For years, scientists and fishers have found remnants of fish from the twilight zone in the stomach contents of larger, surface-based predators. This suggests that predator fish do indeed feed on twilight food, such as lanternfish, certain types of squid, and long, snake-like fish called barracudina. But, as Willis notes, stomach contents give just a “snapshot” of what a fish ate that day.
She and her colleagues wanted to know how big a role twilight food plays in the general diet of predator fish. For their new study, the team collaborated with fishermen in New Jersey and Florida, who fish for a living in the open ocean. They supplied the team with small tissue samples of their commercial catch, including samples of bigeye tuna, yellowfin tuna, and swordfish.
Willis and her advisor, Senior Scientist Simon Thorrold, brought the samples back to Thorrold’s lab at WHOI and analyzed the fish bits for essential amino acids — the key building blocks of proteins. Essential amino acids are only made by primary producers, or members of the base of the food web, such as phytoplankton, microbes, and fungi. Each of these producers makes essential amino acids with a slightly different carbon isotope configuration that then is conserved as the producers are consumed on up their respective food chains.
“One of the hypotheses we had was that we’d be able to distinguish the carbon isotopic signature of the shallow ocean, which would logically be more phytoplankton-based, versus the deep ocean, which is more microbially based,” Willis says.
The researchers figured that if a fish sample had one carbon isotopic make-up over another, it would be a sign that that fish feeds more on food from the deep, rather than shallow waters.
“We can use this [carbon isotope signature] to infer a lot about what food webs they’ve been feeding in, over the last five to eight months,” Willis says.
The team looked at carbon isotopes in tissue samples from more than 120 individual fish, including bigeye tuna, yellowfin tuna, and swordfish. They found that individuals from all three species contained a substantial amount of carbon derived from sources in the twilight zone. The researchers estimate that, on average, food from the twilight zone makes up 50 to 60 percent of the diet of the three predator species, with some slight variations among species.
“We saw the bigeye tuna were far and away the most consistent in where they got their food from. They didn’t vary much from individual to individual,” Willis says. “Whereas the swordfish and yellowfin tuna were more variable. That means if you start having big-scale fishing in the twilight zone, the bigeye tuna might be the ones who are most at risk from food web effects.”
The researchers note there has been increased interest in commercially fishing the twilight zone. While many fish in that region are not edible for humans, they are starting to be harvested as fishmeal and fish oil products. In ongoing work, Willis and her colleagues are evaluating the potential impacts to tuna fisheries if the twilight zone becomes a target for large-scale fishing.
“If predatory fish like tunas have 50 percent reliance on twilight zone food webs, and we start heavily fishing that region, that could lead to uncertainty around the profitability of tuna fisheries,” Willis says. “So we need to be very cautious about impacts on the twilight zone and the larger ocean ecosystem.”
This work was part of the Woods Hole Oceanographic Institution’s Ocean Twilight Zone Project, funded as part of the Audacious Project housed at TED. Willis was additionally supported by the Natural Sciences and Engineering Research Council of Canada and the MIT Martin Family Society of Fellows for Sustainability.
Professor Emeritus Frederick Greene, influential chemist who focused on free radicals, dies at 97
The physical organic chemist and MIT professor for over 40 years is celebrated for his lasting impact on generations of chemists.
Frederick “Fred” Davis Greene II, professor emeritus in the MIT Department of Chemistry, an accomplished physical organic chemist known for his work on free radicals, passed away peacefully after a brief illness, surrounded by his family, on Saturday, March 22. He had been a member of the MIT community for over 70 years.
“Greene’s dedication to teaching, mentorship, and the field of physical organic chemistry is notable,” said Professor Troy Van Voorhis, head of the Department of Chemistry, upon learning of Greene’s passing. “He was also a constant source of joy to those who interacted with him, and his commitment to students and education was legendary. He will be sorely missed.”
Greene, a native of Glen Ridge, New Jersey, was born on July 7, 1927, to parents Phillips Foster Greene and Ruth Altman Greene. He spent his early years in China, where his father was a medical missionary with Yale-In-China. Greene and his family moved to the Philippines just ahead of the Japanese invasion prior to World War II, then back to the French Concession of Shanghai, and to the United States in 1940. He joined the U.S. Navy in December 1944, and afterwards earned his bachelor’s degree from Amherst College in 1949 and a PhD from Harvard University in 1952. Following a year at the University of California at Los Angeles as a research associate, he was appointed a professor of chemistry at MIT by then-Department Head Arthur C. Cope in 1953. Greene retired in 1995.
Greene’s research focused on peroxide decompositions and free radical chemistry, and he reported the remarkable bimolecular reaction between certain diacyl peroxides and electron-rich olefins and aromatics. He was also interested in small-ring heterocycles, e.g., the three-membered ring 2,3-diaziridinones. His research also covered strained olefins, the Greene-Viavattene diene, and 9,9′,10,10′-tetradehydrodianthracene.
Greene was elected to the American Academy of Arts and Sciences in 1965 and received an honorary doctorate from Amherst College for his research in free radicals. He served as editor-in-chief of the Journal of Organic Chemistry of the American Chemical Society from 1962 to 1988. He was awarded a special fellowship from the National Science Foundation, spent a year at Cambridge University in Cambridge, England, and was a member of the Chemical Society of London.
Greene and Professor James Moore of the University of Philadelphia worked closely with Greene’s wife, Theodora “Theo” W. Greene, in the conversion of her PhD thesis, which was overseen by Professor Elias J. Corey of Harvard University, into her book “Greene’s Protective Groups in Organic Synthesis.” The book became an indispensable reference for any practicing synthetic organic or medicinal chemist and is now in its fifth edition. Theo, who predeceased Fred in July 2005, was a tremendous partner to Greene, both personally and professionally. A careful researcher in her own right, she served as associate editor of the Journal of Organic Chemistry for many years.
Fred Greene was recently featured in a series of videos with Professor Emeritus Dietmar Seyferth (who passed away in 2020) that was spearheaded by Professor Rick Danheiser. The videos cover a range of topics, including Seyferth and Greene’s memories during the 1950s to mid-1970s of their fellow faculty members, how they came to be hired, the construction of various lab spaces, developments in teaching and research, the evolution of the department’s graduate program, and much more.
Danheiser notes that it was a privilege to share responsibility for the undergraduate class 5.43 (Advanced Organic Chemistry) with Greene. “Fred Greene was a fantastic teacher and inspired several generations of MIT undergraduate and graduate students with his superb lectures,” Danheiser recalls. The course they shared was Danheiser’s first teaching assignment at MIT, and he states that Greene’s “counsel and mentoring was invaluable to me.”
The Department of Chemistry recognized Greene’s contributions to its academic program by naming the annual student teaching award the “Frederick D. Greene Teaching Award.” This award recognizes outstanding contributions to the teaching of chemistry by undergraduates. Since 1993, the award has been given to 46 students.
Dabney White Dixon PhD ’76 was one of many students with whom Greene formed a lifelong friendship and mentorship. Dixon shares, “Fred Greene was an outstanding scientist — intelligent, ethical, and compassionate in every aspect of his life. He possessed an exceptional breadth of knowledge in organic chemistry, particularly in mechanistic organic chemistry, as evidenced by his long tenure as editor of the Journal of Organic Chemistry (1962 to 1988). Weekly, large numbers of manuscripts flowed through his office. He had an acute sense of fairness in evaluating submissions and was helpful to those submitting manuscripts. His ability to navigate conflicting scientific viewpoints was especially evident during the heated debates over non-classical carbonium ions in the 1970s.
“Perhaps Fred’s greatest contribution to science was his mentorship. At a time when women were rare in chemistry PhD programs, Fred’s mentorship was particularly meaningful. I was the first woman in my scientific genealogical lineage to study chemistry, and his guidance gave me the confidence to overcome challenges. He and Theo provided a supportive and joyful environment, helping me forge a career in academia where I have since mentored 13 PhD students — an even mix of men and women — a testament to the social progress in science that Fred helped foster.
“Fred’s meticulous attention to detail was legendary. He insisted that every new molecule be fully characterized spectroscopically before he would examine the data. Through this, his students learned the importance of thoroughness, accuracy, and organization. He was also an exceptional judge of character, entrusting students with as much responsibility as they could handle. His honesty was unwavering — he openly acknowledged mistakes, setting a powerful example for his students.
“Shortly before the pandemic, I had the privilege of meeting Fred with two of his scientific ‘granddaughters’ — Elizabeth Draganova, then a postdoc at Tufts (now an assistant professor at Emory), and Cyrianne Keutcha, then a graduate student at Harvard (now a postdoc at Yale). As we discussed our work, it was striking how much science had evolved — from IR and NMR of small-ring heterocycles to surface plasmon resonance and cryo-electron microscopy of large biochemical systems. Yet, Fred’s intellectual curiosity remained as sharp as ever. His commitment to excellence, attention to detail, and passion for uncovering chemical mechanisms lived on in his scientific descendants.
“He leaves a scientific legacy of chemists who internalized his lessons on integrity, kindness, and rigorous analysis, carrying them forward to their own students and research. His impact on the field of chemistry — and on the lives of those fortunate enough to have known him — will endure.”
Carl Renner PhD ’74 felt fortunate and privileged to be a doctoral student in the Greene group from 1969 to 1973, and to serve as a teaching assistant for Greene’s 5.43 course. Renner recalls, “He possessed a curious mind of remarkable clarity and discipline. He prepared his lectures meticulously and loved his students. He was extremely generous with his time and knowledge. I never heard him complain or say anything unkind. Everyone he encountered came away better for it.”
Gary Breton PhD ’91 credits the development of his interest in physical organic chemistry to his time spent in Greene’s class. Breton says, “During my time in the graduate chemistry program at MIT (1987-91) I had the privilege of learning from some of the world’s greatest minds in chemistry, including Dr. Fred Greene. At that time, all incoming graduate students in organic chemistry were assigned in small groups to a seminar-type course that met each week to work on the elucidation of reaction mechanisms, and I was assigned to Dr. Greene’s class. It was here that not only did Dr. Greene afford me a confidence in how to approach reaction mechanisms, but he also ignited my fascination with physical organic chemistry. I was only too happy to join his research group, and begin a love/hate relationship with reactive nitrogen-containing heterocycles that continues to this day in my own research lab as a chemistry professor.
“Anyone that knew Dr. Greene quickly recognized that he was highly intelligent and exceptionally knowledgeable about all things organic, but under his mentorship I also saw his creativity and cleverness. Beyond that, and even more importantly, I witnessed his kindness and generosity, and his subtle sense of humor. Dr. Greene’s enduring legacy is the large number of undergraduate students, graduate students, and postdocs whose lives he touched over his many years. He will be greatly missed.”
John Dolhun PhD ’73 recalls Greene’s love for learning, and that he “was one of the kindest persons that I have known.” Dolhun shares, “I met Fred Greene when I was a graduate student. His organic chemistry course was one of the most popular, and he was a top choice for many students’ thesis committees. When I returned to MIT in 2008 and reconnected with him, he was still endlessly curious — always learning, asking questions. A few years ago, he visited me and we had lunch. Back at the chemistry building, I reached for the elevator button and he said, ‘I always walk up the five flights of stairs.’ So, I walked up with him. Fred knew how to keep both mind and body in shape. He was truly a beacon of light in the department.”
Liz McGrath, retired chemistry staff member, warmly recalls the regular coffees and conversations she shared with Fred over two decades at the Institute. She shares, “Fred, who was already emeritus by the time of my arrival, imparted to me a deep interest in the history of MIT Chemistry’s events and colorful faculty. He had a phenomenal memory, which made his telling of the history so rich in its content. He was a true gentleman and sweet and kind to boot. ... I will remember him with much fondness.”
Greene is survived by his children, Alan, Carol, Elizabeth, and Phillips; nine grandchildren; and six great grandchildren. A memorial service will be held on April 5 at 11 a.m. at the First Congregational Church in Winchester, Massachusetts.
Collaboration between MIT and GE Vernova aims to develop and scale sustainable energy systems
The MIT-GE Vernova Energy and Climate Alliance includes research, education, and career opportunities across the Institute.
MIT and GE Vernova today announced the creation of the MIT-GE Vernova Energy and Climate Alliance to help develop and scale sustainable energy systems across the globe.
The alliance launches a five-year collaboration between MIT and GE Vernova, a global energy company that spun off from General Electric’s energy business in 2024. The endeavor will encompass research, education, and career opportunities for students, faculty, and staff across MIT’s five schools and the MIT Schwarzman College of Computing. It will focus on three main themes: decarbonization, electrification, and renewables acceleration.
“This alliance will provide MIT students and researchers with a tremendous opportunity to work on energy solutions that could have real-world impact,” says Anantha Chandrakasan, MIT’s chief innovation and strategy officer and dean of the School of Engineering. “GE Vernova brings domain knowledge and expertise in deploying these solutions at scale. When our researchers develop new innovative technologies, GE Vernova is strongly positioned to bring them to global markets.”
Through the alliance, GE Vernova is sponsoring research projects at MIT and providing philanthropic support for MIT research fellowships. The company will also engage with MIT’s community through participation in corporate membership programs and professional education.
“It’s a privilege to combine forces with MIT’s world-class faculty and students as we work together to realize an optimistic, innovation-driven approach to solving the world’s most pressing challenges,” says Scott Strazik, GE Vernova CEO. “Through this alliance, we are proud to be able to help drive new technologies while at the same time inspire future leaders to play a meaningful role in deploying technology to improve the planet at companies like GE Vernova.”
“This alliance embodies the spirit of the MIT Climate Project — combining cutting-edge research, a shared drive to tackle today’s toughest energy challenges, and a deep sense of optimism about what we can achieve together,” says Sally Kornbluth, president of MIT. “With the combined strengths of MIT and GE Vernova, we have a unique opportunity to make transformative progress in the flagship areas of electrification, decarbonization, and renewables acceleration.”
The alliance, comprising a $50 million commitment, will operate within MIT’s Office of Innovation and Strategy. It will fund approximately 12 annual research projects relating to the three themes, as well as three master’s student projects in MIT’s Technology and Policy Program. The research projects will address challenges like developing and storing clean energy, as well as the creation of robust system architectures that help sustainable energy sources like solar, wind, advanced nuclear reactors, green hydrogen, and more compete with carbon-emitting sources.
The projects will be selected by a joint steering committee composed of representatives from MIT and GE Vernova, following an annual Institute-wide call for proposals.
The collaboration will also create approximately eight endowed GE Vernova research fellowships for MIT students, to be selected by faculty and beginning in the fall. In addition, 10 student internships will span GE Vernova’s global operations, and GE Vernova will sponsor programming through MIT’s New Engineering Education Transformation (NEET), which equips students with career-oriented experiential opportunities. The alliance will also create professional education programming for GE Vernova employees.
“The internships and fellowships will be designed to bring students into our ecosystem,” says GE Vernova Chief Corporate Affairs Officer Roger Martella. “Students will walk our factory floor, come to our labs, be a part of our management teams, and see how we operate as business leaders. They’ll get a sense for how what they’re learning in the classroom is being applied in the real world.”
Philanthropic support from GE Vernova will also support projects in MIT’s Human Insight Collaborative (MITHIC), which launched last fall to elevate human-centered research and teaching. The projects will allow faculty to explore how areas like energy and cybersecurity influence human behavior and experiences.
In connection with the alliance, GE Vernova is expected to join several MIT consortia and membership programs, helping foster collaborations and dialogue between industry experts and researchers and educators across campus.
With operations across more than 100 countries, GE Vernova designs, manufactures, and services technologies to generate, transfer, and store electricity with a mission to decarbonize the world. The company is headquartered in Kendall Square, right down the road from MIT, which its leaders say is not a coincidence.
“We’re really good at taking proven technologies and commercializing them and scaling them up through our labs,” Martella says. “MIT excels at coming up with those ideas and being a sort of time machine that thinks outside the box to create the future. That’s why this is such a great fit: We both have a commitment to research, innovation, and technology.”
The alliance is the latest in MIT’s rapidly growing portfolio of research and innovation initiatives around sustainable energy systems, which also includes the Climate Project at MIT. Separate from, but complementary to, the MIT-GE Vernova Alliance, the Climate Project is a campus-wide effort to develop technological, behavioral, and policy solutions to some of the toughest problems impeding an effective global climate response.
MIT affiliates named 2024 AAAS Fellows
The American Association for the Advancement of Science recognizes six current affiliates and 27 additional MIT alumni for their efforts to advance science and related fields.
Six current MIT affiliates and 27 additional MIT alumni have been elected as fellows of the American Association for the Advancement of Science (AAAS).
The 2024 class of AAAS Fellows includes 471 scientists, engineers, and innovators spanning all 24 AAAS disciplinary sections, recognized for their scientifically and socially distinguished achievements.
Noubar Afeyan PhD ’87, life member of the MIT Corporation, was named a AAAS Fellow “for outstanding leadership in biotechnology, in particular mRNA therapeutics, and for advocacy for recognition of the contributions of immigrants to economic and scientific progress.” Afeyan is the founder and CEO of the venture creation company Flagship Pioneering, which has built over 100 science-based companies to transform human health and sustainability. He is also the chairman and cofounder of Moderna, which was awarded a 2024 National Medal of Technology and Innovation for the development of its Covid-19 vaccine. Afeyan earned his PhD in biochemical engineering at MIT in 1987 and was a senior lecturer at the MIT Sloan School of Management for 16 years, starting in 2000. Among other activities at the Institute, he serves on the advisory board of the MIT Abdul Latif Jameel Clinic for Machine Learning and delivered MIT’s 2024 Commencement address.
Cynthia Breazeal SM ’93, ScD ’00 is a professor of media arts and sciences at MIT, where she founded and directs the Personal Robots group in the MIT Media Lab. At MIT Open Learning, she is the MIT dean for digital learning, and in this role she leverages her experience in emerging digital technologies and business, research, and strategic initiatives to lead Open Learning’s business, research, and engagement units. She is also the director of the MIT-wide Initiative on Responsible AI for Social Empowerment and Education (raise.mit.edu). She co-founded the consumer social robotics company Jibo, Inc., where she served as chief scientist and chief experience officer. She is recognized for distinguished contributions in the field of artificial intelligence education, particularly around the use of social robots, and learning at scale.
Alan Edelman PhD ’89 is a professor of applied mathematics in the Department of Mathematics and leads the Applied Computing Group (the MIT Julia Lab) of the Computer Science and Artificial Intelligence Laboratory. He is recognized as a 2024 AAAS Fellow for distinguished contributions and outstanding breakthroughs in high-performance computing, linear algebra, random matrix theory, and computational science, and in particular for the development of the Julia programming language. Edelman has been elected a fellow of five different societies — AMS, the Society for Industrial and Applied Mathematics, the Association for Computing Machinery, the Institute of Electrical and Electronics Engineers, and AAAS.
Robert B. Millard '73, life member and chairman emeritus of the MIT Corporation, was named a 2024 AAAS Fellow for outstanding contributions to the scientific community and U.S. higher education "through exemplary leadership service to such storied institutions as AAAS and MIT." Millard joined the MIT Corporation as a term member in 2003 and was elected a life member in 2013. He served on the Executive Committee for 10 years and on the Investment Company Management Board for seven years, including serving as its chair for the last four years. He served as a member of the Visiting Committees for Physics, Architecture, and Chemistry. In addition, Millard has served as a member of the Linguistics and Philosophy Visiting Committee, the Corporation Development Committee, and the Advisory Council for the Council for the Arts. In 2011, Millard received the Bronze Beaver Award, the MIT Alumni Association’s highest honor for distinguished service.
Jagadeesh S. Moodera is a senior research scientist in the Department of Physics. His research interests in experimental condensed matter physics include spin-polarized tunneling and nanospintronics; exchange-coupled ferromagnet/superconductor interfaces, triplet pairing, and nonreciprocal current transport and memory toward superconducting spintronics for quantum technology; and topological insulators/superconductors, including studies of Majorana bound states in metallic systems. His research on spin-polarized tunneling led to a breakthrough: the observation of tunnel magnetoresistance (TMR) at room temperature in magnetic tunnel junctions. That result sparked a surge of activity in what is now one of the most active areas of the field. The TMR effect is used in all ultra-high-density magnetic data storage, as well as in nonvolatile magnetic random access memory (MRAM), which is being advanced further in various electronic devices, including neuromorphic computing architectures. For his leadership in spintronics, the discovery of TMR, the development of MRAM, and for mentoring the next generation of scientists, Moodera was named a 2024 AAAS Fellow. For the TMR discovery he was awarded the Oliver Buckley Prize (2009) by the American Physical Society (APS); he was also named a U.S. National Science Foundation Competitiveness and Innovation Fellow (2008-10), won IBM and TDK research awards (1995-98), and became an APS Fellow (2000).
Noelle Eckley Selin, the director of the MIT Center for Sustainability Science and Strategy and a professor in the Institute for Data, Systems, and Society and the Department of Earth, Atmospheric and Planetary Sciences, uses atmospheric chemistry modeling to inform decision-making strategies on air pollution, climate change, and toxic substances, including mercury and persistent organic pollutants. She has also published articles and book chapters on the interactions between science and policy in international environmental negotiations, in particular focusing on global efforts to regulate hazardous chemicals and persistent organic pollutants. She was named a 2024 AAAS Fellow for world-recognized leadership in modeling the impacts of air pollution on human health, in assessing the costs and benefits of related policies, and in integrating technology dynamics into sustainability science.
Additional MIT alumni honored as 2024 AAAS Fellows include: Danah Boyd SM ’02 (Media Arts and Sciences); Michael S. Branicky ScD ’95 (EECS); Jane P. Chang SM ’95, PhD ’98 (Chemical Engineering); Yong Chen SM '99 (Mathematics); Roger Nelson Clark PhD '80 (EAPS); Mark Stephen Daskin ’74, PhD ’78 (Civil and Environmental Engineering); Marla L. Dowell PhD ’94 (Physics); Raissa M. D’Souza PhD ’99 (Physics); Cynthia Joan Ebinger SM '86, PhD '88 (EAPS/WHOI); Thomas Henry Epps III ’98, SM ’99 (Chemical Engineering); Daniel Goldman ’94 (Physics); Kenneth Keiler PhD ’96 (Biology); Karen Jean Meech PhD '87 (EAPS); Christopher B. Murray PhD ’95 (Chemistry); Jason Nieh '89 (EECS); William Nordhaus PhD ’67 (Economics); Milica Radisic PhD '04 (Chemical Engineering); James G. Rheinwald PhD ’76 (Biology); Adina L. Roskies PhD ’04 (Philosophy); Linda Rothschild (Preiss) PhD '70 (Mathematics); Soni Lacefield Shimoda PhD '03 (Biology); Dawn Y. Sumner PhD ’95 (EAPS); Tina L. Tootle PhD ’04 (Biology); Karen Viskupic PhD '03 (EAPS); Brant M. Weinstein PhD ’92 (Biology); Chee Wei Wong SM ’01, ScD ’03 (Mechanical Engineering); and Fei Xu PhD ’95 (Brain and Cognitive Sciences).
Professor Emeritus Earle Lomon, nuclear theorist, dies at 94
On the physics faculty for nearly 40 years and a member of the Center for Theoretical Physics, he focused on the interactions of hadrons and developed an R-matrix formulation of scattering theory.
Earle Leonard Lomon PhD ’54, MIT professor emeritus of physics, died on March 7 in Newton, Massachusetts, at the age of 94.
A longtime member of the Center for Theoretical Physics, Lomon was interested primarily in the forces between protons and neutrons at low energies, where the effects of quarks and gluons are hidden by their confinement.
His research focused on the interactions of hadrons — protons, neutrons, mesons, and nuclei — before it was understood that they were composed of quarks and gluons.
“Earle developed an R-matrix formulation of scattering theory that allowed him to separate known effects at long distance from then-unknown forces at short distances,” says longtime colleague Robert Jaffe, the Jane and Otto Morningstar Professor of Physics.
“When QCD [quantum chromodynamics] emerged as the correct field theory of hadrons, Earle moved quickly to incorporate the effects of quarks and gluons at short distance and high energies,” says Jaffe. “Earle’s work can be interpreted as a precursor to modern chiral effective field theory, where the pertinent degrees of freedom at low energy, which are hadrons, are matched smoothly onto the quark and gluon degrees of freedom that dominate at higher energy.”
“He was a truly cosmopolitan scientist, given his open mind and deep kindness,” says Bruno Coppi, MIT professor emeritus of physics.
Early years
Born Nov. 15, 1930, in Montreal, Quebec, Earle was the only son of Harry Lomon and Etta Rappaport. At Montreal High School, he met his future wife, Ruth Jones. Their shared love for classical music drew them both to the school's Classical Music Club, where Lomon served as president and Ruth was an accomplished musician.
While studying at McGill University, he was a research physicist for the Canada Defense Research Board from 1950 to 1951. After graduating in 1951, he married Jones, and they moved to Cambridge, where he pursued his doctorate at MIT in theoretical physics, mentored by Professor Hermann Feshbach.
Lomon spent 1954 to 1955 at the Institute for Theoretical Physics (now the Niels Bohr Institute) in Copenhagen. “With the presence of Niels Bohr, Aage Bohr, Ben Mottelson, and Willem V.R. Malkus, there were many physicists from Europe and elsewhere, including MIT’s Dave Frisch, making the Institute for Physics an exciting place to be,” recalled Lomon.
After receiving his PhD from MIT in 1954, he did postdoctoral work at the Institute for Theoretical Physics in Copenhagen, the Weizmann Institute of Science in Israel, and Cornell, where he was a research associate at the Laboratory for Nuclear Studies in 1956-57. He was an associate professor at McGill from 1957 until 1960, when he joined the MIT faculty.
In 1965, Lomon was awarded a Guggenheim Memorial Foundation Fellowship and was a visiting scientist at CERN. In 1968, he joined the newly formed MIT Center for Theoretical Physics. He became a full professor in 1970 and retired in 1999.
Los Alamos and math theory
From 1968 to 2015, Lomon was an affiliate researcher at the Los Alamos National Laboratory. During this time, he collaborated with Fred Begay, a Navajo nuclear physicist and medicine man. New Mexico became the Lomon family’s second home, and Lomon enjoyed the area's hiking trails and climbing Baldy Mountain.
Lomon also developed educational materials for mathematical theory. He developed textbooks, educational tools, research, and a creative problem-solving curriculum for the Unified Science and Mathematics for Elementary Schools. His children recall when Earle would review the educational tools with them at the dinner table. From 2001 to 2013, he was program director for mathematical theory for the U.S. National Science Foundation’s Theoretical Physics research hub.
Lomon was an American Physical Society Fellow and a member of the Canadian Association of Physicists.
Husband of the late Ruth Lomon, he is survived by his daughters Glynis Lomon and Deirdre Lomon; his son, Dylan Lomon; grandchildren Devin Lomon, Alexia Layne-Lomon, and Benjamin Garner; and six great-grandchildren. There will be a memorial service at a later date; instead of flowers, please consider donating to the Los Alamos National Laboratory Foundation.
Mathematicians uncover the logic behind how people walk in crowds
The findings could help planners design safer, more efficient pedestrian thoroughfares.
Next time you cross a crowded plaza, crosswalk, or airport concourse, take note of the pedestrian flow. Are people walking in orderly lanes, single-file, to their respective destinations? Or is it a haphazard tangle of personal trajectories, as people dodge and weave through the crowd?
MIT instructor Karol Bacik and his colleagues studied the flow of human crowds and developed a first-of-its-kind way to predict when pedestrian paths will transition from orderly to entangled. Their findings may help inform the design of public spaces that promote safe and efficient thoroughfares.
In a paper appearing this week in the Proceedings of the National Academy of Sciences, the researchers consider a common scenario in which pedestrians navigate a busy crosswalk. The team analyzed the scenario through mathematical analysis and simulations, considering the many angles at which individuals may cross and the dodging maneuvers they may make as they attempt to reach their destinations while avoiding bumping into other pedestrians along the way.
The researchers also carried out controlled crowd experiments and studied how real participants walked through a crowd to reach certain locations. Through their mathematical and experimental work, the team identified a key measure that determines whether pedestrian traffic is ordered, such that clear lanes form in the flow, or disordered, in which there are no discernible paths through the crowd. Called “angular spread,” this parameter describes the number of people walking in different directions.
If a crowd has a relatively small angular spread, this means that most pedestrians walk in opposite directions and meet the oncoming traffic head-on, such as in a crosswalk. In this case, more orderly, lane-like traffic is likely. If, however, a crowd has a larger angular spread, such as in a concourse, it means there are many more directions that pedestrians can take to cross, with more chance for disorder.
In fact, the researchers calculated the point at which a moving crowd can transition from order to disorder. That point, they found, was an angular spread of around 13 degrees: if the average pedestrian veers away from straight ahead at an angle larger than 13 degrees, the crowd can tip into disordered flow.
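As a rough, back-of-the-envelope illustration (not the authors' actual model), the 13-degree threshold can be applied to simulated crowds. The root-mean-square definition of spread below is one plausible way to quantify it; the paper's precise definition may differ.

```python
import math
import random

CRITICAL_SPREAD_DEG = 13.0  # order-to-disorder transition reported by the team

def angular_spread_deg(deviations_deg):
    """Root-mean-square deviation from straight-ahead, in degrees
    (an illustrative stand-in for the paper's angular-spread measure)."""
    n = len(deviations_deg)
    return math.sqrt(sum(d * d for d in deviations_deg) / n)

def flow_regime(deviations_deg):
    spread = angular_spread_deg(deviations_deg)
    return "ordered (lanes likely)" if spread < CRITICAL_SPREAD_DEG else "disordered"

# A crosswalk-like crowd: nearly everyone walks straight across.
crosswalk = [random.gauss(0, 5) for _ in range(200)]
# A concourse-like crowd: many different crossing angles.
concourse = [random.gauss(0, 30) for _ in range(200)]

print(flow_regime(crosswalk))
print(flow_regime(concourse))
```

With typical deviations of about 5 degrees the crowd stays in the ordered regime, while a 30-degree spread lands well past the transition.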
“This all is very commonsense,” says Bacik, who is an instructor of applied mathematics at MIT. “The question is whether we can tackle it precisely and mathematically, and where the transition is. Now we have a way to quantify when to expect lanes — this spontaneous, organized, safe flow — versus disordered, less efficient, potentially more dangerous flow.”
The study’s co-authors include Grzegorz Sobota and Bogdan Bacik of the Academy of Physical Education in Katowice, Poland, and Tim Rogers at the University of Bath in the United Kingdom.
Right, left, center
Bacik, who is trained in fluid dynamics and granular flow, came to study pedestrian flow during 2021, when he and his collaborators looked into the impacts of social distancing, and ways in which people might walk among each other while maintaining safe distances. That work inspired them to look more generally into the dynamics of crowd flow.
In 2023, he and his collaborators explored “lane formation,” a phenomenon by which particles, grains, and, yes, people have been observed to spontaneously form lanes, moving in single-file when forced to cross a region from two opposite directions. In that work, the team identified the mechanism by which such lanes form, which Bacik sums up as “an imbalance of turning left versus right.” Essentially, they found that as soon as something in a crowd starts to look like a lane, individuals around that fledgling lane either join it or are forced to one side of it, walking parallel to the original lane, which others can follow. In this way, a crowd can spontaneously organize into regular, structured lanes.
“Now we’re asking, how robust is this mechanism?” Bacik says. “Does it only work in this very idealized situation, or can lane formation tolerate some imperfections, such as some people not going perfectly straight, as they might do in a crowd?”
Lane change
For their new study, the team looked to identify a key transition in crowd flow: When do pedestrians switch from orderly, lane-like traffic, to less organized, messy flow? The researchers first probed the question mathematically, with an equation that is typically used to describe fluid flow, in terms of the average motion of many individual molecules.
“If you think about the whole crowd flowing, rather than individuals, you can use fluid-like descriptions,” Bacik explains. “It’s this art of averaging, where, even if some people may cross more assertively than others, these effects are likely to average out in a sufficiently large crowd. If you only care about the global characteristics like, are there lanes or not, then you can make predictions without detailed knowledge of everyone in the crowd.”
Bacik and his colleagues used equations of fluid flow, and applied them to the scenario of pedestrians flowing across a crosswalk. The team tweaked certain parameters in the equation, such as the width of the fluid channel (in this case, the crosswalk), and the angle at which molecules (or people) flowed across, along with various directions that people can “dodge,” or move around each other to avoid colliding.
Based on these calculations, the researchers found that pedestrians in a crosswalk are more likely to form lanes, when they walk relatively straight across, from opposite directions. This order largely holds until people start veering across at more extreme angles. Then, the equation predicts that the pedestrian flow is likely to be disordered, with few to no lanes forming.
The researchers were curious to see whether the math bears out in reality. For this, they carried out experiments in a gymnasium, where they recorded the movements of pedestrians using an overhead camera. Each volunteer wore a paper hat, depicting a unique barcode that the overhead camera could track.
In their experiments, the team assigned volunteers various start and end positions along opposite sides of a simulated crosswalk, and tasked them with simultaneously walking across the crosswalk to their target location without bumping into anyone. They repeated the experiment many times, each time having volunteers assume different start and end positions. In the end, the researchers were able to gather visual data of multiple crowd flows, with pedestrians taking many different crossing angles.
When they analyzed the data and noted when lanes spontaneously formed, and when they did not, the team found that, much like the equation predicted, the angular spread mattered. Their experiments confirmed that the transition from ordered to disordered flow occurred somewhere around the theoretically predicted 13 degrees. That is, if an average person veered more than 13 degrees away from straight ahead, the pedestrian flow could tip into disorder, with little lane formation. What’s more, they found that the more disorder there is in a crowd, the less efficiently it moves.
The team plans to test their predictions on real-world crowds and pedestrian thoroughfares.
“We would like to analyze footage and compare that with our theory,” Bacik says. “And we can imagine that, for anyone designing a public space, if they want to have a safe and efficient pedestrian flow, our work could provide a simpler guideline, or some rules of thumb.”
This work is supported, in part, by the Engineering and Physical Sciences Research Council of UK Research and Innovation.
MIT scientists engineer starfish cells to shape-shift in response to light
The research may enable the design of synthetic, light-activated cells for wound healing or drug delivery.
Life takes shape with the motion of a single cell. In response to signals from certain proteins and enzymes, a cell can start to move and shake, leading to contractions that cause it to squeeze, pinch, and eventually divide. As daughter cells follow suit down the generational line, they grow, differentiate, and ultimately arrange themselves into a fully formed organism.
Now MIT scientists have used light to control how a single cell jiggles and moves during its earliest stage of development. The team studied the motion of egg cells produced by starfish — an organism that scientists have long used as a classic model for understanding cell growth and development.
The researchers focused on a key enzyme that triggers a cascade of motion within a starfish egg cell. They genetically designed a light-sensitive version of the same enzyme, which they injected into egg cells, and then stimulated the cells with different patterns of light.
They found that the light successfully triggered the enzyme, which in turn prompted the cells to jiggle and move in predictable patterns. For instance, the scientists could stimulate cells to exhibit small pinches or sweeping contractions, depending on the pattern of light they induced. They could even shine light at specific points around a cell to stretch its shape from a circle to a square.
Their results, appearing today in the journal Nature Physics, provide scientists with a new optical tool for controlling cell shape in its earliest developmental stages. Such a tool, they envision, could guide the design of synthetic cells, such as therapeutic “patch” cells that contract in response to light signals to help close wounds, or drug-delivering “carrier” cells that release their contents only when illuminated at specific locations in the body. Overall, the researchers see their findings as a new way to probe how life takes shape from a single cell.
“By revealing how a light-activated switch can reshape cells in real time, we’re uncovering basic design principles for how living systems self-organize and evolve shape,” says the study’s senior author, Nikta Fakhri, associate professor of physics at MIT. “The power of these tools is that they are guiding us to decode all these processes of growth and development, to help us understand how nature does it.”
The study’s MIT authors include first author Jinghui Liu, Yu-Chen Chao, and Tzer Han Tan; along with Tom Burkart, Alexander Ziepke, and Erwin Frey of Ludwig Maximilian University of Munich; John Reinhard of Saarland University; and S. Zachary Swartz of the Whitehead Institute for Biomedical Research.
Cell circuitry
Fakhri’s group at MIT studies the physical dynamics that drive cell growth and development. She is particularly interested in symmetry, and the processes that govern how cells follow or break symmetry as they grow and divide. The five-limbed starfish, she says, is an ideal organism for exploring such questions of growth, symmetry, and early development.
“A starfish is a fascinating system because it starts with a symmetrical cell and becomes a bilaterally symmetric larva at early stages, and then develops into pentameral adult symmetry,” Fakhri says. “So there are all these signaling processes that happen along the way to tell the cell how it needs to organize.”
Scientists have long studied the starfish and its various stages of development. Among many revelations, researchers have discovered a key “circuitry” within a starfish egg cell that controls its motion and shape. This circuitry involves an enzyme, GEF, that naturally circulates in a cell’s cytoplasm. When this enzyme is activated, it induces a change in a protein, called Rho, that is known to be essential for regulating cell mechanics.
When the GEF enzyme stimulates Rho, it causes the protein to switch from an essentially free-floating state to a state that binds the protein to the cell’s membrane. In this membrane-bound state, the protein then triggers the growth of microscopic, muscle-like fibers that thread out across the membrane and subsequently twitch, enabling the cell to contract and move.
In previous work, Fakhri’s group showed that a cell’s movements can be manipulated by varying the cell’s concentrations of GEF enzyme: The more enzyme they introduced into a cell, the more contractions the cell would exhibit.
“This whole idea made us think whether it’s possible to hack this circuitry, to not just change a cell’s pattern of movements but get a desired mechanical response,” Fakhri says.
Lights and action
To precisely manipulate a cell’s movements, the team looked to optogenetics — an approach that involves genetically engineering cells and cellular components such as proteins and enzymes, such that they activate in response to light.
Using established optogenetic techniques, the researchers developed a light-sensitive version of the GEF enzyme. From this engineered enzyme, they isolated its mRNA — essentially, the genetic blueprint for building the enzyme. They then injected this blueprint into egg cells that the team harvested from a single starfish ovary, which can hold millions of unfertilized cells. The cells, infused with the new mRNA, then began to produce light-sensitive GEF enzymes on their own.
In experiments, the researchers then placed each enzyme-infused egg cell under a microscope and shone light onto the cell in different patterns and from different points along the cell’s periphery. They took videos of the cell’s movements in response.
They found that when they aimed the light in specific points, the GEF enzyme became activated and recruited Rho protein to the light-targeted sites. There, the protein then set off its characteristic cascade of muscle-like fibers that pulled or pinched the cell in the same, light-stimulated spots. Much like pulling the strings of a marionette, they were able to control the cell’s movements, for instance directing it to morph into various shapes, including a square.
Surprisingly, they also found they could stimulate the cell to undergo sweeping contractions by shining light on a single spot, once a certain threshold of enzyme concentration was exceeded.
“We realized this Rho-GEF circuitry is an excitable system, where a small, well-timed stimulus can trigger a large, all-or-nothing response,” Fakhri says. “So we can either illuminate the whole cell, or just a tiny place on the cell, such that enough enzyme is recruited to that region so the system gets kickstarted to contract or pinch on its own.”
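The hallmark of an excitable system is exactly this all-or-nothing behavior: a stimulus below a threshold decays back to rest, while one just above it triggers the full response. A minimal toy model (illustrative only, not the paper's actual dynamics) makes the idea concrete with a cubic reaction term whose middle root acts as the threshold:

```python
# Toy excitable dynamics: du/dt = u * (u - a) * (1 - u).
# Below the threshold `a` the stimulus relaxes to the rest state (0);
# above it, the system runs all the way to the full response (1),
# mirroring the all-or-nothing character of the Rho-GEF circuit.

def settle(u0, a=0.3, dt=0.01, steps=5000):
    """Forward-Euler integration from an initial stimulus u0."""
    u = u0
    for _ in range(steps):
        u += dt * u * (u - a) * (1.0 - u)
    return u

print(round(settle(0.2), 3))  # sub-threshold stimulus: decays to rest
print(round(settle(0.4), 3))  # supra-threshold stimulus: full response
```

The threshold value `a` here is arbitrary; in the cell, it would correspond to the critical local concentration of recruited enzyme.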
The researchers compiled their observations and derived a theoretical framework to predict how a cell’s shape will change, given how it is stimulated with light. The framework, Fakhri says, opens a window into “the ‘excitability’ at the heart of cellular remodeling, which is a fundamental process in embryo development and wound healing.”
She adds: “This work provides a blueprint for designing ‘programmable’ synthetic cells, letting researchers orchestrate shape changes at will for future biomedical applications.”
This work was supported, in part, by the Sloan Foundation, and the National Science Foundation.
Device enables direct communication among multiple quantum processors
MIT researchers developed a photon-shuttling “interconnect” that can facilitate remote entanglement, a key step toward a practical quantum computer.
Quantum computers have the potential to solve complex problems that would be impossible for the most powerful classical supercomputer to crack.
Just like a classical computer has separate, yet interconnected, components that must work together, such as a memory chip and a CPU on a motherboard, a quantum computer will need to communicate quantum information between multiple processors.
Current architectures used to interconnect superconducting quantum processors are “point-to-point” in connectivity, meaning they require a series of transfers between network nodes, with compounding error rates.
On the way to overcoming these challenges, MIT researchers developed a new interconnect device that can support scalable, “all-to-all” communication, such that all superconducting quantum processors in a network can communicate directly with each other.
They created a network of two quantum processors and used their interconnect to send microwave photons back and forth on demand in a user-defined direction. Photons are particles of light that can carry quantum information.
The device includes a superconducting wire, or waveguide, that shuttles photons between processors and can be routed as far as needed. The researchers can couple any number of modules to it, efficiently transmitting information between a scalable network of processors.
They used this interconnect to demonstrate remote entanglement, a type of correlation between quantum processors that are not physically connected. Remote entanglement is a key step toward developing a powerful, distributed network of many quantum processors.
“In the future, a quantum computer will probably need both local and nonlocal interconnects. Local interconnects are natural in arrays of superconducting qubits. Ours allows for more nonlocal connections. We can send photons at different frequencies, times, and in two propagation directions, which gives our network more flexibility and throughput,” says Aziza Almanakly, an electrical engineering and computer science graduate student in the Engineering Quantum Systems group of the Research Laboratory of Electronics (RLE) and lead author of a paper on the interconnect.
Her co-authors include Beatriz Yankelevich, a graduate student in the EQuS Group; senior author William D. Oliver, the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science (EECS) and professor of Physics, director of the Center for Quantum Engineering, and associate director of RLE; and others at MIT and Lincoln Laboratory. The research appears today in Nature Physics.
A scalable architecture
The researchers previously developed a quantum computing module, which enabled them to send information-carrying microwave photons in either direction along a waveguide.
In the new work, they took that architecture a step further by connecting two modules to a waveguide in order to emit photons in a desired direction and then absorb them at the other end.
Each module is composed of four qubits, which serve as an interface between the waveguide carrying the photons and the larger quantum processors.
The qubits coupled to the waveguide emit and absorb photons, which are then transferred to nearby data qubits.
The researchers use a series of microwave pulses to add energy to a qubit, which then emits a photon. Carefully controlling the phase of those pulses enables a quantum interference effect that allows them to emit the photon in either direction along the waveguide. Reversing the pulses in time enables a qubit in another module any arbitrary distance away to absorb the photon.
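The directional trick rests on ordinary wave interference. A simplified two-point picture (not the device's actual Hamiltonian) shows the essence: two emission points a quarter-wavelength apart, driven 90 degrees out of phase, radiate constructively in one direction along the waveguide and destructively in the other, and flipping the sign of the drive phase flips the direction.

```python
import cmath
import math

def emitted_power(drive_phase_deg, direction):
    """Relative power radiated in direction = +1 (right) or -1 (left)
    by two emitters a quarter-wavelength apart (toy two-point model)."""
    spacing_phase = math.pi / 2  # quarter-wavelength propagation delay
    drive = math.radians(drive_phase_deg)
    # Sum the amplitude from emitter 1 with the phase-shifted amplitude
    # from emitter 2, including the delay between the two emission points.
    amp = 1 + cmath.exp(1j * (drive + direction * spacing_phase))
    return abs(amp) ** 2 / 4  # normalized to the range [0, 1]

print(emitted_power(-90, +1))  # rightward: constructive interference
print(emitted_power(-90, -1))  # leftward: destructive interference
```

Setting the drive phase to +90 degrees instead sends the photon leftward, which is the phase control the pulse sequences exploit.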
“Pitching and catching photons enables us to create a ‘quantum interconnect’ between nonlocal quantum processors, and with quantum interconnects comes remote entanglement,” explains Oliver.
“Generating remote entanglement is a crucial step toward building a large-scale quantum processor from smaller-scale modules. Even after that photon is gone, we have a correlation between two distant, or ‘nonlocal,’ qubits. Remote entanglement allows us to take advantage of these correlations and perform parallel operations between two qubits, even though they are no longer connected and may be far apart,” Yankelevich explains.
However, transferring a photon between two modules is not enough to generate remote entanglement. The researchers need to prepare the qubits and the photon so the modules “share” the photon at the end of the protocol.
Generating entanglement
The team did this by halting the photon emission pulses halfway through their duration. In quantum mechanical terms, the photon is both retained and emitted. Classically, one can think of half a photon being retained and half emitted.
Once the receiver module absorbs that “half-photon,” the two modules become entangled.
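In idealized textbook notation (a sketch; the paper's exact states and phases may differ), halting the emission halfway splits one excitation between the emitter qubit and the traveling photon, and absorption converts that split into a shared Bell state:

```latex
% Emitter qubit A halts emission halfway: the excitation is shared
% between the qubit and the traveling photon.
\frac{1}{\sqrt{2}}\bigl(\lvert e\rangle_A \lvert 0\rangle_{\mathrm{photon}}
  + \lvert g\rangle_A \lvert 1\rangle_{\mathrm{photon}}\bigr)
\;\longrightarrow\;
% Receiver qubit B absorbs the photon component, leaving the two
% distant modules in an entangled Bell state.
\frac{1}{\sqrt{2}}\bigl(\lvert e\rangle_A \lvert g\rangle_B
  + \lvert g\rangle_A \lvert e\rangle_B\bigr)
```

Measuring either qubit then yields outcomes correlated with the other, no matter how far apart the modules sit along the waveguide.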
But as the photon travels, joints, wire bonds, and connections in the waveguide distort the photon and limit the absorption efficiency of the receiving module.
To generate remote entanglement with high enough fidelity, or accuracy, the researchers needed to maximize how often the photon is absorbed at the other end.
“The challenge in this work was shaping the photon appropriately so we could maximize the absorption efficiency,” Almanakly says.
They used a reinforcement learning algorithm to “predistort” the photon. The algorithm optimized the protocol pulses in order to shape the photon for maximal absorption efficiency.
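The article names a reinforcement-learning algorithm but gives no detail, so the sketch below substitutes a toy linear distortion model and a plain random-search hill climb to illustrate the predistortion idea: perturb the pulse envelope, keep changes that raise a modeled absorption efficiency. Every shape and parameter here is invented for illustration, not taken from the paper.

```python
import math
import random

N = 32  # time samples in the control-pulse envelope

def distort(pulse):
    # Toy stand-in for waveguide imperfections: each sample smears forward in time.
    out = [0.0] * N
    for i, p in enumerate(pulse):
        for j, w in ((i, 0.6), (i + 1, 0.3), (i + 2, 0.1)):
            if j < N:
                out[j] += w * p
    return out

def absorption_efficiency(pulse):
    # Model efficiency as the normalized overlap between the distorted photon
    # envelope and an ideal time-symmetric shape the receiving module absorbs.
    target = [math.exp(-((i - N / 2) ** 2) / 30.0) for i in range(N)]
    d = distort(pulse)
    overlap = sum(a * b for a, b in zip(d, target))
    norm = math.sqrt(sum(a * a for a in d) * sum(b * b for b in target))
    return (overlap / norm) ** 2 if norm else 0.0

random.seed(0)
pulse = [1.0] * N                      # start from an unshaped (square) pulse
best = absorption_efficiency(pulse)
for _ in range(2000):                  # predistort: perturb samples, keep improvements
    trial = list(pulse)
    trial[random.randrange(N)] += random.uniform(-0.2, 0.2)
    eff = absorption_efficiency(trial)
    if eff > best:
        pulse, best = trial, eff

print(f"absorption efficiency after shaping: {best:.3f}")
```

The optimized pulse ends up pre-compensating the smearing so the envelope arriving at the receiver matches the absorbable shape; the real experiment plays this role with RL-optimized microwave pulses and a physical waveguide.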
When they implemented this optimized absorption protocol, they were able to show photon absorption efficiency greater than 60 percent.
This absorption efficiency is high enough to prove that the resulting state at the end of the protocol is entangled, a major milestone in this demonstration.
“We can use this architecture to create a network with all-to-all connectivity. This means we can have multiple modules, all along the same bus, and we can create remote entanglement among any pair of our choosing,” Yankelevich says.
In the future, they could improve the absorption efficiency by optimizing the path over which the photons propagate, perhaps by integrating modules in 3D instead of having a superconducting wire connecting separate microwave packages. They could also make the protocol faster so there are fewer chances for errors to accumulate.
“In principle, our remote entanglement generation protocol can also be expanded to other kinds of quantum computers and bigger quantum internet systems,” Almanakly says.
This work was funded, in part, by the U.S. Army Research Office, the AWS Center for Quantum Computing, and the U.S. Air Force Office of Scientific Research.
Professor Emeritus Lee Grodzins, pioneer in nuclear physics, dies at 98

An MIT faculty member for 40 years, Grodzins performed groundbreaking studies of the weak interaction, led in detection technology, and co-founded the Union of Concerned Scientists.

Nuclear physicist and MIT Professor Emeritus Lee Grodzins died on March 6 at his home in the Maplewood Senior Living Community at Weston, Massachusetts. He was 98.
Grodzins was a pioneer in nuclear physics research. He was perhaps best known for the highly influential experiment determining the helicity of the neutrino, which led to a key understanding of what's known as the weak interaction. He was also the founder of Niton Corp. and the nonprofit Cornerstones of Science, and was a co-founder of the Union of Concerned Scientists.
He retired in 1999 after serving as an MIT physics faculty member for 40 years. As a member of the Laboratory for Nuclear Science (LNS), he initiated the relativistic heavy-ion physics program. He published over 170 scientific papers and held 64 U.S. patents.
“Lee was a very good experimental physicist, especially with his hands making gadgets,” says Heavy Ion Group and Francis L. Friedman Professor Emeritus Wit Busza PhD ’64. “His enthusiasm for physics spilled into his enthusiasm for how physics was taught in our department.”
Industrious son of immigrants
Grodzins was born July 10, 1926, in Lowell, Massachusetts, the middle child of Eastern European Jewish immigrants David and Taube Grodzins. He grew up in Manchester, New Hampshire. His two sisters were Ethel Grodzins Romm, journalist, author, and businesswoman who later ran his company, Niton Corp.; and Anne Lipow, who became a librarian and library science expert.
His father, who ran a gas station and a used-tire business, died when Lee was 15. To help support his family, Lee sold newspapers, a business he grew into the second-largest newspaper distributor in Manchester.
At 17, Grodzins attended the University of New Hampshire, graduating in less than three years with a degree in mechanical engineering. However, he decided to be a physicist after disagreeing with a textbook that used the word “never.”
“I was pretty good in math and was undecided about my future,” Grodzins said in a 1958 New York Daily News article. “It wasn’t until my senior year that I unexpectedly realized I wanted to be a physicist. I was reading a physics text one day when suddenly this sentence hit me: ‘We will never be able to see the atom.’ I said to myself that that was as stupid a statement as I’d ever read. What did he mean ‘never!’ I got so annoyed that I started devouring other writers to see what they had to say and all at once I found myself in the midst of modern physics.”
He wrote his senior thesis on “Atomic Theory.”
After graduating in 1946, he approached potential employers by saying, “I have a degree in mechanical engineering, but I don’t want to be one. I’d like to be a physicist, and I’ll take anything in that line at whatever you will pay me.”
He accepted an offer from General Electric’s Research Laboratory in Schenectady, New York, where he worked in fundamental nuclear research building cosmic ray detectors, while also pursuing his master’s degree at Union College. “I had a ball,” he recalled. “I stayed in the lab 12 hours a day. They had to kick me out at night.”
Brookhaven
After earning his PhD from Purdue University in 1954, he spent a year as a lecturer there, before becoming a researcher at Brookhaven National Laboratory (BNL) with Maurice Goldhaber’s nuclear physics group, probing the properties of the nuclei of atoms.
In 1957, he, Goldhaber, and Andy Sunyar used a simple tabletop experiment to measure the helicity of the neutrino. Helicity characterizes the alignment of a particle’s intrinsic spin with its direction of motion.
The research provided new support for the finding that the principle of conservation of parity — accepted for 30 years as a basic law of nature until it was disproven the year before, a result recognized with the 1957 Nobel Prize in Physics — does not apply to the behavior of some subatomic particles.
The experiment took about 10 days to complete, followed by a month of checks and rechecks. They submitted a letter on “Helicity of Neutrinos” to Physical Review on Dec. 11, 1957, and a week later, Goldhaber told a Stanford University audience that the neutrino is left-handed, meaning that the weak interaction was probably one force. This work proved crucial to our understanding of the weak interaction, the force that governs nuclear beta decay.
“It was a real upheaval in our understanding of physics,” says Grodzins’ longtime colleague Stephen Steadman. The breakthrough was commemorated in 2008, with a conference at BNL on “Neutrino Helicity at 50.”
Steadman also recalls Grodzins’ story about one night at Brookhaven, when he was working on an experiment that involved a radioactive source inside a chamber. Lee noticed that a vacuum pump wasn’t working, so he tinkered with it for a while before heading home. Later that night, he got a call from the lab. “They said, ‘Don’t go anywhere!’” recalls Steadman. It turned out the radioactive source had exploded, and the pump had spread radiation through the lab. “They were actually able to trace his radioactive footprints from the lab to his home,” says Steadman. “He kind of shrugged it off.”
The MIT years
Grodzins joined the faculty of MIT in 1959, where he taught physics for four decades. He inherited Robley Evans’ Radiation Laboratory, which used radioactive sources to study properties of nuclei, and led the Relativistic Heavy Ion Group, which was affiliated with the LNS.
In 1972, he launched a program at BNL using the then-new Tandem Van de Graaff accelerator to study interactions of heavy ions with nuclei. “As the BNL tandem was getting commissioned, we started a program together with Doug Cline at the University of Rochester to investigate Coulomb-nuclear interference,” says Steadman, a senior research scientist at LNS. “The experimental results were decisive but somewhat controversial at the time. We clearly detected the interference effect.” The experimental work was published in Physical Review Letters.
Grodzins’ team looked for super-heavy elements using the Lawrence Berkeley National Laboratory Super-Hilac, investigated heavy-ion fission and other heavy-ion reactions, and explored heavy-ion transfer reactions. The latter research showed in precise detail the underlying statistical behavior of the transfer of nucleons between the heavy-ion projectile and target, using surprisal analysis, a theoretical statistical model developed by Rafi Levine and his graduate student. Recalls Steadman, “These results were both outstanding in their precision and initially controversial in interpretation.”
In 1985, he carried out the first computed axial tomography experiment using synchrotron radiation, and in 1987, his group was involved in the first run of Experiment 802, a collaborative experiment with about 50 scientists from around the world that studied relativistic heavy-ion collisions at Brookhaven. The MIT responsibility was to build the drift chambers and design the bending magnet for the experiment.
“He made significant contributions to the initial design and construction phases, where his broad expertise and knowledge of small area companies with unique capabilities was invaluable,” says George Stephans, physics senior lecturer and senior research scientist at MIT.
Professor emeritus of physics Rainer Weiss ’55, PhD ’62 recalls working on a Mössbauer experiment to establish whether photons changed frequency as they traveled through bright regions. “It was an idea held by some to explain the ‘apparent’ red shift with distance in our universe,” says Weiss. “We became great friends in the process, and of course, amateur cosmologists.”
“Lee was great for developing good ideas,” Steadman says. “He would get started on one idea, but then get distracted with another great idea. So, it was essential that the team would carry these experiments to their conclusion: they would get the papers published.”
MIT mentor
Before retiring in 1999, Lee supervised 21 doctoral dissertations and was an early proponent of women graduate students in physics. He also oversaw the undergraduate thesis of Sidney Altman, who decades later won the Nobel Prize in Chemistry. For many years, he helped teach the Junior Lab required of all undergraduate physics majors. He got his favorite student evaluation, however, for a different course, billed as offering a “superficial overview” of nuclear physics. The comment read: “This physics course was not superficial enough for me.”
“He really liked to work with students,” says Steadman. “They could always go into his office anytime. He was a very supportive mentor.”
“He was a wonderful mentor, avuncular and supportive of all of us,” agrees Karl van Bibber ’72, PhD ’76, now at the University of California at Berkeley. He recalls handing his first paper to Grodzins for comments. “I was sitting at my desk expecting a pat on the head. Quite to the contrary, he scowled, threw the manuscript on my desk and scolded, ‘Don't even pick up a pencil again until you've read a Hemingway novel!’ … The next version of the paper had an average sentence length of about six words; we submitted it, and it was immediately accepted by Physical Review Letters.”
Van Bibber has since taught the “Grodzins Method” in his graduate seminars on professional orientation for scientists and engineers, including passing around a few anthologies of Hemingway short stories. “I gave a copy of one of the dog-eared anthologies to Lee at his 90th birthday lecture, which elicited tears of laughter.”
Early in George Stephans’ MIT career as a research scientist, he worked with Grodzins’ newly formed Relativistic Heavy Ion Group. “Despite his wide range of interests, he paid close attention to what was going on and was always very supportive of us, especially the students. He was a very encouraging and helpful mentor to me, as well as being always pleasant and engaging to work with. He actively pushed to get me promoted to principal research scientist relatively early, in recognition of my contributions.”
“He always seemed to know a lot about everything, but never acted condescending,” says Stephans. “He seemed happiest when he was deeply engaged digging into the nitty-gritty details of whatever unique and unusual work one of these companies was doing for us.”
Al Lazzarini ’74, PhD ’78 recalls Grodzins’ investigations using proton-induced X-ray emission (PIXE) as a sensitive tool to measure trace elemental amounts. “Lee was a superb physicist,” says Lazzarini. “He gave an enthralling seminar on an investigation he had carried out on a lock of Napoleon’s hair, looking for evidence of arsenic poisoning.”
Robert Ledoux ’78, PhD ’81, a former professor of physics at MIT who is now program director of the U.S. Advanced Research Projects Agency with the Department of Energy, worked with Grodzins as both a student and colleague. “He was a ‘nuclear physicist’s physicist’ — a superb experimentalist who truly loved building and performing experiments in many areas of nuclear physics. His passion for discovery was matched only by his generosity in sharing knowledge.”
The research funding crisis starting in 1969 led Grodzins to become concerned that his graduate students would not find careers in the field. He helped form the Economic Concerns Committee of the American Physical Society, for which he produced a major report on the “Manpower Crisis in Physics” (1971), and presented his results before the American Association for the Advancement of Science, and at the Karlsruhe National Lab in Germany.
Grodzins played a significant role in bringing the first Chinese graduate students to MIT in the 1970s and 1980s.
One of the students he welcomed was Huan Huang PhD ’90. “I am forever grateful to him for changing my trajectory,” says Huang, now at the University of California at Los Angeles. “His unwavering support and ‘go do it’ attitude inspired us to explore physics at the beginning of a new research field of high energy heavy ion collisions in the 1980s. I have been trying to be a ‘nice professor’ like Lee all my academic career.”
Even after he left MIT, Grodzins remained available for his former students. “Many tell me how much my lifestyle has influenced them, which is gratifying,” Huang says. “They’ve been a central part of my life. My biography would be grossly incomplete without them.”
Niton Corp. and post-MIT work
Grodzins liked what he called “tabletop experiments,” like the one used in his 1957 neutrino experiment, which involved a few people building a device that could fit on a tabletop. “He didn’t enjoy working in large collaborations, which nuclear physics embraced,” says Steadman. “I think that’s why he ultimately left MIT.”
In the 1980s, he launched what amounted to a new career in detection technology. In 1987, after developing a scanning proton-induced X-ray microspectrometer for measuring elemental concentrations in air, he founded the Niton Corp., which developed, manufactured, and marketed test kits and instruments to measure radon gas in buildings, detect lead-based paint, and perform other nondestructive testing. (“Niton” is an obsolete term for radon.)
“At the time, there was a big scare about radon in New England, and he thought he could develop a radon detector that was inexpensive and easy to use,” says Steadman. “His radon detector became a big business.”
He later developed devices to detect explosives, drugs, and other contraband in luggage and cargo containers. Handheld devices used X-ray fluorescence to determine the composition of metal alloys and to detect other materials. The handheld XL Spectrum Analyzer could detect buried and surface lead on painted surfaces, to protect children living in older homes. Three Niton X-ray fluorescence analyzers earned R&D 100 awards.
“Lee was very technically gifted,” says Steadman.
In 1999, Grodzins retired from MIT and devoted his energies to industry, including directing the R&D group at Niton.
His sister Ethel Grodzins Romm was the president and CEO of Niton, followed by his son Hal. Many of Niton’s employees were MIT graduates. In 2005, he and his family sold Niton to Thermo Fisher Scientific, where Lee remained as a principal scientist until 2010.
In the 1990s, he was vice president of American Science and Engineering, and between the ages of 70 and 90, he was awarded three patents a year.
“Curiosity and creativity don’t stop after a certain age,” Grodzins said to UNH Today. “You decide you know certain things, and you don’t want to change that thinking. But thinking outside the box really means thinking outside your box.”
“I miss his enthusiasm,” says Steadman. “I saw him about a couple of years ago and he was still on the move, always ready to launch a new effort, and he was always trying to pull you into those efforts.”
A better world
In the 1950s, Grodzins and other Brookhaven scientists joined the American delegation at the Second United Nations International Conference on the Peaceful Uses of Atomic Energy in Geneva.
Early on, he joined several Manhattan Project alums at MIT in their concern about the consequences of nuclear bombs. In Vietnam-era 1969, Grodzins co-founded the Union of Concerned Scientists, which calls for scientific research to be directed away from military technologies and toward solving pressing environmental and social problems. He served as its chair in 1970 and 1972. He also chaired committees for the American Physical Society and the National Research Council.
As vice president for advanced products at American Science and Engineering, which made homeland security equipment, he became a consultant on airport security, especially following the 9/11 attacks. As an expert witness, he testified at the celebrated trial to determine whether Pan Am was negligent for the bombing of Flight 103 over Lockerbie, Scotland, and he took part in a weapons inspection trip on the Black Sea. He also was frequently called as an expert witness on patent cases.
In 1999, Grodzins founded the nonprofit Cornerstones of Science, a public library initiative to improve public engagement with science. Based originally at the Curtis Memorial Library in Brunswick, Maine, Cornerstones now partners with libraries in Maine, Arizona, Texas, Massachusetts, North Carolina, and California. Among their initiatives was one that has helped supply telescopes to libraries and astronomy clubs around the country.
“He had a strong sense of wanting to do good for mankind,” says Steadman.
Awards
Grodzins authored more than 170 technical papers and held more than 60 U.S. patents. His numerous accolades included being named a Guggenheim Fellow in 1964 and 1971, and a senior von Humboldt fellow in 1980. He was a fellow of the American Physical Society and the American Academy of Arts and Sciences, and received an honorary doctor of science degree from Purdue University in 1998.
In 2021, the Denver X-ray Conference gave Grodzins the Birks Award in X-ray Fluorescence Spectrometry, for having introduced “a handheld XRF unit which expanded analysis to in-field applications such as environmental studies, archeological exploration, mining, and more.”
Personal life
One evening in 1955, shortly after starting his work at Brookhaven, Grodzins decided to take a walk and explore the BNL campus. He found just one building that had lights on and was open, so he went in. Inside, a group was rehearsing a play. He was immediately smitten with one of the actors, Lulu Anderson, a young biologist. “I joined the acting company, and a year-and-a-half later, Lulu and I were married,” Grodzins had recalled. They were happily married for 62 years, until Lulu’s death in 2019.
They raised two sons, Dean, now of Cambridge, Massachusetts, and Hal Grodzins, who lives in Maitland, Florida. Lee and Lulu owned a succession of beloved huskies, most of them named after physicists.
After living in Arlington, Massachusetts, the Grodzins family moved to Lexington, Massachusetts, in 1972 and bought a second home a few years later in Brunswick, Maine. Starting around 1990, Lee and Lulu spent every weekend, year-round, in Brunswick. In both places, they were avid supporters of their local libraries, museums, theaters, symphonies, botanical gardens, public radio, and TV stations.
Grodzins took his family along to conferences, fellowships, and other invitations. They all lived in Denmark for two sabbaticals, in 1964-65 and 1971-72, while Lee worked at the Niels Bohr Institute. They also traveled together to China for a month in 1975, and for two months in 1980. As part of the latter trip, they were among the first American visitors to Tibet since the 1940s. Lee and Lulu also traveled the world, from Antarctica to the Galapagos Islands to Greece.
His homes had basement workshops well-stocked with tools. His sons enjoyed a playroom he built for them in their Arlington home. He also once constructed his own high-fidelity record player, patched his old Volvo with fiberglass, changed his own oil, and put on the winter tires and chains himself. He was an early adopter of the home computer.
“His work in science and technology was part of a general love of gadgets and of fixing and making things,” his son, Dean, wrote in a Facebook post.
Lee is survived by Dean, his wife, Nora Nykiel Grodzins, and their daughter, Lily; and by Hal and his wife Cathy Salmons.
A remembrance and celebration for Lee Grodzins is planned for this summer. Donations in his name may be made to Cornerstones of Science.
Drawing inspiration from ancient chemical reactions

By studying cellular enzymes that perform difficult reactions, MIT chemist Dan Suess hopes to find new solutions to global energy challenges.

To help find solutions to the planet’s climate crisis, MIT Associate Professor Daniel Suess is looking to Earth’s ancient past.
Early in the evolution of life, cells gained the ability to perform reactions such as transferring electrons from one atom to another. These reactions, which help cells to build carbon-containing or nitrogen-containing compounds, rely on specialized enzymes with clusters of metal atoms.
By learning more about how those enzymes work, Suess hopes to eventually devise new ways to perform fundamental chemical reactions that could help capture carbon from the atmosphere or enable the development of alternative fuels.
“We have to find some way of rewiring society so that we are not just relying on vast reserves of reduced carbon, fossil fuels, and burning them using oxygen,” he says. “What we’re doing is we’re looking backward, up to a billion years before oxygen and photosynthesis came along, to see if we can identify the chemical principles that underlie processes that aren’t reliant on burning carbon.”
His work could also shed light on other important cellular reactions such as the conversion of nitrogen gas to ammonia, which is also the key step in the production of synthetic fertilizer.
Exploring chemistry
Suess, who grew up in Spokane, Washington, became interested in math at a young age, but ended up majoring in chemistry and English at Williams College, which he chose based on its appealing selection of courses.
“I was interested in schools that were more focused on the liberal arts model, Williams being one of those. And I just thought they had the right combination of really interesting courses and freedom to take classes that you wanted,” he says. “I went in not expecting to major in chemistry, but then I really enjoyed my chemistry classes and chemistry teachers.”
In his classes, he explored all aspects of chemistry and found them all appealing.
“I liked organic chemistry, because there’s an emphasis on making things. And I liked physical chemistry because there was an attempt to have at least a semiquantitative way of understanding the world. Physical chemistry describes some of the most important developments in science in the 20th century, including quantum mechanics and its application to atoms and molecules,” he says.
After college, Suess came to MIT for graduate school and began working with chemistry professor Jonas Peters, who had recently arrived from Caltech. A couple of years later, Peters ended up moving back to Caltech, and Suess followed, continuing his PhD thesis research on new ways to synthesize inorganic molecules.
His project focused on molecules that consist of a metal such as iron or cobalt bound to a nonmetallic group known as a ligand. Within these molecules, the metal atom typically pulls in electrons from the ligand. However, the molecules Suess worked on were designed so that the metal would give up its own electrons to the ligand. Such molecules can be used to speed up difficult reactions that require breaking very strong bonds, like the nitrogen-nitrogen triple bond in N2.
During a postdoc at the University of California at Davis, Suess switched gears and began working on biomolecules — specifically, metalloproteins. These are protein enzymes that have metals tucked into their active sites, where they help to catalyze reactions.
Suess studied how cells synthesize the metal-containing active sites in these proteins, focusing on an enzyme called iron-iron hydrogenase. This enzyme, found mainly in anaerobic bacteria, including some that live in the human digestive tract, catalyzes reactions involving the transfer of protons and electrons. Specifically, it can combine two protons and two electrons to make H2, or can perform the reverse reaction, breaking H2 into protons and electrons.
“That enzyme is really important because a lot of cellular metabolic processes either generate excess electrons or require excess electrons. If you generate excess electrons, they have to go somewhere, and one solution is to put them on protons to make H2,” Suess says.
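The two directions Suess describes are the same redox interconversion, which can be written as:

```latex
% Iron-iron hydrogenase catalyzes this reaction in both directions:
% forward to dispose of excess electrons as H2, reverse to harvest them.
2\,\mathrm{H}^{+} + 2\,e^{-} \;\rightleftharpoons\; \mathrm{H}_{2}
```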
Global scale reactions
Since joining the MIT faculty in 2017, Suess has continued his investigations of metalloproteins and the reactions that they catalyze.
“We’re interested in global-scale chemical reactions, meaning they’re occurring on the microscopic scale but happening on a huge scale,” he says. “They impact the planet and have determined what the molecular composition of the biosphere is and what it’s going to be.”
Photosynthesis, which emerged around 2.4 billion years ago, has had the biggest impact on the atmosphere, filling it with oxygen, but Suess focuses on reactions that cells began using even earlier, when the atmosphere lacked oxygen and cell metabolism could not be driven by respiration.
Many of these ancient reactions, which are still used by cells today, involve a class of metalloproteins called iron-sulfur proteins. These enzymes, which are found in all kingdoms of life, are involved in catalyzing many of the most difficult reactions that occur in cells, such as forming carbon radicals and converting nitrogen to ammonia.
To study the metalloenzymes that catalyze these reactions, Suess’s lab takes two different approaches. In one, they create synthetic versions of the proteins that may contain fewer metal atoms, which allows for greater control over the composition and shape of the protein, making them easier to study.
In another approach, they use the natural version of the protein but substitute one of the metal atoms with an isotope that makes it easier to use spectroscopic techniques to analyze the protein’s structure.
“That allows us to study both the bonding in the resting state of an enzyme, as well as the bonding and structures of reaction intermediates that you can only characterize spectroscopically,” Suess says.
Understanding how enzymes perform these reactions could help researchers find new ways to remove carbon dioxide from the atmosphere by combining it with other molecules to create larger compounds. Finding alternative ways to convert nitrogen gas to ammonia could also have a big impact on greenhouse gas emissions, as the Haber-Bosch process now used to synthesize fertilizer requires huge amounts of energy.
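For reference, the industrial reaction that alternative nitrogen-fixation chemistry would aim to replace is:

```latex
% Haber-Bosch ammonia synthesis; the high temperatures and pressures
% needed to break the N-N triple bond make the process energy-intensive.
\mathrm{N}_2 + 3\,\mathrm{H}_2 \;\longrightarrow\; 2\,\mathrm{NH}_3
```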
“Our primary focus is on understanding the natural world, but I think that as we’re looking at different ways to wire biological catalysts to do efficient reactions that impact society, we need to know how that wiring works. And so that is what we’re trying to figure out,” he says.
At the core of problem-solving

Stuart Levine ’97, director of MIT’s BioMicro Center, keeps departmental researchers at the forefront of systems biology.

As director of the MIT BioMicro Center (BMC), Stuart Levine ’97 wholeheartedly embraces the variety of challenges he tackles each day. One of over 50 core facilities providing shared resources across the Institute, the BMC supplies integrated high-throughput genomics, single-cell and spatial transcriptomic analysis, bioinformatics support, and data management to researchers across MIT. The BioMicro Center is part of the Integrated Genomics and Bioinformatics core facility at the Robert A. Swanson (1969) Biotechnology Center.
“Every day is a different day,” Levine says. “There are always new problems, new challenges, and the technology is continuing to move at an incredible pace.” After more than 15 years in the role, Levine is grateful that the breadth of his work allows him to seek solutions for so many scientific problems.
By combining bioinformatics expertise with biotech relationships and a focus on maximizing the impact of the center’s work, Levine brings the broad range of skills required to match the diversity of questions asked by investigators in MIT’s Department of Biology and Koch Institute for Integrative Cancer Research, as well as researchers across MIT’s campus.
Expansive expertise
Biology first appealed to Levine as an MIT undergraduate taking class 7.012 (Introduction to Biology), thanks to the charisma of instructors Professor Eric Lander and Amgen Professor Emerita Nancy Hopkins. After earning his PhD in biochemistry from Harvard University and Massachusetts General Hospital, Levine returned to MIT for postdoctoral work with Professor Richard Young, core member at the Whitehead Institute for Biomedical Research.
In the Young Lab, Levine found his calling as an informaticist and ultimately decided to stay at MIT. Here, his work has a wide-ranging impact: the BMC serves over 100 labs annually, from the Computer Science and Artificial Intelligence Laboratory and the departments of Brain and Cognitive Sciences; Earth, Atmospheric and Planetary Sciences; Chemical Engineering; Mechanical Engineering; and, of course, Biology.
“It’s a fun way to think about science,” Levine says, noting that he applies his knowledge and streamlines workflows across these many disciplines by “truly and deeply understanding the instrumentation complexities.”
This depth of understanding and experience allows Levine to lead what longtime colleague Professor Laurie Boyer describes as “a state-of-the-art core that has served so many faculty and provides key training opportunities for all.” He and his team work with cutting-edge, finely tuned scientific instruments that generate vast amounts of bioinformatics data, then use powerful computational tools to store, organize, and visualize the data collected, contributing to research on topics ranging from host-parasite interactions to proposed tools for NASA’s planetary protection policy.
Staying ahead of the curve
With a scientist directing the core, the BMC aims to enable researchers to “take the best advantage of systems biology methods,” says Levine. These methods use advanced research technologies to do things like prepare large sets of DNA and RNA for sequencing, read DNA and RNA sequences from single cells, and localize gene expression to specific tissues.
Levine presents a lightweight, clear rectangle about the width of a cell phone and the length of a VHS cassette.
“This is a flow cell that can do 20 human genomes to clinical significance in two days — 8 billion reads,” he says. “There are newer instruments with several times that capacity available as well.”
The vast majority of research labs do not need that kind of power, but the Institute, and its researchers as a whole, certainly do. Levine emphasizes that “the ROI [return on investment] for supporting shared resources is extremely high because whatever support we receive impacts not just one lab, but all of the labs we support. Keeping MIT’s shared resources at the bleeding edge of science is critical to our ability to make a difference in the world.”
To stay at the edge of research technology, Levine maintains company relationships, while his scientific understanding allows him to educate researchers on what is possible in the space of modern systems biology. Altogether, these attributes enable Levine to help his researcher clients “push the limits of what is achievable.”
The man behind the machines
Each core facility operates like a small business, offering specialized services to a diverse client base across academic and industry research, according to Amy Keating, Jay A. Stein (1968) Professor of Biology and head of the Department of Biology. She explains that “the PhD-level education and scientific and technological expertise of MIT’s core directors are critical to the success of life science research at MIT and beyond.”
While Levine clearly has the education and expertise, the success of the BMC “business” is also in part due to his tenacity and focus on results for the core’s users.
He was recognized by the Institute with the MIT Infinite Mile Award in 2015 and the MIT Excellence Award in 2017, for which one nominator wrote, “What makes Stuart’s leadership of the BMC truly invaluable to the MIT community is his unwavering dedication to producing high-quality data and his steadfast persistence in tackling any type of troubleshooting needed for a project. These attributes, fostered by Stuart, permeate the entire culture of the BMC.”
“He puts researchers and their research first, whether providing education, technical services, general tech support, or networking to collaborators outside of MIT,” says Noelani Kamelamela, lab manager of the BMC. “It’s all in service to users and their projects.”
Tucked into the far back corner of the BMC lab space, Levine’s office is a fitting symbol of his humility. While his guidance and knowledge sit at the center of what elevates the BMC beyond technical support, he himself sits away from the spotlight, resolutely supporting others to advance science.
“Stuart has always been the person, often behind the scenes, that pushes great science, ideas, and people forward,” Boyer says. “His knowledge and advice have truly allowed us to be at the leading edge in our work.”
To the brain, Esperanto and Klingon appear the same as English or Mandarin
A new study finds natural and invented languages elicit similar responses in the brain’s language-processing network.
Within the human brain, a network of regions has evolved to process language. These regions are consistently activated whenever people listen to their native language or any language in which they are proficient.
A new study by MIT researchers finds that this network also responds to languages that are completely invented, such as Esperanto, which was created in the late 1800s as a way to promote international communication, and even to languages made up for television shows such as “Star Trek” and “Game of Thrones.”
To study how the brain responds to these artificial languages, MIT neuroscientists convened nearly 50 speakers of these languages over a single weekend. Using functional magnetic resonance imaging (fMRI), the researchers found that when participants listened to a constructed language in which they were proficient, the same brain regions lit up as those activated when they processed their native language.
“We find that constructed languages very much recruit the same system as natural languages, which suggests that the key feature that is necessary to engage the system may have to do with the kinds of meanings that both kinds of languages can express,” says Evelina Fedorenko, an associate professor of neuroscience at MIT, a member of MIT’s McGovern Institute for Brain Research and the senior author of the study.
The findings help to define some of the key properties of language, the researchers say, and suggest that it’s not necessary for languages to have naturally evolved over a long period of time or to have a large number of speakers.
“It helps us narrow down this question of what a language is, and do it empirically, by testing how our brain responds to stimuli that might or might not be language-like,” says Saima Malik-Moraleda, an MIT postdoc and the lead author of the paper, which appears this week in the Proceedings of the National Academy of Sciences.
Convening the conlang community
Unlike natural languages, which evolve within communities and are shaped over time, constructed languages, or “conlangs,” are typically created by one person who decides what sounds will be used, how to label different concepts, and what the grammatical rules are.
Esperanto, the most widely spoken conlang, was created in 1887 by L.L. Zamenhof, who intended it to be used as a universal language for international communication. Currently, it is estimated that around 60,000 people worldwide are proficient in Esperanto.
In previous work, Fedorenko and her students have found that computer programming languages, such as Python — another type of invented language — do not activate the brain network that is used to process natural language. Instead, people who read computer code rely on the so-called multiple demand network, a brain system that is often recruited for difficult cognitive tasks.
Fedorenko and others have also investigated how the brain responds to other stimuli that share features with language, including music and nonverbal communication such as gestures and facial expressions.
“We spent a lot of time looking at all these various kinds of stimuli, finding again and again that none of them engage the language-processing mechanisms,” Fedorenko says. “So then the question becomes, what is it that natural languages have that none of those other systems do?”
That led the researchers to wonder if artificial languages like Esperanto would be processed more like programming languages or more like natural languages. Similar to programming languages, constructed languages are created by an individual for a specific purpose, without natural evolution within a community. However, unlike programming languages, both conlangs and natural languages can be used to convey meanings about the state of the external world or the speaker’s internal state.
To explore how the brain processes conlangs, the researchers invited speakers of Esperanto and several other constructed languages to MIT for a weekend conference in November 2022. The other languages included Klingon (from “Star Trek”), Na’vi (from “Avatar”), and two languages from “Game of Thrones” (High Valyrian and Dothraki). For all of these languages, there are texts available for people who want to learn the language, and for Esperanto, Klingon, and High Valyrian, there is even a Duolingo app available.
“It was a really fun event where all the communities came to participate, and over a weekend, we collected all the data,” says Malik-Moraleda, who co-led the data collection effort with former MIT postbac Maya Taliaferro, now a PhD student at New York University.
During that event, which also featured talks from several of the conlang creators, the researchers used fMRI to scan 44 conlang speakers as they listened to sentences from the constructed language in which they were proficient. The creators of these languages — who are co-authors on the paper — helped construct the sentences that were presented to the participants.
While in the scanner, the participants also either listened to or read sentences in their native language, and performed some nonlinguistic tasks for comparison. The researchers found that when people listened to a conlang, the same language regions in the brain were activated as when they listened to their native language.
Common features
The findings help to identify some of the key features that are necessary to recruit the brain’s language processing areas, the researchers say. One of the main characteristics driving language responses seems to be the ability to convey meanings about the interior and exterior world — a trait that is shared by natural and constructed languages, but not programming languages.
“All of the languages, both natural and constructed, express meanings related to inner and outer worlds. They refer to objects in the world, to properties of objects, to events,” Fedorenko says. “Whereas programming languages are much more similar to math. A programming language is a symbolic generative system that allows you to express complex meanings, but it’s a self-contained system: The meanings are highly abstract and mostly relational, and not connected to the real world that we experience.”
Some other characteristics of natural languages, which are not shared by constructed languages, don’t seem to be necessary to generate a response in the language network.
“It doesn’t matter whether the language is created and shaped over time by a community of speakers, because these constructed languages are not,” Malik-Moraleda says. “It doesn’t matter how old they are, because conlangs that are just a decade old engage the same brain regions as natural languages that have been around for many hundreds of years.”
To further refine the features of language that activate the brain’s language network, Fedorenko’s lab is now planning to study how the brain responds to a conlang called Lojban, which was created by the Logical Language Group in the 1990s and was designed to prevent ambiguity of meanings and promote more efficient communication.
The research was funded by MIT’s McGovern Institute for Brain Research, Brain and Cognitive Sciences Department, the Simons Center for the Social Brain, the Frederick A. and Carole J. Middleton Career Development Professorship, and the U.S. National Institutes of Health.
A dive into the “almost magical” potential of photonic crystals
In MIT’s 2025 Killian Lecture, physicist John Joannopoulos recounts highlights from a career at the vanguard of photonics research and innovation.
When you’re challenging a century-old assumption, you’re bound to meet a bit of resistance. That’s exactly what John Joannopoulos and his group at MIT faced in 1998, when they put forth a new theory on how materials can be made to bend light in entirely new ways.
“Because it was such a big difference in what people expected, we wrote down the theory for this, but it was very difficult to get it published,” Joannopoulos told a capacity crowd in MIT’s Huntington Hall on Friday, as he delivered MIT’s James R. Killian, Jr. Faculty Achievement Award Lecture.
Joannopoulos’ theory offered a new take on a type of material known as a one-dimensional photonic crystal. Photonic crystals are made from alternating layers of refractive structures whose arrangement can influence how incoming light is reflected or absorbed.
In 1887, the English physicist John William Strutt, better known as Lord Rayleigh, established a theory for how light should bend through a similar structure composed of multiple refractive layers. Rayleigh predicted that such a structure could reflect light, but only if that light is coming from a very specific angle. In other words, such a structure could act as a mirror for light shining from a specific direction only.
More than a century later, Joannopoulos and his group found that, in fact, quite the opposite was true. They proved in theoretical terms that, if a one-dimensional photonic crystal were made from layers of materials with certain “refractive indices,” bending light to different degrees, then the crystal as a whole should be able to reflect light coming from any and all directions. Such an arrangement could act as a “perfect mirror.”
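The intuition behind a multilayer mirror can be sketched numerically with the textbook characteristic-matrix method for a quarter-wave stack at normal incidence. This is a generic illustration, not the MIT group's actual calculation: the refractive indices (2.3 and 1.5), the substrate, and the layer count are made-up values, and the omnidirectional result described here additionally depends on index conditions at oblique angles that this normal-incidence sketch does not capture.

```python
import numpy as np

def reflectance(indices, thicknesses, lam, n_in=1.0, n_sub=1.5):
    """Normal-incidence reflectance of a 1D multilayer (characteristic-matrix method)."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(indices, thicknesses):
        delta = 2 * np.pi * n * d / lam  # phase accumulated crossing the layer
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])
    r = (n_in * B - C) / (n_in * B + C)  # amplitude reflection coefficient
    return abs(r) ** 2

lam0 = 1.0                    # design wavelength (arbitrary units)
n_hi, n_lo = 2.3, 1.5         # illustrative indices, not the actual device values
pairs = 10
stack = [n_hi, n_lo] * pairs
thick = [lam0 / (4 * n) for n in stack]  # quarter-wave optical thicknesses
print(round(reflectance(stack, thick, lam0), 4))  # ≈ 0.9995: near-perfect reflection
```

Adding more layer pairs pushes the reflectance geometrically closer to 1, which is why even a modest stack behaves as an essentially perfect mirror at its design wavelength.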
The idea was a huge departure from what scientists had long assumed, and as such, when Joannopoulos submitted the research for peer review, it took some time for the journal, and the community, to come around. But he and his students kept at it, ultimately verifying the theory with experiments.
That work led to a high-profile publication, which helped the group focus the idea into a device: Using the principles that they laid out, they effectively fabricated a perfect mirror and folded it into a tube to form a hollow-core fiber. When they shone light through, the inside of the fiber reflected all the light, trapping it entirely in the core as the light pinged through the fiber. In 2000, the team launched a startup to further develop the fiber into a flexible, highly precise and minimally invasive “photonics scalpel,” which has since been used in hundreds of thousands of medical procedures, including surgeries of the brain and spine.
“And get this: We have estimated more than 500,000 procedures across hospitals in the U.S. and abroad,” Joannopoulos proudly stated, to appreciative applause.
Joannopoulos is the recipient of the 2024-2025 James R. Killian, Jr. Faculty Achievement Award, and is the Francis Wright Davis Professor of Physics and director of the Institute for Soldier Nanotechnologies at MIT. In response to an audience member who asked what motivated him in the face of initial skepticism, he replied, “You have to persevere if you believe what you have is correct.”
Immeasurable impact
The Killian Award was established in 1971 to honor MIT’s 10th president, James Killian. Each year, a member of the MIT faculty is honored with the award in recognition of their extraordinary professional accomplishments.
Joannopoulos received his PhD from the University of California at Berkeley in 1974, then immediately joined MIT’s physics faculty. In introducing his lecture, Mary Fuller, professor of literature and chair of the MIT faculty, noted: “If you do the math, you’ll know he just celebrated 50 years at MIT.” Throughout that remarkable tenure, Fuller noted Joannopoulos’ profound impact on generations of MIT students.
“We recognize you as a leader, a visionary scientist, beloved mentor, and a believer in the goodness of people,” Fuller said. “Your legendary impact at MIT and the broader scientific community is immeasurable.”
Bending light
In his lecture, which he titled “Working at the Speed of Light,” Joannopoulos took the audience through the basic concepts underlying photonic crystals, and the ways in which he and others have shown that these materials can bend and twist incoming light in a controlled way.
As he described it, photonic crystals are “artificial materials” that can be designed to influence the properties of photons in a way that’s similar to how physical features in semiconductors affect the flow of electrons. In the case of semiconductors, such materials have a specific “band gap,” or a range of energies in which electrons cannot exist.
In the 1990s, Joannopoulos and others wondered whether the same effects could be realized for optical materials, to intentionally reflect, or keep out, some kinds of light while letting others through. And even more intriguing: Could a single material be designed such that incoming light pinballs away from certain regions in a material in predesigned paths?
“The answer was a resounding yes,” he said.
Joannopoulos described the excitement within the emerging field by quoting an editor from the journal Nature, who wrote at the time: “If only it were possible to make materials in which electromagnetic waves cannot propagate at certain frequencies, all kinds of almost-magical things would be possible.”
Joannopoulos and his group at MIT began in earnest to elucidate the ways in which light interacts with matter and air. The team worked first with two-dimensional photonic crystals made from a horizontal matrix-like pattern of silicon dots surrounded by air. Silicon has a high refractive index, meaning it can greatly bend or reflect light, while air has a much lower index. Joannopoulos predicted that the silicon could be patterned to ping light away, forcing it to travel through the air in predetermined paths.
In multiple works, he and his students showed through theory and experiments that they could design photonic crystals to, for instance, bend incoming light by 90 degrees and force light to circulate only at the edges of a crystal under an applied magnetic field.
“Over the years there have been quite a few examples we’ve discovered of very anomalous, strange behavior of light that cannot exist in normal objects,” he said.
In 1998, after showing that light can be reflected from all directions from a stacked, one-dimensional photonic crystal, he and his students rolled the crystal structure into a fiber, which they tested in a lab. In a video that Joannopoulos played for the audience, a student carefully aimed the end of the long, flexible fiber at a sheet of material made from the same material as the fiber’s casing. As light pumped through the multilayered photonic lining of the fiber and out the other end, the student used the light to slowly etch a smiley face design in the sheet, drawing laughter from the crowd.
As the video demonstrated, although the light was intense enough to melt the material of the fiber’s coating, it was nevertheless entirely contained within the fiber’s core, thanks to the multilayered design of its photonic lining. What’s more, the light was focused enough to make precise patterns when it shone out of the fiber.
“We had originally developed this [optical fiber] as a military device,” Joannopoulos said. “But then the obvious choice to use it for the civilian population was quite clear.”
“Believing in the goodness of people and what they can do”
He and others co-founded Omniguide in 2000, which has since grown into a medical device company that develops and commercializes minimally invasive surgical tools such as the fiber-based “photonics scalpel.” In illustrating the fiber’s impact, Joannopoulos played a news video, highlighting the fiber’s use in performing precise and effective neurosurgery. The optical scalpel has also been used to perform procedures in laryngology, head and neck surgery, and gynecology, along with brain and spinal surgeries.
Omniguide is one of several startups that Joannopoulos has helped found, along with Luminus Devices, Inc., WiTricity Corporation, Typhoon HIL, Inc., and Lightelligence. He is author or co-author of over 750 refereed journal articles, four textbooks, and 126 issued U.S. patents. He has earned numerous recognitions and awards, including his election to the National Academy of Sciences and the American Academy of Arts and Sciences.
The Killian Award citation states: “Professor Joannopoulos has been a consistent role model not just in what he does, but in how he does it. … Through all these individuals he has impacted — not to mention their academic descendants — Professor Joannopoulos has had a vast influence on the development of science in recent decades.”
At the end of the talk, Yoel Fink, Joannopoulos’ former student and frequent collaborator, who is now professor of materials science, asked Joannopoulos how, particularly in current times, he has been able to “maintain such a positive and optimistic outlook, of humans and human nature.”
“It’s a matter of believing in the goodness of people and what they can do, what they accomplish, and giving an environment where they’re working in, where they feel extremely comfortable,” Joannopoulos offered. “That includes creating a sense of trust between the faculty and the students, which is key. That helps enormously.”
Evidence that 40Hz gamma stimulation promotes brain health is expanding
A decade of studies provide a growing evidence base that increasing the power of the brain’s gamma rhythms could help fight Alzheimer’s, and perhaps other neurological diseases.
A decade after scientists in The Picower Institute for Learning and Memory at MIT first began testing whether sensory stimulation of the brain’s 40Hz “gamma” frequency rhythms could treat Alzheimer’s disease in mice, a growing evidence base supporting the idea that it can improve brain health — in humans as well as animals — has emerged from the work of labs all over the world. A new open-access review article in PLOS Biology describes the state of research so far and presents some of the fundamental and clinical questions at the forefront of the noninvasive gamma stimulation now.
“As we’ve made all our observations, many other people in the field have published results that are very consistent,” says Li-Huei Tsai, Picower professor of neuroscience at MIT, director of MIT’s Aging Brain Initiative, and senior author of the new review, with postdoc Jung Park. “People have used many different ways to induce gamma including sensory stimulation, transcranial alternating current stimulation, or transcranial magnetic stimulation, but the key is delivering stimulation at 40 hertz. They all see beneficial effects.”
A decade of discovery at MIT
Starting with a paper in Nature in 2016, a collaboration led by Tsai has produced a series of studies showing that 40Hz stimulation via light, sound, the two combined, or tactile vibration reduces hallmarks of Alzheimer’s pathology such as amyloid and tau proteins, prevents neuron death, decreases synapse loss, and sustains memory and cognition in various Alzheimer’s mouse models. The collaboration’s investigations of the underlying mechanisms that produce these benefits have so far identified specific cellular and molecular responses in many brain cell types including neurons, microglia, astrocytes, oligodendrocytes, and the brain’s blood vessels. Last year, for instance, the lab reported in Nature that 40Hz audio and visual stimulation induced interneurons in mice to increase release of the peptide VIP, prompting increased clearance of amyloid from brain tissue via the brain’s glymphatic “plumbing” system.
Meanwhile, at MIT and at the MIT spinoff company Cognito Therapeutics, phase II clinical studies have shown that people with Alzheimer’s exposed to 40Hz light and sound experienced a significant slowing of brain atrophy and improvements on some cognitive measures, compared to untreated controls. Cognito, which has also measured significant preservation of the brain’s “white matter” in volunteers, has been conducting a pivotal, nationwide phase III clinical trial of sensory gamma stimulation for more than a year.
“Neuroscientists often lament that it is a great time to have AD [Alzheimer’s disease] if you are a mouse,” Park and Tsai wrote in the review. “Our ultimate goal, therefore, is to translate GENUS discoveries into a safe, accessible, and noninvasive therapy for AD patients.” The MIT team often refers to 40Hz stimulation as “GENUS” for Gamma Entrainment Using Sensory Stimulation.
A growing field
As Tsai’s collaboration, which includes MIT colleagues Edward Boyden and Emery N. Brown, has published its results, many other labs have produced studies adding to the evidence that various methods of noninvasive gamma sensory stimulation can combat Alzheimer’s pathology. Among many examples cited in the new review, in 2024 a research team in China independently corroborated that 40Hz sensory stimulation increases glymphatic fluid flows in mice. In another example, a Harvard Medical School-based team in 2022 showed that 40Hz gamma stimulation using transcranial alternating current stimulation significantly reduced the burden of tau in three out of four human volunteers. And in another study involving more than 100 people, researchers in Scotland in 2023 used audio and visual gamma stimulation (at 37.5Hz) to improve memory recall.
Open questions
Amid the growing number of publications describing preclinical studies with mice and clinical trials with people, open questions remain, Tsai and Park acknowledge. The MIT team and others are still exploring the cellular and molecular mechanisms that underlie GENUS’s effects. Tsai says her lab is looking at other neuropeptide and neuromodulatory systems to better understand the cascade of events linking sensory stimulation to the observed cellular responses. Meanwhile, the nature of how some cells, such as microglia, respond to gamma stimulation and how that affects pathology remains unclear, Tsai adds.
Even with a national phase III clinical trial underway, it is still important to investigate these fundamental mechanisms, Tsai says, because new insights into how noninvasive gamma stimulation affects the brain could improve and expand its therapeutic potential.
“The more we understand the mechanisms, the more we will have good ideas about how to further optimize the treatment,” Tsai says. “And the more we understand its action and the circuits it affects, the more we will know beyond Alzheimer’s disease what other neurological disorders will benefit from this.”
Indeed, the review points to studies at MIT and other institutions providing at least some evidence that GENUS might be able to help with Parkinson’s disease, stroke, anxiety, epilepsy, and the cognitive side effects of chemotherapy and conditions that reduce myelin, such as multiple sclerosis. Tsai’s lab has been studying whether it can help with Down syndrome as well.
The open questions may help define the next decade of GENUS research.
QS World University Rankings rates MIT No. 1 in 11 subjects for 2025
The Institute also ranks second in seven subject areas.
QS World University Rankings has placed MIT in the No. 1 spot in 11 subject areas for 2025, the organization announced today.
The Institute received a No. 1 ranking in the following QS subject areas: Chemical Engineering; Civil and Structural Engineering; Computer Science and Information Systems; Data Science and Artificial Intelligence; Electrical and Electronic Engineering; Linguistics; Materials Science; Mechanical, Aeronautical, and Manufacturing Engineering; Mathematics; Physics and Astronomy; and Statistics and Operational Research.
MIT also placed second in seven subject areas: Accounting and Finance; Architecture/Built Environment; Biological Sciences; Business and Management Studies; Chemistry; Earth and Marine Sciences; and Economics and Econometrics.
For 2025, universities were evaluated in 55 specific subjects and five broader subject areas. MIT was ranked No. 1 in the broader subject area of Engineering and Technology and No. 2 in Natural Sciences.
Quacquarelli Symonds Limited subject rankings, published annually, are designed to help prospective students find the leading schools in their field of interest. Rankings are based on research quality and accomplishments, academic reputation, and graduate employment.
MIT has been ranked as the No. 1 university in the world by QS World University Rankings for 13 straight years.
Look around, and you’ll see it everywhere: the way trees form branches, the way cities divide into neighborhoods, the way the brain organizes into regions. Nature loves modularity — a limited number of self-contained units that combine in different ways to perform many functions. But how does this organization arise? Does it follow a detailed genetic blueprint, or can these structures emerge on their own?
A new study from MIT Professor Ila Fiete suggests a surprising answer.
In findings published Feb. 18 in Nature, Fiete, an associate investigator in the McGovern Institute for Brain Research and director of the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT, reports that a mathematical model called peak selection can explain how modules emerge without strict genetic instructions. Her team’s findings, which apply to brain systems and ecosystems, help explain how modularity occurs across nature, no matter the scale.
Joining two big ideas
“Scientists have debated how modular structures form. One hypothesis suggests that various genes are turned on at different locations to begin or end a structure. This explains how insect embryos develop body segments, with genes turning on or off at specific concentrations of a smooth chemical gradient in the insect egg,” says Fiete, who is the senior author of the paper. Mikail Khona PhD '25, a former graduate student and K. Lisa Yang ICoN Center graduate fellow, and postdoc Sarthak Chandra also led the study.
Another idea, inspired by mathematician Alan Turing, suggests that a structure could emerge from competition — small-scale interactions can create repeating patterns, like the spots on a cheetah or the ripples in sand dunes.
Both ideas work well in some cases, but fail in others. The new research suggests that nature need not pick one approach over the other. The authors propose a simple mathematical principle called peak selection, showing that when a smooth gradient is paired with local interactions that are competitive, modular structures emerge naturally. “In this way, biological systems can organize themselves into sharp modules without detailed top-down instruction,” says Chandra.
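The interplay of a smooth gradient with local competition can be caricatured in a few lines of code. The sketch below is a toy of our own construction, not the model from the Nature paper: units along a line carry a smoothly varying gradient value, a handful of competing "module identities" quantize it, and a local-agreement term cleans up noise-induced islands, leaving sharp contiguous modules. The number of identities, the noise level, and the competition weight beta are all arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_modules, beta = 200, 5, 2.0

# Smooth global gradient with a little local noise.
gradient = np.linspace(0.0, 1.0, n_units) + 0.02 * rng.standard_normal(n_units)
centers = (np.arange(n_modules) + 0.5) / n_modules  # competing module identities

def boundaries(lab):
    """Count positions where the module label changes between neighbors."""
    return int(np.sum(lab[1:] != lab[:-1]))

# Each unit starts with the identity that best matches its local gradient value.
labels = np.argmin((gradient[:, None] - centers[None, :]) ** 2, axis=1)
before = boundaries(labels)

# Iterated local competition: each unit re-picks the identity minimizing
# gradient mismatch plus disagreement with its two nearest neighbors.
for _ in range(20):
    for i in range(n_units):
        nbrs = (labels[max(i - 1, 0)], labels[min(i + 1, n_units - 1)])
        cost = (gradient[i] - centers) ** 2
        for k in range(n_modules):
            cost[k] += beta * sum(k != nb for nb in nbrs)
        labels[i] = int(np.argmin(cost))

after = boundaries(labels)
print("modules:", len(np.unique(labels)), "boundaries before/after:", before, after)
```

The smooth input collapses into a fixed number of contiguous modules with sharp edges, and the competition term can only remove spurious boundaries, never create new ones inside a uniform region, which is the flavor of self-organized sharpening the peak-selection principle describes.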
Modular systems in the brain
The researchers tested their idea on grid cells, which play a critical role in spatial navigation as well as the storage of episodic memories. Grid cells fire in a repeating triangular pattern as animals move through space, but they don’t all work at the same scale — they are organized into distinct modules, each responsible for mapping space at slightly different resolutions.
No one knows how these modules form, but Fiete’s model shows that gradual variations in cellular properties along one dimension in the brain, combined with local neural interactions, could explain the entire structure. The grid cells naturally sort themselves into distinct groups with clear boundaries, without external maps or genetic programs telling them where to go. “Our work explains how grid cell modules could emerge. The explanation tips the balance toward the possibility of self-organization. It predicts that there might be no gene or intrinsic cell property that jumps when the grid cell scale jumps to another module,” notes Khona.
Modular systems in nature
The same principle applies beyond neuroscience. Imagine a landscape where temperature and rainfall vary gradually across space. You might expect species to be distributed smoothly, varying gradually over this region. But in reality, ecosystems often form species clusters with sharp boundaries — distinct ecological “neighborhoods” that don’t overlap.
Fiete’s study suggests why: local competition, cooperation, and predation between species interact with the global environmental gradients to create natural separations, even when the underlying conditions change gradually. This phenomenon can be explained using peak selection — and suggests that the same principle that shapes brain circuits could also be at play in forests and oceans.
A self-organizing world
One of the researchers’ most striking findings is that modularity in these systems is remarkably robust. Change the size of the system, and the number of modules stays the same — they just scale up or down. That means a mouse brain and a human brain could use the same fundamental rules to form their navigation circuits, just at different sizes.
The model also makes testable predictions. If it’s correct, grid cell modules should follow simple spacing ratios. In ecosystems, species distributions should form distinct clusters even without sharp environmental shifts.
Fiete notes that their work adds another conceptual framework to biology. “Peak selection can inform future experiments, not only in grid cell research but across developmental biology.”
Study: The ozone hole is healing, thanks to global reduction of CFCs
New results show with high statistical confidence that ozone recovery is going strong.
A new MIT-led study confirms that the Antarctic ozone layer is healing, as a direct result of global efforts to reduce ozone-depleting substances.
Scientists including the MIT team have observed signs of ozone recovery in the past. But the new study is the first to show, with high statistical confidence, that this recovery is due primarily to the reduction of ozone-depleting substances, versus other influences such as natural weather variability or increased greenhouse gas emissions to the stratosphere.
“There’s been a lot of qualitative evidence showing that the Antarctic ozone hole is getting better. This is really the first study that has quantified confidence in the recovery of the ozone hole,” says study author Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies and Chemistry. “The conclusion is, with 95 percent confidence, it is recovering. Which is awesome. And it shows we can actually solve environmental problems.”
The new study appears today in the journal Nature. Graduate student Peidong Wang from the Solomon group in the Department of Earth, Atmospheric and Planetary Sciences (EAPS) is the lead author. His co-authors include Solomon and EAPS Research Scientist Kane Stone, along with collaborators from multiple other institutions.
Roots of ozone recovery
Within the Earth’s stratosphere, ozone is a naturally occurring gas that acts as a sort of sunscreen, protecting the planet from the sun’s harmful ultraviolet radiation. In 1985, scientists discovered a “hole” in the ozone layer over Antarctica that opened up during the austral spring, between September and December. This seasonal ozone depletion was suddenly allowing UV rays to filter down to the surface, leading to skin cancer and other adverse health effects.
In 1986, Solomon, who was then working at the National Oceanic and Atmospheric Administration (NOAA), led expeditions to the Antarctic, where she and her colleagues gathered evidence that quickly confirmed the ozone hole’s cause: chlorofluorocarbons, or CFCs — chemicals that were then used in refrigeration, air conditioning, insulation, and aerosol propellants. When CFCs drift up into the stratosphere, they can break down ozone under certain seasonal conditions.
The following year, those revelations led to the drafting of the Montreal Protocol — an international treaty that aimed to phase out the production of CFCs and other ozone-depleting substances, in hopes of healing the ozone hole.
In 2016, Solomon led a study reporting key signs of ozone recovery. The ozone hole seemed to be shrinking with each year, especially in September, the time of year when it opens up. Still, these observations were qualitative. The study left large uncertainties about how much of the recovery was due to concerted efforts to reduce ozone-depleting substances, and how much was the result of other “forcings,” such as year-to-year weather variability from El Niño, La Niña, and the polar vortex.
“While detecting a statistically significant increase in ozone is relatively straightforward, attributing these changes to specific forcings is more challenging,” says Wang.
Anthropogenic healing
In their new study, the MIT team took a quantitative approach to identify the cause of Antarctic ozone recovery. The researchers borrowed a method from the climate change community, known as “fingerprinting,” which was pioneered by Klaus Hasselmann, who was awarded the Nobel Prize in Physics in 2021 for the technique. In the context of climate, fingerprinting refers to a method that isolates the influence of specific climate factors, apart from natural, meteorological noise. Hasselmann applied fingerprinting to identify, confirm, and quantify the anthropogenic fingerprint of climate change.
Solomon and Wang looked to apply the fingerprinting method to identify another anthropogenic signal: the effect of human reductions in ozone-depleting substances on the recovery of the ozone hole.
“The atmosphere has really chaotic variability within it,” Solomon says. “What we’re trying to detect is the emerging signal of ozone recovery against that kind of variability, which also occurs in the stratosphere.”
The researchers started with simulations of the Earth’s atmosphere and generated multiple “parallel worlds,” or simulations of the same global atmosphere, under different starting conditions. For instance, they ran simulations under conditions that assumed no increase in greenhouse gases or ozone-depleting substances. Under these conditions, any changes in ozone should be the result of natural weather variability. They also ran simulations with only increasing greenhouse gases, as well as only decreasing ozone-depleting substances.
They compared these simulations to observe how ozone in the Antarctic stratosphere changed with season and across altitudes in response to the different starting conditions. From these simulations, they mapped out the times and altitudes where ozone recovered from month to month over several decades, and identified a key “fingerprint,” or pattern, of ozone recovery that was specifically due to declining ozone-depleting substances.
The team then looked for this fingerprint in actual satellite observations of the Antarctic ozone hole from 2005 to the present day. They found that, over time, the fingerprint that they identified in simulations became clearer and clearer in observations. In 2018, the fingerprint was at its strongest, and the team could say with 95 percent confidence that ozone recovery was due mainly to reductions in ozone-depleting substances.
“After 15 years of observational records, we see this signal to noise with 95 percent confidence, suggesting there’s only a very small chance that the observed pattern similarity can be explained by variability noise,” Wang says. “This gives us confidence in the fingerprint. It also gives us confidence that we can solve environmental problems. What we can learn from ozone studies is how different countries can swiftly follow these treaties to decrease emissions.”
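The detection step can be illustrated with a toy version of pattern-based fingerprinting. Everything below (the synthetic fingerprint, the noise model, the signal amplitudes) is a hypothetical stand-in for the study's simulations and satellite data; it only shows how projecting observations onto a fixed pattern yields a signal-to-noise ratio that can cross a 95 percent confidence threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "fingerprint": a fixed spatial pattern of ozone change
# (e.g., month x altitude, flattened). Purely synthetic, not the
# study's actual model output.
n = 120
fingerprint = np.sin(np.linspace(0.0, 3.0 * np.pi, n))
fingerprint /= np.linalg.norm(fingerprint)

# "Control" runs with natural variability only, used to build the null
# distribution of how strongly pure noise projects onto the pattern.
controls = rng.normal(0.0, 1.0, size=(500, n))
noise_sd = (controls @ fingerprint).std()

def signal_to_noise(obs):
    """Projection of an observed anomaly field onto the fingerprint,
    in units of the spread expected from natural variability alone."""
    return (obs @ fingerprint) / noise_sd

# Simulated observations: a forced recovery signal that strengthens
# each year, buried in weather noise of constant size.
years = np.arange(2005, 2021)
snr = [signal_to_noise(0.5 * i * fingerprint + rng.normal(0.0, 1.0, n))
       for i in range(len(years))]

# An S/N above ~2 corresponds to roughly 95 percent confidence that
# the pattern match is not explained by natural variability.
detected = [s > 2.0 for s in snr]
```

As the forced signal grows relative to the fixed noise level, the projection climbs out of the null distribution, which is the sense in which the real fingerprint "became clearer and clearer" in the satellite record.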
If the trend continues, and the fingerprint of ozone recovery grows stronger, Solomon anticipates that soon there will be a year, here and there, when the ozone layer stays entirely intact. And eventually, the ozone hole should stay shut for good.
“By something like 2035, we might see a year when there’s no ozone hole depletion at all in the Antarctic. And that will be very exciting for me,” she says. “And some of you will see the ozone hole go away completely in your lifetimes. And people did that.”
This research was supported, in part, by the National Science Foundation and NASA.
Study suggests new molecular strategy for treating fragile X syndrome
Enhancing activity of a specific component of neurons’ “NMDA” receptors normalized protein synthesis, neural activity, and seizure susceptibility in the hippocampus of fragile X lab mice.
Building on more than two decades of research, a study by MIT neuroscientists at The Picower Institute for Learning and Memory reports a new way to treat pathology and symptoms of fragile X syndrome, the most common genetically caused autism spectrum disorder. The team showed that augmenting a novel type of neurotransmitter signaling reduced hallmarks of fragile X in mouse models of the disorder.
The new approach, described in Cell Reports, works by targeting a specific molecular subunit of “NMDA” receptors that they discovered plays a key role in how neurons synthesize proteins to regulate their connections, or “synapses,” with other neurons in brain circuits. The scientists showed that in fragile X model mice, increasing the receptor’s activity caused neurons in the hippocampus region of the brain to increase molecular signaling that suppressed excessive bulk protein synthesis, leading to other key improvements.
Setting the table
“One of the things I find most satisfying about this study is that the pieces of the puzzle fit so nicely into what had come before,” says study senior author Mark Bear, Picower Professor in MIT’s Department of Brain and Cognitive Sciences. Former postdoc Stephanie Barnes, now a lecturer at the University of Glasgow, is the study’s lead author.
Bear’s lab studies how neurons continually edit their circuit connections, a process called “synaptic plasticity” that scientists believe to underlie the brain’s ability to adapt to experience and to form and process memories. These studies led to two discoveries that set the table for the newly published advance. In 2011, Bear’s lab showed that fragile X and another autism disorder, tuberous sclerosis (Tsc), represented two ends of a continuum of a kind of protein synthesis in the same neurons. In fragile X there was too much. In Tsc there was too little. When lab members crossbred fragile X and Tsc mice, in fact, their offspring emerged healthy, as the mutations of each disorder essentially canceled each other out.
More recently, Bear’s lab showed a different dichotomy. It has long been understood from their influential work in the 1990s that the flow of calcium ions through NMDA receptors can trigger a form of synaptic plasticity called “long-term depression” (LTD). But in 2020, they found that another mode of signaling by the receptor — one that did not require ion flow — altered protein synthesis in the neuron and caused a physical shrinking of the dendritic “spine” structures housing synapses.
For Bear and Barnes, these studies raised the prospect that if they could pinpoint how NMDA receptors affect protein synthesis they might identify a new mechanism that could be manipulated therapeutically to address fragile X (and perhaps tuberous sclerosis) pathology and symptoms. That would be an important advance to complement ongoing work Bear’s lab has done to correct fragile X protein synthesis levels via another receptor called mGluR5.
Receptor dissection
In the new study, Bear and Barnes’ team decided to use the non-ionic effect on spine shrinkage as a readout to dissect how NMDARs signal protein synthesis for synaptic plasticity in hippocampus neurons. They hypothesized that the dichotomy of ionic effects on synaptic function and non-ionic effects on spine structure might derive from the presence of two distinct components of NMDA receptors: “subunits” called GluN2A and GluN2B. To test that, they used genetic manipulations to knock out each of the subunits. When they did so, they found that knocking out “2A” or “2B” could eliminate LTD, but that only knocking out 2B affected spine size. Further experiments clarified that 2A and 2B are required for LTD, but that spine shrinkage solely depends on the 2B subunit.
The next task was to resolve how the 2B subunit signals spine shrinkage. A promising possibility was a part of the subunit called the “carboxyterminal domain,” or CTD. So, in a new experiment Bear and Barnes took advantage of a mouse that had been genetically engineered by researchers at the University of Edinburgh so that the 2A and 2B CTDs could be swapped with one another. A telling result was that when the 2B subunit lacked its proper CTD, the effect on spine structure disappeared. The result affirmed that the 2B subunit signals spine shrinkage via its CTD.
Another consequence of replacing the CTD of the 2B subunit was an increase in bulk protein synthesis that resembled findings in fragile X. Conversely, augmenting the non-ionic signaling through the 2B subunit suppressed bulk protein synthesis, reminiscent of Tsc.
Treating fragile X
Putting the pieces together, the findings indicated that augmenting signaling through the 2B subunit might, like introducing the mutation causing Tsc, rescue aspects of fragile X.
Indeed, when the scientists swapped in the 2B subunit’s CTD in fragile X model mice, they found correction not only of the excessive bulk protein synthesis, but also of the altered synaptic plasticity and increased electrical excitability that are hallmarks of the disease. To see if a treatment that targets NMDA receptors might be effective in fragile X, they tried an experimental drug called Glyx-13. This drug binds to the 2B subunit of NMDA receptors to augment signaling. The researchers found that this treatment also normalized protein synthesis and reduced sound-induced seizures in the fragile X mice.
The team now hypothesizes, based on another prior study in the lab, that the beneficial effect to fragile X mice of the 2B subunit’s CTD signaling is that it shifts the balance of protein synthesis away from an all-too-efficient translation of short messenger RNAs (which leads to excessive bulk protein synthesis) toward a lower-efficiency translation of longer messenger RNAs.
Bear says he does not know what the prospects are for Glyx-13 as a clinical drug, but he noted that there are some drugs in clinical development that specifically target the 2B subunit of NMDA receptors.
In addition to Bear and Barnes, the study’s other authors are Aurore Thomazeau, Peter Finnie, Max Heinreich, Arnold Heynen, Noboru Komiyama, Seth Grant, Frank Menniti, and Emily Osterweil.
The FRAXA Foundation, The Picower Institute for Learning and Memory, The Freedom Together Foundation, and the National Institutes of Health funded the study.
Breakfast of champions: MIT hosts top young scientists
At an MIT-led event at AJAS/AAAS, researchers connect with MIT faculty, Nobel laureates, and industry leaders to share their work, gain mentorship, and explore future careers in science.
On Feb. 14, some of the nation’s most talented high school researchers convened in Boston for the annual American Junior Academy of Science (AJAS) conference, held alongside the American Association for the Advancement of Science (AAAS) annual meeting. As a highlight of the event, MIT once again hosted its renowned “Breakfast with Scientists,” offering students a unique opportunity to connect with leading scientific minds from around the world.
The AJAS conference began with an opening reception at the MIT Schwarzman College of Computing, where professor of biology and chemistry Catherine Drennan delivered the keynote address, welcoming 162 high school students from 21 states. Delegates were selected through state Academy of Science competitions, earning the chance to share their work and connect with peers and professionals in science, technology, engineering, and mathematics (STEM).
Over breakfast, students engaged with distinguished scientists, including MIT faculty, Nobel laureates, and industry leaders, discussing research, career paths, and the broader impact of scientific discovery.
Amy Keating, MIT biology department head, sat at a table with students ranging from high school juniors to college sophomores. The group engaged in an open discussion about life as a scientist at a leading institution like MIT. One student expressed concern about the competitive nature of innovative research environments, prompting Keating to reassure them, saying, “MIT has a collaborative philosophy rather than a competitive one.”
At another table, Nobel laureate and former MIT postdoc Gary Ruvkun shared a lighthearted moment with students, laughing at a TikTok video they had created to explain their science fair project. The interaction reflected the innate curiosity and excitement that drives discovery at all stages of a scientific career.
Donna Gerardi, executive director of the National Association of Academies of Science, highlighted the significance of the AJAS program. “These students are not just competing in science fairs; they are becoming part of a larger scientific community. The connections they make here can shape their careers and future contributions to science.”
Alongside the breakfast, AJAS delegates participated in a variety of enriching experiences, including laboratory tours, conference sessions, and hands-on research activities.
“I am so excited to be able to discuss my research with experts and get some guidance on the next steps in my academic trajectory,” said Andrew Wesel, a delegate from California.
A defining feature of the AJAS experience was its emphasis on mentorship and collaboration rather than competition. Delegates were officially inducted as lifetime Fellows of the American Junior Academy of Science at the conclusion of the conference, joining a distinguished network of scientists and researchers.
Sponsored by the MIT School of Science and School of Engineering, the breakfast underscored MIT’s longstanding commitment to fostering young scientific talent. Faculty and researchers took the opportunity to encourage students to pursue careers in STEM fields, providing insights into the pathways available to them.
“It was a joy to spend time with such passionate students,” says Kristala Prather, head of the Department of Chemical Engineering at MIT. “One of the brightest moments for me was sitting next to a young woman who will be joining MIT in the fall — I just have to convince her to study ChemE!”
Seeing more in expansion microscopy
New methods light up lipid membranes and let researchers see sets of proteins inside cells with high resolution.
In biology, seeing can lead to understanding, and researchers in Professor Edward Boyden’s lab at the McGovern Institute for Brain Research are committed to bringing life into sharper focus. With a pair of new methods, they are expanding the capabilities of expansion microscopy — a high-resolution imaging technique the group introduced in 2015 — so researchers everywhere can see more when they look at cells and tissues under a light microscope.
“We want to see everything, so we’re always trying to improve it,” says Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT. “A snapshot of all life, down to its fundamental building blocks, is really the goal.” Boyden is also a Howard Hughes Medical Institute investigator and a member of the Yang Tan Collective at MIT.
With new ways of staining their samples and processing images, users of expansion microscopy can now see vivid outlines of the shapes of cells in their images and pinpoint the locations of many different proteins inside a single tissue sample with resolution that far exceeds that of conventional light microscopy. These advances, both reported in open-access form in the journal Nature Communications, enable new ways of tracing the slender projections of neurons and visualizing spatial relationships between molecules that contribute to health and disease.
Expansion microscopy uses a water-absorbing hydrogel to physically expand biological tissues. After a tissue sample has been permeated by the hydrogel, it is hydrated. The hydrogel swells as it absorbs water, preserving the relative locations of molecules in the tissue as it gently pulls them away from one another. As a result, crowded cellular components appear separate and distinct when the expanded tissue is viewed under a light microscope. The approach, which can be performed using standard laboratory equipment, has made super-resolution imaging accessible to most research teams.
Since first developing expansion microscopy, Boyden and his team have continued to enhance the method — increasing its resolution, simplifying the procedure, devising new features, and integrating it with other tools.
Visualizing cell membranes
One of the team’s latest advances is a method called ultrastructural membrane expansion microscopy (umExM), which they described in the Feb. 12 issue of Nature Communications. With it, biologists can use expansion microscopy to visualize the thin membranes that form the boundaries of cells and enclose the organelles inside them. These membranes, built mostly of molecules called lipids, have been notoriously difficult to densely label in intact tissues for imaging with light microscopy. Now, researchers can use umExM to study cellular ultrastructure and organization within tissues.
Tay Shin SM ’20, PhD ’23, a former graduate student in Boyden’s lab and a J. Douglas Tan Fellow in the Tan-Yang Center for Autism Research at MIT, led the development of umExM. “Our goal was very simple at first: Let’s label membranes in intact tissue, much like how an electron microscope uses osmium tetroxide to label membranes to visualize the membranes in tissue,” he says. “It turns out that it’s extremely hard to achieve this.”
The team first needed to design a label that would make the membranes in tissue samples visible under a light microscope. “We almost had to start from scratch,” Shin says. “We really had to think about the fundamental characteristics of the probe that is going to label the plasma membrane, and then think about how to incorporate them into expansion microscopy.” That meant engineering a molecule that would associate with the lipids that make up the membrane and link it to both the hydrogel used to expand the tissue sample and a fluorescent molecule for visibility.
After optimizing the expansion microscopy protocol for membrane visualization and extensively testing and improving potential probes, Shin found success one late night in the lab. He placed an expanded tissue sample on a microscope and saw sharp outlines of cells.
Because of the high resolution enabled by expansion, the method allowed Boyden’s team to identify even the tiny dendrites that protrude from neurons and clearly see the long extensions of their slender axons. That kind of clarity could help researchers follow individual neurons’ paths within the densely interconnected networks of the brain, the researchers say.
Boyden calls tracing these neural processes “a top priority of our time in brain science.” Such tracing has traditionally relied heavily on electron microscopy, which requires specialized skills and expensive equipment. Shin says that because expansion microscopy uses a standard light microscope, it is far more accessible to laboratories worldwide.
Shin and Boyden point out that users of expansion microscopy can learn even more about their samples when they pair the new ability to reveal lipid membranes with fluorescent labels that show where specific proteins are located. “That’s important, because proteins do a lot of the work of the cell, but you want to know where they are with respect to the cell’s structure,” Boyden says.
One sample, many proteins
To that end, researchers no longer have to choose just a few proteins to see when they use expansion microscopy. With a new method called multiplexed expansion revealing (multiExR), users can now label and see more than 20 different proteins in a single sample. Biologists can use the method to visualize sets of proteins, see how they are organized with respect to one another, and generate new hypotheses about how they might interact.
A key to that new method, reported Nov. 9, 2024, in Nature Communications, is the ability to repeatedly link fluorescently labeled antibodies to specific proteins in an expanded tissue sample, image them, then strip these away and use a new set of antibodies to reveal a new set of proteins. Postdoc Jinyoung Kang fine-tuned each step of this process, assuring tissue samples stayed intact and the labeled proteins produced bright signals in each round of imaging.
After capturing many images of a single sample, Boyden’s team faced another challenge: how to ensure those images were in perfect alignment so they could be overlaid with one another, producing a final picture that showed the precise positions of all of the proteins that had been labeled and visualized one by one.
Expansion microscopy lets biologists visualize some of cells’ tiniest features — but to find the same features over and over again during multiple rounds of imaging, Boyden’s team first needed to home in on a larger structure. “These fields of view are really tiny, and you’re trying to find this really tiny field of view in a gel that’s actually become quite large once you’ve expanded it,” explains Margaret Schroeder, a graduate student in Boyden’s lab who, with Kang, led the development of multiExR.
To navigate to the right spot every time, the team decided to label the blood vessels that pass through each tissue sample and use these as a guide. To enable precise alignment, certain fine details also needed to consistently appear in every image; for this, the team labeled several structural proteins. With these reference points and customized image-processing software, the team was able to integrate all of their images of a sample into one, revealing how proteins that had been visualized separately were arranged relative to one another.
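One standard way to recover the translation between two imaging rounds that share a reference channel, such as labeled blood vessels, is phase correlation. The sketch below uses a synthetic image and a known shift; it illustrates the generic registration idea, not the team's actual software.

```python
import numpy as np

rng = np.random.default_rng(1)

# A synthetic "reference channel" and a copy of it shifted by a known
# offset, standing in for the same landmark imaged in two rounds.
ref = rng.random((128, 128))
shift = (7, -12)
moved = np.roll(ref, shift, axis=(0, 1))

# Phase correlation: normalizing the cross-power spectrum leaves only
# the phase difference, whose inverse FFT peaks at the translation.
F1, F2 = np.fft.fft2(ref), np.fft.fft2(moved)
cross = F1 * np.conj(F2)
corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
peak = np.unravel_index(np.argmax(corr), corr.shape)

# Convert the peak location to a signed shift: applying this offset
# to `moved` re-registers it onto `ref`.
recovered = tuple(int(p) if p < s // 2 else int(p) - s
                  for p, s in zip(peak, corr.shape))
```

Because the shift here is an exact cyclic translation, the correlation surface is a near-perfect delta function; real imaging rounds add noise and distortion, which is why stable landmarks such as vessels and structural proteins matter.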
The team used multiExR to look at amyloid plaques — the aberrant protein clusters that notoriously develop in brains affected by Alzheimer’s disease. “We could look inside those amyloid plaques and ask, what’s inside of them? And because we can stain for many different proteins, we could do a high-throughput exploration,” Boyden says. The team chose 23 different proteins to view in their images. The approach revealed some surprises, such as the presence of certain neurotransmitter receptors (AMPARs). “Here’s one of the most famous receptors in all of neuroscience, and there it is, hiding out in one of the most famous molecular hallmarks of pathology in neuroscience,” says Boyden. It’s unclear what role, if any, the receptors play in Alzheimer’s disease — but the finding illustrates how the ability to see more inside cells can expose unexpected aspects of biology and raise new questions for research.
Funding for this work came from MIT, Lisa Yang and Y. Eva Tan, John Doerr, the Open Philanthropy Project, the Howard Hughes Medical Institute, the U.S. Army, Cancer Research U.K., the New York Stem Cell Foundation, the U.S. National Institutes of Health, Lore McGovern, Good Ventures, Schmidt Futures, Samsung, MathWorks, the Collamore-Rogers Fellowship, the U.S. National Science Foundation, Alana Foundation USA, the Halis Family Foundation, Lester A. Gimpelson, Donald and Glenda Mattes, David B. Emmes, Thomas A. Stocky, Avni U. Shah, Kathleen Octavio, Good Ventures/Open Philanthropy, and the European Union’s Horizon 2020 program.
Five years, five triumphs in Putnam Math Competition
Undergrads sweep Putnam Fellows for fifth year in a row and continue Elizabeth Lowell Putnam winning streak.
For the fifth time in the history of the annual William Lowell Putnam Mathematical Competition, and for the fifth year in a row, MIT swept all five of the contest’s top spots.
The top five scorers each year are named Putnam Fellows. Senior Brian Liu and juniors Papon Lapate and Luke Robitaille are now three-time Putnam Fellows, sophomore Jiangqi Dai earned his second win, and first-year Qiao Sun earned his first. Each receives a $2,500 award. This is also the fifth time that any school has had all five Putnam Fellows.
MIT’s team also came in first. The team was made up of Lapate, Robitaille, and Sun (in alphabetical order); Lapate and Robitaille were also on last year’s winning team. This is MIT’s ninth first-place win in the past 11 competitions. Teams consist of the three top scorers from each institution. The institution with the first-place team receives a $25,000 award, and each team member receives $1,000.
First-year Jessica Wan was the top-scoring woman, finishing in the top 25, which earned her the $1,000 Elizabeth Lowell Putnam Prize. She is the eighth MIT student to receive this honor since the award was created in 1992. This is the sixth year in a row that an MIT woman has won the prize.
In total, 69 MIT students scored within the top 100. Beyond the top five scorers, MIT took nine of the next 11 spots (each receiving a $1,000 award), and seven of the next nine spots (earning $250 awards). Of the 75 receiving honorable mentions, 48 were from MIT. A total of 3,988 students took the exam in December, including 222 MIT students.
This exam is considered to be the most prestigious university-level mathematics competition in the United States and Canada.
The Putnam is known for its difficulty: While a perfect score is 120, this year’s top score was 90, and the median was just 2. While many MIT students scored well, the department is proud of everyone who attempted the exam, says Professor Michel Goemans, head of the Department of Mathematics.
“Year after year, I am so impressed by the sheer number of students at MIT that participate in the Putnam competition,” Goemans says. “In no other college or university in the world can one find hundreds of students who get a kick out of thinking about math problems. So refreshing!”
Adds Professor Bjorn Poonen, who helped MIT students prepare for the exam this year, “The incredible competition performance is just one manifestation of MIT’s vibrant community of students who love doing math and discussing math with each other, students who through their hard work in this environment excel in ways beyond competitions, too.”
While the annual Putnam Competition is administered to thousands of undergraduate mathematics students across the United States and Canada, in recent years around 70 of its top 100 performers have been MIT students. Since 2000, MIT has placed among the top five teams 23 times.
MIT’s success in the Putnam exam isn’t surprising. MIT’s recent Putnam coaches are four-time Putnam Fellow Bjorn Poonen and three-time Putnam Fellow Yufei Zhao ’10, PhD ’15.
MIT is also a top destination for medalists participating in the International Mathematics Olympiad (IMO) for high school students. Indeed, over the last decade MIT has enrolled almost every American IMO medalist, and more international IMO gold medalists than the universities of any other single country, according to forthcoming research from the Global Talent Fund (GTF), which offers scholarship and training programs for math Olympiad students and coaches.
IMO participation is a strong predictor of future achievement. According to the International Mathematics Olympiad Foundation, about half of Fields Medal winners are IMO alums — but it’s not the only ingredient.
“Recruiting the most talented students is only the beginning. A top-tier university education — with excellent professors, supportive mentors, and an engaging peer community — is key to unlocking their full potential," says GTF President Ruchir Agarwal. "MIT’s sustained Putnam success shows how the right conditions deliver spectacular results. The catalytic reaction of MIT’s concentration of math talent and the nurturing environment of Building 2 should accelerate advancements in fundamental science for years and decades to come.”
Many MIT mathletes see competitions not only as a way to hone their mathematical aptitude, but also as a way to create a strong sense of community, to help inspire and educate the next generation.
Chris Peterson SM ’13, director of communications and special projects at MIT Admissions and Student Financial Services, points out that many MIT students with competition math experience volunteer to help run programs for K-12 students including HMMT and Math Prize for Girls, and mentor research projects through the Program for Research in Mathematics, Engineering and Science (PRIMES).
Many of the top scorers are also alumni of the PRIMES high school outreach program. Two of this year’s Putnam Fellows, Liu and Robitaille, are PRIMES alumni, as are four of the next top 11, and six out of the next nine winners, along with many of the students receiving honorable mentions. Pavel Etingof, a math professor who is also PRIMES’ chief research advisor, states that among the 25 top winners, 12 (48 percent) are PRIMES alumni.
“We at PRIMES are very proud of our alumnae’s fantastic showing at the Putnam Competition,” says PRIMES director Slava Gerovitch PhD ’99. “PRIMES serves as a pipeline of mathematical excellence from high school through undergraduate studies, and beyond.”
Along the same lines, a collaboration between the MIT Department of Mathematics and MISTI-Africa has sent MIT students with Olympiad experience abroad during the Independent Activities Period (IAP) to coach high school students who hope to compete for their national teams.
First-years at MIT also take class 18.A34 (Mathematical Problem Solving), known informally as the Putnam Seminar, not only to hone their Putnam exam skills, but also to make new friends.
“Many people think of math competitions as primarily a way to identify and recognize talent, which of course they are,” says Peterson. “But the community convened by and through these competitions generates educational externalities that collectively exceed the sum of individual accomplishment.”
Math Community and Outreach Officer Michael King also notes the camaraderie that forms around the test.
“My favorite time of the Putnam day is right after the problem session, when the students all jump up, run over to their friends, and begin talking animatedly,” says King, who also took the exam as an undergraduate student. “They cheer each other’s successes, debate problem solutions, commiserate over missed answers, and share funny stories. It’s always amazing to work with the best math students in the world, but the most rewarding aspect is seeing the friendships that develop.”
A full list of the winners can be found on the Putnam website.
An ancient RNA-guided system could simplify delivery of gene editing therapies

The programmable proteins are compact, modular, and can be directed to modify DNA in human cells.

A vast search of natural diversity has led scientists at MIT’s McGovern Institute for Brain Research and the Broad Institute of MIT and Harvard to uncover ancient systems with potential to expand the genome editing toolbox.
These systems, which the researchers call TIGR (Tandem Interspaced Guide RNA) systems, use RNA to guide them to specific sites on DNA. TIGR systems can be reprogrammed to target any DNA sequence of interest, and they have distinct functional modules that can act on the targeted DNA. In addition to its modularity, TIGR is very compact compared to other RNA-guided systems, like CRISPR, which is a major advantage for delivering it in a therapeutic context.
These findings are reported online Feb. 27 in the journal Science.
“This is a very versatile RNA-guided system with a lot of diverse functionalities,” says Feng Zhang, the James and Patricia Poitras Professor of Neuroscience at MIT, who led the research. The TIGR-associated (Tas) proteins that Zhang’s team found share a characteristic RNA-binding component that interacts with an RNA guide that directs it to a specific site in the genome. Some cut the DNA at that site, using an adjacent DNA-cutting segment of the protein. That modularity could facilitate tool development, allowing researchers to swap useful new features into natural Tas proteins.
“Nature is pretty incredible,” says Zhang, who is also an investigator at the McGovern Institute and the Howard Hughes Medical Institute, a core member of the Broad Institute, a professor of brain and cognitive sciences and biological engineering at MIT, and co-director of the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT. “It’s got a tremendous amount of diversity, and we have been exploring that natural diversity to find new biological mechanisms and harnessing them for different applications to manipulate biological processes,” he says. Previously, Zhang’s team adapted bacterial CRISPR systems into gene editing tools that have transformed modern biology. His team has also found a variety of programmable proteins, both from CRISPR systems and beyond.
In their new work, to find novel programmable systems, the team began by zeroing in on a structural feature of the CRISPR-Cas9 protein that binds to the enzyme’s RNA guide. That is a key feature that has made Cas9 such a powerful tool: “Being RNA-guided makes it relatively easy to reprogram, because we know how RNA binds to other DNA or other RNA,” Zhang explains. His team searched hundreds of millions of biological proteins with known or predicted structures, looking for any that shared a similar domain. To find more distantly related proteins, they used an iterative process: from Cas9, they identified a protein called IS110, which had previously been shown by others to bind RNA. They then zeroed in on the structural features of IS110 that enable RNA binding and repeated their search.
At this point, the search had turned up so many distantly related proteins that the team turned to artificial intelligence to make sense of the list. “When you are doing iterative, deep mining, the resulting hits can be so diverse that they are difficult to analyze using standard phylogenetic methods, which rely on conserved sequence,” explains Guilhem Faure, a computational biologist in Zhang’s lab. With a protein large language model, the team was able to cluster the proteins they had found into groups according to their likely evolutionary relationships. One group stood apart from the rest, and its members were particularly intriguing because they were encoded by genes with regularly spaced repetitive sequences reminiscent of an essential component of CRISPR systems. These were the TIGR-Tas systems.
Zhang’s team discovered more than 20,000 different Tas proteins, mostly occurring in bacteria-infecting viruses. Sequences within each gene’s repetitive region — its TIGR arrays — encode an RNA guide that interacts with the RNA-binding part of the protein. In some, the RNA-binding region is adjacent to a DNA-cutting part of the protein. Others appear to bind to other proteins, which suggests they might help direct those proteins to DNA targets.
Zhang and his team experimented with dozens of Tas proteins, demonstrating that some can be programmed to make targeted cuts to DNA in human cells. As they think about developing TIGR-Tas systems into programmable tools, the researchers are encouraged by features that could make those tools particularly flexible and precise.
They note that CRISPR systems can only be directed to segments of DNA that are flanked by short motifs known as PAMs (protospacer adjacent motifs). TIGR-Tas proteins, in contrast, have no such requirement. “This means theoretically, any site in the genome should be targetable,” says scientific advisor Rhiannon Macrae. The team’s experiments also show that TIGR systems have what Faure calls a “dual-guide system,” interacting with both strands of the DNA double helix to home in on their target sequences, which should ensure they act only where they are directed by their RNA guide. What’s more, Tas proteins are compact — a quarter of the size of Cas9, on average — making them easier to deliver, which could overcome a major obstacle to therapeutic deployment of gene editing tools.
Excited by their discovery, Zhang’s team is now investigating the natural role of TIGR systems in viruses, as well as how they can be adapted for research or therapeutics. They have determined the molecular structure of one of the Tas proteins they found to work in human cells, and will use that information to guide their efforts to make it more efficient. Additionally, they note connections between TIGR-Tas systems and certain RNA-processing proteins in human cells. “I think there’s more there to study in terms of what some of those relationships may be, and it may help us better understand how these systems are used in humans,” Zhang says.
This work was supported by the Helen Hay Whitney Foundation, Howard Hughes Medical Institute, K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics, Broad Institute Programmable Therapeutics Gift Donors, Pershing Square Foundation, William Ackman, Neri Oxman, the Phillips family, J. and P. Poitras, and the BT Charitable Foundation.
MIT physicists find unexpected crystals of electrons in an ultrathin material

Rhombohedral graphene reveals new exotic interacting electron states.

MIT physicists report the unexpected discovery of electrons forming crystalline structures in a material only billionths of a meter thick. The work adds to a gold mine of discoveries originating from the material, which the same team discovered about three years ago.
In a paper published Jan. 22 in Nature, the team describes how electrons in devices made, in part, of the material can become solid, or form crystals, by changing the voltage applied to the devices when they are kept at a temperature similar to that of outer space. Under the same conditions, they also showed the emergence of two new electronic states that add to work they reported last year showing that electrons can split into fractions of themselves.
The physicists were able to make the discoveries thanks to new custom-made filters for better insulation of the equipment involved in the work. These allowed them to cool their devices to a temperature an order of magnitude colder than they achieved for the earlier results.
The team also observed all of these phenomena using two slightly different “versions” of the material, one composed of five layers of atomically thin carbon; the other composed of four layers. This indicates “that there’s a family of materials where you can get this kind of behavior, which is exciting,” says Long Ju, an assistant professor in the MIT Department of Physics who led the work. Ju is also affiliated with MIT’s Materials Research Laboratory and Research Lab of Electronics.
Referring to the material, known as rhombohedral pentalayer graphene, Ju says, “We found a gold mine, and every scoop is revealing something new.”
New material
Rhombohedral pentalayer graphene is essentially a special form of pencil lead. Pencil lead, or graphite, is composed of graphene, a single layer of carbon atoms arranged in hexagons resembling a honeycomb structure. Rhombohedral pentalayer graphene is composed of five layers of graphene stacked in a specific overlapping order.
Since Ju and colleagues discovered the material, they have tinkered with it by adding layers of another material they thought might accentuate the graphene’s properties, or even produce new phenomena. For example, in 2023 they created a sandwich of rhombohedral pentalayer graphene with “buns” made of hexagonal boron nitride. By applying different voltages, or amounts of electricity, to the sandwich, they discovered three important properties never before seen in natural graphite.
Last year, Ju and colleagues reported yet another important and even more surprising phenomenon: Electrons became fractions of themselves upon applying a current to a new device composed of rhombohedral pentalayer graphene and hexagonal boron nitride. This is important because this “fractional quantum Hall effect” has only been seen in a few systems, usually under very high magnetic fields. The Ju work showed that the phenomenon could occur in a fairly simple material without a magnetic field. As a result, it is called the “fractional quantum anomalous Hall effect” (anomalous indicates that no magnetic field is necessary).
New results
In the current work, the Ju team reports yet more unexpected phenomena from the general rhombohedral graphene/boron nitride system when it is cooled to 30 millikelvins (1 millikelvin is equivalent to -459.668 degrees Fahrenheit). In last year’s paper, Ju and colleagues reported six fractional states of electrons. In the current work, they report discovering two more of these fractional states.
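The parenthetical conversion follows from the standard kelvin-to-Fahrenheit formula, F = K × 9/5 − 459.67. A quick illustrative check (not part of the study itself) confirms the figure quoted above:

```python
def kelvin_to_fahrenheit(kelvin):
    """Convert an absolute temperature in kelvin to degrees Fahrenheit."""
    return kelvin * 9 / 5 - 459.67

# 30 millikelvin, the operating temperature quoted in the text:
assert abs(kelvin_to_fahrenheit(0.030) - (-459.616)) < 1e-3
# And 1 millikelvin matches the parenthetical figure (-459.668 F):
assert abs(kelvin_to_fahrenheit(0.001) + 459.6682) < 1e-6
```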
They also found another unusual electronic phenomenon: the integer quantum anomalous Hall effect in a wide range of electron densities. The fractional quantum anomalous Hall effect was understood to emerge in an electron “liquid” phase, analogous to water. In contrast, the new state that the team has now observed can be interpreted as an electron “solid” phase — resembling the formation of electronic “ice” — that can also coexist with the fractional quantum anomalous Hall states when the system’s voltage is carefully tuned at ultra-low temperatures.
One way to think about the relation between the integer and fractional states is to imagine a map created by tuning electric voltages: By tuning the system with different voltages, you can create a “landscape” similar to a river (which represents the liquid-like fractional states) cutting through glaciers (which represent the solid-like integer effect), Ju explains.
Ju notes that his team observed all of these phenomena not only in pentalayer rhombohedral graphene, but also in rhombohedral graphene composed of four layers. This creates a family of materials, and indicates that other “relatives” may exist.
“This work shows how rich this material is in exhibiting exotic phenomena. We’ve just added more flavor to this already very interesting material,” says Zhengguang Lu, a co-first author of the paper. Lu, who conducted the work as a postdoc at MIT, is now on the faculty at Florida State University.
In addition to Ju and Lu, other principal authors of the Nature paper are Tonghang Han and Yuxuan Yao, both of MIT. Lu, Han, and Yao are co-first authors of the paper who contributed equally to the work. Other MIT authors are Jixiang Yang, Junseok Seo, Lihan Shi, and Shenyong Ye. Additional members of the team are Kenji Watanabe and Takashi Taniguchi of the National Institute for Materials Science in Japan.
This work was supported by a Sloan Fellowship, a Mathworks Fellowship, the U.S. Department of Energy, the Japan Society for the Promotion of Science KAKENHI, and the World Premier International Research Initiative of Japan. Device fabrication was performed at the Harvard Center for Nanoscale Systems and MIT.nano.
Helping the immune system attack tumors

Stefani Spranger is working to discover why some cancers don’t respond to immunotherapy, in hopes of making them more vulnerable to it.

In addition to patrolling the body for foreign invaders, the immune system also hunts down and destroys cells that have become cancerous or precancerous. However, some cancer cells end up evading this surveillance and growing into tumors.
Once established, tumor cells often send out immunosuppressive signals, which leads T cells to become “exhausted” and unable to attack the tumor. In recent years, some cancer immunotherapy drugs have shown great success in rejuvenating those T cells so they can begin attacking tumors again.
While this approach has proven effective against cancers such as melanoma, it doesn’t work as well for others, including lung and ovarian cancer. MIT Associate Professor Stefani Spranger is trying to figure out how those tumors are able to suppress immune responses, in hopes of finding new ways to galvanize T cells into attacking them.
“We really want to understand why our immune system fails to recognize cancer,” Spranger says. “And I’m most excited about the really hard-to-treat cancers because I think that’s where we can make the biggest leaps.”
Her work has led to a better understanding of the factors that control T-cell responses to tumors, and raised the possibility of improving those responses through vaccination or treatment with immune-stimulating molecules called cytokines.
“We’re working on understanding what exactly the problem is, and then collaborating with engineers to find a good solution,” she says.
Jumpstarting T cells
As a student in Germany, where students often have to choose their college major while still in high school, Spranger envisioned going into the pharmaceutical industry and chose to major in biology. At Ludwig Maximilian University in Munich, her course of study began with classical biology subjects such as botany and zoology, and she began to doubt her choice. But, once she began taking courses in cell biology and immunology, her interest was revived and she continued into a biology graduate program at the university.
During a paper discussion class early in her graduate school program, Spranger was assigned to a Science paper on a promising new immunotherapy treatment for melanoma. This strategy involves isolating tumor-infiltrating T-cells during surgery, growing them into large numbers, and then returning them to the patient. For more than 50 percent of those patients, the tumors were completely eliminated.
“To me, that changed the world,” Spranger recalls. “You can take the patient’s own immune system, not really do all that much to it, and then the cancer goes away.”
Spranger completed her PhD studies in a lab that worked on further developing that approach, known as adoptive T-cell transfer therapy. At that point, she still was leaning toward going into pharma, but after finishing her PhD in 2011, her husband, also a biologist, convinced her that they should both apply for postdoc positions in the United States.
They ended up at the University of Chicago, where Spranger worked in a lab that studies how the immune system responds to tumors. There, she discovered that while melanoma is usually very responsive to immunotherapy, there is a small fraction of melanoma patients whose T cells don’t respond to the therapy at all. That got her interested in trying to figure out why the immune system doesn’t always respond to cancer the way that it should, and in finding ways to jumpstart it.
During her postdoc, Spranger also discovered that she enjoyed mentoring students, which she hadn’t done as a graduate student in Germany. That experience drew her away from going into the pharmaceutical industry, in favor of a career in academia.
“I had my first mentoring teaching experience having an undergrad in the lab, and seeing that person grow as a scientist, from barely asking questions to running full experiments and coming up with hypotheses, changed how I approached science and my view of what academia should be for,” she says.
Modeling the immune system
When applying for faculty jobs, Spranger was drawn to the collaborative environment of MIT and its Koch Institute for Integrative Cancer Research, which offered the chance to work with a large community of engineers in the field of immunology.
“That community is so vibrant, and it’s amazing to be a part of it,” she says.
Building on the research she had done as a postdoc, Spranger wanted to explore why some tumors respond well to immunotherapy, while others do not. For many of her early studies, she used a mouse model of non-small-cell lung cancer. In human patients, the majority of these tumors do not respond well to immunotherapy.
“We build model systems that resemble each of the different subsets of non-responsive non-small cell lung cancer, and we’re trying to really drill down to the mechanism of why the immune system is not appropriately responding,” she says.
As part of that work, she has investigated why the immune system behaves differently in different types of tissue. While immunotherapy drugs called checkpoint inhibitors can stimulate a strong T-cell response in the skin, they don’t do nearly as much in the lung. However, Spranger has shown that T-cell responses in the lung can be improved when immune molecules called cytokines are also given along with the checkpoint inhibitor.
Those cytokines work, in part, by activating dendritic cells — a class of immune cells that help to initiate immune responses, including activation of T cells.
“Dendritic cells are the conductor for the orchestra of all the T cells, although they’re a very sparse cell population,” Spranger says. “They can communicate which type of danger they sense from stressed cells and then instruct the T cells on what they have to do and where they have to go.”
Spranger’s lab is now beginning to study other types of tumors that don’t respond at all to immunotherapy, including ovarian cancer and glioblastoma. Both the brain and the peritoneal cavity appear to suppress T-cell responses to tumors, and Spranger hopes to figure out how to overcome that immunosuppression.
“We’re specifically focusing on ovarian cancer and glioblastoma, because nothing’s working right now for those cancers,” she says. “We want to understand what we have to do in those sites to induce a really good anti-tumor immune response.”
Four from MIT named 2025 Gates Cambridge Scholars

Markey Freudenburg-Puricelli, Christina Kim ’24, Abigail Schipper ’24, and Rachel Zhang ’21 will pursue graduate studies at Cambridge University in the U.K.

MIT senior Markey Freudenburg-Puricelli and alumnae Christina Kim ’24, Abigail (“Abbie”) Schipper ’24, and Rachel Zhang ’21 have been selected as Gates Cambridge Scholars and will begin graduate studies this fall in the field of their choice at Cambridge University in the U.K.
Now celebrating its 25th year, the Gates Cambridge program provides fully funded post-graduate scholarships to outstanding applicants from countries outside of the U.K. The mission of Gates Cambridge is to build a global network of future leaders committed to changing the world for the better.
Students interested in applying to Gates Cambridge should contact Kim Benard, associate dean of distinguished fellowships in Career Advising and Professional Development.
Markey Freudenburg-Puricelli
Freudenburg-Puricelli is majoring in Earth, atmospheric, and planetary sciences and minoring in Spanish. Her passion for geoscience has led her to travel to different corners of the world to conduct geologic fieldwork. These experiences have motivated her to pursue a career in developing scientific policy and environmental regulation that can protect those most vulnerable to climate change. As a Gates Cambridge Scholar, she will pursue an MPhil in environmental policy.
Arriving at MIT, Freudenburg-Puricelli joined the Terrascope first-year learning community, which focuses on hands-on education relating to global environmental issues. She then became an undergraduate research assistant in the McGee Lab for Paleoclimate and Geochronology, where she gathered and interpreted data used to understand climate features of permafrost across northern Canada.
Following a summer internship in Chile researching volcanoes at the Universidad Católica del Norte, Freudenburg-Puricelli joined the Gehring Lab for Plant Genetics, Epigenetics, and Seed Biology. Last summer, she traveled to Peru to work with the Department of Paleontology at the Universidad Nacional de Piura, conducting fieldwork and preserving and organizing fossil specimens. Freudenburg-Puricelli has also done fieldwork on sedimentology in New Mexico, geological mapping in the Mojave Desert, and field oceanography onboard the SSV Corwith Cramer.
On campus, Freudenburg-Puricelli is an avid glassblower and has been a teaching assistant at the MIT glassblowing lab. She is also a tour guide for the MIT Office of Admissions and has volunteered with the Department of Earth, Atmospheric and Planetary Sciences’ first-year pre-orientation program.
Christina Kim ’24
Hailing from Princeton, New Jersey, Kim majored in chemistry and biology at MIT. Her dedication to bridging knowledge gaps in women’s health brought her to the Wellcome Sanger Institute in Cambridge, U.K., where she has been working as a researcher.
As a Gates Cambridge Scholar, Kim will pursue a research MPhil at the Institute to leverage cutting-edge tools in bioinformatics and tissue engineering for designing novel in vitro models of human placental development. Kim hopes that her work will revolutionize the way that scientists study enigmatic processes in reproductive biology and that she will ultimately contribute crucial steps toward developing life-saving interventions for pregnant women worldwide.
Abigail “Abbie” Schipper ’24
Originally from Portland, Oregon, Schipper graduated from MIT with a BS in mechanical engineering and a minor in biology. At Cambridge, she will pursue an MPhil in engineering, researching medical devices used in pre-hospital trauma systems in low- and middle-income countries with the Cambridge Health Systems Design group.
At MIT, Schipper was a member of MIT Emergency Medical Services, volunteering on the ambulance and serving as the heartsafe officer and director of ambulance operations. Inspired by her work in CPR education, she helped create the LifeSaveHer project, which aims to decrease the gender disparity in out-of-hospital cardiac arrest survival outcomes through the creation of female CPR mannequins and associated research. This team was the first-place winner of the 2023 PKG IDEAS Competition and a recipient of the Eloranta Research Fellowship.
Schipper’s work has also focused on designing medical devices for low-resource or extreme environments. As an undergraduate, she performed research in the lab of Professor Giovanni Traverso, where she worked on a project designing a drug delivery implant for regions with limited access to surgery. During a summer internship at the University College London Collaborative Center for Inclusion Health, she worked with the U.K.’s National Health Service to create durable, low-cost carbon dioxide sensors to approximate the risk of airborne infectious disease transmission in shelters for people experiencing homelessness.
After graduation, Schipper interned at SAGA Space Architecture through MISTI Denmark, designing life support systems for an underwater habitat that will be used for astronaut training and oceanographic research.
Schipper was a member of the Concourse learning community, Sigma Kappa Sorority, and her living group, Burton 3rd. In her free time, she enjoys fixing bicycles and playing the piano.
Rachel Zhang ’21
Zhang graduated from MIT with a BS in physics in 2021. During her senior year, she was a recipient of the Joel Matthews Orloff Award. She then earned an MS in astronomy at Northwestern University. An internship at the Center for Computational Astrophysics at the Flatiron Institute deepened her interest in the applications of machine learning for astronomy. At Cambridge, she will pursue a PhD in applied mathematics and theoretical physics.
This article was updated to reflect an additional winner announced in early April.
Study: Even after learning the right idea, humans and animals still seem to test other approaches

New research adds evidence that learning a successful strategy for approaching a task doesn’t prevent further exploration, even if doing so reduces performance.

Maybe it’s a life hack or a liability, or a little of both. A surprising result in a new MIT study may suggest that people and animals alike share an inherent propensity to keep updating their approach to a task even when they have already learned how they should approach it, and even if the deviations sometimes lead to unnecessary error.
The behavior of “exploring” when one could just be “exploiting” could make sense for at least two reasons, says Mriganka Sur, senior author of the study published Feb. 18 in Current Biology. Just because a task’s rules seem set one moment doesn’t mean they’ll stay that way in this uncertain world, so altering behavior from the optimal condition every so often could help reveal needed adjustments. Moreover, trying new things when you already know what you like is a way of finding out whether there might be something even better out there than the good thing you’ve got going on right now.
“If the goal is to maximize reward, you should never deviate once you have found the perfect solution, yet you keep exploring,” says Sur, the Paul and Lilah Newton Professor in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences at MIT. “Why? It’s like food. We all like certain foods, but we still keep trying different foods because you never know, there might be something you could discover.”
Predicting timing
Former research technician Tudor Dragoi, now a graduate student at Boston University, led the study in which he and fellow members of the Sur Lab explored how humans and marmosets, a small primate, make predictions about event timing.
Three humans and two marmosets were given a simple task. They’d see an image on a screen for some amount of time — the amount of time varied from one trial to the next within a limited range — and they simply had to hit a button (marmosets poked a tablet while humans clicked a mouse) when the image disappeared. Success was defined as reacting as quickly as possible to the image’s disappearance without hitting the button too soon. Marmosets received a juice reward on successful trials.
Though marmosets needed more training time than humans, the subjects all settled into the same reasonable pattern of behavior regarding the task. The longer the image stayed on the screen, the faster their reaction time to its disappearance. This behavior follows the “hazard model” of prediction in which, if the image can only last for so long, the longer it’s still there, the more likely it must be to disappear very soon. The subjects learned this and overall, with more experience, their reaction times became faster.
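The “hazard model” intuition above can be made concrete with a small sketch (illustrative only, not the study’s actual analysis): if image durations are drawn uniformly from a bounded range, the conditional probability that the image disappears in the next instant, given that it is still on screen, grows as time passes.

```python
def hazard(t, t_min=0.5, t_max=2.0):
    """Hazard rate f(t) / (1 - F(t)) for durations uniform on [t_min, t_max].

    The longer the image has already been on screen, the more likely it
    is to disappear in the next instant, so the rate grows with t.
    """
    if not (t_min <= t < t_max):
        raise ValueError("t must lie in [t_min, t_max)")
    pdf = 1.0 / (t_max - t_min)               # f(t): uniform density
    survival = (t_max - t) / (t_max - t_min)  # 1 - F(t): still on screen
    return pdf / survival                     # simplifies to 1 / (t_max - t)

# The hazard rises monotonically as the image persists, which is why
# faster reactions to longer-lasting displays are a rational strategy:
rates = [hazard(t) for t in (0.5, 1.0, 1.5, 1.9)]
assert all(a < b for a, b in zip(rates, rates[1:]))
```

The duration range of 0.5 to 2.0 seconds here is an arbitrary placeholder, chosen only to show the shape of the effect.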
But as the experiment continued, Sur and Dragoi’s team noticed something surprising was also going on. Mathematical modeling of the reaction time data revealed that both the humans and marmosets were letting the results of the immediate previous trial influence what they did on the next trial, even though they had already learned what to do. If the image was only on the screen briefly in one trial, on the next round subjects would decrease reaction time a bit (presumably expecting a shorter image duration again) whereas if the image lingered, they’d increase reaction time (presumably because they figured they’d have a longer wait).
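The trial-by-trial influence described above can be caricatured as an exponentially weighted running estimate of the expected duration, nudged by each trial’s outcome. This is a hypothetical toy model for illustration, not the mathematical model the team actually fit:

```python
def update_expectation(expected, observed, alpha=0.3):
    """Nudge the running estimate of image duration toward the last trial.

    alpha sets how strongly the previous trial sways the next prediction:
    alpha=0 ignores history; alpha=1 simply copies the last observation.
    (Toy model; the study fit richer models to the reaction-time data.)
    """
    return (1 - alpha) * expected + alpha * observed

# After a run of long displays, one brief trial pulls the estimate down,
# so the subject prepares for an earlier disappearance on the next round:
expected = 1.5
for duration in (1.8, 1.9, 0.6):
    expected = update_expectation(expected, duration)
assert expected < 1.5
```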
Those results add to ones from a similar study Sur’s lab published in 2023, in which they found that even after mice learned the rules of a different cognitive task, they’d arbitrarily deviate from the winning strategy every so often. In that study, like this one, learning the successful strategy didn’t prevent subjects from continuing to test alternatives, even if it meant sacrificing reward.
“The persistence of behavioral changes even after task learning may reflect exploration as a strategy for seeking and settling on an optimal internal model of the environment,” the scientists wrote in the new study.
Relevance for autism
The similarity of the human and marmoset behaviors is an important finding as well, Sur says. That’s because differences in making predictions about one’s environment are posited to be a salient characteristic of autism spectrum disorders. Because marmosets are small, are inherently social, and are more cognitively complex than mice, work has begun in some labs to establish marmoset autism models, but a key component was establishing that they model autism-related behaviors well. By demonstrating that marmosets model neurotypical human behavior regarding predictions, the study therefore adds weight to the emerging idea that marmosets can indeed provide informative models for autism studies.
In addition to Dragoi and Sur, other authors of the paper are Hiroki Sugihara, Nhat Le, Elie Adam, Jitendra Sharma, Guoping Feng, and Robert Desimone.
The Simons Foundation Autism Research Initiative supported the research through the Simons Center for the Social Brain at MIT.
MIT faculty, alumni named 2025 Sloan Research Fellows

Annual award honors early-career researchers for creativity, innovation, and research accomplishments.

Seven MIT faculty and 21 additional MIT alumni are among 126 early-career researchers honored with 2025 Sloan Research Fellowships by the Alfred P. Sloan Foundation.
The recipients represent the MIT departments of Biology; Chemical Engineering; Chemistry; Civil and Environmental Engineering; Earth, Atmospheric and Planetary Sciences; Economics; Electrical Engineering and Computer Science; Mathematics; and Physics as well as the Music and Theater Arts Section and the MIT Sloan School of Management.
The fellowships honor exceptional researchers at U.S. and Canadian educational institutions, whose creativity, innovation, and research accomplishments make them stand out as the next generation of leaders. Winners receive a two-year, $75,000 fellowship that can be used flexibly to advance the fellow’s research.
“The Sloan Research Fellows represent the very best of early-career science, embodying the creativity, ambition, and rigor that drive discovery forward,” says Adam F. Falk, president of the Alfred P. Sloan Foundation. “These extraordinary scholars are already making significant contributions, and we are confident they will shape the future of their fields in remarkable ways.”
Including this year’s recipients, a total of 333 MIT faculty have received Sloan Research Fellowships since the program’s inception in 1955. MIT and Northwestern University are tied for having the most faculty in the 2025 cohort of fellows, each with seven. The MIT recipients are:
Ariel L. Furst is the Paul M. Cook Career Development Professor of Chemical Engineering at MIT. Her lab combines biological, chemical, and materials engineering to solve challenges in human health and environmental sustainability, with lab members developing technologies for implementation in low-resource settings to ensure equitable access to technology. Furst completed her PhD in the lab of Professor Jacqueline K. Barton at Caltech developing new cancer diagnostic strategies based on DNA charge transport. She was then an A.O. Beckman Postdoctoral Fellow in the lab of Professor Matthew Francis at the University of California at Berkeley, developing sensors to monitor environmental pollutants. She is the recipient of the NIH New Innovator Award, the NSF CAREER Award, and the Dreyfus Teacher-Scholar Award. She is passionate about STEM outreach and increasing participation of underrepresented groups in engineering.
Mohsen Ghaffari SM ’13, PhD ’17 is an associate professor in the Department of Electrical Engineering and Computer Science (EECS) as well as the Computer Science and Artificial Intelligence Laboratory (CSAIL). His research explores the theory of distributed and parallel computation, and he has had influential work on a range of algorithmic problems, including generic derandomization methods for distributed and parallel computing (which resolved several decades-old open problems), improved distributed algorithms for graph problems, sublinear algorithms derived via distributed techniques, and algorithmic and impossibility results for massively parallel computation. His work has been recognized with best paper awards at the IEEE Symposium on Foundations of Computer Science (FOCS), the ACM-SIAM Symposium on Discrete Algorithms (SODA), the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), the ACM Symposium on Principles of Distributed Computing (PODC), and the International Symposium on Distributed Computing (DISC), as well as a European Research Council Starting Grant and a Google Faculty Research Award, among others.
Marzyeh Ghassemi PhD ’17 is an associate professor within EECS and the Institute for Medical Engineering and Science (IMES). Ghassemi earned two bachelor’s degrees in computer science and electrical engineering from New Mexico State University as a Goldwater Scholar; her MS in biomedical engineering from Oxford University as a Marshall Scholar; and her PhD in computer science from MIT. Following stints as a visiting researcher with Alphabet’s Verily and an assistant professor at University of Toronto, Ghassemi joined EECS and IMES as an assistant professor in July 2021. (IMES is the home of the Harvard-MIT Program in Health Sciences and Technology.) She is affiliated with the Laboratory for Information and Decision Systems (LIDS), the MIT-IBM Watson AI Lab, the Abdul Latif Jameel Clinic for Machine Learning in Health, the Institute for Data, Systems, and Society (IDSS), and CSAIL. Ghassemi’s research in the Healthy ML Group creates a rigorous quantitative framework in which to design, develop, and place machine learning models in a way that is robust and useful, focusing on health settings. Her contributions range from socially-aware model construction to improving subgroup- and shift-robust learning methods to identifying important insights in model deployment scenarios that have implications in policy, health practice, and equity. Among other awards, Ghassemi has been named one of MIT Technology Review’s 35 Innovators Under 35 and an AI2050 Fellow, as well as receiving the 2018 Seth J. Teller Award, the 2023 MIT Prize for Open Data, a 2024 NSF CAREER Award, and the Google Research Scholar Award. She founded the nonprofit Association for Health, Inference and Learning (AHLI) and her work has been featured in popular press such as Forbes, Fortune, MIT News, and The Huffington Post.
Darcy McRose is the Thomas D. and Virginia W. Cabot Career Development Assistant Professor of Civil and Environmental Engineering. She is an environmental microbiologist who draws on techniques from genetics, chemistry, and geosciences to understand the ways microbes control nutrient cycling and plant health. Her laboratory uses small molecules, or “secondary metabolites,” made by plants and microbes as tractable experimental tools to study microbial activity in complex environments like soils and sediments. In the long term, this work aims to uncover fundamental controls on microbial physiology and community assembly that can be used to promote agricultural sustainability, ecosystem health, and human prosperity.
Sarah Millholland, an assistant professor of physics at MIT and member of the Kavli Institute for Astrophysics and Space Research, is a theoretical astrophysicist who studies extrasolar planets, including their formation and evolution, orbital dynamics, and interiors/atmospheres. She studies patterns in the observed planetary orbital architectures, referring to properties like the spacings, eccentricities, inclinations, axial tilts, and planetary size relationships. She specializes in investigating how gravitational interactions such as tides, resonances, and spin dynamics sculpt observable exoplanet properties. She is the 2024 recipient of the Vera Rubin Early Career Award for her contributions to the formation and dynamics of extrasolar planetary systems. She plans to use her Sloan Fellowship to explore how tidal physics shape the diversity of orbits and interiors of exoplanets orbiting close to their stars.
Emil Verner is the Albert F. (1942) and Jeanne P. Clear Career Development Associate Professor of Global Management and an associate professor of finance at the MIT Sloan School of Management. His research lies at the intersection of finance and macroeconomics, with a particular focus on understanding the causes and consequences of financial crises over the past 150 years. Verner’s recent work examines the drivers of bank runs and insolvency during banking crises, the role of debt booms in amplifying macroeconomic fluctuations, the effectiveness of debt relief policies during crises, and how financial crises impact political polarization and support for populist parties. Before joining MIT, he earned a PhD in economics from Princeton University.
Christian Wolf, the Rudi Dornbusch Career Development Assistant Professor of Economics and a faculty research fellow at the National Bureau of Economic Research, works in macroeconomics, monetary economics, and time series econometrics. His work focuses on the development and application of new empirical methods to address classic macroeconomic questions and to evaluate how robust the answers are to a range of common modeling assumptions. His research has provided path-breaking insights on monetary transmission mechanisms and fiscal policy. In a separate strand of work, Wolf has substantially deepened our understanding of the appropriate methods macroeconomists should use to estimate impulse response functions — how key economic variables respond to policy changes or unexpected shocks.
The following MIT alumni also received fellowships:
Jason Altschuler SM ’18, PhD ’22
David Bau III PhD ’21
Rene Boiteau PhD ’16
Lynne Chantranupong PhD ’17
Lydia B. Chilton ’06, ’07, MNG ’09
Jordan Cotler ’15
Alexander Ji PhD ’17
Sarah B. King ’10
Allison Z. Koenecke ’14
Eric Larson PhD ’18
Chen Lian ’15, PhD ’20
Huanqian Loh ’06
Ian J. Moult PhD ’16
Lisa Olshansky PhD ’15
Andrew Owens SM ’13, PhD ’16
Matthew Rognlie PhD ’16
David Rolnick ’12, PhD ’18
Shreya Saxena PhD ’17
Mark Sellke ’18
Amy X. Zhang PhD ’19
Aleksandr V. Zhukhovitskiy PhD ’16
Longtime MIT Professor Anthony “Tony” Sinskey ScD ’67, who was also the co-founder and faculty director of the Center for Biomedical Innovation (CBI), passed away on Feb. 12 at his home in New Hampshire. He was 84.
Deeply engaged with MIT, Sinskey left his mark on the Institute as much through the relationships he built as the research he conducted. Colleagues say that throughout his decades on the faculty, Sinskey’s door was always open.
“He was incredibly generous in so many ways,” says Graham Walker, an American Cancer Society Professor at MIT. “He was so willing to support people, and he did it out of sheer love and commitment. If you could just watch Tony in action, there was so much that was charming about the way he lived. I’ve said for years that after they made Tony, they broke the mold. He was truly one of a kind.”
Sinskey’s lab at MIT explored methods for metabolic engineering and the production of biomolecules. Over the course of his research career, he published more than 350 papers in leading peer-reviewed journals for biology, metabolic engineering, and biopolymer engineering, and filed more than 50 patents. Well-known in the biopharmaceutical industry, Sinskey contributed to the founding of multiple companies, including Metabolix, Tepha, Merrimack Pharmaceuticals, and Genzyme Corporation. Sinskey’s work with CBI also led to impactful research papers, manufacturing initiatives, and educational content since its founding in 2005.
Across all of his work, Sinskey built a reputation as a supportive, collaborative, and highly entertaining friend who seemed to have a story for everything.
“Tony would always ask for my opinions — what did I think?” says Barbara Imperiali, MIT’s Class of 1922 Professor of Biology and Chemistry, who first met Sinskey as a graduate student. “Even though I was younger, he viewed me as an equal. It was exciting to be able to share my academic journey with him. Even later, he was continually opening doors for me, mentoring, connecting. He felt it was his job to get people into a room together to make new connections.”
Sinskey grew up in the small town of Collinsville, Illinois, and spent nights after school working on a farm. For his undergraduate degree, he attended the University of Illinois, where he got a job washing dishes at the dining hall. One day, as he recalled in a 2020 conversation, he complained to his advisor about the dishwashing job, so the advisor offered him a job washing equipment in his microbiology lab.
In a development that would repeat itself throughout Sinskey’s career, he befriended the researchers in the lab and started learning about their work. Soon he was showing up on weekends and helping out. The experience inspired Sinskey to go to graduate school, and he only applied to one place.
Sinskey earned his ScD from MIT in nutrition and food science in 1967. He joined MIT’s faculty a few years later and never left.
“He loved MIT and its excellence in research and education, which were incredibly important to him,” Walker says. “I don’t know of another institution this interdisciplinary — there’s barely a speed bump between departments — so you can collaborate with anybody. He loved that. He also loved the spirit of entrepreneurship, which he thrived on. If you heard somebody wanted to get a project done, you could run around, get 10 people, and put it together. He just loved doing stuff like that.”
Working across departments would become a signature of Sinskey’s research. His original office was on the first floor of MIT’s Building 56, right next to the parking lot, so he’d leave his door open in the mornings and afternoons and colleagues would stop in and chat.
“One of my favorite things to do was to drop in on Tony when I saw that his office door was open,” says Chris Kaiser, MIT’s Amgen Professor of Biology. “We had a whole range of things we liked to catch up on, but they always included his perspectives looking back on his long history at MIT. It also always included hopes for the future, including tracking trajectories of MIT students, whom he doted on.”
Long before the internet existed, colleagues say, Sinskey was a kind of internet unto himself, constantly leveraging his vast web of relationships to make connections and stay on top of the latest science news.
“He was an incredibly gracious person — and he knew everyone,” Imperiali says. “It was as if his Rolodex had no end. You would sit there and he would say, ‘Call this person’ or ‘Call that person,’ and ‘Did you read this new article?’ He had a wonderful view of science and collaboration, and he always made that a cornerstone of what he did. Whenever I’d see his door open, I’d grab a cup of tea and just sit there and talk to him.”
When the first recombinant DNA molecules were produced in the 1970s, it became a hot area of research. Sinskey wanted to learn more about recombinant DNA, so he hosted a large symposium on the topic at MIT that brought in experts from around the world.
“He got his name associated with recombinant DNA for years because of that,” Walker recalls. “People started seeing him as Mr. Recombinant DNA. That kind of thing happened all the time with Tony.”
Sinskey’s research contributions extended beyond recombinant DNA into other microbial techniques to produce amino acids and biodegradable plastics. He co-founded CBI in 2005 to improve global health through the development and dispersion of biomedical innovations. The center adopted Sinskey’s collaborative approach in order to accelerate innovation in biotechnology and biomedical research, bringing together experts from across MIT’s schools.
“Tony was at the forefront of advancing cell culture engineering principles so that making biomedicines could become a reality. He knew early on that biomanufacturing was an important step on the critical path from discovering a drug to delivering it to a patient,” says Stacy Springs, the executive director of CBI. “Tony was not only my boss and mentor, but one of my closest friends. He was always working to help everyone reach their potential, whether that was a colleague, a former or current researcher, or a student. He had a gentle way of encouraging you to do your best.”
“MIT is one of the greatest places to be because you can do anything you want here as long as it’s not a crime,” Sinskey joked in 2020. “You can do science, you can teach, you can interact with people — and the faculty at MIT are spectacular to interact with.”
Sinskey shared his affection for MIT with his family. His wife, the late ChoKyun Rha ’62, SM ’64, SM ’66, ScD ’67, was a professor at MIT for more than four decades and the first woman of Asian descent to receive tenure at MIT. His two sons also attended MIT — Tong-ik Lee Sinskey ’79, SM ’80 and Taeminn Song MBA ’95, who is the director of strategy and strategic initiatives for MIT Information Systems and Technology (IS&T).
Song recalls: “He was driven by the same goal my mother had: to advance knowledge in science and technology by exploring new ideas and pushing everyone around them to be better.”
Around 10 years ago, Sinskey began teaching a class with Walker, Course 7.21/7.62 (Microbial Physiology). Walker says their approach was to treat the students as equals and learn as much from them as they taught. The lessons extended beyond the inner workings of microbes to what it takes to be a good scientist and how to be creative. Sinskey and Rha even started inviting the class over to their home for Thanksgiving dinner each year.
“At some point, we realized the class was turning into a close community,” Walker says. “Tony had this endless supply of stories. It didn’t seem like there was a topic in biology that Tony didn’t have a story about either starting a company or working with somebody who started a company.”
Over the last few years, Walker wasn’t sure they were going to continue teaching the class, but Sinskey remarked it was one of the things that gave his life meaning after his wife’s passing in 2021. That decided it.
After finishing up this past semester with a class-wide lunch at Legal Sea Foods, Sinskey and Walker agreed it was one of the best semesters they’d ever taught.
In addition to his two sons, Sinskey is survived by his daughter-in-law Hyunmee Elaine Song, five grandchildren, and two great-grandsons. He is also survived by his brother Timothy Sinskey and his sister, Christine Sinskey Braudis; his brother Terry Sinskey died in 1975.
Gifts in Sinskey’s memory can be made to the ChoKyun Rha (1962) and Anthony J Sinskey (1967) Fund.
MIT biologists discover a new type of control over RNA splicing
They identified proteins that influence splicing of about half of all human introns, allowing for more complex types of gene regulation.
RNA splicing is a cellular process that is critical for gene expression. After genes are copied from DNA into messenger RNA, portions of the RNA that don’t code for proteins, called introns, are cut out and the coding portions are spliced back together.
This process is controlled by a large protein-RNA complex called the spliceosome. MIT biologists have now discovered a new layer of regulation that helps to determine which sites on the messenger RNA molecule the spliceosome will target.
The research team discovered that this type of regulation, which appears to influence the expression of about half of all human genes, is found throughout the animal kingdom, as well as in plants. The findings suggest that the control of RNA splicing, a process that is fundamental to gene expression, is more complex than previously known.
“Splicing in more complex organisms, like humans, is more complicated than it is in some model organisms like yeast, even though it’s a very conserved molecular process. There are bells and whistles on the human spliceosome that allow it to process specific introns more efficiently. One of the advantages of a system like this may be that it allows more complex types of gene regulation,” says Connor Kenny, an MIT graduate student and the lead author of the study.
Christopher Burge, the Uncas and Helen Whitaker Professor of Biology at MIT, is the senior author of the study, which appears today in Nature Communications.
Building proteins
RNA splicing, a process discovered in the late 1970s, allows cells to precisely control the content of the mRNA transcripts that carry the instructions for building proteins.
Each mRNA transcript contains coding regions, known as exons, and noncoding regions, known as introns. They also include sites that act as signals for where splicing should occur, allowing the cell to assemble the correct sequence for a desired protein. This process enables a single gene to produce multiple proteins; over evolutionary timescales, splicing can also change the size and content of genes and proteins, when different exons become included or excluded.
The spliceosome, which forms on introns, is composed of proteins and noncoding RNAs called small nuclear RNAs (snRNAs). In the first step of spliceosome assembly, an snRNA molecule known as U1 snRNA binds to the 5’ splice site at the beginning of the intron. Until now, it had been thought that the binding strength between the 5’ splice site and the U1 snRNA was the most important determinant of whether an intron would be spliced out of the mRNA transcript.
In the new study, the MIT team discovered that a family of proteins called LUC7 also helps to determine whether splicing will occur, but only for a subset of introns — in human cells, up to 50 percent.
Before this study, it was known that LUC7 proteins associate with U1 snRNA, but their exact function wasn’t clear. There are three different LUC7 proteins in human cells, and Kenny’s experiments revealed that two of these proteins interact specifically with one type of 5’ splice site, which the researchers call “right-handed.” The third human LUC7 protein interacts with a different type, which the researchers call “left-handed.”
The researchers found that about half of human introns contain a right- or left-handed site, while the other half do not appear to be controlled by interaction with LUC7 proteins. This type of control appears to add another layer of regulation that helps remove specific introns more efficiently, the researchers say.
“The paper shows that these two different 5’ splice site subclasses exist and can be regulated independently of one another,” Kenny says. “Some of these core splicing processes are actually more complex than we previously appreciated, which warrants more careful examination of what we believe to be true about these highly conserved molecular processes.”
“Complex splicing machinery”
Previous work has shown that mutation or deletion of one of the LUC7 proteins that bind to right-handed splice sites is linked to blood cancers, including about 10 percent of acute myeloid leukemias (AMLs). In this study, the researchers found that AMLs that lost a copy of the LUC7L2 gene have inefficient splicing of right-handed splice sites. These cancers also developed the same type of altered metabolism seen in earlier work.
“Understanding how the loss of this LUC7 protein in some AMLs alters splicing could help in the design of therapies that exploit these splicing differences to treat AML,” Burge says. “There are also small molecule drugs for other diseases such as spinal muscular atrophy that stabilize the interaction between U1 snRNA and specific 5’ splice sites. So the knowledge that particular LUC7 proteins influence these interactions at specific splice sites could aid in improving the specificity of this class of small molecules.”
Working with a lab led by Sascha Laubinger, a professor at Martin Luther University Halle-Wittenberg, the researchers found that introns in plants also have right- and left-handed 5’ splice sites that are regulated by Luc7 proteins.
The researchers’ analysis suggests that this type of splicing arose in a common ancestor of plants, animals, and fungi, but it was lost from fungi soon after they diverged from plants and animals.
“A lot of what we know about how splicing works and what the core components are actually comes from relatively old yeast genetics work,” Kenny says. “What we see is that humans and plants tend to have more complex splicing machinery, with additional components that can regulate different introns independently.”
The researchers now plan to further analyze the structures formed by the interactions of Luc7 proteins with mRNA and the rest of the spliceosome, which could help them figure out in more detail how different forms of Luc7 bind to different 5’ splice sites.
The research was funded by the U.S. National Institutes of Health and the German Research Foundation.
Viewing the universe through ripples in space
Physicist Salvatore Vitale is looking for new sources of gravitational waves, to reach beyond what we can learn about the universe through light alone.
In early September 2015, Salvatore Vitale, who was then a research scientist at MIT, stopped home in Italy for a quick visit with his parents after attending a meeting in Budapest. The meeting had centered on the much-anticipated power-up of Advanced LIGO — a system scientists hoped would finally detect a passing ripple in space-time known as a gravitational wave.
Albert Einstein had predicted the existence of these cosmic reverberations nearly 100 years earlier and thought they would be impossible to measure. But scientists including Vitale believed they might have a shot with their new ripple detector, which was scheduled, finally, to turn on in a few days. At the meeting in Budapest, team members were excited, albeit cautious, acknowledging that it could be months or years before the instruments picked up any promising signs.
However, the day after he arrived for his long-overdue visit with his family, Vitale received a huge surprise.
“The next day, we detect the first gravitational wave, ever,” he remembers. “And of course I had to lock myself in a room and start working on it.”
Vitale and his colleagues had to work in secrecy to prevent the news from getting out before they could scientifically confirm the signal and characterize its source. That meant that no one — not even his parents — could know what he was working on. Vitale departed for MIT and promised that he would come back to visit for Christmas.
“And indeed, I fly back home on the 25th of December, and on the 26th we detect the second gravitational wave! At that point I had to swear them to secrecy and tell them what happened, or they would strike my name from the family record,” he says, only partly in jest.
With the family peace restored, Vitale could focus on the path ahead, which suddenly seemed bright with gravitational discoveries. He and his colleagues, as part of the LIGO Scientific Collaboration, announced the detection of the first gravitational wave in February 2016, confirming Einstein’s prediction. For Vitale, the moment also solidified his professional purpose.
“Had LIGO not detected gravitational waves when it did, I would not be where I am today,” Vitale says. “For sure I was very lucky to be doing this at the right time, for me, and for the instrument and the science.”
A few months later, Vitale joined the MIT faculty as an assistant professor of physics. Today, as a recently tenured associate professor, he is working with his students to analyze a bounty of gravitational signals from Advanced LIGO as well as Virgo (a similar detector in Italy) and KAGRA, in Japan. The combined power of these observatories is enabling scientists to detect at least one gravitational wave a week, which has revealed a host of extreme sources, from merging black holes to colliding neutron stars.
“Gravitational waves give us a different view of the same universe, which could teach us about things that are very hard to see with just photons,” Vitale says.
Random motion
Vitale is from Reggio di Calabria, a small coastal city in the south of Italy, right at “the tip of the boot,” as he says. His family owned and ran a local grocery store, where he spent so much time as a child that he could recite the names of nearly all the wines in the store.
When he was 9 years old, he remembers stopping in at the local newsstand, which also sold used books. He gathered all the money he had in order to purchase two books, both by Albert Einstein. The first was a collection of letters from the physicist to his friends and family. The second was his theory of relativity.
“I read the letters, and then went through the second book and remember seeing these weird symbols that didn’t mean anything to me,” Vitale recalls.
Nevertheless, the kid was hooked, and continued reading up on physics, and later, quantum mechanics. Toward the end of high school, it wasn’t clear if Vitale could go on to college. Large grocery chains had run his parents’ store out of business, and in the process, the family lost their home and were struggling to recover their losses. But with his parents’ support, Vitale applied and was accepted to the University of Bologna, where he went on to earn a bachelor’s and a master’s in theoretical physics, specializing in general relativity and approximating ways to solve Einstein’s equations. He went on to pursue his PhD in theoretical physics at the Pierre and Marie Curie University in Paris.
“Then, things changed in a very, very random way,” he says.
Vitale’s PhD advisor was hosting a conference, and Vitale volunteered to hand out badges and flyers and help guests get their bearings. That first day, one guest drew his attention.
“I see this guy sitting on the floor, kind of banging his head against his computer because he could not connect his Ubuntu computer to the Wi-Fi, which back then was very common,” Vitale says. “So I tried to help him, and failed miserably, but we started chatting.”
The guest happened to be a professor from Arizona who specialized in analyzing gravitational-wave signals. Over the course of the conference, the two got to know each other, and the professor invited Vitale to Arizona to work with his research group. The unexpected opportunity opened a door to gravitational-wave physics that Vitale might have passed by otherwise.
“When I talk to undergrads and how they can plan their career, I say I don’t know that you can,” Vitale says. “The best you can hope for is a random motion that, overall, goes in the right direction.”
High risk, high reward
Vitale spent two months at Embry-Riddle Aeronautical University in Prescott, Arizona, where he analyzed simulated data of gravitational waves. At that time, around 2009, no one had detected actual signals of gravitational waves. The first iteration of the LIGO detectors began observations in 2002 but had so far come up empty.
“Most of my first few years was working entirely with simulated data because there was no real data in the first place. That led a lot of people to leave the field because it was not an obvious path,” Vitale says.
Nevertheless, the work he did in Arizona only piqued his interest, and Vitale chose to specialize in gravitational-wave physics, returning to Paris to finish up his PhD, then going on to a postdoc position at NIKHEF, the Dutch National Institute for Subatomic Physics at the University of Amsterdam. There, he joined on as a member of the Virgo collaboration, making further connections among the gravitational-wave community.
In 2012, he made the move to Cambridge, Massachusetts, where he started as a postdoc at MIT’s LIGO Laboratory. At that time, scientists there were focused on fine-tuning Advanced LIGO’s detectors and simulating the types of signals that they might pick up. Vitale helped to develop an algorithm to search for signals likely to be gravitational waves.
Just before the detectors turned on for the first observing run, Vitale was promoted to research scientist. And as luck would have it, he was working with MIT students and colleagues on one of the two algorithms that picked up what would later be confirmed to be the first ever gravitational wave.
“It was exciting,” Vitale recalls. “Also, it took us several weeks to convince ourselves that it was real.”
In the whirlwind that followed the official announcement, Vitale became an assistant professor in MIT’s physics department. In 2017, in recognition of the discovery, the Nobel Prize in Physics was awarded to three pivotal members of the LIGO team, including MIT’s Rainer Weiss. Vitale and other members of the LIGO-Virgo collaboration attended the Nobel ceremony in Stockholm, Sweden — a moment captured in a photograph displayed proudly in Vitale’s office.
Vitale was promoted to associate professor in 2022 and earned tenure in 2024. Unfortunately his father passed away shortly before the tenure announcement. “He would have been very proud,” Vitale reflects.
Now, in addition to analyzing gravitational-wave signals from LIGO, Virgo, and KAGRA, Vitale is pushing ahead on plans for an even bigger, better LIGO successor. He is part of the Cosmic Explorer Project, which aims to build a gravitational-wave detector that is similar in design to LIGO but 10 times bigger. At that scale, scientists believe such an instrument could pick up signals from sources that are much farther away in space and time, even close to the beginning of the universe.
Then, scientists could look for never-before-detected sources, such as the very first black holes formed in the universe. They could also search within the same neighborhood as LIGO and Virgo, but with higher precision. Then, they might see gravitational signals that Einstein didn’t predict.
“Einstein developed the theory of relativity to explain everything from the motion of Mercury, which circles the sun every 88 days, to objects such as black holes that are 30 times the mass of the sun and move at half the speed of light,” Vitale says. “There’s no reason the same theory should work for both cases, but so far, it seems so, and we’ve found no departure from relativity. But you never know, and you have to keep looking. It’s high risk, for high reward.”
Study reveals the Phoenix galaxy cluster in the act of extreme cooling

Observations from NASA’s James Webb Space Telescope help to explain the cluster’s mysterious starburst, usually only seen in younger galaxies.

The core of a massive cluster of galaxies appears to be pumping out far more stars than it should. Now researchers at MIT and elsewhere have discovered a key ingredient within the cluster that explains the core’s prolific starburst.
In a new study published in Nature, the scientists report using NASA’s James Webb Space Telescope (JWST) to observe the Phoenix cluster — a sprawling collection of gravitationally bound galaxies that circle a central massive galaxy some 5.8 billion light years from Earth. The cluster is the largest of its kind that scientists have so far observed. For its size and estimated age, the Phoenix should be what astronomers call “red and dead” — long done with any star formation that is characteristic of younger galaxies.
But astronomers previously discovered that the core of the Phoenix cluster appeared surprisingly bright, and the central galaxy seemed to be churning out stars at an extremely vigorous rate. The observations raised a mystery: How was the Phoenix fueling such rapid star formation?
In younger galaxies, the “fuel” for forging stars is in the form of extremely cold and dense clouds of interstellar gas. For the much older Phoenix cluster, it was unclear whether the central galaxy could undergo the extreme cooling of gas that would be required to explain its stellar production, or whether cold gas migrated in from other, younger galaxies.
Now, the MIT team has gained a much clearer view of the cluster’s core, using JWST’s far-reaching, infrared-measuring capabilities. For the first time, they have been able to map regions within the core where there are pockets of “warm” gas. Astronomers have previously seen hints of both very hot gas and very cold gas, but nothing in between.
The detection of warm gas confirms that the Phoenix cluster is actively cooling and able to generate a huge amount of stellar fuel on its own.
“For the first time we have a complete picture of the hot-to-warm-to-cold phase in star formation, which has really never been observed in any galaxy,” says study lead author Michael Reefe, a physics graduate student in MIT’s Kavli Institute for Astrophysics and Space Research. “There is a halo of this intermediate gas everywhere that we can see.”
“The question now is, why this system?” adds co-author Michael McDonald, associate professor of physics at MIT. “This huge starburst could be something every cluster goes through at some point, but we’re only seeing it happen currently in one cluster. The other possibility is that there’s something divergent about this system, and the Phoenix went down a path that other systems don’t go. That would be interesting to explore.”
Hot and cold
The Phoenix cluster was first spotted in 2010 by astronomers using the South Pole Telescope in Antarctica. The cluster comprises about 1,000 galaxies and lies in the constellation Phoenix, after which it is named. Two years later, McDonald led an effort to focus in on Phoenix using multiple telescopes, and discovered that the cluster’s central galaxy was extremely bright. The unexpected luminosity was due to a firehose of star formation. He and his colleagues estimated that this central galaxy was turning out stars at a staggering rate of about 1,000 per year.
“Previous to the Phoenix, the most star-forming galaxy cluster in the universe had about 100 stars per year, and even that was an outlier. The typical number is one-ish,” McDonald says. “The Phoenix is really offset from the rest of the population.”
Since that discovery, scientists have checked in on the cluster from time to time for clues to explain the abnormally high stellar production. They have observed pockets of both ultrahot gas, of about 1 million degrees Fahrenheit, and regions of extremely cold gas, of 10 kelvins, or 10 degrees above absolute zero.
The presence of very hot gas is no surprise: Most massive galaxies, young and old, host black holes at their cores that emit jets of extremely energetic particles, which can continually heat up the galaxy’s gas and dust throughout its lifetime. Only in a galaxy’s early stages does some of this million-degree gas cool dramatically to ultracold temperatures that can then form stars. For the Phoenix cluster’s central galaxy, which should be well past the stage of extreme cooling, the presence of ultracold gas presented a puzzle.
“The question has been: Where did this cold gas come from?” McDonald says. “It’s not a given that hot gas will ever cool, because there could be black hole or supernova feedback. So, there are a few viable options, the simplest being that this cold gas was flung into the center from other nearby galaxies. The other is that this gas somehow is directly cooling from the hot gas in the core.”
Neon signs
For their new study, the researchers worked under a key assumption: If the Phoenix cluster’s cold, star-forming gas is coming from within the central galaxy, rather than from the surrounding galaxies, the central galaxy should have not only pockets of hot and cold gas, but also gas that’s in a “warm” in-between phase. Detecting such intermediate gas would be like catching the gas in the midst of extreme cooling, serving as proof that the core of the cluster was indeed the source of the cold stellar fuel.
Following this reasoning, the team sought to detect any warm gas within the Phoenix core. They looked for gas that was somewhere between 10 kelvins and 1 million kelvins. To search for this Goldilocks gas in a system that is 5.8 billion light years away, the researchers looked to JWST, which is capable of observing farther and more clearly than any observatory to date.
The team used the Medium-Resolution Spectrometer on JWST’s Mid-Infrared Instrument (MIRI), which enables scientists to map light in the infrared spectrum. In July of 2023, the team focused the instrument on the Phoenix core and collected 12 hours’ worth of infrared images. They looked for a specific wavelength that is emitted when gas — specifically neon gas — undergoes a certain loss of ions. This transition occurs at around 300,000 kelvins, or 540,000 degrees Fahrenheit — a temperature that happens to be within the “warm” range that the researchers looked to detect and map. The team analyzed the images and mapped the locations where warm gas was observed within the central galaxy.
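The paired figures in that paragraph (about 300,000 kelvins, or roughly 540,000 degrees Fahrenheit) follow from the standard kelvin-to-Fahrenheit conversion. A quick sanity check of the arithmetic, with an illustrative helper function:

```python
# Standard kelvin-to-Fahrenheit conversion: F = K * 9/5 - 459.67.
# Used here only to sanity-check the temperatures quoted in the article.
def kelvin_to_fahrenheit(k: float) -> float:
    """Convert a temperature from kelvins to degrees Fahrenheit."""
    return k * 9.0 / 5.0 - 459.67

# The ~300,000 K "warm" neon-emitting gas lands near the quoted ~540,000 F.
print(round(kelvin_to_fahrenheit(300_000)))  # 539540
```

The small offset from a round 540,000 comes from the 459.67-degree shift between the two scales; at these temperatures the multiplicative factor of 9/5 dominates.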
“This 300,000-degree gas is like a neon sign that’s glowing in a specific wavelength of light, and we could see clumps and filaments of it throughout our entire field of view,” Reefe says. “You could see it everywhere.”
Based on the extent of warm gas in the core, the team estimates that the central galaxy is undergoing a huge degree of extreme cooling and is generating an amount of ultracold gas each year that is equal to the mass of about 20,000 suns. With that kind of stellar fuel supply, the team says it’s very likely that the central galaxy is indeed generating its own starburst, rather than using fuel from surrounding galaxies.
“I think we understand pretty completely what is going on, in terms of what is generating all these stars,” McDonald says. “We don’t understand why. But this new work has opened a new way to observe these systems and understand them better.”
This work was funded, in part, by NASA.