On Nov. 10, some of the country’s top memorizers converged on MIT’s Kresge Auditorium to compete in a “Tournament of Memory Champions” in front of a live audience.
The competition was split into four events: long-term memory, words-to-remember, auditory memory, and double-deck of cards, in which competitors must memorize the exact order of two decks of cards. In between the events, MIT faculty who are experts in the science of memory gave short talks and demos about memory and how to improve it. Among the competitors was MIT’s own Claire Wang, a sophomore majoring in electrical engineering and computer science. Wang has competed in memory sports for years, a hobby that has taken her around the world to learn from some of the best memorizers on the planet. At the tournament, she tied for first place in the words-to-remember competition.
The event commemorated the 25th anniversary of the USA Memory Championship Organization (USAMC). USAMC sponsored the event in partnership with MIT’s McGovern Institute for Brain Research, the Department of Brain and Cognitive Sciences, the MIT Quest for Intelligence, and the company Lumosity.
MIT News sat down with Wang to learn more about her experience with memory competitions — and see if she had any advice for those of us with less-than-amazing memory skills.
Q: How did you come to get involved in memory competitions?
A: When I was in middle school, I read the book “Moonwalking with Einstein,” which is about a journalist’s journey from having an average memory to being named a memory champion in 2006. My parents were also obsessed with a TV show where people memorized decks of cards and performed other feats of memory. I had already known about the concept of “memory palaces,” so I was inspired to explore memory sports. Somehow, I convinced my parents to let me take a gap year after seventh grade, and I traveled the world going to competitions and learning from memory grandmasters. I got to know the community in that time and built my memory system, which was really fun. I competed much less after that year, apart from some subsequent USA Memory Championship events, but it’s still fun to have this ability.
Q: What was the Tournament of Memory Champions like?
A: USAMC invited a lot of winners from previous years to compete, which was really cool. It was nice seeing a lot of people I hadn’t seen in years. I didn’t compete in every event because I was too busy to do the long-term memory, which requires two weeks of memorization work. But it was a really cool experience. I helped a bit with the brainstorming beforehand because I know one of the professors running it. We thought about how to give the talks and structure the event.
Then I competed in the words event, in which you’re given 300 words over 15 minutes, and the competitors have to recall each one in order in a round-robin format. You get two strikes. A lot of other competitions just make you write the words down. The round-robin format makes it more fun for people to watch. I tied with someone else — I made a dumb mistake — so I was kind of sad in hindsight, but being tied for first is still great.
Since I hadn't done this in a while (and I was coming back from a trip where I didn’t get much sleep), I was a bit nervous that my brain wouldn’t be able to remember anything, and I was pleasantly surprised I didn’t just blank on stage. Also, since I hadn’t done this in a while, a lot of my loci and memory palaces were forgotten, so I had to speed-review them before the competition. The words event doesn’t get easier over time — it’s just 300 random words (which could range from “disappointment” to “chair”) and you just have to remember the order.
Q: What is your approach to improving memory?
A: The whole idea is that we memorize images, feelings, and emotions much better than numbers or random words. The way it works in practice is we make an ordered set of locations in a “memory palace.” The palace could be anything. It could be a campus or a classroom or a part of a room, but you imagine yourself walking through this space, so there’s a specific order to it, and in every location I place certain information. This is information related to what I’m trying to remember. I have pictures I associate with words and I have specific images I correlate with numbers. Once you have a correlated image system, all you need to remember is a story, and then when you recall, you translate that back to the original information.
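As a concrete illustration of the loci-and-images idea, here is a minimal Python sketch. It is purely illustrative: the loci, the word-to-image lookup, and the function names are invented for this example and are not Wang's actual system.

```python
# Minimal illustration of a memory palace: ordered loci plus word-image
# associations. All names and data here are invented for this sketch.

LOCI = ["front door", "hallway mirror", "kitchen table", "staircase", "bedroom window"]

# A competitor pre-builds associations like these over years of practice;
# the entries below are hypothetical.
IMAGES = {
    "disappointment": "a slowly deflating balloon",
    "chair": "a throne carved from ice",
}

def encode(words):
    """Place one vivid image per word at each successive location."""
    story = []
    for locus, word in zip(LOCI, words):
        image = IMAGES.get(word, f"a giant glowing {word}")
        story.append(f"At the {locus}, I see {image}.")
    return story

# Recall means mentally walking the same route in order and translating
# each image back into its word.
for scene in encode(["disappointment", "chair", "river"]):
    print(scene)
```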
Doing memory sports really helps you with visualization, and being able to visualize things faster and better helps you remember things better. You start remembering with spaced repetition that you can talk yourself through. Allowing things to have an emotional connection is also important, because you remember emotions better. Doing memory competitions made me want to study neuroscience and computer science at MIT.
The specific memory sports techniques are not as useful in everyday life as you’d think, because a lot of the information we learn is more operative and requires intuitive understanding, but I do think they help in some ways. First, sometimes you have to initially remember things before you can develop a strong intuition later. Also, since I have to get really good at telling a lot of stories over time, I have gotten great at visualization and manipulating objects in my mind, which helps a lot.
When a cell protector collaborates with a killer
New research reveals what it takes for a protein that is best known for protecting cells against death to take on the opposite role.
From early development to old age, cell death is a part of life. Without enough of a critical type of cell death known as apoptosis, animals wind up with too many cells, which can set the stage for cancer or autoimmune disease. But careful control is essential, because when apoptosis eliminates the wrong cells, the effects can be just as dire, helping to drive many kinds of neurodegenerative disease.
By studying the microscopic roundworm Caenorhabditis elegans — which was honored with its fourth Nobel Prize last month — scientists at MIT’s McGovern Institute for Brain Research have begun to unravel a longstanding mystery about the factors that control apoptosis: how a protein capable of preventing programmed cell death can also promote it. Their study, led by Robert Horvitz, the David H. Koch Professor of Biology at MIT, and reported Oct. 9 in the journal Science Advances, sheds light on the process of cell death in both health and disease.
“These findings, by graduate student Nolan Tucker and former graduate student, now MIT faculty colleague, Peter Reddien, have revealed that a protein interaction long thought to block apoptosis in C. elegans likely instead has the opposite effect,” says Horvitz, who is also an investigator at the Howard Hughes Medical Institute and the McGovern Institute. Horvitz shared the 2002 Nobel Prize in Physiology or Medicine for discovering and characterizing the genes controlling cell death in C. elegans.
Mechanisms of cell death
Horvitz, Tucker, Reddien, and colleagues have provided foundational insights in the field of apoptosis by using C. elegans to analyze the mechanisms that drive apoptosis, as well as the mechanisms that determine how cells ensure apoptosis happens when and where it should. Unlike humans and other mammals, which depend on dozens of proteins to control apoptosis, these worms use just a few. And when things go awry, it’s easy to tell: When there’s not enough apoptosis, researchers can see that there are too many cells inside the worms’ translucent bodies. And when there’s too much, the worms lack certain biological functions, cannot reproduce or, in the most extreme cases, die during embryonic development.
Work in the Horvitz lab defined the roles of many of the genes and proteins that control apoptosis in worms. These regulators proved to have counterparts in human cells, and for that reason studies of worms have helped reveal how human cells govern cell death and pointed toward potential targets for treating disease.
A protein’s dual role
Three of C. elegans’ primary regulators of apoptosis actively promote cell death, whereas just one, CED-9, reins in the apoptosis-promoting proteins to keep cells alive. As early as the 1990s, however, Horvitz and colleagues recognized that CED-9 was not exclusively a protector of cells. Their experiments indicated that the protector protein also plays a role in promoting cell death. But while researchers thought they knew how CED-9 protected against apoptosis, its pro-apoptotic role was more puzzling.
CED-9’s dual role means that mutations in the gene that encodes it can impact apoptosis in multiple ways. Most ced-9 mutations interfere with the protein’s ability to protect against cell death and result in excess cell death. Conversely, mutations that abnormally activate ced-9 cause too little cell death, just like mutations that inactivate any of the three killer genes.
An atypical ced-9 mutation, identified by Reddien when he was a PhD student in Horvitz’s lab, hinted at how CED-9 promotes cell death. That mutation altered the part of the CED-9 protein that interacts with the protein CED-4, which is pro-apoptotic. Since the mutation specifically leads to a reduction in apoptosis, this suggested that CED-9 might need to interact with CED-4 to promote cell death.
The idea was particularly intriguing because researchers had long thought that CED-9’s interaction with CED-4 had exactly the opposite effect: In the canonical model, CED-9 anchors CED-4 to cells’ mitochondria, sequestering the CED-4 killer protein and preventing it from associating with and activating another key killer, the CED-3 protein — thereby preventing apoptosis.
To test the hypothesis that CED-9’s interactions with the killer CED-4 protein enhance apoptosis, the team needed more evidence. So graduate student Nolan Tucker used CRISPR gene editing tools to create more worms with mutations in CED-9, each one targeting a different spot in the CED-4-binding region. Then he examined the worms. “What I saw with this particular class of mutations was extra cells and viability,” he says — clear signs that the altered CED-9 was still protecting against cell death, but could no longer promote it. “Those observations strongly supported the hypothesis that the ability to bind CED-4 is needed for the pro-apoptotic function of CED-9,” Tucker explains. Their observations also suggested that, contrary to earlier thinking, CED-9 doesn’t need to bind with CED-4 to protect against apoptosis.
When he looked inside the cells of the mutant worms, Tucker found additional evidence that these mutations prevented CED-9’s ability to interact with CED-4. When both CED-9 and CED-4 are intact, CED-4 appears associated with cells’ mitochondria. But in the presence of these mutations, CED-4 was instead at the edge of the cell nucleus. CED-9’s ability to bind CED-4 to mitochondria appeared to be necessary to promote apoptosis, not to protect against it.
Looking ahead
While the team’s findings begin to explain a long-unanswered question about one of the primary regulators of apoptosis, they raise new ones, as well. “I think that this main pathway of apoptosis has been seen by a lot of people as more-or-less settled science. Our findings should change that view,” Tucker says.
The researchers see important parallels between their findings from this study of worms and what’s known about cell death pathways in mammals. The mammalian counterpart to CED-9 is a protein called BCL-2, mutations in which can lead to cancer. BCL-2, like CED-9, can both promote and protect against apoptosis. As with CED-9, the pro-apoptotic function of BCL-2 has been mysterious. In mammals, too, mitochondria play a key role in activating apoptosis. The Horvitz lab’s discovery opens opportunities to better understand how apoptosis is regulated not only in worms but also in humans, and how dysregulation of apoptosis in humans can lead to such disorders as cancer, autoimmune disease, and neurodegeneration.
MIT physicists predict exotic form of matter with potential for quantum computing
New work suggests the ability to create fractionalized electrons known as non-Abelian anyons without a magnetic field, opening new possibilities for basic research and future applications.
MIT physicists have shown that it should be possible to create an exotic form of matter that could be manipulated to form the qubit (quantum bit) building blocks of future quantum computers that are even more powerful than the quantum computers in development today.
The work builds on a discovery last year of materials that host electrons that can split into fractions of themselves but, importantly, can do so without the application of a magnetic field.
The general phenomenon of electron fractionalization was first discovered in 1982 and resulted in a Nobel Prize. That work, however, required the application of a magnetic field. The ability to create the fractionalized electrons without a magnetic field opens new possibilities for basic research and makes the materials hosting them more useful for applications.
When electrons split into fractions of themselves, those fractions are known as anyons. Anyons come in a variety of flavors, or classes. The anyons discovered in the 2023 materials are known as Abelian anyons. Now, in a paper published in the Oct. 17 issue of Physical Review Letters, the MIT team notes that it should be possible to create the most exotic class of anyons, non-Abelian anyons.
“Non-Abelian anyons have the bewildering capacity of ‘remembering’ their spacetime trajectories; this memory effect can be useful for quantum computing,” says Liang Fu, a professor in MIT’s Department of Physics and leader of the work.
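The “memory” Fu describes can be stated compactly in standard notation. The following is textbook background on anyon exchange, not notation taken from the team’s paper:

```latex
% Textbook background (not from the paper): exchanging two Abelian anyons
% multiplies the wavefunction by a phase factor,
\psi \;\longrightarrow\; e^{i\theta}\,\psi,
% whereas exchanging non-Abelian anyons acts as a unitary matrix on a
% degenerate set of states,
|\Psi\rangle \;\longrightarrow\; U\,|\Psi\rangle,
\qquad U_{1}U_{2} \neq U_{2}U_{1} \ \text{in general},
% so the final state depends on the order of exchanges -- the trajectory
% "memory" that braiding-based quantum computation would exploit.
```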
Fu further notes that “the 2023 experiments on electron fractionalization greatly exceeded theoretical expectations. My takeaway is that we theorists should be bolder.”
Fu is also affiliated with the MIT Materials Research Laboratory. His colleagues on the current work are graduate students Aidan P. Reddy and Nisarga Paul, and postdoc Ahmed Abouelkomsan, all of the MIT Department of Physics. Reddy and Paul are co-first authors of the Physical Review Letters paper.
The MIT work and two related studies were also featured in an Oct. 17 story in Physics Magazine. “If this prediction is confirmed experimentally, it could lead to more reliable quantum computers that can execute a wider range of tasks … Theorists have already devised ways to harness non-Abelian states as workable qubits and manipulate the excitations of these states to enable robust quantum computation,” writes Ryan Wilkinson.
The current work was guided by recent advances in 2D materials, or those consisting of only one or a few layers of atoms. “The whole world of two-dimensional materials is very interesting because you can stack them and twist them, and sort of play Legos with them to get all sorts of cool sandwich structures with unusual properties,” says Paul. Those sandwich structures, in turn, are called moiré materials.
Anyons can only form in two-dimensional materials. Could they form in moiré materials? The 2023 experiments were the first to show that they can. Soon afterwards, a group led by Long Ju, an MIT assistant professor of physics, reported evidence of anyons in another moiré material. (Fu and Reddy were also involved in the Ju work.)
In the current work, the physicists showed that it should be possible to create non-Abelian anyons in a moiré material composed of atomically thin layers of molybdenum ditelluride. Says Paul, “Moiré materials have already revealed fascinating phases of matter in recent years, and our work shows that non-Abelian phases could be added to the list.”
Adds Reddy, “Our work shows that when electrons are added at a density of 3/2 or 5/2 per unit cell, they can organize into an intriguing quantum state that hosts non-Abelian anyons.”
The work was exciting, says Reddy, in part because “oftentimes there’s subtlety in interpreting your results and what they are actually telling you. So it was fun to think through our arguments” in support of non-Abelian anyons.
Says Paul, “This project ranged from really concrete numerical calculations to pretty abstract theory and connected the two. I learned a lot from my collaborators about some very interesting topics.”
This work was supported by the U.S. Air Force Office of Scientific Research. The authors also acknowledge the MIT SuperCloud and Lincoln Laboratory Supercomputing Center, the Kavli Institute for Theoretical Physics, the Knut and Alice Wallenberg Foundation, and the Simons Foundation.
How can electrons split into fractions of themselves?
Physicists surprised to discover electrons in pentalayer graphene can exhibit fractional charge. New study suggests how this could work.
MIT physicists have taken a key step toward solving the puzzle of what leads electrons to split into fractions of themselves. Their solution sheds light on the conditions that give rise to exotic electronic states in graphene and other two-dimensional systems.
The new work is an effort to make sense of a discovery that was reported earlier this year by a different group of physicists at MIT, led by Assistant Professor Long Ju. Ju’s team found that electrons appear to exhibit “fractional charge” in pentalayer graphene — a configuration of five graphene layers that are stacked atop a similarly structured sheet of boron nitride.
Ju discovered that when he sent an electric current through the pentalayer structure, the electrons seemed to pass through as fractions of their total charge, even in the absence of a magnetic field. Scientists had already shown that electrons can split into fractions under a very strong magnetic field, in what is known as the fractional quantum Hall effect. Ju’s work was the first to find that this effect was possible in graphene without a magnetic field — which until recently was not expected to exhibit such an effect.
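For reference, the standard signature of this effect is a Hall conductance quantized at fractional values. This is general background, not a result from Ju’s paper:

```latex
% General background, not specific to Ju's measurement: in the fractional
% quantum Hall effect the Hall conductance is quantized at fractional
% multiples of the conductance quantum e^2/h,
\sigma_{xy} \;=\; \nu\,\frac{e^{2}}{h},
\qquad \nu = \tfrac{1}{3},\ \tfrac{2}{5},\ \dots
% In the "anomalous" variant seen in pentalayer graphene, the same
% fractional quantization appears with no applied magnetic field.
```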
The phenomenon was dubbed the “fractional quantum anomalous Hall effect,” and theorists have been keen to find an explanation for how fractional charge can emerge from pentalayer graphene.
The new study, led by MIT professor of physics Senthil Todadri, provides a crucial piece of the answer. Through calculations of quantum mechanical interactions, he and his colleagues show that the electrons form a sort of crystal structure, the properties of which are ideal for fractions of electrons to emerge.
“This is a completely new mechanism, meaning in the decades-long history, people have never had a system go toward these kinds of fractional electron phenomena,” Todadri says. “It’s really exciting because it makes possible all kinds of new experiments that previously one could only dream about.”
The team’s study appeared last week in the journal Physical Review Letters. Two other research teams — one from Johns Hopkins University, and the other from Harvard University, the University of California at Berkeley, and Lawrence Berkeley National Laboratory — have each published similar results in the same issue. The MIT team includes Zhihuan Dong PhD ’24 and former postdoc Adarsh Patri.
“Fractional phenomena”
In 2018, MIT professor of physics Pablo Jarillo-Herrero and his colleagues were the first to observe that new electronic behavior could emerge from stacking and twisting two sheets of graphene. Each layer of graphene is as thin as a single atom and structured in a chicken-wire lattice of hexagonal carbon atoms. By stacking two sheets at a very specific angle to each other, he found that the resulting interference, or moiré pattern, induced unexpected phenomena such as both superconducting and insulating properties in the same material. This “magic-angle graphene,” as it soon came to be known, ignited a new field known as twistronics, the study of electronic behavior in twisted, two-dimensional materials.
“Shortly after his experiments, we realized these moiré systems would be ideal platforms in general to find the kinds of conditions that enable these fractional electron phases to emerge,” says Todadri, who collaborated with Jarillo-Herrero on a study that same year to show that, in theory, such twisted systems could exhibit fractional charge without a magnetic field. “We were advocating these as the best systems to look for these kinds of fractional phenomena,” he says.
Then, in September of 2023, Todadri hopped on a Zoom call with Ju, who was familiar with Todadri’s theoretical work and had kept in touch with him through Ju’s own experimental work.
“He called me on a Saturday and showed me the data in which he saw these [electron] fractions in pentalayer graphene,” Todadri recalls. “And that was a big surprise because it didn’t play out the way we thought.”
In his 2018 paper, Todadri predicted that fractional charge should emerge from a precursor phase characterized by a particular winding of the electron wavefunction. Broadly speaking, he theorized that an electron’s quantum properties should have a certain winding, or degree to which it can be manipulated without changing its inherent structure. This winding, he predicted, should increase with the number of graphene layers added to a given moiré structure.
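The winding described here is conventionally quantified by a topological invariant, the Chern number. The following standard definition is offered for orientation and is not drawn from the paper itself:

```latex
% Standard definition (background only): the Chern number C counts how many
% times a band's wavefunction winds over the Brillouin zone, as the integral
% of its Berry curvature \Omega(\mathbf{k}):
C \;=\; \frac{1}{2\pi}\int_{\mathrm{BZ}} \Omega(\mathbf{k})\, d^{2}k \;\in\; \mathbb{Z}.
% On this reading of the passage below, the prediction was C = 5 for
% pentalayer graphene, while Ju's experiment pointed to C = 1.
```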
“For pentalayer graphene, we thought the wavefunction would wind around five times, and that would be a precursor for electron fractions,” Todadri says. “But he did his experiments and discovered that it does wind around, but only once. That then raised this big question: How should we think about whatever we are seeing?”
Extraordinary crystal
In the team’s new study, Todadri went back to work out how electron fractions could emerge from pentalayer graphene if not through the path he initially predicted. The physicists revisited their original hypothesis and realized they had missed a key ingredient.
“The standard strategy in the field when figuring out what’s happening in any electronic system is to treat electrons as independent actors, and from that, figure out their topology, or winding,” Todadri explains. “But from Long’s experiments, we knew this approximation must be incorrect.”
While in most materials, electrons have plenty of space to repel each other and zing about as independent agents, the particles are much more confined in two-dimensional structures such as pentalayer graphene. In such tight quarters, the team realized that electrons should also be forced to interact, behaving according to their quantum correlations in addition to their natural repulsion. When the physicists added interelectron interactions to their theory, they found it correctly predicted the winding that Ju observed for pentalayer graphene.
Once they had a theoretical prediction that matched with observations, the team could work from this prediction to identify a mechanism by which pentalayer graphene gave rise to fractional charge.
They found that the moiré arrangement of pentalayer graphene, in which each lattice-like layer of carbon atoms is arranged atop the other and on top of the boron nitride, induces a weak electrical potential. When electrons pass through this potential, they form a sort of crystal, or periodic formation, that confines the electrons and forces them to interact through their quantum correlations. This electron tug-of-war creates a sort of cloud of possible physical states for each electron, which interacts with every other electron cloud in the crystal. Together, those interacting clouds form a wavefunction, or pattern of quantum correlations, with the winding that should set the stage for electrons to split into fractions of themselves.
“This crystal has a whole set of unusual properties that are different from ordinary crystals, and leads to many fascinating questions for future research,” Todadri says. “For the short term, this mechanism provides the theoretical foundation for understanding the observations of fractions of electrons in pentalayer graphene and for predicting other systems with similar physics.”
This work was supported, in part, by the National Science Foundation and the Simons Foundation.
Four from MIT named 2025 Rhodes Scholars
Yiming Chen ’24, Wilhem Hector, Anushka Nair, and David Oluigbo will start postgraduate studies at Oxford next fall.
Yiming Chen ’24, Wilhem Hector, Anushka Nair, and David Oluigbo have been selected as 2025 Rhodes Scholars and will begin fully funded postgraduate studies at Oxford University in the U.K. next fall. In addition to MIT’s two U.S. Rhodes winners, Oluigbo and Nair, two affiliates were awarded international Rhodes Scholarships: Chen for Rhodes’ China constituency and Hector for the Global Rhodes Scholarship. Hector is the first Haitian citizen to be named a Rhodes Scholar.
The scholars were supported by Associate Dean Kim Benard and the Distinguished Fellowships team in Career Advising and Professional Development. They received additional mentorship and guidance from the Presidential Committee on Distinguished Fellowships.
“It is profoundly inspiring to work with our amazing students, who have accomplished so much at MIT and, at the same time, thought deeply about how they can have an impact in solving the world's major challenges,” says Professor Nancy Kanwisher, who co-chairs the committee along with Professor Tom Levenson. “These students have worked hard to develop and articulate their vision and to learn to communicate it to others with passion, clarity, and confidence. We are thrilled but not surprised to see so many of them recognized this year as finalists and as winners.”
Yiming Chen ’24
Yiming Chen, from Beijing, China, and the Washington area, was named one of four Rhodes China Scholars on Sept. 28. At Oxford, she will pursue graduate studies in engineering science, working toward her ongoing goal of advancing AI safety and reliability in clinical workflows.
Chen graduated from MIT in 2024 with a BS in mathematics and computer science and an MEng in computer science. She worked on several projects involving machine learning for health care, and focused her master’s research on medical imaging in the Medical Vision Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL).
Collaborating with IBM Research, Chen developed a neural framework for clinical-grade lumen segmentation in intravascular ultrasound and presented her findings at the MICCAI Machine Learning in Medical Imaging conference. Additionally, she worked at Cleanlab, an MIT-founded startup, creating an open-source library to ensure the integrity of image datasets used in vision tasks.
Chen was a teaching assistant in the MIT math and electrical engineering and computer science departments, and received a teaching excellence award. She taught high school students at the Hampshire College Summer Studies in Math and was selected to participate in MISTI Global Teaching Labs in Italy.
Having studied the guzheng, a traditional Chinese instrument, since age 4, Chen served as president of the MIT Chinese Music Ensemble, explored Eastern and Western music synergies with the MIT Chamber Music Society, and performed at the United Nations. On campus, she was also active with the Asymptones a cappella group, the MIT Ring Committee, Ribotones, the Figure Skating Club, and the Undergraduate Association Innovation Committee.
Wilhem Hector
Wilhem Hector, a senior from Port-au-Prince, Haiti, majoring in mechanical engineering, was awarded a Global Rhodes Scholarship on Nov. 1. The first Haitian national to be named a Rhodes Scholar, Hector will pursue a master’s in energy systems at Oxford, followed by a master’s in education, focusing on digital and social change. His long-term goals are twofold: pioneering Haiti’s renewable energy infrastructure and expanding hands-on opportunities in the country’s national curriculum.
Hector developed his passion for energy through his research in the MIT Howland Lab, where he investigated the uncertainty of wind power production during active yaw control. He also helped launch the MIT Renewable Energy Clinic through his work on the sources of opposition to energy projects in the U.S. Beyond his research, Hector made notable contributions as an intern at Radia Inc. and DTU Wind Energy Systems, where he helped develop computational wind farm modeling and simulation techniques.
Outside of MIT, he leads the Hector Foundation, a nonprofit providing educational opportunities to young people in Haiti. He has raised over $80,000 in the past five years to finance its initiatives, including the construction of Project Manus, Haiti’s first open-use engineering makerspace. Hector’s service endeavors have been supported by the MIT PKG Center, which awarded him the Davis Peace Prize, the PKG Fellowship for Social Impact, and the PKG Award for Public Service.
Hector co-chairs both the Student Events Board and the Class of 2025 Senior Ball Committee and has served as the social chair for Chocolate City and the African Students Association.
Anushka Nair
Anushka Nair, from Portland, Oregon, will graduate next spring with BS and MEng degrees in computer science and engineering with concentrations in economics and AI. She plans to pursue a DPhil in social data science at the Oxford Internet Institute. Nair aims to develop ethical AI technologies that address pressing societal challenges, beginning with combating misinformation.
For her master’s thesis under Professor David Rand, Nair is developing LLM-powered fact-checking tools to detect nuanced misinformation beyond human or automated capabilities. She also researches human-AI co-reasoning at the MIT Center for Collective Intelligence with Professor Thomas Malone. Previously, she conducted research on autonomous vehicle navigation at Stanford’s AI and Robotics Lab, energy microgrid load balancing at MIT’s Institute for Data, Systems, and Society, and worked with Professor Esther Duflo in economics.
Nair interned in the Executive Office of the Secretary General at the United Nations, where she integrated technology solutions and assisted with launching the High-Level Advisory Body on AI. She also interned in Tesla’s energy sector, contributing to Autobidder, an energy trading tool, and led the launch of a platform for monitoring distributed energy resources and renewable power plants. Her work has earned her recognition as a Social and Ethical Responsibilities of Computing Scholar and a U.S. Presidential Scholar.
Nair has served as president of the MIT Society of Women Engineers and of MIT and Harvard Women in AI, spearheading outreach programs to mentor young women in STEM fields. She also served as president of the MIT honor societies Eta Kappa Nu and Tau Beta Pi.
David Oluigbo
David Oluigbo, from Washington, is a senior majoring in artificial intelligence and decision making and minoring in brain and cognitive sciences. At Oxford, he will undertake an MS in applied digital health followed by an MS in modeling for global health. Afterward, Oluigbo plans to attend medical school with the goal of becoming a physician-scientist who researches and applies AI to address medical challenges in low-income countries.
Since his first year at MIT, Oluigbo has conducted neural and brain research with Ev Fedorenko at the McGovern Institute for Brain Research and with Susanna Mierau’s Synapse and Network Development Group at Brigham and Women’s Hospital. His work with Mierau led to several publications and a poster presentation at the Federation of European Neuroscience Societies annual meeting.
In a summer internship at the National Institutes of Health Clinical Center, Oluigbo designed and trained machine-learning models on CT scans for automatic detection of neuroendocrine tumors, leading to first authorship on an International Society for Optics and Photonics conference proceeding paper, which he presented at the 2024 annual meeting. Oluigbo also did a summer internship with the Anyscale Learning for All Laboratory at the MIT Computer Science and Artificial Intelligence Laboratory.
Oluigbo is an EMT and systems administrator officer with MIT-EMS. He is a consultant for Code for Good, a representative on the MIT Schwarzman College of Computing Undergraduate Advisory Group, and holds executive roles with the Undergraduate Association, the MIT Brain and Cognitive Society, and the MIT Running Club.
Neuroscientists create a comprehensive map of the cerebral cortex
Using fMRI, the research team identified 24 networks that perform specific functions within the brain’s cerebral cortex.
By analyzing brain scans taken as people watched movie clips, MIT researchers have created the most comprehensive map yet of the functions of the brain’s cerebral cortex.
Using functional magnetic resonance imaging (fMRI) data, the research team identified 24 networks with different functions, which include processing language, social interactions, visual features, and other types of sensory input.
Many of these networks have been seen before but haven’t been precisely characterized using naturalistic conditions. While the new study mapped networks in subjects watching engaging movies, previous works have used a small number of specific tasks or examined correlations across the brain in subjects who were simply resting.
“There’s an emerging approach in neuroscience to look at brain networks under more naturalistic conditions. This is a new approach that reveals something different from conventional approaches in neuroimaging,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research. “It’s not going to give us all the answers, but it generates a lot of interesting ideas based on what we see going on in the movies that's related to these network maps that emerge.”
The researchers hope that their new map will serve as a starting point for further study of what each of these networks is doing in the brain.
Desimone and John Duncan, a program leader in the MRC Cognition and Brain Sciences Unit at Cambridge University, are the senior authors of the study, which appears today in Neuron. Reza Rajimehr, a research scientist in the McGovern Institute and a former graduate student at Cambridge University, is the lead author of the paper.
Precise mapping
The cerebral cortex of the brain contains regions devoted to processing different types of sensory information, including visual and auditory input. Over the past few decades, scientists have identified many networks that are involved in this kind of processing, often using fMRI to measure brain activity as subjects perform a single task such as looking at faces.
In other studies, researchers have scanned people’s brains as they do nothing, or let their minds wander. From those studies, researchers have identified networks such as the default mode network, a network of areas that is active during internally focused activities such as daydreaming.
“Up to now, most studies of networks were based on doing functional MRI in the resting-state condition. Based on those studies, we know some main networks in the cortex. Each of them is responsible for a specific cognitive function, and they have been highly influential in the neuroimaging field,” Rajimehr says.
However, during the resting state, many parts of the cortex may not be active at all. To gain a more comprehensive picture of what all these regions are doing, the MIT team analyzed data recorded while subjects performed a more natural task: watching a movie.
“By using a rich stimulus like a movie, we can drive many regions of the cortex very efficiently. For example, sensory regions will be active to process different features of the movie, and high-level areas will be active to extract semantic information and contextual information,” Rajimehr says. “By activating the brain in this way, now we can distinguish different areas or different networks based on their activation patterns.”
The data for this study was generated as part of the Human Connectome Project. Using a 7-Tesla MRI scanner, which offers higher resolution than a typical MRI scanner, researchers imaged brain activity in 176 people as they watched one hour of movie clips showing a variety of scenes.
The MIT team used a machine-learning algorithm to analyze the activity patterns of each brain region, allowing them to identify 24 networks with different activity patterns and functions.
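The article does not specify which algorithm the team used, so the sketch below illustrates the general approach with an assumed method: k-means clustering of normalized region time courses into 24 groups. The data shapes and parameters are placeholders, not values from the study.

```python
# Illustrative sketch: grouping cortical regions into networks by the
# similarity of their activity over time. The study's actual algorithm is
# not specified in this article; k-means and all shapes/parameters below
# are assumptions for demonstration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for fMRI data: 1,000 cortical regions by 3,600 time points,
# e.g., one hour of movie watching sampled every second.
activity = rng.standard_normal((1000, 3600))

# Z-score each region's time course so clustering reflects the shape of
# the response rather than its overall amplitude.
activity -= activity.mean(axis=1, keepdims=True)
activity /= activity.std(axis=1, keepdims=True)

# Partition regions into 24 networks, matching the number reported above.
kmeans = KMeans(n_clusters=24, n_init=10, random_state=0).fit(activity)
network_labels = kmeans.labels_   # one network assignment per region

print(np.bincount(network_labels))  # number of regions in each network
```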
Some of these networks are located in sensory areas such as the visual cortex or auditory cortex, as expected for regions with specific sensory functions. Other areas respond to features such as actions, language, or social interactions. Many of these networks have been seen before, but this technique offers more precise definition of where the networks are located, the researchers say.
“Different regions are competing with each other for processing specific features, so when you map each function in isolation, you may get a slightly larger network because it is not getting constrained by other processes,” Rajimehr says. “But here, because all the areas are considered together, we are able to define more precise boundaries between different networks.”
The researchers also identified networks that hadn’t been seen before, including one in the prefrontal cortex, which appears to be highly responsive to visual scenes. This network was most active in response to pictures of scenes within the movie frames.
Executive control networks
Three of the networks found in this study are involved in “executive control,” and were most active during transitions between different clips. The researchers also observed that these control networks appear to have a “push-pull” relationship with networks that process specific features such as faces or actions. When networks specific to a particular feature were very active, the executive control networks were mostly quiet, and vice versa.
“Whenever the activations in domain-specific areas are high, it looks like there is no need for the engagement of these high-level networks,” Rajimehr says. “But in situations where perhaps there is some ambiguity and complexity in the stimulus, and there is a need for the involvement of the executive control networks, then we see that these networks become highly active.”
Using a movie-watching paradigm, the researchers are now studying some of the networks they identified in more detail, to identify subregions involved in particular tasks. For example, within the social processing network, they have found regions that are specific to processing social information about faces and bodies. In a new network that analyzes visual scenes, they have identified regions involved in processing memory of places.
“This kind of experiment is really about generating hypotheses for how the cerebral cortex is functionally organized. Networks that emerge during movie watching now need to be followed up with more specific experiments to test the hypotheses. It’s giving us a new view into the operation of the entire cortex during a more naturalistic task than just sitting at rest,” Desimone says.
The research was funded by the McGovern Institute, the Cognitive Science and Technology Council of Iran, the MRC Cognition and Brain Sciences Unit at the University of Cambridge, and a Cambridge Trust scholarship.
Asteroid grains shed light on the outer solar system’s origins
A weak magnetic field likely pulled matter inward to form the outer planetary bodies, from Jupiter to Neptune.
Tiny grains from a distant asteroid are revealing clues to the magnetic forces that shaped the far reaches of the solar system over 4.6 billion years ago.
Scientists at MIT and elsewhere have analyzed particles of the asteroid Ryugu, which were collected by the Japanese Aerospace Exploration Agency’s (JAXA) Hayabusa2 mission and brought back to Earth in 2020. Scientists believe Ryugu formed on the outskirts of the early solar system before migrating in toward the asteroid belt, eventually settling into an orbit between Earth and Mars.
The team analyzed Ryugu’s particles for signs of any ancient magnetic field that might have been present when the asteroid first took shape. Their results suggest that if there was a magnetic field, it would have been very weak. At most, such a field would have been about 15 microtesla. (The Earth’s own magnetic field today is around 50 microtesla.)
Even so, the scientists estimate that such a low-grade field intensity would have been enough to pull together primordial gas and dust to form the outer solar system’s asteroids and potentially play a role in giant planet formation, from Jupiter to Neptune.
The team’s results, which are published today in the journal AGU Advances, show for the first time that the distal solar system likely harbored a weak magnetic field. Scientists have known that a magnetic field shaped the inner solar system, where Earth and the terrestrial planets were formed. But it was unclear whether such a magnetic influence extended into more remote regions, until now.
“We’re showing that, everywhere we look now, there was some sort of magnetic field that was responsible for bringing mass to where the sun and planets were forming,” says study author Benjamin Weiss, the Robert R. Shrock Professor of Earth and Planetary Sciences at MIT. “That now applies to the outer solar system planets.”
The study’s lead author is Elias Mansbach PhD ’24, who is now a postdoc at Cambridge University. MIT co-authors include Eduardo Lima, Saverio Cambioni, and Jodie Ream, along with Michael Sowell and Joseph Kirschvink of Caltech, Roger Fu of Harvard University, Xue-Ning Bai of Tsinghua University, Chisato Anai and Atsuko Kobayashi of the Kochi Advanced Marine Core Research Institute, and Hironori Hidaka of Tokyo Institute of Technology.
A far-off field
Around 4.6 billion years ago, the solar system formed from a dense cloud of interstellar gas and dust, which collapsed into a swirling disk of matter. Most of this material gravitated toward the center of the disk to form the sun. The remaining bits formed a solar nebula of swirling, ionized gas. Scientists suspect that interactions between the newly formed sun and the ionized disk generated a magnetic field that threaded through the nebula, helping to drive accretion and pull matter inward to form the planets, asteroids, and moons.
“This nebular field disappeared around 3 to 4 million years after the solar system’s formation, and we are fascinated with how it played a role in early planetary formation,” Mansbach says.
Scientists previously determined that a magnetic field was present throughout the inner solar system — a region that spanned from the sun to about 7 astronomical units (AU), out to where Jupiter is today. (One AU is the distance between the sun and the Earth.) The intensity of this inner nebular field was somewhere between 50 and 200 microtesla, and it likely influenced the formation of the inner terrestrial planets. Such estimates of the early magnetic field are based on meteorites that landed on Earth and are thought to have originated in the inner nebula.
“But how far this magnetic field extended, and what role it played in more distal regions, is still uncertain because there haven’t been many samples that could tell us about the outer solar system,” Mansbach says.
Rewinding the tape
The team got an opportunity to analyze samples from the outer solar system with Ryugu, an asteroid that is thought to have formed in the early outer solar system, beyond 7 AU, and was eventually brought into orbit near the Earth. In December 2020, JAXA’s Hayabusa2 mission returned samples of the asteroid to Earth, giving scientists a first look at a potential relic of the early distal solar system.
The researchers acquired several grains of the returned samples, each about a millimeter in size. They placed the particles in a magnetometer — an instrument in Weiss’ lab that measures the strength and direction of a sample’s magnetization. They then applied an alternating magnetic field to progressively demagnetize each sample.
“Like a tape recorder, we are slowly rewinding the sample’s magnetic record,” Mansbach explains. “We then look for consistent trends that tell us if it formed in a magnetic field.”
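A minimal sketch of what “looking for consistent trends” can mean in practice: if the sample recorded a stable ancient field, the magnetization removed at successive demagnetization steps tends to share a direction. The vectors and the consistency threshold below are invented for illustration; the team’s actual analysis is more involved.

```python
# Illustrative sketch (invented data): test whether the magnetization
# removed at successive alternating-field (AF) demagnetization steps
# shares a consistent direction, as expected for a sample that formed
# in a stable ambient field.
import numpy as np

# Hypothetical remanence vectors (arbitrary units) after each AF step,
# at peak fields of 0, 5, 10, 20, 40, and 80 millitesla.
moments = np.array([
    [1.00, 0.40, 0.20],
    [0.80, 0.33, 0.15],
    [0.61, 0.26, 0.12],
    [0.40, 0.17, 0.08],
    [0.22, 0.09, 0.05],
    [0.10, 0.04, 0.02],
])

# Direction of the component removed between consecutive steps.
removed = moments[:-1] - moments[1:]
directions = removed / np.linalg.norm(removed, axis=1, keepdims=True)

# Length of the mean unit vector: near 1.0 when the removed components are
# collinear (a consistent recorded direction), smaller when they scatter.
R = np.linalg.norm(directions.mean(axis=0))
print(f"directional consistency R = {R:.3f}")
print("consistent trend" if R > 0.95 else "no clear recorded field")
```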
They determined that the samples held no clear sign of a preserved magnetic field. This suggests that either there was no nebular field present in the outer solar system where the asteroid first formed, or the field was so weak that it was not recorded in the asteroid’s grains. If the latter is the case, the team estimates such a weak field would have been no more than 15 microtesla in intensity.
The researchers also reexamined data from previously studied meteorites. They specifically looked at “ungrouped carbonaceous chondrites” — meteorites that have properties that are characteristic of having formed in the distal solar system. Scientists had estimated the samples were not old enough to have formed before the solar nebula disappeared. Any magnetic field record the samples contain, then, would not reflect the nebular field. But Mansbach and his colleagues decided to take a closer look.
“We reanalyzed the ages of these samples and found they are closer to the start of the solar system than previously thought,” Mansbach says. “We think these samples formed in this distal, outer region. And one of these samples does actually have a positive field detection of about 5 microtesla, which is consistent with an upper limit of 15 microtesla.”
This updated sample, combined with the new Ryugu particles, suggests that the outer solar system, beyond 7 AU, hosted a very weak magnetic field that was nevertheless strong enough to pull matter in from the outskirts to eventually form the outer planetary bodies, from Jupiter to Neptune.
“When you’re further from the sun, a weak magnetic field goes a long way,” Weiss notes. “It was predicted that it doesn’t need to be that strong out there, and that’s what we’re seeing.”
The team plans to look for more evidence of distal nebular fields with samples from another far-off asteroid, Bennu, which were delivered to Earth in September 2023 by NASA’s OSIRIS-REx spacecraft.
“Bennu looks a lot like Ryugu, and we’re eagerly awaiting first results from those samples,” Mansbach says.
This research was supported, in part, by NASA.
A portable light system that can digitize everyday objects
A new design tool uses UV and RGB lights to change the color and textures of everyday objects. The system could enable surfaces to display dynamic patterns, such as health data and fashion designs.
When Nikola Tesla predicted we’d have handheld phones that could display videos, photographs, and more, his musings seemed like a distant dream. Nearly 100 years later, smartphones are like an extra appendage for many of us.
Digital fabrication engineers are now working toward expanding the display capabilities of other everyday objects. One avenue they’re exploring is reprogrammable surfaces — or items whose appearances we can digitally alter — to help users present important information, such as health statistics, as well as new designs on things like a wall, mug, or shoe.
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), the University of California at Berkeley, and Aarhus University have taken an intriguing step forward by fabricating “PortaChrome,” a portable light system and design tool that can change the color and textures of various objects. Equipped with ultraviolet (UV) and red, green, and blue (RGB) LEDs, the device can be attached to everyday objects like shirts and headphones. Once a user creates a design and sends it to a PortaChrome machine via Bluetooth, the surface can be programmed into multicolor displays of health data, entertainment, and fashion designs.
To make an item reprogrammable, the object must be coated with photochromic dye, an invisible ink that can be turned into different colors with light patterns. Once it’s coated, individuals can create and relay patterns to the item via the team’s graphic design software, or use the team’s API to interact with the device directly and embed data-driven designs. When attached to a surface, PortaChrome’s UV lights saturate the dye while the RGB LEDs desaturate it, activating the colors and ensuring each pixel is toned to match the intended design.
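A simplified sketch of the per-pixel exposure computation implied above. The complementary-color mapping (red light bleaching a cyan dye, and so on) follows the team's earlier Photo-Chromeleon approach; the exponential bleaching model, rate constant, and function names here are assumptions for illustration, not the team's published method.

```python
# Simplified model of the reprogramming step: UV saturates the coating,
# then each visible LED channel bleaches one dye channel down toward the
# target color. The exponential decay model and constants are assumptions.
import numpy as np

BLEACH_RATE = 0.5      # assumed desaturation rate (1/s) at full LED power
MIN_SATURATION = 0.02  # clip: fully bleaching a dye would take unbounded time

def exposure_times(target_rgb):
    """Per-pixel on-times (seconds) for the R, G, and B LEDs.

    Assumes a cyan/magenta/yellow photochromic mix in which red light
    bleaches cyan, green bleaches magenta, and blue bleaches yellow, and
    that dye saturation decays exponentially under illumination.
    """
    rgb = np.clip(np.asarray(target_rgb, dtype=float), 0.0, 1.0)
    # After the UV step every dye starts fully saturated (1.0); the target
    # saturation of each dye is the complement of the matching RGB channel.
    dye_target = np.clip(1.0 - rgb, MIN_SATURATION, 1.0)
    return -np.log(dye_target) / BLEACH_RATE

# Example: a white pixel (all dyes bleached) next to a pure red pixel
# (cyan bleached; magenta and yellow left saturated).
design = np.array([[[1.0, 1.0, 1.0], [1.0, 0.0, 0.0]]])
print(exposure_times(design).round(2))
```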
The team’s integrated light system changes objects’ colors in less than four minutes on average, which is eight times faster than their prior work, “Photo-Chromeleon.” This speed boost comes from switching to a light source that makes direct contact with the object to transmit UV and RGB rays. Photo-Chromeleon instead used a projector to activate the color-changing properties of photochromic dye, which delivered light to the object’s surface at a reduced intensity.
“PortaChrome provides a more convenient way to reprogram your surroundings,” says Yunyi Zhu ’20, MEng ’21, an MIT PhD student in electrical engineering and computer science, affiliate of CSAIL, and lead author on a paper about the work. “Compared with our projector-based system from before, PortaChrome is a more portable light source that can be placed directly on top of the photochromic surface. This allows the color change to happen without user intervention and helps us avoid contaminating our environment with UV. As a result, users can wear their heart rate chart on their shirt after a workout, for instance.”
Giving everyday objects a makeover
In demos, PortaChrome displayed health data on different surfaces. A user hiked with PortaChrome sewn onto their backpack, putting it into direct contact with the back of their shirt, which was coated in photochromic dye. Altitude and heart rate sensors sent data to the lighting device, which converted it into a chart through a reprogramming script developed by the researchers. This process created a health visualization on the back of the user’s shirt. In a similar demo, MIT researchers displayed a heart gradually coming together on the back of a tablet to show how a user was progressing toward a fitness goal.
PortaChrome also showed a flair for customizing wearables. For example, the researchers redesigned some white headphones with sideways blue lines and horizontal yellow and purple stripes. The photochromic dye was coated on the headphones and the team then attached the PortaChrome device to the inside of the headphone case. Finally, the researchers successfully reprogrammed their patterns onto the object, which resembled watercolor art. Researchers also recolored a wrist splint to match different clothes using this process.
Eventually, the work could be used to digitize consumers’ belongings. Imagine putting on a cloak that can change your entire shirt design, or using your car cover to give your vehicle a new look.
PortaChrome’s main ingredients
On the hardware end, PortaChrome is a combination of four main ingredients. The portable device consists of a textile base as a sort of backbone, a textile layer with the UV LEDs soldered on, another with the RGB LEDs attached, and a silicone diffusion layer to top it off. Resembling a translucent honeycomb, the silicone layer covers the interlaced UV and RGB LEDs and directs them toward individual pixels to properly illuminate a design over a surface.
This device can be flexibly wrapped around objects with different shapes. For tables and other flat surfaces, you could place PortaChrome on top, like a placemat. For a curved item like a thermos, you could wrap the light source around like a coffee cup sleeve to ensure it reprograms the entire surface.
The portable, flexible light system is crafted with maker space-available tools (like laser cutters, for example), and the same method can be replicated with flexible PCB materials and other mass manufacturing systems.
While PortaChrome can already convert surroundings into dynamic displays within minutes, Zhu and her colleagues believe it could benefit from further speed boosts. They'd like to use smaller LEDs, with the likely result being a surface that could be reprogrammed in seconds with a higher-resolution design, thanks to increased light intensity.
“The surfaces of our everyday things are encoded with colors and visual textures, delivering crucial information and shaping how we interact with them,” says Georgia Tech postdoc Tingyu Cheng, who was not involved with the research. “PortaChrome is taking a leap forward by providing reprogrammable surfaces with the integration of flexible light sources (UV and RGB LEDs) and photochromic pigments into everyday objects, pixelating the environment with dynamic color and patterns. The capabilities demonstrated by PortaChrome could revolutionize the way we interact with our surroundings, particularly in domains like personalized fashion and adaptive user interfaces. This technology enables real-time customization that seamlessly integrates into daily life, offering a glimpse into the future of ‘ubiquitous displays.’”
Zhu is joined by nine CSAIL affiliates on the paper: MIT PhD student and MIT Media Lab affiliate Cedric Honnet; former visiting undergraduate researchers Yixiao Kang, Angelina J. Zheng, and Grace Tang; MIT undergraduate student Luca Musk; University of Michigan Assistant Professor Junyi Zhu SM ’19, PhD ’24; recent postdoc and Aarhus University assistant professor Michael Wessely; and senior author Stefanie Mueller, the TIBCO Career Development Associate Professor in the MIT departments of Electrical Engineering and Computer Science and Mechanical Engineering and leader of the HCI Engineering Group at CSAIL.
This work was supported by the MIT-GIST Joint Research Program and was presented at the ACM Symposium on User Interface Software and Technology in October.
A new MIT initiative aims to elevate human-centered research and teaching, and bring together scholars in the humanities, arts, and social sciences with their colleagues across the Institute.
The MIT Human Insight Collaborative (MITHIC) launched earlier this fall. A formal kickoff event for MITHIC was held on campus Monday, Oct. 28, before a full audience in MIT’s Huntington Hall (Room 10-250). The event featured a conversation with Min Jin Lee, acclaimed author of “Pachinko,” moderated by Linda Pizzuti Henry SM ’05, co-owner and CEO of Boston Globe Media.
Initiative leaders say MITHIC will foster creativity, inquiry, and understanding, amplifying the Institute’s impact on global challenges like climate change, AI, pandemics, poverty, democracy, and more.
President Sally Kornbluth says MITHIC is the first in a new model known as the MIT Collaboratives, designed among other things to foster and support new collaborations on compelling global problems. The next MIT Collaborative will focus on life sciences and health.
“The MIT Collaboratives will make it easier for our faculty to ‘go big’ — to pursue the most innovative ideas in their disciplines and build connections to other fields,” says Kornbluth.
“We created MITHIC with a particular focus on the human-centered fields, to help advance research with the potential for global impact. MITHIC also has another, more local aim: to support faculty in developing fresh approaches to teaching and research that will engage and inspire a new generation of students,” Kornbluth adds.
A transformative opportunity
MITHIC is co-chaired by Anantha Chandrakasan, chief innovation and strategy officer, dean of the School of Engineering, and the Vannevar Bush Professor of Electrical Engineering and Computer Science; and Agustin Rayo, Kenan Sahin Dean of the School of Humanities, Arts, and Social Sciences (SHASS).
“MITHIC is an incredibly exciting and meaningful initiative to me as it represents MIT at its core — bringing broad perspectives and human insights to solve some of the world’s most important problems,” says Chandrakasan. “It offers the opportunity to shape the future of research and education at MIT through advancing core scholarship in the individual humanities, arts, and social sciences disciplines, but also through cross-cutting problem formulation and problem-solving. I have no doubt MITHIC will inspire our community to think differently and work together in ways that will have a lasting impact on society.”
Rayo says true innovation must go beyond technology to encompass the full complexity of the human experience.
“At MIT, we aim to make the world a better place. But you can't make the world a better place unless you understand its full economic, political, social, ethical — human — dimensions,” Rayo says. “MITHIC can help ensure that MIT educates broad-minded students, who are ready for the multidimensional challenges of the future.”
Rayo sees MITHIC as a transformative opportunity for MIT.
“MIT needs an integrated approach, which combines STEM with the human-centered disciplines. MITHIC can help catalyze that integration,” he says.
Mark Gorenberg ’76, chair of the MIT Corporation, says MITHIC represents a commitment to collaboration, a spirit of curiosity, and the belief that uniting the humanities and sciences results in solutions that are not only innovative, but meaningful and lasting.
“MIT has long been a place where boundless ideas and entrepreneurial energy come together to meet the world’s toughest challenges,” Gorenberg says. “With MITHIC, we’re adding a powerful new layer to that mission — one that captures the richness of human experience and imagination.”
Support for MITHIC comes from all five MIT schools, the MIT Schwarzman College of Computing, and the Office of the Provost, along with philanthropic support.
Charlene Kabcenell ’79, a life member of the MIT Corporation, and Derry Kabcenell ’75 chose to support MITHIC financially.
“MIT produces world-class scientists and technologists, but expertise in the skills of these areas is not enough. We are excited that the collaborations catalyzed by this initiative will help our graduates to stay mindful of the impact of their work on people and society,” they say.
Ray Stata ’57, MIT Corporation life member emeritus, is also a benefactor of MITHIC.
“In industry, it is not just technical innovation and breakthroughs that win, but also culture, in the ways people collaborate and work together. These are skills and behaviors that can be learned through a deeper understanding of humanities and social sciences. This has always been an important part of MIT’s education and I am happy to see the renewed attention being given to this aspect of the learning experience,” he says.
“A potential game changer”
Keeril Makan, associate dean for strategic initiatives in SHASS and the Michael (1949) and Sonja Koerner Music Composition Professor, is the faculty lead for MITHIC.
“MITHIC is about incentivizing collaboration, not research in specific areas,” says Makan. “It’s a ground-up approach, where we support faculty based upon the research that is of interest to them, which they identify.”
MITHIC consists of three new funding opportunities for faculty, the largest of which is the SHASS+ Connectivity Fund. For all three funds, proposals can be for projects ready to begin, as well as planning grants in preparation for future proposals.
The SHASS+ Connectivity Fund will support research that bridges SHASS fields and other fields at MIT. Proposals require one project lead in SHASS and another whose primary appointment is outside of SHASS.
The SHASS+ Connectivity Fund is co-chaired by David Kaiser, the Germehausen Professor of the History of Science and professor of physics, and Maria Yang, deputy dean of engineering and Kendall Rohsenow Professor of Mechanical Engineering.
“MIT has set an ambitious agenda for itself focused on addressing extremely complex and challenging problems facing society today, such as climate change, and there is a critical role for technological solutions to address these problems,” Yang says. “However, the origin of these problems are in part due to humans, so humanistic considerations need to be part of the solution. Such problems cannot be conquered by technology alone.”
Yang says the goal of the SHASS+ Connectivity Fund is to enhance MIT’s research by building interdisciplinary teams that embed a human-centered focus.
“My hope is that these collaborations will build bridges between SHASS and the rest of MIT, and will lead to integrated research that is more powerful and meaningful together,” says Yang.
Proposals for the first round of projects are due Nov. 22, but MITHIC is already bringing MIT faculty together in hopes of sparking potential collaborations.
An information session and networking reception were held in September. MITHIC has also been hosting a series of “Meeting of the Minds” events, which Makan says have given faculty and teaching staff opportunities to make connections around a specific topic or area of interest with colleagues they haven’t previously worked with.
Recent Meeting of the Minds sessions have covered topics like cybersecurity, the social history of math, food security, and rebuilding Ukraine.
“Faculty are already educating each other about their disciplines,” says Makan. “What happens in SHASS has been opaque to faculty in the other schools, just as the research in the other schools has been opaque to the faculty in SHASS. We’ve seen progress with initiatives like the Social and Ethical Responsibilities of Computing (SERC), when it comes to computing. MITHIC will broaden that scope.”
The leadership of MITHIC is cross-disciplinary, with a steering committee of faculty representing all five schools and the MIT Schwarzman College of Computing.
Iain Cheeseman, the Herman and Margaret Sokol Professor of Biology, is a member of the MITHIC steering committee. He says that while he continues to be amazed and inspired by the diverse research and work from across MIT, there’s potential to go even further by working together and connecting across diverse perspectives, ideas, and approaches.
“The bold goal and mission of MITHIC, to connect the humanities at MIT to work being conducted across the other schools at MIT, feels like a potential game-changer,” he says. “I am really excited to see the unexpected new work and directions that come out of this initiative, including hopefully connections that persist and transform the work across MIT.”
Enhancing the arts and humanities
In addition to the SHASS+ Connectivity Fund, MITHIC has two funds aimed specifically at enhancing research and teaching within SHASS.
The Humanities Cultivation Fund will support projects from the humanities and arts in SHASS. It is co-chaired by Arthur Bahr, professor of literature, and Anne McCants, the Ann F. Friedlaender Professor of History and SHASS research chair.
“Humanistic scholarship and artistic creation have long been among MIT’s hidden gems. The Humanities Cultivation Fund offers an exciting new opportunity to not only allow such work to continue to flourish, but also to give it greater visibility across the MIT community and into the wider world of scholarship. The fund aspires to cultivate — that is, to seed and nurture — new ideas and modes of inquiry into the full spectrum of human culture and expression,” says McCants.
The SHASS Education Innovation Fund will support new educational approaches in SHASS fields. The fund is co-chaired by Eric Klopfer, professor of comparative media studies/writing, and Emily Richmond Pollock, associate professor of music and SHASS undergraduate education chair.
Pollock says the fund is a welcome chance to support colleagues who have a strong sense of where teaching in SHASS could go next.
“We are looking for efforts that address contemporary challenges of teaching and learning, with approaches that can be tested in a specific context and later applied across the school. The crucial role of SHASS in educating MIT students in all fields means that what we devise here in our curriculum can have huge benefits for the Institute as a whole.”
Makan says infusing MIT’s human-centered disciplines with support is an essential part of MITHIC.
“The stronger these units are, the more the human-centered disciplines permeate the student experience, ultimately helping to build a stronger, more inclusive MIT,” says Makan.
Bridging Talents and Opportunities Forum connects high school and college students with STEAM leaders and resources
Event at MIT featured an array of national and international speakers, including a Nobel laureate and leaders in industry and entertainment.
Bridging Talents and Opportunities (BTO) held its second annual forum at the Stratton Student Center at MIT Oct. 11-12. The two-day event gathered over 500 participants, including high school students and their families, undergraduate students, professors, and leaders across STEAM (science, technology, engineering, arts, and mathematics) fields.
The forum sought to empower talented students from across the United States and Latin America to dream big and pursue higher education, demonstrating that access to prestigious institutions like MIT is possible regardless of socioeconomic barriers. The event featured inspirational talks from world-renowned scientists, innovators, entrepreneurs, social leaders, and major figures in entertainment — from Nobel laureate Rigoberta Menchú Tum to musician and producer Emilio Estefan, and more.
“Our initiative is committed to building meaningful connections among talented young individuals, their families, foundations, and leaders in science, art, mathematics, and technology,” says Ronald Garcia Ruiz, the Thomas A. Frank Career Development Assistant Professor of Physics at MIT and an organizer of the forum. “Recognizing that talent is universal but opportunities are often confined to select sectors of society, we are dedicated to bridging this gap. BTO provides a platform for sharing inspiring stories and offering support to promising young talents, empowering them to seize the diverse opportunities that await them.”
During their talks and panel discussions, speakers shared their insight into topics such as access to STEAM education, overcoming challenges and socioeconomic barriers, and strategies for fostering inclusion in STEAM fields. Students also had the opportunity to network with industry leaders and professionals, building connections to foster future collaborations.
Attendees also participated in hands-on scientific demonstrations, interactions with robots, and tours of MIT labs, offering a view of cutting-edge scientific research. The event included musical performances by Latin American students from Berklee College of Music.
“I was thrilled to see the enthusiasm of young people and their parents and to be inspired by the great life stories of accomplished scientists and individuals from other fields making a positive impact in the real world,” says Edwin Pedrozo Peñafiel, assistant professor of physics at the University of Florida and an organizer. “This is why I strongly believe that representation matters.”
Welcoming a Nobel laureate
The first day of the forum opened with welcoming remarks from Nergis Mavalvala, dean of the School of Science, and Boleslaw Wyslouch, director of the Laboratory for Nuclear Science and the MIT Bates Research and Engineering Center, and concluded with a keynote address by human rights activist Rigoberta Menchú Tum, 1992 Nobel Peace Prize laureate and founder of the Rigoberta Menchú Tum Foundation. Reflecting upon Indigenous perspectives on science, she emphasized the importance of maintaining a humanistic perspective in scientific discovery. “My struggle has been one of constructing a humanistic perspective … that science, technology … are products of the strength of human beings,” Menchú remarked. She also shared her extraordinary story, encouraging students to persevere no matter the obstacles.
Diana Grass, a PhD student in the Harvard-MIT Health Sciences and Technology program and an organizer, says, “As a woman in science and a first-generation student, I’ve experienced firsthand the impact of breaking barriers and the importance of representation. At Bridging Talents and Opportunities (BTO), we are shaping a future where opportunities are available to all. Seeing students from disadvantaged backgrounds, along with their parents, engage with some of today’s most influential scientists and leaders — who shared their own stories of resilience — was both inspiring and transformative. It ignited crucial conversations about how interdisciplinary collaboration in STEAM, grounded in humanity, is essential for tackling the critical challenges of our era.”
Power of the Arts
The second day concluded with a panel on “The Power of the Arts,” featuring actor, singer, and songwriter Carlos Ponce, as well as musician and producer Emilio Estefan. They were joined by journalist and author Luz María Doria, who moderated the discussion. Throughout the panel, the speakers recounted their inspiring journeys toward success in the entertainment industry. “This forum reaffirmed our commitment to bridging talent with opportunity,” says Ponce. “The energy and engagement from students, families, and speakers were incredible, fostering a space of learning, empowerment, and possibility.”
During the forum, a two-hour workshop brought together scientists, nonprofit foundations, and business leaders to share ideas and discuss concrete proposals for creating opportunities for young talents. Key takeaways included developing strategic programs to match talented young students with mentors from diverse backgrounds who can serve as role models, making better use of existing programs supporting underserved populations, disseminating information about such programs, improving financial support for students pursuing education, and fostering extended collaborations among the three groups involved.
Maria Angélica Cuellar, CEO of Incontact Group and a BTO organizer, says, “The event was absolutely spectacular and exceeded our expectations. We not only brought together leaders making a global impact in STEAM and business, but also secured financial commitments to support young talents. Through media coverage and streaming, our message reached every corner of the world, especially Latin America and the U.S. I’m deeply grateful for the commitment of each speaker and for the path now open to turn this dream of connecting stakeholders into tangible results and actions. An exciting challenge lies ahead, driving us to work even harder to create opportunities for these talented young people.”
“Bridging Talents and Opportunities was a unique event that brought together students, parents, professors, and leaders in different fields in a relatable and inspiring environment,” says Sebastián Ruiz Lopera, a PhD candidate in the Department of Electrical Engineering and Computer Science and an organizer. “Every speaker, panelist, and participant shared a story of resilience and passion that will motivate the next generation of young talents from disadvantaged backgrounds to become the new leaders and stakeholders.”
The 2024 BTO forum was made possible with the support of the Latinx Graduate Student Association at MIT, the Laboratory for Nuclear Science, the MIT MLK Scholars Program, the Institute Community and Equity Office, the School of Science, the U.S. Department of Energy, the University of Florida, CHN, JGMA Architects, Berklee College of Music, and the Harvard Colombian Student Society.
Killing the messenger
A newly characterized anti-viral defense system in bacteria aborts infection through a novel mechanism by chemically altering mRNA.
Like humans and other complex multicellular organisms, single-celled bacteria can fall ill and fight off viral infections. Bacterial viruses, called bacteriophages or simply phages, are among the most ubiquitous life forms on Earth. Phages and bacteria are engaged in a constant battle, the viruses attempting to circumvent the bacteria’s defenses and the bacteria racing to find new ways to protect themselves.
These anti-phage defense systems are carefully controlled, and prudently managed — dormant, but always poised to strike.
New open-access research recently published in Nature from the Laub Lab in the Department of Biology at MIT has characterized CmdTAC, an anti-phage defense system in bacteria. CmdTAC prevents viral infection by chemically altering messenger RNA, the single-stranded genetic message used to produce proteins.
This defense system detects phage infection at a stage when the viral phage has already commandeered the host’s machinery for its own purposes. In the face of annihilation, the ill-fated bacterium activates a defense system that will halt translation, preventing the creation of new proteins and aborting the infection — but dooming itself in the process.
“When bacteria are in a group, they’re kind of like a multicellular organism that is not connected to one another. It’s an evolutionarily beneficial strategy for one cell to kill itself to save another identical cell,” says Christopher Vassallo, a postdoc and co-author of the study. “You could say it’s like self-sacrifice: One cell dies to protect the other cells.”
The enzyme responsible for altering the mRNA is called an ADP-ribosyltransferase. Researchers have characterized hundreds of these enzymes; although a few are known to target DNA or RNA, all but a handful target proteins. This is the first time such an enzyme has been shown to target mRNA within cells.
Expanding understanding of anti-phage defense
Co-first author and graduate student Christopher Doering notes that it is only within the last decade or so that researchers have begun to appreciate the breadth of diversity and complexity of anti-phage defense systems. For example, CRISPR gene editing, a technique used in everything from medicine to agriculture, is rooted in research on the bacterial CRISPR-Cas9 anti-phage defense system.
CmdTAC belongs to a widespread class of anti-phage defense mechanisms called toxin-antitoxin (TA) systems. A TA system is just that: a toxin capable of killing the cell or altering its processes, rendered inert by an associated antitoxin.
Although these TA systems can be identified — if the toxin is expressed by itself, it kills or inhibits the growth of the cell; if the toxin and antitoxin are expressed together, the toxin is neutralized — characterizing the cascade of circumstances that activates these systems requires extensive effort. In recent years, however, many TA systems have been shown to serve as anti-phage defense.
Two general questions need to be answered to understand a viral defense system: How do bacteria detect an infection, and how do they respond?
Detecting infection
CmdTAC is a TA system with an additional element, and its three components generally exist in a stable complex: the toxin CmdT, the antitoxin CmdA, and an additional component called a chaperone, CmdC.
If the phage’s protective capsid protein is present, CmdC dissociates from CmdT and CmdA and interacts with the phage capsid protein instead. In the model outlined in the paper, the chaperone CmdC is, therefore, the sensor of the system, responsible for recognizing when an infection is occurring. Structural proteins, such as the capsid that protects the phage genome, are a common trigger because they’re abundant and essential to the phage.
The uncoupling of CmdC exposes the neutralizing antitoxin CmdA to degradation, which releases the toxin CmdT to do its lethal work.
Toxicity on the loose
Guided by computational tools, the researchers suspected that CmdT was an ADP-ribosyltransferase because of its similarity to other such enzymes. As the name suggests, the enzyme transfers an ADP-ribose onto its target.
To determine whether CmdT recognized any sequences or positions in particular, they tested a mix of short sequences of single-stranded RNA. RNA has four bases: A, U, G, and C, and the evidence pointed to the enzyme recognizing GA sequences.
The CmdT modification of GA sequences in mRNA blocks their translation. Halting the production of new proteins aborts the infection, preventing the phage from spreading beyond the host to infect other bacteria.
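To see why modifying a single dinucleotide motif shuts down translation wholesale, it helps to note how often GA appears in even a short message. Below is a minimal illustrative sketch in Python using a made-up RNA string; it is not the study’s analysis code:

# Count GA dinucleotides in a (made-up) mRNA fragment.
# Any real transcript is riddled with GA sites, which is why
# ADP-ribosylating them amounts to a global block on translation.
mrna = "AUGGCAGAUUCGGAAGGACUGAGAUAA"
ga_sites = [i for i in range(len(mrna) - 1) if mrna[i:i + 2] == "GA"]
print(f"{len(ga_sites)} GA sites in a {len(mrna)}-base message: {ga_sites}")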
“Not only is it a new type of bacterial immune system, but the enzyme involved does something that’s never been seen before: the ADP-ribosylation of mRNA,” Vassallo says.
Although the paper outlines the broad strokes of the anti-phage defense system, it’s unclear how CmdC interacts with the capsid protein, and how the chemical modification of GA sequences prevents translation.
Beyond bacteria
More broadly, exploring anti-phage defense aligns with the Laub Lab’s overall goal of understanding how bacteria function and evolve, but these results may have broader implications beyond bacteria.
Senior author Michael Laub, Salvador E. Luria Professor and Howard Hughes Medical Institute Investigator, says the ADP-ribosyltransferase has homologs in eukaryotes, including human cells. They are not well studied, and not among the Laub Lab’s research topics, but they are known to be up-regulated in response to viral infection.
“There are so many different — and cool — mechanisms by which organisms defend themselves against viral infection,” Laub says. “The notion that there may be some commonality between how bacteria defend themselves and how humans defend themselves is a tantalizing possibility.”
Smart handling of neutrons is crucial to fusion power success
Assistant Professor Ethan Peterson is addressing some of the practical, overlooked issues that need to be worked out for viable fusion power plants.
In fall 2009, when Ethan Peterson ’13 arrived at MIT as an undergraduate, he already had some ideas about possible career options. He’d always liked building things, even as a child, so he imagined his future work would involve engineering of some sort. He also liked physics. And he’d recently become intent on reducing our dependence on fossil fuels and simultaneously curbing greenhouse gas emissions, which made him consider studying solar and wind energy, among other renewable sources.
Things crystallized for him in the spring semester of 2010, when he took an introductory course on nuclear fusion, taught by Anne White, during which he discovered that when a deuterium nucleus and a tritium nucleus combine to produce a helium nucleus, an energetic (14 mega-electron-volt) neutron — traveling at one-sixth the speed of light — is released. Moreover, 10²⁰ (100 billion billion) of these neutrons would be produced every second that a 500-megawatt fusion power plant operates. “It was eye-opening for me to learn just how energy-dense the fusion process is,” says Peterson, who became the Class of 1956 Career Development Professor of nuclear science and engineering in July 2024. “I was struck by the richness and interdisciplinary nature of the fusion field. This was an engineering discipline where I could apply physics to solve a real-world problem in a way that was both interesting and beautiful.”
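Those numbers are straightforward to sanity-check. Here is a minimal back-of-the-envelope sketch in Python, assuming the 500 megawatts refers to fusion power and using textbook constants: 17.6 MeV released per deuterium-tritium reaction, 14.1 MeV of it carried by the neutron, and a neutron rest energy of roughly 939.6 MeV.

# Back-of-the-envelope check of the neutron figures quoted above.
MEV_TO_J = 1.602e-13               # joules per mega-electron-volt
E_PER_REACTION = 17.6 * MEV_TO_J   # energy released per D-T fusion event
P_FUSION = 500e6                   # assumed fusion power, in watts

rate = P_FUSION / E_PER_REACTION   # one neutron per reaction
print(f"neutrons per second: {rate:.1e}")   # ~1.8e20, i.e., order 10^20

# Neutron speed from relativistic kinetic energy T = (gamma - 1) m c^2.
T, m_c2 = 14.1, 939.6              # kinetic and rest energies, in MeV
gamma = 1 + T / m_c2
beta = (1 - 1 / gamma**2) ** 0.5
print(f"v/c: {beta:.3f}")          # ~0.17, about one-sixth of light speed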
He soon became a physics and nuclear engineering double major, and by the time he graduated from MIT in 2013, the U.S. Department of Energy (DoE) had already decided to cut funding for MIT’s Alcator C-Mod fusion project. In view of that facility’s impending closure, Peterson opted to pursue graduate studies at the University of Wisconsin. There, he acquired a basic science background in plasma physics, which is central not only to nuclear fusion but also to astrophysical phenomena such as the solar wind.
When Peterson received his PhD from Wisconsin in 2019, nuclear fusion had rebounded at MIT with the launch, a year earlier, of the SPARC project — a collaborative effort being carried out with the newly founded MIT spinout Commonwealth Fusion Systems. He returned to his alma mater as a postdoc and then a research scientist in the Plasma Science and Fusion Center, taking his time, at first, to figure out how to best make his mark in the field.
Minding your neutrons
Around that time, Peterson was participating in a community planning process, sponsored by the DoE, that focused on critical gaps that needed to be closed for a successful fusion program. In the course of these discussions, he came to realize that inadequate attention had been paid to the handling of neutrons, which carry 80 percent of the energy coming out of a fusion reaction — energy that needs to be harnessed for electrical generation. However, these neutrons are so energetic that they can penetrate through many tens of centimeters of material, potentially undermining the structural integrity of components and damaging vital equipment such as superconducting magnets. Shielding is also essential for protecting humans from harmful radiation.
One goal, Peterson says, is to minimize the number of neutrons that escape and, in so doing, to reduce the amount of lost energy. A complementary objective, he adds, “is to get neutrons to deposit heat where you want them to and to stop them from depositing heat where you don’t want them to.” These considerations, in turn, can have a profound influence on fusion reactor design. This branch of nuclear engineering, called neutronics — which analyzes where neutrons are created and where they end up going — has become Peterson’s specialty.
Neutronics was never a high-profile area of research in the fusion community — plasma physics, for example, has always garnered more of the spotlight and more of the funding. That’s exactly why Peterson has stepped up. “The impacts of neutrons on fusion reactor design haven’t been a high priority for a long time,” he says. “I felt that some initiative needed to be taken,” and that prompted him to make the switch from plasma physics to neutronics. It has been his principal focus ever since — as a postdoc, a research scientist, and now as a faculty member.
A code to design by
The best way to get a neutron to transfer its energy is to make it collide with a light atom. Lithium, with an atomic number of three, or lithium-containing materials are normally good choices — and necessary for producing tritium fuel. The placement of lithium “blankets,” which are intended to absorb energy from neutrons and produce tritium, “is a critical part of the design of fusion reactors,” Peterson says. High-density materials, such as lead and tungsten, can be used, conversely, to block the passage of neutrons and other types of radiation. “You might want to layer these high- and low-density materials in a complicated way that isn’t immediately intuitive,” he adds. Determining which materials to put where — and of what thickness and mass — amounts to a tricky optimization problem, which will affect the size, cost, and efficiency of a fusion power plant.
To that end, Peterson has developed modeling tools that can make analyses of these sorts easier and faster, thereby facilitating the design process. “This has traditionally been the step that takes the longest time and causes the biggest holdups,” he says. The models and algorithms that he and his colleagues are devising are general enough, moreover, to be compatible with a diverse range of fusion power plant concepts, including those that use magnets or lasers to confine the plasma.
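As a toy illustration of the trade-offs such tools explore, the sketch below runs a one-dimensional exponential-attenuation model through two layers. The materials, thicknesses, and mean free paths are hypothetical placeholders, and real neutronics analyses rely on full Monte Carlo transport simulations rather than this shortcut.

import math

# Toy 1D shielding model: each layer attenuates the uncollided neutron
# flux by exp(-thickness / mfp), where mfp is an effective mean free
# path for 14 MeV neutrons. All numbers below are hypothetical.
layers = [
    ("lithium blanket", 50.0, 15.0),   # (name, thickness in cm, mfp in cm)
    ("tungsten shield", 10.0, 3.0),
]

flux = 1.0  # normalized flux entering the first layer
for name, thickness, mfp in layers:
    transmitted = flux * math.exp(-thickness / mfp)
    print(f"{name}: removes {flux - transmitted:.1%} of the original flux")
    flux = transmitted
print(f"fraction reaching components behind the shield: {flux:.2e}")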
Now that he’s become a professor, Peterson is in a position to introduce more people to nuclear engineering, and to neutronics in particular. “I love teaching and mentoring students, sharing the things I’m excited about,” he says. “I was inspired by all the professors I had in physics and nuclear engineering at MIT, and I hope to give back to the community in the same way.”
He also believes that if you are going to work on fusion, there is no better place to be than MIT, “where the facilities are second-to-none. People here are extremely innovative and passionate. And the sheer number of people who excel in their fields is staggering.” Great ideas can sometimes be sparked by off-the-cuff conversations in the hallway — something that happens more frequently than you expect, Peterson remarks. “All of these things taken together make MIT a very special place.”
2024 Math Prize for Girls at MIT sees six-way tie
Contest hosted by the Department of Mathematics attracts 274 participants and celebrates 16th anniversary.
After 274 young women spent two-and-a-half hours working through 20 advanced math problems for the 16th annual Advantage Testing Foundation/Jane Street Math Prize for Girls (MP4G) contest held Oct. 4-6 at MIT, a six-way tie was announced.
Hosted by the MIT Department of Mathematics and sponsored by the Advantage Testing Foundation and global trading firm Jane Street, MP4G is the largest math prize for girls in the world. The competitors, who came from across the United States and Canada, had scored high enough on the American Mathematics Competitions exam to apply for and be accepted by MP4G; this year, the program received 891 applications. Contestants solved multistage problems in geometry, algebra, and trigonometry, and this year’s problems are listed on the MP4G website.
Because of the six-way tie, the $50,000 first-place prize and the subsequent awards ($20,000 for second, $10,000 for third, $4,000 apiece for fourth and fifth, and $2,000 for sixth place), a total of $90,000, were instead divided evenly, with each winner receiving $15,000. Although each of the six scored 15 out of 20, the winners were ranked by how they answered the most difficult problems.
In first place was Shruti Arun, 11th grade, Cherry Creek High School, Colorado, who last year placed fourth; followed by Angela Liu, 12th grade, home-schooled, California; Sophia Hou, 11th grade, Thomas Jefferson High School for Science and Technology, Virginia; Susie Lu, 11th grade, Stanford Online High School, Washington, who last year placed 19th; Katie He, 12th grade, the Frazer School, Florida; and Katherine Liu, 12th grade, Clements High School, Texas — with the latter two having tied for seventh place last year.
The next round of winners, all with a score of 14, took home $1,000 each: Angela Ho, 11th grade, Stevenson High School, Illinois; Hannah Fox, 12th grade, Proof School, California; Selena Ge, 9th grade, Lexington High School, Massachusetts; Alansha Jiang, 12th grade, Newport High School, Washington; Laura Wang, 9th grade, Lakeside School, Washington; Alyssa Chu, 12th grade, Rye Country Day School, New York; Emily Yu, 12th grade, Mendon High School, New York; and Ivy Guo, 12th grade, Blair High School, Maryland.
The $2,000 Youth Prize to the highest-scoring contestant in 9th grade or below was shared evenly by Selena Ge and Laura Wang. In total, the event awarded $100,000 in monetary prizes to the top 14 contestants (including tie scores). Honorable mention trophies were awarded to the next 25 winners.
“I knew there were a lot of really smart people there, so the chances of me getting first wasn’t particularly high,” Katie He told a Florida newspaper. “When I heard six ways, I was so excited though,” He says, “because that’s just really cool that we all get to be happy about our performances and celebrate together and share the same joy.”
The event featured a keynote lecture by Harvard University professor of mathematics Lauren Williams on “Combinatorics of Hopping Particles”; talks by Po-Shen Loh, professor of math at Carnegie Mellon University, and Maria Klawe, president of Math for America; and a musical performance by the MIT Logarhythms. Last year’s winner, Jessica Wan, volunteered as a proctor. Now a first-year at MIT, Wan won MP4G in 2022 and 2019. Alumna and doctoral candidate Nitya Mani was on hand to note, during her speech at the awards ceremony, how much bigger the event has grown over the years.
The day before the competition, attendees gathered to attend campus tours, icebreaker events, and networking sessions around MIT, at the Boston Marriott Cambridge, and at Kresge Auditorium, where the awards ceremony took place. Contestants also met MP4G alumnae at the Women in STEM Ask Me Anything event.
Math Community and Outreach Officer Michael King described the event as a “virtuous circle” where alumni return to encourage participants and help to keep the event running. “It’s good for MIT, because it attracts top female students from around the country. The atmosphere, with hundreds of girls excited about math and supported by their families, was wonderful. I thought to myself, ‘This is possible, to have rooms of math people that aren’t 80 percent men.’ The more women in math, the more role models. This is what inspires people to enter a discipline. MP4G creates a community of role models.”
Chris Peterson SM ’13, director of communications and special projects at MIT Admissions and Student Financial Services, agrees. “Everyone sees and appreciates the competitive function that Math Prize performs to identify and celebrate these highly talented young mathematicians. What’s less visible, but equally or even more important, is the crucial community role it plays as an affinity community to build relationships and a sense of belonging among these young women that will follow and empower them through the rest of their education and careers.”
Peterson also discussed life at MIT and the admissions process at the Art of Problem Solving’s recent free MIT Math Jam, as he has annually for the past decade. He was joined by MIT math doctoral candidate Evan Chen ’18, a former deputy leader of the USA International Math Olympiad team.
Many alumnae returned to MIT to participate in a panel for attendees and their parents. For one panelist, MP4G is a family affair. Sheela Devadas, MP4G ’10 and ’11, is the sister of electrical engineering and computer science doctoral candidate and fellow MP4G alum Lalita; their mother, Sulochana, is MP4G’s program administrator.
“One of the goals of MP4G is to inspire young mathematicians,” says Devadas. “Although it is a competition, there is a lot of camaraderie between the contestants as well, and opportunities to meet both current undergraduate STEM majors and older role models who have pursued math-based careers. This aligned with my experience at MIT as a math major, where the atmosphere felt both competitive and collaborative in a way that inspired us.”
“There are many structural barriers and interpersonal issues facing women in STEM-oriented careers,” she adds. “One issue that is sometimes overlooked, which I have sometimes run into, is that both in school and in the workplace, it can be challenging to get your peers to respect your mathematical skill rather than pressuring you to take on tasks like note-taking or scheduling that are seen as more 'female' (though those tasks are also valuable and necessary).”
Another panelist, Jennifer Xiong ’23, talked about her time at MP4G and MIT, as well as her current role as a pharmaceutical researcher at Moderna.
“MP4G is what made me want to attend MIT, where I met my first MIT friend,” she says. Later, as an MIT student, she volunteered with MP4G to help her stay connected with the program. “MP4G is exciting because it brings together young girls who are interested in solving hard problems, to MIT campus, where they can build community and foster their interests in math.”
Volunteer Ranu Boppana ’87, the wife of MP4G founding director and MIT Math Research Affiliate Ravi Boppana PhD ’86, appreciates watching how this program has helped inspire women to pursue STEM education. “I’m most struck by the fact that MIT is now gender-balanced for undergraduates, but also impressed with what a more diverse place it is in every way.”
The Boppanas were inspired to found MP4G because their daughter was a mathlete in middle school and high school, and often the only girl in many regional competitions. “Ravi realized that the girls needed a community of their own, and role models to help them visualize seeing themselves in STEM.”
“Each year, the best part of MP4G is seeing the girls create wonderful networks for themselves, as some are often the only girls they know interested in math at home. This event is also such a fabulous introduction to MIT for them. I think this event helps MIT recruit the most mathematically talented girls in the country.”
Ravi also recently created the YouTube channel Boppana Math, geared toward high school students. “My goal is to create videos that are accessible to bright high school students, such as the participants in the Math Prize for Girls,” says Ravi. “My most recent video, 'Hypergraphs and Acute Triangles,' won an Honorable Mention at this year’s Summer of Math Exposition.”
The full list of winners is posted on the Art of Problem Solving website. The top 45 students are invited to take the 2024 Math Prize for Girls Olympiad at their schools. Canada/USA Mathcamp also provides $500 merit scholarships to the top 35 MP4G students who enroll in its summer program. This reflects a $250 increase to the scholarships. Applications to compete in next year’s MP4G will open in March 2025.
Quantum simulator could help uncover materials for high-performance electronics
By emulating a magnetic field on a superconducting quantum computer, researchers can probe complex properties of materials.
Quantum computers hold the promise to emulate complex materials, helping researchers better understand the physical properties that arise from interacting atoms and electrons. This may one day lead to the discovery or design of better semiconductors, insulators, or superconductors that could be used to make ever faster, more powerful, and more energy-efficient electronics.
But some phenomena that occur in materials can be challenging to mimic using quantum computers, leaving gaps in the problems that scientists have explored with quantum hardware.
To fill one of these gaps, MIT researchers developed a technique to generate synthetic electromagnetic fields on superconducting quantum processors. The team demonstrated the technique on a processor comprising 16 qubits.
By dynamically controlling how the 16 qubits in their processor are coupled to one another, the researchers were able to emulate how electrons move between atoms in the presence of an electromagnetic field. Moreover, the synthetic electromagnetic field is broadly adjustable, enabling scientists to explore a range of material properties.
Emulating electromagnetic fields is crucial to fully explore the properties of materials. In the future, this technique could shed light on key features of electronic systems, such as conductivity, polarization, and magnetization.
“Quantum computers are powerful tools for studying the physics of materials and other quantum mechanical systems. Our work enables us to simulate much more of the rich physics that has captivated materials scientists,” says Ilan Rosen, an MIT postdoc and lead author of a paper on the quantum simulator.
The senior author is William D. Oliver, the Henry Ellis Warren Professor of Electrical Engineering and Computer Science and of Physics, director of the Center for Quantum Engineering, leader of the Engineering Quantum Systems group, and associate director of the Research Laboratory of Electronics. Oliver and Rosen are joined by others in the departments of Electrical Engineering and Computer Science and of Physics and at MIT Lincoln Laboratory. The research appears today in Nature Physics.
A quantum emulator
Companies like IBM and Google are striving to build large-scale digital quantum computers that hold the promise of outperforming their classical counterparts by running certain algorithms far more rapidly.
But that’s not all quantum computers can do. The dynamics of qubits and their couplings can also be carefully constructed to mimic the behavior of electrons as they move among atoms in solids.
“That leads to an obvious application, which is to use these superconducting quantum computers as emulators of materials,” says Jeffrey Grover, a research scientist at MIT and co-author on the paper.
Rather than trying to build large-scale digital quantum computers to solve extremely complex problems, researchers can use the qubits in smaller-scale quantum computers as analog devices to replicate a material system in a controlled environment.
“General-purpose digital quantum simulators hold tremendous promise, but they are still a long way off. Analog emulation is another approach that may yield useful results in the near-term, particularly for studying materials. It is a straightforward and powerful application of quantum hardware,” explains Rosen. “Using an analog quantum emulator, I can intentionally set a starting point and then watch what unfolds as a function of time.”
Despite their close similarity to materials, there are a few important ingredients in materials that can’t be easily reflected on quantum computing hardware. One such ingredient is a magnetic field.
In materials, electrons “live” in atomic orbitals. When two atoms are close to one another, their orbitals overlap and electrons can “hop” from one atom to another. In the presence of a magnetic field, that hopping behavior becomes more complex.
On a superconducting quantum computer, microwave photons hopping between qubits are used to mimic electrons hopping between atoms. But, because photons are not charged particles like electrons, the photons’ hopping behavior would remain the same in a physical magnetic field.
Since they can’t just turn on a magnetic field in their simulator, the MIT team employed a few tricks to synthesize the effects of one instead.
Tuning up the processor
The researchers adjusted how adjacent qubits in the processor were coupled to each other to create the same complex hopping behavior that electromagnetic fields cause in electrons.
To do that, they slightly changed the energy of each qubit by applying different microwave signals. Usually, researchers will set qubits to the same energy so that photons can hop from one to another. But for this technique, they dynamically varied the energy of each qubit to change how they communicate with each other.
By precisely modulating these energy levels, the researchers enabled photons to hop between qubits in the same complex manner that electrons hop between atoms in a magnetic field.
Plus, because they can finely tune the microwave signals, they can emulate a range of electromagnetic fields with different strengths and distributions.
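The underlying physics can be sketched in a few lines. When photons hop on a lattice with complex-valued hopping amplitudes, the phases play the role of a magnetic vector potential, a standard trick known as the Peierls substitution. The snippet below, a minimal sketch rather than the team’s device-control code, builds such a Hamiltonian for a 4-by-4 grid of 16 sites; the grid layout and flux value are illustrative assumptions:

import numpy as np

# Tight-binding lattice with Peierls phases: hopping photons behave like
# charged particles in a magnetic field. Landau gauge: hops along y pick
# up a phase that winds with the column index x.
nx, ny = 4, 4      # 4x4 grid of 16 sites (layout assumed for illustration)
alpha = 0.25       # synthetic flux per plaquette, in flux quanta (arbitrary)

def idx(x, y):
    return x * ny + y

H = np.zeros((nx * ny, nx * ny), dtype=complex)
for x in range(nx):
    for y in range(ny):
        if x + 1 < nx:                                  # hop along x
            H[idx(x, y), idx(x + 1, y)] = -1.0
        if y + 1 < ny:                                  # hop along y
            H[idx(x, y), idx(x, y + 1)] = -np.exp(2j * np.pi * alpha * x)
H += H.conj().T    # add the reverse hops so H is Hermitian

# Changing alpha, the synthetic field strength, reshuffles the spectrum;
# in hardware, that knob is the amplitude and frequency of the modulation.
print(np.round(np.linalg.eigvalsh(H), 3))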
The researchers undertook several rounds of experiments to determine what energy to set for each qubit, how strongly to modulate them, and the microwave frequency to use.
“The most challenging part was finding modulation settings for each qubit so that all 16 qubits work at once,” Rosen says.
Once they arrived at the right settings, they confirmed that the dynamics of the photons uphold several equations that form the foundation of electromagnetism. They also demonstrated the “Hall effect,” a conduction phenomenon that exists in the presence of an electromagnetic field.
These results show that their synthetic electromagnetic field behaves like the real thing.
Moving forward, they could use this technique to precisely study complex phenomena in condensed matter physics, such as phase transitions that occur when a material changes from a conductor to an insulator.
“A nice feature of our emulator is that we need only change the modulation amplitude or frequency to mimic a different material system. In this way, we can scan over many materials properties or model parameters without having to physically fabricate a new device each time,” says Oliver.
While this work was an initial demonstration of a synthetic electromagnetic field, it opens the door to many potential discoveries, Rosen says.
“The beauty of quantum computers is that we can look at exactly what is happening at every moment in time on every qubit, so we have all this information at our disposal. We are in a very exciting place for the future,” he adds.
This work is supported, in part, by the U.S. Department of Energy, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. Army Research Office, the Oak Ridge Institute for Science and Education, the Office of the Director of National Intelligence, NASA, and the National Science Foundation.
AXIS mission selected as NASA Astrophysics Probe competition finalist
MIT Kavli Institute scientists and collaborators will produce a concept study to launch a $1B experiment to investigate the X-ray universe.
The MIT Kavli Institute for Astrophysics and Space Research (MKI) is a project lead for one of two finalist missions recently selected for NASA's new Probe Explorers program. Working with collaborators at the University of Maryland and NASA's Goddard Space Flight Center, the team will produce a one-year concept study to launch the Advanced X-ray Imaging Satellite (AXIS) in 2032.
Erin Kara, associate professor of physics and astrophysicist at MIT, is the deputy principal investigator for AXIS. The MIT team includes MKI scientists Eric Miller, Mark Bautz, Catherine Grant, Michael McDonald, and Kevin Burdge. Says Kara, "I am honored to be working with this amazing team in ushering in a new era for X-ray astronomy."
The AXIS mission is designed to revolutionize the view scientists have of high-energy events and environments in the universe using new technologies capable of seeing even deeper into space and further back in time.
"If selected to move forward," explains Kara, "AXIS will answer some of the biggest mysteries in modern astrophysics, from the formation of supermassive black holes to the progenitors of the most energetic and explosive events in the universe to the effects of stars on exoplanets. Simply put, it's the next-generation observatory we need to transform our understanding of the universe."
Critical to AXIS's success is the CCD focal plane — an array of imaging devices that record the properties of the light coming into the telescope. If selected, MKI scientists will work with colleagues at MIT Lincoln Laboratory and Stanford University to develop this high-speed camera, which sits at the heart of the telescope, connected to the X-ray Mirror Assembly and telescope tube. The work to create the array builds on previous imaging technology developed by MKI and Lincoln Laboratory, including instruments flying on the Chandra X-ray Observatory, the Suzaku X-ray Observatory, and the Transiting Exoplanet Survey Satellite (TESS).
Camera lead Eric Miller notes that "the advanced detectors that we will use provide the same excellent sensitivity as previous instruments, but operating up to 100 times faster to keep up with all of the X-rays focused by the mirror." As such, the development of the CCD focal plane will have significant impact in both scientific and technological realms.
"Engineering the array over the next year," adds Kara, "will lay the groundwork not just for AXIS, but for future missions as well."
MIT Schwarzman College of Computing launches postdoctoral program to advance AI across disciplines
The new Tayebati Postdoctoral Fellowship Program will support leading postdocs to bring cutting-edge AI to bear on research in scientific discovery or music.
The MIT Stephen A. Schwarzman College of Computing has announced the launch of a new program to support postdocs conducting research at the intersection of artificial intelligence and particular disciplines.
The Tayebati Postdoctoral Fellowship Program will focus on AI for addressing the most challenging problems in select scientific research areas, and on AI for music composition and performance. The program will welcome an inaugural cohort of up to six postdocs for a one-year term, with the possibility of renewal for a second term.
Supported by a $20 million gift from Parviz Tayebati, an entrepreneur and executive with a broad technical background and experience with startup companies, the program will empower top postdocs by providing an environment that facilitates their academic and professional development and enables them to pursue ambitious discoveries. “I am proud to support a fellowship program that champions interdisciplinary research and fosters collaboration across departments. My hope is that this gift will inspire a new generation of scholars whose research advances knowledge and nurtures innovation that transcends traditional boundaries,” says Tayebati.
"Artificial intelligence holds tremendous potential to accelerate breakthroughs in science and ignite human creativity," says Dan Huttenlocher, dean of the Schwarzman College of Computing and Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “This new postdoc program is a remarkable opportunity to cultivate exceptional bilingual talent combining AI and another discipline. The program will offer fellows the chance to engage in research at the forefront of both AI and another field, collaborating with leading experts across disciplines. We are deeply thankful to Parviz for his foresight in supporting the development of researchers in this increasingly important area.”
Candidates accepted into the program will work on projects that encompass one of six disciplinary areas: biology/bioengineering, brain and cognitive sciences, chemistry/chemical engineering, materials science and engineering, music, and physics. Each fellow will have a faculty mentor in the disciplinary area as well as in AI.
The Tayebati Postdoctoral Fellowship Program is a key component of a larger focus of the MIT Schwarzman College of Computing aimed at fostering innovative research in computing. As part of this focus, the college has three postdoctoral programs, each of which provides training and mentorship to fellows, broadens their research horizons, and helps them develop expertise in computing, including its intersection with other disciplines.
Other programs include MEnTorEd Opportunities in Research (METEOR), which was established by the Computer Science and Artificial Intelligence Laboratory in 2020. Recently expanded to span MIT through the college, the goal of METEOR is to support exceptional scholars in computer science and AI and to broaden participation in the field.
In addition, the Social and Ethical Responsibilities of Computing (SERC), a cross-cutting initiative of the MIT Schwarzman College of Computing, offers researchers exploring how computing is reshaping society the opportunity to participate as a SERC postdoc. SERC postdocs engage in a number of activities throughout the year, including leading interdisciplinary teams of MIT undergraduate and graduate students, known as SERC Scholars, to work on research projects investigating such topics as generative AI and democracy, combating deepfakes, examining data ownership, and the societal impact of gamification, among others.
MIT affiliates receive 2024-25 awards and honors from the American Physical Society
Two faculty, a graduate student, and 10 additional alumni receive top awards and prizes; four faculty, one senior researcher, and seven alumni named APS Fellows.
A number of individuals with MIT ties have received honors from the American Physical Society (APS) for 2024 and 2025.
Awardees include Professor Frances Ross, Professor Vladan Vuletić, and graduate student Jiliang Hu ’19, PhD ’24, as well as 10 alumni. New APS Fellows include Professor Joseph Checkelsky, Senior Researcher John Chiaverini, Associate Professor Areg Danagoulian, Professor Ruben Juanes, and seven alumni.
Frances M. Ross, the TDK Professor in Materials Science and Engineering, received the 2025 Joseph F. Keithley Award For Advances in Measurement Science “for groundbreaking advances in in situ electron microscopy in vacuum and liquid environments.”
Ross uses transmission electron microscopy to watch crystals as they grow and react under different conditions, including both liquid and gaseous environments. The microscopy techniques developed over Ross’ research career help in exploring growth mechanisms during epitaxy, catalysis, and electrochemical deposition, with applications in microelectronics and energy storage. Ross’ research group continues to develop new microscopy instrumentation to enable deeper exploration of these processes.
Vladan Vuletić, the Lester Wolfe Professor of Physics, received the 2025 Arthur L. Schawlow Prize in Laser Science “for pioneering work on spin squeezing for optical atomic clocks, quantum nonlinear optics, and laser cooling to quantum degeneracy.” Vuletić’s research includes ultracold atoms, laser cooling, large-scale quantum entanglement, quantum optics, precision tests of physics beyond the Standard Model, and quantum simulation and computing with trapped neutral atoms.
His Experimental Atomic Physics Group is also affiliated with the MIT-Harvard Center for Ultracold Atoms and the Research Laboratory of Electronics (RLE). In 2020, his group showed that the precision of current atomic clocks could be improved by entangling the atoms — a quantum phenomenon by which particles are coerced to behave in a collective, highly correlated state.
Jiliang Hu received the 2024 Award for Outstanding Doctoral Thesis Research in Biological Physics “for groundbreaking biophysical contributions to microbial ecology that bridge experiment and theory, showing how only a few coarse-grained features of ecological networks can predict emergent phases of diversity, dynamics, and invasibility in microbial communities.”
Hu is working in PhD advisor Professor Jeff Gore’s lab. He is interested in exploring the high-dimensional dynamics and emergent phenomena of complex microbial communities. In his first project, he demonstrated that multi-species communities can be described by a phase diagram as a function of the strength of interspecies interactions and the diversity of the species pool. He is now studying alternative stable states and the role of migration in the dynamics and biodiversity of metacommunities.
Alumni receiving awards:
Riccardo Betti PhD ’92 is the 2024 recipient of the John Dawson Award in Plasma Physics “for pioneering the development of statistical modeling to predict, design, and analyze implosion experiments on the 30kJ OMEGA laser, achieving hot spot energy gains above unity and record Lawson triple products for direct-drive laser fusion.”
Javier Mauricio Duarte ’10 received the 2024 Henry Primakoff Award for Early-Career Particle Physics “for accelerating trigger technologies in experimental particle physics with novel real-time approaches by embedding artificial intelligence and machine learning in programmable gate arrays, and for critical advances in Higgs physics studies at the Large Hadron Collider in all-hadronic final states.”
Richard Furnstahl ’18 is the 2025 recipient of the Feshbach Prize in Theoretical Nuclear Physics “for foundational contributions to calculations of nuclei, including applying the Similarity Renormalization Group to the nuclear force, grounding nuclear density functional theory in those forces, and using Bayesian methods to quantify the uncertainties in effective field theory predictions of nuclear observables.”
Harold Yoonsung Hwang ’93, SM ’93 is the 2024 recipient of the James C. McGroddy Prize for New Materials “for pioneering work in oxide interfaces, dilute superconductivity in heterostructures, freestanding oxide membranes, and superconducting nickelates using pulsed laser deposition, as well as for significant early contributions to the physics of bulk transition metal oxides.”
James P. Knauer ’72 received the 2024 John Dawson Award in Plasma Physics “for pioneering the development of statistical modeling to predict, design, and analyze implosion experiments on the 30kJ OMEGA laser, achieving hot spot energy gains above unity and record Lawson triple products for direct-drive laser fusion.”
Sekazi Mtingwa ’71 is the 2025 recipient of the John Wheatley Award “for exceptional contributions to capacity building in Africa, the Middle East, and other developing regions, including leadership in training researchers in beamline techniques at synchrotron light sources and establishing the groundwork for future facilities in the Global South.”
Michael Riordan ’68, PhD ’73 received the 2025 Abraham Pais Prize for History of Physics, which “recognizes outstanding scholarly achievements in the history of physics.”
Charles E. Sing PhD ’12 received the 2024 John H. Dillon Medal “for pioneering advances in polyelectrolyte phase behavior and polymer dynamics using theory and computational modeling.”
David W. Taylor ’01 received the 2025 Jonathan F. Reichert and Barbara Wolff-Reichert Award for Excellence in Advanced Laboratory Instruction “for continuous physical measurement laboratory improvements, leveraging industrial and academic partnerships that enable innovative and diversified independent student projects, and giving rise to practical skillsets yielding outstanding student outcomes.”
Wennie Wang ’13 is the 2025 recipient of the Maria Goeppert Mayer Award “for outstanding contributions to the field of materials science, including pioneering research on defective transition metal oxides for energy sustainability, a commitment to broadening participation of underrepresented groups in computational materials science, and leadership and advocacy in the scientific community.”
APS Fellows
Joseph Checkelsky, the Mitsui Career Development Associate Professor of Physics, received the 2024 Division of Condensed Matter Physics Fellowship “for pioneering contributions to the synthesis and study of quantum materials, including kagome and pyrochlore metals and natural superlattice compounds.”
Affiliated with the MIT Materials Research Laboratory and the MIT Center for Quantum Engineering, Checkelsky is working at the intersection of materials synthesis and quantum physics to discover new materials and physical phenomena to expand the boundaries of understanding of quantum mechanical condensed matter systems, as well as open doorways to new technologies by realizing emergent electronic and magnetic functionalities. Research in Checkelsky’s lab focuses on the study of exotic electronic states of matter through the synthesis, measurement, and control of solid-state materials. His research includes studying correlated behavior in topologically nontrivial materials, the role of geometrical phases in electronic systems, and novel types of geometric frustration.
John Chiaverini, a senior staff member in the Quantum Information and Integrated Nanosystems group and an MIT principal investigator in RLE, was elected a 2024 Fellow of the American Physical Society in the Division of Quantum Information “for pioneering contributions to experimental quantum information science, including early demonstrations of quantum algorithms, the development of the surface-electrode ion trap, and groundbreaking work in integrated photonics for trapped-ion quantum computation.”
Chiaverini is pursuing research in quantum computing and precision measurement using individual atoms. Currently, Chiaverini leads a team developing novel technologies for control of trapped-ion qubits, including trap-integrated optics and electronics; this research has the potential to allow scaling of trapped-ion systems to the larger numbers of ions needed for practical applications while maintaining high levels of control over their quantum states. He and the team are also exploring new techniques for the rapid generation of quantum entanglement between ions, as well as investigating novel encodings of quantum information that have the potential to yield higher-fidelity operations than currently available while also providing capabilities to correct the remaining errors.
Areg Danagoulian, associate professor of nuclear science and engineering, received the 2024 Forum on Physics and Society Fellowship “for seminal technological contributions in the field of arms control and cargo security, which significantly benefit international security.”
His current research interests focus on nuclear physics applications in societal problems, such as nuclear nonproliferation, technologies for arms control treaty verification, nuclear safeguards, and cargo security. Danagoulian also serves as the faculty co-director for MIT’s MISTI Eurasia program.
Ruben Juanes, professor of civil and environmental engineering and earth, atmospheric and planetary sciences (CEE/EAPS), received the 2024 Division of Fluid Dynamics Fellowship “for fundamental advances — using experiments, innovative imaging, and theory — in understanding the role of wettability for controlling the dynamics of fluid displacement in porous media and geophysical flows, and exploiting this understanding to optimize.”
An expert in the physics of multiphase flow in porous media, Juanes uses a mix of theory, computational, and real-life experiments to establish a fundamental understanding of how different fluids such as oil, water, and gas move through rocks, soil, or underwater reservoirs to solve energy and environmental-driven geophysical problems. His major contributions have been in developing improved safety and effectiveness of carbon sequestration, advanced understanding of fluid interactions in porous media for energy and environmental applications, imaging and computational techniques for real-time monitoring of subsurface fluid flows, and insights into how underground fluid movement contributes to landslides, floods, and earthquakes.
Alumni receiving fellowships:
Constantia Alexandrou PhD ’85 is the 2024 recipient of the Division of Nuclear Physics Fellowship “for the pioneering contributions in calculating nucleon structure observables using lattice QCD.”
Daniel Casey PhD ’12 received the 2024 Division of Plasma Physics Fellowship “for outstanding contributions to the understanding of the stagnation conditions required to achieve ignition.”
Maria K. Chan PhD ’09 is the 2024 recipient of the Topical Group on Energy Research and Applications Fellowship “for contributions to methodological innovations, developments, and demonstrations toward the integration of computational modeling and experimental characterization to improve the understanding and design of renewable energy materials.”
David Humphreys ’82, PhD ’91 received the 2024 Division of Plasma Physics Fellowship “for sustained leadership in developing the field of model-based dynamic control of magnetically confined plasmas, and for providing important and timely contributions to the understanding of tokamak stability, disruptions, and halo current physics.”
Eric Torrence PhD ’97 received the 2024 Division of Particles and Fields Fellowship “for significant contributions with the ATLAS and FASER Collaborations, particularly in the searches for new physics, measurement of the LHC luminosity, and for leadership in the operations of both experiments.”
Tiffany S. Santos ’02, PhD ’07 is the 2024 recipient of the Topical Group on Magnetism and Its Applications Fellowship “for innovative contributions in synthesis and characterization of novel ultrathin magnetic films and interfaces, and tailoring their properties for optimal performance, especially in magnetic data storage and spin-transport devices.”
Lei Zhou ’14, PhD ’19 received the 2024 Forum on Industrial and Applied Physics Fellowship “for outstanding and sustained contributions to the fields of metamaterials, especially for proposing metasurfaces as a bridge to link propagating waves and surface waves.”
Scientists discover molecules that store much of the carbon in space
The discovery of pyrene derivatives in a distant interstellar cloud may help to reveal how our own solar system formed.
A team led by researchers at MIT has discovered that a distant interstellar cloud contains an abundance of pyrene, a type of large, carbon-containing molecule known as a polycyclic aromatic hydrocarbon (PAH).
The discovery of pyrene in this far-off cloud, which is similar to the collection of dust and gas that eventually became our own solar system, suggests that pyrene may have been the source of much of the carbon in our solar system. That hypothesis is also supported by a recent finding that samples returned from the near-Earth asteroid Ryugu contain large quantities of pyrene.
“One of the big questions in star and planet formation is: How much of the chemical inventory from that early molecular cloud is inherited and forms the base components of the solar system? What we’re looking at is the start and the end, and they’re showing the same thing. That’s pretty strong evidence that this material from the early molecular cloud finds its way into the ice, dust, and rocky bodies that make up our solar system,” says Brett McGuire, an assistant professor of chemistry at MIT.
Due to its symmetry, pyrene itself is invisible to the radio astronomy techniques that have been used to detect about 95 percent of molecules in space. Instead, the researchers detected an isomer of cyanopyrene, a version of pyrene that has reacted with cyanide to break its symmetry. The molecule was detected in a distant cloud known as TMC-1, using the 100-meter Green Bank Telescope (GBT), a radio telescope at the Green Bank Observatory in West Virginia.
McGuire and Ilsa Cooke, an assistant professor of chemistry at the University of British Columbia, are the senior authors of a paper describing the findings, which appears today in Science. Gabi Wenzel, an MIT postdoc in McGuire’s group, is the lead author of the study.
Carbon in space
PAHs, which contain rings of carbon atoms fused together, are believed to store 10 to 25 percent of the carbon that exists in space. More than 40 years ago, scientists using infrared telescopes began detecting features that are thought to belong to vibrational modes of PAHs in space, but this technique couldn’t reveal exactly which types of PAHs were out there.
“Since the PAH hypothesis was developed in the 1980s, many people have accepted that PAHs are in space, and they have been found in meteorites, comets, and asteroid samples, but we can’t really use infrared spectroscopy to unambiguously identify individual PAHs in space,” Wenzel says.
In 2018, a team led by McGuire reported the discovery of benzonitrile — a six-carbon ring attached to a nitrile (carbon-nitrogen) group — in TMC-1. To make this discovery, they used the GBT, which can detect molecules in space by their rotational spectra — distinctive patterns of light that molecules give off as they tumble through space. In 2021, his team detected the first individual PAHs in space: two isomers of cyanonaphthalene, which consists of two rings fused together, with a nitrile group attached to one ring.
On Earth, PAHs commonly occur as byproducts of burning fossil fuels, and they’re also found in char marks on grilled food. Their discovery in TMC-1, which is only about 10 kelvins, suggested that it may also be possible for them to form at very low temperatures.
The fact that PAHs have also been found in meteorites, asteroids, and comets has led many scientists to hypothesize that PAHs are the source of much of the carbon that formed our own solar system. In 2023, researchers in Japan found large quantities of pyrene in samples returned from the asteroid Ryugu during the Hayabusa2 mission, along with smaller PAHs including naphthalene.
That discovery motivated McGuire and his colleagues to look for pyrene in TMC-1. Pyrene, which contains four rings, is larger than any of the other PAHs that have been detected in space. In fact, it’s the third-largest molecule identified in space, and the largest ever detected using radio astronomy.
Before looking for these molecules in space, the researchers first had to synthesize cyanopyrene in the laboratory. The cyano or nitrile group is necessary for the molecule to emit a signal that a radio telescope can detect. The synthesis was performed by MIT postdoc Shuo Zhang in the group of Alison Wendlandt, an MIT associate professor of chemistry.
Then, the researchers analyzed the signals that the molecules emit in the laboratory, which are exactly the same as the signals that they emit in space.
Using the GBT, the researchers found these signatures throughout TMC-1. They also found that cyanopyrene accounts for about 0.1 percent of all the carbon found in the cloud, which sounds small but is significant when one considers the thousands of different types of carbon-containing molecules that exist in space, McGuire says.
“While 0.1 percent doesn’t sound like a large number, most carbon is trapped in carbon monoxide (CO), the second-most abundant molecule in the universe besides molecular hydrogen. If we set CO aside, one in every few hundred or so remaining carbon atoms is in pyrene. Imagine the thousands of different molecules that are out there, nearly all of them with many different carbon atoms in them, and one in a few hundred is in pyrene,” he says. “That is an absolutely massive abundance. An almost unbelievable sink of carbon. It’s an interstellar island of stability.”
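For a rough sense of what that fraction implies, here is a minimal back-of-the-envelope sketch of McGuire’s arithmetic; the share of carbon assumed to be locked in CO is an illustrative guess, not a number from the study.

    # Illustrative carbon budget for TMC-1 (assumed numbers flagged below)
    f_pyrene = 0.001   # reported: pyrene (traced via cyanopyrene) holds ~0.1% of the carbon
    f_co = 0.70        # assumption for illustration: CO locks up ~70% of the carbon
    share_of_rest = f_pyrene / (1.0 - f_co)
    print(f"one in every {1.0 / share_of_rest:.0f} non-CO carbon atoms is in pyrene")
    # prints: one in every 300 -- i.e., "one in every few hundred or so"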
Ewine van Dishoeck, a professor of molecular astrophysics at Leiden Observatory in the Netherlands, called the discovery “unexpected and exciting.”
“It builds on their earlier discoveries of smaller aromatic molecules, but to make the jump now to the pyrene family is huge. Not only does it demonstrate that a significant fraction of carbon is locked up in these molecules, but it also points to different formation routes of aromatics than have been considered so far,” says van Dishoeck, who was not involved in the research.
An abundance of pyrene
Interstellar clouds like TMC-1 may eventually give rise to stars, as clumps of dust and gas coalesce into larger bodies and begin to heat up. Planets, asteroids, and comets arise from some of the gas and dust that surround young stars. Scientists can’t look back in time at the interstellar cloud that gave rise to our own solar system, but the discovery of pyrene in TMC-1, along with the presence of large amounts of pyrene in the asteroid Ryugu, suggests that pyrene may have been the source of much of the carbon in our own solar system.
“We now have, I would venture to say, the strongest evidence ever of this direct molecular inheritance from the cold cloud all the way through to the actual rocks in the solar system,” McGuire says.
The researchers now plan to look for even larger PAH molecules in TMC-1. They also hope to investigate the question of whether the pyrene found in TMC-1 was formed within the cold cloud or whether it arrived from elsewhere in the universe, possibly from the high-energy combustion processes that surround dying stars.
The research was funded in part by a Beckman Foundation Young Investigator Award, Schmidt Futures, the U.S. National Science Foundation, the Natural Sciences and Engineering Research Council of Canada, the Goddard Center for Astrobiology, and the NASA Planetary Science Division Internal Scientist Funding Program.
Physicists discover first “black hole triple”
System observed 8,000 light-years away may be the first direct evidence of “gentle” black hole formation.
Many black holes detected to date appear to be part of a pair. These binary systems comprise a black hole and a secondary object — such as a star, a much denser neutron star, or another black hole — that spiral around each other, drawn together by the black hole’s gravity to form a tight orbital pair.
Now a surprising discovery is expanding the picture of black holes, the objects they can host, and the way they form.
In a study appearing today in Nature, physicists at MIT and Caltech report that they have observed a “black hole triple” for the first time. The new system holds a central black hole in the act of consuming a small star that orbits remarkably close, circling the black hole every 6.5 days — a configuration similar to most binary systems. But surprisingly, a second star appears to also be circling the black hole, though at a much greater distance. The physicists estimate this far-off companion orbits the black hole every 70,000 years.
That the black hole seems to have a gravitational hold on an object so far away is raising questions about the origins of the black hole itself. Black holes are thought to form from the violent explosion of a dying star — a process known as a supernova, by which a star releases a huge amount of energy and light in a final burst before collapsing into an invisible black hole.
The team’s discovery, however, suggests that if the newly observed black hole resulted from a typical supernova, the energy it would have released before it collapsed would have kicked away any loosely bound objects in its outskirts. The second, outer star, then, shouldn’t still be hanging around.
Instead, the team suspects the black hole formed through a more gentle process of “direct collapse,” in which a star simply caves in on itself, forming a black hole without a last dramatic flash. Such a gentle origin would hardly disturb any loosely bound, faraway objects.
The presence of the very far-off star in the new triple system, then, suggests that its black hole was born through this gentler, direct collapse. And while astronomers have observed violent supernovae for centuries, the team says the new triple system could be the first evidence of a black hole that formed through the quieter process.
“We think most black holes form from violent explosions of stars, but this discovery helps call that into question,” says study author Kevin Burdge, a Pappalardo Fellow in the MIT Department of Physics. “This system is super exciting for black hole evolution, and it also raises questions of whether there are more triples out there.”
The study’s co-authors at MIT are Erin Kara, Claude Canizares, Deepto Chakrabarty, Anna Frebel, Sarah Millholland, Saul Rappaport, Rob Simcoe, and Andrew Vanderburg, along with Kareem El-Badry at Caltech.
Tandem motion
The discovery of the black hole triple came about almost by chance. The physicists found it while looking through Aladin Lite, a repository of astronomical observations, aggregated from telescopes in space and all around the world. Astronomers can use the online tool to search for images of the same part of the sky, taken by different telescopes that are tuned to various wavelengths of energy and light.
The team had been looking within the Milky Way galaxy for signs of new black holes. Out of curiosity, Burdge reviewed an image of V404 Cygni — a black hole about 8,000 light-years from Earth that was one of the very first objects ever to be confirmed as a black hole, in 1992. Since then, V404 Cygni has become one of the most well-studied black holes, and has been documented in over 1,300 scientific papers. However, none of those studies reported what Burdge and his colleagues observed.
As he looked at optical images of V404 Cygni, Burdge saw what appeared to be two blobs of light, surprisingly close to each other. The first blob was what others determined to be the black hole and an inner, closely orbiting star. The star is so close that it is shedding some of its material onto the black hole, and giving off the light that Burdge could see. The second blob of light, however, was something that scientists did not investigate closely, until now. That second light, Burdge determined, was most likely coming from a very far-off star.
“The fact that we can see two separate stars over this much distance actually means that the stars have to be really very far apart,” says Burdge, who calculated that the outer star is 3,500 astronomical units (AU) away from the black hole (1 AU is the distance between the Earth and sun). In other words, the outer star is 3,500 times farther from the black hole than the Earth is from the sun, or roughly 100 times the distance between Pluto and the sun (Pluto orbits at about 40 AU).
The question that then came to mind was whether the outer star was linked to the black hole and its inner star. To answer this, the researchers looked to Gaia, a satellite that has precisely tracked the motions of all the stars in the galaxy since 2014. The team analyzed the motions of the inner and outer stars over the last 10 years of Gaia data and found that the stars moved exactly in tandem, compared to other neighboring stars. They calculated that the odds of this kind of tandem motion are about one in 10 million.
“It’s almost certainly not a coincidence or accident,” Burdge says. “We’re seeing two stars that are following each other because they’re attached by this weak string of gravity. So this has to be a triple system.”
Pulling strings
How, then, could the system have formed? If the black hole arose from a typical supernova, the violent explosion would have kicked away the outer star long ago.
“Imagine you’re pulling a kite, and instead of a strong string, you’re pulling with a spider web,” Burdge says. “If you tugged too hard, the web would break and you’d lose the kite. Gravity is like this barely bound string that’s really weak, and if you do anything dramatic to the inner binary, you’re going to lose the outer star.”
To really test this idea, however, Burdge carried out simulations to see how such a triple system could have evolved and retained the outer star.
At the start of each simulation, he introduced three stars (the third being the black hole, before it became a black hole). He then ran tens of thousands of simulations, each one with a slightly different scenario for how the third star could have become a black hole, and subsequently affected the motions of the other two stars. For instance, he simulated a supernova, varying the amount and direction of energy that it gave off. He also simulated scenarios of direct collapse, in which the third star simply caved in on itself to form a black hole, without giving off any energy.
“The vast majority of simulations show that the easiest way to make this triple work is through direct collapse,” Burdge says.
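The core of that argument can be sketched numerically: at 3,500 AU, the outer star is so weakly bound that almost any supernova kick exceeds the system’s escape speed, while a direct collapse imparts essentially no kick at all. The sketch below is a simplified illustration under stated assumptions (a roughly 9-solar-mass system and a guessed kick-speed distribution); it is not the study’s simulation code.

    import numpy as np

    # Circular orbital speed at 1 AU around 1 solar mass is ~29.78 km/s;
    # speeds scale as sqrt(M / a) for mass M (solar masses) and separation a (AU).
    M = 9.0        # assumed total system mass, in solar masses (illustrative)
    a = 3500.0     # reported separation of the outer star, in AU
    v_circ = 29.78 * np.sqrt(M / a)   # ~1.5 km/s
    v_esc = np.sqrt(2.0) * v_circ     # ~2.1 km/s is enough to unbind the outer star

    # Assumed natal-kick speeds (km/s) for a supernova-born black hole:
    rng = np.random.default_rng(0)
    kicks = rng.rayleigh(scale=50.0, size=100_000)

    print(f"escape speed at {a:.0f} AU: {v_esc:.1f} km/s")
    print(f"fraction of supernova kicks that keep the outer star: {(kicks < v_esc).mean():.4f}")
    # A direct collapse corresponds to a near-zero kick, which always stays bound.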
In addition to giving clues to the black hole’s origins, the outer star has also revealed the system’s age. The physicists observed that the outer star happens to be in the process of becoming a red giant — a phase that occurs at the end of a star’s life. Based on this stellar transition, the team determined that the outer star is about 4 billion years old. Given that neighboring stars are born around the same time, the team concludes that the black hole triple is also 4 billion years old.
“We’ve never been able to do this before for an old black hole,” Burdge says. “Now we know V404 Cygni is part of a triple, it could have formed from direct collapse, and it formed about 4 billion years ago, thanks to this discovery.”
This work was supported, in part, by the National Science Foundation.
Brain pathways that control dopamine release may influence motor control
The newly identified pathways appear to relay emotional information that helps to shape the motivation to take action.
Within the human brain, movement is influenced by a brain region called the striatum, which sends instructions to motor neurons in the brain. Those instructions are conveyed by two pathways, one that initiates movement (“go”) and one that suppresses it (“no-go”).
In a new study, MIT researchers have discovered an additional two pathways that arise in the striatum and appear to modulate the effects of the go and no-go pathways. These newly discovered pathways connect to dopamine-producing neurons in the brain — one stimulates dopamine release and the other inhibits it.
By controlling the amount of dopamine in the brain via clusters of neurons known as striosomes, these pathways appear to modify the instructions given by the go and no-go pathways. They may be especially involved in influencing decisions that have a strong emotional component, the researchers say.
“Among all the regions of the striatum, the striosomes alone turned out to be able to project to the dopamine-containing neurons, which we think has something to do with motivation, mood, and controlling movement,” says Ann Graybiel, an MIT Institute Professor, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the new study.
Iakovos Lazaridis, a research scientist at the McGovern Institute, is the lead author of the paper, which appears today in the journal Current Biology.
New pathways
Graybiel has spent much of her career studying the striatum, a structure located deep within the brain that is involved in learning and decision-making, as well as control of movement.
Within the striatum, neurons are arranged in a labyrinth-like structure that includes striosomes, which Graybiel discovered in the 1970s. The classical go and no-go pathways arise from neurons that surround the striosomes; these surrounding neurons are known collectively as the matrix. The matrix cells that give rise to these pathways receive input from sensory processing regions such as the visual cortex and auditory cortex. Then, they send go or no-go commands to neurons in the motor cortex.
However, the function of the striosomes, which are not part of those pathways, remained unknown. For many years, researchers in Graybiel’s lab have been trying to solve that mystery.
Their previous work revealed that striosomes receive much of their input from parts of the brain that process emotion. Within striosomes, there are two major types of neurons, classified as D1 and D2. In a 2015 study, Graybiel found that one of these cell types, D1, sends input to the substantia nigra, which is the brain’s major dopamine-producing center.
It took much longer to trace the output of the other set, the D2 neurons. In the new Current Biology study, the researchers discovered that those neurons also eventually project to the substantia nigra, but first they connect to a set of neurons in the globus pallidus, which inhibits dopamine output. This pathway, an indirect connection to the substantia nigra, reduces the brain’s dopamine output and inhibits movement.
The researchers also confirmed their earlier finding that the pathway arising from D1 striosomes connects directly to the substantia nigra, stimulating dopamine release and initiating movement.
“In the striosomes, we’ve found what is probably a mimic of the classical go/no-go pathways,” Graybiel says. “They’re like classic motor go/no-go pathways, but they don’t go to the motor output neurons of the basal ganglia. Instead, they go to the dopamine cells, which are so important to movement and motivation.”
Emotional decisions
The findings suggest that the classical model of how the striatum controls movement needs to be modified to include the role of these newly identified pathways. The researchers now hope to test their hypothesis that input related to motivation and emotion, which enters the striosomes from the cortex and the limbic system, influences dopamine levels in a way that can encourage or discourage action.
That dopamine release may be especially relevant for actions that induce anxiety or stress. In their 2015 study, Graybiel’s lab found that striosomes play a key role in making decisions that provoke high levels of anxiety; in particular, those that are high risk but may also have a big payoff.
“Ann Graybiel and colleagues have earlier found that the striosome is concerned with inhibiting dopamine neurons. Now they show unexpectedly that another type of striosomal neuron exerts the opposite effect and can signal reward. The striosomes can thus both up- or down-regulate dopamine activity, a very important discovery. Clearly, the regulation of dopamine activity is critical in our everyday life with regard to both movements and mood, to which the striosomes contribute,” says Sten Grillner, a professor of neuroscience at the Karolinska Institute in Sweden, who was not involved in the research.
Another possibility the researchers plan to explore is whether striosomes and matrix cells are arranged in modules that affect motor control of specific parts of the body.
“The next step is trying to isolate some of these modules, and by simultaneously working with cells that belong to the same module, whether they are in the matrix or striosomes, try to pinpoint how the striosomes modulate the underlying function of each of these modules,” Lazaridis says.
They also hope to explore how the striosomal circuits, which project to the same region of the brain that is ravaged by Parkinson’s disease, may influence that disorder.
The research was funded by the National Institutes of Health, the Saks-Kavanaugh Foundation, the William N. and Bernice E. Bumpus Foundation, Jim and Joan Schattinger, the Hock E. Tan and K. Lisa Yang Center for Autism Research, Robert Buxton, the Simons Foundation, the CHDI Foundation, and an Ellen Schapiro and Gerald Axelbaum Investigator BBRF Young Investigator Grant.
A new framework to efficiently screen drugs
Novel method to scale phenotypic drug screening drastically reduces the number of input samples, costs, and labor required to execute a screen.
Some of the most widely used drugs today, including penicillin, were discovered through a process called phenotypic screening. Using this method, scientists are essentially throwing drugs at a problem — for example, when attempting to stop bacterial growth or fixing a cellular defect — and then observing what happens next, without necessarily first knowing how the drug works. Perhaps surprisingly, historical data show that this approach is better at yielding approved medicines than those investigations that more narrowly focus on specific molecular targets.
But many scientists believe that properly setting up the problem is the true key to success. Certain microbial infections or genetic disorders caused by single mutations are much simpler to prototype than complex diseases like cancer, which require intricate biological models that are far harder to make or acquire. The result is a bottleneck in the number of drugs that can be tested, and thus in the usefulness of phenotypic screening.
Now, a team of scientists led by the Shalek Lab at MIT has developed a promising new way to address the difficulty of applying phenotypic screening at scale. Their method allows researchers to apply many drugs to a biological problem at once, and then computationally work backward to figure out the individual effects of each. For instance, when the team applied this method to models of pancreatic cancer and human immune cells, they were able to uncover surprising new biological insights while also cutting cost and sample requirements several-fold — solving a few problems in scientific research at once.
Zev Gartner, a professor in pharmaceutical chemistry at the University of California at San Francisco, says this new method has great potential. “I think if there is a strong phenotype one is interested in, this will be a very powerful approach,” Gartner says.
The research was published Oct. 8 in Nature Biotechnology. It was led by Ivy Liu, Walaa Kattan, Benjamin Mead, Conner Kummerlowe, and Alex K. Shalek, the director of the Institute for Medical Engineering and Sciences (IMES) and the Health Innovation Hub at MIT, as well as the J. W. Kieckhefer Professor in IMES and the Department of Chemistry. It was supported by the National Institutes of Health and the Bill and Melinda Gates Foundation.
A “crazy” way to increase scale
Technological advances over the past decade have revolutionized our understanding of the inner lives of individual cells, setting the stage for richer phenotypic screens. However, many challenges remain.
For one, biologically representative models like organoids and primary tissues are only available in limited quantities. The most informative tests, like single-cell RNA sequencing, are also expensive, time-consuming, and labor-intensive.
That’s why the team decided to test out the “bold, maybe even crazy idea” to mix everything together, says Liu, a PhD student in the MIT Computational and Systems Biology program. In other words, they chose to combine many perturbations — things like drugs, chemical molecules, or biological compounds made by cells — into one single concoction, and then try to decipher their individual effects afterward.
They began testing their workflow by making different combinations of 316 U.S. Food and Drug Administration-approved drugs. “It’s a high bar: basically, the worst-case scenario,” says Liu. “Since every drug is known to have a strong effect, the signals could have been impossible to disentangle.”
These random combinations ranged from three to 80 drugs per pool, each of which was applied to lab-grown cells. The team then tried to infer the effects of each individual drug using a linear computational model.
It was a success. When compared with traditional tests for each individual drug, the new method yielded comparable results, successfully finding the strongest drugs and their respective effects in each pool, at a fraction of the cost, samples, and effort.
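In spirit, the inference step resembles solving a sparse linear system in which each pool contributes one equation. Below is a minimal sketch of that idea; the pool design, the additive-effect assumption, and the use of scikit-learn’s Lasso are illustrative choices, not details from the paper.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(1)
    n_drugs, n_pools = 316, 100

    # True per-drug effects: most drugs do little; a handful act strongly (assumed).
    effects = np.zeros(n_drugs)
    hits = rng.choice(n_drugs, size=10, replace=False)
    effects[hits] = rng.normal(0.0, 3.0, size=10)

    # Pooling design: each row marks which drugs went into one pooled concoction.
    design = (rng.random((n_pools, n_drugs)) < 0.1).astype(float)

    # Measured phenotype per pool = sum of its drugs' effects + noise (linear model).
    y = design @ effects + rng.normal(0.0, 0.1, size=n_pools)

    # Work backward from pooled readouts to individual drug effects.
    fit = Lasso(alpha=0.05).fit(design, y)
    recovered = np.argsort(-np.abs(fit.coef_))[:10]
    print("true hits:     ", sorted(hits.tolist()))
    print("recovered hits:", sorted(recovered.tolist()))

In this toy setting the sparse regression picks out the strongest drugs from far fewer pooled measurements than drugs, which is the economy the method is after.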
Putting it into practice
To test the method’s applicability to address real-world health challenges, the team then approached two problems that were previously unimaginable with past phenotypic screening techniques.
The first test focused on pancreatic ductal adenocarcinoma (PDAC), one of the deadliest types of cancer. In PDAC, many types of signals come from the surrounding cells in the tumor's environment. These signals can influence how the tumor progresses and responds to treatments. So, the team wanted to identify the most important ones.
Using their new method to pool different signals in parallel, they found several surprise candidates. “We never could have predicted some of our hits,” says Shalek. These included two previously overlooked cytokines that actually could predict survival outcomes of patients with PDAC in public cancer data sets.
The second test looked at the effects of 90 drugs on adjusting the immune system’s function. These drugs were applied to fresh human blood cells, which contain a complex mix of different types of immune cells. Using their new method and single-cell RNA-sequencing, the team could not only test a large library of drugs, but also separate the drugs’ effects out for each type of cell. This enabled the team to understand how each drug might work in a more complex tissue, and then select the best one for the job.
“We might say there’s a defect in a T cell, so we’re going to add this drug, but we never think about, well, what does that drug do to all of the other cells in the tissue?” says Shalek. “We now have a way to gather this information, so that we can begin to pick drugs to maximize on-target effects and minimize side effects.”
Together, these experiments also showed Shalek the need to build better tools and datasets for creating hypotheses about potential treatments. “The complexity and lack of predictability for the responses we saw tells me that we likely are not finding the right, or most effective, drugs in many instances,” says Shalek.
Reducing barriers and improving lives
Although the current compression technique can identify the perturbations with the greatest effects, it’s still unable to perfectly resolve the effects of each one. Therefore, the team recommends that it act as a supplement to support additional screening. “Traditional tests that examine the top hits should follow,” Liu says.
Importantly, however, the new compression framework drastically reduces the number of input samples, costs, and labor required to execute a screen. With fewer barriers in play, it marks an exciting advance for understanding complex responses in different cells and building new models for precision medicine.
Shalek says, “This is really an incredible approach that opens up the kinds of things that we can do to find the right targets, or the right drugs, to use to improve lives for patients.”
Astronomers detect ancient lonely quasars with murky origins
The quasars appear to have few cosmic neighbors, raising questions about how they first emerged more than 13 billion years ago.
A quasar is the extremely bright core of a galaxy that hosts an active supermassive black hole at its center. As the black hole draws in surrounding gas and dust, it blasts out an enormous amount of energy, making quasars some of the brightest objects in the universe. Quasars have been observed as early as a few hundred million years after the Big Bang, and it’s been a mystery as to how these objects could have grown so bright and massive in such a short amount of cosmic time.
Scientists have proposed that the earliest quasars sprang from overly dense regions of primordial matter, which would also have produced many smaller galaxies in the quasars’ environment. But in a new MIT-led study, astronomers observed some ancient quasars that appear to be surprisingly alone in the early universe.
The astronomers used NASA’s James Webb Space Telescope (JWST) to peer back in time, more than 13 billion years, to study the cosmic surroundings of five known ancient quasars. They found a surprising variety in their neighborhoods, or “quasar fields.” While some quasars reside in very crowded fields with more than 50 neighboring galaxies, as all models predict, the remaining quasars appear to drift in voids, with only a few stray galaxies in their vicinity.
These lonely quasars are challenging physicists’ understanding of how such luminous objects could have formed so early on in the universe, without a significant source of surrounding matter to fuel their black hole growth.
“Contrary to previous belief, we find on average, these quasars are not necessarily in those highest-density regions of the early universe. Some of them seem to be sitting in the middle of nowhere,” says Anna-Christina Eilers, assistant professor of physics at MIT. “It’s difficult to explain how these quasars could have grown so big if they appear to have nothing to feed from.”
There is a possibility that these quasars may not be as solitary as they appear, but are instead surrounded by galaxies that are heavily shrouded in dust and therefore hidden from view. Eilers and her colleagues hope to tune their observations to try and see through any such cosmic dust, in order to understand how quasars grew so big, so fast, in the early universe.
Eilers and her colleagues report their findings in a paper appearing today in the Astrophysical Journal. The MIT co-authors include postdocs Rohan Naidu and Minghao Yue; Robert Simcoe, the Francis Friedman Professor of Physics and director of MIT’s Kavli Institute for Astrophysics and Space Research; and collaborators from institutions including Leiden University, the University of California at Santa Barbara, ETH Zurich, and elsewhere.
Galactic neighbors
The five newly observed quasars are among the oldest quasars observed to date. More than 13 billion years old, the objects are thought to have formed between 600 and 700 million years after the Big Bang. The supermassive black holes powering the quasars are a billion times more massive than the sun, and more than a trillion times brighter. Due to their extreme luminosity, the light from each quasar has traveled for more than 13 billion years, far enough to reach JWST’s highly sensitive detectors today.
“It’s just phenomenal that we now have a telescope that can capture light from 13 billion years ago in so much detail,” Eilers says. “For the first time, JWST enabled us to look at the environment of these quasars, where they grew up, and what their neighborhood was like.”
The team analyzed images of the five ancient quasars taken by JWST between August 2022 and June 2023. The observations of each quasar comprised multiple “mosaic” images, or partial views of the quasar’s field, which the team effectively stitched together to produce a complete picture of each quasar’s surrounding neighborhood.
The telescope also took measurements of light in multiple wavelengths across each quasar’s field, which the team then processed to determine whether a given object in the field was light from a neighboring galaxy, and how far a galaxy is from the much more luminous central quasar.
“We found that the only difference between these five quasars is that their environments look so different,” Eilers says. “For instance, one quasar has almost 50 galaxies around it, while another has just two. And both quasars are within the same size, volume, brightness, and time of the universe. That was really surprising to see.”
Growth spurts
The disparity in quasar fields introduces a kink in the standard picture of black hole growth and galaxy formation. According to physicists’ best understanding of how the first objects in the universe emerged, a cosmic web of dark matter should have set the course. Dark matter is an as-yet unknown form of matter that interacts with its surroundings only through gravity.
Shortly after the Big Bang, the early universe is thought to have formed filaments of dark matter that acted as a sort of gravitational road, attracting gas and dust along its tendrils. In overly dense regions of this web, matter would have accumulated to form more massive objects. And the brightest, most massive early objects, such as quasars, would have formed in the web’s highest-density regions, which would have also churned out many more, smaller galaxies.
“The cosmic web of dark matter is a solid prediction of our cosmological model of the Universe, and it can be described in detail using numerical simulations,” says co-author Elia Pizzati, a graduate student at Leiden University. “By comparing our observations to these simulations, we can determine where in the cosmic web quasars are located.”
Scientists estimate that quasars would have had to grow continuously with very high accretion rates in order to reach the extreme mass and luminosities at the times that astronomers have observed them, fewer than 1 billion years after the Big Bang.
“The main question we’re trying to answer is, how do these billion-solar-mass black holes form at a time when the universe is still really, really young? It’s still in its infancy,” Eilers says.
The team’s findings may raise more questions than answers. The “lonely” quasars appear to live in relatively empty regions of space. If physicists’ cosmological models are correct, these barren regions signify very little dark matter, or starting material for brewing up stars and galaxies. How, then, did extremely bright and massive quasars come to be?
“Our results show that there’s still a significant piece of the puzzle missing of how these supermassive black holes grow,” Eilers says. “If there’s not enough material around for some quasars to be able to grow continuously, that means there must be some other way that they can grow, that we have yet to figure out.”
This research was supported, in part, by the European Research Council.
An exotic-materials researcher with the soul of an explorer
Associate professor of physics Riccardo Comin never stops seeking uncharted territory.
Riccardo Comin says the best part of his job as a physics professor and exotic-materials researcher is when his students come into his office to tell him they have new, interesting data.
“It’s that moment of discovery, that moment of awe, of revelation of something that’s outside of anything you know,” says Comin, the Class of 1947 Career Development Associate Professor of Physics. “That’s what makes it all worthwhile.”
Intriguing data energizes Comin because it can potentially grant access to an unexplored world. His team has discovered materials with quantum and other exotic properties, which could find a range of applications, such as handling the world’s exploding quantities of data, more precise medical imaging, and vastly increased energy efficiency — to name just a few. For Comin, who has always been somewhat of an explorer, new discoveries satisfy a kind of intellectual wanderlust.
As a small child growing up in the city of Udine in northeast Italy, Comin loved geography and maps, even drawing his own of imaginary cities and countries. He traveled literally, too, touring Europe with his parents; his father, a project manager on large projects for the Italian railroads, received free train travel.
Comin also loved numbers from an early age, and by about eighth grade would go to the public library to delve into math textbooks about calculus and analytical geometry that were far beyond what he was being taught in school. Later, in high school, Comin enjoyed being challenged by a math and physics teacher who in class would ask him questions about extremely advanced concepts.
“My classmates were looking at me like I was an alien, but I had a lot of fun,” Comin says.
Unafraid to venture alone into more rarefied areas of study, Comin nonetheless sought community, and appreciated the rapport he had with his teacher.
“He gave me the kind of interaction I was looking for, because otherwise it would have been just me and my books,” Comin says. “He helped transform an isolated activity into a social one. He made me feel like I had a buddy.”
By the end of his undergraduate studies at the University of Trieste, Comin says he decided on experimental physics, to have “the opportunity to explore and observe physical phenomena.”
He visited a nearby research facility that houses the Elettra Synchrotron to look for a research position where he could work on his undergraduate thesis, and became interested in all of the materials science research being conducted there. Drawn to community as well as the research, he chose a group that was investigating how the atoms and molecules in a liquid can rearrange themselves to become a glass.
“This one group struck me. They seemed to really enjoy what they were doing, and they had fun outside of work and enjoyed the outdoors,” Comin says. “They seemed to be a nice group of people to be part of. I think I cared more about the social environment than the specific research topic.”
By the time Comin was finishing his master’s, also in Trieste, and wanted to get a PhD, his focus had turned to electrons inside a solid rather than the behavior of atoms and molecules. Having traveled “literally almost everywhere in Europe,” Comin says he wanted to experience a different research environment outside of Europe.
He told his academic advisor he wanted to go to North America and was connected with Andrea Damascelli, the Canada Research Chair in Electronic Structure of Quantum Materials at the University of British Columbia, who was working on high-temperature superconductors. Comin says he was fascinated by the behavior of the electrons in the materials Damascelli and his group were studying.
“It’s almost like a quantum choreography, particles that dance together” rather than moving in many different directions, Comin says.
Comin’s subsequent postdoctoral work at the University of Toronto, focusing on optoelectronic materials — which can interact with photons and electrical energy — ignited his passion for connecting a material’s properties to its functionality and bridging the gap between fundamental physics and real-world applications.
Since coming to MIT in 2016, Comin has continued to delight in the behavior of electrons. He and Joe Checkelsky, associate professor of physics, had a breakthrough with a new class of materials in which electrons, very atypically, are nearly stationary.
Such materials could be used to explore applications such as power lines with zero energy loss, as well as new approaches to quantum computing.
“It’s a very peculiar state of matter,” says Comin. “Normally, electrons are just zapping around. If you put an electron in a crystalline environment, what that electron will want to do is hop around, explore its neighbors, and basically be everywhere at the same time.”
The nearly stationary electrons occur in materials whose lattice of interlaced triangles and hexagons tends to trap the electrons on the hexagons. Because the trapped electrons all have the same energy, they create what’s called an electronic flat band, named for the pattern that appears when the electrons’ energies are measured. Such flat bands had been predicted theoretically, but they had not been observed.
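That triangle-and-hexagon arrangement describes a kagome lattice, and the flat band falls out of the textbook nearest-neighbor tight-binding model on that lattice. The sketch below numerically checks the generic model; it illustrates the concept only and makes no claim about the specific compounds the groups studied.

    import numpy as np

    # Nearest-neighbor tight-binding model on the kagome lattice (3 sites per unit cell).
    t = 1.0
    a1 = np.array([1.0, 0.0])
    a2 = np.array([0.5, np.sqrt(3.0) / 2.0])

    def band_energies(k):
        c1 = np.cos(k @ a1 / 2)
        c2 = np.cos(k @ a2 / 2)
        c3 = np.cos(k @ (a2 - a1) / 2)
        H = -2.0 * t * np.array([[0.0, c1, c2],
                                 [c1, 0.0, c3],
                                 [c2, c3, 0.0]])
        return np.linalg.eigvalsh(H)  # three band energies, in ascending order

    rng = np.random.default_rng(2)
    energies = np.array([band_energies(rng.uniform(-np.pi, np.pi, 2))
                         for _ in range(2000)])
    top_band = energies[:, 2]
    print(f"top band: mean {top_band.mean():.6f}, spread {np.ptp(top_band):.1e}")
    # The spread is ~1e-15: the top band sits at exactly 2t for every momentum -- flat.

A band with no dispersion means the electrons gain nothing by hopping; they sit still, which is exactly the “sedentary” behavior described above.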
Comin says he and his colleagues made educated guesses on where to find flat bands, but they were elusive. After three years of research, however, they had a breakthrough.
“We put a sample material in an experimental chamber, we aligned the sample to do the experiment and started the measurement and, literally, five to 10 minutes later, we saw this beautiful flat band on the screen,” Comin says. “It was so clear, like this thing was basically screaming, ‘How could you not find me before?’
“That started off a whole area of research that is growing and growing — and a new direction in our field.”
Comin’s later research has yielded another new realm to explore: certain two-dimensional materials, just single atoms thick, with an internal structural feature called chirality, a right-handedness or left-handedness similar to the way a spiral twists in one direction or the other.
By controlling the chirality, “there are interesting prospects of realizing a whole new class of devices” that could store information in a way that’s more robust and much more energy-efficient than current methods, says Comin, who is affiliated with MIT’s Materials Research Laboratory. Such devices would be especially valuable as the amount of data available generally and technologies like artificial intelligence grow exponentially.
While investigating these previously unknown properties of certain materials, Comin is characteristically adventurous in his pursuit.
“I embrace the randomness that nature throws at you,” he says. “It appears random, but there could be something behind it, so we try variations, switch things around, see what nature serves you. Much of what we discover is due to luck — and the rest boils down to a mix of knowledge and intuition to recognize when we’re seeing something new, something that’s worth exploring.”
Q&A: How the Europa Clipper will set cameras on a distant icy moon
MIT Research Scientist Jason Soderblom describes how the NASA mission will study the geology and composition of the surface of Jupiter’s water-rich moon and assess its astrobiological potential.
With its latest space mission successfully launched, NASA is set to return for a close-up investigation of Jupiter’s moon Europa. Yesterday at 12:06 p.m. EDT, the Europa Clipper lifted off aboard a SpaceX Falcon Heavy rocket on a mission that will take a close look at Europa’s icy surface. Five years from now, the spacecraft will visit the moon, which hosts a water ocean covered by a water-ice shell. The spacecraft’s mission is to learn more about the composition and geology of the moon’s surface and interior and to assess its astrobiological potential. Because of Jupiter’s intense radiation environment, Europa Clipper will conduct a series of flybys, with its closest approach bringing it within just 16 miles of Europa’s surface.
MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) Research Scientist Jason Soderblom is a co-investigator on two of the spacecraft’s instruments: the Europa Imaging System and the Mapping Imaging Spectrometer for Europa. Over the past nine years, he and his fellow team members have been building imaging and mapping instruments to study Europa’s surface in detail to gain a better understanding of previously seen geologic features, as well as the chemical composition of the materials that are present. Here, he describes the mission's primary plans and goals.
Q: What do we currently know about Europa’s surface?
A: We know from data gathered by NASA’s Galileo mission that the surface crust is relatively thin, but we don’t know how thin it is. One of the goals of the Europa Clipper mission is to measure the thickness of that ice shell. The surface is riddled with fractures that indicate tectonism is actively resurfacing the moon. Its crust is primarily composed of water ice, but there are also exposures of non-ice material along these fractures and ridges that we believe include material coming up from within Europa.
One of the things that makes investigating the materials on the surface more difficult is the environment. Jupiter is a significant source of radiation, and Europa is relatively close to Jupiter. That radiation modifies the materials on the surface; understanding that radiation damage is a key component to understanding the composition.
This is also what drives the clipper-style mission and gives the mission its name: we clip by Europa, collect data, and then spend the majority of our time outside of the radiation environment. That allows us time to download the data, analyze it, and make plans for the next flyby.
Q: Did that pose a significant challenge when it came to instrument design?
A: Yes, and this is one of the reasons that we're just now returning to do this mission. The concept of this mission came about around the time of the Galileo mission in the late 1990s, so it's been roughly 25 years since scientists first wanted to carry out this mission. A lot of that time has been figuring out how to deal with the radiation environment.
There's a lot of tricks that we've been developing over the years. The instruments are heavily shielded, and lots of modeling has gone into figuring exactly where to put that shielding. We've also developed very specific techniques to collect data. For example, by taking a whole bunch of short observations, we can look for the signature of this radiation noise, remove it from the little bits of data here and there, add the good data together, and end up with a low-radiation-noise observation.
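The stacking strategy Soderblom describes can be sketched numerically: radiation hits corrupt scattered samples within each short observation, but a robust combination of many observations rejects them. The frame counts, hit rates, and use of a median below are illustrative assumptions, not instrument specifics.

    import numpy as np

    rng = np.random.default_rng(3)
    n_frames, n_pixels = 64, 1000
    true_signal = 100.0

    # Each short observation: the real signal + readout noise + sparse radiation spikes.
    frames = true_signal + rng.normal(0.0, 2.0, (n_frames, n_pixels))
    hits = rng.random((n_frames, n_pixels)) < 0.02      # ~2% of samples corrupted
    frames[hits] += rng.exponential(500.0, hits.sum())  # large positive spikes

    naive = frames.mean(axis=0)          # spikes bias a straight average upward
    robust = np.median(frames, axis=0)   # the median rejects sparse spikes

    print(f"naive-average error: {abs(naive.mean() - true_signal):.2f}")
    print(f"median-stack error:  {abs(robust.mean() - true_signal):.2f}")

Combining many short, independent looks lets the rare spikes be identified and discarded while the good data add together, which is the essence of the technique.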
Q: You're involved with the two different imaging and mapping instruments: the Europa Imaging System (EIS) and the Mapping Imaging Spectrometer for Europa (MISE). How are they different from each other?
A: The camera system [EIS] is primarily focused on understanding the physics and the geology that's driving processes on the surface, looking for: fractured zones; regions that we refer to as chaos terrain, where it looks like icebergs have been suspended in a slurry of water and have jumbled around and mixed and twisted; regions where we believe the surface is colliding and subduction is occurring, so one section of the surface is going beneath the other; and other regions that are spreading, so new surface is being created like our mid-ocean ridges on Earth.
The spectrometer’s [MISE] primary function is to constrain the composition of the surface. In particular, we’re really interested in sections where we think liquid water might have come to the surface. It is also important to separate material that originates within Europa from material deposited by external sources; only then can we use the composition of the native material to learn about the composition of the subsurface ocean.
There is an intersection between those two, and that's my interest in the mission. We have color imaging with our imaging system that can provide some crude understanding of the composition, and there is a mapping component to our spectrometer that allows us to understand how the materials that we're detecting are physically distributed and correlate with the geology. So there's a way to examine the intersection of those two disciplines — to extrapolate the compositional information derived from the spectrometer to much higher resolutions using the camera, and to extrapolate the geological information that we learn from the camera to the compositional constraints from the spectrometer.
Q: How do those mission goals align with the research that you've been doing here at MIT?
A: One of the other major missions that I’ve been involved with was the Cassini mission, primarily working with the Visual and Infrared Mapping Spectrometer team to understand the geology and composition of Saturn’s moon Titan. That instrument is very similar to the MISE instrument, both in function and in science objective, and so there’s a very strong connection between that and the Europa Clipper mission. I’m also leading the camera team for another mission, which is working to retrieve a sample of a comet; my primary function on that mission is understanding the geology of the cometary surface.
Q: What are you most excited about learning from the Europa Clipper mission?
A: I'm most fascinated with some of these very unique geologic features that we see on the surface of Europa, understanding the composition of the material that is involved, and the processes that are driving those features. In particular, the chaos terrains and the fractures that we see on the surface.
Q: It's going to be a while before the spacecraft finally reaches Europa. What work needs to be done in the meantime?
A: A key component of this mission will be the laboratory work here on Earth, expanding our spectral libraries so that when we collect a spectrum of Europa’s surface, we can compare it to laboratory measurements. We are also in the process of developing a number of models to allow us to, for example, understand how a material might be processed and changed as it starts in the ocean and works its way up through fractures and eventually to the surface. Developing these models now is an important step; once we begin collecting data, we can make corrections and get improved observations as the mission progresses. Making the best and most efficient use of the spacecraft resources requires an ability to reprogram and refine observations in real time.
Model reveals why debunking election misinformation often doesn’t work
The new study also identifies factors that can make these efforts more successful.
When an election result is disputed, people who are skeptical about the outcome may be swayed by figures of authority who come down on one side or the other. Those figures can be independent monitors, political figures, or news organizations. However, these “debunking” efforts don’t always have the desired effect, and in some cases, they can lead people to cling more tightly to their original position.
Neuroscientists and political scientists at MIT and the University of California at Berkeley have now created a computational model that analyzes the factors that help to determine whether debunking efforts will persuade people to change their beliefs about the legitimacy of an election. Their findings suggest that while debunking fails much of the time, it can be successful under the right conditions.
For instance, the model showed that successful debunking is more likely if people are less certain of their original beliefs and if they believe the authority is unbiased or strongly motivated by a desire for accuracy. It also helps when an authority comes out in support of a result that goes against a bias they are perceived to hold: for example, Fox News declaring that Joseph R. Biden had won in Arizona in the 2020 U.S. presidential election.
“When people see an act of debunking, they treat it as a human action and understand it the way they understand human actions — that is, as something somebody did for their own reasons,” says Rebecca Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study. “We’ve used a very simple, general model of how people understand other people’s actions, and found that that’s all you need to describe this complex phenomenon.”
The findings could have implications as the United States prepares for the presidential election taking place on Nov. 5, as they help to reveal the conditions that would be most likely to result in people accepting the election outcome.
MIT graduate student Setayesh Radkani is the lead author of the paper, which appears today in a special election-themed issue of the journal PNAS Nexus. Marika Landau-Wells PhD ’18, a former MIT postdoc who is now an assistant professor of political science at the University of California at Berkeley, is also an author of the study.
Modeling motivation
In their work on election debunking, the MIT team took a novel approach, building on Saxe’s extensive work studying “theory of mind” — how people think about the thoughts and motivations of other people.
As part of her PhD thesis, Radkani has been developing a computational model of the cognitive processes that occur when people see others being punished by an authority. Not everyone interprets punitive actions the same way, depending on their previous beliefs about the action and the authority. Some may see the authority as acting legitimately to punish an act that was wrong, while others may see an authority overreaching to issue an unjust punishment.
Last year, after participating in an MIT workshop on the topic of polarization in societies, Saxe and Radkani had the idea to apply the model to how people react to an authority attempting to sway their political beliefs. They enlisted Landau-Wells, who received her PhD in political science before working as a postdoc in Saxe’s lab, to join their effort, and Landau-Wells suggested applying the model to debunking of beliefs regarding the legitimacy of an election result.
The computational model created by Radkani is based on Bayesian inference, which allows the model to continually update its predictions of people’s beliefs as they receive new information. This approach treats debunking as an action that a person undertakes for his or her own reasons. People who observe the authority’s statement then make their own interpretation of why the person said what they did. Based on that interpretation, people may or may not change their own beliefs about the election result.
Additionally, the model does not assume that any beliefs are necessarily incorrect or that any group of people is acting irrationally.
“The only assumption that we made is that there are two groups in the society that differ in their perspectives about a topic: One of them thinks that the election was stolen and the other group doesn’t,” Radkani says. “Other than that, these groups are similar. They share their beliefs about the authority — what the different motives of the authority are and how motivated the authority is by each of those motives.”
The researchers modeled more than 200 different scenarios in which an authority attempts to debunk a belief held by one group regarding the validity of an election outcome.
Each time they ran the model, the researchers altered the certainty levels of each group’s original beliefs, and they also varied the groups’ perceptions of the motivations of the authority. In some cases, groups believed the authority was motivated by promoting accuracy, and in others they did not. The researchers also altered the groups’ perceptions of whether the authority was biased toward a particular viewpoint, and how strongly the groups believed in those perceptions.
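To make the mechanics concrete, here is a minimal sketch of Bayesian belief updating in the spirit of the model. It is not the authors' code: the likelihood structure and all numbers are illustrative assumptions, and the actual model also infers the authority's motives jointly across repeated statements.

```python
# Minimal sketch of a Bayesian observer updating a belief about an election
# after an authority asserts the result was legitimate. Illustrative only.

def posterior_stolen(prior_stolen, p_accuracy_motive, p_biased_endorse):
    """Observer's updated P(election was stolen) after hearing "legitimate".

    prior_stolen      -- observer's prior P(stolen)
    p_accuracy_motive -- perceived P(authority is motivated by accuracy)
    p_biased_endorse  -- perceived P(a biased authority endorses the result
                         regardless of the truth)
    """
    # Likelihood of the authority saying "legitimate" in each world state.
    # An accuracy-motivated authority mostly tracks the truth (the 0.95/0.05
    # split is an assumed value); a biased one endorses at a fixed rate.
    p_say_if_stolen = (p_accuracy_motive * 0.05
                       + (1 - p_accuracy_motive) * p_biased_endorse)
    p_say_if_legit = (p_accuracy_motive * 0.95
                      + (1 - p_accuracy_motive) * p_biased_endorse)
    evidence = (p_say_if_stolen * prior_stolen
                + p_say_if_legit * (1 - prior_stolen))
    return p_say_if_stolen * prior_stolen / evidence  # Bayes' rule

# An uncertain observer who sees the authority as accuracy-driven shifts a lot:
print(posterior_stolen(0.60, 0.9, 0.5))  # ~0.14
# A highly certain observer who sees the authority as biased barely moves:
print(posterior_stolen(0.95, 0.1, 0.9))  # ~0.94
```

Consistent with the study's qualitative findings, the same statement moves an uncertain observer substantially while leaving a certain, distrustful observer essentially unchanged.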
Building consensus
In each scenario, the researchers used the model to predict how each group would respond to a series of five statements made by an authority trying to convince them that the election had been legitimate. The researchers found that in most of the scenarios they looked at, beliefs remained polarized and in some cases became even further polarized. This polarization could also extend to new topics unrelated to the original context of the election, the researchers found.
However, under some circumstances, the debunking was successful, and beliefs converged on an accepted outcome. This was more likely to happen when people were initially more uncertain about their original beliefs.
“When people are very, very certain, they become hard to move. So, in essence, a lot of this authority debunking doesn’t matter,” Landau-Wells says. “However, there are a lot of people who are in this uncertain band. They have doubts, but they don’t have firm beliefs. One of the lessons from this paper is that we’re in a space where the model says you can affect people’s beliefs and move them towards true things.”
Another factor that can lead to belief convergence is if people believe that the authority is unbiased and highly motivated by accuracy. Even more persuasive is when an authority makes a claim that goes against their perceived bias — for instance, Republican governors stating that elections in their states had been fair even though the Democratic candidate won.
As the 2024 presidential election approaches, grassroots efforts have been made to train nonpartisan election observers who can vouch for whether an election was legitimate. These types of organizations may be well-positioned to help sway people who might have doubts about the election’s legitimacy, the researchers say.
“They’re trying to train people to be independent, unbiased, and committed to the truth of the outcome more than anything else. Those are the types of entities that you want. We want them to succeed in being seen as independent. We want them to succeed as being seen as truthful, because in this space of uncertainty, those are the voices that can move people toward an accurate outcome,” Landau-Wells says.
The research was funded, in part, by the Patrick J. McGovern Foundation and the Guggenheim Foundation.
Tiny magnetic discs offer remote brain stimulation without transgenes
The devices could be a useful tool for biomedical research, and possibly for clinical use in the future.
Novel magnetic nanodiscs could provide a much less invasive way of stimulating parts of the brain, paving the way for stimulation therapies without implants or genetic modification, MIT researchers report.
The scientists envision that the tiny discs, which are about 250 nanometers across (about 1/500 the width of a human hair), would be injected directly into the desired location in the brain. From there, they could be activated at any time simply by applying a magnetic field outside the body. The new particles could quickly find applications in biomedical research, and eventually, after sufficient testing, might be applied to clinical uses.
The development of these nanoparticles is described in the journal Nature Nanotechnology, in a paper by Polina Anikeeva, a professor in MIT’s departments of Materials Science and Engineering and Brain and Cognitive Sciences, graduate student Ye Ji Kim, and 17 others at MIT and in Germany.
Deep brain stimulation (DBS) is a common clinical procedure that uses electrodes implanted in the target brain regions to treat symptoms of neurological and psychiatric conditions such as Parkinson’s disease and obsessive-compulsive disorder. Despite its efficacy, the surgical difficulty and clinical complications associated with DBS limit the number of cases where such an invasive procedure is warranted. The new nanodiscs could provide a much more benign way of achieving the same results.
Over the past decade, other implant-free methods of brain stimulation have been developed. However, these approaches have often been limited in their spatial resolution or their ability to target deep regions. Anikeeva’s Bioelectronics group, along with others in the field, has used magnetic nanomaterials to transduce remote magnetic signals into brain stimulation. But those magnetic methods relied on genetic modification, so they cannot be used in humans.
Since all nerve cells are sensitive to electrical signals, Kim, a graduate student in Anikeeva’s group, hypothesized that a magnetoelectric nanomaterial that can efficiently convert magnetization into electrical potential could offer a path toward remote magnetic brain stimulation. Creating a nanoscale magnetoelectric material was, however, a formidable challenge.
Kim synthesized novel magnetoelectric nanodiscs and collaborated with Noah Kent, a postdoc in Anikeeva’s lab with a background in physics who is a second author of the study, to understand the properties of these particles.
The structure of the new nanodiscs consists of a two-layer magnetic core and a piezoelectric shell. The magnetic core is magnetostrictive, which means it changes shape when magnetized. This deformation then induces strain in the piezoelectric shell which produces a varying electrical polarization. Through the combination of the two effects, these composite particles can deliver electrical pulses to neurons when exposed to magnetic fields.
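In schematic terms (a simplification for intuition, with notation of my own rather than equations from the paper), the two-stage transduction chain can be written as:

```latex
% Schematic transduction chain (illustrative notation, not from the paper):
% applied magnetic field H -> strain -> electric polarization.
\varepsilon(H) \approx \lambda(H)            % magnetostrictive strain
P = e \, \varepsilon(H)                      % piezoelectric polarization
\alpha_{\mathrm{ME}} \equiv \frac{\partial P}{\partial H}
  = \frac{\partial P}{\partial \varepsilon}\,
    \frac{\partial \varepsilon}{\partial H}  % magnetoelectric coupling
```

An alternating applied field therefore produces an alternating local electric polarization, which is what nearby neurons respond to. The product form also makes a limitation described later in the article concrete: a large gain in the strain term does not guarantee an equally large gain in overall coupling if the strain-to-polarization conversion does not keep pace.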
One key to the devices’ effectiveness is their disc shape. Previous attempts to use magnetic nanoparticles had employed spherical particles, in which the magnetoelectric effect was very weak, says Kim. The discs’ shape anisotropy enhances magnetostriction more than a thousandfold, adds Kent.
The team first added their nanodiscs to cultured neurons, which allowed them to activate these cells on demand with short pulses of magnetic field. This stimulation did not require any genetic modification.
They then injected small droplets of the magnetoelectric nanodisc solution into specific regions of the brains of mice. Simply turning on a relatively weak electromagnet nearby then triggered the particles to release a tiny jolt of electricity in that brain region, and the stimulation could be switched on and off remotely by switching the electromagnet. That electrical stimulation “had an impact on neuron activity and on behavior,” Kim says.
The team found that the magnetoelectric nanodiscs could stimulate a deep brain region, the ventral tegmental area, that is associated with feelings of reward.
The team also stimulated another brain area, the subthalamic nucleus, associated with motor control. “This is the region where electrodes typically get implanted to manage Parkinson’s disease,” Kim explains. The researchers were able to successfully demonstrate the modulation of motor control through the particles. Specifically, by injecting nanodiscs only in one hemisphere, the researchers could induce rotations in healthy mice by applying a magnetic field.
The nanodiscs could trigger neuronal activity comparable to that of conventional implanted electrodes delivering mild electrical stimulation. The authors achieved subsecond temporal precision for neural stimulation with their method, yet observed significantly reduced foreign-body responses compared to the electrodes, potentially allowing for even safer deep brain stimulation.
The chemical composition, physical shape, and size of the new multilayered nanodiscs are what made precise stimulation possible.
While the researchers successfully increased the magnetostrictive effect, the second part of the process, converting the magnetic effect into an electrical output, still needs more work, Anikeeva says. While the magnetic response was a thousand times greater, the conversion to an electric impulse was only four times greater than with conventional spherical particles.
“This massive enhancement of a thousand times didn’t completely translate into the magnetoelectric enhancement,” says Kim. “That’s where a lot of the future work will be focused, on making sure that the thousand times amplification in magnetostriction can be converted into a thousand times amplification in the magnetoelectric coupling.”
What the team found about the way the particles’ shape affects their magnetostriction was quite unexpected. “It’s kind of a new thing that just appeared when we tried to figure out why these particles worked so well,” says Kent.
Anikeeva adds: “Yes, it’s a record-breaking particle, but it’s not as record-breaking as it could be.” That remains a topic for further work, but the team has ideas about how to make further progress.
While these nanodiscs could in principle already be applied to basic research using animal models, to translate them to clinical use in humans would require several more steps, including large-scale safety studies, “which is something academic researchers are not necessarily most well-positioned to do,” Anikeeva says. “When we find that these particles are really useful in a particular clinical context, then we imagine that there will be a pathway for them to undergo more rigorous large animal safety studies.”
The team included researchers affiliated with MIT’s departments of Materials Science and Engineering, Electrical Engineering and Computer Science, Chemistry, and Brain and Cognitive Sciences; the Research Laboratory of Electronics; the McGovern Institute for Brain Research; and the Koch Institute for Integrative Cancer Research; and from the Friedrich-Alexander University of Erlangen, Germany. The work was supported, in part, by the National Institutes of Health, the National Center for Complementary and Integrative Health, the National Institute for Neurological Disorders and Stroke, the McGovern Institute for Brain Research, and the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics in Neuroscience.
A new method makes high-resolution imaging more accessible
Labs that can’t afford expensive super-resolution microscopes could use a new expansion technique to image nanoscale structures inside cells.
A classical way to image nanoscale structures in cells is with high-powered, expensive super-resolution microscopes. As an alternative, MIT researchers have developed a way to expand tissue before imaging it — a technique that allows them to achieve nanoscale resolution with a conventional light microscope.
In the newest version of this technique, the researchers have made it possible to expand tissue 20-fold in a single step. This simple, inexpensive method could pave the way for nearly any biology lab to perform nanoscale imaging.
“This democratizes imaging,” says Laura Kiessling, the Novartis Professor of Chemistry at MIT and a member of the Broad Institute of MIT and Harvard and MIT’s Koch Institute for Integrative Cancer Research. “Without this method, if you want to see things with a high resolution, you have to use very expensive microscopes. What this new technique allows you to do is see things that you couldn’t normally see with standard microscopes. It drives down the cost of imaging because you can see nanoscale things without the need for a specialized facility.”
At the resolution achieved by this technique, which is around 20 nanometers, scientists can see organelles inside cells, as well as clusters of proteins.
“Twenty-fold expansion gets you into the realm that biological molecules operate in. The building blocks of life are nanoscale things: biomolecules, genes, and gene products,” says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT; a professor of biological engineering, media arts and sciences, and brain and cognitive sciences; a Howard Hughes Medical Institute investigator; and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research.
Boyden and Kiessling are the senior authors of the new study, which appears today in Nature Methods. MIT graduate student Shiwei Wang and Tay Won Shin PhD ’23 are the lead authors of the paper.
A single expansion
Boyden’s lab invented expansion microscopy in 2015. The technique requires embedding tissue into an absorbent polymer and breaking apart the proteins that normally hold tissue together. When water is added, the gel swells and pulls biomolecules apart from each other.
The original version of this technique, which expanded tissue about fourfold, allowed researchers to obtain images with a resolution of around 70 nanometers. In 2017, Boyden’s lab modified the process to include a second expansion step, achieving an overall 20-fold expansion. This enables even higher resolution, but the process is more complicated.
“We’ve developed several 20-fold expansion technologies in the past, but they require multiple expansion steps,” Boyden says. “If you could do that amount of expansion in a single step, that could simplify things quite a bit.”
With 20-fold expansion, researchers can get down to a resolution of about 20 nanometers using a conventional light microscope. This allows them to see cell structures like microtubules and mitochondria, as well as clusters of proteins.
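The resolution figures follow from simple arithmetic: physical expansion divides the microscope's diffraction-limited resolution by the linear expansion factor. Taking a conventional diffraction limit of roughly 300 to 400 nanometers (the exact value depends on wavelength and optics) gives numbers consistent with those quoted above:

```latex
r_{\mathrm{eff}} \approx \frac{r_{\mathrm{diffraction}}}{\text{expansion factor}}:
\qquad \frac{\sim 300\ \mathrm{nm}}{4} \approx 70\ \mathrm{nm},
\qquad \frac{\sim 300\text{--}400\ \mathrm{nm}}{20} \approx 15\text{--}20\ \mathrm{nm}.
```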
In the new study, the researchers set out to perform 20-fold expansion with only a single step. This meant that they had to find a gel that was both extremely absorbent and mechanically stable, so that it wouldn’t fall apart when expanded 20-fold.
To achieve that, they used a gel assembled from N,N-dimethylacrylamide (DMAA) and sodium acrylate. Unlike previous expansion gels that rely on adding another molecule to form crosslinks between the polymer strands, this gel forms crosslinks spontaneously and exhibits strong mechanical properties. Such gel components previously had been used in expansion microscopy protocols, but the resulting gels could expand only about tenfold. The MIT team optimized the gel and the polymerization process to make the gel more robust, and to allow for 20-fold expansion.
To further stabilize the gel and enhance its reproducibility, the researchers removed oxygen from the polymer solution prior to gelation, which prevents side reactions that interfere with crosslinking. This step requires running nitrogen gas through the polymer solution, which replaces most of the oxygen in the system.
Once the gel is formed, select bonds in the proteins that hold the tissue together are broken and water is added to make the gel expand. After the expansion is performed, target proteins in tissue can be labeled and imaged.
“This approach may require more sample preparation compared to other super-resolution techniques, but it’s much simpler when it comes to the actual imaging process, especially for 3D imaging,” Shin says. “We document the step-by-step protocol in the manuscript so that readers can go through it easily.”
Imaging tiny structures
Using this technique, the researchers were able to image many tiny structures within brain cells, including structures called synaptic nanocolumns. These are clusters of proteins that are arranged in a specific way at neuronal synapses, allowing neurons to communicate with each other via secretion of neurotransmitters such as dopamine.
In studies of cancer cells, the researchers also imaged microtubules — hollow tubes that help give cells their structure and play important roles in cell division. They were also able to see mitochondria (organelles that generate energy) and even the organization of individual nuclear pore complexes (clusters of proteins that control access to the cell nucleus).
Wang is now using this technique to image carbohydrates known as glycans, which are found on cell surfaces and help control cells’ interactions with their environment. This method could also be used to image tumor cells, allowing scientists to glimpse how proteins are organized within those cells, much more easily than has previously been possible.
The researchers envision that any biology lab should be able to use this technique at a low cost since it relies on standard, off-the-shelf chemicals and common equipment such as confocal microscopes and glove bags, which most labs already have or can easily access.
“Our hope is that with this new technology, any conventional biology lab can use this protocol with their existing microscopes, allowing them to approach resolution that can only be achieved with very specialized and costly state-of-the-art microscopes,” Wang says.
The research was funded, in part, by the U.S. National Institutes of Health, an MIT Presidential Graduate Fellowship, U.S. National Science Foundation Graduate Research Fellowship grants, Open Philanthropy, Good Ventures, the Howard Hughes Medical Institute, Lisa Yang, Ashar Aziz, and the European Research Council.
The way sensory prediction changes under anesthesia tells us how conscious cognition works
A new study adds evidence that consciousness requires communication between sensory and cognitive regions of the brain’s cortex.
Our brains constantly work to make predictions about what’s going on around us, ensuring that we can notice and attend to the unexpected. A new study examines how this process works during consciousness, and how it breaks down under general anesthesia. The results add evidence to the idea that conscious thought requires synchronized communication — mediated by brain rhythms in specific frequency bands — between basic sensory and higher-order cognitive regions of the brain.
Previously, members of the research team in The Picower Institute for Learning and Memory at MIT and at Vanderbilt University had described how brain rhythms enable the brain to remain prepared to attend to surprises. Cognition-oriented brain regions (generally at the front of the brain) use relatively low-frequency alpha and beta rhythms to suppress processing by sensory regions (generally toward the back of the brain) of stimuli that have become familiar and mundane in the environment (e.g., your co-worker’s music). When sensory regions detect a surprise (e.g., the office fire alarm), they use faster-frequency gamma rhythms to tell the higher regions about it, and the higher regions process that at gamma frequencies to decide what to do (e.g., exit the building).
The new results, published Oct. 7 in the Proceedings of the National Academy of Sciences, show that when animals were under propofol-induced general anesthesia, a sensory region retained the capacity to detect simple surprises, but its communication with a higher cognitive region toward the front of the brain was lost. That left the higher region unable to engage in its “top-down” regulation of the sensory region’s activity, and kept it oblivious to simple and more complex surprises alike.
What we've got here is failure to communicate
“What we are doing here speaks to the nature of consciousness,” says co-senior author Earl K. Miller, Picower Professor in The Picower Institute for Learning and Memory and MIT’s Department of Brain and Cognitive Sciences. “Propofol general anesthesia deactivates the top-down processes that underlie cognition. It essentially disconnects communication between the front and back halves of the brain.”
Co-senior author Andre Bastos, an assistant professor in the psychology department at Vanderbilt and a former member of Miller’s MIT lab, adds that the study results highlight the key role of frontal areas in consciousness.
“These results are particularly important given the newfound scientific interest in the mechanisms of consciousness, and how consciousness relates to the ability of the brain to form predictions,” Bastos says.
The brain’s ability to predict is dramatically altered during anesthesia. Notably, areas at the front of the brain that are associated with cognition were more strongly diminished in their predictive abilities than sensory areas were. This suggests that prefrontal areas help to spark an “ignition” event that allows sensory information to become conscious; sensory cortex activation by itself does not lead to conscious perception. These observations help narrow down possible models for the mechanisms of consciousness.
Yihan Sophy Xiong, a graduate student in Bastos’ lab who led the study, says the anesthetic reduces the times in which inter-regional communication within the cortex can occur.
“In the awake brain, brain waves give short windows of opportunity for neurons to fire optimally — the ‘refresh rate’ of the brain, so to speak,” Xiong says. “This refresh rate helps organize different brain areas to communicate effectively. Anesthesia both slows down the refresh rate, which narrows these time windows for brain areas to talk to each other and makes the refresh rate less effective, so that neurons become more disorganized about when they can fire. When the refresh rate no longer works as intended, our ability to make predictions is weakened.”
Learning from oddballs
To conduct the research, the neuroscientists measured the electrical signals, or “spiking,” of hundreds of individual neurons, along with the coordinated rhythms of their aggregated activity (at alpha/beta and gamma frequencies), in two areas on the surface, or cortex, of the brain of two animals as they listened to sequences of tones. Sometimes the sequences would all be the same note (e.g., AAAAA). Sometimes there’d be a simple surprise that the researchers called a “local oddball” (e.g., AAAAB). But sometimes the surprise would be more complicated, or a “global oddball.” For example, after hearing a series of AAAABs, there’d all of a sudden be AAAAA, which violates the global but not the local pattern.
Prior work has suggested that a sensory region (in this case the temporoparietal area, or Tpt) can spot local oddballs on its own, Miller says. Detecting the more complicated global oddball requires the participation of a higher order region (in this case the frontal eye fields, or FEF).
The animals heard the tone sequences both while awake and while under propofol anesthesia. The waking-state results held no surprises: the researchers reaffirmed that top-down alpha/beta rhythms from FEF carried predictions to Tpt, and that Tpt would increase gamma rhythms when an oddball came up, causing FEF (and the prefrontal cortex) to respond with upticks of gamma activity as well.
But by several measures and analyses, the scientists could see these dynamics break down after the animals lost consciousness.
Under propofol, for instance, spiking activity declined overall, but when a local oddball came along, Tpt spiking still increased notably; spiking in FEF, however, no longer followed suit as it does during wakefulness.
Meanwhile, when a global oddball was presented during wakefulness, the researchers could use software to “decode” representation of that among neurons in FEF and the prefrontal cortex (another cognition-oriented region). They could also decode local oddballs in the Tpt. But under anesthesia the decoder could no longer reliably detect representation of local or global oddballs in FEF or the prefrontal cortex.
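The article does not specify the decoding method. A standard approach, sketched below on simulated spike counts with hypothetical parameters, is a cross-validated linear classifier trained to separate oddball from standard trials; if the population carries an oddball representation, the decoder performs well above chance, and if that representation vanishes, accuracy falls back to chance.

```python
# Generic illustration of decoding an oddball from population spiking.
# Synthetic data; not the study's actual pipeline or parameters.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 50

# Simulate trial-by-trial spike counts; oddball trials get a small
# firing-rate bump in a random subset of "responsive" neurons.
labels = rng.integers(0, 2, n_trials)              # 0 = standard, 1 = oddball
counts = rng.poisson(5.0, (n_trials, n_neurons)).astype(float)
responsive = rng.choice(n_neurons, 10, replace=False)
counts[np.ix_(np.where(labels == 1)[0], responsive)] += 2.0

# Cross-validated decoding accuracy; chance level is 0.5.
decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, counts, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {accuracy:.2f}")
```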
Moreover, when they compared rhythms in the regions during wakeful versus unconscious states, they found stark differences. When the animals were awake, oddballs increased gamma activity in both Tpt and FEF, and alpha/beta rhythms decreased; regular, non-oddball stimulation increased alpha/beta rhythms. But when the animals lost consciousness, the increase in gamma rhythms from a local oddball was even greater in Tpt than when the animal was awake.
“Under propofol-mediated loss of consciousness, the inhibitory function of alpha/beta became diminished and/or eliminated, leading to disinhibition of oddballs in sensory cortex,” the authors wrote.
Other analyses of inter-region connectivity and synchrony revealed that the regions lost the ability to communicate during anesthesia.
In all, the study’s evidence suggests that conscious thought requires coordination across the cortex, from front to back, the researchers wrote.
“Our results therefore suggest an important role for prefrontal cortex activation, in addition to sensory cortex activation, for conscious perception,” the researchers wrote.
In addition to Xiong, Miller, and Bastos, the paper’s other authors are Jacob Donoghue, Mikael Lundqvist, Meredith Mahnke, Alex Major, and Emery N. Brown.
The National Institutes of Health, The JPB Foundation, and The Picower Institute for Learning and Memory funded the study.
Mixing joy and resolve, event celebrates women in science and addresses persistent inequalities
The Kuggie Vallee Distinguished Lectures and Workshops presented inspiring examples of success, even as the event evoked frank discussions of the barriers that still hinder many women in science.
For two days at The Picower Institute for Learning and Memory at MIT, participants in the Kuggie Vallee Distinguished Lectures and Workshops celebrated the success of women in science and shared strategies to persist through, or better yet dissipate, the stiff headwinds women still face in the field.
“Everyone is here to celebrate and to inspire and advance the accomplishments of all women in science,” said host Li-Huei Tsai, Picower Professor in the Department of Brain and Cognitive Sciences and director of the Picower Institute, as she welcomed an audience that included scores of students, postdocs, and other research trainees. “It is a great feeling to have the opportunity to showcase examples of our successes and to help lift up the next generation.”
Tsai earned the honor of hosting the event after she was named a Vallee Visiting Professor in 2022 by the Vallee Foundation. Foundation president Peter Howley, a professor of pathological anatomy at Harvard University, said the global series of lectureships and workshops was created to honor Kuggie Vallee, a former Lesley College professor who worked to advance the careers of women.
During the program Sept. 24-25, speakers and audience members alike made it clear that helping women succeed requires both recognizing their achievements and resolving to change social structures in which they face marginalization.
Inspiring achievements
Lectures on the first day featured two brain scientists who have each led acclaimed discoveries that have been transforming their fields.
Michelle Monje, a pediatric neuro-oncologist at Stanford University whose recognitions include a MacArthur Fellowship, described her lab’s studies of brain cancers in children, which emerge at specific times in development as young brains adapt to their world by wiring up new circuits and insulating neurons with a fatty sheathing called myelin. Monje has discovered that when the precursors to myelinating cells, called oligodendrocyte precursor cells, harbor cancerous mutations, the tumors that arise — called gliomas — can hijack those cellular and molecular mechanisms. To promote their own growth, gliomas tap directly into the electrical activity of neural circuits by forging functional neuron-to-cancer connections, akin to the “synapse” junctions healthy neurons make with each other. Years of her lab’s studies, often led by female trainees, have not only revealed this insidious behavior (and linked aberrant myelination to many other diseases as well), but also revealed specific molecular factors involved. Those findings, Monje said, present completely novel potential avenues for therapeutic intervention.
“This cancer is an electrically active tissue and that is not how we have been approaching understanding it,” she said.
Erin Schuman, who directs the Max Planck Institute for Brain Research in Frankfurt, Germany, and has won honors including the Brain Prize, described her groundbreaking discoveries related to how neurons form and edit synapses along the very long branches — axons and dendrites — that give the cells their exotic shapes. Synapses form very far from the cell body where scientists had long thought all proteins, including those needed for synapse structure and activity, must be made. In the mid-1990s, Schuman showed that the protein-making process can occur at the synapse and that neurons stage the needed infrastructure — mRNA and ribosomes — near those sites. Her lab has continued to develop innovative tools to build on that insight, cataloging the stunning array of thousands of mRNAs involved, including about 800 that are primarily translated at the synapse, studying the diversity of synapses that arise from that collection, and imaging individual ribosomes such that her lab can detect when they are actively making proteins in synaptic neighborhoods.
Persistent headwinds
While the first day’s lectures showcased examples of women’s success, the second day’s workshops turned the spotlight on the social and systemic hindrances that continue to make such achievements an uphill climb. Speakers and audience members engaged in frank dialogues aimed at calling out those barriers, overcoming them, and dismantling them.
Susan Silbey, the Leon and Anne Goldberg Professor of Humanities, Sociology and Anthropology at MIT and professor of behavioral and policy sciences in the MIT Sloan School of Management, told the group that as bad as sexual harassment and assault in the workplace are, the more pervasive, damaging, and persistent headwinds for women across a variety of professions are “deeply sedimented cultural habits” that marginalize their expertise and contributions in workplaces, rendering them invisible to male counterparts, even when they are in powerful positions. High-ranking women in Silicon Valley who answered the “Elephant in the Valley” survey, for instance, reported high rates of demeaning comments and condescending behavior, as well as exclusion from social circles. Even U.S. Supreme Court justices are not immune, she noted, citing research showing that for decades female justices have been interrupted with disproportionate frequency during oral arguments at the court. Silbey’s research has shown that young women entering the engineering workforce often become discouraged by a system that appears meritocratic, but in which they are often excluded from opportunities to demonstrate or be credited for that merit and are paid significantly less.
“Women’s occupational inequality is a consequence of being ignored, having contributions overlooked or appropriated, of being assigned to lower-status roles, while men are pushed ahead, honored and celebrated, often on the basis of women’s work,” Silbey said.
Often relatively small in numbers, women in such workplaces become tokens — visible as different, but still treated as outsiders, Silbey said. Women tend to internalize this status, becoming very cautious about their work while some men surge ahead in more cavalier fashion. Silbey and speakers who followed illustrated the effect this can have on women’s careers in science. Kara McKinley, an assistant professor of stem cell and regenerative biology at Harvard, noted that while the scientific career “pipeline” in some areas of science is full of female graduate students and postdocs, only about 20 percent of natural sciences faculty positions are held by women. Strikingly, women are already significantly depleted in the applicant pools for assistant professor positions, she said. Those who do apply tend to wait until they are more qualified than the men they are competing against.
McKinley and Silbey each noted that women scientists submit fewer papers to prestigious journals, with Silbey explaining that it’s often because women are more likely to worry that their studies need to tie up every loose end. Yet, said Stacie Weninger, a venture capitalist and president of the F-Prime Biomedical Research Initiative and a former editor at Cell Press, women were also less likely than men to rebut rejections from journal editors, thereby accepting the rejection even though rebuttals sometimes work.
Several speakers, including Weninger and Silbey, said pedagogy must change to help women overcome a social tendency to couch their assertions in caveats when many men speak with confidence and are therefore perceived as more knowledgeable.
At lunch, trainees sat in small groups with the speakers. They shared sometimes harrowing personal stories of gender-related difficulties in their young careers and sought advice on how to persist and remain resilient. Schuman advised the trainees to report mistreatment, even if they aren’t confident that university officials will be able to effect change, to at least make sure patterns of mistreatment get on the record. Reflecting on discouraging comments she experienced early in her career, Monje advised students to build up and maintain an inner voice of confidence and draw upon it when criticism is unfair.
“It feels terrible in the moment, but cream rises,” Monje said. “Believe in yourself. It will be OK in the end.”
Lifting each other up
Speakers at the conference shared many ideas to help overcome inequalities. McKinley described a program she launched in 2020 to ensure that a diversity of well-qualified women and non-binary postdocs are recruited for, and apply for, life sciences faculty jobs: the Leading Edge Symposium. The program identifies and names fellows — 200 so far — and provides career mentoring advice, a supportive community, and a platform to ensure they are visible to recruiters. Since the program began, 99 of the fellows have gone on to accept faculty positions at various institutions.
In a talk tracing the arc of her career, Weninger, who trained as a neuroscientist at Harvard, said she left bench work for a job as an editor because she wanted to enjoy the breadth of science, but also noted that her postdoc salary didn’t even cover the cost of child care. She left Cell Press in 2005 to help lead a task force on women in science that Harvard formed in the wake of comments by then-president Lawrence Summers widely understood as suggesting that women lacked “natural ability” in science and engineering. Working feverishly for months, the task force recommended steps to increase the number of senior women in science, including providing financial support for researchers who were also caregivers at home so they’d have the money to hire a technician. That extra set of hands would afford them the flexibility to keep research running even as they also attended to their families. Notably, Monje said she does this for the postdocs in her lab.
A graduate student asked Silbey at the end of her talk how to change a culture in which traditionally male-oriented norms marginalize women. Silbey said it starts with calling out those norms and recognizing that they are the issue, rather than increasing women’s representation in, or asking them to adapt to, existing systems.
“To make change, it requires that you do recognize the differences of the experiences and not try to make women exactly like men, or continue the past practices and think, ‘Oh, we just have to add women into it’,” she said.
Silbey also praised the Kuggie Vallee event at MIT for assembling a new community around these issues. Women in science need more social networks where they can exchange information and resources, she said.
“This is where an organ, an event like this, is an example of making just that kind of change: women making new networks for women,” she said.
Study finds mercury pollution from human activities is declining
Models show that an unexpected reduction in human-driven emissions led to a 10 percent decline in atmospheric mercury concentrations.
MIT researchers have some good environmental news: Mercury emissions from human activity have been declining over the past two decades, despite global emissions inventories that indicate otherwise.
In a new study, the researchers analyzed measurements from all available monitoring stations in the Northern Hemisphere and found that atmospheric concentrations of mercury declined by about 10 percent between 2005 and 2020.
They used two separate modeling methods to determine what is driving that trend. Both techniques pointed to a decline in mercury emissions from human activity as the most likely cause.
Global inventories, on the other hand, have reported opposite trends. These inventories estimate atmospheric emissions using models that incorporate average emission rates of polluting activities and the scale of these activities worldwide.
“Our work shows that it is very important to learn from actual, on-the-ground data to try and improve our models and these emissions estimates. This is very relevant for policy because, if we are not able to accurately estimate past mercury emissions, how are we going to predict how mercury pollution will evolve in the future?” says Ari Feinberg, a former postdoc in the Institute for Data, Systems, and Society (IDSS) and lead author of the study.
The new results could help inform scientists who are embarking on a collaborative, global effort to evaluate pollution models and develop a more in-depth understanding of what drives global atmospheric concentrations of mercury.
However, due to a lack of data from global monitoring stations and limitations in the scientific understanding of mercury pollution, the researchers couldn’t pinpoint a definitive reason for the mismatch between the inventories and the recorded measurements.
“It seems like mercury emissions are moving in the right direction, and could continue to do so, which is heartening to see. But this was as far as we could get with mercury. We need to keep measuring and advancing the science,” adds co-author Noelle Selin, an MIT professor in the IDSS and the Department of Earth, Atmospheric and Planetary Sciences (EAPS).
Feinberg and Selin, his MIT postdoctoral advisor, are joined on the paper by an international team of researchers that contributed atmospheric mercury measurement data and statistical methods to the study. The research appears this week in the Proceedings of the National Academy of Sciences.
Mercury mismatch
The Minamata Convention is a global treaty that aims to cut human-caused emissions of mercury, a potent neurotoxin that enters the atmosphere from sources like coal-fired power plants and small-scale gold mining.
The treaty, which was signed in 2013 and went into force in 2017, is evaluated every five years. The first meeting of its conference of parties coincided with disheartening news reports that said global inventories of mercury emissions, compiled in part from information from national inventories, had increased despite international efforts to reduce them.
This was puzzling news for environmental scientists like Selin. Data from monitoring stations showed atmospheric mercury concentrations declining during the same period.
Bottom-up inventories combine emission factors, such as the amount of mercury that enters the atmosphere when coal mined in a certain region is burned, with estimates of pollution-causing activities, like how much of that coal is burned in power plants.
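In schematic form (notation mine, not the inventories'), a bottom-up inventory sums activity times emission factor over source categories:

```latex
E_{\mathrm{total}} = \sum_{i} A_i \, \mathrm{EF}_i
% A_i:  activity level of source category i (e.g., tonnes of coal burned)
% EF_i: emission factor (e.g., grams of mercury emitted per tonne burned)
```

Uncertainty in either term propagates directly into the total, which is one reason inventories and atmospheric measurements can diverge.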
“The big question we wanted to answer was: What is actually happening to mercury in the atmosphere and what does that say about anthropogenic emissions over time?” Selin says.
Modeling mercury emissions is especially tricky. First, mercury is the only metal that is in liquid form at room temperature, so it has unique properties. Moreover, mercury that has been removed from the atmosphere by sinks like the ocean or land can be re-emitted later, making it hard to identify primary emission sources.
At the same time, mercury is more difficult to study in laboratory settings than many other air pollutants, especially due to its toxicity, so scientists have limited understanding of all chemical reactions mercury can undergo. There is also a much smaller network of mercury monitoring stations, compared to other polluting gases like methane and nitrous oxide.
“One of the challenges of our study was to come up with statistical methods that can address those data gaps, because available measurements come from different time periods and different measurement networks,” Feinberg says.
Multifaceted models
The researchers compiled data from 51 stations in the Northern Hemisphere. They used statistical techniques to aggregate data from nearby stations, which helped them overcome data gaps and evaluate regional trends.
By combining data from 11 regions, their analysis indicated that Northern Hemisphere atmospheric mercury concentrations declined by about 10 percent between 2005 and 2020.
Then the researchers used two modeling methods — biogeochemical box modeling and chemical transport modeling — to explore possible causes of that decline. Box modeling was used to run hundreds of thousands of simulations to evaluate a wide array of emission scenarios. Chemical transport modeling is more computationally expensive but enables researchers to assess the impacts of meteorology and spatial variations on trends in selected scenarios.
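As a flavor of the box-modeling approach, the sketch below integrates a single-box mercury budget under a steadily declining emissions scenario. It is purely illustrative: the study used multi-box and chemical-transport models, and the lifetime, burden, and decline rate here are assumed round numbers, not the paper's values.

```python
# One-box sketch of the atmospheric mercury budget: dM/dt = E(t) - M / tau.
# Illustrative only; all numbers are assumptions, not the study's.
import numpy as np

tau = 0.5                 # atmospheric lifetime of mercury, years (assumed)
M = 4000.0                # initial burden, tonnes (assumed steady state)
dt = 0.01                 # integration step, years

def emissions(year):
    """Total emissions declining ~0.7 percent per year after 2005 (assumed)."""
    return (4000.0 / tau) * 0.993 ** (year - 2005)

burden = []
for year in np.arange(2005.0, 2020.0, dt):
    M += dt * (emissions(year) - M / tau)   # forward-Euler integration
    burden.append(M)

decline = 100.0 * (1.0 - burden[-1] / burden[0])
print(f"modeled burden decline, 2005-2020: {decline:.0f}%")  # ~10%
```

Because mercury's atmospheric lifetime is short relative to the 15-year window, the modeled burden closely tracks the emissions trajectory, so a sustained emissions decline of under 1 percent per year is enough to reproduce a roughly 10 percent concentration decline.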
For instance, they tested the hypothesis that an additional environmental sink might be removing more mercury from the atmosphere than previously thought. The models could indicate whether an unknown sink of that magnitude is feasible.
“As we went through each hypothesis systematically, we were pretty surprised that we could really point to declines in anthropogenic emissions as being the most likely cause,” Selin says.
Their work underscores the importance of long-term mercury monitoring stations, Feinberg adds. Many stations the researchers evaluated are no longer operational because of a lack of funding.
While their analysis couldn’t zero in on exactly why the emissions inventories didn’t match up with actual data, they have a few hypotheses.
One possibility is that global inventories are missing key information from certain countries. For instance, the researchers resolved some discrepancies when they used a more detailed regional inventory from China. But there was still a gap between observations and estimates.
They also suspect the discrepancy might be the result of changes in two large sources of mercury that are particularly uncertain: emissions from small-scale gold mining and mercury-containing products.
Small-scale gold mining involves using mercury to extract gold from soil and is often performed in remote parts of developing countries, making it hard to estimate. Yet small-scale gold mining contributes about 40 percent of human-made emissions.
In addition, it’s difficult to determine how long it takes the pollutant to be released into the atmosphere from discarded products like thermometers or scientific equipment.
“We’re not there yet where we can really pinpoint which source is responsible for this discrepancy,” Feinberg says.
In the future, researchers from multiple countries, including MIT, will collaborate to study and improve the models they use to estimate and evaluate emissions. This research will be influential in helping that project move the needle on monitoring mercury, he says.
This research was funded by the Swiss National Science Foundation, the U.S. National Science Foundation, and the U.S. Environmental Protection Agency.
Cancer biologists discover a new mechanism for an old drug
Study reveals the drug, 5-fluorouracil, acts differently in different types of cancer — a finding that could help researchers design better drug combinations.
Since the 1950s, a chemotherapy drug known as 5-fluorouracil has been used to treat many types of cancer, including blood cancers and cancers of the digestive tract.
Doctors have long believed that this drug works by damaging the building blocks of DNA. However, a new study from MIT has found that in cancers of the colon and other gastrointestinal (GI) cancers, it actually kills cells by interfering with RNA synthesis.
The findings could have a significant effect on how doctors treat many cancer patients. Usually, 5-fluorouracil (5-FU) is given in combination with chemotherapy drugs that damage DNA, but the new study found that for colon cancer, this combination does not achieve the synergistic effects that were hoped for. Instead, combining 5-FU with drugs that affect RNA synthesis could make it more effective for patients with GI cancers, the researchers say.
“Our work is the most definitive study to date showing that RNA incorporation of the drug, leading to an RNA damage response, is responsible for how the drug works in GI cancers,” says Michael Yaffe, a David H. Koch Professor of Science at MIT, the director of the MIT Center for Precision Cancer Medicine, and a member of MIT’s Koch Institute for Integrative Cancer Research. “Textbooks implicate the DNA effects of the drug as the mechanism in all cancer types, but our data shows that RNA damage is what’s really important for the types of tumors, like GI cancers, where the drug is used clinically.”
Yaffe, the senior author of the new study, hopes to plan clinical trials of 5-fluorouracil with drugs that would enhance its RNA-damaging effects and kill cancer cells more effectively.
Jung-Kuei Chen, a Koch Institute research scientist, and Karl Merrick, a former MIT postdoc, are the lead authors of the paper, which appears today in Cell Reports Medicine.
An unexpected mechanism
Clinicians use 5-FU as a first-line drug for colon, rectal, and pancreatic cancers. It’s usually given in combination with oxaliplatin or irinotecan, which damage DNA in cancer cells. The combination was thought to be effective because 5-FU can disrupt the synthesis of DNA nucleotides. Without those building blocks, cells with damaged DNA wouldn’t be able to efficiently repair the damage and would undergo cell death.
Yaffe’s lab, which studies cell signaling pathways, wanted to further explore the underlying mechanisms of how these drug combinations preferentially kill cancer cells.
The researchers began by testing 5-FU in combination with oxaliplatin or irinotecan in colon cancer cells grown in the lab. To their surprise, they found that not only were the drugs not synergistic, in many cases they were less effective at killing cancer cells than what one would expect by simply adding together the effects of 5-FU or the DNA-damaging drug given alone.
“One would have expected these combinations to cause synergistic cancer cell death, because you are targeting two different aspects of a shared process: breaking DNA, and making nucleotides,” Yaffe says. “Karl looked at a dozen colon cancer cell lines, and not only were the drugs not synergistic, in most cases they were antagonistic. One drug seemed to be undoing what the other drug was doing.”
Yaffe’s lab then teamed up with Adam Palmer, an assistant professor of pharmacology at the University of North Carolina School of Medicine, who specializes in analyzing data from clinical trials. Palmer’s research group examined data from colon cancer patients who had been on one or more of these drugs and showed that the drugs did not show synergistic effects on survival in most patients.
“This confirmed that when you give these combinations to people, it’s not generally true that the drugs are actually working together in a beneficial way within an individual patient,” Yaffe says. “Instead, it appears that one drug in the combination works well for some patients while another drug in the combination works well in other patients. We just cannot yet predict which drug by itself is best for which patient, so everyone gets the combination.”
These results led the researchers to wonder just how 5-FU was working, if not by disrupting DNA repair. Studies in yeast and mammalian cells had shown that the drug also gets incorporated into RNA nucleotides, but there has been dispute over how much this RNA damage contributes to the drug’s toxic effects on cancer cells.
Inside cells, 5-FU is broken down into two different metabolites. One of these gets incorporated into DNA nucleotides, and the other into RNA nucleotides. In studies of colon cancer cells, the researchers found that the metabolite that interferes with RNA was much more effective at killing colon cancer cells than the one that disrupts DNA.
That RNA damage appears to primarily affect ribosomal RNA, a molecule that forms part of the ribosome — a cell organelle responsible for assembling new proteins. If cells can’t form new ribosomes, they can’t produce enough proteins to function. Additionally, the lack of undamaged ribosomal RNA causes cells to destroy a large set of proteins that normally bind up the RNA to make new functional ribosomes.
The researchers are now exploring how this ribosomal RNA damage leads cells to undergo programmed cell death, or apoptosis. They hypothesize that sensing of the damaged RNAs within cell structures called lysosomes somehow triggers an apoptotic signal.
“My lab is very interested in trying to understand the signaling events during disruption of ribosome biogenesis, particularly in GI cancers and even some ovarian cancers, that cause the cells to die. Somehow, they must be monitoring the quality control of new ribosome synthesis, which somehow is connected to the death pathway machinery,” Yaffe says.
New combinations
The findings suggest that drugs that stimulate ribosome production could work together with 5-FU to make a highly synergistic combination. In their study, the researchers showed that a molecule that inhibits KDM2A, a suppressor of ribosome production, helped to boost the rate of cell death in colon cancer cells treated with 5-FU.
The findings also suggest a possible explanation for why combining 5-FU with a DNA-damaging drug often makes both drugs less effective. Some DNA damaging drugs send a signal to the cell to stop making new ribosomes, which would negate 5-FU’s effect on RNA. A better approach may be to give each drug a few days apart, which would give patients the potential benefits of each drug, without having them cancel each other out.
“Importantly, our data doesn’t say that these combination therapies are wrong. We know they’re effective clinically. It just says that if you adjust how you give these drugs, you could potentially make those therapies even better, with relatively minor changes in the timing of when the drugs are given,” Yaffe says.
He is now hoping to work with collaborators at other institutions to run a phase 2 or 3 clinical trial in which patients receive the drugs on an altered schedule.
“A trial is clearly needed to look for efficacy, but it should be straightforward to initiate because these are already clinically accepted drugs that form the standard of care for GI cancers. All we’re doing is changing the timing with which we give them,” he says.
The researchers also hope that their work could lead to the identification of biomarkers that predict which patients’ tumors will be more susceptible to drug combinations that include 5-FU. One such biomarker could be RNA polymerase I, which is active when cells are producing a lot of ribosomal RNA.
The research was funded by the Damon Runyon Cancer Research Foundation, a fellowship from the Ludwig Center at MIT, the National Institutes of Health, the Ovarian Cancer Research Fund, the Charles and Marjorie Holloway Foundation, and the STARR Cancer Consortium.
Victor Ambros ’75, PhD ’79 and Gary Ruvkun share Nobel Prize in Physiology or Medicine
The scientists, who worked together as postdocs at MIT, are honored for their discovery of microRNA — a class of molecules that are critical for gene regulation.
MIT alumnus Victor Ambros ’75, PhD ’79 and Gary Ruvkun, who did his postdoctoral training at MIT, will share the 2024 Nobel Prize in Physiology or Medicine, the Nobel Assembly at the Karolinska Institute announced this morning in Stockholm.
Ambros, a professor at the University of Massachusetts Chan Medical School, and Ruvkun, a professor at Harvard Medical School and Massachusetts General Hospital, were honored for their discovery of microRNA, a class of tiny RNA molecules that play a critical role in gene control.
“Their groundbreaking discovery revealed a completely new principle of gene regulation that turned out to be essential for multicellular organisms, including humans. It is now known that the human genome codes for over one thousand microRNAs. Their surprising discovery revealed an entirely new dimension to gene regulation. MicroRNAs are proving to be fundamentally important for how organisms develop and function,” the Nobel committee said in its announcement today.
During the late 1980s, Ambros and Ruvkun both worked as postdocs in the laboratory of H. Robert Horvitz, a David H. Koch Professor at MIT, who was awarded the Nobel Prize in 2002.
While in Horvitz’s lab, the pair began studying gene control in the roundworm C. elegans — an effort that laid the groundwork for their Nobel discoveries. They studied two mutant forms of the worm, known as lin-4 and lin-14, that showed defects in the timing of the activation of genetic programs that control development.
In the early 1990s, while Ambros was a faculty member at Harvard University, he made a surprising discovery. The lin-4 gene, instead of encoding a protein, produced a very short RNA molecule that appeared to inhibit the expression of lin-14.
At the same time, Ruvkun was continuing to study these C. elegans genes in his lab at MGH and Harvard. He showed that lin-4 did not inhibit lin-14 by preventing the lin-14 gene from being transcribed into messenger RNA; instead, it appeared to turn off the gene’s expression later on, by preventing production of the protein encoded by lin-14.
The two compared results and realized that the sequence of lin-4 was complementary to some short sequences of lin-14. Lin-4, they showed, was binding to messenger RNA encoding lin-14 and blocking it from being translated into protein — a mechanism for gene control that had never been seen before. Those results were published in two articles in the journal Cell in 1993.
In an interview with the Journal of Cell Biology, Ambros credited the contributions of his collaborators, including his wife, Rosalind “Candy” Lee ’76, and postdoc Rhonda Feinbaum, who both worked in his lab, cloned and characterized the lin-4 microRNA, and were co-authors on one of the 1993 Cell papers.
In 2000, Ruvkun published the discovery of another microRNA molecule, encoded by a gene called let-7, which is found throughout the animal kingdom. Since then, more than 1,000 microRNA genes have been found in humans.
“Ambros and Ruvkun’s seminal discovery in the small worm C. elegans was unexpected, and revealed a new dimension to gene regulation, essential for all complex life forms,” the Nobel citation declared.
Ambros, who was born in New Hampshire and grew up in Vermont, earned his PhD at MIT under the supervision of David Baltimore, then an MIT professor of biology, who received a Nobel Prize in 1975. Ambros was a longtime faculty member at Dartmouth College before joining the faculty at the University of Massachusetts Chan Medical School in 2008.
Ruvkun is a graduate of the University of California at Berkeley and earned his PhD at Harvard University before joining Horvitz’s lab at MIT.
Translating MIT research into real-world results
MIT’s innovation and entrepreneurship system helps launch water, food, and ag startups with social and economic benefits.
Inventive solutions to some of the world’s most critical problems are being discovered in labs, classrooms, and centers across MIT every day. Many of these solutions move from the lab to the commercial world with the help of over 85 Institute resources that comprise MIT’s robust innovation and entrepreneurship (I&E) ecosystem. The Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) draws on MIT’s wealth of I&E knowledge and experience to help researchers commercialize their breakthrough technologies through the J-WAFS Solutions grant program. By collaborating with I&E programs on campus, J-WAFS prepares MIT researchers for the commercial world, where their novel innovations aim to improve productivity, accessibility, and sustainability of water and food systems, creating economic, environmental, and societal benefits along the way.
The J-WAFS Solutions program launched in 2015 with support from Community Jameel, an international organization that advances science and learning for communities to thrive. Since 2015, J-WAFS Solutions has supported 19 projects with one-year grants of up to $150,000, with some projects receiving renewal grants for a second year of support. Solutions projects all address challenges related to water or food. Modeled after the esteemed grant program of MIT’s Deshpande Center for Technological Innovation, and initially administered by Deshpande Center staff, the J-WAFS Solutions program follows a similar approach by supporting projects that have already completed the basic research and proof-of-concept phases. With technologies that are one to three years away from commercialization, grantees work on identifying their potential markets and learn to focus on how their technology can meet the needs of future customers.
“Ingenuity thrives at MIT, driving inventions that can be translated into real-world applications for widespread adoption, implementation, and use,” says J-WAFS Director Professor John H. Lienhard V. “But successful commercialization of MIT technology requires engineers to focus on many challenges beyond making the technology work. MIT’s I&E network offers a variety of programs that help researchers develop technology readiness, investigate markets, conduct customer discovery, and initiate product design and development,” Lienhard adds. “With this strong I&E framework, many J-WAFS Solutions teams have established startup companies by the completion of the grant. J-WAFS-supported technologies have had powerful, positive effects on human welfare. Together, the J-WAFS Solutions program and MIT’s I&E ecosystem demonstrate how academic research can evolve into business innovations that make a better world,” Lienhard says.
Creating I&E collaborations
In addition to support for furthering research, J-WAFS Solutions grants allow faculty, students, postdocs, and research staff to learn the fundamentals of how to transform their work into commercial products and companies. As part of the grant requirements, researchers must interact with mentors through MIT Venture Mentoring Service (VMS). VMS connects MIT entrepreneurs with teams of carefully selected professionals who provide free and confidential mentorship, guidance, and other services to help advance ideas into for-profit, for-benefit, or nonprofit ventures. Since 2000, VMS has mentored over 4,600 MIT entrepreneurs across all industries, through a dynamic and accomplished group of nearly 200 mentors who volunteer their time so that others may succeed. The mentors provide impartial and unbiased advice to members of the MIT community, including MIT alumni in the Boston area. J-WAFS Solutions teams have been guided by 21 mentors from numerous companies and nonprofits. Mentors often attend project events and progress meetings throughout the grant period.
“Working with VMS has provided me and my organization with a valuable sounding board for a range of topics, big and small,” says Eric Verploegen PhD ’08, former research engineer in the MIT D-Lab and founder of J-WAFS spinout CoolVeg. Along with professors Leon Glicksman and Daniel Frey, Verploegen received a J-WAFS Solutions grant in 2021 to commercialize cold-storage chambers that use evaporative cooling to help farmers preserve fruits and vegetables in rural off-grid communities. Verploegen started CoolVeg in 2022 to increase access and adoption of open-source, evaporative cooling technologies through collaborations with businesses, research institutions, nongovernmental organizations, and government agencies. “Working as a solo founder at my nonprofit venture, it is always great to have avenues to get feedback on communications approaches, overall strategy, and operational issues that my mentors have experience with,” Verploegen says. Three years after the initial Solutions grant, one of the VMS mentors assigned to the evaporative cooling team still acts as a mentor to Verploegen today.
Another Solutions grant requirement is for teams to participate in the Spark program — a free, three-week course that provides an entry point for researchers to explore the potential value of their innovation. Spark is part of the National Science Foundation’s (NSF) Innovation Corps (I-Corps), which is an “immersive, entrepreneurial training program that facilitates the transformation of invention to impact.” In 2018, MIT received an award from the NSF, establishing the New England Regional Innovation Corps Node (NE I-Corps) to deliver I-Corps training to participants across New England. Trainings are open to researchers, engineers, scientists, and others who want to engage in a customer discovery process for their technology. Offered regularly throughout the year, the Spark course helps participants identify markets and explore customer needs in order to understand how their technologies can be positioned competitively in their target markets. They learn to assess barriers to adoption, as well as potential regulatory issues or other challenges to commercialization. NE I-Corps reports that since its start, over 1,200 researchers from MIT have completed the program and have gone on to launch 175 ventures, raising over $3.3 billion in funding from grants and investors, and creating over 1,800 jobs.
Constantinos Katsimpouras, a research scientist in the Department of Chemical Engineering, went through the NE I-Corps Spark program to better understand the customer base for a technology he developed with professors Gregory Stephanopoulos and Anthony Sinskey. The group received a J-WAFS Solutions grant in 2021 for their microbial platform that converts food waste from the dairy industry into valuable products. “As a scientist with no prior experience in entrepreneurship, the program introduced me to important concepts and tools for conducting customer interviews and adopting a new mindset,” notes Katsimpouras. “Most importantly, it encouraged me to get out of the building and engage in interviews with potential customers and stakeholders, providing me with invaluable insights and a deeper understanding of my industry,” he adds. These interviews also helped connect the team with companies willing to provide resources to test and improve their technology — a critical step to the scale-up of any lab invention.
In the case of Professor Cem Tasan’s research group in the Department of Materials Science and Engineering, the I-Corps program led them to the J-WAFS Solutions grant, instead of the other way around. Tasan is currently working with postdoc Onur Guvenc on a J-WAFS Solutions project to manufacture formable sheet metal by consolidating steel scrap without melting, thereby reducing water use compared to traditional steel processing. Before applying for the Solutions grant, Guvenc took part in NE I-Corps. Like Katsimpouras, Guvenc benefited from the interaction with industry. “This program required me to step out of the lab and engage with potential customers, allowing me to learn about their immediate challenges and test my initial assumptions about the market,” Guvenc recalls. “My interviews with industry professionals also made me aware of the connection between water consumption and steelmaking processes, which ultimately led to the J-WAFS 2023 Solutions Grant,” says Guvenc.
After completing the Spark program, participants may be eligible to apply for the Fusion program, which provides microgrants of up to $1,500 to conduct further customer discovery. The Fusion program is self-paced, requiring teams to conduct 12 additional customer interviews and craft a final presentation summarizing their key learnings. Professor Patrick Doyle’s J-WAFS Solutions team completed the Spark and Fusion programs at MIT. Most recently, their team was accepted to join the NSF I-Corps National program with a $50,000 award. The intensive program requires teams to complete an additional 100 customer discovery interviews over seven weeks. Located in the Department of Chemical Engineering, the Doyle lab is working on a sustainable microparticle hydrogel system to rapidly remove micropollutants from water. The team’s focus has expanded to higher value purifications in amino acid and biopharmaceutical manufacturing applications. Devashish Gokhale PhD ’24 worked with Doyle on much of the underlying science.
“Our platform technology could potentially be used for selective separations in very diverse market segments, ranging from individual consumers to large industries and government bodies with varied use-cases,” Gokhale explains. He goes on to say, “The I-Corps Spark program added significant value by providing me with an effective framework to approach this problem ... I was assigned a mentor who provided critical feedback, teaching me how to formulate effective questions and identify promising opportunities.” Gokhale says that by the end of Spark, the team was able to identify the best target markets for their products. He also says that the program provided valuable seminars on topics like intellectual property, which was helpful in subsequent discussions the team had with MIT’s Technology Licensing Office.
Another member of Doyle’s team, Arjav Shah, a recent PhD from MIT’s Department of Chemical Engineering and a current MBA candidate at the MIT Sloan School of Management, is spearheading the team’s commercialization plans. Shah attended Fusion last fall and hopes to lead efforts to incorporate a startup company called hydroGel. “I admire the hypothesis-driven approach of the I-Corps program,” says Shah. “It has enabled us to identify our customers’ biggest pain points, which will hopefully lead us to finding a product-market fit.” He adds, “Based on our learnings from the program, we have been able to pivot to impact-driven, higher-value applications in the food processing and biopharmaceutical industries.” Postdoc Luca Mazzaferro will lead the technical team at hydroGel alongside Shah.
In a different project, Qinmin Zheng, a postdoc in the Department of Civil and Environmental Engineering, is working with Professor Andrew Whittle and Lecturer Fábio Duarte. Zheng plans to take the Fusion course this fall to advance their J-WAFS Solutions project that aims to commercialize a novel sensor to quantify the relative abundance of major algal species and provide early detection of harmful algal blooms. After completing Spark, Zheng says he’s “excited to participate in the Fusion program, and potentially the National I-Corps program, to further explore market opportunities and minimize risks in our future product development.”
Economic and societal benefits
Commercializing technologies developed at MIT is one of the ways J-WAFS helps ensure that MIT research advances will have real-world impacts in water and food systems. Since its inception, the J-WAFS Solutions program has awarded 28 grants (including renewals), which have supported 19 projects that address a wide range of global water and food challenges. The program has distributed over $4 million to 24 professors, 11 research staff, 15 postdocs, and 30 students across MIT. Nearly half of all J-WAFS Solutions projects have resulted in spinout companies or commercialized products, including eight companies to date plus two open-source technologies.
Nona Technologies is an example of a J-WAFS spinout that is helping the world by developing new approaches to produce freshwater for drinking. Desalination — the process of removing salts from seawater — typically requires a large-scale technology called reverse osmosis. But Nona created a desalination device that can work in remote off-grid locations. By separating salt and bacteria from water using electric current through a process called ion concentration polarization (ICP), their technology also reduces overall energy consumption. The novel method was developed by Jongyoon Han, professor of electrical engineering and biological engineering, and research scientist Junghyo Yoon. Along with Bruce Crawford, a Sloan MBA alum, Han and Yoon created Nona Technologies to bring their lightweight, energy-efficient desalination technology to the market.
“My feeling early on was that once you have technology, commercialization will take care of itself,” admits Crawford. The team completed both the Spark and Fusion programs and quickly realized that much more work would be required. “Even in our first 24 interviews, we learned that the two first markets we envisioned would not be viable in the near term, and we also got our first hints at the beachhead we ultimately selected,” says Crawford. Nona Technologies has since won MIT’s $100K Entrepreneurship Competition, received media attention from outlets like Newsweek and Fortune, and hired a team that continues to further the technology for deployment in resource-limited areas where clean drinking water may be scarce.
Food-borne diseases sicken millions of people worldwide each year, but J-WAFS researchers are addressing this issue by integrating molecular engineering, nanotechnology, and artificial intelligence to revolutionize food pathogen testing. Professors Tim Swager and Alexander Klibanov, of the Department of Chemistry, were awarded one of the first J-WAFS Solutions grants for their sensor that targets food safety pathogens. The sensor uses specialized droplets that behave like a dynamic lens, changing in the presence of target bacteria in order to detect dangerous bacterial contamination in food. In 2018, Swager launched Xibus Systems Inc. to bring the sensor to market and advance food safety for greater public health, sustainability, and economic security.
“Our involvement with the J-WAFS Solutions Program has been vital,” says Swager. “It has provided us with a bridge between the academic world and the business world and allowed us to perform more detailed work to create a usable application,” he adds. In 2022, Xibus developed a product called XiSafe, which enables the detection of contaminants like salmonella and listeria faster and with higher sensitivity than other food testing products. The innovation could save food processors billions of dollars worldwide and prevent thousands of food-borne fatalities annually.
J-WAFS Solutions companies have raised nearly $66 million in venture capital and other funding. Just this past June, J-WAFS spinout SiTration announced that it raised an $11.8 million seed round. Jeffrey Grossman, a professor in MIT’s Department of Materials Science and Engineering, was another early J-WAFS Solutions grantee for his work on low-cost energy-efficient filters for desalination. The project enabled the development of nanoporous membranes and resulted in two spinout companies, Via Separations and SiTration. SiTration was co-founded by Brendan Smith PhD ’18, who was a part of the original J-WAFS team. Smith is CEO of the company and has overseen the advancement of the membrane technology, which has gone on to reduce cost and resource consumption in industrial wastewater treatment, advanced manufacturing, and resource extraction of materials such as lithium, cobalt, and nickel from recycled electric vehicle batteries. The company also recently announced that it is working with the mining company Rio Tinto to handle harmful wastewater generated at mines.
But it's not just J-WAFS spinout companies that are producing real-world results. Products like the ECC Vial — a portable, low-cost method for E. coli detection in water — have been brought to the market and helped thousands of people. The test kit was developed by MIT D-Lab Lecturer Susan Murcott and Professor Jeffrey Ravel of the MIT History Section. The duo received a J-WAFS Solutions grant in 2018 to promote safely managed drinking water and improved public health in Nepal, where it is difficult to identify which wells are contaminated by E. coli. By the end of their grant period, the team had manufactured approximately 3,200 units, of which 2,350 were distributed — enough to help 12,000 people in Nepal. The researchers also trained local Nepalese on best manufacturing practices.
“It’s very important, in my life experience, to follow your dream and to serve others,” says Murcott. Economic success is important to the health of any venture, whether it’s a company or a product, but equally important is the social impact — a philosophy that J-WAFS research strives to uphold. “Do something because it’s worth doing and because it changes people’s lives and saves lives,” Murcott adds.
As J-WAFS prepares to celebrate its 10th anniversary this year, we look forward to continued collaboration with MIT’s many I&E programs to advance knowledge and develop solutions that will have tangible effects on the world’s water and food systems.
Learn more about the J-WAFS Solutions program and about innovation and entrepreneurship at MIT.
An interstellar instrument takes a final bow
The Plasma Science Experiment aboard NASA’s Voyager 2 spacecraft turns off after 47 years and 15 billion miles.
They planned to fly for four years and to get as far as Jupiter and Saturn. But nearly half a century and 15 billion miles later, NASA’s twin Voyager spacecraft have far exceeded their original mission, winging past the outer planets and busting out of our heliosphere, beyond the influence of the sun. The probes are currently making their way through interstellar space, traveling farther than any human-made object.
Along their improbable journey, the Voyagers made first-of-their-kind observations at all four giant outer planets and their moons using only a handful of instruments, including MIT’s Plasma Science Experiments — identical plasma sensors that were designed and built in the 1970s in Building 37 by MIT scientists and engineers.
The Plasma Science Experiment (also known as the Plasma Spectrometer, or PLS for short) measured charged particles in planetary magnetospheres, the solar wind, and the interstellar medium, the material between stars. Since launching on the Voyager 2 spacecraft in 1977, the PLS has revealed new phenomena near all the outer planets and in the solar wind across the solar system. The experiment played a crucial role in confirming the moment when Voyager 2 crossed the heliosphere and moved outside of the sun’s regime, into interstellar space.
Now, to conserve the little power left on Voyager 2 and prolong the mission’s life, the Voyager scientists and engineers have made the decision to shut off MIT’s Plasma Science Experiment. It’s the first in a line of science instruments that will progressively blink off over the coming years. On Sept. 26, the Voyager 2 PLS sent its last communication from 12.7 billion miles away, before it received the command to shut down.
MIT News spoke with John Belcher, the Class of 1922 Professor of Physics at MIT, who was a member of the original team that designed and built the plasma spectrometers, and John Richardson, principal research scientist at MIT’s Kavli Institute for Astrophysics and Space Research, who is the experiment’s principal investigator. Both Belcher and Richardson offered their reflections on the retirement of this interstellar piece of MIT history.
Q: Looking back at the experiment’s contributions, what are the greatest hits, in terms of what MIT’s Plasma Spectrometer has revealed about the solar system and interstellar space?
Richardson: A key PLS finding at Jupiter was the discovery of the Io torus, a plasma donut surrounding Jupiter, formed from sulfur and oxygen from Io’s volcanos (which were discovered in Voyager images). At Saturn, PLS found a magnetosphere full of water and oxygen that had been knocked off Saturn’s icy moons. At Uranus and Neptune, the tilt of the magnetic fields led to PLS seeing smaller density features, with Uranus’ plasma disappearing near the planet. Another key PLS observation was of the termination shock, which was the first observation of the plasma at the largest shock in the solar system, where the solar wind stopped being supersonic. This boundary had a huge drop in speed and an increase in the density and temperature of the solar wind. And finally, PLS documented Voyager 2’s crossing of the heliopause by detecting a stopping of outward-flowing plasma. This signaled the end of the solar wind and the beginning of the local interstellar medium (LISM). Although not designed to measure the LISM, PLS constantly measured the interstellar plasma currents beyond the heliosphere. It is very sad to lose this instrument and data!
Belcher: It is important to emphasize that PLS was the result of decades of development by MIT Professor Herbert Bridge (1919-1995) and Alan Lazarus (1931-2014). The first version of the instrument they designed was flown on Explorer 10 in 1961. And the most recent version is flying on the Parker Solar Probe, which is collecting measurements very close to the sun to understand the origins of the solar wind. Bridge was the principal investigator for plasma probes on spacecraft that visited the sun and every major planetary body in the solar system.
Q: During their tenure aboard the Voyager probes, how did the plasma sensors do their job over the last 47 years?
Richardson: There were four Faraday cup detectors designed by Herb Bridge that measured currents from ions and electrons that entered the detectors. By measuring these particles at different energies, we could find the plasma velocity, density, and temperature in the solar wind and in the four planetary magnetospheres Voyager encountered. Voyager data were (and are still) sent to Earth every day and received by NASA’s Deep Space Network antennas. Keeping two 1970s-era spacecraft going for 47 years and counting has been an amazing feat of JPL engineering prowess — you can google the most recent rescue, when Voyager 1 lost some memory in November 2023 and stopped sending data. JPL figured out the problem and was able to reprogram the flight data system from 15 billion miles away, and all is back to normal now. Shutting down PLS involves sending a command that will reach Voyager 2 about 19 hours later, freeing enough power for the rest of the spacecraft to continue.
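As a rough sketch of the principle behind that last step — standard textbook plasma physics, not the PLS team’s specific calibration pipeline — a Faraday cup records the current carried by charged particles arriving in successive energy-per-charge windows, which samples the ion velocity distribution $f(\mathbf{v})$. The bulk parameters then follow as velocity-space moments of that distribution:

$$
n = \int f(\mathbf{v})\,d^3v,\qquad
\mathbf{u} = \frac{1}{n}\int \mathbf{v}\,f(\mathbf{v})\,d^3v,\qquad
T = \frac{m}{3nk_B}\int \lvert\mathbf{v}-\mathbf{u}\rvert^2\,f(\mathbf{v})\,d^3v,
$$

where $n$ is the number density, $\mathbf{u}$ the bulk velocity, $T$ the temperature, $m$ the ion mass, and $k_B$ Boltzmann’s constant.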
Q: Once the plasma sensors have shut down, how much more could Voyager do, and how far might it still go?
Richardson: Voyager will still measure the galactic cosmic rays, magnetic fields, and plasma waves. The available power decreases about 4 watts per year as the plutonium which powers them decays. We hope to keep some of the instruments running until the mid-2030s, but that will be a challenge as power levels decrease.
Belcher: Nick Oberg at the Kapteyn Astronomical Institute in the Netherlands has made an exhaustive study of the future of the spacecraft, using data from the European Space Agency’s Gaia spacecraft. In about 30,000 years, the spacecraft will reach the distance of the nearest stars. Because space is so vast, there is zero chance that the spacecraft will collide directly with a star in the lifetime of the universe. Their surfaces will, however, slowly erode through microcollisions with vast clouds of interstellar dust.
In Oberg’s estimate, the Golden Records [identical records that were placed aboard each probe, that contain selected sounds and images to represent life on Earth] are likely to survive for a span of over 5 billion years. After those 5 billion years, things are difficult to predict, since at this point, the Milky Way will collide with its massive neighbor, the Andromeda galaxy. During this collision, there is a one in five chance that the spacecraft will be flung into the intergalactic medium, where there is little dust and little weathering. In that case, it is possible that the spacecraft will survive for trillions of years. A trillion years is about 100 times the current age of the universe. The Earth ceases to exist in about 6 billion years, when the sun enters its red giant phase and engulfs it.
In a “poor man’s” version of the Golden Record, Robert Butler, the chief engineer of the Plasma Instrument, inscribed the names of the MIT engineers and scientists who had worked on the spacecraft on the collector plate of the side-looking cup. Butler’s home state was New Hampshire, and he put the state motto, “Live Free or Die,” at the top of the list of names. Thanks to Butler, although New Hampshire will not survive for a trillion years, its state motto might. The flight spare of the PLS instrument is now displayed at the MIT Museum, where you can see the text of Butler’s message by peering into the side-looking sensor.
AI pareidolia: Can machines spot faces in inanimate objects?
New dataset of “illusory” faces reveals differences between human and algorithmic face detection, links to animal face recognition, and a formula predicting where people most often perceive faces.
In 1994, Florida jewelry designer Diana Duyser discovered what she believed to be the Virgin Mary’s image in a grilled cheese sandwich, which she preserved and later auctioned for $28,000. But how much do we really understand about pareidolia, the phenomenon of seeing faces and patterns in objects when they aren’t really there?
A new study from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) delves into this phenomenon, introducing an extensive, human-labeled dataset of 5,000 pareidolic images, far surpassing previous collections. Using this dataset, the team discovered several surprising results about the differences between human and machine perception, and how the ability to see faces in a slice of toast might have saved your distant relatives’ lives.
“Face pareidolia has long fascinated psychologists, but it’s been largely unexplored in the computer vision community,” says Mark Hamilton, MIT PhD student in electrical engineering and computer science, CSAIL affiliate, and lead researcher on the work. “We wanted to create a resource that could help us understand how both humans and AI systems process these illusory faces.”
So what did all of these fake faces reveal? For one, AI models don’t seem to recognize pareidolic faces like we do. Surprisingly, the team found that it wasn’t until they trained algorithms to recognize animal faces that they became significantly better at detecting pareidolic faces. This unexpected connection hints at a possible evolutionary link between our ability to spot animal faces — crucial for survival — and our tendency to see faces in inanimate objects. “A result like this seems to suggest that pareidolia might not arise from human social behavior, but from something deeper: like quickly spotting a lurking tiger, or identifying which way a deer is looking so our primordial ancestors could hunt,” says Hamilton.
Another intriguing discovery is what the researchers call the “Goldilocks Zone of Pareidolia,” a class of images where pareidolia is most likely to occur. “There’s a specific range of visual complexity where both humans and machines are most likely to perceive faces in non-face objects,” says William T. Freeman, MIT professor of electrical engineering and computer science and principal investigator of the project. “Too simple, and there’s not enough detail to form a face. Too complex, and it becomes visual noise.”
To uncover this, the team developed an equation that models how people and algorithms detect illusory faces. When analyzing this equation, they found a clear “pareidolic peak” where the likelihood of seeing faces is highest, corresponding to images that have “just the right amount” of complexity. This predicted “Goldilocks zone” was then validated in tests with both real human subjects and AI face detection systems.
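The study’s actual equation is not reproduced in this article, but the qualitative shape it captures is easy to illustrate: a detection likelihood that rises with visual complexity and then falls again, peaking in between. A toy sketch, in which the functional form and parameter names are illustrative assumptions rather than the researchers’ model:

```python
import numpy as np

# Toy sketch only: not the paper's actual equation. Assume the probability
# of seeing a face rises with visual complexity c (more detail, more
# face-like features) but falls once the image becomes noise-like.
def p_face(c, k_signal=1.0, k_noise=0.5):
    """Illustrative likelihood of perceiving a face at complexity c >= 0."""
    return (1 - np.exp(-k_signal * c)) * np.exp(-k_noise * c)

c = np.linspace(0, 10, 1000)
peak = c[np.argmax(p_face(c))]  # the "pareidolic peak" of this toy curve
print(f"toy model peaks at complexity ~ {peak:.2f}")
```

Any curve with this rise-then-fall shape has a single interior maximum, which is the sense in which a “pareidolic peak” can be located and then tested against human and machine detection rates.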
This new dataset, “Faces in Things,” dwarfs those of previous studies that typically used only 20-30 stimuli. This scale allowed the researchers to explore how state-of-the-art face detection algorithms behaved after fine-tuning on pareidolic faces, showing that not only could these algorithms be edited to detect these faces, but that they could also act as a silicon stand-in for our own brain, allowing the team to ask and answer questions about the origins of pareidolic face detection that are impossible to ask in humans.
To build this dataset, the team curated approximately 20,000 candidate images from the LAION-5B dataset, which were then meticulously labeled and judged by human annotators. This process involved drawing bounding boxes around perceived faces and answering detailed questions about each face, such as the perceived emotion, age, and whether the face was accidental or intentional. “Gathering and annotating thousands of images was a monumental task,” says Hamilton. “Much of the dataset owes its existence to my mom,” a retired banker, “who spent countless hours lovingly labeling images for our analysis.”
The study also has potential applications in improving face detection systems by reducing false positives, which could have implications for fields like self-driving cars, human-computer interaction, and robotics. The dataset and models could also help areas like product design, where understanding and controlling pareidolia could create better products. “Imagine being able to automatically tweak the design of a car or a child’s toy so it looks friendlier, or ensuring a medical device doesn’t inadvertently appear threatening,” says Hamilton.
“It’s fascinating how humans instinctively interpret inanimate objects with human-like traits. For instance, when you glance at an electrical socket, you might immediately envision it singing, and you can even imagine how it would ‘move its lips.’ Algorithms, however, don’t naturally recognize these cartoonish faces in the same way we do,” says Hamilton. “This raises intriguing questions: What accounts for this difference between human perception and algorithmic interpretation? Is pareidolia beneficial or detrimental? Why don’t algorithms experience this effect as we do? These questions sparked our investigation, as this classic psychological phenomenon in humans had not been thoroughly explored in algorithms.”
As the researchers prepare to share their dataset with the scientific community, they’re already looking ahead. Future work may involve training vision-language models to understand and describe pareidolic faces, potentially leading to AI systems that can engage with visual stimuli in more human-like ways.
“This is a delightful paper! It is fun to read and it makes me think. Hamilton et al. propose a tantalizing question: Why do we see faces in things?” says Pietro Perona, the Allen E. Puckett Professor of Electrical Engineering at Caltech, who was not involved in the work. “As they point out, learning from examples, including animal faces, goes only half-way to explaining the phenomenon. I bet that thinking about this question will teach us something important about how our visual system generalizes beyond the training it receives through life.”
Hamilton and Freeman’s co-authors include Simon Stent, staff research scientist at the Toyota Research Institute; Ruth Rosenholtz, principal research scientist in the Department of Brain and Cognitive Sciences, NVIDIA research scientist, and former CSAIL member; and CSAIL affiliates postdoc Vasha DuTell, Anne Harrington MEng ’23, and Research Scientist Jennifer Corbett. Their work was supported, in part, by the National Science Foundation and the CSAIL MEnTorEd Opportunities in Research (METEOR) Fellowship, while being sponsored by the United States Air Force Research Laboratory and the United States Air Force Artificial Intelligence Accelerator. The MIT SuperCloud and Lincoln Laboratory Supercomputing Center provided HPC resources for the researchers’ results.
This work is being presented this week at the European Conference on Computer Vision.
Mars’ missing atmosphere could be hiding in plain sight
A new study shows Mars’ early thick atmosphere could be locked up in the planet’s clay surface.
Mars wasn’t always the cold desert we see today. There’s increasing evidence that water once flowed on the Red Planet’s surface, billions of years ago. And if there was water, there must also have been a thick atmosphere to keep that water from freezing. But sometime around 3.5 billion years ago, the water dried up, and the air, once heavy with carbon dioxide, dramatically thinned, leaving only the wisp of an atmosphere that clings to the planet today.
Where exactly did Mars’ atmosphere go? This question has been a central mystery of Mars’ 4.6-billion-year history.
For two MIT geologists, the answer may lie in the planet’s clay. In a paper appearing today in Science Advances, they propose that much of Mars’ missing atmosphere could be locked up in the planet’s clay-covered crust.
The team makes the case that, while water was present on Mars, the liquid could have trickled through certain rock types and set off a slow chain of reactions that progressively drew carbon dioxide out of the atmosphere and converted it into methane — a form of carbon that could be stored for eons in the planet’s clay surface.
Similar processes occur in some regions on Earth. The researchers used their knowledge of interactions between rocks and gases on Earth and applied that to how similar processes could play out on Mars. They found that, given how much clay is estimated to cover Mars’ surface, the planet’s clay could hold up to 1.7 bar of carbon dioxide, which would be equivalent to around 80 percent of the planet’s initial, early atmosphere.
It’s possible that this sequestered Martian carbon could one day be recovered and converted into propellant to fuel future missions between Mars and Earth, the researchers propose.
“Based on our findings on Earth, we show that similar processes likely operated on Mars, and that copious amounts of atmospheric CO2 could have transformed to methane and been sequestered in clays,” says study author Oliver Jagoutz, professor of geology in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “This methane could still be present and maybe even used as an energy source on Mars in the future.”
The study’s lead author is recent EAPS graduate Joshua Murray PhD ’24.
In the folds
Jagoutz’s group at MIT seeks to identify the geologic processes and interactions that drive the evolution of Earth’s lithosphere — the hard and brittle outer layer that comprises the crust and upper mantle and forms the tectonic plates.
In 2023, he and Murray focused on a type of surface clay mineral called smectite, which is known to be a highly effective trap for carbon. Within a single grain of smectite are a multitude of folds, within which carbon can sit undisturbed for billions of years. They showed that smectite on Earth was likely a product of tectonic activity, and that, once exposed at the surface, the clay minerals acted to draw down and store enough carbon dioxide from the atmosphere to cool the planet over millions of years.
Soon after the team reported their results, Jagoutz happened to look at a map of the surface of Mars and realized that much of that planet’s surface was covered in the same smectite clays. Could the clays have had a similar carbon-trapping effect on Mars, and if so, how much carbon could the clays hold?
“We know this process happens, and it is well-documented on Earth. And these rocks and clays exist on Mars,” Jagoutz says. “So, we wanted to try and connect the dots.”
“Every nook and cranny”
Unlike on Earth, where smectite is a consequence of continental plates shifting and uplifting to bring rocks from the mantle to the surface, there is no such tectonic activity on Mars. The team looked for ways in which the clays could have formed on Mars, based on what scientists know of the planet’s history and composition.
For instance, some remote measurements of Mars’ surface suggest that at least part of the planet’s crust contains ultramafic igneous rocks, similar to those that produce smectites through weathering on Earth. Other observations reveal geologic patterns similar to terrestrial rivers and tributaries, where water could have flowed and reacted with the underlying rock.
Jagoutz and Murray wondered whether water could have reacted with Mars’ deep ultramafic rocks in a way that would produce the clays that cover the surface today. They developed a simple model of rock chemistry, based on what is known of how igneous rocks interact with their environment on Earth.
They applied this model to Mars, where scientists believe the crust is mostly made up of igneous rock that is rich in the mineral olivine. The team used the model to estimate the changes that olivine-rich rock might undergo, assuming that water existed on the surface for at least a billion years, and the atmosphere was thick with carbon dioxide.
“At this time in Mars’ history, we think CO2 is everywhere, in every nook and cranny, and water percolating through the rocks is full of CO2 too,” Murray says.
Over about a billion years, water trickling through the crust would have slowly reacted with olivine — a mineral rich in a reduced form of iron. Oxygen from the water would have bound to that iron, releasing hydrogen and forming the red oxidized iron that gives the planet its iconic color. This free hydrogen would then have combined with carbon dioxide in the water to form methane. As this reaction progressed over time, olivine would have slowly transformed into another type of iron-rich rock known as serpentine, which then continued to react with water to form smectite.
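In textbook form, this chain resembles serpentinization followed by methanation, both well documented on Earth. Using the iron-rich end-member of olivine (fayalite) as a simplified stand-in — the study’s full reaction network is more involved — the two steps are:

$$
3\,\mathrm{Fe_2SiO_4} + 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{Fe_3O_4} + 3\,\mathrm{SiO_2} + 2\,\mathrm{H_2}
$$

$$
\mathrm{CO_2} + 4\,\mathrm{H_2} \rightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O}
$$

The first reaction oxidizes the iron in olivine (producing magnetite) and liberates hydrogen; the second consumes that hydrogen to convert dissolved carbon dioxide into methane.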
“These smectite clays have so much capacity to store carbon,” Murray says. “So then we used existing knowledge of how these minerals are stored in clays on Earth, and extrapolate to say, if the Martian surface has this much clay in it, how much methane can you store in those clays?”
He and Jagoutz found that if Mars is covered in a layer of smectite that is 1,100 meters deep, this amount of clay could store a huge amount of methane, equivalent to most of the carbon dioxide in the atmosphere that is thought to have disappeared since the planet dried up.
“We find that estimates of global clay volumes on Mars are consistent with a significant fraction of Mars’ initial CO2 being sequestered as organic compounds within the clay-rich crust,” Murray says. “In some ways, Mars’ missing atmosphere could be hiding in plain sight.”
“Where the CO2 went from an early, thicker atmosphere is a fundamental question in the history of the Mars atmosphere, its climate, and the habitability by microbes,” says Bruce Jakosky, professor emeritus of geology at the University of Colorado and principal investigator on the Mars Atmosphere and Volatile Evolution (MAVEN) mission, which has been orbiting and studying Mars’ upper atmosphere since 2014. Jakosky was not involved with the current study. “Murray and Jagoutz examine the chemical interaction of rocks with the atmosphere as a means of removing CO2. At the high end of our estimates of how much weathering has occurred, this could be a major process in removing CO2 from Mars’ early atmosphere.”
This work was supported, in part, by the National Science Foundation.
Startup helps people fall asleep by aligning audio signals with brainwaves
Elemind, founded by researchers from MIT, has developed a headband that uses acoustic stimulation to move people into a sleep state.
Do you ever toss and turn in bed after a long day, wishing you could just program your brain to turn off and get some sleep?
That may sound like science fiction, but that’s the goal of the startup Elemind, which is using an electroencephalogram (EEG) headband that emits acoustic stimulation aligned with people’s brainwaves to move them into a sleep state more quickly.
In a small study of adults with sleep onset insomnia, 30 minutes of stimulation from the device decreased the time it took them to fall asleep by 10 to 15 minutes. This summer, Elemind began shipping its product to a small group of users as part of an early pilot program.
The company, which was founded by MIT Professor Ed Boyden ’99, MEng ’99; David Wang ’05, SM ’10, PhD ’15; former postdoc Nir Grossman; former Media Lab research affiliate Heather Read; and Meredith Perry, plans to collect feedback from early users before making the device more widely available.
Elemind’s team believes their device offers several advantages over sleeping pills that can cause side effects and addiction.
“We wanted to create a nonchemical option for people who wanted to get great sleep without side effects, so you could get all the benefits of natural sleep without the risks,” says Perry, Elemind’s CEO. “There’s a number of people that we think would benefit from this device, whether you’re a breastfeeding mom that might not want to take a sleep drug, somebody traveling across time zones that wants to fight jet lag, or someone that simply wants to improve your next-day performance and feel like you have more control over your sleep.”
From research to product
Wang’s academic journey at MIT spanned nearly 15 years, during which he earned four degrees, culminating in a PhD in artificial intelligence in 2015. In 2014, Wang was co-teaching a class with Grossman when they began working together to noninvasively measure real-time biological oscillations in the brain and body. Through that work, they became fascinated with a technique for modulating the brain known as phase-locked stimulation, which uses precisely timed visual, physical, or auditory stimulation that lines up with brain activity.
“You’re measuring some kind of changing variable, and then you want to change your stimulus in real time in response to that variable,” explains Boyden, who pointed Wang and Grossman to a set of mathematical techniques that became some of the core intellectual property of Elemind.
Phase-locked stimulation has been used in conjunction with electrodes implanted in the brain to disrupt seizures and tremors for years. But in 2021, Wang, Grossman, Boyden, and their collaborators published a paper showing they could use electrical stimulation from outside the skull to suppress essential tremor syndrome, the most common adult movement disorder.
The results were promising, but the founders decided to start by proving their approach worked in a less regulated space: sleep. They developed a system to deliver auditory pulses timed to promote or suppress alpha oscillations in the brain, which are elevated in insomnia.
That kicked off a years-long product development process that led to the headband device Elemind uses today. The headband measures brainwaves through EEG and feeds the results into Elemind's proprietary algorithms, which are used to dynamically generate audio through a bone conduction driver. The moment the device detects that someone is asleep, the audio is slowly tapered out.
“We have a theory that the sound that we play triggers an auditory-evoked response in the brain,” Wang says. “That means we get your auditory cortex to basically release this voltage burst that sweeps across your brain and interferes with other regions. Some people who have worn Elemind call it a brain jammer. For folks that ruminate a lot before they go to sleep, their brains are actively running. This encourages their brain to quiet down.”
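Elemind’s real-time algorithm is proprietary, but the generic idea of phase-locked auditory stimulation can be sketched offline in a few lines. Everything below — the sample rate, filter choices, and function names — is an illustrative assumption; note also that `filtfilt` and `hilbert` are non-causal, so a wearable device would need a causal phase tracker (for example, an oscillator model) rather than this offline version:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 250  # assumed EEG sample rate, in Hz

def alpha_phase(eeg):
    """Instantaneous phase of the 8-12 Hz alpha band of an EEG trace."""
    b, a = butter(4, [8, 12], btype="bandpass", fs=FS)
    return np.angle(hilbert(filtfilt(b, a, eeg)))

def pulse_samples(eeg, target_phase=np.pi):
    """Sample indices where an audio pulse would be triggered: the
    moments when the alpha phase crosses the chosen target phase."""
    err = np.angle(np.exp(1j * (alpha_phase(eeg) - target_phase)))
    return np.where(np.diff(np.sign(err)) > 0)[0]  # upward zero crossings
```

The design choice that matters is the target phase: pulses timed to one phase of the oscillation tend to reinforce it, while pulses timed to the opposite phase tend to suppress it, which is how the same hardware can either promote or disrupt alpha activity.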
Beyond sleep
Elemind has established a collaboration with eight universities that allows researchers to explore the effectiveness of the company’s approach in a range of use cases, from tremors to memory formation, Alzheimer’s progression, and more.
“We’re not only developing this product, but also advancing the field of neuroscience by collecting high-resolution data to hopefully also help others conduct new research,” Wang says.
The collaborations have led to some exciting results. Researchers at McGill University found that using Elemind’s acoustic stimulation during sleep increased activity in areas of the cortex related to motor function and improved healthy adults’ performance in memory tasks. Other studies have shown the approach can be used to reduce essential tremors in patients and enhance sedation recovery.
Elemind is focused on its sleep application for now, but the company plans to develop other solutions, from medical interventions to memory and focus augmentation, as the science evolves.
“The vision is how do we move beyond sleep into what could ultimately become like an app store for the brain, where you can download a brain state like you download an app?” Perry says. “How can we make this a tool that can be applied to a bunch of different applications with a single piece of hardware that has a lot of different stimulation protocols?”
Research quantifying “nociception” could help improve management of surgical pain
New statistical models based on physiological data from more than 100 surgeries provide objective, accurate measures of the body’s subconscious perception of pain.
The degree to which a surgical patient’s subconscious processing of pain, or “nociception,” is properly managed by their anesthesiologist will directly affect the degree of post-operative drug side effects they’ll experience and the need for further pain management they’ll require. But pain is subjective and hard to measure, even when patients are awake, much less when they are unconscious.
In a new study appearing in the Proceedings of the National Academy of Sciences, MIT and Massachusetts General Hospital (MGH) researchers describe a set of statistical models that objectively quantified nociception during surgery. Ultimately, they hope to help anesthesiologists optimize drug dose and minimize post-operative pain and side effects.
The new models integrate data meticulously logged over 18,582 minutes of 101 abdominal surgeries in men and women at MGH. Led by Sandya Subramanian PhD ’21, an assistant professor at the University of California at Berkeley and the University of California at San Francisco, the researchers collected and analyzed data from five physiological sensors as patients experienced a total of 49,878 distinct “nociceptive stimuli” (such as incisions or cautery). Moreover, the team recorded what drugs were administered, and how much and when, to factor in their effects on nociception or cardiovascular measures. They then used all the data to develop a set of statistical models that performed well in retrospectively indicating the body’s response to nociceptive stimuli.
The team’s goal is to furnish such accurate, objective, and physiologically principled information in real time to anesthesiologists who currently have to rely heavily on intuition and past experience in deciding how to administer pain-control drugs during surgery. If anesthesiologists give too much, patients can experience side effects ranging from nausea to delirium. If they give too little, patients may feel excessive pain after they awaken.
“Sandya’s work has helped us establish a principled way to understand and measure nociception (unconscious pain) during general anesthesia,” says study senior author Emery N. Brown, the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience in The Picower Institute for Learning and Memory, the Institute for Medical Engineering and Science, and the Department of Brain and Cognitive Sciences at MIT. Brown is also an anesthesiologist at MGH and a professor at Harvard Medical School. “Our next objective is to make the insights that we have gained from Sandya’s studies reliable and practical for anesthesiologists to use during surgery.”
Surgery and statistics
The research began as Subramanian’s doctoral thesis project in Brown’s lab in 2017. The best prior attempts to objectively model nociception have relied either solely on the electrocardiogram (ECG, an indirect indicator of heart-rate variability) or on systems that incorporate more than one measurement, but those approaches were either based on lab experiments using pain stimuli that do not approach the intensity of surgical pain, or were validated by statistically aggregating just a few time points across multiple patients’ surgeries, Subramanian says.
“There’s no other place to study surgical pain except for the operating room,” Subramanian says. “We wanted to not only develop the algorithms using data from surgery, but also actually validate it in the context in which we want someone to use it. If we are asking them to track moment-to-moment nociception during an individual surgery, we need to validate it in that same way.”
So she and Brown worked to advance the state of the art by collecting multi-sensor data during the whole course of actual surgeries and by accounting for the confounding effects of the drugs administered. In that way, they hoped to develop a model that could make accurate predictions that remained valid for the same patient all the way through their operation.
Part of the improvement the team achieved arose from tracking patterns of heart rate and also skin conductance. Changes in both of these physiological factors can be indications of the body’s primal “fight or flight” response to nociception or pain, but some drugs used during surgery directly affect cardiovascular state, while skin conductance (or “EDA,” electrodermal activity) remains unaffected. The study measures not only ECG but also PPG (photoplethysmography), an optical measure of heart rate like the oxygen sensor on a smartwatch, because ECG signals can sometimes be made noisy by all the electrical equipment buzzing away in the operating room. Similarly, Subramanian backstopped the EDA measures with measures of skin temperature, to ensure that changes in skin conductance reflected nociception rather than the patient simply being too warm. The study also tracked respiration.
Then the authors performed statistical analyses to develop physiologically relevant indices from each of the cardiovascular and skin conductance signals. And once each index was established, further statistical analysis enabled tracking the indices together to produce models that could make accurate, principled predictions of when nociception was occurring and the body’s response.
Nailing nociception
In four versions of the model, Subramanian “supervised” the training by feeding in information on when actual nociceptive stimuli occurred, so that the models could learn the association between the physiological measurements and the incidence of pain-inducing events. These four versions varied in whether they included drug information and in statistical approach (either “linear regression” or “random forest”). In a fifth version of the model, based on a “state space” approach, she left the training unsupervised, meaning the model had to learn to infer moments of nociception purely from the physiological indices. She compared all five versions of her model to one of the current industry standards, an ECG-tracking model called ANI.
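As a hedged sketch of the supervised setup — the feature set, array shapes, and toy data below are illustrative assumptions, not the study’s actual pipeline — the random-forest variant amounts to learning a mapping from physiological indices to stimulus labels:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative stand-in data. X holds per-minute physiological indices
# (e.g., heart-rate, EDA, and respiration indices plus drug-infusion
# covariates); y marks minutes containing a nociceptive stimulus such
# as an incision or cautery.
rng = np.random.default_rng(0)
X = rng.normal(size=(18582, 6))      # minutes x indices (toy data)
y = rng.integers(0, 2, size=18582)   # stimulus labels (toy data)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X[:15000], y[:15000])                      # train on earlier data
nociception = model.predict_proba(X[15000:])[:, 1]   # 0-1 trace over time
```

The predicted probability trace is what an anesthesiologist would ultimately see: a continuous 0-to-1 estimate of the body’s nociceptive response over the course of the surgery.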
Each model’s output can be visualized as a graph plotting the predicted degree of nociception over time. ANI performed only slightly better than chance, though it does run in real time. The unsupervised model performed better than ANI, though not quite as well as the supervised models. The best-performing of those was one that incorporated drug information and used a “random forest” approach. Still, the authors note, the fact that the unsupervised model performed significantly better than chance suggests that there is indeed an objectively detectable signature of the body’s nociceptive state, even when looking across different patients.
“A state space framework using multisensory physiological observations is effective in uncovering this implicit nociceptive state with a consistent definition across multiple subjects,” wrote Subramanian, Brown, and their co-authors. “This is an important step toward defining a metric to track nociception without including nociceptive ‘ground truth’ information, most practical for scalability and implementation in clinical settings.”
Indeed, the next steps for the research are to increase the data sampling and to further refine the models so that they can eventually be put into practice in the operating room. That will require enabling them to predict nociception in real time, rather than in post-hoc analysis. When that advance is made, that will enable anesthesiologists or intensivists to inform their pain drug dosing judgements. Further into the future, the model could inform closed-loop systems that automatically dose drugs under the anesthesiologist’s supervision.
“Our study is an important first step toward developing objective markers to track surgical nociception,” the authors concluded. “These markers will enable objective assessment of nociception in other complex clinical settings, such as the ICU [intensive care unit], as well as catalyze future development of closed-loop control systems for nociception.”
In addition to Subramanian and Brown, the paper’s other authors are Bryan Tseng, Marcela del Carmen, Annekathryn Goodman, Douglas Dahl, and Riccardo Barbieri.
Funding from The JPB Foundation; The Picower Institute; George J. Elbaum ’59, SM ’63, PhD ’67; Mimi Jensen; Diane B. Greene SM ’78; Mendel Rosenblum; Bill Swanson; Cathy and Lou Paglia; annual donors to the Anesthesia Initiative Fund; the National Science Foundation; and an MIT Office of Graduate Education Collamore-Rogers Fellowship supported the research.
AI model can reveal the structures of crystalline materials
By analyzing X-ray crystallography data, the model could help researchers develop new materials for many applications, including batteries and magnets.
For more than 100 years, scientists have been using X-ray crystallography to determine the structure of crystalline materials such as metals, rocks, and ceramics.
This technique works best when the crystal is intact, but in many cases, scientists have only a powdered version of the material, which contains random fragments of the crystal. This makes it more challenging to piece together the overall structure.
MIT chemists have now come up with a new generative AI model that can make it much easier to determine the structures of these powdered crystals. The prediction model could help researchers characterize materials for use in batteries, magnets, and many other applications.
“Structure is the first thing that you need to know for any material. It’s important for superconductivity, it’s important for magnets, it’s important for knowing what photovoltaic you created. It’s important for any application that you can think of which is materials-centric,” says Danna Freedman, the Frederick George Keyes Professor of Chemistry at MIT.
Freedman and Jure Leskovec, a professor of computer science at Stanford University, are the senior authors of the new study, which appears today in the Journal of the American Chemical Society. MIT graduate student Eric Riesel and Yale University undergraduate Tsach Mackey are the lead authors of the paper.
Distinctive patterns
Crystalline materials, which include metals and most other inorganic solid materials, are made of lattices that consist of many identical, repeating units. These units can be thought of as “boxes” with a distinctive shape and size, with atoms arranged precisely within them.
When X-rays are beamed at these lattices, they diffract off atoms at different angles and with different intensities, revealing information about the positions of the atoms and the bonds between them. Since the early 1900s, this technique has been used to analyze materials, including biological molecules that have a crystalline structure, such as DNA and some proteins.
For materials that exist only as a powdered crystal, solving these structures becomes much more difficult because the fragments don’t carry the full 3D structure of the original crystal.
“The precise lattice still exists, because what we call a powder is really a collection of microcrystals. So, you have the same lattice as a large crystal, but they’re in a fully randomized orientation,” Freedman says.
For thousands of these materials, X-ray diffraction patterns exist but remain unsolved. To try to crack the structures of these materials, Freedman and her colleagues trained a machine-learning model on data from a database called the Materials Project, which contains more than 150,000 materials. First, they fed tens of thousands of these materials into an existing model that can simulate what the X-ray diffraction patterns would look like. Then, they used those patterns to train their AI model, which they call Crystalyze, to predict structures based on the X-ray patterns.
The model breaks the process of predicting structures into several subtasks. First, it determines the size and shape of the lattice “box” and which atoms will go into it. Then, it predicts the arrangement of atoms within the box. For each diffraction pattern, the model generates several possible structures, which can be tested by feeding the structures into a model that determines diffraction patterns for a given structure.
“Our model is generative AI, meaning that it generates something that it hasn’t seen before, and that allows us to generate several different guesses,” Riesel says. “We can make a hundred guesses, and then we can predict what the powder pattern should look like for our guesses. And then if the input looks exactly like the output, then we know we got it right.”
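In code terms, that generate-and-check loop might look something like the sketch below, where `generative_model` and `simulate_powder_pattern` are hypothetical stand-ins for Crystalyze’s trained generator and a forward diffraction simulator.

```python
# Schematic generate-and-check loop in the spirit of Riesel's description.
# `generative_model` and `simulate_powder_pattern` are hypothetical
# stand-ins for Crystalyze's trained generator and a forward simulator.
import numpy as np

def solve_structure(observed_pattern, generative_model,
                    simulate_powder_pattern, n_guesses=100):
    """Return the candidate whose simulated powder X-ray diffraction
    pattern best matches the observed one."""
    best_structure, best_error = None, np.inf
    for _ in range(n_guesses):
        # 1) Sample a candidate: a lattice "box" plus atom placements.
        candidate = generative_model.sample(observed_pattern)
        # 2) Simulate the powder pattern that candidate would produce.
        predicted = simulate_powder_pattern(candidate)
        # 3) Keep the candidate whose output best matches the input.
        error = np.mean((predicted - observed_pattern) ** 2)
        if error < best_error:
            best_structure, best_error = candidate, error
    return best_structure, best_error
```

The appeal of this design is that the check is cheap: computing a powder pattern from a candidate structure is far easier than inverting a pattern directly, so the model can afford to make many guesses.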
Solving unknown structures
The researchers tested the model on several thousand simulated diffraction patterns from the Materials Project that had been held out of the training data. They also tested it on more than 100 experimental diffraction patterns from the RRUFF database, which contains powdered X-ray diffraction data for nearly 14,000 natural crystalline minerals; on these experimental data, the model was accurate about 67 percent of the time. Then, they began testing the model on diffraction patterns that hadn’t been solved before. These data came from the Powder Diffraction File, which contains diffraction data for more than 400,000 solved and unsolved materials.
Using their model, the researchers came up with structures for more than 100 of these previously unsolved patterns. They also used their model to discover structures for three materials that Freedman’s lab created by forcing elements that do not react at atmospheric pressure to form compounds under high pressure. This approach can be used to generate new materials that have radically different crystal structures and physical properties, even though their chemical composition is the same.
Graphite and diamond — both made of pure carbon — are examples of such materials. The materials that Freedman has developed, which each contain bismuth and one other element, could be useful in the design of new materials for permanent magnets.
“We found a lot of new materials from existing data, and most importantly, solved three unknown structures from our lab that comprise the first new binary phases of those combinations of elements,” Freedman says.
Being able to determine the structures of powdered crystalline materials could help researchers working in nearly any materials-related field, according to the MIT team, which has posted a web interface for the model at crystalyze.org.
The research was funded by the U.S. Department of Energy and the National Science Foundation.
Improving biology education here, there, and everywhere
At the cutting edge of pedagogy, Mary Ellen Wiltrout has shaped blended and online learning at MIT and beyond.
When she was a child, Mary Ellen Wiltrout PhD ’09 didn’t want to follow in her mother’s footsteps as a K-12 teacher. Growing up in southwestern Pennsylvania, Wiltrout was studious, with an early interest in science — and ended up pursuing biology as a career.
But following her doctorate at MIT, she pivoted toward education after all. Now, as the director of blended and online initiatives and a lecturer with the Department of Biology, she’s shaping biology pedagogy at MIT and beyond.
Establishing MOOCs at MIT
To this day, E.C. Whitehead Professor of Biology and Howard Hughes Medical Institute (HHMI) investigator emeritus Tania Baker considers creating a permanent role for Wiltrout one of the most consequential decisions she made as department head.
Since launching the very first MITxBio massive open online course, 7.00x (Introduction to Biology – The Secret of Life), with professor of biology Eric Lander in 2013, Wiltrout’s team has worked with MIT Open Learning and biology faculty to build an award-winning repertoire of MITxBio courses.
MITxBio courses are currently hosted on the learning platform edX, established by MIT and Harvard University in 2012, which today connects 86 million people worldwide to online learning opportunities. Within MITxBio, Wiltrout leads a team of instructional staff and students to develop online learning experiences for MIT students and the public while researching effective methods for learner engagement and course design.
“Mary Ellen’s approach has an element of experimentation that embodies a very MIT ethos: applying rigorous science to creatively address challenges with far-reaching impact,” says Darcy Gordon, instructor of blended and online initiatives.
Mentee to motivator
Wiltrout was inspired to pursue both teaching and research by the late geneticist Elizabeth “Beth” Jones at Carnegie Mellon University, where Wiltrout earned a degree in biological sciences and served as a teaching assistant in lab courses.
“I thought it was a lot of fun to work with students, especially at the higher level of education, and especially with a focus on biology,” Wiltrout recalls, noting she developed her love of teaching in those early experiences.
Though her research advisor at the time discouraged her from teaching, Jones assured Wiltrout that it was possible to pursue both.
Jones, who received her postdoctoral training with late Professor Emeritus Boris Magasanik at MIT, encouraged Wiltrout to apply to the Institute and join American Cancer Society and HHMI Professor Graham Walker’s lab. In 2009, Wiltrout earned a PhD in biology for thesis work in the Walker lab, where she continued to learn from enthusiastic mentors.
“When I joined Graham’s lab, everyone was eager to teach and support a new student,” she reflects. After watching Walker aid a struggling student, Wiltrout was further affirmed in her choice. “I knew I could go to Graham if I ever needed to.”
After graduation, Wiltrout taught molecular biology at Harvard for a few years until Baker facilitated her move back to MIT. Now, she’s a resource for faculty, postdocs, and students.
“She is an incredibly rich source of knowledge for everything from how to implement the increasingly complex tools for running a class to the best practices for ensuring a rigorous and inclusive curriculum,” says Iain Cheeseman, the Herman and Margaret Sokol Professor of Biology and associate head of the biology department.
Stephen Bell, the Uncas and Helen Whitaker Professor of Biology and instructor of the Molecular Biology series of MITxBio courses, notes Wiltrout is known for staying on the “cutting edge of pedagogy.”
“She has a comprehensive knowledge of new online educational tools and is always ready to help any professor to implement them in any way they wish,” he says.
Gordon finds Wiltrout’s experiences as a biologist and learning engineer instrumental to her own professional development and a model for their colleagues in science education.
“Mary Ellen has been an incredibly supportive supervisor. She facilitates a team environment that centers on frequent feedback and iteration,” says Tyler Smith, instructor for pedagogy training and biology.
Prepared for the pandemic, and beyond
Wiltrout believes blended learning, combining in-person and online components, is the best path forward for education at MIT. Building personal relationships in the classroom is critical, but online material and supplemental instruction are also key to providing immediate feedback, formative assessments, and other evidence-based learning practices.
“A lot of people have realized that they can’t ignore online learning anymore,” Wiltrout noted during an interview on The Champions Coffee Podcast in 2023. That couldn’t have been truer than in 2020, when academic institutions were forced to suddenly shift to virtual learning.
“When Covid hit, we already had all the infrastructure in place,” Baker says. “Mary Ellen helped not just our department, but also contributed to MIT education’s survival through the pandemic.”
For Wiltrout’s efforts, she received a COVID-19 Hero Award, a recognition from the School of Science for staff members who went above and beyond during that extraordinarily difficult time.
“Mary Ellen thinks deeply about how to create the best learning opportunities possible,” says Cheeseman, one of almost a dozen faculty members who nominated her for the award.
Recently, Wiltrout expanded beyond higher education and into high schools, taking on several interns in collaboration with Empowr, a nonprofit organization that teaches software development skills to Black students to create a school-to-career pipeline. Wiltrout is proud to report that one of these interns is now a student at MIT in the class of 2028.
Looking forward, Wiltrout aims to stay ahead of the curve with the latest educational technology and is excited to see how modern tools can be incorporated into education.
“Everyone is pretty certain that generative AI is going to change education,” she says. “We need to be experimenting with how to take advantage of technology to improve learning.”
Ultimately, she is grateful to continue developing her career at MIT biology.
“It’s exciting to come back to the department after being a student and to work with people as colleagues to produce something that has an impact on what they’re teaching current MIT students and sharing with the world for further reach,” she says.
As for Wiltrout’s own daughter, she’s declared she would like to follow in her mother’s footsteps — a fitting symbol of Wiltrout’s impact on the future of education.
Liftoff: The Climate Project at MIT takes flight
The major effort to accelerate practical climate change solutions launches as its mission directors meet the Institute community.
The leaders of The Climate Project at MIT met with community members at a campus forum on Monday, helping to kick off the Institute’s major new effort to accelerate and scale up climate change solutions.
“The Climate Project is a whole-of-MIT mobilization,” MIT President Sally Kornbluth said in her opening remarks. “It’s designed to focus the Institute’s talent and resources so that we can achieve much more, faster, in terms of real-world impact, from mitigation to adaptation.”
The event, “Climate Project at MIT: Launching the Missions,” drew a capacity crowd to MIT’s Samberg Center.
While the Climate Project has a number of facets, a central component of the effort consists of its six “missions,” broad areas where MIT researchers will seek to identify gaps in the global climate response that MIT can help fill, and then launch and execute research and innovation projects aimed at those areas. Each mission is led by campus faculty, and Monday’s event represented the first public conversation between the mission directors and the larger campus community.
“Today’s event is an important milestone,” said Richard Lester, MIT’s interim vice president for climate and the Japan Steel Industry Professor of Nuclear Science and Engineering, who led the Climate Project’s formation. He praised Kornbluth’s sustained focus on climate change as a leading priority for MIT.
“The reason we’re all here is because of her leadership and vision for MIT,” Lester said. “We’re also here because the MIT community — our faculty, our staff, our students — has made it abundantly clear that it wants to do more, much more, to help solve this great problem.”
The mission directors themselves emphasized the need for deep community involvement in the project — and that the Climate Project is designed to facilitate researcher-driven enterprise across campus.
“There’s a tremendous amount of urgency,” said Elsa Olivetti PhD ’07, director of the Decarbonizing Energy and Industry mission, during an onstage discussion. “We all need to do everything we can, and roll up our sleeves and get it done.” Olivetti, the Jerry McAfee Professor in Engineering, has been a professor of materials science and engineering at the Institute since 2014.
“What’s exciting about this is the chance of MIT really meeting its potential,” said Jesse Kroll, co-director of the mission for Restoring the Atmosphere, Protecting the Land and Oceans. Kroll is the Peter de Florez Professor in MIT’s Department of Civil and Environmental Engineering, a professor of chemical engineering, and the director of the Ralph M. Parsons Laboratory.
MIT, Kroll noted, features “so much amazing work going on in all these different aspects of the problem. Science, engineering, social science … we put it all together and there is huge potential, a huge opportunity for us to make a difference.”
MIT has pledged an initial $75 million to the Climate Project, including $25 million from the MIT Sloan School of Management for a complementary effort, the MIT Climate Policy Center. However, the Institute is anticipating that it will also build new connections with outside partners, whose role in implementing and scaling Climate Project solutions will be critical.
Monday’s event included a keynote talk from Brian Deese, currently the MIT Innovation and Climate Impact Fellow and the former director of the White House National Economic Council in the Biden administration.
“The magnitude of the risks associated with climate change is extraordinary,” Deese said. However, he added, “these are solvable issues. In fact, the energy transition globally will be the greatest economic opportunity in human history. … It has the potential to actually lift people out of poverty, it has the potential to drive international cooperation, it has the potential to drive innovation and improve lives — if we get this right.”
Deese’s remarks centered on a call for the U.S. to develop a current-day climate equivalent of the Marshall Plan, the U.S. initiative to provide aid to Western Europe after World War II. He also suggested three characteristics of successful climate projects, noting that many would be interdisciplinary in nature and would “engage with policy early in the design process” to become feasible.
In addition to those features, Deese said, people need to “start and end with very high ambition” when working on climate solutions. He added: “The good thing about MIT and our community is that we, you, have done this before. We’ve got examples where MIT has taken something that seemed completely improbable and made it possible, and I believe that part of what is required of this collective effort is to keep that kind of audacious thinking at the top of our mind.”
The MIT mission directors all participated in an onstage discussion moderated by Somini Sengupta, the international climate reporter on the climate team of The New York Times. Sengupta asked the group about a wide range of topics, from their roles and motivations to the political constraints on global climate progress, and more.
Andrew Babbin, co-director of the mission for Restoring the Atmosphere, Protecting the Land and Oceans, defined part of the task of the MIT missions as “identifying where those gaps of knowledge are and filling them rapidly,” something he believes is “largely not doable in the conventional way,” based on small-scale research projects. Instead, suggested Babbin, who is the Cecil and Ida Green Career Development Professor in MIT’s Program in Atmospheres, Oceans, and Climate, the collective input of research and innovation communities could help zero in on undervalued approaches to climate action.
Some innovative concepts, the mission directors noted, can be tried out on the MIT campus, in an effort to demonstrate how more sustainable infrastructure and systems can operate at scale.
“That is absolutely crucial,” said Christoph Reinhart, director of the Building and Adapting Healthy, Resilient Cities mission, expressing the need to have the campus reach net-zero emissions. Reinhart is the Alan and Terri Spoon Professor of Architecture and Climate and director of MIT’s Building Technology Program in the School of Architecture and Planning.
In response to queries from Sengupta, the mission directors affirmed that the Climate Project needs to develop solutions that can work in different societies around the world, while acknowledging that there are many political hurdles to worldwide climate action.
“Any kind of quality engaged projects that we’ve done with communities, it’s taken years to build trust. … How you scale that without compromising is the challenge I’m faced with,” said Miho Mazereeuw, director of the Empowering Frontline Communities mission, an associate professor of architecture and urbanism, and director of MIT’s Urban Risk Lab.
“I think we will impact different communities in different parts of the world in different ways,” said Benedetto Marelli, an associate professor in MIT’s Department of Civil and Environmental Engineering, adding that it would be important to “work with local communities [and] engage stakeholders, and at the same time, use local brains to solve the problem.” The mission he directs, Wild Cards, is centered on identifying unconventional solutions that are high risk and also high reward.
Any climate program “has to be politically feasible, it has to be in separate nations’ self-interest,” said Christopher Knittel, mission director for Inventing New Policy Approaches. In an ever-shifting political world, he added, that means people must “think about not just the policy but the resiliency of the policy.” Knittel is the George P. Shultz Professor and professor of applied economics at the MIT Sloan School of Management, director of the MIT Climate Policy Center, and associate dean for Climate and Sustainability.
In all, MIT has more than 300 faculty and senior researchers who, along with their students and staff, are already working on climate issues.
Kornbluth, for her part, referred to MIT’s first-year students while discussing the larger motivations for taking concerted action to address the challenges of climate change. It might be easy for younger people to despair over the world’s climate trajectory, she noted, but the best response to that includes seeking new avenues for climate progress.
“I understand their anxiety and concern,” Kornbluth said. “But I have no doubt at all that together, we can make a difference. I believe that we have a special obligation to the new students and their entire generation to do everything we can to create a positive change. The most powerful antidote to defeat and despair is collective action.”
Bridging the heavens and Earth
EAPS PhD student Jared Bryan found a way to use his research on earthquakes to help understand exoplanet migration.
When Jared Bryan talks about his seismology research, it’s with a natural finesse. He’s a fifth-year PhD student working with MIT Assistant Professor William Frank, drawn in by the lab’s combination of GPS observations, satellites, and seismic station data to understand the underlying physics of earthquakes. He has no trouble talking about seismic velocity in fault zones or how he first became interested in the field after summer internships with the Southern California Earthquake Center as an undergraduate student.
“It’s definitely like a more down-to-earth kind of seismology,” he jokes. It’s an odd comment — where else could earthquakes be but on Earth? But Bryan has just finished a research project, culminating in a new paper — published today in Nature Astronomy — that involves seismic activity not on Earth, but on stars.
Building curiosity
PhD students in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) are required to complete two research projects as part of their general exam. The first is typically in their main area of research and lays the foundation for what will become their thesis work.
But the second project has a special requirement: It must be in a different specialty.
“Having that built into the structure of the PhD is really, really nice,” says Bryan, who hadn’t known about the special requirement when he decided to come to EAPS. “I think it helps you build curiosity and find what's interesting about what other people are doing.”
Having so many different, yet still related, fields of study housed in one department makes it easier for students with a strong sense of curiosity to explore the interconnected interactions of Earth science.
“I think everyone here is excited about a lot of different stuff, but we can’t do everything,” says Frank, the Victor P. Starr Career Development Professor of Geophysics. “This is a great way to get students to try something else that they maybe would have wanted to do in a parallel dimension, interact with other advisors, and see that science can be done in different ways.”
At first, Bryan was worried that the nature of the second project would be a restrictive diversion from his main PhD research. But Associate Professor Julien de Wit was looking for someone with a seismology background to look at some stellar observations he’d collected back in 2016. A star’s brightness was pulsating at a very specific frequency that had to be caused by changes in the star itself, so Bryan decided to help.
“I was surprised by how the kind of seismology that he was looking for was similar to the seismology that we were first doing in the ’60s and ’70s, like large-scale global Earth seismology,” says Bryan. “I thought it would be a way to rethink the foundations of the field that I had been studying applied to a new region.”
Going from earthquakes to starquakes is not a one-to-one comparison. While the foundational knowledge carries over, the movement of stars arises from a variety of sources, such as magnetism and the Coriolis effect, and takes a variety of forms. In addition to the sound and pressure waves familiar from earthquakes, stars also support gravity waves, and all of this plays out on a much more massive scale.
“You have to stretch your mind a bit, because you can’t actually visit these places,” Bryan says. “It’s an unbelievable luxury that we have in Earth seismology that the things that we study are on Google Maps.”
But there are benefits to bringing in scientists from outside an area of expertise. De Wit, who served as Bryan’s supervisor for the project and is also an author on the paper, points out that they bring a fresh perspective and approach by asking unique questions.
“Things that people in the field would just take for granted are challenged by their questions,” he says, adding that Bryan was transparent about what he did and didn’t know, allowing for a rich exchange of information.
Tidal resonance locking
Bryan eventually found that the changes in the star’s brightness were caused by tidal resonance. Resonance is a physical phenomenon in which waves interact and amplify one another. The most common analogy is pushing someone on a swing set: when the person pushing does so at just the right moments, the person on the swing goes higher.
“Tidal resonance is where you’re pushing at exactly the same frequency as they’re swinging, and the locking happens when both of those frequencies are changing,” Bryan explains. The person pushing the swing gets tired and pushes less often, while the chain of the swing changes length. (Bryan jokes that here the analogy starts to break down.)
As a star changes over the course of its lifetime, tidal resonance locking can cause hot Jupiters, which are massive exoplanets that orbit very close to their host stars, to change orbital distances. This wandering migration, as the researchers call it, explains how some hot Jupiters get so close to their host stars. They also found that the migration is not always smooth: it can speed up, slow down, or even reverse.
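In schematic terms, resonance locking is a frequency-matching condition that the orbit must keep satisfying as the star evolves. The block below is a minimal sketch assuming a circular orbit and the dominant quadrupole tide driving a single stellar mode; the notation is illustrative rather than the paper’s own.

```latex
% Tidal forcing frequency for a circular orbit (dominant quadrupole tide):
\[
\omega_{\mathrm{tide}} \simeq 2\,\Omega_{\mathrm{orb}}
                        = 2\sqrt{\frac{G M_\star}{a^{3}}},
\qquad
\text{locking:}\quad
\frac{d\omega_{\mathrm{mode}}}{dt} = \frac{d\omega_{\mathrm{tide}}}{dt}.
\]
% Because omega_tide scales as a^(-3/2), staying locked to an evolving
% stellar mode frequency forces the orbit itself to migrate:
\[
\frac{da}{dt} = -\frac{2a}{3}\,
                \frac{1}{\omega_{\mathrm{tide}}}\,
                \frac{d\omega_{\mathrm{mode}}}{dt}.
\]
```

Depending on how the mode frequency drifts as the star evolves, a locked planet is dragged inward or outward, which is one way to picture the wandering migration described above.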
An important implication from the paper is that tidal resonance locking could be used as an exoplanet detection tool, confirming de Wit’s hypothesis from the original 2016 observation that the pulsations had the potential to be used in such a way. If changes in the star’s brightness can be linked to this resonance locking, it may indicate planets that can’t be detected using current methods.
As below, so above
Most EAPS PhD students don’t advance their project beyond the requirements for the general exam, let alone get a paper out of it. At first, Bryan worried that continuing with it would end up being a distraction from his main work, but ultimately was glad that he committed to it and was able to contribute something meaningful to the emerging field of asteroseismology.
“I think it’s evidence that Jared is excited about what he does and has the drive and scientific skepticism to have done the extra steps to make sure that what he was doing was a real contribution to the scientific literature,” says Frank. “He’s a great example of success and what we hope for our students.”
While de Wit didn’t manage to convince Bryan to switch to exoplanet research permanently, he is “excited that there is the opportunity to keep on working together.”
Once he finishes his PhD, Bryan plans on continuing in academia as a professor running a research lab, shifting his focus onto volcano seismology and improving instrumentation for the field. He’s open to the possibility of taking his findings on Earth and applying them to volcanoes on other planetary bodies, such as those found on Venus and Jupiter’s moon Io.
“I’d like to be the bridge between those two things,” he says.
MIT OpenCourseWare sparks the joy of deep understanding
With the help of MIT’s online resources, Doğa Kürkçüoğlu, now a staff scientist at Fermilab, was able to pursue his passion for physics.
From a young age, Doğa Kürkçüoğlu heard his father, a math teacher, say that learning should be about understanding and real-world applications rather than memorization. But it wasn’t until he began exploring MIT OpenCourseWare in 2004 that Kürkçüoğlu experienced what it means to truly understand complex subject matter.
“MIT professors showed me how to look at a concept from different angles that I hadn’t before, and that helped me internalize information,” says Kürkçüoğlu, who turned to MIT OpenCourseWare to supplement what he was learning as an undergraduate studying physics. “Once I understood techniques and concepts, I was able to apply them in different disciplines. Even now, there are many equations I don’t have memorized exactly, but because I understand the underlying ideas, I can derive them myself in just a few minutes.”
Though there was a point in his life when friends and classmates thought he might pursue music, Kürkçüoğlu — a skilled violinist who currently plays in a jazz band on the side — always had a passion for math and physics and was determined to learn everything he could to pursue the career he imagined for himself.
“Even when I was 4 or 5 years old, if someone asked me, ‘what do you want to be when you grow up?’ I would say a scientist or mathematician,” says Kürkçüoğlu, who is now a staff scientist at Fermilab in the Superconducting Quantum Materials and Systems Center. Fermilab is the U.S. Department of Energy laboratory for particle physics and accelerator research. “I feel lucky that I actually get to do the job I imagined as a little kid,” Kürkçüoğlu says.
OpenCourseWare and other resources from MIT Open Learning — including courses, lectures, written guides, and problem sets — played an important role in Kürkçüoğlu’s learning journey and career. He turned to these open educational resources throughout his undergraduate studies at Marmara University in Turkey. When he completed his degree in 2008, Kürkçüoğlu set his sights on a PhD. He says he felt ready to dive right into doctoral-level research thanks to so many MIT OpenCourseWare lectures, courses, and study guides. He started a PhD program at Georgia Tech, where his research focused on theoretical condensed matter physics with ultra-cold atoms.
“Without OpenCourseWare, I could not have done that,” he says, adding that he considers himself “an honorary MIT graduate.”
Memorable courses include particle physics with Iain W. Stewart, the Otto (1939) and Jane Morningstar Professor of Science and director of the Center for Theoretical Physics; and Statistical Mechanics of Fields with Mehran Kardar, professor of physics. Learning from Kardar felt especially apt, because Kürkçüoğlu’s undergraduate advisor, Nihat Berker, was Kardar’s PhD advisor. Berker is also an emeritus professor of physics at MIT.
Once he completed his PhD in 2015, Kürkçüoğlu spent time as an assistant professor at Georgia Southern University and a postdoc at Los Alamos National Laboratory. He joined Fermilab in 2020. There, he works on quantum theory and quantum algorithms. He enjoys the research-focused atmosphere of a national laboratory, where teams of scientists are working toward tangible goals.
When he was teaching, though, he encouraged his students to check out Open Learning resources.
“I would tell them, first of all, to have fun. Learning should be fun — another idea that my father always encouraged as a math teacher. With OpenCourseWare, you can get a new perspective on something you already know about, or open a course that can expand your horizons,” Kürkçüoğlu says. “Depending on where you start, it might take you an hour, a week, or a month to fully understand something. Once you understand, it’s yours. It is a different kind of joy to actually, truly understand.”
A wobble from Mars could be sign of dark matter, MIT study finds
Watching for changes in the Red Planet’s orbit over time could be a new way to detect passing dark matter.
In a new study, MIT physicists propose that if most of the dark matter in the universe is made up of microscopic primordial black holes — an idea first proposed in the 1970s — then these gravitational dwarfs should zoom through our solar system at least once per decade. A flyby like this, the researchers predict, would introduce a wobble into Mars’ orbit, to a degree that today’s technology could actually detect.
Such a detection could lend support to the idea that primordial black holes are a primary source of dark matter throughout the universe.
“Given decades of precision telemetry, scientists know the distance between Earth and Mars to an accuracy of about 10 centimeters,” says study author David Kaiser, professor of physics and the Germeshausen Professor of the History of Science at MIT. “We’re taking advantage of this highly instrumented region of space to try and look for a small effect. If we see it, that would count as a real reason to keep pursuing this delightful idea that all of dark matter consists of black holes that were spawned in less than a second after the Big Bang and have been streaming around the universe for 14 billion years.”
Kaiser and his colleagues report their findings today in the journal Physical Review D. The study’s co-authors are lead author Tung Tran ’24, who is now a graduate student at Stanford University; Sarah Geller ’12, SM ’17, PhD ’23, who is now a postdoc at the University of California at Santa Cruz; and MIT Pappalardo Fellow Benjamin Lehmann.
Beyond particles
Less than 20 percent of all physical matter is made from visible stuff, from stars and planets to the kitchen sink. The rest is composed of dark matter, a hypothetical form of matter that is invisible across the entire electromagnetic spectrum yet is thought to pervade the universe and exert a gravitational force large enough to affect the motion of stars and galaxies.
Physicists have erected detectors on Earth to try to spot dark matter and pin down its properties. For the most part, these experiments assume that dark matter exists as a form of exotic particle that might scatter and decay into observable particles as it passes through a given experiment. But so far, such particle-based searches have come up empty.
In recent years, another possibility, first introduced in the 1970s, has regained traction: Rather than taking on a particle form, dark matter could exist as microscopic, primordial black holes that formed in the first moments following the Big Bang. Unlike the astrophysical black holes that form from the collapse of old stars, primordial black holes would have formed from the collapse of dense pockets of gas in the very early universe and would have scattered across the cosmos as the universe expanded and cooled.
These primordial black holes would have collapsed an enormous amount of mass into a tiny space. The majority of these primordial black holes could be as small as a single atom and as heavy as the largest asteroids. It would be conceivable, then, that such tiny giants could exert a gravitational force that could explain at least a portion of dark matter. For the MIT team, this possibility raised an initially frivolous question.
“I think someone asked me what would happen if a primordial black hole passed through a human body,” recalls Tung, who did a quick pencil-and-paper calculation to find that if such a black hole zinged within 1 meter of a person, the force of the black hole would push the person 6 meters, or about 20 feet, away in a single second. Tung also found that the odds were astronomically unlikely that a primordial black hole would pass anywhere near a person on Earth.
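That estimate can be reproduced on the back of an envelope with the standard impulse approximation, in which a mass M passing at speed v with closest approach b imparts a velocity kick of about 2GM/(bv). The sketch below uses an illustrative black-hole mass, not the paper’s exact numbers.

```python
# Back-of-envelope version of that estimate, using the standard impulse
# approximation dv ~ 2GM/(b*v) for a fast gravitational flyby. The
# black-hole mass is an illustrative assumption, not the paper's value.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 1.0e16      # black-hole mass, kg (assumed for illustration)
b = 1.0         # impact parameter: closest approach to the person, m
v = 2.4e5       # flyby speed, m/s (about 150 miles per second)

dv = 2 * G * M / (b * v)   # velocity kick imparted to the person
# Prints ~5.6 m/s, i.e., a displacement of roughly 6 meters within the
# first second, consistent with the figure quoted above.
print(f"velocity kick: {dv:.1f} m/s")
```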
Their interest piqued, the researchers took Tung’s calculations a step further, to estimate how a black hole flyby might affect much larger bodies such as the Earth and the moon.
“We extrapolated to see what would happen if a black hole flew by Earth and caused the moon to wobble by a little bit,” Tung says. “The numbers we got were not very clear. There are many other dynamics in the solar system that could act as some sort of friction to cause the wobble to dampen out.”
Close encounters
To get a clearer picture, the team generated a relatively simple simulation of the solar system that incorporates the orbits and gravitational interactions among all the planets and some of the largest moons.
“State-of-the-art simulations of the solar system include more than a million objects, each of which has a tiny residual effect,” Lehmann notes. “But even modeling two dozen objects in a careful simulation, we could see there was a real effect that we could dig into.”
The team worked out the rate at which a primordial black hole should pass through the solar system, based on the amount of dark matter estimated to reside in a given region of space and the mass of a passing black hole, which, in this case, they assumed to be as massive as the largest asteroids in the solar system, consistent with other astrophysical constraints.
“Primordial black holes do not live in the solar system. Rather, they’re streaming through the universe, doing their own thing,” says co-author Sarah Geller. “And the probability is, they’re going through the inner solar system at some angle once every 10 years or so.”
Given this rate, the researchers simulated various asteroid-mass black holes flying through the solar system, from various angles, and at velocities of about 150 miles per second. (The directions and speeds come from other studies of the distribution of dark matter throughout our galaxy.) They zeroed in on those flybys that appeared to be “close encounters,” or instances that caused some sort of effect in surrounding objects. They quickly found that any effect in the Earth or the moon was too uncertain to pin to a particular black hole. But Mars seemed to offer a clearer picture.
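As a rough illustration of this kind of numerical experiment, the sketch below integrates a stripped-down, two-dimensional Sun-Earth-Mars system with a leapfrog scheme, injects a single asteroid-mass black hole on a fast pass, and measures how far Mars’ trajectory deviates from a no-flyby run. Every number here, from the black hole’s mass to its entry point, is an illustrative assumption; the team’s actual simulations treated more bodies and effects with far greater care.

```python
# Stripped-down version of the experiment described above: integrate a
# two-dimensional Sun-Earth-Mars system with and without an injected
# black-hole flyby, then measure the deviation in Mars' trajectory.
# All values are illustrative assumptions, not the study's setup.
import numpy as np

G = 6.674e-11
AU = 1.496e11
DAY = 86400.0

def run(include_bh, years=5, dt=DAY / 4):
    m = np.array([1.989e30, 5.97e24, 6.42e23])      # Sun, Earth, Mars (kg)
    x = np.array([[0.0, 0.0], [AU, 0.0], [1.52 * AU, 0.0]])
    v = np.array([[0.0, 0.0], [0.0, 29.78e3], [0.0, 24.07e3]])
    if include_bh:
        m = np.append(m, 1e18)                      # asteroid-mass BH (assumed)
        x = np.vstack([x, [3.0 * AU, 1.6 * AU]])    # assumed entry point
        v = np.vstack([v, [-2.4e5, 0.0]])           # ~150 miles/s, inbound

    def accel(x):
        a = np.zeros_like(x)
        for i in range(len(m)):
            for j in range(len(m)):
                if i != j:
                    r = x[j] - x[i]
                    a[i] += G * m[j] * r / np.linalg.norm(r) ** 3
        return a

    mars_path = []
    a = accel(x)
    for _ in range(int(years * 365 * DAY / dt)):    # leapfrog (kick-drift-kick)
        v += 0.5 * dt * a
        x += dt * v
        a = accel(x)
        v += 0.5 * dt * a
        mars_path.append(x[2].copy())
    return np.array(mars_path)

# Deviation of Mars' position caused by the flyby, sampled at matched times.
delta = np.linalg.norm(run(True) - run(False), axis=1)
print(f"max deviation over the run: {delta.max():.2f} m")
```

Differencing two otherwise identical runs isolates the flyby’s effect, which is the same basic logic behind attributing a wobble in Mars’ orbit to a close encounter.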
The researchers found that if a primordial black hole were to pass within a few hundred million miles of Mars, the encounter would set off a “wobble,” or a slight deviation in Mars’ orbit. Within a few years of such an encounter, Mars’ orbit should shift by about a meter — an incredibly small wobble, given the planet is more than 140 million miles from Earth. And yet, this wobble could be detected by the various high-precision instruments that are monitoring Mars today.
If such a wobble were detected in the next couple of decades, the researchers acknowledge there would still be much work needed to confirm that the push came from a passing black hole rather than a run-of-the-mill asteroid.
“We need as much clarity as we can of the expected backgrounds, such as the typical speeds and distributions of boring space rocks, versus these primordial black holes,” Kaiser notes. “Luckily for us, astronomers have been tracking ordinary space rocks for decades as they have flown through our solar system, so we could calculate typical properties of their trajectories and begin to compare them with the very different types of paths and speeds that primordial black holes should follow.”
To help with this, the researchers are exploring the possibility of a new collaboration with a group that has extensive expertise simulating many more objects in the solar system.
“We are now working to simulate a huge number of objects, from planets to moons and rocks, and how they’re all moving over long time scales,” Geller says. “We want to inject close encounter scenarios, and look at their effects with higher precision.”
“It’s a very neat test they’ve proposed, and it could tell us if the closest black hole is closer than we realize,” says Matt Caplan, associate professor of physics at Illinois State University, who was not involved in the study. “I should emphasize there’s a little bit of luck involved too. Whether or not a search finds a loud and clear signal depends on the exact path a wandering black hole takes through the solar system. Now that they’ve checked this idea with simulations, they have to do the hard part — checking the real data.”
This work was supported in part by the U.S. Department of Energy and the U.S. National Science Foundation, which includes an NSF Mathematical and Physical Sciences postdoctoral fellowship.
Finding some stability in adaptable brains
New research suggests neurons protect and preserve certain information through a dedicated zone of stable synapses.
One of the brain’s most celebrated qualities is its adaptability. Changes to neural circuits, whose connections are continually adjusted as we experience and interact with the world, are key to how we learn. But to keep knowledge and memories intact, some parts of the circuitry must be resistant to this constant change.
“Brains have figured out how to navigate this landscape of balancing between stability and flexibility, so that you can have new learning and you can have lifelong memory,” says neuroscientist Mark Harnett, an investigator at MIT’s McGovern Institute for Brain Research. In the Aug. 27 issue of the journal Cell Reports, Harnett and his team show how individual neurons can contribute to both parts of this vital duality. By studying the synapses through which pyramidal neurons in the brain’s sensory cortex communicate, they have learned how the cells preserve their understanding of some of the world’s most fundamental features, while also maintaining the flexibility they need to adapt to a changing world.
Visual connections
Pyramidal neurons receive input from other neurons via thousands of connection points. Early in life, these synapses are extremely malleable; their strength can shift as a young animal takes in visual information and learns to interpret it. Most remain adaptable into adulthood, but Harnett’s team discovered that some of the cells’ synapses lose their flexibility when the animals are less than a month old. Having both stable and flexible synapses means these neurons can combine input from different sources to use visual information in flexible ways.
Postdoc Courtney Yaeger took a close look at these unusually stable synapses, which cluster together along a narrow region of the elaborately branched pyramidal cells. She was interested in the connections through which the cells receive primary visual information, so she traced their connections with neurons in a vision-processing center of the brain’s thalamus called the dorsal lateral geniculate nucleus (dLGN).
The long extensions through which a neuron receives signals from other cells are called dendrites, and they branch off from the main body of the cell into a tree-like structure. Spiny protrusions along the dendrites form the synapses that connect pyramidal neurons to other cells. Yaeger’s experiments showed that connections from the dLGN all led to a defined region of the pyramidal cells — a tight band within what she describes as the trunk of the dendritic tree.
Yaeger found several ways in which synapses in this region — formally known as the apical oblique dendrite domain — differ from other synapses on the same cells. “They’re not actually that far away from each other, but they have completely different properties,” she says.
Stable synapses
In one set of experiments, Yaeger activated synapses on the pyramidal neurons and measured the effect on the cells’ electrical potential. Changes to a neuron’s electrical potential generate the impulses the cells use to communicate with one another. It is common for a synapse’s electrical effects to amplify when synapses nearby are also activated. But when signals were delivered to the apical oblique dendrite domain, each one had the same effect, no matter how many synapses were stimulated. Synapses there don’t interact with one another at all, Harnett says. “They just do what they do. No matter what their neighbors are doing, they all just do kind of the same thing.”
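A toy calculation makes the contrast concrete: on most dendrites, co-active neighboring synapses boost one another’s effect, while apical oblique synapses simply add the same fixed increment each. The numbers below are made up purely for illustration; this is not a model from the study.

```python
# Toy contrast (made-up numbers) between the two summation modes measured
# in these experiments: most dendritic synapses amplify when neighbors are
# co-active, while apical oblique synapses each add a fixed increment.
def supralinear_epsp(n_active, unit_mv=0.2, gain=0.15):
    # NMDA-like cooperativity: each co-active synapse boosts the others.
    return unit_mv * n_active * (1 + gain * (n_active - 1))

def linear_epsp(n_active, unit_mv=0.2):
    # Apical oblique behavior: no interaction between neighbors.
    return unit_mv * n_active

for n in (1, 2, 4, 8):
    print(f"{n} active synapses: "
          f"typical {supralinear_epsp(n):.2f} mV, "
          f"apical oblique {linear_epsp(n):.2f} mV")
```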
The team was also able to visualize the molecular contents of individual synapses. This revealed a surprising scarcity of a certain kind of neurotransmitter receptor, the NMDA receptor, in the apical oblique dendrites. That was notable because of NMDA receptors’ role in mediating changes in the brain. “Generally when we think about any kind of learning and memory and plasticity, it’s NMDA receptors that do it,” Harnett says. “That is by far the most common substrate of learning and memory in all brains.”
When Yaeger stimulated the apical oblique synapses with electricity, generating patterns of activity that would strengthen most synapses, the team discovered a consequence of the limited presence of NMDA receptors. The synapses’ strength did not change. “There’s no activity-dependent plasticity going on there, as far as we have tested,” Yaeger says.
That makes sense, the researchers say, because the cells’ connections from the thalamus relay primary visual information detected by the eyes. It is through these connections that the brain learns to recognize basic visual features like shapes and lines.
“These synapses are basically a robust, high-fidelity readout of this visual information,” Harnett explains. “That’s what they’re conveying, and it’s not context-sensitive. So it doesn’t matter how many other synapses are active, they just do exactly what they’re going to do, and you can’t modify them up and down based on activity. So they’re very, very stable.”
“You actually don’t want those to be plastic,” adds Yaeger. “Can you imagine going to sleep and then forgetting what a vertical line looks like? That would be disastrous.”
By conducting the same experiments in mice of different ages, the researchers determined that the synapses that connect pyramidal neurons to the thalamus become stable a few weeks after young mice first open their eyes. By that point, Harnett says, they have learned everything they need to learn. On the other hand, if mice spend the first weeks of their lives in the dark, the synapses never stabilize — further evidence that the transition depends on visual experience.
The team’s findings not only help explain how the brain balances flexibility and stability; they could help researchers teach artificial intelligence how to do the same thing. Harnett says artificial neural networks are notoriously bad at this: when an artificial neural network that does something well is trained to do something new, it almost always experiences “catastrophic forgetting” and can no longer perform its original task. Harnett’s team is exploring how they can use what they’ve learned about real brains to overcome this problem in artificial networks.
Study: Early dark energy could resolve cosmology’s two biggest puzzles
In the universe’s first billion years, this brief and mysterious force could have produced more bright galaxies than theory predicts.
A new study by MIT physicists proposes that a mysterious force known as early dark energy could solve two of the biggest puzzles in cosmology and fill in some major gaps in our understanding of how the early universe evolved.
One puzzle in question is the “Hubble tension,” which refers to a mismatch in measurements of how fast the universe is expanding. The other involves observations of numerous early, bright galaxies that existed at a time when the early universe should have been much less populated.
Now, the MIT team has found that both puzzles could be resolved if the early universe had one extra, fleeting ingredient: early dark energy. Dark energy is an unknown form of energy that physicists suspect is driving the expansion of the universe today. Early dark energy is a similar, hypothetical phenomenon that may have made only a brief appearance, influencing the expansion of the universe in its first moments before disappearing entirely.
Some physicists have suspected that early dark energy could be the key to solving the Hubble tension, as the mysterious force could accelerate the early expansion of the universe by an amount that would resolve the measurement mismatch.
The MIT researchers have now found that early dark energy could also explain the baffling number of bright galaxies that astronomers have observed in the early universe. In their new study, reported today in the Monthly Notices of the Royal Astronomical Society, the team modeled the formation of galaxies in the universe’s first few hundred million years. When they incorporated a dark energy component only in that earliest sliver of time, they found the number of galaxies that arose from the primordial environment bloomed to fit astronomers’ observations.
“You have these two looming open-ended puzzles,” says study co-author Rohan Naidu, a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research. “We find that in fact, early dark energy is a very elegant and sparse solution to two of the most pressing problems in cosmology.”
The study’s co-authors include lead author and Kavli postdoc Xuejian (Jacob) Shen, and MIT professor of physics Mark Vogelsberger, along with Michael Boylan-Kolchin at the University of Texas at Austin, and Sandro Tacchella at the University of Cambridge.
Big city lights
Based on standard cosmological and galaxy formation models, the universe should have taken its time spinning up the first galaxies. It would have taken billions of years for primordial gas to coalesce into galaxies as large and bright as the Milky Way.
But in 2023, NASA’s James Webb Space Telescope (JWST) made a startling observation. With an ability to peer farther back in time than any observatory to date, the telescope uncovered a surprising number of bright galaxies as large as the modern Milky Way within the first 500 million years, when the universe was just 3 percent of its current age.
“The bright galaxies that JWST saw would be like seeing a clustering of lights around big cities, whereas theory predicts something like the light around more rural settings like Yellowstone National Park,” Shen says. “And we don’t expect that clustering of light so early on.”
For physicists, the observations imply that there is either something fundamentally wrong with the physics underlying the models or a missing ingredient in the early universe that scientists have not accounted for. The MIT team explored the possibility of the latter, and whether the missing ingredient might be early dark energy.
Physicists have proposed that early dark energy is a sort of antigravitational force that is turned on only at very early times. This force would counteract gravity’s inward pull and accelerate the early expansion of the universe, in a way that would resolve the mismatch in measurements. Early dark energy, therefore, is considered the most likely solution to the Hubble tension.
Galaxy skeleton
The MIT team explored whether early dark energy could also be the key to explaining the unexpected population of large, bright galaxies detected by JWST. In their new study, the physicists considered how early dark energy might affect the early structure of the universe that gave rise to the first galaxies. They focused on the formation of dark matter halos — regions of space where gravity happens to be stronger, and where matter begins to accumulate.
“We believe that dark matter halos are the invisible skeleton of the universe,” Shen explains. “Dark matter structures form first, and then galaxies form within these structures. So, we expect the number of bright galaxies should be proportional to the number of big dark matter halos.”
The team developed an empirical framework for early galaxy formation, which predicts the number, luminosity, and size of galaxies that should form in the early universe, given some measures of “cosmological parameters.” Cosmological parameters are the basic ingredients, or mathematical terms, that describe the evolution of the universe.
Physicists have determined that there are at least six main cosmological parameters, one of which is the Hubble constant — a term that describes the universe’s rate of expansion. Other parameters describe density fluctuations in the primordial soup, immediately after the Big Bang, from which dark matter halos eventually form.
The MIT team reasoned that if early dark energy affects the universe’s early expansion rate, in a way that resolves the Hubble tension, then it could affect the balance of the other cosmological parameters, in a way that might increase the number of bright galaxies that appear at early times. To test their theory, they incorporated a model of early dark energy (the same one that happens to resolve the Hubble tension) into an empirical galaxy formation framework to see how the earliest dark matter structures evolve and give rise to the first galaxies.
“What we show is, the skeletal structure of the early universe is altered in a subtle way where the amplitude of fluctuations goes up, and you get bigger halos, and brighter galaxies that are in place at earlier times, more so than in our more vanilla models,” Naidu says. “It means things were more abundant, and more clustered in the early universe.”
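One way to build intuition for why a modest boost in the fluctuation amplitude matters so much is the textbook Press-Schechter picture, in which the fraction of matter collapsed into rare, massive halos depends exponentially on the ratio of the collapse threshold to the amplitude. The sketch below uses illustrative numbers and is not the team’s actual framework.

```python
# Illustration (not the paper's pipeline) of why a small boost in the
# fluctuation amplitude inflates the abundance of rare, massive halos.
# In the standard Press-Schechter picture, the collapsed fraction falls
# off exponentially in delta_c / sigma(M), so the tail is very sensitive.
from math import erfc, sqrt

DELTA_C = 1.686                 # critical collapse overdensity

def collapsed_fraction(sigma):
    """Press-Schechter fraction of matter in halos above the barrier."""
    return erfc(DELTA_C / (sqrt(2.0) * sigma))

SIGMA_MASSIVE = 0.5             # sigma(M) of a massive early halo (illustrative)
for boost in (1.00, 1.05, 1.10):
    f = collapsed_fraction(boost * SIGMA_MASSIVE)
    print(f"amplitude x{boost:.2f}: collapsed fraction {f:.2e}")
```

With these illustrative numbers, a 10 percent boost in the amplitude roughly triples the collapsed fraction in the massive-halo tail, which is the qualitative effect Naidu describes.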
“A priori, I would not have expected the abundance of JWST’s early bright galaxies to have anything to do with early dark energy, but their observation that EDE pushes cosmological parameters in a direction that boosts the early-galaxy abundance is interesting,” says Marc Kamionkowski, professor of theoretical physics at Johns Hopkins University, who was not involved with the study. “I think more work will need to be done to establish a link between early galaxies and EDE, but regardless of how things turn out, it’s a clever — and hopefully ultimately fruitful — thing to try.”
“We demonstrated the potential of early dark energy as a unified solution to the two major issues faced by cosmology. This might be evidence for its existence if the observational findings of JWST are further consolidated,” Vogelsberger concludes. “In the future, we can incorporate this into large cosmological simulations to see what detailed predictions we get.”
This research was supported, in part, by NASA and the National Science Foundation.
Harnessing the power of placebo for pain relief
MIT researchers investigate the neural circuits that underlie placebos’ ability to relieve chronic and acute pain.
Placebos are inert treatments, generally not expected to impact biological pathways or improve a person’s physical health. But time and again, some patients report that they feel better after taking a placebo. Increasingly, doctors and scientists are recognizing that rather than dismissing placebos as mere trickery, they may be able to help patients by harnessing their power.
To maximize the impact of the placebo effect and design reliable therapeutic strategies, researchers need a better understanding of how it works. Now, with a new animal model developed by scientists at the McGovern Institute at MIT, they will be able to investigate the neural circuits that underlie placebos’ ability to elicit pain relief.
“The brain and body interaction has a lot of potential, in a way that we don't fully understand,” says Fan Wang, an MIT professor of brain and cognitive sciences and investigator at the McGovern Institute. “I really think there needs to be more of a push to understand placebo effect, in pain and probably in many other conditions. Now we have a strong model to probe the circuit mechanism.”
Context-dependent placebo effect
In the Sept. 5, 2024, issue of the journal Current Biology, Wang and her team report that they have elicited strong placebo pain relief in mice by activating pain-suppressing neurons in the brain while the mice are in a specific environment, thereby teaching the animals that they feel better when they are in that context. Following their training, placing the mice in that environment alone is enough to suppress pain. The team’s experiments — which were funded by the National Institutes of Health, the K. Lisa Yang Brain-Body Center, and the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics within MIT’s Yang Tan Collective — show that this context-dependent placebo effect relieves both acute and chronic pain.
Context is critical for the placebo effect. While a pill can help a patient feel better when they expect it to, even if it is made only of sugar or starch, it seems to be not just the pill that sets up those expectations, but the entire scenario in which the pill is taken. For example, being in a hospital and interacting with doctors can contribute to a patient’s perception of care, and these social and environmental factors can make a placebo effect more probable.
MIT postdocs Bin Chen and Nitsan Goldstein used visual and textural cues to define a specific place. Then they activated pain-suppressing neurons in the brain while the animals were in this “pain-relief box.” Those pain-suppressing neurons, which Wang’s lab discovered a few years ago, are located in an emotion-processing center of the brain called the central amygdala. By expressing light-sensitive channels in these neurons, the researchers were able to suppress pain with light in the pain-relief box and leave the neurons inactive when mice were in a control box.
Animals learned to prefer the pain-relief box to other environments. And when the researchers tested the animals’ response to potentially painful stimuli after they had made that association, they found the mice were less sensitive while they were there. “Just by being in the context that they had associated with pain suppression, we saw that reduced pain — even though we weren’t actually activating those [pain-suppressing] neurons,” Goldstein explains.
Acute and chronic pain relief
Some scientists have been able to elicit placebo pain relief in rodents by treating the animals with morphine, linking environmental cues to the pain suppression caused by the drug, similar to the way Wang’s team did by directly activating pain-suppressing neurons. This drug-based approach works best for setting up expectations of relief from acute pain; its placebo effect is short-lived and mostly ineffective against chronic pain. So Wang, Chen, and Goldstein were particularly pleased to find that their engineered placebo effect relieved both acute and chronic pain.
In their experiments, animals experiencing chemotherapy-induced hypersensitivity to touch preferred the pain-relief box as strongly as animals exposed to a chemical that induces acute pain, even days after their initial conditioning. Once there, their chemotherapy-induced pain sensitivity was eliminated; they exhibited no more sensitivity to painful stimuli than they had prior to receiving chemotherapy.
One of the biggest surprises came when the researchers turned their attention back to the pain-suppressing neurons in the central amygdala that they had used to trigger pain relief. They suspected that those neurons might be reactivated when mice returned to the pain-relief box. Instead, they found that after the initial conditioning period, those neurons remained quiet. “These neurons are not reactivated, yet the mice appear to be no longer in pain,” Wang says. “So it suggests this memory of feeling well is transferred somewhere else.”
Goldstein adds that there must be a pain-suppressing neural circuit somewhere that is activated by pain-relief-associated contexts — and the team’s new placebo model sets researchers up to investigate those pathways. A deeper understanding of that circuitry could enable clinicians to deploy the placebo effect — alone or in combination with active treatments — to better manage patients’ pain in the future.
Tools for making imagination blossom at MIT.nano
New STUDIO.nano supports artistic research and encounters within MIT.nano’s facilities.
The MIT community and visitors have a new reason to drop by MIT.nano: six artworks by Brazilian artist and sculptor Denise Milan. Located in the open-air stairway connecting the first- and second-floor galleries within the nanoscience and engineering facility, the works center on the stone as a microcosm of nature. Drawn from Milan’s “Mist of the Earth” series and evocative of mandalas, they ask viewers to reflect on the environmental changes that result from human-made development.
Milan is the inaugural artist in “Encounters,” a series presented by STUDIO.nano, a new initiative from MIT.nano that encourages the exploration of platforms and pathways at the intersection of technology, science, and art. “Encounters” welcomes proposals from artists, scientists, engineers, and designers outside the MIT community who want to collaborate with MIT.nano’s researchers and engage with its facilities, ongoing projects, and unique spaces.
“Life is in the art of the encounter,” remarked Milan, quoting Brazilian poet Vinicius de Moraes, during a reception at MIT.nano. “And for an artist to be in a place like this, MIT.nano, what could be better? I love the curiosity of scientists. They are very much like artists ... art and science are both tools for making imagination blossom.” What followed was a freewheeling conversation among attendees that ranged from the cyclical nature of birth, death, and survival in the cosmos, to musings on the elemental sources of creativity and the similarities between artistic and scientific practice, to a brief lesson on time crystals by Nobel Prize laureate Frank Wilczek, the Herman Feshbach Professor of Physics at MIT.
Milan was joined in her conversation by MIT.nano Director Vladimir Bulović, the Fariborz Maseeh Professor of Emerging Technologies; Ardalan SadeghiKivi MArch ’22, who moderated the discussion; Samantha Farrell, manager of STUDIO.nano programming; and Naomi Moniz, professor emeritus at Georgetown University, who connected Milan and her work with MIT.nano.
“In addition to the technical community, we [at MIT.nano] have been approached by countless artists and thinkers in the humanities who, to our delight, are eager to learn about the wonders of the nanoscale and how to use the tools of MIT.nano to explore and expand their own artistic practice,” said Bulović.
These interactions have spurred collaborative projects across disciplines, art exhibitions, and even MIT classes. For the past four years, MIT.nano has hosted 4.373/4.374 (Creating Art, Thinking Science), an undergraduate and graduate class offered by the Art, Culture, and Technology (ACT) Program. To date, the class has brought 35 students into MIT.nano’s labs and resulted in 40 distinct projects and 60 pieces of art, many of which are on display in MIT.nano’s galleries.
With the launch of STUDIO.nano, MIT.nano will look to expand its exhibition programs, including supporting additional digital media and augmented/virtual reality projects; providing tools and spaces for the development of new classes envisioned by MIT academic departments; and introducing programming such as lectures related to the studio’s activities.
Milan’s work will be a permanent installation at MIT.nano, where she hopes it will encourage individuals to pursue their creative inspiration, regardless of discipline. “To exist or to disappear?” Milan asked. “If it’s us, an idea, or a dream — the question is how much of an assignment you have with your own imagination.”
Atoms on the edge
Physicists capture images of ultracold atoms flowing freely, without friction, in an exotic “edge state.”
Typically, electrons are free agents that can move through most metals in any direction. When they encounter an obstacle, the charged particles experience friction and scatter randomly like colliding billiard balls.
But in certain exotic materials, electrons can appear to flow with single-minded purpose. In these materials, electrons may become locked to the material’s edge and flow in one direction, like ants marching single-file along a blanket’s boundary. In this rare “edge state,” electrons can flow without friction, gliding effortlessly around obstacles as they stick to their perimeter-focused flow. Unlike in a superconductor, where all electrons in a material flow without resistance, the current carried by edge modes occurs only at a material’s boundary.
Now MIT physicists have directly observed edge states in a cloud of ultracold atoms. For the first time, the team has captured images of atoms flowing along a boundary without resistance, even as obstacles are placed in their path. The results, which appear today in Nature Physics, could help physicists manipulate electrons to flow without friction in materials that could enable super-efficient, lossless transmission of energy and data.
“You could imagine making little pieces of a suitable material and putting it inside future devices, so electrons could shuttle along the edges and between different parts of your circuit without any loss,” says study co-author Richard Fletcher, assistant professor of physics at MIT. “I would stress though that, for us, the beauty is seeing with your own eyes physics which is absolutely incredible but usually hidden away in materials and unable to be viewed directly.”
The study’s co-authors at MIT include graduate students Ruixiao Yao and Sungjae Chi, former graduate students Biswaroop Mukherjee PhD ’20 and Airlia Shaffer PhD ’23, along with Martin Zwierlein, the Thomas A. Frank Professor of Physics. The co-authors are all members of MIT’s Research Laboratory of Electronics and the MIT-Harvard Center for Ultracold Atoms.
Forever on the edge
Physicists first invoked the idea of edge states to explain a curious phenomenon known today as the quantum Hall effect, which scientists first observed in 1980 in experiments with layered materials in which electrons were confined to two dimensions. These experiments were performed in ultracold conditions and under a magnetic field. When scientists tried to send a current through these materials, they observed that electrons did not flow straight through the material, but instead accumulated on one side, in precise quantum portions.
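Those “precise quantum portions” are the hallmark of the quantum Hall effect: the measured Hall conductance sticks to integer multiples of a fundamental constant. In standard notation (a textbook formula, not one quoted in the article):

\[
\sigma_{xy} \;=\; \nu\,\frac{e^{2}}{h}, \qquad \nu = 1, 2, 3, \ldots
\]

where e is the electron charge, h is Planck’s constant, and the integer ν counts the filled Landau levels, which in the edge-state picture equals the number of one-way channels running along the boundary.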
To try to explain this strange phenomenon, physicists came up with the idea that these Hall currents are carried by edge states: under a magnetic field, electrons in an applied current could be deflected to the edges of a material, where they would flow and accumulate in a way that might explain the initial observations.
“The way charge flows under a magnetic field suggests there must be edge modes,” Fletcher says. “But to actually see them is quite a special thing, because these states occur over femtoseconds and across fractions of a nanometer, which is incredibly difficult to capture.”
Rather than try to catch electrons in an edge state, Fletcher and his colleagues realized they might be able to recreate the same physics in a larger and more observable system. The team has been studying the behavior of ultracold atoms in a carefully designed setup that mimics the physics of electrons under a magnetic field.
“In our setup, the same physics occurs in atoms, but over milliseconds and microns,” Zwierlein explains. “That means that we can take images and watch the atoms crawl essentially forever along the edge of the system.”
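Putting numbers to those two quotes shows the scale of the magnification (taking “fractions of a nanometer” to mean roughly 0.1 nm, an assumption on our part):

\[
\frac{1\ \text{ms}}{1\ \text{fs}} = 10^{12}, \qquad \frac{1\ \mu\text{m}}{0.1\ \text{nm}} = 10^{4},
\]

a trillion-fold slowdown in time and a ten-thousand-fold magnification in length.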
A spinning world
In their new study, the team worked with a cloud of about 1 million sodium atoms, which they corralled in a laser-controlled trap, and cooled to nanokelvin temperatures. They then manipulated the trap to spin the atoms around, much like riders on an amusement park Gravitron.
“The trap is trying to pull the atoms inward, but there’s a centrifugal force that tries to pull them outward,” Fletcher explains. “The two forces balance each other, so if you’re an atom, you think you’re living in a flat space, even though your world is spinning. There’s also a third force, the Coriolis effect, such that if you try to move in a line, you get deflected. So these massive atoms now behave as if they were electrons living in a magnetic field.”
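Fletcher’s three forces can be made precise with a standard textbook mapping (not an equation from the paper). In a frame rotating at angular velocity Ω, an atom of mass m in a harmonic trap of frequency ω obeys

\[
m\ddot{\mathbf{r}} \;=\; \underbrace{-\,m\omega^{2}\mathbf{r}}_{\text{trap}} \;+\; \underbrace{m\Omega^{2}\mathbf{r}}_{\text{centrifugal}} \;+\; \underbrace{2m\,\dot{\mathbf{r}}\times\boldsymbol{\Omega}}_{\text{Coriolis}}.
\]

When the rotation rate matches the trap frequency (Ω = ω), the first two terms cancel, leaving only the Coriolis term, which has the same form as the Lorentz force qv × B on a charged particle; the neutral atoms thus feel an effective magnetic field of strength B = 2mΩ/q.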
Into this manufactured reality, the researchers then introduced an “edge,” in the form of a ring of laser light, which formed a circular wall around the spinning atoms. As the team took images of the system, they observed that when the atoms encountered the ring of light, they flowed along its edge, in just one direction.
“You can imagine these are like marbles that you’ve spun up really fast in a bowl, and they just keep going around and around the rim of the bowl,” Zwierlein offers. “There is no friction. There is no slowing down, and no atoms leaking or scattering into the rest of the system. There is just beautiful, coherent flow.”
“These atoms are flowing, free of friction, for hundreds of microns,” Fletcher adds. “To flow that long, without any scattering, is a type of physics you don’t normally see in ultracold atom systems.”
This effortless flow held up even when the researchers placed an obstacle in the atoms’ path, like a speed bump, in the form of a point of light, which they shone along the edge of the original laser ring. Even as they came upon this new obstacle, the atoms didn’t slow their flow or scatter away, but instead glided right past without feeling friction as they normally would.
“We intentionally send in this big, repulsive green blob, and the atoms should bounce off it,” Fletcher says. “But instead what you see is that they magically find their way around it, go back to the wall, and continue on their merry way.”
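The one-way, obstacle-skirting flow has a simple classical cartoon: a particle whose velocity is steadily rotated by a Lorentz-like force, bouncing inside a hard circular wall with a repulsive bump just inside the rim. The sketch below is a toy model, not the team’s simulation; every parameter and name is invented for illustration:

```python
import numpy as np

# Toy classical model of an edge mode (invented for illustration). A
# Lorentz-like force steadily rotates the particle's velocity; a hard
# circular wall is the "edge" and a round bump near it is the "obstacle."

omega_c = 1.0                         # effective cyclotron frequency
R = 10.0                              # radius of the circular wall
bump_center = np.array([R - 2.0, 0.0])  # obstacle just inside the rim
bump_radius = 1.0
dt = 1e-3
cos_t, sin_t = np.cos(omega_c * dt), np.sin(omega_c * dt)

def bounce(pos, vel, center, radius, keep_inside):
    """Specular reflection off a circular boundary."""
    d = pos - center
    r = np.linalg.norm(d)
    crossed = r > radius if keep_inside else r < radius
    if crossed:
        n = d / r                             # surface normal
        vel = vel - 2.0 * np.dot(vel, n) * n  # reflect normal component
        pos = center + n * radius             # put particle back on surface
    return pos, vel

pos = np.array([0.0, R - 0.5])        # start near the wall...
vel = np.array([3.0, 0.0])            # ...moving along it
angles = []
for _ in range(500_000):
    # the magnetic-like force only rotates the velocity (speed is conserved)
    vel = np.array([cos_t * vel[0] - sin_t * vel[1],
                    sin_t * vel[0] + cos_t * vel[1]])
    pos = pos + vel * dt
    pos, vel = bounce(pos, vel, np.zeros(2), R, keep_inside=True)
    pos, vel = bounce(pos, vel, bump_center, bump_radius, keep_inside=False)
    angles.append(np.arctan2(pos[1], pos[0]))

# The unwrapped polar angle advances in one direction on average: the
# particle skips along the rim, detouring around the bump as it goes.
print("total winding (radians):", np.unwrap(angles)[-1] - angles[0])
```

In the bulk, the same velocity rotation simply closes the orbit into a circle; it is the wall that converts cyclotron motion into directed, one-way transport, which is the essence of an edge mode.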
The team’s observations of atoms document the same behavior that physicists have predicted for electrons. Their results show that the atomic setup is a reliable stand-in for studying how electrons would behave in edge states.
“It’s a very clean realization of a very beautiful piece of physics, and we can directly demonstrate the importance and reality of this edge,” Fletcher says. “A natural direction is to now introduce more obstacles and interactions into the system, where things become more unclear as to what to expect.”
This research was supported, in part, by the National Science Foundation.
MIT chemists explain why dinosaur collagen may have survived for millions of years
The researchers identified an atomic-level interaction that prevents peptide bonds from being broken down by water.
Collagen, a protein in bones and connective tissue, has been found in dinosaur fossils as old as 195 million years. That far exceeds the normal half-life of the peptide bonds that hold proteins together, which is about 500 years.
A new study from MIT offers an explanation for how collagen can survive for so much longer than expected. The research team found that a special atomic-level interaction defends collagen from attack by water molecules. This barricade prevents water from breaking the peptide bonds through a process called hydrolysis.
“We provide evidence that that interaction prevents water from attacking the peptide bonds and cleaving them. That just flies in the face of what happens with a normal peptide bond, which has a half-life of only 500 years,” says Ron Raines, the Firmenich Professor of Chemistry at MIT.
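The mismatch is easy to quantify with first-order decay (a back-of-the-envelope calculation from the two numbers above, not a figure from the paper). With a half-life of 500 years, the fraction of unprotected peptide bonds surviving 195 million years would be

\[
\frac{N(t)}{N_{0}} = 2^{-t/t_{1/2}} = 2^{-(1.95\times 10^{8})/500} = 2^{-390{,}000} \approx 10^{-117{,}000},
\]

effectively zero. Some protective mechanism is needed to explain any intact collagen at all.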
Raines is the senior author of the new study, which appears today in ACS Central Science. MIT postdoc Jinyi Yang PhD ’24 is the lead author of the paper. MIT postdoc Volga Kojasoy and graduate student Gerard Porter are also authors of the study.
Water-resistant
Collagen is the most abundant protein in animals, and it is found not only in bones but also in skin, muscles, and ligaments. It’s made from long strands of protein that intertwine to form a tough triple helix.
“Collagen is the scaffold that holds us together,” Raines says. “What makes the collagen protein so stable, and such a good choice for this scaffold, is that unlike most proteins, it’s fibrous.”
In the past decade, paleobiologists have found evidence of collagen preserved in dinosaur fossils, including an 80-million-year-old Tyrannosaurus rex fossil, and a sauropodomorph fossil that is nearly 200 million years old.
Over the past 25 years, Raines’ lab has been studying collagen and how its structure enables its function. In the new study, they revealed why the peptide bonds that hold collagen together are so resistant to being broken down by water.
Peptide bonds are formed between a carbon atom from one amino acid and a nitrogen atom of the adjacent amino acid. The carbon atom also forms a double bond with an oxygen atom, forming a molecular structure called a carbonyl group. This carbonyl oxygen has a pair of electrons that don’t form bonds with any other atoms. Those electrons, the researchers found, can be shared with the carbonyl group of a neighboring peptide bond.
Because this electron pair is donated into the neighboring peptide bond, a water molecule can’t also get into the structure to attack and disrupt the bond.
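In the orbital shorthand of this literature (the article describes the interaction without naming it), the protection is an n→π* interaction:

\[
n_{\mathrm{O}} \;\rightarrow\; \pi^{*}_{\mathrm{C=O}},
\]

where the filled lone-pair orbital (n) on one carbonyl oxygen overlaps the empty antibonding orbital (π*) of the neighboring peptide bond’s carbonyl. A hydrolyzing water molecule would have to attack that same carbonyl carbon along a similar trajectory, so the donated electron pair effectively occupies the site of attack.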
To demonstrate this, Raines and his colleagues created two interconverting mimics of collagen: one with the peptide-bond geometry that usually forms a triple helix, known as trans, and another in which the angles of the peptide bonds are rotated into a different form, known as cis. They found that the trans form of collagen did not allow water to attack and hydrolyze the bond. In the cis form, water got in and the bonds were broken.
“A peptide bond is either cis or trans, and we can change the cis to trans ratio. By doing that, we can mimic the natural state of collagen or create an unprotected peptide bond. And we saw that when it was unprotected, it was not long for the world,” Raines says.
“This work builds on a long-term effort in the Raines Group to classify the role of a long-overlooked fundamental interaction in protein structure,” says Paramjit Arora, a professor of chemistry at New York University, who was not involved in the research. “The paper directly addresses the remarkable finding of intact collagen in the ribs of a 195-million-year-old dinosaur fossil, and shows that overlap of filled and empty orbitals controls the conformational and hydrolytic stability of collagen.”
“No weak link”
This sharing of electrons has also been seen in protein structures known as alpha helices, which are found in many proteins. These helices may also be protected from water, but they are always connected by more-exposed protein sequences that remain susceptible to hydrolysis.
“Collagen is all triple helices, from one end to the other,” Raines says. “There’s no weak link, and that’s why I think it has survived.”
Some scientists have previously suggested other explanations for why collagen might be preserved for millions of years, including the possibility that the bones were so dehydrated that no water could reach the peptide bonds.
“I can’t discount the contributions from other factors, but 200 million years is a long time, and I think you need something at the molecular level, at the atomic level in order to explain it,” Raines says.
The research was funded by the National Institutes of Health and the National Science Foundation.