Science news from MIT (Massachusetts Institute of Technology)

Here you will find recent daily science news from MIT (Massachusetts Institute of Technology).

MIT News - School of Science
How telecommunications cables can image the ground beneath us

By making use of MIT’s existing fiber optic infrastructure, PhD student Hilary Chang imaged the ground underneath campus, a method that can be used to characterize seismic hazards.


When people think about fiber optic cables, it’s usually about how they’re used for telecommunications and accessing the internet. But fiber optic cables — strands of glass or plastic that allow for the transmission of light — can be used for another purpose: imaging the ground beneath our feet.

MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) PhD student Hilary Chang recently used the MIT fiber optic cable network to successfully image the ground underneath campus using a method known as distributed acoustic sensing (DAS). By using existing infrastructure, DAS can be an efficient and effective way to understand ground composition, a critical component for assessing the seismic hazard of areas, or how at risk they are from earthquake damage.

“We were able to extract very nice, coherent waves from the surroundings, and then use that to get some information about the subsurface,” says Chang, the lead author of a recent paper describing her work that was co-authored with EAPS Principal Research Scientist Nori Nakata. 

Dark fibers

The MIT campus fiber optic system, installed from 2000 to 2003, services internal data transport between labs and buildings as well as external transport, such as the campus internet (MITNet). There are three major cable hubs on campus from which lines branch out into buildings and underground, much like a spiderweb.

The network allocates a certain number of strands per building, some of which are “dark fibers,” or cables that are not actively transporting information. The campus fiber hubs are connected by redundant backbone cables so that, in the event of a failure, network transmission can switch to the dark fibers without loss of network services.

DAS can use existing telecommunication cables and ambient wavefields to extract information about the materials they pass through, making it a valuable tool for places like cities or the ocean floor, where conventional sensors can’t be deployed. Chang, who studies earthquake waveforms and the information we can extract from them, decided to try it out on the MIT campus.

In order to get access to the fiber optic network for the experiment, Chang reached out to John Morgante, a manager of infrastructure project engineering with MIT Information Systems and Technology (IS&T). Morgante has been at MIT since 1998 and was involved with the original project installing the fiber optic network, and was thus able to provide personal insight into selecting a route.

“It was interesting to listen to what they were trying to accomplish with the testing,” says Morgante. While IS&T has worked with students before on various projects involving the school’s network, he said that “in the physical plant area, this is the first that I can remember that we’ve actually collaborated on an experiment together.”

They decided on a path starting from a hub in Building 24, because it was the longest running path that was entirely underground; above-ground wires that cut through buildings wouldn’t work because they weren’t grounded, and thus were useless for the experiment. The path ran from east to west, beginning in Building 24, traveling under a section of Massachusetts Ave., along parts of Amherst and Vassar streets, and ending at Building W92.

“[Morgante] was really helpful,” says Chang, describing it as “a very good experience working with the campus IT team.”

Locating the cables

After renting an interrogator, a device that sends laser pulses to sense ambient vibrations along the fiber optic cables, Chang and a group of volunteers were given special access to connect it to the hub in Building 24. They let it run for five days.

To validate the route and make sure that the interrogator was working, Chang conducted a tap test, in which she hit the ground with a hammer several times to record the precise GPS coordinates of the cable. Conveniently, the underground route is marked by maintenance hole covers that serve as good locations to do the test. And, because she needed the environment to be as quiet as possible to collect clean data, she had to do it around 2 a.m.

“I was hitting it next to a dorm and someone yelled ‘shut up,’ probably because the hammer blows woke them up,” Chang recalls. “I was sorry.” Thankfully, she only had to tap at a few spots and could interpolate the locations for the rest.
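
For readers curious about how a handful of tap-test picks can locate an entire cable, the sketch below shows the general idea in Python. It is illustrative only: the channel indices and GPS coordinates are hypothetical, and a real survey would also rely on the campus cable routing records.

```python
# Minimal sketch (not the study's actual code): georeference DAS channels by
# interpolating between a few tap-test picks where GPS coordinates are known.
import numpy as np

# Hypothetical tap-test picks: (channel index along the fiber, latitude, longitude)
tap_channels = np.array([120, 480, 910, 1350])
tap_lat = np.array([42.3593, 42.3589, 42.3581, 42.3575])
tap_lon = np.array([-71.0921, -71.0958, -71.1003, -71.1045])

# Interpolate a position for every channel between the first and last tap point.
# Over a roughly 1 km path, treating lat/lon as locally linear in channel index
# is a reasonable approximation for an illustration like this one.
channels = np.arange(tap_channels[0], tap_channels[-1] + 1)
lat = np.interp(channels, tap_channels, tap_lat)
lon = np.interp(channels, tap_channels, tap_lon)

print(f"channel 700 ≈ ({np.interp(700, tap_channels, tap_lat):.5f}, "
      f"{np.interp(700, tap_channels, tap_lon):.5f})")
```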

During the day, Chang and her fellow students — Denzel Segbefia, Congcong Yuan, and Jared Bryan — performed an additional test with geophones, another instrument that detects seismic waves, out on Briggs Field, where the cable passed underneath, to compare the signals. It was an enjoyable experience for Chang; when the data were collected in 2022, the campus was coming out of pandemic measures, with remote classes sometimes still in place. “It was very nice to have everyone on the field and do something with their hands,” she says.

The noise around us

Once Chang collected the data, she was able to see plenty of environmental activity in the waveforms, including passing cars and bikes, and even the nightly passes of the train that runs along the northern edge of campus.

After identifying the noise sources, Chang and Nakata extracted coherent surface waves from the ambient noise and used the wave speeds associated with different frequencies to understand the properties of the ground the cables passed through. Stiffer materials allow faster wave speeds, while softer materials slow the waves down.
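
The sketch below illustrates, in very simplified form, how frequency-dependent wave speeds can be pulled from ambient noise recorded on two sensing channels. It is not the authors’ processing pipeline; the sampling rate, channel spacing, and synthetic noise are all assumptions made for the example.

```python
# Illustrative sketch: estimate frequency-dependent surface-wave speed between
# two DAS channels by cross-correlating band-passed ambient noise.
import numpy as np
from scipy.signal import butter, sosfiltfilt, correlate

fs = 500.0          # sampling rate in Hz (assumed)
dx = 40.0           # distance between the two channels in meters (assumed)

def bandpass(trace, lo, hi, fs):
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, trace)

def apparent_velocity(ch_a, ch_b, lo, hi):
    """Cross-correlate a band-passed noise window and convert the peak lag
    into an apparent wave speed between the two channels."""
    xa, xb = bandpass(ch_a, lo, hi, fs), bandpass(ch_b, lo, hi, fs)
    xc = correlate(xa, xb, mode="full")
    lags = np.arange(-len(xa) + 1, len(xa)) / fs
    mask = np.abs(lags) > 0.01            # ignore the zero-lag region
    peak_lag = abs(lags[mask][np.argmax(np.abs(xc[mask]))])
    return dx / peak_lag

# Synthetic stand-in for two channels of recorded noise: channel B is a delayed,
# noisier copy of channel A, mimicking a wave sweeping along the cable.
rng = np.random.default_rng(0)
noise_a = rng.standard_normal(int(60 * fs))
noise_b = np.roll(noise_a, 20) + 0.5 * rng.standard_normal(noise_a.size)

for lo, hi in [(2, 5), (5, 10), (10, 20)]:
    v = apparent_velocity(noise_a, noise_b, lo, hi)
    print(f"{lo}-{hi} Hz band: apparent velocity ≈ {v:.0f} m/s")
```

With real data, the velocities recovered in different frequency bands would differ, and that dispersion is what constrains the layering of soft and stiff material at depth.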

“We found out that the MIT campus is built on soft materials overlaying a relatively hard bedrock,” Chang says, which confirms previously known, albeit lower-resolution, information about the geology of the area that had been collected using seismometers.

Information like this is critical for regions that are susceptible to destructive earthquakes and other seismic hazards, including the Commonwealth of Massachusetts, which has experienced earthquakes as recently as this past week. Areas of Boston and Cambridge built on artificial fill during rapid urbanization are especially at risk, because their subsurface structure is more likely to amplify seismic frequencies and damage buildings. This non-intrusive method for site characterization can help ensure that buildings meet code for the correct seismic hazard level.

“Destructive seismic events do happen, and we need to be prepared,” she says.


Eleven MIT faculty receive Presidential Early Career Awards

Faculty members and additional MIT alumni are among 400 scientists and engineers recognized for outstanding leadership potential.


Eleven MIT faculty, including nine from the School of Engineering and two from the School of Science, were awarded the Presidential Early Career Award for Scientists and Engineers (PECASE). More than 15 additional MIT alumni were also honored. 

Established in 1996 by President Bill Clinton, the PECASE is awarded to scientists and engineers “who show exceptional potential for leadership early in their research careers.” The latest recipients were announced by the White House on Jan. 14 under President Joe Biden. Fourteen government agencies recommended researchers for the award.

The MIT faculty and alumni honorees are among 400 scientists and engineers recognized for innovation and scientific contributions. Those from the School of Engineering and School of Science who were honored are:

Additional MIT alumni who were honored include: Elaheh Ahmadi ’20, MNG ’21; Ambika Bajpayee MNG ’07, PhD ’15; Katherine Bouman SM ’13, PhD ’17; Walter Cheng-Wan Lee ’95, MNG ’95, PhD ’05; Ismaila Dabo PhD ’08; Ying Diao SM ’10, PhD ’12; Eno Ebong ’99; Soheil Feizi-Khankandi SM ’10, PhD ’16; Mark Finlayson SM ’01, PhD ’12; Chelsea B. Finn ’14; Grace Xiang Gu SM ’14, PhD ’18; David Michael Isaacson PhD ’06, AF ’16; Lewei Lin ’05; Michelle Sander PhD ’12; Kevin Solomon SM ’08, PhD ’12; and Zhiting Tian PhD ’14.


Introducing the MIT Generative AI Impact Consortium

The consortium will bring researchers and industry together to focus on impact.


From crafting complex code to revolutionizing the hiring process, generative artificial intelligence is reshaping industries faster than ever before — pushing the boundaries of creativity, productivity, and collaboration across countless domains.

Enter the MIT Generative AI Impact Consortium, a collaboration between industry leaders and MIT’s top minds. As MIT President Sally Kornbluth highlighted last year, the Institute is poised to address the societal impacts of generative AI through bold collaborations. Building on this momentum and established through MIT’s Generative AI Week and impact papers, the consortium aims to harness AI’s transformative power for societal good, tackling challenges before they shape the future in unintended ways.

“Generative AI and large language models [LLMs] are reshaping everything, with applications stretching across diverse sectors,” says Anantha Chandrakasan, dean of the School of Engineering and MIT’s chief innovation and strategy officer, who leads the consortium. “As we push forward with newer and more efficient models, MIT is committed to guiding their development and impact on the world.”

Chandrakasan adds that the consortium’s vision is rooted in MIT’s core mission. “I am thrilled and honored to help advance one of President Kornbluth’s strategic priorities around artificial intelligence,” he says. “This initiative is uniquely MIT — it thrives on breaking down barriers, bringing together disciplines, and partnering with industry to create real, lasting impact. The collaborations ahead are something we’re truly excited about.”

Developing the blueprint for generative AI’s next leap

The consortium is guided by three pivotal questions, framed by Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and co-chair of the GenAI Dean’s oversight group, that go beyond AI’s technical capabilities and into its potential to transform industries and lives:

  1. How can AI-human collaboration create outcomes that neither could achieve alone?
  2. What is the dynamic between AI systems and human behavior, and how do we maximize the benefits while steering clear of risks?
  3. How can interdisciplinary research guide the development of better, safer AI technologies that improve human life?

Generative AI continues to advance at lightning speed, but its future depends on building a solid foundation. “Everybody recognizes that large language models will transform entire industries, but there's no strong foundation yet around design principles,” says Tim Kraska, associate professor of electrical engineering and computer science in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-faculty director of the consortium.

“Now is a perfect time to look at the fundamentals — the building blocks that will make generative AI more effective and safer to use,” adds Kraska.

"What excites me is that this consortium isn’t just academic research for the distant future — we’re working on problems where our timelines align with industry needs, driving meaningful progress in real time," says Vivek F. Farias, the Patrick J. McGovern (1959) Professor at the MIT Sloan School of Management, and co-faculty director of the consortium.

A “perfect match” of academia and industry

At the heart of the Generative AI Impact Consortium are six founding members: Analog Devices, The Coca-Cola Co., OpenAI, Tata Group, SK Telecom, and TWG Global. Together, they will work hand-in-hand with MIT researchers to accelerate breakthroughs and address industry-shaping problems.

The consortium taps into MIT’s expertise, working across schools and disciplines — led by MIT’s Office of Innovation and Strategy, in collaboration with the MIT Schwarzman College of Computing and all five of MIT’s schools.

“This initiative is the ideal bridge between academia and industry,” says Chandrakasan. “With companies spanning diverse sectors, the consortium brings together real-world challenges, data, and expertise. MIT researchers will dive into these problems to develop cutting-edge models and applications into these different domains.”

Industry partners: Collaborating on AI’s evolution

At the core of the consortium’s mission is collaboration — bringing MIT researchers and industry partners together to unlock generative AI’s potential while ensuring its benefits are felt across society.

Among the founding members is OpenAI, the creator of the generative AI chatbot ChatGPT.

“This type of collaboration between academics, practitioners, and labs is key to ensuring that generative AI evolves in ways that meaningfully benefit society,” says Anna Makanju, vice president of global impact at OpenAI, adding that OpenAI “is eager to work alongside MIT’s Generative AI Consortium to bridge the gap between cutting-edge AI research and the real-world expertise of diverse industries.”

The Coca-Cola Co. recognizes an opportunity to leverage AI innovation on a global scale. “We see a tremendous opportunity to innovate at the speed of AI and, leveraging The Coca-Cola Company's global footprint, make these cutting-edge solutions accessible to everyone,” says Pratik Thakar, global vice president and head of generative AI. “Both MIT and The Coca-Cola Company are deeply committed to innovation, while also placing equal emphasis on the legally and ethically responsible development and use of technology.”

For TWG Global, the consortium offers the ideal environment to share knowledge and drive advancements. “The strength of the consortium is its unique combination of industry leaders and academia, which fosters the exchange of valuable lessons, technological advancements, and access to pioneering research,” says Drew Cukor, head of data and artificial intelligence transformation. Cukor adds that TWG Global “is keen to share its insights and actively engage with leading executives and academics to gain a broader perspective of how others are configuring and adopting AI, which is why we believe in the work of the consortium.”

The Tata Group views the collaboration as a platform to address some of AI’s most pressing challenges. “The consortium enables Tata to collaborate, share knowledge, and collectively shape the future of generative AI, particularly in addressing urgent challenges such as ethical considerations, data privacy, and algorithmic biases,” says Aparna Ganesh, vice president of Tata Sons Ltd.

Similarly, SK Telecom sees its involvement as a launchpad for growth and innovation. “Joining the consortium presents a significant opportunity for SK Telecom to enhance its AI competitiveness in core business areas, including AI agents, AI semiconductors, data centers (AIDC), and physical AI,” explains Suk-geun (SG) Chung, SK Telecom executive vice president and chief AI global officer. “By collaborating with MIT and leveraging the SK AI R&D Center as a technology control tower, we aim to forecast next-generation generative AI technology trends, propose innovative business models, and drive commercialization through academic-industrial collaboration.”

Alan Lee, chief technology officer of Analog Devices (ADI), highlights how the consortium bridges key knowledge gaps for both his company and the industry at large. “ADI can’t hire a world-leading expert in every single corner case, but the consortium will enable us to access top MIT researchers and get them involved in addressing problems we care about, as we also work together with others in the industry towards common goals,” he says.

The consortium will host interactive workshops and discussions to identify and prioritize challenges. “It’s going to be a two-way conversation, with the faculty coming together with industry partners, but also industry partners talking with each other,” says Georgia Perakis, the John C Head III Dean (Interim) of the MIT Sloan School of Management and professor of operations management, operations research and statistics, who serves alongside Huttenlocher as co-chair of the GenAI Dean’s oversight group.

Preparing for the AI-enabled workforce of the future

With AI poised to disrupt industries and create new opportunities, one of the consortium’s core goals is to guide that change in a way that benefits both businesses and society.

“When the first commercial digital computers were introduced [the UNIVAC was delivered to the U.S. Census Bureau in 1951], people were worried about losing their jobs,” says Kraska. “And yes, jobs like large-scale, manual data entry clerks and human ‘computers,’ people tasked with doing manual calculations, largely disappeared over time. But the people impacted by those first computers were trained to do other jobs.”

The consortium aims to play a key role in preparing the workforce of tomorrow by educating global business leaders and employees on generative AI’s evolving uses and applications. With the pace of innovation accelerating, leaders face a flood of information and uncertainty.

“When it comes to educating leaders about generative AI, it’s about helping them navigate the complexity of the space right now, because there’s so much hype and hundreds of papers published daily,” says Kraska. “The hard part is understanding which developments could actually have a chance of changing the field and which are just tiny improvements. There's a kind of FOMO [fear of missing out] for leaders that we can help reduce.”

Defining success: Shared goals for generative AI impact

Success within the initiative is defined by shared progress, open innovation, and mutual growth. “Consortium participants recognize, I think, that when I share my ideas with you, and you share your ideas with me, we’re both fundamentally better off,” explains Farias. “Progress on generative AI is not zero-sum, so it makes sense for this to be an open-source initiative.”

While participants may approach success from different angles, they share a common goal of advancing generative AI for broad societal benefit. “There will be many success metrics,” says Perakis. “We’ll educate students, who will be networking with companies. Companies will come together and learn from each other. Business leaders will come to MIT and have discussions that will help all of us, not just the leaders themselves.”

For Analog Devices’ Alan Lee, success is measured in tangible improvements that drive efficiency and product innovation: “For us at ADI, it’s a better, faster quality of experience for our customers, and that could mean better products. It could mean faster design cycles, faster verification cycles, and faster tuning of equipment that we already have or that we’re going to develop for the future. But beyond that, we want to help the world be a better, more efficient place.”

Ganesh highlights success through the lens of real-world application. “Success will also be defined by accelerating AI adoption within Tata companies, generating actionable knowledge that can be applied in real-world scenarios, and delivering significant advantages to our customers and stakeholders,” she says.

Generative AI is no longer confined to isolated research labs — it’s driving innovation across industries and disciplines. At MIT, the technology has become a campus-wide priority, connecting researchers, students, and industry leaders to solve complex challenges and uncover new opportunities. “It's truly an MIT initiative,” says Farias, “one that’s much larger than any individual or department on campus.”


David Darmofal SM ’91, PhD ’93 named vice chancellor for undergraduate and graduate education

Longtime AeroAstro professor brings deep experience with academic and student life.


David L. Darmofal SM ’91, PhD ’93 will serve as MIT’s next vice chancellor for undergraduate and graduate education, effective Feb. 17. Chancellor Melissa Nobles announced Darmofal’s appointment today in a letter to the MIT community.

Darmofal succeeds Ian A. Waitz, who stepped down in May to become MIT’s vice president for research, and Daniel E. Hastings, who has been serving in an interim capacity.

A creative innovator in research-based teaching and learning, Darmofal is the Jerome C. Hunsaker Professor of Aeronautics and Astronautics. Since 2017, he and his wife Claudia have served as heads of house at The Warehouse, an MIT graduate residence.

“Dave knows the ins and outs of education and student life at MIT in a way that few do,” Nobles says. “He’s a head of house, an alum, and the parent of a graduate. Dave will bring decades of first-hand experience to the role.”

“An MIT education is incredibly special, combining passionate students, staff, and faculty striving to use knowledge and discovery to drive positive change for the world,” says Darmofal. “I am grateful for this opportunity to play a part in supporting MIT’s academic mission.”

Darmofal’s leadership experience includes service from 2008 to 2011 as associate and interim department head in the Department of Aeronautics and Astronautics, overseeing undergraduate and graduate programs. He was the AeroAstro director of digital education from 2020 to 2022, including leading the department’s response to remote learning during the Covid-19 pandemic. He currently serves as director of the MIT Aerospace Computational Science and Engineering Laboratory and is a member of the Center for Computational Science and Engineering (CCSE) in the MIT Stephen A. Schwarzman College of Computing.

As an MIT faculty member and administrator, Darmofal has been involved in designing more flexible degree programs, developing open digital-learning opportunities, creating first-year advising seminars, and enhancing professional and personal development opportunities for students. He also contributed his expertise in engineering pedagogy to the development of the Schwarzman College of Computing’s Common Ground efforts, to address the need for computing education across many disciplines.

“MIT students, staff, and faculty share a common bond as problem solvers. Talk to any of us about an MIT education, and you will get an earful on not only what we need to do better, but also how we can actually do it. The Office of the Vice Chancellor can help bring our community of problem solvers together to enable improvements in our academics,” says Darmofal.

Overseeing the academic arm of the Chancellor’s Office, the vice chancellor’s portfolio is extensive. Darmofal will lead professionals across more than a dozen units, covering areas such as recruitment and admissions, financial aid, student systems, advising, professional and career development, pedagogy, experiential learning, and support for MIT’s more than 100 graduate programs. He will also work collaboratively with many of MIT’s student organizations and groups, including with the leaders of the Undergraduate Association and the Graduate Student Council, and administer the relationship with the graduate student union.

“Dave will be a critical part of my office’s efforts to strengthen and expand critical connections across all areas of student life and learning,” Nobles says. She credits the search advisory group, co-chaired by professors Laurie Boyer and Will Tisdale, in setting the right tenor for such an important role and leading a thorough, inclusive process.

Darmofal’s research is focused on computational methods for partial differential equations, especially fluid dynamics. He earned his SM and PhD degrees in aeronautics and astronautics in 1991 and 1993, respectively, from MIT, and his BS in aerospace engineering in 1989 from the University of Michigan. Prior to joining MIT in 1998, he was an assistant professor in the Department of Aerospace Engineering at Texas A&M University from 1995 to 1998. Currently, he is the chair of AeroAstro’s Undergraduate Committee and the graduate officer for the CCSE PhD program.

“I want to echo something that Dan Hastings said recently,” Darmofal says. “We have a lot to be proud of when it comes to an MIT education. It’s more accessible than it has ever been. It’s innovative, with unmatched learning opportunities here and around the world. It’s home to academic research labs that attract the most talented scholars, creators, experimenters, and engineers. And ultimately, it prepares graduates who do good.”


With generative AI, MIT chemists quickly calculate 3D genomic structures

A new approach, which takes minutes rather than days, predicts how a specific DNA sequence will arrange itself in the cell nucleus.


Every cell in your body contains the same genetic sequence, yet each cell expresses only a subset of those genes. These cell-specific gene expression patterns, which ensure that a brain cell is different from a skin cell, are partly determined by the three-dimensional structure of the genetic material, which controls the accessibility of each gene.

MIT chemists have now come up with a new way to determine those 3D genome structures, using generative artificial intelligence. Their technique can predict thousands of structures in just minutes, making it much speedier than existing experimental methods for analyzing the structures.

Using this technique, researchers could more easily study how the 3D organization of the genome affects individual cells’ gene expression patterns and functions.

“Our goal was to try to predict the three-dimensional genome structure from the underlying DNA sequence,” says Bin Zhang, an associate professor of chemistry and the senior author of the study. “Now that we can do that, which puts this technique on par with the cutting-edge experimental techniques, it can really open up a lot of interesting opportunities.”

MIT graduate students Greg Schuette and Zhuohan Lao are the lead authors of the paper, which appears today in Science Advances.

From sequence to structure

Inside the cell nucleus, DNA and proteins form a complex called chromatin, which has several levels of organization, allowing cells to cram 2 meters of DNA into a nucleus that is only one-hundredth of a millimeter in diameter. Long strands of DNA wind around proteins called histones, giving rise to a structure somewhat like beads on a string.

Chemical tags known as epigenetic modifications can be attached to DNA at specific locations, and these tags, which vary by cell type, affect the folding of the chromatin and the accessibility of nearby genes. These differences in chromatin conformation help determine which genes are expressed in different cell types, or at different times within a given cell.

Over the past 20 years, scientists have developed experimental techniques for determining chromatin structures. One widely used technique, known as Hi-C, works by linking together neighboring DNA strands in the cell’s nucleus. Researchers can then determine which segments are located near each other by shredding the DNA into many tiny pieces and sequencing it.

This method can be used on large populations of cells to calculate an average structure for a section of chromatin, or on single cells to determine structures within that specific cell. However, Hi-C and similar techniques are labor-intensive, and it can take about a week to generate data from one cell.
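
As a rough illustration of the kind of data Hi-C produces, the sketch below bins simulated read pairs into a contact matrix. The bin size, region length, and read pairs are all made up for the example; real pipelines also map reads genome-wide, filter artifacts, and normalize counts.

```python
# Illustrative sketch (not the paper's pipeline): turn Hi-C-style read pairs
# into a binned contact matrix for one chromosomal region.
import numpy as np

bin_size = 20_000                      # 20 kb bins (assumed)
region_length = 2_000_000              # 2 Mb region (assumed)
n_bins = region_length // bin_size

# Hypothetical read pairs: each pair records the genomic positions of two
# fragments that were ligated together because they sat close in the nucleus.
rng = np.random.default_rng(1)
pos_a = rng.integers(0, region_length, size=50_000)
pos_b = np.clip(pos_a + rng.normal(0, 100_000, size=50_000).astype(int),
                0, region_length - 1)

contacts = np.zeros((n_bins, n_bins), dtype=np.int64)
for a, b in zip(pos_a // bin_size, pos_b // bin_size):
    contacts[a, b] += 1
    if a != b:
        contacts[b, a] += 1            # keep the matrix symmetric

# Nearby loci contact each other most often, so counts decay away from the diagonal.
print(contacts.shape, contacts.diagonal()[:5])
```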

To overcome those limitations, Zhang and his students developed a model that takes advantage of recent advances in generative AI to create a fast, accurate way to predict chromatin structures in single cells. The AI model that they designed can quickly analyze DNA sequences and predict the chromatin structures that those sequences might produce in a cell.

“Deep learning is really good at pattern recognition,” Zhang says. “It allows us to analyze very long DNA segments, thousands of base pairs, and figure out what is the important information encoded in those DNA base pairs.”

ChromoGen, the model that the researchers created, has two components. The first component, a deep learning model taught to “read” the genome, analyzes the information encoded in the underlying DNA sequence and chromatin accessibility data, the latter of which is widely available and cell type-specific.

The second component is a generative AI model that predicts physically accurate chromatin conformations, having been trained on more than 11 million chromatin conformations. These data were generated from experiments using Dip-C (a variant of Hi-C) on 16 cells from a line of human B lymphocytes.

When integrated, the first component informs the generative model how the cell type-specific environment influences the formation of different chromatin structures, and this scheme effectively captures sequence-structure relationships. For each sequence, the researchers use their model to generate many possible structures. That’s because DNA is a very disordered molecule, so a single DNA sequence can give rise to many different possible conformations.
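
The sketch below is a toy schematic of that two-component idea, not the authors’ ChromoGen architecture: a stand-in encoder condenses sequence and accessibility into an embedding, and a stand-in sampler draws many conformations conditioned on it.

```python
# Toy schematic of a two-component generative pipeline (stand-ins only):
# an encoder turns sequence + accessibility into an embedding, and a sampler
# draws many plausible 3D conformations conditioned on that embedding.
import numpy as np

rng = np.random.default_rng(0)

def encode_region(sequence: str, accessibility: np.ndarray) -> np.ndarray:
    """Toy encoder: summarize base composition and accessibility into a vector.
    A real model would be a deep network trained on genomic data."""
    comp = np.array([sequence.count(b) / len(sequence) for b in "ACGT"])
    return np.concatenate([comp, [accessibility.mean(), accessibility.std()]])

def sample_conformations(embedding: np.ndarray, n_samples: int, n_beads: int = 128):
    """Toy sampler: draw an ensemble of polymer-like 3D structures whose spread
    depends on the embedding. Stands in for a trained generative model."""
    scale = 1.0 + embedding.sum()                     # condition on the embedding
    steps = rng.normal(0.0, scale, size=(n_samples, n_beads, 3))
    return np.cumsum(steps, axis=1)                   # random-walk "chromatin" chains

embedding = encode_region("ACGT" * 250, rng.random(1000))
ensemble = sample_conformations(embedding, n_samples=1000)
print(ensemble.shape)   # (1000, 128, 3): many conformations for one sequence
```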

“A major complicating factor of predicting the structure of the genome is that there isn’t a single solution that we’re aiming for. There’s a distribution of structures, no matter what portion of the genome you’re looking at. Predicting that very complicated, high-dimensional statistical distribution is something that is incredibly challenging to do,” Schuette says.

Rapid analysis

Once trained, the model can generate predictions on a much faster timescale than Hi-C or other experimental techniques.

“Whereas you might spend six months running experiments to get a few dozen structures in a given cell type, you can generate a thousand structures in a particular region with our model in 20 minutes on just one GPU,” Schuette says.

After training their model, the researchers used it to generate structure predictions for more than 2,000 DNA sequences, then compared them to the experimentally determined structures for those sequences. They found that the structures generated by the model were the same or very similar to those seen in the experimental data.
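
One simple way to make such a comparison, sketched below under assumed data shapes, is to reduce each ensemble to a mean pairwise-distance map and correlate the two maps; the paper’s actual evaluation metrics may differ.

```python
# Assumed workflow (not the paper's exact metric): compare a predicted ensemble
# to an experimental one via their mean pairwise-distance maps.
import numpy as np

def mean_distance_map(ensemble: np.ndarray) -> np.ndarray:
    """ensemble: (n_structures, n_beads, 3) -> (n_beads, n_beads) mean distances."""
    diffs = ensemble[:, :, None, :] - ensemble[:, None, :, :]
    return np.linalg.norm(diffs, axis=-1).mean(axis=0)

rng = np.random.default_rng(2)
predicted = np.cumsum(rng.normal(size=(200, 64, 3)), axis=1)     # stand-in ensembles
experimental = np.cumsum(rng.normal(size=(200, 64, 3)), axis=1)

d_pred = mean_distance_map(predicted)
d_exp = mean_distance_map(experimental)

# Correlate the upper triangles of the two maps as a simple similarity score.
iu = np.triu_indices_from(d_pred, k=1)
r = np.corrcoef(d_pred[iu], d_exp[iu])[0, 1]
print(f"distance-map correlation: {r:.2f}")
```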

“We typically look at hundreds or thousands of conformations for each sequence, and that gives you a reasonable representation of the diversity of the structures that a particular region can have,” Zhang says. “If you repeat your experiment multiple times, in different cells, you will very likely end up with a very different conformation. That’s what our model is trying to predict.”

The researchers also found that the model could make accurate predictions for data from cell types other than the one it was trained on. This suggests that the model could be useful for analyzing how chromatin structures differ between cell types, and how those differences affect their function. The model could also be used to explore different chromatin states that can exist within a single cell, and how those changes affect gene expression.

“ChromoGen provides a new framework for AI-driven discovery of genome folding principles and demonstrates that generative AI can bridge genomic and epigenomic features with 3D genome structure, pointing to future work on studying the variation of genome structure and function across a broad range of biological contexts,” says Jian Ma, a professor of computational biology at Carnegie Mellon University, who was not involved in the research.

Another possible application would be to explore how mutations in a particular DNA sequence change the chromatin conformation, which could shed light on how such mutations may cause disease.

“There are a lot of interesting questions that I think we can address with this type of model,” Zhang says.

The researchers have made all of their data and the model available to others who wish to use it.

The research was funded by the National Institutes of Health.


From bench to bedside, and beyond

In the United States and abroad, Matthew Dolan ’81 has served as a leader in immunology and virology.


In medical school, Matthew Dolan ’81 briefly considered specializing in orthopedic surgery because of the materials science nature of the work — but he soon realized that he didn’t have the innate skills required for that type of work.

“I’ll be honest with you — I can’t parallel park,” he jokes. “You can consider a lot of things, but if you find the things that you’re good at and that excite you, you can hopefully move forward with those.”

Dolan certainly has, tackling problems from bench to bedside and beyond. Both in the United States and abroad through the U.S. Air Force, Dolan has emerged as a leader in immunology and virology, and has served as director of the Defense Institute for Medical Operations. He’s worked on everything from foodborne illnesses and Ebola to biological weapons and Covid-19, and has even been a guest speaker on NPR’s “Science Friday.”

“This is fun and interesting, and I believe that, and I work hard to convey that — and it’s contagious,” he says. “You can affect people with that excitement.”

Pieces of the puzzle

Dolan fondly recalls his years at MIT, and is still in touch with many of the “brilliant” and “interesting” friends he made while in Cambridge.

He notes that the challenges that were the most rewarding in his career were also the ones that MIT had uniquely prepared him for. Dolan, a Course 7 major, naturally took many classes outside of biology as part of his undergraduate studies: organic chemistry was foundational for understanding toxicology while studying chemical weapons, while problems posed by pathogens like Legionella, which causes pneumonia and can spread through water systems such as ice machines or air conditioners, are solved at the interface between public health and ecology.

“I learned that learning can be a high-intensity experience,” Dolan recalls. “You can be aggressive in your learning; you can learn and excel in a wide variety of things and gather up all the knowledge and knowledgeable people to work together towards solutions.”

Dolan, for example, worked in the Amazon Basin in Peru on a public health crisis, a sharp rise in childhood mortality due to malaria. The cause was a few degrees removed from the immediate problem: human agriculture had affected the Amazon’s tributaries, leading to still and stagnant water where before there had been rushing streams and rivers. This change in the environment allowed a certain mosquito species of “avid human biters” to thrive.

“It can be helpful and important for some people to have a really comprehensive and contextual view of scientific problems and biological problems,” he says. “It’s very rewarding to put the pieces in a puzzle like that together.”

Choosing to serve

Dolan says a key to finding meaning in his work, especially during difficult times, is a sentiment from Alsatian polymath and Nobel Peace Prize winner Albert Schweitzer: “The only ones among you who will be really happy are those who will have sought and found how to serve.”

One of Dolan’s early formative experiences was working in the heart of the HIV/AIDS epidemic, at a time when there was no effective treatment. No matter how hard he worked, the patients would still die.

“Failure is not an option — unless you have to fail. You can’t let the failures destroy you,” he says. “There are a lot of other battles out there, and it’s self-indulgent to ignore them and focus on your woe.”

Lasting impacts

Dolan couldn’t pick a favorite country, but notes that he’s always impressed seeing how people value the chance to excel with science and medicine when offered resources and respect. Ultimately, everyone he’s worked with, no matter their differences, was committed to solving problems and improving lives.

Dolan worked in Russia after the Berlin Wall fell, on HIV/AIDS in Moscow and tuberculosis in the Russian Far East. Although relations with Russia are currently tense, to say the least, Dolan remains optimistic for a brighter future.

“People that were staunch adversaries can go on to do well together,” he says. “Sometimes, peace leads to partnership. Remembering that it was once possible gives me great hope.”

Dolan understands that the most lasting impact he has had is, likely, teaching: Time marches on, and discoveries can be lost to history, but teaching and training people continues and propagates. In addition to guiding the next generation of health-care specialists, Dolan also developed programs in laboratory biosafety and biosecurity with the U.S. departments of State and Defense, and taught those programs around the world.

“Working in prevention gives you the chance to take care of process problems before they become people problems — patient care problems,” he says. “I have been so impressed with the courageous and giving people that have worked with me.” 


Rare and mysterious cosmic explosion: Gamma-ray burst or jetted tidal disruption event?

Researchers characterize the peculiar Einstein Probe transient EP240408a.


Highly energetic explosions in the sky are commonly attributed to gamma-ray bursts. We now understand that these bursts originate from either the merger of two neutron stars or the collapse of a massive star. In these scenarios, a newborn black hole is formed, emitting a jet that travels at nearly the speed of light. When these jets are directed toward Earth, we can observe them from vast distances — sometimes billions of light-years away — due to a relativistic effect known as Doppler boosting. Over the past decade, thousands of such gamma-ray bursts have been detected.
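
For reference, the standard textbook expression behind Doppler boosting is shown below; it is included here for context and is not quoted from the article.

```latex
% Relativistic Doppler (beaming) factor for emission from a jet moving at
% speed \beta c at angle \theta to the line of sight:
\[
  \delta = \frac{1}{\Gamma\,(1 - \beta\cos\theta)},
  \qquad
  \Gamma = \frac{1}{\sqrt{1 - \beta^{2}}}.
\]
% The observed flux is boosted by roughly \delta^{3} to \delta^{4} (the exact
% power depends on jet geometry and spectral shape), which is why on-axis jets
% remain visible across billions of light-years.
```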

Since its launch in 2024, the Einstein Probe — an X-ray space telescope developed by the Chinese Academy of Sciences (CAS) in partnership with the European Space Agency (ESA) and the Max Planck Institute for Extraterrestrial Physics — has been scanning the skies for energetic explosions, and in April the telescope observed an unusual event designated EP240408a. Now an international team of astronomers, including Dheeraj Pasham from MIT, Igor Andreoni from the University of North Carolina at Chapel Hill, and Brendan O’Connor from Carnegie Mellon University, has investigated this explosion using a slew of ground-based and space-based telescopes, including NuSTAR, Swift, Gemini, Keck, DECam, VLA, ATCA, and NICER, which was developed in collaboration with MIT.

An open-access report of their findings, published Jan. 27 in The Astrophysical Journal Letters, indicates that the characteristics of this explosion do not match those of typical gamma-ray bursts. Instead, it may represent a rare new class of powerful cosmic explosion — a jetted tidal disruption event, which occurs when a supermassive black hole tears apart a star. 

“NICER’s ability to steer to pretty much any part of the sky and monitor for weeks has been instrumental in our understanding of these unusual cosmic explosions,” says Pasham, a research scientist at the MIT Kavli Institute for Astrophysics and Space Research.

While a jetted tidal disruption event is plausible, the researchers say the lack of radio emissions from this jet is puzzling. O’Connor surmises, “EP240408a ticks some of the boxes for several different kinds of phenomena, but it doesn’t tick all the boxes for anything. In particular, the short duration and high luminosity are hard to explain in other scenarios. The alternative is that we are seeing something entirely new!”

According to Pasham, the Einstein Probe is just beginning to scratch the surface of what seems possible. “I’m excited to chase the next weird explosion from the Einstein Probe,” he says, echoing astronomers worldwide who look forward to the prospect of discovering more unusual explosions from the farthest reaches of the cosmos.


Evelina Fedorenko receives Troland Award from National Academy of Sciences

Cognitive neuroscientist is recognized for her groundbreaking discoveries about the brain’s language system.


The National Academy of Sciences (NAS) recently announced that MIT Associate Professor Evelina Fedorenko will receive a 2025 Troland Research Award for her groundbreaking contributions toward understanding the language network in the human brain.

The Troland Research Award is given annually to recognize unusual achievement by early-career researchers within the broad spectrum of experimental psychology.

Fedorenko, an associate professor of brain and cognitive sciences and a McGovern Institute for Brain Research investigator, is interested in how minds and brains create language. Her lab is unpacking the internal architecture of the brain’s language system and exploring the relationship between language and various cognitive, perceptual, and motor systems. Her novel methods combine precise measures of an individual’s brain organization with innovative computational modeling to make fundamental discoveries about the computations that underlie the uniquely human ability for language.

Fedorenko has shown that the language network is selective for language processing over diverse non-linguistic processes that have been argued to share computational demands with language, such as math, music, and social reasoning. Her work has also demonstrated that syntactic processing is not localized to a particular region within the language network; instead, every brain region that responds to syntactic processing is at least as sensitive to word meanings.

She has also shown that representations from neural network language models, such as ChatGPT, are similar to those in the human language brain areas. Fedorenko also highlighted that although language models can master linguistic rules and patterns, they are less effective at using language in real-world situations. In the human brain, that kind of functional competence is distinct from formal language competence, she says, requiring not just language-processing circuits but also brain areas that store knowledge of the world, reason, and interpret social interactions. Contrary to a prominent view that language is essential for thinking, Fedorenko argues that language is not the medium of thought and is primarily a tool for communication.

Ultimately, Fedorenko’s cutting-edge work is uncovering the computations and representations that fuel language processing in the brain. She will receive the Troland Award this April, during the annual meeting of the NAS in Washington.


Smart carbon dioxide removal yields economic and environmental benefits

MIT study finds a diversified portfolio of carbon dioxide removal options delivers the best return on investment.


Last year the Earth exceeded 1.5 degrees Celsius of warming above preindustrial times, a threshold beyond which wildfires, droughts, floods, and other climate impacts are expected to escalate in frequency, intensity, and lethality. To cap global warming at 1.5 C and avert that scenario, the nearly 200 signatory nations of the Paris Agreement on climate change will need to not only dramatically lower their greenhouse gas emissions, but also take measures to remove carbon dioxide (CO2) from the atmosphere and durably store it at or below the Earth’s surface.

Past analyses of the climate mitigation potential, costs, benefits, and drawbacks of different carbon dioxide removal (CDR) options have focused primarily on three strategies: bioenergy with carbon capture and storage (BECCS), in which CO2-absorbing plant matter is converted into fuels or directly burned to generate energy, with some of the plant’s carbon content captured and then stored safely and permanently; afforestation/reforestation, in which CO2-absorbing trees are planted in large numbers; and direct air carbon capture and storage (DACCS), a technology that captures and separates CO2 directly from ambient air, and injects it into geological reservoirs or incorporates it into durable products. 

To provide a more comprehensive and actionable analysis of CDR, a new study by researchers at the MIT Center for Sustainability Science and Strategy (CS3) first expands the option set to include biochar (charcoal produced from plant matter and stored in soil) and enhanced weathering (EW) (spreading finely ground rock particles on land to accelerate storage of CO2 in soil and water). The study then evaluates portfolios of all five options — in isolation and in combination — to assess their capability to meet the 1.5 C goal, and their potential impacts on land, energy, and policy costs.

The study appears in the journal Environmental Research Letters. Aided by their global multi-region, multi-sector Economic Projection and Policy Analysis (EPPA) model, the MIT CS3 researchers produce three key findings.

First, the most cost-effective, low-impact strategy that policymakers can take to achieve global net-zero emissions — an essential step in meeting the 1.5 C goal — is to diversify their CDR portfolio, rather than rely on any single option. This approach minimizes overall cropland and energy consumption, and negative impacts such as increased food insecurity and decreased energy supplies.

Diversifying across multiple CDR options achieves the highest CDR deployment, around 31.5 gigatons of CO2 per year in 2100, while also proving to be the most cost-effective net-zero strategy. The study identifies BECCS and biochar as the most cost-competitive options for removing CO2 from the atmosphere, followed by EW, with DACCS uncompetitive due to high capital and energy requirements. While posing logistical and other challenges, biochar and EW have the potential to improve soil quality and productivity across 45 percent of all croplands by 2100.

“Diversifying CDR portfolios is the most cost-effective net-zero strategy because it avoids relying on a single CDR option, thereby reducing and redistributing negative impacts on agriculture, forestry, and other land uses, as well as on the energy sector,” says Solene Chiquier, lead author of the study who was a CS3 postdoc during its preparation.

The second finding: There is no optimal CDR portfolio that will work well at global and national levels. The ideal CDR portfolio for a particular region will depend on local technological, economic, and geophysical conditions. For example, afforestation and reforestation would be of great benefit in places like Brazil, Latin America, and Africa, by not only sequestering carbon in more acreage of protected forest but also helping to preserve planetary well-being and human health.

“In designing a sustainable, cost-effective CDR portfolio, it is important to account for regional availability of agricultural, energy, and carbon-storage resources,” says Sergey Paltsev, CS3 deputy director, MIT Energy Initiative senior research scientist, and supervising co-author of the study. “Our study highlights the need for enhancing knowledge about local conditions that favor some CDR options over others.”

Finally, the MIT CS3 researchers show that delaying large-scale deployment of CDR portfolios could be very costly, leading to considerably higher carbon prices across the globe — a development sure to deter the climate mitigation efforts needed to achieve the 1.5 C goal. They recommend near-term implementation of policy and financial incentives to help fast-track those efforts.


MIT Press’ Direct to Open opens access to over 80 new monographs

Support for D2O in 2025 includes two new three-year, all-consortium commitments from the Florida Virtual Campus and the Big Ten Academic Alliance.


The MIT Press has announced that Direct to Open (D2O) will open access to over 80 new monographs and edited book collections in the spring and fall publishing seasons, after reaching its full funding goal for 2025.

“It has been one of the greatest privileges of my career to contribute to this program and demonstrate that our academic community can unite to publish high-quality open-access monographs at scale,” says Amy Harris, senior manager of library relations and sales at the MIT Press. “We are deeply grateful to all of the consortia that have partnered with us and to the hundreds of libraries that have invested in this program. Together, we are expanding the public knowledge commons in ways that benefit scholars, the academy, and readers around the world.”

Among the highlights from the MIT Press’s fourth D2O funding cycle is a new three-year, consortium-wide commitment from the Florida Virtual Campus (FLVC) and a renewed three-year commitment from the Big Ten Academic Alliance (BTAA). These long-term collaborations will play a pivotal role in supporting the press’s open-access efforts for years to come.

“The Florida Virtual Campus is honored to participate in D2O in order to provide this collection of high-quality scholarship to more than 1.2 million students and faculty at the 28 state colleges and 12 state universities of Florida,” says Elijah Scott, executive director of library services for the Florida Virtual Campus. “The D2O program allows FLVC to make this research collection available to our member libraries while concurrently fostering the larger global aspiration of sustainable and equitable access to information.”

“The libraries of the Big Ten Academic Alliance are committed to supporting the creation of open-access content,” adds Kate McCready, program director for open publishing at the Big Ten Academic Alliance Library. “We're thrilled that our participation in D2O contributes to the opening of this collection, as well as championing the exploration of new models for opening scholarly monographs.”

In 2025, hundreds of libraries renewed their support thanks to the teams at consortia around the world, including the Council of Australasian University Librarians, the CBB Library Consortium, the California Digital Library, the Canadian Research Knowledge Network, CRL/NERL, the Greater Western Library Alliance, Jisc, Lyrasis, MOBIUS, PALCI, SCELC, and the Tri-College Library Consortium.

Launched in 2021, D2O is an innovative sustainable framework for open-access monographs that shifts publishing from a solely market-based, purchase model where individuals and libraries buy single e-books, to a collaborative, library-supported open-access model. 

Many other models offer open-access opportunities on a title-by-title basis or within specific disciplines. D2O’s particular advantage is that it enables a press to provide open access to its entire list of scholarly books at scale, embargo-free, during each funding cycle. Thanks to D2O, all MIT Press monograph authors have the opportunity for their work to be published open access, with equal support to traditionally underserved and underfunded disciplines in the social sciences and humanities.  

The MIT Press will now turn its attention to its fifth funding cycle and invites libraries and library consortia to participate. For details, please visit the MIT Press website or contact the Library Relations team.


Professor Emeritus Gerald Schneider, discoverer of the “two visual systems,” dies at 84

An MIT affiliate for some 60 years, Schneider was an authority on the relationships between brain structure and behavior.


Gerald E. Schneider, a professor emeritus of psychology and member of the MIT community for over 60 years, passed away on Dec. 11, 2024. He was 84.

Schneider was an authority on the relationships between brain structure and behavior, concentrating on neuronal development, regeneration or altered growth after brain injury, and the behavioral consequences of altered connections in the brain.

Using the Syrian golden hamster as his test subject of choice, Schneider made numerous contributions to the advancement of neuroscience. He laid out the concept of two visual systems — one for locating objects and one for identifying objects — in a 1969 issue of Science, a milestone in the study of brain-behavior relationships. In 1973, he described a “pruning effect” in the optic tract axons of adult hamsters that had brain lesions early in life. In 2006, his lab reported a new nanobiomedical technology for tissue repair and restoration in Biological Sciences. The paper showed how a designed self-assembling peptide nanofiber scaffold could create a permissive environment for axons, not only to regenerate through the site of an acute injury in the optic tract of hamsters, but also to knit the brain tissue together.

His work shaped the research and thinking of numerous colleagues and trainees. Mriganka Sur, the Newton Professor of Neuroscience and former Department of Brain and Cognitive Sciences (BCS) department head, recalls how Schneider’s paper, “Is it really better to have your brain lesion early? A revision of the ‘Kennard Principle,’” published in 1979 in the journal Neuropsychologia, influenced his work on rewiring retinal projections to the auditory thalamus, which was used to derive principles of functional plasticity in the cortex.

“Jerry was an extremely innovative thinker. His hypothesis of two visual systems — for detailed spatial processing and for movement processing — based on his analysis of visual pathways in hamsters presaged and inspired later work on form and motion pathways in the primate brain,” Sur says. “His description of conservation of axonal arbor during development laid the foundation for later ideas about homeostatic mechanisms that co-regulate neuronal plasticity.”

Institute Professor Ann Graybiel was a colleague of Schneider’s for over five decades. She recalls early in her career being asked by Schneider to help make a map of the superior colliculus.

“I took it as an honor to be asked, and I worked very hard on this, with great excitement. It was my first such mapping, to be followed by much more in the future,” Graybiel recalls. “Jerry was fascinated by animal behavior, and from early on he made many discoveries using hamsters as his main animals of choice. He found that they could play. He found that they could operate in ways that seemed very sophisticated. And, yes, he mapped out pathways in their brains.”

Schneider was raised in Wheaton, Illinois, and graduated from Wheaton College in 1962 with a degree in physics. He was recruited to MIT by Hans-Lukas Teuber, one of the founders of the Department of Psychology, which eventually became the Department of Brain and Cognitive Sciences. Walle Nauta, another founder of the department, taught Schneider neuroanatomy. The pair were deeply influential in shaping his interests in neuroscience and his research.

“He admired them both very much and was very attached to them,” his daughter, Nimisha Schneider, says. “He was an interdisciplinary scholar and he liked that aspect of neuroscience, and he was fascinated by the mysteries of the human brain.”

Shortly after completing his PhD in psychology in 1966, he was hired as an assistant professor in 1967. He was named an associate professor in 1970, received tenure in 1975, and was appointed a full professor in 1977.

After his retirement in 2017, Schneider remained involved with the Department of BCS. Professor Pawan Sinha brought Schneider to campus for what would be his last on-campus engagement, as part of the “SilverMinds Series,” an initiative in the Sinha Lab to engage with scientists now in their “silver years.”

Schneider’s research made an indelible impact on Sinha, beginning as a graduate student when he was inspired by Schneider’s work linking brain structure and function. His work on nerve regeneration, which merged fundamental science and real-world impact, served as a “North Star” that guided Sinha’s own work as he established his lab as a junior faculty member.

“Even through the sadness of his loss, I am grateful for the inspiring example he has left for us of a life that so seamlessly combined brilliance, kindness, modesty, and tenacity,” Sinha says. “He will be missed.”

Schneider’s life centered around his research and teaching, but he also had many other skills and hobbies. Early in his life, he enjoyed painting, and as he grew older he was drawn to poetry. He was also skilled in carpentry and making furniture. He built the original hamster cages for his lab himself, along with numerous pieces of home furniture and shelving. He enjoyed nature anywhere it could be found, from the bees in his backyard to hiking and visiting state and national parks.

He was a Type 1 diabetic, and at the time of his death, he was nearing the completion of a book on the effects of hypoglycemia on the brain, which his family hopes to have published in the future. He was also the author of “Brain Structure and Its Origins,” published in 2014 by MIT Press.

He is survived by his wife, Aiping; his children, Cybele, Aniket, and Nimisha; and step-daughter Anna. He was predeceased by a daughter, Brenna. He is also survived by eight grandchildren and 10 great-grandchildren. A memorial in his honor was held on Jan. 11 at Saint James Episcopal Church in Cambridge.


Kingdoms collide as bacteria and cells form captivating connections

Studying the pathogen R. parkeri, researchers discovered the first evidence of extensive and stable interkingdom contacts between a pathogen and a eukaryotic organelle.


In biology textbooks, the endoplasmic reticulum is often portrayed as a distinct, compact organelle near the nucleus, and is commonly known to be responsible for protein trafficking and secretion. In reality, the ER is vast and dynamic, spread throughout the cell and able to establish contact and communication with and between other organelles. These membrane contacts regulate processes as diverse as fat metabolism, sugar metabolism, and immune responses.

Exploring how pathogens manipulate and hijack essential processes to promote their own life cycles can reveal much about fundamental cellular functions and provide insight into viable treatment options for understudied pathogens.

New research from the Lamason Lab in the Department of Biology at MIT, recently published in the Journal of Cell Biology, has shown that Rickettsia parkeri, a bacterial pathogen that lives freely in the cytosol, can interact in an extensive and stable way with the rough endoplasmic reticulum, forming previously unseen contacts with the organelle.

It’s the first known example of a direct interkingdom contact site between an intracellular bacterial pathogen and a eukaryotic membrane.

The Lamason Lab studies R. parkeri as a model for infection of the more virulent Rickettsia rickettsii. R. rickettsii, carried and transmitted by ticks, causes Rocky Mountain Spotted Fever. Left untreated, the infection can cause symptoms as severe as organ failure and death.

Rickettsia is difficult to study because it is an obligate pathogen, meaning it can only live and reproduce inside living cells, much like a virus. Researchers must get creative to parse out fundamental questions and molecular players in the R. parkeri life cycle, and much remains unclear about how R. parkeri spreads.

Detour to the junction

First author Yamilex Acevedo-Sánchez, a BSG-MSRP-Bio program alum and a graduate student at the time, stumbled across the ER and R. parkeri interactions while trying to observe Rickettsia reaching a cell junction.

The current model for Rickettsia infection involves R. parkeri spreading from cell to cell by traveling to the specialized contact sites between cells, where it is engulfed by the neighboring cell. Listeria monocytogenes, which the Lamason Lab also studies, uses actin tails to forcefully propel itself into a neighboring cell. By contrast, R. parkeri can form an actin tail, but loses it before reaching the cell junction. Somehow, R. parkeri is still able to spread to neighboring cells.

After an MIT seminar about the ER’s lesser-known functions, Acevedo-Sánchez developed a cell line to observe whether Rickettsia might be spreading to neighboring cells by hitching a ride on the ER to reach the cell junction.

Instead, she saw an unexpectedly high percentage of R. parkeri surrounded and enveloped by the ER, at a distance of about 55 nanometers. This distance is significant because membrane contacts for interorganelle communication in eukaryotic cells form connections 10-80 nanometers wide. The researchers ruled out an immune response, and the sections of the ER interacting with R. parkeri were still connected to the wider ER network.

“I’m of the mind that if you want to learn new biology, just look at cells,” Acevedo-Sánchez says. “Manipulating the organelle that establishes contact with other organelles could be a great way for a pathogen to gain control during infection.” 

The stable connections were unexpected because the ER is constantly breaking and reforming its contacts, which typically last only seconds or minutes, so it was surprising to see the ER stably associating around the bacteria. And because R. parkeri is a cytosolic pathogen that lives freely in the cytosol of the cells it infects, it was also unexpected to see the bacterium surrounded by a membrane at all.

Small margins

Acevedo-Sánchez collaborated with the Center for Nanoscale Systems at Harvard University to view her initial observations at higher resolution using focused ion beam scanning electron microscopy. FIB-SEM involves taking a sample of cells and blasting them with a focused ion beam in order to shave off a section of the block of cells. With each layer, a high-resolution image is taken. The result of this process is a stack of images.

From there, Acevedo-Sánchez marked what different areas of the images were — such as the mitochondria, Rickettsia, or the ER — and ORS Dragonfly, a machine-learning program, sorted through the thousand or so images to identify those categories. That information was then used to create 3D models of the samples.
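
As a rough illustration of what this kind of volumetric segmentation involves (not the ORS Dragonfly workflow itself; the synthetic data, threshold, and object sizes below are invented for the example), a minimal Python sketch might look like this:

```python
# Toy sketch of a FIB-SEM-style segmentation workflow (illustrative only; the
# study used the commercial ORS Dragonfly software with trained models).
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Stand-in for a stack of serial FIB-SEM slices: a 3D grayscale volume.
volume = rng.normal(loc=0.2, scale=0.05, size=(50, 256, 256))
volume[20:30, 100:140, 100:140] += 0.6  # a bright "organelle-like" region

# 1. Threshold to separate candidate structures from background.
mask = volume > 0.5

# 2. Group connected voxels into discrete 3D objects (the segmentation).
labels, n_objects = ndimage.label(mask)

# 3. Measure each object (e.g., its volume in voxels) for downstream 3D modeling.
sizes = ndimage.sum(mask, labels, index=range(1, n_objects + 1))
print(f"found {n_objects} objects; largest spans {int(sizes.max())} voxels")
```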

Acevedo-Sánchez noted that less than 5 percent of R. parkeri formed connections with the ER — but small subpopulations with particular characteristics are known to be critical for R. parkeri infection. R. parkeri can exist in two states: motile, with an actin tail, and nonmotile, without one. Mutants unable to form actin tails cannot progress to adjacent cells — and even in nonmutants, the percentage of R. parkeri with tails starts at about 2 percent early in infection and never exceeds 15 percent at its height.

The ER only interacts with nonmotile R. parkeri, and those interactions increased 25-fold in mutants that couldn’t form tails.

Creating connections

Co-authors Acevedo-Sánchez, Patrick Woida, and Caroline Anderson also investigated possible ways the connections with the ER are mediated. VAP proteins, which mediate ER interactions with other organelles, are known to be co-opted by other pathogens during infection.

During infection by R. parkeri, VAP proteins were recruited to the bacteria; when VAP proteins were knocked out, the frequency of interactions between R. parkeri and the ER decreased, indicating R. parkeri may be taking advantage of these cellular mechanisms for its own purposes during infection.

Although Acevedo-Sánchez now works as a senior scientist at AbbVie, the Lamason Lab is continuing the work of exploring the molecular players that may be involved, how these interactions are mediated, and whether the contacts affect the host or bacteria’s life cycle.

Senior author and associate professor of biology Rebecca Lamason noted that these potential interactions are particularly interesting because bacteria and mitochondria are thought to have evolved from a common ancestor. The Lamason Lab has been exploring whether R. parkeri could form the same membrane contacts that mitochondria do, although they haven’t proven that yet. So far, R. parkeri is the only cytosolic pathogen that has been observed behaving this way.

“It’s not just bacteria accidentally bumping into the ER. These interactions are extremely stable. The ER is clearly extensively wrapping around the bacterium, and is still connected to the ER network,” Lamason says. “It seems like it has a purpose — what that purpose is remains a mystery.” 


A new vaccine approach could help combat future coronavirus pandemics

The nanoparticle-based vaccine shows promise against many variants of SARS-CoV-2, as well as related sarbecoviruses that could jump to humans.


A new experimental vaccine developed by researchers at MIT and Caltech could offer protection against emerging variants of SARS-CoV-2, as well as related coronaviruses, known as sarbecoviruses, that could spill over from animals to humans.

In addition to SARS-CoV-2, the virus that causes COVID-19, sarbecoviruses — a subgenus of coronaviruses — include the virus that led to the outbreak of the original SARS in the early 2000s. Sarbecoviruses that currently circulate in bats and other mammals may also hold the potential to spread to humans in the future.

By attaching up to eight different versions of sarbecovirus receptor-binding proteins (RBDs) to nanoparticles, the researchers created a vaccine that generates antibodies that recognize regions of RBDs that tend to remain unchanged across all strains of the viruses. That makes it much more difficult for viruses to evolve to escape vaccine-induced antibodies.

“This work is an example of how bringing together computation and immunological experiments can be fruitful,” says Arup K. Chakraborty, the John M. Deutch Institute Professor at MIT and a member of MIT’s Institute for Medical Engineering and Science and the Ragon Institute of MGH, MIT, and Harvard.

Chakraborty and Pamela Bjorkman, a professor of biology and biological engineering at Caltech, are the senior authors of the study, which appears today in Cell. The paper’s lead authors are Eric Wang PhD ’24, Caltech postdoc Alexander Cohen, and Caltech graduate student Luis Caldera.

Mosaic nanoparticles

The new study builds on a project begun in Bjorkman’s lab, in which she and Cohen created a “mosaic” 60-mer nanoparticle that presents eight different sarbecovirus RBD proteins. The RBD is the part of the viral spike protein that helps the virus get into host cells. It is also the region of the coronavirus spike protein that is usually targeted by antibodies against sarbecoviruses.

RBDs contain some regions that are variable and can easily mutate to escape antibodies. Most of the antibodies generated by mRNA COVID-19 vaccines target those variable regions because they are more easily accessible. That is one reason why mRNA vaccines need to be updated to keep up with the emergence of new strains.

If researchers could create a vaccine that stimulates production of antibodies that target RBD regions that can’t easily change and are shared across viral strains, it could offer broader protection against a variety of sarbecoviruses.

Such a vaccine would have to stimulate B cells that have receptors (which then become antibodies) that target those shared, or “conserved,” regions. When B cells circulating in the body encounter a vaccine or other antigen, their B cell receptors, each of which has two “arms,” are more effectively activated if two copies of the antigen are available for binding, one to each arm. The conserved regions tend to be less accessible to B cell receptors, so if a nanoparticle vaccine presents just one type of RBD, B cells with receptors that bind to the more accessible variable regions are most likely to be activated.

To overcome this, the Caltech researchers designed a nanoparticle vaccine that includes 60 copies of RBDs from eight different related sarbecoviruses, which have different variable regions but similar conserved regions. Because eight different RBDs are displayed on each nanoparticle, it’s unlikely that two identical RBDs will end up next to each other. Therefore, when a B cell receptor encounters the nanoparticle immunogen, the B cell is more likely to become activated if its receptor can recognize the conserved regions of the RBD.
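
A back-of-the-envelope check on this intuition: with roughly equal numbers of eight RBD types spread over 60 sites, only about one neighboring pair in nine carries matching RBDs. The sketch below treats the sites as a simple ring of neighbors, an assumption made purely for illustration rather than the nanoparticle’s actual geometry:

```python
# Rough estimate (hypothetical geometry): if 60 sites on a particle carry a
# near-equal mix of 8 different RBD types, how often do neighboring sites match?
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_types, n_trials = 60, 8, 10_000

match_fraction = 0.0
for _ in range(n_trials):
    # Fill the particle with a shuffled, near-equal mix of the 8 RBD types.
    rbds = np.resize(np.arange(n_types), n_sites)
    rng.shuffle(rbds)
    # Treat the sites as a simple ring so each site has two neighbors
    # (a stand-in for the real icosahedral neighbor structure).
    match_fraction += np.mean(rbds == np.roll(rbds, 1))

print(f"~{100 * match_fraction / n_trials:.0f}% of neighboring pairs are identical")
# Prints roughly 11%, i.e., most B cell receptor "arms" see two different RBDs.
```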

“The concept behind the vaccine is that by co-displaying all these different RBDs on the nanoparticle, you are selecting for B cells that recognize the conserved regions that are shared between them,” Cohen says. “As a result, you’re selecting for B cells that are more cross-reactive. Therefore, the antibody response would be more cross-reactive and you could potentially get broader protection.”

In studies conducted in animals, the researchers showed that this vaccine, known as mosaic-8, produced strong antibody responses against diverse strains of SARS-CoV-2 and other sarbecoviruses, and protected the animals against challenge with both SARS-CoV-2 and SARS-CoV (the original SARS virus).

Broadly neutralizing antibodies

After these studies were published in 2021 and 2022, the Caltech researchers teamed up with Chakraborty’s lab at MIT to pursue computational strategies that could allow them to identify RBD combinations that would generate even better antibody responses against a wider variety of sarbecoviruses.

Led by Wang, the MIT researchers pursued two different strategies — first, a large-scale computational screen of many possible mutations to the RBD of SARS-CoV-2, and second, an analysis of naturally occurring RBD proteins from zoonotic sarbecoviruses.

For the first approach, the researchers began with the original strain of SARS-CoV-2 and generated sequences of about 800,000 RBD candidates by making substitutions in locations that are known to affect antibody binding to variable portions of the RBD. Then, they screened those candidates for their stability and solubility, to make sure they could withstand attachment to the nanoparticle and injection as a vaccine.

From the remaining candidates, the researchers chose 10 based on how different their variable regions were. They then used these to create mosaic nanoparticles coated with either two or five different RBD proteins (mosaic-2COM and mosaic-5COM).

In their second approach, instead of mutating the RBD sequences, the researchers chose seven naturally occurring RBD proteins, using computational techniques to select RBDs that were different from each other in regions that are variable, but retained their conserved regions. They used these to create another vaccine, mosaic-7COM.
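
A minimal sketch of this kind of diversity-driven selection, assuming invented toy sequences and plain Hamming distance in place of the study’s actual candidates and selection criteria:

```python
# Toy greedy max-min selection: from a pool of candidate RBD sequences, pick a
# subset whose variable regions are as different from one another as possible.
# (Sequences and scoring are invented; the real study used far richer criteria.)

def hamming(a: str, b: str) -> int:
    """Number of positions at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def pick_diverse(pool: list[str], k: int) -> list[str]:
    chosen = [pool[0]]  # seed with an arbitrary candidate
    while len(chosen) < k:
        # Pick the candidate whose closest already-chosen sequence is farthest away.
        best = max(
            (s for s in pool if s not in chosen),
            key=lambda s: min(hamming(s, c) for c in chosen),
        )
        chosen.append(best)
    return chosen

candidates = ["NYNYLYRLF", "NYKYLFRQF", "GFNCYFPLQ", "GFNCYFPLK", "SYSVLYNSA"]
print(pick_diverse(candidates, 3))  # e.g. ['NYNYLYRLF', 'GFNCYFPLQ', 'SYSVLYNSA']
```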

Once the researchers produced the RBD-nanoparticles, they evaluated each one in mice. After each mouse received three doses of one of the vaccines, the researchers analyzed how well the resulting antibodies bound to and neutralized seven variants of SARS-CoV-2 and four other sarbecoviruses. 

They also compared the mosaic nanoparticle vaccines to a nanoparticle with only one type of RBD displayed, and to the original mosaic-8 particle from their 2021, 2022, and 2024 studies. They found that mosaic-2COM and mosaic-5COM outperformed both of those vaccines, and mosaic-7COM showed the best responses of all. Mosaic-7COM elicited antibodies with binding to most of the viruses tested, and these antibodies were also able to prevent the viruses from entering cells.

The researchers saw similar results when they tested the new vaccines in mice that were previously vaccinated with a bivalent mRNA COVID-19 vaccine.

“We wanted to simulate the fact that people have already been infected and/or vaccinated against SARS-CoV-2,” Wang says. “In pre-vaccinated mice, mosaic-7COM is consistently giving the highest binding titers for both SARS-CoV-2 variants and other sarbecoviruses.”

Bjorkman’s lab has received funding from the Coalition for Epidemic Preparedness Innovations to do a clinical trial of the mosaic-8 RBD-nanoparticle. They also hope to move mosaic-7COM, which performed better in the current study, into clinical trials. The researchers plan to work on redesigning the vaccines so that they could be delivered as mRNA, which would make them easier to manufacture.

The research was funded by a National Science Foundation Graduate Research Fellowship, the National Institutes of Health, Wellcome Leap, the Bill and Melinda Gates Foundation, the Coalition for Epidemic Preparedness Innovations, and the Caltech Merkin Institute for Translational Research.


Toward video generative models of the molecular world

Starting with a single frame in a simulation, a new system uses generative AI to emulate the dynamics of molecules, connecting separate static molecular structures and sharpening blurry or incomplete snapshots into coherent videos.


As the capabilities of generative AI models have grown, you've probably seen how they can transform simple text prompts into hyperrealistic images and even extended video clips.

More recently, generative AI has shown potential in helping chemists and biologists explore static molecules, like proteins and DNA. Models like AlphaFold can predict molecular structures to accelerate drug discovery, and the MIT-assisted “RFdiffusion,” for example, can help design new proteins. One challenge, though, is that molecules are constantly moving and jiggling, which is important to model when constructing new proteins and drugs. Simulating these motions on a computer using physics — a technique known as molecular dynamics — can be very expensive, requiring billions of time steps on supercomputers.

As a step toward simulating these behaviors more efficiently, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and Department of Mathematics researchers have developed a generative model that learns from prior data. The team’s system, called MDGen, can take a frame of a 3D molecule and simulate what will happen next like a video, connect separate stills, and even fill in missing frames. By hitting the “play button” on molecules, the tool could potentially help chemists design new molecules and closely study how well their drug prototypes for cancer and other diseases would interact with the molecular structures they intend to impact.

Co-lead author Bowen Jing SM ’22 says that MDGen is an early proof of concept, but it suggests the beginning of an exciting new research direction. “Early on, generative AI models produced somewhat simple videos, like a person blinking or a dog wagging its tail,” says Jing, a PhD student at CSAIL. “Fast forward a few years, and now we have amazing models like Sora or Veo that can be useful in all sorts of interesting ways. We hope to instill a similar vision for the molecular world, where dynamics trajectories are the videos. For example, you can give the model the first and 10th frame, and it’ll animate what’s in between, or it can remove noise from a molecular video and guess what was hidden.”

The researchers say that MDGen represents a paradigm shift from previous comparable works with generative AI in a way that enables much broader use cases. Previous approaches were “autoregressive,” meaning they relied on the previous still frame to build the next, starting from the very first frame to create a video sequence. In contrast, MDGen generates the frames in parallel with diffusion. This means MDGen can be used to, for example, connect frames at the endpoints, or “upsample” a low frame-rate trajectory in addition to pressing play on the initial frame.
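
To make the contrast concrete, the toy sketch below compares the two generation styles; simple noise and linear interpolation stand in for MDGen’s learned diffusion model, and scalar “frames” stand in for full 3D molecular coordinates, both purely illustrative assumptions:

```python
# Toy contrast between autoregressive and parallel ("fill in all frames at once")
# trajectory generation. Linear interpolation stands in for MDGen's learned
# diffusion model; real molecular frames would be 3D coordinates, not scalars.
import numpy as np

def autoregressive(first_frame: float, n_frames: int) -> np.ndarray:
    """Roll the trajectory forward one frame at a time from a single start frame."""
    traj = [first_frame]
    for _ in range(n_frames - 1):
        traj.append(traj[-1] + np.random.normal(scale=0.1))  # next frame from previous
    return np.array(traj)

def parallel_inpaint(known: dict[int, float], n_frames: int) -> np.ndarray:
    """Generate every missing frame jointly, conditioned on any known frames
    (here the endpoints), the way a masked/diffusion model conditions on them."""
    idx = sorted(known)
    return np.interp(np.arange(n_frames), idx, [known[i] for i in idx])

print(autoregressive(0.0, 11))                  # "press play" from frame 0
print(parallel_inpaint({0: 0.0, 10: 1.0}, 11))  # connect frame 0 and frame 10
```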

This work was presented in a paper at the Conference on Neural Information Processing Systems (NeurIPS) this past December. Last summer, it received an award for its potential commercial impact at the International Conference on Machine Learning’s ML4LMS Workshop.

Some small steps forward for molecular dynamics

In experiments, Jing and his colleagues found that MDGen’s simulations were similar to running the physical simulations directly, while producing trajectories 10 to 100 times faster.

The team first tested their model’s ability to take in a 3D frame of a molecule and generate the next 100 nanoseconds. Their system pieced together successive 10-nanosecond blocks for these generations to reach that duration. The team found that MDGen was able to compete with the accuracy of a baseline model, while completing the video generation process in roughly a minute — a mere fraction of the three hours that it took the baseline model to simulate the same dynamic.

When given the first and last frame of a one-nanosecond sequence, MDGen also modeled the steps in between. The researchers’ system demonstrated a degree of realism in over 100,000 different predictions: It simulated more likely molecular trajectories than its baselines on clips shorter than 100 nanoseconds. In these tests, MDGen also indicated an ability to generalize on peptides it hadn’t seen before.

MDGen’s capabilities also include simulating frames within frames, “upsampling” the steps between each nanosecond to capture faster molecular phenomena more adequately. It can even “inpaint” structures of molecules, restoring information about them that was removed. These features could eventually be used by researchers to design proteins based on a specification of how different parts of the molecule should move.

Toying around with protein dynamics

Jing and co-lead author Hannes Stärk say that MDGen is an early sign of progress toward generating molecular dynamics more efficiently. Still, they lack the data to make these models immediately impactful in designing drugs or molecules that induce the movements chemists will want to see in a target structure.

The researchers aim to scale MDGen from modeling molecules to predicting how proteins will change over time. “Currently, we’re using toy systems,” says Stärk, also a PhD student at CSAIL. “To enhance MDGen’s predictive capabilities to model proteins, we’ll need to build on the current architecture and data available. We don’t have a YouTube-scale repository for those types of simulations yet, so we’re hoping to develop a separate machine-learning method that can speed up the data collection process for our model.”

For now, MDGen presents an encouraging path forward in modeling molecular changes invisible to the naked eye. Chemists could also use these simulations to delve deeper into the behavior of medicine prototypes for diseases like cancer or tuberculosis.

“Machine learning methods that learn from physical simulation represent a burgeoning new frontier in AI for science,” says Bonnie Berger, MIT Simons Professor of Mathematics, CSAIL principal investigator, and senior author on the paper. “MDGen is a versatile, multipurpose modeling framework that connects these two domains, and we’re very excited to share our early models in this direction.”

“Sampling realistic transition paths between molecular states is a major challenge,” says fellow senior author Tommi Jaakkola, who is the MIT Thomas Siebel Professor of electrical engineering and computer science and the Institute for Data, Systems, and Society, and a CSAIL principal investigator. “This early work shows how we might begin to address such challenges by shifting generative modeling to full simulation runs.”

Researchers across the field of bioinformatics have heralded this system for its ability to simulate molecular transformations. “MDGen models molecular dynamics simulations as a joint distribution of structural embeddings, capturing molecular movements between discrete time steps,” says Chalmers University of Technology associate professor Simon Olsson, who wasn’t involved in the research. “Leveraging a masked learning objective, MDGen enables innovative use cases such as transition path sampling, drawing analogies to inpainting trajectories connecting metastable phases.”

The researchers’ work on MDGen was supported, in part, by the National Institute of General Medical Sciences, the U.S. Department of Energy, the National Science Foundation, the Machine Learning for Pharmaceutical Discovery and Synthesis Consortium, the Abdul Latif Jameel Clinic for Machine Learning in Health, the Defense Threat Reduction Agency, and the Defense Advanced Research Projects Agency.


Physicists discover — and explain — unexpected magnetism in an atomically thin material

The work introduces a new platform for studying quantum materials.


MIT physicists have created a new ultrathin, two-dimensional material with unusual magnetic properties that initially surprised the researchers before they went on to solve the complicated puzzle behind those properties’ emergence. As a result, the work introduces a new platform for studying how materials behave at the most fundamental level — the world of quantum physics.

Ultrathin materials made of a single layer of atoms have riveted scientists’ attention since the discovery of the first such material — graphene, composed of carbon — about 20 years ago. Among other advances since then, researchers have found that stacking individual sheets of the 2D materials, and sometimes twisting them at a slight angle to each other, can give them new properties, from superconductivity to magnetism. Enter the field of twistronics, which was pioneered at MIT by Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics at MIT.

In the current research, reported in the Jan. 7 issue of Nature Physics, the scientists, led by Jarillo-Herrero, worked with three layers of graphene. Each layer was twisted on top of the next at the same angle, creating a helical structure akin to the DNA helix or a hand of three cards that are fanned apart.

“Helicity is a fundamental concept in science, from basic physics to chemistry and molecular biology. With 2D materials, one can create special helical structures, with novel properties which we are just beginning to understand. This work represents a new twist in the field of twistronics, and the community is very excited to see what else we can discover using this helical materials platform!” says Jarillo-Herrero, who is also affiliated with MIT’s Materials Research Laboratory.

Do the twist

Twistronics can lead to new properties in ultrathin materials because arranging sheets of 2D materials in this way results in a unique pattern called a moiré lattice. And a moiré pattern, in turn, has an impact on the behavior of electrons.

“It changes the spectrum of energy levels available to the electrons and can provide the conditions for interesting phenomena to arise,” says Sergio C. de la Barrera, one of three co-first authors of the recent paper. De la Barrera, who conducted the work while a postdoc at MIT, is now an assistant professor at the University of Toronto.

In the current work, the helical structure created by the three graphene layers forms two moiré lattices. One is created by the first two overlapping sheets; the other is formed between the second and third sheets.

The two moiré patterns together form a third moiré, a supermoiré, or “moiré of a moiré,” says Li-Qiao Xia, a graduate student in MIT physics and another of the three co-first authors of the Nature Physics paper. “It’s like a moiré hierarchy.” While the first two moiré patterns are only nanometers, or billionths of a meter, in scale, the supermoiré appears at a scale of hundreds of nanometers superimposed over the other two. You can only see it if you zoom out to get a much wider view of the system.
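
As a rough guide to these length scales (an illustrative calculation, not a result from the paper), the moiré period of two identical lattices twisted by a small angle θ is approximately a / (2 sin(θ/2)), where a ≈ 0.246 nanometers is graphene’s lattice constant:

```python
# Back-of-the-envelope moiré length scale for small-angle twisted graphene.
# Illustrative numbers only; the supermoiré in the helical trilayer arises from
# the beating of two such slightly mismatched moiré patterns.
import math

a_graphene_nm = 0.246  # graphene lattice constant in nanometers

def moire_period_nm(twist_deg: float, a_nm: float = a_graphene_nm) -> float:
    theta = math.radians(twist_deg)
    return a_nm / (2 * math.sin(theta / 2))

for angle in (1.8, 1.5, 1.2):
    print(f"twist {angle:.1f} deg -> moire period ~{moire_period_nm(angle):.0f} nm")
# A small mismatch between the two moiré lattices then produces a
# "moiré of a moiré" hundreds of nanometers across.
```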

A major surprise

The physicists expected to observe signatures of this moiré hierarchy. They got a huge surprise, however, when they applied and varied a magnetic field. The system responded with an experimental signature of magnetism, one that arises from the motion of electrons. In fact, this orbital magnetism persisted up to -263 degrees Celsius — the highest temperature at which such magnetism has been reported in a carbon-based material to date.

But that magnetism can only occur in a system that lacks a specific symmetry — one that the team’s new material should have had. “So the fact that we saw this was very puzzling. We didn’t really understand what was going on,” says Aviram Uri, an MIT Pappalardo postdoc in physics and the third co-first author of the new paper.

Other authors of the paper include MIT professor of physics Liang Fu; Aaron Sharpe of Sandia National Laboratories; Yves H. Kwan of Princeton University; Ziyan Zhu, David Goldhaber-Gordon, and Trithep Devakul of Stanford University; and Kenji Watanabe and Takashi Taniguchi of the National Institute for Materials Science in Japan.

What was happening?

It turns out that the new system did indeed break the symmetry that prohibits the orbital magnetism the team observed, but in a very unusual way. “What happens is that the atoms in this system aren’t very comfortable, so they move in a subtle orchestrated way that we call lattice relaxation,” says Xia. And the new structure formed by that relaxation does indeed break the symmetry locally, on the moiré length scale.

This opens the possibility for the orbital magnetism the team observed. However, if you zoom out to view the system on the supermoiré scale, the symmetry is restored. “The moiré hierarchy turns out to support interesting phenomena at different length scales,” says de la Barrera.

Concludes Uri: “It’s a lot of fun when you solve a riddle and it’s such an elegant solution. We’ve gained new insights into how electrons behave in these complex systems, insights that we couldn’t have had unless our experimental observations forced us to think about these things.”

This work was supported by the Army Research Office, the National Science Foundation, the Gordon and Betty Moore Foundation, the Ross M. Brown Family Foundation, an MIT Pappalardo Fellowship, the VATAT Outstanding Postdoctoral Fellowship in Quantum Science and Technology, the JSPS KAKENHI, and a Stanford Science Fellowship. This work was carried out, in part, through the use of MIT.nano facilities.


New START.nano cohort is developing solutions in health, data storage, power, and sustainable energy

With seven new startups, MIT.nano's program for hard-tech ventures expands to more than 20 companies.


MIT.nano has announced seven new companies to join START.nano, a program aimed at speeding the transition of hard-tech innovation to market. The program supports new ventures through discounted use of MIT.nano’s facilities and access to the MIT innovation ecosystem.

The advancements pursued by the newly engaged startups include wearables for health care, green alternatives to fossil fuel-based energy, novel battery technologies, enhancements in data systems, and interconnecting nanofabrication knowledge networks, among others.

“The transition of the grand idea that is imagined in the laboratory to something that a million people can use in their hands is a journey fraught with many challenges,” MIT.nano Director Vladimir Bulović said at the 2024 Nano Summit, where nine START.nano companies presented their work. The program provides resources to ease startups over the first two hurdles — finding stakeholders and building a well-developed prototype.

In addition to access to laboratory tools necessary to advance their technologies, START.nano companies receive advice from MIT.nano expert staff, are connected to MIT.nano Consortium companies, gain broader exposure at MIT conferences and community events, and are eligible to join the MIT Startup Exchange.

“MIT.nano has allowed us to push our project to the frontiers of sensing by implementing advanced fabrication techniques using their machinery,” said Uroš Kuzmanović, CEO and founder of Biosens8. “START.nano has surrounded us with exciting peers, a strong support system, and a spotlight to present our work. By taking advantage of all that the program has to offer, BioSens8 is moving faster than we could anywhere else.”

Here are the seven new START.nano participants:

Analog Photonics is developing lidar and optical communications technology using silicon photonics.

Biosens8 is engineering novel devices to enable health ownership. Their research focuses on multiplexed wearables for hormones, neurotransmitters, organ health markers, and drug use that will give insight into the body's health state, opening the door to personalized medicine and proactive, data-driven health decisions.

Casimir, Inc. is working on power-generating nanotechnology that interacts with quantum fields to create a continuous source of power. The team compares their technology to a solar panel that works in the dark or a battery that never needs to be recharged.

Central Spiral focuses on lossless data compression. Their technology allows for the compression of any type of data, including those that are already compressed, reducing data storage and transmission costs, lowering carbon dioxide emissions, and enhancing efficiency.

FabuBlox connects stakeholders across the nanofabrication ecosystem and resolves issues of scattered, unorganized, and isolated fab knowledge. Their cloud-based platform combines a generative process design and simulation interface with GitHub-like repository building capabilities.

Metal Fuels is converting industrial waste aluminum to onsite energy and high-value aluminum/aluminum-oxide powders. Their approach combines the existing mature technologies of molten metal purification and water atomization to develop a self-sustaining reactor that produces alumina of higher value than the input scrap aluminum feedstock, while also collecting the hydrogen off-gas.

PolyJoule, Inc. is an energy storage startup working on conductive polymer battery technology. The team’s goal is a grid battery of the future that is ultra-safe, sustainable, long living, and low-cost.

In addition to the seven startups that are actively using MIT.nano, nine other companies have been invited to join the latest START.nano cohort.

Launched in 2021, START.nano now comprises over 20 companies and eight graduates — ventures that have moved beyond the initial startup stages and some into commercialization. 


Toward sustainable decarbonization of aviation in Latin America

Special report describes targets for advancing technologically feasible and economically viable strategies.


According to the International Energy Agency, aviation accounts for about 2 percent of global carbon dioxide emissions, and aviation emissions are expected to double by mid-century as demand for domestic and international air travel rises. To sharply reduce emissions in alignment with the Paris Agreement’s long-term goal to keep global warming below 1.5 degrees Celsius, the International Air Transport Association (IATA) has set a goal to achieve net-zero carbon emissions by 2050. Which raises the question: Are there technologically feasible and economically viable strategies to reach that goal within the next 25 years?

To begin to address that question, a team of researchers at the MIT Center for Sustainability Science and Strategy (CS3) and the MIT Laboratory for Aviation and the Environment has spent the past year analyzing aviation decarbonization options in Latin America, where air travel is expected to more than triple by 2050 and thereby double today’s aviation-related emissions in the region.

Chief among those options is the development and deployment of sustainable aviation fuel. Currently produced from low- and zero-carbon sources (feedstock) including municipal waste and non-food crops, and requiring practically no alteration of aircraft systems or refueling infrastructure, sustainable aviation fuel (SAF) has the potential to perform just as well as petroleum-based jet fuel with as low as 20 percent of its carbon footprint.

Focused on Brazil, Chile, Colombia, Ecuador, Mexico and Peru, the researchers assessed SAF feedstock availability, the costs of corresponding SAF pathways, and how SAF deployment would likely impact fuel use, prices, emissions, and aviation demand in each country. They also explored how efficiency improvements and market-based mechanisms could help the region to reach decarbonization targets. The team’s findings appear in a CS3 Special Report.

SAF emissions, costs, and sources

Under an ambitious emissions mitigation scenario designed to cap global warming at 1.5 C and raise the rate of SAF use in Latin America to 65 percent by 2050, the researchers projected aviation emissions to be reduced by about 60 percent in 2050 compared to a scenario in which existing climate policies are not strengthened. To achieve net-zero emissions by 2050, other measures would be required, such as improvements in operational and air traffic efficiencies, airplane fleet renewal, alternative forms of propulsion, and carbon offsets and removals.

As of 2024, jet fuel prices in Latin America are around $0.70 per liter. Based on the current availability of feedstocks, the researchers projected SAF costs within the six countries studied to range from $1.11 to $2.86 per liter. They cautioned that increased fuel prices could affect operating costs of the aviation sector and overall aviation demand unless strategies to manage price increases are implemented.

Under the 1.5 C scenario, the total cumulative capital investments required to build new SAF producing plants between 2025 and 2050 were estimated at $204 billion for the six countries (ranging from $5 billion in Ecuador to $84 billion in Brazil). The researchers identified sugarcane- and corn-based ethanol-to-jet fuel, palm oil- and soybean-based hydro-processed esters and fatty acids as the most promising feedstock sources in the near term for SAF production in Latin America.

“Our findings show that SAF offers a significant decarbonization pathway, which must be combined with an economy-wide emissions mitigation policy that uses market-based mechanisms to offset the remaining emissions,” says Sergey Paltsev, lead author of the report, MIT CS3 deputy director, and senior research scientist at the MIT Energy Initiative.

Recommendations

The researchers concluded the report with recommendations for national policymakers and aviation industry leaders in Latin America.

They stressed that government policy and regulatory mechanisms will be needed to create sufficient conditions to attract SAF investments in the region and make SAF commercially viable as the aviation industry decarbonizes operations. Without appropriate policy frameworks, SAF requirements will affect the cost of air travel. For fuel producers, stable, long-term-oriented policies and regulations will be needed to create robust supply chains, build demand for establishing economies of scale, and develop innovative pathways for producing SAF.

Finally, the research team recommended a region-wide collaboration in designing SAF policies. A unified decarbonization strategy among all countries in the region will help ensure competitiveness, economies of scale, and achievement of long-term carbon emissions-reduction goals.

“Regional feedstock availability and costs make Latin America a potential major player in SAF production,” says Angelo Gurgel, a principal research scientist at MIT CS3 and co-author of the study. “SAF requirements, combined with government support mechanisms, will ensure sustainable decarbonization while enhancing the region’s connectivity and the ability of disadvantaged communities to access air transport.”

Financial support for this study was provided by LATAM Airlines and Airbus.


Student Program for Innovation in Science and Engineering is a launching pad toward possibility

Gifted Caribbean high schoolers become SPISE alumni at MIT, and many go on to advanced academic and professional careers.


When you ask MIT students to tell you the story of how they came to Cambridge, you might hear some common themes: a favorite science teacher; an interest in computers that turned into an obsession; a bedroom decorated with NASA posters and glow-in-the-dark stars.

But for a few, the road to MIT starts with an invitation to a special summer program: not a camp with canoes or cabins or campgrounds, but instead one taking place in classrooms and labs with discussions of Arduinos, variable scope and aliasing, and Michaelis-Menten enzyme kinetics. The classroom and labs are in Barbados at the Cave Hill campus of the University of the West Indies, and all the students are gifted Caribbean high schoolers, ages 16-18, who’ve been selected for the extremely competitive Student Program for Innovation in Science and Engineering (SPISE). Their summer will not include much time for leisure or lots of sleep; instead, they’ll be tackling a five-week high-intensity curriculum with courses in university-level calculus, physics, biochemistry, computer programming, electronics and entrepreneurship, including hands-on projects in the last three. For several students currently on campus, SPISE was their gateway to MIT.

“The full story is even bigger,” says Cardinal Warde, MIT professor of electrical engineering and founder of SPISE, who is originally from Barbados in the Caribbean. “Over the past 10 years, exactly 30 of the 245 students in total from the SPISE program have attended MIT as undergrads and/or graduate students.”

While many SPISE alumni have gone on to Harvard University, Stanford University, Caltech, Princeton University, Columbia University, the University of Pennsylvania, and other prestigious schools, the emphasis on science and technology creates a natural pipeline to MIT, whose faculty and instructors volunteered their time and expertise to help Warde design a curriculum that was both challenging and engaging.

Jacob White, the Cecil H. Green Professor in Electrical Engineering, was one of the first of those volunteers. “When Covid forced SPISE to run remotely, Professor Warde felt it was critical to continue having hands-on engineering labs, and sought my help,” White explains. “Kits were cobbled together using EECS-donated microcontroller boards, motors and magnets; Dinah Sah (the SPISE director) got those kits to students spread over half-a-dozen islands.” White and several of his graduate students collaborated to write a curriculum that would give the students enough grounding in fundamentals to empower them to create their own designs.

When SPISE returned to in-person education, Steve Leeb, the Emanuel E. Landsman (1958) Professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Research Laboratory of Electronics (RLE), was inspired by the challenge of teaching electronics remotely.

“SPISE is exactly the kind of opportunity we're looking for in the RLE educational outreach programs: bright, enthusiastic young folks who would benefit from new perspectives on science and engineering — a community of folks where we can bring new perspectives, share energy and excitement, and, ideally, make lifelong connections to our academic programs here at MIT. It's a natural fit that benefits us all,” says Leeb, who, together with his graduate students, adapted the portable “take-home” Electronics FIRST curriculum pioneered at MIT and taught in course 6.2030. “The Electronics FIRST exercises and lectures are designed to connect electronic circuit techniques — digital gates, microcontrollers, and other electronics technologies — that are recognizable as elements of commercial products,” says Leeb. “So the projects naturally engage students in building with components that have a connection to commercial products and product ideas. This flows naturally into a 'final project' that the students create in SPISE, a product of their own conception, for example a music synthesizer.”

Crucially, the curriculum isn’t simplified for the high school students. “We adapted the projects to fit the different program length — SPISE is shorter than a full MIT term,” says Leeb. “We did not reduce the rigor or challenge of the activities, and, in fact, have brought new ideas from the SPISE students back to campus to improve 6.2030.”

Departments beyond EECS pitched in to develop SPISE, with major teaching contributions coming from the Department of Physics, where Lecturer Alex Shvonski, Senior Technical Instructor Caleb Bonyun, and Senior Technical Instructor Joshua Wolfe, who also manages the Physics Instructional Resource Lab, collaborated on developing hands-on projects and on the teaching for both Physics I and Calculus I courses. Additional supplies came from the MIT Sea Grant Program, which supplied underwater robots to SPISE for six consecutive years before the Covid-19 pandemic. (In the wake of the pandemic, the program pivoted to focus on embedded systems.)

But the core inspiration for SPISE doesn’t come from an academic department at all. “SPISE was based on a model that’s proven to work: MITES,” explains Ebony Hearn, executive director of the MIT Introduction to Technology, Engineering, and Science. “The program, which offers access and opportunity to intensive courses in science, technology, engineering, and math for talented high school students in every zip code, has helped thousands of students for nearly 50 years gain admission to top universities and pursue successful careers in STEM while being immersed in a community of caring mentors and leaders in the profession.”

The shared DNA of the two programs is no coincidence. Cardinal Warde has been the faculty director of MITES for the past 27 years, and took the lessons of five decades of the transformative pre-college experience into account when envisioning an equivalent program in the Caribbean. Much like MITES, SPISE encourages its participants to develop a sense of belonging in STEM and to picture the possibilities at top schools; over the years, the program has added sessions with admissions officers from MIT, Columbia, Princeton, and U Penn. “SPISE changed my perspective of myself,” says Chenise Harper, a first-year student at MIT who is currently interested in Course 6-5 (Electrical Engineering With Computing). “It gave me the confidence to apply to universities I thought were completely out of my reach.”

Harper’s trajectory is exactly what the designers of the program hoped for. “We have been very successful with the shorter-term goal of increasing the numbers of Caribbean students pursuing advanced degrees in STEM and grooming the next generation of STEM and business leaders in the Region,” says Dinah Sah ’81, director of the program (and wife of Cardinal Warde). “We have SPISE graduates who have, or are currently pursuing, graduate degrees at the top universities around the world, including (but not limited to) MIT, Stanford, Harvard, Princeton, Dartmouth, Yale, Johns Hopkins, Carnegie Mellon, and Oxford, including a Rhodes Scholar. We fully believe that SPISE graduates represent part of the next generation of STEM and business leaders in the Caribbean and that SPISE has played a significant role in their trajectories.”

Notably, the SPISE program also includes an element of entrepreneurship, encouraging students to envision tech-based solutions to problems in their own backyards. Keonna Simon, who hails from St. Vincent and the Grenadines, developed a business pitch with other SPISE participants for an innovative “reverse vending machine.” “In the Caribbean, tourism is a key contributor to the economy, but littering is an issue that detracts from the beauty of our islands and harms our abundant marine life,” explains Simon, now a junior majoring in Course 6-7 (Computer Science and Molecular Biology). “Our project aimed to tackle this by placing reverse vending machines in heavily polluted areas. People could deposit recyclable plastic bottles, and the machine would convert the weight of the plastic into cash rewards on a card, redeemable for discounts at supermarkets.”

One SPISE alum, Quilee Simeon, decided to work on a renewable energy system at SPISE as a way of addressing global warming’s effects on his homeland of St. Lucia. “I chose to work on the renewable energy project, where we designed and built a prototype wind turbine using low-resource materials like PVC pipes. It was exciting because I thought it had real applications to developing island states like ours, where we don’t have an abundance of the manufacturing materials used in larger countries, and we are disproportionately affected by climate change,” says Simeon. “So building cheap and effective renewable energy resources was, in my view, an important problem to tackle.”

As Simeon worked on his prototype turbine and tackled late nights with his new classmates at SPISE, he realized how different the experience was from his prior schooling. For most students, the summer program is a first time away from home — but for all, it is the first exposure to the firehose-like experience of tackling multiple college-level courses with simultaneous assignments and problem sets. “It was honestly a primer to MIT,” says Simeon. “They not only challenged us with rigorous math and science, but also provided guidance on college applications and explained the vast opportunities a STEM degree could unlock. SPISE changed my view of myself as a scholar, though probably in an unexpected way. I thought I was smart before attending SPISE, but I realized how much I didn’t know and how many things were lacking or wrong with the style of education I had grown used to (rote learning, memorization, etc.). SPISE made me realize that being a scholar isn’t just about consuming knowledge — it's about creating and applying it.”

The difficulty of the SPISE curriculum is a deliberate choice, made to aid students in preparing for higher education, confirms Sah. “When we started SPISE in 2012, [we decided] to focus on teaching the fundamentals in each of the courses … The homework problems and the quizzes would require the application of these fundamentals to solving challenging problems. This is in distinct contrast to rote memorization of facts, which is the method of learning these students had generally been exposed to. So, yes, this was in fact a very deliberate choice, and a critical change that we wanted to bring to these very high-potential students in their approach to learning and thinking.”

MIT’s emphasis on creative, outside-the-box thinking was just the beginning of the culture shocks that awaited SPISE students who made the transition to an American university from the summer program. Many are surprised by the American students’ habit of referring to their professors by first name, which would be considered disrespectful at home. Conversely, small daily interactions in the Northeast can feel remote and chilly to Caribbean students. “Moving from a small island with just around 100,000 people to Harvard was initially jarring,” says Gerard Porter, who participated in SPISE in 2017 before attending Harvard for his undergraduate degree. “In my first year, I was often met with puzzled stares when I greeted strangers in an elevator or students in my dorm whom I did not know personally. I quickly learned that politeness meant something very different in the Northeastern United States compared to the warm Caribbean.”

Other SPISE alumni report experiencing similar chilliness — literally. Quilee Simeon’s first winter in Cambridge was jarring. “I knew about the concept of winter and was told to expect cold weather, but I never actually knew how cold 'cold' was until I felt it myself,” says Simeon. “That was terrible!” Ronaldo Lee, a first-year from Jamaica interested in computer science and electrical engineering, found warmth among fellow SPISE alumni here at MIT. “Nothing beats the tropical climate! But honestly, the community at MIT has been amazing. I was surprised by how quickly I felt comfortable, thanks to the incredible people around me. The Black and Caribbean community especially made me feel at home; I’ve met some truly fascinating, driven, and like-minded people who’ve become close friends. One of the biggest surprises was discovering how similar we all are, despite our different cultural backgrounds. Everyone here is incredibly smart and shares a common drive to make the world a better place and pursue exciting STEM projects.”

The common drive to improve the world through STEM is evident in the paths the SPISE alumni have taken.

Gerard Porter, now a graduate student in the Kiessling Group within the Department of Chemistry at MIT, conducts research “focusing on unraveling the biological roles of glycans that cover all cells on Earth. I work on developing chemical tools to study critical regions of the bacterial cell wall that have been relatively unexplored.” Porter hopes that learning more about the molecular mechanisms at play within cell walls will open the doorway to the development of novel antibiotics.

Quilee Simeon has discovered an affinity for computational neuroscience, and is currently developing a computational model of the C. elegans nervous system. “My hope is that this model organism will prove fruitful for computational neuroscience research as it has for biology,” says Simeon, who plans to work in industry after graduation.

Computational biology has also captured the attention of junior Keonna Simon, who is excited to take courses such as 6.8711 (Computational Systems Biology: Deep Learning in the Life Sciences), saying, “This nexus holds a lot of potential for solving complex biological problems through computational methods, and I’m eager to dive deeper into that space!”

Chenise Harper found SPISE’s emphasis on bringing tech entrepreneurship home inspiring. “Living in the Caribbean has stimulated a dream of a future where robots are partners in rebuilding our community after natural disasters,” she says. “There are also so many issues that I would like to one day contribute to, like climate change issues and even cybersecurity. Electrical Engineering with Computing is the kind of major that will allow me to at least touch on the areas I am interested in, and allow me to explore both software and hardware concepts that excite me and will inspire me to develop a concrete way to give back to the community that has lifted me up to where I am now.”

Ronaldo Lee also found his academic home in computer science and electrical engineering, fabricating and characterizing perovskite solar cells in his Undergraduate Research Opportunities Program project and building a small offshore wind turbine for the Collegiate Wind Competition as part of the MIT WIND team. “I’d love to focus on the energy sector, particularly in improving the grid system and integrating renewable energy sources to ensure more reliable access,” says Lee. “I want to help make energy access more sustainable and inclusive, driving development for the region as a whole.”

Lee’s plans are perfectly in line with the long-term goals set by Warde and Sah as they planned SPISE. “Diversifying the economies of the region and raising the standard of living by stimulating more technology-based entrepreneurship will take time,” says Sah. “We are optimistic that our SPISE graduates will, with time, change the world to make it a better place for all, including the Caribbean.”


Modeling complex behavior with a simple organism

By studying the roundworm C. elegans, neuroscientist Steven Flavell explores how neural circuits give rise to behavior.


The roundworm C. elegans is a simple animal whose nervous system has exactly 302 neurons. Each of the connections between those neurons has been comprehensively mapped, allowing researchers to study how they work together to generate the animal’s different behaviors.

Steven Flavell, an MIT associate professor of brain and cognitive sciences and investigator with The Picower Institute for Learning and Memory at MIT and the Howard Hughes Medical Institute, uses the worm as a model to study motivated behaviors such as feeding and navigation, in hopes of shedding light on the fundamental mechanisms that may also determine how similar behaviors are controlled in other animals.

In recent studies, Flavell’s lab has uncovered neural mechanisms underlying adaptive changes in the worms’ feeding behavior, and his lab has also mapped how the activity of each neuron in the animal’s nervous system affects the worms’ different behaviors.

Such studies could help researchers gain insight into how brain activity generates behavior in humans. “It is our aim to identify molecular and neural circuit mechanisms that may generalize across organisms,” he says, noting that many fundamental biological discoveries, including those related to programmed cell death, microRNA, and RNA interference, were first made in C. elegans.

“Our lab has mostly studied motivated state-dependent behaviors, like feeding and navigation. The machinery that’s being used to control these states in C. elegans — for example, neuromodulators — are actually the same as in humans. These pathways are evolutionarily ancient,” he says.

Drawn to the lab

Born in London to an English father and a Dutch mother, Flavell came to the United States in 1982 at the age of 2, when his father became chief scientific officer at Biogen. The family lived in Sudbury, Massachusetts, and his mother worked as a computer programmer and math teacher. His father later became a professor of immunology at Yale University.

Though Flavell grew up in a science family, he thought about majoring in English when he arrived at Oberlin College. A musician as well, Flavell took jazz guitar classes at Oberlin’s conservatory, and he also plays the piano and the saxophone. However, taking classes in psychology and physiology led him to discover that the field that most captivated him was neuroscience.

“I was immediately sold on neuroscience. It combined the rigor of the biological sciences with deep questions from psychology,” he says.

While in college, Flavell worked on a summer research project related to Alzheimer’s disease, in a lab at Case Western Reserve University. He then continued the project, which involved analyzing post-mortem Alzheimer’s tissue, during his senior year at Oberlin.

“My earliest research revolved around mechanisms of disease. While my research interests have evolved since then, my earliest research experiences were the ones that really got me hooked on working at the bench: running experiments, looking at brand new results, and trying to understand what they mean,” he says.

By the end of college, Flavell was a self-described lab rat: “I just love being in the lab.” He applied to graduate school and ended up going to Harvard Medical School for a PhD in neuroscience. Working with Michael Greenberg, Flavell studied how sensory experience and resulting neural activity shape brain development. In particular, he focused on a family of gene regulators called MEF2, which play important roles in neuronal development and synaptic plasticity.

All of that work was done using mouse models, but Flavell transitioned to studying C. elegans during a postdoctoral fellowship working with Cori Bargmann at Rockefeller University. He was interested in studying how neural circuits control behavior, which seemed to be more feasible in simpler animal models.

“Studying how neurons across the brain govern behavior felt like it would be nearly intractable in a large brain — to understand all the nuts and bolts of how neurons interact with each other and ultimately generate behavior seemed daunting,” he says. “But I quickly became excited about studying this in C. elegans because at the time it was still the only animal with a full blueprint of its brain: a map of every brain cell and how they are all wired up together.”

That wiring diagram includes about 7,000 synapses in the entire nervous system. By comparison, a single human neuron may form more than 10,000 synapses. “Relative to those larger systems, the C. elegans nervous system is mind-bogglingly simple,” Flavell says.

Despite their much simpler organization, roundworms can execute complex behaviors such as feeding, locomotion, and egg-laying. They even sleep, form memories, and find suitable mating partners. The neuromodulators and cellular machinery that give rise to those behaviors are similar to those found in humans and other mammals.

“C. elegans has a fairly well-defined, smallish set of behaviors, which makes it attractive for research. You can really measure almost everything that the animal is doing and study it,” Flavell says.

How behavior arises

Early in his career, Flavell’s work on C. elegans revealed the neural mechanisms that underlie the animal’s stable behavioral states. When worms are foraging for food, they alternate between stably exploring the environment and pausing to feed. “The transition rates between those states really depend on all these cues in the environment. How good is the food environment? How hungry are they? Are there smells indicating a better nearby food source? The animal integrates all of those things and then adjusts their foraging strategy,” Flavell says.

These stable behavioral states are controlled by neuromodulators like serotonin. By studying serotonergic regulation of the worm’s behavioral states, Flavell’s lab has been able to uncover how this important system is organized. In a recent study, Flavell and his colleagues published an “atlas” of the C. elegans serotonin system. They identified every neuron that produces serotonin, every neuron that has serotonin receptors, and how brain activity and behavior change across the animal as serotonin is released.

“Our studies of how the serotonin system works to control behavior have already revealed basic aspects of serotonin signaling that we think ought to generalize all the way up to mammals,” Flavell says. “By studying the way that the brain implements these long-lasting states, we can tap into these basic features of neuronal function. With the resolution that you can get studying specific C. elegans neurons and the way that they implement behavior, we can uncover fundamental features of the way that neurons act.”

In parallel, Flavell’s lab has also been mapping out how neurons across the C. elegans brain control different aspects of behavior. In a 2023 study, Flavell’s lab mapped how changes in brain-wide activity relate to behavior. His lab uses special microscopes that can move along with the worms as they explore, allowing them to simultaneously track every behavior and measure the activity of every neuron in the brain. Using these data, the researchers created computational models that can accurately capture the relationship between brain activity and behavior.

This type of research requires expertise in many areas, Flavell says. When looking for faculty jobs, he hoped to find a place where he could collaborate with researchers working in different fields of neuroscience, as well as scientists and engineers from other departments.

“Being at MIT has allowed my lab to be much more multidisciplinary than it could have been elsewhere,” he says. “My lab members have had undergrad degrees in physics, math, computer science, biology, neuroscience, and we use tools from all of those disciplines. We engineer microscopes, we build computational models, we come up with molecular tricks to perturb neurons in the C. elegans nervous system. And I think being able to deploy all those kinds of tools leads to exciting research outcomes.”


MIT student encourages all learners to indulge their curiosity with MIT Open Learning's MITx

Junior Shreya Mogulothu says taking an MITx class as a high school student opened her eyes to new possibilities.


Shreya Mogulothu is naturally curious. As a high school student in New Jersey, she was interested in mathematics and theoretical computer science (TCS). So, when her curiosity compelled her to learn more, she turned to MIT Open Learning’s online resources and completed the Paradox and Infinity course on MITx Online. 

“Coming from a math and TCS background, the idea of pushing against the limits of assumptions was really interesting,” says Mogulothu, now a junior at MIT. “I mean, who wouldn’t want to learn more about infinity?”

The class, taught by Agustín Rayo, professor of philosophy and the current dean of the School of Humanities, Arts, and Social Sciences, and David Balcarras, a former instructor in philosophy and fellow in the Digital Learning Lab at Open Learning, explores the intersection of math and philosophy and guides learners through thinking about paradoxes and open-ended problems, as well as the boundaries of theorizing and the limits of standard mathematical tools.

“We talked about taking regular assumptions about numbers and objects and pushing them to extremes,” Mogulothu says. “For example, what contradictions arise when you talk about an infinite set of things, like the infinite hats paradox?” 

The infinite hats paradox, also known as Bacon’s Puzzle, involves an infinite line of people, each wearing one of two colors of hats. The puzzle posits that each individual can see only the hat of the person in front of them and must guess the color of their own hat. The puzzle challenges students to determine whether there is a strategy that guarantees the fewest possible incorrect guesses, and to consider how that strategy might change if the number of people is finite. Mogulothu was thrilled that a class like this was available to her even though she wasn’t yet affiliated with MIT.
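
In the widely taught finite version of the puzzle, under the common assumption that each person sees every hat in front of them and hears all earlier guesses, a parity strategy guarantees at most one wrong answer. The short Python sketch below is only an illustration of that classic strategy; it is not course material, and the two-color encoding and player count are arbitrary choices.

```python
import random

def hat_game(n, trials=1000):
    """Simulate the parity strategy for n people; return the worst-case number of wrong guesses."""
    worst = 0
    for _ in range(trials):
        hats = [random.randint(0, 1) for _ in range(n)]   # 0 and 1 stand in for the two colors
        guesses = [None] * n
        # The person at the back (index n-1) sees everyone ahead and announces
        # the parity of those hats. This announcement doubles as their guess,
        # and it is the only guess that can be wrong.
        guesses[n - 1] = sum(hats[:n - 1]) % 2
        announced = guesses[n - 1]
        revealed = 0   # running parity of the hats already deduced aloud
        for i in range(n - 2, -1, -1):
            ahead = sum(hats[:i]) % 2                     # person i sees hats 0 .. i-1
            guesses[i] = (announced - ahead - revealed) % 2
            revealed = (revealed + guesses[i]) % 2
        worst = max(worst, sum(g != h for g, h in zip(guesses, hats)))
    return worst

print("Worst case over 1,000 trials with 20 people:", hat_game(20))  # at most 1
```

The person at the back announces the parity of the hats ahead, and everyone else can then deduce their own hat exactly, so only that opening announcement can be wrong.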

“My MITx experience was one of the reasons I came to MIT,” she says. “I really liked the course, and I was happy it was shared with people like me, who didn’t even go to the school. I thought that a place that encouraged even people outside of campus to learn like that would be a pretty good place to study.” 

Looking back at the course, Balcarras says, “Shreya may have been the most impressive student in our online community of approximately 3,900 learners and 100 verified learners. I cannot single out another student whose performance rivaled hers.”

Because of her excellent performance, Mogulothu was invited to submit her work to the 2021 MITx Philosophy Awards. She won. In fact, Balcarras remembers, both papers she wrote for the course would have won. They demonstrated, he says, “an unusually high degree of precision, formal acumen, and philosophical subtlety for a high school student.”

Completing the course and winning the award was rewarding, Mogulothu says. It motivated her to keep exploring new things as a high school student, and then as a new student enrolled at MIT.

She came to college thinking she would declare a major in math or computer science. But when she looked at the courses she was most interested in, she realized she should pursue a physics major. 

She has enjoyed the courses in her major, especially class STS.042J/8.225J (Einstein, Oppenheimer, Feynman: Physics in the 20th Century), taught by David Kaiser, the Germeshausen Professor of the History of Science and professor of physics. She took the course on campus, but it is also available on Open Learning’s MIT OpenCourseWare. As a student, she continues to use MIT Open Learning resources to check out courses and review syllabi as she plans her coursework. 

In summer 2024, Mogulothu did research on gravitational wave detection at PIER, the partnership between research center DESY and the University of Hamburg, in Hamburg, Germany. She wants to pursue a PhD in physics to keep researching, expanding her mind, and indulging the curiosity that led her to MITx in the first place. She encourages all learners to feel comfortable and confident trying something entirely new. 

“I went into the Paradox and Infinity course thinking, ‘yeah, math is cool, computer science is cool,’” she says. “But, actually taking the course and learning about things you don’t even expect to exist is really powerful. It increases your curiosity and is super rewarding to stick with something and realize how much you can learn and grow.”  


More than an academic advisor

MIT professors Iain Stewart and Roberto Fernandez are “Committed to Caring”


Advisors are meant to guide students academically, supporting their research and career objectives. For MIT graduate students, the Committed to Caring program recognizes those who go above and beyond.

Professors Iain Stewart and Roberto Fernandez are two of the 2023-25 Committed to Caring cohort, supporting their students through self-doubt, developing a welcoming environment, and serving as a friend.

Iain Stewart: Supportive, equitable, and inclusive

Iain Stewart is the Otto and Jane Morningstar Professor of Science and former director of the Center for Theoretical Physics (CTP). His research interests center around nuclear and particle physics, where he develops and applies effective field theories to understand interactions between elementary particles and particularly strong interactions described by quantum chromodynamics.

Stewart shows faith in his students’ abilities even when they doubt themselves. According to his nominators, the field of physics, like many areas of intellectual pursuit, can attract a wide range of personalities, including those who are highly confident as well as those who may grapple with self-doubt. He explains concepts in a down-to-earth manner and does not make his students feel less than they are.

For his students, Stewart’s research group comes as a refreshing change. Stewart emphasizes that graduate school is for learning, and that one is not expected to know everything from the outset.

Stewart shows a great level of empathy and emotional support for his students. For example, one of the nominators recounted a story about preparing for their oral qualification exam. The student had temporarily suspended research, and another faculty member made a disparaging comment about the student’s grasp of their research. The student approached Stewart in distress.

"As your advisor,” Stewart reassured them, “I can tell you confidently that you know your research and you are doing well, and it’s totally OK to put it off for a while to prepare for the qual."

Stewart’s words gave the student a sense of relief and validation, reminding them that progress is a journey, not a race, and that taking time to prepare thoughtfully was both wise and necessary.

Always emphasizing positivity in his feedback, Stewart reminds advisees of their achievements and progress, helping them develop a more optimistic mindset. His mentorship style recognizes individual student needs, a trait his students find uncommon. His research group flourishes as a result, and many of his graduate students and postdocs have gone on to great success.

During his six years as director, Stewart made significant contributions to the CTP, improving its culture and diversity through strong and inclusive leadership. In particular, a noteworthy number of women have joined the CTP.

In his own research group, a large number of international and female students have found a place, which is uncommon for groups in theoretical physics. Currently, three out of seven group members are female in a field where fewer than 10 percent are women.

Stewart’s nominators believe that, given the number of women he has mentored over his career, he is making an outsized personal contribution to improving diversity in his field. They say he supports students from diverse backgrounds and financially supports and encourages the participation of marginalized groups.

Roberto Fernandez: Professor and friend

Roberto Fernandez is the William F. Pounds Professor of Organization Studies at the MIT Sloan School of Management as well as the co-director of the Economic Sociology PhD Program. His research focuses on organizations, social networks, and race and gender stratification. He has extensive experience doing field research in organizations, and he currently focuses on the organizational processes surrounding the hiring of new talent.

Fernandez describes himself as a “full-service professor.” He tries to attend to differing needs and circumstances of students and the situations they find themselves in, offering advice and consolation.

Fernandez is very understanding of his students and is happy to speak with them about academic and personal problems alike. He acknowledges that each student comes from a different background with their own experiences, and he tries to accommodate each one as best he can.

He advises in a way that respects a student’s personal life while still expecting a reasonable amount of work: enough to motivate the student, allow them to excel, and hold them to a high standard.

Fernandez says, “It is just my sense of duty to pay forward how my mentors treated me. I feel like I would dishonor their work if I were not to pass it on.”

A nominator shared that Fernandez serves as both a professor and a friend. He has gone out of his way to check in and chat with them. They said that Fernandez is the only professor who has taken the time to truly get to know their story, and Fernandez speaks to students like an equal.

The nominator noted that many people at MIT enjoy a high level of privilege. Despite the differences in their circumstances, however, the nominator feels comfortable talking to Fernandez.

Happily, the professor continued to touch base with the nominator long after their class had finished, and he is the one person who really made them feel like MIT was their home. This experience stood out as unique for the nominator, and played a large role in their experience.

In addition to providing genuine connections, Fernandez advises incoming graduate students about the need for a mindset shift. Graduate school is not like undergrad. Being an excellent student is necessary, but it is not sufficient to succeed in a PhD program. Excellent undergraduate students are consumers of knowledge; on the other hand, excellent graduate students are producers of knowledge.

The nominator enthused, “[Fernandez] really went above and beyond, and this means a lot.”


Three MIT students named 2026 Schwarzman Scholars

Yutao Gong, Brandon Man, and Andrii Zahorodnii will spend 2025-26 at Tsinghua University in China studying global affairs.


Three MIT students — Yutao Gong, Brandon Man, and Andrii Zahorodnii — have been awarded 2025 Schwarzman Scholarships and will join the program’s 10th cohort to pursue a master’s degree in global affairs at Tsinghua University in Beijing, China.

The MIT students were selected from a pool of over 5,000 applicants. This year’s class of 150 scholars represents 38 countries and 105 universities from around the world.

The Schwarzman Scholars program aims to develop leadership skills and deepen understanding of China’s changing role in the world. The fully funded one-year master’s program at Tsinghua University emphasizes leadership, global affairs, and China. Scholars also gain exposure to China through mentoring, internships, and experiential learning.

MIT’s Schwarzman Scholar applicants receive guidance and mentorship from the distinguished fellowships team in Career Advising and Professional Development and the Presidential Committee on Distinguished Fellowships.

Yutao Gong will graduate this spring from the Leaders for Global Operations program at the MIT Sloan School of Management, earning a dual MBA and an MS degree in civil and environmental engineering with a focus on manufacturing and operations. Gong, who hails from Shanghai, China, has academic, work, and social engagement experiences in China, the United States, Jordan, and Denmark. She was previously a consultant at Boston Consulting Group working on manufacturing, agriculture, sustainability, and renewable energy-related projects, and spent two years in Chicago and one year in Greater China as a global ambassador. Gong graduated magna cum laude from Duke University, where she organized the Duke China-U.S. Summit, with double majors in environmental science and statistics.

Brandon Man, from Canada and Hong Kong, is a master’s student in the Department of Mechanical Engineering at MIT, where he studies generative artificial intelligence (genAI) for engineering design. Previously, he graduated from Cornell University magna cum laude with honors in computer science. With a wealth of experience in robotics — from assistive robots to next-generation spacesuits for NASA to Tencent’s robot dog, Max — he is now a co-founder of Sequestor, a genAI-powered data aggregation platform that enables carbon credit investors to perform faster due diligence. His goal is to bridge the best practices of the Eastern and Western tech worlds.

Andrii Zahorodnii, from Ukraine, will graduate this spring with a bachelor of science and a master of engineering degree in computer science and cognitive sciences. An engineer as well as a neuroscientist, he has conducted research at MIT with Professor Guangyu Robert Yang’s MetaConscious Group and the Fiete Lab. Zahorodnii is passionate about using AI to uncover insights into human cognition, leading to more-informed, empathetic, and effective global decision-making and policy. Besides driving the exchange of ideas as a TEDxMIT organizer, he strives to empower and inspire future leaders internationally and in Ukraine through the Ukraine Leadership and Technology Academy he founded.


How one brain circuit encodes memories of both places and events

A new computational model explains how neurons linked to spatial navigation can also help store episodic memories.


Nearly 50 years ago, neuroscientists discovered cells within the brain’s hippocampus that store memories of specific locations. These cells also play an important role in storing memories of events, known as episodic memories. While the mechanism of how place cells encode spatial memory has been well-characterized, it has remained a puzzle how they encode episodic memories.

A new model developed by MIT researchers explains how those place cells can be recruited to form episodic memories, even when there’s no spatial component. According to this model, place cells, along with grid cells found in the entorhinal cortex, act as a scaffold that can be used to anchor memories as a linked series.

“This model is a first-draft model of the entorhinal-hippocampal episodic memory circuit. It’s a foundation to build on to understand the nature of episodic memory. That’s the thing I’m really excited about,” says Ila Fiete, a professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the new study.

The model accurately replicates several features of biological memory systems, including the large storage capacity, gradual degradation of older memories, and the ability of people who compete in memory competitions to store enormous amounts of information in “memory palaces.”

MIT Research Scientist Sarthak Chandra and Sugandha Sharma PhD ’24 are the lead authors of the study, which appears today in Nature. Rishidev Chaudhuri, an assistant professor at the University of California at Davis, is also an author of the paper.

An index of memories

To encode spatial memory, place cells in the hippocampus work closely with grid cells — a special type of neuron that fires at many different locations, arranged geometrically in a regular pattern of repeating triangles. Together, a population of grid cells forms a lattice of triangles representing a physical space.
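
A standard idealized picture of this tiling (a textbook construction, not the study’s model) represents a single grid cell’s firing rate as the sum of three plane waves whose directions are 60 degrees apart; the peaks of that sum fall on the vertices of a triangular lattice. The Python sketch below uses arbitrary spacing and threshold values to illustrate the idea.

```python
import numpy as np

spacing = 1.0                                  # desired distance between firing fields (arbitrary units)
k = 4 * np.pi / (np.sqrt(3) * spacing)         # wave number that yields that spacing
angles = np.deg2rad([0, 60, 120])              # three wave directions, 60 degrees apart

# Firing rate over a 3 x 3 patch of space: sum of three cosines, rectified.
xs, ys = np.meshgrid(np.linspace(0, 3, 300), np.linspace(0, 3, 300))
rate = sum(np.cos(k * (np.cos(a) * xs + np.sin(a) * ys)) for a in angles)
rate = np.maximum(rate, 0)

# The peaks of `rate` sit at the vertices of a triangular lattice; plotting the
# map (for example with matplotlib's imshow) shows the repeating firing fields.
fields = rate > 0.8 * rate.max()
print(f"Fraction of space inside firing fields: {fields.mean():.2f}")
```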

In addition to helping us recall places where we’ve been, these hippocampal-entorhinal circuits also help us navigate new locations. From human patients, it’s known that these circuits are also critical for forming episodic memories, which might have a spatial component but mainly consist of events, such as how you celebrated your last birthday or what you had for lunch yesterday.

“The same hippocampal and entorhinal circuits are used not just for spatial memory, but also for general episodic memory,” Fiete says. “The question you can ask is what is the connection between spatial and episodic memory that makes them live in the same circuit?”

Two hypotheses have been proposed to account for this overlap in function. One is that the circuit is specialized to store spatial memories because those types of memories — remembering where food was located or where predators were seen — are important to survival. Under this hypothesis, this circuit encodes episodic memories as a byproduct of spatial memory.

An alternative hypothesis suggests that the circuit is specialized to store episodic memories, but also encodes spatial memory because location is one aspect of many episodic memories.

In this work, Fiete and her colleagues proposed a third option: that the peculiar tiling structure of grid cells and their interactions with the hippocampus are equally important for both types of memory — episodic and spatial. To develop their new model, they built on computational models that her lab has been developing over the past decade, which mimic how grid cells encode spatial information.

“We reached the point where I felt like we understood on some level the mechanisms of the grid cell circuit, so it felt like the time to try to understand the interactions between the grid cells and the larger circuit that includes the hippocampus,” Fiete says.

In the new model, the researchers hypothesized that grid cells interacting with hippocampal cells can act as a scaffold for storing either spatial or episodic memory. Each activation pattern within the grid defines a “well,” and these wells are spaced out at regular intervals. The wells don’t store the content of a specific memory, but each one acts as a pointer to a specific memory, which is stored in the synapses between the hippocampus and the sensory cortex.

When the memory is triggered later from fragmentary pieces, grid and hippocampal cell interactions drive the circuit state into the nearest well, and the state at the bottom of the well connects to the appropriate part of the sensory cortex to fill in the details of the memory. The sensory cortex is much larger than the hippocampus and can store vast amounts of memory.

“Conceptually, we can think about the hippocampus as a pointer network. It’s like an index that can be pattern-completed from a partial input, and that index then points toward sensory cortex, where those inputs were experienced in the first place,” Fiete says. “The scaffold doesn’t contain the content, it only contains this index of abstract scaffold states.”

Furthermore, events that occur in sequence can be linked together: Each well in the grid cell-hippocampal network efficiently stores the information that is needed to activate the next well, allowing memories to be recalled in the right order.

Modeling memory cliffs and palaces

The researchers’ new model replicates several memory-related phenomena much more accurately than existing models that are based on Hopfield networks — a type of neural network that can store and recall patterns.

While Hopfield networks offer insight into how memories can be formed by strengthening connections between neurons, they don’t perfectly model how biological memory works. In Hopfield models, every memory is recalled in perfect detail until capacity is reached. At that point, no new memories can form, and worse, attempting to add more memories erases all prior ones. This “memory cliff” doesn’t accurately mimic what happens in the biological brain, which tends to gradually forget the details of older memories while new ones are continually added.
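
To make the contrast concrete, the sketch below builds a minimal textbook Hopfield network (not code from the paper; the network size, cue corruption, and pattern counts are arbitrary choices). Recall of corrupted cues stays near-perfect while the number of stored random patterns is well below the network’s capacity, roughly 0.14 times the number of neurons, and then degrades sharply once that capacity is exceeded.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                                        # number of binary (+1/-1) neurons

def recall_error(n_patterns, flips=20, steps=20):
    """Store random patterns with a Hebbian rule, then recall them from corrupted cues."""
    patterns = rng.choice([-1, 1], size=(n_patterns, N))
    W = patterns.T @ patterns / N              # Hebbian weight matrix
    np.fill_diagonal(W, 0)
    errors = []
    for p in patterns[:10]:                    # probe up to 10 of the stored patterns
        state = p.copy().astype(float)
        idx = rng.choice(N, flips, replace=False)
        state[idx] *= -1                       # corrupt the cue slightly
        for _ in range(steps):                 # synchronous updates toward a fixed point
            state = np.sign(W @ state + 1e-9)
        errors.append(np.mean(state != p))
    return float(np.mean(errors))

for m in (5, 20, 40, 80):
    print(f"{m:>2} stored patterns: recall error ~ {recall_error(m):.3f}")
```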

The new MIT model captures findings from decades of recordings of grid and hippocampal cells in rodents made as the animals explore and forage in various environments. It also helps to explain the underlying mechanisms of a memorization strategy known as a memory palace. One task in memory competitions is to memorize the shuffled sequence of cards in one or several decks. Competitors usually do this by assigning each card to a particular spot in a memory palace, such as a memory of a childhood home or another environment they know well. When they need to recall the cards, they mentally stroll through the house, visualizing each card in its spot as they go along. Counterintuitively, adding the memory burden of associating cards with locations makes recall stronger and more reliable.

The MIT team’s computational model was able to perform such tasks very well, suggesting that memory palaces take advantage of the memory circuit’s own strategy of associating inputs with a scaffold in the hippocampus, but one level down: Long-acquired memories reconstructed in the larger sensory cortex can now be pressed into service as a scaffold for new memories. This allows for the storage and recall of many more items in a sequence than would otherwise be possible.

The researchers now plan to build on their model to explore how episodic memories could be converted into cortical “semantic” memory, or the memory of facts dissociated from the specific context in which they were acquired (for example, Paris is the capital of France), how episodes are defined, and how brain-like memory models could be integrated into modern machine learning.

The research was funded by the U.S. Office of Naval Research, the National Science Foundation under the Robust Intelligence program, the ARO-MURI award, the Simons Foundation, and the K. Lisa Yang ICoN Center.


Fast control methods enable record-setting fidelity in superconducting qubit

The advance holds the promise to reduce error-correction resource overhead.


Quantum computing promises to solve complex problems exponentially faster than a classical computer, by using the principles of quantum mechanics to encode and manipulate information in quantum bits (qubits).

Qubits are the building blocks of a quantum computer. One challenge to scaling, however, is that qubits are highly sensitive to background noise and control imperfections, which introduce errors into the quantum operations and ultimately limit the complexity and duration of a quantum algorithm. To address this, researchers at MIT and around the world have focused on improving qubit performance.

In new work, using a superconducting qubit called fluxonium, MIT researchers in the Department of Physics, the Research Laboratory of Electronics (RLE), and the Department of Electrical Engineering and Computer Science (EECS) developed two new control techniques to achieve a world-record single-qubit fidelity of 99.998 percent. This result complements then-MIT researcher Leon Ding’s demonstration last year of a 99.92 percent two-qubit gate fidelity.

The paper’s senior authors are David Rower PhD ’24, a recent physics postdoc in MIT’s Engineering Quantum Systems (EQuS) group and now a research scientist at the Google Quantum AI laboratory; Leon Ding PhD ’23 from EQuS, now leading the Calibration team at Atlantic Quantum; and William D. Oliver, the Henry Ellis Warren Professor of EECS and professor of physics, leader of EQuS, director of the Center for Quantum Engineering, and RLE associate director. The paper recently appeared in the journal PRX Quantum.

Decoherence and counter-rotating errors

A major challenge with quantum computation is decoherence, a process by which qubits lose their quantum information. For platforms such as superconducting qubits, decoherence stands in the way of realizing higher-fidelity quantum gates.

Quantum computers need to achieve high gate fidelities in order to implement sustained computation through protocols like quantum error correction. The higher the gate fidelity, the easier it is to realize practical quantum computing.

MIT researchers are developing techniques to make quantum gates, the basic operations of a quantum computer, as fast as possible in order to reduce the impact of decoherence. However, as gates get faster, another type of error, arising from counter-rotating dynamics, can be introduced because of the way qubits are controlled using electromagnetic waves. 

Single-qubit gates are usually implemented with a resonant pulse, which induces Rabi oscillations between the qubit states. When the pulses are too fast, however, these “Rabi gates” become less consistent, due to unwanted errors from counter-rotating effects. The faster the gate, the larger the counter-rotating error. For low-frequency qubits such as fluxonium, counter-rotating errors limit the fidelity of fast gates.
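
One rough way to see this effect (an illustration, not the paper’s experiment or analysis) is to integrate the Schrödinger equation for a single resonantly driven qubit with the full cosine drive, with no rotating-wave approximation, and check how far an intended pi pulse falls short of a perfect population swap. In the Python sketch below, the qubit frequency, drive strengths, and step sizes are arbitrary; the point is only that the error grows as the drive strength approaches the qubit frequency, which is what happens as gates get faster.

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def hamiltonian(t, omega_q, rabi):
    # Lab-frame Hamiltonian (hbar = 1): qubit splitting plus a full cosine drive,
    # i.e., with the counter-rotating term kept.
    return 0.5 * omega_q * sz + rabi * np.cos(omega_q * t) * sx

def pi_pulse_error(omega_q, rabi, steps_per_period=200):
    """Integrate one intended pi pulse; return how far the population swap falls short of 1."""
    t_gate = np.pi / rabi                                    # pi-pulse duration predicted by the RWA
    n_steps = int(np.ceil(t_gate * omega_q / (2 * np.pi) * steps_per_period))
    dt = t_gate / n_steps
    psi = np.array([1.0, 0.0], dtype=complex)                # start in one qubit state
    t = 0.0
    for _ in range(n_steps):
        # Fourth-order Runge-Kutta step for i d|psi>/dt = H(t) |psi>
        k1 = -1j * hamiltonian(t, omega_q, rabi) @ psi
        k2 = -1j * hamiltonian(t + dt / 2, omega_q, rabi) @ (psi + dt / 2 * k1)
        k3 = -1j * hamiltonian(t + dt / 2, omega_q, rabi) @ (psi + dt / 2 * k2)
        k4 = -1j * hamiltonian(t + dt, omega_q, rabi) @ (psi + dt * k3)
        psi = psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    psi /= np.linalg.norm(psi)
    return 1.0 - abs(psi[1]) ** 2

omega_q = 2 * np.pi                                          # qubit frequency (arbitrary units)
for ratio in (0.01, 0.05, 0.2):                              # drive strength relative to the qubit frequency
    err = pi_pulse_error(omega_q, ratio * omega_q)
    print(f"Omega/omega_q = {ratio:４}: pi-pulse error ~ {err:.1e}".replace("４", "4"))
```

The slow drive leaves only a tiny residual error, while the fast drive, whose strength is a sizable fraction of the qubit frequency, picks up a much larger counter-rotating error.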

“Getting rid of these errors was a fun challenge for us,” says Rower. “Initially, Leon had the idea to utilize circularly polarized microwave drives, analogous to circularly polarized light, but realized by controlling the relative phase of charge and flux drives of a superconducting qubit. Such a circularly polarized drive would ideally be immune to counter-rotating errors.”

While Ding’s idea worked immediately, the fidelities achieved with circularly polarized drives were not as high as expected from coherence measurements.

“Eventually, we stumbled on a beautifully simple idea,” says Rower. “If we applied pulses at exactly the right times, we should be able to make counter-rotating errors consistent from pulse-to-pulse. This would make the counter-rotating errors correctable. Even better, they would be automatically accounted for with our usual Rabi gate calibrations!”

They called this idea “commensurate pulses,” since the pulses needed to be applied at times commensurate with the qubit’s period (the inverse of its frequency). Commensurate pulses are defined simply by timing constraints and can be applied to a single linear qubit drive. In contrast, circularly polarized microwaves require two drives and some extra calibration.

“I had much fun developing the commensurate technique,” says Rower. “It was simple, we understood why it worked so well, and it should be portable to any qubit suffering from counter-rotating errors!”

“This project makes it clear that counter-rotating errors can be dealt with easily. This is a wonderful thing for low-frequency qubits such as fluxonium, which are looking more and more promising for quantum computing.”

Fluxonium’s promise

Fluxonium is a type of superconducting qubit made up of a capacitor and a Josephson junction; unlike transmon qubits, however, fluxonium also includes a large “superinductor,” which by design helps protect the qubit from environmental noise. As a result, logical operations, or gates, can be performed with greater accuracy.

Despite having higher coherence, however, fluxonium has a lower qubit frequency, which is generally associated with proportionally longer gates.

“Here, we’ve demonstrated a gate that is among the fastest and highest-fidelity across all superconducting qubits,” says Ding. “Our experiments really show that fluxonium is a qubit that supports both interesting physical explorations and also absolutely delivers in terms of engineering performance.”

With further research, they hope to reveal new limitations and yield even faster and higher-fidelity gates.

“Counter-rotating dynamics have been understudied in the context of superconducting quantum computing because of how well the rotating-wave approximation holds in common scenarios,” says Ding. “Our paper shows how to precisely calibrate fast, low-frequency gates where the rotating-wave approximation does not hold.”

Physics and engineering team up

“This is a wonderful example of the type of work we like to do in EQuS, because it leverages fundamental concepts in both physics and electrical engineering to achieve a better outcome,” says Oliver. “It builds on our earlier work with non-adiabatic qubit control, applies it to a new qubit — fluxonium — and makes a beautiful connection with counter-rotating dynamics.”

The science and engineering teams enabled the high fidelity in two ways. First, the team demonstrated “commensurate” (synchronous) non-adiabatic control, which goes beyond the rotating-wave approximation used in standard Rabi approaches. This leverages ideas that won the 2023 Nobel Prize in Physics for ultrafast “attosecond” pulses of light.

Secondly, they demonstrated it using an analog to circularly polarized light. Rather than a physical electromagnetic field with a rotating polarization vector in real x-y space, they realized a synthetic version of circularly polarized light using the qubit’s x-y space, which in this case corresponds to its magnetic flux and electric charge.

The result was enabled by combining a new take on an existing qubit design (fluxonium) with advanced control methods grounded in an understanding of the underlying physics.

Platform-independent and requiring no additional calibration overhead, this work establishes straightforward strategies for mitigating counter-rotating effects from strong drives in circuit quantum electrodynamics and other platforms, which the researchers expect to be helpful in the effort to realize high-fidelity control for fault-tolerant quantum computing.

Adds Oliver, “With the recent announcement of Google’s Willow quantum chip that demonstrated quantum error correction beyond threshold for the first time, this is a timely result, as we have pushed performance even higher. Higher-performant qubits will lead to lower overhead requirements for implementing error correction.”  

Other researchers on the paper are RLE’s Helin Zhang, Max Hays, Patrick M. Harrington, Ilan T. Rosen, Simon Gustavsson, Kyle Serniak, Jeffrey A. Grover, and Junyoung An, who is also with EECS; and MIT Lincoln Laboratory’s Jeffrey M. Gertler, Thomas M. Hazard, Bethany M. Niedzielski, and Mollie E. Schwartz.

This research was funded, in part, by the U.S. Army Research Office, the U.S. Department of Energy Office of Science, National Quantum Information Science Research Centers, Co-design Center for Quantum Advantage, U.S. Air Force, the U.S. Office of the Director of National Intelligence, and the U.S. National Science Foundation.  


For healthy hearing, timing matters

Machine-learning models let neuroscientists study the impact of auditory processing on real-world hearing.


When sound waves reach the inner ear, neurons there pick up the vibrations and alert the brain. Encoded in their signals is a wealth of information that enables us to follow conversations, recognize familiar voices, appreciate music, and quickly locate a ringing phone or crying baby.

Neurons send signals by emitting spikes, also known as action potentials: brief changes in voltage that propagate along nerve fibers. Remarkably, auditory neurons can fire hundreds of spikes per second, and time their spikes with exquisite precision to match the oscillations of incoming sound waves.

With powerful new models of human hearing, scientists at MIT’s McGovern Institute for Brain Research have determined that this precise timing is vital for some of the most important ways we make sense of auditory information, including recognizing voices and localizing sounds.

The open-access findings, reported Dec. 4 in the journal Nature Communications, show how machine learning can help neuroscientists understand how the brain uses auditory information in the real world. MIT professor and McGovern investigator Josh McDermott, who led the research, explains that his team’s models better equip researchers to study the consequences of different types of hearing impairment and devise more effective interventions.

Science of sound

The nervous system’s auditory signals are timed so precisely that researchers have long suspected timing is important to our perception of sound. Sound waves oscillate at rates that determine their pitch: Low-pitched sounds travel in slow waves, whereas high-pitched sound waves oscillate more frequently. The auditory nerve that relays information from sound-detecting hair cells in the ear to the brain generates electrical spikes that correspond to the frequency of these oscillations. “The action potentials in an auditory nerve get fired at very particular points in time relative to the peaks in the stimulus waveform,” explains McDermott, who is also associate head of the MIT Department of Brain and Cognitive Sciences.

This relationship, known as phase-locking, requires neurons to time their spikes with sub-millisecond precision. But scientists haven’t really known how informative these temporal patterns are to the brain. Beyond being scientifically intriguing, McDermott says, the question has important clinical implications: “If you want to design a prosthesis that provides electrical signals to the brain to reproduce the function of the ear, it’s arguably pretty important to know what kinds of information in the normal ear actually matter,” he says.
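
Phase-locking is commonly quantified with a statistic called vector strength, which treats each spike as a unit vector at its phase within the stimulus cycle and averages them: a value of 1 means perfectly locked timing, and 0 means timing unrelated to the waveform. The Python sketch below, with made-up spike trains and jitter levels rather than the study’s simulations, shows how quickly added timing jitter erases phase-locking to a 500-hertz tone.

```python
import numpy as np

rng = np.random.default_rng(1)
freq = 500.0                                     # tone frequency in Hz
period = 1.0 / freq

# Half a second of spikes locked near the waveform peaks, with slight timing noise.
base = np.arange(0.0, 0.5, period)
spikes = base + rng.normal(0.0, 0.05 * period, size=base.size)

def vector_strength(spike_times, f):
    """1 means every spike lands at the same phase of the cycle; 0 means no phase information."""
    phases = 2 * np.pi * f * spike_times
    return np.abs(np.mean(np.exp(1j * phases)))

print("precise timing:", round(vector_strength(spikes, freq), 2))
for jitter_ms in (0.2, 1.0, 5.0):
    blurred = spikes + rng.normal(0.0, jitter_ms / 1000.0, size=spikes.size)
    print(f"plus {jitter_ms} ms of jitter:", round(vector_strength(blurred, freq), 2))
```

Smearing the spike times by even a millisecond or so wipes out the phase information at this frequency, which is the spirit of the timing-degradation test the researchers describe below.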

This has been difficult to study experimentally; animal models can’t offer much insight into how the human brain extracts structure in language or music, and the auditory nerve is inaccessible for study in humans. So McDermott and graduate student Mark Saddler PhD ’24 turned to artificial neural networks.

Artificial hearing

Neuroscientists have long used computational models to explore how sensory information might be decoded by the brain, but until recent advances in computing power and machine learning methods, these models were limited to simulating simple tasks. “One of the problems with these prior models is that they’re often way too good,” says Saddler, who is now at the Technical University of Denmark. For example, a computational model tasked with identifying the higher pitch in a pair of simple tones is likely to perform better than people who are asked to do the same thing. “This is not the kind of task that we do every day in hearing,” Saddler points out. “The brain is not optimized to solve this very artificial task.” This mismatch limited the insights that could be drawn from this prior generation of models.

To better understand the brain, Saddler and McDermott wanted to challenge a hearing model to do things that people use their hearing for in the real world, like recognizing words and voices. That meant developing an artificial neural network to simulate the parts of the brain that receive input from the ear. The network was given input from some 32,000 simulated sound-detecting sensory neurons and then optimized for various real-world tasks.

The researchers showed that their model replicated human hearing well — better than any previous model of auditory behavior, McDermott says. In one test, the artificial neural network was asked to recognize words and voices within dozens of types of background noise, from the hum of an airplane cabin to enthusiastic applause. Under every condition, the model performed very similarly to humans.

When the team degraded the timing of the spikes in the simulated ear, however, their model could no longer match humans’ ability to recognize voices or identify the locations of sounds. For example, while McDermott’s team had previously shown that people use pitch to help them identify people’s voices, the model revealed that this ability is lost without precisely timed signals. “You need quite precise spike timing in order to both account for human behavior and to perform well on the task,” Saddler says. That suggests that the brain uses precisely timed auditory signals because they aid these practical aspects of hearing.

The team’s findings demonstrate how artificial neural networks can help neuroscientists understand how the information extracted by the ear influences our perception of the world, both when hearing is intact and when it is impaired. “The ability to link patterns of firing in the auditory nerve with behavior opens a lot of doors,” McDermott says.

“Now that we have these models that link neural responses in the ear to auditory behavior, we can ask, ‘If we simulate different types of hearing loss, what effect is that going to have on our auditory abilities?’” McDermott says. “That will help us better diagnose hearing loss, and we think there are also extensions of that to help us design better hearing aids or cochlear implants.” For example, he says, “The cochlear implant is limited in various ways — it can do some things and not others. What’s the best way to set up that cochlear implant to enable you to mediate behaviors? You can, in principle, use the models to tell you that.”


Physicists measure quantum geometry for the first time

The work opens new avenues for understanding and manipulating electrons in materials.


MIT physicists and colleagues have for the first time measured the geometry, or shape, of electrons in solids at the quantum level. Scientists have long known how to measure the energies and velocities of electrons in crystalline materials, but until now, those systems’ quantum geometry could only be inferred theoretically, or sometimes not at all.

The work, reported in the Nov. 25 issue of Nature Physics, “opens new avenues for understanding and manipulating the quantum properties of materials,” says Riccardo Comin, MIT’s Class of 1947 Career Development Associate Professor of Physics and leader of the work.

“We’ve essentially developed a blueprint for obtaining some completely new information that couldn’t be obtained before,” says Comin, who is also affiliated with MIT’s Materials Research Laboratory and the Research Laboratory of Electronics.

The work could be applied to “any kind of quantum material, not just the one we worked with,” says Mingu Kang PhD ’23, first author of the Nature Physics paper, who conducted the work as an MIT graduate student and is now a Kavli Postdoctoral Fellow at Cornell University’s Laboratory of Atomic and Solid State Physics.

Kang was also invited to write an accompanying research briefing on the work, including its implications, for the Nov. 25 issue of Nature Physics.

A weird world

In the weird world of quantum physics, an electron can be described as both a point in space and a wave-like shape. At the heart of the current work is a fundamental object known as a wave function that describes the latter. “You can think of it like a surface in a three-dimensional space,” says Comin.

There are different types of wave functions, ranging from the simple to the complex. Think of a ball. That is analogous to a simple, or trivial, wave function. Now picture a Mobius strip, the kind of structure explored by M.C. Escher in his art. That’s analogous to a complex, or nontrivial, wave function. And the quantum world is filled with materials composed of the latter.

But until now, the quantum geometry of wave functions could only be inferred theoretically, or sometimes not at all. And the property is becoming more and more important as physicists find more and more quantum materials with potential applications in everything from quantum computers to advanced electronic and magnetic devices.

The MIT team solved the problem using a technique called angle-resolved photoemission spectroscopy, or ARPES. Comin, Kang, and some of the same colleagues had used the technique in other research. For example, in 2022 they reported discovering the “secret sauce” behind exotic properties of a new quantum material known as a kagome metal. That work, too, appeared in Nature Physics. In the current work, the team adapted ARPES to measure the quantum geometry of a kagome metal.

Close collaborations

Kang stresses that the new ability to measure the quantum geometry of materials “comes from the close cooperation between theorists and experimentalists.”

The Covid-19 pandemic, too, had an impact. Kang, who is from South Korea, was based in that country during the pandemic. “That facilitated a collaboration with theorists in South Korea,” says Kang, an experimentalist.

The pandemic also led to an unusual opportunity for Comin. He traveled to Italy to help run the ARPES experiments at the Italian Light Source Elettra, a national laboratory. The lab was closed during the pandemic, but was starting to reopen when Comin arrived. He found himself alone, however, when Kang tested positive for Covid and couldn’t join him. So he ended up running the experiments himself, with the support of local scientists. “As a professor, I lead projects, but students and postdocs actually carry out the work. So this is basically the last study where I actually contributed to the experiments themselves,” he says with a smile.

In addition to Kang and Comin, additional authors of the Nature Physics paper are Sunje Kim of Seoul National University (Kim is a co-first author with Kang); Paul M. Neves, a graduate student in the MIT Department of Physics; Linda Ye of Stanford University; Junseo Jung of Seoul National University; Denny Puntel of the University of Trieste; Federico Mazzola of Consiglio Nazionale delle Ricerche and Ca’ Foscari University of Venice; Shiang Fang of Google DeepMind; Chris Jozwiak, Aaron Bostwick, and Eli Rotenberg of Lawrence Berkeley National Laboratory; Jun Fuji and Ivana Vobornik of Consiglio Nazionale delle Ricerche; Jae-Hoon Park of Max Planck POSTECH/Korea Research Initiative and Pohang University of Science and Technology; Joseph G. Checkelsky, associate professor of physics at MIT; and Bohm-Jung Yang of Seoul National University, who co-led the research project with Comin.

This work was funded by the U.S. Air Force Office of Scientific Research, the U.S. National Science Foundation, the Gordon and Betty Moore Foundation, the National Research Foundation of Korea, the Samsung Science and Technology Foundation, the U.S. Army Research Office, the U.S. Department of Energy Office of Science, the Heising-Simons Physics Research Fellow Program, the Tsinghua Education Foundation, the NFFA-MUR Italy Progetti Internazionali facility, the Samsung Foundation of Culture, and the Kavli Institute at Cornell.


X-ray flashes from a nearby supermassive black hole accelerate mysteriously

Their source could be the core of a dead star that’s teetering at the black hole’s edge, MIT astronomers report.


One supermassive black hole has kept astronomers glued to their scopes for the last several years. First came a surprise disappearance, and now, a precarious spinning act.

The black hole in question is 1ES 1927+654, which is about as massive as a million suns and sits in a galaxy that is 270 million light-years away. In 2018, astronomers at MIT and elsewhere observed that the black hole’s corona — a cloud of whirling, white-hot plasma — suddenly disappeared, before reassembling months later. The brief though dramatic shut-off was a first in black hole astronomy.

Members of the MIT team have now caught the same black hole exhibiting more unprecedented behavior.

The astronomers have detected flashes of X-rays coming from the black hole at a steadily increasing clip. Over a period of two years, the flashes, at millihertz frequencies, increased from every 18 minutes to every seven minutes. This dramatic speed-up in X-rays has not been seen from a black hole until now.

The researchers explored a number of scenarios for what might explain the flashes. They believe the most likely culprit is a spinning white dwarf — an extremely compact core of a dead star that is orbiting around the black hole and getting precariously closer to its event horizon, the boundary beyond which nothing can escape the black hole’s gravitational pull. If this is the case, the white dwarf must be pulling off an impressive balancing act, as it could be coming right up to the black hole’s edge without actually falling in.

“This would be the closest thing we know of around any black hole,” says Megan Masterson, a graduate student in physics at MIT, who co-led the discovery. “This tells us that objects like white dwarfs may be able to live very close to an event horizon for a relatively extended period of time.”

The researchers present their findings today at the 245th meeting of the American Astronomical Society.

If a white dwarf is at the root of the black hole’s mysterious flashing, it would also give off gravitational waves, in a range that would be detectable by next-generation observatories such as the European Space Agency's Laser Interferometer Space Antenna (LISA).

“These new detectors are designed to detect oscillations on the scale of minutes, so this black hole system is in that sweet spot,” says co-author Erin Kara, associate professor of physics at MIT.

The study’s other co-authors include MIT Kavli members Christos Panagiotou, Joheen Chakraborty, Kevin Burdge, Riccardo Arcodia, Ronald Remillard, and Jingyi Wang, along with collaborators from multiple other institutions.

Nothing normal

Kara and Masterson were part of the team that observed 1ES 1927+654 in 2018, as the black hole’s corona went dark, then slowly rebuilt itself over time. For a while, the newly reformed corona — a cloud of highly energetic plasma and X-rays — was the brightest X-ray-emitting object in the sky.

“It was still extremely bright, though it wasn’t doing anything new for a couple years and was kind of gurgling along. But we felt we had to keep monitoring it because it was so beautiful,” Kara says. “Then we noticed something that has never really been seen before.”

In 2022, the team looked through observations of the black hole taken by the European Space Agency’s XMM-Newton, a space-based observatory that detects and measures X-ray emissions from black holes, neutron stars, galactic clusters, and other extreme cosmic sources. They noticed that X-rays from the black hole appeared to pulse with increasing frequency. Such “quasi-periodic oscillations” have only been observed in a handful of other supermassive black holes, where X-ray flashes appear with regular frequency.

In the case of 1ES 1927+654, the flickering seemed to steadily ramp up, from every 18 minutes to every seven minutes over the span of two years.

“We’ve never seen this dramatic variability in the rate at which it’s flashing,” Masterson says. “This looked absolutely nothing like a normal supermassive black hole.”

The fact that the flashing was detected in the X-ray band points to the strong possibility that the source is somewhere very close to the black hole. The innermost regions of a black hole are extremely high-energy environments, where X-rays are produced by fast-moving, hot plasma. X-rays are less likely to be seen at farther distances, where gas can circle more slowly in an accretion disk. The cooler environment of the disk can emit optical and ultraviolet light, but rarely gives off X-rays.

“Seeing something in the X-rays is already telling you you’re pretty close to the black hole,” Kara says. “When you see variability on the timescale of minutes, that’s close to the event horizon, and the first thing your mind goes to is circular motion, and whether something could be orbiting around the black hole.”

X-ray kick-up

Whatever was producing the X-ray flashes was doing so at an extremely close distance from the black hole, which the researchers estimate to be within a few million miles of the event horizon.

Masterson and Kara explored models for various astrophysical phenomena that could explain the X-ray patterns that they observed, including a possibility relating to the black hole’s corona.

“One idea is that this corona is oscillating, maybe blobbing back and forth, and if it starts to shrink, those oscillations get faster as the scales get smaller,” Masterson says. “But we’re in the very early stages of understanding coronal oscillations.”

Another promising scenario, and one that scientists have a better grasp on in terms of the physics involved, has to do with a daredevil of a white dwarf. According to their modeling, the researchers estimate the white dwarf could have been about one-tenth the mass of the sun. In contrast, the supermassive black hole itself is on the order of 1 million solar masses.

When any object gets this close to a supermassive black hole, gravitational waves are expected to be emitted, dragging the object closer to the black hole. As it circles closer, the white dwarf moves at a faster rate, which can explain the increasing frequency of X-ray oscillations that the team observed.
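
As a rough back-of-the-envelope check, and not the team’s actual modeling, one can treat the flash period as the period of a circular orbit around a million-solar-mass black hole and ask how far out such an orbit would sit. The Newtonian sketch below ignores the relativistic corrections that matter this close to the horizon, but it places both the 18-minute and the seven-minute orbits within several million miles of the event-horizon scale, consistent with the picture described above.

```python
import numpy as np

G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8                # speed of light, m/s
M_sun = 1.989e30           # solar mass, kg
M = 1e6 * M_sun            # roughly the reported mass of 1ES 1927+654's black hole
meters_per_mile = 1609.34

r_s = 2 * G * M / c**2     # Schwarzschild radius (event-horizon scale)
print(f"Schwarzschild radius: {r_s / meters_per_mile:.1e} miles")

for minutes in (18, 7):
    P = minutes * 60.0
    a = (G * M * P**2 / (4 * np.pi**2)) ** (1 / 3)   # Kepler's third law, circular orbit
    print(f"{minutes:>2}-minute period -> orbital radius {a / meters_per_mile:.1e} miles "
          f"(about {a / r_s:.1f} Schwarzschild radii)")
```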

The white dwarf is practically at the precipice of no return and is estimated to be just a few million miles from the event horizon. However, the researchers predict that the star will not fall in. While the black hole’s gravity may pull the white dwarf inward, the star is also shedding part of its outer layer into the black hole. This shedding acts as a small kick-back, such that the white dwarf — an incredibly compact object itself — can resist crossing the black hole’s boundary.

“Because white dwarfs are small and compact, they’re very difficult to shred apart, so they can be very close to a black hole,” Kara says. “If this scenario is correct, this white dwarf is right at the turn around point, and we may see it get further away.”

The team plans to continue observing the system, with existing and future telescopes, to better understand the extreme physics at work in a black hole’s innermost environments. They are particularly excited to study the system once the space-based gravitational-wave detector LISA launches — currently planned for the mid 2030s — as the gravitational waves that the system should give off will be in a sweet spot that LISA can clearly detect.

“The one thing I’ve learned with this source is to never stop looking at it because it will probably teach us something new,” Masterson says. “The next step is just to keep our eyes open.”


Study suggests how the brain, with sleep, learns meaningful maps of spaces

Place cells are known to encode individual locations, but research finds stitching together a “cognitive map” of a whole environment requires a broader ensemble of cells, aided by sleep, over several days.


On the first day of your vacation in a new city, your explorations expose you to innumerable individual places. While the memories of these spots (like a beautiful garden on a quiet side street) feel immediately indelible, it might be days before you have enough intuition about the neighborhood to direct a newer tourist to that same site and then maybe to the café you discovered nearby. A new study of mice by MIT neuroscientists at The Picower Institute for Learning and Memory provides new evidence for how the brain forms cohesive cognitive maps of whole spaces and highlights the critical importance of sleep for the process.

Scientists have known for decades that the brain devotes neurons in a region called the hippocampus to remembering specific locations. So-called “place cells” reliably activate when an animal is at the location the neuron is tuned to remember. But more useful than having markers of specific spaces is having a mental model of how they all relate in a continuous overall geography. Though such “cognitive maps” were formally theorized in 1948, neuroscientists have remained unsure of how the brain constructs them. The new study in the December edition of Cell Reports finds that the capability may depend upon subtle but meaningful changes over days in the activity of cells that are only weakly attuned to individual locations, but that increase the robustness and refinement of the hippocampus’s encoding of the whole space. With sleep, the study’s analyses indicate, these “weakly spatial” cells increasingly enrich neural network activity in the hippocampus to link together these places into a cognitive map.

“On Day 1, the brain doesn’t represent the space very well,” says lead author Wei Guo, a research scientist in the lab of senior author Matthew Wilson, the Sherman Fairchild Professor in The Picower Institute and MIT’s departments of Biology and Brain and Cognitive Sciences. “Neurons represent individual locations, but together they don’t form a map. But on Day 5 they form a map. If you want a map, you need all these neurons to work together in a coordinated ensemble.”

Mice mapping mazes

To conduct the study, Guo and Wilson, along with labmates Jie “Jack” Zhang and Jonathan Newman, introduced mice to simple mazes of varying shapes and let them explore freely for about 30 minutes a day for several days. Importantly, the mice were not directed to learn anything specific through the offer of any rewards. They just wandered. Previous studies have shown that mice naturally demonstrate “latent learning” of spaces from this kind of unrewarded experience after several days.

To understand how latent learning takes hold, Guo and his colleagues visually monitored hundreds of neurons in the CA1 area of the hippocampus by engineering cells to flash when a buildup of calcium ions made them electrically active. They not only recorded the neurons’ flashes when the mice were actively exploring, but also while they were sleeping. Wilson’s lab has shown that animals “replay” their previous journeys during sleep, essentially refining their memories by dreaming about their experiences.

Analysis of the recordings showed that the activity of the place cells developed immediately and remained strong and unchanged over several days of exploration. But this activity alone wouldn’t explain how latent learning or a cognitive map evolves over several days. So unlike in many other studies where scientists focus solely on the strong and clear activity of place cells, Guo extended his analysis to the more subtle and mysterious activity of cells that were not so strongly spatially tuned. 

Using an emerging technique called “manifold learning,” he was able to discern that many of the “weakly spatial” cells gradually correlated their activity not with locations, but with activity patterns among other neurons in the network. As this was happening, Guo’s analyses showed, the network encoded a cognitive map of the maze that increasingly resembled the literal, physical space.
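The article names only the general approach, so the following is purely illustrative. It applies an off-the-shelf manifold-learning method (Isomap, assumed here for illustration; the study may have used a different algorithm) to a hypothetical matrix of population activity, then checks how well the low-dimensional embedding predicts the animal’s physical position, the kind of comparison one could use to ask whether the internal map increasingly resembles the real maze.

```python
# Minimal sketch (not the study's actual pipeline): embed population activity
# with a standard manifold-learning method and compare the embedding with the
# animal's tracked position in the maze.
import numpy as np
from sklearn.manifold import Isomap          # chosen for illustration only
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical calcium-imaging data: rows are time bins, columns are neurons.
n_timebins, n_neurons = 2000, 300
activity = rng.poisson(lam=1.0, size=(n_timebins, n_neurons)).astype(float)

# Hypothetical tracked (x, y) position of the mouse for each time bin.
position = rng.uniform(0, 1, size=(n_timebins, 2))

# Embed the high-dimensional population activity into two dimensions.
embedding = Isomap(n_components=2, n_neighbors=15).fit_transform(activity)

# Crude map-quality check: how well the embedding linearly predicts position.
# Higher R^2 on later days would suggest the ensemble forms a better map.
r2 = LinearRegression().fit(embedding, position).score(embedding, position)
print(f"Embedding-to-position R^2: {r2:.3f}")
```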

“Although not responding to specific locations like strongly spatial cells, weakly spatial cells specialize in responding to ‘mental locations,’ i.e., specific ensemble firing patterns of other cells,” the study authors wrote. “If a weakly spatial cell’s mental field encompasses two subsets of strongly spatial cells that encode distinct locations, this weakly spatial cell can serve as a bridge between these locations.”

In other words, the activity of the weakly spatial cells likely stitches together the individual locations represented by the place cells into a mental map.

The need for sleep

Studies by Wilson’s lab and many others have shown that memories are consolidated, refined, and processed by neural activity, such as replay, that occurs during sleep and rest. Guo and Wilson’s team therefore sought to test whether sleep was necessary for the contribution of weakly spatial cells to latent learning of cognitive maps.

To do this, they let some mice explore a new maze twice during the same day, with a three-hour siesta in between. Some of the mice were allowed to sleep during the break, but some were not. The mice that slept showed a significant refinement of their mental map, while the ones that weren’t allowed to sleep showed no such improvement. And it wasn’t only the network’s encoding of the map that improved: measures of the tuning of individual cells showed that sleep helped cells become better attuned both to places and to patterns of network activity, the so-called “mental places” or “fields.”

Mental map meaning

The “cognitive maps” the mice encoded over several days were not literal, precise maps of the mazes, Guo notes. Instead they were more like schematics. Their value is that they provide the brain with a topology that can be explored mentally, without having to be in the physical space. For instance, once you’ve formed your cognitive map of the neighborhood around your hotel, you can plan the next morning’s excursion (e.g., you could imagine grabbing a croissant at the bakery you observed a few blocks west and then picture eating it on one of those benches you noticed in the park along the river).

Indeed, Wilson hypothesized that the weakly spatial cells’ activity may be overlaying salient non-spatial information that brings additional meaning to the maps (i.e., the idea of a bakery is not spatial, even if it’s closely linked to a specific location). The study, however, included no landmarks within the mazes and did not test any specific behaviors among the mice. But now that the study has identified that weakly spatial cells contribute meaningfully to mapping, Wilson said future studies can investigate what kind of information they may be incorporating into the animals’ sense of their environments. We seem to intuitively regard the spaces we inhabit as more than just sets of discrete locations.

“In this study we focused on animals behaving naturally and demonstrated that during freely exploratory behavior and subsequent sleep, in the absence of reinforcement, substantial neural plastic changes at the ensemble level still occur,” the authors concluded. “This form of implicit and unsupervised learning constitutes a crucial facet of human learning and intelligence, warranting further in-depth investigations.”

The Freedom Together Foundation, The Picower Institute, and the National Institutes of Health funded the study.


Teaching AI to communicate sounds like humans do

Inspired by the human vocal tract, a new AI model can produce and understand vocal imitations of everyday sounds. The method could help build new sonic interfaces for entertainment and education.


Whether you’re describing the sound of your faulty car engine or meowing like your neighbor’s cat, imitating sounds with your voice can be a helpful way to relay a concept when words don’t do the trick.

Vocal imitation is the sonic equivalent of doodling a quick picture to communicate something you saw — except that instead of using a pencil to illustrate an image, you use your vocal tract to express a sound. This might seem difficult, but it’s something we all do intuitively: To experience it for yourself, try using your voice to mirror the sound of an ambulance siren, a crow, or a bell being struck.

Inspired by the cognitive science of how we communicate, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers have developed an AI system that can produce human-like vocal imitations with no training, and without ever having "heard" a human vocal impression before.

To achieve this, the researchers engineered their system to produce and interpret sounds much like we do. They started by building a model of the human vocal tract that simulates how vibrations from the voice box are shaped by the throat, tongue, and lips. Then, they used a cognitively-inspired AI algorithm to control this vocal tract model and make it produce imitations, taking into consideration the context-specific ways that humans choose to communicate sound.
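The article doesn’t detail the simulator, but the idea it describes, with voice-box vibrations shaped by the throat, tongue, and lips, matches the classic source-filter picture of speech. Below is a minimal, hypothetical sketch of that picture rather than the CSAIL model: an impulse train stands in for the glottal source and is passed through two resonant “formant” filters.

```python
# Minimal source-filter sketch (illustrative only, not the CSAIL model):
# a glottal-like pulse train is shaped by two resonant "formant" filters,
# mimicking how the throat and mouth shape vibrations from the voice box.
import numpy as np
from scipy.signal import lfilter

fs = 16000                      # sample rate in Hz
t = np.arange(int(0.5 * fs))    # half a second of samples

# Source: impulse train at roughly 120 Hz, standing in for voice-box vibration.
source = np.zeros_like(t, dtype=float)
source[:: fs // 120] = 1.0

def formant(signal, freq, bandwidth, fs):
    """Second-order resonator approximating one vocal-tract formant."""
    r = np.exp(-np.pi * bandwidth / fs)
    theta = 2 * np.pi * freq / fs
    a = [1.0, -2 * r * np.cos(theta), r ** 2]   # resonant poles
    b = [1.0 - r]                               # rough gain normalization
    return lfilter(b, a, signal)

# Filter: two hypothetical formants (frequencies chosen only for illustration).
voiced = formant(formant(source, freq=500, bandwidth=80, fs=fs),
                 freq=1500, bandwidth=120, fs=fs)

print(voiced[:5])  # the shaped waveform; write it to a WAV file to listen
```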

The model can effectively take many sounds from the world and generate a human-like imitation of them — including noises like leaves rustling, a snake’s hiss, and an approaching ambulance siren. Their model can also be run in reverse to guess real-world sounds from human vocal imitations, similar to how some computer vision systems can retrieve high-quality images based on sketches. For instance, the model can correctly distinguish the sound of a human imitating a cat’s “meow” versus its “hiss.”

In the future, this model could potentially lead to more intuitive “imitation-based” interfaces for sound designers, more human-like AI characters in virtual reality, and even methods to help students learn new languages.

The co-lead authors — MIT CSAIL PhD students Kartik Chandra SM ’23 and Karima Ma, and undergraduate researcher Matthew Caren — note that computer graphics researchers have long recognized that realism is rarely the ultimate goal of visual expression. For example, an abstract painting or a child’s crayon doodle can be just as expressive as a photograph.

“Over the past few decades, advances in sketching algorithms have led to new tools for artists, advances in AI and computer vision, and even a deeper understanding of human cognition,” notes Chandra. “In the same way that a sketch is an abstract, non-photorealistic representation of an image, our method captures the abstract, non-phono-realistic ways humans express the sounds they hear. This teaches us about the process of auditory abstraction.”

The art of imitation, in three parts

The team developed three increasingly nuanced versions of the model to compare to human vocal imitations. First, they created a baseline model that simply aimed to generate imitations that were as similar to real-world sounds as possible — but this model didn’t match human behavior very well.

The researchers then designed a second “communicative” model. According to Caren, this model considers what’s distinctive about a sound to a listener. For instance, you’d likely imitate the sound of a motorboat by mimicking the rumble of its engine, since that’s its most distinctive auditory feature, even if it’s not the loudest aspect of the sound (compared to, say, the water splashing). This second model created imitations that were better than the baseline, but the team wanted to improve it even more.

To take their method a step further, the researchers added a final layer of reasoning to the model. “Vocal imitations can sound different based on the amount of effort you put into them. It costs time and energy to produce sounds that are perfectly accurate,” says Chandra. The researchers’ full model accounts for this by trying to avoid utterances that are very rapid, loud, or high- or low-pitched, which people are less likely to use in a conversation. The result: more human-like imitations that closely match many of the decisions that humans make when imitating the same sounds.
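As a loose illustration of the accuracy-versus-effort trade-off described above, the toy scoring function below rewards similarity to a target sound and penalizes effortful properties such as extreme loudness, pitch, or speaking rate. The feature names, weights, and numbers are hypothetical and are not the paper’s actual objective.

```python
# Illustrative accuracy-vs-effort trade-off for choosing an imitation.
# All features and weights are hypothetical; the CSAIL model works through a
# vocal-tract simulator and a more principled communicative objective.
import numpy as np

def score(candidate, target, w_effort=0.5):
    """Higher is better: similarity to the target minus an effort penalty."""
    similarity = -np.linalg.norm(candidate["features"] - target["features"])
    effort = (abs(candidate["loudness_db"] - 60) / 60      # avoid extremes
              + abs(candidate["pitch_hz"] - 200) / 200
              + candidate["speech_rate"] / 10)
    return similarity - w_effort * effort

target = {"features": np.array([0.8, 0.1, 0.4])}
candidates = [
    {"features": np.array([0.82, 0.12, 0.35]), "loudness_db": 62,
     "pitch_hz": 210, "speech_rate": 3},                    # close and easy
    {"features": np.array([0.80, 0.10, 0.40]), "loudness_db": 95,
     "pitch_hz": 600, "speech_rate": 9},                    # exact but costly
]
best = max(candidates, key=lambda c: score(c, target))
print(best["loudness_db"], best["pitch_hz"])  # the easier imitation wins
```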

After building this model, the team conducted a behavioral experiment to see whether the AI- or human-generated vocal imitations were perceived as better by human judges. Notably, participants in the experiment favored the AI model 25 percent of the time in general, and as much as 75 percent for an imitation of a motorboat and 50 percent for an imitation of a gunshot.

Toward more expressive sound technology

Passionate about technology for music and art, Caren envisions that this model could help artists better communicate sounds to computational systems and assist filmmakers and other content creators with generating AI sounds that are more nuanced to a specific context. It could also enable a musician to rapidly search a sound database by imitating a noise that is difficult to describe in, say, a text prompt.

In the meantime, Caren, Chandra, and Ma are looking at the implications of their model in other domains, including the development of language, how infants learn to talk, and even imitation behaviors in birds like parrots and songbirds.

The team still has work to do with the current iteration of their model: It struggles with some consonants, like “z,” which led to inaccurate impressions of some sounds, like bees buzzing. They also can’t yet replicate how humans imitate speech, music, or sounds that are imitated differently across different languages, like a heartbeat.

Stanford University linguistics professor Robert Hawkins says that language is full of onomatopoeia and words that mimic but don’t fully replicate the things they describe, like the “meow” sound that very inexactly approximates the sound that cats make. “The processes that get us from the sound of a real cat to a word like ‘meow’ reveal a lot about the intricate interplay between physiology, social reasoning, and communication in the evolution of language,” says Hawkins, who wasn’t involved in the CSAIL research. “This model presents an exciting step toward formalizing and testing theories of those processes, demonstrating that both physical constraints from the human vocal tract and social pressures from communication are needed to explain the distribution of vocal imitations.”

Caren, Chandra, and Ma wrote the paper with two other CSAIL affiliates: Jonathan Ragan-Kelley, MIT Department of Electrical Engineering and Computer Science associate professor, and Joshua Tenenbaum, MIT Brain and Cognitive Sciences professor and Center for Brains, Minds, and Machines member. Their work was supported, in part, by the Hertz Foundation and the National Science Foundation. It was presented at SIGGRAPH Asia in early December.


Personal interests can influence how children’s brains respond to language

McGovern Institute neuroscientists use children’s interests to probe language in the brain.


A recent study from the McGovern Institute for Brain Research shows how interests can modulate language processing in children’s brains and paves the way for personalized brain research.

The paper, which appears in Imaging Neuroscience, was conducted in the lab of MIT professor and McGovern Institute investigator John Gabrieli, and led by senior author Anila D’Mello, a recent McGovern postdoc who is now an assistant professor at the University of Texas Southwestern Medical Center and the University of Texas at Dallas.

“Traditional studies give subjects identical stimuli to avoid confounding the results,” says Gabrieli, who is the Grover Hermann Professor of Health Sciences and Technology and a professor of brain and cognitive sciences at MIT. “However, our research tailored stimuli to each child’s interest, eliciting stronger — and more consistent — activity patterns in the brain’s language regions across individuals.” 

Funded by the Hock E. Tan and K. Lisa Yang Center for Autism Research in MIT’s Yang Tan Collective, this work unveils a new paradigm that challenges current methods and shows how personalization can be a powerful strategy in neuroscience. The paper’s co-first authors are Halie Olson, a postdoc at the McGovern Institute, and Kristina Johnson PhD ’21, an assistant professor at Northeastern University and former doctoral student at the MIT Media Lab. “Our research integrates participants’ lived experiences into the study design,” says Johnson. “This approach not only enhances the validity of our findings, but also captures the diversity of individual perspectives, often overlooked in traditional research.”

Taking interest into account

When it comes to language, our interests are like operators behind the switchboard. They guide what we talk about and who we talk to. Research suggests that interests are also potent motivators and can help improve language skills. For instance, children score higher on reading tests when the material covers topics that are interesting to them.

But neuroscience has shied away from using personal interests to study the brain, especially in the realm of language. This is mainly because interests, which vary between people, could throw a wrench into experimental control — a core principle that drives scientists to limit factors that can muddle the results.

Gabrieli, D’Mello, Olson, and Johnson ventured into this unexplored territory. The team wondered if tailoring language stimuli to children’s interests might lead to higher responses in language regions of the brain. “Our study is unique in its approach to control the kind of brain activity our experiments yield, rather than control the stimuli we give subjects,” says D’Mello. “This stands in stark contrast to most neuroimaging studies that control the stimuli but might introduce differences in each subject’s level of interest in the material.”

In their recent study, the authors recruited a cohort of 20 children to investigate how personal interests affected the way the brain processes language. Caregivers described their child’s interests to the researchers, spanning baseball, train lines, “Minecraft,” and musicals. During the study, children listened to audio stories tuned to their unique interests. They were also presented with audio stories about nature (this was not an interest among the children) for comparison. To capture brain activity patterns, the team used functional magnetic resonance imaging (fMRI), which measures changes in blood flow caused by underlying neural activity.

New insights into the brain

“We found that, when children listened to stories about topics they were really interested in, they showed stronger neural responses in language areas than when they listened to generic stories that weren’t tailored to their interests,” says Olson. “Not only does this tell us how interests affect the brain, but it also shows that personalizing our experimental stimuli can have a profound impact on neuroimaging results.”

The researchers noticed a particularly striking result. “Even though the children listened to completely different stories, their brain activation patterns were more overlapping with their peers when they listened to idiosyncratic stories compared to when they listened to the same generic stories about nature,” says D’Mello. This, she notes, points to how interests can boost both the magnitude and consistency of signals in language regions across subjects without changing how these areas communicate with each other.

Gabrieli noted another finding: “In addition to the stronger engagement of language regions for content of interest, there was also stronger activation in brain regions associated with reward and also with self-reflection.” Personal interests are individually relevant and can be rewarding, potentially driving higher activation in these regions during personalized stories.

These personalized paradigms might be particularly well-suited to studies of the brain in unique or neurodivergent populations. Indeed, the team is already applying these methods to study language in the brains of autistic children.

This study breaks new ground in neuroscience and serves as a prototype for future work that personalizes research to unearth further knowledge of the brain. In doing so, scientists can compile a more complete understanding of the type of information that is processed by specific brain circuits and more fully grasp complex functions such as language. 


A new way to determine whether a species will successfully invade an ecosystem

MIT physicists develop a predictive formula, based on bacterial communities, that may also apply to other types of ecosystems, including the human GI tract.


When a new species is introduced into an ecosystem, it may succeed in establishing itself, or it may fail to gain a foothold and die out. Physicists at MIT have now devised a formula that can predict which of those outcomes is most likely.

The researchers created their formula based on analysis of hundreds of different scenarios that they modeled using populations of soil bacteria grown in their laboratory. They now plan to test their formula in larger-scale ecosystems, including forests. This approach could also be helpful in predicting whether probiotics or fecal microbiota treatments (FMT) would successfully combat infections of the human GI tract.

“People eat a lot of probiotics, but many of them can never invade our gut microbiome at all, because if you introduce it, it does not necessarily mean that it can grow and colonize and benefit your health,” says Jiliang Hu SM ’19, PhD ’24, the lead author of the study.

MIT professor of physics Jeff Gore is the senior author of the paper, which appears today in the journal Nature Ecology and Evolution. Matthieu Barbier, a researcher at the Plant Health Institute Montpellier, and Guy Bunin, a professor of physics at Technion, are also authors of the paper.

Population fluctuations

Gore’s lab specializes in using microbes to analyze interspecies interactions in a controlled way, in hopes of learning more about how natural ecosystems behave. In previous work, the team has used bacterial populations to demonstrate how changing the environment in which the microbes live affects the stability of the communities they form.

In this study, the researchers wanted to study what determines whether an invasion by a new species will succeed or fail. In natural communities, ecologists have hypothesized that the more diverse an ecosystem is, the more it will resist an invasion, because most of the ecological niches will already be occupied and few resources are left for an invader.

However, in both natural and experimental systems, scientists have observed that this is not consistently true: While some highly diverse populations are resistant to invasion, other highly diverse populations are more likely to be invaded.

To explore why both of those outcomes can occur, the researchers set up more than 400 communities of soil bacteria, which were all native to the soil around MIT. The researchers established communities of 12 to 20 species of bacteria, and six days later, they added one randomly chosen species as the invader. On the 12th day of the experiment, they sequenced the genomes of all the bacteria to determine if the invader had established itself in the ecosystem.

In each community, the researchers also varied the nutrient levels in the culture medium on which the bacteria were grown. When nutrient levels were high, the microbes displayed strong interactions, characterized by heightened competition for food and other resources, or mutual inhibition through mechanisms such as pH-mediated cross-toxin effects. Some of these populations formed stable states in which the fraction of each microbe did not vary much over time, while others formed communities in which most of the species fluctuated in number.

The researchers found that these fluctuations were the most important factor in the outcome of the invasion. Communities that had more fluctuations tended to be more diverse, but they were also more likely to be invaded successfully.

“The fluctuation is not driven by changes in the environment, but it is internal fluctuation driven by the species interaction. And what we found is that the fluctuating communities are more readily invaded and also more diverse than the stable ones,” Hu says.

In some of the populations where the invader established itself, the other species remained, but in smaller numbers. In other populations, some of the resident species were outcompeted and disappeared completely. This displacement tended to happen more often in ecosystems where there were stronger competitive interactions between species.

In ecosystems that had more stable, less diverse populations, with stronger interactions between species, invasions were more likely to fail.

Regardless of whether the community was stable or fluctuating, the researchers found that the fraction of the original species that survived in the community before invasion predicts the probability of invasion success. This “survival fraction” could be estimated in natural communities by taking the ratio of the diversity within a local community (measured by the number of species in that area) to the regional diversity (number of species found in the entire region).
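As a minimal illustration of that proxy (using made-up species lists rather than the study’s data), the survival fraction is simply local diversity divided by regional diversity:

```python
# Minimal sketch: estimate the "survival fraction" proxy described above as
# local diversity divided by regional diversity. Species lists are made up.
regional_species = {"sp" + str(i) for i in range(1, 21)}   # 20 species in the region
local_species = {"sp1", "sp3", "sp5", "sp8", "sp13"}       # 5 coexist locally

survival_fraction = len(local_species) / len(regional_species)
print(f"Estimated survival fraction: {survival_fraction:.2f}")  # 0.25

# Per the study, communities where a larger fraction of the original species
# survived (the more diverse, fluctuating ones) were more readily invaded.
```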

“It would be exciting to study whether the local and regional diversity could be used to predict susceptibility to invasion in natural communities,” Gore says.

Predicting success

The researchers also found that under certain circumstances, the order in which species arrived in the ecosystem played a role in whether an invasion was successful. When the interactions between species were strong, the chances of a species becoming successfully incorporated went down when that species was introduced after other species had already become established.

When the interactions are weak, this “priority effect” disappears and the same stable equilibrium is reached no matter what order the microbes arrived in.

“Under a strong interaction regime, we found the invader has some disadvantage because it arrived later. This is of interest in ecology because people have always found that in some cases the order in which species arrived matters a lot, while in the other cases it doesn't matter,” Hu says.

The researchers now plan to try to replicate their findings in ecosystems for which species diversity data is available, including the human gut microbiome. Their formula could allow them to predict the success of probiotic treatment, in which beneficial bacteria are consumed orally, or FMT, an experimental treatment for severe infections such as C. difficile, in which beneficial bacteria from a donor’s stool are transplanted into a patient’s colon.

“Invasions can be harmful or can be good depending on the context,” Hu says. “In some cases, like probiotics, or FMT to treat C. difficile infection, we want the healthy species to invade successfully. Also for soil protection, people introduce probiotics or beneficial species to the soil. In that case people also want the invaders to succeed.”

The research was funded by the Schmidt Polymath Award and the Sloan Foundation.


An abundant phytoplankton feeds a global network of marine microbes

New findings illuminate how Prochlorococcus’ nightly “cross-feeding” plays a role in regulating the ocean’s capacity to cycle and store carbon.


One of the hardest-working organisms in the ocean is the tiny, emerald-tinged Prochlorococcus marinus. These single-celled “picoplankton,” which are smaller than a human red blood cell, can be found in staggering numbers throughout the ocean’s surface waters, making Prochlorococcus the most abundant photosynthesizing organism on the planet. (Collectively, Prochlorococcus fix as much carbon as all the crops on land.) Scientists continue to find new ways that the little green microbe is involved in the ocean’s cycling and storage of carbon.

Now, MIT scientists have discovered a new ocean-regulating ability in the small but mighty microbes: cross-feeding of DNA building blocks. In a study appearing today in Science Advances, the team reports that Prochlorococcus shed these extra compounds into their surroundings, where they are then “cross-fed,” or taken up by other ocean organisms, either as nutrients, energy, or for regulating metabolism. Prochlorococcus’ rejects, then, are other microbes’ resources.

What’s more, this cross-feeding occurs on a regular cycle: Prochlorococcus tend to shed their molecular baggage at night, when enterprising microbes quickly consume the cast-offs. For a microbe called SAR11, the most abundant bacteria in the ocean, the researchers found that the nighttime snack acts as a relaxant of sorts, forcing the bacteria to slow down their metabolism and effectively recharge for the next day.

Through this cross-feeding interaction, Prochlorococcus could be helping many microbial communities to grow sustainably, simply by giving away what it doesn’t need. And they’re doing so in a way that could set the daily rhythms of microbes around the world.

“The relationship between the two most abundant groups of microbes in ocean ecosystems has intrigued oceanographers for years,” says co-author and MIT Institute Professor Sallie “Penny” Chisholm, who played a role in the discovery of Prochlorococcus in 1986. “Now we have a glimpse of the finely tuned choreography that contributes to their growth and stability across vast regions of the oceans.”

Given that Prochlorococcus and SAR11 suffuse the surface oceans, the team suspects that the exchange of molecules from one to the other could amount to one of the major cross-feeding relationships in the ocean, making it an important regulator of the ocean carbon cycle.

“By looking at the details and diversity of cross-feeding processes, we can start to unearth important forces that are shaping the carbon cycle,” says the study’s lead author, Rogier Braakman, a research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).

Other MIT co-authors include Brandon Satinsky, Tyler O’Keefe, Shane Hogle, Jamie Becker, Robert Li, Keven Dooley, and Aldo Arellano, along with Krista Longnecker, Melissa Soule, and Elizabeth Kujawinski of Woods Hole Oceanographic Institution (WHOI).

Spotting castaways

Cross-feeding occurs throughout the microbial world, though the process has mainly been studied in close-knit communities. In the human gut, for instance, microbes are in close proximity and can easily exchange and benefit from shared resources.

By comparison, Prochlorococcus are free-floating microbes that are regularly tossed and mixed through the ocean’s surface layers. While scientists assume that the plankton are involved in some amount of cross-feeding, exactly how this occurs, and who would benefit, have historically been challenging to probe; any stuff that Prochlorococcus cast away would have vanishingly low concentrations, and be exceedingly difficult to measure.

But in work published in 2023, Braakman teamed up with scientists at WHOI, who pioneered ways to measure small organic compounds in seawater. In the lab, they grew various strains of Prochlorococcus under different conditions and characterized what the microbes released. They found that among the major “exudants,” or released molecules, were purines and pyridines, which are molecular building blocks of DNA. The molecules also happen to be nitrogen-rich — a fact that puzzled the team. Prochlorococcus are mainly found in ocean regions that are low in nitrogen, so it was assumed they’d want to retain any and all nitrogen-containing compounds they can. Why, then, were they instead throwing such compounds away?

Global symphony

In their new study, the researchers took a deep dive into the details of Prochlorococcus’ cross-feeding and how it influences various types of ocean microbes.

They set out to study how Prochlorococcus use purine and pyridine in the first place, before expelling the compounds into their surroundings. They compared published genomes of the microbes, looking for genes that encode purine and pyridine metabolism. Tracing the genes forward through the genomes, the team found that once the compounds are produced, they are used to make DNA and replicate the microbes’ genome. Any leftover purine and pyridine is recycled and used again, though a fraction of the stuff is ultimately released into the environment. Prochlorococcus appear to make the most of the compounds, then cast off what they can’t.

The team also looked to gene expression data and found that genes involved in recycling purines and pyridines peak several hours after the recognized peak in genome replication that occurs at dusk. The question then was: What could be benefiting from this nightly shedding?

For this, the team looked at the genomes of more than 300 heterotrophic microbes — organisms that consume organic carbon rather than making it themselves through photosynthesis. They suspected that such carbon-feeders could be likely consumers of Prochlorococcus’ organic rejects. They found that most of the heterotrophs contained genes for taking up either purines or pyridines, or in some cases both, suggesting microbes have evolved along different paths in terms of how they cross-feed.

The group zeroed in on one purine-preferring microbe, SAR11, as it is the most abundant heterotrophic microbe in the ocean. When they then compared the genes across different strains of SAR11, they found that various types use purines for different purposes, from simply taking them up and using them intact to breaking them down for their energy, carbon, or nitrogen. What could explain the diversity in how the microbes were using Prochlorococcus’ cast-offs?

It turns out the local environment plays a big role. Braakman and his collaborators performed a metagenome analysis, comparing the collectively sequenced genomes of all microbes in over 600 seawater samples from around the world and focusing on SAR11 bacteria. The metagenome sequences were collected alongside measurements of environmental conditions at the geographic locations where the samples were taken. This analysis showed that the bacteria gobble up purines for their nitrogen when nitrogen in seawater is low, and for their carbon or energy when nitrogen is in surplus — revealing the selective pressures shaping these communities in different ocean regimes.

“The work here suggests that microbes in the ocean have developed relationships that advance their growth potential in ways we don’t expect,” says co-author Kujawinski.

Finally, the team carried out a simple experiment in the lab to see if they could directly observe a mechanism by which purines act on SAR11. They grew the bacteria in culture, exposed them to various concentrations of purine, and unexpectedly found that it caused the cells to slow down their normal metabolic activities and even their growth. However, when the researchers put these same cells under environmentally stressful conditions, the cells kept growing, strong and healthy, as if the metabolic pause induced by purines had primed them for growth and helped them avoid the effects of the stress.

“When you think about the ocean, where you see this daily pulse of purines being released by Prochlorococcus, this provides a daily inhibition signal that could be causing a pause in SAR11 metabolism, so that the next day when the sun comes out, they are primed and ready,” Braakman says. “So we think Prochlorococcus is acting as a conductor in the daily symphony of ocean metabolism, and cross-feeding is creating a global synchronization among all these microbial cells.”

This work was supported, in part, by the Simons Foundation and the National Science Foundation.


At MIT, Clare Grey stresses battery development to electrify the planet

In her 2024 Dresselhaus Lecture, the Cambridge University professor of chemistry describes her work making batteries more reliable and sustainable.


“How do we produce batteries at the cost that is suitable for mass adoption globally, and how do you do this to electrify the planet?” Clare Grey asked an audience of over 450 combined in-person and virtual attendees at the sixth annual Dresselhaus Lecture, organized by MIT.nano on Nov. 18. “The biggest challenge is, how do you make batteries to allow more renewables on the grid.”

These questions emphasized one of Grey’s key messages in her presentation: The future of batteries aligns with global climate efforts. She addressed sustainability issues with lithium mining and stressed the importance of increasing the variety of minerals that can be used in batteries. But the talk primarily focused on advanced imaging techniques to produce insights into the behaviors of materials that will guide the development of new technology. “We need to come up with new chemistries and new materials that are both more sustainable and safer,” she said, as well as think about other issues like secondhand use, which requires batteries to be made to last longer.

Better understanding will produce better batteries

“Batteries have really transformed the way we live,” Grey said. “In order to improve batteries, we need to understand how they work, we need to understand how they operate, and we need to understand how they degrade.”

Grey, a Royal Society Research Professor and the Geoffrey Moorhouse Gibson Professor of Chemistry at Cambridge University, introduced new optical methods for studying batteries while they are operating, visualizing reactions down to the nanoscale. “It is much easier to study an operating device in-situ,” she said. “When you take batteries apart, sometimes there are processes that don’t survive disassembling.”

Grey presented work coming out of her research group that uses in-situ metrologies to better understand different dynamics and transformational phenomena of various materials. For example, in-situ nuclear magnetic resonance can identify issues with wrapping lithium with silicon (it does not form a passivating layer) and demonstrate why anodes cannot be replaced with sodium (it is the wrong size molecule). Grey discussed the value of being able to use in-situ metrology to look at higher energy density materials that are more sustainable such as lithium sulfur or lithium air batteries.

The lecture connected local structure to mechanisms and to how materials intercalate. Grey spoke about using interferometric scattering (iSCAT) microscopy, typically used by biologists, to follow how ions are pulled in and out of materials. Sharing iSCAT images of graphite, she gave a shout-out to the late Institute Professor and lecture namesake Mildred Dresselhaus when discussing nucleation, the process by which atoms come together to form new structures, which is important when considering new, more sustainable materials for batteries.

“Millie, in her solid-state physics class for undergrads, nicely explained what’s going on here,” Grey explained. “There is a dramatic change in the conductivity as you go from diluted state to the dense state. The conductivity goes up. With this information, you can explore nucleation.”

Designing for the future

“How do we design for fast charging?” Grey asked, discussing gradient spectroscopy to visualize different materials. “We need to find a material that operates at a high enough voltage to avoid lithium plating and has high lithium mobility.”

“To return to the theme of graphite and Millie Dresselhaus,” said Grey, “I’ve been trying to really understand what is the nature of the passivating layer that grows on both graphite and lithium metal. Can we enhance this layer?” In the question-and-answer session that followed, Grey spoke about the pros and cons of incorporating nitrogen in the anode.

After the lecture, Grey was joined by Yet-Ming Chiang, the Kyocera Professor of Ceramics in the MIT Department of Materials Science and Engineering, for a fireside chat. The conversation touched on political and academic attitudes toward climate change in the United Kingdom, and audience members applauded Grey’s development of imaging methods that allow researchers to look at the temperature dependent response of battery materials.

This was the sixth Dresselhaus Lecture, named in honor of MIT Institute Professor Mildred Dresselhaus, known to many as the “Queen of Carbon Science.” “It’s truly wonderful to be here to celebrate the life and the science of Millie Dresselhaus,” said Grey. “She was a very strong advocate for women in science. I’m honored to be here to give a lecture in honor of her.”


High school teams compete at 2024 MIT Science Bowl Invitational

A celebration of scientific acumen and teamwork brings together top students from across the country.


A quiet intensity held the room on edge as the clock ticked down in the final moments of the 2024 MIT Science Bowl Invitational. Montgomery Blair High School clung to a razor-thin lead over Mission San Jose High School — 70 to 60 — with just two minutes remaining.

Mission San Jose faced a pivotal bonus opportunity that could tie the score. The moderator’s steady voice filled the room as he read the question. Mission San Jose’s team of four huddled together, pencils moving quickly across their white scratch paper. Across the stage, Montgomery Blair’s players sat still, their eyes darting between the scoreboard and the opposing team attempting to close the gap.

Mission San Jose team captain Advaith Mopuri called out their final answer.

“Incorrect,” the moderator announced.

Montgomery Blair’s team collectively exhaled, the tension breaking as they sealed their championship victory, but the gravity of those final moments when everything was on the line lingered — a testament to just how close the competition had been. Their showdown in the final round was a fitting culmination of the event, showcasing the mental agility and teamwork honed through months of practice.

“That final round was so tense. It came down to the final question,” says Jonathan Huang, a senior undergraduate at MIT and the co-president of the MIT Science Bowl Club. “It’s rare for it to come down to the very last question, so that was really exciting.”

A tournament of science and strategy

Now in its sixth year at the high school level, the MIT Science Bowl Invitational welcomed 48 teams from across the country this year for a full day of competition. The buzzer-style tournament challenged students on topics that spanned disciplines such as biology, chemistry, and physics. The rapid pace and diverse subject matter demanded a combination of deep knowledge, quick reflexes, and strategic teamwork.

Montgomery Blair’s hard-fought victory marked the culmination of months of preparation. “It was so exciting,” says Katherine Wang, Montgomery Blair senior and Science Bowl team member. “I can’t even describe it. You never think anything like that would happen to you.”

The volunteers who make it happen

Behind the scenes, the invitational is powered by a team of more than 120 dedicated volunteers, many of them current MIT students. From moderating matches to coordinating logistics, these volunteers form the backbone of the invitational.

Preparation for the competition starts months in advance. “By the time summer started, we already had to figure out who was going to be the head writers for each subject,” Huang says. “Every week over the summer, volunteers spent their own time to start writing up questions.”

“Every single question you hear today was written by a volunteer,” said Paolo Adajar, an MIT graduate student who served as a question judge, among other roles, this year and is a former president of the MIT Science Bowl Club. Adajar, who competed in the National Science Bowl as a high school student, has been involved in the MIT Invitational since it began in 2019. “There’s just something so fun about the games and just watching people be excited to get a question right.”

For many volunteers, the event is a chance to reconnect with a shared community. “It’s so nice to get together with the community every year,” says Emily Liu, a master’s student in computer science at MIT and a veteran volunteer. “And I’m always pleasantly surprised to see how much I remember.”

Looking ahead

For competitors, the invitational offers more than just a chance to win. It’s an opportunity to connect with peers who share their passion for science, to experience the energy of MIT’s campus, and to sharpen skills they’ll carry into future endeavors. 

As the crowd dispersed and the auditorium emptied, the spirit of the competition remained — a testament to the dedication, curiosity, and camaraderie that define the MIT Science Bowl Invitational.


A new computational model can predict antibody structures more accurately

Using this model, researchers may be able to identify antibody drugs that can target a variety of infectious diseases.


By adapting artificial intelligence models known as large language models, researchers have made great progress in their ability to predict a protein’s structure from its sequence. However, this approach hasn’t been as successful for antibodies, in part because of the hypervariability seen in this type of protein.

To overcome that limitation, MIT researchers have developed a computational technique that allows large language models to predict antibody structures more accurately. Their work could enable researchers to sift through millions of possible antibodies to identify those that could be used to treat SARS-CoV-2 and other infectious diseases.

“Our method allows us to scale, whereas others do not, to the point where we can actually find a few needles in the haystack,” says Bonnie Berger, the Simons Professor of Mathematics, the head of the Computation and Biology group in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), and one of the senior authors of the new study. “If we could help to stop drug companies from going into clinical trials with the wrong thing, it would really save a lot of money.”

The technique, which focuses on modeling the hypervariable regions of antibodies, also holds potential for analyzing entire antibody repertoires from individual people. This could be useful for studying the immune response of people who are super responders to diseases such as HIV, to help figure out why their antibodies fend off the virus so effectively.

Bryan Bryson, an associate professor of biological engineering at MIT and a member of the Ragon Institute of MGH, MIT, and Harvard, is also a senior author of the paper, which appears this week in the Proceedings of the National Academy of Sciences. Rohit Singh, a former CSAIL research scientist who is now an assistant professor of biostatistics and bioinformatics and cell biology at Duke University, and Chiho Im ’22 are the lead authors of the paper. Researchers from Sanofi and ETH Zurich also contributed to the research.

Modeling hypervariability

Proteins consist of long chains of amino acids, which can fold into an enormous number of possible structures. In recent years, predicting these structures has become much easier to do, using artificial intelligence programs such as AlphaFold. Many of these programs, such as ESMFold and OmegaFold, are based on large language models, which were originally developed to analyze vast amounts of text, allowing them to learn to predict the next word in a sequence. This same approach can work for protein sequences — by learning which protein structures are most likely to be formed from different patterns of amino acids.

However, this technique doesn’t always work on antibodies, especially on a segment of the antibody known as the hypervariable region. Antibodies usually have a Y-shaped structure, and these hypervariable regions are located in the tips of the Y, where they detect and bind to foreign proteins, also known as antigens. The bottom part of the Y provides structural support and helps antibodies to interact with immune cells.

Hypervariable regions vary in length but usually contain fewer than 40 amino acids. It has been estimated that the human immune system can produce up to 1 quintillion different antibodies by changing the sequence of these amino acids, helping to ensure that the body can respond to a huge variety of potential antigens. Those sequences aren’t evolutionarily constrained the same way that other protein sequences are, so it’s difficult for large language models to learn to predict their structures accurately.

“Part of the reason why language models can predict protein structure well is that evolution constrains these sequences in ways in which the model can decipher what those constraints would have meant,” Singh says. “It’s similar to learning the rules of grammar by looking at the context of words in a sentence, allowing you to figure out what it means.”

To model those hypervariable regions, the researchers created two modules that build on existing protein language models. One of these modules was trained on hypervariable sequences from about 3,000 antibody structures found in the Protein Data Bank (PDB), allowing it to learn which sequences tend to generate similar structures. The other module was trained on data that correlates about 3,700 antibody sequences to how strongly they bind three different antigens.

The resulting computational model, known as AbMap, can predict antibody structures and binding strength based on their amino acid sequences. To demonstrate the usefulness of this model, the researchers used it to predict antibody structures that would strongly neutralize the spike protein of the SARS-CoV-2 virus.
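The paper’s actual architecture builds on pretrained protein language models, so the toy sketch below is only a schematic of the two-headed idea: embed a hypervariable sequence, then attach one head supervised on structural similarity and another on binding strength. The encoding, layer sizes, and example sequence are all hypothetical.

```python
# Hedged schematic of the general idea (NOT the actual AbMap implementation):
# encode an antibody's hypervariable region, then attach two small heads,
# one for a structure-similarity embedding and one for binding strength.
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def encode(seq, max_len=40):
    """One-hot encode a hypervariable sequence, truncated/padded to max_len."""
    x = torch.zeros(max_len, len(AMINO_ACIDS))
    for i, aa in enumerate(seq[:max_len]):
        x[i, AA_INDEX[aa]] = 1.0
    return x.flatten()

class ToyAntibodyModel(nn.Module):
    def __init__(self, max_len=40, dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(max_len * len(AMINO_ACIDS), dim), nn.ReLU())
        self.structure_head = nn.Linear(dim, 64)  # embedding for structural similarity
        self.binding_head = nn.Linear(dim, 1)     # predicted binding strength

    def forward(self, x):
        h = self.backbone(x)
        return self.structure_head(h), self.binding_head(h)

model = ToyAntibodyModel()
seq = "GFTFSSYAMS"                                # hypothetical hypervariable fragment
structure_emb, binding = model(encode(seq).unsqueeze(0))
print(structure_emb.shape, binding.shape)         # [1, 64] and [1, 1]
```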

The researchers started with a set of antibodies that had been predicted to bind to this target, then generated millions of variants by changing the hypervariable regions. Their model was able to identify antibody structures that would be the most successful, much more accurately than traditional protein-structure models based on large language models.

Then, the researchers took the additional step of clustering the antibodies into groups that had similar structures. They chose antibodies from each of these clusters to test experimentally, working with researchers at Sanofi. Those experiments found that 82 percent of these antibodies had better binding strength than the original antibodies that went into the model.

Identifying a variety of good candidates early in the development process could help drug companies avoid spending a lot of money on testing candidates that end up failing later on, the researchers say.

“They don’t want to put all their eggs in one basket,” Singh says. “They don’t want to say, I’m going to take this one antibody and take it through preclinical trials, and then it turns out to be toxic. They would rather have a set of good possibilities and move all of them through, so that they have some choices if one goes wrong.”

Comparing antibodies

Using this technique, researchers could also try to answer some longstanding questions about why different people respond to infection differently. For example, why do some people develop much more severe forms of Covid, and why do some people who are exposed to HIV never become infected?

Scientists have been trying to answer those questions by performing single-cell RNA sequencing of immune cells from individuals and comparing them — a process known as antibody repertoire analysis. Previous work has shown that antibody repertoires from two different people may overlap as little as 10 percent. However, sequencing doesn’t offer as comprehensive a picture of antibody performance as structural information, because two antibodies that have different sequences may have similar structures and functions.

The new model can help to solve that problem by quickly generating structures for all of the antibodies found in an individual. In this study, the researchers showed that when structure is taken into account, there is much more overlap between individuals than the 10 percent seen in sequence comparisons. They now plan to further investigate how these structures may contribute to the body’s overall immune response against a particular pathogen.

“This is where a language model fits in very beautifully because it has the scalability of sequence-based analysis, but it approaches the accuracy of structure-based analysis,” Singh says.

The research was funded by Sanofi and the Abdul Latif Jameel Clinic for Machine Learning in Health. 


MIT scientists pin down the origins of a fast radio burst

The fleeting cosmic firework likely emerged from the turbulent magnetosphere around a far-off neutron star.


Fast radio bursts are brief and brilliant explosions of radio waves emitted by extremely compact objects such as neutron stars and possibly black holes. These fleeting fireworks last for just a thousandth of a second and can carry an enormous amount of energy — enough to briefly outshine entire galaxies.

Since the first fast radio burst (FRB) was discovered in 2007, astronomers have detected thousands of FRBs, whose locations range from within our own galaxy to as far as 8 billion light-years away. Exactly how these cosmic radio flares are launched is a highly contested unknown.

Now, astronomers at MIT have pinned down the origins of at least one fast radio burst using a novel technique that could do the same for other FRBs. In their new study, appearing today in the journal Nature, the team focused on FRB 20221022A — a previously discovered fast radio burst that was detected from a galaxy about 200 million light-years away.

The team zeroed in further to determine the precise location of the radio signal by analyzing its “scintillation,” similar to how stars twinkle in the night sky. The scientists studied changes in the FRB’s brightness and determined that the burst must have originated from the immediate vicinity of its source, rather than much further out, as some models have predicted.

The team estimates that FRB 20221022A exploded from a region that is extremely close to a rotating neutron star, 10,000 kilometers away at most. That’s less than the distance between New York and Singapore. At such close range, the burst likely emerged from the neutron star’s magnetosphere — a highly magnetic region immediately surrounding the ultracompact star.

The team’s findings provide the first conclusive evidence that a fast radio burst can originate from the magnetosphere, the highly magnetic environment immediately surrounding an extremely compact object.

“In these environments of neutron stars, the magnetic fields are really at the limits of what the universe can produce,” says lead author Kenzie Nimmo, a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research. “There’s been a lot of debate about whether this bright radio emission could even escape from that extreme plasma.”

“Around these highly magnetic neutron stars, also known as magnetars, atoms can’t exist — they would just get torn apart by the magnetic fields,” says Kiyoshi Masui, associate professor of physics at MIT. “The exciting thing here is, we find that the energy stored in those magnetic fields, close to the source, is twisting and reconfiguring such that it can be released as radio waves that we can see halfway across the universe.”

The study’s MIT co-authors include Adam Lanman, Shion Andrew, Daniele Michilli, and Kaitlyn Shin, along with collaborators from multiple institutions.

Burst size

Detections of fast radio bursts have ramped up in recent years, due to the Canadian Hydrogen Intensity Mapping Experiment (CHIME). The radio telescope array comprises four large, stationary receivers, each shaped like a half-pipe, that are tuned to detect radio emissions within a range that is highly sensitive to fast radio bursts.

Since 2020, CHIME has detected thousands of FRBs from all over the universe. While scientists generally agree that the bursts arise from extremely compact objects, the exact physics driving the FRBs is unclear. Some models predict that fast radio bursts should come from the turbulent magnetosphere immediately surrounding a compact object, while others predict that the bursts should originate much further out, as part of a shockwave that propagates away from the central object.

To distinguish between the two scenarios, and determine where fast radio bursts arise, the team considered scintillation — the effect that occurs when light from a small, bright source, such as a star, filters through some medium, such as a galaxy’s gas. As the starlight filters through the gas, it bends in ways that make it appear, to a distant observer, as if the star is twinkling. The smaller or the farther away an object is, the more it twinkles. The light from larger or closer objects, such as planets in our own solar system, experiences less bending, and therefore does not appear to twinkle.

The team reasoned that if they could estimate the degree to which an FRB scintillates, they might determine the relative size of the region from where the FRB originated. The smaller the region, the closer in the burst would be to its source, and the more likely it is to have come from a magnetically turbulent environment. The larger the region, the farther the burst would be, giving support to the idea that FRBs stem from far-out shockwaves.

Twinkle pattern

To test their idea, the researchers looked to FRB 20221022A, a fast radio burst that was detected by CHIME in 2022. The signal lasts about two milliseconds, and is a relatively run-of-the-mill FRB, in terms of its brightness. However, the team’s collaborators at McGill University found that FRB 20221022A exhibited one standout property: The light from the burst was highly polarized, with the angle of polarization tracing a smooth S-shaped curve.  This pattern is interpreted as evidence that the FRB emission site is rotating — a characteristic previously observed in pulsars, which are highly magnetized, rotating neutron stars.

To see a similar polarization in fast radio bursts was a first, suggesting that the signal may have arisen from the close-in vicinity of a neutron star. The McGill team’s results are reported in a companion paper today in Nature.

The MIT team realized that if FRB 20221022A originated from close to a neutron star, they should be able to prove this, using scintillation.

In their new study, Nimmo and her colleagues analyzed data from CHIME and observed steep variations in brightness that signaled scintillation — in other words, the FRB was twinkling. They confirmed that there is gas somewhere between the telescope and FRB that is bending and filtering the radio waves. The team then determined where this gas could be located, confirming that gas within the FRB’s host galaxy was responsible for some of the scintillation observed. This gas acted as a natural lens, allowing the researchers to zoom in on the FRB site and determine that the burst originated from an extremely small region, estimated to be about 10,000 kilometers wide.

“This means that the FRB is probably within hundreds of thousands of kilometers from the source,” Nimmo says. “That’s very close. For comparison, we would expect the signal would be more than tens of millions of kilometers away if it originated from a shockwave, and we would see no scintillation at all.”

“Zooming in to a 10,000-kilometer region, from a distance of 200 million light years, is like being able to measure the width of a DNA helix, which is about 2 nanometers wide, on the surface of the moon,” Masui says. “There’s an amazing range of scales involved.”
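That comparison can be sanity-checked with a quick small-angle calculation. The short sketch below is illustrative only — the constants are rounded, and the 10,000-kilometer and 2-nanometer figures are simply the ones quoted above:

```python
# Sanity check of the scale analogy: a ~10,000 km region seen from
# ~200 million light-years subtends roughly the same angle as a ~2 nm
# DNA helix seen at the distance of the moon. Rounded constants.

LIGHT_YEAR_M = 9.46e15        # meters per light-year
MOON_DISTANCE_M = 3.84e8      # average Earth-moon distance in meters

frb_region_m = 1.0e7                       # ~10,000 km emission region
frb_distance_m = 200e6 * LIGHT_YEAR_M      # ~200 million light-years
dna_width_m = 2.0e-9                       # ~2 nm DNA helix width

# Small-angle approximation: angle (radians) ~ size / distance
angle_frb = frb_region_m / frb_distance_m
angle_dna = dna_width_m / MOON_DISTANCE_M

print(f"FRB region:       {angle_frb:.1e} rad")
print(f"DNA on the moon:  {angle_dna:.1e} rad")
# Both come out near 5e-18 radians, so the analogy holds.
```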

The team’s results, combined with the findings from the McGill team, rule out the possibility that FRB 20221022A emerged from the outskirts of a compact object. Instead, the studies prove for the first time that fast radio bursts can originate from very close to a neutron star, in highly chaotic magnetic environments.

“These bursts are always happening, and CHIME detects several a day,” Masui says. “There may be a lot of diversity in how and where they occur, and this scintillation technique will be really useful in helping to disentangle the various physics that drive these bursts.”

“The pattern traced by the polarization angle was so strikingly similar to that seen from pulsars in our own Milky Way Galaxy that there was some initial concern that the source wasn't actually an FRB but a misclassified pulsar,” says Ryan Mckinven, a co-author of the study from McGill University. “Fortunately, these concerns were put to rest with the help of data collected from an optical telescope that confirmed the FRB originated in a galaxy millions of light-years away.”

“Polarimetry is one of the few tools we have to probe these distant sources,” Mckinven explains. “This result will likely inspire follow-up studies of similar behavior in other FRBs and prompt theoretical efforts to reconcile the differences in their polarized signals.”

This research was supported by various institutions including the Canada Foundation for Innovation, the Dunlap Institute for Astronomy and Astrophysics at the University of Toronto, the Canadian Institute for Advanced Research, the Trottier Space Institute at McGill University, and the University of British Columbia.


MIT’s top research stories of 2024

Stories on tamper-proof ID tags, sound-suppressing silk, and generative AI’s understanding of the world were some of the most popular topics on MIT News.


MIT’s research community had another year full of scientific and technological advances in 2024. To celebrate the achievements of the past twelve months, MIT News highlights some of our most popular stories from this year. We’ve also rounded up the year’s top MIT community-related stories.


MIT-Kalaniyot launches programs for visiting Israeli scholars

Inviting recent postdocs and sabbatical-eligible faculty to pursue their research at MIT, new programs envision eventually supporting 16 Israeli scholars on campus annually.


Over the past 14 months, as the impact of the ongoing Israel-Gaza war has rippled across the globe, a faculty-led initiative has emerged to support MIT students and staff by creating a community that transcends ethnicity, religion, and political views. Named for a flower that blooms along the Israel-Gaza border, MIT-Kalaniyot began hosting weekly community lunches that typically now draw about 100 participants. These gatherings have gained the interest of other universities seeking to help students not only cope with but thrive through troubled times, with some moving to replicate MIT’s model on their own campuses.

Now, scholars at Israel’s nine state-recognized universities will be able to compete for MIT-Kalaniyot fellowships designed to allow Israel’s top researchers to come to MIT for collaboration and training, advancing research while contributing to a better understanding of their country.

The MIT-Kalaniyot Postdoctoral Fellows Program will support scholars who have recently graduated from Israeli PhD programs to continue their postdoctoral training at MIT. Meanwhile, the new MIT-Kalaniyot Sabbatical Scholars Program will provide faculty and researchers holding sabbatical-eligible appointments at Israeli research institutions with fellowships for two academic terms at MIT.

The announcement of the fellowships through the association of Israeli university presidents drew an enthusiastic response.

“We’ve received many emails, from questions about the program to messages of gratitude. People have told us that, during a time of so much negativity, seeing such a top-tier academic program emerge feels like a breath of fresh air,” says Or Hen, the Class of 1956 Associate Professor of Physics and associate director of the Laboratory for Nuclear Science, who co-founded MIT-Kalaniyot with Ernest Fraenkel, the Grover M. Hermann Professor in Health Sciences and Technology.

Hen adds that the response from potential program donors has been positive, as well.

“People have been genuinely excited to learn about forward-thinking efforts and how they can simultaneously support both MIT and Israeli science,” he says. “We feel truly privileged to be part of this meaningful work.”

MIT-Kalaniyot is “a faculty-led initiative that emerged organically as we came to terms with some of the challenges that MIT was facing trying to keep focusing on its mission during a very difficult period for the U.S., and obviously for Israelis and Palestinians,” Fraenkel says.

As the MIT-Kalaniyot Program gained momentum, he adds, “we started talking about positive things faculty can do to help MIT fulfill its mission and then help the world, and we recognized many of the challenges could actually be helped by bringing more brilliant scholars from Israel to MIT to do great research and to humanize the face of Israelis so that people who interact with them can see them, not as some foreign entity, but as the talented person working down the hallway.”

“MIT has a long tradition of connecting scholarly communities around the world,” says MIT President Sally Kornbluth. “Programs like this demonstrate the value of bringing people and cultures together, in pursuit of new ideas and understanding.”    

Open to applicants in the humanities, architecture, management, engineering, and science, both fellowship programs aim to embrace Israel’s diverse demographics by encouraging applications from all communities and minority groups throughout Israel.

Fraenkel notes that because Israeli universities reflect the diversity of the country, he expects scholars who identify as Israeli Arabs, Palestinian citizens of Israel, and others could be among the top candidates applying and ultimately selected for MIT-Kalaniyot fellowships. 

MIT is also expanding its Global MIT At-Risk Fellows Program (GMAF), which began last year with recruitment of scholars from Ukraine, to bring Palestinian scholars to campus next fall. Fraenkel and Hen noted their close relationship with GMAF-Palestine director Kamal Youcef-Toumi, a professor in MIT’s Department of Mechanical Engineering.  

“While the programs are independent of each other, we value collaboration at MIT and are hoping to find positive ways that we can interact with each other,” Fraenkel says.

Growing alongside MIT-Kalaniyot’s fellowship programs will be new Kalaniyot chapters at universities such as the University of Pennsylvania and Dartmouth College, where programs have already begun, and others where activity is starting up. MIT’s role in inspiring these efforts, Hen and Fraenkel say, is a key aspect of the Kalaniyot story.

“We formed a new model of faculty-led communities,” Hen says. “As faculty, our roles typically center on teaching, mentoring, and research. After October 7 happened, we saw what was happening around campus and across the nation and realized that our roles had to expand. We had to go beyond the classroom and the lab to build deeper connections within the community that transcends traditional academic structures. This faculty-led approach has become the essence of MIT-Kalaniyot, and is now inspiring similar efforts across the nation.”

Once the programs are at scale, MIT plans to bring four MIT-Kalaniyot Postdoctoral Fellows to campus annually (for three years each), as well as four MIT-Kalaniyot Sabbatical Scholars, for a total of 16 visiting Israeli scholars at any one time.

“We also hope that when they go back, they will be able to maintain their research ties with MIT, so we plan to give seed grants to encourage collaboration after someone leaves,” Fraenkel says. “I know for a lot of our postdocs, their time at MIT is really critical for making networks, regardless of where they come from or where they go. Obviously, it’s harder when you’re across the ocean in a very challenging region, and so I think for both programs it would be great to be able to maintain those intellectual ties and collaborate beyond the term of their fellowships.”

A common thread between the new Kalaniyot programs and GMAF-Palestine, Hen says, is to rise beyond differences that have been voiced post-Oct. 7 and refocus on the Institute’s core research mission.

“We're bringing in the best scholars from the region — Jews, Israelis, Arabs, Palestinians — and normalizing interactions with them and among them through collaborative research,” Hen says. “Our mission is clear: to focus on academic excellence by bringing outstanding talent to MIT and reinforcing that we are here to advance research in service of humanity.”


MIT affiliates receive 2025 IEEE honors

Five MIT faculty and staff, along with 19 additional alumni, are honored for electrical engineering and computer science advances.


The IEEE recently announced the winners of its prestigious 2025 medals, technical awards, and fellowships. Four MIT faculty members, one staff member, and 19 alumni were recognized.

Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health within the Department of Electrical Engineering and Computer Science (EECS) at MIT, received the IEEE Frances E. Allen Medal for “innovative machine learning algorithms that have led to advances in human language technology and demonstrated impact on the field of medicine.” Barzilay focuses on machine learning algorithms for modeling molecular properties in the context of drug design, with the goal of elucidating disease biochemistry and accelerating the development of new therapeutics. In the field of clinical AI, she focuses on algorithms for early cancer diagnostics. She is also the AI faculty lead within the MIT Abdul Latif Jameel Clinic for Machine Learning in Health and an affiliate of the Computer Science and Artificial Intelligence Laboratory, Institute for Medical Engineering and Science, and Koch Institute for Integrative Cancer Research. Barzilay is a member of the National Academy of Engineering, the National Academy of Medicine, and the American Academy of Arts and Sciences. She has earned the MacArthur Fellowship, MIT’s Jamieson Award for excellence in teaching, and the Association for the Advancement of Artificial Intelligence’s $1 million Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity. Barzilay is a fellow of AAAI, ACL, and AIMBE.

James J. Collins, the Termeer Professor of Medical Engineering and Science, professor of biological engineering at MIT, and member of the Harvard-MIT Health Sciences and Technology faculty, earned the 2025 IEEE Medal for Innovations in Healthcare Technology for his work in “synthetic gene circuits and programmable cells, launching the field of synthetic biology, and impacting healthcare applications.” He is a core founding faculty member of the Wyss Institute for Biologically Inspired Engineering at Harvard University and an Institute Member of the Broad Institute of MIT and Harvard. Collins is known as a pioneer in synthetic biology, and currently focuses on employing engineering principles to model, design, and build synthetic gene circuits and programmable cells to create novel classes of diagnostics and therapeutics. His patented technologies have been licensed by over 25 biotech, pharma, and medical device companies, and he has co-founded several companies, including Synlogic, Senti Biosciences, Sherlock Biosciences, Cellarity, and the nonprofit Phare Bio. Collins’ many accolades include the MacArthur “Genius” Award, the Dickson Prize in Medicine, and election to the National Academies of Sciences, Engineering, and Medicine.

Roozbeh Jafari, principal staff member in MIT Lincoln Laboratory's Biotechnology and Human Systems Division, was elected IEEE Fellow for his “contributions to sensors and systems for digital health paradigms.” Jafari seeks to establish impactful and highly collaborative programs between Lincoln Laboratory, MIT campus, and other U.S. academic entities to promote health and wellness for national security and public health. His research interests are wearable-computer design, sensors, systems, and AI for digital health, most recently focusing on digital twins for precision health. He has published more than 200 refereed papers and served as general chair and technical program committee chair for several flagship conferences focused on wearable computers. Jafari has received a National Science Foundation Faculty Early Career Development (CAREER) Award (2012), the IEEE Real-Time and Embedded Technology and Applications Symposium Best Paper Award (2011), the IEEE Andrew P. Sage Best Transactions Paper Award (2014), and the Association for Computing Machinery Transactions on Embedded Computing Systems Best Paper Award (2019), among other honors.

William Oliver SM ’97, the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science and professor of physics at MIT, was elected an IEEE Fellow for his “contributions to superconductive quantum computing technology and its teaching.” Director of the MIT Center for Quantum Engineering and associate director of the MIT Research Laboratory of Electronics, Oliver leads the Engineering Quantum Systems (EQuS) group at MIT. His research focuses on superconducting qubits, their use in small-scale quantum processors, and the development of cryogenic packaging and control electronics. The EQuS group closely collaborates with the Quantum Information and Integrated Nanosystems Group at Lincoln Laboratory, where Oliver was previously a staff member and a Laboratory Fellow from 2017 to 2023. Through MIT xPRO, Oliver created four online professional development courses addressing the fundamentals and practical realities of quantum computing. He is a member of the National Quantum Initiative Advisory Committee and has published more than 130 journal articles and seven book chapters. Inventor or co-inventor on more than 10 patents, he is a fellow of the American Association for the Advancement of Science and the American Physical Society; serves on the U.S. Committee for Superconducting Electronics; and is a lead editor for the IEEE Applied Superconductivity Conference.

Daniela Rus, director of the MIT Computer Science and Artificial Intelligence Laboratory, MIT Schwarzman College of Computing deputy dean of research, and the Andrew (1956) and Erna Viterbi Professor within the Department of Electrical Engineering and Computer Science, was awarded the IEEE Edison Medal for “sustained leadership and pioneering contributions in modern robotics.” Rus’ research in robotics, artificial intelligence, and data science focuses primarily on developing the science and engineering of autonomy, where she envisions groups of robots interacting with each other and with people to support humans with cognitive and physical tasks. Rus is a Class of 2002 MacArthur Fellow; a fellow of the Association for Computing Machinery, the Association for the Advancement of Artificial Intelligence, and IEEE; and a member of the National Academy of Engineering and the American Academy of Arts and Sciences.

Nineteen additional MIT alumni were also recognized.

Steve Mann PhD ’97, a graduate of the Program in Media Arts and Sciences, received the Masaru Ibuka Consumer Technology Award “for contributions to the advancement of wearable computing and high dynamic range imaging.” He founded the MIT Wearable Computing Project and is currently professor of computer engineering at the University of Toronto as well as an IEEE Fellow.

Thomas Louis Marzetta ’72 PhD ’78, a graduate of the Department of Electrical Engineering and Computer Science, received the Eric E. Sumner Award “for originating the Massive MIMO technology in wireless communications.” Marzetta is a distinguished industry professor at New York University’s (NYU) Tandon School of Engineering and is director of NYU Wireless, an academic research center within the department. He is also an IEEE Life Fellow.

Michael Menzel ’81, a graduate of the Department of Physics, was awarded the Simon Ramo Medal “for development of the James Webb Space Telescope [JWST], first deployed to see the earliest galaxies in the universe,” along with Bill Ochs, JWST project manager at NASA, and Scott Willoughby, vice president and program manager for the JWST program at Northrop Grumman. Menzel is a mission systems engineer at NASA and a member of the American Astronomical Society.

Jose Manuel Fonseca Moura ’73, SM ’73, ScD ’75, a graduate of the Department of Electrical Engineering and Computer Science, received the Haraden Pratt Award “for sustained leadership and outstanding contributions to the IEEE in education, technical activities, awards, and global connections.” Currently, Moura is the Philip L. and Marsha Dowd University Professor at Carnegie Mellon University. He is also a member of the U.S. National Academy of Engineering, a fellow of the U.S. National Academy of Inventors, a member of the Portugal Academy of Science, an IEEE Fellow, and a fellow of the American Association for the Advancement of Science.

Marc Raibert PhD ’77, a graduate of the former Department of Psychology, now a part of the Department of Brain and Cognitive Sciences, received the Robotics and Automation Award “for pioneering and leading the field of dynamic legged locomotion.” He is founder of Boston Dynamics, an MIT spinoff and robotics company, and The AI Institute, based in Cambridge, Massachusetts, where he also serves as the executive director. Raibert is an IEEE Member.

The following alumni were named IEEE Fellows: 

Solomon Assefa ’01, MNG ’01, PhD ’04 (EECS); Yuriy Brun ’03, MNG ’03 (EECS); Whitfield Diffie ’65 (Mathematics); Brian P. Ginsburg ’02, MNG ’03, PhD ’07 (EECS); Saikat Guha SM ’04, PhD ’08 (EECS); Cherie Kagan PhD ’96 (Materials Science and Engineering); Thierry E. Klein PhD ’01 (EECS); Bennett A. Landman ’01, MNG ’02 (EECS); Debra Lew ’88 (Physics and EECS); Karen Livescu SM ’99, PhD ’05 (EECS); Patrick P. Mercier SM ’08, PhD ’12 (EECS); Shayan Mookherjea SM ’00 (EECS); Ramakrishna Mukkamala SM ’95, PhD ’00 (EECS); and Suresh Ramalingam SM ’90, PhD ’94 (Chemical Engineering).


Making classical music and math more accessible

In math and in music, senior Holden Mui values interesting ideas, solving problems creatively, and finding meaning in their structures.


Senior Holden Mui appreciates the details in mathematics and music. A well-written orchestral piece and a well-designed competitive math problem both require a certain flair and a well-tuned sense of how to keep an audience’s interest.

“People want fresh, new, non-recycled approaches to math and music,” he says. Mui sees his role as a guide of sorts, someone who can take his ideas for a musical composition or a math problem and share them with audiences in an engaging way. His ideas must make the transition from his mind to the page in as precise a way as possible. Details matter.

A double major in math and music from Lisle, Illinois, Mui believes it’s important to invite people into a creative process that allows a kind of conversation to occur between a piece of music he writes and his audience, for example. Or a math problem and the people who try to solve it. “Part of math’s appeal is its ability to reveal deep truths that may be hidden in simple statements,” he argues, “while contemporary classical music should be available for enjoyment by as many people as possible.”

Mui’s first experience at MIT was as a high school student in 2017. He visited as a member of a high school math competition team attending an event hosted and staged by MIT and Harvard University students. The following year, Mui met other students at math camps and began thinking seriously about what was next.

“I chose math as a major because it’s been a passion of mine since high school. My interest grew through competitions and I continued to develop it through research,” he says. “I chose MIT because it boasts one of the most rigorous and accomplished mathematics departments in the country.”

Mui is also a math problem writer for the Harvard-MIT Math Tournament (HMMT) and performs with Ribotones, a club that travels to places like retirement homes or public spaces on the Institute’s campus to play music for free.

Mui studies piano with Timothy McFarland, an artist affiliate at MIT, through the MIT Emerson/Harris Fellowship Program, and previously studied with Kate Nir and Matthew Hagle of the Music Institute of Chicago. He started piano at the age of five and cites French composer Maurice Ravel as one of his major musical influences.

As a music student at MIT, Mui is involved in piano performance, chamber music, collaborative piano, the MIT Symphony Orchestra as a violist, conducting, and composition.

He enjoys the incredible variety available within MIT’s music program. “It offers everything from electronic music to world music studies,” he notes, “and has broadened my understanding and appreciation of music’s diversity.”

Collaborating to create

Throughout his academic career, Mui found himself among like-minded students such as former Yale University undergraduate Andrew Wu. Together, Mui and Wu won an Emergent Ventures grant. In this collaboration, Mui wrote the music Wu would play. Wu described his experience with one of Mui’s compositions, “Poetry,” as “demanding serious focus and continued re-readings,” yielding nuances even after repeated listens.

Another of Mui’s compositions, “Landscapes,” was performed by MIT’s Symphony Orchestra in October 2024 and offered audiences opportunities to engage with the ideas he explores in his music.

One of the challenges Mui discovered early is that academic composers sometimes create music audiences might struggle to understand. “People often say that music is a universal language, but one of the most valuable insights I’ve gained at MIT is that music isn’t as universally experienced as one might think,” he says. “There are notable differences, for example, between Western music and world music.” 

This, Mui says, broadened his perspective on how to approach music and encouraged him to consider his audience more closely when composing. He treats music as an opportunity to invite people into how he thinks. 

Creative ideas, accessible outcomes

Mui understands the value of sharing his skills and ideas with others, crediting the MIT International Science and Technology Initiatives (MISTI) program with offering multiple opportunities for travel and teaching. “I’ve been on three MISTI trips during IAP [Independent Activities Period] to teach mathematics,” he says. 

Mui says it’s important to be flexible, dynamic, and adaptable in preparation for a fulfilling professional life. Music and math both demand the development of the kinds of soft skills that can help him succeed as a musician, composer, and mathematician.

“Creating math problems is surprisingly similar to writing music,” he argues. “In both cases, the work needs to be complex enough to be interesting without becoming unapproachable.” For Mui, designing original math problems is “like trying to write down an original melody.”

“To write math problems, you have to have seen a lot of math problems before. To write music, you have to know the literature — Bach, Beethoven, Ravel, Ligeti — as diverse a group of personalities as possible.”

A future in the notes and numbers

Mui points to the professional and personal virtues of exploring different fields. “It allows me to build a more diverse network of people with unique perspectives,” he says. “Professionally, having a range of experiences and viewpoints to draw on is invaluable; the broader my knowledge and network, the more insights I can gain to succeed.”

After graduating, Mui plans to pursue doctoral study in mathematics following the completion of a cryptography internship. “The connections I’ve made at MIT, and will continue to make, are valuable because they’ll be useful regardless of the career I choose,” he says. He wants to continue researching math he finds challenging and rewarding. As with his music, he wants to strike a balance between emotion and innovation.

“I think it’s important not to put all of one’s eggs in one basket,” he says. “One important figure that comes to mind is Isaac Newton, who split his time among three fields: physics, alchemy, and theology.” Mui’s path forward will inevitably include music and math. Whether crafting compositions or designing math problems, Mui seeks to invite others into a world where notes and numbers converge to create meaning, inspire connection, and transform understanding.


MIT welcomes Frida Polli as its next visiting innovation scholar

The neuroscientist turned entrepreneur will be hosted by the MIT Schwarzman College of Computing and focus on advancing the intersection of behavioral science and AI across MIT.


Frida Polli, a neuroscientist, entrepreneur, investor, and inventor known for her leading-edge contributions at the crossroads of behavioral science and artificial intelligence, is MIT’s new visiting innovation scholar for the 2024-25 academic year. She is the first visiting innovation scholar to be housed within the MIT Schwarzman College of Computing.

Polli began her career in academic neuroscience with a focus on multimodal brain imaging related to health and disease. She was a fellow at the Psychiatric Neuroimaging Group at Mass General Brigham and Harvard Medical School. She then joined the Department of Brain and Cognitive Sciences at MIT as a postdoc, where she worked with John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology and a professor of brain and cognitive sciences.

Her research has won many awards, including a Young Investigator Award from the Brain and Behavior Research Foundation. She authored over 30 peer-reviewed articles, with notable publications in the Proceedings of the National Academy of Sciences, the Journal of Neuroscience, and Brain. She transitioned from academia to entrepreneurship by completing her MBA at the Harvard Business School (HBS) as a Robert Kaplan Life Science Fellow. During this time, she also won the Life Sciences Track and the Audience Choice Award in the 2010 MIT $100K Entrepreneurship competition as a member of Aukera Therapeutics.

After HBS, Polli launched pymetrics, which harnessed advancements in cognitive science and machine learning to develop analytics-driven decision-making and performance enhancement software for the human capital sector. She holds multiple patents for the technology developed at pymetrics, which she co-founded in 2012 and led as CEO until her successful exit in 2022. Pymetrics was named a World Economic Forum Technology Pioneer and Global Innovator, one of the Inc. 5000 fastest-growing companies, and a Forbes Artificial Intelligence 50 company. Polli and pymetrics also played a pivotal role in passing the first-in-the-nation algorithmic bias law — New York’s Automated Employment Decision Tool law — which went into effect in July 2023.

Making her return to MIT as a visiting innovation scholar, Polli is collaborating closely with Sendhil Mullainathan, the Peter de Florez Professor in the departments of Electrical Engineering and Computer Science and Economics, and a principal investigator in the Laboratory for Information and Decision Systems. With Mullainathan, she is working to bring together a broad array of faculty, students, and postdocs across MIT to address concrete problems where humans and algorithms intersect, to develop a new subdomain of computer science specific to behavioral science, and to train the next generation of scientists to be bilingual in these two fields.

“Sometimes you get lucky, and sometimes you get unreasonably lucky. Frida has thrived in each of the facets we’re looking to have impact in — academia, civil society, and the marketplace. She combines a startup mentality with an abiding interest in positive social impact, while capable of ensuring the kind of intellectual rigor MIT demands. It’s an exceptionally rare combination, one we are unreasonably lucky to have,” says Mullainathan.

“People are increasingly interacting with algorithms, often with poor results, because most algorithms are not built with human interplay in mind,” says Polli. “We will focus on designing algorithms that will work synergistically with people. Only such algorithms can help us address large societal challenges in education, health care, poverty, et cetera.”

Polli was recognized as one of Inc.'s Top 100 Female Founders in 2019, followed by being named to Entrepreneur's Top 100 Powerful Women in 2020, and to the 2024 list of 100 Brilliant Women in AI Ethics. Her work has been highlighted by major outlets including The New York Times, The Wall Street Journal, The Financial Times, The Economist, Fortune, Harvard Business Review, Fast Company, Bloomberg, and Inc.

Beyond her role at pymetrics, she founded Alethia AI in 2023, an organization focused on promoting transparency in technology, and in 2024, she launched Rosalind Ventures, dedicated to investing in women founders in science and health care. She is also an advisor at the Buck Institute’s Center for Healthy Aging in Women.

"I'm delighted to welcome Dr. Polli back to MIT. As a bilingual expert in both behavioral science and AI, she is a natural fit for the college. Her entrepreneurial background makes her a terrific inaugural visiting innovation scholar,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science.


New autism research projects represent a broad range of approaches to achieving a shared goal

At a symposium of the Simons Center for the Social Brain, six speakers described a diversity of recently launched studies aimed at improving understanding of the autistic brain.


From studies of the connections between neurons to interactions between the nervous and immune systems to the complex ways in which people understand not just language, but also the unspoken nuances of conversation, new research projects at MIT supported by the Simons Center for the Social Brain are bringing a rich diversity of perspectives to advancing the field’s understanding of autism.

As six speakers lined up to describe their projects at a Simons Center symposium Nov. 15, MIT School of Science dean Nergis Mavalvala articulated what they were all striving for: “Ultimately, we want to seek understanding — not just the type that tells us how physiological differences in the inner workings of the brain produce differences in behavior and cognition, but also the kind of understanding that improves inclusion and quality of life for people living with autism spectrum disorders.”

Simons Center director Mriganka Sur, Newton Professor of Neuroscience in The Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences (BCS), said that even though the field still lacks mechanism-based treatments or reliable biomarkers for autism spectrum disorders, he is optimistic about the discoveries and new research MIT has been able to contribute. MIT research has led to five clinical trials so far, and he praised the potential for future discovery, for instance in the projects showcased at the symposium.

“We are, I believe, at a frontier — at a moment where a lot of basic science is coming together with the vision that we could use that science for the betterment of people,” Sur said.

The Simons Center funds that basic science research in two main ways that each encourage collaboration, Sur said: large-scale projects led by faculty members across several labs, and fellowships for postdocs who are mentored by two faculty members, thereby bringing together two labs. The symposium featured talks and panel discussions by faculty and fellows leading new research.

In her remarks, Associate Professor Gloria Choi of The Picower Institute and BCS department described her collaboration’s efforts to explore the possibility of developing an autism therapy using the immune system. Previous research in mice by Choi and collaborator Jun Huh of Harvard Medical School has shown that injection of the immune system signaling molecule IL-17a into a particular region of the brain’s cortex can reduce neural hyperactivity and resulting differences in social and repetitive behaviors seen in autism model mice compared to non-autism models. Now Choi’s team is working on various ways to induce the immune system to target the cytokine to the brain by less invasive means than direct injection. One way under investigation, for example, is increasing the population of immune cells that produce IL-17a in the meningeal membranes that surround the brain.

In a different vein, Associate Professor Ev Fedorenko of The McGovern Institute for Brain Research and BCS is leading a seven-lab collaboration aimed at understanding the cognitive and neural infrastructure that enables people to engage in conversation, which involves not only the language spoken but also facial expressions, tone of voice, and social context. Critical to this effort, she said, is going beyond previous work that studied each related brain area in isolation to understand the capability as a unified whole. A key insight, she said, is that they are all located near one another in the lateral temporal cortex.

“Going beyond these individual components we can start asking big questions like, what are the broad organizing principles of this part of the brain?,” Fedorenko said. “Why does it have this particular arrangement of areas, and how do these work together to exchange information to create the unified percept of another individual we’re interacting with?”

While Choi and Fedorenko are looking at factors that account for differences in social behavior in autism, Picower Professor Earl K. Miller of The Picower Institute and BCS is leading a project that focuses on another phenomenon: the feeling of sensory overload that many autistic people experience. Research in Miller’s lab has shown that the brain’s ability to make predictions about sensory stimuli, which is critical to filtering out mundane signals so attention can be focused on new ones, depends on a cortex-wide coordination of the activity of millions of neurons implemented by high frequency “gamma” brain waves and lower-frequency “beta” waves. Working with animal models and human volunteers at Boston Children’s Hospital (BCH), Miller said his team is testing the idea that there may be a key difference in these brain wave dynamics in the autistic brain that could be addressed with closed-loop brain wave stimulation technology.

Simons postdoc Lukas Vogelsang, who is based in BCS Professor Pawan Sinha’s lab, is looking at potential differences in prediction between autistic and non-autistic individuals in a different way: through experiments with volunteers that aim to tease out how these differences are manifest in behavior. For instance, he’s finding that in at least one prediction task that requires participants to discern the probability of an event from provided cues, autistic people exhibit lower performance levels and undervalue the predictive significance of the cues, while non-autistic people slightly overvalue it. Vogelsang is co-advised by BCH researcher and Harvard Medical School Professor Charles Nelson.

Fundamentally, the broad-scale behaviors that emerge from coordinated brain-wide neural activity begin with the molecular details of how neurons connect with each other at circuit junctions called synapses. In her research based in The Picower Institute lab of Menicon Professor Troy Littleton, Simons postdoc Chhavi Sood is using the genetically manipulable model of the fruit fly to investigate how mutations in the autism-associated protein FMRP may alter the expression of molecular gates regulating ion exchange at the synapse, which would in turn affect how frequently and strongly a pre-synaptic neuron excites a post-synaptic one. The differences she is investigating may be a molecular mechanism underlying neural hyperexcitability in fragile X syndrome, a profound autism spectrum disorder.

In her talk, Simons postdoc Lace Riggs, based in The McGovern Institute lab of Poitras Professor of Neuroscience Guoping Feng, emphasized how many autism-associated mutations in synaptic proteins promote pathological anxiety. She described her research that is aimed at discerning where in the brain’s neural circuitry that vulnerability might lie. In her ongoing work, Riggs is zeroing in on a novel thalamocortical circuit between the anteromedial nucleus of the thalamus and the cingulate cortex, which she found drives anxiogenic states. Riggs is co-supervised by Professor Fan Wang.

After the wide-ranging talks, supplemented by further discussion at the panels, the last word came via video conference from Kelsey Martin, executive vice president of the Simons Foundation Autism Research Initiative. Martin emphasized that fundamental research, like that done at the Simons Center, is the key to developing future therapies and other means of supporting members of the autism community.

“We believe so strongly that understanding the basic mechanisms of autism is critical to being able to develop translational and clinical approaches that are going to impact the lives of autistic individuals and their families,” she said.

From studies of synapses to circuits to behavior, MIT researchers and their collaborators are striving for exactly that impact.


Physicists magnetize a material with light

The technique provides researchers with a powerful tool for controlling magnetism, and could help in designing faster, smaller, more energy-efficient memory chips.


MIT physicists have created a new and long-lasting magnetic state in a material, using only light.

In a study appearing today in Nature, the researchers report using a terahertz laser — a light source that oscillates more than a trillion times per second — to directly stimulate atoms in an antiferromagnetic material. The laser’s oscillations are tuned to the natural vibrations among the material’s atoms, in a way that shifts the balance of atomic spins toward a new magnetic state.

The results provide a new way to control and switch antiferromagnetic materials, which are of interest for their potential to advance information processing and memory chip technology.

In common magnets, known as ferromagnets, the spins of atoms point in the same direction, such that the whole can be easily influenced and pulled in the direction of any external magnetic field. In contrast, antiferromagnets are composed of atoms with alternating spins, each pointing in the opposite direction from its neighbor. This up, down, up, down order essentially cancels the spins out, giving antiferromagnets a net zero magnetization that is impervious to any magnetic pull.

If a memory chip could be made from antiferromagnetic material, data could be “written” into microscopic regions of the material, called domains. A certain configuration of spin orientations (for example, up-down) in a given domain would represent the classical bit “0,” and a different configuration (down-up) would mean “1.” Data written on such a chip would be robust against outside magnetic influence.
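As a toy illustration of that encoding idea — not a description of any real device interface — the following sketch maps hypothetical spin-pair domain configurations to classical bits and back:

```python
# Toy illustration: each antiferromagnetic domain holds a pair of
# alternating spins, and the order of the pair encodes a classical bit.
# Purely illustrative; no real read/write hardware is modeled here.

from typing import List, Tuple

Domain = Tuple[str, str]  # e.g., ("up", "down")

def encode_bits(bits: List[int]) -> List[Domain]:
    """Map each bit to a spin configuration: 0 -> up-down, 1 -> down-up."""
    return [("up", "down") if b == 0 else ("down", "up") for b in bits]

def decode_domains(domains: List[Domain]) -> List[int]:
    """Recover each bit from the spin-pair order."""
    return [0 if d == ("up", "down") else 1 for d in domains]

data = [0, 1, 1, 0]
written = encode_bits(data)
assert decode_domains(written) == data
print(written)
```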

For this and other reasons, scientists believe antiferromagnetic materials could be a more robust alternative to existing magnetic-based storage technologies. A major hurdle, however, has been in how to control antiferromagnets in a way that reliably switches the material from one magnetic state to another.

“Antiferromagnetic materials are robust and not influenced by unwanted stray magnetic fields,” says Nuh Gedik, the Donner Professor of Physics at MIT. “However, this robustness is a double-edged sword; their insensitivity to weak magnetic fields makes these materials difficult to control.”

Using carefully tuned terahertz light, the MIT team was able to controllably switch an antiferromagnet to a new magnetic state. Antiferromagnets could be incorporated into future memory chips that store and process more data while using less energy and taking up a fraction of the space of existing devices, owing to the stability of magnetic domains.

“Generally, such antiferromagnetic materials are not easy to control,” Gedik says. “Now we have some knobs to be able to tune and tweak them.”

Gedik is the senior author of the new study, which also includes MIT co-authors Batyr Ilyas, Tianchuang Luo, Alexander von Hoegen, Zhuquan Zhang, and Keith Nelson, along with collaborators at the Max Planck Institute for the Structure and Dynamics of Matter in Germany, University of the Basque Country in Spain, Seoul National University, and the Flatiron Institute in New York.

Off balance

Gedik’s group at MIT develops techniques to manipulate quantum materials in which interactions among atoms can give rise to exotic phenomena.

“In general, we excite materials with light to learn more about what holds them together fundamentally,” Gedik says. “For instance, why is this material an antiferromagnet, and is there a way to perturb microscopic interactions such that it turns into a ferromagnet?”

In their new study, the team worked with FePS3 — a material that transitions to an antiferromagnetic phase at a critical temperature of around 118 kelvins (-247 degrees Fahrenheit).

The team suspected they might control the material’s transition by tuning into its atomic vibrations.

“In any solid, you can picture it as different atoms that are periodically arranged, and between atoms are tiny springs,” von Hoegen explains. “If you were to pull one atom, it would vibrate at a characteristic frequency which typically occurs in the terahertz range.”

The way in which atoms vibrate also relates to how their spins interact with each other. The team reasoned that if they could stimulate the atoms with a terahertz source that oscillates at the same frequency as the atoms’ collective vibrations, called phonons, the effect could also nudge the atoms’ spins out of their perfectly balanced, magnetically alternating alignment. Once knocked out of balance, atoms should have larger spins in one direction than the other, creating a preferred orientation that would shift the inherently nonmagnetized material into a new magnetic state with finite magnetization.
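The importance of matching the drive to the atoms' natural frequency can be illustrated with a generic driven, damped oscillator — a stand-in for a phonon mode with arbitrary units and made-up parameters, not the actual spin-lattice model used in the study:

```python
# Generic driven, damped harmonic oscillator: the steady-state amplitude
# peaks when the drive frequency matches the natural ("phonon") frequency.
# Illustrative stand-in only; units and parameters are arbitrary.

import math

def steady_state_amplitude(drive_freq, natural_freq, damping, force=1.0):
    """Amplitude of x'' + 2*damping*x' + natural_freq**2 * x = force*cos(drive_freq*t)."""
    w, w0, g = drive_freq, natural_freq, damping
    return force / math.sqrt((w0**2 - w**2) ** 2 + (2 * g * w) ** 2)

natural = 1.0    # natural frequency (think: the phonons' terahertz frequency)
damping = 0.05

for drive in [0.5, 0.9, 1.0, 1.1, 1.5]:
    amp = steady_state_amplitude(drive, natural, damping)
    print(f"drive = {drive:.1f} x natural -> amplitude = {amp:6.1f}")
# The response is largest when drive ~ natural, which is why the terahertz
# pulse is tuned to the frequency of the atoms' collective vibrations.
```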

“The idea is that you can kill two birds with one stone: You excite the atoms’ terahertz vibrations, which also couples to the spins,” Gedik says.

Shake and write

To test this idea, the team worked with a sample of FePS3 that was synthesized by colleagues at Seoul National University. They placed the sample in a vacuum chamber and cooled it down to temperatures at and below 118 K. They then generated a terahertz pulse by aiming a beam of near-infrared light through an organic crystal, which transformed the light into the terahertz frequencies. They then directed this terahertz light toward the sample.

“This terahertz pulse is what we use to create a change in the sample,” Luo says. “It’s like ‘writing’ a new state into the sample.”

To confirm that the pulse triggered a change in the material’s magnetism, the team also aimed two near-infrared lasers at the sample, each with an opposite circular polarization. If the terahertz pulse had no effect, the researchers should see no difference in the intensity of the transmitted infrared lasers.

“Just seeing a difference tells us the material is no longer the original antiferromagnet, and that we are inducing a new magnetic state, by essentially using terahertz light to shake the atoms,” Ilyas says.

Over repeated experiments, the team observed that a terahertz pulse successfully switched the previously antiferromagnetic material to a new magnetic state — a transition that persisted for a surprisingly long time, over several milliseconds, even after the laser was turned off.

“People have seen these light-induced phase transitions before in other systems, but typically they live for very short times on the order of a picosecond, which is a trillionth of a second,” Gedik says.

A window of a few milliseconds gives scientists time to probe the properties of the temporary new state before it settles back into its inherent antiferromagnetism. Then, they might be able to identify new knobs for tweaking antiferromagnets and optimizing their use in next-generation memory storage technologies.

This research was supported, in part, by the U.S. Department of Energy, Materials Science and Engineering Division, Office of Basic Energy Sciences, and the Gordon and Betty Moore Foundation. 


How humans continuously adapt while walking stably

Research could help improve motor rehabilitation programs and assistive robot control.



Researchers have developed a model that explains how humans adapt continuously during complex tasks, like walking, while remaining stable.

The findings were detailed in a recent paper published in the journal Nature Communications authored by Nidhi Seethapathi, an assistant professor in MIT’s Department of Brain and Cognitive Sciences; Barrett C. Clark, a robotics software engineer at Bright Minds Inc.; and Manoj Srinivasan, an associate professor in the Department of Mechanical and Aerospace Engineering at Ohio State University.

In episodic tasks, like reaching for an object, errors during one episode do not affect the next episode. In tasks like locomotion, errors can have a cascade of short-term and long-term consequences for stability unless they are controlled. This makes the challenge of adapting locomotion to a new environment more complex.

"Much of our prior theoretical understanding of adaptation has been limited to episodic tasks, such as reaching for an object in a novel environment," Seethapathi says. "This new theoretical model captures adaptation phenomena in continuous long-horizon tasks in multiple locomotor settings."

To build the model, the researchers identified general principles of locomotor adaptation across a variety of task settings, and developed a unified modular and hierarchical model of locomotor adaptation, with each component having its own unique mathematical structure.

The resulting model captures how humans adapt their walking in novel settings such as on a split-belt treadmill with each foot at a different speed, wearing asymmetric leg weights, and wearing an exoskeleton. The authors report that the model reproduced human locomotor adaptation phenomena across novel settings in 10 prior studies and correctly predicted the adaptation behavior observed in two new experiments conducted as part of the study.
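To make the idea of continuous adaptation concrete, here is a minimal illustrative loop — emphatically not the authors' published model — in which a fast feedback gain keeps step-to-step errors bounded while a slower learning rule gradually compensates for a persistent perturbation, such as a belt-speed offset on a split-belt treadmill:

```python
# Minimal illustrative sketch of continuous adaptation with stability:
# a fast corrective gain bounds the error carried from step to step,
# while a slow learning rule builds up a compensation for a persistent
# perturbation. Not the published model; parameters are made up.

def simulate(perturbation=0.5, steps=200, fast_gain=0.8, slow_rate=0.05):
    adapted_offset = 0.0   # slowly learned compensation
    state_error = 0.0      # step-to-step error that must stay bounded
    errors = []
    for _ in range(steps):
        # Residual error this step: perturbation minus learned compensation,
        # plus whatever the fast controller failed to cancel last step.
        state_error = (1 - fast_gain) * state_error + (perturbation - adapted_offset)
        # Slow adaptation nudges the compensation toward the perturbation.
        adapted_offset += slow_rate * state_error
        errors.append(state_error)
    return errors

errors = simulate()
print(f"first-step error: {errors[0]:+.3f}")   # large when the environment changes
print(f"final error:      {errors[-1]:+.3f}")  # decays toward zero as the gait adapts
```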

The model has potential applications in sensorimotor learning, rehabilitation, and wearable robotics.

"Having a model that can predict how a person will adapt to a new environment has immense utility for engineering better rehabilitation paradigms and wearable robot control," Seethapathi says. "You can think of a wearable robot itself as a new environment for the person to move in, and our model can be used to predict how a person will adapt for different robot settings. Understanding such human-robot adaptation is currently an experimentally intensive process, and our model  could help speed up the process by narrowing the search space."


3 Questions: Tracking MIT graduates’ career trajectories

Deborah Liverman, executive director of MIT Career Advising and Professional Development, offers a window into undergraduate and graduate students’ post-graduation paths.


In a fall letter to MIT alumni, President Sally Kornbluth wrote: “[T]he world has never been more ready to reward our graduates for what they know — and know how to do.” During her tenure leading MIT Career Advising and Professional Development (CAPD), Deborah Liverman has seen firsthand how — and how well — MIT undergraduate and graduate students leverage their education to make an impact around the globe in academia, industry, entrepreneurship, medicine, government and nonprofits, and other professions. Here, Liverman shares her observations about trends in students’ career paths and the complexities of the job market they must navigate along the way.

Q: How do our students fare when they graduate from MIT?

A: We routinely survey our undergraduates and graduate students to track post-graduation outcomes, so fortunately we have a wealth of data. And ultimately, this enables us to stay on top of changes from year to year and to serve our students better.

The short answer is that our students fare exceptionally well when they leave the Institute! In our 2023 Graduating Student Survey, which is an exit survey for bachelor’s degree and master’s degree students, 49 percent of bachelor’s respondents and 79 percent of master’s respondents entered the workforce after graduating, and 43 percent and 14 percent started graduate school programs, respectively. Among those seeking immediate employment, 92 percent of bachelor’s and 87 percent of master’s degree students reported obtaining a job within three months of graduation.

What is notable, and frankly, wonderful, is that these two cohorts really took advantage of the rich ecosystem of experiential learning opportunities we have at MIT. The majority of Class of 2023 seniors participated in some form of experiential learning before graduation: 94 percent of them had a UROP [Undergraduate Research Opportunities Program], 75 percent interned, 66 percent taught or tutored, and 38 percent engaged with or mentored at campus makerspaces. Among master’s degree graduates in 2023, 56 percent interned, 45 percent taught or tutored, and 30 percent took part in entrepreneurial ventures or activities. About 47 percent of bachelor’s graduates said that a previous internship or externship led to the offer that they accepted, and 46 percent of master’s graduates are founding members of a company.

We conduct a separate survey for doctoral students. I think there’s a common misperception that most of our PhD students go into academia. But a sizable portion choose not to stay in the academy. According to our 2024 Doctoral Exit Survey, 41 percent of graduates planned to go into industry. As of the survey date, of those who were going on to employment, 76 percent had signed a contract or made a definite commitment to a postdoc or other work, and only 9 percent were seeking a position but had no specific prospects.

A cohort of students, as well as some alumni, work with CAPD’s Prehealth Advising staff to apply for medical school. Last year we supported 73 students and alumni consisting of 25 undergrads, eight graduate students, and 40 alumni, with an acceptance rate of 79 percent — well above the national rate of 41 percent.

Q: How does CAPD work with students and postdocs to cultivate their professional development and help them evaluate their career options?

A: As you might expect, the career and graduate school landscape is constantly changing. In turn, CAPD strives to continuously evolve, so that we can best support and prepare our students. It certainly keeps us on our feet!

One of the things we have changed recently is our fundamental approach to working with students. We migrated our advising model from a major-specific focus to instead center on career interest areas. That allows us to prioritize skills and use a cross-disciplinary approach to advising students. So when an advisor sits down (or Zooms) with a student, that one-on-one session creates plenty of space to discuss a student’s individual values, goals, and other career-decision influencing factors.

I would say that another area we have been heavily focused on is providing new ways for students to explore careers. To that end, we developed two roles — an assistant director of career exploration and an assistant director of career prototype — to support new initiatives. And we provide career exploration fellowships and grants for undergraduate and graduate students so that they can explore fields that may be niche to MIT.

Career exploration is really important, but we want to meet students and postdocs where they are. We know they are incredibly busy at MIT, so our goal is to provide a variety of formats to make that possible, from a one-hour workshop or speaker, to a daylong shadowing experience, or a longer-term internship. For example, we partnered with departments to create the Career Exploration Series and the Infinite Careers speaker series, where we show students various avenues to get to a career. We have also created more opportunities to interact with alumni or other employers through one-day shadowing opportunities, micro-internships, internships, and employer coffee chats. The Prehealth Advising program I mentioned before offers many avenues to explore the field of medicine, so students can really make informed decisions about the path they want to pursue.

We are also looking at our existing programming to identify opportunities to build in career exploration, such as the Fall Career Fair. We have been working on identifying employers who are open to having career exploration conversations with — or hiring — first-year undergraduates, with access to these employers 30 minutes before the start of the fair. This year, the fair drew 4,400 candidates (students, postdocs, and alumni) and 180 employers, so it’s a great opportunity to leverage an event we already have in place and make it even more fruitful for both students and employers.

I do want to underscore that career exploration is just as important for graduate students as it is for undergraduates. In the doctoral exit survey I mentioned, 37 percent of 2024 graduates said they had changed their mind about the type of employer for whom they expected to work since entering their graduate program, and 38 percent had changed their mind about the type of position they expected to have. CAPD has developed exploration programming geared specifically for them, such as the CHAOS Process and our Graduate Student Professional Development offerings.

Q: What kinds of trends are you seeing in the current job market? And as students receive job offers, how do they weigh factors like the ethical considerations of working for a certain company or industry, the political landscape in the U.S. and abroad, the climate impact of a certain company or industry, or other issues?

A: Well, one notable trend is just the sheer volume of job applications. With platforms like LinkedIn’s Easy Apply, it’s easier for job seekers to apply to hundreds of jobs at once. Employers and organizations have more candidates, so applicants have to do more to stand out. Companies that, in the past, have had to seek out candidates are now deciding the best use of their recruiting efforts.

I would say the current job market is mixed. MIT students, graduates, and postdocs have experienced delayed job offers and starting dates pushed back in consulting and some tech firms. Companies are being intentional about recruiting and hiring college graduates. So students need to keep an open mind and not have their heart set on a particular employer. And if that employer isn’t hiring, then they may have to optimize their job search and consider other opportunities where they can gain experience.

On a more granular level, we do see trends in certain fields. Biotech has had a tough year, but there’s an uptick in opportunities in government, space, aerospace, and in the climate/sustainability and energy sectors. Companies are increasingly adopting AI in their business practices, so they’re hiring in that area. And financial services is a hot market for MIT candidates with strong technical skills.

As for how a student evaluates a job offer, according to the Graduating Student Survey, students look at many factors, including the job content, fit with the employer’s culture, opportunity for career advancement, and of course salary. However, students are also interested in exploring how an organization fits with their values.

CAPD provides various opportunities and resources to help them zero in on what matters most to them, from on-demand resources to one-on-one sessions with our advisors. As they research potential companies, we encourage them to make the most of career fairs and recruiting events. Throughout the academic year, MIT hosts and collaborates on over a dozen career fairs and large recruiting events. Companies are invited based on MIT candidates’ interests. The variety of opportunities means students can connect with different industries, explore careers, and apply to internships, jobs, and research opportunities.

We also recommend that they take full advantage of MIT’s curated instance of Handshake, an online recruiting platform for higher education students and alumni. CAPD has collaborated with offices and groups to create filters and identifiers in Handshake to help candidates decide what is important to them, such as a company’s commitment to inclusive practices or their sustainability initiatives.

As advisors, we encourage each student to think about which factors are important for them when evaluating job offers and determine if an employer aligns with their values and goals. And we encourage and honor each student’s right to include those values and goals in their career decision-making process. Accepting a job is a very personal decision, and we are here to support each student every step of the way.


Photos: 2024 Nobel winners with MIT ties honored in Stockholm

Laureates participated in various Nobel Week events, including lectures, a concert, a banquet, and the Nobel ceremony on Dec. 10.


MIT-affiliated winners of the 2024 Nobel Prizes were celebrated in Stockholm, Sweden, as part of Nobel Week, which culminated with a grand Nobel ceremony on Dec. 10.

This year’s laureates with MIT ties include Daron Acemoglu, an Institute Professor, and Simon Johnson, the Ronald A. Kurtz Professor of Entrepreneurship, who together shared the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel, along with James Robinson of the University of Chicago, for their work on the relationship between economic growth and political institutions. MIT Department of Biology alumnus Victor Ambros ’75, PhD ’79 also shared the Nobel Prize in Physiology or Medicine with Gary Ruvkun, who completed his postdoctoral research at the Institute alongside Ambros in the 1980s. The two were honored for their discovery of microRNA.

The honorees and their invited guests took part in a number of activities in Stockholm during this year’s Nobel Week, which began Dec. 5 with press conferences and a tour of special Nobel Week Lights around the city. Lectures, a visit to the Nobel Prize Museum, and a concert followed.

Per tradition, the winners received their medals from King Carl XVI Gustaf of Sweden on Dec. 10, the anniversary of the death of Alfred Nobel. (Winners of the Nobel Peace Prize were honored on the same day in Oslo, Norway.)

At least 105 MIT affiliates — including faculty, staff, alumni, and others — have won Nobel Prizes, according to MIT Institutional Research. Photos from the festivities appear below.


Professor Emeritus Hale Van Dorn Bradt, an X-ray astronomy pioneer, dies at 93

Longtime MIT faculty member used X-ray astronomy to study neutron stars and black holes and led the All-Sky Monitor instrument on NASA's Rossi X-ray Timing Explorer.


MIT Professor Emeritus Hale Van Dorn Bradt PhD ’61 of Peabody, Massachusetts, formerly of Salem and Belmont, beloved husband of Dorothy A. (Haughey) Bradt, passed away on Thursday, Nov. 14 at Salem Hospital, surrounded by his loving family. He was 93.  

Bradt, a longtime member of the Department of Physics, worked primarily in X-ray astronomy, studying neutron stars and black holes in X-ray binary systems with rocket-based and satellite-based instruments flown by NASA. He was the original principal investigator for the All-Sky Monitor instrument on NASA's Rossi X-ray Timing Explorer (RXTE), which operated from 1996 to 2012.

Much of his research was directed toward determining the precise locations of celestial X-ray sources, most of which were neutron stars or black holes. This made possible investigations of their intrinsic natures at optical, radio, and X-ray wavelengths.

“Hale was the last of the cosmic ray group that converted to X-ray astronomy,” says Bruno Rossi Professor of Physics Claude Canizares. “He was devoted to undergraduate teaching and, as a postdoc, I benefited personally from his mentoring and guidance.”

He shared the Bruno Rossi Prize in High-Energy Astrophysics from the American Astronomical Society in 1999.

Bradt earned his PhD at MIT in 1961, working with advisor George Clark in cosmic ray physics, and taught undergraduate courses in physics from 1963 to 2001.

In the 1970s, he created the department's undergraduate astrophysics electives 8.282 and 8.284, which are still offered today. He wrote two textbooks based on that material, “Astronomy Methods” (2004) and “Astrophysics Processes” (2008), the latter of which earned him the 2010 Chambliss Astronomical Writing Prize of the American Astronomical Society (AAS).

Son of a musician and academic

Born on Dec. 7, 1930, to Wilber and Norma Bradt in Colfax, Washington, he was raised in Washington State, as well as Maine, New York City, and Washington, where he graduated from high school.

His mother was a musician and writer, and his father was a chemistry professor at the University of Maine who served in the Army during World War II.

Six weeks after Bradt's father returned home from the war, he took his own life. Hale Bradt was 15. In 1980, Bradt discovered a stack of his father’s personal letters written during the war, which led to a decades-long research project that took him to the Pacific islands where his father served. It culminated in the book trilogy “Wilber’s War,” which earned him silver awards from the IBPA Benjamin Franklin Awards and Foreword Reviews’ IndieFAB, and made him a finalist for a National Indie Excellence Award.

Bradt discovered his love of music early; he sang in the Grace Church School choir in fifth and sixth grades, and studied the violin from the age of 8 until he was 21. He studied musicology and composition at Princeton University, where he played in the Princeton Orchestra. He also took weekly lessons in New York City with one of his childhood teachers, Irma Zacharias, who was the mother of MIT professor Jerrold Zacharias. “I did not work at the music courses very hard and thus did poorly,” he recalled.

In the 1960s at MIT, he played with a string quartet that included MIT mathematicians Michael Artin, Lou Howard, and Arthur Mattuck. Bradt and his wife, Dottie, also sang with the MIT Choral Society from about 1961 to 1971, including a 1962 trip to Europe.

Well into his 80s, Bradt retained an interest in classical music, both as a violinist and as a singer, performing with diverse amateur choruses, orchestras, and chamber groups. At one point he played with the Belmont Community Orchestra, and sang with the Paul Madore Chorale in Salem. In retirement, he and his wife enjoyed chamber music, opera, and the Boston Symphony Orchestra. 

In the Navy

In the summer before his senior year, he began naval training, which is where he discovered a talent for “mathematical-technical stuff,” he said. “I discovered that on quantitative topics, like navigation, I was much more facile than my fellow students. I could picture vector diagrams and gun mechanisms easily.”

He said he came back to Princeton “determined to get a major in physics,” but because that would involve adding a fifth year to his studies, “the dean wisely convinced me to get my degree in music, get my Navy commission, and serve my two years.” He graduated in 1952, trained for the Navy with the Reserve Officer Candidate program, and served in the U.S. Navy as a deck officer and navigator on the USS Diphda cargo ship during the Korean War. 

MIT years

He returned to Princeton to work in the Cosmic Ray lab, and then joined MIT as a graduate student in 1955, working in Bruno Rossi’s Cosmic Ray Group as a research assistant. Recalled Bradt, “The group was small, with only a half-dozen faculty and a similar number of students. Sputnik was launched, and the group was soon involved in space experiments with rockets, balloons, and satellites.”

The beginnings of celestial X-ray and gamma-ray astronomy took root in Cambridge, Massachusetts, as did the exploration of interplanetary space. Bradt also worked under Bill Kraushaar, George Clark, and Herbert Bridge, and was soon joined by radio astronomers Alan Barrett and Bernard Burke, and theorist Phil Morrison.

While working on his PhD thesis on cosmic rays, he took his measuring equipment to an old cement mine in New York State, to study cosmic rays that had enough energy to get through the 30 feet of overhead rock.

As a professor, he studied extensive air showers with gamma-ray primaries (known as low-mu showers) on Mt. Chacaltaya in Bolivia, and in 1966 he participated in a rocket experiment that led to a precise celestial location and optical identification of the first stellar X-ray source, Scorpius X-1.

“X-ray astronomy was sort of a surprise,” said Bradt. “Nobody really predicted that there should be sources of X-rays out there.”

His group studied X-rays originating from the Milky Way Galaxy by using data collected with rockets, balloons, and satellites. In 1967, he collaborated with NASA to design and launch sounding rockets from White Sands Missile Range, which would use specialized instruments to detect X-rays above Earth’s atmosphere.

Bradt was a senior participant or a principal investigator for instruments on the NASA X-ray astronomy satellite missions SAS-3 that launched in 1975, HEAO-1 in 1977, and RXTE in 1995.

All Sky Monitor and RXTE

In 1980, Bradt and his colleagues at MIT, Goddard Space Flight Center, and the University of California at San Diego began designing a satellite that would measure X-ray bursts and other phenomena on time scales from milliseconds to years. The team launched RXTE in 1995.

Until 2001, Bradt was the principal investigator of RXTE’s All Sky Monitor, which scanned vast swaths of the sky during each orbit. By the time it was decommissioned in 2012, RXTE had provided a 16-year record of X-ray emissions from various celestial objects, including black holes and neutron stars. Earlier, a 1969 sounding rocket experiment by Bradt’s group had discovered X-ray pulsations from the Crab pulsar, demonstrating that the X-ray and optical pulses from this distant neutron star arrive almost simultaneously, despite traveling through interstellar space for thousands of years.

He received NASA’s Exceptional Scientific Achievement Medal in 1978 for his contributions to the HEAO-1 mission and shared the 1999 Bruno Rossi Prize of the American Astronomical Society’s High Energy Astrophysics Division for his role with RXTE.

“Hale's work on precision timing of compact stars, and his role as an instrument PI on NASA's Rossi X-ray Timing Explorer played an important part in cultivating the entrepreneurial spirit in MIT's Center for Space Research, now the MIT Kavli Institute,” says Rob Simcoe, the Francis L. Friedman Professor of Physics and director of the MIT Kavli Institute for Astrophysics and Space Research.

Without Bradt’s persistence, the HEAO 1 and RXTE missions may not have launched, recalls Alan Levine PhD ’76, a principal research scientist at Kavli who was the project scientist for RXTE. “Hale had to skillfully negotiate to have his MIT team join together with a (non-MIT) team that had been competing for the opportunities to provide both experimental hardware and scientific mission guidance,” he says. “The A-3 experiment was eventually carried out as a joint project between MIT under Hale and Harvard/Smithsonian under Herbert (Herb) Gursky.”

“Hale had a strong personality,” recalls Levine. “When he wanted something to be done, he came on strong and it was difficult to refuse. Often it was quicker to do what he wanted rather than to say no, only to be asked several more times and have to make up excuses.”

“He was persistent,” agrees former student and Professor Emeritus Saul Rappaport PhD ’68. “If he had a suggestion, he never let up.”

Rappaport also recalls Bradt’s exacting nature. For example, for one sounding rocket flight at White Sands Missile Range, “Hale took it upon himself to be involved in every aspect of the rocket payload, including parts of it that were built by Goddard Space Flight Center — I think this annoyed the folks at GSFC,” recalls Rappaport. “He would be checking everything three times. There was a famous scene where he stuck his ear in the (compressed-air) jet to make sure that it went off, and there was a huge blast of air that he wasn’t quite expecting. It scared the hell out of everybody, and the Goddard people were, you know, a bit amused. The point is that he didn’t trust anything unless he could verify it himself.”

Supportive advisor

Many former students recalled Bradt’s supportive teaching style, which included inviting MIT students to the family’s Belmont home, and his strong advocacy for his students’ professional development.

“He was a wonderful mentor: kind, generous, and encouraging,” recalls physics department head Professor Deepto Chakrabarty ’88, who had Bradt as his postdoctoral advisor when he returned to MIT in 1996.

“I’m so grateful to have had the chance to work with Hale as an undergraduate,” recalls University of California at Los Angeles professor and Nobel laureate Andrea Ghez ’87. “He taught me so much about high-energy astrophysics, the research world, and how to be a good mentor. Over the years, he continuously gave me new opportunities — starting with working on onboard data acquisition and data analysis modes for the future Rossi X-Ray Timing Explorer with Ed Morgan and Al Levine. Later, he introduced me to a project to do optical identification of X-ray sources, which began with observing with the MIT-Michigan-Dartmouth Telescope (MDM) with then-postdoc Meg Urry and him.”

Bradt was a relatively new professor when he became Saul Rappaport’s advisor in 1963. At the time, MIT researchers were switching from the study of cosmic rays to the new field of X-ray astronomy. “Hale turned the whole rocket program over to me as a relatively newly minted PhD, which was great for my career, and he went on to some satellite business, the SAS 3 satellite in particular. He was very good in terms of looking out for the careers of junior scientists with whom he was associated.”

Bradt looked back on his legacy at MIT physics with pride. “Today, the astrophysics division of the department is a thriving community of faculty, postdocs, and graduate students,” Bradt said recently. “I cast my lot with X-ray astronomy in 1966 and had a wonderfully exciting time observing the X-ray sky from space until my retirement in 2001.”

After retirement, Bradt served for 16 years as academic advisor for MIT’s McCormick Hall first-year students. He received MIT's Buechner Teaching Prize in Physics in 1990, Outstanding Freshman Advisor of the Year Award in 2004, and the Alan J. Lazarus (1953) Excellence in Advising Award in 2017.

Recalls Ghez, “He was a remarkable and generous mentor and helped me understand the importance of helping undergraduates make the transition from the classroom to the wonderfully enriching world of research.”

Post-retirement, Bradt took on the role of department historian and mentor.

“I arrived at MIT in 2003, and it was several years before I realized that Hale had actually retired two years earlier — he was frequently around, and always happy to talk with young researchers,” says Simcoe. “In his later years, Hale became an unofficial historian for CSR and MKI, providing firsthand accounts of important events and people central to MIT's contribution to the ‘space race’ of the mid-20th century, and explaining how we evolved into a major center for research and education in spaceflight and astrophysics.”

Bradt’s other recognitions include earning a 2015 Darius and Susan Anderson Distinguished Service Award of the Institute of Governmental Studies, a 1978 NASA Exceptional Scientific Achievement Medal, and being named a 1972 American Physical Society Fellow and 2020 AAS Legacy Fellow.

Bradt served as secretary-treasurer (1973–75) and chair (1981) of the AAS High Energy Astrophysics Division, and on the National Academy of Science’s Committee for Space Astronomy and Astrophysics from 1979 to 1982. He recruited many of his colleagues and students to help him host the 1989 meeting of the American Astronomical Society in Boston, a major astronomy conference.

The son of the late Lt. Col. Wilber E. Bradt and Norma Sparlin Bourjaily, and brother of the late Valerie Hymes of Annapolis, Maryland, he is survived by his wife, Dorothy Haughey Bradt, whom he married in 1958; two daughters and their husbands, Elizabeth Bradt and J. Bartlett “Bart” Hoskins of Salem, and Dorothy and Bart McCrum of Buxton, Maine; two grandchildren, Benjamin and Rebecca Hoskins; two other sisters, Abigail Campi of St. Michael’s, Maryland, and Dale Anne Bourjaily of the Netherlands, and 10 nieces and nephews.

In lieu of flowers, contributions may be made to the Salem Athenaeum, or the Thomas Fellowship. Hale established the Thomas Fellowship in memory of Barbara E. Thomas, who was the Department of Physics undergraduate administrator from 1931 to 1965, as well as to honor the support staff who have contributed to the department's teaching and research programs.  

“MIT has provided a wonderful environment for me to teach and to carry out research,” said Bradt. “I am exceptionally grateful for that and happy to be in a position to give back.” He added, “Besides, I am told you cannot take it with you.”

The Barbara E. Thomas Fund in support of physics graduate students has been established in the Department of Physics. You may contribute to the fund (#3312250) online at the MIT website giving.mit.edu by selecting “Give Now,” then “Physics.” 


Introducing MIT HEALS, a life sciences initiative to address pressing health challenges

The MIT Health and Life Sciences Collaborative will bring together researchers from across the Institute to deliver health care solutions at scale.


At MIT, collaboration between researchers working in the life sciences and engineering is a frequent occurrence. Under a new initiative launched last week, the Institute plans to strengthen and expand those collaborations to take on some of the most pressing health challenges facing the world.

The new MIT Health and Life Sciences Collaborative, or MIT HEALS, will bring together researchers from all over the Institute to find new solutions to challenges in health care. HEALS will draw on MIT’s strengths in life sciences and other fields, including artificial intelligence and chemical and biological engineering, to accelerate progress in improving patient care.

“As a source of new knowledge, of new tools and new cures, and of the innovators and the innovations that will shape the future of biomedicine and health care, there is just no place like MIT,” MIT President Sally Kornbluth said at a launch event last Wednesday in Kresge Auditorium. “Our goal with MIT HEALS is to help inspire, accelerate, and deliver solutions, at scale, to some of society’s most urgent and intractable health challenges.”

The launch event served as a day-long review of MIT’s historical impact in the life sciences and a preview of what it hopes to accomplish in the future.

“The talent assembled here has produced some truly towering accomplishments. But also — and, I believe, more importantly — you represent a deep well of creative potential for even greater impact,” Kornbluth said.

Massachusetts Governor Maura Healey, who addressed the filled auditorium, spoke of her excitement about the new initiative, emphasizing that “MIT’s leadership and the work that you do are more important than ever.”

“One of the things as governor that I really appreciate is the opportunity to see so many of our state’s accomplished scientists and bright minds come together, work together, and forge a new commitment to improving human life,” Healey said. “It’s even more exciting when you think about this convening to think about all the amazing cures and treatments and discoveries that will result from it. I’m proud to say, and I really believe this, this is something that could only happen in Massachusetts. There’s no place that has the ecosystem that we have here, and we must fight hard to always protect that and to nurture that.”

A history of impact

MIT has a long history of pioneering new fields in the life sciences, as MIT Institute Professor Phillip Sharp noted in his keynote address. Fifty years ago, MIT’s Center for Cancer Research was born, headed by Salvador Luria, a molecular biologist and a 1969 Nobel laureate.

That center helped to lead the revolutions in molecular biology, and later recombinant DNA technology, which have had significant impacts on human health. Research by MIT Professor Robert Weinberg and others identifying cancer genes has led to the development of targeted drugs for cancer, including Herceptin and Gleevec.

In 2007, the Center for Cancer Research evolved into the Koch Institute for Integrative Cancer Research, whose faculty members are divided evenly between the School of Science and the School of Engineering, and where interdisciplinary collaboration is now the norm.

While MIT has long been a pioneer in this kind of collaborative health research, over the past several years, MIT’s visiting committees reported that there was potential to further enhance those collaborations, according to Nergis Mavalvala, dean of MIT’s School of Science.

“One of the very strong themes that emerged was that there’s an enormous hunger among our colleagues to collaborate more. And not just within their disciplines and within their departments, but across departmental boundaries, across school boundaries, and even with the hospitals and the biotech sector,” Mavalvala told MIT News.

To explore whether MIT could be doing more to encourage interdisciplinary research in the life sciences, Mavalvala and Anantha Chandrakasan, dean of the School of Engineering and MIT’s chief innovation and strategy officer, appointed a faculty committee called VITALS (Vision to Integrate, Translate and Advance Life Sciences).

That committee was co-chaired by Tyler Jacks, the David H. Koch Professor of Biology at MIT and a member and former director of the Koch Institute, and Kristala Jones Prather, head of MIT’s Department of Chemical Engineering.

“We surveyed the faculty, and for many people, the sense was that they could do more if there were improved mechanisms for interaction and collaboration. Not that those don’t exist — everybody knows that we have a highly collaborative environment at MIT, but that we could do even more if we had some additional infrastructure in place to facilitate bringing people together, and perhaps providing funding to initiate collaborative projects,” Jacks said before last week’s launch.

These efforts will build on and expand existing collaborative structures. MIT is already home to a number of institutes that promote collaboration across disciplines, including not only the Koch Institute but also the McGovern Institute for Brain Research, the Picower Institute for Learning and Memory, and the Institute for Medical Engineering and Science.

“We have some great examples of crosscutting work around MIT, but there's still more opportunity to bring together faculty and researchers across the Institute,” Chandrakasan said before the launch event. “While there are these great individual pieces, we can amplify those while creating new collaborations.”

Supporting science

In her opening remarks on Wednesday, Kornbluth announced several new programs designed to support researchers in the life sciences and help promote connections between faculty at MIT, surrounding institutions and hospitals, and companies in the Kendall Square area.

“A crucial part of MIT HEALS will be finding ways to support, mentor, connect, and foster community for the very best minds, at every stage of their careers,” she said.

With funding provided by Noubar Afeyan PhD ’87, an executive member of the MIT Corporation and founder and CEO of Flagship Pioneering, MIT HEALS will offer fellowships for graduate students interested in exploring new directions in the life sciences.

Another key component of MIT HEALS will be the new Hood Pediatric Innovation Hub, which will focus on development of medical treatments specifically for children. This program, established with a gift from the Charles H. Hood Foundation, will be led by Elazer Edelman, a cardiologist and the Edward J. Poitras Professor in Medical Engineering and Science at MIT.

“Currently, the major market incentives are for medical innovations intended for adults — because that’s where the money is. As a result, children are all too often treated with medical devices and therapies that don’t meet their needs, because they’re simply scaled-down versions of the adult models,” Kornbluth said.

As another tool to help promising research projects get off the ground, MIT HEALS will include a grant program known as the MIT-MGB Seed Program. This program, which will fund joint research projects between MIT and Massachusetts General Hospital/Brigham and Women’s Hospital, is being launched with support from Analog Devices, to establish the Analog Devices, Inc. Fund for Health and Life Sciences.

Additionally, the Biswas Family Foundation is providing funding for postdoctoral fellows, who will receive four-year appointments to pursue collaborative health sciences research. The details of the fellows program will be announced in spring 2025.

“One of the things we have learned through experience is that when we do collaborative work that is cross-disciplinary, the people who are actually crossing disciplinary boundaries and going into multiple labs are students and postdocs,” Mavalvala said prior to the launch event. “The trainees, the younger generation, are much more nimble, moving between labs, learning new techniques and integrating new ideas.”

Revolutions

Discussions following the release of the VITALS committee report identified seven potential research areas where new research could have a big impact: AI and life science, low-cost diagnostics, neuroscience and mental health, environmental life science, food and agriculture, the future of public health and health care, and women’s health. However, Chandrakasan noted that research within HEALS will not be limited to those topics.

“We want this to be a very bottom-up process,” he told MIT News. “While there will be a few areas like AI and life sciences that we will absolutely prioritize, there will be plenty of room for us to be surprised on those innovative, forward-looking directions, and we hope to be surprised.”

At the launch event, faculty members from departments across MIT shared their work during panels that focused on the biosphere, brains, health care, immunology, entrepreneurship, artificial intelligence, translation, and collaboration. In addition, a poster session highlighted over 100 research projects in areas such as diagnostics, women’s health, neuroscience, mental health, and more. 

The program, which was developed by Amy Keating, head of the Department of Biology, and Katharina Ribbeck, the Andrew and Erna Viterbi Professor of Biological Engineering, also included a spoken-word performance by Victory Yinka-Banjo, an MIT senior majoring in computer science and molecular biology. In her performance, called “Systems,” Yinka-Banjo urged the audience to “zoom out,” look at systems in their entirety, and pursue collective action.

“To be at MIT is to contribute to an era of infinite impact. It is to look beyond the microscope, zooming out to embrace the grander scope. To be at MIT is to latch onto hope so that in spite of a global pandemic, we fight and we cope. We fight with science and policy across clinics, academia, and industry for the betterment of our planet, for our rights, for our health,” she said.

In a panel titled “Revolutions,” Douglas Lauffenburger, the Ford Professor of Engineering and one of the founders of MIT’s Department of Biological Engineering, noted that engineers have been innovating in medicine since the 1950s, producing critical advances such as kidney dialysis, prosthetic limbs, and sophisticated medical imaging techniques.

MIT launched its program in biological engineering in 1998, and it became a full-fledged department in 2005. The department was founded on the idea of developing new approaches to studying biology, and potential new treatments, building on the advances then being made in molecular biology and genomics.

“Those two revolutions laid the foundation for a brand new kind of engineering that was not possible before them,” Lauffenburger said.

During that panel, Jacks and Ruth Lehmann, director of the Whitehead Institute for Biomedical Research, outlined several interdisciplinary projects underway at the Koch Institute and the Whitehead Institute. Those projects include using AI to analyze mammogram images and detect cancer earlier, engineering drought-resistant plants, and using CRISPR to identify genes involved in toxoplasmosis infection.

These examples illustrate the potential impact that can occur when “basic science meets translational science,” Lehmann said.

“I’m really looking forward to HEALS further enlarging the interactions that we have, and I think the possibilities for science, both at a mechanistic level and understanding the complexities of health and the planet, are really great,” she said.

The importance of teamwork

To bring together faculty and students with common interests and help spur new collaborations, HEALS plans to host workshops on different health-related topics. A faculty committee is now searching for a director for HEALS, who will coordinate these efforts.

Another important goal of the HEALS initiative, which was the focus of the day’s final panel discussion, is enhancing partnerships with Boston-area hospitals and biotech companies.

“There are many, many different forms of collaboration,” said Anne Klibanski, president and CEO of Mass General Brigham. “Part of it is the people. You bring the people together. Part of it is the ideas. But I have found certainly in our system, the way to get the best and the brightest people working together is to give them a problem to solve. You give them a problem to solve, and that’s where you get the energy, the passion, and the talent working together.”

Robert Langer, the David H. Koch Institute Professor at MIT and a member of the Koch Institute, noted the importance of tackling fundamental challenges without knowing exactly where they will lead. Langer, trained as a chemical engineer, began working in biomedical research in the 1970s, when most of his engineering classmates were going into jobs in the oil industry.

At the time, he worked with Judah Folkman at Boston Children’s Hospital on the idea of developing drugs that would starve tumors by cutting off their blood supply. “It took many, many years before those would [reach patients],” he says. “It took Genentech doing great work, building on some of the things we did that would lead to Avastin and many other drugs.”

Langer has spent much of his career developing novel strategies for delivering molecules, including messenger RNA, into cells. In 2010, he and Afeyan co-founded Moderna to further develop mRNA technology, which was eventually incorporated into mRNA vaccines for Covid.

“The important thing is to try to figure out what the applications are, which is a team effort,” Langer said. “Certainly when we published those papers in 1976, we had obviously no idea that messenger RNA would be important, that Covid would even exist. And so really it ends up being a team effort over the years.”


MIT astronomers find the smallest asteroids ever detected in the main belt

The team’s detection method, which identified 138 space rocks ranging from bus- to stadium-sized, could aid in tracking potential asteroid impactors.


The asteroid that extinguished the dinosaurs is estimated to have been about 10 kilometers across. That’s about as wide as Brooklyn, New York. Such a massive impactor is predicted to hit Earth rarely, once every 100 million to 500 million years.

In contrast, much smaller asteroids, about the size of a bus, can strike Earth more frequently, every few years. These “decameter” asteroids, measuring just tens of meters across, are more likely to escape the main asteroid belt and migrate inward to become near-Earth objects. When they do hit, these small but mighty space rocks can send shockwaves through entire regions, as happened with the 1908 impact in Tunguska, Siberia, and the 2013 asteroid that broke up in the sky over Chelyabinsk, in Russia’s Ural region. Being able to observe decameter main-belt asteroids would provide a window into the origin of meteorites.

Now, an international team led by physicists at MIT has found a way to spot the smallest decameter asteroids within the main asteroid belt — a rubble field between Mars and Jupiter where millions of asteroids orbit. Until now, the smallest asteroids that scientists were able to discern there were about a kilometer in diameter. With the team’s new approach, scientists can now spot asteroids in the main belt as small as 10 meters across.

In a paper appearing today in the journal Nature, the researchers report that they have used their approach to detect more than 100 new decameter asteroids in the main asteroid belt. The space rocks range from the size of a bus to several stadiums wide, and are the smallest asteroids within the main belt that have been detected to date.

[Animation: a population of small asteroids revealed in infrared light.]

The researchers envision that the approach can be used to identify and track asteroids that are likely to approach Earth.

“We have been able to detect near-Earth objects down to 10 meters in size when they are really close to Earth,” says the study’s lead author, Artem Burdanov, a research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “We now have a way of spotting these small asteroids when they are much farther away, so we can do more precise orbital tracking, which is key for planetary defense.”

The study’s co-authors include MIT professors of planetary science Julien de Wit and Richard Binzel, along with collaborators from multiple other institutions, including the University of Liege in Belgium, Charles University in the Czech Republic, the European Space Agency, and institutions in Germany including the Max Planck Institute for Extraterrestrial Physics and the University of Oldenburg.

Image shift

De Wit and his team are primarily focused on searches and studies of exoplanets — worlds outside the solar system that may be habitable. The researchers are part of the group that in 2016 discovered a planetary system around TRAPPIST-1, a star that’s about 40 light years from Earth. Using the Transiting Planets and Planetesimals Small Telescope (TRAPPIST) in Chile, the team confirmed that the star hosts rocky, Earth-sized planets, several of which are in the habitable zone.

Scientists have since trained many telescopes, focused at various wavelengths, on the TRAPPIST-1 system to further characterize the planets and look for signs of life. With these searches, astronomers have had to pick through the “noise” in telescope images, such as any gas, dust, and planetary objects between Earth and the star, to more clearly decipher the TRAPPIST-1 planets. Often, the noise they discard includes passing asteroids.

“For most astronomers, asteroids are sort of seen as the vermin of the sky, in the sense that they just cross your field of view and affect your data,” de Wit says.

De Wit and Burdanov wondered whether the same data used to search for exoplanets could be recycled and mined for asteroids in our own solar system. To do so, they looked to “shift and stack,” an image processing technique that was first developed in the 1990s. The method involves shifting multiple images of the same field of view and stacking the images to see whether an otherwise faint object can outshine the noise.
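
The team’s actual pipeline is far more elaborate, but the core of shift and stack can be sketched in a few lines of Python. The toy example below uses made-up image sizes, noise levels, and drift rates (a constant motion in pixels per frame, with simple integer-pixel shifts rather than the sub-pixel interpolation a real search would need); it simply illustrates how a source buried in the single-frame noise emerges once the frames are aligned along the hypothesized motion and averaged.

    import numpy as np

    def shift_and_stack(frames, times, vx, vy):
        # Undo a hypothesized constant motion (vx, vy, in pixels per unit time)
        # and average the frames. A faint object moving at exactly that rate
        # adds up coherently, while the background noise averages down.
        stacked = np.zeros_like(frames[0])
        for frame, t in zip(frames, times):
            dy, dx = int(round(-vy * t)), int(round(-vx * t))
            stacked += np.roll(frame, (dy, dx), axis=(0, 1))
        return stacked / len(frames)

    # Tiny synthetic demo: a moving source at half the single-frame noise level
    rng = np.random.default_rng(0)
    ny, nx, nframes = 64, 64, 200
    times = np.arange(nframes, dtype=float)
    vx_true, vy_true = 0.05, -0.03                 # pixels per frame
    frames = []
    for t in times:
        img = rng.normal(0.0, 1.0, (ny, nx))       # background noise, sigma = 1
        x = int(round(32 + vx_true * t)) % nx
        y = int(round(32 + vy_true * t)) % ny
        img[y, x] += 0.5                           # faint moving source, 0.5 sigma
        frames.append(img)

    stack = shift_and_stack(frames, times, vx_true, vy_true)
    print(f"brightest stacked pixel: {stack.max() * np.sqrt(nframes):.1f}x the stacked noise")

Stacking N frames suppresses the background noise by roughly the square root of N, so a source that is invisible in any single frame ends up several times brighter than the stacked noise floor.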

Applying this method to search for unknown asteroids in images that are originally focused on far-off stars would require significant computational resources, as it would involve testing a huge number of scenarios for where an asteroid might be. The researchers would then have to shift thousands of images for each scenario to see whether an asteroid is indeed where it was predicted to be.

Several years ago, Burdanov, de Wit, and MIT graduate student Samantha Hasler found they could do that using state-of-the-art graphics processing units that can process an enormous amount of imaging data at high speeds.
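
What makes the search heavy is not any single stack but the grid of motion hypotheses, each of which requires shifting and summing the entire image set; every step, though, is a bulk array operation, which is exactly what GPUs handle well. The brute-force loop below is a minimal sketch (reusing the synthetic frames and times from the snippet above; the grid range and spacing are illustrative assumptions, not the team’s actual search parameters), and swapping the NumPy import for a GPU drop-in such as CuPy runs the same shifts and sums on a graphics card.

    import numpy as np   # replace with "import cupy as np" to run the same math on a GPU

    def stack_over_hypotheses(frames, times, vx_grid, vy_grid):
        # Repeat shift-and-stack for every hypothesized motion (vx, vy).
        # Returns one stacked image per hypothesis; an unusually bright peak
        # in any of them flags a candidate moving object.
        frames = np.asarray(frames, dtype=float)   # shape (nframes, ny, nx)
        nframes = frames.shape[0]
        results = []
        for vx in vx_grid:
            for vy in vy_grid:
                acc = np.zeros_like(frames[0])
                for frame, t in zip(frames, times):
                    acc = acc + np.roll(frame,
                                        (int(round(-vy * t)), int(round(-vx * t))),
                                        axis=(0, 1))
                results.append(acc / nframes)
        return np.stack(results)

    # Illustrative grid of slow apparent motions, in pixels per frame
    vx_grid = [i / 100 for i in range(-10, 11)]
    vy_grid = [i / 100 for i in range(-10, 11)]
    # stacks = stack_over_hypotheses(frames, times, vx_grid, vy_grid)
    # Candidate detections are hypotheses whose stacked image shows a peak
    # well above the expected noise level.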

They initially tried their approach on data from the SPECULOOS (Search for habitable Planets EClipsing ULtra-cOOl Stars) survey — a system of ground-based telescopes that takes many images of a star over time. This effort, along with a second application using data from a telescope in Antarctica, showed that researchers could indeed spot a vast number of new asteroids in the main belt.

“An unexplored space”

For the new study, the researchers looked for more asteroids, down to smaller sizes, using data from the world’s most powerful observatory — NASA’s James Webb Space Telescope (JWST), which is particularly sensitive to infrared rather than visible light. As it happens, asteroids that orbit in the main asteroid belt are much brighter at infrared wavelengths than at visible wavelengths, and thus are far easier to detect with JWST’s infrared capabilities.

The team applied their approach to JWST images of TRAPPIST-1. The data comprised more than 10,000 images of the star, which were originally obtained to search for signs of atmospheres around the system’s inner planets. After processing the images, the researchers were able to spot eight known asteroids in the main belt. They then looked further and discovered 138 new asteroids around the main belt, all measuring tens of meters in diameter — the smallest main belt asteroids detected to date. They suspect a few asteroids are on their way to becoming near-Earth objects, while one is likely a Trojan — an asteroid that trails Jupiter.

“We thought we would just detect a few new objects, but we detected so many more than expected, especially small ones,” de Wit says. “It is a sign that we are probing a new population regime, where many more small objects are formed through cascades of collisions that are very efficient at breaking down asteroids below roughly 100 meters.”

“Statistics of these decameter main belt asteroids are critical for modelling,” adds co-author Miroslav Broz of Charles University in Prague, Czech Republic, a specialist in the solar system’s various asteroid populations. “In fact, this is the debris ejected during collisions of bigger, kilometers-sized asteroids, which are observable and often exhibit similar orbits about the Sun, so that we group them into ‘families’ of asteroids.”

“This is a totally new, unexplored space we are entering, thanks to modern technologies,” Burdanov says. “It’s a good example of what we can do as a field when we look at the data differently. Sometimes there’s a big payoff, and this is one of them.”

This work was supported, in part, by the Heising-Simons Foundation, the Czech Science Foundation, and the NVIDIA Academic Hardware Grant Program.


Troy Van Voorhis to step down as department head of chemistry

Professor oversaw department growth, strengthened community, and developed outreach programs.


Troy Van Voorhis, the Robert T. Haslam and Bradley Dewey Professor of Chemistry, will step down as head of the Department of Chemistry at the end of this academic year. Van Voorhis has served as department head since 2019, after serving as associate department head beginning in 2015.

“Troy has been an invaluable partner and sounding board who could always be counted on for a wonderful mix of wisdom and pragmatism,” says Nergis Mavalvala, the Kathleen and Curtis Marble Professor of Astrophysics and dean of the MIT School of Science. “While department head, Troy provided calm guidance during the Covid pandemic, encouraging and financially supporting additional programs to improve his community’s quality of life.”

“I have had the pleasure of serving as head of our department for the past five-plus years. It has been a period of significant upheaval in our world,” says Van Voorhis. “Throughout it all, one of my consistent joys has been the privilege of working within the chemistry department and across the wider MIT community on research, education, and community building.”

Under Van Voorhis’ leadership, the Department of Chemistry implemented a department-wide statement of values and launched the Diversity, Equity, and Inclusion Committee; a Future Faculty Symposium that showcases rising stars in chemistry; and the Creating Bonds in Chemistry program, which partners MIT faculty with chemistry faculty at select historically Black colleges and universities and minority-serving institutions.

Van Voorhis also oversaw a period of tremendous faculty growth in the department, with the addition of nine new faculty members. During his tenure as head, interest in chemistry grew markedly as well, with the number of undergraduate majors, enrolled students, graduate students, and graduate student yields all up significantly.

Van Voorhis also had the honor of celebrating with the entire Institute for Professor Moungi Bawendi’s Nobel Prize in Chemistry — the department’s first win in 18 years, since Professor Richard R. Schrock’s win in 2005.

In addition to his service to the department within the School of Science, Van Voorhis also co-chaired the Working Group on Curricula and Degrees for the MIT Stephen A. Schwarzman College of Computing, service that relates closely to his own research interests and programs.

Van Voorhis’ research lies at the nexus of chemistry and computation, with implications for renewable energy and quantum computing. His lab is focused on developing new methods that provide an accurate description of electron dynamics in molecules and materials. Over the years, his research has led to advances in light-emitting diodes, solar cells, and other devices and technologies crucial to addressing 21st-century energy concerns.

Van Voorhis received his bachelor's degree in chemistry and mathematics from Rice University and his PhD in chemistry from the University of California at Berkeley in 2001. Following a postdoctoral fellowship at Harvard University, he joined the faculty of MIT in 2003 and was promoted to professor of chemistry in 2012.

He has received many honors and awards, including being named an Alfred P. Sloan research fellow, a fellow of the David and Lucile Packard Foundation, and a recipient of a National Science Foundation CAREER award. He has also received the MIT School of Science’s award for excellence in graduate teaching.