General news from MIT - Massachusetts Institute of Technology

Here you will find the recent daily general news from MIT - Massachusetts Institute of Technology.

MIT News
MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.
Study suggests 40Hz sensory stimulation may benefit some Alzheimer’s patients for years

Five volunteers received 40Hz stimulation for around two years after an early-stage clinical study. Those with late-onset Alzheimer’s performed better on assessments than Alzheimer’s patients outside the trial.


A new research paper documents the outcomes of five volunteers who continued to receive 40Hz light and sound stimulation for around two years after participating in an MIT early-stage clinical study of the potential Alzheimer’s disease (AD) therapy. The results show that for the three participants with late-onset Alzheimer’s disease, several measures of cognition remained significantly higher than comparable Alzheimer’s patients in national databases. Moreover, in the two late-onset volunteers who donated plasma samples, levels of Alzheimer’s biomarker tau proteins were significantly decreased.

The three volunteers who experienced these benefits were all female. The two other participants, both men with early-onset forms of the disease, did not exhibit significant benefits after two years. The dataset, while small, represents the longest-term test so far of the safe, noninvasive treatment method (called GENUS, for gamma entrainment using sensory stimuli), which is also being evaluated in a nationwide clinical trial run by the MIT spinoff company Cognito Therapeutics.

“This pilot study assessed the long-term effects of daily 40Hz multimodal GENUS in patients with mild AD,” the authors wrote in an open-access paper in Alzheimer's & Dementia: The Journal of the Alzheimer’s Association. “We found that daily 40Hz audiovisual stimulation over 2 years is safe, feasible, and may slow cognitive decline and biomarker progression, especially in late-onset AD patients.”

Diane Chan, a former research scientist in The Picower Institute for Learning and Memory and a neurologist at Massachusetts General Hospital, is the study’s lead and co-corresponding author. Picower Professor Li-Huei Tsai, director of The Picower Institute and the Aging Brain Initiative at MIT, is the study’s senior and co-corresponding author.

An “open label” extension

In 2020, MIT enrolled 15 volunteers with mild Alzheimer’s disease in an early-stage trial to evaluate whether an hour a day of 40Hz light and sound stimulation, delivered via an LED panel and speaker in their homes, could deliver clinically meaningful benefits. Several studies in mice had shown that the sensory stimulation increases the power and synchrony of 40Hz gamma frequency brain waves, preserves neurons and their network connections, reduces Alzheimer’s proteins such as amyloid and tau, and sustains learning and memory. Several independent groups have also made similar findings over the years.

MIT’s trial, though cut short by the Covid-19 pandemic, found significant benefits after three months. The new study examines outcomes among five volunteers who continued to use their stimulation devices on an “open label” basis for two years. These volunteers came back to MIT for a series of tests 30 months after their initial enrollment. Because four participants started the original trial as controls (meaning they initially did not receive 40Hz stimulation), their open label usage was six to nine months shorter than the 30-month period.

The testing at zero, three, and 30 months of enrollment included measurements of their brain wave response to the stimulation, MRI scans of brain volume, measures of sleep quality, and a series of five standard cognitive and behavioral tests. Two participants gave blood samples. For comparison to untreated controls, the researchers combed through three national databases of Alzheimer’s patients, matching thousands of them on criteria such as age, gender, initial cognitive scores, and retests at similar time points across a 30-month span.

Outcomes and outlook

The three female late-onset Alzheimer’s volunteers showed improvement or slower decline on most of the cognitive tests, including significantly positive differences compared to controls on three of them. These volunteers also showed increased brain-wave responsiveness to the stimulation at 30 months and showed improvement in measures of circadian rhythms. In the two late-onset volunteers who gave blood samples, there were significant declines in phosphorylated tau (47 percent for one and 19.4 percent for the other) on a test recently approved by the U.S. Food and Drug Administration as the first plasma biomarker for diagnosing Alzheimer’s.

“One of the most compelling findings from this study was the significant reduction of plasma pTau217, a biomarker strongly correlated with AD pathology, in the two late-onset patients in whom follow-up blood samples were available,” the authors wrote in the journal. “These results suggest that GENUS could have direct biological impacts on Alzheimer’s pathology, warranting further mechanistic exploration in larger randomized trials.”

Although the initial trial results showed preservation of brain volume at three months among those who received 40Hz stimulation, that effect was not significant at the 30-month time point. And the two male early-onset volunteers did not show significant improvements on cognitive test scores. Notably, the early-onset patients showed significantly reduced brain-wave responsiveness to the stimulation.

Although the sample is small, the authors hypothesize that the difference between the two sets of patients is likely attributable to the difference in disease onset, rather than the difference in gender.

“GENUS may be less effective in early onset Alzheimer’s disease patients, potentially owing to broad pathological differences from late-onset Alzheimer’s disease that could contribute to differential responses,” the authors wrote. “Future research should explore predictors of treatment response, such as genetic and pathological markers.”

Currently, the research team is studying whether GENUS may have a preventative effect when applied before disease onset. The new trial is recruiting participants aged 55-plus with normal memory who have or had a close family member with Alzheimer's disease, including early-onset.

In addition to Chan and Tsai, the paper’s other authors are Gabrielle de Weck, Brennan L. Jackson, Ho-Jun Suk, Noah P. Milman, Erin Kitchener, Vanesa S. Fernandez Avalos, MJ Quay, Kenji Aoki, Erika Ruiz, Andrew Becker, Monica Zheng, Remi Philips, Rosalind Firenze, Ute Geigenmüller, Bruno Hammerschlag, Steven Arnold, Pia Kivisäkk, Michael Brickhouse, Alexandra Touroutoglou, Emery N. Brown, Edward S. Boyden, Bradford C. Dickerson, and Elizabeth B. Klerman.

Funding for the research came from the Freedom Together Foundation, the Robert A. and Renee E. Belfer Family Foundation, the Eleanor Schwartz Charitable Foundation, the Dolby Family, Che King Leo, Amy Wong and Calvin Chin, Kathleen and Miguel Octavio, the Degroof-VM Foundation, the Halis Family Foundation, Chijen Lee, Eduardo Eurnekian, Larry and Debora Hilibrand, Gary Hua and Li Chen, Ko Han Family, Lester Gimpelson, David B Emmes, Joseph P. DiSabato and Nancy E. Sakamoto, Donald A. and Glenda G. Mattes, the Carol and Gene Ludwig Family Foundation, Alex Hu and Anne Gao, Elizabeth K. and Russell L. Siegelman, the Marc Haas Foundation, Dave and Mary Wargo, James D. Cook, and the Nobert H. Hardner Foundation.


John Marshall and Erin Kara receive postdoctoral mentoring award

Faculty recognized for the exceptional professional and personal guidance they provide postdocs.


Shining a light on the critical role of mentors in a postdoc’s career, the MIT Postdoctoral Association presented the fourth annual Excellence in Postdoctoral Mentoring Awards to professors John Marshall and Erin Kara.

The awards honor faculty and principal investigators who have distinguished themselves across four areas: the professional development opportunities they provide, the work environment they create, the career support they provide, and their commitment to continued professional relationships with their mentees. 

They were presented at the annual Postdoctoral Appreciation event hosted by the Office of the Vice President for Research (VPR), on Sept. 17.

An MIT Postdoctoral Association (PDA) committee, chaired this year by Danielle Coogan, oversees the awards process in coordination with VPR and reviews nominations by current and former postdocs. “[We’re looking for] someone who champions a researcher, a trainee, but also challenges them,” says Bettina Schmerl, PDA president in 2024-25. “Overall, it’s about availability, reasonable expectations, and empathy. Someone who sees the postdoctoral scholar as a person of their own, not just someone who is working for them.” Marshall’s and Kara’s steadfast dedication to their postdocs sets them apart, she says.

Speaking at the VPR resource fair during National Postdoc Appreciation Week, Vice President for Research Ian Waitz acknowledged “headwinds” in federal research funding and other policy issues, but urged postdocs to press ahead in conducting the very best research. “Every resource in this room is here to help you succeed in your path,” he said.

Waitz also commented on MIT’s efforts to strengthen postdoctoral mentoring over the last several years, and the influence of these awards in bringing lasting attention to the importance of mentoring. “The dossiers we’re getting now to nominate people [for the awards] may have five, 10, 20 letters of support,” he noted. “What we know about great mentoring is that it carries on between academic generations. If you had a great mentor, then you are more likely to be an amazing mentor once you’ve seen it demonstrated.”

Ann Skoczenski, director of MIT Postdoctoral Services, works closely with Waitz and the Postdoctoral Association to address the goals and concerns of MIT’s postdocs to ensure a successful experience at the Institute. “The PDA and the whole postdoctoral community do critical work at MIT, and it’s a joy to recognize them and the outstanding mentors who guide them,” said Skoczenski.

A foundation in good science

The awards recognize excellent mentors in two categories. Marshall, professor of oceanography in the Department of Earth, Atmospheric and Planetary Sciences, received the “Established Mentor Award.” 

Nominators described Marshall’s enthusiasm for research as infectious, creating an exciting work environment that sets the tone. “John’s mentorship is unique in that he immerses his mentees in the heart of cutting-edge research. His infectious curiosity and passion for scientific excellence make every interaction with him a thrilling and enriching experience,” one postdoc wrote.

At the heart of Marshall’s postdoc relationships is a straightforward focus on doing good science and working alongside postdocs and students as equals. As one nominator wrote, “his approach is centered on empowering his mentees to assume full responsibility for their work, engage collaboratively with colleagues, and make substantial contributions to the field of science.” 

His high expectations are matched by the generous assistance he provides his postdocs when needed. “He balances scientific rigor with empathy, offers his time generously, and treats his mentees as partners in discovery,” a nominator wrote.

Navigating career decisions and gaining the right experience along the way are important aspects of the postdoc experience. “When it was time for me to move to a different step in my career, John offered me the opportunities to expand my skills by teaching, co-supervising PhD students, working independently with other MIT faculty members, and contributing to grant writing,” one postdoc wrote. 

Marshall’s research group has focused on ocean circulation and coupled climate dynamics involving interactions between motions on different scales, using theory, laboratory experiments, observations and innovative approaches to global ocean modeling.

“I’ve always told my postdocs, if you do good science, everything will sort itself out. Just do good work,” Marshall says. “And I think it’s important that you allow the glory to trickle down.” 

Marshall sees postdoc appointments as a time they can learn to play to their strengths while focusing on important scientific questions. “Having a great postdoc [working] with you and then seeing them going on to great things, it’s such a pleasure to see them succeed,” he says. 

“I’ve had a number of awards. This one means an awful lot to me, because the students and the postdocs matter as much as the science.”

Supporting the whole person

Kara, associate professor of physics, received the “Early Career Mentor Award.”

Many nominators praised Kara’s ability to give advice based on her postdocs’ individual goals. “Her mentoring style is carefully tailored to the particular needs of every individual, to accommodate and promote diverse backgrounds while acknowledging different perspectives, goals, and challenges,” wrote one nominator.

Creating a welcoming and supportive community in her research group, Kara empowers her postdocs by fostering their independence. “Erin’s unique approach to mentorship reminds us of the joy of pursuing our scientific curiosities, enables us to be successful researchers, and prepares us for the next steps in our chosen career path,” said one. Another wrote, “Rather than simply giving answers, she encourages independent thinking by asking the right questions, helping me to arrive at my own solutions and grow as a researcher.”

Kara’s ability to offer holistic, nonjudgmental advice was a throughline in her nominations. “Beyond her scientific mentorship, what truly sets Erin apart is her thoughtful and honest guidance around career development and life beyond work,” one wrote. Another nominator highlighted their positive relationship, writing, “I feel comfortable sharing my concerns and challenges with her, knowing that I will be met with understanding, insightful advice, and unwavering support.” 

Kara’s research group is focused on understanding the physics behind how black holes grow and affect their environments. Kara has advanced a new technique called X-ray reverberation mapping, which allows astronomers to map the gas falling onto black holes and measure the effects of strongly curved spacetime close to the event horizon.

“I feel like postdocs hold a really special place in our research groups because they come with their own expertise,” says Kara. “I’ve hired them particularly because I want to learn and grow from them as well, and hopefully vice versa.” Kara focuses her mentorship on providing for autonomy, giving postdocs their own mentorship opportunities, and treating them like colleagues.

A postdoc appointment “is this really pivotal time in your career, when you’re figuring out what it is you want to do with the rest of your life,” she says. “So if I can help postdocs navigate that by giving them some support, but also giving them independence to be able to take their next steps, that feels incredibly valuable.”

“I just feel like they make my work/life so rich, and it’s not a hard thing to mentor them because they all are such awesome people and they make our research group really fun.”


MIT Haystack scientists study recent geospace storms and resulting light shows

Solar maximum occurred within the past year — good news for aurora watchers, as the most active period for displays at New England latitudes occurs in the three years following solar maximum.


The northern lights, or aurora borealis, one of nature's most spectacular visual shows, can be elusive. Conventional wisdom says that to see them, we need to travel to northern Canada or Alaska. However, in the past two years, New Englanders have been seeing these colorful atmospheric displays on a few occasions — including this week — from the comfort of their backyards, as auroras have been visible in central and southern New England and beyond. These unusual auroral events have been driven by increased space weather activity, a phenomenon studied by a team of MIT Haystack Observatory scientists.

Auroral events are generated when particles in space are energized by complicated processes in the near-Earth environment, following which they interact with gases high up in the atmosphere. Space weather events such as coronal mass ejections, in which large amounts of material are ejected from our sun, along with geomagnetic storms, greatly increase energy input into those space regions near Earth. These inputs then trigger other processes that cause an increase in energetic particles entering our atmosphere. 

The result is variable colorful lights when the newly energized particles crash into atoms and molecules high above Earth's surface. Recent significant geomagnetic storm events have triggered these auroral displays at latitudes lower than normal — including sightings across New England and other locations across North America.

New England has been enjoying more of these spectacular light shows, such as this week's displays and those during the intense geomagnetic solar storms in May and October 2024, because of increased space weather activity.

Research has determined that auroral displays occur when certain atoms and molecules high in the upper atmosphere are excited by incoming charged particles, which are boosted in energy by intense solar activity. The most common auroral display colors are pink/red and green, with colors varying according to the altitude at which these reactions occur. Red auroras come from lower-energy particles exciting neutral oxygen and produce emissions at altitudes above 150 miles. Green auroras come from higher-energy particles exciting neutral oxygen and produce emissions at altitudes below 150 miles. Rare purple and blue auroras come from excited molecular nitrogen ions and occur during the most intense events.
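As a rough illustration of the color-versus-altitude relationship described above, the short Python sketch below encodes those rules of thumb. The function name and the intense-event flag are hypothetical conveniences, and real auroral emission is far more complex than this toy mapping.

```python
# Illustrative sketch only: a toy classifier encoding the relationships
# summarized above (red oxygen emission above ~150 miles, green oxygen
# emission below ~150 miles, blue/purple molecular nitrogen emission
# during the most intense events). Not a physical model.

def dominant_aurora_color(altitude_miles: float, intense_event: bool = False) -> str:
    """Return the dominant emission color expected at a given altitude."""
    if intense_event:
        return "blue/purple possible (excited molecular nitrogen ions)"
    if altitude_miles >= 150:
        return "red (lower-energy particles exciting neutral oxygen)"
    return "green (higher-energy particles exciting neutral oxygen)"


print(dominant_aurora_color(200))        # red
print(dominant_aurora_color(100))        # green
print(dominant_aurora_color(100, True))  # blue/purple possible
```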

Scientists measure the magnitude of geomagnetic activity driving auroras in several different ways. One of these uses sensitive magnetic field-measuring equipment at stations around the planet to obtain a geomagnetic storm measurement known as Kp, on a scale from 1 (least activity) to 9 (greatest activity), in three-hour intervals. Higher Kp values indicate the possibility — not a guarantee — of greater auroral sightings as the location of auroral displays move to lower latitudes. Typically, when the Kp index reaches a range of 6 or higher, this indicates that aurora viewings are more likely outside the usual northern ranges. The geomagnetic storm events of this week reached a Kp value of 9, indicating very strong activity in the sun–Earth system.
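As a minimal sketch of how such a threshold might be applied, the snippet below maps a Kp value to a qualitative viewing hint for mid-latitude observers. The wording of each band is an illustrative assumption based on the description above, not an operational forecast rule or Haystack's methodology.

```python
# Illustrative sketch only: a simplified helper that turns a Kp value into
# a rough viewing hint, using the Kp >= 6 rule of thumb mentioned above.
# The band descriptions are assumptions for illustration, not forecasts.

def aurora_viewing_hint(kp: int) -> str:
    """Map a 0-9 Kp index to a qualitative mid-latitude viewing hint."""
    if not 0 <= kp <= 9:
        raise ValueError("Kp is reported on a 0-9 scale")
    if kp <= 5:
        return "aurora most likely confined to high latitudes"
    if kp <= 8:
        return "aurora possible outside the usual northern ranges"
    return "very strong storm: aurora may reach well into mid-latitudes"


if __name__ == "__main__":
    for kp in (3, 6, 9):
        print(kp, "->", aurora_viewing_hint(kp))
```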

At MIT Haystack Observatory in Westford, Massachusetts, geospace and atmospheric physics scientists study the atmosphere and its aurora year-round by combining observations from many different instruments. These include ground-based sensors — including large upper-atmosphere radars that bounce signals off particles in the ionosphere — as well as data from space satellites. These tools provide key information, such as density, temperature, and velocity, on conditions and disturbances in the upper atmosphere: basic information that helps researchers at MIT and elsewhere understand the weather in space. 

Haystack geospace research is funded primarily by U.S. federal science agencies such as the National Science Foundation (NSF) and NASA. This work is crucial for our increasingly spacefaring civilization, which requires continual expansion of our understanding of how space weather affects life on Earth, including vital navigation systems such as GPS, worldwide communication infrastructure, and the safety of our power grids. Research in this area is especially important in modern times, as humans increasingly use low Earth orbit for commercial satellite constellations and other systems, and as civilization further progresses into space.

Studies of the variations in our atmosphere and its charged component, known as the ionosphere, have revealed the strong influence of the sun. Beyond the normal white light that we experience each day, the sun also emits many other wavelengths of light, from infrared to extreme ultraviolet. Of particular interest are the extreme ultraviolet portions of solar output, which have enough energy to ionize atoms in the upper atmosphere. Unlike its white light component, the sun's output at these very short wavelengths has many different short- and long-term variations, but the most well known is the approximately 11-year solar cycle, in which the sun goes from minimum to maximum output. 

Scientists have determined that the most recent peak in activity, known as solar maximum, occurred within the past 12 months. This is good news for auroral watchers, as the most active period for severe geomagnetic storms that drive auroral displays at New England latitudes occurs during the three-year period following solar maximum.

Despite intensive research to date, we still have a great deal more to learn about space weather and its effects on the near-Earth environment. MIT Haystack Observatory continues to advance knowledge in this area. 

Larisa Goncharenko, lead geospace scientist and assistant director at Haystack, states, "In general, understanding space weather well enough to forecast it is considerably more challenging than even normal weather forecasting near the ground, due to the vast distances involved in space weather forces. Another important factor comes from the combined variation of Earth's neutral atmosphere, affected by gravity and pressure, and from the charged particle portion of the atmosphere, created by solar radiation and additionally influenced by the geometry of our planet's magnetic field. The complex interplay between these elements provides rich complexity and a sustained, truly exciting scientific opportunity to improve our understanding of basic physics in this vital part of our home in the solar system, for the benefit of civilization."

For up-to-date space weather forecasts and predictions of possible aurora events, visit SpaceWeather.com or NOAA's Aurora Viewline site.


MIT startup aims to expand America’s lithium production

Lithios, founded by Mo Alkhadra PhD ’22 and Professor Martin Bazant, is scaling up an electrochemical lithium extraction technology to secure supply chains of the critical metal.


China dominates the global supply of lithium. The country processes about 65 percent of the battery material and has imposed on-again, off-again export restrictions on lithium-based products critical to the economy.

Fortunately, the U.S. has significant lithium reserves, most notably in the form of massive underground brines across south Arkansas and east Texas. But recovering that lithium through conventional techniques would be an energy-intensive and environmentally damaging proposition — if it were profitable at all.

Now, the startup Lithios, founded by Mo Alkhadra PhD ’22 and Martin Z. Bazant, the Chevron Chair Professor of Chemical Engineering, is commercializing a new process of lithium recovery it calls Advanced Lithium Extraction. The company uses electricity to drive a reaction with electrode materials that capture lithium from salty brine water, leaving behind other impurities.

Lithios says its process is more selective and efficient than other direct lithium-extraction techniques being developed. It also represents a far cleaner and less energy-intensive alternative to mining and the solar evaporative ponds that are used to extract lithium from underground brines in the high deserts of South America.

Since June, Lithios has been running a pilot system that continuously extracts lithium from real brines sourced from around the world. It also recently shipped an early version of its system to a commercial partner scaling up operations in Arkansas.

With the core technology of its modular systems largely validated, next year Lithios plans to begin operating a larger version capable of producing 10 to 100 tons of lithium carbonate per year. From there, the company plans to build a commercial facility that will be able to produce 25,000 tons of lithium carbonate each year. That would represent a massive increase in the total lithium production of the U.S., which is currently limited to less than 5,000 tons per year.

“There’s been a big push recently, and especially in the last year, to secure domestic supplies of lithium and break away from the Chinese chokehold on the critical mineral supply chain,” Alkhadra says. “We have an abundance of lithium deposits at our disposal in the U.S., but we lack the tools to turn those resources into value.”

Adapting a technology

Bazant realized the need for new approaches to mining lithium while working with battery companies through his lab in MIT’s Department of Chemical Engineering. His group has studied battery materials and electrochemical separation for decades.

As part of his PhD in Bazant’s lab, Alkhadra studied electrochemical processes for separation of dissolved metals, with a focus on removing lead from drinking water and treating industrial wastewater. As Alkhadra got closer to graduation, he and Bazant looked at the most promising commercial applications for his work.

It was 2021, and lithium prices were in the midst of a historic spike driven by the metal’s importance in batteries.

Today, lithium comes primarily from mining or through a slow evaporative process that uses miles of surface ponds to refine and recover lithium from wastewater. Both are energy-intensive and damaging to the environment. They are also dominated by Chinese companies and supply chains.

“A lot of hard rock mining is done in Australia, but most of the rock is shipped as a concentrate to China for refining because they’re the ones who have the technology,” Bazant explains.

Other direct lithium-extraction methods use chemicals and filters, but the founders say those methods struggle to be profitable with U.S. lithium reserves, which have low concentrations of lithium and high levels of impurities.

“Those methods work when you have a good grade of lithium brine, but they become increasingly uneconomical as you get lower-quality resources, which is exactly what the industry is going through right now,” Alkhadra says. “The evaporative process has a huge footprint — we’re talking about the size of Manhattan island for a single project. Conveniently, recovering minerals from those low concentrations was the essence of my PhD work at MIT. We simply had to adapt the technology to the new use case.”

While conducting early talks with potential customers, Alkhadra received guidance from MIT’s Venture Mentoring Service, the MIT Sandbox Innovation Fund, and the Massachusetts Clean Energy Center. Lithios officially formed when he completed his PhD in 2022 and received the Activate Fellowship. Lithios grew at The Engine, an MIT startup incubator, before moving to its pilot and manufacturing facility in Medford, Massachusetts, in 2024.

Today, Lithios uses an undisclosed electrode material that attaches to lithium when exposed to precise voltages.

“Think of a big battery with water flowing into the system,” Alkhadra explains. “When the brine comes into contact with our electrodes, it selectively pulls lithium while rejecting all the other contaminants. When the lithium has been loaded onto our capture materials, we can simply change the direction of the electrical current to release the lithium back into a clean water stream. It’s similar to charging and discharging a battery.”
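To make the capture-and-release cycle Alkhadra describes concrete, here is a minimal Python sketch of the two-phase process: forward current loads lithium from brine onto the electrodes, and reversing the current releases it into a clean stream. Lithios has not disclosed its electrode chemistry or operating parameters, so the class name, capacities, and numbers below are purely hypothetical stand-ins.

```python
# Illustrative sketch only. All names and numbers are hypothetical; this
# simply models the capture/release cycle described above, not Lithios's
# actual process or chemistry.

from dataclasses import dataclass


@dataclass
class ExtractionCell:
    capacity_mg: float      # hypothetical lithium capacity of the electrodes
    loaded_mg: float = 0.0  # lithium currently held on the capture material

    def capture(self, brine_li_mg: float) -> float:
        """Forward current: pull lithium from brine until the electrodes are full."""
        taken = min(brine_li_mg, self.capacity_mg - self.loaded_mg)
        self.loaded_mg += taken
        return brine_li_mg - taken  # lithium left behind in the reject brine

    def release(self) -> float:
        """Reversed current: discharge captured lithium into a clean water stream."""
        recovered = self.loaded_mg
        self.loaded_mg = 0.0
        return recovered


cell = ExtractionCell(capacity_mg=500.0)
leftover = cell.capture(brine_li_mg=800.0)  # 300 mg stays in the reject stream
product = cell.release()                    # 500 mg recovered into clean water
print(leftover, product)
```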

Bazant says the company’s lithium-absorbing materials are an ideal fit for this application.

“One of the main challenges of using battery electrodes to extract lithium is how to complete the system,” Bazant says. “We have a great lithium-extraction material that is very stable in water and has wonderful performance. We also learned how to formulate both electrodes with controlled ion transport and mixing to make the process much more efficient and low cost.”

Growing in the ‘MIT spirit’

A U.S. Geological Survey study last year showed the underground Smackover Formation contains between 5 million and 19 million tons of lithium in southwest Arkansas alone.

“If you just estimate how much lithium is in that region based on today’s prices, it’s about $2 trillion worth of lithium that can’t be accessed,” Bazant says. “If you could extract these resources efficiently, it would make a huge impact.”

Earlier this year, Lithios shipped its pilot system to a commercial partner in Arkansas to further validate its approach in the region. Lithios also plans to deploy several additional pilot and demonstration projects with other major partners in the oil and gas and mining industries in the coming years.

“After this field deployment, Lithios will quickly scale toward a commercial demonstration plant that will be operational by 2027, with the intent to scale to a kiloton-per-year commercial facility before the end of the decade,” Alkhadra says.

Although Lithios is currently focused on lithium, Bazant says the company’s approach could also be adapted to materials such as rare earth elements and transition metals further down the line.

“We’re developing a unique technology that could make the U.S. the center of the world for critical minerals separation, and we couldn’t have done this anywhere else,” Bazant says. “MIT was the perfect environment, mainly because of the people. There are so many fantastic scientists and businesspeople in the MIT ecosystem who are very technically savvy and ready to jump into a project like this. Our first employees were all MIT people, and they really brought the MIT spirit to our company.”


From nanoscale to global scale: Advancing MIT’s special initiatives in manufacturing, health, and climate

MIT.nano cleanroom complex named after Robert Noyce PhD ’53 at the 2025 Nano Summit.


“MIT.nano is essential to making progress in high-priority areas where I believe that MIT has a responsibility to lead,” opened MIT president Sally Kornbluth at the 2025 Nano Summit. “If we harness our collective efforts, we can make a serious positive impact.”

It was these collective efforts that drove discussions at the daylong event hosted by MIT.nano and focused on the importance of nanoscience and nanotechnology across MIT's special initiatives — projects deemed critical to MIT’s mission to help solve the world’s greatest challenges. With each new talk, common themes were reemphasized: collaboration across fields, solutions that can scale up from lab to market, and the use of nanoscale science to enact grand-scale change.

“MIT.nano has truly set itself apart, in the Institute's signature way, with an emphasis on cross-disciplinary collaboration and open access,” said Kornbluth. “Today, you're going to hear about the transformative impact of nanoscience and nanotechnology, and how working with the very small can help us do big things for the world together.”

Collaborating on health

Angela Koehler, faculty director of the MIT Health and Life Sciences Collaborative (MIT HEALS) and the Charles W. and Jennifer C. Johnson Professor of Biological Engineering, opened the first session with a question: How can we build a community across campus to tackle some of the most transformative problems in human health? In response, three speakers shared their work enabling new frontiers in medicine.

Ana Jaklenec, principal research scientist at the Koch Institute for Integrative Cancer Research, spoke about single-injection vaccines, and how her team looked to the techniques used in fabrication of electrical engineering components to see how multiple pieces could be packaged into a tiny device. “MIT.nano was instrumental in helping us develop this technology,” she said. “We took something that you can do in microelectronics and the semiconductor industry and brought it to the pharmaceutical industry.”

While Jaklenec applied insight from electronics to her work in health care, Giovanni Traverso, the Karl Van Tassel Career Development Professor of Mechanical Engineering, who is also a gastroenterologist at Brigham and Women’s Hospital, found inspiration in nature, studying the cephalopod squid and remora fish to design ingestible drug delivery systems. Representing the industry side of life sciences, Mirai Bio senior vice president Jagesh Shah SM ’95, PhD ’99 presented his company’s precision-targeted lipid nanoparticles for therapeutic delivery. Shah, as well as the other speakers, emphasized the importance of collaboration between industry and academia to make meaningful impact, and the need to strengthen the pipeline for young scientists.

Manufacturing, from the classroom to the workforce

Paving the way for future generations was similarly emphasized in the second session, which highlighted MIT’s Initiative for New Manufacturing (MIT INM). “MIT’s dedication to manufacturing is not only about technology research and education, it’s also about understanding the landscape of manufacturing, domestically and globally,” said INM co-director A. John Hart, the Class of 1922 Professor and head of the Department of Mechanical Engineering. “It’s about getting people — our graduates who are budding enthusiasts of manufacturing — out of campus and starting and scaling new companies,” he said.

On progressing from lab to market, Dan Oran PhD ’21 shared his career trajectory from technician to PhD student to founding his own company, Irradiant Technologies. “How are companies like Dan’s making the move from the lab to prototype to pilot production to demonstration to commercialization?” asked the next speaker, Elisabeth Reynolds, professor of the practice in urban studies and planning at MIT. “The U.S. capital market has not historically been well organized for that kind of support.” She emphasized the challenge of scaling innovations from prototype to production, and the need for workforce development.

“Attracting and retaining workforce is a major pain point for manufacturing businesses,” agreed John Liu, principal research scientist in mechanical engineering at MIT. To keep new ideas flowing from the classroom to the factory floor, Liu proposes a new worker type in advanced manufacturing — the technologist — someone who can be a bridge to connect the technicians and the engineers.

Bridging ecosystems with nanoscience

Bridging people, disciplines, and markets to affect meaningful change was also emphasized by Benedetto Marelli, mission director for the MIT Climate Project and associate professor of civil and environmental engineering at MIT.

“If we’re going to have a tangible impact on the trajectory of climate change in the next 10 years, we cannot do it alone,” he said. “We need to take care of ecology, health, mobility, the built environment, food, energy, policies, and trade and industry — and think about these as interconnected topics.”

Faculty speakers in this session offered a glimpse of nanoscale solutions for climate resiliency. Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering, presented his group’s work on using nanoparticles to turn waste methane and urea into renewable materials. Desirée Plata, the School of Engineering Distinguished Climate and Energy Professor, spoke about scaling carbon dioxide removal systems. Mechanical engineering professor Kripa Varanasi highlighted, among other projects, his lab’s work on improving agricultural spraying so pesticides adhere to crops, reducing agricultural pollution and cost.

In all of these presentations, the MIT faculty highlighted the tie between climate and the economy. “The economic systems that we have today are depleting to our resources, inherently polluting,” emphasized Plata. “The goal here is to use sustainable design to transition the global economy.”

What do people do at MIT.nano?

This is where MIT.nano comes in, offering shared-access facilities where researchers can design creative solutions to these global challenges. “What do people do at MIT.nano?” asked Jorg Scholvin ’00, MNG ’01, PhD ’06, associate director for Fab.nano, in the session on MIT.nano’s ecosystem. With 1,500 individuals and over 20 percent of MIT faculty labs using MIT.nano, it’s a difficult question to answer quickly. However, in a rapid-fire research showcase, students and postdocs gave a response that ranged from 3D transistors and quantum devices to solar solutions and art restoration. Their work reflects the challenges and opportunities shared at the Nano Summit: developing technologies ready to scale, uniting disciplines to tackle complex problems, and gaining hands-on experience that prepares them to contribute to the future of hard tech.

The researchers’ enthusiasm carried the excitement and curiosity that President Kornbluth mentioned in her opening remarks, and that many faculty emphasized throughout the day. “The solutions to the problems we heard about today may come from inventions that don't exist yet,” said Strano. “These are some of the most creative people, here at MIT. I think we inspire each other.”

Robert N. Noyce (1953) Cleanroom at MIT.nano

Collaborative inspiration is not new to the MIT culture. The Nano Summit sessions focused on where we are today, and where we might be going in the future, but also reflected on how we arrived at this moment. Honoring visionaries of nanoscience and nanotechnology, President Emeritus L. Rafael Reif delivered the closing remarks and an exciting announcement — the dedication of the MIT.nano cleanroom complex. Made possible through a gift by Ray Stata SB ’57, SM ’58, this research space, 45,000 square feet of ISO 5, 6, and 7 cleanrooms, will be named the Robert N. Noyce (1953) Cleanroom.

“Ray Stata was — and is — the driving force behind nanoscale research at MIT,” said Reif. “I want to thank Ray, whose generosity has allowed MIT to honor Robert Noyce in such a fitting way.”

Ray Stata co-founded Analog Devices in 1965; Noyce co-founded Fairchild Semiconductor in 1957 and later Intel in 1968. Noyce, widely regarded as the “Mayor of Silicon Valley,” became chair of the Semiconductor Industry Association in 1977, and over the next 40 years, semiconductor technology advanced a thousandfold, from micrometers to nanometers.

“Noyce was a pioneer of the semiconductor industry,” said Stata. “It is due to his leadership and remarkable contributions that electronics technology is where it is today. It is an honor to be able to name the MIT.nano cleanroom after Bob Noyce, creating a permanent tribute to his vision and accomplishments in the heart of the MIT campus.”

To conclude his remarks and the 2025 Nano Summit, Reif brought the nano journey back to today, highlighting technology giants such as Lisa Su ’90, SM ’91, PhD ’94, for whom Building 12, the home of MIT.nano, is named. “MIT has educated a large number of remarkable leaders in the semiconductor space,” said Reif. “Now, with the Robert Noyce Cleanroom, this amazing MIT community is ready to continue to shape the future with the next generation of nano discoveries — and the next generation of nano leaders, who will become living legends in their own time.”


Green bananas can’t throw 3.091 Fun Run off course

Quick thinking and good spirit marked the Department of Materials Science and Engineering’s first-ever community run.


The night before the Department of Materials Science and Engineering (DMSE)’s 3.091 Fun Run, organizer Bianca Sinausky opened a case of bananas she’d ordered and was met with a surprise: the fruit was bright green.

“I looked around for paper bags, but I only found a few,” says Sinausky, graduate academic administrator for the department, referring to a common hack for speeding up ripening. “It was hopeless.”

That is, until facilities manager Kevin Rogers came up with a plan: swap the green bananas for ripe ones from MIT’s Banana Lounge, a free campus snack and study space stocked with fruit.

“It was genius,” Sinausky says. “The runners would have their snack, and the race could go on.”

DMSE checked in with the Banana Lounge a little late, but the lounge’s logistics lead, senior Colin Clark, approved anyway. “So that’s where that box came from,” he says.

On a bright fall morning, ripe bananas awaited 20 DMSE students and faculty in the Oct. 15 run, which started and finished at the Zesiger Sports and Fitness Center and wound along pedestrian paths across the MIT campus. Department head Polina Anikeeva, an avid runner, says the goal was to build community, enjoy the outdoors, and celebrate 3.091 (Introduction to Solid-State Chemistry), a popular first-year class and General Institute Requirement.

“We realized 3.091 was so close to 5 kilometers — 3.1 miles — it was the perfect opportunity,” Anikeeva says, admitting she made the initial connection. “I think about things like that.”

For many participants, running is a regular hobby, but doing it with colleagues made it even more enjoyable. “I usually run a few times a week, and I thought it would be fun to log some more miles in my training block with the DMSE community,” says graduate student Jessica Dong, who is training for the Cambridge Half Marathon this month.

Fellow graduate student Rishabh Kothari agrees. “I was excited to support a department event that aligns with my general hobbies,” says Kothari, who recently ran the Chicago Marathon and tied for first in his age category in the DMSE run. “I find running to be a great community-building activity.”

While fun runs are usually noncompetitive, organizers still recognized the fastest runners by age group.

Unlike an official road race, organized by a race company — the City of Cambridge currently isn’t allowing new races — the DMSE run was managed internally by an informal cohort of colleagues, Sinausky says, which meant a fair amount of work.

“The hardest part was walking the route and putting the mileage out, and also putting out arrows,” she says. “When a race company does it, they do it properly.”

There were a few minor snags — some runners went the wrong way, and two walkers got lost. “So I think we need to mark the course better,” Sinausky says.

Others found charm in the run’s rough edges.

“My favorite part of the run was when a group of us got confused about the route, so we cut through the lawn in front of Tang Hall,” Dong says. At the finish line, she showed off a red DMSE hat — one of the giveaways laid out alongside ripe bananas and bottles of water.

Looking ahead to what organizers hope will be an annual event, the team is considering purchasing race timing equipment. Modern road races distribute bibs outfitted with RFID chips, which track each runner’s start and finish. Sinausky’s method — employing a smartphone timer and Anikeeva tracking finish times on a clipboard — was less high-tech, but effective for the small number of participants.

“We would see the runners coming, and Polina would say, ‘OK, bib 21.’ And then I would yell out the time,” she says. “I think that if more people showed up, it would’ve been harder.”

Sinausky hopes to boost participation in coming years. Early interest was strong, with 63 registering, but fewer than a third showed up on race day. The week’s delay due to rain — and several straight days of rain since — likely didn’t help, she says.

Overall, she says, the run was a success, with participants saying they hope it will become a new DMSE tradition.

“It was great to see everyone finish and enjoy themselves,” Kothari says. “A nice morning to be around friends.”


Transforming complex research into compelling stories

New oral communication studio at MIT supports professional development in STEM.


For students, postdocs, and early-career researchers, communicating complex ideas in a clear and compelling manner has become an essential skill. Whether applying for academic positions, pitching research to funders, or collaborating across disciplines, the ability to present work clearly and effectively can be as critical as the work itself.

Recognizing this need, the MIT Office of Graduate Education (OGE) has partnered with the Writing and Communication Center (WCC) to launch the WCC Communication Studio: a self-service recording and editing space designed to help users sharpen their oral presentation and communication skills. Open to all members of the MIT community as of this fall, the studio offers a first-of-its-kind resource at MIT for developing and refining research presentations, mock interview conversations, elevator pitches, and more.

Housed in WCC’s Ames Street office, the studio is equipped with high-quality microphones and user-friendly video recording and editing tools, all designed to be used with the PitchVantage software.

How does it work? Users can access tutorials, example videos, and a reservation system through the WCC’s website. After completing a short orientation on how to use the technology and space responsibly, users are ready to pitch to simulated audiences, who react in real time to various elements of delivery. Users can also watch their recorded presentations and receive personalized feedback on elements of presentation delivery, including pitch, pace, volume variability, verbal distractors, eye contact, volume, engagement, and pauses.

Designed with students in mind

“Through years of individual and group consultations with MIT students and scholars, we realized that developing strong presentation skills requires more than feedback — it requires sustained, embodied practice,” explains Elena Kallestinova, director of the WCC. “The Oral Communication Studio was created to fill that gap.”

Those who have used the studio since its launch say that its interactive format provides real-time, actionable feedback on their verbal delivery. Additionally, the program offers notes on overall stage presence, including subtle actions such as hand gestures and eye contact. For students, this can be the key to ensuring that their delivery is both confident and clearly accessible once it comes time to present.

“I’ve been using the studio to practice for conferences and job interviews,” says Fabio Castro, a PhD student studying civil engineering. His favorite feature? The instant feedback from the virtual figures watching the presentation, which allows him to not only prepare to speak in front of an audience, but to read their nonverbal cues and adjust his delivery accordingly.

The studio also addresses a practical challenge facing many PhD students and postdocs in their role as emerging researchers: the high stakes of presenting. For many, their first major talk may be in front of a hiring committee, research institute, or funding body — audiences that may heavily influence their next career step. The studio gives them a low-pressure environment in which to rehearse so that they enter these spaces confidently.

Aditi Ramakrishnan, an MBA student in the MIT Sloan School of Management, acknowledges the importance of this tool for emerging professionals. As a business student, she explains, “a lot of your job involves pitching.” She credits the WCC with helping to take her pitching game “from good to excellent,” identifying small details such as unnecessary “filler” words and understanding the difference between a strong stage presence and a distracting one. 

A new frontier in communication support at MIT

While MIT has long been recognized for its excellence in technical education, the studio represents a broader focus on arming students and researchers alike with the tools that they need to amplify their work to larger audiences. 

“The WCC Communication Studio gives students a place to rehearse, get immediate feedback, and iterate until their ideas land clearly and confidently,” explains Denzil Streete, OGE’s senior associate dean and director. “It’s not just about better slides or smoother delivery; it’s about unlocking and scaling access to more modern tools so more graduate students can translate breakthrough research into real-world impact.”

"The studio is a resource for the entire MIT community,” says Kallestinova, emphasizing that this new resource serves as a support for not only graduate students, but also undergrads, researchers, and even faculty. “Whether used as a supplement to classroom instruction or as a follow-up to coaching sessions, the studio offers a dedicated space for rehearsal, reflection, and growth, helping all users build confidence, clarity, and command in their communication."

The studio joins an array of existing resources within the WCC, including a Public Speaking Certificate Program, a peer-review group for creative writers, and a number of revolving workshops throughout the year. 

A culture of communication

From grant funding and academic collaboration to public outreach and policy impact, effective speaking skills are more important than ever.

“No matter how brilliant the idea, it has to be clearly communicated by the researcher or scholar in order to have impact,” says Amanda Cornwall, associate director of graduate student professional development at Career Advising and Professional Development (CAPD). 

“Explaining complex concepts to a broader audience takes practice and skill. When a researcher can build confidence in their speaking abilities, they have the power to transport their audience and show the way to new possibilities,” she adds. “This is why communication is one of the professional development competencies that we emphasize at MIT; it matters in every context, from small conversations to teaching to speeches that might change the world.”

The studio’s launch comes amid a broader institutional focus on communication. CAPD, the Teaching and Learning Lab, the OGE, and academic departments have recognized the value of, and provided increasing levels of support for, professional development training alongside technical expertise.

Workshops already offered by the WCC, CAPD, and other campus partners work to highlight best practices for conference talks, long-form interviews, and more. The WCC Communication Studio provides a practical extension of these efforts. Looking ahead, the studio aims to not only serve as a training space, but also help foster a culture of communication excellence among researchers and educators.


Returning farming to city centers

4.182 (Resilient Urbanism: Green Commons in the City), a new subject funded by the MIT Human Insight Collaborative (MITHIC), teaches students about sustainable agriculture in urban areas.


A new class is giving MIT students the opportunity to examine the historical and practical considerations of urban farming while developing a real-world understanding of its value by working alongside a local farm’s community.

Course 4.182 (Resilient Urbanism: Green Commons in the City) is taught in two sections by instructors in the Program in Science, Technology, and Society and the School of Architecture and Planning, in collaboration with The Common Good Co-op in Dorchester.

The first section was completed in spring 2025, and the second is scheduled for spring 2026. The course is taught by STS professor Kate Brown, visiting lecturer Justin Brazier MArch ’24, and Kafi Dixon, lead farmer and executive director of The Common Good.

“This project is a way for students to investigate the real political, financial, and socio-ecological phenomena that can help or hinder an urban farm’s success,” says Brown, the Thomas M. Siebel Distinguished Professor in History of Science. 

Brown teaches environmental history, the history of food production, and the history of plants and people. She describes a history of urban farming that centered sustainable practices, financial investment and stability, and lasting connections among participants. 

Brown says urban farms have sustained cities for decades.

“Cities are great places to grow produce,” Brown asserts. “City dwellers produce lots of compostable materials.”

Brazier’s research ranges from affordable housing to urban agricultural gardens, exploring topics like sustainable architecture, housing, and food security.

“My work designing vacant lots as community gardens offered a link between Kafi’s work with Common Good and my interests in urban design,” Brazier says. “Urban farms offer opportunities to eliminate food deserts in underserved areas while also empowering historically marginalized communities.”

Before they agreed to collaborate on the course, Dixon reached out to Brown asking for help with several challenges related to her urban farm, including zoning, location, and infrastructure.

“As the lead farmer and executive director of Common Good Co-op, I happened upon Kate Brown’s research and work and saw that it aligned with our cooperative model’s intentions,” Dixon says. “I reached out to Kate, and she replied, which humbled and excited me.” 

“Design itself is a form of communication,” Dixon adds, describing the collaborative nature of farming sustenance and development. “For many under-resourced communities, innovating requires a research-based approach.”

The project is among the inaugural cohort of initiatives to receive support from the SHASS Education Innovation Fund, which is administered by the MIT Human Insight Collaborative (MITHIC).

Community development, investment, and collaboration

The class’s first section paired students with community members and the City of Boston to change the farm’s zoning status and create a green space for long-term farming and community use. Students spent time at Common Good during the course, including one weekend during which they helped with weeding the garden beds for spring planting.

One objective of the class is to help Common Good avoid potential pitfalls associated with gentrification. “A study in Philadelphia showed that gentrification occurs within 1,000 feet of a community garden,” Brown says. 

“Farms and gardens are a key part of community and public health,” Dixon continues. 

Students in the second section will design and build infrastructure — including a mobile chicken coop and a pavilion to protect farmers from the elements — for Common Good. The course also aims to secure a green space designation for the farm and ensure it remains an accessible community space. “We want to prevent developers from acquiring the land and displacing the community,” Brown says, avoiding past scenarios in which governments seized inhabitants’ property while offering little or no compensation.

Students in the 2025 course also produced a guide on how to navigate the complex rules surrounding zoning and related development. Students in the next STS section will research the history of food sovereignty and Black feminist movements in Dorchester and Roxbury. Using that research, they will construct an exhibit focused on community activism for incorporation into the co-op’s facade.

Imani Bailey, a second-year master’s student in the Department of Architecture’s MArch program, was among the students in the course’s first section.

“By taking this course, I felt empowered to directly engage with the community in a way no other class I have taken so far has afforded me the ability to,” she says.

Bailey argues for urban farms’ value as both a financial investment and space for communal interaction, offering opportunities for engagement and the implementation of sustainable practices. 

“Urban farms are important in the same way a neighbor is,” she adds. “You may not necessarily need them to own your home, but a good one makes your property more valuable, sometimes financially, but most importantly in ways that cannot be assigned a monetary value.”

The intersection of agriculture, community, and technology

Technology, the course’s participants believe, can offer solutions to some of the challenges related to ensuring urban farms’ viability. 

“Cities like Amsterdam are redesigning themselves to improve walkability, increase the appearance of small gardens in the city, and increase green space,” Brown says. By creating spaces that center community and a collective approach to farming, it’s possible to reduce both greenhouse emissions and impacts related to climate change.

Additionally, engineers, scientists, and others can partner with communities to develop solutions to transportation and public health challenges. By redesigning sewer systems, empowering microbiologists to design microbial inoculants that can break down urban food waste at the neighborhood level, and centering agriculture-related transportation in the places being served, it’s possible to sustain community support and related infrastructure.

“Community is cultivated, nurtured, and grown from prolonged interaction, sharing ideas, and the creation of place through a shared sense of ownership,” Bailey argues. “Urban farms present the conditions for communities to develop.” 

Bailey values the course because it leaves the theoretical behind, instead focusing on practical solutions. “We seldom see our design ideas become tangible,” she says. “This class offered an opportunity to design and build for a real client in the real world.”

Brazier says the course and its projects prove everyone has something to contribute and can have a voice in what happens with their neighborhoods. “Despite these communities’ distrust of some politicians, we partnered to work on solutions related to zoning,” he says, “and supported community members’ advocacy efforts.”


How drones are altering contemporary warfare

A new book by scholar and military officer Erik Lin-Greenberg examines the evolving dynamics of military and state action centered around drones.


In recent months, Russia has frequently flown drones into NATO territory, where NATO countries typically try to shoot them down. By contrast, when three Russian fighter jets made an incursion into Estonian airspace in September, they were intercepted and no attempt was made to shoot them down — although the incident did make headlines and led to a Russian diplomat being expelled from Estonia.

Those incidents follow a global pattern of recent years. Drone operations, to this point, seem to provoke different responses compared to other kinds of military action, especially the use of piloted warplanes. Drone warfare is expanding but not necessarily provoking major military responses, either by the countries being attacked or by the aggressor countries that have drones shot down.

“There was a conventional wisdom that drones were a slippery slope that would enable leaders to use force in all kinds of situations, with a massively destabilizing effect,” says MIT political scientist Erik Lin-Greenberg. “People thought if drones were used all over the place, this would lead to more escalation. But in many cases where drones are being used, we don’t see that escalation.”

On the other hand, drones have made military action more pervasive. It is at least possible that in the future, drone-oriented combat will be both more common and more self-contained.

“There is a revolutionary effect of these systems, in that countries are essentially increasing the range of situations in which leaders are willing to deploy military force,” Lin-Greenberg says. To this point, though, he adds, “these confrontations are not necessarily escalating.”

Now Lin-Greenberg examines these dynamics in a new book, “The Remote Revolution: Drones and Modern Statecraft,” published by Cornell University Press. Lin-Greenberg is an associate professor in MIT’s Department of Political Science.

Lin-Greenberg brings a distinctive professional background to the subject of drone warfare. Before returning to graduate school, he served as a U.S. Air Force officer; today he commands a U.S. Air Force reserve squadron. His thinking is informed by his experiences as both a scholar and practitioner.

“The Remote Revolution” also has a distinctive methodology that draws on multiple ways of studying the topic. In writing the book, Lin-Greenberg conducted experiments based on war games played by national security professionals; conducted surveys of expert and public thinking about drones; developed in-depth case studies from history; and dug into archives broadly to fully understand the history of drone use, which in fact goes back several decades.

The book’s focus is drone use during the 2000s, as the technology has become more readily available; today about 100 countries have access to military drones. Many have used them during tensions and skirmishes with other countries.

“Where I argue this is actually revolutionary is during periods of crises, which fall below the threshold of war, in that these new technologies take human operators out of harm’s way and enable states to do things they wouldn’t otherwise do,” Lin-Greenberg says.

Indeed, a key point is that drones lower the costs of military action for countries — and not just financial costs, but human and political costs, too. Incidents and problems that might plague leaders if they involved military personnel, forcing major responses, seem to lessen when drones are involved.

“Because these systems don’t have a human on board, they’re inherently cheaper and different in the minds of decision-makers,” Lin-Greenberg says. “That means they’re willing to use these systems during disputes, and if other states are shooting them down, the side sending them is less likely to retaliate, because they’re losing a machine but not a man or woman on board.”

In this sense, the uses of drones “create new rungs on the escalation ladder,” as Lin-Greenberg writes in the book. Drone incidents don’t necessarily lead to wider military action, and may not even lead to the same kinds of international relations issues as incidents involving piloted aircraft.

Consider a counterfactual that Lin-Greenberg raises in the book. One of the most notorious episodes of Cold War tension between the U.S. and U.S.S.R. occurred in 1960, when U.S. pilot Gary Powers was shot down and captured in the Soviet Union, leading to a diplomatic standoff and a canceled summit between U.S. President Dwight Eisenhower and Soviet leader Nikita Khrushchev.

“Had that been a drone, it’s very likely the summit would have continued,” Lin-Greenberg says. “No one would have said anything. The Soviet Union would have been embarrassed to admit their airspace was violated and the U.S. would have just [publicly] ignored what was going on, because there would not have been anyone sitting in a prison. There are a lot of exercises where you can ask how history could have been different.”

None of this is to say that drones present straightforward solutions to international relations problems. They may present the appearance of low-cost military engagement, but as Lin-Greenberg underlines in the book, the effects are more complicated.

“To be clear, the remote revolution does not suggest that drones prevent war,” Lin-Greenberg writes. Indeed, one of the problems they raise, he emphasizes, is the “moral hazard” that arises from leaders viewing drones as less costly, which can lead to even more military confrontations.

Moreover, the trends in drone warfare so far yield predictions for the future that are “probabilistic rather than deterministic,” as Lin-Greenberg writes. Perhaps some political or military leaders will start to use drones to attack new targets that will inevitably generate major responses and quickly escalate into broad wars. Current trends do not guarantee future outcomes.

“There are a lot of unanswered questions in this area,” Lin-Greenberg says. “So much is changing. What does it look like when more drones are more autonomous? I still hope this book lays a foundation for future discussions, even as drones are used in different ways.”

Other scholars have praised “The Remote Revolution.” Joshua Kertzer, a professor of international studies and government at Harvard University, has hailed Lin-Greenberg’s “rich expertise, methodological rigor, and creative insight,” while Michael Horowitz, a political scientist and professor of international relations at the University of Pennsylvania, has called it “an incredible book about the impact of drones on the international security environment.”

For his part, Lin-Greenberg says, “My hope is the book will be read by academics and practitioners and people who choose to focus on parts of it they’re interested in. I tried to write the book in a way that’s approachable.”

Publication of the book was supported by funding from MIT’s Security Studies Program. 


MIT senior turns waste from the fishing industry into biodegradable plastic

Jacqueline Prawira’s innovation, featured on CBS’s “The Visioneers,” tackles one of the world’s most pressing environmental challenges.


Sometimes the answers to seemingly intractable environmental problems are found in nature itself.
 
Take the growing challenge of plastic waste. Jacqueline Prawira, an MIT senior in the Department of Materials Science and Engineering (DMSE), has developed biodegradable, plastic-like materials from fish offal, as featured in a recent segment on the CBS show “The Visioneers with Zay Harding.”
 
“We basically made plastics to be too good at their job. That also means the environment doesn’t know what to do with this, because they simply won’t degrade,” Prawira told Harding. “And now we’re literally drowning in plastic. By 2050, plastics are expected to outweigh fish in the ocean.”
 
“The Visioneers” regularly highlights environmental innovators. The episode featuring Prawira premiered during a special screening at Climate Week NYC on Sept. 24.

Her inspiration came from the Asian fish market her family visits. Once the fish they buy are butchered, the scales are typically discarded.
 
“But I also started noticing they’re actually fairly strong. They’re thin, somewhat flexible, and pretty lightweight, too, for their strength,” Prawira says. “And that got me thinking: Well, what other material has these properties? Plastics.”
 
She transformed this waste product into a transparent, thin-film material that can be used for disposable products such as grocery bags, packaging, and utensils.
 
Both her fish-scale material and a composite she developed don’t just mimic plastic — they address one of its biggest flaws. “If you put them in composting environments, [they] will degrade on their own naturally without needing much, if any, external help,” Prawira says.
 
This isn’t Prawira’s first environmental innovation. Working in DMSE Professor Yet-Ming Chiang’s lab, she helped develop a low-carbon process for making cement — the world’s most widely used construction material, and a major emitter of carbon dioxide. The process, called silicate subtraction, enables compounds to form at lower temperatures, cutting fossil fuel use.
 
Prawira and her co-inventors in the Chiang lab are also using the method to extract valuable lithium with zero waste. The process is patented and is being commercialized through the startup Rock Zero.
 
For her achievements, Prawira recently received the Barry Goldwater Scholarship, awarded to undergraduates pursuing careers in science, mathematics, or engineering.
 
In her “Visioneers” interview, she shared her hope for more sustainable ways of living. 

“I’m hoping that we can have daily lives that can be more in sync with the environment,” Prawira said. “So you don’t always have to choose between the convenience of daily life and having to help protect the environment.”


New lightweight polymer film can prevent corrosion

Because it’s nearly impermeable to gases, the polymer coating developed by MIT engineers could be used to protect solar panels, machinery, infrastructure, and more.


MIT researchers have developed a lightweight polymer film that is nearly impenetrable to gas molecules, raising the possibility that it could be used as a protective coating to prevent solar cells and other infrastructure from corrosion, and to slow the aging of packaged food and medicines.

The polymer, which can be applied as a film mere nanometers thick, completely repels nitrogen and other gases, as far as can be detected by laboratory equipment, the researchers found. That degree of impermeability has never been seen before in any polymer, and rivals the impermeability of molecularly thin crystalline materials such as graphene.

“Our polymer is quite unusual. It’s obviously produced from a solution-phase polymerization reaction, but the product behaves like graphene, which is gas-impermeable because it’s a perfect crystal. However, when you examine this material, one would never confuse it with a perfect crystal,” says Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT.

The polymer film, which the researchers describe today in Nature, is made using a process that can be scaled up to large quantities and applied to surfaces much more easily than graphene.

Strano and Scott Bunch, an associate professor of mechanical engineering at Boston University, are the senior authors of the new study. The paper’s lead authors are Cody Ritt, a former MIT postdoc who is now an assistant professor at the University of Colorado at Boulder; Michelle Quien, an MIT graduate student; and Zitang Wei, an MIT research scientist.

Bubbles that don’t collapse

Strano’s lab first reported the novel material — a two-dimensional polymer called a 2D polyaramid that self-assembles into molecular sheets using hydrogen bonds — in 2022. To create such 2D polymer sheets, which had never been done before, the researchers used a building block called melamine, which contains a ring of carbon and nitrogen atoms. Under the right conditions, these monomers can expand in two dimensions, forming nanometer-sized disks. These disks stack on top of each other, held together by hydrogen bonds between the layers, which make the structure very stable and strong.

That polymer, which the researchers call 2DPA-1, is stronger than steel but has only one-sixth its density.

In their 2022 study, the researchers focused on testing the material’s strength, but they also did some preliminary studies of its gas permeability. For those studies, they created “bubbles” out of the films and filled them with gas. With most polymers, such as plastics, gas that is trapped inside will seep out through the material, causing the bubble to deflate quickly.

However, the researchers found that bubbles made of 2DPA-1 did not collapse — in fact, bubbles that they made in 2021 are still inflated. “I was quite surprised initially,” Ritt says. “The behavior of the bubbles didn’t follow what you’d expect for a typical, permeable polymer. This required us to rethink how to properly study and understand molecular transport across this new material.”  

“We set up a series of careful experiments to first prove that the material is molecularly impermeable to nitrogen,” Strano says. “It could be considered tedious work. We had to make micro-bubbles of the polymer and fill them with a pure gas like nitrogen, and then wait. We had to repeatedly check over an exceedingly long period of time that they weren’t collapsed, in order to report the record impermeability value.”

Traditional polymers allow gases through because they consist of a tangle of spaghetti-like molecules that are loosely joined together. This leaves tiny gaps between the strands. Gas molecules can seep through these gaps, which is why polymers always have at least some degree of gas permeability.

However, the new 2D polymer is essentially impermeable because of the way that the layers of disks stick to each other.

“The fact that they can pack flat means there’s no volume between the two-dimensional disks, and that’s unusual. With other polymers, there’s still space between the one-dimensional chains, so most polymer films allow at least a little bit of gas to get through,” Strano says.

George Schatz, a professor of chemistry and chemical and biological engineering at Northwestern University, described the results as “remarkable.”

“Normally polymers are reasonably permeable to gases, but the polyaramids reported in this paper are orders of magnitude less permeable to most gases under conditions with industrial relevance,” says Schatz, who was not involved in the study.

A protective coating

In addition to nitrogen, the researchers also exposed the polymer to helium, argon, oxygen, methane, and sulfur hexafluoride. They found that 2DPA-1’s permeability to those gases was at least 1/10,000 that of any other existing polymer. That makes it nearly as impermeable as graphene, which is completely impermeable to gases because of its defect-free crystalline structure.

Scientists have been working on developing graphene coatings as a barrier to prevent corrosion in solar cells and other devices. However, scaling up the creation of graphene films is difficult, in large part because they can’t be simply painted onto surfaces.

“We can only make crystal graphene in very small patches,” Strano says. “A little patch of graphene is molecularly impermeable, but it doesn’t scale. People have tried to paint it on, but graphene does not stick to itself but slides when sheared. Graphene sheets moving past each other are considered almost frictionless.”

On the other hand, the 2DPA-1 polymer sticks easily because of the strong hydrogen bonds between the layered disks. In this paper, the researchers showed that a layer just 60 nanometers thick could extend the lifetime of a perovskite crystal by weeks. Perovskites are materials that hold promise as cheap and lightweight solar cells, but they tend to break down much faster than the silicon solar panels that are now widely used.

A 60-nanometer coating extended the perovskite’s lifetime to about three weeks, but a thicker coating would offer longer protection, the researchers say. The films could also be applied to a variety of other structures.

“Using an impermeable coating such as this one, you could protect infrastructure such as bridges, buildings, rail lines — basically anything outside exposed to the elements. Automotive vehicles, aircraft and ocean vessels could also benefit. Anything that needs to be sheltered from corrosion. The shelf life of food and medications can also be extended using such materials,” Strano says.

The other application demonstrated in this paper is a nanoscale resonator — essentially a tiny drum that vibrates at a particular frequency. Larger resonators, with sizes around 1 millimeter or less, are found in cell phones, where they allow the phone to pick up the frequency bands it uses to transmit and receive signals.

“In this paper, we made the first polymer 2D resonator, which you can do with our material because it’s impermeable and quite strong, like graphene,” Strano says. “Right now, the resonators in your phone and other communications devices are large, but there’s an effort to shrink them using nanotechnology. To make them less than a micron in size would be revolutionary. Cell phones and other devices could be smaller and reduce the power expenditures needed for signal processing.”

Resonators can also be used as sensors to detect very tiny molecules, including gas molecules. 

The research was funded, in part, by the Center for Enhanced Nanofluidic Transport-Phase 2, an Energy Frontier Research Center funded by the U.S. Department of Energy Office of Science, as well as the National Science Foundation.

This research was carried out, in part, using MIT.nano’s facilities.


Teaching large language models how to absorb new knowledge

With a new method developed at MIT, an LLM behaves more like a student, writing notes that it studies to memorize new information.


In an MIT classroom, a professor lectures while students diligently write down notes they will reread later to study and internalize key information ahead of an exam.

Humans know how to learn new information, but large language models can’t do this in the same way. Once a fully trained LLM has been deployed, its “brain” is static and can’t permanently adapt itself to new knowledge.

This means that if a user tells an LLM something important today, it won’t remember that information the next time this person starts a new conversation with the chatbot.

Now, a new approach developed by MIT researchers enables LLMs to update themselves in a way that permanently internalizes new information. Just like a student, the LLM generates its own study sheets from a user’s input, which it uses to memorize the information by updating its inner workings.

The model generates multiple self-edits to learn from one input, then applies each one to see which improves its performance the most. This trial-and-error process teaches the model the best way to train itself.

The researchers found this approach improved the accuracy of LLMs at question-answering and pattern-recognition tasks, and it enabled a small model to outperform much larger LLMs.

While there are still limitations that must be overcome, the technique could someday help artificial intelligence agents consistently adapt to new tasks and achieve changing goals in evolving environments.   

“Just like humans, complex AI systems can’t remain static for their entire lifetimes. These LLMs are not deployed in static environments. They are constantly facing new inputs from users. We want to make a model that is a bit more human-like — one that can keep improving itself,” says Jyothish Pari, an MIT graduate student and co-lead author of a paper on this technique.

He is joined on the paper by co-lead author Adam Zweiger, an MIT undergraduate; graduate students Han Guo and Ekin Akyürek; and senior authors Yoon Kim, an associate professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and Pulkit Agrawal, an associate professor in EECS and member of CSAIL. The research will be presented at the Conference on Neural Information Processing Systems.

Teaching the model to learn

LLMs are neural network models that have billions of parameters, called weights, that contain the model’s knowledge and process inputs to make predictions. During training, the model adapts these weights to learn new information contained in its training data.

But once it is deployed, the weights are static and can’t be permanently updated anymore.

However, LLMs are very good at a process called in-context learning, in which a trained model learns a new task by seeing a few examples. These examples guide the model’s responses, but the knowledge disappears before the next conversation.

The MIT researchers wanted to leverage a model’s powerful in-context learning capabilities to teach it how to permanently update its weights when it encounters new knowledge.

The framework they developed, called SEAL for “self-adapting LLMs,” enables an LLM to generate new synthetic data based on an input, and then determine the best way to adapt itself and learn from that synthetic data. Each piece of synthetic data is a self-edit the model can apply.

In the case of language, the LLM creates synthetic data by rewriting the information, and its implications, in an input passage. This is similar to how students make study sheets by rewriting and summarizing original lecture content.

The LLM does this multiple times, then quizzes itself on each self-edit to see which led to the biggest boost in performance on a downstream task like question answering. It uses a trial-and-error method known as reinforcement learning, where it receives a reward for the greatest performance boost.

Then the model memorizes the best study sheet by updating its weights to internalize the information in that self-edit.
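
To make that loop concrete, here is a minimal, self-contained sketch of a SEAL-style outer step. The helper names (generate_self_edits, finetune_copy, eval_downstream) are hypothetical stand-ins whose bodies only simulate behavior; the point is the structure the article describes: propose several self-edits, adapt a copy of the model on each, reward whichever helps most on a downstream task, and keep that update.

```python
# Illustrative SEAL-style outer loop. The helper functions below are
# hypothetical stand-ins (stubs that simulate behavior), not the paper's API.
import random

def generate_self_edits(passage, k=4):
    # In SEAL, the LLM itself rewrites the passage and its implications in
    # several ways; here we just fabricate k candidate "study sheets".
    return [f"study sheet {i}: restatement of '{passage}'" for i in range(k)]

def finetune_copy(base_model, self_edit):
    # Stand-in for a lightweight weight update on the synthetic data;
    # returns an "adapted" copy of the model.
    return {"weights": base_model["weights"] + (hash(self_edit) % 5)}

def eval_downstream(model, questions):
    # Stand-in for accuracy on a downstream task such as question answering.
    return random.random()

def seal_step(base_model, passage, questions):
    scored = []
    for edit in generate_self_edits(passage):
        candidate = finetune_copy(base_model, edit)
        reward = eval_downstream(candidate, questions)  # RL reward signal
        scored.append((reward, edit, candidate))
    best_reward, best_edit, best_model = max(scored, key=lambda t: t[0])
    # In the full method the reward also trains the policy that writes
    # self-edits; here we simply keep the best-performing adapted model.
    return best_model, best_edit, best_reward

if __name__ == "__main__":
    base = {"weights": 0}
    model, edit, reward = seal_step(base, "facts supplied by the user", ["q1", "q2"])
    print(f"kept '{edit[:25]}...' with reward {reward:.2f}")
```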

“Our hope is that the model will learn to make the best kind of study sheet — one that is the right length and has the proper diversity of information — such that updating the model based on it leads to a better model,” Zweiger explains.

Choosing the best method

Their framework also allows the model to choose the way it wants to learn the information. For instance, the model can select the synthetic data it wants to use, the rate at which it learns, and how many iterations it wants to train on.

In this case, not only does the model generate its own training data, but it also configures the optimization that applies that self-edit to its weights.

“As humans, we know how we learn best. We want to grant that same ability to large language models. By providing the model with the ability to control how it digests this information, it can figure out the best way to parse all the data that are coming in,” Pari says.
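
As an illustration of that extra degree of freedom, a single self-edit can be thought of as bundling both the synthetic data and the training settings used to apply it. The field names below are assumptions made for the sketch, not the paper's actual schema.

```python
# Hypothetical shape of one self-edit that also specifies its own optimization
# settings; field names are illustrative assumptions, not the paper's schema.
self_edit = {
    "synthetic_data": [
        "The user moved to Boston in 2024.",
        "Implication: their default time zone is now US/Eastern.",
    ],
    "learning_rate": 1e-5,   # chosen by the model itself
    "num_epochs": 3,         # passes over the synthetic data
}
print(self_edit["learning_rate"], self_edit["num_epochs"])
```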

SEAL outperformed several baseline methods across a range of tasks, including learning a new skill from a few examples and incorporating knowledge from a text passage. On question answering, SEAL improved model accuracy by nearly 15 percent, and on some skill-learning tasks it boosted the success rate by more than 50 percent.

But one limitation of this approach is a problem called catastrophic forgetting: As the model repeatedly adapts to new information, its performance on earlier tasks slowly declines.

The researchers plan to mitigate catastrophic forgetting in future work. They also want to apply this technique in a multi-agent setting where several LLMs train each other.

“One of the key barriers to LLMs that can do meaningful scientific research is their inability to update themselves based on their interactions with new information. Though fully deployed self-adapting models are still far off, we hope systems able to learn this way could eventually overcome this and help advance science,” Zweiger says.

This work is supported, in part, by the U.S. Army Research Office, the U.S. Air Force AI Accelerator, the Stevens Fund for MIT UROP, and the MIT-IBM Watson AI Lab. 


Understanding the nuances of human-like intelligence

Associate Professor Phillip Isola studies the ways in which intelligent machines “think,” in an effort to safely integrate AI into human society.


What can we learn about human intelligence by studying how machines “think?” Can we better understand ourselves if we better understand the artificial intelligence systems that are becoming a more significant part of our everyday lives?

These questions may be deeply philosophical, but for Phillip Isola, finding the answers is as much about computation as it is about cogitation.

Isola, the newly tenured associate professor in the Department of Electrical Engineering and Computer Science (EECS), studies the fundamental mechanisms involved in human-like intelligence from a computational perspective.

While understanding intelligence is the overarching goal, his work focuses mainly on computer vision and machine learning. Isola is particularly interested in exploring how intelligence emerges in AI models, how these models learn to represent the world around them, and what their “brains” share with the brains of their human creators.

“I see all the different kinds of intelligence as having a lot of commonalities, and I’d like to understand those commonalities. What is it that all animals, humans, and AIs have in common?” says Isola, who is also a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

To Isola, a better scientific understanding of the intelligence that AI agents possess will help the world integrate them safely and effectively into society, maximizing their potential to benefit humanity.

Asking questions

Isola began pondering scientific questions at a young age.

While growing up in San Francisco, he and his father frequently went hiking along the northern California coastline or camping around Point Reyes and in the hills of Marin County.

He was fascinated by geological processes and often wondered what made the natural world work. In school, Isola was driven by an insatiable curiosity, and while he gravitated toward technical subjects like math and science, there was no limit to what he wanted to learn.

Not entirely sure what to study as an undergraduate at Yale University, Isola dabbled until he came upon cognitive sciences.

“My earlier interest had been with nature — how the world works. But then I realized that the brain was even more interesting, and more complex than even the formation of the planets. Now, I wanted to know what makes us tick,” he says.

As a first-year student, he started working in the lab of his cognitive sciences professor and soon-to-be mentor, Brian Scholl, a member of the Yale Department of Psychology. He remained in that lab throughout his time as an undergraduate.

After spending a gap year working with some childhood friends at an indie video game company, Isola was ready to dive back into the complex world of the human brain. He enrolled in the graduate program in brain and cognitive sciences at MIT.

“Grad school was where I felt like I finally found my place. I had a lot of great experiences at Yale and in other phases of my life, but when I got to MIT, I realized this was the work I really loved and these are the people who think similarly to me,” he says.

Isola credits his PhD advisor, Ted Adelson, the John and Dorothy Wilson Professor of Vision Science, as a major influence on his future path. He was inspired by Adelson’s focus on understanding fundamental principles, rather than only chasing new engineering benchmarks, which are formalized tests used to measure the performance of a system.

A computational perspective

At MIT, Isola’s research drifted toward computer science and artificial intelligence.

“I still loved all those questions from cognitive sciences, but I felt I could make more progress on some of those questions if I came at it from a purely computational perspective,” he says.

His thesis was focused on perceptual grouping, which involves the mechanisms people and machines use to organize discrete parts of an image as a single, coherent object.

If machines can learn perceptual groupings on their own, that could enable AI systems to recognize objects without human intervention. This type of self-supervised learning has applications in areas such as autonomous vehicles, medical imaging, robotics, and automatic language translation.

After graduating from MIT, Isola completed a postdoc at the University of California at Berkeley so he could broaden his perspectives by working in a lab solely focused on computer science.

“That experience helped my work become a lot more impactful because I learned to balance understanding fundamental, abstract principles of intelligence with the pursuit of some more concrete benchmarks,” Isola recalls.

At Berkeley, he developed image-to-image translation frameworks, an early form of generative AI model that could turn a sketch into a photographic image, for instance, or turn a black-and-white photo into a color one.

He entered the academic job market and accepted a faculty position at MIT, but Isola deferred for a year to work at a then-small startup called OpenAI.

“It was a nonprofit, and I liked the idealistic mission at that time. They were really good at reinforcement learning, and I thought that seemed like an important topic to learn more about,” he says.

He enjoyed working in a lab with so much scientific freedom, but after a year Isola was ready to return to MIT and start his own research group.

Studying human-like intelligence

Running a research lab instantly appealed to him.

“I really love the early stage of an idea. I feel like I am a sort of startup incubator where I am constantly able to do new things and learn new things,” he says.

Building on his interest in cognitive sciences and desire to understand the human brain, his group studies the fundamental computations involved in the human-like intelligence that emerges in machines.

One primary focus is representation learning, or the ability of humans and machines to represent and perceive the sensory world around them.

In recent work, he and his collaborators observed that the many varied types of machine-learning models, from LLMs to computer vision models to audio models, seem to represent the world in similar ways.

These models are designed to do vastly different tasks, but there are many similarities in their architectures. And as they get bigger and are trained on more data, their internal structures become more alike.

This led Isola and his team to introduce the Platonic Representation Hypothesis (drawing its name from the Greek philosopher Plato), which says that the representations all these models learn are converging toward a shared, underlying representation of reality.

“Language, images, sound — all of these are different shadows on the wall from which you can infer that there is some kind of underlying physical process — some kind of causal reality — out there. If you train models on all these different types of data, they should converge on that world model in the end,” Isola says.
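
One common way such convergence is quantified in the representation-similarity literature is to compare feature matrices produced by two models on the same inputs, for example with linear centered kernel alignment (CKA). The sketch below illustrates that general idea only; it is not the specific analysis behind the hypothesis.

```python
# Linear CKA between two models' feature matrices on the same inputs; an
# illustrative similarity measure, not the specific analysis in this work.
import numpy as np

def linear_cka(X, Y):
    # X: (n_samples, d1) features from model A; Y: (n_samples, d2) from model B.
    X = X - X.mean(axis=0)  # center each feature dimension
    Y = Y - Y.mean(axis=0)
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    return cross / (np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro"))

# Toy check: two different "views" of the same underlying data score higher
# than a comparison against unrelated features.
rng = np.random.default_rng(0)
Z = rng.normal(size=(256, 32))            # shared underlying structure
X = Z @ rng.normal(size=(32, 64))         # model A's representation
Y = Z @ rng.normal(size=(32, 48))         # model B's representation
U = rng.normal(size=(256, 48))            # unrelated representation
print(round(linear_cka(X, Y), 3), round(linear_cka(X, U), 3))
```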

A related area his team studies is self-supervised learning. This involves the ways in which AI models learn to group related pixels in an image or words in a sentence without having labeled examples to learn from.

Because data are expensive and labels are limited, using only labeled data to train models could hold back the capabilities of AI systems. With self-supervised learning, the goal is to develop models that can come up with an accurate internal representation of the world on their own.

“If you can come up with a good representation of the world, that should make subsequent problem solving easier,” he explains.
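
As a toy illustration of the self-supervised idea (and not a specific method from Isola's group), a contrastive objective such as InfoNCE trains a model so that two views of the same input land near each other in embedding space while different inputs stay apart.

```python
# Toy InfoNCE-style contrastive loss: embeddings of two views of the same
# input should be similar; different inputs dissimilar. Illustrative only.
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    # z1, z2: (batch, dim) embeddings of two augmented views of the same batch.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature               # pairwise similarities
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(z1))                       # positives on the diagonal
    return -log_probs[idx, idx].mean()

rng = np.random.default_rng(1)
base = rng.normal(size=(8, 16))
matched = info_nce(base + 0.01 * rng.normal(size=base.shape), base)
shuffled = info_nce(rng.normal(size=base.shape), base)
print(round(float(matched), 3), round(float(shuffled), 3))  # matched loss is lower
```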

The focus of Isola’s research is more about finding something new and surprising than about building complex systems that can outdo the latest machine-learning benchmarks.

While this approach has yielded much success in uncovering innovative techniques and architectures, it means the work sometimes lacks a concrete end goal, which can lead to challenges.

For instance, keeping a team aligned and the funding flowing can be difficult when the lab is focused on searching for unexpected results, he says.

“In a sense, we are always working in the dark. It is high-risk and high-reward work. Every once in a while, we find some kernel of truth that is new and surprising,” he says.

In addition to pursuing knowledge, Isola is passionate about imparting knowledge to the next generation of scientists and engineers. Among his favorite courses to teach is 6.7960 (Deep Learning), which he and several other MIT faculty members launched four years ago.

The class has seen exponential growth, from 30 students in its initial offering to more than 700 this fall.

And while the popularity of AI means there is no shortage of interested students, the speed at which the field moves can make it difficult to separate the hype from truly significant advances.

“I tell the students they have to take everything we say in the class with a grain of salt. Maybe in a few years we’ll tell them something different. We are really on the edge of knowledge with this course,” he says.

But Isola also emphasizes to students that, for all the hype surrounding the latest AI models, intelligent machines are far simpler than most people suspect.

“Human ingenuity, creativity, and emotions — many people believe these can never be modeled. That might turn out to be true, but I think intelligence is fairly simple once we understand it,” he says.

Even though his current work focuses on deep-learning models, Isola is still fascinated by the complexity of the human brain and continues to collaborate with researchers who study cognitive sciences.

All the while, he has remained captivated by the beauty of the natural world that inspired his first interest in science.

Although he has less time for hobbies these days, Isola enjoys hiking and backpacking in the mountains or on Cape Cod, skiing and kayaking, or finding scenic places to spend time when he travels for scientific conferences.

And while he looks forward to exploring new questions in his lab at MIT, Isola can’t help but contemplate how the role of intelligent machines might change the course of his work.

He believes that artificial general intelligence (AGI), or the point where machines can learn and apply their knowledge as well as humans can, is not that far off.

“I don’t think AIs will just do everything for us and we’ll go and enjoy life at the beach. I think there is going to be this coexistence between smart machines and humans who still have a lot of agency and control. Now, I’m thinking about the interesting questions and applications once that happens. How can I help the world in this post-AGI future? I don’t have any answers yet, but it’s on my mind,” he says.


Leading quantum at an inflection point

The MIT Quantum Initiative is taking shape, leveraging quantum breakthroughs to drive the future of scientific and technological progress.


Danna Freedman is seeking the early adopters.

She is the faculty director of the nascent MIT Quantum Initiative, or QMIT. In this new role, Freedman is giving shape to an ambitious, Institute-wide effort to apply quantum breakthroughs to the most consequential challenges in science, technology, industry, and national security.

The interdisciplinary endeavor, the newest of MIT President Sally Kornbluth’s strategic initiatives, will bring together MIT researchers and domain experts from a range of industries to identify and tackle practical challenges wherever quantum solutions could achieve the greatest impact.

“We’ve already seen how the breadth of progress in quantum has created opportunities to rethink the future of security and encryption, imagine new modes of navigation, and even measure gravitational waves more precisely to observe the cosmos in an entirely new way,” says Freedman, the Frederick George Keyes Professor of Chemistry. “What can we do next? We’re investing in the promise of quantum, and where the legacy will be in 20 years.”

QMIT — the name is a nod to the “qubit,” the basic unit of quantum information — will formally launch on Dec. 8 with an all-day event on campus. Over time, the initiative plans to establish a physical home in the heart of campus for academic, public, and corporate engagement with state-of-the-art integrated quantum systems. Beyond MIT’s campus, QMIT will also work closely with the U.S. government and MIT Lincoln Laboratory, applying the lab’s capabilities in quantum hardware development, systems engineering, and rapid prototyping to national security priorities.

“The MIT Quantum Initiative seizes a timely opportunity in service to the nation’s scientific, economic, and technological competitiveness,” says Ian A. Waitz, MIT’s vice president for research. “With quantum capabilities approaching an inflection point, QMIT will engage students and researchers across all our schools and the college, as well as companies around the world, in thinking about what a step change in sensing and computational power will mean for a wide range of fields. Incredible opportunities exist in health and life sciences, fundamental physics research, cybersecurity, materials science, sensing the world around us, and more.”

Identifying the right questions

Quantum phenomena are as foundational to our world as light or gravity. At an extremely small scale, the interactions of atoms and subatomic particles are controlled by a different set of rules than the physical laws of the macro-sized world. These rules are called quantum mechanics.

“Quantum, in a sense, is what underlies everything,” says Freedman.

By leveraging quantum properties, quantum devices can process information at incredible speed to solve complex problems that aren’t feasible on classical supercomputers, and to enable ultraprecise sensing and measurement. Those improvements in speed and precision will become most powerful when optimized in relation to specific use cases, and as part of a complete quantum system. QMIT will focus on collaboration across domains to co-develop quantum tools, such as computers, sensors, networks, simulations, and algorithms, alongside the intended users of these systems.

As it develops, QMIT will be organized into programmatic pillars led by top researchers in quantum including Paola Cappellaro, Ford Professor of Engineering and professor of nuclear science and engineering and of physics; Isaac Chuang, Julius A. Stratton Professor in Electrical Engineering and Physics; Pablo Jarillo-Herrero, Cecil and Ida Green Professor of Physics; William Oliver, Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science and professor of physics; Vladan Vuletić, Lester Wolfe Professor of Physics; and Jonilyn Yoder, associate leader of the Quantum-Enabled Computation Group at MIT Lincoln Laboratory.

While supporting the core of quantum research in physics, engineering, mathematics, and computer science, QMIT promises to expand the community at its frontiers, into astronomy, biology, chemistry, materials science, and medicine.

“If you provide a foundation that somebody can integrate with, that accelerates progress a lot,” says Freedman. “Perhaps we want to figure out how a quantum simulator we’ve built can model photosynthesis, if that’s the right question — or maybe the right question is to study 10 failed catalysts to see why they failed.”

“We are going to figure out what real problems exist that we could approach with quantum tools, and work toward them in the next five years,” she adds. “We are going to change the forward momentum of quantum in a way that supports impact.”

The MIT Quantum Initiative will be administratively housed in the Research Laboratory of Electronics (RLE), with support from the Office of the Vice President for Research (VPR) and the Office of Innovation and Strategy.

QMIT is a natural expansion of MIT’s Center for Quantum Engineering (CQE), a research powerhouse that engages more than 80 principal investigators across the MIT campus and Lincoln Laboratory to accelerate the practical application of quantum technologies.

“CQE has cultivated a tremendously strong ecosystem of students and researchers, engaging with U.S. government sponsors and industry collaborators, including through the popular Quantum Annual Research Conference (QuARC) and professional development classes,” says Marc Baldo, the Dugald C. Jackson Professor in Electrical Engineering and director of RLE.

“With the backing of former vice president for research Maria Zuber, former Lincoln Lab director Eric Evans, and Marc Baldo, we launched CQE and its industry membership group in 2019 to help bridge MIT’s research efforts in quantum science and engineering,” says Oliver, CQE’s director, who also spent 20 years at Lincoln Laboratory, most recently as a Laboratory Fellow. “We have an important opportunity now to deepen our commitment to quantum research and education, and especially in engaging students from across the Institute in thinking about how to leverage quantum science and engineering to solve hard problems.”

Two years ago, Peter Fisher, the Thomas A. Frank (1977) Professor of Physics, in his role as associate vice president for research computing and data, assembled a faculty group led by Cappellaro and involving Baldo, Oliver, Freedman, and others, to begin to build an initiative that would span the entire Institute. Now, capitalizing on CQE’s success, Oliver will lead the new MIT Quantum Initiative’s quantum computing pillar, which will broaden the work of CQE into a larger effort that focuses on quantum computing, industry engagement, and connecting with end users.

The “MIT-hard” problem

QMIT will build upon the Institute’s historic leadership in quantum science and engineering. In the spring of 1981, MIT hosted the first Physics of Computation Conference at the Endicott House, bringing together nearly 50 physics and computing researchers to consider the practical promise of quantum — an intellectual moment that is now widely regarded as the kickoff of the second quantum revolution. (The first was the fundamental articulation of quantum mechanics 100 years ago.)

Today, research in quantum science and engineering produces a steady stream of “firsts” in the lab and a growing number of startup companies.

In collaboration with partners in industry and government, MIT researchers develop advances in areas like quantum sensing, which involves the use of atomic-scale systems to measure certain properties, like distance and acceleration, with extreme precision. Quantum sensing could be used in applications like brain imaging devices that capture more detail, or air traffic control systems with greater positional accuracy.

Another key area of research is quantum simulation, which uses the power of quantum computers to accurately emulate complex systems. This could fuel the discovery of new materials for energy-efficient electronics or streamline the identification of promising molecules for drug development.

“Historically, when we think about the most well-articulated challenges that quantum will solve,” Freedman says, “the best ones have come from inside of MIT. We’re open to technological solutions to problems, and nontraditional approaches to science. In many respects, we are the early adopters.”

But she also draws a sharp distinction between blue-sky thinking about what quantum might do, and the deeply technical, deeply collaborative work of actually drawing the roadmap. “That’s the ‘MIT-hard’ problem,” she says.

The QMIT launch event on Dec. 8 will feature talks and discussions with MIT faculty, including Nobel laureates, as well as industry leaders.


Particles that enhance mRNA delivery could reduce vaccine dosage and costs

Using these nanoparticles to deliver a flu vaccine, researchers observed an effective immune response at a much lower dose.


A new delivery particle developed at MIT could make mRNA vaccines more effective and potentially lower the cost per vaccine dose.

In studies in mice, the researchers showed that an mRNA influenza vaccine delivered with their new lipid nanoparticle could generate the same immune response as mRNA delivered by nanoparticles made with FDA-approved materials, but at around 1/100 the dose.

“One of the challenges with mRNA vaccines is the cost,” says Daniel Anderson, a professor in MIT’s Department of Chemical Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science (IMES). “When you think about the cost of making a vaccine that could be distributed widely, it can really add up. Our goal has been to try to make nanoparticles that can give you a safe and effective vaccine response but at a much lower dose.”

While the researchers used their particles to deliver a flu vaccine, they could also be used for vaccines for Covid-19 and other infectious diseases, they say.

Anderson is the senior author of the study, which appears today in Nature Nanotechnology. The lead authors of the paper are Arnab Rudra, a visiting scientist at the Koch Institute; Akash Gupta, a Koch Institute research scientist; and Kaelan Reed, an MIT graduate student.

Efficient delivery

To protect mRNA vaccines from breaking down in the body after injection, they are packaged inside a lipid nanoparticle, or LNP. These fatty spheres help mRNA get into cells so that it can be translated into a fragment of a protein from a pathogen such as influenza or SARS-CoV-2.

In the new study, the MIT team sought to develop particles that can induce an effective immune response, but at a lower dose than the particles now used to deliver Covid-19 mRNA vaccines. That could not only reduce the costs per vaccine dose, but may also help to lessen the potential side effects, the researchers say.

LNPs typically consist of five elements: an ionizable lipid, cholesterol, a helper phospholipid, a polyethylene glycol lipid, and mRNA. In this study, the researchers focused on the ionizable lipid, which plays a key role in vaccine strength.

Based on their knowledge of chemical structures that might improve delivery efficiency, the researchers designed a library of new ionizable lipids. These contained cyclic structures, which can help enhance mRNA delivery, as well as chemical groups called esters, which the researchers believed could also help improve biodegradability.

The researchers then created and screened many combinations of these particle structures in mice to see which could most effectively deliver the gene for luciferase, a bioluminescent protein. Then, they took their top-performing particle and created a library of new variants, which they tested in another round of screening.

From these screens, the top LNP that emerged is one that the researchers called AMG1541. One key feature of these new LNPs is that they are more effective in dealing with a major barrier for delivery particles, known as endosomal escape. After LNPs enter cells, they are isolated in cellular compartments called endosomes, which they need to break out of to deliver their mRNA. The new particles did this more effectively than existing LNPs.

Another advantage of the new LNPs is that the ester groups in the tails make the particles degradable once they have delivered their cargo. This means they can be cleared from the body quickly, which the researchers believe could reduce side effects from the vaccine.

More powerful vaccines

To demonstrate the potential applications of the AMG1541 LNP, the researchers used it to deliver an mRNA influenza vaccine in mice. They compared this vaccine’s effectiveness to a flu vaccine made with a lipid called SM-102, which is FDA-approved and was used by Moderna in its Covid-19 vaccine.

Mice vaccinated with the new particles generated the same antibody response as mice vaccinated with the SM-102 particle, but only 1/100 of the dose was needed to generate that response, the researchers found.

“It’s almost a hundredfold lower dose, but you generate the same amount of antibodies, so that can significantly lower the dose. If it translates to humans, it should significantly lower the cost as well,” Rudra says.

Further experiments revealed that the new LNPs are better able to deliver their cargo to a critical type of immune cells called antigen-presenting cells. These cells chop up foreign antigens and display them on their surfaces, which signals other immune cells such as B and T cells to become activated against that antigen.

The new LNPs are also more likely to accumulate in the lymph nodes, where they encounter many more immune cells.

Using these particles to deliver mRNA flu vaccines could allow vaccine developers to better match the strains of flu that circulate each winter, the researchers say. “With traditional flu vaccines, they have to start being manufactured almost a year ahead of time,” Reed says. “With mRNA, you can start producing it much later in the season and get a more accurate guess of what the circulating strains are going to be, and it may help improve the efficacy of flu vaccines.”

The particles could also be adapted for vaccines for Covid-19, HIV, or any other infectious disease, the researchers say.

“We have found that they work much better than anything that has been reported so far. That’s why, for any intramuscular vaccines, we think that our LNP platforms could be used to develop vaccines for a number of diseases,” Gupta says.

The research was funded by Sanofi, the National Institutes of Health, the Marble Center for Cancer Nanomedicine, and the Koch Institute Support (core) Grant from the National Cancer Institute.


Giving buildings an “MRI” to make them more energy-efficient and resilient

Founded by a team from MIT, Lamarr.AI uses drones, thermal imaging, and AI to help property owners make targeted investments in their buildings.


Older buildings let thousands of dollars’ worth of energy go to waste each year through leaky roofs, old windows, and insufficient insulation. But even as building owners face mounting pressure to comply with stricter energy codes, making smart decisions about how to invest in efficiency is a major challenge.

Lamarr.AI, born in part from MIT research, is making the process of finding ways to improve the energy efficiency of buildings as easy as clicking a button. When customers order a building review, it triggers a coordinated symphony of drones, thermal and visible-range cameras, and artificial intelligence designed to identify problems and quantify the impact of potential upgrades. Lamarr.AI’s technology also assesses structural conditions, creates detailed 3D models of buildings, and recommends retrofits. The solution is already being used by leading organizations across facilities management as well as by architecture, engineering, and construction firms.

“We identify the root cause of the anomalies we find,” says CEO and co-founder Tarek Rakha PhD ’15. “Our platform doesn’t just say, ‘This is a hot spot and this is a cold spot.’ It specifies ‘This is infiltration or exfiltration. This is missing insulation. This is water intrusion.’ The detected anomalies are also mapped to a 3D model of the building, and there are deeper analytics, such as the cost of each retrofit and the return on investment.”

To date, the company estimates its platform has helped clients across health care, higher education, and multifamily housing avoid over $3 million in unnecessary construction and retrofit costs by recommending targeted interventions over costly full-system replacements, while improving energy performance and extending asset life. For building owners managing portfolios worth hundreds of millions of dollars, Lamarr.AI’s approach represents a fundamental shift from reactive maintenance to strategic asset management.

The founders, who also include MIT Professor John Fernández and Research Scientist Norhan Bayomi SM ’17, PhD ’21, are thrilled to see their technology accelerating the transition to more energy-efficient and higher-performing buildings.

“Reducing carbon emissions in buildings gets you the greatest return on investment in terms of climate interventions, but what has been needed are the technologies and tools to help the real estate and construction sectors make the right decisions in a timely and economical way,” Fernández says.

Automating building scans

Bayomi and Rakha completed their PhDs in the MIT Department of Architecture’s Building Technology Program. For her thesis, Bayomi developed technology to detect features of building exteriors and classify thermal anomalies through scans of buildings, with a specific focus on the impact of heat waves on low-income communities. Bayomi and her collaborators eventually deployed the system to detect air leaks as part of a partnership with a community in New York City.

After graduating from MIT, Rakha became an assistant professor at Syracuse University. In 2015, together with fellow Syracuse University Professor Senem Velipasalar, he began developing his concept for drone-based building analytics — an idea that later received support through a grant from New York State’s Department of Economic Development. In 2019, Bayomi and Fernández joined the project, and the team received a $1.8 million research award from the U.S. Department of Energy.

“The technology is like giving a building an MRI using drones, infrared imaging, visible light imaging, and proprietary AI that we developed through computer vision technology, along with large language models for report generation,” Rakha explains.

“When we started the research, we saw firsthand how vulnerable communities were suffering from inefficient buildings, but couldn’t afford comprehensive diagnostics,” Bayomi says. “We knew that if we could automate this process and reduce costs while improving accuracy, we’d unlock a massive market. Now we’re seeing demand from everyone, from municipal buildings to major institutional portfolios.”

Lamarr.AI was officially founded in 2021 to commercialize the technology, and the founders wasted no time tapping into MIT’s entrepreneurial ecosystem. First, they received a small seed grant from the MIT Sandbox Innovation Fund. In 2022, they won the MITdesignX prize and were semifinalists in the MIT $100K Entrepreneurship Competition. The founders named the company after Hedy Lamarr, the famous actress and inventor of a patented technology that became the basis for many modern secure communications.

Current methods for detecting air leaks in buildings utilize fan pressurizers or smoke. Contractors or building engineers may also spot-check buildings with handheld infrared cameras to manually identify temperature differences across individual walls, windows, and ductwork.

Lamarr.AI’s system can perform building inspections far more quickly. Building managers can order the company’s scans online and select when they’d like the drone to fly. Lamarr.AI partners with drone companies worldwide to fly off-the-shelf drones around buildings, providing them with flight plans and specifications for success. Images are then uploaded onto Lamarr.AI’s platform for automated analysis.

“As an example, a survey of a 180,000-square-foot building like the MIT Schwarzman College of Computing, which we scanned, produces around 2,000 images,” Fernández says. “For someone to go through those manually would take a couple of weeks. Our models autonomously analyze those images in a few seconds.”

After the analysis, Lamarr.AI’s platform generates a report that includes the suspected root cause of every weak point found, an estimated cost to correct that problem, and its estimated return on investment using advanced building energy simulations.

“We knew if we were able to quickly, inexpensively, and accurately survey the thermal envelope of buildings and understand their performance, we would be addressing a huge need in the real estate, building construction, and built environment sectors,” Fernández explains. “Thermal anomalies are a huge cause of unwanted heat loss, and more than 45 percent of construction defects are tied to envelope failures.”

The ability to operate at scale is especially attractive to building owners and operators, who often manage large portfolios of buildings across multiple campuses.

“We see Lamarr.AI becoming the premier solution for building portfolio diagnostics and prognosis across the globe, where every building can be equipped not just for the climate crisis, but also to minimize energy losses and be more efficient, safer, and sustainable,” Rakha says.

Building science for everyone

Lamarr.AI has worked with building operators across the U.S. as well as in Canada, the United Kingdom, and the United Arab Emirates.

In June, Lamarr.AI partnered with the City of Detroit, with support from Newlab and Michigan Central, to inspect three municipal buildings to identify areas for improvement. Across two of the buildings, the system identified more than 460 problems like insulation gaps and water leaks. The findings were presented in a report that also utilized energy simulations to demonstrate that upgrades, such as window replacements and targeted weatherization, could reduce HVAC energy use by up to 22 percent.

The entire process took a few days. The founders note that it was the first building inspection drone flight to utilize an off-site operator, an approach that further enhances the scalability of their platform. It also helps further reduce costs, which could make building scans available to a broader swath of people around the world.

“We’re democratizing access to very high-value building science expertise that previously cost tens of thousands per audit,” Bayomi says. “Our platform makes advanced diagnostics affordable enough for routine use, not just one-time assessments. The bigger vision is automated, regular building health monitoring that keeps facilities teams informed in real-time, enabling proactive decisions rather than reactive crisis management. When building intelligence becomes continuous and accessible, operators can optimize performance systematically rather than waiting for problems to emerge.”


Charting the future of AI, from safer answers to faster thinking

MIT PhD students who interned with the MIT-IBM Watson AI Lab Summer Program are pushing AI tools to be more flexible, efficient, and grounded in truth.


New tools and technologies are adopted when users largely perceive them as reliable, accessible, and a cost-effective improvement over available methods and workflows. Five PhD students from the inaugural class of the MIT-IBM Watson AI Lab Summer Program are utilizing state-of-the-art resources, alleviating AI pain points, and creating new features and capabilities to promote AI usefulness and deployment — from learning when to trust a model that predicts another’s accuracy to more effectively reasoning over knowledge bases. Together, the efforts from the students and their mentors form a through-line, where practical and technically rigorous research leads to more dependable and valuable models across domains.

By building probes, routers, new attention mechanisms, synthetic datasets, and program-synthesis pipelines, the students have produced work that spans safety, inference efficiency, multimodal data, and knowledge-grounded reasoning. Their techniques emphasize scaling and integration, with impact always in sight.

Learning to trust, and when

MIT math graduate student Andrey Bryutkin’s research prioritizes the trustworthiness of models. He seeks out internal structures within problems, such as equations governing a system and conservation laws, to understand how to leverage them to produce more dependable and robust solutions. Armed with this and working with the lab, Bryutkin developed a method to peer into the behavior of large language models (LLMs). Together with the lab’s Veronika Thost of IBM Research and Marzyeh Ghassemi — associate professor and the Germeshausen Career Development Professor in the MIT Department of Electrical Engineering and Computer Science (EECS) and a member of the Institute for Medical Engineering and Science and the Laboratory for Information and Decision Systems — Bryutkin explored the “uncertainty of uncertainty” of LLMs.

Classically, tiny feed-forward neural networks two-to-three layers deep, called probes, are trained alongside LLMs and employed to flag untrustworthy answers from the larger model to developers; however, these classifiers can also produce false negatives and only provide point estimates, which don’t offer much information about when the LLM is failing. Investigating safe/unsafe prompts and question-answer tasks, the MIT-IBM team used prompt-label pairs, as well as the hidden states like activation vectors and last tokens from an LLM, to measure gradient scores, sensitivity to prompts, and out-of-distribution data to determine how reliable the probe was and learn areas of data that are difficult to predict. Their method also helps identify potential labeling noise. This is a critical function, as the trustworthiness of AI systems depends entirely on the quality and accuracy of the labeled data they are built upon. More accurate and consistent probes are especially important for domains with critical data in applications like IBM’s Granite Guardian family of models.
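
To make the idea concrete, the sketch below shows the general shape of such a probe: a small feed-forward classifier trained on an LLM’s hidden states to flag answers that should not be trusted. It is a simplified illustration under assumed inputs (last-token activation vectors and trust labels), not the team’s code, and it omits the gradient- and sensitivity-based analyses described above.

```python
# Minimal sketch (not the paper's code): a small feed-forward "probe" trained on
# an LLM's last-token hidden states to flag answers that are likely untrustworthy.
# The hidden-state extraction and the labels are assumed to come from elsewhere.
import torch
import torch.nn as nn

class TrustProbe(nn.Module):
    """Two-layer classifier over frozen LLM activations."""
    def __init__(self, hidden_dim: int, probe_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_dim, probe_dim),
            nn.ReLU(),
            nn.Linear(probe_dim, 1),  # logit: probability the answer is unsafe/wrong
        )

    def forward(self, h_last_token: torch.Tensor) -> torch.Tensor:
        return self.net(h_last_token).squeeze(-1)

def train_probe(probe, activations, labels, epochs=10, lr=1e-3):
    # activations: (N, hidden_dim) last-token states; labels: (N,) 0 = trusted, 1 = flagged
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(probe(activations), labels.float())
        loss.backward()
        opt.step()
    return probe
```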

Another way to ensure trustworthy responses to queries from an LLM is to augment them with external, trusted knowledge bases to eliminate hallucinations. For structured data, such as social media connections, financial transactions, or corporate databases, knowledge graphs (KGs) are natural fits; however, communications between the LLM and KGs often use fixed, multi-agent pipelines that are computationally inefficient and expensive. Addressing this, physics graduate student Jinyeop Song, along with lab researchers Yada Zhu of IBM Research and EECS Associate Professor Julian Shun, created a single-agent, multi-turn, reinforcement learning framework that streamlines this process. Here, the group designed an API server hosting Freebase and Wikidata KGs, which consist of general web-based knowledge data, and an LLM agent that issues targeted retrieval actions to fetch pertinent information from the server. Then, through continuous back-and-forth, the agent appends the gathered data from the KGs to the context and responds to the query. Crucially, the system uses reinforcement learning to train itself to deliver answers that strike a balance between accuracy and completeness. The result is a framework that pairs an API server with a single reinforcement-learning agent to orchestrate data-grounded reasoning with improved accuracy, transparency, efficiency, and transferability.
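
The loop below is a rough sketch of that single-agent, multi-turn pattern: the agent alternates between issuing a retrieval action against the knowledge-graph server and folding the returned facts into its context before answering. The functions `llm_generate` and `kg_server_query` are hypothetical placeholders, and the reinforcement-learning training that tunes the agent’s policy is not shown.

```python
# Illustrative sketch only: a single agent alternates between issuing retrieval
# actions against a knowledge-graph API server and updating its context, then
# answers. `llm_generate` and `kg_server_query` are hypothetical stand-ins for
# the LLM call and the Freebase/Wikidata API server described in the article.
def answer_with_kg(question, llm_generate, kg_server_query, max_turns=5):
    context = [f"Question: {question}"]
    for _ in range(max_turns):
        # The agent decides its next move: either a targeted KG retrieval
        # action (e.g., "RETRIEVE relations of entity X") or a final answer.
        action = llm_generate("\n".join(context) + "\nNext action:")
        if action.startswith("ANSWER:"):
            return action[len("ANSWER:"):].strip()
        # Otherwise, fetch facts from the KG server and append them to context.
        facts = kg_server_query(action)
        context.append(f"Action: {action}")
        context.append(f"Retrieved: {facts}")
    # Fall back to answering from whatever has been gathered so far.
    return llm_generate("\n".join(context) + "\nFinal answer:")
```

In the actual framework, reinforcement learning rewards the agent for answers that balance accuracy and completeness; this sketch only captures the retrieve-and-respond structure.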

Spending computation wisely

The timeliness and completeness of a model’s response carry similar weight to its accuracy. This is especially true for handling long input texts and those where elements, like the subject of a story, evolve over time, so EECS graduate student Songlin Yang is re-engineering what models can handle at each step of inference. Focusing on the limitations of transformers, like those underlying LLMs, the lab’s Rameswar Panda of IBM Research and Yoon Kim, the NBX Professor and associate professor in EECS, joined Yang to develop next-generation language model architectures beyond transformers.

Transformers face two key limitations: high computational complexity in long-sequence modeling due to the softmax attention mechanism, and limited expressivity resulting from the weak inductive bias of RoPE (rotary positional encoding). This means that as the input length doubles, the computational cost quadruples. RoPE allows transformers to understand the sequence order of tokens (i.e., words); however, it does not do a good job capturing internal state changes over time, like variable values, and is limited to the sequence lengths seen during training.

To address this, the MIT-IBM team explored theoretically grounded yet hardware-efficient algorithms. As an alternative to softmax attention, they adopted linear attention, reducing the quadratic complexity that limits the feasible sequence length. They also investigated hybrid architectures that combine softmax and linear attention to strike a better balance between computational efficiency and performance.
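
The contrast can be seen in a few lines of code. In the simplified sketch below (not the team’s architecture), softmax attention materializes an n-by-n score matrix, which is why its cost grows quadratically with sequence length, while linear attention reorders the matrix products so the per-token cost stays roughly constant. The feature map and normalization are deliberately minimal.

```python
# Simplified comparison (not the team's architecture): softmax attention costs
# O(n^2) in sequence length n, while linear attention reorders the products so
# the cost grows linearly with n. Feature map and normalization are simplified.
import torch

def softmax_attention(q, k, v):
    # q, k, v: (n, d). The (n, n) score matrix is what makes the cost quadratic.
    scores = torch.softmax(q @ k.T / q.shape[-1] ** 0.5, dim=-1)  # (n, n)
    return scores @ v

def linear_attention(q, k, v, eps=1e-6):
    # Replace softmax with a positive feature map phi, then use associativity:
    # (phi(q) phi(k)^T) v == phi(q) (phi(k)^T v), avoiding the (n, n) matrix.
    phi = lambda x: torch.nn.functional.elu(x) + 1.0
    q, k = phi(q), phi(k)
    kv = k.T @ v                          # (d, d) summary, independent of n
    normalizer = q @ k.sum(dim=0) + eps   # (n,)
    return (q @ kv) / normalizer.unsqueeze(-1)
```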

To increase expressivity, they replaced RoPE with a dynamic reflective positional encoding based on the Householder transform. This approach enables richer positional interactions for deeper understanding of sequential information, while maintaining fast and efficient computation. The MIT-IBM team’s advancement reduces the need for transformers to break problems into many steps, instead enabling them to handle more complex subproblems with fewer inference tokens.
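
For reference, a Householder transform is the reflection H = I - 2vv^T defined by a unit vector v; because it is orthogonal, it can transform query and key vectors in a position-dependent way without distorting their lengths. The snippet below is only a generic illustration of that mathematical object, not the encoding used in the team’s architecture.

```python
# Illustration only: a Householder transform is the reflection H = I - 2 v v^T
# for a unit vector v. A position-dependent choice of v gives each position its
# own orthogonal transform of the query/key vectors; the team's actual encoding
# is more sophisticated, and this is not their implementation.
import torch

def householder(v: torch.Tensor) -> torch.Tensor:
    v = v / v.norm()
    d = v.shape[0]
    return torch.eye(d) - 2.0 * torch.outer(v, v)

# Example: apply a distinct reflection at each sequence position.
d, n = 8, 4
queries = torch.randn(n, d)
position_vectors = torch.randn(n, d)  # could be made a learned function of position
encoded = torch.stack([householder(position_vectors[i]) @ queries[i] for i in range(n)])
```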

Visions anew

Visual data contain multitudes that the human brain can quickly parse, internalize, and then imitate. Using vision-language models (VLMs), two graduate students are exploring ways to do this through code.

Over the past two summers and under the advisement of Aude Oliva, MIT director of the MIT-IBM Watson AI Lab and a senior research scientist in the Computer Science and Artificial Intelligence Laboratory; and IBM Research’s Rogerio Feris, Dan Gutfreund, and Leonid Karlinsky (now at Xero), Jovana Kondic of EECS has explored visual document understanding, specifically charts. These contain elements, such as data points, legends, and axis labels, that require optical character recognition and numerical reasoning, which models still struggle with. To improve performance on tasks such as these, Kondic’s group set out to create a large, open-source, synthetic chart dataset from code that could be used for training and benchmarking.

With their prototype, ChartGen, the researchers created a pipeline that passes seed chart images through a VLM, which is prompted to read the chart and generate a Python script that was likely used to create the chart in the first place. The LLM component of the framework then iteratively augments the code from many charts to ultimately produce over 200,000 unique pairs of charts and their codes, spanning nearly 30 chart types, as well as supporting data and annotations such as descriptions and question-answer pairs about the charts. The team is further expanding their dataset, helping to enable critical multimodal understanding of data visualizations for enterprise applications like financial and scientific reports, blogs, and more.
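
A conceptual version of that pipeline is sketched below: a VLM turns each seed chart into a candidate Python plotting script, an LLM repeatedly augments the script, and executing each augmented script yields a new chart paired with its code. The function arguments are hypothetical stand-ins, and the annotation steps (descriptions and question-answer pairs) are omitted.

```python
# Conceptual sketch of the chart-synthesis loop described above, not the ChartGen
# code itself. `vlm_chart_to_code`, `llm_augment_code`, and `render_chart` are
# hypothetical callables standing in for the VLM, LLM, and plotting steps.
def synthesize_chart_pairs(seed_images, vlm_chart_to_code, llm_augment_code,
                           render_chart, n_variants_per_seed=10):
    dataset = []
    for image in seed_images:
        # 1) The VLM reads the seed chart and reconstructs a plausible Python
        #    plotting script that could have produced it.
        code = vlm_chart_to_code(image)
        for _ in range(n_variants_per_seed):
            # 2) The LLM augments the script: new data, chart type, labels, style.
            code = llm_augment_code(code)
            # 3) Executing the script yields a new chart image paired with its code.
            chart_image = render_chart(code)
            dataset.append({"image": chart_image, "code": code})
    return dataset
```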

Instead of charts, EECS graduate student Leonardo Hernandez Cano has his eyes on digital design, specifically visual texture generation for CAD applications, with the goal of discovering efficient ways to enable these capabilities in VLMs. Teaming up with the lab groups led by Armando Solar-Lezama, EECS professor and Distinguished Professor of Computing in the MIT Schwarzman College of Computing, and IBM Research’s Nathan Fulton, Hernandez Cano created a program synthesis system that learns to refine code on its own. The system starts with a texture description given by a user in the form of an image. It then generates an initial Python program, which produces visual textures, and iteratively refines the code with the goal of finding a program that produces a texture that matches the target description, learning to search for new programs from the data that the system itself produces. Through these refinements, the novel program can create visualizations with the desired luminosity, color, iridescence, etc., mimicking real materials.
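
The core of such a system is a refine-render-compare loop, sketched below under assumed helper functions (`generate_initial_program`, `propose_refinement`, `render_texture`, `similarity`); it is an illustration of the idea rather than the actual implementation. Each accepted refinement also yields a new program-texture pair that could, in principle, feed back into training, echoing the self-generated search data described above.

```python
# Rough sketch of a refine-render-compare loop, not the actual system.
# The four helper callables are hypothetical placeholders supplied by the caller.
def synthesize_texture_program(target_image, generate_initial_program,
                               propose_refinement, render_texture, similarity,
                               max_iters=50):
    # Start from an initial Python texture program guessed from the target image.
    best_program = generate_initial_program(target_image)
    best_score = similarity(render_texture(best_program), target_image)
    for _ in range(max_iters):
        # Propose an edited program, render it, and score it against the target.
        candidate = propose_refinement(best_program, target_image)
        score = similarity(render_texture(candidate), target_image)
        if score > best_score:              # keep only improvements
            best_program, best_score = candidate, score
    return best_program
```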

When viewed together, these projects, and the people behind them, are making a cohesive push toward more robust and practical artificial intelligence. By tackling the core challenges of reliability, efficiency, and multimodal reasoning, the work paves the way for AI systems that are not only more powerful, but also more dependable and cost-effective, for real-world enterprise and scientific applications.


Where climate meets community

MIT’s Living Climate Futures Lab takes a human-centered approach to investigating a global challenge.


The MIT Living Climate Futures Lab (LCFL) centers the human dimensions of climate change, bringing together expertise from across MIT to address one of the world’s biggest challenges.

The LCFL has three main goals: “addressing how climate change plays out in everyday life, focusing on community-oriented partnerships, and encouraging cross-disciplinary conversations around climate change on campus,” says Chris Walley, the SHASS Dean’s Distinguished Professor of Anthropology and head of MIT’s Anthropology Section. “We think this is a crucial direction for MIT and will make a strong statement about the kind of human-centered, interdisciplinary work needed to tackle this issue.”

Walley is faculty lead of LCFL, working in collaboration with a group of 19 faculty colleagues and researchers. The LCFL began to coalesce in 2022 when MIT faculty and affiliates already working with communities dealing with climate change issues organized a symposium, inviting urban farmers, place-based environmental groups, and others to MIT. Since then, the lab has consolidated the efforts of faculty and affiliates representing disciplines from across the MIT School of Humanities, Arts, and Social Sciences (SHASS) and the Institute.

Amah Edoh, a cultural anthropologist and managing director of LCFL, says the lab’s collaboration with community organizations and its development of experiential learning classes aim to bridge the gap that can exist between the classroom and the real world.

“Sometimes we can find ourselves in a bubble where we’re only in conversation with other people from within academia or our own field of practice. There can be a disconnect between what students are learning somewhat abstractly and the ‘real world’ experience of the issues,” Edoh says. “By taking up topics from the multidimensional approach that experiential learning makes possible, students learn to take complexity as a given, which can help to foster more critical thinking in them, and inform their future practice in profound ways.”

Edoh points out that the effects of climate change play out in a huge array of areas: health, food security, livelihoods, housing, and governance structures, to name a few.

“The Living Climate Futures Lab supports MIT researchers in developing the long-term collaborations with community partners that are essential to adequately identifying and responding to the challenges that climate change creates in everyday life,” she says.

Manduhai Buyandelger, professor of anthropology and one of the participants in LCFL, developed the class 21A.S01 (Anthro-Engineering: Decarbonization at the Million-Person Scale), which has in turn sparked related classes. The goal is “to merge technological innovation with people-centered environments.” Working closely with residents of Ulaanbaatar, Mongolia, Buyandelger and collaborator Mike Short, the Class of 1941 Professor of Nuclear Science and Engineering, helped develop a molten salt heat bank as a reusable energy source.

“My work with Mike Short on energy and alternative heating in Mongolia helps to cultivate a new generation of creative and socially minded engineers who prioritize people in thinking about technical solutions,” Buyandelger says, adding, “In our course, we collaborate on creating interdisciplinary methods where we fuse anthropological methods with engineering innovations so that we can expand and deepen our approach to mitigate climate change.”

Iselle Barrios ’25 says 21A.S01 was her first anthropology course. She traveled to Mongolia and was able to experience firsthand all the ways in which the air pollution and heating problem was much larger and more complicated than it seemed from MIT’s Cambridge, Massachusetts, campus.

“It was my first exposure to anthropological and STS critiques of science and engineering, as well as international development,” says Barrios, a chemical engineering major. “It fundamentally reshaped the way I see the role of technology and engineers in the broader social context in which they operate. It really helped me learn to think about problems in a more holistic and people-centered way.”

LCFL participant Alvin Harvey, a postdoc in the MIT Media Lab’s Space Enabled Research Group and a citizen of the Navajo Nation, works to incorporate traditional knowledge in engineering and science to “support global stewardship of earth and space ecologies.”

"I envision the Living Climate Futures Lab as a collaborative space that can be an igniter and sustainer of relationships, especially between MIT and those whose have generational and cultural ties to land and space that is being impacted by climate change,” Harvey says. “I think everyone in our lab understands that protecting our climate future is a collective journey."

Kate Brown, the Thomas M. Siebel Distinguished Professor in History of Science, is also a participant in LCFL. Her current interest is urban food sovereignty movements, in which working-class city dwellers used waste to create “the most productive agriculture in recorded human history,” Brown says. While pursuing that work, Brown has developed relationships and worked with urban farmers in Mansfield, Ohio, as well as in Washington and Amsterdam.

Brown and Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies and Chemistry, teach a class called STS.055 (Living Dangerously: Environmental Programs from 1900 to Today) that presents the environmental problems and solutions of the 20th century, and how some “solutions” created more problems over time. Brown also plans to teach a class on the history of global food production once she gets access to a small plot of land on campus for a lab site.

“The Living Climate Futures Lab gives us the structure and flexibility to work with communities that are struggling to find solutions to the problems being created by the climate crisis,” says Brown.

Earlier this year, the MIT Human Insight Collaborative (MITHIC) selected the Living Climate Futures Lab as its inaugural Faculty-Driven Initiative (FDI), which comes with a $500,000 seed grant.

MIT Provost Anantha Chandrakasan, co-chair of MITHIC, says the LCFL exemplifies how we can confront the climate crisis by working in true partnership with the communities most affected.

“By combining scientific insight with cultural understanding and lived experience, this initiative brings a deeper dimension to MIT’s climate efforts — one grounded in collaboration, empathy, and real-world impact,” says Chandrakasan.

Agustín Rayo, the Kenan Sahin Dean of SHASS and co-chair of MITHIC, says the LCFL is precisely the type of interdisciplinary collaboration the FDI program was designed to support.

"By bringing together expertise from across MIT, I am confident the Living Climate Futures Lab will make significant contributions in the Institute’s effort to address the climate crisis," says Rayo.

Walley said the seed grant will support a second symposium in 2026 to be co-designed with community groups, a suite of experiential learning classes, workshops, a speaker series, and other programming. Throughout this development phase, the lab will solicit donor support to build it into an ongoing MIT initiative and a leader in the response to climate change.


MIT physicists observe key evidence of unconventional superconductivity in magic-angle graphene

The findings could open a route to new forms of higher-temperature superconductors.


Superconductors are like the express trains in a metro system. Any electricity that “boards” a superconducting material can zip through it without stopping or losing energy along the way. As such, superconductors are extremely energy efficient, and are used today to power a variety of applications, from MRI machines to particle accelerators.

But these “conventional” superconductors are somewhat limited in terms of uses because they must be brought down to ultra-low temperatures using elaborate cooling systems to keep them in their superconducting state. If superconductors could work at higher, room-like temperatures, they would enable a new world of technologies, from zero-energy-loss power cables and electricity grids to practical quantum computing systems. And so scientists at MIT and elsewhere are studying “unconventional” superconductors — materials that exhibit superconductivity in ways that are different from, and potentially more promising than, today’s superconductors.

In a promising breakthrough, MIT physicists have today reported their observation of new key evidence of unconventional superconductivity in “magic-angle” twisted tri-layer graphene (MATTG) — a material that is made by stacking three atomically-thin sheets of graphene at a specific angle, or twist, that then allows exotic properties to emerge.

MATTG has shown indirect hints of unconventional superconductivity and other strange electronic behavior in the past. The new discovery, reported in the journal Science, offers the most direct confirmation yet that the material exhibits unconventional superconductivity.

In particular, the team was able to measure MATTG’s superconducting gap — a property that describes how resilient a material’s superconducting state is at given temperatures. They found that MATTG’s superconducting gap looks very different from that of the typical superconductor, meaning that the mechanism by which the material becomes superconductive must also be different, and unconventional.

“There are many different mechanisms that can lead to superconductivity in materials,” says study co-lead author Shuwen Sun, a graduate student in MIT’s Department of Physics. “The superconducting gap gives us a clue to what kind of mechanism can lead to things like room-temperature superconductors that will eventually benefit human society.”

The researchers made their discovery using a new experimental platform that allows them to essentially “watch” the superconducting gap in real time, as superconductivity emerges in two-dimensional materials. They plan to apply the platform to further probe MATTG, and to map the superconducting gap in other 2D materials — an effort that could reveal promising candidates for future technologies.

“Understanding one unconventional superconductor very well may trigger our understanding of the rest,” says Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics at MIT and a member of the Research Laboratory of Electronics. “This understanding may guide the design of superconductors that work at room temperature, for example, which is sort of the Holy Grail of the entire field.”

The study’s other co-lead author is Jeong Min Park PhD ’24; Kenji Watanabe and Takashi Taniguchi of the National Institute for Materials Science in Japan are also co-authors.

The ties that bind

Graphene is a material that comprises a single layer of carbon atoms that are linked in a hexagonal pattern resembling chicken wire. A sheet of graphene can be isolated by carefully exfoliating an atom-thin flake from a block of graphite (the same stuff as pencil lead). In the 2010s, theorists predicted that if two graphene layers were stacked at a very special angle, the resulting structure should be capable of exotic electronic behavior.

In 2018, Jarillo-Herrero and his colleagues became the first to produce magic-angle graphene in experiments, and to observe some of its extraordinary properties. That discovery sprouted an entire new field known as “twistronics,” the study of atomically thin, precisely twisted materials. Jarillo-Herrero’s group has since studied other configurations of magic-angle graphene with two, three, and more layers, as well as stacked and twisted structures of other two-dimensional materials. Their work, along with that of other groups, has revealed some signatures of unconventional superconductivity in some structures.

Superconductivity is a state that a material can exhibit under certain conditions (usually at very low temperatures). When a material is a superconductor, any electrons that pass through can pair up, rather than repelling and scattering away. When they couple up in what are known as “Cooper pairs,” the electrons can glide through a material without friction, instead of knocking against each other and flying away as lost energy. This pairing up of electrons is what enables superconductivity, though the way in which they are bound can vary.

“In conventional superconductors, the electrons in these pairs are very far away from each other, and weakly bound,” says Park. “But in magic-angle graphene, we could already see signatures that these pairs are very tightly bound, almost like a molecule. There were hints that there is something very different about this material.”

Tunneling through

In their new study, Jarillo-Herrero and his colleagues aimed to directly observe and confirm unconventional superconductivity in a magic-angle graphene structure. To do so, they would have to measure the material’s superconducting gap.

“When a material becomes superconducting, electrons move together as pairs rather than individually, and there’s an energy ‘gap’ that reflects how they’re bound,” Park explains. “The shape and symmetry of that gap tells us the underlying nature of the superconductivity.”

Scientists have measured the superconducting gap in materials using specialized techniques, such as tunneling spectroscopy. The technique takes advantage of a quantum mechanical property known as “tunneling.” At the quantum scale, an electron behaves not just as a particle, but also as a wave; as such, its wave-like properties enable an electron to travel, or “tunnel,” through a material, as if it could move through walls.

Such tunneling spectroscopy measurements can give an idea of how easy it is for an electron to tunnel into a material, and in some sense, how tightly packed and bound the electrons in the material are. When performed in a superconducting state, it can reflect the properties of the superconducting gap. However, tunneling spectroscopy alone cannot always tell whether the material is, in fact, in a superconducting state. Directly linking a tunneling signal to a genuine superconducting gap is both essential and experimentally challenging.

In their new work, Park and her colleagues developed an experimental platform that combines electron tunneling with electrical transport — a technique that is used to gauge a material’s superconductivity by sending current through it and continuously measuring its electrical resistance (zero resistance signals that a material is in a superconducting state).

The team applied the new platform to measure the superconducting gap in MATTG. By combining tunneling and transport measurements in the same device, they could unambiguously identify the superconducting tunneling gap, one that appeared only when the material exhibited zero electrical resistance, which is the hallmark of superconductivity. They then tracked how this gap evolved under varying temperature and magnetic fields. Remarkably, the gap displayed a distinct V-shaped profile, which was clearly different from the flat and uniform shape of conventional superconductors.

This V shape reflects a certain unconventional mechanism by which electrons in MATTG pair up to superconduct. Exactly what that mechanism is remains unknown. But the fact that the shape of the superconducting gap in MATTG stands out from that of the typical superconductor provides key evidence that the material is an unconventional superconductor.
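
As a textbook point of reference (not a result from the paper), a conventional, fully gapped superconductor has no electronic states inside the gap, so its tunneling spectrum has a flat bottom, whereas a gap with nodes leaves low-energy states whose density grows roughly linearly with energy, producing a V shape:

```latex
% Textbook illustration only, not taken from the new paper.
% Conventional (fully gapped) superconductor: the BCS density of states is zero
% inside the gap, which appears as a flat bottom in a tunneling spectrum.
\[
\frac{N_s(E)}{N_0} =
\begin{cases}
0, & |E| < \Delta,\\[4pt]
\dfrac{|E|}{\sqrt{E^{2}-\Delta^{2}}}, & |E| > \Delta.
\end{cases}
\]
% Nodal (unconventional) gap: low-energy states survive, and their density rises
% roughly linearly with energy, which appears as a "V" in the conductance dI/dV.
\[
N(E) \propto \frac{|E|}{\Delta_0}, \qquad |E| \ll \Delta_0 .
\]
```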

In conventional superconductors, electrons pair up through vibrations of the surrounding atomic lattice, which effectively jostle the particles together. But Park suspects that a different mechanism could be at work in MATTG.

“In this magic-angle graphene system, there are theories explaining that the pairing likely arises from strong electronic interactions rather than lattice vibrations,” she posits. “That means electrons themselves help each other pair up, forming a superconducting state with special symmetry.”

Going forward, the team will test other two-dimensional twisted structures and materials using the new experimental platform.

“This allows us to both identify and study the underlying electronic structures of superconductivity and other quantum phases as they happen, within the same sample,” Park says. “This direct view can reveal how electrons pair and compete with other states, paving the way to design and control new superconductors and quantum materials that could one day power more efficient technologies or quantum computers.”

This research was supported, in part, by the U.S. Army Research Office, the U.S. Air Force Office of Scientific Research, the MIT/MTL Samsung Semiconductor Research Fund, the Sagol WIS-MIT Bridge Program, the National Science Foundation, the Gordon and Betty Moore Foundation, and the Ramon Areces Foundation.


Q&A: How folk ballads explain the world

Ruth Perry’s new book profiles Anna Gordon, a Scotswoman who preserved and transmitted precious popular ballads, and with them national traditions.


Traditional folk ballads are one of our most enduring forms of cultural expression. They can also be lost to society, forgotten over time. That’s why, in the mid-1700s, when a Scottish woman named Anna Gordon was found to know three dozen ancient ballads, collectors tried to document all of these songs — a volume of work that became a kind of sensation in its time, a celebrated piece of cultural heritage.

That story is told in MIT Professor Emerita Ruth Perry’s latest book, “The Ballad World of Anna Gordon, Mrs. Brown of Falkland,” published this year by Oxford University Press. In it, Perry details what we know about the ways folk ballads were created and transmitted; how Anna Gordon came to know so many; the social and political climate in which they existed; and why these songs meant so much in Scotland and elsewhere in the Atlantic world. Indeed, Scottish immigrants brought their music to the U.S., among other places.

MIT News sat down with Perry, who is MIT’s Ann Fetter Friedlaender Professor of Humanities, Emerita, to talk about the book.

Q: This is a fascinating topic with a lot of threads woven together. To you, what is the book about?

A: It’s really three books. It’s a book about Anna Gordon and her family, a very interesting middle-class family living in Aberdeen in the middle of the 18th century. And it’s a book about balladry and what a ballad is — a story told in song, and ballads are the oldest known poetry in English. Some of them are gorgeous. Third, it’s a book about the relationship between Scotland and England, the effects of the Jacobite uprising in 1745, social attitudes, how people lived, what they ate, education — it’s very much about 18th century Scotland.

Q: Okay, who was Anna Gordon, and what was her family milieu?

A: Anna’s father, Thomas Gordon, was a professor at King’s College, now the University of Aberdeen. He was a professor of humanity, which in those days meant Greek and Latin, and was well-connected to the intellectual community of the Scottish Enlightenment. A friend of his, an Edinburgh writer, lawyer, and judge, William Tytler, who heard cases all over the country and always stayed with Thomas Gordon and his family when he came to Aberdeen, was intensely interested in Scottish traditional music. He found out that Anna Gordon had learned all these ballads as a child, from her mother and aunt and some servants. Tytler asked if she would write them down, both tunes and words.

That was the earliest manuscript of ballads ever collected from a named person in Scotland. Once it was in existence, all kinds of people wanted to see it; it got spread throughout the country. In my book, I detail much of the excitement over this manuscript.

The thing about Anna’s ballads is: It’s not just that there are more of them, and more complete versions that are fuller, with more verses. They’re more beautiful. The language is more archaic, and there are marvelous touches. It is thought, and I agree, that Anna Gordon was an oral poet. As she remembered ballads and reproduced them, she improved on them. She had a great memory for the best bits and would improve other parts.

Q: How did it come about that at this time, a woman such as Anna Gordon would be the keeper and creator of cultural knowledge?

A: Women were more literate in Scotland than elsewhere. The Scottish Parliament passed an act in 1695 requiring every parish in the Church of Scotland to have not only a minister, but a teacher. Scotland was the most literate country in Europe in the 18th century. And those parish schoolmasters taught local kids. The parents did have to pay a few pennies for their classes, and, true, more parents paid for sons than for daughters. But there were daughters who took classes. And there were no opportunities like this in England at the time. Education was better for women in Scotland. So was their legal position, under common law in Scotland. When the Act of Union was formed in 1707, Scotland retained its own legal system, which had more extensive rights for women than in England.

Q: I know it’s complex, but generally, why was this?

A: Scotland was a much more democratic country, culture, and society than England, period. When Elizabeth I died in 1603, the person who inherited the throne was the King of Scotland James VI, who went to England with his court — which included the Scottish aristocracy. So, the Scottish aristocracy ended up in London. I’m sure they went back to their hunting lodges for the hunting season, but they didn’t live there [in Scotland] and they didn’t set the tone of the country. It was democratized because all that was left were a lot of lawyers and ministers and teachers.

Q: What is distinctive about the ballads in this corpus of songs Anna Gordon knew and documented?

A: A common word about ballads is that there’s a high body count, and they’re all about people dying and killing each other. But that is not true of Anna Gordon’s ballads. They’re about younger women triumphing in the world, often against older women, which is interesting, and even more often against fathers. The ballads are about family discord, inheritance, love, fidelity, lack of fidelity, betrayal. There are ballads about fighting and bloodshed, but not so many. They’re about the human condition. And they have interesting qualities because they’re oral poetry, composed and remembered and changed and transmitted from mouth to ear and not written down. There are repetitions and parallelisms, and other hallmarks of oral poetry. The sort of thing you learned when you read Homer.

Q: So is this a form of culture generated in opposition to those controlling society? Or at least, one that’s popular regardless of what some elites thought?

A: It is in Scotland, because of the enmity between Scotland and England. We’re talking about the period of Great Britain when England is trying to gobble up Scotland and some Scottish folks don’t want that. They want to retain their Scottishness. And the ballad was a Scottish tradition that was not influenced by England. That’s one reason balladry was so important in 18th-century Scotland. Everybody was into balladry partly because it was a unique part of Scottish culture.

Q: To that point, it seems like an unexpected convergence, for the time, to see a more middle-class woman like Anna Gordon transmitting ballads that had often been created and sung by people of all classes.

A: Yes. At first I thought I was just working on a biography of Anna Gordon. But it’s fascinating how the culture was transmitted, how intellectually rich that society was, how much there is to examine in Scottish culture and society of the 18th century. Today people may watch “Outlander,” but they still wouldn’t know anything about this!


MIT researchers invent new human brain model to enable disease research, drug discovery

Cultured from induced pluripotent stem cells, “miBrains” integrate all major brain cell types and model brain structures, cellular interactions, activity, and pathological features.


A new 3D human brain tissue platform developed by MIT researchers is the first to integrate all major brain cell types, including neurons, glial cells, and the vasculature, into a single culture. 

Grown from individual donors’ induced pluripotent stem cells, these models — dubbed Multicellular Integrated Brains (miBrains) — replicate key features and functions of human brain tissue, are readily customizable through gene editing, and can be produced in quantities that support large-scale research.

Although each unit is smaller than a dime, miBrains may be worth a great deal to researchers and drug developers who need more complex living lab models to better understand brain biology and treat diseases.

“The miBrain is the only in vitro system that contains all six major cell types that are present in the human brain,” says Li-Huei Tsai, Picower Professor, director of The Picower Institute for Learning and Memory, and a senior author of the open-access study describing miBrains, published Oct. 17 in the Proceedings of the National Academy of Sciences.

“In their first application, miBrains enabled us to discover how one of the most common genetic markers for Alzheimer’s disease alters cells’ interactions to produce pathology,” she adds.

Tsai’s co-senior authors are Robert Langer, David H. Koch (1962) Institute Professor, and Joel Blanchard, associate professor in the Icahn School of Medicine at Mt. Sinai in New York, and a former Tsai Laboratory postdoc. The study is led by Alice Stanton, former postdoc in the Langer and Tsai labs and now assistant professor at Harvard Medical School and Massachusetts General Hospital, and Adele Bubnys, a former Tsai lab postdoc and current senior scientist at Arbor Biotechnologies.

Benefits from two kinds of models

The more closely a model recapitulates the brain’s complexity, the better suited it is for extrapolating how human biology works and how potential therapies may affect patients. In the brain, neurons interact with each other and with various helper cells, all of which are arranged in a three-dimensional tissue environment that includes blood vessels and other components. All of these interactions are necessary for health, and any of them can contribute to disease.

Simple cultures of just one or a few cell types can be created in quantity relatively easily and quickly, but they cannot tell researchers about the myriad interactions that are essential to understanding health or disease. Animal models embody the brain’s complexity, but can be difficult and expensive to maintain, slow to yield results, and different enough from humans to yield occasionally divergent results.

MiBrains combine advantages from each type of model, retaining much of the accessibility and speed of lab-cultured cell lines while allowing researchers to obtain results that more closely reflect the complex biology of human brain tissue. Moreover, they are derived from individual patients, making them personalized to an individual’s genome. In the model, the six cell types self-assemble into functioning units, including blood vessels, immune defenses, and nerve signal conduction, among other features. Researchers ensured that miBrains also possess a blood-brain barrier capable of gatekeeping which substances may enter the brain, including most traditional drugs.

“The miBrain is very exciting as a scientific achievement,” says Langer. “Recent trends toward minimizing the use of animal models in drug development could make systems like this one increasingly important tools for discovering and developing new human drug targets.”

Two ideal blends for functional brain models

Designing a model integrating so many cell types presented challenges that required many years to overcome. Among the most crucial was identifying a substrate able to provide physical structure for cells and support their viability. The research team drew inspiration from the environment that surrounds cells in natural tissue, the extracellular matrix (ECM). The miBrain’s hydrogel-based “neuromatrix” mimics the brain’s ECM with a custom blend of polysaccharides, proteoglycans, and basement membrane that provide a scaffold for all the brain’s major cell types while promoting the development of functional neurons.

A second blend would also prove critical: the proportion of cells that would result in functional neurovascular units. The actual ratios of cell types in the brain have been a matter of debate for the last several decades, with even the more advanced methodologies providing only rough brushstrokes for guidance, for example 45 to 75 percent of all cells for oligodendroglia, or 19 to 40 percent for astrocytes.

The researchers developed the six cell types from patient-donated induced pluripotent stem cells, verifying that each cultured cell type closely recreated naturally-occurring brain cells. Then, the team experimentally iterated until they hit on a balance of cell types that resulted in functional, properly structured neurovascular units. This laborious process would turn out to be an advantageous feature of miBrains: because cell types are cultured separately, they can each be genetically edited so that the resulting model is tailored to replicate specific health and disease states.

“Its highly modular design sets the miBrain apart, offering precise control over cellular inputs, genetic backgrounds, and sensors — useful features for applications such as disease modeling and drug testing,” says Stanton.

Alzheimer’s discovery using miBrain

To test miBrain’s capabilities, the researchers embarked on a study of the gene variant APOE4, which is the strongest genetic predictor for the development of Alzheimer’s disease. Although astrocytes, one type of brain cell, are known to be a primary producer of the APOE protein, the role that astrocytes carrying the APOE4 variant play in disease pathology is poorly understood.

MiBrains were well-suited to the task for two reasons. First of all, they integrate astrocytes with the brain’s other cell types, so that their natural interactions with other cells can be mimicked. Second, because the platform allowed the team to integrate cell types individually, APOE4 astrocytes could be studied in cultures where all other cell types carried APOE3, a gene variant that does not increase Alzheimer’s risk. This enabled the researchers to isolate the contribution APOE4 astrocytes make to pathology.

In one experiment, the researchers examined APOE4 astrocytes cultured alone, versus ones in APOE4 miBrains. They found that only in the miBrains did the astrocytes express many measures of immune reactivity associated with Alzheimer’s disease, suggesting the multicellular environment contributes to that state.

The researchers also tracked the Alzheimer’s-associated proteins amyloid and phosphorylated tau, and found that all-APOE4 miBrains accumulated them, whereas all-APOE3 miBrains did not, as expected. However, in APOE3 miBrains containing APOE4 astrocytes, they found that amyloid and tau still accumulated.

Then the team dug deeper into how APOE4 astrocytes’ interactions with other cell types might lead to their contribution to disease pathology. Prior studies have implicated molecular cross-talk with the brain’s microglia immune cells. Notably, when the researchers cultured APOE4 miBrains without microglia, their production of phosphorylated tau was significantly reduced. When the researchers dosed APOE4 miBrains with culture media from astrocytes and microglia combined, phosphorylated tau increased, whereas when they dosed them with media from cultures of astrocytes or microglia alone, the tau production did not increase. The results therefore provided new evidence that molecular cross-talk between microglia and astrocytes is indeed required for phosphorylated tau pathology.

In the future, the research team plans to add new features to miBrains to more closely model characteristics of working brains, such as leveraging microfluidics to add flow through blood vessels, or single-cell RNA sequencing methods to improve profiling of neurons.

Researchers expect that miBrains could advance research discoveries and treatment modalities for Alzheimer’s disease and beyond. 

“Given its sophistication and modularity, there are limitless future directions,” says Stanton. “Among them, we would like to harness it to gain new insights into disease targets, advanced readouts of therapeutic efficacy, and optimization of drug delivery vehicles.”

“I’m most excited by the possibility to create individualized miBrains for different individuals,” adds Tsai. “This promises to pave the way for developing personalized medicine.”

Funding for the study came from the BT Charitable Foundation, Freedom Together Foundation, the Robert A. and Renee E. Belfer Family, Lester A. Gimpelson, Eduardo Eurnekian, Kathleen and Miguel Octavio, David B. Emmes, the Halis Family, the Picower Institute, and an anonymous donor.


MIT study finds targets for a new tuberculosis vaccine

Using these antigens, researchers plan to develop vaccine candidates that they hope would stimulate a strong immune response against the world’s deadliest pathogen.


A large-scale screen of tuberculosis proteins has revealed several possible antigens that could be developed as a new vaccine for TB, the world’s deadliest infectious disease.

In the new study, a team of MIT biological engineers was able to identify a handful of immunogenic peptides, out of more than 4,000 bacterial proteins, that appear to stimulate a strong response from a type of T cells responsible for orchestrating immune cells’ response to infection.

There is currently only one vaccine for tuberculosis, known as BCG, which is a weakened version of a bacterium that causes TB in cows. This vaccine is widely administered in some parts of the world, but it poorly protects adults against pulmonary TB. Worldwide, tuberculosis kills more than 1 million people every year.

“There’s still a huge TB burden globally that we’d like to make an impact on,” says Bryan Bryson, an associate professor of biological engineering at MIT and a member of the Ragon Institute of Mass General Brigham, MIT, and Harvard. “What we’ve tried to do in this initial TB vaccine is focus on antigens that we saw frequently in our screen and also appear to stimulate a response in T cells from people with prior TB infection.”

Bryson and Forest White, the Ned C. and Janet C. Rice Professor of Biological Engineering at MIT, and a member of the Koch Institute for Integrative Cancer Research, are the senior authors of the study, which appears today in Science Translational Medicine. Owen Leddy PhD ’25 is the paper’s lead author.

Identifying vaccine targets

Since the BCG vaccine was developed more than 100 years ago, no other TB vaccines have been approved for use. Mycobacterium tuberculosis produces more than 4,000 proteins, which makes it a daunting challenge to pick out proteins that might elicit a strong immune response if used as a vaccine.

In the new study, Bryson and his students set out to narrow the field of candidates by identifying TB proteins presented on the surface of infected human cells. When an immune cell such as a phagocyte is infected with Mycobacterium tuberculosis, some of the bacterial proteins get chopped into fragments called peptides, which are then displayed on the surface of the cell by MHC proteins. These MHC-peptide complexes act as a signal that can activate T cells.

MHCs, or major histocompatibility complexes, come in two types known as class I and class II. Class I MHCs activate killer T cells, while class II MHCs stimulate helper T cells. In human cells, there are three genes that can encode MHC-II proteins, and each of these comes in hundreds of variants. This means that any two people can have a very different repertoire of MHC-II molecules, which present different antigens.

“Instead of looking at all of those 4,000 TB proteins, we wanted to ask which of those proteins from TB actually end up being displayed to the rest of the immune system via MHC,” Bryson says. “If we could just answer that question, then we could design vaccines to match that.”

To try to answer the question, the researchers infected human phagocytes with Mycobacterium tuberculosis. After three days, they extracted MHC-peptide complexes from the cell surfaces, then identified the peptides using mass spectrometry.

Focusing on peptides bound to MHC-II, the researchers found 27 TB peptides, from 13 proteins, that appeared most often in the infected cells. Then, they further tested those peptides by exposing them to T cells donated by people who had previously been infected with TB.

They found that 24 of these peptides did elicit a T cell response in at least some of the samples. None of the proteins from which these peptides came worked for every single donor, but Bryson believes that a vaccine using a combination of these peptides would likely work for most people.

“In a perfect world, if you were trying to design a vaccine, you would pick one protein and that protein would be presented across every donor. It should work for every person,” Bryson says. “However, using our measurements, we’ve not yet found a TB protein that covers every donor we’ve analyzed thus far.”

Enter mRNA vaccines

Among the vaccine candidates that the researchers identified are several peptides from a class of proteins called type 7 secretion systems (T7SSs). Some of these peptides also turned up in an earlier study from Bryson’s lab on MHC class I.

“Type 7 secretion system substrates are a very small sliver of the overall TB proteome, but when you look at MHC class I or MHC class II, it seems as though the cells are preferentially presenting these,” Bryson says.

Two of the best-known of these proteins, EsxA and EsxB, are secreted by bacteria to help them escape from the membranes that phagocytes use to envelop them within the cell. Neither protein can break through the membrane on its own, but when joined together to form a heterodimer, they can poke holes, which also allow other T7SS proteins to escape.

To evaluate whether the proteins they identified could make a good vaccine, the researchers created mRNA vaccines encoding two protein sequences — EsxB and EsxG. The researchers designed several versions of the vaccine, which were targeted to different compartments within the cells.

The researchers then delivered this vaccine into human phagocytes, where they found that vaccines that targeted cell lysosomes — organelles that break down molecules — were the most effective. These vaccines induced 1,000 times more MHC presentation of TB peptides than any of the others.

They later found that the presentation was even higher if they added EsxA to the vaccine, because it allows the formation of the heterodimers that can poke through the lysosomal membrane.

The researchers currently have a mix of eight proteins that they believe could offer protection against TB for most people, but they are continuing to test the combination with blood samples from people around the world. They also hope to run additional studies to explore how much protection this vaccine offers in animal models. Tests in humans are likely several years away.

The research was funded by the MIT Center for Precision Cancer Research at the Koch Institute, the National Institutes of Health, the National Institute of Environmental Health Sciences, and the Frederick National Laboratory for Cancer Research.


Teaching robots to map large environments

A new approach developed at MIT could help a search-and-rescue robot navigate an unpredictable environment by rapidly generating an accurate map of its surroundings.


A robot searching for workers trapped in a partially collapsed mine shaft must rapidly generate a map of the scene and identify its location within that scene as it navigates the treacherous terrain.

Researchers have recently started building powerful machine-learning models to perform this complex task using only images from the robot’s onboard cameras, but even the best models can only process a few images at a time. In a real-world disaster where every second counts, a search-and-rescue robot would need to quickly traverse large areas and process thousands of images to complete its mission.

To overcome this problem, MIT researchers drew on ideas from both recent artificial intelligence vision models and classical computer vision to develop a new system that can process an arbitrary number of images. Their system accurately generates 3D maps of complicated scenes like a crowded office corridor in a matter of seconds. 

The AI-driven system incrementally creates and aligns smaller submaps of the scene, which it stitches together to reconstruct a full 3D map while estimating the robot’s position in real time.

Unlike many other approaches, their technique does not require calibrated cameras or an expert to tune a complex system implementation. The simpler nature of their approach, coupled with the speed and quality of the 3D reconstructions, would make it easier to scale up for real-world applications.

Beyond helping search-and-rescue robots navigate, this method could be used to make extended reality applications for wearable devices like VR headsets or enable industrial robots to quickly find and move goods inside a warehouse.

“For robots to accomplish increasingly complex tasks, they need much more complex map representations of the world around them. But at the same time, we don’t want to make it harder to implement these maps in practice. We’ve shown that it is possible to generate an accurate 3D reconstruction in a matter of seconds with a tool that works out of the box,” says Dominic Maggio, an MIT graduate student and lead author of a paper on this method.

Maggio is joined on the paper by postdoc Hyungtae Lim and senior author Luca Carlone, associate professor in MIT’s Department of Aeronautics and Astronautics (AeroAstro), principal investigator in the Laboratory for Information and Decision Systems (LIDS), and director of the MIT SPARK Laboratory. The research will be presented at the Conference on Neural Information Processing Systems.

Mapping out a solution

For years, researchers have been grappling with an essential element of robotic navigation called simultaneous localization and mapping (SLAM). In SLAM, a robot recreates a map of its environment while orienting itself within the space.

Traditional optimization methods for this task tend to fail in challenging scenes, or they require the robot’s onboard cameras to be calibrated beforehand. To avoid these pitfalls, researchers train machine-learning models to learn this task from data.

While they are simpler to implement, even the best models can only process about 60 camera images at a time, making them infeasible for applications where a robot needs to move quickly through a varied environment while processing thousands of images.

To solve this problem, the MIT researchers designed a system that generates smaller submaps of the scene instead of the entire map. Their method “glues” these submaps together into one overall 3D reconstruction. The model is still only processing a few images at a time, but the system can recreate larger scenes much faster by stitching smaller submaps together.

“This seemed like a very simple solution, but when I first tried it, I was surprised that it didn’t work that well,” Maggio says.

Searching for an explanation, he dug into computer vision research papers from the 1980s and 1990s. Through this analysis, Maggio realized that errors in the way the machine-learning models process images made aligning submaps a more complex problem.

Traditional methods align submaps by applying rotations and translations until they line up. But these new models can introduce some ambiguity into the submaps, which makes them harder to align. For instance, a 3D submap of one side of a room might have walls that are slightly bent or stretched. Simply rotating and translating these deformed submaps to align them doesn’t work.

“We need to make sure all the submaps are deformed in a consistent way so we can align them well with each other,” Carlone explains.

A more flexible approach

Borrowing ideas from classical computer vision, the researchers developed a more flexible, mathematical technique that can represent all the deformations in these submaps. By applying mathematical transformations to each submap, this more flexible method can align them in a way that addresses the ambiguity.
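
The article does not spell out the paper’s exact alignment math, but the contrast it describes can be sketched in a few lines of Python. In this illustrative toy (every number and function name here is hypothetical, not from the paper), a rigid rotation-plus-translation fit cannot absorb a consistent stretch in a deformed submap, while a slightly more flexible affine fit can:

```python
# Illustrative sketch only: contrasts rigid alignment (rotation + translation)
# with a more flexible affine alignment that can absorb a consistent stretch
# or shear in a deformed submap. Not the paper's actual formulation.
import numpy as np

def fit_rigid(src, dst):
    """Best rotation R and translation t mapping src -> dst (Kabsch method)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def fit_affine(src, dst):
    """Best 3x4 affine transform mapping src -> dst, via least squares."""
    A = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coordinates
    X, *_ = np.linalg.lstsq(A, dst, rcond=None)    # solves A @ X ~= dst
    return X.T                                     # rows: [R | t], with scale/shear

# Points shared by two overlapping submaps; submap B is slightly stretched.
rng = np.random.default_rng(0)
pts_a = rng.random((100, 3))
pts_b = pts_a @ np.diag([1.05, 1.0, 0.97]) + np.array([0.2, -0.1, 0.05])

R, t = fit_rigid(pts_a, pts_b)
M = fit_affine(pts_a, pts_b)
ones = np.ones((len(pts_a), 1))
print("rigid residual :", np.abs(pts_a @ R.T + t - pts_b).mean())                 # nonzero
print("affine residual:", np.abs(np.hstack([pts_a, ones]) @ M.T - pts_b).mean())  # ~0
```

The actual method operates on full submaps and camera estimates and represents deformations in its own way; the toy simply shows why allowing a richer family of transformations makes consistently deformed submaps alignable.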

Based on input images, the system outputs a 3D reconstruction of the scene and estimates of the camera locations, which the robot would use to localize itself in the space.

“Once Dominic had the intuition to bridge these two worlds — learning-based approaches and traditional optimization methods — the implementation was fairly straightforward,” Carlone says. “Coming up with something this effective and simple has potential for a lot of applications.”

Their system ran faster and produced lower reconstruction error than other methods, without requiring special cameras or additional tools to process the data. The researchers generated close-to-real-time 3D reconstructions of complex scenes like the inside of the MIT Chapel using only short videos captured on a cell phone.

The average error in these 3D reconstructions was less than 5 centimeters.

In the future, the researchers want to make their method more reliable for especially complicated scenes and work toward implementing it on real robots in challenging settings.

“Knowing about traditional geometry pays off. If you understand deeply what is going on in the model, you can get much better results and make things much more scalable,” Carlone says.

This work is supported, in part, by the U.S. National Science Foundation, U.S. Office of Naval Research, and the National Research Foundation of Korea. Carlone, currently on sabbatical as an Amazon Scholar, completed this work before he joined Amazon.


New therapeutic brain implants could defy the need for surgery

MIT researchers created microscopic wireless electronic devices that travel through blood and implant in target brain regions, where they provide electrical stimulation.


What if clinicians could place tiny electronic chips in the brain that electrically stimulate a precise target, through a simple injection in the arm? This may someday help treat deadly or debilitating brain diseases, while eliminating surgery-related risks and costs.

MIT researchers have taken a major step toward making this scenario a reality. They developed microscopic, wireless bioelectronics that could travel through the body’s circulatory system and autonomously self-implant in a target region of the brain, where they would provide focused treatment.

In a study on mice, the researchers show that after injection, these minuscule implants can identify and travel to a specific brain region without the need for human guidance. Once there, they can be wirelessly powered to provide electrical stimulation to the precise area. Such stimulation, known as neuromodulation, has shown promise as a way to treat brain tumors and diseases like Alzheimer’s and multiple sclerosis.

Moreover, because the electronic devices are integrated with living, biological cells before being injected, they are not attacked by the body’s immune system and can cross the blood-brain barrier while leaving it intact. This maintains the barrier’s crucial protection of the brain.

The researchers demonstrated the use of this technology, which they call “circulatronics,” to target brain inflammation, a major factor in the progression of many neurological diseases. They show that the implants can provide localized neuromodulation deep inside the brain with high precision, to within several microns of the target area.

In addition, the biocompatible implants do not damage surrounding neurons.

While brain implants usually require hundreds of thousands of dollars in medical costs and risky surgical procedures, circulatronics technology holds the potential to make therapeutic brain implants accessible to all by eliminating the need for surgery, says Deblina Sarkar, the AT&T Career Development Associate Professor in the MIT Media Lab and MIT Center for Neurobiological Engineering, head of the Nano-Cybernetic Biotrek Lab, and senior author of a study on the work.

She is joined on the paper by lead author Shubham Yadav, an MIT graduate student; as well as others at MIT, Wellesley College, and Harvard University. The research appears today in Nature Biotechnology.

Hybrid implants

The team has been working on circulatronics for more than six years. The electronic devices, each about one-billionth the size of a grain of rice, are composed of organic semiconducting polymer layers sandwiched between metallic layers to create an electronic heterostructure.

They are fabricated using CMOS-compatible processes in the MIT.nano facilities, and then integrated with living cells to create cell-electronics hybrids. To do this, the researchers lift the devices off the silicon wafer on which they are fabricated, so they are free-floating in a solution.

“The electronics worked perfectly when they were attached to the substrate, but when we originally lifted them off, they didn’t work anymore. Solving that challenge took us more than a year,” Sarkar says.

Key to their operation is the high wireless power conversion efficiency of the tiny electronics. This enables the devices to work deep inside the brain and still harness enough energy for neuromodulation.

The researchers use a chemical reaction to bond the electronic devices to cells. In the new study, they fused the electronics with a type of immune cell called monocytes, which target areas of inflammation in the body. They also applied a fluorescent dye, allowing them to trace the devices as they crossed the intact blood-brain barrier and self-implanted in the target brain region.

While they explored brain inflammation in this study, the researchers hope to use different cell types and engineer the cells to target specific regions of the brain.

“Our cell-electronics hybrid fuses the versatility of electronics with the biological transport and biochemical sensing prowess of living cells,” Sarkar says. “The living cells camouflage the electronics so that they aren’t attacked by the body’s immune system and they can travel seamlessly through the bloodstream. This also enables them to squeeze through the intact blood-brain barrier without the need to invasively open it.”

Over the course of about four years, the team tried many methods to autonomously and noninvasively cross the blood-brain barrier before they perfected this cellular integration technique.

In addition, because the circulatronics devices are so tiny, they offer much higher precision than conventional electrodes. They can self-implant, leading to millions of microscopic stimulation sites that take the exact shape of the target region.

Their small size also enables the biocompatible devices to live alongside neurons without causing harmful effects. Through a series of biocompatibility tests, the researchers found that circulatronics can safely integrate among neurons without impacting the brain processes behind cognition or motion.

After the devices have self-implanted in the target region, a clinician or researcher uses an external transmitter to provide electromagnetic waves, in the form of near-infrared light, that power the technology and enable electrical stimulation of the neurons.

Targeting deadly diseases

The Sarkar lab is currently working on developing their technology to treat multiple diseases including brain cancer, Alzheimer’s disease, and chronic pain.

The tiny size and self-implantation capabilities of circulatronics devices could make them well-suited to treat brain cancers such as glioblastoma that cause tumors at multiple locations, some of which may be too small to identify with imaging techniques. They may also provide new avenues for treating especially deadly cancers like diffuse intrinsic pontine glioma, an aggressive type of tumor found in the brain stem that usually cannot be surgically removed.

“This is a platform technology and may be employed to treat multiple brain diseases and mental illnesses,” Sarkar says. “Also, this technology is not just confined to the brain but could also be extended to other parts of the body in future.”

The researchers hope to move the technology into clinical trials within three years through the recently launched startup Cahira Technologies.

They are also exploring integration of additional nanoelectronic circuits into their devices to enable functionalities including sensing, feedback-based on-chip data analysis, and capabilities such as creating synthetic electronic neurons.

“Our tiny electronic devices seamlessly integrate with the neurons and co-live and co-exist with the brain cells creating a unique brain-computer symbiosis. We are working dedicatedly to employ this technology for treating neural diseases, where drugs or standard therapies fail, for alleviating human suffering and envision a future where humans could transcend beyond diseases and biological limitations,” says Sarkar.


What should countries do with their nuclear waste?

A new study by MIT researchers analyzes different nuclear waste management strategies, with a focus on the radionuclide iodine-129.


One of the highest-risk components of nuclear waste is iodine-129 (I-129), which stays radioactive for millions of years and accumulates in human thyroids when ingested. In the U.S., nuclear waste containing I-129 is scheduled to be disposed of in deep underground repositories, which scientists say will sufficiently isolate it.

Meanwhile, across the globe, France routinely releases low-level radioactive effluents containing iodine-129 and other radionuclides into the ocean. France recycles its spent nuclear fuel, and the reprocessing plant discharges about 153 kilograms of iodine-129 each year, under the French regulatory limit.

Is dilution a good solution? What’s the best way to handle spent nuclear fuel? A new study by MIT researchers and their collaborators at national laboratories quantifies I-129 release under three different scenarios: the U.S. approach of disposing spent fuel directly in deep underground repositories, the French approach of dilution and release, and an approach that uses filters to capture I-129 and disposes of them in shallow underground waste repositories.

The researchers found France’s current practice of reprocessing releases about 90 percent of the waste’s I-129 into the biosphere. They found low levels of I-129 in ocean water around France and the U.K.’s former reprocessing sites, including the English Channel and North Sea. Although the low level of I-129 in the water in Europe is not considered to pose health risks, the U.S. approach of deep underground disposal leads to far less I-129 being released, the researchers found.

The researchers also investigated the effect of environmental regulations and technologies related to I-129 management, to illuminate the tradeoffs associated with different approaches around the world.

“Putting these pieces together to provide a comprehensive view of Iodine-129 is important,” says MIT Assistant Professor Haruko Wainwright, a first author on the paper who holds a joint appointment in the departments of Nuclear Science and Engineering and of Civil and Environmental Engineering. “There are scientists that spend their lives trying to clean up iodine-129 at contaminated sites. These scientists are sometimes shocked to learn some countries are releasing so much iodine-129. This work also provides a life-cycle perspective. We’re not just looking at final disposal and solid waste, but also when and where release is happening. It puts all the pieces together.”

MIT graduate student Kate Whiteaker SM ’24 led many of the analyses with Wainwright. Their co-authors are Hansell Gonzalez-Raymat, Miles Denham, Ian Pegg, Daniel Kaplan, Nikolla Qafoku, David Wilson, Shelly Wilson, and Carol Eddy-Dilek. The study appears today in Nature Sustainability.

Managing waste

Iodine-129 is often a key focus for scientists and engineers as they conduct safety assessments of nuclear waste disposal sites around the world. It has a half-life of 15.7 million years, high environmental mobility, and could potentially cause cancers if ingested. The U.S. sets strict limits on how much I-129 can be released and how much can be present in drinking water: 5.66 nanograms per liter, the lowest such limit for any radionuclide.

“Iodine-129 is very mobile, so it is usually the highest-dose contributor in safety assessments,” Wainwright says.

For the study, the researchers calculated the release of I-129 across three different waste management strategies by combining data from current and former reprocessing sites as well as repository assessment models and simulations.

The authors defined the environmental impact as the release of I-129 into the biosphere that humans could be exposed to, as well as its concentrations in surface water. They normalized I-129 release by the total electrical energy generated by a 1-gigawatt power plant over one year, denoted as kg/GWe.y.

Under the U.S. approach of deep underground disposal with barrier systems, assuming the barrier canisters fail at 1,000 years (a conservative estimate), the researchers found 2.14 × 10⁻⁸ kg/GWe.y of I-129 would be released between 1,000 and 1 million years from today.

They estimate that 4.51 kg/GWe.y of I-129, or 91 percent of the total, would be released into the biosphere in the scenario where fuel is reprocessed and the effluents are diluted and released. About 3.3 percent of I-129 is captured by gas filters, which are then disposed of in shallow subsurfaces as low-level radioactive waste. A further 5.2 percent remains in the waste stream of the reprocessing plant, which is then disposed of as high-level radioactive waste.

If the waste is recycled with gas filters to directly capture I-129, 0.05 kg/GWe.y of the I-129 is released, while 94 percent is disposed of in the low-level disposal sites. For shallow disposal, some kind of human disruption and intrusion is assumed to occur after government or institutional control expires (typically 100-1,000 years). That results in a potential release of the disposed amount to the environment after the control period.

Overall, the current practice of recycling spent nuclear fuel releases the majority of I-129 into the environment today, while the direct disposal of spent fuel releases around 1/100,000,000 that amount over 1 million years. When the gas filters are used to capture I-129, the majority of I-129 goes to shallow underground repositories, which could be accidentally released through human intrusion down the line.
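
As a rough sanity check on the figures quoted above (not a substitute for the study’s repository models and simulations), the scenario numbers can be compared with simple arithmetic; the implied total I-129 inventory per unit of energy and the ratio between scenarios follow directly:

```python
# Back-of-the-envelope comparison using only the figures quoted in this article.
total = 4.51 / 0.91    # implied I-129 inventory, kg/GWe.y (about 4.96)

released = {
    # I-129 released to the biosphere under each strategy, kg/GWe.y
    "reprocess, dilute and release (France)": 4.51,
    "reprocess with gas-filter capture":      0.05,
    "direct deep geologic disposal (U.S.)":   2.14e-8,   # over 1,000 yr to 1 Myr
}

for name, kg in released.items():
    print(f"{name:42s} {kg:.2e} kg/GWe.y  ({kg / total:.1e} of inventory)")

# Ratio between dilute/release and direct deep disposal: roughly 2e8, i.e. of
# the same order as the "around 1/100,000,000" comparison quoted above.
print("release ratio:", 4.51 / 2.14e-8)
```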

The researchers also quantified the concentration of I-129 in different surface waters near current and former fuel reprocessing facilities, including the English Channel and the North Sea near reprocessing plants in France and U.K. They also analyzed the U.S. Columbia River downstream of a site in Washington state where material for nuclear weapons was produced during the Cold War, and they studied a similar site in South Carolina. The researchers found far higher concentrations of I-129 within the South Carolina site, where the low-level radioactive effluents were released far from major rivers and hence resulted in less dilution in the environment.

“We wanted to quantify the environmental factors and the impact of dilution, which in this case affected concentrations more than discharge amounts,” Wainwright says. “Someone might take our results to say dilution still works: It’s reducing the contaminant concentration and spreading it over a large area. On the other hand, in the U.S., imperfect disposal has led to locally higher surface water concentrations. This provides a cautionary tale that disposal could concentrate contaminants, and should be carefully designed to protect local communities.”

Fuel cycles and policy

Wainwright doesn’t want her findings to dissuade countries from recycling nuclear fuel. She says countries like Japan plan to use increased filtration to capture I-129 when they reprocess spent fuel. Filters with I-129 can be disposed of as low-level waste under U.S. regulations.

“Since I-129 is an internal carcinogen without strong penetrating radiation, shallow underground disposal would be appropriate in line with other hazardous waste,” Wainwright says. “The history of environmental protection since the 1960s is shifting from waste dumping and release to isolation. But there are still industries that release waste into the air and water. We have seen that they often end up causing issues in our daily life — such as CO2, mercury, PFAS and others — especially when there are many sources or when bioaccumulation happens. The nuclear community has been leading in waste isolation strategies and technologies since the 1950s. These efforts should be further enhanced and accelerated. But at the same time, if someone does not choose nuclear energy because of waste issues, it would encourage other industries with much lower environmental standards.”

The work was supported by MIT’s Climate Fast Forward Faculty Fund and the U.S. Department of Energy.


A new way to understand and predict gene splicing

The KATMAP model, developed by researchers in the Department of Biology, can predict alternative splicing, the process that allows cells to create endless diversity from the same sets of genetic blueprints.


Although heart cells and skin cells contain identical instructions for creating proteins encoded in their DNA, they’re able to fill such disparate niches because molecular machinery can cut out and stitch together different segments of those instructions to create endlessly unique combinations.

The ingenuity of using the same genes in different ways is made possible by a process called splicing and is controlled by splicing factors; which splicing factors a cell employs determines what sets of instructions that cell produces, which, in turn, gives rise to proteins that allow cells to fulfill different functions. 

In an open-access paper published today in Nature Biotechnology, researchers in the MIT Department of Biology outlined a framework for parsing the complex relationship between sequences and splicing regulation to investigate the regulatory activities of splicing factors, creating models that can be applied to interpret and predict splicing regulation across different cell types, and even different species. Called Knockdown Activity and Target Models from Additive regression Predictions, KATMAP draws on experimental data from disrupting the expression of a splicing factor and information on which sequences the splicing factor interacts with to predict its likely targets. 

Beyond offering a better understanding of gene regulation, the framework has clinical relevance: splicing mutations, whether in the gene that is spliced or in the splicing factor itself, can give rise to diseases such as cancer by altering how genes are expressed, leading to the creation or accumulation of faulty or mutated proteins. That information is critical for developing therapeutic treatments for those diseases. The researchers also demonstrated that KATMAP can potentially be used to predict whether synthetic nucleic acids, a promising treatment option for disorders including a subset of muscular atrophy and epilepsy disorders, affect splicing.

Perturbing splicing 

In eukaryotic cells, including our own, splicing occurs after DNA is transcribed to produce an RNA copy of a gene, which contains both coding and non-coding regions of RNA. The noncoding intron regions are removed, and the coding exon segments are spliced back together to make a near-final blueprint, which can then be translated into a protein. 

According to first author Michael P. McGurk, a postdoc in the lab of MIT Professor Christopher Burge, previous approaches could provide an average picture of regulation, but could not necessarily predict the regulation of splicing factors at particular exons in particular genes.

KATMAP draws on RNA sequencing data generated from perturbation experiments, which alter the expression level of a regulatory factor by either overexpressing it or knocking down its levels. The consequences of overexpression or knockdown are that the genes regulated by the splicing factor should exhibit different levels of splicing after perturbation, which helps the model identify the splicing factor’s targets. 

Cells, however, are complex, interconnected systems, where one small change can cause a cascade of effects. KATMAP is also able to distinguish direct targets from indirect, downstream impacts by incorporating known information about the sequence the splicing factor is likely to interact with, referred to as a binding site or binding motif.

“In our analyses, we identify predicted targets as exons that have binding sites for this particular factor in the regions where this model thinks they need to be to impact regulation,” McGurk says, while non-targets may be affected by perturbation but don’t have the likely appropriate binding sites nearby. 
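
The article does not describe KATMAP’s actual statistical machinery, but the logic in McGurk’s description can be caricatured in a few lines of Python. In this toy sketch the motif, window size, and threshold are hypothetical placeholders; KATMAP learns such quantities from data rather than fixing them by hand:

```python
# Toy caricature of the target-calling logic described above, not KATMAP itself:
# an exon is a predicted direct target if its splicing changes after knockdown
# AND a binding motif for the factor lies near its splice sites. The motif,
# window, and cutoff below are made-up placeholders.
import re

MOTIF = re.compile("TGCATG")   # hypothetical binding motif for the factor
WINDOW = 200                   # bases around each splice site to scan
DELTA_PSI_CUTOFF = 0.10        # minimum change in percent-spliced-in after knockdown

def classify_exon(delta_psi, upstream_intron, downstream_intron):
    """Return 'direct target', 'indirect effect', or 'unaffected'."""
    changed = abs(delta_psi) >= DELTA_PSI_CUTOFF
    has_site = bool(MOTIF.search(upstream_intron[-WINDOW:]) or
                    MOTIF.search(downstream_intron[:WINDOW]))
    if changed and has_site:
        return "direct target"
    if changed:
        return "indirect effect"   # responds to perturbation, but no nearby site
    return "unaffected"

print(classify_exon(+0.25, "A" * 300 + "TGCATG", "C" * 300))  # direct target
print(classify_exon(-0.18, "A" * 300, "C" * 300))             # indirect effect
```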

This is especially helpful for splicing factors that aren’t as well-studied. 

“One of our goals with KATMAP was to try to make the model general enough that it can learn what it needs to assume for particular factors, like how similar the binding site has to be to the known motif or how regulatory activity changes with the distance of the binding sites from the splice sites,” McGurk says. 

Starting simple

Although predictive models can be very powerful at presenting possible hypotheses, many are considered “black boxes,” meaning the rationale that gives rise to their conclusions is unclear. KATMAP, on the other hand, is an interpretable model that enables researchers to quickly generate hypotheses and interpret splicing patterns in terms of regulatory factors while also understanding how the predictions were made. 

“I don’t just want to predict things, I want to explain and understand,” McGurk says. “We set up the model to learn from existing information about splicing and binding, which gives us biologically interpretable parameters.” 

The researchers did have to make some simplifying assumptions in order to develop the model. KATMAP considers only one splicing factor at a time, although it is possible for splicing factors to work in concert with one another. The RNA target sequence could also be folded in such a way that the factor wouldn’t be able to access a predicted binding site, so the site is present but not utilized.

“When you try to build up complete pictures of complex phenomena, it’s usually best to start simple,” McGurk says. “A model that only considers one splicing factor at a time is a good starting point.” 

David McWaters, another postdoc in the Burge Lab and a co-author on the paper, conducted key experiments to test and validate that aspect of the KATMAP model.

Future directions

The Burge lab is collaborating with researchers at Dana-Farber Cancer Institute to apply KATMAP to the question of how splicing factors are altered in disease contexts, as well as with other researchers at MIT as part of an MIT HEALS grant to model splicing factor changes in stress responses. McGurk also hopes to extend the model to incorporate cooperative regulation for splicing factors that work together. 

“We’re still in a very exploratory phase, but I would like to be able to apply these models to try to understand splicing regulation in disease or development. In terms of variation of splicing factors, they are related, and we need to understand both,” McGurk says.

Burge, the Uncas (1923) and Helen Whitaker Professor and senior author of the paper, will continue to work on generalizing this approach to build interpretable models for other aspects of gene regulation.

“We now have a tool that can learn the pattern of activity of a splicing factor from types of data that can be readily generated for any factor of interest,” says Burge, who is also an extra-mural member of the Koch Institute for Integrative Cancer Research and an associate member of the Broad Institute of MIT and Harvard. “As we build up more of these models, we’ll be better able to infer which splicing factors have altered activity in a disease state from transcriptomic data, to help understand which splicing factors are driving pathology.”


A new patch could help to heal the heart

MIT engineers developed a programmable drug-delivery patch that can promote tissue healing and blood vessel regrowth following a heart attack.


MIT engineers have developed a flexible drug-delivery patch that can be placed on the heart after a heart attack to help promote healing and regeneration of cardiac tissue.

The new patch is designed to carry several different drugs that can be released at different times, on a pre-programmed schedule. In a study of rats, the researchers showed that this treatment reduced the amount of damaged heart tissue by 50 percent and significantly improved cardiac function.

If approved for use in humans, this type of patch could help heart attack victims recover more of their cardiac function than is now possible, the researchers say.

“When someone suffers a major heart attack, the damaged cardiac tissue doesn’t regenerate effectively, leading to a permanent loss of heart function. The tissue that was damaged doesn’t recover,” says Ana Jaklenec, a principal investigator at MIT’s Koch Institute for Integrative Cancer Research. “Our goal is to restore that function and help people regain a stronger, more resilient heart after a myocardial infarction.”

Jaklenec and Robert Langer, the David H. Koch Institute Professor at MIT and a member of the Koch Institute, are the senior authors of the new study, which appears today in Cell Biomaterials. Former MIT postdoc Erika Wang is the lead author of the paper.

Programmed drug delivery

After a heart attack, many patients end up having bypass surgery, which improves blood flow to the heart but doesn’t repair the cardiac tissue that was damaged. In the new study, the MIT team wanted to create a patch that could be applied to the heart at the same time that the surgery is performed.

This patch, they hoped, could deliver drugs over an extended time period to promote tissue healing. Many diseases, including heart conditions, require phase-specific treatment, but most systems release drugs all at once. Timed delivery better synchronizes therapy with recovery.

“We wanted to see if it’s possible to deliver a precisely orchestrated therapeutic intervention to help heal the heart, right at the site of damage, while the surgeon is already performing open-heart surgery,” Jaklenec says.

To achieve this, the researchers set out to adapt drug-delivery microparticles they had previously developed, which consist of capsules similar to tiny coffee cups with lids. These capsules are made from a polymer called PLGA and can be sealed with a drug inside.

By changing the molecular weight of the polymers used to form the lids, the researchers can control how quickly they degrade, which enables them to program the particles to release their contents at specific times. For this application, the researchers designed particles that break down during days 1-3, days 7-9, and days 12-14 after implantation.

This allowed them to devise a regimen of three drugs that promote heart healing in different ways. The first set of particles releases neuregulin-1, a growth factor that helps to prevent cell death. At the next time point, particles release VEGF, a growth factor that promotes formation of blood vessels surrounding the heart. The last batch of particles releases a small molecule drug called GW788388, which inhibits the formation of scar tissue that can occur following a heart attack.
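
One minimal way to picture this staged regimen is as a simple release schedule expressed in code. In the sketch below the drugs and day windows come from the study as described above, while the data structure and function are purely illustrative:

```python
# Illustrative representation of the staged-release regimen described above.
# The drug names and day windows follow the article; everything else is made up.
RELEASE_SCHEDULE = [
    {"days": (1, 3),   "drug": "neuregulin-1", "role": "limit cell death"},
    {"days": (7, 9),   "drug": "VEGF",         "role": "promote blood vessel growth"},
    {"days": (12, 14), "drug": "GW788388",     "role": "inhibit scar-tissue formation"},
]

def drugs_releasing_on(day):
    """Which payloads are being released on a given day after implantation."""
    return [p["drug"] for p in RELEASE_SCHEDULE
            if p["days"][0] <= day <= p["days"][1]]

for day in (2, 8, 13, 20):
    print(day, drugs_releasing_on(day))   # day 20: nothing left to release
```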

“When tissue regenerates, it follows a carefully timed series of steps,” Jaklenec says. “Dr. Wang created a system that delivers key components at just the right time, in the sequence that the body naturally uses to heal.”

The researchers embedded rows of these particles into thin sheets of a tough but flexible hydrogel, similar to a contact lens. This hydrogel is made from alginate and PEGDA, two biocompatible polymers that eventually break down in the body. For this study, the researchers created compact, miniature patches only a few millimeters across.

“We encapsulate arrays of these particles in a hydrogel patch, and then we can surgically implant this patch into the heart. In this way, we’re really programming the treatment into this material,” Wang says.

Better heart function

Once they created these patches, the researchers tested them on spheres of heart tissue that included cardiomyocytes generated from induced pluripotent stem cells. These spheres also included endothelial cells and human ventricular cardiac fibroblasts, which are also important components of the heart.

The researchers exposed those spheres to low-oxygen conditions, mimicking the effects of a heart attack, then placed the patches over them. They found that the patches promoted blood vessel growth, helped more cells to survive, and reduced the amount of fibrosis that developed.

In tests in a rat model of heart attack, the researchers also saw significant improvements following treatment with the patch. Compared to no treatment or IV injection of the same drugs, animals treated with the patch showed 33 percent higher survival rates, a 50 percent reduction in the amount of damaged tissue, and significantly increased cardiac output.

The researchers showed that the patches would eventually dissolve over time, becoming a very thin layer over the course of a year without disrupting the heart’s mechanical function.

“This is an important way to combine drug delivery and biomaterials to potentially provide new treatments for patients,” Langer says.

Of the drugs tested in this study, neuregulin-1 and VEGF have been tested in clinical trials to treat heart conditions, but GW788388 has only been explored in animal models. The researchers now hope to test their patches in additional animal models in hopes of running a clinical trial in the future.

The current version of the patch needs to be implanted surgically, but the researchers are exploring the possibility of incorporating these microparticles into stents that could be inserted into arteries to deliver drugs on a programmed schedule.

Other authors of the paper include Elizabeth Calle, Binbin Ying, Behnaz Eshaghi, Linzixuan Zhang, Xin Yang, Stacey Qiaohui Lin, Jooli Han, Alanna Backx, Yuting Huang, Sevinj Mursalova, Chuhan Joyce Qi, and Yi Liu.

The researchers were supported by the Natural Sciences and Engineering Research Council of Canada and the U.S. National Heart, Lung, and Blood Institute.


Lightning-prediction tool could help protect the planes of the future

The new approach maps aircraft sections most vulnerable to lightning, including on planes with experimental designs.


More than 70 aircraft are struck by lightning every day. If you happen to be flying when a strike occurs, chances are you won’t feel a thing, thanks to lightning protection measures that are embedded in key zones throughout the aircraft.

Lightning protection systems work well, largely because they are designed for planes with a “tube-and-wing” structure, a simple geometry common to most aircraft today. But future airplanes may not look and fly the same way. The aviation industry is exploring new designs, including blended-wing bodies and truss-braced wings, partly to reduce fuel and weight costs. But researchers don’t yet know how these unconventional designs might respond to lightning strikes.

MIT aerospace engineers are hoping to change that with a new physics-based approach that predicts how lightning would sweep across a plane with any design. The tool then generates a zoning map highlighting sections of an aircraft that would require various degrees of lightning protection, given how they are likely to experience a strike.

“People are starting to conceive aircraft that look very different from what we’re used to, and we can’t apply exactly what we know from historical data to these new configurations because they’re just too different,” says Carmen Guerra-Garcia, associate professor of aeronautics and astronautics (AeroAstro) at MIT. “Physics-based methods are universal. They’re agnostic to the type of geometry or vehicle. This is the path forward to be able to do this lightning zoning and protect future aircraft.”

She and her colleagues report their results in a study appearing this week in IEEE Access. The study’s first author is AeroAstro graduate student Nathanael Jenkins. Other co-authors include Louisa Michael and Benjamin Westin of Boeing Research and Technology.

First strike

When lightning strikes, it first attaches to a part of a plane — typically a sharp edge or extremity — and hangs on for up to a second. During this brief flash, the plane continues speeding through the air, causing the lightning current to “sweep” over parts of its surface, potentially changing in intensity and re-attaching at certain points where the intense current flow could damage vulnerable sections of an aircraft.

In previous work, Guerra-Garcia’s group developed a model to predict the parts of a plane where lightning is most likely to first connect. That work, led by graduate student Sam Austin, established a starting point for the team’s new work, which aims to predict how and where the lightning will then sweep over the plane’s surface. The team next converted their lightning sweep predictions into zoning maps to identify vulnerable regions requiring certain levels of protection.

A typical tube-and-wing plane is divided into three main zones, as classified by the aviation industry. Each zone has a clear description of the level of current it must withstand in order to be certified for flight. Parts of a plane that are more likely to be hit by lightning are generally classified as zone 1 and require more protection, which can include embedded metal foil in the skin of the airplane that conducts away a lightning current.

To date, an airplane’s lightning zones have been determined over many years of flight inspections after lightning strikes and fine-tuning of protection measures. Guerra-Garcia and her colleagues looked to develop a zoning approach based on physics, rather than historical flight data. Such a physics-based mapping could be applied to any shape of aircraft, such as unconventional and largely untested designs, to identify regions that really require reinforcement.

“Protecting aircraft from lightning is heavy,” Jenkins says. “Embedding copper mesh or foil throughout an aircraft is an added weight penalty. And if we had the greatest level of protection for every part of the plane’s surface, the plane would weigh far too much. So zoning is about trying to optimize the weight of the system while also having it be as safe as possible.”

In the zone

For their new approach, the team developed a model to predict the pattern of lightning sweep and the corresponding lightning protection zones, for a given airplane geometry. Starting with a specific airplane shape — in their case, a typical tube-and-wing structure — the researchers simulated the fluid dynamics, or how air would flow around a plane, given a certain speed, altitude, and pitch angle. They also incorporated their previous model that predicts the places where lightning is more likely to initially attach.

For each initial attachment point, the team simulated tens of thousands of potential lightning arcs, or angles from which the current strikes the plane. They then ran the model forward to predict how the tens of thousands of potential strikes would follow the air flow across the plane’s surface. These runs produced a statistical representation of where lightning, striking a specific point on a plane, is likely to flow and potentially cause damage. The team converted this statistical representation into a map of zones of varying vulnerability.
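
To make that workflow concrete, here is a deliberately oversimplified Monte Carlo cartoon, not the team’s model: it replaces the CFD airflow, 3D geometry, and attachment physics with a one-dimensional strip of fuselage panels and made-up dwell and sweep parameters, but it shows how thousands of simulated strikes can be aggregated into per-panel statistics and then thresholded into protection zones:

```python
# Oversimplified cartoon of physics-based lightning zoning. All parameters are
# invented for illustration; the real method uses CFD, 3D geometry, and
# attachment physics rather than a 1D strip of panels.
import numpy as np

rng = np.random.default_rng(0)
N_PANELS, LENGTH = 50, 40.0     # panels along a 40 m fuselage, nose to tail
AIRSPEED = 100.0                # m/s: the arc sweeps aft as the plane flies
dwell = np.zeros(N_PANELS)      # accumulated arc dwell time per panel (s)

for _ in range(20_000):                        # many simulated strikes
    x = rng.uniform(0.0, 2.0)                  # initial attachment near the nose (m)
    remaining = rng.exponential(0.3)           # total flash duration (s)
    while remaining > 0 and x < LENGTH:
        step = min(rng.exponential(0.02), remaining)   # time before re-attachment
        dwell[int(x / LENGTH * N_PANELS)] += step      # arc lingers on this panel
        x += AIRSPEED * step                           # swept aft by the airflow
        remaining -= step

# Threshold the dwell statistics into coarse protection zones.
zones = np.digitize(dwell, np.quantile(dwell, [0.5, 0.9]))   # 0 = low, 2 = high
print("panels per zone:", np.bincount(zones, minlength=3))
print("highest-exposure panels:", np.argsort(dwell)[-5:])
```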

They validated the method on a conventional tube-and-wing structure, showing that the zoning maps generated by the physics-based approach were consistent with what the aviation industry has determined over decades of fine-tuning.

“We now have a physics-based tool that provides some metrics like the probability of lightning attachment and dwell time, which is how long an arc will linger at a specific point,” Guerra-Garcia explains. “We convert those physics metrics into zoning maps to show, if I’m in this red region, the lightning arc will stay for a long time, so that region needs to be heavily protected.”

The team is starting to apply the approach to new geometries, such as blended-wing designs and truss-braced structures. The researchers envision that the tool can help designers incorporate safe and efficient lightning-protection systems early on in the design process.

“Lightning is incredible and terrifying at the same time, and I have full confidence in flying on planes at the moment,” Jenkins says. “I want to have that same confidence in 20 years’ time. So, we need a new way to zone aircraft.”

“With physics-based methods like the ones developed with Professor Guerra-Garcia’s group, we have the opportunity to shape industry standards and, as an industry, rely on the underlying physics to develop guidelines for aircraft certification through simulation,” says co-author Louisa Michael of Boeing Technology Innovation. “Currently, we are engaging with industrial committees to propose these methods to be included in Aerospace Recommended Practices.”

“Zoning unconventional aircraft is not an easy task,” adds co-author Ben Westin of Boeing Technology Innovation. “But these methods will allow us to confidently identify which threat levels each part of the aircraft needs to be protected against and certified for, and they give our design engineers a platform to do their best work to optimize aircraft design.”

Beyond airplanes, Guerra-Garcia is looking at ways to adapt the lightning protection model to other technologies, including wind turbines.

“About 60 percent of blade losses are due to lightning and will become worse as we move offshore because wind turbines will be even bigger and more susceptible to upward lightning,” she says. “They have many of the same challenges of a flowing gas environment. It’s more complex, and we will apply this same sort of methodology to this space.”

This research was funded, in part, by the Boeing Company.


Startup provides a nontechnical gateway to coding on quantum computers

Co-founded by Kanav Setia and Jason Necaise ’20, qBraid lets users access the most popular quantum devices and software programs on an intuitive, cloud-based platform.


Quantum computers have the potential to model new molecules and weather patterns better than any computer today. They may also one day accelerate artificial intelligence algorithms at a much lower energy footprint. But anyone interested in using quantum computers faces a steep learning curve that starts with getting access to quantum devices and then figuring out one of the many quantum software programs on the market.

Now qBraid, founded by Kanav Setia and Jason Necaise ’20, is providing a gateway to quantum computing with a platform that gives users access to the leading quantum devices and software. Users can log on to qBraid’s cloud-based interface and connect with quantum devices and other computing resources from leading companies like Nvidia, Microsoft, and IBM. In a few clicks, they can start coding or deploy cutting-edge software that works across devices.

“The mission is to take you from not knowing anything about quantum computing to running your first program on these amazing machines in less than 10 minutes,” Setia says. “We’re a one-stop platform that gives access to everything the quantum ecosystem has to offer. Our goal is to enable anyone — whether they’re enterprise customers, academics, or individual users — to build and ultimately deploy applications.”

Since its founding in June of 2020, qBraid has helped more than 20,000 people in more than 120 countries deploy code on quantum devices. That traction is ultimately helping to drive innovation in a nascent industry that’s expected to play a key role in our future.

“This lowers the barrier to entry for a lot of newcomers,” Setia says. “They can be up and running in a few minutes instead of a few weeks. That’s why we’ve gotten so much adoption around the world. We’re one of the most popular platforms for accessing quantum software and hardware.”

A quantum “software sandbox”

Setia met Necaise while the two interned at IBM. At the time, Necaise was an undergraduate at MIT majoring in physics, while Setia was at Dartmouth College. The two enjoyed working together, and Necaise said if Setia ever started a company, he’d be interested in joining.

A few months later, Setia decided to take him up on the offer. At Dartmouth, Setia had taken one of the first applied quantum computing classes, but students spent weeks struggling to install all the necessary software programs before they could even start coding.

“We hadn’t even gotten close to developing any useful algorithms,” Setia says. “The idea for qBraid was, ‘Why don’t we build a software sandbox in the cloud and give people an easy programming setup out of the box?’ Connection with the hardware would already be done.”

The founders received early support from the MIT Sandbox Innovation Fund and took part in the delta v summer startup accelerator run by the Martin Trust Center for MIT Entrepreneurship.

“Both programs provided us with very strong mentorship,” Setia says. “They give you frameworks on what a startup should look like, and they bring in some of the smartest people in the world to mentor you — people you’d never have access to otherwise.”

Necaise left the company in 2021. Setia, meanwhile, continued to find problems with quantum software outside of the classroom.

“This is a massive bottleneck,” Setia says. “I’d worked on several quantum software programs that pushed out updates or changes, and suddenly all hell broke loose on my codebase. I’d spend two to four weeks jostling with these updates that had almost nothing to do with the quantum algorithms I was working on.”

QBraid started as a platform with pre-installed software that let developers start writing code immediately. The company also added support for version-controlled quantum software so developers could build applications on top without worrying about changes. Over time, qBraid added connections to quantum computers and tools that let quantum programs run across different devices.

“The pitch was you don’t need to manage a bunch of software or a whole bunch of cloud accounts,” Setia says. “We’re a single platform: the quantum cloud.”

QBraid also launched qBook, a learning platform that offers interactive courses in quantum computing.

“If you see a piece of code you like, you just click play and the code runs,” Setia says. “You can run a whole bunch of code, modify it on the fly, and you can understand how it works. It runs on laptops, iPads, and phones. A significant portion of our users are from developing countries, and they’re developing applications from their phones.”

Democratizing quantum computing

Today qBraid’s 20,000 users come from over 400 universities and 100 companies around the world. As qBraid’s user base has grown, the company went from integrating quantum computers onto their platform from the outside to creating a quantum operating system, qBraid-OS, that is currently being used by four leading quantum companies.

“We are productizing these quantum computers,” Setia explains. “Many quantum companies are realizing they want to focus their energy completely on the hardware, with us productizing their infrastructure. We’re like the operating system for quantum computers.”

People are using qBraid to build quantum applications in AI and machine learning, to discover new molecules or develop new drugs, and to develop applications in finance and cybersecurity. With every new use case, Setia says qBraid is democratizing quantum computing to create the quantum workforce that will continue to advance the field.

“[In 2018], an article in The New York Times said there were possibly less than 1,000 people in the world that could be called experts in quantum programming,” Setia says. “A lot of people want to access these cutting-edge machines, but they don’t have the right software backgrounds. They are just getting started and want to play with algorithms. QBraid gives those people an easy programming setup out of the box.”


Turning on an immune pathway in tumors could lead to their destruction

MIT researchers show they can use messenger RNA to activate the pathway and trigger the immune system to attack tumors.


By stimulating cancer cells to produce a molecule that activates a signaling pathway in nearby immune cells, MIT researchers have found a way to force tumors to trigger their own destruction.

Activating this signaling pathway, known as the cGAS-STING pathway, worked even better when combined with existing immunotherapy drugs known as checkpoint blockade inhibitors, in a study of mice. That dual treatment was successfully able to control tumor growth.

The researchers turned on the cGAS-STING pathway in immune cells using messenger RNA delivered to cancer cells. This approach may avoid the side effects of delivering large doses of a STING activator, and takes advantage of a natural process in the body. This could make it easier to develop a treatment for use in patients, the researchers say.

“Our approach harnesses the tumor’s own machinery to produce immune-stimulating molecules, creating a powerful antitumor response,” says Natalie Artzi, a principal research scientist at MIT’s Institute for Medical Engineering and Science, an associate professor of medicine at Harvard Medical School, a core faculty member at the Wyss Institute for Biologically Inspired Engineering at Harvard, and the senior author of the study.

“By increasing cGAS levels inside cancer cells, we can enhance delivery efficiency — compared to targeting the more scarce immune cells in the tumor microenvironment — and stimulate the natural production of cGAMP, which then activates immune cells locally,” she says. “This strategy not only strengthens antitumor immunity but also reduces the toxicity associated with direct STING agonist delivery, bringing us closer to safer and more effective cancer immunotherapies.”

Alexander Cryer, a visiting scholar at IMES, is the lead author of the paper, which appears this week in the Proceedings of the National Academy of Sciences.

Immune activation

STING (short for stimulator of interferon genes) is a protein that helps to trigger immune responses. When STING is activated, it turns on a pathway that initiates production of type I interferons, which are cytokines that stimulate immune cells.

Many research groups, including Artzi’s, have explored the possibility of artificially stimulating this pathway with molecules called STING agonists, which could help immune cells to recognize and attack tumor cells. This approach has worked well in animal models, but it has had limited success in clinical trials, in part because the required doses can cause harmful side effects.

While working on a project exploring new ways to deliver STING agonists, Cryer became intrigued when he learned from previous work that cancer cells can produce a STING activator known as cGAMP. The cells then secrete cGAMP, which can activate nearby immune cells.

“Part of my philosophy of science is that I really enjoy using endogenous processes that the body already has, and trying to utilize them in a slightly different context. Evolution has done all the hard work. We just need to figure out how to push it in a different direction,” Cryer says. “Once I saw that cancer cells produce this molecule, I thought: Maybe there’s a way to take this process and supercharge it.”

Within cells, the production of cGAMP is catalyzed by an enzyme called cGAS. To get tumor cells to activate STING in immune cells, the researchers devised a way to deliver messenger RNA that encodes cGAS. When this enzyme detects double-stranded DNA in the cell body, which can be a sign of either infection or cancer-induced damage, it begins producing cGAMP.

“It just so happens that cancer cells, because they’re dividing so fast and not particularly accurately, tend to have more double-stranded DNA fragments than healthy cells,” Cryer says.

The tumor cells then release cGAMP into the tumor microenvironment, where it can be taken up by neighboring immune cells and activate their STING pathway.

Targeting tumors

Using a mouse model of melanoma, the researchers evaluated their new strategy’s potential to kill cancer cells. They injected mRNA encoding cGAS, encapsulated in lipid nanoparticles, into tumors. One group of mice received this treatment alone, while another received a checkpoint blockade inhibitor, and a third received both treatments.

Given on their own, cGAS and the checkpoint inhibitor each significantly slowed tumor growth. However, the best results were seen in the mice that received both treatments. In that group, tumors were completely eradicated in 30 percent of the mice, while none of the tumors were fully eliminated in the groups that received just one treatment.

An analysis of the immune response showed that the mRNA treatment stimulated production of interferon as well as many other immune signaling molecules. A variety of immune cells, including macrophages and dendritic cells, were activated. These cells help to stimulate T cells, which can then destroy cancer cells.

The researchers were able to elicit these responses with just a small dose of cancer-cell-produced cGAMP, which could help to overcome one of the potential obstacles to using cGAMP on its own as therapy: Large doses are required to stimulate an immune response, and these doses can lead to widespread inflammation, tissue damage, and autoimmune reactions. When injected on its own, cGAMP tends to spread through the body and is rapidly cleared from the tumor, while in this study, the mRNA nanoparticles and cGAMP remained at the tumor site.

“The side effects of this class of molecule can be pretty severe, and one of the potential advantages of our approach is that you’re able to potentially subvert some toxicity that you might see if you’re giving the free molecules,” Cryer says.

The researchers now hope to work on adapting the delivery system so that it could be given as a systemic injection, rather than injecting it into the tumor. They also plan to test the mRNA therapy in combination with chemotherapy drugs or radiotherapy that damage DNA, which could make the therapy even more effective because there could be even more double-stranded DNA available to help activate the synthesis of cGAMP.


A faster problem-solving tool that guarantees feasibility

The FSNet system, developed at MIT, could help power grid operators rapidly find feasible solutions for optimizing the flow of electricity.


Managing a power grid is like trying to solve an enormous puzzle.

Grid operators must ensure the proper amount of power is flowing to the right areas at the exact time when it is needed, and they must do this in a way that minimizes costs without overloading physical infrastructure. What’s more, they must solve this complicated problem repeatedly, as rapidly as possible, to keep up with constantly changing demand.

To help crack this recurring conundrum, MIT researchers developed a problem-solving tool that finds the optimal solution much faster than traditional approaches while ensuring the solution doesn’t violate any of the system’s constraints. In a power grid, constraints could include limits on generator and line capacity.

This new tool incorporates a feasibility-seeking step into a powerful machine-learning model trained to solve the problem. The feasibility-seeking step uses the model’s prediction as a starting point, iteratively refining the solution until it finds the best achievable answer.

The MIT system can solve complex problems several times faster than traditional solvers, while providing strong guarantees of success. For some extremely complex problems, it could find better solutions than tried-and-true tools. The technique also outperformed pure machine-learning approaches, which are fast but can’t always find feasible solutions.

In addition to helping schedule power production in an electric grid, this new tool could be applied to many types of complicated problems, such as designing new products, managing investment portfolios, or planning production to meet consumer demand.

“Solving these especially thorny problems well requires us to combine tools from machine learning, optimization, and electrical engineering to develop methods that hit the right tradeoffs in terms of providing value to the domain, while also meeting its requirements. You have to look at the needs of the application and design methods in a way that actually fulfills those needs,” says Priya Donti, the Silverman Family Career Development Professor in the Department of Electrical Engineering and Computer Science (EECS) and a principal investigator at the Laboratory for Information and Decision Systems (LIDS).

Donti, senior author of an open-access paper on this new tool, called FSNet, is joined by lead author Hoang Nguyen, an EECS graduate student. The paper will be presented at the Conference on Neural Information Processing Systems.

Combining approaches

Ensuring optimal power flow in an electric grid is an extremely hard problem that is becoming more difficult for operators to solve quickly.

“As we try to integrate more renewables into the grid, operators must deal with the fact that the amount of power generation is going to vary moment to moment. At the same time, there are many more distributed devices to coordinate,” Donti explains.

Grid operators often rely on traditional solvers, which provide mathematical guarantees that the optimal solution doesn’t violate any problem constraints. But these tools can take hours or even days to arrive at that solution if the problem is especially convoluted.

On the other hand, deep-learning models can solve even very hard problems in a fraction of the time, but the solution might ignore some important constraints. For a power grid operator, this could result in issues like unsafe voltage levels or even grid outages.

“Machine-learning models struggle to satisfy all the constraints due to the many errors that occur during the training process,” Nguyen explains.

For FSNet, the researchers combined the best of both approaches into a two-step problem-solving framework.

Focusing on feasibility

In the first step, a neural network predicts a solution to the optimization problem. Very loosely inspired by neurons in the human brain, neural networks are deep learning models that excel at recognizing patterns in data.

Next, a traditional solver that has been incorporated into FSNet performs a feasibility-seeking step. This optimization algorithm iteratively refines the initial prediction while ensuring the solution does not violate any constraints.

Because the feasibility-seeking step is based on a mathematical model of the problem, it can guarantee the solution is deployable.

“This step is very important. In FSNet, we can have the rigorous guarantees that we need in practice,” Nguyen says.

The researchers designed FSNet to address both main types of constraints (equality and inequality) at the same time. This makes it easier to use than other approaches that may require customizing the neural network or solving for each type of constraint separately.
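
A bare-bones numerical sketch of this two-step idea might look like the following. It is not the actual FSNet implementation (whose learned model and feasibility routine are far more sophisticated); a stand-in for the network’s fast prediction is followed by gradient steps that drive both equality and inequality violations toward zero:

```python
# Minimal sketch of a prediction step followed by a feasibility-seeking step.
# Toy problem: find x with A x = b and x >= 0. Not the actual FSNet code.
import numpy as np

A = np.array([[1.0, 1.0, 1.0]])   # toy equality constraint: x1 + x2 + x3 = 1
b = np.array([1.0])

def predict(features):
    """Stand-in for a trained neural network's fast (but infeasible) guess."""
    return np.array([0.7, 0.5, -0.3])

def feasibility_seek(x, steps=500, lr=0.05):
    """Gradient descent on 0.5*||A x - b||^2 + 0.5*||min(x, 0)||^2."""
    for _ in range(steps):
        grad = A.T @ (A @ x - b) + np.minimum(x, 0.0)
        x = x - lr * grad
    return x

def total_violation(x):
    return np.abs(A @ x - b).sum() + np.abs(np.minimum(x, 0.0)).sum()

x0 = predict(None)                 # fast learned prediction
x = feasibility_seek(x0)           # refined until constraints are (nearly) met
print("violation before:", total_violation(x0))
print("violation after :", total_violation(x))
```

In the real system, the feasibility-seeking step is built from a mathematical model of the full problem, which is what provides the deployment guarantees described above.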

“Here, you can just plug and play with different optimization solvers,” Donti says.

By thinking differently about how the neural network solves complex optimization problems, the researchers were able to unlock a new technique that works better, she adds.

They compared FSNet to traditional solvers and pure machine-learning approaches on a range of challenging problems, including power grid optimization. Their system cut solving times by orders of magnitude compared to the baseline approaches, while respecting all problem constraints.

FSNet also found better solutions to some of the trickiest problems.

“While this was surprising to us, it does make sense. Our neural network can figure out by itself some additional structure in the data that the original optimization solver was not designed to exploit,” Donti explains.

In the future, the researchers want to make FSNet less memory-intensive, incorporate more efficient optimization algorithms, and scale it up to tackle more realistic problems.

“Finding solutions to challenging optimization problems that are feasible is paramount to finding ones that are close to optimal. Especially for physical systems like power grids, close to optimal means nothing without feasibility. This work provides an important step toward ensuring that deep-learning models can produce predictions that satisfy constraints, with explicit guarantees on constraint enforcement,” says Kyri Baker, an associate professor at the University of Colorado Boulder, who was not involved with this work.

"A persistent challenge for machine learning-based optimization is feasibility. This work elegantly couples end-to-end learning with an unrolled feasibility-seeking procedure that minimizes equality and inequality violations. The results are very promising and I look forward to see where this research will head," adds Ferdinando Fioretto, an assistant professor at the University of Virginia, who was not involved with this work.


Study: Good management of aid projects reduces local violence

World Bank data show how the organization of programs influences political conflict — indicating a path to better aid delivery.


Good management of aid projects in developing countries reduces violence in those areas — but poorly managed projects increase the chances of local violence, according to a new study by an MIT economist.

The research, examining World Bank projects in Africa, illuminates a major question surrounding international aid. Observers have long wondered if aid projects, by bringing new resources into developing countries, lead to conflict over those goods as an unintended consequence. Previously, some scholars have identified an increase in violence attached to aid, while others have found a decrease.

The new study shows those prior results are not necessarily wrong, but not entirely right, either. Instead, aid oversight matters. World Bank programs earning the highest evaluation scores for their implementation reduce the likelihood of conflict by up to 12 percent, compared to the worst-managed programs.

“I find that the management quality of these projects has a really strong effect on whether that project leads to conflict or not,” says MIT economist Jacob Moscona, who conducted the research. “Well-managed aid projects can actually reduce conflict, and poorly managed projects increase conflict, relative to no project. So, the way aid programs are organized is very important.”

The findings also suggest aid projects can work well almost anywhere. At times, observers have suggested the political conditions in some countries prevent aid from being effective. But the new study finds otherwise.

“There are ways these programs can have their positive effects without the negative consequences,” Moscona says. “And it’s not the result of what politics looks like on the receiving end; it’s about the organization itself.”

Moscona’s paper detailing the study, “The Management of Aid and Conflict in Africa,” is published in the November issue of the American Economic Journal: Economic Policy. Moscona, the paper’s sole author, is the 3M Career Development Assistant Professor in MIT’s Department of Economics.

Decisions on the ground

To conduct the study, Moscona examined World Bank data from the 1997-2014 time period, using the information compiled by AidData, a nonprofit group that also studies World Bank programs. Importantly, the World Bank conducts extensive evaluations of its projects and includes the identities of project leaders as part of those reviews.

“There are a lot of decisions on the ground made by managers of aid, and aid organizations themselves, that can have a huge impact on whether or not aid leads to conflict, and how aid resources are used and whether they are misappropriated or captured and get into the wrong hands,” Moscona says.

For instance, diligent daily checks about food distribution programs can and have substantially reduced the amount of food that is stolen or “leaks” out of the program. Other projects have created innovative ways of tagging small devices to ensure those objects are used by program participants, reducing appropriation by others.

Moscona combined the World Bank data with statistics from the Armed Conflict Location and Event Data Project (ACLED), a nonprofit that monitors political violence. That enabled him to evaluate how the quality of aid project implementation — and even the quality of the project leadership — influenced local outcomes.

For instance, by looking at the ratings of World Bank project leaders, Moscona found that shifting from a project leader at the 25th percentile, in terms of how frequently projects are linked with conflict, to one at the 75th percentile increases the chances of local conflict by 15 percent.

“The magnitudes are pretty large, in terms of the probability that a conflict starts in the vicinity of a project,” Moscona observes.

Moscona’s research identified several other aspects of the interaction between aid and conflict that hold up over the region and time period. The establishment of aid programs does not seem to lead to long-term strategic activity by non-government forces, such as land acquisition or the establishment of rebel bases. The effects are also larger in areas that have had recent political violence. And armed conflict is greater when the resources at stake can be expropriated — such as food or medical devices.

“It matters most if you have more divertable resources, like food and medical devices that can be captured, as opposed to infrastructure projects,” Moscona says.

Reconciling the previous results

Moscona also found a clear trend in the data about the timing of violence in relation to aid. Government and other armed groups do not engage in much armed conflict when aid programs are being established; it is the appearance of desired goods themselves that sets off violent activity.

“You don’t see much conflict when the projects are getting off the ground,” Moscona says. “You really see the conflict start when the money is coming in or when the resources start to flow. Which is consistent with the idea of the relevant mechanism being about aid resources and their misappropriation, rather than groups trying to delegitimize a project.”

All told, Moscona’s study finds a logical mechanism explaining the varying results other scholars have found with regard to aid and conflict. If aid programs are not equally well-administered, it stands to reason that their outcomes will not be identical, either.

“There wasn’t much work trying to make those two sets of results speak to each other,” says Moscona. “I see it less as overturning existing results than providing a way to reconcile different results and experiences.”

Moscona’s findings may also speak to the value of aid in general — and provide actionable ideas for institutions such as the World Bank. If better management makes such a difference, then the potential effectiveness of aid programs may increase.

“One goal is to change the conversation about aid,” Moscona says. The data, he suggests, shows that the public discourse about aid can be “less defeatist about the potential negative consequences of aid, and the idea that it’s out of the control of the people who administer it.” 


New nanoparticles stimulate the immune system to attack ovarian tumors

Targeted particles carrying the cytokine IL-12 can jump-start T cells, allowing them to clear tumors while avoiding side effects.


Cancer immunotherapy, which uses drugs that stimulate the body’s immune cells to attack tumors, is a promising approach to treating many types of cancer. However, it doesn’t work well for some tumors, including ovarian cancer.

To elicit a better response, MIT researchers have designed new nanoparticles that can deliver an immune-stimulating molecule called IL-12 directly to ovarian tumors. When given along with immunotherapy drugs called checkpoint inhibitors, IL-12 helps the immune system launch an attack on cancer cells.

Studying a mouse model of ovarian cancer, the researchers showed that this combination treatment could eliminate metastatic tumors in more than 80 percent of the mice. When the mice were later injected with more cancer cells, to simulate tumor recurrence, their immune cells remembered the tumor proteins and cleared them again.

“What’s really exciting is that we’re able to deliver IL-12 directly in the tumor space. And because of the way that this nanomaterial is designed to allow IL-12 to be borne on the surfaces of the cancer cells, we have essentially tricked the cancer into stimulating immune cells to arm themselves against that cancer,” says Paula Hammond, an MIT Institute Professor, MIT’s vice provost for faculty, and a member of the Koch Institute for Integrative Cancer Research.

Hammond and Darrell Irvine, a professor of immunology and microbiology at the Scripps Research Institute, are the senior authors of the new study, which appears today in Nature Materials. Ivan Pires PhD ’24, now a postdoc at Brigham and Women’s Hospital, is the lead author of the paper.

“Hitting the gas”

Most tumors express and secrete proteins that suppress immune cells, creating a microenvironment in which the immune response is weakened. Among the main players that can kill tumor cells are T cells, but they get sidelined or blocked by the cancer cells and are unable to attack the tumor. Checkpoint inhibitors are an FDA-approved treatment designed to take those brakes off the immune system by removing the immune-suppressing proteins so that T cells can mount an attack on tumor cells.

For some cancers, including some types of melanoma and lung cancer, removing the brakes is enough to provoke the immune system into attacking cancer cells. However, ovarian tumors have many ways to suppress the immune system, so checkpoint inhibitors alone usually aren’t enough to launch an immune response.

“The problem with ovarian cancer is no one is hitting the gas. So, even if you take off the brakes, nothing happens,” Pires says.

IL-12 offers one way to “hit the gas,” by supercharging T cells and other immune cells. However, the large doses of IL-12 required to get a strong response can produce side effects due to generalized inflammation, such as flu-like symptoms (fever, fatigue, GI issues, and headaches), as well as more severe complications such as liver toxicity and cytokine release syndrome, which can be severe enough to lead to death.

In a 2022 study, Hammond’s lab developed nanoparticles that could deliver IL-12 directly to tumor cells, which allows larger doses to be given while avoiding the side effects seen when the drug is injected. However, these particles tended to release their payload all at once after reaching the tumor, which hindered their ability to generate a strong T cell response.

In the new study, the researchers modified the particles so that IL-12 would be released more gradually, over about a week. They achieved this by using a different chemical linker to attach IL-12 to the particles.

“With our current technology, we optimize that chemistry such that there’s a more controlled release rate, and that allowed us to have better efficacy,” Pires says.

The particles consist of tiny, fatty droplets known as liposomes, with IL-12 molecules tethered to the surface. For this study, the researchers used a linker called maleimide to attach IL-12 to the liposomes. This linker is more stable than the one they used in the previous generation of particles, which was susceptible to being cleaved by proteins in the body, leading to premature release.
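The article does not give release kinetics in numbers, but a simple first-order model makes the point. In the sketch below, the half-lives are assumptions chosen only for illustration, contrasting a fast "burst" linker with a more stable one that stretches release out over roughly a week.

```python
# Illustrative only: a first-order release model comparing a hypothetical
# fast-cleaving linker with a hypothetical more stable one. The half-lives
# are assumptions for illustration, not measured values from the study.
import math

def fraction_released(t_days, half_life_days):
    """Cumulative fraction of IL-12 released by day t under first-order kinetics."""
    k = math.log(2) / half_life_days
    return 1.0 - math.exp(-k * t_days)

for day in [1, 3, 7]:
    burst = fraction_released(day, half_life_days=0.5)   # hypothetical fast linker
    stable = fraction_released(day, half_life_days=3.0)  # hypothetical stable linker
    print(f"day {day}: burst linker {burst:.0%} released, stable linker {stable:.0%} released")
```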

To make sure that the particles get to the right place, the researchers coat them with a layer of a polymer called poly-L-glutamate (PLE), which helps them directly target ovarian tumor cells. Once they reach the tumors, the particles bind to the cancer cell surfaces, where they gradually release their payload and activate nearby T cells.

Disappearing tumors

In tests in mice, the researchers showed that the IL-12-carrying particles could effectively recruit and stimulate T cells that attack tumors. The cancer models used for these studies are metastatic, so tumors developed not only in the ovaries but throughout the peritoneal cavity, which includes the surface of the intestines, liver, pancreas, and other organs. Tumors could even be seen in the lung tissues.

First, the researchers tested the IL-12 nanoparticles on their own, and they showed that this treatment eliminated tumors in about 30 percent of the mice. They also found a significant increase in the number of T cells that accumulated in the tumor environment.

Then, the researchers gave the particles to mice along with checkpoint inhibitors. More than 80 percent of the mice that received this dual treatment were cured. This happened even when the researchers used models of ovarian cancer that are highly resistant to immunotherapy or to the chemotherapy drugs usually used for ovarian cancer.

Patients with ovarian cancer are usually treated with surgery followed by chemotherapy. While this may be initially effective, cancer cells that remain after surgery are often able to grow into new tumors. Establishing an immune memory of the tumor proteins could help to prevent that kind of recurrence.

In this study, when the researchers injected tumor cells into the cured mice five months after the initial treatment, the immune system was still able to recognize and kill the cells.

“We don’t see the cancer cells being able to develop again in that same mouse, meaning that we do have an immune memory developed in those animals,” Pires says.

The researchers are now working with MIT’s Deshpande Center for Technological Innovation to spin out a company that they hope could further develop the nanoparticle technology. In a study published earlier this year, Hammond’s lab reported a new manufacturing approach that should enable large-scale production of this type of nanoparticle.

The research was funded by the National Institutes of Health, the Marble Center for Nanomedicine, the Deshpande Center for Technological Innovation, the Ragon Institute of MGH, MIT, and Harvard, and the Koch Institute Support (core) Grant from the National Cancer Institute.


Using classic physical phenomena to solve new problems

Marco Graffiedi, a doctoral student in nuclear science and engineering, is researching quenching processes to help cool nuclear cores and to help NASA craft the next generation of space vehicles.


Quenching, the rapid cooling of a hot surface by a liquid, is a remarkably effective way to transport heat away. But in extreme environments, like nuclear power plants and aboard spaceships, a lot rides on the efficiency and speed of the process.

It’s why Marco Graffiedi, a fifth-year doctoral student at MIT’s Department of Nuclear Science and Engineering (NSE), is researching the phenomenon to help develop the next generation of spaceships and nuclear plants.

Growing up in small-town Italy

Graffiedi’s parents encouraged a sense of exploration, giving him responsibilities for family projects even at a young age. When they restored a countryside cabin in a small town near Palazzolo, in the hills between Florence and Bologna, the then-14-year-old Marco got a project of his own. He had to ensure the animals on the property had enough accessible water without overfilling the storage tank. Marco designed and built a passive hydraulic system that effectively solved the problem and is still functional today.

His proclivity for science continued in high school in Lugo, where Graffiedi enjoyed recreating classical physics phenomena through experiments. Incidentally, the high school is named after Gregorio Ricci-Curbastro, a mathematician whose tensor calculus laid the mathematical groundwork for the theory of general relativity — history that is not lost on Graffiedi. After high school, Graffiedi attended the International Physics Olympiad in Bangkok, a formative event that cemented his love for physics.

A gradual shift toward engineering

A passion for physics and basic sciences notwithstanding, Graffiedi wondered if he’d be a better fit for engineering, where he could use the study of physics, chemistry, and math as tools to build something.

Following that path, he completed a bachelor’s and master’s in mechanical engineering — because an undergraduate degree in Italy takes only three years, pretty much everyone does a master’s, Graffiedi laughs — at the Università di Pisa and the Scuola Superiore Sant’Anna (School of Engineering). The Sant’Anna is a highly selective institution that most students attend to complement their university studies.

Graffiedi’s university studies gradually moved him toward the field of environmental engineering. He researched concentrated solar power, studying the associated thermal cycle and trying to improve collection in order to reduce its cost. While the project was not very successful, it reinforced Graffiedi’s impression of the necessity of alternative energies. Still firmly planted in energy studies, Graffiedi worked on fracture mechanics for his master’s thesis, in collaboration with (what was then) GE Oil and Gas, researching how to improve the effectiveness of centrifugal compressors. And a summer internship at Fermilab had Graffiedi working on the thermal characterization of superconductive coatings.

With his studies behind him, Graffiedi was still unsure about his professional path. Through the Edison Program at GE Oil and Gas, where he worked shortly after graduation, Graffiedi got to test drive many fields — from mechanical and thermal engineering to exploring gas turbines and combustion. He eventually became a test engineer, coordinating a team of engineers to test a new upgrade to the company’s gas turbines. “I set up the test bench, understanding how to instrument the machine, collect data, and run the test,” Graffiedi remembers. “There was a lot you need to think about, from a little turbine blade with sensors on it to the location of safety exits on the test bench.”

The move toward nuclear engineering

As fun as the test engineering job was, Graffiedi started to crave more technical knowledge and wanted to pivot to science. As part of his exploration, he came across nuclear energy and, understanding it to be the future, decided to lean on his engineering background to apply to MIT NSE.

He found a fit in Professor Matteo Bucci’s group and decided to explore boiling and quenching. The move from science to engineering, and back to science, was now complete.

NASA, the primary sponsor of the research, is interested in preventing boiling of cryogenic fuels, because boiling leads to loss of fuel and the resulting vapor will need to be vented to avoid overpressurizing a fuel tank.

Graffiedi’s primary focus is on quenching, which will play an important role in refueling in space — and in the cooling of nuclear cores. When a cryogen is used to cool down a surface, it undergoes what is known as the Leidenfrost effect, which means it first forms a thin vapor film that acts as an insulator and prevents further cooling. To facilitate rapid cooling, it’s important to accelerate the collapse of the vapor film. Graffiedi is exploring the mechanics of the quenching process on a microscopic level, studies that are important for land and space applications.

Boiling can be used for yet another modern application: to improve the efficiency of cooling systems for data centers. The growth of data centers and electric transportation systems requires effective heat transfer mechanisms to avoid overheating. Immersion cooling using dielectric fluids — fluids that do not conduct electricity — is one way to do so. These fluids remove heat from a surface by leaning on the principle of boiling. For effective boiling, the fluid must overcome the Leidenfrost effect and break the vapor film that forms. The fluid must also have a high critical heat flux (CHF), the maximum heat flux at which boiling can effectively transfer heat from a heated surface to a liquid. Because dielectric fluids have lower CHF than water, Graffiedi is exploring solutions to enhance these limits. In particular, he is investigating how high electric fields can be used to enhance CHF and even to enable boiling-based cooling of electronic components in the absence of gravity. He published this research in Applied Thermal Engineering in June.
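As a concrete reference point, not drawn from Graffiedi's own studies, the classical Zuber correlation gives a quick estimate of pool-boiling CHF. The sketch below evaluates it with textbook property values for saturated water at atmospheric pressure.

```python
# Back-of-the-envelope reference (not from the study): Zuber's classical
# correlation for critical heat flux in saturated pool boiling, evaluated
# with textbook property values for water at atmospheric pressure.
def zuber_chf(h_fg, rho_v, rho_l, sigma, g=9.81, C=0.131):
    """Critical heat flux, W/m^2, from Zuber's hydrodynamic instability model."""
    return C * h_fg * rho_v**0.5 * (sigma * g * (rho_l - rho_v))**0.25

chf_water = zuber_chf(
    h_fg=2.257e6,  # latent heat of vaporization, J/kg
    rho_v=0.60,    # saturated vapor density, kg/m^3
    rho_l=958.0,   # saturated liquid density, kg/m^3
    sigma=0.059,   # surface tension, N/m
)
print(f"Estimated CHF for water at 1 atm: {chf_water / 1e6:.2f} MW/m^2")  # ~1.1 MW/m^2
```

Dielectric coolants have far lower latent heat and surface tension than water, which is why their CHF, the limit Graffiedi is trying to raise, comes out much lower.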

Beyond boiling

Graffiedi’s love of science and engineering shows in his commitment to teaching as well. He has been a teaching assistant for four classes at NSE, winning awards for his contributions. His many additional achievements include winning the Manson Benedict Award presented to an NSE graduate student for excellence in academic performance and professional promise in nuclear science and engineering, and a service award for his role as past president of the MIT Division of the American Nuclear Society.

Boston has a fervent Italian community, Graffiedi says, and he enjoys being a part of it. Fittingly, the MIT Italian club is called MITaly. When he’s not at work or otherwise engaged, Graffiedi loves Latin dancing, something he makes time for at least a couple of times a week. While he has his favorite Italian restaurants in the city, Graffiedi is grateful for another set of skills his parents gave him when he was just 11: making perfect pizza and pasta.


Q&A: How MITHIC is fostering a culture of collaboration at MIT

A presidential initiative, the MIT Human Insight Collaborative is supporting new interdisciplinary initiatives and projects across the Institute.


The MIT Human Insight Collaborative (MITHIC) is a presidential initiative with a mission of elevating human-centered research and teaching and connecting scholars in the humanities, arts, and social sciences with colleagues across the Institute.

Since its launch in 2024, MITHIC has funded 31 projects led by teaching and research staff representing 22 different units across MIT. The collaborative is holding its annual event on Nov. 17. 

In this Q&A, Keeril Makan, associate dean in the MIT School of Humanities, Arts, and Social Sciences, and Maria Yang, interim dean of the MIT School of Engineering, discuss the value of MITHIC and the ways it’s accelerating new research and collaborations across the Institute. Makan is the Michael (1949) and Sonja Koerner Music Composition Professor and faculty lead for MITHIC. Yang is the William E. Leonhard (1940) Professor in the Department of Mechanical Engineering and co-chair of MITHIC’s SHASS+ Connectivity Fund.

Q: You each come from different areas of MIT. Looking at MITHIC from your respective roles, why is this initiative so important for the Institute?

Makan: The world is counting on MIT to develop solutions to some of the world’s greatest challenges, such as artificial intelligence, poverty, and health care. These are all issues that arise from human activity, a thread that runs through much of the research we’re focused on in SHASS. Through MITHIC, we’re embedding human-centered thinking and connecting the Institute’s top scholars in the work needed to find innovative ways of addressing these problems.

Yang: MITHIC is very important to MIT, and I think of this from the point of view of an engineer, which is my background. Engineers often think about the technology first, which is absolutely important. But for that technology to have real impact, you have to think about the human insights that make that technology relevant and allow it to be deployed in the world. So really having a deep understanding of that is core to MITHIC and MIT’s engineering enterprise.

Q: How does MITHIC fit into MIT’s broader mission? 

Makan: MITHIC highlights how the work we do in the School of Humanities, Arts, and Social Sciences is aligned with MIT’s mission, which is to address the world’s great problems. But MITHIC has also connected all of MIT in this endeavor. We have faculty from all five schools and the MIT Schwarzman College of Computing involved in evaluating MITHIC project proposals. Each of them represents a different point of view and engages with these projects, which originate in SHASS but actually cut across many different fields. Seeing their perspectives on these projects has been inspiring.

Yang: I think of MIT’s main mission as using technology and many other things to make impact in the world, especially social impact. The kind of interdisciplinary work that MITHIC catalyzes really enables all of that work to happen in a new and profound way. The SHASS+ Connectivity Fund, which connects SHASS faculty and researchers with colleagues outside of SHASS, has resulted in collaborations that were not possible before. One example is a project being led by professors Mark Rau, who has a shared appointment between Music and Electrical Engineering and Computer Science, and Antoine Allanore in Materials Science and Engineering. The two of them are looking at how they can take ancient unplayable instruments and recreate them using new technologies for scanning and fabrication. They’re also working with the Museum of Fine Arts, so it’s a whole new type of collaboration that exemplifies MITHIC. 

Q: What has been the community response to MITHIC in its first year?

Makan: It’s been very strong. We found a lot of pent-up demand, both from faculty in SHASS and faculty in the sciences and engineering. Either there were preexisting collaborations that they could take to the next level through MITHIC, or there was the opportunity to meet someone new and talk to someone about a problem and how they could collaborate. MITHIC also hosted a series of Meeting of the Minds events, which are a chance to have faculty and members of the community get to know one another on a certain topic. This community building has been exciting, and led to an overwhelming number of applications last year. There has also been significant student involvement, with several projects bringing on UROPs [Undergraduate Research Opportunities Program projects] and PhD students to help with their research. MITHIC gives a real morale boost and a lot of hope that there is a focus upon building collaborations at MIT and on not forgetting that the world needs humanists, artists, and social scientists.

Yang: One faculty member told me the SHASS+ Connectivity Fund has given them hope for the kind of research that we do because of the cross collaboration. There’s a lot of excitement and enthusiasm for this type of work.

Q: The SHASS+ Connectivity Fund is designed to support interdisciplinary collaborations at MIT. What’s an example of a SHASS+ project that’s worked particularly well? 

Makan: One exciting collaboration is between professors Jörn Dunkel in Mathematics and In Song Kim in Political Science. In Song is someone who has done a lot of work on studying lobbying and its effect upon the legislative process. He met Jörn, I believe, at one of MIT’s daycare centers, so it’s a relationship that started in a very informal fashion. But they found they actually had ways of looking at math and quantitative analysis that could complement one another. Their work is creating a new subfield and taking the research in a direction that would not be possible without this funding.

Yang: One of the SHASS+ projects that I think is really interesting is between professors Marzyeh Ghassemi in Electrical Engineering and Computer Science and Esther Duflo in Economics. The two of them are looking at how they can use AI to help health diagnostics in low-resource global settings, where there isn’t a lot of equipment or technology to do basic health diagnostics. They can use handheld, low-cost equipment to do things like predict if someone is going to have a heart attack. And they are not only developing the diagnostic tool, but evaluating the fairness of the algorithm. The project is an excellent example of using a MITHIC grant to make impact in the world.

Q: What has been MITHIC’s impact in terms of elevating research and teaching within SHASS?

Makan: In addition to the SHASS+ Connectivity Fund, there are two other possibilities to help support both SHASS research as well as educational initiatives: the Humanities Cultivation Fund and the SHASS Education Innovation Fund. And both of these are providing funding in excess of what we normally see within SHASS. It both recognizes the importance of the work of our faculty and it also gives them the means to actually take ideas to a much further place. 

One of the projects that MITHIC is helping to support is the Compass Initiative. Compass was started by Lily Tsai, one of our professors in Political Science, along with other faculty in SHASS to create essentially an introductory class to the different methodologies within SHASS. So we have philosophers, music historians, etc., all teaching together, all addressing how we interact with one another, what it means to be a good citizen, what it means to be socially aware and civically engaged. This is a class that is very timely for MIT and for the world. And we were able to give it robust funding so they can take this and develop it even further. 

MITHIC has also been able to take local initiatives in SHASS and elevate them. A group of anthropologists, historians, and urban planners has been working together on a project called the Living Climate Futures Lab. This is a group interested in working with frontline communities around climate change and sustainability. They work to build trust with local communities and start to work with them on thinking about how climate change affects them and what solutions might look like. This is a powerful and uniquely SHASS approach to climate change, and through MITHIC, we’re able to take this seed effort, robustly fund it, and help connect it to the larger climate project at MIT.

Q: What excites you most about the future of MITHIC at MIT?

Yang: We have a lot of MIT efforts that are trying to break people out of their disciplinary silos, and MITHIC really is a big push on that front. It’s a presidential initiative, so it’s high on the priority list of what people are thinking about. We’ve already done our first round, and the second round is going to be even more exciting, so it’s only going to gain in force. In SHASS+, we’re actually having two calls for proposals this academic year instead of just one. I feel like there’s still so much possibility to bring together interdisciplinary research across the Institute.

Makan: I’m excited about how MITHIC is changing the culture of MIT. MIT thinks of itself in terms of engineering, science, and technology, and this is an opportunity to think about those STEM fields within the context of human activity and humanistic thinking. Having this shift at MIT in how we approach solving problems bodes well for the world, and it places SHASS as this connective tissue at the Institute. It connects the schools and it can also connect the other initiatives, such as manufacturing and health and life sciences. There’s an opportunity for MITHIC to seed all these other initiatives with the work that goes on in SHASS.


Battery-powered appliances make it easy to switch from gas to electric

Founded by Sam Calisch SM ’14, PhD ’19, Copper offers electric kitchen ranges that plug into standard wall outlets, with no electrical upgrades required.


As batteries have gotten cheaper and more powerful, they have enabled the electrification of everything from vehicles to lawn equipment, power tools, and scooters. But electrifying homes has been a slower process. That’s because switching from gas appliances often requires ripping out drywall, running new wires, and upgrading the electrical box.

Now the startup Copper, founded by Sam Calisch SM ’14, PhD ’19, has developed a battery-equipped kitchen range that can plug into a standard 120-volt wall outlet. The induction range features a lithium iron phosphate battery that charges when energy is cheapest and cleanest, then delivers power when you’re ready to cook.

“We’re making ‘going electric’ like an appliance swap instead of a construction project,” says Calisch. “If you have a gas stove today, there is almost certainly an outlet within reach because the stove has an oven light, clock, or electric igniters. That’s big if you’re in a single-family home, but in apartments it’s an existential factor. Rewiring a 100-unit apartment building is such an expensive proposition that basically no one’s doing it.”

Copper has shipped about 1,000 of its battery-powered ranges to date, often to developers and owners of large apartment complexes. The company also has an agreement with the New York City Housing Authority for at least 10,000 units.

Once installed, the ranges can contribute to a distributed, cleaner, and more resilient energy network. In fact, Copper recently piloted a program in California to offer cheap, clean power to the grid from its home batteries at times when the grid would otherwise need to fire up a gas-powered plant to meet spiking electricity demand.

“After these appliances are installed, they become a grid asset,” Calisch says. “We can manage the fleet of batteries to help provide firm power and help grids deliver more clean electricity. We use that revenue, in turn, to further drive down the cost of electrification.”

Finding a mission

Calisch has been working on climate technologies his entire career. It all started at the clean technology incubator Otherlab that was founded by Saul Griffith SM ’01, PhD ’04.

“That’s where I caught the bug for technology and product development for climate impact,” Calisch says. “But I realized I needed to up my game, so I went to grad school in [MIT Professor] Neil Gershenfeld’s lab, the Center for Bits and Atoms. I got to dabble in software engineering, mechanical engineering, electrical engineering, mathematical modeling, all with the lens of building and iterating quickly.”

Calisch stayed at MIT for his PhD, where he worked on approaches in manufacturing that used fewer materials and less energy. After finishing his PhD in 2019, Calisch helped start a nonprofit called Rewiring America focused on advocating for electrification. Through that work, he collaborated with U.S. Senate offices on the Inflation Reduction Act.

The cost of lithium-ion batteries has decreased by about 97 percent since their commercial debut in 1991. As more products have gone electric, the manufacturing process for everything from phones to drones, robots, and electric vehicles has converged around an electric tech stack of batteries, electric motors, power electronics, and chips. The countries that master the electric tech stack will have a distinct manufacturing advantage.

Calisch started Copper to boost the supply chain for batteries while contributing to the electrification movement.

“Appliances can help deploy batteries, and batteries help deploy appliances,” Calisch says. “Appliances can also drive down the installed cost of batteries.”

The company is starting with the kitchen range because its peak power draw is among the highest in the home. Flattening that peak brings big benefits. Ranges are also meaningful: they’re where people gather and cook each night. People take pride in their kitchen ranges more than, say, a water heater.

Copper’s 30-inch induction range heats up more quickly and reaches more precise temperatures than its gas counterpart. Installing it is as easy as swapping a fridge or dishwasher. Thanks to its 5-kilowatt-hour battery, the range even works when the power goes out.
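Rough arithmetic shows why a modest battery is enough. In the sketch below, only the 5-kilowatt-hour capacity comes from the article; the outlet rating, peak cooking draw, and cooking time are assumptions chosen for illustration.

```python
# Illustrative arithmetic (assumed numbers, not Copper's specs): a standard
# 120 V, 15 A circuit can supply roughly 1.4 kW continuously (80% of 1.8 kW
# for a continuous load), while heavy cooking can draw several kilowatts.
# The onboard battery buffers the difference and recharges between meals.
outlet_kw = 0.120 * 15 * 0.8      # ~1.44 kW available continuously from the wall
cooking_kw = 8.0                  # hypothetical peak draw with several burners and the oven on
cooking_hours = 0.5               # hypothetical duration of heavy cooking
battery_kwh = 5.0                 # battery capacity stated in the article

energy_needed = cooking_kw * cooking_hours      # 4.0 kWh of cooking energy
from_wall = outlet_kw * cooking_hours           # ~0.7 kWh supplied live by the outlet
from_battery = energy_needed - from_wall        # ~3.3 kWh drawn from storage
recharge_hours = from_battery / outlet_kw       # ~2.3 hours to refill from the outlet

print(f"battery draw: {from_battery:.1f} kWh of {battery_kwh} kWh capacity")
print(f"recharges from the outlet in about {recharge_hours:.1f} hours")
```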

“Batteries have become 10 times cheaper and are now both affordable and able to create tangible improvements in quality of life,” Calisch says. “It’s a new notion of climate impact that isn’t about turning down thermostats and suffering for the planet, it’s about adopting new technologies that are better.”

Scaling impact

Calisch says there’s no way for the U.S. to maintain resilient energy systems in the future without a lot of batteries. Because of power transmission and regulatory limitations, those batteries can’t all be located out on the grid.

“We see an analog to the internet,” Calisch says. “In order to deliver millions of times more information across the internet, we didn’t add millions of times more wires. We added local storage and caching across the network. That’s what increased throughput. We’re doing the same thing for the electric grid.”

This summer, Copper raised $28 million to scale its production to meet growing demand for its battery-equipped appliances. Copper is also working to license its technology to other appliance manufacturers to help speed the electric transition.

“These electric technologies have the potential to improve people’s lives and, as a byproduct, take us off of fossil fuels,” Calisch says. “We’re in the business of identifying points of friction for that transition. We are not an appliance company; we’re an energy company.”

Looking back, Calisch credits MIT with equipping him with the knowledge needed to run a technical business.

“My time at MIT gave me hands-on experience with a variety of engineering systems,” Calisch says. “I can talk to our embedded engineering team or electrical engineering team or mechanical engineering team and understand what they’re saying. That’s been enormously useful for running a company.”

He adds: “I also developed an expansive view of infrastructure at MIT, which has been instrumental in launching Copper and thinking about the electrical grid not just as wires on the street, but all of the loads in our buildings. It’s about making homes not just consumers of electricity, but participants in this broader network.”


Study reveals the role of geography in the opioid crisis

The findings point to state policies involving the presence of “pill mills” as influences on addiction over time.


The U.S. opioid crisis has varied in severity across the country, leading to extended debate about how and why it has spread.

Now, a study co-authored by MIT economists sheds new light on these dynamics, examining the role that geography has played in the crisis. The results show how state-level policies inadvertently contributed to the rise of opioid addiction, and how addiction itself is a central driver of the long-term problem.

The research analyzes data about people who moved within the U.S., as a way of addressing a leading question about the crisis: How much of the problem is attributable to local factors, and to what extent do people have individual characteristics making them prone to opioid problems?

“We find a very large role for place-based factors, but that doesn’t mean there aren’t person-based factors as well,” says MIT economist Amy Finkelstein, co-author of a new paper detailing the study’s findings. “As is usual, it’s rare to find an extreme answer, either one or the other.”

In scrutinizing the role of geography, the scholars developed new insights about the spread of the crisis in relation to the dynamics of addiction. The study concludes that laws restricting pain clinics, or “pill mills,” where opioids were often prescribed, reduced risky opioid use by 5 percent over the 2006-2019 study period. Due to the path of addiction, enacting those laws near the onset of the crisis, in the 1990s, could have reduced risky use by 30 percent over that same time.

“What we do find is that pill mill laws really matter,” says MIT PhD student Dean Li, a co-author of the paper. “The striking thing is that they mattered a lot, and a lot of the effect was through transitions into opioid addiction.”

The paper, “What Drives Risky Prescription Opioid Use: Evidence from Migration,” appears in the Quarterly Journal of Economics. The authors are Finkelstein, who is the John and Jennie S. MacDonald Professor of Economics; Matthew Gentzkow, a professor of economics at Stanford University; and Li, a PhD student in MIT’s Department of Economics.

The opioid crisis, as the scholars note in the paper, is one of the biggest U.S. health problems in recent memory. As of 2017, there were more than twice as many U.S. deaths from opioids as from homicide. There were also at least 10 times as many opioid deaths compared to the number of deaths from cocaine during the 1980s-era crack epidemic in the U.S.

Many accounts and analyses of the crisis have converged on the increase in medically prescribed opioids starting in the 1990s as a crucial part of the problem; this was in turn a function of aggressive marketing by pharmaceutical companies, among other things. But explanations of the crisis beyond that have tended to fracture. Some analyses emphasize the personal characteristics of those who fall into opioid use, such as a past history of substance use, mental health conditions, age, and more. Other analyses focus on place-based factors, including the propensity of area medical providers to prescribe opioids.

To conduct the study, the scholars examined data on prescription opioid use from adults in the Social Security Disability Insurance program from 2006 to 2019, covering about 3 million cases in all. They defined “risky” use as an average daily morphine-equivalent dose of more than 120 milligrams, which has been shown to increase drug dependence.
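As an illustration of how such a flag might be computed (the study's actual data pipeline is not shown in the article), the sketch below converts hypothetical prescription records to morphine milligram equivalents, averages them over an observation window, and applies the 120-milligram threshold. The conversion factors are commonly published values and the record format is made up for illustration.

```python
# Illustrative sketch of a "risky use" flag: convert each prescription to
# morphine milligram equivalents (MME), average over the observation window,
# and mark an average daily dose above 120 mg as risky. Conversion factors
# are commonly published values; the record format is hypothetical.
MME_FACTOR = {"morphine": 1.0, "hydrocodone": 1.0, "oxycodone": 1.5}

def average_daily_mme(prescriptions, window_days):
    """prescriptions: list of (drug, mg_per_dose, doses_per_day, days_supplied)."""
    total = sum(MME_FACTOR[drug] * mg * per_day * days
                for drug, mg, per_day, days in prescriptions)
    return total / window_days

rx = [("oxycodone", 30, 3, 90), ("hydrocodone", 10, 2, 30)]
avg = average_daily_mme(rx, window_days=90)
print(f"average daily dose: {avg:.0f} MME -> risky: {avg > 120}")
```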

By studying people who move, the scholars created a kind of natural experiment — Finkelstein has also used this same method to examine questions about disparities in health care costs and longevity across the U.S. In this case, by focusing on the opioid consumption patterns of the same people as they lived in different places, the scholars could disentangle the extent to which place-based and personal factors drive usage.

Overall, the study found a somewhat greater role for place-based factors than for personal characteristics in accounting for the drivers of risky opioid use. To see the magnitude of place-based effects, consider someone moving to a state with a 3.5 percentage point higher rate of risky use — akin to moving from the state with the 10th lowest rate of risky use to the state with the 10th highest rate. On average, that person’s probability of risky opioid use would increase by a full percentage point in the first year, then by 0.3 percentage points in each subsequent year.

Some of the study’s key findings involve the precise mechanisms at work beneath these top-line numbers.

In the research, the scholars examine what they call the “addiction channel,” in which opioid users fall into addiction, and the “availability channel,” in which the already-addicted find ways to sustain their use. Over the 2006-2019 period, they find, people falling into addiction through new prescriptions had an impact on overall opioid uptake that was 2.5 times as large as that of existing users getting continued access to prescribed opioids.

When people who are not already risky users of opioids move to places with higher rates of risky opioid use, Finkelstein observes, “One thing you can see very clearly in the data is that in the addiction channel, there’s no immediate change in behavior, but gradually as they’re in this new place you see an increase in risky opioid use.”

She adds: “This is consistent with a model where people move to a new place, have a back problem or car accident and go to a hospital, and if the doctor is more likely to prescribe opioids, there’s more of a risk they’re going to become addicted.”

By contrast, Finkelstein says, “If we look at people who are already risky users of opioids and they move to a new place with higher rates of risky opioid use, you see there’s an immediate increase in their opioid use, which suggests it’s just more available. And then you also see the gradual increase indicating more addiction.”

By looking at state-level policies, the researchers found this trend to be particularly pronounced in over a dozen states that lagged in enacting restrictions on pain clinics, or “pill mills,” where providers had more latitude to prescribe opioids.

In this way the research does not just evaluate the impact of place versus personal characteristics; it quantifies the problem of addiction as an additional dimension of the issue. While many analyses have sought to explain why people first use opioids, the current study reinforces the importance of preventing the onset of addiction, especially because addicted users may later seek out nonprescription opioids, exacerbating the problem even further.

“The persistence of addiction is a huge problem,” Li says. “Even after the role of prescription opioids has subsided, the opioid crisis persists. And we think this is related to the persistence of addiction. Once you have this set in, it’s so much harder to change, compared to stopping the onset of addiction in the first place.”

Research support was provided by the National Institute on Aging, the Social Security Administration, and the Stanford Institute for Economic Policy Research.


Injectable antenna could safely power deep-tissue medical implants

The technology would allow battery-free, minimally invasive, scalable bioelectronic implants such as pacemakers, neuromodulators, and body process monitors.


Researchers from the MIT Media Lab have developed an antenna — about the size of a fine grain of sand — that can be injected into the body to wirelessly power deep-tissue medical implants, such as pacemakers in cardiac patients and neuromodulators in people suffering from epilepsy or Parkinson’s disease.

“This is the next major step in miniaturizing deep-tissue implants,” says Baju Joy, a PhD student in the Media Lab’s Nano-Cybernetic Biotrek research group. “It enables battery-free implants that can be placed with a needle, instead of major surgery.”

A paper detailing this work was published in the October issue of IEEE Transactions on Antennas and Propagation. Joy is joined on the paper by lead author Yubin Cai, a PhD student at the Media Lab; Benoît X. E. Desbiolles and Viktor Schell, former MIT postdocs; Shubham Yadav, an MIT PhD student in media arts and sciences; David C. Bono, an instructor in the MIT Department of Materials Science and Engineering; and senior author Deblina Sarkar, the AT&T Career Development Associate Professor at the Media Lab and head of the Nano-Cybernetic Biotrek group.

Deep-tissue implants are currently powered either with a several-centimeters-long battery that is surgically implanted in the body, requiring periodic replacement, or with a surgically placed magnetic coil, also of a centimeter-scale size, that can harvest power wirelessly. The coil method functions only at high frequencies, which can cause tissue heating, limiting how much power can be safely delivered to the implant when miniaturized to sub-millimeter sizes.

“After that limit, you start damaging the cells,” says Joy.

As the team states in its IEEE Transactions on Antennas and Propagation paper, “developing an antenna at ultra-small dimensions (less than 500 micrometers) which can operate efficiently in the low-frequency band is challenging.”

The 200-micrometer antenna — developed through research led by Sarkar — operates at low frequencies (109 kHz) thanks to a novel technology in which a magnetostrictive film, which deforms when a magnetic field is applied, is laminated with a piezoelectric film, which converts deformation to electric charge. When an alternating magnetic field is applied, magnetic domains within the magnetostrictive film contort it in the same way that a piece of fabric interwoven with pieces of metal would contort if subjected to a strong magnet. The mechanical strain in the magnetostrictive layer causes the piezoelectric layer to generate electric charges across electrodes placed above and below.

“We are leveraging this mechanical vibration to convert the magnetic field to an electric field,” Joy says.
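As a rough illustration of that conversion chain (not the device's measured performance), the sketch below estimates the open-circuit voltage of a magnetoelectric laminate from an assumed magnetoelectric coefficient, an assumed film thickness, and an assumed applied field. Every number here is a placeholder for illustration.

```python
# Rough, illustrative estimate (all values assumed, not measured from the
# Media Lab device): an AC magnetic field strains the magnetostrictive film,
# the strain drives the piezoelectric film, and a voltage appears across the
# electrodes. Laminate performance is often summarized by a magnetoelectric
# coefficient alpha reported in V/(cm*Oe).
alpha_me = 5.0             # assumed magnetoelectric coefficient, V/(cm*Oe)
film_thickness_cm = 2e-4   # assumed 2-micrometer piezoelectric layer thickness
h_field_oe = 10.0          # assumed AC field from the external patch, in oersted

open_circuit_voltage = alpha_me * film_thickness_cm * h_field_oe
print(f"estimated open-circuit output: {open_circuit_voltage * 1e3:.1f} mV")  # ~10 mV
```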

Sarkar says the newly developed antenna delivers four to five orders of magnitude more power than implantable antennas of similar size that rely on metallic coils and operate in the GHz frequency range.

“Our technology has the potential to introduce a new avenue for minimally invasive bioelectric devices that can operate wirelessly deep within the human body,” she says.

The magnetic field that activates the antenna is provided by a device similar to a rechargeable wireless cell phone charger, and is small enough to be applied to the skin as a stick-on patch or slipped into a pocket close to the skin surface.

Because the antenna is fabricated with the same technology as a microchip, it can be easily integrated with already-existing microelectronics.

“These electronics and electrodes can be easily made to be much smaller than the antenna itself, and they would be integrated with the antenna during nanofabrication,” Joy says, adding that the researchers’ work leverages 50 years of research and development applied to making transistors and other electronics smaller and smaller. “The other components can be tiny, and the entire system can be placed with a needle injection.”

Manufacture of the antennas could be easily scaled up, the researchers say, and multiple antennas and implants could be injected to treat large areas of the body.

Another possible application of this antenna, in addition to pacemaking and neuromodulation, is glucose sensing in the body. Circuits with an optical sensor for detecting glucose already exist, but the approach would benefit greatly from a wireless power supply that can be noninvasively integrated inside the body.

“That’s just one example,” Joy says. “We can leverage all these other techniques that are also developed using the same fabrication methods, and then just integrate them easily to the antenna.”


Study: Identifying kids who need help learning to read isn’t as easy as A, B, C

While most states mandate screenings to guide early interventions for children struggling with reading, many teachers feel underprepared to administer and interpret them.


In most states, schools are required to screen students as they enter kindergarten — a process that is meant to identify students who may need extra help learning to read. However, a new study by MIT researchers suggests that these screenings may not be working as intended in all schools.

The researchers’ survey of about 250 teachers found that many felt they did not receive adequate training to perform the tests, and about half reported that they were not confident that children who need extra instruction in reading end up receiving it.

When performed successfully, these screens can be essential tools to make sure children get the extra help they need to learn to read. However, the new findings suggest that many school districts may need to tweak how they implement the screenings and analyze the results, the researchers say.

“This result demonstrates the need to have a systematic approach for how the basic science on how children learn to read is translated into educational opportunity,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research.

Gabrieli is the senior author of the new open-access study, which appears today in Annals of Dyslexia. Ola Ozernov-Palchik, an MIT research scientist who is also a research assistant professor at Boston University Wheelock College of Education and Human Development, is the lead author of the study.

Boosting literacy

Over the past 20 years, national reading proficiency scores in the United States have trended up, but only slightly. In 2022, 33 percent of fourth-graders achieved reading proficiency, compared to 29 percent in 1992, according to the National Assessment of Educational Progress reading report card. (The highest level achieved in the past 20 years was 37 percent, in 2017.)

In hopes of boosting those rates, most states have passed laws requiring students to be screened for potential reading struggles early in elementary school. In most cases, the screenings are required two or three times per year, in kindergarten, first grade, and second grade.

These tests are designed to identify students who have difficulty with skills such as identifying letters and the sounds they make, blending sounds to make words, and recognizing words that rhyme. Students with low scores in these measures can then be offered extra interventions designed to help them catch up.

“The indicators of future reading disability or dyslexia are present as early as within the first few months of kindergarten,” Ozernov-Palchik says. “And there’s also an overwhelming body of evidence showing that interventions are most effective in the earliest grades.”

In the new study, the researchers wanted to evaluate how effectively these screenings are being implemented in schools. With help from the National Center for Improving Literacy, they posted on social media sites seeking classroom teachers and reading specialists who are responsible for administering literacy screening tests.

The survey respondents came from 39 states and represented public and private schools, located in urban, suburban, and rural areas. The researchers asked those teachers dozens of questions about their experience with the literacy screenings, including questions about their training, the testing process itself, and the results of the screenings.

One of the significant challenges reported by the respondents was a lack of training. About 75 percent reported that they received fewer than three hours of training on how to perform the screens, and 44 percent received no training at all or less than an hour of training.

“Under ideal conditions, there is an expert who trains the educators, they provide practice opportunities, they provide feedback, and they observe the educators administer the assessment,” Ozernov-Palchik says. “None of this was done in many of the cases.”

Instead, many educators reported that they spent their own time figuring out how to give the evaluations, sometimes working with colleagues. And, new hires who arrived at a school after the initial training was given were often left on their own to figure it out.

Another major challenge was suboptimal conditions for administering the tests. About 80 percent of teachers reported interruptions during the screenings, and 40 percent had to do the screens in noisy locations such as a school hallway. More than half of the teachers also reported technical difficulties in administering the tests, and that rate was higher among teachers who worked at schools with a higher percentage of students from low socioeconomic status (SES) backgrounds.

Teachers also reported difficulties when it came to evaluating students categorized as English language learners (ELL). Many teachers relayed that they hadn’t been trained on how to distinguish students who were having trouble reading from those who struggled on the tests because they didn’t speak English well.

“The study reveals that there’s a lot of difficulty understanding how to handle English language learners in the context of screening,” Ozernov-Palchik says. “Overall, those kids tend to be either over-identified or under-identified as needing help, but they’re not getting the support that they need.”

Unrealized potential

Most concerning, the researchers say, is that in many schools, the results of the screening tests are not being used to get students the extra help that they need. Only 44 percent of the teachers surveyed said that their schools had a formal process for creating intervention plans for students after the screening was performed.

“Even though most educators said they believe that screening is important to do, they’re not feeling that it has the potential to drive change the way that it’s currently implemented,” Ozernov-Palchik says.

In the study, the researchers recommended several steps that state legislatures or individual school districts can take to make the screening process run more smoothly and successfully.

“Implementation is the key here,” Ozernov-Palchik says. “Teachers need more support and professional development. There needs to be systematic support as they administer the screening. They need to have designated spaces for screening, and explicit instruction in how to handle children who are English language learners.”

The researchers also recommend that school districts train an individual to take charge of interpreting the screening results and analyzing the data, to make sure that the screenings are leading to improved success in reading.

In addition to advocating for those changes, the researchers are also working on a technology platform that uses artificial intelligence to provide more individualized instruction in reading, which could help students receive help in the areas where they struggle the most.

The research was funded by Schmidt Sciences, the Chan Zuckerberg Initiative for the Reach Every Reader project, and the Halis Family Foundation.


This is your brain without sleep

New research shows attention lapses due to sleep deprivation coincide with a flushing of fluid from the brain — a process that normally occurs during sleep.


Nearly everyone has experienced it: After a night of poor sleep, you don’t feel as alert as you should. Your brain might seem foggy, and your mind drifts off when you should be paying attention.

A new study from MIT reveals what happens inside the brain as these momentary failures of attention occur. The scientists found that during these lapses, a wave of cerebrospinal fluid (CSF) flows out of the brain — a process that typically occurs during sleep and helps to wash away waste products that have built up during the day. This flushing is believed to be necessary for maintaining a healthy, normally functioning brain.

When a person is sleep-deprived, it appears that their body attempts to catch up on this cleansing process by initiating pulses of CSF flow. However, this comes at a cost of dramatically impaired attention.

“If you don’t sleep, the CSF waves start to intrude into wakefulness where normally you wouldn’t see them. However, they come with an attentional tradeoff, where attention fails during the moments that you have this wave of fluid flow,” says Laura Lewis, the Athinoula A. Martinos Associate Professor of Electrical Engineering and Computer Science, a member of MIT’s Institute for Medical Engineering and Science and the Research Laboratory of Electronics, and an associate member of the Picower Institute for Learning and Memory.

Lewis is the senior author of the study, which appears today in Nature Neuroscience. MIT visiting graduate student Zinong Yang is the lead author of the paper.

Flushing the brain

Although sleep is a critical biological process, it’s not known exactly why it is so important. It appears to be essential for maintaining alertness, and it has been well-documented that sleep deprivation leads to impairments of attention and other cognitive functions.

During sleep, the cerebrospinal fluid that cushions the brain helps to remove waste that has built up during the day. In a 2019 study, Lewis and colleagues showed that CSF flow during sleep follows a rhythmic pattern in and out of the brain, and that these flows are linked to changes in brain waves during sleep.

That finding led Lewis to wonder what might happen to CSF flow after sleep deprivation. To explore that question, she and her colleagues recruited 26 volunteers who were tested twice — once following a night of sleep deprivation in the lab, and once when they were well-rested.

In the morning, the researchers monitored several different measures of brain and body function as the participants performed a task that is commonly used to evaluate the effects of sleep deprivation.

During the task, each participant wore an electroencephalogram (EEG) cap that could record brain waves while they were also in a functional magnetic resonance imaging (fMRI) scanner. The researchers used a modified version of fMRI that allowed them to measure not only blood oxygenation in the brain, but also the flow of CSF in and out of the brain. They also measured each subject’s heart rate, breathing rate, and pupil diameter.

The participants performed two attentional tasks while in the fMRI scanner, one visual and one auditory. For the visual task, they looked at a screen displaying a fixation cross. At random intervals, the cross would turn into a square, and the participants were told to press a button whenever they saw this happen. For the auditory task, they would hear a beep instead of seeing a visual transformation.
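The setup resembles a standard sustained-attention (vigilance) task. Purely as an illustration of the trial structure, and not the study’s actual stimulus code, a minimal console sketch might look like the following; the number of trials, the delay range, and the Enter-key response are arbitrary choices for this example.

```python
import random
import time

# Toy console version of a sustained-attention task:
# a fixation symbol is shown, and after a random delay it
# "changes" to a target; we time how long the response takes.
# (Illustrative only -- the actual study presented visual and
# auditory stimuli inside an fMRI scanner with an EEG cap.)

N_TRIALS = 5
reaction_times = []

for trial in range(N_TRIALS):
    print("+")                        # fixation cross
    time.sleep(random.uniform(2, 6))  # random wait before the change
    print("[]  <-- press Enter!")     # the cross "becomes" a square
    start = time.time()
    input()                           # participant responds
    rt = time.time() - start
    reaction_times.append(rt)
    print(f"trial {trial + 1}: reaction time = {rt:.3f} s")

print(f"mean RT: {sum(reaction_times) / len(reaction_times):.3f} s")
```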

Sleep-deprived participants performed much worse than well-rested participants on these tasks, as expected. Their response times were slower, and for some of the stimuli, the participants never registered the change at all.

During these momentary lapses of attention, the researchers identified several physiological changes that occurred at the same time. Most significantly, they found a flux of CSF out of the brain just as those lapses occurred. After each lapse, CSF flowed back into the brain.

“The results are suggesting that at the moment that attention fails, this fluid is actually being expelled outward away from the brain. And when attention recovers, it’s drawn back in,” Lewis says.

The researchers hypothesize that when the brain is sleep-deprived, it begins to compensate for the loss of the cleansing that normally occurs during sleep, even though these pulses of CSF flow come with the cost of attention loss.

“One way to think about those events is because your brain is so in need of sleep, it tries its best to enter into a sleep-like state to restore some cognitive functions,” Yang says. “Your brain’s fluid system is trying to restore function by pushing the brain to iterate between high-attention and high-flow states.”

A unified circuit

The researchers also found several other physiological events linked to attentional lapses, including decreases in breathing and heart rate, along with constriction of the pupils. They found that pupil constriction began about 12 seconds before CSF flowed out of the brain, and pupils dilated again after the attentional lapse.

“What’s interesting is it seems like this isn’t just a phenomenon in the brain, it’s also a body-wide event. It suggests that there’s a tight coordination of these systems, where when your attention fails, you might feel it perceptually and psychologically, but it’s also reflecting an event that’s happening throughout the brain and body,” Lewis says.

This close linkage between disparate events may indicate that there is a single circuit that controls both attention and bodily functions such as fluid flow, heart rate, and arousal, according to the researchers.

“These results suggest to us that there’s a unified circuit that’s governing both what we think of as very high-level functions of the brain — our attention, our ability to perceive and respond to the world — and then also really basic fundamental physiological processes like fluid dynamics of the brain, brain-wide blood flow, and blood vessel constriction,” Lewis says.

In this study, the researchers did not explore what circuit might be controlling this switching, but one good candidate, they say, is the noradrenergic system. Recent research has shown that this system, which regulates many cognitive and bodily functions through the neurotransmitter norepinephrine, oscillates during normal sleep.

The research was funded by the National Institutes of Health, a National Defense Science and Engineering Graduate Research Fellowship, a NAWA Fellowship, a McKnight Scholar Award, a Sloan Fellowship, a Pew Biomedical Scholar Award, a One Mind Rising Star Award, and the Simons Collaboration on Plasticity in the Aging Brain.


Studying war in the new nuclear age

MIT political scientist Caitlin Talmadge scrutinizes military postures and international dynamics to understand the risks of escalation.


Nuclear security can be a daunting topic: The consequences seem unimaginable, but the threat is real. Some scholars, though, thrive on the close study of the world’s most dangerous weapons. That includes Caitlin Talmadge PhD ’11, an MIT faculty member who is part of the Institute’s standout group of nuclear security specialists.

Talmadge, who joined the MIT faculty in 2023, has become a prominent scholar in security studies, conducting meticulous research about militaries’ on-the-ground capabilities and how they are influenced by political circumstances.

Earlier in her career, Talmadge studied the military capabilities of armies run by dictatorships. For much of the last decade, though, she has focused on specific issues of nuclear security: When can conventional wars raise risks of nuclear use? In what circumstances will countries ratchet up nuclear threats?

“A scenario that’s interested me a lot is one where the conduct of a conventional war actually raises specific nuclear escalation risks,” Talmadge says, noting that military operations may put pressure on an adversary’s nuclear capabilities. “There are many other instabilities in the world. But I’ve gotten pretty interested in what it means that the U.S., unlike in the Cold War when there was more of a bipolar competition, now faces multiple nuclear-armed adversaries.”

MIT is a natural intellectual home for Talmadge, who is the Raphael Dorman and Helen Starbuck Associate Professor in MIT’s Department of Political Science. She is also part of MIT’s Security Studies Program, long the home of several of the Institute’s nuclear experts, and a core member of the recently launched MIT Center for Nuclear Security Policy, which supports scholarship as well as engagement with nuclear security officials.

“I think dialogue for practitioners and scholars is important for both sides,” says Talmadge, who served on the Defense Policy Board, a panel of outside experts that directly advises senior Pentagon leaders, during the Biden administration. “It’s important for me to do scholarship that speaks to real-world problems. And part of what we do at MIT is train future practitioners. We also sometimes brief current practitioners, meet with them, and get a perspective on the very difficult problems they encounter. That interaction is mutually beneficial.”

Why coup-proofing hurts armies

From a young age, Talmadge was interested in global events, especially military operations, while growing up in a family that supported her curiosity about the world.

“I was fortunate to have parents that encouraged those interests,” Talmadge says. “Education was a really big value in our family. I had great teachers as well.”

Talmadge earned her BA degree at Harvard University, where her interests in international relations and military operations expanded.

“I didn’t even know the term security studies before I went to college,” she says. “But I did, in college, get very interested in studying the problems that had been left by the Soviet nuclear legacy.”

Talmadge then worked at a think tank before deciding to attend graduate school. She had not been fully set on academia, as opposed to, say, working in Washington policy circles. But while earning her PhD at the Institute, she recalls, “it turned out that I really liked research, and I really liked teaching. And I loved being at MIT.”

Talmadge is quick to credit MIT’s security studies faculty for their intellectual guidance, citing the encouragement of a slew of faculty, including Barry Posen (her dissertation advisor), Taylor Fravel, Roger Petersen, Cindy Williams, Owen Coté, and Harvey Sapolsky. Her dissertation examined the combat power of armies run by authoritarians.

That research became her 2015 book, “The Dictator’s Army: Battlefield Effectiveness in Authoritarian Regimes,” published by Cornell University Press. In it she examines how, for one thing, using a military for domestic “coup-proofing” limits its utility against external forces. In the Iran-Iraq war of the 1980s, to cite one example, Iraq’s military improved in the later years of the war, after coup-proofing measures were dropped, whereas Iran’s army performed worse over time as it became more preoccupied with domestic opposition.

“We tend to think of militaries as being designed for external conventional wars, but autocrats use the military for regime-protection tasks, and the more you optimize your military for doing that, sometimes it’s harder to aggregate combat power against an external adversary,” Talmadge says.

In the time since that book was published, even more examples have become evident in the world.

“It may be why the Russian invasion of Ukraine did so poorly in 2022,” she adds. “When you’re a personalist dictator and divide the military so it can’t be strong enough to overthrow you, and direct the intelligence apparatus internally instead of at Ukraine, it affects what your military can achieve. It was not the only factor in 2022, but I think the authoritarian character of Russia’s civil-military relations has played a role in Russia’s rather surprising underperformance in that war.”

On to nuclear escalation

After earning her PhD from MIT, Talmadge joined the faculty of George Washington University, where she taught from 2011 to 2018; she then served on the faculty at Georgetown University, before returning to MIT. And for the last decade, she has continued to study conventional military operations while also exploring the relationship between those operations and nuclear risk.

One issue is that conventional military strikes might degrade an opponent’s nuclear capabilities. Talmadge is examining why states adopt military postures that threaten adversaries in this way in a book that’s in progress; her co-author is Brendan Rittenhouse Green PhD ’11, a political scientist at the University of Cincinnati.

The book focuses on why the U.S. has at times adopted military postures that increase nuclear pressure on opponents. Historically these escalatory postures have been viewed as unintentional, the result of aggressive military planning.

“In this book we make a different argument, which is that often these escalatory risks are hardwired into force posture deliberately and knowingly by civilian [government leaders] who at times have strategic rationales,” Talmadge says. “If you’re my opponent and I want to deter you from starting a war, it might be helpful to convince you that if you start that war, you’re eventually going to be backed into a nuclear corner.”

This logic may explain why many countries adopt force postures that seem dangerous, and it may offer clues as to how future wars involving the U.S., Russia, China, North Korea, India, or Pakistan could unfold. It also suggests that reining in nuclear escalation risk requires more attention to civilian decisions, not just military behavior.

Amid research, book-writing, teaching, and engagement with others in the field, Talmadge is certain she has landed in an ideal academic home, especially with MIT’s work in her field being bolstered by the Stanton Foundation gift to establish the Center for Nuclear Security Policy.

“We’re so grateful for the support of the Stanton Foundation,” Talmadge says. “It’s incredibly invigorating to be in a place with so much talent and just constantly learning from the people around you. It’s really amazing, and I do not take it for granted.”

She adds: “It is a little surreal at times to be here because I’m going into the same rooms where I have memories of myself as a grad student, but now I’m the professor. I have a little bit of nostalgia. But one of my primary reasons for coming to MIT, besides the great faculty colleagues, was the students, including the chance to work with the PhD students in the Security Studies Program, and I have not been disappointed. It doesn’t feel like work. It’s a joy to try to have a positive influence helping them become scholars.”


Professor Ioannis Yannas, pioneer of regenerative medicine who invented artificial skin for the treatment of severe burns, dies at 90

A beloved member of the Department of Mechanical Engineering for nearly 60 years, Yannas helped save the lives of thousands of burn victims through his research and innovation.


Professor Ioannis V. Yannas SM ’59, a physical chemist and engineer known for the invention of artificial skin for the treatment of severe burns, and a longtime member of the MIT faculty, died on Oct. 19 at the age of 90.

“Professor Yannas was a beloved and distinguished colleague, teacher, and mentor. The impact of his inventions, and his legacy on the field of bioengineering was immense,” says John Hart, the Class of 1922 Professor and head of the Department of Mechanical Engineering. 

Yannas, known to friends and colleagues as Yanni, held appointments in the MIT Department of Mechanical Engineering and the Harvard-MIT Program in Health Sciences and Technology. His principal research interest throughout his career was the process of induced organ regeneration used to replace organs that are either severely injured or terminally diseased. His work also advanced the clinical use of collagen tubes to treat peripheral nerve injuries.

In 1969, when Yannas approached the late John Burke of Massachusetts General Hospital to collaborate, Burke took him on a tour of a children’s burn unit. “There was a great deal of human misery that was confronting me, and I felt I had to do something about it,” said Yannas in later interviews. In 1981, the pair announced their success: an amalgam of a silicone outer sheet over a scaffolding of molecular material drawn from cow tendon and shark cartilage. Offering protection from infection and dehydration, the scaffolding enabled healthy skin cells to grow. Their discovery would be transformative for the treatment of burn victims.

Their artificial skin, patented and now manufactured as Integra, is still widely used on patients with severe and extensive burns, and for other applications including some types of plastic surgery and the treatment of chronic skin wounds commonly suffered by people with diabetes. The groundbreaking advance, which was later recognized as the first example of organ regeneration in adults, had previously been considered impossible.

“Yanni’s boldness in attacking a wide array of medical problems, including spinal cord transection, in his investigations of applications of collagen-based implants, inspired others, including myself, to work toward solutions to devastating conditions such as blindness, stroke, and spinal cord injury,” says Myron Spector, professor emeritus of orthopedic surgery (biomaterials) at Massachusetts General Brigham and Harvard Medical School, and an affiliate of the Harvard-MIT Program in Health Sciences and Technology. Yannas and Spector created several MIT courses together, including 2.79 (Biomaterial-Tissue Interactions).

“As we were talking about the content [for 2.79], Yanni proposed that we codify the cell behavior underlying the tissue response to implants,” explains Spector. “Within a short time, we laid out the plan for ‘unit cell processes’ to offer students a code to decipher the often inconceivably complex cellular processes that not only underlie the tissue response to implants, but that can guide the selection of the tools necessary to engineer medical devices and reveal their targets for treatment. This was all Yanni, taking a fundamental concept, the control volume used in chemical engineering to analyze systems, and applying it to cellular processes in the human body. I since use UCPs myself all the time.”

Spector says that as a colleague and collaborator in teaching and research, Yannas was eager to help and to learn, bold in his thinking, smart in his choices, able to keep his eye on the goal, respectful of students as well as faculty and other colleagues, and selfless. “These are just the traits that we teach our students to look for when seeking the collaborators who are so necessary in science and engineering.”

Yannas was born on April 14, 1935, in Athens, Greece, where he completed his high school education at Athens College. He received a BA in chemistry at Harvard College in 1957, followed by an MS in chemical engineering from MIT in 1959. After a period of industrial research on polymers at W. R. Grace & Co., in Cambridge, Massachusetts, he attended Princeton University, where he completed an MS degree in 1965 and a PhD in 1966, both in physical chemistry. Yannas joined the MIT faculty immediately thereafter and remained at the Institute for the next 59 years until his passing.

For his discoveries in organ regeneration, Yannas was elected a member of the National Academy of Medicine (1987), the National Inventors Hall of Fame (2015), and the National Academy of Engineering (2017). He was also elected a fellow of the American Institute for Medical and Biological Engineering.

Further, he was the recipient of many prestigious awards including the Society for Biomaterials Founders Award (1982) and the Society’s Clemson Award for Applied Science and Engineering (1992). He was an author of numerous journal articles, and the sole author of the influential book, “Tissue and Organ Regeneration in Adults.”

Yannas’ work and his 2015 induction into the National Inventors Hall of Fame were the subject of “Hope Regenerated,” a video produced by the MIT Department of Mechanical Engineering. The film chronicles the development of Integra, which was initially characterized as a “failed experiment” but became a life-saving discovery that launched a new field of regenerative medicine.

“My father's relationship with MIT was deeply meaningful to him,” says Tania Yannas Kluzak. “He regarded MIT as the ideal partner in his life's work — pioneering lifesaving research in organ regeneration.”

Yannas was predeceased by his brother, Pavlos. He is survived by his two children, Tania Kluzak and her husband Gordon, and Alexi Yannas and his wife Maria; his grandchildren — Alexandra, Marina, Sophia, Philippos, and Nefeli; his sister, Elizabeth Sitinas; and many loving relatives and friends. A celebration of life will be announced at a later date. 


With a new molecule-based method, physicists peer inside an atom’s nucleus

An alternative to massive particle colliders, the approach could reveal insights into the universe’s starting ingredients.


Physicists at MIT have developed a new way to probe inside an atom’s nucleus, using the atom’s own electrons as “messengers” within a molecule.

In a study appearing today in the journal Science, the physicists precisely measured the energy of electrons whizzing around a radium atom that had been paired with a fluoride atom to make a molecule of radium monofluoride. They used the environments within molecules as a sort of microscopic particle collider, which contained the radium atom’s electrons and encouraged them to briefly penetrate the atom’s nucleus.

Typically, experiments to probe the inside of atomic nuclei involve massive, kilometers-long facilities that accelerate beams of electrons to speeds fast enough to collide with and break apart nuclei. The team’s new molecule-based method offers a table-top alternative to directly probe the inside of an atom’s nucleus.

Within molecules of radium monofluoride, the team measured the energies of a radium atom’s electrons as they pinged around inside the molecule. They discerned a slight energy shift and determined that electrons must have briefly penetrated the radium atom’s nucleus and interacted with its contents. As the electrons winged back out, they retained this energy shift, providing a nuclear “message” that could be analyzed to sense the internal structure of the atom’s nucleus.

The team’s method offers a new way to measure the nuclear “magnetic distribution.” In a nucleus, each proton and neutron acts like a small magnet, and they align differently depending on how the nucleus’ protons and neutrons are spread out. The team plans to apply their method to precisely map this property of the radium nucleus for the first time. What they find could help to answer one of the biggest mysteries in cosmology: Why do we see much more matter than antimatter in the universe?

“Our results lay the groundwork for subsequent studies aiming to measure violations of fundamental symmetries at the nuclear level,” says study co-author Ronald Fernando Garcia Ruiz, who is the Thomas A. Franck Associate Professor of Physics at MIT. “This could provide answers to some of the most pressing questions in modern physics.”

The study’s MIT co-authors include Shane Wilkins, Silviu-Marian Udrescu, and Alex Brinson, along with collaborators from multiple institutions including the Collinear Resonance Ionization Spectroscopy Experiment (CRIS) at CERN in Switzerland, where the experiments were performed.

Molecular trap

According to scientists’ best understanding, there must have been almost equal amounts of matter and antimatter when the universe first came into existence. However, the overwhelming majority of what scientists can measure and observe in the universe is made from matter, whose building blocks are the protons and neutrons within atomic nuclei.

This observation is in stark contrast to what our best theory of nature, the Standard Model, predicts, and it is thought that additional sources of fundamental symmetry violation are required to explain the almost complete absence of antimatter in our universe. Such violations could be seen within the nuclei of certain atoms such as radium.

Unlike most atomic nuclei, which are spherical in shape, the radium atom’s nucleus has a more asymmetrical configuration, similar to a pear. Scientists predict that this pear shape could significantly enhance their ability to sense the violation of fundamental symmetries, to the extent that such violations may become observable.

“The radium nucleus is predicted to be an amplifier of this symmetry breaking, because its nucleus is asymmetric in charge and mass, which is quite unusual,” says Garcia Ruiz, whose group has focused on developing methods to probe radium nuclei for signs of fundamental symmetry violation.

Peering inside the nucleus of a radium atom to investigate fundamental symmetries is an incredibly tricky exercise.

“Radium is naturally radioactive, with a short lifetime, and we can currently only produce radium monofluoride molecules in tiny quantities,” says study lead author Shane Wilkins, a former postdoc at MIT. “We therefore need incredibly sensitive techniques to be able to measure them.”

The team realized that by placing a radium atom in a molecule, they could contain and amplify the behavior of its electrons.

“When you put this radioactive atom inside of a molecule, the internal electric field that its electrons experience is orders of magnitude larger compared to the fields we can produce and apply in a lab,” explains Silviu-Marian Udrescu PhD ’24, a study co-author. “In a way, the molecule acts like a giant particle collider and gives us a better chance to probe the radium’s nucleus.”

Energy shift

In their new study, the team first paired radium atoms with fluoride atoms to create molecules of radium monofluoride. They found that in this molecule, the radium atom’s electrons were effectively squeezed, increasing the chance for electrons to interact with and briefly penetrate the radium nucleus.

The team then trapped and cooled the molecules and sent them through a system of vacuum chambers, into which they also sent lasers, which interacted with the molecules. In this way the researchers were able to precisely measure the energies of electrons inside each molecule.

When they tallied the energies, they found that the electrons appeared to have a slightly different energy compared to what physicists expect if they did not penetrate the nucleus. Although this energy shift was small — just a millionth of the energy of the laser photon used to excite the molecules — it gave unambiguous evidence of the molecules’ electrons interacting with the protons and neutrons inside the radium nucleus.
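To put that scale in rough numbers (an illustration only; the actual laser wavelength used in the experiment is not quoted here), a visible-range photon carries on the order of 2 electronvolts, so a shift of one part in a million corresponds to only a few microelectronvolts:

```latex
\Delta E \approx 10^{-6}\, E_{\text{photon}} \approx 10^{-6} \times 2\ \mathrm{eV} = 2\ \mu\mathrm{eV}
```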

“There are many experiments measuring interactions between nuclei and electrons outside the nucleus, and we know what those interactions look like,” Wilkins explains. “When we went to measure these electron energies very precisely, it didn’t quite add up to what we expected assuming they interacted only outside of the nucleus. That told us the difference must be due to electron interactions inside the nucleus.”

“We now have proof that we can sample inside the nucleus,” Garcia Ruiz says. “It’s like being able to measure a battery’s electric field. People can measure its field outside, but to measure inside the battery is far more challenging. And that’s what we can do now.”

Going forward, the team plans to apply the new technique to map the distribution of forces inside the nucleus. Their experiments have so far involved radium nuclei that sit in random orientations inside each molecule at high temperature. Garcia Ruiz and his collaborators would like to be able to cool these molecules and control the orientations of their pear-shaped nuclei such that they can precisely map their contents and hunt for the violation of fundamental symmetries.

“Radium-containing molecules are predicted to be exceptionally sensitive systems in which to search for violations of the fundamental symmetries of nature,” Garcia Ruiz says. “We now have a way to carry out that search.”

This research was supported, in part, by the U.S. Department of Energy. 


At MIT, a day of hands-on, kid-friendly learning

Organized by the MIT Museum, the 2025 Cambridge Science Carnival included activities with air cannons, sea bots, and electron microscopes.


Back and better than ever, the Cambridge Science Carnival, an annual free family-friendly science extravaganza, was held on Sunday, Sept. 21, at the Kendall/MIT Open Space.

Founded by the MIT Museum in 2007, and organized with the support of MIT and the City of Cambridge, the 2025 event drew approximately 20,000 attendees and featured more than 140 activities, demonstrations, and installations tied to the topics of science, technology, engineering, arts, and mathematics (STEAM).

Among the carnival’s wide variety of activities was the popular robot petting zoo, an annual showcase involving more than a dozen companies and local robotics clubs, including FIRST Tech Challenge and FIRST Robotics Competition. Participants were invited to engage with a range of different robots, from building with LEGOs and erector sets to piloting underwater robots to learning about the science of automation.

“Every exhibit and every moment of discovery today reinforces why Cambridge remains a global leader in STEAM,” Cambridge Mayor Denise Simmons said in her remarks at the event. “The creativity, ingenuity, and joy on display here today are a powerful reminder that science isn’t just for labs and lecture halls — it’s for everyone.”

Other activities included an appearance from the popular kid-friendly podcast “Tumble Science,” with co-host Marshall Escamilla testing fans’ knowledge of different STEAM topics drawn from “Tumble Science.” Clark University’s smoke-ring air cannons were a particular hit with the under-7-year-old set, while “Cycle To Science” showed off a gravity-defying bicycle wheel that, while spinning, was suspended on one side by a simple piece of string. Attendees also enjoyed live music, food trucks, and activities exploring everything from pipette art to the chemistry of glass. 

At the robot petting zoo, FIRST Robotics volunteer mentor Dominique Regli reflected on the event as someone who was herself first inspired by similar festivals more than a decade earlier. 

“Seeing kids of all ages interact with the robots made me think back to when I was a seventh grader, and how getting to see some of these robots for the first time was truly life-changing for me,” said Regli, who has been involved with FIRST Robotics since 2018 and is now an MIT computer science PhD student and affiliate of the Computer Science and Artificial Intelligence Laboratory (CSAIL). “These types of events are so important to expose students to what's possible.”

Throughout its history, a key aspect of the carnival has been MIT’s close collaboration with the City of Cambridge, which ran several activities. Cambridge Public Schools teachers led, and the Public Works Department hosted, a “Trash or Treasure” activity that helped teach kids about recycling and composting. The carnival is a major contribution to the Institute’s objective of connecting the MIT ecosystem with Cambridge residents and local communities.

“Cambridge is one of the world’s leading science cities, with more Nobel laureates per capita than any other city on the planet,” says Michael John Gorman, director of the MIT Museum. “The Cambridge Science Carnival is a beloved day in the Cambridge calendar which brings science out of the labs and onto the streets.” 

While the carnival focuses on engaging families and kids from kindergarten through eighth grade, one important outcome this year was giving undergraduate and graduate students the opportunity to showcase their work and hone their skills in clearly communicating science concepts to the public. More than 50 activities were led by MIT students, along with participants from other local schools such as Boston College and Boston, Clark, Harvard, Northeastern, and Tufts universities.

Typically organized as part of the annual Cambridge Science Festival, this year the Cambridge Science Carnival returned as a standalone event while the larger festival undergoes a strategic transition for its relaunch in 2026. The MIT Museum offered free admission during the carnival and is always free to Cambridge residents, as well as active military, EBT cardholders, members of the Massachusetts Teachers Association, and MIT ID holders.

“For MIT researchers, discovery often happens in a lab or a classroom, but the truth is, the spark of discovery can happen anywhere,” said Alfred Ironside, MIT vice president for communications, in remarks at the event. “That’s really what today is about: feeding curiosity, encouraging questions, and showing that science is not locked away behind closed doors. It’s for everyone.”


Startup’s tablets deliver cancer drugs more evenly over time

An MIT team’s technology could allow cancer drugs to be delivered more steadily into the bloodstream, to improve effectiveness and reduce side effects.


Pills are by far the most convenient form of cancer treatment, but most oral cancer drugs quickly dissolve in the stomach, delivering a burst of chemicals into the bloodstream all at once. That can cause side effects. It also may limit the drug’s effectiveness because its concentration in the blood may become too low after the initial burst.

Now, the startup Enzian Pharmaceutics, founded by Aron Blaesi PhD ’14 and former principal research scientist Nannaji Saka ScD ’74, is developing an oral tablet that delivers drugs into the gastric fluid and the blood steadily over time. The company’s tablets use tiny 3D-printed fibers that turn into a gel-like substance when exposed to water. The tablets have been shown to stay in the stomach of animals for up to a day, slowly degrading while releasing the drug in controlled quantities.

The company is currently validating its tablets’ ability to stay in place in a small number of healthy human volunteers. In about a year, it plans to begin testing the technology’s ability to improve the effectiveness and safety of cancer drugs in patients.

“A lot of orally delivered cancer drugs could benefit from this,” says Blaesi, who incorporated the company in 2016. “Right now, soon after someone has taken a cancer drug, its concentration in the blood can be up to 50 times greater than when they are supposed to take the next pill. During the peak, the drug goes into the heart, it goes into the liver, the brain, and it can cause a lot of problems, while at the end of the dosing interval the concentration in the blood may be too low. By taking out that peak and increasing the time the drug is released, we could improve the effectiveness of treatments and mitigate certain side effects.”
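To see why a steadier release flattens that swing, consider a toy one-compartment pharmacokinetic sketch comparing an immediate burst with a slow, constant release. The dose, distribution volume, elimination rate, and release duration below are arbitrary illustrative values, not Enzian’s numbers or model.

```python
import math

# Toy one-compartment model with first-order elimination.
# Illustrative parameters only (not from Enzian's work).
DOSE = 100.0      # mg
VOLUME = 40.0     # L, apparent volume of distribution
K_ELIM = 0.3      # 1/h, elimination rate constant
INTERVAL = 12.0   # h, dosing interval

def conc_bolus(t):
    """Immediate release: the entire dose is absorbed at t = 0."""
    return (DOSE / VOLUME) * math.exp(-K_ELIM * t)

def conc_zero_order(t, release_h=12.0):
    """Extended release: the dose enters at a constant rate over release_h hours."""
    rate = DOSE / release_h  # mg per hour
    if t <= release_h:
        return rate / (VOLUME * K_ELIM) * (1 - math.exp(-K_ELIM * t))
    # After release ends, the remaining drug decays exponentially.
    c_end = rate / (VOLUME * K_ELIM) * (1 - math.exp(-K_ELIM * release_h))
    return c_end * math.exp(-K_ELIM * (t - release_h))

times = [i * 0.5 for i in range(int(INTERVAL * 2) + 1)]
for label, f in [("burst", conc_bolus), ("steady", conc_zero_order)]:
    cs = [f(t) for t in times]
    print(f"{label:>6}: peak = {max(cs):.2f} mg/L, "
          f"trough at {INTERVAL:.0f} h = {f(INTERVAL):.2f} mg/L, "
          f"peak/trough = {max(cs) / f(INTERVAL):.1f}")
```

With these made-up parameters, the burst profile swings by a factor of roughly 35 between its peak and its level at the next dose, while the slow-release profile ends the dosing interval essentially at its peak.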

In search of innovation

When Blaesi came to MIT, he knew he wanted his mechanical engineering PhD work to form the basis of a company. Early on, as part of the Novartis-MIT Center for Continuous Manufacturing, he worked on manufacturing pills with an injection molding machine that melted and solidified the material, in contrast to the traditional process of compacting powder. He noticed injection molding made the pills far less porous.

“If you put a typical pill into a fluid or into the stomach, the fluid percolates the pores and quickly dissolves it,” Blaesi explains. “That’s not the case when you have an injection molded product. That’s when Dr. Saka, who I met almost daily to discuss my research with, and I started to realize that microstructure is very important.”

The researchers began exploring how different tablet microstructures changed the rate at which drugs are released. For more precision, they moved from injection molding to 3D printing.

Using MIT machine shops, Blaesi built a 3D printer and produced tightly wound microstructures that could carry the drugs. He focused on fibrous structures with space between the fibers, because they would allow gastrointestinal fluid to percolate through the pill and dissolve it rapidly. He tested the structures in both his Cambridge, Massachusetts, apartment and at MIT’s shared facilities.

Blaesi then experimented with different carrier materials, finding that the higher the molecular weight, the longer it took the pill to dissolve because the material would absorb water and expand before degrading.

“Initially I thought, ‘Oh no, the drug isn’t being dissolved fast enough anymore,’” Blaesi recalls. “Then we thought, ‘Everything has its place.’ This could stay in the stomach for longer because of the expansion. Then it could release the drug over time. We realized this wouldn’t just improve manufacturing, it would improve the product.”

In 2019, Blaesi and Saka published the first paper on their expandable fibrous tablets for prolonged drug delivery. It received a mixed reception.

“Some reviewers said, ‘Research on similar gastroretentive dosage forms has been done for 40 years and no one’s really succeeded,’” Blaesi recalls. “People said, ‘It will never work. Do experiments in animals and then we’ll talk.’”

Blaesi moved back to Switzerland during the Covid-19 pandemic and ran his animal experiments there.

“The reviewers were right: What we had didn’t work,” Blaesi says. “But we adjusted the design and showed we could make the pill stay in the stomach for longer.”

Inside Enzian’s final tablet design, tiny fibers are arranged in a grid. When water flows into the spaces between the fibers, they expand to form a strong gel-like substance that slowly erodes in the stomach, steadily releasing the drug. In animal studies, Enzian’s team showed its technology allowed tablets to remain in the stomach for 12 to 24 hours before being safely excreted.

The team soon found cancer drugs would be a good fit for their technology.

“A lot of cancer drugs are only soluble in acidic solutions, so they can only be absorbed while the drug is in the stomach,” Blaesi explains. “But on an empty stomach, the drug may be in the stomach for just 30 or 40 minutes at present. For a full stomach, it’s a few hours. And because you have a short time to deliver the drug, you need to release a high dose immediately. That shoots up the blood concentration, and if you dose every 12 hours, the concentration is going down during the other 10 hours.”

From the lab to patients

In upcoming human trials, Enzian plans to use its tablets to deliver a drug for prostate cancer that Blaesi says is currently dosed at several hundred milligrams a day. He hopes to get down to about a tenth of that with a better therapeutic effect.

Enzian also believes its technology could improve treatments for blood, skin, and breast cancers.

“This could really be used to improve treatment for a variety of cancers,” Blaesi says. “We believe this is a more efficient and effective way to deliver drugs.”

Maximizing effectiveness and minimizing side effects is also important in clinical trials, where a new drug’s superiority over existing treatments must be shown, and a single adverse event can end its development.

The upcoming move into patients is the culmination of more than a decade of work for Blaesi, who is confident Enzian can deliver on its promise of improving treatments.

“The opportunity is enormous,” Blaesi says. “So many oral cancer drugs have this delivery problem. We still have to do the efficacy and safety studies on patients, but we expect this to be a game changer.”


Five with MIT ties elected to National Academy of Medicine for 2025

Professors Facundo Batista and Dina Katabi, along with three additional MIT alumni, are honored for their outstanding professional achievement and commitment to service.


On Oct. 20 during its annual meeting, the National Academy of Medicine announced the election of 100 new members, including MIT faculty members Dina Katabi and Facundo Batista, along with three additional MIT alumni.

Election to the National Academy of Medicine (NAM) is considered one of the highest honors in the fields of health and medicine, recognizing individuals who have demonstrated outstanding professional achievement and commitment to service.

Facundo Batista is the associate director and scientific director of the Ragon Institute of MGH, MIT and Harvard, as well as the first Phillip T. and Susan M. Ragon Professor in the MIT Department of Biology. The National Academy of Medicine recognized Batista for “his work unraveling the biology of antibody-producing B cells to better understand how our body’s immune system responds to infectious disease.” More recently, Batista’s research has advanced preclinical vaccine and therapeutic development for globally important diseases including HIV, malaria, and influenza.

Batista earned a PhD from the International School of Advanced Studies and established his lab in 2002 as a member of the Francis Crick Institute (formerly the London Research Institute), simultaneously holding a professorship at Imperial College London. In 2016, he joined the Ragon Institute to pursue a new research program applying his expertise in B cells and antibody responses to vaccine development, and preclinical vaccinology for diseases including SARS-CoV-2 and HIV. Batista is an elected fellow or member of the U.K. Academy of Medical Sciences, the American Academy of Microbiology, the Academia de Ciencias de América Latina, and the European Molecular Biology Organization, and he is chief editor of The EMBO Journal.

Dina Katabi SM ’99, PhD ’03 is the Thuan (1990) and Nicole Pham Professor in the Department of Electrical Engineering and Computer Science at MIT. Her research spans digital health, wireless sensing, mobile computing, machine learning, and computer vision. Katabi’s contributions include efficient communication protocols for the internet, advanced contactless biosensors, and novel AI models that interpret physiological signals. The NAM recognized Katabi for “pioneering digital health technology that enables non-invasive, off-body remote health monitoring via AI and wireless signals, and for developing digital biomarkers for Parkinson’s progression and detection. She has translated this technology to advance objective, sensitive measures of disease trajectory and treatment response in clinical trials.”

Katabi is director of the MIT Center for Wireless Networks and Mobile Computing. She is also a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL), where she leads the Networks at MIT Research Group. Katabi received a bachelor’s degree from the University of Damascus and MS and PhD degrees in computer science from MIT. She is a MacArthur Fellow; a member of the American Academy of Arts and Sciences, National Academy of Sciences, and National Academy of Engineering; and a recipient of the ACM Prize in Computing.

Additional MIT alumni who were elected to the NAM for 2025 are:

Established originally as the Institute of Medicine in 1970 by the National Academy of Sciences, the National Academy of Medicine addresses critical issues in health, science, medicine, and related policy, and inspires positive actions across sectors.

“I am deeply honored to welcome these extraordinary health and medicine leaders and researchers into the National Academy of Medicine,” says NAM President Victor J. Dzau. “Their demonstrated excellence in tackling public health challenges, leading major discoveries, improving health care, advancing health policy, and addressing health equity will critically strengthen our collective ability to tackle the most pressing health challenges of our time.” 


A “seating chart” for atoms helps locate their positions in materials

The DIGIT imaging tool could enable the design of quantum devices and shed light on atomic-scale processes in cells and tissues.


If you think of a single atom as a grain of sand, then a wavelength of visible light — which is a thousand times larger than the atom’s width — is comparable to an ocean wave. The light wave can dwarf an atom, missing it entirely as it passes by. This gulf in size has long made it impossible for scientists to see and resolve individual atoms using optical microscopes alone.

Only recently have scientists found ways to break this “diffraction limit,” to see features that are smaller than the wavelength of light. With new techniques known as super-resolution microscopy, scientists can see down to the scale of a single molecule.

And yet, individual atoms have still been too small for optical microscopes — which are much simpler and less expensive than super-resolution techniques — to distinguish, until now.

In an open-access paper appearing today in Nature Communications, MIT scientists present a new computational method that enables optical microscopes to resolve individual atoms and zero in on their exact locations in a crystal structure.

The team’s new “discrete grid imaging technique,” or DIGIT, is a computational imaging approach that scientists can apply to optical data to calculate the most probable location of individual atoms based on a very important clue: the material’s known atomic configuration. As long as scientists have an idea of what a material’s physical atomic layout should be, they can use this layout as a sort of map to determine where specific atoms or features must be located.

“It’s like you know there’s a seating chart,” says lead author Yuqin “Sophia” Duan, a graduate student in MIT’s Department of Electrical Engineering and Computer Science (EECS). “Previous methods could tell you what section an atom is in. But now we can take this seating chart as prior knowledge, and can pinpoint exactly which seat the atom is in.”

With DIGIT, the team can now pinpoint individual atoms with a resolution of 0.178 angstroms. (One angstrom is one-tenth of a nanometer, which is less than half the width of a single atom.) The technique enables optical microscopes to localize atomic-scale features in any material that has a known atomic pattern, such as crystalline materials or certain proteins with repeating molecular chains.

The team says the method could help guide the design of quantum devices, which often require placing individual atoms precisely within a crystal. Beyond quantum technologies, DIGIT can also provide new insights into how defects and impurities shape the behavior of advanced materials — from semiconductors to superconductors.

Duan’s co-authors at MIT are Qiushi Gu, Hanfeng Wang, Yong Hu, Kevin Chen, Matthew Trusheim, and EECS Professor Dirk Englund.

Grid support

Scientists can image features smaller than a nanometer, and sometimes as small as a single atom, but not with optical microscopes. In these cases, they use transmission or scanning electron microscopes, which send high-energy beams of electrons into a sample to generate an image based on the pattern in which the electrons scatter. These electron-based methods produce highly detailed, near-atomic-scale images, but they require imaging in a vacuum and at high energies, and only work in ultrathin, synthetic, or solid-state materials. Electron-based imaging methods are too harsh for more delicate living specimens.

In contrast, optical microscopes work at lower energies, in ambient conditions, and are safe to apply to biological samples. But they cannot discern features past the diffraction limit. Essentially, a microscope cannot resolve features smaller than about half the wavelength of the visible light (roughly 200 to 300 nanometers) that it sends in to probe a sample. Atoms, then, have long eluded optical microscopes.
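A common way to state this limit is the Abbe criterion. As a rough back-of-the-envelope illustration (assuming green light of about 500 nanometers and a numerical aperture near 1, values chosen only for this example):

```latex
d \approx \frac{\lambda}{2\,\mathrm{NA}} \approx \frac{500\ \mathrm{nm}}{2 \times 1} = 250\ \mathrm{nm}
```

That is roughly a thousand times larger than a typical atomic diameter of 0.1 to 0.3 nanometers.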

In 2014, however, the Nobel Prize in Chemistry was awarded to developers of a technique to overcome the diffraction limit. Super-resolution microscopy works by shining laser light on a sample at a specific frequency that is known to resonate with a feature of interest, such as a certain molecule. When that molecule resonates, it effectively announces its presence in the material. With this optical manipulation, scientists can visualize features as small as 10 nanometers, on the scale of a single molecule.

Duan and Englund looked to resolve even smaller features by combining super-resolution techniques with statistical analysis and knowledge of materials that has often been overlooked.

“One thing that gets ignored in imaging optical systems is the physical configuration of your system,” Duan says. “For example, if you want to visualize defects in a diamond system, these defects can only be at certain positions, since they have to follow the grid of the atomic diamond structure. In proteins, there are some structures that grow in an organized grid, and their location must be somewhere along that physical grid.”

The researchers suspected that if they had a reasonably accurate map of a material’s atomic structure (imagine the ball-and-stick models of molecules in a chemistry classroom), they might use such maps as a template and try out many different orientations and rotation angles to find the closest match to whatever features are initially visualized using super-resolution microscopy.

“No one has ever done this before, to include the physical constraints or system information into the resolution technique,” Duan says.

Blurriness, collapsed

To test their idea, the researchers worked with a sample of diamond — a crystal whose microstructure is well-understood and resembles an organized grid, or lattice, of repeating carbon atoms. The researchers blindly knocked out some carbon atoms in the lattice and replaced them with silicon atoms using facilities at MIT.nano. Their goal was to identify and determine the precise locations of the errant silicon atoms.

To do so, they first used established techniques of super-resolution microscopy to probe the diamond sample, using lasers tuned to wavelengths known to resonate with the silicon atoms but not the carbon atoms. With this technique, the researchers produced images that depicted the silicon atoms, but only as a uniform blur.

The team then applied DIGIT to further resolve the picture. Knowing that diamond in general has a grid-like configuration of carbon atoms, the researchers took this configuration as a map, or seating chart of sorts, and assumed that any silicon atoms that took the place of a carbon atom must sit within the grid, which has a known spacing between atoms.

“Because the silicon atoms are substituting carbon atoms in the lattice, that means they must obey some integer multiple of the atomic spacing of the crystal lattice, separating any two silicon atoms,” Englund says. “That prior knowledge makes the localization different than if you had a purely amorphous material.”

The researchers essentially simulated many possibilities of orientations and rotation angles of the diamond lattice, superimposed on the blurry image of atoms that the super-resolution microscopy technique produced.

“The trick is that, in certain materials, atoms aren’t spread out randomly — they sit on a grid inside a crystal,” Duan explains. “We used that prior knowledge to sharpen the microscope’s picture. Once we factored in that ‘atomic grid,’ the blurriness collapsed, and we could pinpoint exact positions.”
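Purely as a toy illustration of that idea (and not the team’s DIGIT code, method, or parameters), the sketch below fits a known square lattice to a handful of noisy 2D localizations and then snaps each estimate to the nearest allowed site; the lattice spacing, noise level, and brute-force search are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2D grid-constrained localization: emitters sit on a square lattice
# with known spacing but unknown rotation/offset; "super-resolution"
# gives only noisy position estimates.
SPACING = 3.57   # lattice constant, arbitrary units for this toy example
true_theta, true_shift = 0.31, np.array([1.2, -0.7])

def lattice_points(theta, shift, n=4):
    """Generate an n x n patch of lattice sites, rotated and shifted."""
    i, j = np.meshgrid(np.arange(n), np.arange(n))
    pts = SPACING * np.stack([i.ravel(), j.ravel()], axis=1)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return pts @ rot.T + shift

# Simulate a few emitters on the true lattice, blurred by localization noise.
sites = lattice_points(true_theta, true_shift)
emitters = sites[rng.choice(len(sites), size=5, replace=False)]
noisy = emitters + rng.normal(scale=0.4, size=emitters.shape)

def fit_cost(theta, shift):
    """Sum of squared-root distances from each noisy estimate to its nearest site."""
    grid = lattice_points(theta, shift)
    d = np.linalg.norm(noisy[:, None, :] - grid[None, :, :], axis=2)
    return d.min(axis=1).sum()

# Brute-force search over orientation and offset (a real method would be smarter).
thetas = np.linspace(0, np.pi / 2, 90)
shifts = [np.array([x, y]) for x in np.linspace(0, 3, 16)
                            for y in np.linspace(-2, 1, 16)]
best = min(((fit_cost(t, s), t, s) for t in thetas for s in shifts),
           key=lambda r: r[0])
_, theta_hat, shift_hat = best

# "Collapse the blur": snap each noisy estimate to its nearest fitted lattice site.
grid = lattice_points(theta_hat, shift_hat)
snapped = grid[np.argmin(np.linalg.norm(
    noisy[:, None, :] - grid[None, :, :], axis=2), axis=1)]
print("mean error before snapping:",
      np.linalg.norm(noisy - emitters, axis=1).mean())
print("mean error after snapping: ",
      np.linalg.norm(snapped - emitters, axis=1).mean())
```

In this toy setting, the snapped positions land much closer to the true sites than the raw noisy estimates, which is the sense in which the known grid lets the blur “collapse.”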

In the end, they found the technique could pinpoint the location of individual silicon atoms within the diamond lattice, with a precision of 0.178 angstroms — the sharpest resolution of any optical-based imaging technique. The team has made the DIGIT code available on GitHub for anyone to apply to their optical measurements, provided their sample of interest has a well-understood atomic structure. Then, they hope that scientists will start to see much finer and detailed features and processes using light.

“It’s a big step — it takes optical microscopes into the realm of atomic scale, something people thought only electron microscopes or X-rays could do,” Duan says. “That opens up a whole new way of studying materials and biology.”


Charts can be social artifacts that communicate more than just data

Researchers find that design elements of data visualizations influence viewers’ assumptions about the source of the information and its trustworthiness.


The degree to which someone trusts the information depicted in a chart can depend on their assumptions about who made the data visualization, according to a pair of studies by MIT researchers.

For instance, if someone infers that a graph about a controversial topic like gun violence was produced by an organization they feel is in opposition to their beliefs or political views, they may discredit the information or dismiss the visualization altogether.

The researchers found that even the clearest visualizations often communicate more than the data they explicitly depict, and can elicit strong judgments from viewers about the social contexts, identities, and characteristics of those who made the chart.

Readers make these assessments about the social context of a visualization primarily from its design features, like the color palette or the way information is arranged, rather than the underlying data. Often, these inferences are unintended by the designers.

Qualitative and quantitative studies revealed that these social inferences aren’t restricted to certain subgroups, nor are they caused by limited data literacy.

The researchers consolidate their findings into a framework that scientists and communicators can use to think critically about how design choices might affect these social assumptions. Ultimately, they hope this work leads to better strategies for scientific communication.

“If you are scrolling through social media and you see a chart, and you immediately dismiss it as something an influencer has produced just to get attention, that shapes your entire experience with the chart before you even dig into the data. We’ve shown in these papers that visualizations do more than just communicate the data they are depicting — they also communicate other social signals,” says Arvind Satyanarayan, an associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS) and member of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-senior author of this research.

He is joined on the paper by co-lead authors Amy Rae Fox, a former CSAIL postdoc, and Michelle Morgenstern, a current postdoc in MIT’s anthropology program; and co-senior author Graham M. Jones, professor of anthropology. Two related papers on this research will be presented at the IEEE Visualization Conference.

Charts as social artifacts

During the height of the Covid-19 pandemic, social media was awash in charts from organizations like the World Health Organization and Centers for Disease Control and Prevention, which were designed to convey information about the spread of disease.

The MIT researchers studied how these visualizations were being used to discuss the pandemic. They found that some citizen scientists were using the underlying data to make visualizations of their own, challenging the findings of mainstream science.

“This was an unexpected discovery as, previously, citizen scientists were typically aligned with mainstream scientists. It took us a few years to figure out how to study this phenomenon more deeply,” Satyanarayan says.

Most research into data visualization studies how charts communicate data. Instead, the researchers wanted to explore visualizations from a social and linguistic perspective to assess the information they convey beyond the data.

Linguistic anthropologists have found that, while language allows people to communicate ideas, it also holds social meaning beyond the words people use. For instance, an accent or dialect can indicate that someone is part of a particular community.

By “pointing” to certain social meanings, identities, and characteristics, language serves what is known as a socio-indexical function.

“We wanted to see if things in the visual language of data communication might point to certain institutions, or the kinds of people in those institutions, that carry a meaning that could be unintended by the makers of the visualization,” Jones says.

To do this, the researchers conducted an initial, qualitative study of users on the social media platform Tumblr. During one-on-one interviews, the researchers showed users a variety of real visualizations from online sources, as well as modified visualizations where they removed the textual information, like titles and axis labels.

Stripping out the textual information was particularly important, since it mimics the way people often interact with online visualizations.
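
The papers do not say what tooling was used to produce these modified stimuli, so the Python sketch below is purely a hypothetical illustration of the idea: render the same line chart twice with matplotlib, once with its title and axis labels and once with all text stripped away so only the visual design remains.

```python
# Hypothetical sketch of preparing "full" vs. "text-stripped" chart stimuli;
# the studies do not specify how their stimuli were actually made.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
years = np.arange(2015, 2025)
values = rng.integers(40, 100, size=years.size)

fig, (ax_full, ax_stripped) = plt.subplots(1, 2, figsize=(8, 3))

for ax in (ax_full, ax_stripped):
    ax.plot(years, values, color="firebrick", linewidth=2)

# The full stimulus keeps the verbal framing a careful reader would see.
ax_full.set_title("Reported incidents by year")
ax_full.set_xlabel("Year")
ax_full.set_ylabel("Incidents")

# The stripped stimulus keeps only the visual design: mark, color, layout.
ax_stripped.set_xticks([])
ax_stripped.set_yticks([])

fig.tight_layout()
fig.savefig("stimuli.png", dpi=150)
```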

“Our engagement with social media is a few quick seconds. People aren’t taking the time to read the title of a chart or look at the data very carefully,” Satyanarayan says.

The interviews revealed that users made detailed inferences about the people or organizations who created the visualizations based on what they called “vibes”: design elements like colors or the use of certain graphics. These inferences, in turn, affected their trust in the data.

For instance, after seeing a chart with the flags of Georgia and Texas and a graph with two lines in red and black, but no text, one user said, “This kind of looks like something a Texas Republican (legislator) would put on Twitter or on their website, or as part of a campaign presentation.”

A quantitative approach

Building on this initial work, the researchers used the same methodology in three quantitative studies involving surveys sent to larger groups of people from a variety of backgrounds.

They found the same phenomenon: People make inferences about the social context of a visualization based on its design, which can lead to misunderstandings about, and mistrust in, the data it depicts.

For instance, users felt some visualizations were so neatly arranged that they believed them to be advertisements, and therefore not trustworthy. In another example, one user dismissed a chart by a Pulitzer Prize-winning designer because they felt the hand-drawn graphical style indicated it was made by “some female Instagram influencer who is just trying to look for attention.”

“If that is the first reaction someone has to a chart, it is going to massively impact the degree to which they trust it,” Satyanarayan says.

Moreover, when the researchers reintroduced text in the visualizations from which it had been removed, users still made these social inferences.

Typically, in data visualization, the solution to such a problem would be to create clearer charts or educate people about data literacy. But this research points to a completely different kind of data literacy, Jones says.

“It is not erroneous for people to be drawing these inferences. It requires a lot of cultural knowledge about where visualizations come from, how they are made, and how they circulate. Drawing these inferences is a feature, not a bug, of the way we use signs,” he says.

From these results, they created a classification framework to organize the social inferences users made and the design elements that contributed to them. They hope the typology serves as a tool designers can use to develop more effective visualizations, as well as a starting point for additional studies.

Moving forward, the researchers want to continue exploring the role of data visualizations as social artifacts, perhaps by drilling down on each design feature they identified in the typology. They also want to expand the scope of their study to include visualizations in research papers and scientific journals.

“Part of the value of this work is a methodological contribution to render a set of phenomena amenable to experimental study. But this work is also important because it showcases an interdisciplinary cross-pollination that is powerful and unique to MIT,” Jones says.

This work was supported, in part, by MIT METEOR and PFPFEE fellowships, an Amar G. Bose Fellowship, an Alfred P. Sloan Fellowship, and the National Science Foundation.


The student becomes the teacher

Titus Roesler was ready to drop his class in signal processing. Now, he hopes to become an expert in the field.


Coming from a small high school in rural South Dakota that didn’t offer advanced placement (AP) classes, Titus Roesler ’25 didn’t have the easiest start at MIT. But when his efforts to catch up academically to his peers led to a job as a teaching assistant, it changed everything.

Roesler, who graduated last spring with a bachelor’s degree in electrical engineering and is now working on a master’s, has built a reputation for himself as a student-teacher at MIT. Since discovering his affinity for teaching and mentoring, he’s been a teaching assistant for four different classes and designed two seminars from scratch.

Through teaching, Roesler has not only helped other students, but also improved his own grasp of complex subjects. That includes signal processing, which involves manipulating signals, such as radio waves, to make them more useful for applications like wireless communications. He has become fascinated by the topic and hopes to continue working in the field.

Roesler lights up when talking about teaching, but he didn’t always think it was in the cards.

“I don’t know that anyone who knew me pre-MIT would believe that I do things like give recitations to crowded rooms, because I think everyone thought, ‘Titus is that quiet kid, he never talked at all.’”

Learning through teaching

Growing up in Marion, South Dakota, a town with a population of around 800, Roesler didn’t have MIT on his radar, but he knew he liked math. His high school capstone project involved helping classmates with the math section of the ACT, and he tutored a few of them. His teacher let him teach trigonometry one day, and he toured local colleges with the plan of becoming a high school math teacher.

But that changed after he self-studied calculus through MIT’s OpenCourseWare offerings and set his sights on the Institute.

Roesler worked overtime during his first year at MIT to catch up with what his peers had learned back in high school. On his first physics exam, he answered only one question correctly — a multiple-choice question he had guessed on. But MIT’s Experimental Study Group (ESG) kept him afloat during his first year, and it quickly led to more opportunities.

When, in the spring of his first year, his multivariable calculus instructor asked him to stay after class one day, Roesler was sure he was in trouble. She actually wanted to see if he could TA for her next year.

“I was flattered because there was still a month left in the class. Plenty of time for me to fail,” Roesler jokes.

He loved the job. During a Friday night office hours session, he stayed extra hours to help a student in whom he saw a lot of himself: someone who was also from a rural background and had entered MIT without a strong mathematics background. He went on to become the student’s tutor. The position gave him the opportunity to be the teacher he’d always wanted to have.

As a TA, “I wasn’t coming at things from the perspective of ‘Everyone already knows A, B, C’ before I explained. I would always try to start from the ground up and give my perspective on it,” Roesler says.

From his mentorship and teaching work, he received the Undergraduate Teaching Award from the Department of Electrical Engineering and Computer Science and the Outstanding Associate Advisor Award from the Office of the First Year. After joining ESG during his first year, Roesler stayed on as an associate advisor in the learning community for the next three years. His work earned him the Fiekowsky Award for Excellence in Teaching and the Fiekowsky Award for Community Service.

The right blend

Signal processing, the focus of his graduate work, “is where calculus, geometry, linear algebra, probability, statistics, algorithms, and numerical analysis all come into play on practical problems of real-world interest,” Roesler says. “For me, it’s the right blend of theory and application.”

Because of the field’s wide scope, Roesler notices potential applications for signal processing everywhere, as well as ways in which different fields intersect within the discipline. “Everything comes together in just the right way,” he says.

He is especially interested in signal-processing problems such as source separation, which aims to recover a set of source signals from a set of mixed signals. During his senior year, he spent two semesters on a project where he wrote a Python program to separate harmonies in Bach chorales.
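
The article does not describe how that program works, so the sketch below is only a generic toy version of source separation, not Roesler’s method: it mixes two synthetic “voices,” then estimates the original sources with independent component analysis (FastICA from scikit-learn). Separating real Bach harmonies from recorded audio is considerably harder.

```python
# Toy source-separation example (illustrative only, not the project described above):
# mix two synthetic sources, then try to recover them with FastICA.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 1, 8000)
voice_low = np.sin(2 * np.pi * 220 * t)            # sinusoidal source
voice_high = np.sign(np.sin(2 * np.pi * 330 * t))  # square-wave source, for contrast
S = np.c_[voice_low, voice_high]                   # true sources, shape (8000, 2)

A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                         # mixing matrix: two observed mixtures
X = S @ A.T                                        # observed mixed signals

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)                       # estimated sources, up to scale and order
print(S_est.shape)                                 # (8000, 2)
```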

For his master’s degree, following a summer research internship at MIT Lincoln Laboratory, Roesler has stayed at the laboratory, this time venturing into high-frequency radio communications. He’s currently working on a research project that applies the theory of compressed sensing (which states that, under certain conditions, it is possible to reconstruct signals from very few measurements) to communications.

What fascinates Roesler are “something-from-nothing” problems.

“The kind of problems I’m interested in are underdetermined inverse problems,” he says. For example, imagine trying to reconstruct a full image from only a handful of pixels. While on the surface this seems impossible, researchers have recovered high-quality images by applying the techniques of compressed sensing.
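
As a concrete, if simplified, illustration of that idea (a generic compressed-sensing toy, not the Lincoln Laboratory project), the sketch below takes far fewer random linear measurements of a sparse signal than there are unknowns and recovers it with L1-regularized regression, which favors sparse solutions to the underdetermined system.

```python
# Minimal compressed-sensing sketch (illustrative assumption, not the actual research):
# recover a k-sparse signal x from m << n random measurements y = A @ x.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5                      # unknowns, measurements, nonzeros

x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.normal(size=k)      # k-sparse ground truth

A = rng.normal(size=(m, n)) / np.sqrt(m)  # random Gaussian sensing matrix
y = A @ x_true                            # only m measurements: the "handful of pixels"

# L1 regularization (Lasso) picks out a sparse solution from the infinitely
# many signals consistent with the underdetermined system y = A x.
lasso = Lasso(alpha=1e-3, max_iter=100_000)
lasso.fit(A, y)
x_hat = lasso.coef_

rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.3f}")
```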

Running and serving

Roesler has also spent extensive time running, a sport he’s loved since fifth grade. In 2023, he raced a marathon in 2 hours and 46 minutes and went on to run the Boston Marathon in both 2024 and 2025. To prepare, he spent a lot of time reading up on the psychology of running, which he says was the first time he used the scientific method. Now, he just runs for fun and uses it as a way to focus and collect his thoughts.

He has also served on the executive team of the Undergraduate Mathematics Association, as a resident peer mentor at Baker House, and as a tutor for two classes. At the PKG Center, he’s been a program lead and counselor for its pre-orientation program.

Roesler still gets excited about seeing the impact of his teaching. At the end of one semester teaching a tutorial, he took his class on a picnic. They surprised him with a card and a bag of goodies. 

Recalling the moment, he says: “I thought, How does it get better? It was wonderful.”


MIT Maritime Consortium releases “Nuclear Ship Safety Handbook”

First-of-its-kind handbook serves as a guide for design safety for civilian nuclear ships.


Commercial shipping accounts for 3 percent of all greenhouse gas emissions globally. As the sector sets climate goals and chases a carbon-free future, nuclear power — long used as a source for military vessels — presents an enticing solution. To date, however, there has been no clear, unified public document available to guide design safety for certain components of civilian nuclear ships. A new “Nuclear Ship Safety Handbook” by the MIT Maritime Consortium aims to change that and set the standard for safe maritime nuclear propulsion.

“This handbook is a critical tool in efforts to support the adoption of nuclear in the maritime industry,” explains Themis Sapsis, the William I. Koch Professor of Mechanical Engineering at MIT, director of the MIT Center for Ocean Engineering, and co-director of the MIT Maritime Consortium. “The goal is to provide a strong basis for initial safety on key areas that require nuclear and maritime regulatory research and development in the coming years to prepare for nuclear propulsion in the maritime industry.”

Using research data and standards, combined with operational experience from civilian maritime nuclear operations, the handbook provides unique insights into potential issues, and their resolutions, in the design of maritime nuclear operations, a topic of growing importance on the national and international stage.

“Right now, the nuclear-maritime policies that exist are outdated and often tied only to specific technologies, like pressurized water reactors,” says Jose Izurieta, a graduate student in the Department of Mechanical Engineering (MechE) Naval Construction and Engineering (2N) Program, and one of the handbook authors. “With the recent U.K.-U.S. Technology Prosperity Deal now including civil maritime nuclear applications, I hope the handbook can serve as a foundation for creating a clear, modern regulatory framework for nuclear-powered commercial ships.”

The recent memorandum of understanding signed by the U.S. and U.K. calls for the exploration of “novel applications of advanced nuclear energy, including civil maritime applications,” and for the parties to play “a leading role informing the establishment of international standards, potential establishment of a maritime shipping corridor between the Participants’ territories, and strengthening energy resilience for the Participants’ defense facilities.”

“The U.S.-U.K. nuclear shipping corridor offers a great opportunity to collaborate with legislators on establishing the critical framework that will enable the United States to invest in nuclear-powered merchant vessels — an achievement that will reestablish America in the shipbuilding space,” says Fotini Christia, the Ford International Professor of the Social Sciences, director of the Institute for Data, Systems, and Society (IDSS), and co-director of the MIT Maritime Consortium.

“With over 30 nations now building or planning their first reactors, nuclear energy’s global acceptance is unprecedented — and that momentum is key to aligning safety rules across borders for nuclear-powered ships and the respective ports,” says Koroush Shirvan, the Atlantic Richfield Career Development Professor in Energy Studies at MIT and director of the Reactor Technology Course for Utility Executives.

The handbook is divided into chapters covering the overlapping nuclear and maritime safety design decisions that engineers will encounter, and it is careful to balance technical and practical guidance with policy considerations.

Commander Christopher MacLean, MIT associate professor of the practice of naval construction and engineering in the Department of Mechanical Engineering, says the handbook will significantly benefit the entire maritime community, specifically naval architects and marine engineers, by providing standardized guidelines for design and operation specific to nuclear-powered commercial vessels.

“This will assist in enhancing safety protocols, improve risk assessments, and ensure consistent compliance with international regulations,” MacLean says. “This will also help foster collaboration amongst engineers and regulators. Overall, this will further strengthen the reliability, sustainability, and public trust in nuclear-powered maritime systems.”

Anthony Valiaveedu, the handbook’s lead author, and co-author Nat Edmonds are both students in the MIT Master’s Program in Technology and Policy (TPP) within IDSS. The pair are also co-authors of a paper published in Science Policy Review earlier this year that offered structured advice on the development of nuclear regulatory policies.

“It is important for safety and technology to go hand-in-hand,” Valiaveedu explains. “What we have done is provide a risk-informed process to begin these discussions for engineers and policymakers.”

“Ultimately, I hope this framework can be used to build strong bilateral agreements between nations that will allow nuclear propulsion to thrive,” says fellow co-author Izurieta.

Impact on industry

“Maritime designers needed a source of information to improve their ability to understand and design the reactor primary components, and development of the 'Nuclear Ship Safety Handbook' was a good step to bridge this knowledge gap,” says Christopher J. Wiernicki, American Bureau of Shipping (ABS) chair and CEO. “For this reason, it is an important document for the industry.”

The ABS, which is the American classification society for the maritime industry, develops criteria and provides safety certification for all ocean-going vessels. ABS is among the founding members of the MIT Maritime Consortium. Capital Clean Energy Carriers Corp., HD Korea Shipbuilding and Offshore Engineering, and Delos Navigation Ltd. are also consortium founding members. Innovation members are Foresight-Group, Navios Maritime Partners L.P., Singapore Maritime Institute, and Dorian LPG.

“As we consider a net-zero framework for the shipping industry, nuclear propulsion represents a potential solution. Careful investigation remains the priority, with safety and regulatory standards at the forefront,” says Jerry Kalogiratos, CEO of Capital Clean Energy Carriers Corp. “As first movers, we are exploring all options. This handbook lays the technical foundation for the development of nuclear-powered commercial vessels.”

Sangmin Park, senior vice president at HD Korea Shipbuilding and Offshore Engineering, says, “The 'Nuclear Ship Safety Handbook' marks a groundbreaking milestone that bridges shipbuilding excellence and nuclear safety. It drives global collaboration between industry and academia, and paves the way for the safe advancement of the nuclear maritime era.”

Maritime at MIT

MIT has been a leading center of ship research and design for over a century, with work at the Institute today representing significant advancements in fluid mechanics and hydrodynamics, acoustics, offshore mechanics, marine robotics and sensors, and ocean sensing and forecasting. Maritime Consortium projects, including the handbook, reflect national priorities aimed at revitalizing the U.S. shipbuilding and commercial maritime industries.

The MIT Maritime Consortium, which launched in 2024, brings together MIT and maritime industry leaders to explore data-powered strategies to reduce harmful emissions, optimize vessel operations, and support economic priorities.

“One of our most important efforts is the development of technologies, policies, and regulations to make nuclear propulsion for commercial ships a reality,” says Sapsis. “Over the last year, we have put together an interdisciplinary team with faculty and students from across the Institute. One of the outcomes of this effort is this very detailed document, which provides guidance on how such an effort can be implemented safely.”

Handbook contributors come from multiple disciplines and MIT departments, labs, and research centers, including the Center for Ocean Engineering, IDSS, MechE’s Course 2N Program, the MIT Technology and Policy Program, and the Department of Nuclear Science and Engineering.

MIT faculty members and research advisors on the project include Sapsis; Christia; Shirvan; MacLean; Jacopo Buongiorno, the Battelle Energy Alliance Professor in Nuclear Science and Engineering, director of the Center for Advanced Nuclear Energy Systems, and director of science and technology for the Nuclear Reactor Laboratory; and Captain Andrew Gillespy, professor of the practice and director of the Naval Construction and Engineering (2N) Program.

“Proving the viability of nuclear propulsion for civilian ships will entail getting the technologies, the economics and the regulations right,” says Buongiorno. “This handbook is a meaningful initial contribution to the development of a sound regulatory framework.”

“We were lucky to have a team of students and knowledgeable professors from so many fields,” says Edmonds. “Before even beginning the outline of the handbook, we did significant archival and historical research to understand the existing regulations and the overarching story of nuclear ships. Some of the most relevant documents we found were written before 1975, and many of them were stored in the bowels of the NS Savannah.”

The NS Savannah, which was built in the late 1950s as a demonstration project for the potential peacetime uses of nuclear energy, was the first nuclear-powered merchant ship. The Savannah was launched on July 21, 1959, two years after the Soviet icebreaker Lenin, the first nuclear-powered civilian vessel, and was retired in 1971.

Historical context for this project is important, because the reactor technologies envisioned for maritime propulsion today are quite different from the traditional pressurized water reactors used by the U.S. Navy. These new reactors are being developed not just in the maritime context, but also to power ports and data centers on land; they all use low-enriched uranium and are passively cooled. For the maritime industry, Sapsis says, “the technology is there, it’s safe, and it’s ready.”

The Nuclear Ship Safety Handbook is publicly available on the MIT Maritime Consortium website and from the MIT Libraries.