General news from MIT - Massachusetts Institute of Technology

Here you will find the recent daily general news from MIT - Massachusetts Institute of Technology

MIT News
MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.
Brian Hedden named co-associate dean of Social and Ethical Responsibilities of Computing

He joins Nikos Trichakis in guiding the cross-cutting initiative of the MIT Schwarzman College of Computing.


Brian Hedden PhD ’12 has been appointed co-associate dean of the Social and Ethical Responsibilities of Computing (SERC) at MIT, a cross-cutting initiative in the MIT Schwarzman College of Computing, effective Jan. 16.

Hedden is a professor in the Department of Linguistics and Philosophy, holding an MIT Schwarzman College of Computing shared position with the Department of Electrical Engineering and Computer Science (EECS). He joined the MIT faculty last fall, having previously served as a faculty member at the Australian National University and the University of Sydney. He earned his BA from Princeton University and his PhD from MIT, both in philosophy.

“Brian is a natural and compelling choice for SERC, as a philosopher whose work speaks directly to the intellectual challenges facing education and research today, particularly in computing and AI. His expertise in epistemology, decision theory, and ethics addresses questions that have become increasingly urgent in an era defined by information abundance and artificial intelligence. His scholarship exemplifies the kind of interdisciplinary inquiry that SERC exists to advance,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science.

Hedden’s research focuses on how we ought to form beliefs and make decisions, and it explores how philosophical thinking about rationality can yield insights into contemporary ethical issues, including ethics of AI. He is the author of “Reasons without Persons: Rationality, Identity, and Time” (Oxford University Press, 2015) and articles on topics such as collective action problems, legal standards of proof, algorithmic fairness, and political polarization.

Joining co-associate dean Nikos Trichakis, the J.C. Penney Professor of Management at the MIT Sloan School of Management, Hedden will help lead SERC and advance the initiative’s ongoing research, teaching, and engagement efforts. He succeeds professor of philosophy Caspar Hare, who stepped down at the conclusion of his three-year term on Sept. 1, 2025.

Since its inception in 2020, SERC has launched a range of programs and activities designed to cultivate responsible “habits of mind and action” among those who create and deploy computing technologies, while fostering the development of technologies in the public interest.

The SERC Scholars Program invites undergraduate and graduate students to work alongside postdoctoral mentors to explore interdisciplinary ethical challenges in computing. The initiative also hosts an annual prize competition that challenges MIT students to envision the future of computing, publishes a twice-yearly series of case studies, and collaborates on coordinated curricular materials, including active-learning projects, homework assignments, and in-class demonstrations. In 2024, SERC introduced a new seed grant program to support MIT researchers investigating ethical technology development; to date, two rounds of grants have been awarded to 24 projects.


Antonio Torralba, three MIT alumni named 2025 ACM fellows

Torralba’s research focuses on computer vision, machine learning, and human visual perception.


Antonio Torralba, Delta Electronics Professor of Electrical Engineering and Computer Science and faculty head of artificial intelligence and decision-making at MIT, has been named to the 2025 cohort of Association for Computing Machinery (ACM) Fellows. He shares the honor of an ACM Fellowship with three MIT alumni: Eytan Adar ’97, MEng ’98; George Candea ’97, MEng ’98; and Gookwon Edward Suh SM ’01, PhD ’05.

A principal investigator within both the Computer Science and Artificial Intelligence Laboratory and the Center for Brains, Minds, and Machines, Torralba received his BS in telecommunications engineering from Telecom BCN, Spain, in 1994, and a PhD in signal, image, and speech processing from the Institut National Polytechnique de Grenoble, France, in 2000. At different points in his MIT career, he has been director of both the MIT Quest for Intelligence (now the MIT Siegel Family Quest for Intelligence) and the MIT-IBM Watson AI Lab. 

Torralba’s research focuses on computer vision, machine learning, and human visual perception; as he puts it, “I am interested in building systems that can perceive the world like humans do.” Alongside Phillip Isola and William Freeman, he recently co-authored “Foundations of Computer Vision,” an 800-plus page textbook exploring the foundations and core principles of the field. 

Among other awards and recognitions, he is the recipient of the 2008 National Science Foundation Career award; the 2010 J. K. Aggarwal Prize from the International Association for Pattern Recognition; the 2017 Frank Quick Faculty Research Innovation Fellowship; the Louis D. Smullin (’39) Award for Teaching Excellence; and the 2020 PAMI Mark Everingham Prize. In 2021, he was awarded the inaugural Thomas Huang Memorial Prize by the Pattern Analysis and Machine Intelligence Technical Committee and was named a fellow of the Association for the Advancement of Artificial Intelligence. In 2022, he received an honorary doctoral degree from the Universitat Politècnica de Catalunya — BarcelonaTech (UPC). 

The ACM Fellowship, the highest honor bestowed by the professional organization, recognizes registered members of the society selected by their peers for outstanding accomplishments in computing and information technology and/or outstanding service to ACM and the larger computing community.


3 Questions: Using AI to accelerate the discovery and design of therapeutic drugs

Professor James Collins discusses how collaboration has been central to his research into combining computational predictions with new experimental platforms.


In the pursuit of solutions to complex global challenges including disease, energy demands, and climate change, scientific researchers, including at MIT, have turned to artificial intelligence, and to quantitative analysis and modeling, to design and construct engineered cells with novel properties. The engineered cells can be programmed to become new therapeutics — battling, and perhaps eradicating, diseases.

James J. Collins is one of the founders of the field of synthetic biology, and is also a leading researcher in systems biology, the interdisciplinary approach that uses mathematical analysis and modeling of complex systems to better understand biological systems. His research has led to the development of new classes of diagnostics and therapeutics, including in the detection and treatment of pathogens like Ebola, Zika, SARS-CoV-2, and antibiotic-resistant bacteria. Collins, the Termeer Professor of Medical Engineering and Science and professor of biological engineering at MIT, is a core faculty member of the Institute for Medical Engineering and Science (IMES), the director of the MIT Abdul Latif Jameel Clinic for Machine Learning in Health, as well as an institute member of the Broad Institute of MIT and Harvard, and core founding faculty at the Wyss Institute for Biologically Inspired Engineering, Harvard.

In this Q&A, Collins speaks about his latest work and goals for this research.

Q: You’re known for collaborating with colleagues across MIT, and at other institutions. How have these collaborations and affiliations helped you with your research?

A: Collaboration has been central to the work in my lab. At the MIT Jameel Clinic for Machine Learning in Health, I formed a collaboration with Regina Barzilay [the Delta Electronics Professor in the MIT Department of Electrical Engineering and Computer Science and affiliate faculty member at IMES] and Tommi Jaakkola [the Thomas Siebel Professor of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society] to use deep learning to discover new antibiotics. This effort combined our expertise in artificial intelligence, network biology, and systems microbiology, leading to the discovery of halicin, a potent new antibiotic effective against a broad range of multidrug-resistant bacterial pathogens. Our results were published in Cell in 2020 and showcased the power of bringing together complementary skill sets to tackle a global health challenge.

At the Wyss Institute, I’ve worked closely with Donald Ingber [the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children’s Hospital, and Hansjörg Wyss Professor of Biologically Inspired Engineering at Harvard], leveraging his organs-on-chips technology to test the efficacy of AI-discovered and AI-generated antibiotics. These platforms allow us to study how drugs behave in human tissue-like environments, complementing traditional animal experiments and providing a more nuanced view of their therapeutic potential.

The common thread across our many collaborations is the ability to combine computational predictions with cutting-edge experimental platforms, accelerating the path from ideas to validated new therapies.

Q: Your research has led to many advances in designing novel antibiotics, using generative AI and deep learning. Can you talk about some of the advances you’ve been a part of in the development of drugs that can battle multi-drug-resistant pathogens, and what you see on the horizon for breakthroughs in this arena?

A: In 2025, our lab published a study in Cell demonstrating how generative AI can be used to design completely new antibiotics from scratch. We used genetic algorithms and variational autoencoders to generate millions of candidate molecules, exploring both fragment-based designs and entirely unconstrained chemical space. After computational filtering, retrosynthetic modeling, and medicinal chemistry review, we synthesized 24 compounds and tested them experimentally. Seven showed selective antibacterial activity. One lead, NG1, was highly narrow-spectrum, eradicating multi-drug-resistant Neisseria gonorrhoeae, including strains resistant to first-line therapies, while sparing commensal species. Another, DN1, targeted methicillin-resistant Staphylococcus aureus (MRSA) and cleared infections in mice through broad membrane disruption. Both were non-toxic and showed low rates of resistance.
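
To make the generate-and-filter workflow concrete: below is a minimal sketch, not the lab’s actual pipeline, of sampling a generative model’s latent space and keeping only candidates that pass property filters. The functions `decode_latent`, `predicted_activity`, and `predicted_toxicity` are hypothetical stand-ins for a trained VAE decoder and trained activity and toxicity models.

```python
import random

def decode_latent(z):
    """Hypothetical stand-in for a trained VAE decoder that maps a
    latent vector to a candidate molecule (e.g., a SMILES string)."""
    return f"mol_{hash(tuple(z)) & 0xffff:04x}"

def predicted_activity(mol):
    """Placeholder for a trained antibacterial-activity predictor."""
    return random.random()

def predicted_toxicity(mol):
    """Placeholder for a trained human-cell toxicity predictor."""
    return random.random()

def generate_candidates(n_samples, latent_dim=64,
                        activity_cutoff=0.9, toxicity_cutoff=0.1):
    """Sample the latent space, decode each point into a molecule, and
    keep those predicted to be active against bacteria but non-toxic."""
    keep = []
    for _ in range(n_samples):
        z = [random.gauss(0.0, 1.0) for _ in range(latent_dim)]
        mol = decode_latent(z)
        if (predicted_activity(mol) >= activity_cutoff
                and predicted_toxicity(mol) <= toxicity_cutoff):
            keep.append(mol)
    return keep

survivors = generate_candidates(100_000)
print(len(survivors), "candidates pass both filters")
```

In the published work, the surviving candidates then went through retrosynthetic modeling and medicinal chemistry review before 24 compounds were synthesized and tested.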

Looking ahead, we are using deep learning to design antibiotics with drug-like properties that make them stronger candidates for clinical development. By integrating AI with high-throughput biological testing, we aim to accelerate the discovery and design of antibiotics that are novel, safe, and effective, ready for real-world therapeutic use. This approach could transform how we respond to drug-resistant bacterial pathogens, moving from a reactive to a proactive strategy in antibiotic development.

Q: You’re a co-founder of Phare Bio, a nonprofit organization that uses AI to discover new antibiotics, and the Collins Lab has helped to launch the Antibiotics-AI Project in collaboration with Phare Bio. Can you tell us more about what you hope to accomplish with these collaborations, and how they tie back to your research goals?

A: We founded Phare Bio as a nonprofit to take the most promising antibiotic candidates emerging from the Antibiotics-AI Project at MIT and advance them toward the clinic. The idea is to bridge the gap between discovery and development by collaborating with biotech companies, pharmaceutical partners, AI companies, philanthropies, other nonprofits, and even nation states. Akhila Kosaraju has been doing a brilliant job leading Phare Bio, coordinating these efforts and moving candidates forward efficiently.

Recently, we received a grant from ARPA-H to use generative AI to design 15 new antibiotics and develop them as pre-clinical candidates. This project builds directly on our lab’s research, combining computational design with experimental testing to create novel antibiotics that are ready for further development. By integrating generative AI, biology, and translational partnerships, we hope to create a pipeline that can respond more rapidly to the global threat of antibiotic resistance, ultimately delivering new therapies to patients who need them most.


3D-printed metamaterials that stretch and fail by design

New framework supports design and fabrication of compliant materials such as printable textiles and functional foams, letting users predict deformation and material failure.


Metamaterials — materials whose properties are primarily dictated by their internal microstructure, and not their chemical makeup — have been redefining the engineering materials space for the last decade. To date, however, most metamaterials have been lightweight options designed for stiffness and strength.

New research from the MIT Department of Mechanical Engineering introduces a computational design framework to support the creation of a new class of soft, compliant, and deformable metamaterials. These metamaterials, termed 3D woven metamaterials, consist of building blocks that are composed of intertwined fibers that self-contact and entangle to endow the material with unique properties.

“Soft materials are required for emerging engineering challenges in areas such as soft robotics, biomedical devices, or even for wearable devices and functional textiles,” explains Carlos Portela, the Robert N. Noyce Career Development Professor and associate professor of mechanical engineering.

In an open-access paper published Jan. 26 in the journal Nature Communications, researchers from Portela’s lab provide a universal design framework that generates complex 3D woven metamaterials with a wide range of properties. The work also provides open-source code that allows users to create designs to fit their specifications and generate a file for simulating the material or printing it on a 3D printer.

“Normal knitting or weaving have been constrained by the hardware for hundreds of years — there’s only a few patterns that you can make clothes out of, for example — but that changes if hardware is no longer a limitation,” Portela says. “With this framework, you can come up with interesting patterns that completely change the way the textile is going to behave.”

Possible applications include wearable sensors that move with human skin, fabrics for aerospace or defense needs, flexible electronic devices, and a variety of other printable textiles.

The team developed general design rules — in the form of an algorithm — that first provide a graph representation of the metamaterial. The attributes of this graph eventually dictate how each fiber is placed and connected within the metamaterial. The fundamental building blocks are woven unit cells that can be functionally graded via control of various design parameters, such as the radius and pitch of the fibers that make up the woven struts.
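
The article does not reproduce the paper’s data structures, but a minimal sketch of the idea, a graph whose edges carry per-strut fiber parameters such as radius and pitch that can be graded across the lattice, might look like the following; all names and numbers here are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class WovenStrut:
    """One edge of the graph: a woven strut of intertwined fibers."""
    start: tuple          # (x, y, z) coordinates of one node
    end: tuple            # (x, y, z) coordinates of the other node
    fiber_radius: float   # radius of each fiber in the bundle
    pitch: float          # axial distance per full twist of the fibers

@dataclass
class WovenLattice:
    """Graph representation: nodes are junctions, edges are struts."""
    struts: list = field(default_factory=list)

    def add_strut(self, start, end, fiber_radius, pitch):
        self.struts.append(WovenStrut(start, end, fiber_radius, pitch))

def graded_chain(n_cells, radius_soft=0.05, radius_stiff=0.15, pitch=1.0):
    """Functionally graded chain of unit cells: fiber radius increases
    linearly along x, so the material is softer at one end and stiffer
    at the other."""
    lattice = WovenLattice()
    for i in range(n_cells):
        t = i / max(n_cells - 1, 1)
        radius = radius_soft + t * (radius_stiff - radius_soft)
        lattice.add_strut((i, 0, 0), (i + 1, 0, 0), radius, pitch)
    return lattice

chain = graded_chain(10)
print([round(s.fiber_radius, 3) for s in chain.struts])
```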

“Because this framework allows these metamaterials to be tailored to be softer in one place and stiffer in another, or to change shape as they stretch, they can exhibit an exceptional range of behaviors that would be hard to design using conventional soft materials,” says Molly Carton, lead author of the study. Carton, a former postdoc in Portela’s lab, is now an assistant research professor in mechanical engineering at the University of Maryland.

Further, the simulation framework allows users to predict the deformation response of these materials, capturing complex phenomena such as fiber self-contact and entanglement, so that users can predict, and design against, particular deformation and tearing patterns.

“The most exciting part was being able to tailor failure in these materials and design arbitrary combinations,” says Portela. “Based on the simulations, we were able to fabricate these spatially varying geometries and experiment on them at the microscale.”

This work is the first to provide a tool for users to design, print, and simulate an emerging class of metamaterials that are extensible and tough. It also demonstrates that through tuning of geometric parameters, users can control and predict how these materials will deform and fail, and presents several new design building blocks that substantially expand the property space of woven metamaterials.

“Until now, these complex 3D lattices have been designed manually, painstakingly, which limits the number of designs that anyone has tested,” says Carton. “We’ve been able to describe how these woven lattices work and use that to create a design tool for arbitrary woven lattices. With that design freedom, we’re able to design the way that a lattice changes shape as it stretches, how the fibers entangle and knot with each other, as well as how it tears when stretched to the limit.”

Carton says she believes the framework will be useful across many disciplines. “In releasing this framework as a software tool, our hope is that other researchers will explore what’s possible using woven lattices and find new ways to use this design flexibility,” she says. “I’m looking forward to seeing what doors our work can open.”

The paper, “Design framework for programmable three-dimensional woven metamaterials,” is available now in the journal Nature Communications. Its other MIT-affiliated authors are James Utama Surjadi, Bastien F. G. Aymon, and Ling Xu.

This work was performed, in part, through the use of MIT.nano’s fabrication and characterization facilities.


Terahertz microscope reveals the motion of superconducting electrons

For the first time, the new scope allowed physicists to observe terahertz “jiggles” in a superconducting fluid.


You can tell a lot about a material based on the type of light you shine at it: Optical light illuminates a material’s surface, while X-rays reveal its internal structures and infrared captures a material’s radiating heat.

Now, MIT physicists have used terahertz light to reveal inherent, quantum vibrations in a superconducting material, which have not been observable until now.

Terahertz light is a form of energy that lies between microwaves and infrared radiation on the electromagnetic spectrum. It oscillates over a trillion times per second — just the right pace to match how atoms and electrons naturally vibrate inside materials. Ideally, this makes terahertz light the perfect tool to probe these motions.

But while the frequency is right, the wavelength — the distance over which the wave repeats in space — is not. Terahertz waves have wavelengths hundreds of microns long. Because the smallest spot that any kind of light can be focused into is limited by its wavelength, terahertz beams cannot be tightly confined. As a result, a focused terahertz beam is physically too large to interact effectively with microscopic samples, simply washing over these tiny structures without revealing fine detail.
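
The numbers are easy to check. At 1 THz, the free-space wavelength is c/f, about 300 microns, and diffraction prevents focusing to a spot much smaller than roughly half a wavelength; the short calculation below illustrates the mismatch with a 10-micron sample.

```python
C = 3.0e8  # speed of light, m/s

freq = 1.0e12                 # 1 THz
wavelength = C / freq         # ~3e-4 m, i.e., ~300 microns
spot = wavelength / 2         # rough diffraction-limited spot size

print(f"wavelength at 1 THz: {wavelength * 1e6:.0f} microns")
print(f"smallest focal spot, roughly: {spot * 1e6:.0f} microns")
# A ~10-micron sample is more than an order of magnitude smaller than
# the wavelength, so a focused free-space beam mostly misses it.
```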

In a paper appearing today in the journal Nature, the scientists report that they have developed a new terahertz microscope that compresses terahertz light down to microscopic dimensions. This pinpoint of terahertz light can resolve quantum details in materials that were previously inaccessible.

The team used the new microscope to send terahertz light into a sample of bismuth strontium calcium copper oxide, or BSCCO (pronounced “BIS-co”) — a material that superconducts at relatively high temperatures. With the terahertz scope, the team observed a frictionless “superfluid” of superconducting electrons that were collectively jiggling back and forth at terahertz frequencies within the BSCCO material.

“This new microscope now allows us to see a new mode of superconducting electrons that nobody has ever seen before,” says Nuh Gedik, the Donner Professor of Physics at MIT.

By using terahertz light to probe BSCCO and other superconductors, scientists can gain a better understanding of properties that could lead to long-coveted room-temperature superconductors. The new microscope can also help to identify materials that emit and receive terahertz radiation. Such materials could be the foundation of future wireless, terahertz-based communications, which could potentially transmit more data at faster rates than today’s microwave-based communications.

“There’s a huge push to take Wi-Fi or telecommunications to the next level, to terahertz frequencies,” says Alexander von Hoegen, a postdoc in MIT’s Materials Research Laboratory and lead author of the study. “If you have a terahertz microscope, you could study how terahertz light interacts with microscopically small devices that could serve as future antennas or receivers.”

In addition to Gedik and von Hoegen, the study’s MIT co-authors include Tommy Tai, Clifford Allington, Matthew Yeung, Jacob Pettine, Alexander Kossak, Byunghun Lee, and Geoffrey Beach, along with collaborators at Harvard University, the Max Planck Institute for the Structure and Dynamics of Matter, the Max Planck Institute for the Physics of Complex Systems, and Brookhaven National Laboratory.

Hitting a limit

Terahertz light is a promising yet largely untapped imaging tool. It occupies a unique spectral “sweet spot”: Like microwaves, radio waves, and visible light, terahertz radiation is nonionizing and therefore does not carry enough energy to cause harmful radiation effects, making it safe for use in humans and biological tissues. At the same time, much like X-rays, terahertz waves can penetrate a wide range of materials, including fabric, wood, cardboard, plastic, ceramics, and even thin brick walls.

Owing to these distinctive properties, terahertz light is being actively explored for applications in security screening, medical imaging, and wireless communications. In contrast, far less effort has been devoted to applying terahertz radiation to microscopy and the illumination of microscopic phenomena. The primary reason is a fundamental limitation shared by all forms of light: the diffraction limit, which restricts spatial resolution to roughly the wavelength of the radiation used.

With wavelengths on the order of hundreds of microns, terahertz radiation is far larger than atoms, molecules, and many other microscopic structures. As a result, its ability to directly resolve microscale features is fundamentally constrained.

“Our main motivation is this problem that, you might have a 10-micron sample, but your terahertz light has a 100-micron wavelength, so what you would mostly be measuring is air, or the vacuum around your sample,” von Hoegen explains. “You would be missing all these quantum phases that have characteristic fingerprints in the terahertz regime.”

Zooming in

The team found a way around the terahertz diffraction limit by using spintronic emitters — a recent technology that produces sharp pulses of terahertz light. Spintronic emitters are made from multiple ultrathin metallic layers. When a laser illuminates the multilayered structure, the light triggers a cascade of effects in the electrons within each layer, such that the structure ultimately emits a pulse of energy at terahertz frequencies.

By holding a sample close to the emitter, the team trapped the terahertz light before it had a chance to spread, essentially squeezing it into a space much smaller than its wavelength. In this regime, the light can bypass the diffraction limit to resolve features that were previously too small to see.

The MIT team adapted this technology to observe microscopic, quantum-scale phenomena. For their new study, the team developed a terahertz microscope using spintronic emitters interfaced with a Bragg mirror. This multilayered structure of reflective films successively filters out certain, undesired wavelengths of light while letting through others, protecting the sample from the “harmful” laser which triggers the terahertz emission.

As a demonstration, the team used the new microscope to image a small, atomically thin sample of BSCCO. They placed the sample very close to the terahertz source and imaged it at temperatures close to absolute zero — cold enough for the material to become a superconductor. To create the image, they scanned the laser beam, sending terahertz light through the sample and looking for the specific signatures left by the superconducting electrons.

“We see the terahertz field gets dramatically distorted, with little oscillations following the main pulse,” von Hoegen says. “That tells us that something in the sample is emitting terahertz light, after it got kicked by our initial terahertz pulse.”

With further analysis, the team concluded that the terahertz microscope was observing the natural, collective terahertz oscillations of superconducting electrons within the material.

“It’s this superconducting gel that we’re sort of seeing jiggle,” von Hoegen says.

This jiggling superfluid was expected, but never directly visualized until now. The team is now applying the microscope to other two-dimensional materials, where they hope to capture more terahertz phenomena.

“There are a lot of the fundamental excitations, like lattice vibrations and magnetic processes, and all these collective modes that happen at terahertz frequencies,” von Hoegen says. “We can now resonantly zoom in on these interesting physics with our terahertz microscope.”

This research was supported, in part, by the U.S. Department of Energy and by the Gordon and Betty Moore Foundation.


MIT winter club sports energized by the Olympics

Members of the MIT curling and figure skating clubs are embracing the 2026 Winter Olympics, an international showcase for their — and many other — cherished winter sports.


With the Milano Cortina 2026 Winter Olympics officially kicking off today, several of MIT’s winter sports clubs are hosting watch parties to cheer on their favorite players, events, and teams.

Members of MIT’s Curling Club are hosting a gathering to support their favorite teams. Co-presidents Polly Harrington and Gabi Wojcik are rooting for the United States.

“I’m looking forward to watching the Olympics and cheering for Team USA. I grew up in Seattle, and during the Vancouver Olympics, we took a family trip to the games. The most affordable tickets were to the curling events, and that was my first exposure to the sport. Seeing it live was really cool. I was hooked,” says Harrington.

Wojcik says, “It’s a very analytical and strategic sport, so it’s perfect for MIT students. Physicists still don't entirely agree on why the rocks behave the way they do. Everyone in the club is welcoming and open to teaching new people to play. I’d never played before and learned from scratch. The other advantage of playing is that it is a lifelong sport.”

The two say the biggest misconception about curling, other than that it is easy, is that it is played on ice skates. It is neither easy nor played on skates. The stone, or rock as it is often called, weighs 43 pounds and is always made from the same weathered granite from Scotland, so that the playing field (in this case, the ice) is even.

Both agree that playing is a great way to meet other students from MIT whom they might not otherwise have the chance to meet.

Having seen the American team at a recent tournament, Wojcik is hoping the team does well, but admits that if Scotland wins, she’ll also be happy. Harrington met members of the U.S. men's curling team, Luc Violette and Ben Richardson, when curling in Seattle in high school, and will be cheering for them.

The Curling Club practices and competes in tournaments in the New England area from late September until mid-March, and it always welcomes new members; no previous experience is necessary to join.

Figure Skating Club

The MIT Figure Skating Club is also excited for the 2026 Olympics and has been watching preliminary events (nationals) leading up to the games with great anticipation. Eleanor Li, the current club president, and Amanda (Mandy) Paredes Rioboo, former president, say holding small gatherings to watch the Olympics is a great way for the team to bond further.

Li began taking skating lessons at age 14, fell in love with the sport right away, and has been skating ever since. Paredes Rioboo started lessons at age 5 and practices in the mornings with other club members, saying, “there is no better way to start the day.”

The Figure Skating Club currently has 120 members and offers a great way to meet friends who share the same passion. Any MIT student, regardless of skill level, is welcome to join the club.

Li says, “We have members ranging from former national and international competitors to people who are completely new to the ice.” She adds that her favorite part of skating is “the freeing feeling of wind coming at you when you’re gliding across the ice! And all the life lessons learned: time management, falling again and again, and getting up again and again, the artistry and expressiveness of this beautiful sport, and most of all the community.”

Paredes Rioboo agrees. “The sport taught me discipline, to work at something and struggle with it until I got good at it. It taught me to be patient with myself and to be unafraid of failure.”

“The Olympics always bring a lot of buzz and curiosity around skating, and we’re excited to hopefully see more people come to our Saturday free group lessons, try skating for the first time, and maybe even join the club,” says Li.

Li and Paredes Rioboo are ready to watch the games with other club members. Li says, “I’m especially excited for women’s singles skating. All of the athletes have trained so hard to get there, and I’m really looking forward to watching all the beautiful skating. Especially Kaori Sakamoto.”

“I’m excited to watch Alysa Liu and Ami Nakai,” adds Paredes Rioboo.

Students interested in joining the Figure Skating Club can find more information on the club’s website.


Katie Spivakovsky wins 2026 Churchill Scholarship

The MIT senior will pursue a master’s degree at Cambridge University in the U.K. this fall.


MIT senior Katie Spivakovsky has been selected as a 2026-27 Churchill Scholar and will undertake an MPhil in biological sciences at the Wellcome Sanger Institute at Cambridge University in the U.K. this fall.

Spivakovsky, who is double-majoring in biological engineering and artificial intelligence, with minors in mathematics and biology, aims to integrate computation and bioengineering in an academic research career focused on developing robust, scalable solutions that promote equitable health outcomes.

At MIT’s Bathe BioNanoLab, Spivakovsky investigates therapeutic applications of DNA origami, DNA-scaffolded nanoparticles for gene and mRNA delivery, and has co-authored a manuscript in press at Science. She leads the development of an immune therapy for cancer cachexia with a team supported by MIT’s BioMakerSpace; this work earned a silver medal at the international synthetic biology competition iGEM and was published in the MIT Undergraduate Research Journal. Previously, she worked on Merck’s Modeling & Informatics team, characterizing a cancer-associated protein mutation, and at the New York Structural Biology Center, where she improved cryogenic electron microscopy particle detection models.

On campus, Spivakovsky serves as director of the Undergraduate Initiative in the MIT Biotech Group. She is deeply committed to teaching and mentoring, and has served as a lecturer and co-director for class 6.S095 (Probability Problem Solving), a teaching assistant for classes 20.309 (Bioinstrumentation) and 20.A06 (Hands-on Making in Biological Engineering), a lab assistant for 6.300 (Signal Processing), and as an associate advisor.

“Katie is a brilliant researcher who has a keen intellectual curiosity that will make her a leader in biological engineering in the future. We are proud that she will be representing MIT at Cambridge University,” says Kim Benard, associate dean of distinguished fellowships.

The Churchill Scholarship is a highly competitive fellowship that annually offers 16 American students the opportunity to pursue a funded graduate degree in science, mathematics, or engineering at Churchill College within Cambridge University. The scholarship, established in 1963, honors former British Prime Minister Winston Churchill’s vision for U.S.-U.K. scientific exchange. Since 2017, two Kanders Churchill Scholarships have also been awarded each year for studies in science policy.

MIT students interested in learning more about the Churchill Scholarship should contact Kim Benard in MIT Career Advising and Professional Development.


Counter intelligence

Architecture students bring new forms of human-machine interaction into the kitchen.


How can artificial intelligence step out of a screen and become something we can physically touch and interact with?

That question formed the foundation of class 4.043/4.044 (Interaction Intelligence), an MIT course focused on designing a new category of AI-driven interactive objects. Known as large language objects (LLOs), these physical interfaces extend large language models into the real world. Their behaviors can be deliberately generated for specific people or applications, and their interactions can evolve from simple to increasingly sophisticated — providing meaningful support for both novice and expert users.

“I came to the realization that, while powerful, these new forms of intelligence still remain largely ignorant of the world outside of language,” says Marcelo Coelho, associate professor of the practice in the MIT Department of Architecture, who has been teaching the design studio for several years and directs the Design Intelligence Lab. “They lack real-time, contextual understanding of our physical surroundings, bodily experiences, and social relationships to be truly intelligent. In contrast, LLOs are physically situated and interact in real time with their physical environment. The course is an attempt to both address this gap and develop a new kind of design discipline for the age of AI.”

Given the assignment to design an interactive device that they would want in their lives, students Jacob Payne and Ayah Mahmoud focused on the kitchen. While they each enjoy cooking and baking, their design inspiration came from the first home computer: the Honeywell 316 Kitchen Computer, marketed by Neiman Marcus in 1969. Priced at $10,000, it is not known to have ever sold a single unit.

“It was an ambitious but impractical early attempt at a home kitchen computer,” says Payne, an architecture graduate student. “It made an intriguing historical reference for the project.”

“As somebody who likes learning to cook — especially now, in college as an undergrad — the thought of designing something that makes cooking easy for those who might not have a cooking background and just want a nice meal that satisfies their cravings was a great starting point for me,” says Mahmoud, a senior design major.

“We thought about the leftover ingredients you have in the refrigerator or pantry, and how AI could help you find new creative uses for things that you may otherwise throw away,” says Payne.

Generative cuisine

The students designed their device — named Kitchen Cosmo — with instructions to function as a “recipe generator.” One challenge was prompting the LLM to consistently acknowledge real-world cooking parameters, such as heating, timing, or temperature. One issue they worked out was having the LLM recognize flavor profiles and spices accurate to regional and cultural dishes around the world to support a wider range of cuisines. Troubleshooting included taste-testing recipes Kitchen Cosmo generated. Not every early recipe produced a winning dish.

“There were lots of small things that AI wasn't great at conceptually understanding,” says Mahmoud. “An LLM needs to fundamentally understand human taste to make a great meal.”

They fine-tuned their device to allow for the myriad ways people approach preparing a meal. Is this breakfast, lunch, dinner, or a snack? How advanced of a cook are you? How much meal prep time do you have? How many servings will you make? Dietary preferences were also programmed, as well as the type of mood or vibe you want to achieve. Are you feeling nostalgic, or are you in a celebratory mood? There’s a dial for that.

“These selections were the focal point of the device because we were curious to see how the LLM would interpret subjective adjectives as inputs and use them to transform the type of recipe outputs we would get,” says Payne.
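
The article does not publish Kitchen Cosmo’s software, but as a rough sketch of how dial settings like these could be folded into an LLM prompt (every field name and phrasing below is invented for illustration):

```python
def build_recipe_prompt(ingredients, meal, skill, prep_minutes,
                        servings, diet, vibe):
    """Illustrative only: turn physical dial settings and scanned
    ingredients into a structured recipe-generation prompt."""
    return (
        f"You are a kitchen assistant. Using mostly these ingredients: "
        f"{', '.join(ingredients)}, plus common household spices and "
        f"condiments, write one {meal} recipe.\n"
        f"- Cook skill level: {skill}\n"
        f"- Maximum total time: {prep_minutes} minutes\n"
        f"- Servings: {servings}\n"
        f"- Dietary preferences: {diet or 'none'}\n"
        f"- Desired mood or vibe: {vibe}\n"
        f"Give exact quantities, temperatures, and timings."
    )

print(build_recipe_prompt(
    ingredients=["leftover rice", "eggs", "scallions"],
    meal="dinner", skill="beginner", prep_minutes=20,
    servings=2, diet="vegetarian", vibe="nostalgic",
))
```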

Unlike most AI interactions that tend to be invisible, Payne and Mahmoud wanted their device to be more of a “partner” in the kitchen. The tactile interface was intentionally designed to structure the interaction, giving users a physical control over how the AI responded.

“While I’ve worked with electronics and hardware before, this project pushed me to integrate the components with a level of precision and refinement that felt much closer to a product-ready device,” says Payne of the course work.

Retro and red

After their electronic work was completed, the students designed a series of models using cardboard until settling on the final look, which Payne describes as “retro.” The body was designed in 3D modeling software and printed. In a nod to the original Honeywell computer, they painted it red.

A thin, rectangular device about 18 inches in height, Kitchen Cosmo has a webcam that hinges open to scan ingredients set on a counter. It translates these into a recipe that takes into consideration the general spices and condiments common in most households. An integrated thermal printer delivers a printed recipe that can be torn off. Recipes can be stored in a plastic receptacle on the device’s base.

While Kitchen Cosmo made a modest splash in design magazines, both students have ideas about where they will take future iterations.

Payne would like to see it “take advantage of a lot of the data we have in the kitchen and use AI as a mediator, offering tips for how to improve on what you’re cooking at that moment.”

Mahmoud is looking at how to optimize Kitchen Cosmo for her thesis. Classmates have given feedback to upgrade its abilities. One suggestion is to provide multi-person instructions that give several people tasks needed to complete a recipe. Another idea is to create a “learning mode” in which a kitchen tool — for example, a paring knife — is set in front of Kitchen Cosmo, and it delivers instructions on how to use the tool. Mahmoud has been researching food science history as well.

“I’d like to get a better handle on how to train AI to fully understand food so it can tailor recipes to a user’s liking,” she says.

Having begun her MIT education as a geologist, Mahmoud says her pivot to design has been a revelation. Each design class has been inspiring, and Coelho’s course was her first to include designing with AI. Referencing the often-mentioned analogy of “drinking from a firehose” as a student at MIT, Mahmoud says the course helped define a path for her in product design.

“For the first time, in that class, I felt like I was finally drinking as much as I could and not feeling overwhelmed. I see myself doing design long-term, which is something I didn’t think I would have said previously about technology.” 


SMART launches new Wearable Imaging for Transforming Elderly Care research group

WITEC is working to develop the first wearable ultrasound imaging system to monitor chronic conditions in real time, with the goal of enabling earlier detection and timely intervention.


What if ultrasound imaging were no longer confined to hospitals? Patients with chronic conditions, such as hypertension and heart failure, could be monitored continuously and in real time at home or on the move, giving health care practitioners ongoing clinical insights instead of occasional snapshots: a scan here and a check-up there. This shift from reactive, hospital-based care to preventative, community- and home-based care could enable earlier detection, timely intervention, and truly personalized care.

Bringing this vision to reality, the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, has launched a new collaborative research project: Wearable Imaging for Transforming Elderly Care (WITEC). 

WITEC marks a pioneering effort spanning wearable technology, medical imaging, and materials science research. It will be dedicated to foundational research and development of the world’s first wearable ultrasound imaging system capable of 48-hour intermittent cardiovascular imaging for continuous, real-time monitoring and diagnosis of chronic conditions such as hypertension and heart failure.

This multi-million dollar, multi-year research program, supported by the National Research Foundation (NRF) Singapore under its Campus for Research Excellence and Technological Enterprise program, brings together top researchers and expertise from MIT, Nanyang Technological University (NTU Singapore), and the National University of Singapore (NUS). Tan Tock Seng Hospital (TTSH) is WITEC’s clinical collaborator and will conduct patient trials to validate long-term heart imaging for chronic cardiovascular disease management.

“Addressing society’s most pressing challenges requires innovative, interdisciplinary thinking. Building on SMART’s long legacy in Singapore as a hub for research and innovation, WITEC will harness interdisciplinary expertise — from MIT and leading institutions in Singapore — to advance transformative research that creates real-world impact and benefits Singapore, the U.S., and societies all over. This is the kind of collaborative research that not only pushes the boundaries of knowledge, but also redefines what is possible for the future of health care,” says Bruce Tidor, chief executive officer and interim director of SMART, who is also an MIT professor of biological engineering and electrical engineering and computer science.

Industry-leading precision equipment and capabilities

To support this work, WITEC’s laboratory is equipped with advanced tools, including Southeast Asia’s first sub-micrometer 3D printer and the latest Verasonics Vantage NXT 256 ultrasonic imaging system, which is the first unit of its kind in Singapore.

Unlike conventional 3D printers that operate at millimeter or micrometer scales, WITEC’s 3D printer can achieve sub‑micrometer resolution, allowing components to be fabricated at the level of single cells or tissue structures. With this capability, WITEC researchers can prototype bioadhesive materials and device interfaces with unprecedented accuracy — essential to ensuring skin‑safe adhesion and stable, long‑term imaging quality.

Complementing this is the latest Verasonics ultrasonic imaging system. Equipped with a new transducer adapter and supporting a significantly larger number of probe control channels than existing systems, it gives researchers the freedom to test highly customized imaging methods. This allows more complex beamforming, higher‑resolution image capture, and integration with AI‑based diagnostic models — opening the door to long‑duration, real‑time cardiovascular imaging not possible with standard hospital equipment.

Together, these technologies allow WITEC to accelerate the design, prototyping, and testing of its wearable ultrasound imaging system, and to demonstrate imaging quality on phantoms and healthy subjects.

Transforming chronic disease care through wearable innovation 

Chronic diseases are rising rapidly in Singapore and globally, especially among the aging population and individuals with multiple long-term conditions. This trend highlights the urgent need for effective home-based care and easy-to-use monitoring tools that go beyond basic wellness tracking.

Current consumer wearables, such as smartwatches and fitness bands, offer limited physiological data like heart rate or step count. While useful for general health, they lack the depth needed to support chronic disease management. Traditional ultrasound systems, although clinically powerful, are bulky, operator-dependent, can only be deployed episodically in hospitals, and are limited to snapshots in time, making them unsuitable for long-term, everyday use.

WITEC aims to bridge this gap with its wearable ultrasound imaging system that uses bioadhesive technology to enable up to 48 hours of uninterrupted imaging. Combined with AI-enhanced diagnostics, the innovation is aimed at supporting early detection, home-based pre-diagnosis, and continuous monitoring of chronic diseases.

Beyond improving patient outcomes, this innovation could help ease labor shortages by freeing up ultrasound operators, nurses, and doctors to focus on more complex care, while reducing demand for hospital beds and resources. By shifting monitoring to homes and communities, WITEC’s technology will enable patient self-management and timely intervention, potentially lowering health-care costs and alleviating the increasing financial and manpower pressures of an aging population.

Driving innovation through interdisciplinary collaboration

WITEC is led by the following co-lead principal investigators: Xuanhe Zhao, professor of mechanical engineering and professor of civil and environmental engineering at MIT; Joseph Sung, senior vice president of health and life sciences at NTU Singapore and dean of the Lee Kong Chian School of Medicine (LKCMedicine); Cher Heng Tan, assistant dean of clinical research at LKCMedicine; Chwee Teck Lim, NUS Society Professor of Biomedical Engineering at NUS and director of the Institute for Health Innovation and Technology at NUS; and Xiaodong Chen, distinguished university professor at the School of Materials Science and Engineering within NTU. 

“We’re extremely proud to bring together an exceptional team of researchers from Singapore and the U.S. to pioneer core technologies that will make wearable ultrasound imaging a reality. This endeavor combines deep expertise in materials science, data science, AI diagnostics, biomedical engineering, and clinical medicine. Our phased approach will accelerate translation into a fully wearable platform that reshapes how chronic diseases are monitored, diagnosed and managed,” says Zhao, who serves as a co-lead PI of WITEC.

Research roadmap with broad impact across health care, science, industry, and economy

Bringing together leading experts across interdisciplinary fields, WITEC will advance foundational work in soft materials, transducers, microelectronics, data science and AI diagnostics, clinical medicine, and biomedical engineering. As a deep-tech R&D group, its breakthroughs will have the potential to drive innovation in health-care technology and manufacturing, diagnostics, wearable ultrasonic imaging, metamaterials, and AI-powered health analytics. WITEC’s work is also expected to accelerate growth in high-value jobs across research, engineering, clinical validation, and health-care services, and attract strategic investments that foster biomedical innovation and industry partnerships in Singapore, the United States, and beyond.

“Chronic diseases present significant challenges for patients, families, and health-care systems, and with aging populations such as Singapore, those challenges will only grow without new solutions. Our research into a wearable ultrasound imaging system aims to transform daily care for those living with cardiovascular and other chronic conditions — providing clinicians with richer, continuous insights to guide treatment, while giving patients greater confidence and control over their own health. WITEC’s pioneering work marks an important step toward shifting care from episodic, hospital-based interventions to more proactive, everyday management in the community,” says Sung, who serves as co‑lead PI of WITEC.

Led by Violet Hoon, senior consultant at TTSH, clinical trials are expected to commence this year to validate long-term heart monitoring in the management of chronic cardiovascular disease. Over the next three years, WITEC aims to develop a fully integrated platform capable of 48-hour intermittent imaging through innovations in bioadhesive couplants, nanostructured metamaterials, and ultrasonic transducers.

As MIT’s research enterprise in Singapore, SMART is committed to advancing breakthrough technologies that address pressing global challenges. WITEC adds to SMART’s existing research endeavors that foster a rich exchange of ideas through collaboration with leading researchers and academics from the United States, Singapore, and around the world in key areas such as antimicrobial resistance, cell therapy development, precision agriculture, AI, and 3D-sensing technologies.


New tissue models could help researchers develop drugs for liver disease

Two models more accurately replicate the physiology of the liver, offering a new way to test treatments for fat buildup.


More than 100 million people in the United States suffer from metabolic dysfunction-associated steatotic liver disease (MASLD), characterized by a buildup of fat in the liver. This condition can lead to the development of more severe liver disease that causes inflammation and fibrosis.

In hopes of discovering new treatments for these liver diseases, MIT engineers have designed a new type of tissue model that more accurately mimics the architecture of the liver, including blood vessels and immune cells.

Reporting their findings today in Nature Communications, the researchers showed that this model could accurately replicate the inflammation and metabolic dysfunction that occur in the early stages of liver disease. Such a device could help researchers identify and test new drugs to treat those conditions.

This is the latest study in a larger effort by this team to use these types of tissue models, also known as microphysiological systems, to explore human liver biology, which cannot be easily replicated in mice or other animals.

In another recent paper, the researchers used an earlier version of their liver tissue model to explore how the liver responds to resmetirom. This drug is used to treat an advanced form of liver disease called metabolic dysfunction-associated steatohepatitis (MASH), but it is only effective in about 30 percent of patients. The team found that the drug can induce an inflammatory response in liver tissue, which may help to explain why it doesn’t help all patients.

“There are already tissue models that can make good preclinical predictions of liver toxicity for certain drugs, but we really need to better model disease states, because now we want to identify drug targets, we want to validate targets. We want to look at whether a particular drug may be more useful early or later in the disease,” says Linda Griffith, the School of Engineering Professor of Teaching Innovation at MIT, a professor of biological engineering and mechanical engineering, and the senior author of both studies.

Former MIT postdoc Dominick Hellen is the lead author of the resmetirom paper, which appeared Jan. 14 in Communications Biology. Erin Tevonian PhD ’25 and PhD candidate Ellen Kan, both in the Department of Biological Engineering, are the lead authors of today’s Nature Communications paper on the new microphysiological system.

Modeling drug response

In the Communications Biology paper, Griffith’s lab worked with a microfluidic device that she originally developed in the 1990s, known as the LiverChip. This chip offers a simple scaffold for growing 3D models of liver tissue from hepatocytes, the primary cell type in the liver.

This chip is widely used by pharmaceutical companies to test whether their new drugs have adverse effects on the liver, which is an important step in drug development because most drugs are metabolized by the liver.

For the new study, Griffith and her students modified the chip so that it could be used to study MASLD.

Patients with MASLD, a buildup of fat in the liver, can eventually develop MASH, a more severe disease that occurs when scar tissue called fibrosis forms in the liver. Currently, resmetirom and the GLP-1 drug semaglutide are the only medications that are FDA-approved to treat MASH. Finding new drugs is a priority, Griffith says.

“You’re never declaring victory with liver disease with one drug or one class of drugs, because over the long term there may be patients who can’t use them, or they may not be effective for all patients,” she says.

To create a model of MASLD, the researchers exposed the tissue to high levels of insulin, along with large quantities of glucose and fatty acids. This led to a buildup of fatty tissue and the development of insulin resistance, a trait that is often seen in MASLD patients and can lead to type 2 diabetes.

Once that model was established, the researchers treated the tissue with resmetirom, a drug that works by mimicking the effects of thyroid hormone, which stimulates the breakdown of fat.

To their surprise, the researchers found that this treatment could also lead to an increase in immune signaling and markers of inflammation.

“Because resmetirom is primarily intended to reduce hepatic fibrosis in MASH, we found the result quite paradoxical,” Hellen says. “We suspect this finding may help clinicians and scientists alike understand why only a subset of patients respond positively to the thyromimetic drug. However, additional experiments are needed to further elucidate the underlying mechanism.”

A more realistic liver model

In the Nature Communications paper, the researchers reported a new type of chip that allows them to more accurately reproduce the architecture of the human liver. The key advance was developing a way to induce blood vessels to grow into the tissue. These vessels can deliver nutrients and also allow immune cells to flow through the tissue.

“Making more sophisticated models of liver that incorporate features of vascularity and immune cell trafficking that can be maintained over a long time in culture is very valuable,” Griffith says. “The real advance here was showing that we could get an intimate microvascular network through liver tissue and that we could circulate immune cells. This helped us to establish differences between how immune cells interact with the liver cells in a type 2 diabetes state and a healthy state.”

As the liver tissue matured, the researchers induced insulin resistance by exposing the tissue to increased levels of insulin, glucose, and fatty acids.

As this disease state developed, the researchers observed changes in how hepatocytes clear insulin and metabolize glucose, as well as narrower, leakier blood vessels that reflect microvascular complications often seen in diabetic patients. They also found that insulin resistance leads to an increase in markers of inflammation that attract monocytes into the tissue. Monocytes are the precursors of macrophages, immune cells that help with tissue repair during inflammation and are also observed in the liver of patients with early-stage liver disease.

“This really shows that we can model the immune features of a disease like MASLD, in a way that is all based on human cells,” Griffith says.

The research was funded by the National Institutes of Health, the National Science Foundation Graduate Research Fellowship program, Novo Nordisk, the Massachusetts Life Sciences Center, and the Siebel Scholars Foundation.


Your future home might be framed with printed plastic

MIT engineers are using recycled plastic to 3D print construction-grade floor trusses.


The plastic bottle you just tossed in the recycling bin could provide structural support for your future house.

MIT engineers are using recycled plastic to 3D print construction-grade beams, trusses, and other structural elements that could one day offer lighter, modular, and more sustainable alternatives to traditional wood-based framing.

In a paper published in the Solid Freeform Fabrication Symposium Proceedings, the MIT team presents the design for a 3D-printed floor truss system made from recycled plastic.

A traditional floor truss is made from wood beams that connect via metal plates in a pattern resembling a ladder with diagonal rungs. Set on its edge and combined with other parallel trusses, the resulting structure provides support for flooring material such as plywood that lies over the trusses.

The MIT team printed four long trusses out of recycled plastic and configured them into a conventional plywood-topped floor frame, then tested the structure’s load-bearing capacity. The printed flooring held over 4,000 pounds, exceeding key building standards set by the U.S. Department of Housing and Urban Development.

The plastic-printed trusses weigh about 13 pounds each, which is lighter than a comparable wood-based truss, and they can be printed on a large-scale industrial printer in under 13 minutes. In addition to floor trusses, the group is working on printing other elements and combining them into a full frame for a modest-sized home.

The researchers envision that as global demand for housing eclipses the supply of wood in the coming years, single-use plastics such as water bottles and food containers could get a second life as recycled framing material to alleviate both a global housing crisis and the overwhelming demand for timber.

“We’ve estimated that the world needs about 1 billion new homes by 2050. If we try to make that many homes using wood, we would need to clear-cut the equivalent of the Amazon rainforest three times over,” says AJ Perez, a lecturer in the MIT School of Engineering and research scientist in the MIT Office of Innovation. “The key here is: We recycle dirty plastic into building products for homes that are lighter, more durable, and sustainable.”

Perez’s co-authors on the study are graduate students Tyler Godfrey, Kenan Sehnawi, and Arjun Chandar, along with professor of mechanical engineering David Hardt; all are members of the MIT Laboratory for Manufacturing and Productivity.

Printing dirty

In 2019, Perez and Hardt started MIT HAUS, a group within the Laboratory for Manufacturing and Productivity that aims to produce homes from recycled polymer products, using large-scale additive manufacturing, which encompasses technologies that are capable of producing big structures, layer-by-layer, in relatively short timescales.

Today, some companies are exploring large-scale additive manufacturing to 3D-print modest-sized homes. These efforts mainly focus on printing with concrete or clay — materials whose production carries a large negative environmental impact. The house structures that have been printed so far are largely walls. The MIT HAUS group is among the first to consider printing structural framing elements such as foundation pilings, floor trusses, stair stringers, roof trusses, wall studs, and joists.

What’s more, they are seeking to do so not with cement, but with recycled “dirty” plastic — plastic that doesn’t have to be cleaned and preprocessed before reuse. The researchers envision that one day, used bottles and food containers could be fed directly into a shredder, pelletized, then fed into a large-scale additive manufacturing machine to become structural composite construction components. The plastic composite parts would be light enough to transport via pickup truck rather than a traditional lumber-hauling 18-wheeler. At the construction site, the elements could be quickly fitted into a lightweight yet sturdy home frame.

“We are starting to crack the code on the ability to process and print really dirty plastic,” Perez says. “The questions we’ve been asking are, what is the dirty, unwanted plastic good for, and how do we use the dirty plastic as-is?”

Weight class

The team’s new study is one step toward that overall goal of sustainable, recycled construction. In this work, they developed a design for a printed floor truss made from recycled plastic. They designed the truss with a high stiffness-to-weight ratio, meaning that it should be able to support a given amount of weight with minimal deflection, or bending. (Think of being able to walk across a floor without it sagging between the joists.)

The researchers first explored a handful of possible truss designs in simulation, and put each design through a simulated load-bearing test. Their modeling showed that one design in particular exhibited the highest stiffness-to-weight ratio and was therefore the most promising pattern to print and physically test. The design is close to the traditional wood-based floor truss pattern resembling a ladder with diagonal, triangular rungs. The team made a slight adjustment to this design, adding small reinforcing elements to each node where a “rung” met the main truss frame.
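
For readers who want the underlying arithmetic, here is a minimal sketch of that kind of ranking, using the textbook midspan-deflection formula for a simply supported span under a center point load, d = PL^3/(48EI). The design names, rigidities, and masses below are hypothetical placeholders, not values from the team's finite-element models:

```python
# A minimal, illustrative comparison of candidate truss designs by
# stiffness-to-weight ratio. All design names and numbers are hypothetical
# placeholders, not values from the MIT team's simulations.

def center_deflection(load_n: float, span_m: float, ei: float) -> float:
    """Midspan deflection (m) of a simply supported span under a center point
    load: d = P * L^3 / (48 * EI), where EI is the flexural rigidity."""
    return load_n * span_m**3 / (48 * ei)

# (design name, effective flexural rigidity EI in N*m^2, mass in kg)
candidates = [
    ("plain ladder",     9.0e4, 6.2),
    ("diagonal rungs",   1.4e5, 6.0),
    ("reinforced nodes", 1.6e5, 6.4),
]

SPAN_M = 2.44    # an 8-foot truss, in meters
LOAD_N = 1334.0  # roughly a 300-pound load, in newtons

for name, ei, mass_kg in candidates:
    d = center_deflection(LOAD_N, SPAN_M, ei)
    stiffness = LOAD_N / d  # newtons of load per meter of deflection
    print(f"{name:18s} deflection = {d * 1000:5.1f} mm   "
          f"stiffness-to-weight = {stiffness / mass_kg:9.0f} N/m per kg")
```

With these made-up numbers, the reinforced-node design scores highest, mirroring the kind of comparison the team's simulations performed.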

To print the design, Perez and his colleagues went to MIT’s Bates Research and Engineering Center, which houses the group’s industrial-scale 3D printer — a room-sized machine capable of printing large structures at a rate of up to 80 pounds of material per hour. For their preliminary study, the researchers used pellets made of a combination of recycled PET polymers and glass fibers — a mixture that improves the material’s printability and durability. They obtained the material from an aerospace materials company, then fed the pellets into the printer as composite “ink.”

The team printed four trusses, each measuring 8 feet long, 1 foot high, and about 1 inch wide. Each truss took about 13 minutes to print. Perez and Godfrey spaced the trusses apart in a parallel configuration similar to traditional wood-based trusses, and screwed them into a sheet of plywood to mimic a 4-x-8-foot floor frame. They placed bags of sand and concrete of increasing weight in the center of the flooring system and measured the amount of deflection that the trusses experienced underneath.

The trusses easily withstood loads of 300 pounds, well above the deflection standards set by the U.S. Department of Housing and Urban Development. The researchers didn’t stop there, continuing to add weight. Only when the loads reached over 4,000 pounds did the trusses finally buckle and crack.

In terms of stiffness, the printed trusses meet existing building codes in the U.S. To make them ready for wide adoption, Perez says the cost of producing the structures will have to be brought down to compete with the price of wood. The trusses in the new study were printed using recycled plastic, but from a source that he describes as the “crème de la crème of recycled feedstocks.” The plastic is factory-discarded material, but is not quite the “dirty” plastic that he aims ultimately to shred, print, and build.

The current study demonstrates that it is possible to print structural building elements from recycled plastic. Perez is in the process of working with dirtier plastic, such as used soda bottles that still hold a bit of liquid residue, to see how such contaminants affect the quality of the printed product.

If dirty plastics can be made into durable housing structures, Perez says “the idea is to bring shipping containers close to where you know you’ll have a lot of plastic, like next to a football stadium. Then you could use off-the-shelf shredding technology and feed that dirty shredded plastic into a large-scale additive manufacturing system, which could exist in micro-factories, just like bottling centers, around the world. You could print the parts for entire buildings that would be light enough to transport on a moped or pickup truck to where homes are most needed.”

This research was supported, in part, by the Gerstner Foundation, the Chandler Health of the Planet grant, and Cincinnati Incorporated.


Young and gifted

Joshua Bennett’s new book profiles American prodigies, examining the personal and social dimensions of cultivating promise.


James Baldwin was a prodigy. That is not the first thing most people associate with a writer who once declared that he “had no childhood” and whose work often elides the details of his early life in New York, in the 1920s and 1930s. Still, by the time Baldwin was 14, he was a successful church preacher, excelling in a role otherwise occupied by adults.

Throw in the fact that Baldwin was reading Dostoyevsky by the fifth grade, wrote “like an angel” according to his elementary school principal, edited his middle school periodical, and wrote for his high school magazine, and it’s clear he was a precocious wordsmith.

These matters are complicated, of course. To MIT scholar Joshua Bennett, Baldwin’s writings reveal enough for us to conclude that his childhood was marked by a “relentless introspection” as he sought to come to terms with the world. Beyond that, Bennett thinks, some of Baldwin’s work, and even the one children’s book he wrote, yields “messages of persistence,” recognizing the need for any child to receive encouragement and education.

And if someone as precocious as Baldwin still needed cultivation, then virtually everyone does. If we act as if talent blossoms on its own, we are ignoring the vital role communities, teachers, and families play in helping artists — or anyone — develop their skills.

“We talk as if these people emerged ex nihilo,” Bennett says. “When all along the way, there were people who cultivated them, and our children deserve the same — all of the children of the world. We have a dominant model of genius that is fundamentally flawed, in that it often elides the role of communities and cultural institutions.”

Bennett explores these issues in a new book, “The People Can Fly: American Promise, Black Prodigies, and the Greatest Miracle of All Time,” published this week by Hachette. A literary scholar and poet himself, Bennett is the Distinguished Chair of the Humanities at MIT and a professor of literature.

“The People Can Fly” accomplishes many kinds of work at once: Bennett offers a series of profiles, carefully wrought to see how some prominent figures were able to flourish from childhood forward. And he closely reads their works for indications about how they understood the shape of their own lives. In so doing, Bennett underscores the significance of the social settings that prodigious talents grow up in. For good measure, he also offers reflections on his own career trajectory and encounters with these artists, driving home their influence and meaning.

Reading about these many prodigies, one by one, helps readers build a picture of the realities, and complications, of trying to sustain early promise.

“It’s part of what I tell my students — the individual is how you get to the universal,” Bennett says. “It doesn’t mean I need to share certain autobiographical impulses with, say, Hemingway. It’s just that I think those touchpoints exist in all great works of art.”

Space odyssey

For Bennett, the idea of writing about prodigies grew naturally from his research and teaching, which ranges broadly in American and global literature. Bennett began contemplating “the idea of promise as this strange, idiosyncratic quality, this thing we see through various acts, perhaps something as simple as a little riff you hear a child sing, an element of their drawings, or poems.” At the same time, he notes, people struggle with “the weight of promise. There is a peril that can come along with promise. Promise can be taken away.”

Ultimately, Bennett adds, “I started thinking a little more about what promise has meant in African American communities,” in particular. Ranging widely in the book, Bennett consistently loops back to a core focus on the ideals, communities, and obstacles many Black artists grew up with. These artists and intellectuals include Malcolm X, Gwendolyn Brooks, Stevie Wonder, and the late poet and scholar Nikki Giovanni.

Bennett’s chapter on Giovanni shows his own interest in placing an artist’s life in historical context, and picks up on motifs relating back to childhood and personal promise.

Giovanni attended Fisk University early, enrolling at 17. Later she enrolled in Columbia University’s Master of Fine Arts program, where poetry students were expected to produce publishable work over two years. In her first year, Giovanni’s poetry collection, “Black Feeling, Black Talk,” not only got published but became a hit, selling 10,000 copies. She left the program early — without a degree, since it required two years of residency. In short, she was always going places.

Giovanni went on to become one of the most celebrated poets of her time, and spent decades on the faculty at Virginia Tech. One idea that kept recurring in her work: dreams of space exploration. Giovanni’s work transmitted a clear enthusiasm for exploring the stars.

“Looking through her work, you see space travel everywhere,” Bennett says. “Even in her most prominent poem, ‘Ego trippin (there may be a reason why),’ there is this sense of someone who’s soaring over the landscape — ‘I’m so hip even my errors are correct.’ There is this idea of an almost divine being.”

That enthusiasm was accompanied by the recognition that astronauts, at least at one time, emerged from a particular slice of society. Indeed, Giovanni often called publicly for more Americans to have the opportunity to become astronauts. A pressing issue, for her, was making dreams achievable for more people.

“Nikki Giovanni is very invested in these sorts of questions, as a writer, as an educator, and as a big thinker,” Bennett says. “This kind of thinking about the cosmos is everywhere in her work. But inside of that is a critique, that everyone should have a chance to expand the orbit of their dreaming. And dream of whatever they need to.”

And as Bennett draws out in “The People Can Fly,” stories and visions of flying have run deep in Black culture, offering a potent symbolism and a mode of “holding on to a deeper sense that the constraints of this present world are not all-powerful or everlasting. The miraculous is yet available. The people could fly, and still can.”

Children with promise, families with dreams

Other artists have praised “The People Can Fly.” The actor, producer, and screenwriter Lena Waithe has said that “Bennett’s poetic nature shines through on every page. … This book is a masterclass in literature and a necessary reminder to cherish the child in all of us.”

Certainly Bennett brings a vast sense of scope to “The People Can Fly,” ranging across centuries of history. Phillis Wheatley, a formerly enslaved woman whose 1773 poetry collection was later praised by George Washington, was an early American prodigy, studying the classics as a teenager and releasing her work at age 20. Mae Jemison, the first Black female astronaut, enrolled at Stanford University at age 16, spurred by family members who taught her about the stars. All told, Bennett weaves together a scholarly tapestry about hope, ambition, and, at times, opportunity.

Often, that hope and ambition belong to whole families, not just one gifted child. As Nikki Giovanni herself quipped, while giving the main address at MIT’s annual Martin Luther King convocation in 1990, “the reason you go to college is that it makes your mother happy.”

Bennett can relate, having come from a family where his mother was the only prior relative to have attended college. As a kid in the 1990s, growing up in Yonkers, New York, he had a Princeton University sweatshirt, inspired by his love of the television program “The Fresh Prince of Bel Air.” The program featured a character named Phillip Banks — popularly known as “Uncle Phil” — who was, within the world of the show, a Princeton alumnus.

“I would ask my Mom, ‘How do I get into Princeton?’” Bennett recalls. “She would just say, ‘Study hard, honey.’ No one but her had even been to college in my family. No one had been to Princeton. No one had set foot on Princeton University’s campus. But the idea that it was possible in the country we lived in, for a woman who was the daughter of two sharecroppers, and herself grew up in a tenement with her brothers and sister, and nonetheless went on to play at Carnegie Hall and get a college degree and buy her mother a color TV — it’s fascinating to me.”

The postscript to that anecdote is that Bennett did go on to earn his PhD from Princeton. Behind many children with promise are families and communities with dreams for those kids.

“There’s something to it I refuse to relinquish,” Bennett says. “My mother’s vision was a powerful and persistent one — she believed that the future also belonged to her children.”


How a unique class of neurons may set the table for brain development

Somatostatin-expressing neurons follow a unique trajectory when forming connections in the visual cortex that may help establish the conditions needed for sensory experience to refine circuits.


The way the brain develops can shape us throughout our lives, so neuroscientists are intensely curious about how it happens. A new study by researchers in The Picower Institute for Learning and Memory at MIT that focused on visual cortex development in mice reveals that an important class of neurons follows a set of rules that, while surprising, might just create the right conditions for circuit optimization.

During early brain development, multiple types of neurons emerge in the visual cortex (where the brain processes vision). Many are “excitatory,” driving the activity of brain circuits, and others are “inhibitory,” meaning they control that activity. Just like a car needs not only an engine and a gas pedal, but also a steering wheel and brakes, a healthy balance between excitation and inhibition is required for proper brain function. During a “critical period” of development in the visual cortex, soon after the eyes first open, excitatory and inhibitory neurons forge and edit millions of connections, or synapses, to adapt nascent circuits to the incoming flood of visual experience. Over many days, in other words, the brain optimizes its attunement to the world.

In the new study in The Journal of Neuroscience, a team led by MIT research scientist Josiah Boivin and Professor Elly Nedivi visually tracked somatostatin (SST)-expressing inhibitory neurons forging synapses with excitatory cells along their sprawling dendrite branches, illustrating the action before, during, and after the critical period with unprecedented resolution. Several of the rules the SST cells appeared to follow were unexpected; unlike other cell types, for instance, their development did not depend on visual input. But now that the scientists know these neurons’ unique trajectory, they have a new idea about how it may enable sensory activity to influence development: SST cells might help usher in the critical period by establishing the baseline level of inhibition needed to ensure that only certain types of sensory input will trigger circuit refinement.

“Why would you need part of the circuit that’s not really sensitive to experience? It could be that it’s setting things up for the experience-dependent components to do their thing,” says Nedivi, the William R. and Linda R. Young Professor in the Picower Institute and MIT’s departments of Biology and Brain and Cognitive Sciences.

Boivin adds: “We don’t yet know whether SST neurons play a causal role in the opening of the critical period, but they are certainly in the right place at the right time to sculpt cortical circuitry at a crucial developmental stage.”

A unique trajectory

To visualize SST-to-excitatory synapse development, Nedivi and Boivin’s team used a genetic technique that pairs expression of synaptic proteins with fluorescent molecules to resolve the appearance of the “boutons” SST cells use to reach out to excitatory neurons. They then performed a technique called eMAP, developed by Kwanghun Chung’s lab in the Picower Institute, that expands and clears brain tissue to increase magnification, allowing super-resolution visualization of the actual synapses those boutons ultimately formed with excitatory cells along their dendrites. Co-author and postdoc Bettina Schmerl helped lead the eMAP work.

These new techniques revealed that SST bouton appearance and then synapse formation surged dramatically when the eyes opened, and then as the critical period got underway. But while excitatory neurons during this time frame are still maturing, first in the deepest layers of the cortex and later in its more superficial layers, the SST boutons blanketed all layers simultaneously, meaning that, perhaps counterintuitively, they sought to establish their inhibitory influence regardless of the maturation stage of their intended partners.

Many studies have shown that eye opening and the onset of visual experience set in motion the development and elaboration of excitatory cells and another major inhibitory neuron type (parvalbumin-expressing cells). Raising mice in the dark for different lengths of time, for instance, can distinctly alter what happens with these cells. Not so for the SST neurons. The new study showed that varying lengths of darkness had no effect on the trajectory of SST bouton and synapse appearance; it remained invariant, suggesting it is preordained by a genetic program or an age-related molecular signal, rather than experience.

Moreover, after the initial frenzy of synapse formation during development, many synapses are then edited, or pruned away, so that only the ones needed for appropriate sensory responses endure. Again, the SST boutons and synapses proved to be exempt from these redactions. Although the pace of new SST synapse formation slowed at the peak of the critical period, the net number of synapses never declined, and even continued increasing into adulthood.

“While a lot of people think that the only difference between inhibition and excitation is their valence, this demonstrates that inhibition works by a totally different set of rules,” Nedivi says.

In all, while other cell types were tailoring their synaptic populations to incoming experience, the SST neurons appeared to provide an early but steady inhibitory influence across all layers of the cortex. After excitatory synapses have been pruned back by the time of adulthood, the continued upward trickle of SST inhibition may contribute to the increase in the inhibition to excitation ratio that still allows the adult brain to learn, but not as dramatically or as flexibly as during early childhood.

A platform for future studies

In addition to shedding light on typical brain development, Nedivi says, the study’s techniques can enable side-by-side comparisons in mouse models of neurodevelopmental disorders such as autism or epilepsy, where aberrations of excitation and inhibition balance are implicated.

Future studies using the techniques can also look at how different cell types connect with each other in brain regions other than the visual cortex, she adds.

Boivin, who will soon open his own lab as a faculty member at Amherst College, says he is eager to apply the work in new ways.

“I’m excited to continue investigating inhibitory synapse formation on genetically defined cell types in my future lab,” Boivin says. “I plan to focus on the development of limbic brain regions that regulate behaviors relevant to adolescent mental health.”

In addition to Nedivi, Boivin, and Schmerl, the paper’s other authors are Kendyll Martin and Chia-Fang Lee.

Funding for the study came from the National Institutes of Health, the Office of Naval Research, and the Freedom Together Foundation.


How generative AI can help scientists synthesize complex materials

MIT researchers’ DiffSyn model offers recipes for synthesizing new materials, enabling faster experimentation and a shorter journey from hypothesis to use.


Generative artificial intelligence models have been used to create enormous libraries of theoretical materials that could help solve all kinds of problems. Now, scientists just have to figure out how to make them.

In many cases, materials synthesis is not as simple as following a recipe in the kitchen. Factors like the temperature and length of processing can yield huge changes in a material’s properties that make or break its performance. That has limited researchers’ ability to test millions of promising model-generated materials.

Now, MIT researchers have created an AI model that guides scientists through the process of making materials by suggesting promising synthesis routes. In a new paper, they showed the model delivers state-of-the-art accuracy in predicting effective synthesis pathways for a class of materials called zeolites, which could be used to improve catalysis, adsorption, and ion exchange processes. Following its suggestions, the team synthesized a new zeolite material that showed improved thermal stability.

The researchers believe their new model could break the biggest bottleneck in the materials discovery process.

“To use an analogy, we know what kind of cake we want to make, but right now we don’t know how to bake the cake,” says lead author Elton Pan, a PhD candidate in MIT’s Department of Materials Science and Engineering (DMSE). “Materials synthesis is currently done through domain expertise and trial and error.”

The paper describing the work appears today in Nature Computational Science. Joining Pan on the paper are Soonhyoung Kwon ’20, PhD ’24; DMSE postdoc Sulin Liu; chemical engineering PhD student Mingrou Xie; DMSE postdoc Alexander J. Hoffman; Research Assistant Yifei Duan SM ’25; DMSE visiting student Thorben Prein; DMSE PhD candidate Killian Sheriff; MIT Robert T. Haslam Professor in Chemical Engineering Yuriy Roman-Leshkov; Valencia Polytechnic University Professor Manuel Moliner; MIT Paul M. Cook Career Development Professor Rafael Gómez-Bombarelli; and MIT Jerry McAfee Professor in Engineering Elsa Olivetti.

Learning to bake

Massive investments in generative AI have led companies like Google and Meta to create huge databases of candidate materials that, at least theoretically, have properties like high thermal stability and selective adsorption of gases. But making those materials can require weeks or months of careful experiments that test specific reaction temperatures, times, precursor ratios, and other factors.

“People rely on their chemical intuition to guide the process,” Pan says. “Humans are linear. If there are five parameters, we might keep four of them constant and vary one of them linearly. But machines are much better at reasoning in a high-dimensional space.”

Synthesis is now often the most time-consuming step in a material’s journey from hypothesis to use.

To help scientists navigate that process, the MIT researchers trained a generative AI model on over 23,000 material synthesis recipes described over 50 years of scientific papers. The researchers iteratively added random “noise” to the recipes during training, and the model learned to de-noise and sample from the random noise to find promising synthesis routes.

The result is DiffSyn, which uses an approach in AI known as diffusion.

“Diffusion models are basically a generative AI model like ChatGPT, but more like the DALL-E image generation model,” Pan says. “During inference, it converts noise into meaningful structure by subtracting a little bit of noise at each step. In this case, the ‘structure’ is the synthesis route for a desired material.”
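
As a rough, self-contained illustration of that idea (a toy sketch, not the actual DiffSyn architecture), the loop below starts from pure Gaussian noise and subtracts a little predicted noise at each step. Here the recipe is a short vector of normalized synthesis parameters, and `predict_noise` is a stand-in for a trained network:

```python
import numpy as np

# Toy denoising loop for a "synthesis recipe" vector (e.g., normalized
# temperature, time, and precursor ratios). Illustrative only: DiffSyn's real
# network, noise schedule, and recipe encoding are not reproduced here.

rng = np.random.default_rng(0)
N_STEPS = 50
DIM = 4  # hypothetical: temperature, time, and two precursor ratios

def predict_noise(x: np.ndarray, step: int) -> np.ndarray:
    """Stand-in for a trained denoiser; here it simply pulls the sample
    toward a made-up 'good recipe' so the loop has something to converge to."""
    target = np.array([0.6, 0.3, 0.5, 0.2])  # fabricated optimum for the demo
    return x - target

x = rng.standard_normal(DIM)  # start from pure noise
for step in reversed(range(N_STEPS)):
    x = x - predict_noise(x, step) / N_STEPS      # subtract a bit of noise per step
    if step > 0:
        x = x + 0.01 * rng.standard_normal(DIM)   # small stochastic term

print("sampled recipe (normalized):", np.round(x, 3))
```

Sampling the loop repeatedly with different random seeds yields different candidate recipes, which is the one-to-many behavior the researchers describe later in the article.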

When a scientist using DiffSyn enters a desired material structure, the model offers some promising combinations of reaction temperatures, reaction times, precursor ratios, and more.

“It basically tells you how to bake your cake,” Pan says. “You have a cake in mind, you feed it into the model, the model spits out the synthesis recipes. The scientist can pick whichever synthesis path they want, and there are simple ways to quantify the most promising synthesis path from what we provide, which we show in our paper.”

To test their system, the researchers used DiffSyn to suggest novel synthesis paths for a zeolite, a material class that is complex and takes time to form into a testable material.

“Zeolites have a very high-dimensional synthesis space,” Pan says. “Zeolites also tend to take days or weeks to crystallize, so the impact [of finding the best synthesis pathway faster] is much higher than other materials that crystallize in hours.”

The researchers were able to make the new zeolite material using synthesis pathways suggested by DiffSyn. Subsequent testing revealed the material had a promising morphology for catalytic applications.

“Scientists have been trying out different synthesis recipes one by one,” Pan says. “That makes them very time-consuming. This model can sample 1,000 of them in under a minute. It gives you a very good initial guess on synthesis recipes for completely new materials.”

Accounting for complexity

Previously, researchers have built machine-learning models that mapped a material to a single recipe. Those approaches do not take into account that there are different ways to make the same material.

DiffSyn is trained to map material structures to many different possible synthesis paths. Pan says that is better aligned with experimental reality.

“This is a paradigm shift away from one-to-one mapping between structure and synthesis to one-to-many mapping,” Pan says. “That’s a big reason why we achieved strong gains on the benchmarks.”

Moving forward, the researchers believe the approach should work to train other models that guide the synthesis of materials outside of zeolites, including metal-organic frameworks, inorganic solids, and other materials that have more than one possible synthesis pathway.

“This approach could be extended to other materials,” Pan says. “Now, the bottleneck is finding high-quality data for different material classes. But zeolites are complicated, so I can imagine they are close to the upper-bound of difficulty. Eventually, the goal would be interfacing these intelligent systems with autonomous real-world experiments, and agentic reasoning on experimental feedback to dramatically accelerate the process of materials design.”

The work was supported by MIT International Science and Technology Initiatives (MISTI), the National Science Foundation, Generalitat Valenciana, the Office of Naval Research, ExxonMobil, and the Agency for Science, Technology and Research in Singapore.


A portable ultrasound sensor may enable earlier detection of breast cancer

The new system could be used at home or in doctors’ offices to scan people who are at high risk for breast cancer.


For people who are at high risk of developing breast cancer, frequent screenings with ultrasound can help detect tumors early. MIT researchers have now developed a miniaturized ultrasound system that could make it easier for breast ultrasounds to be performed more often, either at home or at a doctor’s office.

The new system consists of a small ultrasound probe attached to an acquisition and processing module that is a little larger than a smartphone. The system can be used on the go: connected to a laptop computer, it reconstructs and displays wide-angle 3D images in real time.

“Everything is more compact, and that can make it easier to be used in rural areas or for people who may have barriers to this kind of technology,” says Canan Dagdeviren, an associate professor of media arts and sciences at MIT and the senior author of the study.

With this system, she says, more tumors could potentially be detected earlier, which increases the chances of successful treatment.

Colin Marcus PhD ’25 and former MIT postdoc Md Osman Goni Nayeem are the lead authors of the paper, which appears in the journal Advanced Healthcare Materials. Other authors of the paper are MIT graduate students Aastha Shah, Jason Hou, and Shrihari Viswanath; MIT summer intern and University of Central Florida undergraduate Maya Eusebio; MIT Media Lab Research Specialist David Sadat; MIT Provost Anantha Chandrakasan; and Massachusetts General Hospital breast cancer surgeon Tolga Ozmen.

Frequent monitoring

While many breast tumors are detected through routine mammograms, which use X-rays, tumors can develop in between yearly mammograms. These tumors, known as interval cancers, account for 20 to 30 percent of all breast cancer cases, and they tend to be more aggressive than those found during routine scans.

Detecting these tumors early is critical: When breast cancer is diagnosed in the earliest stages, the survival rate is nearly 100 percent. However, for tumors detected in later stages, that rate drops to around 25 percent.

For some individuals, more frequent ultrasound scanning in addition to regular mammograms could help to boost the number of tumors that are detected early. Currently, ultrasound is usually done only as a follow-up if a mammogram reveals any areas of concern. Ultrasound machines used for this purpose are large and expensive, and they require highly trained technicians to use them.

“You need skilled ultrasound technicians to use those machines, which is a major obstacle to getting ultrasound access to rural communities, or to developing countries where there aren’t as many skilled radiologists,” Viswanath says.

By creating ultrasound systems that are portable and easier to use, the MIT team hopes to make frequent ultrasound scanning accessible to many more people.

In 2023, Dagdeviren and her colleagues developed an array of ultrasound transducers that were incorporated into a flexible patch that can be attached to a bra, allowing the wearer to move an ultrasound tracker along the patch and image the breast tissue from different angles.

Those 2D images could be combined to generate a 3D representation of the tissue, but there could be small gaps in coverage, making it possible that small abnormalities could be missed. Also, that array of transducers had to be connected to a traditional, costly, refrigerator-sized processing machine to view the images.

In their new study, the researchers set out to develop a modified ultrasound array that would be fully portable and could create a 3D image of the entire breast by scanning just two or three locations.

The new system they developed is a chirped data acquisition system (cDAQ) that consists of an ultrasound probe and a motherboard that processes the data. The probe, which is a little smaller than a deck of cards, contains an ultrasound array arranged in the shape of an empty square, a configuration that allows the array to take 3D images of the tissue below.

This data is processed by the motherboard, which is a little bit larger than a smartphone and costs only about $300 to make. All of the electronics used in the motherboard are commercially available. To view the images, the motherboard can be connected to a laptop computer, so the entire system is portable.

“Traditional 3D ultrasound systems require power-expensive and bulky electronics, which limits their use to high-end hospitals and clinics,” Chandrakasan says. “By redesigning the system to be ultra-sparse and energy-efficient, this powerful diagnostic tool can be moved out of the imaging suite and into a wearable form factor that is accessible for patients everywhere.”

This system also uses much less power than a traditional ultrasound machine, so it can be powered with a 5V DC supply (a battery or an AC/DC adapter used to plug in small electronic devices such as modems or portable speakers).

“Ultrasound imaging has long been confined to hospitals,” says Nayeem. “To move ultrasound beyond the hospital setting, we reengineered the entire architecture, introducing a new ultrasound fabrication process, to make the technology both scalable and practical.”

Earlier diagnosis

The researchers tested the new system on one human subject, a 71-year-old woman with a history of breast cysts. They found that the system could accurately image the cysts and create a 3D image of the tissue with no gaps.

The system can image as deep as 15 centimeters into the tissue, and it can image the entire breast from two or three locations. And, because the ultrasound device sits on top of the skin without having to be pressed into the tissue like a typical ultrasound probe, the images are not distorted.

“With our technology, you simply place it gently on top of the tissue and it can visualize the cysts in their original location and with their original sizes,” Dagdeviren says.

The research team is now conducting a larger clinical trial at the MIT Center for Clinical and Translational Research and at MGH.

The researchers are also working on an even smaller version of the data processing system, which will be about the size of a fingernail. They hope to connect this to a smartphone that could be used to visualize the images, making the entire system smaller and easier to use. They also plan to develop a smartphone app that would use an AI algorithm to help guide the patient to the best location to place the ultrasound probe.

While the current version of the device could be readily adapted for use in a doctor’s office, the researchers hope that in the future a smaller version can be incorporated into a wearable sensor that could be used at home by people at high risk for developing breast cancer.

Dagdeviren is now working on launching a company to help commercialize the technology, with assistance from an MIT HEALS Deshpande Momentum Grant, the Martin Trust Center for MIT Entrepreneurship, and the MIT Media Lab WHx Women’s Health Innovation Fund.

The research was funded by a National Science Foundation CAREER Award, a 3M Non-Tenured Faculty Award, Lyda Hill Philanthropies, and the MIT Media Lab Consortium.


The philosophical puzzle of rational artificial intelligence

As AI technology advances, a new interdisciplinary course seeks to equip students with foundational critical thinking skills in computing.


To what extent can an artificial system be rational?

A new MIT course, 6.S044/24.S00 (AI and Rationality), doesn’t seek to answer this question. Instead, it challenges students to explore this and other philosophical problems through the lens of AI research. For the next generation of scholars, concepts of rationality and agency could prove integral in AI decision-making, especially when influenced by how humans understand their own cognitive limits and their constrained, subjective views of what is or isn’t rational.

This inquiry is rooted in a deep relationship between computer science and philosophy, which have long collaborated in formalizing what it is to form rational beliefs, learn from experience, and make rational decisions in pursuit of one's goals.

“You’d imagine computer science and philosophy are pretty far apart, but they’ve always intersected. The technical parts of philosophy really overlap with AI, especially early AI,” says course instructor Leslie Kaelbling, the Panasonic Professor of Computer Science and Engineering at MIT, calling to mind Alan Turing, who was both a computer scientist and a philosopher. Kaelbling herself holds an undergraduate degree in philosophy from Stanford University, noting that computer science wasn’t available as a major at the time.

Brian Hedden, a professor in the Department of Linguistics and Philosophy, holding an MIT Schwarzman College of Computing shared position with the Department of Electrical Engineering and Computer Science (EECS), who teaches the class with Kaelbling, notes that the two disciplines are more aligned than people might imagine, adding that the “differences are in emphasis and perspective.”

Tools for further theoretical thinking

Kaelbling and Hedden created AI and Rationality, offered for the first time in fall 2025, as part of the Common Ground for Computing Education, a cross-cutting initiative of the MIT Schwarzman College of Computing that brings multiple departments together to develop and teach new courses and launch new programs that blend computing with other disciplines.

With over two dozen students registered, AI and Rationality is one of two Common Ground classes with a foundation in philosophy, the other being 6.C40/24.C40 (Ethics of Computing).

While Ethics of Computing explores concerns about the societal impacts of rapidly advancing technology, AI and Rationality examines the disputed definition of rationality by considering several components: the nature of rational agency, the concept of a fully autonomous and intelligent agent, and the ascription of beliefs and desires onto these systems.

Because AI is extremely broad in its implementation and each use case raises different issues, Kaelbling and Hedden brainstormed topics that could provide fruitful discussion and engagement between the two perspectives of computer science and philosophy.

“It's important when I work with students studying machine learning or robotics that they step back a bit and examine the assumptions they’re making,” Kaelbling says. “Thinking about things from a philosophical perspective helps people back up and understand better how to situate their work in actual context.”

Both instructors stress that this isn’t a course that provides concrete answers to questions on what it means to engineer a rational agent.

Hedden says, “I see the course as building their foundations. We’re not giving them a body of doctrine to learn and memorize and then apply. We’re equipping them with tools to think about things in a critical way as they go out into their chosen careers, whether they’re in research or industry or government.”

The rapid progress of AI also presents a new set of challenges in academia. Predicting what students may need to know five years from now is something Kaelbling sees as an impossible task. “What we need to do is give them the tools at a higher level — the habits of mind, the ways of thinking — that will help them approach the stuff that we really can’t anticipate right now,” she says.

Blending disciplines and questioning assumptions

So far, the class has drawn students from a wide range of disciplines — from those firmly grounded in computing to others interested in exploring how AI intersects with their own fields of study.

Throughout the semester’s readings and discussions, students grappled with different definitions of rationality and with how those definitions pushed back against assumptions in their fields.

On what surprised her about the course, Amanda Paredes Rioboo, a senior in EECS, says, “We’re kind of taught that math and logic are this golden standard or truth. This class showed us a variety of examples that humans act inconsistently with these mathematical and logical frameworks. We opened up this whole can of worms as to whether, is it humans that are irrational? Is it the machine learning systems that we designed that are irrational? Is it math and logic itself?”

Junior Okoroafor, a PhD student in the Department of Brain and Cognitive Sciences, was appreciative of the class’s challenges and the ways in which the definition of a rational agent could change depending on the discipline. “Representing what each field means by rationality in a formal framework makes it clear exactly which assumptions are shared, and which are different, across fields.”

The co-teaching, collaborative structure of the course, as with all Common Ground endeavors, gave students and the instructors opportunities to hear different perspectives in real time.

For Paredes Rioboo, this is her third Common Ground course. She says, “I really like the interdisciplinary aspect. They’ve always felt like a nice mix of theoretical and applied from the fact that they need to cut across fields.”

According to Okoroafor, Kaelbling and Hedden demonstrated an obvious synergy between their fields; it felt, he says, as if the instructors were engaging and learning along with the class. Seeing how computer science and philosophy can inform each other helped him appreciate the disciplines’ common ground and their invaluable perspectives on intersecting issues.

He adds, “Philosophy also has a way of surprising you.”


Designing the future of metabolic health through tissue-selective drug delivery

Founded by three MIT alumni, Gensaic uses AI-guided protein design to deliver RNA and other therapeutic molecules to specific cells or areas of the body.


New treatments based on biological molecules like RNA give scientists unprecedented control over how cells function. But delivering those drugs to the right tissues remains one of the biggest obstacles to turning these promising yet fragile molecules into powerful new treatments.

Now Gensaic, founded by Lavi Erisson MBA ’19; Uyanga Tsedev SM ’15, PhD ’21; and Jonathan Hsu PhD ’22, is building an artificial intelligence-powered discovery engine to develop protein shuttles that can deliver therapeutic molecules like RNA to specific tissues and cells in the body. The company is using its platform to create advanced treatments for metabolic diseases and other conditions. It is also developing treatments in partnership with Novo Nordisk and exploring additional collaborations to amplify the speed and scale of its impact.

The founders believe their delivery technology — combined with advanced therapies that precisely control gene expression, like RNA interference (RNAi) and small activating RNA (saRNA) — will enable new ways of improving health and treating disease.

“I think the therapeutic space in general is going to explode with the possibilities our approach unlocks,” Erisson says. “RNA has become a clinical-grade commodity that we know is safe. It is easy to synthesize, and it has unparalleled specificity and reversibility. By taking that and combining it with our targeting and delivery, we can change the therapeutic landscape.”

Drinking from the firehose

Erisson worked on drug development at the large pharmaceutical company Teva before coming to MIT for his Sloan Fellows MBA in 2018.

“I came to MIT in large part because I was looking to stretch the boundaries of how I apply critical thinking,” Erisson says. “At that point in my career, I had taken about 10 drug programs into clinical development, with products on the market now. But what I didn’t have were the intellectual and quantitative tools for interrogating finance strategy and other disciplines that aren’t purely scientific. I knew I’d be drinking from the firehose coming to MIT.”

Erisson met Hsu and Tsedev, then PhD students at MIT, in a class taught by professors Harvey Lodish and Andrew Lo. The group started holding weekly meetings to discuss their research and the prospect of starting a business.

After Erisson completed his MBA program in 2019, he became chief medical and business officer at the MIT spinout Iterative Health, a company using AI to improve screening for colorectal cancer and inflammatory bowel disease that has raised over $200 million to date. There, Erisson ran a 1,400-patient study and led the development and clearance of the company’s software product.

During that time, the eventual founders continued to meet at Erisson’s house to discuss promising research avenues, including Tsedev’s work in the lab of Angela Belcher, MIT’s James Mason Crafts Professor of Biological Engineering. Tsedev’s research involved using bacteriophages, fast-replicating viruses that infect bacteria, to deliver treatments into hard-to-drug places like the brain.

As Hsu and Tsedev neared completion of their PhDs, the team decided to commercialize the technology, founding Gensaic at the end of 2021. Gensaic’s approach uses a method called unbiased directed evolution to find the best protein scaffolding to reach target tissues in the body.

“Directed evolution means having a lot of different species of proteins competing together for a certain function,” Erisson says. “The proteins are competing for the ability to reach the right cell, and we are then able to look at the genetic code of the protein that has ‘won’ that competition. When we do that process repeatedly, we find extremely adaptable proteins that can achieve the function we’re looking for.”
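
In code terms, that compete-and-select cycle resembles the toy loop below, where a hypothetical `binding_score` function stands in for the physical selection step of reaching the target cell, and mutation reintroduces diversity each round. The motif, population sizes, and mutation rate are illustrative, not Gensaic's FORGE pipeline:

```python
import random

# Toy directed-evolution loop. The scoring function, sequence length,
# population sizes, and mutation rate are all illustrative placeholders.

random.seed(1)
ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def binding_score(protein: str) -> float:
    """Stand-in for the experimental selection step: in reality, phages
    displaying each protein compete to reach the target tissue. Here we
    simply reward matches to a made-up motif."""
    return sum(1.0 for a, b in zip(protein, "MKTW" * 5) if a == b)

def mutate(protein: str, rate: float = 0.05) -> str:
    """Randomly substitute residues to reintroduce diversity each round."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in protein)

# Start from a random population of short protein sequences.
population = ["".join(random.choices(ALPHABET, k=20)) for _ in range(200)]

for generation in range(30):
    # Selection: keep the variants that "won" the competition...
    population.sort(key=binding_score, reverse=True)
    winners = population[:20]
    # ...then amplify and diversify them for the next round.
    population = [mutate(random.choice(winners)) for _ in range(200)]

best = max(population, key=binding_score)
print(f"best variant: {best}  score: {binding_score(best):.0f}")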

Initially, the founders focused on developing protein scaffolds to deliver gene therapies. Gensaic has since pivoted to focus on delivering molecules like siRNA and RNAi, which have been hard to deliver outside of the liver.

Today Gensaic has screened more than 500 billion different proteins using phage display and directed evolution. It calls its platform FORGE, for Functional Optimization by Recursive Genetic Evolution.

Erisson says Gensaic’s delivery vehicles can also carry multiple RNA molecules into cells at the same time, giving doctors a novel and powerful set of tools to treat and prevent diseases.

“Today FORGE is built into the idea of multifunctional medicines,” Erisson says. “We are moving into a future where we can extract multiple therapeutic mechanisms from a single molecule. We can combine proteins with multiple tissue selectivity and multiple molecules of siRNA or other therapeutic modalities, and affect complex disease system biology with a single molecule.”

A “universe of opportunity”

The founders believe their approach will enable new ways of improving health by delivering advanced therapies directly to new places in the body. Precise delivery of drugs to anywhere in the body could not only unlock new therapeutic targets but also boost the effectiveness of existing treatments and reduce side effects.

“We’ve found we can get to the brain, and we can get to specific tissues like skeletal and adipose tissue,” Erisson says. “We’re the only company, to my knowledge, that has a protein-based delivery mechanism to get to adipose tissue.”

Delivering drugs into fat and muscle cells could be used to help people lose weight, retain muscle, and prevent conditions like fatty liver disease or osteoporosis.

Erisson says combining RNA therapeutics is another differentiator for Gensaic.

“The idea of multiplexed medicines is just emerging,” Erisson says. “There are no clinically approved drugs using dual-targeted siRNAs, especially ones that have multi-tissue targeting. We are focused on metabolic indications that have two targets at the same time and can take on unique tissues or combinations of tissues.”

Gensaic’s collaboration with Novo Nordisk, announced last year, targets cardiometabolic diseases and includes up to $354 million in upfront and milestone payments per disease target.

“We already know we can deliver multiple types of payloads, and Novo Nordisk is not limited to siRNA, so we can go after diseases in ways that aren’t available to other companies,” Erisson says. “We are too small to try to swallow this universe of opportunity on our own, but the potential of this platform is incredibly large. Patients deserve safer medicines and better outcomes than what are available now.”


Taking the heat out of industrial chemical separations

The gas-filtering membranes developed by MIT spinout Osmoses offer an alternative to energy-hungry thermal separation for chemicals and fuels.


The modern world runs on chemicals and fuels that require a huge amount of energy to produce: Industrial chemical separation accounts for 10 to 15 percent of the world’s total energy consumption. That’s because most separations today rely on heat to boil off unwanted materials and isolate compounds.

The MIT spinout Osmoses is making industrial chemical separations more efficient by reducing the need for all that heat. The company, founded by former MIT postdoc Francesco Maria Benedetti; Katherine Mizrahi Rodriguez ’17, PhD ’22; Professor Zachary Smith; and Holden Lai, has developed a polymer technology capable of filtering gases with unprecedented selectivity.

Gases — consisting of some of the smallest molecules in the world — have historically been the hardest to separate. Osmoses says its membranes enable industrial customers to increase production, use less energy, and operate in a smaller footprint than is possible using conventional heat-based separation processes.

Osmoses has already begun working with partners to demonstrate its technology’s performance, including its ability to upgrade biogas, which involves separating CO2 and methane. The company also has projects in the works to recover hydrogen from large chemical facilities and, in a partnership with the U.S. Department of Energy, to pull helium from underground hydrogen wells.

“Chemical separations really matter, and they are a bottleneck to innovation and progress in an industry where innovation is challenging, yet an existential need,” Benedetti says. “We want to make it easier for our customers to reach their revenue targets, their decarbonization goals, and expand their markets to move the industry forward.”

Better separations

Benedetti joined Smith’s lab in MIT’s Department of Chemical Engineering in 2017. He was joined by Mizrahi Rodriguez the following year, and the pair spent the next few years conducting fundamental research into membrane materials for gas separations, collaborating with chemists at MIT and beyond, including Lai as he conducted his PhD at Stanford University with Professor Yan Xia.

“I was fascinated by the projects [Smith] was thinking about,” Benedetti says. “It was high-risk, high-reward, and that’s something I love. I had the opportunity to work with talented chemists, and they were synthesizing amazing polymers. The idea was for us chemical engineers at MIT to study those polymers, support chemists in taking next steps, and find an application in the separations world.”

The researchers slowly iterated on the membranes, gradually achieving better performance until, in 2020, a group including Lai, Benedetti, Xia, and Smith broke records for gas separation selectivity with a class of three-dimensional polymers whose structural backbone could be tuned to optimize performance. They filed patents with Stanford and MIT over the next two years, publishing their results in the journal Science in 2022.

“We were facing a decision of what to do with this incredible innovation,” Benedetti recalls. “By then, we’d published a lot of papers where, as the introduction, we described the huge energy footprint of thermal gas separations and the potential of membranes to solve that. We thought rather than wait for somebody to pick up the paper and do something with it, we wanted to lead the effort to commercialize the technology.”

Benedetti joined forces with Mizrahi Rodriguez, Lai, and industrial advisor Xinjin Zhao PhD ’92 to go through the National Science Foundation’s I-Corps Program, which challenges researchers to speak to potential customers in industry. The researchers interviewed more than 100 people, which confirmed for them the huge impact their technology could have.

Benedetti received grants from the MIT Deshpande Center for Technological Innovation and MIT Sandbox, and was a fellow with the MIT Energy Initiative. Osmoses also won the MIT $100K Entrepreneurship Competition in 2021, the same year they founded the company.

“I spent a lot of time talking to stakeholders of companies, and it was a window into the challenges the industry is facing,” Benedetti says. “It helped me determine this was a problem they were facing, and showed me the problem was massive. We realized if we could solve the problem, we could change the world.”

Today, Benedetti says more than 90 percent of energy in the chemicals industry is used to thermally separate gases. One study in Nature found that replacing thermal distillation could reduce annual U.S. energy costs by $4 billion and save 100 million tons of carbon dioxide emissions.

Made up of a class of molecules with tunable structures called hydrocarbon ladder polymers, Osmoses’ membranes are capable of filtering gas molecules with high levels of selectivity, at scale. The technology reduces the size of separation systems, making it easier to add to existing spaces and lowering upfront costs for customers.

“This technology is a paradigm shift with respect to how most separations are happening in industry today,” Benedetti says. “It doesn’t require any thermal processes, which is the reason why the chemical and petrochemical industries have such high energy consumption. There are huge inefficiencies in how separations are done today because of the traditional systems used.”

From the lab to the world

In the lab, the founders were making single grams of their membrane polymers for experiments. Since then, they’ve scaled up production dramatically, reducing the cost of the material with an eye toward producing potentially hundreds of kilograms in the future.

The company is currently working toward its first pilot project upgrading biogas at a landfill operated by a large utility in North America. It is also planning a pilot at a dairy farm in North America. Mizrahi Rodriguez says waste gas from landfills and agriculture makes up over 80 percent of the biogas upgrading market and represents a promising alternative source of renewable methane for customers.

“In the near term, our goal is to validate this technology at scale,” Benedetti says, noting Osmoses aims to scale up its pilot projects. “It has been a big accomplishment to secure funded pilots in all of the verticals that will serve as a springboard for our next commercial phase.”

Osmoses’ other two pilot projects focus on recovering valuable gas, including helium with the Department of Energy.

“Helium is a scarce resource that we need for a variety of applications, like MRIs, and our membranes’ high performance can be used to extract small amounts of it from underground wells,” Mizrahi Rodriguez explains. “Helium is very important in the semiconductor industry to build chips and graphical processing units that are powering the AI revolution. It’s a strategic resource that the U.S. has a growing interest to produce domestically.”

Benedetti says further down the line, Osmoses’ technology could be used in carbon capture, in gas “sweetening” to remove acid gases from natural gas, in oxygen-nitrogen separation, in refrigerant reuse, and more.

“There will be a progressive expansion of our capabilities and markets to deliver on our mission of redefining the backbone of the chemical, petrochemical, and energy industries,” Benedetti says. “Separations should not be a bottleneck to innovation and progress anymore.”


Q&A: A simpler way to understand syntax

A new book by Professor Ted Gibson brings together his years of teaching and research to detail the rules of how words combine.



For decades, MIT Professor Ted Gibson has taught the meaning of language to first-year graduate students in the Department of Brain and Cognitive Sciences (BCS). A new book, Gibson’s first, brings together his years of teaching and research to detail the rules of how words combine.

“Syntax: A Cognitive Approach,” released by MIT Press on Dec. 16, lays out the grammar of a language from the perspective of a cognitive scientist, outlining the components of language structure and the model of syntax that Gibson advocates: dependency grammar.

It was his research collaborator and wife, associate professor of BCS and McGovern Institute for Brain Research investigator Ev Fedorenko, who encouraged him to put pen to paper. Here, Gibson takes some time to discuss the book.

Q: Where did the process for “Syntax” begin?

A: I think it started with my teaching. Course 9.012 (Cognitive Science), which I teach with Josh Tenenbaum and Pawan Sinha, divides language into three components: sound, structure, and meaning. I work on the structure and meaning parts of language: words and how they get put together. That’s called syntax.

I’ve spent a lot of time over the last 30 years trying to understand the compositional rules of syntax, and even though there are many grammar rules in any language, I actually don’t think the form for grammar rules is that complicated. I’ve taught it in a very simple way for many years, but I’ve never written it all down in one place. My wife, Ev, is a longtime collaborator, and she suggested I write a paper. It turned into a book.

Q: How do you like to explain syntax?

A: For any sentence, for any utterance in any human language, there’s always going to be a word that serves as the head of that sentence, and every other word will somehow depend on that headword, maybe as an immediate dependent, or further away, through some other dependent words. This is called dependency grammar; it means there’s a root word in each sentence, and dependents of that root, on down, for all the words in the sentence, form a simple tree structure. I have cognitive reasons to suggest that this model is correct, but it isn’t my model; it was first proposed in the 1950s. I adopted it because it aligns with human cognitive phenomena.

That very simple framework gives you the following observation: that longer-distance connections between words are harder to produce and understand than shorter-distance ones. This is because of limitations in human memory. The closer the words are together, the easier it is for me to produce them in a sentence, and the easier it is for you to understand them. If they’re far apart, then it’s a complicated memory problem to produce and understand them.

This gives rise to a cool observation: Languages optimize their rules in order to keep the words close together. We can have very different orders of the same elements across languages, such as the difference in word orders for English versus Japanese, where the order of the words in the English sentence “Mary eats an apple” is “Mary apple eats” in Japanese. But the ordering rules within English and within Japanese are each arranged so as to minimize dependency lengths on average across the language.
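To make the dependency-length idea concrete, here is a minimal sketch in Python (our own illustration, not from the book) that stores a dependency tree as a list of head indices and totals the distance between each word and its head. The trees below are our own reading of Gibson’s example.

    # A dependency tree as head indices: heads[i] is the index of
    # word i's head, or None for the root of the sentence.

    def total_dependency_length(heads):
        """Sum over words of the distance between each word and its head."""
        return sum(abs(i - h) for i, h in enumerate(heads) if h is not None)

    # "Mary eats an apple": "eats" is the root; "Mary" and "apple"
    # depend on "eats"; "an" depends on "apple".
    english_heads = [1, None, 3, 1]

    # Japanese-style order "Mary apple eats" (article omitted):
    japanese_heads = [2, 2, None]

    print(total_dependency_length(english_heads))   # 1 + 1 + 2 = 4
    print(total_dependency_length(japanese_heads))  # 2 + 1 = 3

In this framework, a language’s ordering rules can be scored by the average of this total across many sentences, the quantity Gibson argues languages tend to minimize.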

Q: How does the book challenge some longstanding ideas in the field of linguistics?

A: In 1957, a book called “Syntactic Structures” by Noam Chomsky was published. It is a wonderful book that provides mathematical approaches to describe what human language is. It is very influential in the field of linguistics, and for good reason.

One of the key components of the theory that Chomsky proposed was the “transformation,” such that words and phrases can move from a deep structure to the structure that we produce. He thought it was self-evident from examples in English that transformations must be part of a human language. But then this concept of transformations eventually led him to conclude that grammar is unlearnable, that it has to be built into the human mind.  

In my view of grammar, there are no transformations. Instead, there are just two different versions of some words, or they can be underspecified for their grammar usage. The different usages may be related in meaning, and they can point to a similar meaning, but they have different dependency structures.

I think the advent of large language models suggests that language is learnable and that syntax isn’t as complicated as we used to think it was, because LLMs are successful at producing language. A large language model is almost the same as an adult speaker of a language in what it can produce. There are subtle ways in which they differ, but on the surface, they look the same in many ways, which suggests that these models do very well with learning language, even with human-like quantities of data.

I get pushback from some people who say, well, researchers can still use transformations to account for some phenomena. My reaction is: Unless you can show me that transformations are necessary, then I don’t think we need them.

Q: This book is open access. Why did you decide to publish it that way?

A: I am all for free knowledge for everyone. I am one of the editors of “Open Mind,” a journal established several years ago that is completely free and open access. I felt my book should be the same way, and MIT Press is a fantastic university press that is nonprofit and supportive of open-access publishing. It means I make less money, but it also means it can reach more people. For me, it is really about trying to get the information out there. I want more people to read it, to learn things. I think that’s how science is supposed to be.


Rhea Vedro brings community wishes to life in Boston sculpture

The MIT lecturer and artist-in-residence transformed hundreds of inscribed and hammered steel plates into “Amulet,” a soaring public artwork at City Hall Plaza.


Boston recently got its own good luck charm, “Amulet,” a 19-foot-tall tangle of organic spires installed in City Hall Plaza and embedded with the wishes, hopes, and prayers of residents from across the city.

The public artwork, by artist Rhea Vedro — also a lecturer and metals artist-in-residence in MIT’s Department of Materials Science and Engineering (DMSE) — was installed on the north side of City Hall, in a newly renovated stretch of the plaza along Congress Street, in October and dedicated with a ribbon cutting on Dec. 19.

“I’m really interested in this idea of protective objects worn on the skin by humans across cultures, across time,” said Vedro at the event in the Civic Pavilion, across the plaza from the sculpture. “And then, how do you take those ideas off the body and turn them into a blown-up version — a stand-in for the body?”

Vedro started exploring that question in 2021, when she was awarded a Boston Triennial Public Art Accelerator fellowship and later commissioned by the city to create the piece — the first artwork installed in the refurbished section of the plaza. She invited people to workshops and community centers to create hundreds of “wishmarks” — steel panels with hammered indentations and words, each representing a personal wish or reflection.

The plates were later used to form the metal skin of the sculpture — three bird-like forms designed to be, in Vedro’s words, a “protective amulet for the landscape.”

“I didn’t ask anyone to share what their actual wishes were, but I met people going into surgery, people who were homeless and looking for housing, people who had just lost a loved one, people dealing with immigration issues,” Vedro said. She asked participants to meditate on the idea of a journey and safe passage. “That could be a literal journey with ideas around immigration and migration,” she said, “or it could be your own internal journey.”

Large-scale art, fine-scale detail

Vedro, who has several public artworks to her name, said in a video about making “Amulet” that the project was “the biggest thing I’ve ever done.” While artworks of this scale are often handed off to fabrication teams, she handled the construction herself, starting on her driveway until zoning rules forced her to move to her father-in-law’s warehouse. Sections were also welded at Artisans Asylum, a community workshop in Boston, where she was an artist in residence, and then moved to a large industrial studio in Rhode Island.

At the ribbon-cutting event, Vedro thanked friends, family members, and city officials who helped bring the project to life. The celebration ended with a concert by musician Veronica Robles and her mariachi band. Robles runs the Veronica Robles Cultural Center in East Boston, which served as the main site for wishmark workshops. The sculpture is expected to remain in City Hall Plaza for up to five years.

Vedro’s background is in fine arts metalsmithing, a discipline that involves shaping and manipulating metals like silver, gold, and copper through forging, casting, and soldering. She began working at a very different scale, making jewelry, and then later moved primarily to welded steel sculpture — both techniques she now teaches at MIT. When working with steel, Vedro applies the same sensitivity a jeweler brings to small objects, paying close attention to small undulations and surface texture.

She loves working with steel, Vedro says — “shaping and forming and texturing and fighting with it” — because it allows her to engage physically with the material, with her hands involved in every millimeter.

The sculpture’s fluid design began with loose, free-form bird drawings on a cement floor and rubber panels with soapstone, oil pastels, and paint sticks. Vedro then built the forms in metal, welding three-dimensional armatures from round steel bars. The organic shapes and flourishes emerged through a responsive, intuitive process.

“I’m someone who works in real-time, changing my mind and responding to the material,” Vedro says. She likens her process to making a patchwork quilt of steel pieces: forming patterns in a shapeable material like tar paper, transferring them to steel sheets, cutting and shaping and texturing the pieces, and welding them together. “So I can get lots of curvatures that way that are not at all modular.”

From steel plates to soaring form

The sculpture’s outer skin is made from thin, 20-gauge mild steel — a low-carbon steel that’s relatively soft and easy to work with — used for the wishmarks. Those plates were fitted over an internal armature constructed from heavier structural steel.

Because there were more wishmark panels than the sculpture’s surface could hold, Vedro slipped some of them into the hollow space inside the sculpture before welding the piece closed. She compares them to treasures in a locket, “loose, rattling around, which freaked out the team when they were installing.” Any written text on the panels was burned off when the pieces were welded together.

“I believe the stuff’s all alchemized up into smoke, which to me is wonderful because it traverses realms just like a bird,” she says.

The surface of the sculpture is coated with a sealant — necessary because the outer skin material is prone to rust — along with spray paints, patinas, and accents including gold leaf. Its appearance will change over time, something Vedro embraces.

“The idea of transformation is actually integral to my work,” she says.

Standing outside the warmth of the Civic Pavilion on a windy, rainy day, artist Matt Bajor described the sculpture as “gorgeous,” attributing its impact in part to Vedro’s fluency in working across vastly different scales.

“The attention to detail — paying attention to the smaller things so that as it comes together as a whole, you have that fineness throughout the whole sculpture,” he said. “To do that at such a large scale is just crazy. It takes a lot of skill, a lot of effort, and a lot of time.”

Suveena Sreenilayam, a DMSE graduate student who has worked closely with Vedro, said her understanding of the relationship between art and craft brings a unique dimension to her work.

“Metal is hard to work with — and to build that on such small and large scales indicates real versatility,” Sreenilayam said. “To make something so artistic at this scale reflects her physical talent, and also her eye for detail and expression.”

Bajor said “Amulet” is a striking addition to the plaza, where the clean lines of City Hall’s Brutalist architecture contrast with the sculpture’s sinuous curves — and to Boston itself.

“I’m looking forward to seeing it in different conditions — in snow and bright sun — as the metal changes over time and as the patina develops,” he said. “It’s just a really great addition to the city.”


“MIT Open Learning has opened doors I never imagined possible”

Munip Utama applies knowledge from the MITx MicroMasters Program in Data, Economics, and Design of Policy to his efforts supporting disadvantaged students in Indonesia.


Through the MITx MicroMasters Program in Data, Economics, and Design of Policy, Munip Utama strengthened the skills he was already applying in his work with Baitul Enza, a nonprofit helping students in need via policy-shaping research and hands-on assistance. 

Utama’s commitment to advancing education for underprivileged students stems from his own background. His father is an elementary school teacher in a remote area and his mother has passed away. While financial hardship has always been a defining challenge, he says it has also been the driving force behind his pursuit of education. With the assistance of special programs for high-achieving students, Utama attended top schools and completed his bachelor’s degree in economics at UIN Jakarta — becoming the second person in his family to earn a university degree.

Utama joined Baitul Enza two months before graduation, through a faculty-led research project, and later became its manager, leading its programs and future development. In this interview, he describes how his experiences with the MicroMasters Program in Data, Economics, and Design of Policy (DEDP), offered by the Abdul Latif Jameel Poverty Action Lab (J-PAL) and MIT Open Learning, are shaping his education, career, and personal mission.

Q: What motivated you to pursue the MITx MicroMasters Program in Data, Economics, and Design of Policy?

A: I was seeking high-quality, evidence-based courses in economics and development. I needed rigorous training in data analysis, economic reasoning, and policy design to strengthen our interventions at Baitul Enza. The MITx MicroMasters Program in Data, Economics, and Design of Policy offered exactly that: a curriculum grounded in real-world problem-solving, aligned with the challenges I face in Indonesia.

I deeply admire MIT’s commitment to transforming teaching and learning not only through innovation, but also through empathy. The DEDP program exemplifies this mission: It connects theory with practice, allowing learners like me to apply analytical tools directly to real development challenges. This approach has inspired me to adopt the same philosophy in my own teaching and mentoring, encouraging students to use data and critical thinking to solve problems in their communities.

Q: What have you gained from the MITx DEDP program? 

A: The DEDP courses have provided me with rigorous analytical and quantitative training in data analysis, economics, and policy design. They have strengthened both my research and mentorship abilities by teaching me to approach poverty and inequality through evidence-based frameworks. My experience conducting independent and collaborative research projects has informed how I mentor students, guiding them to carry out their own evidence-based research projects. I continue to seek further academic dialogue to broaden my understanding and prepare for future graduate studies.

Another key component has been the program’s financial assistance offerings. Even with DEDP’s personalized income-based course pricing, financial constraints remain a significant challenge for me, and Baitul Enza operates entirely on donations and volunteer support. The scholarships administered by DEDP have been crucial in enabling me to continue my studies. They have allowed me to focus on learning without the constant burden of financial insecurity, while staying committed to my mission of breaking cycles of poverty through education.

Q: How are you applying what you’ve learned from MIT Open Learning’s MITx programs, and how will you use what you’ve learned going forward?

A: The DEDP program has transformed how I lead Baitul Enza. I now apply data-driven and evidence-based approaches to program design, monitoring, and evaluation — enhancing cost-effectiveness and long-term impact. The program has enabled me to design case-based learning modules for students, where they analyze real-world data on poverty and education; mentor youth researchers to conduct small-scale projects using evidence-based methods; and improve program cost-effectiveness and outcome measurement to attract collaborators and government support.

Coming from a lower-middle-class family with limited access to education, MIT Open Learning has opened doors I never imagined possible. It has reaffirmed my belief that education, grounded in data and empathy, can break the cycle of poverty. The DEDP program continues to inspire me to mentor young researchers, empower disadvantaged students, and build a community rooted in evidence-based decision-making.

With the foundation built by MITx, I aim to produce policy-relevant research and scale up Baitul Enza’s impact. My long-term vision is to generate experimental evidence in Indonesia on scalable education interventions, inform national policy, and empower marginalized youth to thrive. MITx has not only prepared me academically, but has also strengthened my resolve to lead with clarity, design with evidence, and act with purpose. Beyond my own growth, MITx has multiplied its impact by empowering the next generation of students to use data and evidence in solving local development challenges.


MIT engineers design structures that compute with heat

By leveraging excess heat instead of electricity, microscopic silicon structures could enable more energy-efficient thermal sensing and signal processing.


MIT researchers have designed silicon structures that can perform calculations in an electronic device using excess heat instead of electricity. These tiny structures could someday enable more energy-efficient computation.

In this computing method, input data are encoded as a set of temperatures using the waste heat already present in a device. The flow and distribution of heat through a specially designed material forms the basis of the calculation. The output is then represented by the power collected at the other end, which is held at a fixed temperature.

The researchers used these structures to perform matrix-vector multiplication with more than 99 percent accuracy. Matrix multiplication is the fundamental mathematical operation that machine-learning models, including large language models, use to process information and make predictions.

While the researchers still have to overcome many challenges to scale up this computing method for modern deep-learning models, the technique could be applied to detect heat sources and measure temperature changes in electronics without consuming extra energy. This would also eliminate the need for multiple temperature sensors that take up space on a chip.

“Most of the time, when you are performing computations in an electronic device, heat is the waste product. You often want to get rid of as much heat as you can. But here, we’ve taken the opposite approach by using heat as a form of information itself and showing that computing with heat is possible,” says Caio Silva, an undergraduate student in the Department of Physics and lead author of a paper on the new computing paradigm.

Silva is joined on the paper by senior author Giuseppe Romano, a research scientist at MIT’s Institute for Soldier Nanotechnologies. The research appears today in Physical Review Applied.

Turning up the heat

This work was enabled by a software system the researchers previously developed that allows them to automatically design a material that can conduct heat in a specific manner.

Using a technique called inverse design, this system flips the traditional engineering approach on its head. The researchers define the functionality they want first, then the system uses powerful algorithms to iteratively design the best geometry for the task.
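As a rough illustration of that loop (specify the function first, then iterate on the geometry), here is a toy inverse-design sketch of our own in Python, far simpler than the researchers’ system: a finite-difference gradient step repeatedly adjusts the pore fraction of each segment of a one-dimensional bar until the bar’s overall thermal conductance reaches a chosen target.

    import numpy as np

    def conductance(porosity):
        """Series conductance of a segmented bar; pores impede heat flow."""
        g = 1.0 - porosity                  # per-segment conductance
        return len(g) / np.sum(1.0 / g)     # series (harmonic-mean) formula

    target = 0.6                            # desired functionality, set first
    porosity = np.full(8, 0.5)              # start half-porous everywhere
    lr, eps = 0.1, 1e-6

    for _ in range(500):
        base = conductance(porosity)
        err = base - target
        # Finite-difference gradient of 0.5 * err**2, segment by segment.
        grad = np.empty_like(porosity)
        for i in range(porosity.size):
            bumped = porosity.copy()
            bumped[i] += eps
            grad[i] = err * (conductance(bumped) - base) / eps
        porosity = np.clip(porosity - lr * grad, 0.05, 0.95)

    print(round(conductance(porosity), 3))  # ~0.6: geometry meets the target

The real system optimizes a two-dimensional pixel grid against a full matrix of coefficients rather than a single scalar, but the specify-then-iterate structure is the same.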

They used this system to design complex silicon structures, each roughly the same size as a dust particle, that can perform computations using heat conduction. This is a form of analog computing, in which data are encoded and signals are processed using continuous values, rather than digital bits that are either 0s or 1s.

The researchers feed their software system the specifications of a matrix of numbers that represents a particular calculation. Using a grid, the system designs a set of rectangular silicon structures filled with tiny pores. The system continually adjusts each pixel in the grid until it arrives at the desired mathematical function.

Heat diffuses through the silicon in a way that performs the matrix multiplication, with the geometry of the structure encoding the coefficients.

“These structures are far too complicated for us to come up with just through our own intuition. We need to teach a computer to design them for us. That is what makes inverse design a very powerful technique,” Romano says.

But the researchers ran into a problem: The laws of heat conduction dictate that heat flows from hot to cold regions, so these structures can only encode positive coefficients.

They overcame this problem by splitting the target matrix into its positive and negative components and representing them with separately optimized silicon structures that encode positive entries. Subtracting the outputs at a later stage allows them to compute negative matrix values.
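The arithmetic behind that fix is easy to sketch. In this toy NumPy snippet (our own illustration, not the team’s code), a target matrix is split into two non-negative parts, each standing in for one physically realizable structure, and subtracting the two outputs recovers the full product, negative coefficients included.

    import numpy as np

    # Write M = M_plus - M_minus, with both parts non-negative, since
    # heat flow can only realize non-negative coefficients.
    M = np.array([[ 1.0, -2.0],
                  [-0.5,  3.0]])
    x = np.array([0.7, 0.3])

    M_plus = np.clip(M, 0, None)     # positive entries, zeros elsewhere
    M_minus = np.clip(-M, 0, None)   # magnitudes of the negative entries

    # Each product below stands in for one optimized structure's output.
    y = M_plus @ x - M_minus @ x

    assert np.allclose(y, M @ x)     # subtraction recovers the full product
    print(y)                         # [0.1  0.55]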

They can also tune the thickness of the structures, which allows them to realize a greater variety of matrices. Thicker structures have greater heat conduction.

“Finding the right topology for a given matrix is challenging. We beat this problem by developing an optimization algorithm that ensures the topology being developed is as close as possible to the desired matrix without having any weird parts,” Silva explains.

Microelectronic applications

The researchers used simulations to test the structures on simple matrices with two or three columns. While simple, these small matrices are relevant for important applications, such as sensor fusion and diagnostics in microelectronics.

The structures performed computations with more than 99 percent accuracy in many cases.

However, there is still a long way to go before this technique could be used for large-scale applications such as deep learning, since millions of structures would need to be tiled together. As the matrices become more complicated, the structures become less accurate, especially when there is a large distance between the input and output terminals. In addition, the devices have limited bandwidth, which would need to be greatly expanded if they were to be used for deep learning.

But because the structures rely on excess heat, they could be directly applied for tasks like thermal management, as well as heat source or temperature gradient detection in microelectronics.

“This information is critical. Temperature gradients can cause thermal expansion and damage a circuit or even cause an entire device to fail. If we have a localized heat source where we don’t want a heat source, it means we have a problem. We could directly detect such heat sources with these structures, and we can just plug them in without needing any digital components,” Romano says.

Building on this proof-of-concept, the researchers want to design structures that can perform sequential operations, where the output of one structure becomes an input for the next. This is how machine-learning models perform computations. They also plan to develop programmable structures, enabling them to encode different matrices without starting from scratch with a new structure each time.


Keeril Makan named vice provost for the arts

An acclaimed composer and longtime MIT faculty member, Makan will direct the next act in MIT’s story of artistic leadership.



Keeril Makan has been appointed vice provost for the arts at MIT, effective Feb. 1. In this role, Makan, who is the Michael (1949) and Sonja Koerner Music Composition Professor at MIT, will provide leadership and strategic direction for the arts across the Institute.

Provost Anantha Chandrakasan announced Makan’s appointment in an email to the MIT community today.

“Keeril’s record of accomplishment both as an artist and an administrative leader makes him exceedingly qualified to take on this important role,” Chandrakasan wrote, noting that Makan “has repeatedly taken on new leadership assignments with skill and enthusiasm.”

Makan’s appointment follows the publication last September of the final report of the Future of the Arts at MIT Committee. At MIT, the report noted, “the arts thrive as a constellation of recognized disciplines while penetrating and illuminating countless aspects of the Institute’s scientific and technological enterprise.” Makan will build on this foundation as MIT continues to strengthen the role of the arts in research, education, and community life.

As vice provost for the arts, Makan will provide Institute-wide leadership and strategic direction for the arts, working in close partnership with academic leaders, arts units, and administrative colleagues across MIT, including the Office of the Arts; the MIT Center for Art, Science and Technology; the MIT Museum; the List Visual Arts Center; and the Council for the Arts at MIT. His role will focus on strengthening connections between artistic practice, research, education, and community life, and on supporting public engagement and interdisciplinary collaboration.

“At MIT, the arts are a vital way of thinking, making, and convening,” Makan says. “As vice provost, my priority is to support and strengthen the extraordinary artistic work already happening across the Institute, while listening carefully to faculty, students, and staff as we shape what comes next. I’m excited to build on MIT’s distinctive, only-at-MIT approach to the arts and to help ensure that artistic practice remains central to MIT’s intellectual and community life.”

Makan says he will begin his new role with a period of listening and learning across MIT’s arts ecosystem, informed by the Future of the Arts at MIT report. His initial focus will be on understanding how artistic practice intersects with research, education, and community life, and on identifying opportunities to strengthen connections, visibility, and coordination across MIT’s many arts activities.

Over time, Makan says he will work with the arts community to advance MIT’s long-standing commitment to artistic excellence and experimentation, while supporting student participation and public engagement in the arts. He says his approach will “emphasize collaboration, clarity, and sustainability, reflecting MIT’s distinctive integration of the arts with science and technology.”

Makan came to MIT in 2006 as an assistant professor of music. From 2018 to 2024, he served as head of the Music and Theater Arts (MTA) Section in the School of Humanities, Arts, and Social Sciences (SHASS). In 2023, he was appointed associate dean for strategic initiatives in SHASS, where he helped guide the school’s response to recent fiscal pressures and led Institute-wide strategic initiatives.

With colleagues from MTA and the School of Engineering, Makan helped launch a new, multidisciplinary graduate program in music technology and computation. He was intimately involved in the project to develop the new Edward and Joyce Linde Music Building (Building 18), a state-of-the-art facility that opened in 2025. 

Makan was a member of the Future of the Arts at MIT Committee and chaired a working group on the creation of a center for the humanities, which ultimately became the MIT Human Insight Collaborative (MITHIC), one of the Institute’s strategic initiatives. Since last year, he has served as MITHIC’s faculty lead. Under his leadership, MITHIC has awarded $4.7 million in funding to 56 projects across 28 units at MIT, supporting interdisciplinary, human-centered research and teaching.

Trained initially as a violinist, Makan earned undergraduate degrees in music composition and religion from Oberlin and a PhD in music composition from the University of California at Berkeley.

A critically acclaimed composer, Makan is the recipient of a Guggenheim Fellowship and the Luciano Berio Rome Prize from the American Academy in Rome. His music has been recorded by the Kronos Quartet, the Boston Modern Orchestra Project, and the International Contemporary Ensemble, and performed at Carnegie Hall, Lincoln Center for the Performing Arts, and Tanglewood. His opera, “Persona,” premiered at National Sawdust and was performed at the Isabella Stewart Gardner Museum in Boston and by the Los Angeles Opera. The Los Angeles Times described the music from “Persona” as “brilliant.”

Makan succeeds Philip Khoury, the Ford International Professor of History, who served as vice provost for the arts from 2006 until stepping down in 2025. Khoury will return to the MIT faculty following a sabbatical.


Study: The infant universe’s “primordial soup” was actually soupy

MIT physicists observed the first clear evidence that quarks create a wake as they speed through quark-gluon plasma, confirming the plasma behaves like a liquid.


In its first moments, the infant universe was a trillion-degree-hot soup of quarks and gluons. These elementary particles zinged around at light speed, creating a “quark-gluon plasma” that lasted for only a few millionths of a second. The primordial goo then quickly cooled, and its individual quarks and gluons fused to form the protons, neutrons, and other fundamental particles that exist today.

Physicists at CERN’s Large Hadron Collider in Switzerland are recreating quark-gluon plasma (QGP) to better understand the universe’s starting ingredients. By smashing together heavy ions at close to the speed of light, scientists can briefly dislodge quarks and gluons to create and study the same material that existed during the first microseconds of the early universe.

Now, a team at CERN led by MIT physicists has observed clear signs that quarks create wakes as they speed through the plasma, similar to a duck trailing ripples through water. The findings are the first direct evidence that quark-gluon plasma reacts to speeding particles as a single fluid, sloshing and splashing in response, rather than scattering randomly like individual particles.

“It has been a long debate in our field, on whether the plasma should respond to a quark,” says Yen-Jie Lee, professor of physics at MIT. “Now we see the plasma is incredibly dense, such that it is able to slow down a quark, and produces splashes and swirls like a liquid. So quark-gluon plasma really is a primordial soup.”

To see a quark’s wake effects, Lee and his colleagues developed a new technique that they report in the study. They plan to apply the approach to more particle-collision data to zero in on other quark wakes. Measuring the size, speed, and extent of these wakes, and how long it takes for them to ebb and dissipate, can give scientists an idea of the properties of the plasma itself, and how quark-gluon plasma might have behaved in the universe’s first microseconds.

“Studying how quark wakes bounce back and forth will give us new insights on the quark-gluon plasma’s properties,” Lee says. “With this experiment, we are taking a snapshot of this primordial quark soup.”

The study’s co-authors are members of the CMS Collaboration — a team of particle physicists from around the world who work together to carry out and analyze data from the Compact Muon Solenoid (CMS) experiment, which is one of the general-purpose particle detectors at CERN’s Large Hadron Collider. The CMS experiment was used to detect signs of quark wake effects for this study. The open-access study appears in the journal Physics Letters B.

Quark shadows

Quark-gluon plasma is the first liquid to have ever existed in the universe. It is also the hottest liquid ever, as scientists estimate that during its brief existence, the QGP was around a few trillion degrees Celsius. This boiling stew is also thought to have been a near-“perfect” liquid, meaning that the individual quarks and gluons in the plasma flowed together as a smooth, frictionless fluid.

This picture of the QGP is based on many independent experiments and theoretical models. One such model, derived by Krishna Rajagopal, the William A. M. Burden Professor of Physics at MIT, and his collaborators, predicts that the quark-gluon plasma should respond like a fluid to any particles speeding through it. His theory, known as the hybrid model, suggests that when a jet of quarks is zinging through the QGP, it should produce a wake behind it, inducing the plasma to ripple and splash in response.

Physicists have looked for such wake effects in experiments at the Large Hadron Collider and other high-energy particle accelerators. These experiments accelerate heavy ions, such as lead, to close to the speed of light, at which point they can collide and produce a short-lived droplet of primordial soup, typically lasting for less than a quadrillionth of a second. Scientists essentially take a snapshot of the moment to try to identify characteristics of the QGP.

To identify quark wakes, physicists have looked for pairs of quarks and “antiquarks” — particles that are identical to their quark counterparts, except that certain properties are equal in magnitude but opposite in sign. For instance, when a quark is speeding through plasma, there is likely an antiquark that is traveling at exactly the same speed, but in the opposite direction.

For this reason, physicists have looked for quark/antiquark pairs in the QGP produced in heavy-ion collisions, assuming that the particles might produce identical, detectable wakes through the plasma.

“When you have two quarks produced, the problem is that, when the two quarks go in opposite directions, the one quark overshadows the wake of the second quark,” Lee says.

He and his colleagues realized that looking for the wake of the first quark would be easier if there were no second quark obscuring its effects.

“We have figured out a new technique that allows us to see the effects of a single quark in the QGP, through a different pair of particles,” Lee says.

A wake tag

Rather than search for pairs of quarks and antiquarks in the aftermath of lead ion collisions, Lee’s team instead looked for events with only one quark moving through the plasma, essentially back-to-back with a “Z boson.” A Z boson is an electrically neutral elementary particle that mediates the weak force and has virtually no effect on the surrounding environment. However, because they exist at a very specific energy, Z bosons are relatively straightforward to detect.

“In this soup of quark-gluon plasma, there are numerous quarks and gluons passing by and colliding with each other,” Lee explains. “Sometimes when we are lucky, one of these collisions creates a Z boson and a quark, with high momentum.”

In such a collision, the two newly created particles fly off in exactly opposite directions. While the quark could leave a wake, the Z boson should have no effect on the surrounding plasma. Whatever ripples are observed in the droplet of primordial soup would have been made entirely by the single quark zipping through it.

The team, in collaboration with Professor Yi Chen’s group at Vanderbilt University, reasoned that they could use Z bosons as a “tag” to locate and trace the wake effects of single quarks. For their new study, the researchers looked through data from the Large Hadron Collider’s heavy-ion collision experiments. From 13 billion collisions, they identified about 2,000 events that produced a Z boson. For each of these events, they mapped the energies throughout the short-lived quark-gluon plasma and consistently observed a fluid-like pattern of splashes and swirls — a wake effect — in the direction opposite the Z boson, which the team could directly attribute to single quarks zooming through the plasma.
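The logic of the Z-boson tag lends itself to a toy sketch in Python (ours, not the CMS analysis pipeline). Because the Z barely interacts with the plasma, its direction marks where the recoiling quark went, back-to-back in azimuth; rotating every event so the Z lines up and averaging the energy maps washes out per-event noise and exposes any excess deposited on the opposite side.

    import numpy as np

    rng = np.random.default_rng(0)
    n_events, n_bins = 2000, 36                 # bins in azimuthal angle

    # Each simulated event gets a random Z-boson direction (a bin index).
    z_bin = rng.integers(0, n_bins, n_events)

    # Fake calorimeter: noise everywhere, plus a small excess in the bin
    # opposite the Z, where the recoiling quark (and its wake) would be.
    energy = rng.normal(10.0, 3.0, (n_events, n_bins))
    opposite = (z_bin + n_bins // 2) % n_bins
    energy[np.arange(n_events), opposite] += 2.0

    # Rotate each event so its Z sits in bin 0, then average over events.
    aligned = np.array([np.roll(e, -b) for e, b in zip(energy, z_bin)])
    profile = aligned.mean(axis=0)
    print(profile.argmax() == n_bins // 2)      # True: excess opposite the Z

In the real measurement, the roughly 2,000 Z-tagged events play the role of the simulated ones here, and the “excess” is the wake pattern itself.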

What’s more, the physicists found that the wake effects they observed in the data were consistent with what Rajagopal’s hybrid model predicts. In other words, quark-gluon plasma does in fact flow and ripple like a fluid when particles speed through it.

“This is something that many of us have argued must be there for a good many years, and that many experiments have looked for,” says Rajagopal, who was not directly involved with the new study.

“What Yen-Jie and CMS have done is to devise and execute a measurement that has brought them and us the first clean, clear, unambiguous evidence for this foundational phenomenon,” says Daniel Pablos, professor of physics at Oviedo University in Spain and a collaborator of Rajagopal’s who was not involved in the current study.

“We’ve gained the first direct evidence that the quark indeed drags more plasma with it as it travels,” Lee adds. “This will enable us to study the properties and behavior of this exotic fluid in unprecedented detail.”

This work was supported, in part, by the U.S. Department of Energy.


Welcome to the “most wicked” apprentice program on campus

With a focus on metallurgy and fabrication, Pappalardo Apprentices assist their peers with machining, hand-tool use, brainstorming, and more, while expanding their own skills.


The Pappalardo Apprentice program pushes the boundaries of the traditional lab experience, inviting a selected group of juniors and seniors to advance their fabrication skills while also providing mentor training and peer-to-peer mentoring opportunities in an environment fueled by creativity, safety, and fun.

“This apprenticeship was largely born of my need for additional lab help during our larger sophomore-level design course, and the desire of third- and fourth-year students to advance their fabrication knowledge and skills,” says Daniel Braunstein, senior lecturer in mechanical engineering (MechE) and director of the Pappalardo Undergraduate Teaching Laboratories. “Though these needs and wants were nothing particularly new, it had not occurred to me that we could combine these interests into a manageable and meaningful program.”

Apprentices serve as undergraduate lab assistants for class 2.007 (Design and Manufacturing I), joining lab sessions and assisting 2.007 students with various aspects of the learning experience, including machining, hand-tool use, brainstorming, and peer support. Apprentices also participate in a series of seminars and clinics designed to further their fabrication knowledge and hands-on skills, including advancing their understanding of mill and lathe use, computer-aided design and manufacturing (CAD/CAM), and pattern-making.

Putting this learning into practice, junior apprentices fabricate Stirling engines (a closed-cycle heat engine that converts thermal energy into mechanical work), while returning senior apprentices take on more ambitious group projects involving casting. Previous years’ projects included an early 20th-century single-cylinder marine engine and a 19th-century torpedo boat steam engine, on permanent exhibit at the MIT Museum. This spring will focus on copper alloys and fabrication of a replica of an 1899 anchor windlass from the Herreshoff Manufacturing Co., used on the famous New York 70 class sloops.

The sloops, designed by MIT Class of 1870 alumnus Nathanael Greene Herreshoff for wealthy New York Yacht Club members, were short-lived, single-design racing vessels meant for exclusive competition. The historic racing yachts used robust manual windlasses — mechanical devices used to haul large loads — to manage their substantial anchors.

“The more we got into casting, I was modestly surprised that [the students’] exposure to metals was very limited. So that really launched not just a project, but also a more specific curriculum around metallurgy,” says Braunstein.

Metallurgy is not a traditional part of the curriculum. “I think [the project] really opened up my eyes to how much material choice is an important thing for engineering in general,” says apprentice Jade Durham.

In casting the windlasses, students are working from century-old drawings. “[Looking at these old drawings,] we don’t know how they made [the parts],” says Braunstein. “So, there is an element of the discovery of what they may or may not have done. It’s like technical archaeology.”

“You’re really just relying on your knowledge of the windlass system, how it’s meant to work, which surfaces are really critical, and kind of just applying your intuition,” says apprentice Saechow Yap. “I learned a lot about applying my art skills and my ability to judge and shape aesthetic.”

Learning by doing is an important hallmark of an MIT MechE education. The Pappalardo Apprentice Program, which celebrated its 10th year last spring, is housed in the Pappalardo Lab. The lab, established through a gift from Neil Pappalardo ’64, is the self-proclaimed “most wicked lab on campus” — “wicked,” for readers outside of Greater Boston, is slang used in a variety of ways, but generally meaning something is pretty awesome.

“Pappalardo is my favorite place on campus; I had never set foot in any sort of makerspace or lab before I came to MIT,” says apprentice Wilhem Hector. “I did not just learn how to make things. I got empowered ... [to] make anything.”

Braunstein developed the Pappalardo Apprentice program to reinforce the learning of the older students while building community. In a 2023 interview, he said he called the seminar an apprenticeship to emphasize MIT’s relationship with the art — and industrial character — of engineering.

“I did want to borrow from the language of the trades,” Braunstein said. “MIT has a strong heritage in industrial work; that’s why we were founded. It was not a science institution; it was about the mechanical arts. And I think the blend of the industrial, plus the academic, is what makes this lab particularly meaningful.”

Today, he says the most enjoyable part of the program, for him, is watching relationships develop. “They come in, bright-eyed, bushy-tailed, and then to see them grow into people who are capable of pouring iron, tramming mills, teaching other people how to do it, and having this tight group of friends … that’s fun to watch.”


Expanding educational access in Massachusetts prisons

Recent summit at MIT brought together educators, policymakers, and community partners, featuring resilience expert Shaka Senghor on transforming lives through learning and redefining pathways to freedom.


Collaborators from across the Commonwealth of Massachusetts came together in December for a daylong summit of the Massachusetts Prison Education Consortium (MPEC), hosted by the Educational Justice Institute (TEJI) at MIT. Held at MIT’s Walker Memorial, the summit aimed to expand access to high-quality education for incarcerated learners and featured presentations by leaders alongside strategy sessions designed to turn ideas into concrete plans to improve equitable access to higher education and reduce recidivism in local communities.

In addition to a keynote address by author and resilience expert Shaka Senghor, speakers such as Molly Lasagna, senior strategy officer at Ascendium Education Group, and Stefan LoBuglio, former director of the National Institute of Corrections, discussed the roles of learning, healing, and community support in building a more just system for justice-impacted individuals.

The MPEC summit, “Building Integrated Systems Together: Massachusetts Community Colleges and County Corrections 2.0,” addressed three key issues surrounding equitable education: the integration of Massachusetts community college education with county corrections to provide incarcerated individuals with access to higher education; the integration of carceral education with industry to expand work and credentialing opportunities; and the goal of better serving women who experience unique challenges within the criminal legal system.

Created by TEJI, MPEC is a statewide network of Massachusetts colleges, organizations, and correctional partners working together to expand access to high-quality, credit-bearing education in Massachusetts prisons and jails. The consortium works at all levels of the pipeline, from academic programming and faculty support to research, reentry pathways, and more, drawing from the research and success of the MIT Prison Education Initiative and the recent restoration of Pell Grant eligibility for incarcerated learners.

The summit was hosted by TEJI co-directors Lee Perlman and Carole Cafferty. Perlman founded the MIT Prison Education Initiative after years of teaching in MIT’s Experimental Study Group (ESG) and in correctional classrooms. He has been recognized for his work in bringing humanities education to prison settings with three Irwin Sizer Awards and MIT’s Martin Luther King Jr. Leadership Award.

Cafferty co-founded TEJI after more than 30 years of experience in corrections, including serving as superintendent of the Middlesex Jail and House of Correction. She now guides the institute with the knowledge she gained from building integrative and therapeutic educational programs that have since been replicated nationally.

“TEJI serves two populations, incarcerated learners and the MIT community. All of our classes involve MIT students, either learning alongside the incarcerated students or as TAs [teaching assistants],” emphasizes Perlman. In discussing the unification of TEJI with the roles and experiences MIT students take, Perlman further notes: “Our humanities classes, which we call our philosophical life skills curriculum, give MIT students the opportunity to discuss how we want to live our lives with incarcerated students with very different backgrounds.”

These courses, offered through ESG, are subjects with a unique focus that often differs from that of a more traditional academic course, prioritizing hands-on learning and innovative teaching methods. Perlman’s courses are almost always taught in a carceral setting, and he notes that these courses can be highly impactful on the MIT community: “In courses like Philosophy of Love; Non-violence as a Way of Life; and Authenticity and Emotional Intelligence for Teams, the discussions are rich and personal. Many MIT students have described their experience in these classes as life-changing.”

Throughout morning addresses and afternoon strategy sessions, summit attendees developed concrete plans for scaling classroom capacity, aligning curricula with regional labor markets, and strengthening academic and reentry supports that help students remain on the right path after release. Panels explored practical issues, such as how to coordinate registration and credit transfer when a student moves between facilities and how to staff hybrid classrooms that combine in-person and remote instruction, as well as how to measure program outcomes beyond enrollment.

Co-directors Perlman and Cafferty highlighted that the average length of stay within these programs in county facilities is only six months, and that inspired a particular focus on making sure these programs are high-impact even when community members are only able to participate for a short period of time.

Speakers repeatedly emphasized that these logistical challenges often sit atop deeper, more human challenges. In his keynote, Shaka Senghor traced his own journey from trauma to transformation, stressing the power of reading, mentorship, and completing something of one’s own. “What else can you do with your mind?” he asked, describing the moment he realized that the act of reading and writing could change the trajectory of his life.

The line became a refrain throughout the day, a question that caused all to reflect on how prison education could not only function as a workforce pathway, but as a catalyst for dignity and hope after reentry. Senghor also directly confronted the stigma that returning citizens face. “They said I’d be back in prison in six months,” he recalled, using the remark from a corrections officer from the day he was released on parole as a reminder of the structural and social barriers encountered after release.

The summit also brought together funders and implementers who are shaping the field’s future. Molly Lasagna of Ascendium Education Group described the organization’s strategy of “Expand, Support, Connect,” which funds the creation of new educational programs, strengthens basic needs and advising infrastructure, and ensures that individuals leaving prison can transition into high-quality employment opportunities. “How is this education program putting somebody on a pathway to opportunity?” she asked, noting that true change requires aligning education, reentry, and workforce systems.

Participants also heard from Stefan LoBuglio, former director of the National Institute of Corrections and a national thought leader in corrections and reentry, who lauded Massachusetts as a leader while cautioning that staffing shortages, limited program space, and uneven access to technology continue to constrain progress. “We have a crisis in staffing in corrections that does affect our educational programs,” he noted, calling for attention to staff wellness and institutional support as essential components of sustainability.

Throughout the day, TEJI and MPEC leaders highlighted emerging pilots and partnerships, including a new “Prisons to Pathways” initiative aimed at building stackable, transferable credentials aligned with regional industry needs. Additional collaborations with the American Institutes for Research will support new implementation guides and technical assistance resources designed by practitioners in the field.

The summit concluded with a commitment to sustain collaboration. As Senghor reminded participants, the work is both practical and moral. The question he posed, “What else can you do with your mind?,” serves as a reminder to Massachusetts educators, corrections partners, funders, and community organizations to ensure that learning inside prison becomes a foundation for opportunity outside it.
 


Bryan Bryson: Engineering solutions to the tough problem of tuberculosis

By analyzing how Mycobacterium tuberculosis interacts with the immune system, the associate professor hopes to find new vaccine targets to help eliminate the disease.


On his desk, Bryan Bryson ’07, PhD ’13 still has the notes he used for the talk he gave at MIT when he interviewed for a faculty position in biological engineering. On that sheet, he outlined the main question he wanted to address in his lab: How do immune cells kill bacteria?

Since starting his lab in 2018, Bryson has continued to pursue that question, which he sees as critical for finding new ways to target infectious diseases that have plagued humanity for centuries, especially tuberculosis. To make significant progress against TB, researchers need to understand how immune cells respond to the disease, he says.

“Here is a pathogen that has probably killed more people in human history than any other pathogen, so you want to learn how to kill it,” says Bryson, now an associate professor at MIT. “That has really been the core of our scientific mission since I started my lab. How does the immune system see this bacterium and how does the immune system kill the bacterium? If we can unlock that, then we can unlock new therapies and unlock new vaccines.”

The only TB vaccine now available, the BCG vaccine, is a weakened version of a bacterium that causes TB in cows. This vaccine is widely administered in some parts of the world, but it poorly protects adults against pulmonary TB. Although some treatments are available, tuberculosis still kills more than a million people every year.

“To me, making a better TB vaccine comes down to a question of measurement, and so we have really tried to tackle that problem head-on. The mission of my lab is to develop new measurement modalities and concepts that can help us accelerate a better TB vaccine,” says Bryson, who is also a member of the Ragon Institute of Mass General Brigham, MIT, and Harvard.

From engineering to immunology

Engineering has deep roots in Bryson’s family: His great-grandfather was an engineer who worked on the Panama Canal, and his grandmother loved to build things and would likely have become an engineer if she had had the educational opportunity, Bryson says.

The oldest of four sons, Bryson was raised primarily by his mother and grandparents, who encouraged his interest in science. When he was three years old, his family moved from Worcester, Massachusetts, to Miami, Florida, where he began tinkering with engineering himself, building robots out of Styrofoam cups and light bulbs. After moving to Houston, Texas, at the beginning of seventh grade, Bryson joined his school’s math team.

As a high school student, Bryson had his heart set on studying biomedical engineering in college. However, MIT, one of his top choices, didn’t have a biomedical engineering program, and biological engineering wasn’t yet offered as an undergraduate major. After he was accepted to MIT, his family urged him to attend and then figure out what he would study.

Throughout his first year, Bryson deliberated over his decision, with electrical engineering and computer science (EECS) and aeronautics and astronautics both leading contenders. As he recalls, he thought he might study aero/astro with a minor in biomedical engineering and work on spacesuit design.

However, during an internship the summer after his first year, his mentor gave him a valuable piece of advice: “You should study something that will let you have a lot of options, because you don’t know how the world is going to change.”

When he came back to MIT for his sophomore year, Bryson switched his major to mechanical engineering, with a bioengineering track. He also started looking for undergraduate research positions. A poster in the hallway grabbed his attention, and he ended up working with the professor whose work was featured: Linda Griffith, a professor of biological engineering and mechanical engineering.

Bryson’s experience in the lab “changed the trajectory of my life,” he says. There, he worked on building microfluidic devices that could be used to grow liver tissue from hepatocytes. He enjoyed the engineering aspects of the project, but he realized that he also wanted to learn more about the cells and why they behaved the way they did. He ended up staying at MIT to earn a PhD in biological engineering, working with Forest White.

In White’s lab, Bryson studied cell signaling processes and how they are altered in diseases such as cancer and diabetes. While doing his PhD research, he also became interested in studying infectious diseases. After earning his degree, he went to work with a professor of immunology at the Harvard School of Public Health, Sarah Fortune.

Fortune studies tuberculosis, and in her lab, Bryson began investigating how Mycobacterium tuberculosis interacts with host cells. During that time, Fortune instilled in him a desire to seek solutions to tuberculosis that could be transformative — not just identifying a new antibiotic, for example, but finding a way to dramatically reduce the incidence of the disease. This, he thought, could be done by vaccination, and in order to do that, he needed to understand how immune cells respond to the disease.

“That postdoc really taught me how to think bravely about what you could do if you were not limited by the measurements you could make today,” Bryson says. “What are the problems we really need to solve? There are so many things you could think about with TB, but what’s the thing that’s going to change history?”

Pursuing vaccine targets

Since joining the MIT faculty eight years ago, Bryson and his students have developed new ways to answer the question he posed in his faculty interviews: How does the immune system kill bacteria?

One key step in this process is that immune cells must be able to recognize bacterial proteins that are displayed on the surfaces of infected cells. Mycobacterium tuberculosis produces more than 4,000 proteins, but only a small subset of those end up displayed by infected cells. Those proteins would likely make the best candidates for a new TB vaccine, Bryson says.

Bryson’s lab has developed ways to identify those proteins, and so far, their studies have revealed that many of the TB antigens displayed to the immune system belong to a class of proteins known as type 7 secretion system substrates. Mycobacterium tuberculosis expresses about 100 of these proteins, but which of these 100 are displayed by infected cells varies from person to person, depending on their genetic background.

By studying blood samples from people of different genetic backgrounds, Bryson’s lab has identified the TB proteins displayed by infected cells in about 50 percent of the human population. He is now working on the remaining 50 percent and believes that once those studies are finished, he’ll have a very good idea of which proteins could be used to make a TB vaccine that would work for nearly everyone.

Once those proteins are chosen, his team can work on designing the vaccine and then testing it in animals, with hopes of being ready for clinical trials in about six years.

In spite of the challenges ahead, Bryson remains optimistic about the possibility of success, and credits his mother for instilling a positive attitude in him while he was growing up.

“My mom decided to raise all four of her children by herself, and she made it look so flawless,” Bryson says. “She instilled a sense of ‘you can do what you want to do,’ and a sense of optimism. There are so many ways that you can say that something will fail, but why don’t we look to find the reasons to continue?”

One of the things he loves about MIT is that he has found a similar can-do attitude across the Institute.

“The engineer ethos of MIT is that yes, this is possible, and what we’re trying to find is the way to make this possible,” he says. “I think engineering and infectious disease go really hand-in-hand, because engineers love a problem, and tuberculosis is a really hard problem.”

When not tackling hard problems, Bryson likes to lighten things up with ice cream study breaks at Simmons Hall, where he is an associate head of house. Using an ice cream machine he has had since 2009, Bryson makes gallons of ice cream for dorm residents several times a year. Nontraditional flavors such as passion fruit or jalapeno strawberry have proven especially popular.

“Recently I did flavors of fall, so I did a cinnamon ice cream, I did a pear sorbet,” he says. “Toasted marshmallow was a huge hit, but that really destroyed my kitchen.”


Pablo Jarillo-Herrero wins BBVA Foundation Frontiers of Knowledge Award

MIT physicist shares 400,000-euro award for influential work on “magic-angle” graphene.


Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics at MIT, has won the 2025 BBVA Foundation Frontiers of Knowledge Award in Basic Sciences for “discoveries concerning the ‘magic angle’ that allows the behavior of new materials to be transformed and controlled.”

He shares the 400,000-euro award with Allan MacDonald of the University of Texas at Austin. According to the BBVA Foundation, “the pioneering work of the two physicists has achieved both the theoretical foundation and experimental validation of a new field where superconductivity, magnetism, and other properties can be obtained by rotating new two-dimensional materials like graphene.” Graphene is a single layer of carbon atoms arranged in hexagons resembling a honeycomb structure.

Theoretical foundation, experimental validation

In a theoretical model published in 2011, MacDonald predicted that twisting two graphene layers to a specific angle, of around 1 degree, would cause the interacting electrons to produce new emergent properties.
 
In 2018, Jarillo-Herrero delivered the experimental confirmation of this “magic angle” by rotating two graphene sheets in a way that transformed the material’s behavior, giving rise to new properties like superconductivity.

The physicists’ work “has opened up new frontiers in physics by demonstrating that rotating matter to a given angle allows us to control its behavior, obtaining properties that could have a major industrial impact,” explained award committee member María José García Borge, a research professor at the Institute for the Structure of Matter. “Superconductivity, for example, could bring about far more sustainable electricity transmission, with virtually no energy loss.”

Almost science fiction

MacDonald’s initial discovery had little immediate impact. It was not until some years later, when it was confirmed in the laboratory by Jarillo-Herrero, that its true importance was revealed. 

“The community would never have been so interested in my subject, if there hadn’t been an experimental program that realized that original vision,” observes MacDonald, who refers to his co-laureate’s achievement as “almost science fiction.”

Jarillo-Herrero had been intrigued by the possible effects of placing two graphene sheets on top of each other with a precise rotational alignment, because “it was uncharted territory, beyond the reach of the physics of the past, so was bound to produce some interesting results.”

But the scientist was still unsure of how to make it work in the lab. For years, he had been stacking together layers of the super-thin material, but without being able to specify the angle between them. Finally, he devised a way to do so, making the angle smaller and smaller until he got to the “magic” angle of 1.1 degrees at which the graphene revealed some extraordinary behavior.
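To get a feel for why such tiny angles matter, note that overlaying two identical lattices with a small twist produces a moiré superlattice whose period grows rapidly as the angle shrinks. The short sketch below is our illustration, not part of the award citation; it uses the standard geometric relation lambda = a / (2 sin(theta/2)) and graphene's lattice constant of roughly 0.246 nanometers.

```python
import math

def moire_wavelength(theta_deg: float, a_nm: float = 0.246) -> float:
    """Moire period for two identical lattices twisted by theta:
    lambda = a / (2 * sin(theta / 2)), with graphene's a ~ 0.246 nm."""
    theta = math.radians(theta_deg)
    return a_nm / (2.0 * math.sin(theta / 2.0))

for angle in (5.0, 2.0, 1.1):
    print(f"{angle:4.1f} deg -> moire period ~ {moire_wavelength(angle):5.1f} nm")

# The 1.1-degree twist yields a superlattice period of roughly 13 nm,
# about 50 times graphene's own atomic spacing.
```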

“It was a big surprise, because the technique we used, though conceptually straightforward, was hard to pull off in the lab,” says Jarillo-Herrero, who is also affiliated with the Materials Research Laboratory.

Since 2009, the BBVA Foundation has given Frontiers of Knowledge Awards to more than a dozen MIT faculty members. The Frontiers of Knowledge Awards, spanning eight prize categories, recognize world-class research and cultural creation and aim to celebrate and promote the value of knowledge as a global public good. The BBVA Foundation works to support scientific research and cultural creation, disseminate knowledge and culture, and recognize talent and innovation.


Cancer’s secret safety net

Researchers uncover a hidden mechanism that allows cancer to develop aggressive mutations.


Researchers in Class of 1942 Professor of Chemistry Matthew D. Shoulders’ lab have uncovered a sinister hidden mechanism that can allow cancer cells to survive (and, in some cases, thrive) even when hit with powerful drugs. The secret lies in a cellular “safety net” that gives cancer the freedom to develop aggressive mutations.

This fascinating intersection between molecular biology and evolutionary dynamics, published Jan. 22 on the cover of Molecular Cell, focuses on the most famous anti-cancer gene in the human body, TP53 (tumor protein 53, known as p53), and suggests that cancer cells don’t just mutate by accident — they create a specialized environment that makes dangerous mutations possible. 

The guardian under attack

Tasked with stopping damaged cells from dividing, the p53 protein has been known for decades as the “guardian of the genome,” and its gene is the most mutated in cancer. Some of the most perilous of these mutations are known as “dominant-negative” variants: not only do the mutant proteins stop working, they actually prevent any healthy p53 in the cell from doing its job, essentially disarming the body’s primary defense system.

To function, p53 and most other proteins must fold into specific 3D shapes, much like precise cellular origami. Typically, if a mutation occurs that ruins this shape, the protein becomes a tangled mess, and the cell destroys it.

A specialized network of proteins collectively known as the proteostasis network, which includes molecular chaperones, helps proteins fold into their correct shapes.

“Many chaperone networks are known to be upregulated in cancer cells, for reasons that are not totally clear,” says Stephanie Halim, a graduate student in the Shoulders Group and co-first author of the study, along with Rebecca Sebastian PhD ’22. “We hypothesized that increasing the activities of these helpful protein folding networks can allow cancer cells to tolerate more mutations than a regular cell.”

The research team took a close look at this helper system. A master regulator called Heat Shock Factor 1 (HSF1) controls the composition of the proteostasis network, upregulating it to create a supportive protein folding environment in response to stress. In healthy cells, HSF1 stays dormant until heat or toxins appear. In cancer, HSF1 is often permanently in action mode.

To see how this works in real time, the team created a specialized cancer cell line that let them chemically “turn up” HSF1 activity on demand. They then used a cutting-edge technique to express every possible single-mutant version of the p53 protein — testing thousands of different genetic “typos” at once.

The results were clear: When HSF1 was amplified, the cancer cells became much better at handling “bad” mutations. Normally, these specific mutations are so physically disruptive that they would cause the protein to collapse and fail. However, with HSF1 providing extra folding help, these unstable, cancer-driving proteins were able to stay intact and keep the cancer growing.
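For a sense of the scale of such a screen: the canonical human p53 protein is 393 amino acids long, so an exhaustive single-substitution scan spans thousands of variants. The toy enumeration below is our back-of-envelope illustration, not the study's actual method, and the placeholder sequence stands in for real p53.

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def single_mutants(sequence):
    """Yield (position, original, substitute) for every single-residue swap."""
    for pos, original in enumerate(sequence, start=1):
        for substitute in AMINO_ACIDS:
            if substitute != original:
                yield pos, original, substitute

# Canonical p53 is 393 residues, so a saturating single-mutant screen
# covers 393 * 19 = 7,467 variants -- thousands of "typos" in one pass.
placeholder_p53 = "M" * 393
print(sum(1 for _ in single_mutants(placeholder_p53)))  # 7467
```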

“These findings show that chaperone networks can reshape the fundamental mutational tolerance of the most mutated gene in cancer, linking proteostasis network activity directly to cancer development,” said Halim. “This work also puts us one step closer to understanding how tinkering with cellular protein folding pathways can help with cancer treatment.”

Unravelling cancer’s safety net

The study revealed that HSF1 activity specifically buffers normally disruptive amino acid substitutions located deep inside the protein’s core — the most sensitive region. Without the extra folding help, proteins carrying these substitutions would likely be degraded. With it, the cancer cell can keep these broken proteins around to help it grow.

This discovery helps explain why cancer is so resilient, and why previous attempts to treat cancer by blocking chaperone proteins (like HSP90, an abundant cellular chaperone) have been so complex. By understanding how cancer “buffers” its own bad mutations, doctors may one day be able to break that safety net, forcing the cancer’s own mutations to become its downfall.

The research was conducted in collaboration with the labs of professors Yu-Shan Lin of Tufts University; Francisco J. Sánchez-Rivera of the MIT Department of Biology; William C. Hahn, institute member of the Broad Institute of MIT and Harvard and professor of medicine in the Department of Medical Oncology at the Dana-Farber Cancer Institute and Harvard Medical School; and Marc L. Mendillo of Northwestern University.


Richard Hynes, a pioneer in the biology of cellular adhesion, dies at 81

Professor, mentor, and leader at MIT for more than 50 years shaped fundamental understandings of cell adhesion, the extracellular matrix, and molecular mechanisms of metastasis.


MIT Professor Emeritus Richard O. Hynes PhD ’71, a cancer biologist whose discoveries reshaped modern understandings of how cells interact with each other and their environment, passed away on Jan. 6. He was 81.

Hynes is best known for his discovery of integrins, a family of cell-surface receptors essential to cell–cell and cell–matrix adhesion. He played a critical role in establishing the field of cell adhesion biology, and his continuing research revealed mechanisms central to embryonic development, tissue integrity, and diseases including cancer, fibrosis, thrombosis, and immune disorders.

Hynes was the Daniel K. Ludwig Professor for Cancer Research, Emeritus, an emeritus professor of biology, and a member of the Koch Institute for Integrative Cancer Research at MIT and the Broad Institute of MIT and Harvard. During his more than 50 years on the faculty at MIT, he was deeply respected for his academic leadership at the Institute and internationally, as well as his intellectual rigor and contributions as an educator and mentor.

“Richard had an enormous impact in his career. He was a visionary leader of the MIT Cancer Center, what is now the Koch Institute, during a time when the progress in understanding cancer was just starting to be translated into new therapies,” reflects Matthew Vander Heiden, director of the Koch Institute and the Lester Wolfe (1919) Professor of Molecular Biology. “The research from his laboratory launched an entirely new field by defining the molecules that mediate interactions between cells and between cells and their environment. This laid the groundwork for better understanding the immune system and metastasis.”

Pond skipper

Born in Kenya, Hynes grew up during the 1950s in Liverpool, in the United Kingdom. While he sometimes recounted stories of being schoolmates with two of the Beatles, and in the same Boy Scouts troop as Paul McCartney, his academic interests were quite different, and he specialized in the sciences at a young age. Both of his parents were scientists: His father was a freshwater ecologist, and his mother a physics teacher. Hynes and all three of his siblings followed their parents into scientific fields.

"We talked science at home, and if we asked questions, we got questions back, not answers. So that conditioned me into being a scientist, for sure," Hynes said of his youth.

He described his time as an undergraduate and master’s student at Cambridge University during the 1960s as “just fantastic,” noting that it was shortly after two 1962 Nobel Prizes were awarded to Cambridge researchers — one to Francis Crick and James Watson for the structure of DNA, the other to John Kendrew and Max Perutz for the structures of proteins — and Cambridge was “the place to be” to study biology.

Newly married, Hynes and his wife traded Cambridge, U.K., for Cambridge, Massachusetts, so that he could conduct doctoral work at MIT under the direction of Paul Gross. He tried (and by his own assessment, failed) to differentiate maternal messages among the three germ layers of sea urchin embryos. However, he did make early successful attempts to isolate the globular protein tubulin, a building block for essential cellular structures, from sea urchins.

Inspired by a course he had taken with Watson in the United States, Hynes began work during his postdoc at the Institute of Cancer Research in the U.K. on the early steps of oncogenic transformation and the role of cell migration and adhesion; it was here that he made his earliest discovery and characterizations of the fibronectin protein.

Recruited back to MIT by Salvador Luria, founding director of the MIT Center for Cancer Research, whom he had met during a summer at Woods Hole Oceanographic Institution on Cape Cod, Hynes returned to the Institute in 1975 as a founding faculty member of the center and an assistant professor in the Department of Biology.

Big questions about tiny cells

To his own research, Hynes brought the same spirit of inquiry that had characterized his upbringing, asking fundamental questions: How do cells interact with each other? How do they stick together to form tissues?

His research focused on proteins that allow cells to adhere to each other and to the extracellular matrix — a mesh-like network that surrounds cells, providing structural support, as well as biochemical and mechanical cues from the local microenvironment. These proteins include integrins, a type of cell surface receptor, and fibronectins, a family of extracellular adhesive proteins. Integrins are the major adhesion receptors connecting the extracellular matrix to the intracellular cytoskeleton, or main architectural support within the cell.

Hynes began his career as a developmental biologist, studying how cells move to the correct locations during embryonic development, a process that depends on the proper modulation of cell adhesion.

Hynes’ work also revealed that dysregulation of cell-to-matrix contact plays an important role in cancer cells’ ability to detach from a tumor and spread to other parts of the body, key steps in metastasis.

As a postdoc, Hynes had begun studying the differences in the surface landscapes of healthy cells and tumor cells. It was this work that led to the discovery of fibronectin, which is often lost when cells become cancerous.

He and others found that fibronectin is an important part of the extracellular matrix. When fibronectin is lost, cancer cells can more easily free themselves from their original location and metastasize to other sites in the body. By studying how fibronectin normally interacts with cells, Hynes and others discovered a family of cell surface receptors known as integrins, which function as important physical links with the extracellular matrix. In humans, 24 integrin proteins have been identified. These proteins help give tissues their structure, enable blood to clot, and are essential for embryonic development.

“Richard’s discoveries, along with others’, of cell surface integrins led to the development of a number of life-altering treatments. Among these are treatment of autoimmune diseases such as multiple sclerosis,” notes longtime colleague Phillip Sharp, MIT Institute professor emeritus.

As research technologies advanced, including proteomic and extracellular matrix isolation methods developed directly in Hynes’ laboratory, he and his group were able to uncover increasingly detailed information about specific cell adhesion proteins, the biological mechanisms by which they operate, and the roles they play in normal biology and disease.

In cancer, their work helped to uncover how cell adhesion (and the loss thereof) and the extracellular matrix contribute not only to fundamental early steps in the metastatic process, but also to tumor progression, therapeutic response, and patient prognosis. This included studies that mapped matrix protein signatures associated with cancer and non-cancer cells and tissues, followed by investigations into how differentially expressed matrix proteins can promote or suppress cancer progression.

Hynes and his colleagues also demonstrated how extracellular matrix composition can influence immunotherapy, such as the importance of a family of cell adhesion proteins called selectins for recruiting natural killer cells to tumors. Further, Hynes revealed links between fibronectin, integrins, and other matrix proteins with tumor angiogenesis, or blood vessel development, and also showed how interaction with platelets can stimulate tumor cells to remodel the extracellular matrix to support invasion and metastasis. In pursuing these insights into the oncogenic mechanisms of matrix proteins, Hynes and members of his laboratory have identified useful diagnostic and prognostic biomarkers, as well as therapeutic targets.

Along the way, Hynes shaped not only the research field, but also the careers of generations of trainees.

“There was much to emulate in Richard’s gentle, patient, and generous approach to mentorship. He centered the goals and interests of his trainees, fostered an inclusive and intellectually rigorous environment, and cared deeply about the well-being of his lab members. Richard was a role model for integrity in both personal and professional interactions and set high expectations for intellectual excellence,” recalls Noor Jailkhani, a former Hynes Lab postdoc.

Jailkhani is CEO and co-founder, with Hynes, of Matrisome Bio, a biotech company developing first-in-class targeted therapies for cancer and fibrosis by leveraging the extracellular matrix. “The impact of his long and distinguished scientific career was magnified through the generations of trainees he mentored, whose influence spans academia and the biotechnology industry worldwide. I believe that his dedication to mentorship stands among his most far-reaching and enduring contributions,” she says.

A guiding light

Widely sought for his guidance, Hynes served in a number of key roles at MIT and in the broader scientific community. As head of MIT’s Department of Biology from 1989 to 1991, and then for a decade as director of the MIT Center for Cancer Research, he helped shape the Institute’s programs in both areas.

“Words can’t capture what a fabulous human being Richard was. I left every interaction with him with new insights and the warm glow that comes from a good conversation,” says Amy Keating, the Jay A. Stein (1968) Professor, professor of biology and biological engineering, and head of the Department of Biology. “Richard was happy to share stories, perspectives, and advice, always with a twinkle in his eye that conveyed his infinite interest in and delight with science, scientists, and life itself. The calm support that he offered me, during my years as department head, meant a lot and helped me do my job with confidence.”

Hynes served as director of the MIT Center for Cancer Research from 1991 until 2001, positioning the center’s distinguished cancer biology program for expansion into its current, interdisciplinary research model as MIT’s Koch Institute for Integrative Cancer Research. “He recruited and strongly supported Tyler Jacks to the faculty, who subsequently became director and headed efforts to establish the Koch Institute,” recalls Sharp.

Jacks, a David H. Koch (1962) Professor of Biology and founding director of the Koch Institute, remembers Hynes as a thoughtful, caring, and highly effective leader in the Center for Cancer Research, or CCR, and in the Department of Biology. “I was fortunate to be able to lean on him when I took over as CCR director. He encouraged me to drop in — unannounced — with questions and concerns, which I did regularly. I learned a great deal from Richard, at every level,” he says.

Hynes’ leadership and recognition extended well beyond MIT to national and international contexts, helping to shape policy and strengthen connections between MIT researchers and the wider field. He served as a scientific governor of the Wellcome Trust, a global health research and advocacy foundation based in the United Kingdom, and co-chaired U.S. National Academy committees establishing guidelines for stem cell and genome editing research.

“Richard was an esteemed scientist, a stimulating colleague, a beloved mentor, a role model, and to me a partner in many endeavors both within and beyond MIT,” notes H. Robert Horvitz, a David H. Koch (1962) Professor of Biology. “He was a wonderful human being, and a good friend. I am sad beyond words at his passing.”

Named a Howard Hughes Medical Institute investigator in 1988, Hynes was recognized over the following decades with a number of other notable honors. Most recently, he received the 2022 Albert Lasker Basic Medical Research Award, which he shared with Erkki Ruoslahti of Sanford Burnham Prebys and Timothy Springer of Harvard University, for his discovery of integrins and pioneering work in cell adhesion.

His other awards include the Canada Gairdner International Award, the Distinguished Investigator Award from the International Society for Matrix Biology, the Robert and Claire Pasarow Medical Research Award, the E.B. Wilson Medal from the American Society for Cell Biology, the David Rall Medal from the National Academy of Medicine, and the Paget-Ewing Award from the Metastasis Research Society. Hynes was a member of the National Academy of Sciences, the National Academy of Medicine, the Royal Society of London, the American Association for the Advancement of Science, and the American Academy of Arts and Sciences.

Easily recognized by a commanding stature that belied his soft-spoken nature, Hynes was known around MIT’s campus not only for his acuity, integrity, and wise counsel, but also for his community spirit and service. From serving food at community socials to moderating events and meetings or recognizing the success of colleagues and trainees, his willingness to help spanned roles of every size.

“Richard was a phenomenal friend and colleague. He approached complex problems with a thoughtfulness and clarity that few can achieve,” notes Vander Heiden. “He was also so generous in his willingness to provide help and advice, and did so with a genuine kindness that was appreciated by everyone.”

Hynes is survived by his wife Fleur, their sons Hugh and Colin and their partners, and four grandchildren.


Biology-based brain model matches animals in learning, enables new discovery

New “biomimetic” model of brain circuits and function at multiple scales produced naturalistic dynamics and learning, and even identified curious behavior by some neurons.


A new computational model of the brain, based closely on its biology and physiology, not only learned a simple visual category learning task exactly as well as lab animals, but also enabled the discovery of counterintuitive activity in a group of neurons that researchers who had run the same task with animals had not previously noticed in their data, says a team of scientists at Dartmouth College, MIT, and the State University of New York at Stony Brook.

Notably, the model produced these achievements without ever being trained on any data from animal experiments. Instead, it was built from scratch to faithfully represent how neurons connect into circuits and then communicate electrically and chemically across broader brain regions to produce cognition and behavior. Then, when the research team asked the model to perform the same task that they had previously performed with the animals (looking at patterns of dots and deciding which of two broader categories they fit), it produced highly similar neural activity and behavioral results, acquiring the skill with almost exactly the same erratic progress.

“It’s just producing new simulated plots of brain activity that then only afterward are being compared to the lab animals. The fact that they match up as strikingly as they do is kind of shocking,” says Richard Granger, a professor of psychological and brain sciences at Dartmouth and senior author of a new study in Nature Communications that describes the model.

A goal in making the model, and newer iterations developed since the paper was written, is not only to offer insight into how the brain works, but also into how it might work differently in disease and what interventions could correct those aberrations, adds co-author Earl K. Miller, Picower Professor in The Picower Institute for Learning and Memory at MIT. Miller, Granger, and other members of the research team have founded the company Neuroblox.ai to develop the models’ biotech applications. Co-author Lilianne R. Mujica-Parodi, a biomedical engineering professor at Stony Brook who is lead principal investigator for the Neuroblox Project, is CEO of the company.

“The idea is to make a platform for biomimetic modeling of the brain so you can have a more efficient way of discovering, developing, and improving neurotherapeutics. Drug development and efficacy testing, for example, can happen earlier in the process, on our platform, before the risk and expense of clinical trials,” says Miller, who is also a faculty member of MIT’s Department of Brain and Cognitive Sciences.

Making a biomimetic model

Dartmouth postdoc Anand Pathak created the model, which differs from many others in that it incorporates both small details, such as how individual pairs of neurons connect with each other, and large-scale architecture, including how information processing across regions is affected by neuromodulatory chemicals such as acetylcholine. Pathak and the team iterated their designs to ensure they obeyed various constraints observed in real brains, such as how neurons become synchronized by broader rhythms. Many other models focus only on the small or big scales, but not both, he says.

“We didn’t want to lose the tree, and we didn’t want to lose the forest,” Pathak says.

The metaphorical “trees,” called “primitives” in the study, are small circuits of a few neurons each that connect based on electrical and chemical principles of real cells to perform fundamental computational functions. For example, within the model’s version of the brain’s cortex, one primitive design has excitatory neurons that receive input from the visual system via synapse connections affected by the neurotransmitter glutamate. Those excitatory neurons connect densely with inhibitory neurons, which compete to shut down rival excitatory neurons — a “winner-take-all” architecture found in real brains that regulates information processing.
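A minimal rate-based simulation, our simplification rather than one of the study's actual primitives, shows how such shared inhibition lets the strongest input silence its rivals:

```python
import numpy as np

def winner_take_all(inputs, steps=200, inhibition=0.9, dt=0.1):
    """Toy rate model: every excitatory unit feeds a shared inhibitory
    pool that suppresses them all, so only the strongest input survives."""
    rates = np.zeros_like(inputs, dtype=float)
    for _ in range(steps):
        pooled = inhibition * rates.sum()              # shared inhibition
        rates = np.clip(rates + dt * (inputs - pooled), 0.0, None)
    return rates

print(winner_take_all(np.array([1.0, 1.2, 0.8])).round(2))
# -> approximately [0.  1.33  0.]: the unit with the largest input wins
```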

At a larger scale, the model encompasses four brain regions needed for basic learning and memory tasks: a cortex, a brainstem, a striatum, and a “tonically active neuron” (TAN) structure that can inject a little “noise” into the system via bursts of acetylcholine. For instance, as the model engaged in the task of categorizing the presented patterns of dots, the TAN at first ensured some variability in how the model acted on the visual input so that the model could learn by exploring varied actions and their outcomes. As the model continued to learn, cortex and striatum circuits strengthened connections that suppressed the TAN, enabling the model to act on what it was learning with increasing consistency.
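The TAN's role, injecting variability early and then being suppressed as learning consolidates, resembles the decaying exploration noise used in simple reinforcement learners. The caricature below is our analogy, not the paper's model; the decay rate, learning rate, and reward setup are invented for illustration.

```python
import random

def run_task(n_trials=500, actions=(0, 1), correct=1, lr=0.1):
    """Toy learner in which a decaying noise term stands in for the TAN."""
    value = {a: 0.0 for a in actions}
    epsilon = 0.5                        # initial TAN-like variability
    rewards = []
    for _ in range(n_trials):
        if random.random() < epsilon:    # noise wins: explore an action
            action = random.choice(actions)
        else:                            # strengthened circuits: exploit
            action = max(value, key=value.get)
        reward = 1.0 if action == correct else 0.0
        value[action] += lr * (reward - value[action])
        epsilon *= 0.99                  # learning suppresses the noise
        rewards.append(reward)
    return sum(rewards[-100:]) / 100     # late-trial accuracy nears 1.0

random.seed(0)
print(run_task())
```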

As the model engaged in the learning task, real-world properties emerged, including a dynamic that Miller has commonly observed in his research with animals. As learning progressed, the cortex and striatum became more synchronized in the “beta” frequency band of brain rhythms, and this increased synchrony correlated with times when the model (and the animals) made the correct category judgment about what they were seeing.

Revealing “incongruent” neurons

But the model also presented the researchers with a group of neurons — about 20 percent — whose activity appeared highly predictive of error. When these so-called “incongruent” neurons influenced circuits, the model would make the wrong category judgment. At first, Granger says, the team figured it was a quirk of the model. But then they looked at the real-brain data Miller’s lab accumulated when animals performed the same task.

“Only then did we go back to the data we already had, sure that this couldn’t be in there because somebody would have said something about it, but it was in there, and it just had never been noticed or analyzed,” he says.

Miller says these counterintuitive cells might serve a purpose: it’s all well and good to learn the rules of a task, but what if the rules change? Trying out alternatives from time to time can enable a brain to stumble upon a newly emerging set of conditions. Indeed, a separate Picower Institute lab recently published evidence that humans and other animals do this sometimes.

While the model described in the new paper performed beyond the team’s expectations, Granger says, the team has been expanding it to make it sophisticated enough to handle a greater variety of tasks and circumstances. For instance, they have added more regions and new neuromodulatory chemicals. They’ve also begun to test how interventions such as drugs affect its dynamics.

In addition to Granger, Miller, Pathak, and Mujica-Parodi, the paper’s other authors are Scott Brincat, Haris Organtzidis, Helmut Strey, Sageanne Senneff, and Evan Antzoulatos.

The Baszucki Brain Research Fund, the Office of Naval Research, and the Freedom Together Foundation provided support for the research.


Akorfa Dagadu named 2027 Schwarzman Scholar

The MIT senior will spend the 2026-27 year at Tsinghua University in Beijing, studying global affairs.


MIT undergraduate Akorfa Dagadu has been named a Schwarzman Scholar and will join the program’s Class of 2026-27 scholars from 40 countries and 83 universities. This year’s 150 Schwarzman Scholars were selected for their leadership potential from a pool of over 5,800 applicants, the highest number in the Schwarzman Scholarship’s 11-year history.

Schwarzman Scholars pursue a one-year, fully funded master’s degree program in global affairs at Schwarzman College, Tsinghua University, in Beijing, China. The graduate curriculum focuses on the pillars of leadership, global affairs, and China, with additional opportunities for cultural immersion, experiential learning, and professional development. The program aims to build a global network of leaders with a well-rounded understanding of China’s evolving role in the world.

Hailing from Ghana, Dagadu is a senior majoring in chemical-biological engineering. At MIT, she researches how enzyme-polymer systems can be designed to break down plastics at end-of-life, work that has been recognized internationally through publications and awards, including the Cell Press Rising Scientist Award.

Dagadu is the founder of Ishara, a venture transforming recycling in Ghana by connecting informal waste pickers to transparent, efficient systems with potential to scale across growth markets. She aspires to establish a materials innovation hub in Africa to address the end-of-life of materials, from plastics to e-waste.

MIT’s Schwarzman Scholar applicants receive guidance and mentorship from the distinguished fellowships team in MIT Career Advising and Professional Development, as well as the Presidential Committee on Distinguished Fellowships. Students and alumni interested in learning more should contact Kimberly Benard, associate dean and director of distinguished fellowships and academic excellence.


Featured video: How tiny satellites help us track hurricanes and other weather events

Mini microwave sounders developed at Lincoln Laboratory, demonstrated on a NASA mission, and now transferred to industry, are expanding storm-forecasting capabilities.


MIT Lincoln Laboratory has transformed weather intelligence by miniaturizing microwave sounders, instruments that measure Earth's atmospheric temperature, moisture, and water vapor. These instruments are 1/100th the size of traditional sounders aboard multibillion-dollar satellites, enabling them to fit on shoebox-sized CubeSats. 

When deployed in a constellation, the CubeSats can observe rapidly intensifying storms near-hourly — providing fresh data to forecasting professionals during critical windows of storm development that have largely been undetectable by past remote-sensing technology.

Developed at Lincoln Laboratory, the mini microwave sounders were first demonstrated on NASA's TROPICS mission, which measured temperature and humidity soundings as well as precipitation. TROPICS concluded in 2025 with over 11 billion observations, providing scientists with key insights into tropical cyclone evolution. 

Now the technology has been licensed by the commercial firm Tomorrow.io, allowing for the enhancement of global weather coverage for customers in aviation, logistics, agriculture, and emergency management. Tomorrow.io provides clients with hyperlocal forecasts around the globe and is set to launch its own constellation of satellites based on the TROPICS program. Says John Springman, Tomorrow.io's head of space and sensing: “Our overall goal is to fundamentally improve weather forecasts, and that'll improve our downstream products like our weather intelligence.”

Video by Tim Briggs/Lincoln Laboratory | 13 minutes, 58 seconds


Professor of the practice Robert Liebeck, leading expert on aircraft design, dies at 87

A giant in aviation, Liebeck had taught at MIT since 2000 and was a pioneer of the famed blended wing body experimental aircraft.


Robert Liebeck, a professor of the practice in the MIT Department of Aeronautics and Astronautics and one of the world’s leading experts on aircraft design, aerodynamics, and hydrodynamics, died on Jan. 12 at age 87.

“Bob was a mentor and dear friend to so many faculty, alumni, and researchers at AeroAstro over the course of 25 years,” says Julie Shah, department head and the H.N. Slater Professor of Aeronautics and Astronautics at MIT. “He’ll be deeply missed by all who were fortunate enough to know him.”

Liebeck’s long and distinguished career in aerospace engineering included a number of foundational contributions to aerodynamics and aircraft design, beginning with his graduate research into high-lift airfoils. His novel designs came to be known as “Liebeck airfoils” and are used primarily for high-altitude reconnaissance airplanes; Liebeck airfoils have also been adapted for use in Formula One racing cars, racing sailboats, and even a flying replica of a giant pterosaur.

He was perhaps best known for his groundbreaking work on blended wing body (BWB) aircraft. He oversaw the BWB project at Boeing during his celebrated five-decade tenure at the company, working closely with NASA on the X-48 experimental aircraft. After retiring as senior technical fellow at Boeing in 2020, Liebeck remained active in BWB research. He served as technical advisor at BWB startup JetZero, which is aiming to build a more fuel-efficient aircraft for both military and commercial use and has set a target date of 2027 for its demonstration flight. 

Liebeck was appointed a professor of the practice at MIT in 2000, and taught classes on aircraft design and aerodynamics. 

“Bob contributed to the department both in aircraft capstones and also in advising students and mentoring faculty, including myself,” says John Hansman, the T. Wilson Professor of Aeronautics and Astronautics. “In addition to aviation, Bob was very significant in car racing and developed the downforce wing and flap system which has become standard on F1, IndyCar, and NASCAR cars.”

He was a major contributor to the Silent Aircraft Project, a collaboration between MIT and Cambridge University led by Dame Ann Dowling. Liebeck also worked closely with Professor Woody Hoburg ’08 and his research group, advising on students’ research into efficient methods for designing aerospace vehicles. Before Hoburg was accepted into the NASA astronaut corps in 2017, the group produced an open-source Python package, GPkit, for geometric programming, which was used to design a five-day endurance unmanned aerial vehicle for the U.S. Air Force.

“Bob was universally respected in aviation and he was a good friend to the department,” remembers Professor Ed Greitzer.

Liebeck was an AIAA honorary fellow and Boeing senior technical fellow, as well as a member of the National Academy of Engineering, Royal Aeronautical Society, and Academy of Model Aeronautics. He was a recipient of the Guggenheim Medal and ASME Spirit of St. Louis Medal, among many other awards, and was inducted into the International Air and Space Hall of Fame.

An avid runner and motorcyclist, Liebeck is remembered by friends and colleagues for his adventurous nature and generosity of spirit. Throughout a career punctuated by honors and achievements, Liebeck found his greatest satisfaction in teaching. In addition to his role at MIT, he was an adjunct faculty member at the University of California at Irvine and served as a faculty member for that university’s Design/Build/Fly and Human-Powered Airplane teams.

“It is the one job where I feel I have done some good — even after a bad lecture,” he told AeroAstro Magazine in 2007. “I have decided that I am finally beginning to understand aeronautical engineering, and I want to share that understanding with our youth.”


Electrifying boilers to decarbonize industry

AtmosZero, co-founded by Addison Stark SM ’10, PhD ’14, developed a modular heat pump to electrify the centuries-old steam boiler.


More than 200 years ago, the steam boiler helped spark the Industrial Revolution. Since then, steam has been the lifeblood of industrial activity around the world. Today the production of steam — created by burning gas, oil, or coal to boil water — accounts for a significant percentage of global energy use in manufacturing, powering the creation of paper, chemicals, pharmaceuticals, food, and more.

Now, the startup AtmosZero, founded by Addison Stark SM ’10, PhD ’14; Todd Bandhauer; and Ashwin Salvi, is taking a new approach to electrify the centuries-old steam boiler. The company has developed a modular heat pump capable of delivering industrial steam at temperatures up to 150 degrees Celsius to serve as a drop-in replacement for combustion boilers.

The company says its first 1-megawatt steam system is far cheaper to operate than commercially available electric solutions thanks to ultra-efficient compressor technology, which uses 50 percent less electricity than electric resistive boilers. The founders are hoping that’s enough to make decarbonized steam boilers drive the next industrial revolution.

“Steam is the most important working fluid ever,” says Stark, who serves as AtmosZero’s CEO. “Today everything is built around the ubiquitous availability of steam. Cost-effectively electrifying that requires innovation that can scale. In other words, it requires a mass-produced product — not one-off projects.”

Tapping into steam

Stark joined the Technology and Policy Program when he came to MIT in 2007. He ultimately completed a dual master’s degree by adding mechanical engineering to his studies.

“I was interested in the energy transition and in accelerating solutions to enable that,” Stark says. “The transition isn’t happening in a vacuum. You need to align economics, policy, and technology to drive that change.”

Stark stayed at MIT to earn his PhD in mechanical engineering, studying thermochemical biofuels.

After MIT, Stark began working on early-stage energy technologies with the Department of Energy’s Advanced Research Projects Agency-Energy (ARPA-E), with a focus on manufacturing efficiency, the energy-water nexus, and electrification.

“Part of that work involved applying my training at MIT to things that hadn’t really been innovated on for 50 years,” Stark says. “I was looking at the heat exchanger. It’s so fundamental. I thought, ‘How might we reimagine it in the context of modern advances in manufacturing technology?’”

The problem is as difficult as it is consequential, touching nearly every corner of the global industrial economy. More than 2.2 gigatons of CO2 emissions are generated each year to turn water into steam — accounting for more than 5 percent of global energy-related emissions.

In 2020, Stark co-authored an article in the journal Joule with Gregory Thiel SM ’12, PhD ’15 titled, “To decarbonize industry, we must decarbonize heat.” The article examined opportunities for industrial heat decarbonization, and it got Stark excited about the potential impact of a standardized, scalable electric heat pump.

Most electric boiler options today bring huge increases in operating costs. Many also make use of factory waste heat, which requires pricey retrofits. Stark never imagined he’d become an entrepreneur, but he soon realized no one was going to act on his findings for him.

“The only path to seeing this invention brought out into the world was to found and run the company,” Stark says. “It’s something I didn’t anticipate or necessarily want, but here I am.”

Stark partnered with former ARPA-E awardee Todd Bandhauer, who had been inventing new refrigerant compressor technology in his lab at Colorado State University, and former ARPA-E colleague Ashwin Salvi. The team officially founded AtmosZero in 2022.

“The compressor is the engine of the heat pump and defines the efficiency, cost, and performance,” Stark says. “The fundamental challenge of delivering heat is that the higher your heat pump is raising the air temperature, the lower your maximum efficiency. It runs into thermodynamic limitations. By designing for optimum efficiency in the operational windows that matter for the refrigerants we’re using, and for the precision manufacturing of our compressors, we’re able to maximize the individual stages of compression to maximize operational efficiency.”
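The thermodynamic limit Stark alludes to is the Carnot coefficient of performance (COP) for heating, T_hot / (T_hot - T_cold) in absolute temperature: the larger the temperature lift, the lower the ideal COP. The quick sketch below uses our own assumed conditions, roughly 20 degrees Celsius ambient air and 150 C steam, to show why a heat pump can still comfortably beat a resistive boiler, whose COP is 1.

```python
def carnot_cop_heating(t_cold_c: float, t_hot_c: float) -> float:
    """Ideal (Carnot) heating COP: T_hot / (T_hot - T_cold), in kelvin."""
    t_cold = t_cold_c + 273.15
    t_hot = t_hot_c + 273.15
    return t_hot / (t_hot - t_cold)

# Lifting ~20 C ambient heat to 150 C steam gives an ideal COP near 3.3,
# i.e., each unit of electricity could in principle deliver ~3.3 units of
# heat. Real machines fall well short of Carnot, but even a COP of 2
# halves the electricity a resistive boiler (COP 1) would consume --
# consistent with the 50 percent figure the company cites.
print(round(carnot_cop_heating(20.0, 150.0), 2))  # ~3.25
```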

The system can work with waste heat from air or water, but it doesn’t need waste heat to work. Many other electric boilers rely on waste heat, but Stark thinks that adds too much complexity to installation and operations.

Instead, in AtmosZero’s novel heat pump cycle, heat from ambient-temperature air warms a liquid heat-transfer material, which in turn evaporates a refrigerant. The refrigerant flows through the system’s series of compressors and heat exchangers, reaching temperatures high enough to boil water, and its residual heat is recovered once it returns to lower temperatures. The system can be ramped up and down to fit seamlessly into existing industrial processes.

“We can work just like a combustion boiler,” Stark says. “At the end of the day, customers don’t want to change how their manufacturing facilities operate in order to electrify. You can’t change or increase complexity on-site.”

That approach means the boiler can be deployed in a range of industrial contexts without unique project costs or other changes.

“What we really offer is flexibility and something that can drop in with ease and minimize total capital costs,” Stark says.

From 1 to 1,000

AtmosZero already has a pilot 650-kilowatt system operating at a customer facility near its headquarters in Loveland, Colorado. The company is currently focused on demonstrating the system’s year-round durability and reliability as it works to build out its backlog of orders and prepare to scale.

Stark says once the system is brought to a customer’s facility, it can be installed in an afternoon and deployed in a matter of days, with zero downtime.

AtmosZero is aiming to deliver a handful of units to customers over the next year or two, with plans to deploy hundreds of units a year after that. The company is currently targeting manufacturing plants using under 10 megawatts of thermal energy at peak demand, which represents most U.S. manufacturing facilities.

Stark is proud to be part of a growing group of MIT-affiliated decarbonization startups, some of which are targeting specific verticals, like Boston Metal for steel and Sublime Systems for cement. But he says beyond the most common materials, the industry gets very fragmented, with one of the only common threads being the use of steam.

“If we look across industrial segments, we see the ubiquity of steam,” Stark says. “It’s a tremendously ripe opportunity to have impact at scale. Steam cannot be removed from industry. So much of every industrial process that we’ve designed over the last 160 years has been around the availability of steam. So, we need to focus on ways to deliver low-emissions steam rather than removing it from the equation.”


Why it’s critical to move beyond overly aggregated machine-learning metrics

New research detects hidden evidence of mistaken correlations — and provides a method to improve accuracy.


MIT researchers have identified significant examples of machine-learning model failure when those models are applied to data other than what they were trained on, raising questions about the need to test whenever a model is deployed in a new setting.

“We demonstrate that even when you train models on large amounts of data, and choose the best average model, in a new setting this ‘best model’ could be the worst model for 6-75 percent of the new data,” says Marzyeh Ghassemi, an associate professor in MIT’s Department of Electrical Engineering and Computer Science (EECS), a member of the Institute for Medical Engineering and Science, and principal investigator at the Laboratory for Information and Decision Systems.

In a paper presented at the Neural Information Processing Systems (NeurIPS 2025) conference in December, the researchers point out that models trained to effectively diagnose illness in chest X-rays at one hospital, for example, may appear effective at a different hospital on average. The researchers’ performance assessment, however, revealed that some of the best-performing models at the first hospital were the worst-performing on up to 75 percent of patients at the second hospital; when all of the second hospital’s patients were aggregated, high average performance hid this failure.

Their findings demonstrate that although spurious correlations — a simple example of which is when a machine-learning system, not having “seen” many cows pictured at the beach, classifies a photo of a beach-going cow as an orca simply because of its background — are thought to be mitigated by just improving model performance on observed data, they actually still occur and remain a risk to a model’s trustworthiness in new settings. In many instances — including areas examined by the researchers such as chest X-rays, cancer histopathology images, and hate speech detection — such spurious correlations are much harder to detect.

In the case of a medical diagnosis model trained on chest X-rays, for example, the model may have learned to correlate a specific and irrelevant marking on one hospital’s X-rays with a certain pathology. At another hospital where the marking is not used, that pathology could be missed.

Previous research by Ghassemi’s group has shown that models can spuriously correlate such factors as age, gender, and race with medical findings. If, for instance, a model has been trained on more older people’s chest X-rays that have pneumonia and hasn’t “seen” as many X-rays belonging to younger people, it might predict that only older patients have pneumonia.

“We want models to learn how to look at the anatomical features of the patient and then make a decision based on that,” says Olawale Salaudeen, an MIT postdoc and the lead author of the paper, “but really anything that’s in the data that’s correlated with a decision can be used by the model. And those correlations might not actually be robust with changes in the environment, making the model predictions unreliable sources of decision-making.”

Spurious correlations contribute to the risks of biased decision-making. In the NeurIPS conference paper, the researchers showed that, for example, chest X-ray models that improved overall diagnosis performance actually performed worse on patients with pleural conditions or enlarged cardiomediastinum, meaning enlargement of the heart or central chest cavity.

Other authors of the paper included PhD students Haoran Zhang and Kumail Alhamoud, EECS Assistant Professor Sara Beery, and Ghassemi.

While previous work has generally assumed that models ranked best-to-worst by performance will preserve that ranking when applied in new settings, a phenomenon called “accuracy on the line,” the researchers were able to demonstrate examples in which the best-performing models in one setting were the worst-performing in another.

Salaudeen devised an algorithm called OODSelect to find examples where accuracy on the line breaks down. Essentially, he trained thousands of models on in-distribution data, meaning data from the first setting, and calculated their accuracy. He then applied the models to data from the second setting. Wherever the models with the highest first-setting accuracy were wrong on a large share of second-setting examples, those examples formed the problem subsets, or subpopulations. Salaudeen also emphasizes the dangers of aggregate statistics for evaluation, which can obscure more granular and consequential information about model performance.
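In code, that selection step might look something like the sketch below. This is our schematic reading of the procedure as described, not the team's released implementation, and the array names, shapes, and counts are hypothetical.

```python
import numpy as np

def ood_select(id_accuracy, ood_correct, top_k=100, n_examples=50):
    """id_accuracy: (n_models,) per-model accuracy in the first setting.
    ood_correct: (n_models, n_ood) 0/1 per-example correctness in the second.
    Returns indices of second-setting examples the top models get wrong."""
    top_models = np.argsort(id_accuracy)[-top_k:]       # best ID models
    error_rate = 1.0 - ood_correct[top_models].mean(axis=0)
    return np.argsort(error_rate)[-n_examples:]         # most-missed examples

# Tiny demo on random stand-in data.
rng = np.random.default_rng(0)
id_acc = rng.uniform(0.80, 0.95, size=1000)             # 1,000 trained models
ood = rng.integers(0, 2, size=(1000, 5000))             # 5,000 OOD examples
print(ood_select(id_acc, ood)[:10])
```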

In the course of their work, the researchers separated out the “most miscalculated examples” so as not to conflate spurious correlations within a dataset with situations that are simply difficult to classify.

With the NeurIPS paper, the researchers are releasing their code and some of the identified subsets for future work.

Once a hospital, or any organization employing machine learning, identifies subsets on which a model is performing poorly, that information can be used to improve the model for its particular task and setting. The researchers recommend that future work adopt OODSelect in order to highlight targets for evaluation and design approaches to improving performance more consistently.

“We hope the released code and OODSelect subsets become a steppingstone,” the researchers write, “toward benchmarks and models that confront the adverse effects of spurious correlations.”


To flexibly organize thought, the brain makes use of space

MIT researchers tested their theory of spatial computing, which holds that the brain recruits and controls ad hoc groups of neurons for cognitive tasks by applying brain waves to patches of the cortex.


Our thoughts are specified by our knowledge and plans, yet our cognition can also be fast and flexible in handling new information. How does the well-controlled and yet highly nimble nature of cognition emerge from the brain’s anatomy of billions of neurons and circuits? 

A study by researchers in The Picower Institute for Learning and Memory at MIT provides new evidence from tests in animals that the answer might be found within a theory called “spatial computing.”

First proposed in 2023 by Picower Professor Earl K. Miller and colleagues Mikael Lundqvist and Pawel Herman, spatial computing theory explains how neurons in the prefrontal cortex can be organized on the fly into a functional group capable of carrying out the information processing required by a cognitive task. Moreover, it allows for neurons to participate in multiple such groups, as years of experiments have shown that many prefrontal neurons can indeed participate in multiple tasks at once. 

The basic idea of the theory is that the brain recruits and organizes ad hoc “task forces” of neurons by using “alpha” and “beta” frequency brain waves (about 10-30 Hz) to apply control signals to physical patches of the prefrontal cortex. Rather than having to rewire themselves into new physical circuits every time a new task must be done, the neurons in the patch instead process information by following the patterns of excitation and inhibition imposed by the waves.

Think of the alpha and beta frequency waves as stencils that shape when and where in the prefrontal cortex groups of neurons can take in or express information from the senses, Miller says. In that way, the waves represent the rules of the task and can organize how the neurons electrically “spike” to process the information content needed for the task.
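The stencil metaphor can be made concrete with a toy simulation, ours rather than the study's model: a spatial pattern of alpha/beta power across a patch of simulated cortex determines where sensory-driven spiking is permitted.

```python
import numpy as np

rng = np.random.default_rng(1)

grid = 16
alpha_power = np.zeros((grid, grid))
alpha_power[:, : grid // 2] = 1.0        # the "rule": inhibit the left half

sensory_drive = rng.uniform(0.0, 1.0, size=(grid, grid))
spike_prob = sensory_drive * (1.0 - alpha_power)   # waves gate the spiking
spikes = rng.random((grid, grid)) < spike_prob

print("spikes where wave power is high:", int(spikes[:, : grid // 2].sum()))  # 0
print("spikes where wave power is low: ", int(spikes[:, grid // 2 :].sum()))  # ~64
```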

“Cognition is all about large-scale neural self-organization,” says Miller, senior author of the paper in Current Biology and a faculty member in MIT’s Department of Brain and Cognitive Sciences. “Spatial computing explains how the brain does that.”

Testing five predictions

A theory is just an idea. In the study, lead author Zhen Chen and other current and former members of Miller’s lab put spatial computing to the test by examining whether five of its predictions about neural activity and brain wave patterns were actually evident in measurements made in the prefrontal cortex of animals as they engaged in two working memory tasks and one categorization task. Across the tasks there were distinct pieces of sensory information to process (e.g., “A blue square appeared on the screen followed by a green triangle”) and rules to follow (e.g., “When new shapes appear on the screen, do they match the shapes I saw before and appear in the same order?”)

The first two predictions were that alpha and beta waves should represent task controls and rules, while the spiking activity of neurons should represent the sensory inputs. When the researchers analyzed the brain wave and spiking readings gathered by the four electrode arrays implanted in the cortex, they found that indeed these predictions were true. Neural spikes, but not the alpha/beta waves, carried sensory information. While both spikes and the alpha/beta waves carried task information, it was strongest in the waves, and it peaked at times relevant to when rules were needed to carry out the tasks.

Notably, in the categorization task, the researchers purposely varied the level of abstraction to make categorization more or less cognitively difficult. The researchers saw that the greater the difficulty, the stronger the alpha/beta wave power was, further showing that it carries task rules.

The next two predictions were that alpha/beta would be spatially organized, and that when and where it was strong, the sensory information represented by spiking would be suppressed, but where and when it was weak, spiking would increase. These predictions also held true in the data. Under the electrodes, Chen, Miller, and the team could see distinct spatial patterns of higher or lower wave power, and where power was high, the sensory information in spiking was low, and vice versa.
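
This gating relationship lends itself to a quick toy illustration. The sketch below is not the study’s model; it simply simulates patches of cortex whose Poisson spiking is suppressed wherever an imposed 20 Hz control wave is strong (the patch count, wave frequency, and scaling are arbitrary illustrative choices), then checks for the predicted anti-correlation between wave power and spiking.

```python
import numpy as np

rng = np.random.default_rng(0)
n_patches, n_bins = 32, 400

t = np.arange(n_bins) / 1000.0                       # 1 ms time bins
phase = rng.uniform(0, 2 * np.pi, n_patches)
# A 20 Hz "beta" control wave per patch, scaled to lie in [0, 1]
wave = 0.5 + 0.5 * np.sin(2 * np.pi * 20 * t[None, :] + phase[:, None])

drive = rng.uniform(0.2, 1.0, (n_patches, 1))        # sensory drive per patch (assumed)
rate = drive * (1.0 - wave)                          # spiking allowed where the wave is weak
spikes = rng.poisson(rate * 10)                      # Poisson spike counts (arbitrary scale)

r = np.corrcoef(wave.ravel(), spikes.ravel())[0, 1]
print(f"wave power vs. spiking: r = {r:.2f} (negative, as the theory predicts)")
```

Running it prints a clearly negative correlation, the same qualitative signature the electrode recordings showed.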

Finally, if spatial computing is valid, the researchers predicted, then trial by trial, alpha/beta power and timing should correlate with the animals’ performance. Sure enough, there were significant differences in the signals on trials where the animals performed the tasks correctly versus trials where they made mistakes. In particular, the measurements distinguished mistakes caused by mishandling task rules from mistakes caused by misreading sensory information: alpha/beta discrepancies pertained to the order in which stimuli appeared (first square, then triangle) rather than the identity of the individual stimuli (square or triangle).

Compatible with findings in humans

By conducting this study with animals, the researchers were able to make direct measurements of individual neural spikes as well as brain waves, and in the paper, they note that other studies in humans report some similar findings. For instance, studies using noninvasive EEG and MEG brain wave readings show that humans use alpha oscillations to inhibit activity in task-irrelevant areas under top-down control, and that alpha oscillations appear to govern task-related activity in the prefrontal cortex.

While Miller says he finds the results of the new study, and their intersection with human studies, to be encouraging, he acknowledges that more evidence is still needed. For instance, his lab has shown that brain waves typically do not stay in place (like a turning jump rope) but travel across areas of the brain. Spatial computing should account for that, he says.

In addition to Chen and Miller, the paper’s other authors are Scott Brincat, Mikael Lundqvist, Roman Loonis, and Melissa Warden.

The U.S. Office of Naval Research, The Freedom Together Foundation, and The Picower Institute for Learning and Memory funded the study.


Polar weather on Jupiter and Saturn hints at the planets’ interior details

New research may explain the striking differences between the two planets’ polar vortex patterns.


Over the years, passing spacecraft have observed mystifying weather patterns at the poles of Jupiter and Saturn. The two planets host very different types of polar vortices, which are huge atmospheric whirlpools that rotate over a planet’s polar region. On Saturn, a single massive polar vortex appears to cap the north pole in a curiously hexagonal shape, while on Jupiter, a central polar vortex is surrounded by eight smaller vortices, like a pan of swirling cinnamon rolls.

Given that both planets are similar in many ways — they are roughly the same size and made from the same gaseous elements — the stark difference in their polar weather patterns has been a longstanding mystery.

Now, MIT scientists have identified a possible explanation for how the two different systems may have evolved. Their findings could help scientists understand not only the planets’ surface weather patterns, but also what might lie beneath the clouds, deep within their interiors.

In a study appearing this week in the Proceedings of the National Academy of Sciences, the team simulates various ways in which well-organized vortex patterns may form out of random perturbations on a gas giant, a large planet made mostly of gaseous elements, such as Jupiter or Saturn. Among a wide range of plausible planetary configurations, the team found that, in some cases, the simulated flows coalesced into a single large vortex, similar to Saturn’s pattern, whereas other simulations produced multiple large circulations, akin to Jupiter’s vortices.

After comparing simulations, the team found that whether a planet develops one polar vortex or many comes down to one main property: the “softness” of a vortex’s base, which is related to the interior composition. The scientists liken an individual vortex to a whirling cylinder spinning through a planet’s many atmospheric layers. When the base of this swirling cylinder is made of softer, lighter materials, any vortex that evolves can only grow so large. The final pattern can then allow for multiple smaller vortices, similar to those on Jupiter. In contrast, if a vortex’s base is made of harder, denser stuff, it can grow much larger and subsequently engulf other vortices to form one single, massive vortex, akin to the monster cyclone on Saturn.

“Our study shows that, depending on the interior properties and the softness of the bottom of the vortex, this will influence the kind of fluid pattern you observe at the surface,” says study author Wanying Kang, assistant professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “I don’t think anyone’s made this connection between the surface fluid pattern and the interior properties of these planets. One possible scenario could be that Saturn has a harder bottom than Jupiter.”

The study’s first author is MIT graduate student Jiaru Shi.

Spinning up

Kang and Shi’s new work was inspired by images of Jupiter and Saturn that have been taken by the Juno and Cassini missions. NASA’s Juno spacecraft has been orbiting Jupiter since 2016, and has captured stunning images of the planet’s north pole and its multiple swirling vortices. From these images, scientists have estimated that each of Jupiter’s vortices is immense, spanning about 3,000 miles across — almost half as wide as the Earth itself.

The Cassini spacecraft, prior to intentionally burning up in Saturn’s atmosphere in 2017, orbited the ringed planet for 13 years. Its observations of Saturn’s north pole recorded a single, hexagonal-shaped polar vortex, about 18,000 miles wide.

“People have spent a lot of time deciphering the differences between Jupiter and Saturn,” Shi says. “The planets are about the same size and are both made mostly of hydrogen and helium. It’s unclear why their polar vortices are so different.”

Shi and Kang set out to identify a physical mechanism that would explain why one planet might evolve a single vortex, while the other hosts multiple vortices. To do so, they worked with a two-dimensional model of surface fluid dynamics. While a polar vortex is three-dimensional in nature, the team reasoned that they could accurately represent vortex evolution in two dimensions, as the fast rotation of Jupiter and Saturn enforces uniform motion along the rotating axis.

“In a fast-rotating system, fluid motion tends to be uniform along the rotating axis,” Kang explains. “So, we were motivated by this idea that we can reduce a 3D dynamical problem to a 2D problem because the fluid pattern does not change in 3D. This makes the problem hundreds of times faster and cheaper to simulate and study.”

Getting to the bottom

Following this reasoning, the team developed a two-dimensional model of vortex evolution on a gas giant, based on an existing equation that describes how swirling fluid evolves over time.

“This equation has been used in many contexts, including to model midlatitude cyclones on Earth,” Kang says. “We adapted the equation to the polar regions of Jupiter and Saturn.”

The team applied their two-dimensional model to simulate how fluid would evolve over time on a gas giant under different scenarios. In each scenario, the team varied the planet’s size, its rate of rotation, its internal heating, and the softness or hardness of the rotating fluid, among other parameters. They then set a random “noise” condition, in which fluid initially flowed in random patterns across the planet’s surface. Finally, they observed how the fluid evolved over time given the scenario’s specific conditions.
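
In spirit, this procedure resembles a generic two-dimensional vorticity simulation. The following minimal pseudospectral sketch is offered only as an illustration of that generic recipe, not the team’s actual model: it evolves a random initial vorticity field on a doubly periodic domain under the barotropic vorticity equation, and every parameter, along with the omission of forcing, dealiasing, and any “soft bottom” physics, is an assumed simplification.

```python
import numpy as np

N, L = 128, 2 * np.pi              # grid points per side, domain size (assumed)
nu, dt, steps = 1e-4, 1e-3, 2000   # viscosity, time step, step count (assumed)

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
k2_safe = k2.copy()
k2_safe[0, 0] = 1.0                # avoid dividing by zero at the mean mode

rng = np.random.default_rng(0)
zeta_h = np.fft.fft2(rng.standard_normal((N, N)))   # random initial "noise" vorticity

def tendency(zh):
    """d(zeta)/dt = -(u, v) . grad(zeta) + nu * laplacian(zeta), in spectral space."""
    ph = -zh / k2_safe             # invert laplacian(psi) = zeta for the streamfunction
    ph[0, 0] = 0.0
    u = np.real(np.fft.ifft2(-1j * ky * ph))        # u = -d(psi)/dy
    v = np.real(np.fft.ifft2(1j * kx * ph))         # v = +d(psi)/dx
    zx = np.real(np.fft.ifft2(1j * kx * zh))
    zy = np.real(np.fft.ifft2(1j * ky * zh))
    return -np.fft.fft2(u * zx + v * zy) - nu * k2 * zh

for _ in range(steps):             # forward Euler, kept deliberately simple for a sketch
    zeta_h = zeta_h + dt * tendency(zeta_h)

zeta = np.real(np.fft.ifft2(zeta_h))
print(f"vorticity extremes after {steps} steps: {zeta.min():.2f}, {zeta.max():.2f}")
```

Visualizing zeta as the run progresses (for instance with matplotlib) shows the initial noise organizing into fewer, larger swirls, the qualitative behavior whose endpoint the team traced back to the properties of the vortex bottom.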

Over multiple simulations, they observed that some scenarios evolved to form a single large polar vortex, like Saturn’s, whereas others formed multiple smaller vortices, like Jupiter’s. After analyzing the combinations of parameters in each scenario and how they related to the final outcome, they landed on a single mechanism to explain whether one vortex or many evolve: As random fluid motions start to coalesce into individual vortices, the size to which a vortex can grow is limited by how soft its bottom is. The softer, or lighter, the gas rotating at the bottom of a vortex, the smaller that vortex ends up, allowing multiple smaller-scale vortices to coexist at a planet’s pole, similar to those on Jupiter.

Image: In the team’s simulations, chaotic initial flows on one planet slowly coalesce into multiple unstable vortices, while on another they form a single stable vortex.

Conversely, the harder or denser a vortex bottom is, the larger the system can grow, to a size where eventually it can follow the planet’s curvature as a single, planetary-scale vortex, like the one on Saturn.

If this mechanism is indeed what is at play on both gas giants, it would suggest that Jupiter could be made of softer, lighter material, while Saturn may harbor heavier stuff in its interior.

“What we see from the surface, the fluid pattern on Jupiter and Saturn, may tell us something about the interior, like how soft the bottom is,” Shi says. “And that is important because maybe beneath Saturn’s surface, the interior is more metal-enriched and has more condensable material which allows it to provide stronger stratification than Jupiter.”

"Because Jupiter and Saturn are otherwise so similar, their different polar weather has been a puzzle,” says Yohai Kaspi, a professor of geophysical fluid dynamics at the Weizmann Institute of Science, and a member of the Juno mission’s science team, who was not involved in the new study. “The work by Shi and Kang reveals a surprising link between these differences and the planets’ deep interior ‘softness’, offering a new way to map the key internal properties that shape their atmospheres."

This research was supported, in part, by a Mathworks Fellowship and endowed funding from MIT’s Department of Earth, Atmospheric and Planetary Sciences.


Demystifying college for enlisted veterans and service members

For nearly a decade, the MIT Warrior-Scholar Project STEM boot camp has helped enlisted members of the military prepare for higher education.


“I went into the military right after high school, mostly because I didn’t really see the value of academics,” says Air Force veteran and MIT sophomore Justin Cole.

His perspective on education shifted, however, after he experienced several natural disasters during his nine years of service. As a satellite systems operator in Colorado, Cole volunteered in the aftermath of the 2013 Black Forest fire, the state’s most destructive fire at the time. And in 2018, while he was leading a team in Okinawa conducting signal-monitoring work on communications satellites, two Category 5 typhoons barreled through the area within 26 days.

“I realized, this climate stuff is really a prerequisite to national security objectives in almost every sense, so I knew that school was going to be the thing that would help prepare me to make a difference,” he says. In 2023, after leaving the Air Force to work for climate-focused nonprofits and take engineering courses, Cole participated in an intense, weeklong STEM boot camp at MIT. “It definitely reaffirmed that I wanted to continue down the path of at least getting a bachelor’s, and it also inspired me to apply to MIT,” he says. He transferred in 2024 and is majoring in climate system science and engineering.

“It’s a lot like the MIT experience”

MIT runs the boot camp every summer as part of the nonprofit Warrior-Scholar Project (WSP), which started at Yale University in 2012. WSP offers a range of programming designed to help enlisted veterans and service members transition from the military to higher education. The academic boot camp program, which aims to simulate a week of undergraduate life, is offered at 19 schools nationwide in three areas: business, college readiness, and STEM.

MIT joined WSP in 2017 as one of the first three campuses to offer the STEM boot camp. “It was definitely rigorous,” Cole recalls, “not getting tons of sleep, grinding psets at night with friends … it’s a lot like the MIT experience.” In addition to problem sets, every day at MIT-WSP is packed with faculty lectures on math and physics, recitations, working on research projects, and tours of MIT campus labs. Scholars also attend daily college success workshops on topics such as note taking, time management, and applying to college. The schedule is meticulously mapped out — including travel times — from 0845 to 2200, Sunday through Friday.

Michael McDonald, an associate professor of physics at the Kavli Institute for Astrophysics and Space Research, and Navy veteran Nelson Olivier MBA ’17 have run the MIT-WSP program since its inception. At the time, WSP wanted to expand its STEM boot camps to other universities, so a Yale astrophysicist colleague recruited McDonald. Meanwhile, Olivier’s former Navy SEAL Team THREE teammate — who happened to be the WSP CEO — convinced Olivier to help launch the program while he was at the MIT Sloan School of Management, along with classmate Bill Kindred MBA ’17.

Now in its 10th year, MIT-WSP has hosted over 120 scholars, 93 percent of whom have gone on to attend schools like Stanford University, Georgetown University, the University of Notre Dame, Harvard University, and the University of California at Berkeley. MIT-WSP alumni who have graduated now work at employers such as Meta, PricewaterhouseCoopers, Boeing, and BAE Systems.

Translating helicopter repairs to Newton’s laws

McDonald has a lot of fun teaching WSP scholars every summer. “When I pose a question to my first-year physics class in September, no one wants to meet my eyes or raise their hand for fear of embarrassing themselves,” he says. “But I ask a question to this group of, say, 12 vets, and 12 hands shoot up, they are all answering over each other, and then asking questions to follow up on the question. They are just curious and hungry, and they couldn’t care less about how they come off. … As a professor, it’s like your dream class.”

Every year, McDonald witnesses a predictable transformation among the scholars. They start off eager enough; however, “by Tuesday, they are miserable, they’re pretty beaten down. But by the end of the week, they’re like, ‘I could do another week,’” he says.

Their confidence grows as they recognize that, while they may not have taken college courses, their military experience is invaluable. “It’s just a matter of convincing these guys that what they are already doing is what we are looking for. We have guys that say, ‘I don’t know if I can succeed in an engineering program,’ but then in the field, they are repairing helicopters. And I’m like, ‘Oh no, you can do this stuff!’ They just need to understand the background of why that helicopter that they are building works.”

Olivier agrees. “The enlisted veteran has a leg up because they’ve already done this before. They are just translating it from either fixing a radio or messing around with the components of a bomb to understanding Newton’s laws. That’s a thing of beauty, when you see that.”

Fostering a virtuous cycle

While just seeing themselves succeed at MIT-WSP helps instill confidence among scholars, meeting veterans who have made the leap into academia has a multiplier effect. To that end, the WSP organization provides each academic boot camp with alumni, called fellows, to teach college success workshops, provide support, and share their experiences in higher education.

“When I was at boot camp, we had two WSP fellows who were at Columbia, one at Princeton, and one who just got accepted to Harvard,” Cole recalls. “Just seeing people existing at these institutions made me realize, this is a thing that is doable.” The following summer, he became a fellow as well.

Former Marine Corps communications operator Aaron Kahler, who attended MIT-WSP in 2024, particularly recalls meeting a veteran PhD student while the group toured the neuroscience facility. “It was really cool seeing instances of successful vets doing their thing at MIT,” he says. “There were a lot more than we thought.”

Over the years, McDonald has made an effort to recruit more MIT veterans to staff the program. One of them is Andrea Henshall, a retired Air Force major and a PhD student in the Department of Aeronautics and Astronautics. After joining the Ask Me Anything panel a few years ago, she has become increasingly involved, presenting lectures, offering tours of the motion capture lab where she conducts experiments, and informally mentoring scholars.

“It’s so inspiring to hear so many students at the end of the week say, ‘I never considered a place like MIT until the boot camp, or until somebody told me, hey, you can be here, too.’ Or they see examples of enlisted veterans, like Justin, who’ve transitioned to a place like MIT and shown that it’s possible,” says Henshall.

At the conclusion of MIT-WSP, scholars receive a tangible reminder of what’s possible: a challenge coin designed by Olivier and McDonald. “In the military, the challenge coin usually has the emblem of the unit and symbolizes the ethos of the unit,” Olivier explains. On one side of the MIT-WSP coin are Newton’s laws of motion, superimposed over the WSP logo. MIT's “mens et manus” (“mind and hand”) motto appears on the other side, beneath an image of the Great Dome inscribed with the scholar’s name.

“As you go into Killian Court you see all the names of Pasteur, Newton, et cetera, but Building 10 doesn’t have a name on it,” he says. “So we say, ‘earn your space there on these buildings. Do something significant that will impact the human experience.’ And that’s what we think each one of these guys and gals can do.”

Kahler keeps the coin displayed on his desk at MIT, where he’s now a first-year student, for inspiration. “I don’t think I would be here if it weren’t for the Warrior-Scholar Project,” he says.


How collective memory of the Rwandan genocide was preserved

Delia Wendel’s new book illuminates a painful and painstaking effort by citizens to bear witness to atrocities.


The 1994 genocide in Rwanda took place over a little more than three months, during which militias representing the Hutu ethnic group conducted a mass murder of members of the Tutsi ethnic group along with some politically moderate members of the Hutu and Twa groups. Soon after, local citizens and aid workers began to document the atrocities that had occurred in the country.

They were establishing evidence of a genocide that many outsiders were slow to acknowledge; other countries and the U.N. did not recognize it until 1998. By preserving scenes of massacre and victims’ remains, this effort allowed foreigners, journalists, and neighbors to witness what had happened. Though the citizens’ work was emotionally and physically challenging, they used these sites of memory to seek justice for victims who had been killed and harmed.

In so doing, these efforts turned memory into officially recognized history. Now, in a new book, MIT scholar Delia Wendel carefully explores this work, shedding new light on the people who created the state’s genocide memorials, and the decisions they made in the process — such as making the remains of the dead available for public viewing. She also examines how the state gained control of the effort and has chosen to represent the past through these memorials.

“I’m seeking to recuperate this forgotten history of the ethics of the work, while also contending with the motivations of state sovereignty that has sustained it,” says Wendel, who is the Class of 1922 Career Development Associate Professor of Urban Studies and International Development in MIT’s Department of Urban Studies and Planning (DUSP).

That book, “Rwanda’s Genocide Heritage: Between Justice and Sovereignty,” is published by Duke University Press and is freely available through the MIT Libraries. In it, Wendel uncovers new details about the first efforts to preserve the memory of the genocide, analyzes the social and political dynamics, and examines their impact on people and public spaces.

“The shift from memory to history is important because it also requires recognition that is official or more public in nature,” Wendel says. “Survivors, their kin, their relatives, they know their histories. What they’re wishing to happen is a form of repair, or justice, or empowerment, that comes with disclosing those histories. That truth-telling aspect is really important.”

Conversations and memory

Wendel’s book was well over a decade in the making — and emerged from a related set of scholarly inquiries about peace-building activities in the wake of genocide. For this project, about memorializing genocide, Wendel visited over 30 villages in Rwanda over a span of many years, gradually making connections and building dialogues with citizens, in addition to conducting more conventional social science research.

“Speaking with rural residents started to unlock a lot of different types of conversations,” Wendel says of those visits. “A good deal of those conversations had to do with memory, and with relationships to place, neighbors, and authority.” She adds: “These are topics that people are very hesitant to speak about, and rightly so. This has been a book that took a long time to research and build some semblance of trust.”

During her research, Wendel also talked at length with some key figures involved in the process, including Louis Kanamugire, a Rwandan who became the first head of the country’s post-war Genocide Memorial Commission. Kanamugire, who lost his parents in the genocide, felt it was necessary to preserve and display the remains of genocide victims, including at four key sites that later became official state memorials.

This process involved, as Wendel puts it, the “gruesome” work of cleaning and preserving bodies and bones to provide both material evidence of genocide and the grounds for beginning the work of societal repair and individual healing.

Wendel also uncovers, in detail for the first time, the work done by Mario Ibarra, a Chilean aid worker for the U.N. who investigated atrocities, photographed evidence extensively, conducted preservation work, and contributed to the country’s Genocide Memorial Commission as well. The relationship between global human rights practice and genocide survivors seeking justice, in terms of preserving and documenting evidence, is at the core of the book and, Wendel believes, a previously underappreciated aspect of this topic.

“The story of Rwanda memorialization that has typically been told is one of state control,” Wendel says. “But in the beginning, the government followed independent initiatives by this human rights worker and local residents who really spurred this on.”

In the book, Wendel also examines how Rwanda’s memorialization practices relate to those of other countries, often in the so-called Global South. This phenomenon, which she terms “trauma heritage,” has followed similar trajectories across countries in Africa and South America, for instance.

“Trauma heritage is the act of making visible the violence that had been actively hidden, and intervening in the dynamics of power,” she says. “Making such public spaces for silenced pain is a way of seeking recognition of those harms, and [seeking] forms of justice and repair.”

The tensions of memorialization

To be clear, Rwanda has been able to construct genocide memorials in the first place because, in the mid-1990s, Tutsi troops regained power in the country by defeating their Hutu adversaries. Subsequently, in a state without unlimited free expression, the government has considerable control over the content and forms of memorialization that take place.

Meanwhile, there have always been differing views about, say, displaying victims’ remains, and to what degree such a practice underlines their humanity or emphasizes the dehumanizing treatment they suffered. Then too, atrocities can produce a wide range of psychological responses among the living, including survivors’ guilt and the sheer difficulty many experience in expressing what they have witnessed. The process of memorialization, in such circumstances, will likely be fraught.

“The book is about the tensions and paradoxes between the ethics of this work and its politics, which have a lot to do with state sovereignty and control,” Wendel says. “It’s rooted in the tension between what’s invisible and what’s visible, between this bid to be seen and to recognize the humanity of the victims and yet represent this dehumanizing violence. These are irresolvable dilemmas that were felt by the people doing this work.”

Or, as Wendel writes in the book, Rwandans and others immersed in similar struggles for justice around the world have had to grapple with the “messy politics of repair, searching for seemingly impossible redress for injustice.”

Other experts have praised Wendel’s book, such as Pumla Gobodo-Madikizela, a professor at Stellenbosch University in South Africa, who studies the psychological effects of mass violence. Gobodo-Madikizela has cited Wendel’s “extraordinary narratives” about the book’s principal figures, observing that they “not only preserve the remains but also reclaim the victims’ humanity. … Wendel shows how their labor becomes a defiant insistence on visibility that transforms the act of cleaning into a form of truth-telling, making injustice materially and spatially undeniable.”

For her part, Wendel hopes the book will engage readers interested in multiple related issues, including Rwandan and African history, the practices and politics of public memory, human rights and peace-building, and the design of public memorials and related spaces, including those built in the aftermath of traumatic historical episodes.

“Rwanda’s genocide heritage remains an important endeavor in memory justice, even if its politics need to be contended with at the same time,” Wendel says. 


Helping companies with physical operations around the world run more intelligently

Founded by two MIT alumni, Samsara’s platform gives companies a central hub to learn from their workers, equipment, and other infrastructure.


Running large companies in construction, logistics, energy, and manufacturing requires careful coordination between millions of people, devices, and systems. For more than a decade, Samsara has helped those companies connect their assets to get work done more intelligently.

Founded by John Bicket SM ’05 and Sanjit Biswas SM ’05, Samsara’s platform gives companies with physical operations a central hub to track and learn from workers, equipment, and other infrastructure. Layered on top of that platform are real-time analytics and notifications designed to prevent accidents, reduce risks, save fuel, and more.

Tens of thousands of customers have used Samsara’s platform to improve their operations since its founding in 2015. Home Depot, for instance, used Samsara’s artificial intelligence-equipped dashcams to reduce their total auto liability claims by 65 percent in one year. Maxim Crane Works saved more than $13 million in maintenance costs using Samsara’s equipment and vehicle diagnostic data in 2024. Mohawk Industries, the world’s largest flooring manufacturer, improved their route efficiency and saved $7.75 million annually.

“It’s all about real-world impact,” says Biswas, Samsara’s CEO. “These organizations have complex operations and are functioning at a massive scale. Workers are driving millions of miles and consuming tons of fuel. If you can understand what’s happening and run analysis in the cloud, you can find big efficiency improvements. In terms of safety, these workers are putting their lives at risk every day to keep this infrastructure running. You can literally save lives if you can reduce risk.”

Finding big problems

Biswas and Bicket started PhD programs at MIT in 2002, both conducting research around networking in the Computer Science and Artificial Intelligence Laboratory (CSAIL). They eventually applied their studies to build a wireless network called MIT RoofNet.

Upon graduating with master’s degrees, Biswas and Bicket decided to commercialize the technologies they worked on, founding the company Meraki in 2006.

“How do you get big Wi-Fi networks out in the world?” Biswas asks. “With MIT RoofNet, we covered Cambridge in Wi-Fi. We wanted to enable other people to build big Wi-Fi networks and make Wi-Fi go mainstream for larger campuses and offices.”

Over the next six years, Meraki’s technology was used to create millions of Wi-Fi networks around the world. In 2012, Meraki was acquired by Cisco. Biswas and Bicket left Cisco in 2015, unsure of what they’d work on next.

“The way we found ourselves to Samsara was through the same curiosity we had as graduate students,” Biswas says. “This time it dealt more with the planet’s infrastructure. We were thinking about how utilities work, and how construction happens at the scale of cities and states. It drew us into operations, which is the infrastructure backbone of the planet.”

As the founders learned about industries like logistics, utilities, and construction, they realized they could use their technical background to improve safety and efficiency.

“All these industries have a lot in common,” Biswas says. “They have a lot of field workers — often thousands of them — they have a lot of assets like trucks and equipment, and they’re trying to orchestrate it all. The throughline was the importance of data.”

When they founded Samsara 10 years ago, many people were still collecting field data with pen and paper.

“Because of our technical background, we knew that if you could collect the data and run sophisticated algorithms like AI over it, you could get a ton of insights and improve the way those operations run,” Biswas says.

Biswas says extracting insights from data was the easy part. Making field-ready products and getting them into the hands of frontline workers took longer.

Samsara started by tapping into existing sensors in buildings, cars, and other assets. They also built their own, including AI-equipped cameras and GPS trackers that can monitor driving behavior. That formed the foundation of Samsara’s Connected Operations Platform. On top of that, Samsara Intelligence processes data in the cloud and provides insights, such as the best routes for commercial vehicles, ways to be more proactive with maintenance, and opportunities to reduce fuel consumption.

Samsara’s platform can be used to detect if a commercial vehicle or snowplow driver is on their phone and send an audio message nudging them to stay safe and focused. The platform can also deliver training and coaching.

“That’s the kind of thing that reduces risk, because workers are way less likely to be distracted,” Biswas says. “If you do that for millions of workers, you reduce risk at scale.”

The platform also allows managers to query their data in a ChatGPT-style interface, asking questions such as: Who are my safest drivers? Which vehicles need maintenance? And what are my least fuel-efficient trucks?
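
As a toy illustration of the kind of question being asked (with a made-up schema; this is not Samsara’s actual API or data model), a safety ranking can reduce to a simple aggregation:

```python
# Hypothetical telematics records: (driver, miles driven, harsh-driving events).
# The schema and numbers are invented for illustration only.
events = [
    ("avery", 1200, 2),
    ("blake", 800, 9),
    ("casey", 1500, 1),
]

# Rank drivers by harsh events per 1,000 miles (lower is safer).
rates = {driver: harsh / miles * 1000 for driver, miles, harsh in events}
for driver, rate in sorted(rates.items(), key=lambda item: item[1]):
    print(f"{driver}: {rate:.1f} harsh events per 1,000 miles")
```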

“Our platform helps recognize frontline workers who are safe and efficient in their job,” Biswas says. “These people are largely unsung heroes. They keep our planet running, but they don’t hear ‘thank you’ very often. Samsara helps companies recognize the safest workers on the field and give them recognition and rewards. So, it’s about modernizing equipment but also improving the experience of millions of people that help run this vital infrastructure.”

Continuing to grow

Today Samsara processes 20 trillion data points a year and monitors 90 billion miles of driving. The company employs about 4,000 people across North America and Europe.

“It still feels early for us,” Biswas says. “We’ve been around for 10 years and gotten some scale, but we needed to build this platform to be able to build more products and have more impact. If you step back, operations is 40 percent of the world’s GDP, so we see a lot of opportunities to do more with this data. For instance, weather is part of Samsara Intelligence, and weather is 20 to 25 percent of the risk, and so we’re training AI models to reduce risk from the weather. And on the sustainability side, the more data we have, the more we can help optimize for things like fuel consumption or transitioning to electric vehicles. Maintenance is another fascinating data problem.”

The founders have also maintained a connection with MIT — and not just because the City of Boston’s Department of Public Works and the MBTA are customers. Last year, the Biswas Family Foundation announced funding for a four-year postdoctoral fellowship program at MIT for early-stage researchers working to improve health care.

Biswas says Samsara’s journey has been incredibly rewarding and notes the company is well-positioned to leverage advances in AI to further its impact going forward.

“It’s been a lot of fun and also a lot of hard work,” Biswas says. “What’s exciting is that each decade of the company feels different. It’s almost like a new chapter — or a whole new book. Right now, there’s so many incredible things happening with data and AI. It feels as exciting as it did in the early days of the company. It feels very much like a startup.”


Efficient cooling method could enable chip-based trapped-ion quantum computers

New technique could improve the scalability of trapped-ion quantum computers, an essential step toward making them practically useful.


Quantum computers could rapidly solve complex problems that would take the most powerful classical supercomputers decades to unravel. But they’ll need to be large and stable enough to efficiently perform operations. To meet this challenge, researchers at MIT and elsewhere are developing trapped-ion quantum computers based on ultra-compact photonic chips. These chip-based systems offer a scalable alternative to existing trapped-ion quantum computers, which rely on bulky optical equipment.

The ions in these quantum computers must be cooled to extremely cold temperatures to minimize vibrations and prevent errors. So far, such trapped-ion systems based on photonic chips have been limited to inefficient and slow cooling methods.

Now, a team of researchers at MIT and MIT Lincoln Laboratory has implemented a much faster and more energy-efficient method for cooling trapped ions using photonic chips. Their approach achieved cooling to about 10 times below the limit of standard laser cooling.

Key to this technique is a photonic chip that incorporates precisely designed antennas to manipulate beams of tightly focused, intersecting light.

The researchers’ initial demonstration takes a key step toward scalable chip-based architectures that could someday enable quantum computing systems with greater efficiency and stability.

“We were able to design polarization-diverse integrated-photonics devices, utilize them to develop a variety of novel integrated-photonics-based systems, and apply them to show very efficient ion cooling. However, this is just the beginning of what we can do using these devices. By introducing polarization diversity to integrated-photonics-based trapped-ion systems, this work opens the door to a variety of advanced operations for trapped ions that weren’t previously attainable, even beyond efficient ion cooling — all research directions we are excited to explore in the future,” says Jelena Notaros, the Robert J. Shillman Career Development Associate Professor of Electrical Engineering and Computer Science (EECS) at MIT, a member of the Research Laboratory of Electronics, and senior author of a paper on this architecture.

She is joined on the paper by lead authors Sabrina Corsetti, an EECS graduate student; Ethan Clements, a former postdoc who is now a staff scientist at MIT Lincoln Laboratory; Felix Knollmann, a graduate student in the Department of Physics; John Chiaverini, senior member of the technical staff at Lincoln Laboratory and a principal investigator in MIT’s Center for Quantum Engineering; as well as others at Lincoln Laboratory and MIT. The research appears today in two joint publications in Light: Science and Applications and Physical Review Letters.

Seeking scalability

While there are many types of quantum systems, this research is focused on trapped-ion quantum computing. In this application, a charged particle called an ion is formed by peeling an electron from an atom, and then trapped using radio-frequency signals and manipulated using optical signals.

Researchers use lasers to encode information in the trapped ion by changing its state. In this way, the ion can be used as a quantum bit, or qubit. Qubits are the building blocks of a quantum computer.

To prevent collisions between ions and gas molecules in the air, the ions are held in vacuum, often created with a device known as a cryostat. Traditionally, bulky lasers sit outside the cryostat and shoot different light beams through the cryostat’s windows toward the chip. These systems require a room full of optical components to address just a few dozen ions, making it difficult to scale to the large numbers of ions needed for advanced quantum computing. Slight vibrations outside the cryostat can also disrupt the light beams, ultimately reducing the accuracy of the quantum computer.

To get around these challenges, MIT researchers have been developing integrated-photonics-based systems. In this case, the light is emitted from the same chip that traps the ion. This improves scalability by eliminating the need for external optical components.

“Now, we can envision having thousands of sites on a single chip that all interface up to many ions, all working together in a scalable way,” Knollmann says.

But integrated-photonics-based demonstrations to date have achieved limited cooling efficiencies.

Keeping their cool

To enable fast and accurate quantum operations, researchers use optical fields to reduce the kinetic energy of the trapped ion. This causes the ion to cool to nearly absolute zero, an effective temperature even colder than cryostats can achieve.

But common methods have a higher cooling floor, so the ion still has a lot of vibrational energy after the cooling process completes. This would make it hard to use the qubits for high-quality computations.

The MIT researchers utilized a more complex approach, known as polarization-gradient cooling, which involves the precise interaction of two beams of light.

Each light beam has a different polarization, which means the field in each beam is oscillating in a different direction (up and down, side to side, etc.). Where these beams intersect, they form a rotating vortex of light that can force the ion to stop vibrating even more efficiently.

Although this approach had been shown previously using bulk optics, it hadn’t been shown before using integrated photonics.

To enable this more complex interaction, the researchers designed a chip with two nanoscale antennas, which emit beams of light out of the chip to manipulate the ion above it.

Waveguides route light to these antennas and are designed to stabilize the optical routing, which improves the stability of the vortex pattern generated by the beams.

“When we emit light from integrated antennas, it behaves differently than with bulk optics. The beams, and generated light patterns, become extremely stable. Having these stable patterns allows us to explore ion behaviors with significantly more control,” Clements says.

The researchers also designed the antennas to maximize the amount of light that reaches the ion. Each antenna has tiny curved notches that scatter light upward, spaced just right to direct light toward the ion.
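
That “spaced just right” intuition follows the standard first-order grating-coupler design rule, pitch = λ₀ / (n_eff − n_clad · sin θ). Here is a rough sketch with illustrative numbers; the wavelength, effective index, and cladding index below are assumptions, not the paper’s specifications:

```python
import math

lambda0 = 422e-9   # assumed free-space wavelength (m), for illustration
n_eff = 1.8        # assumed effective index of the waveguide mode
n_clad = 1.0       # vacuum cladding above the chip

for theta_deg in (0, 10, 20):   # candidate emission angles toward the ion
    theta = math.radians(theta_deg)
    # First-order grating condition: pitch = lambda0 / (n_eff - n_clad*sin(theta))
    pitch = lambda0 / (n_eff - n_clad * math.sin(theta))
    print(f"theta = {theta_deg:2d} deg -> notch pitch ~ {pitch * 1e9:.0f} nm")
```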

“We built upon many years of development at Lincoln Laboratory to design these gratings to emit diverse polarizations of light,” Corsetti says.

They experimented with several architectures, characterizing each to better understand how it emitted light.

With their final design in place, the researchers demonstrated ion cooling to nearly 10 times below the limit of standard laser cooling, referred to as the Doppler limit. Their chip was able to reach this level in about 100 microseconds, several times faster than other techniques.
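
For context, the Doppler limit follows from the textbook formula T_D = ħΓ / (2k_B), where Γ is the natural linewidth of the cooling transition. A back-of-the-envelope sketch, using an assumed linewidth of 2π × 20 MHz rather than the transition actually used in the experiments:

```python
import math

hbar = 1.054_571_817e-34    # reduced Planck constant (J*s)
k_B = 1.380_649e-23         # Boltzmann constant (J/K)

Gamma = 2 * math.pi * 20e6  # assumed natural linewidth (rad/s), illustrative only
T_D = hbar * Gamma / (2 * k_B)   # Doppler cooling limit
print(f"Doppler limit for this linewidth: {T_D * 1e3:.2f} mK")
print(f"10x below that limit:             {T_D / 10 * 1e6:.0f} microkelvin")
```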

“The demonstration of enhanced performance using optics integrated in the ion-trap chip lays the foundation for further integration that can allow new approaches for quantum-state manipulation, and that could improve the prospects for practical quantum-information processing,” adds Chiaverini. “Key to achieving this advance was the cross-Institute collaboration between the MIT campus and Lincoln groups, a model that we can build on as we take these next steps.”

In the future, the team plans to conduct characterization experiments on different chip architectures and demonstrate polarization-gradient cooling with multiple ions. In addition, they hope to explore other applications that could benefit from the stable light beams they can generate with this architecture.

Other authors who contributed to this research are Ashton Hattori (MIT), Zhaoyi Li (MIT), Milica Notaros (MIT), Reuel Swint (Lincoln Laboratory), Tal Sneh (MIT), Patrick Callahan (Lincoln Laboratory), May Kim (Lincoln Laboratory), Aaron Leu (MIT), Gavin West (MIT), Dave Kharas (Lincoln Laboratory), Thomas Mahony (Lincoln Laboratory), Colin Bruzewicz (Lincoln Laboratory), Cheryl Sorace-Agaskar (Lincoln Laboratory), Robert McConnell (Lincoln Laboratory), and Isaac Chuang (MIT).

This work is funded, in part, by the U.S. Department of Energy, the U.S. National Science Foundation, the MIT Center for Quantum Engineering, the U.S. Department of Defense, an MIT Rolf G. Locher Endowed Fellowship, and an MIT Frederick and Barbara Cronin Fellowship.


Michael Moody: Impacting MIT through leadership in auditing

The Institute auditor has guided the Audit Division through a transformative period while strengthening collaborations across MIT.


Michael J. Moody, who has served as Institute auditor since 2014, will retire from MIT in October, following a career in internal and external audit spanning 40 years.

Executive Vice President and Treasurer Glen Shor announced the news today in a letter to MIT’s Academic Council.

“I have greatly appreciated Mike’s rigorous and collaborative approach to auditing and advising on the Institute’s policies and processes,” Shor wrote. “He has helped MIT accomplish far-reaching ambitions while adhering to best practices in administering programs and services.”

As Institute auditor, Moody oversees a division that conducts financial, operational, compliance, and technology reviews across MIT. He leads a team of internal auditors that serve as trusted advisors to administrative leadership and members of the MIT Corporation, assessing processes and making recommendations to control risks, improve processes, and enhance decision-making.

The MIT Audit Division maintains a dual reporting structure to ensure its independence. Moody and his team work for the MIT Corporation Risk and Audit Committee but receive administrative support from the MIT Office of the Executive Vice President and Treasurer.

“Mike is highly principled and rigorous with detail, earning our committee’s trust,” says Pat Callahan, chair of the Risk and Audit Committee. “The committee runs like clockwork because of Mike’s dedication and skill.”

Moody has guided the Audit Division through a transformative period, spearheading several impactful initiatives throughout his tenure. He advanced the approval of the first-ever Audit Division Charter to codify the unit’s independence and objectivity and to articulate its mandates for accountability and oversight, and he implemented a new process to distribute audit reports to all senior administrative officers as a best practice. He also initiated the Institute’s inaugural external quality assurance review, for which MIT received the highest rating. Moody has continued the practice of externally auditing the division.

Having a particular interest in leveraging analytics and data to improve workflows and inform assessments, Moody added a data analyst to his team in 2016. The team also sponsors the cross-Institute Data Analysts and Data Scientists (DADS) group, which seeks to foster collaboration while advancing analytics and data practices at an Institute level.

More recently, Moody helped establish the MIT AI Cohort to advance artificial intelligence solutions across the Institute while minimizing associated risks. The group, launched in November 2025, includes representatives from MIT Sloan School of Management, the Koch Institute for Integrative Cancer Research, the School of Engineering, MIT Libraries, the Office of the Vice President for Research, the Division of Graduate and Undergraduate Education, and MIT Health, among others.

A key aspect of Moody’s work — and one that has been especially meaningful to him — is helping the MIT community understand the Audit Division’s mission and role in furthering the Institute’s positive impact. To facilitate this, he instilled in his team a set of core values emphasizing professionalism, objectivity, pragmatism, openness, and willingness to listen, and has presented it as a model for peer institutions. In this vein, he has focused on building relationships with the community to identify the right opportunities for improvement in MIT’s operations and to ensure that the Audit Division’s feedback is constructively delivered and received.

“Mike has been an invaluable partner,” says Suzy Nelson, MIT vice chancellor for student life. “Over the years, his collaborative and knowledgeable approach has helped us improve so many areas — from student organization event management to our business practices to enhancing our student support services. Mike has listened carefully to students’ needs and offered guidance aligned with the goals of the program and student safety.”

Before joining MIT, Moody served in audit and compliance roles at Northwestern University, the University of Illinois at Chicago, and the state of Illinois. At the public accounting firm Coopers & Lybrand (now PricewaterhouseCoopers LLP), he managed and performed information technology audits and served as a financial and technology consultant for clients in a variety of industries. Moody has also held numerous volunteer and elected leadership positions in international, national, and local professional audit associations. He holds certified internal auditor and certified information systems auditor designations, along with a certification in risk management assurance.

“In reflecting on my time here, I’m most proud of assembling a team that has made positive changes to how MIT operates,” says Moody. “It’s been very rewarding having leaders, staff, and researchers reach out for advice and assistance. It's a testament to the strong relationships we've built across the Institute.”

Shor and Callahan will soon formally launch a search for Institute auditor, and expect to identify Moody’s successor during the fall 2026 semester.


Chemists determine the structure of the fuzzy coat that surrounds Tau proteins

Learning more about this structure could help scientists find ways to block Tau from forming tangles in the brain of Alzheimer’s patients.


One of the hallmarks of Alzheimer’s disease is the clumping of proteins called Tau, which form tangled fibrils in the brain. The more severe the clumping, the more advanced the disease is.

The Tau protein, which has also been linked to many other neurodegenerative diseases, is unstructured in its normal state, but in the pathological state it consists of a well-ordered rigid core surrounded by floppy segments. These disordered segments form a “fuzzy coat” that helps determine how Tau interacts with other molecules.

MIT chemists have now shown, for the first time, that they can use nuclear magnetic resonance (NMR) spectroscopy to decipher the structure of this fuzzy coat. They hope their findings will aid efforts to develop drugs that interfere with Tau buildup in the brain.

“If you want to disaggregate these Tau fibrils with small-molecule drugs, then these drugs have to penetrate this fuzzy coat,” says Mei Hong, an MIT professor of chemistry and the senior author of the new study. “That would be an important future endeavor.”

MIT graduate student Jia Yi Zhang is the lead author of the paper, which appears today in the Journal of the American Chemical Society. Former MIT postdoc Aurelio Dregni is also an author of the paper.

Analyzing the fuzzy coat

In a healthy brain, Tau proteins help to stabilize microtubules, which give cells their structure. However, when Tau proteins become misfolded or otherwise altered, they form clumps that contribute to neurodegenerative diseases such as Alzheimer’s and frontotemporal dementia.

Determining the structure of the Tau tangles has been difficult because so much of the protein — about 80 percent — is found in the fuzzy coat, which tends to be highly disordered.

This fuzzy coat surrounds a rigid inner core that is made from folded protein strands known as beta sheets. Hong and her colleagues have previously analyzed the structure of the core in a particular Tau fibril using NMR, which can reveal the structures of molecules by measuring the magnetic properties of atomic nuclei within the molecules.

Until now, most researchers had overlooked Tau’s fuzzy coat and focused on the rigid core of the fibrils because those disordered segments change their structures so often that standard structure characterization techniques such as cryoelectron microscopy and X-ray crystallography can’t capture them.

However, in the new study, the researchers developed NMR techniques that allowed them to study the entire Tau protein. In one experiment, they were able to magnetize protons within the most rigid amino acids, then measure how long it took for the magnetization to be transferred to the mobile amino acids. This allowed them to track the magnetization as it traveled from rigid regions to floppy segments, and vice versa.

Using this approach, the researchers could estimate the proximity between the rigid and mobile segments. They complemented this experiment by measuring the different degrees of movement of the amino acids in the fuzzy coat.
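
The proximity estimate rests on fitting how quickly the transferred magnetization builds up with mixing time. Below is a schematic sketch of that kind of fit on synthetic data; the build-up model, rate, noise level, and mixing times are invented for illustration and are not taken from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def buildup(t, m0, k):
    """Exponential magnetization-transfer build-up toward plateau m0 at rate k."""
    return m0 * (1.0 - np.exp(-k * t))

t_mix = np.array([0.5, 1, 2, 4, 8, 16, 32, 64]) * 1e-3   # mixing times (s), assumed
truth = buildup(t_mix, 1.0, 150.0)                       # synthetic rigid-to-mobile transfer
signal = truth + np.random.default_rng(1).normal(0, 0.02, t_mix.size)  # add noise

(m0, k), _ = curve_fit(buildup, t_mix, signal, p0=[1.0, 100.0])
print(f"fitted transfer rate: {k:.0f} per second (faster implies closer contact)")
```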

“We have now developed an NMR-based technology to examine the fuzzy coat of a full-length Tau fibril, allowing us to capture both the dynamic regions and the rigid core,” Hong says.

Protein dynamics

For this particular fibril, the researchers showed that the overall structure of the Tau protein, which contains about 10 different domains, somewhat resembles a burrito, with several layers of the fuzzy coat wrapped around the rigid core.

Based on their measurements of protein dynamics, the researchers found that these segments fell into three categories. The rigid core of the fibril was surrounded by protein regions with intermediate mobility, whereas the most dynamic segments were found in the outermost layer.

The most dynamic segments of the fuzzy coat are rich in the amino acid proline. In the protein sequence, these prolines are near the amino acids that form the rigid core, and were previously thought to be partially immobilized. Instead, they are highly mobile, indicating that these positively charged proline-rich regions are repelled by the positive charges of the amino acids that form the rigid core.

This structural model gives insight into how Tau proteins form tangles in the brain, Hong says. Similar to how prions trigger healthy proteins to misfold in the brain, it is believed that misfolded Tau proteins latch onto normal Tau proteins and act as a template that induces them to adopt the abnormal structure.

In principle, these normal Tau proteins could add to the ends of existing short filaments or pile onto the sides. The fact that the fuzzy coat wraps around the rigid core indicates that normal Tau proteins more likely add onto the ends of the filaments to generate longer fibrils.

The researchers now plan to explore whether they can stimulate normal Tau proteins to assemble into the type of fibrils seen in Alzheimer’s disease, using misfolded Tau proteins from Alzheimer’s patients as a template.

The research was funded by the National Institutes of Health.


The “delicious joy” of creating and recreating music

Leslie Tilley combines deep experience as a musician with cultural and formal analysis, to see how people refashion music anew.


As a graduate student, Leslie Tilley spent years studying and practicing the music of Bali, Indonesia, including a traditional technique in which two Balinese drummers play intricately interlocking rhythms while simultaneously improvising. It was beautiful and compelling music, about which Tilley heard an unexpected insight one day.

“The higher drum is the bus driver, and the lower drum is the person who puts the bags on the top of the bus,” a Balinese musician told Tilley.

Today, Tilley is an MIT faculty member who works as both an ethnomusicologist, studying music in its cultural settings, and a music theorist, analyzing its formal principles. The tools of music theory have long been applied to, say, Bach, and rather less often to Balinese drumming. But one of Tilley’s interests is building music theory across boundaries. As she recognized, the drummer’s bus driver analogy is a piece of theory. 

“That doesn’t feel like the music theory I had learned, but that is 100 percent music theory,” Tilley said. “What is the relationship between the drummers? The higher drum has to stick to a smaller subset of rhythms so that the lower drum has more freedom to improvise around. Putting it that way is just a different music-theoretical language.”

Tilley’s anecdote touches on many aspects of her career: Her work ranges widely, while linking theory, practice, and learning. Her studies in Bali became the basis for an award-winning book, which uses Balinese music as a case study for a more generalized framework about collective improvisation, one that can apply to any type of music.

Currently, Tilley is engaged in another major project, supported by a multiyear, $500,000 Mellon Foundation grant, to develop a reimagined music theory curriculum. That project aims to produce an alternative four-semester open access music theory curriculum with a broader scope than many existing course materials, to be accompanied by a new audio-visual textbook. The effort includes a major conference later this year that Tilley is organizing, and is designed as a collaborative project; she will work with other scholars on the curriculum and textbook, with 2028 as a completion date.

If that weren’t enough, Tilley is also working on a new book about the phenomenon of cover songs in modern pop music, from the 1950s onward. Here too, Tilley is combining careful cultural analysis of select popular artists and their work, along with a formal examination of the musical choices they have made while developing cover versions of songs.

All told, understanding how music works within a culture, while understanding the inner workings of the music itself, can yield new insights — about music, performers, and audiences.

“What I am focused on fundamentally is how musicians take a musical thing and make something new out of it,” Tilley says. “And then how listeners react to that thing. What is happening here musically? And can that explain the human reaction to it, which is messy and subjective?”

Across all these projects, Tilley has been a consistently innovative scholar who reshapes existing genres of work. For her research and teaching, Tilley has received tenure and is now an associate professor in MIT’s Music and Theater Arts Program.

The joy of collective improv

Both of Tilley’s parents were musicians, but “they never had any intention for their kids to go into music,” says Tilley, a native of Halifax, Nova Scotia. Growing up, she studied piano, violin, and French horn for years; played in a symphony orchestra, brass band, and concert bands; sang in choirs; and performed in musicals. Ultimately she realized she could make a career out of music as well. 

“In 12th grade I suddenly realized, music is what I do. Music is who I am. Music is what I love,” Tilley says. Back then, she pictured herself being an opera singer. Subsequently, as she recalls, “Somewhere along the way, I steered myself into music scholarship.”

Tilley received her bachelor of music degree from Acadia University in Nova Scotia, and then conducted her graduate studies in music at the University of British Columbia, where she earned an MA and PhD. It was in graduate school that Tilley began studying the music of Bali — on campus and during extended periods of field research.

Studying Balinese music was “mildly accidental,” Tilley says, calling it “a little bit of happy happenstance. Encountering these musical traditions exploded the way I thought about music and ways of understanding the interactions of musicians.”

In her research, Tilley looked intensively at two distinct improvised Balinese musical practices: the four-person melodic gong technique “reyong norot” and the two-person drumming practice “kendang arja.” Both are featured in her 2019 book, “Making It Up Together: The Art of Collective Improvisation in Balinese Music and Beyond.” Published by the University of Chicago Press, it won the 2022 Emerging Scholar Award from the Society for Music Theory.

Grounded in empirical evidence, the book proposes a novel, universal framework for understanding the components of collective improvisation. That includes both the more strictly musical aspects of improvisation — how much flexibility musicians give themselves to improvise, for instance — and the forms of interaction musicians have with their co-performers.

“My book is about collective improvisation and what it means,” Tilley says. “What is the give and take of that process, and how can we analyze that? There are lots of scholars who have discussed collective improvisation as it exists in jazz. The delicious joy of collective improvisation is something anybody who improvises in a musical group will talk about. My book looks at examples, especially the case studies I have from Bali, and then creates bigger analytical frameworks, so there can finally be an umbrella way of looking at this phenomenon across music cultures and practices.”

Despite her years of immersing herself in the music, and playing it, Tilley says, “I am a beginner in comparison to the drummers I studied with, who have been playing forever and played with other masters their whole lives, and were generous enough to allow me to learn from them.” Still, she thinks the experience of playing music while studying it is indispensable.

“Ethnomusicology is a field that takes a bit from other fields,” Tilley notes. “The idea of participant observation, we borrow that from anthropology, and the idea of close musical analysis is from musicology or music theory. It’s an in-between way of thinking about music where I get to both participate and observe. But also I’m a music analysis nerd: What’s happening in the notes? Looking at music note-by-note, but from a place of physical embodiment, provides a better understanding than if I had just looked at the notes.”

Expanding instruction

At present, Tilley is devoting significant effort to her music-theory curriculum work, which is funded by the Mellon Foundation as a three-year effort. The upcoming summer conference she is organizing, also supported by the Mellon Foundation, will be a key part of the project, allowing a wide range of scholars to air perspectives about reimagining music theory studies in the 21st century.

Substantively, the idea is to broaden the scope of music theory instruction. Often, Tilley says, “music theory is learning how to understand the musical structures that are essentially between Bach and early Beethoven, that kind of narrow range of a couple hundred years, really amazing musical systems with a very deep, written-down music theory. But that accepted canon leaves out so many other kinds of music and ways of knowing.” Instead, she adds, “If we were not beholden to any assumptions about what we should have in a music program, what skills would we want our students to walk away from four semesters of music theory with?”

About the conference, Tilley quips: “Sitting in a room and nerding out with a bunch of people who care deeply about a thing you care about, which in my case is music, music theory, and pedagogy, is possibly the coolest thing you can do with your time. Hopefully something wonderful comes out of it.”

As Tilley views it, her current book project on pop music cover songs stems from some of the same issues that have long animated her thinking: How do artists fashion their work out of existing knowledge?

“The project on cover songs is similar to the project on collective improvisation in Bali,” Tilley says, in the sense that when it comes to improvisation, “I have a bank of things I know, in my head and in my body about this musical practice, and within that context I can create something that is new and mine, based on something that exists already.”

She adds: “Cover songs to me are the same, but different. The same in that it’s a musical transformation, but different because a pop song doesn’t just have lyrics, melody, and chords, but the vocal quality, the arrangement, the brand of the performer, and so much more. What we think about in popular music isn’t just the song, it’s the person singing it, the social and political contexts, and the listener’s personal relationships to all those things, and they’re so wrapped up together we almost can’t disentangle them.”

As with her earlier work, Tilley is not just examining individual pieces of music, but building a larger analytical model in the process — one that factors in the formal musical changes artists make as well as the cultural components of the phenomenon, to understand why cover songs can produce strong and varying reactions among listeners.

Along the way, Tilley has been presenting conference papers and invited talks on the topic for a number of years. One case that interests her is the singer-songwriter Tori Amos, whose many cover versions transform the viewpoint, music, and meaning of songs by artists from Eminem to Nirvana. There may also be some Taylor Swift content in the next book, although with thousands and thousands of songs to choose from in the pop-rock era, there could be something for everyone — fitting Tilley’s ethos of studying music broadly, across time and space, as it is created, recreated, and recreated again.

“This is why music is infinitely cool,” Tilley says. “It’s so malleable, and so open to interpretation.” 


A protein found in the GI tract can neutralize many bacteria

The protein, known as intelectin-2, also helps to strengthen the mucus barrier lining the digestive tract.


The mucosal surfaces that line the body are embedded with defensive molecules that help keep microbes from causing inflammation and infections. Among these molecules are lectins — proteins that recognize microbes and other cells by binding to sugars found on cell surfaces.

One of these lectins, MIT researchers have found, has broad-spectrum antimicrobial activity against bacteria found in the GI tract. This lectin, known as intelectin-2, binds to sugar molecules found on bacterial membranes, trapping the bacteria and hindering their growth. Additionally, it can crosslink molecules that make up mucus, helping to strengthen the mucus barrier.

“What’s remarkable is that intelectin-2 operates in two complementary ways. It helps stabilize the mucus layer, and if that barrier is compromised, it can directly neutralize or restrain bacteria that begin to escape,” says Laura Kiessling, the Novartis Professor of Chemistry at MIT and the senior author of the study.

This kind of broad-spectrum antimicrobial activity could make intelectin-2 useful as a potential therapeutic, the researchers say. It could also be harnessed to help strengthen the mucus barrier in patients with disorders such as inflammatory bowel disease.

Amanda Dugan, a former MIT research scientist, and Deepsing Syangtan PhD ’24 are the lead authors of the paper, which appears today in Nature Communications.

A multifunctional protein

Current evidence suggests that the human genome encodes more than 200 lectins — carbohydrate-binding proteins that play a variety of roles in the immune system and in communication between cells. Kiessling’s lab, which has been exploring lectin-carbohydrate interactions, recently became interested in a family of lectins called intelectins. In humans, this family includes two lectins, intelectin-1 and intelectin-2.

Those two proteins have very similar structures, but intelectin-1 is distinctive in that it binds only to carbohydrates found in bacteria and other microbes. About 10 years ago, Kiessling and her colleagues determined intelectin-1’s structure, but its functions are still not fully understood.

At that time, scientists hypothesized that intelectin-2 might play a role in immune defense, but there hadn’t been many studies to support that idea. Dugan, then a postdoc in Kiessling’s lab, set out to learn more about intelectin-2.

In humans, intelectin-2 is produced at steady levels by Paneth cells in the small intestine, but in mice, its expression from mucus-producing goblet cells appears to be triggered by inflammation and certain types of parasitic infection.

In the new study, the researchers found that both human and mouse intelectin-2 bind to a sugar molecule called galactose. This sugar is commonly found in molecules called mucins that make up mucus. When intelectin-2 binds to these mucins, it helps to strengthen the mucus barrier, the researchers found.

Galactose is also found in carbohydrates displayed on the surfaces of some bacterial cells. The researchers showed that intelectin-2 can bind to microbes that display these sugars, including many pathogens that cause GI infections.

The researchers also found that over time, these trapped microbes eventually disintegrate, suggesting that the protein is able to kill them by disrupting their cell membranes. This antimicrobial activity appears to affect a wide range of bacteria, including some that are resistant to traditional antibiotics.

These dual functions help to protect the lining of the GI tract from infection, the researchers believe.

“Intelectin-2 first reinforces the mucus barrier itself, and then if that barrier is breached, it can control the bacteria and restrict their growth,” Kiessling says.

Fighting off infection

In patients with inflammatory bowel disease, intelectin-2 levels can become abnormally high or low. Low levels could contribute to degradation of the mucus barrier, while high levels could kill off too many beneficial bacteria that normally live in the gut. Finding ways to restore the correct levels of intelectin-2 could be beneficial for those patients, the researchers say.

“Our findings show just how critical it is to stabilize the mucus barrier. Looking ahead, we can imagine exploiting lectin properties to design proteins that actively reinforce that protective layer,” Kiessling says.

Because intelectin-2 can neutralize or eliminate pathogens such as Staphylococcus aureus and Klebsiella pneumoniae, which are often difficult to treat with antibiotics, it could potentially be adapted as an antimicrobial agent.

“Harnessing human lectins as tools to combat antimicrobial resistance opens up a fundamentally new strategy that draws on our own innate immune defenses,” Kiessling says. “Taking advantage of proteins that the body already uses to protect itself against pathogens is compelling and a direction that we are pursuing.”

The research was funded by the National Institutes of Health Glycoscience Common Fund, the National Institute of Allergy and Infectious Diseases, the National Institute of General Medical Sciences, and the National Science Foundation.

Other authors who contributed to the study include Charles Bevins, a professor of medical microbiology and immunology at the University of California at Davis School of Medicine; Ramnik Xavier, a professor of medicine at Harvard Medical School and the Broad Institute of MIT and Harvard; and Katharina Ribbeck, the Andrew and Erna Viterbi Professor of Biological Engineering at MIT.


Understanding ammonia energy’s tradeoffs around the world

MIT Energy Initiative researchers calculated the economic and environmental impact of future ammonia energy production and trade pathways.


Many people are optimistic about ammonia’s potential as an energy source and carrier of hydrogen, and though large-scale adoption would require major changes to the way it is currently manufactured, ammonia does have a number of advantages. For one thing, ammonia is energy-dense and carbon-free. It is also already produced at scale and shipped around the world, primarily for use in fertilizer.

Though current manufacturing processes give ammonia an enormous carbon footprint, cleaner ways to make ammonia do exist. A better understanding of how to guide the ammonia fuel industry’s continued development could help reduce carbon emissions and energy costs while improving regional energy balances.

In a new paper, MIT Energy Initiative (MITEI) researchers created the largest combined dataset showing the economic and environmental impact of global ammonia supply chains under different scenarios. They examined potential ammonia flows across 63 countries and considered a variety of country-specific economic parameters as well as low- and no-carbon ammonia production technologies. The results should help researchers, policymakers, and industry stakeholders calculate the cost and lifecycle emissions of different ammonia production technologies and trade routes.

“This is the most comprehensive work on the global ammonia landscape,” says senior author Guiyan Zang, a research scientist at MITEI. “We developed many of these frameworks at MIT to be able to make better cost-benefit analyses. Hydrogen and ammonia are the only two types of fuel with no carbon at scale. If we want to use fuel to generate power and heat, but not release carbon, hydrogen and ammonia are the only options, and ammonia is easier to transport and lower-cost.”

The study provides the clearest view yet of the tradeoffs associated with various ammonia production technologies. The researchers found, for instance, that a full transition to ammonia produced using conventional processes paired with carbon capture could cut greenhouse gas emissions from global ammonia production by nearly 71 percent for a 23.2 percent cost increase. A transition to electrolyzed ammonia produced using renewable energy could reduce those emissions by 99.7 percent for a 46 percent cost increase.

“Before this, there were no harmonized datasets quantifying the impacts of this transition,” says lead author Woojae Shin, a postdoc at MITEI. “Everyone is talking about ammonia as a super important hydrogen carrier in the future, and also ammonia can be directly used in power generation or fertilizer and other industrial uses. But we needed this dataset. It’s filling a major knowledge gap.”

The paper appears in Energy and Environmental Science. Former MITEI postdocs Haoxiang Lai and Gasim Ibrahim are also co-authors.

Filling a data gap

Today ammonia is mainly produced through the Haber-Bosch process, which in 2020 was responsible for about 1.8 percent of global greenhouse gas emissions. While conventional production (yielding what is referred to as gray ammonia) is energy-intensive and polluting, ammonia can also be produced sustainably using renewable sources (green ammonia) or with natural gas and carbon sequestration (blue ammonia).

As ammonia has increasingly attracted interest as a carbon-free energy source and a medium for hydrogen transport, it’s become more important to quantify the costs and life-cycle emissions associated with various ammonia production technologies, as well as ammonia storage and shipping routes. But existing studies were too narrowly focused.

“The previous studies and datasets were fragmented,” Shin says. “They focused on specific regions or single technologies, like gray ammonia only, or blue ammonia only. They would also only cover the cost or the greenhouse emissions of ammonia in isolation. Finally, they use different scopes and methodologies. It meant you couldn’t make global comparisons or draw definitive conclusions.”

To build their database, the MIT researchers combined data from dozens of studies analyzing specific technologies, regions, economic parameters, and trade flows. They also used frameworks they previously developed to calculate the total cost of ammonia production in each country and estimated lifecycle greenhouse gas emissions across the supply chain, factoring in storage and shipping between different regions.

Emissions calculations included activities related to feedstock extraction, production, transport, and import processing. Major cost factors included each country’s renewable and grid electricity prices, natural gas prices, and location. Other factors like interest rates and equity premiums were also included.
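The study’s own cost frameworks are not spelled out in this article, but as a rough, purely hypothetical illustration of how such country-specific parameters might combine into a per-kilogram figure, consider the Python sketch below. Every function, formula, and input value here is an assumption chosen for illustration; only the general ingredients (energy prices, capital costs, financing rates) mirror the factors listed above.

    # Hypothetical illustration only: combining country-level inputs into a
    # levelized ammonia production cost. Nothing here is taken from the MITEI
    # study; the capital-recovery-factor formula is standard finance, and all
    # input values are invented.

    def crf(rate: float, years: int) -> float:
        """Capital recovery factor: converts upfront capex into an annual payment."""
        return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

    def levelized_cost(energy_cost_per_kg: float,
                       capex_per_kg_annual_capacity: float,
                       rate: float, lifetime_years: int) -> float:
        """Energy inputs plus annualized capital, in dollars per kg of ammonia."""
        return energy_cost_per_kg + capex_per_kg_annual_capacity * crf(rate, lifetime_years)

    # Invented inputs: an electrolysis route with cheap renewables but costly
    # financing, versus a gas route with cheap fuel and cheap capital.
    green = levelized_cost(0.03 * 10.0, 2.0, rate=0.08, lifetime_years=25)  # $/kWh x kWh/kg
    blue = levelized_cost(0.004 * 40.0, 1.0, rate=0.05, lifetime_years=25)  # $/MJ x MJ/kg
    print(f"green: ${green:.2f}/kg, blue: ${blue:.2f}/kg")

Under these made-up numbers, financing terms shift the comparison about as much as energy prices do, consistent with the study’s emphasis on interest rates and equity premiums.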

The researchers used their calculations to find ammonia costs and life-cycle emissions across six ammonia production technologies. For the U.S. average, they found the lowest production cost came from a popular form of the Haber-Bosch process known as natural gas steam methane reforming (SMR) without carbon capture and storage (gray ammonia), at 48 cents per kilogram of ammonia. That economic advantage, however, came with the highest greenhouse gas emissions, at 2.46 kilograms of CO2 equivalent per kilogram of ammonia. In contrast, SMR with carbon capture and storage achieves an approximately 61 percent reduction in emissions while incurring a 29 percent increase in production costs.

Another natural gas-based production method, auto-thermal reforming (ATR) with air combustion, exhibited a 10 percent higher cost than conventional SMR when combined with carbon capture and storage, while generating emissions of 0.75 kilograms of CO2 equivalent per kilogram of ammonia. That makes it a more cost-effective decarbonization option than SMR with carbon capture and storage.
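As a quick sanity check, the percentage comparisons above can be converted back into absolute figures from the gray-ammonia baseline. The short Python snippet below does only that arithmetic, using the numbers quoted in this article; the derived dollar and emissions values are implied by the stated percentages rather than reported directly.

    # Back-of-the-envelope conversion of the quoted percentage comparisons
    # into absolute figures. Baseline values (gray ammonia via SMR, U.S.
    # average) come from the article; derived numbers are implied, not reported.
    gray_cost = 0.48       # dollars per kg NH3, SMR without carbon capture
    gray_emissions = 2.46  # kg CO2e per kg NH3

    smr_ccs_cost = gray_cost * 1.29                  # "29 percent increase in production costs"
    smr_ccs_emissions = gray_emissions * (1 - 0.61)  # "~61 percent reduction in emissions"

    atr_air_ccs_cost = gray_cost * 1.10  # "10 percent higher cost than conventional SMR"
    atr_air_ccs_emissions = 0.75         # kg CO2e per kg, as reported

    print(f"SMR + CCS:      ~${smr_ccs_cost:.2f}/kg, ~{smr_ccs_emissions:.2f} kg CO2e/kg")
    print(f"ATR(air) + CCS: ~${atr_air_ccs_cost:.2f}/kg,  {atr_air_ccs_emissions:.2f} kg CO2e/kg")

The arithmetic puts SMR with carbon capture at roughly 62 cents and 0.96 kilograms of CO2 equivalent per kilogram, and ATR with air combustion and capture at roughly 53 cents, which is why the latter comes out as the more cost-effective decarbonization option.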

Among production pathways that include carbon capture (blue ammonia), a variation of ATR that uses oxygen combustion and carbon capture had the lowest emissions, with a production cost of about 57 cents per kilogram of ammonia. Producing ammonia with electricity generally had higher production costs than the blue ammonia pathways. When nuclear energy powers ammonia production instead of the grid, however, greenhouse gas emissions are near zero, at 0.03 kilograms of CO2 equivalent per kilogram of ammonia produced.

Across the 63 countries studied, major cost and emissions differences were driven by energy costs, sources of energy for the grid, and financing environments. China emerged as an optimal future supplier of green ammonia to many countries, while the Middle East also offered competitive low-carbon ammonia production pathways. Generally, blue ammonia pathways are most attractive for countries with low-cost natural gas resources, and ammonia made using grid electricity proved more expensive and more carbon-intensive than conventionally produced ammonia.

From data to policy

Low-carbon ammonia use is projected to grow dramatically by 2050, with that ammonia procured via global trade. Japan and Korea, for example, have included ammonia in their national energy strategies and conducted trials using ammonia to generate power. They even offer economic credits for verified CO2 reductions from clean ammonia projects.

“Ammonia researchers, producers, as well as government officials require this data to understand the impact of different technologies and global supply corridors,” Shin says.

The authors also believe industry stakeholders and other researchers will get a lot of value from their database, which allows users to explore the impact of changing specific parameters.

“We collaborate with companies, and they need to know the full costs and lifecycle emissions associated with different options,” Zang says. “Governments can also use this to compare options and set future policies. Any country producing ammonia needs to know which countries they can deliver to economically.”

The research was supported by the MIT Energy Initiative’s Future Energy Systems Center.


This new tool could tell us how consciousness works

Researchers propose a roadmap for using transcranial focused ultrasound, a noninvasive way to stimulate the brain and see how it functions.


Consciousness is famously a “hard problem” of science: We don’t precisely know how the physical matter in our brains translates into thoughts, sensations, and feelings. But an emerging research tool called transcranial focused ultrasound may enable researchers to learn more about the phenomenon.

The technology has entered use in recent years, but it isn’t yet fully integrated into research. Now, two MIT researchers are planning experiments with it, and have published a new paper they term a “roadmap” for using the tool to study consciousness.

“Transcranial focused ultrasound will let you stimulate different parts of the brain in healthy subjects, in ways you just couldn’t before,” says Daniel Freeman, an MIT researcher and co-author of a new paper on the subject. “This is a tool that’s not just useful for medicine or even basic science, but could also help address the hard problem of consciousness. It can probe where in the brain are the neural circuits that generate a sense of pain, a sense of vision, or even something as complex as human thought.”

Transcranial focused ultrasound is noninvasive and reaches deeper into the brain, with greater resolution, than other forms of brain stimulation, such as transcranial magnetic or electrical stimulation.

“There are very few reliable ways of manipulating brain activity that are safe but also work,” says Matthias Michel, an MIT philosopher who studies consciousness and co-authored the new work.

The paper, “Transcranial focused ultrasound for identifying the neural substrate of conscious perception,” is published in Neuroscience and Biobehavioral Reviews. The authors are Freeman, a technical staff member at MIT Lincoln Laboratory; Brian Odegaard, an assistant professor of psychology at the University of Florida; Seung-Schik Yoo, an associate professor of radiology at Brigham and Women’s Hospital and Harvard Medical School; and Michel, an associate professor in MIT’s Department of Linguistics and Philosophy.

Pinpointing causality

Brain research is especially difficult because of the challenge of studying healthy individuals. Apart from neurosurgery, there are very limited ways to gain knowledge of the deepest structures in the human brain. From the outside of the head, noninvasive approaches like MRIs and other kinds of ultrasounds yield some imaging information, while the electroencephalogram (EEG) shows electrical activity in the brain. Conversely, with transcranial focused ultrasound, acoustic waves are transmitted through the skull, focusing down to a target area of a few millimeters, allowing specific brain structures to be stimulated to study the resulting effect. It could therefore be a productive tool for robust experiments.

“It truly is the first time in history that one can modulate activity deep in the brain, centimeters from the scalp, examining subcortical structures with high spatial resolution,” Freeman says. “There’s a lot of interesting emotional circuits that are deep in the brain, but until now you couldn’t manipulate them outside of the operating room.”

Crucially, the technology may help researchers determine cause-and-effect patterns, precisely because its ultrasound waves modulate brain activity. Many studies of consciousness today measure brain activity in relation to, say, visual stimuli, since visual processing is among the core components of consciousness. But it’s not necessarily clear whether the brain activity being measured represents the generation of consciousness or a mere consequence of it. By manipulating the brain’s activity, researchers can better grasp which processes help constitute consciousness and which are byproducts of it.

“Transcranial focused ultrasound gives us a solution to that problem,” says Michel.

The “roadmap” laid out in the new paper aims to help distinguish between two main conceptions of consciousness. Broadly, the “cognitivist” conception holds that the neural activity that generates conscious experience must involve higher-level mental processes, such as reasoning or self-reflection. These processes link information from many different parts of the brain into a coherent whole, likely using the frontal cortex of the brain.

By contrast, the “non-cognitivist” idea of consciousness takes the position that conscious experience does not require such cognitive machinery; instead, specific patterns of neural activity give rise directly to particular subjective experiences, without the need for sophisticated interpretive processes. In this view, brain activity responsible for consciousness may be more localized, at the back of the cortex or in subcortical structures at the back of the brain.

To use transcranial focused ultrasound productively, the researchers lay out a series of more specific questions that experiments might address: What is the role of the prefrontal cortex in conscious perception? Is perception generated locally, or are brain-wide networks required? If consciousness arises across distant regions of the brain, how are perceptions from those areas linked into one unified experience? And what is the role of subcortical structures in conscious activity?

By modulating brain activity in experiments involving, say, visual stimuli, researchers could draw closer to answers about the brain areas that are necessary in the production of conscious thought. The same goes for studies of, for instance, pain, another core sensation linked with consciousness. We pull our hand back from a hot stove before the pain hits us. But how is the conscious sensation of pain generated, and where in the brain does that happen?

“It’s a basic science question, how is pain generated in the brain,” Freeman says. “And it’s surprising there is such uncertainty … Pain could stem from cortical areas, or it could be deeper brain structures. I’m interested in therapies, but I’m also curious if subcortical structures may play a bigger role than appreciated. It could be the physical manifestation of pain is subcortical. That’s a hypothesis. But now we have a tool to examine it.”

Experiments ahead

Freeman and Michel are not just abstractly charting a course for others to follow; they are planning experiments centered on stimulation of the visual cortex, before moving on to higher-level areas in the frontal cortex. While methods of recording brain activity, such as EEG, reveal areas that are visually responsive, the new experiments aim to build a more complete, causal picture of the entire process of visual perception and its associated brain activity.

“It’s one thing to say if these neurons responded electrically. It’s another thing to say if a person saw light,” Freeman says.

Michel, for his part, is also playing an active role in generating further interest in studies of consciousness at MIT. Along with Earl Miller, the Picower Professor of Neuroscience in MIT’s Department of Brain and Cognitive Sciences, Michel is a co-founder of the MIT Consciousness Club, a cross-disciplinary effort to spur further academic study of consciousness, on campus and at other Boston-area institutions.

The MIT Consciousness Club is supported in part by MITHIC, the MIT Human Insight Collaborative, an initiative backed by the School of Humanities, Arts, and Social Sciences. The program aims to hold monthly events, while grappling with the cutting edge of consciousness research.

At the moment, Michel believes, the cutting edge very much involves transcranial focused ultrasound.

“It’s a new tool, so we don’t really know to what extent it’s going to work,” Michel says. “But I feel there’s low risk and high reward. Why wouldn’t you take this path?”

The research for the paper was supported by the U.S. Department of the Air Force. 


3 Questions: How AI could optimize the power grid

While the growing energy demands of AI are worrying, some techniques can also help make power grids cleaner and more efficient.


Artificial intelligence has captured headlines recently for its rapidly growing energy demands, and particularly the surging electricity usage of data centers that enable the training and deployment of the latest generative AI models. But it’s not all bad news — some AI tools have the potential to reduce some forms of energy consumption and enable cleaner grids.

One of the most promising applications is using AI to optimize the power grid, which would improve efficiency, increase resilience to extreme weather, and enable the integration of more renewable energy. To learn more, MIT News spoke with Priya Donti, the Silverman Family Career Development Professor in the MIT Department of Electrical Engineering and Computer Science (EECS) and a principal investigator at the Laboratory for Information and Decision Systems (LIDS), whose work focuses on applying machine learning to optimize the power grid.

Q: Why does the power grid need to be optimized in the first place?

A: We need to maintain an exact balance between the amount of power that is put into the grid and the amount that comes out at every moment in time. But on the demand side, we have some uncertainty. Power companies don’t ask customers to pre-register the amount of energy they are going to use ahead of time, so some estimation and prediction must be done.

Then, on the supply side, there is typically some variation in costs and fuel availability that grid managers need to be responsive to. That has become an even bigger issue because of the integration of energy from time-varying renewable sources, like solar and wind, where uncertainty in the weather can have a major impact on how much power is available. Then, at the same time, depending on how power is flowing in the grid, there is some power lost through resistive heat on the power lines. So, as a grid operator, how do you make sure all that is working all the time? That is where optimization comes in.

Q: How can AI be most useful in power grid optimization?

A: One way AI can be helpful is to use a combination of historical and real-time data to make more precise predictions about how much renewable energy will be available at a certain time. This could lead to a cleaner power grid by allowing us to better manage and utilize these resources.

AI could also help tackle the complex optimization problems that power grid operators must solve to balance supply and demand in a way that also reduces costs. These optimization problems are used to determine which power generators should produce power, how much they should produce, and when they should produce it, as well as when batteries should be charged and discharged, and whether we can leverage flexibility in power loads. The problems are so computationally expensive that operators use approximations so they can solve them in a feasible amount of time. But these approximations are often wrong, and when we integrate more renewable energy into the grid, they are thrown off even further. AI can help by providing more accurate approximations in a faster manner, which can be deployed in real time to help grid operators responsively and proactively manage the grid.

AI could also be useful in the planning of next-generation power grids. Planning for power grids requires one to use huge simulation models, so AI can play a big role in running those models more efficiently. The technology can also help with predictive maintenance by detecting where anomalous behavior on the grid is likely to happen, reducing inefficiencies that come from outages. More broadly, AI could also be applied to accelerate experimentation aimed at creating better batteries, which would allow the integration of more energy from renewable sources into the grid.
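To make the flavor of the optimization Donti describes concrete, here is a toy Python sketch of “economic dispatch,” one of the simplest such problems: choose each generator’s output to meet demand at minimum cost, within capacity limits. All generator data are invented for illustration, and real grid problems add power-flow physics and many more constraints.

    # Toy economic dispatch: meet demand at minimum cost within capacity limits.
    # All costs and capacities below are made-up illustrative numbers.
    from scipy.optimize import linprog

    costs = [20.0, 35.0, 50.0]          # $/MWh for three hypothetical generators
    capacities = [400.0, 300.0, 200.0]  # MW limit for each generator
    demand = 650.0                      # MW that must be supplied this hour

    result = linprog(
        c=costs,                                    # minimize total generation cost
        A_eq=[[1.0, 1.0, 1.0]], b_eq=[demand],      # supply must exactly equal demand
        bounds=[(0.0, cap) for cap in capacities],  # keep each unit within its capacity
    )
    print("dispatch (MW):", result.x)      # cheapest units are used first
    print("total cost ($/h):", result.fun)

In this linear toy version the solver simply fills the cheapest generators first; the problem operators actually face adds nonconvex AC power-flow constraints, which is what makes the fast, learned approximations described above attractive.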

Q: How should we think about the pros and cons of AI, from an energy sector perspective?

A: One important thing to remember is that AI refers to a heterogeneous set of technologies. There are different types and sizes of models that are used, and different ways that models are used. If you are using a model that is trained on a smaller amount of data with a smaller number of parameters, that is going to consume much less energy than a large, general-purpose model.

In the context of the energy sector, there are a lot of places where, if you use these application-specific AI models for the applications they are intended for, the cost-benefit tradeoff works out in your favor. In these cases, the applications are enabling benefits from a sustainability perspective — like incorporating more renewables into the grid and supporting decarbonization strategies.

Overall, it’s important to think about whether the types of investments we are making into AI are actually matched with the benefits we want from AI. On a societal level, I think the answer to that question right now is “no.” There is a lot of development and expansion of a particular subset of AI technologies, and these are not the technologies that will have the biggest benefits across energy and climate applications. I’m not saying these technologies are useless, but they are incredibly resource-intensive, while also not being responsible for the lion’s share of the benefits that could be felt in the energy sector.

I’m excited to develop AI algorithms that respect the physical constraints of the power grid so that we can credibly deploy them. This is a hard problem to solve. If an LLM says something that is slightly incorrect, as humans, we can usually correct for that in our heads. But if you make the same magnitude of a mistake when you are optimizing a power grid, that can cause a large-scale blackout. We need to build models differently, but this also provides an opportunity to benefit from our knowledge of how the physics of the power grid works.

And more broadly, I think it’s critical that those of us in the technical community put our efforts toward fostering a more democratized system of AI development and deployment, and that it’s done in a way that is aligned with the needs of on-the-ground applications.


Pills that communicate from the stomach could improve medication adherence

MIT engineers designed capsules with biodegradable radio frequency antennas that can reveal when the pill has been swallowed.


In an advance that could help ensure people are taking their medication on schedule, MIT engineers have designed a pill that can report when it has been swallowed.

The new reporting system, which can be incorporated into existing pill capsules, contains a biodegradable radio frequency antenna. After it sends out the signal that the pill has been consumed, most components break down in the stomach while a tiny RF chip passes out of the body through the digestive tract.

This type of system could be useful for monitoring transplant patients who need to take immunosuppressive drugs, or people with infections such as HIV or TB, who need treatment for an extended period of time, the researchers say.

“The goal is to make sure that this helps people receive the therapy they need to help maximize their health,” says Giovanni Traverso, an associate professor of mechanical engineering at MIT, a gastroenterologist at Brigham and Women’s Hospital, and an associate member of the Broad Institute of MIT and Harvard.

Traverso is the senior author of the new study, which appears today in Nature Communications. Mehmet Girayhan Say, an MIT research scientist, and Sean You, a former MIT postdoc, are the lead authors of the paper.

A pill that communicates

Patients’ failure to take their medicine as prescribed is a major challenge that contributes to hundreds of thousands of preventable deaths and billions of dollars in health care costs annually.

To make it easier for people to take their medication, Traverso’s lab has worked on delivery capsules that can remain in the digestive tract for days or weeks, releasing doses at predetermined times. However, this approach may not be compatible with all drugs.

“We’ve developed systems that can stay in the body for a long time, and we know that those systems can improve adherence, but we also recognize that for certain medications, we can’t change the pill,” Traverso says. “The question becomes: What else can we do to help the person and help their health care providers ensure that they’re receiving the medication?”

In their new study, the researchers focused on a strategy that would allow doctors to more closely monitor whether patients are taking their medication. Using radio frequency — a type of signal that can be easily detected from outside the body and is safe for humans — they designed a capsule that can communicate after the patient has swallowed it.

There have been previous efforts to develop RF-based signaling devices for medication capsules, but those were all made from components that don’t break down easily in the body and would need to travel through the digestive system.

To minimize the potential risk of any blockage of the GI tract, the MIT team decided to create an RF-based system that would be bioresorbable, meaning that it can be broken down and absorbed by the body. The antenna that sends out the RF signal is made from zinc, and it is embedded into a cellulose particle.

“We chose these materials recognizing their very favorable safety profiles and also environmental compatibility,” Traverso says.

The zinc-cellulose antenna is rolled up and placed inside a capsule along with the drug to be delivered. The outer layer of the capsule is made from gelatin coated with a layer of cellulose and either molybdenum or tungsten, which blocks any RF signal from being emitted.

Once the capsule is swallowed, the coating breaks down, releasing the drug along with the RF antenna. The antenna can then pick up an RF signal sent from an external receiver and, working with a small RF chip, send back a signal to confirm that the capsule was swallowed. This exchange happens within 10 minutes of the pill being taken.

The RF chip, which is about 400 by 400 micrometers, is an off-the-shelf chip that is not biodegradable and would need to be excreted through the digestive tract. All of the other components would break down in the stomach within a week.

“The components are designed to break down over days using materials with well-established safety profiles, such as zinc and cellulose, which are already widely used in medicine,” Say says. “Our goal is to avoid long-term accumulation while enabling reliable confirmation that a pill was taken, and longer-term safety will continue to be evaluated as the technology moves toward clinical use.”

Promoting adherence

Tests in an animal model showed that the RF signal was successfully transmitted from inside the stomach and could be read by an external receiver at a distance of up to 2 feet. If developed for use in humans, the researchers envision designing a wearable device that could receive the signal and then transmit it to the patient’s health care team.

The researchers now plan to do further preclinical studies and hope to soon test the system in humans. One patient population that could benefit greatly from this type of monitoring is people who have recently had organ transplants and need to take immunosuppressant drugs to make sure their body doesn’t reject the new organ.

“We want to prioritize medications that, when non-adherence is present, could have a really detrimental effect for the individual,” Traverso says.

Other populations that could benefit include people who have recently had a stent inserted and need to take medication to help prevent blockage of the stent, people with chronic infectious diseases such as tuberculosis, and people with neuropsychiatric disorders whose conditions may impair their ability to take their medication.

The research was funded by Novo Nordisk, MIT’s Department of Mechanical Engineering, the Division of Gastroenterology at Brigham and Women’s Hospital, and the U.S. Advanced Research Projects Agency for Health (ARPA-H), which notes that the views and conclusions contained in this article are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the United States Government.

This work was carried out, in part, through the use of MIT.nano’s facilities.