As the story goes, the Scottish inventor James Watt envisioned how steam engines should work one day in 1765, while walking across Glasgow Green, a park in his hometown. Watt realized that putting a separate condenser in an engine would allow its main cylinder to remain hot, making the engine more efficient and compact than the huge steam engines then in existence.
And yet Watt, who had been pondering the problem for a while, needed a partnership with the entrepreneur Matthew Boulton to get a practical product to market; the pair began selling engines in 1775, and success came only in later years.
“People still use this story of Watt’s ‘Eureka!’ moment, which Watt himself promoted later in his life,” says MIT Professor David Mindell, an engineer and historian of science and engineering. “But it took 20 years of hard labor, during which Watt struggled to support a family and had multiple failures, to get it out in the world. Multiple other inventions were required to achieve what we today call product-market fit.”
The full story of the steam engine, Mindell argues, is a classic case of what is today called “process innovation,” not just “product innovation.” Inventions are rarely fully formed products, ready to change the world. Mostly, they need a constellation of improvements, and sustained persuasion, to win adoption into industrial systems.
What was true for Watt still holds, as Mindell’s body of work shows. Most technology-driven growth today comes from overlapping advances, when inventors and companies tweak and improve things over time. Now, Mindell explores those ideas in a forthcoming book, “The New Lunar Society: An Enlightenment Guide to the Next Industrial Revolution,” being published on Feb. 24 by the MIT Press. Mindell is professor of aeronautics and astronautics and the Dibner Professor of the History of Engineering and Manufacturing at MIT, where he has also co-founded the Work of the Future initiative.
“We’ve overemphasized product innovation, although we’re very good at it,” Mindell says. “But it’s become apparent that process innovation is just as important: how you improve the making, fixing, rebuilding, or upgrading of systems. These are deeply entangled. Manufacturing is part of process innovation.”
Today, with so many things being positioned as world-changing products, it may be especially important to notice that being adaptive and persistent is practically the essence of improvement.
“Young innovators don’t always realize that when their invention doesn’t work at first, they’re at the start of a process where they have to refine and engage, and find the right partners to grow,” Mindell says.
Manufacturing at home
The title of Mindell’s book refers to British Enlightenment thinkers and inventors — Watt was one of them — who used to meet in a group they called the Lunar Society, centered in Birmingham. This included pottery innovator Josiah Wedgwood; physician Erasmus Darwin; chemist Joseph Priestley; and Boulton, a metal manufacturer whose work and capital helped make Watt’s improved steam engine a reliable product. The book moves between chapters on the old Lunar Society and those on contemporary industrial systems, drawing parallels between then and now.
“The stories about the Lunar Society are models for the way people can go about their careers, engineering or otherwise, in a way they may not see in popular press about technology today,” Mindell says. “Everyone told Wedgwood he couldn’t compete with Chinese porcelain, yet he learned from the Lunar Society and built an English pottery industry that led the world.”
Applying the Lunar Society’s virtues to contemporary industry leads Mindell to a core set of ideas about technology. Research shows that design and manufacturing should be adjacent if possible, not outsourced globally, to accelerate learning and collaboration. The book also argues that technology should address human needs and that venture capital should focus more on industrial systems than it does. (Mindell has co-founded a firm, called Unless, that invests in companies by using venture financing structures better-suited to industrial transformation.)
In seeing a new industrialism taking shape, Mindell suggests that its future includes new ways of working, collaborating, and valuing knowledge throughout organizations, as well as more AI-based open-source tools for small and mid-size manufacturers. He also contends that a new industrialism should include greater emphasis on maintenance and repair work, which are valuable sources of knowledge about industrial devices and systems.
“We’ve undervalued how to keep things running, while simultaneously hollowing out the middle of the workforce,” he says. “And yet, operations and maintenance are sites of product innovation. Ask the person who fixes your car or dishwasher. They’ll tell you the strengths and weaknesses of every model.”
All told, “The sum total of this work, over time, amounts to a new industrialism if it elevates its cultural status into a movement that values the material basis of our lives and seeks to improve it, literally from the ground up,” Mindell writes in the book.
“The book doesn’t predict the future,” he says. “But rather it suggests how to talk about the future of industry with optimism and realism, as opposed to saying, this is the utopian future where machines do everything, and people just sit back in chairs with wires coming out of their heads.”
Work of the Future
“The New Lunar Society” is a concise book with expansive ideas. Mindell also devotes chapters to the convergence of the Industrial-era Enlightenment, the founding of the U.S., and the crucial role of industry in forming the republic.
“The only founding father who signed all of the critical documents in the founding of the country, Benjamin Franklin, was also the person who crystallized the modern science of electricity and deployed its first practical invention, the lightning rod,” Mindell says. “But there were multiple figures, including Thomas Jefferson and Paul Revere, who integrated the industrial Enlightenment with democracy. Industry has been core to American democracy from the beginning.”
Indeed, as Mindell emphasizes in the book, “industry,” beyond evoking smokestacks, has a human meaning: If you are hard-working, you are displaying industry. That meshes with the idea of persistently redeveloping an invention over time.
Despite the high regard Mindell holds for the Industrial Enlightenment, he recognizes that the era’s industrialization brought harsh working conditions, as well as environmental degradation. As one of the co-founders of MIT’s Work of the Future initiative, he argues that 21st-century industrialism needs to rethink some of its fundamentals.
“The ideals of [British] industrialization missed on the environment, and missed on labor,” Mindell says. “So at this point, how do we rethink industrial systems to do better?” Mindell argues that industry must power an economy that grows while decarbonizing.
After all, Mindell adds, “About 70 percent of greenhouse gas emissions are from industrial sectors, and all of the potential solutions involve making lots of new stuff. Even if it’s just connectors and wire. We’re not going to decarbonize or address global supply chain crises by deindustrializing, we’re going to get there by reindustrializing.”
“The New Lunar Society” has received praise from technologists and other scholars. Joel Mokyr, an economic historian at Northwestern University who coined the term “Industrial Enlightenment,” has stated that Mindell “realizes that innovation requires a combination of knowing and making, mind and hand. … He has written a deeply original and insightful book.” Jeff Wilke SM ’93, a former CEO of Amazon’s consumer business, has said the book “argues compellingly that a thriving industrial base, adept at both product and process innovation, underpins a strong democracy.”
Mindell hopes the audience for the book will range from younger technologists to a general audience of anyone interested in the industrial future.
“I think about young people in industrial settings and want to help them see they’re part of a great tradition and are doing important things to change the world,” Mindell says. “There is a huge audience of people who are interested in technology but find overhyped language does not match their aspirations or personal experience. I’m trying to crystallize this new industrialism as a way of imagining and talking about the future.”
Driving innovation, from Silicon Valley to Detroit
Doug Field SM ’92, Ford’s chief of EVs and digital design, leads the legacy carmaker into the software-enabled, battery-propelled future.
Across a career’s worth of pioneering product designs, Doug Field’s work has shaped the experience of anyone who’s ever used a MacBook Air, ridden a Segway, or driven a Tesla Model 3.
But his newest project is his most ambitious yet: reinventing the Ford automobile, one of the past century’s most iconic pieces of technology.
As Ford’s chief electric vehicle (EV), digital, and design officer, Field is tasked with leading the development of the company’s electric vehicles, while making new software platforms central to all Ford models.
To bring Ford Motor Co. into that digital and electric future, Field effectively has to lead a fast-moving startup inside the legacy carmaker. “It is incredibly hard, figuring out how to do ‘startups’ within large organizations,” he concedes.
If anyone can pull it off, it’s likely to be Field. Ever since his time in MIT’s Leaders for Global Operations (then known as “Leaders in Manufacturing”) program studying organizational behavior and strategy, Field has been fixated on creating the conditions that foster innovation.
“The natural state of an organization is to make it harder and harder to do those things: to innovate, to have small teams, to go against the grain,” he says. To overcome those forces, Field has become a master practitioner of the art of curating diverse, talented teams and helping them flourish inside of big, complex companies.
“It’s one thing to make a creative environment where you can come up with big ideas,” he says. “It’s another to create an execution-focused environment to crank things out. I became intrigued with, and have been for the rest of my career, this question of how can you have both work together?”
Three decades after his first stint as a development engineer at Ford Motor Co., Field now has a chance to marry the manufacturing muscle of Ford with the bold approach that helped him rethink Apple’s laptops and craft Tesla’s Model 3 sedan. His task is nothing less than rethinking how cars are made and operated, from the bottom up.
“If it’s only creative or execution, you’re not going to change the world,” he says. “If you want to have a huge impact, you need people to change the course you’re on, and you need people to build it.”
A passion for design
From a young age, Field had a fascination with automobiles. “I was definitely into cars and transportation more generally,” he says. “I thought of cars as the place where technology and art and human design came together — cars were where all my interests intersected.”
With a mother who was an artist and musician and an engineer father, Field credits his parents’ influence for his lifelong interest in both the aesthetic and technical elements of product design. “I think that’s why I’m drawn to autos — there’s very much an aesthetic aspect to the product,” he says.
After earning a degree in mechanical engineering from Purdue University, Field took a job at Ford in 1987. The big Detroit automakers of that era excelled at mass-producing cars, but weren’t necessarily set up to encourage or reward innovative thinking. Field chafed at the “overstructured and bureaucratic” operational culture he encountered.
The experience was frustrating at times, but also valuable and clarifying. He realized that he “wanted to work with fast-moving, technology-based businesses.”
“My interest in advancing technical problem-solving didn’t have a place in the auto industry” at the time, he says. “I knew I wanted to work with passionate people and create something that didn’t exist, in an environment where talent and innovation were prized, where irreverence was an asset and not a liability. When I read about Silicon Valley, I loved the way they talked about things.”
During that time, Field took two years off to enroll in MIT’s LGO program, where he deepened his technical skills and encountered ideas about manufacturing processes and team-driven innovation that would serve him well in the years ahead.
“Some of [the] core skill sets that I developed there were really, really important,” he says, “in the context of production lines and production processes.” He studied systems engineering and the use of Monte Carlo simulations to model complex manufacturing environments. During his internship with aerospace manufacturer Pratt & Whitney, he worked on automated design in computer-aided design (CAD) systems, long before those techniques became standard practice.
Another powerful tool he picked up was the science of probability and statistics, under the tutelage of MIT Professor Alvin Drake in his legendary course 6.041/6.431 (Probabilistic Systems Analysis). Field would go on to apply those insights not only to production processes, but also to characterizing variability in people’s aptitudes, working styles, and talents, in the service of building better, more innovative teams. And studying organizational strategy catalyzed his career-long interest in “ways to look at innovation as an outcome, rather than a random spark of genius.”
“So many things I was lucky to be exposed to at MIT,” Field says, were “all building blocks, pieces of the puzzle, that helped me navigate through difficult situations later on.”
Learning while leading
After leaving Ford in 1993, Field worked at Johnson & Johnson Medical for three years in process development. There, he met Segway inventor Dean Kamen, who was working on a project called the iBOT, a gyroscopic powered wheelchair that could climb stairs.
When Kamen spun off Segway to develop a new personal mobility device using the same technology, Field became his first hire. He spent nearly a decade as the firm’s chief technology officer.
At Segway, Field’s interests in vehicles, technology, innovation, process, and human-centered design all came together.
“When I think about working now on electric cars, it was a real gift,” he says. The problems they tackled prefigured the ones he would grapple with later at Tesla and Ford. “Segway was very much a precursor to a modern EV. Completely software controlled, with higher-voltage batteries, redundant systems, traction control, brushless DC motors — it was basically a miniature Tesla in the year 2000.”
At Segway, Field assembled an “amazing” team of engineers and designers who were as passionate as he was about pushing the envelope. “Segway was the first place I was able to hand-pick every single person I worked with, define the culture, and define the mission.”
As he grew into this leadership role, he became equally engrossed with cracking another puzzle: “How do you prize people who don’t fit in?”
“Such a fundamental part of the fabric of Silicon Valley is the love of embracing talent over a traditional organization’s ways of measuring people,” he says. “If you want to innovate, you need to learn how to manage neurodivergence and a very different set of personalities than the people you find in large corporations.”
Field still keeps the base housing of a Segway in his office, as a reminder of what those kinds of teams — along with obsessive attention to detail — can achieve.
Before joining Apple in 2008, he showed that component, with its clean lines and every minuscule part in its place in one unified package, to his prospective new colleagues. “They were like, ‘OK, you’re one of us,’” he recalls.
He soon became vice president of hardware development for all Mac computers, leading the teams behind the MacBook Air and MacBook Pro and eventually overseeing more than 2,000 employees. “Making things really simple and really elegant, thinking about the product as an integrated whole, that really took me into Apple.”
The challenge of giving the MacBook Air its signature sleek and light profile is an example.
“The MacBook Air was the first high-volume consumer electronic product built out of a CNC-machined enclosure,” says Field. He worked with industrial design and technology teams to devise a way to make the laptop from one solid piece of aluminum and jettison two-thirds of the parts found in the iMac. “We had material cut away so that every single screw and piece of electronics sat down into it [in] an integrated way. That’s how we got the product so small and slim.”
“When I interviewed with Jony Ive” — Apple’s legendary chief design officer — “he said your ability to zoom out and zoom in was the number one most important ability as a leader at Apple.” That meant zooming out to think about “the entire ethos of this product, and the way it will affect the world” and zooming all the way back in to obsess over, say, the physical shape of the laptop itself and what it feels like in a user’s hands.
“That thread of attention to detail, passion for product, design plus technology rolled directly into what I was doing at Tesla,” he says. When Field joined Tesla in 2013, he was drawn to the way the brash startup upended the approach to making cars. “Tesla was integrating digital technology into cars in a way nobody else was. They said, ‘We’re not a car company in Silicon Valley, we’re a Silicon Valley company and we happen to make cars.’”
Field assembled and led the team that produced the Model 3 sedan, Tesla’s most affordable vehicle, designed to have mass-market appeal.
That experience only reinforced the importance, and power, of zooming in and out as a designer — in a way that encompasses the bigger human resources picture.
“You have to have a broad sense of what you’re trying to accomplish and help people in the organization understand what it means to them,” he says. “You have to go across and understand operations enough to glue all of those (things) together — while still being great at and focused on something very, very deeply. That’s T-shaped leadership.”
He credits his time at LGO with providing the foundation for the “T-shaped leadership” he practices.
“An education like the one I got at MIT allowed me to keep moving that ‘T’, to focus really deep, learn a ton, teach as much as I can, and after something gets more mature, pull out and bed down into other areas where the organization needs to grow or where there’s a crisis.”
The power of marrying scale to a “startup mentality”
In 2018, Field returned to Apple as a vice president for special projects. “I left Tesla after Model 3 and Y started to ramp, as there were people better than me to run high-volume manufacturing,” he says. “I went back to Apple hoping what Tesla had learned would motivate Apple to get into a different market.”
That market was his early love: cars. Field quietly led a project to develop an electric vehicle at Apple for three years.
Then Ford CEO Jim Farley came calling. He persuaded Field to return to Ford in late 2021, partly by demonstrating how much things had changed since his first stint at the carmaker.
“Two things came through loud and clear,” Field says. “One was humility. ‘Our success is not assured.’” That attitude was strikingly different from Field’s early experience in Detroit, encountering managers who were resistant to change. “The other thing was urgency. Jim and Bill Ford said the exact same thing to me: ‘We have four or five years to completely remake this company.’”
“I said, ‘OK, if the top of company really believes that, then the auto industry may be ready for what I hope to offer.’”
So far, Field is energized and encouraged by the appetite for reinvention he’s encountered this time around at Ford.
“If you can combine what Ford does really well with what a Tesla or Rivian can do well, this is something to be reckoned with,” says Field. “Skunk works have become one of the fundamental tools of my career,” he adds, using an industry term for a project pursued by a small, autonomous group of people within a larger organization.
Ford has been developing a new, lower-cost, software-enabled EV platform — running all of the car’s sensors and components from a central digital operating system — with a “skunk works” team for the past two years. The company plans to build new sedans, SUVs, and small pickups based on this new platform.
With other legacy carmakers like Volvo racing into the electric future and fierce competition from EV leaders Tesla and Rivian, Field and his colleagues have their work cut out for them.
If he succeeds, leveraging his decades of learning and leading from LGO to Silicon Valley, then his latest chapter could transform the way we all drive — and secure a spot for Ford at the front of the electric vehicle pack in the process.
“I’ve been lucky to feel over and over that what I’m doing right now — they are going to write a book about it,” says Field. “This is a big deal, for Ford and the U.S. auto industry, and for American industry, actually.”
How telecommunications cables can image the ground beneath us
By making use of MIT’s existing fiber optic infrastructure, PhD student Hilary Chang imaged the ground underneath campus, a method that can be used to characterize seismic hazards.
When people think about fiber optic cables, it’s usually about how they’re used for telecommunications and accessing the internet. But fiber optic cables — strands of glass or plastic that allow for the transmission of light — can be used for another purpose: imaging the ground beneath our feet.
MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) PhD student Hilary Chang recently used the MIT fiber optic cable network to successfully image the ground underneath campus using a method known as distributed acoustic sensing (DAS). By using existing infrastructure, DAS can be an efficient and effective way to understand ground composition, a critical component for assessing the seismic hazard of areas, or how at risk they are from earthquake damage.
“We were able to extract very nice, coherent waves from the surroundings, and then use that to get some information about the subsurface,” says Chang, the lead author of a recent paper describing her work that was co-authored with EAPS Principal Research Scientist Nori Nakata.
Dark fibers
The MIT campus fiber optic system, installed from 2000 to 2003, services internal data transport between labs and buildings as well as external transport, such as the campus internet (MITNet). There are three major cable hubs on campus from which lines branch out into buildings and underground, much like a spiderweb.
The network allocates a certain number of strands per building, some of which are “dark fibers,” or cables that are not actively transporting information. Each campus fiber hub has redundant backbone cables between them so that, in the event of a failure, network transmission can switch to the dark fibers without loss of network services.
DAS can use existing telecommunication cables and ambient wavefields to extract information about the materials they pass through, making it a valuable tool for places like cities or the ocean floor, where conventional sensors can’t be deployed. Chang, who studies earthquake waveforms and the information we can extract from them, decided to try it out on the MIT campus.
In order to get access to the fiber optic network for the experiment, Chang reached out to John Morgante, a manager of infrastructure project engineering with MIT Information Systems and Technology (IS&T). Morgante has been at MIT since 1998 and was involved with the original project installing the fiber optic network, and was thus able to provide personal insight into selecting a route.
“It was interesting to listen to what they were trying to accomplish with the testing,” says Morgante. While IS&T has worked with students before on various projects involving the school’s network, he said that “in the physical plant area, this is the first that I can remember that we’ve actually collaborated on an experiment together.”
They decided on a path starting from a hub in Building 24, because it was the longest run of cable that was entirely underground; above-ground wires that cut through buildings wouldn’t work because they weren’t coupled to the ground, and thus were useless for the experiment. The path ran from east to west, beginning in Building 24, traveling under a section of Massachusetts Ave., along parts of Amherst and Vassar streets, and ending at Building W92.
“[Morgante] was really helpful,” says Chang, describing it as “a very good experience working with the campus IT team.”
Locating the cables
After renting an interrogator, a device that sends laser pulses to sense ambient vibrations along the fiber optic cables, Chang and a group of volunteers were given special access to connect it to the hub in Building 24. They let it run for five days.
To validate the route and make sure that the interrogator was working, Chang conducted a tap test, in which she hit the ground with a hammer several times to record the precise GPS coordinates of the cable. Conveniently, the underground route is marked by maintenance hole covers that serve as good locations to do the test. And, because she needed the environment to be as quiet as possible to collect clean data, she had to do it around 2 a.m.
“I was hitting it next to a dorm and someone yelled ‘shut up,’ probably because the hammer blows woke them up,” Chang recalls. “I was sorry.” Thankfully, she only had to tap at a few spots and could interpolate the locations for the rest.
During the day, Chang and her fellow students — Denzel Segbefia, Congcong Yuan, and Jared Bryan — performed an additional test with geophones, another instrument that detects seismic waves, out on Briggs Field, where the cable passed underneath, to compare the signals. It was an enjoyable experience for Chang; when the data were collected in 2022, the campus was coming out of pandemic measures, with remote classes sometimes still in place. “It was very nice to have everyone on the field and do something with their hands,” she says.
The noise around us
Once Chang collected the data, she was able to see plenty of environmental activity in the waveforms, including passing cars and bikes, and even the nightly passes of the train that runs along the northern edge of campus.
After identifying the noise sources, Chang and Nakata extracted coherent surface waves from the ambient noise and used the wave speeds associated with different frequencies to understand the properties of the ground the cables passed through. Stiffer materials transmit waves at higher velocities, while softer materials slow them down.
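The underlying technique, often called ambient-noise interferometry, recovers travel times by cross-correlating the noise recorded at two points: waves that pass both points leave a correlation peak at a lag equal to their travel time between the points. The short Python sketch below illustrates the idea on simulated data; every number in it is invented for illustration, and none of it comes from the MIT experiment or the team’s actual processing code.

```python
# A toy sketch of ambient-noise interferometry, the idea behind pulling
# coherent waves out of DAS recordings. All values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
fs = 200.0                            # sampling rate in Hz (assumed)
n = int(10 * fs)                      # ten seconds of "ambient noise"
dist = 30.0                           # spacing between two channels, meters
v_true = 150.0                        # assumed surface-wave speed, m/s
lag = int(round(dist / v_true * fs))  # propagation delay in samples

noise = rng.standard_normal(n)
chan_a = noise                                               # first channel
chan_b = np.roll(noise, lag) + 0.5 * rng.standard_normal(n)  # delayed + local noise

# Cross-correlate the channels: the lag of the peak estimates the time
# a wave takes to travel from one channel to the other.
xcorr = np.correlate(chan_b, chan_a, mode="full")
lags = np.arange(-n + 1, n) / fs
t_est = lags[np.argmax(xcorr)]

print(f"estimated travel time: {t_est:.3f} s")
print(f"estimated wave speed:  {dist / t_est:.0f} m/s")  # ~150 m/s
```

Repeating this across many channel pairs and narrow frequency bands yields the frequency-dependent wave speeds, the dispersion information used to infer how stiff the ground is at different depths.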
“We found out that the MIT campus is built on soft materials overlaying a relatively hard bedrock,” Chang says, which confirms previously known, albeit lower-resolution, information about the geology of the area that had been collected using seismometers.
Information like this is critical for regions that are susceptible to destructive earthquakes and other seismic hazards, including the Commonwealth of Massachusetts, which has experienced earthquakes as recently as this past week. Areas of Boston and Cambridge that sit on artificial fill from periods of rapid urbanization are especially at risk, because that kind of subsurface structure is more likely to amplify seismic waves and damage buildings. This non-intrusive method for site characterization can help ensure that buildings meet code for the correct seismic hazard level.
“Destructive seismic events do happen, and we need to be prepared,” she says.
Mishael Quraishi named 2025 Churchill Scholar
The MIT senior will pursue a master’s program at Cambridge University in the U.K.
MIT senior Mishael Quraishi has been selected as a 2025-26 Churchill Scholar and will undertake an MPhil in archaeological research at Cambridge University in the U.K. this fall.
Quraishi, who is majoring in materials science and archaeology with a minor in ancient and medieval studies, envisions a future career as a materials scientist, using archaeological methods to understand how ancient techniques can be applied to modern problems.
At the Masic Lab at MIT, Quraishi was responsible for studying Egyptian blue, the world’s oldest synthetic pigment, to uncover ancient methods for mass production. Through this research, she secured an internship at the Metropolitan Museum of Art’s Department of Scientific Research, where she characterized pigments on the Amathus sarcophagus. Last fall, she presented her findings to kick off the International Roundtable on Polychromy at the Getty Museum. Quraishi has continued research in the Masic Lab, and her work on the “Blue Room” of Pompeii was featured on NBC Nightly News.
Outside of research, Quraishi has been active in MIT’s makerspace and art communities. She has created engravings and acrylic pourings in the MIT MakerWorkshop, metal sculptures in the MIT Forge, and colored glass rods in the MIT Metropolis makerspace. Quraishi also plays the piano and harp and has sung with the Harvard Summer Chorus and the Handel and Haydn Society. She currently serves as the president of the Society for Undergraduates in Materials Science (SUMS) and captain of the lightweight women’s rowing team that won MIT’s first Division I national championship title in 2022.
“We are delighted that Mishael will have the opportunity to expand her important and interesting research at Cambridge University,” says Kim Benard, associate dean of distinguished fellowships. “Her combination of scientific inquiry, humanistic approach, and creative spirit make her an ideal representative of MIT.”
The Churchill Scholarship is a highly competitive fellowship that annually offers 16 American students the opportunity to pursue a funded graduate degree in science, mathematics, or engineering at Churchill College within Cambridge University. The scholarship, which was established in 1963, honors former British Prime Minister Winston Churchill’s vision of U.S.-U.K. scientific exchange. Since 2017, two additional Kanders Churchill Scholarships have been awarded each year for studies in science policy.
MIT students interested in learning more about the Churchill Scholarship should contact Benard in MIT Career Advising and Professional Development.
Aligning AI with human values
“We need to both ensure humans reap AI’s benefits and that we don’t lose control of the technology,” says senior Audrey Lorvo.
Senior Audrey Lorvo is researching AI safety, which seeks to ensure increasingly intelligent AI models are reliable and can benefit humanity. The growing field focuses on technical challenges like robustness and AI alignment with human values, as well as societal concerns like transparency and accountability. Practitioners are also concerned with the potential existential risks associated with increasingly powerful AI tools.
“Ensuring AI isn’t misused or acts contrary to our intentions is increasingly important as we approach artificial general intelligence (AGI),” says Lorvo, a computer science, economics, and data science major. AGI describes the potential of artificial intelligence to match or surpass human cognitive capabilities.
An MIT Schwarzman College of Computing Social and Ethical Responsibilities of Computing (SERC) scholar, Lorvo looks closely at how AI might automate AI research and development processes and practices. A member of the Big Data research group, she’s investigating the social and economic implications of AI’s potential to accelerate research on itself, and how to communicate these ideas and potential impacts effectively to general audiences, including legislators, strategic advisors, and others.
Lorvo emphasizes the need to critically assess AI’s rapid advancements and their implications, ensuring organizations have proper frameworks and strategies in place to address risks. “We need to both ensure humans reap AI’s benefits and that we don’t lose control of the technology,” she says. “We need to do all we can to develop it safely.”
Her participation in efforts like the AI Safety Technical Fellowship reflects her investment in understanding the technical aspects of AI safety. The fellowship provides opportunities to review existing research on aligning AI development with considerations of potential human impact. “The fellowship helped me understand AI safety’s technical questions and challenges so I can potentially propose better AI governance strategies,” she says. According to Lorvo, companies on AI’s frontier continue to push boundaries, which means we’ll need to implement effective policies that prioritize human safety without impeding research.
Value from human engagement
When arriving at MIT, Lorvo knew she wanted to pursue a course of study that would allow her to work at the intersection of science and the humanities. The variety of offerings at the Institute made her choices difficult, however.
“There are so many ways to help advance the quality of life for individuals and communities,” she says, “and MIT offers so many different paths for investigation.”
Beginning with economics — a discipline she enjoys because of its focus on quantifying impact — Lorvo investigated math, political science, and urban planning before choosing Course 6-14.
“Professor Joshua Angrist’s econometrics classes helped me see the value in focusing on economics, while the data science and computer science elements appealed to me because of the growing reach and potential impact of AI,” she says. “We can use these tools to tackle some of the world’s most pressing problems and hopefully overcome serious challenges.”
Lorvo has also pursued concentrations in urban studies and planning and international development.
As she’s narrowed her focus, Lorvo finds she shares an outlook on humanity with other members of the MIT community like the MIT AI Alignment group, from whom she learned quite a bit about AI safety. “Students care about their marginal impact,” she says.
Marginal impact, the additional effect of a specific investment of time, money, or effort, is a way to measure how much a contribution adds to what is already being done, rather than focusing on the total impact. This can potentially influence where people choose to devote their resources, an idea that appeals to Lorvo.
“In a world of limited resources, a data-driven approach to solving some of our biggest challenges can benefit from a tailored approach that directs people to where they’re likely to do the most good,” she says. “If you want to maximize your social impact, reflecting on your career choice’s marginal impact can be very valuable.”
Lorvo also values MIT’s focus on educating the whole student and has taken advantage of opportunities to investigate disciplines like philosophy through MIT Concourse, a program that facilitates dialogue between science and the humanities. Concourse hopes participants gain guidance, clarity, and purpose for scientific, technical, and human pursuits.
Student experiences at the Institute
Lorvo invests her time outside the classroom in creating memorable experiences and fostering relationships with her classmates. “I’m fortunate that there’s space to balance my coursework, research, and club commitments with other activities, like weightlifting and off-campus initiatives,” she says. “There are always so many clubs and events available across the Institute.”
These opportunities to expand her worldview have challenged her beliefs and exposed her to new interest areas that have altered her life and career choices for the better. Lorvo, who is fluent in French, English, Spanish, and Portuguese, also applauds MIT for the international experiences it provides for students.
“I’ve interned in Santiago de Chile and Paris with MISTI and helped test a water vapor condensing chamber that we designed in a fall 2023 D-Lab class in collaboration with the Madagascar Polytechnic School and Tatirano NGO [nongovernmental organization],” she says, “and have enjoyed the opportunities to learn about addressing economic inequality through my International Development and D-Lab classes.”
As president of MIT’s Undergraduate Economics Association, Lorvo connects with other students interested in economics while continuing to expand her understanding of the field. She enjoys the relationships she’s building while also participating in the association’s events throughout the year. “Even as a senior, I’ve found new campus communities to explore and appreciate,” she says. “I encourage other students to continue exploring groups and classes that spark their interests throughout their time at MIT.”
After graduation, Lorvo wants to continue investigating AI safety and researching governance strategies that can help ensure AI’s safe and effective deployment.
“Good governance is essential to AI’s successful development and ensuring humanity can benefit from its transformative potential,” she says. “We must continue to monitor AI’s growth and capabilities as the technology continues to evolve.”
Understanding technology’s potential impacts on humanity, doing good, continually improving, and creating spaces where big ideas can see the light of day continue to drive Lorvo. Merging the humanities with the sciences animates much of what she does. “I always hoped to contribute to improving people’s lives, and AI represents humanity’s greatest challenge and opportunity yet,” she says. “I believe the AI safety field can benefit from people with interdisciplinary experiences like the kind I’ve been fortunate to gain, and I encourage anyone passionate about shaping the future to explore it.”
Eleven MIT faculty receive Presidential Early Career Awards
Faculty members and additional MIT alumni are among 400 scientists and engineers recognized for outstanding leadership potential.
Eleven MIT faculty, including nine from the School of Engineering and two from the School of Science, were awarded the Presidential Early Career Award for Scientists and Engineers (PECASE). More than 15 additional MIT alumni were also honored.
Established in 1996 by President Bill Clinton, the PECASE is awarded to scientists and engineers “who show exceptional potential for leadership early in their research careers.” The latest recipients were announced by the White House on Jan. 14 under President Joe Biden. Fourteen government agencies recommended researchers for the award.
The MIT faculty and alumni honorees are among 400 scientists and engineers recognized for innovation and scientific contributions.
Additional MIT alumni who were honored include: Elaheh Ahmadi ’20, MNG ’21; Ambika Bajpayee MNG ’07, PhD ’15; Katherine Bouman SM ’13, PhD ’17; Walter Cheng-Wan Lee ’95, MNG ’95, PhD ’05; Ismaila Dabo PhD ’08; Ying Diao SM ’10, PhD ’12; Eno Ebong ’99; Soheil Feizi-Khankandi SM ’10, PhD ’16; Mark Finlayson SM ’01, PhD ’12; Chelsea B. Finn ’14; Grace Xiang Gu SM ’14, PhD ’18; David Michael Isaacson PhD ’06, AF ’16; Lewei Lin ’05; Michelle Sander PhD ’12; Kevin Solomon SM ’08, PhD ’12; and Zhiting Tian PhD ’14.
Introducing the MIT Generative AI Impact Consortium
The consortium will bring researchers and industry together to focus on impact.
From crafting complex code to revolutionizing the hiring process, generative artificial intelligence is reshaping industries faster than ever before — pushing the boundaries of creativity, productivity, and collaboration across countless domains.
Enter the MIT Generative AI Impact Consortium, a collaboration between industry leaders and MIT’s top minds. As MIT President Sally Kornbluth highlighted last year, the Institute is poised to address the societal impacts of generative AI through bold collaborations. Building on this momentum and established through MIT’s Generative AI Week and impact papers, the consortium aims to harness AI’s transformative power for societal good, tackling challenges before they shape the future in unintended ways.
“Generative AI and large language models [LLMs] are reshaping everything, with applications stretching across diverse sectors,” says Anantha Chandrakasan, dean of the School of Engineering and MIT’s chief innovation and strategy officer, who leads the consortium. “As we push forward with newer and more efficient models, MIT is committed to guiding their development and impact on the world.”
Chandrakasan adds that the consortium’s vision is rooted in MIT’s core mission. “I am thrilled and honored to help advance one of President Kornbluth’s strategic priorities around artificial intelligence,” he says. “This initiative is uniquely MIT — it thrives on breaking down barriers, bringing together disciplines, and partnering with industry to create real, lasting impact. The collaborations ahead are something we’re truly excited about.”
Developing the blueprint for generative AI’s next leap
The consortium is guided by three pivotal questions, framed by Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and co-chair of the GenAI Dean’s oversight group, that go beyond AI’s technical capabilities and into its potential to transform industries and lives.
Generative AI continues to advance at lightning speed, but its future depends on building a solid foundation. “Everybody recognizes that large language models will transform entire industries, but there’s no strong foundation yet around design principles,” says Tim Kraska, associate professor of electrical engineering and computer science in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-faculty director of the consortium.
“Now is a perfect time to look at the fundamentals — the building blocks that will make generative AI more effective and safer to use,” adds Kraska.
"What excites me is that this consortium isn’t just academic research for the distant future — we’re working on problems where our timelines align with industry needs, driving meaningful progress in real time," says Vivek F. Farias, the Patrick J. McGovern (1959) Professor at the MIT Sloan School of Management, and co-faculty director of the consortium.
A “perfect match” of academia and industry
At the heart of the Generative AI Impact Consortium are six founding members: Analog Devices, The Coca-Cola Co., OpenAI, Tata Group, SK Telecom, and TWG Global. Together, they will work hand-in-hand with MIT researchers to accelerate breakthroughs and address industry-shaping problems.
The consortium taps into MIT’s expertise, working across schools and disciplines — led by MIT’s Office of Innovation and Strategy, in collaboration with the MIT Schwarzman College of Computing and all five of MIT’s schools.
“This initiative is the ideal bridge between academia and industry,” says Chandrakasan. “With companies spanning diverse sectors, the consortium brings together real-world challenges, data, and expertise. MIT researchers will dive into these problems to develop cutting-edge models and applications for these different domains.”
Industry partners: Collaborating on AI’s evolution
At the core of the consortium’s mission is collaboration — bringing MIT researchers and industry partners together to unlock generative AI’s potential while ensuring its benefits are felt across society.
Among the founding members is OpenAI, the creator of the generative AI chatbot ChatGPT.
“This type of collaboration between academics, practitioners, and labs is key to ensuring that generative AI evolves in ways that meaningfully benefit society,” says Anna Makanju, vice president of global impact at OpenAI, adding that OpenAI “is eager to work alongside MIT’s Generative AI Consortium to bridge the gap between cutting-edge AI research and the real-world expertise of diverse industries.”
The Coca-Cola Co. recognizes an opportunity to leverage AI innovation on a global scale. “We see a tremendous opportunity to innovate at the speed of AI and, leveraging The Coca-Cola Company's global footprint, make these cutting-edge solutions accessible to everyone,” says Pratik Thakar, global vice president and head of generative AI. “Both MIT and The Coca-Cola Company are deeply committed to innovation, while also placing equal emphasis on the legally and ethically responsible development and use of technology.”
For TWG Global, the consortium offers the ideal environment to share knowledge and drive advancements. “The strength of the consortium is its unique combination of industry leaders and academia, which fosters the exchange of valuable lessons, technological advancements, and access to pioneering research,” says Drew Cukor, head of data and artificial intelligence transformation. Cukor adds that TWG Global “is keen to share its insights and actively engage with leading executives and academics to gain a broader perspective of how others are configuring and adopting AI, which is why we believe in the work of the consortium.”
The Tata Group views the collaboration as a platform to address some of AI’s most pressing challenges. “The consortium enables Tata to collaborate, share knowledge, and collectively shape the future of generative AI, particularly in addressing urgent challenges such as ethical considerations, data privacy, and algorithmic biases,” says Aparna Ganesh, vice president of Tata Sons Ltd.
Similarly, SK Telecom sees its involvement as a launchpad for growth and innovation. “Joining the consortium presents a significant opportunity for SK Telecom to enhance its AI competitiveness in core business areas, including AI agents, AI semiconductors, data centers (AIDC), and physical AI,” says Suk-geun (SG) Chung, SK Telecom executive vice president and chief AI global officer. “By collaborating with MIT and leveraging the SK AI R&D Center as a technology control tower, we aim to forecast next-generation generative AI technology trends, propose innovative business models, and drive commercialization through academic-industrial collaboration.”
Alan Lee, chief technology officer of Analog Devices (ADI), highlights how the consortium bridges key knowledge gaps for both his company and the industry at large. “ADI can’t hire a world-leading expert in every single corner case, but the consortium will enable us to access top MIT researchers and get them involved in addressing problems we care about, as we also work together with others in the industry towards common goals,” he says.
The consortium will host interactive workshops and discussions to identify and prioritize challenges. “It’s going to be a two-way conversation, with the faculty coming together with industry partners, but also industry partners talking with each other,” says Georgia Perakis, the John C Head III Dean (Interim) of the MIT Sloan School of Management and professor of operations management, operations research and statistics, who serves alongside Huttenlocher as co-chair of the GenAI Dean’s oversight group.
Preparing for the AI-enabled workforce of the future
With AI poised to disrupt industries and create new opportunities, one of the consortium’s core goals is to guide that change in a way that benefits both businesses and society.
“When the first commercial digital computers were introduced [the UNIVAC was delivered to the U.S. Census Bureau in 1951], people were worried about losing their jobs,” says Kraska. “And yes, jobs like large-scale, manual data entry clerks and human ‘computers,’ people tasked with doing manual calculations, largely disappeared over time. But the people impacted by those first computers were trained to do other jobs.”
The consortium aims to play a key role in preparing the workforce of tomorrow by educating global business leaders and employees on generative AI’s evolving uses and applications. With the pace of innovation accelerating, leaders face a flood of information and uncertainty.
“When it comes to educating leaders about generative AI, it’s about helping them navigate the complexity of the space right now, because there’s so much hype and hundreds of papers published daily,” says Kraska. “The hard part is understanding which developments could actually have a chance of changing the field and which are just tiny improvements. There’s a kind of FOMO [fear of missing out] for leaders that we can help reduce.”
Defining success: Shared goals for generative AI impact
Success within the initiative is defined by shared progress, open innovation, and mutual growth. “Consortium participants recognize, I think, that when I share my ideas with you, and you share your ideas with me, we’re both fundamentally better off,” explains Farias. “Progress on generative AI is not zero-sum, so it makes sense for this to be an open-source initiative.”
While participants may approach success from different angles, they share a common goal of advancing generative AI for broad societal benefit. “There will be many success metrics,” says Perakis. “We’ll educate students, who will be networking with companies. Companies will come together and learn from each other. Business leaders will come to MIT and have discussions that will help all of us, not just the leaders themselves.”
For Analog Devices’ Alan Lee, success is measured in tangible improvements that drive efficiency and product innovation: “For us at ADI, it’s a better, faster quality of experience for our customers, and that could mean better products. It could mean faster design cycles, faster verification cycles, and faster tuning of equipment that we already have or that we’re going to develop for the future. But beyond that, we want to help the world be a better, more efficient place.”
Ganesh highlights success through the lens of real-world application. “Success will also be defined by accelerating AI adoption within Tata companies, generating actionable knowledge that can be applied in real-world scenarios, and delivering significant advantages to our customers and stakeholders,” she says.
Generative AI is no longer confined to isolated research labs — it’s driving innovation across industries and disciplines. At MIT, the technology has become a campus-wide priority, connecting researchers, students, and industry leaders to solve complex challenges and uncover new opportunities. “It’s truly an MIT initiative,” says Farias, “one that’s much larger than any individual or department on campus.”
David Darmofal SM ’91, PhD ’93 named vice chancellor for undergraduate and graduate education
Longtime AeroAstro professor brings deep experience with academic and student life.
David L. Darmofal SM ’91, PhD ’93 will serve as MIT’s next vice chancellor for undergraduate and graduate education, effective Feb. 17. Chancellor Melissa Nobles announced Darmofal’s appointment today in a letter to the MIT community.
Darmofal succeeds Ian A. Waitz, who stepped down in May to become MIT’s vice president for research, and Daniel E. Hastings, who has been serving in an interim capacity.
A creative innovator in research-based teaching and learning, Darmofal is the Jerome C. Hunsaker Professor of Aeronautics and Astronautics. Since 2017, he and his wife Claudia have served as heads of house at The Warehouse, an MIT graduate residence.
“Dave knows the ins and outs of education and student life at MIT in a way that few do,” Nobles says. “He’s a head of house, an alum, and the parent of a graduate. Dave will bring decades of first-hand experience to the role.”
“An MIT education is incredibly special, combining passionate students, staff, and faculty striving to use knowledge and discovery to drive positive change for the world,” says Darmofal. “I am grateful for this opportunity to play a part in supporting MIT’s academic mission.”
Darmofal’s leadership experience includes service from 2008 to 2011 as associate and interim department head in the Department of Aeronautics and Astronautics, overseeing undergraduate and graduate programs. He was the AeroAstro director of digital education from 2020 to 2022, including leading the department’s response to remote learning during the Covid-19 pandemic. He currently serves as director of the MIT Aerospace Computational Science and Engineering Laboratory and is a member of the Center for Computational Science and Engineering (CCSE) in the MIT Stephen A. Schwarzman College of Computing.
As an MIT faculty member and administrator, Darmofal has been involved in designing more flexible degree programs, developing open digital-learning opportunities, creating first-year advising seminars, and enhancing professional and personal development opportunities for students. He also contributed his expertise in engineering pedagogy to the development of the Schwarzman College of Computing’s Common Ground efforts, to address the need for computing education across many disciplines.
“MIT students, staff, and faculty share a common bond as problem solvers. Talk to any of us about an MIT education, and you will get an earful on not only what we need to do better, but also how we can actually do it. The Office of the Vice Chancellor can help bring our community of problem solvers together to enable improvements in our academics,” says Darmofal.
Overseeing the academic arm of the Chancellor’s Office, the vice chancellor’s portfolio is extensive. Darmofal will lead professionals across more than a dozen units, covering areas such as recruitment and admissions, financial aid, student systems, advising, professional and career development, pedagogy, experiential learning, and support for MIT’s more than 100 graduate programs. He will also work collaboratively with many of MIT’s student organizations and groups, including with the leaders of the Undergraduate Association and the Graduate Student Council, and administer the relationship with the graduate student union.
“Dave will be a critical part of my office’s efforts to strengthen and expand critical connections across all areas of student life and learning,” Nobles says. She credits the search advisory group, co-chaired by professors Laurie Boyer and Will Tisdale, with setting the right tenor for such an important role and leading a thorough, inclusive process.
Darmofal’s research is focused on computational methods for partial differential equations, especially fluid dynamics. He earned his SM and PhD degrees in aeronautics and astronautics in 1991 and 1993, respectively, from MIT, and his BS in aerospace engineering in 1989 from the University of Michigan. Prior to joining MIT in 1998, he was an assistant professor in the Department of Aerospace Engineering at Texas A&M University from 1995 to 1998. Currently, he is the chair of AeroAstro’s Undergraduate Committee and the graduate officer for the CCSE PhD program.
“I want to echo something that Dan Hastings said recently,” Darmofal says. “We have a lot to be proud of when it comes to an MIT education. It’s more accessible than it has ever been. It’s innovative, with unmatched learning opportunities here and around the world. It’s home to academic research labs that attract the most talented scholars, creators, experimenters, and engineers. And ultimately, it prepares graduates who do good.”
Neural network artificial intelligence models used in applications like medical image processing and speech recognition perform operations on hugely complex data structures that require an enormous amount of computation to process. This is one reason deep-learning models consume so much energy.
To improve the efficiency of AI models, MIT researchers created an automated system that enables developers of deep learning algorithms to simultaneously take advantage of two types of data redundancy. This reduces the amount of computation, bandwidth, and memory storage needed for machine learning operations.
Existing techniques for optimizing algorithms can be cumbersome and typically only allow developers to capitalize on either sparsity or symmetry — two different types of redundancy that exist in deep learning data structures.
By enabling a developer to build an algorithm from scratch that takes advantage of both redundancies at once, the MIT researchers’ approach boosted the speed of computations by nearly 30 times in some experiments.
Because the system utilizes a user-friendly programming language, it could optimize machine-learning algorithms for a wide range of applications. The system could also help scientists who are not experts in deep learning but want to improve the efficiency of AI algorithms they use to process data. In addition, the system could have applications in scientific computing.
“For a long time, capturing these data redundancies has required a lot of implementation effort. Instead, a scientist can tell our system what they would like to compute in a more abstract way, without telling the system exactly how to compute it,” says Willow Ahrens, an MIT postdoc and co-author of a paper on the system, which will be presented at the International Symposium on Code Generation and Optimization.
She is joined on the paper by lead author Radha Patel ’23, SM ’24 and senior author Saman Amarasinghe, a professor in the Department of Electrical Engineering and Computer Science (EECS) and a principal researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL).
Cutting out computation
In machine learning, data are often represented and manipulated as multidimensional arrays known as tensors. A tensor is like a matrix, which is a rectangular array of values arranged on two axes, rows and columns. But unlike a two-dimensional matrix, a tensor can have many dimensions, or axes, making tensors more difficult to manipulate.
Deep-learning models perform operations on tensors using repeated matrix multiplication and addition — this process is how neural networks learn complex patterns in data. The sheer volume of calculations that must be performed on these multidimensional data structures requires an enormous amount of computation and energy.
But because of the way data in tensors are arranged, engineers can often boost the speed of a neural network by cutting out redundant computations.
For instance, if a tensor represents user review data from an e-commerce site, since not every user reviewed every product, most values in that tensor are likely zero. This type of data redundancy is called sparsity. A model can save time and computation by only storing and operating on non-zero values.
In addition, sometimes a tensor is symmetric, which means the top half and bottom half of the data structure are equal. In this case, the model only needs to operate on one half, reducing the amount of computation. This type of data redundancy is called symmetry.
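To make the two redundancies concrete, consider a small example. The following NumPy sketch (an illustration written for this explanation, not the researchers’ code) stores only the nonzero entries in the upper half of a symmetric matrix, yet still reproduces the full matrix-vector product:

```python
# Minimal NumPy sketch (an illustration, not the researchers' system) of
# exploiting sparsity and symmetry together in a matrix-vector product.
import numpy as np

n = 6
A = np.zeros((n, n))
A[0, 2] = A[2, 0] = 3.0      # symmetric: A[i, j] == A[j, i]
A[1, 4] = A[4, 1] = -2.0
A[3, 3] = 5.0

# Sparsity: store only nonzero entries. Symmetry: store only the upper half.
stored = [(i, j, A[i, j])
          for i in range(n) for j in range(i, n) if A[i, j] != 0.0]

x = np.arange(1.0, n + 1.0)
y = np.zeros(n)
for i, j, v in stored:
    y[i] += v * x[j]          # contribution of the stored entry A[i, j]
    if i != j:
        y[j] += v * x[i]      # mirrored contribution of A[j, i], never stored

assert np.allclose(y, A @ x)  # matches the full dense computation
```

Writing and maintaining this kind of bookkeeping by hand, especially for tensors with many dimensions, is exactly the implementation effort that an optimizing compiler can automate.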
“But when you try to capture both of these optimizations, the situation becomes quite complex,” Ahrens says.
To simplify the process, she and her collaborators built a new compiler, which is a computer program that translates complex code into a simpler language that can be processed by a machine. Their compiler, called SySTeC, can optimize computations by automatically taking advantage of both sparsity and symmetry in tensors.
They began the process of building SySTeC by identifying three key optimizations they can perform using symmetry.
First, if the algorithm’s output tensor is symmetric, then it only needs to compute one half of it. Second, if the input tensor is symmetric, then the algorithm only needs to read one half of it. Finally, if intermediate results of tensor operations are symmetric, the algorithm can skip redundant computations.
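As an illustration of the first optimization, consider a product that is symmetric by construction, such as a matrix multiplied by its own transpose. A hand-written sketch (again, an illustration rather than SySTeC output) only needs to compute the upper half explicitly:

```python
# Hedged sketch of the first optimization: when the output is known to be
# symmetric (here C = B @ B.T), compute only its upper triangle and mirror it.
import numpy as np

B = np.random.default_rng(0).standard_normal((4, 3))
n = B.shape[0]

C = np.zeros((n, n))
for i in range(n):
    for j in range(i, n):        # j >= i: roughly half the entries
        C[i, j] = B[i] @ B[j]    # dot product of rows i and j
        C[j, i] = C[i, j]        # fill the lower half by symmetry

assert np.allclose(C, B @ B.T)   # same result as the full computation
```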
Simultaneous optimizations
To use SySTeC, a developer inputs their program and the system automatically optimizes their code for all three types of symmetry. Then the second phase of SySTeC performs additional transformations to only store non-zero data values, optimizing the program for sparsity.
In the end, SySTeC generates ready-to-use code.
“In this way, we get the benefits of both optimizations. And the interesting thing about symmetry is, as your tensor has more dimensions, you can get even more savings on computation,” Ahrens says.
The researchers demonstrated speedups of nearly a factor of 30 with code generated automatically by SySTeC.
Because the system is automated, it could be especially useful in situations where a scientist wants to process data using an algorithm they are writing from scratch.
In the future, the researchers want to integrate SySTeC into existing sparse tensor compiler systems to create a seamless interface for users. In addition, they would like to use it to optimize code for more complicated programs.
This work is funded, in part, by Intel, the National Science Foundation, the Defense Advanced Research Projects Agency, and the Department of Energy.
With generative AI, MIT chemists quickly calculate 3D genomic structures
A new approach, which takes minutes rather than days, predicts how a specific DNA sequence will arrange itself in the cell nucleus.
Every cell in your body contains the same genetic sequence, yet each cell expresses only a subset of those genes. These cell-specific gene expression patterns, which ensure that a brain cell is different from a skin cell, are partly determined by the three-dimensional structure of the genetic material, which controls the accessibility of each gene.
MIT chemists have now come up with a new way to determine those 3D genome structures, using generative artificial intelligence. Their technique can predict thousands of structures in just minutes, making it much speedier than existing experimental methods for analyzing the structures.
Using this technique, researchers could more easily study how the 3D organization of the genome affects individual cells’ gene expression patterns and functions.
“Our goal was to try to predict the three-dimensional genome structure from the underlying DNA sequence,” says Bin Zhang, an associate professor of chemistry and the senior author of the study. “Now that we can do that, which puts this technique on par with the cutting-edge experimental techniques, it can really open up a lot of interesting opportunities.”
MIT graduate students Greg Schuette and Zhuohan Lao are the lead authors of the paper, which appears today in Science Advances.
From sequence to structure
Inside the cell nucleus, DNA and proteins form a complex called chromatin, which has several levels of organization, allowing cells to cram 2 meters of DNA into a nucleus that is only one-hundredth of a millimeter in diameter. Long strands of DNA wind around proteins called histones, giving rise to a structure somewhat like beads on a string.
Chemical tags known as epigenetic modifications can be attached to DNA at specific locations, and these tags, which vary by cell type, affect the folding of the chromatin and the accessibility of nearby genes. These differences in chromatin conformation help determine which genes are expressed in different cell types, or at different times within a given cell.
Over the past 20 years, scientists have developed experimental techniques for determining chromatin structures. One widely used technique, known as Hi-C, works by linking together neighboring DNA strands in the cell’s nucleus. Researchers can then determine which segments are located near each other by shredding the DNA into many tiny pieces and sequencing it.
This method can be used on large populations of cells to calculate an average structure for a section of chromatin, or on single cells to determine structures within that specific cell. However, Hi-C and similar techniques are labor-intensive, and it can take about a week to generate data from one cell.
To overcome those limitations, Zhang and his students developed a model that takes advantage of recent advances in generative AI to create a fast, accurate way to predict chromatin structures in single cells. The AI model that they designed can quickly analyze DNA sequences and predict the chromatin structures that those sequences might produce in a cell.
“Deep learning is really good at pattern recognition,” Zhang says. “It allows us to analyze very long DNA segments, thousands of base pairs, and figure out what is the important information encoded in those DNA base pairs.”
ChromoGen, the model that the researchers created, has two components. The first component, a deep learning model taught to “read” the genome, analyzes the information encoded in the underlying DNA sequence and chromatin accessibility data, the latter of which is widely available and cell type-specific.
The second component is a generative AI model that predicts physically accurate chromatin conformations, having been trained on more than 11 million chromatin conformations. These data were generated from experiments using Dip-C (a variant of Hi-C) on 16 cells from a line of human B lymphocytes.
When the two components are integrated, the first informs the generative model of how the cell type-specific environment influences the formation of different chromatin structures, a scheme that effectively captures sequence-structure relationships. For each sequence, the researchers use their model to generate many possible structures. That’s because DNA is a very disordered molecule, so a single DNA sequence can give rise to many different possible conformations.
“A major complicating factor of predicting the structure of the genome is that there isn’t a single solution that we’re aiming for. There’s a distribution of structures, no matter what portion of the genome you’re looking at. Predicting that very complicated, high-dimensional statistical distribution is something that is incredibly challenging to do,” Schuette says.
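In schematic form, the pipeline takes a sequence and its accessibility data, encodes them, and then samples many conformations. The sketch below uses entirely hypothetical function names and a toy random walk in place of the real neural networks, purely to illustrate the one-sequence-to-many-structures design:

```python
# Schematic of the two-component design described above. Every name here is
# a hypothetical stand-in; this is not ChromoGen's actual code or API.
import numpy as np

def encode_region(dna_sequence, accessibility):
    """Stand-in for component one, the model that 'reads' the genome:
    maps a DNA sequence plus cell-type-specific accessibility data to a
    feature vector summarizing the region."""
    seed = hash((dna_sequence, tuple(accessibility))) % (2**32)
    return np.random.default_rng(seed).standard_normal(16)

def sample_conformation(features, rng):
    """Stand-in for component two, the generative model: draws one 3D
    conformation (a chain of bead coordinates) conditioned on the features.
    A biased random walk stands in for the trained network."""
    steps = rng.standard_normal((100, 3)) + 0.05 * features[:3]
    return np.cumsum(steps, axis=0)

# One sequence, many structures: because DNA is disordered, the model is
# asked for a whole distribution of conformations, not a single answer.
rng = np.random.default_rng(42)
features = encode_region("ACGT" * 250, accessibility=(1.0,) * 10)
conformations = [sample_conformation(features, rng) for _ in range(1000)]
```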
Rapid analysis
Once trained, the model can generate predictions on a much faster timescale than Hi-C or other experimental techniques.
“Whereas you might spend six months running experiments to get a few dozen structures in a given cell type, you can generate a thousand structures in a particular region with our model in 20 minutes on just one GPU,” Schuette says.
After training their model, the researchers used it to generate structure predictions for more than 2,000 DNA sequences, then compared them to the experimentally determined structures for those sequences. They found that the structures generated by the model were the same or very similar to those seen in the experimental data.
“We typically look at hundreds or thousands of conformations for each sequence, and that gives you a reasonable representation of the diversity of the structures that a particular region can have,” Zhang says. “If you repeat your experiment multiple times, in different cells, you will very likely end up with a very different conformation. That’s what our model is trying to predict.”
The researchers also found that the model could make accurate predictions for data from cell types other than the one it was trained on. This suggests that the model could be useful for analyzing how chromatin structures differ between cell types, and how those differences affect their function. The model could also be used to explore different chromatin states that can exist within a single cell, and how those changes affect gene expression.
“ChromoGen provides a new framework for AI-driven discovery of genome folding principles and demonstrates that generative AI can bridge genomic and epigenomic features with 3D genome structure, pointing to future work on studying the variation of genome structure and function across a broad range of biological contexts,” says Jian Ma, a professor of computational biology at Carnegie Mellon University, who was not involved in the research.
Another possible application would be to explore how mutations in a particular DNA sequence change the chromatin conformation, which could shed light on how such mutations may cause disease.
“There are a lot of interesting questions that I think we can address with this type of model,” Zhang says.
The researchers have made all of their data and the model available to others who wish to use it.
The research was funded by the National Institutes of Health.
MIT engineers help multirobot systems stay in the safety zone
New research could improve the safety of drone shows, warehouse robots, and self-driving cars.
Drone shows are an increasingly popular form of large-scale light display. These shows incorporate hundreds to thousands of airborne bots, each programmed to fly in paths that together form intricate shapes and patterns across the sky. When they go as planned, drone shows can be spectacular. But when one or more drones malfunction, as has happened recently in Florida, New York, and elsewhere, they can be a serious hazard to spectators on the ground.
Drone show accidents highlight the challenges of maintaining safety in what engineers call “multiagent systems” — systems of multiple coordinated, collaborative, and computer-programmed agents, such as robots, drones, and self-driving cars.
Now, a team of MIT engineers has developed a training method for multiagent systems that can guarantee their safe operation in crowded environments. The researchers found that once the method is used to train a small number of agents, the safety margins and controls learned by those agents can automatically scale to any larger number of agents, in a way that ensures the safety of the system as a whole.
In real-world demonstrations, the team trained a small number of palm-sized drones to safely carry out different objectives, from simultaneously switching positions midflight to landing on designated moving vehicles on the ground. In simulations, the researchers showed that the same programs, trained on a few drones, could be copied and scaled up to thousands of drones, enabling a large system of agents to safely accomplish the same tasks.
“This could be a standard for any application that requires a team of agents, such as warehouse robots, search-and-rescue drones, and self-driving cars,” says Chuchu Fan, associate professor of aeronautics and astronautics at MIT. “This provides a shield, or safety filter, saying each agent can continue with their mission, and we’ll tell you how to be safe.”
Fan and her colleagues report on their new method in a study appearing this month in the journal IEEE Transactions on Robotics. The study’s co-authors are MIT graduate students Songyuan Zhang and Oswin So as well as former MIT postdoc Kunal Garg, who is now an assistant professor at Arizona State University.
Mall margins
When engineers design for safety in any multiagent system, they typically have to consider the potential paths of every single agent with respect to every other agent in the system. This pair-wise path-planning is a time-consuming and computationally expensive process. And even then, safety is not guaranteed.
“In a drone show, each drone is given a specific trajectory — a set of waypoints and a set of times — and then they essentially close their eyes and follow the plan,” says Zhang, the study’s lead author. “Since they only know where they have to be and at what time, if there are unexpected things that happen, they don’t know how to adapt.”
The MIT team looked instead to develop a method to train a small number of agents to maneuver safely, in a way that could efficiently scale to any number of agents in the system. And, rather than plan specific paths for individual agents, the method would enable agents to continually map their safety margins, or boundaries beyond which they might be unsafe. An agent could then take any number of paths to accomplish its task, as long as it stays within its safety margins.
In some sense, the team says the method is similar to how humans intuitively navigate their surroundings.
“Say you’re in a really crowded shopping mall,” So explains. “You don’t care about anyone beyond the people who are in your immediate neighborhood, like the 5 meters surrounding you, in terms of getting around safely and not bumping into anyone. Our work takes a similar local approach.”
Safety barrier
In their new study, the team presents their method, GCBF+, which stands for “Graph Control Barrier Function.” A barrier function is a mathematical term used in robotics that calculates a sort of safety barrier, or a boundary beyond which an agent has a high probability of being unsafe. For any given agent, this safety zone can change moment to moment, as the agent moves among other agents that are themselves moving within the system.
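In textbook control-theory notation (a standard formulation shown here for intuition, not a quotation from the team’s paper), the barrier function h scores how safe a state x is, and the controller must choose actions u that keep that score from decaying through zero:

```latex
% Textbook control barrier function condition, shown for intuition only
% (not quoted from the paper). The safe set is where h(x) >= 0, and the
% control input u must keep h from decaying through zero along the dynamics.
S = \{\, x : h(x) \ge 0 \,\}, \qquad
\sup_{u}\; \dot{h}(x, u) \;\ge\; -\alpha\bigl(h(x)\bigr)
```

Here alpha is a function that allows h to shrink only gradually as an agent approaches the edge of its safety zone, which is what lets the zone adapt from moment to moment as other agents move.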
When designers calculate barrier functions for any one agent in a multiagent system, they typically have to take into account the potential paths and interactions with every other agent in the system. Instead, the MIT team’s method calculates the safety zones of just a handful of agents, in a way that is accurate enough to represent the dynamics of many more agents in the system.
“Then we can sort of copy-paste this barrier function for every single agent, and then suddenly we have a graph of safety zones that works for any number of agents in the system,” So says.
To calculate an agent’s barrier function, the team’s method first takes into account an agent’s “sensing radius,” or how much of the surroundings an agent can observe, depending on its sensor capabilities. Just as in the shopping mall analogy, the researchers assume that the agent only cares about the agents that are within its sensing radius, in terms of keeping safe and avoiding collisions with those agents.
Then, using computer models that capture an agent’s particular mechanical capabilities and limits, the team simulates a “controller,” or a set of instructions for how the agent and a handful of similar agents should move around. They then run simulations of multiple agents moving along certain trajectories, and record whether and how they collide or otherwise interact.
“Once we have these trajectories, we can compute some laws that we want to minimize, like say, how many safety violations we have in the current controller,” Zhang says. “Then we update the controller to be safer.”
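That simulate-evaluate-update cycle can be sketched in a few lines. The toy example below uses made-up two-dimensional dynamics and a crude random search in place of the paper’s learning method, purely to show the shape of the loop:

```python
# A toy sketch of the simulate-evaluate-update loop described above; the
# dynamics and random-search update are stand-ins, not the GCBF+ method.
import numpy as np

rng = np.random.default_rng(0)
theta = 0.1 * rng.standard_normal(3)     # toy controller parameters

def rollout(theta, seed, n_agents=8, steps=50, safe_dist=0.5):
    """Simulate agents pulled toward the origin by the controller and count
    how often any pair gets closer than the safety distance."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-2, 2, size=(n_agents, 2))
    violations = 0
    for _ in range(steps):
        vel = -theta[0] * pos + theta[1:] + 0.01 * rng.standard_normal((n_agents, 2))
        pos = pos + 0.1 * vel
        dists = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
        pairs = dists[np.triu_indices(n_agents, k=1)]
        violations += int((pairs < safe_dist).sum())
    return violations

# Update the controller to reduce safety violations ("update the controller
# to be safer"), here via a crude random search over parameters.
best = rollout(theta, seed=1)
for _ in range(200):
    candidate = theta + 0.05 * rng.standard_normal(3)
    score = rollout(candidate, seed=1)    # same seed: fair comparison
    if score <= best:
        theta, best = candidate, score
```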
In this way, a controller can be programmed into actual agents, which would enable them to continually map their safety zone based on any other agents they can sense in their immediate surroundings, and then move within that safety zone to accomplish their task.
“Our controller is reactive,” Fan says. “We don’t preplan a path beforehand. Our controller is constantly taking in information about where an agent is going, what is its velocity, how fast other drones are going. It’s using all this information to come up with a plan on the fly and it’s replanning every time. So, if the situation changes, it’s always able to adapt to stay safe.”
The team demonstrated GCBF+ on a system of eight Crazyflies — lightweight, palm-sized quadrotor drones that they tasked with flying and switching positions in midair. If the drones were to do so by taking the straightest path, they would surely collide. But after training with the team’s method, the drones were able to make real-time adjustments to maneuver around each other, keeping within their respective safety zones, to successfully switch positions on the fly.
In similar fashion, the team tasked the drones with flying around, then landing on specific Turtlebots — wheeled robots with shell-like tops. The Turtlebots drove continuously around in a large circle, and the Crazyflies were able to avoid colliding with each other as they made their landings.
“Using our framework, we only need to give the drones their destinations instead of the whole collision-free trajectory, and the drones can figure out how to arrive at their destinations without collision themselves,” says Fan, who envisions the method could be applied to any multiagent system to guarantee its safety, including collision avoidance systems in drone shows, warehouse robots, autonomous driving vehicles, and drone delivery systems.
This work was partly supported by the U.S. National Science Foundation, MIT Lincoln Laboratory under the Safety in Aerobatic Flight Regimes (SAFR) program, and the Defence Science and Technology Agency of Singapore.
From bench to bedside, and beyond
In the United States and abroad, Matthew Dolan ’81 has served as a leader in immunology and virology.
In medical school, Matthew Dolan ’81 briefly considered specializing in orthopedic surgery because of the materials science nature of the work — but he soon realized that he didn’t have the innate skills required for that type of work.
“I’ll be honest with you — I can’t parallel park,” he jokes. “You can consider a lot of things, but if you find the things that you’re good at and that excite you, you can hopefully move forward with those.”
Dolan certainly has, tackling problems from bench to bedside and beyond. Both in the United States and abroad through the U.S. Air Force, Dolan has emerged as a leader in immunology and virology, and has served as director of the Defense Institute for Medical Operations. He’s worked on everything from foodborne illnesses and Ebola to biological weapons and Covid-19, and has even been a guest speaker on NPR’s “Science Friday.”
“This is fun and interesting, and I believe that, and I work hard to convey that — and it’s contagious,” he says. “You can affect people with that excitement.”
Pieces of the puzzle
Dolan fondly recalls his years at MIT, and is still in touch with many of the “brilliant” and “interesting” friends he made while in Cambridge.
He notes that the challenges that were the most rewarding in his career were also the ones that MIT had uniquely prepared him for. Dolan, a Course 7 major, naturally took many classes outside of biology as part of his undergraduate studies: organic chemistry proved foundational for understanding toxicology when he later studied chemical weapons, and outbreaks of pathogens like Legionella, which causes pneumonia and can spread through water systems such as ice machines or air conditioners, are solved at the interface between public health and ecology.
“I learned that learning can be a high-intensity experience,” Dolan recalls. “You can be aggressive in your learning; you can learn and excel in a wide variety of things and gather up all the knowledge and knowledgeable people to work together towards solutions.”
Dolan, for example, worked in the Amazon Basin in Peru on a public health crisis of a sharp rise in childhood mortality due to malaria. The cause was a few degrees removed from the immediate problem: human agriculture had affected the Amazon’s tributaries, leading to still and stagnant water where before there had been rushing streams and rivers. This change in the environment allowed a certain mosquito species of “avid human biters” to thrive.
“It can be helpful and important for some people to have a really comprehensive and contextual view of scientific problems and biological problems,” he says. “It’s very rewarding to put the pieces in a puzzle like that together.”
Choosing to serve
Dolan says a key to finding meaning in his work, especially during difficult times, is a sentiment from Alsatian polymath and Nobel Peace Prize winner Albert Schweitzer: “The only ones among you who will be really happy are those who will have sought and found how to serve.”
One of Dolan’s early formative experiences was working in the heart of the HIV/AIDS epidemic, at a time when there was no effective treatment. No matter how hard he worked, the patients would still die.
“Failure is not an option — unless you have to fail. You can’t let the failures destroy you,” he says. “There are a lot of other battles out there, and it’s self-indulgent to ignore them and focus on your woe.”
Lasting impacts
Dolan couldn’t pick a favorite country, but notes that he’s always impressed seeing how people value the chance to excel with science and medicine when offered resources and respect. Ultimately, everyone he’s worked with, no matter their differences, was committed to solving problems and improving lives.
Dolan worked in Russia after the Berlin Wall fell, on HIV/AIDS in Moscow and tuberculosis in the Russian Far East. Although relations with Russia are currently tense, to say the least, Dolan remains optimistic for a brighter future.
“People that were staunch adversaries can go on to do well together,” he says. “Sometimes, peace leads to partnership. Remembering that it was once possible gives me great hope.”
Dolan understands that the most lasting impact he has had is, likely, teaching: Time marches on, and discoveries can be lost to history, but teaching and training people continues and propagates. In addition to guiding the next generation of health-care specialists, Dolan also developed programs in laboratory biosafety and biosecurity with the U.S. departments of State and Defense, and taught those programs around the world.
“Working in prevention gives you the chance to take care of process problems before they become people problems — patient care problems,” he says. “I have been so impressed with the courageous and giving people that have worked with me.”
MIT spinout Gradiant reduces companies’ water use and waste by billions of gallons each day
The company builds water recycling, treatment, and purification solutions for some of the world’s largest brands.
When it comes to water use, most of us think of the water we drink. But industrial uses for things like manufacturing account for billions of gallons of water each day. For instance, making a single iPhone, by one estimate, requires more than 3,000 gallons.
Gradiant is working to reduce the world’s industrial water footprint. Founded by a team from MIT, Gradiant offers water recycling, treatment, and purification solutions to some of the largest companies on Earth, including Coca Cola, Tesla, and the Taiwan Semiconductor Manufacturing Company. By serving as an end-to-end water company, Gradiant says it helps companies reuse 2 billion gallons of water each day and saves another 2 billion gallons of fresh water from being withdrawn.
The company’s mission is to preserve water for generations to come in the face of rising global demand.
“We work on both ends of the water spectrum,” Gradiant co-founder and CEO Anurag Bajpayee SM ’08, PhD ’12 says. “We work with ultracontaminated water, and we can also provide ultrapure water for use in areas like chip fabrication. Our specialty is in the extreme water challenges that can’t be solved with traditional technologies.”
For each customer, Gradiant builds tailored water treatment solutions that combine chemical treatments with membrane filtration and biological process technologies, leveraging a portfolio of patents to drastically cut water usage and waste.
“Before Gradiant, 40 million liters of water would be used in the chip-making process. It would all be contaminated and treated, and maybe 30 percent would be reused,” explains Gradiant co-founder and COO Prakash Govindan PhD ’12. “We have the technology to recycle, in some cases, 99 percent of the water. Now, instead of consuming 40 million liters, chipmakers only need to consume 400,000 liters, which is a huge shift in the water footprint of that industry. And this is not just with semiconductors. We’ve done this in food and beverage, we’ve done this in renewable energy, we’ve done this in pharmaceutical drug production, and several other areas.”
Learning the value of water
Govindan grew up in a part of India that experienced a years-long drought beginning when he was 10. Without tap water, one of Govindan’s chores was to haul water up the stairs of his apartment complex each time a truck delivered it.
“However much water my brother and I could carry was how much we had for the week,” Govindan recalls. “I learned the value of water the hard way.”
Govindan attended the Indian Institute of Technology as an undergraduate, and when he came to MIT for his PhD, he sought out the groups working on water challenges. He began working on a water treatment method called carrier gas extraction for his PhD under Gradiant co-founder and MIT Professor John Lienhard.
Bajpayee also worked on water treatment methods at MIT, and the pair worked with the Deshpande Center to derisk their technologies. After brief stints as postdocs at MIT, the researchers licensed their work and founded Gradiant in 2013.
Carrier gas extraction became Gradiant’s first proprietary technology. The founders began by treating wastewater created by oil and gas wells, landing a Texas company as their first partner. But Gradiant gradually expanded to solving water challenges in power generation, mining, textiles, and refineries. Then the founders noticed opportunities in industries like electronics, semiconductors, food and beverage, and pharmaceuticals. Today, oil and gas wastewater treatment makes up a small percentage of Gradiant’s work.
As the company expanded, it added technologies to its portfolio, patenting new water treatment methods around reverse osmosis, selective contaminant extraction, and free radical oxidation. Gradiant has also created a digital system that uses AI to measure, predict, and control water treatment facilities.
“The advantage Gradiant has over every other water company is that R&D is in our DNA,” Govindan says, noting Gradiant has a world-class research lab at its headquarters in Boston. “At MIT, we learned how to do cutting-edge technology development, and we never let go of that.”
The founders compare their suite of technologies to LEGO bricks they can mix and match depending on a customer’s water needs. Gradiant has built more than 2,500 of these end-to-end systems for customers around the world.
“Our customers aren’t water companies; they are industrial clients like semiconductor manufacturers, drug companies, and food and beverage companies,” Bajpayee says. “They aren’t about to start operating a water treatment plant. They look at us as their water partner who can take care of the whole water problem.”
Continuing innovation
The founders say Gradiant has been roughly doubling its revenue each year over the last five years, and it’s continuing to add technologies to its platform. For instance, Gradiant recently developed a critical minerals recovery solution to extract materials like lithium and nickel from customers’ wastewater, which could expand access to critical materials essential to the production of batteries and other products.
“If we can extract lithium from brine water in an environmentally and economically feasible way, the U.S. can meet all of its lithium needs from within the U.S.,” Bajpayee says. “What’s preventing large-scale extraction of lithium from brine is technology, and we believe what we have now deployed will open the floodgates for direct lithium extraction and completely revolutionize the industry.”
The company has also validated a method for eliminating PFAS — so-called toxic “forever chemicals” — in a pilot project with a leading U.S. semiconductor manufacturer. In the near future, it hopes to bring that solution to municipal water treatment plants to protect cities.
At the heart of Gradiant’s innovation is the founders’ belief that industrial activity doesn’t have to deplete one of the world’s most vital resources.
“Ever since the industrial revolution, we’ve been taking from nature,” Bajpayee says. “By treating and recycling water, by reducing water consumption and making industry highly water efficient, we have this unique opportunity to turn the clock back and give nature water back. If that’s your driver, you can’t choose not to innovate.”
Rare and mysterious cosmic explosion: Gamma-ray burst or jetted tidal disruption event?
Researchers characterize the peculiar Einstein Probe transient EP240408a.
Highly energetic explosions in the sky are commonly attributed to gamma-ray bursts. We now understand that these bursts originate from either the merger of two neutron stars or the collapse of a massive star. In these scenarios, a newborn black hole is formed, emitting a jet that travels at nearly the speed of light. When these jets are directed toward Earth, we can observe them from vast distances — sometimes billions of light-years away — due to a relativistic effect known as Doppler boosting. Over the past decade, thousands of such gamma-ray bursts have been detected.
Since its launch in 2024, the Einstein Probe — an X-ray space telescope developed by the Chinese Academy of Sciences (CAS) in partnership with the European Space Agency (ESA) and the Max Planck Institute for Extraterrestrial Physics — has been scanning the skies looking for energetic explosions, and in April the telescope observed an unusual event designated EP240408a. Now an international team of astronomers, including Dheeraj Pasham from MIT, Igor Andreoni from the University of North Carolina at Chapel Hill, and Brendan O’Connor from Carnegie Mellon University, has investigated this explosion using a slew of ground-based and space-based telescopes, including NuSTAR, Swift, Gemini, Keck, DECam, VLA, ATCA, and NICER, which was developed in collaboration with MIT.
An open-access report of their findings, published Jan. 27 in The Astrophysical Journal Letters, indicates that the characteristics of this explosion do not match those of typical gamma-ray bursts. Instead, it may represent a rare new class of powerful cosmic explosion — a jetted tidal disruption event, which occurs when a supermassive black hole tears apart a star.
“NICER’s ability to steer to pretty much any part of the sky and monitor for weeks has been instrumental in our understanding of these unusual cosmic explosions,” says Pasham, a research scientist at the MIT Kavli Institute for Astrophysics and Space Research.
While a jetted tidal disruption event is plausible, the researchers say the lack of radio emissions from this jet is puzzling. O’Connor surmises, “EP240408a ticks some of the boxes for several different kinds of phenomena, but it doesn’t tick all the boxes for anything. In particular, the short duration and high luminosity are hard to explain in other scenarios. The alternative is that we are seeing something entirely new!”
According to Pasham, the Einstein Probe is just beginning to scratch the surface of what seems possible. “I’m excited to chase the next weird explosion from the Einstein Probe,” he says, echoing astronomers worldwide who look forward to the prospect of discovering more unusual explosions from the farthest reaches of the cosmos.
Evelina Fedorenko receives Troland Award from National Academy of Sciences
Cognitive neuroscientist is recognized for her groundbreaking discoveries about the brain’s language system.
The National Academy of Sciences (NAS) recently announced that MIT Associate Professor Evelina Fedorenko will receive a 2025 Troland Research Award for her groundbreaking contributions toward understanding the language network in the human brain.
The Troland Research Award is given annually to recognize unusual achievement by early-career researchers within the broad spectrum of experimental psychology.
Fedorenko, an associate professor of brain and cognitive sciences and a McGovern Institute for Brain Research investigator, is interested in how minds and brains create language. Her lab is unpacking the internal architecture of the brain’s language system and exploring the relationship between language and various cognitive, perceptual, and motor systems. Her novel methods combine precise measures of an individual’s brain organization with innovative computational modeling to make fundamental discoveries about the computations that underlie the uniquely human ability for language.
Fedorenko has shown that the language network is selective for language processing over diverse non-linguistic processes that have been argued to share computational demands with language, such as math, music, and social reasoning. Her work has also demonstrated that syntactic processing is not localized to a particular region within the language network; rather, every brain region that responds to syntactic processing is at least as sensitive to word meanings.
She has also shown that representations from neural network language models, such as ChatGPT, are similar to those in the human language brain areas. Fedorenko also highlighted that although language models can master linguistic rules and patterns, they are less effective at using language in real-world situations. In the human brain, that kind of functional competence is distinct from formal language competence, she says, requiring not just language-processing circuits but also brain areas that store knowledge of the world, reason, and interpret social interactions. Contrary to a prominent view that language is essential for thinking, Fedorenko argues that language is not the medium of thought and is primarily a tool for communication.
Ultimately, Fedorenko’s cutting-edge work is uncovering the computations and representations that fuel language processing in the brain. She will receive the Troland Award this April, during the annual meeting of the NAS in Washington.
3 Questions: Modeling adversarial intelligence to exploit AI’s security vulnerabilities
MIT CSAIL Principal Research Scientist Una-May O’Reilly discusses how she develops agents that reveal AI models’ security weaknesses before hackers do.
If you’ve watched cartoons like Tom and Jerry, you’ll recognize a common theme: An elusive target avoids his formidable adversary. This game of “cat-and-mouse” — whether literal or otherwise — involves pursuing something that ever-so-narrowly escapes you at each try.
In a similar way, evading persistent hackers is a continuous challenge for cybersecurity teams. Keeping them chasing what’s just out of reach, MIT researchers are working on an AI approach called “artificial adversarial intelligence” that mimics attackers of a device or network to test network defenses before real attacks happen. Other AI-based defensive measures help engineers further fortify their systems to avoid ransomware, data theft, or other hacks.
Here, Una-May O'Reilly, an MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) principal investigator who leads the Anyscale Learning For All Group (ALFA), discusses how artificial adversarial intelligence protects us from cyber threats.
Q: In what ways can artificial adversarial intelligence play the role of a cyber attacker, and how does artificial adversarial intelligence portray a cyber defender?
A: Cyber attackers exist along a competence spectrum. At the lowest end, there are so-called script-kiddies, or threat actors who spray well-known exploits and malware in the hopes of finding some network or device that hasn't practiced good cyber hygiene. In the middle are cyber mercenaries who are better-resourced and organized to prey upon enterprises with ransomware or extortion. And, at the high end, there are groups that are sometimes state-supported, which can launch the most difficult-to-detect "advanced persistent threats" (or APTs).
Think of the specialized, nefarious intelligence that these attackers marshal — that's adversarial intelligence. The attackers make very technical tools that let them hack into code, they choose the right tool for their target, and their attacks have multiple steps. At each step, they learn something, integrate it into their situational awareness, and then make a decision on what to do next. For the sophisticated APTs, they may strategically pick their target, and devise a slow and low-visibility plan that is so subtle that its implementation escapes our defensive shields. They can even plan deceptive evidence pointing to another hacker!
My research goal is to replicate this specific kind of offensive or attacking intelligence, intelligence that is adversarially-oriented (intelligence that human threat actors rely upon). I use AI and machine learning to design cyber agents and model the adversarial behavior of human attackers. I also model the learning and adaptation that characterizes cyber arms races.
I should also note that cyber defenses are pretty complicated. They've evolved their complexity in response to escalating attack capabilities. These defense systems involve designing detectors, processing system logs, triggering appropriate alerts, and then triaging them into incident response systems. They have to be constantly alert to defend a very big attack surface that is hard to track and very dynamic. On this other side of attacker-versus-defender competition, my team and I also invent AI in the service of these different defensive fronts.
Another thing stands out about adversarial intelligence: Both Tom and Jerry are able to learn from competing with one another! Their skills sharpen and they lock into an arms race. One gets better, then the other, to save his skin, gets better too. This tit-for-tat improvement goes onwards and upwards! We work to replicate cyber versions of these arms races.
Q: What are some examples in our everyday lives where artificial adversarial intelligence has kept us safe? How can we use adversarial intelligence agents to stay ahead of threat actors?
A: Machine learning has been used in many ways to ensure cybersecurity. There are all kinds of detectors that filter out threats. They are tuned to anomalous behavior and to recognizable kinds of malware, for example. There are AI-enabled triage systems. Some of the spam protection tools right there on your cell phone are AI-enabled!
With my team, I design AI-enabled cyber attackers that can do what threat actors do. We invent AI to give our cyber agents expert computer skills and programming knowledge, making them capable of processing all sorts of cyber knowledge, planning attack steps, and making informed decisions within a campaign.
Adversarially intelligent agents (like our AI cyber attackers) can be used as practice when testing network defenses. A lot of effort goes into checking a network's robustness to attack, and AI is able to help with that. Additionally, when we add machine learning to our agents, and to our defenses, they play out an arms race we can inspect, analyze, and use to anticipate what countermeasures may be used when we take measures to defend ourselves.
Q: What new risks are they adapting to, and how do they do so?
A: There never seems to be an end to new software being released and new configurations of systems being engineered. With every release, there are vulnerabilities an attacker can target. These may be examples of weaknesses in code that are already documented, or they may be novel.
New configurations pose the risk of errors or new ways to be attacked. We didn't imagine ransomware when we were dealing with denial-of-service attacks. Now we're juggling cyber espionage and ransomware with IP [intellectual property] theft. All our critical infrastructure, including telecom networks and financial, health care, municipal, energy, and water systems, are targets.
Fortunately, a lot of effort is being devoted to defending critical infrastructure. We will need to translate that to AI-based products and services that automate some of those efforts. And, of course, to keep designing smarter and smarter adversarial agents to keep us on our toes, or help us practice defending our cyber assets.
Imagine a boombox that tracks your every move and suggests music to match your personal dance style. That’s the idea behind “Be the Beat,” one of several projects from MIT course 4.043/4.044 (Interaction Intelligence), taught by Marcelo Coelho in the Department of Architecture, that were presented at the 38th annual NeurIPS (Neural Information Processing Systems) conference in December 2024. With over 16,000 attendees converging in Vancouver, NeurIPS is a competitive and prestigious conference dedicated to research and science in the field of artificial intelligence and machine learning, and a premier venue for showcasing cutting-edge developments.
The course investigates the emerging field of large language objects, and how artificial intelligence can be extended into the physical world. While “Be the Beat” transforms the creative possibilities of dance, other student submissions span disciplines such as music, storytelling, critical thinking, and memory, creating generative experiences and new forms of human-computer interaction. Taken together, these projects illustrate a broader vision for artificial intelligence: one that goes beyond automation to catalyze creativity, reshape education, and reimagine social interactions.
Be the Beat
“Be the Beat,” by Ethan Chang, an MIT mechanical engineering and design student, and Zhixing Chen, an MIT mechanical engineering and music student, is an AI-powered boombox that suggests music from a dancer's movement. Dance has traditionally been guided by music throughout history and across cultures, yet the concept of dancing to create music is rarely explored.
“Be the Beat” creates a space for human-AI collaboration on freestyle dance, empowering dancers to rethink the traditional dynamic between dance and music. It uses PoseNet to describe movements for a large language model, enabling it to analyze dance style and query APIs to find music with similar style, energy, and tempo. Dancers interacting with the boombox reported having more control over artistic expression and described the boombox as a novel approach to discovering dance genres and choreographing creatively.
A Mystery for You
“A Mystery for You,” by Mrinalini Singha SM ’24, a recent graduate in the Art, Culture, and Technology program, and Haoheng Tang, a recent graduate of the Harvard University Graduate School of Design, is an educational game designed to cultivate critical thinking and fact-checking skills in young learners. The game leverages a large language model (LLM) and a tangible interface to create an immersive investigative experience. Players act as citizen fact-checkers, responding to AI-generated “news alerts” printed by the game interface. By inserting cartridge combinations to prompt follow-up “news updates,” they navigate ambiguous scenarios, analyze evidence, and weigh conflicting information to make informed decisions.
This human-computer interaction experience challenges our news-consumption habits by eliminating touchscreen interfaces, replacing perpetual scrolling and skim-reading with a haptically rich analog device. By combining the affordances of slow media with new generative media, the game promotes thoughtful, embodied interactions while equipping players to better understand and challenge today’s polarized media landscape, where misinformation and manipulative narratives thrive.
Memorscope
“Memorscope,” by MIT Media Lab research collaborator Keunwook Kim, is a device that creates collective memories by merging the deeply human experience of face-to-face interaction with advanced AI technologies. Inspired by how we use microscopes and telescopes to examine and uncover hidden and invisible details, Memorscope allows two users to “look into” each other’s faces, using this intimate interaction as a gateway to the creation and exploration of their shared memories.
The device leverages AI models such as OpenAI and Midjourney, introducing different aesthetic and emotional interpretations, which results in a dynamic and collective memory space. This space transcends the limitations of traditional shared albums, offering a fluid, interactive environment where memories are not just static snapshots but living, evolving narratives, shaped by the ongoing relationship between users.
Narratron
“Narratron,” by Harvard Graduate School of Design students Xiying (Aria) Bao and Yubo Zhao, is an interactive projector that co-creates and co-performs children's stories through shadow puppetry using large language models. Users can press the shutter to “capture” protagonists they want to be in the story, and it takes hand shadows (such as animal shapes) as input for the main characters. The system then develops the story plot as new shadow characters are introduced. The story appears through a projector as a backdrop for shadow puppetry while being narrated through a speaker as users turn a crank to “play” in real time. By combining visual, auditory, and bodily interactions in one system, the project aims to spark creativity in shadow play storytelling and enable multi-modal human-AI collaboration.
Perfect Syntax
“Perfect Syntax,” by Karyn Nakamura ’24, is a video art piece examining the syntactic logic behind motion and video. Using AI to manipulate video fragments, the project explores how the fluidity of motion and time can be simulated and reconstructed by machines. Drawing inspiration from both philosophical inquiry and artistic practice, Nakamura's work interrogates the relationship between perception, technology, and the movement that shapes our experience of the world. By reimagining video through computational processes, Nakamura investigates the complexities of how machines understand and represent the passage of time and motion.
Last year the Earth exceeded 1.5 degrees Celsius of warming above preindustrial times, a threshold beyond which wildfires, droughts, floods, and other climate impacts are expected to escalate in frequency, intensity, and lethality. To cap global warming at 1.5 C and avert that scenario, the nearly 200 signatory nations of the Paris Agreement on climate change will need to not only dramatically lower their greenhouse gas emissions, but also take measures to remove carbon dioxide (CO2) from the atmosphere and durably store it at or below the Earth’s surface.
Past analyses of the climate mitigation potential, costs, benefits, and drawbacks of different carbon dioxide removal (CDR) options have focused primarily on three strategies: bioenergy with carbon capture and storage (BECCS), in which CO2-absorbing plant matter is converted into fuels or directly burned to generate energy, with some of the plant’s carbon content captured and then stored safely and permanently; afforestation/reforestation, in which CO2-absorbing trees are planted in large numbers; and direct air carbon capture and storage (DACCS), a technology that captures and separates CO2 directly from ambient air, and injects it into geological reservoirs or incorporates it into durable products.
To provide a more comprehensive and actionable analysis of CDR, a new study by researchers at the MIT Center for Sustainability Science and Strategy (CS3) first expands the option set to include biochar (charcoal produced from plant matter and stored in soil) and enhanced weathering, or EW (spreading finely ground rock particles on land to accelerate storage of CO2 in soil and water). The study then evaluates portfolios of all five options — in isolation and in combination — to assess their capability to meet the 1.5 C goal, and their potential impacts on land, energy, and policy costs.
The study appears in the journal Environmental Research Letters. Aided by their global multi-region, multi-sector Economic Projection and Policy Analysis (EPPA) model, the MIT CS3 researchers produce three key findings.
First, the most cost-effective, low-impact strategy that policymakers can take to achieve global net-zero emissions — an essential step in meeting the 1.5 C goal — is to diversify their CDR portfolio, rather than rely on any single option. This approach minimizes overall cropland and energy consumption, and negative impacts such as increased food insecurity and decreased energy supplies.
Diversifying across multiple CDR options achieves the highest CDR deployment, around 31.5 gigatons of CO2 per year in 2100, while also proving the most cost-effective net-zero strategy. The study identifies BECCS and biochar as most cost-competitive in removing CO2 from the atmosphere, followed by EW, with DACCS as uncompetitive due to high capital and energy requirements. While posing logistical and other challenges, biochar and EW have the potential to improve soil quality and productivity across 45 percent of all croplands by 2100.
“Diversifying CDR portfolios is the most cost-effective net-zero strategy because it avoids relying on a single CDR option, thereby reducing and redistributing negative impacts on agriculture, forestry, and other land uses, as well as on the energy sector,” says Solene Chiquier, lead author of the study who was a CS3 postdoc during its preparation.
The second finding: There is no optimal CDR portfolio that will work well at global and national levels. The ideal CDR portfolio for a particular region will depend on local technological, economic, and geophysical conditions. For example, afforestation and reforestation would be of great benefit in places like Brazil, Latin America, and Africa, by not only sequestering carbon in more acreage of protected forest but also helping to preserve planetary well-being and human health.
“In designing a sustainable, cost-effective CDR portfolio, it is important to account for regional availability of agricultural, energy, and carbon-storage resources,” says Sergey Paltsev, CS3 deputy director, MIT Energy Initiative senior research scientist, and supervising co-author of the study. “Our study highlights the need for enhancing knowledge about local conditions that favor some CDR options over others.”
Finally, the MIT CS3 researchers show that delaying large-scale deployment of CDR portfolios could be very costly, leading to considerably higher carbon prices across the globe — a development sure to deter the climate mitigation efforts needed to achieve the 1.5 C goal. They recommend near-term implementation of policy and financial incentives to help fast-track those efforts.
New training approach could help AI agents perform better in uncertain conditions
Sometimes, it might be better to train a robot in an environment that’s different from the one where it will be deployed.
A home robot trained to perform household tasks in a factory may fail to effectively scrub the sink or take out the trash when deployed in a user’s kitchen, since this new environment differs from its training space.
To avoid this, engineers often try to match the simulated training environment as closely as possible with the real world where the agent will be deployed.
However, researchers from MIT and elsewhere have now found that, despite this conventional wisdom, sometimes training in a completely different environment yields a better-performing artificial intelligence agent.
Their results indicate that, in some situations, training a simulated AI agent in a world with less uncertainty, or “noise,” enabled it to perform better than a competing AI agent trained in the same, noisy world they used to test both agents.
The researchers call this unexpected phenomenon the indoor training effect.
“If we learn to play tennis in an indoor environment where there is no noise, we might be able to more easily master different shots. Then, if we move to a noisier environment, like a windy tennis court, we could have a higher probability of playing tennis well than if we started learning in the windy environment,” explains Serena Bono, a research assistant in the MIT Media Lab and lead author of a paper on the indoor training effect.
The researchers studied this phenomenon by training AI agents to play Atari games, which they modified by adding some unpredictability. They were surprised to find that the indoor training effect consistently occurred across Atari games and game variations.
They hope these results fuel additional research toward developing better training methods for AI agents.
“This is an entirely new axis to think about. Rather than trying to match the training and testing environments, we may be able to construct simulated environments where an AI agent learns even better,” adds co-author Spandan Madan, a graduate student at Harvard University.
Bono and Madan are joined on the paper by Ishaan Grover, an MIT graduate student; Mao Yasueda, a graduate student at Yale University; Cynthia Breazeal, professor of media arts and sciences and leader of the Personal Robotics Group in the MIT Media Lab; Hanspeter Pfister, the An Wang Professor of Computer Science at Harvard; and Gabriel Kreiman, a professor at Harvard Medical School. The research will be presented at the Association for the Advancement of Artificial Intelligence Conference.
Training troubles
The researchers set out to explore why reinforcement learning agents tend to have such dismal performance when tested on environments that differ from their training space.
Reinforcement learning is a trial-and-error method in which the agent explores a training space and learns to take actions that maximize its reward.
The team developed a technique to explicitly add a certain amount of noise to one element of the reinforcement learning problem called the transition function. The transition function defines the probability an agent will move from one state to another, based on the action it chooses.
If the agent is playing Pac-Man, a transition function might define the probability that ghosts on the game board will move up, down, left, or right. In standard reinforcement learning, the AI would be trained and tested using the same transition function.
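One simple way to inject such noise, sketched below with made-up probabilities rather than the study’s actual Atari setup, is to blend a ghost’s normal move distribution with a uniform distribution; training “indoors” then corresponds to a noise level of zero, while testing uses a positive noise level:

```python
# Toy sketch of injecting noise into a transition function (made-up numbers,
# not the study's Atari setup): a ghost's move distribution is blended with
# a uniform distribution, so higher noise means more random movement.
import numpy as np

moves = ["up", "down", "left", "right"]
p_clean = np.array([0.4, 0.4, 0.1, 0.1])   # hypothetical "normal" behavior

def noisy_transition(p, noise):
    """Blend a move distribution with uniform noise, for noise in [0, 1]."""
    uniform = np.full_like(p, 1.0 / len(p))
    return (1.0 - noise) * p + noise * uniform

rng = np.random.default_rng(0)
p_test = noisy_transition(p_clean, noise=0.3)   # noisier test environment
ghost_move = rng.choice(moves, p=p_test)        # sample one ghost move
```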
The researchers added noise to the transition function with this conventional approach and, as expected, it hurt the agent’s Pac-Man performance.
But when the researchers trained the agent with a noise-free Pac-Man game, then tested it in an environment where they injected noise into the transition function, it performed better than an agent trained on the noisy game.
“The rule of thumb is that you should try to capture the deployment condition’s transition function as well as you can during training to get the most bang for your buck. We really tested this insight to death because we couldn’t believe it ourselves,” Madan says.
Injecting varying amounts of noise into the transition function let the researchers test many environments, but it didn’t create realistic games. The more noise they injected into Pac-Man, the more likely the ghosts were to randomly teleport to different squares.
To see if the indoor training effect occurred in normal Pac-Man games, they adjusted underlying probabilities so ghosts moved normally but were more likely to move up and down, rather than left and right. AI agents trained in noise-free environments still performed better in these realistic games.
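The reweighting described above might look something like the following sketch; the exact weights are invented for illustration and are not taken from the paper.

```python
# Bias a ghost's move distribution toward up/down while keeping it a valid
# probability distribution (all values here are hypothetical).
base = {"up": 0.25, "down": 0.25, "left": 0.25, "right": 0.25}
bias = {"up": 1.4, "down": 1.4, "left": 0.6, "right": 0.6}

biased = {a: base[a] * bias[a] for a in base}
total = sum(biased.values())
biased = {a: p / total for a, p in biased.items()}  # renormalize to sum to 1
print(biased)  # {'up': 0.35, 'down': 0.35, 'left': 0.15, 'right': 0.15}
```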
“It was not only due to the way we added noise to create ad hoc environments. This seems to be a property of the reinforcement learning problem. And that was even more surprising to see,” Bono says.
Exploration explanations
When the researchers dug deeper in search of an explanation, they saw some correlations in how the AI agents explore the training space.
When both AI agents explore mostly the same areas, the agent trained in the non-noisy environment performs better, perhaps because it is easier for the agent to learn the rules of the game without the interference of noise.
If their exploration patterns are different, then the agent trained in the noisy environment tends to perform better. This might occur because the agent needs to understand patterns it can’t learn in the noise-free environment.
“If I only learn to play tennis with my forehand in the non-noisy environment, but then in the noisy one I have to also play with my backhand, I won’t play as well in the non-noisy environment,” Bono explains.
In the future, the researchers hope to explore how the indoor training effect might occur in more complex reinforcement learning environments, or with other techniques like computer vision and natural language processing. They also want to build training environments designed to leverage the indoor training effect, which could help AI agents perform better in uncertain environments.
MIT Climate and Energy Ventures class spins out entrepreneurs — and successful companies
The course challenges students to commercialize technologies and ideas in one whirlwind semester. Alumni of the class have founded more than 150 companies.
In 2014, a team of MIT students in course 15.366 (Climate and Energy Ventures) developed a plan to commercialize MIT research on how to move information between chips with light instead of electricity, reducing energy usage.
After completing the class, which challenges students to identify early customers and pitch their business plan to investors, the team went on to win both grand prizes at the MIT Clean Energy Prize. Today the company, Ayar Labs, has raised a total of $370 million from a group including chip leaders AMD, Intel, and NVIDIA, to scale the manufacturing of its optical chip interconnects.
Ayar Labs is one of many companies whose roots can be traced back to 15.366. In fact, more than 150 companies have been founded by alumni of the class since it was first offered in 2007.
In the class, student teams select a technology or idea and determine the best path for its commercialization. The semester-long project, which is accompanied by lectures and mentoring, equips students with real-world experience in launching a business.
“The goal is to educate entrepreneurs on how to start companies in the climate and energy space,” says Senior Lecturer Tod Hynes, who co-founded the course and has been teaching it since 2008. “We do that through hands-on experience. We require students to engage with customers, talk to potential suppliers, partners, investors, and to practice their pitches to learn from that feedback.”
The class attracts hundreds of student applications each year. As one of the catalysts for MIT spinoffs, it is also one reason MIT alumni-founded companies had, according to a 2015 report, generated roughly $1.9 trillion in annual revenues. If MIT were a country, that figure would make it the 10th largest economy in the world, the report found.
“’Mens et manus’ (‘mind and hand’) is MIT's motto, and the hands-on experience we try to provide in this class is hard to beat,” Hynes says. “When you actually go through the process of commercialization in the real world, you learn more and you’re in a better spot. That experiential learning approach really aligns with MIT’s approach.”
Simulating a startup
The course was started by Bill Aulet, a professor of the practice at the MIT Sloan School of Management and the managing director of the Martin Trust Center for MIT Entrepreneurship. After serving as an advisor the first year and helping Aulet launch the class, Hynes began teaching it with Aulet in the fall of 2008. The pair also launched the Climate and Energy Prize around the same time, a competition that continues today and recently received over 150 applications from teams around the world.
A core feature of the class is connecting students in different academic fields. Each year, organizers aim to enroll students with backgrounds in science, engineering, business, and policy.
“The class is meant to be accessible to anybody at MIT,” Hynes says, noting the course has also since opened to students from Harvard University. “We’re trying to pull across disciplines.”
The class quickly grew in popularity around campus. Over the last few years, the course has had about 150 students apply for 50 spots.
“I mentioned Climate and Energy Ventures in my application to MIT,” says Chris Johnson, a second-year graduate student in the Leaders for Global Operations (LGO) Program. “Coming into MIT, I was very interested in sustainability, and energy in particular, and also in startups. I had heard great things about the class, and I waited until my last semester to apply.”
The course’s organizers select mostly graduate students, whom they prefer to be in the final year of their program so they can more easily continue working on the venture after the class is finished.
“Whether or not students stick with the project from the class, it’s a great experience that will serve them in their careers,” says Jennifer Turliuk, the practice leader for climate and energy artificial intelligence at the Martin Trust Center for MIT Entrepreneurship, who helped teach the class this fall.
Hynes describes the course as a venture-building simulation. Before it begins, organizers select up to 30 technologies and ideas that are in the right stage for commercialization. Students can also come into the class with ideas or technologies they want to work on.
After a few weeks of introductions and lectures, students form into multidisciplinary teams of about five and begin going through each of the 24 steps of building a startup described in Aulet’s book “Disciplined Entrepreneurship,” which includes things like engaging with potential early customers, quantifying a value proposition, and establishing a business model. Everything builds toward a one-hour final presentation that’s designed to simulate a pitch to investors or government officials.
“It’s a lot of work, and because it’s a team-based project, your grade is highly dependent on your team,” Hynes says. “You also get graded by your team; that’s about 10 percent of your grade. We try to encourage people to be proactive and supportive teammates.”
Students say the process is fast-paced but rewarding.
“It’s definitely demanding,” says Sofie Netteberg, a graduate student who is also in the LGO program at MIT. “Depending on where you’re at with your technology, you can be moving very quickly. That’s the stage that I was in, which I found really engaging. We basically just had a lab technology, and it was like, ‘What do we do next?’ You also get a ton of support from the professors.”
From the classroom to the world
This fall’s final presentations took place at the headquarters of the MIT-affiliated venture firm The Engine in front of an audience of professors, investors, members of foundations supporting entrepreneurship, and more.
“We got to hear feedback from people who would be the real next step for the technology if the startup gets up and running,” said Johnson, whose team was commercializing a method for storing energy in concrete. “That was really valuable. We know that these are not only people we might see in the next month or the next funding rounds, but they’re also exactly the type of people that are going to give us the questions we should be thinking about. It was clarifying.”
Throughout the semester, students treated the project like a real venture they’d be working on well beyond the length of the class.
“No one’s really thinking about this class for the grade; it’s about the learning,” says Netteberg, whose team was encouraged to keep working on their electrolyzer technology designed to more efficiently produce green hydrogen. “We’re not stressed about getting an A. If we want to keep working on this, we want real feedback: What do you think we did well? What do we need to keep working on?”
Hynes says several investors expressed interest in supporting the businesses coming out of the class. Moving forward, he hopes students embrace the test-bed environment his team has created for them and try bold new things.
“People have been very pragmatic over the years, which is good, but also potentially limiting,” Hynes says. “This is also an opportunity to do something that’s a little further out there — something that has really big potential impact if it comes together. This is the time where students get to experiment, so why not try something big?”
Expanding robot perception
Associate Professor Luca Carlone is working to give robots a more human-like awareness of their environment.
Robots have come a long way since the Roomba. Today, drones are starting to deliver door to door, self-driving cars are navigating some roads, robo-dogs are aiding first responders, and still more bots are doing backflips and helping out on the factory floor. Still, Luca Carlone thinks the best is yet to come.
Carlone, who recently received tenure as an associate professor in MIT’s Department of Aeronautics and Astronautics (AeroAstro), directs the SPARK Lab, where he and his students are bridging a key gap between humans and robots: perception. The group does theoretical and experimental research, all toward expanding a robot’s awareness of its environment in ways that approach human perception. And perception, as Carlone often says, is more than detection.
While robots have grown by leaps and bounds in terms of their ability to detect and identify objects in their surroundings, they still have a lot to learn when it comes to making higher-level sense of their environment. As humans, we perceive objects with an intuitive sense not just of their shapes and labels but also of their physics — how they might be manipulated and moved — and of how they relate to each other, their larger environment, and ourselves.
That kind of human-level perception is what Carlone and his group are hoping to impart to robots, in ways that enable them to safely and seamlessly interact with people in their homes, workplaces, and other unstructured environments.
Since joining the MIT faculty in 2017, Carlone has led his team in developing and applying perception and scene-understanding algorithms for various applications, including autonomous underground search-and-rescue vehicles, drones that can pick up and manipulate objects on the fly, and self-driving cars. These algorithms might also be useful for domestic robots that follow natural language commands and potentially even anticipate humans’ needs based on higher-level contextual clues.
“Perception is a big bottleneck toward getting robots to help us in the real world,” Carlone says. “If we can add elements of cognition and reasoning to robot perception, I believe they can do a lot of good.”
Expanding horizons
Carlone was born and raised near Salerno, Italy, close to the scenic Amalfi coast, where he was the youngest of three boys. His mother is a retired elementary school teacher who taught math, and his father is a retired history professor and publisher, who has always taken an analytical approach to his historical research. The brothers may have unconsciously adopted their parents’ mindsets, as all three went on to be engineers — the older two pursued electronics and mechanical engineering, while Carlone landed on robotics, or mechatronics, as it was known at the time.
He didn’t come around to the field, however, until late in his undergraduate studies. Carlone attended the Polytechnic University of Turin, where he focused initially on theoretical work, specifically on control theory — a field that applies mathematics to develop algorithms that automatically control the behavior of physical systems, such as power grids, planes, cars, and robots. Then, in his senior year, Carlone signed up for a course on robotics that explored advances in manipulation and how robots can be programmed to move and function.
“It was love at first sight. Using algorithms and math to develop the brain of a robot and make it move and interact with the environment is one of the most fulfilling experiences,” Carlone says. “I immediately decided this is what I want to do in life.”
He went on to a dual-degree program at the Polytechnic University of Turin and the Polytechnic University of Milan, where he received master’s degrees in mechatronics and automation engineering, respectively. As part of this program, called the Alta Scuola Politecnica, Carlone also took courses in management, in which he and students from various academic backgrounds had to team up to conceptualize, build, and draw up a marketing pitch for a new product design. Carlone’s team developed a touch-free table lamp designed to follow a user’s hand-driven commands. The project pushed him to think about engineering from different perspectives.
“It was like having to speak different languages,” he says. “It was an early exposure to the need to look beyond the engineering bubble and think about how to create technical work that can impact the real world.”
The next generation
Carlone stayed in Turin to complete his PhD in mechatronics. During that time, he was given freedom to choose a thesis topic, which he went about, as he recalls, “a bit naively.”
“I was exploring a topic that the community considered to be well-understood, and for which many researchers believed there was nothing more to say,” Carlone says. “I underestimated how established the topic was, and thought I could still contribute something new to it, and I was lucky enough to just do that.”
The topic in question was “simultaneous localization and mapping,” or SLAM — the problem of generating and updating a map of a robot’s environment while simultaneously keeping track of where the robot is within that environment. Carlone came up with a way to reframe the problem, such that algorithms could generate more precise maps without having to start with an initial guess, as most SLAM methods did at the time. His work helped to crack open a field where most roboticists thought one could not do better than the existing algorithms.
“SLAM is about figuring out the geometry of things and how a robot moves among those things,” Carlone says. “Now I’m part of a community asking, what is the next generation of SLAM?”
In search of an answer, he accepted a postdoc position at Georgia Tech, where he dove into coding and computer vision — an interest that, in retrospect, may have been sparked by a brush with blindness: As he was finishing up his PhD in Italy, he suffered a medical complication that severely affected his vision.
“For one year, I could have easily lost an eye,” Carlone says. “That was something that got me thinking about the importance of vision, and artificial vision.”
He was able to receive good medical care, and the condition resolved entirely, such that he could continue his work. At Georgia Tech, his advisor, Frank Dellaert, showed him ways to code in computer vision and formulate elegant mathematical representations of complex, three-dimensional problems. His advisor was also one of the first to develop an open-source SLAM library, called GTSAM, which Carlone quickly recognized to be an invaluable resource. More broadly, he saw that making software available to all unlocked a huge potential for progress in robotics as a whole.
“Historically, progress in SLAM has been very slow, because people kept their codes proprietary, and each group had to essentially start from scratch,” Carlone says. “Then open-source pipelines started popping up, and that was a game changer, which has largely driven the progress we have seen over the last 10 years.”
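For readers curious what such a pipeline looks like, below is a minimal pose-graph example using the Python bindings of GTSAM, the open-source library mentioned above. The poses, odometry measurements, and noise levels are invented for illustration. Note that this standard workflow starts from an initial guess, the very requirement Carlone’s PhD work showed could be relaxed.

```python
import numpy as np
import gtsam

# Build a tiny 2D pose graph: a prior on the first pose plus two odometry factors.
graph = gtsam.NonlinearFactorGraph()
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))

graph.add(gtsam.PriorFactorPose2(1, gtsam.Pose2(0.0, 0.0, 0.0), prior_noise))
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(2.0, 0.0, 0.0), odom_noise))
graph.add(gtsam.BetweenFactorPose2(2, 3, gtsam.Pose2(2.0, 0.0, 0.0), odom_noise))

# Deliberately poor initial guesses; the optimizer refines them.
initial = gtsam.Values()
initial.insert(1, gtsam.Pose2(0.5, 0.1, 0.2))
initial.insert(2, gtsam.Pose2(2.3, -0.1, -0.2))
initial.insert(3, gtsam.Pose2(4.1, 0.1, 0.1))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
for key in (1, 2, 3):
    print(result.atPose2(key))  # poses snap back toward (0,0), (2,0), (4,0)
```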
Spatial AI
Following Georgia Tech, Carlone came to MIT in 2015 as a postdoc in the Laboratory for Information and Decision Systems (LIDS). During that time, he collaborated with Sertac Karaman, professor of aeronautics and astronautics, in developing software to help palm-sized drones navigate their surroundings using very little on-board power. A year later, he was promoted to research scientist, and then in 2017, Carlone accepted a faculty position in AeroAstro.
“One thing I fell in love with at MIT was that all decisions are driven by questions like: What are our values? What is our mission? It’s never about low-level gains. The motivation is really about how to improve society,” Carlone says. “As a mindset, that has been very refreshing.”
Today, Carlone’s group is developing ways to represent a robot’s surroundings, beyond characterizing their geometric shape and semantics. He is utilizing deep learning and large language models to develop algorithms that enable robots to perceive their environment through a higher-level lens, so to speak. Over the last six years, his lab has released more than 60 open-source repositories, which are used by thousands of researchers and practitioners worldwide. The bulk of his work fits into a larger, emerging field known as “spatial AI.”
“Spatial AI is like SLAM on steroids,” Carlone says. “In a nutshell, it has to do with enabling robots to think and understand the world as humans do, in ways that can be useful.”
It’s a huge undertaking that could have wide-ranging impacts, in terms of enabling more intuitive, interactive robots to help out at home, in the workplace, on the roads, and in remote and potentially dangerous areas. Carlone says plenty of work lies ahead before robot perception comes close to how humans perceive the world.
“I have 2-year-old twin daughters, and I see them manipulating objects, carrying 10 different toys at a time, navigating across cluttered rooms with ease, and quickly adapting to new environments. Robot perception cannot yet match what a toddler can do,” Carlone says. “But we have new tools in the arsenal. And the future is bright.”
MIT Press’ Direct to Open opens access to over 80 new monographs
Support for D2O in 2025 includes two new three-year, all-consortium commitments from the Florida Virtual Campus and the Big Ten Academic Alliance.
The MIT Press has announced that Direct to Open (D2O) will open access to over 80 new monographs and edited book collections in the spring and fall publishing seasons, after reaching its full funding goal for 2025.
“It has been one of the greatest privileges of my career to contribute to this program and demonstrate that our academic community can unite to publish high-quality open-access monographs at scale,” says Amy Harris, senior manager of library relations and sales at the MIT Press. “We are deeply grateful to all of the consortia that have partnered with us and to the hundreds of libraries that have invested in this program. Together, we are expanding the public knowledge commons in ways that benefit scholars, the academy, and readers around the world.”
Among the highlights from the MIT Press’s fourth D2O funding cycle is a new three-year, consortium-wide commitment from the Florida Virtual Campus (FLVC) and a renewed three-year commitment from the Big Ten Academic Alliance (BTAA). These long-term collaborations will play a pivotal role in supporting the press’s open-access efforts for years to come.
“The Florida Virtual Campus is honored to participate in D2O in order to provide this collection of high-quality scholarship to more than 1.2 million students and faculty at the 28 state colleges and 12 state universities of Florida,” says Elijah Scott, executive director of library services for the Florida Virtual Campus. “The D2O program allows FLVC to make this research collection available to our member libraries while concurrently fostering the larger global aspiration of sustainable and equitable access to information.”
“The libraries of the Big Ten Academic Alliance are committed to supporting the creation of open-access content,” adds Kate McCready, program director for open publishing at the Big Ten Academic Alliance Library. “We're thrilled that our participation in D2O contributes to the opening of this collection, as well as championing the exploration of new models for opening scholarly monographs.”
In 2025, hundreds of libraries renewed their support thanks to the teams at consortia around the world, including the Council of Australasian University Librarians, the CBB Library Consortium, the California Digital Library, the Canadian Research Knowledge Network, CRL/NERL, the Greater Western Library Alliance, Jisc, Lyrasis, MOBIUS, PALCI, SCELC, and the Tri-College Library Consortium.
Launched in 2021, D2O is an innovative, sustainable framework for open-access monographs that shifts publishing from a solely market-based purchase model, in which individuals and libraries buy single e-books, to a collaborative, library-supported open-access model.
Many other models offer open-access opportunities on a title-by-title basis or within specific disciplines. D2O’s particular advantage is that it enables a press to provide open access to its entire list of scholarly books at scale, embargo-free, during each funding cycle. Thanks to D2O, all MIT Press monograph authors have the opportunity for their work to be published open access, with equal support to traditionally underserved and underfunded disciplines in the social sciences and humanities.
The MIT Press will now turn its attention to its fifth funding cycle and invites libraries and library consortia to participate. For details, please visit the MIT Press website or contact the Library Relations team.
Faces of MIT: Melissa Smith PhD ’12
The associate leader in the Advanced Materials and Microsystems Group at Lincoln Laboratory strongly believes in the power of collaboration and how it seeds innovation.
Melissa Smith PhD ’12 is an associate leader in the Advanced Materials and Microsystems Group at MIT Lincoln Laboratory. Her team, which is embedded within the laboratory’s Advanced Technology Division, drives innovation in fields including computation, aerospace, optical systems, and bioengineering by applying micro- and nanofabrication techniques. Smith, an inventor of 11 patents, strongly believes in the power of collaboration when it comes to her own work, the work of her Lincoln Laboratory colleagues, and the innovative research done by MIT professors and students.
Lincoln Laboratory researches and develops advanced technologies in support of national security. Research done at the laboratory is applied, meaning staff members are given a specific problem to solve by a deadline. Divisions within the laboratory are made up of technical experts, ranging from biologists to cybersecurity researchers, working on different projects simultaneously. Smith appreciates the broad application space of her group’s work, which feeds into programs across the laboratory. “We are like a kitchen drawer full of indispensable gadgets,” she says, some of which are used to develop picosatellites, smart textiles, or microrobots. Their position as a catch-all team makes their work fun, somewhat open-ended, and always interesting.
In 2012, Smith received her PhD from the MIT Department of Materials Science & Engineering (DMSE). After graduation, she remained at the Institute for nine months as a postdoc before beginning her career as an engineer at IBM. While at IBM, Smith maintained a research affiliation with MIT to continue to work on patents and write papers. In 2015, she formally returned to MIT as a technical staff member at Lincoln Laboratory. In 2020, she was promoted to the position of assistant group leader and was awarded the laboratory’s Best Invention Award for “Electrospray devices and methods for fabricating electrospray devices” (U.S. Patent 11,708,182 B2). In 2024, she was promoted to associate group leader.
Management is an important aspect of Smith’s role, and she credits the laboratory for cultivating people with both academic and technical backgrounds to learn how to effectively run programs and teams. Her demonstrated efficacy in the academic and corporate spaces — both of which contain deadlines and collaborative work — allows her to inspire her team to be innovative and efficient. She keeps her group running smoothly by removing potential roadblocks so they can adequately attend to their projects. Smith focuses on specific tasks that aid in her group’s success, including writing grant proposals, a skill she learned while working at the laboratory, which allows her staff to prioritize their technical work. That, she says, is the value of working as a team.
A true champion of teamwork, Smith advises new staff members to maintain an open mind because they can learn something from everyone they encounter, especially when first starting at the Institute. She notes that every colleague has something unique to offer, and taking time to understand the wealth of experience and knowledge around you will only help you succeed as a staff member at MIT. “Be who you are, do what you do, and run with it,” she says.
Soundbytes
Q: What project at MIT are you the proudest of?
Smith: We are building a wafer-scale satellite, which is a little bit out-there as an idea. It was thought up in the 1960s, but the technology wasn't to the point where it could be realized. Technology today is more than capable of making this small space microsystem. I was tasked with taking the idea further. Some people say that it is impossible, and for a lot of good reasons! Slowly addressing the technical issues to the point where people now say, “Oh, you could probably do this,” is exciting.
I never want to be someone who thinks something is impossible. I'll say, “I can't do it, but maybe somebody else can,” and I will also add, “Here is what I tried, here is all the data, and here is how I came to the point where I got stuck.” I like taking something that was initially met with disbelief and rendering it. Lincoln Laboratory is active with professors and students. I am collaborating with students from the Department of Aeronautics and Astronautics on the project, and we now have a patent on the technology that came from it. I am happy to have students assist, write papers, and occasionally get their names on patents. It is seeding additional innovation. We don't have the system quite yet, but I've converted a few skeptics!
Q: What are your favorite campus memories from when you were a student?
Smith: When I was a graduate student, I would go with friends to the Muddy Charles Pub in Walker Memorial. One of the things I really enjoy about Walker Memorial is the prime view over the Charles River, and I remember staring out of the windows at the top of Walker Memorial after exams. Also, during Independent Activities Period I learned how to snowboard. I'm from Illinois where there are no mountains. When I came to the East Coast and saw that there were a lot of mountains with people strapping metal to their feet in the snow, I thought, “OK, let's try it.” I love snowboarding to this day. MIT has this kind of unfettered freedom in a way that, even beyond the technical stuff, people can try things from a personal standpoint they maybe wouldn’t have tried somewhere else.
Q: What do you like the most about the culture at MIT?
Smith: We help people grow professionally. The staff here are above average in terms of capability in what they do. When I interviewed for my job, I asked where people work when they leave MIT. People move on to other labs like the Jet Propulsion Laboratory or companies like Raytheon, they become professors, or they start their own companies. I make sure that people are learning what they want to do with their careers while they work at the laboratory. That is the cultural overlay that exists on campus. When I was a student, I interned at John Deere, 3M, Xerox, and IBM and saw how they are innovative in their own ways that define their corporate cultures. At MIT, you are supported to explore and play. At Lincoln Laboratory people are not pigeonholed into a particular role. If you have an idea, you are encouraged to explore it, as long as it aligns with the mission. There is a specific freedom you can experience at MIT that is above and beyond a typical academic environment.
Gerald E. Schneider, a professor emeritus of psychology and member of the MIT community for over 60 years, passed away on Dec. 11, 2024. He was 84.
Schneider was an authority on the relationships between brain structure and behavior, concentrating on neuronal development, regeneration or altered growth after brain injury, and the behavioral consequences of altered connections in the brain.
Using the Syrian golden hamster as his test subject of choice, Schneider made numerous contributions to the advancement of neuroscience. He laid out the concept of two visual systems — one for locating objects and one for the identification of objects — in a 1969 issue of Science, a milestone in the study of brain-behavior relationships. In 1973, he described a “pruning effect” in the optic tract axons of adult hamsters who had brain lesions early in life. In 2006, his lab reported a novel nanobiomedical technology for tissue repair and restoration in Biological Sciences. The paper showed how a designed self-assembling peptide nanofiber scaffold could create a permissive environment for axons, not only to regenerate through the site of an acute injury in the optic tract of hamsters, but also to knit the brain tissue together.
His work shaped the research and thinking of numerous colleagues and trainees. Mriganka Sur, the Newton Professor of Neuroscience and former head of the Department of Brain and Cognitive Sciences (BCS), recalls how Schneider’s paper, “Is it really better to have your brain lesion early? A revision of the ‘Kennard Principle,’” published in 1979 in the journal Neuropsychologia, influenced his work on rewiring retinal projections to the auditory thalamus, which was used to derive principles of functional plasticity in the cortex.
“Jerry was an extremely innovative thinker. His hypothesis of two visual systems — for detailed spatial processing and for movement processing — based on his analysis of visual pathways in hamsters presaged and inspired later work on form and motion pathways in the primate brain,” Sur says. “His description of conservation of axonal arbor during development laid the foundation for later ideas about homeostatic mechanisms that co-regulate neuronal plasticity.”
Institute Professor Ann Graybiel was a colleague of Schneider’s for over five decades. She recalls early in her career being asked by Schneider to help make a map of the superior colliculus.
“I took it as an honor to be asked, and I worked very hard on this, with great excitement. It was my first such mapping, to be followed by much more in the future,” Graybiel recalls. “Jerry was fascinated by animal behavior, and from early on he made many discoveries using hamsters as his main animals of choice. He found that they could play. He found that they could operate in ways that seemed very sophisticated. And, yes, he mapped out pathways in their brains.”
Schneider was raised in Wheaton, Illinois, and graduated from Wheaton College in 1962 with a degree in physics. He was recruited to MIT by Hans-Lukas Teuber, one of the founders of the Department of Psychology, which eventually became the Department of Brain and Cognitive Sciences. Walle Nauta, another founder of the department, taught Schneider neuroanatomy. The pair were deeply influential in shaping his interests in neuroscience and his research.
“He admired them both very much and was very attached to them,” his daughter, Nimisha Schneider, says. “He was an interdisciplinary scholar and he liked that aspect of neuroscience, and he was fascinated by the mysteries of the human brain.”
Shortly after completing his PhD in psychology in 1966, he was hired as an assistant professor in 1967. He was named an associate professor in 1970, received tenure in 1975, and was appointed a full professor in 1977.
After his retirement in 2017, Schneider remained involved with the Department of BCS. Professor Pawan Sinha brought Schneider to campus for what would be his last on-campus engagement, as part of the “SilverMinds Series,” an initiative in the Sinha Lab to engage with scientists now in their “silver years.”
Schneider’s research made an indelible impact on Sinha, beginning as a graduate student when he was inspired by Schneider’s work linking brain structure and function. His work on nerve regeneration, which merged fundamental science and real-world impact, served as a “North Star” that guided Sinha’s own work as he established his lab as a junior faculty member.
“Even through the sadness of his loss, I am grateful for the inspiring example he has left for us of a life that so seamlessly combined brilliance, kindness, modesty, and tenacity,” Sinha says. “He will be missed.”
Schneider’s life centered around his research and teaching, but he also had many other skills and hobbies. Early in his life, he enjoyed painting, and as he grew older he was drawn to poetry. He was also skilled in carpentry and making furniture. He built the original hamster cages for his lab himself, along with numerous pieces of home furniture and shelving. He enjoyed nature anywhere it could be found, from the bees in his backyard to hiking and visiting state and national parks.
He was a Type 1 diabetic, and at the time of his death, he was nearing the completion of a book on the effects of hypoglycemia on the brain, which his family hopes to have published in the future. He was also the author of “Brain Structure and Its Origins,” published in 2014 by MIT Press.
He is survived by his wife, Aiping; his children, Cybele, Aniket, and Nimisha; and step-daughter Anna. He was predeceased by a daughter, Brenna. He is also survived by eight grandchildren and 10 great-grandchildren. A memorial in his honor was held on Jan. 11 at Saint James Episcopal Church in Cambridge.
Kingdoms collide as bacteria and cells form captivating connections
Studying the pathogen R. parkeri, researchers discovered the first evidence of extensive and stable interkingdom contacts between a pathogen and a eukaryotic organelle.
In biology textbooks, the endoplasmic reticulum is often portrayed as a distinct, compact organelle near the nucleus, and is commonly known to be responsible for protein trafficking and secretion. In reality, the ER is vast and dynamic, spread throughout the cell and able to establish contact and communication with and between other organelles. These membrane contacts regulate processes as diverse as fat metabolism, sugar metabolism, and immune responses.
Exploring how pathogens manipulate and hijack essential processes to promote their own life cycles can reveal much about fundamental cellular functions and provide insight into viable treatment options for understudied pathogens.
New research from the Lamason Lab in the Department of Biology at MIT, recently published in the Journal of Cell Biology, has shown that Rickettsia parkeri, a bacterial pathogen that lives freely in the cytosol, can interact in an extensive and stable way with the rough endoplasmic reticulum, forming previously unseen contacts with the organelle.
It’s the first known example of a direct interkingdom contact site between an intracellular bacterial pathogen and a eukaryotic membrane.
The Lamason Lab studies R. parkeri as a model for infection by the more virulent Rickettsia rickettsii. R. rickettsii, carried and transmitted by ticks, causes Rocky Mountain spotted fever. Left untreated, the infection can cause symptoms as severe as organ failure and death.
Rickettsia is difficult to study because it is an obligate pathogen, meaning it can only live and reproduce inside living cells, much like a virus. Researchers must get creative to parse out fundamental questions and molecular players in the R. parkeri life cycle, and much remains unclear about how R. parkeri spreads.
Detour to the junction
First author Yamilex Acevedo-Sánchez, a BSG-MSRP-Bio program alum and a graduate student at the time, stumbled across the ER and R. parkeri interactions while trying to observe Rickettsia reaching a cell junction.
The current model for Rickettsia infection involves R. parkeri spreading cell to cell by traveling to the specialized contact sites between cells and being engulfed by the neighboring cell in order to spread. Listeria monocytogenes, which the Lamason Lab also studies, uses actin tails to forcefully propel itself into a neighboring cell. By contrast, R. parkeri can form an actin tail, but loses it before reaching the cell junction. Somehow, R. parkeri is still able to spread to neighboring cells.
After an MIT seminar about the ER’s lesser-known functions, Acevedo-Sánchez developed a cell line to observe whether Rickettsia might be spreading to neighboring cells by hitching a ride on the ER to reach the cell junction.
Instead, she saw an unexpectedly high percentage of R. parkeri surrounded and enveloped by the ER, at a distance of about 55 nanometers. This distance is significant because the membrane contacts that eukaryotic cells form for interorganelle communication span 10 to 80 nanometers. The researchers ruled out the possibility that what they saw was an immune response, and confirmed that the sections of the ER interacting with the R. parkeri were still connected to the wider network of the ER.
“I’m of the mind that if you want to learn new biology, just look at cells,” Acevedo-Sánchez says. “Manipulating the organelle that establishes contact with other organelles could be a great way for a pathogen to gain control during infection.”
The stable connections were unexpected because the ER is constantly breaking and re-forming its connections, which typically last only seconds or minutes; it was surprising to see the ER stably associating with the bacteria. And because R. parkeri is a cytosolic pathogen that exists freely in the cytosol of the cells it infects, it was also unexpected to see it surrounded by a membrane at all.
Small margins
Acevedo-Sánchez collaborated with the Center for Nanoscale Systems at Harvard University to view her initial observations at higher resolution using focused ion beam scanning electron microscopy. FIB-SEM involves taking a sample of cells and blasting them with a focused ion beam in order to shave off a section of the block of cells. With each layer, a high-resolution image is taken. The result of this process is a stack of images.
From there, Acevedo-Sánchez labeled what the different areas of the images were — such as the mitochondria, Rickettsia, or the ER — and ORS Dragonfly, a machine learning program, sorted through the thousand or so images to identify those categories. That information was then used to create 3D models of the samples.
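As a generic illustration of that last step, the sketch below stacks per-slice label masks into a volume and extracts a surface mesh with scikit-image. It stands in for, and is not, the lab’s Dragonfly workflow; the label IDs and data are invented.

```python
import numpy as np
from skimage import measure

# Hypothetical label IDs for the segmented structures.
BACKGROUND, MITOCHONDRION, RICKETTSIA, ER = 0, 1, 2, 3

# Stand-in for a segmented FIB-SEM stack: 60 slices of 128x128 labeled pixels.
rng = np.random.default_rng(1)
labels = rng.integers(0, 4, size=(60, 128, 128))

# Binary volume for one structure of interest...
rickettsia_volume = (labels == RICKETTSIA).astype(np.uint8)

# ...then a triangle mesh of its surface, ready for 3D rendering.
verts, faces, normals, values = measure.marching_cubes(rickettsia_volume, level=0.5)
print(f"mesh: {len(verts)} vertices, {len(faces)} faces")
```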
Acevedo-Sánchez noted that less than 5 percent of R. parkeri formed connections with the ER — but small quantities of certain characteristics are known to be critical for R. parkeri infection. R. parkeri can exist in two states: motile, with an actin tail, and nonmotile, without it. In mutants unable to form actin tails, R. parkeri are unable to progress to adjacent cells — but in nonmutants, the percentage of R. parkeri that have tails starts at about 2 percent in early infection and never exceeds 15 percent at the height of it.
The ER only interacts with nonmotile R. parkeri, and those interactions increased 25-fold in mutants that couldn’t form tails.
Creating connections
Co-authors Acevedo-Sánchez, Patrick Woida, and Caroline Anderson also investigated possible ways the connections with the ER are mediated. VAP proteins, which mediate ER interactions with other organelles, are known to be co-opted by other pathogens during infection.
During infection by R. parkeri, VAP proteins were recruited to the bacteria; when VAP proteins were knocked out, the frequency of interactions between R. parkeri and the ER decreased, indicating R. parkeri may be taking advantage of these cellular mechanisms for its own purposes during infection.
Although Acevedo-Sánchez now works as a senior scientist at AbbVie, the Lamason Lab is continuing the work of exploring the molecular players that may be involved, how these interactions are mediated, and whether the contacts affect the host or bacteria’s life cycle.
Senior author and associate professor of biology Rebecca Lamason noted that these potential interactions are particularly interesting because bacteria and mitochondria are thought to have evolved from a common ancestor. The Lamason Lab has been exploring whether R. parkeri could form the same membrane contacts that mitochondria do, although they haven’t proven that yet. So far, R. parkeri is the only cytosolic pathogen that has been observed behaving this way.
“It’s not just bacteria accidentally bumping into the ER. These interactions are extremely stable. The ER is clearly extensively wrapping around the bacterium, and is still connected to the ER network,” Lamason says. “It seems like it has a purpose — what that purpose is remains a mystery.”
Is this the new playbook for curing rare childhood diseases?
When his son received a devastating diagnosis, Fernando Goldsztein MBA ’03 founded an initiative to help him and others.
“There is no treatment available for your son. We can’t do anything to help him.”
When Fernando Goldsztein MBA ’03 heard those words, something inside him snapped.
“I refused to accept what the doctors were saying. I transformed my fear into my greatest strength and started fighting.”
Goldsztein’s 12-year-old son Frederico was diagnosed with relapsing medulloblastoma, a life-threatening pediatric brain tumor. Goldsztein's life — and career plan — changed in an instant. He had to learn to become a different kind of leader altogether.
While Goldsztein never set out to become a founder, the MIT Sloan School of Management taught him the importance of networking, building friendships, and making career connections with peers and faculty from all walks of life. He began using those skills in a new way — boldly reaching out to the top medulloblastoma doctors and scientists at hospitals around the world to ask for help.
“I knew that I had to do something to save Frederico, but also the other estimated 15,000 children diagnosed with the disease around the world each year,” he says.
In 2021, Goldsztein launched The Medulloblastoma Initiative (MBI), a nonprofit organization dedicated to finding a cure using a remarkable new model for funding rare disease research.
In just 18 months, the organization — which is still in startup mode — has raised $11 million in private funding and brought together 14 of the world’s most prestigious labs and hospitals from across North America, Europe, and Brazil.
Two promising trials will launch in the coming months, and three additional trials are in the pipeline and currently awaiting U.S. Food and Drug Administration approval.
All of this in an industry that is notorious for bureaucratic red tape, and where the timeline from an initial lab discovery to a patient receiving a first treatment averages seven to 15 years.
While government research grants typically allocate just 4 cents on the dollar toward pediatric cancer research — pennies doled out across multiple labs pursuing uncoordinated efforts — MBI is laser-focused on pushing 100 percent of its funding toward a singular goal, without any overhead or administrative costs.
“There is no time to lose,” Goldsztein says. “We are making science move faster than it ever has before.”
The MBI blueprint for funding cures for rare diseases is replicable, and likely to disrupt the standard way health care research is funded and carried out by radically shortening the timeline.
From despair to strength
After his initial diagnosis at age 9, Frederico went through a nine-hour brain surgery and came to the United States to receive standard treatment. Goldsztein looked on helplessly as his son received radiation and then nine grueling rounds of chemotherapy.
Pioneered in the 1980s, this standard treatment protocol cures 70 percent of children. Still, it leaves most of them with lifelong side effects like cognitive problems, endocrine issues that stunt growth, and secondary tumors. Frederico was on the wrong side of that statistic. Just three years later, his tumor relapsed.
Goldsztein grimaces as he recalls the prognosis he and his wife heard from the doctors.
“It was unbelievable to me that there had been almost no discoveries in 40 years,” he says.
Ultimately, he found hope and partnership in Roger Packer, the director of the Brain Tumor Institute and the Gilbert Family Neurofibromatosis Institute of Children’s National Hospital. He is also the very doctor who created the standard treatment years before.
Packer explains that finding effective therapies for medulloblastoma was elusive for 30 years because the disease is an umbrella term covering 13 types of tumors. Frederico suffers from the most common one, Group 4.
Part of the reason the treatment has not changed is that, until recently, medicine has not advanced enough to detect differences between the different tumor types. Packer explains, “Now with molecular genetic testing and methylation, which is a way to essentially sort tumors, that has changed.”
The problem for Frederico was that very few researchers were working on Group 4, the subtype of medulloblastoma that is the most common, yet also the one that scientists know the least about.
Goldsztein challenged Packer: “If I can get you the funding, what can your lab do to advance medulloblastoma research quickly?”
An open-source consortium model
Packer advised that they work together to “try something different,” instead of just throwing money at research without any guideposts.
“We set up a consortium of leading institutions around the world doing medulloblastoma research, asked them to change their lab approach to focus on the Group 4 tumor, and assigned each lab a question to answer. We charged them with coming up with therapy — not in seven to 10 years, which is the normal transition from discovery to developing a drug and getting it to a patient, but within a two-year timeline,” he says.
Initially, seven labs signed on. Today, the Cure Group 4 Consortium is made up of 14 partners and reads like a who’s who of medulloblastoma heavy hitters: Children’s National Hospital, SickKids, Hopp Children’s Cancer Center, and Texas Children’s Hospital.
Labs can only join the consortium if they agree to follow some unusual rules. As Goldsztein explains, “To be accepted into this group and receive funding, there are no silos, and there is no duplicated work. Everyone has a piece of the puzzle, and we work together to move fast. That is the magic of our model.”
Inspired by MIT’s open-source methods, researchers must share data freely with one another to accelerate the group’s overall progress. This kind of partnership across labs and borders is unprecedented in a highly competitive sector.
Mariano Gargiulo MBA ’03 met Goldsztein on the first day of their MIT Sloan Fellows MBA program orientation and has been his dear friend ever since. An early-stage donor to MBI and a Houston-based executive in the energy sector, Gargiulo sat down with Goldsztein as he first conceptualized MBI’s operating model.
“Usually, startup business models plot out the next 10-15 years; Fernando’s timeline was only two years, and his benchmarks were in three-month increments.” It was audaciously optimistic, says Gargiulo, but so was the founder.
“When I saw it, I did not doubt that he would achieve his goals. I’m seeing Fernando hit those first targets now and it’s amazing to watch,” Gargiulo says.
Children’s National Hospital endorsed MBI in 2023 and invited Goldsztein to sit on its foundation’s board, adding credibility to the initiative and his ability to fundraise more ambitiously.
According to Packer, in the next few months, the first two MBI protocols will reach patients for the first time: an immunotherapy protocol, which “leverages the body’s immune response to target cancer cells more effectively and safely than traditional therapies,” and a medulloblastoma vaccine, which “adapts similar methodologies used in Covid-19 vaccine development. This approach aims to provide a versatile and mobile treatment that could be distributed globally.”
A matter of when
When Goldsztein is not with his own family in Brazil, fundraising, or managing MBI, he is on Zoom with a network of more than 70 other families with children with relapsed medulloblastoma. “I’m not a doctor and I don’t give out medical advice, but with these trials, we are giving each other hope,” he explains.
Hope and purpose are commodities that Goldsztein has in spades. “I don’t understand the idea of doing business and accumulating assets, but not helping others,” he says. He shared that message with an auditorium of his fellow alumni at his 2023 MIT Sloan Reunion.
Frederico, who defied the odds and has lived with the threat of recurrence, recently graduated from high school. He is interested in international relations and passionate about photography. “This is about finding a cure for Frederico and for all kids,” Goldsztein says.
When asked how the world would be impacted if MBI found a cure for medulloblastoma, Goldsztein shakes his head.
“We are going to find the cure. It’s not if, it’s a matter of when.”
His next goal is to scale MBI and have it serve as a resource for groups that want to replicate its playbook to solve other childhood diseases.
“I’m never going to stop,” he says.
How good old mud can lower building costs
Builders pour concrete into temporary molds called formwork. MIT researchers invented a way to make these structures out of on-site soil.
Buildings cost a lot these days. But when concrete buildings are being constructed, there’s another material that can make them less expensive: mud.
MIT researchers have developed a method to use lightly treated mud, including soil from a building site, as the “formwork” molds into which concrete is poured. The technique deploys 3D printing and can replace the more costly method of building elaborate wood formworks for concrete construction.
“What we’ve demonstrated is that we can essentially take the ground we’re standing on, or waste soil from a construction site, and transform it into accurate, highly complex, and flexible formwork for customized concrete structures,” says Sandy Curth, a PhD student in MIT’s Department of Architecture who has helped spearhead the project.
The approach could help concrete-based construction take place more quickly and efficiently. It could also reduce costs and carbon emissions.
“It has the potential for immediate impact and doesn’t require changing the nature of the construction industry,” says Curth, who doubles as director of the Programmable Mud Initiative.
Curth has co-authored multiple papers about the method, most recently, “EarthWorks: Zero waste 3D printed earthen formwork for shape-optimized, reinforced concrete construction,” published in the journal Construction and Building Materials. Curth wrote that paper with nine co-authors, including Natalie Pearl, Emily Wissemann, Tim Cousin, Latifa Alkhayat, Vincent Jackow, Keith Lee, and Oliver Moldow, all MIT students; and Mohamed Ismail of the University of Virginia.
The paper’s final two co-authors are Lawrence Sass, professor and chair of the Computation Group in MIT’s Department of Architecture, and Caitlin Mueller, an associate professor at MIT in the Department of Architecture and the Department of Civil and Environmental Engineering. Sass is Curth’s graduate advisor.
Building a structure once, not twice
Constructing wooden formwork for a building is costly and time-consuming. There is a saying in the industry that concrete structures have to be built twice — once through the wooden formwork, then again in the concrete poured into the forms.
Using soil for the formwork could change that process. While it might seem like an unusual material compared to the solidity of wooden formwork, soil is firm enough to handle poured concrete. The EarthWorks method, as it’s known, introduces some additive materials, such as straw, and a wax-like coating for the soil material to prevent any water from draining out of the concrete. Using large-scale 3D printing, the researchers can take soil from a construction site and print it into a custom-designed formwork shape.
“What we’ve done is make a system where we are using what is largely straightforward, large-scale 3D printing technology, and making it highly functional for the material,” Curth says. “We found a way to make formwork that is infinitely recyclable. It’s just dirt.”
Beyond cost and ease of acquiring the materials, the method offers at least two other interrelated advantages. One is environmental: Concrete construction accounts for as much as 8 percent of global carbon emissions, and this approach supports substantial emissions reductions, both through the formwork material itself and the ease of shaping the resulting concrete to only use what is structurally required. Using a method called shape optimization, developed for reinforced concrete in previous research by Ismail and Mueller, it is possible to reduce the carbon emissions of concrete structural frames by more than 50 percent.
“The EarthWorks technique brings these complex, optimized structures much closer to built reality by offering a low-cost, low-carbon fabrication technique for formwork that can be deployed anywhere in the world,” Mueller says.
“It’s an enabling technology to make reinforced concrete buildings much, much more materially efficient, which has a direct impact on global carbon emissions,” Curth adds.
More generally, the EarthWorks method allows architects and engineers to create customized concrete shapes more easily, due to the flexibility of the formwork material. It is easier to cast concrete in an unusual shape when molding it with soil, not wood.
“What’s cool here is we’re able to make shape-optimized building elements for the same amount of time and energy it would take to make rectilinear building elements,” Curth says.
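As a toy illustration of what shape optimization buys, and emphatically not the Ismail-Mueller method itself, the sketch below sizes a simply supported beam’s depth to follow its bending-moment diagram and compares the concrete volume against a constant-depth beam. All numbers are invented.

```python
import numpy as np

span, load, width = 6.0, 10e3, 0.3   # span (m), uniform load (N/m), width (m)
sigma = 10e6                         # allowable bending stress (Pa), illustrative

x = np.linspace(0.01, span - 0.01, 500)
moment = load * x * (span - x) / 2.0           # bending moment along the span (N*m)

# Rectangular section: M = sigma * width * depth^2 / 6  =>  required depth(x)
depth = np.sqrt(6.0 * moment / (sigma * width))

dx = x[1] - x[0]
prismatic = width * depth.max() * span         # constant depth sized for the peak
shaped = width * np.sum(depth) * dx            # depth follows the moment diagram
print(f"material saved: {1 - shaped / prismatic:.0%}")  # roughly a fifth, in this toy
```

In this idealized case the saving works out to 1 minus pi/4, about 21 percent; real structural frames, with reinforcement and code requirements, behave differently, which is why the published figure of more than 50 percent comes from the researchers’ own optimization method rather than from arithmetic like this.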
Group project
As Curth notes, the projects developed by the Programmable Mud group are highly collaborative. He emphasizes the roles played by both Sass, a leader in using computation to help develop low-cost housing, and Mueller, whose work also deploys new computational methods to assess innovative structural ideas in architecture.
“Concrete is a wonderful material when it is used thoughtfully and efficiently, which is inherently connected to how it is shaped,” Mueller says. “However, the minimal forms that emerge from optimization are at odds with conventional construction logics. It is very exciting to advance a technique that subverts this supposed tradeoff, showing that performance-driven complexity can be achieved with low carbon emissions and low cost.”
While finishing his doctorate at MIT, Curth has also founded a firm, FORMA Systems, through which he hopes to take the EarthWorks method into the construction industry. Using this approach does mean builders would need to have a large 3D printer on-site. However, they would also save significantly on materials costs, he says.
Further in the future, Curth envisions a time when the method could be used not just for formwork, but to construct, say, a two-story residential building made entirely out of earth. Of course, some parts of the world, including the U.S., extensively use adobe architecture already, but the idea here would be to systematize the production of such homes and make them inexpensive in the process.
In either case, Curth says, as formwork for concrete or by itself, we now have new ways to apply soil to construction.
“People have built with earth for as long as we’ve had buildings, but given contemporary demands for urban concrete buildings, this approach basically decouples cost from complexity,” Curth says. “I guarantee you we can start to make higher-performance buildings for less money.”
The project was supported by the Sidara Urban Research Seed Fund administered by MIT’s Leventhal Center for Advanced Urbanism, and by lyndaLABS.
Building resiliency
In a new book, Lawrence Vale spotlights projects from around the globe that help insulate communities from climate shocks.
Several years ago, the residents of a manufactured-home neighborhood in southeast suburban Houston, not far from the Buffalo Bayou, took a major step in dealing with climate problems: They bought the land under their homes. Then they installed better drainage and developed strategies to share expertise and tools for home repairs. The result? The neighborhood made it through Hurricane Harvey in 2017 and a winter freeze in 2021 without major damage.
The neighborhood is part of a U.S. movement toward the Resident Owned Community (ROC) model for manufactured home parks. Many people in manufactured homes — mobile homes — do not own the land under them. But if the residents of a manufactured-home park can form an ROC, they can take action to adapt to climate risks — and ease the threat of eviction. With an ROC, manufactured-home residents can be there to stay.
That speaks to a larger issue: In cities, lower-income residents are often especially vulnerable to natural hazards, such as flooding, extreme heat, and wildfire. But efforts aimed at helping cities as a whole withstand these disasters can lead to interventions that displace already-disadvantaged residents — by turning a low-lying neighborhood into a storm buffer, for instance.
“The global climate crisis has very differential effects on cities, and neighborhoods within cities,” says Lawrence Vale, a professor of urban studies at MIT and co-author, with Zachary B. Lamb PhD ’18, an assistant professor at the University of California at Berkeley, of a new book on the subject, “The Equitably Resilient City,” published by the MIT Press.
In the book, the scholars delve into 12 case studies from around the globe which, they believe, have it both ways: Low- and middle-income communities have driven climate progress through tangible built projects, while also keeping people from being displaced, and indeed helping them participate in local governance and neighborhood decision-making.
“We can either dive into despair about climate issues, or think they’re solvable and ask what it takes to succeed in a more equitable way,” says Vale, who is the Ford Professor of Urban Design and Planning at MIT. “This book is asking how people look at problems more holistically — to show how environmental impacts are integrated with their livelihoods, with feeling they have security from displacement, and with being empowered to share in the governance of where they live.”
As Lamb notes, “Pursuing equitable urban climate adaptation requires both changes in the physical built environment of cities and innovations in institutions and governance practices to address deep-seated causes of inequality.”
Twelve projects, four elements
Research for “The Equitably Resilient City” began with exploration of about 200 potential cases, and ultimately focused on 12 projects from around the globe, including the U.S., Brazil, Thailand, and France. Vale and Lamb, coordinating with locally-based research teams, visited these diverse sites and conducted interviews in nine languages.
All 12 projects work on multiple levels at once: They are steps toward environmental progress that also help local communities in civic and economic terms. The book uses the acronym LEGS (“livelihood, environment, governance, and security”) to encapsulate this need to make equitable progress on four different fronts.
“Doing one of those things well is worth recognition, and doing all of them well is exciting,” Vale says. “It’s important to understand not just what these communities did, but how they did it and whose views were involved. These 12 cases are not a random sample. The book looks for people who are partially succeeding at difficult things in difficult circumstances.”
One case study is set in São Paulo, Brazil, where low-income residents of a hilly favela benefited from new housing in the area on undeveloped land that is less prone to slides. In San Juan, Puerto Rico, residents of low-lying neighborhoods abutting a water channel formed a durable set of community groups to create a fairer solution to flooding: Although the channel needed to be re-widened, the local coalition insisted on limiting displacement, supporting local livelihoods, and improving environmental conditions and public space.
“There is a backlash to older practices,” Vale says, referring to the large-scale urban planning and infrastructure projects of the mid-20th century, which often ignored community input. “People saw what happened during the urban renewal era and said, ‘You’re not going to do that to us again.’”
Indeed, one through-line in “The Equitably Resilient City” is that cities, like all places, can be contested political terrain. Often, solid solutions emerge when local groups organize, advocate for new solutions, and eventually gain enough traction to enact them.
“Every one of our examples and cases has probably 15 or 20 years of activity behind it, as well as engagements with a much deeper history,” Vale says. “They’re all rooted in a very often troubled [political] context. And yet these are places that have made progress possible.”
Think locally, adapt anywhere
Another motif of “The Equitably Resilient City” is that local progress matters greatly, for a few reasons — including the value of having communities develop projects that meet their own needs, based on their input. Vale and Lamb are interested in projects even if they are very small-scale, and devote one chapter of the book to the Paris OASIS program, which has developed a series of cleverly designed, heavily tree-dotted school playgrounds across Paris. These projects provide environmental education opportunities and help mitigate flooding and urban heat while adding CO2-harnessing greenery to the cityscape.
An individual park, by itself, can only do so much, but the concept behind it can be adopted by anyone.
“This book is mostly centered on local projects rather than national schemes,” Vale says. “The hope is they serve as an inspiration for people to adapt to their own situations.”
After all, the urban geography and governance of places such as Paris or São Paulo differ widely. But efforts to improve public open space or well-located, inexpensive housing stock apply in cities across the world.
Similarly, the authors devote a chapter to work in the Cully neighborhood in Portland, Oregon, where community leaders have instituted a raft of urban environmental improvements while creating and preserving more affordable housing. The idea in the Cully area, as in all these cases, is to make places more resistant to climate change while enhancing them as good places to live for those already there.
“Climate adaptation is going to mobilize enormous public and private resources to reshape cities across the globe,” Lamb notes. “These cases suggest pathways where those resources can make cities both more resilient in the face of climate change and more equitable. In fact, these projects show how making cities more equitable can be part of making them more resilient.”
Other scholars have praised the book. Eric Klinenberg, director of New York University’s Institute for Public Knowledge, has called it “at once scholarly, constructive, and uplifting, a reminder that better, more just cities remain within our reach.”
Vale also teaches some of the book’s concepts in his classes, finding that MIT students, wherever they are from, enjoy the idea of thinking creatively about climate resilience.
“At MIT, students want to find ways of applying technical skills to urgent global challenges,” Vale says. “I do think there are many opportunities, especially at a time of climate crisis. We try to highlight some of the solutions that are out there. Give us an opportunity, and we’ll show you what a place can be.”
A platform to expedite clean energy projects
Station A, founded by MIT alumni, makes the process of buying clean energy simple for property owners.
Businesses and developers often face a steep learning curve when installing clean energy technologies, such as solar installations and EV chargers. To get a fair deal, they need to navigate a complex bidding process that involves requesting proposals, evaluating bids, and ultimately contracting with a provider.
Now the startup Station A, founded by a pair of MIT alumni and their colleagues, is streamlining the process of deploying clean energy. The company has developed a marketplace for clean energy that helps real estate owners and businesses analyze properties to calculate returns on clean energy projects, create detailed project listings, collect and compare bids, and select a provider.
The platform helps real estate owners and businesses adopt clean energy technologies like solar panels, batteries, and EV chargers at the lowest possible prices, in places with the highest potential to reduce energy costs and emissions.
“We do a lot to make adopting clean energy simple,” explains Manos Saratsis SMArchS ’15, who co-founded Station A with Kevin Berkemeyer MBA ’14. “Imagine if you were trying to buy a plane ticket and your travel agent only used one carrier. It would be more expensive, and you couldn’t even get to some places. Our customers want to have multiple options and easily learn about the track record of whoever they’re working with.”
Station A has already partnered with some of the largest real estate companies in the country, some with thousands of properties, to reduce the carbon footprint of their buildings. The company is also working with grocery chains, warehouses, and other businesses to accelerate the clean energy transition.
“Our platform uses a lot of AI and machine learning to turn addresses into building footprints and to understand their electricity costs, available incentives, and where they can expect the highest ROI,” says Saratsis, who serves as Station A’s head of product. “This would normally require tens or hundreds of thousands of dollars’ worth of consulting time, and we can do it for next to no money very quickly.”
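As a rough illustration of the kind of screen such a platform automates, consider a simple payback estimate for a rooftop solar project. Every number and the formula below are illustrative assumptions, not Station A's actual analytics.

```python
# Hypothetical back-of-the-envelope screen; all parameters are assumed
# for illustration and do not reflect Station A's proprietary model.
def simple_payback(system_kw, capacity_factor, tariff_usd_per_kwh,
                   capex_usd_per_kw, incentive_fraction=0.3):
    """Return (annual savings in USD, simple payback in years)."""
    annual_kwh = system_kw * capacity_factor * 8760.0   # hours per year
    annual_savings = annual_kwh * tariff_usd_per_kwh
    net_capex = system_kw * capex_usd_per_kw * (1.0 - incentive_fraction)
    return annual_savings, net_capex / annual_savings

savings, payback = simple_payback(500, 0.17, 0.14, 1800)
print(f"~${savings:,.0f}/yr in avoided energy cost, ~{payback:.1f}-year payback")
```

The value of automating a screen like this across thousands of addresses is that the inputs, such as building footprint, tariff, and incentives, can be inferred per site rather than gathered through consulting engagements.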
Building the foundation
As a graduate student in MIT’s Department of Architecture, Saratsis studied environmental design modeling, using data from sources like satellite imagery to understand how communities consume energy and to propose the most impactful potential clean energy solutions. He says classes with professors Christoph Reinhart and Kent Larson were particularly eye-opening.
“My ability to build a thermal energy model and simulate electricity usage in a building started at MIT,” Saratsis says.
Berkemeyer served as president of the MIT Energy Club while at the MIT Sloan School of Management. He was also a research assistant at the MIT Energy Initiative, contributing to the Future of Solar report, and a teaching assistant for course 15.366 (Climate and Energy Ventures). He says classes in entrepreneurship with professor of the practice Bill Aulet and in sustainability with Senior Lecturer Jason Jay were formative. Prior to his studies at MIT, Berkemeyer had extensive experience developing solar and storage projects and selling clean energy products to commercial customers. The eventual co-founders didn’t cross paths at MIT, but they ended up working together at the utility NRG Energy after graduation.
“As co-founders, we saw an opportunity to transform how businesses approach clean energy,” says Berkemeyer, who is now Station A’s CEO. “Station A was born out of a shared belief that data and transparency could unlock the full potential of clean energy technologies for everyone.”
At NRG, the founders built software to help identify decarbonization opportunities for customers without having to send analysts to the sites for in-person audits.
“If they worked with a big grocery chain or a big retailer, we would use proprietary analytics to evaluate that portfolio and come up with recommendations for things like solar projects, energy efficiency, and demand response that would yield positive returns within a year,” Saratsis explains.
The tools were a huge success within the company. In 2018, the pair, along with co-founders Jeremy Lucas and Sam Steyer, decided to spin out the technology into Station A.
The founders started by working with energy companies but soon shifted their focus to real estate owners with huge portfolios and large businesses with long-term leasing contracts. Many customers have hundreds or even thousands of addresses to evaluate. Using just the addresses, Station A can provide detailed financial return estimates for clean energy investments.
In 2020, the company widened its focus from selling access to its analytics to creating a marketplace for clean energy transactions, helping businesses run the competitive bidding process for clean energy projects. After a project is installed, Station A can also evaluate whether it’s achieving its expected performance and track financial returns.
“When I talk to people outside the industry, they’re like, ‘Wait, this doesn’t exist already?’” Saratsis says. “It’s kind of crazy, but the industry is still very nascent, and no one’s been able to figure out a way to run the bidding process transparently and at scale.”
From the campus to the world
Today, about 2,500 clean energy developers are active on Station A’s platform. A number of large real estate investment trusts also use its services, in addition to businesses like HP, Nestle, and Goldman Sachs. If Station A were a developer, Saratsis says it would now rank in the top 10 in terms of annual solar deployments.
The founders credit their time at MIT with helping them scale.
“A lot of these relationships originated within the MIT network, whether through folks we met at Sloan or through engagement with MIT,” Saratsis says. “So much of this business is about reputation, and we’ve established a really good reputation.”
Since its founding, Station A has also been sponsoring classes at the Sustainability Lab at MIT, where Saratsis conducted research as a student. As they work to grow Station A’s offerings, the founders say they use the skills they gained as students every day.
“Everything we do around building analysis is inspired in some ways by the stuff that I did when I was at MIT,” Saratsis says.
“Station A is just getting started,” Berkemeyer says. “Clean energy adoption isn’t just about technology — it’s about making the process seamless and accessible. That’s what drives us every day, and we’re excited to lead this transformation.”
A new vaccine approach could help combat future coronavirus pandemics
The nanoparticle-based vaccine shows promise against many variants of SARS-CoV-2, as well as related sarbecoviruses that could jump to humans.
A new experimental vaccine developed by researchers at MIT and Caltech could offer protection against emerging variants of SARS-CoV-2, as well as related coronaviruses, known as sarbecoviruses, that could spill over from animals to humans.
In addition to SARS-CoV-2, the virus that causes COVID-19, sarbecoviruses — a subgenus of coronaviruses — include the virus that led to the outbreak of the original SARS in the early 2000s. Sarbecoviruses that currently circulate in bats and other mammals may also hold the potential to spread to humans in the future.
By attaching up to eight different versions of sarbecovirus receptor-binding proteins (RBDs) to nanoparticles, the researchers created a vaccine that generates antibodies that recognize regions of RBDs that tend to remain unchanged across all strains of the viruses. That makes it much more difficult for viruses to evolve to escape vaccine-induced antibodies.
“This work is an example of how bringing together computation and immunological experiments can be fruitful,” says Arup K. Chakraborty, the John M. Deutch Institute Professor at MIT and a member of MIT’s Institute for Medical Engineering and Science and the Ragon Institute of MIT, MGH and Harvard University.
Chakraborty and Pamela Bjorkman, a professor of biology and biological engineering at Caltech, are the senior authors of the study, which appears today in Cell. The paper’s lead authors are Eric Wang PhD ’24, Caltech postdoc Alexander Cohen, and Caltech graduate student Luis Caldera.
Mosaic nanoparticles
The new study builds on a project begun in Bjorkman’s lab, in which she and Cohen created a “mosaic” 60-mer nanoparticle that presents eight different sarbecovirus RBD proteins. The RBD is the part of the viral spike protein that helps the virus get into host cells. It is also the region of the coronavirus spike protein that is usually targeted by antibodies against sarbecoviruses.
RBDs contain some regions that are variable and can easily mutate to escape antibodies. Most of the antibodies generated by mRNA COVID-19 vaccines target those variable regions because they are more easily accessible. That is one reason why mRNA vaccines need to be updated to keep up with the emergence of new strains.
If researchers could create a vaccine that stimulates production of antibodies that target RBD regions that can’t easily change and are shared across viral strains, it could offer broader protection against a variety of sarbecoviruses.
Such a vaccine would have to stimulate B cells that have receptors (which are later secreted as antibodies) targeting those shared, or “conserved,” regions. When B cells circulating in the body encounter a vaccine or other antigen, their B cell receptors, each of which has two “arms,” are more effectively activated if two copies of the antigen are available for binding, one to each arm. The conserved regions tend to be less accessible to B cell receptors, so if a nanoparticle vaccine presents just one type of RBD, B cells with receptors that bind to the more accessible variable regions are most likely to be activated.
To overcome this, the Caltech researchers designed a nanoparticle vaccine that includes 60 copies of RBDs from eight different related sarbecoviruses, which have different variable regions but similar conserved regions. Because eight different RBDs are displayed on each nanoparticle, it’s unlikely that two identical RBDs will end up next to each other. Therefore, when a B cell receptor encounters the nanoparticle immunogen, the B cell is more likely to become activated if its receptor can recognize the conserved regions of the RBD.
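A toy calculation shows why co-display helps. If eight RBD types are mixed randomly across 60 sites, only about one neighboring pair in eight carries matching RBDs. The sketch below simulates this on a simplified ring of sites; the real particle is icosahedral, so the geometry here is an assumption for illustration only.

```python
import random

# Simplified model: 60 sites on a ring, 8 RBD types mixed at random.
# (The actual 60-mer geometry is icosahedral; the 1-in-8 logic is the same.)
def identical_neighbor_fraction(n_sites=60, n_types=8, trials=10_000):
    total = 0
    for _ in range(trials):
        sites = [random.randrange(n_types) for _ in range(n_sites)]
        total += sum(sites[i] == sites[(i + 1) % n_sites]
                     for i in range(n_sites))
    return total / (trials * n_sites)

print(f"identical adjacent pairs: {identical_neighbor_fraction():.3f}")  # ~1/8
```

With matching variable regions so rarely adjacent, B cell receptors that bridge two copies through the conserved regions gain the two-arm binding advantage.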
“The concept behind the vaccine is that by co-displaying all these different RBDs on the nanoparticle, you are selecting for B cells that recognize the conserved regions that are shared between them,” Cohen says. “As a result, you’re selecting for B cells that are more cross-reactive. Therefore, the antibody response would be more cross-reactive and you could potentially get broader protection.”
In studies conducted in animals, the researchers showed that this vaccine, known as mosaic-8, produced strong antibody responses against diverse strains of SARS-CoV-2 and other sarbecoviruses and protected from challenges by both SARS-CoV-2 and SARS-CoV (original SARS).
Broadly neutralizing antibodies
After these studies were published in 2021 and 2022, the Caltech researchers teamed up with Chakraborty’s lab at MIT to pursue computational strategies that could allow them to identify RBD combinations that would generate even better antibody responses against a wider variety of sarbecoviruses.
Led by Wang, the MIT researchers pursued two different strategies — first, a large-scale computational screen of many possible mutations to the RBD of SARS-CoV-2, and second, an analysis of naturally occurring RBD proteins from zoonotic sarbecoviruses.
For the first approach, the researchers began with the original strain of SARS-CoV-2 and generated sequences of about 800,000 RBD candidates by making substitutions in locations that are known to affect antibody binding to variable portions of the RBD. Then, they screened those candidates for their stability and solubility, to make sure they could withstand attachment to the nanoparticle and injection as a vaccine.
From the remaining candidates, the researchers chose 10 based on how different their variable regions were. They then used these to create mosaic nanoparticles coated with either two or five different RBD proteins (mosaic-2COM and mosaic-5COM).
In their second approach, instead of mutating the RBD sequences, the researchers chose seven naturally occurring RBD proteins, using computational techniques to select RBDs that were different from each other in regions that are variable, but retained their conserved regions. They used these to create another vaccine, mosaic-7COM.
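The papers' actual selection also weighed stability and solubility, but the diversity step can be pictured with a standard greedy max-min routine: repeatedly add the candidate whose closest already-chosen sequence is farthest away. The sequences below are random stand-ins, not real RBDs.

```python
import random

def hamming(a, b):
    """Count positions where two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def greedy_diverse_subset(candidates, k):
    """Greedy max-min selection of k mutually dissimilar sequences."""
    chosen = [candidates[0]]
    while len(chosen) < k:
        best = max((s for s in candidates if s not in chosen),
                   key=lambda s: min(hamming(s, c) for c in chosen))
        chosen.append(best)
    return chosen

random.seed(0)
aa = "ACDEFGHIKLMNPQRSTVWY"
pool = ["".join(random.choice(aa) for _ in range(30)) for _ in range(200)]
picks = greedy_diverse_subset(pool, 7)   # e.g., seven diverse "variable regions"
```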
Once the researchers produced the RBD-nanoparticles, they evaluated each one in mice. After each mouse received three doses of one of the vaccines, the researchers analyzed how well the resulting antibodies bound to and neutralized seven variants of SARS-CoV-2 and four other sarbecoviruses.
They also compared the mosaic nanoparticle vaccines to a nanoparticle with only one type of RBD displayed, and to the original mosaic-8 particle from their 2021, 2022, and 2024 studies. They found that mosaic-2COM and mosaic-5COM outperformed both of those vaccines, and mosaic-7COM showed the best responses of all. Mosaic-7COM elicited antibodies with binding to most of the viruses tested, and these antibodies were also able to prevent the viruses from entering cells.
The researchers saw similar results when they tested the new vaccines in mice that were previously vaccinated with a bivalent mRNA COVID-19 vaccine.
“We wanted to simulate the fact that people have already been infected and/or vaccinated against SARS-CoV-2,” Wang says. “In pre-vaccinated mice, mosaic-7COM is consistently giving the highest binding titers for both SARS-CoV-2 variants and other sarbecoviruses.”
Bjorkman’s lab has received funding from the Coalition for Epidemic Preparedness Innovations to do a clinical trial of the mosaic-8 RBD-nanoparticle. They also hope to move mosaic-7COM, which performed better in the current study, into clinical trials. The researchers plan to work on redesigning the vaccines so that they could be delivered as mRNA, which would make them easier to manufacture.
The research was funded by a National Science Foundation Graduate Research Fellowship, the National Institutes of Health, Wellcome Leap, the Bill and Melinda Gates Foundation, the Coalition for Epidemic Preparedness Innovations, and the Caltech Merkin Institute for Translational Research.
Toward video generative models of the molecular world
Starting with a single frame in a simulation, a new system uses generative AI to emulate the dynamics of molecules, connecting static molecular structures and developing blurry pictures into videos.
As the capabilities of generative AI models have grown, you've probably seen how they can transform simple text prompts into hyperrealistic images and even extended video clips.
More recently, generative AI has shown potential in helping chemists and biologists explore static molecules, like proteins and DNA. Models like AlphaFold can predict molecular structures to accelerate drug discovery, and the MIT-assisted “RFdiffusion,” for example, can help design new proteins. One challenge, though, is that molecules are constantly moving and jiggling, which is important to model when constructing new proteins and drugs. Simulating these motions on a computer using physics — a technique known as molecular dynamics — can be very expensive, requiring billions of time steps on supercomputers.
As a step toward simulating these behaviors more efficiently, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and Department of Mathematics researchers have developed a generative model that learns from prior data. The team’s system, called MDGen, can take a frame of a 3D molecule and simulate what will happen next like a video, connect separate stills, and even fill in missing frames. By hitting the “play button” on molecules, the tool could potentially help chemists design new molecules and closely study how well their drug prototypes for cancer and other diseases would interact with the molecular structures they intend to affect.
Co-lead author Bowen Jing SM ’22 says that MDGen is an early proof of concept, but it suggests the beginning of an exciting new research direction. “Early on, generative AI models produced somewhat simple videos, like a person blinking or a dog wagging its tail,” says Jing, a PhD student at CSAIL. “Fast forward a few years, and now we have amazing models like Sora or Veo that can be useful in all sorts of interesting ways. We hope to instill a similar vision for the molecular world, where dynamics trajectories are the videos. For example, you can give the model the first and 10th frame, and it’ll animate what’s in between, or it can remove noise from a molecular video and guess what was hidden.”
The researchers say that MDGen represents a paradigm shift from previous comparable works with generative AI in a way that enables much broader use cases. Previous approaches were “autoregressive,” meaning they relied on the previous still frame to build the next, starting from the very first frame to create a video sequence. In contrast, MDGen generates the frames in parallel with diffusion. This means MDGen can be used to, for example, connect frames at the endpoints, or “upsample” a low frame-rate trajectory in addition to pressing play on the initial frame.
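In spirit only (MDGen's actual architecture is a learned diffusion model over molecular trajectories), mask-conditioned parallel generation can be pictured with the toy refinement loop below: every frame is updated at once, and the known frames are clamped at each step so the unknown frames settle into a consistent path between them. The smoothing "denoiser" here is a placeholder assumption, not the trained model.

```python
import numpy as np

# Toy mask-conditioned generation: all frames refined in parallel,
# with known frames clamped each step (stand-in for a learned denoiser).
rng = np.random.default_rng(0)
T, D = 12, 3                          # frames, coordinates per frame
known = np.zeros(T, dtype=bool)
known[[0, -1]] = True                 # condition on first and last frame
target = rng.normal(size=(T, D))      # placeholder endpoint coordinates

x = rng.normal(size=(T, D))           # unknown frames start as pure noise
for step in range(200):
    x[known] = target[known]          # clamp the conditioning frames
    neighbor_avg = 0.5 * (np.roll(x, 1, axis=0) + np.roll(x, -1, axis=0))
    noise = rng.normal(scale=1.0 - step / 200.0, size=x.shape)
    update = 0.9 * neighbor_avg + 0.1 * x + 0.05 * noise
    x = np.where(known[:, None], x, update)   # refine only unknown frames
# x now holds a smooth bridge between the two clamped endpoint frames
```

The same masking idea covers the other use cases: clamp every other frame to "upsample" a low frame-rate trajectory, or clamp only the first frame to press play.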
This work was presented in a paper at the Conference on Neural Information Processing Systems (NeurIPS) this past December. Last summer, it was recognized for its potential commercial impact at the International Conference on Machine Learning’s ML4LMS Workshop.
Some small steps forward for molecular dynamics
In experiments, Jing and his colleagues found that MDGen’s simulations were similar to running the physical simulations directly, while producing trajectories 10 to 100 times faster.
The team first tested their model’s ability to take in a 3D frame of a molecule and generate the next 100 nanoseconds. Their system pieced together successive 10-nanosecond blocks for these generations to reach that duration. The team found that MDGen was able to compete with the accuracy of a baseline model, while completing the video generation process in roughly a minute — a mere fraction of the three hours that it took the baseline model to simulate the same dynamic.
When given the first and last frame of a one-nanosecond sequence, MDGen also modeled the steps in between. The researchers’ system demonstrated a degree of realism in over 100,000 different predictions: It simulated more likely molecular trajectories than its baselines on clips shorter than 100 nanoseconds. In these tests, MDGen also indicated an ability to generalize on peptides it hadn’t seen before.
MDGen’s capabilities also include simulating frames within frames, “upsampling” the steps between each nanosecond to capture faster molecular phenomena more adequately. It can even “inpaint” structures of molecules, restoring information about them that was removed. These features could eventually be used by researchers to design proteins based on a specification of how different parts of the molecule should move.
Toying around with protein dynamics
Jing and co-lead author Hannes Stärk say that MDGen is an early sign of progress toward generating molecular dynamics more efficiently. Still, they lack the data to make these models immediately impactful in designing drugs or molecules that induce the movements chemists will want to see in a target structure.
The researchers aim to scale MDGen from modeling molecules to predicting how proteins will change over time. “Currently, we’re using toy systems,” says Stärk, also a PhD student at CSAIL. “To enhance MDGen’s predictive capabilities to model proteins, we’ll need to build on the current architecture and data available. We don’t have a YouTube-scale repository for those types of simulations yet, so we’re hoping to develop a separate machine-learning method that can speed up the data collection process for our model.”
For now, MDGen presents an encouraging path forward in modeling molecular changes invisible to the naked eye. Chemists could also use these simulations to delve deeper into the behavior of medicine prototypes for diseases like cancer or tuberculosis.
“Machine learning methods that learn from physical simulation represent a burgeoning new frontier in AI for science,” says Bonnie Berger, MIT Simons Professor of Mathematics, CSAIL principal investigator, and senior author on the paper. “MDGen is a versatile, multipurpose modeling framework that connects these two domains, and we’re very excited to share our early models in this direction.”
“Sampling realistic transition paths between molecular states is a major challenge,” says fellow senior author Tommi Jaakkola, who is the MIT Thomas Siebel Professor of electrical engineering and computer science and the Institute for Data, Systems, and Society, and a CSAIL principal investigator. “This early work shows how we might begin to address such challenges by shifting generative modeling to full simulation runs.”
Researchers across the field of bioinformatics have heralded this system for its ability to simulate molecular transformations. “MDGen models molecular dynamics simulations as a joint distribution of structural embeddings, capturing molecular movements between discrete time steps,” says Chalmers University of Technology associate professor Simon Olsson, who wasn’t involved in the research. “Leveraging a masked learning objective, MDGen enables innovative use cases such as transition path sampling, drawing analogies to inpainting trajectories connecting metastable phases.”
The researchers’ work on MDGen was supported, in part, by the National Institute of General Medical Sciences, the U.S. Department of Energy, the National Science Foundation, the Machine Learning for Pharmaceutical Discovery and Synthesis Consortium, the Abdul Latif Jameel Clinic for Machine Learning in Health, the Defense Threat Reduction Agency, and the Defense Advanced Research Projects Agency.
MIT physicists have created a new ultrathin, two-dimensional material with unusual magnetic properties that initially surprised the researchers before they went on to solve the complicated puzzle behind those properties’ emergence. As a result, the work introduces a new platform for studying how materials behave at the most fundamental level — the world of quantum physics.
Ultrathin materials made of a single layer of atoms have riveted scientists’ attention since the discovery of the first such material — graphene, composed of carbon — about 20 years ago. Among other advances since then, researchers have found that stacking individual sheets of the 2D materials, and sometimes twisting them at a slight angle to each other, can give them new properties, from superconductivity to magnetism. Enter the field of twistronics, which was pioneered at MIT by Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics at MIT.
In the current research, reported in the Jan. 7 issue of Nature Physics, the scientists, led by Jarillo-Herrero, worked with three layers of graphene. Each layer was twisted on top of the next at the same angle, creating a helical structure akin to the DNA helix or a hand of three cards that are fanned apart.
“Helicity is a fundamental concept in science, from basic physics to chemistry and molecular biology. With 2D materials, one can create special helical structures, with novel properties which we are just beginning to understand. This work represents a new twist in the field of twistronics, and the community is very excited to see what else we can discover using this helical materials platform!” says Jarillo-Herrero, who is also affiliated with MIT’s Materials Research Laboratory.
Do the twist
Twistronics can lead to new properties in ultrathin materials because arranging sheets of 2D materials in this way results in a unique pattern called a moiré lattice. And a moiré pattern, in turn, has an impact on the behavior of electrons.
“It changes the spectrum of energy levels available to the electrons and can provide the conditions for interesting phenomena to arise,” says Sergio C. de la Barrera, one of three co-first authors of the recent paper. De la Barrera, who conducted the work while a postdoc at MIT, is now an assistant professor at the University of Toronto.
In the current work, the helical structure created by the three graphene layers forms two moiré lattices. One is created by the first two overlapping sheets; the other is formed between the second and third sheets.
The two moiré patterns together form a third moiré, a supermoiré, or “moiré of a moiré,” says Li-Qiao Xia, a graduate student in MIT physics and another of the three co-first authors of the Nature Physics paper. “It’s like a moiré hierarchy.” While the first two moiré patterns are only nanometers, or billionths of a meter, in scale, the supermoiré appears at a scale of hundreds of nanometers superimposed over the other two. You can only see it if you zoom out to get a much wider view of the system.
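The length scales follow from the standard small-angle moiré relation, a textbook result rather than anything specific to this paper: two lattices with period $a$ twisted by angle $\theta$ beat at a much longer period, and two nearly identical moiré lattices beat again to give the supermoiré.

```latex
% Small-angle moire relation (standard result):
%   a      lattice constant of each sheet
%   \theta twist angle between adjacent sheets
\lambda_{\text{moir\'e}} \;=\; \frac{a}{2\sin(\theta/2)} \;\approx\; \frac{a}{\theta}
% Example: graphene, a \approx 0.246\,\mathrm{nm}, \theta = 1^\circ
% gives \lambda \approx 14\,\mathrm{nm}; the two moire lattices of the
% helical trilayer are nearly equal in period, so their own beat
% pattern (the supermoire) stretches to hundreds of nanometers.
```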
A major surprise
The physicists expected to observe signatures of this moiré hierarchy. They got a huge surprise, however, when they applied and varied a magnetic field. The system responded with an experimental signature for magnetism, one that arises from the motion of electrons. In fact, this orbital magnetism persisted up to -263 degrees Celsius, about 10 degrees above absolute zero — the highest temperature reported in carbon-based materials to date.
But that magnetism can only occur in a system that lacks a specific symmetry — one that the team’s new material should have had. “So the fact that we saw this was very puzzling. We didn’t really understand what was going on,” says Aviram Uri, an MIT Pappalardo postdoc in physics and the third co-first author of the new paper.
Other authors of the paper include MIT professor of physics Liang Fu; Aaron Sharpe of Sandia National Laboratories; Yves H. Kwan of Princeton University; Ziyan Zhu, David Goldhaber-Gordon, and Trithep Devakul of Stanford University; and Kenji Watanabe and Takashi Taniguchi of the National Institute for Materials Science in Japan.
What was happening?
It turns out that the new system did indeed break the symmetry that prohibits the orbital magnetism the team observed, but in a very unusual way. “What happens is that the atoms in this system aren’t very comfortable, so they move in a subtle orchestrated way that we call lattice relaxation,” says Xia. And the new structure formed by that relaxation does indeed break the symmetry locally, on the moiré length scale.
This opens the possibility for the orbital magnetism the team observed. However, if you zoom out to view the system on the supermoiré scale, the symmetry is restored. “The moiré hierarchy turns out to support interesting phenomena at different length scales,” says de la Barrera.
Concludes Uri: “It’s a lot of fun when you solve a riddle and it’s such an elegant solution. We’ve gained new insights into how electrons behave in these complex systems, insights that we couldn’t have had unless our experimental observations forced us to think about these things.”
This work was supported by the Army Research Office, the National Science Foundation, the Gordon and Betty Moore Foundation, the Ross M. Brown Family Foundation, an MIT Pappalardo Fellowship, the VATAT Outstanding Postdoctoral Fellowship in Quantum Science and Technology, the JSPS KAKENHI, and a Stanford Science Fellowship. This work was carried out, in part, through the use of MIT.nano facilities.
New START.nano cohort is developing solutions in health, data storage, power, and sustainable energy
With seven new startups, MIT.nano's program for hard-tech ventures expands to more than 20 companies.
MIT.nano has announced seven new companies to join START.nano, a program aimed at speeding the transition of hard-tech innovation to market. The program supports new ventures through discounted use of MIT.nano’s facilities and access to the MIT innovation ecosystem.
The advancements pursued by the newly engaged startups include wearables for health care, green alternatives to fossil-fuel-based energy, novel battery technologies, enhancements to data systems, and interconnected nanofabrication knowledge networks, among others.
“The transition of the grand idea that is imagined in the laboratory to something that a million people can use in their hands is a journey fraught with many challenges,” MIT.nano Director Vladimir Bulović said at the 2024 Nano Summit, where nine START.nano companies presented their work. The program provides resources to ease startups over the first two hurdles — finding stakeholders and building a well-developed prototype.
In addition to access to laboratory tools necessary to advance their technologies, START.nano companies receive advice from MIT.nano expert staff, are connected to MIT.nano Consortium companies, gain a broader exposure at MIT conferences and community events, and are eligible to join the MIT Startup Exchange.
“MIT.nano has allowed us to push our project to the frontiers of sensing by implementing advanced fabrication techniques using their machinery,” said Uroš Kuzmanović, CEO and founder of Biosens8. “START.nano has surrounded us with exciting peers, a strong support system, and a spotlight to present our work. By taking advantage of all that the program has to offer, BioSens8 is moving faster than we could anywhere else.”
Here are the seven new START.nano participants:
Analog Photonics is developing lidar and optical communications technology using silicon photonics.
Biosens8 is engineering novel devices to enable health ownership. Their research focuses on multiplexed wearables for hormones, neurotransmitters, organ health markers, and drug use that will give insight into the body's health state, opening the door to personalized medicine and proactive, data-driven health decisions.
Casimir, Inc. is working on power-generating nanotechnology that interacts with quantum fields to create a continuous source of power. The team compares their technology to a solar panel that works in the dark or a battery that never needs to be recharged.
Central Spiral focuses on lossless data compression. Their technology allows for the compression of any type of data, including those that are already compressed, reducing data storage and transmission costs, lowering carbon dioxide emissions, and enhancing efficiency.
FabuBlox connects stakeholders across the nanofabrication ecosystem and resolves issues of scattered, unorganized, and isolated fab knowledge. Their cloud-based platform combines a generative process design and simulation interface with GitHub-like repository building capabilities.
Metal Fuels is converting industrial waste aluminum into onsite energy and high-value aluminum/aluminum-oxide powders. Their approach combines the existing mature technologies of molten metal purification and water atomization to develop a self-sustaining reactor that produces alumina of higher value than the input scrap aluminum feedstock, while also collecting the hydrogen off-gas.
PolyJoule, Inc. is an energy storage startup working on conductive polymer battery technology. The team’s goal is a grid battery of the future that is ultra-safe, sustainable, long living, and low-cost.
In addition to the seven startups that are actively using MIT.nano, nine other companies have been invited to join the latest START.nano cohort.
Launched in 2021, START.nano now comprises over 20 companies and eight graduates — ventures that have moved beyond the initial startup stages and some into commercialization.
Steven Strang, literary scholar and leader in writing and communication support at MIT, dies at 77
The founding director of the Writing and Communication Center worked with thousands of students, faculty, and staff over four decades at MIT.
Steven Strang, a writer and literary scholar who founded MIT’s Writing and Communication Center in 1981 and directed it for 40 years, died with family at his side on Dec. 29, 2024. He was 77.
His vision for the center was ambitious. After an MIT working group identified gaps between students’ technical knowledge and their ability to communicate it — particularly once in positions of leadership — Strang advocated a broader approach rarely used at other universities. Rather than having student tutors work with peers, Strang hired instructors with doctorates, subject matter expertise, and teaching experience to help train all MIT community members for current and future careers that increasingly rely on persuasion and the ability to communicate with varied audiences.
“He made an indelible mark on the MIT community,” wrote current director Elena Kallestinova in a message to WCC staff soon after Strang’s death. “He was deeply respected as a leader, educator, mentor, and colleague.”
Beginning his professional life as a journalist with the Bangor Daily News, Strang soon shifted to academia, receiving a PhD in English from Brown University and over the decades publishing countless pieces of fiction, poetry, and criticism, in addition to his pedagogical articles on writing and rhetoric.
But the Writing and Communication Center is his legacy. At a retirement party, longtime MIT lecturer and colleague Thalia Rubio called the WCC “Steve’s creation,” pointing out that it went on to serve many thousands of students and others. Another colleague, Bob Irwin, described in a note Strang’s commitment to making the WCC “a place that offered both friendliness and the highest professional standards of advice and consultation on all communication tasks and issues. Steve himself was conscientious, a respectful director, and a warm and reliable mentor to me and others. I think he was exemplary in his job.”
MIT recognized Strang’s major contributions with a Levitan Teaching Award, an Infinite Mile Award, and an Excellence Award. In nomination letters and testimonials, students and peers alike told of his “tireless commitment,” writing that “they might not have graduated, or been hired to the job they have today, or gained admittance to graduate school had it not been for the help of The Writing Center.”
Strang is also remembered for his work founding the MIT Writers Group, which he first offered as a creative writing workshop for Independent Activities Period in 2002. In yet another example of Strang recognizing and meeting a community need, about 70 people from across the Institute showed up that first year.
Strang is survived by a large extended family, including his wife Ayni and her two children, Elly and Marta, whom Strang adopted as his own. Donations in his memory can be made to The Rhode Island Society for the Prevention of Cruelty to Animals.
New general law governs fracture energy of networks across materials and length scales
Findings reported by MIT researchers may have significant implications in material design.
Materials like car tires, human tissues, and spider webs are diverse in composition, but all contain networks of interconnected strands. A long-standing question about the durability of these materials asks: What is the energy required to fracture these diverse networks? A recently published paper by MIT researchers offers new insights.
“Our findings reveal a simple, general law that governs the fracture energy of networks across various materials and length scales,” says Xuanhe Zhao, the Uncas and Helen Whitaker Professor and professor of mechanical engineering and civil and environmental engineering at MIT. “This discovery has significant implications for the design of new materials, structures, and metamaterials, allowing for the creation of systems that are incredibly tough, soft, and stretchable.”
Despite an established understanding of the importance of failure resistance in the design of such networks, no existing physical model effectively linked strand mechanics and connectivity to predict bulk fracture — until now. This new research reveals a universal scaling law that bridges length scales and makes it possible to predict the intrinsic fracture energy of diverse networks.
“This theory helps us predict how much energy it takes to break these networks by advancing a crack,” says graduate student Chase Hartquist, one of the paper’s lead authors. “It turns out that you can design tougher versions of these materials by making the strands longer, more stretchable, or resistant to higher forces before breaking.”
To validate their results, the team 3D-printed a giant, stretchable network, allowing them to demonstrate fracture properties in practice. They found that despite the differences in the networks, they all followed a simple and predictable rule. Beyond the changes to the strands themselves, a network can also be toughened by connecting the strands into larger loops.
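For orientation only, since the paper's law generalizes well beyond this, the classical Lake–Thomas picture already captures the levers Hartquist describes: intrinsic fracture energy scales with the number of strands crossing a unit area of the crack plane times the energy each strand absorbs before rupturing.

```latex
% Classical Lake--Thomas-style estimate (orientation only; the paper's
% scaling law is more general than this picture):
%   \Gamma_0  intrinsic fracture energy of the network
%   \Sigma    areal density of strands crossing the crack plane
%   U_f       energy to stretch one strand to rupture
\Gamma_0 \;\sim\; \Sigma\, U_f
% Longer, more stretchable, or stronger strands raise U_f, and
% connecting strands into larger loops changes \Sigma, matching the
% design levers described above.
```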
“By adjusting these properties, car tires could last longer, tissues could better resist injury, and spider webs could become more durable,” says Hartquist.
Shu Wang, a postdoc in Zhao’s lab and fellow lead author of the paper, called the research findings “an extremely fulfilling moment ... it meant that the same rules could be applied to describe a wide variety of materials, making it easier to design the best material for a given situation.”
The researchers explain that this work represents progress in an exciting and emerging field called “architected materials,” where the structure within the material itself gives it unique properties. They say the discovery sheds light on how to make these materials even tougher, by designing the segments within the architecture to be stronger and more stretchable. The strategy is adaptable for materials across fields and can be applied to improve the durability of soft robotic actuators, enhance the toughness of engineered tissues, or even create resilient lattices for aerospace technology.
Their open-access paper, “Scaling Law for Intrinsic Fracture Energy of Diverse Stretchable Networks,” is available now in Physical Review X, a leading journal in interdisciplinary physics.
“Forever grateful for MIT Open Learning for making knowledge accessible and fostering a network of curious minds”
Psychologist Bia Adams discovered a passion for computational neuroscience thanks to open-access MIT educational resources.
Bia Adams, a London-based neuropsychologist, former professional ballet dancer, and MIT Open Learning learner, has built her career across decades of diverse, interconnected experiences and an emphasis on lifelong learning. She earned her bachelor’s degree in clinical and behavioral psychology, and then worked as a psychologist and therapist for several years before taking a sabbatical in her late 20s to study at the London Contemporary Dance School and The Royal Ballet — fulfilling a long-time dream.
“In hindsight, I think what drew me most to ballet was not so much the form itself,” says Adams, “but more of a subconscious desire to make sense of my body moving through space and time, my emotions and motivations — all within a discipline that is rigorous, meticulous, and routine-based. It’s an endeavor to make sense of the world and myself.”
After acquiring some dance-related injuries, Adams returned to psychology. She completed an online certificate program specializing in medical neuroscience via Duke University, focusing on how pathology arises out of the way the brain computes information and generates behavior.
In addition to her clinical practice, she has also worked at a data science and AI consultancy for neural network research.
In 2022, in search of new things to learn and apply to both her work and personal life, Adams discovered MIT OpenCourseWare within MIT Open Learning. She was drawn to class 8.04 (Quantum Physics I), which specifically focuses on quantum mechanics, as she was hoping to finally gain some understanding of complex topics that she had tried to teach herself in the past with limited success. She credits the course’s lectures, taught by Allan Adams (physicist and principal investigator of the MIT Future Ocean Lab), with finally making these challenging topics approachable.
“I still talk to my friends at length about exciting moments in these lectures,” says Adams. “After the first class, I was hooked.”
Adams’s journey through MIT Open Learning’s educational resources quickly led to a deeper interest in computational neuroscience. She learned how to use tools from mathematics and computer science to better understand the brain, nervous system, and behavior.
She says she gained many new insights from class 6.034 (Artificial Intelligence), particularly in watching the late Professor Patrick Winston’s lectures. She appreciated learning more about the cognitive psychology aspect of AI, including how pioneers in the field looked at how the brain processes information and aimed to build programs that could solve problems. She further enhanced her understanding of AI with the Minds and Machines course on MITx Online, part of Open Learning.
Adams is now in the process of completing Introduction to Computer Science and Programming Using Python, taught by John Guttag; Eric Grimson, former interim vice president for Open Learning; and Ana Bell.
“I am multilingual, and I think the way my brain processes code is similar to the way computers code,” says Adams. “I find learning to code similar to learning a foreign language: both exhilarating and intimidating. Learning the rules, deciphering the syntax, and building my own world through code is one of the most fascinating challenges of my life.”
Adams is also pursuing a master’s degree at Duke and University College London, focusing on the neurobiology of sleep and looking particularly at how the biochemistry of the brain can affect this critical function. As a complement to this research, she is currently exploring class 9.40 (Introduction to Neural Computation), taught by Michale Fee and Daniel Zysman, which introduces quantitative approaches to understanding the brain, cognitive functions, and neurons, and covers foundational quantitative tools of data analysis in neuroscience.
In addition to the courses related more directly to her field, MIT Open Learning also provided Adams an opportunity to explore other academic areas. She delved into philosophy for the first time, taking Paradox and Infinity, taught by Professor Agustín Rayo, the Kenan Sahin Dean of the MIT School of Humanities, Arts, and Social Sciences, and Digital Learning Lab Fellow David Balcarras, which looks at the intersection of philosophy and mathematics. She also was able to explore in more depth immunology, which had always been of great interest to her, through Professor Adam Martin’s lectures on this topic in class 7.016 (Introductory Biology).
“I am forever grateful for MIT Open Learning,” says Adams, “for making knowledge accessible and fostering a network of curious minds, all striving to share, expand, and apply this knowledge for the greater good.”
Toward sustainable decarbonization of aviation in Latin America
Special report describes targets for advancing technologically feasible and economically viable strategies.
According to the International Energy Agency, aviation accounts for about 2 percent of global carbon dioxide emissions, and aviation emissions are expected to double by mid-century as demand for domestic and international air travel rises. To sharply reduce emissions in alignment with the Paris Agreement’s long-term goal to keep global warming below 1.5 degrees Celsius, the International Air Transport Association (IATA) has set a goal to achieve net-zero carbon emissions by 2050. This raises the question: Are there technologically feasible and economically viable strategies to reach that goal within the next 25 years?
To begin to address that question, a team of researchers at the MIT Center for Sustainability Science and Strategy (CS3) and the MIT Laboratory for Aviation and the Environment has spent the past year analyzing aviation decarbonization options in Latin America, where air travel is expected to more than triple by 2050 and thereby double today’s aviation-related emissions in the region.
Chief among those options is the development and deployment of sustainable aviation fuel (SAF). Currently produced from low- and zero-carbon feedstocks, including municipal waste and non-food crops, and requiring practically no alteration of aircraft systems or refueling infrastructure, SAF has the potential to perform just as well as petroleum-based jet fuel with as little as 20 percent of its carbon footprint.
Focused on Brazil, Chile, Colombia, Ecuador, Mexico, and Peru, the researchers assessed SAF feedstock availability, the costs of corresponding SAF pathways, and how SAF deployment would likely impact fuel use, prices, emissions, and aviation demand in each country. They also explored how efficiency improvements and market-based mechanisms could help the region to reach decarbonization targets. The team’s findings appear in a CS3 Special Report.
SAF emissions, costs, and sources
Under an ambitious emissions mitigation scenario designed to cap global warming at 1.5 C and raise the rate of SAF use in Latin America to 65 percent by 2050, the researchers projected aviation emissions to be reduced by about 60 percent in 2050 compared to a scenario in which existing climate policies are not strengthened. To achieve net-zero emissions by 2050, other measures would be required, such as improvements in operational and air traffic efficiencies, airplane fleet renewal, alternative forms of propulsion, and carbon offsets and removals.
As of 2024, jet fuel prices in Latin America are around $0.70 per liter. Based on the current availability of feedstocks, the researchers projected SAF costs within the six countries studied to range from $1.11 to $2.86 per liter. They cautioned that increased fuel prices could affect operating costs of the aviation sector and overall aviation demand unless strategies to manage price increases are implemented.
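Using only the figures quoted above, a quick blend calculation shows the scale of the potential price pressure. It assumes full pass-through of fuel costs and the 65 percent SAF share, both simplifying assumptions.

```python
# Blended fuel price from the report's figures: jet fuel ~$0.70/L,
# SAF $1.11-$2.86/L, and a 65 percent SAF share (simplified pass-through).
jet_price, saf_low, saf_high, saf_share = 0.70, 1.11, 2.86, 0.65

def blended(saf_price):
    return saf_share * saf_price + (1.0 - saf_share) * jet_price

low, high = blended(saf_low), blended(saf_high)
print(f"blended price: ${low:.2f}-${high:.2f}/L "
      f"({low / jet_price - 1:+.0%} to {high / jet_price - 1:+.0%} vs. jet fuel)")
```

Even at the low end of the SAF cost range, a 65 percent blend raises fuel costs by more than a third, which is why the report pairs SAF mandates with price-management strategies.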
Under the 1.5 C scenario, the total cumulative capital investment required to build new SAF production plants between 2025 and 2050 was estimated at $204 billion for the six countries (ranging from $5 billion in Ecuador to $84 billion in Brazil). The researchers identified sugarcane- and corn-based ethanol-to-jet fuel, along with palm oil- and soybean-based hydro-processed esters and fatty acids, as the most promising feedstock pathways for near-term SAF production in Latin America.
“Our findings show that SAF offers a significant decarbonization pathway, which must be combined with an economy-wide emissions mitigation policy that uses market-based mechanisms to offset the remaining emissions,” says Sergey Paltsev, lead author of the report, MIT CS3 deputy director, and senior research scientist at the MIT Energy Initiative.
Recommendations
The researchers concluded the report with recommendations for national policymakers and aviation industry leaders in Latin America.
They stressed that government policy and regulatory mechanisms will be needed to create sufficient conditions to attract SAF investments in the region and make SAF commercially viable as the aviation industry decarbonizes operations. Without appropriate policy frameworks, SAF requirements will affect the cost of air travel. For fuel producers, stable, long-term-oriented policies and regulations will be needed to create robust supply chains, build demand for establishing economies of scale, and develop innovative pathways for producing SAF.
Finally, the research team recommended a region-wide collaboration in designing SAF policies. A unified decarbonization strategy among all countries in the region will help ensure competitiveness, economies of scale, and achievement of long-term carbon emissions-reduction goals.
“Regional feedstock availability and costs make Latin America a potential major player in SAF production,” says Angelo Gurgel, a principal research scientist at MIT CS3 and co-author of the study. “SAF requirements, combined with government support mechanisms, will ensure sustainable decarbonization while enhancing the region’s connectivity and the ability of disadvantaged communities to access air transport.”
Financial support for this study was provided by LATAM Airlines and Airbus.
Bryan Reimer named to FAA Rulemaking Committee
CTL research scientist will provide recommendations to the Federal Aviation Administration focused on the most significant human-factor risks to aviation safety.
Bryan Reimer, a research scientist at the MIT Center for Transportation and Logistics (CTL), and the founder and co-leader of the Advanced Vehicle Technology Consortium and the Human Factors Evaluator for Automotive Demand Consortium in the MIT AgeLab, has been appointed to the Task Force on Human Factors in Aviation Safety Aviation Rulemaking Committee (HF Task Force ARC). The HF Task Force ARC will provide recommendations to the U.S. Federal Aviation Administration (FAA) on the most significant human factors and the relative contribution of these factors to aviation safety risk.
Reimer, who has worked at MIT since 2003, joins a committee whose operational or academic expertise includes air carrier operations, air traffic control, pilot experience, aeronautical information, aircraft maintenance and mechanics, psychology, human-machine integration, and general aviation operations. Their recommendations to the FAA will help ensure safety for passengers, aircraft crews, and cargo for years to come. His appointment follows a year of serving on the Transforming Transportation Advisory Committee (TTAC) for the U.S. Department of Transportation, where he has taken on the role of vice chair of the Artificial Intelligence subcommittee. The TTAC recently released a report to the Secretary of Transportation in response to its charter.
As a mobility and technology futurist working at the intersection of technology, human behavior, and public policy, Reimer brings expertise in human-machine integration, transportation safety, and AI to the committee. The committee’s charter, a congressional mandate under the bipartisan FAA Reauthorization Act of 2024, specifically calls for some members whose expertise in human factors was gained through experience and training outside aviation, and Reimer fills that role.
MIT CTL creates supply chain innovation and drives it into practice through the three pillars of research, outreach, and education, working with businesses, government, and nongovernmental organizations. A longtime advocate of collaboration across the public and private sectors to ensure consumers’ safety in transportation, Reimer brings expertise that will help the FAA more broadly consider the human element of aviation safety. Yossi Sheffi, director of MIT CTL, says, “Aviation plays a critical role in the rapid and reliable transportation of goods across vast distances, making it essential for delivering time-sensitive products globally. We must understand the current human factors involved in this process to help ensure smooth operation of this indispensable service amid potential disruptions.”
Reimer recently discussed his research on an episode of The Ojo-Yoshida Report with Phil Koopman, a professor of electrical and computer engineering.
HF Task Force ARC members will serve a two-year term. The first ARC plenary meeting was held Jan. 15-16 in Washington.
For clean ammonia, MIT engineers propose going underground
Using the Earth itself as a chemical reactor could reduce the need for fossil-fuel-powered chemical plants.
Ammonia is the most widely produced chemical in the world today, used primarily as a source for nitrogen fertilizer. Its production is also a major source of greenhouse gas emissions — the highest in the whole chemical industry.
Now, a team of researchers at MIT has developed an innovative way of making ammonia without the usual fossil-fuel-powered chemical plants that require high heat and pressure. Instead, they have found a way to use the Earth itself as a geochemical reactor, producing ammonia underground. The process uses Earth’s naturally occurring heat and pressure, provided free of charge and free of emissions, as well as the reactivity of minerals already present in the ground.
The trick the team devised is to inject water underground, into an area of iron-rich subsurface rock. The water carries with it a source of nitrogen and particles of a metal catalyst, allowing the water to react with the iron to generate clean hydrogen, which in turn reacts with the nitrogen to make ammonia. A second well is then used to pump that ammonia up to the surface.
The process, which has been demonstrated in the lab but not yet in a natural setting, is described today in the journal Joule. The paper’s co-authors are MIT professors of materials science and engineering Iwnetim Abate and Ju Li, postdoc Yifan Gao, and five others at MIT.
“When I first produced ammonia from rock in the lab, I was so excited,” Gao recalls. “I realized this represented an entirely new and never-reported approach to ammonia synthesis.”
The standard method for making ammonia is called the Haber-Bosch process, which was developed in Germany in the early 20th century to replace natural sources of nitrogen fertilizer such as mined deposits of bat guano, which were becoming depleted. But the Haber-Bosch process is very energy intensive: It requires temperatures of 400 degrees Celsius and pressures of 200 atmospheres, and this means it needs huge installations in order to be efficient. Some areas of the world, such as sub-Saharan Africa and Southeast Asia, have few or no such plants in operation. As a result, the shortage or extremely high cost of fertilizer in these regions has limited their agricultural production.
The Haber-Bosch process “is good. It works,” Abate says. “Without it, we wouldn’t have been able to feed 2 out of the total 8 billion people in the world right now,” he says, referring to the portion of the world’s population whose food is grown with ammonia-based fertilizers. But because of the emissions and energy demands, a better process is needed, he says.
Burning fuel to generate heat is responsible for about 20 percent of the greenhouse gases emitted from plants using the Haber-Bosch process. Making hydrogen accounts for the remaining 80 percent. But ammonia, the molecule NH3, is made up only of nitrogen and hydrogen. There’s no carbon in the formula, so where do the carbon emissions come from? The standard way of producing the needed hydrogen is by processing methane gas with steam, breaking down the gas into pure hydrogen, which gets used, and carbon dioxide gas that gets released into the air.
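The underlying chemistry is conventional steam methane reforming, which in simplified form proceeds in two steps (a textbook summary, not specific to this study):

$$\mathrm{CH_4 + H_2O \rightarrow CO + 3\,H_2} \qquad\qquad \mathrm{CO + H_2O \rightarrow CO_2 + H_2}$$

The net reaction, $\mathrm{CH_4 + 2\,H_2O \rightarrow CO_2 + 4\,H_2}$, makes the accounting plain: every four molecules of hydrogen come bundled with one molecule of carbon dioxide.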
Other processes exist for making low- or no-emissions hydrogen, such as by using solar or wind-generated electricity to split water into oxygen and hydrogen, but that process can be expensive. That’s why Abate and his team worked on developing a system to produce what they call geological hydrogen. Some places in the world, including some in Africa, have been found to naturally generate hydrogen underground through chemical reactions between water and iron-rich rocks. These pockets of naturally occurring hydrogen can be mined, just like natural methane reservoirs, but the extent and locations of such deposits are still relatively unexplored.
Abate realized this process could be created or enhanced by pumping water, laced with copper and nickel catalyst particles to speed up the process, into the ground in places where such iron-rich rocks were already present. “We can use the Earth as a factory to produce clean flows of hydrogen,” he says.
He recalls thinking about the problem of the emissions from hydrogen production for ammonia: “The ‘aha!’ moment for me was thinking, how about we link this process of geological hydrogen production with the process of making Haber-Bosch ammonia?”
That would solve the biggest problem of the underground hydrogen production process, which is how to capture and store the gas once it’s produced. Hydrogen is a very tiny molecule — the smallest of them all — and hard to contain. But by implementing the entire Haber-Bosch process underground, the only material that would need to be sent to the surface would be the ammonia itself, which is easy to capture, store, and transport.
The only extra ingredient needed to complete the process was the addition of a source of nitrogen, such as nitrate or nitrogen gas, into the water-catalyst mixture being injected into the ground. Then, as the hydrogen gets released from water molecules after interacting with the iron-rich rocks, it can immediately bond with the nitrogen atoms also carried in the water, with the deep underground environment providing the high temperatures and pressures required by the Haber-Bosch process. A second well near the injection well then pumps the ammonia out and into tanks on the surface.
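Schematically, the approach couples two well-known reactions (a simplified sketch, with FeO standing in for the iron(II)-bearing minerals; the Joule paper details the actual pathways and catalysts):

$$\mathrm{3\,FeO + H_2O \rightarrow Fe_3O_4 + H_2} \qquad\qquad \mathrm{N_2 + 3\,H_2 \rightarrow 2\,NH_3}$$

The first reaction is the water-splitting step driven by iron-rich rock; the second is the Haber-Bosch reaction itself, here running on subsurface heat and pressure rather than in a plant. With nitrate as the nitrogen source, the hydrogen equivalents instead reduce $\mathrm{NO_3^-}$ to ammonia.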
“We call this geological ammonia,” Abate says, “because we are using subsurface temperature, pressure, chemistry, and geologically existing rocks to produce ammonia directly.”
Whereas transporting hydrogen requires expensive equipment to cool and liquefy it, and virtually no pipelines exist for its transport (except near oil refinery sites), transporting ammonia is easier and cheaper. It’s about one-sixth the cost of transporting hydrogen, and there are already more than 5,000 miles of ammonia pipelines and 10,000 terminals in place in the U.S. alone. What’s more, Abate explains, ammonia, unlike hydrogen, already has a substantial commercial market in place, with production volume projected to grow by two to three times by 2050, as it is used not only for fertilizer but also as feedstock for a wide variety of chemical processes.
For example, ammonia can be burned directly in gas turbines, engines, and industrial furnaces, providing a carbon-free alternative to fossil fuels. It is being explored for maritime shipping and aviation as an alternative fuel, and as a possible space propellant.
Another upside to geological ammonia is that untreated wastewater, including agricultural runoff, which tends to be rich in nitrogen already, could serve as the water source and be treated in the process. “We can tackle the problem of treating wastewater, while also making something of value out of this waste,” Abate says.
Gao adds that this process “involves no direct carbon emissions, presenting a potential pathway to reduce global CO2 emissions by up to 1 percent.” To arrive at this point, he says, the team “overcame numerous challenges and learned from many failed attempts. For example, we tested a wide range of conditions and catalysts before identifying the most effective one.”
The project was seed-funded under a flagship project of MIT’s Climate Grand Challenges program, the Center for the Electrification and Decarbonization of Industry. Professor Yet-Ming Chiang, co-director of the center, says, “I don’t think there’s been any previous example of deliberately using the Earth as a chemical reactor. That’s one of the key novel points of this approach.” Chiang emphasizes that even though it is a geological process, it happens very fast, not on geological timescales. “The reaction is fundamentally over in a matter of hours,” he says. “The reaction is so fast that this answers one of the key questions: Do you have to wait for geological times? And the answer is absolutely no.”
Professor Elsa Olivetti, a mission director of the newly established Climate Project at MIT, says, “The creative thinking by this team is invaluable to MIT’s ability to have impact at scale. Coupling these exciting results with, for example, advanced understanding of the geology surrounding hydrogen accumulations represents the whole-of-Institute efforts the Climate Project aims to support.”
“This is a significant breakthrough for the future of sustainable development,” says Geoffrey Ellis, a geologist at the U.S. Geological Survey, who was not associated with this work. He adds, “While there is clearly more work that needs to be done to validate this at the pilot stage and to get this to the commercial scale, the concept that has been demonstrated is truly transformative. The approach of engineering a system to optimize the natural process of nitrate reduction by Fe2+ is ingenious and will likely lead to further innovations along these lines.”
The initial work on the process has been done in the laboratory, so the next step will be to prove the process using a real underground site. “We think that kind of experiment can be done within the next one to two years,” Abate says. This could open doors to using a similar approach for other chemical production processes, he adds.
The team has applied for a patent and aims to work towards bringing the process to market.
“Moving forward,” Gao says, “our focus will be on optimizing the process conditions and scaling up tests, with the goal of enabling practical applications for geological ammonia in the near future.”
The research team also included Ming Lei, Bachu Sravan Kumar, Hugh Smith, Seok Hee Han, and Lokesh Sangabattula, all at MIT. Additional funding was provided by the National Science Foundation and was carried out, in part, through the use of MIT.nano facilities.
Modeling complex behavior with a simple organism
By studying the roundworm C. elegans, neuroscientist Steven Flavell explores how neural circuits give rise to behavior.
The roundworm C. elegans is a simple animal whose nervous system has exactly 302 neurons. Each of the connections between those neurons has been comprehensively mapped, allowing researchers to study how they work together to generate the animal’s different behaviors.
Steven Flavell, an MIT associate professor of brain and cognitive sciences and investigator with The Picower Institute for Learning and Memory at MIT and the Howard Hughes Medical Institute, uses the worm as a model to study motivated behaviors such as feeding and navigation, in hopes of shedding light on the fundamental mechanisms that may also determine how similar behaviors are controlled in other animals.
In recent studies, Flavell’s lab has uncovered neural mechanisms underlying adaptive changes in the worms’ feeding behavior and has mapped how the activity of each neuron in the animal’s nervous system affects the worms’ different behaviors.
Such studies could help researchers gain insight into how brain activity generates behavior in humans. “It is our aim to identify molecular and neural circuit mechanisms that may generalize across organisms,” he says, noting that many fundamental biological discoveries, including those related to programmed cell death, microRNA, and RNA interference, were first made in C. elegans.
“Our lab has mostly studied motivated state-dependent behaviors, like feeding and navigation. The machinery that’s being used to control these states in C. elegans — for example, neuromodulators — are actually the same as in humans. These pathways are evolutionarily ancient,” he says.
Drawn to the lab
Born in London to an English father and a Dutch mother, Flavell came to the United States in 1982 at the age of 2, when his father became chief scientific officer at Biogen. The family lived in Sudbury, Massachusetts, and his mother worked as a computer programmer and math teacher. His father later became a professor of immunology at Yale University.
Though Flavell grew up in a science family, he thought about majoring in English when he arrived at Oberlin College. A musician as well, Flavell took jazz guitar classes at Oberlin’s conservatory, and he also plays the piano and the saxophone. However, taking classes in psychology and physiology led him to discover that the field that most captivated him was neuroscience.
“I was immediately sold on neuroscience. It combined the rigor of the biological sciences with deep questions from psychology,” he says.
While in college, Flavell worked on a summer research project related to Alzheimer’s disease, in a lab at Case Western Reserve University. He then continued the project, which involved analyzing post-mortem Alzheimer’s tissue, during his senior year at Oberlin.
“My earliest research revolved around mechanisms of disease. While my research interests have evolved since then, my earliest research experiences were the ones that really got me hooked on working at the bench: running experiments, looking at brand new results, and trying to understand what they mean,” he says.
By the end of college, Flavell was a self-described lab rat: “I just love being in the lab.” He applied to graduate school and ended up going to Harvard Medical School for a PhD in neuroscience. Working with Michael Greenberg, Flavell studied how sensory experience and the resulting neural activity shape brain development. In particular, he focused on a family of gene regulators called MEF2, which play important roles in neuronal development and synaptic plasticity.
All of that work was done using mouse models, but Flavell transitioned to studying C. elegans during a postdoctoral fellowship working with Cori Bargmann at Rockefeller University. He was interested in studying how neural circuits control behavior, which seemed to be more feasible in simpler animal models.
“Studying how neurons across the brain govern behavior felt like it would be nearly intractable in a large brain — to understand all the nuts and bolts of how neurons interact with each other and ultimately generate behavior seemed daunting,” he says. “But I quickly became excited about studying this in C. elegans because at the time it was still the only animal with a full blueprint of its brain: a map of every brain cell and how they are all wired up together.”
That wiring diagram includes about 7,000 synapses in the entire nervous system. By comparison, a single human neuron may form more than 10,000 synapses. “Relative to those larger systems, the C. elegans nervous system is mind-bogglingly simple,” Flavell says.
Despite their much simpler organization, roundworms can execute complex behaviors such as feeding, locomotion, and egg-laying. They even sleep, form memories, and find suitable mating partners. The neuromodulators and cellular machinery that give rise to those behaviors are similar to those found in humans and other mammals.
“C. elegans has a fairly well-defined, smallish set of behaviors, which makes it attractive for research. You can really measure almost everything that the animal is doing and study it,” Flavell says.
How behavior arises
Early in his career, Flavell’s work on C. elegans revealed the neural mechanisms that underlie the animal’s stable behavioral states. When worms are foraging for food, they alternate between stably exploring the environment and pausing to feed. “The transition rates between those states really depend on all these cues in the environment. How good is the food environment? How hungry are they? Are there smells indicating a better nearby food source? The animal integrates all of those things and then adjusts their foraging strategy,” Flavell says.
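One minimal way to picture this kind of state-dependent foraging is a two-state stochastic model in which food quality biases the switching rates (a toy sketch for intuition only, not the lab’s actual model; all probabilities are invented):

```python
import random

def fraction_dwelling(steps, food_quality, seed=0):
    """Toy two-state model: the worm switches between 'roam' and 'dwell'.

    Better food (food_quality near 1) raises the roam->dwell switching
    probability and lowers dwell->roam, so the animal settles into
    long-lasting feeding states. All numbers are invented for illustration.
    """
    rng = random.Random(seed)
    p_roam_to_dwell = 0.02 + 0.10 * food_quality
    p_dwell_to_roam = 0.08 * (1.0 - food_quality)
    state, dwell_steps = "roam", 0
    for _ in range(steps):
        r = rng.random()
        if state == "roam" and r < p_roam_to_dwell:
            state = "dwell"
        elif state == "dwell" and r < p_dwell_to_roam:
            state = "roam"
        dwell_steps += state == "dwell"
    return dwell_steps / steps

for q in (0.1, 0.5, 0.9):
    print(f"food quality {q}: fraction of time dwelling = {fraction_dwelling(100_000, q):.2f}")
```

Even this crude model reproduces the qualitative pattern Flavell describes: richer environments tilt the animal toward long, stable dwelling states.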
These stable behavioral states are controlled by neuromodulators like serotonin. By studying serotonergic regulation of the worm’s behavioral states, Flavell’s lab has been able to uncover how this important system is organized. In a recent study, Flavell and his colleagues published an “atlas” of the C. elegans serotonin system. They identified every neuron that produces serotonin, every neuron that has serotonin receptors, and how brain activity and behavior change across the animal as serotonin is released.
“Our studies of how the serotonin system works to control behavior have already revealed basic aspects of serotonin signaling that we think ought to generalize all the way up to mammals,” Flavell says. “By studying the way that the brain implements these long-lasting states, we can tap into these basic features of neuronal function. With the resolution that you can get studying specific C. elegans neurons and the way that they implement behavior, we can uncover fundamental features of the way that neurons act.”
In parallel, Flavell’s lab has been mapping out how neurons across the C. elegans brain control different aspects of behavior. In a 2023 study, the lab mapped how changes in brain-wide activity relate to behavior, using special microscopes that move along with the worms as they explore. This allows the researchers to simultaneously track every behavior and measure the activity of every neuron in the brain. Using these data, the researchers created computational models that can accurately capture the relationship between brain activity and behavior.
This type of research requires expertise in many areas, Flavell says. When looking for faculty jobs, he hoped to find a place where he could collaborate with researchers working in different fields of neuroscience, as well as scientists and engineers from other departments.
“Being at MIT has allowed my lab to be much more multidisciplinary than it could have been elsewhere,” he says. “My lab members have had undergrad degrees in physics, math, computer science, biology, neuroscience, and we use tools from all of those disciplines. We engineer microscopes, we build computational models, we come up with molecular tricks to perturb neurons in the C. elegans nervous system. And I think being able to deploy all those kinds of tools leads to exciting research outcomes.”
Explained: Generative AI’s environmental impact
Rapid development and deployment of powerful generative AI models comes with environmental consequences, including increased electricity demand and water consumption.
In a two-part series, MIT News explores the environmental implications of generative AI. In this article, we look at why this technology is so resource-intensive. A second piece will investigate what experts are doing to reduce genAI’s carbon footprint and other impacts.
The excitement surrounding potential benefits of generative AI, from improving worker productivity to advancing scientific research, is hard to ignore. While the explosive growth of this new technology has enabled rapid deployment of powerful models in many industries, the environmental consequences of this generative AI “gold rush” remain difficult to pin down, let alone mitigate.
The computational power required to train generative AI models that often have billions of parameters, such as OpenAI’s GPT-4, can demand a staggering amount of electricity, which leads to increased carbon dioxide emissions and pressures on the electric grid.
Furthermore, deploying these models in real-world applications, enabling millions to use generative AI in their daily lives, and then fine-tuning the models to improve their performance draws large amounts of energy long after a model has been developed.
Beyond electricity demands, a great deal of water is needed to cool the hardware used for training, deploying, and fine-tuning generative AI models, which can strain municipal water supplies and disrupt local ecosystems. The increasing number of generative AI applications has also spurred demand for high-performance computing hardware, adding indirect environmental impacts from its manufacture and transport.
“When we think about the environmental impact of generative AI, it is not just the electricity you consume when you plug the computer in. There are much broader consequences that go out to a system level and persist based on actions that we take,” says Elsa A. Olivetti, professor in the Department of Materials Science and Engineering and the lead of the Decarbonization Mission of MIT’s new Climate Project.
Olivetti is senior author of a 2024 paper, “The Climate and Sustainability Implications of Generative AI,” co-authored by MIT colleagues in response to an Institute-wide call for papers that explore the transformative potential of generative AI, in both positive and negative directions for society.
Demanding data centers
The electricity demands of data centers are one major factor contributing to the environmental impacts of generative AI, since data centers are used to train and run the deep learning models behind popular tools like ChatGPT and DALL-E.
A data center is a temperature-controlled building that houses computing infrastructure, such as servers, data storage drives, and network equipment. For instance, Amazon has more than 100 data centers worldwide, each of which has about 50,000 servers that the company uses to support cloud computing services.
While data centers have been around since the 1940s (the first was built at the University of Pennsylvania in 1945 to support the first general-purpose digital computer, the ENIAC), the rise of generative AI has dramatically increased the pace of data center construction.
“What is different about generative AI is the power density it requires. Fundamentally, it is just computing, but a generative AI training cluster might consume seven or eight times more energy than a typical computing workload,” says Noman Bashir, lead author of the impact paper, who is a Computing and Climate Impact Fellow at the MIT Climate and Sustainability Consortium (MCSC) and a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL).
Scientists have estimated that the power requirements of data centers in North America increased from 2,688 megawatts at the end of 2022 to 5,341 megawatts at the end of 2023, partly driven by the demands of generative AI. Globally, the electricity consumption of data centers rose to 460 terawatt-hours in 2022. This would have made data centers the 11th largest electricity consumer in the world, between the nations of Saudi Arabia (371 terawatt-hours) and France (463 terawatt-hours), according to the Organization for Economic Co-operation and Development.
By 2026, the electricity consumption of data centers is expected to approach 1,050 terawatt-hours (which would bump data centers up to fifth place on the global list, between Japan and Russia).
While not all data center computation involves generative AI, the technology has been a major driver of increasing energy demands.
“The demand for new data centers cannot be met in a sustainable way. The pace at which companies are building new data centers means the bulk of the electricity to power them must come from fossil fuel-based power plants,” says Bashir.
The power needed to train and deploy a model like OpenAI’s GPT-3 is difficult to ascertain. In a 2021 research paper, scientists from Google and the University of California at Berkeley estimated the training process alone consumed 1,287 megawatt hours of electricity (enough to power about 120 average U.S. homes for a year), generating about 552 tons of carbon dioxide.
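The parenthetical equivalence is easy to check (simple arithmetic on the quoted estimates; the per-home figure is our assumption, chosen to match typical U.S. household consumption of roughly 10,700 kilowatt-hours per year):

```python
# Checking the "120 homes" equivalence from the quoted estimates. The
# per-home figure is our assumption: roughly 10.7 MWh per U.S. home per year.

training_energy_mwh = 1287      # estimated electricity for training GPT-3
co2_tons = 552                  # estimated associated emissions

avg_home_mwh_per_year = 10.7    # assumed average U.S. household consumption
print(f"Homes powered for a year: {training_energy_mwh / avg_home_mwh_per_year:.0f}")
# -> Homes powered for a year: 120

print(f"Implied intensity: {co2_tons / training_energy_mwh * 1000:.0f} kg CO2 per MWh")
# -> Implied intensity: 429 kg CO2 per MWh
```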
While all machine-learning models must be trained, one issue unique to generative AI is the rapid fluctuations in energy use that occur over different phases of the training process, Bashir explains.
Power grid operators must have a way to absorb those fluctuations to protect the grid, and they usually employ diesel-based generators for that task.
Increasing impacts from inference
Once a generative AI model is trained, the energy demands don’t disappear.
Each time a model is used, perhaps by an individual asking ChatGPT to summarize an email, the computing hardware that performs those operations consumes energy. Researchers have estimated that a ChatGPT query consumes about five times more electricity than a simple web search.
“But an everyday user doesn’t think too much about that,” says Bashir. “The ease-of-use of generative AI interfaces and the lack of information about the environmental impacts of my actions means that, as a user, I don’t have much incentive to cut back on my use of generative AI.”
With traditional AI, the energy usage is split fairly evenly between data processing, model training, and inference, which is the process of using a trained model to make predictions on new data. However, Bashir expects the electricity demands of generative AI inference to eventually dominate since these models are becoming ubiquitous in so many applications, and the electricity needed for inference will increase as future versions of the models become larger and more complex.
Plus, generative AI models have an especially short shelf-life, driven by rising demand for new AI applications. Companies release new models every few weeks, so the energy used to train prior versions goes to waste, Bashir adds. New models often consume more energy for training, since they usually have more parameters than their predecessors.
While electricity demands of data centers may be getting the most attention in research literature, the amount of water consumed by these facilities has environmental impacts, as well.
Chilled water is used to cool a data center by absorbing heat from computing equipment. It has been estimated that, for each kilowatt hour of energy a data center consumes, it would need two liters of water for cooling, says Bashir.
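Applying that rule of thumb to the GPT-3 training estimate quoted earlier gives a feel for the scale (a back-of-envelope combination of the two published figures, not a number from the paper):

```python
# Rough cooling-water estimate for GPT-3 training, combining the 2 liters/kWh
# rule of thumb with the 1,287 MWh training figure quoted earlier.

training_energy_kwh = 1287 * 1000
liters_per_kwh = 2

print(f"~{training_energy_kwh * liters_per_kwh / 1e6:.1f} million liters of water")
# -> ~2.6 million liters of water
```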
“Just because this is called ‘cloud computing’ doesn’t mean the hardware lives in the cloud. Data centers are present in our physical world, and because of their water usage they have direct and indirect implications for biodiversity,” he says.
The computing hardware inside data centers brings its own, less direct environmental impacts.
While it is difficult to estimate how much power is needed to manufacture a GPU, a type of powerful processor that can handle intensive generative AI workloads, it would be more than what is needed to produce a simpler CPU because the fabrication process is more complex. A GPU’s carbon footprint is compounded by the emissions related to material and product transport.
There are also environmental implications of obtaining the raw materials used to fabricate GPUs, which can involve dirty mining procedures and the use of toxic chemicals for processing.
Market research firm TechInsights estimates that the three major producers (NVIDIA, AMD, and Intel) shipped 3.85 million GPUs to data centers in 2023, up from about 2.67 million in 2022. That number is expected to have increased by an even greater percentage in 2024.
The industry is on an unsustainable path, but there are ways to encourage responsible development of generative AI that supports environmental objectives, Bashir says.
He, Olivetti, and their MIT colleagues argue that this will require a comprehensive consideration of all the environmental and societal costs of generative AI, as well as a detailed assessment of the value in its perceived benefits.
“We need a more contextual way of systematically and comprehensively understanding the implications of new developments in this space. Due to the speed at which there have been improvements, we haven’t had a chance to catch up with our abilities to measure and understand the tradeoffs,” Olivetti says.
Making the art world more accessible
The startup NALA, which began as an MIT class project, directly matches art buyers with artists.
In the world of high-priced art, galleries usually act as gatekeepers. Their selective curation process is a key reason galleries in major cities often feature work from the same batch of artists. The system limits opportunities for emerging artists and leaves great art undiscovered.
NALA was founded by Benjamin Gulak ’22 to disrupt the gallery model. The company’s digital platform, which was started as part of an MIT class project, allows artists to list their art and uses machine learning and data science to offer personalized recommendations to art lovers.
By providing a much larger pool of artwork to buyers, the company is dismantling the exclusive barriers put up by traditional galleries and efficiently connecting creators with collectors.
“There’s so much talent out there that has never had the opportunity to be seen outside of the artists’ local market,” Gulak says. “We’re opening the art world to all artists, creating a true meritocracy.”
NALA takes no commission from artists, instead charging buyers an 11.5 percent commission on top of the artist’s listed price. Today more than 20,000 art lovers are using NALA's platform, and the company has registered more than 8,500 artists.
“My goal is for NALA to become the dominant place where art is discovered, bought, and sold online,” Gulak says. “The gallery model has existed for such a long period of time that they are the tastemakers in the art world. However, most buyers never realize how restrictive the industry has been.”
From founder to student to founder again
Growing up in Canada, Gulak worked hard to get into MIT, participating in science fairs and robotics competitions throughout high school. When he was 16, he created an electric, one-wheeled motorcycle that got him on the popular television show “Shark Tank” and was later named one of the top inventions of the year by Popular Science.
Gulak was accepted into MIT in 2009 but withdrew from his undergrad program shortly after entering to launch a business around the media exposure and capital from “Shark Tank.” Following a whirlwind decade in which he raised more than $12 million and sold thousands of units globally, Gulak decided to return to MIT to complete his degree, switching his major from mechanical engineering to one combining computer science, economics, and data science.
“I spent 10 years of my life building my business, and realized to get the company where I wanted it to be, it would take another decade, and that wasn’t what I wanted to be doing,” Gulak says. “I missed learning, and I missed the academic side of my life. I basically begged MIT to take me back, and it was the best decision I ever made.”
During the ups and downs of running his company, Gulak took up painting to de-stress. Art had always been a part of Gulak’s life, and he had even done a fine arts study abroad program in Italy during high school. Determined to try selling his art, he collaborated with some prominent art galleries in London, Miami, and St. Moritz. Eventually he began connecting artists he’d met on travels from emerging markets like Cuba, Egypt, and Brazil to the gallery owners he knew.
“The results were incredible because these artists were used to selling their work to tourists for $50, and suddenly they’re hanging work in a fancy gallery in London and getting 5,000 pounds,” Gulak says. “It was the same artist, same talent, but different buyers.”
At the time, Gulak was in his third year at MIT and wondering what he’d do after graduation. He thought he wanted to start a new business, but every industry he looked at was dominated by tech giants. Every industry, that is, except the art world.
“The art industry is archaic,” Gulak says. “Galleries have monopolies over small groups of artists, and they have absolute control over the prices. The buyers are told what the value is, and almost everywhere you look in the industry, there’s inefficiencies.”
At MIT, Gulak was studying the recommender engines that are used to populate social media feeds and personalize show and music suggestions, and he envisioned something similar for the visual arts.
“I thought, why, when I go on the big art platforms, do I see horrible combinations of artwork even though I’ve had accounts on these platforms for years?” Gulak says. “I’d get new emails every week titled ‘New art for your collection,’ and the platform had no idea about my taste or budget.”
For a class project at MIT, Gulak built a system that tried to predict the types of art that would do well in a gallery. By his final year at MIT, he had realized that working directly with artists would be a more promising approach.
“Online platforms typically take a 30 percent fee, and galleries can take an additional 50 percent fee, so the artist ends up with a small percentage of each online sale, but the buyer also has to pay a luxury import duty on the full price,” Gulak explains. “That means there’s a massive amount of fat in the middle, and that’s where our direct-to-artist business model comes in.”
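A hypothetical $1,000 sale makes the fee structures concrete (an illustrative sketch: the 11.5 percent buyer commission is NALA’s rate cited above, while the 30 percent platform and 50 percent gallery fees are the rough figures Gulak describes, treated here as stacking):

```python
# Hypothetical $1,000 sale under each model. The 11.5% buyer commission is
# NALA's published rate cited above; the 30% platform and 50% gallery fees
# are the rough figures Gulak describes, treated here as stacking.

def nala_sale(list_price):
    """NALA: artist keeps the full list price; buyer pays commission on top."""
    return list_price * 1.115, list_price

def traditional_sale(list_price, platform_fee=0.30, gallery_fee=0.50):
    """Illustrative traditional sale: fees come out of the artist's share."""
    return list_price, list_price * (1 - platform_fee) * (1 - gallery_fee)

for name, (buyer_pays, artist_gets) in (("NALA", nala_sale(1000)),
                                        ("Traditional", traditional_sale(1000))):
    print(f"{name:12s} buyer pays ${buyer_pays:,.0f}; artist receives ${artist_gets:,.0f}")
# -> NALA         buyer pays $1,115; artist receives $1,000
# -> Traditional  buyer pays $1,000; artist receives $350
```

Under these illustrative numbers, the artist’s take roughly triples even though the buyer pays only modestly more, which is the inefficiency Gulak is pointing at.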
Today NALA, which stands for Networked Artistic Learning Algorithm, onboards artists by having them upload artwork and fill out a questionnaire about their style. They can begin uploading work immediately and choose their listing price.
The company began by using AI to match art with its most likely buyer. Gulak notes that not all art will sell — “if you’re making rock paintings there may not be a big market” — and artists may price their work higher than buyers are willing to pay, but the algorithm works to put art in front of the most likely buyer based on style preferences and budget. NALA also handles sales and shipments, providing artists with 100 percent of their list price from every sale.
“By not taking commissions, we’re very pro artists,” Gulak says. “We also allow all artists to participate, which is unique in this space. NALA is built by artists for artists.”
Last year, NALA also started allowing buyers to take a photo of something they like and see similar artwork from its database.
“In museums, people will take a photo of masterpieces they’ll never be able to afford, and now they can find living artists producing the same style that they could actually put in their home,” Gulak says. “It makes art more accessible.”
Championing artists
Ten years ago, Gulak was visiting Egypt when he discovered an impressive mural on the street. Gulak found the local artist, Ahmed Nofal, on Instagram and bought some work. Later, he brought Nofal to Dubai to participate in World Art Dubai. The artist’s work was so well received that he ended up creating murals for the Royal British Museum in London and Red Bull. Most recently, Nofal and Gulak collaborated on a mural at the Museum of Graffiti in Miami during Art Basel 2024.
Gulak has worked personally with many of the artists on his platform. For more than a decade, he has traveled to Cuba buying art and delivering art supplies to friends. He has also supported artists as they work to secure immigration visas.
“Many people claim they want to help the art world, but in reality, they often fall back on the same outdated business models,” says Gulak. “Art isn’t just my passion — it’s a way of life for me. I’ve been on every side of the art world: as a painter selling my work through galleries, as a collector with my office brimming with art, and as a collaborator working alongside incredible talents like Raheem Saladeen Johnson. When artists visit, we create together, sharing ideas and brainstorming. These experiences, combined with my background as both an artist and a computer scientist, give me a unique perspective. I’m trying to use technology to provide artists with unparalleled access to the global market and shake things up.”
Karl Berggren named faculty head of electrical engineering in EECS
Professor who develops technologies to push the envelope of what is possible with photonics and electronic devices succeeds Joel Voldman.
Karl K. Berggren, the Joseph F. and Nancy P. Keithley Professor of Electrical Engineering at MIT, has been named the new faculty head of electrical engineering in the Department of Electrical Engineering and Computer Science (EECS), effective Jan. 15.
“Karl’s exceptional interdisciplinary research combining electrical engineering, physics, and materials science, coupled with his experience working with industry and government organizations, makes him an ideal fit to head electrical engineering. I’m confident electrical engineering will continue to grow under his leadership,” says Anantha Chandrakasan, chief innovation and strategy officer, dean of engineering, and Vannevar Bush Professor of Electrical Engineering and Computer Science.
“Karl has made an incredible impact as a researcher and educator over his two decades in EECS. Students and faculty colleagues praise his thoughtful approach to teaching, and the care with which he oversaw the teaching labs in his prior role as undergraduate lab officer for the department. He will undoubtedly be an excellent leader, bringing his passion for education and collaborative spirit to this new role,” adds Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science.
Berggren joins the leadership of EECS, which jointly reports to the MIT Schwarzman College of Computing and the School of Engineering. The largest academic department at MIT, EECS was reorganized in 2019 as part of the formation of the college into three overlapping sub-units in electrical engineering, computer science, and artificial intelligence and decision-making. The restructuring has enabled each of the three sub-units to concentrate on faculty recruitment, mentoring, promotion, academic programs, and community building in coordination with the others.
A member of the EECS faculty since 2003, Berggren has taught a range of subjects in the department, including 6.02 (Digital Communications), 6.002 (Circuits and Electronics), 6.1010 (Fundamentals of Programming), 6.6400 (Applied Quantum and Statistical Physics), 6.9010 (Introduction to EECS via Interconnected Embedded Systems), 6.2400 (Introduction to Quantum Systems Engineering), and 6.6600 (Nanostructure Fabrication). Before joining EECS, Berggren worked as a staff member at MIT Lincoln Laboratory for seven years. Berggren also maintains an active consulting practice and has experience working with industrial and government organizations.
Berggren’s current research focuses on superconductive circuits, electronic devices, single-photon detectors for quantum applications, and electron-optical systems. He heads the Quantum Nanostructures and Nanofabrication Group, which develops nanofabrication technology at the few-nanometer length scale. The group uses these technologies to push the envelope of what is possible with photonic and electrical devices, focusing on superconductive and free-electron devices.
Berggren has received numerous prestigious awards and honors throughout his career. Most recently, he was named an MIT MacVicar Fellow in 2024. Berggren is also a fellow of the AAAS, IEEE, and the Kavli Foundation, and a recipient of the 2015 Paul T. Forman Team Engineering Award from the Optical Society of America (now Optica). In 2016, he received a Bose Fellowship and was also a recipient of the EECS department’s Frank Quick Innovation Fellowship and the Burgess (’52) & Elizabeth Jamieson Award for Excellence in Teaching.
Berggren succeeds Joel Voldman, who has served as the inaugural electrical engineering faculty head since January 2020.
“Joel has been in leadership roles since 2018, when he was named associate department head of EECS. I am deeply grateful to him for his invaluable contributions to EECS since that time,” says Asu Ozdaglar, MathWorks Professor and head of EECS, who also serves as the deputy dean of the MIT Schwarzman College of Computing. “I look forward to working with Karl now and continuing along the amazing path we embarked on in 2019.”
Three MIT students named 2026 Schwarzman Scholars
Yutao Gong, Brandon Man, and Andrii Zahorodnii will spend 2025-26 at Tsinghua University in China studying global affairs.
Three MIT students — Yutao Gong, Brandon Man, and Andrii Zahorodnii — have been awarded 2026 Schwarzman Scholarships and will join the program’s 10th cohort to pursue a master’s degree in global affairs at Tsinghua University in Beijing, China.
The MIT students were selected from a pool of over 5,000 applicants. This year’s class of 150 scholars represents 38 countries and 105 universities from around the world.
The Schwarzman Scholars program aims to develop leadership skills and deepen understanding of China’s changing role in the world. The fully funded one-year master’s program at Tsinghua University emphasizes leadership, global affairs, and China. Scholars also gain exposure to China through mentoring, internships, and experiential learning.
MIT’s Schwarzman Scholar applicants receive guidance and mentorship from the distinguished fellowships team in Career Advising and Professional Development and the Presidential Committee on Distinguished Fellowships.
Yutao Gong will graduate this spring from the Leaders for Global Operations program at the MIT Sloan School of Management, earning a dual MBA and MS degree in civil and environmental engineering with a focus on manufacturing and operations. Gong, who hails from Shanghai, China, has academic, work, and social engagement experiences in China, the United States, Jordan, and Denmark. She was previously a consultant at Boston Consulting Group working on manufacturing, agriculture, sustainability, and renewable energy-related projects, and spent two years in Chicago and one year in Greater China as a global ambassador. Gong graduated magna cum laude from Duke University, where she organized the Duke China-U.S. Summit, with double majors in environmental science and statistics.
Brandon Man, from Canada and Hong Kong, is a master’s student in the Department of Mechanical Engineering at MIT, where he studies generative artificial intelligence (genAI) for engineering design. Previously, he graduated from Cornell University magna cum laude with honors in computer science. With a wealth of experience in robotics — from assistive robots to next-generation spacesuits for NASA to Tencent’s robot dog, Max — he is now a co-founder of Sequestor, a genAI-powered data aggregation platform that enables carbon credit investors to perform faster due diligence. His goal is to bridge the best practices of the Eastern and Western tech worlds.
Andrii Zahorodnii, from Ukraine, will graduate this spring with a bachelor of science and a master of engineering degree in computer science and cognitive sciences. An engineer as well as a neuroscientist, he has conducted research at MIT with Professor Guangyu Robert Yang’s MetaConscious Group and the Fiete Lab. Zahorodnii is passionate about using AI to uncover insights into human cognition, leading to more-informed, empathetic, and effective global decision-making and policy. Besides driving the exchange of ideas as a TEDxMIT organizer, he strives to empower and inspire future leaders internationally and in Ukraine through the Ukraine Leadership and Technology Academy he founded.
This fast and agile robotic insect could someday aid in mechanical pollination
With a new design, the bug-sized bot was able to fly 100 times longer than prior versions.
With a more efficient method for artificial pollination, farmers in the future could grow fruits and vegetables inside multilevel warehouses, boosting yields while mitigating some of agriculture’s harmful impacts on the environment.
To help make this idea a reality, MIT researchers are developing robotic insects that could someday swarm out of mechanical hives to rapidly perform precise pollination. However, even the best bug-sized robots are no match for natural pollinators like bees when it comes to endurance, speed, and maneuverability.
Now, inspired by the anatomy of these natural pollinators, the researchers have overhauled their design to produce tiny, aerial robots that are far more agile and durable than prior versions.
The new bots can hover for about 1,000 seconds, which is more than 100 times longer than previously demonstrated. The robotic insect, which weighs less than a paperclip, can fly significantly faster than similar bots while completing acrobatic maneuvers like double aerial flips.
The revamped robot is designed to boost flight precision and agility while minimizing the mechanical stress on its artificial wing flexures, which enables faster maneuvers, increased endurance, and a longer lifespan.
The new design also has enough free space that the robot could carry tiny batteries or sensors, which could enable it to fly on its own outside the lab.
“The amount of flight we demonstrated in this paper is probably longer than the entire amount of flight our field has been able to accumulate with these robotic insects. With the improved lifespan and precision of this robot, we are getting closer to some very exciting applications, like assisted pollination,” says Kevin Chen, an associate professor in the Department of Electrical Engineering and Computer Science (EECS), head of the Soft and Micro Robotics Laboratory within the Research Laboratory of Electronics (RLE), and the senior author of an open-access paper on the new design.
Chen is joined on the paper by co-lead authors Suhan Kim and Yi-Hsuan Hsiao, who are EECS graduate students; as well as EECS graduate student Zhijian Ren and summer visiting student Jiashu Huang. The research appears today in Science Robotics.
Boosting performance
Prior versions of the robotic insect were composed of four identical units, each with two wings, combined into a rectangular device about the size of a microcassette.
“But there is no insect that has eight wings. In our old design, the performance of each individual unit was always better than the assembled robot,” Chen says.
This performance drop was partly caused by the arrangement of the wings, which would blow air into each other when flapping, reducing the lift forces they could generate.
The new design chops the robot in half. Each of the four identical units now has one flapping wing pointing away from the robot’s center, stabilizing the wings and boosting their lift forces. With half as many wings, this design also frees up space so the robot could carry electronics.
In addition, the researchers created more complex transmissions that connect the wings to the actuators, or artificial muscles, that flap them. These durable transmissions, which required the design of longer wing hinges, reduce the mechanical strain that limited the endurance of past versions.
“Compared to the old robot, we can now generate control torque three times larger than before, which is why we can do very sophisticated and very accurate path-finding flights,” Chen says.
Yet even with these design innovations, there is still a gap between the best robotic insects and the real thing. For instance, a bee has only two wings, yet it can perform rapid and highly controlled motions.
“The wings of bees are finely controlled by a very sophisticated set of muscles. That level of fine-tuning is something that truly intrigues us, but we have not yet been able to replicate,” he says.
Less strain, more force
The motion of the robot’s wings is driven by artificial muscles. These tiny, soft actuators are made from layers of elastomer sandwiched between two very thin carbon nanotube electrodes and then rolled into a squishy cylinder. The actuators rapidly compress and elongate, generating mechanical force that flaps the wings.
In previous designs, when the actuators’ movements reached the extremely high frequencies needed for flight, the devices often started buckling, which reduced the power and efficiency of the robot. The new transmissions inhibit this bending-buckling motion, reducing the strain on the artificial muscles and enabling them to apply more force to flap the wings.
Another design change is a long wing hinge that reduces the torsional stress experienced during the flapping-wing motion. Fabricating the hinge, which is about 2 centimeters long but just 200 microns in diameter, was among the team’s greatest challenges.
“If you have even a tiny alignment issue during the fabrication process, the wing hinge will be slanted instead of rectangular, which affects the wing kinematics,” Chen says.
After many attempts, the researchers perfected a multistep laser-cutting process that enabled them to precisely fabricate each wing hinge.
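Basic torsion mechanics suggests why a longer hinge helps (a textbook approximation that treats the hinge as a uniform circular rod, which the fabricated part only roughly is): for a rod of shear modulus $G$, diameter $d$, and length $L$ twisted through an angle $\theta$ on each wing stroke, the peak shear stress is

$$\tau_{\max} = \frac{G\,\theta\,d}{2L}.$$

For a fixed twist per stroke, stress falls in inverse proportion to hinge length, so the 2-centimeter hinge buys endurance at the cost of a harder fabrication problem.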
With all four units in place, the new robotic insect can hover for more than 1,000 seconds, which equates to almost 17 minutes, without showing any degradation of flight precision.
“When my student Nemo was performing that flight, he said it was the slowest 1,000 seconds he had spent in his entire life. The experiment was extremely nerve-racking,” Chen says.
The new robot also reached an average speed of 35 centimeters per second, the fastest flight researchers have reported, while performing body rolls and double flips. It can even precisely track a trajectory that spells M-I-T.
“At the end of the day, we’ve shown flight that is 100 times longer than anyone else in the field has been able to do, so this is an extremely exciting result,” he says.
From here, Chen and his students want to see how far they can push this new design, with the goal of achieving flight for longer than 10,000 seconds.
They also want to improve the precision of the robots so they could land and take off from the center of a flower. In the long run, the researchers hope to install tiny batteries and sensors onto the aerial robots so they could fly and navigate outside the lab.
“This new robot platform is a major result from our group and leads to many exciting directions. For example, incorporating sensors, batteries, and computing capabilities on this robot will be a central focus in the next three to five years,” Chen says.
This research is funded, in part, by the U.S. National Science Foundation and a MathWorks Fellowship.
How one brain circuit encodes memories of both places and events
A new computational model explains how neurons linked to spatial navigation can also help store episodic memories.
Nearly 50 years ago, neuroscientists discovered cells within the brain’s hippocampus that store memories of specific locations. These cells also play an important role in storing memories of events, known as episodic memories. While the mechanism of how place cells encode spatial memory has been well-characterized, it has remained a puzzle how they encode episodic memories.
A new model developed by MIT researchers explains how those place cells can be recruited to form episodic memories, even when there’s no spatial component. According to this model, place cells, along with grid cells found in the entorhinal cortex, act as a scaffold that can be used to anchor memories as a linked series.
“This model is a first-draft model of the entorhinal-hippocampal episodic memory circuit. It’s a foundation to build on to understand the nature of episodic memory. That’s the thing I’m really excited about,” says Ila Fiete, a professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the new study.
The model accurately replicates several features of biological memory systems, including the large storage capacity, gradual degradation of older memories, and the ability of people who compete in memory competitions to store enormous amounts of information in “memory palaces.”
MIT Research Scientist Sarthak Chandra and Sugandha Sharma PhD ’24 are the lead authors of the study, which appears today in Nature. Rishidev Chaudhuri, an assistant professor at the University of California at Davis, is also an author of the paper.
An index of memories
To encode spatial memory, place cells in the hippocampus work closely with grid cells — a special type of neuron that fires at many different locations, arranged geometrically in a regular pattern of repeating triangles. Together, a population of grid cells forms a lattice of triangles representing a physical space.
In addition to helping us recall places where we’ve been, these hippocampal-entorhinal circuits also help us navigate new locations. From human patients, it’s known that these circuits are also critical for forming episodic memories, which might have a spatial component but mainly consist of events, such as how you celebrated your last birthday or what you had for lunch yesterday.
“The same hippocampal and entorhinal circuits are used not just for spatial memory, but also for general episodic memory,” Fiete says. “The question you can ask is what is the connection between spatial and episodic memory that makes them live in the same circuit?”
Two hypotheses have been proposed to account for this overlap in function. One is that the circuit is specialized to store spatial memories because those types of memories — remembering where food was located or where predators were seen — are important to survival. Under this hypothesis, this circuit encodes episodic memories as a byproduct of spatial memory.
An alternative hypothesis suggests that the circuit is specialized to store episodic memories, but also encodes spatial memory because location is one aspect of many episodic memories.
In this work, Fiete and her colleagues proposed a third option: that the peculiar tiling structure of grid cells and their interactions with hippocampus are equally important for both types of memory — episodic and spatial. To develop their new model, they built on computational models that her lab has been developing over the past decade, which mimic how grid cells encode spatial information.
“We reached the point where I felt like we understood on some level the mechanisms of the grid cell circuit, so it felt like the time to try to understand the interactions between the grid cells and the larger circuit that includes the hippocampus,” Fiete says.
In the new model, the researchers hypothesized that grid cells interacting with hippocampal cells can act as a scaffold for storing either spatial or episodic memory. Each activation pattern within the grid defines a “well,” and these wells are spaced out at regular intervals. The wells don’t store the content of a specific memory, but each one acts as a pointer to a specific memory, which is stored in the synapses between the hippocampus and the sensory cortex.
When the memory is triggered later from fragmentary pieces, grid and hippocampal cell interactions drive the circuit state into the nearest well, and the state at the bottom of the well connects to the appropriate part of the sensory cortex to fill in the details of the memory. The sensory cortex is much larger than the hippocampus and can store vast amounts of memory.
“Conceptually, we can think about the hippocampus as a pointer network. It’s like an index that can be pattern-completed from a partial input, and that index then points toward sensory cortex, where those inputs were experienced in the first place,” Fiete says. “The scaffold doesn’t contain the content, it only contains this index of abstract scaffold states.”
Furthermore, events that occur in sequence can be linked together: Each well in the grid cell-hippocampal network efficiently stores the information that is needed to activate the next well, allowing memories to be recalled in the right order.
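To make the mechanism concrete, here is a minimal toy sketch of the scaffold idea in Python. It is not the study’s published model; the network sizes, the random well patterns, and the Hebbian read-out are simplifying assumptions made for illustration. It shows the three ingredients described above: pattern completion to the nearest well, a pointer from each well to content stored elsewhere, and links between wells that let a sequence replay in order.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (hypothetical, chosen only for illustration)
N_SCAFFOLD = 200   # hippocampal scaffold units
N_SENSORY = 400    # sensory-cortex units that hold memory content
N_WELLS = 20       # pre-built attractor "wells" in the scaffold

# Each well is a fixed random +/-1 scaffold state; in the real circuit
# these states would be set by grid-cell dynamics, not chosen at random.
wells = rng.choice([-1.0, 1.0], size=(N_WELLS, N_SCAFFOLD))
contents = rng.choice([-1.0, 1.0], size=(N_WELLS, N_SENSORY))

# Hebbian "pointer" weights: the state at the bottom of each well
# retrieves that memory's content from the (larger) sensory store.
W_ptr = contents.T @ wells / N_SCAFFOLD
# Hebbian sequence links: each well weakly drives the next one.
W_seq = wells[1:].T @ wells[:-1] / N_SCAFFOLD

def settle(state):
    """Pattern-complete a noisy scaffold state to the nearest well."""
    overlaps = wells @ state / N_SCAFFOLD
    return wells[np.argmax(overlaps)]

# Recall from a fragmentary cue: corrupt 40% of well 3, settle, read out.
noise = np.where(rng.random(N_SCAFFOLD) < 0.4, -1.0, 1.0)
state = settle(wells[3] * noise)
print("recalled item 3:", np.mean(np.sign(W_ptr @ state) == contents[3]))

# Follow the sequence link to the next well and read out the next item.
state = settle(W_seq @ state)
print("recalled item 4:", np.mean(np.sign(W_ptr @ state) == contents[4]))
```

Both print statements should report a match near 1.0: the scaffold corrects the noisy cue, and the sequence weights carry the circuit from one indexed memory to the next.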
Modeling memory cliffs and palaces
The researchers’ new model replicates several memory-related phenomena much more accurately than existing models that are based on Hopfield networks — a type of neural network that can store and recall patterns.
While Hopfield networks offer insight into how memories can be formed by strengthening connections between neurons, they don’t perfectly model how biological memory works. In Hopfield models, every memory is recalled in perfect detail until capacity is reached. At that point, no new memories can form, and worse, attempting to add more memories erases all prior ones. This “memory cliff” doesn’t accurately mimic what happens in the biological brain, which tends to gradually forget the details of older memories while new ones are continually added.
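The memory cliff is easy to reproduce in a few lines. The sketch below uses a standard textbook Hopfield setup, not code from the paper; the network size and noise level are arbitrary choices. It stores increasing numbers of random patterns with the Hebbian rule and measures how well a noisy cue is cleaned up.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200  # neurons; classic Hopfield capacity is roughly 0.14 * N patterns

def recall_quality(n_patterns, n_flips=20, steps=20):
    """Store random patterns via the Hebbian outer-product rule, then
    check how well a corrupted cue for pattern 0 is restored."""
    patterns = rng.choice([-1.0, 1.0], size=(n_patterns, N))
    W = patterns.T @ patterns / N
    np.fill_diagonal(W, 0.0)

    state = patterns[0].copy()
    state[rng.choice(N, size=n_flips, replace=False)] *= -1.0
    for _ in range(steps):                 # iterate updates to a fixed point
        state = np.where(W @ state >= 0, 1.0, -1.0)
    return np.mean(state == patterns[0])   # 1.0 means perfect recall

for p in (10, 25, 40, 60):
    print(f"{p:2d} stored patterns -> recall overlap {recall_quality(p):.2f}")
# Below capacity (~28 patterns here) recall is essentially perfect; past
# it, recall collapses for all memories at once -- the memory cliff.
```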
The new MIT model captures findings from decades of recordings of grid and hippocampal cells made in rodents as the animals explore and forage in various environments. It also helps to explain the underlying mechanisms for a memorization strategy known as a memory palace. One of the tasks in memory competitions is to memorize the shuffled sequence of cards in one or several card decks. Competitors usually do this by assigning each card to a particular spot in a memory palace — a memory of a childhood home or other environment they know well. When they need to recall the cards, they mentally stroll through the house, visualizing each card in its spot as they go along. Counterintuitively, adding the memory burden of associating cards with locations makes recall stronger and more reliable.
The MIT team’s computational model was able to perform such tasks very well, suggesting that memory palaces take advantage of the memory circuit’s own strategy of associating inputs with a scaffold in the hippocampus, but one level down: Long-acquired memories reconstructed in the larger sensory cortex can now be pressed into service as a scaffold for new memories. This allows for the storage and recall of many more items in a sequence than would otherwise be possible.
The researchers now plan to build on their model to explore how episodic memories could be converted into cortical “semantic” memory, or the memory of facts dissociated from the specific context in which they were acquired (for example, that Paris is the capital of France); how episodes are defined; and how brain-like memory models could be integrated into modern machine learning.
The research was funded by the U.S. Office of Naval Research, the National Science Foundation under the Robust Intelligence program, the ARO-MURI award, the Simons Foundation, and the K. Lisa Yang ICoN Center.
X-ray flashes from a nearby supermassive black hole accelerate mysteriously
Their source could be the core of a dead star that’s teetering at the black hole’s edge, MIT astronomers report.
One supermassive black hole has kept astronomers glued to their scopes for the last several years. First came a surprise disappearance, and now, a precarious spinning act.
The black hole in question is 1ES 1927+654, which is about as massive as a million suns and sits in a galaxy that is 270 million light-years away. In 2018, astronomers at MIT and elsewhere observed that the black hole’s corona — a cloud of whirling, white-hot plasma — suddenly disappeared, before reassembling months later. The brief though dramatic shut-off was a first in black hole astronomy.
Members of the MIT team have now caught the same black hole exhibiting more unprecedented behavior.
The astronomers have detected flashes of X-rays coming from the black hole at a steadily increasing clip. Over a period of two years, the flashes, which occur at millihertz frequencies, sped up from one every 18 minutes to one every seven minutes. Such a dramatic acceleration in X-ray flashing has never before been seen from a black hole.
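For readers keeping track of units, the quoted intervals translate directly into those millihertz frequencies; the quick check below is simple arithmetic on the article’s own numbers, nothing more.

```python
# Flash period -> frequency: the 18- and 7-minute intervals reported
# above correspond to frequencies of roughly 1-2 millihertz.
for minutes in (18, 7):
    hz = 1.0 / (minutes * 60.0)
    print(f"one flash every {minutes:2d} min -> {hz * 1e3:.2f} mHz")
# one flash every 18 min -> 0.93 mHz
# one flash every  7 min -> 2.38 mHz
```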
The researchers explored a number of scenarios for what might explain the flashes. They believe the most likely culprit is a spinning white dwarf — an extremely compact core of a dead star that is orbiting around the black hole and getting precariously closer to its event horizon, the boundary beyond which nothing can escape the black hole’s gravitational pull. If this is the case, the white dwarf must be pulling off an impressive balancing act, as it could be coming right up to the black hole’s edge without actually falling in.
“This would be the closest thing we know of around any black hole,” says Megan Masterson, a graduate student in physics at MIT, who co-led the discovery. “This tells us that objects like white dwarfs may be able to live very close to an event horizon for a relatively extended period of time.”
The researchers present their findings today at the 245th meeting of the American Astronomical Society.
If a white dwarf is at the root of the black hole’s mysterious flashing, it would also give off gravitational waves, in a range that would be detectable by next-generation observatories such as the European Space Agency’s Laser Interferometer Space Antenna (LISA).
“These new detectors are designed to detect oscillations on the scale of minutes, so this black hole system is in that sweet spot,” says co-author Erin Kara, associate professor of physics at MIT.
The study’s other co-authors include MIT Kavli members Christos Panagiotou, Joheen Chakraborty, Kevin Burdge, Riccardo Arcodia, Ronald Remillard, and Jingyi Wang, along with collaborators from multiple other institutions.
Nothing normal
Kara and Masterson were part of the team that observed 1ES 1927+654 in 2018, as the black hole’s corona went dark, then slowly rebuilt itself over time. For a while, the newly reformed corona — a cloud of highly energetic plasma and X-rays — was the brightest X-ray-emitting object in the sky.
“It was still extremely bright, though it wasn’t doing anything new for a couple years and was kind of gurgling along. But we felt we had to keep monitoring it because it was so beautiful,” Kara says. “Then we noticed something that has never really been seen before.”
In 2022, the team looked through observations of the black hole taken by the European Space Agency’s XMM-Newton, a space-based observatory that detects and measures X-ray emissions from black holes, neutron stars, galactic clusters, and other extreme cosmic sources. They noticed that X-rays from the black hole appeared to pulse with increasing frequency. Such “quasi-periodic oscillations” have only been observed in a handful of other supermassive black holes, where X-ray flashes appear with regular frequency.
In the case of 1ES 1927+654, the flickering seemed to steadily ramp up, from every 18 minutes to every seven minutes over the span of two years.
“We’ve never seen this dramatic variability in the rate at which it’s flashing,” Masterson says. “This looked absolutely nothing like a normal supermassive black hole.”
The fact that the flashing was detected in the X-ray band points to the strong possibility that the source is somewhere very close to the black hole. The innermost regions around a black hole are extremely high-energy environments, where X-rays are produced by fast-moving, hot plasma. X-rays are less likely to be seen at farther distances, where gas circles more slowly in an accretion disk. The cooler environment of the disk can emit optical and ultraviolet light, but rarely gives off X-rays.
“Seeing something in the X-rays is already telling you you’re pretty close to the black hole,” Kara says. “When you see variability on the timescale of minutes, that’s close to the event horizon, and the first thing your mind goes to is circular motion, and whether something could be orbiting around the black hole.”
X-ray kick-up
Whatever was producing the X-ray flashes was doing so extremely close to the black hole, within a few million miles of the event horizon by the researchers’ estimate.
Masterson and Kara explored models for various astrophysical phenomena that could explain the X-ray patterns that they observed, including a possibility relating to the black hole’s corona.
“One idea is that this corona is oscillating, maybe blobbing back and forth, and if it starts to shrink, those oscillations get faster as the scales get smaller,” Masterson says. “But we’re in the very early stages of understanding coronal oscillations.”
Another promising scenario, and one whose physics scientists have a better grasp of, has to do with a daredevil of a white dwarf. According to their modeling, the researchers estimate the white dwarf could have been about one-tenth the mass of the sun. In contrast, the supermassive black hole itself is on the order of 1 million solar masses.
When any object gets this close to a supermassive black hole, gravitational waves are expected to be emitted, dragging the object closer to the black hole. As it circles closer, the white dwarf moves at a faster rate, which can explain the increasing frequency of X-ray oscillations that the team observed.
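A back-of-the-envelope orbit calculation shows why these numbers hang together. Treating each flash as one orbit and applying Newtonian Kepler’s third law to a million-solar-mass black hole is only a rough guide this close to the horizon, where general relativity matters; the physical constants below are standard, and the orbital interpretation is the scenario’s own assumption.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
C = 2.998e8          # speed of light, m/s
MILE = 1609.34       # meters per mile

M = 1e6 * M_SUN                   # ~1 million suns, per the article
r_s = 2 * G * M / C**2            # Schwarzschild radius (event horizon)
print(f"event horizon radius ~ {r_s / MILE / 1e6:.1f} million miles")

# Kepler's third law: T^2 = 4 pi^2 r^3 / (G M), solved for the radius r.
for minutes in (18, 7):
    T = minutes * 60.0
    r = (G * M * (T / (2 * math.pi)) ** 2) ** (1.0 / 3.0)
    print(f"{minutes:2d}-min period -> orbit ~ {r / MILE / 1e6:.1f} "
          f"million miles ({r / r_s:.1f} Schwarzschild radii)")
```

The orbit shrinks from roughly 10 million miles to about five million miles (just under three Schwarzschild radii) as the period drops, leaving the source a few million miles from the horizon, in line with the researchers’ estimate.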
The white dwarf is practically at the precipice of no return and is estimated to be just a few million miles from the event horizon. However, the researchers predict that the star will not fall in. While the black hole’s gravity may pull the white dwarf inward, the star is also shedding part of its outer layer into the black hole. This shedding acts as a small kick-back, such that the white dwarf — an incredibly compact object itself — can resist crossing the black hole’s boundary.
“Because white dwarfs are small and compact, they’re very difficult to shred apart, so they can be very close to a black hole,” Kara says. “If this scenario is correct, this white dwarf is right at the turnaround point, and we may see it get further away.”
The team plans to continue observing the system, with existing and future telescopes, to better understand the extreme physics at work in a black hole’s innermost environments. They are particularly excited to study the system once the space-based gravitational-wave detector LISA launches — currently planned for the mid-2030s — as the gravitational waves that the system should give off will be in a sweet spot that LISA can clearly detect.
“The one thing I’ve learned with this source is to never stop looking at it because it will probably teach us something new,” Masterson says. “The next step is just to keep our eyes open.”
Study shows how households can cut energy costs
An experiment in Amsterdam suggests providing better information to people can help move them out of “energy poverty.”
Many people around the globe are living in energy poverty, meaning they spend at least 8 percent of their annual household income on energy. Addressing this problem is not simple, but an experiment by MIT researchers shows that giving people better data about their energy use, plus some coaching on the subject, can lead them to substantially reduce their consumption and costs.
The experiment, based in Amsterdam, resulted in households cutting their energy expenses in half, on aggregate — a savings big enough to move three-quarters of them out of energy poverty.
“Our energy coaching project as a whole showed a 75 percent success rate at alleviating energy poverty,” says Joseph Llewellyn, a researcher with MIT’s Senseable City Lab and co-author of a newly published paper detailing the experiment’s results.
“Energy poverty afflicts families all over the world. With empirical evidence on which policies work, governments could focus their efforts more effectively,” says Fábio Duarte, associate director of MIT’s Senseable City Lab, and another co-author of the paper.
The paper, “Assessing the impact of energy coaching with smart technology interventions to alleviate energy poverty,” appears today in Nature Scientific Reports.
The authors are Llewellyn, who is also a researcher at the Amsterdam Institute for Advanced Metropolitan Solutions (AMS) and the KTH Royal Institute of Technology in Stockholm; Titus Venverloo, a research fellow at the MIT Senseable City Lab and AMS; Fábio Duarte, who is also a principal researcher at MIT’s Senseable City Lab; Carlo Ratti, director of the Senseable City Lab; and Cecilia Katzeff, Fredrik Johansson, and Daniel Pargman of the KTH Royal Institute of Technology.
The researchers developed the study after engaging with city officials in Amsterdam. In the Netherlands, about 550,000 households, or 7 percent of the population, are considered to be in energy poverty; in the European Union, that figure is about 50 million. In the U.S., separate research has shown that about three in 10 households report trouble paying energy bills.
To conduct the experiment, the researchers ran two versions of an energy coaching intervention. In one version, 67 households received one report on their energy usage, along with coaching about how to increase energy efficiency. In the other version, 50 households received those things as well as a smart device giving them real-time updates on their energy consumption. (All households also received some modest energy-savings improvements at the outset, such as additional insulation.)
Across the two groups, homes typically reduced monthly consumption of electricity by 33 percent and gas by 42 percent. They lowered their bills by 53 percent, on aggregate, and the percentage of income they spent on energy dropped from 10.1 percent to 5.3 percent.
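Those figures straddle the threshold that defines the problem: at 8 percent of income, a household counts as energy-poor, so the drop from 10.1 to 5.3 percent crosses the line. A one-line check, using only the article’s own numbers:

```python
# Energy-poverty check against the 8 percent threshold defined above.
THRESHOLD = 0.08
for label, share in (("before coaching", 0.101), ("after coaching", 0.053)):
    status = "in" if share >= THRESHOLD else "out of"
    print(f"{label}: {share:.1%} of income on energy -> {status} energy poverty")
```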
What were these households doing differently? Some of the biggest behavioral changes included heating only the rooms that were in use and unplugging devices when they were not needed. Both changes save energy, but their benefits were not always understood by residents before they received energy coaching.
“The range of energy literacy was quite wide from one home to the next,” Llewellyn says. “And when I went somewhere as an energy coach, it was never to moralize about energy use. I never said, ‘Oh, you’re using way too much.’ It was always working on it with the households, depending on what people need for their homes.”
Intriguingly, the homes receiving the small devices that displayed real-time energy data tended to use them for only three or four weeks following a coaching visit. After that, people seemed to lose interest in very frequent monitoring of their energy use. And yet, a few weeks of consulting the devices tended to be long enough to get people to change their habits in a lasting way.
“Our research shows that smart devices need to be accompanied by a close understanding of what drives families to change their behaviors,” Venverloo says.
As the researchers acknowledge, working with consumers to reduce their energy consumption is just one way to help people escape energy poverty. Other “structural” factors that can help include lower energy prices and more energy-efficient buildings.
On the latter note, the current paper has given rise to a new experiment Llewellyn is developing with Amsterdam officials to examine the benefits of retrofitting residential buildings to lower energy costs. In that case, local policymakers are trying to work out how to fund the retrofitting in such a way that landlords do not simply pass those costs on to tenants.
“We don’t want a household to save money on their energy bills if it also means the rent increases, because then we’ve just displaced expenses from one item to another,” Llewellyn says.
Households can also invest in products like better insulation themselves, for windows or heating components, although for low-income households, finding the money to pay for such things may not be trivial. That is especially the case, Llewellyn suggests, because energy costs can seem “invisible” and a lower priority than feeding and clothing a family.
“It’s a big upfront cost for a household that does not have 100 euros to spend,” Llewellyn says. Compared to paying for other necessities, he notes, “Energy is often the thing that tends to fall last on their list. Energy is always going to be this invisible thing that hides behind the walls, and it’s not easy to change that.”
Designing tiny filters to solve big problems
By developing new materials for separating a mixture’s components, Zachary Smith hopes to reduce costs and environmental impact across many U.S. industries.
For many industrial processes, the typical way to separate gases, liquids, or ions is with heat, using slight differences in boiling points to purify mixtures. These thermal processes account for roughly 10 percent of the energy use in the United States.
MIT chemical engineer Zachary Smith wants to reduce costs and carbon footprints by replacing these energy-intensive processes with highly efficient filters that can separate gases, liquids, and ions at room temperature.
In his lab at MIT, Smith is designing membranes with tiny pores that filter molecules based on their size. These membranes could be useful for purifying biogas, capturing carbon dioxide from power plant emissions, or generating hydrogen fuel.
“We’re taking materials that have unique capabilities for separating molecules and ions with precision, and applying them to applications where the current processes are not efficient, and where there’s an enormous carbon footprint,” says Smith, an associate professor of chemical engineering.
Smith and several former students have founded a company called Osmoses that is working to develop these materials for large-scale use in gas purification. Removing the need for high temperatures in these widespread industrial processes could cut their energy consumption by as much as 90 percent.
“I would love to see a world where we could eliminate thermal separations, and where heat is no longer a problem in creating the things that we need and producing the energy that we need,” Smith says.
Hooked on research
As a high school student, Smith was drawn to engineering but didn’t have many engineering role models. Both of his parents were physicians, and they always encouraged him to work hard in school.
“I grew up without knowing many engineers, and certainly no chemical engineers. But I knew that I really liked seeing how the world worked. I was always fascinated by chemistry and seeing how mathematics helped to explain this area of science,” recalls Smith, who grew up near Harrisburg, Pennsylvania. “Chemical engineering seemed to have all those things built into it, but I really had no idea what it was.”
At Penn State University, Smith worked with a professor named Henry “Hank” Foley on a research project designing carbon-based materials to create a “molecular sieve” for gas separation. Through a time-consuming and iterative layering process, he created a sieve that could purify oxygen and nitrogen from air.
“I kept adding more and more coatings of a special material that I could subsequently carbonize, and eventually I started to get selectivity. In the end, I had made a membrane that could sieve molecules that only differed by 0.18 angstrom in size,” he says. “I got hooked on research at that point, and that’s what led me to do more things in the area of membranes.”
After graduating from college in 2008, Smith pursued graduate studies in chemical engineering at the University of Texas at Austin. There, he continued developing membranes for gas separation, this time using a different class of materials — polymers. By controlling polymer structure, he was able to create films with pores that filter out specific molecules, such as carbon dioxide or other gases.
“Polymers are a type of material that you can actually form into big devices that can integrate into world-class chemical plants. So, it was exciting to see that there was a scalable class of materials that could have a real impact on addressing questions related to CO2 and other energy-efficient separations,” Smith says.
After finishing his PhD, he decided he wanted to learn more chemistry, which led him to a postdoctoral fellowship at the University of California at Berkeley.
“I wanted to learn how to make my own molecules and materials. I wanted to run my own reactions and do it in a more systematic way,” he says.
At Berkeley, he learned how to make compounds called metal-organic frameworks (MOFs) — cage-like molecules that have potential applications in gas separation and many other fields. He also realized that while he enjoyed chemistry, he was definitely a chemical engineer at heart.
“I learned a ton when I was there, but I also learned a lot about myself,” he says. “As much as I love chemistry, work with chemists, and advise chemists in my own group, I’m definitely a chemical engineer, really focused on the process and application.”
Solving global problems
While interviewing for faculty jobs, Smith found himself drawn to MIT because of the mindset of the people he met.
“I began to realize not only how talented the faculty and the students were, but the way they thought was very different than other places I had been,” he says. “It wasn’t just about doing something that would move their field a little bit forward. They were actually creating new fields. There was something inspirational about the type of people that ended up at MIT who wanted to solve global problems.”
In his lab at MIT, Smith is now tackling some of those global problems, including water purification, critical element recovery, renewable energy, battery development, and carbon sequestration.
In close collaboration with Yan Xia, a professor at Stanford University, Smith recently developed gas separation membranes that incorporate a novel type of polymer known as “ladder polymers,” which are currently being scaled for deployment at his startup. Historically, using polymers for gas separation has been limited by a tradeoff between permeability and selectivity — that is, membranes that permit a faster flow of gases tend to be less selective, allowing impurities to get through.
Using ladder polymers, which consist of double strands connected by rung-like bonds, the researchers were able to create gas separation membranes that are both highly permeable and very selective. The boost in permeability — a 100- to 1,000-fold improvement over earlier materials — could enable membranes to replace some of the high-energy techniques now used to separate gases, Smith says.
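That tradeoff is often pictured as an empirical upper-bound line on a log-log plot of selectivity against permeability (the “Robeson upper bound” for a given gas pair); materials that land above the line beat the historical compromise. The sketch below illustrates only the shape of such a bound; the prefactor and slope are hypothetical placeholders, not measured values for any real gas pair.

```python
# Illustrative permeability/selectivity tradeoff: on a log-log plot the
# best achievable selectivity falls as permeability rises. The constants
# K and LAM below are hypothetical, chosen only to show the shape.
K = 1000.0   # hypothetical bound prefactor
LAM = 0.5    # hypothetical log-log slope of the upper bound

def selectivity_bound(permeability_barrer: float) -> float:
    """Toy upper-bound selectivity for a given permeability (in Barrer)."""
    return K * permeability_barrer ** -LAM

for perm in (1, 10, 100, 1000):
    print(f"permeability {perm:4d} Barrer -> selectivity bound ~ "
          f"{selectivity_bound(perm):6.1f}")
# A material that sits above this line -- as the ladder-polymer membranes
# reportedly do -- is both more permeable and more selective than the
# historical tradeoff would allow.
```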
“This allows you to envision large-scale industrial problems solved with miniaturized devices,” he says. “If you can really shrink down the system, then the solutions we’re developing in the lab could easily be applied to big industries like the chemicals industry.”
These and other advances have grown out of the work of the collaborators, students, postdocs, and researchers on Smith’s team.
“I have a great research team of talented and hard-working students and postdocs, and I get to teach on topics that have been instrumental in my own professional career,” Smith says. “MIT has been a playground to explore and learn new things. I am excited for what my team will discover next, and grateful for an opportunity to help solve many important global problems.”
Professor William Thilly, whose research illuminated the effects of mutagens on human cells, dies at 79
A professor of genetics, toxicology, and biological engineering, Thilly pushed himself and his students to develop solutions to real-world problems.
William Thilly ’67, ScD ’71, a professor in MIT’s Department of Biological Engineering, died Dec. 24 at his home in Winchester, Massachusetts. He was 79.
Thilly, a pioneer in the study of human genetic mutations, had been a member of the MIT faculty since 1972. Throughout his career, he developed novel ways to measure how environmental mutagens affect human cells, creating assays that are now widely used in toxicology and pharmaceutical development.
He also served as a director of MIT’s Center for Environmental Health Sciences and in the 1980s established MIT’s first Superfund research program — an example of his dedication to ensuring that MIT’s research would have a real-world impact, colleagues say.
“He really was a giant in the field,” says Bevin Engelward, a professor of biological engineering at MIT. “He took his scientific understanding and said, ‘Let’s use this as a tool to go after this real-world problem.’ One of the things that Bill really pushed people on was challenging them to ask the question, ‘Does this research matter? Is this going to make a difference in the real world?’”
In a letter to the MIT community today, MIT President Sally Kornbluth noted that Thilly’s students and postdocs recalled him as “a wise but tough mentor.”
“Many of the students and postdocs Bill trained have become industry leaders in the fields of drug evaluation and toxicology. And he changed the lives of many more MIT students through his generous support of scholarships for undergraduates from diverse educational backgrounds,” Kornbluth wrote.
Tackling real-world problems
Thilly was born on Staten Island, New York, and his family later moved to a farm in Rush Township, located in central Pennsylvania. He earned his bachelor’s degree in biology in 1967 and an ScD in nutritional biochemistry in 1971, both from MIT. In 1972, he joined the MIT faculty as an assistant professor of genetic toxicology.
His research group began with the aim of discovering the origins of disease-causing mutations in humans. In the 1970s, his lab developed an assay that allows for quantitative measurement of mutations in human cells. This test, known as the TK6 assay, allows researchers to identify compounds that are likely to cause mutations, and it is now used by pharmaceutical companies to test whether new drug compounds are safe for human use.
Unlike many previous assays, which could identify only one type of mutation at a time, Thilly’s TK6 assay could catch any mutation that would disrupt the function of a gene.
From 1980 to 2001, Thilly served as the director of MIT’s Center for Environmental Health Sciences. During that time, he assembled a cross-disciplinary team, including experts from several MIT departments, that examined the health effects of burning fossil fuels.
“Working in a coordinated manner, the team established more efficient ways to burn fuel, and, importantly, they were able to assess which combustion methods would have the least impact on human and environmental health,” says John Essigmann, the William R. and Betsy P. Leitch Professor of Chemistry, Toxicology, and Biological Engineering at MIT.
Thilly was also instrumental in developing MIT’s first Superfund program. In the 1980s, he mobilized a group of MIT researchers from different disciplines to investigate the effects of the toxic waste at a Superfund site in Woburn, Massachusetts, and help devise remediation plans.
Bringing together scientists and engineers from different fields, who were at the time very siloed within their own departments, was a feat of creativity and leadership, Thilly’s colleagues say, and an example of his dedication to tackling real-world problems.
Later, Thilly used a protocol known as denaturing gel electrophoresis to visualize environmentally caused mutations by their ability to alter the melting temperature of the DNA duplex. He used this tool to study tissue from people who had been exposed to agents such as tobacco smoke, allowing him to create a rough draft of the mutational spectrum that such agents produce in human cells. This work led him to propose that the mutations in many cancers are likely caused by inaccurate copying of DNA by specialized polymerases known as non-replicative polymerases.
One of Thilly’s most significant discoveries was that cells deficient in a DNA repair process called mismatch repair are resistant to certain DNA-damaging agents. Later work by Nobel laureate Paul Modrich ’68 showed how cells lacking mismatch repair become resistant to anticancer drugs.
In 2001, Thilly joined MIT’s newly formed Department of Biological Engineering. During the 2000s, Thilly’s wife, MIT Research Scientist Elena Gostjeva, discovered an unusual, bell-shaped structure in the nuclei of plant cells, known as metakaryotic nuclei. Thilly and Gostjeva later found these nuclei in mammalian stem cells. In recent years, they were exploring the possibility that these cells give rise to tumors, and investigating potential compounds that could be used to combat that type of tumor growth.
A wrestling mentality
Thilly was a dedicated teacher and received the Everett Moore Baker Award for Excellence in Undergraduate Teaching in 1974. In 1991, a series of courses he helped to create, called Chemicals in the Environment, was honored with the Irwin Sizer Award for the Most Significant Improvement to MIT Education. Many of the students and postdocs that he trained have become industry leaders in drug evaluation and toxicant identification. This past semester, Thilly and Gostjeva co-taught two undergraduate courses in the biology of metakaryotic stem cells.
A champion wrestler in his youth, Thilly told colleagues that he considered teaching “a contact sport.” “He had this wrestling mentality. He wanted a challenge,” Engelward says. “Whatever the issue was scientifically that he felt needed to be hashed out, he wanted to battle it out.”
In addition to wrestling, Thilly was also a captain of the MIT Rugby Football Club in the 1970s, and one of the founders of the New England Rugby Football Union.
Thilly loved to talk about science and often held court in the hallway outside his office on the seventh floor of Building 16, regaling colleagues and students who happened to come by.
“Bill was the kind of guy who would pull you aside and then start going on and on about some aspect of his work and why it was so important. And he was very passionate about it,” Essigmann recalls. “He was also an amazing scholar of the early literature of not only genetic toxicology, but molecular biology. His scholarship was extremely good, and he'd be the go-to person if you had a question about something.”
Thilly also considered it his duty to question students about their work and to make sure that they were thinking about whether their research would have real-world applications.
“He really was tough, but I think he really did see it as his responsibility. I think he felt like he needed to always be pushing people to do better when it comes to the real world,” Engelward says. “That’s a huge legacy. He affected probably hundreds of students, because he would go to the graduate student seminar series and he was always asking questions, always pushing people.”
Thilly was a strong proponent of recruiting more underserved students to MIT and made many trips to historically Black colleges and universities to recruit applicants. He also donated more than $1 million to scholarship funds for underserved students, according to colleagues.
While an undergraduate at MIT, Thilly also made a significant mark in the world of breakfast cereals. During the summer of 1965, he worked as an intern at Kellogg’s, where he was given the opportunity to create his own cereal, according to the breakfast food blog Extra Crispy. His experiments with dried apples and leftover O’s led to the invention of the cereal that eventually became Apple Jacks.
In addition to his wife, Thilly is survived by five children: William, Grethe, Walter, and Audrey Thilly, and Fedor Gostjeva; a brother, Walter; a sister, Joan Harmon; and two grandchildren.