Lithium-ion batteries are the workhorses of home electronics and are powering an electric revolution in transportation. But they are not suitable for every application.
Key drawbacks are their flammability and toxicity, which make large-scale lithium-ion energy storage a poor fit in densely populated city centers and near metal processing or chemical manufacturing plants.
Now Alsym Energy has developed a nonflammable, nontoxic alternative to lithium-ion batteries to help renewables like wind and solar bridge the gap in a broader range of sectors. The company’s electrodes use relatively stable, abundant materials, and its electrolyte is primarily water with some nontoxic add-ons.
“Renewables are intermittent, so you need storage, and to really solve the decarbonization problem, we need to be able to make these batteries anywhere at low cost,” says Alsym co-founder and MIT Professor Kripa Varanasi.
The company believes its batteries, which are currently being tested by potential customers around the world, hold enormous potential to decarbonize the high-emissions industrial manufacturing sector, and it sees other applications ranging from mining to powering data centers, homes, and utilities.
“We are enabling a decarbonization of markets that was not possible before,” Alsym co-founder and CEO Mukesh Chatter says. “No chemical or steel plant would dare put a lithium battery close to their premises because of the flammability, and industrial emissions are a much bigger problem than passenger cars. With this approach, we’re able to offer a new path.”
Helping 1 billion people
Chatter started a telecommunications company with serial entrepreneurs and longtime members of the MIT community Ray Stata ’57, SM ’58 and Alec Dingee ’52 in 1997. Since the company was acquired in 1999, Chatter and his wife have started other ventures and invested in some startups, but after losing his mother to cancer in 2012, Chatter decided he wanted to maximize his impact by only working on technologies that could reach 1 billion people or more.
The problem Chatter decided to focus on was electricity access.
“The intent was to light up the homes of at least 1 billion people around the world who either did not have electricity, or only got it part of the time, condemning them basically to a life of poverty in the 19th century,” Chatter says. “When you don’t have access to electricity, you also don’t have the internet, cell phones, education, etc.”
To solve the problem, Chatter decided to fund research into a new kind of battery. The battery had to be cheap enough to be adopted in low-resource settings, safe enough to be deployed in crowded areas, and work well enough to support two light bulbs, a fan, a refrigerator, and an internet modem.
At first, Chatter was surprised how few takers he had to start the research, even from researchers at the top universities in the world.
“It’s a burning problem, but the risk of failure was so high that nobody wanted to take the chance,” Chatter recalls.
He finally found his partners in Varanasi, Rensselaer Polytechnic Institute Professor Nikhil Koratkar and Rensselaer researcher Rahul Mukherjee. Varanasi, who notes he’s been at MIT for 22 years, says the Institute’s culture gave him the confidence to tackle big problems.
“My students, postdocs, and colleagues are inspirational to me,” he says. “The MIT ecosystem infuses us with this resolve to go after problems that look insurmountable.”
Varanasi leads an interdisciplinary lab at MIT dedicated to understanding physicochemical and biological phenomena. His research has spurred the creation of materials, devices, products, and processes to tackle challenges in energy, agriculture, and other sectors, as well as startup companies to commercialize this work.
“Working at the interfaces of matter has unlocked numerous new research pathways across various fields, and MIT has provided me the creative freedom to explore, discover, and learn, and apply that knowledge to solve critical challenges,” he says. “I was able to draw significantly from my learnings as we set out to develop the new battery technology.”
Alsym’s founding team began by trying to design a battery from scratch based on new materials that could fit the parameters defined by Chatter. To make it nonflammable and nontoxic, the founders wanted to avoid lithium and cobalt.
After evaluating many different chemistries, the founders settled on Alsym’s current approach, which was finalized in 2020.
Although the full makeup of Alsym’s battery is still under wraps as the company waits to be granted patents, one of Alsym’s electrodes is made mostly of manganese oxide while the other is primarily made of a metal oxide. The electrolyte is primarily water.
There are several advantages to Alsym’s new battery chemistry. Because the battery is inherently safer and more sustainable than lithium-ion, the company doesn’t need the same safety protections or cooling equipment, and it can pack its batteries close to each other without fear of fires or explosions. Varanasi also says the battery can be manufactured in any of today’s lithium-ion plants with minimal changes and at significantly lower operating cost.
“We are very excited right now,” Chatter says. “We started out wanting to light up 1 billion people’s homes, and now in addition to the original goal we have a chance to impact the entire globe if we are successful at cutting back industrial emissions.”
A new platform for energy storage
Although the batteries don’t quite reach the energy density of lithium-ion batteries, Varanasi says Alsym is first among alternative chemistries at the system level. He says 20-foot containers of Alsym’s batteries can provide 1.7 megawatt-hours of electricity. The batteries can also fast-charge over four hours and can be configured to discharge over anywhere from two to 110 hours.
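For a rough sense of what that configurability means in practice, the sketch below converts the figures quoted above into average power per container. It uses only the numbers in this article; actual deliverable power will depend on the system design.

```python
# Back-of-the-envelope power range for one 20-foot container of Alsym
# batteries, using only the figures quoted above (1.7 MWh capacity,
# discharge windows from 2 to 110 hours). Illustrative only.
capacity_mwh = 1.7

for hours in (2, 4, 24, 110):
    avg_power_kw = capacity_mwh * 1000 / hours  # average discharge power, kW
    print(f"{hours:>3}-hour discharge ≈ {avg_power_kw:,.0f} kW average")
```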
“We’re highly configurable, and that’s important because depending on where you are, you can sometimes run on two cycles a day with solar, and in combination with wind, you could truly get 24/7 electricity,” Chatter says. “The need to do multiday or long duration storage is a small part of the market, but we support that too.”
Alsym has been manufacturing prototypes at a small facility in Woburn, Massachusetts, for the last two years, and early this year it expanded its capacity and began to send samples to customers for field testing.
In addition to large utilities, the company is working with municipalities, generator manufacturers, and providers of behind-the-meter power for residential and commercial buildings. The company is also in discussions with large chemical manufacturers and metal processing plants to provide energy storage systems that reduce their carbon footprints, something they say was not feasible with lithium-ion batteries, due to their flammability, or with other nonlithium batteries, due to their large space requirements.
Another critical area is data centers. With the growth of AI, the demand for data centers — and their energy consumption — is set to surge.
“We must power the AI and digitization revolution without compromising our planet,” says Varanasi, adding that lithium batteries are unsuitable for co-location with data centers due to flammability risks. “Alsym batteries are well-positioned to offer a safer, more sustainable alternative. Intermittency is also a key issue for electrolyzers used in green hydrogen production and other markets.”
Varanasi sees Alsym as a platform company, and Chatter says Alsym is already working on other battery chemistries that have higher densities and maintain performance at even more extreme temperatures.
“When you use a single material in any battery, and the whole world starts to use it, you run out of that material,” Varanasi says. “What we have is a platform that has enabled us to come up with not just one chemistry, but at least three or four chemistries targeted at different applications, so no one particular set of materials will be stressed in terms of supply.”
3 Questions: Claire Wang on training the brain for memory sports
The MIT sophomore and award-winning memory champion explains what these competitions are all about and why you might want to build a “memory palace.”
On Nov. 10, some of the country’s top memorizers converged on MIT’s Kresge Auditorium to compete in a “Tournament of Memory Champions” in front of a live audience.
The competition was split into four events: long-term memory, words-to-remember, auditory memory, and double-deck of cards, in which competitors must memorize the exact order of two decks of cards. In between the events, MIT faculty who are experts in the science of memory provided short talks and demos about memory and how to improve it. Among the competitors was MIT’s own Claire Wang, a sophomore majoring in electrical engineering and computer science. Wang has competed in memory sports for years, a hobby that has taken her around the world to learn from some of the best memorists on the planet. At the tournament, she tied for first place in the words-to-remember competition.
The event commemorated the 25th anniversary of the USA Memory Championship Organization (USAMC). USAMC sponsored the event in partnership with MIT’s McGovern Institute for Brain Research, the Department of Brain and Cognitive Sciences, the MIT Quest for Intelligence, and the company Lumosity.
MIT News sat down with Wang to learn more about her experience with memory competitions — and see if she had any advice for those of us with less-than-amazing memory skills.
Q: How did you come to get involved in memory competitions?
A: When I was in middle school, I read the book “Moonwalking with Einstein,” which is about a journalist’s journey from average memory to being named memory champion in 2006. My parents were also obsessed with this TV show where people were memorizing decks of cards and performing other feats of memory. I had already known about the concept of “memory palaces,” so I was inspired to explore memory sports. Somehow, I convinced my parents to let me take a gap year after seventh grade, and I travelled the world going to competitions and learning from memory grandmasters. I got to know the community in that time and I got to build my memory system, which was really fun. I did a lot less of those competitions after that year, aside from some subsequent competitions with the USA Memory Championship, but it’s still fun to have this ability.
Q: What was the Tournament of Memory Champions like?
A: USAMC invited a lot of winners from previous years to compete, which was really cool. It was nice seeing a lot of people I haven’t seen in years. I didn’t compete in every event because I was too busy to do the long-term memory, which takes you two weeks of memorization work. But it was a really cool experience. I helped a bit with the brainstorming beforehand because I know one of the professors running it. We thought about how to give the talks and structure the event.
Then I competed in the words event, which is when they give you 300 words over 15 minutes, and the competitors have to recall each one in order in a round robin competition. You got two strikes. A lot of other competitions just make you write the words down. The round robin makes it more fun for people to watch. I tied with someone else — I made a dumb mistake — so I was kind of sad in hindsight, but being tied for first is still great.
Since I hadn't done this in a while (and I was coming back from a trip where I didn’t get much sleep), I was a bit nervous that my brain wouldn’t be able to remember anything, and I was pleasantly surprised I didn’t just blank on stage. Also, since I hadn’t done this in a while, a lot of my loci and memory palaces were forgotten, so I had to speed-review them before the competition. The words event doesn’t get easier over time — it’s just 300 random words (which could range from “disappointment” to “chair”) and you just have to remember the order.
Q: What is your approach to improving memory?
A: The whole idea is that we memorize images, feelings, and emotions much better than numbers or random words. The way it works in practice is we make an ordered set of locations in a “memory palace.” The palace could be anything. It could be a campus or a classroom or a part of a room, but you imagine yourself walking through this space, so there’s a specific order to it, and in every location I place certain information. This is information related to what I’m trying to remember. I have pictures I associate with words and I have specific images I correlate with numbers. Once you have a correlated image system, all you need to remember is a story, and then when you recall, you translate that back to the original information.
Doing memory sports really helps you with visualization, and being able to visualize things faster and better helps you remember things better. You start remembering with spaced repetition that you can talk yourself through. Allowing things to have an emotional connection is also important, because you remember emotions better. Doing memory competitions made me want to study neuroscience and computer science at MIT.
The specific memory sports techniques are not as useful in everyday life as you’d think, because a lot of the information we learn is more operative and requires intuitive understanding, but I do think they help in some ways. First, sometimes you have to initially remember things before you can develop a strong intuition later. Also, since I have to get really good at telling a lot of stories over time, I have gotten great at visualization and manipulating objects in my mind, which helps a lot.
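For readers curious how the loci method Wang describes maps onto something concrete, here is a minimal sketch; the palace locations and the word-image pairings are invented for illustration and are not her actual system.

```python
# Minimal sketch of the method of loci described above. The palace (an
# ordered walk through familiar places) and the word-to-image pairings
# are invented for illustration; real systems are far richer.
palace = ["front door", "hallway mirror", "kitchen table", "back porch"]
images = {
    "disappointment": "a deflated balloon",
    "chair": "a giant golden throne",
    "river": "water pouring over the floor",
    "telescope": "a pirate's spyglass",
}

words_to_remember = ["disappointment", "chair", "river", "telescope"]

# Encoding: place one vivid image at each location, in order.
story = [(place, images[word]) for place, word in zip(palace, words_to_remember)]

# Recall: walk the palace in order and translate each image back to its word.
image_to_word = {img: word for word, img in images.items()}
recalled = [image_to_word[img] for _, img in story]
print(recalled == words_to_remember)  # True
```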
Tunable ultrasound propagation in microscale metamaterials
New framework advances experimental capabilities, including design and characterization, of microscale acoustic metamaterials.
Acoustic metamaterials — architected materials that have tailored geometries designed to control the propagation of acoustic or elastic waves through a medium — have been studied extensively through computational and theoretical methods. Physical realizations of these materials to date have been restricted to large sizes and low frequencies.
“The multifunctionality of metamaterials — being simultaneously lightweight and strong while having tunable acoustic properties — makes them great candidates for use in extreme-condition engineering applications,” explains Carlos Portela, the Robert N. Noyce Career Development Chair and assistant professor of mechanical engineering at MIT. “But challenges in miniaturizing and characterizing acoustic metamaterials at high frequencies have hindered progress towards realizing advanced materials that have ultrasonic-wave control capabilities.”
A new study coauthored by Portela; Rachel Sun, Jet Lem, and Yun Kai of the MIT Department of Mechanical Engineering (MechE); and Washington DeLima of the U.S. Department of Energy Kansas City National Security Campus presents a design framework for controlling ultrasound wave propagation in microscopic acoustic metamaterials. A paper on the work, “Tailored Ultrasound Propagation in Microscale Metamaterials via Inertia Design,” was recently published in the journal Science Advances.
“Our work proposes a design framework based on precisely positioning microscale spheres to tune how ultrasound waves travel through 3D microscale metamaterials,” says Portela. “Specifically, we investigate how placing microscopic spherical masses within a metamaterial lattice affects how fast ultrasound waves travel through it, ultimately leading to wave guiding or focusing responses.”
Through nondestructive, high-throughput laser-ultrasonics characterization, the team experimentally demonstrates tunable elastic-wave velocities within microscale materials. They use the varied wave velocities to spatially and temporally tune wave propagation in microscale materials, also demonstrating an acoustic demultiplexer (a device that separates one acoustic signal into multiple output signals). The work paves the way for microscale devices and components that could be useful for ultrasound imaging or information transmission via ultrasound.
“Using simple geometrical changes, this design framework expands the tunable dynamic property space of metamaterials, enabling straightforward design and fabrication of microscale acoustic metamaterials and devices,” says Portela.
The research also advances experimental capabilities for fabricating and characterizing microscale acoustic metamaterials, moving them toward use in medical ultrasound and mechanical computing, and it underscores the underlying mechanics of ultrasound wave propagation in metamaterials: dynamic properties are tuned via simple geometric changes, and those changes can be described as a function of mass and stiffness. More importantly, the framework is amenable to other fabrication techniques beyond the microscale, requiring only a single constituent material and one base 3D geometry to attain broadly tunable properties.
“The beauty of this framework is that it fundamentally links physical material properties to geometric features. By placing spherical masses on a spring-like lattice scaffold, we could create direct analogies for how mass affects quasi-static stiffness and dynamic wave velocity,” says Sun, first author of the study. “I realized that we could obtain hundreds of different designs and corresponding material properties regardless of whether we vibrated or slowly compressed the materials.”
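A lumped mass-and-spring picture makes that analogy concrete. The relation below is only a schematic of the effect described above, not the paper's full model.

```latex
% Schematic lumped-element picture (not the paper's full model): a unit
% cell of lattice constant a and effective stiffness k carries the scaffold
% mass m_0 plus an added sphere of mass \Delta m. Then, roughly,
\[
  \omega_0 \sim \sqrt{\frac{k}{m_0 + \Delta m}},
  \qquad
  c_{\mathrm{eff}} \sim a\,\omega_0 \sim a \sqrt{\frac{k}{m_0 + \Delta m}},
\]
% so heavier spheres (larger \Delta m) lower the local wave speed while the
% quasi-static stiffness k remains set by the scaffold geometry, which is
% the handle used for spatial wave guiding and focusing.
```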
Reality check on technologies to remove carbon dioxide from the air
Study finds many climate-stabilization plans are based on questionable assumptions about the future cost and deployment of “direct air capture” and therefore may not bring about promised reductions.
In 2015, 195 nations plus the European Union signed the Paris Agreement and pledged to undertake plans designed to limit the global temperature increase to 1.5 degrees Celsius. Yet in 2023, the world exceeded that target for most, if not all, of the year — calling into question the long-term feasibility of achieving that target.
To do so, the world must reduce the levels of greenhouse gases in the atmosphere, and strategies for achieving levels that will “stabilize the climate” have been both proposed and adopted. Many of those strategies combine dramatic cuts in carbon dioxide (CO2) emissions with the use of direct air capture (DAC), a technology that removes CO2 from the ambient air. As a reality check, a team of researchers in the MIT Energy Initiative (MITEI) examined those strategies, and what they found was alarming: The strategies rely on overly optimistic — indeed, unrealistic — assumptions about how much CO2 could be removed by DAC. As a result, the strategies won’t perform as predicted. Nevertheless, the MITEI team recommends that work to develop the DAC technology continue so that it’s ready to help with the energy transition — even if it’s not the silver bullet that solves the world’s decarbonization challenge.
DAC: The promise and the reality
Including DAC in plans to stabilize the climate makes sense. Much work is now under way to develop DAC systems, and the technology looks promising. While companies may never run their own DAC systems, they can already buy “carbon credits” based on DAC. Today, a multibillion-dollar market exists on which entities or individuals that face high costs or excessive disruptions to reduce their own carbon emissions can pay others to take emissions-reducing actions on their behalf. Those actions can involve undertaking new renewable energy projects or “carbon-removal” initiatives such as DAC or afforestation/reforestation (planting trees in areas that have never been forested or that were forested in the past).
DAC-based credits are especially appealing for several reasons, explains Howard Herzog, a senior research engineer at MITEI. With DAC, measuring and verifying the amount of carbon removed is straightforward; the removal is immediate, unlike with planting forests, which may take decades to have an impact; and when DAC is coupled with CO2 storage in geologic formations, the CO2 is kept out of the atmosphere essentially permanently — in contrast to, for example, sequestering it in trees, which may one day burn and release the stored CO2.
Will current plans that rely on DAC be effective in stabilizing the climate in the coming years? To find out, Herzog and his colleagues Jennifer Morris and Angelo Gurgel, both MITEI principal research scientists, and Sergey Paltsev, a MITEI senior research scientist — all affiliated with the MIT Center for Sustainability Science and Strategy (CS3) — took a close look at the modeling studies on which those plans are based.
Their investigation identified three unavoidable engineering challenges that together lead to a fourth challenge — high costs for removing a single ton of CO2 from the atmosphere. The details of their findings are reported in a paper published in the journal One Earth on Sept. 20.
Challenge 1: Scaling up
When it comes to removing CO2 from the air, nature presents “a major, non-negotiable challenge,” notes the MITEI team: The concentration of CO2 in the air is extremely low — just 420 parts per million, or roughly 0.04 percent. In contrast, the CO2 concentration in flue gases emitted by power plants and industrial processes ranges from 3 percent to 20 percent. Companies now use various carbon capture and sequestration (CCS) technologies to capture CO2 from their flue gases, but capturing CO2 from the air is much more difficult. To explain, the researchers offer the following analogy: “The difference is akin to needing to find 10 red marbles in a jar of 25,000 marbles of which 24,990 are blue [the task representing DAC] versus needing to find about 10 red marbles in a jar of 100 marbles of which 90 are blue [the task for CCS].”
Given that low concentration, removing a single metric ton (tonne) of CO2 from air requires processing about 1.8 million cubic meters of air, which is roughly equivalent to the volume of 720 Olympic-sized swimming pools. And all that air must be moved across a CO2-capturing sorbent — a feat requiring large equipment. For example, one recently proposed design for capturing 1 million tonnes of CO2 per year would require an “air contactor” equivalent in size to a structure about three stories high and three miles long.
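The 1.8 million cubic meter figure can be roughly reproduced from the 420 ppm concentration. In the sketch below, the ideal-gas molar volume and the 75 percent capture fraction are illustrative assumptions made here, not numbers from the study.

```python
# Rough reproduction of the ~1.8 million m^3-per-tonne figure quoted above.
# The molar volume and the 75 percent capture fraction are assumptions made
# here for illustration, not values taken from the MITEI study.
MOLAR_MASS_CO2 = 44.0        # g/mol
MOLAR_VOLUME_AIR = 0.02445   # m^3/mol for an ideal gas near 25 C
CO2_FRACTION = 420e-6        # 420 parts per million
CAPTURE_FRACTION = 0.75      # assumed fraction of CO2 the sorbent captures

mol_co2_per_tonne = 1e6 / MOLAR_MASS_CO2                        # ~22,700 mol
mol_air_processed = mol_co2_per_tonne / (CO2_FRACTION * CAPTURE_FRACTION)
volume_m3 = mol_air_processed * MOLAR_VOLUME_AIR
print(f"{volume_m3:.2e} cubic meters of air per tonne of CO2")  # ~1.8e6
```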
Recent modeling studies project DAC deployment on the scale of 5 to 40 gigatonnes of CO2 removed per year. (A gigatonne equals 1 billion metric tonnes.) But in their paper, the researchers conclude that the likelihood of deploying DAC at the gigatonne scale is “highly uncertain.”
Challenge 2: Energy requirement
Given the low concentration of CO2 in the air and the need to move large quantities of air to capture it, it’s no surprise that even the best DAC processes proposed today would consume large amounts of energy — energy that’s generally supplied by a combination of electricity and heat. Including the energy needed to compress the captured CO2 for transportation and storage, most proposed processes require an equivalent of at least 1.2 megawatt-hours of electricity for each tonne of CO2 removed.
The source of that electricity is critical. For example, using coal-based electricity to drive an all-electric DAC process would generate 1.2 tonnes of CO2 for each tonne of CO2 captured. The result would be a net increase in emissions, defeating the whole purpose of DAC. So clearly, the energy requirement must be satisfied using either low-carbon electricity or electricity generated using fossil fuels with CCS. All-electric DAC deployed at large scale — say, 10 gigatonnes of CO2 removed annually — would require 12,000 terawatt-hours of electricity, which is more than 40 percent of total global electricity generation today.
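Both numbers in that paragraph follow from simple arithmetic on the 1.2 MWh-per-tonne figure. The sketch below assumes a coal emissions intensity of about one tonne of CO2 per megawatt-hour, a typical value consistent with the 1.2-tonne result stated above.

```python
# Back-of-the-envelope versions of the two figures above. The coal emissions
# intensity (~1 tonne CO2 per MWh) is a typical value assumed here, consistent
# with the 1.2-tonnes-emitted-per-tonne-captured figure in the text.
ELEC_PER_TONNE_MWH = 1.2   # MWh of electricity per tonne of CO2 removed
COAL_TONNES_PER_MWH = 1.0  # assumed emissions intensity of coal power

# Net emissions from running all-electric DAC entirely on coal power:
emitted_per_tonne_captured = ELEC_PER_TONNE_MWH * COAL_TONNES_PER_MWH
print(emitted_per_tonne_captured)  # 1.2 tonnes emitted per tonne captured

# Electricity needed to remove 10 gigatonnes of CO2 per year:
tonnes_per_year = 10e9
twh_per_year = tonnes_per_year * ELEC_PER_TONNE_MWH / 1e6  # MWh -> TWh
print(twh_per_year)  # 12,000 TWh per year
```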
Electricity consumption is expected to grow due to increasing overall electrification of the world economy, so low-carbon electricity will be in high demand for many competing uses — for example, in power generation, transportation, industry, and building operations. Using clean electricity for DAC instead of for reducing CO2 emissions in other critical areas raises concerns about the best uses of clean electricity.
Many studies assume that a DAC unit could also get energy from “waste heat” generated by some industrial process or facility nearby. In the MITEI researchers’ opinion, “that may be more wishful thinking than reality.” The heat source would need to be within a few miles of the DAC plant for transporting the heat to be economical; given its high capital cost, the DAC plant would need to run nonstop, requiring constant heat delivery; and heat at the temperature required by the DAC plant would have competing uses, for example, for heating buildings. Finally, if DAC is deployed at the gigatonne per year scale, waste heat will likely be able to provide only a small fraction of the needed energy.
Challenge 3: Siting
Some analysts have asserted that, because air is everywhere, DAC units can be located anywhere. But in reality, siting a DAC plant involves many complex issues. As noted above, DAC plants require significant amounts of energy, so having access to enough low-carbon energy is critical. Likewise, having nearby options for storing the removed CO2 is also critical. If storage sites or pipelines to such sites don’t exist, major new infrastructure will need to be built, and building new infrastructure of any kind is expensive and complicated, involving issues related to permitting, environmental justice, and public acceptability — issues that are, in the words of the researchers, “commonly underestimated in the real world and neglected in models.”
Two more siting needs must be considered. First, meteorological conditions must be acceptable. By definition, any DAC unit will be exposed to the elements, and factors like temperature and humidity will affect process performance and process availability. And second, a DAC plant will require some dedicated land — though how much is unclear, as the optimal spacing of units is as yet unresolved. Like wind turbines, DAC units need to be properly spaced to ensure maximum performance such that one unit is not sucking in CO2-depleted air from another unit.
Challenge 4: Cost
Considering the first three challenges, the final challenge is clear: the cost per tonne of CO2 removed is inevitably high. Recent modeling studies assume DAC costs as low as $100 to $200 per tonne of CO2 removed. But the researchers found evidence suggesting far higher costs.
To start, they cite typical costs for power plants and industrial sites that now use CCS to remove CO2 from their flue gases. The cost of CCS in such applications is estimated to be in the range of $50 to $150 per tonne of CO2 removed. As explained above, the far lower concentration of CO2 in the air will lead to substantially higher costs.
As explained under Challenge 1, the DAC units needed to capture the required amount of air are massive. The capital cost of building them will be high, given labor, materials, permitting costs, and so on. Some estimates in the literature exceed $5,000 per tonne captured per year.
Then there are the ongoing costs of energy. As noted under Challenge 2, removing 1 tonne of CO2 requires the equivalent of 1.2 megawatt-hours of electricity. If that electricity costs $0.10 per kilowatt-hour, the cost of just the electricity needed to remove 1 tonne of CO2 is $120. The researchers point out that assuming such a low price is “questionable,” given the expected increase in electricity demand, future competition for clean energy, and higher costs on a system dominated by renewable — but intermittent — energy sources.
Then there’s the cost of storage, which is ignored in many DAC cost estimates.
Clearly, many considerations show that prices of $100 to $200 per tonne are unrealistic, and assuming such low prices will distort assessments of strategies, leading them to underperform going forward.
The bottom line
In their paper, the MITEI team calls DAC a “very seductive concept.” Using DAC to suck CO2 out of the air and generate high-quality carbon-removal credits can offset reduction requirements for industries that have hard-to-abate emissions. By doing so, DAC would minimize disruptions to key parts of the world’s economy, including air travel, certain carbon-intensive industries, and agriculture. However, the world would need to generate billions of tonnes of CO2 credits at an affordable price. That prospect doesn’t look likely. The largest DAC plant in operation today removes just 4,000 tonnes of CO2 per year, and the price to buy the company’s carbon-removal credits on the market today is $1,500 per tonne.
The researchers recognize that there is room for energy efficiency improvements in the future, but DAC units will always be subject to higher work requirements than CCS applied to power plant or industrial flue gases, and there is not a clear pathway to reducing work requirements much below the levels of current DAC technologies.
Nevertheless, the researchers recommend that work to develop DAC continue “because it may be needed for meeting net-zero emissions goals, especially given the current pace of emissions.” But their paper concludes with this warning: “Given the high stakes of climate change, it is foolhardy to rely on DAC to be the hero that comes to our rescue.”
A bioinspired capsule can pump drugs directly into the walls of the GI tract
The needle-free device could be used to deliver insulin, antibodies, RNA, or other large molecules.
Inspired by the way that squids use jets to propel themselves through the ocean and shoot ink clouds, researchers from MIT and Novo Nordisk have developed an ingestible capsule that releases a burst of drugs directly into the wall of the stomach or other organs of the digestive tract.
This capsule could offer an alternative way to deliver drugs that normally have to be injected, such as insulin and other large proteins, including antibodies. This needle-free strategy could also be used to deliver RNA, either as a vaccine or a therapeutic molecule to treat diabetes, obesity, and other metabolic disorders.
“One of the longstanding challenges that we’ve been exploring is the development of systems that enable the oral delivery of macromolecules that usually require an injection to be administered. This work represents one of the next major advances in that progression,” says Giovanni Traverso, director of the Laboratory for Translational Engineering and an associate professor of mechanical engineering at MIT, a gastroenterologist at Brigham and Women’s Hospital, an associate member of the Broad Institute, and the senior author of the study.
Traverso and his students at MIT developed the new capsule along with researchers at Brigham and Women’s Hospital and Novo Nordisk. Graham Arrick SM ’20 and Novo Nordisk scientists Drago Sticker and Aghiad Ghazal are the lead authors of the paper, which appears today in Nature.
Inspired by cephalopods
Drugs that consist of large proteins or RNA typically can’t be taken orally because they are easily broken down in the digestive tract. For several years, Traverso’s lab has been working on ways to deliver such drugs orally by encapsulating them in small devices that protect the drugs from degradation and then inject them directly into the lining of the digestive tract.
Most of these capsules use a small needle or set of microneedles to deliver drugs once the device arrives in the digestive tract. In the new study, Traverso and his colleagues wanted to explore ways to deliver these molecules without any kind of needle, which could reduce the possibility of any damage to the tissue.
To achieve that, they took inspiration from cephalopods. Squids and octopuses can propel themselves by filling their mantle cavity with water, then rapidly expelling it through their siphon. By changing the force of water expulsion and pointing the siphon in different directions, the animals can control their speed and direction of travel. The siphon organ also allows cephalopods to shoot jets of ink, forming decoy clouds to distract predators.
The researchers came up with two ways to mimic this jetting action, using compressed carbon dioxide or tightly coiled springs to generate the force needed to propel liquid drugs out of the capsule. The gas or spring is kept in a compressed state by a carbohydrate trigger, which is designed to dissolve when exposed to humidity or an acidic environment such as the stomach. When the trigger dissolves, the gas or spring is allowed to expand, propelling a jet of drugs out of the capsule.
In a series of experiments using tissue from the digestive tract, the researchers calculated the pressures needed to expel the drugs with enough force that they would penetrate the submucosal tissue and accumulate there, creating a depot that would then release drugs into the tissue.
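One textbook way to relate a stored driving pressure to jet speed is Bernoulli's relation; the estimate below is a standard approximation offered for orientation, not necessarily the model the study used.

```latex
% Textbook estimate only (not necessarily the model used in the study):
% an ideal, lossless nozzle converts a driving overpressure \Delta P into
% a liquid jet of density \rho with exit velocity
\[
  v \approx \sqrt{\frac{2\,\Delta P}{\rho}} ,
\]
% so for a water-like drug solution (\rho \approx 1000\ \mathrm{kg/m^3}),
% an overpressure of a few atmospheres (\sim 3\times 10^{5}\ \mathrm{Pa})
% already yields jet speeds of roughly 25 m/s.
```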
“Aside from the elimination of sharps, another potential advantage of high-velocity collimated jets is their robustness to localization issues. In contrast to a small needle, which needs to have intimate contact with the tissue, our experiments indicated that a jet may be able to deliver most of the dose from a distance or at a slight angle,” Arrick says.
The researchers also designed the capsules so that they can target different parts of the digestive tract. One version of the capsule, which has a flat bottom and a high dome, can sit on a surface, such as the lining of the stomach, and eject drug downward into the tissue. This capsule, which was inspired by previous research from Traverso’s lab on self-orienting capsules, is about the size of a blueberry and can carry 80 microliters of drug.
The second version has a tube-like shape that allows it to align itself within a long tubular organ such as the esophagus or small intestine. In that case, the drug is ejected out toward the side wall, rather than downward. This version can deliver 200 microliters of drug.
Made of metal and plastic, the capsules can pass through the digestive tract and are excreted after releasing their drug payload.
Needle-free drug delivery
In tests in animals, the researchers showed that they could use these capsules to deliver insulin, a GLP-1 receptor agonist similar to the diabetes drug Ozempic, and a type of RNA called short interfering RNA (siRNA). This type of RNA can be used to silence genes, making it potentially useful in treating many genetic disorders.
They also showed that the concentration of the drugs in the animals’ bloodstream reached levels on the same order of magnitude as those seen when the drugs were injected with a syringe, and they did not detect any tissue damage.
The researchers envision that the ingestible capsule could be used at home by patients who need to take insulin or other injected drugs frequently. In addition to making it easier to administer drugs, especially for patients who don’t like needles, this approach also eliminates the need to dispose of sharp needles. The researchers also created and tested a version of the device that could be attached to an endoscope, allowing doctors to use it in an endoscopy suite or operating room to deliver drugs to a patient.
“This technology is a significant leap forward in oral drug delivery of macromolecule drugs like insulin and GLP-1 agonists. While many approaches for oral drug delivery have been attempted in the past, they tend to be poorly efficient in achieving high bioavailability. Here, the researchers demonstrate the ability to achieve high bioavailability in animal models with high efficiency. This is an exciting approach which could be impactful for many biologics which are currently administered through injections or intravascular infusions,” says Omid Veiseh, a professor of bioengineering at Rice University, who was not involved in the research.
The researchers now plan to further develop the capsules, in hopes of testing them in humans.
The research was funded by Novo Nordisk, the Natural Sciences and Engineering Research Council of Canada, the MIT Department of Mechanical Engineering, Brigham and Women’s Hospital, and the U.S. Advanced Research Projects Agency for Health.
Undergraduates with family income below $200,000 can expect to attend MIT tuition-free starting in 2025
Newly expanded financial aid will cover tuition costs for admitted students from 80 percent of U.S. families.
Undergraduates with family income below $200,000 can expect to attend MIT tuition-free starting next fall, thanks to newly expanded financial aid. Eighty percent of American households meet this income threshold.
And for the 50 percent of American families with income below $100,000, parents can expect to pay nothing at all toward the full cost of their students’ MIT education, which includes tuition as well as housing, dining, fees, and an allowance for books and personal expenses.
This $100,000 threshold is up from $75,000 this year, while next year’s $200,000 threshold for tuition-free attendance will increase from its current level of $140,000.
These new steps to enhance MIT’s affordability for students and families are the latest in a long history of efforts by the Institute to free up more resources to make an MIT education as affordable and accessible as possible. Toward that end, MIT has earmarked $167.3 million in need-based financial aid this year for undergraduate students — up some 70 percent from a decade ago.
“MIT’s distinctive model of education — intense, demanding, and rooted in science and engineering — has profound practical value to our students and to society,” MIT President Sally Kornbluth says. “As the Wall Street Journal recently reported, MIT is better at improving the financial futures of its graduates than any other U.S. college, and the Institute also ranks number one in the world for the employability of its graduates.”
“The cost of college is a real concern for families across the board,” Kornbluth adds, “and we’re determined to make this transformative educational experience available to the most talented students, whatever their financial circumstances. So, to every student out there who dreams of coming to MIT: Don’t let concerns about cost stand in your way.”
MIT is one of only nine colleges in the US that do not consider applicants’ ability to pay as part of the admissions process and that meet the full demonstrated financial need for all undergraduates. MIT does not expect students on aid to take loans, and, unlike many other institutions, MIT does not provide an admissions advantage to the children of alumni or donors. Indeed, 18 percent of current MIT undergraduates are first-generation college students.
“We believe MIT should be the preeminent destination for the most talented students in the country interested in an education centered on science and technology, and accessible to the best students regardless of their financial circumstances,” says Stu Schmill, MIT’s dean of admissions and student financial services.
“With the need-based financial aid we provide today, our education is much more affordable now than at any point in the past,” adds Schmill, who graduated from MIT in 1986, “even though the ‘sticker price’ of MIT is higher now than it was when I was an undergraduate.”
Last year, the median annual cost paid by an MIT undergraduate receiving financial aid was $12,938, allowing 87 percent of students in the Class of 2024 to graduate debt-free. Those who did borrow graduated with median debt of $14,844. At the same time, graduates benefit from the lifelong value of an MIT degree, with an average starting salary of $126,438 for graduates entering industry, according to MIT’s most recent survey of its graduating students.
MIT’s endowment — made up of generous gifts made by individual alumni and friends — allows the Institute to provide this level of financial aid, both now and into the future.
“Today’s announcement is a powerful expression of how much our graduates value their MIT experience,” Kornbluth says, “because our ability to provide financial aid of this scope depends on decades of individual donations to our endowment, from generations of MIT alumni and other friends. In effect, our endowment is an inter-generational gift from past MIT students to the students of today and tomorrow.”
What MIT families can expect in 2025
As noted earlier: Starting next fall, for families with income below $100,000, with typical assets, parents can expect to pay nothing toward the full cost of attendance, which includes tuition, housing, dining, fees, and allowances for books and personal expenses.
For families with income from $100,000 to $200,000, with typical assets, parents can expect to pay on a sliding scale from $0 up to a maximum of around $23,970, which is this year’s total cost for MIT housing, dining, fees, and allowances for books and personal expenses.
Put another way, next year all MIT families with income below $200,000 can expect to contribute well below $27,146, which is the annual average cost for in-state students to attend and live on campus at public universities in the US, according to the Education Data Initiative. And even among families with income above $200,000, many still receive need-based financial aid from MIT, based on their unique financial circumstances. Families can use MIT’s online calculators to estimate the cost of attendance for their specific family.
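MIT's actual aid formula is not spelled out here. Purely to illustrate what a sliding scale with the two endpoints quoted above could look like, a linear interpolation would behave as follows; this is a toy model, not MIT's calculation, and families should rely on MIT's online calculators.

```python
# Toy illustration only: MIT's actual need-based aid formula is not given in
# this article. This linearly interpolates between the two endpoints quoted
# above ($0 at $100,000 of income, ~$23,970 at $200,000), ignoring assets and
# every other factor MIT actually considers.
def illustrative_parent_contribution(income: float) -> float:
    if income <= 100_000:
        return 0.0
    if income >= 200_000:
        return 23_970.0
    return 23_970.0 * (income - 100_000) / 100_000

print(illustrative_parent_contribution(150_000))  # ~$11,985 under this toy model
```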
This past summer, MIT’s faculty-led Committee on Undergraduate Admissions and Financial Aid was publicly charged by President Kornbluth with undertaking a review of the Institute’s admissions and financial aid policies, to ensure that MIT remains as fully accessible as possible to all students, regardless of their financial circumstances. The steps announced today are the first of these recommendations to be reviewed and adopted.
In April 2019, a group of astronomers from around the globe stunned the world when they revealed the first image of a black hole — the monstrous accumulation of collapsed stars and gas that lets nothing escape, not even light. The image, which was of the black hole that sits at the core of a galaxy called Messier 87 (M87), revealed glowing gas around the center of the black hole. In March 2021, the same team produced yet another stunning image that showed the polarization of light around the black hole, revealing its magnetic field.
The "camera" that took both images is the Event Horizon Telescope (EHT), which is not one singular instrument but rather a collection of radio telescopes situated around the globe that work together to create high-resolution images by combining data from each individual telescope. Now, scientists are looking to extend the EHT into space to get an even sharper look at M87's black hole. But producing the sharpest images in the history of astronomy presents a challenge: transmitting the telescope's massive dataset back to Earth for processing. A small but powerful laser communications (lasercom) payload developed at MIT Lincoln Laboratory operates at the high data rates needed to image the aspects of interest of the black hole.
Extending baseline distances into space
The EHT created the two existing images of M87's black hole via interferometry — specifically, very long-baseline interferometry. Interferometry works by collecting light in the form of radio waves simultaneously with multiple telescopes in separate places on the globe and then comparing the phase difference of the radio waves at the various locations in order to pinpoint the direction of the source. By taking measurements with different combinations of the telescopes around the planet, the EHT collaboration — which included staff members at the Harvard-Smithsonian Center for Astrophysics (CfA) and MIT Haystack Observatory — essentially created an Earth-sized telescope in order to image the incredibly faint black hole 55 million light-years away from Earth.
With interferometry, the bigger the telescope, the better the resolution of the image. Therefore, in order to home in on even finer characteristics of these black holes, a bigger instrument is needed. Details that astronomers hope to resolve include the turbulence of the gas falling into a black hole (which drives the accumulation of matter onto the black hole through a process called accretion) and a black hole's shadow (which could help pin down where the jet emerging from M87 draws its energy). The ultimate goal is to observe a photon ring (the place where light orbits closest before escaping) around the black hole. Capturing an image of the photon ring would enable scientists to put Albert Einstein's general theory of relativity to the test.
With Earth-based telescopes, the farthest that two telescopes could be from one another is on opposite sides of the Earth, or about 13,000 kilometers apart. In addition to this maximum baseline distance, Earth-based instruments are limited by the atmosphere, which makes observing shorter wavelengths difficult. Earth's atmospheric limitations can be overcome by extending the EHT's baselines and putting at least one of the telescopes in space, which is exactly what the proposed CfA-led Black Hole Explorer (BHEX) mission aims to do.
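The payoff of a longer baseline follows from the diffraction limit of an interferometer. The 1.3 mm value used below is the EHT's well-known observing wavelength, which this article does not itself quote.

```latex
% Diffraction-limited angular resolution of an interferometer of baseline B
% observing at wavelength \lambda (the ~1.3 mm value is the EHT's usual
% observing wavelength, not a number quoted in this article):
\[
  \theta \approx \frac{\lambda}{B}
  \approx \frac{1.3\times10^{-3}\ \mathrm{m}}{1.3\times10^{7}\ \mathrm{m}}
  \approx 10^{-10}\ \mathrm{rad} \approx 20\ \mu\mathrm{as},
\]
% so moving one station into orbit lengthens B and shrinks \theta in direct
% proportion, which is the point of the BHEX concept.
```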
One of the most significant challenges that comes with this space-based concept is transfer of information. The dataset to produce the first EHT image was so massive (totaling 4 petabytes) that the data had to be put on disks and shipped to a facility for processing. Gathering information from a telescope in orbit would be even more difficult; the team would need a system that can downlink data from the space telescope to Earth at approximately 100 gigabits per second (Gbps) in order to image the desired aspects of the black hole.
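For a sense of scale, the sketch below asks how long a dataset the size of the first EHT imaging campaign would take to downlink at the target rate; it assumes continuous, lossless transmission, which a real space-to-ground link will not achieve.

```python
# Rough sense of scale for the downlink requirement, using only the numbers
# quoted above (4 petabytes for the first EHT image, ~100 Gbps target rate).
# Assumes continuous, lossless transmission, which real links won't achieve.
dataset_bits = 4e15 * 8   # 4 petabytes expressed in bits
rate_bps = 100e9          # 100 gigabits per second

seconds = dataset_bits / rate_bps
print(f"{seconds / 86400:.1f} days of continuous downlink")  # ~3.7 days
```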
Enter TBIRD
Here is where Lincoln Laboratory comes in. In May 2023, the laboratory's TeraByte InfraRed Delivery (TBIRD) lasercom payload achieved the fastest data transfer from space, transmitting at a rate of 200 Gbps — which is 1,000 times faster than typical satellite communication systems — from low Earth orbit (LEO).
"We developed a novel technology for high-volume data transport from space to ground," says Jade Wang, assistant leader of the laboratory's Optical and Quantum Communications Group. "In the process of developing that technology, we looked for collaborations and other potential follow-on missions that could leverage this unprecedented data capability. The BHEX is one such mission. These high data rates will enable scientists to image the photon ring structure of a black hole for the first time."
A lasercom team led by Wang, in partnership with the CfA, is developing the long-distance, high-rate downlink needed for the BHEX mission in middle Earth orbit (MEO).
"Laser communications is completely upending our expectations for what astrophysical discoveries are possible from space," says CfA astrophysicist Michael Johnson, principal investigator for the BHEX mission. "In the next decade, this incredible new technology will bring us to the edge of a black hole, creating a window into the region where our current understanding of physics breaks down."
Though TBIRD is incredibly powerful, the technology needs some modifications to support the higher orbit that BHEX requires for its science mission. The small TBIRD payload (CubeSat) will be upgraded to a larger aperture size and higher transmit power. In addition, the TBIRD automatic request protocol — the error-control mechanism for ensuring data make it to Earth without loss due to atmospheric effects — will be adjusted to account for the longer round-trip times that come with a mission in MEO. Finally, the TBIRD LEO "buffer and burst" architecture for data delivery will shift to a streaming approach.
"With TBIRD and other lasercom missions, we have demonstrated that the lasercom technology for such an impactful science mission is available today," Wang says. "Having the opportunity to contribute to an area of really interesting scientific discovery is an exciting prospect."
The BHEX mission concept has been in development since 2019. Technical and concept studies for BHEX have been supported by the Smithsonian Astrophysical Observatory, the Internal Research and Development program at NASA Goddard Space Flight Center, the University of Arizona, and the ULVAC-Hayashi Seed Fund from the MIT-Japan Program at MIT International Science and Technology Initiatives. BHEX studies of lasercom have been supported by Fred Ehrsam and the Gordon and Betty Moore Foundation.
Making a mark in the nation’s capital
Alumni and founders of MIT Washington Summer Internship Program reflect on three decades of impact.
Anoushka Bose ’20 spent the summer of 2018 as an MIT Washington program intern, applying her nuclear physics education to arms control research with a D.C. nuclear policy think tank.
“It’s crazy how much three months can transform people,” says Bose, now an attorney at the Department of Justice.
“Suddenly, I was learning far more than I had expected about treaties, nuclear arms control, and foreign relations,” adds Bose. “But once I was hooked, I couldn’t be stopped as that summer sparked a much broader interest in diplomacy and set me on a different path.”
Bose is one of hundreds of MIT undergraduates whose academic and career trajectories were influenced by their time in the nation’s capital as part of the internship program.
Leah Nichols ’00 is a former D.C. intern and now executive director of George Mason University’s Institute for a Sustainable Earth. In 1998, Nichols worked in the office of U.S. Senator Max Baucus, D-Mont., developing options for protecting open space on private land.
“I really started to see how science and policy needed to interact in order to solve environmental challenges,” she says. “I’ve actually been working at that interface between science and policy ever since.”
Marking its 30th anniversary this year, the MIT Washington Summer Internship Program has shaped the lives of alumni, and expanded MIT’s capital in the capital city.
Bose believes the MIT Washington summer internship is more vital than ever.
“This program helps steer more technical expertise, analytical thinking, and classic MIT innovation into policy spaces to make them better-informed and better equipped to solve challenges,” she says. With so much at stake, she suggests, it is increasingly important “to invest in bringing the MIT mindset of extreme competence as well as resilience to D.C.”
MIT missionaries
Over the past three decades, students across MIT — whether studying aeronautics or nuclear engineering, management or mathematics, chemistry or computer science — have competed for and won an MIT Washington summer internship. Many describe it as a springboard into high-impact positions in politics, public policy, and the private sector.
The program was launched in 1994 by Charles Stewart III, the Kenan Sahin (1963) Distinguished Professor of Political Science, who still serves as the director.
“The idea 30 years ago was to make this a bit of a missionary program, where we demonstrate to Washington the utility of having MIT students around for things they’re doing,” says Stewart. “MIT’s reputation benefits because our students are unpretentious, down-to-earth, interested in how the world actually works, and dedicated to fixing things that are broken.”
The outlines of the program have remained much the same: A cohort of 15 to 20 students is selected from a pool of fall applicants. With the help of MIT’s Washington office, the students are matched with potential supervisors in search of technical and scientific talent. They travel in the spring to meet potential supervisors and receive a stipend and housing for the summer. In the fall, students take a course that Stewart describes as an “Oxbridge-type tutorial, where they contextualize their experiences and reflect on the political context of the place where they worked.”
Stewart remains as enthusiastic about the internship program as when he started and has notions for building on its foundations. His wish list includes running the program at other times of the year, and for longer durations. “Six months would really change and deepen the experience,” he says. He envisions a real-time tutorial while the students are in Washington. And he would like to draw more students from the data science world. “Part of the goal of this program is to hook non-obvious people into knowledge of the public policy realm,” he says.
Prized in Washington
MIT Vice Provost Philip Khoury, who helped get the program off the ground, praised Stewart’s vision for developing the initial idea.
“Charles understood why science- and technology-oriented students would be great beneficiaries of an experience in Washington and had something to contribute that other internship program students would not be able to do because of their prowess, their prodigious abilities in the technology-engineering-science world,” says Khoury.
Khoury adds that the program has benefited both the host organizations and the students.
“Members of Congress and senior staff who were developing policies prized MIT students, because they were powerful thinkers and workaholics, and students in the program learned that they really mattered to adults in Washington, wherever they went.”
David Goldston, director of the MIT Washington Office, says government is “kind of desperate for people who understand science and technology.” One example: The National Institute of Standards and Technology has launched an artificial intelligence safety division that is “almost begging for students to help conduct research and carry out the ever-expanding mission of worrying about AI issues,” he says.
Holly Krambeck ’06 MST/MCP, program manager of the World Bank Data Lab, can attest to this impact. She hired her first MIT summer intern, Chae Won Lee, in 2013, to analyze road crash data from the Philippines. “Her findings were so striking, we invited her to join the team on a mission to present her work to the government,” says Krambeck.
Subsequent interns have helped the World Bank demonstrate effective, low-cost, transit-fare collection systems; identify houses eligible for hurricane protection retrofits under World Bank loans; and analyze heatwave patterns in the Philippines to inform a lending program for mitigation measures.
“Every year, I’ve been so impressed by the maturity, energy, willingness to learn new skills, and curiosity of the MIT students,” says Krambeck. “At the end of each summer, we ask students to present their projects to World Bank staff, who are invariably amazed to learn that these are undergraduates and not PhD candidates!”
Career springboard
“It absolutely changed my career pathway,” says Samuel Rodarte Jr. ’13, a 2011 program alumnus who interned at the MIT Washington Office, where he tracked congressional hearings related to research at the Institute. Today, he serves as a legislative assistant to Senate Majority Leader Charles E. Schumer. An aerospace engineering and Latin American studies double major, Rodarte says the opportunity to experience policymaking from the inside came “at just the right time, when I was trying to figure out what I really wanted to do post-MIT.”
Miranda Priebe ’03 is director of the Center for Analysis of U.S. Grand Strategy for the Rand Corp. She briefs groups within the Pentagon, the U.S. Department of State, and the National Security Council, among others. “My job is to ask the big question: Does the United States have the right approach in the world in terms of advancing our interests with our capabilities and resources?”
Priebe was a physics major with an evolving interest in political science when she arrived in Washington in 2001 to work in the office of Senator Carl Levin, D-Mich., the chair of the Senate Armed Services Committee. “I was working really hard at MIT, but just hadn’t found my passion until I did this internship,” she says. “Once I came to D.C. I saw all the places I could fit in using my analytical skills — there were a million things I wanted to do — and the internship convinced me that this was the right kind of work for me.”
During her internship in 2022, Anushree Chaudhuri ’24, an urban studies and planning and economics major, worked in the U.S. Department of Energy’s Building Technologies Office, where she hoped to experience day-to-day life in a federal agency — with an eye toward a career in high-level policymaking. She developed a web app to help local governments determine which census tracts qualified for environmental justice funds.
“I was pleasantly surprised to see that even as a lower-level civil servant you can make change if you know how to work within the system.” Chaudhuri is now a Marshall Scholar, pursuing a PhD at the University of Oxford on the socioeconomic impacts of energy infrastructure. “I’m pretty sure I want to work in the policy space long term,” she says.
A model of virtuosity
Acclaimed keyboardist Jordan Rudess’s collaboration with the MIT Media Lab culminates in live improvisation between an AI “jam_bot” and the artist.
A crowd gathered at the MIT Media Lab in September for a concert by musician Jordan Rudess and two collaborators. One of them, violinist and vocalist Camilla Bäckman, has performed with Rudess before. The other — an artificial intelligence model informally dubbed the jam_bot, which Rudess developed with an MIT team over the preceding several months — was making its public debut as a work in progress.
Throughout the show, Rudess and Bäckman exchanged the signals and smiles of experienced musicians finding a groove together. Rudess’ interactions with the jam_bot suggested a different and unfamiliar kind of exchange. During one duet inspired by Bach, Rudess alternated between playing a few measures and allowing the AI to continue the music in a similar baroque style. Each time the model took its turn, a range of expressions moved across Rudess’ face: bemusement, concentration, curiosity. At the end of the piece, Rudess admitted to the audience, “That is a combination of a whole lot of fun and really, really challenging.”
Rudess is an acclaimed keyboardist — the best of all time, according to one Music Radar magazine poll — known for his work with the platinum-selling, Grammy-winning progressive metal band Dream Theater, which embarks this fall on a 40th anniversary tour. He is also a solo artist whose latest album, “Permission to Fly,” was released on Sept. 6; an educator who shares his skills through detailed online tutorials; and the founder of software company Wizdom Music. His work combines a rigorous classical foundation (he began his piano studies at The Juilliard School at age 9) with a genius for improvisation and an appetite for experimentation.
Last spring, Rudess became a visiting artist with the MIT Center for Art, Science and Technology (CAST), collaborating with the MIT Media Lab’s Responsive Environments research group on the creation of new AI-powered music technology. Rudess’ main collaborators in the enterprise are Media Lab graduate students Lancelot Blanchard, who researches musical applications of generative AI (informed by his own studies in classical piano), and Perry Naseck, an artist and engineer specializing in interactive, kinetic, light- and time-based media. Overseeing the project is Professor Joseph Paradiso, head of the Responsive Environments group and a longtime Rudess fan. Paradiso arrived at the Media Lab in 1994 with a CV in physics and engineering and a sideline designing and building synthesizers to explore his avant-garde musical tastes. His group has a tradition of investigating musical frontiers through novel user interfaces, sensor networks, and unconventional datasets.
The researchers set out to develop a machine learning model channeling Rudess’ distinctive musical style and technique. In a paper published online by MIT Press in September, co-authored with MIT music technology professor Eran Egozy, they articulate their vision for what they call “symbiotic virtuosity”: human and computer duetting in real time, learning from each duet they perform together, and making performance-worthy new music in front of a live audience.
Rudess contributed the data on which Blanchard trained the AI model. Rudess also provided continuous testing and feedback, while Naseck experimented with ways of visualizing the technology for the audience.
“Audiences are used to seeing lighting, graphics, and scenic elements at many concerts, so we needed a platform to allow the AI to build its own relationship with the audience,” Naseck says. In early demos, this took the form of a sculptural installation with illumination that shifted each time the AI changed chords. During the concert on Sept. 21, a grid of petal-shaped panels mounted behind Rudess came to life through choreography based on the activity and future generation of the AI model.
“If you see jazz musicians make eye contact and nod at each other, that gives anticipation to the audience of what’s going to happen,” says Naseck. “The AI is effectively generating sheet music and then playing it. How do we show what’s coming next and communicate that?”
Naseck designed and programmed the structure from scratch at the Media Lab with assistance from Brian Mayton (mechanical design) and Carlo Mandolini (fabrication), drawing some of its movements from an experimental machine learning model developed by visiting student Madhav Lavakare that maps music to points moving in space. With the ability to spin and tilt its petals at speeds ranging from subtle to dramatic, the kinetic sculpture distinguished the AI’s contributions during the concert from those of the human performers, while conveying the emotion and energy of its output: swaying gently when Rudess took the lead, for example, or furling and unfurling like a blossom as the AI model generated stately chords for an improvised adagio. The latter was one of Naseck’s favorite moments of the show.
“At the end, Jordan and Camilla left the stage and allowed the AI to fully explore its own direction,” he recalls. “The sculpture made this moment very powerful — it allowed the stage to remain animated and intensified the grandiose nature of the chords the AI played. The audience was clearly captivated by this part, sitting at the edges of their seats.”
“The goal is to create a musical visual experience,” says Rudess, “to show what’s possible and to up the game.”
Musical futures
As the starting point for his model, Blanchard used a music transformer, an open-source neural network architecture developed by MIT Assistant Professor Anna Huang SM ’08, who joined the MIT faculty in September.
“Music transformers work in a similar way as large language models,” Blanchard explains. “The same way that ChatGPT would generate the most probable next word, the model we have would predict the most probable next notes.”
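To make that analogy concrete, here is a minimal, purely illustrative sketch of the predict-then-sample loop Blanchard describes; the tiny note vocabulary, the random score matrix standing in for trained weights, and the sampling temperature are all hypothetical, not the team’s actual model.

```python
import numpy as np

# Toy vocabulary of note tokens (a real music transformer uses a much
# richer encoding of pitch, timing, and velocity).
VOCAB = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"]

rng = np.random.default_rng(0)
# Stand-in for trained weights: transition scores between the most recent
# note and each candidate next note.
scores = rng.normal(size=(len(VOCAB), len(VOCAB)))

def predict_next(context, temperature=1.0):
    """Sample the next note given the notes played so far.

    A real model attends over the whole context; this toy version looks
    only at the last note, just to show the predict-then-sample loop.
    """
    last = VOCAB.index(context[-1])
    logits = scores[last] / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(VOCAB, p=probs)

# Generate a short continuation of a phrase, one note at a time,
# the same way an LLM generates one word at a time.
phrase = ["C4", "E4", "G4"]
for _ in range(8):
    phrase.append(predict_next(phrase))
print(phrase)
```

The real system works on a far richer musical representation, but the underlying loop — score every candidate continuation, then sample one — is the same basic mechanism as next-word prediction.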
Blanchard fine-tuned the model using Rudess’ own playing of elements from bass lines to chords to melodies, variations of which Rudess recorded in his New York studio. Along the way, Blanchard ensured the AI would be nimble enough to respond in real-time to Rudess’ improvisations.
“We reframed the project,” says Blanchard, “in terms of musical futures that were hypothesized by the model and that were only being realized at the moment based on what Jordan was deciding.”
As Rudess puts it: “How can the AI respond — how can I have a dialogue with it? That’s the cutting-edge part of what we’re doing.”
Another priority emerged: “In the field of generative AI and music, you hear about startups like Suno or Udio that are able to generate music based on text prompts. Those are very interesting, but they lack controllability,” says Blanchard. “It was important for Jordan to be able to anticipate what was going to happen. If he could see the AI was going to make a decision he didn’t want, he could restart the generation or have a kill switch so that he can take control again.”
In addition to giving Rudess a screen previewing the musical decisions of the model, Blanchard built in different modalities the musician could activate as he plays — prompting the AI to generate chords or lead melodies, for example, or initiating a call-and-response pattern.
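A rough sketch of how that kind of performer-in-command control flow could be wired up is below; the mode names, the `generate` callback, and the `JamController` class are hypothetical stand-ins, not the interface the team built.

```python
from enum import Enum, auto

class Mode(Enum):
    CHORDS = auto()             # AI supplies chords under the performer's lead
    LEAD_MELODY = auto()        # AI improvises a lead line
    CALL_AND_RESPONSE = auto()  # AI answers each phrase the performer plays

class JamController:
    """Hypothetical front end giving the performer final say over the model."""

    def __init__(self, generate):
        self.generate = generate   # callback into the generative model
        self.mode = Mode.CHORDS
        self.killed = False

    def set_mode(self, mode):
        self.mode = mode

    def kill_switch(self):
        """Silence the model immediately and hand control back to the performer."""
        self.killed = True

    def next_bar(self, context):
        if self.killed:
            return []              # nothing from the AI until re-enabled
        planned = self.generate(context, self.mode)
        # In the setup described above, a preview screen shows the plan before
        # it sounds, so the performer can veto it or restart generation.
        return planned

# Usage with a trivial stand-in generator:
controller = JamController(
    generate=lambda ctx, mode: ["Cmaj7"] if mode is Mode.CHORDS else ["E5"])
print(controller.next_bar(["C4", "E4"]))
controller.kill_switch()
print(controller.next_bar(["C4", "E4"]))  # -> []
```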
“Jordan is the mastermind of everything that’s happening,” he says.
What would Jordan do
Though the residency has wrapped up, the collaborators see many paths for continuing the research. For example, Naseck would like to experiment with more ways Rudess could interact directly with his installation, through features like capacitive sensing. “We hope in the future we’ll be able to work with more of his subtle motions and posture,” Naseck says.
While the MIT collaboration focused on how Rudess can use the tool to augment his own performances, it’s easy to imagine other applications. Paradiso recalls an early encounter with the tech: “I played a chord sequence, and Jordan’s model was generating the leads. It was like having a musical ‘bee’ of Jordan Rudess buzzing around the melodic foundation I was laying down, doing something like Jordan would do, but subject to the simple progression I was playing,” he recalls, his face echoing the delight he felt at the time. “You're going to see AI plugins for your favorite musician that you can bring into your own compositions, with some knobs that let you control the particulars,” he posits. “It’s that kind of world we’re opening up with this.”
Rudess is also keen to explore educational uses. Because the samples he recorded to train the model were similar to ear-training exercises he’s used with students, he thinks the model itself could someday be used for teaching. “This work has legs beyond just entertainment value,” he says.
The foray into artificial intelligence is a natural progression for Rudess’ interest in music technology. “This is the next step,” he believes. When he discusses the work with fellow musicians, however, his enthusiasm for AI often meets with resistance. “I can have sympathy or compassion for a musician who feels threatened, I totally get that,” he allows. “But my mission is to be one of the people who moves this technology toward positive things.”
“At the Media Lab, it’s so important to think about how AI and humans come together for the benefit of all,” says Paradiso. “How is AI going to lift us all up? Ideally it will do what so many technologies have done — bring us into another vista where we’re more enabled.”
“Jordan is ahead of the pack,” Paradiso adds. “Once it’s established with him, people will follow.”
Jamming with MIT
The Media Lab first landed on Rudess’ radar before his residency because he wanted to try out the Knitted Keyboard created by another member of Responsive Environments, textile researcher Irmandy Wicaksono PhD ’24. From that moment on, “It’s been a discovery for me, learning about the cool things that are going on at MIT in the music world,” Rudess says.
During two visits to Cambridge last spring (assisted by his wife, theater and music producer Danielle Rudess), Rudess reviewed final projects in Paradiso’s course on electronic music controllers, the syllabus for which included videos of his own past performances. He brought a new gesture-driven synthesizer called Osmose to a class on interactive music systems taught by Egozy, whose credits include the co-creation of the video game “Guitar Hero.” Rudess also provided tips on improvisation to a composition class; played GeoShred, a touchscreen musical instrument he co-created with Stanford University researchers, with student musicians in the MIT Laptop Ensemble and Arts Scholars program; and experienced immersive audio in the MIT Spatial Sound Lab. During his most recent trip to campus in September, he taught a masterclass for pianists in MIT’s Emerson/Harris Program, which provides a total of 67 scholars and fellows with support for conservatory-level musical instruction.
“I get a kind of rush whenever I come to the university,” Rudess says. “I feel the sense that, wow, all of my musical ideas and inspiration and interests have come together in this really cool way.”
Can robots learn from machine dreams?
MIT CSAIL researchers used AI-generated images to train a robot dog in parkour, without real-world data. Their LucidSim system demonstrates generative AI’s potential for creating robotics training data.
For roboticists, one challenge towers above all others: generalization — the ability to create machines that can adapt to any environment or condition. Since the 1970s, the field has evolved from writing sophisticated programs to using deep learning, teaching robots to learn directly from human behavior. But a critical bottleneck remains: data quality. To improve, robots need to encounter scenarios that push the boundaries of their capabilities, operating at the edge of their mastery. This process traditionally requires human oversight, with operators carefully challenging robots to expand their abilities. As robots become more sophisticated, this hands-on approach hits a scaling problem: the demand for high-quality training data far outpaces humans’ ability to provide it.
Now, a team of MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers has developed a novel approach to robot training that could significantly accelerate the deployment of adaptable, intelligent machines in real-world environments. The new system, called “LucidSim,” uses recent advances in generative AI and physics simulators to create diverse and realistic virtual training environments, helping robots achieve expert-level performance in difficult tasks without any real-world data.
LucidSim combines physics simulation with generative AI models, addressing one of the most persistent challenges in robotics: transferring skills learned in simulation to the real world. “A fundamental challenge in robot learning has long been the ‘sim-to-real gap’ — the disparity between simulated training environments and the complex, unpredictable real world,” says MIT CSAIL postdoc Ge Yang, a lead researcher on LucidSim. “Previous approaches often relied on depth sensors, which simplified the problem but missed crucial real-world complexities.”
The multipronged system is a blend of different technologies. At its core, LucidSim uses large language models to generate various structured descriptions of environments. These descriptions are then transformed into images using generative models. To ensure that these images reflect real-world physics, an underlying physics simulator is used to guide the generation process.
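In outline, that pipeline could be organized roughly as follows; `llm_describe`, `render_geometry`, and `generate_image` are hypothetical placeholders for the language model, the physics simulator’s renderer, and the conditioned image generator, not actual LucidSim APIs.

```python
# A schematic sketch of the pipeline described above, not the team's code.

def build_training_frame(scene, llm_describe, render_geometry, generate_image):
    # 1. Ask a language model for a varied, structured description of the
    #    scene ("a mossy staircase at dusk", "a cluttered warehouse aisle").
    prompt = llm_describe(scene)

    # 2. Render ground-truth geometry from the physics simulator: a depth map
    #    and a semantic mask labeling what is where.
    depth, semantics = render_geometry(scene)

    # 3. Generate a photorealistic image conditioned on that geometry, so the
    #    picture stays consistent with the simulated physics.
    image = generate_image(prompt, depth=depth, mask=semantics)

    # The simulator still supplies the geometry and labels; only the pixels
    # are generated.
    return image, depth, semantics
```

The design point is that the physics simulator keeps guiding the generation, so the imagery the robot trains on never drifts away from the physical scene it is being scored against.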
The birth of an idea: From burritos to breakthroughs
The inspiration for LucidSim came from an unexpected place: a conversation outside Beantown Taqueria in Cambridge, Massachusetts. “We wanted to teach vision-equipped robots how to improve using human feedback. But then, we realized we didn’t have a pure vision-based policy to begin with,” says Alan Yu, an undergraduate student in electrical engineering and computer science (EECS) at MIT and co-lead author on LucidSim. “We kept talking about it as we walked down the street, and then we stopped outside the taqueria for about half an hour. That’s where we had our moment.”
To cook up their data, the team generated realistic images by extracting depth maps, which provide geometric information, and semantic masks, which label different parts of an image, from the simulated scene. They quickly realized, however, that with tight control on the composition of the image content, the model would produce nearly identical images from the same prompt. So, they devised a way to source diverse text prompts from ChatGPT.
This approach, however, only resulted in a single image. To make short, coherent videos that serve as little “experiences” for the robot, the scientists folded that image generation into another technique the team created, called “Dreams In Motion.” The system computes the movements of each pixel between frames to warp a single generated image into a short, multi-frame video. Dreams In Motion does this by considering the 3D geometry of the scene and the relative changes in the robot’s perspective.
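A simplified, hypothetical version of that warping step might look like the following; the pinhole-camera model, the pure-translation camera motion, and the forward-splatting approach are illustrative simplifications of the idea, not the team’s implementation.

```python
import numpy as np

def warp_frame(image, depth, f, translation):
    """Warp one generated image into a new viewpoint, in the spirit of the
    per-pixel motion described above.

    `image` is HxWx3, `depth` is HxW (meters), `f` is focal length in pixels,
    `translation` is the camera's (dx, dy, dz) motion between frames.
    Forward splatting leaves small holes; a real pipeline would fill them.
    """
    h, w = depth.shape
    cy, cx = h / 2.0, w / 2.0
    v, u = np.mgrid[0:h, 0:w].astype(np.float64)

    # Back-project every pixel to a 3D point using its depth.
    z = depth
    x = (u - cx) * z / f
    y = (v - cy) * z / f

    # Express the points in the moved camera's frame (pure translation here).
    dx, dy, dz = translation
    x2, y2, z2 = x - dx, y - dy, z - dz

    # Re-project into the new image and splat the source colors there.
    u2 = np.round(f * x2 / z2 + cx).astype(int)
    v2 = np.round(f * y2 / z2 + cy).astype(int)
    out = np.zeros_like(image)
    valid = (z2 > 0) & (u2 >= 0) & (u2 < w) & (v2 >= 0) & (v2 < h)
    out[v2[valid], u2[valid]] = image[valid]
    return out

# Toy usage: a flat scene 2 m away, camera stepping 5 cm to the right.
img = np.random.randint(0, 255, (120, 160, 3), dtype=np.uint8)
dep = np.full((120, 160), 2.0)
frame2 = warp_frame(img, dep, f=100.0, translation=(0.05, 0.0, 0.0))
```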
“We outperform domain randomization, a method developed in 2017 that applies random colors and patterns to objects in the environment, which is still considered the go-to method these days,” says Yu. “While this technique generates diverse data, it lacks realism. LucidSim addresses both diversity and realism problems. It’s exciting that even without seeing the real world during training, the robot can recognize and navigate obstacles in real environments.”
The team is particularly excited about the potential of applying LucidSim to domains outside quadruped locomotion and parkour, their main test bed. One example is mobile manipulation, where a mobile robot is tasked with handling objects in an open area and where color perception is critical. “Today, these robots still learn from real-world demonstrations,” says Yang. “Although collecting demonstrations is easy, scaling a real-world robot teleoperation setup to thousands of skills is challenging because a human has to physically set up each scene. We hope to make this easier, thus qualitatively more scalable, by moving data collection into a virtual environment.”
Who's the real expert?
The team put LucidSim to the test against an alternative, where an expert teacher demonstrates the skill for the robot to learn from. The results were surprising: Robots trained by the expert struggled, succeeding only 15 percent of the time — and even quadrupling the amount of expert training data barely moved the needle. But when robots collected their own training data through LucidSim, the story changed dramatically. Just doubling the dataset size catapulted success rates to 88 percent. “And giving our robot more data monotonically improves its performance — eventually, the student becomes the expert,” says Yang.
“One of the main challenges in sim-to-real transfer for robotics is achieving visual realism in simulated environments,” says Stanford University assistant professor of electrical engineering Shuran Song, who wasn’t involved in the research. “The LucidSim framework provides an elegant solution by using generative models to create diverse, highly realistic visual data for any simulation. This work could significantly accelerate the deployment of robots trained in virtual environments to real-world tasks.”
From the streets of Cambridge to the cutting edge of robotics research, LucidSim is paving the way toward a new generation of intelligent, adaptable machines — ones that learn to navigate our complex world without ever setting foot in it.
Yu and Yang wrote the paper with four fellow CSAIL affiliates: Ran Choi, an MIT postdoc in mechanical engineering; Yajvan Ravan, an MIT undergraduate in EECS; John Leonard, the Samuel C. Collins Professor of Mechanical and Ocean Engineering in the MIT Department of Mechanical Engineering; and Phillip Isola, an MIT associate professor in EECS. Their work was supported, in part, by a Packard Fellowship, a Sloan Research Fellowship, the Office of Naval Research, Singapore’s Defence Science and Technology Agency, Amazon, MIT Lincoln Laboratory, and the National Science Foundation Institute for Artificial Intelligence and Fundamental Interactions. The researchers presented their work at the Conference on Robot Learning (CoRL) in early November.
Curiosity, images, and scientific exploration
Professor of the practice Alan Lightman’s new book digs into the wonder of striking visual phenomena in nature.
When we gaze at nature’s remarkable phenomena, we might feel a mix of awe, curiosity, and determination to understand what we are looking at. That is certainly a common response for MIT’s Alan Lightman, a trained physicist and prolific author of books about physics, science, and our understanding of the world around us.
“One of my favorite quotes from Einstein is to the effect that the most beautiful experience we can have is the mysterious,” Lightman says. “It’s the fundamental emotion that is the cradle of true art and true science.”
Lightman explores those concepts in his latest book, “The Miraculous from the Material,” published today by Penguin Random House. In it, Lightman has penned 35 essays about scientific understanding, each following photos of spectacular natural phenomena, from spider webs to sunsets, and from galaxies to hummingbirds.
Lightman, who is a professor of the practice of the humanities at MIT, calls himself a “spiritual materialist,” who finds wonder in the world while grounding his grasp of nature in scientific explanation.
“Understanding the material and scientific underpinnings of these spectacular phenomena hasn’t diminished my awe and amazement one iota,” Lightman writes in the book. MIT News talked to Lightman about a handful of the book’s chapters, and the relationship between seeing and scientific curiosity.
Aurora borealis
In 2024, many people ventured outside for a glimpse of the aurora borealis, or northern lights, the brilliant phenomenon caused by solar storms. Auroras occur when unusually large amounts of electrons from the sun energize oxygen and nitrogen molecules in the upper atmosphere. The Earth’s magnetic field creates the folding shapes.
Among much else, the aurora borealis — and aurora australis, in southern latitudes — are a testament to the way unusual things fire our curiosity.
“I think we respond emotionally as well as intellectually, with appreciation and plain old awe at nature,” Lightman says. “If we go back to the earliest times when people were thinking scientifically, the emotional connection to the natural world was probably as important as the intellectual connection. The wonder and curiosity stimulated by the night sky makes us want to understand it.”
He adds: “The aurora borealis is certainly very striking and makes us aware that we’re part of the cosmos; we’re not just living in the world of tables, and chairs, and houses. It does give us a cosmic sense of being on a planet out in the universe.”
Galileo coined the term “aurora borealis,” referring to the Roman goddess of the dawn and the Greek god of the north wind. People have created many suggestive accounts of the northern lights. As Lightman notes in the book, the Native American Cree regarded the lights as dead spirits in the sky; the Algonquin people saw them as a fire made by their creator; the Inuit tribes regarded the lights as spirits playing; and to the Vikings, the lights were a reflection off the armor of the Valkyries. It wasn’t until the 1900s that geomagnetic storms driven by the sun were proposed as an explanation.
“It's all a search for meaning and understanding,” Lightman says. “Before we had modern science, we still wanted meaning, so we constructed these mythologies. And then as we developed science we had other tools. But the nonscientific accounts were also trying to explain this strange cosmos we find ourselves in.”
Fall foliage
The aurora borealis is unearthly; fall leaves and their colors are literally a down-to-earth matter. Still, Lightman says, while the aurora borealis “is more exotic,” fall foliage can also leave us gazing in wonder. In his book, he constructs a multilayered explanation of the subject, ranging from the chemical compounds in leaves to the properties of color to the mechanics of planetary motion.
First, the leaves. The fall hues come from chemical compounds in leaves called carotenoids (which produce yellow and orange colors) and anthocyanins (which create red hues). Those effects are usually hidden because of the presence of chlorophyll, which helps plants absorb sunlight and store energy, and gives off a green hue. But less sunlight in the fall means less chlorophyll at work in plants, so green leaves turn yellow, orange, or red.
To jump ahead, there are seasons because the Earth does not rotate on a vertical axis relative to the plane of its path around the sun. It tilts at about 23.5 degrees, so different parts of the planet receive differing amounts of sunlight during a yearlong revolution around the sun.
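As a back-of-envelope illustration of how much that 23.5-degree tilt matters, here is a quick calculation assuming a simple noon-sun model and a Boston-like latitude of 42 degrees north (both assumptions are mine, not from the book):

```python
import math

TILT = 23.5       # Earth's axial tilt, degrees
LATITUDE = 42.0   # roughly Boston

# Noon sun elevation = 90 - latitude + solar declination; the declination
# swings between +TILT (June solstice) and -TILT (December solstice).
for season, declination in [("June solstice", TILT), ("December solstice", -TILT)]:
    elevation = 90.0 - LATITUDE + declination
    # Relative intensity on flat ground scales with the sine of the elevation.
    intensity = math.sin(math.radians(elevation))
    print(f"{season}: sun {elevation:.1f} deg high, relative noon intensity {intensity:.2f}")
# Roughly 71.5 deg and 0.95 in June versus 24.5 deg and 0.41 in December:
# the tilt alone cuts the noon sunlight reaching the ground by more than half.
```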
That tilt stems from cosmic collisions billions of years ago. Solar systems are formed from rotating clouds of gas and dust, with planets and moons condensing due to gravity. The Earth likely got knocked off its vertical axis when loose matter slammed into it, which has happened to most planets: In our solar system, only Mercury has almost no tilt.
Lightman muses, “I think there’s a kind of poetry in understanding that beautiful fall foliage was caused in part by a cosmic accident 4 billion years ago. That’s poetic and mind-blowing at the same time.”
Mandarinfish
It can seem astonishing to behold the mandarinfish, a native of the Pacific Ocean that sports bright color patterns only a bit less intricate than an ikat rug.
But what appears nearly miraculous can also be highly explainable in material terms. There are evolutionary advantages to brilliant coloration, something many scientists have recognized, from Charles Darwin to the present.
“There are a number of living organisms in the book that have striking features,” Lightman says. “I think scientists agree that most features of living organisms have some survival benefits, or are byproducts of features that once had survival benefits.”
Unusual coloration may serve as camouflage, help attract mates, or warn off predators. In this case, the mandarinfish is toxic and its spectacular coat helps remind its main predator, the scorpionfish, that the wrong snack comes with unfortunate consequences.
“For mandarinfish it’s related to the fact that it’s poisonous,” Lightman says. Here, the sense of wonder we may feel comes attached to a scientific mechanism: In a food chain, what is spectacular can be highly functional as well.
Paramecia
Paramecia are single-celled microorganisms that propel themselves thanks to thousands of tiny cilia, or hairs, which move back and forth like oars. People first observed paramecia after the development of the microscope in the 1600s; they may have been first seen by the Dutch scientist Antonie van Leeuwenhoek.
“I judged that some of these little creatures were about a thousand times smaller than the smallest ones I have ever yet seen upon the rind of cheese,” van Leeuwenhoek wrote.
“The first microscopes in the 17th century uncovered an entire universe at a tiny scale,” Lightman observes.
When we look at a picture of a paramecium, then, we are partly observing our own ingenuity. However, Lightman is most focused on paramecia as an evolutionary advance. In the book, he underscores the emerging sophistication represented by their arrival 600 million years ago, processing significant amounts of energy and applying it to motion.
“What interested me about the paramecium is not only that it was one of the first microorganisms discovered,” Lightman says, “but the mechanisms of its locomotion, the little cilia that wave back and forth and can propel it at relatively great speed. That was a big landmark in evolution. It requires energy, and a mechanical system, all developed by natural selection.”
He adds: “One beautiful thought that comes out of that is the commonality of all living things on the planet. We’re all related, in a very profound way.”
The rings of Saturn
The first time Lightman looked at the rings of Saturn, which are about 1,000 in number, he was at the Harvard-Smithsonian Center for Astrophysics, using a telescope in the late 1970s.
“I saw the rings of Saturn and I was totally blown away because they’re so perfect,” Lightman says. “I just couldn’t believe there was that kind of construction of such a huge scale. That sense of amazement has stayed with me. They are a visually stunning natural phenomenon.”
The rings are statistically stunning, too. The width of the rings is about 240,000 miles, roughly the same as the distance from the Earth to the moon. But the thickness of the rings is only about that of a football field. “That’s a pretty big ratio between diameter and thickness,” Lightman says. The mass of the rings is just 1/50 of 1 percent of our moon.
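Those figures invite a quick back-of-envelope check (taking a football field as roughly 300 feet, an assumption not spelled out in the book):

```python
WIDTH_MILES = 240_000    # radial extent of the rings, per the book
THICKNESS_FEET = 300     # "about a football field"

width_feet = WIDTH_MILES * 5_280
ratio = width_feet / THICKNESS_FEET
print(f"width-to-thickness ratio: about {ratio:,.0f} to 1")  # ~4,200,000 to 1
# For comparison, a sheet of printer paper (~0.1 mm thick, ~280 mm long)
# is only about 2,800 times longer than it is thick.
```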
Most likely, the rings were formed from the remains of a moon that approached Saturn — which has 146 known moons — and got ripped apart, its material scattering into the rings. Over time, gravity pulled the rings into their circular shape.
“The roundness of planets, the circularity of planetary rings, and so many other beautiful phenomena follow naturally from the laws of physics,” Lightman writes in the book. “Which are themselves beautiful.”
Over the years, he has been able to look at the rings of Saturn many times, always regarding them as a “natural miracle” to behold.
“Every time you see them, you are amazed by it,” Lightman says.
Turning automotive engines into modular chemical plants to make green fuels
The MIT spinout Emvolon is placing its repurposed engines next to methane sources, to generate greener methanol and other chemicals.
Reducing methane emissions is a top priority in the fight against climate change because of its propensity to trap heat in the atmosphere: Methane’s warming effects are 84 times more potent than CO2 over a 20-year timescale.
And yet, as the main component of natural gas, methane is also a valuable fuel and a precursor to several important chemicals. The main barrier to using methane emissions to create carbon-negative materials is that human sources of methane gas — landfills, farms, and oil and gas wells — are relatively small and spread out across large areas, while traditional chemical processing facilities are huge and centralized. That makes it prohibitively expensive to capture, transport, and convert methane gas into anything useful. As a result, most companies burn or “flare” their methane at the site where it’s emitted, seeing it as a sunk cost and an environmental liability.
The MIT spinout Emvolon is taking a new approach to processing methane by repurposing automotive engines to serve as modular, cost-effective chemical plants. The company’s systems can take methane gas and produce liquid fuels like methanol and ammonia on-site; these fuels can then be used or transported in standard truck containers.
"We see this as a new way of chemical manufacturing,” Emvolon co-founder and CEO Emmanuel Kasseris SM ’07, PhD ’11 says. “We’re starting with methane because methane is an abundant emission that we can use as a resource. With methane, we can solve two problems at the same time: About 15 percent of global greenhouse gas emissions come from hard-to-abate sectors that need green fuel, like shipping, aviation, heavy heavy-duty trucks, and rail. Then another 15 percent of emissions come from distributed methane emissions like landfills and oil wells.”
By using mass-produced engines and eliminating the need to invest in infrastructure like pipelines, the company says it’s making methane conversion economically attractive enough to be adopted at scale. The system can also take green hydrogen produced by intermittent renewables and turn it into ammonia, another fuel that can also be used to decarbonize fertilizers.
“In the future, we’re going to need green fuels because you can’t electrify a large ship or plane — you have to use a high-energy-density, low-carbon-footprint, low-cost liquid fuel,” Kasseris says. “The energy resources to produce those green fuels are either distributed, as is the case with methane, or variable, like wind. So, you cannot have a massive plant [producing green fuels] that has its own zip code. You either have to be distributed or variable, and both of those approaches lend themselves to this modular design.”
From a “crazy idea” to a company
Kasseris first came to MIT to study mechanical engineering as a graduate student in 2004, when he worked in the Sloan Automotive Lab on a report on the future of transportation. For his PhD, he developed a novel technology for improving internal combustion engine fuel efficiency for a consortium of automotive and energy companies, which he then went to work for after graduation.
Around 2014, he was approached by Leslie Bromberg ’73, PhD ’77, a serial inventor with more than 100 patents, who has been a principal research engineer in MIT’s Plasma Science and Fusion Center for nearly 50 years.
“Leslie had this crazy idea of repurposing an internal combustion engine as a reactor,” Kasseris recalls. “I had looked at that while working in industry, and I liked it, but my company at the time thought the work needed more validation.”
Bromberg had done that validation through a U.S. Department of Energy-funded project in which he used a diesel engine to “reform” methane — a high-pressure chemical reaction in which methane is combined with steam and oxygen to produce hydrogen. The work impressed Kasseris enough to bring him back to MIT as a research scientist in 2016.
“We worked on that idea in addition to some other projects, and eventually it had reached the point where we decided to license the work from MIT and go full throttle,” Kasseris recalls. “It’s very easy to work with MIT’s Technology Licensing Office when you are an MIT inventor. You can get a low-cost licensing option, and you can do a lot with that, which is important for a new company. Then, once you are ready, you can finalize the license, so MIT was instrumental.”
Emvolon continued working with MIT’s research community, sponsoring projects with Professor Emeritus John Heywood and participating in the MIT Venture Mentoring Service and the MIT Industrial Liaison Program.
An engine-powered chemical plant
At the core of Emvolon’s system is an off-the-shelf automotive engine that runs “fuel rich” — with a higher ratio of fuel to air than what is needed for complete combustion.
“That’s easy to say, but it takes a lot of [intellectual property], and that’s what was developed at MIT,” Kasseris says. “Instead of burning the methane in the gas to carbon dioxide and water, you partially burn it, or partially oxidize it, to carbon monoxide and hydrogen, which are the building blocks to synthesize a variety of chemicals.”
The hydrogen and carbon monoxide are intermediate products used to synthesize different chemicals through further reactions. Those processing steps take place right next to the engine, which makes its own power. Each of Emvolon’s standalone systems fits within a 40-foot shipping container and can produce about 8 tons of methanol per day from 300,000 standard cubic feet of methane gas.
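Those throughput figures can be sanity-checked with a rough carbon balance; the ideal-gas conversion factor, the assumption of metric tons, and the simple partial-oxidation-plus-synthesis stoichiometry below are my own back-of-envelope assumptions, not Emvolon’s numbers.

```python
SCF_METHANE_PER_DAY = 300_000
SCF_PER_LB_MOL = 379.5       # ideal gas at 60 F, 1 atm
MW_METHANOL = 32.04          # lb per lb-mol
LB_PER_METRIC_TON = 2_204.6

lb_mol_ch4 = SCF_METHANE_PER_DAY / SCF_PER_LB_MOL
# Best case: every carbon atom in the methane ends up in methanol
# (CH4 + 1/2 O2 -> CO + 2 H2, then CO + 2 H2 -> CH3OH).
max_methanol_tons = lb_mol_ch4 * MW_METHANOL / LB_PER_METRIC_TON
print(f"theoretical ceiling: about {max_methanol_tons:.1f} metric tons/day")  # ~11.5
# The quoted ~8 tons/day would then correspond to roughly 70 percent of the
# carbon reaching the product, with the remainder presumably covering the
# engine's own power needs and process losses.
```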
The company is starting with green methanol because it’s an ideal fuel for hard-to-abate sectors such as shipping and heavy-duty transport, as well as an excellent feedstock for other high-value chemicals, such as sustainable aviation fuel. Many shipping vessels have already converted to run on green methanol in an effort to meet decarbonization goals.
This summer, the company also received a grant from the Department of Energy to adapt its process to produce clean liquid fuels from power sources like solar and wind.
“We’d like to expand to other chemicals like ammonia, but also other feedstocks, such as biomass and hydrogen from renewable electricity, and we already have promising results in that direction,” Kasseris says. “We think we have a good solution for the energy transition and, in the later stages of the transition, for e-manufacturing.”
A scalable approach
Emvolon has already built a system capable of producing up to six barrels of green methanol a day in its 5,000-square-foot headquarters in Woburn, Massachusetts.
“For chemical technologies, people talk about scale up risk, but with an engine, if it works in a single cylinder, we know it will work in a multicylinder engine,” Kasseris says. “It’s just engineering.”
Last month, Emvolon announced an agreement with Montauk Renewables to build a commercial-scale demonstration unit next to a Texas landfill that will initially produce up to 15,000 gallons of green methanol a year and later scale up to 2.5 million gallons. That project could be expanded tenfold by scaling across Montauk’s other sites.
“Our whole process was designed to be a very realistic approach to the energy transition,” Kasseris says. “Our solution is designed to produce green fuels and chemicals at prices that the markets are willing to pay today, without the need for subsidies. Using the engines as chemical plants, we can get the capital expenditure per unit output close to that of a large plant, but at a modular scale that enables us to be next to low-cost feedstock. Furthermore, our modular systems require small investments — of $1 to 10 million — that are quickly deployed, one at a time, within weeks, as opposed to massive chemical plants that require multiyear capital construction projects and cost hundreds of millions.”
When a cell protector collaborates with a killer
New research reveals what it takes for a protein that is best known for protecting cells against death to take on the opposite role.
From early development to old age, cell death is a part of life. Without enough of a critical type of cell death known as apoptosis, animals wind up with too many cells, which can set the stage for cancer or autoimmune disease. But careful control is essential, because when apoptosis eliminates the wrong cells, the effects can be just as dire, helping to drive many kinds of neurodegenerative disease.
By studying the microscopic roundworm Caenorhabditis elegans — which was honored with its fourth Nobel Prize last month — scientists at MIT’s McGovern Institute for Brain Research have begun to unravel a longstanding mystery about the factors that control apoptosis: how a protein capable of preventing programmed cell death can also promote it. Their study, led by Robert Horvitz, the David H. Koch Professor of Biology at MIT, and reported Oct. 9 in the journal Science Advances, sheds light on the process of cell death in both health and disease.
“These findings, by graduate student Nolan Tucker and former graduate student, now MIT faculty colleague, Peter Reddien, have revealed that a protein interaction long thought to block apoptosis in C. elegans likely instead has the opposite effect,” says Horvitz, who is also an investigator at the Howard Hughes Medical Institute and the McGovern Institute. Horvitz shared the 2002 Nobel Prize in Physiology or Medicine for discovering and characterizing the genes controlling cell death in C. elegans.
Mechanisms of cell death
Horvitz, Tucker, Reddien, and colleagues have provided foundational insights in the field of apoptosis by using C. elegans to analyze the mechanisms that drive apoptosis, as well as the mechanisms that determine how cells ensure apoptosis happens when and where it should. Unlike humans and other mammals, which depend on dozens of proteins to control apoptosis, these worms use just a few. And when things go awry, it’s easy to tell: When there’s not enough apoptosis, researchers can see that there are too many cells inside the worms’ translucent bodies. And when there’s too much, the worms lack certain biological functions or, in more extreme cases, can’t reproduce, or they die during embryonic development.
Work in the Horvitz lab defined the roles of many of the genes and proteins that control apoptosis in worms. These regulators proved to have counterparts in human cells, and for that reason studies of worms have helped reveal how human cells govern cell death and pointed toward potential targets for treating disease.
A protein’s dual role
Three of C. elegans’ primary regulators of apoptosis actively promote cell death, whereas just one, CED-9, reins in the apoptosis-promoting proteins to keep cells alive. As early as the 1990s, however, Horvitz and colleagues recognized that CED-9 was not exclusively a protector of cells. Their experiments indicated that the protector protein also plays a role in promoting cell death. But while researchers thought they knew how CED-9 protected against apoptosis, its pro-apoptotic role was more puzzling.
CED-9’s dual role means that mutations in the gene that encodes it can impact apoptosis in multiple ways. Most ced-9 mutations interfere with the protein’s ability to protect against cell death and result in excess cell death. Conversely, mutations that abnormally activate ced-9 cause too little cell death, just like mutations that inactivate any of the three killer genes.
An atypical ced-9 mutation, identified by Reddien when he was a PhD student in Horvitz’s lab, hinted at how CED-9 promotes cell death. That mutation altered the part of the CED-9 protein that interacts with the protein CED-4, which is proapoptotic. Since the mutation specifically leads to a reduction in apoptosis, this suggested that CED-9 might need to interact with CED-4 to promote cell death.
The idea was particularly intriguing because researchers had long thought that CED-9’s interaction with CED-4 had exactly the opposite effect: In the canonical model, CED-9 anchors CED-4 to cells’ mitochondria, sequestering the CED-4 killer protein and preventing it from associating with and activating another key killer, the CED-3 protein — thereby preventing apoptosis.
To test the hypothesis that CED-9’s interactions with the killer CED-4 protein enhance apoptosis, the team needed more evidence. So graduate student Nolan Tucker used CRISPR gene editing tools to create more worms with mutations in CED-9, each one targeting a different spot in the CED-4-binding region. Then he examined the worms. “What I saw with this particular class of mutations was extra cells and viability,” he says — clear signs that the altered CED-9 was still protecting against cell death, but could no longer promote it. “Those observations strongly supported the hypothesis that the ability to bind CED-4 is needed for the pro-apoptotic function of CED-9,” Tucker explains. Their observations also suggested that, contrary to earlier thinking, CED-9 doesn’t need to bind with CED-4 to protect against apoptosis.
When he looked inside the cells of the mutant worms, Tucker found additional evidence that these mutations prevented CED-9’s ability to interact with CED-4. When both CED-9 and CED-4 are intact, CED-4 appears associated with cells’ mitochondria. But in the presence of these mutations, CED-4 was instead at the edge of the cell nucleus. CED-9’s ability to bind CED-4 to mitochondria appeared to be necessary to promote apoptosis, not to protect against it.
Looking ahead
While the team’s findings begin to explain a long-unanswered question about one of the primary regulators of apoptosis, they raise new ones, as well. “I think that this main pathway of apoptosis has been seen by a lot of people as more-or-less settled science. Our findings should change that view,” Tucker says.
The researchers see important parallels between their findings from this study of worms and what’s known about cell death pathways in mammals. The mammalian counterpart to CED-9 is a protein called BCL-2, mutations in which can lead to cancer. BCL-2, like CED-9, can both promote and protect against apoptosis. As with CED-9, the pro-apoptotic function of BCL-2 has been mysterious. In mammals, too, mitochondria play a key role in activating apoptosis. The Horvitz lab’s discovery opens opportunities to better understand how apoptosis is regulated not only in worms but also in humans, and how dysregulation of apoptosis in humans can lead to such disorders as cancer, autoimmune disease, and neurodegeneration.
MIT physicists predict exotic form of matter with potential for quantum computing
New work suggests the ability to create fractionalized electrons known as non-Abelian anyons without a magnetic field, opening new possibilities for basic research and future applications.
MIT physicists have shown that it should be possible to create an exotic form of matter that could be manipulated to form the qubit (quantum bit) building blocks of future quantum computers that are even more powerful than the quantum computers in development today.
The work builds on a discovery last year of materials that host electrons that can split into fractions of themselves but, importantly, can do so without the application of a magnetic field.
The general phenomenon of electron fractionalization was first discovered in 1982 and resulted in a Nobel Prize. That work, however, required the application of a magnetic field. The ability to create the fractionalized electrons without a magnetic field opens new possibilities for basic research and makes the materials hosting them more useful for applications.
When electrons split into fractions of themselves, those fractions are known as anyons. Anyons come in a variety of flavors, or classes. The anyons discovered in the 2023 materials are known as Abelian anyons. Now, in a paper reported in the Oct. 17 issue of Physical Review Letters, the MIT team notes that it should be possible to create the most exotic class of anyons, non-Abelian anyons.
“Non-Abelian anyons have the bewildering capacity of ‘remembering’ their spacetime trajectories; this memory effect can be useful for quantum computing,” says Liang Fu, a professor in MIT’s Department of Physics and leader of the work.
Fu further notes that “the 2023 experiments on electron fractionalization greatly exceeded theoretical expectations. My takeaway is that we theorists should be bolder.”
Fu is also affiliated with the MIT Materials Research Laboratory. His colleagues on the current work are graduate students Aidan P. Reddy and Nisarga Paul, and postdoc Ahmed Abouelkomsan, all of the MIT Department of Physics. Reddy and Paul are co-first authors of the Physical Review Letters paper.
The MIT work and two related studies were also featured in an Oct. 17 story in Physics Magazine. “If this prediction is confirmed experimentally, it could lead to more reliable quantum computers that can execute a wider range of tasks … Theorists have already devised ways to harness non-Abelian states as workable qubits and manipulate the excitations of these states to enable robust quantum computation,” writes Ryan Wilkinson.
The current work was guided by recent advances in 2D materials, or those consisting of only one or a few layers of atoms. “The whole world of two-dimensional materials is very interesting because you can stack them and twist them, and sort of play Legos with them to get all sorts of cool sandwich structures with unusual properties,” says Paul. Those sandwich structures, in turn, are called moiré materials.
Anyons can only form in two-dimensional materials. Could they form in moiré materials? The 2023 experiments were the first to show that they can. Soon afterwards, a group led by Long Ju, an MIT assistant professor of physics, reported evidence of anyons in another moiré material. (Fu and Reddy were also involved in the Ju work.)
In the current work, the physicists showed that it should be possible to create non-Abelian anyons in a moiré material composed of atomically thin layers of molybdenum ditelluride. Says Paul, “moiré materials have already revealed fascinating phases of matter in recent years, and our work shows that non-Abelian phases could be added to the list.”
Adds Reddy, “our work shows that when electrons are added at a density of 3/2 or 5/2 per unit cell, they can organize into an intriguing quantum state that hosts non-Abelian anyons.”
The work was exciting, says Reddy, in part because “oftentimes there’s subtlety in interpreting your results and what they are actually telling you. So it was fun to think through our arguments” in support of non-Abelian anyons.
Says Paul, “this project ranged from really concrete numerical calculations to pretty abstract theory and connected the two. I learned a lot from my collaborators about some very interesting topics.”
This work was supported by the U.S. Air Force Office of Scientific Research. The authors also acknowledge the MIT SuperCloud and Lincoln Laboratory Supercomputing Center, the Kavli Institute for Theoretical Physics, the Knut and Alice Wallenberg Foundation, and the Simons Foundation.
How can electrons split into fractions of themselves?
Physicists surprised to discover electrons in pentalayer graphene can exhibit fractional charge. New study suggests how this could work.
MIT physicists have taken a key step toward solving the puzzle of what leads electrons to split into fractions of themselves. Their solution sheds light on the conditions that give rise to exotic electronic states in graphene and other two-dimensional systems.
The new work is an effort to make sense of a discovery that was reported earlier this year by a different group of physicists at MIT, led by Assistant Professor Long Ju. Ju’s team found that electrons appear to exhibit “fractional charge” in pentalayer graphene — a configuration of five graphene layers that are stacked atop a similarly structured sheet of boron nitride.
Ju discovered that when he sent an electric current through the pentalayer structure, the electrons seemed to pass through as fractions of their total charge, even in the absence of a magnetic field. Scientists had already shown that electrons can split into fractions under a very strong magnetic field, in what is known as the fractional quantum Hall effect. Ju’s work was the first to find that this effect was possible in graphene without a magnetic field — which until recently was not expected to exhibit such an effect.
The phenomenon was dubbed the “fractional quantum anomalous Hall effect,” and theorists have been keen to find an explanation for how fractional charge can emerge from pentalayer graphene.
The new study, led by MIT professor of physics Senthil Todadri, provides a crucial piece of the answer. Through calculations of quantum mechanical interactions, he and his colleagues show that the electrons form a sort of crystal structure, the properties of which are ideal for fractions of electrons to emerge.
“This is a completely new mechanism, meaning in the decades-long history, people have never had a system go toward these kinds of fractional electron phenomena,” Todadri says. “It’s really exciting because it makes possible all kinds of new experiments that previously one could only dream about.”
The team’s study appeared last week in the journal Physical Review Letters. Two other research teams — one from Johns Hopkins University, and the other from Harvard University, the University of California at Berkeley, and Lawrence Berkeley National Laboratory — have each published similar results in the same issue. The MIT team includes Zhihuan Dong PhD ’24 and former postdoc Adarsh Patri.
“Fractional phenomena”
In 2018, MIT professor of physics Pablo Jarillo-Herrero and his colleagues were the first to observe that new electronic behavior could emerge from stacking and twisting two sheets of graphene. Each layer of graphene is as thin as a single atom and structured in a chicken-wire lattice of hexagonal carbon atoms. By stacking two sheets at a very specific angle to each other, he found that the resulting interference, or moiré pattern, induced unexpected phenomena such as both superconducting and insulating properties in the same material. This “magic-angle graphene,” as it was soon coined, ignited a new field known as twistronics, the study of electronic behavior in twisted, two-dimensional materials.
“Shortly after his experiments, we realized these moiré systems would be ideal platforms in general to find the kinds of conditions that enable these fractional electron phases to emerge,” says Todadri, who collaborated with Jarillo-Herrero on a study that same year to show that, in theory, such twisted systems could exhibit fractional charge without a magnetic field. “We were advocating these as the best systems to look for these kinds of fractional phenomena,” he says.
Then, in September of 2023, Todadri hopped on a Zoom call with Ju, who was familiar with Todadri’s theoretical work and had kept in touch with him through Ju’s own experimental work.
“He called me on a Saturday and showed me the data in which he saw these [electron] fractions in pentalayer graphene,” Todadri recalls. “And that was a big surprise because it didn’t play out the way we thought.”
In his 2018 paper, Todadri predicted that fractional charge should emerge from a precursor phase characterized by a particular twisting of the electron wavefunction. Broadly speaking, he theorized that an electron’s quantum properties should have a certain twisting, or degree to which it can be manipulated without changing its inherent structure. This winding, he predicted, should increase with the number of graphene layers added to a given moiré structure.
“For pentalayer graphene, we thought the wavefunction would wind around five times, and that would be a precursor for electron fractions,” Todadri says. “But he did his experiments and discovered that it does wind around, but only once. That then raised this big question: How should we think about whatever we are seeing?”
Extraordinary crystal
In the team’s new study, Todadri went back to work out how electron fractions could emerge from pentalayer graphene if not through the path he initially predicted. The physicists looked through their original hypothesis and realized they may have missed a key ingredient.
“The standard strategy in the field when figuring out what’s happening in any electronic system is to treat electrons as independent actors, and from that, figure out their topology, or winding,” Todadri explains. “But from Long’s experiments, we knew this approximation must be incorrect.”
While in most materials, electrons have plenty of space to repel each other and zing about as independent agents, the particles are much more confined in two-dimensional structures such as pentalayer graphene. In such tight quarters, the team realized that electrons should also be forced to interact, behaving according to their quantum correlations in addition to their natural repulsion. When the physicists added interelectron interactions to their theory, they found it correctly predicted the winding that Ju observed for pentalayer graphene.
Once they had a theoretical prediction that matched with observations, the team could work from this prediction to identify a mechanism by which pentalayer graphene gave rise to fractional charge.
They found that the moiré arrangement of pentalayer graphene, in which each lattice-like layer of carbon atoms is arranged atop the other and on top of the boron-nitride, induces a weak electrical potential. When electrons pass through this potential, they form a sort of crystal, or a periodic formation, that confines the electrons and forces them to interact through their quantum correlations. This electron tug-of-war creates a sort of cloud of possible physical states for each electron. Each cloud interacts with every other electron cloud in the crystal, producing a wavefunction, or pattern of quantum correlations, with the winding that should set the stage for electrons to split into fractions of themselves.
“This crystal has a whole set of unusual properties that are different from ordinary crystals, and leads to many fascinating questions for future research,” Todadri says. “For the short term, this mechanism provides the theoretical foundation for understanding the observations of fractions of electrons in pentalayer graphene and for predicting other systems with similar physics.”
This work was supported, in part, by the National Science Foundation and the Simons Foundation.
Four from MIT named 2025 Rhodes Scholars
Yiming Chen ’24, Wilhem Hector, Anushka Nair, and David Oluigbo will start postgraduate studies at Oxford next fall.
Yiming Chen ’24, Wilhem Hector, Anushka Nair, and David Oluigbo have been selected as 2025 Rhodes Scholars and will begin fully funded postgraduate studies at Oxford University in the U.K. next fall. In addition to MIT’s two U.S. Rhodes winners, Oluigbo and Nair, two affiliates were awarded international Rhodes Scholarships: Chen for Rhodes’ China constituency and Hector for the Global Rhodes Scholarship. Hector is the first Haitian citizen to be named a Rhodes Scholar.
The scholars were supported by Associate Dean Kim Benard and the Distinguished Fellowships team in Career Advising and Professional Development. They received additional mentorship and guidance from the Presidential Committee on Distinguished Fellowships.
“It is profoundly inspiring to work with our amazing students, who have accomplished so much at MIT and, at the same time, thought deeply about how they can have an impact in solving the world's major challenges,” says Professor Nancy Kanwisher, who co-chairs the committee along with Professor Tom Levenson. “These students have worked hard to develop and articulate their vision and to learn to communicate it to others with passion, clarity, and confidence. We are thrilled but not surprised to see so many of them recognized this year as finalists and as winners.”
Yiming Chen ’24
Yiming Chen, from Beijing, China, and the Washington area, was named one of four Rhodes China Scholars on Sept. 28. At Oxford, she will pursue graduate studies in engineering science, working toward her ongoing goal of advancing AI safety and reliability in clinical workflows.
Chen graduated from MIT in 2024 with a BS in mathematics and computer science and an MEng in computer science. She worked on several projects involving machine learning for health care, and focused her master’s research on medical imaging in the Medical Vision Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL).
Collaborating with IBM Research, Chen developed a neural framework for clinical-grade lumen segmentation in intravascular ultrasound and presented her findings at the MICCAI Machine Learning in Medical Imaging conference. Additionally, she worked at Cleanlab, an MIT-founded startup, creating an open-source library to ensure the integrity of image datasets used in vision tasks.
Chen was a teaching assistant in the MIT math and electrical engineering and computer science departments, and received a teaching excellence award. She taught high school students at the Hampshire College Summer Studies in Math and was selected to participate in MISTI Global Teaching Labs in Italy.
Having studied the guzheng, a traditional Chinese instrument, since age 4, Chen served as president of the MIT Chinese Music Ensemble, explored Eastern and Western music synergies with the MIT Chamber Music Society, and performed at the United Nations. On campus, she was also active with Asymptones a cappella, MIT Ring Committee, Ribotones, Figure Skating Club, and the Undergraduate Association Innovation Committee.
Wilhem Hector
Wilhem Hector, a senior from Port-au-Prince, Haiti, majoring in mechanical engineering, was awarded a Global Rhodes Scholarship on Nov. 1. The first Haitian national to be named a Rhodes Scholar, Hector will pursue at Oxford a master’s in energy systems, followed by a master’s in education focusing on digital and social change. His long-term goals are twofold: pioneering Haiti’s renewable energy infrastructure and expanding hands-on opportunities in the country’s national curriculum.
Hector developed his passion for energy through his research in the MIT Howland Lab, where he investigated the uncertainty of wind power production during active yaw control. He also helped launch the MIT Renewable Energy Clinic through his work on the sources of opposition to energy projects in the U.S. Beyond his research, Hector had notable contributions as an intern at Radia Inc. and DTU Wind Energy Systems, where he helped develop computational wind farm modeling and simulation techniques.
Outside of MIT, he leads the Hector Foundation, a nonprofit providing educational opportunities to young people in Haiti. He has raised over $80,000 in the past five years to finance its initiatives, including the construction of Project Manus, Haiti’s first open-use engineering makerspace. Hector’s service endeavors have been supported by the MIT PKG Center, which awarded him the Davis Peace Prize, the PKG Fellowship for Social Impact, and the PKG Award for Public Service.
Hector co-chairs both the Student Events Board and the Class of 2025 Senior Ball Committee and has served as the social chair for Chocolate City and the African Students Association.
Anushka Nair
Anushka Nair, from Portland, Oregon, will graduate next spring with BS and MEng degrees in computer science and engineering with concentrations in economics and AI. She plans to pursue a DPhil in social data science at the Oxford Internet Institute. Nair aims to develop ethical AI technologies that address pressing societal challenges, beginning with combating misinformation.
For her master’s thesis under Professor David Rand, Nair is developing LLM-powered fact-checking tools to detect nuanced misinformation beyond human or automated capabilities. She also researches human-AI co-reasoning at the MIT Center for Collective Intelligence with Professor Thomas Malone. Previously, she conducted research on autonomous vehicle navigation at Stanford’s AI and Robotics Lab and on energy microgrid load balancing at MIT’s Institute for Data, Systems, and Society, and worked with Professor Esther Duflo in economics.
Nair interned in the Executive Office of the Secretary General at the United Nations, where she integrated technology solutions and assisted with launching the High-Level Advisory Body on AI. She also interned in Tesla’s energy sector, contributing to Autobidder, an energy trading tool, and led the launch of a platform for monitoring distributed energy resources and renewable power plants. Her work has earned her recognition as a Social and Ethical Responsibilities of Computing Scholar and a U.S. Presidential Scholar.
Nair has served as President of the MIT Society of Women Engineers and MIT and Harvard Women in AI, spearheading outreach programs to mentor young women in STEM fields. She also served as president of MIT Honors Societies Eta Kappa Nu and Tau Beta Pi.
David Oluigbo
David Oluigbo, from Washington, is a senior majoring in artificial intelligence and decision making and minoring in brain and cognitive sciences. At Oxford, he will undertake an MS in applied digital health followed by an MS in modeling for global health. Afterward, Oluigbo plans to attend medical school with the goal of becoming a physician-scientist who researches and applies AI to address medical challenges in low-income countries.
Since his first year at MIT, Oluigbo has conducted neural and brain research with Ev Fedorenko at the McGovern Institute for Brain Research and with Susanna Mierau’s Synapse and Network Development Group at Brigham and Women’s Hospital. His work with Mierau led to several publications and a poster presentation at the Federation of European Neuroscience Societies annual meeting.
In a summer internship at the National Institutes of Health Clinical Center, Oluigbo designed and trained machine-learning models on CT scans for automatic detection of neuroendocrine tumors, leading to first authorship on an International Society for Optics and Photonics conference proceeding paper, which he presented at the 2024 annual meeting. Oluigbo also did a summer internship with the Anyscale Learning for All Laboratory at the MIT Computer Science and Artificial Intelligence Laboratory.
Oluigbo is an EMT and systems administrator officer with MIT-EMS. He is a consultant for Code for Good, a representative on the MIT Schwarzman College of Computing Undergraduate Advisory Group, and holds executive roles with the Undergraduate Association, the MIT Brain and Cognitive Society, and the MIT Running Club.
A launchpad for entrepreneurship in aerospace
The Certificate in Aerospace Innovation gives students the tools and confidence to be aerospace entrepreneurs during an inflection point in the industry.
At age 22, aerospace engineer Eric Shaw worked on some of the world’s most powerful airplanes, yet learning to fly even the smallest one was out of reach. Just out of college, he could not afford civilian flight school and spent the next two years saving $12,000 to earn his private pilot’s license. Shaw knew there had to be a better, less expensive way to train pilots.
Now a graduate student in the MIT Sloan School of Management’s Leaders for Global Operations (LGO) program, Shaw joined the MIT Department of Aeronautics and Astronautics’ (AeroAstro) Certificate in Aerospace Innovation program to turn a years-long rumination into a viable solution. Along with fellow graduate students Gretel Gonzalez and Shaan Jagani, Shaw proposed training aspiring pilots on electric and hybrid planes. This approach reduces flight school expenses by up to 34 percent while shrinking the industry’s carbon footprint.
The trio shared their plan to create the Aeroelectric Flight Academy at the certificate program’s signature Pitchfest event last spring. Equipped with a pitch deck and a business plan, the team impressed the judges, who awarded them the competition’s top prize of $10,000.
What began as a curiosity to test an idea has reshaped Shaw’s view of his industry.
“Aerospace and entrepreneurship initially seemed antithetical to me,” Shaw says. “It’s a hard sector to break into because the capital expenses are huge and a few big dogs have a lot of influence. Earning this certificate and talking face-to-face with folks who have overcome this seemingly impossible gap has filled me with confidence.”
Disruption by design
AeroAstro introduced the Certificate in Aerospace Innovation in 2021 after engaging in a strategic planning process to take full advantage of the research and ideas coming out of the department. The initiative is spearheaded by AeroAstro professors Olivier L. de Weck SM ’99, PhD ’01 and Zoltán S. Spakovszky SM ’99, PhD ’00, in partnership with the Martin Trust Center for MIT Entrepreneurship. Its creation recognizes that the aerospace industry is at an inflection point. Major advancements in drone, satellite, and other technologies, coupled with an infusion of nongovernmental funding, have made it easier than ever to bring aerospace innovations to the marketplace.
“The landscape has radically shifted,” says Spakovszky, the Institute’s T. Wilson (1953) Professor in Aeronautics. “MIT students are responding to this change because startups are often the quickest path to impact.”
The certificate program has three requirements: coursework in both aerospace engineering and entrepreneurship, a speaker series primarily featuring MIT alumni and faculty, and hands-on entrepreneurship experience. In the latter, participants can enroll in the Trust Center’s StartMIT program and then compete in Pitchfest, which is modeled after the MIT $100K Entrepreneurship Competition. They can also join a summer incubator, such as the Trust Center’s MIT delta v or the Venture Exploration Program, run by the MIT Office of Innovation and the National Science Foundation’s Innovation Corps.
“At the end of the program, students will be able to look at a technical proposal and fairly quickly run some numbers and figure out if this innovation has market viability or if it’s completely utopian,” says de Weck, the Apollo Program Professor of Astronautics and associate department head of AeroAstro.
Since its inception, 46 people from the MIT community have participated and 13 have fulfilled the requirements of the two-year program to earn the certificate. The program’s fourth cohort is underway this fall with its largest enrollment yet, with 21 postdocs, graduate students, and undergraduate seniors across seven courses and programs at MIT.
A unicorn industry
When Eddie Obropta SM ’13, SM ’15 attended MIT, aerospace entrepreneurship meant working for SpaceX or Blue Origin. Yet he knew more was possible. He gave himself a crash course in entrepreneurship by competing in the MIT $100K Entrepreneurship Competition four times. Each year, his ideas became more refined and battle-tested by potential customers.
In his final entry in the competition, Obropta, along with MIT doctoral student Nikhil Vadhavkar and Forrest Meyen SM ’13, PhD ’17, proposed using drones to maximize crop yields. Their business, Raptor Maps, won. Today, Obropta serves as the co-founder and chief technology officer of Raptor Maps, which builds software to automate the operations and maintenance of solar farms using drones, robots, and artificial intelligence.
While Obropta received support from AeroAstro and MIT's existing entrepreneurial ecosystem, the tech leader was excited when de Weck and Spakovszky shared their plans to launch the Certificate in Aerospace Innovation. Obropta currently serves on the program’s advisory board, has been a presenter at the speaker series, and has served as a mentor and judge for Pitchfest.
“While there are a lot of excellent entrepreneurship programs across the Institute, the aerospace industry is its own unique beast,” Obropta says. “Today’s aspiring founders are visionaries looking to build a spacefaring civilization, but they need specialized support in navigating complex multidisciplinary missions and heavy government involvement.”
Entrepreneurs are everywhere, not just at startups
While the certificate program will likely produce success stories like Raptor Maps, that is not the ultimate goal, say de Weck and Spakovszky. Thinking and acting like an entrepreneur — understanding market potential, dealing with failure, and building a deep professional network — benefits everyone, no matter their occupation.
Paul Cheek, executive director of the Trust Center who also teaches a course in the certificate program, agrees.
“At its core, entrepreneurship is a mindset and a skill set; it’s about moving the needle forward for maximum impact,” Cheek says. “A lot of organizations, including large corporations, nonprofits, and the government, can benefit from that type of thinking.”
That form of entrepreneurship resonates with the Aeroelectric Flight Academy team. Although they are meeting with potential investors and looking to scale their business, all three plan to pursue their first passions: Jagani hopes to be an astronaut, Shaw would like to be an executive at one of the “big dog” aerospace companies, and Gonzalez wants to work for the Mexican Space Agency.
Gonzalez, who is on track to earn her certificate in 2025, says she is especially grateful for the people she met through the program.
“I didn’t know an aerospace entrepreneurship community even existed when I began the program,” Gonzalez says. “It’s here and it’s filled with very dedicated and generous people who have shared insights with me that I don’t think I would have learned anywhere else.”
Ensuring a durable transition
Progress on the energy transition depends on collective action benefiting all stakeholders, agreed participants in MITEI’s annual research conference.
To fend off the worst impacts of climate change, “we have to decarbonize, and do it even faster,” said William H. Green, director of the MIT Energy Initiative (MITEI) and Hoyt C. Hottel Professor in the MIT Department of Chemical Engineering, at MITEI’s Annual Research Conference.
“But how the heck do we actually achieve this goal when the United States is in the middle of a divisive election campaign, and globally, we’re facing all kinds of geopolitical conflicts, trade protectionism, weather disasters, increasing demand from developing countries building a middle class, and data centers in countries like the U.S.?”
Researchers, government officials, and business leaders convened in Cambridge, Massachusetts, Sept. 25-26 to wrestle with this vexing question at the conference that was themed, “A durable energy transition: How to stay on track in the face of increasing demand and unpredictable obstacles.”
“In this room we have a lot of power,” said Green, “if we work together, convey to all of society what we see as real pathways and policies to solve problems, and take collective action.”
The critical role of consensus-building in driving the energy transition arose repeatedly in conference sessions, whether the topic involved developing and adopting new technologies, constructing and siting infrastructure, drafting and passing vital energy policies, or attracting and retaining a skilled workforce.
Resolving conflicts
There is “blowback and a social cost” in transitioning away from fossil fuels, said Stephen Ansolabehere, the Frank G. Thompson Professor of Government at Harvard University, in a panel on the social barriers to decarbonization. “Companies need to engage differently and recognize the rights of communities,” he said.
Nora DeDontney, director of development at Vineyard Offshore, described her company’s two years of outreach and negotiations to bring large cables from ocean-based wind turbines onshore.
“Our motto is, 'community first,'” she said. Her company works to mitigate any impacts towns might feel from offshore wind infrastructure construction with projects such as sewer upgrades; provides workforce training to Tribal Nations; and lays out wind turbines in a manner that provides safe and reliable areas for local fisheries.
Elsa A. Olivetti, professor in the Department of Materials Science and Engineering at MIT and the lead of the Decarbonization Mission of MIT’s new Climate Project, discussed the urgent need for rapid scale-up of mineral extraction. “Estimates indicate that to electrify the vehicle fleet by 2050, about six new large copper mines need to come on line each year,” she said. To meet the demand for metals in the United States means pushing into Indigenous lands and environmentally sensitive habitats. “The timeline of permitting is not aligned with the temporal acceleration needed,” she said.
Larry Susskind, the Ford Professor of Urban and Environmental Planning in the MIT Department of Urban Studies and Planning, is trying to resolve such tensions with universities playing the role of mediators. He is creating renewable energy clinics where students train to participate in emerging disputes over siting. “Talk to people before decisions are made, conduct joint fact finding, so that facilities reduce harms and share the benefits,” he said.
Clean energy boom and pressure
A relatively recent and unforeseen increase in demand for energy comes from data centers, which are being built by large technology companies for new offerings, such as artificial intelligence.
“General energy demand was flat for 20 years — and now, boom,” said Sean James, Microsoft’s senior director of data center research. “It caught utilities flatfooted.” With the expansion of AI, the rush to provision data centers with upwards of 35 gigawatts of new (and mainly renewable) power in the near future intensifies pressure on big companies to balance the concerns of stakeholders across multiple domains. Google is pursuing 24/7 carbon-free energy by 2030, said Devon Swezey, the company’s senior manager for global energy and climate.
“We’re pursuing this by purchasing more and different types of clean energy locally, and accelerating technological innovation such as next-generation geothermal projects,” he said. Pedro Gómez Lopez, strategy and development director at Ferrovial Digital, said the company, which designs and constructs data centers, incorporates renewable energy into its projects, contributing to decarbonization goals and benefiting the locales where they are sited. “We can create a new supply of power, taking the heat generated by a data center to residences or industries in neighborhoods through District Heating initiatives,” he said.
The Inflation Reduction Act and other legislation have ramped up employment opportunities in clean energy nationwide, touching every region, including those most tied to fossil fuels. “At the start of 2024 there were about 3.5 million clean energy jobs, with 'red' states showing the fastest growth in clean energy jobs,” said David S. Miller, managing partner at Clean Energy Ventures. “The majority (58 percent) of new jobs in energy are now in clean energy — that transition has happened. And one in 16 new jobs nationwide were in clean energy, with clean energy jobs growing more than three times faster than job growth economy-wide.”
In this rapid expansion, the U.S. Department of Energy (DoE) is prioritizing economically marginalized places, according to Zoe Lipman, lead for good jobs and labor standards in the Office of Energy Jobs at the DoE. “The community benefit process is integrated into our funding,” she said. “We are creating the foundation of a virtuous circle,” encouraging benefits to flow to disadvantaged and energy communities, spurring workforce training partnerships, and promoting well-paid union jobs. “These policies incentivize proactive community and labor engagement, and deliver community benefits, both of which are key to building support for technological change.”
Hydrogen opportunity and challenge
While engagement with stakeholders helps clear the path for implementation of technology and the spread of infrastructure, there remain enormous policy, scientific, and engineering challenges to solve, said multiple conference participants. In a “fireside chat,” Prasanna V. Joshi, vice president of low-carbon-solutions technology at ExxonMobil, and Ernest J. Moniz, professor of physics and special advisor to the president at MIT, discussed efforts to replace natural gas and coal with zero-carbon hydrogen in order to reduce greenhouse gas emissions in such major industries as steel and fertilizer manufacturing.
“We have gone into an era of industrial policy,” said Moniz, citing a new DoE program offering incentives to generate demand for hydrogen — more costly than conventional fossil fuels — in end-use applications. “We are going to have to transition from our current approach, which I would call carrots-and-twigs, to ultimately, carrots-and-sticks,” Moniz warned, in order to create “a self-sustaining, major, scalable, affordable hydrogen economy.”
To achieve net zero emissions by 2050, ExxonMobil intends to use carbon capture and sequestration in natural gas-based hydrogen and ammonia production. Ammonia can also serve as a zero-carbon fuel. Industry is exploring burning ammonia directly in coal-fired power plants to extend the hydrogen value chain. But there are challenges. “How do you burn 100 percent ammonia?” asked Joshi. “That's one of the key technology breakthroughs that's needed.” Joshi believes that collaboration with MIT’s “ecosystem of breakthrough innovation” will be essential to breaking logjams around the hydrogen and ammonia-based industries.
MIT ingenuity essential
The energy transition is placing very different demands on different regions around the world. Take India, where today per capita power consumption is one of the lowest. But Indians “are an aspirational people … and with increasing urbanization and industrial activity, the growth in power demand is expected to triple by 2050,” said Praveer Sinha, CEO and managing director of the Tata Power Co. Ltd., in his keynote speech. For that nation, which currently relies on coal, the move to clean energy means bringing another 300 gigawatts of zero-carbon capacity online in the next five years. Sinha sees this power coming from wind, solar, and hydro, supplemented by nuclear energy.
“India plans to triple nuclear power generation capacity by 2032, and is focusing on advancing small modular reactors,” said Sinha. “The country also needs the rapid deployment of storage solutions to firm up the intermittent power.” The goal is to provide reliable electricity 24/7 to a population living both in large cities and in geographically remote villages, with the help of long-range transmission lines and local microgrids. “India’s energy transition will require innovative and affordable technology solutions, and there is no better place to go than MIT, where you have the best brains, startups, and technology,” he said.
These assets were on full display at the conference, among them a cluster of young businesses.
The pipeline of research talent extended into the undergraduate ranks, with a conference “slam” competition showcasing students’ summer research projects in areas from carbon capture using enzymes to 3D design for the coils used in fusion energy confinement.
“MIT students like me are looking to be the next generation of energy leaders, looking for careers where we can apply our engineering skills to tackle exciting climate problems and make a tangible impact,” said Trent Lee, a junior in mechanical engineering researching improvements in lithium-ion energy storage. “We are stoked by the energy transition, because it’s not just the future, but our chance to build it.”
J-PAL North America announces new evaluation incubator collaborators from state and local governments
Selected LEVER collaborators will work with the organization to develop an evaluation of their respective programs that alleviate poverty.
J-PAL North America recently selected government partners for the 2024-25 Leveraging Evaluation and Evidence for Equitable Recovery (LEVER) Evaluation Incubator cohort. Selected collaborators will receive funding and technical assistance to develop or launch a randomized evaluation for one of their programs. These collaborations represent jurisdictions across the United States and demonstrate the growing enthusiasm for evidence-based policymaking.
Launched in 2023, LEVER is a joint venture between J-PAL North America and Results for America. Through the Evaluation Incubator, trainings, and other program offerings, LEVER seeks to address the barriers many state and local governments face around finding and generating evidence to inform program design. LEVER offers government leaders the opportunity to learn best practices for policy evaluations and how to integrate evidence into decision-making. Since the program’s inception, more than 80 government jurisdictions have participated in LEVER offerings.
J-PAL North America’s Evaluation Incubator helps collaborators turn policy-relevant research questions into well-designed randomized evaluations, generating rigorous evidence to inform pressing programmatic and policy decisions. The program also aims to build a culture of evidence use and give government partners the tools to continue generating and utilizing evidence in their day-to-day operations.
In addition to funding and technical assistance, the selected state and local government collaborators will be connected with researchers from J-PAL’s network to help advance their evaluation ideas. Evaluation support will also be centered on community-engaged research practices, which emphasize collaborating with and learning from the groups most affected by the program being evaluated.
Evaluation Incubator selected projects
Pierce County Human Services (PCHS) in the state of Washington will evaluate two programs as part of the Evaluation Incubator. The first will examine how extending stays in a fentanyl detox program affects the successful completion of inpatient treatment and hospital utilization for individuals. “PCHS is interested in evaluating longer fentanyl detox stays to inform our funding decisions, streamline our resource utilization, and encourage additional financial commitments to address the unmet needs of individuals dealing with opioid use disorder,” says Trish Crocker, grant coordinator.
The second PCHS evaluation will examine how providing medication and outreach services via a mobile distribution unit to individuals with opioid use disorder affects program take-up and substance use. Margo Burnison, a behavioral health manager with PCHS, says that the team is “thrilled to be partnering with J-PAL North America to dive deep into the data to inform our elected leaders on the best way to utilize available resources.”
The City of Los Angeles Youth Development Department (YDD) seeks to evaluate a research-informed program: Student Engagement, Exploration, and Development in STEM (SEEDS). This intergenerational STEM mentorship program supports underrepresented middle school and college students in STEM by providing culturally responsive mentorship. The program seeks to foster these students’ STEM identity and degree attainment in higher education. YDD has been working with researchers at the University of Southern California to measure the SEEDS program’s impact, but is interested in developing a randomized evaluation to generate further evidence. Darnell Cole, professor and co-director of the Research Center for Education, Identity and Social Justice, shares his excitement about the collaboration with J-PAL: “We welcome the opportunity to measure the impact of the SEEDS program on our students’ educational experience. Rigorously testing the SEEDS program will help us improve support for STEM students, ultimately enhancing their persistence and success.”
The Fort Wayne Police Department’s Hope and Recovery Team in Indiana will evaluate the impact of two programs in which social workers connect people who have experienced an overdose, or who have a mental illness, to treatment and resources. “We believe we are on the right track in the work we are doing with the crisis intervention social worker and the recovery coach, but having an outside evaluation of both programs would be extremely helpful in understanding whether and what aspects of these programs are most effective,” says Police Captain Kevin Hunter.
The County of San Diego’s Office of Evaluation, Performance and Analytics, and Planning & Development Services will engage with J-PAL staff to explore evaluation opportunities for two programs that are a part of the county’s Climate Action Plan. The Equity-Driven Tree Planting Program seeks to increase tree canopy coverage, and the Climate Smart Land Stewardship Program will encourage climate-smart agricultural practices. Ricardo Basurto-Davila, chief evaluation officer, says that “the county is dedicated to evidence-based policymaking and taking decisive action against climate change. The work with J-PAL will support us in combining these commitments to maximize the effectiveness in decreasing emissions through these programs.”
J-PAL North America looks forward to working with the selected collaborators in the coming months to learn more about these promising programs, clarify our partners’ evidence goals, and design randomized evaluations to measure their impact.
Linzixuan (Rhoda) Zhang wins 2024 Collegiate Inventors Competition
MIT graduate student earns top honors in Graduate and People’s Choice categories for her work on nutrient-stabilizing materials.
Linzixuan (Rhoda) Zhang, a doctoral candidate in the MIT Department of Chemical Engineering, recently won the 2024 Collegiate Inventors Competition, medaling in both the Graduate and People’s Choice categories for developing materials to stabilize nutrients in food with the goal of improving global health.
The annual competition, organized by the National Inventors Hall of Fame and United States Patent and Trademark Office (USPTO), celebrates college and university student inventors. The finalists present their inventions to a panel of final-round judges composed of National Inventors Hall of Fame inductees and USPTO officials.
No stranger to having her work in the limelight, Zhang is a three-time winner of the Koch Institute Image Awards in 2022, 2023, and 2024, as well as a 2022 fellow at the MIT Abdul Latif Jameel Water and Food Systems Lab.
"Rhoda is an exceptionally dedicated and creative student. Her well-deserved award recognizes the potential of her research on nutrient stabilization, which could have a significant impact on society," says Ana Jaklenec, one of Zhang’s advisors and a principal investigator at MIT’s Koch Institute for Integrative Cancer Research. Zhang is also advised by David H. Koch (1962) Institute Professor Robert Langer.
Frameworks for global health
In a world where nearly 2 billion people suffer from micronutrient deficiencies, particularly iron, the urgency for effective solutions has never been greater. Iron deficiency is especially harmful for vulnerable populations such as children and pregnant women, since it can lead to weakened immune systems and developmental delays.
The World Health Organization has highlighted food fortification as a cost-effective strategy, yet many current methods fall short. Iron and other nutrients can break down during processing or cooking, and synthetic additives often come with high costs and environmental drawbacks.
Zhang, along with her teammate Xin Yang, a postdoctoral associate at the Koch Institute, set out to develop new technologies for nutrient fortification that are effective, accessible, and sustainable, leading to the invention of nutritional metal-organic frameworks (NuMOFs) and the subsequent launch of MOFe Coffee, the world’s first iron-fortified coffee. NuMOFs not only protect essential nutrients such as iron in food for long periods of time, but also make them more easily absorbed and used once consumed.
The inspiration for the coffee came from the success of iodized salt, which significantly reduced iodine deficiency worldwide. Because coffee and tea are associated with low iron absorption, iron fortification would directly address the challenge.
However, replicating the success of iodized salt for iron fortification has been extremely challenging due to the micronutrient’s high reactivity and the instability of iron(II) salts. As researchers with backgrounds in material science, chemistry, and food technology, Zhang and Yang leveraged their expertise to develop a solution that could overcome these technical barriers.
The fortified coffee serves as a practical example of how NuMOFs can help people increase their iron intake by engaging in a habit that’s already part of their daily routine, with significant potential benefits for women, who are disproportionately affected by iron deficiency. The team plans to expand the technology to incorporate additional nutrients to address a wider array of nutritional deficiencies and improve health equity globally.
Fast-track to addressing global health improvements
Looking ahead, Zhang and Yang, both in the Jaklenec Group, are focused on product commercialization and ongoing research, refining MOFe Coffee to enhance nutrient stability and ensuring the product remains palatable while maximizing iron absorption.
Winning the Collegiate Inventors Competition means that Zhang, Yang, and the team can fast-track their patent application with the USPTO. The team hopes that their fast-tracked patent will allow them to attract more potential investors and partners, which is crucial for scaling their efforts. A quicker patent process also means that the team can bring the technology to market faster, helping improve global nutrition and health for those who need it most.
“Our goal is to make a real difference in addressing micronutrient deficiencies around the world,” says Zhang.
Any child who’s spent a morning building sandcastles only to watch the afternoon tide ruin them in minutes knows the ocean always wins.
Yet, coastal protection strategies have historically focused on battling the sea — attempting to hold back tides and fighting waves and currents by armoring coastlines with jetties and seawalls and taking sand from the ocean floor to “renourish” beaches. These approaches are temporary fixes, but eventually the sea retakes dredged sand, intense surf breaches seawalls, and jetties may just push erosion to a neighboring beach. The ocean wins.
With climate change accelerating sea level rise and coastal erosion, the need for better solutions is urgent. Noting that eight of the world’s 10 largest cities are near a coast, a recent National Oceanic and Atmospheric Administration (NOAA) report pointed to 2023’s record-high global sea level and warned that high tide flooding is now 300 to 900 percent more frequent than it was 50 years ago, threatening homes, businesses, roads and bridges, and a range of public infrastructure, from water supplies to power plants.
Island nations face these threats more acutely than other countries and there’s a critical need for better solutions. MIT’s Self-Assembly Lab is refining an innovative one that demonstrates the value of letting nature take its course — with some human coaxing.
The Maldives, an Indian Ocean archipelago of nearly 1,200 islands, has traditionally relied on land reclamation via dredging to replenish its eroding coastlines. Working with the Maldivian climate technology company Invena Private Limited, the Self-Assembly Lab is pursuing technological solutions to coastal erosion that mimic nature by harnessing ocean currents to accumulate sand. The Growing Islands project creates and deploys underwater structures that take advantage of wave energy to promote accumulation of sand in strategic locations — helping to expand islands and rebuild coastlines in sustainable ways that can eventually be scaled to coastal areas around the world.
“There’s room for a new perspective on climate adaptation, one that builds with nature and leverages data for equitable decision-making,” says Invena co-founder and CEO Sarah Dole.
MIT’s pioneering work was the topic of multiple presentations during the United Nations General Assembly and Climate Week in New York City in late September. During the week, Self-Assembly Lab co-founder and director Skylar Tibbits and Maldives Minister of Climate Change, Environment and Energy Thoriq Ibrahim also presented findings of the Growing Islands project at MIT Solve’s Global Challenge Finals in New York.
“There’s this interesting story that’s emerging around the dynamics of islands,” says Tibbits, whose U.N.-sponsored panel (“Adaptation Through Innovation: How the Private Sector Could Lead the Way”) was co-hosted by the Government of Maldives and the U.S. Agency for International Development, a Growing Islands project funder.
In a recent interview, Tibbits said islands “are almost lifelike in their characteristics. They can adapt and grow and change and fluctuate.” Despite some predictions that the Maldives might be inundated by sea level rise and ravaged by erosion, “maybe these islands are actually more resilient than we thought. And maybe there’s a lot more we can learn from these natural formations of sand … maybe they are a better model for how we adapt in the future for sea level rise and erosion and climate change than our man-made cities.”
Building on a series of lab experiments begun in 2017, the MIT Self-Assembly Lab and Invena have been testing the efficacy of submersible structures to expand islands and rebuild coasts in the Maldivian capital of Male since 2019. Since then, researchers have honed the experiments based on initial results that demonstrate the promise of using submersible bladders and other structures to utilize natural currents to encourage strategic accumulation of sand.
The work is “boundary-pushing,” says Alex Moen, chief explorer engagement officer at the National Geographic Society, an early funder of the project.
“Skylar and his team’s innovative technology reflect the type of forward-thinking, solutions-oriented approaches necessary to address the growing threat of sea level rise and erosion to island nations and coastal regions,” Moen said.
Most recently, in August 2024, the team submerged a 60-by-60-meter structure in a lagoon near Male. The structure is six times the size of its predecessor installed in 2019, Tibbits says, adding that while the 2019 island-building experiment was a success, ocean currents in the Maldives change seasonally and it only allowed for accretion of sand in one season.
“The idea of this was to make it omnidirectional. We wanted to make it work year-round. In any direction, any season, we should be accumulating sand in the same area,” Tibbits says. “This is our largest experiment so far, and I think it has the best chance to accumulate the most amount of sand, so we’re super excited about that.”
The next experiment will focus not on building islands, but on overcoming beach erosion. This project, planned for installation later this fall, is envisioned to not only enlarge a beach but also provide recreational benefits for local residents and enhanced habitat for marine life such as fish and corals.
“This will be the first large-scale installment that’s intentionally designed for marine habitats,” Tibbits says.
Another key aspect of the Growing Islands project takes place in Tibbits’ lab at MIT, where researchers are improving the ability to predict and track changes in low-lying islands through satellite imagery analysis — a technique that promises to facilitate what is now a labor-intensive process involving land and sea surveys by drones and researchers on foot and at sea.
“In the future, we could be monitoring and predicting coastlines around the world — every island, every coastline around the world,” Tibbits says. “Are these islands getting smaller, getting bigger? How fast are they losing ground? No one really knows unless we do it by physically surveying right now and that’s not scalable. We do think we have a solution for that coming.”
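For readers curious about the mechanics of such monitoring, the sketch below is a minimal, generic illustration of how island area can be estimated from multispectral satellite imagery using a standard water index (NDWI). It is not the Self-Assembly Lab’s actual pipeline, and the band arrays, pixel size, and water threshold are assumed placeholder inputs.

```python
# Generic illustration of estimating island land area from multispectral
# satellite imagery with the normalized difference water index (NDWI).
# Not the Self-Assembly Lab's pipeline; all inputs below are placeholders.
import numpy as np

def land_area_m2(green, nir, pixel_size_m=10.0, water_threshold=0.0):
    """Estimate land area (m^2) from green and near-infrared reflectance bands.

    NDWI = (green - nir) / (green + nir); pixels above the threshold are
    treated as water, the rest as land.
    """
    ndwi = (green - nir) / (green + nir + 1e-9)  # small term avoids divide-by-zero
    land_mask = ndwi <= water_threshold
    return land_mask.sum() * pixel_size_m ** 2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-ins for co-registered scenes of the same island on two dates.
    green_t0, nir_t0 = rng.random((2, 500, 500))
    green_t1, nir_t1 = rng.random((2, 500, 500))
    a0 = land_area_m2(green_t0, nir_t0)
    a1 = land_area_m2(green_t1, nir_t1)
    print(f"land area change between dates: {a1 - a0:+.0f} m^2 ({(a1 - a0) / a0:+.1%})")
```

Comparing such area estimates across co-registered scenes from different dates gives a first-order measure of whether an island is gaining or losing ground, which is the kind of signal the lab hopes to extract at scale.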
Also hopefully coming soon is financial support for a Mobile Ocean Innovation Lab, a “floating hub” that would provide small island developing states with advanced technologies to foster coastal and climate resilience, conservation, and renewable energy. Eventually, Tibbits says, it would enable the team to travel “any place around the world and partner with local communities, local innovators, artists, and scientists to help co-develop and deploy some of these technologies in a better way.”
Expanding the reach of climate change solutions that collaborate with, rather than oppose, natural forces depends on getting more people, organizations, and governments on board.
“There are two challenges,” Tibbits says. “One of them is the legacy and history of what humans have done in the past that constrains what we think we can do in the future. For centuries, we’ve been building hard infrastructure at our coastlines, so we have a lot of knowledge about that. We have companies and practices and expertise, and we have a built-up confidence, or ego, around what’s possible. We need to change that.
“The second problem,” he continues, “is the money-speed-convenience problem — or the known-versus-unknown problem. The hard infrastructure, whether that’s groins or seawalls or just dredging … these practices in some ways have a clear cost and timeline, and we are used to operating in that mindset. And nature doesn’t work that way. Things grow, change, and adapt on their own timeline.”
Teaming up with waves and currents to preserve islands and coastlines requires a mindset shift that’s difficult, but ultimately worthwhile, Tibbits contends.
“We need to dance with nature. We’re never going to win if we’re trying to resist it,” he says. “But the best-case scenario is that we can take all the positive attributes in the environment and take all the creative, positive things we can do as humans and work together to create something that’s more than the sum of its parts.”
Six in 10 Americans are living with at least one chronic disease, and four in 10 Americans have two or more chronic diseases. Some of those diseases, such as hypothyroidism and inflammatory diseases, require individuals to carefully track certain blood tests in order to manage their conditions. Unfortunately, that usually means an onerous cycle of scheduling appointments, traveling to hospitals, and waiting for lab results.
Now SiPhox Health is working to help patients and their doctors manage diseases from the comfort of their home with a new kind of blood test based on a silicon photonic chip. The system is the size of a coffee maker and can produce precise readings for 20 different biomarkers.
The chip-based device is not yet FDA-cleared, so it is currently only being used for research purposes, but SiPhox also provides mail-in blood testing to thousands of people with chronic diseases, both directly and through other health care and wellness businesses, using approved technology. The company hopes its new system can soon deliver faster results to every home that needs it.
“A lot of blood tests aren’t done because they're too inconvenient,” SiPhox founder and former MIT researcher Michael Dubrovsky says. “People skip scheduled blood tests, and physicians don’t always prescribe blood tests because they know it’s inconvenient. That requires them to base their decisions on symptoms, and that’s not optimal for many of these chronic diseases.”
Dubrovsky and SiPhox co-founder Diedrik Vermeulen met at MIT while researching photonic chip and laser technology. They see SiPhox’s technology as the latest in a long trend toward smaller and more scalable devices as they are condensed onto integrated chips.
“Biolabs typically do blood testing with these large instruments that are full of optics, lasers, lenses, mirrors, and all these very expensive features,” Vermeulen says. “We don’t change any of the main features. We leave all the optics the same, but we miniaturize it onto our chips and make it so scalable that you can ship it to homes. It’s like how computers used to be the size of a room and you could only find them in high-end universities — now they’re all on a chip. We’re doing the same for blood testing.”
Photonic chips with electric properties
Dubrovsky and Vermeulen met through the MIT ecosystem in 2019. Vermeulen had worked in MIT’s Silicon Photonics Group within the Research Laboratory of Electronics and Dubrovsky was working in the MIT Materials for Micro and Nano Systems group.
The two quickly bonded over a new way to approach optical chips.
“We had this idea of doing optical chips more like a printed circuit board,” Vermeulen explains. “Electrical chips incorporate a lot of chips on one circuit board, but optical chips typically do everything with a single chip. We wanted to combine optical chips into a new kind of circuit board.”
The founders met regularly in the Martin Trust Center for MIT Entrepreneurship to refine their approach. In February of 2020, they started filing patents and receiving guidance from MIT professors. They also entered the START.nano program, which helps early-stage companies accelerate their innovations by giving them access to MIT.nano’s laboratories and equipment.
The same lasers that are used to mass produce traditional silicon chips can be used to manufacture SiPhox’s integrated photonic silicon chips. Each 1-millimeter chip contains lenses, polarizers, modulators, splitters, and other optical components you’d see in a traditional lab-based system, but SiPhox’s chips are cheap enough to be single-use.
Just as the founders were deciding on the first application for their new chip, the Covid-19 pandemic hit. They took that as a sign.
“We decided to focus on biosensing, with one of our chips being disposable and the other one being reusable,” Vermeulen says.
The founders worked to detect infectious disease with their chips but realized the technology was better suited for high-fidelity blood testing.
“I worked a lot on tunable lasers at MIT, and we use a slightly different approach to lasers at SiPhox,” Vermeulen says. “We applied all of the lessons we learned at MIT to design something from scratch.”
SiPhox’s chip testing system works with third-party arm patches that people already use to collect blood samples at home. Dubrovsky likens the system to a Nespresso machine in which users simply place a pod into the machine and press a button. Each of SiPhox’s disposable cartridges contains an array of photonic immunoassay sensors that can be used to detect specific proteins or hormones.
The system includes a dashboard that can be viewed by a physician or the patient themselves to get biomarker data at home. The dashboard also provides historic data and educational content about the biomarkers measured. SiPhox also uses large language models to parse third-party blood test data, allowing users to track traditional blood tests in the same dashboard.
Because the approach makes use of semiconductor lasers and silicon chips, the founders say a single traditional chip manufacturing facility, or fab, could produce 1 billion of SiPhox’s chips every month.
“Our technology is very scalable because it’s all on a chip,” Vermeulen says. “There are only two ways to really scale something: You can do injection molding; that’s how you produce billions of plastic cups, for instance. But if you want to scale something very complex, you have to put it on a silicon chip.”
A platform for health
The founders believe their technology could enable a world where tracking biomarkers is as easy as brushing your teeth. That would have huge implications for the tens of millions of Americans who need to get regular blood tests to manage chronic diseases.
“For people with inflammatory diseases, tracking inflammation levels is very important because they can develop resistance to their medicine,” Dubrovsky says. “Once they experience symptoms of a flare-up, it’s very hard to reverse. And these symptoms can be horrible, so catching it early is really important.”
To gain FDA clearance, SiPhox plans to begin studies in the coming months, but its system’s accuracy has already been validated by third parties, and the company’s Burlington, Massachusetts, facility is capable of manufacturing about 10,000 of its cartridges per month.
Once SiPhox gains FDA clearance, it plans to partner with health care systems, health insurers, employers, and mail-in blood testing companies to help people everywhere track their health.
“We can offer a new way for people to access health care in their home,” Vermeulen says. “Once they have our blood testing device, whether for a chronic disease or something else, anytime they want telehealth enabled by blood testing, they can use our device, similar to how Apple users have access to third-party apps from many different service providers. SiPhox users will have access to curated third-party services built on top of the core blood testing capability.”
Samurai in Japan, then engineers at MIT
A new exhibit explores the Institute’s first Japanese students, who arrived as MIT was taking flight and their own country was opening up.
In 1867, five Japanese students took a long sea voyage to Massachusetts for some advanced schooling. The group included a 13-year-old named Eiichirō Honma, who was from one of the samurai families that ruled Japan. Honma expected to become a samurai warrior himself, and enrolled in a military academy in Worcester.
And then some unexpected things happened.
Japan’s ruling dynasty, the shogunate that had run the country since the 17th century, lost power. No longer obligated to become a warrior, Honma found himself free to try other things in life. In 1870, he enrolled in the recently opened Massachusetts Institute of Technology, where he studied civil engineering. By 1874, Honma had become MIT’s first graduate from Japan.
“Honma may have thought he was going to be a military officer, but by the time he got to MIT he wanted to do something else,” says Hiromu Nagahara, an associate professor of history at MIT. “And that something else was the hottest technology of its time: railroads.” Indeed, Honma returned to Japan and became a celebrated engineer of rail lines, including one through the mountainous Usui Pass in central Japan.
Now, 150 years after he graduated, Honma is a central part of an exhibit about MIT’s earliest Japanese students, “From Samurai into Engineers,” which runs through Dec. 19 at Hayden Library.
The exhibit features two other early MIT graduates from Japan. Takuma Dan, Class of 1878, was also from a samurai household, studied mining engineering at MIT, and eventually became prominent in Japan as head of the Mitsui corporation. Kiyoko Makino was the first Japanese woman and the first female international student to enroll at MIT, where she studied biology from 1903 to 1905, later becoming a teacher and textbook author in Japan.
Tracing their lives sheds light on interesting careers — and illuminates a historical period in which MIT was reaching prominence, Japan was opening itself to the world, and modern life was rolling forward.
“When we look at Eiichirō Honma, Takuma Dan, and Kiyoko Makino, their lives fit the larger context of the relationship between America and Japan,” says Nagahara.
The making of “From Samurai into Engineers” was a collective effort, partly generated through MIT course 21H.155/21G.555 (Modern Japan), taught by Nagahara in the spring of 2024. Students contributed to the research and wrote short historical summaries incorporated into the exhibition. The exhibit draws on original archival materials, such as the students’ letters, theses, problem sets, and other documents. Honma’s drawings for an iron girder railroad bridge, as part of his own MIT thesis, are on display, for instance.
Others on campus significantly collaborated on the project from its inception. Christine Pilcavage, managing director of the MIT-Japan Program, helped encourage the development of the effort, having held an ongoing interest in the subject.
“I’m in awe of this relationship that we’ve had since the first Japanese students were at MIT,” Pilcavage says. “We’ve had this long connection. It shows that MIT as an Institute is always innovating. Each side had much to gain, from Honma coming to MIT, learning technology, and returning to Japan, while also mentoring other students, including Dan.”
Much of the research was facilitated by MIT Libraries and its Distinctive Collections holdings, which contain the archives used for the project. Amanda Hawk, who is the public services manager in the library system, worked with Nagahara to facilitate the research by the class.
“Distinctive Collections is excited to support faculty and student projects related to MIT history, particularly those that illuminate unknown stories or underrepresented communities,” Hawk says. “It was rewarding to collaborate with Hiromu on ‘From Samurai into Engineers’ to place these students within the context of Japanese history and the development of MIT.”
The fact that MIT had students from Japan as early as 1870 might seem improbable on both ends of this historical connection. MIT opened in 1861 but did not start offering classes until 1865. Still, it was rapidly recognized as a significant locus of technological knowledge. Meanwhile, the historic changes in Japan created a small pool of students willing to travel to Massachusetts for education.
“The birth of MIT in the 1860s coincides with a period of huge political, economic, and cultural upheaval in Japan,” Nagahara says. “It was a unique moment when there was both a desire to go overseas and a government willingness to let people go overseas.”
Overall, the experience of the Japanese students at MIT seems to have been fairly smooth from the start, enabling them to have a strong focus on scholarship.
“Honma seemed to have been quite well-received,” says Pilcavage, who wonders if Honma’s social status — he was occasionally called “prince” — contributed to that. Still, she notes, “He was invited to other people’s homes on Thanksgiving. It didn’t seem like he faced extreme prejudice. The community welcomed him.”
The three Japanese students featured in the exhibit wound up leading distinctive lives. While Honma became a celebrated engineer, Dan was an even higher-profile figure. At MIT, he studied mining engineering with Robert Hallowell Richards, husband of Ellen Swallow Richards, MIT’s first female student and instructor. After starting as a mining engineer at Mitsui in 1888, by 1914 he had become chair of the board of the Mitsui conglomerate. Dan even came back to visit MIT twice as a distinguished alumnus, in 1910 and 1921.
Dan was also a committed internationalist, who believed in cooperation among nations, in contrast to the rising nationalism often present in the 1920s and 1930s. In 1932, he was shockingly assassinated outside of Mitsui headquarters in Tokyo, a victim of nationalist terrorism. Robert Richards wrote that it was “one of those terrible things which no man in his senses can understand.”
Makino, for her part, led a much quieter life, and her status as an early student was only rediscovered in recent years by librarians working in MIT’s Distinctive Collections materials. After MIT, she returned to Japan and became a high school biology teacher in Tokyo. She also authored a textbook, “Physiology of Women.”
MIT archivists and students are continuing to research Makino’s life, and earlier this year also uncovered news articles written about her in New England newspapers while she was in the U.S. Nagahara hopes many people will continue researching MIT’s earliest Japanese students, including Sutejirō Fukuzawa, Class of 1888, the son of a well-known Japanese intellectual.
In so doing, we may gain more insight into the ways MIT, universities, and early students played concrete roles in ushering their countries into the new age. As Nagahara reflects about these students, “They’re witnessing both America and Japan become modern nation-states.”
And as Pilcavage notes, Honma’s status as a railroad builder “is symbolic. We continue to build bridges between our institution and Japan.”
MIT engineers make converting CO2 into useful products more practical
A new electrode design boosts the efficiency of electrochemical reactions that turn carbon dioxide into ethylene and other products.
As the world struggles to reduce greenhouse gas emissions, researchers are seeking practical, economical ways to capture carbon dioxide and convert it into useful products, such as transportation fuels, chemical feedstocks, or even building materials. But so far, such attempts have struggled to reach economic viability.
New research by engineers at MIT could lead to rapid improvements in a variety of electrochemical systems that are under development to convert carbon dioxide into a valuable commodity. The team developed a new design for the electrodes used in these systems, which increases the efficiency of the conversion process.
The findings are reported today in the journal Nature Communications, in a paper by MIT doctoral student Simon Rufer, professor of mechanical engineering Kripa Varanasi, and three others.
“The CO2 problem is a big challenge for our times, and we are using all kinds of levers to solve and address this problem,” Varanasi says. It will be essential to find practical ways of removing the gas, he says, either from sources such as power plant emissions, or straight out of the air or the oceans. But then, once the CO2 has been removed, it has to go somewhere.
A wide variety of systems have been developed for converting that captured gas into a useful chemical product, Varanasi says. “It’s not that we can’t do it — we can do it. But the question is how can we make this efficient? How can we make this cost-effective?”
In the new study, the team focused on the electrochemical conversion of CO2 to ethylene, a widely used chemical that can be made into a variety of plastics as well as fuels, and which today is made from petroleum. But the approach they developed could also be applied to producing other high-value chemical products, including methane, methanol, and carbon monoxide, the researchers say.
Currently, ethylene sells for about $1,000 per ton, so the goal is to be able to meet or beat that price. The electrochemical process that converts CO2 into ethylene involves a water-based solution and a catalyst material, which come into contact along with an electric current in a device called a gas diffusion electrode.
There are two competing characteristics of the gas diffusion electrode materials that affect their performance: They must be good electrical conductors so that the current that drives the process doesn’t get wasted through resistance heating, but they must also be “hydrophobic,” or water repelling, so the water-based electrolyte solution doesn’t leak through and interfere with the reactions taking place at the electrode surface.
Unfortunately, it’s a tradeoff. Improving the conductivity reduces the hydrophobicity, and vice versa. Varanasi and his team set out to see if they could find a way around that conflict, and after many months of work, they did just that.
The solution, devised by Rufer and Varanasi, is elegant in its simplicity. They used PTFE (essentially Teflon), a plastic long known for its hydrophobic properties. However, PTFE’s lack of conductivity means that electrons must travel through a very thin catalyst layer, leading to a significant voltage drop with distance. To overcome this limitation, the researchers wove a series of conductive copper wires through the thin sheet of PTFE.
“This work really addressed this challenge, as we can now get both conductivity and hydrophobicity,” Varanasi says.
Research on potential carbon conversion systems tends to be done on very small, lab-scale samples, typically less than 1-inch (2.5-centimeter) squares. To demonstrate the potential for scaling up, Varanasi’s team produced a sheet 10 times larger in area and demonstrated its effective performance.
To get to that point, they had to run some basic experiments that had apparently never been done before: testing electrodes of different sizes under otherwise identical conditions to analyze the relationship between conductivity and electrode size. They found that conductivity dropped off dramatically with size, which would mean much more energy, and thus cost, would be needed to drive the reaction.
“That’s exactly what we would expect, but it was something that nobody had really dedicatedly investigated before,” Rufer says. In addition, the larger sizes produced more unwanted chemical byproducts besides the intended ethylene.
Real-world industrial applications would require electrodes that are perhaps 100 times larger than the lab versions, so adding the conductive wires will be necessary for making such systems practical, the researchers say. They also developed a model that captures the spatial variability in voltage and product distribution on electrodes due to ohmic losses. The model, along with the experimental data they collected, enabled them to calculate the optimal spacing for conductive wires to counteract the drop-off in conductivity.
In effect, by weaving the wire through the material, the material is divided into smaller subsections determined by the spacing of the wires. “We split it into a bunch of little subsegments, each of which is effectively a smaller electrode,” Rufer says. “And as we’ve seen, small electrodes can work really well.”
Because the copper wire is so much more conductive than the PTFE material, it acts as a kind of superhighway for electrons passing through, bridging the areas where they are confined to the substrate and face greater resistance.
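To get a feel for why wire spacing matters, the following back-of-the-envelope sketch in Python treats the catalyst layer between two neighboring wires as a resistive sheet with uniform current generation, a textbook ohmic estimate rather than the team’s actual model. The sheet resistance, operating current density, and tolerable voltage drop are illustrative assumptions, not values from the paper.

```python
# Minimal 1D ohmic sketch (illustrative assumptions, not the authors' model):
# current generated uniformly across a thin catalyst layer flows laterally to
# the nearest copper wire, producing a voltage drop that grows with spacing.

import math

SHEET_RESISTANCE = 20.0    # ohm per square, assumed for the thin catalyst layer
CURRENT_DENSITY = 1000.0   # A/m^2 (100 mA/cm^2), assumed operating point
ALLOWED_DROP = 0.05        # V, assumed tolerable ohmic loss between wires

def midpoint_drop(spacing_m: float) -> float:
    """Worst-case ohmic drop, halfway between two wires spaced `spacing_m` apart."""
    return CURRENT_DENSITY * SHEET_RESISTANCE * spacing_m ** 2 / 8.0

def max_spacing(allowed_v: float) -> float:
    """Largest wire spacing that keeps the midpoint drop below `allowed_v`."""
    return math.sqrt(8.0 * allowed_v / (CURRENT_DENSITY * SHEET_RESISTANCE))

for spacing_mm in (1, 2, 5, 10):
    print(f"spacing {spacing_mm:>2} mm -> midpoint drop {midpoint_drop(spacing_mm / 1000):.3f} V")

print(f"max spacing for a {ALLOWED_DROP} V drop: {max_spacing(ALLOWED_DROP) * 1000:.1f} mm")
```

Because the worst-case drop in this toy model scales with the square of the spacing, halving the distance between wires cuts the ohmic loss by roughly a factor of four, the same effect the researchers get by subdividing the electrode into small, well-connected subsegments.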
To demonstrate that their system is robust, the researchers ran a test electrode for 75 hours continuously, with little change in performance. Overall, Rufer says, their system “is the first PTFE-based electrode which has gone beyond the lab scale on the order of 5 centimeters or smaller. It’s the first work that has progressed into a much larger scale and has done so without sacrificing efficiency.”
The weaving process for incorporating the wire can be easily integrated into existing manufacturing processes, even in a large-scale roll-to-roll process, he adds.
“Our approach is very powerful because it doesn’t have anything to do with the actual catalyst being used,” Rufer says. “You can sew this micrometric copper wire into any gas diffusion electrode you want, independent of catalyst morphology or chemistry. So, this approach can be used to scale anybody’s electrode.”
“Given that we will need to process gigatons of CO2 annually to combat the CO2 challenge, we really need to think about solutions that can scale,” Varanasi says. “Starting with this mindset enables us to identify critical bottlenecks and develop innovative approaches that can make a meaningful impact in solving the problem. Our hierarchically conductive electrode is a result of such thinking.”
The research team included MIT graduate students Michael Nitzsche and Sanjay Garimella, as well as Jack Lake PhD ’23. The work was supported by Shell, through the MIT Energy Initiative.
This work was carried out, in part, through the use of MIT.nano facilities.
When muscles work out, they help neurons to grow, a new study shows
The findings suggest that biochemical and physical effects of exercise could help heal nerves.
There’s no doubt that exercise does a body good. Regular activity not only strengthens muscles but can bolster our bones, blood vessels, and immune system.
Now, MIT engineers have found that exercise can also have benefits at the level of individual neurons. They observed that when muscles contract during exercise, they release a soup of biochemical signals called myokines. In the presence of these muscle-generated signals, neurons grew four times farther compared to neurons that were not exposed to myokines. These cellular-level experiments suggest that exercise can have a significant biochemical effect on nerve growth.
Surprisingly, the researchers also found that neurons respond not only to the biochemical signals of exercise but also to its physical impacts. The team observed that when neurons are repeatedly pulled back and forth, similarly to how muscles contract and expand during exercise, the neurons grow just as much as when they are exposed to a muscle’s myokines.
While previous studies have indicated a potential biochemical link between muscle activity and nerve growth, this study is the first to show that physical effects can be just as important, the researchers say. The results, which are published today in the journal Advanced Healthcare Materials, shed light on the connection between muscles and nerves during exercise, and could inform exercise-related therapies for repairing damaged and deteriorating nerves.
“Now that we know this muscle-nerve crosstalk exists, it can be useful for treating things like nerve injury, where communication between nerve and muscle is cut off,” says Ritu Raman, the Eugene Bell Career Development Assistant Professor of Mechanical Engineering at MIT. “Maybe if we stimulate the muscle, we could encourage the nerve to heal, and restore mobility to those who have lost it due to traumatic injury or neurodegenerative diseases.”
Raman is the senior author of the new study, which includes Angel Bu, Ferdows Afghah, Nicolas Castro, Maheera Bawa, Sonika Kohli, Karina Shah, and Brandon Rios of MIT’s Department of Mechanical Engineering, and Vincent Butty of MIT’s Koch Institute for Integrative Cancer Research.
Muscle talk
In 2023, Raman and her colleagues reported that they could restore mobility in mice that had experienced a traumatic muscle injury, by first implanting muscle tissue at the site of injury, then exercising the new tissue by stimulating it repeatedly with light. Over time, they found that the exercised graft helped mice to regain their motor function, reaching activity levels comparable to those of healthy mice.
When the researchers analyzed the graft itself, it appeared that regular exercise stimulated the grafted muscle to produce certain biochemical signals that are known to promote nerve and blood vessel growth.
“That was interesting because we always think that nerves control muscle, but we don’t think of muscles talking back to nerves,” Raman says. “So, we started to think stimulating muscle was encouraging nerve growth. And people replied that maybe that’s the case, but there’s hundreds of other cell types in an animal, and it’s really hard to prove that the nerve is growing more because of the muscle, rather than the immune system or something else playing a role.”
In their new study, the team set out to determine whether exercising muscles has any direct effect on how nerves grow, by focusing solely on muscle and nerve tissue. The researchers grew mouse muscle cells into long fibers that then fused to form a small sheet of mature muscle tissue about the size of a quarter.
The team genetically modified the muscle to contract in response to light. With this modification, the team could flash a light repeatedly, causing the muscle to squeeze in response, in a way that mimicked the act of exercise. Raman had previously developed a novel gel mat on which to grow and exercise muscle tissue; its properties allow it to support the tissue and keep it from peeling away as the muscle is repeatedly stimulated to exercise.
The team then collected samples of the surrounding solution in which the muscle tissue was exercised, thinking that the solution should hold myokines, including growth factors, RNA, and a mix of other proteins.
“I would think of myokines as a biochemical soup of things that muscles secrete, some of which could be good for nerves and others that might have nothing to do with nerves,” Raman says. “Muscles are pretty much always secreting myokines, but when you exercise them, they make more.”
“Exercise as medicine”
The team transferred the myokine solution to a separate dish containing motor neurons — nerves found in the spinal cord that control muscles involved in voluntary movement. The researchers grew the neurons from stem cells derived from mice. As with the muscle tissue, the neurons were grown on a similar gel mat. After the neurons were exposed to the myokine mixture, the team observed that they quickly began to grow, four times faster than neurons that did not receive the biochemical solution.
“They grow much farther and faster, and the effect is pretty immediate,” Raman notes.
For a closer look at how neurons changed in response to the exercise-induced myokines, the team ran a genetic analysis, extracting RNA from the neurons to see whether the myokines induced any change in the expression of certain neuronal genes.
“We saw that many of the genes up-regulated in the exercise-stimulated neurons were related not only to neuron growth, but also to neuron maturation, how well they talk to muscles and other nerves, and how mature the axons are,” Raman says. “Exercise seems to impact not just neuron growth but also how mature and well-functioning they are.”
The results suggest that biochemical effects of exercise can promote neuron growth. Then the group wondered: Could exercise’s purely physical impacts have a similar benefit?
“Neurons are physically attached to muscles, so they are also stretching and moving with the muscle,” Raman says. “We also wanted to see, even in the absence of biochemical cues from muscle, could we stretch the neurons back and forth, mimicking the mechanical forces (of exercise), and could that have an impact on growth as well?”
To answer this, the researchers grew a different set of motor neurons on a gel mat that they embedded with tiny magnets. They then used an external magnet to jiggle the mat — and the neurons — back and forth. In this way, they “exercised” the neurons, for 30 minutes a day. To their surprise, they found that this mechanical exercise stimulated the neurons to grow just as much as the myokine-induced neurons, growing significantly farther than neurons that received no form of exercise.
“That’s a good sign because it tells us both biochemical and physical effects of exercise are equally important,” Raman says.
Now that the group has shown that exercising muscle can promote nerve growth at the cellular level, they plan to study how targeted muscle stimulation can be used to grow and heal damaged nerves, and restore mobility for people who are living with a neurodegenerative disease such as ALS.
“This is just our first step toward understanding and controlling exercise as medicine,” Raman says.
Admir Masic: Using lessons from the past to build a better future
The associate professor of civil and environmental engineering studies ancient materials while working to solve modern problems.
As a teenager living in a small village in what was then Yugoslavia, Admir Masic witnessed the collapse of his home country and the outbreak of the Bosnian war. When his childhood home was destroyed by a tank, his family was forced to flee the violence, leaving their remaining possessions to enter a refugee camp in northern Croatia.
It was in Croatia that Masic found what he calls his “magic.”
“Chemistry really forcefully entered my life,” recalls Masic, who is now an associate professor in MIT’s Department of Civil and Environmental Engineering. “I’d leave school to go back to my refugee camp, and you could either play ping-pong or do chemistry homework, so I did a lot of homework, and I began to focus on the subject.”
Masic has never let go of his magic. Long after chemistry led him out of Croatia, he’s come to understand that the past holds crucial lessons for building a better future. That’s why he started the MIT Refugee Action Hub (now MIT Emerging Talent) to provide educational opportunities to students displaced by war. It’s also what led him to study ancient materials, whose secrets he believes have potential to solve some of the modern world’s most pressing problems.
“We’re leading this concept of paleo-inspired design: that there are some ideas behind these ancient materials that are useful today,” Masic says. “We should think of these materials as a source of valuable information that we can try to translate to today. These concepts have the potential to revolutionize how we think about these materials.”
One key research focus for Masic is cement. His lab is working on ways to transform the ubiquitous material into a carbon sink, a medium for energy storage, and more. Part of that work involves studying ancient Roman concrete, whose self-healing properties he has helped to illuminate.
At the core of each of Masic’s research endeavors is a desire to translate a better understanding of materials into improvements in how we make things around the world.
“Roman concrete to me is fascinating: It’s still standing after all this time and constantly repairing,” Masic says. “It’s clear there’s something special about this material, so what is it? Can we translate part of it into modern analogues? That’s what I love about MIT. We are put in a position to do cutting-edge research and then quickly translate that research into the real world. Impact for me is everything.”
Finding a purpose
Masic’s family fled to Croatia in 1992, just as he was set to begin high school. Despite excellent grades, Masic was told Bosnian refugees couldn't enroll in the local school. It was only after a school psychologist advocated for Masic that he was allowed to sit in on classes as a nonmatriculating student.
Masic did his best to be a ghost in the back of classrooms, silently absorbing everything he could. But in one subject he stood out. Within six months of joining the school, in January of 1993, a teacher suggested Masic compete in a local chemistry competition.
“It was kind of the Olympiads of chemistry, and I won,” Masic recalls. “I literally floated onto the stage. It was this ‘Aha’ moment. I thought, ‘Oh my god, I’m good at chemistry!’”
In 1994, Masic’s parents immigrated to Germany in search of a better life, but he decided to stay behind to finish high school, moving into a friend’s basement and receiving food and support from local families as well as a group of volunteers from Italy.
“I just knew I had to stay,” Masic says. “With all the highs and lows of life to that point, I knew I had this talent and I had to make the most of it. I realized early on that knowledge was the one thing no one could take away from me.”
Masic continued competing in chemistry competitions — and continued winning. Eventually, after a change to a national law, the high school he was attending agreed to give him a diploma. With the help of the Italian volunteers, he moved to Italy to attend the University of Turin, where he entered a five-year joint program that earned him a master’s degree in inorganic chemistry. Masic stayed at the university for his PhD, where he studied parchment, a writing material that’s been used for centuries to record some of humanity’s most sacred texts.
With a classmate, Masic started a company that helped restore ancient documents. The work took him to Germany to work on a project studying the Dead Sea Scrolls, a set of manuscripts that date as far back as the third century BCE. In 2008, Masic joined the Max Planck Institute in Germany, where he also began to work with biological materials, studying water’s interaction with collagen at the nanoscale.
Through that work, Masic became an expert in Raman spectroscopy, a type of chemical imaging that uses lasers to record the vibrations of molecules without leaving a trace, which he still uses to characterize materials.
“Raman became a tool for me to contribute in the field of biological materials and bioinspired materials,” Masic says. “At the same time, I became the ‘Raman guy.’ It was a remarkable period for me professionally, as these tools provided unparalleled information and I published a lot of papers.”
After seven years at Max Planck, Masic joined the Department of Civil and Environmental Engineering (CEE) at MIT.
“At MIT, I felt I could truly be myself and define the research I wanted to do,” Masic says. “Especially in CEE, I could connect my work in heritage science and this tool, Raman spectroscopy, to tackle our society’s big challenges.”
From labs to the world
Raman spectroscopy is a relatively new approach to studying cement, a material that contributes significantly to carbon dioxide emissions worldwide. At MIT, Masic has explored ways cement could be used to store carbon dioxide and act as an energy-storing supercapacitor. He has also solved ancient mysteries about the lasting strength of ancient Roman concrete, with lessons for the $400 billion cement industry today.
“We really don’t think we should replace ordinary Portland cement completely, because it’s an extraordinary material that everyone knows how to work with, and industry produces so much of it. We need to introduce new functionalities into our concrete that will compensate for cement’s sustainability issues through avoided emissions,” Masic explains. “The concept we call ‘multifunctional concrete’ was inspired by our work with biological materials. Bones, for instance, sacrifice mechanical performance to be able to do things like self-healing and energy storage. That’s how you should imagine construction over the next 10 or 20 years. There could be concrete columns and walls that primarily offer support but also do things like store energy and continuously repair themselves.”
Masic’s work across academia and industry allows him to apply his multifunctional concrete research at scale. He serves as a co-director of the MIT ec3 hub, a principal investigator within the MIT Concrete Sustainability Hub, and a co-founder and advisor at the technology development company DMAT.
“It’s great to be at the forefront of sustainability but also to be directly interacting with key industry players that can change the world,” Masic says. “What I appreciate about MIT is how you can engage in fundamental science and engineering while also translating that work into practical applications. The CSHub and ec3 hub are great examples of this. Industry is eager for us to develop solutions that they can help support.”
And Masic will never forget where he came from. He now lives in Somerville, Massachusetts, with his wife Emina, a fellow former refugee, and their son, Benjamin, and the family shares a deep commitment to supporting displaced and underserved communities. Seven years ago, Masic founded the MIT Refugee Action Hub (ReACT), which provides computer and data science education programs for refugees and displaced communities. Today thousands of refugees apply to the program every year, and graduates have gone on to successful careers at places like Microsoft and Meta. To further its reach, the ReACT program was recently absorbed by the Emerging Talent program at MIT Open Learning, where Masic is an executive faculty member.
“It’s really a life-changing experience for them,” Masic says. “It’s an amazing opportunity for MIT to nurture talented refugees around the world through this simple certification program. The more people we can involve, the more impact we will have on the lives of these truly underserved communities.”
Faces of MIT: Gene Keselman
At MIT, Keselman is a lecturer, executive director, managing director, and innovator. Additionally, he is a colonel in the Air Force Reserves, board director, and startup leader.
Gene Keselman wears a lot of hats. He is a lecturer at the MIT Sloan School of Management, the executive director of Mission Innovation Experimental (MIx), and managing director of MIT’s venture studio, Proto Ventures. Colonel in the Air Force Reserves at the Pentagon, board director, and startup leader are only a few of the titles and leadership positions Keselman has held. Now in his seventh year at MIT, his work as an innovator will impact the Institute for years to come.
Keselman and his family are refugees from the Soviet Union. To say that the United States opened its arms and took care of his family is something Keselman calls “an understatement.” Growing up, he felt both gratitude and the need to give back to the country that took in his family. Because of this, Keselman joined the U.S. Air Force after college. Originally, he thought he would spend a few years in the Air Force, earn money to attend graduate school, and leave. Instead, he found a sense of belonging in the military lifestyle.
Early on, Keselman was a nuclear operations officer for four years, watching over nuclear weapons in Wyoming; while it was not a glamorous job, it was a strategically important one. He then joined the intelligence community in Washington, working on special programs for space. Next, he became an acquisition and innovation generalist inside the Air Force, working his way up to the rank of colonel, working on an innovation team at the Pentagon. Meanwhile, Keselman started exploring what his nonmilitary entrepreneurial life could look like. He left active duty after 12 years, entered the reserves, and began his relationship with MIT as an MBA student at the MIT Sloan School of Management.
At MIT Sloan, Keselman met Fiona Murray, associate dean of innovation and inclusion, who took an interest in Keselman’s experience. When the position of executive director of the Innovation Initiative (a program launched by then-President L. Rafael Reif) became available, Murray and MIT.nano Director Vladimir Bulovic hired Keselman and became his managers and main collaborators. While he was unsure that he would be a natural inside academia, Keselman credits Murray and Bulovic with seeing that his skill set from working with the Department of Defense (DoD) and in the military could translate and be useful in academia.
As a military officer, Keselman focused on process, innovation, leadership, and team building — tools he found useful in his new position. Over the next five years at MIT — a place, he admits, that was already at the forefront of innovation — he ran and created programs that augment how the Institute’s cutting-edge research is shared with the world. When the Innovation Initiative became the Office of Innovation, Keselman handed off executive duties to his deputy. Today, he oversees two programs. The first, MIx, focuses on national security innovation, defense technology, and dual-use (creating a commercial product and a capability for the government or defense). The other, Proto Ventures, is centered around venture building and translation of research.
With MIx and Proto Ventures established, it was time to build a teaching component for students interested in working for a startup that the government might want to partner with and learn from. Keselman becoming a lecturer at Sloan seemed like a clear next step. What started as a hackathon for MIT Air Force, Army, and Navy ROTC students to introduce the special operations community to those who were planning to become military officers turned into a class open to all undergrad and graduate students. Keselman co-teaches innovation engineering for global security systems, a design/build class in collaboration with U.S. Special Operations Command, where students learn to build innovative solutions in response to global security problems. Students who do not plan to work for the government enroll because of their desire to work on the most interesting — and difficult — problems in the world. Enrollment in these courses sometimes changes the career trajectory of students who decide they would like to work on national security-related problems in the future. While teaching was not an initial part of his plan, the opportunity to teach has become one of his joys.
Soundbytes
Q: What project brings you the most pride?
Keselman: Proto Ventures is probably what I will look back on as having made the most impact on MIT. I’m proud that I’ve continued to sustain it. Building a venture studio inside MIT is unique and is not replicated anywhere.
I’m also really proud of our work with North Atlantic Treaty Organization (NATO) Defence Innovation Accelerator for the North Atlantic (DIANA). DIANA is NATO’s effort to start its own accelerator program for startups to encourage them to work on solving national security questions in their country, based on the model at MIT. We built the curriculum, and I’ve taught it to DIANA startups in places including Italy, Poland, Denmark, and Estonia. The fact that NATO recognized that we need to promote access to startups and that there is a need to create an accelerator network is amazing. When it started, MIT was probably one of the only places teaching dual-use in the country. The fact that I got to take this curriculum and build it to scale in 32 countries and hundreds of startups is really rewarding.
Q: In recognition of their service to our country, MIT actively seeks to recruit and employ veterans throughout its workforce. As a reservist, how does MIT support the time you take away from the Institute to fulfill your duties?
Keselman: MIT has a long history with the military, especially back in WWII times. With that comes a deep history of supporting the military. When I came to MIT I found a welcoming community that enables me to run centers, teach, and have students work on problems brought to us by the government. The magical thing about MIT is an openness to collaboration.
[At MIT,] Being an officer in the reserves is seen as a benefit, not a distraction. No one says, “He's gone again for his military duties at the Pentagon. He's not doing his work.” Instead, my work is viewed as an advantage for the Institute. MIT is a special place for the veteran and military community.
Q: A Veteran and Military Employee Resource Group (ERG) was recently launched at MIT. What do you hope will come from the ERG?
Keselman: The ERG once again underscores the uniqueness of MIT. Recruiter Nicolette Clifford from Human Resources and I had the idea for the group, but I thought, “Would anyone want this?” The reception from MIT Human Resources was positive and reinforcing. To put veterans and military into a supported group and make them feel like they have a home is amazing. I was blown away by it. We don’t usually get this kind of treatment. People thank us for our service, but then move on. It sends a message that MIT is a very friendly place for veterans. It also shows that MIT supports the people that defend our national security and support our way of life.
As a major contributor to global carbon dioxide (CO2) emissions, the transportation sector has immense potential to advance decarbonization. However, a zero-emissions global supply chain requires re-imagining reliance on a heavy-duty trucking industry that emits 810,000 tons of CO2, or 6 percent of the United States’ greenhouse gas emissions, and consumes 29 billion gallons of diesel annually in the U.S. alone.
A new study by MIT researchers, presented at the recent American Society of Mechanical Engineers 2024 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, quantifies the impact of a zero-emission truck’s design range on its energy storage requirements and operational revenue. The multivariable model outlined in the paper allows fleet owners and operators to better understand the design choices that impact the economic feasibility of battery-electric and hydrogen fuel cell heavy-duty trucks for commercial application, equipping stakeholders to make informed fleet transition decisions.
“The whole issue [of decarbonizing trucking] is like a very big, messy pie. One of the things we can do, from an academic standpoint, is quantify some of those pieces of pie with modeling, based on information and experience we’ve learned from industry stakeholders,” says ZhiYi Liang, PhD student on the renewable hydrogen team at the MIT K. Lisa Yang Global Engineering and Research Center (GEAR) and lead author of the study. Co-authored by Bryony DuPont, visiting scholar at GEAR, and Amos Winter, the Germeshausen Professor in the MIT Department of Mechanical Engineering, the paper elucidates operational and socioeconomic factors that need to be considered in efforts to decarbonize heavy-duty vehicles (HDVs).
Operational and infrastructure challenges
The team’s model shows that a technical challenge lies in the amount of energy that needs to be stored on the truck to meet the range and towing performance needs of commercial trucking applications. Due to the high energy density and low cost of diesel, existing diesel drivetrains remain more competitive than alternative lithium battery-electric vehicle (Li-BEV) and hydrogen fuel-cell-electric vehicle (H2 FCEV) drivetrains. Although Li-BEV drivetrains have the highest energy efficiency of all three, they are limited to short-to-medium range routes (under 500 miles) with low freight capacity, due to the weight and volume of the onboard energy storage needed. In addition, the authors note that existing electric grid infrastructure will need significant upgrades to support large-scale deployment of Li-BEV HDVs.
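The study’s multivariable model accounts for far more than this, but a simple sketch shows why onboard energy storage dominates the comparison. The per-mile energy demand, drivetrain efficiencies, and usable specific energies below are rough assumed figures for illustration only, not numbers from the paper.

```python
# Illustrative sketch (assumed parameters, not the authors' model): rough
# onboard-energy and storage-mass estimates for a heavy-duty truck at a
# given design range.

DESIGN_RANGE_MILES = 500
ENERGY_AT_WHEELS_KWH_PER_MILE = 2.0  # assumed demand for a loaded Class 8 truck

# name: (storage-to-wheels efficiency, usable specific energy in kWh/kg
#        including tank or pack overhead) -- all assumed
DRIVETRAINS = {
    "diesel":  (0.40, 12.0),   # diesel fuel itself, roughly 12 kWh/kg
    "Li-BEV":  (0.85, 0.17),   # ~170 Wh/kg at the pack level
    "H2-FCEV": (0.50, 1.8),    # compressed-hydrogen system including tank mass
}

for name, (efficiency, specific_energy) in DRIVETRAINS.items():
    stored_kwh = DESIGN_RANGE_MILES * ENERGY_AT_WHEELS_KWH_PER_MILE / efficiency
    mass_kg = stored_kwh / specific_energy
    print(f"{name:8s}: ~{stored_kwh:6.0f} kWh stored, ~{mass_kg:6.0f} kg of storage")
```

With these assumed numbers, the battery pack needed for a 500-mile design range comes out several tonnes heavier than a diesel tank or a hydrogen system, which is the basic reason Li-BEV drivetrains get pushed toward shorter routes and lower freight capacity.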
While the hydrogen-powered drivetrain has a significant weight advantage that enables higher cargo capacity and routes over 750 miles, the current state of hydrogen fuel networks limits economic viability, especially once operational cost and projected revenue are taken into account. Deployment will most likely require government intervention in the form of incentives and subsidies to reduce the price of hydrogen by more than half, as well as continued investment by corporations to ensure a stable supply. Also, as H2-FCEVs are still a relatively new technology, the ongoing design of conformal onboard hydrogen storage systems — one of which is the subject of Liang’s PhD — is crucial to successful adoption into the HDV market.
The current efficiency of diesel systems is a result of technological developments and manufacturing processes established over many decades, a precedent that suggests similar strides can be made with alternative drivetrains. However, interactions with fleet owners, automotive manufacturers, and refueling network providers reveal another major hurdle: each “slice of the pie” is interrelated, and the issues must be addressed simultaneously because of how they affect each other, from renewable fuel infrastructure to technological readiness and the capital cost of new fleets, among other considerations. And first steps into an uncertain future, where no one sector is fully in control of potential outcomes, are inherently risky.
“Besides infrastructure limitations, we only have prototypes [of alternative HDVs] for fleet operator use, so the cost of procuring them is high, which means there isn’t demand for automakers to build manufacturing lines up to a scale that would make them economical to produce,” says Liang, describing just one step of a vicious cycle that is difficult to disrupt, especially for industry stakeholders trying to be competitive in a free market.
Quantifying a path to feasibility
“Folks in the industry know that some kind of energy transition needs to happen, but they may not necessarily know for certain what the most viable path forward is,” says Liang. Although there is no singular avenue to zero emissions, the new model provides a way to further quantify and assess at least one slice of pie to aid decision-making.
Other MIT-led efforts aimed at helping industry stakeholders navigate decarbonization include an interactive mapping tool developed by Danika MacDonell, Impact Fellow at the MIT Climate and Sustainability Consortium (MCSC); Florian Allroggen, executive director of MIT’s Zero Impact Aviation Alliance; and undergraduate researchers Micah Borrero, Helena De Figueiredo Valente, and Brooke Bao. The MCSC’s Geospatial Decision Support Tool supports strategic decision-making for fleet operators by allowing them to visualize regional freight flow densities, costs, emissions, planned and available infrastructure, and relevant regulations and incentives by region.
While current limitations reveal the need for joint problem-solving across sectors, the authors believe that stakeholders are motivated and ready to tackle climate problems together. Once-competing businesses already appear to be embracing a culture shift toward collaboration, with the recent agreement between General Motors and Hyundai to explore “future collaboration across key strategic areas,” including clean energy.
Liang believes that transitioning the transportation sector to zero emissions is just one part of an “energy revolution” that will require all sectors to work together, because “everything is connected. In order for the whole thing to make sense, we need to consider ourselves part of that pie, and the entire system needs to change,” he says. “You can’t make a revolution succeed by yourself.”
The authors acknowledge the MIT Climate and Sustainability Consortium for connecting them with industry members in the HDV ecosystem; and the MIT K. Lisa Yang Global Engineering and Research Center and MIT Morningside Academy for Design for financial support.
Startup turns mining waste into critical metals for the U.S.
Phoenix Tailings, co-founded by MIT alumni, is creating domestic supply chains for rare earth metals, key to the clean energy transition.
At the heart of the energy transition is a metal transition. Wind farms, solar panels, and electric cars require many times more copper, zinc, and nickel than their gas-powered alternatives. They also require more exotic metals with unique properties, known as rare earth elements, which are essential for the magnets that go into things like wind turbines and EV motors.
Today, China dominates the processing of rare earth elements, refining around 60 percent of those materials for the world. With demand for such materials forecasted to skyrocket, the Biden administration has said the situation poses national and economic security threats.
Substantial quantities of rare earth metals are sitting unused in the United States and many other parts of the world today. The catch is they’re mixed with vast quantities of toxic mining waste.
Phoenix Tailings is scaling up a process for harvesting materials, including rare earth metals and nickel, from mining waste. The company uses water and recyclable solvents to collect oxidized metal, then puts the metal into a heated molten salt mixture and applies electricity.
The company, co-founded by MIT alumni, says its pilot production facility in Woburn, Massachusetts, is the only site in the world producing rare earth metals without toxic byproducts or carbon emissions. The process does use electricity, but Phoenix Tailings currently offsets that with renewable energy contracts.
The company expects to produce more than 3,000 tons of the metals by 2026, which would have represented about 7 percent of total U.S. production last year.
Now, with support from the Department of Energy, Phoenix Tailings is expanding the list of metals it can produce and accelerating plans to build a second production facility.
For the founding team, including MIT graduates Tomás Villalón ’14 and Michelle Chao ’14 along with Nick Myers and Anthony Balladon, the work has implications for geopolitics and the planet.
“Being able to make your own materials domestically means that you’re not at the behest of a foreign monopoly,” Villalón says. “We’re focused on creating critical materials for the next generation of technologies. More broadly, we want to get these materials in ways that are sustainable in the long term.”
Tackling a global problem
Villalón got interested in chemistry and materials science after taking Course 3.091 (Introduction to Solid-State Chemistry) during his first year at MIT. In his senior year, he got a chance to work at Boston Metal, another MIT spinoff that uses an electrochemical process to decarbonize steelmaking at scale. The experience got Villalón, who majored in materials science and engineering, thinking about creating more sustainable metallurgical processes.
But it took a chance meeting with Myers at a 2018 Bible study for Villalón to act on the idea.
“We were discussing some of the major problems in the world when we came to the topic of electrification,” Villalón recalls. “It became a discussion about how the U.S. gets its materials and how we should think about electrifying their production. I was finally like, ‘I’ve been working in the space for a decade, let’s go do something about it.’ Nick agreed, but I thought he just wanted to feel good about himself. Then in July, he randomly called me and said, ‘I’ve got [$7,000]. When do we start?’”
Villalón brought in Chao, his former MIT classmate and fellow materials science and engineering major, and Myers brought Balladon, a former co-worker, and the founders started experimenting with new processes for producing rare earth metals.
“We went back to the base principles, the thermodynamics I learned with MIT professors Antoine Allanore and Donald Sadoway, and understanding the kinetics of reactions,” Villalón says. “Classes like Course 3.022 (Microstructural Evolution in Materials) and 3.07 (Introduction to Ceramics) were also really useful. I touched on every aspect I studied at MIT.”
The founders also received guidance from MIT’s Venture Mentoring Service (VMS) and went through the U.S. National Science Foundation’s I-Corps program. Sadoway served as an advisor for the company.
After drafting one version of their system design, the founders bought an experimental quantity of mining waste, known as red sludge, and set up a prototype reactor in Villalón’s backyard. The founders ended up with a small amount of product, but they had to scramble to borrow the scientific equipment needed to determine what exactly it was. It turned out to be a small amount of rare earth concentrate along with pure iron.
Today, at the company’s refinery in Woburn, Phoenix Tailings puts mining waste rich in rare earth metals into its molten salt mixture and heats it to around 1,300 degrees Fahrenheit. When it applies an electric current to the mixture, pure metal collects on an electrode. The process leaves minimal waste behind.
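For a rough sense of how applied current translates into metal, Faraday’s law of electrolysis sets an upper bound. The sketch below assumes neodymium reduced from a trivalent state at 100 percent current efficiency; the actual oxidation states, efficiencies, and cell currents in Phoenix Tailings’ process are not public, so the numbers are purely illustrative.

```python
# Faraday's-law estimate of the maximum metal deposited per unit of charge.
# Assumptions (not company data): Nd reduced from Nd3+, 100% current efficiency.

FARADAY = 96485.0          # coulombs per mole of electrons
MOLAR_MASS_ND = 144.24     # g/mol, neodymium
ELECTRONS_PER_ATOM = 3     # assumed trivalent ion in the molten salt

def grams_per_amp_hour(molar_mass: float, z: int) -> float:
    """Maximum metal mass deposited per amp-hour of charge passed."""
    charge = 3600.0                   # coulombs in one amp-hour
    moles = charge / (z * FARADAY)    # moles of metal ions reduced
    return moles * molar_mass

print(f"~{grams_per_amp_hour(MOLAR_MASS_ND, ELECTRONS_PER_ATOM):.2f} g of Nd per Ah (ideal)")
# A hypothetical 10,000 A cell would therefore top out near 18 kg of neodymium per hour.
```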
“The key for all of this isn’t just the chemistry, but how everything is linked together, because with rare earths, you have to hit really high purities compared to a conventionally produced metal,” Villalón explains. “As a result, you have to be thinking about the purity of your material the entire way through.”
From rare earths to nickel, magnesium, and more
Villalón says the process is economical compared to conventional production methods, produces no toxic byproducts, and is completely carbon free when renewable energy sources are used for electricity.
The Woburn facility is currently producing several rare earth elements for customers, including neodymium and dysprosium, which are important in magnets. Customers are using the materials for things like wind turbines, electric cars, and defense applications.
The company has also received two grants totaling more than $2 million from the U.S. Department of Energy’s ARPA-E program. Its 2023 grant supports the development of a system to extract nickel and magnesium from mining waste through a process that uses carbonization and recycled carbon dioxide. Both nickel and magnesium are critical materials for clean energy applications like batteries.
The most recent grant will help the company adapt its process to produce iron from mining waste without emissions or toxic byproducts. Phoenix Tailings says its process is compatible with a wide array of ore types and waste materials, and the company has plenty of material to work with: Mining and processing mineral ores generates about 1.8 billion tons of waste in the U.S. each year.
“We want to take our knowledge from processing the rare earth metals and slowly move it into other segments,” Villalón explains. “We simply have to refine some of these materials here. There’s no way we can’t. So, what does that look like from a regulatory perspective? How do we create approaches that are economical and environmentally compliant not just now, but 30 years from now?”
Connecting the US Coast Guard to MIT Sloan
For the past 50 years, the Coast Guard has nominated a senior officer to apply to the MIT Sloan Fellows MBA program. “When you leave MIT Sloan, you want to change the world,” says one alumnus.
Jim Ellis II SM ’80 first learned about a special opportunity for members of the U.S. Coast Guard while stationed in Alaska.
“My commander had received a notice from headquarters about this opportunity. They were asking for recommendations for an officer who might be interested,” says Ellis.
The opportunity in question was the MIT Sloan Fellows program, today known as the MIT Sloan Fellows MBA (SFMBA) program. Every year for 50 years, the Coast Guard has nominated a service member to apply to the program. Fifty Sloan Fellows and two Management of Technology participants have graduated since 1976, and the 53rd student is currently enrolled.
With his tour nearly over, Ellis followed his commander’s recommendation to apply. The Coast Guard nominated him, and his application to the MIT Sloan School of Management was accepted. In 1980, Ellis became the fifth Coast Guard Sloan Fellow to graduate through the special arrangement.
“My experience at MIT Sloan has been instrumental throughout my entire career,” says Ellis, who, with his wife Margaret Brady, designated half of their bequest to support graduate fellowships through the MIT Sloan Veterans Fund and half to establish the Ellis/Brady Family Fund to support the MIT Sloan Sustainability Initiative.
“The success of the people who have been through the program is a testament to why the Coast Guard continues the program,” he adds.
The desire to change the world
Throughout its 163-year history, MIT has maintained strong relationships with the U.S. military through programs like the MIT Reserve Officers' Training Corps, the 2N Graduate Program in Naval Architecture and Marine Engineering, and more.
The long-standing collaboration between MIT Sloan and the Coast Guard adds to this history. According to Johanna Hising DiFabio, assistant dean for executive degree programs at MIT Sloan, it demonstrates the Coast Guard’s dedication to leadership development, as well as the unique benefits MIT Sloan has to offer service members.
This is especially evident in the careers of the 52 Coast Guard Sloan Fellow alumni, many of whom the program often invites to speak to current students. “It is inspiring to hear our alumni reflect on how this education has significantly influenced their careers and the considerable impact they have had on the Coast Guard and the global community,” says DiFabio.
Captain Anne O’Connell MBA ’19 says, “It is very rewarding to be able to pay it back, to look for those officers coming up behind you who should absolutely be offered the same opportunities, and to help them chart that course. I think it's hugely important.”
One of the most notable Coast Guard Sloan Fellows is Retired Admiral Thad Allen SM ’89, who served as commandant of the Coast Guard from 2006 to 2010. One of the service’s youngest-ever flag officers, Allen is a figure beloved by current and former guardsmen. As commandant, he embraced new digital technologies, championed further arctic exploration, and solidified relations with the other armed services, federal partners, and private industry.
“When you leave MIT Sloan, you want to change the world,” says Allen.
Inspired by his father, who enlisted after the attack on Pearl Harbor, Allen attended the U.S. Coast Guard Academy and subsequently held various commands at sea and ashore during a career spanning four decades.
A few years before the end of his second decade of service, Allen learned about the Sloan Fellows Program through a service-wide solicitation. “The people I worked for believed this would be a great opportunity, and that it would match with my skill set,” says Allen. With the guidance of his senior captains, he applied to MIT Sloan.
Allen matriculated with a cohort whose members included Carly Fiorina SM ’89, former CEO of Hewlett-Packard; Daniel Hesse SM ’89, former CEO of Sprint; and Robert Malone SM ’89, former chair and president of BP America. Though he initially felt a sharp disconnect between his national service experience and their global private sector knowledge, Allen came to realize that the members of his cohort were becoming his peers.
Strong bonds with global perspectives
Like Allen, many of the Coast Guard Sloan Fellows acknowledge just how powerful their cohorts were when they matriculated, as well as how influential they have remained since.
“I have classmates with giant perspectives and unique expertise in places all over the world. It’s remarkable,” says Retired Commander Catherine Kang MBA ’06, who served as deputy of financial transformation for Allen.
The majority of SFMBA candidates come to Cambridge from around the world. For example, the 2023–24 cohort comprised 76 percent international citizens.
For Coast Guard Sloan Fellows with decades of domestic experience, their cohort’s global perspectives are as novel as they are informative. As Retired Captain Gregory Sanial SM ’07 explains, “We had students from 30 to 40 different countries, and I had the opportunity to learn a lot about different parts of the world and open up my mind to many different experiences.”
After the Coast Guard, Sanial pursued a doctoral degree in organizational leadership and a career in higher education that, professionally, has kept him stateside. Yet the bonds he built at MIT Sloan remain just as strong and as international as they were when he first arrived.
Many Coast Guard Sloan Fellows attribute this to the program’s focus on cooperation and social events.
“What impressed me most when I first got there were the team-building exercises, which made a difference in getting a group of diverse people to really gel and work together,” says Retired Captain Lisa Festa SM ’92, SM ’99. “MIT Sloan takes the time at the beginning to invest in you and to make sure you know the people you’re going through school with for the next year.”
The most recent Coast Guard Sloan Fellow alumnus, Commander Mark Ketchum MBA ’24, says his cohort’s connections are still fresh, but he believes they will last a lifetime. Considering the testimonies of his predecessors, this may very well be the case.
“My cohort made me stronger, and I would like to think that I imparted my strengths onto my classmates,” says Ketchum.
Big challenges with high impacts
Before earning the Coast Guard’s nomination and an acceptance letter from the SFMBA program, potential Sloan Fellows have already served in various leadership positions. Once they graduate, the recognition and distinction that come with an MIT Sloan degree are quick to follow.
So, too, are the more challenging leadership tracks.
After graduation, Allen served as deputy program manager for the Coast Guard’s shipbuilding program at the behest of the then-commandant. “For the agency head to say, ‘This is a bad problem, so I’m picking the next graduate from MIT Sloan,’ is indicative of the program’s cachet value,” he says. Allen then served in the office of budget and programs, a challenging and rewarding post that has become a hub for Coast Guard Sloan Fellows past, present, and future.
Among them are Rear Admiral Jason Tama MBA ’11 and Captain Brian Erickson MBA ’21, both of whom credit the office with instilling the vigorous work ethic necessary both for obtaining an MIT Sloan education and for becoming an effective leader.
“Never in a thousand years would I have gone on the resource management path until a mentor told me it would be one of the most challenging and high-impact things I could do,” says Tama. “You can never be fully prepared for the Sloan Fellows experience, but it can and will change you for the better. It changed the way I approach problems and challenges.”
“I owe MIT for the senior-level opportunities I’ve had in this organization, and I will probably owe them for some of the opportunities I may get in the future,” adds Erickson. “You should never, ever say no to this opportunity.”
From the early cohorts of Ellis, Allen, and Festa, to more recent alumni like O’Connell, Kang, and Ketchum, Coast Guard Sloan Fellows from the past half-century echo Erickson and Tama’s sentiments when asked about how MIT Sloan has changed them. Words like “challenge,” “opportunity,” and “impact” are used often and with purpose.
They believe joining the SFMBA program as up-and-coming senior leaders is an incredible opportunity for the individual and the Coast Guard, as well as the MIT community and the world at large.
“I am excited to see this tradition carry on,” says Tama. “I hope others who are considering it can see the potential and the value, not only for themselves, but for the Coast Guard as well.”
Participation by U.S. Coast Guard members in this highlight of prior MIT Sloan Fellows is not intended as, and does not constitute an endorsement of, the MIT Sloan Fellows MBA program or MIT by either the Department of Homeland Security or the U.S. Coast Guard.
A causal theory for studying the cause-and-effect relationships of genes
By sidestepping the need for costly interventions, a new method could potentially reveal gene regulatory programs, paving the way for targeted treatments.
By studying changes in gene expression, researchers learn how cells function at a molecular level, which could help them understand the development of certain diseases.
But a human has about 20,000 genes that can affect each other in complex ways, so even knowing which groups of genes to target is an enormously complicated problem. Also, genes work together in modules that regulate each other.
MIT researchers have now developed theoretical foundations for methods that could identify the best way to aggregate genes into related groups, so that scientists can efficiently learn the underlying cause-and-effect relationships among many genes.
Importantly, this new method accomplishes this using only observational data. This means researchers don’t need to perform costly, and sometimes infeasible, interventional experiments to obtain the data needed to infer the underlying causal relationships.
In the long run, this technique could help scientists identify potential gene targets to induce certain behavior in a more accurate and efficient manner, potentially enabling them to develop precise treatments for patients.
“In genomics, it is very important to understand the mechanism underlying cell states. But cells have a multiscale structure, so the level of summarization is very important, too. If you figure out the right way to aggregate the observed data, the information you learn about the system should be more interpretable and useful,” says graduate student Jiaqi Zhang, an Eric and Wendy Schmidt Center Fellow and co-lead author of a paper on this technique.
Zhang is joined on the paper by co-lead author Ryan Welch, currently a master’s student in engineering; and senior author Caroline Uhler, a professor in the Department of Electrical Engineering and Computer Science (EECS) and the Institute for Data, Systems, and Society (IDSS) who is also director of the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard, and a researcher at MIT’s Laboratory for Information and Decision Systems (LIDS). The research will be presented at the Conference on Neural Information Processing Systems.
Learning from observational data
The problem the researchers set out to tackle involves learning programs of genes. These programs describe which genes function together to regulate other genes in a biological process, such as cell development or differentiation.
Since scientists can’t efficiently study how all 20,000 genes interact, they use a technique called causal disentanglement to learn how to combine related groups of genes into a representation that allows them to efficiently explore cause-and-effect relationships.
In previous work, the researchers demonstrated how this could be done effectively in the presence of interventional data, which are data obtained by perturbing variables in the network.
But it is often expensive to conduct interventional experiments, and there are some scenarios where such experiments are either unethical or the technology is not good enough for the intervention to succeed.
With only observational data, researchers can’t compare genes before and after an intervention to learn how groups of genes function together.
“Most research in causal disentanglement assumes access to interventions, so it was unclear how much information you can disentangle with just observational data,” Zhang says.
The MIT researchers developed a more general approach that uses a machine-learning algorithm to effectively identify and aggregate groups of observed variables, e.g., genes, using only observational data.
They can use this technique to identify causal modules and reconstruct an accurate underlying representation of the cause-and-effect mechanism. “While this research was motivated by the problem of elucidating cellular programs, we first had to develop novel causal theory to understand what could and could not be learned from observational data. With this theory in hand, in future work we can apply our understanding to genetic data and identify gene modules as well as their regulatory relationships,” Uhler says.
A layerwise representation
Using statistical techniques, the researchers can compute a quantity known as the variance of the Jacobian of each variable’s score. Causal variables that don’t affect any subsequent variables should have a variance of zero.
The researchers reconstruct the representation in a layer-by-layer structure, starting by removing the variables in the bottom layer that have a variance of zero. Then they work backward, layer-by-layer, removing the variables with zero variance to determine which variables, or groups of genes, are connected.
“Identifying the variances that are zero quickly becomes a combinatorial objective that is pretty hard to solve, so deriving an efficient algorithm that could solve it was a major challenge,” Zhang says.
In the end, their method outputs an abstracted representation of the observed data with layers of interconnected variables that accurately summarizes the underlying cause-and-effect structure.
Each variable represents an aggregated group of genes that function together, and the relationship between two variables represents how one group of genes regulates another. Their method effectively captures all the information used in determining each layer of variables.
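To make the procedure concrete, here is a minimal sketch of the layer-by-layer peeling idea in Python. It is an illustration only, not the authors’ code: the helper `score_jacobian_variance` is a hypothetical stand-in for a routine that estimates, for each remaining variable, the variance of the relevant entries of the Jacobian of the estimated score (the gradient of the log-density).

```python
# Minimal sketch of layer-by-layer peeling from observational data only.
# Assumption (not the paper's implementation): score_jacobian_variance(X, remaining)
# returns one variance estimate per remaining variable; variables with (near-)zero
# variance have no downstream effects and form the current bottom layer.
import numpy as np

def peel_layers(X, score_jacobian_variance, tol=1e-6):
    """Group variables into layers, from the sinks (bottom) upward."""
    remaining = list(range(X.shape[1]))
    layers = []
    while remaining:
        variances = score_jacobian_variance(X, remaining)   # aligned with `remaining`
        layer = [v for v, s in zip(remaining, variances) if s < tol]
        if not layer:              # nothing left to peel; the rest form the top layer
            layers.append(remaining)
            break
        layers.append(layer)
        remaining = [v for v in remaining if v not in layer]
    return layers   # layers[0] holds sink variables; later entries sit higher in the graph
```

Applied to observational gene-expression data, the returned layers would correspond to the nested groups of variables that the method aggregates into causal modules.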
After proving that their technique was theoretically sound, the researchers conducted simulations to show that the algorithm can efficiently disentangle meaningful causal representations using only observational data.
In the future, the researchers want to apply this technique in real-world genetics applications. They also want to explore how their method could provide additional insights in situations where some interventional data are available, or help scientists understand how to design effective genetic interventions. Eventually, this method could help researchers more efficiently determine which genes function together in the same program, which could help identify drugs that could target those genes to treat certain diseases.
This research is funded, in part, by the U.S. Office of Naval Research, the National Institutes of Health, the U.S. Department of Energy, a Simons Investigator Award, the Eric and Wendy Schmidt Center at the Broad Institute, the Advanced Undergraduate Research Opportunities Program at MIT, and an Apple AI/ML PhD Fellowship.
Neuroscientists create a comprehensive map of the cerebral cortex
Using fMRI, the research team identified 24 networks that perform specific functions within the brain’s cerebral cortex.
By analyzing brain scans taken as people watched movie clips, MIT researchers have created the most comprehensive map yet of the functions of the brain’s cerebral cortex.
Using functional magnetic resonance imaging (fMRI) data, the research team identified 24 networks with different functions, which include processing language, social interactions, visual features, and other types of sensory input.
Many of these networks have been seen before but haven’t been precisely characterized under naturalistic conditions. While the new study mapped networks in subjects watching engaging movies, previous studies have used a small number of specific tasks or examined correlations across the brain in subjects who were simply resting.
“There’s an emerging approach in neuroscience to look at brain networks under more naturalistic conditions. This is a new approach that reveals something different from conventional approaches in neuroimaging,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research. “It’s not going to give us all the answers, but it generates a lot of interesting ideas based on what we see going on in the movies that's related to these network maps that emerge.”
The researchers hope that their new map will serve as a starting point for further study of what each of these networks is doing in the brain.
Desimone and John Duncan, a program leader in the MRC Cognition and Brain Sciences Unit at Cambridge University, are the senior authors of the study, which appears today in Neuron. Reza Rajimehr, a research scientist in the McGovern Institute and a former graduate student at Cambridge University, is the lead author of the paper.
Precise mapping
The cerebral cortex of the brain contains regions devoted to processing different types of sensory information, including visual and auditory input. Over the past few decades, scientists have identified many networks that are involved in this kind of processing, often using fMRI to measure brain activity as subjects perform a single task such as looking at faces.
In other studies, researchers have scanned people’s brains as they do nothing, or let their minds wander. From those studies, researchers have identified networks such as the default mode network, a network of areas that is active during internally focused activities such as daydreaming.
“Up to now, most studies of networks were based on doing functional MRI in the resting-state condition. Based on those studies, we know some main networks in the cortex. Each of them is responsible for a specific cognitive function, and they have been highly influential in the neuroimaging field,” Rajimehr says.
However, during the resting state, many parts of the cortex may not be active at all. To gain a more comprehensive picture of what all these regions are doing, the MIT team analyzed data recorded while subjects performed a more natural task: watching a movie.
“By using a rich stimulus like a movie, we can drive many regions of the cortex very efficiently. For example, sensory regions will be active to process different features of the movie, and high-level areas will be active to extract semantic information and contextual information,” Rajimehr says. “By activating the brain in this way, now we can distinguish different areas or different networks based on their activation patterns.”
The data for this study were generated as part of the Human Connectome Project. Using a 7-Tesla MRI scanner, which offers higher resolution than a typical MRI scanner, researchers imaged brain activity in 176 people as they watched one hour of movie clips showing a variety of scenes.
The MIT team used a machine-learning algorithm to analyze the activity patterns of each brain region, allowing them to identify 24 networks with different activity patterns and functions.
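As a schematic illustration of this kind of analysis (an assumption for exposition; the study’s specific algorithm is not reproduced here), one could cluster cortical regions by the similarity of their activity time courses during the movie, as in the Python sketch below, which uses random placeholder data:

```python
# Illustrative only: group regions into networks by the similarity of their
# activity time courses. Real analyses would use measured fMRI responses;
# here the data are random placeholders, so the clusters are meaningless.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_regions, n_timepoints = 360, 900                 # placeholder sizes: regions x time points
activity = rng.normal(size=(n_regions, n_timepoints))

# Regions whose responses rise and fall together end up in the same cluster.
labels = KMeans(n_clusters=24, n_init=10, random_state=0).fit_predict(activity)
print({k: int((labels == k).sum()) for k in range(24)})   # regions assigned to each network
```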
Some of these networks are located in sensory areas such as the visual cortex or auditory cortex, as expected for regions with specific sensory functions. Other areas respond to features such as actions, language, or social interactions. Many of these networks have been seen before, but this technique offers more precise definition of where the networks are located, the researchers say.
“Different regions are competing with each other for processing specific features, so when you map each function in isolation, you may get a slightly larger network because it is not getting constrained by other processes,” Rajimehr says. “But here, because all the areas are considered together, we are able to define more precise boundaries between different networks.”
The researchers also identified networks that hadn’t been seen before, including one in the prefrontal cortex, which appears to be highly responsive to visual scenes. This network was most active in response to pictures of scenes within the movie frames.
Executive control networks
Three of the networks found in this study are involved in “executive control,” and were most active during transitions between different clips. The researchers also observed that these control networks appear to have a “push-pull” relationship with networks that process specific features such as faces or actions. When networks specific to a particular feature were very active, the executive control networks were mostly quiet, and vice versa.
“Whenever the activations in domain-specific areas are high, it looks like there is no need for the engagement of these high-level networks,” Rajimehr says. “But in situations where perhaps there is some ambiguity and complexity in the stimulus, and there is a need for the involvement of the executive control networks, then we see that these networks become highly active.”
Using a movie-watching paradigm, the researchers are now studying some of the networks they identified in more detail, to identify subregions involved in particular tasks. For example, within the social processing network, they have found regions that are specific to processing social information about faces and bodies. In a new network that analyzes visual scenes, they have identified regions involved in processing memory of places.
“This kind of experiment is really about generating hypotheses for how the cerebral cortex is functionally organized. Networks that emerge during movie watching now need to be followed up with more specific experiments to test the hypotheses. It’s giving us a new view into the operation of the entire cortex during a more naturalistic task than just sitting at rest,” Desimone says.
The research was funded by the McGovern Institute, the Cognitive Science and Technology Council of Iran, the MRC Cognition and Brain Sciences Unit at the University of Cambridge, and a Cambridge Trust scholarship.
Asteroid grains shed light on the outer solar system’s origins
A weak magnetic field likely pulled matter inward to form the outer planetary bodies, from Jupiter to Neptune.
Tiny grains from a distant asteroid are revealing clues to the magnetic forces that shaped the far reaches of the solar system over 4.6 billion years ago.
Scientists at MIT and elsewhere have analyzed particles of the asteroid Ryugu, which were collected by the Japanese Aerospace Exploration Agency’s (JAXA) Hayabusa2 mission and brought back to Earth in 2020. Scientists believe Ryugu formed on the outskirts of the early solar system before migrating in toward the asteroid belt, eventually settling into an orbit between Earth and Mars.
The team analyzed Ryugu’s particles for signs of any ancient magnetic field that might have been present when the asteroid first took shape. Their results suggest that if there was a magnetic field, it would have been very weak. At most, such a field would have been about 15 microtesla. (The Earth’s own magnetic field today is around 50 microtesla.)
Even so, the scientists estimate that such a low-grade field intensity would have been enough to pull together primordial gas and dust to form the outer solar system’s asteroids and potentially play a role in giant planet formation, from Jupiter to Neptune.
The team’s results, which are published today in the journal AGU Advances, show for the first time that the distal solar system likely harbored a weak magnetic field. Scientists have known that a magnetic field shaped the inner solar system, where Earth and the terrestrial planets were formed. But it was unclear whether such a magnetic influence extended into more remote regions, until now.
“We’re showing that, everywhere we look now, there was some sort of magnetic field that was responsible for bringing mass to where the sun and planets were forming,” says study author Benjamin Weiss, the Robert R. Shrock Professor of Earth and Planetary Sciences at MIT. “That now applies to the outer solar system planets.”
The study’s lead author is Elias Mansbach PhD ’24, who is now a postdoc at Cambridge University. MIT co-authors include Eduardo Lima, Saverio Cambioni, and Jodie Ream, along with Michael Sowell and Joseph Kirschvink of Caltech, Roger Fu of Harvard University, Xue-Ning Bai of Tsinghua University, Chisato Anai and Atsuko Kobayashi of the Kochi Advanced Marine Core Research Institute, and Hironori Hidaka of Tokyo Institute of Technology.
A far-off field
Around 4.6 billion years ago, the solar system formed from a dense cloud of interstellar gas and dust, which collapsed into a swirling disk of matter. Most of this material gravitated toward the center of the disk to form the sun. The remaining bits formed a solar nebula of swirling, ionized gas. Scientists suspect that interactions between the newly formed sun and the ionized disk generated a magnetic field that threaded through the nebula, helping to drive accretion and pull matter inward to form the planets, asteroids, and moons.
“This nebular field disappeared around 3 to 4 million years after the solar system’s formation, and we are fascinated with how it played a role in early planetary formation,” Mansbach says.
Scientists previously determined that a magnetic field was present throughout the inner solar system — a region that spanned from the sun to about 7 astronomical units (AU), out to where Jupiter is today. (One AU is the distance between the sun and the Earth.) The intensity of this inner nebular field was somewhere between 50 and 200 microtesla, and it likely influenced the formation of the inner terrestrial planets. Such estimates of the early magnetic field are based on meteorites that landed on Earth and are thought to have originated in the inner nebula.
“But how far this magnetic field extended, and what role it played in more distal regions, is still uncertain because there haven’t been many samples that could tell us about the outer solar system,” Mansbach says.
Rewinding the tape
The team got an opportunity to analyze samples from the outer solar system with Ryugu, an asteroid that is thought to have formed in the early outer solar system, beyond 7 AU, and was eventually brought into orbit near the Earth. In December 2020, JAXA’s Hayabusa2 mission returned samples of the asteroid to Earth, giving scientists a first look at a potential relic of the early distal solar system.
The researchers acquired several grains of the returned samples, each about a millimeter in size. They placed the particles in a magnetometer — an instrument in Weiss’ lab that measures the strength and direction of a sample’s magnetization. They then applied an alternating magnetic field to progressively demagnetize each sample.
“Like a tape recorder, we are slowly rewinding the sample’s magnetic record,” Mansbach explains. “We then look for consistent trends that tell us if it formed in a magnetic field.”
They determined that the samples held no clear sign of a preserved magnetic field. This suggests that either there was no nebular field present in the outer solar system where the asteroid first formed, or the field was so weak that it was not recorded in the asteroid’s grains. If the latter is the case, the team estimates such a weak field would have been no more than 15 microtesla in intensity.
The researchers also reexamined data from previously studied meteorites. They specifically looked at “ungrouped carbonaceous chondrites” — meteorites with properties characteristic of having formed in the distal solar system. Scientists had estimated the samples were not old enough to have formed before the solar nebula disappeared. Any magnetic field record the samples contain, then, would not reflect the nebular field. But Mansbach and his colleagues decided to take a closer look.
“We reanalyzed the ages of these samples and found they are closer to the start of the solar system than previously thought,” Mansbach says. “We think these samples formed in this distal, outer region. And one of these samples does actually have a positive field detection of about 5 microtesla, which is consistent with an upper limit of 15 microtesla.”
This updated sample, combined with the new Ryugu particles, suggests that the outer solar system, beyond 7 AU, hosted a very weak magnetic field that was nevertheless strong enough to pull matter in from the outskirts to eventually form the outer planetary bodies, from Jupiter to Neptune.
“When you’re further from the sun, a weak magnetic field goes a long way,” Weiss notes. “It was predicted that it doesn’t need to be that strong out there, and that’s what we’re seeing.”
The team plans to look for more evidence of distal nebular fields with samples from another far-off asteroid, Bennu, which were delivered to Earth in September 2023 by NASA’s OSIRIS-REx spacecraft.
“Bennu looks a lot like Ryugu, and we’re eagerly awaiting first results from those samples,” Mansbach says.
This research was supported, in part, by NASA.
Startup gives surgeons a real-time view of breast cancer during surgery
The drug-device combination developed by MIT spinout Lumicell is poised to reduce repeat surgeries and ensure more complete tumor removal.
Breast cancer is the second most common type of cancer and the second leading cause of cancer death for women in the United States, affecting one in eight women overall.
Most women with breast cancer undergo lumpectomy surgery to remove the tumor and a rim of healthy tissue surrounding it. After the procedure, the removed tissue is sent to a pathologist, who looks for signs of disease at its edges. Unfortunately, about 20 percent of women who have lumpectomies must undergo a second surgery to remove more tissue.
Now, an MIT spinout is giving surgeons a real-time view of cancerous tissue during surgery. Lumicell has developed a handheld device and an optical imaging agent that, when combined, allow surgeons to scan the tissue within the surgical cavity to visualize residual cancer cells. The surgeons see these images on a monitor that can guide them to remove additional tissue during the procedure.
In a clinical trial of 357 patients, Lumicell’s technology not only reduced the need for second surgeries but also revealed tissue suspected to contain cancer cells that may have otherwise been missed by the standard of care lumpectomy.
The company received U.S. Food and Drug Administration approval for the technology earlier this year, marking a major milestone for Lumicell and the founders, who include MIT professors Linda Griffith and Moungi Bawendi along with PhD candidate W. David Lee ’69, SM ’70. Much of the early work developing and testing the system took place at the Koch Institute for Integrative Cancer Research at MIT, beginning in 2008.
The FDA approval also held deep personal significance for some of Lumicell’s team members, including Griffith, a two-time breast cancer survivor, and Lee, whose wife’s passing from the disease in 2003 changed the course of his life.
An interdisciplinary approach
Lee ran a technology consulting group for 25 years before his wife was diagnosed with breast cancer. Watching her battle the disease inspired him to develop technologies that could help cancer patients.
His neighbor at the time was Tyler Jacks, the founding director of the Koch Institute. Jacks invited Lee to a series of meetings at the Koch involving professors Robert Langer and Bawendi, and Lee eventually joined the Koch Institute as an integrative program officer in 2008, where he began exploring an approach for improving imaging in living organisms with single-cell resolution using charge-coupled device (CCD) cameras.
“CCD pixels at the time were each 2 or 3 microns and spaced 2 or 3 microns,” Lee explains. “So the idea was very simple: to stabilize a camera on a tissue so it would move with the breathing of the animal, so the pixels would essentially line up with the cells without any fancy magnification.”
That work led Lee to begin meeting regularly with a multidisciplinary group including Lumicell co-founders Bawendi, currently the Lester Wolfe Professor of Chemistry at MIT and winner of the 2023 Nobel Prize in Chemistry; Griffith, the School of Engineering Professor of Teaching Innovation in MIT’s Department of Biological Engineering and an extramural faculty member at the Koch Institute; Ralph Weissleder, a professor at Harvard Medical School; and David Kirsch, formerly a postdoc at the Koch Institute and now a scientist at the Princess Margaret Cancer Center.
“On Friday afternoons, we’d get together, and Moungi would teach us some chemistry, Lee would teach us some engineering, and David Kirsch would teach some biology,” Griffith recalls.
Through those meetings, the researchers began to explore the effectiveness of combining Lee’s imaging approach with engineered proteins that would light up where the immune system meets the edge of tumors, for use during surgery. To begin testing the idea, the group received funding from the Koch Institute Frontier Research Program via the Kathy and Curt Marble Cancer Research Fund.
“Without that support, this never would have happened,” Lee says. “When I was learning biology at MIT as an undergrad, genetics weren’t even in the textbooks yet. But the Koch Institute provided education, funding, and most importantly, connections to faculty, who were willing to teach me biology.”
In 2010, Griffith was diagnosed with breast cancer.
“Going through that personal experience, I understood the impact that we could have,” Griffith says. “I had a very unusual situation and a bad kind of tumor. The whole thing was nerve-wracking, but one of the most nerve-wracking times was waiting to find out if my tumor margins were clear after surgery. I experienced that uncertainty and dread as a patient, so I became hugely sensitized to our mission.”
The approach Lumicell’s founders eventually settled on begins two to six hours before surgery, when patients receive the optical imaging agent through an IV. Then, during surgery, surgeons use Lumicell’s handheld imaging device to scan the walls of the breast cavity. Lumicell’s cancer detection software shows spots that highlight regions suspected to contain residual cancer on the computer monitor, which the surgeon can then remove. The process adds less than 7 minutes on average to the procedure.
“The technology we developed allows the surgeon to scan the actual cavity, whereas pathology only looks at the lump removed, and [pathologists] make their assessment based on looking at about 1 or 2 percent of the surface area,” Lee says. “Not only are we detecting cancer that was left behind to potentially eliminate second surgeries, we are also, very importantly, finding cancer in some patients that wouldn't be found in pathology and may not generate a second surgery.”
Exploring other cancer types
Lumicell is currently exploring whether its imaging agent is activated in other tumor types, including prostate, sarcoma, esophageal, gastric, and others.
Lee ran Lumicell between 2008 and 2020. After stepping down as CEO, he decided to return to MIT to get his PhD in neuroscience, a full 50 years since he earned his master’s. Shortly thereafter, Howard Hechler took over as Lumicell’s president and chief operating officer.
Looking back, Griffith credits MIT’s culture of learning for the formation of Lumicell.
“People like David [Lee] and Moungi care about solving problems,” Griffith says. “They’re technically brilliant, but they also love learning from other people, and that’s what makes MIT special. People are confident about what they know, but they are also comfortable in that they don’t know everything, which drives great collaboration. We work together so that the whole is bigger than the sum of the parts.”
A new approach to modeling complex biological systems
MIT engineers’ new model could help researchers glean insights from genomic data and other huge datasets.
Over the past two decades, new technologies have helped scientists generate a vast amount of biological data. Large-scale experiments in genomics, transcriptomics, proteomics, and cytometry can produce enormous quantities of data from a given cellular or multicellular system.
However, making sense of this information is not always easy. This is especially true when trying to analyze complex systems such as the cascade of interactions that occur when the immune system encounters a foreign pathogen.
MIT biological engineers have now developed a new computational method for extracting useful information from these datasets. Using their new technique, they showed that they could unravel a series of interactions that determine how the immune system responds to tuberculosis vaccination and subsequent infection.
This strategy could be useful to vaccine developers and to researchers who study any kind of complex biological system, says Douglas Lauffenburger, the Ford Professor of Engineering in the departments of Biological Engineering, Biology, and Chemical Engineering.
“We’ve landed on a computational modeling framework that allows prediction of effects of perturbations in a highly complex system, including multiple scales and many different types of components,” says Lauffenburger, the senior author of the new study.
Shu Wang, a former MIT postdoc who is now an assistant professor at the University of Toronto, and Amy Myers, a research manager in the lab of University of Pittsburgh School of Medicine Professor JoAnne Flynn, are the lead authors of a new paper on the work, which appears today in the journal Cell Systems.
Modeling complex systems
When studying complex biological systems such as the immune system, scientists can extract many different types of data. Sequencing cell genomes tells them which gene variants a cell carries, while analyzing messenger RNA transcripts tells them which genes are being expressed in a given cell. Using proteomics, researchers can measure the proteins found in a cell or biological system, and cytometry allows them to quantify a myriad of cell types present.
Using computational approaches such as machine learning, scientists can use this data to train models to predict a specific output based on a given set of inputs — for example, whether a vaccine will generate a robust immune response. However, that type of modeling doesn’t reveal anything about the steps that happen in between the input and the output.
“That AI approach can be really useful for clinical medical purposes, but it’s not very useful for understanding biology, because usually you’re interested in everything that’s happening between the inputs and outputs,” Lauffenburger says. “What are the mechanisms that actually generate outputs from inputs?”
To create models that can identify the inner workings of complex biological systems, the researchers turned to a type of model known as a probabilistic graphical network. These models represent each measured variable as a node, generating maps of how each node is connected to the others.
Probabilistic graphical networks are often used for applications such as speech recognition and computer vision, but they have not been widely used in biology.
Lauffenburger’s lab has previously used this type of model to analyze intracellular signaling pathways, which required analyzing just one kind of data. To adapt this approach to analyze many datasets at once, the researchers applied a mathematical technique that can filter out any correlations between variables that are not directly affecting each other. This technique, known as graphical lasso, is an adaptation of the method often used in machine learning models to strip away results that are likely due to noise.
“With correlation-based network models generally, one of the problems that can arise is that everything seems to be influenced by everything else, so you have to figure out how to strip down to the most essential interactions,” Lauffenburger says. “Using probabilistic graphical network frameworks, one can really boil down to the things that are most likely to be direct and throw out the things that are most likely to be indirect.”
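As a rough illustration of the graphical-lasso step described above (a generic sketch using scikit-learn, not the study’s actual pipeline), the following Python snippet estimates a sparse precision matrix from synthetic data and reads off the variable pairs that remain directly connected after indirect correlations are stripped away:

```python
# Generic graphical-lasso sketch on synthetic data (not the study's pipeline).
# Nonzero entries of the estimated precision matrix mark variable pairs that stay
# directly associated after conditioning on all the other measured variables.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))             # placeholder data: 200 samples, 8 variables
X[:, 1] += 0.8 * X[:, 0]                  # variable 1 depends directly on variable 0
X[:, 2] += 0.8 * X[:, 1]                  # variable 2 depends on 1, so 0 and 2 correlate only indirectly

model = GraphicalLasso(alpha=0.1).fit(X)  # alpha controls how aggressively weak links are pruned
precision = model.precision_

# Edges of the estimated network: pairs with a nonzero partial correlation.
edges = [(i, j) for i in range(8) for j in range(i + 1, 8)
         if abs(precision[i, j]) > 1e-3]
print(edges)
```

In the toy chain 0 → 1 → 2, variables 0 and 2 are correlated, but their partial correlation given variable 1 is close to zero, so the estimated graph should keep only the direct links and discard the indirect one.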
Mechanism of vaccination
To test their modeling approach, the researchers used data from studies of a tuberculosis vaccine. This vaccine, known as BCG, is an attenuated form of Mycobacterium bovis. It is used in many countries where TB is common, but it isn’t always effective, and its protection can weaken over time.
In hopes of developing more effective TB protection, researchers have been testing whether delivering the BCG vaccine intravenously or by inhalation might provoke a better immune response than injecting it. Those studies, performed in animals, found that the vaccine did work much better when given intravenously. In the MIT study, Lauffenburger and his colleagues attempted to discover the mechanism behind this success.
The data that the researchers examined in this study included measurements of about 200 variables, including levels of cytokines, antibodies, and different types of immune cells, from about 30 animals.
The measurements were taken before vaccination, after vaccination, and after TB infection. By analyzing the data using their new modeling approach, the MIT team was able to determine the steps needed to generate a strong immune response. They showed that the vaccine stimulates a subset of T cells, which produce a cytokine that activates a set of B cells that generate antibodies targeting the bacterium.
“Almost like a roadmap or a subway map, you could find what were really the most important paths. Even though a lot of other things in the immune system were changing one way or another, they were really off the critical path and didn't matter so much,” Lauffenburger says.
The researchers then used the model to make predictions for how a specific disruption, such as suppressing a subset of immune cells, would affect the system. The model predicted that if B cells were nearly eliminated, there would be little impact on the vaccine response, and experiments showed that prediction was correct.
This modeling approach could be used by vaccine developers to predict the effect their vaccines may have, and to make tweaks that would improve them before testing them in humans. Lauffenburger’s lab is now using the model to study the mechanism of a malaria vaccine that has been given to children in Kenya, Ghana, and Malawi over the past few years.
“The advantage of this computational approach is that it filters out many biological targets that only indirectly influence the outcome and identifies those that directly regulate the response. Then it's possible to predict how therapeutically altering those biological targets would change the response. This is significant because it provides the basis for future vaccine and trial designs that are more data driven,” says Kathryn Miller-Jensen, a professor of biomedical engineering at Yale University, who was not involved in the study.
Lauffenburger’s lab is also using this type of modeling to study the tumor microenvironment, which contains many types of immune cells and cancerous cells, in hopes of predicting how tumors might respond to different kinds of treatment.
The research was funded by the National Institute of Allergy and Infectious Diseases.
Despite its impressive output, generative AI doesn’t have a coherent understanding of the world
Researchers show that even the best-performing large language models don’t form a true model of the world and its rules, and can thus fail unexpectedly on similar tasks.
Large language models can do impressive things, like write poetry or generate viable computer programs, even though these models are trained to predict words that come next in a piece of text.
Such surprising capabilities can make it seem like the models are implicitly learning some general truths about the world.
But that isn’t necessarily the case, according to a new study. The researchers found that a popular type of generative AI model can provide turn-by-turn driving directions in New York City with near-perfect accuracy — without having formed an accurate internal map of the city.
Despite the model’s uncanny ability to navigate effectively, when the researchers closed some streets and added detours, its performance plummeted.
When they dug deeper, the researchers found that the New York maps the model implicitly generated had many nonexistent streets curving between the grid and connecting faraway intersections.
This could have serious implications for generative AI models deployed in the real world, since a model that seems to be performing well in one context might break down if the task or environment slightly changes.
“One hope is that, because LLMs can accomplish all these amazing things in language, maybe we could use these same tools in other parts of science, as well. But the question of whether LLMs are learning coherent world models is very important if we want to use these techniques to make new discoveries,” says senior author Ashesh Rambachan, assistant professor of economics and a principal investigator in the MIT Laboratory for Information and Decision Systems (LIDS).
Rambachan is joined on a paper about the work by lead author Keyon Vafa, a postdoc at Harvard University; Justin Y. Chen, an electrical engineering and computer science (EECS) graduate student at MIT; Jon Kleinberg, Tisch University Professor of Computer Science and Information Science at Cornell University; and Sendhil Mullainathan, an MIT professor in the departments of EECS and of Economics, and a member of LIDS. The research will be presented at the Conference on Neural Information Processing Systems.
New metrics
The researchers focused on a type of generative AI model known as a transformer, which forms the backbone of LLMs like GPT-4. Transformers are trained on a massive amount of language-based data to predict the next token in a sequence, such as the next word in a sentence.
But if scientists want to determine whether an LLM has formed an accurate model of the world, measuring the accuracy of its predictions doesn’t go far enough, the researchers say.
For example, they found that a transformer can predict valid moves in a game of Connect 4 nearly every time without understanding any of the rules.
So, the team developed two new metrics that can test a transformer’s world model. The researchers focused their evaluations on a class of problems called deterministic finite automata, or DFAs.
A DFA is a problem with a sequence of states, like intersections one must traverse to reach a destination, and a concrete way of describing the rules one must follow along the way.
They chose two problems to formulate as DFAs: navigating on streets in New York City and playing the board game Othello.
“We needed test beds where we know what the world model is. Now, we can rigorously think about what it means to recover that world model,” Vafa explains.
The first metric they developed, called sequence distinction, says a model has formed a coherent world model if it sees two different states, like two different Othello boards, and recognizes how they are different. Sequences, that is, ordered lists of data points, are what transformers use to generate outputs.
The second metric, called sequence compression, says a transformer with a coherent world model should know that two identical states, like two identical Othello boards, have the same sequence of possible next steps.
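In simplified form, both metrics can be computed by comparing what a model predicts after pairs of sequences whose true DFA states are known. The sketch below is a toy approximation of the ideas, not the authors’ exact definitions; `true_state` and `model_next` are assumed helper functions that return, respectively, the DFA state a sequence reaches and the set of next tokens the model considers valid after that sequence.

```python
# Toy approximation of the two world-model metrics on a known DFA.
# Assumed helpers (not from the paper): true_state(seq) -> DFA state reached,
# model_next(seq) -> set of next tokens the model under test deems valid.
from itertools import combinations

def sequence_compression(sequences, true_state, model_next):
    """Fraction of sequence pairs reaching the SAME state for which the model
    predicts the same set of next tokens (higher is better)."""
    pairs = [(a, b) for a, b in combinations(sequences, 2) if true_state(a) == true_state(b)]
    if not pairs:
        return 1.0
    return sum(model_next(a) == model_next(b) for a, b in pairs) / len(pairs)

def sequence_distinction(sequences, true_state, model_next):
    """Fraction of sequence pairs reaching DIFFERENT states that the model
    tells apart by predicting different next-token sets (higher is better)."""
    pairs = [(a, b) for a, b in combinations(sequences, 2) if true_state(a) != true_state(b)]
    if not pairs:
        return 1.0
    return sum(model_next(a) != model_next(b) for a, b in pairs) / len(pairs)
```

A model can score high on next-token accuracy yet poorly on both of these checks, which is exactly the gap the researchers set out to measure.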
They used these metrics to test two common classes of transformers: one trained on data generated from randomly produced sequences and the other trained on data generated by following strategies.
Incoherent world models
Surprisingly, the researchers found that transformers which made choices randomly formed more accurate world models, perhaps because they saw a wider variety of potential next steps during training.
“In Othello, if you see two random computers playing rather than championship players, in theory you’d see the full set of possible moves, even the bad moves championship players wouldn’t make,” Vafa explains.
Even though the transformers generated accurate directions and valid Othello moves in nearly every instance, the two metrics revealed that only one generated a coherent world model for Othello moves, and none performed well at forming coherent world models in the wayfinding example.
The researchers demonstrated the implications of this by adding detours to the map of New York City, which caused all the navigation models to fail.
“I was surprised by how quickly the performance deteriorated as soon as we added a detour. If we close just 1 percent of the possible streets, accuracy immediately plummets from nearly 100 percent to just 67 percent,” Vafa says.
When they recovered the city maps the models generated, they looked like an imagined New York City with hundreds of streets crisscrossing on top of the grid. The maps often contained random flyovers above other streets or multiple streets with impossible orientations.
These results show that transformers can perform surprisingly well at certain tasks without understanding the rules. If scientists want to build LLMs that can capture accurate world models, they need to take a different approach, the researchers say.
“Often, we see these models do impressive things and think they must have understood something about the world. I hope we can convince people that this is a question to think very carefully about, and we don’t have to rely on our own intuitions to answer it,” says Rambachan.
In the future, the researchers want to tackle a more diverse set of problems, such as those where some rules are only partially known. They also want to apply their evaluation metrics to real-world, scientific problems.
This work is funded, in part, by the Harvard Data Science Initiative, a National Science Foundation Graduate Research Fellowship, a Vannevar Bush Faculty Fellowship, a Simons Collaboration grant, and a grant from the MacArthur Foundation.
A new focus on understanding the human element
The MIT Human Insight Collaborative will elevate the human-centered disciplines and unite the Institute’s top scholars to help solve the world’s biggest challenges.
A new MIT initiative aims to elevate human-centered research and teaching, and bring together scholars in the humanities, arts, and social sciences with their colleagues across the Institute.
The MIT Human Insight Collaborative (MITHIC) launched earlier this fall. A formal kickoff event for MITHIC was held on campus Monday, Oct. 28, before a full audience in MIT’s Huntington Hall (Room 10-250). The event featured a conversation with Min Jin Lee, acclaimed author of “Pachinko,” moderated by Linda Pizzuti Henry SM ’05, co-owner and CEO of Boston Globe Media.
Initiative leaders say MITHIC will foster creativity, inquiry, and understanding, amplifying the Institute’s impact on global challenges like climate change, AI, pandemics, poverty, democracy, and more.
President Sally Kornbluth says MITHIC is the first in a new model known as the MIT Collaboratives, designed among other things to foster and support new collaborations on compelling global problems. The next MIT Collaborative will focus on life sciences and health.
“The MIT Collaboratives will make it easier for our faculty to ‘go big’ — to pursue the most innovative ideas in their disciplines and build connections to other fields,” says Kornbluth.
“We created MITHIC with a particular focus on the human-centered fields, to help advance research with the potential for global impact. MITHIC also has another, more local aim: to support faculty in developing fresh approaches to teaching and research that will engage and inspire a new generation of students,” Kornbluth adds.
A transformative opportunity
MITHIC is co-chaired by Anantha Chandrakasan, chief innovation and strategy officer, dean of the School of Engineering, and the Vannevar Bush Professor of Electrical Engineering and Computer Science; and Agustin Rayo, Kenan Sahin Dean of the School of Humanities, Arts, and Social Sciences (SHASS).
“MITHIC is an incredibly exciting and meaningful initiative to me as it represents MIT at its core — bringing broad perspectives and human insights to solve some of the world’s most important problems,” says Chandrakasan. “It offers the opportunity to shape the future of research and education at MIT through advancing core scholarship in the individual humanities, arts, and social sciences disciplines, but also through cross-cutting problem formulation and problem-solving. I have no doubt MITHIC will inspire our community to think differently and work together in ways that will have a lasting impact on society.”
Rayo says true innovation must go beyond technology to encompass the full complexity of the human experience.
“At MIT, we aim to make the world a better place. But you can't make the world a better place unless you understand its full economic, political, social, ethical — human — dimensions,” Rayo says. “MITHIC can help ensure that MIT educates broad-minded students, who are ready for the multidimensional challenges of the future.”
Rayo sees MITHIC as a transformative opportunity for MIT.
“MIT needs an integrated approach, which combines STEM with the human-centered disciplines. MITHIC can help catalyze that integration,” he says.
Mark Gorenberg ’76, chair of the MIT Corporation, says MITHIC represents a commitment to collaboration, a spirit of curiosity, and the belief that uniting the humanities and sciences results in solutions that are not only innovative, but meaningful and lasting.
“MIT has long been a place where boundless ideas and entrepreneurial energy come together to meet the world’s toughest challenges,” Gorenberg says. “With MITHIC, we’re adding a powerful new layer to that mission — one that captures the richness of human experience and imagination.”
Support for MITHIC comes from all five MIT schools, the MIT Schwarzman College of Computing, and the Office of the Provost, along with philanthropic support.
Charlene Kabcenell ’79, a life member of the MIT Corporation, and Derry Kabcenell ’75 chose to support MITHIC financially.
“MIT produces world-class scientists and technologists, but expertise in the skills of these areas is not enough. We are excited that the collaborations catalyzed by this initiative will help our graduates to stay mindful of the impact of their work on people and society,” they say.
Ray Stata ’57, MIT Corporation life member emeritus, is also a benefactor of MITHIC.
“In industry, it is not just technical innovation and breakthroughs that win, but also culture, in the ways people collaborate and work together. These are skills and behaviors that can be learned through a deeper understanding of humanities and social sciences. This has always been an important part of MIT’s education and I am happy to see the renewed attention being given to this aspect of the learning experience,” he says.
“A potential game changer”
Keeril Makan, associate dean for strategic initiatives in SHASS and the Michael (1949) and Sonja Koerner Music Composition Professor, is the faculty lead for MITHIC.
“MITHIC is about incentivizing collaboration, not research in specific areas,” says Makan. “It’s a ground-up approach, where we support faculty based upon the research that is of interest to them, which they identify.”
MITHIC consists of three new funding opportunities for faculty, the largest of which is the SHASS+ Connectivity Fund. For all three funds, proposals can be for projects ready to begin, as well as planning grants in preparation for future proposals.
The SHASS+ Connectivity Fund will support research that bridges SHASS fields and other fields at MIT. Proposals require a project lead in SHASS and another project lead whose primary appointment is outside of SHASS.
The SHASS+ Connectivity Fund is co-chaired by David Kaiser, the Germeshausen Professor of the History of Science and professor of physics, and Maria Yang, deputy dean of engineering and Kendall Rohsenow Professor of Mechanical Engineering.
“MIT has set an ambitious agenda for itself focused on addressing extremely complex and challenging problems facing society today, such as climate change, and there is a critical role for technological solutions to address these problems,” Yang says. “However, the origins of these problems are in part due to humans, so humanistic considerations need to be part of the solution. Such problems cannot be conquered by technology alone.”
Yang says the goal of the SHASS+ Connectivity Fund is to enhance MIT’s research by building interdisciplinary teams, embedding a human-centered focus.
“My hope is that these collaborations will build bridges between SHASS and the rest of MIT, and will lead to integrated research that is more powerful and meaningful together,” says Yang.
Proposals for the first round of projects are due Nov. 22, but MITHIC is already bringing MIT faculty together to share ideas in hopes of sparking ideas for potential collaboration.
An information session and networking reception was held in September. MITHIC has also been hosting a series of “Meeting of the Minds” events. Makan says these have been opportunities for faculty and teaching staff to make connections around a specific topic or area of interest with colleagues they haven’t previously worked with.
Recent Meeting of the Minds sessions have been held on topics like cybersecurity, social history of math, food security, and rebuilding Ukraine.
“Faculty are already educating each other about their disciplines,” says Makan. “What happens in SHASS has been opaque to faculty in the other schools, just as the research in the other schools has been opaque to the faculty in SHASS. We’ve seen progress with initiatives like the Social and Ethical Responsibilities of Computing (SERC), when it comes to computing. MITHIC will broaden that scope.”
The leadership of MITHIC is cross-disciplinary, with a steering committee of faculty representing all five schools and the MIT Schwarzman College of Computing.
Iain Cheeseman, the Herman and Margaret Sokol Professor of Biology, is a member of the MITHIC steering committee. He says that while he continues to be amazed and inspired by the diverse research and work from across MIT, there’s potential to go even further by working together and connecting across diverse perspectives, ideas, and approaches.
“The bold goal and mission of MITHIC, to connect the humanities at MIT to work being conducted across the other schools at MIT, feels like a potential game-changer,” he says. “I am really excited to see the unexpected new work and directions that come out of this initiative, including hopefully connections that persist and transform the work across MIT.”
Enhancing the arts and humanities
In addition to the SHASS+ Connectivity Fund, MITHIC has two funds aimed specifically at enhancing research and teaching within SHASS.
The Humanities Cultivation Fund will support projects from the humanities and arts in SHASS. It is co-chaired by Arthur Bahr, professor of literature, and Anne McCants, the Ann F. Friedlaender Professor of History and SHASS research chair.
“Humanistic scholarship and artistic creation have long been among MIT’s hidden gems. The Humanities Cultivation Fund offers an exciting new opportunity to not only allow such work to continue to flourish, but also to give it greater visibility across the MIT community and into the wider world of scholarship. The fund aspires to cultivate — that is, to seed and nurture — new ideas and modes of inquiry into the full spectrum of human culture and expression,” says McCants.
The SHASS Education Innovation Fund will support new educational approaches in SHASS fields. The fund is co-chaired by Eric Klopfer, professor of comparative media studies/writing, and Emily Richmond Pollock, associate professor of music and SHASS undergraduate education chair.
Pollock says the fund is a welcome chance to support colleagues who have a strong sense of where teaching in SHASS could go next.
“We are looking for efforts that address contemporary challenges of teaching and learning, with approaches that can be tested in a specific context and later applied across the school. The crucial role of SHASS in educating MIT students in all fields means that what we devise here in our curriculum can have huge benefits for the Institute as a whole.”
Makan says infusing MIT’s human-centered disciplines with support is an essential part of MITHIC.
“The stronger these units are, the more the human-centered disciplines permeate the student experience, ultimately helping to build a stronger, more inclusive MIT,” says Makan.
Lemelson-MIT awards 2024-25 InvenTeam grants to eight high school teams
Each $7,500 grant allows high schoolers to solve real-world problems with technological solutions.
The Lemelson-MIT Program has announced the 2024-25 InvenTeams — eight teams of high school students, teachers, and mentors from across the country. Each team will receive $7,500 in grant funding and year-long support to build a technological invention that solves a problem of their own choosing. The students’ inventions are inspired by real-world problems they identified in their local communities.
The InvenTeams were selected by a respected panel consisting of university professors, inventors, entrepreneurs, industry professionals, and college students. Some panel members were former InvenTeam members now working in industry. The InvenTeams are focusing on problems facing their local communities, with a goal that their inventions will have a positive impact on beneficiaries and, ultimately, improve the lives of others beyond their communities.
This year’s teams are:
InvenTeams are composed of students, teachers, and community mentors who pursue year-long invention projects involving creative thinking, problem-solving, and hands-on learning in science, technology, engineering, and mathematics. The InvenTeams’ prototype inventions will be showcased at a technical review within their home communities in February 2025, and then again as final prototypes at EurekaFest — an invention celebration taking place June 9-11, 2025, at MIT.
“The InvenTeams are focusing on solving problems that impact their local communities,” says Leigh Estabrooks, Lemelson-MIT’s invention education officer. “Teams are focusing their technological solutions — their inventions — on health and well-being, environmental issues, and safety concerns. These high school students are not just problem-solvers of tomorrow, they are problem solvers today helping to make our world healthier, greener, and safer.”
This year the Lemelson-MIT Program and the InvenTeams grants initiative celebrate a series of firsts in the annual high school invention grant program. For the first time, a team from their home city of Cambridge, Massachusetts, will participate, representing the Cambridge community’s innovative spirit on a national stage. Additionally, the program welcomes the first team from Puerto Rico, highlighting the expanding reach of the InvenTeams grants initiative. The pioneering teams exemplify the diversity and creativity that fuel invention.
The InvenTeams grants initiative, now in its 21st year, has enabled 18 teams of high school students to be awarded U.S. patents for their projects. Intellectual property education is combined with invention education offerings as part of the Lemelson-MIT Program’s deliberate efforts to remedy historic inequities among those who develop inventions, protect their intellectual property, and commercialize their creations. The ongoing efforts empower students from all backgrounds, equipping them with invaluable problem-solving skills that will serve them well throughout their academic journeys, professional pursuits, and personal lives. The program has worked with over 4,000 students across 304 different InvenTeams nationwide and has included:
The Lemelson-MIT Program is a national leader in efforts to prepare the next generation of inventors and entrepreneurs, focusing on the expansion of opportunities for people to learn ways inventors find and solve problems that matter to improve lives. A commitment to diversity, equity, and inclusion aims to remedy historic inequities among those who develop inventions, protect their intellectual property, and commercialize their creations.
Jerome H. Lemelson, one of U.S. history’s most prolific inventors, and his wife Dorothy founded the Lemelson-MIT Program in 1994. It is funded by The Lemelson Foundation and administered by the MIT School of Engineering. For more information, contact Leigh Estabrooks.
Artist and designer Es Devlin awarded Eugene McDermott Award in the Arts at MIT
Exploring biodiversity, linguistic diversity, and collective AI-generated poetry, her work will be honored with a $100K prize, artist residency, and public lecture at MIT in spring 2025.
Artist and designer Es Devlin is the recipient of the 2025 Eugene McDermott Award in the Arts at MIT. The $100,000 prize, to be awarded at a gala in her honor, also includes an artist residency at MIT in spring 2025, during which Es Devlin will present her work in a lecture open to the public on May 1, 2025.
Devlin’s work explores biodiversity, linguistic diversity, and collective AI-generated poetry, all areas that also are being explored within the MIT community. She is known for public art and installations at major museums such as the Tate Modern, kinetic stage designs for the Metropolitan Opera, the Super Bowl, and the Olympics, as well as monumental stage sculptures for large-scale stadium concerts.
“I am always most energized by works I have not yet made, so I am immensely grateful to have this trust and investment in ideas I’ve yet to conceive,” says Devlin. “I’m honored to receive an award that has been granted to so many of my heroes, and look forward to collaborating closely with the brilliant minds at MIT.”
“We look forward to presenting Es Devlin with MIT’s highest award in the arts. Her work will be an inspiration for our students studying the visual arts, theater, media, and design. Her interest in AI and the arts dovetails with a major initiative at MIT to address the societal impact of GenAI [generative artificial intelligence],” says MIT vice provost and Ford International Professor of History Philip S. Khoury. “With a new performing arts center opening this winter and a campus-wide arts festival taking place this spring, there could not be a better moment to expose MIT’s creative community to Es Devlin’s extraordinary artistic practice.”
The Eugene McDermott Award in the Arts at MIT recognizes innovative artists working in any field or cross-disciplinary activity. The $100,000 prize represents an investment in the recipient’s future creative work, rather than a prize for a particular project or lifetime of achievement. The official announcement was made at the Council for the Arts at MIT’s 51st annual meeting on Oct. 24. Since it was established in 1974, the award has been bestowed upon 38 individuals who work in performing, visual, and media arts, as well as authors, art historians, and patrons of the arts. Past recipients include Santiago Calatrava, Gustavo Dudamel, Olafur Eliasson, Robert Lepage, Audra McDonald, Suzan-Lori Parks, Bill Viola, and Pamela Z, among others.
A distinctive feature of the award is a short residency at MIT, which includes a public presentation of the artist’s work, substantial interaction with students and faculty, and a gala that convenes national and international leaders in the arts. The goal of the residency is to provide the recipient with unparalleled access to the creative energy and cutting-edge research at the Institute and to develop mutually enlightening relationships in the MIT community.
The Eugene McDermott Award in the Arts at MIT was established in 1974 by Margaret McDermott (1912-2018) in honor of her husband, Eugene McDermott (1899-1973), a co-founder of Texas Instruments and longtime friend and benefactor of MIT. The award is presented by the Council for the Arts at MIT.
The award is bestowed upon individuals whose artistic trajectory and body of work have achieved the highest distinction in their field and indicate they will remain leaders for years to come. The McDermott Award reflects MIT’s commitment to risk-taking, problem-solving, and connecting creative minds across disciplines.
Es Devlin, born in London in 1971, views an audience as a temporary society and often invites public participation in communal choral works. Her canvas ranges from public sculptures and installations at Tate Modern, V&A, Serpentine, Imperial War Museum, and Lincoln Center, to kinetic stage designs at the Royal Opera House, the National Theatre, and the Metropolitan Opera, as well as Olympic ceremonies, Super Bowl halftime shows, and monumental illuminated stage sculptures for large-scale stadium concerts.
Devlin is the subject of a major monographic book, “An Atlas of Es Devlin,” described by Thames and Hudson as their most intricate and sculptural publication to date, and a retrospective exhibition at the Cooper Hewitt Smithsonian Design Museum in New York. In 2020, she became the first female architect of the U.K. Pavilion at a World Expo, conceiving a building which used AI to co-author poetry with visitors on its 20-meter diameter facade. Her practice was the subject of the 2015 Netflix documentary series “Abstract: The Art of Design.” She is a fellow of the Royal Academy of Music, University of the Arts London, and a Royal Designer for Industry at the Royal Society of Arts. She has been awarded the London Design Medal, three Olivier Awards, a Tony Award, an Ivor Novello Award, doctorates from the Universities of Bristol and Kent, and a Commander of the Order of the British Empire award.
Nanoscale transistors could enable more efficient electronics
Researchers are leveraging quantum mechanical properties to overcome the limits of silicon semiconductor technology.
Silicon transistors, which are used to amplify and switch signals, are a critical component in most electronic devices, from smartphones to automobiles. But silicon semiconductor technology is held back by a fundamental physical limit that prevents transistors from operating below a certain voltage.
This limit, known as “Boltzmann tyranny,” hinders the energy efficiency of computers and other electronics, especially with the rapid development of artificial intelligence technologies that demand faster computation.
In an effort to overcome this fundamental limit of silicon, MIT researchers fabricated a different type of three-dimensional transistor using a unique set of ultrathin semiconductor materials.
Their devices, featuring vertical nanowires only a few nanometers wide, can deliver performance comparable to state-of-the-art silicon transistors while operating efficiently at much lower voltages than conventional devices.
“This is a technology with the potential to replace silicon, so you could use it with all the functions that silicon currently has, but with much better energy efficiency,” says Yanjie Shao, an MIT postdoc and lead author of a paper on the new transistors.
The transistors leverage quantum mechanical properties to simultaneously achieve low-voltage operation and high performance within an area of just a few square nanometers. Their extremely small size would enable more of these 3D transistors to be packed onto a computer chip, resulting in fast, powerful electronics that are also more energy-efficient.
“With conventional physics, there is only so far you can go. The work of Yanjie shows that we can do better than that, but we have to use different physics. There are many challenges yet to be overcome for this approach to be commercial in the future, but conceptually, it really is a breakthrough,” says senior author Jesús del Alamo, the Donner Professor of Engineering in the MIT Department of Electrical Engineering and Computer Science (EECS).
They are joined on the paper by Ju Li, the Tokyo Electric Power Company Professor in Nuclear Engineering and professor of materials science and engineering at MIT; EECS graduate student Hao Tang; MIT postdoc Baoming Wang; and professors Marco Pala and David Esseni of the University of Udine in Italy. The research appears today in Nature Electronics.
Surpassing silicon
In electronic devices, silicon transistors often operate as switches. Applying a voltage to the transistor causes electrons to move over an energy barrier from one side to the other, switching the transistor from “off” to “on.” By switching, transistors represent binary digits to perform computation.
A transistor’s switching slope reflects the sharpness of the “off” to “on” transition. The steeper the slope, the less voltage is needed to turn on the transistor and the greater its energy efficiency.
But because of how electrons move across an energy barrier, Boltzmann tyranny requires a certain minimum voltage to switch the transistor at room temperature.
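As a back-of-the-envelope illustration of that limit (not drawn from the paper), the minimum switching slope, or subthreshold swing, of a conventional transistor follows from the Boltzmann constant and the electron charge and comes out to roughly 60 millivolts per decade of current at room temperature; the short Python sketch below computes this standard thermionic figure.
    import math

    k_B = 1.380649e-23       # Boltzmann constant, J/K
    q = 1.602176634e-19      # elementary charge, C

    def thermionic_swing_limit(temperature_kelvin):
        """Minimum subthreshold swing, in mV per decade of current, for a
        conventional transistor whose electrons must go over the energy barrier."""
        return (k_B * temperature_kelvin / q) * math.log(10) * 1e3

    # At room temperature the limit is roughly 60 mV/decade; tunneling devices
    # are not bound by this thermionic limit.
    print(f"300 K: {thermionic_swing_limit(300):.1f} mV/decade")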
To overcome the physical limit of silicon, the MIT researchers used a different set of semiconductor materials — gallium antimonide and indium arsenide — and designed their devices to leverage a unique phenomenon in quantum mechanics called quantum tunneling.
Quantum tunneling is the ability of electrons to penetrate barriers. The researchers fabricated tunneling transistors, which leverage this property to encourage electrons to push through the energy barrier rather than going over it.
“Now, you can turn the device on and off very easily,” Shao says.
But while tunneling transistors can enable sharp switching slopes, they typically operate with low current, which hampers the performance of an electronic device. Higher current is necessary to create powerful transistor switches for demanding applications.
Fine-grained fabrication
Using tools at MIT.nano, MIT’s state-of-the-art facility for nanoscale research, the engineers were able to carefully control the 3D geometry of their transistors, creating vertical nanowire heterostructures with a diameter of only 6 nanometers. They believe these are the smallest 3D transistors reported to date.
Such precise engineering enabled them to achieve a sharp switching slope and high current simultaneously. This is possible because of a phenomenon called quantum confinement.
Quantum confinement occurs when an electron is confined to a space that is so small that it can’t move around. When this happens, the effective mass of the electron and the properties of the material change, enabling stronger tunneling of the electron through a barrier.
Because the transistors are so small, the researchers can engineer a very strong quantum confinement effect while also fabricating an extremely thin barrier.
“We have a lot of flexibility to design these material heterostructures so we can achieve a very thin tunneling barrier, which enables us to get very high current,” Shao says.
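To give a rough sense of why a thinner barrier and a lighter effective mass matter, the sketch below applies the textbook WKB estimate for tunneling through a rectangular barrier; the barrier heights, widths, and effective-mass ratios are assumed, illustrative values, not the measured parameters of the MIT devices.
    import math

    HBAR = 1.054571817e-34   # reduced Planck constant, J*s
    M_E = 9.1093837015e-31   # free-electron mass, kg
    EV = 1.602176634e-19     # joules per electron-volt

    def wkb_transmission(barrier_ev, width_nm, m_eff_over_m0):
        """Approximate tunneling probability through a rectangular barrier of
        height barrier_ev (eV) and width width_nm (nm) for carriers with the
        given effective-mass ratio. A textbook estimate, not a device model."""
        kappa = math.sqrt(2 * m_eff_over_m0 * M_E * barrier_ev * EV) / HBAR
        return math.exp(-2 * kappa * width_nm * 1e-9)

    # Illustrative numbers only: thinning the barrier and lowering the effective
    # mass each raise the tunneling probability by orders of magnitude.
    for width_nm in (3.0, 1.5):
        for m_ratio in (0.10, 0.03):
            print(f"width {width_nm} nm, m*/m0 {m_ratio}: "
                  f"T ≈ {wkb_transmission(0.3, width_nm, m_ratio):.1e}")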
Precisely fabricating devices that were small enough to accomplish this was a major challenge.
“We are really into single-nanometer dimensions with this work. Very few groups in the world can make good transistors in that range. Yanjie is extraordinarily capable to craft such well-functioning transistors that are so extremely small,” says del Alamo.
When the researchers tested their devices, the switching slope was sharper than the fundamental limit that conventional silicon transistors can achieve. Their devices also performed about 20 times better than similar tunneling transistors.
“This is the first time we have been able to achieve such sharp switching steepness with this design,” Shao adds.
The researchers are now striving to enhance their fabrication methods to make transistors more uniform across an entire chip. With such small devices, even a 1-nanometer variance can change the behavior of the electrons and affect device operation. They are also exploring vertical fin-shaped structures, in addition to vertical nanowire transistors, which could potentially improve the uniformity of devices on a chip.
“This work definitively steps in the right direction, significantly improving the broken-gap tunnel field effect transistor (TFET) performance. It demonstrates steep-slope together with a record drive-current. It highlights the importance of small dimensions, extreme confinement, and low-defectivity materials and interfaces in the fabricated broken-gap TFET. These features have been realized through a well-mastered and nanometer-size-controlled process,” says Aryan Afzalian, a principal member of the technical staff at the nanoelectronics research organization imec, who was not involved with this work.
This research is funded, in part, by Intel Corporation.
Finding a sweet spot between radical and relevant
As he invents programmable materials and self-organizing systems, Skylar Tibbits is pushing design boundaries while also solving real-world problems.
While working as a lecturer in MIT’s Department of Architecture, Skylar Tibbits SM ’10 was also building art installations in galleries all over the world. Most of these installations featured complex structures created from algorithmically designed and computationally fabricated parts, building off Tibbits’ graduate work at the Institute.
Late one night in 2011 he was working with his team for hours — painstakingly riveting and bolting together thousands of tiny parts — to install a corridor-spanning work called VoltaDom at MIT for the Institute’s 150th anniversary celebration.
“There was a moment during the assembly when I realized this was the opposite of what I was interested in. We have elegant code for design and fabrication, but we didn’t have elegant code for construction. How can we promote things to build themselves? That is where the research agenda for my lab really came into being,” he says.
Tibbits, now a tenured associate professor of design research, co-directs the Self-Assembly Lab in the Department of Architecture, where he and his collaborators study self-organizing systems, programmable materials, and transformable structures that respond to their environments.
His research covers a diverse range of projects, including furniture that autonomously assembles from parts dropped into a water tank, rapid 3D printing with molten aluminum, and programmable textiles that sense temperature and automatically adjust to cool the body.
“If you were to ask someone on the street about self-assembly, they probably think of IKEA. But that is not what we mean. I am not the ‘self’ that is going to assemble something. Instead, the parts should build themselves,” he says.
Creative foundations
As a child growing up near Philadelphia, the hands-on Tibbits did like to build things manually. He took a keen interest in art and design, inspired by his aunt and uncle who were both professional artists, and his grandfather, who worked as an architect.
Tibbits decided to study architecture at Philadelphia University (now called Thomas Jefferson University) and chose the institution based on his grandfather’s advice to pick a college that was strong in design.
“At that time, I didn’t really know what that meant,” he recalls, but it was good advice. Being able to think like a designer helped form his career trajectory and continues to fuel the work he and his collaborators do in the Self-Assembly Lab.
While he was studying architecture, the digitization boom was changing many aspects of the field. Initially he and his classmates were drafting by hand, but software and digital fabrication equipment soon overtook traditional methods.
Wanting to get ahead of the curve, Tibbits taught himself to code. He used equipment in a sign shop owned by the father of classmate Jared Laucks (who is now a research scientist and co-director of the Self-Assembly Lab) to digitally fabricate objects before their school had the necessary machines.
Looking to further his education, Tibbits decided to pursue graduate studies at MIT because he wanted to learn computation from full-time computer scientists rather than architects teaching digital tools.
“I wanted to learn a different discipline and really enter a different world. That is what brought me to MIT, and I never left,” he says.
Tibbits earned dual master’s degrees in computer science and design and computation, delving deeper into the theory of computation and the question of what it means to compute. He became interested in the challenge of embedding information into our everyday world.
One of his most influential experiences as a graduate student was a series of projects he worked on in the Center for Bits and Atoms that involved building reconfigurable robots.
“I wanted to figure out how to program materials to change shape, change properties, or assemble themselves,” he says.
He was pondering these questions as he graduated from MIT and joined the Institute as a lecturer, teaching studios and labs in the Department of Architecture. Eventually, he decided to become a research scientist so he could run a lab of his own.
“I had some prior experience in architectural practice, but I was really fascinated by what I was doing at MIT. It seemed like there were a million things I wanted to work on, so staying here to teach and do research was the perfect opportunity,” he says.
Launching a lab
As he was forming the Self-Assembly Lab, Tibbits had a chance meeting with someone wearing a Stratasys t-shirt at Flour Bakery and Café, near campus. (Stratasys is a manufacturer of 3D printers.)
A lightbulb went off in his head.
“I asked them, why can’t I print a material that behaves like a robot and just walks off the machine? Why can’t I print robots without adding electronics or motors or wires or mechanisms?” he says.
That idea gave rise to one of his lab’s earliest projects: 4D printing. The process involves using a multimaterial 3D printer to print objects designed to sense, actuate, and transform themselves over time.
To accomplish this, Tibbits and his team link material properties with a certain activation energy. For instance, moisture will transform cellulose, and temperature will activate polymers. The researchers fabricate materials into certain geometries so they can leverage these activation energies to transform the material in predictable and precise ways.
“It is almost like making everything a ‘smart’ material,” he says.
The lab’s initial 4D printing work has evolved to include different materials, such as textiles, and has led the team to invent new printing processes, such as rapid liquid printing and liquid metal printing.
They have used 4D printing in many applications, often working with industry partners. For instance, they collaborated with Airbus to develop thin blades that can fold and curl themselves to control the airflow to an airplane’s engine.
On an even greater scale, the team also embarked on a multiyear project in 2015 with the organization Invena in the Maldives to leverage self-assembly to “grow” small islands and rebuild beaches, which could help protect this archipelago from rising seas.
To do this, they fabricate submersible devices that, based on their geometry and the natural forces of the ocean like wave energy and tides, promote the accumulation of sand in specific areas to become sand bars.
They have now created nine field installations in the Maldives, the largest of which measures approximately 60 square meters. The end goal is to promote the self-organization of sand into protective barriers against sea level rise, rebuild beaches to fight erosion, and eliminate the need to dredge for land reclamation.
They are now working on similar projects in Iceland with J. Jih, associate professor of the practice in architectural design at MIT, looking at mountain erosion and volcanic lava flows. Tibbits foresees many potential applications for self-assembly in natural environments.
“There are almost an unlimited number of places, and an unlimited number of forces that we could harness to tackle big, important problems, whether it is beach erosion or protecting communities from volcanoes,” he says.
Blending the radical and the relevant
Self-organizing sand bars are a prime example of a project that combines a radical idea with a relevant application, Tibbits says. He strives to find projects that strike that balance, pushing boundaries while still solving real-world problems.
Working with brilliant and passionate researchers in the Self-Assembly Lab helps Tibbits stay inspired and creative as they launch new projects aimed at tackling big problems.
He feels especially passionate about his role as a teacher and mentor. In addition to teaching three or four courses each year, he directs the undergraduate design program at MIT.
Any MIT student can choose to major or minor in design, and the program focuses on many aspects and types of design to give students a broad foundation they can apply in their future careers.
“I am passionate about creating polymath designers at MIT who can apply design to any other discipline, and vice-versa. I think my lab is the ethos of that, where we take creative approaches and apply them to research, and where we apply new principles from different disciplines to create new forms of design,” says Tibbits, who is also the assistant director for education at the Morningside Academy for Design.
Outside the lab and classroom, Tibbits often finds inspiration by spending time on the water. He lives at the beach on the North Shore of Massachusetts and is a surfer, a hobby he had dabbled in during his youth, but which really took hold after he moved to the Bay State for graduate school.
“It is such an amazing sport to keep you in tune with the forces of the ocean. You can’t control the environment, so to ride a wave you have to find a way to harness it,” he says.
3 Questions: Can we secure a sustainable supply of nickel?
Extraction of nickel, an essential component of clean energy technologies, needs stronger policies to protect local environments and communities, MIT researchers say.
As the world strives to cut back on carbon emissions, demand for minerals and metals needed for clean energy technologies is growing rapidly, sometimes straining existing supply chains and harming local environments. In a new study published today in Joule, Elsa Olivetti, a professor of materials science and engineering and director of the Decarbonizing Energy and Industry mission within MIT’s Climate Project, along with recent graduates Basuhi Ravi PhD ’23 and Karan Bhuwalka PhD ’24 and nine others, examines the case of nickel, which is an essential element for some electric vehicle batteries and parts of some solar panels and wind turbines.
How robust is the supply of this vital metal, and what are the implications of its extraction for the local environments, economies, and communities in the places where it is mined? MIT News asked Olivetti, Ravi, and Bhuwalka to explain their findings.
Q: Why is nickel becoming more important in the clean energy economy, and what are some of the potential issues in its supply chain?
Olivetti: Nickel is increasingly important for its role in EV batteries, as well as other technologies such as wind and solar. For batteries, high-purity nickel sulfate is a key input to the cathodes of EV batteries, which enables high energy density in batteries and increased driving range for EVs. As the world transitions away from fossil fuels, the demand for EVs, and consequently for nickel, has increased dramatically and is projected to continue to do so.
The nickel supply chain for battery-grade nickel sulfate includes mining nickel from ore deposits, processing it to a suitable nickel intermediary, and refining it to nickel sulfate. The potential issues in the supply chain can be broadly described as land use concerns in the mining stage, and emissions concerns in the processing stage. This is obviously oversimplified, but as a basic structure for our inquiry we thought about it this way. Nickel mining is land-intensive, leading to deforestation, displacement of communities, and potential contamination of soil and water resources from mining waste. In the processing step, the use of fossil fuels leads to direct emissions including particulate matter and sulfur oxides. In addition, some emerging processing pathways are particularly energy-intensive, which can double the carbon footprint of nickel-rich batteries compared to the current average.
Q: What is Indonesia’s role in the global nickel supply, and what are the consequences of nickel extraction there and in other major supply countries?
Ravi: Indonesia plays a critical role in nickel supply, holding the world's largest nickel reserves and supplying nearly half of the globally mined nickel in 2023. The country's nickel production has seen a remarkable tenfold increase since 2016. This production surge has fueled economic growth in some regions, but also brought notable environmental and social impacts to nickel mining and processing areas.
Nickel mining expansion in Indonesia has been linked to health impacts due to air pollution in the islands where nickel processing is prominent, as well as deforestation in some of the most biodiversity-rich locations on the planet. Reports of displacement of indigenous communities, land grabbing, water rights issues, and inadequate job quality in and around mines further highlight the social concerns and unequal distribution of burdens and benefits in Indonesia. Similar concerns exist in other major nickel-producing countries, where mining activities can negatively impact the environment, disrupt livelihoods, and exacerbate inequalities.
On a global scale, Indonesia’s reliance on coal-based energy for nickel processing, particularly in energy-intensive smelting and leaching of a clay-like material called laterite, results in a high carbon intensity for nickel produced in the region, compared to other major producing regions such as Australia.
Q: What role can industry and policymakers play in helping to meet growing demand while improving environmental safety?
Bhuwalka: In consuming countries, policies can foster “discerning demand,” which means creating incentives for companies to source nickel from producers that prioritize sustainability. This can be achieved through regulations that establish acceptable environmental footprints for imported materials, such as limits on carbon emissions from nickel production. For example, the EU’s Critical Raw Materials Act and the U.S. Inflation Reduction Act could be leveraged to promote responsible sourcing. Additionally, governments can use their purchasing power to favor sustainably produced nickel in public procurement, which could influence industry practices and encourage the adoption of sustainability standards.
On the supply side, nickel-producing countries like Indonesia can implement policies to mitigate the adverse environmental and social impacts of nickel extraction. This includes strengthening environmental regulations and enforcement to reduce the footprint of mining and processing, potentially through stricter pollution limits and responsible mine waste management. In addition, supporting community engagement, implementing benefit-sharing mechanisms, and investing in cleaner nickel processing technologies are also crucial.
Internationally, harmonizing sustainability standards and facilitating capacity building and technology transfer between developed and developing countries can create a level playing field and prevent unsustainable practices. Responsible investment practices by international financial institutions, favoring projects that meet high environmental and social standards, can also contribute to a stable and sustainable nickel supply chain.
Revealing causal links in complex systems
MIT engineers’ algorithm may have wide impact, from forecasting climate to projecting population growth to designing efficient aircraft.
Getting to the heart of causality is central to understanding the world around us. What causes one variable — be it a biological species, a voting region, a company stock, or a local climate — to shift from one state to another can inform how we might shape that variable in the future.
But tracing an effect to its root cause can quickly become intractable in real-world systems, where many variables can converge, confound, and cloud over any causal links.
Now, a team of MIT engineers hopes to provide some clarity in the pursuit of causality. They developed a method that can be applied to a wide range of situations to identify those variables that likely influence other variables in a complex system.
The method, in the form of an algorithm, takes in data that have been collected over time, such as the changing populations of different species in a marine environment. From those data, the method measures the interactions between every variable in a system and estimates the degree to which a change in one variable (say, the number of sardines in a region over time) can predict the state of another (such as the population of anchovy in the same region).
The engineers then generate a “causality map” that links variables that likely have some sort of cause-and-effect relationship. The algorithm determines the specific nature of that relationship, such as whether two variables are synergistic — meaning one variable only influences another if it is paired with a second variable — or redundant, such that a change in one variable can have exactly the same, and therefore redundant, effect as another variable.
The new algorithm can also make an estimate of “causal leakage,” or the degree to which a system’s behavior cannot be explained through the variables that are available; some unknown influence must be at play, and therefore, more variables must be considered.
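To make the idea concrete, the sketch below, a deliberately simplified illustration rather than the team’s released code, scores each ordered pair of variables in a toy two-species system by how much the past of one variable tells you about the present of the other, using a histogram estimate of lagged mutual information; the function name and the sardine/anchovy toy data are invented for the example.
    import numpy as np

    def lagged_mutual_information(x, y, lag=1, bins=8):
        """Mutual information, in bits, between x[t-lag] and y[t], estimated from
        a 2D histogram; a simple stand-in for 'how much does knowing the past of
        x improve a prediction of y'."""
        x_past, y_now = x[:-lag], y[lag:]
        joint, _, _ = np.histogram2d(x_past, y_now, bins=bins)
        p_xy = joint / joint.sum()
        p_x = p_xy.sum(axis=1, keepdims=True)
        p_y = p_xy.sum(axis=0, keepdims=True)
        nz = p_xy > 0
        return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])))

    # Toy data: "anchovy" tracks "sardine" with a one-step delay plus noise.
    rng = np.random.default_rng(0)
    sardine = rng.normal(size=5000)
    anchovy = 0.9 * np.roll(sardine, 1) + rng.normal(scale=0.5, size=5000)
    series = {"sardine": sardine, "anchovy": anchovy}

    causality_map = {
        (src, dst): round(lagged_mutual_information(series[src], series[dst]), 3)
        for src in series for dst in series if src != dst
    }
    print(causality_map)  # the sardine -> anchovy score should clearly dominate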
“The significance of our method lies in its versatility across disciplines,” says Álvaro Martínez-Sánchez, a graduate student in MIT’s Department of Aeronautics and Astronautics (AeroAstro). “It can be applied to better understand the evolution of species in an ecosystem, the communication of neurons in the brain, and the interplay of climatological variables between regions, to name a few examples.”
For their part, the engineers plan to use the algorithm to help solve problems in aerospace, such as identifying features in aircraft design that can reduce a plane’s fuel consumption.
“We hope by embedding causality into models, it will help us better understand the relationship between design variables of an aircraft and how it relates to efficiency,” says Adrián Lozano-Durán, an associate professor in AeroAstro.
The engineers, along with MIT postdoc Gonzalo Arranz, have published their results in a study appearing today in Nature Communications.
Seeing connections
In recent years, a number of computational methods have been developed to take in data about complex systems and identify causal links between variables in the system, based on certain mathematical descriptions that should represent causality.
“Different methods use different mathematical definitions to determine causality,” Lozano-Durán notes. “There are many possible definitions that all sound ok, but they may fail under some conditions.”
In particular, he says that existing methods are not designed to tell the difference between certain types of causality. Namely, they don’t distinguish a “unique” causality, in which one variable has a unique effect on another apart from every other variable, from a “synergistic” or a “redundant” link. An example of a synergistic causality would be if one variable (say, the action of drug A) had no effect on another variable (a person’s blood pressure) unless the first variable was paired with a second (drug B).
An example of redundant causality would be if one variable (a student’s work habits) affects another variable (their chance of getting good grades), but that effect has the same impact as another variable (the amount of sleep the student gets).
“Other methods rely on the intensity of the variables to measure causality,” adds Arranz. “Therefore, they may miss links between variables whose intensity is not strong yet which are important.”
Messaging rates
In their new approach, the engineers took a page from information theory — the science of how messages are communicated through a network, based on a theory formulated by the late MIT professor emeritus Claude Shannon. The team developed an algorithm to evaluate any complex system of variables as a messaging network.
“We treat the system as a network, and variables transfer information to each other in a way that can be measured,” Lozano-Durán explains. “If one variable is sending messages to another, that implies it must have some influence. That’s the idea of using information propagation to measure causality.”
The new algorithm evaluates multiple variables simultaneously, rather than taking on one pair of variables at a time, as other methods do. The algorithm defines information as the likelihood that a change in one variable will also see a change in another. This likelihood — and therefore, the information that is exchanged between variables — can get stronger or weaker as the algorithm evaluates more data of the system over time.
In the end, the method generates a map of causality that shows which variables in the network are strongly linked. From the rate and pattern of these links, the researchers can then distinguish which variables have a unique, synergistic, or redundant relationship. By this same approach, the algorithm can also estimate the amount of “causality leak” in the system, meaning the degree to which a system’s behavior cannot be predicted based on the information available.
“Part of our method detects if there’s something missing,” Lozano-Durán says. “We don’t know what is missing, but we know we need to include more variables to explain what is happening.”
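For a rough flavor of that decomposition, the sketch below splits the information two candidate causes carry about a target into redundant, unique, and synergistic parts on a synthetic example, and treats whatever the target does that the causes cannot explain as leak; it uses a crude min-of-mutual-informations convention for redundancy, a common simplification rather than the definitions used in the paper.
    import numpy as np

    def entropy(*columns):
        """Joint Shannon entropy, in bits, of one or more discrete columns."""
        joint = np.stack(columns, axis=1)
        _, counts = np.unique(joint, axis=0, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def mutual_info(sources, target):
        """I(sources ; target) for discrete columns."""
        return entropy(*sources) + entropy(target) - entropy(*sources, target)

    rng = np.random.default_rng(1)
    a = rng.integers(0, 2, 50_000)
    b = rng.integers(0, 2, 50_000)
    unexplained = rng.random(50_000) < 0.1            # 10% of samples are pure noise
    y = np.where(unexplained, rng.integers(0, 2, 50_000), a ^ b)  # needs a and b jointly

    i_a, i_b, i_ab = mutual_info((a,), y), mutual_info((b,), y), mutual_info((a, b), y)
    redundant = min(i_a, i_b)
    unique_a, unique_b = i_a - redundant, i_b - redundant
    synergistic = i_ab - unique_a - unique_b - redundant
    leak = entropy(y) - i_ab                          # behavior not explained by a and b
    print(f"unique(a)={unique_a:.3f} unique(b)={unique_b:.3f} "
          f"redundant={redundant:.3f} synergistic={synergistic:.3f} leak={leak:.3f}")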
The team applied the algorithm to a number of benchmark cases that are typically used to test causal inference, ranging from observations of predator-prey interactions over time to measurements of air temperature and pressure in different geographic regions to the co-evolution of multiple species in a marine environment. The algorithm successfully identified causal links in every case, whereas most existing methods can handle only some of them.
The method, which the team coined SURD, for Synergistic-Unique-Redundant Decomposition of causality, is available online for others to test on their own systems.
“SURD has the potential to drive progress across multiple scientific and engineering fields, such as climate research, neuroscience, economics, epidemiology, social sciences, and fluid dynamics, among other areas,” Martínez-Sánchez says.
This research was supported, in part, by the National Science Foundation.
Making agriculture more resilient to climate change
Researchers across MIT are working on ways to boost food production and help crops survive drought.
As Earth’s temperature rises, agricultural practices will need to adapt. Droughts will likely become more frequent, and some land may no longer be arable. On top of that is the challenge of feeding an ever-growing population without expanding the production of fertilizer and other agrochemicals, which have a large carbon footprint that is contributing to the overall warming of the planet.
Researchers across MIT are taking on these agricultural challenges from a variety of angles, from engineering plants that sound an alarm when they’re under stress to making seeds more resilient to drought. These types of technologies, and more yet to be devised, will be essential to feed the world’s population as the climate changes.
“After water, the first thing we need is food. In terms of priority, there is water, food, and then everything else. As we are trying to find new strategies to support a world of 10 billion people, it will require us to invent new ways of making food,” says Benedetto Marelli, an associate professor of civil and environmental engineering at MIT.
Marelli is the director of one of the six missions of the recently launched Climate Project at MIT, which focus on research areas such as decarbonizing industry and building resilient cities. Marelli directs the Wild Cards mission, which aims to identify unconventional solutions that are high-risk and high-reward.
Drawing on expertise from a breadth of fields, MIT is well-positioned to tackle the challenges posed by climate change, Marelli says. “Bringing together our strengths across disciplines, including engineering, processing at scale, biological engineering, and infrastructure engineering, along with humanities, science, and economics, presents a great opportunity.”
Protecting seeds from drought
Marelli, who began his career as a biomedical engineer working on regenerative medicine, is now developing ways to boost crop yields by helping seeds to survive and germinate during drought conditions, or in soil that has been depleted of nutrients. To achieve that, he has devised seed coatings, based on silk and other polymers, that can envelop and nourish seeds during the critical germination process.
In healthy soil, plants have access to nitrogen, phosphates, and other nutrients that they need, many of which are supplied by microbes that live in the soil. However, in soil that has suffered from drought or overfarming, these nutrients are lacking. Marelli’s idea was to coat the seeds with a polymer that can be embedded with plant-growth-promoting bacteria that “fix” nitrogen by absorbing it from the air and making it available to plants. The microbes can also make other necessary nutrients available to plants.
For the first generation of the seed coatings, he embedded these microbes in coatings made of silk — a material that he had previously shown can extend the shelf life of produce, meat, and other foods. In his lab at MIT, Marelli has shown that the seed coatings can help germinating plants survive drought, ultraviolet light exposure, and high salinity.
Now, working with researchers at the Mohammed VI Polytechnic University in Morocco, he is adapting the approach to crops native to Morocco, a country that has experienced six consecutive years of drought due to a drop in rainfall linked to climate change.
For these studies, the researchers are using a biopolymer coating derived from food waste that can be easily obtained in Morocco, instead of silk.
“We’re working with local communities to extract the biopolymers, to try to have a process that works at scale so that we make materials that work in that specific environment,” Marelli says. “We may come up with an idea here at MIT within a high-resource environment, but then to work there, we need to talk with the local communities, with local stakeholders, and use their own ingenuity and try to match our solution with something that could actually be applied in the local environment.”
Microbes as fertilizers
Whether they are experiencing drought or not, crops grow much better when synthetic fertilizers are applied. Although it’s essential to most farms, applying fertilizer is expensive and has environmental consequences. Most of the world’s fertilizer is produced using the Haber-Bosch process, which converts nitrogen and hydrogen to ammonia at high temperatures and pressures. This energy-intensive process accounts for about 1.5 percent of the world’s greenhouse gas emissions, and the transportation required to deliver it to farms around the world adds even more emissions.
Ariel Furst, the Paul M. Cook Career Development Assistant Professor of Chemical Engineering at MIT, is developing a microbial alternative to the Haber-Bosch process. Some farms have experimented with applying nitrogen-fixing bacteria directly to the roots of their crops, which has shown some success. However, the microbes are too delicate to be stored long-term or shipped anywhere, so they must be produced in a bioreactor on the farm.
To overcome those challenges, Furst has developed a way to coat the microbes with a protective shell that prevents them from being destroyed by heat or other stresses. The coating also protects microbes from damage caused by freeze-drying — a process that would make them easier to transport.
The coatings can vary in composition, but they all consist of two components. One is a metal such as iron, manganese, or zinc, and the other is a polyphenol — a type of plant-derived organic compound that includes tannins and other antioxidants. These two components self-assemble into a protective shell that encapsulates bacteria.
“These microbes would be delivered with the seeds, so it would remove the need for fertilizing mid-growing. It also reduces the cost and provides more autonomy to the farmers and decreases carbon emissions associated with agriculture,” Furst says. “We think it’ll be a way to make agriculture completely regenerative, so to bring back soil health while also boosting crop yields and the nutrient density of the crops.”
Furst has founded a company called Seia Bio, which is working on commercializing the coated microbes and has begun testing them on farms in Brazil. In her lab, Furst is also working on adapting the approach to coat microbes that can capture carbon dioxide from the atmosphere and turn it into limestone, which helps to raise the soil pH.
“It can help change the pH of soil to stabilize it, while also being a way to effectively perform direct air capture of CO2,” she says. “Right now, farmers may truck in limestone to change the pH of soil, and so you’re creating a lot of emissions to bring something in that microbes can do on their own.”
Distress sensors for plants
Several years ago, Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT, began to explore the idea of using plants themselves as sensors that could reveal when they’re in distress. When plants experience drought, attack by pests, or other kinds of stress, they produce hormones and other signaling molecules to defend themselves.
Strano, whose lab specializes in developing tiny sensors for a variety of molecules, wondered if such sensors could be deployed inside plants to pick up those distress signals. To create their sensors, Strano’s lab takes advantage of the special properties of single-walled carbon nanotubes, which emit fluorescent light. By wrapping the tubes with different types of polymers, the sensors can be tuned to detect specific targets, giving off a fluorescent signal when the target is present.
For use in plants, Strano and his colleagues created sensors that could detect signaling molecules such as salicylic acid and hydrogen peroxide. They then showed that these sensors could be inserted into the underside of plant leaves, without harming the plants. Once embedded in the mesophyll of the leaves, the sensors can pick up a variety of signals, which can be read with an infrared camera.
These sensors can reveal, in real time, whether a plant is experiencing any of a variety of stresses. Until now, there hasn’t been a way to get that information fast enough for farmers to act on it.
“What we’re trying to do is make tools that get information into the hands of farmers very quickly, fast enough for them to make adaptive decisions that can increase yield,” Strano says. “We’re in the middle of a revolution of really understanding the way in which plants internally communicate and communicate with other plants.”
This kind of sensing could be deployed in fields, where it could help farmers respond more quickly to drought and other stresses, or in greenhouses, vertical farms, and other types of indoor farms that use technology to grow crops in a controlled environment.
Much of Strano’s work in this area has been conducted with the support of the U.S. Department of Agriculture (USDA) and as part of the Disruptive and Sustainable Technologies for Agricultural Precision (DiSTAP) program at the Singapore-MIT Alliance for Research and Technology (SMART), and sensors have been deployed in tests in crops at a controlled environment farm in Singapore called Growy.
“The same basic kinds of tools can help detect problems in open field agriculture or in controlled environment agriculture,” Strano says. “They both suffer from the same problem, which is that the farmers get information too late to prevent yield loss.”
Reducing pesticide use
Pesticides represent another huge financial expense for farmers: Worldwide, farmers spend about $60 billion per year on pesticides. Much of this pesticide ends up accumulating in water and soil, where it can harm many species, including humans. But, without using pesticides, farmers may lose more than half of their crops.
Kripa Varanasi, an MIT professor of mechanical engineering, is working on tools that can help farmers measure how much pesticide is reaching their plants, as well as technologies that can help pesticides adhere to plants more efficiently, reducing the amount that runs off into soil and water.
Varanasi, whose research focuses on interactions between liquid droplets and surfaces, began to think about applying his work to agriculture more than a decade ago, after attending a conference at the USDA. There, he was inspired to begin developing ways to improve the efficiency of pesticide application by optimizing the interactions that occur at leaf surfaces.
“Billions of drops of pesticide are being sprayed on every acre of crop, and only a small fraction is ultimately reaching and staying on target. This seemed to me like a problem that we could help to solve,” he says.
Varanasi and his students began exploring strategies to make drops of pesticide stick to leaves better, instead of bouncing off. They found that if they added polymers with positive and negative charges, the oppositely charged droplets would form a hydrophilic (water-attracting) coating on the leaf surface, which helps the next droplets applied to stick to the leaf.
Later, they developed an easier-to-use technology in which a surfactant is added to the pesticide before spraying. When this mixture is sprayed through a special nozzle, it forms tiny droplets that are “cloaked” in surfactant. The surfactant helps the droplets to stick to the leaves within a few milliseconds, without bouncing off.
In 2020, Varanasi and Vishnu Jayaprakash SM ’19, PhD ’22 founded a company called AgZen to commercialize their technologies and get them into the hands of farmers. They incorporated their ideas for improving pesticide adhesion into a product called EnhanceCoverage.
During the testing for this product, they realized that there weren’t any good ways to measure how many of the droplets were staying on the plant. That led them to develop a product known as RealCoverage, which is based on machine vision. It can be attached to any pesticide sprayer and offer real-time feedback on what percentage of the pesticide droplets are sticking to and staying on every leaf.
RealCoverage was used on 65,000 acres of farmland across the United States in 2024, from soybeans in Iowa to cotton in Georgia. Farmers who used the product were able to reduce their pesticide use by 30 to 50 percent, by using the data to optimize delivery and, in some cases, even change what chemicals were sprayed.
Varanasi hopes that the EnhanceCoverage product, which is expected to become available in 2025, will help farmers further reduce their pesticide use.
“Our mission here is to help farmers with savings while helping them achieve better yields. We have found a way to do all this while also reducing waste and the amount of chemicals that we put into our atmosphere and into our soils and into our water,” Varanasi says. “This is the MIT approach: to figure out what are the real issues and how to come up with solutions. Now we have a tool and I hope that it’s deployed everywhere and everyone gets the benefit from it.”
Presidential portrait of L. Rafael Reif unveiled
A reception at Gray House honored MIT’s 17th president, who led the Institute for more than 10 years.
A new portrait marks the legacy of L. Rafael Reif, MIT’s president from 2012 to 2022. Painted by Jon Friedman, the portrait was unveiled at a recent gathering at Gray House, where portraits of many of Reif’s predecessors also adorn the walls.
The unveiling served as something of a reunion for many MIT faculty and staff members who had worked closely with Reif at various points in his four decades at MIT, especially his decade as president. It also featured several generations of the Reif family and special friends such as cellist Yo-Yo Ma. Susan Whitehead, a life member of the MIT Corporation and life board member of the Whitehead Institute, and Ray Stata ’57, SM ’58, co-founder of Analog Devices, gave remarks honoring Reif and his impact at the Institute.
MIT President Sally Kornbluth opened the event by welcoming the audience to the president’s residence on campus.
“As we all know, Gray House belongs to the MIT community, which means that each family who lives here takes responsibility for stewarding the place for the future. Which, in a grander sense, is a pretty good way of describing what it means to be president of MIT,” she said.
Applauding the “many grand things he set in motion,” Kornbluth described several of Reif’s impactful achievements as MIT’s 17th president, such as establishing the MIT Schwarzman College of Computing, leading the revitalization of Kendall Square, and envisioning and launching The Engine, MIT’s venture firm for “tough tech.”
“Each of those achievements helped prime MIT for the future, and each one has had powerful positive effects well beyond our community too,” Kornbluth said, noting that the term “tough tech” didn’t even exist before the establishment of The Engine.
“MIT has been an exceptional place from the very start, and it has had quite a few visionary presidents. But there is no question that MIT was more exceptional when Rafael finished than when he began. And we owe him a great debt of gratitude,” Kornbluth said.
More information about the Reif presidency can be found in this article written when Reif announced his decision to step down.
After the portrait was unveiled, Ma performed a short piece by Johann Sebastian Bach on the cello. Afterward, Stata offered a comprehensive personal and historical perspective on Reif’s wide-ranging contributions to MIT and the nation, including his key role in establishing MIT’s footing in the semiconductor landscape, and in demonstrating and advocating for the critical role of academic research in advancing the development of the U.S. semiconductor sector. Whitehead followed, highlighting a range of Reif’s accomplishments during his tenure as MIT president, including establishing the Institute for Medical Engineering and Science and MIT.nano, leading the Campaign for a Better World, overseeing the redevelopment of the Volpe Center in Kendall Square, and more.
“All of the above was made possible because you are a remarkable synthesizer and builder,” she said. “We watched as you grappled with questions, listened carefully, inside and outside of MIT, and then you moved. You were bold once you had synthesized. None of the above initiatives would have happened without your decisive big thinking.”
Whitehead also praised Reif’s kindness and empathy, noting the many decisions he oversaw to promote student wellbeing at MIT and acknowledging his leadership during difficult times, such as the death of MIT Police Officer Sean Collier. She closed by reminding the crowd of the Institute-wide farewell dance party he hosted as he stepped down.
When Reif took to the podium, he thanked the speakers as well as other members of the audience, including Corporation Life Member Fariborz Maseeh ScD ’90; Reif was the inaugural holder of the Fariborz Maseeh Professorship of Emerging Technology before becoming MIT’s president. He also thanked his wife, Christine — whose own portrait, also painted by Friedman, now hangs in the Emma Rogers Room (Room 10-340) — for her support.
Reif recalled some of his favorite memories of living at Gray House, including hosting his grandchildren for sleepovers at what they called “the Castle” and partaking in a snowball fight with students on Killian Court.
“Each and every one of you influenced my thinking, gave me intellectual breadth, suffered my sense of humor, and shaped the person I became,” Reif said. “So, whatever qualities you believe you see captured in Mr. Friedman’s portrait, please realize that all of you are represented there too, in your brilliance and your goodness. It has been a tremendous privilege to be part of the MIT family for all these years.”
“Wearable” devices for cells
By snugly wrapping around neurons, these devices could help scientists probe subcellular regions of the brain, and might even help restore some brain function.
Wearable devices like smartwatches and fitness trackers interact with parts of our bodies to measure and learn from internal processes, such as our heart rate or sleep stages.
Now, MIT researchers have developed wearable devices that may be able to perform similar functions for individual cells inside the body.
These battery-free, subcellular-sized devices, made of a soft polymer, are designed to gently wrap around different parts of neurons, such as axons and dendrites, without damaging the cells, when actuated wirelessly with light. By snugly wrapping neuronal processes, they could be used to measure or modulate a neuron’s electrical and metabolic activity at a subcellular level.
Because these devices are wireless and free-floating, the researchers envision that thousands of tiny devices could someday be injected and then actuated noninvasively using light. Researchers would precisely control how the wearables gently wrap around cells, by manipulating the dose of light shined from outside the body, which would penetrate the tissue and actuate the devices.
By enfolding axons that transmit electrical impulses between neurons and to other parts of the body, these wearables could help restore some neuronal degradation that occurs in diseases like multiple sclerosis. In the long run, the devices could be integrated with other materials to create tiny circuits that could measure and modulate individual cells.
“The concept and platform technology we introduce here is like a founding stone that brings about immense possibilities for future research,” says Deblina Sarkar, the AT&T Career Development Assistant Professor in the MIT Media Lab and Center for Neurobiological Engineering, head of the Nano-Cybernetic Biotrek Lab, and the senior author of a paper on this technique.
Sarkar is joined on the paper by lead author Marta J. I. Airaghi Leccardi, a former MIT postdoc who is now a Novartis Innovation Fellow; Benoît X. E. Desbiolles, an MIT postdoc; Anna Y. Haddad ’23, who was an MIT undergraduate researcher during the work; and MIT graduate students Baju C. Joy and Chen Song. The research appears today in Nature Communications Chemistry.
Snugly wrapping cells
Brain cells have complex shapes, which makes it exceedingly difficult to create a bioelectronic implant that can tightly conform to neurons or neuronal processes. For instance, axons are slender, tail-like structures that attach to the cell body of neurons, and their length and curvature vary widely.
At the same time, axons and other cellular components are fragile, so any device that interfaces with them must be soft enough to make good contact without harming them.
To overcome these challenges, the MIT researchers developed thin-film devices from a soft polymer called azobenzene that don’t damage the cells they enfold.
Due to a material transformation, thin sheets of azobenzene will roll when exposed to light, enabling them to wrap around cells. Researchers can precisely control the direction and diameter of the rolling by varying the intensity and polarization of the light, as well as the shape of the devices.
The thin films can form tiny microtubes with diameters that are less than a micrometer. This enables them to gently, but snugly, wrap around highly curved axons and dendrites.
“It is possible to very finely control the diameter of the rolling. You can stop it when you reach the particular dimension you want by tuning the light energy accordingly,” Sarkar explains.
The researchers experimented with several fabrication techniques to find a process that was scalable and wouldn’t require the use of a semiconductor clean room.
Making microscopic wearables
They begin by depositing a drop of azobenzene onto a sacrificial layer composed of a water-soluble material. Then the researchers press a stamp onto the drop of polymer to mold thousands of tiny devices on top of the sacrificial layer. The stamping technique enables them to create complex structures, from rectangles to flower shapes.
A baking step ensures all solvents are evaporated and then they use etching to scrape away any material that remains between individual devices. Finally, they dissolve the sacrificial layer in water, leaving thousands of microscopic devices freely floating in the liquid.
Once they have a solution of free-floating devices, they wirelessly actuate the devices with light to induce them to roll. They found that free-floating structures can maintain their shapes for days after illumination stops.
The researchers conducted a series of experiments to ensure the entire method is biocompatible.
After perfecting the use of light to control rolling, they tested the devices on rat neurons and found they could tightly wrap around even highly curved axons and dendrites without causing damage.
“To have intimate interfaces with these cells, the devices must be soft and able to conform to these complex structures. That is the challenge we solved in this work. We were the first to show that azobenzene could even wrap around living cells,” she says.
Among the biggest challenges they faced was developing a scalable fabrication process that could be performed outside a clean room. They also iterated on the ideal thickness for the devices, since making them too thick causes cracking when they roll.
Because azobenzene is an insulator, one direct application is using the devices as synthetic myelin for axons that have been damaged. Myelin is an insulating layer that wraps axons and allows electrical impulses to travel efficiently between neurons.
In demyelinating diseases like multiple sclerosis, neurons lose some of their insulating myelin sheaths, and there is no biological way of regenerating them. By acting as synthetic myelin, the wearables might help restore neuronal function in MS patients.
The researchers also demonstrated how the devices can be combined with optoelectrical materials that can stimulate cells. Moreover, atomically thin materials can be patterned on top of the devices, which can still roll to form microtubes without breaking. This opens up opportunities for integrating sensors and circuits in the devices.
In addition, because they make such a tight connection with cells, one could use very little energy to stimulate subcellular regions. This could enable a researcher or clinician to modulate electrical activity of neurons for treating brain diseases.
“It is exciting to demonstrate this symbiosis of an artificial device with a cell at an unprecedented resolution. We have shown that this technology is possible,” Sarkar says.
In addition to exploring these applications, the researchers want to try functionalizing the device surfaces with molecules that would enable them to target specific cell types or subcellular regions.
“This work is an exciting step toward new symbiotic neural interfaces acting at the level of the individual axons and synapses. When integrated with nanoscale 1- and 2D conductive nanomaterials, these light-responsive azobenzene sheets could become a versatile platform to sense and deliver different types of signals (i.e., electrical, optical, thermal, etc.) to neurons and other types of cells in a minimally or noninvasive manner. Although preliminary, the cytocompatibility data reported in this work is also very promising for future use in vivo,” says Flavia Vitale, associate professor of neurology, bioengineering, and physical medicine and rehabilitation at the University of Pennsylvania, who was not involved with this work.
The research was supported by the Swiss National Science Foundation and the U.S. National Institutes of Health Brain Initiative. This work was carried out, in part, through the use of MIT.nano facilities.
MIT to lead expansion of regional innovation network
National Science Foundation grant expected to help New England researchers translate discoveries to commercial technology.
The U.S. National Science Foundation (NSF) has selected MIT to lead a new Innovation Corps (I-Corps) Hub to support a partnership of eight New England universities committed to expanding science and technology entrepreneurship across the region, accelerating the translation of discoveries into new solutions that benefit society. NSF announced the five-year cooperative agreement of up to $15 million today.
The NSF I-Corps Hub: New England Region is expected to launch on Jan. 1, 2025. The seven institutions initially collaborating with MIT include Brown University, Harvard University, Northeastern University, Tufts University, University of Maine, University of Massachusetts Amherst, and the University of New Hampshire.
Established by the NSF in 2011, the I-Corps program provides scientists and engineers from any discipline with hands-on educational experiences to advance their research from lab to impact. There are more than 50,000 STEM researchers at the nearly 100 universities and medical schools in New England. Many of these institutions are located in underserved and rural areas of the region that face resource challenges in supporting deep-tech translational efforts. The eight institutions in the hub will offer I-Corps training while bringing unique strengths and resources to enhance a regional innovation ecosystem that broadens participation in deep-tech innovation.
“Now more than ever we need the innovative solutions that emerge from this type of collaboration to solve society’s greatest and most intractable challenges. Our collective sights are set on bolstering our regional and national innovation networks to accelerate the translation of fundamental research into commercialized technologies. MIT is eager to build on our ongoing work with NSF to further cultivate New England’s innovation hub,” says MIT Provost Cynthia Barnhart, the Abraham J. Siegel Professor of Management Science and professor of operations research, who is the principal investigator on the award.
The hub builds on 10 years of collaboration with other I-Corps Sites at institutions across the region and prior work from the MIT I-Corps Site program launched in 2014 and the I-Corps Node based at MIT established in 2018. More than 3,000 engineers and scientists in New England have participated in regional I-Corps programs. They have formed over 200 companies, which have raised $3.5 billion in grants and investments.
“The goal of the I-Corps program is to deploy experiential education to help researchers reduce the time necessary to translate promising ideas from laboratory benches to widespread implementation that in turn impacts economic growth regionally and nationally,” said Erwin Gianchandani, NSF assistant director for Technology, Innovation and Partnerships, in NSF’s announcement. “Each regional NSF I-Corps Hub provides training essential in entrepreneurship and customer discovery, leading to new products, startups, and jobs. In effect, we are investing in the next generation of entrepreneurs for our nation.”
One I-Corps success story comes from Shreya Dave PhD ’16, who participated in I-Corps training in 2016 with her colleagues to explore potential applications for a new graphene oxide filter technology developed through her research. Based on what they learned in the program and the evidence they collected, they shifted from desalination filters to applications in chemical processing and gained the confidence to launch Via Separations in 2017, focused on the tough tech challenge of industrial decarbonization. Via Separations, which was co-founded by Jeffrey Grossman, the Morton and Claire Goulder and Family Professor in Environmental Systems and professor of materials science and engineering, and Chief Technical Officer Brent Keller, has reached commercialization and is now delivering products to the pulp and paper industry.
“NSF I-Corps helped us refine our vision, figure out if our technology could be used for different applications, and helped us figure out if we can manufacture our technology in a scalable fashion — taking it from an academic project to a real-scale commercial project,” says Dave, who is the CEO and co-founder of Via Separations.
New England boasts a “highly developed ecosystem of startup resources, funders, founders, and talent,” says Roman Lubynsky, executive director of MIT’s current NSF I-Corps Node, who will serve as the director of the new hub. “However, innovation and entrepreneurship support has been unevenly distributed across the region. This new hub offers an exciting opportunity to collaborate with seven partner institutions to extend and further scale up this important work throughout the region.”
The I-Corps Hubs across the country form the backbone of the NSF National Innovation Network. This network connects universities, NSF researchers, entrepreneurs, regional communities, and federal agencies to help researchers bring their discoveries to the marketplace. Together, the hubs work to create a more inclusive and diverse innovation ecosystem, supporting researchers nationwide in transforming their ideas into real-world solutions.
Quantum simulator could help uncover materials for high-performance electronics
By emulating a magnetic field on a superconducting quantum computer, researchers can probe complex properties of materials.
Quantum computers hold the promise of emulating complex materials, helping researchers better understand the physical properties that arise from interacting atoms and electrons. This may one day lead to the discovery or design of better semiconductors, insulators, or superconductors that could be used to make ever faster, more powerful, and more energy-efficient electronics.
But some phenomena that occur in materials can be challenging to mimic using quantum computers, leaving gaps in the problems that scientists have explored with quantum hardware.
To fill one of these gaps, MIT researchers developed a technique to generate synthetic electromagnetic fields on superconducting quantum processors. The team demonstrated the technique on a processor comprising 16 qubits.
By dynamically controlling how the 16 qubits in their processor are coupled to one another, the researchers were able to emulate how electrons move between atoms in the presence of an electromagnetic field. Moreover, the synthetic electromagnetic field is broadly adjustable, enabling scientists to explore a range of material properties.
Emulating electromagnetic fields is crucial to fully explore the properties of materials. In the future, this technique could shed light on key features of electronic systems, such as conductivity, polarization, and magnetization.
“Quantum computers are powerful tools for studying the physics of materials and other quantum mechanical systems. Our work enables us to simulate much more of the rich physics that has captivated materials scientists,” says Ilan Rosen, an MIT postdoc and lead author of a paper on the quantum simulator.
The senior author is William D. Oliver, the Henry Ellis Warren Professor of Electrical Engineering and Computer Science and of Physics, director of the Center for Quantum Engineering, leader of the Engineering Quantum Systems group, and associate director of the Research Laboratory of Electronics. Oliver and Rosen are joined by co-authors in the departments of Electrical Engineering and Computer Science and of Physics, and at MIT Lincoln Laboratory. The research appears today in Nature Physics.
A quantum emulator
Companies like IBM and Google are striving to build large-scale digital quantum computers that hold the promise of outperforming their classical counterparts by running certain algorithms far more rapidly.
But that’s not all quantum computers can do. The dynamics of qubits and their couplings can also be carefully constructed to mimic the behavior of electrons as they move among atoms in solids.
“That leads to an obvious application, which is to use these superconducting quantum computers as emulators of materials,” says Jeffrey Grover, a research scientist at MIT and co-author on the paper.
Rather than trying to build large-scale digital quantum computers to solve extremely complex problems, researchers can use the qubits in smaller-scale quantum computers as analog devices to replicate a material system in a controlled environment.
“General-purpose digital quantum simulators hold tremendous promise, but they are still a long way off. Analog emulation is another approach that may yield useful results in the near-term, particularly for studying materials. It is a straightforward and powerful application of quantum hardware,” explains Rosen. “Using an analog quantum emulator, I can intentionally set a starting point and then watch what unfolds as a function of time.”
Despite the close analogy between quantum hardware and real materials, a few important ingredients of materials can’t be easily reproduced on quantum computing hardware. One such ingredient is a magnetic field.
In materials, electrons “live” in atomic orbitals. When two atoms are close to one another, their orbitals overlap and electrons can “hop” from one atom to another. In the presence of a magnetic field, that hopping behavior becomes more complex.
On a superconducting quantum computer, microwave photons hopping between qubits are used to mimic electrons hopping between atoms. But, because photons are not charged particles like electrons, the photons’ hopping behavior would remain the same in a physical magnetic field.
Since they can’t just turn on a magnetic field in their simulator, the MIT team employed a few tricks to synthesize the effects of one instead.
Tuning up the processor
The researchers adjusted how adjacent qubits in the processor were coupled to each other to create the same complex hopping behavior that electromagnetic fields cause in electrons.
To do that, they slightly changed the energy of each qubit by applying different microwave signals. Usually, researchers will set qubits to the same energy so that photons can hop from one to another. But for this technique, they dynamically varied the energy of each qubit to change how they communicate with each other.
By precisely modulating these energy levels, the researchers enabled photons to hop between qubits in the same complex manner that electrons hop between atoms in a magnetic field.
Plus, because they can finely tune the microwave signals, they can emulate a range of electromagnetic fields with different strengths and distributions.
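The essence of the scheme can be illustrated with a few lines of standard physics code. The sketch below is only an illustration written for this article, not the team’s control software or exact model: it builds a small lattice in which a single photon hops among 16 sites, and each hop in one direction carries a complex phase, so that circling a plaquette accumulates a flux-dependent phase just as an electron’s hopping does in a magnetic field. The lattice size, flux value, and coupling strength are arbitrary choices for the example.

```python
# Minimal sketch (not the team's code): one photon hopping on a small square
# lattice of qubits, where each hop in the y direction carries a "Peierls
# phase" so that circling a plaquette accumulates a phase 2*pi*alpha, exactly
# as a charged electron would in a magnetic field.
import numpy as np

nx, ny = 4, 4          # 4 x 4 lattice (16 sites, cf. the 16-qubit processor)
alpha = 0.25           # synthetic flux per plaquette, in flux quanta (assumed)
J = 1.0                # hopping strength, i.e., qubit-qubit coupling (assumed)

def idx(x, y):
    return x * ny + y

H = np.zeros((nx * ny, nx * ny), dtype=complex)
for x in range(nx):
    for y in range(ny):
        if x + 1 < nx:   # hop along x: real amplitude (Landau gauge)
            H[idx(x + 1, y), idx(x, y)] = -J
        if y + 1 < ny:   # hop along y: phase grows with x, encoding the flux
            H[idx(x, y + 1), idx(x, y)] = -J * np.exp(2j * np.pi * alpha * x)
H = H + H.conj().T       # make the Hamiltonian Hermitian

# Place a single excitation (photon) on a corner qubit and evolve it in time.
psi0 = np.zeros(nx * ny, dtype=complex)
psi0[idx(0, 0)] = 1.0
vals, vecs = np.linalg.eigh(H)
t = 3.0
psi_t = vecs @ (np.exp(-1j * vals * t) * (vecs.conj().T @ psi0))
print(np.round(np.abs(psi_t.reshape(nx, ny)) ** 2, 3))  # occupation map at time t
```

Changing alpha in a sketch like this plays the role that retuning the modulation amplitudes and frequencies plays on the hardware: the same device emulates a different field strength.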
The researchers ran several rounds of experiments to determine what energy to set for each qubit, how strongly to modulate it, and what microwave frequency to use.
“The most challenging part was finding modulation settings for each qubit so that all 16 qubits work at once,” Rosen says.
Once they arrived at the right settings, they confirmed that the dynamics of the photons uphold several equations that form the foundation of electromagnetism. They also demonstrated the “Hall effect,” a conduction phenomenon that exists in the presence of an electromagnetic field.
These results show that their synthetic electromagnetic field behaves like the real thing.
Moving forward, they could use this technique to precisely study complex phenomena in condensed matter physics, such as phase transitions that occur when a material changes from a conductor to an insulator.
“A nice feature of our emulator is that we need only change the modulation amplitude or frequency to mimic a different material system. In this way, we can scan over many materials properties or model parameters without having to physically fabricate a new device each time,” says Oliver.
While this work was an initial demonstration of a synthetic electromagnetic field, it opens the door to many potential discoveries, Rosen says.
“The beauty of quantum computers is that we can look at exactly what is happening at every moment in time on every qubit, so we have all this information at our disposal. We are in a very exciting place for the future,” he adds.
This work is supported, in part, by the U.S. Department of Energy, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. Army Research Office, the Oak Ridge Institute for Science and Education, the Office of the Director of National Intelligence, NASA, and the National Science Foundation.
Oceanographers record the largest predation event ever observed in the ocean
The scientists’ wide-scale acoustic mapping technique could help track vulnerable keystone species.
There is power in numbers, or so the saying goes. But in the ocean, scientists are finding that fish that group together don’t necessarily survive together. In some cases, the more fish there are, the larger a target they make for predators.
This is what MIT and Norwegian oceanographers observed recently when they explored a wide swath of ocean off the coast of Norway during the height of spawning season for capelin — a small Arctic fish about the size of an anchovy. Billions of capelin migrate each February from the edge of the Arctic ice sheet southward to the Norwegian coast, to lay their eggs. Norway’s coastline is also a stopover for capelin’s primary predator, the Atlantic cod. As cod migrate south, they feed on spawning capelin, though scientists have not measured this process over large scales until now.
Reporting their findings today in Nature Communications Biology, the MIT team captured interactions between individual migrating cod and spawning capelin over a huge spatial extent. Using a sonic-based wide-area imaging technique, they watched as randomly dispersed capelin began grouping together to form a massive shoal spanning tens of kilometers. As the capelin shoal formed a sort of ecological “hotspot,” the team observed individual cod begin to group together in response, forming a huge shoal of their own. The swarming cod overtook the capelin, quickly consuming over 10 million fish, estimated to be roughly half of the gathered prey.
The dramatic encounter, which took place over just a few hours, is the largest such predation event ever recorded, both in terms of the number of individuals involved and the area over which the event occurred.
This one event is unlikely to weaken the capelin population as a whole; the preyed-upon shoal represented just 0.1 percent of the capelin that spawn in the region. However, as climate change causes the Arctic ice sheet to retreat, capelin will have to swim farther to spawn, leaving the species more stressed and more vulnerable to natural predation events such as the one the team observed. Because capelin sustain many fish species, including cod, continuously monitoring their behavior, at a resolution approaching that of individual fish and across scales spanning tens of thousands of square kilometers, will help efforts to maintain the species and the health of the ocean overall.
“In our work we are seeing that natural catastrophic predation events can change the local predator-prey balance in a matter of hours,” says Nicholas Makris, professor of mechanical and ocean engineering at MIT. “That’s not an issue for a healthy population with many spatially distributed population centers or ecological hotspots. But as the number of these hotspots decreases due to climate and anthropogenic stresses, the kind of natural ‘catastrophic’ predation event we witnessed of a keystone species could lead to dramatic consequences for that species, as well as the many species dependent on them.”
Makris’ co-authors on the paper are Shourav Pednekar and Ankita Jain at MIT, and Olav Rune Godø of the Institute of Marine Research in Norway.
Bell sounds
For their new study, Makris and his colleagues reanalyzed data that they gathered during a cruise in February of 2014 to the Barents Sea, off the coast of Norway. During that cruise, the team deployed the Ocean Acoustic Waveguide Remote Sensing (OAWRS) system — a sonic imaging technique that employs a vertical acoustic array, attached to the bottom of a boat, to send sound waves down into the ocean and out in all directions. These waves can travel over large distances as they bounce off any obstacles or fish in their path.
The same or a second boat, towing an array of acoustic receivers, continuously picks up the scattered and reflected waves, from as far as many tens of kilometers away. Scientists can then analyze the collected waveforms to create instantaneous maps of the ocean over a huge areal extent.
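As a rough illustration of the geometry involved, and not of the OAWRS processing chain itself, the sketch below shows how a single scattered echo constrains a target’s position in such a two-boat (bistatic) setup: the measured source-to-scatterer-to-receiver delay places the scatterer on an ellipse with the source and receiver at its foci, and the bearing measured by the towed array then fixes a point on that ellipse. The sound speed, positions, and measurement values are all assumed.

```python
# Illustrative bistatic-sonar geometry only (not the OAWRS algorithms).
import numpy as np

c = 1500.0                      # nominal sound speed in seawater, m/s (assumed)
src = np.array([0.0, 0.0])      # source position, meters
rcv = np.array([1000.0, 0.0])   # towed receiver array position, meters

def locate(total_delay_s, bearing_rad):
    """Scatterer position from the source->scatterer->receiver delay and the
    bearing measured at the receiver (flat, constant-sound-speed ocean)."""
    L = c * total_delay_s                       # total path length |p-src| + |p-rcv|
    u = np.array([np.cos(bearing_rad), np.sin(bearing_rad)])
    d = rcv - src
    # With p = rcv + r*u, solve |d + r*u| = L - r, which is linear in r.
    r = (L**2 - d @ d) / (2 * (d @ u + L))
    return rcv + r * u

# Example: an echo arriving 20 seconds after the ping, 30 degrees off the array axis.
print(locate(20.0, np.deg2rad(30)))   # -> roughly [13612., 7282.] meters
```

Sweeping over all delays and bearings in the recorded waveforms, rather than a single echo, is what turns measurements like these into the instantaneous wide-area maps described above.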
Previously, the team reconstructed maps of individual fish and their movements, but could not distinguish between different species. In the new study, the researchers applied a new “multispectral” technique to differentiate between species based on the characteristic acoustic resonance of their swim bladders.
“Fish have swim bladders that resonate like bells,” Makris explains. “Cod have large swim bladders that have a low resonance, like a Big Ben bell, whereas capelin have tiny swim bladders that resonate like the highest notes on a piano.”
By reanalyzing OAWRS data to look for specific frequencies of capelin versus cod, the researchers were able to image fish groups, determine their species content, and map the movements of each species over a huge areal extent.
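A toy version of that multispectral idea fits in a few lines. The sketch below is illustrative only; the resonance frequencies, resonator model, and sensing bands are placeholders rather than the study’s calibrated values. It compares the echo strength measured in a low and a high frequency band against a simple damped-resonator model for each species and returns the closer match.

```python
# Toy multispectral classifier (placeholder numbers, not the paper's model):
# cod-like swim bladders ring at low frequency, capelin-like ones at high
# frequency, so the balance of echo energy across two bands separates them.
import numpy as np

def bladder_response(freq_hz, resonance_hz, q=3.0):
    """Magnitude response of a simple damped resonator (illustrative only)."""
    f = freq_hz / resonance_hz
    return 1.0 / np.sqrt((1 - f**2) ** 2 + (f / q) ** 2)

species_resonance = {"cod": 150.0, "capelin": 4000.0}   # Hz, assumed values
bands = {"low": 200.0, "high": 3500.0}                   # sensing bands, Hz, assumed

def classify(level_low, level_high):
    """Match the normalized two-band echo spectrum to each species' model."""
    meas = np.array([level_low, level_high], dtype=float)
    meas /= np.linalg.norm(meas)

    def score(s):
        model = np.array([
            bladder_response(bands["low"], species_resonance[s]),
            bladder_response(bands["high"], species_resonance[s]),
        ])
        model /= np.linalg.norm(model)
        return float(meas @ model)   # cosine similarity

    return max(species_resonance, key=score)

print(classify(level_low=8.0, level_high=1.0))   # strong low-band echo -> 'cod'
print(classify(level_low=0.5, level_high=6.0))   # strong high-band echo -> 'capelin'
```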
Watching a wave
The researchers applied the multi-spectral technique to OAWRS data collected on Feb. 27, 2014, at the peak of the capelin spawning season. In the early morning hours, their new mapping showed that capelin largely kept to themselves, moving as random individuals, in loose clusters along the Norwegian coastline. As the sun rose and lit the surface waters, the capelin began to descend to darker depths, possibly seeking places along the seafloor to spawn.
The team observed that as the capelin descended, they began shifting from individual to group behavior, ultimately forming a huge shoal of about 23 million fish that moved as a coordinated wave more than 10 kilometers long.
“What we’re finding is capelin have this critical density, which came out of a physical theory, which we have now observed in the wild,” Makris says. “If they are close enough to each other, they can take on the average speed and direction of other fish that they can sense around them, and can then form a massive and coherent shoal.”
As they watched, the shoaling fish began to move as one, in a coherent behavior that has been observed in other species but never in capelin until now. Such coherent migration is thought to help fish save energy over large distances by essentially riding the collective motion of the group.
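That critical-density idea maps onto standard models of collective motion. The sketch below is a generic Vicsek-style simulation with arbitrary parameters, not the physical theory used in the study: each simulated fish adopts the average heading of neighbors within a sensing radius, plus noise, and a coherent shared direction emerges only when the group is dense enough.

```python
# Generic Vicsek-style sketch of the critical-density effect (illustrative
# parameters, not the study's model): dense groups align into one coherent
# heading; sparse groups stay disordered.
import numpy as np

rng = np.random.default_rng(0)

def order_parameter(n_fish, box=50.0, radius=1.0, noise=0.3, steps=200, speed=0.5):
    """Mean alignment after `steps` updates: 0 = random headings, 1 = coherent shoal."""
    pos = rng.uniform(0, box, size=(n_fish, 2))
    theta = rng.uniform(-np.pi, np.pi, size=n_fish)
    for _ in range(steps):
        # Average the headings of neighbors within the sensing radius (periodic box).
        d = pos[:, None, :] - pos[None, :, :]
        d -= box * np.round(d / box)
        near = (d ** 2).sum(-1) < radius ** 2          # includes each fish itself
        mean_sin = (near * np.sin(theta)[None, :]).sum(1) / near.sum(1)
        mean_cos = (near * np.cos(theta)[None, :]).sum(1) / near.sum(1)
        theta = np.arctan2(mean_sin, mean_cos) + noise * rng.uniform(-np.pi, np.pi, n_fish)
        pos = (pos + speed * np.c_[np.cos(theta), np.sin(theta)]) % box
    return float(np.abs(np.exp(1j * theta).mean()))

print("sparse group:", round(order_parameter(100), 2))    # below critical density
print("dense group: ", round(order_parameter(2000), 2))   # above it: coherent motion
```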
In this instance, however, as soon as the capelin shoal formed, it attracted increasing numbers of cod, which quickly formed a shoal of their own, amounting to about 2.5 million fish, based on the team’s acoustic mapping. Over a few short hours, the cod consumed 10.5 million capelin over tens of kilometers before both shoals dissolved and the fish scattered away. Makris suspects that such massive and coordinated predation is a common occurrence in the ocean, though this is the first time that scientists have been able to document such an event.
“It’s the first time seeing predator-prey interaction on a huge scale, and it’s a coherent battle of survival,” Makris says. “This is happening over a monstrous scale, and we’re watching a wave of capelin zoom in, like a wave around a sports stadium, and they kind of gather together to form a defense. It’s also happening with the predators, coming together to coherently attack.”
“This is a truly fascinating study that documents complex spatial dynamics linking predators and prey, here cod and capelin, at scales previously unachievable in marine ecosystems,” says George Rose, professor of fisheries at the University of British Columbia, who studies the ecology and productivity of cod in the North Atlantic, and was not involved in this work. “Simultaneous species mapping with the OAWRS system…enables insight into fundamental ecological processes with untold potential to enhance current survey methods.”
Makris hopes to deploy OAWRS in the future to monitor the large-scale dynamics among other species of fish.
“It’s been shown time and again that, when a population is on the verge of collapse, you will have that one last shoal. And when that last big, dense group is gone, there’s a collapse,” Makris says. “So you’ve got to know what’s there before it’s gone, because the pressures are not in their favor.”
This work was supported, in part, by the U.S. Office of Naval Research and the Institute of Marine Research in Norway.
Implantable microparticles can deliver two cancer therapies at once
The combination of phototherapy and chemotherapy could offer a more effective way to fight aggressive tumors.
Patients with late-stage cancer often have to endure multiple rounds of different types of treatment, which can cause unwanted side effects and may not always help.
In hopes of expanding the treatment options for those patients, MIT researchers have designed tiny particles that can be implanted at a tumor site, where they deliver two types of therapy: heat and chemotherapy.
This approach could avoid the side effects that often occur when chemotherapy is given intravenously, and the synergistic effect of the two therapies may extend a patient’s lifespan more than giving one treatment at a time would. In a study of mice, the researchers showed that this therapy completely eliminated tumors in most of the animals and significantly prolonged their survival.
“One of the examples where this particular technology could be useful is trying to control the growth of really fast-growing tumors,” says Ana Jaklenec, a principal investigator at MIT’s Koch Institute for Integrative Cancer Research. “The goal would be to gain some control over these tumors for patients that don't really have a lot of options, and this could either prolong their life or at least allow them to have a better quality of life during this period.”
Jaklenec is one of the senior authors of the new study, along with Angela Belcher, the James Mason Crafts Professor of Biological Engineering and Materials Science and Engineering and a member of the Koch Institute, and Robert Langer, an MIT Institute Professor and member of the Koch Institute. Maria Kanelli, a former MIT postdoc, is the lead author of the paper, which appears today in the journal ACS Nano.
Dual therapy
Patients with advanced tumors usually undergo a combination of treatments, including chemotherapy, surgery, and radiation. Phototherapy is a newer treatment that involves implanting or injecting particles that are heated with an external laser, raising their temperature enough to kill nearby tumor cells without damaging other tissue.
Current approaches to phototherapy in clinical trials make use of gold nanoparticles, which emit heat when exposed to near-infrared light.
The MIT team wanted to come up with a way to deliver phototherapy and chemotherapy together, which they thought could make the treatment process easier on the patient and might also have synergistic effects. They decided to use an inorganic material called molybdenum disulfide as the phototherapeutic agent. This material converts laser light to heat very efficiently, which means that low-powered lasers can be used.
To create a microparticle that could deliver both of these treatments, the researchers combined molybdenum disulfide nanosheets with either doxorubicin, a hydrophilic drug, or violacein, a hydrophobic drug. To make the particles, molybdenum disulfide and the chemotherapeutic are mixed with a polymer called polycaprolactone and then dried into a film that can be pressed into microparticles of different shapes and sizes.
For this study, the researchers created cubic particles 200 micrometers wide. Once injected into a tumor site, the particles remain there throughout the treatment. During each treatment cycle, an external near-infrared laser is used to heat up the particles. The laser light can penetrate to a depth of a few millimeters to a few centimeters, acting locally on the tissue.
“The advantage of this platform is that it can act on demand in a pulsatile manner,” Kanelli says. “You administer it once through an intratumoral injection, and then using an external laser source you can activate the platform, release the drug, and at the same time achieve thermal ablation of the tumor cells.”
To optimize the treatment protocol, the researchers used machine-learning algorithms to figure out the laser power, irradiation time, and concentration of the phototherapeutic agent that would lead to the best outcomes.
That led them to design a laser treatment cycle that lasts for about three minutes. During that time, the particles are heated to about 50 degrees Celsius, which is hot enough to kill tumor cells. Also at this temperature, the polymer matrix within the particles begins to melt, releasing some of the chemotherapy drug contained within the matrix.
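The kind of search described here can be sketched in generic form. In the example below, the surrogate objective, parameter ranges, and coefficients are all invented for illustration; the paper’s actual models, data, and algorithms are not reproduced. The code simply scores candidate combinations of laser power, irradiation time, and photothermal-agent concentration and keeps the best one, the role a machine-learning optimizer plays in practice.

```python
# Hedged sketch of optimizing laser settings (all numbers and the objective
# are assumptions for illustration, not the study's models or data).
import numpy as np

rng = np.random.default_rng(1)

def objective(power_w, time_s, conc_mg_ml):
    """Toy surrogate: reward reaching roughly 50 C at the particles while
    penalizing off-target heating. Every coefficient here is made up."""
    peak_temp = 37.0 + 2.2 * power_w * conc_mg_ml * (1 - np.exp(-time_s / 60.0))
    ablation = 1.0 / (1.0 + np.exp(-(peak_temp - 50.0)))   # want ~50 C or above
    collateral = 0.001 * power_w * time_s                  # crude off-target penalty
    return ablation - collateral

# Random search over plausible ranges; a Gaussian-process or other ML
# optimizer could replace this loop.
candidates = np.column_stack([
    rng.uniform(0.5, 3.0, 5000),    # laser power, W (assumed range)
    rng.uniform(30, 600, 5000),     # irradiation time, s (assumed range)
    rng.uniform(0.5, 5.0, 5000),    # agent concentration, mg/mL (assumed range)
])
scores = np.array([objective(*c) for c in candidates])
best = candidates[scores.argmax()]
print(f"power={best[0]:.2f} W, time={best[1]:.0f} s, concentration={best[2]:.2f} mg/mL")
```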
“This machine-learning-optimized laser system really allows us to deploy low-dose, localized chemotherapy by leveraging the deep tissue penetration of near-infrared light for pulsatile, on-demand photothermal therapy. This synergistic effect results in low systemic toxicity compared to conventional chemotherapy regimens,” says Neelkanth Bardhan, a Break Through Cancer research scientist in the Belcher Lab, and second author of the paper.
Eliminating tumors
The researchers tested the microparticle treatment in mice that were injected with an aggressive type of cancer cells from triple-negative breast tumors. Once tumors formed, the researchers implanted about 25 microparticles per tumor, and then performed the laser treatment three times, with three days in between each treatment.
“This is a powerful demonstration of the usefulness of near-infrared-responsive material systems,” says Belcher, who, along with Bardhan, has previously worked on near-infrared imaging systems for diagnostic and treatment applications in ovarian cancer. “Controlling the drug release at timed intervals with light, after just one dose of particle injection, is a game changer for less painful treatment options and can lead to better patient compliance.”
In mice that received this treatment, the tumors were completely eradicated, and the mice lived much longer than those that were given either chemotherapy or phototherapy alone, or no treatment. Mice that underwent all three treatment cycles also fared much better than those that received just one laser treatment.
The polymer used to make the particles is biocompatible and has already been FDA-approved for medical devices. The researchers now hope to test the particles in larger animal models, with the goal of eventually evaluating them in clinical trials. They expect that this treatment could be useful for any type of solid tumor, including metastatic tumors.
The research was funded by the Bodossaki Foundation, the Onassis Foundation, a Mazumdar-Shaw International Oncology Fellowship, a National Cancer Institute Fellowship, and the Koch Institute Support (core) Grant from the National Cancer Institute.