Artificial Intelligence Cannot Make Better Batteries Just Yet, but Stanford Researchers Are Making Progress
Stanford University researchers have published a new study in Energy & Environmental Science that applies artificial intelligence (AI) techniques to accelerate the development of advanced batteries. Specifically, they sought to improve solid-state battery electrolytes, a promising class of materials that could improve the safety, performance, and cost of energy storage, with implications for important applications like plug-in vehicles. While this initial Stanford study has not yet yielded better batteries, it presents an early and important case study in how AI will change how science is done, and how it can accelerate progress on open problems like next-generation battery development.
The researchers focused on how to down-select from more than 12,000 known lithium-containing crystalline solids (taken from the Materials Project database) to a list of options that is far more manageable to synthesize and test. In the end, they identified 21 candidates that they argue could make excellent solid-state battery electrolytes; of those, they claim about 15 are new and hitherto unstudied in the context of energy storage. To validate these claims, follow-up studies will need to actually synthesize, test, and refine the electrolyte materials, a process that will take many years. Nonetheless, the approach is potentially useful: it is a sophisticated software-based screen that both complements and competes with the hardware-based high-throughput combinatorial synthesis and screening done by battery firms like Wildcat Discovery Technologies and Ilika. The new study also brings to mind the efforts that materials informatics startups like Citrine Informatics are pursuing.
The methodology the Stanford scientists used is worth highlighting. Part of their battery material screening process relied on a predictive AI model. To train it, the researchers fed the model experimentally measured ionic conductivities for 40 known crystal structures relevant to energy storage; in other words, they taught the model how a set of known solid-state electrolytes performs. Based on that training set, the model then helped screen the 12,000 candidate structures by quantifying 20 features that make a structure a good ionic conductor, a key requirement in batteries. As an example of one of these 20 features, the researchers explained that the model learned to look for crystal structures in which each lithium atom has many neighboring lithium atoms.
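To make that workflow concrete, the sketch below shows the general shape of this kind of feature-based down-selection: a simple classifier is fit on a small labeled training set, then used to score and rank a much larger candidate pool. The descriptors, labels, and logistic-regression fit here are synthetic placeholders for illustration, not the authors' actual model or data.

```python
import numpy as np

rng = np.random.default_rng(0)
N_TRAIN, N_FEAT, N_POOL = 40, 20, 12000  # mirrors the study's scale

# Synthetic stand-ins for 20 structural descriptors per crystal
# (e.g., a count of neighboring Li atoms around each lithium site).
X = rng.normal(size=(N_TRAIN, N_FEAT))
# Synthetic labels: 1 = good ionic conductor, 0 = poor conductor.
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# Fit a tiny logistic-regression classifier by gradient descent.
w = np.zeros(N_FEAT)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / N_TRAIN     # gradient step on log-loss

# Score the full candidate pool and shortlist the top 21 structures.
pool = rng.normal(size=(N_POOL, N_FEAT))
scores = 1.0 / (1.0 + np.exp(-pool @ w))
shortlist = np.argsort(scores)[::-1][:21]
print(shortlist.shape)
```

The key design point is leverage: a model trained on only 40 measured structures is cheap to build, yet can rank thousands of untested candidates in seconds, focusing expensive synthesis effort on the most promising few.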
It is important to keep in mind this initial study's many limitations. One is that in the world of AI, a training dataset of only 40 entries is alarmingly small, which, paired with the 20 aforementioned features, could result in what is technically known as “overfitting” (having too many parameters or features relative to the number of observations or data points). Another problem is that the model was trained only on positive examples, since negative examples are hard to come by: the “publish or perish” culture in academia rewards high-impact successes, and few if any scientists publish their failures. For training AI models, that is a problem, because having both positive and negative examples is important (indeed, the Stanford researchers ended their paper with the plea, “To that end, the authors invite investigators to share their ionic conductivity measurements of poor Li conductors”). Finally, the space of materials the AI searched is only as good as the underlying database, an open problem that we have detailed in the past (client registration required).
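The overfitting risk can be illustrated directly. In the synthetic experiment below, a classifier is fit on 40 observations of 20 features whose labels are pure noise; it can still score well on the data it memorized, while a held-out split exposes that it learned nothing generalizable. All data here is fabricated for illustration and has no connection to the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 20))                   # 40 samples, 20 features
y = rng.integers(0, 2, size=40).astype(float)   # random labels: no signal

def fit(Xtr, ytr, steps=1000, lr=0.1):
    """Tiny unregularized logistic-regression fit by gradient descent."""
    w = np.zeros(Xtr.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xtr @ w))
        w -= lr * Xtr.T @ (p - ytr) / len(ytr)
    return w

def accuracy(w, Xs, ys):
    preds = 1.0 / (1.0 + np.exp(-Xs @ w)) > 0.5
    return float(np.mean(preds == ys))

# Accuracy on the very data the model trained on: inflated by memorization.
train_acc = accuracy(fit(X, y), X, y)

# Train on half, test on the held-out half: accuracy falls toward chance.
holdout_acc = accuracy(fit(X[:20], y[:20]), X[20:], y[20:])
print(train_acc, holdout_acc)
```

This is why held-out validation matters so much at small sample sizes, and why the study's 21 predicted candidates should be treated as hypotheses to test rather than results.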
For readers, this study and others like it carry a number of implications. Those working on energy storage, particularly on materials development for next-generation batteries, should strongly consider forming working relationships with applied AI groups, like this paper's Stanford-based authors. More broadly, every reader working on chemicals and materials discovery needs a strategy for integrating AI into the scientific process, to accelerate the pace of development and stay competitive. That embrace of AI will likely involve developing both internal and external experts in hybrid teams, ranging from up-skilled scientists to software engineers to data specialists. As this study highlights, readers developing an AI strategy for research will need to rethink part of how research is done in their labs, including placing greater emphasis on collecting, saving, and using negative results for model training purposes.