American Chemical Society: Peering into the mind of artificial intelligence to make better antibiotics
Artificial intelligence (AI) has exploded in popularity. It powers models that help us drive vehicles, proofread emails and even design new molecules for medications. But, just as with a human, it's hard to read an AI's mind.
Explainable AI (XAI), a subset of the technology, could help us do just that by justifying a model's decisions. And now, researchers are using XAI to not only scrutinize predictive AI models more closely, but also to peer deeper into the field of chemistry.
The researchers present their results at the fall meeting of the American Chemical Society.
AI's vast number of uses has made it almost ubiquitous in today's technological landscape. However, many AI models are black boxes, meaning it's not clear exactly what steps are taken to produce a result. And when that result is something like a potential drug molecule, not understanding those steps might stir up skepticism among scientists and the public alike.
“As scientists, we like justification,” explains Rebecca Davis, a chemistry professor at the University of Manitoba. “If we can come up with models that help provide some insight into how AI makes its decisions, it could potentially make scientists more comfortable with these methodologies.”
To read the full article, please visit Phys.org.