As reported here, AI has been transformative for many industries in 2023, including biotechnology, healthcare, finance, and education. That said, AI also faces many challenges, and the Nature editorial below discusses the slow progress of applying AI and automated systems to chemical synthesis. This discussion holds important lessons for using AI in general, and it suggests that the Polanyi paradox may be a significant obstacle to AI-driven innovation.
The main challenges of using AI in the development of new and improved chemical synthesis processes include the following:
- Existing automated systems for testing AI outputs can only attempt a narrow range of chemical reactions compared with a human chemist
- Lack of sufficient training data
- Lack of data on negative outcomes, such as reaction conditions that did not work
Future developments in robotics will likely yield automated systems that can test more comprehensive ranges of chemical reactions, and the amount of data available for training AI systems is continuously increasing. The general hunger of AI models for data may also be eased by developing specialized AI systems such as AlphaFold.
However, the negative-data problem may be harder to solve because negative results are rarely published in scientific journals. Chemists are addressing this issue through efforts like the Open Reaction Database, but it remains a significant hurdle.
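To see why the absence of negative data matters, consider a minimal sketch (with entirely hypothetical reaction data): a model trained only on the "successful" conditions that journals publish has no basis to reject conditions that fail, so even a reasonable-looking learner degenerates into predicting success for everything.

```python
# Toy illustration with hypothetical data. The point: a model fit only on
# positive (published) outcomes cannot learn to predict failure.

# Hypothetical dataset: (temperature_C, catalyst_loading_mol_pct) -> outcome
published = [  # journals mostly report successes
    ((80, 5.0), "success"),
    ((100, 2.5), "success"),
    ((60, 10.0), "success"),
]
unpublished = [  # failed runs, typically never published
    ((25, 0.1), "failure"),
    ((300, 5.0), "failure"),
]

def majority_class_model(training_data):
    """Return a predictor that always outputs the most common label seen."""
    labels = [label for _, label in training_data]
    majority = max(set(labels), key=labels.count)
    return lambda conditions: majority

# Trained on published (positive-only) data, the model sees one class.
model = majority_class_model(published)

# Every prediction is "success" -- including for conditions that fail.
predictions = [model(conditions) for conditions, _ in published + unpublished]
print(predictions)
```

The same failure mode afflicts far more sophisticated models: without examples of what does not work, the decision boundary between success and failure is simply not in the training signal.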
The negative-data problem points to a deeper challenge for AI in scientific innovation, often referred to as the Polanyi Paradox, after the philosopher of science Michael Polanyi. According to Polanyi, scientific discovery relies on personal knowledge that is acquired through experience and internalized unconsciously. The Polanyi Paradox can be summarized as "We can know more than we can tell."
Negative data points are often internalized, unexpressed experiences that become part of the personal knowledge of individual scientists or research groups. Information, insights, and experiences critical for innovation may therefore never be expressed verbally or in a tangible propositional form that AI models can learn from. Negative data and the Polanyi Paradox may consequently be crucial blind spots in certain applications of AI, and essential to keep in mind when using AI models, whether for scientific discovery or any other human endeavor, such as legal problem-solving. To become better than a human chemist, as the Nature editorial calls for, AI models must somehow overcome the Polanyi Paradox.