Artificial Intelligence May Replace Testing of Live Animals

Great news from the scientific world today: the use of laboratory animals to test drug safety may be replaced by computer models and artificial intelligence (AI). It turns out that AI results can, in some cases, be even more accurate and reliable than real animal tests. We may finally be ready to turn a corner on live animal testing.


AI may soon save a ton of animals from drug testing



A study recently published in the research journal Toxicological Sciences shows that it is possible to predict the properties of new compounds using the data we already have from past tests and experiments. The artificially intelligent system was trained on previous animal tests to predict the toxicity of tens of thousands of untested chemicals, and its results are, in some cases, more accurate and reliable than real animal tests.

We are already using AI for drug testing

Using AI in the drug development process is nothing new. In fact, with 28 pharma companies and 93 startups already spending hundreds of millions to apply machine learning and other AI techniques to drug discovery, the costly and time-consuming process of identifying and testing new drugs, the industry seems ripe for an artificially intelligent disruption.

What the future of AI drug testing means

Thomas Hartung, a toxicologist at Johns Hopkins University in Baltimore, Maryland, who leads the research into predicting drug properties without live animal tests, says computer models could replace some of the standard safety studies conducted on millions of animals each year, such as dropping compounds into rabbits’ eyes to check whether they are irritants, or feeding chemicals to rats to work out lethal doses.

How artificial intelligence works for predicting toxicity in drug testing

Researchers made this predictive approach possible by feeding their AI a vast amount of data drawn from a huge dataset originally collected by the European Chemicals Agency (ECHA) under a 2007 law called REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals).

The data gathered in ECHA’s database is publicly available, but the format is not easily readable by computers. In 2014, Hartung’s team started reformatting the data into a machine-readable form, comprising information about roughly 10,000 chemicals and their properties gathered in roughly 800,000 animal tests.

And the results are amazing. Their system is able to predict toxicity for tens of thousands of chemicals across nine types of tests, covering everything from harm to aquatic ecosystems to inhalation damage.
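The core idea behind this kind of "read-across" prediction is that a new chemical's toxicity can be inferred from the known test results of structurally similar chemicals. Here is a minimal toy sketch of that idea in Python; the fingerprints, chemical names, and labels are invented for illustration and bear no relation to the actual ECHA data or the researchers' real system.

```python
# Toy "read-across" sketch: predict a new chemical's toxicity by copying the
# test outcome of its most structurally similar known neighbor.
# All data below is hypothetical, for illustration only.

def tanimoto(a, b):
    """Jaccard/Tanimoto similarity between two binary fingerprint sets."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def predict_toxicity(query_fp, known):
    """Return the label of the most similar known chemical and the similarity."""
    best = max(known, key=lambda item: tanimoto(query_fp, item["fp"]))
    return best["label"], tanimoto(query_fp, best["fp"])

# Hypothetical database: each entry pairs a structural fingerprint
# (a set of feature indices) with an animal-test outcome.
known_chemicals = [
    {"name": "chem_A", "fp": {1, 2, 3, 5}, "label": "irritant"},
    {"name": "chem_B", "fp": {2, 4, 6, 8}, "label": "non-irritant"},
    {"name": "chem_C", "fp": {1, 3, 5, 7}, "label": "irritant"},
]

label, sim = predict_toxicity({1, 2, 3, 7}, known_chemicals)
print(label, round(sim, 2))  # → irritant 0.6
```

The real system works at a vastly larger scale and with far richer chemistry, but the principle is the same: similarity in structure predicts similarity in biological effect.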


Journal Reference: Luechtefeld, T., Marsh, D., & Rowlands, C. (2018). Machine Learning of Toxicological Big Data Enables Read-Across Structure Activity Relationships (RASAR) Outperforming Animal Test Reproducibility. Toxicological Sciences, kfy152.