
Will Artificial Intelligence Help or Hinder Trust in Science?

Artificial Intelligence (AI) tools are already being widely used in science. But can they, and the science they help produce, be trusted?


With greater public knowledge of Artificial Intelligence (AI) will come greater public scrutiny of how it's being used by scientists. | Unsplash: Possessed Photography


In the past year, generative artificial intelligence tools — such as ChatGPT, Gemini, and OpenAI's video generation tool Sora — have captured the public's imagination.

All that is needed to start experimenting with AI is an internet connection and a web browser. You can interact with AI like you would with a human assistant: by talking to it, writing to it, showing it images or videos, or all of the above.

While this capability marks entirely new terrain for the general public, scientists have used AI as a tool for many years. But with greater public knowledge of AI will come greater public scrutiny of how it's being used by scientists.

AI is already revolutionising science — six percent of all scientific work leverages AI, not just in computer science, but in chemistry, physics, psychology and environmental science.

Nature, one of the world's most prestigious scientific journals, included ChatGPT on its 2023 Nature's 10 list of the world's most influential and, until then, exclusively human scientists.

The use of AI in science is twofold.

At one level, AI can make scientists more productive.

When Google DeepMind released an AI-generated dataset of more than 380,000 novel material compounds, Lawrence Berkeley Lab used AI to run compound synthesis experiments at a scale orders of magnitude larger than what could be accomplished by humans.

But artificial intelligence has even greater potential: to enable scientists to make discoveries that otherwise would not be possible at all.

It was an AI algorithm that for the first time found signal patterns in brain-activity data that pointed to the onset of epileptic seizures, a feat that not even the most experienced human neurologist can repeat.

Early success stories of the use of artificial intelligence in science have led some to imagine a future in which scientists will collaborate with AI scientific assistants as part of their daily work.

That future is already here. CSIRO researchers are experimenting with AI science agents and have developed robots that can follow spoken language instructions to carry out scientific tasks during fieldwork.

While modern AI systems are impressively powerful — especially so-called artificial general intelligence tools such as ChatGPT and Gemini — they also have drawbacks.

Generative AI systems are susceptible to "hallucinations", where they make up facts. They can also be biased. Google's Gemini depicting America's Founding Fathers as a diverse group is an interesting case of over-correcting for bias.

There is a very real danger of AI fabricating results, and this has already happened. It is relatively easy to get a generative AI tool to cite publications that don't exist.

Furthermore, many AI systems cannot explain why they produce the output they produce.

This is not always a problem. If AI generates a new hypothesis that is then tested by the usual scientific methods, there is no harm done.

However, for some applications a lack of explanation can be a problem.

Replication of results is a basic tenet in science, but if the steps that AI took to reach a conclusion remain opaque, replication and validation become difficult, if not impossible.

And that could harm people's trust in the science produced.

A distinction should be made here between general and narrow AI.

Narrow AI is AI trained to carry out a specific task.

Narrow AI has already made great strides. Google DeepMind's AlphaFold model has revolutionised how scientists predict protein structures.

But there are many other, less well publicised, successes too — such as AI being used at CSIRO to discover new galaxies in the night sky, IBM Research developing AI that rediscovered Kepler's third law of planetary motion, or Samsung AI building AI that was able to reproduce Nobel prize-winning scientific breakthroughs.

When it comes to narrow AI applied to science, trust remains high.

AI systems — especially those based on machine learning methods — rarely achieve 100 percent accuracy on a given task. In fact, machine learning systems outperform humans on some tasks, while humans outperform AI systems on many others.

AI working alongside an expert scientist, who confirms and interprets the results, is a perfectly legitimate way of working. A large body of scientific evidence shows that humans using AI generally outperform both human scientists and AI systems working alone.

On the other hand, general AI systems are trained to carry out a wide range of tasks, not specific to any domain or use case.

ChatGPT, for example, can create a Shakespearian sonnet, suggest a recipe for dinner, summarise a body of academic literature, or generate a scientific hypothesis.

When it comes to general AI, the problems of hallucinations and bias are most acute and widespread. That doesn't mean general AI isn't useful for scientists — but it needs to be used with care. This means scientists must understand and assess the limitations of the tools they use.
