https://www.eurekalert.org/news-releases/1066077
Large language models, a type of AI that analyses text, can predict the results of proposed neuroscience studies more accurately than human experts, finds a new study led by UCL (University College London) researchers.
The findings, published in Nature Human Behaviour, demonstrate that large language models (LLMs) trained on vast datasets of text can distil patterns from scientific literature, enabling them to forecast scientific outcomes with superhuman accuracy.
The researchers say this highlights their potential as powerful tools for accelerating research, going far beyond just knowledge retrieval.
Lead author Dr Ken Luo (UCL Psychology & Language Sciences) said: “Since the advent of generative AI like ChatGPT, much research has focused on LLMs' question-answering capabilities, showcasing their remarkable skill in summarising knowledge from extensive training data. However, rather than emphasising their backward-looking ability to retrieve past information, we explored whether LLMs could synthesise knowledge to predict future outcomes.
“Scientific progress often relies on trial and error, but each meticulous experiment demands time and resources. Even the most skilled researchers may overlook critical insights from the literature. Our work investigates whether LLMs can identify patterns across vast scientific texts and forecast outcomes of experiments.”
The international research team began their study by developing BrainBench, a tool to evaluate how well large language models can predict neuroscience results...
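In the published benchmark, models are presented with two versions of a study abstract, one reporting the real result and one with the result altered, and are credited when they judge the real version more plausible. As an illustration only (not the paper's code), here is a minimal sketch of how such a two-alternative choice could be scored, using made-up per-token log-probabilities in place of an actual model's output:

```python
import math

def perplexity(token_logprobs):
    """Perplexity from a list of per-token natural-log probabilities."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def choose_real(logprobs_original, logprobs_altered):
    """Pick the abstract version the model finds less surprising
    (lower perplexity). Inputs are illustrative stand-ins for
    log-probabilities a real language model would assign."""
    if perplexity(logprobs_original) < perplexity(logprobs_altered):
        return "original"
    return "altered"

# Toy numbers standing in for a model's token log-probabilities:
real = [-1.2, -0.8, -1.0]   # version the hypothetical model finds plausible
fake = [-2.5, -1.9, -2.2]   # altered-result version, more surprising
print(choose_real(real, fake))  # → original
```

In this scoring scheme, accuracy over many abstract pairs is simply the fraction of pairs where the genuine version receives the lower perplexity.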