BotBeat

OpenAI · RESEARCH · 2026-05-06

Researchers Publish 'Ten Simple Rules' for Responsible Use of Generative AI in Science

Key Takeaways

  • A new 'ten simple rules' framework provides practical guidance for the responsible use of generative AI tools in scientific research
  • Generative AI is rapidly transforming scientific workflows, from literature reviews to protein structure prediction and drug discovery, but requires careful implementation
  • The research highlights both the transformative potential and the significant risks (such as hallucinations and confidently stated false claims) of deploying LLMs in science
Source: Hacker News (https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1013588)

Summary

A new open-access research paper published in PLoS Computational Biology establishes comprehensive guidelines for the optimal and responsible use of generative AI tools in scientific research. Authored by Helmy, Jin, Alhossary, Mansour, Pellagrina, and Selvarajoo, the paper addresses the rapid integration of large language models (LLMs) like ChatGPT, Gemini, and specialized domain models into scientific workflows, a transformation that accelerated sharply after the late-2022 release of ChatGPT and similar highly capable generative AI platforms.

The research examines both the transformative potential and the risks of adopting generative AI in science. The paper highlights real-world applications including SciSpace Copilot for literature interpretation, Ought's Elicit for automated literature reviews, DeepMind's AlphaFold for protein structure prediction, and BioMedLM for biomedical question-answering. These tools demonstrate how generative AI can dramatically accelerate scientific workflows. Yet the paper emphasizes that, without careful use, they can introduce hallucinations, factual errors, and other reliability issues that undermine scientific integrity.

The 'ten simple rules' framework provides practical guidance for scientists and institutions seeking to leverage generative AI responsibly. The paper goes beyond theoretical discussion to offer actionable recommendations for integrating these powerful tools while maintaining scientific standards, ethical practices, and accuracy. This timely contribution directly addresses the urgent need for best practices as generative AI adoption accelerates across academic and biomedical research globally.

Published as an open-access article, the research ensures that these guidelines are freely available to the entire scientific community, democratizing access to responsible AI use frameworks at a critical inflection point in how science is conducted.

Clear best practices are essential to maintaining scientific integrity and accuracy as generative AI adoption accelerates across research institutions.

Editorial Opinion

This paper addresses a critical gap at precisely the right moment, as generative AI tools become increasingly embedded in scientific practice. While models like ChatGPT and specialized tools offer tremendous potential to accelerate discovery and reduce drudgework, the research correctly emphasizes the serious risk of adopting imperfect tools without robust guardrails. Scientists must understand that LLMs, despite their sophistication, can hallucinate with high confidence, making established best practices not merely helpful but essential for protecting the integrity of the scientific literature and of discovery itself.

Tags: Generative AI · Science & Research · Ethics & Bias · AI Safety & Alignment
