CeBIL Researchers Lead Development of Ethical Guidelines for AI Use in Academic Writing
In a significant contribution to the ongoing dialogue about artificial intelligence in academia, CeBIL researchers have helped develop a comprehensive framework for the ethical use of Large Language Models (LLMs) in academic writing. The guidelines, published today in Nature Machine Intelligence, provide practical solutions to key challenges facing researchers worldwide.
Led by CeBIL's Sebastian Porsdam Mann and Timo Minssen, in collaboration with CeBIL affiliates and partners Mateo Aboy of the University of Cambridge and Glenn Cohen of Harvard Law School, as well as colleagues at the National University of Singapore and the University of Oxford, the paper sets out three essential criteria for ethical LLM use. Derived from existing standards, these guidelines offer a practical yet rigorous means of addressing ethical issues associated with the use of LLMs in scholarly writing.
"As AI tools become increasingly powerful, we need clear guidelines that balance innovation with academic rigor," says Sebastian Porsdam Mann. "Our framework provides practical solutions without creating unnecessary barriers to adoption."
Three Key Principles
The guidelines are built on three fundamental criteria:
- Human Vetting and Guaranteeing: Authors must take responsibility for the accuracy and integrity of their work, including thorough verification of AI-generated content.
- Substantial Human Contribution: Each author must make meaningful intellectual contributions beyond merely processing AI outputs.
- Acknowledgement and Transparency: Authors should disclose AI use appropriately, following field-specific standards without imposing undue burden.
The paper also introduces a standardized acknowledgement statement that researchers can readily adopt in their manuscripts.
Broad Impact
Although developed primarily for academic writing, these guidelines have implications for all knowledge workers who use AI tools in their professional practice. The framework's principles can help organizations maintain quality and integrity while leveraging the benefits of AI technology.
"These guidelines represent an important step toward ensuring responsible AI use in research and knowledge work," notes Timo Minssen, Director of CeBIL. "They provide a practical framework that can evolve alongside the technology."
The research was supported by the Novo Nordisk Foundation's grant for the Inter-CeBIL Programme and reflects CeBIL's ongoing commitment to addressing crucial challenges at the intersection of law, ethics, and innovation.
Read More
The full paper, "Guidelines for ethical use and acknowledgement of large language models in academic writing," is available open access in Nature Machine Intelligence: https://doi.org/10.1038/s42256-024-00922-7
A free, read-only version may be accessed here: https://rdcu.be/dZ0Ic
For more information about CeBIL's work on AI governance and ethics, visit our Research page.
This work was supported by the Novo Nordisk Foundation grant for a scientifically independent International Collaborative Bioscience Innovation & Law Programme (Inter-CeBIL Programme, grant NNF23SA0087056).