Generative artificial intelligence (AI) systems are rapidly advancing and reshaping how research is conducted and information is disseminated, making responsible and ethical use of this technology crucial. Generative AI has the potential to revolutionize how we create and access information, but it also presents significant risks, including the spread of misinformation and the entrenchment of bias in scientific knowledge. In this context, we introduce the concept of "Living Guidelines" for the responsible use of generative AI in research.

Principles of Living Guidelines for Generative AI:

  1. Respect for Human Rights: The guidelines adhere to the Universal Declaration of Human Rights, including the 'right to science' (Article 27). The use of generative AI in research should not infringe upon fundamental human rights.

  2. Ethical AI Practices: These guidelines align with UNESCO's Recommendation on the Ethics of AI and promote a human-rights-centered approach to AI ethics. Researchers and developers of generative AI systems must prioritize ethical considerations.

  3. Transparency and Accountability: All research conducted using generative AI must prioritize transparency and accountability. Researchers should be open about the use of AI-generated content and disclose any AI involvement in the creation of research materials.

  4. Verification and Review: The scientific community should encourage independent verification and review of generative AI tools. This promotes the reliability and integrity of research outputs.

  5. Continuous Monitoring and Adaptation: Given the rapid evolution of AI technology, these guidelines should be continuously monitored and adapted to address emerging challenges and innovations. They should remain agile and responsive to changes in the AI landscape.

  6. Inclusivity: The development and application of generative AI should be inclusive and consider diverse perspectives to avoid bias and promote fairness.

Why Living Guidelines are Essential:

  1. Swift Technological Advancements: AI technology evolves so rapidly that static regulations are often outdated before they become official policy. Living guidelines can adapt to the dynamic nature of AI development.

  2. Independent Oversight: To ensure the safety and security of generative AI systems, scientists and researchers must play a central role in their oversight. An independent institute, detached from commercial interests, is best placed to conduct testing, verification, and improvement of these systems.

  3. Resource Limitations: Most scientists lack the resources to evaluate generative AI independently. A collaborative and resource-sharing approach is necessary to address these challenges effectively.

The Way Forward:

The creation of "Living Guidelines for Responsible Use of Generative AI in Research" is a collective effort involving specialists in AI and generative AI, computer science, and the psychological and social impacts of AI. The guidelines have been developed in collaboration with multinational scientific institutions, global organizations, and policy advisors. This initiative aims to provide a framework that ensures the ethical and responsible use of generative AI in research, aligned with international human rights standards and ethical principles.

As AI continues to shape the future of research and information dissemination, the adoption and ongoing evolution of living guidelines offer a proactive approach to mitigating risks, fostering responsible AI practices, and upholding the integrity of science itself. This collaborative effort, involving all stakeholders, can help create a brighter future for AI-driven research and innovation.