Guidance on the Use of AI in Graduate Study and Research

The purpose of these guidelines is to outline acceptable practices for graduate students using AI in study and research. These guidelines are particularly important for students who must submit written work to meet the requirements of the graduate program, including coursework or reports, the progress update for the annual committee meeting, the oral exam, the required seminar presentation, the master’s thesis, and the PhD dissertation. Please be aware that expectations for the use of AI in coursework are subject to the discretion of the instructor of each course and may differ from the guidelines described here. Similarly, the use of AI in publications and reports prepared under the supervision of a Principal Investigator (PI) is at the discretion of the PI.

The most important concern is that using AI may compromise data security. Any content uploaded to AI tools, such as comments, discussion, or questions, may be retained by the tool's parent company and used to train its models.

It is therefore not possible at this time to guarantee data security or privacy protections for such content. As a consequence, AI tools must not be used with content that would be considered non-public, for example, proprietary or unpublished research.

Uploading unpublished data to generative AI tools is strictly prohibited.

Recommended Principles for the Use of AI

(Adapted from Blau, W., et al. Protecting Scientific Integrity in an Age of Generative AI. PNAS 2024, 121, e2407886121)

  1. Students and advisors should clearly disclose the use of generative AI in research, including the specific tools, algorithms, and settings employed; accurately attribute the human and AI sources of information or ideas, distinguishing between the two and acknowledging their respective contributions; and ensure that human expertise and prior literature are appropriately cited.
  2. Students and advisors are accountable for the accuracy of data analysis, even when using AI-generated content and analyses. In other words, an analysis should be reproducible by other researchers with or without AI assistance. In addition, students and advisors must be able to defend and explain any presentation or publication they produce with the help of AI.
  3. Students and advisors should mark AI-generated or synthetic data, inferences, and images so that they are not mistaken for observations collected in the real world.
  4. Students and advisors should take credible steps to ensure that their uses of AI produce scientifically sound and socially beneficial results while taking appropriate steps to mitigate the risk of harm.
  5. Students and advisors should continuously and transparently monitor and evaluate the impact of AI on their scientific work, and adapt their strategies as necessary to maintain integrity.

Examples of Acceptable Uses of AI Tools

(Adapted from the Duke University Department of Chemistry Guidance on Acceptable Use of AI for Graduate Milestone Exams)

Be aware that anything you input into an AI tool may become public information.

Potential Problems with the Use of AI