The purpose of these guidelines is to outline acceptable practices for the use of AI in study and research by graduate students. These guidelines are particularly important for students who need or plan to submit written work to meet the requirements of the graduate program, including coursework or reports, the progress update for the annual committee meeting, the oral exam, the required seminar presentation, the master’s thesis, and the PhD dissertation. Please be aware that expectations for the use of AI in coursework are at the discretion of the instructor of each course and may differ from the guidelines described here. Similarly, the use of AI for graded publications and reports conducted under the supervision of a Principal Investigator (PI) is at the discretion of the PI.
The most important consideration is data security. Any content uploaded to AI tools, such as comments, discussion, or questions, may be retained by the tool's parent company and used to train its models.
It is therefore not possible at this time to guarantee data security or privacy protections for such content. As a consequence, AI tools must not be used with content that would be considered non-public, for example, proprietary or unpublished research.
Uploading unpublished data to generative AI tools is strictly prohibited.
Recommended Principles for the Use of AI
(Adapted from Blau, W. et al., Protecting scientific integrity in an age of generative AI, PNAS 2024, 121, e2407886121)
- Students and advisors should clearly disclose the use of generative AI in research, including the specific tools, algorithms, and settings employed; accurately attribute the human and AI sources of information or ideas, distinguishing between the two and acknowledging their respective contributions; and ensure that human expertise and prior literature are appropriately cited.
- Students and advisors are accountable for the accuracy of data analysis even when using AI-generated content and analyses. In other words, analyses should be reproducible by other researchers with or without AI assistance. In addition, students and advisors must be able to defend and explain any presentation or publication they generated with AI assistance.
- Students and advisors should mark AI-generated or synthetic data, inferences, and images, so that they are not mistaken for observations collected in the real world.
- Students and advisors should take credible steps to ensure that their uses of AI produce scientifically sound and socially beneficial results while taking appropriate steps to mitigate the risk of harm.
- Students and advisors should continuously monitor and evaluate the impact of AI on their scientific work with transparency, and adapt strategies as necessary to maintain integrity.
Examples of Acceptable Uses of AI Tools
(Adapted from the Duke University Department of Chemistry’s Guidance on Acceptable Use of AI for Graduate Milestone Exams)
Be aware that anything you input into an AI tool should be treated as public information.
Stimulating Thinking
Gather different perspectives on the significance or relevance of your research, and identify knowledge gaps suitable for your proposal.
Structuring
Draft outlines, but avoid including personal data or unpublished results.
Writing Refinement
Run abstracts, sentences, or paragraphs through the software to check for grammatical errors and improve writing style.
Feedback Incorporation & Revision
Direct the software to provide ideas for restructuring your document based on feedback.
Potential Problems with the Use of AI
Plagiarism
Copying and pasting text, images, media, etc. generated by AI software into your document without attribution counts as plagiarism. Repeating or slightly modifying phrases, sentences, or passages generated by AI tools without attribution is also plagiarism. Plagiarism is not tolerated and may result in disciplinary action.
Incorrect Information
AI models can generate inaccurate or misleading information, including citations and references to works that do not exist. Verify any information against credible sources, e.g., multiple literature articles and trustworthy literature databases (Scopus, Web of Science, etc.).
Insecurity of Intellectual Information
Anything input into an AI tool should be treated as public information. Therefore, intellectual property or confidential results, such as grant proposals and unpublished manuscripts, should not be input into AI tools.
Superficial Understanding
AI is not a substitute for reading the literature on your own and applying critical thinking to the problems you face. An over-reliance on AI sources may result in a superficial understanding of your subject, which will become apparent in the oral component of the examination. Ask yourself, or have peers ask, questions to check whether you fully understand the topic.
Deficiencies in Research Development
Heavy reliance on AI could leave students without proper training in developing independent research ideas or projects. In addition, research ideas or projects developed with AI assistance could become accessible to the general public and hence be viewed as not novel.
Problems in Job Applications
AI-generated self-assessments or essays can look generic, making applicants who use AI less competitive.