For clarification and guidance on the use of ChatGPT and other generative artificial intelligence (AI) tools in our graduate education programs, CGHS has developed the following position statement:
AI tools, including but not limited to ChatGPT, GPT-4, Gemini, Claude, Perplexity, and specialized health care AI applications, can enhance academic and student learning experiences and boost productivity when used appropriately and ethically. It is essential to acknowledge the potential benefits of AI in academic and professional pursuits, including research, content generation, problem-solving, and data analysis. It is also crucial to understand the ethical implications of AI use and stay informed about potential limitations, conflicts, and impact of use in both academic and professional spheres.
Some of the recent developments in the world of AI and considerations for use in academia include:
- The latest AI models demonstrate improved accuracy, context understanding, and domain-specific knowledge. However, students, faculty, and staff must remain vigilant about potential biases and limitations.
- New AI tools can process and generate various types of content, including text, images, and code. Even writing support tools such as Grammarly and TurnItIn utilize AI technology. Students and faculty should be aware of the ethical implications of using such tools, particularly in health care contexts.
- The integration of AI in health care education and practice is accelerating. Students, faculty, and staff should familiarize themselves with AI applications relevant to their field of study while maintaining a critical perspective on their limitations and ethical use.
- Recent discussions in academia emphasize the importance of responsible AI use. Students, faculty, and staff must prioritize transparency, fairness, and privacy in their AI interactions.
- AI note-taking tools can enhance efficiency and productivity in meetings and lectures. However, it is important to consider how these tools might affect privacy, perceived or actual safety in conversations, and how recordings and transcripts are stored under the relevant AI company's data policies. Students should always seek faculty approval before using an AI note-taking tool.
Principles for Ethical AI Use:
- Transparent and Ethical Usage: When permitted by instructors, students should use AI tools transparently and ethically. Similarly, faculty and staff, in all applications, are expected to use AI tools transparently and ethically. Appropriate applications include brainstorming, outlining, identifying additional areas to investigate, summarizing, reviewing material, and refining writing, among others. All AI-generated content must be appropriately acknowledged and cited, including in written assignments, discussion posts, presentations, and projects. Guidance for citing and referencing AI tools is available from the APA.
- Critical Thinking and Independent Learning: While AI tools can provide valuable support, users must prioritize independent thinking, curiosity, and genuine engagement with the subject matter. AI should serve as a supplement to intellectual growth, not a replacement for it. If students have any questions about the appropriateness of AI use in a given assignment, they should contact the course instructor before submitting their work.
- Content Accuracy and Responsibility: Students, faculty, and staff remain responsible for the accuracy, credibility, and authenticity of their work. This includes validating information, fact-checking, and ensuring proper citations.
- Originality Assurance: All submitted work must be the person's original intellectual product, demonstrating their unique thought processes, critical analysis, and academic skills. Any external sources, including AI-generated content, must be explicitly cited and used only as supplementary material. The use of AI tools to replace or significantly augment a person's own critical thinking, idea formation, or authorship is strictly prohibited and constitutes a serious violation of academic integrity. This includes, but is not limited to, submitting work predominantly written by AI, using AI to generate substantial portions of assignments, or relying on AI to circumvent the learning process, even if the AI is cited. Students should communicate proactively with their faculty if they are uncertain about the appropriateness of AI usage.
- Academic Integrity: Improper attribution or unauthorized use of AI-generated content constitutes academic dishonesty and is subject to the CGHS Plagiarism Policy and/or the University Code of Academic Conduct.
- AI Literacy: Students, faculty, and staff are encouraged to develop AI literacy and understand the capabilities, limitations, and potential biases of AI tools used in health care and research settings.
- Ethical Considerations in Health Care: Students, faculty, and staff must be mindful of patient privacy, data security, and potential biases when using AI tools that involve health care data or scenarios.
- Continuous Learning: Given the rapid evolution of AI, students, faculty, and staff should stay informed about new developments and their ethical implications in health care and education.
- Open Dialogue: ATSU-CGHS encourages ongoing discussions among students, faculty, staff, and administration regarding the ethical use of AI in graduate health sciences education.
Free Resources and Support
Note: This position statement will be reviewed on a regular basis and updated as technology evolves.
October 20, 2024