For clarification and guidance on the use of ChatGPT and other generative artificial intelligence (AI) tools in our graduate education programs, CGHS has developed the following position statement:
This document aims to guide the ethical and effective use of AI tools by
students, faculty, and staff within CGHS, emphasizing integrity, transparency, and continuous
learning.
AI tools, including but not limited to ChatGPT, GPT-4, Gemini, Claude, Perplexity, and specialized
health care AI applications, can enhance academic and student learning experiences and boost
productivity when used appropriately and ethically. It is essential to acknowledge the potential
benefits of AI in academic and professional pursuits, including research, content generation,
problem-solving, and data analysis. It is also crucial to understand the ethical implications of AI use
and to stay informed about its potential limitations, conflicts, and impacts in both academic and
professional spheres.
Some of the recent developments in the world of AI and considerations for use in academia include:
1. The latest AI models demonstrate improved accuracy, context understanding, and
domain-specific knowledge. However, students, faculty, and staff must remain attentive to
relevant applications for their fields of study while maintaining a critical perspective on AI’s
limitations and ethical use.
2. New AI tools can process and generate various types of content, including text, images, and
code. Even writing support tools such as Grammarly and Turnitin utilize AI technology.
Students and faculty should be aware of the ethical implications of using such tools. Sharing
information protected under HIPAA, FERPA, or other regulations with AI tools may result in
a breach of privacy, confidentiality, and/or other legal policies or rules.
3. Recent discussions in academia emphasize the importance of responsible AI use. Students,
faculty, and staff must prioritize transparency, fairness, and maintaining academic and
scholarly integrity when using AI.
4. AI note-taking tools can enhance efficiency and productivity in meetings and lectures.
However, it is important to consider how these tools might impact privacy, perceived or
actual safety in conversations, and the data storage policies of the relevant AI company.
Students should always seek faculty approval before using an AI note-taking tool.
Principles for Ethical AI Use:
1. Transparent and Ethical Usage: Students, faculty, and staff are expected to use AI tools
transparently and ethically in all applications. Possible applications include brainstorming,
outlining, identifying additional areas to investigate, summarizing, reviewing material, and
refining writing. All AI-generated content must be appropriately acknowledged and
cited, including in written assignments, discussion posts, presentations, and projects.
Guidance for citing and referencing AI tools can be found here: https://apastyle.apa.org/blog/how-to-cite-chatgpt
2. Critical Thinking and Independent Learning: While AI tools can provide valuable support,
users must prioritize independent thinking, curiosity, and genuine engagement with the
subject matter. AI should serve as a supplement to intellectual growth rather than a
replacement for it. If students have any questions about the appropriateness of AI use
in a given assignment, they should contact the course instructor before submitting
their work.
3. Content Accuracy and Responsibility: Students, faculty, and staff remain responsible for the
accuracy, credibility, and authenticity of their work. This includes validating information,
fact-checking, and ensuring proper citations.
4. Originality Assurance: All submitted work must be the person’s original intellectual product,
demonstrating their unique thought processes, critical analysis, and academic skills. Any
external sources, including AI-generated content, must be explicitly cited and used only as
supplementary material. Students should communicate proactively with their faculty if they
are uncertain about the appropriateness of AI usage.
5. Academic Integrity: Students are encouraged to contact their instructors to learn what may
constitute academic dishonesty in cases of AI use. Improper use or attribution of
AI-generated content may constitute academic dishonesty and could be subject to the
University Code of Academic Conduct and/or the CGHS Plagiarism Policy.
6. AI Literacy: Students, faculty, and staff are encouraged to develop AI literacy and understand
the capabilities, limitations, and potential biases of AI tools used in health care and research
settings.
7. Ethical Considerations in Health Care: Students, faculty, and staff must be mindful of patient
privacy, data security, and potential biases when using AI tools that involve health care data
or scenarios.
8. Continuous Learning: Given the rapid evolution of AI, students, faculty, and staff should stay
informed about new developments and their ethical implications in health care and
education.
9. Open Dialogue: ATSU-CGHS encourages ongoing discussions among students, faculty,
staff, and administrators regarding the ethical use of AI in graduate health sciences
education.
AI detection tools vary in their accuracy in detecting AI-generated content and should
be used with caution and critical analysis.
If a faculty member suspects that a student used AI to generate content and then posted or
presented that content as their own original work, CGHS recommends the following
process:
a. The faculty member should note this in their feedback and seek clarification from
the student.
b. If the concerns continue, it would then be best to have a conversation with the
student via Zoom or phone, with the academic advisor present. CGHS asks that
faculty treat initial cases as “teachable moments” and allow students to revise and
resubmit their work.
c. Should the faculty member have concerns that cannot be resolved in such a meeting,
or if they encounter additional issues, they should refer the case to their department
chair.
Free Resources and Support
Note: This position statement will be reviewed on a regular basis and updated as technology evolves.
Revised 2/25/2025