The Environmental Program policy on generative AI provides guidelines for students using AI tools, balancing innovation with educational integrity.
The use of generative AI tools such as ChatGPT and DALL-E is determined by individual instructors, who will specify their policies in course syllabi and assignment instructions. AI use is permitted only when explicitly stated in those instructions; if AI use is not mentioned, students should assume it is prohibited.
Even when an instructor permits AI use, students must obtain prior permission before using AI in graded assignments and must document and cite any AI assistance. With instructor guidance, AI tools may be used for ungraded activities such as brainstorming. The aim is to help students understand AI’s benefits and limitations, not to replace traditional research methods.
When AI use is permitted in a class and students have received their instructor’s permission, any AI-generated content must be properly cited. For example, text generated by ChatGPT should be cited as: “ChatGPT. (2024, July 28). ‘Text of your query.’ Generated using OpenAI. https://chat.openai.com/”. Students should also briefly explain in their assignments how they used the AI tool. Students are responsible for verifying the accuracy of AI-generated content; inaccurate information may result in a loss of credit.
Students should consider the ethical implications of AI use, including issues of bias and the importance of original work. The Environmental Program’s core values are predicated on students developing original thought and critical thinking skills. Overreliance on AI can undermine these essential academic practices and contribute to ethical issues such as plagiarism and lack of personal effort.
AI tools also require significant computational resources, with a correspondingly high carbon footprint, and AI models consume immense amounts of water. At a time when billions of people worldwide lack access to clean drinking water, this is an issue of environmental justice. Training GPT-3 alone is estimated to have consumed over 185,000 gallons of freshwater, and GPT-4’s water use is comparable. ChatGPT and other AI models continue to consume freshwater with every conversation they hold.
Additionally, LLMs are trained on copyrighted material whose authors were neither consulted nor compensated. Even the open-access sources LLMs train on have issues: Wikipedia, for example, which AI models draw on more heavily than any other source, has been found to be biased in many ways.
Reducing unnecessary AI use aligns with the program’s commitments to sustainability, environmental stewardship, equity, and restorative action.
Failure to follow these guidelines constitutes a violation of academic integrity and may result in disciplinary action, including loss of credit, academic probation, or other penalties in accordance with Lafayette College’s Academic Integrity Policy.