Guidance for Students

“A student using any unacknowledged content generated by artificial intelligence within a summative assessment as though it is their own work constitutes academic misconduct, unless explicitly stated otherwise in the assessment brief.” (Plagiarism & Academic Misconduct)

This statement aims to give more specific clarity on the use of GenAI tools in summative assessments whilst allowing for discipline-specific definitions of what is appropriate. The change enables staff and students to engage more freely with these tools in independent study and formative work, encouraging open dialogue about suitable use, ethical implications, the consequences of over-reliance, what stands to be gained, and what could be lost in using these emerging tools. Large Language Models and other Generative AI tools draw on existing information to derive their responses, so anything they generate has already been informed by the work of others without appropriate attribution or reference; submitting such content as your own therefore constitutes academic misconduct.

Use cases and local practices vary widely across the University, so students should always review guidance from their department or faculty as well as the information in the assignment itself.

Guidance for Staff

The information on this site is designed to support local discussions within departments and faculties and to facilitate the development of discipline-specific statements on the use of GenAI in assessments. We recommend using this guidance alongside the Suspected Academic Misconduct – Staff Guidance Document from the Office for Student Conduct, Complaints and Appeals (OSCCA).

When designing and reviewing assessments, we recommend:

  • Designing assessments that require information or skills that are more difficult for GenAI tools to replicate, such as personal reflection.
  • Not relying on AI detection software, as it has not been proven accurate or reliable and provides no evidence to support investigations into the use of GenAI.
  • Considering how students, where permitted to use GenAI, should declare their use and describe how it was used within their work.
  • Providing clear guidance and a rationale for permitting or prohibiting the use of GenAI in assessments, to support student understanding and compliance.

Guidance for Examiners

Examiners across all programmes of study (including Undergraduate, Postgraduate Taught, and Postgraduate Research) are not permitted to upload, copy, or share student work with Generative AI (GenAI) tools or Large Language Models (LLMs). Examiners may not use tools such as ChatGPT, Perplexity, Google Bard, or Microsoft Copilot to analyse work submitted by students or to provide written feedback on it. Because it is unclear how these tools use and/or store input data, student intellectual property and submitted work must not be shared without the student's permission.

Examiners may, however, use these tools to support their own writing when documenting feedback (e.g., consolidating personal notes and rephrasing comments).