
Blended Learning Service


Artificial Intelligence, assessment integrity, and implications for education

Last Updated Tuesday 27th February 2024

The information on these pages will continue to develop as we learn more about the impact of these tools. Please check back regularly to ensure you are receiving the most up-to-date information.

What is Artificial Intelligence?

Artificial Intelligence (AI) is a rapidly evolving field concerned with developing machine intelligence that replicates, and in some cases exceeds, human cognitive capacities. The techniques that have attracted the widest attention are machine learning, large language models, and natural language processing (generative AI), which, when prompted by user input, can produce seemingly intelligent responses that resemble human interaction.

In late 2022, OpenAI released the most notable example of a natural language processing model, ChatGPT. As it grew rapidly in popularity through the releases of GPT-3.5 and, soon after, GPT-4, other companies followed suit and provided platforms such as Bing Chat and Google Bard to compete for the rapidly rising interest from the general public.

Large language models do not operate as a search-and-find mechanism; instead, they generate new text by predicting what is most likely to appear next in a sequence, based on the prompt entered by the user. They are not, strictly speaking, intelligent machines, but prediction models trained on very large data sets. The more users engage with, and contribute to, these platforms, the better they become at answering queries and providing in-depth responses. Because the text is uniquely generated, it is not detectable through traditional methods for identifying matched text and potential plagiarism. While there are other implications for assessment (for example, the ethical question of whether AI-generated text should be referenced, and how), in the short term the primary concern is the potential threat to current assessment approaches.
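For readers curious about what "predicting what comes next" means in practice, the short Python sketch below is a deliberately simplified, hypothetical illustration: it counts which word tends to follow which in a tiny sample text and then generates a continuation one predicted word at a time. Real large language models operate at vastly greater scale and sophistication, so this is an analogy for the general idea rather than a description of any specific system such as ChatGPT.

    from collections import Counter, defaultdict

    # A tiny training text; real models learn from vast amounts of data.
    corpus = "the cat sat on the mat the cat slept on the rug".split()

    # Count which word follows each word in the sample text.
    next_word_counts = defaultdict(Counter)
    for current, following in zip(corpus, corpus[1:]):
        next_word_counts[current][following] += 1

    def predict_next(word):
        """Return the word most likely to follow `word`, if seen before."""
        counts = next_word_counts.get(word)
        return counts.most_common(1)[0][0] if counts else None

    # Generate a short continuation, one predicted word at a time.
    text = ["the"]
    for _ in range(5):
        next_word = predict_next(text[-1])
        if next_word is None:
            break
        text.append(next_word)

    print(" ".join(text))  # prints: the cat sat on the cat

The output is simply the statistically most likely sequence given the sample text, which is why such systems can sound fluent without "knowing" anything, and why their output is original text rather than material copied from a source.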

How does it affect me?

The rise in popularity has raised many moral and ethical questions about the proper use of AI in professional industries and, most notably, in education. Questions about the appropriate uses of AI, how it might affect student progress, how staff can support, respond to, or even counteract its use, and how we can move forward as such technology becomes a potential mainstay of our day-to-day lives have prompted the need for further guidance to inform members of the University and help them support others.

In addition to the risks, there are a number of positive uses of this technology within education. These will need to be evaluated and refined while the technology itself continues to evolve rapidly and open up new possibilities. We are all still learning what it means and what it can do, so we can expect our usage to adapt alongside the technology. The Blended Learning Service, in collaboration with others, will update this guidance regularly and continue to hold open discussions with staff and students to understand how best to help one another and embrace developments effectively, appropriately, and with sufficient support to build confidence in doing so.

Localised Support & Guidance

The information provided here presents a general consensus and understanding for the University of Cambridge, but guidance, support, and information may vary at more local levels within Schools, Departments, Faculties, and Colleges.

We encourage teams to communicate their expectations clearly to students, and we encourage staff, where relevant and useful, to refer students to this guidance page.