How will teachers navigate our new world of artificial intelligence and deep fakes when assessing student work, asks ERICA SOUTHGATE.
One of the common assumptions on which contemporary Western education rests is that growth and transformation are evident when students draw on their learning to create their own work. Originality remains the yardstick for assessing intellectual and artistic accomplishment and for demonstrating mastery of knowledge and skills. It is associated with authenticity, which, in education, translates to students undertaking and submitting their own, non-plagiarised work for assessment.
Our new machine age challenges these assumptions. What does it mean to be authentic when artificial intelligence (AI) is capable of producing ‘deep fakes’ – AI-generated new content or the manipulation of existing content to make new images, videos, audio and text – and when educators rely on software powered by AI to detect plagiarism?
Powered by AI, this new age already has a profound influence on our everyday lives. From smartphone assistants and chatbots, to online advertising and facial recognition tagging, to search engines that can sort millions of sources in seconds, AI is driving the applications and platforms that we depend on for communication, work and education.
A simple definition of AI is a machine-based system that can make predictions, recommendations, or decisions that can influence real or virtual environments. A key point of these systems is that they are designed to operate with varying degrees of autonomy. Today, AI usually needs ‘big data’ harvested from the internet, sensors and geolocation signals from devices, to build and develop statistical models. These models can predict or make forecasts about phenomena (including human behaviour), provide recommendations for future action, or adapt to personalise content. These functions are increasingly being integrated into educational applications.
At present, we are in an era of narrow AI. This means that an AI is only able to do the focused task it was designed to do, sometimes more effectively than a human could. This leads people to think that AI is smarter than it is. But, currently, no form of AI has the general intelligence a human mind possesses. These systems are incapable of forming (and re-forming) representations of internal states of knowledge, thoughts, expectations, beliefs, motives and emotions, or of appreciating the internal states of others.
Machine learning (ML), an important subfield of AI, has the goal of enabling computers to learn on their own, through experience: to identify patterns in data, build models that explain the world, and make predictions without having explicit pre-programmed rules and models.
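To make this concrete, here is a deliberately tiny sketch (in Python, with invented example data) of learning from experience rather than from pre-programmed rules: a nearest-neighbour classifier that copies the label of whichever labelled example a new case most resembles.

```python
# A minimal sketch of machine learning: the program is never given an
# explicit rule for telling the two groups apart; it infers the answer
# from labelled examples (here, a 1-nearest-neighbour classifier).

def distance(a, b):
    """Ordinary Euclidean distance between two feature tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(examples, point):
    """Label a new point by copying the label of its closest example."""
    nearest = min(examples, key=lambda ex: distance(ex[0], point))
    return nearest[1]

# Labelled 'experience': pairs of (features, label). The features and
# labels are invented purely for illustration.
training = [
    ((1.0, 1.0), "short essay"),
    ((1.2, 0.8), "short essay"),
    ((8.0, 9.0), "long essay"),
    ((9.0, 8.5), "long essay"),
]

print(predict(training, (1.1, 0.9)))  # → short essay
print(predict(training, (8.5, 9.2)))  # → long essay
```

The point is the absence of any hand-written rule distinguishing the two groups: the boundary is implicit in the examples the system has seen.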
Originality in the age of AI
Teachers are well-versed in the various forms of plagiarism and, increasingly, in contract cheating, which involves buying academic work – usually from an online ‘essay mill’ or private contractor – and passing it off as your own.
Platforms designed to help educators detect plagiarism and contract cheating, such as Turnitin, are powered by AI. Machine learning is used to detect patterns between a student’s work and other sources, such as the platform’s database of student work, online research articles and books, and other internet sources. Unsurprisingly, some students are already finding ways of playing the system. There are limitations to AI pattern detection, especially when students take chunks of text from different sources and reconstruct the sentence structure without changing the meaning.
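A toy illustration (not Turnitin’s actual algorithm, which is proprietary) shows why text matching catches copied passages yet struggles with restructured ones: if similarity is measured as the share of overlapping word trigrams, reordering a sentence destroys most of the matches while the meaning survives.

```python
# Hypothetical similarity check based on overlapping word trigrams
# (three-word sequences) - a common, simple text-matching technique.

def trigrams(text):
    """All three-word sequences in the text, as a set."""
    words = text.lower().split()
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def overlap(submission, source):
    """Fraction of the submission's trigrams also found in the source."""
    a, b = trigrams(submission), trigrams(source)
    return len(a & b) / max(len(a), 1)

source = "the industrial revolution transformed patterns of work and family life"
copied = "the industrial revolution transformed patterns of work and family life"
reworded = "patterns of family life and work were transformed in the industrial revolution"

print(overlap(copied, source))    # → 1.0 (flagged)
print(overlap(reworded, source))  # → 0.1 (likely missed)
```

The reworded sentence keeps the same content, but only one trigram survives the reshuffling, which is exactly the weakness the paragraph above describes.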
Of course, avoiding plagiarism is no guarantee of originality. Even if students use referencing correctly, you can still read essays that substantially consist of quotes from different sources that have been strung together into paragraphs without any demonstrable depth of understanding. Depending on the settings that teachers specify for each piece of assessment, the platform will not detect this kind of mosaic writing or what is arguably a lack of original interpretation because the essay uses correct quoting conventions and referencing.
But, of course, AI is constantly improving its ability to detect plagiarism. Turnitin already has an option said to detect contract cheating, using ML to create a ‘probability score’ based on comparisons to prior student work (which needs to be wholly original) plus metadata.
COVID and online exams
The coronavirus pandemic has seen a massive rise in online proctoring services, where students sit exams in their own home under the supervision of a remote proctor and of monitoring software. During the exam, the software records your computer’s camera and audio, and logs the websites you visit. It also takes biometric measurements of your body and tracks your movements to identify ‘cheating behaviours’. If you do anything the software deems suspicious, it assigns a ‘probability’ of misconduct and alerts your teacher to view the recording.
This software uses some combination of ML, AI and biometrics (including facial recognition, facial detection and eye tracking). The algorithm can flag a person’s behaviour as suspicious if they look away from the screen, do not look at the screen enough, or talk aloud. This could disadvantage, for example, women with caring duties who must respond to a situation at home, neurodiverse or differently abled students, and students who do not have quiet living conditions in which to sit an online exam.
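No proctoring vendor publishes its scoring in this form, so the following sketch is purely hypothetical, but it shows the mechanics just described: behavioural signals are weighted into a single ‘suspicion’ score, and crossing a threshold triggers a review. The signal names and weights are invented.

```python
# Hypothetical suspicion scoring for online proctoring. The signals,
# weights and threshold are illustrative assumptions, not any vendor's.

def suspicion_score(gaze_away_ratio, audio_events, window_switches):
    """Combine normalised behavioural signals into one score, capped at 1.0."""
    score = 0.5 * gaze_away_ratio + 0.1 * audio_events + 0.2 * window_switches
    return min(score, 1.0)

def flag_for_review(score, threshold=0.6):
    """Alert the teacher when the score crosses the threshold."""
    return score >= threshold

# A student who must repeatedly glance away (say, to a child at home)
# scores high even with no actual misconduct - the fairness concern above.
attentive = suspicion_score(gaze_away_ratio=0.1, audio_events=0, window_switches=0)
distracted = suspicion_score(gaze_away_ratio=0.9, audio_events=2, window_switches=0)
print(flag_for_review(attentive), flag_for_review(distracted))  # → False True
```

The design choice worth noticing is that the score measures deviation from an assumed ‘normal’ exam posture, which is precisely why atypical but innocent behaviour gets flagged.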
Surveillance, privacy and data security issues have also been raised, as proctoring software may record and analyse not only the person but their surroundings. There is ongoing and warranted debate concerning the ethics of online proctoring. Ethical issues aside, these more sophisticated detection techniques can be seen as the start of an arms race in AI around plagiarism. Just as AI is getting better at detecting plagiarism, it is also becoming more adept at faking authenticity.
Over the past two years, the company OpenAI has stirred debate about the ethics of AI that can auto-generate extended text, music and images. Its software uses unsupervised ML algorithms that can be ‘programmed’ to produce ‘original’ material by being shown a few examples of what you would like them to do. This type of software has prompted serious dialogue about the machine as ‘author’ and how to detect its hand once such applications become commercially available. Imagine a future where a student uses an AI app to produce music, text or visual art that could gain a pass or credit grade. Such AIs could produce work in which no two responses are the same, because each system would learn to check its output against what it and other AIs had already created.
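At a vastly smaller scale than OpenAI’s models, even a toy bigram model makes the ‘machine as author’ point: it learns word-to-word transitions from existing writing and then walks them to emit sequences the source never contained.

```python
import random

# A toy text generator (far simpler than OpenAI's models): learn which
# word follows which in existing writing, then walk those transitions
# to produce 'new' text. The corpus below is invented for illustration.

def learn(corpus):
    """Record every observed word-to-word transition in the source text."""
    chain = {}
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)
    return chain

def generate(chain, start, length=8):
    """Walk the chain from a starting word to emit a new word sequence."""
    out = [start]
    while len(out) < length:
        options = chain.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = ("the student wrote the essay and the teacher read the essay "
          "and the teacher graded the work")
chain = learn(corpus)
print(generate(chain, "the"))
```

Every step the generator takes was learned from the source, yet the sentences it emits need never appear there verbatim, which is the kernel of the authenticity problem.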
As machines continue to learn by themselves, they may very well learn to avoid having their work identified as machine-made.
Time to panic?
At this time, AI does not have independent thought or imagination, yet machines can produce what would be regarded as new work. Would such works – creative imitations and deep fakes – necessarily be classed as inauthentic, even if machines can learn, through the massive harvesting of existing online artefacts, to produce an original piece of work?
How do we distinguish this from the human learning process we understand as leading to authentically original thought? Would it be better to reconceive human learning as, to a greater or lesser degree, machine-augmented?
While these might seem like philosophical questions, education is a philosophical project with material foundations and effects. Education is about ethical conduct, ontology (being and reality) and epistemology (the nature of knowledge and knowledge production). AI provides tools for supporting traditional ethical, ontological and epistemological ideas of original, authentic student work, and is also a potentially powerful challenge to this. As educators, we need to talk about this state of play, because the machine age of AI is already upon us.