IFS AND BOTS
An increasingly AI-enabled world is creating ethical and legal pitfalls – here’s how to help students avoid them
Alarmingly, some students are handing in work entirely generated by artificial intelligence and don’t seem to understand why this is wrong. Nor do they know what the potential pitfalls are.
My daughter needed to prepare an Afrikaans speech on street art. I suggested that she use ChatGPT to do an initial scoping, but she was sceptical.
ChatGPT gave a generic overview of Berlin Kidz, the artists she was researching. An example was: “One of the standout features of these young artists is their knack for creating beautiful pieces filled with colour and vibrancy.”
Based on her knowledge and first-hand experience in Berlin, she was not convinced. She discarded the ChatGPT output and used the material she had gathered herself. Artificial intelligence (AI) clearly had not understood the question and instead provided a general answer that could apply to almost any graffiti artist.
After some discussion, we decided that it spoke in generalities and was essentially bluffing its way through.
I also read about someone who used ChatGPT to break up a relationship. When I asked a similar question, I didn’t get a how-to list; the algorithm had probably been changed, and it spoke instead about the importance of talking through issues. Later, when I asked the same question again, it did provide a more direct approach under headings such as “choose the right time and place”, “listen and empathise” and “be respectful and kind”.
Interestingly, AI is increasingly being used to counsel people over the phone. I asked a few young people whose school was using such a system what they thought of it. They were appalled, saying that if they really did have a problem, they would prefer to speak to a human, not a machine.
In the academic world, I have read stories of AI quoting fictitious authors and making up quotes. AI in education has been a hot topic of discussion since the end of 2022.
I recently spoke to a professor at a leading university who bemoaned the fact that students were submitting essays written entirely by AI. Even though detection software had flagged the essays, the students failed to see any problem with what they were doing. They clearly need further help in understanding the implications of using AI.
Principles to use AI ethically
The Russell Group of universities in the UK has developed a set of principles on the use of AI, committing its institutions to the ethical and responsible use of generative AI and to preparing their staff and students to “be leaders in an increasingly AI-enabled world”.
These universities acknowledge the potentially profound impact on the ways in which they teach, learn, assess and access education. They want to ensure that generative AI tools can be used to enhance teaching practices and student learning experiences.
In supporting students to become AI literate, universities need to help them understand that there are risks to privacy and intellectual property in the information that students and staff may enter into these tools. Students also need to understand the potential for bias: generative AI tools produce answers based on information generated by humans, which may contain societal biases and stereotypes.
There is also the risk of inaccuracy and misinterpretation of information as data is gathered from a wide range of sources, including those that are poorly referenced or incorrect. Unclear commands or information may be misinterpreted by generative AI tools to produce incorrect, irrelevant or out-of-date information. This makes the student accountable for the accuracy of the information generated by these tools.
The risk of plagiarised content and/or copyright infringement in material being submitted by a student as their own is very real; artwork used by image generators may have been included without the creator’s consent or licence.
Exploitation is an aspect I wasn’t familiar with: it concerns the way generative AI tools are built. This can raise ethical issues, as some developers outsource data labelling to low-paid workers in poor conditions.
In its statement, the Russell Group commits itself to providing guidance and training to help students and staff understand how generative AI tools work, where they can add value and personalise learning, as well as their limitations.
Using AI appropriately
In increasing AI literacy, its universities will equip students with the skills needed to use these tools appropriately throughout their studies and future careers, and ensure staff have the necessary skills and knowledge to deploy these tools to support student learning and adapt teaching pedagogies.
Writing in a teaching magazine, Ian Stacey questions the accuracy of chatbots: “AI chatbots aren’t above citing facts or statistics that are either provably false, or ones it’s spontaneously generated – i.e. made up. Related to this is the issue of inappropriate content. You are what you eat, and since AI chatbots are fed by data and content harvested from the internet, this can lead to potential complications.”
Alex O’Brien, a journalist and author of the book The Truth Detective: A Poker Player’s Guide to a Complex World, provides useful advice that could help you get started in your AI detective work:
Your first task is to verify and check the sources. Can you check the evidence – both written and visual?
The next step is to take a closer look at the text. Some clues can be found in spelling, grammar and punctuation. For now, the default language for AI is still American English, so if the spelling and grammar are not appropriate for the publication or the author, ask why. Does it include quotes? If so, who are the quotes by, and do these people or institutions exist? Check the references used and the dates they are from. A clue is that AI is often still limited in the data sources it can access, and it is often unaware of recent news.
Finally, check the tone, voice and style of writing. There are linguistic patterns that are still stilted in AI-generated text (at least for now). A particular giveaway is an abrupt change in tone and voice.
Looking to the future
The example of Berlin Kidz is a stark reminder that AI can easily make things up that seem plausible and real, and that its output needs cross-checking.
But let’s be fair. ChatGPT is learning all the time, and before finishing this article I went back and asked about the Berlin Kidz again. This time, according to my daughter, AI was spot-on: “The Berlin Kidz are a group of urban artists and daredevils known for their unique style of graffiti and parkour.” And then it went on in great – and accurate – detail.
In the previous century, US psychologist, behaviourist and author BF Skinner spoke about replacing teachers with machines. He wasn’t exactly right. The interrelationships and the social dimensions of teaching should not be underestimated.
In Alex Beard’s incredible book Natural Born Learners, in which he explores the future of education around the world, he concludes that education is a lifelong process and that teaching purpose, values and ethics, and developing wisdom, are essential for education and cannot easily be taught by computers.
As we engage more with artificially generated ideas and text, we will have to use our ability as humans to investigate, problem-solve and be creative.
In short, we will have to scrutinise information more closely and use our own minds to make value judgements. DM