Opinion

Balancing curiosity and caution with AI

There is a discussion that seems to come up at the Communiqué staff table regularly: Does AI have a place in education? The short answer is yes, but students should balance their curiosity about what AI can do with caution about what it cannot.  

When we looked at the resources available to evaluate this complicated question, Google’s AI search assistant appeared at the top of the page, ahead of the search result links, aiming to present a snapshot of the seemingly endless pages of information returned in response to our query.

The AI Overview was broken into three sections, one regarding students, another for educators and a third for institutions. Some of the benefits it reported include “personalized learning, enhanced engagement, automated grading, reduced teacher burnout and efficient resource allocation.”

In contrast, the Overview lists the risks as “student over-reliance leading to a decline in critical thinking and problem-solving skills, perpetuation of algorithmic bias and inequity, significant privacy and security concerns regarding sensitive student data and the potential for increased plagiarism and academic dishonesty,” with which we tend to agree.

One will likely assume the overview generated for the initial query is accurate, but when students, or any other users, come to expect AI to be correct, they become less likely to be critical of the information they are presented.

This inclination, in addition to poor media literacy skills, leads to important facts slipping through the cracks and bad advice permeating what is considered a reliable resource.

This can turn deadly, as was the case for Adam Raine, a teenager who initially turned to ChatGPT as a resource for schoolwork and found himself confiding in the program as a replacement for human connection.

His parents found evidence of conversations in which the bot encouraged him to take actions that ultimately led to his suicide.

His parents are suing OpenAI, the creator of ChatGPT, over the program’s role as a digital confidant that ultimately assisted him in planning and carrying out his own death.

Beyond playing the role of confidant, ChatGPT promises to assist with any task, trouble or topic, from a quick question on formatting a thesis statement to writing entire papers with sources formatted in any style. 

Keep in mind those sources may or may not actually exist, like the ones hallucinated by the AI program used to produce recent reports circulated by the federal Department of Health and Human Services.

Commonly voiced concerns about educational, social, ethical, intellectual and environmental impact aside, AI interfaces are here to stay. 

By learning how to use AI appropriately, building media literacy skills, recognizing the presence of AI-rendered information and actively practicing critical thinking at every level of education, perhaps we can avoid the worst of what AI has to offer and make the most of how it can assist in personal and educational endeavors.

Perhaps we need to be reminded of the lesson taught during Google’s AI search assistant’s early days in 2024, when it recommended to hapless searchers a pizza recipe that included adding glue, drawing on false information sourced from a joke in a now 12-year-old Reddit comment: You can’t believe everything you read on the internet, even if it’s rendered by an AI interface.