Contributed

Moderation of generative AI is crucial for college

The use of generative AI is a controversial topic in schools. Many schools have moved quickly to ban the use of ChatGPT, but Kirkwood Community College has taken a different path. 

In August, Kirkwood professors met to talk about the use of AI in their classes. No concrete policy was decided on; instead, professors were encouraged to create their own policies on a per-class basis. 

While many may be quick to dismiss the use of generative AI as an inherently bad thing, the topic is more nuanced than you may think. 

Many issues with generative AI stem from the fact that it is not moderated, meaning it can be a powerful tool for unethical practices in its current chaotic state. GenAI such as ChatGPT works by examining text, images and other digital media. It is trained on human-created content and draws on that training data to create transformative content. It's not as simple as copy and paste, but that doesn't mean the concerns around GenAI and plagiarism are inherently invalid. 

Many writers and artists worry about their jobs being taken by AI, or about their creative work being used as training data. In its current state, AI is not able to completely take over human jobs. It can only assist humans with digital tasks, and it still requires human input and editing. 

Regardless, none of us know what the future holds in 50 to 100 years. Maybe one day AI will take over all of our jobs. Or maybe it won't. 

But there's one fact we can't ignore: Whether we like it or not, GenAI will pave the way for the future and change technology as we know it. Banning it in schools and tossing it aside as technology too powerful to wield is not going to fix anything long-term. 

If colleges are preparing us for our future careers, shouldn't we learn how to ethically wield the tools that many of us will use? Even now, many companies and employees have started using GenAI, especially in the STEM and healthcare fields. 

GenAI has been used to automate tasks and analyze large datasets, freeing up time and allowing employees to spend their time on more meaningful work. In some fields, this extra time can be essential, especially in cybersecurity, forensic science and software engineering. Not only is AI faster at completing routine tasks, it's also more efficient. 

When used ethically, GenAI can be a powerful tool for communication. It can help with brainstorming, editing, translation and analyzing data and conversations, leading to clearer, more effective communication. 

We shouldn't be afraid of generative AI. Instead, we should be trying to understand it. If we teach students how to use GenAI ethically and responsibly, and put pressure on companies and our government to do the same, many of the concerns people have would be significantly eased. 

Kirkwood should continue to discipline students who fail to use GenAI responsibly, but it should not ban the tool altogether. Whether we like it or not, GenAI in our future workplaces is inevitable.