Artificial intelligence has surged into university campuses with a force that few educators anticipated just a few years ago. Since tools like ChatGPT gained widespread attention, lecturers and students have found themselves navigating unfamiliar terrain where traditional methods of teaching and assessment are under strain. Institutions that once relied on essays and take-home assignments are now questioning whether these tasks truly reflect student learning when generative AI can produce polished responses in seconds.
Across campuses, professors describe an environment where the line between legitimate learning and technology-assisted shortcuts is blurred. Some students openly use artificial intelligence to explore complex concepts or summarise readings, while others quietly depend on it to complete entire homework sets. Surveys suggest that roughly a third of college students admit to using AI tools for written assignments, with many doing so on more than half of their coursework.
Educators now face a pressing question about academic integrity and the value of their degrees. If AI can craft answers that resemble human thought, how should universities measure true learning and ensure fairness? This challenge is not limited to one department or one type of class; it stretches from writing courses to mathematics and beyond.
Professors Fight Back with New Methods
Some lecturers have reacted to the rise of AI by tightening the way tests and homework are structured. In classrooms where essays once dominated, instructors are now reintroducing paper-based exams and assigning tasks that must be completed in person. These approaches aim to reduce opportunities for students to rely on generative AI during assessments.
Others are revamping coursework entirely. Oral examinations, in-class writing labs, and projects that require personalised responses are gaining popularity. By requiring students to explain or defend their answers in real time, teachers hope to distinguish genuine understanding from AI-generated text. Some biology and philosophy lecturers have instituted strict in-class tests to reduce the risk of academic dishonesty.
At the same time, a number of professors are placing greater emphasis on process over product. They ask students to submit drafts, annotated research or step-by-step reflections on how they arrived at an answer. This helps instructors see the student’s thought process rather than merely the final result, making it harder for AI shortcuts to go unnoticed.
This shift is not without controversy. More time spent on in-class assessments means less time for covering new material, and some students argue that these changes feel like punishments for broader trends rather than individual choices. Yet, many academics believe such steps are necessary to preserve academic standards in the age of advanced technology.

Embracing Artificial Intelligence as a Learning Partner
Not every educator wants to block AI outright. A growing number see it as a powerful tool that, when used responsibly, can enhance learning. Marketing and computer science instructors are among those experimenting with creative uses of artificial intelligence inside and outside the classroom.
For example, some courses now include assignments where students input real data into AI tools to identify patterns or generate initial insights, then build their own analysis on top of those results. This dual approach encourages students to compare machine suggestions with traditional analytical methods such as spreadsheets or hand calculations.
In upper-level computer science programmes, students are taught how to build and assess artificial intelligence systems, giving them practical skills that will be increasingly valuable in the job market. These courses view AI not as a threat but as a central part of modern professional life.
Several universities are also discussing assignments that explore ethics and responsible AI usage, prompting students to consider not just how a tool works but how it should be used in ethical and professional settings. This helps students reflect on when the use of AI supports genuine learning and when it undermines it.
The Broader Debate on Fairness and Integrity
At the heart of this transformation is a deeper debate about academic integrity and the purpose of education itself. Critics argue that easy access to generative AI undermines the development of critical thinking, writing and research skills that higher education is meant to build. If students can circumvent these processes, they risk not mastering the material the course aims to teach.
On the other hand, supporters of AI integration suggest that education must evolve. Learning how to work with artificial intelligence effectively can be seen as a skill in its own right. They argue that students should be guided to use these tools responsibly, just as calculators became standard in mathematics courses decades ago.
The tension between these viewpoints has led to a patchwork of policies within and between institutions. Some lecturers maintain strict bans on AI use in any part of coursework, while others assign value to AI-assisted reflections or critical evaluations of machine-generated text. A few universities are even establishing committees to draft formal guidelines on AI usage, balancing innovation with accountability.

Critically, the rise of artificial intelligence has forced educators to ask fundamental questions about why education matters and what skills today’s students should leave university with. In confronting these questions, professors are redefining assessments, reshaping curricula and rethinking the very nature of learning for a generation raised alongside powerful technology.