In recent times, “AI tutors” have been hailed as the next great leap for schools and education systems. Promises abound: personalised learning, instant feedback, limitless scalability. Yet despite all the excitement, a critical gap remains: an AI tutor is not yet, and may never be, capable of supplying what human intelligence innately provides. This is especially true when education aims not only to transmit facts, but to foster creativity, instil critical thinking, and cultivate an intelligence that can adapt to uncertainty.
As students return to classrooms or continue learning online, many educators and policymakers are asking: can AI tutors match or exceed what outstanding human teachers already do? And when do we risk chasing a version of education that aims only at what machines can replicate — neglecting what makes human intelligence unique?
These questions are more urgent now that tools like ChatGPT Edu are being rolled out in universities. While these platforms hold potential for transforming the way learning is delivered, they also raise difficult questions about what remains the domain of human instruction, and what might be lost if that domain is ignored.
The Central Challenge: Understanding Human Intelligence
At the core of the debate lies something we understand only partially: how human intelligence works. Without that understanding, building an AI tutor that truly complements human learning is more of a leap of faith than engineering. Schools may buy systems that assess memory or drill facts, but what about the kinds of learning where uncertainty reigns — discovering answers, confronting paradoxes, refining judgement, nurturing resilience?
Educational systems worldwide tend to celebrate what can be measured — accuracy, speed, and correct answers. AI is exceptional in many of those tasks. But what of imagination, ethical reasoning, and adaptability? These are harder to quantify. More importantly, they are harder to model computationally. The risk is that AI tutors will optimise for what can be evaluated easily, rather than what really matters for young minds.
Another layer to this problem is competition: as AI becomes more capable, what should human learners focus on so that their intelligence still adds value? If AI can answer many factual, routine, or even semi-complex questions instantly, what remains uniquely human in learning? Schools must therefore rethink their educational goals: are we preparing students merely to match what machines can already do, or to grow beyond what AI can currently do?

What AI Tutors Can Do — And Where They Fall Short
To be fair, AI tutors offer, and will continue to offer, significant advantages. They can:
- Personalise pace and content: Students can work at their own speed; weaker areas can be reinforced.
- Provide constant, scalable feedback: Mistakes can be corrected immediately; repetition and drilling become feasible at scale.
- Free up human teachers to focus on mentorship, holistic development, and creative projects.
However, these strengths do not mean AI can master what human intelligence uniquely produces. Here are the limits:
- Lack of Insight into the Brain’s Learning Mechanisms
AI may track memory, test recall, perhaps even map some cognitive patterns, but no AI yet fully understands how memory, creativity, intuition, and expertise develop in human brains. Without that model, it is hard for AI to guide students reliably into higher-order thinking.
- Over-reliance on Known Data
AI learns from data. It trains on what is already known or recorded. But intelligence frequently involves the unknown — dealing with ill-defined problems, asking questions no one has asked before, bridging fields, innovating. Human teachers, guided by experience, can take students into those grey areas. AI remains less capable there.
- Ethical, Emotional, and Moral Dimensions
Genuine education encompasses more than cognition. Schools are about character, collaboration, and emotional resilience. AI can simulate conversation or moral dilemmas, but the lived, relational experience — the teacher as mentor, the peer interactions, the shared uncertainty — is not yet reproducible by AI with full fidelity.
- Risk of Narrowing Educational Goals
If the measure of success becomes what machines can do, education may shift to “what can be evaluated by an algorithm”. That risks underplaying arts, ethics, philosophy, and experimental thinking — the parts of learning that are hardest to quantify but often most essential to forming well-rounded individuals.
Charting a Path Forward: Widening Vision, Deepening Purpose
Given both the promise and the limitations, what should schools, policymakers, and researchers do to ensure AI tutors contribute to the grand aims of education rather than distract from or diminish them?
- Define Learning Goals Beyond Testable Outputs
Education systems must reaffirm what they value. Is it just exam performance, or also character, curiosity, and adaptability? Making this explicit helps shape AI tools to support not just memory and recall, but also tolerance for ambiguity, ethical reflection, and innovation.
- Research into Human Intelligence Must Increase
Investing in neuroscience, cognitive science, and psychology is essential. The more we understand memory formation, creative thinking, pattern recognition, and intuition, the better we can design AI tutors that support deep learning rather than mimic its superficial aspects.
- Hybrid Models: Teachers + AI, Not AI as Replacement
In practice, the best models may lie in combining human teachers with AI tools. Let machines handle repetitive tasks, provide analytics, and suggest corrections, freeing teachers to focus on mentorship, feedback that demands human intuition, and experiences that deepen emotional and moral learning.
- Ethical Oversight and Responsible Implementation
As institutions adopt AI tools, they need oversight frameworks: ensuring bias is minimised, privacy is respected, data is used ethically, and outcomes are monitored. They must also ensure equitable access, so AI tutors do not become the privilege of a few.
- Curriculum Innovation
Curricula should adapt to include more emphasis on problem-solving under uncertainty, interdisciplinary work, creative thinking, and possibly meta-cognitive skills (learning how to learn). These are aspects of learning where human intelligence excels in ways AI does not (yet).

Conclusion: Keeping Human Intelligence in the Picture
AI tutors in schools hold real promise: scaling access, personalising support, and reducing burdens on overtaxed teachers. Yet education is not merely about acquiring what AI can replicate. Human intelligence — its flexibility, its creativity, its ethical dimension — remains the core aim. The question is not whether AI tutors will ever “replace” human instructors, but how they might be employed so that schooling strengthens rather than weakens those uniquely human traits.
For the Nigerian education context in particular, this matters deeply. Our classrooms are not just spaces of rote learning; they are spaces of character formation, community building, and preparing young people to face uncertainty in local, regional, and global challenges. If we adopt AI tutors without attending closely to what human intelligence requires — imagination, moral judgment, adaptability — we risk creating learners well-versed in facts but poorly prepared for the unknown.
As schools begin to integrate AI more fully, we must keep asking not only what AI can do, but also what AI should support so that human intelligence still leads. The future of education depends not on machines alone, but on how they help us become more fully human.