The Unspoken Challenge: 5 AI Pitfalls We’re Sleepwalking Into

July 12, 2024 / 6 min. read

Please don’t say this aloud, but we might be sleepwalking into a significant, tearful mess. And it’s creeping into every corner of our lives, including education, with a persistence we can’t ignore.

Over the past few months, artificial intelligence has consumed almost every moment of our professional thoughts; my most recent personal case study was uploading my daily vitamin supplements to ChatGPT (GPT-4o) and asking what changes, enhancements and purchases I might make for an improved routine. However, beneath the promise, I’m growing nervous about some pitfalls we must be honest about and confront. A challenge many of us in education face is showing enthusiasm without letting AI overwhelm or blind us.

Speaking on AI is quickly becoming one of the trickier things to do within our profession. “Experts” dominate the LinkedIn airwaves, quickly shooting down criticism or commentary about response accuracy, bias, transparency, or their own perception of how much or how little AI should feature in school practice or policy.

Are we sleepwalking into an AI Crisis?


1. The Mirage of Data Perfection: Clean, Correct, Consistent, and Complete?

I recently read a piece on AI data quality that attempted to classify AI responses using ‘4 Cs’: Clean, Correct, Consistent, and Complete, as a way of making sense of the quality of data returned in response to a prompt.

However, there is a kicker: tiny errors can snowball into massive issues in the sprawling world of AI responses, now prevalent in education, the job market, finance and health care.

In education, this is beginning to translate into AI systems making biased or incorrect decisions about students’ learning needs and performance, often based on incomplete or skewed data. Imagine misjudging a student’s potential because your Learning Management System (LMS), recently enhanced with AI, could not see beyond the numbers.

The implications are enormous and potentially damaging.
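As a purely illustrative sketch (not from the piece being discussed), the ‘4 Cs’ could be operationalised as simple automated checks on a dataset before it feeds any decision. The field names, records and the 0–100 validity rule below are hypothetical assumptions:

```python
# Hypothetical sketch: checking the "4 Cs" (Clean, Correct, Consistent,
# Complete) over a list of student records. Field names, records and the
# 0-100 validity range are invented for illustration.

def check_four_cs(records, required_fields, valid_range=(0, 100)):
    """Return a dict flagging which of the 4 Cs the dataset satisfies."""
    report = {"clean": True, "correct": True, "consistent": True, "complete": True}
    seen_types = {}
    for record in records:
        for field in required_fields:
            value = record.get(field)
            if value is None:                      # Complete: no missing fields
                report["complete"] = False
                continue
            if isinstance(value, str) and value != value.strip():
                report["clean"] = False            # Clean: no stray whitespace
            if isinstance(value, (int, float)) and not (
                valid_range[0] <= value <= valid_range[1]
            ):
                report["correct"] = False          # Correct: values in range
            prior = seen_types.setdefault(field, type(value))
            if prior is not type(value):           # Consistent: one type per field
                report["consistent"] = False
    return report

records = [
    {"name": "Aiko", "score": 88},
    {"name": "Ben ", "score": 142},   # trailing space + out-of-range score
    {"name": "Cara"},                 # missing score
]
print(check_four_cs(records, ["name", "score"]))
# → {'clean': False, 'correct': False, 'consistent': True, 'complete': False}
```

Even a toy gate like this makes the snowballing point concrete: one trailing space or out-of-range score, invisible to a human skimming a dashboard, fails a whole quality dimension.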

I’ve noticed on many forums that so-called Artificial Intelligence “experts” shoot down questions and concerns from those in the novice corner of the AI world, or simply those with a fleeting interest who want to know more. This land grab of the “expertise space” reminds me of the recent hostility of “futurists” or “alternative educators” towards anyone who took a passing interest in futures or foresight, only to be met with a disturbing academic ugliness.

At the recent Hakuba Forum (Nagano, Japan), Dr. Hiroaki Kitano from SonyAI delivered a keynote on the dangers of the singularity and encouraged more openness. He emphasised that we must be mindful of how many current challenges will shift our AI goals from implementing mass systems to developing accurate knowledge-discovery systems. In an upbeat tone, he championed “human creativity” as the way to reach this promised land.

There is a need to regularly interrupt this process with questions (however much they come from the novice perspective).

The implications for our education system are massive, since we face an unprecedented expansion of access to not just knowledge (the internet did that) but knowledge with insights and personalised solutions, with many discoveries potentially beyond our immediate comprehension.

 

Dr. Kitano speaks to the Hakuba Forum with Governor Abe of Nagano.

 

2. Bias: The Invisible Phantom in Our Machines

One of the most insidious problems with AI is its inherent bias.

Despite the common understanding that AI is not free from prejudice, I’ve read very little on detecting and addressing unfairness.

There is a growing concern that if the data used to train these systems is biased, the AI will reflect those biases in its decisions.

But how does this apply to our community?

In education, this could mean reinforcing existing disparities. We already know that some areas of the world are more data-rich than others in the datasets AI systems have been trained on, and that there are striking gaps where Indigenous culture, tools, knowledge, language and rituals are concerned.

Picture a world where an AI system, trained on biased data, subtly but systematically disadvantages certain groups of students, perpetuating a cycle of inequality. 

It’s a concern we must consider in greater detail and strategise against.
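One concrete way to start strategising is to audit a system’s decisions for group-level gaps. The sketch below (my illustration, not from the article; the groups and decisions are invented) computes a simple demographic-parity gap, the difference in positive-decision rates between two student groups:

```python
# Hypothetical bias check: compare the rate at which an AI system makes a
# positive recommendation (e.g. "advanced placement") across two student
# groups. The data and group labels are invented for illustration.

def selection_rate(decisions, group):
    # Fraction of students in `group` who received a positive decision.
    members = [d for d in decisions if d["group"] == group]
    return sum(d["selected"] for d in members) / len(members)

def parity_gap(decisions, group_a, group_b):
    # A gap near 0 suggests parity; a large gap is a red flag worth auditing.
    return abs(selection_rate(decisions, group_a)
               - selection_rate(decisions, group_b))

decisions = [
    {"group": "A", "selected": True},
    {"group": "A", "selected": True},
    {"group": "A", "selected": False},
    {"group": "A", "selected": True},
    {"group": "B", "selected": False},
    {"group": "B", "selected": True},
    {"group": "B", "selected": False},
    {"group": "B", "selected": False},
]
print(parity_gap(decisions, "A", "B"))  # → 0.5 (0.75 for A vs 0.25 for B)
```

A single number like this proves nothing on its own, but it turns “the system feels unfair” into a question a school leader can put to a vendor with data attached.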

3. The Illusion of Transparency: The Black Box Dilemma

As AI advances rapidly, our choice of applications grows: personalised recommendations, chatbots, and decision-support systems.

However, AI’s lack of transparency is a significant hurdle for educators as the inner workings remain opaque to end-users. 

How can trust be embedded within these systems, so that scepticism and disengagement do not undermine a tool that could help ease our educational pain points?

Machine learning models are often black boxes: we feed them data, and they give us results, but understanding the ‘why’ behind those results can be nearly impossible.

Mentorship and ethical judgment must remain paramount.

As educators and students struggle together to make sense of this brave new world, they should co-design a path forward using AI tools and confront the concept of AI transparency head-on. Any lack of transparency can erode trust and make it difficult to challenge or improve AI-driven processes within the school complex. It could also breed apathy about the journey towards knowledge (we don’t want to live in a cut-and-paste world); the messy process of knowledge gathering and research should not be abandoned in favour of the quick fix.

4. The Over-Reliance Trap: More Data, Better Outcomes?

There’s a growing tendency to over-rely on AI.

Overreliance on AI is potentially breeding complacency, leaving many of us failing to reflect properly on real-life scenarios.

This is now entering many high-stakes professions such as healthcare, finance, criminal justice and, of course, education.

Recent studies are repeatedly demonstrating that humans over-rely on AI.

The missing piece seems to be the reasoning explaining how the AI arrives at a result: if this is to be our reality, our education system should devote more effort to arming humans with the ability to calibrate their reliance.

Research by Fortiss showed that participants in its studies spent significantly less effort as they progressed through a task assisted by a typical AI. More data on measuring overreliance on AI would therefore seem a logical medium- to long-term ambition.
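One common way researchers quantify this is the share of trials in which a person accepted the AI’s suggestion even when it was wrong. The sketch below is my own illustration of that idea with invented trial data; real studies, such as the Fortiss work above, use far richer measures:

```python
# Hypothetical sketch of measuring overreliance: among the trials where the
# AI's suggestion was wrong, what fraction did the human accept anyway?
# Trial data is invented for illustration.

def overreliance_rate(trials):
    wrong_advice = [t for t in trials if not t["ai_correct"]]
    if not wrong_advice:
        return 0.0
    accepted = sum(t["human_accepted"] for t in wrong_advice)
    return accepted / len(wrong_advice)

trials = [
    {"ai_correct": True,  "human_accepted": True},
    {"ai_correct": False, "human_accepted": True},   # overreliance
    {"ai_correct": False, "human_accepted": False},  # appropriate scepticism
    {"ai_correct": False, "human_accepted": True},   # overreliance
]
print(overreliance_rate(trials))  # 2 of the 3 wrong suggestions accepted
```

Tracking a number like this over a term, rather than relying on impressions, is one way schools could begin building the calibration skills argued for above.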

5. The Ethical Minefield: Privacy, Consent, and Learning

The ethical implications of AI are vast and complex. 

Data privacy and security should be at the forefront for leaders. Critical questions within this domain need to be asked of providers about the design, development, and deployment of AI systems.

As such systems become more entrenched in our world, they raise profound questions not just about privacy and consent but about the very nature of learning.

In Hakuba, SonyAI communicated with delegates that they would work alongside “creatives” to protect their intellectual property within an internal AI system.

As we adopt more open systems, or bought-in systems that are window dressing for those same models (ChatGPT), are we comfortable with where the data ends up and how private it remains?

What safeguards are in place? As AI systems grow, leaders need to ask more about how unauthorised access, use, and disclosure of data are prevented, and about encryption, access controls, and authentication mechanisms.

The Way Forward: Awareness and Action

So, what’s next? 

It indeed starts with greater awareness and asking more questions. 

We need to quickly abandon the mindset that adding these extra [AI] bits automatically earns us a celebrated pathway to greatness.

Responsibility, transparency, and ethics must remain key.

We must also retain the premise of learning and growing together in a VUCA (volatile, uncertain, complex, ambiguous) existence.

We must not sleepwalk into this potential messiness as we stand on the brink of an AI-driven future.

We must not allow only certain voices to dominate the landscape; all questions are valid at such an early stage.

Keeping our humanity, a core theme from Hakuba, must remain central.

By staying vigilant and proactive, we can harness the power of AI to enhance education without falling prey to its pitfalls. 

As Dr. Sabba Quidwai noted in Hakuba, we need to see AI not through the old paradigms of traditional assessments and timetables but as a new world and frontier of possibilities.

It is hard to think of any other situation which is currently as essential or needs more of our collective attention.

 

——

If you are interested in partnering with the THINK Learning Studio by Elham, reach out at info@elhamstudio.com.

 
