The Rise of Metabots
I'm starting to see something awesome. Teachers are using chatbots not just to convey information, but to promote AI literacy.
This snippet is from a LinkedIn post by Ryan Tannenbaum.
Here’s the TL;DR:
Take the course material you want students to know.
Create a chatbot.
The twist: Ask that chatbot to lie 20% of the time.
Have students use the chatbot and spot the lies.
The chatbot won't stick to the 20%. But that's okay. It's more of an approximation, anyway. What matters is that the student goes on a scavenger hunt to find inaccuracies.
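To make the setup concrete, here's a minimal sketch in Python of how the "lie 20% of the time" instruction might be wired into a chatbot's system prompt. The function name, wording, and default rate are all my own illustrative assumptions, not the exact prompt from the original post; the actual model call is omitted.

```python
def build_lying_tutor_prompt(course_material: str, lie_rate: int = 20) -> str:
    """Build a system prompt asking the model to embed plausible falsehoods.

    Note: lie_rate is a rough target, not a guarantee -- as discussed
    above, the chatbot won't stick to an exact percentage.
    """
    return (
        "You are a study chatbot for the course material below. "
        f"Roughly {lie_rate}% of the factual claims you make should be "
        "plausible but false. Never reveal which claims are lies; the "
        "student's job is to spot them.\n\n"
        f"COURSE MATERIAL:\n{course_material}"
    )

# Hypothetical example: a tutor for a poetry unit.
prompt = build_lying_tutor_prompt("Poe's 'The Raven': themes, meter, symbolism.")
```

The resulting string would be passed as the system message to whatever chat interface or API you're using to build the bot.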
The chatbot stops being an authoritative voice, and starts being a suspect voice that warrants our skepticism and critical thinking.
In other words, the AI voice takes on the fallibility of a human voice.
I’m seeing these kinds of experiments more and more.
Chatbots that are less about providing information and more about encouraging certain behaviors—like critical thinking, close analysis, and creativity.
My contention: this kind of innovation is the future of chatbots in the classroom. Innovative educators are beginning to move past a specific model (where a chatbot is a mediator between student and knowledge) and towards a more pedagogically sound model (where a chatbot sets up a scenario for low-stakes practice and iteration).
I’ll have more to say about this in what follows.
Why This Matters
I’ve made my share of chatbots. I created a chatbot to field questions about Edgar Allan Poe’s The Raven, after noticing just how much difficulty students were having with the poem. This way, students could get personalized support with a challenging text.
I also created a Descriptive Writing chatbot: students could give it a bland, non-descriptive sentence and the chatbot would convert it into a detailed, concrete description. I wanted my students to see the difference.
My chatbots, with a few exceptions, were about conveying knowledge. Essentially, I was creating a virtual me: instead of asking me questions about a specific theme or piece of content, my students would ask a chatbot.
I now see the problems with this:
It starts to replace me: Not completely, of course. But it takes a slice of my regular day, the part where I field relatively easy, comprehension-based questions, and offloads it to a machine.
It’s especially prone to hallucinations: It’s strange. The most common use of chatbots is the most problematic. If we use chatbots to convey information and content, then we should be especially worried about hallucinations.
Switching to a model of using chatbots to encourage practice and failure is much better.
It’s more pedagogically sound.
The Takeaway: A Return To Basics
We’re all under a lot of pressure. We want to reinvent the wheel because, in a world where an “Adapt or Die” mentality reigns, we all need to prove that we’ve adapted.
But we don’t need to reinvent the wheel.
Scratch that. We shouldn’t reinvent the wheel.
The best way forward is often to step back, and think about what works. Return to Learning Science.
Return to the basics.
If we know that low-stakes practice and scenario-based learning work better than simply delivering knowledge, then why do we so quickly design chatbots based on bad pedagogy? The tools don’t change the principles of good learning and teaching.
That’s all for today.
Cool Things I Learned This Week
Adobe’s Firefly was apparently trained on Midjourney images: So, the trend of AI programs cheating off of each other continues.
Microsoft’s VASA-1 Model is frighteningly good: Deepfakes get easier to create every day. It’s as worrying as it is fascinating.
Medium just banned AI from its partnership program: They don’t want anyone making money on AI content on their platform. I wonder how that’s going to work.
Texas has replaced thousands of human graders (for their STAAR standardized exam) with AI bots: I’m still processing what the rise of grading bots means for education. It makes me feel a bit queasy, at the moment.
Alexander Todd created an ASU + GSV Summit 2024 Chatbot: I was supposed to go to the AIR (AI Revolution) show this year, but couldn’t make it. Someone actually created a GPT (ChatGPT Plus required) based on transcripts from the ASU+GSV Show. Pretty cool!