First of all, sorry. It’s been a while since I’ve published an edition of this newsletter.
Over the past few months, I’ve had a hard time writing. Part of the reason is life: my job as a full-time Professor and my role as a dad have been all-consuming lately, even more so than usual. The other reason is fatigue and burnout: I found the idea of sitting down to put my thoughts into words difficult to swallow.
But I’m back now. And it’s a good thing because — even if I find writing difficult at a specific moment — writing helps give my life meaning.
On my first day back, I want to think about AI-free learning spaces:
Their role in conversations about AI and education
What’s happening to them
Their future
I do think AI-free learning spaces are shrinking. After all, companies like Meta and Google are taking actions specifically designed to shrink those spaces. (More on this later.)
But I don’t think these kinds of spaces will go away. In fact, I’d argue that (a) AI-free learning spaces remain central to our conversations about AI-embedded classrooms and (b) they’ll continue to be powerful ways to restructure how we see the world.
My goal here is to work through those ideas.
AI-Free, You Say?
Over the past 2 years, I’ve noticed something.
When I listen to someone — an educator, an EdTech CEO, an AI enthusiast, etc. — talk about AI and education, there’s often a moment when they evoke an AI-free space. They assume that, at some point in the process, all students will put down their AI-embedded devices.
For example, consider Sinead Bovell’s activity for teaching writing with ChatGPT:
Students write the essay at home with AI assistance. Then, “they’ll come back to school and (with no AI systems) improve it, critique it…”
The moment is a textbook example of the Hegelian dialectic. The concept of an AI-embedded space contains and depends on its opposite, not only for its meaning but for its functionality. An AI-embedded space relies on the continued existence of AI-free spaces.
Counterintuitively, the assumption that AI-free spaces exist is embedded within our conversations about the AI-embedded classroom.
What’s more, in Bovell’s exercise the use of a temporary AI-free space is not a small detail. It’s essential to making the whole activity work. If a student were to use an AI system to critique the AI system — which is certainly possible — it would defeat the point.
This is one reason why I don’t see AI-embedded classrooms and AI-free classrooms as opposite poles. The bone of contention here is not whether we can cultivate AI-free moments in the classroom, but how long those moments are actually sustainable.
Can we sustain those AI-free moments for an hour? A class session? Longer?
Because even Bovell’s activity assumes that we can encourage students to put AI down, even if only for a brief while.
But what happens when that “brief while” shrinks?
Let’s go there.
Embedded Technologies And The End of AI-Free Spaces?
Powerful individuals and companies seem hellbent on introducing generative AI into every part of our daily experience. We’re already being bombarded with an ever-increasing list of AI-enabled wearables: pendants, rings, and so on. As soon as a product like the Humane pin goes under, another wearable seems primed to take its place.
Oh, and let’s not forget glasses.
After all, Meta spent a lot of money on a major commercial campaign to make its Ray-Ban AI glasses more mainstream. Here’s one of those commercials:
In just a 30-second clip, we see a whole set of capabilities. Hemsworth plays his art gallery playlist while he walks around; Pratt looks up details about the artist without taking out his phone; Hemsworth takes a video and then promptly deletes it.
The defining characteristic of this AI use is that it’s hands-free.
It’s probably no surprise that these devices are already showing up in classrooms, both in the K-12 space and in Higher Education. Below, my friend talks about a moment when Meta’s glasses showed up in his classroom:

I can already feel those AI-free learning spaces — so pivotal to our current discussions about AI and education — shrinking. After all, Google has its own competitor. More are coming.
What happens when those AI-free learning spaces shrink?
What happens if they go away altogether?
Those are the questions I’ll be thinking about next.
And spoiler alert: I’m going to guess a lot.
Imposed vs. Consensual AI-Free Spaces
Imagine walking into a class for the first time as a student.
The professor goes over the course syllabus. She lays out the course’s guidelines and expectations. She explicitly states what everyone will be working on and why it matters. She briefly discusses the course assignments, to give you an overview of what’s coming.
She covers the AI Policy, stating that no AI use will be allowed because the classroom is a space for expressing and interrogating our own ideas. She links that policy to the course’s objectives, arguing that an AI-free learning space is the best and most direct way of reaching those objectives.
She lays out everything neatly and clearly. You feel anchored.
But…
It doesn’t feel like your space. It feels very pre-created. It’s clear that the professor knows what she’s doing and has taught the course many times before. You feel — simultaneously — like you’re in good hands and like you’ve entered a space that is distinctly not your own.
I start with this scenario to make a couple of points:
There are many different kinds of learning spaces
Those learning spaces come with different feelings of ownership and non-ownership
For me, this is an example of an imposed AI-free space. These are probably still the dominant form. I’d even say that AI detectors remain popular precisely because they promise to protect those kinds of imposed spaces.
If the professor, say, co-created the AI Policy with students, it would be more of a consensual AI-free space or a hybrid AI-free space.
The difference between these kinds of spaces comes down to ownership and voice.
Here’s what I think will happen. As AI becomes embedded in society at large, the sustainability of imposed AI-free learning spaces will get tested. Hard. I think it’ll become more and more difficult (though maybe not impossible) to impose AI-free learning spaces on students.
However, consensual and hybrid AI-free learning spaces will continue to have a lot of value. I can imagine classes where students opt into an AI-free space, or even create and maintain those spaces themselves.
I think those could work. And in fact, I think they’ll become even more powerful if students have a pivotal role in creating them.
But Don’t AI-Free Spaces Do Students a Disservice?
When I talk about AI and education with others, I often see this syllogism at work in the background:
Premise 1: AI is becoming embedded in virtually every aspect of society.
Premise 2: The purpose of school is to prepare students to function in society.
Conclusion: AI should become embedded in virtually every aspect of school.
I’ll grant both premises. But I’ll push back strongly against the conclusion.
The ongoing integration of AI and similar technologies into everyday life does not mean that every classroom needs to use AI extensively. In fact, I see an argument for cultivating AI-free learning environments as a way to make students (and us) more aware of what’s actually going on in the world and our place within it.
As one writer argued in a Substack note, there’s a difference between AI-aware courses and AI-embedded courses. You can build an AI-aware course without embedding AI into your teaching or learning practices.

I say this because I think the future isn’t going to be about using AI all the time for everything. It’s going to be about moving between different contexts.
Sometimes, we’ll need to use AI.
Sometimes, we’ll need to put AI down.
So yes, I think cultivating AI-free learning spaces could serve our students quite well.
Or maybe I’m just dreaming, and AI will simply be worked into the background of everything, become the invisible hand guiding our actions and even our ideas, and we’ll never even think about what we gave up when we eliminated AI-free spaces.
And if that is the case…
Just let me keep dreaming, please.