Appreciate the nuanced take and linking to Anna Mills’ great work. It feels like experimenting with different options is just the world we’re in right now, so it’s nice to see what others are trying out. (I also appreciated your post on AI free spaces and referenced it in my own substack recently). I’ll plug your upcoming talk with some colleagues who’re also interested in this stuff.
Thank you so much, Brian!
I think it's so easy to just stay in our own spaces, and the first step is to just poke around and see what others are doing. I learned so much from them!
Without question I'm intrigued by the idea of students navigating different types and vantage points of feedback; indeed, this is why I work so hard to elevate peer workshops so that the feedback students receive is not entirely my own.
However, my question is how to maintain the integrity of an "authentic" phase in the early stages of writing, particularly when some of that work happens outside of class. In the same way I do not want students leaning on peer feedback until we get to the workshop, I want them to first set out on their own as writers before they lean on other sources.
Curious what those early stages look like for you in this process—and of course always appreciate your openness in sharing your thinking and its evolving iterations. Very helpful for us secondhand learners, too!
It’s a great point!
It’s a struggle, especially with online async courses (which I’m teaching right now).
Almost everything out there has a “make students write without AI” in the process, at some point. And that’s fine. But how can you do that if it’s online async?
I don’t have an answer.
Right now, I address it with SEWPs (self-empowering writing processes). There are some writing processes that help us find our own voices, and others that just give up those voices. I encourage my students to think hard about that. (But undoubtedly, some will shrug it off.)
I also do a pretty intense Transparency Statement at the end, which hopefully allows me to align the parts in the process a bit more.
Those are some of the ideas I’m throwing around, for the moment.
I formerly taught online, and at that time we were concerned with students either cutting and pasting writing they found online or paying an essay-writing service. I think project-based, collaborative, active-learning activities are one answer. Another is requiring oral presentations (live or recorded) where students have to answer questions about their work.
I assigned multiple stages for papers, which allowed me to see progress and to become familiar with their styles. If a final paper was fabulous but prior versions were clunky, there would be cause for a closer examination.
Of course trust is central to any approach. Mutual trust - they need to trust the professor as well.
These are great points. I think that any good strategy -- like yours -- needs to find ways to bring out their processes and to think about how different activities can be stitched together. There really are no silver bullets.
I’m definitely humbled by you and others teaching writing async - in the classroom, this topic is so much easier to navigate (another reason it’s really valuable to follow the solutions you’re constructing in your context!)
Thanks for sparking this dialogue, Jason! Would you be open to sharing examples of SEWPs and/or Transparency Statements? Are these things that you add to assignment instructions or communicate to students verbally? Thanks again!
Thanks for writing this up, Jason, and for spotlighting what Anna and team are up to. I am experimenting with a similar approach, building in a layer of feedback from an LLM as a way to nudge students toward giving better peer feedback and to help them think about how the tools work. I look forward to reading more!
Do you and your students realize that when you enter work into LLMs, you may be handing that work to the AI for training? Any proprietary or personal information is no longer private. Drafts and works in progress are effectively public. If you are under the illusion that this writing couldn't be spit out in response to someone else's prompts, that is indeed an illusion. I've seen my own writing, scraped from a blog, spit out by AI verbatim (out of context, with no references).
Personally, as a student, I would request that my work NOT be entered into an AI for these reasons. Maybe @BethSpencer needs to make a badge, or Creative Commons needs to make a license, with the message "do not feed AI with my writing!" Also, as a student I would question why I'm paying tuition if AI feedback is what I'm getting?
In this time, students need actual interactions and support, not a robotic substitute that spits out other writers' words!
These are the questions I'm trying to reason my way through.
I think the approach is still worth considering and talking to students about. (I am also concerned with the LLMs sharing student writing. But many of my students are not.)
I’m still trying to make up my mind. And of course, their research is still an ongoing project.
Also, you might look up CC Signals, a Creative Commons initiative:
https://techcrunch.com/2025/06/25/creative-commons-debuts-cc-signals-a-framework-for-an-open-ai-ecosystem/
Thanks. I've been following the development of new Creative Commons options, and am glad that they are making progress!
Whether info you put in is used for training future models (or fine-tuning currently trained ones) depends on the company/model, and often on the plan/settings. For example, with a paid ChatGPT consumer subscription you can turn it all off. Some EDU or otherwise Enterprise plans (for Gemini, ChatGPT, etc.) contractually disallow using the interactions for any training.
That said, yessss to warn warn warn students over and over that many services - especially almost all “free” ones - do gobble up their data and use it for all sorts of stuff they may not like or realize.
At the same time, most of our students are of the social-media, phone-app generation and don’t seem to care (or know?) about privacy.
Perhaps some don't care - young people I talk with do and are very attentive to cybersecurity. They're the ones warning me!
This is a teachable moment, and a big part of digital literacy. Platforms and software programs generally, as well as LLMs, are not neutral. It is further complicated by the fact that we aren't all playing by the same rules. People in some parts of the world have regulatory oversight and legal parameters. In the US, they can sell your soul if they can get a buck for it!
These are issues I discuss in my work in online qualitative research, where these decisions are quite critical. We need a plan to protect the data and participants' identities as part of ethics approvals and practice, and that is getting harder and harder. I'm in the process of putting together a group to discuss the issues in a research context and generate recommendations and instructional materials.
The Association of Internet Researchers, real trailblazers in online scholarship, have a new guide about risky research: https://aoir.org/riskyresearchguide/. I was stunned to see the recommendation that some work is so sensitive and risks so high that no files should be created or saved online. Use good old pen and paper!
Glad to see you on here Jason! Just subscribed