The Age of Plausible Deniability
Trump just accused Harris's campaign of using an AI-generated photo. Here's why that matters.
Note: The following post will be a departure for me. I will wade into politics, not because I have any specific political position but because what just happened in the Presidential race was so important that it bears focusing on. I will attempt to remain as impartial as possible.
* * * * * * * * * * * * * * * *
A few days ago, something huge happened. I found it predictable but, at the same time, jarring.
Former U.S. President Donald Trump accused Kamala Harris’s campaign of using a fake, AI-generated photo to give the impression that her arrival in Detroit was met with a large crowd of supporters.
Trump claimed that, in fact, almost no one was there and that Harris’s campaign wanted to make Harris appear more popular than she actually is.
Here is one of the photos he disputed:
Almost immediately, major news outlets reported that Trump’s claims were false. The story was covered by CBS, ABC, The Washington Post, and more.
I posted about it on my LinkedIn. In the comments, we talked a lot about what this means for the future of truth.
I want to focus on two responses in particular.
In the first response, a commenter asked a question about the plane’s tail fin. They noticed a missing number, whereas many photos of Air Force Two include a number on the tail. They did some digging and concluded that either (1) the number had been covered or removed, or (2) the photo was AI-generated.
I did some digging myself. I found out that a little over a year ago, the U.S. military started removing serial numbers from aircraft for security reasons. The commenter then dug further and found more photos of Air Force Two; some had serial numbers and some didn’t, depending on the year. We worked together and concluded that it’s more likely that the serial numbers were removed.
Now, onto the second response.
A friend of mine, Dawnne, wrote this:
She did a full analysis of the clouds and sky, looking for continuities and discontinuities. She did additional research into the size and shape of the aircraft. She put it all together and concluded, based on her careful reflection and research, that it was probably a real image.
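To give a sense of what this kind of verification involves, here’s a minimal sketch of error level analysis (ELA), a common first-pass forensic check for edited JPEGs. To be clear, this is my own illustration, not Dawnne’s method, and the file names are hypothetical:

```python
# Minimal error level analysis (ELA) sketch using Pillow.
# Recompress the image and amplify the difference: regions that were
# pasted in or regenerated often recompress differently and show up
# brighter than their surroundings in the difference map.
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    original.save("resaved.jpg", "JPEG", quality=quality)  # lossy re-save
    resaved = Image.open("resaved.jpg")
    diff = ImageChops.difference(original, resaved)
    return ImageEnhance.Brightness(diff).enhance(20)  # amplify for viewing

# Hypothetical file names, for illustration only.
error_level_analysis("crowd_photo.jpg").save("ela_map.png")
```

Even the automated version only produces a map that a human still has to interpret. The effort moves around; it doesn’t disappear.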
Phew…I’m tired of thinking about it.
Now, let’s talk about that tiredness and why it matters.
Are We Asking Too Much of Critical Thinking?
The two commenters went above and beyond. They wrote to me. I wrote to them. We researched together and came up with a consensus. We did exactly what this new age demands of us.
We put on our “critical thinking” caps.
So, it sounds like we should feel good about ourselves. Right?
Today, I have the exact opposite emotional reaction. Instead of feeling confident in my ability to sniff out misinformation through critical thinking and analysis, I find myself asking some hefty questions:
Do we have too much confidence in critical thinking as a tool for battling misinformation?
Is critical thinking becoming a scapegoat, allowing people to spread misinformation while insisting that it’s the consumer’s job to sniff out what to believe?
Is critical thinking scalable? If it took this much work and back-and-forth for a single image, is it practical for individuals to do it for hundreds (or thousands) of images?
All of this brings me to a single question, which I have no answer for: are we asking too much of critical thinking?
Our answers to that question (I'm not sure what mine is yet) will be essential for the future of the classroom.
After all, many of us teach critical thinking as a general rule.
The Age of Plausible Deniability?
For me, the main issue here isn’t whether what Trump says is true or not.
It’s that the accessibility of Generative AI has created an “age of plausible deniability.”
Imagine you want to do something wrong. And in your back pocket, you have phrases like:
“I didn’t say that. Obviously, someone must have cloned my voice.”
“I didn’t do that. But did you notice some irregularities in the reflection on the right side? It’s clearly AI-generated.”
“I wasn’t there. But have you seen recent advancements in AI-generated video? It’s really amazing what anyone can do from their laptop right now.”
Yes, Generative AI has some powerful use cases, especially for education. But it’s also a really powerful weapon for gaslighting.
This is why, I think, we lean on critical thinking so much. We (and I include myself here) really want to believe that critical thinking is the key. It’s the only surefire way (maybe) to sniff out misinformation.
Seems good.
But then, we think about the sheer amount of content we’re already being bombarded with. Then, we think about how easy it would be to say “That’s not me. That’s clearly AI-generated.”
Asking people to use critical thinking to sniff out misinformation for themselves is a big ask.
Maybe it’s too big.
Does Critical Thinking Subvert Bias or Support It?
There’s another issue.
Critical thinking may not subvert our own biases. In fact, it could just as easily support those biases.
Above, I mentioned that the first commenter on my LinkedIn post questioned the serial number and concluded that either (1) the number had been removed or (2) the image was fake. That question was the jumping-off point for our investigations.
When I did my research, I followed my own biases. I searched specifically for whether the Air Force was removing numbers and why. I found the answer because I asked a specific, targeted question. I ran that search only because I was very confident that the image was real.
My bias and critical thinking went hand-in-hand.
I think that should be a general worry. I suspect that critical thinking will be a very powerful tool for challenging someone else’s bias. But it may not be as effective at challenging our own.
What Should We Do About It?
I’m left with a question about the role of critical thinking in the Age of AI: What should we do about this?
My honest answer: I don’t know.
I feel confident that critical thinking is key to moving forward. But I also think that we tend to apotheosize critical thinking and, perhaps, I am sometimes overly confident in it.
I feel confident that we need to create a system of accountability, which would place responsibility with the creators of false content and those who hide behind AI. But I have absolutely zero idea how to do that.
I’m going to leave it there.
I’ve successfully written myself to an ambiguous position, with a big question mark around critical thinking and its role in this new world.
For now, I’ll have to be happy with that ambiguity.
Because after all, I used critical thinking for this entire post and I’m tired.
I’m off to watch a TV show. Hopefully, there isn’t any AI-generated content I need to analyze…