Practical Ways to Teach AI Literacy and Critical Thinking

On a recent vacation, I used the AI tool Gemini to turn some of my photos into short videos. The results were outlandish and funny (if I do say so myself). I shared them on social media, assuming everyone would be in on the joke.

The Shark

Prompt: “The happy couple is enjoying a day at the lake when a large, friendly white shark emerges, and they greet the fish just as you would greet your dog swimming up to you. They give it hugs and kisses.”

Watch the video

The Selfie Bear 

Prompt: “Have a friendly grizzly bear join the couple for a selfie; they all act like old friends.”

Watch the video

The Submarine

Prompt: “Have a nuclear submarine emerge from the water behind the friend group! They find it funny.”

Watch the video

The Bear Fishing

Prompt: “Behind these friendly hikers is a grizzly bear fishing. They love it!”

Watch the video

Real or fake? The challenge of AI-generated content

The reaction to these videos was not what I expected. While most responses were laughing emojis, a few friends and family asked, “Is that real?” I thought about our students navigating a new world where what they see and hear can easily deceive them–a modern parallel to one of the most famous media hoaxes in history: Orson Welles’s 1938 radio broadcast of The War of the Worlds.

A radio hoax that shaped media literacy

To understand how my AI post of a friendly bear could trick anyone, we have to travel back to Halloween night in 1938. On that evening, Orson Welles directed a radio adaptation of H.G. Wells’s novel, The War of the Worlds. Rather than presenting it as a straightforward drama, Welles staged the story as if it were happening live with frantic news bulletins that interrupted a program of ballroom music.

The broadcast used real locations and featured fake interviews with witnesses and experts (Speitz, 2008). Welles was hijacking the format people had come to trust for their most urgent information—the radio. By mimicking the conventions of a breaking news event, Welles blurred the line between fact and fiction. It was the 1938 equivalent of a viral AI-generated video: fiction wrapped in the familiar language of reality, and for many listeners, indistinguishable from truth.

The result was a fascinating social experiment. Some frightened listeners jammed telephone lines to police stations and newspapers, others prayed, and some even packed their families into cars to flee the so-called “Martian gas attack” (Cantril, 1940).

What makes this relevant in the AI era

But what makes this event especially relevant today is that not everyone reacted the same way. Research found that the broadcast split its audience into distinct groups, offering a fascinating historical case study of how our minds work when deciding what is real.

Social psychologist Hadley Cantril’s (1940) research revealed that listeners of The War of the Worlds fell into four different categories. 

Below, you will find how each group responded to the broadcast–and what their reactions can teach us in today’s AI-driven world:

Four categories 

The Media Savvy

Their 1938 action: They recognized the show’s internal clues, like the actors’ names and dramatic pacing, and knew it was a play from the start. Some listeners had read Wells’s book; a strong base of background knowledge made them impervious to being fooled.

The outcome: They were entertained, not fooled.

The AI-era takeaway: Be a digital detective. Look for small flaws in lighting, mannerisms, or voices. Think about what emotion or thought the piece is trying to provoke, and use your background knowledge to view the AI video critically.

The Fact-Checkers

Their 1938 action: They were suspicious, so they changed the station or checked the program listings to find out it was a show.

The outcome: They quickly confirmed it was a fictional broadcast and were not tricked.

The AI-era takeaway: Check the source, not just the content. Who posted this video? Is it a known news organization or a random account? Look for corroboration from trusted, independent sources before believing or sharing a video.

The Unsuccessful Verifiers

Their 1938 action: They tried to check what was happening but were met with busy signals or other panicked people, which seemed to confirm their fears.

The outcome: They believed the hoax because their failed attempts at verification reinforced the panic.

The AI-era takeaway: Break the algorithm’s bubble. AI videos are often designed to spread in echo chambers where they won’t be questioned. If a video perfectly confirms everything you and your friends believe, be extra skeptical, and step outside your circle to keep investigating.

The Passive Believers

Their 1938 action: They made no attempt to verify the information, accepting it at face value because it seemed plausible or fit their worldview; they never sought outside confirmation of an alien invasion.

The outcome: They were the most frightened and panicked.

The AI-era takeaway: Pause when you feel a strong emotion. AI-generated videos are often designed to make you feel angry, scared, or overjoyed, and those strong emotions are a red flag. If a video gives you a powerful urge to share it immediately, that is the exact moment to pause and think.

Adapted from Cantril (1940)

 

Teaching media literacy in the AI era

The lessons from 1938’s War of the Worlds can guide us as we help students navigate today’s AI-driven media landscape. To thrive, our students must master new analytical skills–learning not just to consume digital content, but to question, verify, and even create it. The most effective way to build these skills is to let students act as both digital creators and digital detectives.

Here are a few classroom activities by grade level:

Elementary School: The “Real or Pretend?” Game

Have students use an AI image generator to create pictures. Mix their AI-generated images with authentic photographs in a slideshow. In teams, students analyze the images and look for “clues” that reveal whether each picture is real or pretend.

Middle School: Create and Debunk a Hoax

Assign students to create their own believable hoax. Using AI tools, they write a fictional news article and pair it with AI-generated pictures about a community event. Students present their creation to the class. Their classmates act as fact-checkers, verifying the story against trusted primary sources such as local news outlets.

High School: A Digital Forensics Project

Present students with a viral AI-generated video and task them with fact-checking its claims. They cite credible sources such as news articles, academic reports, or fact-checking websites to build their case. Students then use AI tools to brainstorm potential motives, answering questions like: who benefits from this video spreading? What emotions is it designed to provoke? 

The rise of AI can feel overwhelming, but the solution isn’t avoidance–it’s empowerment. By integrating these tools thoughtfully, we can transform potential risks into powerful teaching moments that prepare students to think critically in a digital world.

Not sure about AI in your classroom? While you wait for the next post in our AI ED blog series, explore our other AI resources to get started with confidence. From on-site training and live events to practical books, videos, and free tools, we help teachers integrate AI responsibly and effectively. Explore AI resources>>

About the educator 

Dr. Alexander McNeece is an award-winning former principal, teacher, author, and consultant. He helps educators boost student engagement, develop healthy school cultures, and integrate AI tools for enhanced learning.

References
