The Class Where ‘Screenagers’ Train to Navigate Social Media, AI
By The New York Times
Published December 29, 2025

Students try to determine the authenticity of a social media profile during a lesson on digital literacy at Abraham Lincoln High School in San Francisco, Nov. 7, 2025. New technologies are complicating efforts to teach the scrolling generation to think critically and defensively online. (Minh Connors/The New York Times)


Most teenagers know that baseless conspiracy theories, partisan propaganda and artificially generated deepfakes lurk on social media. Valerie Ziegler’s students know how to spot them.

At Abraham Lincoln High School in San Francisco, she trains her government, economics and history students to consult a variety of sources, recognize rage-baiting content and consider influencers’ motivations. They brainstorm ways to distinguish deepfakes from real footage.

Ziegler, 50, is part of a vanguard of California educators racing to prepare students in a rapidly changing online world. Content moderation policies have withered at many social media platforms, making it easier to lie and harder to trust. Artificial intelligence is evolving so quickly, and generating such persuasive content, that even professionals who specialize in detecting its presence are being stumped.

California Leads Way on Teaching Digital Literacy

California is ahead of many other states in pushing schools to teach digital literacy, but even there, education officials are not expected to set specific standards until later in 2026. So Ziegler and a group of her peers are forging ahead, cobbling together lesson plans from nonprofit groups and updating older coursework to address new technologies, such as the artificial intelligence that powers video apps like Sora. Their methods are hands-on, including classroom exercises that fact-check posts about history on TikTok.

Social media literacy is a tough subject for schools to try to teach, especially now. Federal funding for education is precarious, and the Trump administration has politicized and penalized the study of disinformation and misinformation. AI is becoming pervasive in the educational system, even as its dangers to students and educators become increasingly clear.

The News Literacy Project, a media education nonprofit, surveyed 1,110 teenagers in May 2024 and found that only 4 in 10 said they had received any media literacy instruction in class that year. Eight in 10 said they had come across a conspiracy theory on social media — including false claims that the 2020 election was rigged — and many said they were inclined to believe at least one of the narratives.

Ziegler teaches the self-described “screenagers” that their social media feeds are populated using highly responsive algorithms, and that large followings do not make accounts trustworthy. In one case, students learned to distinguish between a reputable historians group on Instagram and a historical satire account with a similar name. Now, they default to double-checking information that interests them online.

“That’s the starting point,” said Xavier Malizia, 17.

Ziegler first tried to teach AI literacy last year by testing out a new module from the Digital Inquiry Group, a nonprofit literacy organization. She relies heavily on collaborations, often consulting with her school’s librarian or using free resources from CRAFT, an AI literacy project from Stanford University.

Spotting Fake AI Videos

Riley Huang, 17, said she had recently been nearly, but not quite, duped by artificially generated clips that portrayed Jake Paul, a popular boxer and influencer, as a gay man applying cosmetics. Elisha Tuerk-Levy, 18, said it was “jarring” to watch a realistic AI video of someone falling off Mount Everest, but added that the visuals in such videos were often too smooth — a useful “tell” that helps identify them as fake.

Zion Sharpe, 17, noted that AI-generated videos often seem to originate from accounts where all the posts feature the same person wearing the same clothes and speaking in the same intonation and cadence.

“It’s kind of scary, because we still have a lot more to see,” Zion said. “I feel like this is just the beginning.”

Policymakers are paying more attention to the issue. Dr. Vivek Murthy, the surgeon general under former President Joe Biden, urged schools in 2023 to set up digital literacy instruction. At least 25 states have approved related legislation, according to an upcoming report from Media Literacy Now, a nonprofit group. This summer, for example, North Carolina passed a law requiring social media literacy coursework starting in the 2026-27 school year, covering topics such as mental health, misinformation and cyberbullying.

Many of those new rules, however, are voluntary, toothless or slow to take effect, or they do not acknowledge the growing presence of artificial intelligence.

“I absolutely wish we could make things happen faster,” said California Assembly member Marc Berman, a Democrat who wrote two media literacy bills passed in 2023 and 2024. The bills nudged the state to incorporate lessons about media literacy and responsible AI use at each grade level, but California education officials have yet to decide on a formal course of action.

“It’s about really strengthening those foundational skills so that no matter what tech pops up between now and then, young people have the ability to handle it,” Berman said.

This article originally appeared in The New York Times.

By Tiffany Hsu/Minh Connors

c.2025 The New York Times Company
