His Students Suddenly Started Getting A’s. Did a Google AI Tool Go Too Far?
By Carolyn Jones, CalMatters
Published November 10, 2025

Some teachers say that AI tools, particularly Google Lens, have made it impossible to enforce academic integrity in the classroom. (Shutterstock)



This story was originally published by CalMatters. Sign up for their newsletters.

A few months ago, a high school English teacher in Los Angeles Unified noticed something different about his students’ tests. Students who had struggled all semester were suddenly getting A’s. He suspected some were cheating, but he couldn’t figure out how.

Until a student showed him the latest version of Google Lens.

Google had recently made the visual search tool easier to use on the company’s Chrome browser. When users click on an icon hidden in the toolbar, a movable bubble pops up. Wherever the bubble is placed, a sidebar appears with an artificial intelligence answer, description, explanation or interpretation of whatever is inside the bubble. For students, it provides an easy way to cheat on digital tests without typing in a prompt, or even leaving the page. All they have to do is click.

“I couldn’t believe it,” said teacher Dustin Stevenson. “It’s hard enough to teach in the age of AI, and now we have to navigate this?”

Keeping up with students’ methods of cheating has always been a cat-and-mouse game for teachers. But some now say that AI tools, particularly Lens, have made it impossible to enforce academic integrity in the classroom — with potentially harmful long-term effects on students’ learning.

‘A Terrible Idea’

Lens has been around for nearly a decade. It’s the camera technology that scans QR codes or identifies objects in photos. But as AI has evolved, its uses have expanded, and Google has made it more available to users, especially those using Chrome, the Google browser.

During the COVID school closures, most school districts in California gave students Chromebook laptops to do remote work. Thousands of those laptops were donated by Google. After campuses reopened for in-person learning, schools kept using the Chromebooks, making them an integral part of classroom instruction.

Millions of California’s 5.8 million K-12 students use Chromebooks, making them by far the most popular laptops in schools.

For William Heuisler, a high school ethnic studies teacher in Los Angeles, the ubiquity of Chromebooks was the first red flag.

“After COVID-19, it was clear that Chromebooks were a terrible idea in my classroom,” Heuisler said. Students used the laptops to play games during class, watch soccer matches and otherwise focus on anything but the lesson plan.

Then came AI, with its immense potential to enhance education — and facilitate cheating. That’s when Heuisler decided to ditch technology altogether in his classroom and return to the basics: pencil and paper. Tests, homework and in-class assignments are all on paper. The school already bans cell phones.

It’s more work for him, but worth it, he said.

“We want teenagers to think independently, voice their opinions, learn to think critically,” Heuisler said. “But if we give them a tool that allows them to not develop those skills, I’m not sure we’re actually helping them. Can you get by in life not knowing how to write, how to express yourself? I don’t know, but I hope not.”

AI and Cognitive Activity

Heuisler is not alone, according to research from the Center for Democracy and Technology. In a recent nationwide survey, the organization found that more than 70% of teachers say that because of AI, they have concerns about whether students’ work is actually their own. Nearly 75% of teachers say they worry students aren’t learning important skills like writing, research and reading comprehension.

The impact on students’ learning appears to be real, according to a recent study by the Massachusetts Institute of Technology. The study, “Your Brain on ChatGPT,” found that students who use AI for help writing essays showed significantly less cognitive activity than those who didn’t, and often couldn’t remember details from essays they had just written. The essays themselves were also of poorer quality, with limited ideas, sentence structures and vocabulary compared to the essays written by students who didn’t rely on AI.

Nonetheless, about 85% of teachers and students use AI in the classroom, the Center for Democracy and Technology found. Teachers use it to organize lesson plans and grade papers, and students use it to do things like research and brainstorming.

Lack of Consistent Rules

But rules related to its use vary widely. The California Department of Education offers extensive guidance on how teachers can use AI in the classroom, but no strict requirements — even regarding students who use AI to cheat. One video urges teachers not to punish students caught using AI to write an essay. Instead, the video encourages teachers to come up with essay assignments that can’t be easily written by a machine, or require students to provide their notes and cite AI just like they would cite any other source for an essay.

Even within schools, teachers have different AI rules. Some encourage students to incorporate AI into their work, while others ban it outright. A recent survey by the RAND research organization found that only 34% of teachers said their school or district had consistent policies on AI and cheating, and that 80% of students said their teachers hadn’t provided guidance on how to use AI for schoolwork.

That confusion is the crux of the problem, said Alix Gallagher, a director at Policy Analysis for California Education who has studied AI use in schools. Because there are few clear rules about AI use, students and teachers tend to have “significantly” different views about what constitutes cheating, according to a recent report by the education nonprofit Project Tomorrow.

“Because adults aren’t clear, it’s actually not surprising that kids aren’t clear,” Gallagher said. “It’s adults’ responsibility to fix that, and if adults don’t get on the same page they will make it harder for kids who actually want to do the ‘right’ thing.”

Districts need to provide high-quality training for teachers and consistent policies for AI use in the classroom, so everyone knows what the rules are and teachers know how to navigate the new technology, she said.

Unsustainable?

In Hillary Freeman’s government class at Piedmont High School near Oakland, AI is all but forbidden. If students use AI to write a paper, they get a zero. She allows students to use AI only to summarize complex concepts or to write practice questions for a self-assessment, or when she explicitly permits it for a specific task.

She appreciates that AI can sometimes be useful, but she worries that it’s too easy for students to use it as a crutch.

“Reasoning, logic, problem-solving, writing — these are skills that students need,” Freeman said. “I fear that we’re going to have a generation with huge cognitive gaps in critical thinking skills. … It’s really concerning to me. I want their futures to be bright.”

Detecting students’ use of AI is another obstacle, she said. It means spending time digging through version histories of students’ work, or using AI plagiarism screeners, which are sometimes inaccurate and more likely to flag English learners.

“It’s a huge ‘add’ to my job, and it doesn’t seem sustainable,” Freeman said.

Digital Literacy and Academic Integrity

Google, meanwhile, has no plans so far to remove Lens from its Chrome browsers, even on school-issued laptops, although it is continuing to test various levels of accessibility. It recently paused a “homework help” Lens shortcut button in response to feedback from users.

The tech giant encourages students and teachers to learn more about positive and ethical uses of AI and how it can enhance learning. It’s also invested more than $40 million in AI literacy for students and teachers over the past few years.

“Students have told us they value tools that help them learn and understand things visually, so we have been running tests offering an easier way to access Lens while browsing,” said Google spokesman Craig Ewer. “We continue to work closely with educators and partners to improve the helpfulness of our tools that support the learning process.”

School administrators also have the option of disabling Lens on district-issued Chromebooks.

Los Angeles Unified has decided to keep Lens on its student laptops, at least for now, because the tool has plenty of positive uses and students should have the opportunity to explore the technology, a district spokesperson said.

But the district has instituted some guardrails: The tool is only available to students who have completed a lesson on digital literacy, and students and teachers must comply with the district’s academic integrity and responsible-use-of-technology rules. Those rules include bans on plagiarism and cheating.

“As new digital tools evolve, we continuously evaluate how they are used within our schools. When certain technologies or features may present concerns, we carefully analyze the risks, benefits, and overall impact on the learning environment,” a district spokesperson said.

This isn’t the district’s first challenge with AI technology. In 2024, Superintendent Alberto Carvalho unveiled a nearly $3 million chatbot called Ed, only to shelve it three months later when the company that built it laid off half its staff.

Meanwhile, Stevenson said Lens vanished from his students’ Chromebooks last week after he alerted the district that some students were using it to cheat.

“It’s encouraging, but it also reveals how haphazard the introduction of AI has been,” Stevenson said. “Teachers and school leaders spend countless hours considering each detail of the learning experience, then Google totally undermines it with the click of a button. This isn’t how education is supposed to work.”

This article was originally published on CalMatters and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.
