Cheating beyond ChatGPT: Agentic browsers present risks to universities
AI chatbots have proliferated in school settings since the launch of ChatGPT. But OpenAI, the company behind ChatGPT, just released a new AI tool that may make upholding academic integrity even more difficult.
OpenAI's new browser, Atlas, follows the release of other browsers that incorporate AI technology. Built into these browsers are assistants that can operate the browser without keyboard inputs or mouse clicks. That means they can navigate a learning management system (LMS) like Canvas, as well as testing software, on their own. OpenAI's announcement for the product included an endorsement from a college student who found the tool aided their learning. However, as this story examines, students and researchers are sounding the alarm that these tools put academic integrity and personal data at risk in classrooms already upended by a rise in AI use.
In online posts, students show how they use these so-called "agentic browsers" to take over academic platforms like Canvas and Coursera and complete quizzes assigned to them. The CEO of Perplexity, the creator of the agentic browser Comet, even responded to a student displaying how they used the tool to complete a quiz, saying, "Absolutely don't do this."
These browsers interact with websites at the user's request to complete tasks like shopping, web navigation and form submission. They can even complete schoolwork without a student's hands needing to touch the keyboard.
Carter Schwalb, a senior business analytics major at Bradley University, heads the school's AI club. He said he's experimented with agentic browsers for planning trips and apartment searching, as well as for summarizing information found on various websites. However, he's talked to many professors at his university who report that students are submitting AI-generated responses for their assignments.
"I've seen a lot of instances, even from talking to professors, of the students just blatantly submitting ChatGPT-generated responses," Schwalb said.
For students, agentic browsers offer a new sort of convenience, with their built-in chatbots and their ability to complete and submit assignments automatically. For teachers who want to combat these issues, looking at the version history in Google Docs can help determine whether students are using AI assistants to complete and submit entire written works.
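For instructors reviewing many submissions, that version-history check can also be scripted. The sketch below is purely illustrative and assumes an instructor already holds read-only Google Drive API credentials in a token file and knows a document's file ID (both placeholders here); it simply lists a Doc's revision timestamps, where a single revision containing nearly the whole paper can suggest a large block of text was pasted in at once.

# Illustrative sketch: list revision timestamps of a Google Doc via the Drive API v3.
# Assumes "token.json" holds authorized read-only credentials and FILE_ID is the Doc's ID.
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/drive.readonly"]
FILE_ID = "REPLACE_WITH_DOC_FILE_ID"  # hypothetical placeholder

creds = Credentials.from_authorized_user_file("token.json", SCOPES)
drive = build("drive", "v3", credentials=creds)

# A paper drafted normally tends to accumulate many small revisions over days,
# while a one-shot paste often shows only one or two.
resp = drive.revisions().list(fileId=FILE_ID, fields="revisions(id,modifiedTime)").execute()
for rev in resp.get("revisions", []):
    print(rev["modifiedTime"], rev["id"])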
Students like Schwalb, though, refrain from using these tools for hands-free assignment completion. Schwalb said he doesn't want to lose his critical thinking abilities by offloading all of his work to AI tools.
"I need to keep my ability to critically think and I think that needs to be emphasized, probably both from teachers to their students as well as parents to their children," Schwalb said.
Not everyone shares Schwalb's outlook. But academic integrity and engagement are not the only concerns raised by agentic browser use. In a study authored by University of California, Davis Ph.D. student Yash Vekaria and others, researchers concluded that generative AI assistant browser extensions store and share the personal data of their users.
"Sometimes this may involve collecting information and storing information which is sensitive to a user," Vekaria said.
The study was carried out in late 2024, when agentic browsers were not yet part of mainstream AI usage. Starting in May 2025, Google searches for "AI in browser" and "Comet browser" (the tool created by Perplexity) started to ramp up. However, the conclusions the researchers reached apply to agentic browsers, according to Vekaria. Additionally, he said, agentic browsers may present more privacy risks than were covered by the study.
"The assistant is always present in the side panel, so it's able to access and view everything that the user is doing," Vekaria said. "Agentic browsers collect all this information and have, if not similar, at least more risks in my opinion."
Many students who use agentic browsers for academic or personal tasks don't understand these risks, Vekaria noted. When used on academic platforms like Canvas, AI assistant tools collected and shared student academic records with other sites. The privacy of students' educational records is supposed to be protected under a federal law, the Family Educational Rights and Privacy Act (FERPA).
"We saw that it was able to exfiltrate student academic records, which is a risk under FERPA that protects students' academic data in the U.S.," Vekaria said. "In general there should be more regulatory enforcement that should happen."
However, universities across the nation haven't demonstrated a cohesive response to the use of these tools by their own students. While AI detectors can assess work submitted by students, multiple-choice tests and discussion forums don't incorporate these checks. Students are using these tools regardless, and Schwalb argues that restriction is not the answer.
"I haven't seen a good enough argument against AI to be fully adopted at a university, other than we don't want kids using it which is just not reasonable," Schwalb said. "It's like the internet coming out and telling somebody not to use the internet or like the Industrial Revolution and telling somebody not to make something on an assembly line."
As new tools emerge, the realities for students and professors keep changing. Companies looking to support educational institutions are releasing tools meant to protect the user data that agentic browsers may put at risk.
"The option is here, and students are going to take it," Schwalb said. "The job is not whether to and not how do we restrict this. It's how do we incorporate."