Better Thinkers or Better Submitters?
It’s late afternoon during finals week at Santa Clara University, and the Sobrato Family Courtyard is packed. With 85-degree heat bearing down, students have claimed every shaded corner under the palm trees, an occasional breeze cutting through the warmth.
Students sit across from each other like debaters, toggling between screens and peers—but for many, the most frequent conversation partner isn’t sitting across the table. It’s an algorithm.
Kevin Nguyen ’27, an economics major, is one of them. His most common study companions are ChatGPT and Google Gemini. When class notes fall short, he feeds them into Google’s NotebookLM.
“It would help me study by creating PowerPoints, like AI videos,” Nguyen said. When asked where he finds AI most useful, he said it helps him work through math and economics problems, explaining concepts more thoroughly when his professors leave gaps.
Nguyen’s experience is no longer the exception; it’s the new normal.
A 2025 Inside Higher Ed survey, conducted with Generation Lab, found that 85% of U.S. college students used generative AI for coursework in the prior academic year.
As generative AI becomes routine in classrooms, students and faculty at the University are wrestling with a tension the technology cannot resolve: it can sharpen the final product while dulling the thinking behind it.
The Learning Paradox
In statistics and economics, math-heavy disciplines where Nguyen spends most of his time, he finds AI useful for explaining how to solve complex problems. When he has “no idea how to even start” a creative writing assignment, like a poem for an English class, he turns to ChatGPT to break the blank-page paralysis.
Yet even as a frequent user, Nguyen recognizes the tradeoff.
“Honestly, I feel like it does not increase your knowledge because it kind of helps you out a lot,” Nguyen said. “Maybe your critical thinking part is taken away from AI.”
Nguyen added that while AI is useful in certain situations, it has its limitations. “It does help a lot if you are trying to do stuff, but not necessarily replace the critical thinking that you have,” he said.
He isn’t the only one worried.
Recent research shows a paradox: AI can improve speed and polish while weakening deeper understanding.
Developers who used AI to learn a new coding library scored 17% lower, roughly two grade points, on task evaluations than those who didn’t, according to a January 2026 study conducted through the Anthropic Fellows Program.
Conversely, a 2024 Harvard University study found that when AI is designed explicitly to follow proven teaching methods, not just as an open-ended chatbot, students learned more than twice as much in physics courses.
Educators say the key variable is not whether students use AI, but how.
That distinction plays out in real time at the University. Yi Fang, a professor of computer science and engineering and director of the School of Engineering’s Responsible AI program, said he sees the split clearly in his own classrooms.
“Very capable students master this iterative process,” Fang said. “They don’t purely do everything by themselves, because AI can help you do a lot of things. But they don’t purely rely on AI either. They think, and then use AI, and then think, and then use AI, and they finally accomplish the task.”
Fang contrasts those students with peers who skip the thinking step. “They just do a one-shot prompt, get the answer, and they’re done. They don’t really understand.” Those students often do well on take-home assignments, Fang said, but struggle on in-person exams where AI isn’t available. “This iterative process can help students more deeply understand the content. That’s the important skill students should master.”
Policy Without a Playbook
At Santa Clara University, navigating that “how” is largely left to individual professors and students. According to a Tyton Partners survey, only an estimated 24% of higher education institutions nationally have a formal, campus-wide generative AI policy, leaving the vast majority of students to sort through uneven, class-by-class rules.
However, some corners of the University are moving faster than others.
The School of Engineering has established the AI² (Artificial Intelligence x Academic Integrity) collaborative task force, a group dedicated to exploring how these tools can be integrated into learning and research while maintaining ethical standards.
The initiative provides faculty with sample syllabus language and discipline-specific examples for using AI tools, including code-generation tools such as Copilot, across different courses.
For students outside of the engineering labs, the rules remain fluid. Within the School of Engineering itself, the official stance strongly encourages instructors to clearly communicate their policies. However, absent an explicit statement from an instructor, the School of Engineering advises that the use of or consultation with generative AI be treated analogously to assistance from another person. In practice, this means students shouldn’t submit AI-generated work as their own any more than they would submit an essay written by a friend.
“Usually teachers say not to use AI, or if they do allow it, they do state how we can use it,” Nguyen said. He noted that professors at the University make their expectations clear, but said awareness of the rules doesn’t always translate to following them. “Maybe students are not following it fully 100%.”
This culture of distributed governance puts the burden of clarity on the faculty. While the University emphasizes “responsible experimentation,” the penalty for guessing wrong can be severe. Violations of an instructor’s specific AI policy are handled through the University’s standard academic integrity protocol, the same system used for traditional plagiarism.
When students are confused, or when they want to push beyond basic prompting, they often end up at the University Learning Commons, Technology Center, and Library. Sophia Mosbe, an applied sciences librarian at Santa Clara University, sees a massive spectrum of fluency.
Some students have been using AI since middle school and want to go deeper. Others have never touched it and are anxious because a class now requires it. Many come asking about the campus’s partnership with Google for Gemini and NotebookLM.
“We always tell patrons it is better to use your SCU with Gemini and Notebook because there is protected access to the system,” Mosbe said in reference to students’ university accounts. According to the Santa Clara University Library’s Generative AI guide, under the University’s contract with Google, student prompts and uploaded materials are not used to train AI systems, nor are they reviewed by humans.
But the biggest hurdle Mosbe sees isn’t privacy. It’s too much trust.
“AI is not a valid resource. It is not the same as if you were speaking to a subject specialist,” Mosbe said. “It is unable to actually think or actually make conclusions on that information.” Mosbe added that there is a tendency in society to place excessive trust in AI-generated content. “We see a favoritism to overly trust AI output as if it was an expert when that is not the case,” she said.
She sees this overreliance manifest in problematic ways: students asking AI for legal advice, using it to write literature reviews or adopting AI-generated essay ideas without scrutiny. Those ideas, Mosbe said, “don’t have a lot of grounding and are not well thought out, but are taken at face value when there needs to be more digging.”
Mosbe tries to recalibrate student expectations with a simple analogy.
“I’m a big promoter that AI is a tool, like a hammer. You can build a house with a hammer, you can make some beautiful things with a hammer,” she said. “However, if you’re doing surgery, sometimes you can use a hammer, but most of the time it’s not ideal.”
The New Divide
The gap between knowing how to use AI as a hammer versus trying to use it for surgery is emerging as a new form of academic and professional inequality.
Nationally, 44% of students who regularly use AI pay for premium versions of the tools, according to a 2024 Tyton Partners survey of 1,600 students.
Asked whether AI gives some students an advantage over others, Nguyen said, “I think so.”
“I feel like every student at least knows how to ask the basic stuff,” he said. “But maybe research-wise, I feel maybe students are not aware of what’s actually real and what’s actually fake. Like, sometimes AI, they generate a lot of crap stuff. And students might not know.”
That advantage is bleeding into the labor market. While National Association of Colleges and Employers (NACE) data shows that only 10.5% of entry-level jobs explicitly mention AI in their descriptions, the 2024 Work Trend Index from Microsoft and LinkedIn found that 71% of leaders say they’d rather hire a less experienced candidate with AI skills than a more experienced candidate without them.
According to a NACE survey of nearly 1,500 college seniors, less than one-third of the class of 2025 used AI in their job search. Non-users cited ethical concerns, lack of expertise and fear of academic or professional consequences for undisclosed AI use.
Students at Santa Clara University are caught between a world that increasingly demands algorithmic efficiency and one that vehemently rejects those who rely on it too much.
This tension is particularly acute at a Jesuit institution like the University, where the educational philosophy of “cura personalis,” care for the whole person, emphasizes the development of independent judgment and character.
At the University’s Markkula Center for Applied Ethics, internet ethics program director Irina Raicu has argued in an essay that AI can synthesize and analyze data but cannot substitute for the dimensions of personhood—including dignity, spirit and agency—that a Jesuit education aims to cultivate.
“I feel like it’s part of our life now,” Nguyen said, reflecting on the shift. “We use it every day to search up basic stuff. Not just school, but everything.”
Fang put the same tension more bluntly. “You want AI to augment people’s thinking, not replace it,” he said. “Thinking and reasoning are still very important skills. AI is a tool. It can help you do some tasks, but it should not replace your reasoning. It should not replace your thinking.”
The University’s challenge is to ensure that tools like Gemini and ChatGPT enhance, rather than replace, that human core. As students head into finals, the task is no longer just passing the test.
It’s knowing when AI helps, and when thinking for yourself still matters more.