
Editorial: Brown should prioritize its foundational learning goals when addressing AI

This semester, a new phrase is haunting professors’ syllabi: artificial intelligence. Each course seems to have a different statement on AI: For some classes, the use of AI is defined as plagiarism; for others, it is permitted as long as students cite how the technology was used. Some syllabi don’t mention AI at all; others enthusiastically encourage students to utilize it. Brown professors should have discretionary power over AI regulation, but the University should also offer guiding principles for the non-negotiable elements of learning that must be preserved as part of a Brown education.

Over the summer, the University released a statement regarding the impact of AI on Brown’s academic mission. The communication encouraged faculty to outline clear rules about what is and is not permitted in their classrooms. Other universities have released similar statements, including guides on how to write AI policies for syllabi. Given Brown's wide range of disciplines and their unique learning goals, it makes sense to let professors implement their own course-specific policies instead of a blanket University regulation. But if we want to safeguard the fundamental features at the heart of student learning, Brown must recognize both the threat and potential of these technologies. 

There is significant value in learning without AI. Chatbots like ChatGPT deprive students of a learning process that is central to academic growth and skill building. By using ChatGPT, students don’t have to struggle through the challenging, iterative process of writing a research paper, solving a problem set or completing a coding project. These tools allow students to bypass the process of using and developing their critical thinking skills. Rather than prioritizing learning, AI systems maximize productivity and efficiency, which explains why industries have embraced them with open arms — but also why academia should proceed with caution.  

This doesn’t mean that Brown should ban AI completely. Instead, the University should consider how it can integrate AI as a complementary feature of its academic mission. In doing so, it can continue to promote its liberal learning goals while also equipping students with the skills to enter an increasingly digital job market. For example, just as there is a WRIT requirement, perhaps Brown could create a “technology” requirement. Courses under this requirement might include classes on computer science, AI prompt engineering, technology literature, cybersecurity and more. This way, students can engage with AI in a way that complements rather than conflicts with a liberal learning education, and develop a better ability to think critically about the uses of AI.

Two of Brown’s liberal learning goals most threatened by AI models like ChatGPT are 1) improving speech and writing and 2) enhancing aesthetic sensibility. AI reduces writing and art, time-intensive creative practices, to near-instant processes of creation. The integrity of literary and visual arts courses, as well as any courses that require some mode of generative writing or creativity, is threatened by readily available AI tools online. If Brown truly believes in the importance of these two learning goals, then the University should ensure that they cannot be compromised by easy access to AI creative generation. This could be accomplished, for example, by assigning more in-class creative work that eliminates the possibility of reliance on AI tools.

Similarly, with their ability to write quick solutions for almost any homework problem, AI chatbots offer an easy way for students to bypass the work that goes into quantitative or coding-based courses. Brown professors should respond to this threat with the same philosophy held toward using the internet to search for answers, since ChatGPT’s answers also draw only on information already available online. While online tools can solve the problems presented to us as practice, uncovering and addressing new problems requires a human brain capable of critical reasoning — something AI is not yet capable of emulating.

These are just a few examples of the ways that AI can compromise the integrity of Brown’s learning goals. As the capabilities of artificial intelligence grow, so too will the possibilities for action that the University should consider to protect its academic mission. We call on Brown to strike a balance between the drawbacks and benefits of AI within its academic philosophy in order to protect students’ potential to learn, grow and contribute to the world around them.

Editorials are written by The Herald’s editorial page board and aim to contribute informed opinions to campus debates while remaining mindful of the group’s past stances. The editorial page board and its views are separate from The Herald’s newsroom and the 134th Editorial Board, which leads the paper. This editorial was written by the editorial page board’s members Paul Hudes ’27, Paulie Malherbe ’26, Laura Romig ’25, Alissa Simon ’25, and Yael Wellisch ’26.
