Faculty Sound Off

What role does AI play in your classroom?  

In April 2025, California State University launched ChatGPT Edu, a private version of ChatGPT funded by the Chancellor’s Office to encourage innovation. Between AI’s availability on campuses and prevalence in the news, the technology and surrounding issues have become too big to ignore.  

We asked faculty from across the CLA how they are using and/or critiquing AI in their classes.

  • How are they teaching students about the risks and limitations?  
  • What opportunities does AI provide in their specific fields?
  • How are they engaging with AI’s ethical implications?  

Their individual responses provide a snapshot of their views at this point in AI’s development. It’s important to note that these views do not reflect Cal Poly’s position on AI, and that they may change as the technology continues to evolve.


Casey McDonald-Liu – Assistant Professor, Journalism

Watching that lightbulb moment, when they realize they nearly gave away their voice to a computer, is one of my greatest joys as an educator. 

In my strategic communications class, I use AI to eliminate hurdles to learning (like formatting citations) and as a tool for helping students stand out in the online crowd. While AI is creating the ultimate generic writing, I show my students how to use it as a jumping-off point: by comparing AI slop with a genuine, conversational human voice, they learn what makes their own writing stand out.

AI can be fantastic for ideation and clarity. But nothing turns off a reader faster than empty professional word salad. In the attention economy, that dead-eyed, uncanny valley writing AI often generates is like white noise, which can kill my students’ careers before they start. 

I use AI to structure my ideas, too, but I have worked in my field long enough to know when AI is giving me garbage. Most people can feel it, even if they don’t understand why. My students don’t have that kind of long perspective. Worse, they don’t think they’re smarter than AI, which is a tragedy. Yet the type of effective communication that will make them irreplaceable in the workforce is everything that makes them human.

All this is to say: for students who really believe the AI “slop” is preferable, one of my favorite classroom strategies is having them use AI to write a short statement about their “why.” Then, I ask them to write it again, entirely from scratch. The AI version may inform the structure of the second draft, but their own words are always more powerful, more passionate and much more pleasurable to read. The best part is when I read both aloud to the student. Their version feels like a warm hug, and they can hear the difference.


Christine Lee – Professor, Graphic Communication  

AI has become a catalyst for introspection and growth, challenging students to refine their thinking, elevate their design practice and take ownership of their creative voice. 

In my GRC 429 Mobile User Experience class, we begin with a candid conversation about the role of AI, what it offers and where it falls short. Together, we develop a classroom policy that positions AI as an assistant rather than an author. I stress that AI is not a substitute for understanding the “why,” and that students must account for the limitations of the data these tools draw on and the ways inherent bias can unintentionally influence design outcomes. Students share examples of “AI-gone-wrong,” which spark lively debates on ethics, originality and the consequences of overreliance.

Over the past two years, I've discovered many AI tools that support nearly every phase of the user experience (UX) design process. AI allows designers to rapidly identify themes and insights from large data sets, experiment with multiple design directions and iterate at a pace we have never seen before. Rather than replacing the design process, AI has helped designers enhance their skills and focus their energy on the parts of UX that require human judgment, curiosity and empathy. It has transformed the way we “do” design. 

Throughout the course, students are grounded in foundational design principles, with a strong emphasis on problem definition, user research and strategic problem solving. By the time we begin exploring AI vibe coding tools in week 9, they have built the skills and confidence to approach technology with intention. These vibe coding tools allow them to quickly translate design concepts into functional code, accelerating prototyping and iteration. I introduce them at this stage to reinforce that AI is most powerful when guided by solid design thinking. With the technical burden reduced, students can spend more time on strategic and creative tasks. One student reflected, “Moving forward with my UX career, I think it's best to really focus on understanding the design problem and clearly articulating my design process/thinking ... this exercise pushed me to focus more on research and creative thinking.”  


Stina Attebery – Full-Time Lecturer, Interdisciplinary Studies in the Liberal Arts

I hope my classes help develop critical frameworks to think about the cultural, political and economic impact of any new technology.

I work to demystify generative AI in my courses on technology, media, art and popular culture. There’s a lot of speculation about AI reshaping the future, but much of this hype glosses over how these programs function. I have banned generative AI use for all my course assignments, while scheduling readings and discussions about how it works to encourage critical thinking about AI.

It’s worth noting that students are scrutinizing faculty’s AI use as much as we are observing theirs, and that overreliance on large language models (LLMs) risks devaluing the kind of mentorship that many students want. When I ask students about their own experiences with LLMs, some report feeling upset when their professors tell them to consult ChatGPT instead of coming to office hours, or when professors use ChatGPT to grade student work.

I introduce students to a variety of texts to help contextualize their personal experiences. The reading I use most often is “ChatGPT Is a Blurry JPEG of the Web” by Ted Chiang, published in The New Yorker in February 2023. Chiang does an excellent job explaining how generative AI compresses data and interpolates possible missing pieces of text to create a “blurry” version of the original texts. This reading helps reframe LLMs as “lossy text-compression algorithms” rather than “artificial intelligence” –– an important first step in grappling with what these programs can and cannot do. Our discussions teach students to think critically about technology: the tech industry’s pressure to continually market novelty; the history of de-skilling, labor and automation; and the impact of LLMs on students’ own capacity for critical thought.

If an LLM can help streamline writing or research, it can only do so for people who already know how to do these tasks unassisted. It takes a certain level of expertise to know that generative AI is hallucinating or oversimplifying information. LLMs are only useful for people who have already built up the experience to use them knowledgeably –– not students. Students need a supportive space to struggle with new information and skills.


Marc Halusic – Part-Time Lecturer, Psychology and Child Development

We still find value in bouncing ideas off other humans even if we don’t expect them to have 100% factual knowledge, and that should be the approach we take with AI.

I use the metaphor that my class assignments train students’ brains the way lifting weights builds their muscles. If they use AI like a personal trainer who gives them pointers, I fully support and encourage them. If they use AI like a machine that lifts the weights for them, that is unethical.

Students may use AI for written assignments, but only if they complete the assignment on their own first and submit three things: the original draft, their conversation with AI about improving it, and the final draft revised using AI’s pointers and feedback.

To teach this style of thinking, I pose one AI discussion question per day in my classes. Students break into small groups, discuss it amongst themselves, and then feed their group answer into AI for feedback, going back and forth to refine it. They learn to arrive at a better answer iteratively. Going forward, I also intend to assign the episode “Less Chat More Bot” from the excellent podcast You Are Not So Smart, which does a phenomenal job of breaking down better and worse ways to use AI.

I am also finding creative ways to use AI with our new access to ChatGPT. I have started creating custom GPTs for my students that will quiz them on the material for each exam in a conversational way. I instruct my students to use the GPT in Voice Mode, and then it does a decent job of simulating a fellow student or tutor asking questions and giving feedback on the accuracy and quality of answers.

If you are using AI just to get factual answers, you are using it in the worst possible way. AI should be treated as a smart but imperfect conversation partner who can push on our ideas and challenge us to think more creatively.


Aubrie Adams – Associate Professor, Communication Studies

We have more work to do in helping students value their own personal reflection and understand how generative AI can both help and harm their education.

I take a three-pronged approach to alerting students to AI’s limits. First, I refer them to an overarching class policy, which outlines basic issues (e.g., bias, copyrighted material, academic dishonesty). Second, in our first-day course introductions, we have a section on AI to outline some broad issues while also highlighting the value of submitting their own original work in the learning process. Third, when providing instructions for any given activity, I use that opportunity as a just-in-time teaching moment to remind them of the limits of AI.

In my research methods class, I teach students how to use databases for research and discuss how AI can enhance the process by quickly finding relevant articles. However, I also warn them about AI’s potential to hallucinate and provide misinformation. Here I share a personal example in which ChatGPT found a great article on smartphone usage, but it turned out to be from 2013 and not 2024 as ChatGPT had originally claimed. We use this moment to highlight the importance of fact-checking and verifying sources.

In my Media Effects class (COMS 384), we also watch “AI and The Future of Us: An Oprah Winfrey Special,” exploring both the positive and negative ways that AI will likely impact society. I do my best to keep course materials current with issues related to generative AI’s impacts across different contexts.

Unfortunately, I still encounter students taking shortcuts and misusing AI. Recently, a short reflection activity (worth a minuscule number of points) asked students the following question: “Given everything you’ve learned in this class, do you believe that media effects have a strong influence on you personally? Why or why not?” One student’s submission literally said, “Honestly, it's hard to say definitively since I don't experience media the same way humans do …” Sadly, this wasn’t the first time I encountered that kind of student response since the advent of ChatGPT.


Ava Thomas Wright – Assistant Professor, Philosophy

Writing is a form of thinking, and the goal of AI use in writing must be to empower autonomous thinking, not undermine or corrupt it. 

In all my course exams, students evaluate AI-generated answers to short-essay questions. Students judge each AI answer as “exactly right,” “partly right,” or “wrong,” and then explain why. LLMs tend to provide the most popular answer, even when it rests on a misunderstanding or error. Students working through the exam learn how persuasive AI can be, even when its answer is vague, incoherent, hallucinated or outright wrong.  Genuine critical thinking skills are required to see these kinds of flaws in logic or evidence when the answers are couched in such fluent language.   

In my Ethics, Science and Technology course (PHIL 323), we explicitly examine the ethical risks of pursuing artificial general intelligence (AGI) — the goal of developers such as OpenAI. Powerful AGI must be programmed to “align” its objectives with human objectives and values; otherwise, the AGI may act in unethical ways that we cannot foresee or control. These are known as the “value alignment” and “control” problems in AI ethics, and we review some prominent attempts at solutions.

In my 2024-25 Philosophical Research and Writing course (PHIL 300), two colleagues, three student assistants and I will conduct a systematic review of how AI should be used for writing argumentative research essays, supported by a CSU AIEIC grant. I also direct the AI Ethics Lab, an interdisciplinary, collaborative faculty-student research and writing lab for AI ethics topics. The lab is still small but growing.

In PHIL 300, students experiment with using ChatGPT to assist in each stage of the writing process, reflecting on best practices as we go. However, it is crucial that students learn the writing process themselves before they resort to using AI. 

 


Daniel Story – Assistant Professor, Philosophy 

Though not a replacement for human thought, AI can be a flexible partner for experimentation, creativity and self-directed learning. 

Conversations about AI in academia tend to focus on a narrow range of use cases, particularly those centrally involving the acquisition of information (e.g., asking AI about Freud) or the supplementation of intellectual labor (e.g., using AI to edit an essay). These are important. But there are many other valuable use cases—some known, many still waiting to be discovered—that are relevant to intellectual, imaginative, artistic or personal pursuits.  

One thing I try to do in my courses is encourage students to appreciate the flexibility of AI and develop a disposition for creative and reflective experimentation. To take one example: I recently asked my students to “create an AI friend,” which they were instructed to speak with about personal matters for several days. Students interpreted this task differently. Some used their AI like an interactive journal; others, as a kind of scheduling assistant or therapeutic tool. One student got together with his human friends and jokingly played around with the role-playing AI, convincing it to “remove its eyeballs” and other body parts. 

There is a risk that AI will dampen our best human qualities. We could become lotus eaters, sapped of our motivations to act autonomously in the world. This would be a welcome development from the point of view of tech companies since it would render us deeply dependent on their products. But this result is not inevitable. The creators of this technology do not get to dictate how it is used, any more than a printing press manufacturer gets to determine what is written in books. Good education around AI should encourage students to recognize this, promoting creativity and innovation. This was the hope behind the “create a friend” activity. And it is a lesson we could all benefit from internalizing.  

There is a new tool in the world, powerful and strange. It can write emails and code, generate slop for TikTok, and replace workers. But there are nobler possibilities. Those of us who have dedicated our lives to the pursuit of high human values–knowledge, art, beauty, imagination, spirituality, love–are well poised to explore them, if we recognize the opportunity and encourage others to do the same.


Linh Thuy Toscani – Assistant Professor, Art and Design  

AI can support students’ creative exploration and provide flexible options while ensuring they remain in control of the artistic process.

We use AI in ART 437 to help students brainstorm and speed up their production and post-production. I start by sharing resources to help students better understand both the possibilities and limitations of AI. We look at the work of contemporary artists, designers and developers such as Refik Anadol, a Turkish American media artist and the co-founder of Refik Anadol Studio and Dataland. We also discuss how technology can change the way we learn and solve problems, where it is appropriate, and where it takes away students’ agency and power to learn.

Students then used generative AI to experiment with image generation for their pet DNA test kit packaging layout design, one of the main objectives of our second project in the class. They used their original graphics, including vector illustrations of pet and test kit components, logomarks, and other branding elements such as color and type, to quickly generate many different options for arranging those graphics on a package. Students found that while many of the generated options were not ideal candidates for further implementation, the range of options was exciting and showed them a level of flexibility that helped with their early production. The results nicely complemented their hands-on approach to early visual drafts and sketches.

