From reading to engagement
Singh deliberately designed the assignment with minimal procedural guidelines so that students would make an informed choice about what to focus on, drawing on their individual readings of the course's research literature on human-AI interaction. The core instruction was simple:
In the process of reading and reflecting on [the course readings], I ask that you engage with at least one GenAI system. This interaction should help deepen your understanding of the readings by providing hands-on experience . . . Your reflection should connect the experience to the concepts, challenges, and opportunities covered in the readings, contextualized by your practical use of the system.
Students were encouraged to explore a wide range of systems, from text-based models like ChatGPT to code generation tools like GitHub Copilot, and connect their use directly to the course literature:
You may use a variety of systems in your reflection. Consider the following examples, though you are not limited to these applications:
- Text-based systems (e.g., ChatGPT, Claude, Gemini, etc.) to discuss the content of one or more of the papers, brainstorm related topics or project ideas, or better understand a concept
- Code generation tools (e.g., GitHub Copilot) to prototype an idea related to the readings
- Multimodal models to interact with visual or auditory materials relevant to the reading's content
- A GenAI system from one of the readings, to get hands-on experience interacting with it
Finding 1: Students uncover the nuances of human-AI collaboration
Students' reflections surfaced several critical themes that enriched class discussions:
- The fine line between creativity support and homogenization in AI-assisted writing.
- The importance of actively challenging the LLM versus passively accepting its output.
- The cognitive load that comes with unlimited text generation.
- The uncertainty surrounding copyright ownership of AI-generated output, and the unreliability of AI for deep analytical tasks.
As one student noted, this hands-on approach was essential for turning abstract concepts into concrete understanding:
"I believe that by engaging with it firsthand, we were better able to understand the broader conversations about human-centered AI and how we might improve on existing AI interfaces. I also believe that GenAI isn't something that should be shied away from and must be tackled head on to improve learning outcomes." — Course participant
Finding 2: The 'new' signals of good writing in the age of AI
The assignment prompted conversations about what constitutes skillful writing when fluency is easily achieved through automation. Singh began to see summarization of a given week's readings as merely a baseline, pushing him to look for deeper evidence of learning in student reflections.
This led to a useful perspective on evaluating student work in the presence of AI:
"As one student pointed out in a class discussion, the signals of baseline writing ability used to be grammar, quantity, fluency, etc.—but these are not discriminative of good writing in the age of LLMs. The new signals might be deviation from the patterns LLMs implement by default." — Singh
Lessons learned: Don't limit students to the "spectator's view"
Singh believes that in a field as dynamic as generative AI, direct interaction is non-negotiable. He sees hands-on, reflective practice as fundamental to developing the critical GenAI literacy that students and faculty need.
By having students use the tools, critique their outputs, and connect their experiences back to the literature, the course sought to empower students to become informed, critical contributors to the future of human-centered AI.
"When we teach students about generative AI, I believe we must not limit them to the spectator's view." — Singh