The innovation: An AI that only asks questions
To reposition GenAI for this purpose, O'Connor developed a structured prompt that essentially gave the AI a specific job to do. Students copied this prompt, along with a passage of text, into a designated GenAI model. The AI's role was not to provide analysis but to push students to explore the passage's key ideas and claims until they could demonstrate in-depth understanding. In this way, students deepened their comprehension of course readings and research texts. (Notably, this was the only form of GenAI use permitted in the course.)
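O'Connor's actual prompt text isn't reproduced here, but a minimal sketch of the pattern might look like the following, assuming the OpenAI Python client; the prompt wording, model name, and end-of-session phrase are invented for illustration:

```python
# A minimal sketch of a question-only close-reading prompt, sent through
# the OpenAI Python client. The prompt wording, model name, and stop
# phrase are illustrative assumptions, not O'Connor's actual design.
from openai import OpenAI

CLOSE_READING_PROMPT = """\
You are a close-reading tutor. Your only job is to ask questions.
Rules:
- Ask one question at a time about the passage the student provides.
- Never summarize, analyze, or answer questions yourself.
- If an answer is inaccurate or shallow, ask a follow-up question that
  sends the student back to the evidence in the text.
- Adjust the difficulty of your questions to the student's responses.
- When the student has accurately explained the passage's key ideas and
  claims in their own words, say exactly: UNDERSTANDING DEMONSTRATED.
"""

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
passage = "..."    # the course reading the student pastes in

response = client.chat.completions.create(
    model="gpt-4o",  # stand-in for whichever model the course designated
    messages=[
        {"role": "system", "content": CLOSE_READING_PROMPT},
        {"role": "user", "content": f"Here is the passage:\n\n{passage}"},
    ],
)
print(response.choices[0].message.content)  # the tutor's opening question
```

In the course itself, students simply pasted the prompt and passage into a chat interface; the API form above just makes the division of labor explicit: the instructions define the job, and the passage is the material.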
The implementation: A judgment-free practice space
With a GenAI teaching grant, O'Connor introduced the "Close-Reading Tool" and foundational GenAI literacy information in a Week 6 lesson (as shown in the lesson plan). Students then completed two low-stakes homework assignments designed for practice.
He also invited students to use the tool voluntarily while drafting major assignments, including:
- Short Essay 3 (a conversation between scholarly sources relative to the students' inquiry questions)
- Short Essay 4 (using a course reading as an analytic lens for primary sources relevant to the students' term-long project)
- Final Essay
If students used the tool, he simply asked that they share their chat transcript with their submission, providing him insight into their process.
Finding 1: "Tremendous growth" for motivated students
O'Connor said the success of the Close-Reading Tool depended on the quality of the interaction (the prompt, the text, and the students' own answers) and of the output it produced: the questions asked and how their difficulty was recalibrated in light of student responses. While most students did not use the tool beyond the required assignments, those who did showed "tremendous growth in their understanding of difficult texts" in their subsequent writing.
The AI proved to be a surprisingly nuanced partner.
"The LLM was able to discern very subtle inaccuracies in student responses and offered excellent questions to help students recognize their misunderstanding and reevaluate evidence to correct their misunderstanding." — O'Connor
Critically, students judged the tool's value not by the AI's output but by what it enabled them to produce: clearer ideas and a deeper grasp of the material, arrived at through their own thinking.
Finding 2: The power of clear instruction
A key insight was just how effectively the LLM adhered to its prescribed role. By providing a clear set of instructions, O'Connor found that the AI performed with consistency (the mechanism is sketched after this list):
- It stayed committed to only asking questions.
- It refrained from providing answers or analysis.
- It reliably kept the conversation going until the student met the criteria for demonstrating understanding.
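Why does one set of instructions hold across an entire exchange? In a chat-style API, the role instructions are resent with every request, so each new turn is generated under the same constraints. A minimal loop illustrating this, continuing the hypothetical sketch above (same `client`, `CLOSE_READING_PROMPT`, and `passage`):

```python
# Continues the earlier sketch. The system message is included in every
# request, which is what holds the model to its question-only role turn
# after turn. The stop phrase is the invented convention from the prompt.
messages = [
    {"role": "system", "content": CLOSE_READING_PROMPT},
    {"role": "user", "content": f"Here is the passage:\n\n{passage}"},
]

while True:
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    turn = reply.choices[0].message.content
    print(turn)
    if "UNDERSTANDING DEMONSTRATED" in turn:
        break  # the prompt instructs the model to emit this phrase
    messages.append({"role": "assistant", "content": turn})
    messages.append({"role": "user", "content": input("Your answer: ")})
```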
Lessons learned and next steps
The experiment was an iterative process. After the first assignment, O'Connor noticed the tool wasn't consistently pushing students beyond simple text retrieval, so he revised the master prompt, adding criteria that demanded deeper reasoning. By the second assignment, the tool was eliciting more higher-level thinking from students and asking them to explain their reasoning when it was missing from their responses.
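To picture what such a revision can look like, here is a hypothetical before-and-after of the criteria portion of a master prompt; neither version is O'Connor's actual wording:

```python
# Hypothetical illustration of tightening a master prompt's criteria so
# questions demand reasoning rather than retrieval alone.
CRITERIA_V1 = """\
- Ask the student to locate the passage's key claims.
"""

CRITERIA_V2 = CRITERIA_V1 + """\
- Do not accept quotation or paraphrase alone: ask why the claim
  matters and how the author supports it.
- If an answer states a conclusion without reasoning, ask the student
  to walk through how they reached it.
"""
```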
For his next iteration of the course, O'Connor is considering several adjustments:
- Introduce the tool earlier: Get students comfortable with it sooner to make it more useful.
- Bridge AI and peer interaction: Use the tool as the first step in a two-part process. First, students practice with the AI to learn how to ask probing questions. Second, they apply those skills in class discussions with their peers.
The ultimate goal is to leverage GenAI as a judgment-free training ground, helping students develop the sophisticated questioning skills needed for close reading and rich, productive academic conversations.