In Brief: Iterative Revision and GAI in Course Design
Introduction
As generative AI becomes increasingly integrated into teaching and learning in higher education, instructors are navigating complex questions around academic integrity and student learning. While many emerging policies emphasize detection and sanctions, this guide offers a proactive alternative—a continuous improvement approach that privileges process over product in course design. By centering trust, transparency, reflection, and student agency, iterative revision reframes the instructor’s role from surveilling potential GAI misuse to intentionally designing scaffolded opportunities for process-focused learning. In this shift, instructors provide meaningful feedback that helps students build their learning over time, moving from producing final answers to applying feedback for deeper understanding. Grounded in research evidence from the learning sciences and inclusive pedagogy, this iterative approach mirrors authentic cycles of refinement, sustained growth, and ownership in learning.
1. What Does the Research Tell Us About Iterative Revision?
Iterative revision—the practice of reflecting on feedback and applying it to improve work—is central to deeper learning. The concept draws on Donald Schön’s influential work on reflective practice. Greenwood (1993) explains Schön’s distinction between reflection-in-action, in which learners think critically while engaged in a task, and reflection-on-action, in which they revisit prior work to better understand the choices and outcomes that shaped it. This brief focuses on the latter: revision as post-process reflection, where learning is strengthened through repeated cycles of improvement. When instructors embed multiple feedback loops, particularly in low-stakes contexts, they put the principles of formative assessment into practice by stretching students’ thinking, skills, and attitudes through opportunities for incremental growth.
This iterative process is already embedded into several high-impact practices (HIPs) that have been demonstrated to positively and significantly influence student learning. For example, writing-intensive courses, collaborative projects, service-learning experiences, capstone projects, and e-portfolios often use scaffolded assignments that break large tasks into meaningful parts, with each part receiving feedback and revision before the next iteration. These structures emphasize process over product, allowing students to integrate improvements and strengthen their work, develop agency and ownership, and engage in authentic learning. These checkpoints also give instructors opportunities to intervene and guide learning before the end of a course, shifting the focus from evaluating a final product to supporting an incremental approach to learning.
Instructor feedback is the spark that drives iterative revision. For feedback to be transformative, it must be timely, constructive, and actionable. When intentionally designed, scaffolded opportunities—such as drafts, revisions, and low-stakes practice—create cycles of improvement that shift the focus from one-time performance to sustained growth. These cycles exemplify effective teaching and learning techniques that stretch students’ learning through compassionate challenge (Cavanagh 2023) while also providing support and direction within relationship-rich contexts, where valuable feedback helps students thrive (Felten and Lambert 2020).
Integrating iterative revision into assessment reclaims the energy otherwise lost to policing academic misconduct and reinvests it in fostering a culture of reflection, relationships, integrity, and learning.
2. Why Does Iterative Revision Matter When Teaching with GAI?
Iterative revision reduces students’ misuse of and over-reliance on GAI by emphasizing process over product. When assignments include drafts, feedback loops, and scaffolding, instructors reduce students’ incentive to use GAI as a shortcut to more polished work.
In courses with closed GAI policies, a recent trend for combating misuse is to require edit access to drafts as evidence of student work. While this can help track students' progress in their writing development, it risks shifting attention from teaching to surveillance, adding an extra burden to already overextended instructors. An alternative draws on research emphasizing the value of trust-based instructor–student relationships (Felten and Lambert 2020), in which deeper learning takes place through transparency, support, and meaningful feedback. By lowering the pressure to “get it right” on the first attempt and normalizing failure as an essential part of learning (Nunn and Gruyter 2019), instructors help students build habits of reflection, feedback use, and sustained growth.
When courses adopt open or conditional GAI use policies, GAI can serve as a partner in learning, depending on the instructor’s goals and disciplinary context. In this approach, GAI can function as a support tool, helping students brainstorm ideas, test explanations, or receive provisional feedback, while students remain responsible for critiquing, refining, and extending the outputs. For example, human-in-the-loop (HITL) frameworks such as the AI-Educational Development Loop (AI-EDL) invite students to refine AI-generated drafts, positioning the technology as a starting point for engagement rather than a finished product. In this way, GAI complements learning through iterative feedback cycles while keeping students at the center of the process.
Whether instructors adopt closed, open, or conditional policies on GAI use, the priority remains the same: embedding scaffolding, elevating process over product, and normalizing failure so that students focus on growth rather than performance.
3. What Can We Do?
The following ideas offer scalable entry points, ranging from quick additions to existing assignments to deeper collaborations with students on ethical GAI use, along with estimated times for instructors to implement each one.