Abstract: This assignment, developed for a fall 2023 section of an upper-division undergraduate editing course, asks students to perform a comprehensive edit of a ChatGPT-generated text. The highest stated priorities for the assigned edit were factual accuracy, rhetorical appropriateness, and completeness in relation to user need. Overall, the project successfully developed and assessed the desired learning outcomes, and served as an introduction to generative AI for students whose experience with it was limited.
In January 2023, about two months after the release of ChatGPT, the Executive Committee of the Association for Writing Across the Curriculum (AWAC) released a statement on the use of generative AI tools for writing across the curriculum. In this statement, AWAC expressed concern that AI tools could limit student learning by bypassing the unique cognitive and social development facilitated by the writing process. At the same time, AWAC advocated for the critical, strategic integration of AI tools into writing pedagogy. Because we do not know what the long-term effects on student learning will be, and because we have an obligation to expose students to the tools and processes of their future professions, many educators are similarly seeking to thread the needle of these (potentially) competing priorities by adopting what Stuart Selber (2023) describes as a post-critical stance toward generative AI tools, in which they are approached as “both an educational subject and a platform for work.”
The broadly transformational impacts of generative AI tools across disciplines suggest that unified and generalizable integrations are unlikely to be useful or effective; rather, the approach and degree of integration might be more effectively determined by learning objectives at the course and assignment levels. For example, a programming course might address potential privacy concerns; a studio art course might focus on intellectual property; a business course might focus on consumption of resources and environmental impact. While the full scope of concerns might certainly be acknowledged in any part of the curriculum, the most substantive integration will naturally occur at points that align with existing goals and learning outcomes. In a technical editing course like the one in which this project is assigned, engagement with generative AI tools aligns with goals and outcomes surrounding rhetorical ethics, which includes questions of authorship, agency, accountability, accuracy, and precision.
Technical writing is among the careers predicted to be most impacted by the generative AI turn (Kochhar 2023). Many practitioners have embraced its utility in automating the production of rote and boilerplate documents (Verhulsdonck et al. 2024; Reeves & Sylvia 2024). When used by experts who are able to assess the quality and effectiveness of documents as situated in contexts with material and ethical stakes, generative AI tools can save time and labor (Dobrin 2023; Bowen & Watson 2024). In response to this shift, many technical writing programs and teachers are leaning into the elements of content production that require human judgment (Reeves & Sylvia 2024; Cardon et al. 2023; Mallette 2024; Laquintano, Schnitzler, and Vee 2023). Laquintano et al. (2023) and Pflugfelder & Reeves (2024) point out that generative AI tools draw productive attention to competing understandings of authorship and agency: academic writing contexts center the individual author, while technical writing contexts distribute rhetorical agency across production and messaging. Similarly, in an academic writing context, attribution is primarily about giving credit; in a technical writing context, it is also about accountability. Correct attribution is necessary to maintain credibility and trust among users. Unattributed information is more likely to be inaccurate, imprecise, or biased, which can in turn lead to problems with safety, legal compliance, operational efficiency, equity, and professional reputation.
Hallucinations are another threat to accuracy and precision: in text-based AI products, they tend to take the form of either false information presented as true or citations attributed to fabricated sources. Humans and AI alike tend to believe a statement is true unless there is a specific reason to think otherwise, a phenomenon known as “truth bias.” Both humans and AI detect deception at a rate of about 50%, but AI is significantly more truth-biased, evaluating nearly 100% of messages as true (Reeves & Sylvia 2024). The practical problem of hallucination presents a pedagogical opportunity aligned with the existing goals of many professional and technical writing courses that engage with the ethics of accuracy, precision, and attribution in communication. In many cases, technical editors serve as quality control, ensuring that the information users receive takes every foreseeable precaution against these potential harms. For this reason, engaging with AI-generated texts in an editing class not only exposes students to the utilities and weaknesses of generative AI but also creates an opportunity to deepen students’ understanding of existing higher-order course goals.
The course for which this assignment was designed is an upper-level undergraduate course in technical editing housed in the professional and technical writing program. It is populated primarily by majors and minors in this program, as well as students from the English and creative writing programs. Because of the small number of participants, I did not collect demographic data, in order to preserve anonymity. In general, white women made up the majority of participants. All spoke English as a first language. The institution as a whole is a Predominantly White Institution (PWI) that enrolls approximately 65% women and 35% men. Among first-time undergraduates, 33.9% are first-generation students, and 44.4% receive Pell Grants (Institutional Research 2023). In 2023, when I first assigned this project, 41% of students surveyed reported having never accessed ChatGPT, though by 2024 that number had dropped to 27.8% (Casey 2024). While these numbers may seem low, and self-report may be a factor, they align with data showing that men and students from households with higher incomes and higher educational attainment are more likely to use generative AI tools (National University 2024). These populations are less represented on our campus and in our program.
The editing course is most often taken later in the program sequence. Though it does not have any explicit prerequisites, most students have taken one or more technical or multimodal writing courses before enrolling. The course objectives ask students to demonstrate the following:
An understanding of the editor’s role in producing a text
Knowledge of the fundamentals of style, grammar, and usage
The ability to prioritize editing issues from global concerns through proofreading
The ability to clearly and persuasively articulate the reasons behind editing decisions
Familiarity with the tools and methods of editorial markup on both page and screen
I chose to develop an AI-centered assignment in the editing course rather than in a more production-based course in the curriculum for two reasons. First, in this early stage of adapting to accessible generative AI tools, it appears that using AI for initial drafting and humans for fact-checking and editing will be an increasingly common scenario for writers and editors in the workplace. Practitioners report using generative AI for research and writing to a greater extent than for editing and revising (Reeves & Sylvia 2024). Because AI is unable to reliably evaluate whether a statement is true or false, human judgment is necessary for creating “tailored, rhetorically aware, user-centered communication” (Mallette 2024, p. 290). It follows, then, that as use of generative AI tools becomes more integrated into workflows, editors will spend more time on the tasks that require human judgment while automating those that do not (Mallette 2024; Verhulsdonck et al. 2024).
Second, people outside the discipline of technical writing and editing may not be aware that addressing and providing feedback on higher-order content and ethics concerns is a core role of working editors. Students are no different; as novice editors, they often focus on sentence-level editing at the expense of structural and rhetorical concerns. Generative AI tools are highly effective at creating clean prose but less so at tailoring text for a local context and a concrete audience and purpose; therefore, working with AI-generated texts helps prevent students from getting caught up in lower-order and mechanical concerns.
This assignment is the first of three major assignments in the class. The first focuses on content; the second, organization and structure; and the third, grammar, style, and mechanics. This trajectory aligns with the structure of a number of technical editing textbooks, including the one used for the course, Cunningham, Malone, and Rothschild’s Technical Editing: An Introduction to Editing in the Workplace (2019). The learning outcomes for project one are as follows:
Recognizing the strengths and weaknesses of AI-generated text
Assessing a rhetorical situation
Evaluating a document for completeness in the context of a particular audience and purpose
Creating content necessary for comprehension and use by the target audience
Checking content for accuracy
Checking content for internal consistency
Using Word’s Track Changes and Comment features
Each of these outcomes aligns with one or more of the course goals, and all goals (with the exception of “knowledge of the fundamentals of style, grammar, and usage”) are addressed by the assignment outcomes.
Students are provided with a Word document containing AI-generated text (included with the assignment sheet) and asked to edit it for accuracy, completeness, and consistency, tracking their changes. The text is a 750-word recommendation report for creating a pollinator garden in a local public park; the audience is the Mayor, the City Council, and the Director of the Parks and Recreation Department. Students are asked to perform a “substantive edit,” a term that is articulated, defined, and applied as part of the scaffolding work for the assignment. To help guide the process, the assignment sheet suggests they keep the following questions in mind:
Is all of the information accurate?
Is all of the information relevant to the stakeholders?
Will the stakeholders be able to make a decision based on the information provided, or is more (or different) information needed?
Is the information internally consistent?
These questions, taken together, serve to direct students’ editorial attention to higher-order concerns in the document, especially accuracy and completeness based on audience, which are the focus of the course’s first unit.
I provided students with the text instead of having them use AI to generate it themselves because the projects in this class have historically used provided texts, which allows the bulk of class time to focus on editing rather than drafting. Iterative prompt engineering is a valuable skill that is addressed in other courses in our program, but it is outside the scope of this course. The text was generated using ChatGPT 3.5, which was sufficient for the task at the time. The initial prompt was “The City of Conway is considering planting a pollinator garden in one of its parks. Please write a 750-word recommendation for where it should be located, what should be planted there, and how much it would cost initially and for maintenance.”
I chose to generate a text related to the local environment because it is an area in which ChatGPT was likely to be inaccurate. The AI did an excellent job of creating a list of plants that are both good for pollinators and indigenous to the area, but it included some inaccuracies related to execution in a specific local context. For example, it recommended two local parks on the basis of their central location; however, one of those parks is located outside of town. The rest of the content was widely available factual information, which is where generative AI excels. In order to introduce more inaccuracies, I reran the same prompt but asked that the text include some plants that would not serve the stated purpose and that it quote cited sources, an area in which ChatGPT is weak and prone to hallucination. Though the organizations and/or publications quoted throughout are real, the quotations themselves are fabricated. Not only do they not exist in the sources cited, they do not exist as direct quotations in any verifiable way. For example, the AI-generated text included the passage “according to a study published in Environmental Entomology, ‘Native plants are more effective at attracting and supporting native pollinators compared to non-native species.’” Environmental Entomology is an existing journal published by Oxford University Press, and the content of the statement is accurate; however, the exact quotation does not appear in any of its issues. This is a common form of AI hallucination.
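Instructors who adopt this assignment will need to regenerate the draft periodically as models change, and the two-step generation described above can be scripted rather than run through the chat interface. The following is a minimal sketch using the OpenAI Python client, not a record of my original workflow; the model name and the wording of the seeding instruction are illustrative assumptions, and any current model could be substituted.

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

base_prompt = (
    "The City of Conway is considering planting a pollinator garden in one of "
    "its parks. Please write a 750-word recommendation for where it should be "
    "located, what should be planted there, and how much it would cost "
    "initially and for maintenance."
)

# Step 1: request a clean first draft.
first = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; swap in whatever model is current
    messages=[{"role": "user", "content": base_prompt}],
)
draft = first.choices[0].message.content

# Step 2: reseed the draft with weaknesses for students to catch: plants that
# do not serve the stated purpose and quoted sources, which the model is
# likely to fabricate. This wording paraphrases the follow-up request
# described above.
seeding_prompt = (
    "Revise the recommendation so that it also includes a few plants that "
    "would not support pollinators, and support several of the claims with "
    "direct quotations from named publications."
)

second = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": base_prompt},
        {"role": "assistant", "content": draft},
        {"role": "user", "content": seeding_prompt},
    ],
)

print(second.choices[0].message.content)

Because the output is nondeterministic, the seeded errors will differ from run to run and should be verified before the text is distributed to students.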
In order to successfully complete the assignment, students need to do the following:
Delete unnecessary information based on the needs of the audience (the plants that do not serve the stated purpose)
Add information based on audience (this will likely vary, but I am looking primarily for more specific details in the Introduction and Location Selection sections that would help stakeholders make a decision)
Address the inconsistency in the park’s location
Address the fabricated quotations
Confirm the Latin names and definitions of the recommended plants and that they would thrive in the local climate
Confirm that cost estimates are roughly correct OR generate more specific cost estimates
Because much of the information is accurate, and the work of verifying it therefore does not show up in the tracked changes, students accompany the edited document with a Letter of Transmittal in which they describe the changes they made, the reasoning behind those changes, and their editing process.
Before the introduction of this project, the course had covered the editing process, assessing a document in terms of its rhetorical situation, and planning and executing an edit using Track Changes in Word. Over the several weeks after this project was introduced, we built the skills needed to achieve the outcomes by reading and discussing textbook material on editing for completeness and editing for accuracy. These discussions were interspersed with exercises practicing those skills, completed both collaboratively and individually. Feedback on the collaborative exercises was provided in class; on the individual exercises, in writing. We spent one class period early in the process working with ChatGPT, which most of the students reported they had never used, though they were aware of it. While we identified and discussed the problems with citing sources that are characteristic of ChatGPT, the scaffolding exercises did not use AI-generated text, relying instead on exercises from the textbook.
In general, I was pleased with the students’ performance on the assignment, and I think it was successful in moving them toward course goals. The average grade on the assignment was 77.83%, which is similar to average scores for previous assignments on completeness and accuracy that ask students to edit human-generated rather than AI-generated texts. The most successful projects identified and corrected all major and minor inaccuracies and made logical suggestions for adding and deleting information based on audience need. While I had a couple of changes in mind based on audience need (a more detailed introduction and more description of the recommended locations, for example), students took different approaches. For example, several students recommended adding more detail to the budget section, reasoning that those material details would be the most important deciding factor for the relevant stakeholders. Others recommended more background information on the benefits of pollinator gardens. One student added language about community and social engagement in order to appeal to the target audiences’ perceived values. Others focused on the accessibility of language choices. I accepted edits for audience that were effective in terms of the document and were explained in the letter of transmittal in a way that demonstrated an understanding of audience. The more successful recommendations were those that considered stakeholders’ specific needs and grounded the recommendations in course concepts.
In order to get a fuller sense of students’ experience with the project, I conducted an anonymous IRB-exempt survey soliciting basic feedback. The survey asked students to rate on a four-point Likert scale how useful the assignment was in preparing them to meet each of the assignment outcomes. Because of the relatively low response rate in an already small sample, I hesitate to draw firm conclusions from the results. That said, students generally indicated that they felt less prepared to meet the AI-related objectives than the more traditional editing objectives. When I assign this project again, I will give it four weeks rather than three. Though it is likely that by fall 2024, when I next teach the class, students will be more familiar with ChatGPT and other generative AI tools, I will build in additional class time to engage with the tools in an open-ended way. Further, I will add one or two additional scaffolding exercises on editing for accuracy and fact-checking.
In conclusion, though I will make minor tweaks to the scaffolding and I will need to regenerate the provided text periodically as generative AI tools develop, at its core this project is an effective way to introduce students to generative AI in the writing and media disciplines. It opens up discussions about rhetorical ethics and agency, grounding them in a specific context and connecting them firmly with existing course goals. Though the assignment was developed in an editing course, it could be revised for a technical writing and communication course, a digital rhetorics course, a writing-intensive course in another discipline, or any course that might benefit from automating some of the drafting process for public-facing documents. While the larger philosophical and ethical questions posed by generative AI continue to unfold, writing studies professionals and teachers must help students understand these tools as a means of engagement in an increasingly algorithm-driven rhetorical landscape.
In completing this assignment, you will practice:
Recognizing the strengths and weaknesses of AI-generated text
Assessing a rhetorical situation
Evaluating a document for completeness in the context of a particular audience and purpose
Creating content necessary for comprehension and use by the target audience
Checking content for accuracy
Checking content for internal consistency
Using Word’s Track Changes and Comment features
Edited Report: Submit your edited report as a Word document with changes tracked.
Letter of Transmittal: Submit a Letter of Transmittal, addressed to me, that explains and provides a rationale for the changes you made. If you corrected inaccurate information, include the source(s) you used.
The City of Conway is considering planting a pollinator garden in one of the local parks. You have been charged with creating a report for the Mayor, the City Council, and the Director of the Parks and Recreation Department in which you make recommendations about establishing such a garden. You are on a tight deadline, so you have been given a first draft created by ChatGPT (attached on Classroom) to use as a starting point. Perform a substantive edit on the document, keeping the following questions in mind:
Is all of the information accurate?
Is all of the information relevant to the stakeholders?
Will the stakeholders be able to make a decision based on the information provided, or is more (or different) information needed?
Is the information internally consistent?
Report
Factual accuracy
Rhetorical effectiveness
Consistency of content, organization, and style
Use of Track Changes
Letter of Transmittal
Compelling rationale for changes, grounded in course concepts
Organization and structure
Clarity and usage
Bowen, José Antonio, and C. Edward Watson. 2024. Teaching with AI: A Practical Guide to a New Era of Human Learning. Johns Hopkins University Press. https://doi.org/10.56021/9781421449227.
Cardon, Peter, Carolin Fleischmann, Jolanta Aritz, Minna Logemann, and Jeanette Heidewald. 2023. “The Challenges and Opportunities of AI-Assisted Writing: Developing AI Literacy for the AI Age.” Business and Professional Communication Quarterly 86 (3): 257–95. https://doi.org/10.1177/23294906231176517.
Cunningham, Donald H., Edward A. Malone, and Joyce M. Rothschild. 2019. Technical Editing: An Introduction to Editing in the Workplace. Oxford University Press.
Dobrin, Sidney I. 2023. Talking About Generative AI: A Guide for Educators. Broadview Press. https://sites.broadviewpress.com/ai/talking/.
Institutional Research. 2023. “Diversity Ledger.” University of Central Arkansas. https://uca.edu/ir/files/2024/11/diversity-ledger_fall-2023.pdf.
Kochhar, Rakesh. 2023. “Which U.S. Workers Are More Exposed to AI on Their Jobs?” Pew Research Center. https://www.pewresearch.org/social-trends/2023/07/26/which-u-s-workers-are-more-exposed-to-ai-on-their-jobs/.
Laquintano, Tim, Carly Schnitzler, and Annette Vee. 2023. “Introduction to Teaching with Text Generation Technologies.” In TextGenEd: Teaching with Text Generation Technologies, edited by Annette Vee, Tim Laquintano, and Carly Schnitzler. The WAC Clearinghouse. https://doi.org/10.37514/TWR-J.2023.1.1.02.
Mallette, Jennifer C. 2024. “Preparing Future Technical Editors for an Artificial Intelligence-Enabled Workplace.” Journal of Business and Technical Communication 38 (3): 289–302. https://doi.org/10.1177/10506519241239950.
Reeves, Carol, and J. J. Sylvia IV. 2024. “Generative AI in Technical Communication: A Review of Research from 2023 to 2024.” Journal of Technical Writing and Communication 54 (4): 439–62. https://doi.org/10.1177/00472816241260043.
Selber, Stuart A. 2023. “PWR Approach to Artificial Intelligence.” Penn State Program in Writing and Rhetoric. https://www.pwr.psu.edu/pwr-ai-approach/.
Verhulsdonck, Gustav, Jennifer Weible, Danielle Mollie Stambler, Tharon Howard, and Jason Tham. 2024. “Incorporating Human Judgment in AI-Assisted Content Development: The HEAT Heuristic.” Technical Communication 71 (3): 60–72. https://doi.org/10.55177/tc286621.