Introduction
In a recent interview, Khan Academy founder Sal Khan asserted to the New York Times that “we’re on the cusp of using AI for probably the biggest positive transformation that education has ever seen… and the way we’re going to do that is by giving every student on the planet an artificially intelligent but amazing personal tutor” (Singer 2024). Of course, in pontificating about the gargantuan future of AI in education, Khan was primarily referencing his own product Khanmigo, newly powered by the multimodal capabilities of OpenAI’s GPT-4o (Craig 2024). In a notable move, in May 2024 Khan Academy joined Microsoft and other tech organizations in offering their generative AI products free to educators. Shortly thereafter, Google announced a $25 million investment in training teachers to use AI in their teaching (Klein 2024), and OpenAI released ChatGPT Edu, a product focused on building and scaling AI use in universities (Coffey 2024). Since then, institutions of higher education like Harvard and Arizona State have readily embraced headline-generating initiatives to involve AI in classrooms (Ravaglia 2024). When tech organizations and university administrators discuss AI’s impact on education, themes of opportunity, progress, and innovation tend to dominate the discussion, while any drawbacks or complexities are glossed over or omitted entirely. Among the many notable sermonizers of this opportunity frame is OpenAI CEO Sam Altman, who told The Harvard Gazette that academic “standards are just going to have to evolve” and “writing a paper the old-fashioned way is not going to be the thing” that leads students to the highest levels of success in a rapidly evolving media environment (Simon 2024). Instead, Altman argues, “using the tool to best discover and express, to communicate ideas, I think that’s where things are going to go in the future” (Simon 2024).1
Meanwhile, in the arena of the classroom, instructors in higher education have rapidly advanced their own initiatives, experiments, and ventures that chart a more localized path calibrated for the needs of their particular students on their particular campuses. This localized approach contrasts with the broader “top-down” vision advanced by public figures like Khan or Altman and the university administrators they are rapidly forming partnerships with (Vee, Laquintano, and Schnitzler 2023; Ranade and Eyman 2024; Knowles 2024; Graham 2023; Grassini 2023; Jamieson 2022; Stanton 2023; Dobrin 2023; Cummings, Monroe, and Watkins 2024). Alongside pedagogical innovations in classrooms, instructor-led professional organizations are theorizing disciplinary knowledge concerning ethical uses of AI to advise campuses struggling with the effects these tools have on the rapidly evolving workflows of academic life. As just one example from the field of writing studies, a joint MLA-CCCC task force (2023) has begun assembling theories, methods, workflows, and practices to assist college writing instructors struggling both to confront undesired forms of student AI use and to experiment with what they consider to be more ethical applications of AI in their pedagogies. In contrast to the techno-optimist tone of figures like Khan and Altman, which seems to prescribe an uncomplicated, one-size-fits-all future for campuses to work toward, instructors who teach actual college courses tend to emphasize experimentation and practical implementation suited to the unique, situated needs of their campuses. Importantly, instructor-led discussions of AI include conversations about the emotional labor required to enact what, to many instructors, feels like surveilling students for potential academic dishonesty, while also noting the messy and ultimately hybrid nature of the solutions they piece together based on local needs, context, and practice. In sum, these instructors are building knowledge about applying AI in higher education “from the ground up” in a form that bears some distinct differences from the “top-down,” one-size-fits-all orientation emblemized by Khan’s proclamation about providing AI personal tutors to each student and Altman’s speculations on the future of college writing. Compared to the “from the ground up” knowledge instructors and students collectively develop, Khan and Altman’s pronouncements appear heavy on hype and light on details while also being quite far removed from the actual academic workflows of students that instructors witness (and help shape) every day.
In this article, we argue that digital humanities (DH) perspectives built from on-the-ground pedagogical experiences can illuminate the opportunities, pitfalls, and challenges of AI use in higher education in forms that should be listened to and sometimes even prioritized within the political economy of higher education. We argue that, at least in situations directly relevant to AI use in humanities programs and pedagogies, digital humanists who know their campuses best should be empowered to lead and be listened to in decision-making processes. Digital humanists can provide perspectives that are labor-focused and productively critical of techno-solutionist visions proffered by AI companies and the university PR pushes supporting them, while also advancing a DH-informed approach for practical, ethical AI use in higher education. Digital humanists have developed critical—but also creative—approaches for integrating emerging technologies into education that facilitate resistance to top-down AI hype while also illuminating the realities of experimental and emotional labor in the classroom that a “top-down” approach cannot readily account for. These are forms of pedagogical labor that higher education could do more to recognize and incentivize to support ethical hybrid approaches to generative AI built from the ground up by students and instructors themselves.
By foregrounding the challenges stakeholders face regarding AI in higher education that localized DH perspectives are uniquely positioned to address—namely practical implementation, emotional labor/surveillance, hybridity, criticality, and pedagogy—this article contributes a roadmap for subverting the AI hype proffered by neoliberal, techno-solutionist approaches like those proposed by Khan and Altman. Instead, the “from the ground up” approach that this article advocates elevates techno-critical DH voices that are nonetheless committed to developing locally informed ethical parameters for generative AI use in higher education, grounded in student and instructor experiences in actual classrooms.
Hidden Labors, or What We Are Asked to Do Instead of What Could Be Done
In this section, we outline how two hidden labors theorized by digital humanities scholars—practical implementation and emotional labor/surveillance—can draw on localized “from the ground up” expertise to benefit classrooms as AI use becomes more established, methodical, and formalized within university cultures. Just as digital humanities scholars like Rebekah Cummings, David S. Roh, and Elizabeth Callaway (2020) have argued for tailoring initiatives like DH labs to the “identity and mission” of a particular campus in order “to reflect local interests and character” (2), attending to the localized needs of AI policies, practices, and pedagogies that only on-the-ground teachers have access to (needs entirely unknown to “top-down” evangelists like Khan and Altman, and ones that university administrators tend to be less directly situated in) helps ground practical implementation in actual, and not prescriptive, opportunities for innovation. Foremost among these opportunities is the situated material practice of the classroom, where instructors develop localized knowledge of their campuses’ needs concerning the practical implementation of AI and experiment toward more ethical, and genuinely innovative, pedagogical practices that a “top-down” approach does not have access to.
Practical implementation
As any experienced teacher knows, generalized pedagogical knowledge often requires a process of adaptation and tailoring to a local context to optimize its practical value to students. Implementing AI in education is no different: generalized knowledge without local expertise results in suboptimal practice. In a contribution to Debates in the Digital Humanities 2016, Ryan Cordell asserts that “we must think locally and create versions of DH that make sense not at some ideal, universal level but at specific schools, in specific curricula, and with specific institutional partners” (471). Considering the wide array of university identities, student demographics, and campus cultures in higher education, effective teaching practices involving emerging technologies like AI are necessarily not one-size-fits-all; they are of a community, rooted in shared experiences, and tailored for students within a particular context. While instructors across institutions and disciplines have expressed anxiety about the seismic pedagogical challenges AI poses in the classroom, some disciplines, like writing studies—which has long embraced the relationship between technology and writing as an opportunity for student learning—have been swift to consider how to implement AI in the classroom in localized and concrete forms.
For example, in “Machine-In-The-Loop Writing: Optimizing the Rhetorical Load,” Alan M. Knowles (2024) develops the concept of “Rhetorical Load Sharing” to conceptualize the distributed labor of AI-assisted writing that “avoids offloading the entire rhetorical load to generative AI tools,” arguing instead that “machine-in-the-loop writing, in which human collaborators retain majority of the rhetorical load, is an ideal AI collaborative writing model” characterized by ongoing interplay between human and machine (1). Along these lines, one of this article’s authors has experimented with “rhetorical load sharing” in their own courses: in a technical communication course, the author challenged students to use generative AI tools to “recreate” their just-submitted projects as an in-class exercise, finding that the AI-only projects would receive no better than a C or D grade but could, with major human revision, move much closer to a B grade (students also generated a list of other capacities and shortcomings of the AI-assisted writing process).
Furthermore, edited collections catering to writing studies, like Annette Vee, Tim Laquintano, and Carly Schnitzler’s (2023) TextGenEd: Teaching with Text Generation Technologies, offer a variety of situated pedagogies that put AI in the hands of students through experimental but also methodical and finely tuned pedagogical exercises. Beyond activities that prompt students to approach AI with a critical lens, authors in this collection ask students to consider how AI enables new rhetorical competencies, using AI to experiment with creative and professional pursuits. In one assignment, AI assists students in writing and illustrating children’s books (Easter). In another, students use AI to translate medical journal articles for lay audiences (McKee). Importantly, these pedagogies take place in a variety of institutional settings, course types and subjects, and levels of both student and instructor familiarity with AI tools. Even in the business writing classroom, as articulated in “The Challenges and Opportunities of AI-Assisted Writing: Developing AI Literacy for the AI Age,” Peter Cardon, Carolin Fleischmann, Jolanta Aritz, Minna Logemann, and Jeanetta Heidewald (2023) find that many writing instructors already recognize AI literacy as an essential skill and learning outcome for their courses, one they expect workplaces will highly value in coming years (257).
The “from the ground up” implementation practices and methods that college instructors are developing contrast with those offered by people like Khan and Altman, as well as with university PR initiatives, in a few key respects. The first and most important difference lies in the detail, specificity, material recommendations, and practical knowledge (phronesis) that on-the-ground pedagogical expertise offers compared to uncomplicated techno-optimist evangelizing. AI use in a History course will be markedly different from its use in a Race and Gender Studies course, a difference that subject matter experts and on-the-ground instructors are best prepared to comprehend, theorize, and generate knowledge from. Another important difference is the contrast between the implied Ivy League-adjacent universities that Sam Altman, who attended Stanford, has invoked in public interviews with The Harvard Gazette and the actual realities of college students in most United States higher education institutions. As far as practical implementation goes, the experiences of instructors at community colleges, land-grant universities, regional colleges, HBCUs, Tribal colleges, small liberal arts colleges (SLACs), fully online programs, and the sheer heterogeneity of other educational models in higher education likely speak to the needs of America’s actual college students better than the very limited view generally envisioned by Silicon Valley. Finally, teachers know their students and their students’ needs better than Silicon Valley executives ever could, making classrooms a far better laboratory for experimentation and innovation with AI-assisted pedagogies than anything corporate oratory could formulate.
This is a reality we have first-hand knowledge of. One of this article’s authors is a first-year writing instructor at a two-year college in the northeastern United States. In response to the release of OpenAI’s ChatGPT in November 2022, the college’s administration simply added a line about AI use to the college’s honor code beginning with the fall 2023 semester: “Plagiarism is the intentional or unintentional failure to immediately, accurately, and completely cite and document the source of any language, ideas, summaries, hypotheses, conclusions, interpretations, speculations, graphs, charts, pictures, etc., or other material not entirely your own (including material generated by any artificial intelligence platform)” (Great Bay Community College 2023, 47). The college’s only official communication concerning AI, notably confined to a parenthetical aside, frames AI solely as a matter of plagiarism. Aside from comments in department meetings, the college has yet to officially recognize or give guidance about acceptable AI use in the classroom for instructors or for students. Implementation, in the case of this two-year college, is happening piecemeal through instructors’ and students’ trial and error with AI. In a separate case, another of our authors is faculty at a private research university and has been asked to attend a handful of AI symposiums at which not a single presenter came from the humanities. At one recent AI symposium, only two of the participants taught any classes at the institution at all. These symposiums are illustrative of a “top-down” approach, as they prioritize views from administrators and leadership rather than on-the-ground practitioners. A “ground-up” approach would instead foreground the experiences of people actively working with students and their AI use in the university’s local context.
Emotional labor/surveillance
Alongside the labor of practical implementation, practitioners are often asked to surveil student use of AI and punitively assess student work under vague university policies. This is not a new phenomenon by any measure. Instructors in the humanities have often been asked to punitively assess students’ use of citations and technology in the classroom. As scholars like Chris Anson have noted, “definitions of plagiarism are socially constructed and tied to context-sensitive cycles of reward for the production—and therefore the ownership—of certain kinds of texts” (2022, 37). In other words, teasing out students’ intent in using others’ language is a difficult process, one that is often flattened in the rush to punish plagiarism that a “top-down” approach would seem to favor. Despite the difficulties and nuances of this work, teachers have always had to keep an eye out for potential plagiarism or academic dishonesty. However, faculty now face increasingly difficult terrain, as the output of generative AI is often impossible to detect with any confidence. For example, Jacob Hubbard (2023) conducted a study testing the efficacy of AI detection software and found that, across the board, these technologies were inconclusive if not outright inaccurate. These findings “should make us question how confident we should be in these AI detection programs detecting AI writing, and which best practices are most appropriate for determining if a student’s work is original or AI-generated” (Hubbard 2023). As a result, faculty are often asked to surveil the use of AI without any reliable way to do so ethically and with confidence that they are not making a false accusation.
This fear of AI misuse is not limited to novel technologies: many long-accepted tools have begun to add generative elements, creating further confusion that practitioners must manage. The most prominent example is Grammarly. Many universities provide Grammarly for free, even though the technology now includes a newly introduced generative AI component. Recently, a student at the University of North Georgia was punished for using Grammarly’s new software without any real instruction about the changes these new features introduced (Menesez 2024). Similarly, a group of students at Texas A&M received an angry email accusing them of AI use when only one student had actually used AI (Hubbard 2023). Students are being harmed by unclear policies and slow institutional responses to these technologies. However, most of these stories leave out who must deal with these claims of plagiarism and misuse: instructors. This labor of surveillance takes energy and, perhaps most importantly, time. Scholars in the humanities also often carry higher teaching loads and writing-intensive courses, meaning that this time cost is compounded.
According to Inside Higher Ed’s 2024 survey of provosts, chief academic officers are well aware that their AI policies are not ready to meet the exigent moment. Only 20 percent of provosts said their college or university has published a policy governing the use of AI, including in teaching and research (Quinn 2024). Of those policies, most are primarily concerned with integrity, because “for many institutions, their plagiarism policies kind of govern also generative AI—that is, students are forbidden to use anything that is not their own work” (Thuswaldner, quoted in Quinn 2024). It is important to pause and note that throughout the survey, the most pervasive concern by far was cheating and plagiarism, aligning with the demands that administrations often make of practitioners. Nearly half of the provosts said they are moderately concerned about the threat generative AI poses to academic integrity, with a further 26 percent saying they are very or extremely concerned (Quinn 2024). As we will discuss later, this is not the only concern that many faculty have about AI use, but it is the concern that requires the most time to enforce and discuss with students.
For example, the enforcement of ethical AI use policies related to plagiarism is one instance in which a “from the ground up” approach that leverages localized classroom expertise contrasts with “top-down” approaches from administrators or departments, which primarily function either after the fact, through penalties and academic integrity violation mechanisms, or through admirably proactive but lightly attended workshops. In contrast, a “from the ground up” approach allowed one of this article’s authors, teaching a writing course primarily intended for first-year students, to guide a student who had uncritically used AI toward a productive learning outcome for all involved. The student, who admitted to passing AI’s writing off as their own to produce an unpolished, undeveloped, and academically unacceptable project draft that cited nonexistent sources, was eventually able to grow academically through a process of revision, recalibration, and commitment to the messy processes of both writing and learning. This resulted in a preferable learning outcome compared to an academic integrity violation.
As such, it seems that the messiness of process, growth, and a student’s academic evolution over time are all better served by “ground up” approaches than by administrative bureaucracies that feel impersonal to students, that presume guilt rather than misunderstanding, and that are divorced from the situated pedagogical context in which students do their learning and academic growing. The situation, while emotionally laborious and stressful for the instructor, ultimately helped the student and instructor learn about the opportunities and shortcomings of AI tools and how they can aid an academic writing process (for instance, learning that AI is helpful for structure, background knowledge, and generating ideas, but inadequate at researching, writing evocative sentences, and discussing current events). The situation is further complicated by the fact that both instructors and administrators often believe they can spot unethical AI use yet lack definitive tools to guarantee they are not making a false accusation. In these cases, a “ground up”-style one-on-one discussion with a trusted instructor capable of guiding the situation toward a productive outcome is likely to be more fruitful for a student’s academic growth and learning than the student being called to an administrator’s office to discuss a possible academic integrity policy violation. While administrators and techno-solutionists are equipped primarily to surveil and punish academic integrity violations, instructors already doing the heavy lifting “on the ground” can distinguish between ethical and unethical AI use and then respond accordingly, escalating to “top-down” mechanisms if they wish.
In essence, a “from the ground up” approach allocates agency to those who most directly teach students the ethics of AI use and who craft assignments, projects, and AI policies, even as it adds emotional labor. In practice, the labor of governing and surveilling AI use is left almost entirely to individual instructors and practitioners, who must determine how to apply these policies. This situation leaves faculty with little protection from the risk that accompanies their decision-making. And while 80 percent of faculty are offered training for AI in the classroom, it is not clear who is leading those trainings or what classroom experience they have (Quinn 2024). It bears noting that different disciplines have different goals for their classroom practices: some courses focus on developing practical skills or retaining important knowledge, while others focus on developing and then applying critical thinking skills. Thus, a one-size-fits-all approach to AI governance flattens the nuance present in the university classroom and leaves instructors, who know the course and students best, with the labor of detecting unethical AI use but little say over the how and when of the policies they must enforce.
Opportunities for Localized DH Perspectives
The hidden labors of generative AI cost practitioners time and energy that could be allocated to more productive efforts. Specifically, the labor of implementing and supervising AI use (or navigating ambiguities between ethical and unethical uses of AI in academic work) taxes instructors and is often not the type of labor that is rewarded in tenure, promotion, or retention materials. Considering this, the work and expertise of practitioners in disciplines like the digital humanities could benefit universities far more if those practitioners were granted the time and agency to lean into their existing but also rapidly evolving expertise. These opportunities arising “from the ground up” include navigating the hybridity of digital tools, assessing emerging technologies and their social impacts, and teaching alongside those technologies.
Hybridity
As is often noted, the digital humanities have never been just one thing. Hybridity is essential to the development, identity, and practice of the digital humanities, and the emergence of generative AI only makes it more necessary. Perhaps of most value for the academy in an AI-assisted era is an orientation toward hybridity that is both cautious and open-minded, both critical and exploratory, and consistently analytical while remaining enduringly curious. When DH scholars have engaged with hybridity in the past, they have variously developed models of education that foreground hybridized media and infrastructures (Nørgård, Schreibman, and Huang 2022; see also the journal Hybrid Pedagogy) and theorized hybrid approaches to project management (Tabak 2017). With large language models (LLMs), humans create alongside not only machines but also the traces left behind by a constellation of human activities routed through machine infrastructures the machine has learned from, blending past and present into a hybrid that no one can fully document or be said to fully understand.
Thus, when considering how universities orient their programs and pedagogies toward the growing ubiquity of generative AI, the orientation toward the ambiguities and latent potentials of hybridity that digital humanities perspectives are so accustomed to may help inform practices better able to support on-the-ground student learning in specific campus communities. For DH practitioners, a ground-up model would create the space to consider how humans and machines work together in a hybrid creative process. Approaching AI as part of a long history of digital tools, DH scholars can let their classroom experiences and research expertise inform the policies that universities implement and sustain over time.
Criticality
Institutionally, the most common concerns for university administrators appear to be plagiarism and misuse of AI (Quinn 2024). However, the ethical issues of AI frequently begin in the classroom and extend far beyond the university setting. Because digital humanists are often techno-critical, their perspectives offer important insights into how these technologies can be used while keeping in view the difficult ethical terrain encountered in their use. This DH-informed criticality foregrounds attention to AI’s biases, proclivities, and cultural assumptions. DH perspectives are valuable when critiquing AI use because of the sociotechnical approach many practitioners take to their work. According to communication studies scholars Brian Chen and Jacob Metcalf (2024), a “sociotechnical approach recognizes that a technology’s real-world safety and performance is always a product of technical design and broader societal forces, including organizational bureaucracy, human labor, social conventions, and power” (1). This perspective recognizes that a tool’s effectiveness, performance, and consequences are not predetermined; they arise from the real-world interplay between technical design and social dynamics (Chen and Metcalf 2024). If university structures were better equipped for AI’s pedagogical implementation, the labors of DH practitioners could be better spent experimenting, theorizing, and sharing with the larger public.
As one example, the environmental impacts of generative AI are often under-discussed in academic and corporate circles but are prime sites for digital humanistic investigation. Environmental critiques of the kind DH practitioners are well equipped to make point out that AI requires massive amounts of energy to power the server farms necessary to run its computationally intensive systems (Coleman 2023). Training and running AI systems takes an incredible amount of energy, and the more AI tools are used, the greater the output of greenhouse gases. Coleman notes that the “exact effect that AI will have on the climate crisis is difficult to calculate, even if experts focus only on the amount of greenhouse gasses it emits.” This is in part because most AI companies are not transparent about their energy use, but also because emissions are only one way that AI impacts the environment. This is the sort of criticality that DH-informed perspectives are equipped to contribute.
Moreover, many humanists are concerned with the effects of AI on marginalized communities and on the labor conditions of marginalized workers, even as research suggests AI can be of great benefit to English language learners, offering pathways to navigate language barriers as well as potential to assist with fluency (Fathi and Rahimi 2024). DH scholars are especially equipped to assess intersectional issues across race, class, gender, and disability because of the flexible, interdisciplinary backgrounds many of them bring to their work. Freeing practitioners to focus their efforts on the broad implications and ethics of AI will allow for more robust and interesting conversations across the institution.
Pedagogy
Finally, the “ground up” expertise of digital humanities practitioners should be prioritized in local university settings because AI most directly affects the work students do in humanities classrooms. Whereas AI use is more regular and accepted in fields like computer science, data analysis, and mathematics, the humanities have understandable reservations about its uncritical use (Extance 2023). A “ground up” approach to AI would empower DH practitioners to lead discussions of AI integration into university cultures, as digital humanists have long been building approaches for optimal use of technology in the classroom. Moreover, DH practitioners know the local needs of their campuses and students better than far-off techno-solutionists in Silicon Valley proffering one-size-fits-all solutions.
For example, practitioners in DH and DH-adjacent classrooms are already developing interesting assignments “from the ground up” that challenge student perceptions of AI use. Paul Fyfe (2023) developed an assignment that asked undergraduate students to “harvest content from an installation of GPT-2” and then incorporate the material into their final essay (1). Fyfe asked students to highlight which elements of the writing were their own original creation and which were developed with AI. In discussing Fyfe’s assignment, Chris Anson notes that student reflections “focused on the ethics of AI assistance, what the program did to extend their own perspectives, and how the material might or might not be considered plagiarism… The shared insights of the students are impressive and point to the broader goal of teaching discourse in all its complexities and contextual variations” (2023, 44). In a first-year writing classroom taught by one of this article’s authors, students are asked to assess different AI platforms (Gemini, ChatGPT, Sudowrite, Poe, Claude). These types of innovative assignments not only challenge student assumptions about AI but also make transparent the complexities of writing in a constantly shifting technological landscape. As DH practitioners are experts in teaching alongside emergent technologies, they should more frequently be asked to lead workshops, develop innovative courses, and receive institutional support for this kind of expert labor.
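For instructors curious about what “an installation of GPT-2” might look like in practice, the following is a minimal sketch in Python using the openly available Hugging Face transformers library and its public “gpt2” checkpoint; the prompt, sampling settings, and function names here are our own illustrative assumptions, not a reconstruction of Fyfe’s actual course infrastructure.

# A minimal, illustrative sketch of a classroom GPT-2 "harvesting" setup.
# Assumes the Hugging Face transformers library and the public "gpt2" checkpoint;
# this is not a reconstruction of Fyfe's actual installation.
from transformers import pipeline, set_seed

def harvest(prompt: str, n_samples: int = 3, max_new_tokens: int = 120) -> list[str]:
    """Generate several GPT-2 continuations of a student-supplied prompt."""
    set_seed(42)  # fixed seed so students working from the same prompt can compare outputs
    generator = pipeline("text-generation", model="gpt2")
    outputs = generator(
        prompt,
        max_new_tokens=max_new_tokens,
        num_return_sequences=n_samples,
        do_sample=True,    # sampled (not greedy) decoding yields varied drafts to annotate
        temperature=0.9,
    )
    return [o["generated_text"] for o in outputs]

if __name__ == "__main__":
    # Students might paste a sentence from their own draft as the prompt, then mark
    # which portions of the machine text they keep, revise, or reject in their essay.
    for i, text in enumerate(harvest("The ethics of machine-assisted writing"), start=1):
        print(f"--- Sample {i} ---\n{text}\n")

Even a small exercise like this surfaces the annotation work Fyfe and Anson describe: students must decide, passage by passage, which language is theirs, which is the machine’s, and how to acknowledge the difference.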
Conclusion
It may be true that, as OpenAI CEO Sam Altman told The Harvard Gazette, “telling people not to use ChatGPT is not preparing people for the world of the future” (Simon 2024). However, this does not mean that digital humanists, teachers, administrators, and other stakeholders in higher education should uncritically accept prescriptions from Silicon Valley techno-solutionists offering one-size-fits-all approaches without much of the local knowledge successful teaching requires. Practitioners of the digital humanities have developed expertise surrounding practical implementation, emotional labor/surveillance, hybridity, criticality, and pedagogy. They have done so with a grounded understanding of their local campuses and students that one-size-fits-all, “top-down” approaches cannot offer.
As AI use becomes increasingly ubiquitous on college campuses, reaffirming the place of localized “on the ground” DH perspectives within the political economy of higher education may help students write, create, and design in forms that reflect the continuing evolution of the academy alongside emerging technologies. Crucially, it also enables the academy to actively participate in inventing AI’s role in campus life while staying true to the enduring values of the humanities that techno-capitalist corporate-university initiatives, like those conjured by Khan and Altman, risk overlooking to their own detriment. In this endeavor, approaches built “from the ground up” will always be better equipped to tailor situated practices to the needs of students on particular campuses.