“The very notion of traditional [higher] education will become obsolete. The new technologies that are now being developed will enable people of all ages and social conditions to learn anything, anywhere, at any time. Learning will not be based, as it is today, on mechanisms of selection and exclusion.”1 Among the many handwringing prognostications about the changes AI will bring to higher education, this one feels particularly dire for professors and students alike. Ending exclusion sounds like a good idea; the implied replacement of faculty-student interactions with technology-assisted learning would be a disaster. But there’s a twist: this quote comes from an interview with Lewis Perelman in the Christian Science Monitor in 1993, and its impetus was the advent of the world wide web and its internetworked communications that blew a zephyr of disruption through higher education, a sector notoriously resistant to change. Thirty-plus years later, the internet is no less problematic, but one thing is clear: the world wide web hasn’t so much obviated traditional higher-education experiences as it has shifted higher-education assumptions. I found the quote via a web search, of course; it was curated within Elon University’s “Early 1990s Internet Predictions Database,” which is one of many web-based archives that have made primary-source research easier and more inclusive in the past few decades.2 The internet may not be an exact analog to AI, but neither is the history of the internet irrelevant to our understanding of AI’s future impact on higher education.
José Antonio Bowen and C. Edward Watson’s book does not open with this particular quote, but it does begin with a reminder that the internet seemingly changed everything—and that higher education nevertheless abides. The authors quickly establish that fear of change is a counterproductive motivator with their own epigraph, courtesy of Marie Curie: “Nothing in life is to be feared, it is only to be understood. Now is the time to understand more, so that we may fear less” (p. 1). Bowen, a musicologist and former president of Goucher College, has been publishing books on the bleeding edge of higher-education pedagogy since his Teaching Naked first appeared in 2012.3 Watson is the Vice President for Digital Innovation at the American Association of Colleges and Universities and former Director of the Center for Teaching and Learning at the University of Georgia (UGA). In a spirit of fearless exploration, Bowen and Watson offer their “practical guide to a new era of human learning,” with hundreds of AI prompt examples and dozens of AI-facilitated teaching techniques. But the book offers much more besides: it reaffirms a number of core pedagogical principles that have nothing to do with AI in particular and everything to do with great teaching in general. Bowen and Watson’s writing is replete with teaching and learning maxims:
“Pedagogy is about improving the odds that students will learn.” (p. 130)
“If the why is clear, students will be more motivated . . .” (p. 134)
“We will need to both prompt and grade for process and rethink what we’re expecting in terms of product.” (p. 157)
“We need to clarify further what we want students to learn, why it is valuable, and especially why the effort and discomfort required are necessary.” (p. 184)
Indeed, they reinforce that AI is only useful insofar as it supports our existing pedagogical goals. AI is also useful to the extent that it exposes how our teaching is currently disconnected from our student learning goals. The advent of AI may change a great deal about higher education, but it will not change what defines great teaching. That is why, regardless of what you think of AI, you should read this book.
Teaching with AI is laid out in three parts consisting of four chapters each, plus an introduction and an epilogue. The introduction spills little ink justifying the topic at hand; throughout the book, Bowen and Watson come across as preternaturally self-assured of the relevance of their writing. Part I, “Thinking With AI,” starts with a 30,000-foot overview of artificial intelligence broadly and the large language models (or LLMs) that fuel generative AI specifically (the latter are what people generally describe with the shorthand of “AI”). Chapter 1, “AI Basics,” reviews differences between a range of generative AI tools that are too numerous to list here. One important early takeaway from the book is that users should select a specific generative AI tool according to their specific need; to use ChatGPT for all your needs is to use clarinets and clarinets alone to perform a Mahler symphony. Chapters 2 through 4 make the case that, because industry, business, service, and other sectors are already expecting employees to use AI, our students should be ready for what they might encounter in most workplaces. With the exception of some bombastic, almost certainly overdetermined claims—one section heading reads “AI will change every job” (p. 35)—the authors offer convincing, scholarly evidence that AI will in fact change many occupations in the information economy (p. 27), and that we owe it to our students to teach them critical AI literacy and creativity skills. Part II, “Teaching With AI,” focuses on how faculty can deploy AI tools to design courses, manage cheating, plan lessons, and grade. Part III presents the student side and arguably deserves to be its own book: “Learning With AI” may be an even more significant topic than teaching with it. I devote more attention to Parts II and III below. For now, suffice it to say that Teaching with AI is a quick read. And if you don’t have time to read even just chapters 11 and 12, which offer dozens of use cases for teaching with AI, you can efficiently skim or search through hundreds of model AI prompts at the companion page on Bowen’s professional website, which provides a treasure trove of ideas.4
Maybe the words “treasure trove” leave a bad or at least ambivalent taste in your mouth. Fair warning: you will not enjoy the almost gleeful boosterism of Teaching with AI. You should still read the book. Without any help from Bowen and Watson, I can identify a number of profound ethical concerns surrounding AI for educators—and everyone else—to consider:
- AI results are rife with inaccuracies, known as “hallucinations”
- AI diminishes the humanity at the heart of the humanities and steals the work of so many writers and artists
- Use of AI in academic contexts feels like a form of cheating, a shortcut that will severely undermine our ability to acquire knowledge and develop habits of mind
- AI is a privacy nightmare, our every query and upload feeding its already capacious maw with our own ideas and information
- There are significant environmental concerns: a single AI query may use eight times more energy than a Google search and the equivalent of a bottle of water to cool the massive data centers required to enable generative AI.
There is something of an inverse relationship between the gravity of these concerns and the real estate they occupy in the book. Bowen and Watson acknowledge ethical qualms but admit that they don’t address them head on, dismissing them with the hopeful claim, “We are sure someone [else] will write that book” (p. 2). You should read this book anyway. For all the problems raised by AI, much like the internet, this is a genie that isn’t going back in the bottle. And just like the internet, the technology itself is neither good nor evil; it’s what humans do with it that matters. In the realm of teaching and learning, Bowen and Watson argue that much good can come of teaching with AI.
Despite any reasonable doubts we may have, a major argument for engaging with AI is that ethical concerns cut multiple ways. Bowen and Watson point out that, if AI can save doctors two to three hours a day of bureaucratic work, as one study shows it might, then doctors might be able to spend more time with their patients, which will benefit everyone (except doctors who don’t like spending time with patients) (pp. 31–32). AI will also hasten the identification of life-threatening medical conditions. Would it be unethical not to use it? Bowen and Watson extend similar reasoning to education, arguing, for instance, that “graduates without the ability to think, write, and work with AI will be at a serious disadvantage for future jobs. We need to think about equity of outcomes beyond our classrooms” (p. 134).
If our students’ needs are at the heart of our teaching, the capacity of AI to meet students’ needs is at the heart of the book. As the authors indicate, although some of us might be inclined to dismiss or actively resist AI, the technology is definitely relevant to our students. Students are using AI, sometimes well, but more often very poorly and unethically. AI is transforming how students learn, whether it transforms teaching or not. Teaching with AI, then, is best read with care and concern for our students front of mind. We can hold on to our entirely legitimate skepticism about AI and balance a sense of resigned pragmatism with pedagogical idealism to determine a reasonable path forward.5
Care and concern for students are particularly foregrounded in the pivotal chapter 6, on cheating and academic integrity. The authors take an unconventionally sanguine view of the risks of cheating with AI. For one thing, they do not see AI as a cause of cheating, as so many do; instead, they perceive cheating with AI as a symptom of a deeper malaise that afflicts so many students. Lack of motivation, hidden curricula, exigencies in their personal lives—these, not maliciousness or some inherent character flaw, are the reasons students cheat. Ultimately, Bowen and Watson advocate for trusting our students, including the ones who cheat, to make rational choices about their lives. This should not be as radical an idea as it is.
In Teaching with AI, the costs of a lack of trust are manifold. Bowen and Watson take the side of students by listing the ways that AI detectors are not our friends, with an emphasis on their unreliability:
- They are wont to identify faculty work as AI content (p. 107)
- They produce unacceptable levels of false positives (p. 112)
- They constitute a potentially unethical business in and of themselves (p. 118)
- They are part of a cat-and-mouse, cheating-and-detection arms race that has no positive outcome (pp. 118–25)
Efforts by faculty to make AI use impossible—reverting to in-class, handwritten exams or oral exams during student hours—are often impractical at best, run counter to Universal Design for Learning principles, and at worst violate the Americans with Disabilities Act.6 Ultimately, Bowen and Watson conclude that a defensive or paranoid posture toward cheating is bound to poison student and faculty experiences. They remind the reader, “[G]ood pedagogy should always be our first consideration. Combining high standards with high care, building trust and community, [and] focusing on equity and inclusion . . . can both increase learning and reduce cheating” (p. 129). The authors follow this statement of teaching philosophy with a series of recommendations that include:
- Regular, low-stakes assignments
- In-class active learning
- Reasonable workloads
- Be flexible
- Model and promote academic integrity
- [Teach] digital and AI literacy
- Better assignments and assessments
Incorporating these recommendations may be easier said than done, but the authors aren’t wrong to put them forward. In chapter 10, they provide an important albeit belated foundation to their argument that faculty should trust their students more than AI detectors. Drawing on scholarship on motivation, Bowen and Watson argue that students will be less likely to cheat if they can confidently state the following: “I care,” “I can,” and “I matter” (p. 185). These three statements correspond to the centrality of a feeling of purpose to engagement; a sense of self-efficacy; and a strong sense of belonging or mattering. AI is largely a secondary concern in this chapter, which presents the clearest articulation of the authors’ own teaching philosophy and their insistence that faculty be transparent about their motivations for engaging with or limiting the use of AI.7
If faculty are willing to operate from a position of trust, transparency, and self-awareness of their students’ desires to feel purpose, self-efficacy, and a sense of mattering, then other practices recommended in the book become easier to swallow. For instance, Bowen and Watson do not advocate for banning AI entirely. Instead, they recommend that every teacher determine when AI does and doesn’t have a place, in alignment with course goals and personal teaching philosophies, and then generate their own AI policy accordingly (chapter 7). Bowen and Watson are clear that AI need not be allowed in every circumstance. They advise talking with students about what AI can and can’t do (p. 134) and quip that “AI is like a free like a [sic] puppy; knowing when to say yes and when to skip it will be important” (p. 157). Course and assignment goals can and should drive decision making around when AI can be useful and when it should be restricted.
Instructors can even use AI themselves to determine whether existing course goals and learning outcomes correspond appropriately to course processes and assessments. I’ll use my own course goals from an early-music survey as an example:
By the end of the semester, you will be able to:
- Distinguish aurally between pieces from different times and places by identifying and explaining relevant stylistic features
- Apply sophisticated listening skills to the task of describing known and unknown music using appropriate terminology
- Compare a given piece to music that came before and after
- Analyze primary sources (literary, musical, and visual) in terms of the ways they reveal their authors’ opinions and life experiences
- Demonstrate how music connects individuals, societies, and institutions in disparate times and places
- Argue the value and merits of encountering unfamiliar, unlikeable, or difficult-to-understand music
I like to think that my course goals are overwhelmingly oriented toward critical listening, application, comparison, synthesis, and persuasion. The course involves a great deal of listening, plenty of in-class discussion and score analysis, and a series of listening-based “problem sets” (low-stakes take-home tests) that are open book. I fed Google Gemini my entire coursepack (which includes these goals, the class schedule, and daily handouts that describe in-class exercises and terms I expect students to learn) as well as all of the problem-set keys and my final “unessay” assignment prompt. I prompted Gemini as follows: “This is a coursepack for a 200-level music history course at a selective liberal arts college as well as four take-home tests and a final project assignment. Compare the course goals in the syllabus to the in-class activities, tests, and final project: where do the course goals match the assessments well? Where might one of my faculty colleagues notice a disconnect between the course goals and what I’m asking students to do to prove that they’re learning in the class?”8 Gemini responded, “one of the goals is for students to be able to ‘compare a given piece to music that came before and after.’ However, none of the assessments specifically ask students to do this. . . . Finally, one of the goals is for students to be able to ‘argue the value and merits of encountering unfamiliar, unlikeable, or difficult-to-understand music.’ This goal is not reflected in any of the assessments.” The response tells me I either need to be more explicit about where students are working toward these course goals, or I need to cut them.
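For readers who would rather script this kind of alignment audit than paste materials into a chat window, here is a minimal sketch using Google’s google-generativeai Python SDK. I used the Gemini interface rather than code; the file names, model choice, and API key placeholder below are my own assumptions, and only the prompt is quoted from the paragraph above.

```python
# A minimal sketch, assuming the google-generativeai SDK
# (pip install google-generativeai); file names and model are placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key

# Upload the course materials via the Files API.
materials = [
    genai.upload_file(path)
    for path in ("coursepack.pdf", "problem_set_keys.pdf", "unessay_prompt.pdf")
]

# The prompt quoted in the paragraph above.
prompt = (
    "This is a coursepack for a 200-level music history course at a selective "
    "liberal arts college as well as four take-home tests and a final project "
    "assignment. Compare the course goals in the syllabus to the in-class "
    "activities, tests, and final project: where do the course goals match the "
    "assessments well? Where might one of my faculty colleagues notice a "
    "disconnect between the course goals and what I'm asking students to do to "
    "prove that they're learning in the class?"
)

model = genai.GenerativeModel("gemini-1.5-pro")  # model name is an assumption
response = model.generate_content(materials + [prompt])
print(response.text)
```

The pattern generalizes: upload whatever artifacts define the course, then ask the model to find mismatches between stated goals and actual assessments.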
I ask my students not to use AI on the problem sets, but even if they do, AI will only get them so far: it can’t (yet) listen to the recordings I’ve assigned nor identify timestamps when the terms I’m asking students to define might productively be applied. It definitely can’t refer back to specific class discussions or in-class activities. But even if it could, my students would still have to use AI well to complete problem sets successfully. If “each quiz or test we give actually functions as an act of pedagogy” (p. 95), as Bowen and Watson aver, then students taking the time to use AI to correct their work on a problem set before they submit it are still distinguishing, applying, and analyzing. Students who take the time to “teach” AI about my class by uploading their class notes; prompting AI to make connections to pieces, people, and concepts we have studied; and reminding AI to take critical approaches to those materials are practicing the very synthesis skills I’m hoping they’ll develop.
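Under the same assumptions as the previous sketch, a student session of this kind might look like the following multi-turn chat; the file name, model, and prompts are hypothetical paraphrases of the workflow just described, not examples from the book.

```python
# A hypothetical student session, again assuming the google-generativeai SDK;
# the file name, model, and prompts are illustrative placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key

notes = genai.upload_file("class_notes.pdf")  # the student's own notes
model = genai.GenerativeModel("gemini-1.5-flash")
chat = model.start_chat()

# Turn 1: "teach" the model the course by grounding it in the notes.
chat.send_message([
    notes,
    "These are my notes from a 200-level early-music survey. In this chat, "
    "draw only on pieces, people, and concepts that appear in them.",
])

# Turn 2: ask for connections, and require a critical, sourced approach.
reply = chat.send_message(
    "What connects the earliest polyphony in my notes to the later repertoire "
    "we studied? Flag any claim you cannot support from the notes themselves."
)
print(reply.text)
```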
Chapter 8, “Grading and (Re-)Defining Quality,” focuses on what it means to demand that students use AI well. Bowen and Watson provocatively suggest that AI will allow us to hold students to ever-higher standards, not despite AI but with its assistance. Every teacher will need to redefine what “C” work means going forward (p. 150) because AI can do C work easily. The authors argue that we need to engage in some grade deflation, making the old C the new F and asking students to step up their game. “Rather than banning AI,” they quip, “let’s just ban all C work” (p. 151).
A major strategy Bowen and Watson offer to raise expectations among faculty and students alike involves deploying AI as a personalized tutor, debate or other roleplay partner, and discussion leader (pp. 168–76). If students have done their due diligence in drafting—that is, what Peter Elbow dubs “writing to learn”9—then soliciting and responding to AI feedback becomes yet another growth opportunity. I struggle to see a conversation with AI about one’s own writing as cheating. If it is, I’m as guilty as the next student: I’ve used AI to give me feedback on a chapter draft, on my syllabi, and on this essay.
I was surprised that a chapter I imagined would get top billing—on writing—comes late in the book (chapter 11). After all, this is the topic that has most dominated discussion of AI’s perils for the humanities. But the chapter on writing is positioned strategically after the chapter on care, underscoring Bowen and Watson’s contention that when students believe “I can, I care, I matter,” they are far less likely to use AI to cheat on writing assignments. That might address one of the central concerns I hear from musicologists who teach writing-intensive classes: that students will no longer be motivated to “write to learn” if AI can write for them. But Bowen and Watson are confident that, “just as calculators did not eliminate the need for human math, AI will not eliminate the need to write and to write well and with ease, clarity, and voice” (p. 199). They lean heavily into the “writing to learn” paradigm, pitching writing assignments that weigh process over product. They suggest asking students to write about themselves, respond to ethical dilemmas, practice journaling, and conduct interviews. Given that “it is proving virtually impossible to create writing assignments that AI can’t do (at least to C-grade level)” (p. 201), and given that “good writing is good editing” (p. 213), they recommend asking students to submit writing along with the prompts they used to get AI to generate and/or revise and/or modify the voice of the artifact. Again, all of these recommendations strike me as good pedagogies that the use of AI can enhance.
AI will never replace human interaction10 (although there are horror stories about people trying).11 The more of the “authoritarian” parts of teaching we can pass off to AI in the form of drilling information, assessing student learning, and even grading, the more our work as teachers becomes less like that of a judge, general, or ruler and more like that of a gardener, woodcarver, or sand sculptor: we work with the material at hand rather than against it.
What I love about Teaching with AI is its clear-eyed emphasis on pedagogical values that I share. But it is far from a perfect book. In addition to its failure to address with any degree of depth the many reasonable ethical concerns circulating about AI, it also fails to show what can go wrong both in and out of the classroom. AI is a “naive intern” that only functions well when we prompt it well. The majority of the prompts shared in Teaching with AI are just ideas rather than fully fledged realizations, and the book provides little direct instruction in prompt engineering. The authors admit that it takes quite a bit of finessing to get high-quality responses in many cases, but they tend not to show their own work in this regard. Despite the instructions in early chapters to think carefully about which AI chatbot or tool is best suited for which activity, little guidance is given as to which kinds of prompts work best when using different tools (pp. 155–56 provide a rare exception). I would have appreciated more reminders that students aren’t necessarily going to have the know-how or the persistence to get the best results out of AI. That admission is a key reason why scholars—the people with the expertise and the critical thinking skills and question-generation skills and pedagogical skills—need to teach students to use AI well.
I found the tone of the book to be grating at times. Whether tongue in cheek or not, the breathless salesmanship in excerpts like this one irked more than inspired:
But wait, there’s more. You could, for example, get an AI to draft an accreditation report, optimize your class schedule, act as an external consultant for your strategic plan, create a departmental dashboard, plan an event, anticipate future student demands, review government compliance, create a department newsletter, do a sentiment analysis of teaching, or review policies for equity and recommend changes that would increase graduation rates or support for underrepresented students (p. 104).
Bowen and Watson—both distinguished scholars—might have tempered their salesmanship with a bit more circumspection. Thanks to their unbridled enthusiasm, the listy-ness of the excerpt above is more representative than exceptional, becoming overwhelming at times. Chapters 11 and 12 in particular offer dozens of great ideas, but none with the kind of depth I suspect many teachers will want to see before they’re willing to try what the authors propose. The book is also astonishingly broad, touching so many fields that some of the applications of AI will feel quite distant from the needs or interests of musicologists, not to mention those in many other disciplines.
Through its very broadness, however, the book demonstrates indirect sympathy to a concern I’ve heard colleagues voice: generative AI seems to rob us of pleasures central to the intellectual lives of professional and budding musicologists. Those pleasures include slow, deliberative, and repetitive reading, writing, and listening, and we want our students to experience the formative nature of these scholarly pursuits. In response to this, Bowen and Watson repeatedly emphasize that there are times when we should encourage students not to use AI because of the pleasure or learning it would rob them of. And they demand that readers consider how AI can reduce the burden of those things that don’t give students (or faculty) pleasure. In doing so, it might actually allow all of us more time to spend on activities such as reading, writing, and listening.
In other words, teaching with AI is not an all-or-nothing pursuit. Bowen and Watson argue that we don’t have to use AI for everything and that we shouldn’t avoid using AI entirely, either. An unspoken thesis in their book—one that I find deeply compelling—is that AI forces faculty and students to decide what really matters to us in our teaching and learning. I am confident that a great deal of my students’ learning will remain stubbornly analog because singing in class helps them care; guided listening helps them recognize that they can; and assignments that give them agency to bring themselves to their schoolwork remind them that they matter. As with every other tool—the printing press, personal computers, the internet—we have to decide when it is useful and when it is counterproductive. Making that distinction will require open-ended, albeit cautious and critical, engagement with AI. But the sooner we make such a distinction, the sooner we can get on with the things that we do best as humans, with or without AI assistance.
- Romolo Gondolfo and Lewis J. Perelman, “Will Technology Alter Traditional Teaching?” Christian Science Monitor, September 22, 1993, https://www.elon.edu/u/imagining/expert_predictions/will-technology-alter-traditional-teaching-6/, verified via https://www.proquest.com/newspapers/will-technology-alter-traditional-teaching-series/docview/291209518/se-2?accountid=351.[↩]
- “Elon University’s Early 1990s Internet Predictions Database,” Imagining the Internet: A History and Forecast, Elon University, accessed October 20, 2024, https://www.elon.edu/u/imagining/time-capsule/early-90s/90s-database/.[↩]
- José Antonio Bowen, Teaching Naked: How Moving Technology Out of Your College Classroom Will Improve Student Learning (San Francisco: Jossey-Bass, 2012).[↩]
- José Antonio Bowen, “AI Literacy and Prompting,” Teaching Naked: AI Teaching and Workshops, accessed October 27, 2024, https://teachingnaked.com/prompts/.[↩]
- Full disclosure: I read Teaching with AI immediately after finishing Cate Denial’s A Pedagogy of Kindness (Norman: University of Oklahoma Press, 2024). The centering of care and concern for students in both produced some stimulating and surprisingly consonant counterpoint.[↩]
- A foundational text on Universal Design for Learning in education is Anne Meyer, David Rose, and David Gordon, Universal Design for Learning: Theory and Practice (Wakefield, MA: CAST Professional Publishing, 2014). See also “About Universal Design for Learning,” CAST, accessed November 9, 2024, https://www.cast.org/impact/universal-design-for-learning-udl.[↩]
- The “Transparency in Learning and Teaching” (TILT) framework is a leitmotif with varying implications for many of the chapters. For more on the TILT framework, see Mary-Ann Winkelmes, “Introduction: The Story of TILT and Its Emerging Uses in Higher Education,” in Transparent Design in Higher Education Teaching and Leadership, ed. Mary-Ann Winkelmes, Allison Boye, and Suzanne Tapp (Sterling, VA: Stylus Publishing, 2019), 1–14. See also TILT Higher Ed, accessed October 27, 2024, https://www.tilthighered.com/.[↩]
- Along similar lines, my colleagues Kirk Martinson, Karen Olson, Sara Dale, and Kendall George have developed a set of “Ps of Prompt Engineering” that include “prime” (give AI the information it needs to know); “persona” (assign it a role or voice); “public” (define the audience); “product” (clarify the form or format the results should take); “prompt” (the actual query); and “polish” (re-prompt as needed to refine the results). They developed their list in response to a similar, less alliterative list in Louie Giray, “Prompt Engineering with ChatGPT: A Guide for Academic Writers,” Annals of Biomedical Engineering 51, no. 12 (2023): 2629–33, https://doi.org/10.1007/s10439-023-03272-4.[↩]
- See Peter Elbow and Mary Deane Sorcinelli, eds., Writing to Learn: Strategies for Assigning and Responding to Writing Across the Disciplines (San Francisco: Jossey-Bass, 1997).[↩]
- Recent scholarship has cemented the importance of faculty-student interactions for a variety of student outcomes, and while some have advocated for AI as a way to engage students who may feel disconnected, there is no question that in-person, relationship-rich teaching will continue as the gold standard in higher education. See Peter Felten and Leo M. Lambert, Relationship-Rich Education: How Human Connections Drive Success in College (Baltimore: Johns Hopkins University Press, 2020); and Peter Felten, Oscar R. Miranda Tapia, Isis Artze-Vega, and Leo M. Lambert, Connections Are Everything: A College Student’s Guide to Relationship-Rich Education (Baltimore: Johns Hopkins University Press, 2023).[↩]
- See, for example, Kevin Roose, “Can A.I. Be Blamed for a Teen’s Suicide?” New York Times, October 23, 2024, https://www.nytimes.com/2024/10/23/technology/characterai-lawsuit-teen-suicide.html?searchResultPosition=2.[↩]
