Listening is essential to teaching music history, but incorporating it into our pedagogy poses many challenges. Whether I’m lecturing, leading a small-group discussion, or designing assignments, I want students to engage with music that they have actually listened to, not just read about. When students complete one of my classes, I want them to be able to listen critically and connect what they hear to music-historical issues. As Pamela Beach and Benjamin Bolden have argued, critical listening is closely related to critical literacy and critical thinking.1 Critical listening, reading, and thinking belong to a cluster of shared skills that are an essential part of university learning, and cultivating them requires practice. Just as a quick skim of a text is insufficient to prepare for a class discussion, a single listening is rarely sufficient for deep engagement. Critical listening requires close and targeted relistening, perhaps multiple times, accompanied by reflection. And like any skill, learning critical listening requires substantive feedback.
But how do we teach this kind of listening? In my previous teaching, I’ve tried two approaches, both of which I’ve found unsuccessful. With shorter listenings, I’ve simply played music in class, when possible, with a projected score/text or with students following along on handouts or in anthologies. Before listening, I introduce the piece and what we are listening for. While listening, I verbally and/or physically point out key features, and at the end, I ask students to discuss what they have heard. Listening together as a group in class works reasonably well for some students, and it allows for the kind of teaching games designed by Laurie McManus.2 However, it takes a lot of valuable class time and is thus not feasible with longer listenings, and students using hearing aids or other assistive listening devices may struggle to understand what is said over the music. Nor does it allow students to relisten individually if they had trouble hearing or noticing what I was pointing out.
Alternatively, I have asked students to read an introduction to the music (this might be taken from an article or an anthology, or I might prepare an original text of my own) and then listen to the music before class, following along with a score, piano reduction, or text/translation as appropriate. This was my usual experience when I was an undergraduate, and it was what we did when I began to TA as a graduate student.3 I also adopted it in my first classes as instructor of record. Without any disrespect to my past teachers, I find this approach problematic as well. Even if students do the listening, I have no way of knowing that they have noticed the features that we are interested in for class discussion. While class discussion may seem to reveal which students have prepared and which have not, some students may have spent quite a lot of time and effort trying to prepare but struggled to connect what they read to what they heard, or they may have even made wrong connections. The fundamental issue here is that students do all the preparation in isolation, and they receive little to no feedback to help them develop their critical-listening skills.
The main way that we give feedback to students is through assessment. Assignments and activities allow us to check student learning and provide comments that point out errors, suggest where things could be improved, and highlight successes. We can also ask students to perform self-assessments based on guidelines or potential answers. But the question remains: How do we apply best practices for assessment and most helpfully provide feedback on student listening?
Based on my own previous experience as a student, my practice as a teacher, and conversations with colleagues, the main “feedback” students receive on their listening is from listening quizzes. On closer examination, however, these fail either to assess or to provide feedback on the critical-listening skills we are trying to cultivate in our students. Simple identification quizzes only test students’ ability to remember what a piece sounded like, often only the first few seconds. Earlier in my career, I gave many a listening quiz where I played the first minute of a piece, only to watch some students recognize it immediately, write down the correct answer, and then twiddle their thumbs for the remaining fifty seconds (and through the entire second playing of the excerpt), while other students sat the whole minute with panicked looks on their faces––whether they heard just the first ten seconds or the full minute would have no impact on their ability to answer. Even those who tried to guess the answer based on what they heard were not applying the kind of critical listening I was trying to teach: Recognizing that a clip is scored for full orchestra and guessing one of the orchestral works from the list of required listening is hardly comparable to being able to aurally recognize how Beethoven’s approach to motivic work and musical form differed from that of his precursors and use that recognition to deepen one’s understanding of how Beethoven’s music was thought to better express his inner subjectivity. To move toward the latter ability, I have also tried expanding listening quizzes to ask students not only to identify the piece, but also to answer questions about its characteristics, features, or historical context. But this does not solve the underlying issue: Listening quizzes only test what is essentially rote memorization, not the cultivation and application of critical-listening skills.
In this article, I present an approach to these issues that I have developed in my teaching at the University of Melbourne. Over the past four years and with the assistance of a fantastic team of teaching assistants, I have developed a series of interactive videos for students’ at-home listening that provides students with immediate feedback and allows for repeated listening, and that enables the teaching staff to follow students’ work, adapt lesson plans accordingly, and, when necessary, offer further individual support. As an additional benefit, creating these videos has liberated my teaching from following a single anthology that students are required to purchase. Instead, I have curated a range of listenings, drawing from a variety of sources that allow me to tailor topics and diversify content. Below, I discuss interactive listening videos in two parts. I begin with the practical creation of the videos. This section also includes several video demonstrations. I then turn to the key pedagogical considerations that go into selecting pieces and designing activities.
For the sake of consistency, I will focus on how I have applied these techniques in the nineteenth-century survey, titled “Western Music History 2: The Long Nineteenth Century.” I recognize that there is ongoing debate within the field over the value of such survey courses, but this article will not directly engage those questions.4 I believe the large-lecture survey class is a useful example of the value of these interactive listenings, as it relies on weekly student listening and presents particular challenges in terms of scaling individual feedback. In fact, the techniques I advocate here may also help other scholar-teachers to address problems with the survey as a format, particularly as they pertain to issues of coverage and diversity of examples.5 Nevertheless, the approach I am sharing is broadly applicable in virtually any kind of music history class.
Practical Creation
In this article, I will focus on my experiences using a tool called FeedbackFruits that simplifies the creation and use of interactive videos. The design and features of this tool have necessarily shaped the specifics of how I address pedagogical challenges; how I design activities depends on what the tool does and does not allow. FeedbackFruits is a commercial product that is embedded in my learning management system (LMS) via a university subscription (my university uses Canvas, but FeedbackFruits is also available on Blackboard and Moodle), and based on conversations with colleagues, it is a fairly common add-on at other universities.6 My goal in this article is not a commercial endorsement of the product; I’ll also discuss some of the program’s shortcomings.7 Similar activities could be created using freely available programs like H5P,8 or paid commercial programs such as VoiceThread,9 but I will not be discussing them due to my lack of firsthand experience. While the specifics of creating videos would need to be tailored to each of these programs, my discussion of designing learning outcomes (including what information to include, how to formulate questions, and so forth) is broadly applicable regardless of the program used.
The two main advantages of FeedbackFruits are that it simplifies the creation process and that it is integrated into all the functions of the LMS. I create interactive activities directly in the LMS, students complete them there, and when my TAs and I grade their responses (we mark only for completion), those grades appear directly in my LMS gradebook.
Video 1 is a tutorial showing how I create listening activities using FeedbackFruits. The first step is to prepare the activity: I write an introduction and instructions for the specific listening, prepare a video that synchronizes score to music (or, for genres like opera or ballet where the interaction between music and stage action is important, video of a stage production),10 choose what information or questions I want to appear while students listen, and identify at what time in the recording these should pop up. It may seem contradictory to include a visual element for a listening activity, but I believe including score or stage action discourages distracted listening (i.e., going on social media or trying to simultaneously complete the assigned reading while the music plays in the background). And in some cases, I want the students to focus on the interaction between music and stage action.
Video 1
Once I have completed this preparation, I can create the activity itself. First, I create a new page in Canvas using FeedbackFruits. Within the FeedbackFruits shell, I copy-paste the instructions into the blank field for instructions, and then upload the video. After that, I add the interactive elements, what FeedbackFruits calls “annotations.” Because of the way FeedbackFruits is designed, annotations can be added by anyone: I can add them before students listen, or students can add them while listening. In most of the activities, I prefer to add annotations myself, but as I will discuss later, in some cases I also ask students to make their own.
There are two kinds of annotation in FeedbackFruits. Both are time specific, indicated by a dot on the tracker at the bottom of the video. When watching the video, the annotations appear as you reach their specified time locations. The first kind of annotation is the information card, which pops up along the bottom of the screen without stopping the video. I use these to point out whatever it is that students are listening for in the piece. To take a very straightforward example: When we are listening for sonata form, these cards pop up to inform students that a new formal section has begun or a modulation has taken place. Video 2 provides an introduction to what these videos look like from the students’ perspective, including these information cards and the question cards to be discussed below. For those not watching the videos, figure 1 shows such a pop-up information card indicating the start of the second theme area in the first movement of Beethoven’s Eroica symphony. Along the tracker, you can see plain white dots indicating the locations of pop-up information cards.

Figure 1 Screenshot from interactive video of Ludwig van Beethoven, Symphony no. 3 in E-flat major, first movement. (As will be discussed below, Maurice Windleburn was a teaching assistant who helped me to create these videos.)
Video 2
The second, more useful kind of annotation is the question card. These cards can be set so that the recording stops playing and does not resume until the student answers a question. Their locations can be seen in figure 1 where there are “lock” icons along the tracker. Questions can take several forms; I prefer free response. To stick with the example of sonata form, near the end of a formal section, I have a question card appear that asks: “As we approach the end of the Exposition [or relevant section], what key characteristics of this formal section did you hear?” (see figure 2). The questions then ask students to reflect and conduct a self-assessment. After they submit their free-response answer, the correct answer appears, and they are asked to rank how close their answer was: “correct,” “almost,” or “wrong” (see figure 3). There is no grade associated with these answers; it is simply an opportunity for students to evaluate for themselves how well they are linking their listening to key concepts. If they wish, they can use the video tracker to go back and relisten; after completing the question, there will no longer be a break in their listening if they go back.

Figure 2 Screenshot from interactive video of Ludwig van Beethoven, Symphony no. 3 in E-flat major, first movement

Figure 3 Screenshot from interactive video of Ludwig van Beethoven, Symphony no. 3 in E-flat major, first movement
For listenings where I use the pop-up information and question cards, I do not ask students to add their own annotations. For other listenings, however, the focus of the activity is for students to listen for something and create an annotation when they hear it. For instance, in preparation for a lecture on “Gendered Aesthetics in the Nineteenth Century,” I ask students to first read a handout that compiles excerpts from reviews of Ethel Smyth’s opera The Wreckers that specifically address Smyth’s gender in relation to her music.11 I then ask students to watch a scene from the opera and to add annotations for at least three moments where they hear something that the critics might have identified as an example of Smyth writing in a “masculine” or “feminine” manner. In their annotation, they have to describe what it is about the music at that moment that caught their attention: a crunchy harmonic progression, the onset of a brass fanfare, etc. For an activity like this, there is no “correct” answer for students to measure themselves against, but as I will discuss below, students can conduct self-assessment by comparing their answers to the anonymized responses of their peers. And of course, we continue this discussion in the following class.
Regardless of whether I have added annotations or the students have added their own, I end each video with an open question. To return to the example of sonata form, I ask: “How did Beethoven’s treatment of sonata form resemble and diverge from sonata form as you studied it in Haydn and Mozart last semester?” The answer to the final question is always: “We’ll be discussing this in class, so bring your ideas with you!” Admittedly, this is a bit corny, but it sets up class discussion well, and this is the point. As the Beethoven example illustrates (or the Holmès example if you watched videos 1 and 2), many of the questions I ask focus on musical structure or form, setting up class content that makes connections between these musical features and broader historical issues. Indeed, as Kelly Bylica writes, listening to identify key formal elements is only a starting point before the discussion of alternative interpretations and other forms of deeper engagement.12
When creating each video, I also adjust the settings for grading. There are a variety of settings available to grade responses, but I set the videos to mark only for completion. FeedbackFruits can automatically record whether students have completed every question card and/or watched the full video. This is integrated into my Canvas gradebook and provides a basic level of accountability that students complete the activities. More valuable from a pedagogical perspective is the ability to see an overview of all student responses to each question, either by the whole class or by discussion section. This allows my TAs and me to measure how well students are meeting the learning outcomes of each listening, and we can adjust lesson plans accordingly (e.g., spend extra time reviewing sonata form, or skip the review entirely to dive into broader historical issues). When necessary, we can also reach out directly to individual students who struggled with a specific activity (or identify students who are struggling overall for additional support). Video 3 explores these features of the interactive listenings from the instructor’s perspective.
Video 3
Finally, I can set the activities so that students can only stream, not download the videos (or associated audio). This is important because it means that only students enrolled in the class can access the materials on the password-protected LMS, and thus my use of recordings, etc., exclusively for educational purposes falls under the Australian legal equivalent of Fair Use.
Pedagogical Concerns
In this section, I discuss the process of selecting what pieces to include as required listenings and how to design the interactive tasks for students. The selection of pieces proceeds much as it would without interactive listenings, but this approach gives me greater freedom. When I previously taught survey classes, the need to provide students with scores and listening guides for each listening created a strong incentive to use a single, published anthology that students had to purchase. At times I would ask students to listen to something from a different anthology and provide them with a scan of its pages, or ask students to listen to something with a listening guide I created myself by distilling an analysis drawn from a journal article or book chapter. But the ease of the anthology and the desire to ensure students got their money’s worth out of it meant that we generally just used the anthology.
Interactive listenings allow for more mixing and matching between anthologies. Building on the example of Beethoven’s Third Symphony above, I adapted my activity for this listening from the Norton Anthology of Western Music.13 I use it in a two-part introductory lecture in which I draw on Dahlhaus’s discussion of the nineteenth century as the “Era of Beethoven and Rossini” to frame the semester as an exploration of different approaches to music and music making, and how those approaches became ideologically loaded.14 To help make this point, I pair the first movement of Beethoven’s Third Symphony with the overture to The Barber of Seville, drawing on the presentation of that piece in the Oxford Anthology of Western Music.15 Students come to class with an aural and theoretical understanding of both pieces’ form, allowing us to discuss how Beethoven’s substantial expansion of the development section was tied to the Romantic ideology of music as subjective self-expression and heroic narratives, while Rossini’s overture is essentially a sonata form with a vestigial development section. Rather than seeking to express subjectivity, sublimity, and philosophical depth in an expanded development, Rossini instead conveys sensual pleasure and beauty by turning his focus to the presentation and repetition of original themes and melodies.
Selecting pieces is also a dynamic process based on what sorts of interactive tasks they facilitate––and what sorts of tasks are possible within FeedbackFruits. For example, about midway through my nineteenth-century survey, we spend a week discussing absolute versus program music in the so-called War of the Romantics.16 Alongside conventional examples by Berlioz and Liszt for program music and Brahms for absolute music, I include works by Augusta Holmès and Emilie Mayer. Holmès provides a straightforward example. Her symphonic poem Pologne took as its program the 1866 painting Les massacres de Varsovie by Tony Robert-Fleury depicting the 1861 massacre of Polish demonstrators in Warsaw by Russian soldiers. It provides students an excellent example of how a “program” did not need to be a narrative text. Drawing on an article by Ryszard Daniel Golianek that explains the work’s program and four-part structure, I designed an interactive listening that guides students through several steps of reflection: At the beginning of sections, students are told the program for that portion of the piece, and near the end of the sections, they are asked to reflect on what musical elements conveyed that program.17 At the end of the piece, students are asked to reflect on where they felt the music effectively conveyed the program, and where they felt it did not.
Emilie Mayer’s First Symphony posed greater challenges due to the requirements of FeedbackFruits. Mayer is a good example to discuss absolute music as she was a successful composer of symphonies (as well as other chamber and orchestral music) in the mid-nineteenth century, a period whose symphonies are often overlooked not just in textbooks, but in music historiography more generally.18 Her First Symphony is particularly useful for framing the discussion of how both sides of the War of the Romantics claimed Beethoven’s legacy. Mayer’s symphony is replete with references to Beethoven, including tonal architecture modeled on Beethoven’s Fifth Symphony and motivic allusions to the “Appassionata” Piano Sonata.19 As a representative of “Team Absolute Music,” Mayer’s symphony thus pairs well with Brahms’s First Symphony and its many references to Beethoven.
Mayer’s (and Brahms’s) references to Beethoven, however, are better suited to a guided listening than an interactive listening. One might ask students to notice Mayer’s tonal plan moving from C minor to C major, but a leading question like “Can you think of any famous symphonies by earlier composers with this tonal plan?” is not pedagogically effective. Similarly, motivic allusions are better demonstrated in short, comparative clips that make the relation between original and modified motifs clear, rather than by providing leading questions or expecting students to have a deep familiarity with Beethoven’s piano sonatas. After much reflection, I decided to use Mayer’s symphony as a review of sonata form. In previous iterations of the course, we discussed sonata form at length in the first week of the semester (using the examples above from Beethoven’s Third Symphony), but we did not explicitly revisit and review the topic at any point, which was an obvious pedagogical oversight. Additionally, sonata form provides a solid basis for class discussion of topics like Mayer’s references to Beethoven in her tonal architecture and motivic work.
Readers may have noticed that by including Holmès and Mayer in a discussion of absolute and program music, I have added two examples by women into a topic (large-scale orchestral music for public performance from the nineteenth century, like symphonies and symphonic poems) that tends to be even more male-dominated than most topics in textbooks and anthologies. Indeed, as Paul Gabriel Luongo has noted, despite efforts to broaden inclusion, the Norton Anthology remains dominated by white, male composers, and much of the diversification that has happened has taken place with music composed after 1900.20 Examples of music by women in the nineteenth century tend to be for solo piano, chamber, or choral repertoire. Interactive listenings have helped me to expand coverage in my survey to include substantially more music by women, composers of color, and––an additional concern for me as a teacher based in Melbourne given the Euro-American focus of the major textbooks and anthologies––composers from Australia and New Zealand.
Critical Reflections
These interactive listenings resolve the major shortcomings of my previous approaches to student listening assignments identified in the introduction to this article. I can be confident not only that students have completed listenings before class, but also that their listening aligned with the desired learning outcomes. Because I no longer need to use a single anthology that all students must buy, interactive listenings have also allowed me to pick examples that are particularly well suited to what I want to cover in class and to add much-needed diversity to the selection of the historical figures we study.
Information cards ensure that students connect key features of the music to their aural experience; I do not need to worry that students are confusing information from a listening guide or, for instance, thinking that the start of the transition between a sonata’s first and second theme areas is actually the start of the development. With shorter listenings done together in class, I might have shouted this information over the recording, but now students can complete this preparation at home, freeing up class time for deeper discussion. Students can relisten as needed, and the use of pop-up information cards is also more accessible for students using hearing aids or other assistive listening devices.
Question cards stimulate students to think critically about what they have heard in real time, helping to cultivate critical-listening skills, and students receive robust feedback on their listening. Self-assessment against both potential answers and their peers’ responses fosters deeper learning and ensures students are not making wrong connections. My TAs and I can also use students’ responses to gauge the class’s overall understanding and tailor lesson plans; moreover, we can more clearly identify students needing additional individual assistance. Open-ended questions at the end of listenings prime students for class discussion.
Finally, I can be reasonably confident that students are not using Artificial Intelligence (AI) to get around doing the listenings. It is difficult to get accurate answers from AI to targeted questions about the online videos (recall, students cannot download the video itself to upload to an AI). While they may use AI to generate general answers, for example, of typical features they “heard” in the cabaletta section of a scena, my TAs and I can provide direct feedback that their answers were too vague and need to be improved in future activities.
Using FeedbackFruits to create the interactive listenings does have some minor drawbacks. Because the program is primarily intended to get students interacting with each other, it is impossible to set videos so that students cannot see other students’ responses. This means that I cannot create videos that “quiz” students, for example, by asking them to add their own annotation cards at the start of each new formal section of an operatic scena; the first student to answer correctly would spoil the activity for everyone else. Beyond these constraints, it can sometimes simply be a challenge to formulate good questions.
Another drawback to students seeing each other’s responses relates to large class sizes. My survey has around one hundred students, and if faced with self-assessing against so many of their peers’ responses, students are likely to be overwhelmed and not engage with the responses.21 As I have mentioned above and also shown in the videos, FeedbackFruits does have a setting that limits students to seeing only the responses of other members of their discussion section. While this solves the problem of sample size, it raises others. Most frustratingly, when students transfer enrollment from one section to another (especially at the start of the semester), this change is not always quickly and properly recorded on Canvas. As a result, FeedbackFruits sometimes thinks a student is enrolled in multiple sections, which confuses its attempts to limit students to responses from their assigned section. This can cause the activity to crash for everyone. FeedbackFruits can get similarly confused when I check that a video is set up correctly by viewing it in Canvas’s “Student View” mode. In these cases, I have to reset the entire activity, which also erases all the inputs from students who have already completed the task. Usually, it is possible to save the existing responses before resetting the activity, but the additional manual work of compiling and reviewing them is a time-consuming hassle.
One of the greatest advantages of using FeedbackFruits for these videos is that it streamlines certain aspects of the process of setting up interactive activities (as opposed to, say, doing it all with H5P alone). Nevertheless, creating individual listening activities is time consuming. Either I or a teaching assistant has to make videos that sync score and music, and each listening requires a detailed plan of what we are listening for, where in the score and in the recording those specific features appear, and so forth. I still rely a lot on anthologies, but I have also developed listening activities straight out of scholarly books or articles, following the author’s analysis (as in the Holmès example above, drawing on Golianek’s work). I have also benefitted from teaching assistants hired using funds made available at my university for moving teaching online during the COVID-19 pandemic (these interactive listenings are just as useful for online teaching as for in-person delivery), and I would like to acknowledge the contributions of Maurice Windleburn, Madeline Roycroft, and Chai Jie Low.
Conclusion
Plenty of pedagogy scholarship tells us that we should not overly rely on student evaluations to gauge the quality of our teaching, but it is still worth noting that student responses to the model presented here have generally been positive. While some students have been frustrated by the accountability for actually completing the required listening (although I am quite flexible with extensions), many others have commented that the interactive listenings are helpful and make it clear why they need to do the listening. Occasional technical glitches remain a frustration for both me and the students, and students have also noted that the variation in time required for listening from week to week (one week they may need to listen to two longer symphonic movements, while another week they are only listening to two or three nineteenth-century parlor songs) can make planning their time difficult. Additionally, because each listening is set up as a separate assignment, some students have suggested that the subject feels “over assessed.” I continue to take student feedback on board, especially as it concerns the time demands of listening activities. Since first introducing interactive listening in 2022, I have worked to streamline the assignments and their grading, in an effort to keep total weekly listening times manageable, and to respond to other student feedback as I can.
Beyond student evaluations, I can see the value of this approach manifest in my students’ work. Students are better prepared for class discussions and activities, and they are more confident writing about the sounds of music (as opposed to just the content of printed scores) in their final research papers. I am also pleased to observe how students’ responses to the interactive listening questions improve as they sharpen their critical-listening skills over the course of the semester. As I’ve noted, pedagogical research has established the value of interactive learning. After four years of implementation, critical evaluation, and refinement, I believe these interactive listening activities offer a useful model for music history teaching.
- Pamela Beach and Benjamin Bolden, “Music Education Meets Critical Literacy: A Framework for Guiding Music Listening,” Music Educators Journal 105, no. 2 (2018): 43–50, https://doi.org/10.1177/0027432118808580.
- Laurie McManus, “Playing by Ear: Listening Games in the Music History Classroom,” this Journal 5, no. 1 (2014): 23–39, https://jmp.amsmusicology.org/wp-content/uploads/2026/02/5.1-McManus-130-Article-Text-930-1-10-20140922.pdf.
- Teaching terminology can vary widely from institution to institution, and from country to country. This article draws on my experience as a student and teacher at six institutions in four countries (on four continents). For comprehensibility, I have done my best to convert titles and terminology to standard American academic language; e.g., the role of “tutor” in Australia and Hong Kong or “TF” at Harvard University is described throughout this article as “TA.”
- See Colin Roust, convenor, “Roundtable: The End of the Undergraduate Music History Sequence?” this Journal 5, no. 2 (2015): 49–76, https://jmp.amsmusicology.org/vol-5-no-2-2015/. See also Louis Kaiser Epstein et al., “Mind the Gap: Inclusive Pedagogies for Diverse Classrooms,” this Journal 9, no. 2 (2019): 119–72, https://jmp.amsmusicology.org/wp-content/uploads/2026/02/9.2_Epstein-et-al_306-Article-Text-2203-1-10-20190806.pdf; Timothy Mark Crain, “Beyond Coverage: Teaching for Understanding in the Music History Classroom,” this Journal 4, no. 2 (2014): 301–18, https://jmp.amsmusicology.org/wp-content/uploads/2026/02/4.2-Crain-Beyond-Coverage-110-Article-Text-655-7-10-20140312.pdf; Sara Haefeli, Teaching Music History with Cases: A Teacher’s Guide (Routledge, 2023); and the scholarship cited in note 5. While I agree with many of the critiques of the chronological survey as a model, I believe that many readers will be in a similar position to myself, where institutional politics mean that moving away from a chronological survey is a long-term goal. In the meantime, we need to find ways to make the chronological survey work as best we can.
- Indeed, diversity and coverage have been particularly thorny issues in debates about the survey. For instance, Kimary Fick argues that no amount of diversification or expansion of coverage can redeem the power dynamics inherent in the survey (Kimary Fick, “Systems of Power, Privilege, and Oppression: Toward a Social Justice Education Pedagogy for the Music History Curriculum,” this Journal 12, no. 1 (2022): 46–67, https://jmp.amsmusicology.org/wp-content/uploads/2026/02/12.1_Fick_329-Article-Text-2423-1-10-20220919.pdf), and Paul Gabriel Luongo critiques the idea that the chronological survey can be meaningfully diversified and asks whether other ways of framing music history overviews would not be more effective (Paul Gabriel Luongo, “Constructing a Canon: Studying Forty Years of the Norton Anthology of Western Music,” this Journal 12, no. 1 (2022): 1–36, https://jmp.amsmusicology.org/wp-content/uploads/2026/02/12.1_Luongo_334-Article-Text-2421-1-10-20220919.pdf). Other scholar-teachers accept the survey as a model (either out of pragmatism or real belief in it as a model) but seek to diversify or decolonize its content. See Ralph P. Locke, “What Chopin (and Mozart, and Others) Heard: Folk, Popular, ‘Functional,’ and Non-Western Music in the Classic/Romantic Survey Course,” in Teaching Music History, ed. Mary Natvig (Ashgate, 2002), 25–42; Peter Burkholder’s and Brian C. Thompson’s chapters in C. Matthew Balensuela, ed., Norton Guide to Teaching Music History (W. W. Norton, 2019); Margaret E. Walker, “Towards a Decolonized Music History Curriculum,” this Journal 10, no. 1 (2020): 1–19, https://jmp.amsmusicology.org/wp-content/uploads/2026/02/10.1_Walker_310-Article-Text-2265-1-10-20200409.pdf; and Horace J. Maxile Jr. and Kristen M. Turner, Race and Gender in the Western Music History Survey: A Teacher’s Guide (Routledge, 2022).
- Feedbackfruits has recently ended a program that allowed educators to apply for free access, but trial subscriptions are available. Feedbackfruits, accessed November 24, 2025, https://feedbackfruits.com/get-started-now/educators#.
- In this, I model my approach on Jennifer Hund, who also used her specific experiences with an online product to discuss how she incorporated peer review into her teaching. Jennifer L. Hund, “Writing About Music in Large Music Appreciation Classrooms Using Active Learning, Discipline-Specific Skills, and Peer Review,” this Journal 2, no. 2 (2012): 117–32, https://jmp.amsmusicology.org/wp-content/uploads/2026/02/2.2-1-Hund-41-Article-Text-273-12-10-20120306.pdf.
- H5P, accessed October 29, 2025, https://h5p.org/.
- VoiceThread, accessed November 24, 2025, https://voicethread.com/products/highered/.
- Videos that sync music and score can be created using video-editing software. Two low-tech alternatives are 1) copying and pasting the score onto a series of slides in PowerPoint (or a similar program), recording the presentation with the music playing, and then exporting the presentation as a video; or 2) using screen capture (or even Zoom) to record the score scrolling on screen while the music plays.
- To create this handout, I relied on Elizabeth Kertesz’s “Issues in the Critical Reception of Ethel Smyth’s Mass and First Four Operas in England and Germany” (PhD diss., University of Melbourne, 2001).
- Kelly Bylica, “Critical Listening and Authorial Agency as Radical Practices of Care,” in The Oxford Handbook of Care in Music Education, ed. Karin S. Hendricks (Oxford University Press, 2023), 482–93.
- Ludwig van Beethoven, Symphony No. 3 in E-flat Major, op. 55 (Eroica): First movement, Allegro con brio, in Norton Anthology of Western Music, 6th ed., ed. J. Peter Burkholder and Claude V. Palisca, vol. 2, Classic to Romantic (W. W. Norton, 2010), 282–321.
- Dahlhaus critically engages with this concept, which comes from nineteenth-century discourse, especially the writings of Raphael Georg Kiesewetter. Carl Dahlhaus, “The Twin Styles,” in Nineteenth-Century Music, trans. J. Bradford Robinson (University of California Press, 1989), 8–15.
- Gioachino Rossini, Il Barbiere di Siviglia (The Barber of Seville), Overture, in Oxford Anthology of Western Music, ed. Klára Móricz and David E. Schneider, vol. 2, The Mid-Eighteenth Century to the Late Nineteenth Century (Oxford University Press, 2013), 298–302.
- On this term, see David Larkin, “The ‘War’ of the Romantics,” in Liszt in Context, ed. Joanne Cormac (Cambridge University Press, 2021), 85–93.
- Ryszard Daniel Golianek, “Imaginary Poland: The Musical Depiction of a Non-Existent Country in Instrumental Music by Nineteenth-Century Foreign Composers,” Ad Parnassum 12, no. 23 (2014): 107–33, https://www.utorpheus.com/index.php?route=adparnassum/issues&num=23.
- See Carl Dahlhaus’s claim that “by the middle of the nineteenth century [the symphony] had entered a crisis most clearly evidenced by the nearly two decades that separate Schumann’s Third Symphony (1850), his final work in the genre, from any work of distinction that represented absolute rather than programmatic music.” Dahlhaus, Nineteenth-Century Music, 265. For critiques of Dahlhaus on this point, see David Brodbeck, “The Symphony After Beethoven After Dahlhaus,” in The Cambridge Companion to the Symphony, ed. Julian Horton (Cambridge University Press, 2013), 61–95; and Christopher Fifield, The German Symphony Between Beethoven and Brahms: The Fall and Rise of a Genre (Ashgate, 2015), esp. 82. Since I last taught the nineteenth-century survey, a new article by Joanne Cormac has appeared that explicitly discusses Mayer in the context of the historiography of mid-nineteenth-century symphonies, and I look forward to using Cormac’s work to enrich our class discussion of Mayer. Joanne Cormac, “Regionalizing the Nation: The Symphonies of Franz Lachner and Emilie Mayer in Relationship to German National Identity,” Journal of the American Musicological Society 78, no. 2 (2025): 361–408, https://doi.org/10.1525/jams.2025.78.2.361.
- My approach to teaching this symphony draws substantially from Magnus Mulligan’s analysis. See Magnus Mulligan, “Analytical Perspectives on Emilie Mayer’s Symphony No. 1” (DMA thesis, North Dakota State University, 2023).
- Luongo, “Constructing a Canon.”
- On the benefits of interaction between students in the class, and particularly the importance of keeping group sizes at manageable levels, see Thomas W. Smialek Jr. and Renee Reiter Boburka, “The Effect of Cooperative Listening Exercises on the Critical Listening Skills of College Music-Appreciation Students,” Journal of Research in Music Education 54, no. 1 (2006): 57–72, https://doi.org/10.1177/002242940605400105.

