Vol. 15 no. 2 (2025)

Using Universal Design for Learning in Score Reading and Music Listening


Learning in music history classes can feel like patting one’s head while rubbing one’s stomach and hopping on one foot—multitasking is unavoidable. One of the central challenges of teaching music lies in its inherently sensory nature: the sound of the music, the physical gesture of performance, and the visual dimension of notation. This reliance on multiple senses can pose difficulties for students with sensory disabilities or those who learn differently. Music history courses, in particular, exemplify the complexity of multitasking, as they require students to listen, follow an unfamiliar score, and simultaneously recall prior knowledge and contextual information relevant to the work at hand.

Because so many music history courses require music theory as a pre- or co-requisite, students must also be able to read Western music notation and have some command of theory and/or aural skills to complete the course successfully. These skills are traditionally taught in one way and carried over into music history courses. But what about students who are weak score readers or have reading disabilities? Planning ahead with universal design for learning (UDL) strategies for these issues reduces the need to make on-the-spot adjustments.1

The most traditional approach to teaching music history—using a written musical score to demonstrate historical concepts—can be especially problematic given the pervasive idea that the musical score is “the defining version of the past.”2 As Edith Boroff explains, “Music has been viewed too much as notes. . . . Out of necessity, theorists and historians focus on music that has been notated, but that can be . . . a misleading limitation.”3 Those of us who teach music majors tend to assume that all of our students have similar music literacy skills—and we teach accordingly.4 But this guiding assumption is incorrect. We need to rethink pedagogical strategies that expect all students to simultaneously score-read and listen actively in the classroom. UDL researchers Thomas Tobin and Kirsten Behling propose “plus-one thinking” as a way to adhere to the UDL principle of multiple means of representation and engagement: Every means of representing something—in our case, a musical score written with Western notation—should be supplemented with at least one additional mode of representation.5

The assumption that all students possess—or should possess—the same level of music literacy limits our ability to use score reading effectively in class. It also prevents students who need additional support, whether through added markings, alternative notations, or other aids, from achieving the same outcomes as their more fluent peers. By offering at least one alternative way to represent the written score, we give students greater flexibility in how they engage with the material, allowing them to adjust their approach according to their needs in different lessons or contexts. This practice exemplifies what James Lang calls “small teaching”: a minor adjustment that can have a major impact on student learning.6


Although dyslexia is not commonly associated with music reading, empirical studies indicate that individuals with dyslexia, alexia, dyscalculia, dysgraphia, or attention-deficit/hyperactivity disorder (ADHD) frequently demonstrate reduced levels of music literacy. Such difficulties may arise from challenges in decoding notation, processing rhythm and pitch—particularly with respect to temporal precision and accuracy—and navigating the inherently linear structure of musical scores.7 Individuals with these conditions also experience difficulties with working memory, a core component of music literacy because the act of active listening engages one’s working memory.8 This is compounded when someone is reading a musical score while listening.9 But students with learning and processing disabilities are not the only ones who may need a little extra help. In a survey I conducted a few years ago, first-generation college students cited score reading—and the multitasking that score reading combined with active listening demands—as barriers to their learning in music history classes.10 Research has shown that students of color and low-income students tend to face obstacles related to obtaining instruments, lessons, and support for music study that inhibit later musical success, including fluency in music literacy.11 For many other students who do not fit into the above categories, music history classes are the first time they are doing any of these things, let alone doing them all at once. Lack of experience, in and of itself, can be a barrier.

Persistent stigma surrounds low music literacy and the use of supports for score reading, with some dismissing modified notations or alternative layouts as mere “crutches.” In reality, allowing students to annotate scores with note names, scale degrees, fingerings, solfège syllables, or other aids is not a form of cheating but a legitimate strategy for making notation functional according to students’ needs. As Bruce Quaglia, who integrates UDL into his music theory classes, has observed, this recognition prompts us to reconsider the role of the musical score within our learning objectives.12

Several designers have proposed notation systems that demand less decoding. Jessica Bailey, a pianist with a self-disclosed nonverbal learning disorder, has suggested that we “replace the notation in the score with letters so they match the names of the keys.”13 Dutch researchers Nanke Flach, Anneke Timmermans, and Hanke Korpershoek have proposed alternate layouts of music notation that include design adaptations of size, use of color, and stem direction.14 Jacques-Daniel Rochat, the creator of Dodeka, reinvented music notation around a four-line staff on which twelve chromatic notes occupy four positions—on, above, below, and between the lines—eliminating accidentals and indicating rhythm through note length.15 Paul Morris, the creator of Clairnote SN, simplifies notation by modifying the staff to indicate intervals more consistently and intuitively.16 Steven Heath, creator of Dodici, has devised a notation system whose staff represents the chromatic scale, uses overlapping clefs, allows notation of microtones, and alters note shapes to show rhythm.17 Blake West and Mike Sall, creators of Hummingbird, preserve the clef and five-line staff but alter the representation of pitches to show music’s spatial element more clearly.18 The websites for these systems feature some ready-made notated scores, but their number is limited. I share these approaches not only because they may prove useful, but also to underscore a broader point: Even these strategies for making written music more accessible and user-friendly remain grounded in the conventions of Western notation and the primacy of the notated page. Developing ways to represent music beyond this standardized system marks an essential first step toward “plus-one” thinking.

For instructors who teach with scores from anthologies, the best practice is to provide students with the score in both digital and hard copy, as the former allows students to enlarge the score as needed. One viable option comes from Richard Picking, who conducted a study on electronic versus paper scores and noted a strong preference for animated scores in which “each note on the score was marked in time to the music.”19 These tools can take the form of a scrolling score, in which a moving line indicates the position of the audible music within the notation, or an animated graphic score. The scrolling format is especially useful when close attention to pitch, rhythm, harmony, and intervallic structure is desired. It also helps students follow the recording’s progress—particularly in scores that include transposing instruments. If the actual notes are not important to what you’re teaching (scandalous, I know!), then an animated graphic score, also known as an animated visual score, can help to show things like texture, rhythm, and pitch levels. Each animated graphic score is uniquely designed to reflect the specific characteristics and demands of a piece, using color, shape, size, and motion to illustrate its structure and expression. These kinds of scores can also benefit students who are hearing impaired, as they visually represent musical elements—such as texture, pitch, and rhythm—that are essential to the music’s sound. They can likewise help the rare musicians with amusia, a condition that hinders sound processing.20 In these cases, I recommend showing the animated graphic score in class while also providing a digital version of the traditional score for students who prefer to follow the recording in that format.
A final suggestion for depicting the sound of a recording comes from Sean Zdenek, who recommends using either plain or annotated waveforms to visualize the sound being played.21

For those not set on using motion-based visuals, a visual listening guide such as those created by Hannah Chan-Hartley can also be effective.22 She has designed listening guides for major orchestras that visually map form, melody, instrumentation, and other musical elements, enhancing the listening experience by illustrating texture, timbre, and structural relationships. An app-based version also lets users interact with the guides. Chan-Hartley continues to create and post new listening guides online, and she takes commissions for guides that can fit a variety of needs.

Encoded music offers another viable option. If a score is available in MusicXML or a Music Encoding Initiative (MEI) file, instructors can download the encoding and load it into notation software, enabling the score to become playable with notes changing color as they sound. Having access to the digital code for a score also opens possibilities for alternative versions, such as those in modified stave notation that employ thicker lines, larger fonts, and adjustments to “symbol size and density, [and that] reduce redundant space, place symbols consistently (placing non-pitch and rhythm signs around, rather than on, the stave)[,] and [describe] in words the location of sporadic or unusual symbols.”23 Other adaptations might include the use of colored notation or color-dot systems. A caveat about color: When using colored-note or color-dot notation, red and green should be avoided, as red-green color blindness is the most common form of color-vision deficiency. At least one alternative form of notation should also be provided, and all color-coded materials should maintain sufficient contrast to ensure legibility. It is also important to consider that some students may experience sound-color or color-sound synesthesia, which could make color associations in notation more confusing than helpful. The OpenScore project likewise creates digital interactive scores that can support music reading.
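Because MusicXML is plain XML, even a short script can produce an adapted copy of a score. The sketch below—a hypothetical two-note fragment and an arbitrary blue hue, not drawn from any source discussed here—adds a color attribute to every note, which MusicXML-aware notation software renders as colored noteheads; a blue is chosen to honor the red-green caveat above.

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical MusicXML fragment: one measure with two notes.
MUSICXML = """<score-partwise version="3.1">
  <part id="P1">
    <measure number="1">
      <note><pitch><step>C</step><octave>4</octave></pitch><duration>4</duration></note>
      <note><pitch><step>E</step><octave>4</octave></pitch><duration>4</duration></note>
    </measure>
  </part>
</score-partwise>"""

def colorize_notes(xml_text: str, hex_color: str = "#1F77B4") -> str:
    """Set a color attribute on every <note> element in a MusicXML string."""
    root = ET.fromstring(xml_text)
    for note in root.iter("note"):
        # MusicXML's color attribute; avoid red/green hues for accessibility.
        note.set("color", hex_color)
    return ET.tostring(root, encoding="unicode")

colored = colorize_notes(MUSICXML)
```

The same traversal could instead change note sizes or insert lyric-style note names, since these too are ordinary MusicXML elements and attributes.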

MusicXML can also be used to generate a score in Braille Music Markup Language (BMML) for students who require or prefer that format. Another, less common assistive technology is the talking score, first developed in 1989 for blind and visually impaired musicians and now producible from MusicXML. Talking scores function much like musical audiobooks: They narrate the information found in staff notation that would otherwise be accessed visually and alternate these verbal descriptions with corresponding musical examples.24 This segmented approach to score reading can benefit students who are blind or have low vision, as well as those who consider themselves less confident music readers.

Skepticism about the possibility of teaching music history without score reading can be countered by the example of music theorist Brian Alegant, who has successfully taught music theory and analysis without reliance on written scores.25 This approach is not unprecedented: As early as 1998, Allen Cadwallader and David Gagné argued that musical analysis should not depend solely on visual interpretation of the score but should engage directly with the music’s sound.26 Students, in turn, can demonstrate their understanding through multiple modalities—visual, textual, and alternative notational forms—in alignment with the UDL principle of providing multiple means of action and expression.

The strategies I’ve presented work equally well in face-to-face, online, and hybrid classes, since they can be analog, digital, or both. This work is part of my UDL book research, so my suggestions will likely grow and change as that research progresses. I am providing a resource sheet with a bibliography, including places to obtain some of the things I’ve discussed and tutorials for making your own. When something is inaccessible to one person, it’s inaccessible, period. These strategies benefit not only students with disabilities but all students, regardless of whether they have or need accommodations. Anticipating your students’ needs in advance and providing multiple means of representation and engagement for music in its different forms is just sound pedagogy.

  1. Alexandra Carrico and Katherine Grennell, Disability and Accessibility in the Music Classroom: A Teacher’s Guide (Routledge, 2023), 21.
  2. Lise Karin Meling et al., “Decolonizing Higher Education: Rationales and Implementations from the Subject of Music History,” in MusPed:Research 6: Explorative Perspectives in Music and Education, ed. Ola Buan Øien et al. (Cappelen Damm Akademisk, 2023), 180.
  3. Edith Boroff, “A New Look at Teaching Music History,” Music Educators Journal 79, no. 4 (1992): 41 (emphasis original), https://doi.org/10.2307/3398530.
  4. Louis Kaiser Epstein et al., “Mind the Gap: Inclusive Pedagogies for Diverse Classrooms,” this Journal 9, no. 2 (2019): 120, https://jmp.amsmusicology.org/wp-content/uploads/2026/02/9.2_Epstein-et-al_306-Article-Text-2203-1-10-20190806.pdf.
  5. See Thomas J. Tobin and Kirsten T. Behling, Reach Everyone, Teach Everyone: Universal Design for Learning in Higher Education (West Virginia University Press, 2018).
  6. See James Lang, Small Teaching: Everyday Lessons from the Science of Learning, 2nd ed. (Jossey-Bass, 2021).
  7. Katie Overy et al., “Dyslexia and Music: Measuring Musical Timing Skills,” Dyslexia 9, no. 1 (2003): 18, https://doi.org/10.1002/dys.233; James L. Reifinger Jr., “Dyslexia in the Music Classroom: A Review of Literature,” Update 38, no. 1 (2019): 9–10, https://doi.org/10.1177/8755123319831736; Luiz Rogério Jorgensen Carrer, “Music and Sound in Time Processing of Children with ADHD,” Frontiers in Psychiatry 6 (2015), https://doi.org/10.3389/fpsyt.2015.00127; Elizabeth Morrow, “Music Reading for Students with Learning Disabilities,” American String Teacher 73, no. 4 (2023): 21–26, https://doi.org/10.1177/00031313231197638; Akira Midorikawa et al., “Musical Alexia for Rhythm Notation: A Discrepancy Between Pitch and Rhythm,” Neurocase 9, no. 3 (2003): 232–38, https://doi.org/10.1076/neur.9.3.232.15558.
  8. Adi Lifshitz-Ben-Basat and Leah Fostick, “Music-Related Abilities Among Readers with Dyslexia,” Annals of Dyslexia 69 (2019): 318–34, https://doi.org/10.1007/s11881-019-00185-7; Sissela Bergman Nutley et al., “Music Practice Is Associated with Development of Working Memory During Childhood and Adolescence,” Frontiers in Human Neuroscience 7 (2014), https://doi.org/10.3389/fnhum.2013.00926; Ricardo Pozenatto, “Working Memory in Musicians: A Review of Literature,” Research Perspectives in Music Education 21, no. 1 (2020): 48–49, https://flmusiced.org/FLMusicApps/MemberContent/RPME/Default.aspx.
  9. Laura Herrero and Nuria Carriedo, “The Contributions of Updating in Working Memory Sub-Processes for Sight-Reading Music Beyond Age and Practice Effects,” Frontiers in Psychology 10 (2019), https://doi.org/10.3389/fpsyg.2019.01080.
  10. Reba A. Wissner, “Teaching the First-Generation College Student in the Music History Classroom: A Student-to-Professor Perspective,” in Sound Pedagogy: Radical Care in Music, ed. Colleen Renihan et al. (University of Illinois Press, 2024), 198–210.
  11. See Eugenia Costa-Giomi and Elizabeth Chappell, “Characteristics of Band Programs in a Large Urban School District: Diversity or Inequality?” Journal of Band Research 42, no. 2 (2007): 1–18, https://www.researchgate.net/publication/292632512_Characteristics_of_band_programs_in_a_large_urban_school_district_Diversity_or_inequality; and Kenneth Elpus and Carlos R. Abril, “High School Music Ensemble Students in the United States: A Demographic Profile,” Journal of Research in Music Education 59, no. 2 (2011): 128–45, https://doi.org/10.1177/0022429411405207.
  12. Bruce W. Quaglia, “Planning for Student Variability: Universal Design for Learning in the Music Theory Classroom and Curriculum,” Music Theory Online 21, no. 1 (2015), https://mtosmt.org/issues/mto.15.21.1/mto.15.21.1.quaglia.html.
  13. Jessica Bailey, “Earned, Not Learned: How Classical Music Notation Is Not Built for Neurodivergent Students,” I Care If You Listen, June 11, 2024, https://icareifyoulisten.com/2024/06/classical-music-notation/.
  14. Nanke Flach et al., “Effects of the Design of Written Music on the Readability for Children with Dyslexia,” International Journal of Music Education 34, no. 2 (2014): 4, https://doi.org/10.1177/0255761414546245.
  15. “Dodeka Alternative Music Notation,” Dodeka, last modified 2019, https://dodekamusic.com/learn/alternative-music-notation/.
  16. “Clairnote SN,” Clairnote SN, accessed October 4, 2024, https://clairnote.org/.
  17. Steven Heath, “Alternative Music Notation,” Dodici Alternative Music Notation, accessed October 4, 2024, https://dodicimusicnotation.com/.
  18. “Home,” Hummingbird Notation, accessed October 4, 2024, https://www.hummingbirdnotation.com/.
  19. Richard Picking, “Reading Music from Screens vs. Paper,” Behaviour & Information Technology 16, no. 2 (1997): 76, https://doi.org/10.1080/014492997119914.
  20. Nicoletta Alossa and Lorys Castelli, “Amusia and Musical Functioning,” European Neurology 61, no. 5 (2009): 269–72, https://doi.org/10.1159/000206851.
  21. Sean Zdenek, Reading Sounds: Closed-Captioned Media and Popular Culture (University of Chicago Press, 2015), 237.
  22. Hannah Chan-Hartley, Symphonie Graphique, accessed November 4, 2025, https://www.symphonygraphique.com/shop.
  23. Sally-Anne Zimmerman, “Modified Stave Notation—Encouraging Musical Independence Through Accessible, Easily Produced Scores,” International Congress Series 1282 (2005): 1115, https://doi.org/10.1016/j.ics.2005.05.046; Kimberly McCord et al., Accessing Music: Enhancing Student Learning in the General Music Classroom Using UDL (Alfred Publishing, 2014), 8–9.
  24. Talking Scores, accessed October 4, 2024, https://www.talkingscores.org/.
  25. See Brian Alegant, “Listen Up!: Thoughts on iPods, Sonata Form, and Analysis without Score,” Journal of Music Theory Pedagogy 21, no. 7 (2007): 137–56, https://doi.org/10.71156/2994-7073.1152.
  26. Allen Cadwallader and David Gagné, Analysis of Tonal Music: A Schenkerian Approach, 1st ed. (Oxford University Press, 1998), vii.