Music, as a field of study, concerns the systematic organization of sound and silence across temporal, structural, and cultural domains. The discipline investigates music's creation, performance, perception, historical evolution, and societal roles, employing analytical, theoretical, and empirical methods to understand its forms, meanings, and functions.
Ontological type
Cultural and Social Functions
Historical Development
Media Forms and Platforms
Psychoacoustic Perceptual Foundations (1920–1963)
Formal Cognitive Modeling (1964–2006)
Neurocomputational AI Music (2007–2023)
Psychoacoustic Perceptual Foundations era
Leonard B. Meyer [1] was a central figure in psychoacoustic perceptual foundations during this era, affiliated with universities and research centers in the United States and Europe. His key contributions centered on meaning and emotional interpretation in music, as developed in Emotion and Meaning in Music (1957) [3], Meaning in Music and Information Theory [4], and Some Remarks on Value and Greatness in Music (1959) [5], which helped ground perceptual research in semantic and evaluative dimensions. Charles Seeger [2] was another prominent figure of the era, active across American universities and cultural institutions. His Prescriptive and Descriptive Music-Writing (1958) [6] delineated how notational conventions shape listening and pedagogy, offering a framework that supported measurement-driven pedagogy and cross-cultural perceptual studies in music.
Formal Cognitive Modeling era
Fred Lerdahl [1], affiliated with Columbia University [3] and the KTH Royal Institute of Technology [4] in this era, helped ground formal theories of music cognition. His influential A Generative Theory of Tonal Music [7] advanced generative-grammar models of tonal harmony, meter, and melody, helping to formalize hierarchical expectations and guide empirical tests in cognitive research. George Tzanetakis [2], associated with Princeton University [5] and Carnegie Mellon University [6] in this era, contributed computational approaches that map musical structure to data-driven analysis. His Musical genre classification of audio signals (2002) [8] exemplified early machine-learning pipelines for genre labeling, enabling scalable empirical validation of models of perception and cognition.
Neurocomputational AI Music era
Daniel P. W. Ellis [1] has been associated with the University of California, Berkeley [3] and Columbia University [4] during this era. He contributed the librosa library [7], a Python toolkit that enables robust, reproducible audio analysis for scalable MIR research. David Huron [2] has been affiliated with the University of Pittsburgh [5] and McGill University [6] during this era. His Sweet Anticipation: Music and the Psychology of Expectation (2007) [8] articulated how musical expectation shapes neural processing, providing foundations for predictive-coding models of music.