Harmonic, Melodic, Structural and Emotional Features Ontology

This research aims to apply Semantic Technologies – in particular the Semantic Web – to musicological applications.
Before doing so, this work extensively analyses the limits of musicological research, focusing on the interaction between different musicological approaches. Particular focus is placed on the quantitative approaches of systematic musicology. Special attention is then paid to the dichotomy underlying musicological research between a physical component, typically investigated by means of scientific methods, and a component related to cultural and perceptual aspects, typically studied by the humanities. The first research question is therefore:

RQ1) How can semantic technologies help in linking different aspects of musicology?

The representational aspect of musical material is then examined, i.e. how music is experienced from both a perceptual and an analytical perspective. Hence, a detailed analysis of the various types of computational representations (signal, symbolic, vector-based and semantic) is provided. This analysis lays the groundwork for the second research question:

RQ2) How can music be represented for musicological analysis so as to capture as many aspects as possible?

To address these two major questions, an ontological approach is proposed. A set of musical features that can serve this purpose is then discussed, divided into four main categories: melodic, harmonic, structural and emotional. Different Music Information Retrieval approaches for extracting such features are examined and implemented. Finally, the HaMSE ontology is proposed as a model for representing musical content in different forms (both symbolic and signal-based), linking them to the previously defined set of features. The ontology also describes the cultural characteristics of the composition, e.g. the author and the era in which the piece was composed.
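The four feature categories can be sketched as a simple grouping; the concrete feature names below are hypothetical examples for illustration, not terms taken from the ontology itself.

```python
# Illustrative grouping of the four HaMSE feature categories.
# The individual feature names are made-up examples, not ontology terms.
FEATURE_CATEGORIES = {
    "melodic":    ["interval sequence", "melodic contour"],
    "harmonic":   ["chord labels", "key"],
    "structural": ["section boundaries", "repetition patterns"],
    "emotional":  ["valence", "arousal"],
}

def category_of(feature):
    """Reverse lookup: return the category a given feature belongs to."""
    for category, features in FEATURE_CATEGORIES.items():
        if feature in features:
            return category
    return None

print(category_of("valence"))
```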

Describing the Musical Work

The ontology aims to describe the characteristics of the music track, i.e. all the information defined as "metadata" in the previous sections. To do this, the Music Ontology has been used. This ontology was chosen mainly because of its wide adoption in recent years and its broad interoperability with other ontologies (e.g. the TimeLine Ontology, the Music Score Ontology).
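As a minimal sketch (not the actual HaMSE data), track metadata in Music Ontology terms can be written as plain (subject, predicate, object) tuples rather than with a real RDF library; the ex: URIs and the use of dc:creator here are illustrative assumptions.

```python
# Metadata triples in Music Ontology terms, modeled as plain tuples.
# ex:work1 / ex:composer1 / ex:track1 are hypothetical resource names.
triples = [
    ("ex:work1",     "rdf:type",   "mo:MusicalWork"),
    ("ex:work1",     "dc:creator", "ex:composer1"),
    ("ex:composer1", "rdf:type",   "mo:MusicArtist"),
    ("ex:track1",    "rdf:type",   "mo:Track"),
]

def objects(subject, predicate):
    """All objects matching a (subject, predicate, ?) pattern."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects("ex:work1", "dc:creator"))
```

In a full implementation these triples would live in an RDF store and be queried with SPARQL; the pattern-matching helper above mirrors a single basic graph pattern.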

Describing Symbolic Music Events

To represent the symbolic representations of a composition, the HaMSE ontology has been implemented. The class hamse:SymbolicEvent describes anything that can be a nuclear element of the symbolic representation, interpreted as either a note or a rest. The symbolic event, like all the other classes listed in this section, is intended as a subclass of hamse:MusicologicalFeature. hamse:SymbolicEvent is also a super-class of all the classes that characterise the note (e.g. classes expressing dynamics, pitch, etc.).
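The hierarchy just described can be sketched as rdfs:subClassOf edges stored in a dictionary; hamse:Note and hamse:Rest are hypothetical class names standing in for the note/rest specialisations, not identifiers confirmed by the ontology documentation.

```python
# rdfs:subClassOf edges of the (sketched) HaMSE class hierarchy.
# hamse:Note and hamse:Rest are illustrative, assumed class names.
subclass_of = {
    "hamse:Note":          "hamse:SymbolicEvent",
    "hamse:Rest":          "hamse:SymbolicEvent",
    "hamse:SymbolicEvent": "hamse:MusicologicalFeature",
}

def is_subclass(cls, ancestor):
    """Reflexive-transitive check along rdfs:subClassOf edges."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = subclass_of.get(cls)
    return False

print(is_subclass("hamse:Note", "hamse:MusicologicalFeature"))
```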

Representing Extracted Features

The representation of the extracted features connects them both with the symbolic representation from which they were extracted and with the signal representation, providing a temporal reference for their location. For features extracted from the audio track, the converse holds: they are represented in connection with their temporal location, and a reference to their approximate position within the symbolic representation is provided.
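The double anchoring described above can be sketched as a record carrying both a position in the symbolic representation and a time interval on the audio signal; the field names are illustrative, not the ontology's property names.

```python
from dataclasses import dataclass

# Sketch of a feature anchored to both representations.
# Field names are assumptions for illustration only.
@dataclass
class ExtractedFeature:
    label: str           # e.g. a chord label
    score_position: int  # index of the symbolic event it was extracted from
    onset_s: float       # approximate onset on the audio timeline (seconds)
    offset_s: float      # approximate offset on the audio timeline (seconds)

feature = ExtractedFeature("C major", score_position=4,
                           onset_s=12.5, offset_s=14.0)
print(feature.label, feature.score_position)
```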

Score-audio-features Alignment

To align the symbolic representation with the audio track, the TimeLine Ontology has been used. This ontology allows two different timelines to be defined and aligned. In this case, an abstract timeline (tl:AbstractTimeLine) is aligned with a discrete timeline (tl:DiscreteTimeLine) by means of tl:TimeLineMap.
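What a tl:TimeLineMap expresses can be sketched as a mapping between positions on an abstract score timeline (in beats) and a discrete audio timeline (in seconds); the anchor pairs below are made-up alignment points, and piecewise-linear interpolation is one simple way to realise the map, not necessarily the method used by HaMSE.

```python
from bisect import bisect_right

# Hypothetical alignment anchors: (position in beats, position in seconds).
anchors = [(0.0, 0.0), (4.0, 2.1), (8.0, 4.0)]

def beat_to_seconds(beat):
    """Map a score-timeline position to the audio timeline by
    piecewise-linear interpolation between alignment anchors."""
    beats = [b for b, _ in anchors]
    i = max(1, min(bisect_right(beats, beat), len(anchors) - 1))
    (b0, s0), (b1, s1) = anchors[i - 1], anchors[i]
    return s0 + (beat - b0) * (s1 - s0) / (b1 - b0)

print(beat_to_seconds(2.0))
```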

Ontology Documentation

The ontology documentation was created using Widoco.

Ontology Visualisation

The ontology visualisation was created using WebVOWL.