

Welcome!

Buzarovski Archive (BuzAr) is a digital collection of video, audio, photos, books, papers, scores and other artifacts related to Balkan cultures and traditions. The collection is based on Dimitrije Buzarovski's musical scores, performances, video and audio recordings, digitised cultural heritage, and musicological and ethnomusicological works.

Established: October 15, 2012

Starting from December 1, 2014, BuzAr Collections will be available online in streaming format. Selected items from the collections will be posted periodically.

Copyright Notice:

Artifacts on BuzAr are for non-commercial, educational, or research purposes. Users of these artifacts agree to cite Buzarovski Archive.
BuzAr Journal
Volume 1, 2014
Dimitrije Buzarovski
Musicology and Ethnomusicology in the Digital Era*

Last year, in my presentation at The Struga Music Autumn Conference, I was concerned with the changes in music culture, and particularly in musical art, in the digital era. This paper deals with the influence of computer technology on musicology and ethnomusicology, thus corresponding to the main concern of this session: contemporary trends in musicology and ethnomusicology. The presentation is mainly a result of everyday practical experience with digital technology and the replacement of human activities by the automatic actions of the new machines. Last year I spoke about technology as art; now I will speak about the theoretical problem of technology as a science.

 

There are three major issues concerning the relationship between computer technology and musicology/ethnomusicology:

theoretical (humans vs. machines)

methodological (speculative vs. empirical methodology)

stylistic (language, syntax, structure).

 

We have defined this relation by choosing two theoretical disciplines (musicology and ethnomusicology), although the problem relates equally to all scientific disciplines. In our case, we are observing the application of a universal situation in the specific area of musicology and ethnomusicology.

 

Theoretical issues

 

We have already pointed out that humans are increasingly being replaced by computers in scientific and research activities. Thus, the central theoretical question raised by the new environment is:

 

What is the portion of human participation, as opposed to that of the technical tools, in the final scientific results and conclusions?

 

Consequently, we can ask:

 

What role will humans play, compared with machines, in future science?

 

These questions place the problem in a meta-theoretical area.

In order to answer them, we will distinguish three different levels of the music phenomenon:

acoustic level (real sound, observed independently of our hearing)

perceptual level (our registration of sound and its perceptual organisation in our consciousness)

semantic level (our experience and the meaning of the organised aural sensations).

 

This division was developed in order to determine the participation of the digital technology in the different levels of the realisation of the phenomenon.

 

The acoustic level of the phenomenon can be registered exclusively by technical means for the recording and processing of sound. These means undergo constant improvement, and any technical novelty is immediately applied in sound processing. Almost all hardware and software for digital signal processing includes some tool for sound analysis and a presentation of its acoustical features. Consequently, we can conclude that human participation at this level is minimal, limited to the selection of the input signal and the reading of the results.

 

The perceptual level of the phenomenon is elaborated by different theoretical disciplines (music theory, harmony, polyphony, musical forms etc.), including applied disciplines such as psychology and sociology. The graphic symbols of contemporary Western notation (notes, rests, dynamics, articulations, harmonic symbols etc.) are completely adjusted to our perceptual characteristics. They relate to our perceptual coding and correspond only approximately to the real acoustic state.

For instance, the group of sounds C1, E1, G1 designates, in our perceptual organisation of the acoustic stimulus, a triad in a certain tonality (conditionally, the tonic in C major). In an acoustic sense, it is only a reduced form of a complex ratio of frequencies and amplitudes. Following the same acoustic approach, this group of sounds builds no relation to any other group of sounds. The relation dominant-tonic is purely a function of our perceptual organisation and part of the semantic level, which we will discuss later.
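This acoustic reading of the triad can be made concrete with a short sketch. The assumptions are not from the paper: A4 = 440 Hz, twelve-tone equal temperament, and the register C4, E4, G4, chosen purely for illustration. Acoustically, the chord is just three numbers whose ratios only approximate the simple just-intonation ratios 4:5:6:

```python
# A sketch of the triad viewed acoustically rather than perceptually.
# Assumptions (illustrative, not from the paper): A4 = 440 Hz,
# twelve-tone equal temperament, register C4-E4-G4.

A4 = 440.0

def equal_tempered_freq(semitones_from_a4):
    """Frequency of the pitch lying a given number of semitones from A4."""
    return A4 * 2 ** (semitones_from_a4 / 12)

# C4, E4 and G4 lie 9, 5 and 2 semitones below A4.
c4, e4, g4 = (equal_tempered_freq(n) for n in (-9, -5, -2))

# Acoustically, the "tonic triad" is just three frequencies...
print(f"C4 = {c4:.2f} Hz, E4 = {e4:.2f} Hz, G4 = {g4:.2f} Hz")

# ...whose ratios only approximate the just-intonation ratios 4:5:6.
print(f"E4/C4 = {e4 / c4:.4f}  (just intonation: {5 / 4})")
print(f"G4/C4 = {g4 / c4:.4f}  (just intonation: {3 / 2})")
```

Nothing in these numbers singles the group out as a "tonic"; that label, as the text argues, belongs to the perceptual and semantic levels.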

As a result, human participation becomes crucial in the determination of these characteristics of the phenomenon. That is why, at this level, we apply a methodology which analyses the acoustic material through our perceptual organisation and understanding of the phenomenon. Thus, the software used for both the creation and analysis of this level (for example, MIDI/score processors) is built upon perceptual categories (intervals, duration and time signature, harmony based on the superposition of thirds, forte/piano dynamics, legato/staccato articulation etc.). The first attempts to launch expert systems (software) for the analysis of musical forms in the mid-1980s followed the same principles. We can predict the complete automatisation of analysis at this level in the future.

 

The most intangible layer for machine analysis is the semantic one. This is where the specific connotation of the acoustic and perceptual material is created. Semantic analysis at this level requires a very complex approach due to the large number of parameters. That is why the conclusions mainly rely on, and promote, the unique features of human intuition, again as a part of human intelligence. The assumption that we could never build analytical algorithms, i.e. tools, for this level has only theoretical meaning.

 

In fact, the major technological problem at this level is quantification. Existing computer technology is based upon arithmetic operations: in order to make the surveyed parameters usable by machines, they must be quantified. That is why the acoustic level was the easiest for machine analysis, as it contains only numerical parameters. All three categories (frequency, amplitude and time) are quantified at the moment of digitisation, and are thus ready for machine analysis.
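As a minimal illustration of this point, the sketch below "digitises" a pure tone: time is quantised by the sample rate, amplitude by the bit depth, and frequency survives implicitly in the resulting integers. The figures (a 440 Hz tone, 8000 Hz rate, 16-bit depth) are illustrative assumptions, not drawn from the paper:

```python
# A sketch of digitisation quantifying all three acoustic categories:
# time (sample rate), amplitude (bit depth) and, implicitly, frequency,
# which survives in the numeric samples. All figures are illustrative.

import math

SAMPLE_RATE = 8000            # time axis cut into 1/8000 s steps
BIT_DEPTH = 16                # amplitude cut into 2**16 discrete levels
TONE_FREQ = 440.0             # the pure tone being "recorded"

MAX_LEVEL = 2 ** (BIT_DEPTH - 1) - 1   # 32767 for 16-bit audio

def digitise(duration_s):
    """Return integer samples of the tone: pure numbers, ready for machine analysis."""
    n = int(SAMPLE_RATE * duration_s)
    return [
        round(MAX_LEVEL * math.sin(2 * math.pi * TONE_FREQ * i / SAMPLE_RATE))
        for i in range(n)
    ]

samples = digitise(0.01)      # 10 ms of sound becomes 80 integers
print(len(samples), min(samples), max(samples))
```

Once the sound exists as this list of integers, every further acoustic analysis is an arithmetic operation, which is exactly why the acoustic level yielded to machines first.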

 

To summarise:

the analysis at the acoustic level is completely machine-based; there is a tendency towards full computer analysis at the perceptual level; the semantic level is still inaccessible to machine analysis

there is a constant endeavour to replace human with automatised machine activities

humans determine the aspects of the phenomenon which will be researched, the technology for analysis and processing of the data, the manner of digitisation or quantification, and the retrieval and reading of the results of machine analysis

we will need to redefine the relation between humans and tools in scientific research

we will need to redefine the rights, especially when expert systems are in use

we will need to redefine the concept of scientific discovery.

These conclusions relate to the entire area of scientific research, and they are equally applicable to musicology and ethnomusicology.

 

Methodological issues

 

We could approach the problem of general scientific methodology at two levels:

theoretical i.e. meta-theoretical and

practical (the individual manifestations of the phenomenon).

 

This corresponds to the universally accepted principles, in which for the first level we use the speculative, rationalistic, or deductive method (as is, in fact, our approach in this paper), and for the second level we use the empirical approach, or induction. This division was established at the beginning of the XVII century by the rationalist and empiricist philosophical schools. Our main concern is to determine the extent of the changes that have happened in the meantime (if any), especially with the application of the new computer technology.

 

Meta-theoretical problems have relied, and no doubt will continue to rely, on rationalistic methodology, i.e. the rational capacity of the creative subject: humans. Here, humans still have a comparative advantage over machines.

 

As far as the individual manifestations of the phenomenon are concerned, the cyber era has brought substantial changes. Among the most important and evident is the storage of the research material, i.e. the data about the phenomenon, in multimedia databases. These represent the most economical coverage of a large, in fact enormous, quantity of different types of parameters (audio, text, numeric data, graphics, still images and video).

Databases per se are not a novelty; they were a logical result of the collection processes of the empirical approach. The difference between computer databases and their predecessors is the possibility of extremely fast search and retrieval of different categories of material. The creation of computer databases automatically enables very complicated statistical analyses, something unthinkable in the past, as they are far beyond the capacity of human calculation. The use of statistics, which is now a prerequisite for any scientific work with serious ambitions and a stable methodology, was totally unknown to music disciplines until recently. Especially important was the progress brought by statistical methodology in the analysis of nominal data (facing the previously mentioned problem of quantification).
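A toy sketch of this workflow, with invented records and field names: nominal (categorical) data are stored in clearly defined fields, retrieved by category, and quantified as a contingency table on which statistical tests could then operate:

```python
# A toy version of the database-plus-statistics workflow described
# above. Records and field names are invented for illustration.

from collections import Counter

# A "multimedia database" reduced to its searchable nominal fields.
records = [
    {"region": "east", "metre": "7/8"},
    {"region": "east", "metre": "7/8"},
    {"region": "east", "metre": "2/4"},
    {"region": "west", "metre": "2/4"},
    {"region": "west", "metre": "2/4"},
    {"region": "west", "metre": "7/8"},
]

# Fast search and retrieval of one category of material...
east = [r for r in records if r["region"] == "east"]

# ...and quantification of the nominal data as region-by-metre counts,
# the kind of contingency table a chi-square test would operate on.
table = Counter((r["region"], r["metre"]) for r in records)
print(sorted(table.items()))
```

The counting step is the quantification the text refers to: nominal categories such as region or metre become numbers only once they are tallied into frequencies.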

 

In general, there is an evident challenge to the methodology applied in musicology and ethnomusicology: it has to adjust to the advantages of the new technology. Musicological research with a descriptive approach, moving slowly from parameter to parameter, is part of history. In order to enable statistical analysis, every musicologist is obliged to organise the research material in a database with clearly defined fields and consequent quantification. Statistical analysis is no guarantee of scientific discovery, but it is definitely a prerequisite for any scientific activity.

 

 

Stylistic issues

 

The participants in the cyber era predominantly communicate in English. Although there is controversy concerning its pros and cons, the use of a common language has facilitated global communication, especially in computer networks such as the Internet. The arguments against a common language are mainly based on the loss of the subtle nuances of local languages.

 

The use of English has a direct influence on the stylistic formation of theoretical papers. This influence can be observed at three levels:

lexical,

syntactical and

structural.

 

At the lexical level we can observe a large number of borrowed words and neologisms. Even where equivalents exist in the local languages, both they and the neologisms are used less than the English borrowings, especially in communication among computer specialists.

 

The syntactic level undergoes morpho-syntactical changes (due to the use of compound words). The constant use of English in Internet menus, browsers, search engines, manuals etc. exerts permanent pressure on the syntax of other languages. The need for economisation imposes a simplified syntactic structure, a reduction in the number of words, and a loss of the surface realisation of predicates. Sentences may consist only of nouns, with very few indicators pointing to their relations. The use of video presentations with many graphical elements, tables, drawings, and still and moving images has a direct influence on the syntactic level of theoretical papers. The condensed presentation of thoughts in thesis form is another example of the loss of verbs.

 

Stylistic changes are evident in the general structure of papers, which is becoming more and more standardised across all theoretical disciplines. The need for economisation results in increased use of graphs, tables, and the numbering of chapters and subchapters. Footnotes are omitted in accordance with the principle: if something is important, it should be a regular part of the text, not part of the additions.

The number of theoretical works in all scientific areas is increasing constantly. In order to keep them accessible for reading (having in mind human capacity), the sole solution is to reduce their length. The quantity of theoretical works offered to a scientist of the XIX century cannot be compared with the quantity emerging at the beginning of the XXI century. The XIX-century scientist could still read most of the theoretical works during an active life; the XXI-century scientist will be able to read only a part of the theoretical literature that has emerged in the meantime. Thus, stylistic condensation, i.e. economisation, is an unavoidable process for improving the accessibility of new papers.

 

Among the other aspects needing revision is quotation from the available literature, i.e. the bibliography. Theoretical papers written in the 1960s and 1970s contained large lists of literature from the area, sometimes longer than the actual text. They served as small databases of the published literature in the area and consequently played a very special role in assisting scientists living far from the big scientific and cultural centres.

The situation changed fundamentally with the use of Internet search engines and the increasing number of specialised institutions maintaining bibliographical databases (RILM, for instance). A simple search with several keywords in one of these databases can produce a list of works whose length, in number of pages, is far beyond the reading capacity of an average human life. We can assume that the future use of bibliographical quotations will be restricted to verifying the author's knowledge of the adequate literature in the area.

 

We cannot consider the economisation and simplification of theoretical papers, particularly in the stylistic sense, as a pauperisation of the scientific and human spirit. The central issue for any scientific paper is whether it has something to say. Complicated stylistic, often metaphorical, expression offered a possibility for hiding theoretical and methodological deficiencies.

 

Our paper was based upon the distinction between musicology and ethnomusicology. We deeply believe that the unification of methodology will make this distinction obsolete. Music folklore is only one of the genres of music culture. The concept that musicology deals with artistic music and ethnomusicology with folklore could result in the formation of separate disciplines for the other genres, such as jazz, pop, rock etc. We do not believe that each genre should possess a parallel discipline and methodology. We have already pointed out that all music is equal at the acoustic and perceptual levels. Differences might appear at the semantic level, mainly because of socio-cultural influence.

 

In that sense, musicology is the general discipline covering all manifestations of the music phenomenon. This does not contradict the formation of specialised branches for different genres. In accordance with the specific features of a genre, these branches might develop additional methodological tools.




* First published in 2002: Contemporary Trends in Musicology and Ethnomusicology, The Struga Music Autumn, ed. D. Buzarovski. Skopje: IRAM, pp. 2–12; also available at http://buzar.mk/IRAM/Conferences/IIConf.html (2002)

 
