\section{Introduction}
The primary medium for music distribution is rapidly changing from physical
media to digital media. The \gls{IFPI} stated that about $43\%$ of music
revenue arises from digital distribution, another $39\%$ from physical sales
and the remaining $16\%$ from performance and synchronisation revenues.
Digital formats overtook physical formats somewhere in 2015. Moreover, for the
first time in twenty years the music industry has seen significant growth
again\footnote{\url{http://www.ifpi.org/facts-and-stats.php}}.

There has always been an interest in aligning lyrics to music, for example for
karaoke. As early as the late 1980s, karaoke machines were available to
consumers. While the lyrics for a track are almost always available, an
alignment usually is not, and creating such an alignment involves manual
labour.

A lot of this music distribution goes via non-official channels such as
YouTube\footnote{\url{https://youtube.com}}, on which fans of the performers
often accompany the music with synchronised lyrics. This means that there is
an enormous treasure of lyrics-annotated music available, but it is not within
our reach since the subtitles are almost always hardcoded into the video
stream and thus not directly usable as data. Because of this interest it is
very useful to devise automatic techniques for segmenting the instrumental and
vocal parts of a song, applying forced alignment or even performing lyrics
recognition on the audio file.

These techniques are heavily researched and working systems have been created
for segmenting audio and even for forced alignment (e.g.\ LyricSynchronizer%
\cite{fujihara_lyricsynchronizer:_2011}). However, these techniques are
designed to detect a clean singing voice and have not been tested on so-called
\emph{extended vocal techniques} such as grunting or growling. Growling is
heavily used in extreme metal genres such as \gls{dm}, but it must be noted
that grunting is not a technique used only in extreme metal styles. Similar or
equal techniques have been used in \emph{Beijing opera} and Japanese
\emph{Noh}, but also in more western styles such as the jazz singing of Louis
Armstrong\cite{sakakibara_growl_2004}. It might even be traced back to Viking
times. For example, an Arab merchant visiting a village in Denmark wrote in
the tenth century\cite{friis_vikings_2004}:

\begin{displayquote}
	Never before have I heard uglier songs than those of the Vikings in
	Slesvig. The growling sound coming from their throats reminds me of dogs
	howling, only more untamed.
\end{displayquote}

%Literature overview / related work
\section{Related work}
Applying speech-related processing and classification techniques to music
already started in the late 1990s. Saunders et al.\ devised a technique to
classify audio into the categories \emph{Music} and \emph{Speech}. They found
that music has different properties than speech: music has more bandwidth,
tonality and regularity. Multivariate Gaussian classifiers were used to
discriminate between the categories with an average performance of
$90\%$\cite{saunders_real-time_1996}.

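To illustrate this type of classifier, the sketch below fits one multivariate
Gaussian per class and labels a segment by the higher log-likelihood. The
feature vectors are assumed to be simple per-segment statistics; they are not
Saunders et al.'s exact feature set.

\begin{verbatim}
# Minimal sketch of a multivariate Gaussian speech/music classifier.
# X_speech and X_music are (segments x features) training matrices of
# illustrative per-segment statistics, not the original feature set.
import numpy as np
from scipy.stats import multivariate_normal

def fit_gaussian(X):
    """Fit a single multivariate Gaussian to a feature matrix."""
    return multivariate_normal(mean=X.mean(axis=0),
                               cov=np.cov(X, rowvar=False))

def train(X_speech, X_music):
    return fit_gaussian(X_speech), fit_gaussian(X_music)

def classify(x, g_speech, g_music):
    """Label a feature vector by the higher class log-likelihood."""
    if g_speech.logpdf(x) > g_music.logpdf(x):
        return 'speech'
    return 'music'
\end{verbatim}
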
Williams and Ellis were inspired by the aforementioned research and tried to
separate the singing segments from the instrumental
segments\cite{williams_speech/music_1999}. This was later verified by
Berenzweig and Ellis\cite{berenzweig_locating_2001}. The latter became the de
facto literature on singing voice detection. Both show that features derived
from \gls{PPF}, such as energy and distribution, are highly effective in
separating speech from non-speech signals such as music. The data used was
already segmented.

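To give an impression of such posterior-based features, the sketch below
computes two example statistics over a phone posteriorgram. The posteriorgram
is assumed to come from some pre-trained phone classifier, and the statistics
are only indicative of the kind of features used in the cited work.

\begin{verbatim}
# Example statistics over a phone posteriorgram (frames x phones), assumed
# to come from a pre-trained phone classifier.  These are indicative of
# the kind of posterior-based features used, not the exact definitions.
import numpy as np

def entropy_per_frame(posteriors, eps=1e-10):
    """Entropy of the posterior distribution per frame; speech tends to
    give peaked (low-entropy) posteriors, music flatter ones."""
    p = np.clip(posteriors, eps, 1.0)
    return -np.sum(p * np.log(p), axis=1)

def dynamism(posteriors):
    """Mean squared frame-to-frame change of the posteriors."""
    return np.mean(np.diff(posteriors, axis=0) ** 2)
\end{verbatim}
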
Later, Berenzweig showed singing voice segments to be more useful for artist
classification and used an \gls{ANN} (\gls{MLP}) with \gls{PLP} coefficients
to detect a singing voice\cite{berenzweig_using_2002}. Nwe et al.\ showed that
there is not much difference in accuracy when using different features founded
in speech processing. They tested several features and found that the
accuracies differ by less than a few percent. Moreover, they found that others
have tried to tackle the problem using a myriad of different approaches, such
as using \gls{ZCR}, \gls{MFCC} and \gls{LPCC} as features and \glspl{HMM} or
\glspl{GMM} as classifiers\cite{nwe_singing_2004}.

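A minimal sketch of one such combination, \gls{MFCC} features with one
\gls{GMM} per class, is shown below. Library choices and parameter values are
illustrative and not taken from the cited papers.

\begin{verbatim}
# Sketch of a common combination from this literature: MFCC features with
# one GMM per class (vocal / non-vocal).  Library choices and parameters
# are illustrative, not taken from the cited papers.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_frames(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train(vocal_files, instrumental_files, n_components=8):
    gmm_voc = GaussianMixture(n_components).fit(
        np.vstack([mfcc_frames(f) for f in vocal_files]))
    gmm_ins = GaussianMixture(n_components).fit(
        np.vstack([mfcc_frames(f) for f in instrumental_files]))
    return gmm_voc, gmm_ins

def is_vocal(frames, gmm_voc, gmm_ins):
    """Frame-wise decision: True where the vocal model wins."""
    return (gmm_voc.score_samples(frames)
            > gmm_ins.score_samples(frames))
\end{verbatim}
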
Fujihara et al.\ took the idea to the next level by attempting to do \gls{FA}
on music. Their approach consists of three steps: first, the accompaniment
levels are reduced; secondly, the vocal segments are separated from the
non-vocal segments using a simple two-state \gls{HMM}; finally, \gls{Viterbi}
alignment is applied to the segregated signals and the lyrics. The system
showed accuracy levels of $90\%$ on Japanese music%
\cite{fujihara_automatic_2006}. Later they improved upon this%
\cite{fujihara_three_2008} and even made a ready-to-use karaoke application
that can do this online\cite{fujihara_lyricsynchronizer:_2011}.

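To make the second step concrete, the sketch below decodes the most likely
vocal/non-vocal state sequence from per-frame log-likelihoods (for instance
those produced by the GMMs sketched above) with the Viterbi algorithm. The
transition probability is a placeholder value; this is a generic illustration,
not Fujihara et al.'s implementation.

\begin{verbatim}
# Generic two-state (non-vocal = 0, vocal = 1) Viterbi decoder.  `loglik`
# is a (frames x 2) matrix of per-frame state log-likelihoods; the
# transition probability is a placeholder and would normally be trained.
import numpy as np

def viterbi_two_state(loglik, p_stay=0.99):
    log_trans = np.log([[p_stay, 1 - p_stay],
                        [1 - p_stay, p_stay]])
    n, k = loglik.shape
    delta = np.empty((n, k))
    back = np.zeros((n, k), dtype=int)
    delta[0] = np.log(0.5) + loglik[0]          # uniform initial state
    for t in range(1, n):
        scores = delta[t - 1][:, None] + log_trans  # (from, to)
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + loglik[t]
    path = np.empty(n, dtype=int)               # backtrace best path
    path[-1] = delta[-1].argmax()
    for t in range(n - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path
\end{verbatim}
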
Singing voice detection can also be seen as a binary genre recognition
problem, so the techniques used in that field might be of use. Genre
recognition has a long history that can be found in the survey by
Sturm\cite{sturm_survey_2012}. It must be noted that of all the $485$ papers
cited by Sturm, only one, a master's thesis, applies genre recognition to
heavy metal genres\cite{tsatsishvili_automatic_2011}.

Singing voice detection has been tried on less conventional styles in the
past. Dzhambazov et al.\ proposed to align long syllables in Beijing opera to
the audio\cite{dzhambazov_automatic_2016}. Beijing opera sometimes contains
growling-like vocals. Dzhambazov also tried aligning lyrics to audio in
classical Turkish music\cite{dzhambazov_automatic_2014}.

\section{Research question}
It is debatable whether the aforementioned techniques work on growling,
because the spectral properties of a growling voice are different from the
spectral properties of a clean singing voice. It has been found that growling
voices have less prominent peaks in the frequency representation and are
closer to noise than clean singing\cite{kato_acoustic_2013}; the sketch
following the research question shows one informal way to inspect this. This
leads us to the research question:

\begin{center}\em%
	Are standard \gls{ANN} based techniques for singing voice detection
	suitable for non-standard musical genres like \gls{dm} and \gls{dom}?
\end{center}
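
As an informal check of this noisiness claim, one could compare the spectral
flatness (close to $1$ for white noise, close to $0$ for a pure tone) of a
growled and a clean vocal excerpt, as sketched below with placeholder file
names.

\begin{verbatim}
# Informal check of the "closer to noise" observation: compare the mean
# spectral flatness (close to 1 for white noise, close to 0 for a pure
# tone) of a growled and a clean vocal excerpt.  File names are
# placeholders.
import librosa

def mean_flatness(path):
    y, sr = librosa.load(path, sr=None)
    return float(librosa.feature.spectral_flatness(y=y).mean())

print('growl:', mean_flatness('growl_excerpt.wav'))
print('clean:', mean_flatness('clean_excerpt.wav'))
\end{verbatim}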