Article: Harmonic Entropy
Harmonic entropy is the simplest possible model of consonance. It asks the
question, "how confused is my brain when it hears an interval?", and uses
only one free parameter to answer it.
Our brain determines what pitch we'll hear when we listen to a sound. It
does so by trying to match the frequencies in the sound's spectrum (timbre)
with a harmonic series. The pitch we hear is high or low depending on
whether the frequency of the fundamental of the best-fit harmonic series is
high or low. The pitch corresponding to the fundamental itself need not be
physically present in the sound. Sometimes, the meaning of "best-fit" will
not be clear and we'll hear more than one pitch. This happens when several
tones are playing together, or when the spectrum of the instrument is
highly inharmonic.
Entropy is a mathematical measure of disorder, or confusion. For a dyad,
consisting of two tones which are sine waves or have harmonic spectra, one
can immediately understand the behavior of the harmonic entropy function.
The brain's attempt to fit the stimulus to a harmonic series is quite
unambiguous when the ratio between the frequencies is a simple one, such as
2:1 or 3:2. For more complex ratios, or irrational intervals far enough from
any simple ratio, the limited resolution with which the brain receives
frequency information makes it harder for the brain to be sure how to fit
the stimulus into a harmonic series. This resolution is
parameterized by the variable s. A computer program is used to calculate
the entropy for every possible interval (in, say, 1¢ increments). The set
of potential "fitting" ratios is chosen to be large enough (by going high
enough in the harmonic series) so that further enlargements of the set
cease to affect the basic shape of the harmonic entropy curve.
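To make the procedure concrete, here is a minimal Python sketch of such a
calculation. It is not the program that produced the graphs; the seed set,
the Gaussian model of the listening resolution s, and the 1/sqrt(n*d)
weighting (all explained below) are assumptions drawn from this article's
own description.

    # Minimal sketch of a harmonic entropy calculation, following the description
    # in this article.  Assumed ingredients: seed ratios n/d with n*d < 10000,
    # a Gaussian of width s modelling the hearing resolution, and 1/sqrt(n*d)
    # weighting of each candidate ratio.
    from math import exp, gcd, log, log2, sqrt

    SEED_LIMIT = 10000                # the 'seed limit' discussed below
    S_CENTS = 1200 * log2(1.012)      # s = 1.2% expressed in cents (~20.7 c; assumed conversion)

    def seed_ratios(limit=SEED_LIMIT):
        """Ratios n/d >= 1 in lowest terms with n*d < limit."""
        return [(n, d)
                for d in range(1, int(sqrt(limit)) + 1)
                for n in range(d, limit // d + 1)
                if n * d < limit and gcd(n, d) == 1]

    def harmonic_entropy(cents, ratios, s=S_CENTS):
        """Entropy, in nats, of the 'which ratio is this?' distribution."""
        # Unnormalized probability that the interval is heard as n/d: a Gaussian
        # centred on that ratio's size, scaled by its approximate width 1/sqrt(n*d).
        weights = [exp(-(cents - 1200 * log2(n / d)) ** 2 / (2 * s * s)) / sqrt(n * d)
                   for n, d in ratios]
        total = sum(weights)
        return -sum(w / total * log(w / total) for w in weights if w > 0)

    if __name__ == "__main__":
        ratios = seed_ratios()
        for c in (0, 316, 386, 400, 498, 600, 702, 1200):   # a few sample dyads
            print(f"{c:4d} cents: {harmonic_entropy(c, ratios):.3f} nats")
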
Further clarifications about the graph, the units used on its X- and Y-axes,
and related conventions:
Nats are a unit of entropy or information content. More familiar, from
computer science, are bits. To convert from nats to bits, multiply by
log2(e) = 1.442695.
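As a quick illustration of the conversion (the value of 2 nats below is
arbitrary):

    # Convert an entropy value from nats to bits by multiplying by log2(e).
    from math import e, log2

    def nats_to_bits(nats):
        return nats * log2(e)          # log2(e) = 1.442695...

    print(nats_to_bits(2.0))           # 2 nats is about 2.885 bits
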
Cents are a logarithmic measure of musical interval size. For two
frequencies p and q, the interval in cents between them is defined as
1200*log2(p/q).
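In code, with a couple of illustrative frequencies:

    # Interval size in cents between two frequencies p and q.
    from math import log2

    def cents(p, q):
        return 1200 * log2(p / q)

    print(cents(440.0, 220.0))         # an octave (2:1): 1200.0 cents
    print(cents(660.0, 440.0))         # a just fifth (3:2): about 701.96 cents
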
n*d<10000 -- 10000 is the 'seed limit'. This means that the harmonic
entropy is calculated by assuming that our central pitch processor could
ideally recognize any ratio n/d such that n*d is less than 10000. This is
chosen arbitrarily. Increasing this number has very little effect on the
shape of the harmonic entropy curve, but increases the overall entropy
level.
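A tiny check of which ratios fall inside this limit (the example ratios are
chosen for illustration only):

    # Which ratios fall inside the seed limit n*d < 10000?
    for n, d in [(3, 2), (100, 99), (128, 127), (201, 200)]:
        print(f"{n}:{d}  n*d = {n * d}  included: {n * d < 10000}")
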
s=1.2% means that the resolution with which auditory information is
relayed to our central pitch processor is assumed to be 1.2%. This is
essentially the only free parameter in the harmonic entropy model. The
appropriate value for s will depend on timbre, register, duration, the
particular listener in question, and other factors. 1.2% is a generic
value that seems to represent many typical situations well. The other
graph used 0.6%, which represents very fine hearing conditions.
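If s is read as a fractional deviation in frequency (my reading; the article
does not spell out the conversion), it can be expressed in cents like this:

    # Express a resolution given as a percentage of frequency as a width in cents.
    # Treating s as a fractional frequency deviation is an assumption here.
    from math import log2

    def percent_to_cents(percent):
        return 1200 * log2(1 + percent / 100)

    print(percent_to_cents(1.2))       # the generic value: about 20.7 cents
    print(percent_to_cents(0.6))       # the 'fine hearing' value: about 10.4 cents
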
1/sqrt(n*d) weighting: It turns out that the "width" of each ratio n/d in
interval space -- the amount of "room" it has between neighboring ratios
within the 'seed' set -- is approximately proportional to 1/sqrt(n*d). The
approximation is very good when n and d are small, but breaks down as n*d
approaches the 'seed limit' -- in this case, 10000. 1/sqrt(n*d) weighting
simply means that rather than using the actual "widths", the 1/sqrt(n*d)
proportionality is assumed to hold exactly, no matter how close n*d gets
to 10000. This has only a small effect on the harmonic entropy curve, but
makes it a lot less sensitive to the actual choice of 'seed limit'.
Artifacts that result from the specific choice of 10000 get washed out by
this method.
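One way to see this for oneself is to compute the actual widths and set them
next to 1/sqrt(n*d). The sketch below assumes that each ratio's region runs
from the mediant with its lower neighbour in the sorted seed set to the
mediant with its upper neighbour; that reading of "width" is my
interpretation, not a statement about the original program.

    # Compare the actual mediant-to-mediant 'width' of a few seed ratios with the
    # 1/sqrt(n*d) approximation.  Each ratio's region is assumed to run between
    # the mediants formed with its neighbours in the sorted seed set.
    from math import gcd, log2, sqrt

    def seed_ratios(limit=10000):
        return sorted(((n, d)
                       for d in range(1, int(sqrt(limit)) + 1)
                       for n in range(d, limit // d + 1)
                       if n * d < limit and gcd(n, d) == 1),
                      key=lambda r: r[0] / r[1])

    def cents(n, d):
        return 1200 * log2(n / d)

    ratios = seed_ratios()
    for n, d in [(2, 1), (3, 2), (5, 4), (9, 8), (16, 15)]:
        i = ratios.index((n, d))
        lo = ratios[i - 1]                      # lower neighbour in the seed set
        hi = ratios[i + 1]                      # upper neighbour in the seed set
        width = cents(n + hi[0], d + hi[1]) - cents(n + lo[0], d + lo[1])
        print(f"{n}:{d:<3} width = {width:6.2f} c   1/sqrt(n*d) = {1 / sqrt(n * d):.4f}")
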
2004/05/26