Abstract
Auditory scene analysis is extremely challenging. One approach,
perhaps that adopted by the brain, is to shape useful representations
of sounds based on prior knowledge about their statistical structure.
For example, sounds with harmonic sections are common, and so
time-frequency representations are efficient. Most current
representations, however, concentrate on the shorter time-scale
components. Here, we
propose representations for structures on longer time-scales, like the
phonemes and sentences of speech. We decompose a sound into a product
of processes, each with its own characteristic time-scale. This
demodulation cascade relates to classical amplitude demodulation, but
traditional algorithms fail to realise the representation fully. A new
approach, probabilistic amplitude demodulation, is shown to
out-perform the established methods, and to easily extend to
representation of a full demodulation cascade.
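To make the notion of classical amplitude demodulation concrete, the sketch below factors a signal into a slow envelope and a fast carrier using the Hilbert-transform (analytic-signal) method, one standard baseline; this is an illustrative example only, and is not the probabilistic algorithm proposed in the paper.

```python
import numpy as np
from scipy.signal import hilbert

def hilbert_demodulate(x):
    """Classical amplitude demodulation via the analytic signal.

    Returns (envelope, carrier) such that x == envelope * carrier.
    """
    analytic = hilbert(x)
    envelope = np.abs(analytic)
    # Guard against division by zero where the envelope vanishes.
    carrier = x / np.maximum(envelope, 1e-12)
    return envelope, carrier

# Example: a slow (~3 Hz) modulator on a fast 440 Hz carrier,
# mimicking a product of processes on two time-scales.
fs = 8000
t = np.arange(fs) / fs
modulator = 1.0 + 0.5 * np.sin(2 * np.pi * 3 * t)
carrier = np.sin(2 * np.pi * 440 * t)
x = modulator * carrier

env, carr = hilbert_demodulate(x)
```

For a clean narrow-band signal like this, `env` closely recovers `modulator`; the paper's point is that such traditional schemes break down on richer, multi-scale natural sounds, which motivates the probabilistic approach.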