When we listen to music, we often have an intuitive sense of whether a song sounds good, even if it is not in our personally preferred style. We are also quite good at guessing which composer wrote a piece of classical music, or which band performs an unfamiliar song. The question of how we actually achieve this kind of classification touches on pattern recognition, which is of course one of the traditional areas of AI. Since pattern recognition models can often be turned into synthesis methods, an intriguing idea would be to understand pattern recognition in music well enough to produce successful songs.
In this talk I will present some ideas about how relevant patterns can be identified in music. One approach is to use entropy and Markov models to recognize composers and styles, analogous to identifying writers by their word usage. A second topic will be the observation that music shares a crucial property with fractal images, namely self-similarity. This at least provides a criterion for testing whether an automatically produced piece of music conforms to the rules of well-sounding music. Unfortunately, this approach does not give rise to an algorithm for producing successful music, just as knowing that stock-market prices exhibit self-similarity is no recipe for getting rich.
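To make the first approach concrete, here is a minimal sketch of composer recognition via Markov models and cross-entropy. It assumes pieces are given as symbolic note sequences (e.g. MIDI pitch numbers); the function names, the first-order model, and the add-alpha smoothing are illustrative choices, not the method presented in the talk:

```python
from collections import defaultdict
import math

def train_markov(sequences):
    """Count first-order transitions between symbols (e.g. MIDI pitches)."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def cross_entropy(counts, seq, alpha=1.0, vocab_size=128):
    """Average bits per transition of seq under the model, with
    add-alpha smoothing over an assumed vocabulary of 128 pitches."""
    total = 0.0
    for a, b in zip(seq, seq[1:]):
        row = counts[a]
        n = sum(row.values())
        p = (row[b] + alpha) / (n + alpha * vocab_size)
        total += -math.log2(p)
    return total / max(len(seq) - 1, 1)

def classify(models, seq):
    """Attribute seq to the composer whose model compresses it best,
    i.e. assigns it the lowest cross-entropy."""
    return min(models, key=lambda name: cross_entropy(models[name], seq))
```

A piece is attributed to the composer whose model "compresses" it best; this is the same idea behind entropy-based authorship attribution of texts, with notes in place of words.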
The question of whether the lack of methods for automatically producing pleasant and interesting music is due to deficiencies in current AI methods, or instead indicates that creativity is inherently necessary, will be left as a discussion topic.