Dr. Joe Timoney studied Electronic Engineering, completing his PhD in 1998. He joined the Dept. of Computer Science at NUI Maynooth the following year. He teaches on undergraduate programmes in Computer Science and in Music Technology. His research interests lie in the area of audio signal processing, with a focus on musical sound synthesis and the digital modelling of analogue subtractive synthesis. He has supervised a number of PhD students in the fields of audio analysis and digital audio watermarking.
Additionally, he has worked on EI innovation vouchers and a commercialisation project on watermarking. In 2003 he spent three months on a research visit to the ATR laboratory in Kyoto, Japan, and in 2010 he made a research visit to the College of Computing at Zhejiang University, Hangzhou, China. He has developed a strong research collaboration with the Dept. of Signal Processing and Acoustics at Aalto University in Finland.
He is a member of the Audio Engineering Society. Alongside his academic work, he is also a keen DIY electronics enthusiast and has built a number of synthesizers and drum machines. He participated in last year’s Mini-Maker Faire as part of the NUI Maynooth team.
Dr. Joseph Timoney
Music is omnipresent in our daily lives, and it is hard to imagine that this has not always been the case. We rarely stop to wonder how listeners experienced music in past times and how technological innovation has shaped our expectations and listening habits. In the 19th century, listening to (professionally performed) music required the listener to visit a dedicated venue such as a church or a concert hall at a specific time. The nature of such events meant that the listener had no influence on the programme, the performing artists, the time of the concert, or its location. Furthermore, there was no alternative to sharing the listening experience with an audience, and no possibility of listening repeatedly to the same performance. While we still enjoy concerts today, the majority of our listening experience is now unrelated to live performances.
The first notable change to our listening habits was initiated at the end of the 19th century with the introduction of technology to record and reproduce a music performance. The gramophone (and its competitors, the graphophone and the phonograph) enabled listeners for the first time to hear a music performance at home, at any time desired, and possibly alone. What previously was a unique, non-repeatable performance of a pre-selected repertoire in a concert venue lost its temporal and spatial uniqueness. In addition to these contextual changes, listening to recorded music is different from attending a concert: on top of the obvious technical deficiencies of the recording and reproduction system (limited bandwidth and dynamic range, added distortion and noise, missing ambient envelopment), there is no longer any direct communication or interaction between the performers and the audience. This has implications for both the recording and the listeners: in the recording studio there is no audience, no applause, and no stage fright, and the reproduction lacks the performers’ gestural and facial expressions as well as the interaction with other listeners. A recording also invites repeated listening, allowing a level of analytical listening unheard of before.
During the following decades, technological innovation focused on improving the quality of the listening experience: condenser microphones improved recording quality, and the introduction of vinyl LPs improved reproduction quality significantly. At the same time, stereophony enhanced the listening experience by creating an illusion of localization and spatial envelopment.