MIMIC: Musically Intelligent Machines Interacting Creatively

Lead Research Organisation: Goldsmiths College
Department Name: Computing Department

Abstract

This project is a direct response to significant changes taking place in the domain of computing and the arts. Recent developments in Artificial Intelligence and Machine Learning are leading to a revolution in how music and art are being created by researchers (Broad and Grierson, 2016). However, this technology has not yet been integrated into software aimed at creatives: because of the complexity of machine learning and the lack of usable tools, such approaches remain accessible only to experts. In order to address this, we will create new, user-friendly technologies that enable lay users - composers as well as amateur musicians - to understand and apply these new computational techniques in their own creative work.

The potential for machine learning to support creative activity is growing rapidly, both in terms of creative understanding and of practical applications. Emerging work in the field of music and sound generation extends from musical robots to generative apps, and from advanced machine listening to devices that can compose in any given style. By leveraging the internet as a live software ecosystem, the proposed project examines how such technology can best reach artists and live up to its potential to fundamentally change creative practice in the field. Rather than focussing on the computer as an original creator, we will create platforms where the newest techniques can be used by artists as part of their day-to-day creative practices.

Current research in artificial intelligence, and in particular machine learning, has led to an incredible leap forward in the performance of AI systems in areas such as speech and image recognition (e.g. Cortana, Siri). Google and others have demonstrated how these approaches can be used for creative purposes, including the generation of speech and music (DeepMind's WaveNet and Google's Magenta), images (Deep Dream) and game intelligence (DeepMind's AlphaGo). The investigators in this project have been using Deep Learning, Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Long Short-Term Memory networks (LSTMs) and other approaches to develop intelligent systems that artists can use to create sound and music. We are already among the first in the world to create reusable software that can 'listen' to large amounts of sound recordings and use these as examples to create entirely new recordings at the level of audio. Our systems produce outcomes that outperform many previously funded research outputs in these areas.
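
To give a concrete, if simplified, picture of this kind of system (a minimal sketch, not the project's own code; it assumes TensorFlow.js, and the names SEQ_LEN, FEATURES and generate are illustrative), a recurrent network can be trained to predict the next frame of audio features from the preceding ones, then fed its own predictions to synthesise new material:

    import * as tf from '@tensorflow/tfjs';

    const SEQ_LEN = 64;    // timesteps of audio features per training window
    const FEATURES = 128;  // e.g. spectrogram bins per timestep

    // A small recurrent model that predicts the next audio frame from the
    // previous SEQ_LEN frames.
    const model = tf.sequential();
    model.add(tf.layers.lstm({ units: 256, inputShape: [SEQ_LEN, FEATURES] }));
    model.add(tf.layers.dense({ units: FEATURES, activation: 'sigmoid' }));
    model.compile({ optimizer: 'adam', loss: 'meanSquaredError' });

    // Training (xs: frame windows, ys: the frame that follows each window):
    // await model.fit(xs, ys, { epochs: 10 });

    // Generation: repeatedly feed the model its own output to synthesise
    // new frames from a seed window of shape [1, SEQ_LEN, FEATURES].
    function generate(seed, steps) {
      let window = seed;
      const frames = [];
      for (let i = 0; i < steps; i++) {
        const next = model.predict(window);          // shape [1, FEATURES]
        frames.push(next);
        window = tf.concat([
          window.slice([0, 1, 0], [1, SEQ_LEN - 1, FEATURES]),
          next.reshape([1, 1, FEATURES]),
        ], 1);
      }
      return frames;
    }

Real audio-generation models are far larger, but the train-then-feed-back loop is the same.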

In this three-year project, we will develop and disseminate creative systems that musicians and artists can use to create entirely new music and sound. We will show how such approaches can affect the future of other forms of media, such as film and the visual arts. We will do so by developing a creative platform on the most accessible public forum available: the World Wide Web. We will achieve this through the development of a high-level live coding language for novice users, with simplified metaphors for understanding complex techniques, including deep learning. We will also release the machine learning libraries we create for more advanced users who want to use machine learning technology as part of their creative tools.

The project will involve end-users throughout, including graduate students, professional artists, and participants in online learning environments. We will disseminate our work early, gaining the essential feedback required to deliver a solid final product and outcome. The efficacy of such techniques has been demonstrated with systems such as Sonic Pi and Ixi Lang, within a research domain already supported by the AHRC through the Live Coding Network (AH/L007266/1) and by the EC through the H2020 project RAPID-MIX. Finally, this research will strongly contribute to dialogues surrounding the future of music and the arts, consolidating the UK's leadership in these fields.

Planned Impact

We will directly engage stakeholders in the process of music making with creative tools, exploring the role that AI will play in the future of the creative industries. We will bring complex AI and machine learning technologies to the general user of creative software; we will democratise technologies that are still emerging in academia and corporate R&D labs.

These groups will benefit from new software, course materials, events, artistic outputs and industry collaborations:

a) creative practitioners, specifically musicians and composers, and their audiences;
b) the hacker/maker community;
c) industry professionals, including through existing industry partnerships with record labels (XL, Universal), music technology companies (Akai, Roli, Ableton, Reactable, Cycling74, Abbey Road Red) and our project partner, Google Magenta;
d) learners, including those in secondary and higher education, home learners, academics and professionals;
e) the general public.

A key aim of our project is a simplified live coding language for the browser, through which novices can learn about AI and machine learning: a clear and simple, yet powerful, language built on top of JavaScript, which is well supported and popular. It will be designed specifically for musicians and artists, allowing them to pursue new routes for creating music.
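
As a toy illustration of the kind of simplification we mean (a sketch only: the names live and play are ours, not the project's actual language), the browser's built-in Web Audio API can be wrapped so that a novice triggers sound with a single call:

    const ctx = new AudioContext();

    const live = {
      // play(freq, dur): one note, hiding the oscillator and gain plumbing
      play(freq = 440, dur = 0.5) {
        const osc = ctx.createOscillator();
        const gain = ctx.createGain();
        osc.frequency.value = freq;
        osc.connect(gain).connect(ctx.destination);
        gain.gain.setValueAtTime(0.2, ctx.currentTime);
        gain.gain.exponentialRampToValueAtTime(0.001, ctx.currentTime + dur);
        osc.start();
        osc.stop(ctx.currentTime + dur);
      },
    };

    // A novice's 'live coded' line: a short A minor arpeggio.
    [220, 261.63, 329.63].forEach((f, i) =>
      setTimeout(() => live.play(f, 0.4), i * 400));

The same principle - one friendly verb in front of complex machinery - is what will make deep learning techniques approachable within the language itself.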

We will facilitate a range of high-quality use cases with creative professionals. This will bridge gaps between research and industry, accelerating the impact of artificial intelligence by deploying it in real-world and professional music-making and listening contexts.

Our events series will bring musicians, composers and audiences together, providing an important platform for the continued dissemination of our work and the work of those practitioners whom we support through the creation of new tools.

In addition to concerts, we will run symposia to expand and further develop critical thought in these fields, inviting participation from a range of stakeholders. We will also disseminate and support artistic output through our platform, making it simple not just to create work but also to share it with friends and colleagues, both inside and outside our connected communities.

Our background technologies will be open source and available to academics and SMEs alike, allowing them to use contemporary AI in ways that are currently very challenging for novices.

We will generate significant, engaging and unique course materials, associated with our existing MOOC provision, and targeted at a range of different learners, from secondary education, through HE, to home learners, academics and professionals. This will help people to acquire skills in machine learning at any stage.

Our track record indicates we are capable of meeting the significant interest from the general public around these issues. Recent public engagement activities from team members have included:

- applying Deep Learning to the creation of artworks currently exhibited at the Whitney Museum of American Art, with an accompanying paper at SIGGRAPH;
- significant press coverage of the use of AI and Machine Learning for music;
- generative AI music software used by Sigur Rós for the release of their most recent single (Route One), and a generative remix of the track broadcast for 24 hours on Icelandic national TV and watched by millions of people online;
- contribution to the first computer-generated West End musical;
- high-profile experiments on live national radio, as well as experience developing large-scale, online collaboration platforms;
- machine learning software for composers and musicians, downloaded over 5,000 times, and the world's first MOOC on machine learning for creative practice;
- design of various popular live coding systems (ixiQuarks, ixi lang, the Threnoscope).
