How humans and AI can make music together

Ge Wang doesn’t use computers to make music the way most people use computers to make music. He uses computers to make… computer music. Wang works at Stanford, as an associate professor in the Center for Computer Research in Music and Acoustics. He also conducts the school’s famed Laptop Orchestra, was a co-founder of the music app maker Smule, and created a programming language called ChucK that turns code into sound. He understands how computers, music, and humans interact more deeply than most. He also has some ideas about where it’s all headed.

On this episode of The Vergecast, the third and last in our mini-series about the future of music, we chat with Wang about what’s next for computer music. He tells us about teaching his students to play with technology rather than trying to master it, and how tool makers should be approaching their work in a time of AI.

This conversation goes to some unexpected and deep places, as so many conversations about AI tend to. We talk a lot about what it means to be creative, and even human, in a world filled with technology meant to make everything more efficient, less complicated, and more homogeneous.

Whether you’re writing an email or a symphony, there’s a tool out there designed to make it easier. But is easier the goal? And if it’s not, how do we preserve all the things that make the hard work worth doing? What are we, the humans, even here for anymore? Like I said, it got deep. But we enjoyed it, and we think you will too.

If you want to know more about Ge and his work, here are some links to get you started:
