Sergey Korol, OpenCV.ai author
AI and CV in music and pop culture
Robots can't do the dishes or clean our house yet, but they can already create a symphony.
June 23, 2024

How to write a song in a second

First of all, artificial intelligence technologies are already good at making music. For example, Google DeepMind's Lyria lets you create tracks without being able to sing or read sheet music: a whole stack of technologies works in place of a human. With Lyria, you can not only compose your own music but also mimic popular stars. The AI can write an entire song on demand in the style of, say, "a lyrical pop ballad about cats", though the result can still trigger an uncanny-valley feeling.

More abstract music, though, the AI handles with no problem. For example, Endel, a project known for its collaboration with the singer Grimes, creates a personalized musical space: a stream of ambient music for focused work or meditation. Endel already has millions of active users around the world. The app features a special mode that generates music for sleep, including children's sleep, developed together with neurophysiologists.

Endel is trying to go beyond smartphones by bringing its AI generator to wearables and smart speakers

The market for AI music is projected to reach $1.2 billion by 2025. Already today, more than 60% of popular artists use AI services to create or improve lyrics and melodies.

At the same time, computer vision is also related to music, even though no cameras are involved (and at the analysis stage, not even microphones). Audio tracks (e.g. songs or voice recordings) are represented as spectrograms, and from then on the algorithm works with spectrograms as with images: analyzing the sound, recommending similar tracks, and even generating new spectrograms from which new audio can be synthesized.
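To make the spectrogram idea concrete, here is a minimal sketch in Python with NumPy (the window and hop sizes are arbitrary illustrative choices, not values from any specific product) that turns a waveform into the 2-D time-frequency array an image model would consume:

```python
import numpy as np

def spectrogram(audio, n_fft=512, hop=256):
    """Magnitude spectrogram: a 2-D (frequency x time) 'image' of a sound."""
    window = np.hanning(n_fft)
    # Slice the waveform into overlapping, windowed frames.
    frames = [audio[i:i + n_fft] * window
              for i in range(0, len(audio) - n_fft + 1, hop)]
    # FFT each frame, keep magnitudes, transpose to (freq bins, time frames).
    return np.abs(np.fft.rfft(frames, axis=1)).T

# A one-second 440 Hz test tone sampled at 16 kHz.
fs = 16000
t = np.arange(fs) / fs
spec = spectrogram(np.sin(2 * np.pi * 440 * t))

print(spec.shape)  # (257, 61): 257 frequency bins, 61 time frames

# The brightest frequency bin should sit near 440 Hz.
peak_hz = spec.mean(axis=1).argmax() * fs / 512
```

A convolutional network can then classify, compare, or generate these arrays exactly as it would single-channel photographs.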

Millions of fans of a nonexistent artist

There are already plenty of fully AI-generated musicians in pop culture. Neural networks create not only the tracks, but also the image of the performer. For example, the virtual singer noonoouri has hundreds of thousands of fans and successful collaborations with global brands, and her tracks have passed one million listens.

The name Hatsune Miku is a play on Japanese words meaning roughly "first sound of the future"

Even more popular is the virtual performer Hatsune Miku. In addition to her online fame, the 3D singer even appears on stage: a special technique projects her image onto layers of semi-transparent glass. Hatsune has an army of fans and has even been invited to perform at one of the most popular music festivals in the world, Coachella.

Concerts reimagined

And vice versa: human artists are not averse to entering the 3D world either. For example, popular musicians like Twenty One Pilots hold large-scale concerts in Fortnite that can be attended in virtual reality. Rapper Travis Scott's Fortnite concert drew 12.3 million people, even though it lasted only 15 minutes.

Virtual music events are in many ways a substitute for real concerts, sidestepping their limitations on safety and ticket prices

AR and VR are also coming to the world of concerts. This year, visitors to Coachella could attend the first AR stage in the festival's history: by pointing their smartphones at the stage, the audience saw visualizations layered over the scene around them. Gorillaz performed in a fully 3D format: the human musicians sang on stage while their virtual avatars appeared on phone screens.

The band Nine Inch Nails is also famous for its love of modern technology. It even has an in-house technical team that develops complex light-and-sound shows with AR and VR elements. Cameras installed on stage feed distorted live images onto moving screens behind the musicians.

The band transports more than 15 tons of lighting and computer equipment to its concerts, including dozens of cameras and servers

Digital technology is also giving a second life to bands that no longer perform live. In 2022, ABBA returned to the stage with three-dimensional avatars of the band members. To create them, the stars were digitized with a rig of 160 cameras, and AI was then used to make the images move and sing. The virtual artists perform a total of 22 songs during the two-hour concert.

In addition to 3D projections of the musicians, the show's audience enjoys "the best sound in the history of concerts": digital performers do not have to struggle with the noise and shortcomings of live instruments and microphones

We at OpenCV.ai are amateurs when it comes to music and shows, but professionals in computer vision and artificial intelligence. We will be happy to energize your audiovisual show with our technologies.
