Now AI is coming for rap music

You may have missed it amid the recent news blizzard, but last week marked an important moment in artificial intelligence. Capitol Records, the music label behind iconic artists like Nat King Cole and Frank Sinatra, announced that it has signed a rapper by the name of FN Meka. Unlike “Ol’ Blue Eyes” and the label’s other famous artists, FN Meka is not human. It’s a virtual avatar. More importantly, Meka’s songs are created using AI.

I’m probably not the best judge of these things, but I found Meka’s new single “Florida Water” to be no worse than most popular human-made music you hear these days. “Give me that Patek, need that AP, need that zaza,” is a representative lyric.

The amount of artificial intelligence behind Meka’s music is not entirely clear. Capitol Records’ announcement was vague. In an interview with Music Business Worldwide last year, Meka creator Anthony Martini said that a human voice performs the vocals, but the lyrics and composition are the product of AI.

“We have developed a proprietary artificial intelligence technology that analyzes certain popular songs of a specific genre and generates recommendations for the various building blocks of the song: lyrical content, chords, melody, tempo, sounds, etc. We then combine those elements to create the song,” Martini said in the interview.
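Martini’s description is vague, but it suggests a simple recommend-then-combine pipeline: analyze a corpus of songs in a genre, find the most popular value for each building block, then assemble a draft from those recommendations. Here is a minimal sketch of that idea; the corpus, field names, and the frequency-counting approach are all assumptions for illustration, not Meka’s actual technology.

```python
from collections import Counter

# Hypothetical corpus: each entry lists a song's "building blocks."
SONGS = [
    {"chords": "i-VI-III-VII", "tempo": 140, "key": "F minor"},
    {"chords": "i-VI-III-VII", "tempo": 150, "key": "G minor"},
    {"chords": "i-iv-VII-III", "tempo": 140, "key": "F minor"},
]

def recommend(songs, field):
    """Recommend the most common value of one building block across the corpus."""
    counts = Counter(song[field] for song in songs)
    return counts.most_common(1)[0][0]

def draft_song(songs):
    """Combine the most popular value of each building block into one draft."""
    return {field: recommend(songs, field) for field in songs[0]}

print(draft_song(SONGS))
# → {'chords': 'i-VI-III-VII', 'tempo': 140, 'key': 'F minor'}
```

A real system would presumably use generative models rather than frequency counts, but the sketch makes one point from the item above concrete: a draft assembled purely from the most common elements of a catalog is, by construction, a remix of that catalog.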

A few thoughts on this:

  • This is yet another example of the collision course between intellectual property and artificial intelligence. The recent kerfuffle over text-to-image AI art and the human-made artworks that “trained” the AI (which we discuss further down in this week’s newsletter) shows just how thorny the problem is. In the tough business of music – in which artists, labels and streaming platforms have been fighting over royalties for years – it’s going to be a royal mess. With Meka signed to a label, perhaps the AI can be trained exclusively on music from the Capitol Records catalog. But how does Capitol compensate the artists on its roster whose music has been used to “inspire” Meka? And if Meka is trained only on the music of the Capitol catalog, will his songs be anything other than a high-tech remix?
  • Despite the sensationalist nature of an AI rapper and the potential for a new breed of virtual human-replacing crooners, the future could look less like Meka and more like The Wizard of Oz, with the AI quietly doing the work behind the curtain. As AI algorithms for composing music become more widely available, it will be easy for musicians to harness the technology and use it as a quick tool to create more songs. As a listener, you may never know that your favorite artist’s new song was concocted by algorithms.

That said, FN Meka currently has 10 million followers on TikTok. So listen to “Florida Water” and judge for yourself as you read the rest of this week’s AI news.

Alexei Oreskovic
@lexnfx
alexei.oreskovic@fortune.com

Today’s edition was curated and written by Jeremy Kahn.

AI IN THE NEWS

Tesla data could be used to improve driving safety. But who owns this data? This is the question raised by a New York Times story about the wealth of data that Tesla’s advanced cruise control (called Autopilot, but not actually capable of fully autonomous driving) collects and how that data, especially the detailed traffic accident information it provides, could prove useful to transport regulators, road authorities, insurance companies and other vehicle manufacturers. The story is about a lawyer who has used the data in lawsuits and is trying to build a business based on collecting, anonymizing, and then selling that data. The problem is that it’s unclear whether this information belongs to customers who own Tesla vehicles or to Tesla itself.

Elon Musk reveals more details about his robot Optimus. In an essay published in a magazine sponsored by the Cyberspace Administration of China, the billionaire entrepreneur said he wanted the humanoid robot Tesla is building to be able to cook, clean and mow the lawn. He also said the robot, including a prototype that Musk said could launch as soon as late September, could help care for the elderly, according to an article about the essay in tech publication The Register.

Google’s AI is good at spotting naked images of children. But some parents have been wrongfully investigated for potential child abuse and had their Google accounts permanently deleted after sharing innocent images. The New York Times spoke to parents who had to send photos of their children’s genitals to doctors for legitimate medical purposes, but were nonetheless banned from Google and reported to local law enforcement. The problem is that although Google has automated systems that are very good at spotting potential child pornography uploaded to its cloud-based servers – and which have become better at recognizing certain innocent images of naked children, such as a parent taking a photo of their own toddler frolicking naked at the beach – the systems still don’t understand enough contextual information to know when a photo might be shared for a legitimate purpose. The company told the Times that it stood by its decisions in the two cases the paper reported, even though law enforcement quickly cleared the two fathers involved of any crime. But the company’s head of child safety operations also said the company has consulted with pediatricians so that its human reviewers can better understand possible conditions that could appear in photographs taken for medical reasons.

Exscientia and Bayer end their drug development partnership. In a rare setback for AI-based drug discovery, pharma giant Bayer and UK-based AI drug discovery firm Exscientia have ended a deal in which the two collaborated to find likely targets for oncology and cardiovascular drugs. News site Fierce Biotech said Bayer had paid Exscientia around $1.4 million in revenue so far under the partnership, which was signed in 2020 and was worth up to $243 million in upfront fees and future payments if certain development milestones were reached. The site notes that Exscientia retains the right to develop drugs for one of the two targets identified during the collaboration with Bayer. Exscientia’s stock, which is listed on the Nasdaq, lost 20% of its value after the company announced the end of the partnership.

AN EYE ON AI TALENT

Seattle energy startup Booster hired André Hamel as its chief technology officer. Hamel, according to GeekWire, had been an executive at LivePerson and previously held various engineering and machine learning roles at Amazon.

A LOOK AT AI RESEARCH

Can a robot dream of itself? Researchers at Columbia University recently trained a robot arm to learn an image of its entire body from scratch, without any human input. The robot learned entirely by trial and error, starting with random movements, then planning future actions and predicting the position of its body while performing those tasks. It learned to accurately answer questions about whether certain coordinates in three-dimensional space would be occupied by its body at a certain time, based on the action it was performing. The researchers then used various neural network visualization techniques to try to understand how the robot imagined itself during the different stages of learning. They found that while the robot initially imagined itself as a kind of loose cloud, it gradually learned a very precise image of its own body. The results of the experiment were published in Science Robotics. You can read more about the research here, and you can watch a great video about the project here. Why is this important? Because for a robot to perform more complex tasks, especially in crowded environments and around other people or robots, it will need a good sense of its own body shape and how it occupies space. This “self-awareness” is essential for the robot to safely plan its future movements, especially in new environments, without having to be specifically trained for each new space it enters.
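The core query the Columbia robot answers – “given the motion I’m performing, will my body occupy this point in space?” – can be illustrated with a toy version. The actual research uses deep neural networks on a real 3D arm; the sketch below substitutes a hypothetical two-joint planar arm and a coarse lookup table built from random “trial and error” motions, so every name and parameter here is an assumption for illustration only.

```python
import math
import random

L1, L2 = 1.0, 0.8  # link lengths of a hypothetical 2-joint planar arm

def arm_points(t1, t2, n=20):
    """Ground truth: sample points along both links of the real body."""
    pts = []
    for i in range(1, n + 1):
        f = i / n
        pts.append((f * L1 * math.cos(t1), f * L1 * math.sin(t1)))
    ex, ey = L1 * math.cos(t1), L1 * math.sin(t1)  # elbow position
    for i in range(1, n + 1):
        f = i / n
        pts.append((ex + f * L2 * math.cos(t1 + t2),
                    ey + f * L2 * math.sin(t1 + t2)))
    return pts

def _pose_key(t1, t2, bins):
    """Coarsely bin a joint configuration (stand-in for a learned encoding)."""
    return (int((t1 + math.pi) / (2 * math.pi) * bins),
            int((t2 + math.pi) / (2 * math.pi) * bins))

def learn_self_model(samples=2000, bins=8, cell=0.25, seed=0):
    """'Trial and error': random motions, recording which grid cells the
    body passed through for each binned joint configuration."""
    rng = random.Random(seed)
    model = {}
    for _ in range(samples):
        t1 = rng.uniform(-math.pi, math.pi)
        t2 = rng.uniform(-math.pi, math.pi)
        occupied = model.setdefault(_pose_key(t1, t2, bins), set())
        for x, y in arm_points(t1, t2):
            occupied.add((round(x / cell), round(y / cell)))
    return model

def predicts_occupied(model, t1, t2, x, y, bins=8, cell=0.25):
    """Self-model query: would point (x, y) be occupied in this pose?"""
    cells = model.get(_pose_key(t1, t2, bins), set())
    return (round(x / cell), round(y / cell)) in cells

model = learn_self_model()
# A point right next to the arm's base is always occupied; a point beyond
# the arm's maximum reach (L1 + L2 = 1.8) never is.
print(predicts_occupied(model, 0.0, 0.0, 0.05, 0.0))  # True
print(predicts_occupied(model, 0.0, 0.0, 3.0, 3.0))   # False
```

The sketch captures why self-modeling helps motion planning: once the robot can answer occupancy queries for poses it has never been explicitly programmed for, a planner can test candidate movements against the model before executing them.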

FORTUNE ON AI

Read this article before Google flags it as clickbait – by Will Daniel

Elon Musk reportedly considered investing in a rival to his brain computing startup Neuralink – by Alena Botros

TikTok cracks down on misinformation and paid political content ahead of US midterm elections – by Alena Botros

Indeed CEO Leverages Data to Determine What Candidates Want in the Workplace – by Fortune Editors

BRAINFOOD

Is the use of AI-generated media ethical? This question is becoming increasingly urgent as AI systems capable of creating professional-looking images in a wide variety of styles from text descriptions become commercially available. These systems include OpenAI’s DALL-E 2, a similar system created by AI research lab Midjourney, and another called Stable Diffusion created by open-source developer collective Stability AI. The problem is that many artists and illustrators claim that the increasingly powerful software robs them of potential work, and that it is unethical for organizations that can afford to pay humans to create visual content to instead turn to AI software. (Artists feel particularly aggrieved because these AI systems are trained on thousands of images of historical and contemporary artworks and illustrations found on the Internet. Artists receive no compensation for unwittingly contributing to that training data. And to add insult to injury, it’s possible to prompt one of these AI systems into creating an image in the specific style of a particular artist.) Charlie Warzel, a journalist who covers the intersection of technology and culture and writes the “Galaxy Brain” newsletter for The Atlantic, fell into this controversy when he used Midjourney’s AI image-creation software to create an image of far-right radio jock Alex Jones to illustrate a recent newsletter article. Warzel was attacked on Twitter for this, and he later wrote a blog post apologizing for using Midjourney and discussing the whole controversy. But Warzel won’t be the last person to encounter this problem as AI image generation becomes increasingly common. Whether artists and illustrators will be able to convince big business and the media to avoid using such software remains to be seen. But I wouldn’t bet on it. The software is becoming too capable to ignore. (Check out this Twitter thread for many stunning examples.)

Our mission to improve business is fueled by readers like you. To enjoy unlimited access to our journalism, subscribe today.
