Deepfakes that imitate celebrity voices to create violent audio clips: the latest trend on 4chan


Voices generated by artificial intelligence (AI) are one of the most striking phenomena of recent years. While many people have found productive uses for the technology, many others are misusing it: deepfake voices.

As we have explained on other occasions, a deepfake is an imitation video, audio clip, or photo that looks authentic but is actually the product of manipulation with artificial intelligence (AI).

As AI technology advances, creating deepfakes is becoming easier. To produce a deepfake voice, you simply record a voice for a while and then upload the recording to a company or application that offers the service.
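To make that workflow concrete, here is a minimal sketch in Python of what such a round trip might look like. The service URL, endpoint paths, parameters, and API key below are hypothetical placeholders, not any vendor's actual API.

```python
import requests

API_BASE = "https://api.example-voice-service.com/v1"  # hypothetical service
API_KEY = "your-api-key"  # issued by the service after signing up

# Step 1: upload a short recording so the service can clone the voice.
# A few minutes of clean speech is typically enough for modern systems.
with open("sample_recording.mp3", "rb") as f:
    resp = requests.post(
        f"{API_BASE}/voices",
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"audio": f},
        data={"name": "my-cloned-voice"},
    )
resp.raise_for_status()
voice_id = resp.json()["voice_id"]

# Step 2: synthesize arbitrary text in the cloned voice.
resp = requests.post(
    f"{API_BASE}/voices/{voice_id}/speech",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"text": "Any sentence the speaker never actually said."},
)
resp.raise_for_status()

# Save the generated audio, which now mimics the original speaker.
with open("generated_speech.mp3", "wb") as out:
    out.write(resp.content)
```

The point of the sketch is how low the barrier is: two HTTP requests separate a short recording from audio of that person "saying" anything at all.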

In this case, ElevenLabs, a company founded by former Google and Palantir employees, has denounced the use of its beta tool for deplorable purposes: 4chan users appear to have used the application to generate voices that sound like Joe Rogan, Ben Shapiro, and Emma Watson.

“Crazy weekend. Thank you all for testing our Beta platform. While we see our technology being overwhelmingly applied to positive use, we are also seeing an increasing number of voice cloning misuse cases,” explained ElevenLabs in a recent tweet.

It was to be expected: artificial intelligence used to do harm and deceive users

As reported by Vice, several clips created with the tool have been uploaded to 4chan, the anonymous imageboard, making it appear, for example, that Emma Watson is reading an excerpt from Mein Kampf. Some cases are as sensitive as that one, while others range from the merely absurd to the transphobic, homophobic, and racist.

Now that these AI systems need only a small amount of training audio to create a convincing artificial voice that mimics an individual’s speaking style and tone, the potential for harm increases.

Whereas earlier systems required tens or even hundreds of hours of sound, realistic voices can now be generated from just a few minutes of audio content.

[Image: 2001: A Space Odyssey]

The real fear is that people will be fooled into acting on or believing something fake because it sounds like it comes from a real person. Some researchers are looking for technical solutions to protect us, but no technical solution will be 100% foolproof.

ElevenLabs, for its part, has added that although it can trace any audio generated by its artificial intelligence back to a specific user, it is studying additional safeguards. These include requiring payment information or a “full ID” to perform voice cloning, or manually verifying each voice cloning request.
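ElevenLabs has not published how its tracing works, but the simplest form of such traceability is server-side bookkeeping: the service fingerprints every clip it generates and records which account requested it. The sketch below illustrates that idea only; the function names and ledger are hypothetical and this is not ElevenLabs’ actual mechanism.

```python
import hashlib
from typing import Optional

# Illustrative only: a service-side ledger mapping audio fingerprints
# to the account that generated them.
generation_ledger: dict[str, str] = {}

def fingerprint(audio_bytes: bytes) -> str:
    """Hash the exact bytes of a generated audio file."""
    return hashlib.sha256(audio_bytes).hexdigest()

def record_generation(audio_bytes: bytes, user_id: str) -> None:
    """Called by the service each time it returns generated audio."""
    generation_ledger[fingerprint(audio_bytes)] = user_id

def trace(audio_bytes: bytes) -> Optional[str]:
    """Given a suspect clip, return the account that generated it, if known."""
    return generation_ledger.get(fingerprint(audio_bytes))

# Example: a clip that surfaces on a forum matches a prior generation.
clip = b"...generated waveform bytes..."
record_generation(clip, "user-4821")
print(trace(clip))  # -> "user-4821"
```

Note that a plain hash only matches bit-identical files; tracing clips that have been re-encoded or edited would require robust audio watermarking or perceptual fingerprinting, which is exactly where the “not 100% foolproof” caveat comes in.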

