AI startup under fire after trolls used its voice cloning tool to make celebrities say "offensive things"
The firm ElevenLabs, maker of the Prime Voice artificial intelligence voice cloner, is under fire after trolls used its tool to flood social media with audio deepfakes of celebrity voices making racist statements and calling for violence.
The startup, which earlier this month released the tool that lets people upload recordings of anyone speaking and use them to generate an artificial voice, will introduce additional "safeguards" following the misuse, reports said.
In clips posted to the image-based bulletin board website 4chan, internet trolls used the AI tool to make well-known personalities say "offensive" things. Among the celebrities whose voices were "deep faked" were David Attenborough and "Harry Potter" star Emma Watson.
Attenborough's voice was used to create a sweary rant about his career in the Navy SEALs, while Watson's was used to read passages from Mein Kampf, the autobiographical manifesto of Nazi Party leader Adolf Hitler. In one audio clip, the fake voice of Joe Biden was used to announce an invasion of Russia.
Other well-known figures whose voices were faked included Joe Rogan, James Cameron and Tom Cruise, along with a range of fictional characters made to read racist and misogynistic hate speech.
"Crazy weekend – thank you to everyone for trying out our Beta platform. While we see our tech being overwhelmingly applied to positive use, we also see an increasing number of voice cloning misuse cases. We want to reach out to the Twitter community for thoughts and feedback!" The company tweeted and replied to the same thread with: "While we can trace back any generated audio back to the user, we'd like to address this by
implementing additional safeguards."
The firm's current plan of action includes additional account verification to enable voice cloning, such as payment information or even full ID verification; verifying copyright to a voice by submitting a sample with prompted text; and dropping Voice Lab altogether and manually verifying each cloning request.
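ElevenLabs has not published any technical details of these safeguards, but as a rough illustration of how such gating could be structured, the hypothetical sketch below models the three proposed options as verification requirements that a cloning request would have to satisfy. Every name here (VerificationLevel, CloneRequest, can_clone and so on) is invented for illustration and is not part of any ElevenLabs API.

```python
# Hypothetical sketch of the safeguard tiers described above.
# Nothing here reflects ElevenLabs' actual implementation or API;
# all names and rules are invented for illustration only.
from dataclasses import dataclass
from enum import Enum, auto


class VerificationLevel(Enum):
    PAYMENT_INFO = auto()       # account backed by payment details
    FULL_ID = auto()            # full ID verification
    VOICE_COPYRIGHT = auto()    # sample of the voice reading a prompted text
    MANUAL_REVIEW = auto()      # cloning request reviewed by a human


@dataclass
class CloneRequest:
    user_id: str
    verifications: set


# Example policy: cloning requires payment info or full ID,
# plus a prompted-text sample proving the requester controls the voice.
REQUIRED_ANY = {VerificationLevel.PAYMENT_INFO, VerificationLevel.FULL_ID}
REQUIRED_ALL = {VerificationLevel.VOICE_COPYRIGHT}


def can_clone(request: CloneRequest) -> bool:
    """Return True if the request satisfies this illustrative policy."""
    has_identity = bool(request.verifications & REQUIRED_ANY)
    has_rights = REQUIRED_ALL <= request.verifications
    return has_identity and has_rights


if __name__ == "__main__":
    req = CloneRequest(
        user_id="user-123",
        verifications={VerificationLevel.PAYMENT_INFO,
                       VerificationLevel.VOICE_COPYRIGHT},
    )
    print(can_clone(req))  # True under this toy policy
```

The third option the company floated, dropping Voice Lab and reviewing every request manually, would simply replace an automated check like this with a human decision.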
For now, the tool remains available in its current form, as the company aims to give people access to "compelling, rich and lifelike voices" for storytelling. Despite the controversy, it has been described by some as "the most realistic AI text-to-voice platform seen."
ElevenLabs was founded by Mati Staniszewski, a former Palantir deployment strategist, and Piotr Dabkowski, a former Google machine learning engineer. The pair have also built voice cloning and dubbing capabilities for the film and publishing industries, and say they have received £1.6 million ($1.93 million) in funding.
Microsoft: Voice cloning tools can be used by cybercriminals for scamming and fraudulent purposes
Earlier in the year, Big Tech company Microsoft introduced another artificial intelligence advancement. The program, called VALL-E and designed for text-to-speech synthesis, can clone a voice after hearing a person speak for a mere three seconds.
According to PCMag's Michael Kan, a team of the tech giant's researchers created the technology by having the system listen to 60,000 hours of English audiobook narration from over 7,000 different speakers so it could reproduce human-sounding speech. This training set is hundreds of times larger than what other text-to-speech programs have been built on.
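Microsoft has not released VALL-E's code, but the general idea reported here, distilling a few seconds of enrollment audio into a speaker representation and conditioning a speech generator on it, can be sketched at a toy level. The example below is a conceptual illustration only: the "networks" are random numpy projections standing in for real models, and none of the function names come from Microsoft or VALL-E.

```python
# Toy illustration of zero-shot voice cloning as reported for VALL-E:
# a short enrollment clip is turned into a speaker embedding, and a
# text-to-speech stage is conditioned on that embedding. Random numpy
# projections stand in for the actual neural networks.
import numpy as np

rng = np.random.default_rng(0)
SAMPLE_RATE = 16_000
EMBED_DIM = 64

# Stand-in "speaker encoder": averages frame features and projects them.
_encoder_weights = rng.standard_normal((80, EMBED_DIM))


def speaker_embedding(enrollment_audio: np.ndarray) -> np.ndarray:
    """Map a few seconds of audio to a fixed-size speaker vector."""
    # Split the waveform into 25 ms frames and take crude spectral features.
    frame = SAMPLE_RATE // 40
    n_frames = len(enrollment_audio) // frame
    frames = enrollment_audio[: n_frames * frame].reshape(n_frames, frame)
    features = np.abs(np.fft.rfft(frames, n=158, axis=1))[:, :80]
    return features.mean(axis=0) @ _encoder_weights


def synthesize(text: str, speaker: np.ndarray) -> np.ndarray:
    """Pretend TTS: produce audio whose character depends on the speaker vector."""
    duration = SAMPLE_RATE * max(1, len(text) // 10)
    pitch = 100 + 50 * (speaker[0] % 1.0)   # speaker-dependent pitch, for illustration
    t = np.arange(duration) / SAMPLE_RATE
    return np.sin(2 * np.pi * pitch * t).astype(np.float32)


if __name__ == "__main__":
    three_seconds = rng.standard_normal(3 * SAMPLE_RATE)  # fake enrollment clip
    emb = speaker_embedding(three_seconds)
    audio = synthesize("Hello from a cloned voice.", emb)
    print(emb.shape, audio.shape)
```

The point of the sketch is only the data flow: a brief recording is enough to fix the "speaker" conditioning, after which arbitrary text can be rendered in that voice, which is exactly what makes the misuse concerns described below plausible.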
According to the makers, the AI-powered program can manipulate the cloned voice to say whatever is desired, replicate emotion in a person's voice, or be configured into different speaking styles. (Related: Google worshipers applaud their own total enslavement as Google AI unveils near-perfect human voice mimicry tech.)
"Since VALL-E could synthesize speech that maintains speaker identity, it
may carry potential risks in misuse of the model, such as spoofing voice identification or impersonating a specific speaker," the Microsoft researchers wrote in their paper.
"The technology, while impressive, would make it easy for cybercriminals to clone people's voices for scam and identity fraud purposes," Kan commented, adding that it's actually not hard to imagine the same technology fueling cybercrime when even the inventor of the technology acknowledges potential threats.
Visit FutureTech.news for more news related to artificial intelligence-powered platforms.
Watch the video below, which discusses how the ElevenLabs AI system cloned the voice of the Health Ranger, Mike Adams.
This video is from the Health Ranger Report channel on Brighteon.com.
More related stories:
Voice assistants Siri and Alexa creating RUDE, ANTISOCIAL children.
Google suspends engineer for exposing "sentient" AI chatbot.
WEF's "Global intelligence collecting AI" to erase ideas from the internet.
Google veterans to launch drones with "most advanced AI" ever.
Sources include:
DailyStar.co.uk
Express.co.uk
Twitter.com
PCMag.com
Brighteon.com