
AI models designed to closely simulate a person's voice are making it easier for bad actors to impersonate people's loved ones and scam vulnerable victims out of thousands of dollars, the Washington Post reported.
Rapidly evolving in sophistication, some AI voice generation software requires only a few spoken sentences to convincingly reproduce the sound and emotional tone of a speaker's voice, while other tools need as little as three seconds of audio. For those targeted—often the elderly, the Post reported—it can be increasingly difficult to detect when a voice is inauthentic, even when the emergency circumstances described by scammers seem implausible.
These technological advances are apparently making it easier to prey on people's worst fears, and terrified victims told the Post they were horrified when they heard what sounded like pleas directly from friends or family members who desperately needed help. One couple sent $15,000 through a bitcoin terminal to a scammer after believing they had spoken to their son. The AI-generated voice told them he needed money for legal fees after a car accident that killed a US diplomat.
According to the Federal Trade Commission, so-called impostor scams are extremely common in the United States. They were the most frequently reported form of fraud in 2022 and caused the second-highest losses to those targeted. Of 36,000 reports, more than 5,000 victims were scammed over the phone, with losses totaling over $11 million.
Because these impostor scams can be run from anywhere in the world, it is extremely challenging for authorities to combat them and reverse the worrying trend, the Post reported. Not only is it difficult to trace calls, identify scammers, and recover funds, but it can also be hard to decide which agencies have jurisdiction to investigate individual cases when scammers operate from different countries. Even when it is clear which agency should investigate, some are unable to cope with the rising number of cases.
Ars could not immediately reach the FTC for comment. Will Maxson, assistant director of the FTC's marketing practices division, told the Post that raising awareness of scams relying on AI voice simulators is probably the best defense consumers have right now. He recommended treating any request for cash with suspicion and, before sending funds, trying to contact the person who appears to be asking for help by means other than a voice call.
Defenses against AI voice impersonation
AI voice modeling tools have been used to improve text-to-speech generation, create new possibilities for speech editing, and extend the magic of movies by cloning famous voices like Darth Vader's. But the power to easily produce convincing voice simulations has already caused scandals, and no one knows who will be held responsible when the technology is misused.
Earlier this year, there was a backlash when some 4chan members made deepfake voices of celebrities uttering racist, offensive, or violent remarks. At that point, it became clear that companies would need to consider adding more safeguards to prevent misuse of the technology, Vice reported, or risk being held liable for significant damage, such as ruining the reputations of famous people.
Courts have yet to decide when or whether companies will be liable for harms caused by deepfake voice technology—or any of the other increasingly popular AI technologies, like ChatGPT—where risks of defamation and misrepresentation appear to be on the rise.
Courts and regulators may face increased pressure to scrutinize AI, however, as many companies appear to be releasing AI products without fully understanding the risks involved.
For now, some companies seem unwilling to slow the release of popular AI features, including controversial ones that let users imitate famous voices. Recently, Microsoft rolled out a new feature during its Bing AI preview that can be used to imitate celebrities, Gizmodo reported. With this feature, Microsoft appears to be trying to avoid scandal by limiting what the imitated voices can be made to say.
Microsoft did not respond to an Ars request for comment on how well its safeguards currently prevent the celebrity voice imitator from generating offensive speech. Gizmodo pointed out that, like many companies seeking to capitalize on the widespread interest in AI tools, Microsoft is relying on millions of users to beta test its "still dysfunctional AI," which apparently can be made to generate controversial speech by framing it as parody. Time will tell how effective any early safeguards are at mitigating risks.
In 2021, the FTC issued AI guidance telling companies that products should "do more good than harm" and that companies should be willing to hold themselves accountable for the risks of using their products. Last month, the FTC told companies, "You must be aware of the risks and reasonably foreseeable impact of your AI product before placing it on the market."