The Alarming Abuse of Artificial Intelligence: Voice Cloning and Deepfakes as Case Studies

Ademulegun James
Last updated: 17 September 2024
What’s inside?
Scammers Can Clone Your Voice to Scam People Close to You
Voice Cloning Scams
Deepfakes 
The Insidious Landscape of Threat 
How Can We Defend Ourselves Against this Insidious Threat?  
The Human Element 
Closing Thoughts 

Scammers Can Clone Your Voice to Scam People Close to You

This is the story of Adeola Fayehun, a famous Nigerian journalist and political satirist known for her engaging and often humorous take on serious geopolitical, social, and economic issues affecting Africans. She tells it in her own words:

It was the middle of the night when I began receiving calls from Nigeria. The calls came from someone close to me, around 2 or 3 a.m. At first, I thought, “I don’t pick up calls when I’m asleep. If it’s an emergency, I know the people who would call, and I would answer. But no, no, no—my sleep is precious.” So, I ignored the calls and went back to sleep.

In the morning, I returned the call, curious to know what was so urgent. To my astonishment, the person on the other end told me that I had called him in the middle of the night, claiming I was stranded in Nigeria and in need of money. I was baffled and replied, “What? That wasn’t me!” But he insisted: someone had contacted him, sounding exactly like me, explaining that I was on a short visit to Nigeria, had run out of cash, and needed him to send money through someone I was supposedly sending his way.

He said the voice on the phone was indistinguishable from mine, which is why he tried to confirm by calling my U.S. number. When I didn’t answer, he contacted my brother, who assured him that I was not in Nigeria and advised him not to give any money to anyone. Despite this, the impersonator kept calling, urging him to hurry up. Realizing something was off, he decided to play along. I can’t recall the details of what he said, but at one point, he even joked, “Would you like me to add some food to the money since you’re out of cash? You might also be hungry.”

Voice Cloning Scams

Fraudsters use AI-powered voice cloning technology to mimic the voices of trusted individuals, such as family members, friends, or business associates. These scams typically begin with a fraudulent call in which victims are deceived into revealing sensitive information or transferring money. Voice cloning has become a significant concern, causing financial losses, privacy violations, and severe reputational damage. Because cloned voices are nearly impossible to distinguish from real ones, anyone can be targeted, posing a significant risk to individuals and organizations worldwide.

Scammers Can Clone Your Loved One’s Voice to Deceive Those Closest to Them

In another incident, Adeola Fayehun recounted how a scammer attempted to deceive her using her uncle’s cloned voice:

“I received a call from someone I know very well, telling me that we had a WhatsApp group meeting. I responded, ‘Uncle, you know I don’t participate in these WhatsApp groups that you add me to without my consent.’ He replied, ‘I understand, but did you get a code for the Zoom meeting?’ I said, ‘I am not sure. I probably did, but I haven’t opened that group message.’ He insisted, ‘Check to see if you got it.’ I told him it didn’t matter because I wasn’t joining any meeting, but he kept pressing, saying, ‘I just want to make sure you got the code. Go and check.’

I started feeling uneasy, wondering why he was being so pushy. It wasn’t just insistence anymore; it was almost as if he was ordering me to check for the code. Still believing I was talking to my uncle, I checked WhatsApp and told him, ‘Okay, I didn’t see any code.’ He said, ‘No, no, check again—it’s a text message.’ So, I checked my text messages and found a code. When I told him, he urged, ‘Read it, read it to me.’

That’s when it hit me—something was off. I suddenly realized that this might be a scam. Up until that moment, I believed I was speaking with my uncle, who had just traveled to Nigeria, which explained the Nigerian number. But now, I started questioning everything.”

Out of respect for him—and you know I’m a very respectful person—I started to respond, “Uncle, no…” but stopped myself. Instead, I said, “Uncle, I’ve been told never to read numbers to anyone over the phone.” The moment I said that, the person on the other end became furious and abruptly hung up. That confirmed it. I thought, “Wait a minute, this isn’t right. My uncle would never be so pushy, and he would certainly never hang up on me.”

I had narrowly escaped being scammed at the last minute. Immediately, I called my uncle’s U.S. number, and when he answered, I said, “Uncle, someone just tried to scam me using your voice.” He was shocked and replied, “I hope they weren’t successful!” Apparently, someone had gained access to the WhatsApp group he had created, along with the contacts of everyone in the group. The scammer had already deceived one person, prompting my uncle to post a warning in the group. Unfortunately, I hadn’t seen the warning because I rarely check that WhatsApp group.

That’s one of the ways they tried to scam me, and it’s something to be aware of. And then, the second time this happened, they actually used my voice. Yes, they can use your voice too—don’t think it’s just other people’s voices they can clone.

Watch the full video here: [Using AI, Scammers Cloned and Used My Voice](https://www.youtube.com/watch?v=vRGzJ0wOiQQ&t=2s)

The CEO Voice Cloning Scam

In 2019, scammers employed AI technology to mimic the voice of the CEO of a UK-based energy firm. The synthetic voice, almost indistinguishable from the real one in accent and tone, directed an employee to transfer over $200,000. The employee complied, convinced that the directive came from his superior. This is widely regarded as the first documented case in Europe of AI voice cloning being used to commit fraud, and it highlights the growing sophistication of cyber-attacks.

Deepfakes

Deepfakes are AI-generated media that have been intentionally manipulated to change or fabricate a person’s identity in video, images, or audio. With advancements in generative adversarial networks (GANs), deepfakes have become disturbingly realistic, making it incredibly difficult to differentiate authentic videos from fake ones.
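
To make the “adversarial” idea concrete, below is a minimal, hypothetical sketch of a GAN training loop in PyTorch. It learns to imitate a simple one-dimensional distribution rather than faces or voices, but the core dynamic is the same: a generator improves by trying to fool a discriminator, which is why the resulting fakes keep getting harder to spot.

```python
# Toy GAN sketch (illustrative only, not a deepfake generator): the generator
# learns to mimic samples from N(2.0, 0.5) while the discriminator learns to
# tell real samples from generated ones.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # "real" data: N(2.0, 0.5)
    fake = G(torch.randn(64, 8))            # generator output from random noise

    # Discriminator step: score real samples high, generated samples low.
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator score fakes as real.
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(f"generated mean: {G(torch.randn(1000, 8)).mean().item():.2f} (target 2.0)")
```

Real deepfake systems apply this same adversarial pressure to images, video, and audio at vastly larger scale, which is exactly what makes their output so convincing.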

According to a [2023 report by Pindrop](https://www.pindrop.com/blog/findings-in-our-deepfake-and-voice-clone-consumer-report), 60% of survey respondents expressed concern about deepfakes and voice clones, and more than 90% acknowledged that they pose a potential threat. In a deepfake video posted by Tamer Sahin, Republican US presidential candidate Donald Trump is seen entering a house with a gun; he is later joined by former US Secretary of State Hillary Clinton, carrying a box of pizza and a gun. Watch the video here: https://bit.ly/47dX7mE

In another deepfake, posted by Ravit Dotan on LinkedIn, Donald Trump is seen robbing a shopping mall at gunpoint and subsequently being arrested by police. Business tycoon Elon Musk, US President Joe Biden, Russian President Vladimir Putin, Meta CEO Mark Zuckerberg, former US President Barack Obama, and the Pope all feature in the video. Watch the video here: https://bit.ly/479xDXs

The Zelenskyy Deepfake Incident

During the turbulent period of the Russian invasion of Ukraine, a deepfake video of President Volodymyr Zelenskyy emerged, deceptively showing him directing Ukrainian soldiers to lay down their arms. The video was spread through compromised Ukrainian news outlets as well as pro-Russian social media channels, and it was designed to create confusion, demoralize Ukrainian troops, and sow discord. As digital forensics expert Hany Farid noted, such content “pollutes the information ecosystem, casting doubt on all content.” The incident underscores that the consequences of deepfakes extend beyond fraud into geopolitics.

The Insidious Landscape of Threat

Creating deepfakes has become increasingly easy and popular because of the abundance of AI-powered tools now available and accessible to anybody. According to [Veritone Voice](https://www.veritonevoice.com/blog/tackling-deepfake-voice-fraud-with-artificial-intelligence), the abuse of these technologies has escalated, outpacing efforts to safeguard against such attacks. From social media platforms to financial institutions, no sector is immune to exploitation.

Data from the Identity Theft Resource Center (ITRC) show that almost 234 million people experienced some sort of data breach in the first three quarters of 2023. The Federal Trade Commission (FTC) received over 5.39 million reports in 2023, of which 48% involved fraud and 19% identity theft.

These numbers are not mere figures; they reflect the vast scale of the problem and the urgent need for robust countermeasures.

Rapid Adoption and Accelerated Development of Artificial Intelligence 

The release of new AI tools has been accelerating rapidly. According to the Stanford AI Index Report 2024, 149 foundation models were released in 2023, more than double the number released in 2022. That works out to an average of about 0.41 foundation models per day. https://hai.stanford.edu/news/ai-index-state-ai-13-charts
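
As a quick sanity check, that per-day rate is just the report’s count divided over the year, assuming releases were spread evenly across 2023:

```python
# Back-of-the-envelope check of the release-rate figure cited above
# (assumes the 149 releases were spread evenly across 2023).
models_released_2023 = 149
days_in_year = 365
print(f"{models_released_2023 / days_in_year:.2f} foundation models per day")  # ~0.41
```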

However, this figure counts only foundation models, a small subset of all AI tools. Usage statistics tell a similar story: over 250 million people used AI tools worldwide in 2023, underscoring the massive impact and proliferation of these tools. https://www.statista.com/forecasts/1449844/ai-tool-users-worldwide

The global Artificial Intelligence market is expected to reach $826.70 billion in value by 2030, highlighting the wide expansion of AI applications and tools. https://www.statista.com/outlook/tmo/artificial-intelligence/worldwide

Having examined the alarming abuse of Artificial Intelligence, it is time to ask a crucial question:

How Can We Defend Ourselves Against this Insidious Threat? 

To protect ourselves from the dangers of AI, we must employ a multi-faceted approach:

Technology-Driven Solutions

In line with the updated EU AI Act, which is set to apply in about a year, deepfake videos will need to be marked so they can be distinguished from real footage. SynthID, developed by Google DeepMind, is already advancing solutions in this regard: it embeds invisible watermarks in AI-generated images and footage. Because the watermark is woven directly into the pixels, it remains imperceptible to the human eye yet readily detectable by software, and it is designed to survive common transformations such as compression.
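
To illustrate the general idea of pixel-level watermarking, here is a toy sketch in Python: it hides a bit pattern in the least significant bit of each pixel, a change far too small for the eye to notice but trivial for software to read back. To be clear, this is a hypothetical illustration, not SynthID’s actual algorithm, which is proprietary and far more robust to cropping, compression, and re-encoding.

```python
# Toy least-significant-bit (LSB) watermark: invisible to the eye,
# detectable by software. Illustrative only; NOT SynthID's method.
import numpy as np

def embed_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Overwrite the least significant bit of every pixel with a watermark bit."""
    flat = image.flatten()
    pattern = np.resize(bits, flat.shape)        # tile the bit pattern across pixels
    return ((flat & 0xFE) | pattern).reshape(image.shape)

def extract_watermark(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the first n_bits watermark bits back out of the pixel LSBs."""
    return image.flatten()[:n_bits] & 1

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
watermark = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

marked = embed_watermark(image, watermark)
# No pixel value moves by more than 1 out of 255, so the change is invisible...
print("max pixel change:", int(np.max(np.abs(marked.astype(int) - image.astype(int)))))
# ...yet the watermark is perfectly recoverable by software.
print("recovered bits:", extract_watermark(marked, len(watermark)))
```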

Moreover, standard audio and video watermarking procedures should be implemented to help validate genuine media, making alterations easier to detect. Companies like Veritone already use inaudible watermarks to attest to the authenticity of synthetic audio. AI-powered detection tools are also essential: they scrutinize media for the subtlest signs of manipulation. However, given the relentless advancement of deepfake technology, staying ahead of the problem remains a challenge.
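
For a flavor of what such detection tools look for, the sketch below implements one heuristic from the research literature: GAN up-sampling tends to leave periodic high-frequency artifacts that stand out in an image’s Fourier spectrum. Real detectors combine many learned signals; the metric here, and any threshold you would apply to it, are purely illustrative.

```python
# Toy detection heuristic: measure how much of an image's spectral energy
# sits outside the low-frequency band. Illustrative only; real deepfake
# detectors are trained models combining many signals.
import numpy as np

def high_freq_energy_ratio(gray_image: np.ndarray) -> float:
    """Fraction of spectral energy outside the central (low-frequency) band."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_image))) ** 2
    h, w = spectrum.shape
    ch, cw = h // 4, w // 4
    low = spectrum[h//2 - ch:h//2 + ch, w//2 - cw:w//2 + cw].sum()  # central band
    return 1.0 - low / spectrum.sum()

# Usage: compare an image's score against a baseline built from known-genuine
# images; an anomalous score flags the image for closer inspection.
rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(128, 128)).astype(float)
print(f"high-frequency energy ratio: {high_freq_energy_ratio(image):.3f}")
```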

Regulatory Frameworks

Governments and regulatory institutions must collaborate with AI technology companies to establish stringent guidelines for the creation and distribution of AI-generated content. Policies requiring digital watermarks or cryptographic signatures can deter malicious use. Awareness campaigns akin to those conducted by the FTC should also be amplified to educate the public and businesses about the dangers of deepfakes and voice cloning.
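
The cryptographic-signature idea is simple to sketch: a publisher signs the media bytes at creation time, and anyone holding the published public key can later verify that the content is unaltered. The example below uses the widely used Python `cryptography` package; the media bytes are a stand-in, and real provenance standards such as C2PA additionally bind metadata and edit history to the file.

```python
# Minimal content-signing sketch (illustrative; real provenance schemes
# such as C2PA do much more than sign raw bytes).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

media_bytes = b"...raw bytes of the video file..."   # hypothetical stand-in

private_key = ed25519.Ed25519PrivateKey.generate()   # kept secret by the publisher
signature = private_key.sign(media_bytes)            # distributed alongside the media

public_key = private_key.public_key()                # published for everyone
try:
    public_key.verify(signature, media_bytes)
    print("Signature valid: content unaltered since signing.")
except InvalidSignature:
    print("Signature invalid: content modified or not from this publisher.")
```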

Organizational Vigilance

Companies and other stakeholders should strengthen their authentication protocols, including multi-factor authentication and biometric verification, to reduce the risk of AI-driven scams. Every tech start-up should have a dedicated cybersecurity function that combines AI and cybersecurity expertise and runs regular training programs, so that employees can recognize and respond effectively to potential deepfake threats.
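
As one concrete example of an authentication step that a cloned voice alone cannot defeat, here is a minimal sketch of a time-based one-time password (TOTP, RFC 6238), a common second factor. Production systems should use a vetted library and secure secret storage; this is illustrative only.

```python
# Minimal TOTP sketch (RFC 6238). The server and the user's authenticator app
# share a secret once, then independently derive the same short-lived code.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a shared base32 secret."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval            # 30-second time step
    msg = struct.pack(">Q", counter)                  # counter as 8-byte big-endian
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# A scammer with a perfect voice clone still cannot produce this code
# without the shared secret.
shared_secret = base64.b32encode(b"correct horse battery").decode()
print("current code:", totp(shared_secret))
```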

The Human Element

Consider the story of a bank manager in the UAE who received a call from what he believed was a familiar voice, instructing him to authorize a $35 million transfer as part of a corporate acquisition. The voice was that of a director he frequently spoke with. Trusting its familiarity, he authorized the transfer, only to discover later that it was a deepfake voice fraud involving multiple fraudsters. The emotional toll of such deception is severe, eroding trust and creating an atmosphere of suspicion. As we navigate this digital battleground, the human element remains our most formidable defense: we must deliberately cultivate a culture of healthy skepticism in which individuals verify before they trust.

Closing Thoughts

The growing abuse of AI through deepfakes and voice cloning is an urgent call to act decisively. While Artificial Intelligence is meant to simplify our lives, speed up our work, assist with automation, and enhance our productivity, we must not let that come at the expense of our safety, security, and overall well-being. The stakes are high: beyond our financial transactions, this abuse threatens the very fabric that underpins our society. In the struggle to balance innovation with ethics, vigilance, awareness, and ingenuity must be our greatest allies.

Through our collective actions, we must strike a careful balance between the enticing promise of AI and the peril of its reckless use.

Sources:

  1. Using AI, Scammers Cloned and Used My Voice. Source: Adeola Fayehun. Link: [Using AI, Scammers Cloned and Used My Voice](https://www.youtube.com/watch?v=vRGzJ0wOiQQ&t=2s)
  2. The Zelenskyy Deepfake Incident. Source: NPR. Link: [Deepfake Video of Zelenskyy](https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia)
  3. The CEO Voice Cloning Scam. Source: The Wall Street Journal. Link: [Fraudsters Use AI to Mimic CEO’s Voice in Unusual Cybercrime Case](https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402)
  4. The Bank Manager in the UAE. Source: Forbes. Link: [Huge Bank Fraud Uses Deep Fake Voice Tech to Steal Millions](https://www.forbes.com/sites/thomasbrewster/2021/10/14/huge-bank-fraud-uses-deep-fake-voice-tech-to-steal-millions/?sh=5f1a42037559)

Other Sources:

https://www.ftc.gov/news-events/topics/identity-theft/report-identity-theft

https://www.idtheftcenter.org/publication/2023-data-breach-report/

https://www.statista.com/forecasts/1449844/ai-tool-users-worldwide

https://www.forbes.com/advisor/business/ai-statistics/

https://www.statista.com/outlook/tmo/artificial-intelligence/worldwide

https://hai.stanford.edu/news/ai-index-state-ai-13-charts

About the author
Ademulegun James
An AI expert and technophile. I am an Artificial Intelligence ethicist who creates content on the policies guiding AI systems, while promoting the ethical development and deployment of AI innovations.