In 2019, cybercriminals used machine learning to steal money from a company. They did it by mimicking the voice of an executive in a phone call that gave the direction “Transfer the money!” (I’m summarizing the phone call quite a bit, of course!) Since the boss called and said to transfer the money, the money was transferred.
This was touted as the ‘first’ AI-powered heist. I’d say it probably wasn’t the first; it was just the first one we heard about.
Standard defenses won’t work against this. We have firewalls, IDSs, antivirus, vulnerability scanners, and other tools that all defend us against computer attacks. This was a computer attack, but it wasn’t a computer attack against a computer. It was a computer attack against a person. Call it AI Social Engineering.
I did some research on Social Engineering, and the standard defense against it is education. Teaching people not to give information to random callers is important, but what if the person on the phone sounds like your boss? If I got a phone call with the correct caller ID for my boss and it sounded like him on the phone, of course I’d do what he asked. He’s the boss.
It’s not like a random person calling me up and asking me for things; it’s the boss, the man who signs the paychecks, so to speak.
In my opinion, the education that helps defend against traditional Social Engineering attacks isn’t going to help against AI Social Engineering, especially this voice attack. We need technical solutions to this technical problem. In other words, we need Defense Against the Dark AI. (Sorry, Harry Potter fans, I couldn’t resist!)
What do you think these defenses will look like? Write a Field Note for DTRAP and tell us.