Current Major Uses Of AI In Criminal Activities

Following up on my previous post about the uses of AI in 2023: the one area that has drawn the lion's share of the response is AI used in criminal activities. The responses I've received fall into four categories:

  1. Woah, AI can be used for criminal activities? I had no idea! (I know! Shocking, right?!)
  2. I’m a policy person. What new policies restricting AI use should we write? (I am not convinced that regulating AI in the US is a good idea.)
  3. I’m a startup person in AI. How do I ensure I get funded for this in 2023? (My short answer – don’t bother.)
  4. I’m a media producer. Can you come to talk on (insert show) about this? (Sure!)

So, let’s dig into this and discuss some of the biggest uses of AI for criminal activities (in no particular order):

  1. Ransomware attacks: Ransomware attacks, in which attackers encrypt a victim’s data and demand a ransom for the decryption key, have become increasingly prevalent in recent years and are likely to remain a major threat. AI is becoming a huge accelerator here: beyond targeting specific organizations or individuals, it is now being used to identify whole categories of targets and to automate the attack vectors.
  2. Supply chain attacks: Supply chain attacks, in which attackers compromise the software or hardware of a vendor in order to gain access to the systems of its customers, have also become more common and are likely to remain a significant risk in the coming years. AI is being used here in two very interesting ways. First, AI generates and inserts variants of ‘bad’ code into devices so that traditional antivirus/anti-spyware tools can’t rely on their usual signatures to find it; second, as with ransomware, it identifies categories of potential targets and automates the attacks.
  3. Cloud security: As more and more organizations (especially large corporations, governments, and financial services institutions) move their data and applications to the cloud, the security of cloud-based systems is likely to become an increasingly important concern. The lag these groups have experienced in moving to the cloud has increased, not decreased, the overall risk. As such, the criminal element is using automated AI systems to find less secure environments, targeting the ‘lift and shift’ legacy technology programs it can detect.
  4. Internet of Things (IoT) security: The proliferation of connected devices, including smart home appliances and industrial control systems, will likely continue to be a source of risk, as these devices often have poor security measures in place. Take the cloud security note above and apply it here, except the risks are higher, baseline security is lower, and the pace of migration is faster. This is the biggest ‘back office’ concern I have.
  5. Deepfakes: Criminals are using deepfakes to automate the compromising of individuals, both to gain access to personal accounts and to open the doors to other, broader enterprise or government access. Deepfakes coupled with natural language processing (NLP), machine learning (ML), and the massive amount of data out there make this a huge concern. When I was a federal employee, someone pretending to be me called our HR group with enough personal info that, if it hadn’t been for a thoughtful HR employee, my paycheck would have ended up being sent to a fintech company sitting in front of a Banking as a Service provider with less than stellar risk management. Now, imagine that being entirely automated and millions of people being targeted automatically on a daily basis… Please ensure you have multi-factor authentication on all your cellular and financial services accounts, kids.
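The signature-evasion point in item 2 is worth making concrete. Traditional signature-based scanning often boils down to comparing a hash of a file against a database of known-bad hashes, so even a one-byte mutation of the payload produces a hash that no longer matches. A minimal illustration (the payload strings here are placeholders, not real malware):

```python
import hashlib

# A toy "signature database": the SHA-256 hashes of known-bad samples.
sig_db = {hashlib.sha256(b"known bad payload v1").hexdigest()}

sample_a = b"known bad payload v1"   # exact copy of the known sample
sample_b = b"known bad payload v1 "  # one-byte mutation of the same payload

print(hashlib.sha256(sample_a).hexdigest() in sig_db)  # True  -- caught
print(hashlib.sha256(sample_b).hexdigest() in sig_db)  # False -- slips past
```

This is why automatically generated code variants are such a problem for signature-based tooling, and why defenders are shifting toward behavioral and heuristic detection.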
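On the MFA advice in item 5: the time-based one-time codes your authenticator app generates follow a published standard, TOTP (RFC 6238), which is just an HMAC over a counter derived from the current time. A standard-library-only sketch (the base32 secret below is the RFC's published test secret, not a real credential):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at_time=None, digits=6, step=30):
    """Return the TOTP code for a base32-encoded shared secret (RFC 6238, SHA-1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor: number of `step`-second intervals since the Unix epoch.
    counter = int((time.time() if at_time is None else at_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: pick 4 bytes at an offset from the digest.
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", time 59, 8 digits -> "94287082"
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at_time=59, digits=8))
```

The point of the example: the code is derived from a shared secret plus the clock, so a deepfaked voice on a phone call can't produce it, which is exactly why MFA blunts this class of attack.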

Well, now that I’m properly depressed, I’m going to leave it here. Where would you all like me to take this discussion?