Cybersecurity has always been a cat-and-mouse game, with malicious actors and security professionals continually refining their tactics to stay one step ahead of each other. Artificial intelligence (AI) is fundamentally changing the game.

While IT organizations are using AI to augment security, cybercriminals are also leveraging the technology to find new ways around defense mechanisms. In fact, Forrester Research analysts say that agile and well-funded cybercriminal organizations are adopting AI much faster than IT organizations — a trend that is likely to dramatically reshape the threat landscape over the next few years.

Traditional security tools are largely dependent on signature- and rules-based defenses. They look for known patterns of bytes, functions, hashes or other characteristics that have been previously identified and indexed as malware. However, malware authors are increasingly using AI to create “polymorphic” malware that can evade these measures.
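To see why those defenses are brittle, here is a minimal sketch of signature-based detection, assuming a simple hash-lookup approach (the hash value below is a placeholder, not a real malware signature):

```python
import hashlib

# Hypothetical signature database: SHA-256 hashes of samples that have
# already been identified and indexed as malware (placeholder value).
KNOWN_MALWARE_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_known_malware(path: str) -> bool:
    """Flag a file only if its hash matches a previously indexed signature."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in KNOWN_MALWARE_HASHES
```

The weakness is plain: change a single byte of the file and the hash no longer matches, so a sample that was never indexed sails straight past the check.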

Shape-Shifting Malware

Polymorphic malware constantly changes its identifiable features in order to evade signature- and rules-based detection. With each iteration, this shape-shifting malware alters characteristics such as file names and encryption keys so that no two samples look alike.

Polymorphic techniques have been around for decades; Webroot researchers say that 94 percent of all malware employs them. But AI is changing the speed and scale of these threats. They no longer require the manual intervention of hackers to change the file signature: AI-powered malware mutates its code automatically.
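A toy illustration of why this defeats hash-based signatures: re-encoding the same harmless payload with two random XOR keys yields byte streams with entirely different hashes, even though both decode to identical behavior. This is a simplified sketch for illustration only, not how real malware is built:

```python
import hashlib
import os

payload = b"print('hello')"  # a harmless stand-in for a payload

def mutate(payload: bytes) -> bytes:
    """Re-encode the payload with a fresh one-byte XOR key, changing its
    bytes (and therefore its hash) without changing what it decodes to."""
    key = os.urandom(1)[0]
    return bytes([key]) + bytes(b ^ key for b in payload)

# Two "iterations" of the same payload, two unrelated signatures.
print(hashlib.sha256(mutate(payload)).hexdigest())
print(hashlib.sha256(mutate(payload)).hexdigest())
```

Real polymorphic engines are far more elaborate, but the principle is the same: each copy looks new, so a lookup against indexed signatures finds nothing.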

AI also enhances many other types of threats by making them more automated, scalable and finely targeted than ever before. Industry experts say AI is particularly effective in automating social engineering attacks, phishing attacks and advanced persistent threats (APTs). AI-powered automation can increase the scale and frequency of these attacks and make it possible for low-skill individuals to launch highly sophisticated attacks.

Smart Phishing

“Smart phishing” is another AI-powered exploit. Rather than casting the wide net of traditional phishing scams, smart phishing targets specific individuals by exploiting publicly available information such as their personal details, contacts and favorite sites. Using natural language processing and text analysis tools, hackers can mimic the look, feel and writing style of these resources to automatically generate malicious websites, emails and links that are likely to trick victims.

Researchers at ZeroFox have demonstrated that a fully automated smart phishing system can compose tailored tweets based on a user’s demonstrated interests, producing high click-through rates on malicious links. They believe Russian hackers used such a system in a 2017 attack in which tweets carrying malware-laced links were sent to more than 10,000 Twitter users at the U.S. Department of Defense.

Brute-Force Bots

AI tools can also automate conventional brute-force hacking. In a recent experiment, researchers set up a honeypot — a server for a fake online financial firm — and exposed its usernames and passwords in a dark web market. As the researchers monitored the fake site, a single automated bot broke in, scanned the network, collected credentials, siphoned off data and created new user accounts so attackers could regain access later. The bot accomplished all of this in just 15 seconds.
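For context on the defensive side, the honeypot itself can start as little more than a listener that records every connection attempt. A minimal sketch, assuming a fake SSH banner on an arbitrary port (both are illustrative choices, not details from the experiment):

```python
import socket
from datetime import datetime, timezone

# Accept connections on an arbitrary port and log each attempt;
# the fake SSH banner makes the decoy look like a real service.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 2222))
    srv.listen()
    while True:
        conn, (addr, port) = srv.accept()
        with conn:
            stamp = datetime.now(timezone.utc).isoformat()
            print(f"{stamp} connection attempt from {addr}:{port}")
            conn.sendall(b"SSH-2.0-OpenSSH_7.4\r\n")
```

A production honeypot records far more, including credentials tried, commands issued and data touched. That level of telemetry is how researchers can reconstruct an intrusion like the bot's 15-second run.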

Artificial intelligence offers IT security professionals new tools to combat threats with dynamic threat-detection and authentication frameworks. However, cybercriminals are eagerly weaponizing AI to create new threats. It’s still a cat-and-mouse game, but with far more sophistication. If organizations are going to stay ahead, they will need to move beyond basic security tools.

Contact SSD today for a confidential assessment. We can help you develop a robust strategy that will prepare your organization for what’s coming in 2020.