
AI used for and against cyberattacks

"You either die a hero or you live long enough to see yourself become the villain." - Batman

AI, machine learning, and threat intelligence can recognize patterns in data, enabling security systems to learn from past experience. AI and machine learning also help companies reduce incident response times and comply with security best practices. However, as this article details, generative AI tools present new threats that developers of AI tools and cybersecurity professionals need to address.
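
To make the pattern-recognition point concrete, here is a minimal, hypothetical sketch using scikit-learn's IsolationForest to flag an anomalous network connection against a learned baseline. The features, values, and threshold are illustrative assumptions, not anything described in the article.

```python
# Minimal sketch: anomaly detection over connection logs (illustrative only).
# Assumes scikit-learn is installed; the features and data are made up.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [bytes_sent, bytes_received, duration_seconds] for one connection.
baseline = np.random.default_rng(0).normal(
    loc=[5_000, 20_000, 2.0], scale=[1_000, 4_000, 0.5], size=(500, 3)
)

# Learn what "normal" traffic looks like from past experience.
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# A connection that moves far more data than the baseline gets flagged.
suspect = np.array([[500_000, 1_000, 30.0]])
print(model.predict(suspect))  # -1 means "anomalous" in scikit-learn's convention
```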

One example of the good and bad aspects of AI is "fuzzing," a testing technique that feeds large volumes of random or malformed input into a system to uncover vulnerabilities. AI can accelerate fuzzing by generating and testing many inputs quickly, and Microsoft has used fuzzing to improve the security of its software. However, hackers can use the same technique to discover exploitable weaknesses, as the sketch below suggests.
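
As a rough illustration of the technique, here is a minimal random-input fuzzer in Python. The `parse_record` function is a hypothetical stand-in for whatever component is under test, with a planted bug; real fuzzers, including those Microsoft has used, are coverage-guided and far more sophisticated.

```python
# Minimal sketch of "dumb" (purely random) fuzzing.
import random

def parse_record(data: bytes) -> None:
    """Hypothetical stand-in for the component under test."""
    if len(data) > 2 and data[0] == 0xFF and data[1] > len(data):
        raise ValueError("length field exceeds buffer")  # a planted bug

def fuzz(trials: int = 100_000) -> None:
    rng = random.Random(0)
    for i in range(trials):
        # Throw short buffers of random bytes at the parser.
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 64)))
        try:
            parse_record(data)
        except Exception as exc:
            print(f"trial {i}: crash on input {data!r}: {exc}")
            return

fuzz()
```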

While AI can improve security, it can also make it easier for cybercriminals to penetrate systems with no human intervention.

A recent series of proof-of-concept attacks shows how a benign-seeming executable file can be crafted so that, each time it runs, it makes an API call to ChatGPT. Rather than merely reproducing existing code snippets, ChatGPT can be prompted to generate a dynamic, mutated version of the malicious code on each call, making the resulting exploits difficult for cybersecurity tools to detect.
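
Without reproducing the attack itself, the detection problem can be sketched as follows: signature-based tools match known hashes or byte patterns, so two functionally identical payloads that differ textually produce different signatures. The snippets below are harmless stand-ins, illustrating only why per-run mutation defeats that kind of matching.

```python
# Sketch of why per-run mutation defeats hash/signature matching.
import hashlib

# Two functionally identical snippets that differ only in naming and spacing,
# as an LLM regenerating code on every call might produce (harmless stand-ins).
variant_a = "def f(x):\n    return x * 2\n"
variant_b = "def double(value):\n    return value * 2\n"

for name, src in [("variant_a", variant_a), ("variant_b", variant_b)]:
    print(name, hashlib.sha256(src.encode()).hexdigest()[:16])
# The hashes differ, so a signature built from one variant never matches the next.
```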

Tags

artificialintelligence, innovative technology