- The cyber security industry is already seeing evidence of ChatGPT being used by criminals.
- ChatGPT can quickly generate targeted phishing emails or malicious code for malware attacks.
- AI companies could be held liable if their chatbots advise criminals, since Section 230 may not apply.
Whether it is writing essays or analyzing data, ChatGPT can be used to reduce one’s workload. That also applies to cybercrime.
Sergey Shykevich, a lead ChatGPT researcher at the cybersecurity company Check Point, has already seen cybercriminals harness the AI’s power to create code that can be used in a ransomware attack.
Shykevich’s team began studying AI’s potential to lend itself to cybercrime in December 2021. Using the AI’s large language model, they created phishing emails and malicious code. As it became clear that ChatGPT could be used for illegal purposes, Shykevich told Insider the team wanted to see whether their findings were “theoretical” or whether they could “find the bad guys using it in the wild.”
Since it’s hard to tell whether a malicious email landing in someone’s inbox was written with ChatGPT, his team turned to the dark web to see how the application was being used.
On December 21, they found their first piece of evidence: cybercriminals were using the chatbot to create a Python script that could be used in a malware attack. There were some errors in the code, Shykevich said, but much of it was correct.
“What’s interesting is that these guys who posted it had never developed anything before,” he said.
Shykevich said ChatGPT and Codex, an OpenAI service that can write code for developers, will allow “less experienced people to become alleged developers.”
The abuse of ChatGPT – which is now powering Bing’s already worrisome new chatbot – is a concern for cyber security experts who see chatbots as potentially aiding phishing, malware and hacking attacks.
Justin Fier, director for Cyber Intelligence & Analytics at Darktrace, a cybersecurity company, told Insider that when it comes to phishing attacks, the barrier to entry is already low, but ChatGPT could make it simple for people to efficiently create dozens of targeted scam emails – as long as they craft good prompts.
“For phishing, it’s all about volume – imagine 10,000 emails, highly targeted. And now instead of 100 positive clicks, I have three or 4,000,” said Fier, referring to a hypothetical number of people who might click on a phishing email, which is used to get users to give up personal information such as banking passwords. “That’s huge, and it’s all about that target.”
‘Science fiction movie’
In early February, the cybersecurity company BlackBerry released a survey of 1,500 information technology experts, 74% of whom said they were worried about ChatGPT’s potential to aid in cybercrime.
The survey also found that 71% believed that ChatGPT may already be used by nation-states to attack other countries through hacking and phishing attempts.
“It’s well documented that people with malicious intent are testing the waters but, over the course of this year, we expect hackers to get a much better handle on how to successfully use ChatGPT for nefarious purposes,” said Shishir Singh, Chief Technology Officer of Cybersecurity at BlackBerry, in a press release.
Singh told Insider that these fears stem from the rapid advance of AI over the past year. Experts have said that large language models – now more adept at imitating human speech – have progressed faster than expected.
Singh described the rapid innovations as something out of a “science fiction movie.”
“Whatever we have seen in the last 9 to 10 months, we have seen only in Hollywood,” said Singh.
Cybercrime uses could be a liability for OpenAI
As cybercriminals begin to add things like ChatGPT to their toolkit, experts such as former federal prosecutor Edward McAndrew are wondering if companies should be held responsible for these crimes.
For example, McAndrew, who has worked with the Department of Justice investigating cybercrime, pointed out that if ChatGPT, or a similar chatbot, advised someone to commit a cybercrime, it could create liability for the companies facilitating those conversations.
When dealing with illegal or criminal content on their sites from third-party users, most technology companies cite Section 230 of the Communications Decency Act of 1996. The Act states that providers of sites that allow people to post content — such as Facebook or Twitter — are not responsible for speech on their platforms.
However, because the speech would come from the chatbot itself, McAndrew said the law may not shield OpenAI from civil suits or prosecution – although open-source versions of the technology could make it harder to link cybercrimes back to OpenAI.
The scope of legal protections for tech companies under Section 230 is also being challenged in the Supreme Court this week by the family of a woman killed by ISIS terrorists in 2015. The family argues that Google should be held liable for its algorithm promoting extremist videos.
McAndrew added that ChatGPT could also provide an “information storehouse” for those tasked with gathering evidence of such crimes, if they were able to subpoena companies like OpenAI.
“Those are really interesting questions that have been around for years,” McAndrew said, “but as has been true since the dawn of the internet, criminals are among the earliest adopters. And we’re seeing that again with many of the AI tools.”
Given these questions, McAndrew said he sees a policy debate over how the U.S. — and the world at large — will set parameters for AI and technology companies.
In the BlackBerry survey, 95% of IT respondents said that governments should be responsible for creating and enforcing regulations.
McAndrew said regulating the technology could be challenging, as no single agency or level of government is responsible for creating mandates for the AI industry, and the issue of AI technology transcends US borders.
“We’re going to have to have international alliances and international norms for cyber behavior, and I expect that will take many years to develop if we can ever develop it,” McAndrew said.
The technology still isn’t perfect for cybercrime
One thing about ChatGPT that could make life harder for cybercriminals is that it is known to reliably get things wrong – a problem for anyone trying to draft an email meant to imitate another person, experts told Insider. In the code that Shykevich and his colleagues found on the dark web, the errors needed to be corrected before it could help with a scam.
Additionally, OpenAI continues to add guardrails to ChatGPT to discourage illegal activity, although these guardrails can often be bypassed with the right prompts. Shykevich pointed out that some cybercriminals are turning to ChatGPT’s API models – open-source versions of the application that don’t have the same content restrictions as the web user interface.
Shykevich added that, at this point, ChatGPT cannot help create sophisticated malware or build fake websites that convincingly pose as, say, a prominent bank’s website.
However, this could become a reality one day, as the AI arms race created by tech giants could accelerate the development of better chatbots, Shykevich told Insider.
“I am more concerned about the future and now it seems that the future is not in 4-5 years but more like a year or two,” said Shykevich.
OpenAI did not immediately respond to Insider’s request for comment.