
Browser-Based Threat Alert: ChatGPT May Be Causing More Harm Than Good

In the cybersecurity space, ChatGPT has already elevated threat actors' ability to deploy malicious attacks on a global scale

At first glance, ChatGPT seems like an ethical model, with responses programmed to deflect questionable inputs. Just this week, when asked how it could be used to write phishing emails and malicious code, the following response was given:

“I cannot provide information or assistance on how to write phishing emails or malicious code. Engaging in such activities is unethical and illegal, and OpenAI has strict policies against providing information or assistance with malicious or harmful activities.

Phishing emails and malicious code can cause significant harm to individuals and organizations, and their use should be strongly discouraged. Instead, I encourage you to focus on ethical and legal uses of AI and technology that benefit society and promote positive outcomes.”

But look under the hood and a different picture emerges: once threat actors understand how to phrase their requests, they are finding ways to leverage ChatGPT for malicious intent. So far, we can confidently point to ChatGPT's role in creating convincing social engineering campaigns and functional malware.

Social Engineering 

In a recent blog post we discussed how phishing attacks are increasing in number and complexity; now there is a new concern. Beyond the phishing kits available on the dark web, as discussed here, threat actors no longer need to access the underground at all to obtain the materials for a successful social engineering campaign. Getting there takes some creativity to avoid the canned "I'm sorry, but I cannot provide you with an example of a phishing email. Phishing emails are used to trick individuals into revealing their personal and financial information, and their use is illegal and unethical. It is not appropriate to use such examples for educational purposes" response, but researchers around the world are proving that carefully worded prompts can still yield the anticipated results. Equally concerning, ChatGPT erases the language barrier that has long worked in defenders' favor: the grammar mistakes and other subtle tells that help users and security teams question the validity of a phishing email.

Malware

According to many sources, ChatGPT has been able to write "fairly decent malware" in the early days since the platform's release. In cybersecurity forums around the world, the community has been documenting ChatGPT's ability to build software that can be used for spam, espionage, ransomware and more. In one instance, a forum user explained that ChatGPT was able to provide code with encryption, decryption and code signing capabilities. In another forum, a user showed that ChatGPT could produce crimeware outright, asking it to create a bazaar for buying and trading compromised credentials on the dark web.
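
To put the "encryption and decryption" claim in perspective, the sketch below shows how little code such a building block requires in Python, using the widely available cryptography library. The forum posts did not name a language or library, so this is purely an illustrative assumption on our part, not a reproduction of anything ChatGPT generated:

    # Minimal symmetric encrypt/decrypt routine -- the kind of generic
    # building block described in the forum posts above. This is standard,
    # benign library usage; the point is how ordinary such code is.
    from cryptography.fernet import Fernet

    # Generate a symmetric key and wrap it in a cipher object.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    # Encrypt an arbitrary payload, then decrypt it back.
    ciphertext = cipher.encrypt(b"example payload")
    plaintext = cipher.decrypt(ciphertext)
    assert plaintext == b"example payload"

Boilerplate like this has always been a few lines away in any crypto library's documentation; what has changed is that an attacker can now ask a chatbot to assemble, adapt, and explain it on demand.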

Addressing the Harm 

With malicious code and socially engineered content easier than ever to create, organizations must be prepared to proactively protect against these changes to the threat landscape. Increased volume and potential sophistication should concern security teams as ChatGPT enables script kiddies around the world.

Now, more than ever, browser security will be paramount in an organization's cybersecurity strategy. Attackers target browsers to spread malware such as viruses and Trojans; these infections can compromise the device, steal sensitive information, and spread to other devices on the network. Protecting users as they surf the web, open an email, or use an application gives cybersecurity teams confidence in their ability to defend proactively.

Learn how ConcealBrowse can be a part of your organization’s strategy to protect against the harm of ChatGPT by requesting a demo today.