Conn. CISO Raises Security Concerns Over BadGPT, FraudGPT

Almost everyone has heard of ChatGPT. But Jeff Brown, CISO for the state of Connecticut, shares his concerns about some of the other “dark side” apps that have emerged with generative AI.

[Image: hands typing on a laptop with an AI search graphic overlaid. Credit: Adobe Stock/KHUNKORN]
A few weeks back, I was reading LinkedIn posts from some top chief information security officers, and one post from Connecticut CISO Jeff Brown jumped out at me. While linking to an article from the Wall Street Journal, Jeff wrote this in his post:

“Welcome to the dark side of AI and the rise of BadGPT and FraudGPT. These aren't your everyday AI chatbots; they're uncensored AI models trained to craft convincing phishing emails and develop potent malware with alarming efficiency. A groundbreaking study by researchers at Indiana University unveiled over 200 dark web services offering large-language model hacking tools. This revelation is a sobering reminder of the evolving cyber landscape, with some purpose-built hacking tools priced as low as $5 a month.

“The advent of ChatGPT has coincided with a staggering 1,265% surge in phishing attacks, compounded by the emergence of deepfake voice and video technologies. The most alarming case involved an employee from a Hong Kong multinational company being deceived into transferring a staggering $25.5 million during a deepfake conference call. This incident put CIOs and CISOs on high alert, bracing for a wave of sophisticated phishing scams and deepfakes.

“These tales of 'Good Models Gone Bad' underscore a crucial point: While public models like ChatGPT are being fortified with safety controls, there are also tools being honed for darker purposes. As we continue to harness the benefits of AI advancements, we also have to stay vigilant, recognizing that not all AI risks will be eliminated through legislation.”

I have worked with Jeff Brown for more than four years while he has led Connecticut’s cybersecurity efforts for state government. He is a respected leader among state CISOs, and I asked him if he would be willing to be interviewed on this topic for my blog. He agreed, and that interview appears below.


[Photo: Jeff Brown]
Dan Lohrmann (DL): What concerns you most about BadGPT, FraudGPT and other similar tools?

Jeff Brown (JB): My biggest concern is that while the good guys are putting on AI guardrails, attackers are removing them. These purpose-built AI tools are a way of democratizing attacker knowledge that would otherwise have been accessible only to highly skilled attackers. The misuse of these tools by malicious actors for harmful purposes, such as the creation of deepfakes or the spread of misinformation, is a real and growing threat. For skilled attackers, these tools enable attacks at scale and more sophisticated phishing or spear-phishing attacks. In other words, they lower the bar for attackers and raise the bar on what we need to defend against.

DL: Has the state of Connecticut seen an uptick in phishing, spear-phishing or other sophisticated cyber attacks in the past year?

JB: We’ve implemented a number of new security controls that give us both greater visibility and the ability to respond and recover faster when something goes wrong. Email continues to be the most popular vector for attacks because of its pervasiveness and the fact that it's an easy avenue for attackers to exploit. We've seen a steady increase in phishing attempts, and the sophistication of these attacks has also increased. We continue to improve our ability to both detect and react to phishing-based attacks, but I anticipate this problem only getting worse with generative AI. Of course, we are also using AI tools to help defend employee inboxes, which has been very promising so far, so AI is not all bad news from the defender’s perspective.
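
To make the defensive side of that concrete, here is a minimal, hypothetical sketch of the kind of AI-assisted inbox defense Brown describes: scoring inbound mail for phishing signals with a small text classifier. The training examples, flagging threshold and library choices are illustrative assumptions for this blog, not the state's actual tooling.

```python
# Illustrative sketch only: a tiny phishing-email classifier of the general
# kind Brown describes. Real inbox-defense products use far larger models
# and feature sets; the training data and threshold here are fabricated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set standing in for labeled mail history.
emails = [
    "Urgent: your account is locked, verify your password now",
    "Wire transfer needed today, reply with routing details",
    "Team lunch is moved to noon on Friday",
    "Quarterly budget spreadsheet attached for review",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = benign

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

def phishing_score(message: str) -> float:
    """Return the model's estimated probability that a message is phishing."""
    return model.predict_proba([message])[0][1]

incoming = "Immediate action required: confirm your password to avoid lockout"
score = phishing_score(incoming)
# The 0.5 cutoff is arbitrary for the demo; production systems tune it
# against a false-positive budget.
print(f"score={score:.2f}", "FLAG" if score > 0.5 else "deliver")
```

In a real deployment, a scoring step like this would sit behind the mail gateway and feed quarantine or user-warning workflows rather than a print statement.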

DL: Have you seen any cyber attacks using BadGPT and FraudGPT (or similar) tools?

JB: Determining the exact tools in use can be challenging due to the nature of these attacks, but we can definitively say that there's been a significant uptick in the frequency of email-based attacks. They are increasing not only in number but also in sophistication, indicating that the attackers are constantly evolving and enhancing their methods.

DL: Where do you think this trend is heading? Will new GenAI make things worse or help cybersecurity overall?

JB: While the misuse of GenAI is a concern, AI tools also offer new methods for stronger cybersecurity defense controls. As the technology evolves, we can expect AI to be employed in improving threat detection and response capabilities and ultimately in more automation. I think it will continue to be an arms race between attackers and defenders, but tools like Microsoft’s Security Copilot look promising and could not only make the defender’s job easier, but also potentially help address the skills shortage by freeing up time for overwhelmed security analysts.

DL: What can be done to help governments prepare for what is coming next?

JB: Governments need to invest in training and awareness programs for their staff, as well as in advanced cybersecurity tools. The key is not to get complacent. The threat doesn’t stop evolving, which means that our defenses need to evolve along with it. As states continue the push toward digital government, cybersecurity needs to have a seat at that table as well as the resources to build a credible defense against the growing list of cyber threats.

DL: In what ways are GenAI tools helping Connecticut defend against new forms of cyber attacks?

JB: The velocity and scope of attacks are growing every day, and defenders need to adapt to the changing environment. GenAI tools are already helping us by enhancing our threat detection capabilities and response times. Their promise lies in helping us analyze vast amounts of data quickly and efficiently, identifying potential threats that would be hard or impossible to detect manually. These tools are also a lot faster than manually poring over log files or running simple searches. In the future, AI capabilities will be table stakes in most security products.
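
As a rough illustration of the log-triage speedup Brown mentions, the hypothetical sketch below pre-filters authentication logs for repeated failed logins, the sort of repetitive search work that AI-assisted tooling now accelerates. The log lines, pattern and threshold are fabricated for the example, not drawn from Connecticut's actual telemetry.

```python
# Illustrative sketch: pre-filtering logs for triage before an analyst
# (or a GenAI summarizer) reviews them. All data here is made up.
import re
from collections import Counter

sample_log = """\
Mar 01 10:02:11 host sshd[101]: Failed password for admin from 203.0.113.7
Mar 01 10:02:13 host sshd[102]: Failed password for root from 203.0.113.7
Mar 01 10:03:40 host sshd[103]: Accepted password for alice from 198.51.100.4
Mar 01 10:04:02 host sshd[104]: Failed password for admin from 203.0.113.7
"""

FAILED = re.compile(r"Failed password for (\S+) from (\S+)")

failures = Counter()
for line in sample_log.splitlines():
    match = FAILED.search(line)
    if match:
        user, src_ip = match.groups()  # user is extractable too if needed
        failures[src_ip] += 1

# Three failures is an arbitrary demo threshold; real detections tune this
# per environment and time window.
for ip, count in failures.items():
    if count >= 3:
        print(f"possible brute force from {ip}: {count} failed logins")
```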

DL: Where can CISOs, security pros and other government officials go to learn more about these cyber attack trends using GenAI tools? What is the best way to get educated on this fast-moving topic?

JB: This is a very fast-moving space, so I recommend following reputable cybersecurity news sources, attending relevant webinars and conferences, and participating in professional cybersecurity forums and discussion groups. The most important thing is not to bury your head in the sand, and to embrace the potential AI has to help on the defense side of the equation. Ignoring or banning AI tools is not going to be a winning strategy for the future.

DL: Anything else you want to add?

JB: Greater collaboration and information sharing among government entities and the private sector is going to be the key to our long-term success. Just having the conversation about tools, processes and best practices can help us refine existing strategies and help us react faster to the evolving threats. It’s going to be a combination of tools, information sharing and stronger defensive tactics that make the difference.

Daniel J. Lohrmann is an internationally recognized cybersecurity leader, technologist, keynote speaker and author.