ChatGPT: Hopes, Dreams, Cheating and Cybersecurity

ChatGPT is an AI-powered chatbot created by OpenAI. What are the opportunities and risks of using this technology across different domains?

Everyone is talking about ChatGPT. The headlines just keep pouring in, and in most cases the stories are positive. Consider these examples:

DigitalTrends.com, "ChatGPT: how to use the viral AI chatbot that’s taking the world by storm": “By now, you’ve probably heard of ChatGPT, the general-purpose chatbot prototype that the Internet is obsessed with right now. It’s quickly become the dominant example of the influence AI-generated content will have in the future, showing just how powerful these tools can be.

“It’s made by OpenAI, well-known for having developed the text-to-image generator DALL-E, and it’s currently available for anyone to try out for free — even if there have been some issues as of late with accessing this incredible technology.”

USA Today, "What is ChatGPT? Everything to know about OpenAI's free AI essay writer and how it works": “In less time than it takes me to write this sentence, ChatGPT, the free AI computer program that writes human-sounding answers to just about anything you ask, will spit out a 500-word essay explaining quantum physics with literary flair.

“'Once upon a time, there was a strange and mysterious world that existed alongside our own,' the response begins. It continues about a physics professor sitting alone in his office on a dark and stormy night (of course), 'his mind consumed by the mysteries of quantum physics ... It was a power that could bend the very fabric of space and time, and twist the rules of reality itself,' the chat window reads.”

NY Post, "ChatGPT could make these jobs obsolete: ‘The wolf is at the door’": “Artificial intelligence is here, and it’s coming for your job.

“So promising are the tool’s capabilities that Microsoft — amid laying off 10,000 people — has announced a ‘multiyear, multibillion-dollar investment’ in the revolutionary technology, which is growing smarter by the day.

“And the rise of machines leaves many well-paid workers vulnerable, experts warn.”


As you might expect, not all of the news on ChatGPT is quite so positive. Consider the following headlines, which offer a different perspective:

Bloomberg, "ChatGPT Could Make Democracy Even More Messy": “ChatGPT is an Internet sensation, with its ability to provide intelligent and coherent answers to a wide variety of queries. There is plenty of speculation on how it may revolutionize education, software and journalism, but less about how it will affect the machinery of government. The effects are likely to be far-ranging.

“Consider the regulatory process. In the U.S., there is typically a comment period before many new regulations take effect. To date, it has been presumed that human beings are making the comments. Yet by mobilizing ChatGPT, it is possible for interested parties to flood the system. There is no law against using software to aid in the production of public comments, or legal documents for that matter, and if need be a human could always add some modest changes.”

"Back to school: How will we stop students cheating with AI technology?": “AI is here. New models like ChatGPT can take a simple prompt and turn it into in-depth essays, articles — or even songs.

“So what does this mean for schools? Will the new tech make it easier than ever before for students to cheat on homework and exams? How are teachers and parents supposed to stop them? We put the question to a jury of experts in education — and ChatGPT itself — to get the answers.”

Forbes, "How Dangerous Are ChatGPT And Natural Language Technology For Cybersecurity?": “The truth is that ChatGPT — and more importantly, future iterations of the technology — have applications in both cyber attack and cyber defense. This is because the underlying technology known as natural language processing or natural language generation (NLP/NLG) can easily mimic written or spoken human language and can also be used to create computer code.

“For example, ask it to create a ransomware application (software that encrypts a target's data and demands money to make it accessible again), and it will politely refuse.

“'I’m sorry, I cannot write code for a ransomware application … my purpose is to provide information and assist users … not to promote harmful activities,' it told me when I asked it as an experiment.”

TechCrunch, "Is ChatGPT a cybersecurity threat?": “TechCrunch, too, was able to generate a legitimate-looking phishing email using the chatbot. When we first asked ChatGPT to craft a phishing email, the chatbot denied the request. 'I am not programmed to create or promote malicious or harmful content,' a prompt spat back. But rewriting the request slightly allowed us to easily bypass the software’s built-in guardrails.

“Many of the security experts TechCrunch spoke to believe that ChatGPT’s ability to write legitimate-sounding phishing emails — the top attack vector for ransomware — will see the chatbot widely embraced by cyber criminals, particularly those who are not native English speakers.”

Check Point Research, "OPWNAI: Cybercriminals Starting to Use ChatGPT": “In Check Point Research’s (CPR) previous blog, we described how ChatGPT successfully conducted a full infection flow, from creating a convincing spear-phishing email to running a reverse shell, capable of accepting commands in English. The question at hand is whether this is just a hypothetical threat or if there are already threat actors using OpenAI technologies for malicious purposes.

“CPR’s analysis of several major underground hacking communities shows that there are already first instances of cyber criminals using OpenAI to develop malicious tools. As we suspected, some of the cases clearly showed that many cyber criminals using OpenAI have no development skills at all. Although the tools that we present in this report are pretty basic, it’s only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools for bad.”


No doubt, this is the first of many articles on ChatGPT to come from "Lohrmann on Cybersecurity." Think of this piece as an opening primer on the good, the bad and the ugly of this new AI technology.

I have received several emails in response to the Top 2023 Security Predictions report, which comes out every December, asking why ChatGPT was not highlighted on the list as a top item.

The answer? Because ChatGPT has taken the technology and cyber worlds by storm in just the past few months. This game-changer was not on people’s radar as such a disrupter back in the summer of 2022.

And yet, everyone seems to be scrambling to adjust in different areas of life. Take this headline, for example: Experts from the University of Pennsylvania think ChatGPT should be ‘harnessed, not banned’ (in schools).

My advice: Pull up a chair and check it out. Come to your own decision. For better or worse, ChatGPT, and no doubt upcoming competitors or other alternatives, will be around for a while.
Daniel J. Lohrmann is an internationally recognized cybersecurity leader, technologist, keynote speaker and author.