AI Is Everywhere — Should We Be Excited or Concerned?

Artificial intelligence is transforming many areas of life, and fast, so we all need to pay attention. Reactions are all over the map, and AI will be used for both good and evil.

Wherever you turn, artificial intelligence is showing up in new technology products and services across almost all industries.

Here are a few examples of news headlines from just one day last week:

Bloomberg: Google Adds a Suite of New AI Tech for Photos and More
“Google’s annual I/O conference kicked off on Tuesday and the company showed off all the ways it’s using artificial intelligence to make our family memories more vivid, to make smartphone cameras less racist, and potentially to even save lives.”

BBC: The Navy sub commanded by artificial intelligence
“MSubs of Plymouth, a specialist in autonomous underwater vehicles, won a £2.5m Ministry of Defence contract to build and test an Extra-Large Unmanned Underwater Vehicle (XLUUV) that should be able to operate up to 3,000 miles from home for three months.

“The big innovation here is the autonomy. The submarine's movements and actions will be governed entirely by Artificial Intelligence (AI).”

ZDNet: AI and data science jobs are hot. Here’s what employers want
“Up to 10,000 jobs in AI and data science open each month, and the trend is only growing — but candidates often lack the right skills.”

Wired: These Ex-Journalists Are Using AI to Catch Online Defamation
“CaliberAI wants to help overstretched newsrooms with a tool that’s like spell-check for libel. But its potential uses go far beyond traditional media.”

Reuters: Is that Tom Hanks speaking in Japanese? No, it’s just AI
“Bad lip-syncing in dubbing and subtitles can put off audiences and hurt box office takings of foreign films.

“AI may be about to change all that.

“Start-up Flawless AI, co-founded by film director Scott Mann, has a tool that it says can accurately recreate lip sync in dubbing without altering the performance of the actors.”

And the list of AI examples goes on. A video from Simplilearn looks at the top AI technologies this year.

At the end of last year, Daniel Newman offered these “4 AI Trends Set To Accelerate In 2021” in Forbes:

  • The growth of robotic process automation and AI-driven automation
  • A consistent and accelerated shift toward cybersecurity and AIOps
  • Confluence with the Internet of Things
  • Personalized AI for marketing

ChiefExecutive claims that organizations can break down bias and mitigate the pitfalls of AI in tech hiring: “We often hear that when it comes to DEI 'sunlight is the best disinfectant,' but layering AI on top of today’s resume screens will not only exacerbate the pedigree bias problem, but it will also create a black box around the vetting process by obfuscating the bias. The algorithm will continue selecting candidates based on pedigree proxies — i.e., candidates who may have attended a top university or have experience at a Big-5 tech company. This artificially shrinks your candidate choices and pipeline diversity. …

“The key to implementing digital transformation in hiring is to strike the right balance of human and technology. Use technology to lighten the cognitive load of your interviewers by suggesting questions based on the role and competencies being evaluated. Use video recordings to review your interviewers and train them to spot mistakes like ambiguity or preferential treatment.

“Successfully implementing A.I. in recruiting and hiring is going to take an investment. Not just an investment in technology, but an investment in creating more inclusive data science teams. This step is critical to ensure we’re not codifying today’s biases in the next generation of tech.”

On the scary side of AI, when looking long term, these articles from Futurism and The Guardian describe a fascinating interview with Nobel Prize-winning economist Daniel Kahneman, in which he says, “clearly AI is going to win. How people are going to adjust is a fascinating problem.”

The interview is enlightening, hopeful and a bit shocking in many respects. Here are two examples:

“Do you feel that there are wider dangers in using data and AI to augment or replace human judgment?

“Daniel Kahneman: There are going to be massive consequences of that change that are already beginning to happen. Some medical specialties are clearly in danger of being replaced, certainly in terms of diagnosis. And there are rather frightening scenarios when you’re talking about leadership. Once it’s demonstrably true that you can have an AI that has far better business judgment, say, what will that do to human leadership?

“Are we already seeing a backlash against that? I guess one way of understanding the election victories of Trump and Johnson is as a reaction against an increasingly complex world of information — their appeal is that they are simple impulsive chancers. Are we likely to see more of that populism?

“Daniel Kahneman: I have learned never to make forecasts. Not only can I certainly not do it – I’m not sure it can be done. But one thing that looks very likely is that these huge changes are not going to happen quietly. There is going to be massive disruption. The technology is developing very rapidly, possibly exponentially. But people are linear. When linear people are faced with exponential change, they’re not going to be able to adapt to that very easily. So clearly, something is coming… And clearly AI is going to win [against human intelligence]. It’s not even close. How people are going to adjust to this is a fascinating problem – but one for my children and grandchildren, not me.”


Closer to home, well-known blogger and Harvard Kennedy School researcher Bruce Schneier gave a talk on AI and hacking this week during the virtual RSA 2021 Conference. A preview of the talk is available on YouTube.
Coverage of the talk in PCMag proclaimed: “We’re Not Prepared for AI Hackers, Security Expert Warns.”

Schneier believes that, initially, AI analysis will favor hackers. “When AIs are able to discover vulnerabilities in computer code, it will be a boon to hackers everywhere,” he said.

Over time, however, he believes that the advantage will ultimately favor defenders — the good guys. The same technology used to find and exploit vulnerabilities can also be used to find and fix software vulnerabilities before they can be exploited. “We can imagine a future where software vulnerabilities are a thing of the past,” Schneier argued.

You can watch a long list of excellent RSA 2021 sessions for free on the RSA Conference YouTube channel.

Another session covering machine learning, data and automation was a talk by Doug Merritt, CEO of Splunk.

He gives some great examples, and he offers some encouraging news (courtesy of Mandiant) regarding hacker dwell times before being discovered.

Dwell time average was 78 days in 2018, but it went down to 56 days in 2019 and down to 17 days (on average) in 2020. And yet, 78 percent of security leaders expect that another SolarWinds-like supply chain attack is coming.


I want to close this blog on an upbeat note regarding AI. I absolutely love this Time magazine interview with Darktrace CEO Poppy Gustafsson. She addresses a lot of tough questions with clear, easy-to-understand answers. She also covers machine learning and cyber defense today:

“Where does the AI come in?

“[Gustafsson:] What we’re doing is unsupervised machine learning, which means you’re not teaching it, you’re not going in and saying, “This is what a threat looks like. This is what bad behavior looks like.” It goes in, and it learns for itself. So what you’re not doing is, you’re not making any assumptions about what you think good behavior should be and what you think bad behavior should be. It simply goes into an organization and learns the digital heartbeat for that organization.

“What does that look like?

“[Gustafsson:] Imagine that someone stole your car and they’ve got the keys, so they had legitimate access to your car. But then they’re driving around, and they’ve got the seat in a different position, the rearview mirror is in a different place, they’re listening to a different radio station; maybe they’re driving a bit slower than you normally do or maybe a bit faster. It’s all these small little changes. And despite the fact that the alarm hasn’t gone off, I can tell that’s not you driving, because there’s just so many little indicators that say, This isn’t in keeping with the way that you normally behave.”
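Gustafsson’s car analogy maps onto a standard anomaly detection pattern: learn a baseline of “normal” behavior from unlabeled observations, then score new observations by how far they deviate from it. The sketch below is purely illustrative (it is not Darktrace’s actual method); it uses simple per-feature z-scores, and the feature names and numbers are invented for the example.

```python
import statistics

def fit_baseline(samples):
    """Learn the mean and standard deviation of each feature
    from unlabeled observations of normal behavior."""
    columns = list(zip(*samples))
    return [(statistics.mean(c), statistics.stdev(c)) for c in columns]

def anomaly_score(baseline, observation):
    """Sum of absolute z-scores: many small deviations from
    the baseline add up to a large score."""
    return sum(abs(x - mean) / stdev
               for x, (mean, stdev) in zip(observation, baseline))

# Hypothetical features per drive: [seat position, mirror angle, avg speed]
normal_drives = [
    [10.0, 45.0, 60.0],
    [10.2, 44.8, 62.0],
    [9.9, 45.1, 59.0],
    [10.1, 45.0, 61.0],
]
baseline = fit_baseline(normal_drives)

usual = anomaly_score(baseline, [10.0, 45.0, 60.5])   # close to normal
thief = anomaly_score(baseline, [14.0, 38.0, 75.0])   # seat, mirror, speed all off
print(usual < thief)
```

No single feature here trips an “alarm”; it is the accumulation of small deviations that separates the thief from the owner, which is the point of the analogy.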

If only we could all communicate cybersecurity the ways that Gustafsson does in this interview — the cyber industry would be in a much better place.

Regardless of your views on AI, I encourage you to become educated — and fast. The tech and cyber worlds are changing rapidly, and AI (including machine learning) is now a big part of the mix.
Daniel J. Lohrmann is an internationally recognized cybersecurity leader, technologist, keynote speaker and author.