ASU+GSV 2024: Tech CEOs Assess the AI Revolution So Far

A panel of tech executives on Tuesday reflected on the speed of recent advances in artificial intelligence, the potential of the market and the need to focus on developing new tools responsibly.

From left, Google DeepMind COO Lila Ibrahim, Guild CEO Bijal Shah and Learneo co-founder Gregor Carrigan discuss today's AI revolution at a Tuesday webinar, held as part of the annual ASU+GSV Summit.
Screencap credit: Brandon Paykamian
With artificial intelligence technology rapidly advancing across industries and use cases, tech leaders say the companies driving the current AI revolution have a responsibility to steer the technology toward safe and practical uses through collaborative research and development.

However, according to a panel of tech executives at a Tuesday webinar, part of the annual ASU+GSV Summit, many professionals both inside and outside of tech are just getting a handle on what that will mean.

“I suspected someday the AI revolution would come. I don’t think I expected quite what has happened in the past 18 months or so,” Lila Ibrahim, chief operating officer of Google DeepMind, said during the webinar. “Trying to navigate that amidst a changing world and a changing workforce, I think it’s a lot all at once.”

Bijal Shah, CEO of Guild, a public benefit corporation that partners with companies to provide employer-funded education and upskilling to their workforces, said that while many industry leaders thought society was experiencing a brief “hype cycle” following the release of ChatGPT, employers and professionals across disciplines are now realizing the full scope of AI’s potential to transform how work gets done.

“We knew what was going to happen, which was augmentation of people’s job responsibilities and automation of their job responsibilities. We knew that was coming, but I think everyone thought it was going to come over time,” she said. “It’s happening right now, and we need to figure out how to get ahead of it as a country and as a society. … And we’re thinking about those things, both internally and externally.”

To predict the unpredictable, Shah said Guild has been working with employers and other organizations to find ways of leveraging AI to maximize productivity. She said they start by looking at how AI can streamline certain tasks within a given job.

“We spend a lot of time with many of our customers who are actually trying to figure out what’s the impact going to be for their organizations on what [AI] augmentation or automation is going to do to their workforce,” she said. “What we’ve been doing is working with employer partners to help them understand, inside of a job description or inside of a specific role, what is likely to get automated ... and use that information to then help them figure out how much capacity will be freed up for that employee, and what else can we skill them in, or better skill them in, in order for them to be able to actually be productive as the future of work gets transformed.”

In terms of how AI can be leveraged, Learneo co-founder Gregor Carrigan noted its potential in education to personalize instruction and create content. As part of his work at Learneo, a consortium of businesses that sell productivity tools, Carrigan serves as CTO of Course Hero, an online learning platform for course-specific study resources that’s developing an AI learning assistant for supplementing courses with customized content.

“We’ve been following it for a while. We’ve been investing in it for many years. And in the last 18 months, we see this level of excitement, both internally and externally, for these features and these products is through the roof,” he said. “But we are now investing heavily.”

Whether building tools for education, business or other sectors, he said it’s important for AI developers to focus on safety and effectiveness, using feedback from potential clients to gauge their operational needs. He said that stakeholder input is key to developing AI tools responsibly.

“We want to [develop AI products] in a way that’s going to be responsible and focused on the long term. I think ways that we can strike that balance is making sure that we have stakeholders involved in the process early. We talk with both students and educators regularly about what we’re building, and what we’re thinking about building,” he said.
Brandon Paykamian is a staff writer for Government Technology. He has a bachelor's degree in journalism from East Tennessee State University and years of experience as a multimedia reporter, mainly focusing on public education and higher ed.