Minerva University AI Research Lab Turns Concepts into Startups

The Minerva University AI Research Lab has brought together groups of students to create and pitch their own AI tools, with an emphasis on addressing the ethical and technical concerns about the technology.

Students at the Masason Foundation/Minerva University AI Research Lab work together on developing new ideas for AI tech tools before pitching them to investors.
(Minerva University)
As advances in machine learning fuel debate across the Internet about the technical and ethical problems of artificial intelligence and “AI artist” applications, educators at Minerva University have been working to help others understand how AI technology works, as well as how to develop it ethically and effectively. Central to these efforts is the Masason Foundation/Minerva University AI Research Lab, a collaborative AI research and startup program where a select group of fellows learn about and create AI technologies.

According to Minerva President Mike Magee, the university launched the lab in 2019 in partnership with the Masason Foundation, a Japanese organization that offers scholarships and other resources for projects concerned with the future of civilization, with the goal of giving students a chance to develop new AI-based products before pitching their ideas to investors. Magee said the program pairs an academic year of independent research with a summer internship at Masason’s AI-focused venture capital partner DEEPCORE Inc. in Tokyo, where students get firsthand experience developing and launching new AI tools; it has already yielded several AI-based tech products.

Magee added that the program is unique compared to other higher-ed startup incubators in that students, mentors and faculty can collaborate remotely.

“Through the program, students are encouraged to consider how AI might be brought to bear on solving the world’s most intractable challenges,” he wrote in an email to Government Technology. “Since its launch, projects and startup concepts emerging from the lab have included ideas to address challenges around climate, nutrition, and women’s reproductive health.”

According to an email from the university, program activities include design and brainstorming sessions, mentor/mentee sessions where students practice advising other students, and funding-pitch sessions where students present their ideas to panels of venture capitalists and create market plans. The email noted that one group of students recently developed an AI-based SAT prep tool called MathApp, which identifies gaps in a student’s knowledge so they can improve their SAT scores.
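
The university’s email doesn’t describe how MathApp works internally, but a gap finder of the kind it describes, one that flags topics where a student’s practice accuracy lags, can be sketched in a few lines of Python. The topic labels, threshold and function name here are purely illustrative, not MathApp’s actual design:

    from collections import defaultdict

    def find_gaps(responses, threshold=0.6):
        # responses: (topic, answered_correctly) pairs from practice questions.
        # Returns each topic whose accuracy falls below the chosen threshold.
        correct, total = defaultdict(int), defaultdict(int)
        for topic, is_correct in responses:
            total[topic] += 1
            correct[topic] += int(is_correct)
        return {t: correct[t] / total[t]
                for t in total if correct[t] / total[t] < threshold}

    # A student strong on algebra but shaky on geometry:
    history = [("algebra", True), ("algebra", True), ("algebra", False),
               ("geometry", False), ("geometry", False), ("geometry", True)]
    print(find_gaps(history))  # {'geometry': 0.333...}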

As for the types of discussions that take place at the Minerva lab, computational sciences professor and lab co-director Patrick Watson said educators and mentors emphasize today’s concerns about AI technology and its applications, as well as who holds those concerns and why, to encourage the ethical development of AI as machine learning tools become more ubiquitous.

“AI trust and interpretability, AI safety, data bias and algorithmic bias are the concerns most talked about, and part of that is because solving these problems — or at least appearing concerned about them — is in the financial interest of technology companies. We hear a lot less about how AI and machine learning put alternative lenses on our existing values, even though that’s more often how ethicists think about new technologies,” Watson said.

He explained that because AI systems usually rely on historical data patterns and algorithms, they often make “arcane decisions that are difficult to explain in language,” which he believes obscures how the technology works and invites sensationalized media coverage that plays on people’s anxieties about AI.

“That makes it easy to write a hand-wringing think piece about the ‘mysterious, scary robots.’ But I don’t think the systems are particularly hard to interpret from a mathematical or statistical standpoint. The engineer who built the system can usually tell you exactly what it’s doing,” he said. “AI safety, which is the broad idea that we should try to prevent robots from killing all humans like they do in sci-fi movies, is in a similar place. Scary ideas get a lot of attention and funding because they spread more readily through popular media.”
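
Watson’s point that many systems are legible from a mathematical standpoint is easy to demonstrate for simple models: a linear classifier’s learned weights state exactly how each input pushes the prediction. A minimal sketch, assuming scikit-learn and purely synthetic data:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                  # three synthetic features
    y = (2.0 * X[:, 0] - X[:, 1] > 0).astype(int)  # feature_2 plays no role

    model = LogisticRegression().fit(X, y)
    for name, w in zip(["feature_0", "feature_1", "feature_2"], model.coef_[0]):
        print(f"{name}: weight {w:+.2f}")          # sign and size are the explanation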

When it comes to algorithmic biases, Watson said, many people tend to worry that AIs and the data they collect are not “broadly representative of diverse perspectives or populations,” noting the problems with facial recognition technology similar to what’s found in some academic proctoring programs, which have been reported to not recognize faces with darker skin tones.

“We can never collect ‘enough’ data to represent the dynamic, changing and unstable world we live in. But established actors like tech companies work to combat data bias, not to make things more just or fair per se, but because it helps their market share to sell their products to people from lots of different backgrounds and perspectives,” he said. “One of the things that I think is interesting about AI is that it surfaces these ethical issues.”
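
The first step in the kind of bias audit Watson alludes to is typically disaggregated evaluation: scoring a system separately for each population it serves. A hedged sketch, with fabricated group labels and counts standing in for real benchmark data:

    from collections import defaultdict

    def accuracy_by_group(records):
        # records: (group, predicted_match, true_match) tuples from an evaluation set.
        hits, totals = defaultdict(int), defaultdict(int)
        for group, pred, truth in records:
            totals[group] += 1
            hits[group] += int(pred == truth)
        return {g: hits[g] / totals[g] for g in totals}

    # Illustrative numbers echoing the proctoring disparity reported above:
    results = ([("lighter skin", True, True)] * 95 + [("lighter skin", True, False)] * 5
               + [("darker skin", True, True)] * 70 + [("darker skin", False, True)] * 30)
    print(accuracy_by_group(results))  # {'lighter skin': 0.95, 'darker skin': 0.7}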

While it’s difficult to predict where advancements in AI will take machine learning technology in the coming years, he noted that developers and consumers may need to expect the unexpected. At the same time, he said he expects the technology to play an increasingly important role in daily operations across industries as it advances, and as society pushes to make AI-based educational programs and products more commonplace.

Despite the uncharted territory, Watson said he’s optimistic about the future of AI technology and its potential for social good.

“I think more and more people will work directly with AI technologies in the next decade. That said, those technologies will not look very spectacular. This push will focus on the large number of functions that are required for doing all kinds of things but are generally not part of our usual conception of ‘intelligent behavior,’” he said, adding that he expects more jobs to involve some sort of AI technology.

Watson said one of the useful things about AI is that it’s relatively unresponsive to social opprobrium.

“The bots generally see patterns in data pretty clearly. If they’re behaving outrageously, that’s a reflection of the environment they’re created in. This is often a positive thing, because it makes it hard for someone to simultaneously have the technical power that derives from these new AI technologies, and power derived from traditional networks of wealth and social connections,” he said. “I’m looking forward to a world where technical skills are a source of influence beyond those traditionally embedded in social networks. The big issue, of course, is access and education.”

Magee said the university hopes to spread discussions such as these, and activities similar to the lab’s, to other institutions interested in establishing their own AI development-centered programs. He said Minerva is working with partners around the world in hopes of launching similar lab models in other sectors, such as sustainability and geopolitics.

“Minerva would like to expand this successful approach to project-based learning to enable students to develop and incubate ethical, practical and real solutions to the world’s most pressing issues and innovations,” he wrote.

Brandon Paykamian is a staff writer for Government Technology. He has a bachelor's degree in journalism from East Tennessee State University and years of experience as a multimedia reporter, mainly focusing on public education and higher ed.