
Opinion: Teaching Emerging Tech and an 'AI Bill of Rights'

Cybersecurity professor and ethicist Ed Zuger discusses teaching technology ahead of the curve, and whether an Artificial Intelligence Bill of Rights might lay the groundwork for responsible innovation.

(TNS) — At my institution we take great pride in remaining innovative and keeping our technology lessons apace with the times. The past couple of years have been rife with meetings, research, and development around new technologies that most American consumers know are coming, though don't yet deeply understand.

Robotics and artificial intelligence are two sometimes intertwined examples. We have been building and launching degrees and courses in these fascinating, cutting-edge areas of tech. Both are, from a historical perspective, in relative infancy. Both are also not merely coming; they have already been here. While we crest the frontier to keep ahead of the masses, we also know that others plunged in headlong long ago. Heck, it was Louisville, Kentucky's own George Devol who in the 1950s invented and patented a reprogrammable robot in the fashion we still picture them. Since his "Unimate," robots have grown with us.

Not long after that first-gen version spawned from science fiction and imagination, "the academy" knew it had to build knowledge around the subject. The computer era ushered in all sorts of development: the Internet, space travel, and robotics solutions. Robots grew into pop culture, informing the masses and scaring them all the while.

Yet I can contend with peace that in the 2020s, as we build academic facilities and faculties to teach the next generation about robotics and AI, we are still doing so on the edge. There is something important and beneficial about not being the literal first. What rings truest about that is called "lessons learned." Laying the pavement just a little further down the pathway than the earliest adopters gives one the advantage of seeing, and then curing, the ills they suffered without any sort of playbook. Their advantage, if not obvious, is that they were the first to embark, and so will always have that glory, as it were, and any reputation that goes with it. Back in my wheelhouse of higher ed, and off the road-building metaphor, somewhere like MIT fits the bill in these terms as they relate to robotics education. The Massachusetts Institute of Technology is one of those places that has, over time, become other institutions' shorthand: "Wide Plains University has a robotics lab that's 'the MIT of the Midwest.'"

If the span of cutting-edge, emerging technologies and their education can be described in decades, then it should be no surprise that the governmental timeline for robotics also has a long pathway and is still learning its lessons. I'm moving now from the informational lane of the masses' conception of robotics toward the scarier one. I'll never excitedly cry out, "The robots are coming! The robots are coming!" Maybe "scary" is too harsh, though, so let's leave it at wary. We need to be aware, informed, and to understand the limits of these products of the human mind and our endeavors. After all, we do not [yet?] have a robot that can build new robots totally independent of human drive and decision-making.

Here's a phrase I never anticipated writing while in law school in the aughts, or anywhere earlier than that, including a dark movie house showing a purportedly scary robot movie; think "War of the Worlds." The White House is pushing for an Artificial Intelligence Bill of Rights. Whaa?!

Does AI have rights? Do Americans have some sort of right over robots? It's new and confusing, and I had to try to understand, especially and selfishly as I continue to support academic growth in AI and robotics. These are some of the countless lessons learned that coming in a wee bit late, while remaining cutting-edge, affords us.

In an opinion piece authored by the President's Science Advisor and Director of the White House Office of Science & Technology Policy, the problem that led to the idea of an AI Bill of Rights was not whether or to what extent a robot has rights, or whether AI as a function bears any rights. The rights at issue, like all our other constitutional rights, are those that must be distributed equitably to all people, no matter their diverse underpinnings, ensuring the full inclusion of everyone. With control, AI and robotics can remain helpful and the scariness can be averted. That's the gist, at least.

Behind that irrefutable stance—all people being treated equally—lies the technology inherent to the problem. Robots, as Devol demonstrated and any textbook instructs, are the products of and thereby limited by the abilities, intellect, and actions of mankind. They are governed by humans, sure, but more mechanically speaking they are programmed and react to data. Data that humans derive and plug into the robot's processors makes the thing do what it does, whether it merely vacuums your living room while you're away—the data having been your "training" the robot vacuum about where walls and furniture are—or it is so sophisticated a robot that it sandwiches together the body and chassis of a Corvette. Data drives the robots.

The White House, and other proponents of an AI Bill of Rights, seem to be acknowledging that data itself is prone to problems, especially in the diversity, equity, and inclusion arena. Biased data in ... robotic problems of exclusion out. The U.S. government, following this logic, will try to control what data gets into robots because it cannot control the robots once the data is in.

An AI Bill of Rights isn't a new idea; European nations had already begun paving that pathway. Now the U.S. can learn those lessons and, maybe, program a viable framework to make robotics solutions fair, just, and reflective of our diverse culture. My students, at least, will learn those important robot lessons.

Ed Zuger is a professor of cybersecurity at University of the Cumberlands, an attorney, and a trained ethicist. Reach him at edzugeresq@gmail.com.

©2022 The Times-Tribune (Corbin, Ky.). Distributed by Tribune Content Agency, LLC.