Will Our Smart Gadgets Become Trusted or Oppressive Companions?

As we turn more of our decision-making over to devices, experts say, our reliance on these interconnected tools will far surpass today’s dependence on smartphones.

(TNS) — Like legions of hyperactive butlers, many of the brainy gadgets being developed for the Internet of Things will anticipate our needs and make choices for us — without being told what to do — marking a momentous transformation in our relationship with machines.

As we turn more of our decision-making over to the devices, they will evolve into our personal confidants and counselors, determining everything from the time we wake up and clothes we wear to the music we listen to and route we take to work. In the process, experts say, our reliance on these interconnected tools will far surpass today’s dependence on smartphones.

These autonomous assistants are widely expected to help us stay healthier, take better care of our loved ones, live more comfortably, become more environmentally responsible, and boost our productivity by freeing us from an endless array of mundane, everyday tasks so we can concentrate on the most important ones.

But social scientists and others worry these computerized devices might make decisions that are seriously flawed or that we otherwise dislike, leaving us feeling less in control of our lives. More troublingly, their ceaseless surveillance could result in an excessively conformist society, some experts fear — especially with government and other entities exploring the use of these intelligent machines to identify and deter “abnormal behavior.”

“When we’re not being tracked, we’re more free to experiment, to be our authentic selves, to read new things, to be different kinds of people,” said Neil Richards, a law professor and privacy specialist at Washington University in St. Louis. But such omnipresent monitoring, he believes, “menaces our society’s foundational commitments to intellectual diversity and eccentric individuality.”

Stanford University researchers believe society may be profoundly affected by Internet-of-Things machines endowed with “artificial intelligence,” generally defined as humanlike capabilities. So in December they began a century-long study of the technology — with findings to be published every five years — in part to assess the implications “of systems that can make inferences about the goals, intentions, identity, location, health, beliefs, preferences, habits, weaknesses, and future actions and activities of people.”

Understanding such effects is crucial, experts say, because the technology is rapidly being adopted. About 13 percent of consumers already have outfitted their homes with a smart thermostat, security camera or other device, according to an Internet-of-Things study in August by consulting firm Accenture. Within five years, it added, that figure will likely hit 69 percent.

Instead of just doing what we command, many of the devices are being empowered with sophisticated software and microelectronics to act on their own initiative as our personal advisers.

Seattle’s Pith makes a smart furniture fabric called BackTracker, which the company says “nags” people to correct the way they sit if their poor posture might cause them back pain. A computerized fork from Hapilabs in Hong Kong admonishes users with lights and vibrations when they eat too fast. And Atlanta-based Monsieur claims its “intelligent bartender” not only remembers which alcoholic drinks its owner prefers, but “knows when you’ve had a long day at work and offers a double instead of a single.”

That’s just for starters, according to this prediction from Santa Clara chipmaker Intel about the technology that’s coming:

“Your bed knows when you wake up. It tells the radio to switch on so you can listen to the traffic and weather report or music it knows you enjoy. It tells the coffee machine to make a fresh pot. When you prepare for the day, your toothbrush notifies you that it’s time to see the dentist and it schedules an appointment based on your availability. Your shower adjusts its temperature based on your preference and when you go to the bathroom mirror, it reminds you to take your vitamins. As you get dressed, your closet mirror helps you choose an outfit based on the weather and what activities you have planned. As you leave the house, a display on the way lets you know you forgot your wallet.”

Mary Czerwinski, a Microsoft research manager and cognitive psychologist, said it’s conceivable some machines might function for their owners as a kind of psychotherapist, noting that people will grow so close to their devices that they’ll think, “What would I ever do without this?” — not unlike the relationship depicted in the movie “Her.”

To heighten such emotional attachments, some consumer robots are being given lifelike human features, and researchers at the University of Hertfordshire, in England, are developing versions “capable of expressing anger, fear, sadness, happiness, excitement and pride.”

Yet the prospect of machines making decisions for people stirs mixed emotions.

Consider the self-driving cars being developed by Google and many automakers. Of more than 15,000 vehicle owners surveyed this year by market researcher J.D. Power, only about 1 in 4 expressed interest in having their next car chauffeur them about. Among those looking forward to that is 61-year-old Deryl Stanley, vice president of a club whose members customize classic cars.

“One day I’ll be 90 years old and need to go to the doctor, pick up some groceries and drop off some laundry,” he said. “It sure would be nice to go out to the garage, punch in where I want to go, and let it take me there.” Besides, he added, the autonomous vehicles “could virtually eliminate all the problems associated with driving under the influence.”

But fellow club member Joe Wilder, a 72-year-old retired drug-company salesman who has lovingly restored a 1956 Crown Victoria, is less enthusiastic.

“Self-driving cars may be safer, but I don’t think the drive will be as enjoyable as when I have the ability to speed up, slow down, wander here and there, and feel the car in my control,” he said. “Technology has taken a lot of living life away from us.”

One concern that could shape how we feel about the Internet of Things is that, as some experts fear, the technology might prove prone to malfunctions.

That might not be a big problem if a smart refrigerator gets confused and orders too much milk, said Jörg Denzinger, a University of Calgary computer science professor. But in a computerized transportation system, where cars automatically relay braking alerts to each other in emergencies, a glitch could cause multiple crashes, he said, adding that designers of the technology “need to be careful.”

Although experts say smart devices generally won’t make decisions for people without at least initially seeking their consent, anybody hoping to approve every action their gadgets take would quickly suffer what researchers call “consent fatigue.” As a result, it’s widely expected that people will give their devices the power to act independently much, if not most, of the time.

However, that could produce an unhealthy “techno-dependency” in people, resulting in them losing self-reliance and suffering “a lack of depth and breadth of understanding about how the world works,” according to a Microsoft forecast on the impact of smart devices in coming years. “If we are not careful, undermining these values may make the world of 2020 a much less rewarding world to live in.”

Others worry that human and machine goals may conflict, particularly if individual and societal interests clash.

You may want your smart car to drive the quickest route, but for environmental reasons it might be programmed to choose slower roads that minimize fuel consumption, Israeli researchers have speculated. And if you’re hospitalized with an illness, they added, it’s conceivable your doctor’s smart software might oppose giving you an effective new antibiotic, to limit the general population’s risk of becoming resistant to the drug.

Such clashes could “get creepy” — and perhaps insulting — noted Martin Reynolds, a fellow with the research firm Gartner, who speculated during an Internet-of-Things conference that you might be dying for Kentucky Fried Chicken one day, but your smart car — knowing you’re overweight — “directs you to someplace to get a salad.”

Having your consumer gadgets scrutinize and record everything you do also could get disconcerting.

To test that, researchers at the Helsinki Institute for Information Technology installed video cameras, microphones and other monitoring gear in 10 Finnish households in what was billed as a groundbreaking study, despite its limited size, to learn how devices might affect people. While most of the subjects got used to being incessantly observed, some grew so annoyed they hid their activities by blocking the cameras’ view or turning them off.

For some of the subjects, “the surveillance system proved to be a cause of annoyance, concern, anxiety and even rage,” the study concluded, noting that the snoopy gadgets deprived the participants “of the solitude and isolation they expected at home.”

Aside from worrying about who will see the personal information these gadgets gather on their users and spew across the Internet, privacy advocates fear the technology might turn everyone into timid sheep.

Because the data smart devices gather will likely result in the government and others creating profiles on everyone, “behaving normal will eventually become the ultimate practice in the Internet of Things,” warns Paul De Hert, a criminal law expert at the Institute for European Studies in Brussels.

“It limits creativity, it inhibits individuality, social change, progress,” added Bruce Schneier, a fellow at Harvard Law School’s Berkman Center for Internet and Society. “You get conformity and stagnation. These are really big issues.”

Heightening that concern, government officials in the U.S., Europe and elsewhere are studying the use of smart video-surveillance systems to spot “abnormal behavior.”

One example is BRS Labs’ “behavior recognition system,” which Amtrak has deployed in some of its Bay Area train stations and which San Francisco’s Municipal Transportation Agency plans to use. After several weeks of videotaping a location, BRS Labs’ technology learns to recognize usual patterns of activity and alerts its human operators if it spots anything out of the ordinary, said the company’s chief scientist, Wesley Cobb, adding, “this is stuff that 10 years ago everybody would have said, ‘Nah, that’s science fiction.’”

Two European Internet-of-Things technologists have even proposed sending people warnings through their smart devices if the gadgets detect “behavior violating regulations of a society.” Moreover, “to prevent antisocial behavior from re-occurring,” they suggest “automatic publication of such incidents on the web,” a strategy they term “name-and-shame.”

Many experts believe the benefits of the Internet of Things will far outweigh any problems it causes. Besides, Elizabeth Charnock, CEO of Half Moon Bay software company Chenope, said it’s common for innovations to trigger temporary hand-wringing.

“People start off screaming about privacy,” she said, and then “people just stop thinking about it.”

©2015 San Jose Mercury News (San Jose, Calif.)