CAMBRIDGE, Mass. — Covering technology trends for more than 30 years can leave you a bit jaded about what might be considered new, cool and transformative. Too often, what is billed as innovative turns out to be a solution that benefits only the small portion of the population that can afford it, while its actual impact might best be described as superficial.
Imagine a robotic device assisting a disabled or elderly person by handing them a glass of water so they can take their medication or bringing them utensils so they can eat their meal. It could happen soon, but first scientists have to figure out the problem of manipulation: identifying an object and then picking it up, said Stefanie Tellex of Brown University, who is working on robots that can help people.
“The problem right now,” she said, “is that most robots can’t pick up most objects most of the time.”
That’s because human environments are tremendously complex. The solution is to develop computer vision so that robots can both recognize what an object is and then pick it up with precision. To do that, Tellex is crowdsourcing the effort across more than 400 research robots, known as Baxters, which practice picking up a variety of objects. The visual information captured during this learning process will allow other robots to learn more rapidly what an object is (a ruler? a fork? a glass?) and how to pick it up correctly.
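The pooling idea can be sketched in a few lines: if each robot logs its grasp attempts as (features, success) pairs, one classifier trained on the combined logs learns faster than any single robot could on its own. This is a toy illustration, not Tellex's actual system; the features, labels and data-generating rule below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([1.5, -2.0])  # hidden rule deciding which grasps succeed

def robot_grasp_log(n):
    """One robot's log: 2-D grasp features and success labels."""
    X = rng.normal(size=(n, 2))
    y = (X @ w_true > 0).astype(float)
    return X, y

# Three robots each contribute 200 attempts; pool them into one dataset.
logs = [robot_grasp_log(200) for _ in range(3)]
X = np.vstack([x for x, _ in logs])
y = np.concatenate([t for _, t in logs])

# Train a shared logistic-regression grasp predictor by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted success probability
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

accuracy = (((X @ w + b) > 0).astype(float) == y).mean()
```

Each robot's data alone would be noisier and sparser; the shared model is what every robot downloads back, which is the point of the crowdsourcing approach.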
Robotics could also be a tremendous help to first responders, entering structures too unsafe for humans. Sangbae Kim, a professor of mechanical engineering at MIT, showed a clip of the interior of the Fukushima Nuclear Power Plant shortly after the earthquake hit in 2011. Because of the high levels of radiation, workers couldn’t get inside to stem the radiation leaks and mitigate the damage. But the right kind of robot could have worked its way through the rubble and gotten to the source of the leaks.
The robot that Kim has in mind is based on “bio-mimicry” and can traverse uneven terrain under its own propulsion. In other words, it is a robot that mimics human- or animal-like motion, able to walk or even run.
“It will be a robot that can go where humans cannot or should not go,” he said, “surpassing the capabilities of humans in dangerous locations.”
While Kim has developed some prototypes that can walk, jump and even run, building an autonomous robot that can precisely mimic human action is still a way off.
The human genome contains some 6.4 billion letters of genetic code. A change in just one can cause a disease. So far, medical science has identified about 5,000 diseases based on genetic defects, afflicting millions of people. The question more scientists are asking these days is, “Why can’t we edit the genetic code to fix the disease?” said Nessan Bermingham, chief executive officer and founder of Intellia Therapeutics.
The answer to “why” is beginning to turn into “how” — and more recently “when” as new breakthroughs make it possible for scientists to edit certain genes at the cellular level and to slow down, stop and possibly reverse some genetic illnesses, according to Bermingham. Already, scientists have figured out how to edit a gene by insertion, deletion or repair, by taking certain cells out of the body, changing them and then returning them to the body.
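At the level of sequence text, the three operations Bermingham describes map onto simple string edits. The snippet below is a toy illustration only; real gene editing (with tools such as CRISPR-Cas9) acts on DNA inside living cells, and both sequences here are invented.

```python
ref     = "ATGGTGCACCTG"   # hypothetical healthy reference sequence
patient = "ATGGTGCATCTG"   # same sequence with a point mutation (C -> T)

def repair(seq, pos, base):
    """Replace the base at pos (the 'repair' edit)."""
    return seq[:pos] + base + seq[pos + 1:]

def delete(seq, pos, n=1):
    """Remove n bases starting at pos (the 'deletion' edit)."""
    return seq[:pos] + seq[pos + n:]

def insert(seq, pos, bases):
    """Insert bases before pos (the 'insertion' edit)."""
    return seq[:pos] + bases + seq[pos:]

# Correcting the patient's point mutation restores the reference sequence.
fixed = repair(patient, 8, "C")
```

The hard part is not the edit itself but delivering it to the right position in the right cells without off-target changes, which is why the work remains experimental.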
Gene editing is cutting edge, but it’s also still experimental. While scientists can change a single gene, the next goal is to tackle more complex changes in the genome of the human body. Clinical trials involving humans could start in 2018.
Another form of gene therapy involves special kinds of cells, known as T cells, that can attack and kill unhealthy cancer cells, according to Marcela Maus, director of Cellular Immunotherapy at the Massachusetts General Hospital Cancer Center. By combining computer technology with immunotherapy, Maus and others have figured out how to enhance T cells so they can attack and reverse the effects of certain cancers, such as leukemia and lymphoma.
Will cancer eventually be cured? “Some cancers might be cured,” said Maus, “but it’s hard to say all cancers will be cured. We need to continue working at combining gene editing and immunotherapy.”
Hard to believe, but nearly 4 billion people on the globe do not have access to the Internet. The barriers are threefold: lack of infrastructure, lack of affordability and lack of necessity. Facebook, which already has 1.7 billion monthly active users, wants to bring down at least two of those barriers, according to Yael Maguire, who works at the Facebook Connectivity Lab.
It starts by mapping where the unconnected (and those connected at very low speeds) live. Facebook is using machine learning tools to create high-resolution maps that identify where unconnected communities are located and how many people live there. So far, the company has identified at least 1.6 billion people who live outside the range of mobile networks.
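Once settlements and network infrastructure are both on the map, counting the unconnected reduces to a coverage check. The coordinates, populations and coverage radius below are made up for illustration; they stand in for Facebook's actual pipeline, which works from imagery at vastly larger scale.

```python
import math

TOWERS = [(0.0, 0.0), (10.0, 0.0)]   # hypothetical tower positions, in km
RADIUS_KM = 5.0                       # assumed coverage radius per tower

# Hypothetical settlements: (position, population).
settlements = [
    ((1.0, 1.0), 1200),    # within range of the first tower
    ((9.0, -1.0), 800),    # within range of the second tower
    ((30.0, 30.0), 500),   # far from any tower -> unconnected
]

def covered(pos):
    """A settlement is covered if any tower lies within RADIUS_KM of it."""
    return any(math.dist(pos, t) <= RADIUS_KM for t in TOWERS)

unconnected = sum(pop for pos, pop in settlements if not covered(pos))
```

Summing the populations of uncovered settlements over the whole globe is, in essence, how a figure like 1.6 billion is arrived at.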
Now the company is building unmanned aerial vehicles (UAVs) to bring digital infrastructure to some of the most remote parts of the world. The company wants its Aquila drone to become the next communications network in the stratosphere. Aquila is a solar-powered UAV, which Maguire referred to as a high-altitude platform, that will eventually fly for months at a time, providing Internet connectivity to rural locations at low cost. Facebook must first figure out, however, how to keep such a large drone aloft for long periods at altitudes where the atmosphere is extremely thin. Getting Aquila into full production is years away, Maguire conceded.
The company is also working on ways to bring low-cost connectivity to urban environments in developing countries. The project known as Terragraph will fill a city with small nodes capable of providing high-volume connectivity at low cost.
The secret to Terragraph’s potential is its use of the so-called V-band, an unlicensed radio spectrum that has been ignored by most carriers up to now because of interference from oxygen and water. By saturating an area with numerous nodes, Facebook believes it can overcome the limitations of the V-band and open it up to use in urban locations where fiber or cable connectivity is too expensive and existing mobile networks are too limited.
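Rough link-budget arithmetic shows why node density matters. Near 60 GHz, the part of the V-band Terragraph targets, oxygen absorbs roughly 15 dB per kilometer, a penalty that grows linearly with distance on top of ordinary free-space path loss. The sketch below uses the standard free-space formula and that assumed absorption figure to compare a long hop with a short one.

```python
import math

F_GHZ = 60.0         # carrier frequency in the V-band
O2_DB_PER_KM = 15.0  # assumed oxygen absorption near 60 GHz

def path_loss_db(d_km):
    """Free-space path loss plus oxygen absorption, in dB."""
    fspl = 92.45 + 20 * math.log10(F_GHZ) + 20 * math.log10(d_km)
    return fspl + O2_DB_PER_KM * d_km

long_hop = path_loss_db(1.0)    # ~143 dB over a 1 km hop
short_hop = path_loss_db(0.2)   # ~117 dB over a 200 m hop
```

Short hops between densely placed nodes sidestep most of the oxygen penalty, which is the intuition behind saturating a city with nodes rather than relying on a few long links.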
Can a computer describe what it sees? Thanks to deep learning, a form of artificial intelligence, computers are beginning to learn how to define what’s in an image. Russ Salakhutdinov, an associate professor at Carnegie Mellon University, provided examples in which a computer not only recognized a cat in a photograph but also added more information, such as the box the cat was in and the color of the pet.
For humans, such recognition might seem simple; for computers, this kind of image understanding, called “multimodal learning,” is a breakthrough, according to Salakhutdinov. The goal is to have computers extract knowledge from just a small number of images and learn how to tell different stories based on the image. It is still error-prone: Salakhutdinov showed a photo of a duck swimming on water, with its reflection visible, and the computer saw two ducks. Teaching computers to perform a task such as image recognition both accurately and quickly is incredibly difficult.
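The idea of generating a description conditioned on an image can be caricatured with a lookup table standing in for the learned model. Real captioning systems use neural image encoders and sequence decoders trained on large datasets; everything below (the detected tag, the vocabulary, the table) is invented for the sketch.

```python
# Toy stand-in for p(next word | image, history): given the image's
# detected tag and the previous word, look up the most likely next word.
NEXT = {
    ("duck", "<start>"): "a",
    ("duck", "a"): "duck",
    ("duck", "duck"): "on",
    ("duck", "on"): "water",
    ("duck", "water"): "<end>",
}

def caption(image_tag, max_len=10):
    """Greedily decode a caption, one word at a time."""
    words, prev = [], "<start>"
    for _ in range(max_len):
        nxt = NEXT.get((image_tag, prev), "<end>")
        if nxt == "<end>":
            break
        words.append(nxt)
        prev = nxt
    return " ".join(words)

print(caption("duck"))   # prints "a duck on water"
```

The duck-and-reflection failure maps onto this picture directly: if the recognizer emits the wrong tag (two ducks instead of one), every word decoded afterward inherits the mistake.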
But Silicon Valley sees deep learning as the future of AI; companies are spending hundreds of millions of dollars on AI labs and are in a bit of a recruiting arms race for the best minds — like Salakhutdinov. When he was introduced to the audience at EmTech, the moderator announced that the scientist had just taken a job at Apple to help with its deep learning projects. Salakhutdinov would not say what those projects involved, but according to a recent tweet, he will serve as Apple's director of AI research.