Cerf on the Net

Predictions for the future from one of the fathers of the Internet.

More than 30 years ago, Vint Cerf and colleague Robert Kahn - performing research sponsored by the Defense Advanced Research Projects Agency - created the core standards that allow computers across the globe to link together.

The two men developed the transmission control protocol/Internet protocol (TCP/IP) suite, a stack of networking protocols that forms the Internet's foundation. Ultimately their work revolutionized how citizens, businesses and governments use and share information.

Today Cerf is vice president and chief Internet evangelist of Google, where part of his job is to identify new Internet applications and technologies. In addition, Cerf is chairman of the Internet Corporation for Assigned Names and Numbers (ICANN), a nonprofit organization that coordinates Internet domain names and IP addresses globally.

Cerf spoke with Government Technology at Google's Washington, D.C., offices. During the hour-long conversation, Cerf discussed numerous issues that will shape the Internet's future, including Net neutrality, municipal wireless projects and mobile connectivity.

 

GT: A quote from you says that 99 percent of all Internet applications haven't been conceived of yet. Why do you say that?

Cerf: The basis for my speculation is to look at the rate at which new ideas are coming along on the Net, either within the Web context or elsewhere. There is an increasing number of people with capability and interest in building applications on the Net. You can predict even now, with only 1 billion users on the Net, that as we move toward the next decade of the 21st century, maybe we'll have 5 billion users - that's a factor of five right there. And some of these things are not linear in terms of the rate at which inventions happen. Every time somebody invents something that's successful or comes up with a new standard, it creates another platform on top of which invention can happen. So this thing is a positive feedback loop.

 

GT: What will the Internet look like in the future?

Cerf: We can already see some very clear trends, and I think the clarity of my vision probably doesn't go more than five or six years out. One thing for sure is that an increasing number of applications will be available on mobile devices. Second, the speeds at which you can access the Net will increase over time, both in the wireless and the wired world. Third, more and more devices are going to be Internet-enabled, which means they can be managed through the network. You can imagine replacing all the remotes that control your entertainment equipment with a single mobile device that interacts with them through the Internet, which means it could be anywhere. You don't have to be at home in the living room or entertainment room controlling the device directly with an infrared signal. Instead you're talking through the network to those devices, and of course strong authentication keeps the 15-year-old next door from reprogramming your entertainment system.

Another thing we'll see is an increasing number of sensor-type systems being part of the Internet, so their information is accessible that way. It could be buildings or automobiles that are instrumented. Devices we carry around might be capable of detecting hazardous materials in the air. They may even be capable of detecting humidity, temperature and other very basic things. But the result of collecting all of that information is a micro-view of climates or weather, making our weather prediction even more precise because of the data we get.

Beyond that, it's a little hard to say, except for an effort to expand the Internet's operation so it can work across the solar system. That's part of an application I have been working on with the Jet Propulsion Laboratory, and more generally with NASA. It is reasonably predictable that during the second decade of the 21st century, a networking platform for deep-space communication will emerge and make the kinds of spacecraft we use for exploration more flexible. Often these spacecraft are single-platform devices, and you talk to them through a single radio link from Earth. The exchange is just two-way.

As we build more flexible networking capabilities that can work in deep space, we can imagine constellations of spacecraft, sensor networks and orbiters all communicating locally with each other, maybe on an interplanetary basis, and not necessarily just back to Earth. So in the very much longer time frame - 20, 30, 40 years from now - we might see quite a collection of devices around the solar system interacting through this deep space interplanetary network.

 

GT: How far along is that work?

Cerf: The new protocols required to make things work flexibly in deep space across interplanetary distances are pretty well stabilized. In fact, when we were working on the interplanetary networking design, we realized that we actually ended up working on a special case of a more general concept called delay- and disruption-tolerant networking. When you're communicating with something on another planet, the planet's rotating and you're cut off from communication until the rotation brings it back in view. Or you may not be able to talk to an orbiter when it's behind a planet. So those are types of disruptions. Delay, of course, is inescapable because of the distances between the planets. They are literally astronomical.

We looked at the general case of delay and disruption tolerance in networking protocols and realized this would apply to certain tactical situations here on Earth. In tactical mobile communications, you're using radios, you're moving around, and the connectivity is varying. You might lose radio contact, or you could be jammed. A variety of impairments could occur which cause communication to be disrupted for uncertain amounts of time. Hence, delay and disruption tolerance is needed.
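To make the idea concrete, here is a minimal Python sketch of the store-and-forward behavior that delay- and disruption-tolerant networking relies on. It is an editorial illustration, not the actual DTN bundle protocol; the node and method names are invented for the example.

```python
from collections import deque

class DTNNode:
    """Toy illustration of delay/disruption tolerance: a node holds
    ("stores") outbound messages while its link is down and forwards
    them once contact resumes, instead of failing the send."""

    def __init__(self, name):
        self.name = name
        self.buffer = deque()   # messages waiting for the next contact
        self.link_up = False    # e.g. the orbiter is behind the planet

    def send(self, message, peer):
        if self.link_up:
            peer.receive(message)
        else:
            # No end-to-end path right now: keep custody of the message.
            self.buffer.append((message, peer))

    def contact_established(self):
        """Called when the link comes back (the planet rotates the
        station back into view); flush everything held in the buffer."""
        self.link_up = True
        while self.buffer:
            message, peer = self.buffer.popleft()
            peer.receive(message)

    def receive(self, message):
        print(f"{self.name} received: {message}")

# Usage: a message queued during an outage is delivered afterward.
earth, orbiter = DTNNode("earth-station"), DTNNode("mars-orbiter")
earth.send("telemetry request", orbiter)   # buffered, link is down
earth.contact_established()                # delivered now
```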

We've been testing that theory with the Defense Department. We've taken the space protocols and implemented them for the Marine Corps, which is trying them out in tactical environments, and they work very well. Then we worked with the sensor network people to use these techniques for sensor networks, and that's working out very well. This is sort of the nascent beginning of a whole new class of communication protocols that are not based on exactly the same assumptions the TCP/IP protocols were built on 30 years ago.

 

GT: So that research is producing benefits right now?

Cerf: Yes, terrestrially. That was a very satisfying outcome because people were saying, "Why are you bothering to network the planets? It's 100 years from now." In truth, the initial motivation was simply to look as far ahead as possible and say, "What would we have to do if we really wanted to have an interplanetary network? What would it look like?" But then we realized there were some terrestrial applications. That's true for the civilian sector also. People who carry mobile devices are well aware of the potential for discontinuity and impairment, and these protocols try to overcome that.

 

GT: What are some barriers to continued Internet expansion?

Cerf: The network needs to go through some major changes to continue to grow. The current design was standardized in 1978. The address space available for the Internet protocol is 4.3 billion unique terminations. It's called IP version 4, and the address space is 32 bits long, and those 32 bits are not used absolutely 100 percent efficiently. So we can foresee a time when the allocations of addresses will end, at least from the ICANN point of view. Its IANA [Internet Assigned Numbers Authority] function hands out address space to the regional Internet registries, which in turn hand it out to the Internet service providers [ISPs].

We can foresee a time around 2011 when there won't be any more address space for IANA to hand out. By that time, I'd like to see everyone ready to operate using IP version 6. It was standardized some years ago, probably in 1995 or 1996. But in the 10-year interim, there has been enough IPv4 address space available - and certain types of hacks, called network address translation, have allowed the Internet user and provider communities to avoid moving to IPv6.

But they can't avoid that after 2011. So I've been encouraging everyone to move quickly to have a v6 capability in parallel with v4. Some people speak as if you throw a switch and everyone is running v6, but these will have to run in parallel for some time. If you're a server, you want both v4 and v6 access to the Internet so that you don't care which protocol your customers use to reach you. The reason IPv6 is important is that it has a 128-bit address space - that's about 340 trillion, trillion, trillion addresses, or 3.4 x 10^38.
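As a quick back-of-the-envelope check, both figures fall straight out of the bit lengths Cerf mentions; a few lines of Python reproduce them:

```python
# Back-of-the-envelope check of the address-space figures above.
ipv4_addresses = 2 ** 32      # 32-bit IPv4 address space
ipv6_addresses = 2 ** 128     # 128-bit IPv6 address space

print(f"IPv4: {ipv4_addresses:,}")            # 4,294,967,296 (~4.3 billion)
print(f"IPv6: {float(ipv6_addresses):.3e}")   # ~3.403e+38 (~3.4 x 10^38)
```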

The technology is more or less there. Router vendors have already implemented it. It's sitting in Windows Vista, or XP for that matter. It's sitting in Mac OS X, and it's available in Linux and the other derivatives of UNIX. There are still some issues with the network management software that needs to run simultaneously with v4 and v6, and the ISPs have not been offering IPv6 service readily because people haven't been asking for it. People need to realize that when you do run out of v4 address space, the only way to expand is to have v6 capability. We want to be there ahead of time so we don't have a big crisis.
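For a sense of what running v4 and v6 in parallel looks like on the server side, here is a small Python sketch of a dual-stack listener. It assumes an operating system that lets a single IPv6 socket also accept IPv4 connections (the IPV6_V6ONLY option); the port number is arbitrary and error handling is omitted.

```python
import socket

# Minimal dual-stack listener: one IPv6 socket that also accepts IPv4
# clients (shown as ::ffff:a.b.c.d mapped addresses) where the OS allows it.
server = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)

# 0 = accept both IPv4 and IPv6; not every platform permits changing this.
server.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)

server.bind(("::", 8080))   # "::" is the IPv6 wildcard address
server.listen()

conn, addr = server.accept()
print("client connected from", addr)   # IPv4 peers appear as ::ffff:x.x.x.x
conn.close()
server.close()
```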

The second thing happening in the Internet is making the domain name system capable of expressing identifiers in scripts other than Latin characters. We'd like people to be able to make registrations in Cyrillic if they happen to speak Russian or Bulgarian, or one of the languages that uses the Cyrillic script, or in Farsi, Arabic or Urdu, which use the Arabic script, or in Korean, which uses Hangul, or in Chinese, which uses Chinese characters, and so on. ICANN has been working to adopt standards, devised by the Internet Engineering Task Force, for the use of these scripts beyond simply Latin characters.

We are coming to a time now where testing is going on to put entries in the root zone file of the domain name space. The root is the thing that points to ".us" and ".fr" and ".com" and ".net" and ".org." The root today only has entries in it that are expressed in Latin characters. We are going to test putting entries in the root that express identifiers in other scripts - Cyrillic, Arabic and so on - 11 of them altogether. We're hoping that sometime in 2008, we will be able to accept applications for new top-level domains that are expressed in these other scripts.
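For readers curious how non-Latin names travel through today's DNS machinery, the IETF standards Cerf refers to (IDNA) encode each label into an ASCII "xn--" form called Punycode. The short Python example below uses the interpreter's built-in "idna" codec, which implements the older IDNA 2003 rules, so treat it as an illustration; the domain is made up.

```python
# Internationalized domain names are carried in the DNS as ASCII "xn--"
# (Punycode) labels. Python's built-in "idna" codec (IDNA 2003 rules)
# shows the round trip; the name below is a made-up example.
name = "bücher.example"

ascii_form = name.encode("idna")     # b'xn--bcher-kva.example'
print(ascii_form)

print(ascii_form.decode("idna"))     # back to 'bücher.example'
```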

The third thing going on is called DNSSEC, which stands for Domain Name System Security Extensions. The domain name system maps from names - Gmail.com, for instance - to IP addresses. With DNSSEC, when your computer asks, "What's the IP address associated with this domain name?" we'd like to offer a digitally signed answer. This guarantees to the consumer of the DNS service that the information they receive has not been altered since it was placed in the domain name system.

There are some forms of attack against the domain name system today, which involve what we call pollution or compromise of the caches - the information that's accumulated by a resolver near you. Right now, someone can attack the resolver, give www.google.com the wrong IP address, and send you to the wrong place. That would not be a good thing, certainly, from Google's point of view. More importantly, if that happened with your bank, you wouldn't want a party running an application that looks like your bank but is really someone saying, "Please give me your username and password." Digital signatures are one way of removing opportunities for misleading people in the network.
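The sketch below is not real DNSSEC, which has its own record types (RRSIG, DNSKEY, DS) and a chain of trust anchored at the root; it only illustrates, with the third-party Python cryptography package, the property Cerf describes: a resolver that trusts the zone's public key can detect an altered answer. The record contents and keys are invented for the example.

```python
# Conceptual sketch only: real DNSSEC uses its own record types and a
# chain of trust from the root. This shows why a signed answer resists
# cache pollution: tampering breaks the signature.
# Requires the third-party "cryptography" package.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

zone_key = Ed25519PrivateKey.generate()        # stands in for the zone's key
record = b"www.example.com. A 93.184.216.34"   # illustrative DNS answer
signature = zone_key.sign(record)

public_key = zone_key.public_key()             # what a resolver would trust
public_key.verify(signature, record)           # passes silently: intact

tampered = b"www.example.com. A 10.66.6.66"    # attacker-substituted address
try:
    public_key.verify(signature, tampered)
except InvalidSignature:
    print("altered answer rejected")
```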

 

GT: How will government regulation influence IPv6 acceptance and mobile Internet connectivity?

Cerf: One area where I've been vocal, along with others, has to do with how open the Internet access methods are. When the Internet was first becoming available to the public in the early to mid-1990s, most people got access to the network by dialing an Internet service provider. If you didn't like the service from one Internet service provider, you could dial a different telephone number and switch to another. When broadband capability emerged from cable modems and digital subscriber loops, the number of competing ISPs collapsed to the point where some people have no broadband access at all. The FCC estimated in 2005 that something like 60 percent of the country had a choice of either DSL or cable, 30 percent had one but not both, and 10 percent had no broadband at all, often in the rural parts of the country.

Even where there are two competitors, it isn't clear to what degree that drives prices down and increases quality. So this absence of competition is of some concern. Another concern is that the parties offering these broadband facilities - and also the parties that offer wireless capability - are designing business models that constrain what applications are permitted. This isn't as open an environment as the Internet has been in the past. It's the openness, the opportunity to offer arbitrary services, that has given the network its vigor in the economic and innovative sense.

For example, when Larry Page and Sergey Brin started Google, they didn't need permission from an ISP to offer the service. They simply put it up, and if you had access to the Internet, you could go there. There has been a debate under the term "Net neutrality," a term whose definition has been distorted and twisted in the course of the arguments. But from Google's point of view, our interest is keeping the network as open as possible. Once the consumer gets access to the network, they should be free to go anywhere in the world to get any application. If you have a device capable of doing Internet, it should be able to use any Internet service. If it's capable of using the Internet at all, you should be able to download new applications and run them.

In the wireless world, that isn't the case. The platforms, even when they're Internet enabled, are not open - you may not be able to download a new application unless the wireless provider agrees and puts it on the platform for you.

This openness can be seen in different ways, whether it's open platforms, open applications on the platforms, or open interconnection and access to the various Internet-based networks. In the absence of a regime that preserves that openness, the Internet could easily move into a very constrained mode, which makes it look more like cable television. Personally I don't think that would be a good thing, not just because of a personal preference for openness, but because I believe the economic energy behind the network has been aided by an environment where standards are openly available and any platform is free to implement these standards and then access the Internet.

I'm worried the U.S. government, and other governments around the world, may fail to fully understand how important that openness is to the economic benefit of the network.

 

GT: When you joined Google, there was speculation that the company would create a nationwide wireless network. What is Google's vision in that regard?

Cerf: Speculation about national networking may have come about because of the work we did in Mountain View, Calif., and San Francisco with wireless communication. The Mountain View wireless system was a test case to figure out how to respond to a request from the city of San Francisco for bids to put in free wireless services. We built the Mountain View network just to figure out what you have to do and what it would cost, so we could decide if we should actually offer to do this.

The San Francisco situation was fairly complex. If you proposed to build such a wireless network, and that proposal was accepted, acceptance meant you got to negotiate with something like 29 different jurisdictions around the San Francisco Bay Area. So it's taken quite a while to work through that.

But many people, I think, misunderstood our willingness to be good citizens in this wireless effort as the first step in an attempt to do nationwide municipal networking. At the moment, that's not a business model we aspire to.

Google believes that expanding the Internet and providing more of it to more people is a good thing. It certainly is good for our business model, and we're not shy about admitting that. But we also think the Internet has been beneficial to the 1 billion people using it today - unfortunately that's only about one-sixth of the world's population. So we are interested in taking steps that will encourage more Internet to be built. Whether we do it or someone else does is less important than that it happens and that more people have access to the information that's available.

The amount of information available on the Net today is astonishing. The quality ranges from terrible to spectacularly good. That information is now being generated more by users of the Net than by anyone else. The consumers have become producers, which is an interesting phenomenon and has a positive spiral associated with it. The more people share information, the more people get access to that information and use it to invent new things and come up with results.

 

GT: Let me take you back to some of the municipal wireless work that's going on. What's the right role for government in promoting those networks?

Cerf: I'm quite a fan of the notion of a municipal network. I'm particularly conscious of the fact that some residents of municipalities don't get very good access to the network. And there are examples of cities that have promoted implementation of a wireless network throughout the city - Philadelphia being a prime example. I believe that if the citizens of a particular city or county decide amongst themselves that they are willing to tax themselves - in order to issue a bond - to raise the capital needed to build a network, whether it's wireless or wired, they should be permitted to do that.

On the other hand, people might argue that this is the government competing with the private sector. I differ with those who make that assertion, and my rationale is quite simple: what would typically happen is that a city that decided to build a wired or wireless network would almost certainly turn to the private sector to build it and maybe even to operate it. The issue here is not so much that the government is competing with the private sector; it's that there are some parts of the private sector that don't want competition.

So if the money doesn't come to them, they consider it illegal competition or unacceptable competition. I view this quite differently. I think it's an opportunity for others in the private sector to get engaged in networking beyond those who have traditionally been involved in offering networking services.

 

GT: So it's a challenge to their business models?

Cerf: That's it exactly, and the Internet has that characteristic. It's been challenging business models dramatically. It's all a consequence of the dropping cost of digital everything; whether it's digital storage, digital computation or digital transmission, the cost per bit has dropped dramatically. Voice over IP is a good example of that. I don't know whether Skype will survive or not - and I'm not suggesting that it won't - but it's been a remarkably useful and effective tool, and its price is very modest compared to what you typically pay for international calling.

They've found a useful niche for themselves without having to build out the same kind of infrastructure that some of the telcos did. And of course they're taking advantage of the Internet's infrastructure, which has its own underlying business model.

 

GT: What are the biggest threats to the Internet's stability and continued growth?

Cerf: Security is a big issue now, partly because so many people are dependent on the network. This is an area where governments really [should] have a role to play. There are some technical problems with the network right now; denial of service attacks are a serious issue. They can be directed against the domain name system or against particular hosts. There was a recent attack against computers in Estonia, apparently launched by people in Russia, that really looked like a national attack against another country's communications assets.

Some of those denial of service attacks can be defended against. Some are the consequence of botnets, which are large numbers of computers - typically PCs - that have been compromised. They've ingested Trojan horse software that allows a remote party to take control of the machine and cause it to do something. Botnets grab control of these machines by laying traps on Web sites. When you go to the Web site, the browser will often download Java software into its interpreter, and that Java software might actually cause a Trojan horse to be downloaded.

The biggest vulnerability at the browser level is allowing too much to happen: too many of the computer's resources are accessible to the downloaded software. I don't mean to point the finger only at Java; any interpreted language has this potential risk. So there are a lot of botnets out there.

My estimate at the high end is that about 150 million machines may have been infected. The denial of service attacks - and the spam you can generate with these botnets - are serious. People rent these things out. "You want 20,000 machines to attack this guy over here? That will cost you X number of dollars."

Some of those threats can be mitigated by using DNSSEC; some can be mitigated by using what's called IPsec, which builds an encrypted tunnel between computers talking to each other on the Net. If all the traffic going between them is encrypted, attackers can't see it, and so they can't make any predictions. They don't know which protocols are being used, so they can't effectively attack that encrypted stream. Strong authentication using public key cryptography could really contribute to resistance to some of these problems. When you download software, you want to be sure it's been digitally signed by a party you recognize.
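The encrypted-tunnel property Cerf describes can be shown without the full weight of IPsec. The Python sketch below uses symmetric encryption (Fernet, from the third-party cryptography package) purely as a stand-in: once the bytes are encrypted, an observer can no longer tell which protocol is being carried. Key exchange and the actual transport are omitted.

```python
# Not IPsec itself: just the property it provides, shown with symmetric
# encryption. Once traffic is encrypted, an observer cannot see which
# protocol is inside.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # shared by the two endpoints of the tunnel
tunnel = Fernet(key)

plaintext = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
ciphertext = tunnel.encrypt(plaintext)

print(ciphertext[:40], b"...")     # opaque bytes: no visible HTTP structure
print(tunnel.decrypt(ciphertext))  # only a holder of the key recovers it
```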

Those are some of the technical issues associated with network and computer security, but then there are the broader issues - fraud, abuse, harassment and things of that sort. Only governments are in a position to do something about those. They can pass legislation that says the following things are considered antisocial and there will be consequences, and they can establish treaties in which they cooperate to track down perpetrators. Governments have an important role to play in this domain, and I think they are just beginning to appreciate that.