With the project now scheduled for completion by Christmas, Baltimore CTO Chris Tonjes shares how the city's technology situation is going from bad to great.
Never let a good crisis go to waste. That advice, often attributed to Winston Churchill, is one Baltimore decided last year to take to heart. Faced with rapidly decaying infrastructure, city officials needed to modernize fast, and given their dire situation, expectations were low, according to Chief Technology Officer Chris Tonjes. But the city is now on track to complete its infrastructure upgrade by the end of 2013, and Tonjes said it will not only meet today’s industry standards but exceed them.
The city contracted with Cisco, Dell and VMware to create a fabric-based computing infrastructure. It has spent about $1 million on the upgrade so far, which Tonjes reports is about 80 percent complete, and he expects spending to rise slightly: the city still needs to put about $500,000 into network improvements. Overall, though, officials are very happy with how things have turned out, and the city is poised to offer better services, including an internal and external cloud, Tonjes said.
“We were able to take a situation that was really bad and instead of just modernize to get us to a point of technical equivalency to the way most people run things, I think we’re actually in a really good, advanced place,” he said. “And I think we’re going to see a very quick return on investment and a very quick ability for us to move with the kind of agility we need to keep people interested in using our services and to establish ourselves as thought leaders and delivery leaders.”
The city’s infrastructure had to be replaced right away, Tonjes explained, but rather than doing it “willy nilly,” they wanted to look to the future and use their bad situation to take a risk they might not have otherwise taken. “We had a very bad situation where we had extremely old, decrepit storage infrastructure,” he said. “We had really old 10- and 15-year-old switches and we had a set of servers that were at the end of their life and needed to be replaced. We had a gun to our head.”
Fabric-based computing weaves the various infrastructure pieces together, allowing compute, storage and networking to be managed as a single pool. “The benefits of that is that in a data center, theoretically, you can have fewer people because the same people who manage your switches can manage your storage, server and computing power,” Tonjes said.
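The single-pool idea Tonjes describes can be sketched in a few lines of code. The toy model below is purely illustrative, assuming nothing about any vendor's actual product: all class, method and device names here are hypothetical, chosen only to show how one control plane can manage switches, storage and servers through the same interface.

```python
# Illustrative sketch of fabric-style unified management: one control
# plane pools heterogeneous resources. Names are hypothetical, not any
# vendor's real API.
from dataclasses import dataclass, field


@dataclass
class Resource:
    name: str
    kind: str          # e.g. "switch", "storage", "server"
    capacity: int      # abstract capacity units
    allocated: int = 0


@dataclass
class Fabric:
    """A single management plane over many resource types."""
    resources: dict = field(default_factory=dict)

    def register(self, res: Resource) -> None:
        self.resources[res.name] = res

    def provision(self, name: str, units: int) -> bool:
        # Same call provisions ports, gigabytes or VMs alike.
        res = self.resources[name]
        if res.allocated + units > res.capacity:
            return False   # would exceed capacity
        res.allocated += units
        return True

    def utilization(self) -> dict:
        # One view across switches, storage and compute.
        return {n: r.allocated / r.capacity
                for n, r in self.resources.items()}


fabric = Fabric()
fabric.register(Resource("core-switch-1", "switch", 48))
fabric.register(Resource("san-array-1", "storage", 1000))
fabric.register(Resource("esx-host-1", "server", 64))
fabric.provision("san-array-1", 250)
```

The point of the sketch is the staffing claim in the quote above: because every resource type sits behind the same `provision` and `utilization` operations, one team can operate all of them.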
The city has about 10 times the storage it had previously, Tonjes said, along with more computing power, and it can scale resource provisioning as the city expands services. “We can focus really carefully on improving our ability to virtualize servers, applications and even desktops, and we can focus really more on doing software-based things than hardware-based things,” he said. As a bonus, the new system is more energy efficient.
One of the greatest cost savings for the city is that it won’t need to hire more employees to manage its data center as it expands services, he said. This concept has been the subject of several online editorials over the past couple of years, with some worried that fabric-based computing will put some IT professionals out of a job.
In 2012, Gartner analyst Carl Claunch addressed the issue at a conference, arguing that fabric-based computing will increase automation of services, inevitably leaving fewer employees to manage data centers.
Most major vendors now offer some form of fabric-based computing architecture in their hardware lineup. One of the challenges, Tonjes said, is finding a solution that will fit the organization’s needs without being too restrictive. Some vendors require that an organization purchase their entire suite, while other companies allow mixing and matching of products and brands.
Fabric-based computing aligns well with the general movement in IT toward unifying services and simplifying the environment, two benefits in evidence in the Baltimore deployment. One online editorial suggests that fabric computing is not just a buzzword but an important step in data center architecture, one that will allow organizations to focus more on services and less on IT management.