In the public sector, IT teams often face down looming projects, and they frequently discuss how nail-biting an 18-month timeline can look from the business end of the venture. So, imagine, if you will, condensing 18 months' worth of project, whatever it might be, down to a mere 120 days.
It happened in Florida, where the move of a high-value data center, originally slated for 2019, got a sharp legislative prod that started the hands of a stress-inducing doomsday clock.
A move was already on the minds of Florida Agency for State Technology (AST) officials due to the discovery of mold and other problems in the facility, but no one had imagined it would come down to such an air-tight time frame.
While some would recoil at the idea of disassembling, decontaminating, moving and reassembling the state’s most essential IT systems, the Florida team put pedal to metal and got to work, finishing the large-scale move 20 days ahead of schedule — in just 100 days.
“We had a plan to [move the data center], but the plan was around 18 to 24 months,” Chief Operating Officer Kevin Patten told Government Technology, “and all of a sudden, our 18 to 24 months turned literally into 120 days.”
While the fact that they could make such a feat happen is noteworthy on its own, how they did it is of particular interest.
Perhaps at the heart of their success was what many would consider nearly excessive communication between the team at AST and its 12 customer agencies.
Among the agencies relying on the data center were the Department of Highway Safety and Motor Vehicles, the Department of Children and Families, the Department of Business and Professional Regulation, and the Department of Environmental Protection, to name a few. The state’s primary elections and public safety components also folded into the patchwork of considerations.
“It ran the gamut. It probably is a shorter list of who wasn’t affected in state government,” said state Chief Technology Officer Eric Larson. “The earlier [agencies] could get scheduled and orchestrated, the more flexibility [they] would have during the move. It worked extremely well, everybody came in as early as they possibly could with as many resources as they could apply to the situation, knowing that if they were the last ones standing, they were coming over on the truck …”
What began as a break-neck strategy session would evolve into a close-knit group of partners evaluating where agencies could streamline and improve. Agency CIOs and budget officers were gathered to “put dates on paper” and plan for how they could limit downtime of each respective agency’s applications.
“We gave them a list of all of the applications we had documented, all of the servers that they sat on and had them validate that information …” Patten said.
The approach allowed AST customers an opportunity to provide input on their own application operability and the collective mission. And allowing the customers to set priorities and drive the schedule, Patten added, allowed AST the latitude to concentrate on the daunting logistics of the massive move.
“That in and of itself became our target; just don’t let the schedule slide,” he said.
Checklists were also created for the individual applications; one mainframe contained more than 300 unique steps, according to Patten.
Consistent weekly meetings and regular status calls kept everyone on the team informed as to the progress and next steps in the schedule.
Shelley McCabe, chief of Strategic Information Technologies at AST, said the assignment of single points of contact within each agency proved to be a valuable tool for all involved. The ability to reach out and discuss potential issues was made easier through dedicated point people.
In working with the larger agencies, daily conference calls helped to synchronize planning efforts.
“I don’t know if you can over-communicate, but we probably came about as close as you could,” CIO Jason Allison said. “At the highest level, I sent out something to all of my agency head peers on a weekly basis letting them know not just where they were, but where the project was in general, as well as calling my peers who are customers of ours …”
On the technical side, the prospect of having to move all of their data center kit prompted some agencies to opt for virtualization, while others threw their names on the calendar to hold their spot for the moving truck. In total, 2,051 servers and more than 1,000 terabytes of data needed to make their way across Tallahassee to the new data center.
“One of the things that really made the move much easier was the fact that there was a lot of equipment sitting on the floor, and to look at it in all of its quantity was very daunting,” said Cathy Kreiensieck, chief of Infrastructure and Facilities. “But at the same time, a lot of these agencies we were trying to do virtualization with, and when they were told they had to move, they actually wanted to do the virtualization. That made a world of difference for us.”
Those sitting on the virtualization fence got the push they needed to make the move. For interested agencies, virtualization was folded into the move as a strategy.
“We did see a lot of agencies that were fighting against being virtualized, and then all of a sudden everybody wanted on the bandwagon because the physical equipment we had to move, we actually had to decontaminate it,” Kreiensieck said.
Because of the aforementioned mold issue, each piece of equipment also had to be taken apart, cleaned and put back together to avoid compromising the new facility or the integrity of the servers.
Another challenge the team encountered was that of connectivity and the telecom circuits that were tied in with the physical location. Coordinating the move of servers and equipment — and where they would connect on the other end — needed to be addressed.
While all of this was happening, daily operations were also a top priority. Despite the concentration on what officials agree was a “herculean” task, the regular workload of their respective departments did not decrease. Fortunately, Allison said funding for flexible employee scheduling and support from the highest levels enabled the agency to meet the demands on both sides of the house.
As for lessons learned from the project, the team struggled to find a weak point. The sterling success of the move now represents what Allison called a “wonderful opportunity.”
“The question we’ve been asking ourselves around here is, ‘What made this so successful?’” Allison said. “From a lessons learned standpoint, it’s amazing … what you can do when everybody, the business, the Legislature, the agencies, when everybody is moving in the same direction on something and we all have that singular priority …”
Patten and Allison agree that the lesson learned moving forward is not so much what could have been done differently, but rather how the successes from this project can translate to the next major undertaking.
With the project completed June 17, virtualization efforts are still underway, but the bulk of the now 1,756 servers sits in a clean, secure facility.