Desktop Virtualization Creates New Set of Challenges

Industry Perspective: How government agencies can address demand ‘storms’ while minimizing waste and protecting data.

The amount of digital data generated by the public and private sectors is growing rapidly, and enterprises and government agencies alike are working furiously to patch together strategies to handle the increase. This has fueled the rise of virtualization, which came first to servers and storage devices and is now catching on at the desktop level.  

Designed to provide a smoother experience for end-users and administrators than their physical predecessors, virtualized desktops deliver a lower cost of acquisition and simplified management in a highly scalable, easy-to-deploy and fully protected environment. However, virtual desktop infrastructure (VDI) brings with it a new set of challenges, chief among them server and storage resource allocation, and data protection and recovery. These problems are perhaps nowhere as serious as in government agencies, which must ensure complete data integrity and availability.

The Problem of Virtualization ‘Storms’

For all its benefits, VDI also brings challenges to government agencies grappling with data availability and storage issues. VDI input/output (I/O) “storms” — sharp peaks in demand for server and storage resources — occur several times a day, as large numbers of employees log in, log off and shut down at the same time, and when agencies run antivirus scans or software updates. These storms can degrade the performance of the virtualized environment for long stretches, up to two hours per event. Beyond resource management, desktop virtualization also challenges developers to arm the technology with the data protection capabilities users need in the event of file loss or corruption.

Depending on the operating system and applications in use, a desktop's typical input/output operations per second (IOPS) requirement under normal workload is five to 10 IOPS. During a boot storm, however, that rate can be 10 times higher. The scope of the storm depends on the number of virtual desktop users consolidated on dense server and storage infrastructure. For a large agency, an I/O storm caused by users logging on in the morning, logging on and off at lunchtime or shutting down in the evening can take anywhere from 30 minutes to two hours to dissipate. During that time, an agency and its constituents are likely to see a serious impact on service levels.

One way to avoid that outcome is to size the environment for the worst possible I/O storm, but that approach generates significant waste. Consider an agency with 5,000 employees: if every desktop requires 10 IOPS for normal operations, the storage infrastructure has to deliver 50,000 IOPS to support them all. Now suppose that each morning 10 percent of employees — 500 users — try to log in at the same time, and each login demands 10 times the normal rate, or 100 IOPS. That storm alone generates roughly 50,000 additional IOPS, meaning IT would need to provision 100 percent more. The figures vary from one agency to another, but in this example the total required is 100,000 IOPS.

The result is a massively overprovisioned environment: outside the brief storm windows, half of that capacity sits idle.
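The arithmetic behind that worst-case sizing is easy to reproduce. Here is a minimal sketch using only the figures from the example above, along with the simplifying assumption that the login surge stacks on top of the full normal baseline:

```python
# Worst-case I/O storm sizing, using the figures from the example above.
# All numbers are illustrative; real requirements vary by OS and workload.

EMPLOYEES = 5_000        # virtual desktop users
NORMAL_IOPS = 10         # per-desktop IOPS under normal workload
STORM_MULTIPLIER = 10    # per-desktop demand during a login storm
STORM_FRACTION = 0.10    # share of users logging in simultaneously

baseline = EMPLOYEES * NORMAL_IOPS                     # 50,000 IOPS
storm_users = int(EMPLOYEES * STORM_FRACTION)          # 500 users
surge = storm_users * NORMAL_IOPS * STORM_MULTIPLIER   # 50,000 IOPS

# Sizing for the storm on top of the normal baseline doubles the build-out.
print(f"baseline:   {baseline:,} IOPS")
print(f"surge:      {surge:,} IOPS")
print(f"worst case: {baseline + surge:,} IOPS")
```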

A Data Protection Umbrella

Today it’s possible for data centers to avoid both the waste and the performance degradation caused by I/O storms. Pairing solid-state storage with high-density serial advanced technology attachment (SATA) drives addresses the IOPS and capacity requirements independently: solid-state drives (SSDs) or solid-state memory arrays absorb the peak IOPS demand, while high-density SATA drives provide the bulk capacity. The results are deep reductions in acquisition cost and physical footprint, which in turn lead to significant savings in data center power and cooling.
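As a rough illustration of why splitting the two requirements pays off, consider a sketch under some assumed per-drive figures. The IOPS-per-drive numbers and the 150 TB capacity target below are invented for illustration, not vendor specifications:

```python
# Two-tier sizing sketch: SSDs are provisioned against the IOPS requirement,
# high-density SATA drives against the capacity requirement. All per-drive
# figures and the capacity target are assumptions made for illustration.

import math

REQUIRED_IOPS = 100_000   # worst-case figure from the storm example
REQUIRED_TB = 150         # assumed total capacity target

SSD_IOPS_EACH = 25_000    # assumed sustained IOPS per solid-state drive
SATA_TB_EACH = 2          # assumed usable terabytes per SATA drive
SATA_IOPS_EACH = 100      # assumed IOPS per SATA spindle, for comparison

ssd_count = math.ceil(REQUIRED_IOPS / SSD_IOPS_EACH)   # sized for performance
sata_count = math.ceil(REQUIRED_TB / SATA_TB_EACH)     # sized for capacity

# A single-tier pool sized for the same peak needs vastly more spindles.
single_tier = math.ceil(REQUIRED_IOPS / SATA_IOPS_EACH)

print(f"tiered: {ssd_count} SSDs for IOPS + {sata_count} SATA drives for capacity")
print(f"single-tier SATA sized for IOPS alone: {single_tier} drives")
```

Under these assumed figures, the tiered design meets both requirements with a few dozen drives, where a single-tier design sized for the same peak would need a thousand.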

There are several different levels of data protection and recovery that any VDI environment should have: system data, user data repositories and individual virtual desktops.

1.    System data recovery: VDI environments place system data on the shared storage resources that service virtual server clusters, which allows all of it to be protected with the snapshot and replication capabilities of the storage area network (SAN). Most arrays, regardless of vendor, support both functions. Once system data is protected and replicated, administrators can recover a full environment locally from snapshots, or remotely by failing operations over to another site, making for effective business continuity planning. (A sketch of this protect-and-replicate cycle follows the list below.)

2.    User data repository protection: In many cases, user data in a VDI environment is redirected to network attached storage (NAS) repositories. Most NAS offerings can protect hosted data with snapshots; otherwise, data repositories can be backed up individually. When evaluating data protection for a VDI environment, it’s important to look for solutions that span both system and user data repositories — for consistency and more effective management and recovery capability.

3.    Individual virtual desktop protection and recovery: With traditional backup methods, an end-user relies on the help desk or a system administrator to recover a file in case of data loss or corruption. The administrator must bring up the individual virtual machine from a previous snapshot or the user data repository, locate the file, recover it and send it to the end-user — a taxing process for IT staff. Additionally, the traditional model of backing up physical desktops is unrealistic for an agency running hundreds of backup agents simultaneously on a few virtualized servers.
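To make the first two layers concrete, here is a minimal sketch of a scheduled protect-and-replicate cycle covering system data and user data repositories. Every class, method and volume name below is a hypothetical placeholder for whatever interface an agency’s array vendor actually exposes; the structure of the cycle, not the API, is the point.

```python
# Hypothetical protect-and-replicate cycle for system data (SAN) and user
# data repositories (NAS). All names here are illustrative placeholders,
# not a real vendor API.

from datetime import datetime, timezone

class ArrayClient:
    """Stand-in for a vendor's SAN/NAS management interface."""
    def __init__(self, name: str):
        self.name = name
    def snapshot(self, volume: str, label: str) -> str:
        print(f"[{self.name}] snapshot {volume} as {label}")
        return label
    def replicate(self, volume: str, snapshot: str, target: str) -> None:
        print(f"[{self.name}] replicate {volume}@{snapshot} -> {target}")

def protect_cycle(san: ArrayClient, nas: ArrayClient, dr_site: str) -> None:
    label = datetime.now(timezone.utc).strftime("vdi-%Y%m%dT%H%M")
    # Layer 1: system data on shared SAN volumes backing the server cluster.
    for vol in ("vdi-gold-images", "vdi-system-data"):
        snap = san.snapshot(vol, label)
        san.replicate(vol, snap, dr_site)   # enables remote failover
    # Layer 2: user data redirected to NAS repositories.
    for vol in ("user-home-dirs",):
        snap = nas.snapshot(vol, label)
        nas.replicate(vol, snap, dr_site)

protect_cycle(ArrayClient("san"), ArrayClient("nas"), dr_site="dr-site-01")
```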

File-level protection and recovery methods add a further layer of defense: lightweight continuous data protection (CDP) or near-CDP recovery agents can be embedded in the VDI gold master image. This approach empowers users and enables self-service, file-level recovery; end-users browse their own directory structures to recover lost or corrupted files, freeing IT staff for more critical tasks.
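From the end-user’s side, self-service recovery can look something like the sketch below. It assumes the near-CDP agent exposes point-in-time copies of the user’s directory tree under a read-only snapshot path; the `~/.snapshots/<timestamp>/` layout is a hypothetical convention, not any specific product’s interface.

```python
# Minimal sketch of self-service, file-level recovery. The snapshot layout
# ("~/.snapshots/<timestamp>/") is a hypothetical convention assumed for
# illustration, not a real product's interface.

import shutil
from pathlib import Path

SNAPSHOT_ROOT = Path.home() / ".snapshots"   # hypothetical read-only mount

def list_versions(relative_path: str) -> list[Path]:
    """Return every snapshot copy of a file, newest first."""
    return [snap / relative_path
            for snap in sorted(SNAPSHOT_ROOT.iterdir(), reverse=True)
            if (snap / relative_path).exists()]

def restore(relative_path: str, version: int = 0) -> Path:
    """Copy the chosen snapshot version back into the live home directory."""
    source = list_versions(relative_path)[version]
    target = Path.home() / relative_path
    shutil.copy2(source, target)   # the user recovers the file directly
    return target

# Example: recover the most recent good copy of a corrupted document.
# restore("Documents/budget.xlsx")
```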

A Virtual Shelter From Storms

Virtualization has a lot to offer the government data center. With flexibility, mobility, higher availability and business continuity assurances, virtualization in all its forms is on the agenda of most agency IT managers. Desktop virtualization is the latest in a series of steps away from physical computing and storage dependency, and it often provides end-users and administrators with a better experience at a lower cost. VDI’s allure includes ease of management, scalability, rapid deployment and total protection of the desktop environment. With these benefits, however, comes an unfamiliar set of challenges related to resource allocation and granular data protection and recovery. Today, data centers can ably solve these challenges with VDI-specific data protection solutions. When they do, government agencies will see the storm clouds surrounding VDI disappear, enabling them to capture a greater return on their virtualization dollars.


Fadi Albatal is the vice president of marketing at FalconStor Software.

