Public-Sector Multiprocessing Emerges

Multiprocessing, once a fringe technology for scientific applications, has quickly entered the business world as a cost-effective way to boost application performance. So far, states and localities have found little use for either symmetrical multiprocessing or massively parallel processing, but the situation may be changing.

Has your system's database become enormous? Are you having to add more users to the system than you anticipated? Is your application beginning to perform like a subcompact towing a trailer uphill? Then maybe it's time to consider multiprocessing.

Once a specialized (and expensive) hardware technology used primarily in science and research, multiprocessing has quickly entered the mainstream market, providing organizations with significant horsepower to run their growing applications, but at far lower costs than in the past. Multiprocessing is no longer a mainframe or minicomputer solution. Thanks to the advent of cheap microprocessors, PC vendors -- such as Compaq and Dell -- are now offering low-end, Pentium-based, symmetrical multiprocessing (SMP) servers, while system vendors -- such as AT&T GIS and Unisys -- are putting hundreds of microchips into massively parallel processing (MPP) computers for high-end applications.

At the same time, operating systems such as UNIX, Windows NT, OS/2 and Novell's NetWare now support multiprocessing hardware, as do the major database vendors, including Informix, Oracle and Sybase. The combination of these trends has resulted in a marked increase in multiprocessing applications.

In 1994, sales of SMP and MPP hit $2.5 billion, according to Superperformance Computing Service, a market research firm based in Mountain View, Calif. With growth at a robust 41 percent a year, SMP and MPP sales are expected to reach $8.4 billion by 1998.

In the commercial sector, retail chains, airlines, insurance firms and financial institutions have begun employing multiprocessing to increase the speed of their applications, many of which run databases that are hundreds of gigabytes in size. Besides speed, multiprocessing has other benefits as well, including scalability -- SMP can start with two processors and scale up to 32 -- the ability to handle complex queries, and support for continuous operations. Not so long ago, these kinds of benefits were not high on any government's priority list for computing. But today, that's beginning to change. Under pressure to become more responsive to constituent needs, perform faster and increase their reliance on technology, government agencies are struggling to adapt and change. Multiprocessing, with its performance, scalability and processing capabilities, can give states and localities the means to move in the right direction.

While not every government computer application is a candidate for multiprocessing, some are more suitable than others. These include tax audits, public healthcare claims analysis, and crime analysis such as fingerprint or mugshot matching.

SYMMETRICAL MULTIPROCESSING
Of the two types of multiprocessing in use today, SMP is the more mainstream. SMP computers can contain as few as two processors or as many as 32 (in some cases as many as 64, though most configurations fall between four and sixteen). The processors share system memory and devices and have no pre-assigned tasks, which means each is equally capable of performing whatever application task the operating system assigns. As a result, the processors are rarely idle. (A less popular design, known as asymmetric multiprocessing, dedicates processors to certain tasks; if a processor is not running an assigned request, it sits idle.) Just a few years ago, only the UNIX operating system could run symmetric multiprocessing systems. Today, however, SMP versions of IBM's OS/2 and Novell's NetWare are available, and Windows NT, Microsoft's network operating system, also supports SMP. The key ingredient of any SMP-capable operating system is a job scheduler that directs an application process to run on a particular CPU.
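The symmetric scheduling idea can be sketched in a few lines of modern Python: a pool of identical workers, none with a pre-assigned role, each picking up whatever task the scheduler hands it next. The task function and data here are invented stand-ins, not anything from an actual SMP operating system.

```python
# A minimal sketch of symmetric scheduling: every worker in the pool
# is equally capable of running any task, so none sits idle while
# work remains. process_record is a hypothetical workload.
from multiprocessing import Pool

def process_record(record):
    # Any worker may run this; there are no dedicated processors.
    return record * 2

if __name__ == "__main__":
    records = list(range(8))
    with Pool(processes=4) as pool:  # four identical "CPUs"
        results = pool.map(process_record, records)
    print(results)  # each record handled by whichever worker was free
```

The essential point is that the scheduler, not the programmer, decides which processor runs which task, which is exactly what distinguishes symmetric from asymmetric multiprocessing.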

To let organizations run their applications on SMP hardware, the major database vendors have developed SMP versions of their core database products. To achieve high performance on an SMP computer, a database system has to support parallelism and multi-threading. Parallelism allows the database system to divide tasks so that they can be completed sooner. Normally, database operations, utilities and query processing are performed sequentially; parallelism allows the system to process different portions of the data concurrently, which boosts the system's throughput.
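A toy illustration of that idea, assuming nothing about how any vendor's product actually works: a "table" is split into portions, each portion is scanned concurrently, and the partial results are combined at the end, producing the same answer as a sequential scan.

```python
# A sketch of intra-query parallelism: split the data into portions,
# scan the portions concurrently, then combine the partial counts.
# The table and predicate are hypothetical examples.
from concurrent.futures import ProcessPoolExecutor

def scan_portion(rows):
    # Count matching rows in one portion of the data.
    return sum(1 for r in rows if r % 3 == 0)

def parallel_count(table, portions=4):
    chunk = (len(table) + portions - 1) // portions
    parts = [table[i:i + chunk] for i in range(0, len(table), chunk)]
    with ProcessPoolExecutor(max_workers=portions) as ex:
        return sum(ex.map(scan_portion, parts))

if __name__ == "__main__":
    table = list(range(1000))
    print(parallel_count(table))  # same answer as a sequential scan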

Multi-threading enables a database system to handle multiple user requests and concurrent transactions. Database systems from Oracle, Informix and Sybase support these methods of multiprocessing. Version 7 of Informix's OnLine Dynamic Server is used primarily for three classes of SMP applications, according to David Watson, a product marketing manager for Informix. "The bulk of the demand is for running online transaction processing, followed by data warehousing and multimedia."
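The concurrent-transaction idea can be sketched without reference to any real database engine: each user request runs on its own thread, and a lock keeps updates to shared state consistent. The "account table" and transfer amounts are invented for illustration.

```python
# A minimal multi-threading sketch: 50 concurrent "transactions"
# update a shared in-memory table; a lock serializes the updates
# so the totals stay consistent. All names and data are hypothetical.
import threading

accounts = {"alice": 100, "bob": 100}
lock = threading.Lock()

def transfer(src, dst, amount):
    # One concurrent transaction; the lock keeps updates consistent.
    with lock:
        accounts[src] -= amount
        accounts[dst] += amount

threads = [threading.Thread(target=transfer, args=("alice", "bob", 1))
           for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(accounts)  # prints {'alice': 50, 'bob': 150}
```

Without the lock, two threads could read the same balance before either writes it back, silently losing updates; a real database system manages the same hazard with its own locking and transaction machinery.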

MASSIVELY PARALLEL PROCESSING
A computer with a couple dozen processors inside might sound powerful, but imagine a computer with hundreds of processors. That's MPP. Unlike SMP, in which many processors share the same system memory, MPP gives each processor its own local memory. Each processor node also contains an operating system kernel, enabling it to operate relatively independently of the others.

With SMP, performance gains begin to tail off as the number of processors sharing memory increases, which is why most SMP applications involve four to sixteen processors. MPP doesn't face that dilemma, so scalability is not a problem; MPPs with more than a thousand processors are possible.

MPP has entered the business world of computing at a time when electronic data is exploding. According to the International Technology Group, the typical Fortune 500 company of 1979 possessed about 8 billion characters of electronic data. By 1990, the same company's data had increased to nearly 28 trillion characters, and by the turn of the century its electronic information will balloon to over 400 trillion characters. Not only are organizations awash with data, they are trying to put it to better use. Data mining and complex queries for decision support are the new kinds of activities businesses are using to stay ahead of the competition. With MPP, a Wal-Mart or American Express can process in minutes or hours the kinds of queries that used to take mainframes days or weeks.

"MPP allows you to get at those golden nuggets of information," remarked Vic Velivis, a program manager for Unisys' parallel processing program. MPP can take a query, such as "how many different flavors of a brand of toothpaste did a retail chain sell last week," break it into separate tasks, search a 100-plus gigabyte database in parallel, and then join the relevant data back together. With MPP's tremendous capacity for decision support, vendors believe the technology can serve the government sector as well as it is now beginning to serve private organizations.
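The shared-nothing pattern behind that toothpaste query can be sketched as follows. Each "node" owns its own partition of the sales data (no shared memory), counts flavors locally, and a coordinator joins the partial answers. The data, flavor names, and partitioning are all invented for illustration; real MPP databases distribute and join data with far more machinery.

```python
# A shared-nothing (MPP-style) sketch: each node scans only its own
# local partition, and a coordinator merges the per-node results.
from multiprocessing import Pool

def node_scan(partition):
    # Runs against one node's local partition only.
    flavors = {}
    for flavor, units in partition:
        flavors[flavor] = flavors.get(flavor, 0) + units
    return flavors

def coordinator(partitions):
    # Scatter the scan to every node, then gather and join the results.
    with Pool(len(partitions)) as pool:
        partials = pool.map(node_scan, partitions)
    totals = {}
    for part in partials:
        for flavor, units in part.items():
            totals[flavor] = totals.get(flavor, 0) + units
    return totals

if __name__ == "__main__":
    partitions = [
        [("mint", 3), ("gel", 2)],
        [("mint", 1), ("baking soda", 4)],
    ]
    print(coordinator(partitions))
```

Because no node ever touches another node's memory, adding nodes adds capacity without the memory contention that caps SMP scaling.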

AT&T Global Information Solutions, which recently acquired Teradata -- a provider of MPP products -- is the overall leader in parallel processing, with about 40 percent of the market, according to John Grim, AT&T's assistant vice president for public sector marketing. Grim said that the public sector is just beginning to apply parallel processing, with most interest coming from the federal government so far.

OPUS THE COMPUTER
Believing that the public sector's need for decision support will be increasing in the near future, Unisys has begun to press its Open Parallel Unisys Server (OPUS) program as a possible solution. OPUS was launched by Unisys just two years ago as an MPP solution for businesses. Now it's being offered to the public sector.

According to Velivis, early parallel systems were based on proprietary hardware, operating systems and even databases, making them difficult to integrate into existing computing systems. So Unisys developed OPUS as an open systems alternative. "It uses standard Intel microprocessors, Novell's UNIX operating system and supports industry-standard databases, such as Oracle," said Velivis.

What makes OPUS unique, however, is its microkernel operating system, a feature that enables the entire mass of processor, storage, memory, peripheral and other computing resources to be viewed as a single entity. According to Velivis, this feature, known as the Single System Image (SSI), can reduce the cost and time that in-house IS shops and commercial software developers spend porting and developing applications. That particular feature could help move MPP forward because, according to Watson and others, current MPP puts a much greater demand on system support than does SMP. "It's simply more complex to manage MPP than SMP," he said.

Unisys thinks its SSI feature will give MPP the kind of stability and reliability that other computing architectures have demonstrated. The vendor also feels that the benefits of MPP are too good for the public sector to pass up.

Myles Tillotson, Unisys' public sector marketing manager, sees a growing need in government for computerized decision support, where data is analyzed and integrated into meaningful patterns. "Intelligence gathering for crime analysis and fraud detection are some initial examples," he said. "But the use of parallel processing for decision support will have real impact in government when leaders look at data to see how a specific decision can impact a program." Tillotson pointed out that for years, states have produced statistical reports for the feds on state Medicaid programs, but have never used the data to see how services could be enhanced cost-effectively. With MPP that's now possible.

Building an MPP system is not cheap, however. Costs for a fully developed system start at $1 million and go higher. But with control and management of huge social service programs shifting back to the states, Tillotson sees states willing to justify decision-support systems in order to make more informed choices about the strategic direction that safety-net programs should take.

THE FUTURE OF MULTIPROCESSING
With multiprocessing still in the early phases of use in both the commercial and government sectors, it's hard to predict just how significant this computing architecture will become. As governments look more closely at SMP and MPP, they need to keep in mind some likely trends. For instance, some experts believe the differences between SMP and MPP will begin to blur. One way to get around SMP's main limitation -- poor scalability beyond roughly 32 processors -- is to combine two or more SMP servers in a cluster. Another variation calls for using a set of SMPs as a node in an MPP computer.

David Watson is one who believes that MPP will eventually be replaced by clusters of SMP computers. Why? "Because clusters of SMPs can be managed better than an MPP computer," he said.

Whichever way multiprocessing develops, it's clear that the technology is here to stay. With state agencies integrating databases to create massive data warehouses of information, it's likely that computing performance will become an issue, especially when more government workers start asking for detailed analysis of the data. Multiprocessing should be able to handle that.


With more than 20 years of experience covering state and local government, Tod previously was the editor of Public CIO, e.Republic’s award-winning publication for information technology executives in the public sector. He is now a senior editor for Government Technology and a columnist at Governing magazine.