MAINFRAMES ... THEY'RE BAAAACK (Actually, they've been here all along!)


by / February 28, 1998
A man's tie gets wider ... then thinner. A lady's skirt gets longer ... then shorter. What goes one way must then go the other. It's a law of the universe, and it's a good thing. Otherwise, men would be wearing ties as wide as a bedsheet and women would be wearing skirts below their shoes. And that, my friends, would be a tad uncomfortable, not to mention ridiculous.

In the IS world, we centralize, then we decentralize. Now we're back to centralizing.

What's really happening is that we've learned some lessons from client/server development, and those lessons tell us there are many ways to make soup. Client/server showed us that those little, inexpensive Intel PCs aren't as cheap as we thought they were, and that UNIX servers aren't as open as everyone believed. It's not that client/server is dead. Actually, very little dies in the information systems field. We're still making half-inch open-reel tape drives this year ... and next.

The bottom line is that the world is speeding up. Einstein said this is a universal phenomenon. It's not just computers making us faster. We're all in some vortex orchestrated by an intelligence far greater than our feeble minds can fathom. As a result, there is little time to do everything we plan to do. IS departments simply have no time to convert from mainframe systems to client/server systems and still meet the demand for new applications piling up in the inbox.

Even if NT were to become the most scalable operating system ever, and Intel machines became 100 times faster, there still wouldn't be time to convert mainframe applications to Wintel. As with the year-2000 problem, there simply isn't enough time and manpower to do the job. Forget the cost. If all taxpayers were willing to donate their life savings to their communities, there still wouldn't be enough time. Since the advent of IBM's System/360 in 1964, the world has spent $2 trillion on mainframe development. Some estimates say it's more like $4 trillion. Hey, a trillion here, a trillion there ... oh well.

All you have to remember is that if something is hailed as a panacea, rest assured ... it won't be. There was a book called Distributed Processing -- End of the Mainframe Era? It foretold the demise of the mainframe because of a powerful new technology called the minicomputer. It stated its case succinctly: mainframes were on their way out. That was 20 years ago. When the figures are finally tallied for calendar 1997, the number of mainframe MIPS shipped is expected to be 70 percent greater than in 1996. Sure, there were bleak years during the early 1990s when client/server was the rage. Mainframe shipments shrank, and IBM would love to forget that it lost $16 billion in that grim era. But 1994 was a turnaround year, showing 28 percent growth in mainframe MIPS shipped, a figure that jumped to 65 percent in 1996. They're baaack, and they're here to stay.

There's more to it than just being locked in. Organizations are starting to appreciate what a mainframe really means -- rock solid reliability and scalability that doesn't quit. Following is a synopsis of what mainframe technology is all about and who's doing it. We're talking about traditional IBM mainframes, not large HP, DEC or Sun servers or others that claim mainframe capabilities.


In 1964, IBM announced System/360, the first family of compatible computers ever developed. Having bet the company on this new "family" concept, T. J. Watson Jr. launched an architecture that remains largely intact 34 years later. There have been various IBM-compatible mainframe vendors over the years. In the late 1960s, RCA was the first with its Spectra 70 systems, but the computers were unreliable and never a threat. In 1970, Dr. Gene Amdahl, chief architect of System/360, formed his own company and succeeded in producing a line of IBM compatibles that still competes today. Amdahl has about 7 percent of the IBM mainframe market.

In 1979, Amdahl left to form Trilogy, a quarter-billion-dollar startup that would employ wafer-scale integration to create a 2.5-inch superchip. Far ahead of its time, the superchip never came about. Amdahl later founded Andor, another IBM-compatible venture, but it suffered from the "mainframes are out" syndrome of the early 1990s, and Andor didn't make it either. Not to be outdone, Amdahl is still at it with Commercial Data Servers, which recently introduced entry-level mainframes to replace the several thousand 43xx and 9370 systems out there at half the cost of IBM's Multiprise systems. Amdahl hopes once again to have the premier mainframe, beating out everybody by the end of 1998 with a cryogenic 270-MIPS machine.

In addition to Amdahl 1 (Amdahl Corp.) and Amdahl 2 (Commercial Data Servers), Hitachi is the other major IBM-compatible mainframe vendor. Hitachi claims 24 percent market share overall and 50 percent share in the top 4 percent of customers. Its Skyline series indeed is the fastest single processor machine on the market today at 124 MIPS. A new 152-MIPS machine is expected shortly.


Is a mainframe a license to steal? How can Hitachi sell a 10-CPU system for $10 million that delivers a total of 1,000 MIPS (millions of instructions per second), when five Pentiums will deliver the same 1,000 MIPS? There are reasons, but comparing MIPS ratings and raw megahertz is about as misleading as it gets. This is the kind of bean-counter thinking that made client/server so appealing. In actuality, many mainframe instructions do more actual processing than the equivalent Intel instructions, but that's only a fraction of the issue. IBM and the compatible vendors have had 34 years to perfect the System/360 architecture, and those 34 years have produced a formidable machine. There is a difference!

First, mainframes offload input and output to separate processors called channels, so data transfers take place simultaneously with CPU processing and with other data transfers. Additional processors may act as I/O traffic cops between the CPU and the channels, handling exceptions (What happens if the channel is busy? If it fails?). IBM offers 256 channels per system; Amdahl and Hitachi support up to 512. All these subsystems handle the transaction overhead, freeing the CPU to do real "data processing," such as adding up columns and updating account balances -- the purpose of the computer in the first place.
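The offload argument reduces to simple arithmetic: when transfers run on channels, elapsed time approaches the larger of the CPU time and the I/O time rather than their sum. A toy cost model (the millisecond figures are invented for illustration, not vendor numbers):

```python
# Toy model of channel-based I/O offload. Without channels, the CPU does the
# transfers itself, so compute and I/O serialize; with channels, transfers
# overlap compute, so elapsed time approaches max(cpu, io).

def elapsed_without_channels(cpu_ms, io_ms):
    # CPU stalls for every transfer: times add
    return cpu_ms + io_ms

def elapsed_with_channels(cpu_ms, io_ms):
    # Channels move data concurrently with computation: times overlap
    return max(cpu_ms, io_ms)

cpu, io = 600, 400  # hypothetical per-transaction milliseconds
print(elapsed_without_channels(cpu, io))  # 1000
print(elapsed_with_channels(cpu, io))     # 600
```

The more I/O-bound the workload, the bigger the win, which is why transaction processing in particular favors this design.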

Second, instead of the single pathway into memory found in a PC, there are multiple memory banks providing multiple ports into memory. For example, Hitachi's Skylines have 16 ports into main memory and 32 ports into cache memory, and that cache memory is 10 times faster than main memory.

Third, while a 200MHz Pentium has a 66MHz external bus, a 200MHz mainframe may have a data bus that also runs at 200MHz, three times as fast. And the multipliers compound: three times the bus speed, 10 times the cache speed, perhaps 32 or 64 overlapped data transfers. Multiply one by the other, and the combination makes a processing machine unlike anything else.
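That compounding claim can be checked with the article's own numbers; the factors below are the ones cited in the text, combined in the crudest possible way:

```python
# Multiply the article's factors together (a rough upper bound, not a benchmark):
bus_factor = 200 / 66    # data bus at CPU speed vs. a 66MHz PC bus: ~3x
cache_factor = 10        # cache memory vs. main memory: 10x
overlap_factor = 32      # overlapped data transfers through multiple ports

combined = bus_factor * cache_factor * overlap_factor
print(round(combined))   # 970
```

Real workloads won't see anything near the full product, but the point stands: the advantages multiply rather than add.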

Mainframes are very scalable. While lesser machines poop out, mainframes keep going, and going and going (shades of the pink bunny!). Using symmetric multiprocessing (SMP), vendors offer systems with up to a dozen processors sharing a single memory. To scale beyond a single system, IBM uses its Parallel Sysplex technology to cluster up to 32 systems, making them all appear as a single system image. In practice, most users don't scale beyond six or seven systems, because it gets very complex (that's six or seven systems, each with multiple processors). Amdahl recommends no more than four or five.
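One classical way to see why clustering flattens out is Amdahl's law -- named, fittingly, for the same Gene Amdahl -- which bounds speedup by the fraction of work that can't be parallelized. The serial fraction below is purely illustrative; the complexity cost of clustering is a separate, additional limit:

```python
# Amdahl's law: speedup(n) = 1 / (s + (1 - s) / n), where s is the serial fraction.
def speedup(n_processors, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

# With even 5 percent serial work, returns diminish quickly:
for n in (2, 6, 12, 32):
    print(n, round(speedup(n, 0.05), 2))
```

At 32-way, a 5 percent serial fraction already caps speedup below 13x, before any clustering overhead is counted.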

Both Amdahl and Hitachi rely on IBM's OS/390 (formerly MVS) software, which, after 34 years of evolution, has become bulletproof. For years, IBM has employed hundreds of people whose sole purpose in life, day after day, is to try to crash MVS. It's a tad harder than crashing Windows, which just about anybody can do in 10 seconds. NT may be more stable than Win 95, but NT has had barely five years in the trenches.

Half the hardware in a mainframe is error detection and correction circuitry. Every subsystem is continuously monitored for potential failure, in some cases even triggering a list of parts to be replaced at the next scheduled downtime. As a result, mainframes are incredibly reliable. The mean time between failures -- a system going down on its own -- is generally 20 years! Now ... them thar's dependability if ah ever heerd of it. When Tandem came out with its fault-tolerant computers in the mid-1970s, mainframes were nowhere near as reliable as they are today. Today, redundant power supplies and RAID storage arrays are standard operating procedure.
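A 20-year MTBF translates into striking availability numbers. A rough calculation, assuming a hypothetical 4-hour repair time (the MTTR is an assumption for illustration, not a figure from the article):

```python
# Availability = MTBF / (MTBF + MTTR), using the article's 20-year MTBF claim.
mtbf_hours = 20 * 365 * 24   # ~175,200 hours between unplanned failures
mttr_hours = 4               # assumed mean time to repair

availability = mtbf_hours / (mtbf_hours + mttr_hours)
downtime_min_per_year = (1 - availability) * 365 * 24 * 60
print(f"{availability:.6f}")         # 0.999977
print(round(downtime_min_per_year))  # 12
```

About a dozen minutes of unplanned downtime a year, under these assumptions, before counting the redundancy the paragraph above describes.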

Switching from bipolar chips to CMOS technology has also improved reliability. Mainframes are now on the same price/performance curve as PCs, because all the research and design today goes into CMOS. Even though bipolar and ECL chips are faster, they run hotter and they're bigger. IBM's ES/9000 bipolar models are 20 times the size of its CMOS family of Parallel Enterprise Servers. And, yes, the entire CPU is on one microprocessor chip. The Parallel Enterprise Servers are the medium to high-end models, with the peripherals in separate cabinets, as has been traditional with mainframes. IBM also offers its entry-level to medium-size Multiprise models with the disks right inside the ol' cabinet. What a concept.

Hitachi bridges both worlds with its CPU chip. Its ACE technology is 60 percent CMOS and 40 percent ECL (emitter-coupled logic) bipolar. That's how it makes the fastest machine today.


IBM's OS/390 runs UNIX natively. Over the years, IBM has made its flagship operating system Spec 1170 compliant, which means UNIX applications can be recompiled for OS/390 and run natively! This is a solid move, considering UNIX servers are a $60 billion business.

Wind/U from Bristol Technology of Ridgefield, Conn., is a software porting tool that was originally written to take Windows applications written in C or C++ and port them to UNIX. It was later enhanced to create S/390 applications, which means Windows applications written in C and C++ can be made to run on the big iron, too.

Of course, some wonder why IBM doesn't make OS/390 Windows compliant from the get-go, just as it did for UNIX. Windows was enough of a threat to mainframes to cause the "bleak years," and it surely remains as formidable an opponent as UNIX, if not more so. But this would be a gargantuan undertaking. Does IBM clone Intel chips and integrate them right into the S/390 chip? Can it license Intel technology? Can it truly reproduce Windows, API for API, without having the source code from Microsoft? Could it get the source code? Does it want to? Could it simply emulate Windows and forget Intel hardware? Lots of considerations. And, for what? Just to own the entire computing industry once again. That's all.

Alan Freedman's Computer Desktop Encyclopedia on CD-ROM is "the" award-winning reference on the computer industry. Contact The Computer Language Company, 215/297-8082 (FAX 8424).
