I'm familiar with a government agency that still uses DOS as its workstation operating system. Several years ago, this organization developed information management systems that were as effective and appropriate as they could be, given the technology then available. Obviously, available technology has changed. Also, the system's user requirements have changed. The agency must now change or replace most of its information systems quickly and simultaneously. These rapid large-scale changes will require radical modifications in user procedures and a lot of user retraining.
To compound the problem, this agency's local area network hardware platform has not been upgraded for several years. The network server runs at full disk capacity and periodically crashes, leaving 150 users unable to work during the downtime. The agency estimates that downtime costs $5,000 a minute. There is only one server on this network, so work can't be shuttled to a different server.
All the costs of network upgrading and retraining will have to be absorbed in one or two fiscal years. Service to users and to agency customers will be disrupted, and the risk of total system failure will be at its highest.
I've seen similar situations develop in other organizations. Several potential missteps lead into this trap. I've observed one root problem over and over: Decision-makers frequently underestimate the costs of doing nothing, of maintaining the status quo.
In an ideal world, upgrade and replacement decisions are made on a rational basis: cost effectiveness. Open any managerial finance textbook and you find discussions of net present value, internal rate of return and discounted future cash flows. The textbook cost/benefit analysis consists of defining a problem, proposing solutions, developing cost and benefit estimates for each alternative,
then selecting the alternative that yields the greatest payoff or rate of return.
For these methods to work, accurate estimates of costs and benefits for each proposed alternative must be calculated. For projects involving new system development, upgrades or acquisitions, this is straightforward: obtain acquisition costs from vendors and estimate in-house development costs based on labor and so forth. These costs are relatively easy to isolate.
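The textbook comparison described above can be sketched in a few lines of code. This is a minimal illustration of a net-present-value comparison; the discount rate, the alternatives and every dollar figure are hypothetical, not taken from the article. Note that the "do nothing" option is given explicit negative cash flows rather than being assumed to cost zero.

```python
# Hypothetical NPV comparison of upgrade alternatives, including the
# status quo. All figures are illustrative assumptions.

def npv(rate, cash_flows):
    """Discount a series of yearly cash flows (year 0 first) to present value."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

DISCOUNT_RATE = 0.07  # assumed cost of capital

# Each alternative: a year-0 acquisition cost (negative), then net yearly
# benefits. "Do nothing" has no acquisition cost, but the hidden costs of
# the status quo (downtime, inefficiency) appear as growing negative flows.
alternatives = {
    "upgrade server": [-120_000, 45_000, 45_000, 45_000, 45_000],
    "replace system": [-300_000, 95_000, 95_000, 95_000, 95_000],
    "do nothing":     [0, -20_000, -35_000, -55_000, -80_000],
}

for name, flows in alternatives.items():
    print(f"{name:16s} NPV = ${npv(DISCOUNT_RATE, flows):>12,.0f}")

best = max(alternatives, key=lambda k: npv(DISCOUNT_RATE, alternatives[k]))
print("Highest payoff:", best)
```

With honest estimates plugged in for the status-quo row, "doing nothing" often turns out to have a strongly negative present value -- which is the point of the paragraphs that follow.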
But what about the alternative that doesn't involve change, the status quo option? The costs of doing nothing are often overlooked, underestimated or assumed to be zero. This is because the costs of doing nothing are often hard to quantify or identify.
The Price of Nothing
Underestimated costs of doing nothing include downtime, using small-organization procedures in a big organization, lost migration paths, unsupported or peculiar software, the cost of large-increment upgrades vs. small-increment upgrades, the need for spare capacity to enable response to new problems or opportunities, and incompatible mixes of old and new software and hardware.
The most obvious status-quo cost factor is downtime. A failing system's downtime can be reported and projected. A system not currently failing may fail in the future, yet you have no downtime history on which to base projections. In either case, future downtime must be estimated, and the costs of lost work time and repairs must be derived from these estimates. Downtime can be estimated by projecting disk and other resource utilization based on past experience and on forecasted growth in demand.
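The projection just described can be reduced to simple arithmetic. The sketch below assumes linear growth in disk utilization and a utilization threshold at which crashes historically begin; every number is a hypothetical illustration except the $5,000-per-minute downtime cost cited for the agency in this article.

```python
# Rough projection of downtime cost from disk-utilization growth.
# All inputs are assumptions except COST_PER_MINUTE, from the article.

CAPACITY_GB = 100                      # server disk capacity (assumed)
CURRENT_USE_GB = 82                    # utilization today (assumed)
GROWTH_GB_PER_MONTH = 1.5              # forecasted growth in demand (assumed)
CRASH_THRESHOLD = 0.95                 # utilization at which crashes begin (assumed)
COST_PER_MINUTE = 5_000                # downtime cost from the article
CRASH_MINUTES_PER_MONTH = 30           # projected from past downtime logs (assumed)

# Months until utilization crosses the crash threshold, assuming linear growth.
months_to_trouble = (CAPACITY_GB * CRASH_THRESHOLD - CURRENT_USE_GB) / GROWTH_GB_PER_MONTH

# Projected yearly downtime cost once the threshold is crossed.
yearly_downtime_cost = CRASH_MINUTES_PER_MONTH * 12 * COST_PER_MINUTE

print(f"Months until crash-prone utilization: {months_to_trouble:.1f}")
print(f"Projected downtime cost per year after that: ${yearly_downtime_cost:,.0f}")
```

Even with these modest assumed figures, the projected cost of standing still runs well into seven figures a year -- a number that belongs in the status-quo column of any cost/benefit comparison.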
Moving On Up
The next cost factor involves clinging to outgrown systems. California's county government agencies have grown considerably over the last 30 years. Business transactions, numbers of employees and operational complexity have all increased. Instead of adopting new methods and technologies, some organizations stretch their old information systems to accommodate the growth they experienced. Eventually, the organization slows down, becomes less efficient or effective, and gives its customers poor service. If the organization's revenue is fee-based, that revenue will decrease. Other related costs are more qualitative: public dissatisfaction, more complaints to elected officials and pressure for privatization.
Lost migration paths can be costly. The expense and time required to manage upgrades probably prohibit staying 100 percent current all the time. On the other hand, falling too far behind in hardware and software configurations may lead to lost migration paths. Upgrading from version 1.0 to version 2.0, or from 2.0 to 3.0, can be difficult, but is usually supported by the software or hardware vendor. The jump from 1.0 to 3.0 is much more difficult, but may still be supported by the vendor. The change from version 1.0 to 5.0 may well prove impossible, and the vendor will probably not support it. The impact shows up as staff overtime or the inability to adapt at all.
The ability to upgrade also relates to unsupported software and software no one outside your agency uses. If this software was developed in-house, the agency or owner may be able to support it. If the software was developed by an outside vendor and can't be maintained by the user, the organization is stuck with it. When business requirements change, such orphan software cannot change with them. This may force total replacement and all its associated costs -- installation, downtime, overtime and training.
Changing by Degrees
The costs of hardware and software replacement will generally be greater than a normal software upgrade from one version to another. It is easier and less risky to manage small system-change projects than large ones. Information-system changes involve line-worker training. Time spent in training is time away from performing primary duties. This is true of small or large increments of system change, but one big project and its associated time consumption is usually more disruptive and costly than two small projects in succession.
Intangible human factors also must be taken into account; the additional stresses of radical change and overtime take a toll on workers. These stresses cost the organization not only additional overtime, but also additional sick leave and staff turnover.
One way to be prepared for change is to have unallocated resources -- spare capacity to respond to new business requirements and opportunities -- built into systems designs. Some county agencies are required by state and federal governments to report statistics regarding workload and expenditures. The state and federal governments intermittently change these mandates, sometimes without much advance notice.
To compound this problem, state and federal authorities making these mandates tend to assume that counties have the systems resources necessary to respond to them. County agencies must meet these mandates or have funding withheld by the state or Washington, D.C.
Like Oil and Water
If an organization prefers to maintain the status quo, it will change system components only when forced. This results in a mishmash of new, old and totally obsolete components. New software and hardware can be engineered to provide some backward compatibility. There is a real danger, however, of discovering after an upgrade that the new components aren't compatible with the old. Quattro Pro for DOS version 5.0 doesn't like to print under Windows 95, but the agency that discovered this did so only after upgrading some PCs to Windows 95; such problems can often be diagnosed only through trial and error. At a certain point, these incompatibilities make upgrading impossible, make the status quo unacceptable given the inevitability of change, or force the organization to suddenly do more upgrading than originally planned -- which returns us to the increments-of-change issue.
Everything Costs Something
This overview of the costs of standing still may serve as a starting point in weighing the benefits of moving forward. The price of not changing is often less obvious and harder to quantify than the expenses of change. There is no such thing as cost-free status quo.
Robert Lombardi, who has taught community college information systems classes for nine years, is an information technology professional working for local government in Stockton, Calif.