April 30, 1999 By David Aden
In large part, flagging patience can be attributed to the changing role of networks and computers over the past several years. They have evolved from an interesting adjunct that might help people do their jobs better into a utility central to getting the job done. This fundamental change helps explain why downtime can give rise to more expressive, and sometimes colorful, emotional responses.
Unfortunately, the subject of managing computer resources is so vast that no single article can hope to do more than touch the surface. Nonetheless, there are general concepts, trends and definitions that might help when starting to confront this vital issue.
Traditionally, computer-resource management has been divided into several categories applicable to the management of a network, an individual computer or a running application. The more common of these subdivisions, as enumerated in The Essential Client/Server Survival Guide by Robert Orfali, Dan Harkey and Jeri Edwards, include:
1) Performance monitors provide text-based and graphical representations of how the hardware, network or application is performing. They generally answer "how much" or "how many" questions. Managers can use them to drill down to a particular piece of the computing environment for details on how things are going. Perhaps more importantly, these tools usually let administrators set thresholds for the values they track. The performance monitor watches a value silently unless it goes above or below its threshold; it then takes some action, most often notifying an administrator of the situation. In some cases, the performance monitor can be instructed to carry out an action designed to rectify the problem.
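The threshold behavior described above can be sketched in a few lines of Python. This is a simplified illustration, not the workings of any particular product; the metric name and the notify/remedy hooks are hypothetical stand-ins for whatever a real monitor would wire in:

```python
# Sketch of a performance monitor's threshold check: stay silent while a
# tracked value is in range, and take action (most often a notification)
# when it crosses a threshold. All names here are illustrative.

def check_threshold(metric_name, value, low, high, notify, remedy=None):
    """Watch a tracked value silently unless it leaves [low, high]."""
    if low <= value <= high:
        return None  # within bounds: the monitor stays silent
    notify(f"{metric_name} out of range: {value} (expected {low}-{high})")
    if remedy is not None:
        remedy()  # optionally attempt to rectify the situation
    return value

alerts = []
check_threshold("disk_free_pct", 4, low=10, high=100, notify=alerts.append)
```

After this runs, the `alerts` list holds one out-of-range notification; a value inside the range would have produced none, which is exactly the silent-until-threshold pattern the article describes.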
2) Inventory or asset-management tools keep track of what's available. They answer "what" questions -- what computers, what software, what network devices do we have?
3) Configuration management deals with the way a particular piece of software or hardware is set up. Such tools answer "how" questions. This information can be used to help understand and optimize system performance. For example, tools to assist "database tuning" may fall into this category, as would tools that help track and adjust the configuration of network devices.
4) Security-management tools can apply to hardware and software. They generally deal with "who" questions -- who has access to what and, with access, what are they allowed to do? In the heterogeneous environments common today, such tools can be vital. Without them, system administrators have to manually grant or revoke rights for each computer or network that an individual needs to access. Well-designed security tools help give administrators a single view of mixed-vendor environments. Security management is an issue whether the administrator is managing network devices, individual computers or even software.
In the computer and software arenas, several trends exist. Some tools provide a single interface for granting user access to a variety of systems. One interface may be used to create a user account on both UNIX and Windows NT systems -- the administrator just enters the required data and the tool takes care of figuring out how to get the job done on the target system. Another approach is to use a universal directory service that defines access rights no matter the target operating system (see "Directory Service Slugfest," Government Technology, January).
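The single-interface approach described above can be sketched as a thin dispatch layer: the administrator supplies the user data once, and a per-system backend translates it into whatever each target operating system needs. This is a hedged sketch, not any vendor's actual tool; the backend command strings are illustrative placeholders rather than complete provisioning calls:

```python
# Sketch of a single-interface account tool: one provision() call fans
# out to per-OS backends, each of which knows how to get the job done
# on its target system. Backends and commands are illustrative only.

class UnixBackend:
    def create_user(self, username, full_name):
        # UNIX-style account creation command
        return f"useradd -c '{full_name}' {username}"

class NTBackend:
    def create_user(self, username, full_name):
        # Windows NT-style account creation command
        return f'net user {username} /add /fullname:"{full_name}"'

def provision(username, full_name, backends):
    """Create the account on every target system through one interface."""
    return [backend.create_user(username, full_name) for backend in backends]

commands = provision("jdoe", "Jane Doe", [UnixBackend(), NTBackend()])
```

The design point is the one the article makes: the administrator enters the required data once, and the tool, not the administrator, figures out how to carry it out on each target system.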
5) Software-distribution tools automate or assist in distributing and configuring software. They contain elements of inventory, configuration and security management. They answer questions like "who got what, and when," and provide an interface to help administrators distribute software without having to walk from machine to machine, CD in hand.