The Stuff of Networking

by / January 31, 1995
Networking is perhaps the most unruly discipline within the computer industry. It's a high-pressure business. The network is the spinal column of information systems delivery. It must stay up and running or the work of the organization comes to a halt. Network administrators have been known to quit the business to take up lighter work, such as brain surgery.

In order to grasp the field of communications and networking, it is necessary to understand some of its basic components and concepts.


The one concept you must understand today is "client/server." Shouted from every mountaintop, it is a buzzword not to be omitted from any vendor's advertising. It's "in" and "hot." The simple explanation: using the LAN to do jobs that have been done previously on a mini or mainframe. It implies mission-critical applications running industrial-strength software.

What it really means is this: If you take a dBASE or Paradox database and run it on the server, the network versions of these packages may provide record and file locking to prevent two users from updating the same data at the same time, but accessing the database puts a strain on the network. To search the database, all the records in the table have to be passed across the network and compared in the client machine, because a copy of the database management system (DBMS) resides in each client. If 20 clients are doing sequential searches on a large database, the network bogs down. This is not client/server.

The concept of client/server is that there is only one copy of the DBMS, and it resides in the server. It does the searching of the database and passes just the selected records over the network back to the client. If, for example, you're summing a single field in a table, the query is transmitted from the client to the server, and the answer is transmitted from the server to the client. The server does the heavy database work, and the network remains free of excess traffic.
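
The difference is easy to see in miniature. Here's a sketch using Python's built-in sqlite3 module as a stand-in for a networked DBMS (the table name and figures are made up for illustration; a real client/server DBMS such as Oracle or Sybase runs in the server, not in-process):

```python
import sqlite3

# An in-memory database stands in for the server's DBMS.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, 10.0) for i in range(1000)])

# Non-client/server style: every record crosses the "network" and the
# client does the summing. 1,000 rows travel for one answer.
rows = conn.execute("SELECT amount FROM orders").fetchall()
client_side_total = sum(amount for (amount,) in rows)

# Client/server style: the query goes to the server, and a single row
# comes back. The server does the heavy work.
(server_side_total,) = conn.execute(
    "SELECT SUM(amount) FROM orders").fetchone()

print(client_side_total, server_side_total)  # both 10000.0
```

Same answer either way; the difference is the 1,000 rows versus one row crossing the wire.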

All high-end DBMSs, such as Oracle, Sybase, INFORMIX and Ingres, provide this capability. Even dBASE and Paradox have been upgraded for client/server use with the Borland Database Engine, or BDE. The BDE provides linkage to the major SQL databases (Oracle, Sybase, etc.) that reside in the server.

Another issue with client/server is the ability to move various pieces of the application processing to different servers. If the client does all the processing and the server does all the database processing, that's called "two-tier client/server." If yet another server does some of the application processing, that's called "three-tier client/server." Three-tier client/server provides design choice. It may often make sense to centralize proprietary, complicated application processing in a high-speed server rather than duplicating the program in each client. It also may be better to move that processing out of the database server so it doesn't become overworked.

The question, then, is how easy is it to design a system and place various parts of the logic on different servers? Well, you can always do it with traditional 3GL coding. Writing in C or C++ lets you do anything you want, but the goal of rapid systems development is writing in higher-level languages and using visual programming tools. New client/server development systems, such as DYNASTY and Forte, have recently been introduced with the ability to drag and drop icons of program modules onto icons of servers. That's the high-level kind of development we're talking about. Almost all client/server development systems are claiming this ability in some manner or another. When the dust settles, the winners may be those systems that can more easily distribute the pieces of the system onto different machines.


The network operating system, or NOS, is the controlling software that enables a server to accommodate multiple clients and provides the hooks to the communications between them. It allows for remote drives to be accessed as if they were local drives, and it provides file sharing and print services for users on the network. The major network operating systems are NetWare, LAN Manager and LAN Server, VINES and Windows NT. UNIX combined with TCP/IP and NFS, VMS combined with DECnet, the Mac OS combined with AppleTalk, and IBM's SNA combined with its respective communications components also provide NOS services. LANtastic for DOS, Windows & OS/2 is a very popular peer-to-peer networking system, in which any client can also be a server.


When data is sent across a network, it is formatted into frames, or packets, that contain the addresses of the sending and receiving stations. A sequence number is also added to the frames to ensure that what is sent is received in its entirety. This set of formats and rules for message transfer is known as the "communications protocol." If the communications system is able to span multiple networks, the address of the destination network is also included in what is known as a "routable protocol."
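
To make the framing idea concrete, here is a toy frame layout built with Python's struct module. The field sizes and layout are invented for illustration (real protocols each define their own), but the ingredients are the ones just described: destination address, source address, sequence number, payload.

```python
import struct

# Hypothetical frame layout: 6-byte destination address, 6-byte source
# address, 32-bit sequence number, then the payload.
HEADER = struct.Struct("!6s6sI")

def make_frame(dst, src, seq, payload):
    # Prepend the addresses and sequence number to the data.
    return HEADER.pack(dst, src, seq) + payload

def parse_frame(frame):
    # The receiver peels the header back off.
    dst, src, seq = HEADER.unpack(frame[:HEADER.size])
    return dst, src, seq, frame[HEADER.size:]

frame = make_frame(b"\x00\x11\x22\x33\x44\x55",
                   b"\x66\x77\x88\x99\xaa\xbb", 7, b"hello")
dst, src, seq, data = parse_frame(frame)
print(seq, data)  # 7 b'hello'
```

The sequence number is what lets the receiver detect a missing or out-of-order frame and ask for a retransmission.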

If you're going to learn network communications, you've got to learn the OSI model. What was once thought to be the universal communications system now only serves as a reference model, but it does serve that purpose well. The OSI model is an outline of a protocol stack. The protocol stack is a layered architecture that starts at the top layer with the application program making a request to transfer a file, search a database or send e-mail. That programming interface starts a chain of movement, with the original message wending its way down to the very bottom of the stack, where the data link protocol, or access method, resides. The software in each layer adds its own bits and bytes to the message to inform its counterpart on the receiving machine.
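
The wrapping-and-unwrapping can be sketched in a few lines. This is a cartoon of a protocol stack, not any real one; the layer names are a shortened, illustrative list:

```python
# Toy protocol stack: each layer wraps the message with its own header
# on the way down; its counterpart on the receiving machine strips it
# on the way up.
LAYERS = ["transport", "network", "datalink"]

def send(message):
    for layer in LAYERS:              # top of the stack downward
        message = f"[{layer}]{message}"
    return message                    # what goes out on the wire

def receive(wire_data):
    for layer in reversed(LAYERS):    # bottom of the stack upward
        header = f"[{layer}]"
        assert wire_data.startswith(header), "frame not for this stack"
        wire_data = wire_data[len(header):]
    return wire_data                  # original message, headers gone

wire = send("QUERY")
print(wire)  # [datalink][network][transport]QUERY
msg = receive(wire)
```

Notice that the last header added is the first one stripped, which is exactly why the receiving stack runs in reverse order.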

The most popular data link protocols are Ethernet, Token Ring and LocalTalk. The data link sends the packets to the receiving station, sometimes by way of intervening devices, such as bridges and routers, and the whole process reverses itself moving the message back up the chain with each layer stripping off the bits and bytes that were added in the sending machine.

Each of the major network architectures, such as SNA, DECnet, TCP/IP, NetWare, AppleTalk and LAN Manager/Server, has its own protocol stack that mirrors the OSI model to some degree or another. When several of these architectures are used within a department, gateways (computers with conversion software) are used to translate from one to the other.


The LAN access method is implemented by plugging a printed circuit board known as a "network adapter," also called a "network interface card," or NIC, into the PC's bus in each client and server. The adapters are cabled together using coaxial cable or twisted wire pair. In the case of fiber distributed data interface (FDDI), the cable is an optical fiber.

Ethernet, the most popular LAN access method, uses a variety of cable types. The original Ethernet, known as 10Base5, or "ThickNet," uses a common coaxial cable connecting all adapters. Small transceivers coming from the adapters clamp onto the cable. A less expensive Ethernet known as 10Base2, or "ThinNet," uses a lighter-gauge coaxial cable with BNC connectors that attach to each adapter. The thinner coax moves more easily through ducts, and the BNC connectors are easier to install.

The third and most popular Ethernet method is 10BaseT, which uses telephone wire. Not only are twisted wire pairs easier to push and bend through close quarters, but the wiring scheme provides better management, because all nodes are wired through a central hub. However, all Ethernets are shared media LANs. A station broadcasts its packet onto the network and all stations listen for it, whether strung together via a common cable or through the hub. The hub simply repeats whatever comes in to all of its outgoing lines.

The star topology of 10BaseT has given rise to one of the hottest new networking trends: "switched Ethernet." By replacing the hub with a switch, you can dramatically improve network performance between nodes. Instead of sharing 10 megabits per second between all stations, the switch provides 10 Mbps between any two stations.
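
The contrast is simple enough to simulate. The functions below are a toy model (the station and port names are made up): a hub repeats an incoming frame to every other station, while a switch learns which station sits on which port and forwards the frame to that one port only.

```python
def hub_deliver(stations, frame):
    # A hub is a repeater: everyone except the sender hears the frame,
    # so all stations share the 10 Mbps.
    return [s for s in stations if s != frame["src"]]

def switch_deliver(mac_table, frame):
    # A switch looks up the destination and forwards to one port,
    # giving each pair of stations its own 10 Mbps.
    return [mac_table[frame["dst"]]]

stations = ["A", "B", "C", "D"]
frame = {"src": "A", "dst": "C", "data": "hi"}

print(hub_deliver(stations, frame))     # ['B', 'C', 'D']

mac_table = {"A": "port1", "B": "port2", "C": "port3", "D": "port4"}
print(switch_deliver(mac_table, frame)) # ['port3']
```

With the hub, B and D burn bandwidth on a frame meant for C; with the switch, they never see it.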

If 10 Mbps between two nodes isn't required, then an Ethernet switch can also be used to segment LANs, providing a 10 Mbps pipe between LAN segments, each of which is still sharing 10 Mbps in an individual workgroup.

Another Ethernet technology to emerge is Fast Ethernet, which provides 100 megabits per second instead of 10. Fast Ethernet, or 100BaseT, is a faster 10BaseT that also uses twisted pairs wired through a central hub or switch. Fast Ethernet provides a great improvement for server traffic in client/server environments, because the server gets all the action. But it could also be overkill for sharing files within workgroups. Grand Junction solves this problem by combining 10BaseT and 100BaseT in its FastSwitch switch, which provides, for example, 25 ports at 10 Mbps and two at 100 Mbps. The low-speed ports are attached to clients, and the high-speed ports are attached to servers.

3Com's LANplex is another example of a device that switches Ethernet between LAN segments and a server backbone. Its LANplex 2500 provides 16 ports at 10 Mbps for segmenting LANs, plus a 100 Mbps port for connection to an FDDI backbone or a 155 Mbps port for an ATM backbone. FDDI has become widely used as a network backbone because of its fault tolerance. Its "dual counter-rotating ring" topology contains primary and secondary rings with data flowing in opposite directions. If the line breaks, the secondary ring is used to bypass the fault.


You can't easily string that cable across town, county or state, so when you need to communicate with remote offices, you've got to use the services of the telephone companies and other common carriers. Your choices for spanning remote distances are to set up point-to-point connections or to use a switched service, which charges you for the number of bits and bytes transmitted.

The traditional point-to-point connections are the ubiquitous T1 (1.5 Mbps) and T3 (45 Mbps) lines. Pieces of T1 lines, known as "fractional T1," are also available in 64 Kbps increments. The only problem with point-to-point connections is that you pay for the pipe from here to there whether you use it or not.
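
Those 64 Kbps increments aren't arbitrary. A T1 carries 24 channels of 64 Kbps each, for 1.536 Mbps of payload (the 1.544 Mbps line rate adds 8 Kbps of framing overhead), and fractional T1 simply leases some of those channels. The arithmetic:

```python
# T1 channel arithmetic: 24 channels of 64 Kbps each.
CHANNEL_KBPS = 64
CHANNELS_PER_T1 = 24

full_t1_payload = CHANNELS_PER_T1 * CHANNEL_KBPS  # 1536 Kbps of payload
half_t1 = 12 * CHANNEL_KBPS                       # 768 Kbps fractional T1
quarter_t1 = 6 * CHANNEL_KBPS                     # 384 Kbps fractional T1

print(full_t1_payload, half_t1, quarter_t1)  # 1536 768 384
```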

On the other hand, switched services are available on a "pay as you go" basis. The bandwidth for these services is increasing: from the traditional Switched 56 and Switched 64 services, which provide 56 and 64 Kbps respectively, to frame relay at 1.5 Mbps, to SMDS at up to 45 Mbps and, eventually, to asynchronous transfer mode (ATM) at rates of 155 Mbps and beyond.

The network hardware used to connect to leased lines and various digital services is called the DSU/CSU. The DSU/CSU is to digital services what a modem is to analog services. The CSU terminates the external line at the customer's premises. It also provides diagnostics and allows for remote testing. If your communications devices are T1-ready and have the proper interface, then only the DSU is required, not the CSU.

The DSU does the actual transmission and receiving of the signal and provides buffering and flow control. The DSU and CSU are often in the same unit. The DSU may also be built into the multiplexor, commonly used to combine digital signals for high-speed lines.


Bridges are devices that are inserted into networks to break a large LAN into smaller ones for better management. Bridges function at the data link, or MAC, layer. Some bridges can also switch between Ethernet and Token Ring.

Routers are devices that function at a higher level than bridges and are used to route traffic from a LAN to a WAN. They inspect the network address in the network layer of the protocol and switch the packets to the appropriate network. Because routers can analyze the protocol, they are also used to segment LANs by filtering protocols. For example, a TCP/IP application can be routed to one segment, while a DECnet application can be routed to another.
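
A router's core job, inspecting the network address and picking the right outgoing path, can be sketched with Python's ipaddress module. The networks and interface names here are invented for illustration; real routers do the same longest-prefix match, just in hardware-assisted tables:

```python
import ipaddress

# Toy routing table: network prefix -> outgoing interface.
ROUTES = {
    ipaddress.ip_network("10.1.0.0/16"): "eth0",
    ipaddress.ip_network("10.2.0.0/16"): "eth1",
    ipaddress.ip_network("0.0.0.0/0"):   "wan0",  # default route
}

def route(dst):
    # Find every matching prefix, then prefer the most specific one.
    addr = ipaddress.ip_address(dst)
    matches = [(net, port) for net, port in ROUTES.items() if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(route("10.2.7.9"))   # eth1  (matches the 10.2.0.0/16 segment)
print(route("192.0.2.1"))  # wan0  (falls through to the default route)
```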

Gateways are computers that convert from one major protocol to another, such as from SNA to TCP/IP. Gateways are also used to switch from one e-mail protocol to another.

Switches are devices that cross-connect one station or LAN segment to another, which has been the traditional job of the telephone PBX. In the 1970s, many thought a PBX-like switch would become the networking technology of the future, handling both voice and data. Well, they were right about the architecture, but right now it's still for data. Perhaps it will come to fruition as it was envisioned years ago.

Switching is without a doubt the mode of the 1990s. Ethernet switching is increasing dramatically, and it fits right in with ATM, the newest, most comprehensive networking technology to come along in years. Although standards are still forthcoming for several ATM issues, such as how to connect Ethernet and Token Ring LANs (legacy LANs) to ATM, ATM's capabilities are unique.

ATM is a cell-based switching technology used for both LANs and WANs, allowing for a seamless connection between local area networks and the telephone companies, all of which do or will provide ATM services. ATM is a scalable technology that currently runs at 25 Mbps, 45 Mbps and 155 Mbps, but can grow to multi-gigabit transmission rates. It can also handle isochronous (time-dependent) data just as easily as a text file. That means realtime videoconferencing can pass over an ATM network with the same ease as your e-mail message. Stay tuned.
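
The "cell-based" part is the key: ATM carries everything in fixed 53-byte cells, a 5-byte header plus a 48-byte payload, which is what makes switching fast and delay predictable. A sketch of chopping a message into cell payloads (the zero-padding of the last cell is a simplification; how padding and reassembly really work depends on the ATM adaptation layer):

```python
CELL_PAYLOAD = 48  # bytes of payload per 53-byte ATM cell (5-byte header)

def cellify(data):
    # Chop the message into 48-byte slices, one per cell.
    cells = [data[i:i + CELL_PAYLOAD]
             for i in range(0, len(data), CELL_PAYLOAD)]
    # Pad the final slice out to a full payload (simplified).
    cells[-1] = cells[-1].ljust(CELL_PAYLOAD, b"\x00")
    return cells

cells = cellify(b"x" * 100)
print(len(cells))  # 3 cells for a 100-byte message (48 + 48 + 4 padded)
```

Whether the bytes are e-mail or a video frame, the switch sees the same small, uniform cells.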


A file server is a computer in a network used to store applications and files that are shared by all the users in the network. A database server is dedicated to database processing and holds the DBMS and databases. As microprocessor architecture improves in performance, the server increasingly takes the place of the mini and mainframe. However, in its new role, the demands placed on a single-CPU microcomputer are also increasing.

One of the niftiest architectures to come down the pike in a long time is called "symmetric multiprocessing," or SMP. What's neat about it is that it provides performance scalability within the same system. It requires an SMP operating system and a computer with two or more CPUs inside. SMP simply treats all the available CPUs as a pool of resources. Whenever a process needs execution, the SMP operating system assigns a free CPU to it. When that process becomes idle waiting for input or output, that CPU is assigned to another process that needs to execute, either in a different program or within the same program if that application supports multiple threads of execution. It's a marvelous concept, because it provides a way to expand the server to handle increasing transaction volume.
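
The pool-of-resources idea shows up in miniature in any worker-pool construct. This Python sketch is an analogy only (the function and figures are invented): a fixed pool of workers, standing in for CPUs, picks up whichever task is ready next, the way an SMP operating system hands a free CPU to whichever process needs to execute.

```python
from concurrent.futures import ThreadPoolExecutor
import os

def handle_request(n):
    # Stand-in for a unit of application work.
    return n * n

# One worker per CPU: a pool of resources, not a fixed assignment.
workers = os.cpu_count() or 2
with ThreadPoolExecutor(max_workers=workers) as pool:
    # Whichever worker is free takes the next request; map still
    # returns the results in submission order.
    results = list(pool.map(handle_request, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```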

Hardware vendors such as Sequent, Pyramid and Encore pioneered SMP, but virtually every other hardware maker is now offering SMP machines. Many flavors of UNIX have been revamped for SMP, and Windows NT was designed from the get-go for it. An SMP-based NetWare is also in the offing.


Today, networking has become the backbone of every department's information systems. Long predicted to merge into one system, voice and data still travel through their own independent networks and equipment. However, the issues of network management (monitoring network flow and congestion) and systems management (software distribution and backup), along with the ever-increasing demands multimedia makes on network bandwidth, are enough of a job. Let voice take care of itself for a while. Some day, it will all merge into one digital blob. After all, we're still in our infancy with computers and networking. We sometimes tend to forget that.

Back to the Data Center?

Why is networking so difficult? If it isn't the bewildering number of products and offerings, it's the fact that they change daily. As distributed computing has become the norm, it is more difficult to manage devices spread out all over the place than it was when the processors were in the glass-enclosed datacenter.

As sure as the sun shines in the sky, in time it will all go back that way. We're spreading so much complexity out into the field that it will be necessary to rein it back in and manage it in one spot. With optical fibers and parallel processing architectures, the technology will let us do it. It may take 10 years, but when skirts go up, they can only go down. When ties get too thin, they get wide again. When we've had our fill of distributed, decentralized bedlam, we will find it a joyous occasion to go back to centralized processing with dumb terminals. With an optical fiber attached to each terminal, response times will be instantaneous. Had that technology already been in place in the late 1970s, personal computers might never have taken off.