Agencies at every level of government have a wealth of information in legacy databases that needs to be accessed or shared. Whether the agency wants to share this information internally or externally, the problem is essentially the same: making the information available in a secure, accessible way that minimizes both development time and application maintenance costs.

As the world races toward universal acceptance of the World Wide Web, many are looking to the Web as the way to solve this problem. Admittedly, the Web offers some powerful advantages: Access is nearly ubiquitous; it is platform independent; existing infrastructures can be used, often with few or no changes; and the same technology can be used for internal as well as external applications.

Despite these advantages, the protocol that makes the Web work (Hypertext Transfer Protocol, or HTTP) leaves much to be desired as a foundation for robust client/server applications. HTTP is a rather simple protocol:

* The client, typically called a browser, requests a file from a program called a Web server. Usually, the Web server is running on another computer, so the request travels over a corporate network or via the Internet. The requested document usually includes embedded Hypertext Markup Language (HTML) "tags," which describe how the document should be displayed. Some tags may also include references (links) to other documents or images.

* The Web server looks for the document and, assuming it finds it, sends it to the requesting client.

* The client reads and formats the document in accordance with the embedded HTML tags.

If the client later wants another document, the cycle above is repeated. The fundamental problem with using HTTP as the basis for robust, interactive applications is that it has no memory: HTTP provides no way of knowing whether the same user has made one request or a thousand. This design works perfectly for the Web's original purpose: providing a simple way to organize research data and make it accessible. However, it fails for most database applications, which require a "session" -- an ongoing connection between the database program and a particular user.
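
To make the cycle concrete, here is a minimal sketch in Python of two successive requests for the same document. The host name and path are placeholders; the point is that each request opens a fresh connection and carries nothing that ties it to the one before.

    # A minimal sketch of the HTTP request cycle, using Python's
    # standard library. "www.example.gov" is a placeholder host.
    import http.client

    def fetch(path):
        # Each request opens its own connection and carries nothing
        # that identifies the requester from one request to the next.
        conn = http.client.HTTPConnection("www.example.gov")
        conn.request("GET", path)
        response = conn.getresponse()
        body = response.read()          # the HTML document, tags and all
        conn.close()
        return body

    # Two requests from the same user look identical to the server.
    first = fetch("/mailing-list.html")
    second = fetch("/mailing-list.html")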

For example, suppose a user is looking at an alphabetized list of people in a mailing list and wants to scroll down to see the "next" group of names. In order to provide the "next" group, the application program needs to know which query produced the list of names and which names are currently displayed. HTTP isn't capable of this kind of interaction -- as soon as a document is delivered in response to a request, HTTP forgets everything related to the request and the requester. In more technical terms, this lack of session continuity is called a "stateless connection," meaning the server does not keep track of the "state" of the client.
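
A sketch of what the application must do instead: carry the position along with each request, typically as a parameter in the URL. The handler below is hypothetical -- the names, the page size and the parameter are invented for illustration.

    # Hypothetical handler for a "next page of names" request.
    # Everything it needs to know -- which query, which position --
    # must arrive with the request, because HTTP retains nothing.
    from urllib.parse import parse_qs

    NAMES = sorted(["Adams", "Baker", "Chen", "Diaz", "Evans", "Ford"])
    PAGE_SIZE = 2

    def next_page(query_string):
        params = parse_qs(query_string)
        # The client must send back the offset it was given last time.
        offset = int(params.get("offset", ["0"])[0])
        page = NAMES[offset:offset + PAGE_SIZE]
        # Embed the new offset in the "next" link so the state
        # survives the round trip.
        next_link = "/names?offset=%d" % (offset + PAGE_SIZE)
        return page, next_link

    # e.g. next_page("offset=2") -> (["Chen", "Diaz"], "/names?offset=4")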

Due to the statelessness of HTTP, Web-server software can operate very quickly and, with very few exceptions, doesn't need to know anything about the content of the documents it provides: It simply finds the requested document, checks file permissions and, if allowed, returns it to the requester.
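
That find-check-return loop is simple enough to sketch. The fragment below is an illustration, not an actual server: it locates the file under a placeholder document root, checks that it may be read and returns it (or an error) without ever examining its content.

    # A toy version of the Web server's core loop: find the
    # document, check that it can be read, return it or an error.
    import os

    DOC_ROOT = "/var/www"   # placeholder document root

    def serve(path):
        full_path = os.path.join(DOC_ROOT, path.lstrip("/"))
        if not os.path.isfile(full_path):
            return 404, b"Not Found"
        if not os.access(full_path, os.R_OK):
            return 403, b"Forbidden"
        with open(full_path, "rb") as f:
            return 200, f.read()   # content is opaque to the server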

Programmers and vendors have expanded the capabilities of Web-based applications by devising ways to create what might be called "pseudosessions." These solutions all amount to different ways of keeping track of values between HTTP requests. For example, these techniques can let a user enter his name in a Web-based form and store it so that the next time he requests a document (whether six seconds or six weeks later), the Web-server software appears to remember his name.
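
Hidden form fields, rewritten URLs and browser "cookies" are the usual ingredients of such pseudosessions. The sketch below, a simplified illustration rather than any particular product's mechanism, uses the cookie variant: the server asks the browser to store the name and send it back with every later request.

    # A pseudosession via cookies (hidden form fields and URL
    # rewriting work on the same principle). The server stores
    # nothing between requests; the browser carries the value back.

    def handle_form_submission(name):
        # Ask the browser to remember the name and return it
        # with every future request to this server.
        return {"Set-Cookie": "username=%s" % name}

    def handle_later_request(request_headers):
        # Six seconds or six weeks later, the cookie arrives with
        # the request, and the server appears to remember the user.
        cookie = request_headers.get("Cookie", "")
        for part in cookie.split(";"):
            key, _, value = part.strip().partition("=")
            if key == "username":
                return "Welcome back, %s" % value
        return "Hello, stranger"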

The limited functionality of Web-server software can also be expanded by installing "partner programs," which work with the Web-server software and provide functionality not present in the server itself. Here's how it works: When a request arrives for a resource a partner program handles, the Web server passes the request, along with any user-supplied data, to the program; the program does its work -- querying a legacy database, for example -- and hands its output back to the server, which returns it to the client as an ordinary document.
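
The best-known mechanism of this kind is the Common Gateway Interface (CGI), under which the server launches an external program and relays whatever the program writes to standard output back to the client. The sketch below is a minimal CGI-style partner program; the database file and table name are invented for illustration.

    # A minimal CGI-style "partner program." The Web server runs
    # this program and relays whatever it prints to the client.
    # The sqlite3 database file and table name are illustrative.
    import sqlite3

    def main():
        conn = sqlite3.connect("mailing_list.db")   # placeholder database
        rows = conn.execute("SELECT name FROM people ORDER BY name LIMIT 10")
        # CGI output: a header line, a blank line, then the document.
        print("Content-Type: text/html")
        print()
        print("<html><body><ul>")
        for (name,) in rows:
            print("<li>%s</li>" % name)
        print("</ul></body></html>")
        conn.close()

    if __name__ == "__main__":
        main()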

David Aden (DAden@webworldtech.com) is a writer from Washington, D.C.