There are so many trends, subtrends and new features in the database world that it's difficult to get a handle on who's doing what, how they're doing it and why. Gone are the simple, halcyon days of mainframe dominance and IT supremacy. Client/server and open systems mean more choices, more flexible systems -- and more levels of complexity. For anyone investigating or choosing a database management system (DBMS), the new computing world means database decisions based solely on the database software or its vendor may fall short. Interoperability, conformance to network and database standards, supported hardware platforms, operating systems, network operating systems, Web support, third-party tools and a host of other issues must be considered when evaluating any of the current database players.
Even the type of database must be considered. Most of the big players are relational, but object databases are beginning to stake out a share of the market -- although even the definition of "object databases" is open to differing interpretations.
Most of the major relational database makers have added, or are adding, object extensions to their products, so they can handle user-defined and binary datatypes such as pictures, video, faxes, etc. However, this doesn't mean that the database itself is object-oriented. True object-oriented database management systems (ODBMS) handle various object types and are themselves object-oriented. Some of the ODBMSes available today are Versant, Servio Corp.'s Gemstone, Object Design's Object Store, Ontos' ONTOS DB, Hewlett-Packard's Open ODB and UniSQL's UniSQL/X. For simplicity's sake, this article does not examine any of the object-oriented database offerings.
To understand how far the database debate has progressed, it is important to understand something of recent database history, SQL in particular.
SQL (Structured Query Language) began life in the 1970s at IBM as an English-like interface to databases. It was based on the data modeling work of E.F. Codd and was intended to be a descriptive, nonprocedural language. In short, this meant SQL was used to describe what information was wanted, and the SQL server or engine would decide how best to extract that information and present it in a standard format. In contrast, procedural languages -- such as C, BASIC or Pascal -- require the programmer to instruct the computer on each step needed to accomplish a task.
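The contrast can be made concrete with a short sketch. The snippet below uses Python's standard sqlite3 module (the table and data are hypothetical, invented purely for illustration): the declarative version states only what is wanted and lets the SQL engine choose how to filter and sort, while the procedural version spells out each step in the host language.

```python
import sqlite3

# In-memory database with a small, hypothetical sample table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary REAL)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Ada", "Eng", 95000), ("Grace", "Eng", 105000), ("Edgar", "Sales", 70000)],
)

# Declarative: describe WHAT is wanted; the engine decides how to
# scan, filter and order the rows.
rows = conn.execute(
    "SELECT name FROM employees WHERE dept = 'Eng' ORDER BY salary DESC"
).fetchall()
declarative_result = [name for (name,) in rows]
print(declarative_result)  # ['Grace', 'Ada']

# Procedural: instruct the computer on each step -- fetch every row,
# test it, collect matches, then sort them ourselves.
engineers = []
for name, dept, salary in conn.execute("SELECT * FROM employees"):
    if dept == "Eng":
        engineers.append((salary, name))
engineers.sort(reverse=True)
procedural_result = [name for _, name in engineers]
print(procedural_result)  # ['Grace', 'Ada']
```

Both approaches produce the same answer; the difference is who carries the burden of deciding *how* -- the database engine or the programmer.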
In the years since its original development, three ISO SQL standards have been issued. According to Orfali, Harkey and Edwards (see "more information" sidebar), the first standard was issued in 1986 and was revised in 1989. Now called SQL-89, it was an intersection of the SQL implementations of that time, which made it easy for existing products to conform. Because of the low threshold, compliance with this standard meant little.
SQL-92 extended the SQL-89 standard and implemented three stages of conformance -- entry, intermediate and full. Most of the big database vendors have implemented some of the SQL-92 specifications, but none have implemented the entire standard. Martin Rennhackkamp (see "more information" sidebar), in an introduction to a recent series of database comparison articles, noted at least two major areas in which many of the big databases have failed to implement the SQL-92 standard.
Although full adoption of the SQL-92 standard is not yet in sight, an even newer standard -- SQL3 -- is already under development. The SQL3 standard weighs in at a daunting 1,000 pages and is broken down into seven parts, each of which is being considered separately. Expected adoption of this standard remains years away, yet many of the issues it addresses are already being tackled by database vendors. Intense vendor competition combined with the no-holds-barred rush of developments in client/server and Internet technology is forcing database vendors to move with the times or be left behind.
As a result, proprietary extensions to SQL have been introduced by almost