The server is a multiuser computer. There is no special hardware requirement that turns a computer into a server. The hardware platform should be selected based on application demands and economics. Servers for client/server applications work best when
they are configured with an operating system that supports shared memory, application isolation, and preemptive multitasking. An operating system with preemptive multitasking enables a higher priority task to preempt or take control of the processor from a
currently executing, lower priority task.
The server provides and controls shared access to server resources. Applications on a server must be isolated from each other so that an error in one cannot damage another. Preemptive multitasking ensures that no single task can take over all the
resources of the server and prevent other tasks from providing service. There must be a means of defining the relative priority of the tasks on the server. These requirements are specific to the client/server implementation and not to the file server
implementation. Because file servers execute only the single task of file service, they can operate in a more limited operating environment without the need for application isolation and preemptive multitasking.
The traditional minicomputer and mainframe hosts have acted as de facto enterprise servers for the network of terminals they support. Because the only functionality available to the terminal user is through the host, personal productivity data as well
as corporate systems information is stored on this host server. Network services, application services, and database services are provided centrally from the host server.
Many organizations download data from legacy enterprise servers for local manipulation at workstations. In the client/server model, the definition of server will continue to include these functions, perhaps still implemented on the same or similar
platforms. Moreover, the advent of open systems based servers is facilitating the placement of services on many different platforms. Client/server computing is a phenomenon that developed from the ground up. Remote workgroups have needed to share expensive
resources and have connected their desktop workstations into local area networks (LANs). LANs have grown until they are pervasive in the organization. However, like parking lots, they are frequently isolated from one another.
Many organizations have integrated the functionality of their dumb terminals into their desktop workstations to support character mode, host-based applications from the single workstation. The next wave of client/server computing is occurring now, as
organizations of the mid-1990s begin to use the cheaper and more available processing power of the workstation as part of their enterprise systems.
The Novell Network Operating System (NOS), NetWare, is the most widely installed LAN NOS. It provides the premier file and print server support. However, a limitation of NetWare for the needs of reliable client/server applications has been the
requirement for an additional separate processor running as a database server. The availability of database server software (from companies such as Sybase and Oracle) that runs on the NetWare server is helping to defuse this limitation. With the
release of NetWare 4.x, Novell supports an enterprise LAN (that is, a thousand internetworked devices) with better support for Directory Services and TCP/IP internetworking.
DEC demonstrated the Alpha AXP processor running Processor-Independent NetWare in native mode at the PC Expo exhibit in June 1993. HP, Sun, and other vendors developing NetWare on RISC-based systems announced shipment of developer kits for availability
in early 1994. Native NetWare for RISC is scheduled for availability in late 1994. This will provide scalability for existing NetWare users who run out of capacity on their Intel platforms.
Banyan VINES provides the competitive product to Novell 4.x for enterprise LANs. Directory services are provided in VINES through a feature called StreetTalk. VINES 5.5 provides excellent WAN connectivity and is very popular among customers with a
heterogeneous mainframe and minicomputer enterprise. However, it suffers from a weak support for file and printer sharing and a general lack of application package support. Banyan's Enterprise Network Services (ENS) with StreetTalk provides the best
Directory Services implementation today. StreetTalk enables users to log into the network rather than to a server. This single logon ID enables access to all authorized servers anywhere in the network. Banyan made ENS available for NetWare 3.11 and plans
to make it available for NetWare 4.x and Microsoft's Windows NT Advanced Server.
Microsoft's LAN Manager NOS and its several derivatives (including IBM LAN Server, HP LAN Manager/UX, and DEC Pathworks) provide file and printer services but with less functionality, and more user complexity, than Novell's NetWare. The
operating systems that support LAN Manager provide the necessary shared memory, protected memory, and preemptive multitasking services necessary for reliable client/server computing. They provide this support by operating natively with the OS/2, UNIX, VMS,
and MVS operating systems. These operating systems all provide these services as part of their base functionality. The scalability of the platforms provides a real advantage for organizations building client/server, and not just file server, applications.
The lack of reasonable directory services restricts LAN Manager from the enterprise LAN role today. Microsoft has just released Advanced Server, the Windows NT version of LAN Manager. This provides a much stronger Intel platform than LAN Manager. In
conjunction with the Banyan ENS, Advanced Server is a strong competitor to Novell's NetWare as the preferred NOS.
Network File System (NFS) is the standard UNIX support for shared files and printers. NFS provides another option for file and print services to client workstations with access to a UNIX server. PC NFS is the PC product that runs on the client and
provides connectivity to the NFS file services under UNIX. NFS with TCP/IP provides the additional advantage of easy-to-use support for remote files and printers.
Novell and NFS can interoperate effectively because of the increasing support for TCP/IP as a LAN and WAN protocol. Recent announcements by IBM and Microsoft of alliances with Novell and Banyan promise a future in which all of the features of each NOS
will be selectively available to everyone. Until these products improve their capability to work together, organizations still have the challenge of determining which NOS to select. Most will choose to use NetWare plus Windows clients with OS/2, UNIX, VMS,
or MVS servers for their client/server applications. There will be a significant increase during 1994-95 in the use of NFS-based servers, with support now available on all major UNIX platforms as well as OS/2, MVS, and VMS.
There is no preeminent hardware technology for the server. The primary characteristic of the server is its support for multiple simultaneous client requests for service. Therefore, the server must provide multitasking support and shared memory
services. High-end Intel, RISC (including Sun SPARC, IBM/Motorola PowerPC, HP PA RISC, SGI MIPS, and DEC Alpha), IBM System/370, and DEC VAX processors are all candidates for the server platform. The server is responsible for managing the server-requester
interface so that an individual client request response is synchronized and directed back only to the client requester. This implies both security when authorizing access to a service and integrity of the response to the request.
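A minimal sketch of this request/response discipline, using a threaded TCP service in Python, shows how a per-connection task keeps each response synchronized with, and directed back only to, its requester. The acknowledgment protocol here is invented for illustration:

```python
import socket
import socketserver
import threading

class RequestHandler(socketserver.StreamRequestHandler):
    """Each client connection is serviced by its own task (thread).

    The per-connection socket guarantees that the response is
    directed back only to the client that issued the request.
    """
    def handle(self):
        request = self.rfile.readline().strip()
        # A real server would authorize the request here (security)
        # before performing the service (integrity of the response).
        self.wfile.write(b"ACK:" + request + b"\n")

class Server(socketserver.ThreadingTCPServer):
    allow_reuse_address = True  # convenient for quick restarts in examples

def serve_once():
    """Start the server on a free local port; return it for shutdown."""
    server = Server(("127.0.0.1", 0), RequestHandler)  # port 0: any free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    server = serve_once()
    host, port = server.server_address
    with socket.create_connection((host, port)) as sock:
        sock.sendall(b"PRINT job-1\n")
        print(sock.makefile().readline().strip())  # ACK:PRINT job-1
    server.shutdown()
```

Because each handler owns its connection, several clients can be serviced simultaneously without their replies being interleaved.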
With object-oriented technology (OOT) increasingly used to build operating systems and development environments, servers are becoming ubiquitous (anything, anywhere, and anytime) and transparent in technology and location to the user and developer.
NeXTSTEP provides the only production-ready model of what will be the dominant developer model in 1995 and beyond. Sun's DOE implementation of the OMG-defined CORBA standards provides a view of the future role of the object server. This is the first
implementation of the vision of the original OOT scientists. The future promises applications assembled from object repositories containing the intellectual property of a business combined with commercial objects made available by OOT developers executing
on servers somewhere.
Servers provide application, file, database, print, fax, image, communications, security, systems, and network management services. These are each described in some detail in the following sections.
It is important to understand that a server is an architectural concept, not a physical implementation description. Client and server functions can be provided by the same physical device. With the movement toward peer computing, every device will
potentially operate as a client and server in response to requests for service.
Application servers provide business functionality to support the operation of the client workstation. In the client/server model these services can be provided for an entire or partial business function invoked through an InterProcess Communication
(IPC) request for service. Either message-based requests (à la OLTP) or RPCs can be used. A collection of application servers may work in concert to provide an entire business function. For example, in a payroll system the employee information
may be managed by one application server, earnings calculated by another application server, and deductions calculated by a third application server. These servers may run different operating systems on various hardware platforms and may use different
database servers. The client application invokes these services without consideration of the technology or geographic location of the various servers. Object technology provides the technical basis for the application server, and widespread acceptance of
the CORBA standards is ensuring the viability of this trend. File servers provide record level data services to nondatabase applications.
Space for storage is allocated, and free space is managed by the file server. Catalog functions are provided by the file server to support file naming and directory structure. Filename maximum length ranges from 8 to 256 characters, depending on the
particular server operating system support. Stored programs are typically loaded from a file server for execution on a client or host server platform.
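Record-level data service of this kind can be sketched with fixed-length records and direct, seek-based access; the 32-byte record size and the field layout are assumptions for illustration:

```python
import io

RECORD_SIZE = 32  # fixed-length records; an arbitrary choice for illustration

def write_record(f, record_number, data: bytes):
    """Store one record in its allocated slot, padding to the fixed length."""
    assert len(data) <= RECORD_SIZE
    f.seek(record_number * RECORD_SIZE)
    f.write(data.ljust(RECORD_SIZE, b"\x00"))

def read_record(f, record_number) -> bytes:
    """Direct access: seek straight to the record, with no scan of the file."""
    f.seek(record_number * RECORD_SIZE)
    return f.read(RECORD_SIZE).rstrip(b"\x00")

if __name__ == "__main__":
    f = io.BytesIO()                       # stands in for server disk storage
    write_record(f, 0, b"SMITH|PAYROLL")
    write_record(f, 2, b"JONES|LEDGER")    # slot 1 left free for later use
    print(read_record(f, 2))               # b'JONES|LEDGER'
```

The catalog and free-space management a real file server performs amount to bookkeeping over which slots are allocated, on top of this record-number arithmetic.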
Database servers are managed by a database engine such as Sybase, IBM, Ingres, Informix, or Oracle. The file server provides the initial space, and the database engine allocates space for tables within the space provided by the file server. These host
services are responsible for providing the specialized data services required of a database product: automatic backout and recovery after power, hardware, or software failure; space management within the file; database reorganization; record locking; and
deadlock detection and management. Print servers provide support to receive client documents, queue them for printing, prioritize them, and execute the specific print driver logic required for the selected printer. The print server software must have the
necessary logic to support the unique characteristics of each printer. Effective print server support will include error recovery for jams and operator notification of errors with instructions for restart.
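The queueing, prioritization, and dispatch described above can be sketched as follows; the lower-number-first priority convention is an assumption:

```python
import queue

class PrintServer:
    """Accepts documents from many clients and releases them by priority."""
    def __init__(self):
        self._jobs = queue.PriorityQueue()
        self._sequence = 0  # tie-breaker: equal priorities print in arrival order

    def submit(self, priority: int, client: str, document: str):
        """Accept a client document and queue it for printing."""
        self._jobs.put((priority, self._sequence, client, document))
        self._sequence += 1

    def next_job(self):
        """Called when the printer becomes available; returns the top job."""
        priority, _, client, document = self._jobs.get()
        return client, document

if __name__ == "__main__":
    server = PrintServer()
    server.submit(5, "ws-12", "monthly-report")
    server.submit(1, "ws-03", "invoice")    # higher priority (lower number)
    print(server.next_job())                # ('ws-03', 'invoice')
```

The driver logic and error recovery for jams would hang off `next_job`, once the queued document has been matched to a specific printer.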
Fax servers provide support similar to that provided by print servers. In addition, fax servers queue up outgoing faxes for later distribution when communications charges are lower. Because fax documents are distributed in compressed form using either
Group III or Group IV compression, the fax server must be capable of dynamically compressing and decompressing documents for distribution, printing, and display. This operation is usually done through the addition of a fax card to the server. If faxing is
rare, software-based compression and decompression can be used instead of dedicated hardware. Image servers operate in a manner similar to fax servers.
Communications servers provide support for wide area network (WAN) communications. This support typically includes support for a subset of IBM Systems Network Architecture (SNA), asynchronous protocols, X.25, ISDN, TCP/IP, OSI, and LAN-to-LAN NetBIOS
communication protocols. In the Novell NetWare implementation, Gateway Communications provides a leading communications product. In the LAN Server and LAN Manager environments, OS/2 communications server products are available from IBM and DCA. In the
Banyan VINES environment, the addition of DCA products to VINES provides support for SNA connectivity. UNIX servers provide a range of product add-ons from various vendors to support the entire range of communications requirements. VMS servers support
DECnet, TCP/IP, and SNA as well as various asynchronous and serial communications protocols. MVS servers provide support for SNA, TCP/IP, and some support for other asynchronous communications.
Security at the server restricts access to software and data accessed from the server. Communications access is controlled from the communications server. In most implementations, the use of a user login ID is the primary means of security. Using LAN
Server, some organizations have implemented integrated Resource Access Control Facility (RACF) security by creating profiles in the MVS environment and downloading them to the LAN Server for domain control. Systems and network management services for the
local LAN are managed by a LAN administrator, but WAN services must be provided from some central location. Typically, remote LAN management is done from the central data center site by trained MIS personnel. This issue is discussed in more detail later in this book.
The discussion in the following sections more specifically describes the functions provided by the server in a NOS environment.
Requests are issued by a client to the NOS services software resident on the client machine. These services format the request into an appropriate RPC and issue the request to the application layer of the client protocol stack. This request is received
by the application layer of the protocol stack on the server.
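This request flow can be sketched with a simple marshalling layer; the JSON wire format and the "add" service below are invented for illustration and do not represent any particular NOS's RPC encoding:

```python
import json

def marshal_request(service: str, **arguments) -> bytes:
    """Client side: NOS services format the request into an RPC message
    and hand it to the application layer of the protocol stack."""
    return json.dumps({"service": service, "args": arguments}).encode()

def dispatch(message: bytes, services: dict) -> bytes:
    """Server side: the application layer unpacks the RPC and invokes
    the named service, returning a marshalled reply."""
    request = json.loads(message)
    result = services[request["service"]](**request["args"])
    return json.dumps({"result": result}).encode()

if __name__ == "__main__":
    services = {"add": lambda a, b: a + b}   # a stand-in application service
    wire = marshal_request("add", a=2, b=3)  # travels down the protocol stack
    reply = dispatch(wire, services)
    print(json.loads(reply)["result"])       # 5
```

Everything between `marshal_request` and `dispatch` (transport, routing, retries) is the protocol stack's job; the client sees only a procedure call.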
File services handle access to the virtual directories and files located on the client workstation and to the server's permanent storage. These services are provided through the redirection software implemented as part of the client workstation
operating environment. As Chapter 3 described, all requests are mapped into the virtual pool of resources and redirected as necessary to the appropriate local or remote server. The file services provide this support at the remote server processor. In the
typical implementation, software, shared data, databases, and backups are stored on disk, tape, and optical storage devices that are managed by the file server.
To minimize the effort and effect of installation and maintenance of software, software should be loaded from the server for execution on the client. New versions can be updated on the server and made immediately available to all users. In addition,
installation in a central location reduces the effort required for each workstation user to handle the installation process. Because each client workstation user uses the same installation of the software, optional parameters are consistent, and remote
help desk operators are aware of them. This simplifies the analysis that must occur to provide support. Sharing information, such as word processing documents, is easier when everyone is at the same release level and uses the same default setup within the
software. Central productivity services such as style sheets and macros can be set up for general use. Most personal productivity products do permit local parameters such as colors, default printers, and so forth to be set locally as well.
Backups of the server can be scheduled and monitored by a trained support person. Backups of client workstations can be scheduled from the server, and data can be stored at the server to facilitate recovery. Tape or optical backup units are typically
used for backup; these devices can readily provide support for many users. Placing the server and its backups in a secure location helps prevent theft or accidental destruction of backups. A central location is readily monitored by a support person who
ensures that the backup functions are completed. With more organizations looking at multimedia and image technology, large optical storage devices are most appropriately implemented as shared servers.
High-quality printers, workstation-generated faxes, and plotters are natural candidates for support from a shared server. The server can accept input from many clients, queue it according to the priority of the request and handle it when the device is
available. Many organizations realize substantial savings by enabling users to generate fax output from their workstations and queue it at a fax server for transmission when the communication costs are lower. Incoming faxes can be queued at the server and
transmitted to the appropriate client either on receipt or on request. In concert with workflow management techniques, images can be captured and distributed to the appropriate client workstation from the image server. In the client/server model, work
queues are maintained at the server by a supervisor in concert with default algorithms that determine how to distribute the queued work.
Incoming paper mail can be converted to image form in the mail room and sent to the appropriate client through the LAN rather than through interoffice mail. Centralized capture and distribution enable images to be centrally indexed. This index can be
maintained by the database services for all authorized users to query. In this way, images are captured once and are available for distribution immediately to all authorized users. Well-defined standards for electronic document management will allow this
technology to become fully integrated into the desktop work environment. There are dramatic opportunities for cost savings and improvements in efficiency if this technology is properly implemented and used. Chapter 10 discusses in more detail the issues of
electronic document management.
Early database servers were actually file servers with a different interface. Products such as dBASE, Clipper, FoxPro, and Paradox execute the database engine primarily on the client machine and use the file services provided by the file server for
record access and free space management. These are new and more powerful implementations of the original flat-file models with extracted indexes for direct record access. Concurrency control is managed by the application program, which issues lock requests
and lock checks, and by the database server, which creates a lock table that is interrogated whenever a record access lock check is generated. Because access is at the record level, all records satisfying the primary key must be returned to the client
workstation for filtering. There are no facilities to execute procedural code at the server, to execute joins, or to filter rows prior to returning them to the workstation. This lack of capability dramatically increases the likelihood of records being
locked when several clients are accessing the same database and increases network traffic when many unnecessary rows are returned to the workstation only to be rejected.
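The network-traffic penalty described above can be made concrete with a small comparison; the table and the selection criteria are invented for the example:

```python
# Invented sample data: (employee, department, salary)
ROWS = [("able", "sales", 30000), ("baker", "sales", 45000),
        ("chen", "mfg", 52000), ("diaz", "sales", 61000)]

def file_server_access(rows, department):
    """Record-level model: every candidate record crosses the network,
    and the client workstation does the filtering."""
    shipped = list(rows)                        # all records sent to client
    kept = [r for r in shipped if r[1] == department and r[2] > 40000]
    return kept, len(shipped)

def database_server_access(rows, department):
    """Database server model: the engine filters rows before returning them."""
    kept = [r for r in rows if r[1] == department and r[2] > 40000]
    return kept, len(kept)                      # only the answer set is shipped

if __name__ == "__main__":
    _, shipped_file = file_server_access(ROWS, "sales")
    _, shipped_db = database_server_access(ROWS, "sales")
    print(shipped_file, shipped_db)             # 4 2
```

Both paths produce the same answer set; the difference is how many rows travel the network and how long records stay locked while they do.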
The lack of server execution logic prevents these products from providing automatic partial update backout and recovery after an application, system, or hardware failure. For this reason, systems that operate in this environment require an experienced
system support programmer to assist in the recovery after a failure. When the applications are very straightforward and require only a single row to be updated in each interaction, this recovery issue does not arise. However, many client/server
applications are required to update more than a single row as part of one logical unit of work.
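The multi-row logical unit of work, with automatic backout of a partial update, can be sketched using SQLite as a stand-in for a client/server database engine; the two-account transfer is invented for the example:

```python
import sqlite3

def make_accounts():
    """Invented two-account ledger standing in for server data."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
    conn.execute("INSERT INTO accounts VALUES ('payroll', 1000), ('taxes', 0)")
    conn.commit()
    return conn

def transfer(conn, amount, fail=False):
    """One logical unit of work that must update more than a single row."""
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? "
                     "WHERE name = 'payroll'", (amount,))
        if fail:
            raise RuntimeError("simulated failure mid-transaction")
        conn.execute("UPDATE accounts SET balance = balance + ? "
                     "WHERE name = 'taxes'", (amount,))
        conn.commit()
    except RuntimeError:
        conn.rollback()  # automatic backout: the first UPDATE is undone too

if __name__ == "__main__":
    conn = make_accounts()
    transfer(conn, 200, fail=True)   # partial update is backed out
    transfer(conn, 200)              # the complete unit of work commits
    print(conn.execute("SELECT name, balance FROM accounts "
                       "ORDER BY name").fetchall())
    # [('payroll', 800), ('taxes', 200)]
```

With a file-based engine, the failed first call would leave the payroll row debited and the taxes row untouched, and a support programmer would have to repair it by hand.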
Client/server database engines such as Sybase, IBM's Database Manager, Ingres, Oracle, and Informix provide support at the server to execute SQL requests issued from the client workstation. The file services are still used for space allocation and
basic directory services, but all other services are provided directly by the database server. Relational database management systems are the current technology for data management. Figure 4.1 charts the evolution of database technology from the first
computers in the late 1950s to the object-oriented database technologies that are becoming prevalent in the mid-1990s.
Figure 4.1. Database trends.
Database technology has evolved from the early 1960s' flat-file view when data was provided through punch cards or disk files simulating punch cards. These original implementations physically stored data columns and records according to the user view.
The next column in the user view was the next column in the physical record, and the next record in the user view was the next physically stored record. Sorting the physical records provided the means by which a user was presented with a different view of
related records. Columns were eliminated from view by copying the records from one location to another without the unnecessary columns. Many organizations today still use the flat-file approach to data management for reporting and batch update input. Data
is extracted and sorted for efficient input to a batch report. Data is captured for update and sorted for more efficient input to a batch update program.
The second generation of database technology, the hierarchical database, could store related record types physically or logically next to each other. In the hierarchical model implementation, when a user accesses a physical record type, other
application-related data is usually stored physically close and will be moved from disk to DRAM all together. Internally stored pointers are used to navigate from one record to the next if there is insufficient space close by at data creation time to
insert the related data. Products such as IMS and IDMS implemented this technique very successfully in the early 1970s. Many organizations continue to use database applications built to use this technology.
The major disadvantage with the hierarchical technique is that only applications that access data according to its physical storage sequence benefit from locality of reference. Changes to application requirements that necessitate a different access
approach require the data to be reorganized. This process, which involves reading, sorting, and rewriting the database into a new sequence, is not transparent to applications that rely on the original physical sequence. Indexes that provide direct access
into the database provide the capability to view and access the information in a sequence other than the physical sequence. However, these indexes must be known to the user at the time the application is developed. The developer explicitly references the
index to get to the data of interest. Thus, indexes cannot be added later without changing all programs that need this access to use the index directly. Indexes cannot be removed without changing programs that currently access the index. Most
implementations force the application developer to be sensitive to the ordering and occurrence of columns within the record. Thus, columns cannot be added or removed without changing all programs that are sensitive to these records.
Application sensitivity to physical implementation is the main problem with hierarchical database systems. Application sensitivity to physical storage introduced considerable complexity into the navigation as application programmers traverse the
hierarchy in search of their desired data. Attempts by database vendors to improve performance have usually increased the complexity of access. If life is too easy today, try to create a bidirectional, virtually paired IMS logical relationship; that is
why organizations using products such as IMS and IDMS usually have highly paid database technical support staff.
As hardware technology evolves, it is important for the data management capabilities to evolve to use the new capabilities. Figure 4.2 summarizes the current essential characteristics of the database world. The relational database is the de facto
standard today; therefore, investment by vendors will be in products that target and support fully compliant SQL databases.
Figure 4.2. Database essentials.
Relational database technology provides the current data management solution to many of the problems inherent in the flat-file and hierarchical technologies. In the late 1970s and early 1980s, products such as Software AG's ADABAS and System 2000 were
introduced in an attempt to provide the application flexibility demanded by the systems of the day. IBM with IMS and Cullinet with IDMS attempted to add features to their products to increase this flexibility. The first relational products were introduced
by ADR with Datacom DB and Computer Corporation of America with Model 204.
Each of these implementations used extracted indexes to provide direct access to stored data without navigating the database or sorting flat files. All the products attempted to maintain some of the performance advantages afforded by locality of
reference (storage of related columns and records as close as possible to the primary column and record).
Datacom and Model 204 introduced, for the first time, the Structured Query Language (SQL). SQL was developed in the early 1970s at IBM's San Jose research laboratory, building on the relational model defined by E. F. (Ted) Codd. The primary design objective behind SQL was to provide a
data access language that could be shown mathematically to manipulate the desired data correctly. The secondary objective was to remove any sense of the physical storage of data from the view of the user. SQL is another flat-file implementation; there are
no embedded pointers. SQL uses extracted indexes to provide direct access to the rows (records) of the tables (files) of interest. Each column (field) may be used as part of the search criteria.
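This value-based, any-column access is easy to demonstrate in any SQL engine; the sketch below uses SQLite and an invented employee table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER, name TEXT, dept TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                 [(1, "able", "sales"), (2, "baker", "mfg"), (3, "chen", "sales")])

# Any column (field) can be the search criterion -- no pointers, no navigation.
by_dept = conn.execute(
    "SELECT name FROM employees WHERE dept = ?", ("sales",)).fetchall()

# An extracted index can be added later without changing the query above.
conn.execute("CREATE INDEX idx_dept ON employees (dept)")

if __name__ == "__main__":
    print([name for (name,) in by_dept])  # ['able', 'chen']
```

Note that the query text is identical with or without the index; the engine, not the developer, decides whether to use it. This is the physical independence the hierarchical products lacked.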
SQL provides (especially with SQL2 extensions) a very powerful data access language. Its algebra provides all the necessary syntax to define, secure, and access information in an SQL database. The elegance of the language intrigued the user and vendor
community to the extent that standards committees picked up the language and defined a set of standards around the language syntax. SQL1 and SQL2 define an exact syntax and a set of results for each operation. As a consequence, many software vendors have
developed products that implement SQL. This standardization will eventually enable users to treat these products as commodities in the same way that PC hardware running DOS has become a commodity. Each engine will soon be capable of executing the same set
of SQL requests and producing the same result. The products will then be differentiated based on their performance, cost, support, platform availability, and recovery-restart capabilities.
Dr. Codd has published a list of 12 rules that every relational database engine should adhere to in order to be truly compliant. No products today can meet all of these criteria. The criteria, however, provide a useful objective set for the standards
committees and vendors to strive for. We have defined another set of product standards that we are using to evaluate SQL database engines for the development of client/server applications. In particular, products should be implemented with support for the
following products and standards:
Production-capable client/server database engines must be able to provide a similar operational environment to that found in the database engines present in minicomputer and mainframe computers today. Capabilities for comparison include performance,
auditability, and recovery techniques. In particular, the following DBMS features must be included in the database engine:
In the client/server implementation, you should offload database processing to the server. Therefore, the database engine should accept SQL requests from the client and execute them totally on the server, returning only the answer set to the client
requestor. The database engine should provide support for stored procedures or triggers that run on the server.
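Server-executed logic of this kind can be sketched with a trigger; SQLite stands in for a production engine here, and the audit-table design is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE salaries (name TEXT PRIMARY KEY, amount INTEGER);
    CREATE TABLE audit (name TEXT, old_amount INTEGER, new_amount INTEGER);

    -- Server-side logic: fires inside the engine, not in any client program.
    CREATE TRIGGER audit_salary AFTER UPDATE ON salaries
    BEGIN
        INSERT INTO audit VALUES (OLD.name, OLD.amount, NEW.amount);
    END;
""")
conn.execute("INSERT INTO salaries VALUES ('able', 30000)")
conn.execute("UPDATE salaries SET amount = 32000 WHERE name = 'able'")

if __name__ == "__main__":
    print(conn.execute("SELECT * FROM audit").fetchall())
    # [('able', 30000, 32000)]
```

The client issued only an UPDATE; the audit row was produced entirely on the server, which is the point of offloading processing in the client/server model.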
The client/server model implies that there will be multiple concurrent users. The database engine must be able to manage their access without requiring every developer to write well-behaved applications. The following features must be part of the database engine:
With the increasing maturity and popularity of OOTs for development, there has been a significant increase in maturity and acceptance of object-oriented database management systems (OODBMS). Object-oriented database management systems provide support
for complex data structures such as compound documents, CASE entity-relationship models, financial models, and CAD/CAM drawings. OODBMS proponents claim that relational database management systems (RDBMS) can handle only simple data structures (such as
tables) and simple transaction-processing applications that only need to create views combining a small number of tables. OODBMS proponents argue that there is a large class of problems that need to be and will be more simply implemented if more complex
data structures can be viewed directly. RDBMS vendors agree with the need to support these data structures but argue that the issue is one of implementation, not architecture.
Relational databases are characterized by a simple data structure. All access to data and relationships between tables are based on values. A data value occurrence is uniquely determined by the concatenation of the table name, column name, and the
value of the unique identifier of the row (the primary key). Relationships between tables are determined by a common occurrence of the primary key values. Applications build a view of information from tables by doing a join based on the common values. The
result of the join is another table that contains a combination of column values from the tables involved in the join.
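A value-based join of this kind can be sketched in a few lines; the employee and salary tables are invented, and the extracted index on the join key mirrors how an engine avoids scanning the inner table:

```python
# Invented tables: relationships exist only through common values (emp_id).
EMPLOYEES = [{"emp_id": 1, "name": "able"}, {"emp_id": 2, "name": "baker"}]
SALARIES  = [{"emp_id": 1, "amount": 30000}, {"emp_id": 2, "amount": 45000}]

def join(left, right, key):
    """Value-based join: the result is simply another table (list of rows)."""
    index = {row[key]: row for row in right}   # extracted index on the join key
    return [{**l, **index[l[key]]} for l in left if l[key] in index]

if __name__ == "__main__":
    view = join(EMPLOYEES, SALARIES, "emp_id")
    print(view[0])  # {'emp_id': 1, 'name': 'able', 'amount': 30000}
```

No pointer connects the two tables; the relationship is discovered at join time from the common `emp_id` values, which is exactly the relational model's simplification.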
The development of a relational algebra defining the operations that can be performed between tables has enabled efficient implementations of RDBMSs. The establishment of industry standards for the definition of and access to relational tables has
speeded the acceptance of RDBMSs as the de facto standard for all client/server applications today. Similar standards do not yet exist for OODBMSs. There is a place for both models. To be widely used, OODBMSs need to integrate transparently with RDBMS
technology. Table 4.1 compares the terminology used by RDBMS and OODBMS proponents.
Table 4.1. RDBMS and OODBMS terminology: collection of rows in a table; table definition (user type extension); stored procedure (extension); Transact SQL, PL/SQL, and stored procedures.
There remain some applications for which RDBMSs have not achieved acceptable performance. Primarily, these are applications that require very complex data structures. Thousands of tables may be defined with many relationships among them. Frequently,
the rows are sparsely populated, and the applications typically require many rows to be linked, often recursively, to produce the necessary view.
The major vendors in this market are Objectivity Inc., Object Design, Ontos, and Versant. Other vendors such as HP, Borland, and Ingres have incorporated object features into their products.
The application characteristics that lead to an OODBMS choice are shown in Figure 4.3. OODBMSs will become production capable for these types of applications with the introduction of 16Mbit D-RAM and the creation of persistent (permanent)
databases in D-RAM. Only the logging functions will use real I/O. Periodically, D-RAM databases will be backed up to real magnetic or optical disk storage. During 1993, a significant number of production OODBMS applications were implemented. With the
confidence and experience gained from these applications, the momentum is building, and 1994 and 1995 will see a significant increase in the use of OODBMSs for business critical applications. OODBMSs have reached a maturity level coincident with the demand
for multimedia-enabled applications. The complexities of dealing with multimedia demand the features of an OODBMS for effective storage and manipulation.
Figure 4.3. Object-oriented database.
To enable more complex data types to be manipulated by a single command, OODBMSs provide encapsulated processing logic with the object definition.
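The idea of encapsulating processing logic with the object definition can be sketched as follows. This is a conceptual illustration in Python, not any OODBMS vendor's actual API; the class and method names are invented.

```python
# Sketch of the OODBMS idea: processing logic is encapsulated with the
# object definition, rather than held in separate application code.
class Document:
    def __init__(self, title, pages):
        self.title = title
        self.pages = pages  # a complex, nested data structure lives with the object

    # Behavior travels with the data: one request manipulates the whole
    # complex structure instead of the application issuing many commands.
    def word_count(self):
        return sum(len(p.split()) for p in self.pages)

doc = Document("Spec", ["alpha beta", "gamma delta epsilon"])
print(doc.word_count())  # 5
```

A single call such as `doc.word_count()` replaces the sequence of commands a relational application would need to traverse the equivalent linked rows.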
Client/server applications require LAN and WAN communication services. Basic LAN services are integral to the NOS. WAN services are provided by various communications server products. Chapter 5 provides a complete discussion of connectivity issues in
the client/server model.
Client/server applications require security services similar to those provided by host environments. Every user should be required to log in with a user ID and password. If passwords might become visible to unauthorized users, the security server
should insist that passwords be changed regularly. The enterprise on the desk implies that a single logon ID and logon sequence gives the user authority, once, to access all information and processes for which he or she has a need and right of access. Because
data may be stored in a less physically secure area, the option should exist to store data in an encrypted form. A combination of the user ID and password should be required to decrypt the data.
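The combination of user ID and password as the basis for decryption can be sketched as a key-derivation step. This is a hedged illustration: PBKDF2 from Python's standard library stands in for whatever algorithm (such as the DES-based schemes mentioned below) a real product would use, and the salt value is an assumption of the example.

```python
import hashlib

# Illustrative sketch: derive an encryption key from the combination of
# user ID and password. The fixed salt is for demonstration only; a real
# system would store a per-user random salt.
def derive_key(user_id: str, password: str, salt: bytes = b"example-salt") -> bytes:
    material = (user_id + ":" + password).encode()
    return hashlib.pbkdf2_hmac("sha256", material, salt, 100_000)

key = derive_key("psmith", "s3cret")
# The same ID/password pair always yields the same key;
# either value alone, or a wrong value, yields a different key.
assert key == derive_key("psmith", "s3cret")
assert key != derive_key("psmith", "other")
```

Because the key is derived from both values, stored ciphertext is useless without the correct ID and password pair.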
New options, such as floppyless workstations with integrated data encryption standard (DES) coprocessors, are available from vendors such as Beaver Computer Company. These products automatically encrypt or decrypt data written or read to disk or a
communication line. The encryption and decryption are done using the DES algorithm and the user password. This ensures that no unauthorized user can access stored data or communications data. This type of security is particularly useful for laptop
computers participating in client/server applications, because laptops do not operate in surroundings with the same physical security as an office. To access the system from a laptop without properly supplying an ID and password would be impossible.
The network operating system (NOS) provides the services not available from the client OS.
NetWare is a family of LAN products with support for IBM PC-compatible and Apple Macintosh clients, and IBM PC-compatible servers. NetWare is a proprietary NOS in the strict sense that it does not require another OS, such as DOS, Windows, Windows NT,
OS/2, Mac System 7, or UNIX to run on a server. A separate Novell product, Portable NetWare for UNIX, provides server support for leading RISC-based UNIX implementations, IBM PC-compatible systems running Windows NT, OS/2, high-end Apple Macs
running Mac System 7, and Digital Equipment Corporation VAXs running VMS.
NetWare provides the premier LAN environment for file and printer resource sharing. It had 62 percent of the market share in 1993. It is widely installed as the standard product in many organizations. NetWare is the original LAN NOS for the PC world.
As such, it incorporates many of the ease-of-use features required for sharing printers, data, software, and communications lines. Agreements between Novell and IBM to remarket the product and provide links between NetWare and the LAN Server product
confirm the commitment to Novell NetWare's use within large organizations. Figure 4.4 shows the major components of the NetWare architecture, illustrating client and server functions.
Figure 4.4. NetWare architecture.
Novell has committed to move NetWare to an open architecture. Through the use of open protocol technology (OPT), Novell makes NetWare fully network protocol independent. Two standardized interfaces, the open datalink interface (ODI) and NetWare
Streams, enable other vendors to develop products for the NetWare environment. This facilitates its integration into other platforms. Figure 4.5 outlines the NetWare open architecture. The diagram also illustrates the wide range of connectivity
supported by NetWare. Client workstations can use Mac System 7, OS/2, DOS, Windows, Windows NT, NetWare, or UNIX NFS operating environments. OS/2, Windows NT, and UNIX servers may be installed on the same LAN as NetWare servers to provide support for
products that require these platforms. Novell's purchase of USL from AT&T has increased its commitment to early support for native UNIX servers. HP, Sun, DEC, and Novell have announced an agreement to port NetWare to their respective UNIX platforms.
Novell has won the battle to be the standard for the file/print server in the LAN environment.
Figure 4.5. NetWare open services.
Novell's published goal is to provide NetWare services totally independent of network media, network transport protocols, client/server protocols, and server and client operating systems, at each layer of network design.
NetWare has benefitted from its high performance and low resource requirements as much as it has from its relative ease of use. This performance has been provided through the use of a proprietary operating system and network protocols. Even though this
has given Novell an advantage in performance, it has caused difficulties in the implementation of application and database servers in the Novell LAN. Standard applications cannot run on the server processor, because NetWare does not provide compatible
APIs. Instead, NetWare provides a high-performance capability called a NetWare Loadable Module (NLM) that enables database servers such as Sybase and Oracle, and communications servers such as those Gateway Communications provides, to be linked into the NetWare
NOS. In addition, the tailored operating environment does not provide some system features, such as storage protection and multitasking, in the same fundamental way that OS/2 and UNIX do. However, Novell is committed to address these issues by supporting
the use of UNIX, OPENVMS, OS/2, and Windows NT as native operating environments.
With the release of NetWare 4.0, Novell addressed the serious issue of enterprise computing with improved NetWare Directory Services (NDS), one-thousand-node domains, and LAN/WAN support for TCP/IP. Native NetWare 4.x will be available to developers in
early 1994 and production ready by the end of 1994. For the other end of the product range, Novell released NetWare Lite in 1993 to address the small business and simple workgroup requirements of LANs with five or fewer workstations. This enables
organizations to remain with NetWare as the single LAN technology everywhere. Clearly, Novell's pitch is that systems management and administration are greatly simplified with the single standard of "NetWare Everywhere."
LAN Manager and its IBM derivative, LAN Server, are the standard products for use in client/server implementations using OS/2 as the server operating system. LAN Manager/X is the standard product for client/server implementations using UNIX System V as
the server operating system. Microsoft released its Advanced Server product with Windows NT in the third quarter of 1993. During 1994, it will be enhanced with support for the Microsoft network management services, currently referred to as
"Hermes," and Banyan's Enterprise Network Services (ENS). Advanced Server is the natural migration path for existing Microsoft LAN Manager and IBM LAN Server customers. Existing LAN Manager/X customers probably won't find Advanced Server an
answer to their dreams before 1995.
AT&T has taken over responsibility for the LAN Manager/X version. Vendors such as Hewlett-Packard (HP) have relicensed the product from AT&T. AT&T and Microsoft have an agreement to maintain compatible APIs for all base functionality.
LAN Manager and Advanced Server provide client support for DOS, Windows, Windows NT, OS/2, and Mac System 7. Server support extends to NetWare, AppleTalk, UNIX, Windows NT, and OS/2. Client workstations can access data from both NetWare and LAN Manager
servers at the same time. LAN Manager supports NetBIOS and Named Pipes LAN communications between clients and OS/2 servers. Redirection services are provided to map files and printers from remote workstations for client use.
Advanced Server also supports TCP/IP communication. In early 1994, Advanced Server still will be a young product with many missing pieces. Even more troublesome, competitiveness between Microsoft and Novell is delaying the release of client requestor
software and NetWare Core Protocol (NCP) support. Microsoft has added TCP/IP support to LAN Manager 2.1 and Advanced Server along with NetView and Simple Network Management Protocol (SNMP) agents. Thus, the tools are in place to provide remote LAN
management for LAN Manager LANs. Microsoft has announced support for IBM NetView 6000 for Advanced Server management.
Advanced Server provides integrated support for peer-to-peer processing and client/server applications. Existing support for Windows NT, OS/2, UNIX, and Mac System 7 clients lets application, database, and communication servers run on the same machine
as the file and print server. This feature is attractive in small LANs. The native operating system support for preemptive multitasking and storage protection ensures that these server applications do not reduce the reliability of other services. Even as
Windows NT is rolled out to provide the database, application, and communications services to client/server applications, the use of Novell as the LAN NOS of choice will continue for peripheral resource sharing applications.
Microsoft has attempted to preempt the small LAN market with its Windows for Workgroups (WfW) product. This attacks the same market as NetWare Lite with a low-cost product that is tightly integrated with Windows. It is an attractive option for small
organizations without a requirement for larger LANs. The complexities of systems management make it less attractive in an enterprise environment already using Novell. WfW can be used in conjunction with Novell for a workgroup wishing to use some WfW
services, such as group scheduling.
IBM has entered into an agreement to resell and integrate the Novell NetWare product into environments where both IBM LAN Server and Novell NetWare are required. NetWare provides more functional, easier-to-use, and higher-performance file and print
services. In environments where these are the only LAN functions, NetWare is preferable to LAN Manager derivatives. The capability to interconnect to the SNA world makes the IBM product LAN Server attractive to organizations that prefer to run both
products. Most large organizations have department workgroups that require only the services that Novell provides well but may use LAN Server for client/server applications using SNA services such as APPN.
IBM and Microsoft had an agreement to make the APIs for the two products equivalent. However, the dispute between the two companies over Windows 3.x and OS/2 has ended this cooperation. The most recent releases of LAN Manager NT 3 and LAN Server 3 are
closer to the agreed equivalency, but there is no guarantee that this will continue. In fact, there is every indication that the products will diverge with the differing server operating system focuses for the two companies. IBM has priced LAN Server very
attractively so that if OS/2 clients are being used, LAN Server is a low-cost option for small LANs. LAN Server supports DOS, Windows, and OS/2 clients. No support has been announced for Mac System 7, although it is possible to interconnect AppleTalk and
LAN Server LANs to share data files and communication services.
Banyan VINES provides basic file and print services similar to those of Novell and LAN Manager.
VINES incorporates a facility called StreetTalk that enables every resource in a Banyan enterprise LAN to be addressed by name. VINES also provides intelligent WAN routing within the communications server component. These two features are similar to
the OSI Directory Services X.500 protocol.
StreetTalk enables resources to be uniquely identified on the network, making them easier to access and manage. All resources, including file services, users, and printers, are defined as objects. Each object has a StreetTalk name associated with it.
StreetTalk names follow a three-level hierarchical format: Item@Group@Organization. For example, a user can be identified as Psmith@Cerritos@Tnet. All network objects are stored in a distributed database that can be accessed globally. Novell's NDS is
similar to StreetTalk in functionality. However, there are key differences. NDS can partition and replicate the database, which will generally improve performance and reliability. NDS is X.500-compliant and enables multiple levels of hierarchy.
StreetTalk supports a fixed three-level hierarchy. The NDS architecture offers more flexibility but with corresponding complexity, and StreetTalk is less flexible but less complex to manage.
One advantage the current version of StreetTalk has over NDS is that StreetTalk objects can have unlimited attributes available for selection. To locate a printer with certain attributes, the command: "Locate a color laser printer with A4 forms on
the 7th floor of Cerritos" finds and uses the printer with the desired characteristics.
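The three-level naming format and the attribute-based lookup described above can be modeled in a few lines. This is an illustrative model only, not Banyan's actual API; the directory entries and attribute names are invented for the example.

```python
# Illustrative model of StreetTalk-style naming (Item@Group@Organization)
# and attribute-based resource lookup. Not Banyan's actual interface.
def parse_name(name):
    item, group, org = name.split("@")  # fixed three-level hierarchy
    return {"item": item, "group": group, "org": org}

directory = [
    {"name": "Laser1@Cerritos@Tnet", "type": "printer",
     "attrs": {"color": True, "forms": "A4", "floor": 7}},
    {"name": "Laser2@Cerritos@Tnet", "type": "printer",
     "attrs": {"color": False, "forms": "A4", "floor": 3}},
]

def locate(objects, **wanted):
    # Return every object whose attributes match all requested values.
    return [o for o in objects
            if all(o["attrs"].get(k) == v for k, v in wanted.items())]

print(parse_name("Psmith@Cerritos@Tnet")["item"])  # Psmith
hits = locate(directory, color=True, forms="A4", floor=7)
print([h["name"] for h in hits])  # ['Laser1@Cerritos@Tnet']
```

The `locate` call mirrors the "color laser printer with A4 forms on the 7th floor" query in the text: the caller names attributes, and the directory returns the matching object.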
VINES V5.5 offers ISDN and T1 support for server-to-server communications over a WAN, as well as integration of DOS, Windows, OS/2, and Mac clients. VINES does not support NFS clients.
Novell and Microsoft have announced support for Banyan ENS within their products to be available in Q2 1994. Banyan and DCA provide SNA services to the VINES environment. VINES supports UNIX, DOS, Windows, OS/2, and Mac System 7 clients.
NFS is the standard file system support for UNIX. PC NFS is available from SunSelect and FTP to provide file services support from a UNIX server to Windows, OS/2, Mac, and UNIX clients.
NFS lets a client mount an NFS host's filing system (or a part of it) as an extension of its own resources. NFS's resource-sharing mechanisms encompass interhost printing. The transactions among NFS systems traditionally ride across TCP/IP and
Ethernet, but NFS works with any network that supports 802.3 frames.
SunSelect includes instructions for adding PC-NFS to an existing LAN Manager or Windows for Workgroups network using Network Driver Interface Specification (NDIS) drivers.
With the increasing use of UNIX servers for application and database services, there is an increasing realization that PC NFS may be all that is required for NOS support for many workgroups. This can be a low-cost and low-maintenance option because the
UNIX server is easily visible from a remote location.
Client/server computing requires that LAN and WAN topologies be in place to provide the necessary internetworking for shared applications and data. The Gartner Group surveyed and estimated microsystems integration topologies for the period 1986-1996;
the results appear in Figure 4.6. Of special interest is the projection that most workstations will be within LANs by 1996, but only 14 percent will be involved in an enterprise LAN by that date. These figures represent a fairly pessimistic outlook for
interconnected LAN-to-LAN and enterprise-wide connectivity. These figures probably will prove to be substantially understated if organizations adopt an architectural perspective for the selection of their platforms and tools and use these tools within an
organizationally optimized systems development environment (SDE).
Figure 4.6. Microsystems integration configuration 1986-1996. (Source: The Gartner Group.)
This model is the most basic implementation providing the standard LAN services for file and printer sharing.
Routers and communication servers will be used to provide communication services between LANs and into the WAN. In the client/server model, these connections will be provided transparently by the SDE tools. There are significant performance
implications if the traffic volumes are large. IBM's LU6.2 implementation in APPC and TCP/IP provides the best support for high-volume, LAN-to-LAN/WAN communications. DEC's implementation of DECnet always has provided excellent LAN-to-WAN connectivity.
Integrated support for TCP/IP, LU6.2, and IPX provides a solid platform for client/server LAN-to-WAN implementation within DECnet. Novell 4.x provides support for TCP/IP as both the LAN and WAN protocol. Internetworking also is supported between IPX and TCP/IP.
The lack of real estate on the desktop encouraged most organizations to move to a single device, using terminal emulation from the workstation, to access existing mainframe applications. It will take considerable time and effort before all
existing host-based applications in an organization are replaced by client/server applications. In the long term, the host will continue to be the location of choice for enterprise database storage and for the provision of security and network management services.
Mainframes are expensive to buy and maintain, hard to use, inflexible, and large, but they provide the stability and capacity required by many organizations to run their businesses. As Figure 4.7 notes, in the view of International Data Corporation,
they will not go away soon. Their roles will change, but they will be around as part of the enterprise infrastructure for many more years. Only organizations who create an enterprise architecture strategy and transformational plans will accomplish the
migration to client/server in less than a few years. Without a well-architected strategy, gradual evolution will produce failure.
Figure 4.7. The role of the mainframe. (Source: International Data Corporation, conference handout notes, 1991.)
Information that is of value or interest to the entire business must be managed by a central data administration function and appear to be stored on each user's desk. These applications are traditionally implemented as Online Transaction Processing
(OLTP) to the mainframe or minicomputer. With the client/server model, it is feasible to use database technology to replicate or migrate data to distributed servers. Wherever data resides or is used, the location must be transparent to the user and the
developer. Data should be stored where it best meets the business need.
Online Transaction Processing applications are found in such industries as insurance, finance, government, and sales, all of which process large numbers of transactions. Each of these transactions requires a minimal amount of user think time to
process. In these industries, data is frequently collected at the source by the knowledgeable worker. As such, the systems have high requirements for availability, data integrity, performance, concurrent access, growth potential, security, and
manageability. Systems implemented in these environments must prove their worth or they will be rejected by an empowered organization. They must be implemented as an integral part of the job process.
OLTP has traditionally been the domain of the large mainframe vendors, such as IBM and DEC, and of special-purpose, fault-tolerant processors from vendors such as Tandem and Stratus. The client/server model has the capability to provide all the
services required for OLTP at much lower cost than the traditional platforms. All the standard client/server requirements for a GUI (application portability, client/server function partitioning, software distribution, and effective development
tools) exist for OLTP applications.
The first vendor to deliver a production-quality product in this arena is Cooperative Solutions with its Ellipse product. Prior to Ellipse, OLTP systems required developers to manage the integrity issues of unit-of-work processing, including concurrency
control and transaction rollback. Ellipse provides all the necessary components to build systems with these features. Ellipse currently operates with Windows 3.x, OS/2 clients, and OS/2 servers using the Sybase database engine. Novell is working with
Cooperative Solutions to port Ellipse as a Novell NetWare Loadable Module (NLM). It provides a powerful GUI development environment using a template language as a shorthand for development. This language provides a solid basis for building an
organizational SDE and lends itself well to the incorporation of standard components.
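The unit-of-work processing and transaction rollback that such products manage can be sketched briefly. This is a hedged illustration using SQLite from Python's standard library as a stand-in for the database engines the text describes; the account schema and `transfer` function are invented for the example.

```python
import sqlite3

# Sketch of unit-of-work processing: either every update in the
# transaction commits, or a failure rolls all of them back.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL)")
con.execute("INSERT INTO account VALUES (1, 100.0), (2, 50.0)")
con.commit()

def transfer(con, src, dst, amount):
    try:
        con.execute("UPDATE account SET balance = balance - ? WHERE id = ?", (amount, src))
        if con.execute("SELECT balance FROM account WHERE id = ?", (src,)).fetchone()[0] < 0:
            raise ValueError("insufficient funds")
        con.execute("UPDATE account SET balance = balance + ? WHERE id = ?", (amount, dst))
        con.commit()            # the unit of work succeeds as a whole
    except Exception:
        con.rollback()          # ...or is undone as a whole
        raise

transfer(con, 1, 2, 30.0)       # succeeds: balances become 70 / 80
try:
    transfer(con, 1, 2, 500.0)  # fails: both updates are rolled back
except ValueError:
    pass
print([r[1] for r in con.execute("SELECT id, balance FROM account ORDER BY id")])
```

The failed transfer leaves both balances exactly as the successful one left them, which is the integrity guarantee a TP monitor or OLTP framework provides without the developer coding it by hand.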
As UNIX has matured, it has added many of the features found in other commercial operating systems such as VMS and MVS. There are now several offerings for OLTP with UNIX. IBM is promoting CICS 6000 as a downsizing strategy for CICS MVS. Database
services will be provided by a combination of AIX and MVS servers.
Novell purchased the Tuxedo product from AT&T with its acquisition of USL. OSF selected the Transarc Encina product as the basis for OLTP with DCE. The DCE recognition quickly placed Encina in the lead in terms of supported UNIX platforms. IBM has
released a version of DCE for AIX that includes the Encina technology. NCR provides a product called Top End as part of its Cooperation series.
Client/server TP monitor software is becoming increasingly necessary now that client/server systems are growing to include several database servers supporting different vendors' databases and servicing tens, hundreds, and even thousands of users that
need to access and update the same data. UNIX-based OLTP products are maturing to provide the same level of functionality and reliability as traditional mainframe-based IBM Customer Information Control System (CICS) implementations, yet at less cost and with graphical user interfaces.
Servers provide the platform for application, database, and communication services. There are six operating system platforms that have the greatest potential and/or are prevalent today: NetWare, OS/2, Windows NT, MVS, VMS, and UNIX.
NetWare is used by many organizations, large and small, for the provision of file, printer, and network services. NetWare is a self-contained operating system. It does not require a separate OS (as do Windows NT, OS/2, and UNIX) to run. Novell is
taking steps to allow NetWare to run on servers with UNIX. Novell purchased USL and will develop shrink-wrapped products to run under both NetWare and UNIX System V, Release 4.2. The products will enable users to simultaneously access information from both
a NetWare and a UNIX server.
OS/2 is the server platform for Intel products provided by IBM in the System Application Architecture (SAA) model. OS/2 provides the storage protection and preemptive multitasking services needed for the server platform. Several database and many
application products have been ported to OS/2. The only network operating systems directly supported with OS/2 are LAN Manager and LAN Server. Novell supports the use of OS/2 servers running on separate processors from the NetWare server. The combination
of Novell with OS/2 database and application servers can provide the necessary environment for a production-quality client/server implementation. Appendix A describes such an implementation.
With the release of Windows NT (New Technology) in September of 1993, Microsoft staked its unique position with a server operating system. Microsoft's previous development of OS/2 with IBM did not create the single standard UNIX alternative that was
hoped for. NT provides the preemptive multitasking services required for a functional server. It provides excellent support for Windows clients and incorporates the necessary storage protection services required for a reliable server operating system. Its
implementation of C2 level security goes well beyond that provided by OS/2 and most UNIX implementations. It will take most of 1994 to get the applications and ruggedizing necessary to provide an industrial strength platform for business critical
applications. With Microsoft's prestige and marketing muscle, NT will be installed by many organizations as their server of choice.
IBM provides MVS as a platform for large applications. Many of the existing application services that organizations have purchased operate on System 370-compatible hardware running MVS. The standard networking environment for many large
organizations, SNA, is a component of MVS. IBM prefers to label proprietary systems today under the umbrella of SAA. The objective of SAA is to provide all services on all IBM platforms in a compatible way: the IBM version of the single-system image.
There is a commitment by IBM to provide support for the LAN Server running natively under MVS. This is an attractive option for organizations with large existing investments in MVS applications. The very large data storage capabilities provided by
System 370-compatible platforms with MVS make the use of MVS for LAN services attractive to large organizations. MVS provides a powerful database server using DB2 and LU6.2. With broad industry support for LU6.2, requests that include DB2 databases as part
of their view can be issued from a client/server application. Products such as Sybase provide high-performance static SQL support, making this implementation viable for high-performance production applications.
Digital Equipment Corporation provides OPENVMS as its server platform of choice. VMS has a long history in the distributed computing arena and includes many of the features necessary to act as a server in the client/server model. DEC was slow to
realize the importance of this technology, and only recently did the company enter the arena as a serious vendor. NetWare supports the use of OPENVMS servers for file services. DEC provides its own server interface using a LAN Manager derivative product,
Pathworks, which runs natively on the VAX and the RISC-based Alpha AXP. This is a particularly attractive configuration because it provides access on the same processor to the application, database, and file services provided by a combination of OPENVMS, NetWare, and LAN
Manager. Digital and Microsoft have announced joint agreements to work together to provide a smooth integration of Windows, Windows NT, Pathworks, and OPENVMS. This will greatly facilitate the migration by OPENVMS customers to the client/server model.
VAX OPENVMS support for database products such as RDB, Sybase, Ingres, and Oracle enables this platform to execute effectively as a database server for client/server applications. Many organizations have large investments in VAX hardware and DECnet
networking. The option to use these as part of client/server applications is attractive as a way to maximize the value of this investment. DECnet provides ideal support for the single-system image model. LAN technology is fundamental to the architecture of
DECnet. Many large organizations moving into the client/server world of computing have standardized on DECnet for WAN processing. For example, Kodak selected Digital as its networking company even after selecting IBM as its mainframe outsourcing company.
UNIX is a primary player as a server system in the client/server model. Certainly, the history of UNIX in the distributed computing arena and its open interfaces provide an excellent opportunity for it to be a server of choice. To understand what makes
it an open operating system, look at the system's components. UNIX was conceived in the early 1970s by AT&T employees as an operating environment to provide services to software developers who were discouraged by the incompatibility of new computers
and the lack of development tools for application development. The original intention of the UNIX architecture was to define a standard set of services to be provided by the UNIX kernel. These services are used by a shell that provides the command-line
interface. Functionality is enhanced through the provision of a library of programs. Applications are built up from the program library and custom code. The power and appeal of UNIX lie in the common definition of the kernel and shell and in the large
amount of software that has been built and is available. Applications built around these standards can be ported to many different hardware platforms.
The objectives of the original UNIX were very comprehensive and might have been achieved except that the original operating system was developed under the auspices of AT&T. Legal ramifications of the consent decree governing the breakup of the
Regional Bell Operating Companies (RBOCs) prevented AT&T from getting into the computer business. As a result, the company had little motivation early on to promote UNIX as a product.
To overcome this, and in an attempt to achieve an implementation of UNIX better suited to the needs of developers, the University of California at Berkeley and other institutions developed better varieties of UNIX. As a result, the original objective
of a portable platform was compromised. The new products were surely better, but they were not compatible with each other or the original implementation. Through the mid-1980s, many versions of UNIX that had increasing functionality were released. IBM, of
course, entered the fray in 1986 with its own UNIX derivative, AIX. Finally, in 1989, an agreement was reached on the basic UNIX kernel, shell functions, and APIs.
The computing community is close to consensus on what the UNIX kernel and shell will look like and on the definition of the specific APIs. Figure 4.8 shows the components of the future standard UNIX operating system architecture.
During all of these gyrations, one major UNIX problem has persisted that differentiates it from DOS, Windows NT, and OS/2 in the client/server world. Because the hardware platforms on which UNIX resides come from many manufacturers and are based on
many different chip sets, the "off-the-shelf" software that is sold for PCs is not yet available for UNIX. Software is sold and distributed in its executable form, so it must be compiled and linked by the developer for the target platform. This
means that organizations wishing to buy UNIX software must buy it for the specific target platform they are using. This also means that when they use many platforms in a distributed client/server application, companies must buy different software versions
for each platform.
Figure 4.8. UNIX architecture.
In addition to the complexity this entails, a more serious problem exists with software versioning. Software vendors update their software on a regular basis, adding functionality and fixing problems. Because the UNIX kernel is implemented on each
platform and the software must be compiled for the target platform, there are differences in the low-level operation of each platform. This requires that software vendors port their applications to each platform they support. This porting function can take
from several days to several months. In fact, if the platform is no longer popular, the port may never occur. Thus, users who acquire a UNIX processor may find that their software vendor is no longer committed to upgrading their software for this platform.
The major UNIX developer groups (UNIX International, the Open Software Foundation, and X/Open) have worked on plans to develop a binary-compatible UNIX. If and when this happens, every new processor will execute the same metamachine language.
Despite the fact that at the machine level there will be differences, the executable code will be in this metalanguage. Software developers then will be able to develop off-the-shelf UNIX applications. When we achieve this level of compatibility, the true
promise of UNIX will be reached, and its popularity should take off. Figure 4.9 reflects the evolution of UNIX versions from the early 1970s to the 1995 objective of a unified UNIX. A unified UNIX will support off-the-shelf applications running on every platform.
Figure 4.9. UNIX history.
The Open Software Foundation (OSF), a nonprofit consortium founded in 1988, now encompasses 74 companies, including Computer Associates, DEC, Groupe Bull, HP, IBM, Microsoft, Novell, Nippon Telegraph and Telephone Corp., Siemens Nixdorf, and even UNIX
International Inc. (which was the standards-setting group for AT&T's, then X/Open's, UNIX System V). The OSF has set a goal to build distributed computing environment (DCE) compatibility into its distributed computing architecture. The OSF aims to
provide an X/Open and POSIX compliant UNIX-like operating system using the Motif graphical user interface. The OSF has developed the Architecture Neutral Distribution Format (ANDF) with the intention of providing the capability to create and distribute
shrink-wrapped software that can run on a variety of vendor platforms. The first operating system version, OSF/1, was delivered by OSF in 1992 and implemented by DEC in 1993.
The important technologies defined for OSF include Motif, the Distributed Computing Environment (DCE), the Distributed Management Environment (DME), and OSF/1.
UNIX is particularly desirable as a server platform for client/server computing because of the large range of platform sizes available and the huge base of application and development software available. Universities are contributing to the UNIX
momentum by graduating students who see only UNIX during their student years. Government agencies are insisting on UNIX as the platform for all government projects. The combination of these pressures and technology changes should ensure that UNIX
compatibility will be mandatory for server platforms in the last half of this decade.
OSF initially developed Motif, a graphical user interface for UNIX, which has become the de facto UNIX GUI standard. The Distributed Computing Environment (DCE) is gaining acceptance as the standard for distributed application development, although its companion Distributed Management Environment has yet to achieve such widespread support. OSF/1, the OSF-defined UNIX kernel, has been adopted only by DEC, although most other vendors have promised to support it. OSF/1 brings the promise of a UNIX microkernel more suitable to the desktop environment than existing products.
The desire for a standard UNIX encourages other organizations as well. For example, the IEEE tackled the unified UNIX issue by establishing a group to develop a standard portable operating system called POSIX. The objective is to develop an ANSI standard operating system. POSIX isn't UNIX, but it is UNIX-like. POSIX standards (to which most vendors pledge compliance) exist today. DEC's OpenVMS operating system, for example, supports published POSIX standards. At this point, however, POSIX does little to promote interoperability and portability because so little of the total standard has been finalized. Simple applications can be written to run across different POSIX-compliant platforms, but they will remain limited applications because developers cannot use any of the rich, non-POSIX features and functions that the vendors offer beyond the basic POSIX-compliant core.
X/Open started in Europe and has spread to include most major U.S. computer makers. X/Open is having a significant impact in the market because its goal is to establish a standard set of Application Programming Interfaces (APIs) that will enable interoperability. These interfaces are published in the X/Open Portability Guide. Applications running on operating systems that comply with these interfaces will communicate with each other and interoperate, even if the underlying operating systems are different. This is the key objective of the client/server model.
The COSE announcement by HP, IBM, SCO, Sun, and Univel (Novell/USL) in March 1993 at the Uniforum Conference is the latest attempt to create a common ground between UNIX operating systems. The initial COSE announcement addresses only the user's desktop
environment and graphical user interface, although in time it is expected to go further. COSE is a more pragmatic group attempting to actually "get it done."
Another major difference from previous attempts to create universal UNIX standards is the involvement of SCO and Sun. These two organizations own a substantial share of the UNIX market and have tended to promote proprietary approaches to the desktop interface. SCO provides its Open Desktop environment, and Sun offers Open Look. Their commitment to Motif is a significant concession and offers the first real opportunity for complete vendor interoperability and for user transparency to the underlying platform.
In October of 1993, Novell agreed to give the rights to the UNIX name to X/Open so that all vendors can develop to the UNIX standards and use the UNIX name for their products. This largely symbolic gesture will eliminate some of the confusion in the
marketplace over what software is really UNIX. COSE is looking beyond the desktop to graphics, multimedia, object technology, and systems management. Networking support includes Novell's NetWare UNIX client networking products, OSF's DCE, and SunSoft's
Open Network Computing. Novell has agreed to submit the NetWare UNIX client to X/Open for publication as a standard. In the area of graphics, COSE participants plan to support a core set of graphics facilities from the X Consortium, the developer of X Windows.
Addressing multimedia, the COSE participants plan to submit two joint specifications in response to the Interactive Multimedia Association's request for technology. One of those specifications, called Distributed Media Services (DMS), defines a
network-independent infrastructure supporting an integrated API and data stream protocol. The other, the Desktop Integrated Media Environment, will define multimedia access and collaboration tools, including at least one basic tool for each data
type supported by the DMS infrastructure. The resulting standard will provide users with consistent access to multimedia tools in multivendor environments.
COSE also addresses object technology, an area targeted by IBM and Sun. The group will support the efforts of the Object Management Group (OMG) and its Common Object Request Broker Architecture (CORBA) standard for deploying and using distributed objects. IBM already has a CORBA-compliant object system in beta test for AIX. Sun built an operating system code-named Spring as a proof of concept in 1992, and has a major project underway, called Distributed Objects Everywhere (DOE), that is producing very exciting productivity results. Finally, COSE will focus on the management of distributed file systems, distribution, groups and users, print spooling, software installation and licensing, and storage.
It is no coincidence that these vendors are coming together to define a standard UNIX at this time. The COSE effort is a defensive reaction to the release of Microsoft's Windows NT. With this commitment to a 32-bit desktop and server operating system, Microsoft has taken the wind out of many of the UNIX claims to technical superiority. Despite its numerous advantages as a desktop and server operating system, UNIX has never been widely accepted in the general corporate world, which favors DOS/Windows and Novell's NetWare. A key drawback to UNIX in the corporate arena has been the lack of a single UNIX standard. UNIX has a well-established position as the operating system of choice for distributed relational databases from vendors such as Informix, Ingres, Oracle, and Sybase. Most of these vendors, however, will port their products to Windows NT as well. Any effort to reduce the problems associated with the multiple UNIX variants will do much to bolster the stature of UNIX as a worthwhile alternative to Windows NT.
Spin this fantasy around in your mind. All the major hardware and software vendors get together and agree to install a black box in their systems that will, in effect, wipe away their technological barriers. This black box will connect a variety of
small operating systems, dissimilar hardware platforms, incompatible communications protocols, all sorts of applications and database systems, and even unlike security systems. And the black box will do all this transparently, not only for end users but
also for systems managers and applications developers.2 OSF proposes the distributed computing environment (DCE) as this black box. DCE is the most important architecture defined for the client/server model. It provides the bridge between existing
investments in applications and new applications based on current technology. Figure 4.10 shows this architecture defined by the OSF.
The first product components of DCE were released in the third quarter of 1991. DCE competes directly with Sun's open network computing (ONC) environment and indirectly with many other network standards. OSF/1 and DCE are almost certain to win this
battle because of the massive market presence of the OSF sponsors. IBM has now committed to making its AIX product OSF/1 compatible by early 1994. It will be 1995 before the product is mature and complete enough to be widely used as part of business
applications. In the interim, product vendors and systems integrators will use it to build portable products and applications. The general availability of code developed for previous, similar product components will speed the process and enable new
development to be modelled on the previous releases.
DCE has been described as another layer grouping in the OSI model.3 DCE provides the link between pure communications on the lower layers and end-user applications. Figure 4.11 shows "where DCE fits in" between the operating system kernel and
the user application services.
Figure 4.10. Distributed computing environment (DCE) architecture.
Figure 4.11. DCE on OS Layer 6.5.
DCE is a prepackaged group of integrated interoperability applications that connect diverse hardware and software systems, applications, and databases. To provide these services, DCE components must be present on every platform in a system. These
components become active whenever a local application requests data, services, or processes from somewhere. The OSF says that DCE will make a network of systems from multiple vendors appear as a single stand-alone computer to applications developers,
systems administrators, and end users. Thus, the single-system image is attained.
The various elements of DCE are as follows: remote procedure calls (RPC), a threads service, the directory service, the security service, the distributed time service, and the distributed file system (DFS).
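The mechanism that delivers this transparency is the DCE remote procedure call. As a hedged sketch (the interface name, operation, and UUID below are invented for illustration), a developer describes a service in DCE's Interface Definition Language; the IDL compiler then generates client and server stubs that hide the network entirely:

```
/* A hypothetical DCE IDL interface. The uuid uniquely identifies
 * the interface; clients call inventory_lookup as if it were a
 * local function, and the stubs marshal the call over the network. */
[uuid(2fac8900-31f8-11cd-a877-08002b2bb4f5), version(1.0)]
interface inventory
{
    long inventory_lookup(
        [in]  long item_number,
        [out] long *quantity_on_hand
    );
}
```

From the generated stubs, the client simply invokes inventory_lookup(); the DCE runtime locates a suitable server through the directory service, authenticates the caller through the security service, and moves the arguments and results across the network without application code ever referencing a transport protocol.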
SAA is IBM's distributed environment. SAA was defined by IBM in 1986 as an architecture to integrate all IBM computers and operating systems, including MVS, VM/CMS, OS/400, and OS/2-EE. SAA defines standards for a common user access (CUA) method,
common programming interfaces (CPI), and a common communication link (APPC).
To support the development of SAA-compliant applications, IBM described SAA frameworks (that somewhat resemble APIs). The first SAA framework is AD/Cycle, the SAA strategy for CASE application development. AD/Cycle is designed to use third-party tools
within the IBM SAA hardware and mainframe Repository Manager/MVS data storage facility. Several vendors have been selected by IBM as AD/Cycle partners, namely: Intersolv, KnowledgeWare, Bachman, Synon, Systematica, and Easel Corp. Several products are
already available, including the Easel WorkBench toolkit, Bachman DB2, CSP tools, and the KnowledgeWare Repository and MVS tools.
Unfortunately, the most important component, the Repository Manager, has not yet reached production quality in its MVS implementation and as yet there are no plans for a client/server implementation. Many original IBM customers involved in evaluating
the Repository Manager have returned the product in frustration. Recently, there has been much discussion about the need for a production-quality, object-oriented database management system (OODBMS) to support the entity relationship (ER) model underlying
the repository. Only this, say some sources, will make implementation and performance practical. A further failing in the SAA strategy is the lack of open systems support. Although certain standards, such as Motif, SQL, and LU6.2, are identified as part of SAA, the lack of support for AIX has prevented many organizations from adopting SAA. IBM has published all the SAA standards and has licensed various protocols, such as LU6.2. The company has attempted to open up the SAA software development world. IBM's director of open systems strategy, George Siegle, says that IBM believes in openness through interfaces. Thus, the complete definition of APIs enables other vendors to develop products that interface with IBM products and with each other. Recent announcements, such as support for CICS for AIX, point to a gradual movement to include AIX in the SAA platforms. The first SAA application that IBM released, OfficeVision, was a disaster. The product consistently missed shipping dates and lacked much of the
promised functionality. IBM has largely abandoned the product now and is working closely with Lotus and its workgroup computing initiatives.
IBM has consistently defined common database, user interface, and communications standards across all platforms. This certainly provides the opportunity to build SAA-compliant client/server applications. The recent introduction of CICS for OS/2, AIX,
and OS/400 and the announcement of support for AIX mean that a single transaction-processing platform is defined across the entire range of products. Applications developed under OS/2 can be ported to interoperate between OS/2, OS/400, MVS, and eventually
AIX, without modification. COBOL and C are common programming languages for each platform. SQL is the common data access language in all platforms.
The failure of SAA is attributable to the complexity of IBM's heterogeneous product lines and to the desire of many organizations to move away from proprietary solutions toward open systems. This recognition led IBM to announce its new Open Enterprise plan to
replace the old System Application Architecture (SAA) plan with an open network strategy. SystemView is a key IBM network product linking OS/2, UNIX, and AS/400 operating systems. Traditional Systems Network Architecture (SNA) networking will be replaced
by new technologies, such as Advanced Peer-to-Peer Communications (APPC) and Advanced Peer-to-Peer Networking (APPN).
IBM has defined SystemView as its DME product. SystemView defines APIs to enable interoperability between various vendor products. It is expected to be the vehicle for linking AIX into centralized mainframe sites. IBM has stated that SystemView is an
open structure for integrating OSI, SNA, and TCP/IP networks. At this time, SystemView is a set of guidelines to help third-party software developers and customers integrate systems and storage management applications, data definitions, and access methods.
The guidelines are intended to further support single-system image concepts.
1 Gartner Group, presentation notes, MicroSystems Integration, September 1991.
2 J.W. Semich, "The Distributed Connection: DEC," Datamation 37, No. 15 (August 1, 1991), p. 28.
3 Jerry Cashin, "OSI DEC Attempt to Add OSI Service," Software Magazine 11, No. 3 (March 1991), p. 87.