"The network is the computer" is perhaps the most apt description of client/server computing. Users want to feel that somewhere on the network the services they need are available and accessible
based on a need and right of access, without regard to the technologies involved. When ready to move beyond stand-alone personal productivity applications and into client/server applications, organizations must address the issues of connectivity.
Initially, most users discover their need to access a printer that is not physically connected to their client workstation. Sharing data files among non-networked individuals in the same office can be handled by "sneakernet" (hand-carrying
diskettes), but printing is more awkward. The first LANs installed are usually basic networking services to support this printer-sharing requirement. Now a printer anywhere in the local area can be authorized for shared use.
The physical medium to accomplish this connection is the LAN cabling. Each workstation is connected to a cable that routes the transmission either directly to the next workstation on the LAN or to a hub point that routes the transmission to the
appropriate destination. The two primary LAN topologies are Ethernet (a bus) and Token Ring (a ring).
Ethernet and Token Ring are implemented on well-defined Institute of Electrical and Electronics Engineers (IEEE) industry standards. These standards define the product specifications in detail and represent a commitment to a fixed specification. This
standardization has encouraged hundreds of vendors to develop competitive products and in turn has caused the functionality, performance, and cost of these LAN connectivity products to improve dramatically over the last five years. Older LAN installations
that use nonstandard topologies (such as ARCnet) will eventually require replacement.
There is a basic functional difference in the way Ethernet and Token Ring topologies place data on the cable. With the Ethernet protocol, the processor attempts to dump data onto the cable whenever it requires service. Workstations contend for the
bandwidth with these attempts, and the Ethernet protocol includes the appropriate logic to resolve collisions when they occur. On the other hand, with the Token Ring protocol, the processor only attempts to put data onto the cable when there is capacity on
the cable to accept the transmission. Workstations pass along a token that sequentially gives each workstation the right to put data on the network.
Recent enhancements in the capabilities of intelligent hubs have changed the way we design LANs. Hubs owe their success to the efficiency and robustness of the 10BaseT protocol, which enables the implementation of Ethernet in a star fashion over
Unshielded Twisted Pair (UTP) wiring. Now commonly used, hubs provide integrated support for the different standard topologies such as Ethernet, Token Ring, and Fiber (specifically, the FDDI protocol) over different types of cabling. By repeating or
amplifying signals where necessary, they enable the use of high quality UTP cabling in virtually every situation.
Hubs have evolved to provide tremendous flexibility for the design of the physical LAN topologies in large office buildings or plants. Various design strategies are now available. They are also an effective vehicle to put management intelligence
throughout the LANs in a corporation, allowing control and monitoring capabilities from a network management center.
Newer token-passing protocols, such as Fiber Distributed Data Interface (FDDI) and Copper Distributed Data Interface (CDDI), will increase in use as higher performance LANs (particularly backbone LANs) are required. CDDI can be implemented on the same
LAN cable as Ethernet and Token Ring if the original selection and installation are done carefully according to industry recommendations. FDDI usually appears first as the LAN-to-LAN bridge between floors in large buildings.
Wireless LANs offer an alternative: instead of cabling, these LANs use the airwaves as the communications medium. Motorola provides a system, Altair, that supports standard Ethernet transmission protocols and cards. The Motorola
implementation cables workstations together into microcells using standard Ethernet cabling. These microcells communicate over the airwaves to similarly configured servers. Communications on this frequency do not pass through outside walls, so there is
little problem with interference from other users.
Wireless LANs are attractive when the cost of installing cabling is high. Costs tend to be high for cabling in old buildings, in temporary installations, or where workstations move frequently. NCR provides another implementation of wireless LAN
technology using publicly accessible frequencies in the 902-MHz to 928-MHz band, with proprietary cards implementing the communications protocol. This supports lower-speed communications that are subject to some interference, because so many other
devices, such as remote control electronic controllers (like a VCR controller) and antitheft devices, use this same frequency.
It is now a well-accepted fact that LANs are the preferred vehicle to provide overall connectivity to all local and distant servers. WAN connectivity should be provided through the interconnection of the LANs. Routers and bridges are the devices that
perform this task. Routers are the preferred technology for complex network topologies, providing efficient routing of data packets between two systems by locating and using the optimal path. They also limit the amount of traffic on the WAN by efficiently
filtering and by providing support for multiple protocols across the single network.
WAN bandwidth for data communications is a critical issue. In terminal-to-host networks, traffic generated by applications could be modeled, and the network would then be sized accordingly, allowing for effective use of the bandwidth. With LAN
interconnections, and applications that enable users to transfer large files (such as e-mail attachments) and images, this modeling is much harder to perform. WAN services that have recently emerged, such as Frame Relay, SMDS (Switched Multimegabit
Data Service), and the imminent ATM (Asynchronous Transfer Mode) services, provide the flexibility these applications inherently require.
Frame Relay uses efficient statistical multiplexing to provide shared network resources to users. Each access line is shared by traffic destined for multiple locations. The access line speed is typically sized much higher than the average throughput
each user is paying for. This enables peak transmissions (such as when a user transmits a large file) that are much faster because they use all available bandwidth.
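The arithmetic behind this burst advantage is simple; the line speeds and file size in this Python sketch are hypothetical figures, not quotations from any carrier's tariff:

```python
# Illustrative Frame Relay sizing: the access line is faster than the
# committed rate the user pays for, so a burst (a large file send) can
# use the full line speed.

access_line_kbps = 256            # physical access line speed
cir_kbps = 64                     # Committed Information Rate paid for
file_kbits = 8192                 # an illustrative large file, about 1 megabyte

burst_seconds = file_kbits / access_line_kbps   # bursting at full line rate
cir_seconds = file_kbits / cir_kbps             # held to the committed rate

print(f"burst: {burst_seconds:.0f} s, at CIR only: {cir_seconds:.0f} s")
```

In this example the burst completes in a quarter of the time, which is exactly the behavior statistical multiplexing is meant to exploit: most users are idle at any instant, so the spare line capacity is there to be borrowed.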
SMDS is a high-speed service that uses cell relay technology, which enables data, voice, and video to share the same network fabric. Available from selected RBOCs as a wide-area service, it supports high speeds well over 1.5 Mbps.
ATM is an emerging standard and set of communication technologies that span both the LAN and the WAN to create a seamless network. It provides the appropriate capabilities to support all types of voice, data, and video traffic. Its speed is defined to
be 155 Mbps, with variations and technologies that may enable it to run on lower speed circuits when economically appropriate. It will operate both as a LAN and a WAN technology, providing full and transparent integration of both environments.
ATM will be the most significant connectivity technology after 1995. ATM provides the set of services and capabilities that will truly enable the "computing anywhere" concept, in which the physical location of systems and data is made
irrelevant to the user. It also provides the network managers with the required flexibility to respond promptly to business change and new applications.
Interoperability between distributed systems is not guaranteed by just providing network-based connectivity. Systems need to agree on the end-to-end handshakes that take place while exchanging data, on session management to set up and break
conversations, and on resource access strategies. These are provided by a combination of network protocols such as Novell's IPX/SPX, NetBIOS, TCP/IP, and remote process interoperability technologies, such as RPC technology from Sun, Netwise, Sybase,
Oracle, IBM's APPC, CPIC, and Named Pipes.
Network Management is an integral part of every network. The Simple Network Management Protocol (SNMP) is a well-accepted standard used to manage LANs and WANs through the management capabilities of hubs, routers, and bridges. It can be extended to
provide basic monitoring performance measurements of servers and workstations. Full systems management needs much more functionality than SNMP can offer. The OSI management protocol, the Common Management Information Protocol (CMIP), which has the
flexibility and capability to fully support such management requirements, will likely compete with an improved version of SNMP, SNMP V2.
The OSI reference model shown in Figure 5.1 provides an industry-standard framework for network and system interoperability. The existence of heterogeneous LAN environments in large organizations makes interoperability a practical necessity.
Organizations need and expect to view their various workgroup LANs as an integrated corporate-wide network. Citicorp, for example, is working to integrate its 100 independent networks into a single global net.1 The OSI model provides the framework
definition for developers attempting to create interoperable products.2 Because many products are not yet OSI-compliant, there often is no direct correspondence between the OSI model and reality.
The OSI model defines seven protocol layers and specifies that each layer be insulated from the other by a well-defined interface.
Figure 5.1. The seven-layer OSI model.
The physical layer is the lowest level of the OSI model and defines the physical and electrical characteristics of the connections that make up the network. It includes such things as interface specifications as well as detailed specifications for the
use of twisted-pair, fiber-optic, and coaxial cables. Standards of interest at this layer for client/server applications are IEEE 802.3 (Ethernet) and IEEE 802.5 (Token Ring), which define the requirements for the network interface card (NIC) and the
software requirements for the media access control (MAC) layer. Other standards here include the serial interfaces EIA232 and X.21.
The data link layer defines the basic packets of data expected to enter or leave the physical network. Bit patterns, encoding methods, and tokens are known to this layer. The data link layer detects errors and corrects them by requesting retransmission
of corrupted packets or messages. This layer is actually divided into two sublayers: the media access control (MAC) and the logical link control (LLC). The MAC sublayer has network access responsibility for token passing, collision sensing, and network
control. The LLC sublayer operates above the MAC and sends and receives data packets and messages.
Ethernet, Token Ring, and FDDI each define the record format of the packets (frames) communicated between the MAC layer and the network layer. The internal formats differ, and without conversion a workstation using one format cannot interoperate with a workstation using another.
The network layer is responsible for switching and routing messages to their proper destinations. It coordinates the means for addressing and delivering messages. It provides for each system a unique network address, determines a route to transmit data
to its destination, segments large blocks of data into smaller packets of data, and performs flow control.
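Segmentation is easy to picture as code; the destination address, packet size, and header fields in this Python sketch are illustrative only:

```python
# Sketch of network-layer segmentation: a large block of data is cut
# into packets no bigger than a maximum payload size, each tagged with
# a destination address and a sequence number for reassembly.

def segment(data: bytes, dest: str, max_payload: int = 512):
    packets = []
    for seq, start in enumerate(range(0, len(data), max_payload)):
        packets.append({"dest": dest,
                        "seq": seq,
                        "payload": data[start:start + max_payload]})
    return packets

packets = segment(b"x" * 1300, dest="10.0.0.7")
print(len(packets))       # 1300 bytes become 3 packets: 512 + 512 + 276
```

The sequence numbers carried in each packet are what allow the receiving side to reassemble the original block even if the network delivers the packets out of order.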
When a message contains more than one packet, the transport layer sequences the message packets and regulates inbound traffic flow. The transport layer is responsible for ensuring end-to-end error-free transmission of data. The transport layer
maintains its own addresses, which are mapped onto network addresses. Because the transport layer services processes on systems, multiple transport addresses (origins or destinations) can share a single network address.
The session layer provides the services that enable applications running on two processors to coordinate their communication into a single session. A session is an exchange of messages, a dialog between two processors. This layer helps create the
session, inform one workstation if the other drops out of the session, and terminate the session on request.
The presentation layer is responsible for translating data from the internal machine form of one processor in the session to that of the other.
The application layer is the layer to which the application on the processor directly talks. The programmer codes to an API defined at this layer. Messages enter the OSI protocol stack at this level, travel through the layers to the physical layer,
across the network to the physical layer of the other processor, and up through the layers into the other processor application layer and program.
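This round trip through the stack can be mimicked with nested headers; the bracketed header strings in this Python sketch are purely illustrative, standing in for the real control information each layer adds:

```python
# Sketch of OSI encapsulation: each layer wraps the message with its
# own header on the way down the stack and strips it on the way up.

LAYERS = ["application", "presentation", "session",
          "transport", "network", "data link", "physical"]

def send(message: str) -> str:
    for layer in LAYERS:                       # descend the stack
        message = f"[{layer}]{message}"
    return message                             # what crosses the wire

def receive(wire: str) -> str:
    for layer in reversed(LAYERS):             # ascend the peer's stack
        wire = wire.removeprefix(f"[{layer}]")
    return wire

wire = send("hello")
print(wire)             # outermost header belongs to the physical layer
print(receive(wire))    # every header added is stripped by its peer layer
```

Each layer talks only to its peer on the other processor: the header it adds is meaningful solely to the matching layer that removes it, which is what keeps the layers insulated from one another.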
Connectivity and interoperability between the client workstation and the server are achieved through a combination of physical cables and devices, and software that implements communication protocols.
One of the most important and most overlooked parts of LAN implementation today is the physical cabling plant. A corporation's investment in cabling is significant, yet for most organizations it is viewed strictly as a tactical operation, a necessary expense.
Implementation costs are too high, and maintenance is a nonbudgeted, nonexistent process. The results of this shortsightedness will be seen in real dollars through the life of the technology. Studies have shown that over 65 percent of all LAN downtime
occurs at the physical layer.
It is important to provide a platform to support robust LAN implementation, as well as a system flexible enough to incorporate rapid changes in technology. The trend is to standardize LAN cabling design by implementing distributed star topologies
around wiring closets, with fiber between wiring closets. Desktop bandwidth requirements can be handled by copper (including CDDI) for several years to come; however, fiber between wiring closets will handle the additional bandwidth requirements of a
backbone or switch-to-switch configuration.
Obviously, fiber to the desktop will provide extensive long-term capabilities; however, because of the electronics required to support various access methods in use today, the initial cost is significant. As recommended, the design will provide support
for Ethernet, 4M and 16M Token Ring, FDDI, and future ATM LANs.
Cabling standards include RG-58 A/U coaxial cable (thin-wire 10Base2 Ethernet), IBM Type 1 (shielded twisted pair for Token Ring), unshielded twisted pair (UTP for 10BaseT Ethernet or Token Ring), and optical fiber (for FDDI). Motorola has developed a wireless Ethernet LAN product, Altair, that uses 18-GHz frequencies. NCR's WaveLAN provides low-speed wireless LAN support.
Wireless LAN technology is useful and cost-effective when the cost of cable installation is high. In old buildings or locations where equipment is frequently moved, the cost of running cables may be excessive. In these instances wireless technology can
provide an attractive alternative. Motorola provides an implementation that uses standard Ethernet NICs connecting a group of closely located workstations together with a transmitter. The transmitter communicates with a receiver across the room to provide
the workstation server connection. Recent reductions in the cost of this technology make it attractive for those applications where the cost of cabling is more than $250 per workstation.
Wireless communication is somewhat slower than wired communication. Industry tests indicate a performance level approximately one-half that of wired 10-Mbps UTP Ethernet. NCR's alternative wireless technology, WaveLAN, is a slow-speed implementation
using proprietary communications protocols and hardware. It also is subject to interference by other transmitters, such as remote control electronics, antitheft equipment, and point-of-sale devices.
Ethernet is the most widely installed network topology today. Ethernet networks have a maximum throughput of 10 Mbps. The first network interface cards (NICs) developed for Ethernet were much cheaper than corresponding NICs developed by IBM for Token
Ring. Until recently, organizations that used non-IBM minicomputers and workstations had few options other than Ethernet. Even today in a heterogeneous environment, there are computers for which only Ethernet NICs are available.
The large market for Ethernet NICs and the complete definition of the specification have allowed over 100 companies to produce these cards.3 Competition has reduced the price to little more than $100 per unit.
10BaseT Ethernet is a standard that enables the implementation of the Ethernet protocol over telephone wire in a physical star configuration (compatible with phone wiring installations). Its robustness, ease of use, and low cost, driven by fierce
competition, have made 10BaseT the most popular standards-based network topology. Its pervasiveness is unrivaled: in 1994, new laptop computers will start to ship with 10BaseT built in. IBM is now fully committed to supporting Ethernet across its product line.
IBM uses the Token Ring LAN protocol as the standard for connectivity in its products. In an environment that is primarily IBM hardware and SNA connectivity, Token Ring is the preferred LAN topology option. IBM's Token Ring implementation is a modified
ring configuration that provides a high degree of reliability, since the failure of one node does not affect any other node. Only failure of the hub can affect more than one node, and the hub is a passive device with no moving parts to break; it is usually stored in a locked closet or other physically secure area.
Token Ring networks implement a wire transmission speed of 4 or 16 Mbps. Older NICs support only the 4-Mbps speed, but newer ones support both. IBM and Hewlett-Packard have announced a technical alliance to establish a single 100-Mbps
standard for both Token Ring and Ethernet networks. This technology, called 100VG-AnyLAN, will result in low-cost, high-speed network adapter cards that can be used in PCs and servers running on either Token Ring or Ethernet LANs. The first AnyLAN products
are expected in early 1994 and will cost between $250 and $350 per port. IBM will be submitting a proposal to make the 100VG-AnyLAN technology a part of IEEE's 802.12 (or 100Base-VG) standard, which currently includes only Ethernet. A draft IEEE standard
for the technology is expected by early 1994.
100VG-AnyLAN is designed to operate over a variety of cabling, including unshielded twisted pair (Categories 3, 4, or 5), shielded twisted pair, and fiber-optic cable.
The entire LAN operates at the speed of the slowest NIC. Most of the vendors today, including IBM and SynOptics, support 16 Mbps over unshielded twisted-pair cabling (UTP). This is particularly important for organizations that are committed to UTP
wiring and are considering the use of the Token Ring topology.
The third prevalent access method for Local Area Networks is Fiber Distributed Data Interface (FDDI). FDDI provides support for 100 Mbps over optical fiber, and offers improved fault tolerance by implementing logical dual counter rotating rings. This
is effectively running two LANs. The physical implementation of FDDI is in a star configuration, and provides support for distances of up to 2 km between stations.
FDDI is a next-generation access method. Although performance, capacity, and throughput are assumed features, other advantages support the use of FDDI in high-performance environments. FDDI's dual counter-rotating rings provide the inherent capability
of end-node fault tolerance. By use of dual homing hubs (the capability to have workstations and hubs connected to other hubs for further fault tolerance), highly critical nodes such as servers or routers can be physically attached to the ring in two
distinct locations. Station Management Technology (SMT) is the portion of the standard that provides ring configuration, fault isolation, and connection management. This is an important part of FDDI, because it delivers tools and facilities that are
desperately needed in other access method technologies.
There are two primary applications for FDDI: first as a backbone technology for interconnecting multiple LANs, and second, as a high-speed medium to the desktop where bandwidth requirements justify it.
Despite the rapid decrease in the cost of Token Ring and 10BaseT Ethernet cards, FDDI costs have been decreasing at a faster rate. As Figure 5.2 illustrates, the cost of 100 Mbps capable FDDI NICs reached $550 by the end of 1992 and is projected to
reach $400 by 1995. The costs of installation are dropping as preterminated cable reaches the market. Northern Telecom is anticipating, with its FibreWorld products, a substantial increase in installed end-user fiber driven by the bandwidth demands of
multimedia and the availability requirements of business critical applications.
Figure 5.2. Affordable FDDI.
The original standards in the physical layer specified optical fiber support only. Many vendors, however, have developed technology that enables FDDI to run over copper wiring. Currently, there is an effort in the ANSI X3T9.5 committee to produce a
standard for FDDI over shielded twisted pair (IBM-compliant cable), as well as data-grade unshielded twisted pair. Several vendors, including DEC, IBM, and SynOptics, are shipping implementations that support STP and UTP.
The Ethernet technique works well when the cable is lightly loaded but, because of collisions that occur when an attempt is made to put data onto a busy cable, the technique provides poor performance when the LAN utilization exceeds 50 percent. To
recover from the collisions, the sender retries, which puts additional load on the network. Ethernet users avoid this problem by creating subnets that divide the LAN users into smaller groups, thus keeping a low utilization level.
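A toy model, not a standards calculation, shows why the retries compound: here the fraction of transmissions that collide is simply assumed proportional to total utilization, and each retried transmission rejoins the offered load.

```python
# Toy CSMA/CD degradation model: as offered load rises, more frames
# collide and are retried, and the retries themselves add load.

def effective_load(offered: float, rounds: int = 20) -> float:
    """Total load including retries, for offered load in [0, 1)."""
    load = offered
    for _ in range(rounds):
        load = offered + offered * load   # colliding fraction is resent
    return load

for u in (0.1, 0.3, 0.5):
    print(f"offered {u:.0%} -> carried plus retries {effective_load(u):.2f}")
```

Even this crude model reproduces the qualitative behavior described above: at 10 percent offered load the retry overhead is negligible, but as utilization climbs toward 50 percent the total traffic balloons, which is why subnetting to keep per-segment utilization low is effective.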
Despite the widespread implementation of Ethernet, Token Ring installations are growing at a fast rate for client/server applications. IBM's commitment to Ethernet may slow this growth, because Token Ring will always cost more than Ethernet.
Figure 5.3 presents the results of a recent study of installation plans for Ethernet, Token Ring, and FDDI. The analysis predicts a steady increase in planned Token Ring installations from 1988 until the installed base is equivalent in 1996. However,
this analysis does not account for the emergence of a powerful new technology that entered the marketplace in 1993: Asynchronous Transfer Mode (ATM). It is likely that by 1996 ATM will dominate all new installations and will gradually replace existing
installations by 1999.
Figure 5.3. LAN-host connections.
As Figure 5.4 illustrates, Token Ring performance is slightly poorer on lightly loaded LANs but shows linear degradation as the load increases, whereas Ethernet shows exponential degradation after loading reaches 30 percent of capacity.
Figure 5.4. Ethernet, Token Ring utilization.
Figure 5.5 illustrates the interoperability possible today with routers from companies such as Cisco, Proteon, Wellfleet, Timeplex, Network Systems, and 3Com. Most large organizations should provide support for the three different protocols and
install LAN topologies similar to the one shown in Figure 5.5. Multiprotocol routers enable LAN topologies to be interconnected.
Figure 5.5. FDDI interoperability.
ATM has been chosen by CCITT as the basis for its Broadband Integrated Services Digital Network (B-ISDN) services. In the USA, an ANSI-sponsored subcommittee also is investigating ATM.
The integrated support for all types of traffic is provided by the implementation of multiple classes of service.
ATM's capability to make the "computing anywhere" concept a reality stems from the fact that ATM eventually will be implemented seamlessly both in the LAN and in the WAN. By providing a single network fabric for all applications, ATM also
gives network managers the required flexibility to respond promptly to business change and new applications. (See Figure 5.6.)
Figure 5.6. ATM Cells.
One of the most important technologies in delivering LAN technology to mainstream information system architecture is the intelligent hub. Recent enhancements in the capabilities of intelligent hubs have changed the way LANs are designed. Hubs owe their
success to the efficiency and robustness of the 10BaseT protocol, which enables the implementation of Ethernet in a star fashion over Unshielded Twisted Pair. Now commonly used, hubs provide integrated support for the different standard topologies (such as
Ethernet, Token-Ring, and FDDI) over different types of cabling. By repeating or amplifying signals where necessary, they enable the use of high-quality UTP cabling in virtually every situation.
These intelligent hubs provide the necessary functionality to distribute a structured hardware and software system throughout networks, serve as network integration and control points, provide a single platform to support all LAN topologies, and
deliver a foundation for managing all the components of the network.
There are three different types of hubs. Workgroup hubs support one LAN segment and are packaged in a small footprint for small branch offices. Wiring closet hubs support multiple LAN segments and topologies, include extensive management
capabilities, and can house internetworking modules such as routers or bridges. Network center hubs, at the high end, support numerous LAN connections, have a high-speed backplane with flexible connectivity options between LAN segments, and include
fault tolerance features.
Hubs have evolved to provide tremendous flexibility for the design of the physical LAN topologies in large office buildings or plants. Various design strategies are now available.
The distributed backbone strategy takes advantage of the capabilities of the wiring closet hubs to bridge each LAN segment onto a shared backbone network. This method is effective in large plants where distances are important and computing facilities
can be distributed. (See Figure 5.7.)
Figure 5.7. Distribution of LAN servers.
The collapsed backbone strategy provides a cost-effective alternative that enables the placement of all LAN servers in a single room and also enables the use of a single high-performance server with multiple LAN attachments. This is particularly
attractive because it provides an environment for more effective LAN administration by a central group, with all servers easily reachable. It also enables the use of high-capacity, fault-tolerant internetworking devices to bridge all LAN segments to form
an integrated network. (See Figure 5.8.)
Figure 5.8. Bridging LAN segments.
Hubs are also an effective vehicle to put management intelligence throughout the LANs in a corporation, allowing control and monitoring capabilities from a Network Management Center. This is particularly important as LANs in branch offices become
supported by a central group.
Internetworking devices enable the interconnection of multiple LANs in an integrated network. This approach to networking is inevitably supplanting the terminal-to-host networks as the LAN becomes the preferred connectivity platform to all personal,
workgroup, or corporate computing facilities.
Bridges provide the means to connect two LANs together, in effect extending the size of the LAN by dividing the traffic and enabling growth beyond the physical limitations of any one topology. Bridges operate at the data link layer of the
OSI model, which makes them topology-specific. Thus, bridging can occur between identical topologies only (Ethernet-to-Ethernet, Token Ring-to-Token Ring). Source-Route Transparent bridging, a technology that enables bridging between Ethernet and
Token-Ring LANs, is seldom used.
Although bridges may cost less, some limitations must be noted. Forwarding of broadcast packets can be detrimental to network performance. Bridges operate promiscuously, forwarding packets as required. In a large internetwork, broadcasts from devices
can accumulate, effectively taking away available bandwidth and adding to network utilization. "Broadcast storms" are rarely predictable, and can bring a network completely to a halt. Complex network topologies are difficult to manage. Ethernet
bridges implement a simple decision logic that requires that only a single path to a destination be active. Thus, in complex meshed topologies, redundant paths are made inoperative, a situation that rapidly becomes ineffective as the network grows.
Routers operate at the network layer of the OSI model. They provide the means to intelligently route traffic addressed from one LAN to another. They support the transmission of data between multiple standard LAN topologies. Routing capabilities
and strategies are inherent to each network protocol. IP can be routed through the OSPF routing algorithm, which differs from the routing strategy for Novell's IPX/SPX protocol. Intelligent routers can handle multiple protocols; most leading vendors
carry products that can support mixes of Ethernet, Token Ring, FDDI, and from 8 to 10 different protocols.
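The optimal-path idea at the heart of link-state routing protocols such as OSPF is a shortest-path computation over link costs; the topology and costs in this Python sketch are invented for illustration:

```python
# Shortest-path routing sketch (Dijkstra's algorithm): each link has a
# cost, and the router picks the cheapest total path, not the most
# direct link.

import heapq

def shortest_path(links, src, dst):
    """links: {node: [(neighbor, cost), ...]}; returns (cost, path)."""
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in links.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost + c, nxt, path + [nxt]))
    return None

links = {"A": [("B", 1), ("C", 5)], "B": [("C", 1)], "C": []}
print(shortest_path(links, "A", "C"))   # (2, ['A', 'B', 'C']): via B, not direct
```

Note that the router prefers the two-hop path through B over the direct but expensive link to C; this cost-based selection is what distinguishes routing from the simple single-active-path logic of bridges.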
Many organizations were unable to wait for the completion of the OSI middle-layer protocols during the 1980s. Vendors and users adopted the Transmission Control Protocol/Internet Protocol (TCP/IP), which was developed for the United States military's
Defense Advanced Research Projects Agency (DARPA) ARPANET network. ARPANET was one of the first layered communications networks and established the precedent for successful implementation of technology isolation between functional components. Today, the
Internet is a worldwide interconnected network of university, research, and commercial establishments; it supports thirty million users in the United States and fifty million worldwide. Additional networks are connected to the Internet every hour of the day. In fact,
growth is now estimated at 15 percent per month. The momentum behind the Internet is tremendous.
The TCP/IP protocol suite is now being used in many commercial applications. It is particularly evident in internetworking between different LAN environments. TCP/IP is specifically designed to handle communications through "networks of
interconnected networks." In fact, it has now become the de facto protocol for LAN-based client/server connectivity and is supported on virtually every computing platform. More importantly, most interprocess communications and development tools embed
support for TCP/IP where multiplatform interoperability is required. It is worth noting that IBM has followed this growth and not only provides support for TCP/IP on all its platforms, but now enables the transport of its own interoperability interfaces
(such as CPIC, APPC) on TCP/IP.
The TCP/IP protocol suite is composed of the following components: a network protocol (IP) with its routing logic and related control protocol (ICMP), two transport protocols (TCP and UDP), and a series of session, presentation, and application services. The following sections
highlight those of interest.
IP represents the network layer and is equivalent to OSI's network protocol or X.25. A unique network address is assigned to every system, whether the system is connected to a LAN or a WAN. IP comes with its associated routing protocols and lower-level
functions such as the network-to-physical address resolution protocol (ARP). Commonly used routing protocols include RIP, OSPF, and Cisco's proprietary IGRP; OSPF has been adopted by the Internet community as the preferred standards-based routing protocol.
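Network addresses and network membership can be illustrated with Python's standard ipaddress module; the addresses below are examples only:

```python
# Sketch of IP network addressing: every system gets a unique address,
# and the network portion of the address determines whether a
# destination is local or must be reached through a router.

import ipaddress

net = ipaddress.ip_network("192.168.10.0/24")
host = ipaddress.ip_address("192.168.10.42")

print(host in net)          # True: same IP network, no router needed
print(net.num_addresses)    # 256 addresses in a /24 network
```

A destination outside the local network's address range is handed to a router, which uses its routing protocol (RIP, OSPF, and so on) to find the path.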
TCP provides transport services over IP. It is connection-oriented, meaning it requires a session to be set up between two parties to provide its services. It ensures end-to-end data transmission, error recovery, ordering of data, and flow control. TCP
provides the kind of communications that users and programs expect to have in locally connected sessions.
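TCP's connection-oriented, ordered delivery can be sketched with the Berkeley sockets interface, shown here in Python; the loopback address and the echoed payload are purely illustrative:

```python
import socket
import threading

def echo_server(sock):
    conn, _ = sock.accept()          # wait for a client to set up the session
    with conn:
        data = conn.recv(1024)       # TCP delivers the bytes reliably, in order
        conn.sendall(data)           # echo them back on the same session

# Listening socket on the loopback interface; port 0 lets the OS pick a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

# Client side: establish the session, send, and receive the reply.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
print(reply)   # b'hello'
```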
UDP provides connectionless transport services and is used in very specific applications that do not require the end-to-end reliability that TCP provides.
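By contrast, a UDP exchange requires no session at all; the sender simply addresses a datagram (again a Python sketch, with illustrative addresses and payload; loopback delivery is dependable in practice, but UDP itself makes no such guarantee):

```python
import socket

# Receiver: bind a datagram socket; note there is no listen/accept, no session.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
addr = recv_sock.getsockname()

# Sender: transmit a single datagram to the address; no connection setup occurs.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"status ping", addr)

datagram, source = recv_sock.recvfrom(1024)
print(datagram)   # b'status ping'
```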
Telnet is an application service that uses TCP. It provides terminal emulation services and supports terminal-to-host connections over an internetwork. It is composed of two different portions: a client entity that provides services to access hosts and
a server portion that provides services to be accessed by clients. Even workstation operating systems such as OS/2 and Windows can provide telnet server support, thus enabling a remote user to log onto the workstation using this method.
FTP uses TCP services to provide file transfer services to applications. FTP includes a client and server portion. Server FTP listens for a session initiation request from client FTP. Files may be transferred in either direction, and ASCII and binary
file transfer is supported. FTP provides a simple means to perform software distribution to hosts, servers, and workstations.
SNMP provides intelligence and services to effectively manage an internetwork. It has been widely adopted by hub, bridge, and router manufacturers as the preferred technology to monitor and manage their devices.
SNMP uses UDP to support communications between agents (intelligent software that runs in the devices) and the manager, which runs in the management workstation. Two basic forms of communication can occur: SNMP polling (in which the manager periodically asks the agent to provide status and performance data) and trap generation (in which the agent proactively notifies the manager that a change of status or an anomaly is occurring).
The NFS protocol enables servers to share disk space and files over IP in the same way a Novell or LAN Manager network server does. It is useful in environments in which servers are running different operating systems. However, it does not offer support for the same administration facilities that a NetWare environment typically provides.
SMTP uses TCP connections to transfer text-oriented electronic mail among users on the same host or among hosts over the network. Developments are under way to adopt a standard to add multimedia capabilities (MIME) to SMTP. Its use is widespread on the
Internet, where it enables any user to reach millions of users in universities, vendor organizations, standards bodies, and so on. Most electronic mail systems today provide some form of SMTP gateway to let users benefit from this overall connectivity.
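The text-oriented structure of an SMTP message, and the way MIME layers non-text parts onto it, can be seen by constructing one (a Python sketch; the addresses and subject are hypothetical, and the delivery call is shown but deliberately not executed):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Build a multipart MIME message of the kind SMTP transports over TCP.
msg = MIMEMultipart()
msg["From"] = "author@example.com"       # hypothetical addresses
msg["To"] = "reader@example.edu"
msg["Subject"] = "Monthly report"
msg.attach(MIMEText("The report follows.", "plain"))   # the text-oriented body
msg.attach(MIMEText("<h1>Report</h1>", "html"))        # a non-text part via MIME

wire_form = msg.as_string()              # what would cross the TCP connection
# smtplib.SMTP("mail.example.com").send_message(msg)   # actual delivery (not run here)
```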
Interestingly, the interconnected LAN environment exhibits many of the same characteristics found in the environment for which TCP/IP was designed.
One of the leading vendors providing TCP/IP support for heterogeneous LANs is FTP Software of Wakefield, Massachusetts, which has developed the Clarkson Packet Drivers. These drivers enable multiple protocols to share the same network adapter. This is
particularly useful, if not necessary, when workstations must take advantage of the file and print services of a NetWare server while accessing a client/server application located on a UNIX or mainframe server.
IBM and Digital both provide support for TCP/IP in all aspects of their products' interoperability. Even IBM's LU6.2/APPC specification can now run over a TCP/IP network, taking advantage of the ubiquitous nature of the protocol. TCP/IP is widely
implemented, and its market presence will continue to grow.
At the top of the OSI model, interprocess communications (IPCs) define the format for application-level interprocess communications. In the client/server model, there is always a need for interprocess communications. IPCs take advantage of services
provided by protocol stacks such as TCP/IP, LU6.2, DECnet, or Novell's IPX/SPX. In reality, a great deal of IPC is involved in most client/server applications, even where it is not visible to the programmer. For example, a programmer using ORACLE tools ends up generating code that uses the IPC capabilities embedded in SQL*Net, which provides the communications between the client application and the server.
The use of IPC is inherent in multitasking operating environments. The various active tasks operate independently and receive work requests and send responses through the appropriate IPC protocols. To effectively implement client/server applications,
IPCs are used that operate equivalently between processes in a single machine or across machine boundaries on a LAN or a WAN.
IPCs should provide the following services:
All these features should be implemented with little code and excellent performance.
A peer-to-peer protocol is a protocol that supports communications between equals. This type of communication is required to synchronize the nodes involved in a client/server network application and to pass work requests back and forth.
Peer-to-peer protocols are the opposite of the traditional dumb terminal-to-host protocols. The latter are hierarchical setups in which all communications are initiated by the host. NetBIOS, APPC, and Named Pipes all provide support for peer-to-peer processing.
The Network Basic I/O System (NetBIOS) is an interface between the transport and session OSI layers that was developed by IBM and Sytek in 1984 for PC connectivity. NetBIOS is used by DOS and OS/2 and is commonly supported along with TCP/IP. Many newer UNIX implementations include the NetBIOS interface, as specified in RFCs 1001 and 1002, to provide file server support for DOS clients.
NetBIOS is the de facto standard today for portable network applications because of its IBM origins and its support for Ethernet, Token Ring, ARCnet, StarLAN, and serial port LANs.
The NetBIOS commands provide the following services:
The application program-to-program communication (APPC) protocol provides the necessary IPC support for peer-to-peer communications across an SNA network. APPC provides the program verbs in support of the LU6.2 protocol. This protocol is implemented on
all IBM and many other vendor platforms. Unlike NetBIOS or Named Pipes, APPC provides the LAN and WAN support to connect with an SNA network that may interconnect many networks.
Standards for peer-to-peer processing have evolved and have been accepted by the industry. IBM defined the LU6.2 protocol to support the handshaking necessary for cooperative processing between intelligent processors. Most vendors provide direct support for LU6.2 protocols in their WAN offerings, and the OSI committees have agreed to define the protocol as part of the OSI standard for peer-to-peer applications. A recently quoted comment, "The U.S. banking system would probably collapse if a bug were found in IBM's LU6.2," points out the prevalence of this technology in highly reliable networked transaction environments.4
Programmers have no need or right to work with LU6.2 directly. Even with the services provided by APIs, such as APPC, the interface is unreasonably complex, and the opportunities for misuse are substantial. Vendors such as PeerLogic offer excellent
interface products to enable programs to invoke the functions from COBOL or C. High-level languages, such as Windows 4GL, access network transparency products such as Ingres Net implemented in the client and server (or SQL*Net in Oracle's case).
These network products basically map layers five and six of the OSI model, generate LU6.2 requests directly to access remote SQL tables, and invoke remote stored procedures. These products include all the necessary code to handle error conditions,
build parameter lists, maintain multiple sessions, and in general remove the complexity from the sight of the business application developer.
The power of LU6.2 does not come without complexity. IBM has addressed this with the definition of a Common Programmers Interface for Communications (CPI-C). Application program-to-program communication (APPC) is the API used by application programmers
to invoke LU6.2 services. Nevertheless, a competent VTAM systems programmer must be involved in establishing the connection between the LAN node and the SNA network. The APPC verbs provide considerable application control and flexibility. Effective use of
APPC is achieved by use of application interface services that isolate the specifics of APPC from the developer. These services should be built once and reused by all applications in an installation.
APPC supports conversational processes and so is inherently half-duplex in operation. The use of parallel sessions provides the necessary capability to use the LAN/WAN connection bandwidth effectively. In evaluating LU6.2 implementations from different
platforms, support for parallel sessions is an important evaluation criterion unless the message rate is low.
LU6.2 is the protocol of choice for peer-to-peer communications from a LAN into a WAN when the integrity of the message is important. Two-phase commit protocols for database update at distributed locations will use LU6.2 facilities to guarantee
commitment of all or none of the updates. Because of LU6.2 support within DECNET and the OSI standards, developers can provide message integrity in a multiplatform environment.
Named Pipes is an IPC mechanism that supports peer-to-peer processing through the provision of two-way communication between unrelated processes on the same machine or across the LAN. No WAN support currently exists. Named Pipes are an OS/2 IPC. The server creates the pipe and waits for clients to access it. A useful compatibility feature of Named Pipes supports standard OS/2 file service commands for access. Multiple clients can use the same named pipe concurrently. Named Pipes are easy to use, compatible with the file system, and provide local and remote support. As such, they are the IPC of choice for client/server software that does not require the synchronization or WAN features of APPC.
Named Pipes provide strong support for many-to-one IPCs. They take advantage of standard OS/2 and UNIX scheduling and synchronization services. With minimal overhead, they provide the following:
The use of an RPC across a named pipe is particularly powerful because it enables the requester to format a request into the pipe with no knowledge of the location of the server. The server is implemented transparently to the requester on
"some" machine platform, and the reply is returned in the pipe. This is a powerful facility that is very easy to use. Named Pipes support should become widespread because Novell and OSF have both committed the necessary threads support.
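The request/reply pattern over a named pipe can be approximated with a POSIX FIFO, which offers the same file-system-compatible access (a Python sketch; the pipe name and the reply-file convention are illustrative, and OS/2 Named Pipes differ in detail):

```python
import os
import tempfile
import threading

# Create a named pipe in the file system; any process knowing the name can open it.
pipe_path = os.path.join(tempfile.mkdtemp(), "reqpipe")
os.mkfifo(pipe_path)

def server():
    # The server accesses the pipe with ordinary file operations and waits.
    with open(pipe_path, "rb") as pipe:
        request = pipe.read()
    with open(pipe_path + ".reply", "wb") as out:   # illustrative reply channel
        out.write(b"result for " + request)

t = threading.Thread(target=server)
t.start()

# The requester formats a request into the pipe with no knowledge of where
# the server runs; it simply uses the agreed-upon name.
with open(pipe_path, "wb") as pipe:
    pipe.write(b"lookup:4711")

t.join()
with open(pipe_path + ".reply", "rb") as f:
    reply = f.read()
print(reply)   # b'result for lookup:4711'
```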
One of the first client/server online transaction processing (OLTP) products on the market, Ellipse, is independent of any communications method, although it requires networking platforms to have some notion of sessions. One of the major reasons
Cooperative Solutions chose OS/2 and LAN Manager as the first Ellipse platform is OS/2 LAN Manager's Named Pipes protocol, which supports sessions using threads within processes.
Ellipse uses Named Pipes for both client/server and interprocess communications on the server, typically, between the Ellipse application server and the database server, to save machine instructions and potentially reduce network traffic. Ellipse
enables client/server conversations to take place either between the Ellipse client process and the Ellipse server process or between the Ellipse client process and the DBMS server, bypassing the Ellipse server process. In most applications, clients will
deal with the DBMS through the Ellipse server, which is designed to reduce the number of request-response round trips between clients and servers by synchronizing matching sets of data in the client's working storage and the server DBMS.
Ellipse uses its sessions to establish conversations between clients and servers. The product uses a named pipe to build each client connection to SQL Server. Ellipse uses a separate process for Named Pipes links between the Ellipse server and SQL Server.
Ellipse also uses sessions to perform other tasks. For example, it uses a named pipe to emulate cursors in an SQL server database management system (DBMS). Cursors are a handy way for a developer to step through a series of SQL statements in an
application. (Sybase doesn't have cursors.) Ellipse opens up Named Pipes to emulate this function, simultaneously passing multiple SQL statements to the DBMS. An SQL server recognizes only one named pipe per user, so Ellipse essentially manages the
alternating of a main session with secondary sessions.
On the UNIX side, TCP/IP with the Sockets Libraries option appears to be the most popular implementation. TCP/IP supports multiple sessions but only as individual processes. Although UNIX implements low-overhead processes, there is still more overhead
than incurred by the use of threads. LAN Manager for UNIX is an option, but few organizations are committed to using it yet.
Windows 3.x client support is now provided with the same architecture as the OS/2 implementation. The Ellipse Windows client will emulate threads. The Windows client requires an additional layer of applications flow-control logic to be built into the
Ellipse environment's Presentation Services. This additional layer will not be exposed to applications developers, in the same way that Named Pipes were not exposed to the developers in the first version of the product.
The UNIX environment lacks support for threads in most commercial implementations. Cooperative Solutions hasn't decided how to approach this problem. Certainly, the sooner vendors adopt the Open Software Foundation's OSF/1 version of UNIX, which does
support threads, the easier it will be to port applications, such as Ellipse, to UNIX.
The missing piece in UNIX thread support is the synchronization of multiple requests to the pipe as a single unit of work across a WAN. There is no built-in support to back off the effect of previous requests when a subsequent request fails or never
gets invoked. This is the scenario in which APPC should be used.
Anonymous pipes are an OS/2 facility that provides an IPC for parent and child communications in a spawned-task multitasking environment. Parent tasks spawn child tasks to perform asynchronous processing. The facility provides a memory-based, fixed-length circular buffer, shared with the use of read and write handles. These handles are the main OS/2 storage mechanism used to control resource sharing. This is a high-performance means of communication when the destruction or termination of a parent task necessitates the termination of all children and in-progress work.
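The parent/child pattern is easy to sketch with the anonymous pipes that modern systems attach to a spawned task's standard input and output (Python; the child's work, uppercasing its input, is purely illustrative):

```python
import subprocess
import sys

# Spawn a child task whose stdin/stdout are anonymous pipes created by the
# parent; the pipe handles exist only within this parent/child relationship.
child = subprocess.Popen(
    [sys.executable, "-c", "print(input().upper())"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)
# Write a work request down one pipe and read the asynchronous response
# back on the other; the pipes vanish when the child terminates.
out, _ = child.communicate("work item 1\n")
print(out.strip())   # WORK ITEM 1
```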
Interprocess synchronization is required whenever shared-resource processing is being used. It defines the mechanisms to ensure that concurrent processes or threads do not interfere with one another. Access to the shared resource must be serialized in
an agreed upon manner. Semaphores are the services used to provide this synchronization.
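A semaphore serializing access to a shared resource can be sketched as follows (Python threads; the shared counter and the iteration counts are illustrative):

```python
import threading

counter = 0
sem = threading.Semaphore(1)   # one holder at a time serializes access

def worker():
    global counter
    for _ in range(10_000):
        with sem:              # acquire before touching the shared resource
            counter += 1       # critical section: read-modify-write
        # the semaphore is released automatically on leaving the with-block

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 40000 -- no concurrent updates were lost
```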
Semaphores may use disk or D-RAM to store their status. Disk is the most reliable but slowest medium and is necessary when operations must be backed out after a failure and before restart. D-RAM is faster but suffers a loss of integrity when a system failure causes D-RAM to be refreshed on recovery. Many large operations use a combination of the two: disk to record start and end, and D-RAM to manage in-flight operations.
Shared memory provides IPC when the memory is allocated in a named segment. Any process that knows the named segment can share it. Each process is responsible for implementing synchronization techniques to ensure integrity of updates. Tables are
typically implemented in this way to provide rapid access to information that is infrequently updated.
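A named segment of this kind can be sketched with Python's multiprocessing.shared_memory (the segment name and table contents are illustrative, and both attachments are shown in one process for brevity; normally the second attachment would be in another process that knows only the name):

```python
from multiprocessing import shared_memory

# Create a named segment; any process that knows the name can share it.
seg = shared_memory.SharedMemory(create=True, size=64, name="rate_table_demo")
seg.buf[:5] = b"10.5%"            # the writer places table data in the segment

# Elsewhere, attach purely by name and read the shared table.
view = shared_memory.SharedMemory(name="rate_table_demo")
snapshot = bytes(view.buf[:5])
print(snapshot)                    # b'10.5%'

view.close()
seg.close()
seg.unlink()                       # remove the named segment when done
```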
Queues provide IPC by enabling multiple processes to add information to a queue and a single process to remove information. In this way, work requests can be generated and performed asynchronously. Queues can operate within a machine or between
machines across a LAN or WAN. File servers use queues to collect data access requests from many clients.
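The many-producers, single-consumer discipline can be sketched with an in-process queue (Python; across a LAN or WAN the same pattern rides on a transport such as TCP, which this sketch omits):

```python
import queue
import threading

work = queue.Queue()
doubled = []

def consumer():
    # A single process removes information from the queue and performs
    # the work asynchronously from the requesters.
    while True:
        item = work.get()
        if item is None:           # sentinel: no more work requests
            break
        doubled.append(item * 2)

t = threading.Thread(target=consumer)
t.start()
for request in (1, 2, 3):          # multiple producers add work requests
    work.put(request)
work.put(None)
t.join()
print(doubled)   # [2, 4, 6]
```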
Through a set of APIs, Windows and OS/2 provide calls that support the Dynamic Data Exchange (DDE) protocol for message-based exchanges of data among applications. DDE can be used to construct hot links between applications in which data can be
fed from window to window without user intervention. For example, a hot link can be created between a 3270 screen session and a word processing document. Data is linked from the 3270 window into the word processing document. Whenever the key of the
data in the screen changes, the data linked into the document changes too. The key of the 3270 screen transaction Account Number can be linked into a LAN database. As new account numbers are added to the LAN database, new 3270 screen sessions are created,
and the relevant information is linked into the word processing document. This document then can be printed to create the acknowledgment letter for the application.
DDE supports warm links, created so the server application notifies the client that the data has changed and the client can issue an explicit request to receive it. This type of link is attractive when the volume of changes to the server data is so great that the client prefers not to be burdened with the repetitive processing. If the server link ceases to exist at some point, use a warm rather than hot link to ensure that the last data iteration is available.
You can create request links to enable direct copy-and-paste operations between a server and client without the need for an intermediate clipboard. No notification of change in data by the server application is provided.
You define execute links to cause the execution of one application to be controlled by another. This provides an easy-to-use batch-processing capability.
DDE provides powerful facilities to extend applications. These facilities, available to the desktop user, considerably expand the opportunity for application enhancement by the user owner. Organizations that wish to integrate desktop personal
productivity tools into their client/server applications should insist that all desktop products they acquire be DDE-capable.
Good programmers have developed modular code using structured techniques and subroutine logic for years. Today, these subroutines should be stored "somewhere" and made available to everyone with the right to use them. RPCs provide this
capability; they standardize the way programmers must write calls to remote procedures so that the procedures can recognize and respond correctly.
If an application issues a functional request and this request is embedded in an RPC, the requested function can be located anywhere in the enterprise the caller is authorized to access. Client/server connections for an RPC are established at the
session level in the OSI stack. Thus, the RPC facility provides for the invocation and execution of requests from processors running different operating systems and using different hardware platforms from the caller's. The standardized request form
provides the capability for data and format translation in and out. These standards are evolving and being adopted by the industry.
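The shape of an RPC, a call that looks local while the procedure executes elsewhere, can be sketched with a simple XML-RPC server and proxy (a modern Python analogy, not one of the RPC implementations discussed in this chapter; the add procedure and loopback address are illustrative):

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Server side: register a procedure that authorized remote callers may invoke.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]
server.register_function(lambda a, b: a + b, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the call reads like a local procedure call; the library
# marshals the arguments, ships the request, and unmarshals the result.
proxy = ServerProxy(f"http://127.0.0.1:{port}/")
total = proxy.add(2, 3)
print(total)   # 5
server.shutdown()
```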
Sun RPC, originally developed by Netwise, was the first major RPC implementation. It is the most widely implemented and available RPC today. Sun includes this RPC as part of their Open Network Computing (ONC) toolkit. ONC provides a suite of tools to
support the development of client/server applications.
The Open Software Foundation (OSF) has selected the Hewlett-Packard (HP) and Apollo RPC to be part of its distributed computing environment (DCE). This RPC, based on Apollo's Network Computing System (NCS), is now supported by Digital Equipment Corporation, Microsoft, IBM, Locus Computing Corp., and Transarc. OSI also has proposed a standard for RPC-like functions called Remote Operation Service (ROSE). The OSF selection likely will make the HP standard the de facto industry standard after 1994. Organizations wishing to be compliant with the OSF direction should start to use this RPC today.
The evolution of RPCs and message-based communications is detailed in Figure 5.9.
Figure 5.9. The evolution of RPCs.
Organizations that want to build applications with the capability to use RPCs can create an architecture as part of their systems development environment (SDE) to support the standard RPC when it is available for their platform. All new development
should include calls to the RPC by way of a standard API developed for the organization. With a minimal investment in such an API, the organization will be ready to take advantage of the power of their RPC as it becomes generally available, with very
little modification of applications required.
When a very large number of processes are invoked through RPCs, performance will become an issue and other forms of client/server connectivity must be considered. The preferred method for high-performance IPC involves the use of peer-to-peer messaging.
This is not the store-and-forward messaging synonymous with e-mail but process-to-process communication with an expectation of rapid response (without the necessity of stopping processing to await the result).
The Mach UNIX implementation developed at Carnegie Mellon is the first significant example of a message-based operating system. Its performance and functionality have been very attractive for systems that require considerable interprocess
communications. The NeXT operating system takes advantage of this message-based IPC to implement an object-oriented operating system.
The advantage of this process-to-process communication is evident when processors are involved in many simultaneous processes. It is evident how servers will use this capability; however, the use in the client workstation, although important, is less
clear. New client applications that use object-level relationships between processes provide considerable opportunity and need for this type of communication. For example, in a text-manipulation application, parallel processes to support editing,
hyphenation, pagination, indexing, and workgroup computing may all be active on the client workstation. These various tasks must operate asynchronously for the user to be effective.
A second essential requirement is object-level linking. Each process must view the information through a consistent model to avoid the need for constant conversion and subsequent incompatibilities in the result.
NeXTStep, the NeXT development environment and operating system, uses PostScript and the Standard Generalized Markup Language (SGML) to provide a consistent user and application view of textual information. IBM's peer-to-peer specification, LU6.2, provides support for parallel sessions, thus reducing much of the overhead associated with many RPCs (that is, the establishment of a session for each request). IBM has licensed this technology for use in its implementation of OSF/1.
RPC technology is here and working, and should be part of every client/server implementation. As we move into OLTP and extensive use of multitasking workgroup environments, the use of message-based IPCs will be essential. DEC's implementation is called
DECmessageQ and is a part of its Application Control Architecture. The Object Management Group (OMG) has released a specification for an object request broker that defines the messaging and RPC interface for heterogeneous operating systems and
networks. The OMG specification is based on several products already in the marketplace, specifically HP's NewWave with Agents and the RPCs from HP and Sun. Organizations that want to design applications to take advantage of these facilities as they become
available can gain considerable insight by analyzing the NewWave agent process. Microsoft has entered into an agreement with HP to license this software for inclusion in Windows NT.
OLE is designed to let users focus on data (including words, numbers, and graphics) rather than on the software required to manipulate the data. A document becomes a collection of objects, rather than a file; each object remembers the software that maintains it. Applications that are OLE-capable provide an API that passes the description of the object to any other application that requests the object.
WAN bandwidth for data communications is a critical issue. In terminal-to-host networks, traffic generated by applications could be modeled, and the network would then be sized accordingly, enabling effective use of the bandwidth. With LAN
interconnections and applications that enable users to transfer large files (such as through e-mail attachments) and images, this modeling is much harder to perform.
"Bandwidth-on-demand" is the paradigm behind these emerging technologies. Predictability of applications requirements is a thing of the past. As application developers get tooled for rapid application development and as system management
facilities enable easy deployment of these new applications, the lifecycle of network redesign and implementation is dramatically shortened. In the short term, the changes are even more dramatic as the migration from a host-centric environment to a
distributed client/server environment prevents the use of any past experience in "guessing" the actual network requirements.
Network managers must cope with these changes by seeking those technologies that will let them acquire bandwidth cost effectively while allowing flexibility to serve these new applications. WAN services have recently emerged that address this issue by
providing the appropriate flexibility inherently required for these applications.
Distance-insensitive pricing seems to be emerging as virtual services are introduced. When one takes into account the tremendous amount of excess capacity that the carriers have built into their infrastructure, this is not as surprising as it might seem.
This will enable users and systems architects to become less sensitive to data and process placement when designing an overall distributed computing environment.
Frame Relay network services are contracted by selecting two components: an access line and a committed information rate (CIR). The CIR is the guaranteed throughput you pay for. However, Frame Relay networks enable you to exceed this throughput at certain times, allowing, for example, efficient file transfers.
Frame Relay networks are often qualified as virtual private networks. They share a public infrastructure but implement virtual circuits between the senders and the receivers, similar to actual circuits. It is therefore a connection-oriented
network. Security is provided by defining closed user groups, a feature that prevents devices from setting up virtual connections to devices they are not authorized to access.
Figure 5.10 illustrates a typical scenario for a frame relay implementation. This example is being considered for use by the Los Angeles County courts for the ACTS project, as described in Appendix A.
Figure 5.10. Frame relay implementation.
SMDS is a high-speed service based on cell relay technology, using the same 53-byte cell transmission fabric as ATM. It also enables mixed data, voice, and video to share the same network fabric. Available from selected RBOCs as a wide-area service, it
supports high speeds well over 1.5 Mbps and up to 45 Mbps.
SMDS differs from Frame Relay in that it is a connectionless service. Destinations and throughput to those destinations do not have to be predefined. Currently under trial by major corporations, SMDS, at speeds that match the current needs of customers, is a precursor to ATM services.
The many advantages of ATM were discussed earlier in the chapter. Although not yet available as a service from the carriers, ATM will soon be possible if built on private infrastructures.
Private networks have traditionally been used in the United States for high-traffic networks with interactive performance requirements. Canada and other parts of the world have more commonly used public X.25 networks, for both economic and technical
reasons. With the installation of digital switching and fiber-optic communication lines, the telephone companies now find themselves in a position of dramatic excess capacity. Figure 5.11 illustrates the cost per thousand bits of communication. What is
interesting is not the unit costs, which continue to decline, but the ratio of costs per unit when purchased in the various packages. Notice that the cost per byte for a T1 circuit is less than 1/5 the cost of a 64-Kbps circuit. In a T3 circuit package,
the cost is 1/16.
In reality, the telephone company's costs are in providing the service, initiating the call, and billing for it. There is no particular difference in the cost for distance and little in the cost for capacity. British Telecom has recently started offering a service with distance-insensitive pricing.
LANs provide a real opportunity to realize these savings. Every workstation on the LAN shares access to the wide-area facilities through the router or bridge. If the router has access to a T1 or T3 circuit, it can provide service on demand to any of
the workstations on the LAN. This means that a single workstation can use the entire T1 for the period needed to transmit a document or file.
Figure 5.11. Communication bandwidth trends. (Source: PacTEL tariffs, 1992.)
As Figure 5.12 illustrates, this bandwidth becomes necessary if the transmission involves electronic documents. The time to transmit a character screen image is only 0.3 seconds with the 64-Kbps circuit. Therefore, increasing the performance of this
transmission provides no benefit. If the transmission is a single-page image, such as a fax, the time to transmit is 164 seconds. This is clearly not an interactive response. Using a T1 circuit, the time reduces to only 5.9 seconds, and with a T3, to 0.2
seconds. If this image is in color, the times are 657 seconds compared to 23.5 and 0.8 seconds. In a client/server database application where the answer set to a query might be 10M, the time to transmit is 1,562 seconds (compared to 55.8 and 1.99 seconds).
Figure 5.12. Communications throughput.
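The underlying arithmetic is simply size divided by line rate. The sketch below uses nominal line rates and 8 bits per byte, so it ignores the framing and protocol overhead reflected in the figures quoted above (which therefore run somewhat higher):

```python
# Rough transfer-time estimates at nominal line rates, ignoring overhead.
RATES_BPS = {"64 Kbps": 64_000, "T1": 1_544_000, "T3": 45_000_000}

def transfer_seconds(size_bytes, rate_bps):
    return size_bytes * 8 / rate_bps      # 8 bits per byte, no framing

query_result = 10_000_000                  # a 10M answer set from a query
times = {line: round(transfer_seconds(query_result, bps), 1)
         for line, bps in RATES_BPS.items()}
print(times)   # {'64 Kbps': 1250.0, 'T1': 51.8, 'T3': 1.8}
```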
When designing the architecture of the internetwork, it is important to take into account the communications requirements. This is not just an issue of total traffic, but also of instantaneous demand and user response requirements. ATM technologies
will enable the use of the same lines for voice, data, or video communications without preallocating exclusive portions of the network to each application.
ISDN is a technology that enables digital communications to take place between two systems in a manner similar to using dial-up lines. Connections are established over the public phone network, but they provide throughput of up to 64 Kbps. ISDN has two standard levels of service: the Basic Rate Interface (BRI) and the Primary Rate Interface (PRI).
ISDN can provide high quality and performance services for remote access to a LAN. Working from the field or at home through ISDN, a workstation user can operate at 64 Kbps to the LAN rather than typical modem speeds of only 9.6 Kbps. Similarly,
workstation-to-host connectivity can be provided through ISDN at these speeds. Help desk support often requires the remote help desk operator to take control of or share access with the user workstation display. GUI applications transmit megabits of data
to and from the monitor. This is acceptable in the high-performance, directly connected implementation usually found with a LAN attached workstation; but this transmission is slow over a communications link.
Multimedia applications offer considerable promise for future use of ISDN. The capability to simultaneously send information over the same connection enables a telephone conversation, a video conference, and integrated workstation-to-workstation
communications to proceed concurrently. Faxes, graphics, and structured data all can be communicated and made available for all participants in the conversation.
When applications reside on a single central processor, the issues of network management assume great importance but often can be addressed by attentive operations staff. With the movement to client/server applications, processors may reside away from the central site, distributed throughout the organization.
If the data or application logic necessary to run the business resides at a location remote from the "glass house" central computer room, these resources must be visible to the network managers. The provision of a network control center
(NCC) to manage all resources in a distributed network is the major challenge facing most large organizations today. Figure 5.13 illustrates the various capabilities necessary to build this management support. The range of services is much greater than
the services traditionally implemented in terminal-connected host applications. Many large organizations view this issue as the most significant obstacle to the successful rollout of client/server applications.
Figure 5.13. Network management.
Figure 5.13 illustrates the key layers in the management system architecture:
OSF defines many of the most significant architectural components for client/server computing. The OSF selection of HP's OpenView, combined with IBM's commitment to OSF's DME through its NetView/6000 product, ensures that a dominant standard for network management services will emerge. There are five key OSI management areas: fault, configuration, accounting, performance, and security management.
The current state of distributed network and systems management shows serious weaknesses when compared with the management facilities available in the mainframe world today. Nevertheless, with the adoption of OpenView as the standard platform, and with products such as Remedy Corporation's Action Request System for problem tracking and process automation, Tivoli's framework for systems administration, management, and security, and support applications from vendors such as OpenVision, it is possible to implement effective distributed network and systems management today. The integration required, however, demands considerably more effort than traditional mainframe operations.
Standards organizations and the major vendors each offer their own solutions to this challenge. There is considerable truth in the axiom that "the person who controls the network controls the business." The selection of the correct management architecture for an organization is not straightforward and requires a careful analysis of the existing and planned infrastructure. Voice, data, application, video, and other unstructured data needs must all be considered.