Organizations want to take advantage of the low-cost and user-friendly environment that existing desktop workstations provide. There is also a strong need and desire to capitalize on existing investment
at the desktop and in the portfolio of business applications currently running in the host. Thus, corporate networks are typically put in place to connect user workstations to the host. Immediate benefits are possible by integrating these three
technologies: workstations, connectivity, and hosts. Retraining and redevelopment costs are avoided by using the existing applications from an integrated desktop.
Client/server computing provides the capability to use the most cost-effective user interface, data storage, connectivity, and application services. Frequently, client/server products are deployed within the present organization but are not used
effectively. The client/server model provides the technological means to use previous investments in concert with current technology options. There has been a dramatic decline in the cost of the technology components of client/server computing.
Organizations see opportunities to use technology to provide business solutions. Service and quality competition in the marketplace further increase the need to take advantage of the benefits available from applications built on the client/server model.
Client/server computing in its best implementations moves the data-capture and information-processing functions directly to the knowledgeable worker: that is, the worker with the ability to respond to errors in the data, and the worker with the
ability to use the information made available. Systems used in the front office, directly involved in the process of doing the business, are forced to show value. If they don't, they are discarded under the cost pressures of doing business. Systems that
operate in the back room after the business process is complete are frequently designed and implemented to satisfy an administrative need, without regard to their impact on business operations. Client/server applications integrate the front and back office
processes because data capture and usage become an integral part of the business rather than an after-the-fact administrative process. In this mode of operation, the processes are continuously evaluated for effectiveness. Client/server computing provides
the technology platform to support the vital business practice of continuous improvement.
The client/server computing model provides the means to integrate personal productivity applications for an individual employee or manager with specific business data processing needs to satisfy total information processing requirements for the entire organization.
Data that is collected as part of the normal business process and maintained on a server is immediately available to all authorized users. The use of Structured Query Language (SQL) to define and manipulate the data provides support for open access
from all client processors and software. SQL grants all authorized users access to the information through a view that is consistent with their business need. Transparent network services ensure that the same data is available with the same currency to all users.
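The view mechanism described above can be sketched with any SQL engine; this example uses SQLite through Python's standard library, and the table, column, and view names are hypothetical:

```python
import sqlite3

# A minimal sketch of an SQL view granting role-specific access to
# shared server data. Table and column names here are made up.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE orders (
    order_id INTEGER PRIMARY KEY,
    customer TEXT,
    amount   REAL,
    cost     REAL)""")
conn.executemany(
    "INSERT INTO orders (customer, amount, cost) VALUES (?, ?, ?)",
    [("Acme", 100.0, 60.0), ("Bolt", 250.0, 190.0)])

# A sales clerk's view omits the cost column; the underlying table is
# untouched, and every authorized client sees the same current rows.
conn.execute("""CREATE VIEW sales_view AS
    SELECT order_id, customer, amount FROM orders""")
rows = conn.execute("SELECT customer, amount FROM sales_view").fetchall()
print(rows)  # [('Acme', 100.0), ('Bolt', 250.0)]
```

The view is defined once on the server; each class of user queries only the view that matches its business need.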
In the client/server model, all information that the client (user) is entitled to use is available at the desktop. There is no need to change into terminal mode or log into another processor to access information. All authorized information and
processes are directly available from the desktop interface. The desktop tools (e-mail, spreadsheet, presentation graphics, and word processing) are available and can be used to deal with information provided by application and database servers
resident on the network. Desktop users can use their desktop tools in conjunction with information made available from the corporate systems to produce new and useful information.
Figure 2.1 shows a typical example of this integration. A word-processed document that includes input from a drawing package, a spreadsheet, and a custom-developed application can be created. The facilities of Microsoft's Dynamic Data Exchange (DDE)
enable graphics and spreadsheet data to be cut and pasted into the word-processed document along with the window of information extracted from a corporate database. The result is displayed by the custom application.
Creation of the customized document is done using only desktop tools and the mouse to select and drag information from either source into the document. The electronic scissors and glue provide powerful extensions to existing applications and take
advantage of the capability of the existing desktop processor. The entire new development can be done by individuals who are familiar only with personal productivity desktop tools. Manipulating the spreadsheet object, the graphics object, the application
screen object, and the document object using the desktop cut and paste tools provides a powerful new tool to the end user.
Developers use these same object manipulation capabilities under program control to create new applications in a fraction of the time consumed by traditional programming methods. Object-oriented development techniques are dramatically increasing the
power available to nonprogrammers and user professionals to build and enhance applications.
Figure 2.1. Personal productivity and integrated applications.
Another excellent and easily visualized example of the integration possible in the client/server model is implemented in the retail automobile service station. Figure 2.2 illustrates the comprehensive business functionality required in a retail gas
service station. The service station automation (SSA) project integrates the services of gasoline flow measurement, gas pump billing, credit card validation, cash register management, point-of-sale, inventory control, attendance recording, electronic
price signs, tank monitors, accounting, marketing, truck dispatch, and a myriad of other business functions. These business functions are all provided within the computer-hostile environment of the familiar service station with the same type of
workstations used to create this book. The system uses all of the familiar client/server components, including local and wide-area network services. Most of the system users are transitory employees with minimal training in computer technology. An
additional challenge is the need for real-time processing of the flow of gasoline as it moves through the pump. If the processor does not detect and measure the flow of gasoline, the customer is not billed. The service station automation system is a
classic example of the capabilities of an integrated client/server application implemented and working today.
Figure 2.2. Integrated retail outlet system architecture.
The client/server computing model provides opportunities to achieve true open system computing. Applications may be created and implemented without regard to the hardware platforms or the technical characteristics of the software. Thus, users may
obtain client services and transparent access to the services provided by database, communications, and applications servers. Operating systems software and platform hardware are independent of the application and masked by the development tools used to
build the application.
In this approach, business applications are developed to deal with business processes invoked by the existence of a user-created "event." An event such as the push of a button, selection of a list element, entry in a dialog box, scan of a bar
code, or flow of gasoline occurs without the application logic being sensitive to the physical platforms.
Client/server applications operate in one of two ways. They can function as the front end to an existing application (the more limited mainframe-centric model discussed in Chapter 1), or they can provide data entry, storage, and reporting by using a distributed set of clients and servers. In either case, the use (or even the existence) of a mainframe host is totally masked from the workstation developer by the use of standard interfaces such as SQL.
SQL is an industry-standard data definition and access language. This standard definition has enabled many vendors to develop production-class database engines to manage data as SQL tables. Almost all the development tools used for client/server
development expect to reference a back-end database server accessed through SQL. Network services provide transparent connectivity between the client and local or remote servers. With some database products, such as Ingres Star, a user or application can
define a consolidated view of data that is actually distributed between heterogeneous, multiple platforms.
Systems developers are finally reaching the point at which this heterogeneity will be a feature of all production-class database engine products. Most systems that have been implemented to date use a single target platform for data maintenance. The
ability to do high-volume updates at multiple locations and maintain database integrity across all types of errors is just becoming available with production-level quality performance and recovery. Systems developed today that use SQL are inherently
transparent to data storage location and the technology of the data storage platform. The SQL syntax does not specify a location or platform. This transparency enables tables to be moved to other platforms and locations without affecting the application
code. This feature is especially valuable when adopting proven new technology or when it makes business sense to move data closer to its owner.
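Location transparency follows directly from the fact that SQL text names only logical tables. As a sketch, the same statement below runs unchanged against two independent SQLite engines standing in for two storage platforms; the table and query are illustrative:

```python
import sqlite3

# The SQL names only a logical table, so the identical statement works
# whether the table lives on one "platform" or another.
QUERY = "SELECT part, qty FROM inventory WHERE qty < ?"

def load(conn):
    conn.execute("CREATE TABLE inventory (part TEXT, qty INTEGER)")
    conn.executemany("INSERT INTO inventory VALUES (?, ?)",
                     [("valve", 3), ("hose", 40)])

# Two in-memory engines simulate the table before and after a move.
old_server = sqlite3.connect(":memory:")
new_server = sqlite3.connect(":memory:")
load(old_server)
load(new_server)  # table "moved": same data, different engine

# Identical application code works against either location.
for conn in (old_server, new_server):
    assert conn.execute(QUERY, (10,)).fetchall() == [("valve", 3)]
```

In a real deployment the connection would be supplied by network services; the application code holding the SQL never changes.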
Database services can be provided in response to an SQL request, without regard to the underlying engine. This engine can be provided by vendors such as ASK/Ingres, Oracle, Sybase, or IBM running on Windows NT, OS/2, UNIX, or MVS platforms. The
system development environment (SDE) and tools must implement the interfaces to the vendor database and operating system products. The developer does not need to know which engine or operating system is running. If the SDE does not insulate the developer from direct access to the database server platform, the temptation to be efficient will lead developers to use "features" available only from a specific vendor. The transparency of platform is essential if the application is to remain portable. Application portability is essential for taking advantage of innovation in technology and cost competitiveness, and for providing protection from the danger of vendor failure.
Database products, such as Sybase used with the Database Gateway product from Micro DecisionWare, provide direct, production-quality, and transparent connectivity between the client and servers. These products may be implemented using DB2, IMS/DB, or
VSAM through CICS into DB2, and Sybase running under VMS, Windows NT, OS/2, DOS, and MacOS. Bob Epstein, executive vice president of Sybase, Inc., views Sybase's open server approach to distributed data as incorporating characteristics of the semantic
heterogeneity solution.1 In this solution, the code at the remote server can be used to deal with different database management systems (DBMSs), data models, or processes. The remote procedure call (RPC) mechanism used by Sybase can be interpreted as a
message that invokes the appropriate method or procedure on the open server. True, somebody has to write the code that masks the differences. However, certain parts, such as accessing a foreign DBMS (like Sybase SQL Server to IBM DB2), can be generalized.
ASK's Ingres Star product provides dynamic SQL to support a distributed database between UNIX and MVS. Thus, Ingres Windows 4GL running under DOS or UNIX as a client can request a data view that involves data on the UNIX Ingres and MVS DB2 platforms.
Ingres is committed to providing static SQL and IMS support in the near future. Ingres' Intelligent Database engine will optimize the query so that SQL requests to distributed databases are handled in a manner that minimizes the number of rows moved from
the remote server. This optimization is particularly crucial when dynamic requests are made to distributed databases. With the announcement of the Distributed Relational Database Architecture (DRDA), IBM has recognized the need for open access from other
products to DB2. This architecture provides the application program interfaces (APIs) necessary for other vendors to generate static SQL requests to the DB2 engine running under MVS. Norris van den Berg, manager of Strategy for Programming Systems at IBM's
Santa Teresa Laboratory in San Jose, California, points out that IBM's Systems Application Architecture (SAA) DBMSs are different. Even within IBM, they must deal with the issues of data interchange and interoperability in a heterogeneous environment.2
More importantly, IBM is encouraging third-party DBMS vendors to comply with its DRDA. This is a set of specifications that will enable all DBMSs to interoperate.
The client/server model provides the capability to make ad hoc requests for information. As a result, optimization of dynamic SQL and support for distributed databases are crucial for the success of the second generation of a client/server application.
The first generation implements the operational aspects of the business process. The second generation is the introduction of ad hoc requests generated by the knowledgeable user looking to gain additional insight from the information available.
When SQL is used for data access, users can access information from databases anywhere in the network. Whether the data resides on the local PC, a local server, or a wide area network (WAN) server, the developer and the user issue the same data request.
The only noticeable difference may be performance degradation if the network bandwidth is inadequate. Data may be accessed from dynamic random-access memory (D-RAM), from magnetic disk, or from optical disk, with the same SQL statements. Logical tables can
be accessed (without any knowledge of the ordering of columns or awareness of extraneous columns) by selecting a subset of the columns in a table. Several tables may be joined into a view that creates a new logical table for application program
manipulation, without regard to its physical storage format.
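The join-into-a-view idea above can be shown concretely; this sketch again uses SQLite in Python, with invented customer and invoice tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (cust_id INTEGER, name TEXT, region TEXT);
    CREATE TABLE invoices  (inv_id INTEGER, cust_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Acme', 'West');
    INSERT INTO invoices  VALUES (10, 1, 99.5);
    -- Two physical tables joined into one new logical table.
    CREATE VIEW cust_invoices AS
        SELECT c.name, i.total
        FROM customers c JOIN invoices i ON c.cust_id = i.cust_id;
""")
# The application selects only the columns it needs, with no knowledge
# of physical storage format or of columns it does not use.
rows = conn.execute("SELECT name, total FROM cust_invoices").fetchall()
print(rows)  # [('Acme', 99.5)]
```

The program manipulates `cust_invoices` as if it were a single table; how the engine stores or joins the underlying rows is invisible to it.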
The use of new data types, such as binary large objects (BLOBs), enables other types of information such as images, video, and audio to be stored and accessed using the same SQL statements for data access. RPCs frequently include data conversion
facilities to translate the stored data of one processor into an acceptable format for another.
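BLOB access really does reuse the ordinary SQL verbs; as a sketch (with a stand-in byte string rather than a real image), the insert and select are unchanged from character data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (doc_id INTEGER, body BLOB)")

# The same INSERT/SELECT syntax handles an image or audio clip stored
# as a BLOB; these four bytes stand in for real media content.
image_bytes = bytes([0x89, 0x50, 0x4E, 0x47])
conn.execute("INSERT INTO documents VALUES (?, ?)", (1, image_bytes))
stored, = conn.execute(
    "SELECT body FROM documents WHERE doc_id = 1").fetchone()
assert stored == image_bytes  # round-trips byte-for-byte
```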
We are moving from the machine-centered computing era of the 1970s and 1980s to a new era in which PC-familiar users demand systems that are user-centered. Previously, a user logged into a mainframe, minicomputer, or microcomputer application. The syntax of access was unique to each platform. Function keys, error messages, navigation methods, security, performance, and editing were all very visible. Today's users expect a standard "look and feel." Users log into an application from the desktop with no concern
for the location or technology of the processors involved.
Figure 2.3 illustrates the evolution of a user's view of the computing platform. In the 1970s, users logged into the IBM mainframe, the VAX minicomputer, or one of the early microcomputer applications. It was evident which platform was being used. Each
platform required a unique login sequence, security parameters, keyboard options, and custom help, navigation, and error recovery. In the current user-centered world, the desktop provides the point of access to the workgroup and enterprise services without
regard to the platform of application execution. Standard services such as login, security, navigation, help, and error recovery are provided consistently among all applications.
Figure 2.3. The computing transformation.
Developers today are provided with considerable independence. Data is accessed through SQL without regard to the hardware, operating system, or location providing the data. Consistent network access methods envelop the application and SQL requests
within an RPC. The network may be based in Open Systems Interconnect (OSI), Transmission Control Protocol/Internet Protocol (TCP/IP), or Systems Network Architecture (SNA), but no changes are required in the business logic coding. The developer of business
logic deals with a standard process logic syntax without considering the physical platform. Development languages such as COBOL, C, and Natural, and development tools such as Telon, Ingres 4GL, PowerBuilder, CSP, as well as some evolving CASE tools such as
Bachman, Oracle CASE, and Texas Instruments' IEF all execute on multiple platforms and generate applications for execution on multiple platforms.
The application developer deals with the development language and uses a version of the SDE customized for the organization to provide standard services. The specific platform characteristics are transparent and subject to change without affecting the application.
As processing steers away from the central data center to the remote office and plant, workstation, server, and local area network (LAN) reliability must approach that provided today by the centrally located mini- and mainframe computers. The most
effective way to ensure this is through the provision of monitoring and support from these same central locations. A combination of technologies that can "see" the operation of hardware and software on the LAN, monitored by experienced support personnel, provides the best opportunity to achieve the level of reliability required.
The first step in effectively providing remote LAN management is to establish standards for hardware, software, networking, installation, development, and naming. These standards, used in concert with products such as IBM's SystemView, Hewlett-Packard's OpenView, Elegant's ESRA, Digital's EMA, and AT&T's UNMA, provide the remote view of the LAN. Other tools, such as PC Connect for remote connect, PCAssure from Centel for security, products for hardware and software
inventory, and local monitoring tools such as Network General's Sniffer, are necessary for completing the management process.
The changes in computer technology that have taken place during the past five years are significantly greater than those of the preceding 35 years of computer history. There is no doubt that we will continue to experience an even greater rate of change
during the coming five-year period.
Consulting a crystal ball, projecting the future, and making decisions based on the projections is a common failure of the computer industry. Predicting the future is a risky business. Industry leaders, technicians, and investors have been equally
unsuccessful on occasion. Figures 2.4, 2.5, and 2.6 repeat some of the more familiar quotes from past fortune tellers projecting the future.
It is important, however, to achieve an educated view of where technology is headed during the life of a new system. The architecture on which a new system is built must be capable of supporting all users throughout its life. Large organizations
traditionally have assumed that their applications will provide useful service for 5 to 10 years. Many systems are built with a view of only what is available and provable today, and they are ready to fall apart like a deck of cards when the operating
environment changes and the architecture cannot adapt to the new realities. Properly architected systems consider not only the reality of today but also an assessment of the likely reality five years after the date of implementation.
Figure 2.4. Workstation market potential. (Source: J. Opel, IBM, 1982.)
Figure 2.5. Technology pessimism. (Source: UNIVAC, 1950, on opportunity for UNIVAC I.)
Figure 2.6. Technology optimism. (Source: Anonymous bankrupt investor, 1986.)
Despite predictions that the scope of change in computer technology in the next five years will exceed that seen in the entire computer era (1950 through 1994), a view of history still provides the only mirror we have into the future.
A 1990 survey of U.S. Fortune 1000 companies, completed by a well-known computer industry research firm, found that on an MIPS (millions of instructions per second) basis, more than 90 percent of the processing power available to organizations exists
at the desktop. This cheap computing power is typically underused today. It is a sunk cost available to be used as clients in the implementation of client/server applications.
Figure 2.7 illustrates the portion of processor capacity allocated to the central site and the desktop. In most organizations, the 9 percent of processor capacity residing in the "glass house" central computer center provides 90 percent or
more of enterprise computing. The 90 percent of processor capacity on the desktop and installed throughout the organization provides less than 10 percent of the processing power to run the business. Most workstation systems are used for personal
productivity applications, such as word processing, presentation graphics, and spreadsheet work. The personal productivity functions performed on these machines typically occupy the processor for a maximum of two to three hours per day.
Figure 2.7. Managing the shift to distributed processing.
Most applications require that the information they manipulate also be read and saved. In the next example, added to the CPU processing is the requirement to perform 1,000 physical data read or write operations per second. Figure 2.8 shows the costs of
performing these operations.
Figure 2.8. The I/O bottleneck.
The same portion of the mainframe configuration required to provide one MIPS execution capability can simultaneously handle this I/O requirement. The workstation configuration required to simultaneously handle these two tasks in 1989 cost at least
twice that of the mainframe configuration. In addition, the configuration involved multiple processors without shared memory access. In order to preserve data integrity, the I/O must be read only. The dramatic reduction in workstation cost projected in
1995 is predicated on the use of symmetric multiprocessors to provide CPUs with shared memory and on the use of coprocessors providing the cached controllers necessary to support parallel I/O. (Parallel I/O enables multiple I/O requests to several devices
to be serviced concurrently with host CPU processing.) However, the costs are still projected to be 75 percent greater than costs on the mainframe for this high rate of I/O.
The difference in price and functionality is primarily explained by the fact that the IBM 3090-600 is an example of a massively parallel processor optimized to do I/O. Every channel, DASD controller, tape controller, and console contains other
processors. The processing capacity of these other processors is three to eight times the processing capacity of the main processor. These processors have direct memory access (DMA) to the shared memory of the configuration, with minimal impact on the
processing capacity of the main processor. These processors enable I/O operations to proceed in parallel with little or no main processor involvement.
For the immediate future, forecasts show little slackening in demand for large host processors to provide enterprise database engine services for large companies, especially Fortune 500 firms. Ad hoc processing demands generated by the availability of
workplace requestors will further increase the I/O demand. The RISC and Intel processors, as configured today and envisioned over the next five years, continue to use the main processor to perform much of the processing involved in I/O functions. This is
an economical strategy for most client applications and many server applications where the I/O demands do not approach those found in large host mainframe configurations. Distributed database technology reduces the demands for I/O against a single database
configuration and distributes the I/O with the data to the remote server processors. Despite the dramatic increase in CPU power, there hasn't been a corresponding increase in the capability to do "real" I/O. Some mechanical limitations are not
solved by increased CPU power. In fact, the extra CPU merely enables I/O requests to be generated more rapidly.
Figure 2.9 illustrates how unbalanced CPU-to-I/O ratios became between 1980 and 1990: for the same dollar expenditure, processor capacity increased by 100 times while I/O capacity increased by only 18 times. There
is no indication that this rate of change will decline in the future. In fact, it is likely that with increased use of symmetric multiprocessors, CPU power availability will increase more rapidly. This in turn will generate even greater I/O demands and
further widen the gap.
Only through the effective use of real storage (D-RAM) can we hope to use the available CPU power. Data can be accessed from D-RAM without the need to do physical I/O except to log the update. Database technology uses a sequential log to record
changes. These sequential writes can be buffered and done very rapidly. The random updates to the database are done when the system has nothing better to do or when the shared D-RAM containing the updated data is required for other data. The log is used to
recover the database after any failure that terminates the application.
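The buffered-log discipline described above can be sketched in miniature. This Python class is an illustrative toy, not any particular database engine's design: updates land in memory and in a sequential log at once, while the random-access write is deferred.

```python
# Toy model of logged updates: D-RAM copy updated immediately, change
# appended to a sequential log, random disk write deferred until idle.
class LoggedStore:
    def __init__(self):
        self.dram = {}   # current data, held in memory
        self.log = []    # sequential log: fast, buffered appends
        self.disk = {}   # random-access storage, written lazily

    def update(self, key, value):
        self.log.append((key, value))  # only physical I/O on this path
        self.dram[key] = value

    def flush(self, key):
        # done "when the system has nothing better to do"
        self.disk[key] = self.dram[key]

    def recover(self):
        # after a failure, replay the log over the disk image
        recovered = dict(self.disk)
        for key, value in self.log:
            recovered[key] = value
        return recovered

store = LoggedStore()
store.update("balance", 100)
store.update("balance", 250)
assert store.recover()["balance"] == 250  # log alone rebuilds the data
```

The point of the pattern is visible in `update`: the only I/O on the critical path is a sequential append, which real engines batch and write very rapidly.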
Figure 2.9. Processor power versus I/O capacity. (Source: International Data Corporation.)
Another complication in the I/O factor is the steadily decreasing cost of permanent data storage devices. As the cost of traditional data storage devices (disk and tape) decreases, new technologies with massively greater capacity have evolved.
Optical storage devices provide greater storage for less cost but with a somewhat slower rate of access than magnetic disk technologies. Most industry experience demonstrates that the amount of data an organization wants to store depends on the cost of
storage, not on any finite limit to the amount of data available. If the cost of storage is halved, twice as much data will be available to store for the same budget. This additional data may come from longer histories, external sources, or totally new
forms of data, such as image, audio, video, and graphics. New applications may be justified by the reduction in cost of data stores.
Workstation technologies can deal with personal data, data extracted from central systems for analysis by the end user, data from integrated external sources for comparison, and integrated new types of data such as voice annotation to documents. All
these data forms provide additional uses for lower-cost, permanent data storage. Decision-support systems can use workstation technologies and massive amounts of additional data to provide useful, market-driven recommendations.
Relational database technologies also can limit the amount of real I/O required to respond to information requests. The use of descriptor indexes that contain data values extracted from columns of the database tables enables search criteria to be
evaluated by accessing only the indexes. Access to the physical database itself is required only when the index search results in the identification of rows from the relational table that satisfy the search criteria. Large relational tables, which are
accessed through complex searches, can demonstrate dramatically different performance and cost of access depending on the effectiveness of the database search engine. Products such as DB2 and Ingres, which do extensive query optimization, often demonstrate
significantly better performance than other products in complex searches. Products that were developed to deal with a small memory model often exhibit dramatic CPU overhead when the size of resident indexes gets very large. DB2 achieves linear improvement
in performance as indexes are allocated more D-RAM. Oracle, on the other hand, does not perform well in the IBM System 370 MVS implementation because of its overhead in managing very large main storage buffer pools.
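A toy version of descriptor-index evaluation, with made-up table contents, shows how search criteria can be resolved from the index alone, touching the table only for qualifying rows:

```python
# A descriptor index maps a column value to the identifiers of the rows
# containing it, so a WHERE clause is evaluated without reading rows.
table = {1: ("valve", "West"), 2: ("hose", "East"), 3: ("pump", "West")}
region_index = {}
for row_id, (_, region) in table.items():
    region_index.setdefault(region, set()).add(row_id)

fetches = 0
def fetch(row_id):
    global fetches
    fetches += 1          # counts simulated physical row I/O
    return table[row_id]

# Evaluate WHERE region = 'West' against the index alone...
matching = region_index.get("West", set())
# ...then access the table only for the rows that satisfy the criteria.
result = [fetch(r) for r in sorted(matching)]
assert fetches == 2       # only the two qualifying rows were read
```

Real engines layer query optimization on top of this, but the saving is the same: non-qualifying rows never generate physical I/O.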
Arguably, the most dramatic technological revolution affecting the computer industry today is caused by the increase in the amount of main storage (D-RAM) available to an application. D-RAM is used for the execution of programs and the temporary
storage of permanent data.
Computer users have entered the era of very large and inexpensive D-RAM. Figure 2.10 represents the manner in which this technology has evolved and continues to evolve. Every three years, a new generation of D-RAM technology is released. Each new
generation is released with four times the capacity of the previous generation for the same chip price. At the point of introduction and at any given time during its life cycle, the cost of these chips is reduced to a price equal to the price of chips from
the previous generation. As the capacity of individual D-RAM chips has increased, the quantity of D-RAM available to the client (and server) has increased massively. Laboratory and manufacturing evidence reveals that this trend will continue at least through the end of the decade.
Figure 2.10. D-RAM chip evolution.
Desktop workstations purchased in 1988 with 1 megabit (Mbit) D-RAM chips were available in 1992 with 4Mbit D-RAM chips for the same or lower cost. In 1988, typical desktop workstations contained 1 to 4 megabytes (Mbytes) of D-RAM. In 1992, these same configurations contain from 4 to 16Mbytes. In 1995, these configurations will use 16Mbit chips and be available with 16 to 64Mbytes for the same price. By 1998 (within the life span of many applications being developed today), these configurations
will use 64Mbit chips and contain from 64 to 256Mbytes of D-RAM for the same price.
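The generation arithmetic above reduces to a one-line formula. This sketch assumes the 1Mbit/1988 baseline from the text and a clean four-times step every three years:

```python
# Each three-year D-RAM generation quadruples chip capacity at the
# same price, starting (per the text) from 1Mbit chips in 1988.
def chip_capacity_mbit(year, base_year=1988, base_mbit=1):
    generations = (year - base_year) // 3
    return base_mbit * 4 ** generations

assert chip_capacity_mbit(1992) == 4    # 4Mbit chips
assert chip_capacity_mbit(1995) == 16   # 16Mbit chips
assert chip_capacity_mbit(1998) == 64   # 64Mbit chips
```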
A revolutionary change is occurring in our capability to provide functionality at the desktop. Most developers cannot generate anywhere near the amount of code necessary to fill a 64Mbyte processor on the desk. Yet applications being built today will
be used on desktop processors with this amount of D-RAM. As Chapter 3 discusses more fully, the client workstation can now contain in D-RAM all the software that the user will want to use. This eliminates the delay that was previously inherent in program
switching, that is, program loading and startup. It is now practical to use a multitasking client workstation with several active tasks and to switch regularly among them. Virtual storage is a reality. Workstation D-RAM costs were less than $50 per
megabyte in 1992. The cost difference for an additional 4 megabytes is only $200. Only one year earlier, short-sighted application designers may have made system design decisions based on a cost of $1000 for 4Mbytes.
The same chip densities used for desktop processors are used in host servers. The typical mainframe computer in 1988 contained from 64 to 256Mbytes of D-RAM. In 1992, 256 to 1,024Mbytes were typical. By 1995, these same host servers will contain 1,024
to 4,096Mbytes of D-RAM. After 1998, host servers will contain 4,096 to 16,384Mbytes of D-RAM. These quantities are large enough to mandate that we take a completely different view of the way in which software will be built and information will be managed.
During the useful life of systems being conceived today, the I/O bottleneck will be eliminated by the capability to access permanent information from D-RAM.
We are on the verge of the postscarcity era of processor power. In this era, essentially unlimited computing power will become available. With the client/server model, this processing power is available in every workplace: a fundamental paradigm shift for the information-processing industry and its customers. We expect to see a significant shakeout in the industry as hardware-only vendors respond to these changes. What will this mean for developers and consumers?
To achieve the benefit of this advance in technology, organizations must choose software that can use it. Traditional development tools, operating systems, character mode user interfaces, and non-SQL-based database technology cannot take advantage of
this quantity of D-RAM and the power available from workstation technology.
Graphical user interfaces (GUIs) require large amounts of D-RAM to hold the screen image, pull-down lists, help text, navigation paths, and logic associated with all possible selectable events. Because a GUI enables processing to be selected randomly
rather than in the traditional sequential, top-to-bottom order, all possible process logic and GUI management code associated with the image must be available in D-RAM to provide appropriate responses.
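The point about random selection can be sketched in code. A minimal event-dispatch model (with invented event and handler names) shows why a GUI must keep the logic for every selectable event resident in memory: any control may be activated at any moment, so nothing can be loaded on a predictable, sequential schedule.

```python
# Sketch of event-driven GUI dispatch (hypothetical handler names).
# Because the user may trigger any control at any time, the handlers for
# every selectable event must already be resident in D-RAM.
def open_customer_list():
    return "customer list displayed"

def show_help():
    return "context-sensitive help displayed"

def save_record():
    return "record saved"

# Every possible event maps to already-loaded logic; nothing is fetched
# in the top-to-bottom order of a character-mode dialog.
DISPATCH = {
    "button:customers": open_customer_list,
    "key:F1": show_help,
    "menu:file/save": save_record,
}

def handle(event):
    # A real GUI loop would block on an event queue; here we simply look
    # the event up and run its resident handler.
    return DISPATCH[event]()
```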
GUI functions require subsecond response time. Industry analysis has determined, and our experience confirms, that pull-down lists, button selects, and event invocation should take place within 0.1 second to provide a suitable user interface.
Suitable means that the user is unaware of the GUI operations but is focused on the business function being performed. This performance is feasible with today's workstations configured with reasonable amounts of $50 per megabyte D-RAM (in 1992)
and properly architected applications.
CICS developers do not good GUI developers make.3 GUI application development requires a special mindset. Education, experience, and imagination are prerequisites for moving from the character mode world to the GUI world. Laying out a character mode
screen requires that fields are lined up row to row and the screen is not cluttered with too many fields. GUI layout is more difficult, because there are so many options. Colors, pull-down lists, option buttons, text boxes, scrollbars, check boxes, and
multiple windows are all layout options. The skills that a layout artist commonly possesses are more appropriate to the task than those a programmer usually demonstrates.
Another dramatic change in software is in the area of database management. Traditional file system and database technologies rely on locality of reference for good performance in accessing data. Locality of reference implies that all data needed to
satisfy a request is stored physically close together. However, today's business environment requires multikeyed access to rows of information derived from multiple tables. Performance is only possible in these environments when database searches are
performed in main storage using extracted keys organized into searchable lists. Physical access to the database is restricted to the selection of rows that satisfy all search criteria.
Relational database technology, using SQL, best meets these criteria. Despite the protestations symbolized in Figure 2.11, this commonly held view of relational technology is no longer valid. This incorrect view is frequently promulgated by those who
have invested their careers in becoming experts in nonrelational technology. Experience indicates that in concert with good development standards and current technology, relational systems perform as well or better than previous technologies. In addition
to providing independence of the physical storage from the logical view, SQL processors extract the row descriptors (column values) to separate indexes that are managed in main storage. The search request can be evaluated against the indexes to identify
the rows that satisfy all search criteria. Only these identified rows are physically retrieved from external storage.
Figure 2.11. Doubting database administrators.
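The index-based evaluation described above can be modeled in a few lines. This is a toy sketch, not any particular SQL engine: column values are extracted into in-memory indexes, the search criteria are intersected against those indexes, and only the qualifying rows are fetched from "external storage."

```python
from collections import defaultdict

# Toy model of index-assisted selection. Table contents are invented.
rows = {
    1: {"region": "East", "status": "open"},
    2: {"region": "West", "status": "open"},
    3: {"region": "East", "status": "closed"},
}

# Extract each column value into an in-memory index: column -> value -> row ids.
indexes = defaultdict(lambda: defaultdict(set))
for rid, row in rows.items():
    for col, val in row.items():
        indexes[col][val].add(rid)

def select(**criteria):
    # Evaluate every criterion against the indexes first...
    ids = set(rows)
    for col, val in criteria.items():
        ids &= indexes[col][val]
    # ...then physically retrieve only the rows satisfying all criteria.
    return [rows[rid] for rid in sorted(ids)]
```

Only the final list comprehension stands in for physical I/O; the search itself never leaves main storage.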
Standards for use are an important part of a successful implementation of any tool. For example, developers can defeat the effectiveness of SQL in the client/server implementation by coding boolean selection criteria in program logic rather than
embedded SQL. This technique retrieves all rows that satisfy the first SELECT condition and then executes program logic to filter out the unwanted rows. When all the application logic and database processing reside on the same processor, this
is a manageable overhead. In a client/server implementation, it causes database selection to operate at LAN or WAN communication rates rather than at I/O subsystem rates. Frequently, developers hoping to reduce the overhead of query
optimization use the boolean technique for dynamic SQL, with the unfortunate result that performance is dramatically reduced as the additional physical data access time is incurred. It is important to select tools in the client/server world that
generate fully qualified SQL SELECT statements.
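The difference can be shown in miniature. The sketch below (using SQLite as a stand-in engine, with an invented table) contrasts a minimally qualified SELECT filtered by program logic with a fully qualified SELECT; both return the same answer set, but the first transfers rows only to reject them.

```python
import sqlite3

# Invented example table; SQLite stands in for a production database server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("East", "open"), ("East", "closed"),
                  ("West", "open"), ("West", "closed")])

# Boolean filtering in program logic: every East row crosses the "LAN",
# and the client rejects the unwanted ones.
fetched = conn.execute(
    "SELECT region, status FROM orders WHERE region = 'East'").fetchall()
wanted = [r for r in fetched if r[1] == "open"]

# Fully qualified SELECT: only the answer set crosses the "LAN".
qualified = conn.execute(
    "SELECT region, status FROM orders "
    "WHERE region = 'East' AND status = 'open'").fetchall()

assert wanted == qualified            # same answer...
assert len(fetched) > len(qualified)  # ...but more rows were transferred
```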
Relational systems can and do perform, but poor standards of use can defeat them. As an example of successful performance, the application described in Appendix A processes more than 400 update transactions per second into
a five-table relational database view. This specific example is implemented under DB2 on a midsize ES9000 processor.
The era of desktop workstations began in 1981 with the introduction of the IBM personal computer (PC). The PC provided early users with the capability to do spreadsheets, word processing, and basic database services for personal data. Within three
years, it became clear that high-quality printers, backup tapes, high-capacity disk devices, and software products were too expensive to put on everyone's desktop. LAN technology evolved to solve this problem. Novell is and has been the most successful
vendor in the LAN market.
Figure 2.12 shows the trend in the introduction of PCs into organizations during the period from 1980 until 1995. In most large organizations, desktop workstations provide personal productivity and some workgroup functions, but host services still
provide most other business functions. The lack of desktop real estate encourages the addition of terminal emulation services to the workstation. This emulation capability connects the workstation directly to the corporate systems. The connection was and
generally still is provided by a direct connection from the workstation to the host server or its controller. It is possible to use a sub-$5,000 workstation as a $500 dumb terminal.
Connectivity provides the opportunity to move beyond terminal emulation to use the full potential of the workstation. Often the first client/server applications in a large organization use existing mainframe applications. These are usually presentation services-only applications.
Figure 2.12. Trends in PC-micro expenditures.
The next step in connectivity is the implementation of specialized servers to provide database and communications services. These servers provide LAN users with a common database for shared applications and with a shared node to connect into the
corporate network. The communications servers eliminate the need for extra cabling and workstation hardware to enable terminal emulation. The LAN cabling provides the necessary physical connection, and the communications server provides the necessary logical connection to the host.
With communications and database servers in place, an organization is ready to step up from presentation services-only client/server applications to full-fledged client/server applications. These new applications are
built on the architecture defined as part of the system development environment (SDE).
Personal computer users are accustomed to being in control of their environment. Recently, users have become acclimated to the GUIs provided by products such as Windows 3.x, OPEN LOOK, MacOS, and NeXTSTEP. Productivity is enhanced by the standard look and
feel that most applications running in these environments provide. A user is trained both to get into applications and to move from function to function in a standard way. Users are accustomed to the availability of context-sensitive help,
"friendly" error handling, rapid performance, and flexibility.
Compare the productivity achieved by a financial or budget analyst using a spreadsheet program such as Lotus 1-2-3 or Excel to that achieved when similar functionality is programmed in COBOL on a mainframe. Adding a new variable to an analysis or
budget is a trivial task compared to the effort of making functions perform a similar change in the mainframe-based COBOL package. In the first instance, the change is made directly by the user who is familiar with the requirement into a visible model of
the problem. In the instance of the mainframe, the change must be made by a programmer, who discusses the requirement with the analyst, attempts to understand the issues, and then tries to make the change using an abstraction of the problem.
The personal computer user makes the change and sees the result. The mainframe programmer must make the change, compile the program, invoke the program, and run the test. If the programmer understands the request, the implications, and the syntactical
requirements, he or she may get it right the first time. Usually, it takes several iterations to actually get it right, often in concert with a frustrated user who tries to explain the real requirement.
We aren't suggesting that all applications can be developed by nonprogrammers using desktop-only tools. However, now that it has become rather easy to build these types of applications on the desktop, it is important for professional IS people to
understand the expectations raised in the minds of the end-user community.
Client/server applications may achieve substantially better performance than traditional workstation-only or host-only applications.
Database and communications processing are frequently offloaded to a faster server processor. Some applications processing also may be offloaded, particularly for a complex process, which is required by many users. The advantage of offloading is
realized when the processing power of the server is significantly greater than that of the client workstation. Shared databases or specialized communications interfaces are best supported by separate processors. Thus, the client workstation is available to
handle other client tasks. These advantages are best realized when the client workstation supports multitasking or at least easy and rapid task switching.
Database searches, extensive calculations, and stored procedure execution can be performed in parallel by the server while the client workstation deals directly with the current user needs. Several servers can be used together, each performing a
specific function. Servers may be multiprocessors with shared memory, which enables programs to overlap the LAN functions and database search functions. In general, the increased power of the server enables it to perform its functions faster than the
client workstation. In order for this approach to reduce the total elapsed time, the additional time required to transmit the request over the network to the server must be less than the time saved. High-speed local area network topologies operating at 4, 10,
16, or 100Mbps (megabits per second) can carry the extra traffic quickly enough to preserve that saving. The time to transmit the request to the server, execute the request, and transmit the result to the
requestor must be less than the time to perform the entire transaction on the client workstation.
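The break-even condition in the preceding sentence can be written as a simple inequality. The figures below are purely illustrative, not measurements.

```python
# Break-even test for offloading work to a server: the offload pays only
# if request transmission + server execution + reply transmission takes
# less time than running the whole transaction on the client.
def offload_worthwhile(t_request, t_server, t_reply, t_client_only):
    # All times in the same unit (here, milliseconds).
    return (t_request + t_server + t_reply) < t_client_only

# Illustrative: 5 ms each way on a fast LAN, 40 ms on a powerful server,
# versus 400 ms if the search ran entirely on the client workstation.
assert offload_worthwhile(5, 40, 5, 400) is True

# A slow WAN link can reverse the verdict even with the same fast server.
assert offload_worthwhile(300, 40, 300, 400) is False
```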
As workstation users become more sophisticated, the capability to be simultaneously involved in multiple processes becomes attractive. Independent tasks can be activated to manage communications processes, such as electronic mail, electronic feeds from
news media and the stock exchange, and remote data collection (downloading from remote servers). Personal productivity applications, such as word processors, spreadsheets, and presentation graphics, can be active. Several of these applications can be
dynamically linked together to provide the desktop information processing environment. Functions such as Dynamic Data Exchange (DDE) and Object Linking and Embedding (OLE) permit including spreadsheets dynamically into word-processed documents. These links
can be hot so that changes in the spreadsheet cause the word-processed document to be updated, or they can be cut and paste so that the current status of the spreadsheet is copied into the word-processed document.
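The distinction between hot links and cut-and-paste copies can be sketched as a subscription relationship. This is a conceptual model only, not the actual DDE or OLE protocol; all class and attribute names are invented.

```python
# A "hot" link behaves like a subscription: when the source changes,
# every linked document is updated. Cut and paste copies the value once.
class Spreadsheet:
    def __init__(self, total):
        self.total = total
        self._links = []

    def hot_link(self, document):
        # Register a document so it tracks future changes.
        self._links.append(document)
        document.total = self.total

    def update(self, total):
        self.total = total
        for doc in self._links:      # hot links follow the change
            doc.total = total

class Document:
    def __init__(self):
        self.total = None

sheet = Spreadsheet(100)
hot, pasted = Document(), Document()
sheet.hot_link(hot)                  # hot link: stays current
pasted.total = sheet.total           # cut and paste: a one-time copy
sheet.update(250)
assert hot.total == 250 and pasted.total == 100
```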
Systems developers appreciate the capability to create, compile, link, and test programs in parallel. The complexity introduced by the integrated CASE environment requires multiple processes to be simultaneously active so the workstation need not be
dedicated to a single long-running function. Effective use of modern CASE tools and workstation development products requires a client workstation that supports multitasking.
Excessive network traffic is one of the most common causes of poor system performance. Designers must take special care to avoid this potential calamity.
In the centralized host model, network traffic is reduced to the input and output of presentation screens. In the client/server model, it is possible to introduce significantly more network traffic if detailed consideration is not given to the placement of application and database function.
In the file server model, as implemented by many database products, such as dBASE IV, FoxPro, Access, and Paradox, a search is processed in the client workstation. Record-level requests are transmitted to the server, and all filtering is performed on
the workstation. This has the effect of causing all rows that cannot be explicitly filtered by primary key selection to be sent to the client workstation for rejection. In a large database, the resulting network traffic can be dramatic. Records that are owned by a client
cannot be updated by another client without integrity conflicts. An in-flight transaction might lock records for hours if the client user leaves the workstation without completing the transaction. For this reason, the file server model breaks down when
there are many users, or when the database is large and multikey access is required.
However, with the introduction of specific database server products in the client/server implementation, the search request is packaged and sent to the database server for execution. The SQL syntax is very powerful and, when combined with server
trigger logic, enables all selection and rejection logic to execute on the server. This approach ensures that the answer set returns only the selected rows and has the effect of reducing the amount of traffic between the server and client on the LAN.
(To support the client/server model, dBASE IV, FoxPro, and Paradox products have been retrofitted to be SQL development tools for database servers.)
The performance advantages available from the client/server model of SQL services can be defeated. For example, an unqualified SQL SELECT returns every row satisfying the request to the client for further analysis. Minimally qualified
requests that rely on the programmer's logic at the workstation for further selection can be exceedingly dangerous. Quite possibly, 1 million rows from the server can be returned to the client only to be reduced by the client to 10 useful rows. The JOIN
function in SQL that causes multiple tables to be logically combined into a single table can be dangerous if users don't understand the operation of the database engine.
A classic problem with dynamic SQL is illustrated by a request to Oracle to JOIN a 10-row table at the client with a 1-million-row table at the server. Depending on the format of the request, either 10 useful rows may be transferred to the client or 1
million rows may be transferred so that the useless 999,990 can be discarded. You might argue that a competent programmer should know better; however, this argument breaks down when the requestor is a business analyst. Business analysts should not be
expected to work out the intricacies of SQL syntax. Their tools must protect them from this complexity. (Some DBMSs are now making their optimizers more intelligent to deal with just these cases. So, it is important to look beyond transaction volumes when
looking at DBMS engines.) If your business requirement necessitates using these types of dynamic SQL requests, it is important, when creating an SDE, that the architecture definition step selects products that have strong support for query optimization.
Products such as Ingres are optimized for this type of request.
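A scaled-down version of the join problem can be demonstrated directly. In the sketch below (SQLite standing in for Oracle, with invented table names and a 10,000-row rather than a million-row table), the join executes where the big table lives, so only the matching rows travel back to the requestor.

```python
import sqlite3

# Miniature version of the join problem: a small "client-side" list of
# keys joined against a large "server-side" table. Names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE wanted (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE big (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO wanted VALUES (?)", [(7,), (42,), (99,)])
conn.executemany("INSERT INTO big VALUES (?, ?)",
                 [(i, "row %d" % i) for i in range(1, 10001)])

# Executing the JOIN at the server returns only the matching rows.
answer = conn.execute(
    "SELECT big.id, big.payload FROM big "
    "JOIN wanted ON big.id = wanted.id ORDER BY big.id").fetchall()

assert answer == [(7, "row 7"), (42, "row 42"), (99, "row 99")]
# The alternative -- shipping all 10,000 rows to the client and discarding
# 9,997 -- produces the same answer at vastly greater network cost.
```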
Online Transaction Processing (OLTP) in the client/server model requires products that use views, triggers, and stored procedures. Products such as Sybase, Ellipse, and Ingres use these facilities at the host server to perform the join, apply edit
logic prior to updates, calculate virtual columns, or perform complex calculations. Wise use of these facilities can significantly reduce the traffic between client and server and exploit the powerful CPU capabilities of the server. Multiprocessor servers with shared
memory are available from vendors such as Compaq, Hewlett-Packard, and Sun. These enable execution to be divided between processors. CPU-intensive tasks such as query optimization and stored procedure execution can be separated from the database management function.
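A trigger that applies edit logic prior to an update can be sketched as follows. SQLite syntax stands in here for a production engine such as Sybase or Ingres, and the table, trigger, and rule are invented; the point is that the edit rule lives with the data and binds every client alike.

```python
import sqlite3

# Server-side edit logic in a trigger (invented example rule).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL)")
conn.execute("INSERT INTO account VALUES (1, 500.0)")

# No update may drive a balance negative, regardless of which client
# program issues the UPDATE -- the rule executes at the server.
conn.execute("""
    CREATE TRIGGER no_overdraft BEFORE UPDATE ON account
    WHEN NEW.balance < 0
    BEGIN
        SELECT RAISE(ABORT, 'balance may not go negative');
    END
""")

conn.execute("UPDATE account SET balance = balance - 100 WHERE id = 1")
try:
    conn.execute("UPDATE account SET balance = balance - 900 WHERE id = 1")
except sqlite3.IntegrityError:
    pass  # the server rejected the update before it was applied
assert conn.execute("SELECT balance FROM account").fetchone()[0] == 400.0
```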
The use of application and database servers to produce the answer set required for client manipulation will dramatically reduce network traffic. There is no value in moving data to the client when it will be rejected there. The maximum reduction in
network overhead is achieved when the only data returned to the client is that necessary to populate the presentation screen. Centralized operation, as implemented in minicomputer and mainframe environments, requires every computer interaction with a user
to transfer screen images between the host and the workstation. When the minicomputer or mainframe is located geographically distant from the client workstation, WAN services are invoked to move the screen image. Client/server applications can reduce
expensive WAN overhead by using the LAN to provide local communications services between the client workstation and the server. Many client/server applications use mixed LAN and WAN services: some information is managed on the LAN and some on the WAN.
Application design must evaluate the requirements of each application to determine the most effective location for application and database servers.
Cost of operation is always a major design factor. Appropriate choice of technology and allocation of the work to be done can result in dramatic cost reduction.
Each mainframe user requires a certain amount of the expensive mainframe CPU to execute the client portion of the application. Each CICS user uses CPU cycles, disk queues, and D-RAM. These same resources are orders of magnitude cheaper on the
workstation. If the same or better functionality can be provided by using the workstation as a client, significant savings can be realized. Frequently, workstations already in place for personal productivity applications, such as terminal emulation,
e-mail, word processing, and spreadsheet work, can also serve mission-critical applications. The additional functionality of the client portion of a new application can thus be added without buying a new workstation. In this case, the cost savings of
offloading mainframe processing can be substantial.
When you use a communications server on a LAN, each client workstation does not need to contain the hardware and software necessary to connect to the WAN. Communications servers can handle up to 128 clients for the cost of approximately six client
communications cards and software. Despite the dramatic reductions in the price of D-RAM, companies will continue to need their existing client workstations. These devices may not be capable of further D-RAM upgrades, or it may not be feasible from a
maintenance perspective to upgrade each device. The use of server technology to provide some of the functionality currently provided within a client workstation frees up valuable D-RAM for use by the client applications. This is particularly valuable for workstations that cannot accept additional D-RAM.
The WAN communications functions and LAN services may each be offloaded in certain implementations. The use of WAN communications servers has the additional advantage of providing greater functionality from the dedicated communications server.
If client and server functionality is clearly split and standards-based access is used, there can be considerable vendor independence among application components. Most organizations use more expensive and more reliable workstations from a mainstream
vendor such as Compaq, IBM, Apple, Sun, or Hewlett-Packard for their servers. Other organizations view client workstation technology as a commodity and select lower-priced and possibly less-reliable vendor equipment. The mainstream vendors have realized
this trend and are providing competitively priced client workstations. Each of the mainstream vendors reduced its prices by at least 65 percent between 1991 and 1993, primarily in response to an erosion of market share for client workstations.
The controversy over whether to move from offering a high-priced but best-quality product line to offering a more competitive commodity traumatized the industry in 1991, forcing Compaq to choose between retaining its founder as CEO and replacing him
with a more fiscally aware upstart.
The resulting shakeout in the industry has significantly reduced the number of vendors and made the use of traditionally low-priced clones very risky. Hardware can generally be supported by third-party engineers, but software compatibility is a
serious concern as organizations find they are unable to install and run new products.
The careful use of SQL and RPC requests enables database servers and application services to be used without regard to the vendor of the database engine or the application services platform. As noted previously, the operating system and hardware
platform of the server can be kept totally independent of the client platform through the proper use of an SDE. However, use of these types of technologies can vastly complicate the development process.
An excellent example of this independence is the movement of products such as FoxPro and Paradox to use client services to invoke, through SQL, the server functions provided by Sybase SQL Server. A recent survey of client development products that
support the Sybase SQL Server product identified 129 products. This is a result of the openness of the API provided by Sybase. Oracle also has provided access to its API, and several vendors, notably Concentric Data Systems, SQL Solutions, and
DataEase, have developed front-end products for use with Oracle. ASK also has realized the importance of open access to buyers and is working with vendors such as Fox and PowerBuilder to port their front ends in support of the Ingres database engine.
An application developed to run in a single PC or file server mode can be migrated without modification to a client/server implementation using a database server. Sybase, Oracle, and Ingres execute transparently under Windows NT, OS/2, or UNIX on many
hardware platforms. With some design care, the server platform identity can be transparent to the client user or developer. Despite this exciting opportunity, programmers or manufacturers often eliminate this transparency by incorporating UNIX-, Windows
NT-, or OS/2-specific features into the implementation. Although FoxPro can work with SQL and Sybase, the default Xbase format for database access does not use SQL and therefore does not offer this independence. To take advantage of this platform
transparency, organizations must institute standards into their development practices.
Some software development and systems integration vendors have had considerable success using client/server platforms for the development of systems targeted completely for mainframe execution. These developer workstations are often the first true
client/server applications implemented by many organizations. The workstation environment, powerful multitasking CPU availability, single-user databases, and integrated testing tools all combine to provide the developer with considerable productivity
improvements in a lower-cost environment. Our analysis shows that organizations that measure the "real" cost of mainframe computing will cost-justify workstation development environments in 3 to 12 months.
Client/server application development shows considerable productivity improvement when the software is implemented within an SDE. As previously noted, organizational standards-based development provides the basis for object-oriented development
techniques and considerable code reuse. This is particularly relevant in the client/server model, because some natural structuring takes place with the division of functionality between the client and server environments. Reuse of the server application
functionality, database, and network services is transparent and almost automatic. Because developers need give little individual attention to standard front-end functionality, many features are part of the standard GUI and are automatically reused.
Client/server applications frequently are involved with data creation or data analysis. In such applications, the functionality is personal to a single user or a few users. These applications frequently can be created using standard desktop products
with minimal functionality. For example, data may be captured directly into a form built with a forms development tool, edited by a word processor, and sent on through the e-mail system to a records management application. In the back end, data may be
downloaded to a workstation for spreadsheet analysis.
Mainframes provide the stable, reliable environment that is desirable and necessary for production execution. This same stability is the bane of developers who require rapid changes to their test environments. The workstation environment is preferable
because it is personal and responds to the user's priorities. Developers can make changes at their own pace and then deal with the mainframe bureaucracy if and when the application goes into production in the mainframe environment.
Many users typically run applications on the mainframe. Changes made to such applications affect all their users. In some instances, the entire mainframe may be unavailable during the implementation of a new application. Network reconfiguration,
database utilities, application definition, and system software maintenance all can impact users beyond those specifically involved in a change. It is awkward to migrate only a portion of the users from the previous implementation to the new one.
Typically, it is all or none of the users who must upgrade. This change process requires thorough and all-encompassing tests and careful control over the move to production.
The client/server environment provides more flexibility for phased implementation of the new production environment. The application is replicated at many different locations so the users may implement the new software individually rather than all at
once. This environment adds the additional and significant complication of multiple updates. New products are now available from vendors such as Synchrony, Hewlett-Packard, and IBM that automate and control this function.
Workgroup client/server applications frequently are used by only a few users. These users can be directly supported by the developer immediately after implementation. Corrections can be made and reimplemented more readily. This is not to suggest that in the
client/server world change and production control procedures are not necessary, only that they can be less onerous for workgroup applications. Remote LAN management will be required for enterprise applications implemented throughout the corporation. Only
in this way will support equivalent to that available today for host-based applications be available to remote client/server users.
1 Edelstein, Herbert A., "Database World Targets Next-Generation Problems," Software Magazine Vol. VII, No. 6 (May 1991), p. 81.
2 IBM Santa Teresa laboratory meetings, 1990-1991.
3 Gary Pollreis of Systemhouse, in frustration after a day spent with first-time GUI designers and developers.