
— 6 —
Client/Server Systems Development

Executive Summary

If the selling price of automobiles had kept pace with the selling price of computer hardware, in 1992 dollars a Geo would sell for $500. If telephone operators' productivity had improved only as fast as systems development productivity, 60 percent of the adult U.S. population would need to work as telephone operators to handle today's volume of calls, compared with the volume of the 1920s.

An Index Group survey found that up to 90 percent of information technology (IT) departments' budgets are spent maintaining and enhancing existing systems.1 This maintenance and enhancement continues to be done using old, inefficient, and undisciplined processes and technology. Figure 6.1 documents the change in maintenance effort measured in Fortune 1000 companies from the 1970s until today. As the number of installed systems increases, organizations find more of their efforts being invested in maintenance. Ed Yourdon claims that the worldwide software asset base is in excess of 150 billion lines of code. Most of this code was developed in the 1960s and 1970s with older technologies. Thus, this code is unstructured and undocumented, leading to what the Gartner Group is calling the "Maintenance Crisis." We simply must find more effective ways to maintain systems.

Figure 6.1. Percentage of IS budgets dedicated to maintenance.

Business Process Reengineering (BPR) techniques help organizations achieve competitive advantage through substantive improvements in quality, customer service and costs. BPR must be aligned with technology strategy to be effective. Organizations must use technology to enable the business change defined by the BPR effort. In too many organizations technology is inhibiting change. Many CIOs are finding that their careers are much shortened when they discover that the business strategy identified by their organization cannot be realized because the technical architecture employed lacks the openness to support the change.

Senior executives look for new applications of technology to achieve business benefit. New applications must be built, installed, and made operational to achieve the benefits. Expenses incurred in maintenance and enhancement are not perceived to produce value. Yet, most measurements show that 66 percent of the cost of a system is incurred after its initial production release during the maintenance and enhancement phases. In this period of tight budgets it is increasingly difficult to explain and justify the massive ongoing investment in maintenance of systems that do not meet the current need.

Figure 6.2 illustrates how demand for new systems is increasing as technology costs decline and performance improves. Our challenge is to change the expenditures from ongoing maintenance to new development. Buying off-the-shelf application solutions frequently will meet the need. However, unless the packaged solution perfectly matches the needs of the organization, additional and expensive maintenance will be required to modify the package to make it fit.

Figure 6.2. Systems development demand.

Clearly, the solution is to design and build systems within a systems development environment (SDE). Applications and systems within an SDE are built to be maintained and enhanced. The flexibility to accept enhancements is inherent in the design. A methodology defines the process to complete a function. The use of a systems integration life cycle methodology ensures that the process considers the ramifications of all decisions made from business problem identification through and including maintenance and operation of the resultant systems. The changes implied by BPR and the movement from mainframe-centered development to client/server technology require that you adopt a methodology that considers organizational transformation. Object-oriented technologies (OOTs) can now be used to define the necessary methodology and development environment to dramatically improve our ability to use technology effectively.

With effective use of OO technologies, productivity improvements of 10:1 are being measured. Systems are being built with error rates one-third those of traditionally developed systems. The creation and reuse of objects supports the enterprise on the desk through the reuse of standard technology to support the user and developer. OO technology allows business specialists to work as developers, assembling applications by reusing objects previously constructed by more technical developers.

Factors Driving Demand for Application Software Development

Strategic planning, development, and follow-on support for applications software is a vital, albeit expensive, process that may yield enormous benefits in terms of cost savings, time to market for new products, customer satisfaction, and so on. There are opportunities to influence and compress application development planning time through the use of an existing enterprise-wide architecture strategy or the adoption of a transformational outsourcing strategy. BPR and total quality management (TQM) programs demand software development and enhancements. A competitive market insists that companies demonstrate their value to a skeptical buyer by increasing the value of products and services.

Rising Technology Staff Costs

Coincident with the increasing demand for systems development, enrollment in university-level technology programs is declining, and the pool of available technical talent is shrinking relative to the exploding demand. As a result, technology personnel costs are rising much faster than inflation. In 1994, we see a 22-percent increase in demand for computer technologists. Many organizations find that technology professionals, in whom much organization-specific application and technology knowledge has been invested, change jobs every three to five years. This multiplies the burden of reinvestment and retraining in organizations that are struggling to reduce costs. If organizations are to maximize their return on technology investments, they must develop a continuous learning program to ensure reuse of training programs, standard development procedures, developer tools, and interfaces built for other systems.

Pressure to Build Competitive Advantage by Delivering Systems Faster

There is tremendous pressure on organizations to take advantage of new technology to build competitive advantage. This can be most easily accomplished by bringing innovative service offerings to market sooner than a competitor does. In most cases, new service offerings are required just to keep pace with competitors. The application backlog is horrific. Studies show that 80 to 90 percent of the traditional host-based MIS shop's staff time is devoted to maintaining or enhancing existing—often technically obsolete—applications. Some portion of the relatively small amount of time remaining is available for development of new applications.

For many organizations, implementing systems that not only increase efficiency and effectiveness but also transform fundamental processes to create a competitive advantage is absolutely essential to survival. For many companies, the prospects of global competition and uncertain recessionary times add fuel to the fire to succeed. Companies that cannot find inventive ways to refine their business process and streamline the value chain quickly will fall behind companies that can.

Need to Improve Technology Professionals' Productivity

The Index Group reports that computer-aided software engineering (CASE) and other technologies that speed software development are cited by 70 percent of the top IT executives surveyed as the most critical technologies to implement. The CASE market is growing at a rate of 30 percent per year, and Index's estimates predict it will be a $5 billion market by 1995, doubling from 1990 figures.

This new breed of software tools helps organizations respond more quickly by cutting the time it takes to create new applications and making them simpler to modify or maintain. Old methods, blindly automating existing manual procedures, can hasten a company's death knell. Companies need new, innovative mission-critical systems to be built quickly, with a highly productive, committed professional staff partnered with end-users during the requirements, design, and construction phases. The client/server development model provides the means to develop horizontal prototypes of an application as it is designed. The user will be encouraged to think carefully about the implications of design elements. The visual presentation through the workstation is much more real than the paper representation of traditional methods.

Yourdon reports that less than 20 percent of development shops in North America have a methodology of any kind, and an even lower percentage actually use the methodology they have. Input Research reports that internally developed systems are delivered on time and within budget about 1 percent of the time. By comparison, systems outsourced to systems integration professionals who use high-productivity environments are delivered on time and within budget about 66 percent of the time.

The use of a proven, formal methodology significantly increases the likelihood of building systems that satisfy the business need and are completed within their budgets and schedules. Yourdon estimates that 50 percent of errors in a final system and 75 percent of the cost of error removal can be traced back to errors in the analysis phase. CASE tools and development methodologies that define systems requirements iteratively with high and early user involvement have been proven to significantly reduce analysis phase errors.
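Yourdon's two figures combine into a simple expected-savings estimate. The sketch below is illustrative only; the 60-percent reduction in analysis errors is an assumed effect of iterative, high-involvement requirements work, not a number from the text.

```python
# Yourdon: 75 percent of error-removal cost traces back to analysis errors.
analysis_cost_share = 0.75

# Assumed (hypothetical) effect of iterative requirements definition:
# analysis-phase errors cut by 60 percent.
reduction_in_analysis_errors = 0.60

# Expected reduction in total error-removal cost.
savings = analysis_cost_share * reduction_in_analysis_errors
print(f"{savings:.0%}")  # prints 45%
```

Even under much weaker assumptions about the error reduction, the arithmetic shows why money spent getting analysis right is recovered many times over downstream.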

Need for Platform Migration and Reengineering of Existing Systems

Older and existing applications are being rigorously reevaluated and in some cases terminated when they don't pay off. A 15-percent drop in proprietary technology expenditures was measured in 1993, and this trend will continue as organizations move to open systems and workstation technology. BPR attempts to reduce business process cost and complexity by moving decision-making responsibility to those individuals who first encounter the customer or problem. Organizations are using the client/server model to bring information to the workplace of empowered employees.

The life of an application tends to be 5 to 15 years, whereas the life of a technology is much shorter—usually one to three years. Tremendous advances can be made by reengineering existing applications and preserving the rule base refined over the years while taking advantage of the orders-of-magnitude improvements that can be achieved using new technologies.

Need for a Common Interface Across Platforms

Graphical user interfaces (GUIs) that permit a similar look and feel and front-end applications that integrate disparate applications are on the rise.

A 1991 Information Week survey of 157 IT executives revealed that, as a purchasing criterion for software, ease of use through a common user interface across all platforms is twice as important as the next most important criterion. This is the single-system image concept.

Of prime importance to the single-system image concept is that every user from every workstation have access to every application for which they have a need and right without regard to or awareness of the technology.

Developers should be equally removed from and unconcerned with these components. Development tools and APIs isolate the platform specifics from the developer. When the single-system image is provided, it is possible to treat the underlying technology platforms as a commodity to be acquired on the basis of price and performance, without concern for specific compatibility with the existing application. Hardware, operating systems, database engines, communication protocols—all these must be invisible to the application developer.
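The isolation described above can be sketched as a thin interface layer. This fragment is illustrative only; the class and method names are invented, not drawn from any product mentioned in this chapter.

```python
from abc import ABC, abstractmethod

class DataAccess(ABC):
    """Application-facing interface; hides the specific database engine."""
    @abstractmethod
    def find_customer(self, customer_id: str) -> dict: ...

class InMemoryEngine(DataAccess):
    """Stand-in engine; a production version would wrap Oracle, DB2, and so on."""
    def __init__(self, rows):
        self._rows = rows
    def find_customer(self, customer_id):
        return self._rows[customer_id]

def format_customer(store: DataAccess, customer_id: str) -> str:
    # Application code sees only DataAccess; the engine underneath
    # can be swapped on price and performance alone.
    row = store.find_customer(customer_id)
    return f"{row['name']} ({customer_id})"

store = InMemoryEngine({"C1": {"name": "Acme Corp"}})
print(format_customer(store, "C1"))  # prints Acme Corp (C1)
```

Replacing InMemoryEngine with any other DataAccess implementation requires no change to format_customer, which is the point of treating the platform as a commodity.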

Increase in Applications Development by Users

As workstation power grows and dollars-per-MIPS fall, more power is moving into the hands of the end user. The Index Group reports that end users are now doing more than one-third of application development; IT departments are functioning more like a utility. This is the result of IT department staff feeling the squeeze of maintenance projects that prevent programmers from meeting critical backlog demand for new development.

This trend toward application development by end users will create disasters without a consistent, disciplined approach that insulates the developer from the underlying components of the technology. Without such insulation, end-user application developers must understand the intricacies of languages and interfaces.

Object-oriented technologies embedded in an SDE have regularly been shown to produce new-development productivity gains of 2 to 1 and maintenance productivity improvements of 5 to 1 over traditional methods—for example, process-driven or data-driven design and development. More recently, mature OO SDEs with a strong focus on object reusability are achieving productivity gains of 10 to 1 over traditional techniques.

Production-capable technologies are now available to support the development of client/server applications. The temptation and normal practice is to have technical staff read the trade press and select the best products from each category, assuming that they will combine to provide the necessary development environment. In fact, this almost never works. When products are not selected with a view as to how they will work together, they do not work together.

Thus, the best Online Transaction Processing (OLTP) package may not support your best database. Your security requirements may not be met by any of your tools. Your applications may perform well yet take forever to change. Organizations must architect an environment that takes into account their particular priorities and the suite of products being selected. Such an environment gives the selected tools the opportunity to succeed.

An enterprise-wide architecture strategy must be created to define the business vision and determine a transformation strategy to move from the current situation to the vision. This requires a clear understanding of industry standards, trends, and vendor priorities. By combining the particular business requirements with industry direction, it is possible to develop a clear strategy to use technology to enable the business change. Without this architecture strategy, decisions will be made in a vacuum, with little business input and usually little clear insight into technology direction.

The next and necessary step is to determine how the tools will be used within your organization. This step involves the creation of your SDE. Without the integration of an SDE methodology, organizations will be unable to achieve the benefits of client/server computing. Discipline and standards are essential to create platform-independent systems. With the uncertainty over which technologies will survive as standards, the isolation of applications from their computing platforms is an essential insurance policy.

Client/Server Systems Development Methodology

The purpose of a methodology is to describe a disciplined process through which technology can be applied to achieve the business objectives. Methodology should describe the processes involved through the entire life cycle, from BPR and systems planning through and including maintenance of systems in production. Most major systems integrators and many large in-house MIS groups have their own life cycle management methodology. Andersen Consulting, for example, has its Foundation, BSG has its Blueprint, and SHL Systemhouse has its own SHL Transform—the list goes on and on. These companies offer methodologies tuned for the client/server computing environment. However, every methodology has its own strengths, which are important to understand as part of the systems integration vendor selection process.

Figure 6.3 shows the processes in a typical systems integration life cycle. It is necessary to understand and adhere to the flow of information through the life cycle. This flow allows the creation and maintenance of the systems encyclopedia or electronic repository of data definitions, relationships, revision information, and so on. This is the location of the data models of all systems. The methodology includes a strict project management discipline that describes the deliverables expected from each stage of the life cycle. These deliverables ensure that the models are built and maintained. In conjunction with CASE tools, each application is built from the specifications in the model and in turn maintains the model's where-used and how-used relationships.
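The where-used and how-used relationships the repository maintains can be pictured with a minimal sketch; the element and application names below are hypothetical, and a real encyclopedia would of course hold far richer revision and relationship data.

```python
class Encyclopedia:
    """Minimal repository: data definitions plus where-used links."""
    def __init__(self):
        self.definitions = {}  # element name -> description
        self.where_used = {}   # element name -> set of application names

    def define(self, name, description):
        self.definitions[name] = description
        self.where_used.setdefault(name, set())

    def record_use(self, name, application):
        if name not in self.definitions:
            raise KeyError(f"undefined element: {name}")
        self.where_used[name].add(application)

    def impact_of_change(self, name):
        """Applications that must be reviewed if this element changes."""
        return sorted(self.where_used.get(name, set()))

repo = Encyclopedia()
repo.define("customer_id", "unique customer key")
repo.record_use("customer_id", "billing")
repo.record_use("customer_id", "order-entry")
print(repo.impact_of_change("customer_id"))  # prints ['billing', 'order-entry']
```

It is exactly this impact-of-change query that makes maintenance within an SDE cheaper: the repository, not a person's memory, knows where every definition is used.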

Table 6.1 details the major activities of each stage of the systems integration life cycle methodology. No activity is complete without the production of a formal deliverable that documents, for user signoff, the understanding gained at that stage. The last deliverable from each stage is the plan for the next stage.

Figure 6.3. Systems integration life cycle.

Table 6.1. SILC phases and their major activities.

Systems Planning
    Initiate systems planning
    Gather data
    Identify current situation
    Describe existing systems
    Define requirements
    Analyze applications and data architectures
    Analyze technology platforms
    Prepare implementation plan

Project Initiation
    Screen request
    Identify relationship to long-range systems plan
    Initiate project
    Prepare plan for next phase

Architecture Definition
    Gather data
    Expand the requirements to the next level of detail
    Conceptualize alternative solutions
    Develop proposed conceptual architecture
    Select specific products and vendors

Analysis
    Gather data
    Develop a logical model of the new application system
    Define general information system requirements
    Prepare external system design

Design
    Perform preliminary design
    Perform detailed design
    Design system test
    Design user aids
    Design conversion system

Development
    Set up the development environment
    Code modules
    Develop user aids
    Conduct system test

Facilities Engineering
    Gather data
    Conduct site survey
    Document facility requirements
    Design data center
    Plan site preparation
    Prepare site
    Plan hardware installation
    Install and test hardware

Implementation
    Develop contingency procedures
    Develop maintenance and release procedures
    Train system users
    Ensure that production environment is ready
    Convert existing data
    Install application system
    Support acceptance test
    Provide warranty support

Post-Implementation Support
    Initiate support and maintenance
    Support hardware and communication configuration
    Support software
    Perform other project completion tasks as appropriate

Project Management

Many factors contribute to a project's success. One of the most essential is establishing an effective project control and reporting system. Sound project control practices not only increase the likelihood of achieving planned project goals but also promote a working environment where the morale is high and the concentration is intense. This is particularly critical today when technology is so fluid and the need for isolating the developer from the specific technology is so significant.

The objectives of effective project management are to

  1. Plan the project:

    Define project scope

    Define deliverables

    Enforce methodology

    Identify tasks and estimates

    Establish project organization and staffing

    Document assumptions

    Identify client responsibilities

    Define acceptance criteria

    Define requirements for internal quality assurance review

    Determine project schedules and milestones

    Document costs and payment terms

  2. Manage and control project execution:

    Maintain personal commitment

    Establish regular status reporting

    Monitor project against approved milestones

    Follow established decision and change request procedures

    Log and follow up on problems

  3. Complete the project:

    Establish clear, unambiguous acceptance criteria

    Deliver a high-quality product consistent with approved criteria

    Obtain clear acceptance of the product
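Monitoring the project against approved milestones, as item 2 requires, reduces to a simple check of planned dates against completion status. The milestone names and dates below are invented for illustration.

```python
from datetime import date

def slipped_milestones(milestones, today):
    """Milestones past their planned date and not yet complete."""
    return [name for name, (planned, done) in milestones.items()
            if not done and planned < today]

# Hypothetical project plan: milestone -> (planned date, completed?).
milestones = {
    "Architecture signoff": (date(1994, 3, 1), True),
    "External design":      (date(1994, 5, 1), False),
    "System test":          (date(1994, 9, 1), False),
}
print(slipped_milestones(milestones, date(1994, 6, 15)))  # prints ['External design']
```

Regular status reporting is just this check run on a schedule, with the established change request procedure invoked for every milestone the check flags.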

New technologies such as client/server place a heavy burden on the architecture definition phase. The lack of experience in building client/server solutions, combined with the new paradigm experienced by the user community, leads to considerable prototyping of applications. These factors will cause rethinking of the architecture. Such a step is reasonable and appropriate with today's technology. The tools for prototyping on the client/server platform are powerful enough that prototyping is frequently faster at determining user requirements than traditional modeling techniques.

When an acceptable prototype is built, this information is reverse engineered into the CASE tool's repository. Bachman Information Systems' CASE products are among the more powerful tools available to facilitate this process.

Architecture Definition

The purpose of the architecture definition phase in the methodology is to define the application architecture and select the technology platform for the application. To select the application architecture wisely, you must base the choice on an evaluation of the business priorities. Your organization must consider and weight the following criteria:

These application architecture issues must be carefully evaluated and weighed from a business perspective. Only after completing this process can managers legitimately review the technical architecture options. They must be able to justify the technology selection by the way it supports the business priorities. Figure 6.4 illustrates the conundrum we face as we move from application architecture to technical architecture. There is always a desire to manage risk and a corresponding desire to use the best technology. A balance must be found between the two extremes: selecting something that fits the budget and is known to work versus selecting the newest, best, and unproven option. Cost is always a consideration.
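Weighing these issues from a business perspective amounts to a weighted scoring exercise. The criteria, weights, and ratings below are assumed purely for illustration; a real evaluation would use the organization's own criteria list.

```python
def score(ratings, weights):
    """Weighted sum of criterion ratings (0-10) for one option."""
    return sum(weights[c] * r for c, r in ratings.items())

# Hypothetical business priorities, expressed as weights summing to 1.
weights = {"risk": 0.4, "cost": 0.3, "openness": 0.3}

# Hypothetical architecture options rated against each criterion.
options = {
    "proven platform": {"risk": 9, "cost": 7, "openness": 5},
    "newest platform": {"risk": 4, "cost": 5, "openness": 9},
}

ranked = sorted(options, key=lambda o: score(options[o], weights), reverse=True)
print(ranked[0])  # prints proven platform
```

The value of the exercise is less the final score than the forced, explicit statement of business priorities before any product name enters the discussion.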

Figure 6.4. The objectives of an architecture.

Once managers understand the application architecture issues, it becomes appropriate to evaluate the technical architecture options. Notice that staff are not yet selecting products, only architectural features. It is important to avoid selecting a product before understanding the baseline requirements.

The following is a representative set of technical architecture choices:

Figure 6.5 illustrates the layering of technical architecture and applications architecture. One should not drive the other. It is unrealistic to assume that the application architects can ignore the technical platform, but they should understand the business priorities and work to see that these are achieved. Interfaces must isolate the technical platform from the application developers. These interfaces offer the assurance that changes can be made in the platform without affecting functioning at the application layer.

Figure 6.5. Components of the technical and applications architectures.

With the technical architecture well defined and the application architecture available for reference, you're prepared to evaluate the product options. The selection of the technology platform is an important step in building the SDE. There will be ongoing temptation and pressure to select only the "best products." However, the classification of "best product in the market," as evaluated in the narrow perspective of its features versus those of other products in a category, is irrelevant for a particular organization. Only by evaluating products in light of the application and technical architecture in concert with all the products to be used together can you select the best product for your organization.

Figure 6.6 details the categories to be used in selecting a technology platform for client/server applications. Architectures and platforms should be defined at the organizational level. There is no reason to constantly reevaluate platform choices. There is tremendous benefit in developing expertise in a well-chosen platform and getting repeated benefit from reusing existing development work.

Figure 6.6. Building the technology platform.

Systems Development Environment

Once your organization has defined its application and technical architectures and selected its tools, the next step is to define how you'll use these tools. Developers do not become effective system builders because they have a good set of tools; they become effective because their development environment defines how to use the tools well.

An SDE comprises hardware, software, interfaces, standards, procedures, and training that are selected and used by an enterprise to optimize its information systems support to strategic planning, management, and operations.

IBM defined its SDE in terms of an application development cycle, represented by a product line it called AD/Cycle, illustrated in Figure 6.7. Another way of looking at the SDE is illustrated in Figures 6.8 and 6.9. The SDE must encompass all phases of the systems development life cycle and must be integrated with the desktop. The desktop provides powerful additional tools for workstation users to become self-sufficient in many aspects of their information-gathering needs.

Figure 6.7. IBM AD/Cycle model.

Figure 6.8. An SDE architecture.

Figure 6.9. An office systems architecture.

The most significant advantages are obtained from an SDE when a conscious effort is made to build reusable components. These are functions that will be used in many applications and will therefore improve productivity. Appendix A's case studies illustrate the benefits of projects built within the structure of an SDE. With the uncertainty surrounding product selection for client/server applications today, the benefits of using an SDE to isolate the developers from the technology are even more significant. These technologies will evolve, and we can build applications that are isolated from many of the changes. The following components should be included in any SDE established by an organization:

Every platform includes a set of services that are provided by the tools. This is particularly true in the client/server model, because many of the tools are new and take advantage of object-oriented development concepts. It is essential for an effective SDE to use these facilities rather than redevelop them for reasons of elegance or ego.
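A reusable component can be as modest as one shared, already-tested routine that every application calls rather than rewrites. The sketch below is illustrative; the accepted date formats are assumptions, standing in for whatever the enterprise standardizes on.

```python
from datetime import datetime

def parse_date(text):
    """Shared component: accept the formats the enterprise has standardized on."""
    for fmt in ("%Y-%m-%d", "%d/%m/%Y"):
        try:
            return datetime.strptime(text, fmt).date()
        except ValueError:
            continue
    raise ValueError(f"unrecognized date: {text!r}")

# Two different "applications" reuse the same component, so a bug fix or a
# newly supported format is made once and inherited everywhere.
billing_due = parse_date("1994-06-30")
order_date = parse_date("30/06/1994")
print(billing_due == order_date)  # prints True
```

Multiply this by hundreds of such routines and the productivity and error-rate figures quoted earlier for SDE-based development become plausible.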

Figure 6.10 illustrates the development environment architecture built for a project using Natural 4GL from Software AG. Software AG has successfully ported its Natural product from a mainframe-only environment to the workstation, where it can be used as part of a client/server architecture.

Figure 6.10. Software AG's Natural architecture.

The ACTS example shown in Appendix A uses this SDE architecture with Easel and Telon. Users and developers can move between these environments with minimal difficulty because there is such a high degree of commonality in the look and feel and in the services provided. Development within the justice application (of which ACTS is a part) included the Software AG products, Easel, and Telon. The same developers were productive throughout because of the common architecture. This occurred despite the fact that portions of the application were traditional mainframe, portions were mixed workstation-to-mainframe programs, and portions were pure client/server.

The advantages of building an SDE and including these types of components are most evident in the following areas:

Productivity Measures

It is difficult to accurately quantify productivity gains obtained by using one method versus another, because developers are not willing to build systems twice with two different teams with the same skill set. However, a limited number of studies have been done estimating the expected cost of developing and maintaining systems without a formal SDE compared to the actual results measured with an SDE. One such analysis studied U.S. competitiveness. The researchers determined that, on average, a Japanese development team produces 170 percent of the debugged lines of code per year that a U.S. development team does. Japanese literature describes the Japanese approach to building systems as very consistent with the SDE approach described here. The necessity for Japanese developers to deal with U.S. software and a Japanese script language user interface has taught them the value of software layers. This led naturally to the development of reusable software components. Measurements by the researchers of errors in systems developed by Japanese and United States development teams showed that the Japanese had only 44 percent of the errors measured in the U.S. code.
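The study's two ratios can be restated as a single defect-density comparison. The baseline numbers below are arbitrary; only the 170-percent and 44-percent figures come from the study cited above.

```python
# Arbitrary U.S. baseline: debugged lines of code and errors per team-year.
us_loc, us_errors = 100_000, 1_000

jp_loc = us_loc * 1.70        # Japanese teams: 170% of the debugged lines ...
jp_errors = us_errors * 0.44  # ... with only 44% of the errors

us_density = us_errors / (us_loc / 1000)  # errors per thousand lines
jp_density = jp_errors / (jp_loc / 1000)

# Relative defect density: roughly a quarter of the U.S. rate.
print(round(jp_density / us_density, 2))  # prints 0.26
```

Whatever baseline is assumed, the ratio of the two densities is fixed at 0.44/1.70, which is what makes the comparison meaningful despite the arbitrary inputs.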

Japanese developers work in a disciplined style that emphasizes developing to standards and reuse of common components. Our experience with SDE-based development is showing a 100-percent productivity improvement for lines of debugged source code per work year for new development and a 400-percent productivity increase for maintenance of existing systems. It's easy to understand the new code improvement rate from the facts noted earlier, but it is not as clear why the maintenance improvement is so great.

A significant reason for better productivity appears to be the reduction in testing effort that results from fewer errors. It is difficult to make changes to a production application. The cost and effort involved in changing production code is dramatically greater than changes to a test system. Developers and testers are careful about changes to production products. If you eliminate half the errors, you not only have happier users but also a substantial reduction in effort to correct the problems. The ability to make global changes and the reduction in complexity that comes from the familiar environment also improve maintenance productivity.


CASE tools are built on an "enterprise model" of the processes to be automated; that is, systems integration and software development. This underlying enterprise model or "metamodel" used by CASE is crucial to the tool's usefulness. Tools based on a poor model suffer from poor integration, are unable to handle specific types of information, require duplicate data entry, cannot support multiple analyst-developer teams, and are not flexible enough to handle evolving new techniques for specifying and building systems solutions. Tools with inadequate models limit their users' development capabilities.

All the leading CASE products operate and are used in a client/server environment. Intel 486-based workstations operating at 50 MHz or faster, with 16 to 24 Mbytes of memory and 250-Mbyte hard disks, or UNIX workstations of similar size, are typically required. Thus, combining hardware and CASE software costs, CASE costs up to $20,000 per user workstation/terminal.

Unfortunately, a thorough review of the available CASE products shows that none adequately provide explicit support for development of client/server applications and GUIs. This lack of support occurs despite the fact that they may operate as network-based applications that support development of host-based applications. There is considerable momentum to develop products that support the client/server model. The Bachman tools are in the forefront in this area because of their focus on support for business process reengineering. With many client/server applications being ported from a minicomputer or mainframe, the abilities to reuse the existing models and to reverse engineer the databases are extremely powerful and time-saving features.

It seems likely that no single vendor will develop the best integrated tool for the entire systems life cycle. Instead, in the probable scenario, developers mix the best products from several vendors. This scenario is envisioned by IBM in its AD/Cycle product line, by Computer Associates in its CA90 products, and by NCR in its Open Cooperative Computing series of products.

As an example, an organization may select Bachman, which provides the best reengineering and reusability components and the only true enterprise model for building systems solutions for its needs. This model works effectively in the LAN environment and supports object-oriented reuse of specifications. The organization then integrates the Bachman tools with ParcPlace's Parts product for Smalltalk code generation for Windows, UNIX, or OS/2 desktop and server applications, and with Oracle for code generation in the UNIX, OS/2, and Windows NT target environments. The visual development environments of these products provide the screen painting, business logic relationship, and prototyping facilities necessary for effective systems development.

A more revolutionary development is occurring as CASE tools like the Bachman products are integrated with development tools from other vendors. These development tools, used within an SDE, allow applications to be prototyped and then reengineered back into the CASE tool to create process and data models. With the power of GUI-based development environments to create and demonstrate application look and feel, the prototyping approach to rapid application development (RAD) is the only cost-effective way to build client/server applications today.

Users familiar with the ease of application development on the workstation will not accept paper or visual models of their application. They can fully visualize the solution model only when they can touch and feel it. This is the advantage of prototyping, which provides a real "touch and feel." Except in the earliest stages of solution conceptualization, prototypes must be created using the same products that are to be used for production development.

Not all products that fall into the CASE category are equally effective. For example, some experts claim that the information engineering products—such as Texas Instruments' IEF—attempt to be all things to all people. The criticism is that such products are constrained by their need to generate code efficiently from their models. As a result, they are inflexible in their approach to systems development, have primitive underlying enterprise models, may require a mainframe repository, perform poorly in a team environment, and provide a physical approach to analysis that is constrained by the supported target technologies (CICS/DB2 and, to a lesser extent, Oracle). Critics argue that prototyping with this class of tool requires developers to model an unreasonable amount of detail before they can present the prototype.

Object-Oriented Programming (OOP)

OOP is a disciplined programming style that incorporates three key characteristics: encapsulation, inheritance, and dynamic binding. These characteristics differentiate OOP from traditional structured programming models, in which data has a type and a structure, is distinct from the program code, and is processed sequentially. OOP builds on the concepts of reuse through the development and maintenance of class libraries of objects available for use in building and maintaining applications.

Object-oriented programming is most effective when reusable components can be cut and pasted to create a skeleton application, into which the custom business logic for the function is then embedded. It is essential that the standard components use dynamic binding, so that a change to a common component is picked up by all applications in the environment. This is one of the major maintenance productivity advantages of the approach.

Certain programming languages are defined to be object-oriented; C++, Objective C, Smalltalk, MacApp, and Actor are examples. With proper discipline within an SDE, it is possible to gain many of the advantages of these languages within the more familiar environments of COBOL and C. Because the state of development experience in the client/server world is immature, it is imperative for organizations to adopt the discipline of OOP to facilitate the reuse of common functions and to take advantage of the flexibility of global changes to common functions.
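To make the three characteristics concrete, here is a minimal C++ sketch; the Account and SavingsAccount classes and the fee amounts are hypothetical illustrations, not drawn from the text. Encapsulation hides the balance behind methods, inheritance lets SavingsAccount reuse Account, and dynamic binding lets code written against the base class pick up each subclass's fee policy at run time—so a changed policy applies everywhere without touching the callers:

```cpp
#include <cassert>

// Encapsulation: the balance is private; access only through methods.
class Account {
public:
    explicit Account(long cents) : balance_(cents) {}
    virtual ~Account() {}
    long balance() const { return balance_; }
    void deposit(long cents) { balance_ += cents; }
    // Dynamic binding: subclasses supply their own fee policy, and every
    // caller picks up the change without being rewritten.
    virtual long monthlyFee() const { return 100; }  // hypothetical flat fee, in cents
private:
    long balance_;
};

// Inheritance: a savings account reuses Account and overrides one policy.
class SavingsAccount : public Account {
public:
    explicit SavingsAccount(long cents) : Account(cents) {}
    long monthlyFee() const { return 0; }  // savings accounts waive the fee
};

// Common code written once against the base class works for all subclasses.
long chargeFee(Account& a) {
    long fee = a.monthlyFee();   // resolved at run time (dynamic binding)
    a.deposit(-fee);
    return fee;
}
```

Because chargeFee depends only on the Account interface, a new account type added later is charged correctly with no change to this routine—the maintenance advantage the text describes.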

Objects are easily reused, in part because the interface to them is so plainly defined and in part because of the concept of inheritance. A new object can inherit characteristics of an existing object "type." You don't have to reinvent the wheel; you can just inherit the concept. Inheritance gives a concise and precise description of the world and helps code reusability, because every program is at the level in the "type hierarchy" at which the largest number of objects can share it. The resulting code is easier to maintain, extend, and reuse.

A significant new component of object-oriented development is the capability to use server objects through RPC requests. During 1994, the introduction of CORBA-compliant object stores will dramatically open the client/server paradigm to the "anything anywhere" dimension. Objects will be built and stored on an arbitrary server for use by any client or server anywhere. The earliest implementations of this model are provided by NeXT with its Portable Distributed Objects (PDO) and by Sun with its Distributed Objects Everywhere (DOE) architecture.

And what about the object-oriented database management system (OODBMS)? It combines the major object-oriented programming concepts of data abstraction, encapsulation, and type hierarchies with the database concepts of storage management, sharing, reliability, consistency, and associative retrieval.

When is an OODBMS needed, and when will an extended relational database management system (DBMS) do? Conventional database management products perform very well for many kinds of applications. They excel at processing large amounts of homogeneous data, such as monthly credit card billings. They are good for high-transaction-rate applications, such as ATM networks. Relational database systems provide good support for ad hoc queries in which the user declares what to retrieve from the database as opposed to how to retrieve it.

As we traverse the 1990s, however, database management systems are being called on to provide a higher level of database management. No longer will databases manage data; they must manage information and be the knowledge centers of the enterprise. To accomplish this, the database must be extended to

Many RDBMS products already handle binary large objects (BLOBs) in a single field of a relation. Many applications use this capability to store and provide SQL-based retrieval of digital laboratory data, images, text, and compound documents. Digital's Application Driven Database Systems (ADDS) have been established to enable its SQL to handle these complex and abstract data types more explicitly and efficiently.

But applications that require database system support are quickly extending beyond such traditional data processing into computer-aided design (CAD) and CASE, sophisticated office automation, and artificial intelligence. These applications have complex data structuring needs, significantly different data accessing patterns, and special performance requirements. Conventional programming methodologies are not necessarily appropriate for these applications and conventional data management systems may not be appropriate for managing their data.

Consider for a moment the factors involved in processing data in applications such as CAD, CASE, or advanced office automation generally. The design data in a mechanical or electrical CAD database is heterogeneous: it consists of complex relationships among many types of data. The transactions in a CASE system don't lend themselves to transaction-per-second measurement; a transaction can take hours or even days. Office automation applications deal with a hierarchical structure of paragraphs, sentences, words, characters, and character attributes, along with page position and graphical images. Database access for these applications typically follows a directed graph structure rather than the kind of ad hoc query that can be supported in SQL. Each object contains within its description references to many other objects and elements, which the object manager automatically collects to provide the total view. In typical SQL applications, by contrast, the developer makes explicit requests for related information.
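The navigational access pattern described above can be sketched in C++; the Part structure and the example assembly are hypothetical. Each object holds direct references to related objects, so expanding a design is a pointer walk rather than a series of ad hoc queries:

```cpp
#include <string>
#include <vector>

// Hypothetical CAD assembly: each part holds direct references to its
// subparts, forming a directed graph of objects.
struct Part {
    std::string name;
    std::vector<Part*> subparts;
};

// Count every part reachable from the root by following references --
// a navigational traversal, not a join. (Assumes a tree: no part is
// shared between assemblies, so nothing is counted twice.)
int countParts(const Part* p) {
    int n = 1;
    for (size_t i = 0; i < p->subparts.size(); ++i)
        n += countParts(p->subparts[i]);
    return n;
}
```

In a relational representation the same traversal would require a self-join per level of the assembly, which is exactly the mismatch the following paragraphs describe.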

In trying to manipulate such complex data using a relational system, a programmer writes code to map extremely complex in-memory data structures onto lower-level relational structures using awkward and resource-intensive recursive programming techniques. The programmer finds himself or herself doing database management instead of letting the DBMS handle it. Worse, even if the programmer manages to code the translation from in-memory objects to relational tables, performance is unacceptable.

Thus, relational systems have not been any help for the programmer faced with these complex coding tasks. The object-oriented programming paradigm, on the other hand, has proven extremely useful. The complex data structures CAD and CASE programmers deal with in memory are often defined in terms of C++ or Smalltalk objects.

It would be helpful if the programmer didn't have to worry about managing these objects, moving them from memory to disk, then back again when they're needed later. Some OOP systems provide this object "persistence" just by storing the memory image of objects to disk. But that solution only works for single-user applications. It doesn't deal with the important concerns of multiuser access, integrity, and associative recall.
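A minimal C++ sketch of that single-user style of persistence follows; the Point type and file name are hypothetical. The object's memory image is simply written to disk and read back verbatim, which works only for pointer-free objects in a single-user program and provides none of the locking, recovery, or associative recall discussed here:

```cpp
#include <cstdio>

// A plain, pointer-free object whose memory image can be dumped as-is.
struct Point { double x, y; };

// Persist the object by writing its raw memory image to a file.
bool save(const Point& p, const char* path) {
    FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    size_t n = std::fwrite(&p, sizeof p, 1, f);
    std::fclose(f);
    return n == 1;
}

// Restore the object by reading the image back into memory.
bool load(Point& p, const char* path) {
    FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    size_t n = std::fread(&p, sizeof p, 1, f);
    std::fclose(f);
    return n == 1;
}
```

The moment the object contains pointers, or two users open the file at once, this scheme breaks down—which is the gap a multiuser OODBMS is designed to fill.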

Persistence means that objects remain available from session to session. Reliability means automatic recovery in case of hardware or software failures. Sharability means that several users can access the data at the same time. All of these qualities may require systems larger than many currently available. In some cases, of course, programmers aren't dealing with overwhelmingly complex data, yet want to combine the increased productivity of object-oriented programming with the flexibility of an SQL DBMS. Relational technology has been extended to support binary large objects (BLOBs), text, image and compound documents, sound, video, graphics, animation, and abstract data types. As a result, organizations will be able to streamline paper-intensive operations to increase productivity and decrease business costs—assuming they use a database as a repository and manager for this data.

[footnote]1Index Group Survey, Fortune 1000, December 1990.
