Excerpts from

"The Encyclopedia of Networking"

by Tom Sheldon

Note: This is a small sampling of topics from the book and the CD-ROM. Also note that the CD-ROM that comes with the book is fully hyperlinked, meaning that you can quickly jump to any related topic on the CD-ROM or to any Internet link.


List of Sample Topics


Backbone Networks

Cellular Communication Systems

Circuit-Switching Services

Collaborative Computing

Communication Services

Component Software Technology

Connection Technologies


Document Management

File Systems

File Systems, Distributed

High-Speed Networking

Information Publishing


Internet Backbone

Internet Organizations and Committees

IP Switching

Middleware and Messaging

Mobile IP

Object Technologies

Packet and Cell Switching

Protocol Concepts

QoS (Quality of Service)

Standards Groups, Associations, and Organizations

Web Technologies and Concepts


Acknowledgments

An acknowledgment is a confirmation of receipt. When data is transmitted between two systems, the recipient can acknowledge that it received the data. Acknowledgments compensate for unreliable networks. However, acknowledgments can reduce performance on a network. If every packet sent requires an acknowledgment, then up to half of the network's throughput is used to confirm receipt of information.

Modern networks such as LANs and WANs are considered highly reliable. There is little need to acknowledge every packet, so acknowledgments are used for groups of packets or not at all. Unreliable networks still exist, however, especially if you are building WANs in developing countries or using wireless devices to transmit data. Acknowledgments were traditionally handled in the data link layer. This is extremely wasteful of bandwidth, but often there is little choice. X.25 packet-switching networks use this technique. The alternative is to acknowledge higher up in the protocol stack and remove error checking from the network so it can transmit bits as fast as it can. Transport layer services such as TCP provide reliability services in the TCP/IP protocol suite.
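The throughput penalty of per-packet acknowledgment can be sketched with a toy stop-and-wait simulation (the names, loss rate, and retry limit here are illustrative assumptions, not from the book):

```python
import random

def transmit(frames, loss_rate=0.3, max_retries=10, seed=42):
    """Simulate a stop-and-wait protocol: every frame must be
    acknowledged before the next one is sent, and any frame that is
    lost on the link is retransmitted after a timeout."""
    rng = random.Random(seed)
    delivered, transmissions = [], 0
    for frame in frames:
        for _ in range(max_retries):
            transmissions += 1
            if rng.random() > loss_rate:  # frame survived the link
                delivered.append(frame)
                break                      # ACK received; send next frame
        else:
            raise RuntimeError(f"frame {frame!r} lost {max_retries} times")
    return delivered, transmissions

data = ["pkt0", "pkt1", "pkt2", "pkt3"]
received, sent = transmit(data)
print(received == data, sent)  # all frames arrive; sent >= number of frames
```

Raising the loss rate drives the transmission count up sharply, which is exactly why per-packet acknowledgment is reserved for unreliable links and why reliable networks acknowledge groups of packets instead.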

RELATED ENTRIES (These entries are hyperlinked on the book's CD-ROM)

Connection Establishment; Connection-Oriented and Connectionless Services; Data Communication Concepts; Flow-Control Mechanisms; NAK (Negative Acknowledgment); and Transport Protocols and Services


Backbone Networks

The backbone is the most basic and predominant network topology. It is used to tie together diverse networks in the same building, in different buildings in a campus environment, or over wide areas. Backbones handle internetwork traffic. If the networks it ties together are departmental networks, then the backbone handles interdepartmental traffic. With the increasing use of Web technology, more traffic has been flowing across the backbone. Users click buttons on Web pages that call up documents on servers throughout the organization or on the Internet. This places more traffic on the backbone than ever before.

There are distributed backbones, which tie together distinct LANs, and there are collapsed backbones, which exist as wiring hubs and switches. The two topologies are illustrated in Figure B-1.

FIGURE B-1. Distributed and collapsed backbones

The distributed backbone on the left in Figure B-1 shows how the network (in this case, an FDDI ring) extends to each department or floor in a building. Each network is connected via a router to the backbone network. In the collapsed backbone shown on the right, a cable runs from each department (or floor) network to a central hub, usually located in a building wiring closet or possibly in the IS (information systems) department. The hub or switch becomes the backbone, moving traffic among the different networks.

The distributed backbone approach offers good availability: if one of the routers fails, the rest of the network is not affected. However, a packet on one LAN must make a minimum of two router hops to reach another LAN.

In the collapsed backbone, routing is centralized, minimizing hops between LANs. If the departmental devices and the backbone device are switching hubs, then performance can increase dramatically because VLANs (virtual LANs) can be created between any two workstations, eliminating router hops altogether and, in some cases, providing nonshared, collision-free circuits between workstations.

So far, our backbone has been limited to a single building. A backbone can link multiple networks in the campus environment or connect networks over wide area network links. These two approaches are pictured in Figure B-2.

FIGURE B-2. Campus and wide area backbones

The primary reason for a campus backbone is that it is impractical to use the collapsed backbone approach over an entire campus area. Running cable from each site back to a central site is costly, so the alternative is to install a single network such as an FDDI ring throughout the campus.

As for wide area networks, two approaches are possible. The private network approach is pictured on the right in Figure B-2. Dedicated leased lines are installed between each pair of sites, a costly proposition, especially if the sites are far from each other, because the cost of leased lines increases with distance.

A better approach to building wide area network backbones is discussed under "Frame Relay." Service providers most likely have access points into their frame relay networks near your offices if they are located in metropolitan areas. That means you make a short leased-line connection into the "frame relay" cloud, and the frame relay network takes care of routing packets to distant offices at a fraction of the cost of using dedicated leased lines.
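The cost argument can be made concrete by counting circuits. A minimal sketch (the function names are mine, not the book's):

```python
def leased_line_mesh(sites: int) -> int:
    """Dedicated leased lines needed to fully mesh a private WAN:
    one long-distance line per pair of sites."""
    return sites * (sites - 1) // 2

def frame_relay_links(sites: int) -> int:
    """With frame relay, each site needs only one short leased-line
    connection into the carrier's cloud; the carrier does the rest."""
    return sites

for n in (3, 6, 10):
    print(n, leased_line_mesh(n), frame_relay_links(n))
```

A 10-site private mesh needs 45 distance-priced leased lines; the frame relay approach needs only 10 short access circuits, which is where the savings come from.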

The 80/20 and 20/80 Rules

An old rule for backbones was that 80 percent of the traffic stayed in the department while 20 percent crossed the backbone. With this model, high data throughput rates on the backbone were not a priority. If your departmental networks used 10-Mbit/sec Ethernet, you could possibly get by with a 10-Mbit/sec Ethernet backbone or at most, a 100-Mbit/sec FDDI backbone for expanding traffic loads.

However, the 80/20 rule no longer applies for most networks. In fact, it has reversed to the point where backbone networks now handle the majority of network traffic, due to a number of factors.

Because of these factors, the backbone has become the focus of demands for ever-higher data throughput. Added to that are increasing traffic loads put on the network by multimedia applications. Videoconferencing requires quality of service (QoS) guarantees, meaning that a certain part of the network bandwidth must be set aside to ensure that packets arrive on time. On top of this, many organizations want to integrate their voice telephone systems into the network, which will demand even more bandwidth.

A hierarchical wiring structure is essential for handling this type of traffic. As pictured on the right in Figure B-1, this type of wiring provides a topology that keeps local traffic local (from station to station on the same hub/switch) and funnels traffic to the backbone only when necessary.

ATM (Asynchronous Transfer Mode) has been the most likely contender to satisfy the requirements for collapsed backbones. ATM hubs have high-performance switching matrices that can handle millions of packets per second. Also, ATM provides QoS guarantees for video and voice.

Another more recent contender is Gigabit Ethernet, which provides Gbit/sec (1,000-Mbit/sec) throughput on the backbone switch or between switches. It fits in well with existing Ethernet networks because the same frame format, medium access method, and other defining characteristics are retained. In many cases, a Gigabit Ethernet switch can replace an existing Fast Ethernet switch with little other modification to the network.

In general, switch-based building blocks are the components you need to build a high-speed hierarchical network that maintains high performance under heavy traffic loads. Refer to "Switched Networks" for more information on how to build these "new" networks.

In the wide area, a new approach is to build VPNs (virtual private networks) over the Internet. The strategy is simple: install encrypting routers at each site with connections to the Internet. The routers encrypt the data contents of packets but keep the network addresses readable so they can be routed across the Internet. This is a cheap way to build secure links between your remote sites; however, the Internet is subject to delays, as you probably well know.

RELATED ENTRIES (These entries are hyperlinked on the book's CD-ROM)

ATM (Asynchronous Transfer Mode); Frame Relay; Gigabit Ethernet; Network Design and Construction; Structured Cabling Standards; Switched Networks; VLAN (Virtual LAN); and VPN (Virtual Private Network)


Cellular Communication Systems

Cellular radiotelephone systems in the United States began in 1983 and have grown rapidly ever since. The systems were largely analog until the early 1990s, when the first digital systems were put in place. There are many benefits to digital systems, and a number of standards have been introduced. The abundance of standards in the digital market along with continued advances in analog services has slowed the move to fully digital systems. Still, there are benefits in all-digital systems, and the latest trend is to build a global system that allows cellular phone users to make calls anywhere in the world with a single device and a single phone number.

Cellular phone systems consist of mobile units, base stations, and switches. A cellular phone is different from a regular phone in that it has a transceiver (transmitter/receiver). The transmitter sends radio signals to an antenna at a base station, and the receiver receives radio signals from the base station antenna.

A base station is a stationary antenna located at the center of a cell-shaped transmit and receive area. Each cell covers a limited geographic area, which may be an entire town or a segment of a large metropolitan area, as shown in Figure C-3. A typical cell is 20 kilometers across, but cell size varies.

FIGURE C-3. Cells in a cellular communication system

Cellular systems in large metropolitan areas may be overused, so cells are made smaller to serve more customers. Adjoining cells have their own towers and cannot use the same operating frequencies. However, stations that are separated by at least one cell may reuse frequencies, which allows the system's capacity to expand. The technique is referred to as frequency reuse.
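Frequency reuse can be illustrated with a little arithmetic, using the 818 AMPS channel pairs cited under "Analog Systems" below and the classic 7-cell reuse cluster (the cluster size is an illustrative assumption):

```python
def channels_per_cell(total_channel_pairs: int, cluster_size: int) -> int:
    """In a frequency-reuse plan, the available channels are divided
    among the cells of one reuse cluster. Every cluster repeats the
    same assignment, so total system capacity grows with the number
    of clusters (i.e., with smaller cells), not with more spectrum."""
    return total_channel_pairs // cluster_size

# AMPS-style figures: 818 channel pairs, 7-cell reuse cluster
print(channels_per_cell(818, 7))  # 116 channel pairs per cell
```

Shrinking the cells does not add channels per cell; it adds more clusters per city, which is why overused metropolitan systems deploy smaller cells.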

As a mobile user travels from one cell to another, there is a hand-off from one base station to another. This hand-off requires a certain amount of time and allocation of a new frequency in the new cell. It can produce a discontinuity in the call, which might disrupt data transmissions.

Analog Systems

The first cellular system in the United States, AMPS (Advanced Mobile Phone Service), was built in 1983. It is a fully analog system (i.e., voice is not digitized for transmission) that uses FDMA (Frequency Division Multiple Access) techniques for creating user communication channels. A channel consists of a specific frequency within the bandwidth of frequencies allocated to the system. An AMPS system consists of up to 818 channel pairs (transmit and receive). The transmitter in each cell has relatively low power so its transmission does not overlap too much into adjacent cells. No adjacent cells can use the same frequencies.

An advanced AMPS standard called NAMPS (Narrowband AMPS) was developed by Motorola to provide up to three times greater capacity than AMPS. The goal was to provide an interim technology until digital systems could be developed. Refer to "AMPS (Advanced Mobile Phone Service)" for more information.

A standard called CDPD (Cellular Digital Packet Data) was developed in 1993 that defines how to handle data transmissions on the AMPS system. CDPD's primary claim to fame is that it makes use of idle channels and sends data frames whenever possible. The system makes efficient use of the existing AMPS system. See "CDPD (Cellular Digital Packet Data)" for more information.

Digital Systems

The first digital systems were set up in 1993 by McCaw and Southwestern Bell. Some providers have chosen to wait for standards to settle and feel that current analog systems provide superior performance. Digital systems use a specific transport mechanism to move information between the mobile user and the base station. The transport mechanisms in this case are the channel allocation schemes for digital radio. There are two schemes in use: TDMA (Time Division Multiple Access), which divides each carrier frequency into time slots shared by several calls, and CDMA (Code Division Multiple Access), which spreads each call across a wide shared frequency band using a unique code.

Emerging Standards

The PCS (Personal Communications Services) and the Iridium system being deployed by Motorola and a consortium of global users will change the way people communicate. Both systems are designed for wireless use.

PCS (also called PCN, or Personal Communications Network, outside the United States) is a personalized communication system that assigns an individual one phone number. Note that the individual has the phone number, not a device. This phone number works no matter where the user travels on the planet. PCS is fully digital and has some unique features. It implements 50- to 100-meter-wide microcells with low transmission rates, making equipment very inexpensive. The U.S. government has already auctioned off licenses for the service. One drawback is that some of the frequencies are already allocated to other services, which must apparently move out to make room for PCS. The investment and the changes required to get the system going may be too high to make it practical in the near future. Refer to "PCS (Personal Communications Services)" for more information.

Another system is Iridium. When it is established in 1998, it will provide wireless telephone services by using low-orbit satellites and connections into the existing telephone system. Users can travel anywhere in the world, and their handset will provide the system with information about their location. The Iridium system consists of 66 satellites, each weighing approximately 689 kilograms (1,500 pounds), launched into low-earth orbit at an altitude of 420 nautical miles. The satellites project tightly focused beams onto the ground to create a cellular-like system. Refer to "Iridium System" for more information.

RELATED ENTRIES (These entries are hyperlinked on the book's CD-ROM)

AMPS (Advanced Mobile Phone Service); CDPD (Cellular Digital Packet Data); GSM (Global System for Mobile Communications); Iridium System; Mobile Computing; PCS (Personal Communications Services); and Wireless Communications


International Trade Administration's cellular site http://www.ita.doc.gov/industry/tai/telecom/cellular.html

CTIA (Cellular Telecommunications Industry Association) site http://www.wow-com.com

Cellular Network Perspective, Inc. (wireless publications) http://www.cadvision.com/cnp-wireless


Circuit-Switching Services

Circuit switching, as opposed to packet switching, sets up a dedicated communication channel between two end systems. Voice calls on telephone networks are an example. For a home or office connection, a circuit starts out on a pair of twisted wires from the caller's location to a telephone switching center in the local area. If the connection is between two phones in the same area, the local switch completes the circuit. This is pictured as connection A1-A2 in Figure C-5. If the connection is between phones in two different areas, a circuit is set up through an intermediate exchange as shown by circuit C1-C2. Long-distance circuits are made through a remote switching office as shown by circuit B1-B2.

FIGURE C-5. Circuits in a hierarchical telephone switching system

The difference between dedicated and switched circuits is that a dedicated circuit is always connected, and a switched circuit can be set up and disconnected at any time, reducing connect charges. The difference between circuit- and packet-switching services is pictured in Figure C-6.

FIGURE C-6. Circuit switching compared to packet switching

Switched circuits can supplement a dedicated line. For example, an appropriate bridge or router may use a dial-on-demand protocol to automatically dial a switched line if the traffic on the dedicated line exceeds its capabilities. Refer to "DDR (Dial-on-Demand Routing)" for more information. Switched circuits are also used to perform occasional data transfers between remote offices. A switched circuit might connect every 15 minutes to transfer the latest batch of electronic mail.

In a packet-switching network, data is divided into packets and transmitted across a network that is shared by other customers. The packets are interleaved with packets from other sources. This uses the network more efficiently and reduces usage charges, but packet-switched networks are subject to delays if another customer overloads the network with too much traffic. The phone companies have developed high-speed switching networks that implement ATM (Asynchronous Transfer Mode) to solve this problem. ATM uses fixed-size cells and high-speed switching to improve service.
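The interleaving idea can be shown with a toy round-robin multiplexer (a simplification; real switches use more sophisticated queuing disciplines):

```python
from collections import deque

def interleave(*sources):
    """Round-robin multiplexer: packets from several customers are
    interleaved onto one shared trunk, which is the essence of how a
    packet-switched network shares capacity among subscribers."""
    queues = [deque(s) for s in sources]
    trunk = []
    while any(queues):
        for q in queues:
            if q:
                trunk.append(q.popleft())
    return trunk

print(interleave(["a1", "a2"], ["b1", "b2", "b3"]))
# -> ['a1', 'b1', 'a2', 'b2', 'b3']
```

Note how customer B's burst does not block customer A, but also how A's delivery time now depends on B's load, which is the delay problem the text describes.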

Note in Figure C-6 that dedicated circuits are used to access a packet-switched network. These circuits are usually local leased lines or circuit-switched connections that funnel packets from a customer's site into the packet-switched network. They may be ISDN lines or high-capacity T-1 (1.544-Mbit/sec) lines.

As mentioned, ISDN (Integrated Services Digital Network) is an example of a circuit-switching service. Basic rate ISDN provides two 64-Kbit/sec circuit-switched channels that can be used for either voice calls or data communications. ISDN phones digitize analog voice into digital information for transmission across the circuit. The two 64-Kbit/sec channels can be combined into a single 128-Kbit/sec channel for data transfers. Users can "dial" any location to set up a circuit, thus the connection is switched. In contrast, broadband ISDN has a packet-switched orientation and can be scaled up to very high data rates. ISDN is a product of the phone company's desire to create a fully digital telephone system with circuit-switching capabilities. It was first proposed in the early 1980s and is still under construction.

RELATED ENTRIES (These entries are hyperlinked on the book's CD-ROM)

Communication Service Providers; Communication Services; DSL (Digital Subscriber Line); ISDN (Integrated Services Digital Network); Telecommunications and Telephone Systems; Virtual Circuit; and WAN (Wide Area Network)


Collaborative Computing

Collaborative computing allows users to work together on documents and projects, usually in real time, by taking advantage of underlying network communication systems. Whole new categories of software have been developed for collaborative computing, and many existing applications now include features that let people work together over networks.

Good examples of collaborative applications designed for Internet use are Microsoft's NetMeeting and NetShow. NetMeeting allows intranet and Internet users to collaborate with applications over the Internet, while NetShow lets users set up audio and graphic (nonvideo) conferences. These products are described below as examples of the type of collaborative applications available in the intranet/Internet environment. More information about the products is available at http://www.microsoft.com.


NetMeeting uses Internet phone voice communications and conferencing standards to provide multiuser applications and data sharing over intranets or the Internet. Two or more users can work together and collaborate in real time using application sharing, whiteboard, and chat functionality. NetMeeting is included in Microsoft's Internet Explorer.

NetMeeting can be used for common collaborative activities such as virtual meetings. It can also be used for customer service applications, telecommuting, distance learning, and technical support. The product is based on ITU (International Telecommunication Union) standards, so it is compatible with other products based on the same standards. Some of NetMeeting's built-in features are listed here.


Audio conferencing: Provides point-to-point audio conferencing over the Internet. A sound card with attached microphone and speaker is required.


User location service: Locates users who are currently running NetMeeting so you can participate in a conference. Internet service providers can implement their own ULS server to establish a community of NetMeeting users.


Multipoint conferencing: Provides a multipoint link among people who require virtual meetings. Users can share applications, exchange information through a shared clipboard, transfer files, use a shared whiteboard, and use text-based chat features.


Application sharing: Allows a user to share an application running on his computer with other people in a conference. Works with most Windows-based programs. As one user works with a program, other people in the conference see the actions of that user. Users may take turns editing or controlling the application.


Shared clipboard: Allows users to easily exchange information by using familiar cut, copy, and paste operations.


File transfer: Lets you transfer a file to another person by simply choosing a person in the conference and specifying a file. File transfers occur in the background as the meeting progresses.


Whiteboard: Provides a common drawing surface that is shared by all users in a conference. Users can sketch pictures, draw diagrams, or paste in graphics from other applications and make changes as necessary for all to see.


Chat: Provides real-time text-based messaging among members of a conference.


NetShow is basically a low-bandwidth alternative to videoconferencing. It provides live multicast audio, file transfer, on-demand streamed audio, illustrated audio, and video. It is also a development platform on which software developers can create add-on products. According to Microsoft, NetShow provides "complete information-sharing solutions, spanning the spectrum from one-to-one, fully interactive meetings to broadly distributed, one-way, live, or stored presentations."

NetShow takes advantage of important Internet and network communication technologies to minimize traffic while providing useful tools for multiuser collaboration.

IP multicasting is used to distribute identical information to many users at the same time. This avoids the need to send the same information to each user separately and dramatically reduces network traffic. Routers on the network must be multicast-enabled to take advantage of these features.
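The traffic saving from multicasting is easy to quantify. A back-of-the-envelope sketch (the figures are illustrative, not from the book):

```python
def unicast_bytes(payload: int, receivers: int) -> int:
    # Without multicast, the sender emits a separate copy of the
    # stream for every receiver, so its link carries N copies.
    return payload * receivers

def multicast_bytes(payload: int) -> int:
    # With IP multicast, the sender emits one copy; multicast-enabled
    # routers replicate it only where the distribution tree branches.
    return payload

stream = 5_000_000  # a 5 MB presentation
print(unicast_bytes(stream, 200) // multicast_bytes(stream))  # 200x less on the sender's link
```

The saving on any single router-to-router link is similar: each link carries at most one copy of the stream, no matter how many downstream listeners there are.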

NetShow also uses streaming technology, which allows users to see or hear information as it arrives, rather than wait for it to be completely transferred.

Other Products

A number of other companies are working on collaborative products that do many of the same things as NetMeeting and NetShow. Netscape Conference and SuiteSpot are similar products. SuiteSpot integrates up to ten collaborative applications into a single package. Additional information is available at http://www.netscape.com.

Netscape Collabra Server, which is included in the SuiteSpot enterprise suite of applications, lets people work together over intranets or over the Internet. Companies can create discussion forums and open those forums to partners and customers. Collabra Server employs the standards-based NNTP (Network News Transfer Protocol), which allows discussions to be opened to any NNTP-compliant client on the Internet. In addition, discussions can be secured and encrypted.

Another interesting product is one called CyberHub from Blaxxun Interactive (http://www.blaxxun.com). It provides a high-end virtual meeting environment that uses 3-D graphics and VRML (Virtual Reality Modeling Language).

RELATED ENTRIES (These entries are hyperlinked on the book's CD-ROM)

Compound Documents; Document Management; Electronic Mail; Groupware; IBM Network Computing Framework; Information Publishing; Lotus Notes; Microsoft BackOffice; Microsoft Exchange Server; Netscape SuiteSpot; Videoconferencing and Desktop Video; and Workflow Management


Communication Services

Organizations build wide area enterprise networks to link remote users and create LAN-to-LAN links that allow users in one geographic area to use resources on LANs in other areas. A variety of carrier services are available to create these links.

Managers must evaluate the volume of network traffic and its destination to determine the type of services to use. The following table lists the transmission requirements for various types of activities. If traffic is light, dial-up services are sufficient. For continuous traffic between two points, dedicated lines or high data rate switching services such as frame relay and ATM (Asynchronous Transfer Mode) are recommended.



Type of Activity                     Transmission Requirement

Personal communications              300 to 9,600 bits/sec or higher
E-mail transmissions                 2,400 to 9,600 bits/sec or higher
Remote control programs              9,600 bits/sec to 56 Kbits/sec
Database text query                  Up to 1 Mbit/sec
Digital audio                        1 to 2 Mbits/sec
Access images                        1 to 8 Mbits/sec
Compressed video                     2 to 10 Mbits/sec
Medical transmissions                Up to 50 Mbits/sec
Document imaging                     10 to 100 Mbits/sec
Scientific imaging                   Up to 1 Gbit/sec
Full-motion video (uncompressed)     1 to 2 Gbits/sec

Switching can be done on the customer's site (private networking) or by the carrier (public networking). If the customer does switching, appropriate equipment is installed and the customer sets up dedicated lines between all the points that require connections. This private networking strategy gets more expensive as the distance between sites grows. If the carrier provides switching, the customer funnels all its traffic to the carrier, which then routes the traffic to various destinations. This is how frame relay is handled.

If you decide to install dedicated leased lines between two geographically remote locations, you might have to deal with a number of carriers, including the local exchange carrier at the local site, a long-distance carrier, and the local exchange carrier at a remote site.

The following sections describe services available from the local and long-distance carriers. For additional information about the services provided by specific providers, refer to the Web sites of the companies listed in Appendix A under "Telecommunications Companies." For information about building WANs over the Internet, refer to "VPN (Virtual Private Network)."

Traditional Analog Lines

Traditional analog voice lines provide a convenient and relatively inexpensive way to set up point-to-point links between computers and remote LANs. Specialized modems are now available that operate at speeds as high as 56 Kbits/sec (in one direction over appropriate lines). The two types of connections over the analog telephone network are described next. See also "Leased Line" and "Modems" for more information.


Dial-up lines: Connections are made only when needed for file transfers, e-mail connections, and remote user sessions.


Leased analog lines: These lines provide the same data rates as dial-up lines, except that customers contract with the carrier to keep the lines available for immediate use when necessary.

Circuit-Switched Services

A circuit-switched service provides a temporary dedicated point-to-point circuit for data transmission through a carrier's switching systems (see "Circuit-Switching Services"). Customers can contract for various types of services, depending on their anticipated bandwidth needs. Each of the services discussed in the following paragraphs are covered in more detail under separate headings.


Switched-56 is a digital switched service that operates at 56 Kbits/sec. A special Switched-56 data set device interfaces between the carrier's wire pairs and the customer's internal device (usually a router). Switched-56 services were originally intended to provide an alternate backup route for higher-speed leased lines such as T1. If a leased line failed, a Switched-56 line would quickly establish an alternate connection. Switched-56 can still be used in this way, but it is also used to handle peaks in traffic, fax transmissions, backup sessions, bulk e-mail transfers, and LAN-to-LAN connections. Rates are calculated by the minute in most cases.


Another carrier offering expands on Switched-56 by providing toll-free (800 number) digital switched services. Inverse multiplexing can be used to combine multiple circuits into a wide-bandwidth circuit as network traffic increases.


ISDN is a circuit-switched service that provides three channels for voice or data transmissions. Two of the channels provide 64-Kbit/sec data or voice, and a third provides signaling to control the channels. ISDN is offered in selected areas. See "ISDN (Integrated Services Digital Network)" for more details.

Dedicated Digital Services

Digital circuits can provide data transmission speeds up to 45 Mbits/sec. Currently, digital lines are made possible by "conditioning" normal lines with special equipment to handle higher data rates. The lines are leased from the telephone company and installed between two points (point to point) to provide dedicated, full-time service. You'll need bridges or routers to connect LANs to digital lines. Voice and data multiplexers are also required if you plan to mix voice and data channels.

The traditional digital line service is the T1 channel, which provides transmission rates of 1.544 Mbits/sec. T1 lines can carry both voice and data, so they are often used to provide voice telephone connections between an organization's remote sites. T1 lines can also be leased fractionally, meaning that you lease only a subset of the channels. A T1 line can be divided into 24 channels of 64-Kbit/sec bandwidth each, which is the bandwidth needed for a digitized voice call. Alternatively, a T3 line can provide the equivalent of 28 T1 lines for users who need a lot of bandwidth. See "T1/T3."
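The T1 arithmetic works out as follows (a sketch using the standard DS0/DS1 figures):

```python
VOICE_CHANNEL = 64_000  # bits/sec for one digitized voice call (a DS0)
T1_CHANNELS = 24
T1_FRAMING = 8_000      # bits/sec of framing overhead in a T1 (DS1)

def t1_rate() -> int:
    """A T1 carries 24 DS0 voice channels plus 8 Kbits/sec of framing,
    which is where the 1.544-Mbit/sec figure comes from."""
    return T1_CHANNELS * VOICE_CHANNEL + T1_FRAMING

print(t1_rate())       # 1544000 bits/sec, i.e. 1.544 Mbits/sec
print(t1_rate() * 28)  # payload of 28 T1s; the T3 line rate itself is
                       # 44.736 Mbits/sec, the difference being extra framing
```

The same channelization explains fractional T1: the carrier simply leases you some number of the 24 DS0 subchannels instead of the whole pipe.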


DSL (Digital Subscriber Line) services are emerging that use the existing twisted-pair copper wire in the local loop to provide data rates of up to 60 Mbits/sec. Interestingly, the closer an end user is to the telephone company's switching office, the faster the data rate. In the past, these rates were not possible because the carrier's switching equipment was only designed to handle a narrow bandwidth for voice. But carriers see DSL as a way to provide bandwidth-hungry Internet users with all the speed they need. A typical installation requires a PC with an Ethernet card and a DSL modem. The service is dedicated, not dial-up, so the typical configuration is to run a line from the customer's site to an ISP (Internet service provider). From there, customer data is packet-switched to appropriate destinations. Both voice and data can be transported using this scheme, so voice-over-the-Internet products should become more popular as these schemes are put into place. There are many levels of service, and these are discussed further under "DSL (Digital Subscriber Line)."

Packet-Switching Services

A packet-switched network transports data (or digitized voice) over a mesh of interconnected circuits. The term packet is used loosely here because the carriers deliver data in either frames (i.e., frame relay) or cells (i.e., ATM, Asynchronous Transfer Mode). Here, packet refers to a generic block of information that is transmitted through the mesh from one point to another. The important point about a switched service is that it provides any-to-any connections, as shown in Figure C-15.

FIGURE C-15. A switched network

The carriers prefer to preprogram VCs (virtual circuits) through their networks and lease them. You specify the locations where you want to send data (i.e., your remote branch offices) and the carrier programs the routers at each of the junctions to immediately switch the packets along an appropriate path.

Note that a circuit of appropriate bandwidth is required between the customer site and the carrier's access point into the switched network. This circuit might be a dial-up line or a dedicated circuit. However, because the distance between the customer and the access point is small, charges are minimal when compared to running a dedicated circuit end to end since such circuits carry distance charges. Organizations can use these services to create virtual data networks over wide areas that connect all of their remote sites.


X.25 is a standard, well-tested protocol that has been a workhorse packet-switching service since 1976. It is suitable for light loads and was commonly used to provide remote terminal connections to mainframe systems. X.25 packet-switched networks are not suitable for most LAN-to-LAN traffic because they are slow and because a large portion of the bandwidth is used for error checking. This error checking was important in the days of low-quality analog telephone lines, but today's high-quality fiber-optic circuits do not usually need these controls. See "X.25" for more information.


Frame relay provides services similar to X.25, but is faster and more efficient. Frame relay assumes that the telecommunication network is relatively error-free and does not require the extensive error-checking and packet acknowledgment features of X.25. Frame relay is an excellent choice for organizations that need any-to-any connections on an as-needed basis. See "Frame Relay" for more details.


Cell-switching networks, namely ATM, provide "fast-packet" switching services that can transmit data at megabit- and potentially gigabit-per-second rates. Carriers have already made a major switch to ATM and are now moving the services into local areas. The goal is to eventually use ATM all the way up to the customer premises. See "ATM (Asynchronous Transfer Mode)" and "Cell Relay" for more details.


SMDS is a cell-based service provided by the RBOCs (regional Bell operating companies) in selected areas. SMDS uses ATM switching and provides services such as usage-based billing and network management. The service will eventually grow from MAN (metropolitan area network) use to wide area network use and take advantage of SONET (Synchronous Optical Network) networks. See "SMDS (Switched Multimegabit Data Service)."

PDNs (Public Data Networks)

PDN providers (called value added carriers or VACs) have created data networks that they make available to organizations at leased-line rates (monthly charge) or at dial-up rates (per-use charge). A typical PDN network forms a web of global connections between the PDN's switching equipment. Circuits consist of the PDN's own private lines or lines leased from major carriers such as AT&T. Dial-up or dedicated lines are used to connect with a PDN at access points that are available in most metropolitan areas. Services provided typically include X.25, frame relay, and even ATM. Some of the major PDNs offering these services are listed here:

Using a PDN saves you the trouble of contracting for the lines and setting up your own switching equipment. The service provider handles any problems with the network itself and can guarantee data delivery through a vast mesh of switched lines.

Other Services

There are a variety of other services that network managers can choose for wide area connections or other types of network activity. These are listed here.


The Internet provides access to a wide range of information services, electronic mail services, and connectivity services. With the popularity of the Internet and Web, ISPs (Internet service providers) are everywhere. Refer to "VPN (Virtual Private Network)" for more information.


Wireless communication consists of either local area services or wide area services. Local area services involve wireless communication between workstations that are in a fixed position within an office. Wide area services involve mobile workstations that communicate using technologies such as packet radio, cellular networks, and satellite stations. See "Cellular Communication Systems" for more details.


World Wide Web virtual library telecommunication links

Telecom Information Resources on the Internet

PTC Telecom Resources Links

Computer and Communication hot links

The ITU's list of Web sites


Component Software Technology

Component software technology breaks large and complex applications into smaller software modules called components or objects. (In the Java environment, small downloadable programs are called applets.) By breaking applications into components, complex software is simplified. Each component is like a building block that performs a specific task and has an interface that lets developers combine it with other components to build applications. Some other advantages are listed here:

Component software is revolutionizing the Web. Components can be downloaded from Web sites to expand the functionality of Web browsers or provide a user with a whole new interface for accessing information at a Web site. Java applets and ActiveX components are available for download all over the Web. The simplest applets or components might provide a spinning logo or rolling text banner. Complex components can be combined to create complete applications.

For example, most financial sites have a portfolio management utility, which you can download to run in your Web browser. The component stays on your computer and goes into action every time you visit the financial site by downloading the latest information about the holdings in your portfolio. If the software needs updating, the Web site managers simply post new components at the site. The next time you visit, the components are downloaded automatically.

Marimba's Castanet is a good example of a technology called broadcasting that lets users tune into sites on the Web as if they were tuning in a radio or TV broadcast. Information updates at the sites are automatically delivered to users that are tuned into the site. See "Marimba Castanet" for more information. Also see "Broadcast News Technology" and "Push."

Component Models

The component-based software model consists of the components themselves; containers, which are the shells where components are assembled; and scripting, which lets developers write the directives that make components interact. The primary container technologies are ActiveX and JavaBeans. The corresponding scripting languages are ActiveX Script and JavaScript. When a component is put in a container, it must register and publish its interfaces so that other components know about it and know how to interact with it.
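The register-and-publish step can be sketched in a few lines. This is a minimal, hypothetical model of a container registry; the names (Container, register, lookup, IQuote) are illustrative and not drawn from any real component API:

```python
# A minimal sketch of a component container: components register themselves
# and publish the interfaces they support, so other components and scripts
# can discover them and interact with them by interface name.

class Container:
    def __init__(self):
        self._registry = {}  # interface name -> component instance

    def register(self, component, interfaces):
        # "Publish" each interface the component supports.
        for name in interfaces:
            self._registry[name] = component

    def lookup(self, interface):
        return self._registry.get(interface)

class StockTicker:
    def quote(self, symbol):
        return f"{symbol}: 42.00"

container = Container()
container.register(StockTicker(), ["IQuote"])

# A scripting layer would wire components together via published interfaces:
ticker = container.lookup("IQuote")
print(ticker.quote("ACME"))  # ACME: 42.00
```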

Components are usually deployed in distributed computing environments. For example, components that read sensor information in remote computers may report this information at various intervals to a central monitoring system. In object environments, these components communicate through ORBs (object request brokers). The most widely adopted ORB specification is CORBA (Common Object Request Broker Architecture), which is cross-platform and allows components written for different operating systems and environments to work together.

Component technology enables multitiered environments in which applications are broken up into different services that run on different computers. Services such as application logic, information retrieval, transaction monitoring, data presentation, and management may run on different computers that communicate with one another to provide end users with a seamless application interface.

To run sophisticated applications and transactions over networks, there is a need to register components and coordinate their activities so that critical transactions can take place. For example, if data is being written to multiple databases at different locations, a transaction monitor is needed to make sure that either all of those writes take place or all of them are rolled back. A funds transfer is a good example. If money is withdrawn from one account but fails to be deposited in another due to a communication glitch, you'll want the original withdrawal rolled back; otherwise, you'll end up with no money in either account!
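The funds-transfer example can be sketched as a miniature all-or-nothing operation. This is a toy illustration of the guarantee a transaction monitor provides, not how any real monitor is implemented:

```python
# Hypothetical sketch: the transfer either applies both the withdrawal and
# the deposit, or rolls the withdrawal back, so money is never lost.

class Account:
    def __init__(self, balance):
        self.balance = balance

def transfer(src, dst, amount, fail=False):
    src.balance -= amount          # the withdrawal
    try:
        if fail:
            raise ConnectionError("communication glitch")
        dst.balance += amount      # the deposit
    except ConnectionError:
        src.balance += amount      # roll back the withdrawal
        return False
    return True

a, b = Account(100), Account(0)
transfer(a, b, 50, fail=True)      # glitch: both accounts left unchanged
print(a.balance, b.balance)        # 100 0
transfer(a, b, 50)                 # success: the money moves exactly once
print(a.balance, b.balance)        # 50 50
```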

Microsoft has developed the Microsoft Transaction Server to coordinate the interaction of components and ensure that transactions are implemented safely. Microsoft Transaction Server provides transaction processing and monitoring along with message queuing and other traditional features normally found on high-end transaction systems. Because it provides these features in an object-based environment, it is essentially a transaction-based object request broker. Refer to "Microsoft Transaction Server" for more information.

The following list names related entries in this book that discuss component technology.

RELATED ENTRIES (These entries are hyperlinked on the book's CD-ROM)

ActiveX; COM (Component Object Model); Compound Documents; DCOM (Distributed Component Object Model); Distributed Object Computing; Java; Middleware and Messaging; Netscape ONE Development Environment; Object Technologies; OMA (Object Management Architecture); Oracle NCA (Network Computing Architecture); ORB (Object Request Broker); Sun Microsystems Solaris NEO; and Web Technologies and Concepts


OMG (Object Management Group)

ComponentWare Consortium


Marimba, Inc.

Microsoft Corporation

The Active Group


Connection Technologies

There are a number of technologies available for connecting devices together in networks or in a simple data-sharing arrangement. Some of these technologies are listed here and covered in more detail under the appropriate headings.

The following standards are peripheral interfaces that can also provide some level of networking service.



Cryptography is concerned with keeping information, usually sensitive information, private. Information is encrypted to make it private and decrypted to restore it to human-readable form.

Encryption is performed using an algorithm, which is usually well known. The algorithm takes some input, called the plaintext, and converts it to ciphertext. A key supplied to the algorithm controls the ciphertext output: a different key used on the same plaintext produces different ciphertext. Because algorithms are well known, the strength of encryption relies on the key and its length.
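The key's role can be demonstrated with a toy repeating-key XOR cipher (far weaker than any real cipher): the same well-known algorithm with two different keys yields different ciphertext, and applying the same key again restores the plaintext.

```python
# Toy illustration only: repeating-key XOR. The "algorithm" is public;
# the output depends entirely on the key.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"attack at dawn"
c1 = xor_cipher(plaintext, b"key1")
c2 = xor_cipher(plaintext, b"key2")
print(c1 != c2)                              # True: different keys, different ciphertext
print(xor_cipher(c1, b"key1") == plaintext)  # True: the same key decrypts
```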

One of the most well-known encryption algorithms is the U.S. government- endorsed DES (Data Encryption Standard). It uses a 56-bit key and an algorithm that scrambles and obscures a message by running it through multiple iterations or rounds of an obfuscation algorithm. This process is pictured in Figure C-21 and is greatly simplified in the following description. It helps to visualize this process as threads being woven together and the key providing a color change during each iteration.

FIGURE C-21. DES encryption algorithm (simplified)

1. The plaintext is divided into 64-bit blocks. Each block is worked independently through 16 iterations of the algorithm.

2. At the same time, the 56-bit key is divided in half. In each iteration, the bits in each half are shifted to the left to change the key values (like changing the color to be applied to the thread).

3. A 64-bit block is divided in half (now we have two threads) and the right half is combined with the two key halves created in step 2 (this is like coloring one of the threads).

4. The results of step 3 are converted again using some specific techniques (too complex to discuss here), then the results are combined with the left half of the 64-bit block (like weaving in another thread).

5. The results of the above steps become the new right half. Now the next iteration for the same 64-bit block is ready to start. The right half from the previous iteration is brought down to become the new left half (the thread to be colored). Also, the left and right halves of the key are bit-shifted left and combined to create a new key (like changing the color).

6. The process repeats again using the new left half and new right half for 15 more iterations. This produces the first 64-bit block of ciphertext.

The next 64-bit block is processed using the same procedure.
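The halving, mixing, and swapping described in the steps above is the classic Feistel structure, which can be sketched generically. This is not DES itself: the round function and round keys below are stand-ins for the real DES tables and key schedule, chosen only to show how running the rounds in reverse key order undoes the encryption.

```python
# A generic Feistel network in the spirit of the DES description above.
# Each round: the right half becomes the new left half, and the old left
# half is XORed with a scrambled (right half + round key) to become the
# new right half.

def round_fn(half: int, round_key: int) -> int:
    # Placeholder "scramble"; real DES uses expansion, S-boxes, and permutation.
    return (half * 31 + round_key) & 0xFFFFFFFF

def feistel(block: int, keys, decrypt=False):
    left, right = block >> 32, block & 0xFFFFFFFF
    for k in (reversed(keys) if decrypt else keys):
        left, right = right, left ^ round_fn(right, k)
    return (right << 32) | left  # final swap of the halves, as in DES

keys = [3, 1, 4, 1, 5, 9, 2, 6]          # 8 stand-in round keys (DES uses 16)
ct = feistel(0x0123456789ABCDEF, keys)
pt = feistel(ct, keys, decrypt=True)
print(hex(pt))  # 0x123456789abcdef -- decryption reverses encryption
```

Note that nothing about the structure requires the round function to be reversible; the XOR and the swap are what make decryption possible.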

One of the interesting things about this process is that the algorithm is well known, so anyone who is trying to break your DES-encrypted ciphertext will have the algorithm to work with. This is true of most encryption schemes, so the strength of the system is in the size of the key used and how well the algorithm does its job. However, the 56-bit key size of DES is now considered insecure. In fact, it was broken by a group of computers linked over the Internet as this book was being written. More-advanced encryption schemes such as IDEA (discussed under "Types of Ciphers," next) have been developed.

Types of Ciphers

DES is a block cipher because it takes the plaintext and divides it up into blocks that are processed individually in multiple rounds (iterations). There are also stream ciphers that work on streams of raw bits and are much faster.

In addition, there are symmetrical (single-key) and asymmetrical (two-key) ciphers. Symmetric schemes are also called private-key or secret-key encryption schemes. A single key is used both to encrypt and to decrypt messages. If you send an encrypted message to someone, you must also get them a copy of the key. This is a problem in some environments, especially if you don't know the recipient and need to transport or transmit the key using untrusted people or channels. How can you be sure the key has not been compromised? Asymmetric schemes solve this problem, as discussed in a moment.

DES is a symmetrical (single-key) cryptographic scheme. However, DES has shown its age, as mentioned previously. A replacement is IDEA (International Data Encryption Algorithm), a block-oriented secret-key encryption algorithm developed at the Swiss Federal Institute of Technology. It uses a 128-bit key compared to DES's 56-bit key, and so its resulting ciphertext is substantially more secure. IDEA is generally considered a better choice than DES, and the algorithm is publicly available, easy to implement, and exportable. As of this writing, there have been no successful attacks against IDEA.

Asymmetrical public-key cryptography is an alternative to the private-key DES and IDEA schemes. In this scheme, everybody gets a set of keys. One key, called the public key, is made freely available. The other, called the private key, is held secretly. Sending encrypted messages to someone is simple. You obtain a copy of their public key, either directly from the person or from a public-key server, and use it to encrypt the message. This message can only be decrypted with the recipient's private key. The private key is never made publicly available.
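The public/private split can be shown with textbook RSA and deliberately tiny numbers. This is a sketch for intuition only; real keys are hundreds of digits long, and real systems add padding and other safeguards.

```python
# Textbook RSA with toy numbers: anyone can encrypt with the public key
# (e, n); only the holder of the private exponent d can decrypt.

p, q = 61, 53
n = p * q                           # modulus, part of both keys
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (modular inverse, Python 3.8+)

message = 65
ciphertext = pow(message, e, n)     # encrypt with the public key
recovered = pow(ciphertext, d, n)   # decrypt with the private key
print(recovered)  # 65
```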

The public-key scheme is revolutionizing computer security by providing ways to enable electronic commerce, authenticate users, validate and timestamp documents and programs, exchange secure electronic messages, and more. Refer to "Public-Key Cryptography," for more details.

There is another important encryption scheme called the one-way function. With this scheme, there is often no intention of ever deciphering the text. Encrypting with a one-way function is easy and takes little computer processing power. Reversing the encryption is considered impossible. Why encrypt something that never gets decrypted?

Suppose you want to send a business partner a message and provide some proof that the message contents have not been altered in transit. The first step is to encrypt the message with a one-way function to produce a hash (also called a message digest). The hash represents the original document as a unique set of numbers. You then send the document to your business partner, encrypting it with his or her public key. Upon receipt, he or she runs the same one-way function on the contents of the document. The resulting hash should be the same as the one you sent. If not, the document is suspect.
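The integrity check just described looks like this in sketch form, using SHA-256 from Python's standard hashlib as the one-way function (the book's era would more likely have used MD5; SHA-256 is a modern substitute):

```python
# Hash the document before sending; the recipient hashes what arrives and
# compares. Any alteration in transit produces a different digest.

import hashlib

document = b"Ship 500 units to the warehouse by Friday."
digest_sent = hashlib.sha256(document).hexdigest()

# ... document and digest travel to the recipient ...

digest_received = hashlib.sha256(document).hexdigest()
print(digest_received == digest_sent)   # True: contents unaltered

tampered = b"Ship 900 units to the warehouse by Friday."
print(hashlib.sha256(tampered).hexdigest() == digest_sent)  # False: suspect
```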

One-way functions are also used to store passwords. Actually, the technique doesn't store the password, but its hash, and that is why it is secure. When a user logs on, a hash of the password is created and compared to the hash in the password file. If they compare, the user is considered authentic. Refer to "Challenge/Response Protocol" for more details on this technique.
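The password technique can be sketched with Python's standard library. The salt and iteration count below are illustrative choices, not requirements from the text:

```python
# Store only a salted hash of the password; at logon, hash the supplied
# password the same way and compare. The password itself is never stored.

import hashlib
import hmac
import os

def store_password(password: str):
    salt = os.urandom(16)
    h = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, h                  # what the "password file" actually keeps

def check_password(password: str, salt: bytes, stored: bytes) -> bool:
    h = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(h, stored)  # constant-time comparison

salt, stored = store_password("correct horse")
print(check_password("correct horse", salt, stored))  # True
print(check_password("wrong guess", salt, stored))    # False
```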


Cryptanalysis is the analysis of cryptosystems. A cryptanalyst may analyze a system to verify its integrity, and is often paid to do so. An attacker analyzes a system to find its weaknesses and gain illegal access to documents and systems.

How are systems broken? Successful attacks usually take place under optimal conditions, i.e., using million-dollar computer systems run by expert cryptanalysts (such as people at the National Security Agency) or by coordinating many interconnected network computers.

One method is called the brute force attack. This method was used to break DES. Every possible key is tried in an attempt to decrypt the ciphertext. Often, a dictionary of common passwords (freely available on the Internet) is used. This type of attack is often successful if weak passwords are used. Weak passwords include common names, dictionary words, and common abbreviations. Brute force attacks are difficult if long keys are used and if the keys consist of mixed numbers and characters in a nonsense pattern. It is estimated that a 100-bit key could take millions to billions of years to break. However, a weakness in a system might reduce the number of keys that need to be tried, thus making an attack feasible.
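A dictionary attack in miniature: hash each candidate from a word list and compare against the stored hash. Weak passwords fall immediately; long random keys make the search space astronomically large. The word list and iteration count here are illustrative.

```python
# Try every word in a dictionary against a stolen salted hash. This is why
# "password" is a weak password and a random 100-bit key is not.

import hashlib

def hash_pw(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 1000)

salt = b"0123456789abcdef"
target = hash_pw("password", salt)      # a weak choice, straight from the dictionary

wordlist = ["letmein", "qwerty", "password", "admin"]
cracked = next((w for w in wordlist if hash_pw(w, salt) == target), None)
print(cracked)  # password
```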

Another possibility is that the cryptanalyst knows something about what is inside an encrypted message and has the algorithm used to create the ciphertext. In this case, the cryptanalyst can analyze the original plaintext, the algorithm, and the resulting ciphertext to find some pattern or weakness in the system. Message content is often not hard to figure out. Documents created in popular word processors often have hidden formatting codes and header information. Invoices or other business documents have a company's name and address. The names of persons or systems may be repeated throughout a document.

The cryptanalyst might even find a way to get some text inserted into a sensitive document before it is encrypted, then use the techniques described above to look for that text in the ciphertext. There is also a special technique, called differential cryptanalysis, in which an iterative process works through many rounds and uses the results of previous rounds to break the ciphertext.

One of the best sources of information on cryptography is the cryptography FAQ (Frequently Asked Questions) from RSA Data Security (now part of Security Dynamics). You can find it at the site listed under "Information on the Internet," at the end of this section.

RELATED ENTRIES (These entries are hyperlinked on the book's CD-ROM)

Authentication and Authorization; Digital Signatures; PKI (Public-Key Infrastructure); Public-Key Cryptography; Security; and SET (Secure Electronic Transaction)


RSA's Crypto FAQ

Ron Rivest's Links (possibly the most complete set of links on the Web)

Peter Gutmann's Security and Encryption-related Resources and Links

Terry Ritter's Cyphers by Ritter page

The Computer Virus Help Desk (has a good archive of encryption software and information)


Document Management

Document management is the storing, categorizing, and retrieval of documents, spreadsheets, graphs, and imaged (scanned) documents. Each document has an index-card-like record that holds information such as the author, document description, creation date, and type of application used. Such documents are usually targeted for archiving on less expensive tape or optical disk, where they remain available for future access if necessary.

Document management is often referred to as EDM (electronic document management). In the case of a law firm, document management tracks all activities occurring with a document, such as the number of keystrokes, revisions, and printings, so clients can be charged for the services.

AIIM (Association for Information and Image Management International)

AIIM is a 9,000-member organization that manages document imaging and interoperability standards. It is also the umbrella organization for the DMA (Document Management Alliance) and the ODMA (Open Document Management API). AIIM is accredited by ANSI (the American National Standards Institute). The organization's Web site is at http://www.aiim.org.

DMA is a task force of AIIM that is concerned with ensuring that document management applications, services, and repositories are interoperable. DMA has developed a document management framework that defines programming interfaces and services for document management systems. For example, a query service allows users to search multiple repositories anywhere on the network, and a library service provides version and access control to reduce the risk of users working on outdated documents.

The ODMA specification is a platform-independent API (application programming interface) and platform-specific registration and binding specification. It runs on Windows, Macintosh, and other platforms and is supported by major vendors including Novell, Saros, Documentum, Interleaf, and FileNet. There is also an ODMA Extension for Workflow. Additional information can be found at the AIIM Web site mentioned above.

According to AIIM, documents hold the "intellectual assets of organizations, the knowledge and expertise of its people, and the information and data they have compiled. These valuable assets must be managed and protected. Everything a company knows about itself, its products and services, its customers, and the business environment in which it exists are stored in documents."

Modern electronic documents contain multimedia information, including graphics, voice clips, and video clips. Physical documents may be scanned, indexed, and stored on computer for quick access. Imaged and archived documents can be retrieved in a matter of seconds. Optical character recognition is used to "read" documents and turn them into computer text files. Once stored, the document can be duplicated indefinitely. Parts of the document can be cut and pasted into other documents.

Document management becomes essential as network users begin to take advantage of these technologies. According to AIIM, a document management system should do the following:

Web-Based Document Management Products

Most vendors of document management software and systems have made the move to Web technologies. Web servers and HTML (Hypertext Markup Language) provide an example of a document management system. Documents are stored in the HTML format, which is readable by any Web browser. Web browsers provide a universal interface to documents on any Web server. Because HTML documents support hyperlinking, users can quickly move between document references. HTML documents also hold objects such as buttons or icons that automatically launch program code or database queries, or execute ActiveX controls and Java applets.

Two examples are given here:

RELATED ENTRIES (These entries are hyperlinked on the book's CD-ROM)

Backup and Data Archiving; Compound Documents; EDI (Electronic Data Interchange); Groupware; Imaging; Information Publishing; Storage Management Systems; and Workflow Management


AIIM International

Document Management Industries Association

FORM Magazine online

Document management links at FORM Magazine


File Systems

A file system provides persistent storage of information. The file system interfaces with disk drives and provides a system for organizing the way information is stored in the tracks and clusters of the drives. Users interface with the file system by working with files and directories, both of which can have various attributes such as read-only, read/write, execute, and so on.

A local file system may allow users to access only local disk drives. However, most operating systems also allow users to access disk drives located on other network computers and to share or "publish" directories on their own systems. These network-aware operating systems require a higher level of security, since unknown users may access files over the network. Therefore, more advanced network operating systems like Windows NT and NetWare provide special attributes to control access to files. In high-security environments, remote users will almost always require a user account and must be properly authenticated before they can access a file.

Most file systems today use the hierarchical directory structure in which a directory tree is built from a root directory. Directories are like containers that can hold other directories (subdirectories) or files. A directory is a parent to its subdirectories (the child directories). Directories have attributes that are usually "inherited" by all the files and subdirectories of the directory; however, attributes can be changed for individual files and directories in most cases.
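The parent/child relationship and attribute inheritance can be modeled in a few lines. This is a hypothetical sketch, not any real file-system API; the class and method names are illustrative:

```python
# A toy hierarchical directory tree: a child inherits its parent's
# attributes unless it overrides them locally, mirroring the inheritance
# behavior described above.

class Node:
    def __init__(self, name, parent=None, attrs=None):
        self.name, self.parent = name, parent
        self._attrs = attrs or {}

    def attribute(self, key):
        node = self
        while node is not None:          # walk up toward the root
            if key in node._attrs:
                return node._attrs[key]  # nearest setting wins
            node = node.parent
        return None

root = Node("/", attrs={"read_only": False})
docs = Node("docs", parent=root)                                   # inherits
archive = Node("archive", parent=docs, attrs={"read_only": True})  # overrides

print(docs.attribute("read_only"))     # False (inherited from "/")
print(archive.attribute("read_only"))  # True (overridden locally)
```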

Common file systems are briefly described here. Some of these file systems are discussed elsewhere in this book, as noted.

As mentioned above, there are file-sharing systems as well, but these systems take advantage of the underlying file systems that run on individual computers. For example, NFS is a file-sharing system that takes advantage of existing UNIX file systems. Likewise, Microsoft's SMB (Server Message Blocks) takes advantage of FAT and NTFS, as does the new CIFS (Common Internet File System). Microsoft's new DFS (Distributed File System) is covered under "DFS (Distributed File System), Microsoft" and "Distributed File Systems."

RELATED ENTRIES (These entries are hyperlinked on the book's CD-ROM)

AFP (AppleTalk Filing Protocol); AFS (Andrew File System); AppleShare; CIFS (Common Internet File System); Compression Techniques; DFS (Distributed File System), Microsoft; DFS (Distributed File System), The Open Group; Directory Attributes and Management; Distributed File Systems; Information Warehouse; Network Operating Systems; NFS (Network File System); Novell NetWare File System; Rights and Permissions; SMB (Server Message Blocks); Storage Management Systems; Storage Systems; UNIX File System; Volume and Partition Management; and Windows NT File System


File Systems, Distributed

See DFS (Distributed File System), Microsoft; Distributed File Systems.


High-Speed Networking

High-speed network designs are motivated by the limitations of existing network topologies. The basic concept has been to simply increase the data rate of the network. For example, 10-Mbit/sec Ethernet was improved tenfold with the standardization of Fast Ethernet (100 Mbits/sec). For technical reasons, increasing the data rate reduces the maximum station-to-station distance, so alternative schemes such as FDDI (Fiber Distributed Data Interface) are often employed as a backbone technology when long distance and a high data rate are both required, as in campus environments. Fast Ethernet can fulfill backbone requirements as long as the network is within the confines of a single building.

The typical strategy is to connect servers to the backbone, where they can take advantage of the higher throughput. For example, a server connected to a 100-Mbit/sec backbone can simultaneously handle ten clients operating at 10 Mbits/sec with ease.

Pushing the bandwidth even further is Gigabit Ethernet, which operates at a data rate of 1,000 Mbits/sec. Its primary purpose is for use in the network backbone or as a replacement for existing 100-Mbit/sec switches.

Still, pumping up the bandwidth is not always a complete solution. While Gigabit Ethernet can improve backbone performance, local network traffic may still suffer from bottlenecks due to the shared nature of the LANs or the collisions caused under heavy traffic loads on Ethernet networks. Switching and virtual networking can provide a solution.

RELATED ENTRIES (These entries are hyperlinked on the book's CD-ROM)

ATM (Asynchronous Transfer Mode); Backbone Networks; Fast Ethernet; FDDI (Fiber Distributed Data Interface); Frame Relay; Gigabit Ethernet; Switched Networks; and VLAN (Virtual LAN)


NIST's High Speed Networking Technologies page

Doug Lawlor's High Bandwidth page


Information Publishing

Information publishing in the context of this book is the publishing of information on computer networks. A Web server is a good example of an information publishing system. The Web browser is the container into which Web information is published. This section is about the standards and protocols used for information publishing and about future directions in information publishing.

By far, the most practical way to publish information is with Web technologies. The Internet and intranets provide the platform for disseminating information either internally or externally. End users can visit Web sites to obtain published information or have it automatically "pushed" to them using technologies described under "Broadcast News Technology," "Marimba Castanet," "PointCast," and "Push."

Publishing information is more than creating content. It is also about managing security, document flow, copyright, and other factors. Document management is a science unto itself. See "Document Management" for a discussion of services for storing, tracking, versioning, indexing, and searching for documents, as well as providing "audit trails" to track who has read and altered a document, if necessary. This is usually done on internal networks.

The Windows and Macintosh environments were important in developing the concept of compound documents. A compound document starts as a document created in an application like Microsoft Word, i.e., it starts as a text document. The document is viewed as a container that can hold objects such as graphics, voice clips, and video clips. These objects are either embedded or linked. An embedded object such as a picture travels with the document but can still be edited using an appropriate application. A linked object may be stored in another location. The document simply contains a link to the object at that location and may display it as well. For example, a document might contain a link to a picture or to some spreadsheet data. The advantage of links is that if the linked object is edited or updated, the contents of the compound document change as well.
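The embedded-versus-linked distinction can be modeled in miniature. This is a hypothetical sketch (the class and store names are invented for illustration): an embedded object is a copy that travels with the document, while a linked object is fetched from its source, so edits to the source show up in the compound document.

```python
# Toy model of a compound document holding embedded and linked objects.

store = {"chart.png": "Q1 sales chart (v1)"}   # stand-in external storage

class Document:
    def __init__(self):
        self.parts = []

    def embed(self, content):
        self.parts.append(("embedded", content))   # a copy travels with the doc

    def link(self, key):
        self.parts.append(("linked", key))         # only a reference is stored

    def render(self):
        return [c if kind == "embedded" else store[c]
                for kind, c in self.parts]

doc = Document()
doc.embed(store["chart.png"])     # snapshot of the object as it is now
doc.link("chart.png")             # live reference to the same object

store["chart.png"] = "Q1 sales chart (v2)"   # the source object is updated
print(doc.render())  # ['Q1 sales chart (v1)', 'Q1 sales chart (v2)']
```

Only the linked copy reflects the update, which is exactly the advantage (and the fragility) of linking described above.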

Web pages are compound documents that can hold text and individual objects like pictures, sounds, videos, Java applets, and ActiveX controls. An electronic mail message with an attachment such as a graphic is also a compound document. The original purpose of compound documents was to provide a single place where users could store all the elements related to a document and, if necessary, send the document to someone else.

Document Interchange Standards

The exchange of computer information would be impossible without character-encoding standards such as ASCII (American Standard Code for Information Interchange). ASCII identifies each character with a 7-bit code and provides an extended character set using 8-bit codes. Almost every computer recognizes the ASCII code set, making file exchange possible without conversion in most cases. However, ASCII does not support document formatting, such as page layout, paragraph alignment, and font selections. Preserving document formats during file exchange is essential.
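ASCII in practice: each character maps to a code that fits in 7 bits, which is why plain text moves between systems without conversion. Python's built-in ord and chr expose the codes directly:

```python
# Each ASCII character is a 7-bit code (value below 128); the mapping is
# the same on every system that recognizes the ASCII code set.

text = "ASCII"
codes = [ord(c) for c in text]
print(codes)                            # [65, 83, 67, 73, 73]
print(all(c < 128 for c in codes))      # True: all fit in 7 bits
print("".join(chr(c) for c in codes))   # ASCII
```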

Document interchange standards attempt to provide universal document formatting. Users on different computer platforms should be able to exchange documents and retain formats even though the applications and operating systems are different. Document standards should at a minimum provide a way to describe the following document features in a language that is understood by any system that needs to open and display the document:

One way users can exchange documents is by using the same application from the same vendor across platforms. For example, users of Microsoft Word can exchange documents with users on any other platform where Word runs. Microsoft and other vendors have their own document standards for this purpose. However, this doesn't work on large networks like the Internet where people have their own preferences for applications. What is needed is a pervasive document standard. Some of the standards that have been developed are outlined here.

EDI (Electronic Data Interchange)

EDI is a business-to-business electronic document exchange standard from ANSI (American National Standards Institute). It defines structures for business forms such as purchase orders, invoices, and shipping notices and provides a way for organizations to exchange those forms over communication links. See "EDI (Electronic Data Interchange)" and "Electronic Commerce" for more information.

Acrobat, Adobe Systems Inc.

Adobe Systems' Acrobat provides portable document exchange capability that lets document recipients view a document as it was formatted. It is also widely used as a document interchange format on the Internet and on the Web. For information on Adobe Acrobat, see "Acrobat." Adobe's Web site is at http://www.adobe.com.

MIME (Multipurpose Internet Mail Extension)

MIME is an Internet standard that provides a way to include different types of data, such as graphics, audio, video, and text, in electronic mail messages. See "MIME (Multipurpose Internet Mail Extension)" for more information.
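A small sketch of MIME in practice, using Python's standard email package (the addresses and the image bytes are placeholders, not real data):

```python
# Building a MIME message that carries both text and an image
# attachment. Adding the attachment turns the message into a
# multipart/mixed container, the compound-document form of e-mail.
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Report with chart"
msg["From"] = "alice@example.com"          # placeholder addresses
msg["To"] = "bob@example.com"
msg.set_content("The chart is attached.")  # text/plain body part

fake_gif = b"GIF89a" + bytes(10)           # placeholder image data
msg.add_attachment(fake_gif, maintype="image",
                   subtype="gif", filename="chart.gif")

print(msg.get_content_type())   # multipart/mixed
```

Each part carries its own Content-Type header, which is how the receiving mail reader knows to display the text and hand the GIF data to an image viewer.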

SGML (Standard Generalized Markup Language)

SGML is a portable document language that defines the structure and content of a document. SGML documents contain markup to identify components like paragraphs and headers, thus making the documents hardware and software independent. The markup is interpreted on the receiving system to re-create the document's structure and formatting. See "SGML (Standard Generalized Markup Language)" for more information.

HTML (Hypertext Markup Language)

HTML is the page description language of the Web. It is derived from SGML, with a fixed set of tags and Web-specific features such as hyperlinks. Documents written in HTML will display in a Web browser on any system. Many companies and organizations are advancing the HTML standard. Microsoft and Netscape keep extending HTML to provide more features in their Web browsers, and the W3C improves the official standard. Refer to "HTML (Hypertext Markup Language)" for more information.
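Because HTML markup describes structure rather than pixels, any program can recover that structure. A minimal sketch using Python's standard html.parser (the page content is invented):

```python
# Walking the tag structure of an HTML snippet. A browser does the
# same kind of parse, then decides how to render each element.
from html.parser import HTMLParser

class TagLister(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)     # record each element as it opens

page = "<html><body><h1>Title</h1><p>Hello <b>Web</b></p></body></html>"
parser = TagLister()
parser.feed(page)
print(parser.tags)   # ['html', 'body', 'h1', 'p', 'b']
```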

RELATED ENTRIES (These entries are hyperlinked on the book's CD-ROM)

Component Software Technology; Compound Documents; Document Management; Groupware; Imaging; Web Technologies and Concepts; and Workflow Management



The Internet is a global web of interconnected computer networks. It is a network of networks that link schools, libraries, businesses, hospitals, federal agencies, research institutes, and other entities into a single, large communication network that spans the globe. The underlying connections include the dial-up telephone network, satellite and ground-based microwave links, fiber-optic networks, and even the cable TV (CATV) network. The actual network cannot be mapped at any one time because new computers and networks are constantly joining the network and electronic pathways are constantly changing.

While the Internet was originally conceived as a communication network for researchers, today it is used by millions of people in business, in education, or for just plain communication. The thousands of interconnected networks that make up the Internet include millions of computers and users. It provides the basis for global electronic mail and information exchange via the Web (World Wide Web) and other services.

The Internet grew out of an earlier U.S. Department of Defense project, the ARPANET (Advanced Research Projects Agency Network), that was put into place in 1969 as a pioneering project to test packet-switching networks. ARPANET provided links between researchers and remote computer centers. In 1983, the military communication portion of ARPANET was split off into MILNET (Military Network), although cross-communication was still possible. ARPANET was officially dismantled in 1990. Its successor, the Internet, continues to grow.

The Internet is based on TCP/IP (Transmission Control Protocol/Internet Protocol), an open internetwork communication protocol. TCP/IP networks consist of router-connected subnetworks that are located all over the world. Packet-switching techniques are used to move packets from one subnetwork to another.

No person, government, or entity owns or controls the Internet. The process of setting standards on the Internet is handled by organizations based on input from users. These organizations and the standards process are covered under "Internet Organizations and Committees."

Figure I-7 shows roughly how the Internet is structured. At the bottom, organizations and home users connect to local ISPs (Internet service providers). ISPs are in turn connected to regional networks with data pipes that are "fat" enough to handle all the traffic produced by the ISP's subscribers. The regional networks are connected to the U.S. backbone network, which has even bigger data pipes. The U.S. backbone is also connected internationally. The Internet backbone is discussed fully under "Internet Backbone."

FIGURE I-7. The Internet

Anyone interested in the history of the Internet should visit the Internet Society's page at http://www.isoc.org/internet-history.

RELATED ENTRIES (These entries are hyperlinked on the book's CD-ROM)

Internet Backbone; Internet Connections; Internet Organizations and Committees; Intranets and Extranets; IP (Internet Protocol); ISPs (Internet Service Providers); NII (National Information Infrastructure); Standards Groups, Associations, and Organizations; TCP (Transmission Control Protocol); TCP/IP (Transmission Control Protocol/Internet Protocol); and Web Technologies and Concepts


Information on the Internet

Internet Society

World Wide Web Consortium

IETF (Internet Engineering Task Force)

InterNIC Directory and Database Services

InterNIC Directory of Directories (links to resources on the Internet)

U.S. National Information Infrastructure Virtual Library

The InterNIC's search page for RFCs (requests for comment)

All the RFCs, listed in order with author names and descriptions (a very long list)

FYI on Questions and Answers to Commonly asked "New Internet User" Questions

FYI on Questions and Answers to Commonly asked "Experienced Internet User" Questions

Yahoo!'s Internet links page

Prof. Jeffrey MacKie-Mason's Telecom Information Resources on the Internet

InternetSourceBook.com guide to Internet companies

Internet-related site listings


Internet Backbone

The structure of the Internet backbone has changed radically over the years. Not only has the data rate increased, but the topology has changed extensively. This topic describes the Internet's backbone topology and how global, national, regional, and local ISPs (Internet service providers) fit into the picture.

As explained under the topic "Internet," the Internet began as the ARPANET in the late 1960s. That network grew through the 1970s with funding from the U.S. government, which provided an internetwork for the U.S. Department of Defense, government agencies, universities, and research organizations.

By 1985, the network was congested enough that the NSF (National Science Foundation) started evaluating new network designs. It built a network, called NSFNET, that connected six sites with 56-Kbit/sec lines. The TCP/IP protocol tied the network segments together and provided traffic routing through gateways (now called routers). NSFNET formed a core that other regional networks could connect into.

An outline of the NSFNET history along with interesting maps and pictures is available at http://www.nlanr.net/INFRA/NSFNET.html.

During the years from 1987 to 1990, the backbone was upgraded to T1 circuits, and management was passed over to Merit. NSF also realized that it could not fund NSFNET forever. In addition, many private organizations wanted to get involved and had the resources to support the network. In 1990, NSF began the process of commercializing the network. It helped Merit, IBM, and MCI form ANS (Advanced Network and Services), which took control of NSFNET and eventually upgraded the backbone to run at DS-3 (45 Mbits/sec).

By 1992, NSF had defined a new architecture that would supersede NSFNET. The new network was to consist of the following features and components:

In 1994, NSF awarded the contract for the vBNS to MCI; the Routing Arbiter contract to Merit (in partnership with the Information Sciences Institute at the University of Southern California); and the NAP contracts to MFS Communications in Washington, D.C.; Sprint in New York; Ameritech and Bellcore (now Lucent) in Chicago; and Pacific Bell and Bellcore in San Francisco.

The vBNS is described under "vBNS (Very high speed Backbone Network Service)." The remainder of this topic outlines the current structure of the Internet based on the NSF design and contracts issued in 1994.

NAPs and MAEs

NAPs are Internet exchange points. ISPs connect into these exchange points to do two things: exchange traffic with other ISPs (called peering) and sell transit services to other ISPs. Transit services are used by smaller regional and local ISPs that need to send traffic across a national ISP's network. Internet exchanges provide layer 2 switching services only, which means that no routing takes place at the exchange itself; instead, ISPs connect their own routers directly to the exchange. The general agreement among NAP users is that they exchange routing information and that traffic passing through the NAP is not filtered, examined, or tampered with.

While the four original NAPs were funded by the NSF, many other Internet exchanges have been built:

MFS Communications maintains a number of MAEs, including MAE EAST in Washington, D.C., which connects the major ISPs as well as European providers. In June of 1996, MAE EAST's FDDI switch was switching 380 Mbits/sec. MFS also maintains MAE WEST in California's Silicon Valley, as well as MAE CHICAGO, MAE DALLAS, MAE HOUSTON, MAE LOS ANGELES, and MAE NEW YORK. A typical MFS MAE consists of a switching platform that combines an Ethernet switch, an FDDI concentrator, and/or an FDDI switch. All the devices are linked with FDDI and provide ISPs with a choice of connection options.

Figure I-8 illustrates how the NAPs and MAEs provide traffic exchange points for the national ISPs. There are many national ISPs, four NAPs, and many MAEs. Note that the dotted lines show typical connections between Web clients and servers. Not shown are the FIXs (Federal Internet exchanges) at the University of Maryland and NASA Ames Research Center (Mountain View, California) that provide the connection points for federal networks and some international traffic.

FIGURE I-8. The Internet backbone hierarchy

CERFnet maintains an interesting map of the NAPs and the ISPs connected into them at http://www.cerf.net/cerfnet/about/interconnects/orig-interconnects.html.

LAPs, such as those operated by CRL Network Services, are designed to keep packets from having to travel outside a metropolitan (or regional) area to a NAP, MAE, or other switching center and then back into the same area to reach their destination. CRL's LAPs allow regional customers to establish frame relay PVC (permanent virtual circuit) connections to all other LAP participants. The services are available in most major U.S. cities and help ease traffic problems on the Internet by keeping local traffic local. Reducing the number of hops reduces packet loss and corruption. The CRL LAPs are also connected into the NAPs and MAEs.

Another type of Internet exchange is offered by companies like Digital. Its Digital IX, located in Palo Alto, California, provides a combined switching and commercial data center. The IX provides ISPs and their customers with a mission-critical full-time operation, a full range of Internet services, and a choice of telecommunication carriers such as Pacific Bell and MFS Telecom. Service providers can put their production systems (i.e., server farms) at a major hub of the Internet and receive system management and administration services. The charges for reaching a hub over a dedicated line are eliminated, and ISPs benefit from 7x24 staffing, redundant electrical power, and high-speed links. Digital IX information is at http://www.ix.digital.com.

The Routing Hierarchy

While NAPs and MAEs provide switching, they don't provide routing services. National ISPs connect their routers to the NAP and MAE switches. NAPs and MAEs are at the top of the routing hierarchy. Next down in this hierarchy are routers connected to regional ISPs. Still further down are routers connected to local ISPs.

Packets that are destined for networks across the country or around the planet probably go through a NAP or MAE, although in many cases, a national ISP may have a direct link to a remote destination network, thus avoiding the NAP or MAE. In other cases, packets may go through LAPs.

This hierarchy minimizes the amount of routing information that local and regional routers need to maintain. For example, routers in local areas only need to keep track of networks that are in the local area. If one of these routers receives a packet for an unknown network, it forwards the packet to a higher-level router. That router may know what to do with the packet. If not, it will also forward the packet to the next higher-level router. This process may continue until the packet reaches the top-level routers connected to the NAPs and MAEs. As explained next, these routers theoretically know about all other networks connected to the Internet and are able to forward the packet appropriately.
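The forward-it-up-the-hierarchy behavior is just a default route in a routing table. A toy sketch using Python's ipaddress module, with invented networks and next-hop names:

```python
# A local router's table: it knows its own networks, and 0.0.0.0/0
# (the default route) stands for "everything else," pointing at the
# next router up the hierarchy. Forwarding picks the most specific
# (longest-prefix) matching route.
import ipaddress

local_table = {
    "192.168.1.0/24": "deliver locally",
    "192.168.2.0/24": "deliver locally",
    "0.0.0.0/0":      "forward to regional router",   # default route
}

def next_hop(table, addr):
    ip = ipaddress.ip_address(addr)
    best = max((ipaddress.ip_network(net) for net in table
                if ip in ipaddress.ip_network(net)),
               key=lambda net: net.prefixlen)          # longest prefix wins
    return table[str(best)]

print(next_hop(local_table, "192.168.2.9"))  # deliver locally
print(next_hop(local_table, "18.7.22.69"))   # forward to regional router
```

A regional router runs the same logic over a bigger table, with its own default route pointing at a backbone router; only the top-level routers need (in theory) a route to everything.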

TIP: You can trace the route a packet takes from your computer to a destination with the traceroute utility in UNIX or the TRACERT command in Windows. For example, type TRACERT www.whitehouse.gov to see a list of router hops to the White House.

The Pacific Bell NAP in San Francisco is pictured in Figure I-9. This NAP is in an interim state as Pacific Bell upgrades from FDDI to ATM switching.

FIGURE I-9. Pacific Bell's San Francisco NAP (interim)

One of the defining features of NAP- and MAE-connected routers is that they keep track of other networks that are connected to the NAPs and MAEs so that packets can be routed anywhere on the Internet. Internet routers typically use the BGP (Border Gateway Protocol) to exchange routing information among themselves and thus learn about the networks connected to the NAPs and MAEs. Routers may also be programmed with routes by administrators.

The NSF sponsors the RA (Routing Arbiter) service, which builds a master routing table that includes all the networks on the Internet and provides a single place where ISP routers can obtain this information. Without the RA, the alternative would be for each router to query and obtain this information from all the other routers connected to a NAP or MAE. Still, not all ISPs choose to use the Routing Arbiter services. Instead, many ISPs have established agreements with one another to exchange routing information and network traffic, as discussed next.

Peering and Transit Agreements

ISPs that have routers connected to NAPs and MAEs establish what are called peering agreements with one another to exchange traffic. There is no requirement that ISPs establish peering agreements, nor do the NAP and MAE authorities get involved in these agreements. The agreements are strictly between ISPs who want to exchange information.

Peering arrangements are normally set up between ISPs of the same size in cases where both can benefit from the other's infrastructure. Usually, no money changes hands because the data flow and infrastructure usage are roughly equal between the two ISPs. However, smaller ISPs inevitably need to use the services of the larger ISPs, which creates an unbalanced relationship. Basically, the larger ISPs provide transit services by delivering packets across the national (or global) backbones to other networks, but the smaller ISPs cannot provide equal services.

In the past, many larger ISPs have allowed the smaller ISPs to use their national networks for free. But recently, this has changed. UUNET Technologies, one of the world's largest ISPs, announced in early 1997 that it would peer only with ISPs that can route traffic on a bilateral and equitable basis, and that it would no longer accept peering requests from ISPs whose infrastructures do not allow for the exchange of similar traffic levels. UUNET plans to peer only with ISPs that operate national networks with dedicated and diverse DS-3 (or faster) backbones and connections in at least four geographically diverse locations.

ISPs that connect into the NAPs and MAEs may form bilateral peering agreements of their own or sign MLPAs (multilateral peering agreements) with other NAP-attached networks. A typical MLPA has the participants agree to exchange routes at the NAP, which involves advertising routes via BGP-4 (Border Gateway Protocol version 4), and to exchange traffic among the customers of all the participating ISPs. ISPs are entitled to select routing paths of their choice among the MLPA-participating ISPs and can make additional peering agreements on their own. Participants usually agree to use the Routing Registry provided by the RA.

RELATED ENTRIES (These entries are hyperlinked on the book's CD-ROM)

Internet; Internet Connections; Internet Organizations and Committees; Intranets and Extranets; IP (Internet Protocol); ISPs (Internet Service Providers); NII (National Information Infrastructure); NSF (National Science Foundation) and NSFNET; Routers; Routing Protocols and Algorithms; and Web Technologies and Concepts


Information on the Internet

The Routing Arbiter

Pacific Bell's NAP information

Ameritech's NAP information

MFS's MAE information

NSFNET information

Russ Haynal's ISP page

Bill Manning's Exchange Point Information

David Papp's Connectivity page (links to ISP, NAPs, MAE, etc.)

Gassman's Internet Service Provider Information Pages

MicroWeb Technology, Inc. (good links to other sites)

Randall S. Benn's ISP Information page

G. Huston's ISP Peering paper


Internet Organizations and Committees

The Internet is a collection of autonomous and interconnected networks that implement open protocols and standards. The protocols and standards are defined by organizations and committees after working their way through review and standardization processes.

No person, government, or entity owns or controls the Internet. Instead, a volunteer organization called ISOC (Internet Society) guides the future of the Internet. It appoints a technical advisory group called the IAB (Internet Architecture Board) to evaluate and set standards.

Input on standards can come from anybody: individuals, research groups, companies, and universities. An Internet draft is a preliminary document that authors use to describe their proposals and solicit comments. It may eventually become an RFC (request for comment), which is a formal document describing a new standard. Both Internet drafts and RFCs are submitted to the IESG (Internet Engineering Steering Group). RFCs are the official publications of the Internet and have been used since 1969 to describe and get comments about protocols, procedures, programs, and concepts.

In general, the best place to go for information about all the Internet organizations and committees and for a starting place for further links is the IETF Web site at http://www.ietf.org.


ISOC (Internet Society)

The ISOC is a nongovernmental international organization that promotes global cooperation and coordination for the Internet and its internetworking technologies and applications. The ISOC approves appointments to the IAB from nominees submitted by the IETF nominating committee. The ISOC Web site is at http://www.isoc.org.


IAB (Internet Architecture Board)

The IAB is a technical advisory group of the ISOC. Its responsibilities are to appoint new IETF chairs and IESG candidates, serve as an appeals board, manage editorial content and publication of RFCs, and provide services to the Internet Society. IAB's Web site is at http://www.iab.org/iab.


IESG (Internet Engineering Steering Group)

The IESG is chartered by the ISOC to provide technical management of IETF activities and the Internet standards process. The IESG Web site is at http://www.ietf.org/iesg.htm.


IETF (Internet Engineering Task Force)

The IETF is a large, open, international community of network designers, operators, vendors, and researchers concerned with the evolution of the Internet architecture and the smooth operation of the Internet. It provides technical and development services for the Internet and creates, tests, and implements Internet standards, which are eventually approved and published by the ISOC. The actual technical work of the IETF is done in its working groups. The IETF Web site is at http://www.ietf.org.


IRTF (Internet Research Task Force)

The purpose of the IRTF is to create research groups that focus on Internet protocols, applications, architecture, and technology. The groups are small and long term and are put together to promote the development of research collaboration and teamwork in exploring research issues. The IRTF Web site is at http://www.irtf.org.


IAHC (International Ad Hoc Committee)

The IAHC is an international multi-organization effort for specifying and implementing policies and procedures relating to DNS (Domain Name System) assignment procedures. The IAHC Web site is at http://www.iahc.org.


IANA (Internet Assigned Numbers Authority)

The IANA acts as the clearinghouse to assign and coordinate the use of numerous Internet protocol parameters such as Internet addresses, domain names, protocol numbers, and more. The IANA Web site is at http://www.isi.edu/iana.


InterNIC (Internet Network Information Center)

The InterNIC is a cooperative activity of the NSF (National Science Foundation), AT&T, and Network Solutions, Inc. It provides domain name registration and IP network number assignment, a directory of directories, white page services and publicly accessible databases, and tools and resources for the Internet community. The InterNIC Web site is at http://www.internic.net.


W3C (World Wide Web Consortium)

The W3C was founded in 1994 as an industry consortium to develop common protocols for the evolution of the World Wide Web. It maintains vendor neutrality and works with the global community to produce specifications and reference software that are made freely available throughout the world. The W3C Web site is at http://www.w3.org.


FNC (Federal Networking Council)

FNC membership consists of one representative from each of 17 U.S. federal agencies (for example, NASA, National Science Foundation, Dept. of Education, and Dept. of Commerce's NTIA) whose programs utilize interconnected Internet networks. The FNC Web site is at http://www.fnc.gov.


IETF (Internet Engineering Task Force)


IP Switching

IP switching is a scheme for using the intelligence in layer 3 protocols to add functionality to layer 2 switching. Layer 3 is the network layer, and it provides routing services. The technology is designed to take advantage of networks that are built around switches (as opposed to networks built with repeater hubs and routers, for example).

IP switching works by locating paths in a network using routing protocols and then forwarding packets along that path at layer 2. This technology is called by many names, including layer 3 switching, multilayer switching, short-cut routing, and high-speed routing. The lack of a naming convention, along with a variety of techniques from different vendors and standards bodies, has only made things more confusing.

The technology was originally made popular by Ipsilon with its IP Switching technology, so the name IP switching has stuck. However, IP is not the only protocol under consideration. Multilayer switching is probably a better name since it implies switching of multiple protocols.

The vast majority of today's corporate networks have become intranets that implement TCP/IP protocols and Web technologies. On these networks, traffic rarely stays within a local area but travels all over the internetwork. Users hyperlink to servers on the other side of a network by clicking buttons on their Web browser. They quickly jump out of their local LAN across routers that are already strained by traffic loads.

One solution is to install new super-routers that process millions of packets per second. But these devices are expensive and may represent the wrong investment if your network strategy is to build switched networks.

The goal of IP switching is to reduce the need to send a packet through a router (or routers) when a more direct layer 2 path is available between a source and a destination. Legacy routers are slow and can't handle the throughput now required on corporate networks. IP switching techniques generally first determine a route through a network using layer 3 routing protocols. The route may be stored for later reference or used only once. Packets are then quickly sent through a layer 2 virtual circuit that bypasses the router.
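The route-once, switch-thereafter idea can be sketched as a flow cache. This is a generic illustration of the technique, not any particular vendor's implementation; the circuit identifiers and addresses are invented:

```python
# First packet of a flow: consult the (slow) layer 3 routing function
# and record the resulting layer 2 virtual circuit. Later packets of
# the same flow hit the cache and are switched without routing.

route_lookups = 0
flow_cache = {}         # (src, dst) -> virtual circuit id

def slow_path_route(src, dst):
    global route_lookups
    route_lookups += 1                  # count expensive route lookups
    return f"vc-{len(flow_cache)}"     # invented circuit id

def forward(src, dst):
    key = (src, dst)
    if key not in flow_cache:           # first packet: route it
        flow_cache[key] = slow_path_route(src, dst)
    return flow_cache[key]              # later packets: switch on the VC

for _ in range(1000):                   # a long flow of packets
    forward("10.0.0.1", "10.9.9.9")

print(route_lookups)    # 1 -- only the first packet took the slow path
```

The performance win comes from exactly this ratio: one slow-path routing decision amortized over the thousands of packets in a typical flow.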

One way to think about these schemes is to consider how you might find your way around in a big city. One method is to ask directions at each intersection, where someone points you to the next intersection where you again get directions. This is how traditional routing works. Packets move from one router to the next, with each router pointing the packet in the general direction of the destination. The process is relatively slow because routers must process each packet in turn.

IP switching could be compared to getting on a bus that takes you directly to your destination. The only thing is, you still need to know which bus will take you there. Initially, you must ask someone about the bus routes or look them up on a chart. This is analogous to initially finding a path to a destination. Some techniques establish virtual circuits in advance, then simply pick a circuit that will get a packet to its destination. Another technique is to discover paths on the fly.

Performance improvements are considerable. While routers typically handle 500,000 to 1 million packets per second, IP switching techniques can provide up to 10 times that rate.

Sounds great, except that over 20 vendors are implementing different IP switching techniques. At the time of this writing, many are still under development. Some vendors have joined forces. In addition, the IETF and IEEE are working out interoperability issues and coming up with standards. Several of the most important techniques are described in the next section. Most discussions of IP switching involve ATM switching backbones, but other switching environments also figure into the picture, such as Gigabit Ethernet.

Switching Techniques

The following switching techniques provide a good sampling of what the industry has come up with to provide IP switching. Many specifications are in the early stages of development. Some may become less important than others. You can refer to appropriate sections in this book for a more detailed discussion of each technique.

In particular, refer to "MPLS (Multiprotocol Label Switching)" for more information about the IETF label-switching specifications, or visit the IETF's Web site on this topic at the address listed under "Information on the Internet" at the end of this section.

Noritoshi Demizu of Sony has created the Multi Layer Routing Web page to track the latest developments in this technology. The Web address is given at the end of this topic under "Information on the Internet."


Ipsilon IP Switching

Ipsilon pioneered IP Switching, which identifies a long flow of packets and switches it at layer 2 if possible, thus bypassing routers and improving throughput. Ipsilon modifies an existing ATM switch by removing software in the control processor that performs normal ATM functions. It then attaches an IP switch controller that communicates with the ATM switch. Ipsilon's technique is appropriate for in-house LANs and campus networks. See "Ipsilon IP Switching."


Tag Switching, Cisco Systems

Like IP Switching, Cisco's Tag Switching is both a proprietary Cisco technology and a generic term for various tag switching schemes. It assigns tags (also called labels) to packets that switching nodes read and use to determine how the packet should be switched through the network. This scheme is designed for large networks and for use on the Internet. In fact, one of the reasons for tag switching is to reduce the size of the routing tables, which are becoming too large and unmanageable in routers on the Internet. See "Tag Switching."


Fast IP, 3Com

3Com's scheme focuses on traffic policy management, that is, prioritization and quality of service. Fast IP's focus is on ensuring that an end system with priority data, such as real-time audio or video, gets the bandwidth it needs to transmit the data. Workstations and servers tag frames as appropriate. The tag is then read by switches, and if switching rather than routing can be performed, the frames are sent over a wire-speed circuit. Fast IP supports other protocols, such as IPX, and runs over switched environments other than ATM. Clients require special software since they set their own priorities. See "Fast IP."


ARIS (Aggregate Route-based IP Switching)

ARIS, like Cisco's Tag Switching, attaches labels to packets that guide them through a switched network. IP switching is usually associated with ATM networks, but ARIS can be extended to work with other switching technologies. Edge devices are entry points into the ATM switching environment and have routing tables that are consulted when mapping layer 3 routes to layer 2 virtual circuits. Virtual circuits extend from an ISR (integrated switch router) on one edge of the network to an ISR on the other edge. Aggregation features allow two or more computers on one side of an ATM network to send their datagrams through the same VC, thus reducing network traffic. See "ARIS (Aggregate Route-based IP Switching)."


MPOA (Multiprotocol over ATM)

MPOA is an ATM Forum specification for overlaying layer 3 network routing protocols like IP (Internet Protocol) over ATM. In this scheme, a source client first requests a route from a route server. The route server performs route calculation services and defines optimal routes through the network. An SVC (switched virtual circuit) is then established across subnet (VLAN) boundaries without any further need for routing services. An important feature is that routing functions are distributed between route servers and switches at the edge of the network. See "IP over ATM" and "MPOA (Multiprotocol over ATM)."

RELATED ENTRIES (These entries are hyperlinked on the book's CD-ROM)

ARIS (Aggregate Route-based IP Switching); ATM (Asynchronous Transfer Mode); ION (Internetworking over NBMA); I-PNNI (Integrated-Private Network-to-Network Interface); IP over ATM; IP over ATM, Classical; Ipsilon IP Switching; MPLS (Multiprotocol Label Switching); MPOA (Multiprotocol over ATM); NHRP (Next Hop Resolution Protocol); Switched Networks; Tag Switching; and VLAN (Virtual LAN)


Information on the Internet

The ATM Forum

IETF Multiprotocol Label Switching site

Noritoshi Demizu's Multi Layer Routing page

HTML versions of more than 50 major RFCs


Middleware and Messaging

Middleware is a layer of software or functionality that sits between one system (usually a client) and another system (usually a server) and provides a way for those systems to exchange information or connect with one another even though they have different interfaces. Messaging has become integral to the way middleware is implemented.

Middleware and messaging may be employed within an organization to tie together its LAN and legacy systems, its diverse clients and back-end databases, and its local and remote systems. Middleware is also coming into widespread use on the Internet as a way to run sophisticated client/server applications with integral transaction processing, security, and management, something that is difficult to do with current Web protocols such as HTTP (Hypertext Transfer Protocol).

Here are some basic definitions of middleware:

Types of Middleware

The following types of middleware are available for building distributed applications, heterogeneous networks, and Internet-based distributed software systems.

Complete development environments exist which provide many of the features described above, in addition to authentication and authorization services, distribution and management services, directory services, and time services.

See "DCE (Distributed Computing Environment), The Open Group" for an example of middleware that supports building heterogeneous networks. See "Netscape ONE Development Environment" for an example of a development environment that takes advantage of Web protocols.

Web Middleware

Using middleware for Web-based client/server development is quite different from using it in-house. Instead of tying together a variety of in-house clients and servers, applications must reach out to potentially millions of global users.

Many middleware vendors have successfully made the Web transition in their products. The usual technique is to add support for HTTP and HTML (Hypertext Markup Language). However, these protocols are really only appropriate for Web publishing, not for running sophisticated mission-critical applications over the Web that require transaction monitoring. An alternative technique is to bypass HTTP and its limitations. Many vendors do this using proprietary calls between the client and server. Another method is to use CORBA and its industry-standard IIOP (Internet Inter-ORB Protocol), which can bypass HTTP. The Web server only gets involved when the user first contacts the Web site. Components are downloaded and a connection-oriented session is initiated. In this case, the Web browser acts more like an Internet program launcher that gets out of the way.

One of the primary reasons for bypassing HTTP is that it does not provide state management. That is, clients send requests to servers, but there is no session connection between the systems. If a user clicks a button on a Web page that was just received, another connection must be set up. Client/server applications, on the other hand, rely on state management. A transaction usually involves some critical operation that must be monitored to completion. A session is established so that both systems can exchange data and status information about the session. Data may be written to multiple locations, and the monitor must ensure that all those writes are completed. Products that include this state management include Microsoft Transaction Server and BEA Systems' BEA Jolt (which uses Tuxedo transaction monitoring).
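The contrast between HTTP's stateless requests and the stateful session a transaction monitor maintains can be sketched as follows. All class and function names here are illustrative, not a real middleware API.

```python
# Minimal sketch contrasting HTTP-style stateless requests with the
# stateful session a transaction monitor maintains. Names are
# illustrative, not any vendor's actual interface.

def stateless_request(action):
    """HTTP-style: each request stands alone, with no memory of prior ones."""
    return f"handled {action}"

class Session:
    """Connection-oriented: both ends track the state of the exchange."""
    def __init__(self):
        self.writes = []          # pending writes for this transaction
    def write(self, location, data):
        self.writes.append((location, data))
    def commit(self):
        # A monitor must see every write complete before committing.
        done = all(data is not None for _, data in self.writes)
        return "committed" if done else "aborted"

s = Session()
s.write("db1", "row")     # data written to multiple locations...
s.write("db2", "row")
print(s.commit())         # ...and monitored to completion: committed
```

The point of the sketch: the `Session` object remembers what happened between calls, which is exactly what a plain HTTP exchange cannot do without additional machinery.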

Note that many middleware products are also development environments that provide middleware interface components as well as tools for building Java and ActiveX components to work with the systems. This topic is carried further under "Web Middleware and Database Connectivity."

RELATED ENTRIES (These entries are hyperlinked on the book's CD-ROM)

Client/Server Computing; Component Software Technology; CORBA (Common Object Request Broker Architecture); Database Connectivity; DCE (Distributed Computing Environment), The Open Group; DCOM (Distributed Component Object Model); Distributed Applications; Distributed Computer Networks; Distributed Database; Distributed Object Computing; IIOP (Internet Inter-ORB Protocol); Multitiered Architectures; Netscape ONE Development Environment; Object Technologies; ODBC (Open Database Connectivity), Microsoft; OLE DB; The Open Group; Oracle NCA (Network Computing Architecture); ORB (Object Request Broker); SQL (Structured Query Language); Sun Microsystems Solaris NEO; Transaction Processing; Web Middleware and Database Connectivity; and Web Technologies and Concepts


MOMA (Message-Oriented Middleware Association)

Innergy's middleware links

The Lewis Group's Roadmap to the Middleware Kingdom

AFP Technology Ltd. (search for "middleware")

Active Software's ActiveWeb

BEA Systems' Jolt

Bluestone Software's Sapphire

Digital Equipment's ObjectBroker

Expersoft's PowerBroker

Hewlett-Packard's ORB Plus

IBM's Component Broker Connector (CBConnector)

IONA Technologies' Orbix

Microsoft's Transaction Server

Oracle's NCA

PeopleSoft, Inc.


Sun Microsystems' Solaris NEO

Visigenic's VisiBroker

Wayfarer Communications' QuickServer


Mobile IP

Traditionally, IP has assumed that a host on the Internet always connects to the same point of attachment. Any person or system that wants to send datagrams to that host addresses the datagrams to an IP address that identifies the subnetwork where the host is normally located. If the host moves, it will not receive those datagrams.

Today, a growing number of Internet users move their systems from place to place. If you normally connect to an ISP (Internet service provider) to establish an Internet connection and receive Internet mail, you'll need to dial long-distance into that ISP if you travel to another state or country. The alternative is to have a different IP address at your destination location, but this does not help if people are used to contacting you at your normal IP address.

Mobile IP, as defined in IETF RFC 2002, provides a mechanism that accommodates mobility on the Internet. It defines how nodes can change their point of attachment to the Internet without changing their IP address. The complete RFC is located at http://ds2.internic.net/rfc/rfc2002.txt.

Mobile IP assumes that a node's address remains the same as the node moves from one network location to another. It also allows a user to change from one media type to another (e.g., from Ethernet to a wireless LAN).

Mobile IP consists of the entities described in the following list. Before describing the entities, it is helpful to know about the basic operation of Mobile IP. A mobile user has a "home" network where his or her computer is normally attached. The "home" IP address is the address assigned to the user's computer on that network. When the computer moves to another network, datagrams still arrive for the user at the home network. The home network knows that the mobile user is at a different location, called the foreign network, and forwards the datagrams to the mobile user at that location. Datagrams are encapsulated and delivered across a tunnel from the home network to the foreign network.

Tunneling is a process of encapsulating datagrams into other datagrams for delivery across a network. Encapsulation is required because the datagrams are addressed to the network from which they are being shipped! By encapsulating the datagrams, the outer datagram can be addressed to the foreign network where the mobile user now resides. Note that the mobile node uses its home address as the source address of all IP datagrams that it sends, even when it is connected to a foreign network.

The important point is that the mobile node retains its IP address whether it is connected to the home network or some foreign network. When the mobile system is away from its home network, the home agent on the home network maintains a "care-of" address, which is the IP address of the foreign agent where the mobile node is located.

When a mobile node is attached to its home network, it operates without mobility services. If the mobile node is returning from a foreign network, it goes through a process that reregisters it as being attached to the home network rather than the foreign network. The details of this procedure are outlined in RFC 2002.

When a node moves to a foreign network, it obtains a "care-of" address on the foreign network. The mobile node, operating away from home, then registers its new "care-of" address with its home agent through an exchange of registration messages. When datagrams arrive for the mobile node at the home network, the home agent intercepts them and tunnels them to the mobile node's "care-of" address, which as mentioned is usually the foreign agent router. This router then decapsulates the datagrams and forwards them to the mobile host.
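The delivery path described above can be sketched in RFC 2002 terms. The addresses below are illustrative (drawn from documentation ranges); a real implementation would use IP-in-IP encapsulation rather than nested dictionaries.

```python
# Sketch of the Mobile IP delivery path: the home agent intercepts and
# tunnels, the foreign agent decapsulates. Addresses are illustrative.

HOME_ADDRESS = "192.0.2.10"        # mobile node's permanent home address
CARE_OF_ADDRESS = "198.51.100.1"   # foreign agent the node registered with

def home_agent_intercept(datagram):
    """Home agent tunnels datagrams addressed to the absent mobile node."""
    if datagram["dst"] == HOME_ADDRESS:
        # Encapsulate: the outer header targets the care-of address.
        return {"src": "home-agent", "dst": CARE_OF_ADDRESS,
                "payload": datagram}
    return datagram

def foreign_agent_deliver(tunneled):
    """Foreign agent strips the outer header and hands over the original."""
    inner = tunneled["payload"]
    assert inner["dst"] == HOME_ADDRESS   # inner address never changed
    return inner

pkt = {"src": "203.0.113.5", "dst": HOME_ADDRESS, "payload": "hello"}
tunneled = home_agent_intercept(pkt)
print(foreign_agent_deliver(tunneled)["payload"])  # hello
```

Notice that the inner datagram still carries the home address throughout, which is the whole point: correspondents never need to learn that the node has moved.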

RELATED ENTRIES (These entries are hyperlinked on the book's CD-ROM)

Internet; IP (Internet Protocol); Mobile Computing; Remote Access; TCP/IP (Transmission Control Protocol/Internet Protocol); Virtual Dial-up Services; and Wireless Communications


Mobile IP (RFC 2002)

IETF drafts (search for "mobile IP")


Object Technologies

Object-oriented technology moves software development beyond procedural programming into a world of reusable components that simplify application development. Operating systems and applications are created as multiple modules that are linked together to create a functional program. Any module can be replaced or updated at any time without the need to update the entire operating system or program. Modules may also be located in different places, thus supporting distributed computing and the Internet.

A Web browser can be thought of as a container into which users can add objects that provide additional functionality. For example, a user might connect with a Web server and download an object in the form of a Java applet or an ActiveX component that improves the feature set of the Web browser itself or provides some utility that runs inside the Web browser, such as a mortgage calculation program from a real estate Web site. See "Component Software Technology" and "Distributed Object Computing" for more information.

The entire Windows NT operating system is built upon an object-based architecture. Devices like printers and other peripherals are objects, as are processes and threads, shared memory segments, and access rights.

An object may be a self-contained program or package of data with external functions. An ATM (automated teller machine) is perhaps the best example of an object in the real world. You don't care about the internal workings of the machine. You request cash and the machine delivers. The machine has an external interface that you access to get something, such as cash or an account balance.

Program and database objects are the same. They are like boxes with internal code or data that perform some action when the external interface is manipulated. The external interface has methods, which can be compared to the buttons on the ATM that make it do things. For example, an object might display columnar data sorted on a column selected by a user. A window that displays a list of files in a graphical desktop operating system (e.g., Windows 95) is an example of an object. It is a box with data (the list of files) that has controls for manipulating the data. You can click the button at the top of any column to sort the data on that column.

Objects are combined to create complete programs. Objects interact with one another by exchanging messages. One object will act as a server and another object will act as a client. They may switch those roles at any time. If a program needs updating, only the objects that need updating are replaced.

Objects are categorized into hierarchical classes that define different types of objects. Parent classes have characteristics that are passed down to subclasses of the object. This is called class inheritance. Inheritance can be blocked if necessary. In a database, the class "people" may have the subclasses of "male" and "female." Each subclass has generalized features inherited from its parents, along with specialized features of its own.

If we consider the ATM at your bank a parent class, then the much smaller credit card machines at supermarket checkout lines could be considered a subclass. They are designed to the same specifications as the larger ATM, but without some features, such as the ability to check your account balance.
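The ATM/checkout-terminal analogy maps directly onto class inheritance. The classes below are purely illustrative.

```python
# The ATM analogy above, sketched as class inheritance. The subclass
# inherits the parent's behavior but blocks one inherited feature.

class ATM:
    """Parent class: the full-service bank machine."""
    def withdraw(self, amount):
        return f"dispensed {amount}"
    def balance(self):
        return "balance available"

class CheckoutTerminal(ATM):
    """Subclass: inherits withdraw() but blocks the balance feature."""
    def balance(self):
        raise NotImplementedError("not offered at checkout")

t = CheckoutTerminal()
print(t.withdraw(40))   # behavior inherited unchanged from the parent
```

Blocking `balance()` in the subclass corresponds to the point above that inheritance can be blocked where a specialized class should not offer a parent feature.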

Objects talk to each other by sending messages or by establishing connection-oriented links. A message usually requests that an object do something. In a network environment, objects may be located on many different computers, and they talk to each other using network protocols. An ORB (object request broker) is a sort of message-passing bus that helps objects locate one another and establish communications. See "ORB (Object Request Broker)" for more information.

RELATED ENTRIES (These entries are hyperlinked on the book's CD-ROM)

ActiveX; Client/Server Computing; COM (Component Object Model); Component Software Technology; Compound Documents; CORBA (Common Object Request Broker Architecture); DCOM (Distributed Component Object Model); Distributed Object Computing; Java; Multitiered Architectures; OLE (Object Linking and Embedding); OLE DB; OMA (Object Management Architecture); ORB (Object Request Broker); and UNO (Universal Networked Object)


OMG (Object Management Group)

Object Database Management Group

W3C's Object page

Object Magazine Online

Object Basics tutorial
http://www.qds.com/people/apope/ap_object.html

Bob Hathaway's Object-Orientation FAQ

Ricardo Devis's Object Oriented page

Terry Montlick's object technology introduction (interesting)

Sysnetics (see links page)

Object links at Texas A&M
http://www.cs.tamu.edu/people/ganeshb/obj_tech.html

Object-Oriented Information Sources

Jeff Sutherland's Object Technology page


Packet and Cell Switching

Packet switching is a technique for transmitting packets of information through multiple linked networks. A packet is defined just above under "Packet." A cell is similar to a packet, except that it has a fixed length. ATM uses 53-byte cells. The advantage of switching fixed-length cells as opposed to variable-length packets is speed and deterministic data transmissions. This is described under "ATM (Asynchronous Transfer Mode)" and "Cell Relay." The rest of this section discusses packet switching in general, but cell switching has many of the same features.

A simple packet-switched network is shown in Figure P-1. Consider the redundant topology of this network. LANs are interconnected with routers to form a mesh topology. If the link between R1 and R2 goes down, packets from R1 can still travel through R3 and R4 to reach R2. Note that the router-connected network essentially allows any station to communicate with any other station. Packet-switched networks are often called any-to-any networks.

FIGURE P-1. A packet-switched network

Packets make the trip through the switched network in "hops." If computer A needs to send a packet to computer Z, the packet first travels to R1. R1 uses a store-and-forward procedure, in which it receives the packet and puts it in memory long enough to read the destination address. Looking in a routing table, it determines that the port connected to R2 is the best way to forward the packet to its destination.

Routers run routing protocols to discover neighboring routers and the networks attached to them. These protocols let routers exchange information about the network topology, which may change often as devices and links go down. See "Routers" and "Routing Protocols and Algorithms."
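The hop-by-hop store-and-forward process described above can be sketched with simple next-hop tables. The table entries below are hypothetical, modeled on the R1-R4 mesh of Figure P-1.

```python
# Hop-by-hop store-and-forward through a mesh like Figure P-1.
# Each router looks up the destination network and picks a next hop;
# the table entries are illustrative only.

ROUTING_TABLES = {
    "R1": {"netZ": "R2"},     # normal path toward computer Z's network
    "R2": {"netZ": "local"},  # netZ is directly attached here
    "R3": {"netZ": "R4"},     # alternate path, used if R1-R2 is down
    "R4": {"netZ": "R2"},
}

def forward(router, dest_net):
    """Trace a packet's hops: each router stores it, reads the
    destination, and forwards it out the port toward the next hop."""
    path = [router]
    while ROUTING_TABLES[router][dest_net] != "local":
        router = ROUTING_TABLES[router][dest_net]
        path.append(router)
    return path

print(forward("R1", "netZ"))  # ['R1', 'R2']
print(forward("R3", "netZ"))  # ['R3', 'R4', 'R2'] -- the redundant route
```

The second call illustrates the mesh's redundancy: when the direct R1-R2 link fails, routing protocols would update these tables so traffic flows via R3 and R4 instead.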

As mentioned, the Internet is a packet-switched network that spans the globe. IP (Internet Protocol) is an internetwork protocol that defines how to put information into packets (called datagrams) and transmit those packets through the Internet (or your private intranet). IP provides a near-universal delivery mechanism that works on almost any underlying network. See "Datagrams and Datagram Services" and "IP (Internet Protocol)" for more information.

One interesting aspect of packet-switched networks is the ability to emulate circuits within the network. This gives the appearance of a straight-through wire from one location to another that provides faster delivery and better service. A virtual circuit is a permanent or switched path through a public packet-switched network that is set up by a carrier for a customer to use. Frame relay and ATM networks provide virtual circuit services.

Virtual circuits eliminate the overhead of routing but add administrative overhead such as path setup. For example, setting up a frame relay virtual circuit may require a call to the carrier and a wait of a few minutes to a few days. Customers contract with carriers for virtual circuits that have a specific data rate, called the CIR (committed information rate). The carrier will guarantee that it can supply this rate by not overbooking its network. Customers can go over this rate, but additional charges are applied and the traffic may be delayed. See "Frame Relay" and "Virtual Circuit" for more information.
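The CIR contract described above can be sketched as a simple policing check. The rate figure is illustrative, not a standard value.

```python
# Sketch of frame relay CIR policing: traffic up to the contracted CIR
# is guaranteed; bursts above it may still be carried, but typically at
# extra charge and subject to delay or discard. Figures are illustrative.

CIR_BPS = 256_000   # hypothetical contracted committed information rate

def classify(offered_bps):
    """Decide how the carrier treats the customer's offered load."""
    if offered_bps <= CIR_BPS:
        return "guaranteed"
    return "burst"   # carried only if network bandwidth is available

print(classify(200_000))  # guaranteed
print(classify(400_000))  # burst
```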

RELATED ENTRIES (These entries are hyperlinked on the book's CD-ROM)

ATM (Asynchronous Transfer Mode); Cell Relay; Circuit-Switching Services; Communication Services; Connection-Oriented and Connectionless Services; Data Communication Concepts; Datagrams and Datagram Services; Data Link Protocols; Digital Circuits and Services; Frame Relay; Internetworking; IP (Internet Protocol); Network Concepts; Network Layer Protocols; Packet; Point-to-Point Communications; Routers; Routing Protocols and Algorithms; TCP (Transmission Control Protocol); Transport Protocols and Services; Virtual Circuit; WAN (Wide Area Network); and X.25


Protocol Concepts

Network communication protocols are defined within the context of a layered architecture, usually called a protocol stack. The OSI (Open Systems Interconnection) protocol stack is often used as a reference to define the different types of services that are required for systems to communicate. Figure P-11 compares the OSI protocol stack to the more common protocols found today.

FIGURE P-11. Common protocol stacks

The lowest layers define physical interfaces and electrical transmission characteristics. The middle layers define how devices communicate, maintain a connection, check for errors, and perform flow control to ensure that one system does not receive more data than it can process. The upper layers define how applications can use the lower network layer services.

The protocol stack defines how communication hardware and software interoperate at various levels. Layering is a design approach that specifies different functions and services at levels in the protocol stack. Layering allows vendors to build products that interoperate with products developed by other vendors.

Each layer in a protocol stack provides services to the protocol layer just above it. The service accepts data from the higher layer, adds its own protocol information, and passes it down to the next layer. Each layer also carries on a "conversation" with its peer layer in the computer it is communicating with. Peers exchange information about the status of the communication session in relation to the functions that are provided by their particular layer.

As an analogy, imagine the creation of a formal agreement between two embassies. At the top, formal negotiations take place between ambassadors, but in the background, diplomats and officers work on documents, define procedures, and perform other activities. Diplomats have rank, and diplomats at each rank perform some service for higher-ranking diplomats. The ambassador at the highest level passes orders down to a lower-level diplomat. That diplomat provides services to the ambassador and coordinates his or her activities with a diplomat of equal rank at the other embassy. Likewise, diplomats of lower rank, who provide services to higher-level diplomats, also coordinate their activities with peer diplomats in the other embassy. Diplomats follow established diplomatic procedures based on the ranks they occupy. For example, a diplomatic officer at a particular level may provide language translation services or technical documentation. This officer communicates with a peer at the other embassy regarding translation and documentation procedures.

In the diplomatic world, a diplomat at one embassy simply picks up the phone and calls his or her peer at the other embassy. In the world of network communication, software processes called entities occupy layers in the protocol stack instead of diplomats of rank. However, these entities don't have a direct line of communication between one another. Instead, they use a virtual communication path in which messages are sent down the protocol stack, across the wire, and up the protocol stack of the other computer, where they are retrieved by the peer entity. This whole process is illustrated in Figures P-12 and P-13. Note that the terminology used here is for the OSI protocol stack. The more popular TCP/IP protocol suite uses slightly different terminology, but the process is similar.

As information passes down through the protocol layers, it forms a packet called the PDU (protocol data unit). Entities in each layer add PCI (protocol control information) to the PDU in the form of messages that are destined for peer entities in the other system. Although entities communicate with their peers, they must utilize the services of lower layers to get those messages across. SAPs (service access points) are the connection points that entities in adjacent layers use to communicate messages; they are like addresses that entities in other layers or other systems can use when sending messages to a system. When the packet arrives at the other system, it moves up through the protocol stack, and information for each entity is stripped off the packet and passed to the entity.

FIGURE P-12. Communication process between two separate protocol stacks

Figure P-13 illustrates what happens as protocol data units are passed down through the layers of the protocol stack. Using the previous diplomatic analogy, assume the ambassador wants to send a message to the ambassador at the other embassy. He or she creates the letter and passes it to an assistant, who is a diplomat at the next rank down. This diplomat places the letter in an envelope and writes an instructional message on the envelope addressed to his or her peer at the other embassy. This package then goes down to the next-ranking diplomat, who puts it in yet another envelope and writes some instructions addressed to his or her peer at the other embassy. This process continues down the ranks until it reaches the "physical" level, where the package is delivered by a courier to the other embassy. At the other embassy, each diplomat reads the message addressed to him or her and passes the enclosed envelope up to the next-ranking officer.
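The nested-envelope analogy corresponds to PDU encapsulation: each layer's entity prepends its protocol control information to the PDU handed down from above, and each receiving entity strips the header addressed to it. The header names below are illustrative.

```python
# Sketch of PDU encapsulation down a protocol stack and decapsulation
# back up the peer's stack. Header names are illustrative placeholders.

LAYERS = ["transport", "network", "data-link"]

def send(data):
    """Pass the PDU down the stack; each layer adds its PCI (header)."""
    pdu = data
    for layer in LAYERS:
        pdu = f"[{layer}-hdr]{pdu}"   # a message destined for the peer entity
    return pdu

def receive(pdu):
    """Each receiving entity strips the header addressed to it and
    passes the enclosed PDU up to the next layer."""
    for layer in reversed(LAYERS):
        assert pdu.startswith(f"[{layer}-hdr]")
        pdu = pdu[len(f"[{layer}-hdr]"):]
    return pdu

wire = send("hello")
print(wire)           # [data-link-hdr][network-hdr][transport-hdr]hello
print(receive(wire))  # hello
```

Like the couriered package of envelopes, the outermost header (added last, by the lowest layer) is read and removed first at the receiving end.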

FIGURE P-13. How data and/or messages are packaged for transport to another computer

Each layer performs a range of services. In particular, you should refer to "Data Communication Concepts," "Data Link Protocols," "Network Layer Protocols," and "Transport Protocols and Services" for more information. The sections "IP (Internet Protocol)" and "TCP (Transmission Control Protocol)" also provide some insight into the functions of the two most important layers as related to the Internet protocol suite.

RELATED ENTRIES (These entries are hyperlinked on the book's CD-ROM)

Connection Establishment; Data Communication Concepts; Datagrams and Datagram Services; Data Link Protocols; Encapsulation; Flow-Control Mechanisms; Fragmentation of Frames and Packets; Framing in Data Transmissions; Handshake Procedures; IP (Internet Protocol); Network Concepts; Network Layer Protocols; Novell NetWare Protocols; OSI (Open Systems Interconnection) Model; Packet; Sequencing of Packets; TCP (Transmission Control Protocol); and Transport Protocols and Services


QoS (Quality of Service)

QoS describes a network's ability to guarantee the timely delivery of information, control bandwidth, set priorities for selected traffic, and provide a good level of security. QoS is usually associated with the ability to deliver delay-sensitive information such as live video and voice while still having enough bandwidth to deliver other traffic, albeit at a lower rate. Prioritization involves tagging some traffic so that it gets through congested networks ahead of lower-priority traffic.

This section is closely related to the section "Prioritization of Network Traffic." Prioritization is also called class of service, or CoS.

Providing QoS requires improvements in the network infrastructure. One technique is to increase bandwidth by building network backbones with ATM (Asynchronous Transfer Mode) or Gigabit Ethernet switches. It may also mean upgrading a traditional shared LAN to a switched LAN. In addition, new protocols are required that can manage traffic priorities and bandwidth on the network. The following analogy will help put QoS in perspective:

Assume you have the opportunity to redesign metropolitan area freeway systems. Current freeway systems provide no guarantees that you will arrive at your destination on time, nor do they provide priority levels for special traffic, such as emergency vehicles or people who might be willing to pay more for an uncongested lane. This situation is analogous to the "best-effort" delivery model of the Internet, where packets are treated equally and must vie for available bandwidth.

The first task is to determine how to improve the quality of service provided by the freeway. That means reducing or avoiding delays, providing more predictable traffic patterns, and providing some priority scheme so that some traffic can get through in a hurry if necessary.

One obvious solution is to add more lanes, which is equivalent to improving the bandwidth of a network by upgrading to ATM or Gigabit Ethernet. Another solution is to create more direct routes to major destinations, which is equivalent to creating a switched network environment in which dedicated circuits can be set up between any two systems.

The inevitable laws of freeways and networks dictate that new lanes or increased bandwidth will be quickly used up. In the network environment, emerging multimedia applications will eventually use up any extra bandwidth you provide. If you increase the bandwidth, you will still need to provide services that manage it. This is where new network QoS protocols come into play.

In the freeway analogy, assume you set aside one lane for special use, for emergency vehicles and buses, for example. Another lane is set aside for people who qualify to use it, such as diamond lanes that can be used by vehicles with two or more passengers. On networks, we can reserve some bandwidth and only make it available to qualified users such as managers, special applications such as videoconferencing, or special protocols such as SNA (Systems Network Architecture), which must be delivered within a specific time period to prevent time-outs.

Prioritization assumes that someone is managing priorities. If more people ride-share, then even the diamond lanes become jammed, so you might want to establish some other means of controlling access, such as pay-for-use through a tollgate. Then, anyone who is willing to pay for uncongested lanes can get access to them. If usage increases, so can the fee. Eventually, the system will balance, at least in theory. Some people may gain access to special lanes because of political connections, government service, or credits earned through community service. The driver is identified and authorized via a computerized system that controls such policies.

In the internal network environment, user priorities are set by network managers on policy servers. On the Internet, priorities and bandwidth are provided on a pay-for-use basis. This prevents anyone from hogging bandwidth, but it requires that ISPs (Internet service providers) have agreements with other ISPs to establish a user's requested QoS across the Internet and that they have accounting/billing systems in place to charge customers.
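The priority scheme described above amounts to dequeuing tagged high-priority traffic before best-effort traffic. A minimal sketch, with illustrative traffic classes, using Python's standard heap-based priority queue:

```python
# Sketch of priority-based dequeuing, as a policy server might dictate:
# lower priority numbers are transmitted first. Traffic classes are
# illustrative examples only.

import heapq

queue = []   # entries are (priority, sequence, packet)

def enqueue(priority, seq, packet):
    heapq.heappush(queue, (priority, seq, packet))

enqueue(2, 0, "bulk file transfer")       # best-effort traffic
enqueue(0, 1, "videoconference frame")    # reserved, highest priority
enqueue(1, 2, "SNA traffic")              # delay-sensitive legacy protocol

while queue:
    print(heapq.heappop(queue)[2])
# videoconference frame
# SNA traffic
# bulk file transfer
```

The sequence number breaks ties so that packets of equal priority leave in arrival order.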


Providing QoS on ATM networks is relatively easy for a number of reasons. First, ATM uses fixed-size cells for delivering data, as opposed to the variable-size frames used in the LAN environment. The fixed size makes it easy to predict throughput and bandwidth requirements. Assume you are trying to figure out how many vehicles pass through a tunnel per hour. That's easy if all the vehicles are the same size, but if the vehicles are cars, buses, and semitrailer trucks, the varying sizes make it difficult to determine the throughput in advance. The advantage of ATM's fixed-size cells is that service providers can allocate network bandwidth in advance and create contracts with their clients that guarantee a QoS.

ATM is also connection oriented. Cells are delivered over virtual circuits in order, an important requirement for real-time audio and video. Before any data can be sent, a virtual circuit must be set up. This circuit may be preestablished or set up on demand (switched). In the latter case, the network will only provide the circuit if it can fulfill the user's request. QoS for in-house networks is set up based on administrative or other policies. If the network is connected to a carrier's ATM network, the QoS parameters may be passed on to that network as well.

Applications are just emerging that can make QoS requests of ATM networks for services such as emulated circuits with a specific bandwidth. Common ATM QoS parameters include peak cell rate (maximum rate in cells per second required to deliver the user data), minimum cell rate (minimum acceptable cell rate that the ATM network must provide; if the network cannot provide this rate, the circuit request is rejected), cell loss ratio (cell loss that is acceptable), cell transfer delay (delay that is acceptable), and cell error ratio (cell error rate that is acceptable).
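Because every cell is exactly 53 bytes (48 bytes of payload plus a 5-byte header), converting a requested payload rate into a peak cell rate is simple arithmetic, which is part of why fixed cells make bandwidth predictable. The 10 Mbit/s figure below is an illustrative request, not a standard value.

```python
# Why fixed-size cells make bandwidth math predictable: converting a
# requested payload rate into a peak cell rate is simple arithmetic.

CELL_BYTES = 53       # every ATM cell is exactly 53 bytes
PAYLOAD_BYTES = 48    # a 5-byte header leaves 48 bytes of payload

def peak_cell_rate(payload_bps):
    """Cells per second needed to carry payload_bps of user data."""
    return payload_bps / (PAYLOAD_BYTES * 8)

pcr = peak_cell_rate(10_000_000)           # a hypothetical 10 Mbit/s request
print(pcr)                                 # about 26,042 cells per second
print(pcr * CELL_BYTES * 8 / 1e6)          # about 11.04 Mbit/s on the wire
```

The second figure shows the header overhead: the network must budget roughly 10 percent more raw bandwidth than the payload rate the application asked for.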

During the call setup phase, ATM performs a set of procedures called CAC (connection admission control) to determine whether it can provide the requested ATM connection. Admission is determined by calculating the bandwidth requirements that will be needed to satisfy the user's request for service. If the circuit is granted, the network monitors the circuit to ensure that the requested parameters are not exceeded. If traffic exceeds the contracted level for the circuit, the network may drop cells in that circuit rather than in other circuits. However, if bandwidth is available, the network may allow traffic to exceed the contracted amount.
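The admission decision at the heart of CAC can be sketched as a capacity check. This is a deliberately simplified model with illustrative numbers; real CAC algorithms weigh multiple traffic parameters, not just a single rate.

```python
# Simplified sketch of connection admission control (CAC): admit a new
# circuit only if the rates already contracted plus the new request
# still fit the link. Capacities are illustrative.

LINK_CAPACITY = 100_000   # cells/second available on this link
admitted = []             # peak cell rates of circuits already granted

def admit(requested_pcr):
    """Grant the circuit only if all contracted rates still fit."""
    if sum(admitted) + requested_pcr <= LINK_CAPACITY:
        admitted.append(requested_pcr)
        return True
    return False          # rejected: the requested rate cannot be met

print(admit(60_000))  # True
print(admit(30_000))  # True
print(admit(30_000))  # False -- would exceed the link's capacity
```

The final rejection mirrors the text above: if the network cannot provide the requested minimum rate, the circuit request is refused rather than degraded.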

Bandwidth and QoS in the Carrier Networks

Having enough bandwidth has always been a problem in the WAN environment. On fixed-rate leased lines, packets are dropped when traffic exceeds the available rate. Techniques for providing bandwidth on demand solved these problems to some extent. Upon sensing an excess load, a router would dial one or more additional lines to handle the excess load. This is discussed under "Bandwidth on Demand."

Carrier-based packet-switched networks such as frame relay and ATM are designed to handle temporary peaks in traffic. Customers contract for a specific CIR (committed information rate), and that CIR can be exceeded for an additional charge if network bandwidth is available.

Still, packet-switched networks must provide some guarantees that priority traffic can get through ahead of nonpriority traffic and that real-time traffic can make it through the network in time. That is where QoS comes in. X.25 packet-switched networks support a variety of QoS features, which were needed in order to guarantee delivery. However, data rates on X.25 were slow. In contrast, frame relay networks do not have many QoS features because the designers traded them off for speed. On the other hand, ATM was designed from the ground up for speed and for quality of service, as described earlier.

Cisco IOS QoS Services

Cisco IOS (Internetwork Operating System) is a platform for delivering and managing network services. Cisco IOS QoS is a set of IOS extensions that provide end-to-end QoS across heterogeneous networks. ISPs can use it to provide QoS across their networks and to charge customers for bandwidth usage.

Cisco IOS QoS services can provide congestion control; preferential treatment of higher-priority traffic; sorting and classifying of packets into traffic classes or service levels; the ability to commit bandwidth and enforce that commitment; measurement and metering of traffic for billing, accounting, and network performance monitoring; and resource allocation based on physical port, address, and/or application. Another important feature is support that lets networks built with different technologies (such as routers, frame relay, ATM, and tag switching) cooperate in providing QoS from end system to end system. These services are provided by the following features:

Applications can request a specific QoS through the RSVP (Resource Reservation Protocol). Cisco IOS QoS services then take the RSVP requests and map them into high-priority packets that are tunneled through the ISP backbone to the far-end router where they are converted back to RSVP signals. According to Cisco, this method maintains the benefits of RSVP but avoids the overhead of using it in the backbone network.

In general, Cisco IOS QoS provides a way for ISPs to "generate revenue by defining, customizing, delivering and charging for differentiated, value-added network services," according to Cisco. It allows ISPs to offer multiple service tiers with different pricing policies based on usage, time of day, and traffic class.

QoS on the Internet and Intranets

A number of trends are providing a network infrastructure for delivering real-time multimedia over internal networks. These are the explosive growth of Web protocols, the use of switched networks that provide dedicated Ethernet, and the use of high-speed backbones (ATM or Gigabit Ethernet). Bandwidth management protocols are needed to complete the picture.

The Internet community has come up with RSVP as a way to provide QoS on the Internet and on intranets. RSVP is mostly a router-to-router protocol in which one router requests that another router set aside (reserve) a certain amount of bandwidth for a specific transmission. Each router along the path from source to destination is asked to set aside bandwidth. RSVP is discussed further under "RSVP (Resource Reservation Protocol)."
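The hop-by-hop nature of the reservation can be sketched as follows. This models the idea only; real RSVP uses PATH/RESV messages, soft state, and receiver-initiated reservations, and the capacity units here are invented for illustration.

```python
# Sketch of hop-by-hop bandwidth reservation in the spirit of RSVP:
# every router on the path must set aside the requested bandwidth,
# or the reservation fails end to end.

def reserve_path(routers, bandwidth):
    """routers: list of dicts with a 'free' capacity (illustrative units)."""
    if any(r["free"] < bandwidth for r in routers):
        return False                 # one refusal rejects the whole path
    for r in routers:
        r["free"] -= bandwidth       # commit the reservation at every hop
    return True

path = [{"free": 10}, {"free": 4}, {"free": 8}]
print(reserve_path(path, 3))   # True  (all hops can spare 3 units)
print(reserve_path(path, 5))   # False (middle hop has only 1 unit left)
```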

At the time of this writing, a number of IETF (Internet Engineering Task Force) working groups were developing QoS-related network protocols, including the Integrated Services (intserv) and Integrated Services over Specific Link Layers (issll) groups listed among the related entries below.

QoS from the Application Perspective

Much of the work being done to provide QoS on intranets and the Internet was still under development at the time of this writing. However, applications such as Microsoft NetMeeting provide some insight into how an application itself can optimize bandwidth usage. NetMeeting is basically a videoconferencing solution that operates over corporate networks and the Internet. It allows users to transfer files and engage in "whiteboard" sessions (displaying and editing graphics) during the videoconference.

Microsoft calls NetMeeting a "bandwidth-smart" application because it has built-in mechanisms for caching, compressing, and optimizing transmissions. Policies can be set to restrict the amount of bandwidth that the application uses for audio and video so that administrators can prevent the application from hogging bandwidth.

During normal NetMeeting operation, separate audio, video, and data streams are transmitted across the network. The data stream carries the whiteboard sessions and control information. NetMeeting gives the audio stream the highest priority, followed by the data stream and then the video stream. Four transmission modes can be selected: 14.4 Kbits/sec, 28.8 Kbits/sec, ISDN (Integrated Services Digital Network), and LAN speeds. NetMeeting then automatically balances the three streams according to their priorities and the available bandwidth. In the lowest-bandwidth configuration, the video image may appear mostly as a still image that changes only occasionally.
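Priority-ordered balancing of this kind can be sketched simply: the audio stream is satisfied first, then data, and video takes whatever is left. The demand figures below are invented for illustration; they are not NetMeeting's actual codec rates.

```python
# Sketch of priority-based bandwidth balancing like the behavior
# described above. Streams are listed in priority order, and each
# takes what it wants from whatever bandwidth remains.

def balance(available_kbps, demands):
    """demands: list of (stream, wanted_kbps) pairs in priority order."""
    allocation = {}
    for stream, wanted in demands:
        allocation[stream] = min(wanted, available_kbps)
        available_kbps -= allocation[stream]
    return allocation

# On a hypothetical 128-Kbit/sec ISDN link, video absorbs the remainder:
print(balance(128, [("audio", 16), ("data", 8), ("video", 200)]))
# → {'audio': 16, 'data': 8, 'video': 104}
```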

NetMeeting transmits a complete video frame every 15 seconds and then refreshes the image with changes as they occur. It also does some unique things to reduce the amount of data going over the line. For example, graphic information may reside in a queue temporarily before being transmitted. If part of that waiting image changes while it is still in the queue, only the new information is sent; the old information is discarded rather than being sent across the link and then immediately overwritten by the new image.
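The queuing trick can be sketched by keying pending updates on the screen region they affect, so a newer update for the same region replaces the stale one instead of both crossing the link. The class and region names are illustrative, not NetMeeting internals.

```python
# Sketch of update coalescing: only the newest data per screen region
# is ever transmitted; stale queued data for that region is discarded.

from collections import OrderedDict

class UpdateQueue:
    def __init__(self):
        self.pending = OrderedDict()    # region -> latest image data

    def enqueue(self, region, data):
        self.pending.pop(region, None)  # discard a stale update, if any
        self.pending[region] = data     # queue the newest data last

    def transmit(self):
        """Drain the queue; each region contributes one update at most."""
        sent = list(self.pending.items())
        self.pending.clear()
        return sent

q = UpdateQueue()
q.enqueue("top-left", "old pixels")
q.enqueue("bottom", "chart")
q.enqueue("top-left", "new pixels")     # replaces the queued "old pixels"
print(q.transmit())  # [('bottom', 'chart'), ('top-left', 'new pixels')]
```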

RELATED ENTRIES (These entries are hyperlinked on the book's CD-ROM)

ATM (Asynchronous Transfer Mode); Bandwidth on Demand; Cell Relay; CIF (Cells in Frames); Fast IP; IP over ATM; IP Switching; ISA (Integrated Services Architecture); Prioritization of Network Traffic; RSVP (Resource Reservation Protocol); RTP (Real-time Transport Protocol); Switched Networks; Tag Switching; and VLAN (Virtual LAN)


IETF Integrated Services (intserv) Working Group

IETF Integrated Services over Specific Link Layers (issll) Working Group

Cisco Systems, Inc.

European Workshop for Open Systems' QoS page

Medhavi Bhatia's QoS over the Internet: The RSVP protocol paper

Applied Research Laboratory QoS paper with links

Saurav Chatterjee's QoS-related papers and links


Standards Groups, Associations, and Organizations

The primary standards groups and organizations related to networking and the Internet are listed in the following table. Refer to the related headings in this book for more information. A complete list of standards organizations, associations, consortiums, and other groups would fill a whole book; therefore, this section provides links to Web sites that have hotlinks to standards groups, organizations, and associations. Refer to "Internet" and "Internet Organizations and Committees" for information about Internet-related organizations and standards, and to Appendix A for additional references.

ANSI (American National Standards Institute)

EIA (Electronic Industries Association)

IEEE (Institute of Electrical and Electronics Engineers)

ISO (International Organization for Standardization)

ITU (International Telecommunication Union)


Telecoms Virtual Library (complete subject list with subpages of links), http://www.analysys.co.uk/vlib

Standards and Standardization Bodies, http://www.iso.ch/VL/Standards.html

Prof. Jeffrey MacKie-Mason's links to associations, nonprofits: foundations and professional, trade, and interest groups, http://www.spp.umich.edu/telecom/associations.html

Prof. Jeffrey MacKie-Mason's links to standards bodies, http://www.spp.umich.edu/telecom/standards.html

Telstra Corp.'s Telecommunications Information Sources, http://www.telstra.com.au/info/communications.html

Stefan Dresler's links, http://www.telematik.informatik.uni-karlsruhe.de/~dresler/communications.html

Cellular Networking Perspectives' telecom links, http://www.cadvision.com/cnp-wireless/pointers.html

Webstart Communications' standards links, http://www.cmpcmm.com/cc/standards.html


Web Technologies and Concepts

The World Wide Web (or "Web") is built on top of the Internet, which itself is made possible by the TCP/IP protocols. Web clients and Web servers communicate with one another using a protocol called HTTP (Hypertext Transfer Protocol). Web servers provide information that is formatted with HTML (Hypertext Markup Language), a markup language that describes how pages are structured and displayed. Public Web servers provide information on the Internet to anyone with a Web browser. At the same time, Web technology can be used to build private in-house information systems, called intranets, over TCP/IP networks.

Web browsers provide a unique tool for accessing information on any network, whether an internal intranet or the Internet. They remove the mystery of the Internet and eliminate the need for users to understand arcane commands. Most people can begin accessing resources the first time they use a browser. Little training is necessary, and most browser software is free. Browsers do most of the work of accessing and displaying documents, making the process almost transparent to the user.

Much of the World Wide Web's impact has come from its making the public aware of hypertext and hypermedia. Hypertext and hypermedia are interactive navigation tools; clicking a hypertext or hypermedia link takes the user directly to the desired Web document. Hypertext refers to documents containing only text. Hypermedia refers to documents with text, audio, video, images, graphics, animation, or other active content.

The traditional role of a Web browser has been to contact a Web site and obtain information from the site in the form of an HTML page. Today, even this is changing. So-called "push" technology makes the Web browser and/or desktop a place where dynamic information can automatically appear from sites on the Internet. For example, weather information may appear in the corner of your desktop while stock quotes scroll across the bottom. This information is "pushed" to you in real time, so starting up a Web browser and actively searching for information is only one way to access the Web. The new paradigm is multicasting, which provides true broadcasting on the Internet. Refer to "Multicasting" and "Push" for more information.

Despite the impact of new technologies that are changing the way content is delivered, browsers can also work with other, older Internet services such as e-mail, FTP, and Gopher.

How the Web Works

HTTP is a fast and efficient communication protocol that controls many different operations that take place between the Web browser client and the server. HTTP uses the TCP (Transmission Control Protocol) to transport all of its control and data messages from one computer to another.

Web pages are typically grouped at a Web site, where the main page is referred to as the home page. The user navigates by clicking hyperlinks displayed as text, buttons, or images. These hyperlinks reference other information. When you click a hyperlink, you jump to another part of the same page, to a new page at the same Web site, or to another Web site. You might also execute a program, display a picture, or download a file. All of this hyperlinking is done with HTML, which works in concert with HTTP. HTTP is the command and control protocol that sets up communication between a client and a server and passes commands between the two systems. HTML provides the document with formatting instructions that control how a Web page displays in a browser. See "Hypermedia and Hypertext" and "HTML (Hypertext Markup Language)" for more information.

To connect with a Web site, you type the URL (Uniform Resource Locator) for the site into the Address field of a Web browser. Here is an example of the URL that retrieves a document called info.html in the /public directory of a Web site called www.tec-ref.com:

http://www.tec-ref.com/public/info.html
When you type this request, the Web browser first gets the IP address of www.tec-ref.com from a DNS (Domain Name System) server, and then it connects with the target server. The server responds to the client and the tail end of the URL (public/info.html) is processed. In this case, the info.html file in the /public directory has been requested, so the Web server transfers this HTML-coded document to your Web browser. Your Web browser then translates and displays the HTML information.
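The request the browser sends once it has connected can be sketched as follows. The DNS lookup and TCP connection are omitted here; this only builds the HTTP/1.0 request line and headers that would travel over that connection.

```python
# Sketch of the HTTP request a browser constructs from a URL. The host
# portion is resolved via DNS and used for the TCP connection; only the
# path portion appears on the request line.

from urllib.parse import urlparse

def build_request(url):
    parts = urlparse(url)
    path = parts.path or "/"             # an empty path means the home page
    return (f"GET {path} HTTP/1.0\r\n"
            f"Host: {parts.hostname}\r\n"
            f"\r\n")                     # blank line ends the headers

# Prints the request line and Host header, followed by the blank line
# that terminates the headers:
print(build_request("http://www.tec-ref.com/public/info.html"))
```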

The browser handles much of the processing in this relationship. It formats and displays the HTML information, which is transferred as a simple, relatively small file from the Web server. The browser can also include add-ons that allow it to display video, sound, and 3-D images. These add-ons do most of their processing based on a relatively small set of commands or data transferred from the Web server, thus reducing excessive dialog between the Web client and server.

The Web client/server relationship is stateless, meaning that the server does not retain any information about the client, and the connection between Web browser and Web server is terminated as soon as the requested information is sent. While this is efficient for activities such as downloading a single Web document, it produces a lot of overhead if the client keeps requesting additional information from the server. Each new connection requires a negotiation phase, which takes time and requires that packets be exchanged between client and server. In addition, each object on a Web page, such as a graphic image, is sent using a separate connection. The inefficiency of this process has prompted the development of new Web protocols and component applications that are more efficient at transmitting information.
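The overhead adds up quickly because each embedded object costs its own connection. A back-of-the-envelope sketch, assuming one connection for the page plus one per object (the pre-persistent-connection behavior described above):

```python
# Sketch of the connection cost of statelessness: without persistent
# connections, a page with N embedded objects needs N + 1 separate TCP
# connections, each paying its own negotiation (setup) cost.

def connections_needed(num_objects, persistent=False):
    return 1 if persistent else 1 + num_objects

print(connections_needed(10))                   # 11 connections, 11 setups
print(connections_needed(10, persistent=True))  # 1 connection, reused
```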

Component Software

The latest trend is component technology in the form of Java applets and Microsoft ActiveX controls. These self-contained programs follow object-oriented program design methods. They download to a Web browser and either run as separate programs or attach themselves to existing components to enhance the features of those components. When a Web user visits a Java-enabled or ActiveX-enabled Web site, the appropriate components download to the user's computer.

There are three types of Web documents:


HTTP sets up the connection between the client and the server, and in many cases that connection must be secure. For example, assume your company sets up an Internet Web server to provide sensitive documents to its mobile work force. These documents should not be accessible to anyone else (since the site is connected to the Internet, anyone could attempt to access them). To prevent unauthorized access, you set up private directories and require logon authentication. You can also do the following:

Related Topics of Interest

See "Internet" for related information and links to other sections in this book about the Web. The following list also provides useful links:


Web content development, data access, and application development: ActiveX; COM (Component Object Model); Component Software Technology; Compound Documents; CORBA (Common Object Request Broker Architecture); Database Connectivity; DBMS (Database Management System); DCOM (Distributed Component Object Model); Distributed Applications; Distributed Object Computing; IIOP (Internet Inter-ORB Protocol); Java; Middleware and Messaging; Multitiered Architectures; Netscape ONE Development Environment; Object Technologies; OLE DB; OMA (Object Management Architecture); Oracle NCA (Network Computing Architecture); ORB (Object Request Broker); Sun Microsystems Solaris NEO; Transaction Processing; and Web Middleware and Database Connectivity

Security for internal networks and the Internet: Authentication and Authorization; Certificates and Certification Systems; Digital Signatures; Hacker; IPSec (IP Security); PKI (Public-Key Infrastructure); Public-Key Cryptography; and Token-Based Authentication


World Wide Web Consortium

Yahoo!'s WWW links page

See additional links under "Internet."


Updated: February, 2001 TCSN

Email: [email protected]

All material Copyright © 1996, 2001 [Big Sur Multimedia, Inc]. All rights reserved. Information in this document is subject to change without notice. Other products and companies referred to herein are trademarks or registered trademarks of their respective companies or mark holders.