Frame Relay

Overview
Frame relay received much exposure in the press in the early 1990s, only to be displaced by ATM as the hot new technology. However, unlike ATM, frame relay technology matured significantly in 1993 and is now poised for widespread use. This booklet is intended to provide you with the basic information you need to understand what frame relay is all about, and how it could benefit you.

Q. What is frame relay?
A. Frame relay is a link layer protocol occupying Layer 2 of the OSI model (see Figure 1). To understand where frame relay fits in the OSI model, let us review the functions of the bottom four layers, which are the primary layers associated with communications. The Physical Layer (Layer 1) refers to the actual physical infrastructure, such as wiring, the carrier, or a modem. The Link Layer (Layer 2) is responsible for data transport over a particular link. The Network Layer (Layer 3) is responsible for routing data through the network or networks. The Transport Layer (Layer 4) is responsible for end-to-end transport of data.

Figure 2 shows an example. There are four link layers for the four networks depicted. There is one transport layer end-to-end between actual nodes on the network and one network layer to determine the best route. In Figure 2, the route from a PC on LAN A to the server on LAN C could either go directly from LAN A to LAN C through one WAN link or go through LAN B and two WAN links.

Frame relay is a layer 2 WAN protocol. As such, it is responsible for delivery of information over a link, e.g. a WAN link in Figure 2. Frame relay's origins are in the telecommunications industry, where it was originally conceived to support packet mode ISDN. Although ISDN has had mixed success, many elements of ISDN technology have become useful in their own right, with frame relay a prime example.

Q. Why has frame relay gotten a lot of attention these past few years?
A. There are two compelling reasons why frame relay has gained such attention in the last few years. The first is that it is a considerably simpler link layer protocol than many of its predecessors. Because less processing power is required, links can run at higher speeds with lower delay. Higher speed is obviously desirable as ever more powerful computers generate more data that must be transported. Low delay is also very important for some application layer protocols, e.g. DEC's Local Area Transport (LAT) terminal server protocol. The second is its capability to handle bursts, which are characteristic of LAN traffic. We will discuss these two facets of frame relay in detail.

Q. What makes frame relay simple?
A. To understand the simplicity of the protocol, you need to understand that there are two basic Link Layer philosophies for dealing with errors or congestion: Automatic Repeat Request (ARQ) vs. throw on the floor (see Figure 3).

WAN link layer protocols developed in the 1970s had to cope with links with error rates of 10⁻⁶, or one error per million bits. In this environment, the best strategy was for the link layer to correct errors. A good example of link layer error correction is LAPB, the link layer associated with X.25. LAPB uses an ARQ error correction scheme, in which all frames are acknowledged as good or bad and bad ones are retransmitted. However, today's digital WAN links, with widespread deployment of fiber, have error rates of about 10⁻⁹, or 1,000 times better. This has resulted in WAN link layers that simply discard bad frames and let the transport layer (Layer 4) provide the recovery mechanism. This allows the link layer to be simple and have low delay. Frame relay is a good example of a link layer of this sort. Note the simple frame structure of frame relay as shown in Figure 4. Besides the starting and ending flags, there is an address field that is normally 2 octets, an information field that is normally less than 4096 octets, and a 2-octet CRC for error detection. This simple structure allows frame relay to operate at higher speeds with lower latency than ARQ-type link layers.
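The frame layout just described can be sketched as a simple field splitter. This is an illustrative parser of our own, not an implementation from the text; it assumes the default 2-octet address field and ignores the bit stuffing that real links perform between the flags.

```python
FLAG = 0x7E  # opening and closing flag octet

def parse_frame(raw: bytes):
    """Split an (already de-stuffed) frame into address, information, and CRC."""
    assert raw[0] == FLAG and raw[-1] == FLAG, "missing opening/closing flag"
    body = raw[1:-1]
    address = body[:2]   # 2-octet address field (default size)
    info = body[2:-2]    # information field, normally < 4096 octets
    fcs = body[-2:]      # 2-octet CRC (frame check sequence)
    assert len(info) <= 4096, "information field too long"
    return address, info, fcs

# Hypothetical example frame with a dummy CRC of 0x0000:
frame = bytes([FLAG, 0x18, 0x41]) + b"payload" + bytes([0x00, 0x00, FLAG])
addr, info, fcs = parse_frame(frame)
```

Note how little work the receiver does: with no ARQ state to maintain, a bad CRC simply means the frame is dropped.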
Figure 5: Address Fields

Two non-DLCI bits appear in the first octet of the address field: the C/R bit, used to indicate whether the frame is a command or a response, and an EA bit, indicating whether the address field is extended. The second address octet has four non-DLCI bits. The Forward Explicit Congestion Notification (FECN) and Backward Explicit Congestion Notification (BECN) bits, affectionately called "Feckin" and "Beckin", are explicit indicators of congestion in frame relay networks. They signal congestion to higher layer protocols so that those protocols can throttle back the rate of information flow until the congestion clears. FECN is intended for protocols that can provide flow control at the destination, such as OSI's TP4 transport protocol. BECN is intended for protocols that can implement flow control at the source, such as certain HDLC elements of procedure. However, because frame relay is most often used in conjunction with LAN internetworking devices, and LAN protocols typically cannot use this information, the FECN and BECN bits are usually either ignored or simply counted by internetworking devices such as routers to provide an indicator of congestion in the network.

The Discard Eligibility (DE) bit is very important in handling bursts and will be discussed in detail along with burst handling. The fourth non-DLCI bit of the second octet is another EA bit, again indicating whether the address field is extended; the address field can be 2, 3, or 4 octets long. Addresses in frame relay identify links, hence the name Data Link Connection Identifier (DLCI). There is no notion of source and destination addresses as in some other protocols; rather, a single DLCI identifies a virtual circuit from the local location to a remote location (see Figure 6). DLCIs have local significance only and may be reused in the same network. For example, the diagram shows the link from L.A. to N.Y. assigned DLCI 40, whereas in the other direction the same virtual circuit is identified in N.Y. as DLCI 55.
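The bit positions just described can be unpacked mechanically. The following decoder is a sketch of our own for the default 2-octet address field (the function name and layout assumptions are ours, following the standard Q.922 bit ordering, not code from the text):

```python
def decode_address(o1: int, o2: int) -> dict:
    """Unpack DLCI, C/R, FECN, BECN, DE, and EA from two address octets."""
    return {
        "dlci": ((o1 >> 2) << 4) | (o2 >> 4),  # 6 high + 4 low DLCI bits
        "cr":   (o1 >> 1) & 1,                 # command/response indicator
        "fecn": (o2 >> 3) & 1,                 # forward congestion notification
        "becn": (o2 >> 2) & 1,                 # backward congestion notification
        "de":   (o2 >> 1) & 1,                 # discard eligibility
        "ea":    o2 & 1,                       # 1 marks the final address octet
    }

# DLCI 40 (the L.A.-to-N.Y. circuit of Figure 6), no congestion bits set:
fields = decode_address(0x08, 0x81)
```

Because the EA bit terminates the address field, a decoder for the 3- and 4-octet forms would keep consuming octets until it sees EA = 1.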

Q. How does frame relay's burst handling capability work, and why is it important?
A. Besides simplicity, which allows higher-speed operation with low latency, the second big advantage of frame relay is its ability to handle traffic bursts efficiently. This is important because today's primary sources of traffic, namely LANs, create bursty traffic patterns. Figure 8 shows some typical traffic patterns on today's LANs: a steady background level of traffic with peaks 2-5 times the background. Studies have shown these peaks to be correlated with peak e-mail usage at certain times of day on some networks. Other causes can be database updates or large file transfers at a particular time of day.
Figure 8: Typical Traffic Patterns from LANs

Frame relay has a feature called the Committed Information Rate (CIR) that is designed to accommodate bursts efficiently. The CIR is the rate that is "guaranteed" to a particular subscriber on a particular DLCI. The physical port to the subscriber is usually set at a rate higher than the CIR. If the data source tries to send at a rate higher than the CIR, the network attempts to deliver the excess burst on a best-effort basis. There can be multiple DLCIs on one physical link, each with an individual CIR.

This is how burst handling works. A committed burst size of Bc bits is defined over time T, as shown in Figure 9. T is a sliding time window triggered by receipt of user data, defined as T = Bc/CIR (assuming Bc and CIR are not zero). An excess burst size, Be, is also defined. If the number of bits arriving into the network during time T exceeds Bc but is no more than Bc + Be, the excess bits have their Discard Eligibility (DE) bit set to one (DE=1), and those frames are delivered on a best-effort basis, i.e., if congestion occurs, they are discarded before DE=0 frames (frames covered within the CIR). The network should be designed so that there is no overload congestion under normal conditions if all entry points to the network send at the CIR rate or below. This means that all CIR data would get through, assuming no errors.
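To make the arithmetic concrete, here is a small worked example. The specific rates are illustrative numbers of our own, not figures from the text:

```python
# Illustrative parameters (our own, not from the text).
CIR = 64_000   # committed information rate, bits per second
Bc  = 64_000   # committed burst size, bits
Be  = 32_000   # excess burst size, bits

# The sliding measurement window: T = Bc / CIR = 1 second.
T = Bc / CIR

# Up to Bc + Be = 96,000 bits are admitted per window; the 32,000
# bits beyond Bc are marked DE=1 and delivered on a best-effort basis.
admitted = Bc + Be
```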


Figure 10: T as a Sliding Window

What if the number of bits arriving over time T exceeds Bc+Be, you might ask? In that case, bits beyond Bc+Be are eligible for discard at the network boundary, and delivery is normally not even attempted. Figure 11 diagrams actual frame arrivals to illustrate the relationship of the parameters; in this case, the physical access rate is about twice the CIR. Let us summarize the bandwidth admission rules:

1. Incoming bits (per DLCI logical link) totaling less than or equal to Bc during T have DE set to 0 and are transparently transmitted to the destination.
2. Incoming bits (per DLCI logical link) in excess of Bc bits, but less than Bc+Be bits during T are delivered on a best effort basis with DE set to one (DE=1).
3. Incoming bits (per DLCI logical link) beyond Bc+Be bits during T are subject to immediate discard.
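The three rules above can be sketched as a simple per-DLCI admission check. This is an illustrative model of the rules, not a carrier implementation:

```python
def classify_bits(arrived: int, Bc: int, Be: int):
    """Split the bits arriving in one window T into the three classes:
    delivered with DE=0, delivered best-effort with DE=1, and discarded."""
    de0 = min(arrived, Bc)                  # rule 1: within the committed burst
    de1 = min(max(arrived - Bc, 0), Be)     # rule 2: excess burst, marked DE=1
    dropped = max(arrived - (Bc + Be), 0)   # rule 3: immediate discard
    return de0, de1, dropped
```

With Bc = 64,000 and Be = 32,000, for example, an arrival of 120,000 bits in one window yields 64,000 bits sent with DE=0, 32,000 sent with DE=1, and 24,000 discarded at the boundary.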
There are also some interesting special cases. If there is one DLCI on a particular physical port and CIR=physical port speed, all data by definition is within the CIR and will be delivered. Another special case is when the CIR=0 and Be>0. In this case, all data has DE=1 and is eligible for discard. Some carriers offer this service at low tariffs.

In most implementations, T can be up to a second in duration, meaning that bursts of less than a second over the CIR, but still within the excess burst capacity, are accommodated on a best-effort basis. However, AT&T, in its frame relay service offering, has recently advertised the capability of coping with bursts lasting over an hour.