A website for Electronics and Communications Engineering students. Information and references for the different major subjects of this course are made available here. Disclaimer: Not all information posted on this site is accurate; it is provided only for reference and comparison. Thank you.

Saturday, February 10, 2007

ECE124 Data Communications - Research Work No. 7 - Open Systems Interconnection

ECE 124

Research Work

Open Systems Interconnection

The Open Systems Interconnection Basic Reference Model (OSI Reference Model or OSI Model for short) is a layered, abstract description for communications and computer network protocol design, developed as part of the Open Systems Interconnection initiative. It is also called the OSI seven layer model.

In the late 1970s, the European-dominated International Organization for Standardization (ISO) began to develop its Open Systems Interconnection (OSI) networking suite. OSI has two major components: an abstract model of networking (the Basic Reference Model, or seven-layer model), and a set of concrete protocols. The standard documents that describe OSI are for sale and not currently available online.

Parts of OSI have influenced Internet protocol development, but none more than the abstract model itself, documented in OSI 7498 and its various addenda. In this model, a networking system is divided into layers. Within each layer, one or more entities implement its functionality. Each entity interacts directly only with the layer immediately beneath it, and provides facilities for use by the layer above it. Protocols enable an entity in one host to interact with a corresponding entity at the same layer in a remote host.

Layer 7: Application layer

The Application layer provides a means for the user to access information on the network through an application. This layer is the main interface for users to interact with the application and therefore the network. Examples of application layer protocols include Telnet, the File Transfer Protocol (FTP), the Simple Mail Transfer Protocol (SMTP), and the Hypertext Transfer Protocol (HTTP). Applications built to use a protocol, such as an FTP client, should not be confused with the protocol itself.

Layer 6: Presentation layer

The Presentation layer transforms data to provide a standard interface for the Application layer. MIME encoding, data compression, data encryption and similar manipulation of the presentation is done at this layer to present the data as a service or protocol developer sees fit. Examples: converting an EBCDIC-coded text file to an ASCII-coded file, or serializing objects and other data structures into and out of, e.g., XML.

Layer 5: Session layer

The Session layer controls the dialogues/connections (sessions) between computers. It establishes, manages and terminates the connections between the local and remote application. It provides for either full-duplex or half-duplex operation, and establishes checkpointing, adjournment, termination, and restart procedures. The OSI model made this layer responsible for "graceful close" of sessions, which is a property of TCP, and also for session checkpointing and recovery, which is not usually used in the Internet protocols suite.

Layer 4: Transport layer

The Transport layer provides transparent transfer of data between end users, relieving the upper layers of any concern for how reliable data transfer is achieved. The transport layer controls the reliability of a given link through flow control, segmentation/desegmentation, and error control. Some transport protocols are stateful and connection-oriented, meaning the transport layer can keep track of the segments and retransmit those that fail. The best-known example of a layer 4 protocol is the Transmission Control Protocol (TCP). The transport layer converts messages into TCP segments, or into User Datagram Protocol (UDP) or Stream Control Transmission Protocol (SCTP) packets. Perhaps an easy way to visualize the Transport layer is to compare it with a post office, which deals with the dispatching and classification of mail and parcels sent.
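From a programmer's point of view, the transport-layer service can be sketched with a TCP loopback connection in Python. This is an illustration only, not part of the OSI text; the message is arbitrary, and the OS picks a free port:

```python
import socket
import threading

def echo_server(sock):
    """Accept one connection and echo back whatever arrives."""
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)   # TCP presents a reliable byte stream
        conn.sendall(data)       # echo it back unchanged

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))    # port 0: let the OS choose a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"segment of user data")
reply = client.recv(1024)
client.close()
assert reply == b"segment of user data"   # reliable, in-order delivery
```

The application simply writes and reads bytes; segmentation, acknowledgement, and retransmission are handled invisibly by the transport layer, which is exactly the "transparent transfer" the text describes.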

Layer 3: Network layer

The Network layer provides the functional and procedural means of transferring variable-length data sequences from a source to a destination via one or more networks, while maintaining the quality of service requested by the Transport layer. The Network layer performs network routing functions, and might also perform segmentation/desegmentation and report delivery errors. Routers operate at this layer, sending data throughout the extended network and making the Internet possible. This is a logical addressing scheme: values are chosen by the network engineer, and the scheme is hierarchical. The best-known example of a layer 3 protocol is the Internet Protocol (IP). Perhaps it is easier to visualize this layer as the air mail or consolidated carrier that transfers the mail from Point A to Point B.
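The hierarchical addressing mentioned above can be illustrated with Python's `ipaddress` module. The address and prefix length here are invented for the example:

```python
import ipaddress

# An IP address splits hierarchically into a network part and a host part.
iface = ipaddress.ip_interface("192.168.10.37/24")

assert str(iface.network) == "192.168.10.0/24"   # routers forward on this
assert iface.ip in iface.network                  # the host lives inside it

# The point of the hierarchy: one forwarding entry covers every host
# in the /24, so routers never need per-host routes.
assert iface.network.num_addresses == 256
```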

Layer 2: Data link layer

The Data Link layer provides the functional and procedural means to transfer data between network entities and to detect and possibly correct errors that may occur in the Physical layer. The best-known example of this is Ethernet. Other examples of data link protocols are HDLC and ADCCP for point-to-point or packet-switched networks and Aloha for local area networks. On IEEE 802 local area networks, and some non-IEEE 802 networks such as FDDI, this layer may be split into a Media Access Control (MAC) layer and the IEEE 802.2 Logical Link Control (LLC) layer. It arranges bits from the physical layer into logical chunks of data, known as frames.

This is the layer at which bridges and switches operate. Connectivity is provided only among locally attached network nodes, forming layer 2 domains for unicast or broadcast forwarding. Other protocols may be imposed on the data frames to create tunnels and logically separated layer 2 forwarding domains.

Layer 1: Physical layer

The Physical layer defines all the electrical and physical specifications for devices. This includes the layout of pins, voltages, and cable specifications. Hubs, repeaters, network adapters and Host Bus Adapters (HBAs used in Storage Area Networks) are physical-layer devices. The major functions and services performed by the physical layer are:

  • Establishment and termination of a connection to a communications medium.
  • Participation in the process whereby the communication resources are effectively shared among multiple users. For example, contention resolution and flow control.
  • Modulation, or conversion between the representation of digital data in user equipment and the corresponding signals transmitted over a communications channel. These are signals operating over the physical cabling (such as copper and fiber optic) or over a radio link.

Parallel SCSI buses operate in this layer. Various physical-layer Ethernet standards are also in this layer; Ethernet incorporates both this layer and the data-link layer. The same applies to other local-area networks, such as Token ring, FDDI, and IEEE 802.11, as well as personal area networks such as Bluetooth and IEEE 802.15.4.


ECE124 Data Communications - Research Work No. 6 - Public Data Network

ECE 124

Research Work

Public Data Network

Public Data Network (PDN)

  • It is a network established and operated by a telecommunications administration, or a recognized private operating agency, for the specific purpose of providing data transmission services for the public.
  • It is a packet or circuit-switched service provided by carriers. Examples include X.25, frame relay, SMDS, Switched 56 Kb/s, ISDN and ATM service.
  • A communications network provided by a carrier organization that makes its transport available to companies. Customers attach to such networks using a variety of protocols including frame relay, ATM, IP, etc., and pay by the month or on a per-byte basis.

The basic principle behind a PDN is to transport data from a source to a destination through a network of intermediate switching nodes and transmission media. The switching nodes are not concerned with the content of the data, as their purpose is to provide end stations access to transmission and other switching nodes that will transport data from node to node until it reaches its final destination. The switching nodes are interconnected with transmission links (channels). The end-station devices can be personal computers, servers, mainframe computers, or any other piece of computer hardware capable of sending or receiving data. End stations are connected to the network through switching nodes. Data enter the network where they are routed through one or more intermediate switching nodes until reaching their destination.

Some switching nodes connect only to other switching nodes (sometimes called tandem switches), while other switching nodes are connected to end stations as well. Node-to-node communications links generally carry multiplexed data (usually time-division multiplexed). Public data networks are not directly connected; that is, they do not provide direct communication links between every possible pair of nodes.

Public switched data networks combine the concepts of value-added networks (VANs) and packet switching networks.

Value-Added Network

A value-added network “adds value” to the services or facilities provided by a common carrier to provide new types of communication services. Examples of added values are error control, enhanced connection reliability, dynamic routing, failure protection, logical multiplexing, and data format conversions. A VAN comprises an organization that leases communications lines from common carriers such as AT&T and MCI and adds new types of communications services to those lines.

Packet Switching Network

Packet switching involves dividing data messages into small bundles of information and transmitting them through communications networks to their intended destinations using computer-controlled switches. Three common switching techniques are used with public data networks: circuit switching, message switching, and packet switching.

Circuit switching is used for making a standard telephone call on the public telephone network. The call is established, information is transferred, and then the call is disconnected.

Message switching is a form of store-and-forward network. Data, including source and destination identification codes, are transmitted into the network and stored in a switch. Each switch within the network has message storage capabilities. The network transfers the data from switch to switch when it is convenient to do so.

With packet switching, data are divided into smaller segments, called packets, prior to transmission through the network. Because a packet can be held in memory at a switch for a short period of time, packet switching is sometimes called a hold-and-forward network.
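The division of a message into numbered packets, and its reassembly at the destination, can be sketched as follows. This is a toy illustration; the packet size and the (sequence number, data) layout are assumptions, not part of any real protocol:

```python
def packetize(message: bytes, size: int):
    """Split a message into fixed-size packets, each tagged with a sequence number."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Rebuild the message, tolerating out-of-order arrival."""
    return b"".join(data for _, data in sorted(packets))

msg = b"packet switching divides messages into small bundles"
pkts = packetize(msg, 8)

# Packets may arrive out of order at the destination; the sequence
# numbers let the receiver restore the original message.
assert reassemble(list(reversed(pkts))) == msg
```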


ECE124 Data Communications - Research Work No. 5 - SDLC and HDLC

ECE 124

Research Work

SDLC and HDLC

IBM developed the Synchronous Data Link Control (SDLC) protocol in the mid-1970s for use in Systems Network Architecture (SNA) environments. SDLC was the first link layer protocol based on synchronous, bit-oriented operation. This chapter provides a summary of SDLC's basic operational characteristics and outlines several derivative protocols.

After developing SDLC, IBM submitted it to various standards committees. The International Organization for Standardization (ISO) modified SDLC to create the High-Level Data Link Control (HDLC) protocol. The International Telecommunication Union-Telecommunication Standardization Sector (ITU-T; formerly CCITT) subsequently modified HDLC to create Link Access Procedure (LAP) and then Link Access Procedure, Balanced (LAPB). The Institute of Electrical and Electronic Engineers (IEEE) modified HDLC to create IEEE 802.2. Each of these protocols has become important in its domain, but SDLC remains the primary SNA link layer protocol for WAN links.

SDLC Types and Topologies

SDLC supports a variety of link types and topologies. It can be used with point-to-point and multipoint links, bounded and unbounded media, half-duplex and full-duplex transmission facilities, and circuit-switched and packet-switched networks.

SDLC identifies two types of network nodes: primary and secondary. Primary nodes control the operation of other stations, called secondaries. The primary polls the secondaries in a predetermined order, and secondaries can then transmit if they have outgoing data. The primary also sets up and tears down links and manages the link while it is operational. Secondary nodes are controlled by a primary, which means that secondaries can send information to the primary only if the primary grants permission.

SDLC primaries and secondaries can be connected in four basic configurations:

Point-to-point—Involves only two nodes, one primary and one secondary.

Multipoint—Involves one primary and multiple secondaries.

Loop—Involves a loop topology, with the primary connected to the first and last secondaries. Intermediate secondaries pass messages through one another as they respond to the requests of the primary.

Hub go-ahead—Involves an inbound and an outbound channel. The primary uses the outbound channel to communicate with the secondaries. The secondaries use the inbound channel to communicate with the primary. The inbound channel is daisy-chained back to the primary through each secondary.

SDLC Frame Format

The SDLC frame is shown in Figure 16-1.

Figure 16-1 Six Fields Comprise the SDLC Frame

The following descriptions summarize the fields illustrated in Figure 16-1:

Flag—Delimits the beginning and end of the frame, and initiates and terminates error checking.

Address—Contains the SDLC address of the secondary station, which indicates whether the frame comes from the primary or a secondary. This address can contain a specific address, a group address, or a broadcast address. A primary is always either the source or the destination of a communication, which eliminates the need to include its address.

Control—Employs three different formats, depending on the type of SDLC frame used:

Information (I) frame—Carries upper-layer information and some control information. This frame sends and receives sequence numbers, and the poll final (P/F) bit performs flow and error control. The send sequence number refers to the number of the frame to be sent next. The receive sequence number provides the number of the frame to be received next. Both sender and receiver maintain send and receive sequence numbers.

A primary station uses the P/F bit to tell the secondary whether it requires an immediate response. A secondary station uses the P/F bit to tell the primary whether the current frame is the last in its current response.

Supervisory (S) frame—Provides control information. An S frame can request and suspend transmission, report on status, and acknowledge receipt of I frames. S frames do not have an information field.

Unnumbered (U) frame—Supports control purposes and is not sequenced. A U frame can be used to initialize secondaries. Depending on the function of the U frame, its control field is 1 or 2 bytes. Some U frames have an information field.

Data—Contains a path information unit (PIU) or exchange identification (XID) information.

Frame check sequence (FCS)—Precedes the ending flag delimiter and is usually a cyclic redundancy check (CRC) calculation remainder. The CRC calculation is redone in the receiver. If the result differs from the value in the original frame, an error is assumed.
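As a sketch of how an FCS works, here is a bitwise CRC-16 using the CCITT polynomial 0x1021 in Python. SDLC's actual FCS parameters differ in detail (bit ordering and final inversion), so treat this as an illustration of the redo-and-compare idea rather than a conformant implementation:

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16 with the CCITT polynomial 0x1021, MSB first."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# Standard check value for this CRC variant:
assert crc16_ccitt(b"123456789") == 0x29B1

# Sender computes the FCS and appends it; the receiver redoes the
# calculation and compares. A corrupted frame would almost certainly
# produce a different remainder, so the mismatch flags an error.
frame = b"address control payload"
fcs = crc16_ccitt(frame)
assert crc16_ccitt(frame) == fcs   # clean frame: values match
```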

A typical SDLC-based network configuration is shown in Figure 16-2. As illustrated, an IBM establishment controller (formerly called a cluster controller) in a remote site connects to dumb terminals and to a Token Ring network. In a local site, an IBM host connects (via channel-attached techniques) to an IBM front-end processor (FEP), which also can have links to local Token Ring LANs and an SNA backbone. The two sites are connected through an SDLC-based 56-kbps leased line.

Figure 16-2 An SDLC Line Links Local and Remote Sites over a Serial Line

Derivative Protocols

Despite the fact that it omits several features used in SDLC, HDLC is generally considered to be a compatible superset of SDLC. LAP is a subset of HDLC and was created to ensure ongoing compatibility with HDLC, which had been modified in the early 1980s. IEEE 802.2 is a modification of HDLC for LAN environments. Qualified Logical Link Control (QLLC) is a link layer protocol defined by IBM that enables SNA data to be transported across X.25 networks.

High-Level Data Link Control

HDLC shares the frame format of SDLC, and HDLC fields provide the same functionality as those in SDLC. Also, as in SDLC, HDLC supports synchronous, full-duplex operation.

HDLC differs from SDLC in several minor ways, however. First, HDLC has an option for a 32-bit checksum. Also, unlike SDLC, HDLC does not support the loop or hub go-ahead configurations.

The major difference between HDLC and SDLC is that SDLC supports only one transfer mode, whereas HDLC supports three:

Normal response mode (NRM)—This transfer mode is also used by SDLC. In this mode, secondaries cannot communicate with a primary until the primary has given permission.

Asynchronous response mode (ARM)—This transfer mode enables secondaries to initiate communication with a primary without receiving permission.

Asynchronous balanced mode (ABM)—ABM introduces the combined node, which can act as a primary or a secondary, depending on the situation. All ABM communication occurs between multiple combined nodes. In ABM environments, any combined station can initiate data transmission without permission from any other station.




ECE124 Data Communications - Research Work No. 4 - Asynchronous and Synchronous Protocols

ECE 124

Research Work

Asynchronous and Synchronous Protocols

A protocol establishes a means of communicating between two systems. As long as the sender and receiver each use the same protocol, information can be reliably exchanged between them. There are two common protocols used in serial data communications: the first is known as asynchronous, the second as synchronous.

A protocol is a set of rules which governs how data is sent from one point to another. In data communications, there are widely accepted protocols for sending data, and both the sender and receiver must use the same protocol when communicating.


Asynchronous Transmission

The asynchronous protocol evolved early in the history of telecommunications. It became popular with the invention of the early tele-typewriters that were used to send telegrams around the world.

Asynchronous systems send data bytes between the sender and receiver by packaging the data in an envelope. This envelope helps transport the character across the transmission link that separates the sender and receiver. The transmitter creates the envelope, and the receiver uses the envelope to extract the data. Each character (data byte) the sender transmits is preceded with a start bit, and suffixed with a stop bit. These extra bits serve to synchronize the receiver with the sender.

In asynchronous serial transmission, each character is packaged in an envelope, and sent across a single wire, bit by bit, to a receiver. Because no signal lines are used to convey clock (timing) information, this method groups data together into a sequence of bits (five - eight), then prefixes them with a start bit and appends the data with a stop bit.

Start and stop bits were introduced for the old electromechanical teletypewriters. These used motors driving cams that actuated solenoids to sample the signal at specific time intervals. The motors took a while to get up to speed, so prefixing the first data bit with a start bit gave the motors time to reach speed; the cams then generated a reference point for the start of the first data bit.

At the end of the character sequence, a stop bit was used to allow the motors and cams to return to their rest state. In addition, it filled in time in case the character was an end of line, when the teletypewriter would need to go to the beginning of a new line. Without the stop bit, the machine could not complete this before the next character arrived.

It's important to realize that the receiver and sender are re-synchronized each time a character arrives. What that means is that the motors/cams are restarted each time a start bit arrives at the receiver.

Nowadays, electronic clocks that provide the timing sequences necessary to decode the incoming signal have replaced the electromechanical motors.

This method of transmission is suitable for slow speeds less than about 32000 bits per second. In addition, notice that the signal that is sent does not contain any information that can be used to validate if it was received without modification. This means that this method does not contain error detection information, and is susceptible to errors.

In addition, for every character that is sent, an additional two bits are also sent. Consider sending a text document which contains 1000 characters. Each character is eight bits, so the total number of bits sent is 10000 (8 data bits plus a start and a stop bit per character). Those 10000 bits are equivalent to 1250 characters, meaning that an extra 250 characters' worth of bits are sent because of the start and stop bits. This represents a large overhead, clearly making this method an inefficient means of sending large amounts of data.
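The framing and the overhead arithmetic above can be checked with a short Python sketch (purely illustrative; real UARTs do this in hardware):

```python
def frame_char(byte: int):
    """Wrap one 8-bit character in a start bit (0) and a stop bit (1)."""
    data_bits = [(byte >> i) & 1 for i in range(8)]  # LSB first, as on the wire
    return [0] + data_bits + [1]

framed = frame_char(ord("A"))
assert len(framed) == 10                      # 8 data bits cost 10 line bits
assert framed[0] == 0 and framed[-1] == 1     # start and stop bits in place

# The overhead arithmetic from the text:
chars = 1000
total_bits = chars * 10                       # 10 bits per character
assert total_bits == 10000
extra_bits = total_bits - chars * 8
assert extra_bits // 8 == 250                 # 250 characters' worth of overhead
```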

Summary for Asynchronous

Transmission of these extra bits (2 per byte) reduces data throughput. Synchronization is achieved for each character only: when the sender has no data to transmit, the line is idle and the sender and receiver are not in synchronization. Asynchronous protocols are suited for low-speed data communications, and there is no method of error checking inherent in this protocol.

Synchronous Transmission

One of the problems associated with asynchronous transmission is the high overhead. For every 8-bit character transmitted, at least an additional 2 bits of overhead are required, and for large amounts of data this quickly adds up: transmitting 1000 characters requires 10000 bits, an extra 2000 bits for the starts and stops, equivalent to an overhead of 250 characters. Another problem is the complete lack of any form of error detection, which means the sender has no way of knowing whether the receiver is correctly recognizing the transmitted data.

In synchronous transmission, greater efficiency is achieved by grouping characters together, and doing away with the start and stop bits for each character. We still envelop the information in a similar way as before, but this time we send more characters between the start and end sequences. In addition, the start and stop bits are replaced with a new format that permits greater flexibility. An extra ending sequence is added to perform error checking.

A start type sequence, called a header, prefixes each block of characters, and a stop type sequence, called a tail, suffixes each block of characters. The tail is expanded to include a check code, inserted by the transmitter, and used by the receiver to determine if the data block of characters was received without errors. In this way, synchronous transmission overcomes the two main deficiencies of the asynchronous method, that of inefficiency and lack of error detection.

There are variations of synchronous transmission, which fall into two groups: character oriented and bit oriented. In character-oriented protocols, control information is encoded as characters. In bit-oriented protocols, control information is encoded using bits or combinations of bits, making them more complex than the character-oriented version. Binary Synchronous is an example of a character-oriented protocol, and High-Level Data Link Control (HDLC) is an example of a bit-oriented one.

In asynchronous transmission, if there was no data to transmit, nothing was sent. We relied on the start bit to start the motor and thus begin the preparation to decode the incoming character. However, in synchronous transmission, because the start bit has been dropped, the receiver must be kept in a state of readiness. This is achieved by sending a special code by the transmitter whenever it has no data to send.

Synchronous serial data

In bit-oriented protocols, the idle line state is replaced with the flag byte 7E (binary 01111110), which keeps the receiver synchronized with the sender. The start and stop bits are removed, and each character is combined with others into a data packet.

User data is prefixed with a header field, and suffixed with a trailer field which includes a checksum value (used by the receiver to check for errors in sending).

The header field is used to convey address information (sender and receiver), packet type, and control data. The data field contains the user's data; if it cannot fit in a single packet, multiple numbered packets are used. Generally, it has a fixed size. The tail field contains checksum information which the receiver uses to check whether the packet was corrupted during transmission.
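One detail worth illustrating is how the flag byte 7E can safely delimit frames: the sender performs bit stuffing, inserting a 0 after any five consecutive 1s in the payload, so the flag pattern 01111110 never occurs inside a frame. A minimal Python sketch (illustrative only):

```python
def stuff(bits):
    """Insert a 0 after every run of five 1s (sender side)."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)   # stuffed bit
            run = 0
    return out

def unstuff(bits):
    """Remove the stuffed 0s (receiver side)."""
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            i += 1          # skip the stuffed 0 that must follow
            run = 0
        i += 1
    return out

payload = [0, 1, 1, 1, 1, 1, 1, 0]          # looks exactly like a flag byte
assert stuff(payload) != payload             # a 0 was inserted after five 1s
assert unstuff(stuff(payload)) == payload    # the receiver recovers the data
```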

Asynchronous transmission is suited for low-speed serial transmission and does not include error checking as part of the protocol. Each character is contained in an envelope of a start and stop bit. It is inefficient.

Synchronous transmission can achieve higher speeds than asynchronous. In addition, it is an error-checking protocol, and it is much more efficient because it groups characters together into blocks.




ECE124 Data Communications - Research Work No. 3 - Synchronization

ECE 124

Research Work No. 3

Synchronization

Synchronization is a problem in timekeeping which requires the coordination of events to operate a system in unison. Systems operating with all their parts in synchrony are said to be synchronous or in sync. Some systems may be only approximately synchronized, or plesiochronous. For some applications, the relative offsets between events need to be determined; for others, only the order of events is important. Today, synchronization can occur on a global basis due to GPS-enabled timekeeping systems. For digital logic and data transfer, a synchronous object requires a clock signal. Timekeeping technologies such as the GPS satellites and the Network Time Protocol (NTP) provide real-time access to a close approximation of the UTC timescale, and are used for many terrestrial synchronization applications.

Serialized data is not generally sent at a uniform rate through a channel. Instead, there is usually a burst of regularly spaced binary data bits followed by a pause, after which the data flow resumes. Packets of binary data are sent in this manner, possibly with variable-length pauses between packets, until the message has been fully transmitted. In order for the receiving end to know the proper moment to read individual binary bits from the channel, it must know exactly when a packet begins and how much time elapses between bits. When this timing information is known, the receiver is said to be synchronized with the transmitter, and accurate data transfer becomes possible. Failure to remain synchronized throughout a transmission will cause data to be corrupted or lost.

Two basic techniques are employed to ensure correct synchronization. In synchronous systems, separate channels are used to transmit data and timing information. The timing channel transmits clock pulses to the receiver. Upon receipt of a clock pulse, the receiver reads the data channel and latches the bit value found on the channel at that moment. The data channel is not read again until the next clock pulse arrives. Because the transmitter originates both the data and the timing pulses, the receiver will read the data channel only when told to do so by the transmitter (via the clock pulse), and synchronization is guaranteed.

Techniques exist to merge the timing signal with the data so that only a single channel is required. This is especially useful when synchronous transmissions are to be sent through a modem. Two methods in which a data signal is self-timed are nonreturn-to-zero and biphase Manchester coding. These both refer to methods for encoding a data stream into an electrical waveform for transmission.
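A toy sketch of biphase Manchester coding in Python: each data bit becomes a pair of half-bit levels with a guaranteed mid-bit transition, which is what lets the receiver recover the clock from the data signal itself. The particular level convention used here (1 → high-low, 0 → low-high) varies between standards, so it is an assumption:

```python
def manchester_encode(bits):
    """Each bit becomes two half-bit levels with a mid-bit transition."""
    return [half for b in bits for half in ((1, 0) if b else (0, 1))]

def manchester_decode(halves):
    """The first half of each bit cell identifies the data bit."""
    return [1 if halves[i] == 1 else 0 for i in range(0, len(halves), 2)]

data = [1, 0, 1, 1, 0]
signal = manchester_encode(data)
assert manchester_decode(signal) == data

# Every bit cell contains a transition, even across long runs of
# identical bits, so the receiver never loses the timing reference:
assert all(signal[i] != signal[i + 1] for i in range(0, len(signal), 2))
```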

Assuming that we are sending a collection of bits across the medium in bit-serial mode, regardless of the underlying transmission characteristics, the receiver will receive a bit pattern representing the message. Therefore, the receiver needs to determine:

» the start of each bit cell - bit or clock synchronization

» the start and end of each element (character, byte or octet) - character or byte synchronization

» the start and end of each complete message block (frame) - block or frame synchronization

The complete frame (block) of characters is transmitted as a contiguous stream of bits and the receiver endeavors to keep in synchronization with the incoming bit stream for the duration of the complete block (no start stop codes). There is a need to synchronize at three levels:

» transmitted bit stream, encoded so receiver can be kept in synchronization (digital encoding we discussed last time)

» frames are preceded by one or more reserved bytes to ensure receiver reliably interprets received bit stream on correct byte boundary

» contents of each frame encapsulated between pair of reserved characters for frame synchronization

Between frames either (1) continuous idle bytes or (2) special synch characters are sent to maintain synchronization.

Error Control

– Need mechanism to detect when a bit is in error (parity bits, block checksums, etc)

– Need protocol to request retransmitted copy

Flow Control

– The receiver may not be able to handle all the information sent due to limited processing power, buffer space, etc. A mechanism is needed to negotiate the flow of information.

Data Link Protocols

– We need to agree on format of data being exchanged (bits per byte and encoding) and type and order of messages to achieve reliability.


ECE124 Data Communications - Research Work No. 2 - Error Control

ECE 124

Research Work No. 2

Error Control

Error control coding provides the means to protect data from errors. Data transferred from one place to the other has to be transferred reliably. Unfortunately, in many cases the physical link can not guarantee that all bits will be transferred without errors. It is then the responsibility of the error control algorithm to detect those errors, and in some cases correct these so upper layers will see an error free link.

In computer science and information theory, error detection and correction have great practical importance. Error detection is the ability to detect errors caused by noise or other impairments during transmission from the transmitter to the receiver. Error correction has the additional feature of locating the errors and correcting them. Given the goal of error correction, error detection alone may seem insufficient; however, error-correction schemes can be computationally intensive or require so much redundant data that they are prohibitive for certain applications. In some applications, such as a sender-receiver system, error correction can be achieved with a detection scheme alone, used in tandem with an automatic repeat request scheme that notifies the sender when a portion of the data was received incorrectly and must be retransmitted.


Several schemes exist to achieve error detection, and they are generally quite simple. Most of these involve check bits: extra bits which accompany the data bits for the purpose of error detection.

Repetition schemes

Given a stream of data to be sent, the data is broken up into blocks of bits, and each block is sent some predetermined number of times. For example, to send "1011", we may repeat the block three times; variations on this theme exist.

Suppose we send "1011 1011 1011" and this is received as "1010 1011 1011". Since one group is not the same as the other two, we can determine that an error has occurred. This scheme is not very efficient, and it is susceptible to errors that occur in exactly the same place in each group (e.g. "1010 1010 1010" in the example above would be accepted as correct).

The scheme however is extremely simple, and is in fact used in some transmissions of numbers stations.
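The repetition scheme above can be sketched in a few lines of Python (function names are my own for illustration); note how the same-position failure case from the example slips through undetected:

```python
def send_repetition(block: str, copies: int = 3) -> list[str]:
    """Transmit a block by sending it a predetermined number of times."""
    return [block] * copies

def detect_repetition_error(received: list[str]) -> bool:
    """Flag an error whenever the repeated copies disagree with each other."""
    return len(set(received)) > 1
```

For example, `detect_repetition_error(["1010", "1011", "1011"])` flags an error, but `["1010", "1010", "1010"]` is wrongly accepted, exactly the weakness described above.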

Parity schemes

Given a stream of data to be sent, the data is broken up into blocks of bits, and the number of 1 bits is counted. A "parity bit" appended to the block is then set or cleared according to whether the number of one bits is odd or even. If the tested blocks overlap, the parity bits can be used to isolate the error, and even correct it if the error is confined to one bit: this is the principle of the Hamming code.

There is a limitation to parity schemes. A parity bit is guaranteed to detect only an odd number of bit errors (one, three, five, and so on). If an even number of bits (two, four, six, and so on) are in error, the parity bit appears correct even though the data is corrupt.
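A minimal even-parity sketch in Python (names are illustrative) makes both the scheme and its limitation concrete:

```python
def parity_bit(bits: str) -> str:
    """Even parity: choose the bit so the total number of ones is even."""
    return "1" if bits.count("1") % 2 else "0"

def check_parity(block: str) -> bool:
    """Block = data bits + parity bit; valid if the total ones count is even."""
    return block.count("1") % 2 == 0
```

A single-bit error makes the ones count odd and is caught; flipping two bits keeps the count even, so the error passes unnoticed, as described above.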

Polarity schemes

One less commonly used form of error correction and detection is transmitting a polarity-reversed bitstream simultaneously with the bitstream it is meant to correct. This scheme is very weak at detecting bit errors, and only marginally useful for byte or word error detection and correction. However, at the physical layer in the OSI model, this scheme can aid in error correction and detection.

Polarity symbol reversal is (probably) the simplest form of Turbo code, but technically not a Turbo code at all.

  • Turbo codes DO NOT work at the bit level.
  • Turbo codes typically work at the character or symbol level depending on their placement in the OSI model.
  • Character here refers to Baudot, ASCII-7, the 8-bit byte or the 16-bit word.

Original transmitted symbol: 1011

  • transmit 1011 on carrier wave 1 (CW1)
  • transmit 0100 on carrier wave 2 (CW2)

Receiver end

  • check that each bit of CW1 has the opposite polarity of the corresponding bit of CW2 (CW1 <> CW2)
  • if CW1 == CW2 at any bit position, signal a bit error (triggers more complex ECC)

This polarity reversal scheme works fairly well at low data rates (below 300 baud) with very redundant data like telemetry data.
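The receiver-end check described above can be sketched as follows (a toy model in Python; function names are mine, and real systems would compare the demodulated carriers, not strings):

```python
def transmit_polarity(bits: str) -> tuple[str, str]:
    """CW1 carries the bits; CW2 carries their polarity-reversed complement."""
    cw2 = "".join("0" if b == "1" else "1" for b in bits)
    return bits, cw2

def check_polarity(cw1: str, cw2: str) -> list[int]:
    """Return the positions where CW1 == CW2, i.e. where a bit error is
    flagged (a correct transmission never has matching bits)."""
    return [i for i, (a, b) in enumerate(zip(cw1, cw2)) if a == b]
```

Sending 1011 on CW1 and 0100 on CW2 yields no flagged positions; if a channel corrupts one carrier so the streams agree somewhere, that position is flagged for a more complex ECC to handle.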

Cyclic redundancy checks

Many more complex error detection (and correction) methods make use of the properties of finite fields and polynomials over such fields. The cyclic redundancy check considers a block of data as the coefficients of a polynomial and then divides by a fixed, predetermined polynomial. The remainder of the division is taken as the redundant data bits, the CRC. On reception, one can recompute the CRC from the payload bits and compare this with the CRC that was received. A mismatch indicates that an error occurred.

Hamming distance based checks

If we want to detect d bit errors in an n-bit word, we can map every n-bit word into a bigger (n+d+1)-bit word so that the minimum Hamming distance between valid codewords is d+1. Then, if a received (n+d+1)-bit word does not match any valid codeword, the receiver can successfully flag it as an errored word. Moreover, d or fewer errors can never transform one valid codeword into another, because the Hamming distance between valid codewords is at least d+1; such errors only lead to invalid words, which are detected correctly. Given a stream of m*n bits, we can detect up to d bit errors per word by applying the above method to every n-bit word; in fact, we can detect a maximum of m*d errors if every n-bit word is transmitted with at most d errors.
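The Hamming distance itself is simple to compute; the sketch below (in Python, with illustrative names) checks the minimum distance of a code, here the even-parity extension of 3-bit words, which has minimum distance 2 and therefore detects any single-bit error (d = 1):

```python
from itertools import combinations

def hamming_distance(a: str, b: str) -> int:
    """Number of bit positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

def min_distance(codewords: list[str]) -> int:
    """Minimum pairwise Hamming distance of a code; a code with minimum
    distance d+1 can detect up to d bit errors."""
    return min(hamming_distance(a, b) for a, b in combinations(codewords, 2))
```

For the even-parity code on 3-bit words, the eight valid codewords are the even-weight 4-bit strings, and `min_distance` confirms the minimum distance is 2.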


The above methods are sufficient to determine whether data has been received in error, but often this is not enough. Consider an application such as simplex teletype over radio (SITOR), where a message must be received both quickly and completely without error. Merely knowing where the errors occurred may not be enough, since the message is then incomplete. And if the receiver waits for the message to be repeated (since the link is simplex), it may have to wait a long time for the repetition to fill the gaps left by the errors, so the message is no longer received quickly.

It would be advantageous if the receiver could somehow determine what the error was and thus correct it. Is this even possible? Yes, consider the NATO phonetic alphabet -- if a sender were to be sending the word "WIKI" with the alphabet by sending "WHISKEY INDIA KILO INDIA" and this was received (with * signifying letters received in error) as "W***KEY I**I* **LO **DI*", it would be possible to correct all the errors here since there is only one word in the NATO phonetic alphabet which starts with "W" and ends in "KEY", and similarly for the other words. This idea is also present in some error correcting codes (ECC).

Error-correcting schemes also have their limitations. Some can correct a certain number of bit errors and only detect further numbers of bit errors. Codes which can correct one error are termed single error correcting (SEC), and those which detect two are termed double error detecting (DED). There are codes which can correct and detect more errors than these.

Error-correcting code

An error-correcting code (ECC) is a code in which each data signal conforms to specific rules of construction so that departures from this construction in the received signal can generally be automatically detected and corrected. It is used in computer data storage, for example in dynamic RAM, and in data transmission. Examples include Hamming code, Reed-Solomon code, Reed-Muller code, Binary Golay code, convolutional code, and turbo code. The simplest error correcting codes can correct single-bit errors and detect double-bit errors. Other codes can detect or correct multi-bit errors. ECC memory provides greater data accuracy and system uptime by protecting against soft errors in computer memory.

Shannon's theorem is an important result in error correction which describes the maximum attainable efficiency of an error-correcting scheme versus the levels of noise interference expected. In general, these methods put redundant information into the data stream following certain algebraic or geometric relations so that the decoded stream, if damaged in transmission, can be corrected. The effectiveness of the coding scheme is measured in terms of the coding gain, which is the difference between the SNR levels of the uncoded and coded systems required to reach the same BER levels.

Two error control strategies have been popular in practice:

Forward Error Correction (FEC), which employs error correcting codes to combat bit errors (due to channel imperfections) by adding redundancy (henceforth parity bits) to information packets before they are transmitted. This redundancy is used by the receiver to detect and correct errors.

Automatic Repeat Request (ARQ), wherein only error detection capability is provided and no attempt to correct any packets received in error is made; instead it is requested that the packets received in error be retransmitted.

The ARQ strategy is generally preferred for several reasons. The main reason is that the number of overhead bits needed to implement an error detection scheme is much less than the number of bits needed to correct the same errors.

The FEC strategy is mainly used in links where retransmission is impossible or impractical. The FEC strategy is usually implemented in the physical layer and is transparent to upper layers of the protocol. When the FEC strategy is used, the transmitter sends redundant information along with the original bits and the receiver does its best to find and correct errors. The number of redundant bits in FEC is much larger than in ARQ.

ARQ is simple and achieves reasonable throughput levels if the error rates are not very large. However, in its simplest form, ARQ leads to variable delays which are not acceptable for real-time services. FEC schemes maintain constant throughput and have bounded time delay. However, the post-decoding error rate rapidly increases with increasing channel error rate. In order to obtain high system reliability, a variety of error patterns must be corrected. This means that a powerful long code is necessary, which makes the coder-decoder pair hard to implement and also imposes a high transmission overhead. Further complicating matters, the wireless channel is non-stationary, and the channel bit error rate varies over time. Typical FEC schemes are stationary and must be implemented to guarantee a certain Quality of Service (QoS) requirement for the worst-case channel characteristics. As a consequence, FEC techniques carry unnecessary overhead that reduces throughput when the channel is relatively error free.
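The ARQ behavior described above, retransmitting until an error-free copy arrives, can be sketched as a toy stop-and-wait simulation (Python; the channel model and names are my own assumptions, not any standard's):

```python
import random

def stop_and_wait_arq(frames, error_rate=0.3, seed=1):
    """Toy stop-and-wait ARQ: the sender repeats each frame until the
    receiver's (simulated) error check passes and an ACK comes back.
    Returns the delivered frames and the total transmission count."""
    rng = random.Random(seed)
    delivered, transmissions = [], 0
    for frame in frames:
        while True:
            transmissions += 1
            corrupted = rng.random() < error_rate   # simulated noisy channel
            if not corrupted:                       # receiver ACKs the frame
                delivered.append(frame)
                break                               # else NAK: retransmit
    return delivered, transmissions
```

Running this with a higher `error_rate` shows the variable delay the text mentions: every frame eventually arrives, but the number of transmissions (and hence the latency) grows with the channel error rate.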






ECE124 Data Communications - Research Work No. 1 - Advancement of Data Communications in the 21st Century

ECE 124

Research Work No. 1

Advancement of Data Communications in the 21st Century

Enhancements in Internet Protocol (IP) Communications with Video Advancement and Rich Media technologies

Used in conjunction with Cisco's industry-leading IP Telephony office phones, the Cisco Video Telephony (VT) Advantage delivers affordable, high-quality, easy-to-use person-to-person video seamlessly integrated with phone conversations. In addition to VT Advantage, Cisco is also rolling out an assortment of new rich-media collaboration, security and interoperability features to further boost its IP Communications family of products. (2004)

Bluetooth 2.0+EDR Access Server

Bluegiga has upgraded its successful WRAP Access Server product family to support Bluetooth 2.0+EDR. Enhanced Data Rate offers up to three times the data rate of existing Bluetooth 1.2 solutions. The new WRAP Access Server is equipped with Bluegiga's WT11 class 1 modules, so range and general performance are also improved. The flash memory has doubled in size to 32MB, and the operating system has been updated to version 3.1, now featuring, for example, Linux kernel 2.6. This offers a new level of performance and security to users of the WRAP Access Server.

Bluegiga offers three different variants of the Access Server: the 2293, offering three Bluetooth radios and up to 21 simultaneous connections, and the 2291, supporting 7 connections via one Bluetooth radio; the 2291 is also available with an external antenna connector. All of these versions can be ordered with a higher-encryption option, and the product is 100% RoHS compliant. Bluegiga also now offers a new battery-pack accessory for the WRAP Access Server, providing over 12 hours of operation. (2006)

Wireless Local Energy Metering (Wi-LEM)

LEM launches the Wi-LEM (wireless local energy meter) family of components, an innovative solution that allows electricity consumption to be monitored and reduced. By using wireless communication, Wi-LEM greatly reduces the time, cost and disruption involved in deploying a submetering installation, both increasing the potential financial savings and removing many of the barriers to adopting this proven approach to reducing energy consumption.

There are three parts to the Wi-LEM family. Energy meter nodes are assemblies of up to three current transducers with a signal processing module. They can be deployed to measure energy consumption at any point in the power cabinet and transmit the data. Mesh nodes are repeaters linking various nodes; they enable wireless communications throughout a large installation. The mesh gate is a gateway managing the mesh network, providing data through a serial interface to a PC.

By measuring active, reactive and apparent energy plus maximum current and minimum voltage, energy meter nodes provide much more information than a simple submeter. A variety of energy meter nodes - all of which have their accuracy certified to IEC62053 Active Energy Class 1 and Reactive Energy Class 3 - are available for 120 and 240V AC voltage and configured for nominal currents from 5 to 100A. The compact sized, split-core transducers can easily be installed inside the limited free space of existing cabinets. Energy meter nodes take measurements at 5 to 30min intervals and transmit the results over the 2.4GHz ISM band.

The technologies make installation and commissioning very easy. Mesh nodes act as repeaters, and can be added to the network without any need for additional configuration or programming. The mesh gate is a stand-alone wireless network management gateway that connects the transducer network with a PC using a serial interface (RS232 or RS485) with the Modbus RTU protocol. Each mesh gate allows monitoring of up to 240 energy meter nodes. The 802.15.4 communication standard has proven robustness in industrial and commercial environments. The mesh gate and mesh nodes were developed in close co-operation with Millennial Net, a leader in wireless sensor mesh network technology. (2006)

Long-range Wireless for Handhelds

With its new Jett-Wave Radio option, Two Technologies now offers its most popular brands of handheld computers with the longest range wireless data communications solution in the industry. By taking advantage of the Jett-Pack peripheral connection system, the Jett-Wave Radio can be easily added to many of Two Technologies' handheld computers, including the Jett-XL and Jett-eye. With this module, Two Technologies' handheld computers can wirelessly communicate with equipment and other handheld computers at longer ranges - up to several kilometres in line-of-sight applications and up to 900m in typical indoor/urban environments.

Incorporating a 900MHz spread spectrum radio supplied by MaxStream, the Jett-Wave Radio supports various communication configurations, including point-to-point, point-to-multipoint, and multidrop networking topologies. With this module, Two Technologies' handheld computers can selectively communicate different sets of data to specific points with minimal configuration. And because the Jett-Wave Radio supports 65,000 network addresses on 10 hop sequences, multiple handheld computers can be configured to operate on different channels or to communicate with each other wirelessly within the security of a single network. (2006)

WiMAX Broadband Wireless Technology Access

WiMAX (Worldwide Interoperability for Microwave Access), based on the IEEE 802.16 standard, is expected to enable true broadband speeds over wireless networks at a price point that enables mass-market adoption. WiMAX is the only wireless standard today that has the ability to deliver true broadband speeds and help make the vision of pervasive connectivity a reality.

There are two main applications of WiMAX today: fixed WiMAX applications are point-to-multipoint enabling broadband access to homes and businesses, whereas mobile WiMAX offers the full mobility of cellular networks at true broadband speeds. Both fixed and mobile applications of WiMAX are engineered to help deliver ubiquitous, high-throughput broadband wireless services at a low cost.

Mobile WiMAX is based on OFDMA (Orthogonal Frequency Division Multiple Access) technology, which has inherent advantages in throughput, latency, spectral efficiency, and advanced antenna support, ultimately enabling it to provide higher performance than today's wide-area wireless technologies. Furthermore, many next-generation 4G wireless technologies may evolve toward OFDMA and all-IP networks as the ideal for delivering cost-effective wireless data services.

Intel is poised to deliver the key components needed for successful WiMAX networks. It delivered the fixed WiMAX solution, Intel® PRO/Wireless 5116 wireless modem, and is now shipping a fixed/mobile dual-mode solution, Intel® WiMAX Connection 2250. This highly cost-effective solution was designed to support both standards with an easy upgrade path from fixed to mobile and is expected to further accelerate the deployment of WiMAX networks. (2005)

Third Generation (3G) Technology

3G technology is used in the context of mobile phone standards. The services associated with 3G provide the ability to transfer simultaneously both voice data (a telephone call) and non-voice data (such as downloading information, exchanging email, and instant messaging). In marketing 3G services, video telephony has often been used as the killer application for 3G.

The first country which introduced 3G on a large commercial scale was Japan. In 2005, about 40% of subscribers used 3G networks only, with 2G being on the way out in Japan. It was expected that during 2006 the transition from 2G to 3G would be largely completed in Japan, and upgrades to the next 3.5G stage with 3 Mbit/s data rates were underway.

The most significant features offered by third generation (3G) mobile technologies are the significantly greater capacity and broadband capabilities to support larger numbers of voice and data customers, especially in urban centres, plus higher data rates at lower incremental cost than 2G.

3G operates in radio spectrum bands identified by the ITU for Third Generation IMT-2000 mobile services, which are subsequently licensed to operators. 3G uses a 5 MHz channel carrier width to deliver significantly higher data rates and increased capacity compared with 2G networks.

The 5 MHz channel carrier provides optimum use of radio resources for operators who have been granted large, contiguous blocks of spectrum. It also helps to reduce the cost of 3G networks while remaining capable of providing extremely high-speed data transmission to users.

It also allows transmission at 384 kbps for mobile systems and 2 Mbps for stationary systems. 3G users are expected to enjoy greater capacity and improved spectrum efficiency, which will allow them to access global roaming between different 3G networks. (2005)
