
Saturday, February 10, 2007

ECE124 Data Communications - Research Work No. 2 - Error Control


Error control coding provides the means to protect data from errors. Data transferred from one place to another has to be transferred reliably. Unfortunately, in many cases the physical link cannot guarantee that all bits will be transferred without errors. It is then the responsibility of the error control algorithm to detect those errors and, in some cases, correct them so that upper layers see an error-free link.

In computer science and information theory, error correction and detection have great practical importance. Error detection is the ability to detect errors caused by noise or other impairments during transmission from the transmitter to the receiver. Error correction additionally enables localizing the errors and correcting them. Given the goal of error correction, error detection alone may seem insufficient. However, error-correction schemes can be computationally intensive, or can require so much redundant data that they become prohibitive for certain applications. In some applications, such as a sender-receiver system, error correction can be achieved with only a detection scheme working in tandem with an automatic repeat request scheme, which notifies the sender that a portion of the data was received incorrectly and needs to be retransmitted. Where efficiency is important, however, it is possible to detect and correct errors with far less redundant data.

ERROR DETECTION

Several schemes exist to achieve error detection, and most are quite simple. They generally involve check bits: extra bits that accompany the data bits for the purpose of error detection.

Repetition schemes

Variations on this theme exist. Given a stream of data to be sent, the data is broken up into blocks of bits, and each block is sent some predetermined number of times. For example, to send "1011", we may repeat the block three times.

Suppose we send "1011 1011 1011", and this is received as "1010 1011 1011". As one group is not the same as the other two, we can determine that an error has occurred. This scheme is not very efficient, and it is susceptible to problems if the error occurs in exactly the same place in each group (e.g. "1010 1010 1010" in the example above would be accepted as correct).

The scheme, however, is extremely simple, and it is in fact used in some transmissions of numbers stations.
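As a minimal sketch of this scheme in Python (the three-fold repetition and the 4-bit block follow the example above):

    # Repetition scheme: send each block a fixed number of times and flag
    # an error whenever the received copies disagree.

    REPEATS = 3  # predetermined number of repetitions, as in the example

    def encode(block):
        """Repeat the block, e.g. '1011' -> '1011 1011 1011'."""
        return " ".join([block] * REPEATS)

    def check(received):
        """Return True if all received copies agree (no error detected)."""
        copies = received.split()
        return all(copy == copies[0] for copy in copies)

    print(check(encode("1011")))          # True  - no error detected
    print(check("1010 1011 1011"))        # False - error detected
    print(check("1010 1010 1010"))        # True  - the undetectable case above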

Parity schemes

Given a stream of data to be sent, the data is broken up into blocks of bits, and the number of 1 bits in each block is counted. A "parity bit" appended to the block is then set or cleared according to whether the number of one bits is odd or even. If the checked blocks overlap, the parity bits can be used to isolate the error, and even correct it if the error is confined to a single bit: this is the principle of the Hamming code.

There is a limitation to parity schemes. A parity bit is only guaranteed to detect an odd number of bit errors (one, three, five, and so on). If an even number of bits (two, four, six, and so on) are in error, the parity check still passes even though the data is corrupt.
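A minimal sketch of even parity in Python (the 4-bit data block is just an illustrative choice):

    # Even parity: the appended parity bit makes the total number of 1s even.

    def parity_bit(block):
        """Return the even-parity bit for a string of '0'/'1' characters."""
        return "1" if block.count("1") % 2 == 1 else "0"

    def parity_ok(block_with_parity):
        """True if the block plus its parity bit has an even number of 1s."""
        return block_with_parity.count("1") % 2 == 0

    sent = "1011" + parity_bit("1011")    # '10111'
    print(parity_ok(sent))                # True  - accepted
    print(parity_ok("10011"))             # False - one flipped bit is caught
    print(parity_ok("10010"))             # True  - two flipped bits slip through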

Polarity schemes

One less commonly used form of error correction and detection is transmitting a polarity-reversed bitstream simultaneously with the bitstream it is meant to protect. This scheme is very weak at detecting bit errors, and only marginally useful for byte or word error detection and correction. However, at the physical layer in the OSI model, this scheme can aid in error correction and detection.

Polarity symbol reversal is (probably) the simplest form of Turbo code, but technically not a Turbo code at all.

  • Turbo codes DO NOT work at the bit level.
  • Turbo codes typically work at the character or symbol level depending on their placement in the OSI model.
  • Character here refers to Baudot, ASCII-7, the 8-bit byte, or the 16-bit word.

Original transmitted symbol: 1011

  • transmit 1011 on carrier wave 1 (CW1)
  • transmit 0100 on carrier wave 2 (CW2)

Receiver end

  • do the bit polarities of CW1 and CW2 differ?
  • if CW1 == CW2 at any bit position, signal a bit error (this triggers more complex ECC)

This polarity reversal scheme works fairly well at low data rates (below 300 baud) with very redundant data like telemetry data.
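A minimal sketch of the receiver-end check in Python, assuming the two carrier-wave bitstreams are delivered as strings of bits:

    # Polarity reversal: CW2 carries the bitwise complement of CW1, so any
    # position where the two received streams AGREE indicates a bit error.

    def complement(bits):
        """Polarity-reverse a bitstream, e.g. '1011' -> '0100'."""
        return "".join("1" if b == "0" else "0" for b in bits)

    def error_positions(cw1, cw2):
        """Positions where CW1 == CW2, i.e. where the polarity check fails."""
        return [i for i, (a, b) in enumerate(zip(cw1, cw2)) if a == b]

    cw1 = "1011"              # symbol on carrier wave 1
    cw2 = complement(cw1)     # polarity-reversed copy on carrier wave 2

    print(error_positions(cw1, cw2))        # []  - clean transmission
    print(error_positions("1011", "0110"))  # [2] - bit 2 flagged for ECC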

Cyclic redundancy checks

Many more complex error detection (and correction) methods make use of the properties of finite fields and polynomials over such fields. The cyclic redundancy check treats a block of data as the coefficients of a polynomial and divides it by a fixed, predetermined polynomial. The coefficients of the remainder of this division are taken as the redundant data bits, the CRC. On reception, one can recompute the CRC from the payload bits and compare it with the CRC that was received. A mismatch indicates that an error occurred.
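As a sketch, the CRC computation is long division over GF(2), where subtraction is XOR; the 14-bit data word and the generator "1011" (x^3 + x + 1) below are illustrative choices:

    # Toy CRC: append r zero bits to the data, divide by the generator
    # polynomial using XOR arithmetic, and keep the r-bit remainder.

    def crc_remainder(data, generator):
        """Remainder of (data shifted left by r bits) / generator."""
        r = len(generator) - 1
        bits = list(data + "0" * r)
        for i in range(len(data)):
            if bits[i] == "1":                 # leading 1: XOR in the generator
                for j in range(len(generator)):
                    bits[i + j] = "0" if bits[i + j] == generator[j] else "1"
        return "".join(bits[-r:])

    data = "11010011101100"
    gen = "1011"                               # x^3 + x + 1
    crc = crc_remainder(data, gen)             # '100'
    frame = data + crc

    # Receiver: recompute the CRC over the payload and compare.
    payload, received_crc = frame[:-3], frame[-3:]
    print(crc_remainder(payload, gen) == received_crc)            # True  - accepted
    print(crc_remainder("10010011101100", gen) == received_crc)   # False - error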

Hamming distance based checks

If we want to detect d bit errors in an n-bit word, we can map every n-bit word into a bigger (n+d+1)-bit word so that the minimum Hamming distance between valid mappings is d+1. This way, if one receives an (n+d+1)-bit word that doesn't match any word in the mapping (i.e. one that lies at a Hamming distance x <= d from some valid word), it can successfully be detected as an errored word. Moreover, d or fewer errors will never transform one valid word into another, because the Hamming distance between valid words is at least d+1; such errors only lead to invalid words, which are detected correctly. Given a stream of m*n bits, we can detect x <= d bit errors successfully using the above method on every n-bit word. In fact, we can detect a maximum of m*d errors if every n-bit word is transmitted with at most d errors.
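A minimal sketch in Python; the four-codeword mapping below is a made-up example with minimum Hamming distance 3, so up to d = 2 bit errors per word are detectable:

    from itertools import combinations

    def hamming_distance(a, b):
        """Number of bit positions in which a and b differ."""
        return sum(x != y for x, y in zip(a, b))

    # Toy mapping: every pair of valid words is at distance >= 3.
    codebook = ["000000", "111000", "000111", "111111"]
    print(min(hamming_distance(a, b)
              for a, b in combinations(codebook, 2)))   # 3

    def detect(word):
        """True if the word is a valid codeword, False if an error is detected."""
        return word in codebook

    print(detect("111000"))  # True  - valid codeword
    print(detect("110000"))  # False - 1-bit error detected
    print(detect("110001"))  # False - 2-bit error detected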

ERROR CORRECTION

The above methods are sufficient to determine whether some data has been received in error, but often this is not enough. Consider an application such as simplex teletype over radio (SITOR), where a message must be received quickly and must be completely free of errors. Merely knowing where the errors occurred does not satisfy the second condition, as the message will be incomplete. And if the receiver waits for the message to be repeated (since the link is simplex), the first condition is not satisfied, because the receiver may have to wait a long time for the repetition to fill the gaps left by the errors.

It would be advantageous if the receiver could somehow determine what the error was and thus correct it. Is this even possible? Yes: consider the NATO phonetic alphabet. If a sender were to send the word "WIKI" by transmitting "WHISKEY INDIA KILO INDIA", and this was received (with * signifying letters received in error) as "W***KEY I**I* **LO **DI*", it would be possible to correct all the errors, since WHISKEY is the only word in the NATO phonetic alphabet which starts with "W" and ends in "KEY", and similarly for the other words. This idea is also present in some error-correcting codes (ECC).
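A minimal sketch of that decoding idea in Python, using the "*" wildcard convention from the example (a word is corrected only when exactly one alphabet entry fits):

    # Correct each garbled word by matching it against the NATO alphabet,
    # treating '*' as an unknown letter.

    NATO = ["ALFA", "BRAVO", "CHARLIE", "DELTA", "ECHO", "FOXTROT", "GOLF",
            "HOTEL", "INDIA", "JULIETT", "KILO", "LIMA", "MIKE", "NOVEMBER",
            "OSCAR", "PAPA", "QUEBEC", "ROMEO", "SIERRA", "TANGO", "UNIFORM",
            "VICTOR", "WHISKEY", "XRAY", "YANKEE", "ZULU"]

    def matches(garbled, word):
        return len(garbled) == len(word) and all(
            g in ("*", w) for g, w in zip(garbled, word))

    def correct(garbled):
        candidates = [w for w in NATO if matches(garbled, w)]
        return candidates[0] if len(candidates) == 1 else garbled

    received = ["W***KEY", "I**I*", "**LO", "**DI*"]
    print(" ".join(correct(w) for w in received))
    # WHISKEY INDIA KILO INDIA -> the decoded word is "WIKI"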

Error-correcting schemes also have their limitations. Some can correct a certain number of bit errors and merely detect a greater number of bit errors. Codes which can correct one error are termed single error correcting (SEC), and those which can detect two are termed double error detecting (DED). There are codes which can correct and detect more errors than these.

Error-correcting code

An error-correcting code (ECC) is a code in which each data signal conforms to specific rules of construction so that departures from this construction in the received signal can generally be automatically detected and corrected. It is used in computer data storage, for example in dynamic RAM, and in data transmission. Examples include Hamming code, Reed-Solomon code, Reed-Muller code, Binary Golay code, convolutional code, and turbo code. The simplest error correcting codes can correct single-bit errors and detect double-bit errors. Other codes can detect or correct multi-bit errors. ECC memory provides greater data accuracy and system uptime by protecting against soft errors in computer memory.
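As a sketch of the simplest case, the standard Hamming(7,4) code protects 4 data bits with 3 parity bits (held in positions 1, 2 and 4) and can correct any single-bit error:

    # Hamming(7,4): the syndrome of the three parity checks gives the
    # 1-based position of a single flipped bit, which can then be corrected.

    def encode(d1, d2, d3, d4):
        """Encode 4 data bits into a 7-bit codeword."""
        p1 = d1 ^ d2 ^ d4            # covers positions 1, 3, 5, 7
        p2 = d1 ^ d3 ^ d4            # covers positions 2, 3, 6, 7
        p3 = d2 ^ d3 ^ d4            # covers positions 4, 5, 6, 7
        return [p1, p2, d1, p3, d2, d3, d4]

    def decode(c):
        """Correct a single-bit error (if any) and return the 4 data bits."""
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s3   # 0 means no error detected
        if syndrome:
            c = c[:]
            c[syndrome - 1] ^= 1          # flip the erroneous bit back
        return [c[2], c[4], c[5], c[6]]

    word = encode(1, 0, 1, 1)             # [0, 1, 1, 0, 0, 1, 1]
    word[5] ^= 1                          # inject a single-bit error
    print(decode(word))                   # [1, 0, 1, 1] - corrected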

Shannon's theorem is an important result in error correction: it describes the maximum attainable efficiency of an error-correcting scheme versus the level of noise interference expected. In general, these methods put redundant information into the data stream following certain algebraic or geometric relations so that the decoded stream, if damaged in transmission, can be corrected. The effectiveness of the coding scheme is measured in terms of the coding gain, which is the difference between the SNR levels of the uncoded and coded systems required to reach the same BER level.

Two error control strategies have been popular in practice:

Forward Error Correction (FEC), which employs error correcting codes to combat bit errors (due to channel imperfections) by adding redundancy (henceforth parity bits) to information packets before they are transmitted. This redundancy is used by the receiver to detect and correct errors.

Automatic Repeat Request (ARQ), wherein only error detection capability is provided and no attempt to correct any packets received in error is made; instead it is requested that the packets received in error be retransmitted.

The ARQ strategy is generally preferred for several reasons. The main reason is that the number of overhead bits needed to implement an error detection scheme is much less than the number of bits needed to correct the same errors.

The FEC strategy is mainly used on links where retransmission is impossible or impractical. It is usually implemented in the physical layer and is transparent to the upper layers of the protocol. When the FEC strategy is used, the transmitter sends redundant information along with the original bits, and the receiver does its best to find and correct errors. The number of redundant bits in FEC is much larger than in ARQ.

ARQ is simple and achieves reasonable throughput if error rates are not very high. However, in its simplest form, ARQ leads to variable delays, which are not acceptable for real-time services. FEC schemes maintain constant throughput and have bounded time delay. However, the post-decoding error rate rises rapidly with increasing channel error rate. To obtain high system reliability, a variety of error patterns must be corrected, which calls for a powerful long code; this makes the coder-decoder pair hard to implement and imposes a high transmission overhead. Further complicating matters, the wireless channel is non-stationary, and the channel bit error rate varies over time. Typical FEC schemes are stationary and must be implemented to guarantee a certain Quality of Service (QoS) requirement for the worst-case channel characteristics. As a consequence, FEC techniques carry unnecessary overhead that reduces throughput when the channel is relatively error-free.
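To make the trade-off concrete, here is a minimal stop-and-wait ARQ sketch in Python; the parity check, frame size, and bit error rate are all illustrative assumptions. Note how the number of transmissions, and hence the delay, varies from frame to frame:

    import random

    # Stop-and-wait ARQ: retransmit a frame until the receiver's error
    # check passes. Overhead is one parity bit; the cost is variable delay.

    def with_parity(data):
        return data + ("1" if data.count("1") % 2 else "0")   # even parity

    def channel(frame, ber):
        """Flip each bit independently with probability ber."""
        return "".join(("0" if b == "1" else "1") if random.random() < ber
                       else b for b in frame)

    def send(data, ber):
        """Return the number of transmissions needed for one accepted frame."""
        attempts = 0
        while True:
            attempts += 1
            received = channel(with_parity(data), ber)
            if received.count("1") % 2 == 0:   # parity check passed: ACK
                return attempts                # (even-count errors slip through)

    random.seed(1)
    print([send("10110100", 0.05) for _ in range(5)])   # delay varies per frame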

References:

http://www2.rad.com/networks/1994/err_con/intro.htm

http://mia.ece.uic.edu/~papers/Surveys/pdf00002.pdf

http://en.wikipedia.org/wiki/Error_detection
