Binary erasure channel
In coding theory and information theory, a binary erasure channel (BEC) is a communications channel model. A transmitter sends a bit (a zero or a one), and the receiver either receives the bit correctly or, with some probability P_e, receives a message that the bit was not received ("erased"). A binary erasure channel with erasure probability P_e is a channel with binary input, ternary output, and probability of erasure P_e. That is, let X be the transmitted random variable with alphabet {0, 1}.
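The channel model above is easy to simulate: each transmitted bit is delivered intact with probability 1 - P_e or replaced by an erasure symbol with probability P_e. A minimal sketch, with the function name `bec` and the erasure marker 'e' chosen for illustration:

```python
import random

def bec(bits, p_e, seed=0):
    """Pass bits through a binary erasure channel: each bit arrives
    intact with probability 1 - p_e, or is replaced by the erasure
    symbol 'e' with probability p_e. Bits are never flipped."""
    rng = random.Random(seed)
    return ['e' if rng.random() < p_e else b for b in bits]

received = bec([0, 1, 1, 0, 1], p_e=0.5)
```

Note that unlike a binary symmetric channel, the receiver always knows which positions were lost: a non-erased symbol is guaranteed to equal the transmitted bit.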
Alphabet
An alphabet is a standardized set of basic written graphemes (called letters) representing phonemes, units of sound that distinguish words, of certain spoken languages. Not all writing systems represent language in this way; in a syllabary, each character represents a syllable, and logographic systems use characters to represent words, morphemes, or other semantic units. The Egyptians created the first alphabet in a technical sense.
Channel capacity
Channel capacity, in electrical engineering, computer science, and information theory, is the tight upper bound on the rate at which information can be reliably transmitted over a communication channel. Following the terms of the noisy-channel coding theorem, the channel capacity of a given channel is the highest information rate (in units of information per unit time) that can be achieved with arbitrarily small error probability. Information theory, developed by Claude E. Shannon, defines the notion of channel capacity and provides a mathematical model by which it can be computed.
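Two standard binary channels have well-known closed-form capacities, which makes the notion concrete: C = 1 - p for the binary erasure channel with erasure probability p, and C = 1 - H2(p) for the binary symmetric channel with crossover probability p, where H2 is the binary entropy function. A sketch with illustrative helper names:

```python
import math

def h2(p):
    """Binary entropy function, in bits per symbol."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bec_capacity(p_erase):
    # Erased symbols carry no information, so a fraction p of the
    # channel uses is wasted: C = 1 - p.
    return 1.0 - p_erase

def bsc_capacity(p_flip):
    # Binary symmetric channel: C = 1 - H2(p), which drops to zero
    # when p = 1/2 (the output is then independent of the input).
    return 1.0 - h2(p_flip)
```

The noisy-channel coding theorem says rates below these values are achievable with arbitrarily small error probability, and rates above them are not.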
Noisy-channel coding theorem
In information theory, the noisy-channel coding theorem (sometimes Shannon's theorem or Shannon's limit) establishes that for any given degree of noise contamination of a communication channel, it is possible to communicate discrete data (digital information) nearly error-free up to a computable maximum rate through the channel. This result was presented by Claude Shannon in 1948 and was based in part on earlier work and ideas of Harry Nyquist and Ralph Hartley.
Linear code
In coding theory, a linear code is an error-correcting code for which any linear combination of codewords is also a codeword. Linear codes are traditionally partitioned into block codes and convolutional codes, although turbo codes can be seen as a hybrid of these two types. Linear codes allow for more efficient encoding and decoding algorithms than other codes (cf. syndrome decoding). Linear codes are used in forward error correction and are applied in methods for transmitting symbols.
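The defining property, closure under linear combinations, can be checked directly for a small binary code. A sketch with a made-up [5,2] generator matrix G (any full-rank binary matrix would do); encoding multiplies the message row vector by G over GF(2):

```python
import itertools

# Generator matrix of a tiny [5,2] binary linear code (illustrative).
G = [[1, 0, 1, 1, 0],
     [0, 1, 0, 1, 1]]

def encode(msg):
    """Encode a 2-bit message: multiply the row vector msg by G
    over GF(2) (all arithmetic modulo 2)."""
    return tuple(sum(m * g for m, g in zip(msg, col)) % 2
                 for col in zip(*G))

codewords = {encode(m) for m in itertools.product([0, 1], repeat=2)}

# Closure: over GF(2) a linear combination is a bitwise XOR, and the
# XOR of any two codewords is again a codeword.
for a in codewords:
    for b in codewords:
        assert tuple(x ^ y for x, y in zip(a, b)) in codewords
```

This closure is what makes syndrome decoding possible: errors can be analyzed relative to the all-zero codeword, which every linear code contains.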
Block code
In coding theory, block codes are a large and important family of error-correcting codes that encode data in blocks. There is a vast number of examples for block codes, many of which have a wide range of practical applications. The abstract definition of block codes is conceptually useful because it allows coding theorists, mathematicians, and computer scientists to study the limitations of all block codes in a unified way.
Communication channel
A communication channel refers either to a physical transmission medium such as a wire, or to a logical connection over a multiplexed medium such as a radio channel in telecommunications and computer networking. A channel is used for information transfer of, for example, a digital bit stream, from one or several senders to one or several receivers. A channel has a certain capacity for transmitting information, often measured by its bandwidth in Hz or its data rate in bits per second.
Hadamard code
The Hadamard code is an error-correcting code named after Jacques Hadamard that is used for error detection and correction when transmitting messages over very noisy or unreliable channels. In 1971, the code was used to transmit photos of Mars back to Earth from the NASA space probe Mariner 9. Because of its unique mathematical properties, the Hadamard code is not only used by engineers, but also intensely studied in coding theory, mathematics, and theoretical computer science.
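One standard construction encodes a k-bit message x as the length-2^k string of inner products <x, y> mod 2 over all y in {0,1}^k; any two distinct codewords then differ in exactly half of their positions, which is what makes the code so robust to noise. A sketch, with the function name chosen for illustration:

```python
import itertools

def hadamard_encode(msg_bits):
    """Hadamard code: the codeword lists <x, y> mod 2 for every
    y in {0,1}^k, giving a codeword of length 2**k for k message bits."""
    k = len(msg_bits)
    return [sum(x * y for x, y in zip(msg_bits, ybits)) % 2
            for ybits in itertools.product([0, 1], repeat=k)]
```

For k = 3 each codeword has length 8, and distinct codewords are at Hamming distance exactly 4, i.e. the code has relative distance 1/2 at a very low rate of k / 2^k.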
Error correction code
In computing, telecommunication, information theory, and coding theory, forward error correction (FEC) or channel coding is a technique used for controlling errors in data transmission over unreliable or noisy communication channels. The central idea is that the sender encodes the message in a redundant way, most often by using an error correction code or error correcting code (ECC). The redundancy allows the receiver not only to detect errors that may occur anywhere in the message, but often to correct a limited number of errors.
Linear network coding
In computer networking, linear network coding is a technique in which intermediate nodes transmit data from source nodes to sink nodes by means of linear combinations. Linear network coding may be used to improve a network's throughput, efficiency, and scalability, as well as to reduce attacks and eavesdropping. The nodes of a network take several packets and combine them for transmission. This process may be used to attain the maximum possible information flow in a network.
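The simplest linear combination is a bytewise XOR over GF(2), as in the classic butterfly-network example: a relay forwards one coded packet instead of two, and each sink recovers the packet it is missing by XORing again. A sketch with an illustrative helper name:

```python
def xor_packets(a, b):
    """Combine two equal-length packets with bytewise XOR, the
    simplest linear combination over GF(2)."""
    return bytes(x ^ y for x, y in zip(a, b))

p1, p2 = b"hello", b"world"
coded = xor_packets(p1, p2)  # the relay forwards this single packet

# A sink that already holds p1 recovers p2, and vice versa,
# because XOR is its own inverse.
assert xor_packets(coded, p1) == p2
assert xor_packets(coded, p2) == p1
```

More general schemes use random coefficients over a larger finite field, but the principle is the same: sinks solve a small linear system to recover the original packets.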
Greek alphabet
The Greek alphabet has been used to write the Greek language since the late 9th or early 8th century BC. It is derived from the earlier Phoenician alphabet, and was the earliest known alphabetic script to have distinct letters for vowels as well as consonants. In Archaic and early Classical times, the Greek alphabet existed in many local variants, but, by the end of the 4th century BC, the Euclidean alphabet, with 24 letters, ordered from alpha to omega, had become standard, and it is this version that is still used for Greek writing today.
Repetition code
In coding theory, the repetition code is one of the most basic linear error-correcting codes. In order to transmit a message over a noisy channel that may corrupt the transmission in a few places, the idea of the repetition code is to just repeat the message several times. The hope is that the channel corrupts only a minority of these repetitions. This way the receiver will notice that a transmission error occurred since the received data stream is not the repetition of a single message, and moreover, the receiver can recover the original message by looking at the received message in the data stream that occurs most often.
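A common bit-level formulation of this idea repeats each bit n times and decodes by majority vote within each block, which corrects up to floor((n-1)/2) errors per block. A sketch with illustrative names:

```python
from collections import Counter

def repeat_encode(bits, n=3):
    """Repeat each bit n times."""
    return [b for b in bits for _ in range(n)]

def repeat_decode(stream, n=3):
    """Majority vote within each block of n received symbols."""
    out = []
    for i in range(0, len(stream), n):
        block = stream[i:i + n]
        out.append(Counter(block).most_common(1)[0][0])
    return out
```

With n = 3, a single flipped bit in any block is outvoted by the two intact copies, so the original message is recovered exactly.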
Cyrillic alphabets
Numerous Cyrillic alphabets are based on the Cyrillic script. The early Cyrillic alphabet was developed in the 9th century AD and replaced the earlier Glagolitic script developed by the Byzantine theologians Cyril and Methodius. It is the basis of alphabets used in various languages, past and present, of Slavic origin, and in non-Slavic languages influenced by Russian. As of 2011, around 252 million people in Eurasia use it as the official alphabet for their national languages. About half of them are in Russia.
History of the Greek alphabet
The history of the Greek alphabet starts with the adoption of Phoenician letter forms in the 9th–8th centuries BC during early Archaic Greece and continues to the present day. The Greek alphabet was developed during the Iron Age, centuries after the loss of Linear B, the syllabic script that was used for writing Mycenaean Greek until the Late Bronze Age collapse and Greek Dark Age. This article concentrates on the development of the alphabet before the modern codification of the standard Greek alphabet.
Turbo code
In information theory, turbo codes (originally in French Turbocodes) are a class of high-performance forward error correction (FEC) codes developed around 1990–91, but first published in 1993. They were the first practical codes to closely approach the maximum channel capacity or Shannon limit, a theoretical maximum for the code rate at which reliable communication is still possible given a specific noise level. Turbo codes are used in 3G/4G mobile communications.
Syriac alphabet
The Syriac alphabet (ܐܠܦ ܒܝܬ ܣܘܪܝܝܐ ʾālep̄ bêṯ Sūryāyā) is a writing system primarily used to write the Syriac language since the 1st century AD. It is one of the Semitic abjads descending from the Aramaic alphabet through the Palmyrene alphabet, and shares similarities with the Phoenician, Hebrew, Arabic, and Sogdian alphabets, the last being the precursor and a direct ancestor of the traditional Mongolian scripts. Syriac is written from right to left in horizontal lines. It is a cursive script where most—but not all—letters connect within a word.
Coding theory
Coding theory is the study of the properties of codes and their respective fitness for specific applications. Codes are used for data compression, cryptography, error detection and correction, data transmission and data storage. Codes are studied by various scientific disciplines—such as information theory, electrical engineering, mathematics, linguistics, and computer science—for the purpose of designing efficient and reliable data transmission methods.
Latin alphabet
The Latin alphabet, also known as the Roman alphabet, is the collection of letters originally used by the ancient Romans to write the Latin language. Largely unaltered with the exception of extensions (such as diacritics), it forms the Latin script that is used to write many modern European languages, including English. With modifications, it is also used for other alphabets, such as the Vietnamese alphabet. Its modern repertoire is standardised as the ISO basic Latin alphabet.
Serbian Cyrillic alphabet
The Serbian Cyrillic alphabet (Српска ћирилица / Srpska ćirilica, sr̩̂pskaː tɕirǐlitsa) is a variation of the Cyrillic script used to write the Serbian language, adapted in 1818 by the Serbian philologist and linguist Vuk Karadžić. It is one of the two alphabets used to write modern standard Serbian, the other being Gaj's Latin alphabet. Karadžić based his alphabet on the previous Slavonic-Serbian script, following the principle of "write as you speak and read as it is written", removing obsolete letters and letters representing iotated vowels, introducing ⟨J⟩ from the Latin alphabet instead, and adding several consonant letters for sounds specific to Serbian phonology.
Low-density parity-check code
In information theory, a low-density parity-check (LDPC) code is a linear error correcting code, a method of transmitting a message over a noisy transmission channel. An LDPC code is constructed using a sparse Tanner graph (a subclass of the bipartite graph). LDPC codes are capacity-approaching codes, which means that practical constructions exist that allow the noise threshold to be set very close to the theoretical maximum (the Shannon limit) for a symmetric memoryless channel.
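The defining object is the parity-check matrix H: a word c is a codeword exactly when H·c = 0 over GF(2), and for an LDPC code H is sparse (few ones per row and column). A sketch with a tiny illustrative matrix, far too small to be genuinely "low-density" but enough to show how H defines the code:

```python
# Toy parity-check matrix (illustrative only; a real LDPC matrix is
# large and sparse). Each row demands that the indicated bits XOR to 0.
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]

def syndrome(word):
    """Compute H * word over GF(2); the all-zero syndrome means
    every parity check is satisfied."""
    return [sum(h * c for h, c in zip(row, word)) % 2 for row in H]

def is_codeword(word):
    return all(s == 0 for s in syndrome(word))
```

Here the checks are systematic (bit 4 = bit 1 XOR bit 2, and so on), so for the message bits (1, 0, 1) the word (1, 0, 1, 1, 1, 0) satisfies all three rows. Practical LDPC decoders exploit the sparsity of H with iterative belief propagation on the Tanner graph rather than checking words directly.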