Low-density Parity-check Code
Low-density parity-check (LDPC) codes are a class of error correction codes which (together with the closely related turbo codes) have gained prominence in coding theory and information theory since the late 1990s. The codes today are widely used in applications ranging from wireless communications to flash-memory storage. Together with turbo codes, they sparked a revolution in coding theory, achieving order-of-magnitude improvements in performance compared to traditional error correction codes. Central to the performance of LDPC codes is their adaptability to the iterative belief propagation decoding algorithm. Under this algorithm, they can be designed to approach the theoretical limits (channel capacities) of many channels at low computational cost. Theoretical analysis of LDPC codes focuses on sequences of codes of fixed code rate and increasing block length. These sequences are typically tailored to a set of channels. For appropriately designed sequences, the ...
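As a concrete illustration of decoding against a sparse parity-check matrix, here is a minimal Python sketch of hard-decision bit flipping, a simplified cousin of the belief propagation algorithm mentioned above; the matrix below is a hypothetical toy, far smaller and less carefully designed than a real LDPC code:

import numpy as np

# Toy sparse parity-check matrix (hypothetical; real LDPC matrices are
# thousands of columns wide, with similarly few ones per row).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]], dtype=np.uint8)

def bit_flip_decode(H, r, max_iters=20):
    """Hard-decision bit flipping, a stand-in for soft belief propagation:
    repeatedly flip the bit involved in the most failed parity checks until
    H @ c = 0 (mod 2) or the iteration budget runs out."""
    c = r.copy()
    for _ in range(max_iters):
        syndrome = H.dot(c) % 2               # which parity checks fail
        if not syndrome.any():
            return c                          # every check satisfied: a codeword
        votes = H[syndrome == 1].sum(axis=0)  # per-bit count of failed checks
        c[int(np.argmax(votes))] ^= 1         # flip the most-implicated bit
    return c                                  # give up; return the best attempt

# All-zeros is a codeword; a single flipped bit is repaired in one iteration.
print(bit_flip_decode(H, np.array([1, 0, 0, 0, 0, 0], dtype=np.uint8)))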


Error Correction Code
In computing, telecommunication, information theory, and coding theory, forward error correction (FEC) or channel coding is a technique used for controlling errors in data transmission over unreliable or noisy communication channels. The central idea is that the sender encodes the message in a redundant way, most often by using an error correction code, or error correcting code (ECC). The redundancy allows the receiver not only to detect errors that may occur anywhere in the message, but often to correct a limited number of errors. Therefore, a reverse channel to request re-transmission may not be needed. The cost is a fixed, higher forward channel bandwidth. The American mathematician Richard Hamming pioneered this field in the 1940s and invented the first error-correcting code in 1950: the Hamming (7,4) code. FEC can be applied in situations where re-transmissions are costly or impossible, such as one-way communication links or when transmitting to multiple receivers in m ...
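Since the text singles out the Hamming (7,4) code, here is a minimal sketch of encoding and single-error correction; the bit layout (parity bits at 1-indexed positions 1, 2, and 4) is one common convention:

def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming(7,4) codeword.
    Parity bits sit at positions 1, 2, 4 so the receiver's syndrome
    directly names the error position."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute parities; the syndrome is the 1-indexed error position."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # positions 4, 5, 6, 7
    pos = s1 + 2 * s2 + 4 * s3
    if pos:
        c[pos - 1] ^= 1              # flip the single erroneous bit
    return [c[2], c[4], c[5], c[6]]  # recover the 4 data bits

codeword = hamming74_encode([1, 0, 1, 1])
codeword[4] ^= 1                     # corrupt one bit in transit
print(hamming74_correct(codeword))   # [1, 0, 1, 1]: the error is repaired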


DVB-S2
Digital Video Broadcasting - Satellite - Second Generation (DVB-S2) is a digital television broadcast standard that has been designed as a successor for the popular DVB-S system. It was developed in 2003 by the Digital Video Broadcasting Project, an international industry consortium, and ratified by ETSI (EN 302 307) in March 2005. The standard is based on, and improves upon, DVB-S and the electronic news-gathering (or Digital Satellite News Gathering) system used by mobile units for sending sound and images from remote locations worldwide back to their home television stations. DVB-S2 is designed for broadcast services including standard and high-definition television (HDTV), interactive services including Internet access, and (professional) data content distribution. The development of DVB-S2 coincided with the introduction of HDTV and the H.264 (MPEG-4 AVC) video codec. Two new key features added compared to the DVB-S standard are:
* A powerful coding scheme based on a modern LDPC code ...


Sparse Matrix
In numerical analysis and scientific computing, a sparse matrix or sparse array is a matrix in which most of the elements are zero. There is no strict definition regarding the proportion of zero-value elements for a matrix to qualify as sparse but a common criterion is that the number of non-zero elements is roughly equal to the number of rows or columns. By contrast, if most of the elements are non-zero, the matrix is considered dense. The number of zero-valued elements divided by the total number of elements (e.g., ''m'' × ''n'' for an ''m'' × ''n'' matrix) is sometimes referred to as the sparsity of the matrix. Conceptually, sparsity corresponds to systems with few pairwise interactions. For example, consider a line of balls connected by springs from one to the next: this is a sparse system, as only adjacent balls are coupled. By contrast, if the same line of balls were to have springs connecting each ball to all other balls, the system would correspond to a dense matrix. ...
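A minimal Python sketch of the idea: store only the non-zero entries (a dictionary-of-keys layout, one of several common sparse formats) and compute the sparsity as defined above; the matrix is an arbitrary toy example:

def to_dok(dense):
    """Store only non-zero entries as {(row, col): value}, a dictionary-of-keys
    sparse format. Memory scales with the non-zero count, not m * n."""
    return {(i, j): v
            for i, row in enumerate(dense)
            for j, v in enumerate(row)
            if v != 0}

dense = [
    [2, 0, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
]
dok = to_dok(dense)
m, n = len(dense), len(dense[0])
sparsity = 1 - len(dok) / (m * n)   # zero-valued fraction, as defined above
print(dok)        # {(0, 0): 2, (1, 2): 1}
print(sparsity)   # 0.8333... (10 of 12 entries are zero)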


Parity-check Matrix
In coding theory, a parity-check matrix of a linear block code ''C'' is a matrix which describes the linear relations that the components of a codeword must satisfy. It can be used to decide whether a particular vector is a codeword and is also used in decoding algorithms.

Definition

Formally, a parity check matrix ''H'' of a linear code ''C'' is a generator matrix of the dual code, ''C''⊥. This means that a codeword c is in ''C'' if and only if the matrix-vector product ''H''c⊤ = 0 (some authors would write this in an equivalent form, c''H''⊤ = 0). The rows of a parity check matrix are the coefficients of the parity check equations. That is, they show how linear combinations of certain digits (components) of each codeword equal zero. For example, the parity check matrix

H = \begin{bmatrix} 0 & 0 & 1 & 1 \\ 1 & 1 & 0 & 0 \end{bmatrix},

compactly represents the parity check equations

\begin{aligned} c_3 + c_4 &= 0 \\ c_1 + c_2 &= 0 \end{aligned},

that must be satisfied for the vector (c_1, c_2, c_3, c_4) to be a ...
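A short worked check of the definition, using the example matrix above; testing membership is just a matrix-vector product reduced modulo 2:

import numpy as np

# The parity-check matrix from the example above.
H = np.array([[0, 0, 1, 1],
              [1, 1, 0, 0]])

def is_codeword(H, c):
    """c is in the code C iff H @ c = 0 over GF(2)."""
    return not (H.dot(c) % 2).any()

print(is_codeword(H, np.array([1, 1, 0, 0])))  # True:  c1+c2 = 0 and c3+c4 = 0
print(is_codeword(H, np.array([1, 0, 1, 1])))  # False: c1+c2 = 1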


Polar Code (coding Theory)
In information theory, polar codes are linear block error-correcting codes. The code construction is based on a multiple recursive concatenation of a short kernel code which transforms the physical channel into virtual outer channels. When the number of recursions becomes large, the virtual channels tend to have either high reliability or low reliability (in other words, they polarize or become sparse), and the data bits are allocated to the most reliable channels. It is the first code with an explicit construction to provably achieve the channel capacity for symmetric binary-input, discrete, memoryless channels (B-DMC) with polynomial dependence on the gap to capacity. Polar codes were developed by Erdal Arikan, a professor of electrical engineering at Bilkent University. Notably, polar codes have modest encoding and decoding complexity O(n log n), which renders them attractive for many applications. Moreover, the encoding and decoding energy complexity of generalized polar codes can ...
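A minimal sketch of the recursive kernel transform used by polar encoders (the 2x2 kernel F = [[1,0],[1,1]] applied log2(N) times, i.e. x = u · F^{⊗n} over GF(2)); choosing which input positions carry data and which are frozen to zero requires a channel-reliability ordering, which is omitted here:

def polar_transform(u):
    """Apply the 2x2 polarizing kernel recursively over GF(2),
    for block lengths that are powers of two. Cost is O(N log N) XORs."""
    x = list(u)
    n, step = len(x), 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                x[j] ^= x[j + step]   # the 'combine' half of each butterfly
        step *= 2
    return x

print(polar_transform([1, 0, 1, 1]))   # [1, 1, 0, 1]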


5G NR
5G NR (5G New Radio) is a radio access technology (RAT) developed by the 3rd Generation Partnership Project (3GPP) for the 5G (fifth generation) mobile network. It was designed to be the global standard for the air interface of 5G networks. It is based on orthogonal frequency-division multiplexing (OFDM), as is the 4G (fourth generation) Long-Term Evolution (LTE) standard. The 3GPP specification 38 series provides the technical details behind 5G NR, the successor of LTE. The study of 5G NR within 3GPP started in 2015, and the first specification was made available by the end of 2017. While the 3GPP standardization process was ongoing, the industry had already begun efforts to implement infrastructure compliant with the draft standard, with the first large-scale commercial launch of 5G NR occurring at the end of 2018. Since 2019, many operators have deployed 5G NR networks and handset manufacturers have developed 5G NR-enabled handsets. Frequency ...


BCH Code
In coding theory, the Bose–Chaudhuri–Hocquenghem codes (BCH codes) form a class of cyclic error-correcting codes that are constructed using polynomials over a finite field (also called a ''Galois field''). BCH codes were invented in 1959 by French mathematician Alexis Hocquenghem, and independently in 1960 by Raj Chandra Bose and D. K. Ray-Chaudhuri. The name ''Bose–Chaudhuri–Hocquenghem'' (and the acronym ''BCH'') arises from the initials of the inventors' surnames (mistakenly, in the case of Ray-Chaudhuri). One of the key features of BCH codes is that during code design, there is precise control over the number of symbol errors correctable by the code. In particular, it is possible to design binary BCH codes that can correct multiple bit errors. Another advantage of BCH codes is the ease with which they can be decoded, namely, via an algebraic method known as syndrome decoding. This simplifies the design of the decoder for these codes, using small ...
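BCH constructions and syndrome decoding rest on arithmetic in such finite fields. A minimal sketch of multiplication in GF(2^4), reducing by x^4 + x + 1 (0b10011), a common but not the only choice of primitive polynomial:

def gf16_mul(a, b, prim=0b10011):
    """Multiply two elements of GF(2^4), represented as 4-bit integers:
    shift-and-XOR (carry-less) multiplication, reducing by x^4 + x + 1
    whenever a degree-4 term appears."""
    result = 0
    while b:
        if b & 1:
            result ^= a           # add (XOR) the current shifted copy of a
        b >>= 1
        a <<= 1
        if a & 0b10000:           # degree reached 4: reduce modulo prim
            a ^= prim
    return result

# x (0b0010) is primitive here: its powers enumerate all 15 non-zero elements.
elems, p = set(), 1
for _ in range(15):
    elems.add(p)
    p = gf16_mul(p, 0b0010)
assert len(elems) == 15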


Bit Error Rate
In digital transmission, the number of bit errors is the number of received bits of a data stream over a communication channel that have been altered due to noise, interference, distortion or bit synchronization errors. The bit error rate (BER) is the number of bit errors per unit time. The bit error ratio (also BER) is the number of bit errors divided by the total number of transferred bits during a studied time interval. Bit error ratio is a unitless performance measure, often expressed as a percentage. The bit error probability ''pe'' is the expected value of the bit error ratio. The bit error ratio can be considered an approximate estimate of the bit error probability. This estimate is accurate for a long time interval and a high number of bit errors.

Example

As an example, assume this transmitted bit sequence:

1 1 0 0 0 1 0 1 1

and the following received bit sequence:

0 1 0 1 0 1 0 0 1

The number of bit errors (the first, fourth, and eighth bits) is, in this case, 3. The BER is 3 ...
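The arithmetic of the example is easy to reproduce; a minimal sketch:

def bit_error_ratio(sent, received):
    """Bit error ratio: errored bits divided by total transferred bits."""
    assert len(sent) == len(received)
    errors = sum(s != r for s, r in zip(sent, received))
    return errors, errors / len(sent)

sent     = [1, 1, 0, 0, 0, 1, 0, 1, 1]
received = [0, 1, 0, 1, 0, 1, 0, 0, 1]
errors, ber = bit_error_ratio(sent, received)
print(errors, ber)   # 3 errors out of 9 bits -> BER of about 0.333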


OFDM
In telecommunications, orthogonal frequency-division multiplexing (OFDM) is a type of digital transmission used for encoding digital (binary) data on multiple carrier frequencies. OFDM has developed into a popular scheme for wideband digital communication, used in applications such as digital television and audio broadcasting, DSL internet access, wireless networks, power line networks, and 4G/5G mobile communications. OFDM is a frequency-division multiplexing (FDM) scheme that was introduced by Robert W. Chang of Bell Labs in 1966. In OFDM, the incoming bitstream representing the data to be sent is divided into multiple streams. Multiple closely spaced orthogonal subcarrier signals with overlapping spectra are transmitted, with each carrier modulated with bits from the incoming stream, so multiple bits are transmitted in parallel.
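A minimal sketch of the transmit side: in practice the superposition of orthogonal subcarriers is computed with an inverse FFT. The BPSK mapping, subcarrier count, and cyclic-prefix length below are illustrative choices, not taken from any particular standard:

import numpy as np

def ofdm_modulate(bits, n_subcarriers=8, cp_len=2):
    """Map bits to BPSK symbols, place one symbol per subcarrier, and take
    an IFFT to sum the orthogonal subcarriers. A cyclic prefix (the last
    cp_len samples copied to the front) guards against multipath."""
    assert len(bits) == n_subcarriers
    symbols = 1.0 - 2.0 * np.array(bits)   # BPSK: 0 -> +1, 1 -> -1
    time_domain = np.fft.ifft(symbols)     # superpose orthogonal subcarriers
    return np.concatenate([time_domain[-cp_len:], time_domain])

tx = ofdm_modulate([1, 0, 0, 1, 1, 1, 0, 0])
# The receiver strips the prefix and applies an FFT to recover the symbols:
rx = np.fft.fft(tx[2:])
print(np.round(rx.real))   # [-1.  1.  1. -1. -1. -1.  1.  1.]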


Wi-Fi
Wi-Fi is a family of wireless network protocols based on the IEEE 802.11 family of standards, which are commonly used for local area networking of devices and Internet access, allowing nearby digital devices to exchange data by radio waves. These are the most widely used computer networks, used globally in home and small office networks to link devices and to provide Internet access with wireless routers and wireless access points in public places such as coffee shops, restaurants, hotels, libraries, and airports. ''Wi-Fi'' is a trademark of the Wi-Fi Alliance, which restricts the use of the term "''Wi-Fi Certified''" to products that successfully complete interoperability certification testing. Non-compliant hardware is simply referred to as WLAN, and it may or may not work with "''Wi-Fi Certified''" devices. The Wi-Fi Alliance consisted of more than 800 companies from ar ...


10GBASE-T
10 Gigabit Ethernet (abbreviated 10GE, 10GbE, or 10 GigE) is a group of computer networking technologies for transmitting Ethernet frames at a rate of 10 gigabits per second. It was first defined by the IEEE 802.3ae-2002 standard. Unlike previous Ethernet standards, 10GbE defines only full-duplex point-to-point links, which are generally connected by network switches; shared-medium CSMA/CD operation has not been carried over from the previous generations of Ethernet standards, so half-duplex operation and repeater hubs do not exist in 10GbE. The first standard for faster 100 Gigabit Ethernet links was approved in 2010. The 10GbE standard encompasses a number of different physical layer (PHY) standards. A networking device, such as a switch or a network interface controller, may support different PHY types through pluggable PHY modules, such as those based on SFP+. Like previous versions of Ethernet, 10GbE can use either copper or fiber cabling. Maximum distance over copp ...