# Error Correction Techniques

## Classical Error Syndromes

Classical error detection and correction techniques use $q$-ary (base $q$) $(n,M,d)$ codes to encode $M$ codewords as $q$-ary strings of length $n$ with minimum Hamming distance $d$. The most common class of such codes are binary codes, owing to the simplicity of transmission. Codes can also be defined over a field, where the binary case corresponds to the field $\mathbb{Z}_2$. Defining a code over a field $\mathbb{F}$ allows us to introduce the notion of a linear code: a linear code $C$ is a code which forms a linear subspace of $\mathbb{F}^n$. If $C$ is linear and forms a $k$-dimensional vector space over $\mathbb{F}$, we call $C$ a linear $[n,k,d]$ code.

Linearity is ideal because it provides a simple method of encoding, and eventually decoding, any message we wish to transmit. If $C$ is a linear code, we can define a $k \times n$ generator matrix $G$ for the code whose rows form a basis for $C$. The linear map $a \mapsto aG$ then encodes any length-$k$ message; note that the encoded message always lies in $C$, as every linear combination of the rows of $G$ is in $C$.

We now define the parity check matrix $H$ with respect to $C$. This is the generator matrix of the dual code $C^{\perp}$, and it is used to detect and correct errors in received messages. We have $GH^t = 0$, and consequently $x$ is in $C$ if and only if $xH^t = 0$.

One efficient method for detecting and correcting errors in received messages is the technique known as syndrome decoding. We divide $\mathbb{F}^n$ into cosets of $C$, which partition $\mathbb{F}^n$ (a result following from Lagrange's Theorem). To each of these cosets we assign a coset leader: an element of least weight within the coset.
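The encoding map $a \mapsto aG$ and the parity check $xH^t = 0$ can be illustrated concretely. The sketch below uses the standard binary $[7,4,3]$ Hamming code in systematic form (a stock example chosen for illustration, not a code discussed above):

```python
import numpy as np

# Generator matrix G (k x n) and parity-check matrix H ((n-k) x n)
# for the binary [7,4,3] Hamming code, in systematic form G = [I | A]
# and H = [A^t | I], so that G H^t = 0 over GF(2).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

# The generator and parity-check matrices are orthogonal mod 2.
assert not (G @ H.T % 2).any()

def encode(a):
    """Encode a length-k message a via the linear map a -> aG (mod 2)."""
    return np.array(a) @ G % 2

# Every encoded message lies in C, i.e. in the kernel of the map y -> yH^t.
x = encode([1, 0, 1, 1])
assert not (x @ H.T % 2).any()
```

Systematic form means the first $k$ symbols of each codeword are the message itself, with the remaining $n-k$ symbols acting as parity checks.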
We define the syndrome of an element $y \in \mathbb{F}^n$ to be $S(y) = y H^t$ (with $H^t$ denoting the transpose of $H$) and calculate a table of syndromes, one for each coset leader.

We are now in a position to decode any received word. Suppose the message $x$ was transmitted and $y$ was received; these will differ if any errors occurred in transmission. Once we receive $y$, we calculate its syndrome $S(y)$ and search our table for the coset leader $L$ with equal syndrome, that is, $S(L) = S(y)$. We then recover the transmitted message simply by calculating $x = y - L$. Here we assume that if the received word is not in $C$, an error has occurred; otherwise the message decodes directly, as its syndrome is the zero vector. Indeed, since $GH^t = 0$, we have $yH^t = 0$ if and only if $y \in C$.

There is clearly an upper limit to the number of errors a code can detect or correct. For example, if multiple errors transform the encoded message into another element of $C$ (or into a word that corrects to the wrong element of $C$), we will not detect the error: the message will appear to have been transmitted correctly, despite being wrong upon decoding. Although this is unlikely when transmitting through a channel with sufficiently low error probability, it remains possible, and can only be made less likely by increasing the redundancy of the code, that is, by increasing the minimum distance of the codewords in $C$.

This process of syndrome decoding is one of the most efficient methods for larger codes, and it is therefore desirable to seek a quantum analogue which can be applied to error detection and correction for quantum codes.
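The full syndrome-decoding procedure can be sketched for the same illustrative $[7,4,3]$ Hamming code. Since the code has distance 3, the coset leaders are the zero vector and the seven single-bit errors, which between them exhaust all $2^{n-k} = 8$ syndromes:

```python
import numpy as np

# Parity-check matrix of the binary [7,4,3] Hamming code (systematic form).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
n = H.shape[1]

def syndrome(y):
    """S(y) = y H^t over GF(2), returned as a hashable tuple."""
    return tuple(y @ H.T % 2)

# Syndrome table: map each syndrome to its coset leader (an element of
# least weight in the coset). Here: the zero vector plus all single-bit
# errors, covering every possible syndrome.
table = {syndrome(np.zeros(n, dtype=int)): np.zeros(n, dtype=int)}
for i in range(n):
    e = np.zeros(n, dtype=int)
    e[i] = 1
    table[syndrome(e)] = e

def decode(y):
    """Subtract the coset leader L with S(L) = S(y): x = y - L."""
    return (y - table[syndrome(y)]) % 2

# A codeword with a single flipped bit is restored exactly.
x = np.array([1, 0, 1, 1, 0, 1, 0])   # a codeword, so x H^t = 0
y = x.copy()
y[2] ^= 1                              # single bit-flip error in transit
assert (decode(y) == x).all()
```

Because syndromes are linear, $S(y) = S(x + e) = S(e)$, so the table lookup depends only on the error pattern, never on which codeword was sent.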

## Quantum Error Syndromes

The classical coding theory technique of syndrome decoding does in fact have a quantum mechanical analogue which can be applied to error detection and correction in quantum codes.

For general quantum error correcting codes we work with a group $G$, which is commonly a subgroup of the Pauli group $P_n$. The single-qubit Pauli group $P_1$ consists of the Pauli spin matrices $\sigma^x,\sigma^y,\sigma^z$ and the identity matrix $\mathbb{I}$, each multiplied by the phases $\pm 1, \pm i$. The higher dimensional Pauli groups $P_n$ for $n\geq 2$ are formed by taking $n$-fold tensor products of elements of $P_1$.
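A minimal numerical sketch of these definitions: the single-qubit Paulis close under multiplication up to a phase in $\{\pm 1, \pm i\}$, and elements of $P_n$ arise as Kronecker (tensor) products of single-qubit elements.

```python
import numpy as np

# Single-qubit Pauli matrices; P_1 consists of these and I,
# each multiplied by the phases ±1, ±i.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])          # sigma^x
Y = np.array([[0, -1j], [1j, 0]])       # sigma^y
Z = np.array([[1, 0], [0, -1]])         # sigma^z

# Products close within the group up to a phase, e.g. XY = iZ.
assert np.allclose(X @ Y, 1j * Z)

# An element of P_2: the tensor product Z ⊗ Z, acting on two qubits.
ZZ = np.kron(Z, Z)
assert ZZ.shape == (4, 4)

# Every Pauli squares to the identity (up to phase), so ZZ does too.
assert np.allclose(ZZ @ ZZ, np.eye(4))
```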

This subgroup $G$ is therefore, in general, a set of operators. Within $G$ we then take as a subset the largest set of elements which commute with one another. This becomes what is known as the stabiliser set $S$. As a completely commuting subgroup of $G$ it is automatically an Abelian group, and is therefore often referred to as the stabiliser group.

We define the coding space $T$ as the set of all vectors which are fixed by $S$. That is:

$T = \{|\psi \rangle : P|\psi\rangle = |\psi \rangle \ \text{for all } P\in S\}.$

This set comprises the quantum codewords.
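As a concrete check of this definition, consider the three-qubit bit-flip code (a standard illustrative example, not a code constructed above), whose stabiliser is generated by $M_1 = \sigma^z \otimes \sigma^z \otimes \mathbb{I}$ and $M_2 = \mathbb{I} \otimes \sigma^z \otimes \sigma^z$. Any state $\alpha|000\rangle + \beta|111\rangle$ lies in the coding space $T$:

```python
import numpy as np

I = np.eye(2)
Z = np.diag([1.0, -1.0])

def kron(*ops):
    """n-fold tensor product of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Stabiliser generators of the three-qubit bit-flip code.
M1 = kron(Z, Z, I)
M2 = kron(I, Z, Z)

# |psi> = 0.6|000> + 0.8|111>, a generic element of the coding space T.
psi = np.zeros(8)
psi[0], psi[7] = 0.6, 0.8

# Each stabiliser element fixes |psi>: P|psi> = |psi> for all P in S.
assert np.allclose(M1 @ psi, psi)
assert np.allclose(M2 @ psi, psi)
assert np.allclose((M1 @ M2) @ psi, psi)
```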

We can now follow a procedure similar to the classical case to detect and correct errors in quantum codes. Using some of the group-theoretic properties of the stabiliser group $S$, it is possible to show that our error correcting code will detect all errors $E$ which either lie in $S$ (and so act trivially on the code space) or anticommute with some element of $S$, that is, $\{E,s\} = 0$ for some $s\in S$.

We define the quantum error syndrome via the maps $\sigma_M : G \rightarrow \mathbb{Z}_2$, one for each stabiliser generator $M$, given by:

$\sigma_M(E) = \begin{cases}0 & \text{if } [M,E] =0 \\ 1 & \text{if } \{M,E\} =0\end{cases}$

with $E$ the error operator, and the full syndrome:

$\sigma(E) = (\sigma_{M_1}(E), \dots, \sigma_{M_{n-k}}(E))$

where each $M_i$ is a generator for $S$.

Note that $\sigma$ is a well-defined map, as every correctable error either lies in $S$ or anticommutes with some element of $S$.
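The map $\sigma_M$ can be computed directly from the commutation relations. Continuing with the illustrative three-qubit bit-flip code (generators $M_1 = ZZ\mathbb{I}$, $M_2 = \mathbb{I}ZZ$), single-qubit bit-flip errors produce distinct syndromes:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def kron(*ops):
    """n-fold tensor product of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def sigma(M, E):
    """sigma_M(E): 0 if [M,E] = 0, 1 if {M,E} = 0.
    Any two Pauli operators either commute or anticommute."""
    if np.allclose(M @ E, E @ M):
        return 0
    assert np.allclose(M @ E, -(E @ M))   # must then anticommute
    return 1

# Stabiliser generators of the three-qubit bit-flip code.
M1 = kron(Z, Z, I)
M2 = kron(I, Z, Z)

# A bit flip on qubit 0 anticommutes with M1 but commutes with M2,
# giving syndrome (1, 0); a flip on qubit 1 gives (1, 1).
E0 = kron(X, I, I)
E1 = kron(I, X, I)
assert (sigma(M1, E0), sigma(M2, E0)) == (1, 0)
assert (sigma(M1, E1), sigma(M2, E1)) == (1, 1)
```

Since the three single-qubit flips and the identity yield four distinct syndromes, each such error is uniquely identified, matching the classical coset-leader lookup.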

The quantum error syndrome, much like the classical error syndrome, is a binary string of block length $n-k$, and it is the main focus of the error correction process. Error correction is achieved by measuring the eigenvalue of each generator $M_i$ of $S$. The eigenvalue $\varepsilon_{M_i}$ for each $M_i$ is given by:

$\varepsilon_{M_i}= (-1)^{\sigma_{M_i}(E)}.$

Much like in the classical case, the measured syndrome can be compared to a known list of syndromes, meaning that the precise error operation can be identified and corrected simply by reversing the error-inducing operation.
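The whole cycle, measuring the eigenvalues $\varepsilon_{M_i} = (-1)^{\sigma_{M_i}(E)}$, looking the syndrome up in a table, and reversing the error, can be sketched end to end, again for the illustrative three-qubit bit-flip code:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def kron(*ops):
    """n-fold tensor product of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Stabiliser generators and an encoded state 0.6|000> + 0.8|111>.
M1 = kron(Z, Z, I)
M2 = kron(I, Z, Z)
psi = np.zeros(8)
psi[0], psi[7] = 0.6, 0.8

def measure_syndrome(state):
    """Eigenvalue of each generator on the state: eps = (-1)^sigma_M(E).
    The state is always a ±1 eigenvector, so <state|M|state> = ±1."""
    return tuple(int(round(float(state @ (M @ state)))) for M in (M1, M2))

# Known syndrome table: (eps_M1, eps_M2) -> correction operator.
corrections = {(1, 1):   kron(I, I, I),   # no error
               (-1, 1):  kron(X, I, I),   # flip on qubit 0
               (-1, -1): kron(I, X, I),   # flip on qubit 1
               (1, -1):  kron(I, I, X)}   # flip on qubit 2

# Induce a bit-flip error on qubit 1, measure, look up, and reverse it.
corrupted = kron(I, X, I) @ psi
correction = corrections[measure_syndrome(corrupted)]
assert np.allclose(correction @ corrupted, psi)
```

Since each Pauli error squares to the identity, "reversing the error-inducing operation" here simply means applying the identified error operator again.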