Outer Code Performance

The error-detecting outer layer will always recognise the correct message, but it also has some probability of accepting an incorrect message: a false decode.

For a message protected by a CRC of $c$ bits, the probability that a random message is incorrectly accepted is $1/2^c$. If the inner Viterbi list decoder outputs a list of size $L$, the probability that none of the $L$ entries is incorrectly accepted is $$ \left( 1 - \frac{1}{2^c} \right)^L $$

Redundancy in the source coding contributes to the strength of the outer code. Each message character is source-encoded into 6 bits, but only 52 of the 64 possible codes are defined. In addition to validating the CRC, the outer code checks that each 6-bit group is one of the 52 valid character codes. The probability that it will incorrectly accept a random 6-bit code is $52/64$, and the probability that it will incorrectly accept a random message of $N$ characters is $(52/64)^N = 1/2^s$, where $s$ is the effective number of bits of redundancy provided by the source encoding.

The combined false decode probability is therefore $$ P_f(L) = 1 - \left( 1 - \frac{1}{2^{(c + s)}} \right)^L $$ where $$ s = N (6 - \log_2 52) \approx 0.3 N $$ Some calculated values are tabulated below for a range of message lengths.

                 3 chars    5 chars    12 chars   25 chars   50 chars   100 chars
$P_f(200)$       0.164 %    0.108 %     0.025 %    0.002 %    0.000 %    0.000 %
$P_f(1000)$      0.815 %    0.539 %     0.126 %    0.008 %    0.000 %    0.000 %
$P_f(5000)$      4.010 %    2.665 %     0.630 %    0.042 %    0.000 %    0.000 %
$P_f(50000)$    33.584 %   23.674 %     6.120 %    0.424 %    0.002 %    0.000 %
$P_f(200000)$   80.542 %   66.061 %    22.322 %    1.684 %    0.009 %    0.000 %
$P_f(1000000)$  99.972 %   99.550 %    71.719 %    8.143 %    0.047 %    0.000 %

If the correct message is not in the list of $L$ Viterbi decodes, then $P_f(L)$ is the probability that an incorrect decode will be reported, and the probability that no decode will be reported is $1 - P_f(L)$.

The list output from the Viterbi decoder is ordered by decreasing likelihood, and the outer code applies the rule that it accepts the first, and therefore the most likely, entry which has valid CRC and source coding. If the correct message is in fact in the list at position $U$, the probability that it will be beaten by a false decode from nearer the head of the list is $P_f(U - 1)$. The decoder's log file records all the decodes accepted by the outer code, so when using long list sizes with short messages it may be worth checking the log file to see whether the correct message has been trumped by a more likely false decode.
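The acceptance rule can be sketched as a simple scan down the list. The names and check-function types here are hypothetical stand-ins for illustration, not the decoder's actual API:

```c
#include <stddef.h>

/* Sketch of the outer-code acceptance rule: scan the Viterbi list,
   ordered by decreasing likelihood, and report the first entry that
   passes both the CRC check and the 52-of-64 character check.
   check_fn is a hypothetical stand-in for the real validation
   routines. */
typedef int (*check_fn)(const unsigned char *msg, size_t len);

long accept_first_valid(const unsigned char *list, size_t entry_len,
                        long L, check_fn valid_crc, check_fn valid_chars)
{
    for (long u = 0; u < L; u++) {
        const unsigned char *msg = list + (size_t)u * entry_len;
        if (valid_crc(msg, entry_len) && valid_chars(msg, entry_len))
            return u;      /* list position of the accepted decode */
    }
    return -1;             /* nothing valid: no decode is reported */
}

/* Toy demonstration: byte 0 models the CRC result, byte 1 the
   character check.  Entries 0 and 1 fail, entry 2 passes. */
static int toy_crc(const unsigned char *m, size_t n)   { (void)n; return m[0]; }
static int toy_chars(const unsigned char *m, size_t n) { (void)n; return m[1]; }

long demo_accept(void)
{
    static const unsigned char list[3][2] = { {0, 1}, {1, 0}, {1, 1} };
    return accept_first_valid(&list[0][0], 2, 3, toy_crc, toy_chars);
}
```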

Calculation of Sphere Packing Limit

Calculation of the sphere packing limit for messages with information content of $k$ bits and encoded word length of $n$ binary symbols is as follows.

Solve numerically $$\Omega(\theta, n) = \frac{1}{2^k} $$
to determine $\theta$, where $$\Omega( \theta, n) = \frac{1}{\sqrt{\pi}} \frac{n - 1}{n} \frac{\Gamma\left(\frac{n}{2} + 1\right)}{\Gamma\left(\frac{n+1}{2}\right)} \int_{0}^{\theta} (\sin \phi)^{n-2} d\phi $$ Then the lower limit for the message error rate is given by $$P_w = \frac{1}{\sqrt{\pi}} \frac{(n-1)}{2^{n/2} \Gamma\left(\frac{n+1}{2}\right)} \int_{\theta}^{\pi} (\sin \phi)^{n-2} \int_{0}^{\infty} s^{n-1} e^{-(s^2 + nA^2-2s\sqrt{n}A\cos \phi)/2} ds d\phi$$ where $$A = \sqrt{\frac{2k}{n} \frac{E_b}{N_0}} $$

The numbers involved are quite large, but they remain within the range of long double arithmetic if you make thoughtful use of the expl() and lgammal() functions to avoid overflow at large values of $n$.

There is a satisfactory approximation for $\Omega$ given by $$ \Omega \approx \frac{(\sin \theta)^{n-1}}{\sqrt{2\pi n} \cos \theta} $$ There is also an approximation for $P_w$ but this is not very good at very low values of $E_b/N_0$.

Timing and Phase Errors

An offset $\delta$ radians of the reference phase from the correct value will reduce the effective amplitude of the signal by the factor $\cos \delta$, equivalent to a dB loss of $20 \log_{10} \cos \delta$. A reference phase search in steps of 30 degrees therefore incurs a maximum loss of 0.3 dB. This can be reduced to less than 0.1 dB by searching in 15 degree steps using the -PS15 option.

An offset of $t$ seconds in the relative symbol timing between encoder and decoder is equivalent to a loss of $20 \log_{10} \left( 1 - 2|t|/T \right)$ dB for $0 \le |t| < T/2$, where $T$ is the symbol period.