Maximum Likelihood Detection (MLD)
Maximum likelihood (ML) is a statistical method used to estimate the parameters of a probability distribution, given some observed data. The basic idea is to find the set of parameter values that maximize the likelihood of the observed data. ML has a wide range of applications in machine learning, including maximum likelihood estimation (MLE), which is used to fit statistical models to data, and maximum likelihood detection (MLD), which is used to detect signals in noisy environments.
In this article, we will focus on maximum likelihood detection (MLD), which is a technique used in signal processing and communication systems to detect signals in noisy environments. The basic idea of MLD is to find the signal that is most likely to have generated the observed data, given some probabilistic model of the signal and noise.
MLD can be used in a variety of settings, including radar, sonar, wireless communications, and biomedical imaging. In each of these settings, the goal is to detect a signal that is hidden in some kind of noise or interference. For example, in radar, the signal may be a reflected radar pulse from an object of interest, while the noise may be due to clutter, interference from other sources, or random thermal noise.
To understand how MLD works, let's start with a simple example. Suppose we have a binary communication system, where we want to transmit a bit (either 0 or 1) over a noisy channel. We can model the received signal as a noisy version of the transmitted signal:
y = s + n
where y is the received signal, s is the transmitted signal (either 0 or 1), and n is the noise. We can model the noise as a random variable with some probability distribution, such as a Gaussian distribution with mean zero and variance sigma^2:
n ~ N(0, sigma^2)
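This channel model can be simulated in a few lines. The sketch below assumes antipodal signaling (the bit is mapped to the symbols +1 and -1 rather than sent as 0/1), a common convention that matches the sign-based detector derived later; the function name `transmit` and the default noise level are illustrative choices, not part of any standard API:

```python
import random

def transmit(bit, sigma=0.5, rng=random):
    """Send one bit over an additive Gaussian noise channel: y = s + n."""
    s = 1.0 if bit == 1 else -1.0   # antipodal mapping: bit 1 -> +1, bit 0 -> -1
    n = rng.gauss(0.0, sigma)       # noise n ~ N(0, sigma^2)
    return s + n

random.seed(0)
received = [transmit(b) for b in (0, 1, 1, 0)]
```

With `sigma = 0` the received value equals the transmitted symbol exactly; increasing `sigma` makes the two symbols harder to tell apart.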
The goal of MLD is to find the transmitted signal s that is most likely to have generated the observed signal y. To do this, we need to compute the likelihood function:
L(s|y) = p(y|s)
which is the probability of observing y given the transmitted signal s. By Bayes' theorem, the posterior probability of s given y is:
p(s|y) = p(y|s) * p(s) / p(y)
where p(y|s) is the likelihood, p(s) is the prior probability of s, and p(y) is the marginal probability of y.
If the prior probability of s is equal for both possible values (0 and 1), then p(s) is a constant, and p(y) does not depend on s at all. Maximizing the posterior p(s|y) over s is therefore equivalent to maximizing the likelihood p(y|s): under equal priors, maximum a posteriori (MAP) detection reduces to maximum likelihood detection.
The probability of observing the received signal y given the transmitted signal s can be computed using the noise model:
p(y|s) = N(y|s, sigma^2)
where N(y|s, sigma^2) is the probability density function of a Gaussian distribution with mean s and variance sigma^2. Using this, we can write the likelihood function as:
L(s|y) = p(y|s) = (1 / sqrt(2 * pi * sigma^2)) * exp(-(y - s)^2 / (2 * sigma^2))
We can further simplify the likelihood function by taking the logarithm of both sides:
log L(s|y) = log p(y|s) = -(y - s)^2 / (2 * sigma^2) + const
where const is a constant that does not depend on s. The log-likelihood is now a concave quadratic function of s, maximized at s = y. Restricted to the allowed symbol values, the transmitted signal s that maximizes the likelihood is therefore the one closest to the received signal y; this choice is the maximum likelihood estimate (MLE) of s.
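The decision rule can be written directly from the log-likelihood. A minimal sketch, using antipodal symbols ±1 for concreteness (the constant term is dropped because it does not affect the argmax):

```python
def log_likelihood(y, s, sigma):
    # Gaussian log-likelihood: log p(y|s) = -(y - s)^2 / (2*sigma^2) + const;
    # the constant -0.5*log(2*pi*sigma^2) is dropped since it does not
    # change which s maximizes the expression.
    return -((y - s) ** 2) / (2.0 * sigma ** 2)

def ml_detect(y, symbols=(-1.0, 1.0), sigma=0.5):
    # ML decision: pick the candidate symbol with the largest log-likelihood,
    # i.e. the symbol closest to the received value y.
    return max(symbols, key=lambda s: log_likelihood(y, s, sigma))
```

For example, `ml_detect(0.3)` decides +1 because 0.3 is closer to +1 than to -1.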
In a binary system with antipodal signaling, where the bits 0 and 1 are mapped to the symbols -1 and +1, the ML decision is simply the sign of the received signal y:
s_ML = sign(y)
where sign(x) = 1 if x >= 0 and -1 if x < 0. If the received signal y is positive, we decide that the transmitted bit was 1, and if y is negative, we decide it was 0. This is a simple threshold-based detector: the received signal is compared to a fixed threshold (here, zero) to make a decision.
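The equivalence between the sign rule and ML detection is easy to check numerically. A small sketch (for antipodal symbols ±1; at exactly y = 0 the two likelihoods tie, so that point is excluded):

```python
def sign_detect(y):
    # Threshold detector: decide +1 if y >= 0, else -1.
    return 1.0 if y >= 0.0 else -1.0

def nearest_symbol(y, symbols=(-1.0, 1.0)):
    # ML detection under Gaussian noise reduces to minimum-distance detection.
    return min(symbols, key=lambda s: (y - s) ** 2)

# For antipodal symbols the two rules coincide (away from the tie at y = 0):
assert all(sign_detect(y) == nearest_symbol(y)
           for y in (-2.0, -0.5, 0.3, 1.7))
```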
In practice, MLD is often used in more complex settings where the signal and noise models are more complicated, and the noise is not necessarily Gaussian. In such cases, the likelihood function may not have a closed-form expression, and numerical methods such as the iterative expectation-maximization (EM) algorithm or the gradient descent algorithm may be used to find the MLE of the signal.
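When no closed form exists, even a brute-force search over candidate signal values can find the MLE; this is a hedged stand-in for EM or gradient-based methods, shown here with Laplacian (double-exponential) noise as an illustrative non-Gaussian model:

```python
import math

def laplace_loglik(y, s, b=1.0):
    # Laplacian noise model: log p(y|s) = -|y - s| / b - log(2b).
    return -abs(y - s) / b - math.log(2.0 * b)

def grid_search_mle(ys, candidates):
    # Exhaustive search over candidate signal values; a simple stand-in
    # for EM or gradient methods when the likelihood has no closed-form
    # maximizer.
    return max(candidates, key=lambda s: sum(laplace_loglik(y, s) for y in ys))

ys = [0.9, 1.1, 1.3, 0.8, 4.0]               # observations with one outlier
cands = [i / 100.0 for i in range(-300, 301)]  # grid from -3.00 to 3.00
s_hat = grid_search_mle(ys, cands)
```

Under Laplacian noise the MLE is the sample median, so the outlier at 4.0 barely moves the estimate; under Gaussian noise it would be the mean, which the outlier would drag upward.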
Another important aspect of MLD is performance analysis: calculating the probability of error (Pe), i.e., the probability that the detector makes an incorrect decision about the transmitted signal. The Pe can be computed from the probability of false alarm (Pfa) and the probability of detection (Pd), which are defined as follows:
Pfa = P(y >= T | s = 0)
Pd = P(y >= T | s = 1)
where T is the threshold used by the detector, and P(y >= T | s) is the probability of the received signal exceeding the threshold T, given that the transmitted signal is s.
Assuming the two symbols are transmitted with equal probability, the Pe can be expressed in terms of the Pfa and Pd as follows:
Pe = (1/2) * Pfa + (1/2) * (1 - Pd)
where the first term is the probability of a false alarm (deciding 1 when 0 was sent) and the second term is the probability of a miss (deciding 0 when 1 was sent, i.e., failing to detect the signal when it is present).
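For the antipodal example above, with equal priors and the threshold at zero, the error probability has the closed form Pe = Q(1/sigma), where Q is the Gaussian tail probability. A sketch that checks this against a Monte Carlo simulation (the function names are illustrative):

```python
import math
import random

def qfunc(x):
    # Gaussian tail probability Q(x) = P(N(0,1) >= x), via the
    # complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def error_probability(sigma, trials=200_000, seed=1):
    # Antipodal symbols +1/-1, equal priors, decision threshold at zero:
    # analytically Pe = Q(1/sigma); verified here by simulation.
    pe_theory = qfunc(1.0 / sigma)
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        s = rng.choice((-1.0, 1.0))
        y = s + rng.gauss(0.0, sigma)  # y = s + n, n ~ N(0, sigma^2)
        if (y >= 0.0) != (s > 0.0):    # sign decision disagrees with s
            errors += 1
    return pe_theory, errors / trials
```

At sigma = 0.5, for instance, the theoretical value is Q(2), roughly 2.3%, and the simulated error rate should land close to it.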
In summary, MLD is a powerful technique used in signal processing and communication systems to detect signals in noisy environments. The basic idea is to find the signal that is most likely to have generated the observed data, given some probabilistic model of the signal and noise. MLD can be used in a wide range of settings, and its performance can be analyzed using the probability of error, which depends on the probability of false alarm and the probability of detection.