SDR (semidefinite relaxation)


Semidefinite Relaxation (SDR) is a powerful mathematical technique that has gained significant attention in optimization and approximation algorithms. It is a relaxation method used to solve certain computationally challenging problems that involve combinatorial optimization, specifically those with quadratic constraints or non-convex quadratic objectives.

At its core, SDR is based on semidefinite programming (SDP), a class of convex optimization problems. An SDP optimizes a linear objective function of a symmetric matrix variable, subject to linear equality or inequality constraints on that matrix and the requirement that it be positive semidefinite. The goal is to find the matrix that optimizes the objective while satisfying all the constraints.
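In standard form, an SDP minimizes a linear function of a symmetric matrix variable over the intersection of an affine subspace with the positive semidefinite cone. Writing ⟨A, B⟩ = trace(A^T B) for the matrix inner product:

```latex
\begin{aligned}
\min_{X \in \mathbb{S}^n} \quad & \langle C, X \rangle \\
\text{s.t.} \quad & \langle A_k, X \rangle = b_k, \quad k = 1, \dots, m, \\
& X \succeq 0 .
\end{aligned}
```

Here C and A_1, …, A_m are given symmetric matrices and b ∈ ℝ^m. The feasible set is convex because both the affine constraints and the positive semidefinite cone are convex.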

The idea behind SDR is to relax the original problem, which may be non-convex and difficult to solve, into a semidefinite program that is easier to solve. By relaxing the problem, we transform it into a convex optimization problem that can be efficiently solved using existing algorithms.

To understand how SDR works, let's consider an example problem known as the Maximum Cut problem. The Maximum Cut problem aims to partition the vertices of an undirected graph into two sets such that the total weight of the edges between the two sets is maximized. This problem is NP-hard, so no polynomial-time exact algorithm is known, and none exists unless P = NP.
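To make the problem concrete, here is a small brute-force sketch that enumerates every bipartition of a toy graph (the edge-list format of (i, j, weight) triples is our own convention). This is feasible only for very small vertex counts, which is exactly why relaxations are needed:

```python
from itertools import product

def cut_weight(edges, assignment):
    # sum the weights of edges whose endpoints land in different sets
    return sum(w for (i, j, w) in edges if assignment[i] != assignment[j])

def brute_force_max_cut(n, edges):
    # try all 2^n assignments of vertices to the two sets (0 or 1)
    return max(cut_weight(edges, a) for a in product([0, 1], repeat=n))

edges = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 1)]  # unit-weight 4-cycle
print(brute_force_max_cut(4, edges))  # → 4, the 4-cycle is bipartite so every edge can be cut
```

The exponential 2^n enumeration is what the semidefinite relaxation below avoids.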

To solve the Maximum Cut problem using SDR, we start by formulating it as a quadratic program. We introduce a variable xi for each vertex i, where xi takes the value +1 if vertex i belongs to the first set and −1 if it belongs to the second set. An edge (i, j) with weight wij is cut exactly when xi⋅xj = −1, so we define the objective as maximizing (1/2)∑wij(1 − xi⋅xj) over the edges (i, j), subject to the constraints xi^2 = 1 for all i.
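With the ±1 encoding, the cut value of a given assignment is easy to evaluate. A small sketch (edge lists as (i, j, weight) triples are our own convention):

```python
def cut_value(edges, x):
    # x[i] in {-1, +1}; an edge contributes its weight exactly when x[i]*x[j] == -1
    return 0.5 * sum(w * (1 - x[i] * x[j]) for (i, j, w) in edges)

edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]  # unit-weight 4-cycle
print(cut_value(edges, [1, -1, 1, -1]))  # → 4.0, every edge is cut
print(cut_value(edges, [1, 1, -1, -1]))  # → 2.0, only two edges cross the partition
```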

The original problem is non-convex due to the quadratic constraints and the quadratic objective. To relax it into an SDP, we apply a technique called matrix lifting. We collect the variables into a vector x and introduce the matrix X = x⋅x^T, whose entries are Xij = xi⋅xj. By construction, X is positive semidefinite, has rank one, and has ones on its diagonal, since Xii = xi^2 = 1. The entries of X capture the correlations between the binary variables, and the objective becomes a linear function of X.

Now, by keeping only the constraints Xii = 1 and X ⪰ 0 and dropping the non-convex requirement that X have rank one, we obtain a semidefinite program. The objective is linear in X, so the problem becomes convex. Every feasible assignment x of the original problem yields a feasible matrix X = x⋅x^T, so this semidefinite relaxation gives an upper bound on the optimal value of the original problem.
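Putting the pieces together, the Max-Cut relaxation can be written explicitly (wij denotes the weight of edge (i, j), taken as 0 for non-edges):

```latex
\begin{aligned}
\max_{X \in \mathbb{S}^n} \quad & \frac{1}{4} \sum_{i,j} w_{ij} \left( 1 - X_{ij} \right) \\
\text{s.t.} \quad & X_{ii} = 1, \quad i = 1, \dots, n, \\
& X \succeq 0 .
\end{aligned}
```

The factor 1/4 (rather than 1/2) appears because the double sum counts each edge twice. Restoring the dropped constraint rank(X) = 1 would recover the original problem exactly.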

To solve the SDP, various algorithms and solvers can be used, such as interior-point methods or first-order methods. These algorithms exploit the special structure of semidefinite programs to efficiently compute an approximate solution to the relaxation.
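To give a feel for how a first-order approach might look, here is a minimal sketch of a Burer-Monteiro-style coordinate-ascent heuristic for the Max-Cut SDP: it factors X = V⋅V^T with unit-norm rows vi and repeatedly sets each vi to the unit vector that maximizes the relaxed objective with the other rows fixed. All names and parameters below are our own; this is an illustration, not a production interior-point solver:

```python
import math
import random

def solve_maxcut_sdp(n, edges, rank=None, sweeps=200, seed=0):
    # Factor X = V V^T with unit-norm rows v_i and do coordinate ascent
    # on the relaxed objective (1/2) * sum w_ij * (1 - <v_i, v_j>).
    rng = random.Random(seed)
    rank = rank or n  # rank >= sqrt(2n) suffices in theory for this factorization
    W = [[0.0] * n for _ in range(n)]
    for i, j, w in edges:
        W[i][j] += w
        W[j][i] += w
    # random unit-norm initialization of the rows of V
    V = []
    for _ in range(n):
        v = [rng.gauss(0, 1) for _ in range(rank)]
        s = math.sqrt(sum(c * c for c in v))
        V.append([c / s for c in v])
    for _ in range(sweeps):
        for i in range(n):
            # g = sum_j W[i][j] * v_j; the best unit v_i points opposite g
            g = [sum(W[i][j] * V[j][d] for j in range(n)) for d in range(rank)]
            s = math.sqrt(sum(c * c for c in g))
            if s > 1e-12:
                V[i] = [-c / s for c in g]
    return V

def sdp_value(edges, V):
    # relaxed objective: (1/2) * sum over edges of w_ij * (1 - <v_i, v_j>)
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    return 0.5 * sum(w * (1 - dot(V[i], V[j])) for i, j, w in edges)

edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]  # 4-cycle
V = solve_maxcut_sdp(4, edges)
print(round(sdp_value(edges, V), 2))  # approaches 4.0, the SDP optimum for this graph
```

Each coordinate update is an exact maximization in vi, so the relaxed objective improves monotonically; for a bipartite graph like the 4-cycle the relaxation is tight and the value converges to the true maximum cut.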

After solving the SDP, we obtain a positive semidefinite matrix X that satisfies the constraints but is typically not rank one. The next step is to extract a feasible solution to the original problem from this matrix by applying a rounding procedure.

One option is rank-one rounding: compute the best rank-one approximation of X, which is the outer product y⋅y^T formed from the (scaled) leading eigenvector y of X, and take the signs of the entries of y as the assignment of vertices to the two sets. A stronger alternative for Maximum Cut is the randomized hyperplane rounding of Goemans and Williamson: factor X = V⋅V^T, draw a random direction r, and assign each vertex i according to the sign of the inner product between the i-th row of V and r.
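The rounding step can be sketched as follows, assuming we already hold a factorization X = V⋅V^T (for example a Cholesky factor of X, or the rows produced by a low-rank solver). This Goemans-Williamson-style variant draws several random hyperplanes and keeps the best resulting cut; the function names are our own:

```python
import random

def hyperplane_round(V, edges, trials=50, seed=1):
    # Assign vertex i to a side by the sign of <v_i, r> for a random
    # direction r; keep the best cut found over several random draws.
    rng = random.Random(seed)
    k = len(V[0])
    best_x, best_val = None, float("-inf")
    for _ in range(trials):
        r = [rng.gauss(0, 1) for _ in range(k)]
        x = [1 if sum(a * b for a, b in zip(v, r)) >= 0 else -1 for v in V]
        val = 0.5 * sum(w * (1 - x[i] * x[j]) for i, j, w in edges)
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val

# an idealized rank-one factorization for the 4-cycle: alternating unit vectors
V = [[1.0, 0.0], [-1.0, 0.0], [1.0, 0.0], [-1.0, 0.0]]
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]
x, val = hyperplane_round(V, edges)
print(val)  # → 4.0, every edge of the 4-cycle is cut
```

Because the vectors here already alternate, any hyperplane separates the two groups and the rounding recovers the exact maximum cut; for a genuinely higher-rank X, different random directions can produce different cuts, which is why several trials are kept.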

The quality of the solution obtained through SDR depends on the tightness of the relaxation and on the rounding procedure. For a maximization problem, the relaxation provides an upper bound: the optimal value of the SDP is at least as large as the true optimal value, while the rounded solution can only attain or fall below it. For Maximum Cut, Goemans and Williamson proved that randomized hyperplane rounding yields a cut whose expected value is at least roughly 0.878 times the optimum.

Despite this limitation, SDR has proven to be a valuable technique in various applications. It has been successfully applied to solve problems in machine learning, graph theory, network optimization, and other domains. SDR provides a flexible framework for approximating difficult problems and can often deliver solutions that are close to the optimal value.

In summary, semidefinite relaxation (SDR) is a mathematical technique used to solve computationally challenging optimization problems. By relaxing the original problem into a semidefinite program, SDR transforms the problem into a convex optimization problem that can be efficiently solved. Although the solution obtained from SDR may not be the true optimal solution, it provides a valuable upper bound and can be used to approximate solutions to difficult combinatorial optimization problems.