4. Lagrangian and Dual:
Part 1: Let us start with Linear Regression. Recall that the loss function of linear regression is
\[
L(w) \;=\; \tfrac{1}{2}\sum_{i=1}^{n}\left(y_i - w^\top x_i\right)^2 .
\]
Suppose we add a constraint that the L2-distance between w and a prior weight vector w0 is less than k; that is, suppose we have some reason to believe that the weight vector should be close to w0. How can you pose this as an optimization problem? What is the Lagrangian formulation? Can you write down the steps involved in obtaining the dual? It may be difficult to write the dual out exactly, but you will receive the points even if you only write down the steps involved.
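A minimal sketch of the steps, assuming the squared-error loss above, with X denoting the design matrix whose rows are the x_i^\top and y the vector of targets; the radius constraint is written in the squared form \|w - w_0\|_2^2 \le k^2 purely for differentiability (the problem statement only says the L2-distance is less than k):
\[
\min_{w}\;\; \tfrac{1}{2}\|Xw - y\|_2^2
\qquad \text{subject to} \qquad \|w - w_0\|_2^2 \le k^2 .
\]
Introducing a multiplier \lambda \ge 0 for the constraint gives the Lagrangian
\[
\mathcal{L}(w,\lambda) \;=\; \tfrac{1}{2}\|Xw - y\|_2^2 \;+\; \lambda\left(\|w - w_0\|_2^2 - k^2\right).
\]
Steps to the dual: (i) for fixed \lambda \ge 0, minimize \mathcal{L} over w; setting \nabla_w \mathcal{L} = X^\top(Xw - y) + 2\lambda(w - w_0) = 0 gives w^*(\lambda) = (X^\top X + 2\lambda I)^{-1}(X^\top y + 2\lambda w_0); (ii) substitute w^*(\lambda) back into \mathcal{L} to obtain the dual function g(\lambda) = \mathcal{L}(w^*(\lambda), \lambda); (iii) the dual problem is \max_{\lambda \ge 0} g(\lambda). Since the primal is convex and w = w_0 is strictly feasible whenever k > 0, Slater's condition holds and strong duality applies.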
Part 2: Let us do the same for the SVM. For simplicity, consider the linearly separable case. Recall that the (hard-margin) SVM formulation is
\[
\min_{w,b}\;\; \tfrac{1}{2}\|w\|_2^2
\qquad \text{subject to} \qquad y_i\left(w^\top x_i + b\right) \ge 1, \;\; i = 1,\dots,n .
\]
As in Part 1, suppose we add the constraint that the L2-distance between w and a prior weight vector w0 is less than k. How can you pose this as an optimization problem? What is the Lagrangian formulation? Obtain the dual from the Lagrangian.
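A minimal sketch under the same assumption as in Part 1 (the extra constraint written as \|w - w_0\|_2^2 \le k^2), starting from the hard-margin primal above:
\[
\min_{w,b}\;\; \tfrac{1}{2}\|w\|_2^2
\qquad \text{subject to} \qquad y_i\left(w^\top x_i + b\right) \ge 1 \;\;\forall i, \qquad \|w - w_0\|_2^2 \le k^2 .
\]
With multipliers \alpha_i \ge 0 for the margin constraints and \mu \ge 0 for the prior constraint, the Lagrangian is
\[
\mathcal{L}(w,b,\alpha,\mu) \;=\; \tfrac{1}{2}\|w\|_2^2 \;-\; \sum_{i} \alpha_i\left[y_i\left(w^\top x_i + b\right) - 1\right] \;+\; \mu\left(\|w - w_0\|_2^2 - k^2\right).
\]
Stationarity in w gives w + 2\mu(w - w_0) - \sum_i \alpha_i y_i x_i = 0, i.e.
\[
w \;=\; \frac{\sum_i \alpha_i y_i x_i + 2\mu w_0}{1 + 2\mu},
\]
and stationarity in b gives \sum_i \alpha_i y_i = 0. Substituting these back into \mathcal{L} yields the dual function g(\alpha,\mu), and the dual problem is to maximize g(\alpha,\mu) over \alpha_i \ge 0 and \mu \ge 0 subject to \sum_i \alpha_i y_i = 0. Setting \mu = 0 recovers the standard hard-margin SVM dual.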