# Multivariable Calculus_3: Application to Optimization

It’s exciting to apply multivariable calculus to solving real problems.

First is finding max/min points, which is required all over optimization. For instance, a financial model has a number of factors (tax, tracking error, turnover, and so forth), and you need to find the solution that maximizes performance while keeping those factors minimal.

For a single-variable function, the first derivative tells whether the tangent line is flat or sloped, but to know whether the curve is convex or concave we need the second derivative: if the second derivative is positive, it’s convex; if negative, it’s concave. What if it’s zero? Then the line is straight, with no curvature.
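The rule above can be checked numerically with a central finite difference (the example functions here are my own choices, not from the post):

```python
def second_derivative(f, x, h=1e-4):
    """Central finite-difference approximation of f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

convex = lambda x: x ** 2        # f'' = 2  > 0 -> convex (curves upward)
concave = lambda x: -x ** 2      # f'' = -2 < 0 -> concave (curves downward)
straight = lambda x: 3 * x + 1   # f'' = 0     -> straight, no curvature

print(second_derivative(convex, 1.0))    # ~ 2
print(second_derivative(concave, 1.0))   # ~ -2
print(second_derivative(straight, 1.0))  # ~ 0
```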

Similarly, the second partial derivative test tells the curvature of multivariable graphs, except it’s a bit trickier: the second partial derivative with respect to each variable, plus the mixed partial, needs to be evaluated. To make the computation easy to grasp, the concept of the Hessian matrix is applied: pack all the second partials into one matrix and examine its determinant.
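Here is a minimal numeric sketch of the second partial derivative test (my own illustration, not code from the post): build the 2x2 Hessian with finite differences, then classify a critical point by det(H) and the sign of f_xx.

```python
def hessian(f, x, y, h=1e-4):
    """Finite-difference second partials of f at (x, y): f_xx, f_yy, f_xy."""
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return fxx, fyy, fxy

def classify(f, x, y):
    """Second partial derivative test at a critical point (x, y)."""
    fxx, fyy, fxy = hessian(f, x, y)
    det = fxx * fyy - fxy**2        # determinant of the Hessian
    if det > 0:
        return "local min" if fxx > 0 else "local max"
    if det < 0:
        return "saddle point"
    return "inconclusive"

print(classify(lambda x, y: x**2 + y**2, 0, 0))  # local min
print(classify(lambda x, y: x**2 - y**2, 0, 0))  # saddle point
```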

A saddle point is a critical point where the tangent plane is flat but the point is a maximum along one direction and a minimum along another, like the middle of a horse’s saddle. Seeing a picture is helpful.

So here is a real problem: for the function f(x, y) = x^2y, at what point(s) does it reach its maximum value while also satisfying x^2 + y^2 = 1?

After visualizing these two functions as below, we see we need to find the tangent point by applying gradient calculation: at that point the gradients of the two functions point in the same direction but may have different magnitudes, so a Lagrange multiplier is introduced.
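The stated problem can be solved numerically as a sanity check (a sketch of mine, not the post’s worked solution): parametrize the unit circle as (cos t, sin t), brute-force the maximum of f, then verify the two gradients really are parallel there.

```python
import math

# Brute-force the maximum of f(x, y) = x^2 * y over the unit circle.
N = 100_000  # grid resolution over the angle (an arbitrary choice of mine)
best_t = max((2 * math.pi * k / N for k in range(N)),
             key=lambda t: math.cos(t) ** 2 * math.sin(t))
x, y = math.cos(best_t), math.sin(best_t)
max_val = x * x * y

print(max_val)   # maximum value, 2 / (3 * sqrt(3)) ≈ 0.3849, at x^2 = 2/3, y = 1/sqrt(3)

# Tangency check: grad f = (2xy, x^2) and grad g = (2x, 2y) should be
# parallel at the optimum; the x-component equation 2xy = lam * 2x gives lam = y.
lam = y
print(abs(x * x - lam * 2 * y))   # the y-component then matches too: ≈ 0
```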

Another interesting problem is making widgets: labor costs \$20 per hour, steel costs \$2,000 per ton, the profit generated is computed via the R function below, and there is a budget constraint set at \$20,000. It’s an optimization problem too; the rest is pure math deduction, easy to solve.
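The post’s actual R function isn’t reproduced here, so as a hypothetical stand-in take a Cobb-Douglas style function R(h, s) = 100 * h^(2/3) * s^(1/3), with h hours of labor and s tons of steel (this exact form is my assumption, a common textbook choice). Since R grows with both inputs, the whole budget gets spent at the optimum, so we can just scan how the budget is split:

```python
# Hypothetical stand-in for the post's R function (NOT from the original).
def R(h, s):
    return 100 * h ** (2 / 3) * s ** (1 / 3)

BUDGET, WAGE, STEEL = 20_000, 20, 2_000

def split(p):
    """Spend fraction p of the budget on labor, the rest on steel."""
    return p * BUDGET / WAGE, (1 - p) * BUDGET / STEEL

# Scan the split p to maximize R subject to 20*h + 2000*s = 20000.
best_p = max((k / 10_000 for k in range(1, 10_000)),
             key=lambda p: R(*split(p)))
h, s = split(best_p)
print(best_p, h, s)   # calculus predicts p = 2/3 for this R: h ≈ 666.7, s ≈ 3.33
```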

To generalize, and to make it more intuitive to computers rather than human brains, the Lagrangian function is constructed to accomplish the same thing, packaged in one mathematical form:

L(x, y, λ) = f(x, y) − λ(g(x, y) − b)

where g(x, y) = b is the constraint.

Then set the gradient of the L function to be the zero vector:

∇L = 0, i.e. ∂L/∂x = 0, ∂L/∂y = 0, ∂L/∂λ = 0

(the last equation simply recovers the constraint g(x, y) = b).

What’s more, the multiplier λ has a meaning of its own: it gives the rate by which a slight change of the constraint value b changes the constrained maximum M, i.e. λ = dM/db.

The proof of this meaning is quite subtle and clever; I can only manage to follow it by pivoting to treat the constraint value as a variable. At the critical (maximum) point (note: c stands for the constraint value b from here on), the solution x(c), y(c), λ(c) depends on c, and M(c) = f(x(c), y(c)) = L(x(c), y(c), λ(c), c).

Taking the derivative of the M function at the point c is then equivalent to taking the total derivative of the Lagrangian with the fourth variable c:

dM/dc = ∂L/∂x · dx/dc + ∂L/∂y · dy/dc + ∂L/∂λ · dλ/dc + ∂L/∂c

Because at that critical point the gradient of L with respect to x, y, and λ is the zero vector, the first three terms drop out, so

dM/dc = ∂L/∂c

The Lagrangian’s partial derivative with respect to c is, conveniently, λ (go back and take a look at the Lagrangian construction above); hence

dM/dc = λ
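This interpretation can be sanity-checked numerically on the earlier circle problem (my own sketch): loosen the constraint to x^2 + y^2 = c, define M(c) as the constrained maximum, and compare a finite-difference dM/dc at c = 1 against λ.

```python
import math

def M(c):
    """Max of x^2 * y subject to x^2 + y^2 = c, by brute force over the angle."""
    r = math.sqrt(c)
    return max((r * math.cos(2 * math.pi * k / 100_000)) ** 2
               * r * math.sin(2 * math.pi * k / 100_000)
               for k in range(100_000))

d = 1e-3
dM_dc = (M(1 + d) - M(1 - d)) / (2 * d)   # numeric dM/dc at c = 1

# For this problem the Lagrange equations give lambda = y = 1/sqrt(3) at c = 1.
lam = 1 / math.sqrt(3)
print(dM_dc, lam)   # both ≈ 0.577
```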
