Today we are dealing with the mathematical programming problem; that is, the non-linear programming problem with inequality constraints. And we will develop the necessary optimality conditions for handling the non-linear programming problem. For that, let us consider a general non-linear programming problem, where there are n decision variables and m inequality constraints. Let us consider the problem: minimize f(x)

subject to g_j(x) ≤ 0. There are m constraints; that is why j will run from 1 to m. And we are considering x as an n-dimensional vector belonging to R^n. For this we are considering the functions f(x) and g_j(x); these are all non-linear in nature, and moreover the functions are continuous. Now, we will develop the necessary

conditions; that is why we will adopt the Lagrange multiplier technique once again, and we will extend this Lagrange multiplier technique to the Karush-Kuhn-Tucker conditions. Let me tell you the process. One thing is that this non-linear programming problem may have several minima, but there are certain conditions under which the problem has a unique global optimum. That is why we will first convert the inequality constraints to equality constraints.

Then we will formulate the Lagrange function, and further we will go for the necessary conditions, just as we do for classical optimization. Now, if this problem is number (1), then we introduce m slack variables to make the inequalities equalities; with the slack variables we get the inequalities as equations. And we consider the square of each slack variable, so that the added term is non-negative whatever the sign of s_j. Then our problem is reduced to a general

classical optimization problem with equality constraints: minimize f(x) subject to this set of m equality constraints. Then we can formulate the Lagrange function, where the Lagrange function has n decision variables, m slack variables, and m Lagrange multipliers; that is why the function L will have 2m + n variables: x_1, x_2, ..., x_n, s_1, s_2, ..., s_m, λ_1, λ_2, ..., λ_m. That is why 2m + n variables are there. And the formulation would be f(x) plus the summation over j = 1 to m of λ_j (g_j(x) + s_j²).
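Written out in one place, the converted problem and its Lagrangian are:

$$
\min_{x} f(x) \quad\text{subject to}\quad g_j(x) + s_j^2 = 0,\; j = 1,\dots,m;
\qquad
L(x, s, \lambda) = f(x) + \sum_{j=1}^{m} \lambda_j \big( g_j(x) + s_j^2 \big).
$$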

Now, here the λ_j are the Lagrange multipliers. We will develop the necessary conditions; that is why we take the first-order partial derivatives of L with respect to all 2m + n variables. That is why we will get a set of equations like this in the next. And these are the necessary conditions for

optimality. Since we are minimizing the objective function, we search for the local or relative minima of this Lagrange function. That is why ∂L/∂x_i = 0, which reduces to ∂f/∂x_i + the summation over j = 1 to m of λ_j ∂g_j(x)/∂x_i = 0.

And we will have n equations like this; alright, let me name it as (2). Another set of partial derivatives, that is ∂L/∂λ_j = 0, will reduce to g_j(x) + s_j² = 0, and we will have m equations like this; let me name it as (3).

Similarly, if we take the partial derivative with respect to the j-th slack variable and equate it to 0, we get another set of necessary conditions: 2 λ_j s_j = 0, again another m equations; let me name it as (4). These sets of equations together are called the Karush-Kuhn-Tucker conditions.
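Collected in one place, the stationarity system of the Lagrangian reads:

$$
\frac{\partial L}{\partial x_i} = \frac{\partial f}{\partial x_i} + \sum_{j=1}^{m}\lambda_j\frac{\partial g_j}{\partial x_i} = 0,\; i=1,\dots,n;
\qquad
\frac{\partial L}{\partial \lambda_j} = g_j(x) + s_j^2 = 0;
\qquad
\frac{\partial L}{\partial s_j} = 2\lambda_j s_j = 0,\; j=1,\dots,m.
$$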

These are the necessary conditions for obtaining a relative or local minimum for the given non-linear problem. And this set of conditions is called the Karush-Kuhn-Tucker conditions; in short we say KKT conditions. The full form of KKT is Karush-Kuhn-Tucker; the conditions are named after three mathematicians. Now let us look at the set of conditions.

The first condition, that is number (2), is the condition for optimality. The second condition, the set (3), reduces to g_j(x) ≤ 0; that is why this is the condition for feasibility. There are m constraints like this.

Now, if we look at the third set of conditions, that is condition (4), we are getting λ_j s_j = 0, and this set of equations is called the complementary slackness property. As I mentioned, the g_j(x) ≤ 0 are the feasibility conditions and the equations in (2) are the optimality conditions, and all together we call them the Karush-Kuhn-Tucker conditions, or KKT conditions.

These are very important for solving non-linear programming problems; we even apply KKT conditions to linear programming as well. When we are dealing with several constraints together for finding the local optima, we adopt the same process for a non-linear programming problem with several inequality constraints. Now we will study these Karush-Kuhn-Tucker conditions further and see what is happening here. Let us consider the third condition, that is,

λ_j s_j = 0. Now, if λ_j = 0, then we can have s_j ≠ 0; and if λ_j ≠ 0, we must have s_j = 0. What is the meaning of that? It means that if we are considering the case λ_j = 0, s_j ≠ 0, then in the equality g_j(X) + s_j² = 0 the original inequality constraint is being satisfied with the strict less-than sign. Now, here I would like to mention that we are looking

for the relative minima. Let us consider there is a point called X*, which is a local minimum. For case one, this local minimum satisfies g_j(X*) + s_j² = 0 with s_j ≠ 0, which implies g_j(X*) < 0. That means these constraints are the inactive constraints; they are not taking part in obtaining the optimal solution, and that is why they are the inactive constraints. These constraints are being satisfied with a strict less-than sign. Now, let us consider the second case,

when λ_j ≠ 0 and s_j = 0. In that case, at the local minimum we are getting g_j(X*) = 0; that means these are the constraints which are active in finding the optimal solution. It is very important to study which constraints are responsible for finding the optimal solution and which are not: we will see that from a set of inequalities, a few constraints are inactive constraints and a few constraints are active constraints. And here one thing we will see: when λ_j is not equal to 0; we will concentrate more on this case.

And we can prove further that for the minimization problem, λ_j > 0 for all the active constraints, and for the inactive constraints λ_j = 0. That is why, in whatever I have said about the Karush-Kuhn-Tucker or KKT conditions, there are three sets of equations: one is the optimality set, one is the feasibility set, and another is the complementary slackness property. And along with that there will be another set of inequalities: a few λ_j will be greater than 0. We will study that further with some geometrical interpretation, and that is why we are considering a smaller problem, where the number of decision variables is 2 and the number of inequality constraints is 3. We will study what is happening and how we arrive at the condition that λ_j > 0 for the active constraints. Our study goes further with that. One problem we are considering: minimization

of f(X), subject to three constraints: g_1(X) ≤ 0, g_2(X) ≤ 0, and g_3(X) ≤ 0. And we are assuming that g_1 and g_2 are the active constraints, because we will analyze the situation with a proper graphical representation in the next. That means whatever optimal solution we get will lie on g_1 and g_2, but it will not lie on g_3.

Let us draw the picture: this is my g_3, this is g_1, and this is g_2; let me write down g_1 = 0, g_2 = 0, and g_3 = 0. Our problem is the minimization of f(X), and we develop the Karush-Kuhn-Tucker conditions here. Since there are two decision variables, we get the first set, the optimality conditions, like this: ∂f/∂x_i + λ_1 ∂g_1/∂x_i + λ_2 ∂g_2/∂x_i = 0. Here λ_3 = 0, because we have assumed g_3 is the inactive constraint, and we have just seen that for the inactive constraints the Lagrange multipliers are always equal to 0, so the g_3 term drops out.

Now, in this situation, say here is the optimal solution, that is X*. At this point consider the level curves of the objective function; here, where optimality is attained, the objective contour would be like this. We are looking for the minimization, so the gradient of the objective function is in this direction; that means, at the point X*, if I move in this direction the objective function value will increase. Since we are looking at a minimization problem, the objective contour comes down from this point further and further until it just touches the point X*. It cannot go further, because if I leave this point, then I will be out of the feasible region. That is why there are a few things here to be studied: what is the gradient of the objective function at this point, and what are the gradients of g_1 and g_2 as well? That is why we are going to the next.

And here it is to be mentioned that i = 1, 2, because we have considered only two decision variables. If I write the two equations together, then, since the gradient of f collects the components ∂f/∂x_1 and ∂f/∂x_2, from the set (2) we get ∇f + λ_1 ∇g_1 + λ_2 ∇g_2 = 0, or −∇f = λ_1 ∇g_1 + λ_2 ∇g_2.
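In vector form, with the gradient collecting the two partial derivatives, the optimality set reads:

$$
\nabla f = \begin{pmatrix}\partial f/\partial x_1\\ \partial f/\partial x_2\end{pmatrix},
\qquad
\nabla f + \lambda_1 \nabla g_1 + \lambda_2 \nabla g_2 = 0
\;\iff\;
-\nabla f = \lambda_1 \nabla g_1 + \lambda_2 \nabla g_2 .
$$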

Here one thing should be seen: at the point X*, let us see what ∇g_1, ∇g_2, and ∇f are. If I draw ∇g_1, it would be normal to the curve g_1 = 0; that is my ∇g_1, all right. Now, at the point X*, if we look for ∇g_2, that is the gradient of the curve g_2 = 0, it would be normal to the curve at this point; that is ∇g_2. What further? At X*, this direction is the direction of increasing objective function; that is ∇f, and the opposite direction is −∇f, all right. Then what we see here is that at the point X*, if

this is the local optimal point for the minimization problem, then we will see that −∇f lies within the cone spanned by ∇g_1 and ∇g_2. Graphically we can see it here, but algebraically also we can prove it from this: −∇f is a linear combination of ∇g_1 and ∇g_2, where the multipliers λ_1 and λ_2 are the weights of the linear combination.

And our objective is to study further and find out that λ_1 > 0 and λ_2 > 0 as well for the minimization problem. That is why we will do further study from here. What will we do? We will again look at the feasible direction, and we will do some operation on this equation with that feasible direction. Up to this point we can conclude that the negative of the gradient of the objective function lies within the cone of the gradients of the active constraints; this is the conclusion for us. Now we multiply both sides by the feasible direction (what a feasible direction is, I will detail further in the next). Then we get −Sᵀ∇f = λ_1 Sᵀ∇g_1 + λ_2 Sᵀ∇g_2. What is

S? Let me tell you in the next. Let me draw the figure once again; that there

are g_1 and g_2, the active constraints. This is the optimal solution; that is why we have: this is g_1 = 0, this is g_2 = 0. And we have: this is ∇g_2, this is ∇g_1. And this is the optimal point; this direction is the gradient of the objective function, and the negative direction is −∇f. Here I will study the feasible direction.

This is the feasible space for the previous problem; that is why a feasible direction from X*, if this is X* for us, must be on this side. Graphically we can conclude this, but we have to prove the fact; that is why we are learning certain definitions. One is that X̂ is a regular point: for a point in the feasible space, if the gradients ∇g_j of the active constraints at X̂ are all linearly independent, then only can we say that X̂ is a regular point within the feasible space. And we say that S is a feasible direction at a feasible point X̂ if we consider a small increment, moving from X̂ to another point X̂ + αS, where α is a very small positive amount and S is the direction. Then we will

see that g_j(X̂ + αS) is always ≤ 0 for all constraints. And from here, if we expand this function (a first-order Taylor expansion), we can conclude that Sᵀ∇g_j(X̂), that is, at the feasible point, is always ≤ 0. If this is so, then we can say that S is a feasible direction.
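For the record, the expansion step just mentioned, written out for an active constraint (so g_j(X̂) = 0):

$$
g_j(\hat{X} + \alpha S) \;\approx\; g_j(\hat{X}) + \alpha\, S^{T}\nabla g_j(\hat{X}) \;\le\; 0
\quad\Longrightarrow\quad
S^{T}\nabla g_j(\hat{X}) \le 0 \ \text{ for small } \alpha > 0 .
$$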

Now, for studying this further, let me check one example, for understanding better what the gradient direction of a function is, etcetera. Let us consider one small example: f = x_1 x_2, a function of two variables. And it

is given that there is a point x_0 = (1, 3), and we want to check the direction S = (1, 1). The question is whether this is a descent direction; that means, is the function f decreasing in this direction if I move from x_0 = (1, 3) in the direction S = (1, 1)? Or we will check whether S is an ascent direction. For proving that, we are looking for the gradient of f. The gradient

of f means: whatever level surfaces we get for f, ∇f is in the normal direction to these level surfaces. ∇f can be calculated as (∂f/∂x_1, ∂f/∂x_2) = (x_2, x_1). Then ∇f at the point x_0 = (1, 3) would be equal to (3, 1), alright.

Now we want to check whether the direction S is a descent direction or not. That is why we go for Sᵀ∇f. What is Sᵀ∇f? This is nothing but the dot product of the vector S with the vector ∇f. It would be 1 × 3 + 1 × 1, which comes to 4; it is greater than 0, which gives the conclusion: this is not a descent direction. This direction is rather an ascent direction for the function f = x_1 x_2.
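As a quick numerical sanity check, here is a minimal sketch of the same test in Python (the function, point, and direction are from the example; the helper names are mine):

```python
import numpy as np

def grad_f(x):
    # Analytic gradient of f(x1, x2) = x1 * x2: (df/dx1, df/dx2) = (x2, x1)
    return np.array([x[1], x[0]])

x0 = np.array([1.0, 3.0])  # the given point
S = np.array([1.0, 1.0])   # the direction to test

slope = S @ grad_f(x0)     # S^T grad f = 1*3 + 1*1 = 4
print("S^T grad f =", slope)
print("descent direction" if slope < 0 else "ascent direction, not descent")
```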

If we draw the graph, then we can check easily for two variables. But for three or more variables, this is the process to find out whether a given direction is a descent direction or an ascent direction for the given function. And that is why, if I come back here: we have that Sᵀ∇g_j(X) < 0. What is the conclusion? We are concluding that the vector S and ∇g_j at the optimal point make an obtuse angle. If I look at the graph here, then this is

the S direction, all right. It is making an obtuse angle with ∇g_1, and it is making an obtuse angle with ∇g_2. But f is the objective function of a minimization problem, so −∇f is the direction through which I will move; that is why ∇f is the reverse of that direction. And we are seeing here that S and ∇f make an acute angle; that is why we get Sᵀ∇f always greater than 0, because S and ∇f make an acute angle, all right.

Then what we are concluding here: if S is a feasible direction for the minimization problem, then Sᵀ∇f > 0, and Sᵀ∇g_j < 0 if g_j is an active constraint. And here one thing I would like to mention: we can draw a feasible cone here. How to draw the feasible cone? Just see: at this point ∇g_1 is in this direction, so −∇g_1 is in this direction. That is why, if I draw the figure, Sᵀ∇g_1 < 0 means Sᵀ(−∇g_1) > 0, which corresponds to this half space.

Similarly, if ∇g_2 is in this direction, then −∇g_2 is in this direction. And again, if I draw the half space here, that is Sᵀ(−∇g_2) > 0, we will see this is the corresponding half space. The intersection of these two half spaces is the feasible cone, and we will see that ∇f lies within this feasible cone at the KKT point, that is X*.

If we get any local minimum, or rather any regular point, where ∇f does not lie within this feasible cone, then we will say that the point is not a KKT point. That is why, graphically, we can say something about the feasible direction, about the gradient of the objective function and the gradients of the constraints, and as well we can describe the intersection of the half spaces formed by the active constraints as the feasible cone.

Now, if this is so, we come back to this equation once again: −Sᵀ∇f = λ_1 Sᵀ∇g_1 + λ_2 Sᵀ∇g_2. What have we just achieved? We have seen that Sᵀ∇g_1 < 0 and Sᵀ∇g_2 < 0, because the feasible direction and the gradients of the active constraints make obtuse angles. What else did we get? We got Sᵀ∇f > 0, because the feasible direction and the gradient of the objective function form an acute angle. And if we substitute the signs here, this is positive, this is negative, this is negative; then we can conclude the only possibility is that λ_1 must be greater than 0 and λ_2 must be greater than 0.

These two conditions we will include in our KKT conditions, to get the entire picture of the necessary conditions for non-linear programming with inequality constraints. Now, once again I would like to mention that we have considered only the case with three inequality constraints and two decision variables. And we could see that for the active constraints g_1 and g_2, λ_1 > 0 and λ_2 > 0; and for g_3, since it is inactive, λ_3 = 0. But if we have a minimization problem

0. But if you are having the minimization problem

with the inequality sign instead of the less than or equal to we are having the greater

than equal to sign inequality constraint. One thing should be mentioned that the corresponding

Lagrange multiplier; that means the constraint if the k th constraint is having greater than

equal to sign. Then, the corresponding Lagrange multiplier that is lambda k would be always

lesser than 0. And, here we can extend the number of decision variables further.

And we will see that it holds for all j: for the active or inactive constraints, λ_j is always non-negative for a minimization problem involving less-than-or-equal-to inequality constraints. The same thing we can prove for the maximization problem as well: in the maximization problem, if the inequality constraints are all of the less-than-or-equal-to type, then the Lagrange multipliers would be non-positive, that is, λ_j ≤ 0. And we can prove it very easily. That is why, let me write down the KKT conditions specifically. These are the necessary conditions for optimality.

And we will apply these conditions further for obtaining the optimal points of some non-linear programming problems. Now we are summarizing all the facts we have achieved till now, assuming all the functions are differentiable. Here we consider X as (x_1, x_2, ..., x_n), and we consider the constraints as being of less-than-or-equal-to type: minimize f(X) subject to g_j(X) ≤ 0. Then we will have the KKT conditions like this. First are the optimality conditions: ∂f/∂x_i + the summation over j = 1 to m of λ_j ∂g_j(X)/∂x_i = 0, where the λ_j are the Lagrange multipliers; there are n optimality conditions. Next we get the feasibility conditions: g_j(X) ≤ 0.

There are m constraints like this. Next we have the complementary slackness property: λ_j g_j(X) = 0. As we have seen, previously we got λ_j s_j = 0; if we multiply both sides by s_j, then we get λ_j s_j² = 0, and s_j² can be replaced by −g_j(X); that is why we consider λ_j g_j(X) = 0. And there is another set of constraints, that is, non-negativity of the Lagrange multipliers: λ_j ≥ 0. As we have proved, for the active constraints the corresponding λ_j would be greater than 0, and for the inactive constraints λ_j = 0.
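In compact form, the full set of conditions just listed is:

$$
\begin{aligned}
&\text{(optimality)} && \frac{\partial f}{\partial x_i} + \sum_{j=1}^{m} \lambda_j \frac{\partial g_j}{\partial x_i} = 0, && i = 1,\dots,n,\\
&\text{(feasibility)} && g_j(X) \le 0, && j = 1,\dots,m,\\
&\text{(complementary slackness)} && \lambda_j\, g_j(X) = 0, && j = 1,\dots,m,\\
&\text{(non-negativity)} && \lambda_j \ge 0, && j = 1,\dots,m.
\end{aligned}
$$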

We will say X* is a KKT point if it satisfies all of the above properties. And if we take any point X̂ from the feasible space which is a regular point, and it does not satisfy all these conditions together, then we cannot say that it is a KKT point, or a local or relative optimum. Now we will look for the sufficient conditions. I have mentioned again and again that the KKT conditions are necessary conditions; they are not sufficient conditions. And we have seen before, for the

classical optimization technique, that for finding the sufficient condition we need the second-order derivative of the function. That is why we consider the function to be twice differentiable; only then can we state the sufficient condition. That is why we can mention the sufficient condition for optimality: the KKT

conditions are the sufficient conditions under a condition on the objective function; we are going for the sufficient condition for the minimization problem here, and the same logic can be extended for the maximization problem as well. Once again I am repeating: the KKT conditions are sufficient conditions if f(X) is convex and the feasible space is convex. That means the functions involved in the constraints

are all convex in nature. Rather, we can say that whatever optimal solution we get through the KKT conditions, we can declare it a global optimum if the objective function is convex as well as the constraints are convex. We can prove it very easily; just see: just now we constructed the Lagrange function L = f(X) + the summation over j = 1 to m of λ_j (g_j(X) + s_j²).

As we have seen in the KKT conditions, all λ_j ≥ 0. And from the classical optimization technique we can say, for the unconstrained optimization of L: if X* is a relative minimum, then X* would be the global minimum if ∇²L at X* is positive definite. Now, since the λ_j are all ≥ 0, ∇²L would be positive definite when f(X) is convex and the g_j are convex.
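One line of that argument, written out: since the slack terms do not involve x, the Hessian of L in x is

$$
\nabla_x^2 L \;=\; \nabla^2 f(X) \;+\; \sum_{j=1}^{m} \lambda_j \nabla^2 g_j(X),
$$

which is positive semidefinite whenever f and all the g_j are convex and all λ_j ≥ 0; hence L is convex in x.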

Rather, the KKT conditions are the sufficient conditions, we can say, if f(X) is convex and the feasible space is convex. And such a problem is named a convex programming problem. For a minimization problem, if the objective function is convex and the associated constraints are convex, then we will get the global minimum; that is why the corresponding problem is named a convex programming problem. Similarly, we can extend this logic to the sufficient

condition for the maximization problem as well. We will see that if X* is a local maximum for the corresponding maximization problem, then it would be the global maximum as well if the objective function f(X) is concave and the associated constraints g_j(X), or rather the feasible space, is convex. In other words, we can say that the KKT conditions are the sufficient conditions for the maximization problem if f(X) is concave and the feasible space is convex. This is because, as we know, for the maximization problem, if we construct the Lagrange function, then for the unconstrained problem X* would be the maximum if ∇²L is negative definite. From the fact that the λ_j are ≤ 0 here, we can conclude that ∇²L is negative definite when f(X) is concave and the g_j(X) are all convex. Now let us see the KKT conditions for a general kind

of non-linear programming problem; where we are having the inequality constraints as well

as equality constraints together. That is why we are developing the necessary conditions for the general non-linear

programming problem. Let us consider a general non-linear programming problem: minimize f(X) subject to g_j(X) ≤ 0, where j = 1 to m, and another set of equality constraints h_k(X) = 0, where k = 1 to l. And we are considering X in R^n, with n decision variables. For this problem we can say: if X* is a regular point and a local minimum of f, then the following optimality, feasibility, complementary slackness, and non-negativity conditions are satisfied.

Let me write down all the conditions together. First, the optimality conditions: ∂f/∂x_i + the summation over j = 1 to m of λ_j ∂g_j(X)/∂x_i, plus a term from another set of Lagrange multipliers, because once we construct the Lagrange function we will have m + l Lagrange multipliers altogether. That is why, if I consider another set λ'_k with k running from 1 to l, we add the summation over k of λ'_k ∂h_k(X)/∂x_i, and the whole expression equals 0, all right. The optimality conditions run over i = 1 to n.

Let me construct the Lagrange function; then only is it understandable in a better way. The Lagrange function would be L = f + the summation over j = 1 to m of λ_j g_j + the summation over k = 1 to l of λ'_k h_k. With this Lagrange function, by taking the first-order derivatives of L with respect to the decision variables, we get the optimality conditions. The feasibility conditions come next: g_j(X) ≤ 0 together with h_k(X) = 0, the given conditions, where j = 1 to m and k = 1 to l.

Complementary slackness property: λ_j g_j(X) = 0, all right, only for the inequality constraints. And non-negativity constraints: λ_j ≥ 0 for j = 1 to m. There is one more condition: the λ'_k, for k from 1 to l, are unrestricted in sign; they could be positive or negative as well, for the equality constraints.
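Putting the general case in one display:

$$
\begin{aligned}
&\frac{\partial f}{\partial x_i} + \sum_{j=1}^{m}\lambda_j\frac{\partial g_j}{\partial x_i} + \sum_{k=1}^{l}\lambda'_k\frac{\partial h_k}{\partial x_i} = 0, && i=1,\dots,n,\\
&g_j(X)\le 0,\quad \lambda_j\,g_j(X)=0,\quad \lambda_j\ge 0, && j=1,\dots,m,\\
&h_k(X)=0,\quad \lambda'_k \text{ unrestricted in sign}, && k=1,\dots,l.
\end{aligned}
$$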

That is all about the KKT conditions for the non-linear programming problem; let us now apply them to numerical examples in the next. So, the first numerical example, with two variables:

we are looking for the minimum point of the function f(X) = x_1² − 4 x_1 + x_2² − 6 x_2, subject to the constraints x_1 + x_2 ≤ 3, −2 x_1 + x_2 ≤ 2, and x_1, x_2 ≥ 0. Now, for this problem we are looking for the relative minimum, and for that

we will first apply the KKT conditions: the optimality, feasibility, complementary slackness, and non-negativity conditions. And from there we will find the relative minimum. Now let me explain the whole process. First,

we do not construct the Lagrange function explicitly, because we go directly to the optimality conditions we have just deduced. The first would be ∂f/∂x_1 + λ_1 ∂g_1/∂x_1 + λ_2 ∂g_2/∂x_1 = 0. There are two functional constraints; I do not know which one is active and which one is inactive, or whether both are active or not. Writing it out, we get the condition 2 x_1 − 4 + λ_1 − 2 λ_2 = 0.

2 equal to 0. And, similarly, if I considered the case for

2 as well this is 2 as well we are getting the second condition 2 x 2 minus x that is

from here x 2 minus x. And, from here lambda 1 plus lambda 2 equal to 0 g 1 is the first

constraint and g 2 is the second constraint. And, now we go for the next that is the feasibility

condition feasibility conditions are x 1 plus x 2 lesser than equal to 3; minus 2 x one

plus x 2 lesser than equal to 2. Next we will write down the complementary slackness condition.

The complementary slackness properties: λ_1 (x_1 + x_2 − 3) = 0 and λ_2 (−2 x_1 + x_2 − 2) = 0; the first comes from the first constraint and the second from the second constraint. What else do we have? Another two constraints, the non-negativity constraints: λ_1 ≥ 0 and λ_2 ≥ 0. All of these together are called the KKT conditions, and from here we

will find the optimal solution. Now, I do not know which constraint is active and which is inactive, and it is very difficult for us to draw the graph every time. Since this is a problem of two variables we can try the graph, but if the problem has three or four or more variables it is very difficult; that is why we should have a general process to find the optimal solution, and we are looking for that. Now, since λ_1 ≥ 0

and λ_2 ≥ 0, we can have several cases. In the first case both are 0; that means both constraints are inactive. The second case is λ_1 ≠ 0, λ_2 = 0. The third case is λ_1 = 0, λ_2 ≠ 0. The fourth case is λ_1 ≠ 0, λ_2 ≠ 0. That means we are checking all the possible cases: where both constraints are inactive, where the second constraint is inactive, where the first constraint is inactive, and where both constraints are active. Now we will see in which case we get the optimal solution. Let us consider the first case, that is, case

1: λ_1 = 0, λ_2 = 0. What do we get from these equations? If λ_1 = 0 and λ_2 = 0, from the two optimality equations we get x_1 = 2 and x_2 = 3. But if I put these values into the first feasibility condition, x_1 + x_2 ≤ 3 is violated (2 + 3 = 5 > 3); that is why this cannot be the optimal point.

Let us go to the second case, case 2: λ_1 ≠ 0, λ_2 = 0. Then we get one equation 2 x_1 − 4 + λ_1 = 0 and another 2 x_2 − 6 + λ_1 = 0. And here also, since λ_1 ≠ 0, complementary slackness gives x_1 + x_2 − 3 = 0; that is why we have three equations and three unknowns, and very easily we can find the values. Rearranging, we get 2 x_1 + λ_1 = 4 and 2 x_2 + λ_1 = 6.

And the third is x_1 + x_2 = 3. Solving these: subtracting the first two gives x_2 = x_1 + 1, and substituting into x_1 + x_2 = 3 we get x_1 = 1, x_2 = 2, and λ_1 = 2. And if I check the point (1, 2): the first constraint is satisfied with equality, the second is also satisfied since −2(1) + 2 = 0 ≤ 2, and the multipliers are non-negative. That is why (1, 2) could be a KKT point; rather, this is a KKT point. Let us see the third case, case 3, when we

are considering λ_1 = 0, λ_2 ≠ 0. Then we get three equations together: 2 x_1 − 4 − 2 λ_2 = 0 from the first optimality condition, 2 x_2 + λ_2 = 6 from the second, and, from the complementary slackness condition with λ_2 ≠ 0, −2 x_1 + x_2 − 2 = 0; that is, we get the equation −2 x_1 + x_2 = 2.

Putting the three equations with three unknowns together, we get x_1 = 4/5, x_2 = 18/5, and λ_2 = −6/5. But this is not acceptable. Why? Because just now we proved that for the minimization problem the λ_j should always be non-negative; here we are getting a negative value, and that is why this point is not a KKT point. So, that is why we go to the next case, when

λ_1 ≠ 0 and λ_2 ≠ 0. If both are non-zero, then from the complementary slackness conditions both bracketed terms must equal 0; that means both are active constraints, both being satisfied with equality. And if we solve these two equations, we get x_1 = 1/3 and x_2 = 8/3, but we get λ_2 = −8/9.

This is again negative; this is not acceptable to us. That is why we declare the case 2 point to be the solution of the problem; our X* = (1, 2). And in this way, for a general non-linear programming problem, we can apply the KKT conditions and get the solution.
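As a cross-check on the four-case analysis above, here is a small sketch in Python with SymPy (the variable names and the feasibility filter are mine; the x_1, x_2 ≥ 0 bounds from the problem statement are not binding at any candidate here, so only the two functional constraints are checked):

```python
# Sketch: enumerate the KKT system of the worked example with SymPy.
import sympy as sp

x1, x2, l1, l2 = sp.symbols('x1 x2 l1 l2', real=True)

f = x1**2 - 4*x1 + x2**2 - 6*x2
g1 = x1 + x2 - 3        # g1(X) <= 0
g2 = -2*x1 + x2 - 2     # g2(X) <= 0

L = f + l1*g1 + l2*g2   # Lagrange function (slack variables eliminated)

# Stationarity plus complementary slackness: 4 equations, 4 unknowns.
system = [sp.diff(L, x1), sp.diff(L, x2), l1*g1, l2*g2]

for sol in sp.solve(system, [x1, x2, l1, l2], dict=True):
    ok = (sol[l1] >= 0) and (sol[l2] >= 0) \
         and (g1.subs(sol) <= 0) and (g2.subs(sol) <= 0)
    print(sol, '-> KKT point' if ok else '-> rejected')
# Only {x1: 1, x2: 2, l1: 2, l2: 0} passes every check.
```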

Let me write down one example for practicing in the next: solve the following non-linear problem. This is a maximization problem; the same logic can be extended here. Maximize 7 x_1² + 6 x_1 + 5 x_2², subject to x_1 + 2 x_2 ≤ 10, x_1 − 3 x_2 ≤ 9, and x_1, x_2 ≥ 0. And if we solve it, since this

is a maximization problem, in forming the Kuhn-Tucker conditions we have to be careful about the sign conditions on the Lagrange multipliers, and accordingly we have to take the decision about the optimal solution. That is all about the KKT conditions. The KKT conditions are very useful for getting the optimal solution of a non-linear programming problem. But

the KKT conditions are really the necessary conditions; they become sufficient conditions, as I said, only if we can conclude something about the convexity of the function f for the minimization problem, or the concavity of f for the maximization problem, and if the feasible spaces are convex in both cases. Then only are the optimal solutions we get through the KKT conditions all global optima. This is the benefit of the solution process: very easily we can get the optimal solution for a complicated non-linear programming problem as well with the KKT conditions. But

there is one disadvantage: only if the functions are continuous and twice differentiable can we go for the optimal solutions with this process. And another disadvantage is that ensuring global optimality is very difficult, because checking the convexity property or the concavity property of the objective function is very difficult in some complicated situations. And that is all about the solution process for the non-linear programming problem with inequality and equality constraints.

Thank you.