The minimization of a linear function $f(x) = c^{\mathrm t} x$ subject to the constraints $Ax \ge b$ with an $(m\times n)$-matrix $A$ is called a linear program.
A solution $x_\star$ satisfies the Kuhn-Tucker conditions
$c^{\mathrm t} = \lambda^{\mathrm t} A$,
with $\lambda_k \ge 0$ and $\lambda_k = 0$ if the $k$-th constraint is inactive, i.e. $(Ax_\star)_k > b_k$
(for a maximum, $\lambda_k \le 0$).
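As a minimal numerical sketch of these conditions, the following snippet checks stationarity, the sign condition, feasibility, and complementary slackness for a small made-up instance; the data $c$, $A$, $b$, the point $x_\star$, and the multipliers $\lambda$ are illustrative assumptions, not taken from the example below.

    import numpy as np

    # Hypothetical data (illustration only): min c^t x subject to A x >= b
    c = np.array([-2.0, -1.0])
    A = np.array([[ 1.0,  0.0],    # x >= 0
                  [ 0.0,  1.0],    # y >= 0
                  [-1.0, -1.0]])   # x + y <= 4, written as -x - y >= -4
    b = np.array([0.0, 0.0, -4.0])

    x_star = np.array([4.0, 0.0])       # candidate minimizer (assumed)
    lam    = np.array([0.0, 1.0, 2.0])  # candidate Lagrange multipliers (assumed)

    slack = A @ x_star - b
    assert np.allclose(c, A.T @ lam)    # stationarity: c^t = lambda^t A
    assert np.all(lam >= 0)             # sign condition for a minimum
    assert np.all(slack >= -1e-12)      # feasibility: A x >= b
    assert np.allclose(lam * slack, 0)  # lambda_k = 0 for inactive constraints
    print("Kuhn-Tucker conditions hold at x* =", x_star)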
For a concrete example in two unknowns $x$ and $y$ with three linear inequality constraints, $A$ is a $(3\times 2)$-matrix, and the Kuhn-Tucker conditions involve three Lagrange multipliers $\lambda_1$, $\lambda_2$, $\lambda_3 \ge 0$.
Obviously, exactly one of the Lagrange multipliers
must be zero.
For each of the three cases, two constraints are
active and determine $x$ and $y$ uniquely.
A comparison of the values of the target function
at the three resulting points shows where the minimum is attained;
the corresponding Lagrange multipliers of the two active constraints
then follow from $c^{\mathrm t} = \lambda^{\mathrm t} A$.
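This case analysis can be mirrored in a short sketch: for each pair of constraints, solve the resulting $2\times 2$ system, discard inadmissible points, compare the values of the target function, and recover the multipliers of the two active constraints. The data are again the illustrative values assumed in the previous snippet, not the example's.

    import itertools
    import numpy as np

    c = np.array([-2.0, -1.0])                            # assumed target function
    A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])  # assumed constraints A x >= b
    b = np.array([0.0, 0.0, -4.0])

    best = None
    for i, j in itertools.combinations(range(3), 2):
        A_act = A[[i, j]]
        if abs(np.linalg.det(A_act)) < 1e-12:
            continue                               # degenerate pair of constraints, skip
        x = np.linalg.solve(A_act, b[[i, j]])      # two active constraints fix x and y
        if np.all(A @ x >= b - 1e-9):              # keep only admissible points
            if best is None or c @ x < c @ best[0]:
                best = (x, (i, j))

    x_star, active = best
    lam = np.zeros(3)                              # multiplier of the inactive constraint is 0
    lam[list(active)] = np.linalg.solve(A[list(active)].T, c)  # A_act^t lambda_act = c
    print("minimizer:", x_star, "value:", c @ x_star, "multipliers:", lam)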
The figure illustrates a geometric construction of the
solution.
The solution is the point where a level
line of the target function touches the shaded
admissible region.
Clearly, the target function increases (decreases) as the
level lines begin to intersect (to miss)
the admissible region.
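Under the same assumed data as above, a rough plot of this construction (shaded admissible region, dashed level lines, and the point where the optimal level line touches the region) could look as follows; matplotlib is assumed to be available.

    import numpy as np
    import matplotlib.pyplot as plt

    c = np.array([-2.0, -1.0])                                 # assumed target function
    vertices = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])  # assumed admissible region
    plt.fill(vertices[:, 0], vertices[:, 1], alpha=0.3, label="admissible region")

    xs = np.linspace(-0.5, 5.0, 200)
    for level in (-8.0, -6.0, -4.0, -2.0):
        # level line c_1 x + c_2 y = level, solved for y
        plt.plot(xs, (level - c[0] * xs) / c[1], "--", linewidth=0.8)

    plt.plot(4.0, 0.0, "ko", label="minimizer")  # the lowest level line touches here
    plt.xlim(-0.5, 5.0)
    plt.ylim(-0.5, 5.0)
    plt.legend()
    plt.show()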
If the rows of $A$ that are multiplied by positive
multipliers $\lambda_k$ are linearly independent,
this characterization is an immediate consequence of
the Kuhn-Tucker conditions for the general nonlinear
case.
For linear problems, redundant constraints, which
cause linear dependence, can be omitted, so we may
restrict ourselves to the nondegenerate case.