## Lecture #1 clarification

(Sorry this is so late… — Anupam)

For Lecture #1, Ankit had asked a question about how Theorem 1.3 helps solve an equational form LP in time $\binom{n}{m}$. We didn't quite answer it completely in class, so here's the full explanation.

His question was this: Theorem 1.3 says that if the LP is feasible and bounded, then the optimum is achieved at a BFS (and we could try all of them). But what if the LP is unbounded or infeasible? How can we detect those cases in the same amount of time? Here's one way.

To start, Fact 2.1 says that any equational form LP that is feasible has a BFS. So if all the basic solutions are infeasible (i.e., there is no BFS), we can safely answer "Infeasible".
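To make the enumeration concrete, here is a small sketch (not from the lecture; the helper name `has_bfs` and the use of numpy are my choices) of the feasibility check: try every set of $m$ columns as a candidate basis, solve for the basic solution, and see whether it is nonnegative.

```python
import itertools
import numpy as np

def has_bfs(A, b, tol=1e-9):
    """Decide feasibility of {x : Ax = b, x >= 0}, where A is m x n with
    rank m, by enumerating all binom(n, m) candidate bases (Fact 2.1:
    a feasible equational form LP has a BFS)."""
    m, n = A.shape
    for B in itertools.combinations(range(n), m):
        AB = A[:, B]
        if abs(np.linalg.det(AB)) < tol:
            continue  # these columns are not a basis; skip
        xB = np.linalg.solve(AB, b)  # the unique basic solution for basis B
        if (xB >= -tol).all():
            return True  # found a basic *feasible* solution
    return False  # no BFS anywhere, so the LP is infeasible
```

(The `tol` parameter is just to absorb floating-point error; in the idealized model of the lecture, arithmetic is exact.)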

So now it suffices to distinguish between the bounded and unbounded subcases. Consider the BFS that achieves the lowest objective function value among all the BFSs (assuming the LP is a minimization LP). We know the optimal value is either this value (call it $\delta$), or it is $-\infty$. Consider the LP obtained by adding the new constraint $c^\top x = \delta - 1$. This is another equational form LP with $m+1$ constraints, and we can use the previous argument to decide its feasibility. If this new LP is feasible, the original LP has a feasible point with objective value $\delta - 1 < \delta$, so $\delta$ is not optimal and the original LP must have optimum value $-\infty$; conversely, if the original LP is unbounded, every value below $\delta$ is attained by some feasible point, so the new LP is feasible. Hence: if the new LP is feasible, the original LP had optimum value $-\infty$, else the optimum of the original LP was $\delta$.
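Putting both steps together, here is a self-contained sketch of the full decision procedure (again my own illustration, not code from the course; `lp_status` is a hypothetical name): enumerate basic solutions to test feasibility, take the best BFS value $\delta$, then re-run the enumeration on the LP augmented with the row $c^\top x = \delta - 1$.

```python
import itertools
import numpy as np

def lp_status(c, A, b, tol=1e-9):
    """Solve min c^T x s.t. Ax = b, x >= 0 by brute force.
    Returns ("infeasible", None), ("unbounded", None), or ("optimal", delta)."""

    def bfs_values(A, b):
        # Objective values of all basic feasible solutions of {Ax = b, x >= 0}.
        m, n = A.shape
        vals = []
        for B in itertools.combinations(range(n), m):
            AB = A[:, B]
            if abs(np.linalg.det(AB)) < tol:
                continue  # not a basis
            xB = np.linalg.solve(AB, b)
            if (xB >= -tol).all():
                vals.append(c[list(B)] @ xB)  # nonbasic coords are 0
        return vals

    vals = bfs_values(A, b)
    if not vals:
        return "infeasible", None  # no BFS, so infeasible by Fact 2.1
    delta = min(vals)
    # Augmented LP: same variables, one extra constraint c^T x = delta - 1.
    A2, b2 = np.vstack([A, c]), np.append(b, delta - 1.0)
    if bfs_values(A2, b2):
        return "unbounded", None  # a point beats delta, so optimum is -inf
    return "optimal", delta
```

For example, on $\min\{-x_1 - x_2 : x_1 + x_2 + s = 1,\ x \ge 0\}$ this reports an optimum of $-1$, while on $\min\{-x : x - s = 0,\ x, s \ge 0\}$ it reports unboundedness.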

Anyone see a different/even simpler solution?