And so Homework #3 came to pass, being due on the day of the full moon.

## Lecture #5 clarification

Kevin asked in class: in the “geometric” proof of duality we talked about, did we need the fact that there existed a BFS, i.e., that there were $n$ linearly independent tight constraints? (Recall we were considering the LP $\min\{c^\top x : a_i^\top x \ge b_i \text{ for all } i\}$; since we were not talking about equational form LPs, the optimum could be achieved at a non-BFS.) He was right: we don’t. Here’s a proof.

Pick an optimal feasible point for the LP, call this point $x^\star$. Let $a_i^\top x \ge b_i$ for $i \in I$ be all the constraints tight at $x^\star$ (i.e., with $a_i^\top x^\star = b_i$): there may be $n$ linearly independent constraints in there, or there may be fewer, we don’t care. Now the claim is that the objective function vector $c$ is contained in the cone generated by these vectors $\{a_i\}_{i \in I}$. If we prove this claim, we’re done: the rest of the argument follows that in lecture.

(Indeed, this was precisely where we were being hand-wavy earlier, even when we assumed $x^\star$ was a BFS. So this also answers Jamie’s request.)

Ok, back to the business at hand. Suppose $c$ does not lie in the cone $K$ generated by the vectors $\{a_i\}_{i \in I}$. Then there must be a separating hyperplane between $c$ and $K$: i.e., there exists a vector $d$ such that $a_i^\top d \ge 0$ for all $i \in I$, but $c^\top d < 0$. So now consider the point $x' = x^\star + \varepsilon d$ for some tiny $\varepsilon > 0$. Note the following:

- We claim that for small enough $\varepsilon$, the point $x'$ satisfies all the constraints. Consider $a_j^\top x \ge b_j$ for $j \notin I$: since this constraint was not tight at $x^\star$, we won’t violate it if we choose $\varepsilon$ small enough. And for $i \in I$? See, $a_i^\top x' = a_i^\top x^\star + \varepsilon\, a_i^\top d \ge b_i$, since $a_i^\top x^\star = b_i$ and $a_i^\top d \ge 0$.
- What about the objective function value? $c^\top x' = c^\top x^\star + \varepsilon\, c^\top d < c^\top x^\star$.

But this contradicts the fact that $x^\star$ was optimal.
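To make the claim concrete, here is a small numerical sanity check on a made-up two-variable LP (not an example from lecture): at the optimum, the objective vector $c$ is indeed a nonnegative combination of the tight constraint vectors.

```python
# Sanity check of the claim: at an optimum x* of  min c^T x  s.t.  a_i^T x >= b_i,
# the objective vector c lies in the cone of the constraint vectors tight at x*.
# Made-up example: minimize x1 + 2*x2  subject to  x1 >= 0, x2 >= 0, x1 + x2 >= 1,
# whose optimum is x* = (1, 0).

from fractions import Fraction as F

c = (F(1), F(2))
constraints = [((F(1), F(0)), F(0)),   # x1 >= 0
               ((F(0), F(1)), F(0)),   # x2 >= 0
               ((F(1), F(1)), F(1))]   # x1 + x2 >= 1
x_star = (F(1), F(0))

# The constraints tight at x* are those with a^T x* == b:
tight = [a for (a, b) in constraints if a[0]*x_star[0] + a[1]*x_star[1] == b]
a1, a2 = tight                          # here: (0, 1) and (1, 1)

# Solve c = mu1*a1 + mu2*a2 by Cramer's rule; c is in the cone iff mu1, mu2 >= 0.
det = a1[0]*a2[1] - a1[1]*a2[0]
mu1 = (c[0]*a2[1] - c[1]*a2[0]) / det
mu2 = (a1[0]*c[1] - a1[1]*c[0]) / det
print(mu1, mu2)  # both turn out to be 1, so c = a1 + a2 lies in the cone
```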

## Lecture #1 clarification

*(Sorry this is so late… — Anupam)*

For Lecture #1, Ankit had asked a question about how Theorem 1.3 helped solve an equational form LP in time $\binom{n}{m} \cdot \mathrm{poly}(n)$. We didn’t quite answer it completely in class; here’s the complete explanation.

His question was this: Theorem 1.3 says that if the LP is feasible and bounded, then the optimum is achieved at a BFS (and we could try all of them). But what if the LP was unbounded or infeasible? How could we detect those cases in the same amount of time? Here’s one way.

To start, Fact 2.1 says any equational form LP that is feasible has a BFS. So if all the basic solutions are infeasible (i.e., there is no BFS), we can safely answer “Infeasible”.

So now it suffices to differentiate between the bounded/unbounded subcases. Consider the BFS that achieves the lowest objective function value among all the BFSs (assuming the LP is a minimization LP). We know the optimal value is either this value (call it $v$), or it is $-\infty$. Consider the LP obtained by adding in the new constraint $c^\top x = v - 1$. This is another equational form LP with $m+1$ constraints, and we can use the previous argument to decide feasibility. If this new LP is feasible, the original LP had optimum value $-\infty$, else the optimum of the original LP was $v$.
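The brute-force part of the procedure above can be sketched in code. This is a rough illustration under my own conventions (not code from the course): it enumerates every candidate basis of an equational-form LP $\min\{c^\top x : Ax = b,\, x \ge 0\}$ in exact arithmetic, and returns the best BFS value, or `None` when no basic solution is feasible (which, by Fact 2.1, means the LP is infeasible).

```python
# Enumerate all basic solutions of min c^T x  s.t.  Ax = b, x >= 0, where A is
# m x n with independent rows.  Exact arithmetic via Fraction; the example LP
# at the bottom is made up, not one from the course.

from fractions import Fraction as F
from itertools import combinations

def solve_square(M, rhs):
    """Solve the square system M z = rhs exactly; return None if M is singular."""
    n = len(M)
    M = [row[:] + [rhs[i]] for i, row in enumerate(M)]  # augmented matrix
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] != 0), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [x - f*y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def best_bfs_value(c, A, b):
    """Try every basis; return the lowest BFS objective value, or None if no BFS."""
    m, n = len(A), len(A[0])
    best = None
    for B in combinations(range(n), m):       # a candidate basis: m of the n columns
        AB = [[A[i][j] for j in B] for i in range(m)]
        xB = solve_square(AB, b)
        if xB is None or any(v < 0 for v in xB):
            continue                          # singular basis, or basic solution infeasible
        val = sum(c[j]*v for j, v in zip(B, xB))
        best = val if best is None else min(best, val)
    return best

# Example: minimize x1 + x2  subject to  x1 + 2*x2 + x3 = 4  and  x >= 0.
c = [F(1), F(1), F(0)]
A = [[F(1), F(2), F(1)]]
b = [F(4)]
print(best_bfs_value(c, A, b))  # 0, achieved by the basic solution (0, 0, 4)
```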

Anyone see a different/even simpler solution?

## Homework 1 — comments

Homework 1 is graded. The average was 17/20. Here are some comments on your solutions:

1. Several people erred by claiming that if the polyhedron $P$ is nonempty it has a basic feasible solution. This is true for LPs in equational form, but not in the general form described in the problem. Almost everybody had the idea of trying to turn inequalities into equalities; however, only some people made the observation that if turning an inequality into an equality makes the LP infeasible, then you can actually throw away that inequality. A few people solved the problem by using the result of Q4 and then doing binary search; that’s acceptable.

2. Most people more or less got this. Only Nathaniel made the observation that we kind of screwed up the definition in 2b: a weakly separating hyperplane always exists if you’re allowed to take $a = 0$, $b = 0$! (The proper definition should disallow $a = 0$.)

3. Again, most people got this. Some of you made small mistakes with the signs. A couple of solutions left things in the form $|y| \ge t$, and either did not attempt to say why this was OK in an LP (it is not!), or claimed the techniques of problem 2 handled this case (they don’t, since the solution space is not convex in this case!). You’ve got to be careful with absolute values in LPs.
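Problem 2’s trick works for a constraint like $|y| \le t$, which splits into the pair of linear constraints $y \le t$ and $-y \le t$, but not for the “$\ge$” direction. A quick sanity check of the non-convexity, on a made-up one-variable constraint:

```python
# The feasible set of a constraint like |y| >= 1 is not convex, so no list of
# linear inequalities can express it (made-up one-variable example):

def feasible(y):
    return abs(y) >= 1      # the offending ">=" constraint

p, q = -1.0, 1.0            # both points are feasible...
mid = (p + q) / 2           # ...but their midpoint y = 0 is not
print(feasible(p), feasible(q), feasible(mid))  # True True False
```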

4. Here a common mistake was to bound the maximum numerical value of the determinant, and claim that its bit size was at most the log of that quantity. But note that the size of the determinant could be large also because its value is a rational number with a huge denominator (and hence requires a lot of bits to write down, even though it potentially has a small numerical value).
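To see the pitfall concretely, here is a hypothetical example (not from the homework) of a determinant with a tiny numerical value but a large bit size:

```python
# A determinant can have a tiny numerical value yet need many bits to write down,
# because it is a rational number with a huge denominator.  Take the diagonal
# matrix diag(1/2, 1/3, 1/5, ..., 1/29); its determinant is the product of the
# diagonal entries.

from fractions import Fraction as F
from math import prod

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
det = prod((F(1, p) for p in primes), start=F(1))

print(det)                            # 1/6469693230 -- numerically tiny...
print(det.denominator.bit_length())   # ...yet the denominator alone needs 33 bits
```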

## Lecture 4: The Simplex Method

A draft of the scribe notes for this lecture has been posted.

## Lecture 3: LP examples: Max-Flow, Matching, Vertex-Cover

A draft of the scribe notes for this lecture is posted.

## Homework #2

Homework #2 (and the exercises) are now on the webpage. It’s due in a week, i.e., next Thursday (9/29). And again, ask questions and make comments about the homework in the comments below.

Clarification: for problem #3, feel free to use the result of Hwk2(#1) that we can reduce the search version of feasibility to the decision version.

## History of max-flow and min (bipartite) perfect matching

In class today Richard pointed out that a polynomial-time algorithm for min (bipartite) perfect matching was known prior even to the existence of linear programming. This problem, also called the *assignment problem*, was implicitly shown to be solvable in polynomial time by the Hungarian Jenő Egerváry in 1931 (and perhaps arguably also by Dénes Kőnig in 1916). This was not known in the West in the ’40s and early ’50s, when quite a few mathematicians (Merrill Flood, Julia Robinson, David Votaw, Alexander Orden) were thinking about the problem. These people knew it could be solved using linear programming.

In 1955, Harold Kuhn published on the assignment problem. He “discovered” Egerváry’s work (apparently it took him 2 weeks to translate the paper) and called the algorithm the “Hungarian Algorithm”. He also showed how to solve the problem faster (quadratic time rather than quartic time, I think).

Also… in 1954, Ford and Fulkerson published on Max-Flow in a RAND tech report (the paper later appeared in a journal in 1956). They knew that Max-Flow was solvable by LP, but in particular they showed it was also solvable in P (it was not known at the time that LP is solvable in P). They also showed the Max-Flow = Min-Cut theorem (though it was only slightly later that people pointed out it was a special case of LP duality). Finally, their algorithm also (implicitly) shows that if all the capacities are integers then so are the optimal solutions (Jamie asked about this).

## Lecture 2: Linear programming, a geometric view

Scribe notes for lecture 2 are posted.

## Homework #1

Homework #1 has been posted on the webpage (though you may have picked a copy from yesterday’s lecture): it’s due next Thursday (9/22). Feel free to post clarification questions about the HW as comments on this post.

The exercises — on the webpage now — do not need to be turned in, but do try to solve them, especially the one which asks you to get access to either Mathematica or Maple.

**Update:** fixed an error in problem #4c, should be . (thanks, Tim!)