The theory of second best is pretty interesting!

That a single un-fixable inefficiency could, and should, spiral out into a whole set of further distortions in order to maximize overall efficiency is pretty fascinating.

The “un-fixable inefficiency” is best thought of as a constraint. And what the rest is saying is that if you impose this constraint and optimize for whatever outcome, the other variables may well shift away from the values they’d take when the constraint-free system is optimized for that outcome.
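
To make that concrete, here’s a toy sketch in Python: a quadratic cost with an interaction term between two variables, where one variable gets stuck away from its ideal value. Everything in it (the objective, the coefficients, the stuck value) is invented purely for illustration.

```python
# A toy second-best optimization. The objective, its coefficients, and the
# "stuck" value of x below are all hypothetical, chosen only for illustration.
from scipy.optimize import minimize

def cost(v):
    # Quadratic cost with an interaction term: because of the 0.8*x*y
    # cross term, the best y depends on where x sits.
    x, y = v
    return (x - 1.0) ** 2 + (y - 1.0) ** 2 + 0.8 * x * y

# First best: optimize both variables freely.
first_best = minimize(cost, x0=[0.0, 0.0])
print("first-best (x, y):", first_best.x)  # ~ (0.714, 0.714)

# Second best: x is stuck at 3.0 -- the "un-fixable inefficiency".
# Re-optimizing y under that constraint shifts it well away from its
# first-best value; holding y at ~0.714 anyway would cost more.
STUCK_X = 3.0
second_best = minimize(lambda v: cost([STUCK_X, v[0]]), x0=[0.0])
print("second-best y given x = 3.0:", second_best.x)  # ~ -0.2
```

The cross term is what does the work here: without it, the best y wouldn’t depend on where x sits, and the first-best value of y would survive the constraint unchanged.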

From Samuel Hammond, who named his blog Second Best:

many seemingly suboptimal outcomes belie a deeper, all-things-considered optimality that cannot be easily improved upon, at least without first reconstructing why things are the way they are.

What is Samuel Hammond saying? He’s saying some combination of:

  • when you see a “suboptimal” outcome, you should consider whether you’ve accounted for all the constraints that are actually binding.
  • when you see a “suboptimal” outcome, you should consider whether the system is actually optimizing for said outcome.
  • when you see a variable which would be different in an optimal scenario, you should consider whether you’ve accounted for all the constraints that are actually binding.

This is notably different from a mere local optimum, but you can construct a mapping from one to the other. Second-best scenarios are local optima if the constraint (the un-fixable inefficiency) is in fact fixable. Local optima are second-best scenarios under the constraint that you can’t take steps which decrease the outcome being optimized for.
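
That second mapping is easy to watch in code. Here’s a greedy hill climber that’s forbidden from ever stepping downhill; the local peak it gets stuck on is exactly a second-best outcome under that constraint. Again, the function, start point, and step size are all made up for illustration.

```python
# A toy illustration of the mapping above: greedy hill climbing on a
# double-peaked function. The function, start point, and step size are
# hypothetical, chosen only to make the point visible.

def f(x):
    # Two peaks, near x = -0.96 and x = +1.04; the right-hand one is higher.
    return -(x ** 2 - 1) ** 2 + 0.3 * x

def hill_climb(x, step=0.01):
    # The "constraint": never take a step that decreases the outcome.
    while True:
        best = max([x - step, x, x + step], key=f)
        if best == x:
            return x  # no uphill step exists -- stuck at a local optimum
        x = best

x = hill_climb(-2.0)
print(f"stuck at x = {x:.2f}, f(x) = {f(x):.2f}")  # near the lower peak, f ~ -0.29
print(f"global peak: f(1.04) = {f(1.04):.2f}")     # the better outcome it can't reach, f ~ 0.31
```

Starting from the left, the climber walks up the smaller peak and stops, because every remaining move violates the no-downhill constraint. Relax that constraint and it’s just a local optimum you could escape; keep it, and getting stuck there really is the best available outcome.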