The fundamental tradeoff has two parts:

  1. How small you can make the training error.
  2. How well the training error approximates the test error (made precise in the decomposition below).
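
Writing E_train for the training error and E_test for the test error (shorthand assumed here, not notation from the list above), the two parts correspond to a decomposition of the test error, with the second part often called the approximation error:

```latex
E_{\text{test}} \;=\; \underbrace{E_{\text{train}}}_{\text{part 1}} \;+\; \underbrace{\big(E_{\text{test}} - E_{\text{train}}\big)}_{\text{part 2: approximation error}}
```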

Generally,

  • Training error goes down as the model gets more complicated, but
  • Approximation error (the gap between training and test error) goes up as the model gets more complicated (see the sketch after this list).
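
A minimal sketch of this behavior, using polynomial regression on synthetic data (the data, degrees, and variable names here are illustrative assumptions, not part of the notes): as the polynomial degree grows, the training error keeps shrinking, while the gap between training and test error tends to widen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a noisy sine curve, split evenly into train and test sets.
x = rng.uniform(-3, 3, size=100)
y = np.sin(x) + rng.normal(scale=0.3, size=x.shape)
x_train, y_train = x[:50], y[:50]
x_test, y_test = x[50:], y[50:]

for degree in [1, 3, 5, 7, 9, 12]:
    # Fit a polynomial of the given degree to the training data only.
    coeffs = np.polyfit(x_train, y_train, degree)
    e_train = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    e_test = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    # The gap e_test - e_train plays the role of the approximation error above.
    print(f"degree={degree:2d}  train MSE={e_train:.3f}  "
          f"test MSE={e_test:.3f}  gap={e_test - e_train:.3f}")
```

On a typical run the training MSE decreases as the degree increases, while the test MSE eventually stops improving or gets worse, so the printed gap tends to grow.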