Intelligence is (at least partly) a matter of using well what you know. An intelligent being learns from experience, and then uses what it has learned to guide its expectations in the future. How does one select the appropriate frame or context for any given situation?
There are infinitely many contextual frames for any given situation. How do we encode this in discrete systems such as AI programs?
This is the problem of installing, in one way or another, all the information an agent needs in order to plan in a changing world.
This can be broken down into:
- the semantic problem: what information do we need to install?
- the syntactic problem: what system, format, structure, or mechanism do we use to install it?
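The semantic/syntactic split can be made concrete with a toy sketch (my own illustration, not from the text): the same semantic content, "matches ignite when struck", installed under two different syntactic choices, a declarative fact store versus a procedural rule.

```python
# Syntactic choice 1: a declarative fact, stored as a triple.
facts = {("match", "ignites_when", "struck")}

# Syntactic choice 2: a procedural rule, the same knowledge baked into code.
def result_of(action, obj):
    if action == "strike" and obj == "match":
        return "ignited"
    return "unchanged"

# Both encodings carry the same semantic content, installed differently.
print(("match", "ignites_when", "struck") in facts)  # True
print(result_of("strike", "match"))                  # ignited
```

The semantic problem fixes *what* both versions must say; the syntactic problem is the choice between these (and many other) ways of saying it.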
However, one cannot realistically build a Spinozistic solution: a small set of axioms and definitions from which the rest of our knowledge can be deduced on demand.
We run into the problem of induction: given that I believe all of this (have all this evidence), what ought I to believe as well (about the future, or about unexamined parts of the world)? Clearly this will not work (see: Black swan theory).
We need a system that genuinely ignores most of what it knows and operates with a well-chosen portion of its knowledge at any given moment.
This is the frame problem (in its broad, epistemological sense): how do we design a system that reliably ignores what it ought to ignore under a wide variety of circumstances in a complex action environment?
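One crude mechanism for this kind of "ignoring" can be sketched in code (a hypothetical illustration, not a proposal from the text): index beliefs by a context tag, so that retrieval for a situation touches only a small, pre-chosen slice of the knowledge base, and everything else is ignored by construction.

```python
from collections import defaultdict

# Knowledge base keyed by context tag; beliefs outside the active
# contexts are never even examined at retrieval time.
kb = defaultdict(list)

def install(context, belief):
    """Install a belief under a context tag."""
    kb[context].append(belief)

def relevant(active_contexts):
    """Return only the beliefs filed under the currently active contexts."""
    return [b for c in active_contexts for b in kb[c]]

install("kitchen", "stoves are hot")
install("kitchen", "knives are sharp")
install("traffic", "red means stop")

print(relevant(["kitchen"]))  # only kitchen beliefs; traffic is ignored
```

The sketch makes the difficulty visible rather than solving it: the hard, unsolved part is choosing the active contexts ("kitchen" here), since the tags do not choose themselves, and that choice is exactly the problem of selecting the appropriate frame.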