Convergence of CS and Philosophy
Newell and Simon claimed that both digital computers and the human mind could be understood as physical symbol systems. Each uses symbols (strings of bits in one case, streams of neuron pulses in the other) to represent the external world: formal symbol manipulation.
Intelligence, Newell and Simon claimed, consists in making the appropriate inferences from these internal representations.
Turning rationalist philosophy into a research program
- Hobbes → reasoning was calculating
- Descartes → mental representations
- Leibniz → “universal characteristic” — a set of primitives in which all knowledge could be expressed
- Kant → concepts are rules
- Russell → logical atoms as the building blocks of reality
Symbolic AI as a degenerating research program
Problem of representing significance and relevance → how does what the system represents carry over into real-world situations?
Commonsense knowledge problem: how do we represent ‘common sense’ in a way that is accessible to AI systems that use natural language?
The problem isn’t curating those facts, it’s knowing which facts are relevant in any given situation (the frame problem). The system should be able to ignore the irrelevant without first having to figure out that it can be ignored.
If the computer is running a representation of the current state of the world and something in the world changes, how does the program determine which of its represented facts can be assumed to have stayed the same, and which would have to be updated?
If a certain proposition is true (e.g. there are no empty spots in a parking lot), will it stay true? For how long?
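A minimal sketch of the update problem described above, using the STRIPS-style convention that was one historical response to it: an action lists only the facts it adds and deletes, and everything unmentioned is assumed to persist. All names here (the facts, the `apply` helper) are hypothetical illustrations, not from the source.

```python
# World state as a set of facts; an action declares only its effects.
state = {"car_at_home", "lot_has_empty_spots", "engine_off"}

def apply(state, adds, deletes):
    """Return the new state, assuming every unmentioned fact persists."""
    return (state - deletes) | adds

# Driving to the lot: the action's author must anticipate every
# fact the action touches and list it explicitly.
new_state = apply(
    state,
    adds={"car_at_lot", "engine_on"},
    deletes={"car_at_home", "engine_off"},
)

# The formalism silently assumes "lot_has_empty_spots" is still true.
# The frame problem is that nothing in the representation itself says
# whether that assumption is safe, or for how long.
assert "lot_has_empty_spots" in new_state
```

The persistence assumption makes the bookkeeping tractable, but it only pushes the problem into the action definitions: someone still has to decide, for every action, which facts it can change.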
See also: incremental view maintenance
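The analogy to incremental view maintenance: instead of recomputing a derived fact from scratch after every change, update it from the delta alone. A toy sketch with hypothetical names, reusing the parking-lot example:

```python
# Base data: spot -> status.
spots = {"A1": "empty", "A2": "taken", "A3": "empty"}

# Materialized view: number of empty spots, computed once up front.
empty_count = sum(1 for s in spots.values() if s == "empty")

def set_spot(spot, status):
    """Apply a change and maintain the view from the delta,
    without rescanning all the base data."""
    global empty_count
    old = spots[spot]
    spots[spot] = status
    # In Python, True - False == 1, so this adds/subtracts the delta.
    empty_count += (status == "empty") - (old == "empty")

set_spot("A3", "taken")  # view adjusts without a full rescan
assert empty_count == sum(1 for s in spots.values() if s == "empty")
```

The parallel: the view-maintenance system knows exactly which derived facts a given base change can affect, which is precisely what the frame problem says a general world-representing program lacks.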