Historically, the development of AI centered on systems that represent the world through symbols and manipulate those symbols in a systematic way to arrive at a result. Within such systems, symbols stand for aspects of our world; that is, they are symbolic systems. John Haugeland coined the term Good Old-Fashioned AI (GOFAI) for this type of AI.
A very common example of a GOFAI system is the expert system: a computer system that emulates the decision-making ability of a human expert. Expert systems solve problems via decision-tree reasoning, deciding whether to perform certain actions based on if-then rules, as in the sketch below.
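To make this concrete, here is a minimal sketch of the if-then rule firing behind expert systems. The facts, rules, and symbol names (a hypothetical car-diagnosis scenario) are invented for illustration and not drawn from any real system:

```python
# Working memory: the facts the system currently believes.
facts = {"engine_cranks", "no_fuel_smell"}

# Knowledge base: if all antecedents hold, assert the consequent.
rules = [
    ({"engine_cranks", "no_fuel_smell"}, "fuel_not_reaching_engine"),
    ({"fuel_not_reaching_engine"}, "check_fuel_pump"),
]

# Forward chaining: keep firing rules until nothing new is derived.
changed = True
while changed:
    changed = False
    for antecedents, consequent in rules:
        if antecedents <= facts and consequent not in facts:
            facts.add(consequent)
            changed = True

print(facts)
# {'engine_cranks', 'no_fuel_smell',
#  'fuel_not_reaching_engine', 'check_fuel_pump'}
```

Each rule is a purely mechanical pattern match: the system never knows what a fuel pump is, only that one token set licenses another token.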
At its core, GOFAI can be considered ‘artificially intelligent’ because of semantic interpretation. If the symbols represent aspects of our world, then the result, which is also a symbol sequence, can be translated back into aspects of our world. Semantic interpretation “seeks to construe a body of symbols so that what they mean (‘say’) turns out to be consistently reasonable and sensible, given the situation” (see semantics).
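Continuing the hypothetical car-diagnosis example, semantic interpretation is the step where we read the engine's output tokens back as claims about the world; the mapping below is, again, purely illustrative:

```python
# The rule engine only produces symbols; this mapping is what lets a
# human construe those symbols as sensible statements about the world.
interpretation = {
    "fuel_not_reaching_engine": "Fuel is not reaching the engine.",
    "check_fuel_pump": "The mechanic should inspect the fuel pump.",
}

conclusion = "check_fuel_pump"  # a symbol derived by the rule engine above
print(interpretation[conclusion])
# The mechanic should inspect the fuel pump.
```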
However, because of how rigidly the symbols map to the world, GOFAI systems are very narrow and vulnerable to unexpected variations and oddities in the problems and information they are given. That is, the Potemkin village a GOFAI system constructs will hold up if seen only from the intended angles, but any slight deviation from an intended or expected input shatters the illusion immediately, as the sketch below shows.
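To see this brittleness, consider the same hypothetical rule set given a fact a human would read as equivalent, but that the rule author never anticipated:

```python
# "engine_turns_over" means the same thing to a human as "engine_cranks",
# but to the system they are unrelated tokens: no rule mentions the new
# symbol, so nothing fires and no diagnosis is produced.
facts = {"engine_turns_over", "no_fuel_smell"}
rules = [
    ({"engine_cranks", "no_fuel_smell"}, "fuel_not_reaching_engine"),
    ({"fuel_not_reaching_engine"}, "check_fuel_pump"),
]

changed = True
while changed:
    changed = False
    for antecedents, consequent in rules:
        if antecedents <= facts and consequent not in facts:
            facts.add(consequent)
            changed = True

print(facts)
# {'engine_turns_over', 'no_fuel_smell'} -- no conclusion, no warning
```

The system does not degrade gracefully or ask for clarification; it simply falls silent, because the variation lies outside the angles its builders anticipated.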