The European Union adopted new data-protection rules in 2016 (the General Data Protection Regulation) that include a legal right to an explanation of decisions made by algorithms.

As AI systems grow more powerful and are incorporated into ever more consequential decision-making, explainability becomes essential for algorithmic accountability.

For now, though, the current state of deep learning means that a network’s internal state is a distributed representation: the content behind a decision is spread across many units at once, so there is no ‘single neuron’ responsible for a given decision, and semantic symbols do not arise on their own.
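
To make this concrete, here is a minimal sketch (assuming scikit-learn is available; the network size and the Iris data are purely illustrative choices) showing that a trained network’s decision is carried by many hidden units simultaneously rather than by any single ‘decision neuron’:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
# A small ReLU network; 16 hidden units is an arbitrary illustrative choice.
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X, y)

# Recompute the hidden-layer activations for one input by hand: the
# representation is distributed across many simultaneously active units,
# none of which individually encodes the predicted class.
hidden = np.maximum(0, X[0] @ net.coefs_[0] + net.intercepts_[0])
print(f"{(hidden > 0).sum()} of {hidden.size} hidden units are active")
```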

Semantic meaning arises in neural networks only when we inject it ourselves or recover it through external probing techniques (e.g. LIME for explainability).
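
As a sketch of what such probing looks like in practice, the snippet below uses the `lime` package (assuming it and scikit-learn are installed; the random-forest model and Iris data are illustrative stand-ins for any black-box classifier). LIME fits a simple, interpretable surrogate model around a single prediction and reports local feature weights:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Build an explainer from the training distribution, then ask for a
# local explanation of one specific prediction.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
)
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
# Each pair is (human-readable feature condition, local importance weight).
print(explanation.as_list())
```

The point of the exercise is that the semantics live in the surrogate explanation, not in the network itself: the weights LIME reports are properties of a locally fitted linear model, injected after the fact.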