# jzhao.xyz

# Labels and Quantization

Last updated February 15, 2022

“Where is the knowledge we have lost in the information?” – T.S. Eliot’s “The Rock”

Our obsession with applying labels to everything extends even to whether a hot dog is a sandwich or not

“Accuracy is more useful in entry-level jobs and for novices, because as skill increases, quantification of skill becomes harder.”

Why do we have labels in the first place?

• they help us to communicate complex ideas between each other without having to explain our entire mental models
• they are attached to societal connotations and perceptions of certain concepts
• they give legitimacy in the form of social proof to concepts. More on this in terminology

Are overloaded terms still useful? (e.g. hacker has so many connotations attached to it) At that point, do we need to create new terminology? See: hermeneutical injustice

## Qualia

Is there any way to label or quantify the subjective human experience? Probably not.

If the same apple sends two very different signals to two different people’s brains, how is it that we decipher it to be semantically identical?

Not sure if there’s any way to easily do this

## Non-semantic Information

“[the] shadows, wind, rust, in the signs of wear on a well-trodden staircase, the creaks of a battered bridge — all the indexical messages of our material environments” From A city is not a computer

## Concepts

• McNamara Fallacy: Also known as the quantitative fallacy: making a decision involving purely quantitative observations (ignoring all others) is often wrong. Source

In a data-driven world, can we and should we try to quantize everything?

Some metrics are inherently very difficult to quantize (e.g. quality of engagement), while others are easier to quantize and thus optimize for (like raw engagement)

## On Algorithmic Decision Making

No matter how much data we collect, two people who look the same to the algorithm can always end up making different choices.

We gave you two definitions of fairness: keep the error rates comparable between groups, and treat people with the same risk scores in the same way. Both of these definitions are totally defensible! But satisfying both at the same time is impossible.
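This tension can be made concrete with a toy sketch (all numbers hypothetical): even a perfectly calibrated risk score, applied with the same threshold to two groups, produces different false positive rates whenever the groups have different base rates — so the two fairness definitions cannot both hold.

```python
def false_positive_rate(people, threshold):
    """Fraction of people WITHOUT the outcome who still get flagged."""
    negatives = [p for p in people if not p["outcome"]]
    flagged = [p for p in negatives if p["score"] >= threshold]
    return len(flagged) / len(negatives)

def make_group(score, size):
    """A calibrated group: among people with score s, a fraction s
    actually has the outcome. (Hypothetical, maximally simple case:
    everyone in the group shares one score.)"""
    positives = round(score * size)
    return ([{"score": score, "outcome": True}] * positives +
            [{"score": score, "outcome": False}] * (size - positives))

group_a = make_group(score=0.6, size=100)  # higher base rate
group_b = make_group(score=0.2, size=100)  # lower base rate

threshold = 0.5  # "same treatment": identical threshold for both groups
fpr_a = false_positive_rate(group_a, threshold)
fpr_b = false_positive_rate(group_b, threshold)
# fpr_a == 1.0: every non-outcome person in group A is flagged
# fpr_b == 0.0: nobody in group B is flagged
```

The scores here are calibrated and both groups are treated identically, yet the error rates diverge completely — the same structure as the COMPAS debate.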

For more on algorithms and algorithmic decision making, see To Live in Their Utopia and Algorithms of Oppression