
Trust

Last updated Oct 8, 2021

In an empire of lies, telling the truth is a revolutionary act. In a fearful society, love and trust are the primary tools of resistance.

Source: Kernel on Trust

Trust is only meaningful once we have fully understood how people can lie.

Having clearly defined and encoded rules means that there is an implicit shift from trusting those who own the medium to trusting those who are transacting. In essence, trusting peers rather than a regulatory power.

This feels like it’s missing an important step in the process: trusting the medium. If people don’t understand how the medium works to facilitate transactions, how can they trust it? Not only is access important, but also how easily people can understand why it works.

Curious whether this relates to philosophy of science and Cartesian skepticism

Related: game theory and trust, PHIL240A Final Paper

To dream up important ideas you must think like an idealist; to build systems that will live up to those dreams, you must think like an adversary.

Money as Faith

What you believe informs what you pay attention to and how you act, which define what you vest value in.

# Web3

“Because blockchains allow us to define succinctly our shared truths, and because the record itself is shared across all participants, there is a whole new “trust space” we can explore, searching for more valuable kinds of transactions impossible within merely legal fictions.”
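As a loose illustration (my own sketch, not from the quoted source), a hash-chained record captures the mechanical core of the “shared truths” claim: every participant can hold a copy and independently verify the entire shared history.

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain: list, record: dict) -> None:
    """Append a record, linking it to the current tip of the chain."""
    prev = chain[-1]["hash"] if chain else "genesis"
    chain.append({"record": record, "prev": prev, "hash": block_hash(record, prev)})

def verify(chain: list) -> bool:
    """Any participant with a copy can recompute every link."""
    prev = "genesis"
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(block["record"], prev):
            return False
        prev = block["hash"]
    return True

chain: list = []
append(chain, {"from": "alice", "to": "bob", "amount": 5})
append(chain, {"from": "bob", "to": "carol", "amount": 2})
assert verify(chain)                 # the shared record checks out
chain[0]["record"]["amount"] = 500
assert not verify(chain)             # tampering is detectable by anyone
```

Trust here moves from any single record-keeper to the verification procedure itself, which echoes the earlier point about trusting the medium.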

# Trust as an Unquestioning Attitude

C. Thi Nguyen

“We inhabit trust like we inhabit the air, and we only notice it when it has departed.”

Most theories of trust presume that trust is a conscious attitude that can be directed only at other agents. I sketch a different form of trust: the unquestioning attitude. What it is to trust, in this sense, is not simply to rely on something, but to rely on it unquestioningly. It is to rely on a resource while suspending deliberation over its reliability.

# Baier’s Goodwill Theory

Colloquial use of “trust” blurs together two very distinct concepts: mere reliance, and a richer, normative notion of trust that can be betrayed

Question: can objects be trusted in the normative sense? Can we feel betrayed by objects?

This involves ascribing goodwill to the trusted, and the sense of betrayal comes from the discovery that there is no such goodwill after all.

# Responsiveness Theories

The trusted party takes the fact that you trust them as a reason to fulfill that trust.

a trustworthy person “takes the fact that they are counted on to be a reason for acting as counted on” (Jones 2012, 66)

Betrayal of trust here is failure to be properly responsive

Hawley’s definition of trust is that “to trust somebody is to take them to have made a commitment to do something and to rely on them to fulfill that commitment”. Hawley’s account grounds the sense of betrayal in the trusted person’s failure to live up to their commitments.

# Non-agent based theories

Yet all of these theories share the presumption of agent-directedness or intentionality.

Related: agential gullibility and the Extended Mind Hypothesis

“The veteran also suffers from a problem of trust, a building block on which all of social life is erected. The everyday, taken-for-granted reality of civilian life ignores much; civility assumes the nonlethal intentions of others. In war, however, all such assumptions evaporate: one cannot trust the ground one walks on, the air one breathes, nor can one expect with full assuredness that tomorrow will come again.” (Kearl 1989, 353) … The fact that many philosophers find it odd to speak of being betrayed by their environment is perhaps best explained by the fact that most philosophers have led, by and large, pretty cushy lives.

Interesting to distinguish between what we are trusting when we trust designed objects (e.g. search engines, devices, websites) versus non-designed objects (the ground, physics, etc.)

To lose trust is to shift from the unquestioning state to the endlessly skeptical and suspicious mood.
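A toy way to picture this shift (my own sketch, not Nguyen’s): a wrapper that relies on a resource without any checking until the first betrayal, after which every answer must be verified.

```python
class TrustedResource:
    """Rely on a resource unquestioningly until it betrays us once;
    afterwards, verify everything (the skeptical, suspicious mood)."""

    def __init__(self, query, verify):
        self.query = query      # the resource we lean on
        self.verify = verify    # an expensive double-check
        self.trusting = True    # start in the unquestioning state

    def ask(self, question):
        answer = self.query(question)
        if self.trusting:
            return answer       # no deliberation over reliability
        if not self.verify(question, answer):
            raise ValueError(f"unverified answer for {question!r}")
        return answer

    def betrayed(self):
        self.trusting = False   # one betrayal ends unquestioning reliance

cache = {"pi": 3.14159}
resource = TrustedResource(query=cache.get, verify=lambda q, a: a is not None)
resource.ask("pi")     # returned without any checking
resource.betrayed()    # trust is lost ...
resource.ask("pi")     # ... so every answer is now verified first
```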

PHIL 240A Assignment 1: Trust as Unquestioning Attitude

# Partiality and prejudice in trusting

By Katherine Hawley

Is it reasonable to trust your friends?

Common definition of trust: relying on someone to do something, while believing that they will do it

Interestingly, there is a gap between relying on someone to do something and believing that they will do it

Stroud (2006) and Kelly (2004) argue that we should have partiality towards friends not only in actions but beliefs as well, though this isn’t always the right thing to do. As Stroud says, ‘friendship requires epistemic irrationality’

The considerations are all in some sense selfish—they play on our wish to be right, to have been right, to be a good judge of character, and to avoid difficult situations.

Exploring potential conflicts between different types of trust:

  1. Epistemic Trust: trust in someone as a speaker or source of knowledge
  2. Practical Trust: trust in someone as an actor

In fact, there is often a two-way causal interaction between friendship and trustworthiness. Roughly, people are more likely to behave in a trustworthy manner towards their friends, and we are more likely to form friendships with people we consider trustworthy. Clearly, there are exceptions:

  1. A friend might let you down rather than disappoint someone else, hoping that you, as their friend, will understand and forgive them
  2. A friend might find it more tempting to lie about some matters because they are concerned with maintaining a good image of themselves in your mind

But if a friend takes these liberties too often, you will feel you have been taken for granted and come to resent them. Friendship requires mutual respect and openness as well as forgiveness.

Stroud’s 4 demands of friendship

  1. Serious scrutiny: scrutinize negative claims about our friends
  2. Different conclusions: draw different conclusions and make different inferences than they otherwise would about non-friends given the same information
  3. Interpretive Charity: interpret evidence against friends more charitably than with non-friends
  4. Reason: treat the fact that someone is a friend as a reason when forming beliefs about them

# Trust Circles

From Buzzard


# Trust between human and non-human systems

Trust has historically been distinguished from mere reliance (Baier, 1986, p. 242) by an attitude of trust, or by extra factors that separate genuine trust from the mere reliance we place in inanimate objects (Hawley, 2014, p. 1). However, algorithms and computerized decision-making systems are beginning to play larger roles in our society – deciding jail time for criminals, giving medical diagnoses, and more. How should we weigh the epistemic authority or trustworthiness of human versus non-human expert systems?

I posit that, until these algorithmic systems can reliably be held accountable for their decision making, they should not be epistemically load-bearing. These systems should supplement human decision making rather than be considered epistemic authorities in and of themselves. Let us construct a case study to examine this in more detail.

Suppose you are a hiring manager at a tech company. There is a candidate in your company’s pipeline about whom you are very much on the fence. She has an incredibly strong ‘yes’ recommendation from a more senior hiring manager. You don’t know this higher-up very well, but you do know that she and the candidate are already close friends. On the other hand, the company uses an internal AI-powered candidate ranking system. This system is quite complex, and the original engineers who designed it have long since left the company. The system gives the candidate a strong ‘no hire’ recommendation.

In this situation, both sources are ‘authorities’, having been approved by the company for use in the hiring process. The more senior hiring manager is clearly an expert, having worked at the company and hired many stellar employees in the past. The algorithm can also be considered an expert here, having scored extremely highly on tests of accuracy in predicting, from historical data, whether candidates will do well at the company. It has been vetted for internal use.

However, it is important to note that while both parties are potentially biased, it is far more likely that the algorithmic system is biased.

The senior hiring manager could potentially be doxastically partial towards her friend, but not because it is normative to always be partial to our friends. Notably, Crawford defines being a good friend as constitutively involving forming attitudes about one’s friends that are appropriately responsive to the features one’s friends have that appear to warrant those attitudes (2019, p. 1). It is unknown to you whether the senior hiring manager has any state-given reason to highly recommend her friend, and we cannot assume she does, as that claim is unfounded (we have no evidence for it). Thus, we have solid reason to assume that the manager’s friend actually does have the features that the manager believes make her such a good candidate.

However, there is one clear detail in this case that makes the algorithmic expert far more likely to be biased: it is trained on historical data that has been sanitized and decontextualized. Historical data shows that there have been more men than women in the tech workforce. The forbidden base rate (Gendler, 2011) here is the statistical information about the relative number of male and female employees in the tech industry. If the data was sampled at random, then it is statistically optimal for the algorithm to prefer male applicants over female applicants, rather than judging purely on the basis of qualification for the position. This, while epistemically rational, may not be the correct choice of action for moral reasons.
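To make the base-rate worry concrete, here is a hypothetical toy simulation (all numbers invented for illustration). If the historical record itself encodes bias, with men hired at higher rates than equally skilled women, then a model fit to that record will find it ‘statistically optimal’ to prefer the male applicant at identical qualification.

```python
import random

random.seed(0)

# Hypothetical historical hiring data (numbers invented for illustration).
# Past decisions were themselves biased: at equal skill, men were hired
# more often, so the record encodes the bias.
data = []
for _ in range(100_000):
    male = random.random() < 0.7    # historically skewed pipeline
    skill = random.random()         # true qualification
    hired = random.random() < min(1.0, skill + (0.15 if male else 0.0))
    data.append((male, skill, hired))

def hire_rate(is_male: bool, lo: float, hi: float) -> float:
    """Empirical P(hired) for one gender within a skill band."""
    outcomes = [h for m, s, h in data if m == is_male and lo <= s < hi]
    return sum(outcomes) / len(outcomes)

# A model fit to this record rates the man higher at identical skill:
# the forbidden base rate doing its work.
print("P(hired | male,   skill ~ 0.6):", round(hire_rate(True, 0.55, 0.65), 2))
print("P(hired | female, skill ~ 0.6):", round(hire_rate(False, 0.55, 0.65), 2))
```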

Lastly, I put forth the concept of epistemic accountability: a measure of whether there are ways of holding the agent in question accountable for their doxastic claims. Accountability here refers to the ability to reduce the epistemic trust placed in an authority after it violates an epistemic norm (e.g. by being incorrect). I argue that the algorithmic authority cannot be held accountable for its actions, as it has no capacity as an epistemic agent of its own. Since the algorithm is a designed object, we can instead attribute to it a form of derived trust, wherein the trust is placed not only in the algorithm itself, but in its designers (the engineers who created the algorithm, the data engineers who sourced and cleaned the data) or in experts who know how to operate it. As both progenitors of this derived trust are absent, it would be epistemically irrational to trust this algorithm.
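One way to make epistemic accountability operational (a minimal sketch of my own, not from the cited literature): keep a trust weight per authority and discount it whenever that authority is caught violating an epistemic norm. The absent designers show up as a weight that nobody is able to update.

```python
class TrustLedger:
    """A sketch of epistemic accountability: trust in an authority can be
    reduced after it violates an epistemic norm (e.g. is shown incorrect)."""

    def __init__(self, penalty: float = 0.5, reward: float = 1.05):
        self.weights: dict[str, float] = {}
        self.penalty = penalty  # multiplicative discount when wrong
        self.reward = reward    # mild boost when vindicated

    def register(self, authority: str, initial_trust: float = 1.0) -> None:
        self.weights[authority] = initial_trust

    def report(self, authority: str, was_correct: bool) -> None:
        factor = self.reward if was_correct else self.penalty
        self.weights[authority] = min(1.0, self.weights[authority] * factor)

ledger = TrustLedger()
ledger.register("senior_manager")
ledger.register("ranking_algorithm")
ledger.report("senior_manager", was_correct=False)  # trust drops to 0.5
# Nobody can file reports for the orphaned algorithm: its weight stays frozen
# at whatever it started with. That frozen weight is the accountability gap.
print(ledger.weights)
```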

In conclusion, despite potential biases on both sides, the algorithmic authority has clear flaws in its ability to be held accountable as an epistemic agent and is highly likely to be partial against the female candidate due to the forbidden base rate in this case study. It is much more likely that the senior hiring manager is a trustworthy epistemic agent.

# References

  1. Baier, A., 1986. Trust and antitrust. Ethics, 96(2), pp. 231-260.
  2. Hawley, K., 2014. Partiality and prejudice in trusting. Synthese, 191(9), pp. 2029-2045.
  3. Crawford, L., 2019. Believing the best: on doxastic partiality in friendship. Synthese, 196(4), pp. 1575-1593.
  4. Gendler, T., 2011. On the epistemic costs of implicit bias. Philosophical Studies, 156(1), pp. 33-63.