Is the suffering of a few acceptable as long as the greater good is served? A utilitarian way of doing good.
Related: The Ones Who Walk Away from Omelas, quantization
What is EA?
Earning to give. Work can have a huge impact on the kind of change we can make in the world. We spend ~80,000 hours of our lives working; it makes sense to spend at least 1% of that (~800 hours) thinking about what to dedicate the other 99% to.
How do we time donations to maximize impact? Turns out there’s no ‘Goldilocks zone’ along the continuum:
- When you’re young: justify putting it off by saying “oh, I’ll donate when I have more money”
- When you’re old: justify not donating by saying “oh, that’s too much of my money to give away”
Shallow Pond Analogy
Story from source: The Drowning Child and the Expanding Circle by Peter Singer
Walking past a shallow pond one morning, you notice a child has fallen in and appears to be drowning. To wade in and pull the child out would be easy, but it will mean that you get your clothes wet and muddy, and by the time you go home and change you will have missed your first class.
Do you have any obligation to rescue the child? Nearly everyone says they do. Does it matter if other people walk by the pond and do nothing? Most people still say no. Does it matter if the child is far away, in another country perhaps? Most people still say no.
“We are all in that situation of the person passing the shallow pond. We can all save the lives of people, both children and adults, who would otherwise die, and we can do so at a very small cost to us: the cost of a new CD, a shirt, or a night out at a restaurant or concert, can mean the difference between life and death to more than one person somewhere in the world.”
Ineffective Altruism
Ineffective altruism eschews metrics, because “What does doing good look like?” should be a continuously posed question rather than an optimization problem.
Strengthening community is also important for our shared future, even if it isn’t measurable.
What is the best we can do as a collective and community rather than at the individual level?
I’m curious whether there have been attempts to quantify efforts of mutual aid and solidarity? If so, what does that computation come out to? Thinking about it, don’t most quantifications have huge error bars? How do EA folks choose between prioritizing something that is almost certainly good vs something that has a very small chance of a huge upside? Seems to me they base it on the expected value of the distribution, which… isn’t reasonable in most cases, I feel (especially when the long tails of these distributions can explode) (see: interpreting small probabilities)
Potential fixes include capping the tail end of the distribution?
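A minimal sketch of that worry, with entirely made-up numbers: under raw expected value, a tiny chance of an enormous payoff can dominate a near-certain modest good, and capping the tail of the payoff distribution flips the ranking.

```python
# Hypothetical illustration of the worry above -- all numbers are made up.
# Compare a near-certain modest good against a long-shot huge upside,
# first by raw expected value, then with the tail of the payoff capped.

def expected_value(outcomes):
    """Expected value of a list of (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes)

def capped_expected_value(outcomes, cap):
    """Same, but no single payoff is allowed to count for more than `cap`."""
    return sum(p * min(v, cap) for p, v in outcomes)

# Intervention A: almost certainly does a modest amount of good.
certain_good = [(0.99, 100), (0.01, 0)]

# Intervention B: almost certainly does nothing, tiny chance of an enormous payoff.
longshot = [(0.999999, 0), (0.000001, 1_000_000_000)]

print(expected_value(certain_good))             # ~99
print(expected_value(longshot))                 # ~1000 -- raw EV favors the long shot
print(capped_expected_value(longshot, 10_000))  # ~0.01 -- capping the tail flips the ranking
```

Under raw expected value the long shot wins; with a cap on how much any single outcome can count, the near-certain good wins. The catch is that the choice of cap is itself a judgment call, which is part of the problem.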
Seventh-generation decision making, for example, is an indigenous principle that is enshrined in the Constitution of the Iroquois Nation. It mandates Iroquois leaders to consider the effects of their actions over seven generations, encompassing hundreds of years. Seven generations is a long time, but it is also a finite amount of time. (see: traditional knowledge)
Karl Popper had a really good way of putting this, which places him very solidly on the side of those who would choose to save the child in The Ones Who Walk Away From Omelas:
Similarly we must not argue that the misery of one generation may be considered as a mere means to the end of securing the lasting happiness of some later generation or generations; and this argument is improved neither by a high degree of promised happiness nor by a large number of generations profiting by it. All generations are transient. All have an equal right to be considered, but our immediate duties are undoubtedly to the present generation and to the next.