My interests are in areas where a statutory solution is infeasible.
I'm a Trustee at Action for Child Trauma International. We train people to deliver PTSD treatment to children after conflict. A large number of children with PTSD often means that the state has collapsed or is not functioning.
I'm also the Product Lead at Samaritans, a suicide reduction charity in the UK & Ireland that provides 24/7 emotional support. When people are suicidal, it may be difficult or impossible to disclose this to friends, family or statutory services, which may be obliged to act in ways the individual knows will run counter to their interests.
ACT International needs funding so that we can hire a permanent member of staff to operate the charity - we've outgrown what we can sustainably do with volunteers.
Product management; suicide; trauma
I'm not sure I agree with the conclusion, because people with dark triad personalities may be better than average at virtue signalling and demonstrating adherence to norms.
I think there should probably be a focus on principles, standards and rules that can be easily recalled by a person in a chaotic situation (e.g. put on your own mask before helping others), and these should be designed with limiting downside risk and risk of ruin in mind.
My intuition is that the rule "disfavour people who show signs of being low integrity" is a bad one.
I'd favour starting from the premise that everyone has the potential to act without integrity, and trying to design systems that mitigate this risk.
Some possible worlds:
| | SBF was aligned with EA | SBF wasn't aligned with EA |
| --- | --- | --- |
| SBF knew this | EA community standards permit harmful behaviour. | SBF violated EA community standards. |
| SBF didn't know this | EA community standards are unclear. | EA community standards are unclear. |
Some possible stances in these worlds:
| Possible world | Possible stances | | |
| --- | --- | --- | --- |
| EA community standards permit harmful behaviour | 1a. This is intolerable. Adopt a maxim like "first, do no harm" | 1b. This is tolerable. Adopt a maxim like "to make an omelette you'll have to break a few eggs" | 1c. This is desirable. Adopt a maxim like "blood for the blood god, skulls for the skull throne" |
| SBF violated EA community standards | 2a. This is intolerable. Standards adherence should be prioritised above income-generation. | 2b. This is tolerable. Standards violations should be permitted on a risk-adjusted basis. | 2c. This is desirable. Standards violations should be encouraged to foster anti-fragility. |
| EA community standards are unclear | 3a. This is intolerable. Clarity must be improved as a matter of urgency. | 3b. This is tolerable. Improved clarity would be nice but it's not a priority. | 3c. This is desirable. In the midst of chaos there is also opportunity. |
I'm a relative outsider and I don't know which world the community thinks it is in, or which stance it is adopting in that world.
Some hypotheses:
1. Consequentialist ethics are inherently future-oriented. The future contains unknown unknowns and unknowable unknowns, so any system of consequentialist ethics is always operating in the complex and chaotic domains. Consequentialism proceeds by analytical reasoning, even if a subset of that reasoning is probabilistic; analytical reasoning is not applicable in the complex and chaotic domains, so consequentialism is not a useful framework there.
2. What is actually happening when you think something through from a consequentialist perspective is that you are imagining possible futures and identifying ways to get there, which is an imaginative process, not an analytical one.
3. Better frameworks would be present-oriented, focusing on needs and how to sustainably meet them. Virtue ethics and deontological ethics are present-oriented frameworks that may have some value here. Orienting towards establishing services and infrastructure at the right scale and resilience level, rather than towards outputs (e.g. lives saved), would be more fruitful over the long term.
This is not a valid argument. The conclusion doesn't follow from the premise unless you assume that "help people" means "help people as much as possible". But it doesn't.
Notwithstanding that, let's say "I want to help people as much as possible." I still need a theory of change: help people in what way?
You could try "help as many people as possible avoid unnecessary death". Then we are in the scenario you describe, but we have already excluded investing in the arts based on how we have defined our goal.
If everyone gave to the "best charity", you would pass the point of diminishing returns and would likely hit the point of negative returns.
Therefore, everyone seeking the "best charity" to give to is not the best approach to capital allocation.
Therefore, some people should not be following this strategy.
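A toy model makes the diminishing-returns point concrete. This is a minimal sketch under assumptions of my own: each charity's impact is a concave function of total funding (here scale × ln(1 + funding)), and the scale values are illustrative, not real effectiveness estimates.

```python
import math

def impact(funding, scale):
    """Concave impact curve: marginal returns shrink as funding grows."""
    return scale * math.log(1 + funding)

BUDGET = 10_000_000      # total donations available (arbitrary units)
BEST, SECOND = 3.0, 2.0  # hypothetical effectiveness scales, not real data

# Strategy A: everyone gives to the single "best" charity.
concentrated = impact(BUDGET, BEST)

# Strategy B: split the budget so marginal returns are equal.
# Marginal return of scale*ln(1+x) is scale/(1+x); setting
# BEST/(1+x1) = SECOND/(1+x2) with x1 + x2 = BUDGET gives:
x1 = (BEST * (BUDGET + 1) - SECOND) / (BEST + SECOND)
x2 = BUDGET - x1
split = impact(x1, BEST) + impact(x2, SECOND)

print(f"All to 'best' charity: {concentrated:.1f}")
print(f"Split across two:      {split:.1f}")  # strictly higher total impact
```

Even without modelling outright negative returns, concavity alone is enough to make full concentration suboptimal: past the crossover point, the next pound does more good at the second-best charity.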
This assumes it is ethically responsible to perpetuate a system in which one person can earn $1000 per hour while others make $10 per hour, and that being a high-powered lawyer is otherwise morally neutral.