
AnonymousTurtle

756 karma · Joined Aug 2022

Comments (78)

In general, I think it's important to separate EA as in the idea from EA as in "a specific group of people". You might hate billionaires, MacAskill and GiveWell, but the equal consideration of similar interests can still be an important concept.

Just because you never met them doesn't mean that people like GiveDirectly recipients are not "real, flesh-and-blood humans" who experience joys and sorrows just as much as you do, and have a family and friends just as you have.

Tucker Carlson, when writing a similar critique of effective altruism, even used "people" in scare quotes to indicate how sub-human he considers charity beneficiaries to be, just because they happened to be born in a different country and never met a rich person. Amy Schiller says that people you don't have a relationship with are just "abstract objects".

I see EA as going against that, acting on the belief that we are all real people, who don't matter less if we happen to be born in a low-income country with no beaches.

As for your questions:

  1. Do folks agree EA's shortfalls form a pattern & are not one off incidents? (And, if so, what are those shortfalls?)

Yeah, folks agree that EA has many shortfalls, to the point that people write about Criticism of Criticism of Criticism. Some people say that EA focuses too much on data, ignoring non-RCT sources of information and more ambitious change; others say that it focuses too much on speculative interventions that are not backed by data, based on arbitrary "priors". Some say it doesn't give enough to non-human animals; some say it shouldn't give anything to non-human animals.

Also, in general anything can call itself "EA", and some projects that have been associated with "EA" are going to be bad just on base rates.

  2. How can we (as individuals or collectively) update or reform / what ought we do differently in light of them?

I'd guess it depends on your goals. I think donating more money is increasingly valuable if you think the existing donors are doing a bad job at it (especially if you have the income of a Stanford professor).

Also, suggestions for individuals

CE has a fairly strong reputation of being hostile / non-collaborative

Could you elaborate on "being hostile"? Do they have a reputation for causing harm, or is it just about not listening to feedback?

  1. What do you donate to?
  2. What is your take on GiveDirectly?
  3. Do you think Mariam is not a "real, flesh-and-blood human", since you never met her?
  4. Do you think that spending money surfing and travelling the world while millions are starving could be considered by some a suboptimal use of capital?

METR ('Model Evaluation & Threat Research') might also be worth mentioning. I wonder if there's a list of capability evaluation projects somewhere.

I think mainstream HR comes primarily from the private sector and is primarily about protecting the employer, often against the employee. They often cast themselves in a role of being there to help you, but a common piece of folk wisdom is "HR is not your friend". I think frankly that a lot of mainstream HR culture is at worst dishonest and manipulative, and I'd be really sad to see us uncritically importing more of that.


I see a lot of this online, but it doesn't match my personal experience. The people working in HR that I've been in contact with generally seem kind, aware of tradeoffs, and genuinely concerned about the wellbeing of employees.

I worry that the online reputation of HR departments is shaped by a minority of terrible experiences, and we overgeneralize that to think that HR cannot or will not help, while in my experience they are often really eager to try to help (in part because they don't want you and others to quit, but also because they are nice people).

Maybe it's also related to the difference between minimum-wage, non-skilled jobs and higher-paying jobs, where employment tends to be less adversarial and less exploitative.

I agree; if anything, data from the for-profit world probably updates me against very-small-sized companies being optimal.

That argument just doesn't go all the way up to trillion-dollar behemoths, as I had thought.

In the non-profit world, GiveWell's top charities seem to have very different team sizes, so maybe we just can't say much with generality.

I think it's bad to repeatedly accuse people of things they didn't do, or of having responsibilities they didn't have, and then write "Oops, sorry!", and we should do less of this.

You could have easily checked in with them, as with MacAskill last time, so that RP didn't have to rush in immediately with a correction; otherwise far fewer people will see the correction (if any) than the original claim/accusation. It lowers this forum's epistemics, wastes people's time, and stains the accused people's reputation for no reason.

The tradeoff between writing a claim instantly and spending some time to confirm its correctness usually favours the latter. If I were on the board of RP, having my name on this thread could be damaging, and I would feel lucky that it got corrected immediately. I downvoted because I want to see fewer comments like that.

This is a very good point: when huge companies get split up, the stock usually rises.

When Alibaba was forced to split into six separate groups, the stock went up 10%. Please correct me if I'm wrong, but if I remember correctly, when Standard Oil was split into 34 companies the combined stocks also appreciated a lot.

Why wouldn't the same apply to Alphabet?

I'm curious to what extent these issues are linked to worries about the brand's reputation, specifically the negative connotations currently associated with the EA brand.

Thank you for writing this. I share many of these, but I'm very uncertain about them.

Here it is:

Giving a range of probabilities when you should give a probability + giving confidence intervals over probabilities + failing to realize that probabilities of probabilities just reduce to simple probabilities

I think this is rational: I think of probabilities in terms of bets and order books. This is close to my view, and the analogy with financial markets is not irrelevant.
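To illustrate the reduction, here's a minimal sketch (the numbers are made up): however uncertain you are about the "inner" probability, pricing a single bet only depends on its expectation.

```python
# Minimal sketch (illustrative numbers): if my credence in "rain" is itself
# uncertain -- say a 30% chance the true probability is 0.9 and a 70% chance
# it is 0.2 -- then for pricing a one-off bet the mixture collapses to a
# single number, its expectation.
weights = [0.3, 0.7]        # how likely each "inner" probability is
inner_probs = [0.9, 0.2]    # the candidate probabilities of rain

p = sum(w * q for w, q in zip(weights, inner_probs))
print(p)  # 0.41 -- the only number that matters for the bet's fair odds
```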

Unstable beliefs about stuff like AI timelines in the sense of "I'd be pretty likely to say something pretty different if you asked tomorrow"

Changing literally day-to-day seems extreme, but month-to-month seems very reasonable given the speed of everything that's happening, and it matches e.g. the volatility of NVIDIA stock price.

Axiologies besides ~utilitarianism

To me, "utilitarianism" seems pretty general, as long as you can arbitrarily define utility and arbitrarily choose between Negative/Rule/Act/Two-level/Total/Average/Preference/Classical utilitarianism. I really liked this section of a recent talk by Toby Ord (starting from "It starts by observing that the three main traditions in Western philosophy each emphasize a different focal point:"). (I also don't know if axiology is the right word for what we want to express here; we might be talking past each other.)

Veg(etari)anism for terminal reasons; veg(etari)anism as ethical rather than as a costly indulgence

I mostly agree with you, but second order effects seem hard to evaluate and both costs and benefits are so minuscule (and potentially negative) that I find it hard to do a cost-benefit-analysis.

Thinking personal flourishing (or something else agent-relative) is a terminal goal worth comparable weight to the impartial-optimization project

I agree with you, but for some it might be an instrumentally useful intentional framing. I think some use phrases like "[Personal flourishing] for its own sake, for the sake of existential risk." (see also this comment for a fun thought experiment for average utilitarians, but I don't think many believe it)

Cause prioritization that doesn't take seriously the cosmic endowment is astronomical, likely worth >10^60 happy human lives and we can nontrivially reduce x-risk

Some think the probability of extinction per century is only going up with humanity's increasing capabilities, and are not convinced by arguments that we'll soon reach close-to-speed-of-light travel that will make extinction risk go down. See also e.g. Why I am probably not a longtermist (except point 1). I find this very reasonable.
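To make the compounding concrete, here's a hedged sketch with invented numbers: if per-century extinction risk stays at some constant level (or rises) instead of declining, the probability of surviving long enough to realize anything like the cosmic endowment shrinks geometrically.

```python
# Hedged sketch, all figures invented: cumulative survival probability under
# a constant per-century extinction risk that never declines.
def survival_probability(per_century_risk: float, centuries: int) -> float:
    """Probability of surviving `centuries` in a row at a constant per-century risk."""
    return (1 - per_century_risk) ** centuries

for risk in [0.001, 0.01, 0.1]:
    print(f"risk {risk:.3f}/century -> P(survive 1000 centuries) = "
          f"{survival_probability(risk, 1000):.2e}")
# Roughly 0.37, 4e-5, and 2e-46 respectively: only if the per-century risk
# eventually falls toward zero does long-run survival stay non-negligible.
```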

Deciding in advance to boost a certain set of causes [what determines that set??], or a "portfolio approach" without justifying the portfolio-items

I agree. I think this makes a ton of sense for people in community building who need to work with many cause areas (e.g. CEA staff, Peter Singer), but I fear that it makes less sense for private individuals maximizing their impact.

Not noticing big obvious problems with impact certificates/markets

I think many people notice big obvious problems with impact certificates/markets, but think that the current system is even worse, or that they are at least worth trying and improving, to see if at their best they can in some cases be better than the alternatives we have. The current funding systems also have big obvious problems. What big obvious problems do you think they are missing?

Naively using calibration as a proxy for forecasting ability

I agree with this; I just want to mention that it seems better than a common alternative I see: using LessWrong-sounding-ness/reputation as a proxy for forecasting ability.
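As a toy illustration of why calibration alone is a weak proxy (numbers invented): a forecaster who always predicts the base rate can be well calibrated while saying nothing about individual events, which a proper scoring rule like the Brier score does pick up.

```python
# Toy example (invented data): compare a base-rate forecaster with a
# discriminating one using the Brier score (mean squared error; lower is better).
outcomes = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]  # 3 of 10 events happen (30% base rate)

base_rate_forecasts = [0.3] * len(outcomes)                  # calibrated, zero discrimination
discriminating      = [0.8 if y else 0.1 for y in outcomes]  # actually informative forecasts

def brier(forecasts, outcomes):
    return sum((f - y) ** 2 for f, y in zip(forecasts, outcomes)) / len(outcomes)

print(brier(base_rate_forecasts, outcomes))  # 0.21
print(brier(discriminating, outcomes))       # 0.019 -- far better on a proper scoring rule
```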

Thinking you can (good-faith) bet on the end of the world by borrowing money ... I think many people miss that utility is about ∫consumption not ∫bankroll (note the bettor typically isn't liquidity-constrained)

I somewhat agree with you, but I think that many people model it a bit like this: "I normally consume 100k/year, you give me 10k now so I will consume 110k this year, and if I lose the bet I will consume only 80k/year X years in the future". But I agree that in practice the amounts are small and it doesn't work for many reasons.
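As a hedged sketch of that toy model (the doom probability and repayment size are invented; the 110k/80k figures mirror the ones above), with log utility over consumption rather than over the bankroll:

```python
# Hedged sketch: borrow 10k now, repay 20k in the final year only if the world
# still exists; utility is over annual consumption (log, for illustration).
import math

p_doom = 0.7          # borrower's (invented) credence that the world ends before repayment
baseline = 100_000    # normal annual consumption
horizon = 5           # years until repayment

def log_utility(consumption_stream):
    return sum(math.log(c) for c in consumption_stream)

no_bet   = [baseline] * horizon
with_bet = [baseline + 10_000] + [baseline] * (horizon - 2) + [baseline - 20_000]

# Year 1 happens either way; the later years only count if the world survives.
eu_no_bet   = log_utility(no_bet[:1])   + (1 - p_doom) * log_utility(no_bet[1:])
eu_with_bet = log_utility(with_bet[:1]) + (1 - p_doom) * log_utility(with_bet[1:])

print(eu_with_bet - eu_no_bet)  # ~ +0.03: positive, so the doom-believer gains in expectation
```

Of course this ignores the lender's side, discounting, and the liquidity point above, which is part of why it breaks down in practice.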
