nonn

Karma: 241 · Joined Feb 2018 · Posts: 1 · Comments: 21

Agree that was a weird example.

Other people around the group (e.g. many of the non-Stanford people who sometimes came by & worked at tech companies) are better examples. Several weren't obviously promising at the time, but are doing good work now.

I'm somewhat more pessimistic that disillusioned people have useful critiques, at least on average. EA asks people to swallow a hard pill: "set X is probably the most important stuff, by a lot", where X doesn't include that many things. I think this is correct (i.e. the set will be somewhat small), but it means that a lot of people's talents & interests probably aren't as [relatively] valuable as they previously assumed.

That sucks, and it creates some obvious & strong motivated reasons to lean into not-great criticisms of set X. I don't even think this is conscious, just a vague 'this feels wrong' reaction when people say [thing I'm not the best at / dislike] is the most important. This is not to say set X doesn't have major problems.

They might more often have useful community critiques imo, e.g. they're more likely to notice social blind spots that community leaders are oblivious to.

Also, I am concerned about motivated reasoning within the community, but I don't really know how to correct for it. I expect the most-upvoted critiques will be the easy-to-understand, plausible-sounding ones that assuage the problem above or soothe social feelings, not the correct ones about our core priorities. See some points here: https://forum.effectivealtruism.org/posts/pxALB46SEkwNbfiNS/the-motivated-reasoning-critique-of-effective-altruism

I'd add a much more boring cause of disillusionment: social stuff.

It's not all that uncommon for someone to get involved with EA and make a bunch of friends, and then the friend group gradually gets filtered by who gets accepted to prestigious jobs or does 'more impactful' things in the community's estimation (often genuinely more impactful!).

Then sometimes they just start hanging out with cooler people they meet at their jobs, or genuinely get busy with work, while their old EA friends are left on the periphery (and the gender imbalance piles relationship issues on top). This happens in normal society too, but there seem to be more norms/taboos there that blunt the impact.

Your second question, "Will the potential negative press and association with Democrats be too harmful to the EA movement to be worth it?", seems to ignore that a major group EAs will be running against is Democrats in primaries.

So it's not only that you're creating large incentives for Republicans to attack EA; you're also creating them for e.g. progressive Democrats. See: Warren endorsing Flynn's opponent & somewhat attacking Flynn for crypto-billionaire-sellout stuff.

That seems potentially pretty harmful too. It'd be much harder to be an active group at top universities if progressive groups strongly disliked EA.

Which I think they would, if EAs ran against progressives enough that Warren or Bernie or AOC criticized EA more strongly. That would be in line with the incentives we're creating & the general vibe [pretty skeptical of a bunch of white men, crypto billionaires, etc.].

Random aside, but does the St. Petersburg paradox not just make total sense if you believe Everett & do a quantum coin flip? I.e. in 1/2 of universes you die, & in 1/2 you more than double. From the perspective of all the things I might care about in the multiverse, this is just "make more stuff that I care about exist in the multiverse, with certainty".

Or more intuitively, "with certainty, move your civilization to a different universe alongside another prospering civilization you value, and make both more prosperous".

Or if you repeat it, you get "move all civilizations into a few giant universes, and make them dramatically more prosperous."

Which is clearly good under most views, right?
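To make the arithmetic behind that intuition explicit (a toy version of my own: I'm writing the payout as a multiplier r, where the original just says "more than double", i.e. r > 2):

```latex
% Toy calculation, assuming each flip multiplies value by r > 2 in half of the
% branches and by 0 in the other half (my simplification of "die vs. more than double").
\[
  \underbrace{2^{-n}}_{\text{measure of surviving branches}}
  \cdot
  \underbrace{r^{n} V_0}_{\text{value in each survivor}}
  \;=\;
  \Bigl(\tfrac{r}{2}\Bigr)^{\!n} V_0 \;>\; V_0
  \qquad (r > 2),
\]
% so the measure-weighted total across the multiverse grows with every flip,
% even though the measure of branches in which any given civilization survives,
% 2^{-n}, goes to zero.
```

That's just the "few giant universes" framing above in symbols.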

Another complication: we want to select for people who are good fits for our problems, e.g. math kids, philosophy research kids, etc. To some degree, we're selecting for people with personal-fun functions that match the shape of the problems we're trying to solve (where what we'd want them to do is pretty aligned with their fun).

I think your point applies to cause selection, "intervention strategy", or decisions like "moving to Berkeley". I'm confused more generally.

I'm confused about how to square this with specific counterexamples. Take theoretical alignment work: P(important safety progress) probably scales with time invested, but not 100x from doubling your work hours. Any explanations here?

Idk if this is because uncertainty/probabilistic stuff muddles the log picture. E.g. we really don't know where the hits are, so many things are 'decent shots'. Maybe after we know the outcomes, the outlier-good things would look quite bad on the personal-liking front. But that doesn't sound exactly right either.
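To spell out the counterexample with a toy model (my own assumption, not anything from the thread): if P(important progress) is a saturating function of hours invested, say P(t) = 1 - e^(-λt), then doubling hours can at most double the probability, nowhere near 100x.

```latex
% Toy model (assumed, not from the post): P(t) = 1 - e^{-\lambda t} for t hours invested.
% Writing x = e^{-\lambda t}, the ratio (1 - x^2)/(1 - x) simplifies to 1 + x:
\[
  \frac{P(2t)}{P(t)}
  = \frac{1 - e^{-2\lambda t}}{1 - e^{-\lambda t}}
  = 1 + e^{-\lambda t}
  \;\le\; 2 ,
\]
% so doubling work hours at most doubles P(important safety progress),
% which is the tension with a 100x / strongly multiplicative picture of returns.
```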

Curious if you disagree with Jessica's key claim, which is "McKinsey << EA for impact"? I agree Jessica is overstating the case for "McKinsey <= 0", but it seems like the best case for McKinsey is still order(s) of magnitude less impact than EA.

Subpoints:

  • Current market incentives don't address large risk externalities well, or appropriately weight the well-being of very poor people, animals, or the entire future.
  • McKinsey for earn-to-learn/give could theoretically be justified, but that doesn't contradict Jessica's point about spending money to get EAs.
  • Most students want a justification for anyone charitable spending significant amounts of money on movement building, & competing with McKinsey reads favorably as one.

Agree we should usually avoid saying poorly-justified things when it's not a necessary feature of the argument, as it could turn off smart people who would otherwise agree.

There were tons of cases from EAGx Boston (an area with lower covid case counts). I'm one of them. Idk exact numbers but >100 if I extrapolate from my EA friends.

Not sure whether this is good or bad though, as IFR is a lot lower now. Presumably lower long covid risk too, but hard to say.
