nonn

Comments

My bargain with the EA machine

Another complication: we want to select for people who are good fits for our problems, e.g. math kids, philosophy research kids, etc. To some degree, we're selecting for people with personal-fun functions that match the shape of the problems we're trying to solve (where what we'd want them to do is pretty aligned with their fun)

I think your point applies to cause selection, "intervention strategy", or decisions like "moving to Berkeley". I'm confused more generally

My bargain with the EA machine

I'm confused about how to square this with specific counterexamples. Take theoretical alignment work: P(important safety progress) probably scales with time invested, but not 100x by doubling your work hours. Any explanations here?

Idk if this is because uncertainty/probabilistic stuff muddles the log picture. E.g. we really don't know where the hits are, so many things are 'decent shots'. Maybe after we know the outcomes, the outlier-good things would look quite bad on the personal-liking front. But that doesn't sound exactly correct either
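As a toy illustration of the log picture above (my own sketch with made-up numbers, not anything from the post): if progress scales roughly like log(hours), then doubling your hours buys far less than 2x progress, let alone 100x.

```python
import math

def toy_progress(hours):
    # Toy assumption: progress grows like log(1 + hours invested).
    # This is purely illustrative, not a claim about real alignment work.
    return math.log1p(hours)

# Doubling weekly hours from 40 to 80 under this model:
ratio = toy_progress(80) / toy_progress(40)
print(round(ratio, 2))  # ~1.18, i.e. ~18% more progress, not 2x or 100x
```

Under any concave returns curve like this, the big multipliers would have to come from somewhere other than raw hours (e.g. which problem you pick), which is roughly the tension the comment is pointing at.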

FTX/CEA - show us your numbers!

Curious if you disagree with Jessica's key claim, which is "McKinsey << EA for impact"? I agree Jessica is overstating the case for "McKinsey <= 0", but it seems like the best case for McKinsey is still order(s) of magnitude less impact than EA.

Subpoints:

  • Current market incentives don't address large risk-externalities well, or appropriately weight the well-being of very poor people, animals, or the entire future.
  • McKinsey for earning-to-learn/give could theoretically be justified, but that doesn't contradict Jessica's point about spending money to get EAs
  • Most students expect a justification when any charity spends significant amounts of money on movement building, & competing with McKinsey reads favorably

Agree we should usually avoid saying poorly-justified things when it's not a necessary feature of the argument, as it could turn off smart people who would otherwise agree.

How about we don't all get COVID in London?

There were tons of cases from EAGx Boston (an area with lower covid case counts). I'm one of them. Idk exact numbers but >100 if I extrapolate from my EA friends.

Not sure whether this is good or bad tho, as IFR is a lot lower now. Presumably lower long covid too, but hard to say

Some thoughts on vegetarianism and veganism

An argument against that doesn't seem directly considered here: veganism might turn some high-potential people off without compensatory benefits, and very high base rates of non-veganism (~99% of western people are non-vegan IIRC) means this may matter even on relatively marginal effects.

Obviously many things can be mitigated significantly by being kind/accommodating (though at some level there's a little remaining implied "you are doing bad"). But even with that accommodation, a few things remain. E.g.

  • People can feel vaguely outgroupy because most core EAs in many groups are vegan, & most new people will feel slightly awkward about that, which affects likely comfort & future involvement in EA spaces
  • On the margin, promising people may not repeatedly come to events that would expose them to EA ideas because they don't like the food (empirically this was a fairly common complaint at newbie events at my university). E.g. it may not be filling if you don't like tofu variants, which a significant fraction of the population doesn't
  • Probably more things. Diet & dinners are fairly central to people's social lives, so I'd expect other effects too.

And to be clear, there are plausible compensatory benefits that you highlight. Though they're not direct effects, so I wonder if they could be gotten in other ways without the possible downsides

nonn's Shortform

Still wondering why I never see moral circle expansion advocates make the argument I made here

That argument seems to avoid the suffering-focused problem that moral circle expansion doesn't address, or might even worsen, the worst suffering scenarios for the future (e.g. threats in multipolar futures). Namely, the argument I linked says that despite potentially increasing suffering risk, it also increases the value of good futures enough to be worth it

TBC, I don't hold this view, because I believe we need a solid "great reflection" to achieve the best futures anyway, and such a reflection is extremely likely to produce the relevant moral circle expansion

JP's Shortform

Yeah I agree that's pretty plausible.  That's what I was trying to make an allowance for with "I'd also distinguish vacations from...", but worth mentioning more explicitly.

JP's Shortform

For the sake of argument, I'm suspicious of some of the galaxy takes.

Excellent prioritization and execution on the most important parts. If you try to do either of those while tired, you can really fuck it up and lose most of the value

I think relatively few people advocate working to the point of sacrificing sleep; prominent hard-work advocate (& kinda jerk) Rabois strongly pushes for sleeping enough & getting enough exercise.
Beyond that, it's not obvious that working less hard results in better prioritization or execution. A naive look at the intellectual world might suggest the opposite afaict, but selection effects make this hard. I think having spent more time trying hard to prioritize, or trying to learn how to do prioritization/execution well, is more likely to work. I'd count "reading/training up on how to do good prioritization" as work

Fresh perspective, which can turn thinking about something all the time into a liability

Agree re: the value of fresh perspective, but idk if the evidence actually supports that working less hard results in fresh perspective. It's entirely plausible to me that what is actually needed is explicit time to take a step back - e.g. Richard Hamming Fridays - to reorient your perspective. (Also, imo good sleep + exercise functions as a better "fresh perspective" than most daily versions of "working less hard", like chilling at home)
TBH, I wonder if working on very different projects to reset your assumptions about the previous one, or reading books/histories of other important projects, etc., works better as a way of gaining fresh perspective, because it actually forces you into a different frame of mind. I'd also distinguish vacations from "only working 9-5", which is routine enough that idk if it'd produce a particularly fresh perspective.

Real obsession, which means you can’t force yourself to do it

Real obsession definitely seems great, but absent that I still think the above points apply. For most prominent people, I think they aren't obsessed with ~most of the work they're doing (it's too widely varied), but they are obsessed with making the project happen. E.g. Elon says he'd prefer to be an engineer, but has to do all this business stuff to make the project happen.
Also idk how real obsession develops, but it seems more likely to result from stuffing your brain full of stuff related to the project & emptying it of unrelated stuff, especially entertainment, than from relaxing.

Of course, I don't follow my own advice. But that's mostly because I'm weak-willed or selfish, not because I don't believe working more would be more optimal

Open Philanthropy is seeking proposals for outreach projects

Minor suggestion: those forms should send a confirmation after you submit, or give the option "would you like to receive a copy of your responses"

Otherwise, it may be hard to confirm whether a submission went through, or the details of what you submitted

Cause Prioritization in Light of Inspirational Disasters

I think that depends a lot on framing. E.g. if this is just a prediction of future events, it sounds less objectionable to other moral systems imo b/c it's not making any moral claims (perhaps some by implication, as this forum leans utilitarian)

In the case of making predictions, I'd strongly bias toward saying things I think are true even if they end up being inconvenient, if they are action-relevant (most controversial topics are not action-relevant, so I think people should avoid them). But this might be important for how to weigh different risks against each other! Perhaps I'm decoupling too much tho

Aside: I don't necessarily think the post's claim is true, because I think certain other things are made worse by events like this, which contributes to long-run x-risk. I'm very uncertain tho, so it seems worth thinking about, though maybe not on a semi-public forum
