Ardenlk

Comments

Bad Omens in Current Community Building

I found this post really useful (and persuasive), thank you!

One thing I feel unconvinced about:

"Another red flag is the general attitude of persuading rather than explaining."

For what it's worth, I'm not sure naturally curious/thoughtful/critical people are particularly more put off by someone trying to persuade them (well/by answering their objections/etc.) than by someone explaining an idea, especially if the idea is a normative thesis. It's weird for someone to be like "just saying, the idea is that X could have horrific side effects and little upside because [argument]. Yes, I believe that's right. No need to adopt any beliefs or change your actions though!" That just makes them seem like they don't take their own beliefs seriously. I'd much rather have someone say "I want to persuade you that X is bad, because I think it's important people know that so they can avoid X. OK, here goes: [argument]."

If that's right, does it mean that maybe the issue is more "persuade better"? E.g. by actually having answers when people raise objections to the assumptions being made?

At the opening session [Alice] disputes some of the assumptions, and the facilitators thank her for raising the concerns, but don’t really address them. They then plough on, building on those assumptions. She is unimpressed.

Seems like the issue here is more being unpersuasive, rather than too zealous or not focused enough on explaining.

How I torched my biggest career opportunity so far

This post is great - for the reasons AndreaM wrote, and additionally for reflecting on the specific things you did right to put yourself in a position to have the opportunity (which have hopefully also put you in a better position than you would otherwise be, even ex post).

We need more stories like this, as well as stories of people going for the highest-EV thing, which still seems highest-EV to them in hindsight, when it didn't work out. : )

Effective altruism’s odd attitude to mental health

Sorry, I should have been more mindful of how the brevity of my comment might come off. I didn't mean to suggest the question doesn't come down to what's most cost-effective, which I agree it does. I was trying to point to the explanation for my differing attitudes to the priority of mental health when thinking about the cause area of making the EA community more effective vs. the cause area of present people's wellbeing more generally, which I'd guess is also the primary explanation for other people's differing attitudes. That explanation is: debilitating and easily treatable physical illnesses are not that common among EAs, which is why they aren't a high priority for helping the EA community be more effective.

Effective altruism’s odd attitude to mental health

If malaria and other easily preventable/treatable debilitating physical issues were common among EAs, I'd guess that should be a much higher priority to address than poor mental health among EAs.

Pre-announcing a contest for critiques and red teaming

Makes sense! Yeah, as long as this is explicit in the final announcement it seems fine. I also think "what's the best argument against X (and then, separately, do you buy it?)" could be a good format.

Pre-announcing a contest for critiques and red teaming

Cool! Glad to see this happening.

One issue I could imagine is around this criterion (which also seems like the central one!):

Critical — the piece takes a critical or questioning stance towards some aspect of EA, theory or practice

Will the author need to end up disagreeing with the piece of theory or practice for the piece to qualify? If so, you're incentivizing people to end up more negative than they might if they were just trying to figure out the truth about something whose truth/prudence they were initially unsure of.

E.g. if I start out by thinking "I'm not sure that neglectedness should be a big consideration in EA, I think I'll write a post about it" and then I think/learn more about it in the course of writing my post (which seems common since people often learn by writing), I'll be incentivized to end up at "yep we should get rid of it" vs. "actually it does seem important after all".

Maybe you want that effect (maybe that's what it means to red team?) but it seems worth being explicit about so that people know how to interpret people's conclusions!

Grantmaking is more like a skill than a path

Arden here from 80,000 Hours - just an update: Ollie showed me this draft before posting, and I thought he was right about a bunch of it, so we adjusted the write-up to put more emphasis on the ideal being to become skilled in an area before becoming a grantmaker in it, and added his 4-bullet-point list to our section on assessing your fit.

We didn't want to move away from calling it a "path", because we use that term to describe jobs/sets of jobs that one can do for many years and that we think could be among the highest-impact phases of one's career, which this seems to fit.

In current EA, scalability matters

FWIW, the way I conceptualise this situation is that cost-effectiveness is still king, but spending a dollar is a lot less expensive in terms of 'true cost' than it used to be, because it implies the inability to fund another thing to a much lesser extent than before (and that forgone alternative is the real cost of spending money).

This in turn means that spending time/labour to find new opportunities is relatively more expensive than it used to be compared to the true cost of spending a dollar, which is why we want to take opportunities that have a much larger ratio of dollars spent to labour/time than we used to.

If an opportunity is not scalable, it has a lot of hidden labour/time costs: once you use up the opportunity, you have to find another one before you can keep having impact, and that search costs labour/time. Scalable opportunities don't have that, so they're cheaper in true cost, and therefore more cost-effective at the same level of effectiveness. (There's a toy version of this accounting sketched below.)
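Here's a minimal sketch of that accounting in Python. All the numbers and the specific cost terms are my own illustrative assumptions, not anything from the post; the point is just that folding replacement-search labour into 'true cost' makes a non-scalable opportunity look worse even at the same impact and dollar spend:

```python
# Toy model of "true cost" accounting for funding opportunities.
# All numbers and cost terms are illustrative assumptions, not from the post.

DOLLAR_TRUE_COST = 0.1  # opportunity cost of spending $1 when funding is abundant
HOUR_TRUE_COST = 100.0  # opportunity cost of an hour of labour/time

def cost_effectiveness(impact: float, dollars: float, search_hours: float) -> float:
    """Impact per unit of true cost, where true cost includes the labour
    needed to find a replacement opportunity once this one is used up."""
    true_cost = dollars * DOLLAR_TRUE_COST + search_hours * HOUR_TRUE_COST
    return impact / true_cost

# Same impact and same dollar cost; only the hidden search labour differs.
scalable = cost_effectiveness(impact=1000, dollars=10_000, search_hours=0)
non_scalable = cost_effectiveness(impact=1000, dollars=10_000, search_hours=20)

print(scalable)      # 1.0
print(non_scalable)  # ~0.33 -- same effectiveness, worse true-cost ratio
```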

I don't think I'm disagreeing with you -- but this feels like the conceptually cleaner way of thinking about it for my brain.

Disentangling "Improving Institutional Decision-Making"

Nice post : )

I mostly agree with your points, though I am a bit more optimistic than you seem to be about untargeted, value-neutral IIDM having a positive impact.

Your skepticism about this seems to be expressed here:

And yet, it seems possible that there are some institutions that cause an overwhelming amount of harm (e.g. the farming industry or some x-risk-increasing endeavors like gain-of-function research), and that the value-neutral version of IIDM fails to take that into account.

I think this is true, but it still seems like the aims of institutions are pro-social as a general matter -- x-risk and animal suffering in your examples are side effects, not means to the institutions' ends, which are 'increase biosecurity' and 'make money'. If improving decision-making helps orgs pursue their ends more efficiently, then we should expect them to have fewer bad side effects when they have better decision-making. Also, orgs' aims (e.g. "make money") will generally presuppose the firm's, and therefore humanity's, survival, so it still seems good to me as a general matter for orgs to be able to pursue their aims more effectively.

All Possible Views About Humanity's Future Are Wild

Am I right in thinking, Paul, that your argument here is very similar to Buck's in this post? https://forum.effectivealtruism.org/posts/j8afBEAa7Xb2R9AZN/thoughts-on-whether-we-re-living-at-the-most-influential.

Basically you're saying that if we already know things are pretty wild (in Buck's version: that we're early humans), it's a much less fishy step from there to very wild ('we're at HoH') than it would be if we didn't know things were pretty wild already.
