Ardenlk


You might already be planning on doing this, but it seems like you'd increase the chance of getting a winning entry if you advertised this competition in a lot of non-EA spaces - I guess especially technical AI spaces, e.g. labs and universities. Maybe also try advertising outside the US/UK. Given the size of the prize, it might be easy to get people to pass the advertisement on among their groups. (Maybe there's a worry about getting flak somehow for this, though. It also adds overhead from needing to read more entries, though it sounds like you have some systems set up for that, which is great.)

In the same vein, I think trying to lower the barriers to entry having to do with EA culture could be useful - e.g. +1 to someone else here suggesting allowing posting in places besides EAF/LW/AF, but also maybe trying to have some consulting researchers/judges who find it easier/more natural to engage with non-analytic-philosophy-style arguments.

This isn't the main point of this post, but it's a common criticism of EA, so I feel like it might be useful to voice my disagreement with this part:

It's also the case that individual maximization is rarely optimal for groups. Capitalism harnesses maximization to provide benefits for everyone, but when it works, that leads to diversity in specializations, not crowding into the single best thing. To the extent that people ask "how can I be maximally impactful," I think they are asking the wrong question - they are part of a larger group, and part of the world as a whole, and they can't view their impact as independent from that reality.

I think viewing yourself as an individual is not in tension with viewing yourself as part of a whole. Your individual actions constitute a part of the whole's actions, and they can influence other parts of that whole. If everyone in the whole did maximize the impact of their actions, the whole's total impact would also be maximized.

diversity in specializations, not crowding into the single best thing.

100% agree. But again I don't think that's in tension with thinking in terms of where one as an individual can do the most good - it's just that for different people, that's different places.

Hi Elskivi,

Arden from 80,000 Hours here.

I think I’m part of that significant minority but cannot really find any further help or enough material regarding those topics from an EA angle, for example safeguarding democracy, risks of stable totalitarianism, risks from malevolent actors, global public goods etc.

Unfortunately there aren’t many materials on those issues -- they are mostly even more neglected (at least from a longtermist perspective) than issues like AI safety.

The resources I do know about are linked from the mini profiles on the page -- e.g. https://forum.effectivealtruism.org/posts/aSzxoj7irC5jNHceB/how-likely-is-world-war-iii for great power conflict, and https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience for artificial sentience. I think there should be something for each of the listed problems, and the readings often have ‘further resources’ of their own.

We’re also working on filling out the mini profiles more, but the truth is that not much work has been done on these areas from a longtermist or EA perspective more generally (that I know of, at least), so I’d guess there won’t be a ton more resources like you’re looking for soon.

So getting started on issues like these probably means doing research to figure out which interventions seem best - e.g. by looking into what people outside EA are doing on them and where the most promising gaps seem to be - and then trying to fill those gaps, either by working at an existing org that works on the issue, doing further research (e.g. as an academic or in a think tank), or starting a project of your own (it’ll depend a lot on the issue). It may take considerable entrepreneurial spirit (and willingness to try things that don’t end up working) to make headway on some of these issues.

I strongly agree with some parts of this post, in particular:

  • I think integrity is extremely important, and I like that this post reinforces that.
  • I think it’s a great point that EA could easily have become very bitterly divided, and appreciating that we haven’t - as well as thinking about why (despite our various different beliefs) - seems like a great exercise. It does seem like we should try to maintain those features.

On the other hand, I disagree with some of it -- and thought I'd push back especially given that there isn't much pushback in the comments here:

I think it’s a bad idea to embrace the core ideas of EA without limits or reservations; we as EAs need to constantly inject pluralism and moderation. That’s a deep challenge for a community to have - a constant current that we need to swim against.

I think this is misleading in that I’d guess the strongest current we face is toward greater moderation and pluralism, rather than radicalism. As a community and as individuals, some sources of pressure in a ‘moderation’ direction include:

  1. As individuals, the desire to be liked by and get along with others, including people inside and outside of EA

  2. As individuals that have been raised in a mainstream ethical environment (most of us), a natural pluralism and strong attraction to common sense morality

  3. The desire to live a normal life full of the normal recreational, familial, and cultural stuff

  4. As a community, wanting to seem less weird to the rest of the world in order to be able to attract and/or work with people who are (currently) unfamiliar with the EA community.

  5. Implicit and explicit pressure from one another against weirdness so that we don’t embarrass one another/hurt EA’s reputation

  6. Fear of being badly wrong in a way that feels less excusable because it’s not the case that everyone else is also badly wrong in the same way

  7. Whatever else is involved in the apparent phenomenon where, as a community gets bigger, it often becomes less unique

We do face some sources of pressure away from pluralism and moderation, but they seem fewer and weaker to me:

  1. The desire to seem hardcore that you mentioned

  2. Something about a desire for interestingness/feeling interesting/specialness (possible overlap with the above)

  3. Selection effects -- EA tends to attract people who are really into consistency and following arguments wherever they lead (though I'd guess this is getting weaker over time because of the above effects).

  4. Maybe other things?

I do agree that we should try hard to guard against bad maximising - but I think we also need to make sure we remember what is really important about maximising in the face of pressure not to.

Also, moral and empirical uncertainty strongly favour moderation and pluralism -- so I agree that it’s good to have reservations about EA ideas (though primarily in the same way it’s good to have reservations about a lot of ideas). But I don’t want to think of moderation and pluralism as separate from or in tension with the core ideas of EA - I think it would be better to see them as an important part of EA’s ideas.


Somewhat speculating: I also wonder if the two problems you cite at the top are actually sort of a problem and a solution:

If you’re maximizing X, you’re asking for trouble by default. You risk breaking/downplaying/shortchanging lots of things that aren’t X, which may be important in ways you’re not seeing. Maximizing X conceptually means putting everything else aside for X - a terrible idea unless you’re really sure you have the right X. (This idea vaguely echoes some concerns about AI alignment, e.g., powerfully maximizing not-exactly-the-right-thing is something of a worst-case event.)

EA is about maximizing how much good we do. What does that mean? None of us really knows. EA is about maximizing a property of the world that we’re conceptually confused about, can’t reliably define or measure, and have massive disagreements about even within EA. By default, that seems like a recipe for trouble.

Maybe EA is avoiding the dangers of maximisation (insofar as we are) exactly because we are trying to maximise something we’re confused about. Since we’re confused about what ‘the good’ is, we’re constantly hedging our bets; since we’re unsure how to achieve the good, we go for robust approaches, try a variety of things, and try not to alienate people who can help us figure out what the good is and how to make it happen. This uncertainty greatly reduces the risks of maximisation. Analogy: Stuart Russell’s strategy to make AI safe by making it unsure about its goals.

(Responding on Benjamin's behalf, as he's away right now):

Agree that it's hard to know what works in AI safety + it's easy to do things that make things worse rather than better. My personal view is that we should expect the field of AI safety to be good overall, because people trying to optimise for a thing will, in expectation, move things in its direction even if they sometimes move away from it by mistake. It seems unlikely that the best thing to do is nothing, given that AI capabilities are racing forward regardless.

I do think that the difficulty of telling what will work is a strike against pursuing a career in this area, because it makes the problem less tractable, but it doesn't seem decisive to me.

Agree that a section on this could be good!

No problem of course - in-depth is great, and thanks for the offer to chat! Agree this is important to get right. I'll pass this on to the author and he'll get back in touch if it seems helpful to talk through : )

Hey! Arden here, from 80,000 Hours. Thanks for this in-depth feedback! The author of our climate change profile is away right now, so we'll take a look at this in a couple of weeks.

Just an appreciation comment: I think this post was very well written and handled tricky questions well, especially the Q&A section.

And this seems great to highlight:

We want to encourage a sense of criticism being part of the joint enterprise to figure out the right answers to important questions.

Why would the community average dropping mean we go bust? I'd think our success is more related to the community total. Yes, there are some costs to having more people around who don't know as much, but it's a further claim that these would outweigh the benefits.

I found this post really useful (and persuasive), thank you!

One thing I feel unconvinced about:

"Another red flag is the general attitude of persuading rather than explaining."

For what it's worth, I'm not sure naturally curious/thoughtful/critical people are particularly more put off by someone trying to persuade them (well/by answering their objections/etc.) than by someone just explaining an idea, especially if the idea is a normative thesis. It's weird for someone to be like "just saying, the idea is that X could have horrific side effects and little upside because [argument]. Yes, I believe that's right. No need to adopt any beliefs or change your actions though!" That just makes them seem like they don't take their own beliefs seriously. I'd much rather have someone say "I want to persuade you that X is bad, because I think it's important people know that so they can avoid X. OK, here goes: [argument]."

If that's right, does it mean that maybe the issue is more "persuade better"? e.g. by actually having answers when people raise objections to the assumptions being made?

At the opening session [Alice] disputes some of the assumptions, and the facilitators thank her for raising the concerns, but don’t really address them. They then plough on, building on those assumptions. She is unimpressed.

Seems like the issue here is more being unpersuasive, rather than being too zealous or not focused enough on explaining.
