G Gordon Worley III

Director of Research at PAISRI

Comments

What are your main reservations about identifying as an effective altruist?

I'll note that I used to have some reservations but no longer do, so I'll answer about why I previously had them.

Until EA got interested in what we now call longtermism, it didn't seem obvious to me that EA was for me. My read was that EA was about near concerns like global poverty and animal welfare, not far concerns like x-risk and aging. So it seemed natural to me that I was on the outside of EA looking in, because my primary cause area (though note that I wouldn't have thought of it that way at the time) wasn't clearly under the EA umbrella.

Obviously this has changed now, but hopefully it's useful for historical purposes. There may also be folks who still feel this way about other causes, like effective governance, that are, from my perspective, on the fringes of what EA is focused on.

Some quick notes on "effective altruism"

"Effective Altruism" sounds self-congratulatory and arrogant to some people:

Your comments in this section suggest to me there might be something going on where EA is only appealing within some particular social context. Maybe it's appealing within WEIRD culture, and the further you get from peak WEIRD, the more objections there are. Alternatively, maybe there's something specific to northern European or even just Anglo culture that makes it work there and not as well elsewhere, translation issues aside.

Is Democracy a Fad?

Running with the valley metaphor, perhaps the 1990s were when we reached the most verdant floor of the valley. Assuming the model is correct, it remains unclear whether we're still there or have started to climb out and away from it.

Mentorship, Management, and Mysterious Old Wizards

The people I know of who are best at mentorship are quite busy. As far as I can tell, they are already putting effort into mentoring and managing people. Mentorship and management also both directly trade off against other high value work they could be doing.

There are people with more free time, but those people are also less obviously qualified to mentor people. You can (and probably should) have people across the EA landscape mentoring each other. But, you need to be realistic about how valuable this is, and how much it enables EA to scale.

Slight pushback here: I've seen plenty of folks who make good mentors but who wouldn't be doing a lot of mentoring if not for systems in place to make that happen (they stop doing it once they're no longer within whatever system was supporting their mentoring). This makes me think there's a large supply of good mentors who just aren't connected in ways that help them match with people to mentor.

This suggests that much of the difficulty with having enough mentorship is that the best mentors need to be good not only at mentoring but also at starting the mentorship relationship. It seems, though, that plenty of people can be good mentors if someone does the matching part for them and creates the context between them and the mentees.

Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum?

On a related but different note, I wish there were a way to combine conversations on cross-posts between the EA Forum and LW. I really like the way the AI Alignment Forum works with LW and wish the EA Forum worked the same way.

The Folly of "EAs Should"

I often make an adjacent point to folks, which is something like:

EA is not all one thing, just like the economy is not all one thing. Just as civilization as we know it doesn't work unless we have people willing to do different things for different reasons, EA depends on different folks doing different things for different reasons to give us a rounded-out basket of altruistic "goods".

Like, if everyone thought saltine crackers were the best food and everyone competed to make the best saltines, we'd ultimately all be pretty disappointed to have a mountain of amazing saltine crackers and literally nothing else. So even in a world where saltines really are the best food and their production generates the most benefit, it makes sense to instrumentally produce other things so we can enjoy our saltines in full.

I think the same is true of EA. I care a lot about AI x-risk and it's what I focus on, but that doesn't mean I think everyone should do the same. In fact, if they did, I'm not sure it would be so good, because then maybe we'd stop paying attention to other causes that, left unaddressed, would end up making our attempts to address AI risk moot. I'm always very glad to see folks working on things, even things I don't personally think are worthwhile, both because of uncertainty about what is best and because there are multiple dimensions along which it seems we can optimize (and I'd be happy if we did).

evelynciara's Shortform

I think it's worth saying that the context of "maximize paperclips" is not one where a person literally says the words "maximize paperclips" or something similar. It's instead an intuitive stand-in for building an AI capable of superhuman levels of optimization, such that if you set it the task of creating an unbounded number of paperclips, say by specifying a reward function, it will do things you as a human wouldn't do to maximize paperclips, because humans have competing concerns and will stop when, say, they'd have to kill themselves or their loved ones to make more paperclips.

The objection seems predicated on interpretation of human language, which is beside the primary point. That is, you could address all the human language interpretation issues and we'd still have an alignment problem; it just might not look literally like building a paperclip maximizer when someone asks the AI to make a lot of paperclips.
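To make the "unbounded objective vs. competing concerns" point concrete, here's a minimal, purely hypothetical sketch in Python (the names and numbers are my own made-up toy example, not anything from the original discussion):

    # Purely illustrative toy example: the point is what an unbounded
    # objective leaves out, not how any real system works.

    def paperclip_reward(clips_made: int) -> float:
        # Unbounded objective: more clips is always strictly better,
        # and nothing else in the world is assigned any value.
        return float(clips_made)

    def human_like_value(clips_made: int, harm_done: float) -> float:
        # Humans have competing concerns: past some level of harm, no
        # number of clips is worth it, so that plan is simply refused.
        if harm_done > 1.0:
            return float("-inf")
        return float(clips_made) - 100.0 * harm_done

    # Candidate plans as (clips made, harm done by the plan).
    plans = [(10, 0.0), (10_000, 0.1), (10**9, 50.0)]

    best_for_maximizer = max(plans, key=lambda p: paperclip_reward(p[0]))
    best_for_human = max(plans, key=lambda p: human_like_value(*p))

    print(best_for_maximizer)  # (1000000000, 50.0): nothing in the objective says to stop
    print(best_for_human)      # (10000, 0.1): the extreme plan is ruled out

The contrast is only in the objective: the same search over plans picks the most extreme option when nothing in the objective penalizes it, and stops short when competing concerns are represented.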

There's a lot to unpack in that tweet. I think something is going on like:

  • fighting about who is really the most virtuous
  • being upset people aren't more focused on the things you think are important
  • being upset that people claim status by doing things you can't or won't do
  • being jealous people are doing good doing things you aren't/can't/won't do
  • virtue signaling
  • righteous indignation
  • spillover of culture war stuff going on in SF

None of it looks like a real criticism of EA, but rather of lots of other things EA just happens to be adjacent to.

That doesn't mean it doesn't have to be addressed or isn't an issue, but I think it's also worth keeping these kinds of criticisms in context.

I find others' answers about the actual low-resolution version of EA they see in the wild fascinating.

I go with the classic, and if people ask, I give them a three-word answer: "doing good better".

If they ask for more, it's something like: "People want to do good in the world, and some good-doing efforts produce better outcomes than others. EA is about figuring out how to get the best outcomes (or the largest positive impact) for time/money/effort, relative to what a person thinks is important."
