
Theo Hawking

675 karma · Joined

Bio

Pseudonymous pseudo-EA.

If you're reading this and thinking about messaging me, you should, and if that stops being true I will edit this.

Comments (6)

I want to echo the other replies here, and thank you for how much you've already engaged on this post, although I can see why you want to stop now.

I did in fact round off what you were saying yesterday as being about PR risk, and commented as such; you replied to correct that, and I found it really helpful - I'm guessing a lot of others did too. I suppose if I had already understood, I wouldn't have commented.

 I'm not detailing specific decisions for the same reason I want to invest in fewer focus areas: additional information is used as additional attack surface area. The attitude in EA communities is "give an inch, fight a mile". So I'll choose to be less legible instead

At the risk of overstepping or stating the obvious:

It seems to me like there's been less legibility lately, and I think that means that a lot more confusion brews under the surface. So more stuff boils up when there is actually an outlet.

That's definitely not your responsibility, and it's particularly awkward if you end up taking the brunt of it by actually stepping forward to engage. But from my perspective, you engaging here has been good in most regards, with the notable exception that it might have left you more wary of engaging in future.

Appreciate you engaging thoughtfully with these questions!

I'm slightly confused about this specific point - it seems like you're saying that work on digital minds (for example) might impose PR costs on the whole movement, and that you hope another funder might have the capacity to fund this while also paying close attention to public perception.

But my guess is that other funders might actually be less cautious about the PR of the whole movement, and less invested in comms that avoid blowing back on (for example) AI safety.

Like, personally I am in favour of funder diversity, but it seems like one of the main things you lose as things get more decentralised is the ability to limit the support that goes to things that might blow back on the movement. To my taste at least, one of the big costs of FTX was the rapid flow of funding into things that looked (and imo were) pretty bad, in a way that has indirectly made EA and OP look bad. Similarly, even if OP doesn't fund things like Lighthaven for maybe-optics-ish reasons, it still gets described in news articles as an EA venue.

Basically, I think better PR seems good, and more funding diversity seems good, but I don't expect the movement is actually going to get both?

(I do buy that the PR cost will be more diffused across funders though, and that seems good, and in particular I can see a case for preserving GV as something that both is and seems reasonable and sane, I just don't expect this to be true of the whole movement)

My understanding is that some philosophers do actually think 'consequentialism' should only refer to agent-neutral theories. I agree it's confusing - I couldn't think of a better way to phrase it.

My guess would be yes. I too would really like to see data on this, although I don't know how I'd even start on getting it.

I imagine it would also be fairly worthwhile just to quantify how much is being lost by people burning out and how hard it is to intervene - maybe we could do better, and maybe it would be worth it.

I certainly agree that outside EA, consequentialism just means the moral philosophy. But inside I feel like I keep seeing people use it to mean this process of decision-making, enough that I want to plant this flag.

I agree that the criterion of rightness / decision procedure distinction roughly maps to what I'm pointing at, but I think it's important to note that Act Consequentialism doesn't actually give a full decision procedure. It doesn't come with free answers to things like 'how long should you spend on making a decision' or 'what kinds of decisions should you be doing this for', nor answers to questions like 'how many layers of meta should you go up'. And I am concerned that in the absence of clear answers to these questions, people will often naively opt for bad answers.

I get the impression many orgs set up to support EA groups have some version of this. Here are some I found on the internet:

Global Challenges Project has a "ready-to-go EA intro talk transcript, which you can use to run your own intro talk" here: https://handbook.globalchallengesproject.org/packaged-programs/intro-talks

EA Groups has "slides and a suggested script for an EA talk" here: https://resources.eagroups.org/events-program-ideas/single-day-events/introductory-presentations

To be fair, in both cases there is also some encouragement to adapt the talks, although I am not persuaded that this will actually happen much, and even when it does, it might still be obvious that you're seeing a variant on a prepared script.