howdoyousay?

Comments

Leftism virtue cafe's Shortform

"70,000 hours back"; a monthly podcast interviewing someone who 'left EA' about what they think are some of EAs most pressing problems, and what somebody else should do about them.

howdoyousay?'s Shortform

Is it all a bit too convenient?

There's been lots of discussion about EA having so much money, particularly long-termist EA, and worries that this means we're losing the 'altruist' side of EA as people get more comfortable and work on more speculative cause areas. This post isn't about what's right or wrong, or what "we should do"; it's about reconciling the inner tension this creates.

Many of us now have very well-paid jobs in nice offices with perks like table tennis. And many people are working on things which yield no benefit to humans and animals in the near term but might in future; or the first-order effect of the job is growing the EA community, with only second- and third-order, speculative benefits to humans, animals, or future sentient beings. These jobs are often high status.

Though not in an EA org, I feel my job fits this bill as well. I sometimes get a bit pissed off with myself, feeling I've sold out, because it seems a bit too convenient that the most important thing I could do gets me high-profile speaking events, a nice salary, an impressive title, access to important people, etc. And the potential impact of my job, which is in AI regulation, is still largely speculative.

I feel long-termish, in that I aim to make the largest and most sustainable change towards all sentient minds being blissful, not suffering, and enjoying endless pain au raisin. But that doesn't mean ignoring humans and animals today. To blatantly misquote Peter Singer: the opportunity cost of not saving a drowning child today is still real, even if saving them means showing up five minutes late to work every day and compromising on the productivity you believe is so important because you have a 1/10^7* chance of saving 10^700** children.

For me to believe I'm living my values, I think I still need to try to make an impact today. I try to donate a good chunk to global health and wellbeing initiatives, lean harder into animal rights, and (am now starting to) support people in my very deprived local community in London.

So two questions:

Do other long-termish-leaning people feel this same tension?

And if so, how do you reconcile it within yourself?

*completely glib choice of numbers

**exponentially glibber

Responsible/fair AI vs. beneficial/safe AI?

Your question seems to be about both content and interpersonal relationships / dynamics. I think it's very helpful to split out the differences between the groups along those lines.

In terms of substantive content and focus, I think the three other responders outline the differences very well, particularly on attitudes towards AGI timelines and the types of models each group is concerned about.

In terms of the interpersonal dynamics, my personal take is that we're seeing the clash between left / social-justice and EA / long-termist camps play out more strongly in this content area than in most others, though to date I haven't seen any animus from the EA / long-termist side. In terms of explaining the clash, I guess it depends how detailed you want to get.

You could be minimalist and sum it up as: one or both sides hold stereotypical threat models of the other, and rather than investigating those models, attack based on them.

You could expand and explain why EA / long-termism evokes such a strong threat response in people from the left, especially marginalised communities and individuals who have been punished for putting forward ethical views - like Gebru herself.

I think the latter is important, but it requires lots of careful reflection and openness to their worldviews, which I think calls for a much longer piece. (And if anyone is interested in collaborating on this, I would be delighted!)

Responsible/fair AI vs. beneficial/safe AI?

To add to the other papers coming from the "AI safety / AGI" cluster calling for a synthesis of these views:

https://www.repository.cam.ac.uk/handle/1810/293033

https://arxiv.org/abs/2101.06110

Awards for the Future Fund’s Project Ideas Competition

I think taking this forward would be awesome, and I'm potentially interested in contributing. So consider this comment an earmark: I'll come speak with you and / or Rory about this at a later date :)

EA needs to understand its “failures” better

Thanks for writing this; I completely agree.

I'd love it if the EA community were able to have increasingly sophisticated, evidence-backed conversations about, e.g., mega-projects vs. prospecting for and / or investing more in low-hanging fruit.

It feels like this would help ground a lot more debates and decision-making within the community, especially around prioritising projects which might plausibly benefit the long-term future compared with projects we have stronger reasons to think will benefit people / animals today (albeit not an almost infinitely large number of people / animals).

But also, you know, a better understanding of what seems to work is valuable in and of itself!

EA will likely get more attention soon

Equally, there's an argument for thanking and replying to critical pieces against the EA community which honestly engage with the subject matter. This (now old) post making criticisms of long-termism is a good example: https://medium.com/curious/against-strong-longtermism-a-response-to-greaves-and-macaskill-cb4bb9681982

I'm sure / really hope Will's new book engages with the points made here. If so, it will provide a rebuttal for those who come across hit-pieces and take them at face value, or those who promulgate hit-pieces because of their own ideological drives.

EA will likely get more attention soon

Thanks for this thoughtful challenge, and in particular for flagging what future provocations could look like so we can prepare ourselves and let our more reflective selves come to the fore, rather than our reactive, child selves.

In fact, I think I'll reflect on this list for a long time to ensure I continue not to respond on Twitter!

My experience with imposter syndrome — and how to (partly) overcome it

Agreed, and I was going to single out that quote for the same reason. 

I think that sentence is really the crux of imposter syndrome. I think it's also, unfortunately, somewhat uniquely triggered by how EA philosophy is a maximising philosophy, which necessitates comparisons between people or 'talent' as well as cause areas. 

As well as individual actions, I think it's good for us to think more about community actions around this, as any intervention that targets the individual without changing the environment rarely makes the dent needed.
