MalcolmOcean

26 karma · Joined Feb 2015

Comments (10)

Ah, I realized I actually wanted to quote this paragraph (though the one I quoted above is also spot on):

 It made me angry. I felt like I’d drunk the kool-aid of some pervasive cult, one that had twisted a beautiful human desire into an internal coercion, like one for a child you’re trying to get to do chores while you’re away from home.

I felt similarly angry when I realized that my well-meaning friends had installed a shard of panic in my body that made "I'm safe" feel like it would always be false until we had a positive singularity. I had to reclaim that, in several phases. And on reflection I had to digest some sense of social obligation there, like fear of people judging me, whether EAs or other obligation-driven activists. And maybe they do or will! But I'm not compromising on catching my breath.

Appreciating this. It's helping me see that part of why I didn't fall deeper into EA than I did is that I already had a worldview that treated obligations as confused in much the way you describe... and so I saw EA as "to the extent that you're going to care about the world, do so effectively," and also "have you noticed these particular low-hanging fruit?", and also just "here's a bunch of people who care about doing good things and are interesting." These obligations are indeed a kind of confusion, and I love how you put it here:

The thing underlying my moral “obligation” came from me, my own mind. This underlying thing was actually a type of desire. It turned out that I wanted to help suffering people. I wanted to be in service of a beautiful world. Hm.

I did get infected with a bit of panic about x-risk stuff though, and this caused me to flail around a bunch and try to force reality to move faster than it could. I think Val describes the structure of my panic quite aptly in "Here's the exit." It wasn't a sense of obligation, but it was a sense of "there is a danger; feeling safe is a lie," and this was keeping me from feeling safe in each moment even in the ways in which I WAS safe in those moments (even if a nuke were to drop moments later, or I were to have an aneurysm or whatever). It was an IS, not an OUGHT, but it nonetheless generated an urgent sense of "the world's on fire and it's up to me to fix that." Yet no AI timeline, however short, benefits from adrenaline: even if you needed to pull an all-nighter to stop something happening tomorrow, calm steady focus would beat neurotic energy.

It seems to me that the obligation structure and the panic structure form two pieces of this totalizing memeplex that causes people to have trouble creatively finding good win-wins between all the things that they want. Both of them have an avoidant quality, and awayness motivation is WAY worse at steering than towardsness motivation.

Are there other elements? That seems worth mapping out!

There's also the EA Workspace, a virtual pomodoro coworking room on Complice. It hasn't been that active lately, but maybe this new influx of people will reinvigorate it.

(I'm the creator of Complice, and also an effective altruist! I found this EA Forum post after seeing a bunch of new people sign up for Complice citing it as the source.)

Additions:

  • space travel could include more details, like lowering launch costs, and stuff like what Deep Space Industries is doing with asteroid mining (in some ways making money from mining asteroids is kind of an instrumental goal for them, with the terminal goal being to get humans living in space full-time as opposed to just being on the ISS briefly)
  • preventing large-scale violence could include some component about shifting cultural zeitgeists to be more open and collaborative. This is hella hard, but would be very valuable to the extent that it can be done
  • I would add something like "collecting warning signs" under disaster prediction. For instance, what AI Impacts is doing with trying to come up with a bunch of concrete tasks that AIs currently can't beat humans at, which we could place on a timeline and use to assess the rate of AI progress. There might be a better name than "collecting warning signs" though.

Props for doing this. I was recently reflecting that it would be great to have a bunch of the LW Sequences or other works describing AI value-alignment problems translated into Chinese. If anyone who knows Chinese sees this and it seems like their kind of thing, I'd say go for it!

Hmm... I'll gesture back at the "Effective Giving vs Effective Altruism" thing and say that while EAs in the sense of "identify as part of the EA movement, comment on the EA Forum, and hang out with other EAs" might be under 35, we might be able to find lots of candidate Effective Givers in a totally different demographic.

I like the ideas here. I see a lot of potential value in having a core group of EAs who are focused on the movement itself and on cause prioritization, crucial considerations, etc., and then also trying to shift the mindset of the wider population of people-who-donate-to-things so that they tend to look at GiveWell's recommendations and so on, without trying to get those people to join the movement as a movement or whatever.

I will second the sentiment that this post seems super overmentiony of Intentional Insights. For a non-profit, it feels awfully self-interested. I'm not sure what I'd recommend instead exactly, but maybe if you're following the "do things + tell people" approach, shift your focus a bit more towards doing things.

"Furthermore, psychologically, earning-to-give seems to me to be a better fit for the average EA than direct work. Many EAs are already working in a company and can simply move to donate more of their salary or focus on increasing their salary, rather than quit their job and start a new one."

I want to expand on what's in the second sentence here. There are a substantial number of EAs already working at a job they like that makes them enough money to reasonably donate a lot of it. But most of them are probably over 25, and that's only about half of EAs: according to this survey, the median age is 25.

So since much of 80k's audience is probably still choosing their career from an earlier stage (still in university, fresh out and unemployed, or haven't chosen a major yet), it makes sense to me that 80k wouldn't emphasize earning to give for these people.

I'm also not sure the claim about the 15% funding the 85% quite holds. CFAR, for example, gets lots of donations but also gets money from people attending workshops. I don't know the details, but I'd expect object-level charities like AMF to have fairly wide appeal and therefore to get a decent amount of money from people who don't identify as EAs. I'm not actually confident on that point and would welcome evidence in any direction about it.

:D

I'll add that since this will be hosted in the context of Complice, which is a larger app, it may not make sense for me to add too many EA-specific features to it. So in the short term it'll probably look like linking to and/or pulling data from other sites like skillshare.im.

But yeah, I do want to make a really great place for EAs to hang out online and get to know each other while also being productive! So if there are ways for me to do that, then I'm excited to work on them :)
