Akash

1786 · Joined Oct 2020

Bio

Longtermist movement-builder. How can we find and mentor talented people to reduce existential risk?

Interested in community-building, management, entrepreneurship, communication, and AI Alignment.

Formerly a PhD student in clinical psychology @ UPenn, college student at Harvard, and summer research fellow at the Happier Lives Institute.

Comments (64)

+1 on questioning/interrogating opinions, even opinions of people who are "influential leaders."

I claim people who are trying to use their careers in a valuable way should evaluate organizations/opportunities for themselves.

My hope is that readers don't come away with "here is the set of opinions I am supposed to believe" but rather "ah here is a set of opinions that help me understand how some EAs are thinking about the world." Thank you for making this distinction explicit.

Disagree that these are mostly characterizing the Berkeley community (#1 and #2 seem the most Berkeley-specific, though I think they're shaping EA culture/funding/strategy enough to be considered background claims; I think the rest are not Berkeley-specific).

I'd be excited about posts that argued "I think EAs are overestimating AI x-risk, and here are some aspects of EA culture/decision-making that might be contributing to this."

I'm less excited about posts that say "X thing going on in EA is bad," where X is a specific decision that EAs made [based on their estimate of AI x-risk]. (Unless the post is explicitly about AI x-risk estimates.)

Related: Is that your true rejection?

Thanks for writing this, Eli. I haven't read WWOTF and was hoping someone would produce an analysis like this (especially comparing The Precipice to WWOTF).

I've seen a lot of people posting enthusiastically about WWOTF (often before reading it) and some of the press that it has been getting (e.g., cover of TIME). I've felt conflicted about this.

On one hand, it's great that EA ideas have the opportunity to reach more people.

On the other hand, I had a feeling (mostly based on quotes from newspaper articles summarizing the book) that WWOTF doesn't feature AI safety and doesn't have a sense of "hey, a lot of people think that humanity only has a few more decades [or less] to live." 

I hope that EAs concerned about AIS champion resources that accurately reflect their sense of concern, feature AI safety more prominently, and capture the emotion/tone felt by many in the AIS community. (List of Lethalities is a good example here, though it has its own flaws and certainly isn't optimizing for widespread appeal in the same way that WWOTF seems to be).

Not yet, and I'm to blame. I've been focusing on a different project recently, which has demanded my full attention. 

Will plan to announce the winners (and make winning entries public, unless authors indicated otherwise) at some point this month.

+1. The heuristic doesn’t always work.

(Though for an intro talk I would probably just modify the heuristic to "is this the kind of intro talk that would've actually excited a younger version of me.")

Thanks for writing this, Emma! Upvoted :)

Here's one heuristic I heard at a retreat several months ago: "If you're ever running an event that you are not excited to be part of, something has gone wrong."

Obviously, it's just a heuristic, but I actually found it to be a pretty useful one. I think a lot of organizers spend time hosting events that feel more like "teaching" than "learning together or working on interesting unsolved problems together."

And my impression is that the groups that have fostered more of a "let's learn together and do things together" mentality have tended to have the most success.

This seems like a good time to amplify Ashley's We need alternatives to intro EA Fellowships, Trevor's University groups should do more retreats, Lenny's We Ran an AI Timelines Retreat, and Kuhan's Lessons from Running Stanford EA and SERI.

Thank you for writing this, Ben. I think the examples are helpful, and I plan to read more about several of them.

With that in mind, I'm confused about how to interpret your post and how much to update on Eliezer. Specifically, I find it pretty hard to assess how much I should update (if at all) given the "cherry-picking" methodology:

Here, I’ve collected a number of examples of Yudkowsky making (in my view) dramatic and overconfident predictions concerning risks from technology.

Note that this isn’t an attempt to provide a balanced overview of Yudkowsky’s technological predictions over the years. I’m specifically highlighting a number of predictions that I think are underappreciated and suggest a particular kind of bias.

If you were to apply this to any EA thought leader (or non-EA thought leader, for that matter), I strongly suspect you'd find a lot of clear-cut and disputable examples of them being wrong on important things.

As a toy analogy, imagine that Alice is widely considered to be extremely moral. I hire an investigator to find as many examples of Alice doing Bad Things as possible. I then publish my list of Bad Things that Alice has done. And I tell people "look: Alice has done some Bad Things. You all think of her as a really moral person, and you defer to her a lot, but actually, she has done Bad Things!"

And I guess I'm left with a feeling of... OK, but I didn't expect Alice to have never done Bad Things! In fact, maybe I expected Alice to do worse things than the things that were on this list, so I should actually update toward Alice being moral and defer to Alice more.

To make an informed update, I'd want to understand your balanced take. Or I'd want to know some of the following:

  • How much effort did the investigator spend looking for examples of Bad Things?
  • Given my current impression of Alice, how many Bad Things (weighted by badness) would I have expected the investigator to find?
  • How many Good Things did Alice do (weighted by goodness)? 

Final comment: I worry this comment might come across as ungrateful, so I just want to point out that I appreciate this post, find it useful, and will be more likely to challenge/question my deference as a result of it.

Hey, Jay! Judging is underway, and I'm planning to announce the winners within the next month. Thanks for your patience, and sorry for missing your message.

Miranda, your FB profile & EA profile are great examples of #3 :) 
