Luise

Researching US Frontier AI Regulation
492 karma · Joined Mar 2021 · Pursuing an undergraduate degree · Working (0-5 years) · Oxford, UK
admonymous.co/luisew

Bio

Currently researching how cost-benefit analysis is used in US regulatory decision-making and what this might imply for the regulation of Frontier AI. Supervised by John Halstead (GovAI).

In the past, I've done community building and operations at GovAI, CEA, and the SERI ML Alignment Theory Scholars program. My degree is in Computer Science.

I also sometimes worry about the big-picture epistemics of EA à la "Is EA just an ideology like any other?".

Comments (28)

I found the framing of "Is this community better-informed relative to what disagreers expect?" new and useful, thank you!

To point out the obvious: Your proposed policy of updating away from EA beliefs if they come in large part from priors is less applicable for many EAs who want to condition on "EA tenets". For example, longtermism depends on being quite impartial regarding when a person lives, but many EAs would think it's fine that we were "unusual from the get-go" regarding this prior. (This is of course not very epistemically modest of them.)

Here are some more not-well-fleshed-out, maybe-obvious, maybe-wrong concerns with your policy:

  • It's kind of hard to determine whether EA beliefs are weird because we were weird from the get-go or because we did some novel piece of research/thinking. For example, was Toby Ord concerned about x-risks in 2009 because he had unusual priors or because he had thought about novel considerations that are obscure to outsiders? People would probably introduce their own biases while making this judgment. I think you could even try to make an argument like this about polyamory.
  • People probably generally think a community is better-informed than expected the more time they spend engaging with it. At least this is what I see empirically. So for people who've engaged a lot with EA, your policy of updating towards EA beliefs if EA seems better-informed than expected probably leads to deferring asymmetrically more to EA than to other communities, since they will have engaged less with those other communities. (Ofc you could try to consciously correct for that.)
  • I overall often have the concern with EA beliefs that "maybe most big ideas are wrong", just like most big ideas have been wrong throughout history. In this frame, our little inside pet theories and EA research provide almost no Bayesian information (because they are likely to be wrong) and it makes sense to closely stick to whatever seems most "common sense" or "established". But I'm not well-calibrated on how true "most big ideas are wrong" is. (This point is entirely compatible with what you said in the post but it changes the magnitude of updates you'd make.)

Side-note: I found this post super hard to parse and would've appreciated it a lot if it had been written more clearly!

My impression is that others have thought so much less about AI x-risk than EAs and rationalists, and for generally bad reasons, that EAs/rats are the "largest and smartest" expert group basically 'by default'. Unfortunately with all the biases that come with that. I could be misunderstanding the situation tho.

Thanks a lot, I think it's really valuable to have your experience written up!


Thanks Max!

Sounds like a plausible theory that you lost motivation because you pushed yourself too hard. I'd also pay attention to "dumber" reasons, e.g. that in the past you may have had more motivation from supervisors, your social environment, or more achievable goals.

Similar to my call to take a vacation, maybe it's worth it for you to only do motivating work (like a side project) for 1.5 weeks and see if the tiredness disappears.

All of this with the caveat that you understand your situation a lot better than I do ofc!

yes! From reading about burnout it can seem like it only happens to people who hate their job, work in bad environments, etc. But it can totally happen to people who love their job!

thanks and big agree; I want to see many more different experiences of energy problems written up!

the causes of people's energy problems are so many and varied! It would be great to have many different experiences written up, including stress and anxiety-induced problems.

Thanks for the feedback re: the appendix, will see if others say the same :)

Optimistic note with low confidence:

My impression is that SBF thought he was doing an 'unpalatable' but right thing given the calculations (and his epistemic immodesty). Promoting a central meme in EA like "naïve calculations like this are too dangerous and too fallible" might solve a lot of the issue. I think dangerously-optimize-y people in EA are already updating in this direction as a result of FTX. Before FTX, being "hardcore" and doing naïve calculations was sometimes seen as cool. If we correct hard for this right now, it may be less of an issue in the future.

2 main caveats:

  1. The whole "don't do naïve calculations" idea is quite complex and not easy to communicate. This may make it hard to correct for.
  2. How memes move through a space as large and complex as EA is probably hard to predict. All sorts of crazy things might happen and I have no clue. For example, there could be a new counterculture part of EA that becomes super dangerously-optimize-y. (But at least they would face more of an uphill battle in this world.)

ah, the thing about fragile cooperative equilibria makes sense to me.

I'm not as sure as you that this shift would happen to core EA though. I could also imagine that current EAs will have a very allergic reaction to new, unaligned people coming in and trying to take advantage of EA resources. I imagine something like a counterculture forming where aligned EAs start purposefully setting themselves apart from people who're only in it for a piece of the pie, by putting even more emphasis on high EA alignment. I believe I've already seen small versions of this happening in response to non-altruistic incentives appearing in EA.

The faster the flood of new people and change of incentives happens, the more confident I am in this view. Overall, I'm not extremely confident at all though.

On your last point: if I understand it right, this is not the thing you're most worried about though? Like, these people hijacking EA are not the mechanism by which EA may collapse, in your view?

It's unclear to me whether you are saying that the potentially huge number of new people in EA will try to take advantage of EA resources for personal gain or that WE, who are currently in EA for altruistic reasons, will do so. The former sounds likely to me, the latter doesn't.


I might be missing crucial context here since I'm not familiar with the Thielosphere and all that, but overall I also don't think a huge number of new, unaligned people will be the downfall of EA. As long as leadership, thought-leaders, and grantmakers in EA stay aligned, it may be harder for them to determine whom to give a grant (or a stamp of approval) to, but wouldn't that simply lead to fewer grants? Which seems bad, but not like the end?


Or are you imagining highly intelligent people with impressive resumes who strategically aim to hijack EA resources for their aims and get into important positions in EA?
