
zchuang

488 karma · Joined Jan 2023

Comments (61)

I wish there were a library of sorts for different base models of TAI economic growth that weren't just some form of the Romer model where TFP goes up because PASTA automates science.
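
For concreteness, here's a minimal sketch (my own notation, not any specific paper's) of the Romer-style baseline I mean, where "PASTA automates science" shows up as automated researchers adding to the effective research input that drives TFP growth:

```latex
% Final output from TFP A_t, capital K_t, and production labour L_{Y,t}
Y_t = A_t \, K_t^{\alpha} \, L_{Y,t}^{1-\alpha}

% TFP growth from effective research input; automated researchers S_t
% (the "PASTA" term) simply add to human researchers L_{A,t}
\dot{A}_t = \delta \, A_t^{\phi} \, (L_{A,t} + S_t)^{\lambda}
```

(The parameters δ, φ, λ and the additive S_t term are illustrative assumptions, not claims about any particular model.)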

To be clear, you should still ask more people and look at the downstream effects on PhDs, research, etc. Again, I'd echo the advice about 80k and reaching out to other people.

The heavy-tailed distribution of impact has a very psychologically harsh effect given the lack of feedback loops in longtermist fields, and I wonder what interpersonal norms one could cultivate among friends and the community writ large (loosely held/purely musing, etc.):

  1. Distinguishing pessimism about ideas from pessimism about people.
  2. Ex-ante vs. ex-post critiques.
  3. Celebrating when post-mortems have led to more successful projects.
  4. Merger/takeover mechanisms for competition between people/projects.

I think EAs in the FTX era were leaning hard on hard capital (e.g. mentioning the No Lean Season shutdown) while ignoring the social and psychological parts of taking risk, and how we can be a community that recognises heavy-tailed distributions without making it worse for those who are not in the heavy tail.

zchuang
6d

To be fair, I think a few Schmidt Futures people were looking around EA Global for things to fund in 2022. I can see why someone would think they're longtermists.

There are master's programs in the UK that take non-CS students. Anecdata from friends: they've done PPE at Oxford and then a CS master's at Imperial.

zchuang
10d

An underrated thing about the (post-)rationalists/adjacent is how open they are with their emotions. I really appreciate @richard_ngo's Replacing Fear series and a lot of the older LessWrong posts about starting a family with looming AI risk. I just really appreciate the personal posting, and when debugging comes from a place of openness and emotional generosity.

zchuang
20d

Yeah, I should have written more, but I try to keep my shortform casual to lower the barrier to entry and to allow for expansions based on different readers' issues.

zchuang
20d

I notice a lot of internal confusion whenever people talk about macro-level bottlenecks in EA:

  1. Talent constraint vs. funding constraint.
    1. 80k puts out declarations whenever the funding situation changes, e.g. "don't found projects on the margin" (RIP FTX).
    2. People don't found projects in AI Safety because of this switch-up.
    3. Over the next two years people upskill and do independent research or join existing organisations.
    4. Eventually there are not enough new organisations to absorb the funding.
    5. [then reverse the two, in cycles, I guess]
  2. Mentorship in AI Safety
    1. There's a mentorship bottleneck, so people are pushed to do more independent projects.
    2. Fewer new organisations get started because people are told the bottleneck is mentorship and research aptitude.
    3. Eventually mentorship catches up because everyone has upskilled, but there aren't enough organisations to absorb the mentors, etc.

To be clear, I understand the counterarguments about marginality, and these are exaggerated examples, but at its core I do fear that the way EAs defer means we get the worst of the social planner problem with none of the benefits of the theory of the firm.
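
As a toy illustration of that cycle (entirely my own construction, with made-up parameters), imagine everyone defers to the same "current bottleneck" signal each year; the system then overshoots in both directions rather than settling:

```python
# Toy model of the deference-driven oscillation described above.
# All parameters and dynamics are illustrative assumptions, not estimates.

def simulate(years=12, orgs=20.0, researchers=100.0):
    history = []
    for year in range(years):
        capacity = orgs * 8  # assumed absorption: 8 researchers per org
        talent_constrained = researchers < capacity

        if talent_constrained:
            # "Don't found new projects, upskill instead" -- everyone defers.
            researchers *= 1.4
        else:
            # "We're org/funding constrained" -- everyone founds or joins new orgs.
            orgs *= 1.3
            researchers *= 1.05

        history.append((year, round(orgs, 1), round(researchers), talent_constrained))
    return history

for year, orgs, researchers, talent_constrained in simulate():
    label = "talent-constrained" if talent_constrained else "org/funding-constrained"
    print(f"year {year}: orgs={orgs}, researchers={researchers}, {label}")
```

The point isn't the numbers, just that uniform deference to a single macro-level signal produces the flip-flopping sketched in the list above.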

zchuang
20d

I notice I'm confused about why trades have to happen at the OpenPhil level. I think Pareto optimality in trades works best when there are more actors aggregating and talking. It's sad that donor lotteries have died out to an extent, and that so much regranting discourse is about internal EA social dynamics rather than impact in and of itself.

zchuang
22d

Examples of resources that come to mind:

  1. Platforms and the ability to amplify. I worry a lot about the amount of money going into global priorities research and graduate students (even though I do agree it's net good). For instance, most EA PhD students take teaching buyouts and so probably have more hours to devote to research. Sharing resources probably means a better distribution of prestige bodies and amplification gatekeepers.
    1. To be explicit, my model of the modal EA is that they have bad epistemics and would take this to mean funding a bad-faith critic (and there are so many), but I do worry that sometimes EA wins in the marketplace of ideas due to money rather than truth.
  2. Give access to the materials necessary to make criticisms (e.g. AI Safety papers should be more open with dataset documentation, etc.).

Again, this is predicated on good-faith critics.
