
Habryka

CEO @ Lightcone Infrastructure
21318 karma · Joined · Working (6-15 years)

Bio

Head of Lightcone Infrastructure. Wrote the forum software that the EA Forum is based on. I often help the EA Forum with various issues; if something is broken on the site, there's a good chance it's my fault (sorry!).

Comments: 1358
Topic contributions: 1

I think this is the closest thing I currently have (in general, "sharing all of my OP-related critiques" would easily be a two-to-three-book-sized project, so I don't think it's feasible, but I try to share what I think whenever it seems particularly pertinent):

https://www.lesswrong.com/posts/wn5jTrtKkhspshA4c/michaeldickens-s-shortform?commentId=zoBMvdMAwpjTEY4st 

I also have some old memos I wrote for the 2023 Coordination Forum, which I have referenced a few times in past discussions, and which I would still be happy to share with anyone who DMs me.

CEA seems to maintain control over most high-level aspects of EAGx, so I don't think this counts as competition.

The answer for a long time has been that it's very hard to drive any change without buy-in from Open Philanthropy. Most organizations in the space are directly dependent on their funding, and even beyond that, they have staff on the boards of CEA and other EA leadership organizations, giving them hard power beyond just funding. Lincoln might be on the EV board, but ultimately what EV and CEA do is directly contingent on OP approval.

OP, however, has been very uninterested in any kind of reform or structural change, does not currently have staff participating in discussions with stakeholders in the EA community beyond a very small group of people, and is heavily limited in what it can say publicly, due to managing tricky PR and reputational issues with its primary funder Dustin and its involvement in AI policy.

It is not surprising to me that Lincoln would also feel unclear on how to drive leadership, given the really quite deep gridlock things have ended up in, with OP having practically filled the complete power vacuum of leadership in EA, but without any interest in actually leading.

Oh, that's an interesting idea. In general it seems good to take newsletters, digests, and summaries and use other platforms to get additional reach.

Agree! I am a bit worried that the discussion on LW is social-drama heavy, since the frontpage is kind of the only outlet for that kind of content, but that in itself is also somewhat of a success (having a place for more community-oriented discussion without it taking over the whole site).

  • GiveWell, which takes a combined broad and HNW direct fundraising approach, seems to have hit some limiting factors in 2022 after having grown rapidly for more than 10 years.
  • Similarly, growth of The Life You Can Save, Effektiv Spenden, Animal Charity Evaluators, and Giving What We Can (all largely broad direct fundraising organisations at the time) seems to have stagnated somewhat around the same time. This suggests the slowdown may have had something to do with external factors (e.g. the economic downturn and/or the FTX crisis), though there could also be other factors at play, e.g. target groups becoming saturated.

I will take bets at relatively high odds that these external factors were the reason for the reduction in growth. Approximately everything EA-adjacent stopped growing during that period.

Oops, yep, that sure is the better functional representation. 

I haven't read the linked article or summary in detail, but clearly any measure of "success" must account for the costs of these policies as well? At least a quick skim suggests the article didn't account for costs at all, which I feel makes this abstraction kind of meaningless (it basically means the "most successful" policies will simply be the ones that covered the largest countries/industries, and that doesn't tell us much, since that's also where the potential costs were located).

It still seems good to do these calculations, but I would feel very hesitant to call these policies "successful" without having measured their costs.

A much better measure of "success" would be something like ("CO2 averted" / "economic cost") × "size of intervention".
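Spelled out (my own reading of that formula, where "size of intervention" could be, e.g., the share of emissions the policy covers):

$$\text{success} = \frac{\text{CO}_2\ \text{averted}}{\text{economic cost}} \times \text{size of intervention}$$

The first factor measures cost-effectiveness per unit of spending, and the second scales it by how much of the problem the policy actually touched, so a large policy only scores well if it was also cheap per ton averted.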

I think that's inaccurate (though I will admit the bill text here is confusing). 

Critical harm is defined as doing more than $500M of damage, so at the very least you have to be negligent specifically on the question of whether your systems can cause $500M of harm.

But I think more concretely the conditions under which the AG can sue for damages if no critical harm has yet occurred are pretty well-defined (and are not as broad as "fail to take reasonable care").

(1) There's less point in saving the world if it's just going to end anyway. That is, pessimism about existential risk (i.e. a higher estimated risk) decreases the value of existential risk reduction, because the saved future is riskier and therefore less valuable.

(2) Individual existential risks cannot be evaluated in isolation. The value of existential risk reduction in one area (e.g., engineered pathogens) is substantially impacted by all other estimated sources of risk (e.g. asteroids, nuclear war, etc.). It is also potentially affected by any unknown risks, which seems especially concerning.
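One way to make both points precise (my own sketch, assuming independent risks $p_j$ and a future worth $V$ conditional on surviving all of them): the expected value of reducing risk $i$ by $\Delta p_i$ is roughly

$$\Delta \text{EV} = \Delta p_i \cdot \prod_{j \neq i} (1 - p_j) \cdot V$$

Every additional risk, known or unknown, multiplies in another factor below 1, so work on any single risk is worth less the more pessimistic the overall picture.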

OK, but indeed all arguments for longtermism, as plenty of people have commented in the past, make a case for hingeyness, not just for the presence of catastrophic risk.

There is currently no known technology that would cause a space-faring civilization sending out Von Neumann probes to the edge of the observable universe to go extinct. If you can manage to bootstrap to that point, you are good, unless you get wiped out by an alien species coming from outside of our lightcone. And even beyond that, outside of AI, there are very few risks to any multiplanetary species; the risks at that scale really do seem to get very decorrelated.
