ekka

Karma: 89 · Joined Aug 2021

Bio

Born and raised in Kenya but have lived in Canada for a while now. The huge disparities between my country of birth and my current country of residence have always drawn me to think about how things can be improved for the less fortunate in the world. I would like to move from just thinking about it to doing something about it. I'm still engaging with and wrapping my head around the ideas of EA, as it seems like it could help me move in the right direction.

Comments (17)

I really appreciate the series of posts you have been making! Keep them coming!

Great work on the book, Will! What do you think the impact of Longtermism, and more broadly the Effective Altruism community, will be by the end of this century? Examples of things I'm looking for: How much do you think Longtermism and EA will have grown by the end of the century? How much will EA-funded/supported organizations have reduced existential risk and suffering in the world? How many new cause areas do you think will have been identified? (Some confidence intervals would be nice, along with a decade-by-decade breakdown of what you think the progression toward those goals will look like, though I realize you're a busy fellow and may not have the capacity to produce such a detailed breakdown.) I'm curious what concrete goals you think EA and Longtermism will have achieved by the end of this century and how you plan on keeping track of how close you are to achieving them.

Great post! It does seem prima facie infeasible that recapitulating evolution would even be computationally tractable. Another thing to consider is that simulating evolution may not yield general intelligence if it is only run once; it may need to be run many times in order to stumble on general intelligence, which adds to the amount of computation needed if it turns out that arriving at general intelligence is very unlikely on any given run.

Those may be the wrong metrics to look at, given that the proportion of people doing direct work in EA is small compared to all the people engaging with EA. The organizations you listed are also highly selective, so only a few people will end up working at them. I think the bias reveals itself when opportunities such as MLAB come up and the number of applicants is overwhelming compared to the number of positions available, not to mention the additional people who may want to work in these areas but don't apply for various reasons. If one used engagement with things like forum posts as a proxy for the total time and energy people put into engaging with EA, I think it would turn out that people engage disproportionately more with the topics the OP listed. Though maybe that's just my bias, given that's the content I engage with the most!

Indeed. It just felt more grounded in reality to me than the other resources, which may be why it appeals more to us laypeople, while the non-laypeople prefer more speculative and abstract material.

Personally, I find Human Compatible the best resource of the ones you mentioned. If it were just the others, I'd be less bought into taking AI risk seriously.

Great post! I think this is a failure of EA. Lots of corporations and open source projects are able to leverage the efforts of many average-intelligence contributors to do impressive things at a large scale through collaboration. It seems to me that something must be wrong when there are many motivated people willing to contribute their time and effort to EA but who don't have many avenues to do so other than earning to give and maybe community building (which leaves a lot of people who feel motivated by EA with no concrete ways to easily engage). It seems to me that for direct contributions, EA prefers a superstar model, where one has to stand out in order to contribute effectively, instead of a more incremental, collaborative model, where the superstars would still have an outsized impact but the bar would also be lowered for anyone to make an incremental contribution. Maybe there are good reasons why EA prefers one model over the other, but I'd be surprised if the model that mobilizes fewer people is considered more impactful.

Another issue is that EA may target people who are smarter than average (at least smarter in very specific ways), but given that most people are average by definition, or are smarter along different dimensions, these 'very smart people' may not be able to accurately model other people or how things happen in the real world, where reality doesn't usually line up well with mathematical abstractions and theoretical thinking. I have found myself questioning whether the balance between intellectualism and pragmatism is tilted too far toward the former. Hopefully this doesn't lead to a situation where the EA community cares more about seeming smart and claiming the moral high ground than about actually doing good in the world.

Great post! Thanks for writing it! I'm not great at probability, so I'm just trying to understand the methodology.

  1. The cumulative probability of death should always sum to 100%, and P(Death|AGI) + P(Death|OtherCauses) = 100% (where 100% means 100% of P(Death), i.e., all causes of death should be accounted for in P(Death), as opposed to 100% being P(Death) + P(Life)). To correct for this, would you scale death from other causes down as P(Death|AGI) increases, i.e., P(Death|OtherCauses) = (100% − P(Death|AGI)) × unscaled P(Death|OtherCauses)? (This assumes the unscaled P(Death|OtherCauses) already sums to 100%, but maybe it doesn't?) A minimal numeric sketch of the rescaling I have in mind is shown after this list.
  2. I expect the standard deviation of P(Death|AGI) to be much higher than that of P(Death|OtherCauses), since AGI doesn't exist yet. What's the best way to take this into account?
  3. If you happen to have data on this, could you add an additional series with other Global Catastrophic Risks taken into account? It would be nice to see how the risk of death from AGI compares with other GCRs that are already possible. Intuitively, I'd expect the standard deviation of GCRs that already exist to be lower.
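To make the rescaling in (1) concrete, here's a minimal sketch in Python with made-up numbers (p_agi and p_other_unscaled are hypothetical placeholders, not figures from the post):

```python
# Minimal sketch of the rescaling asked about in (1), using made-up numbers.
# p_agi: probability of dying from AGI.
# p_other_unscaled: probability of dying from all other causes, estimated as
# if AGI risk were zero.
p_agi = 0.20             # hypothetical: 20% chance of death from AGI
p_other_unscaled = 1.00  # other causes account for all deaths absent AGI

# Scale the "other causes" share down by the probability mass AGI takes up,
# so the two components of P(Death) sum to 100%.
p_other = (1.0 - p_agi) * p_other_unscaled

print(p_agi + p_other)   # 1.0, i.e. all causes of death accounted for
```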
Answer by ekka · Jun 22, 2022

Can Longtermism succeed without creating a benevolent, stable authoritarianism, given that it is unlikely that all humans will converge on the same values? Without such a hegemony or convergence of values, doesn't it seem like conflicting interests among different humans will eventually lead to a catastrophic outcome?

For what it's worth, I found this post and the ensuing comments very illuminating. As a person relatively new to both EA and the arguments about AI risk, I was a little confused as to why there was not much pushback on the very high-confidence beliefs about AI doom within the next 10 years. My assumption had been that there was a lot of deference to EY because of reverence and fealty stemming from his role in getting the AI alignment field started, not to mention the other ways he has shaped people's thinking. I also assumed that his track record on predictions was just ambiguous enough for people not to question his accuracy. Given that I don't give much credence to the idea that prophets/oracles exist, I thought it unlikely that the high confidence in his predictions was warranted, on the grounds that there doesn't seem to be much evidence supporting the accuracy of long-range forecasts. I did not know that EY had made such glaring mispredictions in the past, so thank you for highlighting them.
