AppliedDivinityStudies

Comments

Hey Ben, I'm late, but have written up some thoughts here. I think you're misinterpreting Peter on AI in point 2:
> The only promising technology for avoiding stagnation is AI.

You quote the NYT interview where he says:
> If you don’t have A.I., wow, there’s just nothing going on.

But in my interpretation this is clearly a criticism rather than an endorsement. Here's Peter in another interview:

> why is AI going to be the only technology that matters? If we say there’s only this one big technology that’s going to be developed, and it is going to dominate everything else, that’s already, in a way, conceding a version of the centralization point. So, yes, if we say that it’s all around the next generation of large language models, nothing else matters, then you’ve probably collapsed it to a small number of players. And that’s a future that I find somewhat uncomfortably centralizing, probably.

> The definition of technology — in the 1960s, technology meant computers, but it also meant new medicines, and it meant spaceships and supersonic planes and the Green Revolution in agriculture. Then, at some point, technology today just means IT. Maybe we’re going to narrow it even further to AI. And it seems to be that this narrowing is a manifestation of the centralizing stagnation that we should be trying to get out of.

Ah wow, yeah super relevant. Thanks for sharing!

Ah okay, thanks for explaining this. That does meaningfully affect my understanding of this particular role.

Yeah this is all right, but I see EA as having been, since its founding, much closer to Protestant ideals than Catholic ones, at least on this particular axis.

If you had told me in 2018 that EA was about "supporting what Dustin Moskovitz chooses to do because he is the best person who does the most good", or "supporting what Nick Bostrom says is right because he has given it the most thought", I would have said "okay, I can see how in this world, SBF's failings would be really disturbing to the central worldview". 

But instead it feels like this kind of attitude has never been central to EA, and that in fact EA embraces something like the direct opposite of this attitude (reasoning from first principles, examining the empirical evidence, making decisions transparently). In this way, I see EA as already having been post-reformation (so to speak).

I'm going off memory and could be wrong, but in my recollection the thinking here was not very thorough. I recall some throwaway lines like "of course this isn't liquid yet", but very little analysis. In hindsight, it feels like if you think you have somewhere between $0 and $1t committed, you should put a good amount of thought into figuring out the distribution.

One instance where this matters a lot is the bar for spending in the current year. If you have $1t, the bar is much lower and you should fund way more things right now. So information about the movement's future finances turns out to have a good deal of moral value.

I might have missed this though, and would be interested in reading any posts from before 11/22 that you can dig up.

Oh, I think you would be super worried about it! But not "beat ourselves up" in the sense of feeling like "I really should have known". That's in contrast to the part I think we really should have known: that the odds of success were not 100%, and that trying to figure out a reasonable estimate for those odds and the implications of failure would have been a valuable exercise that did not require a ton of foresight.

Bit of a nit, but "we created" is stronger phrasing than I would use. Maybe I would agree with something like "how can we be confident that the next billionaire we embrace isn't committing fraud". Certainly I expect there will be more vigilance and a lot of skepticism the next time around.

I am around!
https://twitter.com/alexeyguzey/status/1668834171945635840

Does the EA Forum have a policy on sharing links to your own paywalled writing? E.g. I've shared link posts to my blog, and others have shared link posts to their Substacks, but I haven't seen anyone share a link post to their own paid Substack before.

I think the main arguments against suicide are that it causes your loved ones a lot of harm, and that (for some people) there is a lot of uncertainty about the future. Bracketing really horrible torture scenarios, your life is an option with limited downside risk. So if you suspect your life (really, the remaining years of your life) is net-negative, rather than commit suicide you should increase variance, because you can only stand to benefit.

The idea that "the future might not be good" comes up on the forum every so often, but it doesn't really undermine the core longtermist claims. The counter-argument is roughly:
- You still want to engage in trajectory changes (e.g. ensuring that we don't fall to the control of a stable totalitarian state)
- Since the error bars are ginormous and we're pretty uncertain about the value of the future, you still want to avoid extinction so that we can figure this out, rather than getting locked in by the vague sense we have today
