Arepo

There are many ways to reduce existential risk. I don't see any good reason to think that reducing small chances of extinction events is better EV than reducing higher chances of smaller catastrophes, or even just building human capacity in a preferentially non-destructive way. The arguments that we should focus on extinction have always boiled down to 'it's simpler to think about'.

It's still in use, but it has the basic problem of EA services: unless there's something to announce, there's not really any socially acceptable way of advertising it.

I was nodding along until I got here:

 Some reduce the problem to AI-not-kill-everyone-ism, which seems straightforward enough and directed and the most robust source of value here,

By any normal definition of 'robust', I think this is the opposite of true. The arguments for AI extinction are highly speculative. By contrast, the arguments that increasingly versatile AI destabilises the global economy and/or military are far more credible: many jobs already seem to have been lost to contemporary AI, and OpenAI has already signed a deal with autonomous-weapons manufacturer Anduril.

I think it's not hard to imagine worlds where even relatively minor societal catastrophes significantly increase existential risk, as I've written about elsewhere, and AI credibly (though I don't think obviously) makes these more likely. 

So while I certainly wouldn't advocate the EA movement pivoting toward soft AI risk, or even giving up on extinction risk entirely, I don't see anything virtuous in leaning too heavily into extinction risk either.

This philosophy seems at stark odds with 80k's recent hard shift into AI safety. The arguments for the latter, at least as an extinction risk, necessarily lack good evidence. If you're still reading this, I'm curious whether you disagree with that assessment, or whether you've shifted the view you espoused in the OP.

Have you checked out the EA Gather? It's been languishing a bit for want of more input from me, but I still find it a really pleasant place for coworking, and it's had several events run or part-run on there - though you'd have to check in with the organisers to see how successful they were.

Reading the Eliezer thread, I think I agree with him that there's no obvious financial gain for you if you hard-lock the money you'd have to pay back. 

I don't follow this comment. You're saying Vasco gives you X now, with 2X to be paid back after k years. You plan to spend X/2 now and lock up X/2, but somehow borrow 3X/2 now, such that you can pay the full amount back in k years? I'm presumably misunderstanding - I don't see why you'd make the bet now if you could just borrow that much, or why anyone would be willing to lend to you based on money that you were legally/technologically committed to giving away in k years.
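To spell out the arithmetic I'm confused by, here's a toy sketch (purely illustrative; X and the k-year repayment are from the thread above, the specific figures are mine):

```python
# Toy cash-flow sketch of the bet as I understand it (illustrative only).
X = 1.0            # amount Vasco pays out now, in arbitrary units
owed = 2 * X       # amount due back after k years

spend_now = X / 2  # spent immediately
locked = X / 2     # hard-locked until the repayment date

# Whatever isn't locked has to come from somewhere at year k:
shortfall = owed - locked
print(f"locked: {locked}X, owed: {owed}X, shortfall at year k: {shortfall}X")
# The shortfall comes out at 1.5X, i.e. 3X/2 - the borrowing that confuses me.
```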

One version that makes more sense to me is planning to pay back in installments, on the understanding that you'd be making enough money to do so at the agreed rate - though a) that comes with obviously increased counterparty risk, and b) it still doesn't make much sense if your moneymaking strategy is investing money you already have rather than selling services/labour, since, again, it seems irrational for you to have any money left at the end of the k-year period.

I remain a non-doomer (though I've been considering such bets more recently), but I support this comment. I don't think the above criticisms make sense, though with a couple of caveats:

1) Zach Stein-Perlman's point above about borrowing in general seems reasonable. If your response is that it's high risk, it seems like making a bet is de facto asking the bettor to shoulder that risk for you.

2) 'This would not be good for you unless you were an immoral sociopath with no concern for the social opprobrium that results from not honouring the bet.' - I know you were responding to his 'can't possibly be good for you' comment (emphasis mine), but I don't see why this isn't rational behaviour if you think the world is going to end in <4 years. From a selfish perspective, is it really rational to worry about a couple of years of reduced reputation vs extinction beyond that? And from an altruistic perspective, if you think the world is almost certainly doomed, that the counterfactual world in which we survive is extremely +EV, and that spending the extra money could move the needle on preventing doom, it seems crazy not to just spend it and figure out the reputational details on the slim chance we survive (toy numbers below).
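Here's a toy version of the asymmetry in 2) - every number is invented for illustration, none come from the thread:

```python
# Toy expected-value sketch of caveat 2; all numbers are made up.
p_doom = 0.95                  # hypothetical credence that the world ends within ~4 years
reputation_cost = 1.0          # disvalue of the opprobrium from welching, arbitrary units
value_of_doom_averted = 1e6    # value of the counterfactual surviving world, same units
nudge = 1e-3                   # hypothetical shift in survival odds from spending the extra money

# The reputational cost is only ever paid in worlds where we survive:
expected_reputation_cost = (1 - p_doom) * reputation_cost

# The altruistic upside of spending the reserved money on doom prevention:
expected_upside = nudge * value_of_doom_averted

print(f"expected reputational cost: {expected_reputation_cost:.3f}")
print(f"expected upside of spending now: {expected_upside:.1f}")
```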

The second is one of the main sources of counterparty risk that makes me wary of such bets - it seems like it would be irrational for anyone to accept them with me in good faith.

It's difficult if the format requires a 1D sliding scale. I think reasonable positions can be opposed on AI vs other GCRs vs infrastructure vs evidenced interventions, and future (if it exists) is default bad vs future is default good, and perhaps 'future generations should be morally discounted' vs not.

I'm going to struggle to cast a meaningful vote on this, since I find the 'existential risk' terminology as used in the OP more confusing than helpful: e.g. it includes non-existential considerations, and in practice it excludes non-extinction catastrophes from a discussion they should very much be part of, in favour of the heuristic-but-insufficient approach of focusing on the events with maximal extinction probability (i.e. AI).

I've argued here that non-extinction catastrophes could be as or more valuable to work on than immediate extinction events, even if all we care about is the probability of very long-term survival. For this reason I actually find Scott's linked post extremely misleading, since it frames his priorities as 'existential' risk, then pushes people entirely towards working on extinction risk - and gives reasons that would apply as well to non-extinction GCRs. I gave some alternate terminology here, and while I don't want to insist on my own clunky suggestions, I wish serious discussions would be more precise.
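As a minimal sketch of that claim (all probabilities below are invented purely for illustration, not taken from the linked post):

```python
# Toy two-stage model: long-run survival requires surviving an immediate
# extinction risk AND avoiding later extinction, where a non-extinction
# catastrophe raises the later extinction risk. All numbers are made up.

def p_long_term_survival(p_ext_now, p_cat, p_later_if_ok, p_later_if_cat):
    survive_now = 1 - p_ext_now
    p_later = p_cat * p_later_if_cat + (1 - p_cat) * p_later_if_ok
    return survive_now * (1 - p_later)

base = dict(p_ext_now=0.05, p_cat=0.20, p_later_if_ok=0.05, p_later_if_cat=0.90)

baseline = p_long_term_survival(**base)
# Shave one percentage point off immediate extinction risk...
less_extinction = p_long_term_survival(**{**base, "p_ext_now": base["p_ext_now"] - 0.01})
# ...versus one percentage point off catastrophe risk.
less_catastrophe = p_long_term_survival(**{**base, "p_cat": base["p_cat"] - 0.01})

print(f"baseline long-run survival: {baseline:.4f}")
print(f"1pp less extinction risk:   {less_extinction:.4f}")
print(f"1pp less catastrophe risk:  {less_catastrophe:.4f}")
# With these (made-up) numbers, the catastrophe reduction does roughly as much
# for long-run survival as the direct extinction-risk reduction.
```

The numbers are arbitrary and tractability is ignored; the point is just that the catastrophe term enters the same long-run survival calculation, so it can't be excluded from 'existential' discussions by definition.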

Do you have examples of LLMs improving Fermi estimates? I've found it hard to get any kind of credences at all out of them, let alone convincing ones.
