By inference, if you are one of those copies, the 'moral worth' of your own perceived torture will therefore be 1/10-billionth of its normal level. So that's a huge selfish upside - I might prefer being one of 10 billion identical torturees as long as I uniquely get a nice back scratch afterwards, for example.
Space lasers don't seem as much of a threat as Jordan posits. They have to be fired from somewhere. If that's within the solar system they're targeting, then that system will still have plenty of time to see the object that's going to shoot them arriving. If they're much further out, it becomes much harder both to aim them correctly and to provide enough power to keep them focused, and the source needs to be commensurately more powerful (as in more expensive to run), and with a bigger lens, so more visible while under construction and more vulnerable to co...
I don't think anyone's arguing current technology would allow self-sufficiency. But part of the case for offworld settlements is that they very strongly incentivise technology that would.
In the medium term, an offworld colony doesn't have to be fully independent to afford a decent amount of security. If it can a) outlast some Earth-local catastrophe (e.g. a nuclear winter or airborne pandemic) and b) get back to Earth once things are safer, it still makes your civilisation more robust.
I broadly agree with the arguments here. I also think space settlement has a robustness to its security that no other defence against GCRs does - it's trivially harder to kill off more people spread more widely than it is to kill a handful on a single planet. Compare this to technologies designed to regulate a single atmosphere to protect against biorisk, AI safety mechanisms that operate on AGIs whose ultimate nature we still know very little of, global political institutions that could be subverted or overthrown, bunkers on a single planet, etc, al...
I strongly agree with the first half of this post - bunkers and refuges are pretty bad as a defence against global catastrophes.
Your solution makes a lot less sense to me. It seems like it has many of the same problems you're trying to avoid - it won't be pressure tested until the world collapses. In particular, if it's an active part of a local community, that implies people will be leaving and reentering regularly, which means any virus with a long incubation period could be in there before people know it's a problem.
Also, I feel like your whole li...
Hey Corentin,
The calculators are intentionally silent on the welfare side, on the thought that in practice it's much easier to treat it as a mostly independent question. That's not to say it actually is independent, and ideally I would like the output to include more information about the pathways to either extinction or an interstellar state, so that people can apply some further function to the output. I do think it's reasonable, even on a totalising view, to prioritise improving future welfare conditional on it existing and largely ignore the question ...
I don’t feel so comfortable talking to community health at the moment.
Can you say why? That seems like the obvious first step, so it would be easier to offer a useful alternative if you could share some part of your hesitation. I don't know if it would feel any safer to message a stranger, but feel free to DM me your concerns if you prefer (or you can email me if you don't want them stored on the EA forum). I'm not a support professional, but I maybe have enough detachment from, but also skin in, the EA community to help you figure out next steps.
Fwiw I've...
Triodos (the most ethical bank I could find)
Fwiw I have never been terribly impressed by Triodos' ethos. The last time I looked at the sort of projects they fund, they were e.g. investing in alternative medicine and divesting from nuclear energy, the former of which seems surreal to call 'ethical' and the latter of which is a disastrous strategy for the environment.
I would much rather invest in something with a higher interest rate and donate 50% of the difference (or whatever seems appropriate).
Yeah, it sounds like this might not be appropriate for someone with your credences, though I'm confused by what you say here:
...I mentioned point/mean probability estimates, but my upper bounds (e.g. 90th percentile) are quite close, as they are strongly limited by the means. For example, if one's mean probability is 10^-10, the 90th percentile probability cannot be higher than 10^-9, otherwise the mean probability would be higher than 10^-10 (= (1 - 0.90)*10^-9), which is the mean. So my point remains as long as you think my point/mean estimates are reasonab
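The bound in the quoted passage is just Markov's inequality applied to the upper tail: at least 10% of the probability mass sits at or above the 90th percentile, so mean ≥ 0.1 × p90, and hence p90 ≤ 10 × mean. A minimal numeric check of that relationship (using an illustrative heavy-tailed distribution of credences, not anyone's actual estimates):

```python
import random

# Illustrative sample of probability estimates: log-uniform between
# 10^-14 and 10^-9 (hypothetical numbers, purely for demonstration).
random.seed(0)
samples = [10 ** random.uniform(-14, -9) for _ in range(100_000)]

mean = sum(samples) / len(samples)
p90 = sorted(samples)[int(0.9 * len(samples))]

# Markov-style bound on the upper tail: at least 10% of the mass lies
# at or above the 90th percentile, so mean >= 0.1 * p90 must hold,
# i.e. the 90th percentile can never exceed 10x the mean.
assert mean >= 0.1 * p90
print(f"mean={mean:.2e}, p90={p90:.2e}, 10*mean={10 * mean:.2e}")
```

The same logic gives the figure in the quote: with a mean of 10^-10, the 90th percentile cannot exceed 10^-9.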
Hm, the link works ok for me. What happens when you open it? It can be a bit shonky on mobile phones - maybe try using it on a laptop/desktop if you haven't.
It's called 'EA coworking and lounge', if that helps.
Thanks for the kind words, David. And apologies - I'd forgotten you'd published those explicit estimates. I'll edit them in to the OP.
My memory of WWOtF is that Will talks about the process, but other than giving a quick estimate of '90% chance we recover without coal, 95% chance with' he doesn't do as much quantifying as you and Luisa.
Also Lewis Dartnell talked about the process extensively in The Knowledge, but I don't think he gives any estimate at all about probabilities (the closest I could find was in an essay for Aeon where he opined that 'an industrial revolution without coal would be, at a minimum, very difficult').
Hey Vasco, thanks for the in-depth reply, and thanks again for trawling over this behemoth :)
Let me take these points in order:
I think the annual risk of human extinction not involving transformative AI (TAI) is astronomically low.
I'm highly sceptical of point probability estimates for events for which we have virtually no information - that's exactly why I made these tools. Per Dan Schwarz's recent post, it seems much more important to me to give an interactive model into which people can put their own credences, so that we can then debate the input ...
I'm happy to talk you through using it if you're finding it confusing.
If you (or anyone else) reading this wants to catch me for some support, I'm on the EA Gather Town as much as possible (albeit currently in New Zealand time), so you can log in there and ping me :)
I think it would have been better to speak up way, way sooner,
If and when this postmortem ever does happen, I hope they will address this, too. The lack of public engagement on the subject with the rest of the movement following the FTX disaster seems a comparable lapse of responsibility to anything that might have happened in the time leading up to it.
I found this interesting, and a model I've recently been working on might be relevant - I've emailed you about it. One bit of feedback:
Please reach out to hello@futuresearch.ai if you want to get involved!
You might want to make it more clear what kind of collaboration you're hoping to receive.
I think you gave up on your theory being maximally consistent when you opted for diversity of experience as a metavalue. Most people don't actually consider their own positive experiences cheapened by someone on the other side of the world having a similar experience.
Also, if you're doing morality by intuition (a methodology I think has no future), then I suspect most people would much sooner drop 'diversity of experience good' than 'torture bad'.
This. I'm imagining some Abrodolph Lincoler-esque character - Abronard Willter, maybe - putting me in a brazen bull and cooing 'Don't worry, this will all be over soon. I'm going to create 10 billion more of you, also in brazen bulls, so the fact that I continue to torture you personally will barely matter.'
most people intrinsically value diversity of experience, and see a large number of very similar lives as less of a good thing.
Especially in such a contentious argument, I think it's bad epistemics to link to a page with some random dude saying he personally believes x (and giving no argument for it) with the linktext 'most people believe x'.
It's wild for a news organisation that routinely witnesses and reports on tragedies without intervening (as is standard journalistic practice, for good reason) to not recognise it when someone else does it.
This doesn’t seem so different from p-zombies, and probably some moral thought experiments.
I'm not sure what you mean here. That the simulation argument doesn't seem different from those? Or that the argument that 'we have no evidence of their existence and therefore shouldn't update on speculation about them' is comparable to what I'm saying about the simulation hypothesis?
If the latter, fwiw, I feel the same way about p-zombies and (other) thought experiments. They are a terrible methodology for reasoning about anything, very occasionally the only ...
I think assuming that this is purely based on optics is unwarranted. Like I argued at the time, talk of 'optics' is kind of insulting to the everyperson, carrying the implication that the irrational public will misunderstand the +EV of such a decision. Whereas I contend that there's a perfectly rational Bayesian update that people should do towards an organisation being poorly run or even corrupt when that org spends large sums of money on vanity projects which they justify with a vague claim about having done some CBA that they don't want to share.
Meanwhi...
To be clear, what I am criticizing here is not operating the venue while the sale is going on, or setting some kind of target for the operators in terms of quality-adjusted-events or estimates of counterfactual events caused, that would allow them to continue operating the venue.
I totally agree that observing someone spending money on a "vanity project" would be evidence that they are poorly run or corrupt, but like, Wytham would not be a vanity project if it were to make economic sense for EV or the EA community at large to operate. So whether a project is a vanity project is dependent on a cost-effectiveness analysis (which I don't think really has occurred in this case).
- We may ourselves be simulated in a similar way without knowing it, if our entire reality is also simulated. We wouldn't necessarily have access to what the simulation is run on.
It seems weird to meaningfully update in favour of some concrete view on the basis that something might be true but that
Is there an online version of the case for the fading qualia argument? This feels a bit abstract without it...
Partly from a scepticism about the highly speculative arguments for 'direct' longtermist work - on which I think my prior is substantially lower than most of the longtermist community (though I strongly suspect selection effects, and that this scepticism would be relatively broadly shared further from the core of the movement).
Partly from something harder to pin down, that good outcomes do tend to cluster in a way that e.g. Givewell seem to recognise, but AFAIK have never really tried to account for (in late 2022, they were still citing that post while say...
Hey Johannes :)
To be clear, I think the original post is uncontroversially right that it's very unlikely that the best intervention for A is also the best intervention for B. My claim is that, when something is well evidenced to be optimal for A and perhaps well evidenced to be high tier for B, you should have a relatively high prior that it's going to be high tier or even optimal for some related concern C.
Where you have actual evidence available for how effective various interventions are for C, this prior is largely irrelevant - you look at the evidence...
This statement was very surprising to me:
The “concerned” participants (all of whom were domain experts) ... the “skeptical” group (mainly “superforecasters”)
Can you say more about your selection process? This seems very important to understanding how much to update on this. Did you
a) decide you needed roughly equally balanced groups of sceptics vs concerned, start with superforecasters, find that they were overwhelmingly sceptics, and therefore specifically seek domain experts because they were concerned
b) decide you needed roughl...
Interesting stuff. I'm sceptical a priori, but it would be amazing if this kind of thing replicated. I think there's a typo:
… and four to six months after treatment, they…
In contrast, after...
I think it would be a little bit of a surprising and suspicious convergence if the best interventions to improve human health (e.g. GiveWell's top charities) were also the best to reliably improve global capacity
Fwiw, I think Greg's essay is one of the most overweighted in forum history (as in, not necessarily overrated, but people put way too much weight in its argument). It's a highly speculative argument with no real-world grounding, and in practice we know of many well-evidenced socially beneficial causes that do seem convergently beneficial in ot...
I don't think these examples illustrate that "bewaring of suspicious convergence" is wrong.
For the two examples I can evaluate (the climate ones), there are co-benefits, but there isn't full convergence with regards to optimality.
On air pollution, the most effective interventions for climate are not the most effective interventions for air pollution, even though decarbonization is good for both.
See e.g. here (where the best intervention for air pollution would be one that has low climate benefits, reducing sulfur in diesel; and I think if that chart were...
greater confidence in EEV lends itself to supporting longshots to reduce x-risk or otherwise seek to improve the long-term future in a highly targeted, deliberate way.
This just depends on what you think those EEVs are. Long-serving EAs tend to lean towards thinking that targeted efforts towards the far future have higher payoff, but that also has a strong selection effect. I know many smart people with totalising consequentialist sympathies who are sceptical enough of the far future that they prefer to donate to GHD causes. None of them are at all active in the EA movement, and I don't think that's coincidence.
I think much of this criticism is off. There are things I would disagree with Nuno on, but most of what you're highlighting doesn't seem to fairly represent his actual concerns.
Nuño never argues for why the comments they link to shouldn't be moderated
He does. Also, I suspect his main concern is with people being banned rather than having their posts moderated.
...Nuño doesn't make points that EA is too naïve/consequentialist/ignoring common-sense enough. Instead, they don't think we've gone far enough into that direction. See in Alternate Visions of EA, the cl
Hey Arepo, thanks for the comment. I wasn't trying to deliberately misrepresent Nuño, but I may have made inaccurate inferences, and I'm going to make some edits to clear up confusion I might have introduced. Some quick points of note:
Cool! I just submitted a project - minor bit of feedback is that it's slightly irritating to have the 'project subtitle' field be mandatory.
Great post - I'm embarrassed to have missed it til now! One key point I disagree with:
there might be interventions that reduce risk a lot but not for very long, or not very much but for a long time. But actions that drastically reduce risk and do so for a long time are rare.
I think there are two big possible exceptions to the latter claim: benign AI and becoming sustainably multiplanetary. EAs have discussed the former a lot, and I don't have much to add (though I'm highly sceptical of it as an arbitrary-value lock-in mechanism on cosmic timelines). I think the...
Katja and I date, so yes, I am biased, but I really think that’s a pretty unimportant fact about her
Congrats to both of you on your great catches! Say hi to her for me - it's been a while :)
More generally, what incentives exist? In a normal for-profit environment there are various reasons for individuals to start their own company, to seek promotion, to do a good job, to do a bad job, to commit institutional fraud etc - we typically think of these as mainly financial, and often use the adage 'follow the money' as a methodology to try and discover these phenomena, to encourage the good ones and discourage the bad.
I want to know what the equivalent methodology would be to find out equivalent phenomena at EA organisations.
EA organizations don't really have a great need for nurses, for history professors, for plumbers, etc.
Fwiw, I was involved with an EA organisation that struggled for years with the admin of finding trustworthy tradespeople (especially plumbers).
More generally, I think a lot of EA individuals would benefit a lot from access to specialist knowledge from all sorts of fields, if people with that knowledge were willing to offer it free or at a discount to others in the community.
I have a stronger version of the same concerns, fwiw. I can't imagine a 'Long Reflection' that didn't involve an extremely repressive government clamping down on private industry every time a company tried to do anything too ambitious, and that didn't effectively promote some caste of philosopher kings above all others to the resentment of the populace. It's hard to believe this could lead to anything other than substantially worse social values.
I also don't see any a priori reason to think 'reflecting' gravitates people towards moral truth or better values. Philosophers have been reflecting for centuries, and there's still very little consensus among them or any particular sign that they're approaching one.
Are there reasonably engaging narrative tropes (or could we invent effective new ones) that could easily be recycled in genre fiction to promote effective altruist principles, in much the same way that e.g. the noble savage trope can easily be used to promote ecocentric philosophies, the no-one-gets-left-behind trope promotes localism, etc?
A steelmanned version of the best longtermist argument(s) against AI safety as the top priority cause area.
How can we make effective altruism more appealing to political conservatives without alienating engaged liberals? If there is an inevitable trade-off between the two, what is the optimal equilibrium, how close to it are we, and can we get closer?
Write a concrete proposal for a scalable bunker system that would be robust and reliable enough to preserve technological civilisation in the event of human extinction on the surface due to e.g. nuclear winter or biopandemics. How much would it cost? Given that many people assert it would be much easier than settling other planets, why hasn't anyone started building such systems en masse, and how could we remove whatever the blocker is?
Investigating incentives in EA organisations. Is money still the primary incentive? If not, how should we think about the intra-EA economy?
What are the most likely scenarios in which we don't see transformative AI this century or perhaps for even longer? Do they require strong assumptions about (e.g.) theory of mind?
Is there an underexplored option to fund early stage for-profits that seem to have high potential social value? Might it sometimes be worth funding them in exchange for basically 0 equity so that it's comparatively easy for them to raise further funding the normal way?
If we take utilitarianism at face value, what are the most likely candidates for the physical substrate of 'a utilon'? Is it plausible there are multiple such substrates? Can we usefully speculate on any interesting properties they might have?
Some empirical research into the fragile world hypothesis, in particular with reference to energy return on investment (EROI). Is there a less extreme version of 'The great energy descent' that implies that average societal EROI could stay at sustainable levels but only absent shocks, and that one or two big shocks could push it below that point and make it a) impossible to recover or b) possible to recover but only after such a major restructuring of our economy that it would resemble the collapse of civilisation?
An updated version of Luisa Rodriguez's 'What is the likelihood that civilizational collapse would cause technological stagnation? (outdated research)' post that took into account her subsequent concerns, and looked beyond 'reaching an industrial revolution' to 'rebuilding an economy large enough to eventually become spacefaring'.
That's sad. For anyone interested in why they shut down (I'd thought they had an indefinitely sustainable endowment!), the archived version of their website gives some info: