All of Arepo's Comments + Replies

That's sad. For anyone interested in why they shut down (I'd thought they had an indefinitely sustainable endowment!), the archived version of their website gives some info:

Over time FHI faced increasing administrative headwinds within the Faculty of Philosophy (the Institute’s organizational home).  Starting in 2020, the Faculty imposed a freeze on fundraising and hiring.  In late 2023, the Faculty of Philosophy decided that the contracts of the remaining FHI staff would not be renewed.  On 16 April 2024, the Institute was closed down.

By inference, if you are one of those copies, the 'moral worth' of your own perceived torture will be 1/10-billionth of its normal level. So, selfishly, that's a huge upside - I might selfishly prefer being one of 10 billion identical torturees as long as I uniquely get a nice back scratch afterwards, for example.

Space lasers don't seem as much of a threat as Jordan posits. They have to be fired from somewhere. If that's within the solar system they're targeting, then that system will still have plenty of time to see the object that's going to shoot them arriving. If they're much further out, it becomes much harder both to aim them correctly and to provide enough power to keep them focused, and the source needs to be commensurately more powerful (as in more expensive to run), and with a bigger lens, so more visible while under construction and more vulnerable to co... (read more)

I don't think anyone's arguing current technology would allow self-sufficiency. But part of the case for offworld settlements is that they very strongly incentivise technology that would.

In the medium term, an offworld colony doesn't have to be fully independent to afford a decent amount of security. If it can a) outlast a catastrophe that is global but confined to Earth (e.g. a nuclear winter or airborne pandemic) and b) get back to Earth once things are safer, it still makes your civilisation more robust.

I broadly agree with the arguments here. I also think space settlement has a robustness to its security that no other defence against GCRs does - it's trivially harder to kill a larger number of people spread more widely than it is to kill off a handful on a single planet. Compare this to technologies designed to regulate a single atmosphere to protect against biorisk, AI safety mechanisms that operate on AGIs whose ultimate nature we still know very little of, global political institutions that could be subverted or overthrown, bunkers on a single planet, etc, al... (read more)

I strongly agree with the first half of this post - bunkers and refuges are pretty bad as a defence against global catastrophes.

Your solution makes a lot less sense to me. It seems like it has many of the same problems you're trying to avoid - it won't be pressure tested until the world collapses. In particular, if it's an active part of a local community, that implies people will be leaving and reentering regularly, which means any virus with a long incubation period could be in there before people know it's a problem. 

Also, I feel like your whole li... (read more)

SimonKS
7d
> it won't be pressure tested until the world collapses.

So I'm saying it should not only be pressure tested but be in continuous operation, in order to flush out failure modes before a catastrophic scenario plays out; it needs to be providing value well before an extinction-level event.

I agree with your point about a bio/virus with a long incubation time. I think the only way round this would be to have shifts like on an oil platform or mine, where different groups spend a period of time (say 1-3 months) working in isolation from the general population.

> I don't see how digging underground is going to make it better at water treatment, electricity generation etc than the equivalent aboveground services.

It may not make it better, but it would make it more resilient: an open water treatment plant, for example, is going to become immediately polluted with nuclear dust if it is built in a standard outdoor setting, whereas an isolated underground facility would be protected from that risk. A geothermal power plant may not be more efficient than a wind turbine or solar panels, but again is more resilient to hurricanes or nuclear winter.

> Fwiw my take is that offworld bases have much better longterm prospects - they're pressure tested every moment of every day

I agree they are better as they are pressure tested by necessity; to do this on-world we have to simulate the necessity. In my mind it's good to have three options: 1) don't destroy Earth's ecosystem, 2) have off-world bases, 3) have Citadelles or something similar. They each address different needs: 1) addresses reliability, 2) redundancy, 3) disaster recovery.

Hey Corentin,

The calculators are intentionally silent on the welfare side, on the thought that in practice it's much easier to treat as a mostly independent question. That's not to say it actually is independent, and ideally I would like the output to include more information about what the pathways to either extinction or an interstellar state, so that people can do some further function on the output. I do think it's reasonable, even on a totalising view, to prioritise improving future welfare conditional on it existing and largely ignoring the question ... (read more)

Answer by Arepo, Apr 09, 2024

I don’t feel so comfortable talking to community health at the moment.

Can you say why? That seems like the obvious first step, so it would make it easier to offer a useful alternative if you could share some part of your hesitation. I don't know if it would feel any safer to message a stranger, but feel free to DM me your concerns if you prefer (or you can email me if you don't want them stored on the EA forum). I'm not a support professional, but I maybe have enough detachment from, and enough skin in, the EA community to help you figure out a next step.

Fwiw I've... (read more)

Answer by Arepo, Apr 08, 2024

Triodos (the most ethical bank I could find)

Fwiw I have never been terribly impressed by Triodos' ethos. The last time I looked at the sort of projects they fund, they were e.g. investing in alternative medicine and divesting from nuclear energy, the former of which seems surreal to call 'ethical' and the latter of which is a disastrous strategy for the environment.

I would much rather invest in something with a higher interest rate and donate 50% of the difference (or whatever seems appropriate).

Yeah, it sounds like this might not be appropriate for someone with your credences, though I'm confused by what you say here:

I mentioned point/mean probability estimates, but my upper bounds (e.g. 90th percentile) are quite close, as they are strongly limited by the means. For example, if one's mean probability is 10^-10, the 90th percentile probability cannot be higher than 10^-9, otherwise the mean probability would be higher than 10^-10 (= (1 - 0.90)*10^-9), which is the mean. So my point remains as long as you think my point/mean estimates are reasonable.

... (read more)
Vasco Grilo
11d
Yes, I was referring to the arithmetic mean of a probability distribution. To illustrate, if I thought the probability of a given event was uniformly distributed between 0 and 1, the mean (best guess) probability would be 50 % (= (0 + 1)/2). I agree the median, geometric mean, or geometric mean of odds are usually better than the mean to aggregate forecasts[1]. However, if we aggregated multiple probability distributions from various models/forecasters, we would end up with a final probability distribution, and I am saying our final point estimate corresponds to the mean of this distribution. Jaime Sevilla illustrated this here.

Maybe it helps to think about this in the context of a distribution which is not over a probability. If we have a distribution over possible profits, and our expected profit is 100 $, it cannot be the case that the 90th percentile profit is 1 M$, because in this case the expected profit would be at least 100 k$ (= (1 - 0.90)*1*10^6), which is much larger than 100 $. You may want to check Joe Carlsmith's thoughts on this topic in the context of AI risk.

No, at least not in any depth. I think permanent collapse would require very large population and infrastructure losses, but I see these as very unlikely, at least in the absence of TAI. I estimated a probability of 3.29*10^-6 of the climatic effects of nuclear war before 2050 killing 50 % of the global population (based on the distribution I defined here for the famine death rate). Pandemics would not directly cause infrastructure loss. Indirectly, there could be infrastructure loss due to people stopping maintenance activities out of fear of being infected, but I guess this requires a level of lethality which makes the pandemic very unlikely.

Besides more specific considerations like the above, I have consistently ended up arriving at tail risk estimates much lower than canonical ones from the effective altruism community. So, instead of regarding these as a prior as I used to do, now I im
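
A minimal numerical check of the percentile bound Vasco describes (this is essentially Markov's inequality for a nonnegative quantity; the lognormal shape below is an arbitrary illustrative assumption, not anything from his models):

```python
import numpy as np

# Markov's inequality for nonnegative X: P(X >= t) <= E[X] / t.
# With t = 10 * mean, P(X >= 10 * mean) <= 0.1, so the 90th percentile
# of any nonnegative distribution can never exceed 10x its mean.
mean = 1e-10  # Vasco's mean probability estimate

rng = np.random.default_rng(0)
samples = rng.lognormal(mean=0.0, sigma=2.0, size=1_000_000)  # arbitrary shape
samples *= mean / samples.mean()  # rescale so the sample mean is exactly 1e-10

p90 = np.quantile(samples, 0.90)
print(p90, p90 <= 10 * mean)  # the 90th percentile stays at or below 1e-9
```

The bound is only attained by a two-point distribution putting 10 % of its mass at ten times the mean, which is why the 90th percentile "cannot be higher than 10^-9" rather than being strictly lower.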

Hm, the link works ok for me. What happens when you open it? It can be a bit shonky on mobile phones - maybe try using it on a laptop/desktop if you haven't.

It's called 'EA coworking and lounge', if that helps.

Thanks for the kind words, David. And apologies - I'd forgotten you'd published those explicit estimates. I'll edit them in to the OP.

My memory of WWOtF is that Will talks about the process, but other than giving a quick estimate of '90% chance we recover without coal, 95% chance with' he doesn't do as much quantifying as you and Luisa. 

Also Lewis Dartnell talked about the process extensively in The Knowledge, but I don't think he gives any estimate at all about probabilities (the closest I could find was in an essay for Aeon where he opined that 'an industrial revolution without coal would be, at a minimum, very difficult').

Hey Vasco, thanks for the in-depth reply, and thanks again for trawling over this behemoth :)

Let me take these points in order:

I think the annual risk of human extinction not involving transformative AI (TAI) is astronomically low.

I'm highly sceptical of point probability estimates for events for which we have virtually no information - that's exactly why I made these tools. Per Dan Schwarz's recent post, it seems much more important to me to give an interactive model into which people can put their own credences, so that we can then debate the input ... (read more)

Vasco Grilo
11d
Thanks for the follow up! I strongly upvoted it.

I mentioned point/mean probability estimates, but my upper bounds (e.g. 90th percentile) are quite close, as they are strongly limited by the means. For example, if one's mean probability is 10^-10, the 90th percentile probability cannot be higher than 10^-9, otherwise the mean probability would be higher than 10^-10 (= (1 - 0.90)*10^-9), which is the mean. So my point remains as long as you think my point/mean estimates are reasonable.

Makes sense. I liked that post. I think my comment was probably overly critical, and not related specifically to your series. I was not clear, but I meant to point to the greater value of using standard cost-effectiveness analyses (relative to using a model like yours) given my current empirical beliefs (astronomically low non-TAI extinction risk).

Fair! I suspect the number of lives saved, maybe weighted by the reciprocal of the population size, would still be a good proxy for the benefits of interventions affecting civilisational collapse. When I tried to look into this quantitatively in the context of nuclear war, improving worst case outcomes did not appear to be the driver of the overall expected value. So I am guessing using standard cost-effectiveness analyses, based on a metric like lives saved per dollar, would continue to be a fair way of assessing interventions. In any case, I assume your model would still be useful for people with different views!

I meant full recovery in the sense of reaching the same state we are in now, with roughly the same chances of becoming a benevolent interstellar civilisation going forward. For my estimate of a probability of 0.0513 % of not fully recovering, yes, because I was assuming human extinction in my calculation. If the disaster is less severe, my probability of not fully recovering would be even lower. If one thinks the probability of extinction or permanent collapse without TAI is astronomically low (as I do), the probability of

I'm happy to talk you through using it if you're finding it confusing.

If you (or anyone else) reading this wants to catch me for some support, I'm on the EA Gather Town as much as possible (albeit currently in New Zealand time), so you can log in there and ping me :)

Deborah W.A. Foulkes
12d
Can't find the EA Gather Town via this link or on the Gather app. Can you give its exact handle/label? Thanks.

I think it would have been better to speak up way, way sooner,

If and when this postmortem ever does happen, I hope they will address this, too. The lack of public engagement on the subject with the rest of the movement following the FTX disaster seems a comparable lapse of responsibility to anything that might have happened in the time leading up to it.

Rebecca's comments seem consistent with Beckstead being part of her concern, though.

I found this interesting, and a model I've recently been working on might be relevant - I've emailed you about it. One bit of feedback:
 

Please reach out to hello@futuresearch.ai if you want to get involved!

You might want to make it more clear what kind of collaboration you're hoping to receive.

dschwarz
16d
I suppose I left it intentionally vague :-). We're early, and are interested in talking to research partners, critics, customers, job applicants, funders, forecaster copilots, and writers. We'll list specific opportunities soon; consider this to be our big hello.

I think you gave up on your theory being maximally consistent when you opted for diversity of experience as a metavalue. Most people don't actually consider their own positive experiences cheapened by someone on the other side of the world having a similar experience.

Also, if you're doing morality by intuition (a methodology I think has no future), then I suspect most people would much sooner drop 'diversity of experience good' than 'torture bad'.

This. I'm imagining some Abrodolph Lincoler-esque character - Abronard Willter, maybe - putting me in a brazen bull and cooing 'Don't worry, this will all be over soon. I'm going to create 10 billion more of you, also in brazen bulls, so the fact that I continue to torture you personally will barely matter.'

Isaac King
1d
Creating identical copies of people is not claimed to sum to less moral worth than one person. It's claimed to sum to no more than one person. Torturing one person is still quite bad.

most people intrinsically value diversity of experience, and see a large number of very similar lives as less of a good thing.

Especially in such a contentious argument, I think it's bad epistemics to link to a page with some random dude saying he personally believes x (and giving no argument for it) with the linktext 'most people believe x'.

Isaac King
20d
I didn't mean it to be evidence for the statement, just an explanation of what I meant by the phrase. Do you disagree that most people value that? My impression is that wireheading and hedonium are widely seen as undesirable.
MichaelStJules
21d
Also, I'd guess most people who value diversity of experience mean that only for positive experiences. I doubt most would mean repeated bad experiences aren't as bad as diverse bad experiences, all else equal.

I hadn't even thought of that! Yeah, that's some pretty impressive hypocrisy.

huw
21d

It's wild for a news organisation that routinely witnesses and reports on tragedies without intervening (as is standard journalistic practice, for good reason) to not recognise it when someone else does it.

This doesn’t seem so different from p-zombies, and probably some moral thought experiments.

I'm not sure what you mean here. That the simulation argument doesn't seem different from those? Or that the argument that 'we have no evidence of their existence and therefore shouldn't update on speculation about them' is comparable to what I'm saying about the simulation hypothesis? 

If the latter, fwiw, I feel the same way about p-zombies and (other) thought experiments. They are a terrible methodology for reasoning about anything, very occasionally the only ... (read more)

I think assuming that this is purely based on optics is unwarranted. Like I argued at the time, talk of 'optics' is kind of insulting to the everyperson, carrying the implication that the irrational public will misunderstand the +EV of such a decision. Whereas I contend that there's a perfectly rational Bayesian update that people should do towards an organisation being poorly run or even corrupt when that org spends large sums of money on vanity projects which they justify with a vague claim about having done some CBA that they don't want to share.

Meanwhi... (read more)

To be clear, what I am criticizing here is not operating the venue while the sale is going on, or setting some kind of target for the operators in terms of quality-adjusted-events or estimates of counterfactual events caused, that would allow them to continue operating the venue. 

I totally agree that observing someone spending money on a "vanity project" would be evidence that they are poorly run or corrupt, but like, Wytham would not be a vanity project if it were to make economic sense for EV or the EA community at large to operate. So whether a project is a vanity project is dependent on a cost-effectiveness analysis (which I don't think really has occurred in this case).

  1. We may ourselves be simulated in a similar way without knowing it, if our entire reality is also simulated. We wouldn't necessarily have access to what the simulation is run on.

It seems weird to meaningfully update in favour of some concrete view on the basis that something might be true but that

  1. we have no evidence for it, and 
  2. if it is true then everything we know about the universe is equally undermined
MichaelStJules
22d
I agree there is something a bit weird about it, but I'm not sure I endorse that reaction. This doesn't seem so different from p-zombies, and probably some moral thought experiments.

I don't think it's true that everything we know about the universe would be equally undermined. Most things wouldn't be undermined at all, or at worst would need to be slightly reinterpreted. Our understanding of physics in our universe could still be about as reliable (depending on the simulation), and so would anything that follows from it. There's just more stuff outside our universe.

I guess you can imagine short simulations where all our understanding of physics is actually just implanted memories and fabricated records. But in doing so, you're throwing away too much of the causal structure that apparently explains our beliefs and makes them reliable. Longer simulations can preserve that causal structure.

Is there an online version of the case for the fading qualia argument? This feels a bit abstract without it...

David Mathers
23d
The best argument for functionalism* in my opinion is that there aren't really any good alternatives. If mental state kinds aren't functional kinds, they'd presumably have to be neuroscientific kinds. But if that's right, then we can already know now that aliens without neurons aren't conscious. Which seems wild to me: how can we possibly know if aliens are conscious till we meet them, and observe their behavior and how it depends on what goes on inside them? And surely once we do meet them, no one is going to say "oh, consciousness is this sort of neurobiological property; we looked in their head with our scanner and they have no neurons, problem solved, we know they aren't conscious."

People seem to want there to be some intermediate view that says "oh, of course there might be conscious aliens with different biology, we just mean to rule out weird functional duplicates of humans, like a robot controlled by radio signals running the same program as a human**", but it's really unclear how to do that in a principled way. (And I suspect the root of the desire to do so is a sort of primitive sense that living matter can have feelings but dead matter can't, which I think people would consciously disavow if they understood it was driving their views.)

*There's an incredibly technical complication here about the fact that "functionalism" is usually defined in opposition to mind-body dualism, but in the current context it makes more sense to classify certain forms of dualism as functionalist, since they agree with functionalism about what guarantees something is conscious in the actual world. But I'm going to ignore it because I don't think I can explain it to non-philosophers quickly and easily.

**https://en.wikipedia.org/wiki/China_brain

Partly from a scepticism about the highly speculative arguments for 'direct' longtermist work - on which I think my prior is substantially lower than most of the longtermist community (though I strongly suspect selection effects, and that this scepticism would be relatively broadly shared further from the core of the movement).

Partly from something harder to pin down, that good outcomes do tend to cluster in a way that e.g. Givewell seem to recognise, but AFAIK have never really tried to account for (in late 2022, they were still citing that post while say... (read more)

Hey Johannes :)

To be clear, I think the original post is uncontroversially right that it's very unlikely that the best intervention for A is also the best intervention for B. My claim is that, when something is well evidenced to be optimal for A and perhaps well evidenced to be high tier for B, you should have a relatively high prior that it's going to be high tier or even optimal for some related concern C.

Where you have actual evidence available for how effective various interventions are for C, this prior is largely irrelevant - you look at the evidence... (read more)

jackva
23d
Interesting, thanks for clarifying! Just to fully understand -- where does that intuition come from? Is it that there is a common structure to high impact? (e.g. if you think APs are good for animals you also think they might be good for climate, because some of the goodness comes from the evidence of modular scalable technologies getting cheap and gaining market share?)

This statement was very surprising to me:
 

The “concerned” participants (all of whom were domain experts) ... the “skeptical” group (mainly “superforecasters”) 

Can you say more about your selection process? This seems very important for understanding how much to update on this. Did you:

a) decide you needed roughly equally balanced groups of sceptics vs concerned, start with superforecasters, find that they were overwhelmingly sceptics, and therefore specifically seek domain experts because they were concerned

b) decide you needed roughl... (read more)

Interesting stuff. I'm sceptical a priori, but it would be amazing if this kind of thing replicated. I think there's a typo:

… and four to six months after treatment, they…

In contrast, after...


 

Nicholas Kruus
1mo
Thanks for catching the typo! I've updated the post to fix it. Not sure how that happened... I think it's reasonable to be skeptical. The results seem somewhat too good to be true. That said, the study seemed to have been carefully conducted, so, if the results are bunk, I doubt it was intentional.

I think it would be a little bit of a surprising and suspicious convergence if the best interventions to improve human health (e.g. GiveWell's top charities) were also the best to reliably improve global capacity

Fwiw, I think Greg's essay is one of the most overweighted in forum history (as in, not necessarily overrated, but people put way too much weight on its argument). It's a highly speculative argument with no real-world grounding, and in practice we know of many well-evidenced socially beneficial causes that do seem convergently beneficial in ot... (read more)

I don't think these examples illustrate that "bewaring of suspicious convergence" is wrong.

For the two examples I can evaluate (the climate ones), there are co-benefits, but there isn't full convergence with regards to optimality.

On air pollution, the most effective interventions for climate are not the most effective interventions for air pollution, even though decarbonization is good for both.
See e.g. here (where the best intervention for air pollution would be one that has low climate benefits, reducing sulfur in diesel; and I think if that chart were... (read more)

Ben Millwood
1mo
Maybe a nitpick, but idk if this is suspicious convergence -- I thought the impact on economic outcomes (presumably via educational outcomes) was the main driver for it being considered an effective intervention?

greater confidence in EEV lends itself to supporting longshots to reduce x-risk or otherwise seek to improve the long-term future in a highly targeted, deliberate way.

This just depends on what you think those EEVs are. Long-serving EAs tend to lean towards thinking that targeted efforts towards the far future have higher payoff, but that also has a strong selection effect. I know many smart people with totalising consequentialist sympathies who are sceptical enough of the far future that they prefer to donate to GHD causes. None of them are at all active in the EA movement, and I don't think that's coincidence.

I'm in New Zealand atm, so this is uncomfortably early for me, but I'll try and make it!

Patrick Liu
1mo
Thanks for your interest Arepo!  We will try to make future events more flexible to accommodate our global audience.  For this current event, we needed a time that fit the presenters.  We hope to have a recording online after the event and have them in the slack to answer follow up questions.  

I think much of this criticism is off. There are things I would disagree with Nuno on, but most of what you're highlighting doesn't seem to fairly represent his actual concerns.

Nuño never argues for why the comments they link to shouldn't be moderated

He does. Also, I suspect his main concern is with people being banned rather than having their posts moderated.

Nuño doesn't make points that EA is too naïve/consequentialist/ignoring common-sense enough. Instead, they don't think we've gone far enough into that direction. See in Alternate Visions of EA, the cl

... (read more)
JWS
1mo

Hey Arepo, thanks for the comment. I wasn't trying to deliberately misrepresent Nuño, but I may have made inaccurate inferences, and I'm going to make some edits to clear up confusion I might have introduced.  Some quick points of note:

  • On the comments/Nuño's stance, I only looked at the direct comments (and parent comment where I could) in the ones he linked to in the 'EA Forum Stewardship' post, so I appreciate the added context. And having read that though, I can't really square a disagreement about moderation policy with "I disagree with the EA For
... (read more)

Cool! I just submitted a project - minor bit of feedback is that it's slightly irritating to have the 'project subtitle' field be mandatory.

Jeroen De Ryck
1mo
Thanks for the feedback and submitting a project! I'll make that field non-mandatory when I'm doing some updates :)

Great post - I'm embarrassed to have missed it til now! One key point I disagree with:

there might be interventions that reduce risk a lot for not very long or not very much but for a long time. But actions that drastically reduce risk and do so for a long time are rare.

I think there are two big possible exceptions to the latter claim: benign AI and becoming sustainably multiplanetary. EAs have discussed the former a lot, and I don't have much to add (though I'm highly sceptical of it as an arbitrary-value lock-in mechanism on cosmic timelines). I think the... (read more)

arvomm
1mo
Thank you for adding various threads to the conversation Arepo! I don't disagree with what I take to be your main point: benign AI and interstellar travel are likely to have a big impact. I will say though, while their success might significantly reduce risk, and for a long time, any given intervention is unlikely to make major progress towards them. Hence, at the intervention level, I'm tempted to remain sceptical about the abundance of interventions that dramatically reduce risk for a long time.

Katja and I date, so yes, I am biased, but I really think that’s a pretty unimportant fact about her

Congrats to both of you on your great catches! Say hi to her for me - it's been a while :)

More generally, what incentives exist? In a normal for-profit environment there are various reasons for individuals to start their own company, to seek promotion, to do a good job, to do a bad job, to commit institutional fraud etc - we typically think of these as mainly financial, and often use the adage 'follow the money' as a methodology to try and discover these phenomena, to encourage the good ones and discourage the bad. 

I want to know what the equivalent methodology would be to find out equivalent phenomena at EA organisations.

EA organizations don't really have a great need for nurses, for history professors, for plumbers, etc.

Fwiw, I was involved with an EA organisation that struggled for years with the admin of finding trustworthy tradespeople (especially plumbers).

More generally, I think a lot of EA individuals would benefit a lot from access to specialist knowledge from all sorts of fields, if people with that knowledge were willing to offer it free or at a discount to others in the community. 

Jason
1mo
At the risk of going off-topic, look for plumbing firms that pay their employees a flat hourly rate rather than a commission based on how much revenue they generate. That's what my plumber said he looked for when researching plumbers for out-of-town family members. In general, finding someone who has more than enough work and bills at an hourly rate is often a sound strategy when one is dependent on the contractor's professional judgment as to what needs to be done and how long it should take. Under those circumstances, the busy hourly-rate contractor has much less incentive to recommend unnecessary work or stretch it out. The downside is that, because they have more than enough work, they may not be immediately available. . . .

I have a stronger version of the same concerns, fwiw. I can't imagine a 'Long Reflection' that didn't involve an extremely repressive government clamping down on private industry every time a company tried to do anything too ambitious, and that didn't effectively promote some caste of philosopher kings above all others to the resentment of the populace. It's hard to believe this could lead to anything other than substantially worse social values.

I also don't see any a priori reason to think 'reflecting' gravitates people towards moral truth or better values. Philosophers have been reflecting for centuries, and there's still very little consensus among them or any particular sign that they're approaching one.

Answer by Arepo, Mar 02, 2024

Are there reasonably engaging narrative tropes (or could we invent effective new ones) that could easily be recycled in genre fiction to promote effective altruist principles, in much the same way that e.g. the noble savage trope can easily be used to promote ecocentric philosophies, the no-one-gets-left-behind trope promotes localism, etc?

Answer by Arepo, Mar 02, 2024

A steelmanned version of the best longtermist argument(s) against AI safety as the top priority cause area.

Vasco Grilo
1mo
Thanks for the suggestion. For reference, readers interested in this topic can check the posts on AI risk skepticism.

Answer by Arepo, Mar 02, 2024

How can we make effective altruism more appealing to political conservatives without alienating engaged liberals? If there is an inevitable trade-off between the two, what is the optimal equilibrium, how close to it are we, and can we get closer?

Answer by Arepo, Mar 02, 2024

Write a concrete proposal for a scalable bunker system that would be robust and reliable enough to preserve technological civilisation in the event of human extinction on the surface due to e.g. nuclear winter or biopandemics. How much would it cost? Given that many people assert it would be much easier than settling other planets, why hasn't anyone started building such systems en masse, and how could we remove whatever the blocker is?

Vasco Grilo
1mo
Thanks for the suggestion. @Ulrik Horn, who is working on a project related to refuges, may have some thoughts.

I think the reason is that they would be very far from passing a standard cost-benefit analysis. I estimated the cost-effectiveness of decreasing nearterm annual extinction risk from asteroids and comets via refuges at 6.04*10^-10 bp/T$. For a population of 8 billion, and a refuge which remained effective for 10 years, that would be a cost per life saved of 207 T$ (= 10^12/(6.04*10^-10*10^-4*8*10^9*10)), i.e. one would have to spend 2 times the size of the global economy to save a life. In reality, the cost-effectiveness would be much higher because refuges would work in non-extinction catastrophes too, but it would remain very far from passing a standard governmental cost-benefit analysis.
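
A minimal sketch of the arithmetic in that estimate (all inputs are Vasco's figures, not established values):

```python
# Back-of-the-envelope reproduction of the cost-per-life-saved figure above.
risk_reduction_bp_per_Tdollar = 6.04e-10  # basis points of extinction risk per T$
bp = 1e-4             # one basis point, expressed as a probability
population = 8e9      # lives at stake if extinction is averted
years_effective = 10  # how long the refuge stays effective

lives_saved_per_dollar = (
    risk_reduction_bp_per_Tdollar * bp * population * years_effective / 1e12
)
print(f"Cost per life saved: ${1 / lives_saved_per_dollar:.3g}")  # ~2.07e14, i.e. ~207 T$
```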

Answer by Arepo, Mar 02, 2024

Investigating incentives in EA organisations. Is money still the primary incentive? If not, how should we think about the intra-EA economy?

Stan Pinsent
1mo
By incentives do you mean incentives for taking one job over another, like pay, benefits, type of work, etc.?

Answer by Arepo, Mar 02, 2024

What are the most likely scenarios in which we don't see transformative AI this century or perhaps for even longer? Do they require strong assumptions about (e.g.) theory of mind?

Vasco Grilo
1mo
Hi,

Relatedly, I liked Explosive Growth from AI: A Review of the Arguments.

Answer by Arepo, Mar 02, 2024

Is there an underexplored option to fund early-stage for-profits that seem to have high potential social value? Might it sometimes be worth funding them in exchange for basically 0 equity, so that it's comparatively easy for them to raise further funding the normal way?

Answer by Arepo, Mar 02, 2024

If we take utilitarianism at face value, what are the most likely candidates for the physical substrate of 'a utilon'? Is it plausible there are multiple such substrates? Can we usefully speculate on any interesting properties they might have?

Answer by Arepo, Mar 02, 2024

Some empirical research into the fragile world hypothesis, in particular with reference to energy return on investment (EROI). Is there a less extreme version of 'The great energy descent' that implies that average societal EROI could stay at sustainable levels but only absent shocks, and that one or two big shocks could push it below that point and make it a) impossible to recover or b) possible to recover but only after such a major restructuring of our economy that it would resemble the collapse of civilisation?

Answer by Arepo, Mar 02, 2024

An updated version of Luisa Rodriguez's 'What is the likelihood that civilizational collapse would cause technological stagnation? (outdated research)' post that took into account her subsequent concerns, and looked beyond 'reaching an industrial revolution' to 'rebuilding an economy large enough to eventually become spacefaring'.
