AshwinAcharya


AshwinAcharya's Comments

Stefan Schubert: Psychology of Existential Risk and Long-Termism

Nice to see the assumptions listed out. My worries about the future turning out well by default are part of why I'd like to see more work done in clarifying and sharing our values, and more work on questioning this assumption (eg looking into Pinker's work, thinking about why the trends might not hold). I'm aware of SI and some negative-utilitarian approaches to this, but I'd love to see links on whatever else is out there.

An Argument to Prioritize "Tithing to Catalyze a Paradigm Shift and Negate Meta Existential Risk"

I think most EAs share these premises with you:

1. Some people live in relative material abundance, and face significant diminishing returns to having more material wealth.

2. However, many problems remain, including poverty and catastrophic risk.

3. It would be valuable for funds to go towards reducing these problems, and thus quite valuable to successfully spread values that promote donating towards them.

You also make a couple of interesting claims:

4. We can feasibly cause a 'paradigm shift' in values by convincing people to tithe.

5. The benefits of changing society's values in this way don't depend on us spreading norms around effectiveness, or encouraging donations to effective charities in particular.

which I'm tentatively interpreting as

5A. We should focus on a moonshot-like attempt to paradigm-shift society via promoting tithing to any cause, rather than incrementally spreading effective altruist ideas, because the expected value of effort spent on the former is higher.

Could you explain why you think these are true?

On 4, I think the fact that the donation rate has remained steady, despite the existence of many large charities with competent fundraisers who'd love to increase it, provides some evidence against this being easy. As does, you know, human nature.

On 5, I'm not sure what mechanism will channel within-US donations to benefits in the developing world, or to animals or x-risk. I assume the idea is that people will get less selfish and more altruistic? If you mean 5A, I agree that this mechanism will probably kick in eventually, but then I'm back to asking about feasibility.

Brian Tse: Risks from Great Power Conflicts

Interesting talk. I agree with the core model of great power conflict being a significant catastrophic risk, including via leading to nuclear war. I also agree that emerging tech is a risk factor, and emerging tech governance a potential cause area, albeit one with uncertain tractability.

I would have guessed AI and bioweapons were far more dangerous than space mining and gene editing in particular; I'd have guessed those two were many decades off from having a significant effect, and preventing China from gene editing seems low-tractability. Geoengineering seems low-scale, but might be particularly tractable since we already have significant knowledge about how it could take place and what the results might be. Nanotech seems like another low-probability high-impact uncertain-tractability emerging tech, though my sense is it doesn't have as obvious a path to large-scale application as AI or biotech.

--

The Taleb paper mentioned is here: http://www.fooledbyrandomness.com/longpeace.pdf

I don't understand all the statistical analysis, but the table on page 7 is pretty useful for summarizing the historical mean and spread of time-gaps between conflicts of a given casualty scale. As a rule of thumb, average waiting time for a conflict with >= X million casualties is about 14 years * sqrt(X), and the mean absolute deviation is about equal to the average. (This is using the 'rescaled' data, which buckets historical events based on casualties as a proportion of population; this feels to me like a better way to generalize than by considering raw casualty numbers.)

They later mention that the data are consistent with a homogeneous Poisson process, especially for larger casualty scales. That is, the waiting time between conflicts of a given scale can be modeled as exponentially distributed, with a mean waiting time that doesn't change over time. So looking at that table should in theory give you a sense of the likelihood of future conflicts.
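To make that concrete, here's a quick sketch of how the rule of thumb combines with the exponential model. (The 14·√X coefficient is my own rough reading of the table, not an exact fit, so treat the numbers as illustrative.)

```python
import math

def mean_wait_years(casualties_millions):
    """Rough rule-of-thumb mean gap (years) between conflicts with
    >= X million (population-rescaled) casualties: about 14 * sqrt(X)."""
    return 14 * math.sqrt(casualties_millions)

def prob_within(years, casualties_millions):
    """Under a homogeneous Poisson model the waiting time is exponential,
    so P(at least one such conflict within t years) = 1 - exp(-t / mean)."""
    mean = mean_wait_years(casualties_millions)
    return 1 - math.exp(-years / mean)

# Mean gap for a >= 10-million-casualty conflict: 14 * sqrt(10) ~ 44 years
print(round(mean_wait_years(10), 1))   # 44.3
# Implied chance of such a conflict in the next 25 years
print(round(prob_within(25, 10), 2))   # 0.43
```

Note the memorylessness built into the Poisson assumption: the probability is the same regardless of how long it's been since the last such conflict, which is exactly the property that nuclear weapons (or other structural changes) could break.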

But I think it's very likely that, as Brian notes in the talk, nuclear weapons have qualitatively changed the distribution of war casualties, reducing the likelihood of say 1-20 million casualty wars but increasing the proportion of wars with casualties in the hundreds of millions or billions. I suspect that when forecasting future conflicts it's more useful to consider specific scenarios and perhaps especially-relevant historical analogues, though the Taleb analysis is useful for forming a very broad outside-view prior.

EA is vetting-constrained

This seems like a good point, and I was surprised this hadn't been addressed much before. Digging through the forum archives, there are some relevant discussions from the past year:

  • A post by RandomEA suggesting an EA crowdfunding platform (Raemon in the comments suggests having a 'common app' for the various funders, which seems like a good idea)
  • benjamin-pence's announcement of the EA Angel Group (current status unclear)
  • Brendon_Wong's post on three ideas for improving EA funding: a 'kickstarter' for projects, a platform for distributed grantmakers to share expertise and grant opportunities, and improved centralized grantmaking. Lots of interesting discussion in the comments.
  • CEA's EA Meta Fund grants post mentioned a $10K grant for Let's Fund, an org that "helps people to discover and crowdfund breakthrough research, policy and advocacy projects" via performing "in-depth research into fledgling projects" and sharing their recommendations. So far, Let's Fund has a couple of object-level posts about specific areas (improving scientific norms and doing climate change research), and is running a crowdfunding campaign for $75,000.

I found a few more discussions that seemed relevant, but not that many.* One reason this might not be much discussed is that major donors are often pretty well plugged into the community, so their social networks might do a good enough job of bringing valuable opportunities to their attention. (And that goes double for big grantmakers.) Still, it seems to me like a hub of useful centralized information could benefit everyone, if it can establish itself as a Schelling point rather than another competing standard. And improving information flow to small donors alone is obviously still valuable, though I'd worry a bit about duplicated work and low-quality analyses resulting from a norm of distributed vetting.

*It's interesting that most of the relevant posts are from the past year. Maybe a result of the 'talent-constrained' discourse getting people interested in what value small donors can provide beyond more funding for big projects?

Cause profile: mental health

Thanks for the thoughts, Michael. Sorry for the minor thread necro - Milan just linked me to this comment from my short post on short-termism.

The first point feels like a crux here.

On the second, the obvious counterargument is that it applies just as well to e.g. murder; in the case where the person is killed, "there is no sensible comparison to be made" between their status and that in the case where they are alive.

You could still be against killing for other reasons, like effects on friends of the victim, but I think most people have an intuition that the effects of murder on the victim alone are a significant argument against it. For example, it seems strange to say it's fine to kill someone when you're on a deserted island with no hope of rescue, no resource constraints, and when you expect the murder to have no side effects on you.

I guess the counter-counterargument is something like "while they were alive, if they knew they were going to die, they would not approve." But that seems like a fallback to the first point, rather than an affirmation of the second.

A relevant thought experiment: upon killing the other islander, the murderer is miraculously given the chance to resurrect them. This option is only available after the victim is dead; should it matter what their preferences were in life? (I think some people would bite this bullet, which also implies that generally living in accordance with our ancestors' aggregate wishes is good.)

Climate Change Is, In General, Not An Existential Risk

One terminology for this is introduced in "Governing Boring Apocalypses", a recent x-risk paper. They call direct bad things like nuclear war an "existential harm", but note that two other key ingredients are necessary for existential risk: existential vulnerability (reasons we are vulnerable to a harm) and existential exposure (ways those vulnerabilities get exposed). I don't fully understand the vulnerability/exposure split, but I think e.g. nuclear posturing, decentralized nuclear command structures, and launch-on-warning systems constitute a vulnerability, while global-warming-caused conflicts could lead to an exposure of this vulnerability.

(I think this kind of distinction is useful, so we don't get bogged down in debates or motte/baileys over whether X is an x-risk because of indirect effects, but I'm not 100% behind this particular typology.)

What’s the Use In Physics?

You mention nanotechnology; in a similar vein, understanding molecular biology could help deal with biotech x-risks. Knowing more about plausible levels of manufacture/detection could help us understand the strategic balance better, and there’s obviously also concrete work to be done in building eg better sensors.

On the more biochemical end, there's mechanical and biological engineering work to be done for cultured meat.

Also, wrt non-physics careers, a major one is quantitative trading (eg at Jane Street), which seems to benefit from a physics-y mindset and use some similar tools. I think there’s even a finance firm that mostly hires physics PhDs.

[Link] Vox Article on Engineered Pathogens / Global Catastrophic Biorisks

Interesting, scary stuff. I've been reading up on biotech/bioweapons a bit as part of my research on AI strategy. They're interesting both because there could be dangerous effects from AI improving bioweapons*, and because they're a relatively close analogue to AI by virtue of their dual-use, concealability, and reasonably large-scale effects.

Do you know of good sources on bioweapons strategy, offense-defense dynamics, and potential effects of future advances? I'm reading Koblentz's Living Weapons right now and it's quite good, but I haven't found many other leads. (I'd think there would be more papers on this; maybe they're mostly kept secret, or maybe I'm using the wrong keywords.)

*My impression from Koblentz is that foreseeable advances in biotech aren't hugely destabilizing, since bioattacks aren't a good strategic threat; military locations can be pretty effectively hardened against them for not-unbearable costs. One danger I'm curious about is the scope of potential attacks in 20-30 years; could there be devastating, hard-to-trace attacks on civilian populations?
