All of Ward's Comments + Replies

I've respected Rethink's work for a long time. Excited to see you guys expanding into the longtermist space!

Could you clarify how your stated "room for more funding" relates to your budget plans? For example, the maximally ambitious RFMF for longtermism in 2022 is $5.34m, but the maximally ambitious budget for 2022 is $2.5m. Is the idea that you would hold onto some money for 2023?

abrahamrowe
2y
This is correct - the RFMF is how much we think we'd like to raise between now and the end of 2022 to spend in 2022 and 2023, according to the budgets above.

Do people's wildness intuitions change when we think about human lives or life-years, instead of calendar years?

7 billion of 115 billion humans ever are living today. Given today's higher life expectancies, about 15% of all experience so far has been experienced by people who are alive right now.
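As a back-of-envelope check, the ~15% figure is reproducible under illustrative assumptions. The two average-lifespan parameters below are my own rough guesses (historical averages dragged down by infant mortality), not the figures behind the original estimate:

```python
# Sanity check of the "share of all experience so far" claim.
# Population counts come from the comment above; the two average-lifespan
# figures are assumed for illustration.

TOTAL_HUMANS_EVER = 115e9      # humans who have ever lived (from the comment)
ALIVE_TODAY = 7e9              # humans alive today (from the comment)
AVG_LIFE_YEARS_PAST = 13       # assumed: low historical average (infant mortality)
AVG_YEARS_LIVED_SO_FAR = 30    # assumed: mean age of people alive today

experience_ever = TOTAL_HUMANS_EVER * AVG_LIFE_YEARS_PAST
experience_alive_now = ALIVE_TODAY * AVG_YEARS_LIVED_SO_FAR

share = experience_alive_now / experience_ever
print(f"Share of all experience so far lived by people alive now: {share:.0%}")
```

With these (assumed) inputs the share comes out around 14%, in the same ballpark as the 15% quoted above.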

So the idea that the reference class "humans alive today" is special among pre-expansion humans doesn't feel that crazy. (There's a related variant of the doomsday argument -- if we expect population to grow rapidly until space expansion, many generations might rea... (read more)

Holden Karnofsky
3y
I think your last comment is the key point for me - what's wild is how early we are, compared to the full galaxy population across time.

Nice to see the assumptions listed out. My worries about the future turning out well by default are part of why I'd like to see more work done in clarifying and sharing our values, and more work on questioning this assumption (eg looking into Pinker's work, thinking about why the trends might not hold). I'm aware of SI and some negative-utilitarian approaches to this, but I'd love to see links on whatever else is out there.

I think most EAs share these premises with you:

1. Some people live in relative material abundance, and face significant diminishing returns to having more material wealth.

2. However, many problems remain, including poverty and catastrophic risk.

3. It would be valuable for funds to go towards reducing these problems, and thus quite valuable to successfully spread values that promote donating towards them.

You also make a couple of interesting claims:

4. We can feasibly cause a 'paradigm shift' in values by convincing people to tithe.

5. The benefit... (read more)

Interesting talk. I agree with the core model of great power conflict being a significant catastrophic risk, including via leading to nuclear war. I also agree that emerging tech is a risk factor, and emerging tech governance a potential cause area, albeit one with uncertain tractability.

I would have guessed AI and bioweapons were far more dangerous than space mining and gene editing in particular; I'd have guessed those two were many decades off from having a significant effect, and preventing China from gene editing seems low-tractability. Geoengin... (read more)

Ward
5y

This seems like a good point, and I was surprised this hadn't been addressed much before. Digging through the forum archives, there are some relevant discussions from the past year:

... (read more)

To provide more information on the status of the EA Angel Group: Benjamin Pence and I are working together on the EA Angel Group (and its parent project Altruism.vc). The EA Angel Group is operating, although it has received a lower-than-expected number of referrals from angels within the group, which has significantly reduced the benefit the group currently provides to its members.

I anticipated this concern months ago and tried to resolve the issue, but was delayed by ~5 months in our attempt to discuss sharing grant proposals with EA Grants. I felt like ... (read more)

He brought this up in a conversation with me; I don't know if he's written it up anywhere.

Max_Daniel
5y
If I recall correctly, this paper by Tom Sittler also makes the point you paraphrased as "some reasonable base rate of x-risk means that the expected lifespan of human civilization conditional on solving a particular risk is still hundreds or thousands of years", among others.
Pablo
5y
I see. Thanks.
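The base-rate point in the thread above can be made concrete with a constant-hazard model: if some residual annual extinction probability r remains even after a particular risk is fully solved, expected remaining lifespan is 1/r years. The rate below is an illustrative assumption, not a figure from the paper:

```python
# Illustrative constant-hazard model: with a fixed annual extinction
# probability, survival time is geometric, so the expected remaining
# lifespan is 1 / annual_risk years. The 0.2%/year rate is assumed.

def expected_lifespan_years(annual_risk: float) -> float:
    """Mean of a geometric survival process with per-year hazard `annual_risk`."""
    return 1.0 / annual_risk

# Even after fully solving one risk, a residual 0.2%/year base rate
# caps expected civilizational lifespan at ~500 years:
print(expected_lifespan_years(0.002))  # 500.0
```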

Thanks for the thoughts, Michael. Sorry for the minor thread necro - Milan just linked me to this comment from my short post on short-termism.

The first point feels like a crux here.

On the second, the obvious counterargument is that it applies just as well to e.g. murder; in the case where the person is killed, "there is no sensible comparison to be made" between their status and that in the case where they are alive.

You could still be against killing for other reasons, like effects on friends of the victim, but I think most people have an intui... (read more)

MichaelPlant
5y
Person-affecting views are those which hold that not all possible people matter. Once you've decided who matters (the present, necessary, or actual people), it's a separate question how you think about the badness of death for those who matter. You can say creating people isn't good/bad, but that it's still bad if already-existing people die early.

FWIW, I also find Epicureanism about the badness of death rather plausible, i.e. I don't think we can compare the value for someone of living longer. I recognise this makes me something of a 'moral hipster', but I think the arguments for it are pretty good, although I won't get into them here. As such, I think death, whether by murder or other means, isn't bad for someone.

I think we tend to have the intuition that murder is wrong over and above what it deprives the deceased of, which is why we think it's just as wrong to murder someone with 1 month vs 10 years left to live. Hence I think you're getting at a deontological intuition, not one about value.

I find the stuff about posthumous harms and benefits very implausible. If Socrates wants us to say 'Socrates' and we do, does it really make his life go better?

One terminology for this is introduced in "Governing Boring Apocalypses", a recent x-risk paper. They call direct bad things like nuclear war an "existential harm", but note that two other key ingredients are necessary for existential risk: existential vulnerability (reasons we are vulnerable to a harm) and existential exposure (ways those vulnerabilities get exposed). I don't fully understand the vulnerability/exposure split, but I think e.g. nuclear posturing, decentralized nuclear command structures, and launch-on-warning system... (read more)

You mention nanotechnology; in a similar vein, understanding molecular biology could help deal with biotech x-risks. Knowing more about plausible levels of manufacture/detection could help us understand the strategic balance better, and there’s obviously also concrete work to be done in building eg better sensors.

On the more biochemical end, there's mechanical and biological engineering work to be done for cultured meat.

Also, wrt non-physics careers, a major one is quantitative trading (eg at Jane Street), which seems to benefit from a physics-y mindset and use some similar tools. I think there’s even a finance firm that mostly hires physics PhDs.

Interesting, scary stuff. I've been reading up on biotech/bioweapons a bit as part of my research on AI strategy. They're interesting both because there could be dangerous effects from AI improving bioweapons*, and because they're a relatively close analogue to AI by virtue of their dual-use, concealability, and reasonably large-scale effects.

Do you know of good sources on bioweapons strategy, offense-defense dynamics, and potential effects of future advances? I'm reading Koblentz's Living Weapons right now and it's quite good... (read more)

DannyBressler
5y
I don't think there is much publicly available on this topic besides Koblentz's work (also check out his 2003 article in International Security). The "strategy of conflict" as it pertains to bioweapons is something we thought about, but we don't discuss it much in our paper. Some thoughts:

Historically, bioweapons research has focused on diseases that are not transmissible person to person, like Tularemia, Anthrax, Q Fever, and Botulism. If you dump a bunch of anthrax spores from an airplane over a city, you would kill a lot of people (I recall seeing a study estimating that dumping anthrax over a large city would kill ~200,000 people) even though it's not transmissible person to person.

Japanese Unit 731 used Plague, which is transmissible person to person, but used it well behind enemy lines on enemy cities to limit collateral damage. This made this sort of weapon more of a poor man's strategic bombing campaign that could wreak havoc on civilian populations by dumping swarms of plague-infested fleas on enemy territory. There is a lot of uncertainty about the actual numbers, but Unit 731 may have killed more civilians in China through these sorts of attacks than the US killed Japanese civilians in Japan through the nuclear bombs and firebombings (the exact numbers are hard to know, in large part because it's hard to attribute civilian deaths directly to the released bioweapons, since the weapon is meant to present itself as a natural epidemic, as we mention in the paper). There is evidence that the Japanese did sustain some collateral damage from their plague attacks against China during the war: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1200679/.

Starting a pandemic transmissible person to person in the enemy's country is probably a bad idea, because it could turn into a global pandemic with significant collateral damage to your own country. Though it should be noted that using other WMDs like nuclear weapons also has significant, and pos

However, this approach is a bit silly because it does not model the acceleration of research: If there are no other donors in the field, then our donation is futile because £10,000 will not fund the entire effort required.

Could you explain this more clearly to me please? With some stats as an example it'll likely be much clearer. Looking at the development of the Impossible Burger seems a fair phenomenon to base GFI's model on, at least for now and at least insofar as it is being used to model a GFI donation's counterfactual impact in supporting simil

... (read more)
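One way to read the "acceleration of research" point in the quoted passage is that a donation doesn't need to fund the entire effort: it buys a small forward shift in the date the research succeeds. A minimal sketch of that framing, with every parameter value assumed purely for illustration (these are not GFI's numbers):

```python
# Sketch of a counterfactual-acceleration model: a donation brings the
# research breakthrough slightly earlier rather than funding all of it.
# All parameter values are assumptions for illustration, not GFI estimates.

def impact_from_acceleration(donation, annual_field_budget, annual_benefit):
    """Value of moving success earlier by donation/annual_field_budget years."""
    years_accelerated = donation / annual_field_budget
    return years_accelerated * annual_benefit

# A £10,000 donation to a field spending £5m/year accelerates success by
# ~0.002 years (under 18 hours); if success is worth £1bn/year, that small
# shift is still worth roughly £2m in counterfactual terms.
print(impact_from_acceleration(10_000, 5_000_000, 1_000_000_000))
```

The point is that under this model a donation far smaller than the total cost of the effort still has non-trivial expected impact, whereas the all-or-nothing framing makes it look futile.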

This is interesting. I'm strongly in favor of having rough models like this in general. Thanks for sharing!

Edit suggestions:

  • STI says "what percent of bad scenarios should we expect this to avert", but the formula uses it as a fraction. Probably best to keep the formula and change the wording.

  • Would help to clarify that TXR is a probability of X-risk. (This is clear after a little thought/inspection, but might as well make it as easy to use as possible.)

Quick thoughts:

  • It might be helpful to talk in terms of research-years rather than resea

... (read more)
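To make the percent-vs-fraction point concrete, here's a minimal reconstruction of how such inputs presumably combine. The names TXR and STI follow the post; the formula itself is my assumption, not the author's exact model:

```python
# Minimal sketch showing why STI must enter the formula as a fraction even
# though the wording asks for a percent. TXR/STI names follow the post;
# the combination below is an assumed reconstruction, not the exact model.

def absolute_risk_reduction(txr: float, sti_percent: float) -> float:
    """txr: probability of the x-risk; sti_percent: percent of bad scenarios averted."""
    sti_fraction = sti_percent / 100.0   # convert the stated percent to a fraction
    return txr * sti_fraction

# A 5% x-risk where the intervention averts 10% of bad scenarios
# yields roughly a 0.5-percentage-point absolute risk reduction:
print(absolute_risk_reduction(0.05, 10))
```

Feeding the percent in directly without the division would overstate the result by a factor of 100, which is exactly the wording/formula mismatch flagged above.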

Thanks for sharing! This seems like good news, and I'm glad they're looking at safety issues along so many different axes.

However, I'm a bit confused as to what interventions like this are meant/expected to accomplish. It seems like the long-term result of this kind of intervention would be a recovery of the mosquito population as the modified mosquitoes' descendants got outcompeted by mosquitoes without the genes.

Is the idea that mosquito populations are small enough (relative to the number of modified ones introduced) that they might be eradicated entirely, ... (read more)

Avi Norowitz
8y
Ashwin, Oxitec takes the following strategy:

1. Issue repeated releases of large numbers of male transgenic mosquitoes over 4-6 months to suppress the mosquito population to very low levels.
2. Issue repeated releases of lower numbers of male mosquitoes after that to prevent resurgence of the mosquito population.

See this video, starting from 4:50: https://www.youtube.com/watch?v=5XGcYoeHMMY#t=4m50s

Avi
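A toy simulation makes the logic of that two-phase strategy visible: heavy suppression alone is undone by regrowth, while smaller maintenance releases keep the population pinned down. The growth rate and suppression fractions below are made up for illustration, not Oxitec's figures:

```python
# Toy logistic-growth model of the two-phase release strategy described
# above. All rates and suppression fractions are illustrative assumptions.

def simulate(months, suppression):
    """Mosquito population (fraction of carrying capacity) under a monthly
    suppression schedule; suppression[i] is the fraction removed in month i."""
    pop, growth = 1.0, 0.5                       # start at carrying capacity
    for kill_fraction in suppression[:months]:
        pop += growth * pop * (1.0 - pop)        # logistic regrowth
        pop *= (1.0 - kill_fraction)             # transgenic-male releases
    return pop

# Phase 1: six months of heavy releases; Phase 2: lighter maintenance releases.
suppressed = simulate(24, [0.8] * 6 + [0.4] * 18)

# Without the maintenance phase, the population rebounds toward carrying capacity.
rebound = simulate(24, [0.8] * 6 + [0.0] * 18)

print(f"with maintenance: {suppressed:.4f}, without: {rebound:.4f}")
```

In this toy run the maintained population stays near zero while the unmaintained one climbs back toward its original level, matching the resurgence worry raised in the parent comment.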

Hey! Just happened upon this article while searching for something else. Hope the necro isn't minded.

I wanted to point out that since this article was written--and especially in the last year--basic income at least has become a lot more mainstream. There's the (failed) Swiss referendum, and apparently Finland and YCombinator are both running basic income trials as well. (More locally, there's of course the GiveDirectly UBI trial as well.)

Anecdotally, it seems like these events have also been accompanied by many more people (in my particular bubble) being... (read more)