All of Liam_Donovan's Comments + Replies

Even if you think (eg) abortion access is bad on the margin

If you believe this, doesn't it flip the sign of the "very best interventions" (ie you would believe they are exceptionally bad interventions)?

6
Zach Stein-Perlman
2y
Insofar as the relevant interventions are only assessed by something like "number of abortions counterfactually caused," yes. But within the "reproductive rights" domain, there are interventions that affect other relevant dimensions too.

Endorsement by the Democratic congressional leadership. There are plenty of low-information voters who hardly follow politics but generally prefer Democrats to Republicans, so in the primary they are more likely to vote for the candidate endorsed by those people who aim to get a Democratic majority in Congress.

17.4% of the citizen voting age population of OR-6 is Hispanic

https://davesredistricting.org/maps#viewmap::9b2b545f-5cd2-4e0d-a9b9-cc3915a4750f

2
_pk
2y
Wow, davesredistricting.org is a great tool, thanks for posting that! I'll just note that according to the link you posted, OR-6 has the highest % Hispanic representation in the state by nearly 5%. So this is a definitional issue: is it accurate to call the most Hispanic district in the 14th most Hispanic state (per Wikipedia) "not a heavily Hispanic area or anything"?

So now that it's over, can someone explain what the heck was up with SBF donating $6m to HMP in exchange for a $1m donation to Flynn? From an outside perspective it seems tailor-made to look vaguely suspicious and generate bad press, without seeming to produce any tangible benefits for Flynn or EA.

7
Aleks_K
2y
I don't know anything about the SBF donation to HMP, but it seems plausible that the HMP support for Flynn could well have been positive had it not led to a large pushback from Latino Democrats and BOLD PAC spending for Salinas, so whoever is responsible for getting HMP involved probably didn't realise that there was a risk this might happen.

It seems like these observations could be equally explained by Paul correctly having high credence in long timelines, and giving advice that is appropriate in worlds where long timelines are true, without explicitly trying to persuade people of his views on timelines. Given that, I'm not sure there's any strong evidence that this is good advice to keep in mind when you actually do have short timelines.

I'd be interested in joining the Slack group

2
Peter Wildeford
4y
Email me at peter@rethinkpriorities.org and I'll send an invite back to your email.

I'd like to take Buck's side of the bet as well if you're willing to bet more

What was her rationale for prioritizing hand soap over food?

It's probably the lizardman constant showing up again -- if ~5% of people answer randomly and <5% of the population are actually veg*ns, then many of the self-reported veg*ns will have been people who answered randomly.
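
As a rough sketch of that arithmetic (the prevalence and option-count figures below are illustrative assumptions, not survey data):

```python
# Toy model: what share of self-reported veg*ns could be random answerers?
true_vegn_rate = 0.04   # assumed true prevalence of veg*ns (<5%)
random_rate = 0.05      # lizardman constant: share answering randomly
n_options = 2           # a random answerer picks the veg*n option half the time

random_vegn_reports = random_rate / n_options             # 2.5% of the sample
honest_vegn_reports = true_vegn_rate * (1 - random_rate)  # 3.8% of the sample
share_noise = random_vegn_reports / (random_vegn_reports + honest_vegn_reports)
print(f"{share_noise:.0%} of self-reported veg*ns answered randomly")  # ~40%
```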

5
David_Moss
4y
I think this is a good explanation of at least part of the phenomenon. As you note, when we sample the general population and only 5% of people report being vegetarian or vegan, even a small number of lizardpersons answering randomly, oddly, or deliberately trolling could make up a large part of that 5%.

That said, even in surveys which deliberately target only identified vegetarians or vegans (so 100% of the sample identified as vegetarian or vegan), large percentages then say that they eat some meat. Rethink Priorities has an unpublished survey (report forthcoming soon) which sampled exclusively people who had previously identified as vegetarian or vegan (and then asked them again in the survey whether they identified as vegetarian or vegan), and we found just over 25% of those who answered affirmatively to the latter question still seemed to indicate in a food frequency questionnaire that they consumed some meat product. So that suggests to me that there's likely something more systematic going on, where some reasonably large percentage of people identify as vegetarian or vegan despite eating meat (e.g. because they eat meat very infrequently and think that's close enough).

Of course, it's also possible that the first sampling to find self-identified vegetarians or vegans picked up a lot of lizardpersons, meaning there was a disproportionate number of lizardpersons in the second sampling, and hence a disproportionate number of lizardpersons who then identified as vegetarian or vegan in our survey. And perhaps lizardpersons don't just answer randomly but are disproportionately likely to identify as vegetarian or vegan when asked, which might also contribute.

I think it's misleading to call that evidence that marriage causes shorter lifespans (not sure if that's your intention)

2
Linch
4y
I mean, there's literally a strong causal relationship between marriage and having a shorter lifespan. I assume sociologists are usually referring to other effects, however.

Do you have a link and/or a brief explanation of how they convincingly established causality for the "married women have shorter lives" claim?

1
Linch
4y
I don't know what the time period is, but at the risk of stating the obvious, the historical rate of maternal mortality was much higher than it is today in the First World. Our World in Data [1] estimates historical rates at 0.5-1% per birth. So assuming 6 births per woman [2], you get 3-6% of married women dying from childbirth alone, at a relatively young age.

[1] https://ourworldindata.org/maternal-mortality
[2] https://ourworldindata.org/fertility-rate#the-number-of-children-per-woman-over-the-very-long-run
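
As a quick sketch of the compounding behind that 3-6% figure (assuming an independent per-birth risk, using the ranges above):

```python
# Cumulative risk of dying in childbirth across several births,
# assuming the per-birth mortality risk is independent across births.
births = 6
for per_birth_risk in (0.005, 0.01):  # 0.5% and 1% per birth
    cumulative = 1 - (1 - per_birth_risk) ** births
    print(f"{per_birth_risk:.1%}/birth -> {cumulative:.1%} over {births} births")
# 0.5%/birth -> ~3.0%; 1.0%/birth -> ~5.8%, matching the 3-6% range
```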

The next logical step is to evaluate the novel ideas, though, where a "cadre of uber-rational people" would be quite useful IMHO. In particular, a small group of very good evaluators seems much better than a large group of less epistemically rational evaluators who could be collectively swayed by bad reasoning.

I think the argument is that we don't know how much expected value is left, but our decisions will have a much higher expected impact if the future is high-EV, so we should make decisions that would be very good conditional on the future being high-EV.
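
A toy illustration of that argument, with all the numbers made up: if impact conditional on a high-EV future dwarfs impact otherwise, the high-EV branch dominates the expectation.

```python
# Expected impact decomposes over possible worlds; all numbers are illustrative.
p_high_ev = 0.3        # assumed probability the future is high-EV
impact_if_high = 1000  # your decision matters a lot in that world
impact_if_low = 1      # and very little otherwise

expected_impact = p_high_ev * impact_if_high + (1 - p_high_ev) * impact_if_low
print(expected_impact)  # 300.7 -- almost all of it comes from the high-EV
# branch, so it pays to optimize for the world where the future is high-EV
```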

Have you read this paper suggesting that there is no good evidence of a connection between climate change and the Syrian war? I found it quite persuasive.

What is a Copernican prior? I can't find any Google results.

3
Linch
4y
Wikipedia gives the physicist's version, but EAs (and maybe philosophers?) use it more broadly. https://en.wikipedia.org/wiki/Copernican_principle The short summary I use to describe it is that "we" are not that special, for various definitions of the word we. Some examples on FB.
7
[anonymous]
4y
It's just an informal way to say that we're probably typical observers. It's named after Copernicus because he found that the Earth isn't as special as people thought.
4
JP Addison
4y
I don't know the history of the term or its relationship to Copernicus, but I can say how my forgotten source defined it. Suppose you want to ask, "How long will my car run?" Suppose it's a weird car that has a different engine and manufacturer than other cars, so those cars aren't much help. One place you could start is with how long it's currently been running. This is based on the prior that you're observing it, on average, halfway through its life. If it's been running for 6 months so far, you would guess 1 year. There surely exists a more rigorous definition than this, but that's the gist.
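
A minimal sketch of that estimate, with Gott's version of the confidence interval added on (the function and numbers here are illustrative, not from my forgotten source):

```python
# Sketch of the "observed halfway through its life" prior (Gott's delta-t
# argument). Assumption: we see the car at a uniformly random point in
# its total running life.

def lifespan_estimate(age_so_far, confidence=0.95):
    """Median total lifespan and a confidence interval, given elapsed age."""
    point = 2 * age_so_far  # median elapsed fraction is 1/2
    lo_f = (1 - confidence) / 2  # e.g. 0.025
    hi_f = 1 - lo_f              # e.g. 0.975
    # If a fraction f of the lifespan has elapsed, total = age / f.
    return point, (age_so_far / hi_f, age_so_far / lo_f)

point, (low, high) = lifespan_estimate(0.5)  # car has run 6 months
print(point)       # 1.0 year: the "guess 1 year" above
print(low, high)   # ~0.51 to 20.0 years at 95% confidence
```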

You're estimating there are ~1000 people doing direct EA work? I would have guessed around an order of magnitude less (~100-200 people).

8
Aaron Gertler
4y
It depends on what you count as "direct". But if you consider all employees of GiveWell-supported charities, for example, I think you'd get to 1000. You can get to 100 just by adding up employees and full-time contractors at CEA, 80K, Open Phil, and GiveWell. CHAI, a single research organization, currently has 42 people listed on its "people" page, and while many are professors or graduate students who aren't working "full-time" on CHAI, I'd guess that this still represents 25-30 person-years per year of work on AI matters.

What if rooms at the EA Hotel were cost-price by default, and you allocated "scholarships" based on a combination of need and merit, as many US universities do? This might avoid a negative feedback cycle (because you can retain the most exceptional people) while reducing costs and making the EA Hotel a less attractive target for unaligned people to take resources from.

With the charity structure we're setting up, charging cost price will also amount to a grant in the form of a partial subsidy. Charging anyone less than market rate (~double cost price) means they are a beneficiary of the charity. So in practice everyone will have to apply for a grant of free accommodation, board and stipend, and the amount given (total or partial subsidy) will depend on their need and merit.

What does this mean in the context of the EA Hotel? In particular, would your point apply to university scholarships as well, and if not, what breaks the analogy between scholarships and the Hotel?

Maybe the most successful recruitment books directly target people 1-2 stages away in the recruitment funnel? In the case of HPMOR/Crystal Society, that would be quantitatively minded people who enjoy LW-style rationality rather than those who are already interested in AI alignment specifically.

Doesn't that assume EAs should value the lives of fetuses and e.g. adult humans equally?

Due to politicization, I'd expect reducing farm animal suffering/death to be much cheaper/more tractable per animal than reducing abortion is per fetus; choosing abortion as a cause area would also imperil EA's ability to recruit smart people across the political spectrum. I'd guess that saving a fetus would need to be ~100x more important in expectation than saving a farm animal for reducing abortions to be a potential cause area; in an EA framework, what grounds are there for believing that to be true?

Note: It would also be quite costly for EA as a movement to generate a better-researched estimate of the parameters due to the risk of politicizing the movement.

7
Larks
5y
There are a lot of EAs who think that human lives are significantly more important than animal lives, and that future lives matter a lot, so this does not seem totally unreasonable. The most recent piece I read on the subject was this piece from Scott, with two methodologies that suggested one human was worth 320-500 chickens.

Having said that, I think he mis-analysed the data slightly: people who selected "I don't think animals have moral value commensurable with the value of a human, and will skip to the end" should have been coded as assigning really high value to humans, not dropped from the analysis. Making this adjustment gives a median estimate of each human being worth just over 1,000 chickens. Bear in mind that half of all people have above-median estimates, so it could be very worthwhile for them. Using my alternative coding, the 75th percentile answer was a human being worth 999,999,999 chickens. So even though it might not be worthwhile for some EAs, it definitely could be for others.
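
A sketch of that recoding, with made-up response values; the point is just how coding the "skip" answers as a very high value, rather than dropping them, moves the median:

```python
import statistics

# Hypothetical responses: chickens judged equal in value to one human.
# None marks "I don't think animals have commensurable value" skips.
responses = [100, 320, 500, 1000, None, None, 2000, 999_999_999]

dropped = [r for r in responses if r is not None]                   # skips excluded
recoded = [r if r is not None else 999_999_999 for r in responses]  # skips = very high

print(statistics.median(dropped))  # 750.0 -- the lower, skip-dropping median
print(statistics.median(recoded))  # 1500.0 -- recoding pushes the median up
```
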
Reducing global poverty, and improving farming practices, lack philosophically attractive problems (for a consequentialist, at least) - yet EAs work heavily on them all the same.

I think this comes from an initial emphasis on short-term, easily measured interventions (promoted by the "$x saves a life" meme, the drowning child argument, etc.) among the early cluster of EA advocates. Obviously, the movement has since branched out into cause areas that trade certainty and immediate benefit for the chance of higher impact, but these tend to be clustered in... (read more)

2
kbog
5y
This is too ad hoc, dividing three or four cause areas into two or three categories, to be a reliable explanation.

What does it mean to be "pro-science"? In other words, what might a potential welfarist, maximizing, impartial, and non-normative movement that doesn't meet this criterion look like?

I ask because I don't have a clear picture of a definition that would be both informative and uncontroversial. For instance, the mainstream scientific community was largely dismissive of SIAI/MIRI for many years; would "proto-EAs" who supported them at that time be considered pro-science? I assume that excluding MIRI does indeed count as controversial, but then I don't have a clear picture of what activities/causes being "pro-science" would exclude.


edit: Why was this downvoted?

As a scientist, I consider science a way of learning about the world, and not what a particular group of people say. I think the article is fairly explicit about taking a similar definition of "science-aligned":

(i) the use of evidence and careful reasoning to work out...

(...)

  • Science-aligned. The best means to figuring out how to do the most good is the scientific method, broadly construed to include reliance on careful rigorous argument and theoretical models as well as data.

There is usually a vast body of existing relevant work on a topic across va

... (read more)
7
Ozzie Gooen
5y
My quick take on why this was downvoted: someone may have glanced at it quickly and assumed you were being negative towards MIRI or EA. I think that by "science-aligned", the post means using the principles and lessons of the scientific method and similar tools, rather than agreeing with "the majority of scientists" or the like. The mainstream scientific community seems likely to be skeptical of EA too, but that doesn't mean that EA would have to be similarly skeptical of itself. That said, whether one actually follows the scientific method and similar for some practices, especially in cases where they aren't backed by many other communities, could of course be up for debate.

An example of what I had in mind was focusing more on climate change when running events like Raemon's Question Answering hackathons. My intuition says that it would be much easier to turn up insights like the OP than insights of "equal importance to EA" (however that's defined) in e.g. technical AI safety.

The answer to your question is basically what I phrased as a hypothetical before:

participation in the EA movement as one way to bring oneself closer to God through the theological virtue of charity.

I was involved in EA at university for 2 years before coming to believe Catholicism is true, and it didn't seem like Church dogma conflicted with my pro-EA intuitions at all, so I've just stayed with it. It helped that I wasn't ever an EA for rigidly consequentialist reasons; I just wanted to help people and EA's analytical approach was a na... (read more)

I downvoted the post because I didn't learn anything from it that would be relevant to a discussion of C-GCRs (it's possible I missed something). I agree that the questions are serious ones, and I'd be interested to see a top level post that explored them in more detail. I can't speak for anyone else on this, and I admit I downvote things quite liberally.

Tl;dr the moral framework of most religions is different enough from EA to make this reasoning nonsensical; it's an adversarial move to try to change religions' moral framework but there's potentially scope for religions to adopt EA tools


Like I said in my reply to khorton, this logic seems very strange to me. Surely the veracity of the Christian conception of heaven/hell strongly implies the existence of an objective, non-consequentialist morality? At that point, it's not clear why "effectively doing the most good" in this man... (read more)

1
ZacharyRudolph
5y
I'm not sure I understand your objection, but I feel like I should clarify that I'm not endorsing consequentialism as a sort of moral criterion (that is, the thing in virtue of which something is right or wrong) so much as I take the "effective" part of effective altruism to imply using some sort of nonmoral consequentialist reasoning.

As far as I understand (which isn't far), a Catholic moral framework would still allow for some sort of moral quantification (that some acts are more good than others, or are good to a greater degree), e.g. saints are a thing. If so, then (I think) it seems sensible to say a Catholic could take the results of consequentialist reasoning, as applied to her own framework, as morally motivating reasons to choose one act over another.

My worry is that if that framework holds only one value as most basic, then this consequentialist reasoning might (edit: depending on the value) validly lead to the conclusion that the way to do the most good is something radically different from the things that this subculture tends to endorse, and that this should count towards the concern that this subculture's actions could produce serious disvalue (edit: disvalue from, say, the moral consequentialist's point of view).

On the other hand, if this framework is some sort of pluralist/virtue system (you mentioned a virtue of charity), then yeah, I definitely agree that effective altruism could represent the pursuit of excellence in such a virtue, or that "effectiveness" could be interpreted as a way of saying that the altruist is simply addressing what he takes to be his most stringent obligations with regard to his duty of charity. These, though, I think would count as different arguments (i.e. arguments which make sense to Catholics) than those which utilitarians take to give morally motivating reasons.
3
zdgroff
5y
This is all really interesting, and thank you all for chiming in. Liam, I'm curious—do you adopt EA tools within a Catholic moral framework, or do you practice Catholicism while adopting a different moral framework? I figure your participation in EA is some sort of anecdata.

Thank you! I'm not sure, but I assume that I accidentally highlighted part of the post while trying to fix a typo, then accidentally e.g. pressed "ctrl-v" instead of "v" (I often instinctively copy half-finished posts into the clipboard). That seems like a pretty weird accident, but I'm pretty sure it was just user error rather than anything to do with the EA forum.

This post seems to have become garbled when I tried to fix a typo; any idea how I can restore the original version?

2
JP Addison
5y
Currently a work in progress feature that is admin only. (And has been in that state for a while unfortunately.) I've reverted this post. Do you know what the sequence of events is that caused it to get garbled?

This doesn't seem like a great idea to me for two reasons:

1. The notion of explicitly manipulating one's beliefs about something as central as religion for non-truthseeking reasons seems very sketchy, especially when the core premise of EA relies on an accurate understanding of highly uncertain subjects.

2. Am I correct in saying the ultimate aim of this strategy is to shift religious groups' dogma from (what they believe to be) divinely revealed truth to [divinely revealed truth + random things EAs want]? I'm genuinely not sure if I interpreted the post correctly, but that seems like an unnecessarily adversarial move against a set of organized groups with largely benign goals.

1
Liam_Donovan
5y
This post seems to have become garbled when I tried to fix a typo; any idea how I can restore the original version?

Yeah, I don't think I phrased my comment very clearly.

I was trying to say that, if the Christian conception of heaven/hell exists, then it is highly likely that an objective non-utilitarian morality exists. It shouldn't be surprising that continuing to use utilitarianism within an otherwise Christian framework yields garbage results! As you say, a Christian can still be an EA, for most relevant definitions of "be an EA".

I'm fairly confident the Church does not endorse basing moral decisions on expected value analysis; that says absolutely nothing about the compatibility of Catholicism and EA. For example, someone with an unusually analytical mindset might see participation in the EA movement as one way to bring oneself closer to God through the theological virtue of charity.

1
ZacharyRudolph
5y
You're right. What I was trying to get at was that I presume Catholics would start with different answers to axiological questions like "what is the most basic good?". Where I might offer a welfarist answer, the Church might say "a closeness to God" (I'm not confident in that). Thus, if a Catholic altruist applies the "effective" element of EA reasoning, the way to do the most good in the world might end up looking like aggressive evangelism in order to save the most souls. And if we're trying to convince Catholic priests to encourage the Church to use its resources for usual EA interventions, it seems like you'd need to either employ a different set of arguments than those used to convince welfarists/utilitarians or convince them to adopt a different answer to the question we started with.
3
Kirsten
5y
The set of tools EA provides for considering how to help others, and the network/community, could be useful for any altruist. Utilitarianism is less compatible with Catholicism.

This example of a potentially impactful and neglected climate change intervention seems like good evidence that EAs should put substantially more energy towards researching other such examples. In particular, I'm concerned that the neglect of climate change has more to do with the lack of philosophically attractive problems relative to e.g. AI risk, and less to do with marginal impact of working on the cause area.

2
kbog
5y
Reducing global poverty, and improving farming practices, lack philosophically attractive problems (for a consequentialist, at least) - yet EAs work heavily on them all the same. And climate change does have some philosophical issues with model parameters like discount rates. Admittedly, they are a little more messy and applied in nature than talking about formal agent behavior.
7
Aaron Gertler
5y
My impression is that few people are researching new interventions in general, whether in climate change or other areas (I could name many promising ideas in global development that haven't been written up by anyone with a strong connection to EA). I can't speak for people who individually choose to work on topics like AI, animal welfare, or nuclear policy, and what their impressions of marginal impact may be, but it seems like EA is just... small, without enough research-hours available to devote to everything worth exploring. (Especially considering the specialization that often occurs before research topics are chosen; someone who discovers EA in the first year of their machine-learning PhD, after they've earned an undergrad CS degree, has a strong reason to research AI risk rather than other topics.) Perhaps we should be doing more to reach out to talented researchers in fields more closely related to climate change, or students who might someday become those researchers? (As is often the case, "EAs should do more X" means something like "these specific people and organizations should do more X and less Y", unless we grow the pool of available people/organizations.)

Great answer, thank you!

Do you know of any examples of the "direct work+" strategy working, especially for EA-recommended charities? The closest thing I can think of would be the GiveDirectly UBI trial; is that the sort of thing you had in mind?

It seems like that question would interact weirdly with expectations of future income: as a college student I donate ~1% of expenses, but if I could only save one life, right now, I would probably try to take out a large, high interest loan to donate a large sum. That depends on availability of loans, risk aversion, expectations of future income, etc. much more than it does on my moral values.

Isn't this essentially a reformulation of the common EA argument that the most high-impact ideas are likely to be "weird-sounding" or unintuitive? I think it's a strong point in favor of explicit modelling, but I want to avoid double-counting evidence if they are in fact similar arguments.

2
kbog
5y
Nah, I'm just saying that a curse applies to every method, so it doesn't tell us to use a particular method. I'm excluding arguments from the issue, not bringing them in. So if we were previously thinking that weird causes are good and common sense/model pluralism aren't useful, then we should just stick to our guns. But if we were previously thinking that common sense/model pluralism are generally more accurate anyway, then we should stick with them.

A recent example of this happening might be EA LTF Fund grants to various organizations trying to improve societal epistemic rationality (e.g. by supporting prediction markets)

Can you elaborate on which areas of EA might tend towards each extreme? Specific examples (as vague as needed) would be awesome too, but I understand if you can't give any

Unfortunately I find it hard to give examples that are comprehensible without context that is either confidential or would take me a lot of time to describe. Very very roughly I'm often not convinced by the use of quantitative models in research (e.g. the "Racing to the Precipice" paper on several teams racing to develop AGI) or for demonstrating impact (e.g. the model behind ALLFED's impact which David Denkenberger presented in some recent EA Forum posts). OTOH I often wish that for organizational decisions or in direct feedback more q... (read more)

Does he still endorse the retraction? It's just idle curiosity on my part but it wasn't clear from the comments

A few thoughts this post raised for me (not directed at OP specifically):

1. Does RAISE/the Hotel have a standardized way to measure the progress of people self-studying AI? If so, especially if it's been vetted by AI risk organizations, it seems like that would go a long ways towards resolving this issue.

2. Does "ea organisations are unwilling to even endorse the hotel" refer to RAISE/Rethink Charity (very surprising & important evidence!), or other EA organizations without direct ties to the Hotel?

3. I would be curious what the margin... (read more)

2
toonalfrink
5y
Not yet, but it's certainly a project that is on our radar. We also want to find ways to measure innate talent, so that people can tell earlier whether AIS research would be a good fit for them.

2. RAISE very much does endorse the hotel (especially given that the founder works for and lives at the hotel, and the hotel was integral to their progress over the last 6 months). See e.g. here and here. We have no formal relationship with Rethink Charity (or Rethink Priorities in particular) - individuals at the hotel have applied for and got work from them independently.

3. The marginal cost of adding a new resident when already at ~75% capacity is ~£4k/yr.

6. I wonder too. I wonder also how different it would be if it was done after another 6-12 months of getting established.

re signal boost: any particular reason why?

-13
kbog
5y

Did the report consider increasing access to medical marijuana as an alternative to opioids? If so, what was the finding? (I didn't see any mention while skimming it) My impression was that many leaders in communities affected by opioid abuse see access to medical marijuana as the most effective intervention. One (not particularly good) example

2
egastfriend
5y
Medical marijuana fell outside the scope of our consulting project, but I think the evidence is weak for medical marijuana as a promising intervention: "When researchers extended their analysis through 2013, they found that the association between having any medical marijuana law and lower rates of opioid deaths completely disappeared. Moreover, the association between states with medical marijuana dispensaries and opioid mortality fell substantially as well." https://www.rand.org/news/press/2018/02/06.html It's definitely an interesting/intriguing idea, but it also carries risks of increasing some of the harms associated with marijuana use. Curious to see more evidence come out about it.

Are you saying there are groups who go around inflicting PR damage on generic communities they perceive as vulnerable, or that there are groups who are inclined to attack EA in particular, but will only do so if we are perceived as vulnerable (or something else I'm missing)? I'm having a hard time understanding the mechanism through which this occurs.

5
Freethinkers In EA
5y
It's not necessarily as intentional as that. Some people have certain political goals. They can achieve those goals co-operatively, by engaging people in civil discussion, or adversarially, by protesting/creating negative publicity. If the latter tends to be successful, a greater proportion of people will be drawn towards it. Is that clearer?

The law school example seems like weak evidence to me, since the topics mentioned are essential to practicing law, whereas most of the suggested "topics to avoid" are absolutely irrelevant to EA. Women who want to practice law are presumably willing to engage these topics as a necessary step towards achieving their goal. However, I don't see why women who want to effectively do good would be willing to (or expected to) engage with irrelevant arguments they find uncomfortable or toxic.

If the topics to avoid are irrelevant to EA, it seems preferable to argue that these topics shouldn't be discussed because they are irrelevant than to argue that they shouldn't be discussed because they are offensive. In general, justifications for limiting discourse that appeal to epistemic considerations (such as bans on off-topic discussions) appear to generate less division and polarization than justifications that appeal to moral considerations.

I like the idea of profiting-to-give as a way to strengthen the community and engage people outside of the limited number of direct work EA jobs; however, I don't see how an "EA certification" effectively accomplishes this goal.

I do think there would be a place for small EA-run businesses in fields with:

  • a lot of EAs
  • low barriers to entry
  • sharply diminishing returns to scale

Such a business might plausibly be able to donate at least as much money as its employees were previously donating individually by virtue of their competitive success in the... (read more)

1
Vaidehi Agarwalla
5y
I agree strongly with the last point in this comment, and with the post in general. I have a few responses to the first points. I imagine the EA certification would have many benefits:

  • certification of successful companies could set an example for other companies to follow, and set a high bar for CSR - not just to donate x% but to give it to an effective charity
  • keeping track of EA-Corps as the movement grows, so that they can attract EAs outside the personal networks of the creators
  • spreading EA values beyond the non-profit industry and the tight social networks of current EAs
  • potentially creating a new model for socially-minded businesses to follow (and allowing socially-minded investors a new business model which could have better results than the social benefit company model)

That's a good point, but I don't think my argument was brittle in this sense (perhaps it was poorly phrased). In general, my point is that climate change amplifies the probabilities of each step in many potential chains of catastrophic events. Crucially, these chains have promoted war/political instability in the past and are likely to in the future. That's not the same as saying that each link in a single untested causal chain is likely to happen, leading to a certain conclusion, which is my understanding of a "brittle argument."

On the other hand, I think it's fair to say that e.g. "Climate change was for sure the primary cause of the Syrian civil war" is a brittle argument

I'd previously read that there was substantial evidence linking climate change --> extreme weather --> famine --> Syrian civil war (a major source of refugees). One example: https://journals.ametsoc.org/doi/10.1175/WCAS-D-13-00059.1 This paper claims the opposite, though: https://www.sciencedirect.com/science/article/pii/S0962629816301822.

"The Syria case, the article finds, does not support ‘threat multiplier’ views of the impacts of climate change; to the contrary, we conclude, policymakers, commentators and scholars alike should exercise f... (read more)

2
kbog
5y
Highlight your text and then select the hyperlink icon in the pop-up bar.

I don't think this is indirect and unlikely at all; in fact, I think we are seeing this effect already. In particular, some of the 2nd-order effects of climate change (such as natural catastrophe-->famine-->war/refugees) are already warping politics in the developed world in ways that will make it more difficult to fight climate change (e.g. strengthening politicians who believe climate change is a myth). As the effects of climate change intensify, so will the dangers to other x-risks.

In particular, a plausible path is climate change immiserate... (read more)

2
kbog
5y
AFAIK this is not how the current refugee crisis occurred. The wars in the Middle East / Afghanistan were not caused by climate change. If climate change increases, that will convince people to stop voting for politicians who think it is a myth. You're also relying on the assumption that leaders who oppose immigration will also be leaders who doubt climate change. That may be true in the US right now but as a sweeping argument across decades and continents it is unsubstantiated. It's also unclear if such politicians will increase or decrease x-risks.
3
Pablo
5y
Beware brittle arguments.