All of Liam_Donovan's Comments + Replies

Coronavirus Research Ideas for EAs

I'd be interested in joining the Slack group

2 · Peter Wildeford · 2y: Email me at [] and I'll send an invite back to your email.
What are the key ongoing debates in EA?

I'd like to take Buck's side of the bet as well if you're willing to bet more

COVID-19 brief for friends and family

What was her rationale for prioritizing hand soap over food?

Is vegetarianism/veganism growing more partisan over time?

It's probably the lizardman constant showing up again -- if ~5% of people answer randomly and <5% of the population are actually veg*ns, then many of the self-reported veg*ns will have been people who answered randomly.

5 · David_Moss · 2y: I think this is a good explanation of at least part of the phenomenon. As you note, where we do samples of the general population and only 5% of people report being vegetarian or vegan, then even a small number of lizardpersons answering randomly, oddly, or deliberately trolling could make up a large part of the 5%.

That said, I note that even in surveys which deliberately target only identified vegetarians or vegans (so 100% of people in the sample identified as vegetarian or vegan), large percentages then say that they eat some meat. Rethink Priorities has an unpublished survey (report forthcoming soon) which sampled exclusively people who had previously identified as vegetarian or vegan (and then asked them again in the survey whether they identified as vegetarian or vegan), and we found just over 25% of those who answered affirmatively to the latter question still seemed to indicate that they consumed some meat product in a food frequency questionnaire. So that suggests to me that there's likely something more systematic going on, where some reasonably large percentage of people identify as vegetarian or vegan despite eating meat (e.g. because they eat meat very infrequently and think that's close enough).

Of course, it's also possible that the first sampling to find self-identified vegetarians or vegans sampled a lot of lizardpersons, meaning that there was a disproportionate number of lizardpersons in the second sampling, and thus a disproportionate number of lizardpersons who then identified as vegetarian or vegan in our survey. And perhaps lizardpersons don't just answer randomly but are disproportionately likely to identify as vegetarian or vegan when asked, which might also contribute.
Love seems like a high priority

I think it's misleading to call that evidence that marriage causes shorter lifespans (not sure if that's your intention)

2 · Linch · 2y: I mean, there's literally a strong causal relationship between marriage and having a shorter lifespan. I assume sociologists are usually referring to other effects however.
Love seems like a high priority

Do you have a link and/or a brief explanation of how they convincingly established causality for the "married women have shorter lives" claim?

1 · Linch · 2y: I don't know what the time period is, but at the risk of stating the obvious, the historical rate of maternal mortality is much higher than it is today in the First World. Our World in Data [1] estimates historical rates at 0.5-1% per birth. So assuming 6 births per woman [2], you get 3-6% of married women dying from childbirth alone, at a relatively young age. [1] [] [2] []
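The cumulative figure in Linch's comment can be checked with a quick back-of-the-envelope calculation. This is a hypothetical illustration (the helper function is mine, not from the comment), and it assumes each birth carries an independent risk of death for the mother:

```python
# Back-of-the-envelope maternal mortality estimate, assuming each
# birth carries an independent per-birth risk of death for the mother.
def cumulative_risk(per_birth_risk: float, births: int) -> float:
    """Probability of dying in at least one of `births` births."""
    return 1 - (1 - per_birth_risk) ** births

# Historical per-birth risk of ~0.5-1%, over 6 births:
low = cumulative_risk(0.005, 6)   # roughly 3%
high = cumulative_risk(0.01, 6)   # roughly 6%
```

At probabilities this small, the naive sum (6 × 0.5% = 3%) and the compounded figure differ only slightly, so the 3-6% range in the comment holds either way.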
Love seems like a high priority

The next logical step is to evaluate the novel ideas, though, where a "cadre of uber-rational people" would be quite useful IMHO. In particular, a small group of very good evaluators seems much better than a large group of less epistemically rational evaluators who could be collectively swayed by bad reasoning.

[AN #80]: Why AI risk might be solved without additional intervention from longtermists

I think the argument is that we don't know how much expected value is left, but our decisions will have a much higher expected impact if the future is high-EV, so we should make decisions that would be very good conditional on the future being high-EV.

8 things I believe about climate change

Have you read this paper suggesting that there is no good evidence of a connection between climate change and the Syrian war? I found it quite persuasive.

Are we living at the most influential time in history?

What is a Copernican prior? I can't find any Google results.

3 · Linch · 2y: Wikipedia gives the physicist's version, but EAs (and maybe philosophers?) use it more broadly. [] The short summary I use to describe it is that "we" are not that special, for various definitions of the word we. Some examples [] on FB.
7 · SoerenMind · 2y: It's just an informal way to say that we're probably typical observers. It's named after Copernicus because he found that the Earth isn't as special as people thought.
4 · JP Addison · 2y: I don't know the history of the term or its relationship to Copernicus, but I can say how my forgotten source defined it. Suppose you want to ask, "How long will my car run?" Suppose it's a weird car that has a different engine and manufacturer than other cars, so those cars aren't much help. One place you could start is with how long it's currently been running for. This is based on the prior that you're observing it on average halfway through its life. If it's been running for 6 months so far, you would guess 1 year. There surely exists a more rigorous definition than this, but that's the gist.
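JP's car example is the standard "delta-t" point estimate. As a minimal sketch (the helper function is hypothetical, not from any of these comments): if we observe something at a uniformly random moment in its lifespan, on average we catch it halfway through, so the expected total lifetime is twice its observed age.

```python
# Copernican ("delta-t") point estimate: assuming we observe an object
# at a uniformly random point in its lifespan, we see it on average
# halfway through, so the estimated total lifetime is twice its age.
def copernican_estimate(observed_age: float) -> float:
    """Estimated total lifetime given the age observed so far."""
    return 2 * observed_age

# A car that has already run for 6 months is guessed to last 1 year total.
total = copernican_estimate(0.5)
```

This is only a point estimate; a fuller treatment would put a confidence interval around it, but it captures the "we are not special observers" intuition from the other replies.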
EA Leaders Forum: Survey on EA priorities (data and analysis)

You're estimating there are ~1000 people doing direct EA work? I would have guessed around an order of magnitude less (~100-200 people).

8 · Aaron Gertler · 2y: It depends on what you count as "direct". But if you consider all employees of GiveWell-supported charities, for example, I think you'd get to 1000. You can get to 100 just by adding up employees and full-time contractors at CEA, 80K, Open Phil, and GiveWell. CHAI, a single research organization, currently has 42 people listed on its "people" page [], and while many are professors or graduate students who aren't working "full-time" on CHAI, I'd guess that this still represents 25-30 person-years per year of work on AI matters.
EA Hotel Fundraiser 5: Out of runway!

What if rooms at the EA Hotel were cost-price by default, and you allocated "scholarships" based on a combination of need and merit, as many US universities do? This might avoid a negative feedback cycle (because you can retain the most exceptional people) while reducing costs and making the EA Hotel a less attractive target for unaligned people to take resources from.

With the charity structure we're setting up, charging cost price will also amount to a grant in the form of a partial subsidy. Charging anyone less than market rate (~double cost price) means they are a beneficiary of the charity. So in practice everyone will have to apply for a grant of free accommodation, board and stipend, and the amount given (total or partial subsidy) will depend on their need and merit.

EA Hotel Fundraiser 5: Out of runway!

What does this mean in the context of the EA Hotel? In particular, would your point apply to university scholarships as well, and if not, what breaks the analogy between scholarships and the Hotel?

Long-Term Future Fund: August 2019 grant recommendations

Maybe the most successful recruitment books directly target people 1-2 stages away in the recruitment funnel? In the case of HPMOR/Crystal Society, that would be quantitatively minded people who enjoy LW-style rationality rather than those who are already interested in AI alignment specifically.

What opinions that you hold would you be reluctant to express publicly to other EAs?

Doesn't that assume EAs should value the lives of fetuses and e.g. adult humans equally?

What opinions that you hold would you be reluctant to express publicly to other EAs?

Due to politicization, I'd expect reducing farm animal suffering/death to be much cheaper/more tractable per animal than reducing abortion is per fetus; choosing abortion as a cause area would also imperil EA's ability to recruit smart people across the political spectrum. I'd guess that saving a fetus would need to be ~100x more important in expectation than saving a farm animal for reducing abortions to be a potential cause area; in an EA framework, what grounds are there for believing that to be true?

Note: It would also be quite costly for EA as a movement to generate a better-researched estimate of the parameters due to the risk of politicizing the movement.

6 · Larks · 2y: There are a lot of EAs who think that human lives are significantly more important than animal lives, and that future lives matter a lot, so this does not seem totally unreasonable. The most recent piece I read on the subject was this piece [] from Scott, with two methodologies that suggested one human was worth 320-500 chickens. Having said that, I think he mis-analysed the data slightly: people who selected "I don't think animals have moral value commensurable with the value of a human, and will skip to the end" should have been coded as assigning really high value to humans, not dropped from the analysis. Making this adjustment gives a median estimate of each human being worth just over 1,000 chickens. Bear in mind that half of all people have above-median estimates, so it could be very worthwhile for them. Using my alternative coding, the 75th percentile answer was a human being worth 999,999,999 chickens. So even though it might not be worthwhile for some EAs, it definitely could be for others.
Extinguishing or preventing coal seam fires is a potential cause area

Reducing global poverty, and improving farming practices, lack philosophically attractive problems (for a consequentialist, at least) - yet EAs work heavily on them all the same.

I think this comes from an initial emphasis towards short-term, easily measured interventions (promoted by the $x saves a life meme, drowning child argument, etc.) among the early cluster of EA advocates. Obviously, the movement has since branched out into cause areas that trade certainty and immediate benefit for the chance of higher impact, but these tend to be clustered in "... (read more)

2 · kbog · 2y: This is too ad hoc, dividing three or four cause areas into two or three categories, to be a reliable explanation.
Defining Effective Altruism

What does it mean to be "pro-science"? In other words, what might a potential welfarist, maximizing, impartial, and non-normative movement that doesn't meet this criterion look like?

I ask because I don't have a clear picture of a definition that would be both informative and uncontroversial. For instance, the mainstream scientific community was largely dismissive of SIAI/MIRI for many years; would "proto-EAs" who supported them at that time be considered pro-science? I assume that excluding MIRI does indeed count as controversial, but then I don't have a clear picture of what activities/causes being "pro-science" would exclude.

edit: Why was this downvoted?

As a scientist, I consider science a way of learning about the world, and not what a particular group of people say. I think the article is fairly explicit about taking a similar definition of "science-aligned":

(i) the use of evidence and careful reasoning to work out...


  • Science-aligned. The best means to figuring out how to do the most good is the scientific method, broadly construed to include reliance on careful rigorous argument and theoretical models as well as data.

There is usually a vast body of existing relevant work on a topic across va

... (read more)
7 · Ozzie Gooen · 2y: My quick take on why this was downvoted would be that someone may have glanced at it quickly and assumed you were being negative toward MIRI or EA. I think around being "science-aligned", the post means using the principles and learnings of the scientific method and similar tools, rather than agreeing with "the majority of scientists" or similar. The mainstream scientific community seems also likely to be skeptical of EA, but that doesn't mean that EA would have to therefore be similarly skeptical of itself. That said, of course whether one follows the scientific method and similar for some practices, especially in cases where they aren't backed by many other communities, could be rather up for debate.
Extinguishing or preventing coal seam fires is a potential cause area

An example of what I had in mind was focusing more on climate change when running events like Raemon's Question Answering hackathons. My intuition says that it would be much easier to turn up insights like the OP than insights of "equal importance to EA" (however that's defined) in e.g. technical AI safety.

Want to Save the World? Enter the Priesthood

The answer to your question is basically what I phrased as a hypothetical before:

participation in the EA movement as one way to bring oneself closer to God through the theological virtue of charity.

I was involved in EA at university for 2 years before coming to believe Catholicism is true, and it didn't seem like Church dogma conflicted with my pro-EA intuitions at all, so I've just stayed with it. It helped that I wasn't ever an EA for rigidly consequentialist reasons; I just wanted to help people and EA's analytical approach was a na... (read more)

Corporate Global Catastrophic Risks (C-GCRs)

I downvoted the post because I didn't learn anything from it that would be relevant to a discussion of C-GCRs (it's possible I missed something). I agree that the questions are serious ones, and I'd be interested to see a top level post that explored them in more detail. I can't speak for anyone else on this, and I admit I downvote things quite liberally.

Want to Save the World? Enter the Priesthood

Tl;dr the moral framework of most religions is different enough from EA to make this reasoning nonsensical; it's an adversarial move to try to change religions' moral framework but there's potentially scope for religions to adopt EA tools

Like I said in my reply to khorton, this logic seems very strange to me. Surely the veracity of the Christian conception of heaven/hell strongly implies the existence of an objective, non-consequentialist morality? At that point, it's not clear why "effectively doing the most good" in this man... (read more)

1 · ZacharyRudolph · 2y: I'm not sure I understand your objection, but I feel like I should clarify that I'm not endorsing consequentialism as a sort of moral criterion (that is, the thing in virtue of which something is right or wrong) so much as I take the "effective" part of effective altruism to imply using some sort of nonmoral consequentialist reasoning. As far as I understand (which isn't far), a Catholic moral framework would still allow for some sort of moral quantification (that some acts are more good than others or are good to a greater degree), e.g. saints are a thing. If so, then (I think) it seems sensible to say a Catholic could take the results of consequentialist reasoning as applied to her own framework as morally motivating reasons to choose one act over another.

My worry is that if that framework holds only one value as most basic, then this consequentialist reasoning might (edit: depending on the value) validly lead to the conclusion that the way to do the most good is something radically different from the things that this subculture tends to endorse, and that this should count towards the concern that this subculture's actions could produce serious disvalue (edit: disvalue from, say, the moral consequentialist's point of view).

On the other hand, if this framework is some sort of pluralist/virtue system (you mentioned a virtue of charity), then yeah, I definitely agree that effective altruism could represent the pursuit of excellence in such a virtue, or that "effectiveness" could be interpreted as a way of saying that the altruist is simply addressing what he takes to be his most stringent obligations with regard to his duty of charity. These, though, I think would count as different arguments (i.e. arguments which make sense to Catholics) than those which utilitarians take to give morally motivating reasons.
3 · zdgroff · 2y: This is all really interesting, and thank you all for chiming in. Liam, I'm curious: do you adopt EA tools within a Catholic moral framework, or do you practice Catholicism while adopting a different moral framework? I figure your participation in EA is some sort of anecdata.
Want to Save the World? Enter the Priesthood

Thank you! I'm not sure, but I assume that I accidentally highlighted part of the post while trying to fix a typo, then accidentally e.g. pressed "ctrl-v" instead of "v" (I often instinctively copy half-finished posts into the clipboard). That seems like a pretty weird accident, but I'm pretty sure it was just user error rather than anything to do with the EA forum.

Want to Save the World? Enter the Priesthood

This post seems to have become garbled when I tried to fix a typo; any idea how I can restore the original version?

2 · JP Addison · 2y: Currently a work-in-progress feature that is admin-only. (And has been in that state for a while, unfortunately.) I've reverted this post. Do you know what the sequence of events is that caused it to get garbled?
Want to Save the World? Enter the Priesthood

This doesn't seem like a great idea to me for two reasons:

1. The notion of explicitly manipulating one's beliefs about something as central as religion for non-truthseeking reasons seems very sketchy, especially when the core premise of EA relies on an accurate understanding of highly uncertain subjects.

2. Am I correct in saying the ultimate aim of this strategy is to shift religious groups' dogma from (what they believe to be) divinely revealed truth to [divinely revealed truth + random things EAs want]? I'm genuinely not sure if I interpreted the post correctly, but that seems like an unnecessarily adversarial move against a set of organized groups with largely benign goals.

1 · Liam_Donovan · 2y: This post seems to have become garbled when I tried to fix a typo; any idea how I can restore the original version?
Want to Save the World? Enter the Priesthood

Yeah, I don't think I phrased my comment very clearly.

I was trying to say that, if the Christian conception of heaven/hell exists, then it is highly likely that an objective non-utilitarian morality exists. It shouldn't be surprising that continuing to use utilitarianism within an otherwise Christian framework yields garbage results! As you say, a Christian can still be an EA, for most relevant definitions of "be an EA".

Want to Save the World? Enter the Priesthood

I'm fairly confident the Church does not endorse basing moral decisions on expected value analysis; that says absolutely nothing about the compatibility of Catholicism and EA. For example, someone with an unusually analytical mindset might see participation in the EA movement as one way to bring oneself closer to God through the theological virtue of charity.

1 · ZacharyRudolph · 2y: You're right. What I was trying to get at was that I presume Catholics would start with different answers to axiological questions like "what is the most basic good?". Where I might offer a welfarist answer, the Church might say "a closeness to God" (I'm not confident in that). Thus, if a Catholic altruist applies the "effective" element of EA reasoning, the way to do the most good in the world might end up looking like aggressive evangelism in order to save the most souls. And if we're trying to convince Catholic priests to encourage the Church to use its resources for usual EA interventions, it seems like you'd need to either employ a different set of arguments than those used to convince welfarists/utilitarians or convince them to adopt a different answer to the question we started with.
3 · Khorton · 2y: The set of tools EA provides for considering how to help others, and the network/community, could be useful for any altruist. Utilitarianism is less compatible with Catholicism.
Extinguishing or preventing coal seam fires is a potential cause area

This example of a potentially impactful and neglected climate change intervention seems like good evidence that EAs should put substantially more energy towards researching other such examples. In particular, I'm concerned that the neglect of climate change has more to do with the lack of philosophically attractive problems relative to e.g. AI risk, and less to do with marginal impact of working on the cause area.

2 · kbog · 2y: Reducing global poverty, and improving farming practices, lack philosophically attractive problems (for a consequentialist, at least) - yet EAs work heavily on them all the same. And climate change does have some philosophical issues with model parameters like discount rates. Admittedly, they are a little more messy and applied in nature than talking about formal agent behavior.
7 · Aaron Gertler · 2y: My impression is that few people are researching new interventions in general, whether in climate change or other areas (I could name many promising ideas in global development that haven't been written up by anyone with a strong connection to EA). I can't speak for people who individually choose to work on topics like AI, animal welfare, or nuclear policy, and what their impressions of marginal impact may be, but it seems like EA is just... small, without enough research-hours available to devote to everything worth exploring. (Especially considering the specialization that often occurs before research topics are chosen; someone who discovers EA in the first year of their machine-learning PhD, after they've earned an undergrad CS degree, has a strong reason to research AI risk rather than other topics.) Perhaps we should be doing more to reach out to talented researchers in fields more closely related to climate change, or students who might someday become those researchers? (As is often the case, "EAs should do more X" means something like "these specific people and organizations should do more X and less Y", unless we grow the pool of available people/organizations.)
How to evaluate the impact of influencing governments vs direct work in a given cause area?

Great answer, thank you!

Do you know of any examples of the "direct work+" strategy working, especially for EA-recommended charities? The closest thing I can think of would be the GiveDirectly UBI trial; is that the sort of thing you had in mind?

There's Lots More To Do

It seems like that question would interact weirdly with expectations of future income: as a college student I donate ~1% of expenses, but if I could only save one life, right now, I would probably try to take out a large, high interest loan to donate a large sum. That depends on availability of loans, risk aversion, expectations of future income, etc. much more than it does on my moral values.

[Link] The Optimizer's Curse & Wrong-Way Reductions

Isn't this essentially a reformulation of the common EA argument that the most high-impact ideas are likely to be "weird-sounding" or unintuitive? I think it's a strong point in favor of explicit modelling, but I want to avoid double-counting evidence if they are in fact similar arguments.

2 · kbog · 3y: Nah, I'm just saying that a curse applies to every method, so it doesn't tell us to use a particular method. I'm excluding arguments from the issue, not bringing them in. So if we were previously thinking that weird causes are good and common sense/model pluralism aren't useful, then we should just stick to our guns. But if we were previously thinking that common sense/model pluralism are generally more accurate anyway, then we should stick with them.
[Link] The Optimizer's Curse & Wrong-Way Reductions

A recent example of this happening might be EA LTF Fund grants to various organizations trying to improve societal epistemic rationality (e.g. by supporting prediction markets)

[Link] The Optimizer's Curse & Wrong-Way Reductions

Can you elaborate on which areas of EA might tend towards each extreme? Specific examples (as vague as needed) would be awesome too, but I understand if you can't give any.

8 · Max_Daniel · 3y: Unfortunately I find it hard to give examples that are comprehensible without context that is either confidential or would take me a lot of time to describe. Very very roughly, I'm often not convinced by the use of quantitative models in research (e.g. the "Racing to the Precipice" paper on several teams racing to develop AGI) or for demonstrating impact (e.g. the model behind ALLFED's impact which David Denkenberger presented in some recent EA Forum posts). OTOH I often wish that for organizational decisions or in direct feedback more quantitative statements were being made -- e.g. "this was one of the two most interesting papers I read this year" is much more informative than "I enjoyed reading your paper". Again, this is somewhat more subtle than I can easily convey: in particular, I'm definitely not saying that e.g. the ALLFED model or the "Racing to the Precipice" paper shouldn't have been made - it's more that I wish they would have been accompanied by a more careful qualitative analysis, and would have been used to find conceptual insights and test assumptions rather than as a direct argument for certain practical conclusions.
Why is the EA Hotel having trouble fundraising?

Does he still endorse the retraction? It's just idle curiosity on my part but it wasn't clear from the comments

Why is the EA Hotel having trouble fundraising?

A few thoughts this post raised for me (not directed at OP specifically):

1. Does RAISE/the Hotel have a standardized way to measure the progress of people self-studying AI? If so, especially if it's been vetted by AI risk organizations, it seems like that would go a long ways towards resolving this issue.

2. Does "ea organisations are unwilling to even endorse the hotel" refer to RAISE/Rethink Charity (very surprising & important evidence!), or other EA organizations without direct ties to the Hotel?

3. I would be curious what the margin... (read more)

2 · toonalfrink · 3y: Not yet, but it's certainly a project that is on our radar. We also want to find ways to measure innate talent, so that people can tell earlier whether AIS research would be a good fit for them.

2. RAISE very much does endorse the hotel (especially given that the founder works for and lives at the hotel, and the hotel was integral to their progress over the last 6 months). See e.g. here and here. We have no formal relationship with Rethink Charity (or Rethink Priorities in particular) - individuals at the hotel have applied for and got work from them independently.

3. The marginal cost of adding a new resident when already at ~75% capacity is ~£4k/yr.

6. I wonder too. I wonder also how different it would be if it was done after another 6-12 months of getting established.


re signal boost: any particular reason why?

PAF: Opioid Epidemic

Did the report consider increasing access to medical marijuana as an alternative to opioids? If so, what was the finding? (I didn't see any mention while skimming it) My impression was that many leaders in communities affected by opioid abuse see access to medical marijuana as the most effective intervention. One (not particularly good) example

2 · egastfriend · 3y: Medical marijuana fell outside the scope of our consulting project, but I think the evidence is weak for medical marijuana as a promising intervention: "When researchers extended their analysis through 2013, they found that the association between having any medical marijuana law and lower rates of opioid deaths completely disappeared. Moreover, the association between states with medical marijuana dispensaries and opioid mortality fell substantially as well." It's definitely an interesting/intriguing idea, but it also carries risks of increasing some of the harms associated with marijuana use. Curious to see more evidence come out about it.
The Importance of Truth-Oriented Discussions in EA

Are you saying there are groups who go around inflicting PR damage on generic communities they perceive as vulnerable, or that there are groups who are inclined to attack EA in particular, but will only do so if we are perceived as vulnerable (or something else I'm missing)? I'm having a hard time understanding the mechanism through which this occurs.

5 · Freethinkers In EA · 3y: It's not necessarily as intentional as that. Some people have certain political goals. They can achieve those goals cooperatively, by engaging people in civil discussion, or adversarially, by protesting/creating negative publicity. If the latter tends to be successful, a greater proportion of people will be drawn towards it. Is that clearer?
Making discussions in EA groups inclusive

The law school example seems like weak evidence to me, since the topics mentioned are essential to practicing law, whereas most of the suggested "topics to avoid" are absolutely irrelevant to EA. Women who want to practice law are presumably willing to engage these topics as a necessary step towards achieving their goal. However, I don't see why women who want to effectively do good would be willing to (or expected to) engage with irrelevant arguments they find uncomfortable or toxic.

If the topics to avoid are irrelevant to EA, it seems preferable to argue that these topics shouldn't be discussed because they are irrelevant than to argue that they shouldn't be discussed because they are offensive. In general, justifications for limiting discourse that appeal to epistemic considerations (such as bans on off-topic discussions) appear to generate less division and polarization than justifications that appeal to moral considerations.

Profiting-to-Give: harnessing EA talent with a new funding model

I like the idea of profiting-to-give as a way to strengthen the community and engage people outside of the limited number of direct work EA jobs; however, I don't see how an "EA certification" effectively accomplishes this goal.

I do think there would be a place for small EA-run businesses in fields with:

  • a lot of EAs
  • low barriers to entry
  • sharply diminishing returns to scale

Such a business might plausibly be able to donate at least much money as its employees were previously donating individually by virtue of their competitive success in the... (read more)

1 · vaidehi_agarwalla · 3y: I agree strongly with the last point in this comment, and the post in general. I have a few responses to the first points. I imagine the EA-certification would have many benefits:

  • certification of successful companies could set an example for other companies to follow, and set a high bar for CSR - not just to donate x% but to give it to an effective charity
  • keeping track of EA-Corps as the movement grows so that they can attract EAs outside the personal networks of the creators
  • spreading EA values beyond the non-profit industry and tight social networks of current EAs
  • potentially creating a new model for socially-minded businesses to follow (and allowing socially-minded investors a new business model which could have better results than the social benefit companies model)
Climate Change Is, In General, Not An Existential Risk

That's a good point, but I don't think my argument was brittle in this sense (perhaps it was poorly phrased). In general, my point is that climate change amplifies the probabilities of each step in many potential chains of catastrophic events. Crucially, these chains have promoted war/political instability in the past and are likely to in the future. That's not the same as saying that each link in a single untested causal chain is likely to happen, leading to a certain conclusion, which is my understanding of a "brittle argument"

On the other hand, I think it's fair to say that e.g. "Climate change was for sure the primary cause of the Syrian civil war" is a brittle argument

Climate Change Is, In General, Not An Existential Risk

I'd previously read that there was substantial evidence linking climate change --> extreme weather --> famine --> Syrian civil war (a major source of refugees). One example: This paper claims the opposite, though:

"The Syria case, the article finds, does not support ‘threat multiplier’ views of the impacts of climate change; to the contrary, we conclude, policymakers, commentators and scholars alike should exercise f... (read more)

2 · kbog · 3y: Highlight your text and then select the hyperlink icon in the pop-up bar.
Climate Change Is, In General, Not An Existential Risk

I don't think this is indirect and unlikely at all; in fact, I think we are seeing this effect already. In particular, some of the 2nd-order effects of climate change (such as natural catastrophe --> famine --> war/refugees) are already warping politics in the developed world in ways that will make it more difficult to fight climate change (e.g. strengthening politicians who believe climate change is a myth). As the effects of climate change intensify, so will its knock-on effects on other x-risks.

In particular, a plausible path is climate change immiserate... (read more)

2 · kbog · 3y: AFAIK this is not how the current refugee crisis occurred. The wars in the Middle East/Afghanistan were not caused by climate change. If climate change increases, that will convince people to stop voting for politicians who think it is a myth. You're also relying on the assumption that leaders who oppose immigration will also be leaders who doubt climate change. That may be true in the US right now, but as a sweeping argument across decades and continents it is unsubstantiated. It's also unclear if such politicians will increase or decrease x-risks.
3 · Pablo · 3y: Beware brittle arguments [].
The case for taking AI seriously as a threat to humanity

1. A system that will imprison a black person but not an otherwise-identical white person can be accurately described as "a racist system"

2. One example of such a system is employing a ML algorithm that uses race as a predictive factor to determine bond amounts and sentencing

3. White people will tend to be biased towards more positive evaluations of a racist system because they have not experienced racism, so their evaluations should be given lower weight

4. Non-white people tend to evaluate racist systems very negatively, even when they improv... (read more)

Response to a Dylan Matthews article on Vox about bipartisanship

How did Dylan Matthews become associated with EA? This is a serious question -- based on the articles of his I've read, he doesn't seem to particularly care about some core EA values, such as epistemic rationality and respect for "odd-sounding" opinions.

He wrote one of the early articles about earning to give, and as far as I know he "became associated with EA" in the same way that other EAs do: by getting interested in the ideas and starting to act on them. For example, he donated a kidney.

EA Hotel with free accommodation and board for two years

I suspect Greg/the manager would not be able to filter projects particularly well based on personal interviews; since the point of the hotel is basically 'hits-based giving', I think a blanket ban on irreversible projects is more useful (and would satisfy most of the concerns in the fb comment vollmer linked)

7 · John_Maxwell · 3y: Just to play devil's advocate for a moment, aren't personal interviews and hits-based giving essentially the process used by other EA funders? I believe it was OpenPhil who coined the term hits-based giving []. It sounds like maybe your issue is with the way funding works in EA broadly speaking, not this project in particular.

The same seems to apply to vollmer's point about adverse selection effects. Over time, the project pool will increasingly be made up of projects everyone has rejected []. So this could almost be considered a fully general counterargument against funding any project. (Note that this thinking directly opposes replaceability: replaceability encourages you to fund projects no other funder is willing to fund; this line of reasoning says just the opposite.)

Anyway, I think the EA Hotel could easily be less vulnerable to adverse selection effects, if it appeals to a different crowd. I'm the first long-term resident of the hotel, and I've never applied for funding from any other source. (I'm self-studying machine learning at the hotel, which I don't think I would ever get a grant for.)

Sounds like you really want a broader rule like "no irreversible projects without community consensus" or something. In general, mitigating downside risk seems like an issue that's fairly orthogonal to establishing low cost of living EA hubs.
EA Hotel with free accommodation and board for two years

Following on vollmer's point, it might be reasonable to have a blanket rule against policy/PR/political/etc work -- anything that is irreversible and difficult to evaluate. "Not being able to get funding from other sources" is definitely a negative signal, so it seems worthwhile to restrict guests to projects whose worst possible outcome is unproductively diverting resources.

On the other hand, I really can't imagine what harm research projects could do; I guess the worst case scenario is someone so persuasive they can convince lots of EAs of the... (read more)

2 · MichaelPlant · 3y: This basically applies to everything as a matter of degree, so it looks impossible to put in a blanket rule. Suppose I raise £10 and send it to AMF. That's irreversible. Is it difficult to evaluate? Depends what you mean by 'difficult' and what the comparison class is.
2 · Jonas Vollmer · 3y: I agree research projects are more robustly positive. Information hazards [] are one main way in which they could do a significant amount of harm.
EA Hotel with free accommodation and board for two years

From my perspective, the manager should

  1. Not (necessarily) be an EA
  2. Be paid more (even if this trades off against capacity, etc)
  3. Not also be a community mentor

One of the biggest possible failure modes for this project seems to be hiring a not-excellent manager; even a small increase in competence could make a big difference between the project failing and succeeding. Thus, the #1 consideration ought to be "how to maximize the manager's expected skill". Unfortunately, the combination of undesirable location, only hiring EAs, and the low salary ... (read more)

0 · Greg_Colbourn · 3y: I would say it’s a bit more than vague ;) I think it’s important to have someone who really understands and shares the goals of the project. Someone who doesn’t get EA is not likely to care about it much beyond seeing it as a means to get paid. It would then be largely up to part-time volunteers (the other Trustees) to direct the project and keep it aligned with EA. This scenario seems more likely to lead to stagnation/failure to me.

I think a flair for optimisation is needed in any kind of ops role. The more you optimise, the greater your capacity (/free time). Conscientiousness would be required. But there are a fair amount of EAs with that trait, right? In practice I think these are mostly the same thing. The more initial success there is, the more likely expansion is.

The point I was making is that the manager will have a large stake in the course the project takes, so it will depend on what they make of it (hence meaning it should be seen as an exciting opportunity). I mean yeah, there will be some amount of “boring” (mindfulness promoting?) tasks - but it could be so much more fun than “Hotel Manager in Blackpool” initially sounds.

In many ways this won’t be a typical hotel (non-profit, longer-term stays, self-service breakfast and lunch, simplified dinner menu, weekly linen/towel changes, EA evening events etc), so I’m not sure how much prior hotel experience is relevant. Really anyone who is a reasonably skilled generalist, passionate about the project, and friendly should be able to do it.

Salary is open to negotiation (have amended ad []). I think that once everything is set up, the day-to-day management of the hotel itself won’t require full-time hours. Would prefer to have one full-time employee rather than two part-time employees, but as I’ve said previously, I am open to splitting the role. As mentioned above, part of optimisation can be outsourcing tasks you are less good at (or don’t like doing), e.g. hiring s…