All of Robert_Wiblin's Comments + Replies

On Mike Berkowitz's 80k Podcast

Exciting to see a post about this episode 5 hours after we put it out (!).

A few quick thoughts:

"Berkowitz never mentions that the median voter in most Republican primaries is currently "pro-Trump" so he leaves out the single sentence explanation."

No, but I do say that. IIRC one of his responses also takes this background explanation as a given.

"Japan and New Zealand have shown that sovereign parliamentary democracies do not manifest even nascent electoral movements."

In general I'm with you on thinking some systems of government are less conducive to populist m... (read more)

PhD student mutual line-manager invitation

Great to see someone giving this a crack! Let me know how it works out. :)

How You Can Counterfactually Send Millions of Dollars to EA Charities

"The 2.16% U.S. federal funds rate in 2019 is one of the most conservative interest rates possible."

The U.S. Federal Funds rate has been effectively 0% since April 2020 and was roughly 0% for six years from 2009 to 2015. The same is roughly true of the UK. Central banks in both countries are saying they'll keep rates low for years to come.

I can't immediately find a reputable business savings account in the UK/US that currently offers more than 1%.

Those that offer the highest rates (something approaching 1%) on comparison sites tend to have conditions (... (read more)

Thanks for your thoughtful reply Rob!

"The 2.16% U.S. federal funds rate in 2019 is one of the most conservative interest rates possible."

The U.S. Federal Funds rate has been effectively 0% since April 2020 and was roughly 0% for six years from 2009 to 2015. The same is roughly true of the UK. Central banks in both countries are saying they'll keep rates low for years to come.

I can't immediately find a reputable business savings accounts in the UK/US that currently offers more than 1%.

To quote my reply to GMcGowan, "We used the latest Form 990 data from 201... (read more)

If Causes Differ Astronomically in Cost-Effectiveness, Then Personal Fit In Career Choice Is Unimportant

In addition to the issues raised by other commenters, I would worry that someone trying to work on something they're a bad fit for can easily be harmful.

That especially goes for things related to existential risk.

And in addition to the obvious mechanisms, having most of the people in a field be ill-suited to what they're doing but persisting for 'astronomical waste' reasons will mean most participants struggle to make progress, get demoralized, and repel others from joining them.

EdoArad (10mo): My gut reaction was to be surprised that there are whole fields or causes in which some people not only aren't a good fit for the most important roles there but that they just can't use their skill set in a constructive way in which they would feel that they are making some contribution. But on second thought, we are talking about extremely small fields with limited resources. This means that it would be difficult financially for people who aren't skilled in accordance with the top needs of the field. Then again, the field might grow and people can upskill quite a bit if they are willing to wait a decade or two before working directly on their favorite x-risk.
How much does a vote matter?

He says he's going to write a response. If I recall correctly, Jason isn't a consequentialist, so he may have a different take on what kinds of things we can have a duty to do.

How much does a vote matter?

Want to write a TLDR summary? I could find somewhere to stick it.

Nathan Young (1y): tl;dr

* In a close election in the US, you have a 1 in 10 million chance to swing the election if you live in a competitive district.
* A 1 in 10 million (10,000,000) chance might sound small, but since the US government spends $17,500,000,000,000, it's worth nearly $2 million.
* Other countries spend less money, but their districts are often smaller. In competitive districts, your vote can still be worth a lot.
* Government spending is easy to quantify, but there is also foreign policy, social and political freedoms. How valuable is a 1 in 10 million chance to halve the chance of nuclear war?
* If you aren't informed, here are two tips:
  * Find someone informed who shares your values. Ask them how they will vote and match them.
  * Read and follow international opinion polling - your country might be 50/50 but the world might not.
* If you think it's worth voting, it's probably worth telling your friends to as well.
* In conclusion:
  * If you already follow politics, often it will be effective to vote.
  * If you would have to spend time researching, that time might be better spent working on one of the world's pressing problems or earning to give to an effective charity.
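
For concreteness, here is the arithmetic behind that headline figure as a minimal sketch, assuming (as the summary above does) a 1-in-10-million chance of swinging the outcome and roughly $17.5 trillion of government spending at stake (presumably spending over a full term rather than a single year):

```python
# Minimal sketch of the expected-value arithmetic in the summary above.
# Assumptions (illustrative, taken from the summary): a 1-in-10-million chance
# of swinging the election, and ~$17.5 trillion of government spending at stake.

p_swing = 1 / 10_000_000        # chance your vote decides the election
spending_at_stake = 17.5e12     # dollars of government spending affected

expected_value = p_swing * spending_at_stake
print(f"Expected government spending influenced per vote: ${expected_value:,.0f}")
# -> $1,750,000, i.e. "nearly $2 million"
```
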
Nathan Young (1y): Sure. Though that's not what I meant. I more mean an op-ed style version of the same content that is lighter and more chatty. But maybe I'm misunderstanding the process? I guess if a journalist wants to summarise it, they'll do that themselves? Eg in this style https://unherd.com/2020/10/why-do-people-believe-such-complete-rubbish/
How much does a vote matter?

It seems like to figure out whether it's a good use of time for 300 people like you to vote, you still need to figure out whether it's worth it for any single one of them.

ofer (1y): What I mean to say is that, roughly speaking, one should compare the world where people like them vote to the world where people like them don't vote, and choose the better world. That can yield a different decision than when one decides without considering the fact that they're not deciding just for themselves.
When you shouldn't use EA jargon and how to avoid it

I'm actually more favourable to a smaller EA community, but I still think jargon is bad. Using jargon doesn't disproportionately appeal to the people we want.

The most capable folks are busy with other stuff and don't have time to waste trying to understand us. They're also more secure and uninterested in any silly in-group signalling games.

When you shouldn't use EA jargon and how to avoid it

Yes but grok also lacks that connotation to the ~97% of the population who don't know what it means or where it came from.

Max_Daniel (1y): As one data point, I had to google what Stranger in a Strange Land refers to, and don't know what connotations the comment above yours [1] refers to. I always assumed 'grok' was just a generic synonym for '(deeply) understand', and didn't even particularly associate it with the EA community. (Maybe it's relevant here that I'm not a native speaker.)

[1] Replacing the jargon term 'grandparent' ;)
willbradshaw (1y): And the way it's used in tech is almost totally lacking the mystical angle from Stranger in a Strange Land anyway. Also Stranger in a Strange Land is a profoundly weird and idiosyncratic book and there's not really any reason to evoke it in most EA contexts. (That said I do think "deeply understand" doesn't quite do the job.)
[Link] "Where are all the successful rationalists?"

The EA community seems to have a lot of very successful people by normal social standards, pursuing earning to give, research, politics and more. They are often doing better by their own lights as a result of having learned things from other people interested in EA-ish topics. Typically they aren't yet at the top of their fields but that's unsurprising as most are 25-35.

The rationality community, inasmuch as it doesn't overlap with the EA community, also has plenty of people who are successful by their own lights, but their goals tend to be becoming thinke... (read more)

Avoiding Munich's Mistakes: Advice for CEA and Local Groups

To better understand your view, what are some cases where you think it would be right to either

  1. not invite someone to speak, or
  2. cancel a talk you've already started organising,

but only just?

That is, cases where it's just slightly over the line of being justified.

Can my self-worth compare to my instrumental value?

For whatever reason people who place substantial intrinsic value on themselves seem to be more successful and have a larger social impact in the long term. It appears to be better for mental health, risk-taking, and confidence among other things.

You're also almost always better placed than anyone else to provide the things you need — e.g. sleep, recreation, fun, friends, healthy behaviours — so it's each person's comparative advantage to put extra effort into looking out for themselves. I don't know why, but doing that is more motivating if it feels like i... (read more)

Ramiro (1y): I think this is still an instrumental reason for someone to place "substantial intrinsic value on themselves." Though I have no problem with that, I thought what C Tilli complained about was precisely that, for EAs, all self-concern is for the sake of the greater good, even when it is rephrased as a psychological need for a small amount of self-indulgence.

Second, I'd say that people who are "more successful and have a larger social impact in the long term" are "people who place substantial intrinsic value on themselves," but that's just selection dynamics: if you have a large impact, then you (likely) place substantial intrinsic value on yourself. Even if it does imply that you're more likely to succeed if you place substantial intrinsic value on yourself (if only people who do that can succeed), it does not say anything about failure – confident people fail all the time, and the worst way of failing seems to be reserved for those who place substantial value on themselves and end up being successful with the wrong values (https://markmanson.net/personal-values).

But I wonder if our sample of "successful people" is not too biased towards those who get the spotlights. Petrov didn't seem to put a lot of value on himself, and Arkhipov is often described as exceptionally humble; no one strives to be an unsung hero.
Keynesian Altruism

Yep, that sounds good; non-profits should aim to have fairly stable expenditure over the business cycle.

I think I was thrown off your true motivation by the name 'Keynesian altruism'. It might be wise to rename it 'countercyclical' so it doesn't carry the implication that you're looking for an economic multiplier.

Keynesian Altruism

The idea that charities should focus on spending money during recessions because of the extra benefit that provides seems wrong to me.

Using standard estimates of the fiscal multiplier during recessions — and ignoring any offsetting effects your actions have on fiscal or monetary policy — if a US charity spends an extra $1 during a recession it might raise US GDP by between $0 and $3.

If you're a charity spending $1, and just generally raising US GDP by $3 is a significant fraction of your total social impact, you must be a very ineffective organisation. I c... (read more)
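
To make the comparison concrete, here's a minimal sketch; the multiplier range comes from the comment above, while the per-dollar "direct benefit" benchmark is a purely illustrative assumption:

```python
# Minimal sketch of the fiscal-multiplier argument above.
# The multiplier range (0-3) is taken from the comment; the "direct benefit"
# benchmark is a purely illustrative assumption, not a measured figure.

spend = 1.00                                 # dollars spent by the charity
multiplier_low, multiplier_high = 0.0, 3.0   # standard recession fiscal multiplier estimates

gdp_boost_high = multiplier_high * spend     # at most ~$3 of extra US GDP per $1 spent
direct_benefit_assumed = 10.0                # assumed value of the charity's direct work per $1

print(f"Upper-bound GDP boost per $1 spent: ${gdp_boost_high:.2f}")
print(f"Assumed direct benefit per $1 spent: ${direct_benefit_assumed:.2f}")
# If a charity's direct benefits are worth well above $3 per $1 (as an effective
# one's should be), the stimulus effect is a minor share of its impact; if a ~$3
# bump to general US GDP were a big share, the charity would have to be quite
# ineffective.
```
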

Thanks for your comment.

I'm not advocating it because of the fiscal multiplier. That would be the cherry on the cake.

The first simple step is simply to say don't cut back expenditure because shrinking and regrowing an organisation is costly. Most charities (though EA ones are somewhat atypical) see their income reduced during bad times. And since most charities think in bland terms of x months of reserves, this means their expenditure fluctuates as well. This is not an efficient way to manage an organisation. In good times, build a buffer, s... (read more)

More empirical data on 'value drift'

Is there even 1 exclusively about people working at EA organisations?

If someone had taken a different job with the goal of having a big social impact, and we didn't think what they were doing was horribly misguided, I don't think we would count them as having 'dropped out of EA' in any of the 6 data sets.

I was referring to things like phrasings used and how often someone working for an EA org vs not was discussed relative to other things; I wasn't referring to the actual criteria used to classify people as having dropped out / reduced involvement or not.

Given that Ben says he's now made some edits, it doesn't seem worth combing through the post again in detail to find examples of the sort of thing I mean. But I just did a quick ctrl+f for "organisations", and found this, as one example:

Of the 14 classified as staff, I don’t count any clear cases of

... (read more)
The case of the missing cause prioritisation research

"For example 80000 Hours have stopped cause prioritisation work to focus on their priority paths"

Hey Sam — being a small organisation, 80,000 Hours has only ever had fairly limited staff time for cause prioritisation research.

But I wouldn't say we're doing less of it than before, and we haven't decided to cut it. For instance see Arden Koehler's recent posts about Ideas for high impact careers beyond our priority paths and Global issues beyond 80,000 Hours’ current priorities.

We aim to put ~10% of team time into underlying research, where one topic is trying

... (read more)
weeatquince (1y): Super great to hear that 10% of 80000 Hours team time will go into underlying research. (Also apologies for getting things wrong, was generalising from what I could find online about what 80K plans to work on – have edited the post). If you have more info on what this research might look into do let me know.

That there is an exploit explore tradeoff. Continuing to do cause prioritisation research needs to be weighed against focusing on specific cause areas. I imply in my post that EA organisations have jumped too quickly into exploit. (I mention 80K and FHI, but I am judging from an outside view so might be wrong). I think this is a hard case to make, especially to anyone who is more certain than me about which causes matter (which may be the most EA folk). That said there are other reasons for continuing to explore, to create a diverse community, epistemic humility, game theoretic reasons (better if everyone explores a bit more), to counter optimism bias, etc. Not sure I am explaining this well.

I guess I am saying that I still think the high level point I was making stands: that EA organisations seem to move towards exploit quicker than I would like. But do let me know if you disagree.
Intellectual Diversity in AI Safety

It seems like lots of active AI safety researchers, even a majority, are aware of Yudkowsky and Bostrom's views but only agree with parts of what they have to say (e.g. Russell, Amodei, Christiano, the teams at DeepMind, OpenAI, etc).

There may still not be enough intellectual diversity, but having the same perspective as Bostrom or Yudkowsky isn't a filter to involvement.

AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher

As Michael says, common sense would indicate I must have been referring to the initial peak, or the peak in interest/panic/policy response, or the peak in the UK/Europe, or peak where our readers are located, or — this being a brief comment on an unrelated topic — just speaking loosely and not putting much thought into my wording.

FWIW it looks like globally the rate of new cases hasn't peaked yet. I don't expect the UK or Europe will return to a situation as bad as the one they went through in late March and early April. Unfortunately the US and Latin America are already doing worse than they were then.

"As Michael says, common sense would indicate"

This sounds like a status move. I asked a sincere question and maybe I didn't think too carefully when I asked it, but there's no need to rub it in.

"FWIW it looks like globally the rate of new cases hasn't peaked yet. I don't expect the UK or Europe will return to a situation as bad as the one they went through in late March and early April. Unfortunately the US and Latin America are already doing worse than they were then."

Neither the US nor Latin America could plausibly be said to have peaked then.

Thanks, I appreciate the clarification! :)

AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher

I think you know what I mean — the initial peak in the UK, the country where we are located, in late March/April.

Linch (1y): Sorry if I sounded mean! I genuinely didn't know what you meant! I live in the US and I assumed that most of 80k's audience will be more concerned about worldwide numbers or their home country's, than that of 80k's "base." (I also didn't consider the possibility that there are other reasons than audience interest for you to be prioritizing certain podcasts, like logistics) I really appreciate a lot of your interviews on covid-19, btw. Definitely didn't intend my original comment in a mean way!
AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher

There's often a few months between recording and release and we've had a handful of episodes that took a frustratingly long time to get out the door, but never a year.

The time between the first recording and release for this one was actually 9 months. The main reason was Howie and Ben wanted to go back and re-record a number of parts they didn't think they got right the first time around, and it took them a while to both be free and in the same place so they could do that.

A few episodes were also pushed back so we could get out COVID-19 interviews during the peak of the epidemic.

Linch (1y): Wait, what's your probability that we're past the peak (in terms of, eg, daily worldwide deaths)?
Study results: The most convincing argument for effective donations

Thanks for doing this research, nice work.

Could you make your figure a little larger? It's hard to read on a desktop. It might also be easier for the reader if each of the five arguments had a one-word name to keep track of the gist of their actual content.

"As you can see, the winner in Phase 2 was Argument 9 by a nose. Argument 9 was also the winner by a nose in Phase 1, and thus the winner overall."

I don't think this is quite right. Arguments 5 and 12 are very much within the confidence interval for Argument 9. Eyeballing it I would guess we can only

... (read more)
Stefan_Schubert (1y): Eric Schwitzgebel responded as follows to a similar comment on his wall (https://www.facebook.com/eschwitz/posts/10220331421028224?comment_id=10220332373292030): But many won't interpret it that way and further clarification would have been good, yes.

Edit: Schwitzgebel's post actually had another title: "Contest Winner! A Philosophical Argument That Effectively Convinces Research Participants to Donate to Charity" (https://schwitzsplinters.blogspot.com/2020/06/contest-winner-philosophical-argument.html)
Problem areas beyond 80,000 Hours' current priorities

Hi Tobias — thanks for the ideas!

Invertebrate welfare is wrapped into 'Wild animal welfare', and reducing long-term risks from malevolent actors is partially captured under 'S-risks'. We'll discuss the other two.

Should EA Buy Distribution Rights for Foundational Books?

For future reference, next time you need to look up the page number for a citation, Library Genesis can quickly let you access a digital copy of almost any book: https://en.wikipedia.org/wiki/Library_Genesis

Many books are still not available on Library Genesis. Fortunately, a sizeable fraction of those can be "borrowed" for 14 days from the Internet Archive.

Will protests lead to thousands of coronavirus deaths?

I didn't mean to imply that the protests would fix the whole problem, obviously they won't.

As you say, you'd need to multiply through by a distribution for 'likelihood of success' and 'how much of the problem gets solved'.

willbradshaw (1y): Sure, I didn't think you were saying that the protests would be a panacea. My main point was less about probability/degree of success and more about counterfactual impact.
Will protests lead to thousands of coronavirus deaths?

I think a crux for some protesters will be how much total damage they think bad policing is doing in the USA.

While police killings or murders draw the most attention, much more damage is probably done in other ways, such as through over-incarceration, petty harassment, framing innocent people, bankrupting folks through unnecessary fines, enforcing bad laws such as drug prohibition, assaults, and so on. And that total damage accumulates year after year.

On top of this we could add the burden of crime itself that results from poor policing practices, including

... (read more)

These points don't apply to anywhere near the same extent outside the US, so the post does at least seem like a good argument against the protests in the UK and elsewhere.

I think this is the wrong question.

The point of lockdown is that for many people it is individually rational to break the lockdown - you can see your family, go to work, or have a small wedding ceremony with little risk and large benefits - but this imposes external costs on other people. As more and more people break lockdown, these costs get higher and higher, so we need a way to persuade people to stay inside - to make them consider not only the risks to themselves, but also the risks they are imposing on other people. We solve this with a combination ... (read more)

I suspect that a lot of protesters would be very angry we're even raising these kinds of issues, but...

If we're being consequentialist about this, then the impact of the protests is not the difference between fixing these injustices, and the status quo continuing forever. It's the difference between a chance of fixing these injustices now, and a chance of fixing them next time a protest-worthy incident comes around.

Sadly, opportunities for these kinds of protests seem to come around fairly regularly in the US. So I expect these protests are probably only r

... (read more)
How can I apply person-affecting views to Effective Altruism?

If I weren't interested in creating more new beings with positive lives I'd place greater priority on:

  • Ending the suffering and injustice suffered by animals in factory farming
  • Ending the suffering of animals in the wilderness
  • Slowing ageing, or cryonics (so the present generation can enjoy many times more positive value over the course of their lives)
  • Radical new ways to dramatically raise the welfare of the present generation (e.g. direct brain stimulation as described here)

I haven't thought much about what would look good from a conservative Christia

... (read more)
Eleven recent 80,000 Hours articles on how to stop COVID-19 & other pandemics

Hi PBS, I understand where you're coming from and expect many policy folks may well be having a bigger impact than front-line doctors, because in this case prevention is probably better than treatment.

At the same time I can see why we don't clap for them in that way, because they're not taking on a particularly high risk of death and injury in the same way the hospital staff are right now. I appreciate both, but on a personal level I'm more impressed by people who continue to accept a high risk of contracting COVID-19 in order to treat patients.

Toby Ord’s ‘The Precipice’ is published!

I've compiled 16 fun or important points from the book for the write-up of my interview with Toby, which might well be of interest to people here. :)

Who should give sperm/eggs?

Hi Khorton — yes as I responded to Denise, it appears the one year thing must have been specific to the (for-profit) bank I spoke with. They pay so many up-front costs for each new donor I think they want to ensure they get a lot of samples out of each one to be able to cover them.

And perhaps they were highballing the 30+ number so that, should the most extreme thing happen, you couldn't say they didn't tell you, even if it's improbable.

Who should give sperm/eggs?

Hmmmm, this is all based on what I was told at one place. Maybe some of these rules — 30 kids max, donating for a year at a minimum, or the 99% figure — are specific to that company, rather than being UK-wide norms/regulations.

Or perhaps they were rounding up to 99% to just mean 'the vast majority'.

I'd forgotten about the ten family limit, thanks for the reminder.

Like you I have the impression that they're much less selective on eggs.

Who should give sperm/eggs?

In some ways the UK sperm donation process is an even more serious commitment than egg donation.

From what I was told, the rejection rate is extremely high — close to 99% of applicants are filtered out for one reason or another. If you get through that process they'll want you to go in and donate once a week or more, for at least a year. Each time you want to donate, you can't ejaculate for 48 hours beforehand.

And the place I spoke to said they'd aim to sell enough sperm to create 30 kids in the UK, and even more overseas.

The ones born in the UK can find ou

... (read more)
Khorton (2y): HFEA says that most donors "create one or two families, with one or two children each". The legal maximum is 10 families. "You’ll normally need to go to a fertility clinic once a week for between three and six months to make your donation." https://www.hfea.gov.uk/donation/donors/donating-your-sperm/
Denise_Melchin (2y): I'm fairly surprised by this response, this doesn't match what I have read. The Human Fertilisation and Embryology Authority imposes a limit for sperm and egg donors to donate to a maximum of ten families in the UK, although there is no limit on how many children might be born to these ten families (I'm struggling to link, but google 'HFEA ten family limit'). But realistically, they won't all want to have three children.

I'm curious whether you have a source for the claim that 99% of prospective sperm donors in the UK get rejected? I'm much less confident about this, but this doesn't line up with my impression. I also didn't have the impression they were particularly picky about egg donors, unlike in the US. But yes, it's true for sperm and egg donors alike that in the UK they can be contacted once the offspring turns 18.
Attempted summary of the 2019-nCoV situation — 80,000 Hours

I know 2 people working in normal pandemic preparedness and 2-3 in EA GCBR stuff.

I can offer introductions, though they are probably worked off their feet just now. DM me somewhere?

Wei_Dai (2y): Thanks Rob, I emailed you.
Should Longtermists Mostly Think About Animals?

Part of the issue might be the subheading "Space colonization will probably include animals".

If the heading had been 'might', then people would be less likely to object. Many things 'might' happen!

abrahamrowe (2y): That makes sense!
Peter Wildeford (2y): Good point. I agree.
Should Longtermists Mostly Think About Animals?

80% seems reasonable. It's hard to be confident about many things that far out, but:

i) We might be able to judge what things seem consistent with others. For example, it might be easier to say whether we'll bring pigs to Alpha Centauri if we go, than whether we'll ever go to Alpha Centauri.

ii) That we'll terraform other planets is itself fairly speculative, so it seems fair to meet speculation with other speculation. There's not much alternative.

iii) Inasmuch as we're focussing in on (what's in my opinion) a narrow part of the whole probability space — lik

... (read more)
Peter Wildeford (2y): I agree. However, I suppose under a s-risk longtermist paradigm, a tiny chance of spacefaring turning out in a particular way could still be worth taking action to prevent or even be of utmost importance. To wit, I think a lot of retorts to Abraham's argument appear to me to be of the form "well, this seems rather unlikely to happen", whereas I don't think such an argument actually succeeds. And to reiterate for clarity, I'm not taking a particular stance on Abraham's argument itself - only saying why I think this one particular counterargument doesn't work for me.
Should Longtermists Mostly Think About Animals?

I apologise if I'm missing something as I went over this very quickly.

I think a key objection for me is to the idea that wild animals will be included in space settlement in any significant numbers.

If we do settle space, I expect most of that, outside of this solar system, to be done by autonomous machines rather than human beings. Most easily habitable locations in the universe are not on planets, but rather freestanding in space, using resources from asteroids, and solar energy.

Autonomous intelligent machines will be at a great advantage over animals fro

... (read more)

I worry this is very overconfident speculation about the very far future. I'm inclined to agree with you, but I feel hard-pressed to put more than say 80% odds on it. I think the kind of s-risk nonhuman animal dystopia that Rowe mentions (and has been previously mentioned by Brian Tomasik) seems possible enough to merit significant concern.

(To be clear, I don't know how much I actually agree with this piece, agree with your counterpoint, or how much weight I'd put on other scenarios, or what those scenarios even are.)

Hey Rob!

I'm not sure that even under the scenario you describe animal welfare doesn't end up dominating human welfare, except under a very specific set of assumptions. In particular, you describe ways for human-esque minds to explode in number (propagating through space as machines or as emulations). Without appropriate efforts to change the way humans perceive animal welfare (wild animal welfare in particular), it seems very possible that 1) humans/machine descendants might manufacture/emulate animal-minds (and since wild animal welfare hasn't... (read more)

I also expect artificial sentience to vastly outweigh natural sentience in the long-run, though it's worth pointing out that we might still expect focusing on animals to be worthwhile if it widens people's moral circles.

Linch (2y): One way this could happen is if the deep ecologists or people who care about life-in-general "win", and for some reason have an extremely strong preference for spreading biological life to the stars without regard to sentient suffering. I'm pretty optimistic this won't happen however. I think by default we should expect that the future (if we don't die out), will be predominantly composed of humans and our (digital) descendants, rather than things that look like wild animals today.

Another thing that the analysis leaves out is that even aside from space colonization, biological evolved life is likely to be an extremely inefficient method of converting energy to positive (or negative!) experiences.
Concerning the Recent 2019-Novel Coronavirus Outbreak

Howie and I just recorded a 1h15m conversation going through what we do and don't know about nCoV for the 80,000 Hours Podcast.

We've also compiled a bunch of links to the best resources on the topic that we're aware of which you can get on this page.

Growth and the case against randomista development

I've guessed this is the case on 'back of the envelope' grounds for a while, so nice to see someone put more time into evaluating it.

It's not true to say EAs have been blindly on board with RCTs — I've been saying economic policy is probably the top priority for years and plenty of people have agreed that's likely the case. But I don't work on poverty so unfortunately wasn't able to take it further than that.

Making decisions under moral uncertainty

Will's book, 'Moral Uncertainty', is coming out next month for those who are interested in the topic: https://www.amazon.co.uk/Moral-Uncertainty-William-MacAskill/dp/0198722273

I'm Michelle Hutchinson, head of advising at 80,000 Hours, AMA

Hi Jessica, IIRC the main problem you'll likely encounter is that some naïve cost-effectiveness estimates will give you a really low figure, like donating $1 to corporate campaigns is as effective as being vegan a whole year. (Not exactly, but that order of magnitude.)

Given that, I'm inclined to just make it the lowest amount that feels substantial and like it would actually plausibly be enough to make someone else veg*n for a year — for me that means about $100 a year.

Assumptions about the far future and cause priority

Yes it needs to go in an explanation of how we score scale/importance in the problem framework! It's on the list. :)

Alternatively I've been wondering if we need a standalone article explaining how we can influence the long term, and what are signs that something might be highly leveraged for doing that.

Assumptions about the far future and cause priority

As a first pass, the rate of improvement should asymptote towards zero so long as there's a theoretical optimum and declining returns to further research before the heat death of the universe, which seem like pretty mild assumptions.

As an analogy, there's an impossibly wide range of configurations of matter you could in theory use to create a glass from which we can drink water. But we've already gotten most of the way towards the best glass for humans, I would contend. I don't think we could keep improving glasses in any meaningful way using a galaxy's re

... (read more)
Jc_Mourrat (2y): I think it would be really useful if this idea was explained in more detail somewhere, preferably on the 80k website. Do you think there is a chance that this happens at some point? (hopefully not too far in the future ;-) )
Assumptions about the far future and cause priority

Having settled most of the accessible universe we'll have hundreds of billions or even trillions of years to try to keep improving how we're using the matter and energy at our disposal.

Doesn't it seem almost certain that over such a long time period our annual rate of improvement in the value generated by the best configuration would eventually asymptote towards zero? I think that's all that's necessary for safety to be substantially more attractive than speed-ups.

(BTW safety is never 'infinitely' preferred because even on a strict plateau view the accessible universe is still shrinking by about a billionth a year.)
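
As a rough illustration of that comparison, here's a minimal sketch: the one-billionth-a-year figure is from the comment above, while the size of the achievable risk reduction is an illustrative assumption.

```python
# Minimal sketch: a one-year speed-up vs. a small reduction in existential risk,
# both measured as fractions of the total achievable future value.
# The ~1e-9/year shrinkage of the accessible universe is from the comment above;
# the 0.01-percentage-point risk reduction is an illustrative assumption.

loss_per_year_of_delay = 1e-9   # fraction of accessible resources lost per year of delay
risk_reduction = 1e-4           # illustrative: existential risk reduced by 0.01 percentage points

print(f"1-year speed-up gains ~{loss_per_year_of_delay:.0e} of future value")
print(f"Risk reduction gains  ~{risk_reduction:.0e} of future value")
print(f"Ratio: ~{risk_reduction / loss_per_year_of_delay:,.0f}x in favour of safety")
# Safety comes out far ahead, but not infinitely so, even on a strict plateau view.
```
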

FlorentBerthet (2y): Agreed. And even in the scenario where we could continue to find more valuable patterns of matter even billions of years in the future, I don't think that efforts to accelerate things now would have any significant impact on the value we will create in the future, because it seems very likely that our future value creation will mostly depend on major events that won't have much to do with the current state of things.

Let's consider the launch of Von Neumann probes throughout the universe as such a possible major event: even if we could increase our current growth rate by 1% with a better allocation of resources, it doesn't mean that the future launch of these probes will be 1% more efficient. Rather, the outcomes of this event seem largely uncorrelated with our growth rate prior to that moment. At best, accelerating our growth would hasten the launch by a tiny bit, but this is very different than saying "increasing our growth by 1% now will increase our whole future utility by 1%".
Jc_Mourrat (2y): Let me call X the statement: "our rate of improvement remains bounded away from zero far into the future". If I understand correctly, you are saying that we have great difficulties imagining a scenario where X happens, therefore X is very unlikely. Human imagination is very limited. For instance, most of human history shows very little change from one generation to the next; in other words, people were not able to imagine ways for future generations to do certain things in better ways than how they already knew. Here you ask our imagination to perform a spectacularly difficult task, namely to imagine what extremely advanced civilizations are likely to be doing in billions of years. I am not surprised if we do not manage to produce a credible scenario where X occurs. I do not take this as strong evidence against X.

Separately from this, I personally do not find it very likely that we will ultimately settle most of the accessible universe, as you suppose, because I would be surprised if human beings hold such a special position. (In my opinion, either advanced civilizations are not so interested in expanding in space; or else, we will at some point meet a much more advanced civilization, and our trajectory after this point will probably depend little on what we can do before it.)

Concerning the point you put in parentheses about safety being "infinitely" preferred, I meant to use phrases such as "virtually infinitely preferred" to convey that the preference is so strong that any actual empirical estimate is considered unnecessary. In footnote 5 above, I mentioned this 80k article (https://80000hours.org/2013/10/influencing-the-far-future/) intended to summarize the views of the EA community, where it is said that speedup interventions are "essentially morally neutral" (which, given the context, I take as being equivalent to saying that risk mitigation is essentially infinitely preferred).
Effective Altruism and International Trade

"changes outlook towards life, makes married life less unequal for women, increases self-respect, self-confidence, allows for better participation in society"

I agree these are all benefits, but I class them as instrumental benefits, and imagine most others here do as well.

They are benefits inasmuch as they go on to improve people's well-being.

"the human development index, it includes education as an outcome, valuable for its own sake"

The HDI also includes GDP which presumably nobody thinks is valuable for its own sake (i.e. widgets are only useful inasmuch

... (read more)
Effective Altruism and International Trade

You quote GiveWell as saying:

"We do not place much intrinsic value on increasing time in school or test scores"

But you cut off the quote in a very misleading way indeed:

We do not place much intrinsic value on increasing time in school or test scores, although we think that such improvements may have instrumental value.

Unless you think spending time in school is very useful even if it has no other benefits to kids (i.e. they don't learn anything they use later in life), GiveWell is surely right here that the benefits are mostly instrumental.

It is wr

... (read more)
lucy.ea8 (2y): This has bothered me quite a lot. I have been clear and consistent that I think that education has intrinsic value and have focused on that aspect. This can be seen from comments and posts on the forum. I had no intent to mislead, I don't see what I wrote as misleading or misrepresenting. I quoted accurately and linked to the source.

If anything it is the EA community that is misleading folks. "Development" is widely understood to include education when used in the context of "developing" countries or the global poor. Saying "Global Development" and not including education is extremely misleading. It took me 2 years of immersion in EA to finally understand that EA folks don't include education when talking about "Global Development". Maybe you should do a podcast on this.

The following URL needs to be fixed to say "Global Health and Income" or "Global Health and Poverty", right now it's misleading. https://www.effectivealtruism.org/articles/cause-profile-global-health-and-development/
lucy.ea8 (2y): From Education for All: is the world on track? EFA global monitoring report, 2002 by UNESCO:

As Sen puts it, "it is often asked whether certain political or social freedoms, such as the liberty of political participation and dissent, or opportunities to receive basic education, are or are not ‘conducive to development’. In the light of the more foundational view of development as freedom, this way of posing the question tends to miss the important understanding that these substantive freedoms (that is, the liberty of political participation or the opportunity to receive basic education or health care) are among the constituent components of development. Their relevance for development does not have to be freshly established through their indirect contribution to the growth of GNP or to the promotion of industrialization." Hence, education counts as a ‘valuable being or doing’, as an ‘end’ of development.
lucy.ea8 (2y): UNDP (United Nations Development Programme) assumes education has intrinsic value, so do I. UNDP in the 1996 Human Development Report (http://hdr.undp.org/sites/default/files/reports/257/hdr_1996_en_complete_nostats.pdf), page 50, asks the question "Why is income part of the human development index?" For them it is obvious that "Longevity and education are clearly valuable aspects of a good life"; they then go on to explain why income should be included in the index.

In this thread I asked the question earlier: "The most respected and widely used index for measuring human well being is the human development index, it includes education as an outcome, valuable for its own sake, the EA community has to explain why it deems education not useful while the UNDP thinks that it is important." I also asked this as a post: Global basic education as a missing cause priority (https://forum.effectivealtruism.org/posts/pe7QHjMpMuxT8YTir/global-basic-education-as-a-missing-cause-priority).

And that is the crux of the disagreement. I (and UNDP) believe education, like health, has intrinsic value, whereas GiveWell and the EA community do not. Let me unpack this. The time spent in school has benefits for kids even if the benefits do not show up in terms of health, wealth. Why? It changes outlook towards life, makes married life less unequal for women, increases self-respect, self-confidence, allows for better participation in society.

Rethinking the Value of Education: Amartya Sen and the Capability Approach, Dr. Sunday Olaoluwa Dada (http://internationaljournalcorner.com/index.php/theijhss/article/view/126772/87663): "There are aspects of human flourishing that education enhances that are neglected by the human capital approach. This is the aspect of education enabling human being to live freely and fully. The development of human capacity to think and reason. This facilitates the ability of individuals who are
Effective Altruism and International Trade

"If growth leads to education, then why is South Africa behind Jamaica and India, how about Bangladesh > Pakistan? Sri Lanka > Brazil"

Because it's not the only factor?

"Its very strange EA says education has no value"

'EA' does not say this, and I don't know anyone involved in EA who holds such a strong view.

Oddly, Britain has never been happier

Hi bfinn, maybe have a listen to this episode of the Freakonomics podcast: http://freakonomics.com/podcast/new-freakonomics-radio-podcast-the-suicide-paradox/

It's one of the things that shaped my view that cross-country differences in suicide are best explained by culture rather than underlying happiness.

Oddly, Britain has never been happier

I also don't trust mental health time series to show whether conditions are becoming more common, because it's equally or more likely that more people are coming forward as having, e.g. depression, as it becomes very acceptable to talk about it.

But suicide rates are hugely influenced by the social acceptability of suicide specifically, and easy access to suicide methods that allow you to successfully kill yourself on impulse (e.g. guns, which have become less accessible to people over time). So unfortunately I don't think suicide rates are a reliable way to track mental health problems over time either.

bfinn (2y): Thanks for this. The ONS suicide data is from 1981, showing a decline by about a third by 2007 - pretty big, and I'm not aware that any popular suicide method became less available in that time. (Unlike e.g. coal gas used in ovens, once a popular suicide method but it was phased out in the 1960s and 1970s leading to a suicide reduction - I don't think coal gas was available thereafter as the last plant closed in the late 1970s. And guns have never been generally available in the UK.)

The methods used have apparently changed popularity in recent years; hanging/suffocation/strangulation and poisoning are the most popular: https://www.ons.gov.uk/peoplepopulationandcommunity/birthsdeathsandmarriages/deaths/bulletins/suicidesintheunitedkingdom/2017registrations#suicide-methods But I'm not sure why they have changed other than 'fashion'. It could be the case that some of these methods are significantly more effective than others which could affect the statistics, but I doubt by this much.

Also I'm not aware that suicide has changed in acceptability in the UK in recent decades. It was never considered acceptable (unlike say in Japan). So I'm still inclined to regard suicide as a better proxy of extreme mental health problems than anything else. (That said, I'm not an expert at all in this area.)
[updated] Global development interventions are generally more effective than climate change interventions

Thanks for rewriting and republishing this. All very interesting.

On this new revised version, something that stood out to me was the truly extreme range between the optimistic and pessimistic scenarios you describe.

I think the relative cost-effectiveness range you've given spans fully ten orders of magnitude, or a range of 10,000,000,000x. Even by our standards that's a lot. If we're really this uncertain it seems we can say almost nothing. But I don't think we are that uncertain.

By choosing a value out in the tail for 4 different input variables all at on

... (read more)
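
To illustrate how tail values chosen for several inputs at once compound into a range that wide, here's a minimal sketch; the per-parameter factor is an illustrative assumption, not the model's actual ranges.

```python
# Minimal sketch of how tail choices compound across independent inputs.
# Illustrative assumption: each of the post's 4 key inputs spans roughly a
# ~316x ratio between its pessimistic and optimistic values (not the model's
# actual ranges).

per_parameter_ratio = 316   # optimistic/pessimistic ratio for one input (illustrative)
n_parameters = 4            # number of key input variables in the model

combined_ratio = per_parameter_ratio ** n_parameters
print(f"Combined optimistic/pessimistic ratio: ~{combined_ratio:.1e}")
# -> ~1.0e+10, i.e. about ten orders of magnitude, even though each input on
#    its own spans "only" a few hundred-fold.
```
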
HaukeHillebrandt (2y): Thanks Rob for taking the time to comment and my sincere apologies for the delay in replying.

1. There really is a lot of uncertainty here. Note that all parameter estimates are based on or grounded in empirical and published estimates. Even my adjustment for the social cost of carbon being over- or underestimated by 10x corresponds to values with similar orders of magnitude you can find in the literature - see cell comments of the spreadsheet model. For instance, one recent paper (https://www.sciencedirect.com/science/article/pii/S014098831930218X) by a renowned climate economist finds that under different model specifications the SCC ranges from $3.38/tCO2e to $21,889/tCO2e. Ditto with the eta parameter for the income adjustment.

2. The "realistic estimate" model scenario is what I perceive to use parameter estimates around which there is more consensus, but that's just my opinion and one can reasonably disagree with these choices.

3. I used the extreme scenarios to highlight the uncertainty and to make statements such as "Even if you believe the true social cost of carbon is higher than most models suggest (i.e. $20k per tonne, the most extreme value in the literature), then that still often is not enough to beat global development interventions".

Generally, my agenda was probably a bit simpler than people might have supposed. This was not intended to be the last word on whether climate change or development interventions are always better. Rather it's a starting point and "choose your own adventure" model to help prioritizing between a concrete climate and a concrete development charity. Note that there are four parameters that drive the results of this analysis (the SCC, the income adjustment eta, the cost to avert CO2, and the effectiveness of global dev/health vs. cash). For the first two, there really is a lot more uncertainty, but for the latter two, it's more clear. This makes the model
Updated Climate Change Problem Profile

Hi mchr3k — thanks for writing this. I'm completely slammed with other work at 80,000 Hours just now (I'm recording 7 podcast interviews this month), so I won't be able to respond right away.

For what it's worth I agree with just posting this and emailing it to us, rather than letting us hold you up. Many people are going to be interested in what you're saying here and might have useful comments to add, not just 80,000 Hours. It's also an area where reasonable people can disagree so it's useful to have a range of views represented publicly.

Possibly letting us comment on a Google Doc first might have been helpful but I don't think people should treat it as a necessary step!
