All of Robert_Wiblin's Comments + Replies

AI Risk is like Terminator; Stop Saying it's Not

I interpreted them not as saying that Terminator underplays the issue but rather that it misrepresents what a real AI would be able to do (in a way that probably makes the problem seem far easier to solve). But that may be me suffering from the curse of knowledge.

6skluug1mo
I don't think this is a good characterization of e.g. Kelsey's preference for her Philip Morris analogy over the Terminator analogy--does rogue Philip Morris sound like a far harder problem to solve than rogue Skynet? Not to me, which is why it seems to me much more motivated by not wanting to sound science-fiction-y. Same as Dylan's piece; it doesn't seem to be saying "AI risk is a much harder problem than implied by the Terminator films", except insofar as it misrepresents the Terminator films as involving evil humans intentionally making evil AI. It seems to me like the proper explanatory path is "Like Terminator?" -> "Basically" -> "So why not just not give AI nuclear launch codes?" -> "There are a lot of other ways AI could take over". "Like Terminator?" -> "No, like Philip Morris" seems liable to confuse the audience about the very basic details of the issue, because Philip Morris didn't take over the world.
AI Risk is like Terminator; Stop Saying it's Not

Isn't a key difference that in Terminator the AI seems incredibly incompetent at wiping us out? Surely we'd be destroyed in no time — to start with it could just manufacture a poison like dioxin and coat the world (or something much smarter). Going around with tanks and guns as depicted in the film is entirely unnecessary.

1skluug1mo
I feel like this is a pretty insignificant objection, because it implies someone might go around thinking, "don't worry, AI Risk is just like Terminator! All we'll have to do is bring humanity back from the brink of extinction, fighting amongst the rubble of civilization after a nuclear holocaust". Surely if people think the threat is only as bad as Terminator, that's plenty to get them to care.
Some clarifications on the Future Fund's approach to grantmaking

If it's just a form where the main reason for rejection is chosen from a list then that's probably fine/good.

I've seen people try to do written feedback before and find it a nightmare so I guess people's mileage varies a fair bit.

Some clarifications on the Future Fund's approach to grantmaking

"However, banking on this as handling the concerns that were raised doesn't account for all the things that come with unqualified rejection and people deciding to do other things, leave EA, incur critical stakeholder instability etc. as a result. "

I mean I think people are radically underestimating the opportunity cost of doing feedback properly at the moment. If I'm right then getting feedback might reduce people's chances of getting funded by say, 30%, or 50%, because the throughput for grants will be much reduced.

I would probably rather have a 20% ch... (read more)

Rob, I think you're consistently arguing against a point few people are making. You talk about ongoing correspondence with projects, or writing (potentially paragraphs of) feedback. Several people in this thread have suggested that pre-written categories of feedback would be a huge improvement from the status quo, and I can't see anything you've said that actually argues against that.

Also, as someone who semi-regularly gives feedback to 80+ people, I've never found it to make my thinking worse, but I've sometimes found it makes my thinking better.

I'm not s... (read more)

Some clarifications on the Future Fund's approach to grantmaking

It would be very surprising if there weren't opportunity costs to providing feedback. Those might include:

  1. Senior management time to oversee the project, bottlenecking other plans
  2. PR firefighting and morale counselling when 1 in ~100 people get angry at what you say and cause you grief (this will absolutely happen)
  3. Any hires capable of thinking up and communicating helpful feedback (this is difficult!) could otherwise use that time to read and make decisions on more grant proposals in more areas — or just improve the decision-making among the same pool
... (read more)

an opportunity cost to providing feedback

huge mistake for Future Fund to provide substantial feedback except in rare cases.

 

Yep, I'd imagine what makes sense is somewhere between 'highly involved and coordinated attempt to provide feedback at scale' and 'zero'. I think it's tempting to look away from how harmful 'zero' can be at scale.

> That could change in future if their other streams of successful applicants dry up and improving the projects of people who were previously rejected becomes the best way to find new things they want to fund.


Agreed – this seems... (read more)

Tentative Reasons You Might Be Underrating Having Kids

I find these arguments intellectually interesting to a degree.

But like you, my aesthetic preference is just that people who personally feel like having kids should have kids, and those who personally don't feel like having kids shouldn't.

If we followed that dollar-store rule of thumb I expect things would go roughly as well as they can, all things considered.

FTX/CEA - show us your numbers!

My guess is this would reduce grant output a lot relative to how much I think anyone would learn (maybe it would cut grantmaking in half?), so personally I'd rather see them just push ahead and make a lot of grants, then review or write about just a handful of them from time to time.

Introducing 80k After Hours

Here you go: https://www.stitcher.com/show/80k-after-hours

(Seems like Stitcher is having technical problems, I've contacted their technical support about it.)

Why is Operations no longer an 80K Priority Path?

For the 10/10 criteria do you mean a $50k hiring bonus, or a $50k annual salary?

4CarolineJ5mo
A $50k hiring bonus, which I think is really really high - maybe too high. $10-20k would probably make more sense. I've edited my comment to say $20k instead of 50 and for clarity. (Curious to know if you think this is about right or not).
Think about EA alignment like skill mastery, not cult indoctrination

"creating closed social circles"

Just on this my impression is that more senior people in the EA community actively recommend not closing your social circle because, among other reasons, it's more robust to have a range of social supports from separate groups of people, and it's better epistemically not to exclusively hang out with people who already share your views on things.

Inasmuch as people's social circles shrink I don't think it's due to guidance from leaders (as in a typical cult, I would think) but rather because people naturally find it more fun to socialise with people who share their beliefs and values, even if they think that's not in their long-term best interest.

The Bioethicists are (Mostly) Alright

Cool yeah. I just want to provide another more boring reason a lot of us have piled on to bioethics that doesn't even require ingroup-outgroup dynamics.

Basically all of the people you're citing (like me) have an amateur interest in bioethics as it affects legal policy or medical practice or pandemic control (the thing we actually follow closely).

You and I agree that harmful decisions are regularly being made by IRBs (and politicians), often on the basis of supposed 'bioethics'. We also both agree there are at least a handful of poor thinkers in the field w... (read more)

1Devin Kalish5mo
Thanks! I'm glad you found it useful.
The Bioethicists are (Mostly) Alright

Fair enough, I'm happy to talk less about bioethicists and talk more about institutional review of research ethics.

For what it's worth I and other critics do regularly/constantly refer people to the classic dissection of the problem caused by IRBs (The Censor's Hand).

We also talk about the misaligned incentives faced by bureaucrats about as ad nauseam as we talk about bioethics.

And when I've seen IRBs in action they have worked to keep their decisions and the reasons for them secret and intimidate researchers into not speaking out, while philosophers publi... (read more)

2Devin Kalish5mo
This is all fair, and I appreciate the response. I don’t mean to say that you and other critics overall have bad takes on the issue of research oversight, I agree with most of the criticisms, and think they are important. It’s just on the topic of bioethicists specifically that I find a good deal of the discourse weird (I should also add that there are plenty of particular bioethicists, like Leon Kass, who are worthy of the criticisms, I just don’t think they are representative, or the root of the problem).
On Mike Berkowitz's 80k Podcast

Exciting to see a post about this episode 5 hours after we put it out (!).

A few quick thoughts:

"Berkowitz never mentions that the median voter in most Republican primaries is currently "pro-Trump" so he leaves out the single sentence explanation."

No but I say that. IIRC one of his responses also takes this background explanation as a given.

"Japan and New Zealand have shown that sovereign parliamentary democracies do not manifest even nascent electoral movements."

In general I'm with you on thinking some systems of government are less conducive to populist m... (read more)

PhD student mutual line-manager invitation

Great to see someone giving this a crack! Let me know how it works out. :)

How You Can Counterfactually Send Millions of Dollars to EA Charities

"The 2.16% U.S. federal funds rate in 2019 is one of the most conservative interest rates possible."

The U.S. Federal Funds rate has been effectively 0% since April 2020 and was roughly 0% for six years from 2009 to 2015. The same is roughly true of the UK. Central banks in both countries are saying they'll keep rates low for years to come.

I can't immediately find a reputable business savings account in the UK/US that currently offers more than 1%.

Those that offer the highest rates (something approaching 1%) on comparison sites tend to have conditions (... (read more)

Thanks for your thoughtful reply Rob!

"The 2.16% U.S. federal funds rate in 2019 is one of the most conservative interest rates possible."

The U.S. Federal Funds rate has been effectively 0% since April 2020 and was roughly 0% for six years from 2009 to 2015. The same is roughly true of the UK. Central banks in both countries are saying they'll keep rates low for years to come.

I can't immediately find a reputable business savings account in the UK/US that currently offers more than 1%.

To quote my reply to GMcGowan, "We used the latest Form 990 data from 201... (read more)

If Causes Differ Astronomically in Cost-Effectiveness, Then Personal Fit In Career Choice Is Unimportant

In addition to the issues raised by other commentators I would worry that someone trying to work on something they're a bad fit for can easily be harmful.

That especially goes for things related to existential risk.

And in addition to the obvious mechanisms, having most of the people in a field be ill-suited to what they're doing but persisting for 'astronomical waste' reasons will mean most participants struggle to make progress, get demoralized, and repel others from joining them.

2EdoArad2y
My gut reaction was to be surprised that there are whole fields or causes in which some people not only aren't a good fit for the most important roles there but that they just can't use their skill set in a constructive way in which they would feel that they are making some contribution. But on second thought, we are talking about extremely small fields with limited resources. This means that it would be difficult financially for people who aren't skilled in accordance with the top needs of the field. Then again, the field might grow and people can upskill quite a bit if they are willing to wait a decade or two before working directly on their favorite x-risk.
How much does a vote matter?

He says he's going to write a response. If I recall Jason isn't a consequentialist so he may have a different take on what kinds of things we can have a duty to do.

How much does a vote matter?

Want to write a TLDR summary? I could find somewhere to stick it.

5Nathan Young2y
tl;dr

  • In a close election in the US, you have a 1 in 10 million chance of swinging the election if you live in a competitive district.
  • A 1 in 10 million (10,000,000) chance might sound small, but since the US government spends $17,500,000,000,000, it's worth nearly $2 million.
  • Other countries spend less money, but their districts are often smaller. In competitive districts, your vote can still be worth a lot.
  • Government spending is easy to quantify, but there is also foreign policy, and social and political freedoms. How valuable is a 1 in 10 million chance to halve the chance of nuclear war?
  • If you aren't informed, here are two tips:
    • Find someone informed who shares your values. Ask them how they will vote and match them.
    • Read and follow international opinion polling - your country might be 50/50 but the world might not.
  • If you think it's worth voting, it's probably worth telling your friends to as well.
  • In conclusion:
    • If you already follow politics, often it will be effective to vote.
    • If you would have to spend time researching, that time might be better spent working on one of the world's pressing problems or earning to give to an effective charity.
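The expected-value arithmetic in the summary above can be sketched in a few lines. The probability and spending figures are the ones quoted in the summary; the framing of "expected dollars directed per vote" is an illustrative simplification, not a full model:

```python
# Rough expected-value sketch for a single vote in a competitive US district.
# Both input figures come from the tl;dr above; this is an illustration,
# not a complete model of the value of voting.

p_swing = 1 / 10_000_000      # chance one vote swings a close election
federal_spending = 17.5e12    # ~$17.5 trillion of US government spending

# Expected dollars of government spending "influenced" per vote
expected_value = p_swing * federal_spending
print(f"${expected_value:,.0f}")  # prints $1,750,000, i.e. nearly $2 million
```

The same calculation scales to other countries: divide the budget at stake by the (larger or smaller) odds of a single vote being decisive there.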
1Nathan Young2y
Sure. Though that's not what I meant. I more mean an op-ed style version of the same content that is lighter and more chatty. But maybe I'm misunderstanding the process? I guess if a journalist wants to summarise it, they'll do that themselves? Eg in this style https://unherd.com/2020/10/why-do-people-believe-such-complete-rubbish/
How much does a vote matter?

It seems like to figure out whether it's a good use of time for 300 people like you to vote, you still need to figure out whether it's worth it for any one of them.

1ofer2y
What I mean to say is that, roughly speaking, one should compare the world where people like them vote to the world where people like them don't vote, and choose the better world. That can yield a different decision than when one decides without considering the fact that they're not deciding just for themselves.
When you shouldn't use EA jargon and how to avoid it

I'm actually more favourable to a smaller EA community, but I still think jargon is bad. Using jargon doesn't disproportionately appeal to the people we want.

The most capable folks are busy with other stuff and don't have time to waste trying to understand us. They're also more secure and uninterested in any silly in-group signalling games.

When you shouldn't use EA jargon and how to avoid it

Yes but grok also lacks that connotation to the ~97% of the population who don't know what it means or where it came from.

8Max_Daniel2y
As one data point, I had to google what Stranger in a Strange Land refers to, and don't know what connotations the comment above yours [1] refers to. I always assumed 'grok' was just a generic synonym for '(deeply) understand', and didn't even particularly associate it with the EA community. (Maybe it's relevant here that I'm not a native speaker.) [1] Replacing the jargon term 'grandparent' ;)
4Will Bradshaw2y
And the way it's used in tech is almost totally lacking the mystical angle from Stranger in a Strange Land anyway. Also Stranger in a Strange Land is a profoundly weird and idiosyncratic book and there's not really any reason to evoke it in most EA contexts. (That said I do think "deeply understand" doesn't quite do the job.)
[Link] "Where are all the successful rationalists?"

The EA community seems to have a lot of very successful people by normal social standards, pursuing earning to give, research, politics and more. They are often doing better by their own lights as a result of having learned things from other people interested in EA-ish topics. Typically they aren't yet at the top of their fields but that's unsurprising as most are 25-35.

The rationality community, inasmuch as it doesn't overlap with the EA community, also has plenty of people who are successful by their own lights, but their goals tend to be becoming thinke... (read more)

Avoiding Munich's Mistakes: Advice for CEA and Local Groups

To better understand your view, what are some cases where you think it would be right to either

  1. not invite someone to speak, or
  2. cancel a talk you've already started organising,

but only just?

That is, cases where it's just slightly over the line of being justified.

Can my self-worth compare to my instrumental value?

For whatever reason people who place substantial intrinsic value on themselves seem to be more successful and have a larger social impact in the long term. It appears to be better for mental health, risk-taking, and confidence among other things.

You're also almost always better placed than anyone else to provide the things you need — e.g. sleep, recreation, fun, friends, healthy behaviours — so it's each person's comparative advantage to put extra effort into looking out for themselves. I don't know why, but doing that is more motivating if it feels like i... (read more)

9Ramiro2y
I think this is still an instrumental reason for someone to place "substantial intrinsic value on themselves." Though I have no problem with that, I thought what C Tilli complained about was precisely that, for EAs, all self-concern is for the sake of the greater good, even when it is rephrased as a psychological need for a small amount of self-indulgence. Second, I'd say that people who are "more successful and have a larger social impact in the long term" are "people who place substantial intrinsic value on themselves," but that's just selection dynamics: if you have a large impact, then you (likely) place substantial intrinsic value on yourself. Even if it does imply that you're more likely to succeed if you place substantial intrinsic value on yourself (if only people who do that can succeed), it does not say anything about failure – confident people fail all the time, and the worst way of failing seems to be reserved for those who place substantial value on themselves and end up being successful with the wrong values [https://markmanson.net/personal-values]. But I wonder if our sample of "successful people" is not too biased towards those who get the spotlight. Petrov didn't seem to put a lot of value on himself, and Arkhipov is often described as exceptionally humble; no one strives to be an unsung hero.
Keynesian Altruism

Yep that sounds good, non-profits should aim to have fairly stable expenditure over the business cycle.

I think I was thrown off your true motivation by the name 'Keynesian altruism'. It might be wise to rename it 'countercyclical' so it doesn't carry the implication that you're looking for an economic multiplier.

Keynesian Altruism

The idea that charities should focus on spending money during recessions because of the extra benefit that provides seems wrong to me.

Using standard estimates of the fiscal multiplier during recessions — and ignoring any offsetting effects your actions have on fiscal or monetary policy — if a US charity spends an extra $1 during a recession it might raise US GDP by between $0 and $3.

If you're a charity spending $1, and just generally raising US GDP by $3 is a significant fraction of your total social impact, you must be a very ineffective organisation. I c... (read more)
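As a rough sanity check on the magnitudes in this argument, here is a sketch with assumed numbers: the $0 to $3 multiplier range comes from the comment above, while the per-dollar direct-benefit figure is a purely illustrative assumption, not a claim about any real charity.

```python
# Sketch of the argument above with assumed numbers. The $0-$3 range is the
# recession fiscal multiplier quoted in the comment; the "direct benefit"
# figure is an illustrative assumption about an effective charity.

spend = 1.00                     # one charity dollar spent in a recession
multiplier_low, multiplier_high = 0.0, 3.0

gdp_boost_low = spend * multiplier_low
gdp_boost_high = spend * multiplier_high

# Suppose an effective charity values its direct benefits at, say, $50 per
# dollar spent (assumed). Even the top of the multiplier range is then a
# small fraction of total impact.
assumed_direct_benefit = 50.0
share_of_impact = gdp_boost_high / assumed_direct_benefit

print(f"GDP boost: ${gdp_boost_low:.2f}-${gdp_boost_high:.2f} per $1")
print(f"At most {share_of_impact:.0%} of the assumed direct benefit")
```

On these assumptions the multiplier contributes at most a few percent of the charity's impact, which is the comment's point: if the GDP boost were a large share of your impact, your direct work would have to be very ineffective.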

Thanks for your comment.

I'm not advocating it because of the fiscal multiplier. That would be the cherry on the cake.

The first simple step is simply to say don't cut back expenditure, because shrinking and regrowing an organisation is costly. Most charities (though EA ones are somewhat atypical) see their income reduced during bad times. And since most charities think in bland terms of x months of reserves, this means their expenditure fluctuates as well. This is not an efficient way to manage an organisation. In good times, build a buffer, s... (read more)

More empirical data on 'value drift'

Is there even 1 exclusively about people working at EA organisations?

If someone had taken a different job with the goal of having a big social impact, and we didn't think what they were doing was horribly misguided, I don't think we would count them as having 'dropped out of EA' in any of the 6 data sets.

I was referring to things like phrasings used and how often someone working for an EA org vs not was discussed relative to other things; I wasn't referring to the actual criteria used to classify people as having dropped out / reduced involvement or not.

Given that Ben says he's now made some edits, it doesn't seem worth combing through the post again in detail to find examples of the sort of thing I mean. But I just did a quick ctrl+f for "organisations", and found this, as one example:

Of the 14 classified as staff, I don’t count any clear cases of

... (read more)
The case of the missing cause prioritisation research

"For example 80000 Hours have stopped cause prioritisation work to focus on their priority paths"

Hey Sam — being a small organisation 80,000 Hours has only ever had fairly limited staff time for cause priorities research.

But I wouldn't say we're doing less of it than before, and we haven't decided to cut it. For instance see Arden Koehler's recent posts about Ideas for high impact careers beyond our priority paths and Global issues beyond 80,000 Hours’ current priorities.

We aim to put ~10% of team time into underlying research, where one topic is trying

... (read more)
9weeatquince2y
Super great to hear that 10% of 80000 Hours team time will go into underlying research. (Also apologies for getting things wrong, was generalising from what I could find online about what 80K plans to work on – have edited the post). If you have more info on what this research might look into do let me know. – – There is an explore-exploit tradeoff: continuing to do cause prioritisation research needs to be weighed against focusing on specific cause areas. I imply in my post that EA organisations have jumped too quickly into exploit. (I mention 80K and FHI, but I am judging from an outside view so might be wrong). I think this is a hard case to make, especially to anyone who is more certain than me about which causes matter (which may be most EA folk). That said there are other reasons for continuing to explore: to create a diverse community, epistemic humility, game theoretic reasons (better if everyone explores a bit more), to counter optimism bias, etc. Not sure I am explaining this well. I guess I am saying that I still think the high level point I was making stands: that EA organisations seem to move towards exploit quicker than I would like. But do let me know if you disagree.
Intellectual Diversity in AI Safety

It seems like lots of active AI safety researchers, even a majority, are aware of Yudkowsky and Bostrom's views but only agree with parts of what they have to say (e.g. Russell, Amodei, Christiano, the teams at DeepMind, OpenAI, etc).

There may still not be enough intellectual diversity, but having the same perspective as Bostrom or Yudkowsky isn't a filter to involvement.

AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher

As Michael says, common sense would indicate I must have been referring to the initial peak, or the peak in interest/panic/policy response, or the peak in the UK/Europe, or peak where our readers are located, or — this being a brief comment on an unrelated topic — just speaking loosely and not putting much thought into my wording.

FWIW it looks like globally the rate of new cases hasn't peaked yet. I don't expect the UK or Europe will return to a situation as bad as the one they went through in late March and early April. Unfortunately the US and Latin America are already doing worse than they were then.

> As Michael says, common sense would indicate

This sounds like a status move. I asked a sincere question and maybe I didn't think too carefully when I asked it, but there's no need to rub it in.

> FWIW it looks like globally the rate of new cases hasn't peaked yet. I don't expect the UK or Europe will return to a situation as bad as the one they went through in late March and early April. Unfortunately the US and Latin America are already doing worse than they were then.

Neither the US nor Latin America could plausibly be said to have peaked then.

Thanks, I appreciate the clarification! :)

AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher

I think you know what I mean — the initial peak in the UK, the country where we are located, in late March/April.

6Linch2y
Sorry if I sounded mean! I genuinely didn't know what you meant! I live in the US and I assumed that most of 80k's audience would be more concerned about worldwide numbers or their home country's than that of 80k's "base." (I also didn't consider the possibility that there are other reasons than audience interest for you to be prioritizing certain podcasts, like logistics.) I really appreciate a lot of your interviews on covid-19, btw. Definitely didn't intend my original comment in a mean way!
AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher

There's often a few months between recording and release and we've had a handful of episodes that took a frustratingly long time to get out the door, but never a year.

The time between the first recording and release for this one was actually 9 months. The main reason was Howie and Ben wanted to go back and re-record a number of parts they didn't think they got right the first time around, and it took them a while to both be free and in the same place so they could do that.

A few episodes were also pushed back so we could get out COVID-19 interviews during the peak of the epidemic.

4Linch2y
Wait, what's your probability that we're past the peak (in terms of, eg, daily worldwide deaths)?
Study results: The most convincing argument for effective donations

Thanks for doing this research, nice work.

Could you make your figure a little larger? It's hard to read on a desktop. It might also be easier for the reader if each of the five arguments had a one-word name to keep track of the gist of their actual content.

"As you can see, the winner in Phase 2 was Argument 9 by a nose. Argument 9 was also the winner by a nose in Phase 1, and thus the winner overall."

I don't think this is quite right. Arguments 5 and 12 are very much within the confidence interval for Argument 9. Eyeballing it I would guess we can only

... (read more)
6Stefan_Schubert2y
Eric Schwitzgebel responded as follows to a similar comment on his wall [https://www.facebook.com/eschwitz/posts/10220331421028224?comment_id=10220332373292030] : But many won't interpret it that way and further clarification would have been good, yes. Edit: Schwitzgebel's post actually had another title: "Contest Winner! A Philosophical Argument That Effectively Convinces Research Participants to Donate to Charity [https://schwitzsplinters.blogspot.com/2020/06/contest-winner-philosophical-argument.html] "
Problem areas beyond 80,000 Hours' current priorities

Hi Tobias — thanks for the ideas!

Invertebrate welfare is wrapped into 'Wild animal welfare', and reducing long-term risks from malevolent actors is partially captured under 'S-risks'. We'll discuss the other two.

Should EA Buy Distribution Rights for Foundational Books?

For future reference, next time you need to look up the page number for a citation, Library Genesis can quickly let you access a digital copy of almost any book: https://en.wikipedia.org/wiki/Library_Genesis

Many books are still not available on Library Genesis. Fortunately, a sizeable fraction of those can be "borrowed" for 14 days from the Internet Archive.

Will protests lead to thousands of coronavirus deaths?

I didn't mean to imply that the protests would fix the whole problem, obviously they won't.

As you say you'd need to multiply through by a distribution for 'likelihood of success' and 'how much of the problems solved'.

3Will Bradshaw2y
Sure, I didn't think you were saying that the protests would be a panacea. My main point was less about probability/degree of success and more about counterfactual impact.
Will protests lead to thousands of coronavirus deaths?

I think a crux for some protesters will be how much total damage they think bad policing is doing in the USA.

While police killings or murders draw the most attention, much more damage is probably done in other ways, such as through over-incarceration, petty harassment, framing innocent people, bankrupting folks through unnecessary fines, enforcing bad laws such as drug prohibition, assaults, and so on. And that total damage accumulates year after year.

On top of this we could add the burden of crime itself that results from poor policing practices, including

... (read more)

These points don't apply to the UK and elsewhere to anywhere near the same extent, so the post does at least seem like a good argument against the protests in the UK and elsewhere.

I think this is the wrong question.

The point of lockdown is that for many people it is individually rational to break the lockdown - you can see your family, go to work, or have a small wedding ceremony with little risk and large benefits - but this imposes external costs on other people. As more and more people break lockdown, these costs get higher and higher, so we need a way to persuade people to stay inside - to make them consider not only the risks to themselves, but also the risks they are imposing on other people. We solve this with a combination ... (read more)

I suspect that a lot of protesters would be very angry we're even raising these kinds of issues, but...

If we're being consequentialist about this, then the impact of the protests is not the difference between fixing these injustices, and the status quo continuing forever. It's the difference between a chance of fixing these injustices now, and a chance of fixing them next time a protest-worthy incident comes around.

Sadly, opportunities for these kinds of protests seem to come around fairly regularly in the US. So I expect these protests are probably only r

... (read more)
How can I apply person-affecting views to Effective Altruism?

If I weren't interested in creating more new beings with positive lives I'd place greater priority on:

  • Ending the suffering and injustice suffered by animals in factory farming
  • Ending the suffering of animals in the wilderness
  • Slowing ageing, or cryonics (so the present generation can enjoy many times more positive value over the course of their lives)
  • Radical new ways to dramatically raise the welfare of the present generation (e.g. direct brain stimulation as described here)

I haven't thought much about what would look good from a conservative Christia

... (read more)
Eleven recent 80,000 Hours articles on how to stop COVID-19 & other pandemics

Hi PBS, I understand where you're coming from and expect many policy folks may well be having a bigger impact than front-line doctors, because in this case prevention is probably better than treatment.

At the same time I can see why we don't clap for them in that way, because they're not taking on a particularly high risk of death and injury in the same way the hospital staff are right now. I appreciate both, but on a personal level I'm more impressed by people who continue to accept a high risk of contracting COVID-19 in order to treat patients.

Toby Ord’s ‘The Precipice’ is published!

I've compiled 16 fun or important points from the book for the write-up of my interview with Toby, which might well be of interest to people here. :)

Who should give sperm/eggs?

Hi Khorton — yes as I responded to Denise, it appears the one year thing must have been specific to the (for-profit) bank I spoke with. They pay so many up-front costs for each new donor I think they want to ensure they get a lot of samples out of each one to be able to cover them.

And perhaps they were highballing the 30+ number, so you couldn't say they didn't tell you should the most extreme thing happen, even if it's improbable.

Who should give sperm/eggs?

Hmmmm, this is all what I was told at one place. Maybe some of these rules — 30 kids max, donating for a year at a minimum, or the 99% figure — are specific to that company, rather than being UK-wide norms/regulations.

Or perhaps they were rounding up to 99% to just mean 'the vast majority'.

I'd forgotten about the ten family limit, thanks for the reminder.

Like you I have the impression that they're much less selective on eggs.

Who should give sperm/eggs?

In some ways the UK sperm donation process is an even more serious commitment than egg donation.

From what I was told, the rejection rate is extremely high — close to 99% of applicants are filtered out for one reason or another. If you get through that process they'll want you to go in and donate once a week or more, for at least a year. Each time you want to donate, you can't ejaculate for 48 hours beforehand.

And the place I spoke to said they'd aim to sell enough sperm to create 30 kids in the UK, and even more overseas.

The ones born in the UK can find ou

...
3Khorton2y
HFEA says that most donors "create one or two families, with one or two children each". The legal maximum is 10 families. "You’ll normally need to go to a fertility clinic once a week for between three and six months to make your donation." https://www.hfea.gov.uk/donation/donors/donating-your-sperm/
5Denise_Melchin2y
I'm fairly surprised by this response; it doesn't match what I have read. The Human Fertilisation and Embryology Authority imposes a limit for sperm and egg donors to donate to a maximum of ten families in the UK, although there is no limit on how many children might be born to these ten families (I'm struggling to link, but google 'HFEA ten family limit'). But realistically, they won't all want to have three children.

I'm curious whether you have a source for the claim that 99% of prospective sperm donors in the UK get rejected? I'm much less confident about this, but it doesn't line up with my impression. I also didn't have the impression they were particularly picky about egg donors, unlike in the US.

But yes, it's true for sperm and egg donors alike that in the UK they can be contacted once the offspring turns 18.
Attempted summary of the 2019-nCoV situation — 80,000 Hours

I know 2 working in normal pandemic preparedness and 2-3 in EA GCBR stuff.

I can offer introductions though they are probably worked off their feet just now. DM me somewhere?

1Wei_Dai2y
Thanks Rob, I emailed you.
Should Longtermists Mostly Think About Animals?

Part of the issue might be the subheading "Space colonization will probably include animals".

If the heading had been 'might', then people would be less likely to object. Many things 'might' happen!

3abrahamrowe2y
That makes sense!
8Peter Wildeford2y
Good point. I agree.
Should Longtermists Mostly Think About Animals?

80% seems reasonable. It's hard to be confident about many things that far out, but:

i) We might be able to judge what things seem consistent with others. For example, it might be easier to say whether we'll bring pigs to Alpha Centauri if we go, than whether we'll ever go to Alpha Centauri.

ii) That we'll terraform other planets is itself fairly speculative, so it seems fair to meet speculation with other speculation. There's not much alternative.

iii) Inasmuch as we're focussing in on what is (in my opinion) a narrow part of the whole probability space — lik

...
5Peter Wildeford2y
I agree. However, I suppose under a s-risk longtermist paradigm, a tiny chance of spacefaring turning out in a particular way could still be worth taking action to prevent or even be of utmost importance. To wit, I think a lot of retorts to Abraham's argument appear to me to be of the form "well, this seems rather unlikely to happen", whereas I don't think such an argument actually succeeds. And to reiterate for clarity, I'm not taking a particular stance on Abraham's argument itself - only saying why I think this one particular counterargument doesn't work for me.
Should Longtermists Mostly Think About Animals?

I apologise if I'm missing something as I went over this very quickly.

I think a key objection for me is to the idea that wild animals will be included in space settlement in any significant numbers.

If we do settle space, I expect most of that, outside of this solar system, to be done by autonomous machines rather than human beings. Most easily habitable locations in the universe are not on planets, but rather freestanding in space, using resources from asteroids and solar energy.

Autonomous intelligent machines will be at a great advantage over animals fro

...

I worry this is very overconfident speculation about the very far future. I'm inclined to agree with you, but I feel hard-pressed to put more than say 80% odds on it. I think the kind of s-risk nonhuman animal dystopia that Rowe mentions (and has been previously mentioned by Brian Tomasik) seems possible enough to merit significant concern.

(To be clear, I don't know how much I actually agree with this piece, agree with your counterpoint, or how much weight I'd put on other scenarios, or what those scenarios even are.)

Hey Rob!

I'm not sure that even under the scenario you describe animal welfare doesn't end up dominating human welfare, except under a very specific set of assumptions. In particular, you describe ways for human-esque minds to explode in number (propagating through space as machines or as emulations). Without appropriate efforts to change the way humans perceive animal welfare (wild animal welfare in particular), it seems very possible that 1) humans/machine descendants might manufacture/emulate animal-minds (and since wild animal welfare hasn't...

I also expect artificial sentience to vastly outweigh natural sentience in the long-run, though it's worth pointing out that we might still expect focusing on animals to be worthwhile if it widens people's moral circles.

If I did believe animals were going to be brought on space settlement, I would think the best wild-animal-focussed project would be to prevent that from happening, by figuring out what could motivate people to do so...

One way this could happen is if the deep ecologists or people who care about life-in-general "win", and for some reason have an extremely strong preference for spreading biological life to the stars without regard to sentient suffering.

I'm pretty optimistic this won't happen however. I think by default we should expect that...

Concerning the Recent 2019-Novel Coronavirus Outbreak

Howie and I just recorded a 1h15m conversation going through what we do and don't know about nCoV for the 80,000 Hours Podcast.

We've also compiled a bunch of links to the best resources on the topic that we're aware of which you can get on this page.

Growth and the case against randomista development

I've guessed this is the case on 'back of the envelope' grounds for a while, so nice to see someone put more time into evaluating it.

It's not true to say EAs have been blindly on board with RCTs — I've been saying economic policy is probably the top priority for years and plenty of people have agreed that's likely the case. But I don't work on poverty so unfortunately wasn't able to take it further than that.
