Isn't a key difference that in Terminator the AI seems incredibly incompetent at wiping us out? Surely we'd be destroyed in no time — to start with it could just manufacture a poison like dioxin and coat the world (or something much smarter). Going around with tanks and guns as depicted in the film is entirely unnecessary.
If it's just a form where the main reason for rejection is chosen from a list then that's probably fine/good.
I've seen people try to do written feedback before and find it a nightmare so I guess people's mileage varies a fair bit.
"However, banking on this as handling the concerns that were raised doesn't account for all the things that come with unqualified rejection and people deciding to do other things, leave EA, incur critical stakeholder instability etc. as a result. "
I mean I think people are radically underestimating the opportunity cost of doing feedback properly at the moment. If I'm right, then getting feedback might reduce people's chances of getting funded by, say, 30% or 50%, because the throughput for grants will be much reduced.
I would probably rather have a 20% ch... (read more)
Rob, I think you're consistently arguing against a point few people are making. You talk about ongoing correspondence with projects, or writing (potentially paragraphs of) feedback. Several people in this thread have suggested that pre-written categories of feedback would be a huge improvement from the status quo, and I can't see anything you've said that actually argues against that.
Also, as someone who semi-regularly gives feedback to 80+ people, I've never found it to make my thinking worse, but I've sometimes found it makes my thinking better.
I'm not s... (read more)
It would be very surprising if there weren't an opportunity cost to providing feedback. Such costs might include:
an opportunity cost to providing feedback
huge mistake for Future Fund to provide substantial feedback except in rare cases.
Yep, I'd imagine what makes sense is somewhere between 'highly involved and coordinated attempt to provide feedback at scale' and 'zero'. I think it's tempting to look away from how harmful 'zero' can be at scale.
> That could change in future if their other streams of successful applicants dry up and improving the projects of people who were previously rejected becomes the best way to find new things they want to fund.
Agreed – this seems... (read more)
I find these arguments intellectually interesting to a degree.
But like you, my aesthetic preference is just that people who personally feel like having kids should have kids, and those who personally don't feel like having kids shouldn't.
If we followed that dollar-store rule of thumb I expect things would go roughly as well as they can, all things considered.
My guess is this would reduce grant output a lot relative to how much I think anyone would learn (maybe it would cut grantmaking in half?), so personally I'd rather see them just push ahead and make a lot of grants, then review or write about just a handful of them from time to time.
Here you go: https://www.stitcher.com/show/80k-after-hours
(Seems like Stitcher is having technical problems; I've contacted their technical support about it.)
For the 10/10 criterion, do you mean a $50k hiring bonus or a $50k annual salary?
"creating closed social circles"
Just on this, my impression is that more senior people in the EA community actively recommend not closing your social circle because, among other reasons, it's more robust to have a range of social supports from separate groups of people, and it's better epistemically not to exclusively hang out with people who already share your views on things.
Inasmuch as people's social circles shrink I don't think it's due to guidance from leaders (as in a typical cult, I would think) but rather because people naturally find it more fun to socialise with people who share their beliefs and values, even if they think that's not in their long-term best interest.
Cool, yeah. I just want to provide another, more boring reason a lot of us have piled on to bioethics, one that doesn't even require ingroup-outgroup dynamics.
Basically all of the people you're citing (like me) have an amateur interest in bioethics as it affects legal policy or medical practice or pandemic control (the thing we actually follow closely).
You and I agree that harmful decisions are regularly being made by IRBs (and politicians), often on the basis of supposed 'bioethics'. We also both agree there are at least a handful of poor thinkers in the field w... (read more)
Fair enough, I'm happy to talk less about bioethicists and talk more about institutional review of research ethics.
For what it's worth I and other critics do regularly/constantly refer people to the classic dissection of the problem caused by IRBs (The Censor's Hand).
We also talk about the misaligned incentives faced by bureaucrats about as ad nauseam as we talk about bioethics.
And when I've seen IRBs in action they have worked to keep their decisions and the reasons for them secret and intimidate researchers into not speaking out, while philosophers publi... (read more)
Exciting to see a post about this episode 5 hours after we put it out (!).
A few quick thoughts:
"Berkowitz never mentions that the median voter in most Republican primaries is currently "pro-Trump" so he leaves out the single sentence explanation."
No, but I say that. IIRC one of his responses also takes this background explanation as a given.
"Japan and New Zealand have shown that sovereign parliamentary democracies do not manifest even nascent electoral movements."
In general I'm with you on thinking some systems of government are less conducive to populist m... (read more)
Great to see someone giving this a crack! Let me know how it works out. :)
"The 2.16% U.S. federal funds rate in 2019 is one of the most conservative interest rates possible."
The U.S. Federal Funds rate has been effectively 0% since April 2020 and was roughly 0% for six years from 2009 to 2015. The same is roughly true of the UK. Central banks in both countries are saying they'll keep rates low for years to come.
I can't immediately find a reputable business savings account in the UK/US that currently offers more than 1%.
Those that offer the highest rates (something approaching 1%) on comparison sites tend to have conditions (... (read more)
Thanks for your thoughtful reply Rob!
"The 2.16% U.S. federal funds rate in 2019 is one of the most conservative interest rates possible."
The U.S. Federal Funds rate has been effectively 0% since April 2020 and was roughly 0% for six years from 2009 to 2015. The same is roughly true of the UK. Central banks in both countries are saying they'll keep rates low for years to come.
I can't immediately find a reputable business savings account in the UK/US that currently offers more than 1%.
To quote my reply to GMcGowan, "We used the latest Form 990 data from 201... (read more)
In addition to the issues raised by other commentators I would worry that someone trying to work on something they're a bad fit for can easily be harmful.
That especially goes for things related to existential risk.
And in addition to the obvious mechanisms, having most of the people in a field be ill-suited to what they're doing but persisting for 'astronomical waste' reasons will mean most participants struggle to make progress, get demoralized, and repel others from joining them.
He says he's going to write a response. If I recall Jason isn't a consequentialist so he may have a different take on what kinds of things we can have a duty to do.
Want to write a TLDR summary? I could find somewhere to stick it.
It seems like, to figure out whether it's a good use of time for 300 people like you to vote, you still need to figure out whether it's worth it for any single one of them.
I'm actually more favourable to a smaller EA community, but I still think jargon is bad. Using jargon doesn't disproportionately appeal to the people we want.
The most capable folks are busy with other stuff and don't have time to waste trying to understand us. They're also more secure and uninterested in any silly in-group signalling games.
Yes, but 'grok' also lacks that connotation for the ~97% of the population who don't know what it means or where it came from.
The EA community seems to have a lot of very successful people by normal social standards, pursuing earning to give, research, politics and more. They are often doing better by their own lights as a result of having learned things from other people interested in EA-ish topics. Typically they aren't yet at the top of their fields but that's unsurprising as most are 25-35.
The rationality community, inasmuch as it doesn't overlap with the EA community, also has plenty of people who are successful by their own lights, but their goals tend to be becoming thinke... (read more)
To better understand your view, what are some cases where you think it would be right to either
but only just?
That is, cases where it's just slightly over the line of being justified.
For whatever reason people who place substantial intrinsic value on themselves seem to be more successful and have a larger social impact in the long term. It appears to be better for mental health, risk-taking, and confidence among other things.
You're also almost always better placed than anyone else to provide the things you need — e.g. sleep, recreation, fun, friends, healthy behaviours — so it's each person's comparative advantage to put extra effort into looking out for themselves. I don't know why, but doing that is more motivating if it feels like i... (read more)
Yep, that sounds good: non-profits should aim to have fairly stable expenditure over the business cycle.
I think I was thrown off your true motivation by the name 'Keynesian altruism'. It might be wise to rename it 'countercyclical' so it doesn't carry the implication that you're looking for an economic multiplier.
The idea that charities should focus on spending money during recessions because of the extra benefit that provides seems wrong to me.
Using standard estimates of the fiscal multiplier during recessions — and ignoring any offsetting effects your actions have on fiscal or monetary policy — if a US charity spends an extra $1 during a recession it might raise US GDP by between $0 and $3.
If you're a charity spending $1, and just generally raising US GDP by $3 is a significant fraction of your total social impact, you must be a very ineffective organisation. I c... (read more)
Thanks for your comment.
I'm not advocating it because of the fiscal multiplier. That would be the cherry on the cake.
The first step is simply to say: don't cut back expenditure, because shrinking and regrowing an organisation is costly. Most charities (though EA ones are somewhat atypical) see their income reduced during bad times. And since most charities think in bland terms of x months of reserves, this means their expenditure fluctuates as well. This is not an efficient way to manage an organisation. In good times, build a buffer, s... (read more)
Is there even 1 exclusively about people working at EA organisations?
If someone had taken a different job with the goal of having a big social impact, and we didn't think what they were doing was horribly misguided, I don't think we would count them as having 'dropped out of EA' in any of the 6 data sets.
I was referring to things like phrasings used and how often someone working for an EA org vs not was discussed relative to other things; I wasn't referring to the actual criteria used to classify people as having dropped out / reduced involvement or not.
Given that Ben says he's now made some edits, it doesn't seem worth combing through the post again in detail to find examples of the sort of thing I mean. But I just did a quick ctrl+f for "organisations", and found this, as one example: "Of the 14 classified as staff, I don’t count any clear cases of ..." (read more)
"For example 80000 Hours have stopped cause prioritisation work to focus on their priority paths"
Hey Sam — being a small organisation 80,000 Hours has only ever had fairly limited staff time for cause priorities research.
But I wouldn't say we're doing less of it than before, and we haven't decided to cut it. For instance see Arden Koehler's recent posts about Ideas for high impact careers beyond our priority paths and Global issues beyond 80,000 Hours’ current priorities.
We aim to put ~10% of team time into underlying research, where one topic is trying... (read more)

It seems like lots of active AI safety researchers, even a majority, are aware of Yudkowsky and Bostrom's views but only agree with parts of what they have to say (e.g. Russell, Amodei, Christiano, the teams at DeepMind, OpenAI, etc.).
There may still not be enough intellectual diversity, but having the same perspective as Bostrom or Yudkowsky isn't a filter to involvement.
As Michael says, common sense would indicate I must have been referring to the initial peak, or the peak in interest/panic/policy response, or the peak in the UK/Europe, or peak where our readers are located, or — this being a brief comment on an unrelated topic — just speaking loosely and not putting much thought into my wording.
FWIW it looks like globally the rate of new cases hasn't peaked yet. I don't expect the UK or Europe will return to a situation as bad as the one they went through in late March and early April. Unfortunately the US and Latin America are already doing worse than it was then.
"As Michael says, common sense would indicate"
This sounds like a status move. I asked a sincere question and maybe I didn't think too carefully when I asked it, but there's no need to rub it in.
"FWIW it looks like globally the rate of new cases hasn't peaked yet. I don't expect the UK or Europe will return to a situation as bad as the one they went through in late March and early April. Unfortunately the US and Latin America are already doing worse than it was then."

Neither the US nor Latin America could plausibly be said to have peaked then.
Thanks, I appreciate the clarification! :)
I think you know what I mean — the initial peak in the UK, the country where we are located, in late March/April.
There's often a few months between recording and release and we've had a handful of episodes that took a frustratingly long time to get out the door, but never a year.
The time between the first recording and release for this one was actually 9 months. The main reason was Howie and Ben wanted to go back and re-record a number of parts they didn't think they got right the first time around, and it took them a while to both be free and in the same place so they could do that.
A few episodes were also pushed back so we could get out COVID-19 interviews during the peak of the epidemic.
Thanks for doing this research, nice work.
Could you make your figure a little larger? It's hard to read on a desktop. It might also be easier for the reader if each of the five arguments had a one-word name to keep track of the gist of their actual content.
"As you can see, the winner in Phase 2 was Argument 9 by a nose. Argument 9 was also the winner by a nose in Phase 1, and thus the winner overall."
I don't think this is quite right. Arguments 5 and 12 are very much within the confidence interval for Argument 9. Eyeballing it I would guess we can only... (read more)

Hi Tobias — thanks for the ideas!
Invertebrate welfare is wrapped into 'Wild animal welfare', and reducing long-term risks from malevolent actors is partially captured under 'S-risks'. We'll discuss the other two.
For future reference, next time you need to look up the page number for a citation, Library Genesis can quickly let you access a digital copy of almost any book: https://en.wikipedia.org/wiki/Library_Genesis
Many books are still not available on Library Genesis. Fortunately, a sizeable fraction of those can be "borrowed" for 14 days from the Internet Archive.
I didn't mean to imply that the protests would fix the whole problem, obviously they won't.
As you say, you'd need to multiply through by a distribution for 'likelihood of success' and 'how much of the problem gets solved'.
I think a crux for some protesters will be how much total damage they think bad policing is doing in the USA.
While police killings or murders draw the most attention, much more damage is probably done in other ways, such as through over-incarceration, petty harassment, framing innocent people, bankrupting folks through unnecessary fines, enforcing bad laws such as drug prohibition, assaults, and so on. And that total damage accumulates year after year.
On top of this we could add the burden of crime itself that results from poor policing practices, including... (read more)

These points don't apply to the UK and elsewhere to anywhere near the same extent, so the post does at least seem like a good argument against the protests in the UK and elsewhere.
I think this is the wrong question.
The point of lockdown is that for many people it is individually rational to break the lockdown - you can see your family, go to work, or have a small wedding ceremony with little risk and large benefits - but this imposes external costs on other people. As more and more people break lockdown, these costs get higher and higher, so we need a way to persuade people to stay inside - to make them consider not only the risks to themselves, but also the risks they are imposing on other people. We solve this with a combination ... (read more)
I suspect that a lot of protesters would be very angry we're even raising these kinds of issues, but...
If we're being consequentialist about this, then the impact of the protests is not the difference between fixing these injustices, and the status quo continuing forever. It's the difference between a chance of fixing these injustices now, and a chance of fixing them next time a protest-worthy incident comes around.
Sadly, opportunities for these kinds of protests seem to come around fairly regularly in the US. So I expect these protests are probably only r... (read more)

If I weren't interested in creating more new beings with positive lives I'd place greater priority on:
I haven't thought much about what would look good from a conservative Christia... (read more)

Hi PBS, I understand where you're coming from and expect many policy folks may well be having a bigger impact than front-line doctors, because in this case prevention is probably better than treatment.
At the same time I can see why we don't clap for them in that way, because they're not taking on a particularly high risk of death and injury in the same way the hospital staff are right now. I appreciate both, but on a personal level I'm more impressed by people who continue to accept a high risk of contracting COVID-19 in order to treat patients.
I've compiled 16 fun or important points from the book for the write-up of my interview with Toby, which might well be of interest to people here. :)
Hi Khorton — yes as I responded to Denise, it appears the one year thing must have been specific to the (for-profit) bank I spoke with. They pay so many up-front costs for each new donor I think they want to ensure they get a lot of samples out of each one to be able to cover them.
And perhaps they were highballing the 30+ number so that, should the most extreme thing happen, you couldn't say they didn't tell you, even if it's improbable.
Hmmmm, this is all just what I was told at one place. Maybe some of these rules — 30 kids max, donating for a year at a minimum, or the 99% figure — are specific to that company, rather than being UK-wide norms/regulations.
Or perhaps they were rounding up to 99% to just mean 'the vast majority'.
I'd forgotten about the ten family limit, thanks for the reminder.
Like you I have the impression that they're much less selective on eggs.
In some ways the UK sperm donation process is an even more serious commitment than egg donation.
From what I was told, the rejection rate is extremely high — close to 99% of applicants are filtered out for one reason or another. If you get through that process they'll want you to go in and donate once a week or more, for at least a year. Each time you want to donate, you can't ejaculate for 48 hours beforehand.
And the place I spoke to said they'd aim to sell enough sperm to create 30 kids in the UK, and even more overseas.
The ones born in the UK can find ou... (read more)

I know 2 working in normal pandemic preparedness and 2-3 in EA GCBR stuff.
I can offer introductions though they are probably worked off their feet just now. DM me somewhere?
Thanks for the detailed feedback Adam. :)
Part of the issue might be the subheading "Space colonization will probably include animals".
If the heading had been 'might', then people would be less likely to object. Many things 'might' happen!
80% seems reasonable. It's hard to be confident about many things that far out, but:
i) We might be able to judge what things seem consistent with others. For example, it might be easier to say whether we'll bring pigs to Alpha Centauri if we go, than whether we'll ever go to Alpha Centauri.
ii) That we'll terraform other planets is itself fairly speculative, so it seems fair to meet speculation with other speculation. There's not much alternative.
iii) Inasmuch as we're focussing in on (what's in my opinion) a narrow part of the whole probability space — lik... (read more)

I apologise if I'm missing something as I went over this very quickly.
I think a key objection for me is to the idea that wild animals will be included in space settlement in any significant numbers.
If we do settle space, I expect most of that, outside of this solar system, to be done by autonomous machines rather than human beings. Most easily habitable locations in the universe are not on planets, but rather freestanding in space, using resources from asteroids and solar energy.
Autonomous intelligent machines will be at a great advantage over animals fro... (read more)

I worry this is very overconfident speculation about the very far future. I'm inclined to agree with you, but I feel hard-pressed to put more than say 80% odds on it. I think the kind of s-risk nonhuman animal dystopia that Rowe mentions (and has been previously mentioned by Brian Tomasik) seems possible enough to merit significant concern.
(To be clear, I don't know how much I actually agree with this piece, agree with your counterpoint, or how much weight I'd put on other scenarios, or what those scenarios even are.)
Hey Rob!
I'm not sure that, even under the scenario you describe, animal welfare doesn't end up dominating human welfare, except under a very specific set of assumptions. In particular, you describe ways for human-esque minds to explode in number (propagating through space as machines or as emulations). Without appropriate efforts to change the way humans perceive animal welfare (wild animal welfare in particular), it seems very possible that 1) humans/machine descendants might manufacture/emulate animal-minds (and since wild animal welfare hasn't... (read more)
I also expect artificial sentience to vastly outweigh natural sentience in the long-run, though it's worth pointing out that we might still expect focusing on animals to be worthwhile if it widens people's moral circles.
If I did believe animals were going to be brought along on space settlements, I would think the best wild-animal-focussed project would be to prevent that from happening, by figuring out what could motivate people to do so,
One way this could happen is if the deep ecologists or people who care about life-in-general "win", and for some reason have an extremely strong preference for spreading biological life to the stars without regard to sentient suffering.
I'm pretty optimistic this won't happen however. I think by default we should expect that... (read more)
Howie and I just recorded a 1h15m conversation going through what we do and don't know about nCoV for the 80,000 Hours Podcast.
We've also compiled a bunch of links to the best resources on the topic that we're aware of which you can get on this page.
I've guessed this is the case on 'back of the envelope' grounds for a while, so nice to see someone put more time into evaluating it.
It's not true to say EAs have been blindly on board with RCTs — I've been saying economic policy is probably the top priority for years and plenty of people have agreed that's likely the case. But I don't work on poverty so unfortunately wasn't able to take it further than that.
I interpreted them not as saying that Terminator underplays the issue but rather that it misrepresents what a real AI would be able to do (in a way that probably makes the problem seem far easier to solve). But that may be me suffering from the curse of knowledge.