Mostly agreed, but I do think that donating some money, if you are able, is a big part of being in EA. And again this doesn’t mean reorienting your entire career to become a quant and maximize your donation potential.
All punishment is tragic, I suppose, in that it would be a better world if we didn't have to punish anyone. But I just don't think the fact that SBF on some level "believed" in EA (whatever that means, and if it is even true), despite not acting in accordance with EA's principles, is a reason his punishment is more tragic than anyone else's.
This is just not true if you read about the case: he obviously knew he was improperly taking user funds, and he told all sorts of incoherent lies to explain it. It's really disappointing to see so many EAs continue to believe he was well-intentioned. You can quibble about the length of the sentence, but he broke the law, and he was correctly punished for it.
Suppose someone were to convince you that the interventions GiveWell pursues are not the best way to improve "global capacity", and that a better way would be to pursue more controversial/speculative causes like population growth or long-run economic growth or whatever. I just don't see EA reorienting GHW-worldview spending toward controversial causes like this, ever. The controversial stuff will always have to compete with animal welfare and AI x-risk. If your worldview categorization does not always make the GHW worldview center on non-controversial stuf...
How are you defining global capacity, then? This is currently being argued in other replies better than I can manage, but I think there’s a good chance that the most reasonable definition implies optimal actions very different from GiveWell’s. Although I could be wrong.
I don’t really think the important part is the metric - the important part is that we’re aiming for interventions that agree with common sense and don’t require accepting controversial philosophical positions (beyond rejecting pro-local bias, I guess).
Love the post, don't love the names given.
I think "capacity growth" is a bit too vague, something like "tractable, common-sense global interventions" seems better.
I also think "moonshots" is a bit derogatory, something like "speculative, high-uncertainty causes" seems better.
This post is a great exemplar of why the term “AI alignment” has proven a drag on AI x-risk work. The concern is and always has been that AI would dominate humanity the way humans dominate animals. All the talk about aligning AI to “human values” leads to pedantic posts like this one, arguing about what “human values” are and how likely AIs are to pursue them.
Altman, like most people with power, doesn’t have a totally coherent vision for why him gaining power is beneficial for humanity, but can come up with some vague values or poorly-thought-out logic when pressed. He values safety, to some extent, but is skeptical of attempts to cut back on progress in the name of safety.
Hmm, I still don’t think this response quite addresses the intuition. Various groups wield outsized political influence owing to their higher rates of voting - seniors, many religious groups, people with post-grad degrees, etc. Nonetheless, they vote in a lot of uncompetitive races where it would seem their vote doesn’t matter. It seems wrong that an individual vote of theirs has much EV in an uncompetitive race. On the other hand, it seems basically impossible to mediate strategy such that there is still a really strong norm of voting in competitive races but ...
Sorry, I shouldn’t have used the phrase “the fact that”. Rephrased, the sentence should say “why would the universe taking place in an incomputable continuous setting mean it’s not implemented”. I have no confident stance on whether the universe is continuous or not, just that I find the argument presented unconvincing.
I will say that I think most of this stuff is really just dancing around the fundamental issue, which is that expected value of your single vote really isn't the best way of thinking about it. Your vote "influences" other people's vote, either through acausal decision theory or because of norms that build up (elections are repeated games, after all!).
I may go listen to the podcast if you think it settles this more, but on reading it I'm skeptical of Joscha's argument. It seems to skip the important leap from "implemented" to "computable". Why does the fact that our universe takes place in an incomputable continuous setting mean it's not implemented? All it means is that it's not being implemented on a computer, right?
I think there’s a non-negligible chance we survive until the death of the sun or whatever, maybe even after, which is not well-modelled by any of this.
To clarify: the point of this parenthetical was to state reasons why a world without transhumanist progress may be terrible. I don't think animal welfare concerns disappear or even are remedied much with transhumanism in the picture. As long as animal welfare concerns don't get much worse however, transhumanism changes the world either from good to amazing (if we figure out animal welfare) or terrible to good (if we don't). Assuming AI doesn't kill us obviously.
I think the simplest answer is not that such a world would be terrible (except for factory farming and wild animal welfare, which are major concerns), but that a world with all these transhumanist initiatives would be much better.
I am glad somebody wrote this post. I often have the inclination to write posts like these, but I feel like advice like this is sometimes good and sometimes bad, and it would be disingenuous for me to stake out a claim in either direction. Nonetheless, I think it’s a good mental exercise to explicitly state the downsides of comparative claims and the upsides of absolute claims, and then people in the comments will assuredly explain (and have explained) the opposite.
"...for most professional EA roles, and especially for "thought leadership", English-language communication ability is one of the most critical skills for doing the job well"
Is it, really? Like, this is obviously true to some extent. But I'm guessing that English communication ability isn't much more important for most professional EA roles than it is for, eg, academics or tech startup founders. Those fields are much more diverse in native language than EA, I think.
How did he deal with two-envelope considerations in his calculation of moral weights for OpenPhil?
This consideration is something I had never thought of before and blew my mind. Thank you for sharing.
Hopefully I can summarize it (assuming I interpreted it correctly) in a different way that might help people who were as befuddled as I was.
The point is that, when you assign probabilistic weight to two different theories of sentience, you have to fix the units of sentience under each theory in order to compare them.
Say you have two theories of sentience that are similarly probable, one dependent on intelligence and one depend...
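To make the unit problem concrete, here's a toy calculation (all weights below are made up for illustration; they are not Rethink Priorities' or anyone else's actual numbers):

```python
# Toy illustration of the two-envelope problem for moral weights.
# All numbers are invented for illustration, not anyone's actual estimates.

p = 0.5  # credence in each of two theories of sentience

# Theory A: a chicken's experience counts for 0.01 of a human's.
# Theory B: a chicken's experience counts for 2.0 of a human's.
weight_a, weight_b = 0.01, 2.0

# Average in "human units" (fix the human's moral weight at 1):
chicken_in_human_units = p * weight_a + p * weight_b  # 1.005

# Average in "chicken units" (fix the chicken's moral weight at 1):
human_in_chicken_units = p * (1 / weight_a) + p * (1 / weight_b)  # 50.25
chicken_implied = 1 / human_in_chicken_units  # ~0.0199

# Same credences, same theories -- but the chicken's expected moral
# weight is ~1.0 under one unit convention and ~0.02 under the other.
print(chicken_in_human_units, chicken_implied)
```

The divergence comes purely from which species' weight you hold fixed before taking the expectation, which is exactly why the choice of units matters so much here.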
If OpenPhil’s allocation is really so dependent on moral weight numbers, you should be spending significant money on research in this area, right? Are you doing this? Do you plan on doing more of this given the large divergence from Rethink’s numbers?
Yeah, I think there’s a big difference between how Republican voters feel about it and how their elites do. Romney is, uhh, not representative of most elite Republicans, so I’d be cautious there.
Do we have any idea how Republican elites feel about AI regulation?
This seems like the biggest remaining question mark which will determine how much AI regulation we get. It's basically guaranteed that Republicans will have to agree to AI regulation legislation, and Biden can't do too much without funding in legislation. Also there's a very good chance Trump wins next year and will control executive AI Safety regulation.
Politics is really important, so thank you for recognizing that and adding to discussion about Pause.
But this post confuses me. You start by talking about how protests are stronger when they are centered on something people care about rather than simply policy advocacy. Which, I don't know if I agree with, but it's an argument you can make. But then you shift toward advocating for regulation rather than a pause. Which is also just policy advocacy, right? And I don't understand why you'd expect it to have better politics than a pause. Your points about needing companies to prove they are safe are pretty much the same points Holly Elmore has been making, and I don't know why they apply better to regulation than to a pause.
Reading this great thread on SBF's bio, it seems like his main problem was stimulants wrecking his brain. He was absurdly overconfident in everything he did, did not think things through, didn't sleep, and admitted to being deficient in empathy ("I don't have a soul"). Much has been written about deeper topics like naive utilitarianism and trust in response to SBF, but I wonder if the main problem might just be the drug culture that exists in certain parts of EA. Stimulants should be used with caution, and a guy like SBF probably should never have been using them, or at least nowhere near the amount he was taking.
I think the judgement calls used in coming up with moral weights have less to do with caring about animals and more to do with how much you think attributes like intelligence and self-awareness have to do with sentience. They're applied to animals, but I think they're really more neuroscience/philosophy intuitions. The people who have the strongest/most out-of-the-ordinary intuitions are MIRI folk, not animal lovers.
Yeah, I guess that makes sense. But, uh... have other institutions actually made large efforts to preserve such info? Which institutions? Which info?
This might be a dumb question, but shouldn't we be preserving more elementary resources to rebuild a flourishing society? Current EA is kind of only meaningful in a society with sufficient abundant resources to go into nonprofit work. It feels like there are bigger priorities in the case of sub-x-risk.
I don't think points about timelines reflect an accurate model of how AI regulations and guardrails are actually developed. What we need is for Congress to pass a law ordering some department within the executive branch to regulate AI, eg by developing permitting requirements or creating guidelines for legal AI research or whatever. Once this is done, the specifics of how AI is regulated are mostly up to that executive branch, which can and will change over time.
Because of this, it is never "too soon" to order the regulation of AI. We may not know exactly ...
I think most of the animal welfare neglect comes from the fact that if people are deep enough into EA to accept all of its "weird" premises they will donate to AI safety instead. Animal welfare is really this weird midway spot between "doesn't rest on controversial claims" and "maximal impact".
On "End high-skilled immigration programs": The thing about big-brained stuff like this is it very rarely works. Consider:
What is p(doom|immigration restrictions)-p(doom|status quo immigration)? To that end: might immigration be useful in AI Safety research as well?
What is E[utility from AI doom]-E[utility from not AI doom]? This also probably gets into all sorts of infinite ethics/pascal's mugging issues.
How likely are you to actually change immigration laws like this?
What is the non-AI-related utility of immigration, before AI doom or assuming AI d...
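The questions above amount to a back-of-envelope expected-value comparison. As a rough sketch (every number below is an invented placeholder, not an estimate):

```python
# Back-of-envelope EV framing for the questions above.
# All numbers are invented placeholders, not estimates.

p_doom_status_quo = 0.10   # hypothetical p(doom) under current immigration
p_doom_restricted = 0.099  # hypothetical p(doom) if high-skilled immigration ended
p_policy_success = 0.01    # hypothetical chance the advocacy changes the law

# Expected reduction in p(doom) from pursuing the policy:
delta_p_doom = p_policy_success * (p_doom_status_quo - p_doom_restricted)
print(delta_p_doom)  # ~1e-5

# This tiny expected reduction then has to be weighed against the ordinary
# (non-AI) costs of ending high-skilled immigration, plus the chance that
# restrictions hurt AI safety research itself.
```

Under placeholder numbers like these, the expected effect is small enough that the ordinary costs of the policy plausibly dominate, which is the "rarely works" point.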
Let me make the contrarian point here that you don't have to build AGI to get these benefits eventually. An alternative, much safer approach would be to stop AGI entirely and try to raise human/biological intelligence with drugs or other biotech. Stopping AGI is unlikely to happen, and this biological route would take a lot longer, but it's worth bringing up in any argument about the risks vs. rewards of AI.
I am nervous about wading into partisan politics with AI safety. I think there’s a chance that AI safety becomes super associated with one party due to a stunt like this, or worse becomes a laughing stock for both parties. Partisan politics is an incredibly adversarial environment, which I fear could undermine the currently unpolarized nature of AI safety.
Ooh, now this is interesting!
Running a candidate is one thing, actually getting coverage for this candidate is another. If we could get a candidate to actually make the debate stage in one of the parties that would be a big deal, but that would also be very hard. The one person I can think of who could actually get on the debate stage is Andrew Yang, if there ends up being a Democratic primary (which I am not at all sure about). If I recall he has actually talked about AI x-risk in the past? Even if that’s wrong, I know he has interacted with EA before, s...
Ahh, I didn't read it as you talking about the effects of Eliezer's past outreach. I strongly buy "this time is different", and not just because of the salience of AI in tech. The type of media coverage we're getting is very different: the former CEO of Google advocating AI risk and a journalist asking about AI risk in the White House press briefing are unlike anything we've seen before. We're reaching different audiences here. The AI landscape is also very different; AI risk arguments are a lot more convincing when we have a very good AI to point to...
Not to be rude but this seems like a lot of worrying about nothing. "AI is powerful and uncontrollable and could kill all of humanity, like seriously" is not a complicated message. I'm actually quite scared if AI Safety people are hesitant to communicate because they think the misinterpretation will be as bad as you are saying here; this is a really strong assumption, an untested one at that, and the opportunity cost of not pursuing media coverage is enormous.
The primary purpose of media coverage is to introduce the problem, not to immediately push f...
Well, maybe to both parts; it's a good sign, but a weak one. Also concerns about response bias, etc., especially since YouGov doesn't specialize in polling these types of questions and there's no "ground truth" here to compare to.
I would caution people against reading too much into this. If you poll people about a concept they know nothing about ("AI will cause the end of the human race") you will always get answers that don't reflect real belief. These answers are very easily swayed, they don't cause people to take action like real beliefs would, they are not going to affect how people vote or which elites they trust, etc.
This is an important warning but to be clear it also isn’t necessarily always the case. Rethink Priorities has studied low salience issue polling a lot and we think there are some good methods. I don’t think YouGov has been very good about using those methods here though.
Largely agree, but results like this (1) indicate that if AI does become more salient the public will be super concerned about risks and (2) might help nudge policy elites to be more interested in regulating AI. (And it's not like there's some other "real belief" that the survey fails to elicit-- most people just don't have 'real beliefs' on most topics.)
Part of the motivation for this post is that I think AI Safety press is substantially different from EA press as a whole. AI safety is inherently a technical issue which means you don’t get this knee-jerk antagonism that happens when people’s ideology is being challenged (ie when you tell people they should be donating to your cause instead of theirs). So while I haven’t read the whole EA press post you linked to, I think parts of it probably apply less to AI.
With all due respect I think people are reading way too far into this, Eliezer was just talking about the enforcement mechanism for a treaty. Yes, treaties are sometimes (often? always?) backed up by force. Stating this explicitly seems dumb because it leads to posts like this, but let's not make this bigger than it is.
The point of the letter is to raise awareness for AI safety, not because they actually think a pause will be implemented. We should take the win.
Thanks!
I hate to be someone who walks into a heated debate and pretends to solve it in one short post, so I hope my post didn’t come off too authoritative (I just genuinely have never seen debate about the term). I’ll look more into these.
Note that, if you are going to start thinking about these confounders, you have to consider confounders working against this relationship as well:
The difference, from my perspective, is that the mixing of romantic and work relationships in a poly context has much more widespread damage. In monogamous relationships, the worst that can happen is one incident involving 2 or so people, which can be dealt with in a contained way. In poly relationships, when you have a relationship web spanning a large part of an organization, this can cause very large harm to the company and to potential future employees. I, frankly, would feel very uncomfortable at an organization where most of my coworkers were in a polyamorous relationship.
I think a better way of looking at this is that EA is very inviting of criticism but not necessarily that responsive to it. There are like 10 million critiques on the EA Forum, most with serious discussion and replies. Probably very few elicit actual change in EA. (I am of the opinion that most criticism just isn’t very good, and that there is a reason it hasn’t been adopted, but obviously this is debatable).
I have opposite intuition actually - I'd guess that people closer to animals have more empathy for their suffering. Either way I think this is mostly orthogonal to the cultural values of masculinity you are talking about.
Small point here but unless you think that even after adjusting for partisanship working-class or rural Americans are more likely to oppose animal welfare action, I would take out the part about working class and rural and just leave right-wing. Otherwise, it just detracts from epistemic value as people create stereotypes about what political parties' voting bases look like.
Yeah, in fact I think most of the domestic opposition also comes from this backlash (in poli sci it's called "negative partisanship"). The right starts to oppose animal welfare policy not on its merits but simply because the left supports it - another reason to strive not to polarize the issue.
Yes, I just would have emphasized it more. I sort of read it as “yeah this is something you might do if you’re really interested”, while I would more say “this is something you should really probably do”