Re: "extremely toxic": most people who would see this post are left-wing; that is obvious.
I don't think that a word-for-word identical post where the author self-identified as an EA would be good. I think it would be less bad, and I might not clamor for the title to be changed.
The problem is that this post blew up on Twitter and a lot of people's image of EA was downgraded because of it. To me, that's very unfair: this post is wrong on the substance, it is an extremely unpopular opinion within EA, and the author doesn't even identify as an EA, so the post ...
IMO it's pretty outrageous to make a piece entitled "The EA case for [X]" when you do not yourself identify as an effective altruist and the [X] in question is extremely toxic to almost everyone on the outside. It's like if I wrote a piece "the feminist case for Benito Mussolini" where I made clear that I am not a feminist but that feminists should be supporting Mussolini.
I do want to make the point that how tied to EA you are isn’t really your choice. The reason it’s really easy for media outlets to tie EA to scientific racism is that there’s a lot of interaction with scientific racists and nobody from the outside really cares if events like this explicitly market themselves as EA events or not. Strong free speech norms enabling scientific racism have always been a source of tension for this community, and you can’t just get around that by not calling yourselves EA.
A few quick thoughts:
Many arguments about the election’s tractability don’t hinge on the impact of donations.
Trump recently said in an interview (https://time.com/6972973/biden-trump-bird-flu-covid/) that he would seek to disband the White House office for pandemic preparedness. Given that he usually doesn't give specifics on his policy positions, this seems like something he is particularly interested in.
I know politics is discouraged on the EA forum, but I thought I would post this to say: EA should really be preparing for a Trump presidency. He's up in the polls and IMO has a >50% chance of winning the election. Right now politicians seem relatively receptive to EA ideas; this may change under a Trump administration.
All punishment is tragic, I guess, in that it would be a better world if we didn't have to punish anyone. But I just don't think the fact that SBF on some level "believed" in EA (whatever that means, and if it is even true) - despite not acting in accordance with the principles of EA - is a reason that his punishment is more tragic than anyone else's.
This is just not true if you read about the case: he obviously knew he was improperly taking user funds and told all sorts of incoherent lies to explain it, and it's really disappointing to see so many EAs continue to believe he was well-intentioned. You can quibble about the length of the sentence, but he broke the law, and he was correctly punished for it.
Please note that my previous post took the following positions:
1. That SBF did terrible acts that harmed people.
2. That it was necessary that he be punished. To the extent that it wasn't implied by the previous comment, I clarify that what he did was illegal (EDIT: which would involve a finding of culpable mental states, implying that his wrongdoing was not an innocent or negligent mistake).
3. The post doesn't even take a position as to whether the 25 years is an appropriate sentence.
All of the preceding is consistent with the proposition that he also a...
Suppose someone were to convince you that the interventions GiveWell pursues are not the best way to improve "global capacity", and that a better way would be to pursue more controversial/speculative causes like population growth or long-run economic growth or whatever. I just don't see EA reorienting GHW-worldview spending toward controversial causes like this, ever. The controversial stuff will always have to compete with animal welfare and AI x-risk. If your worldview categorization does not always make the GHW worldview center on non-controversial stuf...
How are you defining global capacity, then? This is currently being argued in other replies better than I can, but I think there's a good chance that the most reasonable definition implies optimal actions very different from GiveWell's. Although I could be wrong.
I don’t really think the important part is the metric - the important part is that we’re aiming for interventions that agree with common sense and don’t require accepting controversial philosophical positions (beyond rejecting pro-local bias I guess)
This post is a great exemplar of why the term "AI alignment" has proven a drag on AI x-risk safety. The concern is and has always been that AI would dominate humanity like humans dominate animals. All of the talk about aligning AI to "human values" leads to pedantic posts like this one arguing about what "human values" are and how likely AIs are to pursue them.
Altman, like most people with power, doesn’t have a totally coherent vision for why him gaining power is beneficial for humanity, but can come up with some vague values or poorly-thought-out logic when pressed. He values safety, to some extent, but is skeptical of attempts to cut back on progress in the name of safety.
Hmm, I still don’t think this response quite addresses the intuition. Various groups wield outsized political influence owing to their higher rates of voting - seniors, a lot of religious groups, post-grad degree people, etc. Nonetheless, they vote in a lot of uncompetitive races where it would seem their vote doesn’t matter. It seems wrong that an individual vote of theirs has much EV in an uncompetitive race. On the other hand, it seems basically impossible to mediate strategy such that there is still a really strong norm of voting in competitive races but ...
Sorry, I shouldn’t have used the phrase “the fact that”. Rephrased, the sentence should say “why would the universe taking place in an incomputable continuous setting mean it’s not implemented”. I have no confident stance on whether the universe is continuous or not, just that I find the argument presented unconvincing.
I will say that I think most of this stuff is really just dancing around the fundamental issue, which is that the expected value of your single vote really isn't the best way of thinking about it. Your vote "influences" other people's votes, either through acausal decision theory or because of norms that build up (elections are repeated games, after all!).
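To make the contrast concrete, here's a toy sketch of how the numbers can change once you let your decision correlate with other people's. Every probability, payoff, and function name below is invented purely for illustration, not taken from any actual voting model:

```python
# Toy numbers only; nothing here comes from a real election model.

def naive_vote_ev(p_pivotal, value_if_decisive):
    """EV of one vote, treated as causally independent of everyone else's."""
    return p_pivotal * value_if_decisive

def correlated_vote_ev(p_block_pivotal, value_if_decisive, p_block_tracks_you):
    """EV if your decision is evidence that a like-minded block decides the same way."""
    return p_block_tracks_you * p_block_pivotal * value_if_decisive

# An uncompetitive race: a single vote is almost never decisive...
print(naive_vote_ev(p_pivotal=1e-9, value_if_decisive=1e9))            # ~1.0
# ...but a whole correlated block (via norms or acausal reasoning) might be.
print(correlated_vote_ev(p_block_pivotal=1e-3,
                         value_if_decisive=1e9,
                         p_block_tracks_you=0.5))                      # ~500,000.0
```

The specific numbers don't matter; the point is that the naive single-vote framing can understate the impact by orders of magnitude once correlations or norms enter the picture.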
I may go listen to the podcast if you think it settles this more, but on reading it I'm skeptical of Joscha's argument. It seems to skip the important leap from "implemented" to "computable". Why does the fact that our universe takes place in an incomputable continuous setting mean it's not implemented? All it means is that it's not being implemented on a computer, right?
To clarify: the point of this parenthetical was to state reasons why a world without transhumanist progress may be terrible. I don't think animal welfare concerns disappear or even are remedied much with transhumanism in the picture. As long as animal welfare concerns don't get much worse, however, transhumanism changes the world either from good to amazing (if we figure out animal welfare) or from terrible to good (if we don't). Assuming AI doesn't kill us, obviously.
I am glad somebody wrote this post. I often have the inclination to write posts like these, but I feel like advice like this is sometimes good and sometimes bad, and it would be disingenuous for me to stake out a claim in either direction. Nonetheless, I think it’s a good mental exercise to explicitly state the downsides of comparative claims and the upsides of absolute claims, and then people in the comments will assuredly explain (and have explained) the opposite.
"...for most professional EA roles, and especially for "thought leadership", English-language communication ability is one of the most critical skills for doing the job well"
Is it, really? Like, this is obviously true to some extent. But I'm guessing that English communication ability isn't much more important for most professional EA roles than it is for, e.g., academics or tech startup founders. Those fields are much more diverse in native language than EA, I think.
This consideration is something I had never thought of before and blew my mind. Thank you for sharing.
Hopefully I can summarize it (assuming I interpreted it correctly) in a different way that might help people who were as befuddled as I was.
The point is that, when you give probabilistic weight to two different theories of sentience being true, you have to assign units to sentience under each theory in order to compare them.
Say you have two theories of sentience that are similarly probable, one dependent on intelligence and one depend...
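A toy sketch of the units problem, where every theory and number is invented purely for illustration (this is not Rethink Priorities' actual model):

```python
# Every theory and number below is made up; only the structure matters.

p_theory_A = 0.5          # e.g. sentience scales with intelligence
p_theory_B = 0.5          # e.g. sentience doesn't depend much on intelligence

# A chicken's moral weight relative to a human, within each theory's own units:
chicken_per_human_A = 0.001   # theory A: a chicken counts ~1/1000 of a human
chicken_per_human_B = 0.5     # theory B: a chicken counts ~1/2 of a human

# Expectation taken with "one human" fixed as the common unit:
ev_human_units = p_theory_A * chicken_per_human_A + p_theory_B * chicken_per_human_B
# ~0.25: chickens look very important.

# Expectation taken with "one chicken" fixed as the common unit, converted back:
human_per_chicken = (p_theory_A * (1 / chicken_per_human_A)
                     + p_theory_B * (1 / chicken_per_human_B))
ev_chicken_units = 1 / human_per_chicken
# ~0.002: chickens look far less important.

print(ev_human_units, ev_chicken_units)
```

Under these made-up numbers, the expected moral weight of the chicken differs by two orders of magnitude depending on which species you fix as the unit before taking the expectation, which is the counterintuitive point.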
Several of the grants we’ve made to Rethink Priorities funded research related to moral weights; we’ve also conducted our own research on the topic. We may fund additional moral weights work next year, but we aren’t certain. In general, it's very hard to guarantee we'll fund a particular topic in a future year, since our funding always depends on which opportunities we find and how they compare to each other — and there's a lot we don't know about future opportunities.
I unfortunately won’t have time to engage with further responses for now, but whenev...
Do we have any idea how Republican elites feel about AI regulation?
This seems like the biggest remaining question mark which will determine how much AI regulation we get. It's basically guaranteed that Republicans will have to agree to AI regulation legislation, and Biden can't do too much without funding in legislation. Also there's a very good chance Trump wins next year and will control executive AI Safety regulation.
Politics is really important, so thank you for recognizing that and adding to discussion about Pause.
But this post confuses me. You start by talking about how protests are stronger when they are centered on something people care about rather than simply policy advocacy. Which, I don't know if I agree with, but it's an argument you can make. But then you shift toward advocating for regulation rather than a pause. Which is also just policy advocacy, right? And I don't understand why you'd expect it to have better politics than a pause. Your point about needing companies to prove they are safe is pretty much the same point that Holly Elmore has been making, and I don't know why it applies better to regulation than to a Pause.
Reading this great thread on SBF's bio, it seems like his main problem was stimulants wrecking his brain. He was absurdly overconfident in everything he did, did not think things through, didn't sleep, and admitted to being deficient in empathy ("I don't have a soul"). Much has been written about deeper topics like naive utilitarianism and trust in response to SBF, but I wonder if the main problem might just be the drug culture that exists in certain parts of EA. Stimulants should be used with caution, and a guy like SBF probably should never have been using them, or at least nowhere near the amount he was getting.
I think the judgement calls used in coming up with moral weights have less to do with caring about animals and more to do with how much you think attributes like intelligence and self-awareness have to do with sentience. They're applied to animals, but I think they're really more neuroscience/philosophy intuitions. The people who have the strongest/most out-of-the-ordinary intuitions are MIRI folk, not animal lovers.
This might be a dumb question, but shouldn't we be preserving more elementary resources to rebuild a flourishing society? Current EA is kind of only meaningful in a society with sufficiently abundant resources to go into nonprofit work. It feels like there are bigger priorities in the case of sub-x-risk.
I don't think points about timelines reflect an accurate model of how AI regulations and guardrails are actually developed. What we need is for Congress to pass a law ordering some department within the executive branch to regulate AI, eg by developing permitting requirements or creating guidelines for legal AI research or whatever. Once this is done, the specifics of how AI is regulated are mostly up to that executive branch, which can and will change over time.
Because of this, it is never "too soon" to order the regulation of AI. We may not know exactly ...
On "End high-skilled immigration programs": The thing about big-brained stuff like this is it very rarely works. Consider:
What is p(doom|immigration restrictions)-p(doom|status quo immigration)? To that end: might immigration be useful in AI Safety research as well?
What is E[utility | AI doom] - E[utility | no AI doom]? This also probably gets into all sorts of infinite ethics / Pascal's mugging issues.
How likely are you to actually change immigration laws like this?
What is the non-AI-related utility of immigration, before AI doom or assuming AI d...
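A back-of-the-envelope way to put those questions together, with every number invented purely for illustration:

```python
# All values are placeholders; the point is the structure of the estimate, not the numbers.

p_doom_status_quo = 0.10     # p(doom | status quo immigration)
p_doom_restricted = 0.099    # p(doom | high-skilled immigration ended)
p_policy_change   = 0.01     # chance advocacy actually changes the law
value_gap         = 1e15     # E[utility | no doom] - E[utility | doom], arbitrary units
nonai_cost        = 1e9      # lost non-AI benefits of high-skilled immigration, same units

ev_restriction = p_policy_change * (
    (p_doom_status_quo - p_doom_restricted) * value_gap - nonai_cost
)
print(ev_restriction)
```

The answer is completely dominated by terms nobody can estimate well (the p(doom) difference and the value gap), which is part of why schemes like this are hard to get behind.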
Let me make the contrarian point here that you don't have to build AGI to get these benefits eventually. An alternative, much safer approach would be to stop AGI entirely and try to enhance human/biological intelligence with drugs or other biotech. Stopping AGI is unlikely to happen, and this biological route would take a lot longer, but it's worth bringing up in any argument about the risks vs. rewards of AI.
I am nervous about wading into partisan politics with AI safety. I think there’s a chance that AI safety becomes super associated with one party due to a stunt like this, or worse becomes a laughing stock for both parties. Partisan politics is an incredibly adversarial environment, which I fear could undermine the currently unpolarized nature of AI safety.
Ooh, now this is interesting!
Running a candidate is one thing; actually getting coverage for this candidate is another. If we could get a candidate to actually make the debate stage in one of the parties, that would be a big deal, but it would also be very hard. The one person I can think of who could actually get on the debate stage is Andrew Yang, if there ends up being a Democratic primary (which I am not at all sure about). If I recall correctly, he has actually talked about AI x-risk in the past? Even if that’s wrong, I know he has interacted with EA before, s...
This type of thing is talked about from time to time. The unfortunate thing is that there aren't a ton of plausible interventions. The main tool we have to fight against authoritarianism in the US is lawsuits, and that's already being done and not a place where EA could have a comparative advantage. The other big thing that people come up with is helping Democrats win elections, and there are people working on this, although (fortunately) elections are really ultimately decided by the voters, and campaign tactics have limited effect, at least at the national ...