yes, that's right.
I can think of grounds to disagree, though. Say, for example, you were able to disproportionately protect e.g. white people from being prosecuted for jaywalking. I think jaywalking shouldn't be illegal, so in a sense any person you protect from prosecution is a win. But there would be indirect effects of a racially unfair pattern of punishment, e.g. deepening resentment and disillusionment, and enabling and encouraging racists in other aspects of their beliefs and actions. So even though there would be less direct harm, there might be more indirect harm.
I...
While I see what you're saying here, I prefer evil to be done inconsistently rather than consistently, and every case where someone merely gets what they deserve, instead of what some unhinged penal system (whether in the US or elsewhere) thinks they deserve, seems like a good thing to me.
(I don't personally have an opinion on what SBF actually deserves.)
I think if there's no credible reason to assign responsibility to the intervention, there's no need to include it in the model. I think assigning the charity responsibility for the consequences of a crime they were the victim of is just not (by default) a reasonable thing to do.
It is included in the detailed write-up (the article even links to it). But without any reason to believe this level of crime is atypical for the context or specifically motivated by e.g. anger against the charity, I don't think anything else needs to be made of it.
I've been linked to The benefits of Novavax explained, which is optimistic about Novavax's strengths, suggesting it has the potential to offer longer-term protection, as well as protection against variants.
I think the things the article says or implies about pushback from mRNA vaccine supporters seem unlikely to me -- my guess is that in aggregate Wall Street benefits much more from eliminating COVID than it does from selling COVID treatments, though individual pharma companies might feel differently -- but they seem like the sort of unlikely thing th...
I tend to think there's an asymmetry between how good well-being is & how bad suffering is
This isn't relevant if you think GiveWell charities mostly act to prevent suffering. I think this is certainly true for the health stuff, and arguably still plausible for the economic stuff.
This is an important point. People often confuse harm/benefit asymmetries with doing/allowing asymmetries. Wenar's criticism seems to rest on the latter, not the former. Note that if all indirect harms are counted within the constraint against causing harm, almost all actions would be prohibited. (And on any plausible restriction, e.g. to "direct harms", it would no longer be true that charities do harm. Wenar's concerns involve very indirect effects. I think it's very unlikely that there's any consistent and plausible way to count these as having dispropo...
'Cause here' is an example of an ineffective cause, with an estimated cost of 'cost here' to save one life.
You might find it tricky to fill these in. In general cost estimates for less effective charities are, when they exist at all, much less developed and much lower quality, because it's laborious to develop an accurate estimate and there's not much demand for precision once something is unlikely to be a top charity.
The nature of the effective altruist project is mostly to distinguish between "known to be effective" and "not known to be effective", an...
Gathering some notes on private COVID vaccine availability in the UK.
News coverage:
It sounds like there's been a licensing change allowing provision of the vaccine outside the NHS as of March 2024 (ish). Pharmadoctor is a company that supplies pharmacies and has been spreading the word that they'll soon be able to supply them with vaccine doses for private sale -- most media coverage I found...
Yeah I think this is quite sensible -- I feel like I noticed one thing missing from the normal doom scenario but didn't notice all the implications of that omission, in particular that the reason the AI in the normal doom scenario takes over is that it is highly likely to succeed, and if it isn't, takeover seems much less interesting.
oh man, it's altruistically-good and selfishly-sad to see so many of the things I was thinking about pre-empted there, thanks for the link!
deworming seems to be beneficial for education (even if the magnitude might have been overstated)
Maybe a nitpick, but idk if this is suspicious convergence -- I thought the impact on economic outcomes (presumably via educational outcomes) was the main driver for it being considered an effective intervention?
Quoting this paragraph and bolding the bit that I want to discuss:
...Insofar as the GHD bucket is really motivated by something like sticking close to common sense, "neartermism" turns out to be the wrong label for this. Neartermism may mandate prioritizing aggregate shrimp over poor people; common sense certainly does not. When the two come apart, we should give more weight to the possibility that (as-yet-unidentified) good principles support the common-sense worldview. So we should be especially cautious of completely dismissing commonsense priorities in
However, if they anticipate trade 2 being offered after Alice is born, then I think they shouldn't make trade 1, since they know they'll make trade 2 and end up in World 3 minus some money, which is worse than World 1 for presently existing people and necessary people before Alice is born.
I think it's pretty unreasonable for an ethical system to:
I upvoted this comment for the second half about categories, but this part didn't make much sense to me:
I think the advantage of a label like "Global health and development" is that it doesn't require a super specific worldview: you make your own assumptions about what you value, then you can decide for yourself whether GHD works as a cause area for you, based on the evidence presented.
I can imagine either speciesism or anti-speciesism being considered "specific" worldviews, likewise person-affecting ethics or total ethics, likewise pure time discounti...
Should I take the fact that you have stopped donations as a signal that you no longer value further responses? Will you close the Google Form and/or update this post when you're beginning analysis and further responses won't be counted? Is there a specific deadline?
As a quick comment, I think something else that distinguishes GHD and animal welfare is that the global non-EA GHD community seems to me the most mature and "EA-like" of any of the non-EA analogues in other fields. It's probably the one that requires the most modest departure from conventional wisdom to justify it.
Is it at least fair to say that in situations where the other main actors aren't explicitly coordinating with you and aren't aware of your efforts (and, to an approximation, weren't expecting your efforts and won't react to them), you should be thinking more like I suggested?
I think there are several different activities that people call "impact attribution", and they differ in important ways that can lead to problems like the ones outlined in this post. For example:
I think the fact that any action relies enormously on context, and on other people's previous actions, and so on, is a strong challenge to the second point, but I'd argue it's the first point t...
I haven't thought about this carefully yet, but I believe this kind of thinking comes out differently depending on whether you say "the average cost per net is $1" or "the average number of nets I can make for $1 is 1". I think often when we say things like this, we imagine a neat symmetrical normal distribution around the average, but you can't simultaneously have a neat normal distribution around both of these numbers! Perhaps you'd need to look more into where the numbers are coming from to get a better intuition for which shape of distribution is more plausible.
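To illustrate (a minimal sketch with made-up numbers, not anyone's actual cost estimates): if cost-per-net is symmetric around $1, then nets-per-dollar is right-skewed and its average is above 1, so the two framings genuinely disagree.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up numbers: suppose cost per net is roughly normal around $1.00
# with a $0.20 spread, crudely truncated to keep costs positive.
cost_per_net = rng.normal(loc=1.0, scale=0.2, size=1_000_000)
cost_per_net = cost_per_net[cost_per_net > 0.2]

nets_per_dollar = 1.0 / cost_per_net

print(f"mean cost per net:     ${cost_per_net.mean():.3f}")   # ~1.000
print(f"1 / mean cost per net:  {1 / cost_per_net.mean():.3f}")  # ~1.000
print(f"mean nets per dollar:   {nets_per_dollar.mean():.3f}")   # ~1.040 -- higher!
```

The gap (roughly the variance of the cost, by a Taylor expansion of 1/x) grows with the spread, which is why the shape of the underlying distribution matters and not just the point estimate.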
I think "human-level" is often a misleading benchmark for AI, because we already have AIs that are massively superhuman in some respects and substantially subhuman in others. I sometimes worry that this is leading people to make unwarranted assumptions of how closely future dangerous AIs will track humans in terms of what they're capable of. This is related to a different post I'm writing, but maybe deserves its own separate treatment too.
A problem with a lot of AI thoughts I have is that I'm not really in enough contact with the AI "mainstream" to know what's obvious to them or what's novel. Maybe "serious" AI people already don't say human-level, or apply a generous helping of "you know what I mean" when they do?
Many of the post ideas on my list of things I want to write would be basically as good if someone else wrote them (and they come with some existing prioritisation in agreevotes)
My understanding (for whatever it's worth) is that most of the reason why a full repayment looks feasible now is a combination of:
I think it's reasonable to think of both of these as luck, and certainly a company relying on them to pay their debts is not solvent.
This feels misplaced to me. Making an argument for some cause to be prioritised highly is in some sense one of the core activities of effective altruism. Of course, many people who'd like to centre their pet cause make poor arguments for its prioritisation, but in that case I think the quality of argument is the entire problem, not anything about the fact they're trying to promote a cause. "I want effective altruists to highly prioritise something that they currently don't" is in some sense how all our existing priorities got to where they are. I don't think we should treat this kind of thing as suspicious by nature (perhaps even the opposite).
Sure, it's easy to dismiss the value of unaligned AIs if you compare against some idealistic baseline; but I'm asking you to compare against a realistic baseline, i.e. actual human nature.
I haven't read your entire post about this, but I understand you believe that if we created aligned AI, it would get essentially "current" human values, rather than e.g. some improved / more enlightened iteration of human values. If instead you believed the latter, that would set a significantly higher bar for unaligned AI, right?
It seems like you're just substantially more pessimistic than I am about humans. I think factory farming will be ended, and though it seems like humans have caused more suffering than happiness so far, I think their default trajectory will be to eventually stop doing that, and to ultimately do enough good to outweigh their ignoble past. I don't think this is certain by any means, but I think it's a reasonable extrapolation. (I maybe don't expect you to find it a reasonable extrapolation.)
Meanwhile I expect the typical unaligned AI may seize power for some ...
A lot of these points seem like arguments that it's possible that unaligned AI takeover will go well, e.g. there's no reason not to think that AIs are conscious, or will have interesting moral values, etc.
My stance is that we (more-or-less) know humans are conscious and have moral values that, while they have failed to prevent large amounts of harm, seem to have the potential to be good. AIs may be conscious and may have welfare-promoting values, but we don't know that yet. We should try to better understand whether AIs are worthy successors before tran...
My stance is that we (more-or-less) know humans are conscious and have moral values that, while they have failed to prevent large amounts of harm, seem to have the potential to be good.
I claim there's a weird asymmetry here where you're happy to put trust into humans because they have the "potential" to do good, but you're not willing to say the same for AIs, even though they seem to have the same type of "potential".
Whatever your expectations about AIs, we already know that humans are not blank slates that may or may not be altruistic in the future: we ac...
Something I'm trying to do in my comments recently is "hedge only once"; e.g. instead of "I think X seems like it's Y", you pick just one of "I think X is Y" or "X seems like it's Y". There is a difference in meaning, but often either on its own is sufficient to convey what I wanted to say anyway.
This is part of a broader sense I have that hedging serves an important purpose but also obstructs good writing, especially concision. The fact that hedging is a particular feature of EA/rat writing can be alienating to other audiences, even though it comes from a self-aware / self-critical instinct that I think is a positive feature of the community.
I think the concern about jargon is misplaced in this context. Jargon is learned by native and non-native speakers alike as they engage with the community: it's specifically the stuff that already knowing the language doesn't help you with, which means not knowing the language doesn't disadvantage you. That's not to say jargon doesn't have its own problems, but I think that someone who attempts to reduce jargon specifically as a way to reach non-native speakers better has probably misdirected their focus.
But a core thing that you don't mention (maybe because you are a native speaker, and you have to not be one to realize it; not being mean here, just stating what I think is a fact) is that jargon adds to the effort.
Not only do you have to speak flawless English and not mull over potential mistakes that might make you look foolish in the eyes of your interlocutor and reduce the credibility of your discourse, but you also have to use the right jargon. Add saying something meaningful on top of it: you have to pay attention to how you say th...
I think if we deanonymise now, there's a strong chance that the next whistleblower will remember what happened as "they got deanonymised" and will be reluctant to believe it won't happen to them. It kind of doesn't matter if there are reasons why it's OK in this case, as long as understanding those reasons requires digging through this post and all the comments. People won't do that, so they won't feel safe from getting the same treatment.
I think that posting that someone is banned and why they were banned is not mainly about punishing them. It's about helping people understand what the moderation team is doing, how rule-breaking is handled, and why someone no longer has access to the forum. For example, it helps us to understand if the moderation team are acting on inadequate information, or inconsistently between different people. The fact that publishing this information harms people is an unfortunate side effect, after the main effect of improving transparency and keeping people informe...
People often propose HR departments as antidotes to some of the harm that's done by inappropriate working practices in EA. The usual response is that small organisations often have quite informal HR arrangements even outside of EA, which does seem kinda true.
Another response is that it sometimes seems like people have an overly rosy picture of HR departments. If your corporate culture sucks then your HR department will defend and uphold your sucky corporate culture. Abusive employers will use their HR departments as an instrument of their abuse.
Perhaps the...
It seems like we disagree on how bad it is to self-vote (I don't think it's anywhere near the level of "actual crime", but I do think it's pretty clearly dishonest and unfair, and for such a petty benefit it's hard for me to feel sympathetic to the temptation).
But I don't think it's the central point for me. If you're simultaneously holding that:
then there's a paternalistic subtext where people can't be trusted to come to the ...
Just because something is true doesn’t mean you forfeit your rights to not have that information be made public.
I agree that not all true things should be made public, but I think when it specifically pertains to wrongdoing and someone's trustworthiness, the public interest can override the right to privacy. If you look into your neighbour's window and you see them printing counterfeit currency, you go to the police first, rather than giving them an opportunity to simply hide their fraud better.
I think we should hesitate to protect people from reputational damage caused by people posting true information about them. Perhaps there's a case to be made when the information is cherry-picked or biased, or there's no opportunity to hear a fair response. But goodness, if we've learned anything from the last 18 months I hope it would include that sharing information about bad behaviour is sometimes a public good.
"But does not everyone do that? I mean, they should think about effectiveness and all that! It is the only sensible thing to do."
If only! I think from the inside (and, it seems, to some people on the outside), EA can seem "obvious", at least in the core / fundamentals. But I think most philanthropy is still not done like this, even among people who spend a lot of time and effort on it.
For example, the idea that we should compare between different cause areas, or that we should be neutral between helping those in our own country vs. those abroad, still seem to me to be minority positions.
Thanks for all the work you do :)
typo (edit: fixed): "The Global Fund is the world’s largest funder of maria control activities"
I would most naturally interpret it as "I also have this question" or "I agree with something implicitly expressed by this question"
I doubt anyone made a strategic decision to start fundraising orgs outside the Bay Area instead of inside it. I would guess they just started orgs while having personal reasons for living where they lived. People aren't generally so mobile, or their projects so fungible, that where projects are run is driven mostly by where they would best be run.
That said, I half-remember that both 80k and CEA tried being in the Bay for a bit and then left. I don't know what the story there was.
"ask not what you can do for EA, but what EA can do for you"
like, you don't support EA causes or orgs because they want you to and you're acquiescing, you support them because you want to help people and you believe supporting the org will do that – when you work an EA job, instead of thinking "I am helping them have an impact", think "they are helping me have an impact"
of course there is some nuance in this but I think broadly this perspective is the more neglected one
I like the spirit of the reactions feature, although the specific choice of reactions seems quite narrow / unnatural to me? I think two big ones missing relative to social media are laugh and sad – if you're concerned about people laughing at comments instead of with them, you could mitigate that by labelling the reaction something more unambiguously complimentary, like "witty" or "enjoyable"?
I think "changed my mind" is a great one, though.
This is similar to how StackOverflow / StackExchange works, I think – any user can propose an edit (or there's some very low reputation threshold, I forget) but if you're below some reputation bar then your edit won't be published until reviewed by someone else.
Making this system work well though probably requires higher-karma users having a way of finding out about pending edits.
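For concreteness, here's a minimal sketch of the mechanism I have in mind (hypothetical names and threshold; I don't know how the Forum's or StackOverflow's internals actually work):

```python
from dataclasses import dataclass, field

EDIT_KARMA_BAR = 100  # hypothetical threshold, not a real Forum number


@dataclass
class ProposedEdit:
    editor_karma: int
    new_text: str
    published: bool = False


@dataclass
class Post:
    text: str
    pending_edits: list[ProposedEdit] = field(default_factory=list)

    def propose_edit(self, edit: ProposedEdit) -> None:
        """Publish immediately above the bar; otherwise hold for review."""
        if edit.editor_karma >= EDIT_KARMA_BAR:
            self.text = edit.new_text
            edit.published = True
        else:
            # This queue is the part that needs to be discoverable
            # by higher-karma users for the system to work.
            self.pending_edits.append(edit)

    def approve(self, edit: ProposedEdit) -> None:
        """A higher-karma reviewer publishes a queued edit."""
        self.pending_edits.remove(edit)
        self.text = edit.new_text
        edit.published = True
```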
OK, but this post is about drawing an analogy between the degrowth debate and the AI pause debate, and I don't see the analogy. Do you disagree with my argument for why they aren't analogous?
Especially if you disagree, explain why or upvote a comment that roughly reflects your view rather than downvoting. Downvoting controversial views only hides them rather than confronting them.
As a meta-comment, please don't assume that anyone who downvotes does so because they disagree, or only because they disagree. A post being controversial doesn't mean it must be useful to read, any more than it means it must not be useful to read. I vote on posts like this based on whether they said something that I think deserves more attention or helped me understand something better, regardless of whether I think it's right or wrong.
While I agree there are similarities in the form of argument between degrowth and AI pause, I don't think those similarities are evidence that the two issues should have the same conclusion. There's simply nothing at all inconsistent about believing all of these at the same time:
Almost the entire question, for resolving either of these issues, is working out whether these premises are really true or not. And that's where the similarities end, IMO: there's not m...
I don't know if you intended it this way, but I read this comment as saying "the author of this post is missing some fairly basic things about how IR works, as covered by introductory books about it". If so, I'd be interested in hearing you say more about what you think is being missed.
In the name of trying to make legible what I think is going on in the average non-expert's head about this, I'm going to say a bunch of things I know are likely to be mistaken, incomplete, or inadequately sophisticated. Please don't take these as endorsements that this thinking is correct, just that it's what I see when I inspect my instincts about this, and suspect other casual spectators might have the same ones.
It feels intuitive that Google and OpenAI and Anthropic etc. are more likely to co-operate with each other than any of them are to co-operate wi...
I'd like to see more of an analysis of where we are now with what people want from CH, what they think it does, and what it actually does, and to what extent and why gaps exist between these things, before we go too deep into what alternative structures could exist. Currently I don't feel like we really understand the problem well enough to solve it.
Yeah, sorry, when I said "unhinged" I meant "the US penal system is in general unhinged", not "this ruling in particular is unhinged". I also used "evil" as an illustrative / poetic example of something which I'd rather be inconsistent than consistent, and implied more than I intended that the sentencing judge was actually doing evil in this case.
It's possible that I'm looking at how the system treats e.g. poor people and racial minorities, where I think it's much more blatantly unreasonable, and transplanting that judgement into cases where it's less meri...