I've just noticed that the OBBB Act contains a "no tax on overtime" provision, exempting the premium portion of overtime pay via a deduction of up to $12,500, for tax years 2025-2028. If you, like me, are indifferent between 40-hour workweeks and alternating 32- and 48-hour workweeks, you can get a pretty good extra tax deduction. This can be as easy as working one weekend day every 2 weeks and taking a 3-day weekend the following week. (That's an upper bound on the difficulty! Depending on your schedule and preferences there are probably even easier ways.) Unfortunately this only works for hourly, not salaried, employees.
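A rough back-of-the-envelope sketch of what that schedule is worth (the $60/hour wage is an invented assumption; this assumes time-and-a-half overtime and that only the premium "half" portion counts toward the deduction):

```python
# Toy estimate of the overtime-premium deduction under an alternating
# 32/48-hour schedule. The wage is an illustrative assumption, not advice.

HOURLY_WAGE = 60                 # assumed regular hourly rate
OVERTIME_HOURS_PER_LONG_WEEK = 8 # hours above 40 in each 48-hour week
LONG_WEEKS_PER_YEAR = 26         # alternating 32/48 weeks
DEDUCTION_CAP = 12_500           # per the provision, for tax years 2025-2028

overtime_hours = OVERTIME_HOURS_PER_LONG_WEEK * LONG_WEEKS_PER_YEAR
premium_pay = 0.5 * HOURLY_WAGE * overtime_hours  # the "half" in time-and-a-half
deduction = min(premium_pay, DEDUCTION_CAP)

print(f"Overtime hours per year: {overtime_hours}")
print(f"Deductible overtime premium: ${deduction:,.0f}")
```

At the assumed $60/hour this comes to roughly $6,240 of deduction per year, so the schedule above doesn't even max out the cap unless your hourly rate is quite high.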
Thank you very much, I hadn't seen that the moral parliament calculator had implemented all of those.
Moral Marketplace strikes me as quite dubious in the context of allocating a single person's donations, though I'm not sure it's totally illogical.
Maximize Minimum is a nonsensically stupid choice here. A theory with 80% probability, another with 19%, and another with 0.000001% get equal consideration? I can force someone who believes in this to give all their donations to any arbitrary cause by making up an astronomically improbable theory that will be ver...
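To make the objection concrete, here's a toy sketch (the theory names, options, and numbers are all invented for illustration): under maximin the credences never enter the calculation, so an astronomically improbable theory can dictate the choice.

```python
# Invented credences in three moral theories.
theories = {"theory_A": 0.80, "theory_B": 0.19, "theory_C": 0.000001}

# Invented choiceworthiness of two donation options under each theory.
choiceworthiness = {
    "donate_to_X": {"theory_A": 10, "theory_B": 8, "theory_C": -1000},
    "donate_to_Y": {"theory_A": 1, "theory_B": 1, "theory_C": 1},
}

def maximin_choice(options):
    # Pick the option whose worst-case score across theories is highest;
    # note that the credences play no role at all.
    return max(options, key=lambda o: min(options[o].values()))

def expected_value_choice(options, credences):
    # For contrast: weight each theory's verdict by its probability.
    return max(options, key=lambda o: sum(credences[t] * v
                                          for t, v in options[o].items()))

print(maximin_choice(choiceworthiness))                     # donate_to_Y
print(expected_value_choice(choiceworthiness, theories))    # donate_to_X
```

Here the 0.000001-probability theory drags option X's minimum down to -1000, so maximin picks Y even though the decision-maker's credence in that theory is negligible.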
Of course they might be uncertain about the moral status of animals and therefore uncertain whether donations to an animal welfare charity or a human welfare charity are more effective. That is not at all a reason for an individual to split their donations between animal and human charities. You might want the portfolio of all EA donations to be diversified, but if an individual splits their donations in that way, they are reducing the expected impact of their donations relative to giving everything to whichever option they think is more effective.
By "greater threat to AI safety" you mean it's a bigger culprit in terms of amount of x-risk caused, right? As opposed to being a threat to AI safety itself, by e.g. trying to get safety researchers removed from the industry/government (like this).
This is probably a simplification but I'll try:
Positivism asks: What is true, measurable, and generalisable?
Within this frame, Effective Altruism privileges phenomena that can be quantified, compared, and optimised. What cannot be measured is not merely sidelined but often treated as epistemically inferior or irrelevant.
German theoretical physicist Werner Heisenberg, Nobel laureate for his foundational work in quantum mechanics, explicitly rejected positivism:
...“The positivists have a simple solution: the world must be divided into that which we can say clearly and the rest, which we had better pass over in silence. But can anyone conceive of a more pointless philosophy, seeing that what we can say clearly amounts to next to nothing? If we omitted all that is unclear, we would probably be left with completely uninteresting and trivial tautologies.”
"Individual donors shouldn't diversify their donations"
Arguments in favor:
Arguments against:
For me, it was a moderate update against "bycatch" amongst LTFF grantees (an audience which, in principle, should be especially vulnerable to bycatch)
Really? I think it would be the opposite: LTFF grantees are the most persistent and accomplished applicants and are therefore the least likely to end up as bycatch.
I think most of us should get direct work jobs, and the E2G crowd should do high-EV careers (to the extent that they're personally sustainable), even if risky.
No, that wouldn't prove moral realism at all. That would merely show that you and a bunch of aliens happen to have the same opinions.
If you're someone with an impressive background, you can answer this by asking yourself if you feel that you would be valued even without that background. Using myself as an example, I...
Was I warmly accepted into EA back when my resume was much weaker than it is now? Do I think I would have gotten the same upvotes if I had posted an...
EA Forum posts have been pretty effective in changing community direction in the past, so the downside risk seems low
But giving more voting power to people with lots of karma entrenches the position and influence of people who already have high standing in the community under its current direction, so it would be an obstacle to influencing the community through forum posts.
If you think it's important for forum posts to be able to change community direction, you should be against vote power scaling with karma.
This presupposes that the way a post changes community direction is by having high karma, while I think it's actually about being well reasoned and persuasive AND being widely viewed. High karma helps a post get viewed, but that is neutral or actively negative if the post is low quality or flawed: it just entrenches people further in their positions and makes them think less of the forum. So for this change to help, there would have to be valuable posts that are low karma now but would be high karma under more democratic voting. I personally think the current system is better at selecting for quality, and that this outweighs any penalty to dissenting opinions, which I would guess is fairly minor in practice.
@Ben Kuhn has a great presentation on this topic. Relatedly, nonprofits have worse names: see org name bingo
(For what it's worth, I don't think you're irrational, you're just mistaken about Scott being racist and what happened with the Cade Metz article. If someone in EA is really racist, and you complain to EA leadership and they don't do anything about it, you could reasonably be angry with them. If the person in question is not in fact racist, and you complain about them to CEA and they don't do anything about it, they made the right call and you'd be upset due to the mistaken beliefs, but conditional on those beliefs, it wasn't irrational to be upset.)
Thanks, that's a great reason to downvote my comment and I appreciate you explaining why you did it (though it has gotten some upvotes so I wouldn't have noticed anyone downvoted except that you mentioned it). And yes, I misread whom your paragraph was referring to; thanks for the clarification.
However, you're incorrect that those factual errors aren't relevant. Your feelings toward EA leadership are based on a false factual premise, and we shouldn't be making decisions about branding with the goal of appealing to people who are offended based on their own misunderstanding.
Leadership betrayal: My reasoning is anecdotal, because I went through EA adjacency before it was cool. Personally, I became "EA Adjacent" when Scott Alexander's followers attacked a journalist for daring to scare him a little -- that prompted me to look into him a bit, at which point I found a lot of weird race IQ, Nazis-on-reddit, and neo-reactionary BS that went against my values.
This is actually disputed. While so-called "bird watchers" and other pro-bird factions may tell you there are many birds, the rival scientific theory contends that birds aren't real.
When a reward or penalty is sufficiently small, it can be less effective than no incentive at all, sometimes because it replaces an implicit (e.g. social) incentive.
In the study, the daycare had a problem with parents showing up late to pick up their kids, making the daycare staff stay late to watch them. They tried to fix this problem by implementing a small fine for late pickups, but it had the opposite of the intended effect, because parents decided they were okay with paying the fine.
In this case, if you believe recruiting people to EA does a huge amount of good, you might think that it's very valuable to refer people to EAG, and there should be a big referral bounty.
From an altruistic cause prioritization perspective, existential risk seems to require longtermism
No it doesn't! Scott Alexander has a great post about how existential risk issues are actually perfectly well motivated without appealing to longtermism at all.
...When I'm talking to non-philosophers, I prefer an "existential risk" framework to a "long-termism" framework. The existential risk framework immediately identifies a compelling problem (you and everyone you know might die) without asking your listener to accept controversial philosophical assumptions...
Caring about existential risk does not require longtermism, but existential risk being the top EA priority probably requires longtermism or something like it. Factory farming interventions look much more cost-effective in the near term than x-risk interventions, and GiveWell top charities probably look more cost-effective too.
By my read, that post and the excerpt from it are about the rhetorical motivation for existential risk rather than the impartial ethical motivation. I basically agree that longtermism is not the right framing in most conversations, and it's also not necessary for thinking existential risk work would be more valuable than the marginal public dollar.
I included the qualifier "From an altruistic cause prioritization perspective" because I think that from an impartial cause prioritization perspective, the case is different. If you're comparing existential risk to animal welfare and global health, I think the links in my comment make the case pretty persuasively that you need longtermism.
On the current margin, improving our odds of survival seems much more crucial to the long-term value of civilization. My reason for believing this is that there are some dangerous technologies which I expect will be invented soon, and are more likely to lead to extinction in their early years than later on. Therefore, we should currently spend more effort on ensuring survival, because we will have more time to improve the value of the future after that.
(Counterpoint: ASI is the main technology that might lead to extinction, and the period when it's invented might be equally front-loaded in terms of setting values as it is in terms of extinction risk.)
The idea of haggling doesn't sit well with me or my idea of what a good society should be like. It feels competitive, uncooperative, and zero-sum, when I want to live in a society where people are honest and cooperative.
Counterpoint: some people are more price-sensitive than typical consumers, and really can't afford things. If we prohibit or stigmatize haggling, society is leaving value on the table, in terms of sale profits and consumer surplus generated by transactions involving these more financially constrained consumers. (When the seller is a monopolist, they even introduce opportunities like this through the more sinister-sounding practice of price discrimination.)
I think EAs have the mental strength to handle diverse political views well.
No, I think you would expect EAs to have the mental strength to handle diverse political views, but in practice most of them don't. For example, see this heavily downvoted post about demographic collapse by Malcolm and Simone Collins. Everyone is egregiously misreading it as being racist or maybe just downvoting it because of some vague right-wing connotations they have of the authors.
If you don't aim to persuade anyone else to agree with your moral framework and take action along with you, you're not doing the most good within your framework.
(Unless your framework says that any good/harm done by anyone other than yourself is morally valueless and therefore you don't care about SBF, serial killers, the number of people taking the GWWC pledge, etc.)
embrace of the "Meat-Eater Problem" inbuilt into both the EA Community and its core ideas
Embrace of the meat-eater problem is not built into the EA community. I'm guessing a large majority of EAs, especially the less engaged ones who don't comment on the Forum, would not take the meat-eater problem seriously as a reason we ought to save fewer human lives.
Wow, incredible that this has 0 agree votes and 43 disagree votes. EAs have had our brains thoroughly fried by politics. I was not expecting to agree with this but was pleasantly surprised at some good points.
Now that the election is over, I'd love to see a follow-up post on what will probably happen during the next administration, and what will be good and bad from an EA perspective.
Is it available for macOS?