Seems like others agreed with you. I meant it mostly seriously.
The more substantial point that I'm trying to make is that the political balance of the EA Forum shouldn't be a big factor in someone's decision to publicize important information about a major charity evaluator, or probably even in how they frame the criticism. Many people read posts linked from the EA Forum who never read the comments or don't visit the Forum often for other posts, i.e. they are not aware of the overall balance of political sympathies on the Forum. The tenor of the Forum as a whole is something that should be managed (though I wouldn't adv... (read more)
I have been researching sterilizing rodents instead of killing them to control their populations, and it's much more popular already than I had realized. ContraPest is a bait that sterilizes rats with a few doses. It reduces sperm viability in males and induces aging of ovarian follicles in females, sort of like early menopause. There's a bit of a lag before the population reduces, but it has the benefits of humaneness, not disturbing the rats' territories (because older rats stick around, preventing movement between territories which can spread disease), ... (read more)
I think this post is pretty damning of ACE. Are you saying OP shouldn't have posted important information about how ACE is evaluating animal charities because there has been too much anti-SJ/DEI stuff on the forum lately?
Are you implying that Larry Summers was wrong or that Texaco's actions were somehow his fault?
I think it's important for EA to promote high decoupling in intellectual spaces. You also have to consider that this is a philosophy dissertation, which is an almost maximally decoupling space.
I don't understand why thinking like that quote isn't totally passé to EAs. At least to utilitarian EAs. If anyone's allowed to think hypothetically ("divorced from the reality") I would think it would be a philosophy grad student writing a dissertation.
I can answer 6, as I’ve been doing it for Wild Animal Welfare since I was hired in September. WAW is a new and small field, so it is relatively easy to learn the field, but there’s still so much! I started by going backwards (into the Welfare Biology movement of the 80s and 90s) and forwards (into the WAW EA orgs we know today) from Brain Tomasik, consulting the primary literature over various specific matters of fact. A great thing about WAW being such a young field (and so concentrated in EA) is that I can reach out to basically anyone who’s published on... (read more)
Such an answer is exactly what I am looking for!
I’m curious about people’s evaluations of (2)— how long would that go on? How bad would it really be compared to the losses from shutdown?
iirc, we actually did prompt them to take the exit survey and gave them time to fill it out during the fourth meeting, but clearly not everyone did. My memory of that is really not clear, though. We had been in breakout groups for most of that session, so maybe there was too much disorder when we asked them to take a survey at the end. And if we had done that, then they wouldn't have had their one-on-one meetings with us yet.
For the 9 month follow-up we just sent them an email.
Don't forget that a lot of groups have other funding sources available, especially student groups. The EA groups at Harvard make use of CEA, and we wouldn't be able to do as much without money from CEA, but we have plenty of other funding sources (such as Harvard and well-off alum EAs) and many of our events cost only volunteer labor.
Is it really a matter of incorrectness or just that you think that argument is really important and he didn’t include it? There are plenty of innocent reasons he might not have included that argument or many others. He might have thought it was a weak argument or maybe didn’t include it because it wasn’t relevant to his personal objections to NU.
But Peter, he just didn't have time and the CV issue was too unimportant (not to publish-- just too unimportant to verify):
The issue with Bostrom’s CV is a minor thing compared to the other things I write about in this text. For example, if I were to ask Bostrom something, I would rather ask him about the seemingly problematic behaviour of the organisation FHI he leads. There are also many other people that I mention in this text who I could have asked about more important things than a CV before publishing this text. But I doubt I would have t
What's unacceptable about this in your opinion, anon account?
None of the accusations here is shocking, and often they reflect the author's naivete more than any wrongdoing on the part of the accused. Assistants contribute to writing books (however, private correspondence is meant to stay private). Organizations set ethical standards for the conducting and sharing of their research. People present themselves in the best light possible. Will is a co-founder of EA, not of the idea of maximizing social impact, but of the set of ideas and practices that governs this community today.
I don't like Toby's "Why I'm Not a Negative Utilitarian" essay because I think it doesn't engage good arguments in favor of NU (to which I am partial). But I don't think it's in any way dishonest for him to have written an informal essay describing his views on the matter. I found it immensely helpful in understanding Toby's writings about the kind of utilitarianism he endorses.
I really appreciate this! Thank you! And I feel lucky to get any free tools like this. I was just irked because I didn’t understand the need for the change. I feel much better about the loss of the recurring donations functionality now that I know the old platform was at the end of its life.
I find it much less intuitive and the aesthetic very cold. I liked the pie chart on my MyGiving dashboard... although I understand how diversifying across causes made that feature easy to break.
Thanks, this kind of specific feedback on features you'd like is more helpful than vaguer comments about it being "so bad."
Overgeneral though this comment is, it does seem to me like GWWC and donations are really getting the shaft from EA Uber-orgs, and that giving simply not being a priority is probably part of the problem.
What I still don’t understand is why they abandoned a perfectly good platform with MyGiving (imo) in order to make an incomplete move to EA.org.
I thought it was so that it tied in with EA Funds, which made me think CEA was paying much more attention to donations: building a unified system that also allowed people to donate from one platform and automatically record their donations.
Although I agree that repeated donations not being an option is quite annoying.
I’m no power user either. I just want to be able to add and modify recurring donations, which you can’t do with EA.org. (I just learned you can email them with the details of a recurring donation to have them add it for you, but come on.) You could do this easily in MyGiving. I also find the EA.org interface very bare, unlike MyGiving. I just don’t understand why they needed to make this move when they weren’t prepared to finish it.
The only reason I don’t identify as longtermist is tractability. I would appreciate a definition that allowed me to affirm that when a being occurs in time is morally arbitrary without also committing me to focusing my efforts on the long-term.
Yes, it's a bit question-begging to assert that the actions with the highest marginal utility per dollar are those targeting long-term outcomes.
one thing to bear in mind is that even using the weighting scheme I suggested in the post - which seemingly strongly favors young people - that would move the median voter (in the US) from age 55 to age 40.
How do you get this result? Are you just saying that with these multipliers applied to the current age distribution of voters, the median US vote would be cast by a 40 yo? Or is this anticipating the response to the multipliers? Like, for example, does this take into account that young people would probably vote more if their votes counted 6x more?
I'... (read more)
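To make the first reading of the question concrete, here is a toy sketch of the mechanics (the voter counts and multipliers below are entirely made up for illustration, not the actual scheme from the post): apply a weight to each age band's turnout and find the age at which the cumulative weighted vote crosses half the total.

```python
# Hypothetical voter counts by age band (age: count) and a made-up
# multiplier schedule favoring the young -- NOT the scheme from the post.
voters = {25: 100, 35: 120, 45: 130, 55: 140, 65: 150, 75: 110}
multiplier = {25: 6, 35: 4, 45: 2, 55: 1, 65: 1, 75: 1}

def median_voter_age(counts, weights):
    """Age band containing the median vote once each vote is weighted."""
    total = sum(counts[a] * weights[a] for a in counts)
    running = 0
    for age in sorted(counts):
        running += counts[age] * weights[age]
        if running >= total / 2:
            return age

unweighted = median_voter_age(voters, {a: 1 for a in voters})  # 55 on this toy data
weighted = median_voter_age(voters, multiplier)                # 35 on this toy data
```

On these invented numbers the median voter shifts from the 55 band to the 35 band. Note this holds turnout fixed; modeling a behavioral response (young people voting more because their votes count more) would require changing `voters` itself, which is the second reading of the question.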
I downvoted your comments as well, Milan, because I think this is exactly the kind of thing that should go on the EA Forum. The emergence of this term “longtermism” to describe a vaguer philosophy that was already there has been a huge, perhaps the main EA topic for like 2 years. I don’t even subscribe to longtermism (well, at least not to strong longtermism, which I considered to be the definition before reading this post) but the question of whether to hyphenate has come up many times for me. This was all useful information that I’m glad was put up for e
An alternative minimal definition, suggested by Hilary Greaves (though the precise wording is my own), is that we could define longtermism as the view that the (intrinsic) value of an outcome is the same no matter what time it occurs. This rules out views on which we should discount the future or that we should ignore the long-run indirect effects of our actions, but would not rule out views on which it’s just empirically intractable to try to improve the long-term future
I’ve referred to this definition as “temporal cosmopolitanism.” Whatever we call it, ... (read more)
FWIW, I think the young lacking life experience and crystallized intelligence is pretty clutch. This argument rests on the young having not only a greater stake in the future but also being able to make sensible decisions about what to do with it. I would at least suggest that 18-25 yo voters not have a multiplier.
I do like reducing the influence of the old who know very well when voting that, for instance, climate change will not really affect them. But I think any vote weighting scheme has to take stakeholding and competence into account.
I would at least suggest that 18-25 yo voters not have a multiplier.
Yes. As a reductio ad absurdum of Will's idea, why not give toddlers an extreme multiplier? Well, we know toddlers don't make good judgments. But it's not like your ability to make good judgments suddenly turns a corner on your 18th birthday. So as long as we're refactoring voting weights for different ages, we should also fix the 18th birthday step function issue, and create a scheme which gradually accounts for a person's increased wisdom as they age.
[Edit: A countervailing consi
So you think he's worried about other people being misled?
You've done a good job at reporting the trends in thought and terminology here. I'm not directing the following at you, but at the trend in the field you're describing.
I'm an evolutionary biologist and I'm tired of people saying r/K has been discredited. I think what really happened is that people realized r/K was a generalization without realizing that every other useful principle in evolutionary biology is also a generalization.
I use r/K parlance and I never get any complaints from the evolutionary theorists and population gene... (read more)
My point is that Ben is in fact able to do whatever legal thing he wants. He doesn't need to make us wrong to do so. It's interesting that he feels the need to. Whether EA or Peter Singer has suggested that it's morally wrong not to give, Ben is free to follow his own conscience/desires and does not need our approval. If his real argument is that he should be respected by EAs for his decision not to give, I think that should be distinguished from a pseudo-factual argument that we're deceived about the need to give money.
But you seem to be also arguing "you don't need to justify your actions to yourself / at all"
Kinda. More like "nobody can make you act in accordance with your own true values-- you just have to want to."
If people aren't required to live in accordance with even their own values, what's the point in having values?
To fully explain my position would require a lot of unpacking. But, in brief, no-- how could people be required to live in accordance with their own values? Other people might try to enforce value-aligned livi... (read more)
"However, effective altruism really is warm and calculating."
I can't believe I've never thought of this! That's great :)
Great post, too. I think EA has a helpful message for most people who are drawn to it, and for many people that message is overcoming status quo indifference. However, I worry that caring too much, as in overidentifying with or feeling personally responsible for the suffering of the world, is also a major EA failure mode. I have observed that most people assume their natural tendency towards either indifference or overresponsibility is s
Is this speaking to a concern someone has that terraforming would make a bunch more animals to suffer? What motivated this piece?
From the early sections, I thought you were going in the opposite direction-- how already involved EAs can be mindful of their secret motives for being involved. (I think that's super-important, btw.) For outreach, I would have thought the implication was that we should balance the need to appeal to and accommodate the human need for status with the possibility that EA would get diluted by the attempt to market EA in a low-fidelity way. I agree with CEA's emphasis on the high-fidelity model: there's no point in growing EA if it stops being EA... (read more)
Now that I've made all these comments, I realize I should have just asked Ben if his post was his true rejection of EA-style giving. My comments have all been motivated by suspicion that Ben just isn't convinced by arguments about giving enough to give himself, but he feels like he has to prove them wrong on their own terms instead of just acting as he sees fit. (That's a lot of assumptions on my part.) If that particular scenario happens to be true for him or anyone reading, my message is that you are in charge of these decisions and you do... (read more)
I'm fairly confident, based on reading other stuff Ben Hoffman has written, that this post has much less to do with Ben wanting to justify a rejection of EA style giving, and much more to do with Ben being frustrated by what he sees as bad arguments/reasoning/deception in the EA sphere.
Singer says it's wrong to spend frivolously on ourselves while there are others in need but he doesn't say it should be illegal. He also doesn't give any hard and fast rules about giving, and he doesn't think people who don't give should be shamed. He simply points out how much more the money could do for others, each of whom matter as much as any of us.
I just get the feeling that Ben isn't comfortable doing what he wants or what he thinks would make most of us (wealthy people) happier without getting us to agree with him first that it's what everyone shou
As I commented on Ben's blog, I just think it bears mentioning that we're allowed to focus on our own lives whether or not there are people who could use our money more than us. So if anyone were motivated to undermine the need for donations in order to feel justified in focusing on themselves and their loved ones, they needn't do it. It's already okay to do that, and no one's perfectly moral. Maybe if you don't feel the need to prove EA wrong before taking care of yourself, you'll want to return to giving or other EA activities after giving yourself some TLC, because instead of feeling forced, you know you want to do these things of your own free will.
I'd like to propose another group that shouldn't donate: people with a pre-disposition to conditions that require treatment with medication that is hard on the kidneys.
I'm really glad I didn't try to donate my kidney a few years ago before I knew I would need to be taking a med (probably for the rest of my life) that can cause serious renal damage. In fact, kidney damage is a major reason people have to go off this drug and often they don't find an equivalent cocktail for dealing with the disease symptoms.
I imagine getting treate... (read more)
People who are doing direct work, if they expect three weeks of their work to produce more QALYs than donating.
It may be worth considering whether the enforced rest from donating a kidney would have some of the benefits of taking a vacation for you.
This could be turned into a searing satire of EA. "Earn a rest from the work that's too marginally impactful to pause for a few weeks by donating a kidney. To you, post-surgical recovery will seem like a vacation!"
The real goal you seem to be advancing, Milan, is spirituality, not psychedelics per se. Based on testimony from people I trust and some slightly dubious research, I think psychedelics can likely be helpful in that, but they shouldn't be our frontline tool. I think meditation is a much better candidate for that.
Sam Harris and Michael Pollan argue that psychedelics are useful for convincing people there's a there there, and that makes sense to me. You have to put a lot of time and blind effort into meditation to get that same assurance. But the struggle, an
Haven't had a chance to read much but it's already gold
But they have projects as well as what you're describing.
I think this is your strongest point, but the question remains whether you can specialize in situational awareness and adding complex value. Personally, I think you need to have a main hustle to really apply these abilities.
Not to be mean, but how much value has Alex actually generated? The size of his network is very impressive, but do we know that making it has had substantial positive outcomes?
(This is mostly a rhetorical question because I know Alex and his activities very well. I know my opinion but perhaps you will disagree. Also, he knows about my skepticism.)
I appreciate this!
Although I don't think it's a likely EA cause area, I definitely think it's good for the world to raise awareness about the costs of sleep deprivation among EAs! I'd love to see norms in our community of respecting sleep, like not having events too late, not making them too overstimulating, not relying on alcohol to make something a social event, rejecting startup-y "always on" culture by doing business mostly by daylight, etc.
I think I know very well where Nathan is coming from, and I don't think it's invalid, for the reasons you state among others. But after much wrangling with the same issues, my comment is the only summary statement I've ever really been able to make on the matter. He's just left religion and I feel him on not knowing what to trust-- I don't think there's any other place he could be right now.
I suppose what I really wanted to say is that you can never surrender those doubts to anyone else or some external system. You just have to accept that you will make mistakes, stay alert to new information, and stay in touch with what changes in you over time.
First of all, youch, people did not like this post. That's okay.
Aww, I'm sorry-- I didn't mean to sound harsh. I get very sensitive on this forum so I hate that I made you feel that way. I guess I was just really eager to clarify that diversity was not why I wanted an ethnography done and not considerate enough of the position you laid out.
I have a strong reaction against weighted voting on the basis of demographics, but it would definitely be interesting to see how it changed things.
Just person to person, I don't think there's any substitute for staying awake and alert around your beliefs. I don't mean be tense or reflexively skeptical-- I mean accept that there is always uncertainty, so you have to trust that, if you are being honest with yourself and doing your best, you will notice when discomfort with your professed beliefs arises. You can set up external standards and fact checking, but you can't expect some external system to do the job for you of knowing whether you really think this stuff is true. People who don't trust themselves on the latter over-rely on the former.
+1 to this.
I partly agree with Nathan's post, for a few reasons: