I find it’s very rare to have to do the work test in one sitting, and I at least usually do better if I can split it up a bit.
It sounds like you would benefit from greater prioritisation and focus. (Eg see: https://calnewport.com/dangerous-ideas-college-extracurriculars-are-meaningless/).
I don’t think it requires years of learning to write a thoughtful op-ed-level critique of EA. I’d be surprised if that’s true for an academic paper-level one either
That's fair! But I also think most op-eds on any topic are pretty bad. As for academic papers, I have to say it took me at least a year to write anything good about EA, and that was on a research-only postdoc with 50% of my research time devoted to longtermism.
There's an awful lot that has been written on these topics, and catching up on the state of the art can't be rushed without bad results.
The point I’m trying to make is that there are many ways you can be influential (including towards people that matter) and only some of them increase prestige. People can talk about your ideas without ever mentioning or knowing your name, you can be a polarising figure who a lot of influential people like but who it’s taboo to mention, and so on.
I also do think you originally meant (or conveyed) a broader sense of ‘influential’, as you mention economic output and the dustbins of history, which I would consider to be about broad influence.
This is interesting, thanks. Though I wanted to flag that the volume of copyediting errors means I’m unlikely to share it with others.
I’m very confused why you think that FHI brought prestige to Oxford University rather than the other way around
In the examples you give, the arguments for and against are fairly cached, so there’s less of a need to bring them up. That doesn’t apply here. I also think your argument is often false even in your examples: in my experience, the bigger the gap between the belief the person is expressing and the ~average belief of everyone else in the audience, the more likely there is to be pushback (though not always by putting someone on the spot to justify their beliefs, e.g. by awkwardly changing the conversation or outright ridiculing the person for the belief).
In my experience, people intuitively update less on positive comments and more on negative comments to correct for this asymmetry (that it's more socially acceptable to give unsupported praise than unsupported criticism). Your preferred approach to correcting the asymmetry, while I agree it is better in the abstract, doesn't work in the context of these existing corrections.
Re your footnote 4, CE/AIM are starting an earning-to-give incubation program, so that is likely to change pretty soon
Are you sure it's not the other possible candidate? I have only heard negative things about one of their personalities.
The Wired article says that there’s been a bunch more research in recent years about the effects of bed nets on fish stocks, so I would consider the GiveWell response out of date
I don’t think it can be separated neatly. If the person who has died as a result of the charity’s existence is a recipient of a disease reduction intervention, then they may well have died from the disease instead if not for the intervention.
What do you see as the importance of GiveWell specifically pulling out a “deaths caused” number, vs factoring that number in by lowering the “lives saved” number?
Are you saying that no competent philosopher would use their own definition of altruism when what it “really” means is somewhat different? My experience of studying philosophy has been the reverse: defining terms idiosyncratically is very common.
Is the implication of this paragraph, that all the events described happened after SBF started donating FTX money, intentional?
...WHILE SBF’S MONEY was st
I don’t think you incorporate the number at face value, but plausibly you do factor it in in some capacity, given the level of detail GiveWell goes into for other factors
I am very surprised to read that GiveWell doesn't at all try to factor in deaths caused by the charities when calculating lives saved. I don't agree that you need a separate number for lives lost as for lives saved, but I had always implicitly assumed that 'lives saved' was a net calculation.
The rest of the post is moderately misleading though (e.g. saying that Holden didn't start working at Open Phil, and the EA-aligned OpenAI board members didn't take their positions, until after FTXFF had launched).
The "deaths caused" example you picked was pretty tendentious. I don't think it's reasonable to consider an attack at a facility by a violent criminal, in a region with high baseline violent crime, "deaths caused by the charity", or to extrapolate that into the assumption that two more people will be shot dead for every $100,000 donated. (For the record, if you did factor that into their spreadsheet estimate, it would mean saving a life via that program now costs $4776 rather than $4559.)
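For concreteness, here's a minimal Python sketch of what netting a "deaths caused" rate out of a cost-per-life-saved figure looks like. The function and the death rates are illustrative assumptions on my part, not GiveWell's actual methodology or numbers:

```python
def net_cost_per_life(cost_per_life_saved: float, deaths_per_dollar: float) -> float:
    """Fold a hypothetical 'deaths caused' rate into a cost-per-life figure
    by netting deaths against lives saved (illustrative only)."""
    lives_per_dollar = 1 / cost_per_life_saved
    net_lives_per_dollar = lives_per_dollar - deaths_per_dollar
    if net_lives_per_dollar <= 0:
        raise ValueError("deaths would fully offset lives saved")
    return 1 / net_lives_per_dollar

# Baseline figure from the comment above; the death rates are purely illustrative.
baseline = 4559
for deaths_per_100k in (0.5, 1.0, 2.0):
    adjusted = net_cost_per_life(baseline, deaths_per_100k / 100_000)
    print(f"{deaths_per_100k} deaths per $100k -> ${adjusted:,.0f} per net life saved")
```

The adjusted figure rises smoothly as the assumed death rate rises, which is the sense in which "lives saved" can straightforwardly be a net calculation rather than a separate reported number.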
I would expect the lives saved from the vaccines to be netted out against deat...
We don't know from this announcement that they are planning to prioritise rapidity of sale over time-adjusted return - it could still make sense to stop, e.g., paying as many salaries, and to have declared it shut down as a project.
That wasn’t my interpretation of this section. I took “be smart” to mean something like ‘make smart career decisions’, not ‘be Smart™’.
Regarding your last paragraph, I see the Profile 1 vs Profile 2 axis as basically distinct from the Doer vs Thinker axis. People can spend years in large companies without ever needing or developing a get-sh*t-done mentality, and on the other hand starting an EA org and rapidly iterating can be a great way to develop or exercise that skill (see e.g. BlueDot Impact, AI-Plans.com). Maybe it's that you're leaving out a Profile 3: people who start their career in (or very quickly switch into) EA, but by starting a new thing rather than working their way up the ladder of an EA org. (Though the starting of a new thing could technically happen within an existing org as well.)
I'd be quite interested in reading a more fleshed-out version of this, if you were considering whether that was worth your time. What dimensions of advice about a given career path are you seeing people be given that should be discounted absent success in that domain?
All CE charities to date have focused on global development or animal welfare
CE incubated Training for Good, which runs two AI-related fellowships. They didn’t start out with an AI focus, but they also didn’t start out with a GHD or animal welfare focus.
Saying it isn't an EA project seems too strong - another co-founder of SMA is Jan-Willem van Putten, who also co-founded Training for Good which does the EU tech policy and Tarbell journalism fellowships, and at one point piloted grantmaker training and 'coaching for EA leaders' programs. TfG was incubated by Charity Entrepreneurship.
You missed the most impressive part of Jan-Willem’s EA CV - he used to co-direct EA Netherlands, and I hear that's a real signal of talent ;)
But yes, I guess it depends on how you define ‘EA project’. They're intentionally trying to do something different, so that's why I don't describe them as one, but the line is very blurred when you take into account the personal and philosophical ties.
If EA was a broad and decentralised movement, similar to e.g., environmentalism, I'd classify SMA as an EA project. But right now EA isn't quite that. Personally, I hope we one day get there.
I agree that starting with some non-EA experience is good (and this is the approach I took), though 5 years seems too long.
I think it’s reasonable to focus on expressing an experienced sentiment, but I think it’s also fair for people to push back on the sentiment. There are after all people who have felt alienated from and pushed out of EA as a result of the active shaping of forum content to be more agreeable.
implicitly endorsed by CEA by virtue of not being removed or something like that
I think it would be quite bad if forum mods began to remove posts on the basis that something existing on the forum constitutes an endorsement by CEA. I’m not even sure it’s a coherent im...
It's often done to make sure the reader tries to weigh the merits of the content by itself.
My understanding is that it's usually meant to serve the opposite purpose: to alert readers to the possibility of bias so they can evaluate the content with that in mind and decide for themselves whether they think bias has crept in. The alternative is people being alerted to the CoI in the comments and being angry that quite relevant information was kept from them, not that they would otherwise still know about the bias and be unable to evaluate the article well because of it.
I think the key actual difference (vs the perceived one, as you point out) is whether you think those constraints are good or not.
CE/AIM just launched something like a founding-to-give incubation program - it will be interesting to see how that goes, who their participants end up being, etc.
Hmm, so I currently think the default should be that withdrawals without a decision aren't included in the time-till-_decision_ metric, as otherwise you're reporting a time-till-closure metric. (I weakly think that if the withdrawal is due to the decision taking too long, and that time is above the average (as an attempt to exclude cases where the applicant is just unusually impatient), then it should be incorporated in some capacity, though this has obvious issues.)
Perhaps I am overestimating how worried a source might be that their organisation traces a leak back to them if it's known that someone from within the organisation provided it.
I tick 2.5 of the DEI boxes you’ve identified, and I found this post quite off-putting. It’s hard for me to evaluate the examples, as I don’t tick the box you’ve reasonably chosen to focus on, but I found the anecdote about your experience on the plane quite alarming. You say “I get it”, but I don’t get it. Airport security is overly stringent, and I’d be very surprised if I reacted that way in similar circumstances. Should I be offended that you think it’s representative of the average white person’s feelings? So I wonder if you might be projecting your own biases onto other white people/men/etc.
Hi Rebecca, I am realizing, after posting and after your insightful comment, that my feelings about DEI may be at least to some degree a sort of male/white guilt, and that I am overcompensating. And it is a good point that I might be projecting my biases too strongly onto others who share my privileges - I did spend the first 18 years of my life in a very white environment, for example, so am probably wired quite differently from someone who grew up somewhere more diverse. Your comment is definitely well taken and makes me update towards being e...
My point was that if someone withdraws their application because you were taking so long to get back to them, and you count that as the date you gave them your decision, you’re artificially lowering the average time-till-decision metric.
Actually, the reason I asked _whether_ you’d factored in withdrawn applications, not _how_, was to make sure my criticism was relevant before bringing it up - but that probably made the criticism less clear.
My point is more around the fact that if a person withdraws their application, then they never received a decision and so the time till decision is unknown/infinite, it’s not the time until they withdrew.
The question relating to website timelines would be hard to answer, as I believe the website was changed a few times.
I think a 2x2 rather than 1x3 seating arrangement would be more natural. Currently it feels like you and Arden are too far away to make it a cosy chat vibe. I agree with Jamie that the topics should be impact-relevant, rather than just friends chatting about random things.