How do you expect incubating for-profit orgs to differ from AIM's experience incubating charities, and what do you plan to do to execute well despite these differences?
Nice! This is helpful, and I love the reasoning transparency. How did you get to the 80% CI? (Sorry if I missed this somewhere.)
The problem is that you are framing these ideas as advice you're giving to others - advice that, if taken seriously, could affect something important (e.g. a job interview). If you're going to presume to advise others, you should be more confident the advice is true/helpful.
I thought this was a very useful review and would strongly encourage others to read it, if they’ve engaged with the previous posts on this subject. I wouldn't have seen it without your post, so thanks! I think publishing on the forum in full (or relevant sections) would be great - though I'll leave it to the author/others to decide.
I loved this. For hungry readers, Peter Godfrey-Smith's 'Other Minds' is great (so too the subsequent 'Metazoa').
Awesome work, thanks! And this model resonates with my experience getting more involved with bio over the last few years.
Yeah, though to be fair the CEA for Malawi was b/c it was LEEP's literal first campaign. I'd imagine LEEP has CEAs for all their country work which include adjustments for likelihood of success, though I don't know whether they intend to publish them any time soon.
Yeah makes sense, and that the early research could have been heavily discounted by pessimism about a charity achieving big wins.
One example I know of off the top of my head is LEEP - their CEA for their Malawi campaign found a median of $14/DALY. CE's original report on lead paint regulation suggested $156/DALY (as a central estimate, I think). That direction and magnitude is pretty surprising to me. I expect it would be explicable based on the details of the different approaches/considerations, but I'd need to look into the details. Maybe a motivating story is that LEEP's Malawi campaign was surprisingly fast and effective compared to the original report's hopes?
Another is Family ...
I sympathise with this view, but I think I see it in more continuous terms than ex ante vs. ex post, and maybe akin to quality. This is because even ex post, I think there would still be substantial guess-work and assumptions, and the bottom line still relies on interpretation. But the difference for ex post is how empirically informed that analysis can be, and how specific. I.e. an ex post analysis can ground estimates on data for that specific org, with that program, in that community. Ex ante analyses can also differ in quality for how empirically inform...
I think a similar view is found in 'Why we can't take expected value estimates literally even when they're unbiased' I.e. we should have a pretty low prior that any particular intervention is above (e.g.) 10x cash transfers, but the strength and robustness of top charities' CEAs are sufficient to clear them over the bar. And most CEAs of specific interventions written up on the forum aren't compelling enough to bring the estimate all that much higher from the low prior.
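To make the prior-vs-estimate point concrete, here's a minimal sketch of the Bayesian shrinkage intuition behind that post, as a normal-normal update on log cost-effectiveness (in multiples of cash transfers). All of the numbers are made up purely for illustration - the post itself doesn't use this exact model.

```python
# Illustrative only: a normal-normal Bayesian update on log cost-effectiveness.
# A weak (high-variance) CEA barely moves a skeptical prior; a robust one moves it a lot.
import math

def posterior_log_ce(prior_mean, prior_sd, estimate, estimate_sd):
    """Shrink a noisy log cost-effectiveness estimate toward a skeptical prior."""
    prior_prec = 1 / prior_sd**2
    est_prec = 1 / estimate_sd**2
    post_mean = (prior_prec * prior_mean + est_prec * estimate) / (prior_prec + est_prec)
    post_sd = math.sqrt(1 / (prior_prec + est_prec))
    return post_mean, post_sd

# Skeptical prior (assumed): most interventions cluster around ~1x cash transfers
# (log 0), with sd 1 on the log scale.
prior_mean, prior_sd = 0.0, 1.0

# A shaky CEA claiming 10x (log 10 ~ 2.3) with large uncertainty...
weak_mean, _ = posterior_log_ce(prior_mean, prior_sd, math.log(10), 2.0)
# ...versus a robust CEA with the same point estimate but much tighter uncertainty.
strong_mean, _ = posterior_log_ce(prior_mean, prior_sd, math.log(10), 0.5)

print(math.exp(weak_mean))    # posterior multiple stays close to the prior (~1.6x)
print(math.exp(strong_mean))  # posterior multiple moves much further (~6.3x)
```

Same headline "10x" claim in both cases; only the strength of the evidence differs, which is the sense in which most forum CEAs "aren't compelling enough" to pull the posterior far from the prior.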
I agree it'd be informative to see what 'naive' versions of top charity CEAs would be li...
I weakly agree with the claim that the offense/defense balance is not a useful way to project the implications of AI. However, I disagree strongly with how the post got there. Considering only cyber-security and per-capita death rate is not a sufficient basis for the claim that there is "little historical evidence for large changes in the O/D balance, even in response to technological revolutions."
There are good examples where technology greatly shifts the nature of war: castles favouring defense, before being negated by cannons. The machine gun and bar...
Nitpick: I think you meant bioterrorism, not terrorism, which includes more data.
Thanks! Fixed.
I don't know the nuclear field well, so don't have much to add. If I'm following your comment though, it seems like you have your own estimate of the chance of nuclear war producing 47+ Tg of soot, and on the basis of that infer the implied probability supers give to extinction conditional on such a war. Why not instead infer that supers have a higher forecast of nuclear war than your 0.39% by 2100? E.g. a ~1.6% chance of nuclear war with 47+ Tg and a 5% chanc...
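The arithmetic behind this inference is just a division, but the direction matters, so here's a toy sketch. The 0.39% and ~1.6% war probabilities come from the comment; the headline extinction probability is an assumed placeholder for illustration only.

```python
# Toy illustration: holding the supers' headline P(extinction) fixed, a lower
# P(nuclear war with 47+ Tg of soot) forces a HIGHER implied conditional
# P(extinction | such a war), and vice versa.

def implied_conditional(p_extinction, p_war):
    """P(extinction | war) implied by P(extinction) = P(war) * P(extinction | war)."""
    return p_extinction / p_war

p_extinction = 0.00078  # assumed placeholder for the supers' headline figure

print(implied_conditional(p_extinction, 0.0039))  # with my 0.39% war estimate -> 20%
print(implied_conditional(p_extinction, 0.016))   # with a ~1.6% war estimate -> ~4.9%
```

So the same headline forecast can be read either as a high conditional extinction probability given a low war probability, or a modest conditional probability given a higher war forecast - which is the ambiguity the comment is pointing at.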
Hi Vasco, nice post thanks for writing it! I haven't had the time to look into all your details so these are some thoughts written quickly.
I worked on a project for Open Phil quantifying the likely number of terrorist groups pursuing bioweapons over the next 30 years, but didn't look specifically at attack magnitudes (I appreciate the push to get a public-facing version of the report published - I'm on it!). That work was as an independent contractor for OP, but I now work for them on the GCR Cause Prio team. All that to say these are my own views, not OP'...
Although focused on civil conflicts, Lauren Gilbert's shallow investigation explores some possible interventions in this space, including:
As one data point: I was interested in global health from a young age, and found 80K during med school in 2019, which led to opportunities in biosecurity research, and now I'm a researcher on global catastrophic risks. I'm really glad I've made this transition! However, it's possible that I would not have applied to 80K (and not gone down this path) if I had gotten the impression they weren't interested in near-termist causes.
Looking back at my 80K 1on1 application materials, I can see I was aware that 80K thought global health was less neglected tha...
Happy to end this thread here. On a meta-point, I think paying attention to nuance/tone/implicatures is a better communication strategy than retreating to legalese, but it does need practice. I think reflecting on one's own communicative ability is more productive than calling others irrational or being passive-aggressive. But it sucks that this has been a bad experience for you. Hope your day goes better!
Things can be 'not the best', but still good. For example, let's say a systematic, well-run, whistleblower organisation was the 'best' way. And compare it to 'telling your friends about a bad org'. 'Telling your friends' is not the best strategy, but it still might be good to do, or worth doing. Saying "telling your friends is not the best way" is consistent with this. Saying "telling your friends is a bad idea" is not consistent with this.
I.e. 'bad idea' connotes much more than just 'sub-optimal, all things considered'.
Your top-level post did not claim 'public exposés are not the best strategy', you claimed "public exposés are often a bad idea in EA". That is a different claim, and far from a default view. It is also the view I have been arguing against. I think you've greatly misunderstood others' positions, and have rudely dismissed them rather than trying to understand them. You've ignored the arguments given by others, while not defending your own assertions. So it's frustrating to see you playing the 'I'm being cool-headed and rational here' card. This has been a pretty disappointing negative update for me. Thanks
You didn’t provide an alternative, other than the example of you conducting your own private investigation. That option is not open to most, and the beneficial results do not accrue to most. I agree hundreds of hours of work is a cost; that is a pretty banal point. I think we agree that a more systematic solution would be better than relying on a single individual’s decision to put in a lot of work and take on a lot of risk. But you are, blithely in my view, dismissing one of the few responses that have the potential to protect people. Nonlinear have their...
Not everyone is well connected enough to hear rumours. Newcomers and/or less-well-connected people need protection from bad actors too. If someone new to the community was considering an opportunity with Nonlinear, they wouldn't have the same epistemic access as a central and long-standing grant-maker. They could, however, see a public exposé.
What a fantastic resource, thanks all! Also may be worth adding, the new National Security Commission on Emerging Biotechnology, which will be delivering a 2024 report based on “a thorough review of how advances in emerging biotechnology and related technologies will shape current and future activities of the Department of Defense“ - delivering it to the DoD, White House, and Congress.
Ooh what about Bob Fischer? He's a philosophy professor who ran Rethink's moral weights project and is now on their new Worldview Investigations team! [edit: just saw him suggested in a different comment]
How come, out of curiosity? I haven't looked into EDCs at all, but on a skim - is it non-neglectedness, weak evidence, both, weak importance, other things?
Promoting stimulant use can be fine in some cases - e.g. "have you considered getting an ADHD diagnosis? Maybe try mine for a day and see how you feel."
I think this is a bad idea. Suggesting someone 'get a diagnosis' is a terrible approach to health and medical advice. Giving someone your own prescribed medication is also a bad idea, and is exactly the kind of norm-crossing ickiness that should be reduced/eliminated. The version I would endorse is:
"Have you considered whether you might have ADHD? It might be a good idea to talk to a doctor about these issues you're having, as medication can be helpful here."
Just want to say I appreciate your commentary over the past 9 months. Having someone with legal expertise and (what seems to me) a pretty even-handed and sensible perspective is a really valuable contribution.
Cool! One point from a quick skim - the number of animals wouldn't be lost in many kinds of human extinction events or existential risks. Only a subset would erase the entire biosphere - e.g. a resource-maximising rogue AI, vacuum decay, etc. Presumably with extinction of just humans the animal density of reclaimed land would be higher than current, so the number of animals would rise (assuming it outweighs the end of factory farming).
The implications of human existential risks for animals are interesting, and I can see some points either way dependin...
Does anyone else from the UK get an 'unsupported protocol' error from the Asterix site? I do, but it doesn't trigger if I use a VPN.
Thanks for this - super interesting! One thing I hadn't caught before is how much the estimates reduce for domain experts in the top quintile for reciprocal scoring - in many cases an order of magnitude lower than that of the main domain expert group!
I think another factor is that HLI's analysis is not just below the level of GiveWell, but below a more basic standard. If HLI had performed at this basic standard, but below GiveWell, I think strong criticism would have been unreasonable, as they are still a young and small org with plenty of room to grow. But as it stands the deficiencies are substantial, and a major rethink doesn't appear to be forthcoming, despite being warranted.
I really enjoyed this 2022 paper by Rose Cao ("Multiple realizability and the spirit of functionalism"). A common intuition is that the brain is basically a big network of neurons with input on one side and all-or-nothing output on the other, and the rest of it (glia, metabolism, blood) is mainly keeping that network running.
The paper's helpful for articulating how that model's impoverished, and argues that the right level for explaining brain activity (and resulting psychological states) might rely on the messy, complex, biological details, such tha...
As the origin of that comment, I should say other reasons for non-convergence are stronger, but the attrition thing contributed. E.g. biases both for experts to over-rate and supers to under-rate. I wonder also about the structure of engagement, with strong team identities fomenting tribal stubbornness for both...
Ah okay, thanks for the correction! In which case I think ~all my questions apply to the B10 figure then.
On the DURC CEA:
Thanks for engaging! Yep I agree with what you said - cross-pollination and interdisciplinary engagement and all that. For context I haven't spent a lot of time looking at the Collins' work, hence light stakes/investment for this discussion. But my impression of their work makes me skeptical that they are "highly accomplished" in any field and I am also very surprised that they would be "thinkers [you] respect" (to borrow from Austin's comment).
In terms of their ideas, I think that hosting someone as a speaker at your conference doesn't mean that you endor...