I'm an AI Program Officer at Longview Philanthropy, though all views I express here are my own.
Before Longview I was a researcher at Open Philanthropy and a Charity Entrepreneurship incubatee.
Edit: I didn’t see how old this post was! It came up on my feed somehow and I’d assumed it was recent.
Thanks for this. I’ve only skimmed the report and don’t have expertise in the area, so the below is lightly held.
Section 4.2.3 discusses negative wellbeing effects. I think these are a serious downside risk, but other than noting that guest workers do indeed face severe harms, the report’s response rests on a single paper (Clemens 2018) and the idea that the intervention could improve things via surveys and ratings. Most of the benefits/harms considered in the rest of the report appear, on a skim, to be financial.
I think the risk of facilitating severe harms against individuals (participating in the ‘repugnant transaction’) is very unsettling, and would be my main reason not to donate to such a charity. If I were a prospective donor I would want to see deeper exploration and red teaming of this worry.
I’d also note that this issue has characteristics that EA/AIM is likely to systematically underrate:
I thought this was great! Seems like a very accessible way to spread this info.
Two minor notes: when you perch you get a little stuck and can’t move or unperch; I only got free by clicking socialize. And in the caged scenario, I shared the space with 2 other chickens, not the 5–10 the info says. Depicting other chickens in the surroundings of the cage, rather than the existing pattern, would make it more intense too.
If you or others were going to extend it, I’d imagine gamifying it might be interesting. E.g. you gain points by performing the natural behaviors, and points allow you to unlock the more elaborate natural behaviors. Then have some mechanic where you have to choose the deleterious behaviors (attacking other chickens, pulling your own feathers). Maybe some stress bar that increases as a function of the space you have: performing natural behaviors brings the stress down, and this is manageable in the kinder scenarios, but in cages the stress increases rapidly and the natural behaviors aren’t available, so you’re forced into the deleterious ones.
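To make that loop concrete, here’s a minimal sketch of how the stress mechanic could work. Everything here (the behavior lists, the space threshold, the numbers) is my own illustrative assumption, not anything from the existing game:

```python
# Minimal sketch of the proposed stress loop. All names and numbers are
# illustrative assumptions, not taken from the existing game.
NATURAL = {"dustbathe": 3.0, "perch": 2.0, "forage": 2.5}   # stress relief
DELETERIOUS = {"pull_feathers": 4.0, "peck_other": 5.0}     # relief, at a cost

class Chicken:
    def __init__(self, space_m2: float):
        self.space_m2 = space_m2
        self.stress = 0.0
        self.points = 0

    def available_behaviors(self) -> dict:
        # In a cage there's no room for the natural behaviors, so only
        # the deleterious ones remain available.
        if self.space_m2 < 0.1:
            return dict(DELETERIOUS)
        return {**NATURAL, **DELETERIOUS}

    def tick(self) -> None:
        # Stress accumulates faster the less space the chicken has.
        self.stress += 1.0 / max(self.space_m2, 0.05)

    def act(self, behavior: str) -> None:
        relief = self.available_behaviors()[behavior]
        self.stress = max(0.0, self.stress - relief)
        # Natural behaviors earn unlock points; deleterious ones cost them.
        self.points += 1 if behavior in NATURAL else -2

# Free range: stress is manageable with natural behaviors.
# Caged: stress climbs fast and only the deleterious options are left.
free_range = Chicken(space_m2=4.0)
caged = Chicken(space_m2=0.05)
```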
The main benefit of this gamification would be to increase the chance that people get interested, or that some streamer gives it a go.
Nice work!
Thanks for this. Post-hoc theorizing:
‘Doing good better’ calls to mind comparisons to the reader’s typical ideas about doing good. It implicitly criticizes those ideas, which is a negative experience for the reader and could cause defensiveness.
‘Do the most good’ makes the reader attempt to imagine what that could be, which is a positive and interesting question, and doesn’t immediately challenge the reader’s typical ideas about doing good.
It wouldn’t have been obvious to me beforehand whether the above would be outweighed by worries about reactions to ‘the most good’ or what have you, so I appreciate you gathering empirical evidence here.
"Given leadership literature is rife with stories of rejected individuals going on to become great leaders"
The selection effect can be very misleading here: in that literature you usually don't hear from all the individuals who were selected and failed, nor from those who were rejected correctly and would have failed, and so on. Lots of advice from the start-up/business sector is super sus for this exact reason.
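A toy simulation of the point (all numbers made up): even when rejection carries zero signal about later success, the pool of tellable stories contains only rejected-then-successful people, because the failures never get written up.

```python
# Toy simulation of the survivorship/selection effect. All numbers are
# made up; the point is only the shape of the sampling.
import random

random.seed(0)
N = 100_000
# Rejection is completely uninformative about later success here.
people = [{"rejected": random.random() < 0.5,
           "succeeded": random.random() < 0.02}
          for _ in range(N)]

rejected = [p for p in people if p["rejected"]]
success_rate = sum(p["succeeded"] for p in rejected) / len(rejected)

# The leadership literature samples only the rejected-then-successful.
stories = [p for p in rejected if p["succeeded"]]

print(f"Success rate among the rejected: {success_rate:.1%}")  # ~2%
print(f"Stories available to write about: {len(stories)}")     # ~1,000
print("Rejected people who failed and got a book deal: 0")
```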
What can I read to understand the current and near-term state of drone warfare, especially (semi-)autonomous systems?
I'm looking for an overview of the developments in recent years, and what near-term systems are looking like. I've been enjoying Paul Scharre's 'Army of None', but given it was published in 2018 it's well behind the curve. Thanks!
I don't know. My guess is that they give very slim odds to the Trump admin caring about carbon neutrality, and think the benefit of including a mention in their submission is close to zero (other than demonstrating resolve in their principles to others).
On the minus side, such a mention risks a reaction with significant cost to their AI safety/security asks. So overall, I can see them thinking that including a mention does not make sense for their strategy. I'm not endorsing that calculus, just conjecturing.
"very obviously their direct experience with thinking and working with existing AIs would be worth > $1M pa if evaluated anonymously based on understanding SOTA AIs, and likely >$10s M pa if they worked on capabilities."
"Y&S endorsing some effort as good would likely have something between billions $ to tens of billions $ value."
fwiw both of these claims strike me as close to nonsense, so I don't think this is a helpful reaction.