Co-founder, executive director and project lead at Oxford Biosecurity Group
Creator of Pandemic Interventions Course - a biosecurity interventions intro syllabus
All opinions I post on the forum are my own. Tell me why I'm wrong.
The bar should not be set at 'difficult financial situation', and this is also something there are often incentives against explicitly mentioning when applying for funding. Getting paid employment while studying (even for full-time degrees) is normal.
My five-minute Google search to put some numbers on this:
Proportion of students who are employed while studying:
- UK: a survey of 10,000 students showed that 56% of full-time UK undergraduates had paid employment (14.5 hours/week on average) - June 2024 Guardian article https://www.theguardian.com/education/article/2024/jun/13/more-than-half-of-uk-students-working-long-hours-in-paid-jobs
- USA: 43% of full-time students work while enrolled in college - January 2023 Fortune article https://fortune.com/2023/01/11/college-students-with-jobs-20-percent-less-likely-to-graduate-than-privileged-peers-study-side-hustle/
Why are students taking on paid work?
- UK: "Three-quarters of those in work said they did so to meet their living costs, while 23% also said they worked to give financial support for friends or family." From the Guardian article linked above.
- USA: I cannot quickly find a recent statistic, but given the system (e.g. https://www.collegeave.com/articles/how-to-pay-for-college/) I expect working rather than taking out (as much in) loans is a big reason.
On the other hand, spending time on committees is also very normal as an undergraduate, and those are not paid. However, in comparison, the time people spend on this is much more limited (say ~2-5 hrs/week), there is rarely a single organiser, and I've seen a lot of people drop off committees - some because they are less keen, but some for time commitment reasons (which I expect is sometimes/often because they are doing paid work).
Thanks for compiling this! I skimmed it, and it was a good way of getting an overview of what is happening in parts of EA that I know less about. Having it separated both by cause and by month was useful, so the reader can choose which overview they prefer, although some non-AI causes could have had their own sections rather than being grouped together. (I slowly scrolled through month by month and clicked on some of the more interesting-looking articles.)
Exciting! A few thoughts/questions:
I'm not sure quantitatively how much of a difference it makes having a giving pledge focused just on healthcare professionals rather than on everyone, but it's possible the focus/community aspect makes some difference. The concept was interesting enough to get me (former medical device engineer, now in health economics) to click on the post and then the website, so there's that.
When did this actually start, and how many people have taken the pledge? On the Pledge page there are 15 people listed, ranging from October 2022 to December 2023. That is possibly not everyone who has signed up, and/or the pledge might not have been fully open yet, but my first thought was that it looks like not many people have signed up and this isn't that big. If the launch is now (i.e. with this post), that makes more sense.
I see a comment below that 1% seems a bit low. Healthcare salaries vary a lot between countries (e.g. a doctor in the US typically earns much more than a doctor in the UK), and tax rates differ between countries as well. If healthcare workers in the countries you are targeting typically already donate more than 1%, I think that could take away a lot of the pledge's impact (people don't donate more, and might even feel justified in donating less). On the other hand, I can see the argument that 10% is high and potentially off-putting, so 5% might be a good middle ground. How did you choose which percentage to set as the pledge amount?
Hey, thanks also for the detailed response.
I don't think that part is our disagreement. Maybe the way I would phrase the question is whether an additional multiplier should be put on extinction on top of the expected future loss of wellbeing. If I were to model it, the answer would be 'no', to avoid double counting (i.e. the effect of extinction is the effect of the future loss of wellbeing). The disagreement is why this is not by default assumed to apply to animals as well.
"If you knew for sure that the animals had net negative lives, would you still think their extinction was bad?" Not sure how likely such a situation is to come up, as I'm not sure how I would know this for sure. Because that seems like not just being sure that every of that species that exists now has a net negative life, it's assuming that every of that species that might exist in the future also will have. But to answer the question philosophically and not practically, I would not say that the extinction of a species that will definitely have guaranteed suffering is bad.
"But we were discussing whether we should treat animal extinctions with the weight of an X-risk (i.e. a human extinction). For that, we need a little more than an assumption that the animal's lives are net positive." Definitely agreed for prioritising between things that more than the just the assumption of net positive is required. But research would be required to know that, and as far as I can tell there has been very little done (and there are ~8.7 million animal species).
I see, thanks - I can now find the section you were referring to. I don't think I agree that the full argument follows as made, but I haven't worked through the whole thing and I don't want this to become a thread discussing this one particular paper!
Agreed there are nuances re animals. However, outside philosophy I'm not sure how many people you'd find arguing against 'human extinction is bad even if humans are replaced by another species'!
My bad - I meant primarily the second paragraph (referring to how animal extinction was valued, given the lack of discussion around this; I had it more general, then decided to specify the paragraph... and picked the wrong one!). Agreed with your response here; I will edit the original.
I think there should be more discussion of animal x-risk and/or animal longtermism within EA. I definitely care more about human extinction than animal extinction, and think human extinction risk should be a higher priority, but that does not mean animal extinction risk should not be considered at all. For example, considering both human and animal x-risk might change how much climate change is prioritised as an existential risk, and which interventions are worth focusing on for it (there are definitely more people focusing on climate change than on other x-risks outside of EA, but, as in the global poverty space, that does not mean the best interventions have been focused on).
I don't have a strong opinion that this should definitely be prioritised, but I think at least a few people should be researching it (to see whether it is important/tractable/neglected/etc.), and it should be discussed more than it currently is.
Interesting quick take, thanks for sharing!