These are interesting ideas. It seems like there's still a lack of clarity about the magnitude of the effects of each issue on the nonhuman animal side, and therefore their relative cost-effectiveness. But as more research is done, say on ITNs in later stages of their lifecycle and the effects of tapeworms on pigs, maybe trades could be made based on these issues!
Wow, this is amazing! Thank you for putting in the time and effort to write it. I just ordered a copy for the Effective Altruism at Georgia Tech library. Can’t wait to read it!
I think it would be really useful for someone with a mathematical background to develop this further. The flexibility/dedication tradeoff seems about the same as the explore/exploit tradeoff, which I understand to have been studied a fair amount. I'd imagine there's a lot of theory that could be applied and would allow us to make better decisions as a community, especially now that lots of people are thinking about specializing or funding specialization. I bet we could avoid significant mistakes at a low cost by quantifying investments in each area and comparing them to theoretical ideals.
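To make the analogy concrete, here's a minimal, hypothetical sketch of the classic explore/exploit setup (a multi-armed bandit with an epsilon-greedy policy). The arm payoffs, epsilon value, and step count are all made up for illustration; this is a toy model, not a claim about how the community tradeoff actually quantifies:

```python
import random

def epsilon_greedy(true_means, epsilon=0.1, steps=5000, seed=0):
    """Epsilon-greedy bandit: with probability epsilon explore a random arm,
    otherwise exploit the arm with the best observed average reward."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms
    totals = [0.0] * n_arms
    total_reward = 0.0
    for _ in range(steps):
        if rng.random() < epsilon or 0 in counts:
            arm = rng.randrange(n_arms)  # explore (or try untried arms)
        else:
            arm = max(range(n_arms), key=lambda a: totals[a] / counts[a])  # exploit
        r = rng.gauss(true_means[arm], 1.0)  # noisy payoff from the chosen arm
        counts[arm] += 1
        totals[arm] += r
        total_reward += r
    return total_reward / steps

# Mostly exploiting the best-looking option beats never committing to one.
avg_greedy = epsilon_greedy([0.1, 0.5, 0.9], epsilon=0.1)
avg_random = epsilon_greedy([0.1, 0.5, 0.9], epsilon=1.0)
```

The interesting theoretical results here are about how much exploration is optimal and when to stop exploring, which is roughly the flexibility/dedication question.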
I quite like how you distinguish approaches at the individual level! I think focusing on which area they support makes sense. One lingering question I have is the relative value of a donor's donations vs. the value of their contribution toward building a culture of effective giving. I also think it's at least somewhat common for people to get into other areas of EA after starting out in effective giving.
Agreed on the intro fellowship point as well! Long-term it supports field-building since plenty of participants filter through, but it...
It's great that you're doing what you can on this front, despite all the challenges! I don't have specific nutritional advice, though maybe the writer of the first post you linked would.
You may have already considered this (some of your ideas pointed in this direction), but I think it's important to focus on suffering intensity, which you could measure in terms of suffering per calorie or suffering per pound of food. Doing so will minimize your overall suffering footprint. My understanding is that the differences in capacity for suffering ...
Great post, thanks for writing it! Healthy and active vegans sharing their stories helps change the narrative, bit by bit.
Destroying viruses in at-risk labs
Thanks to Garrett Ehinger for feedback and for writing the last paragraph.
Military conflict in or around biological research laboratories could substantially increase the risk of releasing a dangerous pathogen into the environment. Fighting and the mass movement of refugees combine with other risk factors to magnify the potential consequences. Garrett Ehinger elaborates on this issue in his excellent Chicago Tribune piece and proposes the creation of nonaggression treaties for biol...
One common topic in effective altruism introductory seminars is expected value, specifically the idea that we should usually maximize it. It’s intuitive for some participants, but others are less sure. Here I will offer a simple justification for expected value maximization using a variation of the veil of ignorance thought experiment. This line of thinking has helped make my introductory seminar participants (and me) more confident in the legitimacy of expected value maximization.
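As a toy warm-up (the welfare numbers are invented), note that behind the veil you are equally likely to be any affected person, so choosing the program that is best for you in expectation is the same as choosing the one with the highest average outcome, i.e., maximizing expected value:

```python
import statistics

# Hypothetical welfare outcomes for ten affected people under two programs.
program_a = [5, 5, 5, 5, 5, 5, 5, 5, 5, 5]   # certain, modest benefit
program_b = [0, 0, 0, 0, 0, 0, 0, 0, 0, 90]  # risky, occasionally huge benefit

# Behind the veil you are equally likely to be any of the ten people,
# so your expected personal welfare under a program is its mean outcome.
ev_a = statistics.mean(program_a)
ev_b = statistics.mean(program_b)

best = max([("A", ev_a), ("B", ev_b)], key=lambda t: t[1])[0]
```

Here a self-interested chooser behind the veil picks program B, even though nine of ten people get nothing, because its expected value (9) exceeds the sure thing (5).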
The thought experime...
I appreciate how this post adds dimension to community building, and I think the four examples you used are solid examples of each approach. I'm not sure what numbers I'd put on each area as current or ideal numbers, but I do have some other thoughts.
I think it's a little hard to distinguish between movement support and field building in many community building cases. When someone in a university group decides to earn to give instead of researching global priorities, does that put them in movement support instead of the field? To what ext...
I don't think that the development of sentience (the ability to experience positive and negative qualia) is necessary for an AI to pursue goals. I'm also not sure what it would look like for an AI to select its own interests. This may be due to my own lack of knowledge rather than a real lack of necessity or possibility though.
To answer your main question, some have theorized that self-preservation is a useful instrumental goal for all sufficiently intelligent agents. I recommend reading about instrumental convergence. Hope this helps!
Different group organizers have widely varying beliefs that affect what work they think is valuable. From certain perspectives, work that’s generally espoused by EA orgs looks quite negative. For example, someone may believe that the harms of global health work through the meat eater problem dominate the benefits of helping reduce human suffering and saving lives. Someone may believe that the expected value of the future with humans is negative, and as such, biosecurity work that reduces human extinction risk is net-negative. I...
Fantastic post, thank you for writing it! One challenge I have with encouraging effective giving, especially with a broader non-EA crowd, is that global health and development will probably be the main thing people end up giving to. I currently don't support that work because of the meat eater problem. If you have any thoughts on dealing with this, I'd love to hear them.
Some arguments to support global health work despite the meat eater problem that I see are:
"People in low-income countries that are being helped with GiveWell-style interv...
I was talking with a new university group organizer recently, and the topic of heavy-tailed impact came up. Here I’ll briefly explain what heavy tails are and what I think they imply about university group community building.
In certain areas, the (vast) majority of the total effect comes from a (small) minority of the causes. In venture capital, for example, a fund will invest in a portfolio of companies. Most are expected to fail completely. A small portion will survive but not change significantly in value...
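A quick simulation (made-up numbers, with Pareto draws standing in for investment returns) illustrates how a small minority of outcomes can account for most of the total:

```python
import random

rng = random.Random(42)

# Simulate hypothetical returns for 1000 "investments" drawn from a
# heavy-tailed Pareto distribution (shape alpha = 1.1, chosen arbitrarily).
alpha = 1.1
returns = sorted((rng.paretovariate(alpha) for _ in range(1000)), reverse=True)

# Share of the total return captured by the top 10% of investments.
top_decile_share = sum(returns[:100]) / sum(returns)
```

With a shape parameter this close to 1, the top decile typically captures well over half of the total, which is the pattern venture capitalists plan around.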
My current belief in the sentience of most nonhuman animals comes partly from the fact that they were subjected to many of the same evolutionary forces that gave consciousness to humans. Other animals also share many brain structures with us. ChatGPT never went through that process and doesn't have the same structures, so I wouldn't really expect it to be conscious. I guess your post looks at the outputs of conscious beings, which are very similar to what ChatGPT produces, whereas I'm partly looking at the inputs that we know have created...
I’ve addressed the point on costs in other commentary, so we may just disagree there!
Great point! A historian or archivist could take on this role. Maybe CEA could hire one? I’d say it fits within their mission “to nurture a community of people who are thinking carefully about the world’s biggest problems and taking impactful action to solve them.”
Definitely agree with Chris here! Worst case, you create useful material for someone else who tackles it down the line; best case, you write the whole thing yourself.
I think opportunity cost is well worth mentioning, but I'm not sure it's as high as you believe it to be.
Choosing someone who has been around a while is optional. The value of having an experienced community member do it is built-in trust, access, and understanding. The costs are the writer's time (though that cost is decreasing as more people start writing about EA professionally) and the time of those being interviewed. I would also note that while there's lots of work for technical people in EA, writers in the community ma...
I agree with this last point on underlying motives. EA is one direction for purpose-seeking people to go in, but not everyone will choose it. This program could also look vaguely religious, which is generally best avoided.
I would also question whether a focused program is the best way to develop people with EA motivation. I think sometimes people go through the intro program and find purpose in it because...
I think stipends for intro fellows are an idea worth considering, but I have real concerns at the moment, especially since Penn’s write-up about it hasn’t come out yet.
1.1 “Makes Fellowships more accessible to people who are not wealthy, potentially leading to a more diverse community”
I think there’s probably some truth to this, but honestly, I don’t think an amount we could give every fellow would allow anyone to meaningfully decrease the outside work they do. I’d support packages for those who wouldn’t be able to participate with...
Thanks Nathan!