Week of Sunday, 7 April 2024



Quick takes

Why are April Fools' jokes still on the front page? On April 1st you expect to see April Fools' posts and know to be extra cautious when reading strange things online. But April 1st was 13 days ago, and two April Fools' posts are still on the front page. I think they should be clearly labelled as April Fools' jokes so people can more easily differentiate EA weird stuff from EA weird stuff that's a joke. Sure, if you check the details you'll see that things don't add up, but we all know most people only read the title or the first few paragraphs.
Given that effective altruism is "a project that aims to find the best ways to help others, and put them into practice"[1], it seems surprisingly rare to me that people actually do the hard work of:

1. (Systematically) exploring cause areas
2. Writing up their (working hypothesis of a) ranked or tiered list, with good reasoning transparency
3. Sharing their list and reasons publicly.[2]

The lists I can think of that do this best are those by 80,000 Hours, Open Philanthropy, and CEARCH.

Related things I appreciate, but that aren't quite what I'm envisioning:

* Tools and models like those by Rethink Priorities and Mercy For Animals, though they're less focused on explaining specific prioritisation decisions.
* Longlists of causes by Nuno Sempere and CEARCH, though these don't provide ratings, rankings, and reasoning.
* Various posts pitching a single cause area and giving reasons to consider it a top priority, without integrating it into an individual's or organisation's broader prioritisation process.

There are also some lists of cause area priorities from outside effective altruism / the importance, neglectedness, tractability framework, although these often lack any explicit methodology, e.g. from the UN, the World Economic Forum, or the Copenhagen Consensus.

If you know of other public writeups and explanations of ranked lists, please share them in the comments![3]

1. ^ Of course, this is only one definition. But my impression is that many definitions share some focus on cause prioritisation, or first working out what doing the most good actually means.
2. ^ I'm a hypocrite of course, because my own thoughts on cause prioritisation are scattered across various docs, spreadsheets, and long-forgotten corners of my brain, and are not at all systematic or thorough. I think I roughly:
   - Came at effective altruism with a hypothesis of a top cause area (ending factory farming) based on arbitrary and contingent factors from my youth/adolescence,
   - Had that hypothesis worn down by various information and arguments I encountered, and changed my views on the top causes,
   - Never went back and did a systematic cause prioritisation exercise from first principles (e.g. evaluating cause candidates from a longlist that includes 'not-core-EA™-cause-areas', or based on criteria other than ITN).
   I suspect this is pretty common. I also worry people are deferring too much on what is perhaps the most fundamental question of the EA project.
3. ^ Rough and informal explanations welcome. I'd especially welcome suggestions that come from a different methodology or set of worldviews and assumptions to 80k's and Open Phil's. I ask partly because I'd like to be able to share multiple different perspectives when I introduce people to cause prioritisation, to avoid creating pressure to defer to a single list.
Please advertise applications at least 4 weeks before closing! (more for fellowships!) I've seen a lot of cool job postings, fellowships, and other opportunities whose application announcements go up on the forum or on 80k only ~10 days before closing. Because many EA roles or opportunities get cross-posted to other platforms or newsletters, and there's a built-in lag between the original post and the secondary platform, this is especially relevant to EA. For fellowships or similar training programs, where so much work has gone into planning and designing the program ahead of time, I would really encourage organisers to open applications ~2 months before closing. Keep in mind that most forum posts don't stay on the frontpage very long, so "posting something on the forum" does not equal "the EA community has seen this". As someone who runs a local group and a newsletter, I find that opportunities with short application windows are almost always missed by my community: there isn't enough turnaround time between when we see the original post, when the next newsletter goes out, and when community members would need to apply.
I'm intrigued where people stand on the threshold at which farmed animal lives might become net positive. I'm going to share a few scenarios I'm very unsure about, and I'd love to hear thoughts or be pointed towards research on this.

1. Animals kept in homesteads in rural Uganda, where I live. Often they stay inside with the family at night, then are let out during the day to roam free around the farm or community. The animals seem pretty darn happy most of the time for what it's worth, playing and gallivanting around. Downsides include poor veterinary care, so parasites and sickness are sometimes pretty bad, and often pretty rough transport and slaughter methods. (My intuition: net positive)
2. Grass-fed sheep in New Zealand, my birth country. They get good medical care, are well fed on grass, and usually have large roaming areas. (Intuition: net positive)
3. Grass-fed dairy cows in New Zealand. They roam fairly freely and have very good vet care, but have their calves taken away at birth, have constantly uncomfortably swollen udders, and are milked at least twice daily. (Intuition: very unsure)
4. Free-range pigs. Similar to the above, except the space is often smaller, though they do get little houses. Pigs are far more intelligent than cows or sheep and might have intellectual needs that aren't being met. (Intuition: uncertain)

Obviously these kinds of cases make up a small proportion of farmed animals worldwide, with the predominant situation, factory-farmed animals, likely involving net negative lives. I know that animals having net positive lives is far from justifying farming animals on its own, but it seems important for my own decision making and for standing on solid ground when talking with others about animal suffering. Thanks for your input.
Could it be more important to improve human values than to make sure AI is aligned?

Consider the following (which is almost definitely oversimplified):

                          ALIGNED AI      MISALIGNED AI
  HUMANITY GOOD VALUES    UTOPIA          EXTINCTION
  HUMANITY NEUTRAL VALUES NEUTRAL WORLD   EXTINCTION
  HUMANITY BAD VALUES     DYSTOPIA        EXTINCTION

For clarity, let's assume dystopia is worse than extinction. This could be a scenario where factory farming expands to an incredibly large scale with the aid of AI, or a bad AI-powered regime takes over the world. Let's also assume a neutral world is equivalent to extinction.

The table shows that aligning AI can be good, bad, or neutral: the value of alignment depends entirely on humanity's values. Improving humanity's values, however, is always good.

The only clear case where aligning AI beats improving humanity's values is if there isn't scope to improve our values further. An ambiguous case is whenever humanity has positive values, in which case both improving values and aligning AI are good options and it isn't immediately clear to me which wins.

The key takeaway is that improving values is robustly good, whereas aligning AI isn't: alignment is bad if we have negative values. I would guess that we currently have pretty bad values, given how we treat non-human animals, and alignment is therefore arguably undesirable. In this simple model, improving values would become the overwhelmingly important mission. Or perhaps ensuring that powerful AI doesn't end up in the hands of bad actors becomes overwhelmingly important (again, rather than alignment).

This analysis doesn't consider the moral value of AI itself. It also assumes that misaligned AI necessarily leads to extinction, which may not be accurate (perhaps it can also lead to dystopian outcomes?). I doubt this is a novel argument, but what do y'all think?
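To make the comparison concrete, here is a minimal sketch in Python. The utility numbers are assumptions, not from the quick take; they only encode the ordering dystopia < extinction = neutral world < utopia. The sketch compares the marginal value of aligning AI (holding values fixed) with the marginal value of improving values (holding the AI outcome fixed).

```python
# Illustrative payoff matrix for the quick take's argument.
# NOTE: the numeric utilities are assumptions chosen only to respect the
# ordering dystopia < extinction = neutral world < utopia.

PAYOFFS = {
    # (humanity's values, AI outcome): utility
    ("good",    "aligned"):    1.0,   # utopia
    ("neutral", "aligned"):    0.0,   # neutral world (assumed equal to extinction)
    ("bad",     "aligned"):   -1.0,   # dystopia (assumed worse than extinction)
    ("good",    "misaligned"): 0.0,   # extinction
    ("neutral", "misaligned"): 0.0,   # extinction
    ("bad",     "misaligned"): 0.0,   # extinction
}

def value_of_alignment(values: str) -> float:
    """Change in utility from aligning AI, holding humanity's values fixed."""
    return PAYOFFS[(values, "aligned")] - PAYOFFS[(values, "misaligned")]

def value_of_improving_values(worse: str, better: str, ai: str) -> float:
    """Change in utility from moving to better values, holding the AI outcome fixed."""
    return PAYOFFS[(better, ai)] - PAYOFFS[(worse, ai)]

if __name__ == "__main__":
    for v in ("good", "neutral", "bad"):
        print(f"aligning AI given {v} values: {value_of_alignment(v):+.1f}")
    for ai in ("aligned", "misaligned"):
        print(f"bad -> neutral values given {ai} AI: "
              f"{value_of_improving_values('bad', 'neutral', ai):+.1f}")
        print(f"neutral -> good values given {ai} AI: "
              f"{value_of_improving_values('neutral', 'good', ai):+.1f}")
```

Under these assumed numbers, alignment is +1, 0, or -1 depending on humanity's values, while each step of improving values is never negative whether or not the AI is aligned, which is the "robustly good" claim in the argument above.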