Quick takes
U.S. politics should be a main focus of US EAs right now. In the past year alone, every major EA cause area has been greatly hurt or bottlenecked by Trump. $40 billion in global health and international development funds was lost when USAID shut down, which some researchers project could lead to 14 million more deaths by 2030. Trump has signed an Executive Order that aims to block states from creating their own AI regulations, and has allowed our most powerful chips to be exported to China. He has withdrawn funding from, and U.S. support for, international governance bodies like the United Nations and the World Health Organization, thereby removing the world's most influential country from the collaborative efforts necessary to combat climate change and global pandemics. Most recently, the administration even changed nutritional guidelines to encourage Americans to eat more animal protein than ever, which could drive more demand for unethically produced animal products.

In addition to all of this, Trump has continuously acted undemocratically, brazenly breaking norms and laws meant to protect us from autocracy and dictatorship. This goes to show just how determinative U.S. politics is to our successes and our failures.
Hazo · 16h
I've seen AI-based animal communication technologies starting to be involved in some EA events and discussions (e.g. https://www.earthspecies.org/ ). I'm worried these initiatives may be actively negative, and I'm wondering whether anyone has articulated, or will articulate, a stronger defense of why they're good.

The high-level argument I've heard is that communicating with animals will make humans more empathetic towards them. But I don't see why this would be the most likely outcome:

1. Humans are already fairly empathetic to animals, especially around things we'd consider important welfare issues. We don't need a hen to articulately describe why she'd prefer not to have her beak cut off or be kept in a cage; I think that would be fairly obvious to most people.
2. Animals might become less sympathetic if we knew what they were saying. It seems possible that most of their thoughts and words are about food, sex, and ingroup/outgroup dynamics.

A similar argument is that communication would allow us to see that animals are actually intelligent, but again I don't see why this is necessarily the case. If their thoughts are things people would generally consider crude, it's possible people would become more confident in animals' lack of intelligence (despite them still deserving moral consideration).

More importantly, a large effect of being able to communicate with animals is that they'll become more useful to humans. If animals had political power or legal rights, this might open the door to mutually beneficial trade. But in reality they don't have these things, so it seems more likely that communication would allow humans to exploit these species more easily. The reason chickens, cows, and pigs are in such a bad state is that they're very useful to humans, and I'm worried animal communication technologies will subject more species to similar fates.
More good news! The Norwegian meat industry has announced that it will stop using fast-growing chicken breeds by the end of 2027. These breeds are a source of immense suffering due to the toll such rapid growth takes on the animals' bodies. Norway will be the first country to stop using them. More here: https://animainternational.org/blog/norway-ends-fast-growing-chickens
Someone should write a good, linkable online resource describing the concept of the long reflection. It's very strange that there isn't a simple post or webpage I can link to that gives a good, medium-depth description. Currently the best things are probably the EA Forum topic page and this list of quotes.
Linch · 4d
Here's my current four-point argument for AI risk/danger from misaligned AIs:

* We are on the path to creating intelligences capable of being better than humans at almost all economically and militarily relevant tasks.
* There are strong selection pressures and trends pushing these intelligences to become goal-seeking minds acting in the real world, rather than disembodied high-IQ pattern-matchers.
* Unlike traditional software, we have little ability to know or control what these goal-seeking minds will do; we can only exert directional influence.
* Minds much better than humans at seeking their goals, with goals different enough from our own, may end us all, either as a preventative measure or as a side effect.

Request for feedback: I'm curious whether there are points people think I'm critically missing, and/or ways these arguments would not be convincing to "normal people."