Hi team. How active is the Center for Space Governance currently? What are your plans for the next two years, if any?
I asked in public because multiple people could benefit from the answer, and that's more efficient than multiple people asking the same question in private emails. Regardless, I don't care about the post's karma or what anyone thinks about my decision to ask publicly, except for the EA Forum staff or the EAG organizers.
For example, most members want to eat animals, and even if they know that it is wrong to eat those among them raised in cruel conditions, they will continue to do so.
I think that people continue eating animals because they're not aware of the cruel conditions in which many animals are raised, not because they like animal cruelty. Generally, when people are made aware of those cruel conditions, they oppose them. For example, a Data for Progress survey in 2022 found that 80% of respondents supported California's Farm Animal Confinement Initiative (Prop 12); ...
Thanks for sharing this!
I opt for strategy 2 (donating appreciated assets). The standard deduction is so high ($14,600 for single filers in 2024) that I would have to donate several times that amount for the benefit of itemizing to outweigh giving up the standard deduction. Taking itemized deductions also involves a lot of paperwork. By donating assets, at least I get to realize the value of the assets for charitable purposes without paying capital gains taxes.
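To make that break-even concrete, here's a rough sketch (the marginal tax rate is a made-up example, and it assumes no other itemized deductions):

```python
# Rough sketch: when does itemizing a cash donation beat the standard
# deduction? Assumes no other itemized deductions; the marginal rate
# below is hypothetical.
STANDARD_DEDUCTION = 14_600  # single filer, 2024
MARGINAL_RATE = 0.24         # hypothetical marginal tax bracket

def extra_tax_savings_from_itemizing(donation: float) -> float:
    """Tax saved by itemizing instead of taking the standard deduction."""
    itemized_excess = max(donation - STANDARD_DEDUCTION, 0)
    return itemized_excess * MARGINAL_RATE

# A donation at exactly the standard-deduction level saves nothing extra:
print(extra_tax_savings_from_itemizing(14_600))  # 0.0
# Only the amount above the threshold generates additional savings:
print(extra_tax_savings_from_itemizing(20_000))  # ~1296
```

Only the slice of the donation above the standard deduction produces any extra tax benefit, which is why the donation has to be several times the threshold before itemizing clearly wins.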
I've been thinking about the meat eater problem a lot lately, and while I think it's worth discussing, I've realized that poverty reduction isn't to blame for farmed animal suffering.
(Content note: dense math incoming)
Assume that humans' utility as a function of income is $u(y)=\frac{y^{1-\eta}}{1-\eta}$ (i.e. isoelastic utility with curvature parameter $\eta>0$), and the demand for meat is $m(y)\propto y^{\varepsilon}$, where $\varepsilon$ is the income elasticity of demand. Per Engel's law, $\varepsilon$ is typically between 0 and 1. As long as $\eta>1-\varepsilon$, at low incomes...
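A quick numerical sketch of those two functions (the parameter values here are illustrative only, since the argument doesn't pin them down):

```python
# Isoelastic utility u(y) = y**(1-eta) / (1-eta) and meat demand
# m(y) proportional to y**eps. Parameter values are illustrative only.
ETA = 1.5   # utility curvature
EPS = 0.6   # income elasticity of meat demand (0 < eps < 1)

def marginal_utility(y: float) -> float:
    return y ** (-ETA)           # u'(y) for isoelastic utility

def marginal_meat_demand(y: float) -> float:
    return EPS * y ** (EPS - 1)  # m'(y) with m(y) = y**eps

# At low incomes, marginal utility dwarfs the marginal increase in meat
# demand: the human benefit per extra dollar dominates.
for y in (1.0, 10.0, 100.0):
    print(y, marginal_utility(y) / marginal_meat_demand(y))
```

The ratio shrinks as income grows, which is the crux of the argument: a marginal dollar at very low incomes buys a lot of human welfare relative to the extra meat demand it induces.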
I feel like it's hypocritical for animal advocates and EAs from rich countries to blame poor countries for the suffering caused by factory farming.
I don't think this is what the meat-eater problem does. You could imagine a world in which the West is responsible for inventing the entire machinery of factory farming, or even running all the factory farms, and still believe that lifting additional people out of poverty would help the Western factory farmers sell more produce. It's not about blame, just about consequences.
I realise this isn't your main poin...
In your article, you write:
If we decide to intervene in poor people's lives, we should do so responsibly—ideally by shifting our power to them and being accountable for our actions.
Echoing other users' comments, what do you think about EA global health and development (GHD) orgs' attempts to empower beneficiaries of aid? I think that empowerment has come up in the EA context in two ways:
Something(!) needs to be done. Otherwise, it's just a mess for clarity and the communication of ideas.
Sounds like a good move - although I'm skeptical that it will achieve the escape velocity necessary to spin out of CEA's center of gravity. Shoot your shot!
Why did SBF only get 25 years when the prosecution called for 40-50 (and the sentencing guidelines call for 110)?
A post about the current status of the Future of Humanity Institute (FHI) and a post-mortem if it has shut down. Some users, including me, have speculated that FHI is dead, but an official confirmation of the org's status would count as a reliable source for Wikipedia purposes.
Further evidence: The 80,000 Hours website footer no longer mentions FHI. Until February 2023, the footer contained the following statement:
We're affiliated with the Future of Humanity Institute and the Global Priorities Institute at the University of Oxford.
By February 21, 2023, that statement was replaced with a paragraph simply stating that 80k is part of EV. The references to GPI, CEA, and GWWC were also removed:
Yeah, it looks like the FHI website's news section hasn't been updated since 2021. Nor are there any publications since 2021.
I didn't write the paper, but thank you for the comment, Prof. Ord! I appreciate your perspective.
I also personally am not sold on the biosphere having negative overall value. I think the immense number of sentient beings that spend large portions of their lives suffering makes it a real possibility, but I am not 100% sure that utilitarianism is true when it comes to balancing wild animal welfare and broader ecological health. I think that humanity needs to spend more effort figuring out what is ultimately of value, and because the ecological view has been...
Okay, so one thing I don't get about "common sense ethics" discourse in EA is, which common sense ethical norms prevail? Different people even in the same society have different attitudes about what's common sense.
For example, pretty much everyone agrees that theft and fraud in the service of a good cause - as in the FTX case - are immoral. But what about cases where the governing norms are ambiguous or changing? For example, in the United States, it's customary to tip at restaurants and for deliveries, but there isn't much consensus on when and ...
Imagine a product A with 0 CO2 but a huge animal suffering impact, B with huge CO2 but 0 suffering, and C with non-zero but tiny impact on both dimensions. Your weighting would favor C, while for any rational person either A or B (or both) would necessarily be preferable.
I think it's the other way around. Under a weighted product model (WPM), the overall impact of both A and B is zero because either component is zero, so the WPM favors A and B over C. Summing the climate and welfare components (with "reasonable" weights), by contrast, would make C the most favorable.
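A toy calculation makes the point concrete (all numbers and weights below are made up for illustration):

```python
# Toy comparison of a weighted product vs. a weighted sum over two harm
# dimensions (CO2, animal suffering). Lower scores = less harm.
# All impact numbers and weights are made up.
W_CO2, W_SUFFERING = 0.5, 0.5

def weighted_product(co2: float, suffering: float) -> float:
    return (co2 ** W_CO2) * (suffering ** W_SUFFERING)

def weighted_sum(co2: float, suffering: float) -> float:
    return W_CO2 * co2 + W_SUFFERING * suffering

products = {"A": (0, 100), "B": (100, 0), "C": (1, 1)}
for name, (co2, suf) in products.items():
    print(name, weighted_product(co2, suf), weighted_sum(co2, suf))
# Product model: A and B score 0 (one factor is zero), C scores 1.
# Sum model: A and B score 50, C scores 1 - so the sum favors C.
```

So a single zero component zeroes out the whole product, which is exactly why the product model, not the sum, ranks A and B ahead of C.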
How can the EA community better support neurodivergent community members who feel like they might make mistakes without realizing it?
As a person with a childhood autism diagnosis (at the time, Asperger's), I think this is very tricky territory. I agree that autistics are almost certainly more likely to make innocent-but-harmful mistakes in this context. But I'm a bit worried about overcorrection for that, for a few reasons:
Firstly, men in general (and presumably women to some degree also), autistic or otherwise, are already incredibly good at self-deception about the actions they take to get sex (source: basic common sense). So giving a particular subset of us more of an excus...
Returning to this thread because my Forum Wrapped says it's my most upvoted comment this year 😆
This makes me think of a Linkin Park song that was written specifically to address the cycle of valorization and demonization in the public sphere, particularly of celebrities:
We're building it up
To break it back down
We're building it up
To burn it down
We can't wait to burn it to the ground
You might say "the pendulum swings" between both extremes of this cycle.
I'm noticing a trend in "literary" online magazines in EA and adjacent movements, like Works in Progress and Asterisk. Were you inspired by these other magazines/websites? :3
The Center for New Liberalism's New Liberal Podcast (fka Neoliberal Podcast) covered the PEPFAR crisis in a November 10 episode.
A commenter on this thread said it should have been a top-level post rather than a QT. Throwing in my vote for this feature.
I recently saw a presentation with a diagram showing how committed EA funding dropped by almost half with the collapse of FTX, based on these data compiled by 80k in 2022. Open Phil at the time had a $22.5 billion endowment and FTX's founders were collectively worth $16.5 billion.
I think that this narrative gives off the impression that EA causes (especially global health and development) are more funding-constrained than they really are. 80k's data excludes philanthropists that often make donations ...
Great start, I'm looking forward to seeing how this software develops!
I noticed that the model estimates of cost-effectiveness for GHD/animal welfare and x-risk interventions are not directly comparable. Whereas the x-risk interventions are modeled as a stream of benefits that could be realized over the next 1,000 years (barring extinction), the distribution of cost-effectiveness for a GHD or animal welfare intervention is taken as given. Indeed:
...For interventions in global health and development we don't model impact internally, but instead stipulate the range of possi
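The contrast between the two modeling styles could be sketched like this (purely illustrative structure and numbers, not the tool's actual model):

```python
# Illustrative contrast between the two modeling styles (numbers made up).
import random

def xrisk_value_stream(annual_benefit: float, annual_extinction_risk: float,
                       years: int = 1000) -> float:
    """X-risk style: a benefit stream over `years`, cut short by extinction.

    Each year's benefit is weighted by the probability humanity has
    survived to that year.
    """
    survival, total = 1.0, 0.0
    for _ in range(years):
        survival *= 1 - annual_extinction_risk
        total += survival * annual_benefit
    return total

def ghd_value(low: float, high: float) -> float:
    """GHD style: cost-effectiveness stipulated as a given range."""
    return random.uniform(low, high)

print(xrisk_value_stream(1.0, 0.001))  # impact modeled internally
print(ghd_value(5.0, 50.0))            # impact stipulated up front
```

The first function derives its estimate from internal structure (benefit stream plus survival curve), while the second just samples from an assumed range, so uncertainty in one doesn't propagate into the other.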
Anyone can create a linkpost for an 80k episode. Though it might be extra convenient to have a way to automatically create a linkpost with a pre-filled summary of the linked page and a top-level comment with your thoughts.
Content warning: Israel/Palestine
Has there been research on what interventions are effective at facilitating dialogue between social groups in conflict?
I remember an article about how during the last Israel-Gaza flare-up, Israelis and Palestinians were using the audio chatroom app Clubhouse to share their experiences and perspectives. This was portrayed as a phenomenon that increased dialogue and empathy between the two groups. But how effective was it? Could it generalize to other ethnic/religious conflicts around the world?
Although focused on civil conflicts, Lauren Gilbert's shallow investigation explores some possible interventions in this space, including:
I think one reason you're getting downvoted is that many people in this community are non-religious (80% per the most recent EA survey). Many non-religious people don't appreciate being told "you should believe in god"; it's basically a microaggression to them. The body of your post is innocuous to me, but the title comes off as preachy IMO.
Thanks for the linkpost! Could you please add a summary of the article for those of us who can't access it? Also, you can convert this post into a linkpost by clicking on the link icon in the editor window.
Thanks for the responses, @Linch and @calebp!
There are several organizations that work on helping non-humans in the long-term future, such as Sentience Institute and Center on Long-Term Risk; do you think that their activities could be competitive with the typical grant applications that LTFF gets?
Also, in general, how do you folks decide how to prioritize between causes and how to compare projects?
Open Phil funds pro-housing advocacy, whose benefits are especially concentrated in areas like Berkeley, so these benefits will flow through to the EA and AIS communities as well.
Reason 3 (travel distances) includes local transit. As a New Yorker, I commute to work at least once a week, and I'm thankful that the subway gets me there in under 30 minutes. In the Bay Area, due to the company I work for, I'd be commuting for at least an hour from either San Francisco or Berkeley into San Jose in horrid rush-hour traffic (or a mix of BART and Uber, which, though slower, would be a more pleasant experience), or living in the South Bay itself, which does not have great transit options either.
How does the team weigh the interests of non-humans (such as animals, extraterrestrials, and digital sentience) relative to humans? What do you folks think of the value of interventions to help non-humans in the long-term future specifically relative to that of interventions to reduce x-risk?
As a Scorpio, I concur that the Taurus emoji does not lack practical uses on social media apps 😤
Update August 2023: I've discovered China Labor Watch, a 501(c)(3) organization that investigates working conditions in Chinese manufacturing companies, educates workers on their labor rights, and "engages in dialogues" with the companies responsible for those conditions. They've exposed horrid working conditions - including sexual harassment and exposure to toxic chemicals - at the companies that make products for Apple, Mattel, and others.
You can donate to CLW via PayPal Giving Fund here; as of the time of writing, all transaction fees are covered by P...
Fertility rate may be important, but to me it's not worth restricting people's personal choices (directly or indirectly) to raise it. A lot of socially regressive ideas have been justified in the name of "raising the fertility rate" – for example, the rhetoric that gay acceptance would lead to fewer babies (as if gay people could simply "choose to be straight" and have babies the straight way). I think it's better to encourage people who are already interested in having kids to do so, through financial and other incentives.
Great article! Is it available on the website?
I noticed a few minor errors:
Cari Tuna is spelled as "Tuna Carry".
It is better to use italics for emphasis than quotation marks, as in this sentence:
You can safely develop these skills in your field of choice, “and” impact a lot of animals with your donations.
Especially around AI, there seem to be a bunch of key considerations that many people disagree about - so it's tricky to have a strong set of agreements to do evaluation around.
One could try to make the evaluation criteria worldview-agnostic – focusing on things like the quality of their research and workplace culture – and let individuals donate to the best orgs working on problems that are high priority to them.
Agreed, I suggest making this a linkpost.
Although I can barely understand the Japanese text, I appreciate this translation project and your efforts. Keep up the good work!
Relatedly, the auto-generated audio narration feature breaks down for non-English posts.
For example, in the Japanese post above, the narration skips everything except for the bits of English.
The handling of this Spanish post is slightly better: all of the text, being in Latin script, is included in the narration, but the words are spoken as if they're English words.
Well, if we allow complex numbers, a lottery over all negative utilities would result in a real geometric mean, but for a mixture of positive and negative utilities, we'd get complex numbers.
For example, consider lottery A with Pr(-5) = 0.5, Pr(-3) = 0.3, and Pr(-2) = 0.2. Then
$$G(A)=(-5)^{0.5}\cdot(-3)^{0.3}\cdot(-2)^{0.2}.$$
The (-1)'s factor out, giving us
$$G(A)=\left(5^{0.5}\cdot 3^{0.3}\cdot 2^{0.2}\right)\cdot(-1)^{0.5+0.3+0.2}=\left(5^{0.5}\cdot 3^{0.3}\cdot 2^{0.2}\right)\cdot(-1),$$
which is a negative number.
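Python's complex arithmetic (which uses the principal branch for negative bases with float exponents) reproduces this directly:

```python
# Geometric mean of lottery A via principal-branch complex powers.
# In Python 3, a negative base raised to a float exponent yields a
# complex number.
g = (-5) ** 0.5 * (-3) ** 0.3 * (-2) ** 0.2

# The phases pi*0.5, pi*0.3, and pi*0.2 sum to pi, so the result is
# (up to floating-point noise) a negative real number, around -3.57.
print(g)
```

Each factor contributes a phase of $\pi p_i$, and because the probabilities sum to 1 the phases total exactly $\pi$, collapsing the product back onto the negative real axis.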
Now consider lottery B where one of the utilities is positive - e.g. we have Pr(-5) = 0.5, Pr(3) = 0.3, and Pr(-2) =...