All of BrownHairedEevee's Comments + Replies

Well, if we allow complex numbers, a lottery over all negative utilities would result in a real geometric mean, but for a mixture of positive and negative utilities, we'd get imaginary numbers.

For example, consider lottery $L$ with Pr(-5) = 0.5, Pr(-3) = 0.3, and Pr(-2) = 0.2. Then

$GM(L) = (-5)^{0.5} \cdot (-3)^{0.3} \cdot (-2)^{0.2}$.

The (-1)'s factor out, giving us

$GM(L) = (-1)^{0.5 + 0.3 + 0.2} \cdot 5^{0.5} \cdot 3^{0.3} \cdot 2^{0.2} = -\left(5^{0.5} \cdot 3^{0.3} \cdot 2^{0.2}\right) \approx -3.57$,

which is a negative number.

Now consider lottery $L'$ where one of the utilities is positive - e.g. we have Pr(-5) = 0.5, Pr(3) = 0.3, and Pr(-2) =... (read more)
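A minimal Python sketch of the computation described above (not part of the original comment; `geometric_mean` is a made-up helper, and the principal branch of the complex power is assumed):

```python
def geometric_mean(lottery):
    """Probability-weighted geometric mean of a lottery's utilities,
    using the principal branch of the complex power so that negative
    utilities are permitted."""
    result = complex(1, 0)
    for utility, prob in lottery.items():
        result *= complex(utility) ** prob
    return result

# Lottery L: all utilities negative, so the (-1)'s multiply to
# (-1)^(0.5+0.3+0.2) = -1 and the mean is real and negative.
L = {-5: 0.5, -3: 0.3, -2: 0.2}
print(geometric_mean(L))        # ~ -3.571 (plus float rounding residue)

# Lottery L': one utility is positive, so the phases no longer sum
# to pi and the result has a genuinely imaginary component.
L_prime = {-5: 0.5, 3: 0.3, -2: 0.2}
print(geometric_mean(L_prime))  # ~ -2.099 + 2.889j
```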

Hi team. How active is the Center for Space Governance currently? What are your plans for the next two years, if any?

I asked in public because I believe that multiple people could benefit from the answer and that's more efficient than multiple people asking the same question in private emails. Regardless, I don't care about the post's karma or what anyone thinks about my decision to ask publicly except for the EA Forum staff or the EAG organizers.

For example, most members want to eat animals, and even if they know that it is wrong to eat those among them raised in cruel conditions, they will continue to do so.

I think that people continue eating animals because they're not aware of the cruel conditions in which many animals are raised, not because they like animal cruelty. Generally, when people are made aware of those cruel conditions, they oppose them. For example, a Data for Progress survey in 2022 found that 80% of respondents supported California's Farm Animal Confinement Initiative (Prop 12); ... (read more)

5
BrianK
8d
Thanks very much for the comment. As you can imagine, given my work, most of my friends and family know a lot about factory farming, and many continue to eat animals, some on a daily basis. That includes plenty of my peers who identify as EAs. I don't see a compelling reason to think colonists won't salivate at a rib-eye or chicken wing too and act on that desire, if they can. Knowing about a problem isn't usually enough to override our humanity. That isn't to say some people don't need to be educated, but this isn't just a knowing problem; it's a doing one.

Good point. My total state and local tax bill was higher than $10k, so I would have been able to take a $10k SALT deduction, meaning I would have had to donate at least $3,850 in order to itemize. I donated about $2.5k of my own money (plus $2.5k in employer matches), so it made sense to take the standard deduction.

The standard deduction in 2024 is $14,600, so I'd have to claim $10k SALT deduction + at least $4.6k charitable deductions or other itemized deductions in order to benefit from itemizing. I could easily do that. I'm not looking forward to itemiz... (read more)
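A rough sketch of the arithmetic in these two comments (illustration only, not tax advice; the figures are the 2024 single-filer numbers cited above, and `itemizing_beats_standard` is a made-up helper that ignores other itemizable expenses):

```python
# 2024 single-filer figures cited in the comments above.
STANDARD_DEDUCTION = 14_600
SALT_CAP = 10_000

def itemizing_beats_standard(state_local_taxes, charitable_donations):
    """True if itemizing (capped SALT + donations) exceeds the
    standard deduction; other itemizable expenses are ignored."""
    itemized = min(state_local_taxes, SALT_CAP) + charitable_donations
    return itemized > STANDARD_DEDUCTION

print(itemizing_beats_standard(12_000, 2_500))  # False: 12,500 < 14,600
print(itemizing_beats_standard(12_000, 5_000))  # True: 15,000 > 14,600
```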

Thanks for sharing this!

I opt for strategy 2 (donating appreciated assets). The standard deduction is so high ($14,600 in 2024) that I would have to donate several times that amount for the benefit of itemizing to outweigh giving up the standard deduction. Taking itemized deductions also involves a lot of paperwork. By donating assets, at least I get to realize the value of the assets for charitable purposes without paying capital gains taxes.

4
Jason
12d
One can itemize state/local taxes up to a cap, and deduct mortgage interest. So for many people, those easy deductions get them most of the way to the standard already. (I currently lump every two years -- might move to three with one year savings, one year real-time, and one year very attractive cash advances and/or intro credit card offers. Would only consider that for someone with strong job security, a comparably-paid spouse, etc.)

I've been thinking about the meat eater problem a lot lately, and while I think it's worth discussing, I've realized that poverty reduction isn't to blame for farmed animal suffering.

(Content note: dense math incoming)

Assume that humans' utility as a function of income is $u(y) = \ln(y)$ (i.e. isoelastic utility with $\eta = 1$), and the demand for meat is $m(y) = y^\varepsilon$ where $\varepsilon$ is the income elasticity of demand. Per Engel's law, $\varepsilon$ is typically between 0 and 1. As long as $\varepsilon < 1$ at low incomes... (read more)
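A small numerical sketch of this argument, under the assumed functional forms $u(y) = \ln(y)$ and $m(y) = y^\varepsilon$ (the value $\varepsilon = 0.6$, the incomes, and the transfer size are illustrative, not from the original comment):

```python
import math

def utility(y):
    # Isoelastic utility with eta = 1 reduces to log utility.
    return math.log(y)

def meat_demand(y, epsilon=0.6):
    # Constant income elasticity of demand; Engel's law suggests
    # 0 < epsilon < 1 for food.
    return y ** epsilon

def transfer_effects(y, amount, epsilon=0.6):
    """Utility gained and extra meat demanded when income rises
    from y to y + amount."""
    du = utility(y + amount) - utility(y)
    dm = meat_demand(y + amount, epsilon) - meat_demand(y, epsilon)
    return du, dm

# The poorer the recipient, the more utility each marginal dollar
# buys per unit of extra meat demand.
for income in (500, 5_000, 50_000):
    du, dm = transfer_effects(income, 100)
    print(f"income {income:>6}: utility gain {du:.4f}, "
          f"extra meat demand {dm:.4f}, meat per util {dm / du:.1f}")
```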

I feel like it's hypocritical for animal advocates and EAs from rich countries to blame poor countries for the suffering caused by factory farming.

I don't think this is what the meat-eater problem does. You could imagine a world in which the West is responsible for inventing the entire machinery of factory farming, or even running all the factory farms, and still believe that lifting additional people out of poverty would help the Western factory farmers sell more produce. It's not about blame, just about consequences.

I realise this isn't your main poin... (read more)

In your article, you write:

If we decide to intervene in poor people's lives, we should do so responsibly—ideally by shifting our power to them and being accountable for our actions.

Echoing other users' comments, what do you think about EA global health and development (GHD) orgs' attempts to empower beneficiaries of aid? I think that empowerment has come up in the EA context in two ways:

  • Letting beneficiaries make their own decisions: GiveDirectly is a longstanding charity recommendation in the GHD space, and it empowers people by giving them cash and the f... (read more)
2
Jason
17d
Also, who do you understand to be the primary "beneficiaries" here -- toddlers, families, communities, nations, all of the above? IIRC, and based on GiveWell rationales, the bulk of the benefit from its recommended charities comes from saving the lives of under-5s. If one thinks the beneficiaries are toddlers, how does one "shift[] our power to them"? Does your answer have implications for domestic aid programs, under which (e.g.,) we pay thousands per kid per year for healthcare with no real power-shifting option?

How about making the April Fool's Day tag visible on the forum frontpage, like so?

8
yanni kyriacos
17d
I think the hilarity is in the confusion / click bait. Your idea would rob us of this! I think the best course of action is for anyone with a serious post to wait until April 3 :|

Something(!) needs to be done. Otherwise, it's just a mess for clarity and the communication of ideas. 

Sounds like a good move - although I'm skeptical that CEA will achieve the escape velocity necessary to spin out of CEA's center of gravity. Shoot your shot!

Why did SBF only get 25 years when the prosecution called for 40-50 (and the sentencing guidelines call for 110)?

6
Jason
19d
Pretty much everyone in the system agrees that the Guidelines tend to be too harsh for economic offenses, especially as the loss amount (as computed under the Guidelines) becomes the main driver of the Guidelines figure.  As for the rest, I haven't seen a transcript of Judge Kaplan's sentencing remarks. The federal system gives very broad discretion to the sentencing judge (unless there is a mandatory minimum, which there wasn't here). So while we can conclude that Judge Kaplan believed a 25-year sentence was "sufficient, but not greater than necessary" to achieve the purposes of sentencing, I don't know why he believed that.

April Fools' Day is in 11 days! Get yer jokes ready 🎶

A post about the current status of the Future of Humanity Institute (FHI) and a post-mortem if it has shut down. Some users including me have speculated that FHI is dead, but an official confirmation of the org's status would count as a reliable source for Wikipedia purposes.

Further evidence: The 80,000 Hours website footer no longer mentions FHI. Until February 2023, the footer contained the following statement:

We're affiliated with the Future of Humanity Institute and the Global Priorities Institute at the University of Oxford.

Screenshot of 80,000 Hours website as of February 1, 2023

By February 21, that statement was replaced with a paragraph simply stating that 80k is part of EV. The references to GPI, CEA and GWWC were also removed:

Screenshot of 80,000 Hours website as of February 21, 2023

Yeah, it looks like the FHI website's news section hasn't been updated since 2021. Nor are there any publications since 2021.

Hi, no, I'm not the author of the paper. I edited the top of the linkpost to indicate that.

I didn't write the paper, but thank you for the comment, Prof. Ord! I appreciate your perspective.

I also personally am not sold on the biosphere having negative overall value. I think the immense number of sentient beings that spend large portions of their lives suffering makes it a real possibility, but I am not 100% sure that utilitarianism is true when it comes to balancing wild animal welfare and broader ecological health. I think that humanity needs to spend more effort figuring out what is ultimately of value, and because the ecological view has been... (read more)

4
Toby_Ord
2mo
Yes, I completely agree. When I was exploring questions about wild animal welfare almost 20 years ago, I was very surprised to see how the idea of thinking about individual animals' lives was so foreign to the field.

Okay, so one thing I don't get about "common sense ethics" discourse in EA is, which common sense ethical norms prevail? Different people even in the same society have different attitudes about what's common sense.

For example, pretty much everyone agrees that theft and fraud in the service of a good cause - as in the FTX case - is immoral. But what about cases where the governing norms are ambiguous or changing? For example, in the United States, it's considered customary to tip at restaurants and for deliveries, but there isn't much consensus on when and ... (read more)

What time of day are the applications for the EA career development program due?

Imagine a product A with 0 CO2 but a huge animal suffering impact, B with huge CO2 but 0 suffering, and C with non-zero but tiny impact on both dimensions. Your weighting would favor C, while for any rational person either A or B (or both) would necessarily be preferable.

I think it's the other way around. Under a weighted product model (WPM), the overall impact of both A and B is zero because either component is zero, so the WPM favors A and B over C. Whereas summing the climate and welfare components (with "reasonable" weights) would result in C being the most favorable.
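A toy illustration of the two aggregation rules (the product impacts and the equal weights are made up for this example):

```python
# (CO2 impact, animal suffering impact) per product; lower is better.
products = {
    "A": (0.0, 100.0),   # no CO2, huge suffering
    "B": (100.0, 0.0),   # huge CO2, no suffering
    "C": (1.0, 1.0),     # tiny impact on both
}

def weighted_product(co2, suffering, w1=0.5, w2=0.5):
    # Geometric aggregation: a zero in either dimension zeroes the score.
    return (co2 ** w1) * (suffering ** w2)

def weighted_sum(co2, suffering, w1=0.5, w2=0.5):
    return w1 * co2 + w2 * suffering

for name, (co2, suf) in products.items():
    print(f"{name}: product={weighted_product(co2, suf):.1f}, "
          f"sum={weighted_sum(co2, suf):.1f}")
# The product rule scores A and B at 0 (best), while the sum rule
# scores C lowest -- matching the comment above.
```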

1
FlorianH
7h
You were of course right; I've now fixed the A, B & C example to make them consistent. Thanks!

How can the EA community better support neurodivergent community members who feel like they might make mistakes without realizing it?

As a person with an autism (at the time "Asperger's") diagnosis from childhood, I think this is very tricky territory. I agree that autistics are almost certainly more likely to make innocent-but-harmful mistakes in this context. But I'm a bit worried about overcorrection for that for a few reasons: 

Firstly, men in general (and presumably women to some degree also), autistic or otherwise, are already incredibly good at self-deception about the actions they take to get sex (source: basic commonsense). So giving a particular subset of us more of an excus... (read more)

Returning to this thread because my Forum Wrapped says it's my most upvoted comment this year 😆

This makes me think of a Linkin Park song that was written specifically to address the cycle of valorization and demonization in the public sphere, particularly of celebrities:

We're building it up
To break it back down
We're building it up
To burn it down
We can't wait to burn it to the ground

You might say "the pendulum swings" between both extremes of this cycle.

I'm noticing a trend in "literary" online magazines in EA and adjacent movements, like Works in Progress and Asterisk. Were you inspired by these other magazines/websites? :3

3
xander_balwit
4mo
Indeed so! We admire the depth and scope of their writing, not to mention their beautiful visuals. In our extended announcement on our website, we credit them as a serious source of inspiration. Saloni Dattani of Works in Progress is also an advisor for Asimov Press.  

The Center for New Liberalism's New Liberal Podcast (fka Neoliberal Podcast) covered the PEPFAR crisis in a November 10 episode.

This article is behind a paywall; do you have a summary that we can read?

5
Ian Turner
4mo
Here are a couple of paywall-free archives (though if you get more value than the subscription price, you should probably pay?). https://web.archive.org/web/20231208170421/https://www.insidephilanthropy.com/home/2023/12/7/six-reasons-why-effective-altruism-isnt-going-anywhere https://archive.is/dKMIG
1
[comment deleted]
4mo

A commenter on this thread said it should have been a top-level post rather than a QT. Throwing in my vote for this feature.

Questioning the new "EA is funding constrained" narrative

I recently saw a presentation with a diagram showing how committed EA funding dropped by almost half with the collapse of FTX, based on these data compiled by 80k in 2022. Open Phil at the time had a $22.5 billion endowment and FTX's founders were collectively worth $16.5 billion.

I think that this narrative gives off the impression that EA causes (especially global health and development) are more funding-constrained than they really are. 80k's data excludes philanthropists that often make donations ... (read more)

Great start, I'm looking forward to seeing how this software develops!

I noticed that the model estimates of cost-effectiveness for GHD/animal welfare and x-risk interventions are not directly comparable. Whereas the x-risk interventions are modeled as a stream of benefits that could be realized over the next 1,000 years (barring extinction), the distribution of cost-effectiveness for a GHD or animal welfare intervention is taken as given. Indeed:

For interventions in global health and development we don't model impact internally, but instead stipulate the range of possi... (read more)
5
Derek Shiller
5mo
Thanks for this insightful comment. We've focused on capturing the sorts of value traditionally ascribed to each kind of intervention. For existential risk mitigation, this is additional life years lived. For animal welfare interventions, this is suffering averted.

You're right that there are surely other effects of these interventions. Existential risk mitigation and GHD interventions will have an effect on animals, for instance. Animal welfare interventions might contribute to moral circle expansion. Including these side effects is not just difficult, it adds a significant amount of uncertainty. The side effects we choose to model may determine the ultimate value we get out. The way we choose to model these side effects will add a lot of noise that makes the upshots of the model much more sensitive to our particular choices.

This doesn't mean that we think it's okay to ignore these possible effects. Instead, we conceive of the model as a starting point for further thought, not a conclusive verdict on relative value assessments.

To some extent, these sorts of considerations can be included via existing parameters. There is a parameter to determine how long the intervention's effects will last. I've been thinking of this as the length of time before the same policies would have been adopted, but you might think of this as the time at which companies renege on their commitments. We can also set a range of percentages of the population affected that represents the failure to follow through.
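A hypothetical sketch of how the two parameters mentioned here (duration before companies renege, and the share of the affected population that actually benefits) might enter a simple Monte Carlo estimate; all names, ranges, and scale numbers below are invented for illustration, not taken from the actual model:

```python
import random

def expected_intervention_value(n_samples=100_000):
    """Monte Carlo sketch: years-until-renege and follow-through
    fraction are the two uncertain parameters; the scale and
    per-animal welfare gain are fixed assumptions."""
    total = 0.0
    for _ in range(n_samples):
        years_before_renege = random.uniform(5, 30)  # effect duration
        follow_through = random.uniform(0.4, 0.9)    # share of commitments honored
        animals_per_year = 1_000_000                 # assumed scale
        welfare_gain_per_animal = 0.1                # assumed per animal-year
        total += (years_before_renege * follow_through
                  * animals_per_year * welfare_gain_per_animal)
    return total / n_samples

print(f"{expected_intervention_value():,.0f} welfare-adjusted animal-years")
```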

There's now a Netflix adaptation of it! And the ending reminded me of FTX 😜

Anyone can create a linkpost for an 80k episode. Though it might be extra convenient to have a way to automatically create a linkpost with a pre-filled summary of the linked page and a top-level comment with your thoughts.

Content warning: Israel/Palestine

Has there been research on what interventions are effective at facilitating dialogue between social groups in conflict?

I remember an article about how during the last Israel-Gaza flare-up, Israelis and Palestinians were using the audio chatroom app Clubhouse to share their experiences and perspectives. This was portrayed as a phenomenon that increased dialogue and empathy between the two groups. But how effective was it? Could it generalize to other ethnic/religious conflicts around the world?

2
Jamie_Harris
6mo
There's psychological research finding that both "extended contact" interventions and interventions that "encourage participants to rethink group boundaries or to prioritize common identities shared with specific outgroups" can reduce prejudice, so I can imagine the Clubhouse stuff working (and being cheap + scalable). https://forum.effectivealtruism.org/posts/re6FsKPgbFgZ5QeJj/effective-strategies-for-changing-public-opinion-a#Prejudice_reduction_strategies
8
Julia_Wise
6mo
Copenhagen Consensus has some older work on what might be cost-effective in preventing armed conflicts, like this paper.

Although focused on civil conflicts, Lauren Gilbert's shallow investigation explores some possible interventions in this space, including:

  • Disarmament, Demobilization, and Reintegration (DDR) Programs 
  • Community-Driven Development
  • Cognitive Behavioral Therapy
  • Cash Transfers and/or Job Training
  • Alternative Dispute Resolution (ADR)
  • Contact Interventions and Mass Media
  • Investigative Journalism
  • Mediation and Diplomacy
4
EdoArad
6mo
Joshua Greene recently came to Israel to explore extending their work aimed at bridging the Republican-Democrat divide in the US to the Israel-Palestine conflict. A 2020 video here.

I think one reason why you're getting downvoted is that many people in this community are non-religious (80% per the most recent EA survey). Many non-religious people don't appreciate being told "you should believe in god"; it's basically a microaggression to them. The body of your post is innocuous to me, but the title comes off as preachy IMO.

1
Daniel Birnbaum
6mo
That's pretty fair. I'm pretty certain that that's the case to some extent, but I also think @Amber Dawn is correct that people just thought it was Pascal's mugging, when I think it is a tad more nuanced than that (see my reply to Amber's comment). 

Thanks for the linkpost! Could you please add a summary of the article for those of us who can't access it? Also, you can convert this post into a linkpost by clicking on the link icon in the editor window.

8
Matt Goodman
7mo
Sure. I've written a short summary and my reaction to it, and made it a linkpost.

Thanks for the responses, @Linch and @calebp!

There are several organizations that work on helping non-humans in the long-term future, such as Sentience Institute and Center on Long-Term Risk; do you think that their activities could be competitive with the typical grant applications that LTFF gets?

Also, in general, how do you folks decide how to prioritize between causes and how to compare projects?

4
Linch
7mo
I'm confused about the prudence of publicly discussing specific organizations in the context of being potential grantees, especially ones that we haven't (AFAIK) given money to.

What about filing a patent and then releasing it?

5
Michelle Hauser
8mo
That would be the best option, but I'm afraid the tech transfer office at the university, which will be the one doing the filing, has no incentive to release a patent. Their whole goal is profit.

Open Phil funds pro-housing advocacy, whose benefits are especially concentrated in areas like Berkeley, so these benefits will flow through to the EA and AIS communities as well.

Reason 3 (travel distances) includes local transit. As a New Yorker, I commute to work at least once a week, and I'm thankful that the subway gets me there in under 30 minutes. In the Bay Area, due to the company I work for, I'd be commuting for at least an hour from either San Francisco or Berkeley into San Jose in horrid rush-hour traffic (or a mix of BART and Uber which, though slower, was a more pleasant experience), or living in the South Bay itself, which does not have great transit options either.

How does the team weigh the interests of non-humans (such as animals, extraterrestrials, and digital sentience) relative to humans? What do you folks think of the value of interventions to help non-humans in the long-term future specifically relative to that of interventions to reduce x-risk?

7
Linch
8mo
I don't think there is a team-wide answer, and there certainly isn't an institutional answer that I'm aware of. My own position is a pretty-standard-within-EA form of cosmopolitanism, where a) we should have a strong prior in favor of moral value being substrate-independent, and b) we should naively expect people to (wrongly) underestimate the moral value of beings that look different from ourselves.

Also as an empirical belief, I do expect the majority of moral value in the future to be held in minds that are very different from my own. The human brain is just such a narrow target in the space of possible designs, it'd be quite surprising to me if a million years from now the most effective way to achieve value is via minds-designed-just-like-2023-humans, even by the lights of typical 2023-humans. There are some second-order concerns like cooperativeness (I have a stronger presumption in favor of believing it's correct to cooperate with other humans than with ants, or with aliens), but I think cosmopolitanism is mostly correct.

However, I want to be careful in distinguishing the moral value or moral patiency of other beings from their interests. It is at least theoretically possible to imagine agents (eg designed digital beings) with strong preferences and optimization ability but not morally relevant experiences. In those cases, I think there are cooperative reasons to care about their preferences, but not altruistic reasons. In particular, I think the case for optimizing for the preferences of non-existent beings is fairly weak, but the case for optimizing for their experiences (eg making sure future beings aren't tortured) is very strong.

That said, in practice I don't think we often (ever?) get competitive grant applications that specialize in helping non-humans in the LT future; most of our applications are about reducing risks of extinction or other catastrophic outcomes, with a smattering of applications that are about helping individuals and organization

As a Scorpio, I concur that the Taurus emoji does not lack practical uses on social media apps 😤

Update August 2023: I've discovered China Labor Watch, a 501(c)(3) organization that investigates working conditions in Chinese manufacturing companies, educates workers on their labor rights, and "engages in dialogues" with the companies responsible for those conditions. They've exposed horrid working conditions - including sexual harassment and exposure to toxic chemicals - at the companies that make products for Apple, Mattel, and others.

You can donate to CLW via PayPal Giving Fund here; as of the time of writing, all transaction fees are covered by P... (read more)

Fertility rates may be important, but to me they're not worth restricting people's personal choices for, whether directly or indirectly. A lot of socially regressive ideas have been justified in the name of "raising the fertility rate" – for example, the rhetoric that gay acceptance would lead to fewer babies (as if gay people can simply "choose to be straight" and have babies the straight way). I think it's better to encourage people who are already interested in having kids to do so, through financial and other incentives.

3
Larks
9mo
Financial and other incentives to do X, if provided by the government, mean higher taxes on people who don't do X, an indirect restriction on their choices.
2
Roman Leventov
9mo
This is a radical libertarian view that most people don't share. Is it worth restricting people's access to hard drugs? Let's abstract for a moment from the numerous negative secondary effects that come with the fact that hard drugs are illegal, as well as from the crimes committed by drug users: if we can imagine that hard drugs could be just eliminated from Earth completely, with a magic spell, should we do it, or we "shouldn't restrict people's choices"?

With AI romantic partners, and other forms of tech, we do have a metaphorical magic wand: we could decide whether such products ever get created or not.

The example that you give doesn't work as evidence for your argument at all, due to the direct disanalogy: the "young man" from the "mainline story" which I outlined could want to have kids in the future or even wants to have kids already when he starts his experiment with the AI relationship, but his experience with the AI partner will prevent him from realising this desire and value over his future life.

Technology, products, and systems are not value-neutral. We are so afraid of consciously shaping our own values that we are happy to offload this to the blind free market, whose objective is not to shape the values that we would reflectively endorse the most.

Great article! Is it available on the website?

I noticed a few minor errors:

Cari Tuna is spelled as "Tuna Carry".

It is better to use italics for emphasis than quotation marks, as in this sentence:

You can safely develop these skills in your field of choice, “and” impact a lot of animals with your donations.

0
Animal Advocacy Careers
9mo
Thank you so much! Fixed :)

Especially around AI, there seem to be a bunch of key considerations that many people disagree about - so it's tricky to have a strong set of agreements to do evaluation around.

One could try to make the evaluation criteria worldview-agnostic – focusing on things like the quality of their research and workplace culture – and let individuals donate to the best orgs working on problems that are high priority to them.

4
Jason
9mo
I think having recommendations in each subfield would make sense. But how many subfields have a consensus standard for how to evaluate such things as "quality of . . . research"?
1
smountjoy
9mo
Oops, thank you! I thought I had selected linkpost, but maybe I unselected without noticing. Fixed!

この日本語のテキストをほとんど分からないけど、この翻訳のプロジェクトも努力も鑑賞します。頑張り続けてください!

Although I can barely understand the Japanese text, I appreciate this translation project and your efforts. Keep up the good work!

3
EA Japan
9mo
Thank you very much for your kind words! This means a lot to us! 

Relatedly, the auto-generated audio narration feature breaks down for non-English posts.

For example, in the Japanese post above, the narration skips everything except for the bits of English.

The handling of this Spanish post is slightly better: all of the text, being in Latin script, is included in the narration, but the words are spoken as if they're English words.
