New & upvoted

Posts tagged community

Quick takes

I'm nervous that the EA Forum might be playing only a small role in x-risk and high-level prioritization work.
- Very little biorisk content here, perhaps because of info-hazards.
- Little technical AI safety work here, in part because that's more for LessWrong / the Alignment Forum.
- Little AI governance work here, for whatever reason.
- Not many innovative, big-picture longtermist prioritization projects happening at the moment, from what I understand.
- The cause of "EA community building" seems fairly stable, with not much bold/controversial experimentation, from what I can tell.
- Fairly few updates / discussions from grantmakers. OP is really the dominant one, and doesn't publish much, particularly about its grantmaking strategies and findings.

It's been feeling pretty quiet here recently, for my interests. I think some important threads are now happening in private Slack or in-person conversations, or just not happening.
If you’re seeing things on the forum right now that boggle your mind, you’re not alone. Forum users are only a subset of the EA community. As a professional community builder, I’m fortunate enough to know many people in the EA community IRL, and I suspect most of them would think it’d be ridiculous to give a platform to someone like Hanania. If you’re like most EAs I know, please don’t be dissuaded from contributing to the forum. I’m very glad CEA handles its events differently.
I haven't had time to read all the discourse about Manifest (which I attended), but it does highlight a broader issue about EA that I think is poorly understood: different EAs will necessarily have ideological convictions that are inconsistent with one another.

That is, some people will feel their effective altruist convictions motivate them to work on building artificial intelligence at OpenAI or Anthropic; others will think those companies are destroying the world. Some will try to save lives by distributing medicines; others will think the people those medicines save eat enough tortured animals to make the world worse off on net. Some will think liberal communities should exclude people who champion the existence of racial differences in intelligence; others will think excluding people for their views is profoundly harmful and illiberal.

I'd argue that the early history of effective altruism (i.e. the last 10-15 years) has generally been one of centralization around purist goals -- i.e. there are central institutions that effective altruism revolves around, and specific causes and ideas that are held up as the most correct form of effective altruism. I'm personally much more a proponent of liberal, member-first effective altruism than purist, cause-first EA. I'm not sure which of those options the Manifest example supports, but I do think it's indicative of the broader reality that for a number of issues, people on each side can believe the most effective altruist thing to do is to defeat the other.
Curious what people think of Gwern Branwen's take that our moral circle has historically narrowed as well, not just expanded (so contra Singer), so we should probably just call it a shifting circle. His summary:

> The “expanding circle” historical thesis ignores all instances in which modern ethics narrowed the set of beings to be morally regarded, often backing its exclusion by asserting their non-existence, and thus assumes its conclusion: where the circle is expanded, it’s highlighted as moral ‘progress’, and where it is narrowed, what is outside is simply defined away.
>
> When one compares modern with ancient society, the religious differences are striking: almost every single supernatural entity (place, personage, or force) has been excluded from the circle of moral concern, where they used to be huge parts of the circle and one could almost say the entire circle. Further examples include estates, houses, fetuses, prisoners, and graves.

(I admittedly don't find his examples all that persuasive, probably because I'm already biased to only consider beings that can feel pleasure and suffering.)

What's the "so what"? Gwern:

> One of the most difficult aspects of any theory of moral progress is explaining why moral progress happens when it does, in such apparently random non-linear jumps. (Historical economics has a similar problem with the Industrial Revolution & Great Divergence.) These jumps do not seem to correspond to simply how many philosophers are thinking about ethics.
>
> As we have already seen, the straightforward picture of ever more inclusive ethics relies on cherry-picking if it covers more than, say, the past 5 centuries; and if we are honest enough to say that moral progress isn’t clear before then, we face the new question of explaining why things changed then and not at any point previous in the 2500 years of Western philosophy, which included many great figures who worked hard on moral philosophy such as Plato or Aristotle.
>
> It is also troubling how much morality & religion seems to be correlated with biological factors. Even if we do not go as far as Julian Jaynes’s theories of gods as auditory hallucinations, there are still many curious correlations floating around.
lilly
One feature I think it'd be nice for the Forum to have is a thing that shows you the correlation between your agree votes and karma votes. I don't think there is some objectively correct correlation between these two things, but it seems likely that it should be between, say, .2 and .6 (probably depending on the kind of comments you tend to read/vote on), and it might be nice for users to be able to know and track this.  Making this visible to individual users (and, potentially, to anyone who clicks on their profile) would provide at least a weak incentive to avoid reflexively downvoting comments that one disagrees with, something that happens a lot, and that I also find myself doing more than I'd like.
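For concreteness, here is a minimal sketch of the statistic being proposed, assuming each vote is stored as +1 (up), -1 (down), or 0 (no vote) per comment; the encoding and function names are my guesses for illustration, not the Forum's actual data model:

```python
import statistics

def vote_correlation(karma_votes: list[int], agree_votes: list[int]) -> float:
    """Pearson correlation between a user's karma votes and agree votes.

    Each list holds one entry per comment the user voted on,
    encoded as +1 (upvote/agree), -1 (downvote/disagree), or 0.
    """
    return statistics.correlation(karma_votes, agree_votes)

# Hypothetical voting history on six comments:
karma = [1, 1, -1, 1, -1, 1]
agree = [1, -1, -1, 1, 1, 1]
print(round(vote_correlation(karma, agree), 2))  # 0.25 -- inside the .2-.6 band
```

A user who reflexively downvotes whatever they disagree with would show a correlation near 1.0, which is what making the number visible is meant to discourage.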

Popular comments

Recent discussion

Summary

  • Farmed cows and pigs account for a tiny fraction of the disability of the farmed animals I analysed.
  • The annual disability of farmed animals is much larger than that of humans, even under the arguably very optimistic assumption of all farmed animals having neutral
...
CB
Thanks for the post; I really appreciate the posts you make and the topics you tackle generally.

Thanks for the kind words, CB!

Wayne_Chang
Thanks, Vasco, for doing this analysis! Here are some of my learnings:

1. Fish and shrimp suffering is so much greater than that of other farm animals. I knew about their much larger numbers already, but I feel it much more viscerally now after better understanding the steps in your calculations and the assumptions behind them.
2. The overall picture of neglectedness (i.e. disability vs funding) is insensitive to the way pain/disability is measured or to the assumptions behind animal moral value (e.g. welfare range). Unless you literally assume farm animals matter zero, any reasonable assumption will show how neglected farm animals are relative to humans.
3. Wild animals have even greater neglectedness. We had initially discussed including them in the analysis, but they would dominate everything else, even farm animals. I had thought that limiting the analysis to certain types of animals (e.g. land vertebrates or just mammals) would result in farm animals being more prominent, but wild animals dominate even among just mammals.

Hi everyone,

I've been reading up on H5N1 this weekend, and I'm pretty concerned. Right now my rough estimate is that there is a 5% chance that it will cost more than 10,000 people their lives.

To be clear, I think it is unlikely that H5N1 will become a pandemic ...

MathiasKB
I'm keeping an eye out for Sentinel's analyses: https://forecasting.substack.com/p/alert-minutes-for-week-172024

I'm worried too!
NickLaing
Thanks! I absolutely love this list, and I agree with your reasoning that H5N1 is one of the most likely (if not the most likely ever identified) situations where a known pathogen could move from a minor issue to a disastrous pandemic.

That said, I still think it's very unlikely, based on prior evidence, that we will see this happen in real time. I'm not sure that humans have ever actually followed a specific virus that then became a dangerous pandemic. Can you think of an example that fulfils these criteria?

1. Virus identified in advance that was either non-transferrable to humans, or (like H5N1) has very limited human transmission
2. Prediction made that the virus could become dangerous
3. The virus mutates and becomes dangerous, causing an epidemic/pandemic

Previous dangerous diseases that emerged from other animals (HIV, Ebola, Covid, swine flu) were not predicted in advance. Because of that, I would rate this statement as quite an overstatement: "we're on a path to a devastating H5N1 pandemic within the next few years, possibly much sooner". The most likely scenario is that we never get an H5N1 pandemic. This doesn't mean we shouldn't be spending far more money on the issue and focusing on it; there's obviously a real chance that H5N1 becomes disastrous, I just think it's well below 50%. In general, though, I weight priors very heavily, far more heavily than theory, so it depends on your prediction methods.

This is also a pretty good summary by the Institute for Progress, where they estimate the risk at 4% in the next year; my instinct is it might be even lower. I like their cascade of probabilities, but at a few stages I would have gone with lower probabilities. https://ifp.org/what-are-the-chances-an-h5n1-pandemic-is-worse-than-covid/

We could also ask serious forecasters here what they think? @Peter Wildeford @NunoSempere
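As a concrete illustration of the "cascade of probabilities" approach mentioned above, here is a minimal sketch of how such an estimate composes. The stage names and probabilities are made-up placeholders for illustration, not the IFP article's actual figures:

```python
# Toy cascade-of-probabilities estimate: multiply the conditional
# probability of each stage to get an overall risk.
# All numbers here are hypothetical placeholders.
stages = [
    ("sustained mammal-to-mammal transmission", 0.30),
    ("acquires efficient human-to-human spread", 0.20),
    ("containment fails and it becomes a pandemic", 0.50),
]

p = 1.0
for name, prob in stages:
    p *= prob
    print(f"{name}: {prob:.0%} (cumulative: {p:.1%})")

print(f"Overall risk under these assumptions: {p:.1%}")  # 3.0%
```

Lowering a few stage probabilities, as Nick suggests he would, drags the product down quickly, which is why the overall estimate is so sensitive to those judgment calls.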

Thanks for the pushback, Nick.

No, I can't think of any examples that meet your criteria. As a layperson I wouldn't know about them if they existed, anyway.

I could quibble with your implicit method for computing a prior, however. You mention 4 zoonotic pandemics that were not predicted in advance. I'd argue the correct denominator is instead potential pandemics that were predicted in advance: how many times in history have we had an argument this strong for a pandemic that was ultimately a nothingburger? That should be our reference class.

The joke goes that "econ…
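One crude way to turn that reference class into a number is Laplace's rule of succession; a minimal sketch, with hypothetical counts rather than anyone's real tally (this is my illustration, not a method either commenter endorses):

```python
def laplace_prior(k: int, n: int) -> float:
    """Laplace's rule of succession: (k + 1) / (n + 2).

    n = number of comparably strong past pandemic warnings,
    k = how many of them actually came true.
    """
    return (k + 1) / (n + 2)

# Hypothetical placeholder counts: 10 strong warnings, none materialised.
print(round(laplace_prior(k=0, n=10), 3))  # 0.083
```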

Magnus Vinding commented on Why UFOs matter

UFOs matter in various ways. My aim in this post is to outline some of the ways in which UFOs are relevant to altruistic priorities, and thereby make a case for why it is worth taking UFOs seriously.[1]

 

1. Sufficient grounds for curiosity

The best witness reports and...

mako yass
Regarding your credible UFO evidence: did you look up the Aguadilla 2013 footage on Metabunk? It's mundane. All I really needed to hear was "the IR camera was on a plane", which calls into question the assumption that the object is moving quickly; it only looks that way due to parallax, and in fact it seems like it was a lantern moving at wind speed. And I'd agree with this member's take that the NYC 2010 one looks like balloons that were initially tethered coming apart.

The São Paulo video is interesting though; I hadn't seen that before. My favourite videos are "dadsfriend films a hovering black triangle" (could have been faked with some drones, but I still like it) and the Nellis Air Range footage. But I've seen so many videos debunked that I don't put much stock in these. You would probably enjoy my UFO notes; I see (fairly) mundane explanations for a lot of the other stuff too.

So at this point, I don't think we have compelling video evidence at all. I think all we have is a lot of people saying that they saw things that were really definitely something, and I sure do wonder why they're all saying these things. I don't know if we'll ever know.

Thanks for your comment and for the links :)

> I don't think we have compelling video evidence at all

I'd agree that there's no compelling video evidence in the sense of it being remotely conclusive; it's possible that it's all mundane. But it seems to me that some of the footage is sufficiently puzzling, or sufficiently unclear, to be worth investigating, and that it provides some (further) reason to take this issue seriously. I agree that the reports, including reports involving radar evidence, are more noteworthy in terms of existing evidence.

Regarding…


Background: I'm currently writing a PhD in moral philosophy on the topic of moral progress and moral circle expansion (you can read a bit about it here and here). I was recently at three EA-related conferences: EA Global London 2024, a workshop on AI, Animals, and Digital...


Damn, I really resonated with this post. 

I share most of your concerns, but I also have some even weirder thoughts on specific things, and I often feel like, "What the fuck did I get myself into?"

Now, as I've basically been into AI Safety for the last 4 years, I've really tried to dive deep into the nature of agency. You get into some very weird territory when trying to computationally define the boundary between an agent and the things surrounding it, and the division between individual and collective intelligence just starts to break down a…

Joseph_Chu
Just wanted to point out that the distinction between total and average utilitarianism predates Derek Parfit's Reasons and Persons: Henry Sidgwick discusses it in The Methods of Ethics (1874), and John Harsanyi advocates for a form of average utilitarianism in Morality and the Theory of Rational Behaviour (1977).

Other than that, great post! I feel for your moral uncertainty and anxiety. It reminds me of the discussions we used to have on the old Felicifia forums back when they were still around. A lot of negative-leaning utilitarians on Felicifia actually talked about hypothetically ending the universe to end suffering, and the hedonium shockwave thing was also discussed a fair bit, as well as using neuron count as a proxy metric for sentience. A number of Felicifia alumni later became somewhat prominent EAs, like Brian Tomasik and Peter Wildeford (back then he was still Peter Hurford).

This payout report covers the Long-Term Future Fund's grantmaking from May 1 2023 to March 31 2024 (11 months). We highlight some grants that we thought were interesting and covered a relatively wide scope of LTFF’s activities. We hope that reading the highlighted grants...


Thanks for sharing these updates! 

One minor piece of feedback: to me, the title gives the impression that this post is only about payouts that happened in March 2024. While this is clarified at the very beginning, I think the full duration or the number of months could be mentioned in the title, as this is relevant for people deciding whether to click on the post.

July 1-7 will be AI Welfare Debate Week on the EA Forum. We will be discussing the debate statement: “AI welfare[1] should be an EA priority”.[2] The Forum team will be contacting authors who are well-versed in this topic to post, but we also welcome...


I like this!

Relevant context for those unaware: supposedly, Good Ventures (and by extension OpenPhil) has recently decided to pull out of funding artificial sentience.

Can you give some examples of topics that qualify and some that don't qualify as "EA priorities"?

I feel like for the purpose of getting the debate started, the vague question is fine. For the purpose of measuring agreement/disagreement and actually directly debating the statement, it's potentially problematic. Does EA as a whole have priorities? How much of a priority should it be?

I'm nervous that the EA Forum might be playing only a small role in x-risk and high-level prioritization work.
- Very little biorisk content here, perhaps because of info-hazards.
- Little technical AI safety work here, in part because that's more for LessWrong / Alignment...


Curious if you think there was good discussion before that and could point me to any particularly good posts or conversations?

There are still a bunch of good discussions (see mostly posts with 10+ comments) in the last 6 months or so; it's just that we can sometimes go a week or two without more than one or two ongoing serious GHD chats. Then again, looking at this, maybe I'm wrong and there hasn't actually been much (or any) meaningful change in activity this year.

https://forum.effectivealtruism.org/?tab=global-health-and-development

I wonder if the forum is even a good place for a lot of these discussions? Feels like they need some combination of safety / shared context, expertise, gatekeeping etc?

I believe that:

  1. AI-enhanced organization governance could be a potentially huge win in the next few decades.
  2. AI-enhanced governance could allow organizations to reach superhuman standards, like having an expected "99.99" reliability rate of not being corrupt or not telling
...

(You can read this post as a Google Doc. You might find this easier to share with animal-sympathetic non-EAs. Also: I work at Rethink Priorities, but I'm writing in a personal capacity.)

A few weeks ago, I shared some suggested responses for a Defra consultation on welfare...


I did it too, thank you so much for posting! I didn't use your templates at all; it's easier to write stream-of-consciousness about why I think cages are bad.

Ben Stevenson
Thanks for completing the consultation, and for your time assessment (I hope that helps others judge accurately!)