Pro-pluralist, pro-bednet, anti-Bay EA. 🔸 10% Pledger.
I also think it's likely that SMA believes that for their target audience it would be more valuable to interact with AIM than with 80k or CEA, not necessarily for the 3 reasons you mention.
I mean, the reasoning behind this seems very close to #2, no? The target audience they're looking at is probably more interested in neartermism than AI/longtermism, and they don't think they can get much tractability working with the current EA ecosystem?
The underlying idea here is the Housing Theory of Everything.
A lossy compression of the idea is that if you fix the housing crisis in Western economies, you'll unlock positive outcomes across economic, social, and political metrics, through which you can then have high positive impact.
A sketch, for example, might be that you want the UK government to do lots of great stuff in AI Safety. But UK state capacity in general might be completely borked until it sorts out its housing crisis.
Reminds me of when an article about Rutger popped up on the Forum a while back (my comments here)
I expect SMA people probably think something along the lines of:
Not making a claim myself about whether and to what extent those claims are true.
Like Ian Turner I ended up disagreeing and not downvoting (I appreciate the work Vasco puts into his posts).
The shortest answer is that I find the "Meat Eater Problem" repugnant and indicative of defective moral reasoning that, if applied at scale, would lead to great moral harm.[1]
I don't want to write a super long comment, but my overall feelings on the matter have not changed since this topic came up on the Forum. In fact, I'd say that one of the leading reasons I consider myself drastically less 'EA' over the last ~6 months is the seeming embrace of the "Meat-Eater Problem" built into both the EA Community and its core ideas, or at least the more 'naïve utilitarian' end of things. To me, Vasco's bottom line result isn't an argument that we should stop preventing children dying of malnutrition or suffering from malaria because of these second-order effects.
Instead, naïve hedonistic utilitarians should be asking themselves: If the rule you followed brought you to this, of what use was the rule?
I also agree factory farming is terrible. I just want to find Pareto solutions that reduce needless animal suffering and increase human flourishing.
Ho-ho-ho, Merry-EV-mas everyone. It is once more the season of festive cheer and especially effective charitable donations, which also means that it's time for the long-awaited-by-nobody return of the 🎄✨🏆 totally-not-serious-worth-no-internet-points-JWS-Forum-Awards 🏆✨🎄, updated for 2024! Spreading Forum cheer and good vibes instead of nitpicky criticism!!
Best Forum Post I read this year:
Explaining the discrepancies in cost effectiveness ratings: A replication and breakdown of RP's animal welfare cost effectiveness calculations by @titotal
It was a tough choice this year, but I think this deep, deep dive into the different cost effectiveness calculations that were being used to anchor discussion in the GH v AW Debate Week was thorough, well-presented, and timely. Anyone could have done this instead of just taking the Saulius/Rethink estimates at face value, but titotal actually put in the effort. It was the culmination of a lot of work across multiple threads and comments, especially this one, and the full google doc they worked through is here.
This was, I think, an excellent example of good epistemic practices on the EA Forum. It was a replication which involved people on the original post, drilling down into models to find the differences, and also surfacing where the disagreements are based on moral beliefs rather than empirical data. Really fantastic work. 👏
Honourable Mentions:
Forum Posters of the Year:
Non-Forum Poasters of the Year:
Congratulations to all of the winners! I also know that there were many people who made excellent posts and contributions that I couldn't shout out, but I want you to know that I appreciate all of you for sharing things on the Forum or elsewhere.
My final ask is, once again, for you all to share your appreciation for others on the Forum this year and tell me what your best posts/comments/contributors were this year!
Yeah I could have worded this better. What I mean to say is that I expect that the tags 'Criticism of EA' and 'Community' probably co-occur in posts a lot more than two randomly drawn tags, and probably rank quite high on the pairwise ranking. I don't mean to say that it's a necessary connection or should always be the case, but it does mean that downweighting Community posts will disproportionately downweight Criticism posts.
If I'm right, that is! I can probably scrape the data from 23-24 on the Forum to actually answer this question.
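To be concrete, here's a minimal sketch of the pairwise co-occurrence check I have in mind (the post/tag data below is made up purely for illustration; the real version would use posts scraped from the Forum for 2023-24):

```python
from collections import Counter
from itertools import combinations

# Made-up example data: each post represented as the set of its tags.
# The real analysis would substitute posts scraped from the Forum (2023-24).
posts = [
    {"Community", "Criticism of effective altruism"},
    {"Community", "Criticism of effective altruism", "FTX collapse"},
    {"Community", "Building effective altruism"},
    {"Global health & development", "Cause prioritization"},
    {"AI safety", "Existential risk"},
]

# Count how often each pair of tags appears on the same post.
pair_counts = Counter()
for tags in posts:
    for pair in combinations(sorted(tags), 2):
        pair_counts[pair] += 1

# Rank tag pairs by co-occurrence; the question is how high
# ('Community', 'Criticism of effective altruism') sits in this ranking.
for pair, count in pair_counts.most_common():
    print(count, pair)
```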
Just flagging this for context of readers, I think Habryka's position/reading makes more sense if you view it in the context of an ongoing Cold War between Good Ventures and Lightcone.[1]
Some evidence on the GV side:
To Habryka's credit, it's much easier to see what the 'Lightcone Ecosystem' thinks of OpenPhil!
I was nervous about writing this because I don't want to start a massive flame war, but I think it's helpful for the EA Community to be aware that two powerful forces in it/adjacent to it[2] are essentially in a period of conflict. When you see comments from either side that seem more aggressive/hostile than you might otherwise think warranted, this framing may make the behaviour easier to understand.
Note: I don't personally know any of the people involved, and live half a world away, so expect it to be very inaccurate. Still, this 'frame' has helped me to try to grasp what I see as behaviours and attitudes which otherwise seem hard to explain to me, as an outsider to the 'EA/LW in the Bay' scene.
To my understanding, the Lightcone position on EA is that it 'should be disavowed and dismantled', but there's no denying that Lightcone is closer to EA than almost all other organisations in some sense
First, I want to say thanks for this explanation. It was both timely and insightful (I had no idea about the LLM screening, for instance). So wanted to give that a big 👍
I think something Jan is pointing to (and correct me if I'm wrong @Jan_Kulveit) is that because the default Community tag does downweight the visibility and coverage of a post, it could be implicitly used to deter engagement from certain posts. Indeed, my understanding was that this was pretty much exactly the case, and was driven by a desire to reduce Forum engagement on 'Community' issues in the wake of FTX. See for example:
Now, it is also true that I think the Forum was broadly supportive about this at the time. People were exhausted by FTX, it seemed like there was a new devastating EA scandal every week, and being able to downweight these discussions and focus on 'real' EA causes was understandably very popular.[1] So it wasn't even necessarily a nefarious change; it was responding to user demand.
Nevertheless I think, especially since criticisms of EA also come with the 'Community' tag attached,[2] it has also had the effect of somewhat reducing criticism and community sense-making. In retrospect, I still feel like the damage wrought by FTX hasn't had a full accounting, and the change to down-weight Community posts was trying to treat the 'symptoms' rather than the underlying issues.
Sharing some planned Forum posts I'm considering, mostly as a commitment device, but welcome thoughts from others:
My focus for 2025 will be to work towards developing my position on AI Safety, and to share that through a series of posts in an AI Safety sequence.[1] The concept of AGI went mainstream in 2024, and it does look like we will see significant technological and social disruption in the coming decades due to AI development. Nevertheless, I find myself increasingly skeptical of traditional narratives and arguments about what Alignment is, the likelihood of risk, and what ought to be done about it. Instead, I've come to view "Alignment" primarily as a question of political philosophy rather than of technical computer science. That said, I could very well be wrong on most or all of these ideas, and getting critical discussion from the community will, I think, be good both for myself and (I hope) the Forum readership.[2]
As such, I'm considering doing a deep-dive on the Apollo o1 report given the controversial reception it's had.[3] I think this is the most unlikely one though, as I'd want to research it as thoroughly as I could, and time is at a premium since Christmas is around the corner, so this is definitely a "stretch goal".
Finally, I don't expect to devote much more time[4] to adding to the "Criticism of EA Criticism" sequence. I often finish the posts well after the initial discourse has died down, and I'm not sure what effect they really have.[5] Furthermore, I've started to notice my own views on a variety of topics diverging from "EA Orthodoxy", so I'm not really sure I'd make a good defender. This change may itself warrant a future post, though again I'm not committing to that yet.
Which I will rename
It may be more helpful for those without technical backgrounds who are concerned about AI, but I'm not sure. I also think having a somewhat AGI-sceptical perspective represented on the Forum might be useful for intellectual diversity purposes, but I don't want to lean too hard on that claim. I'm very uncertain about the future of AI and could easily see myself being convinced to change my mind.
I'm slightly leaning towards the skeptical interpretation myself, as you might have guessed
if any at all, unless an absolutely egregious but widely-shared example comes up
Does Martin Sandbu read the EA Forum, for instance?
Not to self-promote too much but I see a lot of similarities here with my earlier post, Gradient Descent as an analogy for Doing Good :)
I think they complement each other,[1] with yours emphasising the guidance of the 'moral peak', and mine warning against going too straight and ignoring the ground underneath you giving way.
I think there is an underlying point that cluelessness wins over global consequentialism, which is practically unworkable, and that solid moral heuristics are a more effective way of doing good in a world with complex cluelessness.
Though you flipped the geometry for the more intuitive 'reaching a peak' rather than the ML-traditional 'descending a valley'