All posts


Today and yesterday


Quick takes

This is still in brainstorming stage; I think there's probably a convincing line of argument for "AI alignment difficulty is high at least on priors" that includes the following points:

* Many humans don't seem particularly aligned to "human values" (not just thinking of dark triad traits, but also things like self-deception, cowardice, etc.).
* There's a loose analogy where AI is "more technological progress," and "technological progress" so far hasn't always been aligned to human flourishing (it has solved or improved a lot of long-term problems of civilization, like infant mortality, but has also created some new ones, like political polarization, obesity, and unhappiness from constant bombardment with images of people who are richer and more successful than you). So, based on this analogy, why think things will somehow fall into place with AI training so that the new forces that be will for once become aligned?
* AI will accelerate everything, and if you accelerate something that isn't set up in a secure way, it goes off the rails ("small issues will be magnified").
Prompted by a different forum:

> ...as a small case study, the Effective Altruism forum has been impoverished over the last few years by not being lenient with valuable contributors when they had a bad day.
>
> In a few cases, I later learnt that some longstanding user had a mental health breakdown/psychotic break/bipolar something or other. To some extent this is an arbitrary category, and you can interpret going outside normality through the lens of mental health, or through the lens of "this person chose to behave inappropriately". Still, my sense is that leniency would have been a better move when people go off the rails.
>
> In particular, the best move seems to me a combination of:
>
> * In the short term, when a valued member is behaving uncharacteristically badly, stop them from posting.
> * Follow up a week or a few weeks later to see how the person is doing.
>
> Two factors here are:
>
> * There is going to be some overlap in that people with a propensity for some mental health disorders might be more creative, better able to see things from weird angles, better able to make conceptual connections.
> * In a longstanding online community, people grow to care about others. If a friend goes off the rails, there is the question of how to stop them from causing harm to others, but there is also the question of how to help them be ok, and the second one can just dominate sometimes.
If you're currently at EAG London and you still see this quick take, you're exactly the person we'd like to meet:

EAG London Meetup: EA Forum readers and writers | Saturday 5-6pm at Meeting point G

Some members of the EA Forum online team are holding a casual meetup for EA Forum readers and writers to get to know each other (and us). Join us if you'd like to find a co-author, meet someone who can give you feedback on your draft, or make suggestions to the EA Forum team. We'll meet at meeting point G, unless otherwise stated.
Can someone who runs an EA podcast please convert recorded EAG talks to podcast form, so that more people can listen to them? @80000_Hours @hearthisidea @Kat Woods @EA Global (please tag other podcasters in the comments). The CEA events team seem open to this, but don't have the podcasting expertise or the bandwidth to start a new podcast. If you're interested in this as well, say "yes" in the comments so the person who can take this up will be encouraged to do so. (Full disclosure: this is a bit of a selfish ask. I'm attending EAG and want to listen to quite a few talks that I don't have time for, and streaming them on YouTube seems clunky and not great for driving.)

Past week



Quick takes

EA organizations frequently ask for people to run criticism by them ahead of time. I've been wary of the push for this norm. My big concerns were that orgs wouldn't comment until a post was nearly done, and that it would take a lot of time. My recent post mentioned a lot of people and organizations, so it seemed like useful data. I reached out to 12 email addresses, plus one person in FB DMs and one open call for information on a particular topic. This doesn't quite match what you see in the post because some people/orgs were used more than once, and other mentions were cut. The post was in a fairly crude state when I sent it out.

Of those 14: 10 had replied by the start of the next day, and more than half of those replied within a few hours. I expect this was faster than usual because no one had more than a few paragraphs relevant to them or their org, but it's still impressive.

It's hard to say how sending an early draft changed things. One person got some extra anxiety because their paragraph was full of TODOs (because it was positive and I hadn't worked as hard fleshing out the positive mentions ahead of time). I could maybe have saved myself one stressful interaction if I'd realized I was going to cut an example ahead of time.

Only 80,000 Hours, Anima International, and GiveDirectly failed to respond before publication (7 days after I emailed them). Of those, only 80k's mention was negative. I didn't keep as close track of changes, but at a minimum replies led to 2 examples being removed entirely, 2 clarifications, and some additional information that made the post better.

So overall I'm very glad I solicited comments, and found the process easier than expected.
saulius · 3d
EAG and covid [edit: solved, I'm not attending the EAG (I'm still testing positive as of Saturday)]

I have many meetings planned for the EAG London that starts tomorrow, but I'm currently testing very faintly positive for covid. I feel totally fine. I'm looking for a bit of advice on what to do. I only care to do what's best for altruistic impact. Some of my meetings are important for my current project, and trying to schedule them online would delay and complicate some things a little bit. I will also need to use my laptop during meetings to take notes. I first tested positive on Monday evening, and since then all my tests have been very faintly positive. No symptoms. I guess my options are roughly:

1. Attend the conference as normal, wear a mask when it's not inconvenient and when I'm around many people.
2. Only go to 1-1s, wear a mask when I have to be inside but perhaps not during 1-1s (I find prolonged talking with a mask difficult).
3. Don't go inside, and have all of my 1-1s outside. Looking at Google Maps, there don't seem to be any benches or nice places to sit just outside the venue, so I might have to ask people to sit on the floor and use my laptop on the floor, and I don't know how I'd charge it. Perhaps it's better not to go if I'd have to do that.
4. Don't go. I don't mind doing that if that's the best thing altruistically.

In all cases, I can inform all my 1-1s (I have ~18 tentatively planned) that I have covid. I can also attend only if I test negative on the morning of a day. This would be the third EAG London in a row where I'd cancel all my meetings last minute because I might be contagious with covid, although I'm probably not and I feel totally fine. This makes me a bit frustrated and biased, which is partly why I'm asking for advice here. The thing is that I think very few people are still that careful and still test, but perhaps they should be, I don't know. There are vulnerable people, and long covid can be really bad. So if I'm going to take precautions, I'd like others reading this to also test and do the same, at least if you have a reason to believe you might have covid.
I highly recommend the book "How to Launch A High-Impact Nonprofit" to everyone. I've been EtG for many years and I thought this book wasn't relevant to me, but I'm learning a lot and I'm really enjoying it.
The Animal Welfare Department at Rethink Priorities is recruiting volunteer researchers to support a high-impact project! We're conducting a review of interventions to reduce meat consumption, and we're seeking help checking whether academic studies meet our eligibility criteria. This will involve reviewing the full text of studies, especially methodology sections.

We're interested in volunteers who have some experience reading empirical academic literature, especially postgraduates. The role is an unpaid volunteer opportunity. We expect this to be a ten-week project, requiring approximately five hours per week, but your time commitment can be flexible, depending on your availability. This is an exciting opportunity for graduate students and early-career researchers to gain research experience, learn about an interesting topic, and directly participate in an impactful project. The Animal Welfare Department will provide support and, if desired, letters of experience for volunteers.

If you are interested in volunteering with us, contact Ben Stevenson at bstevenson@rethinkpriorities.org. Please share either your CV or a short statement (~4 sentences) about your experience engaging with empirical academic literature. Candidates will be invited to complete a skills assessment. We are accepting applications on a rolling basis, and will update this listing when we are no longer accepting applications. Please reach out to Ben if you have any questions. If you know anybody who might be interested, please forward this opportunity to them!
In late June, the Forum will (almost definitely) be holding a debate week on the topic of digital minds. As with the AI pause debate week, I'll encourage specific authors who have thoughts on this issue to post, but all interested Forum users are also encouraged to take part. We will also have an interactive banner to track Forum users' opinions and how they change throughout the week.

I'm still formulating the exact debate statement, so I'm very open to input here! I'd like to see people discuss: whether digital minds should be an EA cause area, how bad putting too much or too little effort into digital minds could be, and whether there are any promising avenues for further work in the domain. I'd like a statement which is fairly clear, so that the majority of debate doesn't end up being semantic.

The debate statement will be a value statement of the form 'X is the case' rather than a prediction 'X will happen before Y'. For example, we could discuss how much we agree with the statement 'Digital minds should be a top 5 EA cause area', but this specific suggestion is uncomfortably vague. Do you have any suggestions for alternative statements? I'm also open to feedback on the general topic. Feel free to DM rather than comment if you prefer.

Past 14 days



Quick takes

Linch · 13d
Do we know if @Paul_Christiano or other ex-lab people working on AI policy have non-disparagement agreements with OpenAI or other AI companies? I know Cullen doesn't, but I don't know about anybody else. I know NIST isn't a regulatory body, but it still seems like standards-setting should be done by people who have no unusual legal obligations. And of course, some other people are or will be working at regulatory bodies, which may have more teeth in the future.

To be clear, I want to differentiate between non-disclosure agreements, which are perfectly sane and reasonable in at least a limited form as a way to prevent leaking trade secrets, and non-disparagement agreements, which prevent you from saying bad things about past employers. The latter seems clearly bad to have for anybody in a position to affect policy. Doubly so if the existence of the non-disparagement agreement itself is secretive.
Having a baby and becoming a parent has had an incredible impact on me. Now more than ever, I feel more connected and concerned about the wellbeing of others. I feel as though my heart has literally grown. I wanted to share this as I expect there are many others who are questioning whether to have children -- perhaps due to concerns about it limiting their positive impact, among many others. But I'm just here to say it's been beautiful, and amazing, and I look forward to the day I get to talk with my son about giving back in a meaningful way.  
I wonder how the recent turn for the worse at OpenAI should make us feel about e.g. Anthropic and Conjecture and other organizations with a similar structure, and whether we should change our behaviour towards those orgs.

* How much do we think that OpenAI's problems are idiosyncratic vs. structural? If e.g. Sam Altman is the problem, we can still feel good about peer organisations. If instead weighing investor concerns against safety concerns is the root of the problem, we should be worried about whether peer organizations are going to be pushed down the same path sooner or later.
* Are there any concerns we have with OpenAI that we should be taking this opportunity to put to its peers as well? For example, have peers been publicly asked if they use non-disparagement agreements? I can imagine a situation where another org has really just never thought to use them, and we can use this occasion to encourage them to turn that into a public commitment.
Besides Ilya Sutskever, is there any person not related to the EA community who quit or was fired from OpenAI for safety concerns?
I don't think CEA has a public theory of change; it just has a strategy. If I were to recreate its theory of change based on what I know of the org, it'd have three target groups:

1. Non-EAs
2. Organisers
3. Existing members of the community

Per target group, I'd say it has the following main activities:

* Targeting non-EAs, it does comms and education (the VP programme).
* Targeting organisers, you have the work of the groups team.
* Targeting existing members, you have the events team, the forum team, and community health.

Per target group, these activities are aiming for the following short-term outcomes:

* Targeting non-EAs, it doesn't aim to raise awareness of EA; instead, it aims to ensure people have an accurate understanding of what EA is.
* Targeting organisers, it aims to improve their ability to organise.
* Targeting existing members, it aims to improve information flow (through EAG(x) events, the forum, newsletters, etc.) and maintain a healthy culture (through community health work).

If you're interested, you can see EA Netherlands' theory of change here.

Past 31 days


Quick takes

Cullen · 16d
I am not under any non-disparagement obligations to OpenAI. It is important to me that people know this, so that they can trust any future policy analysis or opinions I offer. I have no further comments at this time.
This is a cold take that’s probably been said before, but I thought it bears repeating occasionally, if only for the reminder:

The longtermist viewpoint has gotten a lot of criticism for prioritizing “vast hypothetical future populations” over the needs of "real people" alive today. The mistake, so the critique goes, is the result of replacing ethics with math, or utilitarianism, or something cold and rigid like that. And so it’s flawed because it lacks the love or duty or "ethics of care" or concern for justice that lead people to alternatives like mutual aid and political activism.

My go-to reaction to this critique has become something like “well, you don’t need to prioritize vast abstract future generations to care about pandemics or nuclear war; those are very real things that could, with non-trivial probability, face us in our lifetimes.” I think this response has taken hold in general among people who talk about X-risk. This probably makes sense for pragmatic reasons. It’s a very good rebuttal to the “cold and heartless utilitarianism/Pascal's mugging” critique.

But I think it unfortunately neglects the critical point that longtermism, when taken really seriously — at least the sort of longtermism that MacAskill writes about in WWOTF, or Joe Carlsmith writes about in his essays — is full of care and love and duty. Reading the thought experiment that opens the book, about living every human life in sequential order, reminded me of this. I wish there were more people responding to the “longtermism is cold and heartless” critique by making the case that no, longtermism at face value is worth preserving because it's the polar opposite of heartless.
Caring about the world we leave for the real people, with emotions and needs and experiences as real as our own, who very well may inherit our world but who we’ll never meet, is an extraordinary act of empathy and compassion — one that’s way harder to access than the empathy and warmth we might feel for our neighbors by default. It’s the ultimate act of care. And it’s definitely concerned with justice. (I mean, you can also find longtermism worthy because of something something math and cold utilitarianism. That’s not out of the question. I just don’t think it’s the only way to reach that conclusion.)
William_S · 1mo
I worked at OpenAI for three years, from 2021 to 2024, on the Alignment team, which eventually became the Superalignment team. I worked on scalable oversight, as part of the team developing critiques as a technique for using language models to spot mistakes in other language models. I then worked to refine an idea from Nick Cammarata into a method for using language models to generate explanations for features in language models. I was then promoted to managing a team of 4 people which worked on trying to understand language model features in context, leading to the release of an open-source "transformer debugger" tool. I resigned from OpenAI on February 15, 2024.
We should expect the incentives and culture of AI-focused companies to make them uniquely terrible for producing safe AGI.

From a “safety from catastrophic risk” perspective, I suspect an “AI-focused company” (e.g. Anthropic, OpenAI, Mistral) is abstractly pretty close to the worst possible organizational structure for getting us towards AGI. I have two distinct but related reasons: incentives and culture.

From an incentives perspective, consider realistic alternative organizational structures to “AI-focused company” that nonetheless have enough firepower to host successful multibillion-dollar scientific/engineering projects:

1. As part of an intergovernmental effort (e.g. CERN’s Large Hadron Collider, the ISS)
2. As part of a governmental effort of a single country (e.g. the Apollo Program, the Manhattan Project, China’s Tiangong)
3. As part of a larger company (e.g. Google DeepMind, Meta AI)

In each of those cases, I claim that there are stronger (though still not ideal) organizational incentives to slow down, pause/stop, or roll back deployment if there is sufficient evidence or reason to believe that further development could result in major catastrophe. In contrast, an AI-focused company has every incentive to go ahead on AI when the case for pausing is uncertain, and minimal incentive to stop or even take things slowly.

From a culture perspective, I claim that without knowing any details of the specific companies, you should expect AI-focused companies to be more likely than plausible contenders to have the following cultural elements:

1. Ideological AGI vision: AI-focused companies may have a large contingent of “true believers” who are ideologically motivated to make AGI at all costs.
2. No pre-existing safety culture: AI-focused companies may have minimal or no strong “safety” culture where people deeply understand, have experience in, and are motivated by a desire to avoid catastrophic outcomes.

The first one should be self-explanatory. The second one is a bit more complicated, but basically I think it’s hard to have a safety-focused culture just by “wanting it” hard enough in the abstract, or by talking a big game. Instead, institutions (relatively) have more of a safe and robust culture if they have previously suffered the (large) costs of not focusing enough on safety. For example, engineers who aren’t software engineers understand fairly deep down that their mistakes can kill people, and that their predecessors’ fuck-ups have indeed killed people (think bridges collapsing, airplanes falling, medicines not working, etc.). Software engineers rarely have such experience. Similarly, governmental institutions have institutional memories of the problems from major historical fuckups, in a way that new startups very much don’t.
OllieBase · 22d
Congratulations to the EA Project For Awesome 2024 team, who managed to raise over $100k for AMF, GiveDirectly and ProVeg International by submitting promotional/informational videos to the project. There's been an effort to raise money for effective charities via Project For Awesome since 2017, and it seems like a really productive effort every time. Thanks to all involved! 

Since April 1st



Quick takes

In this "quick take", I want to summarize some of my idiosyncratic views on AI risk. My goal here is to list just a few ideas that cause me to approach the subject differently from how I perceive most other EAs view the topic. These ideas largely push me in the direction of making me more optimistic about AI, and less likely to support heavy regulations on AI.

(Note that I won't spend a lot of time justifying each of these views here. I'm mostly stating these points without lengthy justifications, in case anyone is curious. These ideas can perhaps inform why I spend significant amounts of my time pushing back against AI risk arguments. Not all of these ideas are rare, and some of them may indeed be popular among EAs.)

1. Skepticism of the treacherous turn: The treacherous turn is the idea that (1) at some point there will be a very smart unaligned AI, (2) when weak, this AI will pretend to be nice, but (3) when sufficiently strong, this AI will turn on humanity by taking over the world by surprise, and then (4) optimize the universe without constraint, which would be very bad for humans. By comparison, I find it more likely that no individual AI will ever be strong enough to take over the world, in the sense of overthrowing the world's existing institutions and governments by surprise. Instead, I broadly expect unaligned AIs will integrate into society and try to accomplish their goals by advocating for their legal rights, rather than trying to overthrow our institutions by force. Upon attaining legal personhood, unaligned AIs can utilize their legal rights to achieve their objectives, for example by getting a job and trading their labor for property, within the already-existing institutions. Because the world is not zero-sum, and there are economic benefits to scale and specialization, this argument implies that unaligned AIs may well have a net-positive effect on humans, as they could trade with us, producing value in exchange for our own property and services.
Note that my claim here is not that AIs will never become smarter than humans. One way of seeing how these two claims are distinguished is to compare my scenario to the case of genetically engineered humans. By assumption, if we genetically engineered humans, they would presumably eventually surpass ordinary humans in intelligence (along with social persuasion ability, and ability to deceive etc.). However, by itself, the fact that genetically engineered humans will become smarter than non-engineered humans does not imply that genetically engineered humans would try to overthrow the government. Instead, as in the case of AIs, I expect genetically engineered humans would largely try to work within existing institutions, rather than violently overthrow them.

2. AI alignment will probably be somewhat easy: The most direct and strongest current empirical evidence we have about the difficulty of AI alignment, in my view, comes from existing frontier LLMs, such as GPT-4. Having spent dozens of hours testing GPT-4's abilities and moral reasoning, I think the system is already substantially more law-abiding, thoughtful and ethical than a large fraction of humans. Most importantly, this ethical reasoning extends (in my experience) to highly unusual thought experiments that almost certainly did not appear in its training data, demonstrating a fair degree of ethical generalization, beyond mere memorization. It is conceivable that GPT-4's apparently ethical nature is fake. Perhaps GPT-4 is lying about its motives to me and in fact desires something completely different than what it professes to care about. Maybe GPT-4 merely "understands" or "predicts" human morality without actually "caring" about human morality. But while these scenarios are logically possible, they seem less plausible to me than the simple alternative explanation that alignment—like many other properties of ML models—generalizes well, in the natural way that you might similarly expect from a human.
Of course, the fact that GPT-4 is easily alignable does not immediately imply that smarter-than-human AIs will be easy to align. However, I think this current evidence is still significant, and aligns well with prior theoretical arguments that alignment would be easy. In particular, I am persuaded by the argument that, because evaluation is usually easier than generation, it should be feasible to accurately evaluate whether a slightly-smarter-than-human AI is taking bad actions, allowing us to shape its rewards during training accordingly. After we've aligned a model that's merely slightly smarter than humans, we can use it to help us align even smarter AIs, and so on, plausibly implying that alignment will scale to indefinitely higher levels of intelligence, without necessarily breaking down at any physically realistic point.

3. The default social response to AI will likely be strong: One reason to support heavy regulations on AI right now is if you think the natural "default" social response to AI will lean more heavily towards laissez faire than is optimal, i.e., by default, we will have too little regulation rather than too much. In this case, you could believe that, by advocating for regulations now, you're making it more likely that we regulate AI a bit more than we otherwise would have, pushing us closer to the optimal level of regulation. I'm quite skeptical of this argument because I think that the default response to AI (in the absence of intervention from the EA community) will already be quite strong. My view here is informed by the base rate of technologies being overregulated, which I think is quite high. In fact, it is difficult for me to name even a single technology that I think is currently clearly underregulated by society. By pushing for more regulation on AI, I think it's likely that we will overshoot and over-constrain AI relative to the optimal level.
In other words, my personal bias is towards thinking that society will regulate technologies too heavily, rather than too loosely. And I don't see a strong reason to think that AI will be any different from this general historical pattern. This makes me hesitant to push for more regulation on AI, since on my view, the marginal impact of my advocacy would likely be to push us even further in the direction of "too much regulation", overshooting the optimal level by even more than what I'd expect in the absence of my advocacy.

4. I view unaligned AIs as having comparable moral value to humans: This idea was explored in one of my most recent posts. The basic idea is that, under various physicalist views of consciousness, you should expect AIs to be conscious, even if they do not share human preferences. Moreover, it seems likely that AIs — even ones that don't share human preferences — will be pretrained on human data, and therefore largely share our social and moral concepts. Since unaligned AIs will likely be both conscious and share human social and moral concepts, I don't see much reason to think of them as less "deserving" of life and liberty, from a cosmopolitan moral perspective. They will likely think similarly to the way we do across a variety of relevant axes, even if their neural structures are quite different from our own. As a consequence, I am pretty happy to incorporate unaligned AIs into the legal system and grant them some control of the future, just as I'd be happy to grant some control of the future to human children, even if they don't share my exact values. Put another way, I view (what I perceive as) the EA attempt to privilege "human values" over "AI values" as being largely arbitrary and baseless, from an impartial moral perspective. There are many humans whose values I vehemently disagree with, but I nonetheless respect their autonomy, and do not wish to deny these humans their legal rights.
Likewise, even if I strongly disagreed with the values of an advanced AI, I would still see value in their preferences being satisfied for their own sake, and I would try to respect the AI's autonomy and legal rights. I don't have a lot of faith in the inherent kindness of human nature relative to a "default unaligned" AI alternative.

5. I'm not fully committed to longtermism: I think AI has an enormous potential to benefit the lives of people who currently exist. I predict that AIs can eventually substitute for human researchers, and thereby accelerate technological progress, including in medicine. In combination with my other beliefs (such as my belief that AI alignment will probably be somewhat easy), this view leads me to think that AI development will likely be net-positive for people who exist at the time of alignment. In other words, if we allow AI development, it is likely that we can use AI to reduce human mortality, and dramatically raise human well-being for the people who already exist. I think these benefits are large and important, and commensurate with the downside potential of existential risks. While a fully committed strong longtermist might scoff at the idea that curing aging might be important — as it would largely only have short-term effects, rather than long-term effects that reverberate for billions of years — by contrast, I think it's really important to try to improve the lives of people who currently exist. Many people view this perspective as a form of moral partiality that we should discard for being arbitrary. However, I think morality is itself arbitrary: it can be anything we want it to be. And I choose to value currently existing humans, to a substantial (though not overwhelming) degree. This doesn't mean I'm a fully committed near-termist. I sympathize with many of the intuitions behind longtermism.
For example, if curing aging required raising the probability of human extinction by 40 percentage points, or something like that, I don't think I'd do it. But in more realistic scenarios that we are likely to actually encounter, I think it's plausibly a lot better to accelerate AI, rather than delay AI, on current margins. This view simply makes sense to me given the enormously positive effects I expect AI will likely have on the people I currently know and love, if we allow development to continue.
Please, people, do not treat Richard Hanania as some sort of worthy figure who is a friend of EA. He was a Nazi, and whilst he claims he has moderated his views, he is still very racist as far as I can tell.

Hanania called for trying to get rid of all non-white immigrants in the US and for the sterilization of everyone with an IQ under 90, indulged in antisemitic attacks on the allegedly Jewish elite, and even after his reform was writing about the need for the state to harass and imprison Black people specifically ('a revolution in our culture or form of government. We need more policing, incarceration, and surveillance of black people' https://en.wikipedia.org/wiki/Richard_Hanania). Yet in the face of this, and after he made an incredibly grudging apology about his most extreme stuff (after journalists dug it up), he's been invited to Manifold's events and put on Richard Yetter Chappell's blogroll.

DO NOT DO THIS. If you want people to distinguish benign transhumanism (which I agree is a real thing*) from the racist history of eugenics, do not fail to shun actual racists and Nazis. Likewise, if you want to promote "decoupling" factual beliefs from policy recommendations, which can be useful, do not duck and dive around the fact that virtually every major promoter of scientific racism ever, including allegedly mainstream figures like Jensen, worked with or published with actual literal Nazis (https://www.splcenter.org/fighting-hate/extremist-files/individual/arthur-jensen).

I love most of the people I have met through EA, and I know that, despite what some people say on Twitter, we are not actually a secret crypto-fascist movement (nor is longtermism specifically, which, whether you like it or not, is mostly about what its EA proponents say it is about). But there is in my view a disturbing degree of tolerance for this stuff in the community, mostly centered around the Bay specifically.
And to be clear, I am complaining about tolerance for people with far-right and fascist ("reactionary" or whatever) political views, not people with any particular personal opinion on the genetics of intelligence. A desire for an authoritarian government enforcing the "natural" racial hierarchy does not become okay just because you met the person with that desire at a house party and they seemed kind of normal and chill, or super-smart and nerdy. I usually take a far more measured tone on the forum than this, but here I think real information is conveyed by getting shouty. *Anyone who thinks it is automatically far-right to think about any kind of genetic enhancement at all should go read some Culture novels and note the implied politics (or indeed, look up the author's actual die-hard libertarian socialist views). I am not claiming that far-left politics is innocent, just that it is not racist.
Animal Justice Appreciation Note Animal Justice et al. v A.G of Ontario 2024 was recently decided and struck down large portions of Ontario's ag-gag law. A blog post is here. The suit was partially funded by ACE, which presumably means that many of the people reading this deserve partial credit for donating to support it. Thanks to Animal Justice (Andrea Gonsalves, Fredrick Schumann, Kaitlyn Mitchell, Scott Tinney), co-applicants Jessica Scott-Reid and Louise Jorgensen, and everyone who supported this work!
Why are April Fools jokes still on the front page? On April 1st, you expect to see April Fools' posts and know to be extra cautious when reading strange things online. But April 1st was 13 days ago, and there are still two April Fools posts on the front page. I think they should be clearly labeled as April Fools jokes so people can more easily differentiate EA weird stuff from EA weird stuff that's a joke. Sure, if you check the details you'll see that things don't add up, but we all know most people just read the title or the first few paragraphs.
Marcus Daniell appreciation note. @Marcus Daniell, cofounder of High Impact Athletes, came back from knee surgery and is donating half of his prize money this year. He projects raising $100,000. Through a partnership with Momentum, people can pledge to donate for each point he scores; he has raised $28,000 this way so far. It's cool to see, and I'm wishing him luck in his final year of professional play!
