About a week ago, Spencer Greenberg and I were debating what proportion of Effective Altruists believe enlightenment is real. Since he has a large audience on X, we thought a poll would be a good way to increase our confidence in our predictions.
Before I share my commentary...
Trump recently said in an interview (https://time.com/6972973/biden-trump-bird-flu-covid/) that he would seek to disband the White House office for pandemic preparedness. Given that he usually doesn't give specifics on his policy positions, this seems like something he ...
I'd class those comments as mostly a disagreement around ends. The emphasis on not getting credit from his own support base, and on Republicans not wanting to talk about it, is the most revealing part. A sizeable fraction of his most committed supporters are radically antivax, to the point that there was audible booing at his own rally when he recommended they get the vaccine, even after he'd very carefully worded it in terms of their "freedoms". It's less a narrow disagreement about a specific layer of Biden bureaucracy and more a recognition that his base sees less government involvement in healthcare, less reaction to future pandemics, and in some cases even the rejection of evidence-based medicine as valuable ends in themselves. And whilst he clearly doesn't reject evidence-based medicine himself, above all Trump loves adulation from that fanbase.
Either way, his position is quite different from that of EAs who see pandemic preparedness as an extremely important permanent priority rather than a reactive thing.
EA is very important to me. I’ve been EtG for 5 years and I spend many hours per week consuming EA content. However, I have zero EA friends (I just have some acquaintances).
(I don't live near a major EA hub. I've attended a few meetups but haven't really connected with ...
I made a lot of my early friends in EA through my local group. I'm guessing you don't have one, since you said you're not in an EA hub (?), but there's always EA Anywhere.
You could also organise an online discussion group yourself — a couple of my closest friends today were people I met because I started an online discussion group on animal welfare during the pandemic. We would discuss an article or paper on animal advocacy for an hour or so, and then some people would stay and chat for the rest of the evening. It was really nice :)
A crucial consideration in assessing the risks of advanced AI is the moral value we place on "unaligned" AIs—systems that do not share human preferences—which could emerge if we fail to make enough progress on technical alignment.
In this post I'll consider three potential...
I disagree with the implied theses in statements like "I'm not very sympathetic to pausing or slowing down AI as a policy proposal."
This is my own opinion, not the main thesis. It seems perfectly fine to say, "The reasons to believe X are weak, in my opinion, so I'm not strongly swayed by arguments that we need to do Y because of X". More importantly, you're completely overlooking my arguments in section 3, which were absolutely critical to forming my opinion here. And you omitted the beginning of that sentence in which I simply stated "This is a big reaso...
Manifund is a philanthropic startup that runs a website and programs to fund awesome projects. Since January, we've wrapped up 3 different programs for impact certificates (aka venture-style funding for charity projects): ACX Grants, Manifold Community Fund, and the ...
Executive summary: Manifund ran several impact certificate programs in Q1 2023 with mixed results, and is exploring new directions like regranting and prize challenges to find product-market fit for funding awesome projects.
Key points:
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
Subscribe here to receive future versions.
Listen to the AI Safety Newsletter for free on Spotify.
In November, leading AI labs committed to sharing their models before deployment to be tested by the UK AI Safety Institute. But reporting from Politico shows that these commitments have fallen through.
OpenAI, Anthropic, and Meta have all failed to share their models with the UK AISI before deployment. Only Google DeepMind, headquartered in London, has given pre-deployment access to UK AISI.
Anthropic released the most powerful publicly available language model, Claude 3, without any window for pre-release testing by the UK AISI. When asked for comment, Anthropic co-founder Jack...
The UK government’s public consultation for their proposed animal welfare labelling scheme[1] closes on the 7th of May, i.e. a week away. If you’re in the UK and care about animal welfare, I think you should probably submit an answer to it. If you don't care about ...
This is a sensationalist video put out by an influential YouTuber who generally creates good science videos, but in this case it does not do justice to the work of the FHI or its substantial and pioneering achievements. To pull the rug out from under the feet of such crucial researchers...
There should be a moderate bar for linkposting, as it takes up one of the frontpage slots. People may be downvoting because they see a link post with no body text as a low-effort post, and thus less likely to reflect consideration of the bar.
I was interviewed in yesterday’s 80,000 hours podcast: Dean Spears on why babies are born small in Uttar Pradesh, and how to save their lives. As I say in the podcast, there’s good evidence that this is a cost-effective way to save lives. Many peer-reviewed articles show that Kangaroo Mother Care is effective. The 80k link has many further links to the articles and data behind the podcast. You can see GiveWell’s write up of their support for our project at this link.
This partnership with a large government medical college is able to reach many babies. And with more funding, we could achieve more. Anyone can support this project by donating at riceinstitute.org to a 501(c)(3) public charity.
If you have any questions, please feel free to ask below!
I agree that it's surprising this doesn't receive more attention in EA. I imagine a big part of it is that it would get a lot of pushback from the more rationalist EAs, who feel like it's too 'woo'/new age-y and find the stigma/connotations/vibes around it off-putting. It does get a fair bit of attention on Twitter/X, though; you might be interested in the discussion around this post.
I do think there would be some appetite in the community to fund research related to this, but I'm not sure it would appeal to the usual 'big funders'.