Update (January 28): Marco Rubio has now issued a temporary waiver for "humanitarian programs that provide life-saving medicine, medical services, food, shelter and subsistence assistance."[1]
PEPFAR's funding was recently paused as a result of the executive order on foreign aid.[2] (It was previously reauthorized through March 25, 2025.[3]) If not exempted, this would pause PEPFAR's work for three months, effective immediately.
Marco Rubio has issued waivers for some forms of aid, including emergency food aid, and has the authority to issue a similar waiver for PEPFAR, allowing it to resume work immediately.[4] Rubio has previously expressed (relatively generic) positive sentiments about PEPFAR on Twitter,[5] and I don't have specific reason to think he's opposed to PEPFAR, as opposed to simply not caring strongly enough to give it a waiver without anyone encouraging him to.
I think it is worth considering calling your representatives to suggest that they encourage Rubio to give PEPFAR a waiver, similarly to the waiver he provided to programs giving emergency food aid. I have a lot of uncertainty here — in particular, I'm not sure whether this is likely to persuade Rubio — but I think it is fairly unlikely to make things actively worse. I think the argument in favor of calling is likely stronger for people who are represented by Republicans in Congress; I expect Rubio would care much more about pressure from his own party than about pressure from the Democrats.
1. ^
https://apnews.com/article/trump-foreign-assistance-freeze-684ff394662986eb38e0c84d3e73350b
2. ^
My primary source for this quick take is Kelsey Piper's Twitter thread, as well as the Tweets it quotes and the articles it and the quoted Tweet link to. For a brief discussion of what PEPFAR is, see my previous Quick Take.
3. ^
https://www.kff.org/policy-watch/pepfars-short-term-reauthorization-sets-an-uncertain-course-for-its-long-term-future/
4. ^
htt
Is anyone in EA coordinating a response to the PEPFAR pause? Seems like a very high priority thing for US-based EAs to do, and I'm keen to help if so and start something if not.
I do not believe Anthropic as a company has a coherent and defensible view on policy. They are known to have said things while hiring that they did not stand behind (they claim to have had good internal reasons for changing their minds, but people went to work for them because of impressions Anthropic created and later chose not to honor). It is known in policy circles that Anthropic's lobbyists behave similarly to OpenAI's.
From Jack Clark, a billionaire co-founder of Anthropic and its chief of policy, today:
Dario talks about a country of geniuses in a datacenter in the context of competition with China and a 10-25% chance that everyone will literally die, while Jack Clark is basically saying, "But what if we're wrong about betting on short AI timelines? Security measures and pre-deployment testing will be very annoying, and we might regret them. We'll have slower technological progress!"
This is not invalid in isolation, but Anthropic is a company that was built on the idea of not fueling the race.
Do you know what would stop the race? Getting policymakers to clearly understand the threat models that many of Anthropic's employees share.
It's ridiculous and insane that, instead, Anthropic is arguing against regulation because it might slow down technological progress.
The Belgian senate votes to add animal welfare to the constitution.
It's been a journey. I work for GAIA, a Belgian animal advocacy group that for years has tried to get animal welfare added to the constitution. Today we were present as a supermajority of the senate came out in favor of our proposed constitutional amendment. The relevant section reads:
It's a very good day for Belgian animals but I do want to note that:
1. This does not mean an effective shutdown of the meat industry, merely that all future pro-animal welfare laws and lawsuits will have an easier time. And,
2. It still needs to pass the Chamber of Representatives.
If there's interest, I will make a full post about it once it passes the Chamber.
EDIT: Translated the linked article on our site into English.
While quartz countertop sales grow, millions of people have silicosis from inhaling silica dust:
https://bmcpublichealth.biomedcentral.com/articles/10.1186/s12889-023-16295-2
Hundreds of thousands of people have died from the incurable disease over the last couple of decades.
Australia's the first country to enact a ban:
https://www.theguardian.com/australia-news/2023/dec/14/australia-will-become-the-first-county-to-ban-engineered-stone-bench-tops-will-others-follow
Are you an EU citizen? If so, please sign this citizen’s initiative to phase out factory farms (this is an approved EU citizen’s initiative, so if it gets enough signatures the EU has to respond):
stopcrueltystopslaughter.com
It also calls for reducing the number of animal farms over time, and introducing more incentives for the production of plant proteins.
(If initiatives like these interest you, I occasionally share more of them on my blog)
EDIT: If it doesn't work, try again in a couple of hours or days. Signature collection has just started and the site may be overloaded. The deadline is a year away, so there's no need to worry about running out of time.
The recently released 2024 Republican platform says they will repeal the recent White House executive order on AI, which many in this community considered a necessary first step toward making future AI progress safer and more secure. This seems bad.
From https://s3.documentcloud.org/documents/24795758/read-the-2024-republican-party-platform.pdf, see the bottom of page 9.
The recent pivot by 80,000 Hours to focus on AI seems (potentially) justified, but the lack of transparency and input makes me wary.
https://forum.effectivealtruism.org/posts/4ZE3pfwDKqRRNRggL/80-000-hours-is-shifting-its-strategic-approach-to-focus
TL;DR:
80,000 Hours, once a cause-agnostic, broad-scope introductory resource (with career guides, career coaching, blog posts, and podcasts), has decided to focus on upskilling and producing content about AGI risk, AI alignment, and an AI-transformed world.
----------------------------------------
According to their post, they will still host their backlog of content on non-AGI causes but may not promote or feature it. They also say that roughly 80% of new podcasts and content will be AGI-focused, and that other cause areas, such as nuclear risk and biosecurity, may have to be covered by other organisations.
Whilst I cannot claim in-depth knowledge of the norms around such shifts, or of AI specifically, I would set aside the actual case for the shift and instead focus on the friction in how the change was communicated.
To my knowledge (please correct me), there was no public information or consultation beforehand, and I had no forewarning of this change. Organisations such as 80,000 Hours may not owe this degree of openness, but since openness is a value heavily emphasised in EA, its absence feels slightly alienating.
Furthermore, the actual change may not be so dramatic, but it has left me grappling with the thought that other large organisations could pivot just as quickly. This isn't necessarily bad in itself, and it can even signal being 'with the times' and 'putting our money where our mouth is' on cause-area risks. But within an evidence-based framework, surely at least some heads-up would go a long way toward reducing short-term confusion and gaps.
Many introductory programs and fellowships utilise 80k resources, and sometimes as embeds rather than as standalone resources. Despite claimi