Quick takes

@Toby Tremlett🔹 and I will be repping the EA Forum Team at EAG SF in mid-Feb — stop by our office hours to ask questions, give us your hottest Forum takes, or just say hi and come get a surprise sweet! :) Reminder: applications for EAG SF close soon (this Sunday!)
Heads up for job-board users: you can now find more roles (1,200+) and set custom email alerts on our job board.

For added context: as promised earlier, we're continuing to scale and improve our job board to help talented people find impactful roles (including in causes, regions, and orgs that may have been underrepresented in EA so far). The main changes are that the board now lists more roles than before and lets you set alerts for your chosen filters, alongside smaller improvements like being able to filter for highlighted roles. We'll keep doing significant work on the board in the coming months, including adding more roles and improving existing features while building new ones.

Here's how you can help us help you:

* If you land a role that you found on the job board, please let us know! Even a short message about how our services helped you makes a huge difference to our ability to keep providing them.
* If you know of any orgs you think we should monitor for the board (including ones you work for), please share them!
* If you work at an org that's listed on the board and your application form asks candidates where they heard about the role, please consider adding "Probably Good" as an option (or ask your recruiting team to do it). We're also happy to collaborate on adding a UTM parameter to links or something similar; let us know if you're interested.
* If you're a hiring manager or recruiter who ends up hiring a candidate who found your role through our job board, please let us know!

Beyond that, please share the job board with people you think could benefit from it, and get in touch if you have any feedback or other suggestions. Thank you!
Despite the slightly terrifying security implications of the breakdown in unity between America and the rest of the NATO alliance, I think it also offers a really promising opportunity, in some scenarios, for shifting global AI development and governance onto a safer path.

Right now the US and China have adopted a 'race' dynamic. Needless to say, this is hugely dangerous and raises the risk of irresponsible practices and critical errors from both AI superpowers as we enter the critical phase towards AGI. The rupture between the UK/EU and the US over Greenland and tariffs has led to an immediate warming with China (PM Starmer just left Beijing, and now there's visa-free travel to China for UK citizens and talk of strategic partnerships). Before this point there was little reason for China to heed any warnings from middle powers over AI safety; they were on 'the other side' of the race and the struggle for global influence. That strategic picture has shifted dramatically.

With relations with China warming, there's a possibility for rigorous UK/EU advocacy, combined with effective AI policy, that emphasises caution and preparedness over a pure race to the finish. So far the Track 1 talks between China and the US have yielded limited results: trust remains low and neither side wants to show weakness. If these macro-strategic changes in UK/EU relations with China open a route towards influencing China's perspective on AI risk, maybe it could yield some positive results, perhaps even adherence to a multilateral AI safety regime? (Perhaps a bit too optimistic?) But if this creates an opportunity for China to shift even somewhat towards the cautionary side, it could open up room for more effective cooperative action globally, including with the US, and move us at least partly off the full-steam-ahead path we're currently on.

Am I wrong to feel any optimism about how this could affect the AI governance space? Considering writing a more fleshed-out piece on this.
Rethink Priorities is hiring an AI Strategy Team Lead. The full job description and application form are available on our careers page. If you know anyone who may be a strong fit for applied strategy, research, or programmatic work focused on reducing AI-related existential risks and securing positive outcomes, we'd appreciate you sharing this opportunity with them. We warmly encourage anyone who thinks they might be a good fit to apply.
Oscar Wilde once wrote that "people nowadays know the price of everything and the value of nothing." I can see a particular type of uncharitable EA critic saying the same about our movement, grossed out by how we try to put a price tag on human (or animal) lives. This reading is wrong. What they should be appalled by is if a life were truly worth $3,500, but that's not what we are claiming. The claim is that a life is invaluable; the world just happens to be such that we can buy this incredibly precious thing for the meager cost of a few thousand dollars. Nate Soares has written about this in greater detail here.