Quick takes

ASB
I’d be keen for great people to apply to the Deputy Director role ($180-210k/y, remote) at the Mirror Biology Dialogues Fund. I spoke a bit about mirror bacteria on the 80k podcast, and James Smith also had a recent episode on it. I generally think this is among the most important roles in the biosecurity space; I’ve been working with the MBDF team for a while now and am impressed by what they’re getting done. People might be surprised to hear that I put a ballpark 1% p(doom) on mirror bacteria alone at the start of 2024. That risk has been cut substantially by the scientific consensus that has since formed against building it, but there is some remaining risk that the boundaries are not drawn far enough from the brink to keep it out of the reach of bad actors. Having a great person in this role would help ensure a wider safety margin.
Got sent a set of questions from ARBOx to handle async; thought I'd post my answers publicly:

* Can you explain more about mundane utility? How do you find these opportunities?
  * Lots of projects need people and help! E.g. can you contribute to EleutherAI, or close issues in Neuronpedia? Some more ideas:
    * Contribute to the projects within SK’s GitHub follows and stars
    * Make some contributions within Big list of lists of AI safety project ideas 2025
    * Reach out to projects that you think are doing cool work and ask if you can help!
    * BlueDot ideas for SWEs
    * I'm an experienced software engineer. How can I contribute to AI safety?
    * The software engineer’s guide to making your first AI safety contribution in <1 week
  * From a non-coding perspective, you could e.g.:
    * Facilitate BlueDot courses
    * Give people feedback on their research proposals, drafts, etc.
    * Be accountability partners
    * Offer to talk to people and share what you know with those who know less than you
  * Check out these pieces from my colleagues:
    * How to have an impact when the job market is not cooperating by Laura G Salmeron
    * Your Goal Isn’t Really to Get a Job by Matt Beard
* What is your theory of change?
  * As an 80k advisor, my ToC is “Try and help someone to do something more impactful than if they had not spoken to me.”
  * Mainly, this is helping get people more familiar with/excited about/doing things related to AI safety. It’s also about helping them with resources and sometimes warm introductions to people who can help them even more.
* Are there any particular pipelines / recommended programs for control research?
  * Just the things you probably already know about: MATS and Astra are likely your best bets, but look through these papers to see if there is any low-hanging fruit to pick up as future work.
* What are the most neglected areas of work in the AIS space?
  * Hard question, with many opinions! I’m particularl
I’ve donated about $150,000 over the past couple of years. Here are some of the many (what I believe to be) mistakes in my past giving:

1. Donating to multiple cause areas. When I first started getting into philosophy more seriously, I adopted a vegan lifestyle and started identifying as EA within only a few weeks of each other. Deciding on my donation allocations across cause areas was painful, as I assign positive moral weights to both humans and animals, and they might even be close in intrinsic value. I felt the urge to apologize to my vegan, non-AI-worrier friends for increasing my ratio of AI safety donations to animal welfare donations, while my non-vegan, non-EA friends and family thought that donating to animals over humans was crazy. Now my view is something like: donations to AI safety are probably orders of magnitude more effective than to animal welfare or global health + development, so I should (and do) allocate 100% to AI safety.

2. Donating to multiple opportunities within the same cause area. Back in my early EA global health + development days, I found, and still find, the narrative of “some organizations are 100x more effective than others” pretty compelling, but I internally categorized orgs into two buckets: high EV and low EV. I viewed GiveWell-recommended organizations as broadly 'high EV,' assuming that even if their point estimates differed, their credence intervals overlapped sufficiently to render the choice between them negligible. This might even be true! However, I do not believe this generalizes to animal welfare and AI safety. Now I’ve come full circle in a way, and believe that actually, some things are multiple times (or even orders of magnitude) higher EV than other things, and I have chosen to shut up and multiply (see the sketch below). If you are a smaller donor, it is unlikely that your donation will sufficiently saturate a donation opportunity such that your nth dollar should go elsewhere.

3. Donating to opportunities that major organizations recommend
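To make the "shut up and multiply" arithmetic concrete, here is a minimal sketch with made-up numbers (the 10x effectiveness ratio and the dollar amounts are illustrative assumptions, not figures from the post): if one opportunity has roughly 10x the expected value per dollar of another and a small donor faces no meaningful diminishing returns, any split away from the better opportunity gives up most of the attainable value.

```python
# Toy expected-value comparison with hypothetical numbers (not the author's estimates).
# Opportunity A is assumed to be 10x as effective per dollar as opportunity B,
# and returns are treated as constant at small-donor scale (no saturation).

def total_ev(donation, share_to_a, ev_per_dollar_a=10.0, ev_per_dollar_b=1.0):
    """Expected value of splitting `donation` between opportunities A and B."""
    return donation * (share_to_a * ev_per_dollar_a + (1 - share_to_a) * ev_per_dollar_b)

donation = 10_000
for share in (1.0, 0.5, 0.0):
    print(f"{share:.0%} to A -> EV {total_ev(donation, share):,.0f}")
# 100% to A -> EV 100,000
# 50% to A -> EV 55,000
# 0% to A -> EV 10,000
```

Under these assumptions, a 50/50 split captures barely over half the value of giving everything to the better opportunity, which is the sense in which multiplying through favours concentrating rather than splitting for small donors.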
We've heard from a lot of people who feel they're getting rejected from jobs for being overqualified, which can be pretty frustrating. One thing that can help with this is to think about overqualification as an issue of poor fit for a particular role. Essentially, what feels like a general penalty for past success is usually about more specific concerns that your hiring manager might have, like:

* Will you actually be good at this work? You might have years of experience in senior roles, or other impressive credentials, but this doesn’t always mean you’ll be able to perform well in a more junior role. For instance, if you've been managing teams for years, they may worry you lack recent hands-on experience and don't know current best practices.
* Will you stick around? If you've been leading large teams but are applying for an individual contributor role, they might wonder if you'll actually find the work engaging or get bored without the higher-stakes responsibilities. They may worry you're just using this as a stepping stone until something better comes along. Hiring is costly and time-consuming, so they don't want to invest in someone who'll be gone in a few months.
* Will you expect more than they can offer? If you've worked in more senior roles, an organization might think you’ll be looking for opportunities for growth, benefits, and a salary beyond what it is able to offer. If you’re likely to demand more than they’re able to give, they won’t want to waste time advancing you through the process.

If you're genuinely excited about a role but are worried about being perceived as overqualified, the good news is that you can address these concerns in your application (especially your cover letter or application answers). For instance, if you're stepping down in seniority, explain why you actually want to do this work. If you’ve worked in management and want a return to the hands-on work you’re really passionate about, mention this.
I'm running a small fundraise match for Innovate Animal Ag until January 16th. IAA helped accelerate in-ovo sexing in the US, one of Lewis' Ten big wins in 2024 for farmed animals. I think Robert and team have a thoughtful and different approach to welfare that seems tractable. At the very least, it's a bet worth placing. I imagine IAA bringing new welfare technologies above the line of commercial viability and providing the fuel for orgs like the Humane League to push forward. Join me in my (small) match!