I am considering a job as a Senior Product Manager with BenevolentAI - https://www.benevolent.com/  

BenevolentAI works on drug discovery and development, using AI and machine learning to identify drug targets and therapeutic interventions more efficiently.

On the face of it (not least because of their name), the company seems closely aligned with combating a number of global risks. In AI safety and alignment, they are spearheading some interesting work in the use of AI in pharmaceuticals, and developing capabilities here could have positive spillover for the AI community as a whole. In biosecurity, they are working towards pandemic preparedness by speeding up target and therapy identification with AI. If successful, their models could shorten the time to deliver an effective vaccine against a future pandemic.

Despite this seemingly positive area of work, I am conscious that they are absent from EA literature and don't feature on the 80,000 Hours jobs board, for example. They are ultimately still gunning to play in the 'big pharma' world, which clearly doesn't have the best reputation (no offence to the pioneering scientists who work within those machines).

Does anyone have direct experience of the company, any impressions of it, or thoughts/reflections more generally? I am keen to tap into the community mind on this and gain any insight that's out there.

Answers

I worked there as a software engineer for a bit over a year, starting at the beginning of 2020.

My two cents:

I really liked the people there and how work is structured into cross-functional teams. I learned a ton about biology and felt I was doing something useful (I was part of the team whose work resulted in two novel targets in AstraZeneca's portfolio, derived via the AI-enabled target identification process).

There is definitely potential to create something impactful, but I did not really consider them an EA organization myself (compared to CEA, Open Phil and similar).

This is super helpful, thank you - I'd be keen to speak to you some more about your time there, and your involvement with EA more generally/since. Would you be happy to have a chat?

siim: Hey, sure. Happy to chat.

"In AI safety and alignment, they are spearheading some interesting work in the use of AI in pharmaceuticals"

I don't know anything about this company, but from your description I'd be very surprised if their work matters much for AI Alignment (either helping or harming). This seems solidly focused on finding profitable ways to use AI in industry in fairly niche, specific applications.

I'd guess the key arguments for or against are biosecurity considerations, where I don't know much!

This seems fair. Something I should have added to the post is that I think the role would also help with my own career capital. To date I have not worked on any products using AI. I'm keen on moving into AI alignment in future, and feel that working in the space before advising on policy would be helpful. For various reasons the company is very suitable to my career trajectory at the moment, so I think it would stand me in good stead to learn while there, with an eye on the future.

I'd like to point out the possibility of "dual use":

  1. Tech that can be used to develop a vaccine quickly might (maybe, sometimes, not always) also be used to "print" a pathogen (such as smallpox) cheaply.
  2. Tech that advances what we know about AIs might also help create an AGI faster, which would leave us less time to figure out how to make it safe.

I haven't looked into the company yet; this is just a response to what you wrote. I got the sense that this company advances AI capabilities and bio capabilities, which might sometimes be used for safety, but also sometimes not.

Here's a nice post from Scott Alexander about times that good intentions went wrong in this way. It isn't rare, unfortunately.

I don't think I've heard of them, but they seem to do really exciting work! I'm more enthusiastic about the direct health benefits they could offer than about the potential flow-through effects of preventing x-risks, unless they have specially designed programs for major pandemics.

I like that their main solution is "disease agnostic" (in their principles for the ethical deployment of AI), and that they work on building tools for (their medical) researchers rather than just automating everything. Looking at their work, it does seem there's a lot of low-hanging fruit to be picked.

+1 for asking!

I recommend changing the title to something like "Should I work at BenevolentAI?" or "Is BenevolentAI impactful?" so that it will be easier for readers to know whether this post is relevant to them.

It seems to me Jack believes that they are impactful and is wondering why they are therefore absent from EA literature. I could be wrong here; he could instead be unsure how impactful it is and assuming that if EA hasn't indexed it, it's not impactful (fwiw I think this general inference pattern is pretty wrong). He additionally seems to be wondering whether he should work there, and taking into account views people from this community might have when making his decision.

Both of these comments are somewhat true. My default position is definitely 'they seem impactful to me'. It's also a bit of 'why aren't companies like this talked about much within EA?'. That said, the point of my post is to gain unbiased insight from the community, so it might be better posed as 'is BenevolentAI (positively) impactful?'.

I suppose as I wrote the title, I was reading the acronym as 'Effectively Altruistic' and as an adjective, rather than questioning whether it 'deserves a spot in the tribe'. Clearly 'impactful' would have been a better...

Yonatan Cale: (I like the current title!)