
If You've Never Heard of Kat Woods (But Really, Who in EA Hasn't?), Here's a List of Projects She Has Cofounded:


- Charity Entrepreneurship, an incubator that has launched 18 charities so far

- Charity Science Health, now Suvita, which has helped vaccinate over 200,000 children

- Nonlinear, a longtermist EA incubator

- Superlinear, a platform which hosts competitions to solve x-risk problems (with some pretty huge prizes)

As if that isn't enough, she's also a prolific contributor to the community's intellectual sphere; just look at her post history.


So, what should I ask her?

I'm planning to ask her about her plans to grow the AI safety community and what she sees as the biggest issues in the AI community.

I'm also very interested in asking questions about her mindset; Kat mentioned to me that she was able to overcome imposter syndrome, which I know many of us suffer from. We'll also talk about concrete ways to become happier, and I'm really keen to figure out how she manages to stay so productive.

Send more questions! I interview her in 22 hours. 

Also, if you're interested in my podcast, here's an episode I filmed with Jack Rafferty, co-founder of the Lead Exposure Elimination Project (also funded by Charity Entrepreneurship. Thanks, Kat!)

Comments

She co-authored a piece a few months back about finding AI safety emotionally compelling. I’d be interested in her thoughts on the following two questions related to that!

  • How worried should we be about suspicious convergence between AI safety being one of the most interesting/emotionally compelling questions to think about and it being the most pressing problem? There used to be a lot of discussion around 2015 about how it seemed like people were working on AI safety because it’s really fun and interesting to think about, rather than because it’s actually that pressing. I think that argument is pretty clearly false, but I’d be curious how she views this post as interacting with those concerns. 
  • It seems a bit like the post doesn’t draw a clean distinction between capabilities and safety. I agree that, to some extent, they’re inseparable (the people building transformative AI should care about making it safe), but how does she view the downside risks of, e.g., some of the most compelling parts of AI work being capabilities-related? More generally, how worried should we be, as a community, about how interconnected safety and capabilities work are? 
    • Somewhat related: As Patrick Collison puts it, people working on making more effective engineered viruses aren’t high-status among people working on pandemic prevention, so why are capabilities researchers high-status among safety researchers? 
    • (I have a decent sense of different answers within the community – this is not really a top concern of mine – but I’d nonetheless be interested in her take! My sense is that (1) the distinction isn’t nearly as clean since you want to build AI and make it go safely and (2) it’s good for capabilities work to be more safety-geared than the counterfactual.) 

Great questions! Can you say a bit more about how safety and capabilities work are interconnected?
