Fergus Fettes

Joined Nov 2022


The latter is just a far, far smaller harm per person--far less than 1/100 as great.

Surely it makes more sense to compare against the upside: someone forming a long-lasting and loving relationship.

Maybe that's an extreme case, but taking a balance of outcomes, I doubt the ratio would be 1/100.

Also strange that you chose to say 1/100 the harm and also 100x as many people -- surely if you have high confidence in those numbers, then they would balance out by definition? Or is this somewhere where you think this sort of scale insensitivity is valid?

Re 5)

Would also like to plug inciteful (which seems pretty similar to connectedsearch), which I have had a lot of good results with, and semanticscholar.

Also probably worth watching this space, which shows potential too :).

And while I'm at it, I feel obliged to plug the jolly pirates who make research possible for the rest of us:

  • Sci Hub <- top tip: if you can't get a paper, just add 'sci-hub.ru/' to the start of its URL, without even cutting off the 'http' etc., like so:
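To spell out the trick above as a snippet (the paper URL here is a hypothetical placeholder, not a real link):

```python
# Prepend the Sci-Hub mirror to a paywalled paper's URL, keeping the
# original scheme ("https://...") intact, exactly as described above.
paper_url = "https://example.com/some-paper"  # hypothetical placeholder
scihub_url = "sci-hub.ru/" + paper_url
print(scihub_url)  # → sci-hub.ru/https://example.com/some-paper
```

You then just paste the result into your browser's address bar.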


Hello all,

long-time lurker here. I was doing a bunch of reading today about polygenic screening, and one of the papers was so good that I had to share it, in case anyone interested in animal welfare was unfamiliar with it. The post is awaiting moderation but will presumably be here in due time.

So while I am making my first post I might as well introduce myself.

I have been sort of vaguely EA-aligned since I discovered the movement 5ish years ago; I have listened to every episode of the 80k podcast and read a tonne of related books and blog posts.

I have a background in biophysics, though I am currently working as a software engineer in a scrappy startup to improve my programming skills. I have vague plans to return to research and do a PhD at some point, but let's see.

EA things I am interested in:

  • bio bio bio (everything from biorisk and pandemics to the existential risk posed by radical transhumanism)
  • ai (that one came out of nowhere! I mean, I used to read Yudkowsky's stuff thinking it was sci-fi, but here we are. AGI timelines shrinking like spinach in a frying pan, hoo-boy)
  • global development (have lived, worked, and travelled extensively in third-world countries. lots of human capital out there being wasted)
  • animal welfare! or I was until I gave up on the topic in despair (see my essay above) though I am still a vegan-aligned vegetarian
  • philosophy?
  • economics?
  • i mean it's all good stuff basically

Recently I have also been reading some interesting criticisms of EA that have expanded my horizons a little. The ones I enjoyed the most were

But at the end of the day I think EA's own personal brand of minimally deontic utilitarianism is simple and useful enough for most circumstances. Maybe with a little bit of Nietzschean spice when I am feeling in the mood. And frankly, I think e/acc is fundamentally quite compatible, aside from the details of the coming AI apocalypse and [how|whether] to deal with it.

I also felt a little like coming out of the woodwork recently, after all the twitter drama and cancellation shitstorms. Just to say that I think you folks are doing a fine thing actually, and hopefully the assassins will move on to the next campaign before too long.

Best regards! I will perhaps be slightly more engaged henceforth.