Downvoted because it was very uninformative on the topic that matters most. Just saying 'there is a range of estimates' is about as unhelpful as you can be with respect to a data point.
If I take the time to read through the linked papers I will return with a more substantive comment.
Also, the fact that the scene ended with the most pessimistic estimates highlighted was annoying.
Edit: also, the title is clickbaity and unsupported--I get the impression they don't really know what the word 'probably' means beyond what they read in a PDF about SEO?
The latter is just a far, far smaller harm per person--far less than 1/100 as great.
Surely it makes more sense to compare against the upside--someone forming a long-lasting and loving relationship.
Maybe that's extreme, but taking a balance of outcomes, I doubt the ratio would be 1/100.
Also strange that you chose to say 1/100 and also 100x as many people--surely if you have high confidence in those numbers, then that would balance out by definition? Or is this somewhere where you think this sort of scale insensitivity is valid?
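To spell out the arithmetic behind 'balance out' (a rough sketch, assuming harms simply aggregate linearly): if each affected person experiences 1/100 of the harm, h/100 instead of h, but 100 times as many people are affected, 100n instead of n, then the totals are (h/100) × 100n = h × n, i.e. exactly the same.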
Re 5)
I would also like to plug inciteful, which seems pretty similar to connectedsearch (a tool I have had a lot of good results with), as well as semanticscholar.
Also probably worth watching this space; it shows potential too :).
And while I'm at it, I feel obliged to plug the jolly pirates who make research possible for the rest of us:
Hello all,
Long-time lurker here. I was doing a bunch of reading today about polygenic screening, and one of the papers was so good that I had to share it, in case anyone interested in animal welfare was unfamiliar with it. The post is awaiting moderation but will presumably appear here in due time.
So while I am making my first post, I might as well introduce myself.
I have been sort of vaguely EA-aligned since I discovered the movement 5ish years ago; I have listened to every episode of the 80k podcast and read a tonne of related books and blog posts.
I have a background in biophysics, though I am currently working as a software engineer at a scrappy startup to improve my programming skills. I have vague plans to return to research and do a PhD at some point, but let's see.
EA things I am interested in:
Recently I have also been reading some interesting criticisms of EA that have expanded my horizons a little; the ones I enjoyed the most were:
But at the end of the day, I think EA's own personal brand of minimally deontic utilitarianism is simple and useful enough for most circumstances. Maybe with a little bit of Nietzschean spice when I am feeling in the mood... and frankly I think e/acc is fundamentally mostly quite compatible, aside from the details of the coming AI apocalypse and [how|whether] to deal with it.
I also felt a little bit like coming out of the woodwork recently, after all the Twitter drama and cancellation shitstorms, just to say that I think you folks are doing a fine thing actually, and hopefully the assassins will move on to the next campaign before too long.
Best regards! I will perhaps be slightly more engaged henceforth.
Great work.
I think the headline is very fair. I agree with other commenters here saying 'ah, but how the tides will turn'--but you clearly take this into account and say as much in the headline.
Let's not get too complacent or, ahem, count our free-roaming domesticated junglefowl.
OTOH, if we get something like the results of the Malan 2022 field experiment 'for free' once we have PTC parity, I feel like the ball will be well and truly rolling, and we'll get scale-ups and hopefully a phase transition sometime thereafter, maybe with a few other clever interventions.
Again, great work and thanks!