it's the arguments you least agree with that you should extend the most charity to
I strongly disagree with flat earthers, but I don't think that I should extend a lot of charity to arguments for a flat earth.
Also, on a quick skim, I could not find where this claim is argued for in the linked "I Can Tolerate Anything Except The Outgroup".
Caveat: I consider these minor issues; I hope I don't come across as too accusatory.
Interesting, why's that? :)
It seems that the reason for cross-posting was that you personally found it interesting. If you use the EA Forum Team account, it sounds a bit like an "official" endorsement, and makes the Forum Team seem less neutral.
Even if you use another account name (e.g. "Selected Linkposts") that is run by the Forum Team, I think there should be some explanation of how those linkposts are selected; otherwise it seems like arbitrarily privileging some content over other content.
A "LinkpostBot" account would be good if the cross-posting is automated (e.g. every ACX article who mentions Effective Altruism).
I also personally feel kinda weird getting karma for just linking to someone else's work
I think it's fine to gain karma by linkposting and being an active forum member. It doesn't bother me, and I don't think you should worry about it (although I can understand that it might feel uncomfortable to you). Other people are also free to link-post.
Personally when I see a linkpost, I generally assume that the author here is also the original author
I think starting the title with [linkpost] fixes that issue.
the child is adding an expected value of $27,275 per year in social surplus.
It would take $133,333 per year to raise a child to adulthood for it to not be worthwhile
I think the comparison of "social surplus" to effective donations is mistaken here. A social surplus of $27,275 (in the US) does not save 5 lives, but an effective donation of that size might.
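To make the implicit arithmetic explicit (a back-of-the-envelope; the roughly $5,500-per-life cost is an assumption, in the ballpark of estimates for GiveWell's top charities):

\[
\frac{\$27{,}275 \text{ donated effectively}}{\approx \$5{,}500 \text{ per life saved}} \approx 5 \text{ lives}
\]

whereas the same $27,275 of diffuse US social surplus implies no comparable mortality reduction, so the two figures are not interchangeable.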
There used to be such a system: https://forum.effectivealtruism.org/posts/YhPWq784eRDr5999P/announcing-the-ea-donation-swap-system It got shut down 7 months ago (see the comments on that post).
Some (or all?) Lightspeed grants are part of SFF: https://survivalandflourishing.fund/sff-2023-h2-recommendations
Having better hacking capability than China seems like a low bar for super-human AGI. The AGI would only need to be better at writing and understanding code than a small group of talented humans, and to have access to some servers. That sounds easy if you accept the premise of smarter-than-human AGI.