jimrandomh

Comments (68)

It samples unread posts from a curated list; when that list is empty, it samples weighted by karma. Unfortunately, if you read posts while logged out, or on a previous version of the site, those old posts won't be marked as read, so they'll come up again.
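For illustration, here's a minimal TypeScript sketch of that two-stage sampling, with hypothetical types and names (this is not the actual ForumMagnum implementation):

```typescript
interface Post {
  id: string;
  karma: number;
  curated: boolean;
  read: boolean;
}

// Sketch of the two-stage sampling described above: prefer unread curated
// posts; once those run out, sample unread posts weighted by karma.
function samplePost(posts: Post[]): Post | undefined {
  const unreadCurated = posts.filter(p => p.curated && !p.read);
  if (unreadCurated.length > 0) {
    return unreadCurated[Math.floor(Math.random() * unreadCurated.length)];
  }

  const unread = posts.filter(p => !p.read);
  const totalWeight = unread.reduce((sum, p) => sum + Math.max(p.karma, 0), 0);
  if (totalWeight === 0) return unread[0];

  // Roulette-wheel selection: each post's chance is proportional to karma.
  let threshold = Math.random() * totalWeight;
  for (const post of unread) {
    threshold -= Math.max(post.karma, 0);
    if (threshold <= 0) return post;
  }
  return undefined;
}
```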

jimrandomh · 24d

I didn't make that claim in the grandparent comment, and I don't know of any specific other deceptive statements in it. But, on consideration... yeah, there probably are. Most of the post is about internal details of FHI operations which I know little about and have no easy way to verify. The claim about the Apology is different in that it's easy to check; it seems reasonable to expect that if the most-verifiable part contains an overreach, then the less-verifiable parts probably do too.

jimrandomh · 24d

In my experience, there's a pattern in social attacks like this: critics are persistently and consistently unwilling to restrain themselves to making only criticisms that are true, regardless of whether the true criticisms alone would have been enough. This is a big deal and should not be tolerated.

jimrandomh · 2mo

> reducing existential risk by .00001 percent to protect 10^18 future humans

Very-small-probability of very-large-impact is a straw man. People who think AGI risk is an important cause area think so because they also think the probability is large.

jimrandomh · 2mo

I roll to disbelieve on these numbers. "Multiple reports a week" would be >100/year, which from my perspective doesn't seem consistent with the combination of (1) the total number of reports I'm aware of being a lot smaller than that, and (2) the fact that I can match most of the cases in the Time article (including ones that had names removed) to reports I already knew about.

(It's certainly possible that there was a particularly bad week or two, or that you're getting filled in on some sort of backlog.)

I also don't believe that a law school, or any group with 1300 members in it, would have zero incidents in 3-5 years. That isn't consistent with what we know about the overall rate of sexual misconduct in the US population; it seems far more likely that incidents within those groups are going unreported, or are being reported somewhere you don't see and being kept quiet.
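To make the base-rate arithmetic explicit, here's a back-of-the-envelope calculation; the 0.5%-per-year incidence rate is purely an assumed illustrative figure, not a sourced statistic:

```typescript
// All numbers besides group size and time window are assumptions for
// illustration, not measured statistics.
const members = 1300;
const assumedAnnualRate = 0.005; // illustrative incidence rate per person-year
const years = 4;                 // midpoint of the 3-5 year window

const expectedIncidents = members * assumedAnnualRate * years;
console.log(expectedIncidents); // 26 -- "zero incidents" would be a big outlier
```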

They aren't currently shown separately anywhere. I added it to the ForumMagnum feature-ideas repo, but I'm not sure whether we'll wind up doing it.

Answer by jimrandomh · Jan 24, 2023

As a datum from the LessWrong side (where I'm a moderator): when the crossposting was first implemented, there were initially a bunch of crossposts that weren't doing well (from a karma perspective) and seemed to be making the site worse. To address this, we added a requirement that to crosspost from EAF to LW, you need 100 karma on LW. I believe the karma requirement is symmetrical: to crosspost an LW post onto EAF, you need 100 EAF karma.

The theory is that a bit of karma shows you probably have some familiarity with the crosspost-destination site's culture, and probably aren't just crossposting out of a vague sense of wanting to maximize your post's engagement. I don't think it's been a problem (in the EAF->LW crossposting direction) since.
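For concreteness, a hypothetical sketch of what such a gate could look like (illustrative names only; not the real ForumMagnum code):

```typescript
const CROSSPOST_KARMA_THRESHOLD = 100;

interface User {
  lwKarma: number;
  eafKarma: number;
}

// The requirement is on the destination site: to crosspost there, you
// need karma *there*, as a weak signal of familiarity with its culture.
function canCrosspost(user: User, destination: "LW" | "EAF"): boolean {
  const karmaAtDestination =
    destination === "LW" ? user.lwKarma : user.eafKarma;
  return karmaAtDestination >= CROSSPOST_KARMA_THRESHOLD;
}
```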

Suppose there's a spot in a sentence where either of two synonyms would be effectively the same. That's 1 bit of available entropy. Then a spot where either a period or a comma would work; that's another bit of entropy. If you compose a message and annotate it with 48 two-way branches like this, using a notation like spintax, then you can programmatically create 2^48 effectively-identical messages. If you then check the hash of each, you have good odds of finding one which matches the 48-bit hash fragment.

(FYI, a hash of only 12 hex digits (48 bits) is not long enough to prevent retroactively composing a message that matches the hash fragment, if the message is long enough that you can find 48 bits of irrelevant entropy in it.)
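Here's a runnable TypeScript (Node) sketch of that construction, scaled down to 16 branch points and a 12-bit (3-hex-digit) target so it finishes instantly; the fragment pairs and target are made up, and with ~48 branch points and ~2^48 hash checks the same approach defeats a 12-hex-digit fragment:

```typescript
import { createHash } from "crypto";

// Each pair is two interchangeable fragments: one bit of entropy.
// Real spintax would use actual synonym/punctuation choices; these
// placeholder pairs just demonstrate the mechanics.
const branches: [string, string][] = [];
for (let i = 0; i < 16; i++) {
  branches.push([`word${i}`, `term${i}`]);
}

// Render one of the 2^16 effectively-identical messages.
function render(bits: number): string {
  return branches.map((pair, i) => pair[(bits >> i) & 1]).join(" ");
}

// Search for a variant whose SHA-256 starts with the target fragment.
// A 12-bit target needs ~2^12 tries on average; a 48-bit fragment
// would need ~2^48 tries and ~48 branch points.
const target = "abc";
for (let bits = 0; bits < 2 ** branches.length; bits++) {
  const msg = render(bits);
  const hash = createHash("sha256").update(msg).digest("hex");
  if (hash.startsWith(target)) {
    console.log(`match at bits=${bits}: ${hash.slice(0, 12)}`);
    break;
  }
}
```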

jimrandomh · 3mo

One of the defining characteristics of EA is rejecting certain specific reasons for counting people unequally; in particular, under EA ideology, helping someone in a distant country is just as good as helping a nearby person by the same amount. Combined with the empirical fact that a dollar has a much larger effect when spent on carefully chosen interventions in poorer countries, this leads to EA emphasizing poverty-reduction programs in poor, mainly African countries, in contrast to non-EA philanthropy, which tends to favor donations local to wherever the donor is.

This is narrower than the broad philosophical commitment Habryka is talking about, though. Taken as a broad philosophical commitment, "all people count equally" would force some strange conclusions when translated into a QALY framework and when applied to AI, and would also imply that you shouldn't favor people close to you over people in distant poor countries at all, even if the QALYs-per-dollar were similar. I think most EAs are in a position where they're willing to pay $X/QALY to extend the lives of distant strangers, $5X/QALY to extend the lives of acquaintances, and $100X/QALY to extend the lives of close friends and family. And I think this is philosophically coherent and consistent with being an effective altruist.
