BenMillwood

Can human extinction due to AI be justified as good?

"if AI has moral status, then AI helping its replicas grow or share pleasant experiences is morally valuable stuff". Sure, but I think the claim is that "most" AI won't be interested in doing that, and will pursue some other goal instead that doesn't really involve helping anyone.

The Cost of Rejection

It's a little aside from your point, but good feedback is not only useful for emotionally managing the rejection: it's also incredibly valuable information. Consider especially that someone who is applying for a job at your organization may well apply for jobs at other organizations. Telling them what was good or bad about their application will help them improve that process and make them more likely to find something that is the right fit for them. It could be vital in helping them understand what they need to do to position themselves to be more useful to the community, or at the very least it could save them the time and effort of applying for more jobs with requirements they don't meet, and save those hiring teams the time and effort of rejecting them.

A unique characteristic of EA hiring is that it's often good for your goals to help candidates who didn't succeed at your process succeed at something else nearby. I think we often don't realize how significantly this shifts our incentives in cases like these.

How would you run the Petrov Day game?

As with Sanjay's answer, I think this is a correct diagnosis of a problem, but the advertising solution seems worse than the problem itself:

  • A month of harm seems too long to me.
  • I can't think of anything we'd want to advertise on LW that we wouldn't already want to advertise on EAF, and we've chosen "no ads" in that case.
How would you run the Petrov Day game?

I'd like to push the opt-in / opt-out suggestion further and say that the button should only affect people who have opted in (that is, the button bans all the opted-in players for a day, rather than taking the website down for a day). Or you could imagine running it somewhere other than the Forum entirely, in a venue more focused on these kinds of collaborative social experiments.

I can see an argument that this takes away too much from the game, but in that case I'd lean towards just not running it at all. It's a cute idea, but it doesn't feel important enough to me to justify obstructing unrelated uses of the forum and creating a bunch of unnecessary frustration. I'd like the forum to remain accessible to people who don't think of themselves as "in the community", and I think stuff like this gets in the way of that.

How would you run the Petrov Day game?

I think this correctly identifies a problem (not only is it a bad model for reality, it's also confusing for users IMO). I don't think extra karma points are the right fix, though, since I imagine a lot of people only care about karma insofar as it's a proxy for other people's opinions of their posts, which you can't just give 30 more of :)

(also it's weird inasmuch as karma is a proxy for social trust, whereas nuking people probably lowers your social trust)

Honoring Petrov Day on the EA Forum: 2021

Sure, precommitments are not certain, but they're a way of raising the stakes for yourself (putting more of your reputation on the line) to make it more likely that you'll follow through, and to make it more convincing to other people that you will.

In other words: of course you have no way to reach probability 0, but you can form intentions and make promises that reduce the probability. (I guess technically this is "restructuring your brain"?)

Honoring Petrov Day on the EA Forum: 2021

Yeah, that did occur to me. I think it's more likely that he's telling the truth, and even if he's lying, I think it's worth engaging as if he's sincere, since other people might sincerely believe the same things.

Honoring Petrov Day on the EA Forum: 2021

I downvoted this. I'm not sure if that was an appropriate way to express my views about your comment, but I think you should lift your pledge to second strike, and I think it's bad that you pledged to do so in the first place.

I think one important disanalogy between real nuclear strategy and this game is that here there's essentially no reason to press the button. That means we don't really understand the motives of anyone who does press it, which makes it less clear that this kind of comment addresses those motives.

Consider that the last time LessWrong was persuaded to destroy itself, it was approximately by accident. Especially given that the event we're commemorating was essentially another accident, I think the most likely story for why one of the sites gets destroyed is unintentional, and thus unaffected by precommitments to retaliate.

Cultured meat predictions were overly optimistic

While I think it's useful to have concrete records like this, I would caution against drawing conclusions about the cultured meat community specifically unless we compare it with other fields and find that forecast accuracy is better elsewhere. I'd expect overoptimistic forecasts to be very common whenever people evaluate their own work, in any field.

The motivated reasoning critique of effective altruism

Another two examples off the top of my head:
