Dragon God

Comments

On Deference and Yudkowsky's AI Risk Estimates

(I hadn't seen this reply when I made my other reply).

What do you think of legitimising, going forward, behaviour that calls the credibility of other community members into question?

I am worried about displacing concrete object-level arguments as the sole domain of engagement: a culture in which arguments are not allowed to stand on their own, and in which people have to worry about prior credibility, track record, and legitimacy when formulating their arguments...

It feels like a worse epistemic culture.

On Deference and Yudkowsky's AI Risk Estimates

To expand on my complaints in the above comment.

I do not want an epistemic culture that finds it acceptable to challenge an individual's overall credibility in lieu of directly engaging with their arguments.

I think that's unhealthy and contrary to collaborative knowledge-building.

Yudkowsky has laid out his arguments for doom at length. I don't fully agree with those arguments (I believe he's mistaken in two or three serious and important ways), but he has laid them out, and because of that I can disagree with him on the object level.

Given that the explicit arguments are present, I would prefer posts that engaged with and directly refuted the arguments if you found them flawed in some way.

I don't like this direction of attacking his overall credibility.

Attacking someone's credibility in lieu of engaging with their arguments feels like a severe epistemic transgression.

I am not convinced that the community is better off for a norm that accepts such epistemic call-out posts.

On Deference and Yudkowsky's AI Risk Estimates

I prefer to just analyse and refute his concrete arguments on the object level.

I'm not a fan of engaging with the person of the arguer instead of their arguments.

Granted, I don't practice epistemic deference with regard to AI risk (so I'm not the target audience here), but I'm really not a fan of this kind of post. It rubs me the wrong way.

Challenging someone's overall credibility instead of their concrete arguments feels like bad form and [logical rudeness](https://www.lesswrong.com/posts/srge9MCLHSiwzaX6r/logical-rudeness).

I wish EAs did not engage in such behaviour, especially not towards other members of the community.

Dragon God's Shortform

There is an ongoing "friend matching" campaign for GiveWell.

Anyone who donates through a friend link will have their donation matched, up to $250. Please donate.

My friend link.