Jack R

Currently thinking and learning what I can about alignment theory, strategy, and community success.


Comments

You should join an EA organization with too many employees

Thanks!

Is it correct that this assumes that the marginal cost of supporting a user doesn't change depending on the firm's scale? It seems like some amount of the 50x difference between the EAF and Reddit could be explained by the EAF having fewer benefits of scale, since it is a smaller forum (though should this be counterbalanced by it being a higher-quality forum?).

Continuing the discussion since I am pretty curious how significant the 50x is, in case there is a powerful predictive model here.

[This comment is no longer endorsed by its author]
You should join an EA organization with too many employees

Could someone show the economic line of reasoning one would use to predict ex ante from the Nordhaus research that the Forum would have 50x more employees per user? (FYI, I might end up working it out myself.)
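
In case it helps pin down what the 50x refers to, here is a minimal sketch of the employees-per-user comparison; the staff and user counts below are placeholder numbers I made up for illustration, not actual EAF or Reddit figures.

```python
# Hypothetical illustration of the "employees per user" comparison.
# All counts are made-up placeholders, not actual EA Forum or Reddit figures.
eaf_employees, eaf_users = 10, 10_000
reddit_employees, reddit_users = 2_000, 100_000_000

eaf_ratio = eaf_employees / eaf_users           # employees per user, EA Forum
reddit_ratio = reddit_employees / reddit_users  # employees per user, Reddit

print(f"EAF: {eaf_ratio:.6f} employees per user")
print(f"Reddit: {reddit_ratio:.6f} employees per user")
print(f"Ratio: {eaf_ratio / reddit_ratio:.0f}x")
```

With these placeholder counts the ratio works out to 50x, which is the kind of gap being discussed; the open question is how much of it the Nordhaus-style reasoning would predict ex ante.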

Some potential lessons from Carrick’s Congressional bid

Maybe someone should user-interview or survey Oregonians to see what made people not want to vote for Carrick.

It's not obvious to me that according to the EA framework, AI Safety is helpful

No worries! Seemed mostly coherent to me, and please feel free to respond later.

I think the thing I am hung up on here is what counts as "happiness" and "suffering" in this framing.

It's not obvious to me that according to the EA framework, AI Safety is helpful

Could you try to clarify what you mean by the AI (or an agent in general) being "better off"?

It's not obvious to me that according to the EA framework, AI Safety is helpful

I'm actually a bit confused here, because I'm not settled on a meta-ethics: why isn't it the case that a large part of human values is about satisfying the preferences of moral patients, and that human values consider any or most advanced AIs as non-trivial moral patients?

I don't put much weight on this currently, but I haven't ruled it out.

Choosing causes re Flynn for Oregon

If you had to do it yourself, how would you go about a back-of-the-envelope calculation for estimating the impact of a Flynn donation?

I'm asking this question because I suspect that other people in the community won't actually do this, and because you are maybe one of the best-positioned people to do it, given that you seem interested in it.
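
In case it helps clarify what I'm asking for, here is the rough shape of BOTEC I have in mind (marginal effect of money on win probability, times the value of winning); every number below is a placeholder I invented for illustration, not an actual estimate for this campaign.

```python
# A rough back-of-the-envelope structure for a political-donation impact estimate.
# Every input is a made-up placeholder, not a real estimate of the Flynn campaign.
donation = 1_000                 # dollars donated (hypothetical)
delta_p_win_per_dollar = 1e-8    # marginal change in P(win) per dollar (hypothetical)
value_of_winning = 1e9           # dollar-equivalent value of the candidate winning (hypothetical)

expected_impact = donation * delta_p_win_per_dollar * value_of_winning
print(f"Expected impact: ~${expected_impact:,.0f}-equivalent")  # 1,000 * 1e-8 * 1e9 = $10,000
```

The hard part, and the reason I'm asking, is how one would actually estimate the two uncertain inputs: the marginal effect of money on win probability and the value of a win.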

Rational predictions often update predictably*

E.g., from P(X) = 0.8, I may think that in a week I will, most of the time, have notched this forecast slightly upwards, but less often have notched it further downwards, and this averages out to E[P(X) next week] = 0.8.
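
For a concrete (made-up) version of that: suppose 80% of the time the forecast ticks up to 0.85 and 20% of the time it drops to 0.60. Then E[P(X) next week] = 0.8 × 0.85 + 0.2 × 0.60 = 0.68 + 0.12 = 0.8, so the expectation stays at the current forecast even though most individual updates are upward.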

I wish you had said this in the BLUF -- it is the key insight, and the one that made me go from "Greg sounds totally wrong" to "Ohhh, he is totally right".

ETA: you did actually say this, but you said it in less simple language, which is why I missed it.
