Daniel_Eth

Comments

Thoughts on being overqualified for EA positions

It seems like one solution would be to pay people more. Some in EA seem to be against this because they worry that high pay will attract people who are just in it for the money - but that's an argument for paying perhaps ~20% less than people would get in the private sector, not ~80% less (which seems to be roughly what some EA positions pay relative to the skills they want in a hire).

Case studies of self-governance to reduce technology risk

Thank you for this post - I thought it was valuable. I'd just like to flag one point about your recommendation that "we could do more to connect “near-term” issues like data privacy and algorithmic bias with “long-term” concerns": I think this is good if done in the right way, but it can be bad if done in the wrong way. Insofar as near-term and long-term concerns really are similar (e.g., lack of transparency in deep learning means we can't tell whether today's parole systems are relying on proxies we don't want, and plausibly could mean we won't know the goals of superintelligent systems in the future), it makes sense to highlight those similarities. On the other hand, insofar as the concerns aren't the same, statements that gloss over the differences (e.g., claims that we need UBI because automation will lead to superintelligent robots that aren't aligned with human interests) can be harmful for several reasons: people who notice that the logic doesn't actually go through will be turned off, and people who come away convinced that long-term concerns are just near-term concerns at a larger scale may neglect problems that are necessary for long-term success but have no near-term analogues.

peterbarnett's Shortform

Humans seem like (plausible) utility monsters compared to ants, and many religious people have a conception of God that would make Him a utility monster ("maybe you don't like prayer and following all these rules, but you can't even conceive of how much grander it is to God - 'joy' doesn't even do it justice - when we follow these rules, compared to even the best experiences in our whole lives!"). Anti-utility-monster sentiments seem largely to come from someone imagining a human who is pretty happy by human standards, thinking the words "orders of magnitude happier than anything any human feels", and then noticing that their intuition doesn't track the words "orders of magnitude".

alexrjl's Shortform

Just flagging that space doesn't solve anything - it only pushes back resource constraints a bit. Given speed-of-light limits, the total resources reachable via space travel grow at most polynomially with time (the reachable volume grows ~cubically, so the rate at which we acquire new resources grows ~quadratically), which won't keep up with either exponential or hyperbolic growth.
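The cubic-vs-exponential point can be made concrete with a quick sketch. This is purely illustrative - the 2% growth rate and the normalization at t = 1 are assumed numbers chosen for the example, not anything from the comment:

```python
# Illustrative comparison: cumulative resources reachable at light speed
# (proportional to the volume of a sphere of radius c*t, i.e. ~t^3)
# versus an economy growing exponentially at an assumed 2% per year.

def reachable_resources(t):
    """Cumulative reachable resources, taken as proportional to t**3."""
    return t ** 3

def economy(t, growth_rate=0.02):
    """Economy growing exponentially at `growth_rate` per year, starting at 1."""
    return (1 + growth_rate) ** t

for years in [10, 100, 1_000, 10_000]:
    r = reachable_resources(years)
    e = economy(years)
    print(f"t = {years:>6} yr: resources ~ {r:.2e}, exponential demand ~ {e:.2e}")

# Any exponential eventually dominates any polynomial: by t = 10,000 years,
# 2% annual growth exceeds 10^86, while the cubic resource term is only 10^12.
```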

MichaelA's Shortform

"Research or writing assistance for researchers (especially senior ones) at orgs like FHI, Forethought, MIRI, CHAI"

As a senior research scholar at FHI, I would find this valuable if the assistant were competent and the arrangement were low-cost to me (in terms of time, effort, and money). I haven't tried to set up anything like this, since I expect that finding someone competent, working out the details, and managing them would not be low cost; but I could imagine that if someone else (such as BERI) took care of the details, it very well might be low cost. I support efforts to set something like this up, and I'd like to throw my hat into the ring of "researchers who would plausibly be interested in assistants" if anyone does set this up.

Population Size/Growth & Reproductive Choice: Highly effective, synergetic & neglected

I'm honestly not certain - I don't believe we'll solve any of these problems via degrowth, so the only path to a real solution is innovation and/or adoption of solutions. More people would help with that, but would also contribute more to the problem in the meantime. Whether the sign is positive or negative probably depends on the specifics (e.g., if environmentalists have fewer kids out of fear of overpopulation, I think that will generally be bad for the environment).

Population Size/Growth & Reproductive Choice: Highly effective, synergetic & neglected

Another point - more humans means more people to find solutions. So we have more people polluting the planet, but also more people working on clean energy solutions that will get us off fossil fuels.

Do power laws drive politics?

Worth noting that if some political choices have very large negative outcomes, then choosing political paths that avoid those outcomes would have very positive counterfactual impact, even if no one sees it.

Good v. Optimal Futures

I agree with the general point that: 

E[~optimal future] - E[good future] >> E[good future] - E[meh/no future]

It's not totally clear to me how much can be done to increase the chances of a ~optimal future (whereas there's probably a lot more that can be done to decrease x-risk), but I do have an intuition that some good work can be done on the issue. This seems like an under-explored area, and I would personally like to see more research into it.

I'd also like to signal-boost this relevant paper by Bostrom and Shulman:
https://nickbostrom.com/papers/digital-minds.pdf
which proposes that an ~optimal compromise (along multiple axes) between human interests and totalist moral stances could be achieved by, for instance, filling 99.99% of the universe with hedonium and leaving the rest to (post-)humans.

What types of charity will be the most effective for creating a more equal society?

I think it's really bad if people feel like they can't push back against claims they don't agree with (especially regarding cause/intervention prioritization), and I don't think the author of a post saying (effectively) "please don't push back against this claim if you disagree with it" should be able to insulate claims from scrutiny. Note that the author didn't say "If we think claim X is true, what should we do? Please let's stay focused and not argue about claim X here," but instead "I think claim X is true - given that, what should we do?"
