Ian Turner

925 karma

Comments (228)
I would suggest thinking about it this way: Do I need to know what Garry Kasparov's winning move would be in order to know that he would beat me at chess? The answer is "no": he would definitely win, even if I can't predict exactly how.

As I wrote a couple of years ago, are you able to use your imagination to think of ways that a well-resourced and motivated group of humans could cause human extinction? If so, is there a reason to think that an AI wouldn't be able to execute the same plan?

I would welcome a blog post about RCTs, and if you decide to write one, I hope you consider the perspective below.

As far as I can tell, ~0% of nonprofits are interested in rigorously studying their programs in any way, RCTs or otherwise, and I can't help but suspect that this is largely because, when RCTs do get run, they mostly find that these cherished programs have ~no effect. It's not at all surprising to me that most charities that conduct RCTs feel pressured to do so by donors; but on the other hand, basically all charity activities ultimately flow from donor preferences, because donors are the ones with most of the power.

Living Goods is one interesting example, where they ran an RCT because a donor demanded it, got an unexpected (positive) result, and basically pivoted the whole charity based on that. I view that as a success story.

I am certainly not claiming that RCTs are appropriate for all kinds of programs, or some kind of silver bullet. It's more like, if you ask charities "would you like more or less accountability for results", the answer is almost always going to be, "less, thanks".

I don't understand how you think these legal mechanisms would actually serve to bind superintelligent AIs. Or to put it another way, could chimpanzees or dolphins have established a legal mechanism that would have prevented human incursion into their habitat? If not, how is this hypothetical situation different?

Regarding the idea of trade — doesn't this basically assume that humans will get a return on capital that is at least as good as the AIs' return on capital? If not, wouldn't the AIs eventually end up owning all the capital? And wouldn't we expect superintelligent AIs to be better than humans at managing capital?
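To make the compounding point concrete, here is a minimal sketch (with made-up starting capital and made-up return rates, not figures from the original comment) of how a persistent gap in returns would shrink the human share of total capital over time, even while human wealth keeps growing in absolute terms:

```python
# Hypothetical illustration: humans start with far more capital, but AIs
# compound faster, so the human *share* of total capital trends toward zero.
human_capital, ai_capital = 100.0, 1.0   # assumed starting stocks of capital
r_human, r_ai = 0.03, 0.30               # assumed annual returns on capital

for year in range(0, 51, 10):
    human_total = human_capital * (1 + r_human) ** year
    ai_total = ai_capital * (1 + r_ai) ** year
    human_share = human_total / (human_total + ai_total)
    print(f"year {year:2d}: human share of capital = {human_share:.1%}")
```

With these assumed numbers, the human share falls from roughly 99% at the start to well under 1% within 50 years; the exact path depends entirely on the assumed return gap.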

I agree that it aged well in terms of the expected effects of certain electoral outcomes, but the way I see it, that is different from claiming that electoral interventions would be cost-effective (even in retrospect). So much money and effort went into the election that it's not at all clear to me EA would have been able to make a difference, even with the full weight of the movement dedicated to it.

Side note: I’m coming around to the idea that Prohibition wasn’t actually a failed policy (except in the sense that it was overturned), because the decrease in domestic violence actually exceeded the amount of violence perpetrated by bootleggers. But from a democratic policymaking perspective, the legibility of the violence matters.

Here’s an essay from Vox making this case.

there is ample opportunity for peaceful and mutually beneficial trade with AIs that do not share our utility functions

What would humans have to offer AIs for trade in this scenario, where there are "more competitive machine alternatives to humans in almost all societal functions"?

as long as this is done peacefully and lawfully

What do these words even mean in an ASI context? If humans are relatively disempowered, that disempowerment would presumably extend to the use of force and to legal contexts as well.

The use of the word “shall” makes it sound like you are confidently predicting that the EU will do it, as opposed to proposing that the EU do it.

This question was also discussed in another forum post, "Why Brain Drain Isn't Something We Should Worry About", and probably in some other posts that I can’t find.

I feel like @Zvi put it well when he wrote,

I don’t see any wolves. So why are you proposing to have a boy watch the sheep and yell ‘wolf’ if a wolf shows up? Stop crying wolf.

Thanks Julia for writing this. It’s correct all the way around.

I can’t help but feel, though, that there is something a little mean-spirited in targeting those donating to Notre Dame, the opera, etc. There is a common and (in my opinion) somewhat toxic pattern where if someone spends their money on yachts, mansions, etc., nobody complains, but as soon as they do something even a little bit public-spirited, all of a sudden everyone feels free to criticize. Like, we can have plenty of objections to MacKenzie Scott’s philanthropic choices, but shouldn’t Jeff Bezos get at least as much commentary for his non-philanthropic choices?
