trevor1

126 · Joined Sep 2019

Comments (150)

I was helped greatly by this; if not for this forum summary, I would have missed Zvi's "How to Bounded Distrust", which is very helpful for my job. It is also an excellent summary.

I don't think it's very surprising that 80% of the value comes from 20% of the proposed solutions.

The list of proposed solutions here is pretty illustrative of the Pareto principle: 80% of the value comes from 20% of the proposed solutions.

I'm glad that there's more really good work in this area, and I'm looking forward to cost-effectiveness figures much better than $180/DALY or $150/woman/year.

A lot of people in EA have no idea of the extent to which EA strategies are capable of revolutionizing existing work in this area. It seems that people won't touch this cause area with a ten-foot pole because the entire cause area is associated with a very large number of deranged people on contemporary social media.

But the presence, absence, or frequency of deranged people on social media tells us virtually nothing about the scope and severity of a cause area, let alone the extent to which existing programs can have their effectiveness increased by orders of magnitude. Social media (and the entertainment/media industry in general) is a broken clock, and it shouldn't distract or divert anyone from Akhil's game-changing research here.

It's off topic, I know, but does anyone here have any really good articles or papers arguing that short AI timelines are correct? This seems like a good place to ask, and I'm not aware of a better one, which is why I'm asking here even though I know I'm not supposed to.

Disclaimer: 

the human tendency to refuse to seriously think about contingencies just because they're "unthinkably horrible" is the entire reason why a bunch of hobbyists from SF are humanity's first line of defense in the first place

This is not necessarily true. There are some very solid alternative explanations for why it ended up like this.

I upvoted this because your plane analogy is fantastic, and epistemic-downvoted it because "EA needs to be more open to external pragmatism" could mean a lot of things: the obvious reading is "EA needs to get better at unknown unknowns and the people who understand them," but it could simultaneously be a dog whistle for "underdogs like me should be in charge instead" or "EA should be more in line with status quo ideologies that already have 100 million people."

I also use weasel words a lot, so I know what I'm talking about.

I disagree with this disagreement.

EA is built on a foundation of rejecting the status quo. EA might only do that in places where the status quo is woefully inadequate or false in some way, but the status quo is still the status quo, and it will strike back at people who challenge it.

The phenomenon described above is a side effect of optimization, not "contrarian bias". Contrarian bias is also a real problem for many people in EA, and especially for rationalists, but the only common factor is that neither group is made up of the kind of people who assume that everything is all right and go along with it.

The top 10% of submissions to the AI safety arguments competition was a crowdsourced attempt to produce one-liners and short paragraphs that argue for and articulate specific AI safety concepts well. I tested that document on someone, though, and by itself I don't think it worked very well. So it might be one of those things that looks good to someone already familiar with the concepts but doesn't work very well in the field.
