Wei_Dai

Comments

Introducing a New Course on the Economics of AI

Not directly related to the course, but since you're an economist with an interest in AI, I'm curious what you think about "AGI will drastically increase economies of scale".

Remove An Omnivore's Statue? Debate Ensues Over The Legacy Of Factory Farming

My own fantasy is that people will eventually be canceled for failing to display sufficient moral uncertainty. :)

Why AI alignment could be hard with modern deep learning

It sounds like their positions are not public, since you don't cite anyone by name? Is there a reason for that?

Why AI alignment could be hard with modern deep learning

There’s a very wide range of views on this question, from “misalignment risk is essentially made up and incoherent” to “humanity will almost certainly go extinct due to misaligned AI.” Most people’s arguments rely heavily on hard-to-articulate intuitions and assumptions.

My sense is that the disagreements are mostly driven "top-down" by general psychological biases/inclinations toward optimism or pessimism, rather than "bottom-up" as the result of independent lower-level disagreements over specific intuitions and assumptions. The reason I think this is that there seems to be a strong correlation between concern about misalignment risk and concern about other kinds of AI risk (i.e., other AI-related x-risks). In other words, if the disagreement were "bottom-up", you'd expect at least some people who are optimistic about misalignment risk to be pessimistic about other kinds of AI risk, such as what I call "human safety problems" (see examples here and here). But in fact I don't see anyone whose position is something like, "AI alignment will be easy or likely solved by default, therefore we should focus our efforts on these other kinds of AI-related x-risks that are much more worrying."

(From my limited observation, optimism/pessimism on AI risk also seems correlated with optimism/pessimism on other topics. It might be interesting to verify this through some systematic method like a survey of researchers.)

In favor of more anthropics research

See this comment by Vladimir Slepnev and my response to it, which explain why I don't think UDT offers a full solution to anthropic reasoning.

AMA: Jason Brennan, author of "Against Democracy" and creator of a Georgetown course on EA

Do you have a place where you've addressed critiques of Against Democracy that have come out since it was published, such as the ones in https://quillette.com/2020/03/22/against-democracy-a-review/?

AMA: Jason Brennan, author of "Against Democracy" and creator of a Georgetown course on EA

Can you address these concerns about Open Borders?

  1. https://www.forbes.com/sites/modeledbehavior/2017/02/26/why-i-dont-support-open-borders

  2. Open borders is in some sense the default, and states had to explicitly decide to impose immigration controls. Why is it that every nation-state on Earth has decided to impose immigration controls? I suspect it may be through a process of cultural evolution in which states that failed to impose immigration controls ceased to exist. (See https://en.wikipedia.org/wiki/Second_Boer_War for one example that I happened to come across recently.) Do you have another explanation for this?

Towards a Weaker Longtermism

This is crazy, and I think it makes a lot more sense to just admit that part of you cares about galaxies and part of you cares about ice cream and say that neither of these parts are going to be suppressed and beaten down inside you.

Have you read "Is the potential astronomical waste in our universe too small to care about?" That post asks whether these two parts of you should make a (mutually beneficial) deal/bet while still uncertain of the size of (the reachable part of) the universe, such that the part of you that cares about galaxies gets more votes in a bigger universe, and vice versa. I have not been able to find a philosophically satisfactory answer to this question.

If you do make such a deal, then one part or the other will end up with almost all of the votes once you find out for sure the actual size of the universe. If you don't, that also seems intuitively wrong, analogous to a group of people who fail to take advantage of all possible gains from trade. (Maybe you can even be Dutch booked, e.g., by someone making separate deals/bets with each part of you, although I haven't thought carefully about this.)
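To make the kind of deal/bet I have in mind more concrete, here is a minimal numeric sketch (the probability, value ratios, and vote split are all hypothetical, chosen only for illustration, and are not from the original post): if the galaxy-caring part's value scales with the size of the universe while the ice-cream-caring part's does not, both parts can strictly gain in expectation by shifting votes toward the world each cares most about.

```python
# A hypothetical numeric sketch of the deal/bet described above.
# All numbers are made up for illustration; they are not from the original post.
p_large = 0.5  # probability that the reachable universe turns out to be "large"

# Value each part assigns to holding all the votes in each possible world.
galaxy_value   = {"small": 1.0, "large": 1e9}  # the galaxy-caring part's value scales with universe size
icecream_value = {"small": 1.0, "large": 1.0}  # the ice-cream-caring part's value does not

def expected_value(vote_share, value):
    """Expected value of a vote allocation {world: share of votes} under a value profile."""
    return ((1 - p_large) * vote_share["small"] * value["small"]
            + p_large * vote_share["large"] * value["large"])

status_quo    = {"small": 0.5, "large": 0.5}   # each part keeps half the votes either way
galaxy_deal   = {"small": 0.0, "large": 0.99}  # galaxy part: nearly all votes if large, none if small
icecream_deal = {"small": 1.0, "large": 0.01}  # ice-cream part: the complement of the above

print(expected_value(status_quo, galaxy_value), expected_value(galaxy_deal, galaxy_value))
# ~2.5e8 -> ~4.95e8: the galaxy-caring part nearly doubles its expected value
print(expected_value(status_quo, icecream_value), expected_value(icecream_deal, icecream_value))
# 0.5 -> 0.505: the ice-cream-caring part also strictly gains
```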

Draft report on existential risk from power-seeking AI

I’m focused, here, on a very specific type of worry. There are lots of other ways to be worried about AI -- and even, about existential catastrophes resulting from AI.

Can you talk about your estimate of the overall AI-related x-risk (see here for an attempt at a comprehensive list), as well as total x-risk from all sources? (If your overall AI-related x-risk is significantly higher than 5%, what do you think are the other main sources?) I think it would be a good idea for anyone discussing a specific type of x-risk to also give their more general estimates, for a few reasons:

  1. It's useful for the purpose of prioritizing between different types of x-risk.
  2. Quantification of specific risks can be sensitive to how one defines categories. For example, one might push some kinds of risk out of "existential risk from misaligned AI" and into "AI-related x-risk in general" by defining the former narrowly, thereby reducing one's estimate of it. This would be less problematic (e.g., less likely to give the reader a false sense of security) if one also gave more general risk estimates.
  3. Different people may be more or less optimistic in general, making it hard to compare absolute risk estimates between individuals. Relative risk levels suffer less from this problem.

Concerns with ACE's Recent Behavior

If there are lots of considerations that have to be weighed against each other, then it could easily be the case that we should decide things on a case-by-case basis, since sometimes the considerations might weigh in favor of downvoting someone for refusing to engage with criticism, and other times they might weigh in the other direction. But this seems inconsistent with your original blanket statement, "I don’t think any person or group should be downvoted or otherwise shamed for not wanting to engage in any sort of online discussion."

About online versus offline, I'm confused why you think you'd be able to convey your model offline but not online, as the bandwidth difference between the two doesn't seem large enough that you could do one but not the other. Maybe it's not just the bandwidth but other differences between the two mediums; still, I'm skeptical that offline/audio conversations are overall less biased than online/text conversations. If each medium has its own biases, then it's not clear what it would mean if you could convince someone of some idea over one medium but not the other.

If the stakes were higher or I had a bunch of free time, I might try an offline/audio conversation with you anyway to see what happens, but it doesn't seem like a great use of our time at this point. (From your perspective, you might spend hours but at most convince one person, which would hardly make a dent if the goal is to change the Forum's norms. I feel like your best bet is still to write a post to make your case to a wider audience, perhaps putting in extra effort to overcome the bias against it if there really is one.)

I'm still pretty curious what experiences led you to think that online discussions are often terrible, if you want to just answer that. Also, are there other ideas that you think are good but can't be spread through a text medium because of its inherent bias?
