Wei_Dai's Comments

Concerning the Recent 2019-Novel Coronavirus Outbreak

I’ve spent a lot of time on social media trying to get people to tone down their more extreme statements re: nCoV.

I would be interested to learn more about your views on the current outbreak. Can you link to the statements you made on social media, or present your perspective here (or as a top-level comment or post)?

Attempted summary of the 2019-nCoV situation — 80,000 Hours

Robert (or anyone else), do you know anyone who actually works in pandemic preparedness? I'm wondering how to get ideas to such people. For example:

  1. artificial summer (optimize indoor temperature and humidity to reduce viral survival time on surfaces)
  2. study mask reuse, given likely shortages (for example, bake used masks in home ovens at a temperature low enough not to damage the fibers but high enough to kill the virus)
  3. scale up manufacturing of all drugs showing effectiveness against 2019-nCoV in vitro, ahead of clinical trial results

Longer term:

  1. subsidize or mandate anti-microbial touch surfaces in public spaces (door handles, etc.)
  2. stockpile masks and other supplies in quantities large enough to avoid shortages, and publicize the stockpiles to prevent panic and hoarding

Candidate Scoring System recommendations for the Democratic presidential primaries

I'm trying to figure out which Democratic presidential candidate is likely to be best with regard to epistemic conditions in the US (i.e., most likely to improve them or at least not make them worse). This seems closely related to the "sectarian tension" issue addressed in the scoring system, though perhaps not identical to it. I wonder if you can either formally incorporate this issue into your scoring system, or just comment on it informally here.

A small observation about the value of having kids

There is a common thought that Effective Altruists can, through careful, good parenting, impart positive values and competence to their descendants.

I'm pretty interested in this topic. Can you say more about the best available evidence for this, and your best guesses as to how to go about doing it? For example, are there books you can recommend?

I'm Buck Shlegeris, I do research and outreach at MIRI, AMA

It feels very relevant that you’re flagrantly violating the “Don’t Make Things Worse” principle.

By triggering the bomb, you're making things worse from your current perspective but better from the perspective of your earlier self. Doesn't that seem strange and deserving of an explanation? The explanation from a UDT perspective is that by updating upon observing the bomb, you actually changed your utility function. You used to care about both the possible worlds where you end up seeing a bomb in the box and the worlds where you don't. After updating, you think you're either a simulation within Omega's prediction (so your action has no effect on yourself) or in the world with a real bomb; either way, you no longer care about the version of you in the world with a million dollars in the box, and this accounts for the conflict/inconsistency.

Given the human tendency to change our (UDT-)utility functions by updating, it's not clear what to do (or what is right), and I think this reduces UDT's intuitive appeal and makes it less of a slam-dunk over CDT/EDT. But it seems to me that it takes switching to the UDT perspective to even understand the nature of the problem. (Quite possibly this isn't adequately explained in MIRI's decision theory papers.)
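
To make the ex-ante vs. ex-post tension concrete, here is a minimal sketch of the expected-utility comparison in Python. The predictor accuracy and payoffs are hypothetical placeholders rather than the numbers from the thought experiment discussed in this AMA; the only point is that the same policy can look best before updating and worst after.

```python
# A minimal numerical sketch of the ex-ante vs. ex-post conflict described above.
# All numbers are hypothetical illustrations, not taken from the original thought experiment.

P_CORRECT = 0.999999       # assumed accuracy of Omega's prediction
PRIZE = 1_000_000          # utility of the world where the box holds a million dollars
BOMB_COST = -10_000_000    # utility of the world where you trigger the bomb

# Ex-ante (UDT-style) evaluation: score each *policy* before making any observation.
# Policy "trigger": if Omega predicts it, you get the prize and never face a bomb;
# if Omega mispredicts, you see a real bomb and trigger it anyway.
eu_trigger = P_CORRECT * PRIZE + (1 - P_CORRECT) * BOMB_COST

# Policy "walk away": if Omega predicts it, you see a bomb and walk away (utility 0);
# if Omega mispredicts, you get the prize.
eu_walk_away = P_CORRECT * 0 + (1 - P_CORRECT) * PRIZE

print(f"ex-ante EU(trigger)   = {eu_trigger:,.0f}")    # about +999,989
print(f"ex-ante EU(walk away) = {eu_walk_away:,.0f}")  # about +1

# Ex post, after updating on the observation of a real bomb, the comparison flips:
# triggering yields BOMB_COST while walking away yields 0, which is the
# "Don't Make Things Worse" intuition pulling against the ex-ante verdict.
```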

I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA

(I really should ask you some questions about AI risk and policy/strategy/governance ("Policy" from now on). I was actually thinking a lot about that just before I got sidetracked by the SJ topic.)

  1. My understanding is that aside from formally publishing papers, Policy researchers usually communicate with each other via private Google Docs. Is that right? Would you find it useful to have a public or private forum for Policy discussion similar to the AI Alignment Forum? See also Where are people thinking and talking about global coordination for AI safety?
  2. In the absence of a Policy Forum, I've been posting Policy-relevant ideas to the Alignment Forum. Do you and other Policy researchers you know follow AF?
  3. In this comment I wrote, "Worryingly, it seems that there’s a disconnect between the kind of global coordination that AI governance researchers are thinking and talking about, and the kind that technical AI safety researchers often talk about nowadays as necessary to ensure safety." Would you agree with this?
  4. I'm interested in your thoughts on The Main Sources of AI Risk?, especially whether any of the sources/types of AI risk listed there are new to you, if you disagree with any of them, or if you can suggest any additional ones.

I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA

Founder effects and strong communal norms towards open discussion in the EA community, with which I think most newcomers get pretty heavily inculcated.

This does not reassure me very much, because academia used to have strong openness norms but is quickly losing them or has already lost them almost everywhere, and it seems easy for founders to lose their influence (i.e., be pushed out or aside) these days, especially if they do not belong to one of the SJ-recognized marginalized/oppressed groups (and I think founders of EA mostly do not?).

Cause prioritization and consequentialism are somewhat incongruous with these things, since many of the things that can get people to be unfairly “canceled” are quite small from an EA perspective.

One could say that seeking knowledge and maximizing profits are somewhat incongruous with these things, but that hasn't stopped academia and corporations from adopting harmful SJ practices.

Heavy influence of and connection to philosophy selects for openness norms as well.

Again, it doesn't seem like openness norms offer enough protection against whatever social dynamic is operating.

Ability and motivation to selectively adopt the best SJ positions without adopting some of its most harmful practices.

Surely people in academia and business also had the motivation to avoid the most harmful practices, but perhaps didn't have the ability? Why do you think that EA has the ability? I don't see any evidence, at least from the perspective of someone not privy to private or internal discussions, that any EA person has a good understanding of the social dynamics driving adoption of the harmful practices, or (aside from you and a few others I know, who don't seem to be close to the centers of EA) is even thinking about this topic at all.

I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA

  1. Social justice in relation to effective altruism

I've been thinking a lot about this recently too. Unfortunately, I didn't see this AMA until now, but hopefully it's not too late to chime in. My biggest worry about SJ in relation to EA is that the political correctness / cancel culture / censorship that seems endemic in SJ (i.e., there are certain beliefs that you have to signal complete certainty in, or face accusations of various "isms" or "phobias", or worse, get demoted/fired/deplatformed) will come to affect EA as well.

I can see at least two ways this could happen to EA:

  1. Whatever social dynamic is responsible for this happening within SJ applies to EA as well, and EA will become like SJ in this regard for purely internal reasons. (In this case EA will probably come to have a different set of politically correct beliefs from SJ that one must profess faith in.)
  2. SJ comes to control even more of the cultural/intellectual "high grounds" (journalism, academia, K-12 education, tech industry, departments within EA organizations, etc.) than it already does, and EA will be forced to play by SJ's rules. (See the second link above for one specific scenario that worries me.)

From your answers so far, it seems like you're not particularly worried about this. If you have good reasons not to worry, please share them so I can move on to other problems myself.

(I think SJ is already actively doing harm because it pursues actions/policies based on these politically correct beliefs, many of which are likely wrong but can't be argued about. But I'm more worried about EA potentially doing this in the future because EAs tend to pursue more consequential actions/policies that will be much more disastrous (in terms of benefits foregone if nothing else) if they are wrong.)
