All of Brian Wang's Comments + Replies

Panoplia Laboratories is developing broad-spectrum antivirals in order to fight future pandemics, and we are looking to hire one researcher to be a part of our team.

Job description: Responsibilities would include:

  • Designing and cloning new antiviral candidates
  • Designing and executing in vitro assays to characterize antiviral candidates
  • Assisting with the design and execution of in vivo studies for the characterization of antiviral candidates
  • Analyzing data from in vitro and in vivo studies
  • Actively communicating results with the rest of the team

As an early memb... (read more)

1
Jmd
2mo
Exciting!!! Wish I could come back and join you guys :)

Besides the 3-month-duration broadly effective antiviral prophylactics that Josh mentioned, I think that daily broadly effective antiviral prophylactics could also be promising if they could eventually become widespread consumer products. However, the science is still pretty nascent – at least for prophylaxis, I don't believe there is much human data at all, and nothing I've seen reaches a true 24-hour duration of efficacy (which I'd see as a major barrier to consumer uptake).

Here are some links:

"PCANS", and its commercial product Profi nasal spray

INN... (read more)

I think that if the broadly effective antiviral prophylactic were truly effective on an individual level, then there could be a reasonable market for it. But the market value would be based on its efficacy at protecting individuals, not on transmission reduction.

Which I think is fine - in the absence of specific incentives to make drugs that reduce transmission, a strategy that involves bringing transmission reduction "along for the ride" on otherwise already-valuable drugs makes sense to me. 

How does this change affect the eligibility of near-term applicants to LTFF/EAIF (e.g., those who apply in the next 6 months) who have received OpenPhil funds in the past / may receive funds from OpenPhil in the future? Currently my understanding is that these applicants are ineligible for LTFF/EAIF by default – does this change if EA funds and Open Philanthropy are more independent?

Estimates of the mortality rate vary, but one media source says, "While the single figures of deaths in early January seemed reassuring, the death toll has now climbed to above 3 percent." This would put it roughly on par with the mortality rate of the 1918 flu pandemic.

It should be noted that the oft-cited case-fatality ratio of 2.5% for the 1918 flu might be inaccurate, and the true CFR could be closer to 10%: https://rybicki.blog/2018/04/11/1918-influenza-pandemic-case-fatality-rate/?fbclid=IwAR3SYYuiERormJxeFZ5Mx2X_00QRP9xkdBktfmzJmc8KR-iqp... (read more)

It seems that there are two factors here leading to a loss in altruistic belief:

1. Your realization that others are more selfish than you thought, which brings a feeling of lost support as you discover that your beliefs are less common than you had believed.

2. Your uncertainty about the logical soundness of altruistic beliefs.

Regarding the first, realize that you're not alone, that there are thousands of us around the world also engaged in the project of effective altruism – including potentially in your city. I would investigate to see if there are local ... (read more)

I think the central "drawing balls from an urn" metaphor implies a more deterministic situation than that which we are actually in – that is, it implies that if technological progress continues, if we keep drawing balls from the urn, then at some point we will draw a black ball, and so civilizational devastation is basically inevitable. (Note that Nick Bostrom isn't actually saying this, but it's an easy conclusion to draw from the simplified metaphor). I'm worried that taking this metaphor at face value will turn people towards br... (read more)

Interesting idea. This may be worth trying to develop more fully?

Yeah. I'll have to think about it more.

I'm still coming at this from a lens of "actionable advice for people not in EA". It might be that the person doesn't know many other trusted individuals; what should the advice be then?

Yeah, for people outside EA I think structures could be set up such that reaching consensus (or at least a majority vote) becomes a standard policy or an established norm. E.g., if a journal is considering a manuscript with potential info hazards, then pe... (read more)

0
WillPearson
6y
There is a growing movement of makers and citizen scientists who are working on new technologies. It might be worth targeting them somewhat (although again probably without the math). I think the approaches for EA/non-EA seem sensible.

I also like to weigh the downside of not releasing the information as well. If you don't release information, you are making everyone make marginally worse decisions (if you think someone will release it anyway later). For example, in the nuclear fusion case, you think that everyone currently building new nuclear fission stations is wasting their time, that people training on how to manage coal plants should be training on something else, etc.

I also have another consideration which is possibly more controversial. I think we need some bias to action, because it seems like we can't go on as we are for too much longer (another 1000 years might be pushing it). The level of resources and coordination towards global problems fielded by the status quo seems insufficient, so the default is a bad outcome. With this consideration, going back to the fusion pioneers, they might try and find people to tell so that they could increase the bus factor (the number of people that would have to die to lose the knowledge). They wouldn't want the knowledge to get lost (as it would be needed in the long term), and they would want to make sure that whoever they told understood the import and potential downsides of the technology.

Edit: Knowing the sign of an intervention is hard, even after the fact. Consider the invention and spread of the knowledge about nuclear chain reactions. Without it we would probably be burning a lot more fossil fuels; however, with it we have the associated existential risk. If that risk never pays out, then it may have been a spur towards greater coordination and peace. I'll try and formalise these thoughts at some point, but I am a bit work-impaired for a while.

If there is a single person with the knowledge of how to create safe, efficient nuclear fusion, they cannot expect other people to release it on their behalf.

Ah right. I suppose the unilateralist's curse is only a problem insofar as there are a number of other actors also capable of releasing the information; if you are a single actor then the curse doesn't really apply. Although one wrinkle might be considering the unilateralist's curse with regards to different actors through time (i.e., erring on the side of caution with the expectation that other acto... (read more)

2
WillPearson
6y
Interesting idea. This may be worth trying to develop more fully? I'm still coming at this from a lens of "actionable advice for people not in EA". It might be that the person doesn't know many other trusted individuals; what should the advice be then? It would probably also be worth giving advice on how to have the conversation itself.

The original article gives some advice on what happens if consensus can't be reached (voting and the like). As I understand it, you shouldn't wait for consensus, else you have the unilateralist's curse in reverse: someone pessimistic about an intervention can block the deployment of an intervention needed to avoid disaster (this seems very possible if you consider crucial considerations flipping signs, rather than just random noise in beliefs about desirability). Would you suggest discussion and a vote (assuming no other course of action can be agreed upon)? Do you see the need to correct for status quo bias in any way?

This seems very important to get right. I'll think about this some more.

The unilateralist's curse only applies if you expect other people to have the same information as you, right?

My understanding is that it applies regardless of whether or not you expect others to have the same information. All it requires is a number of actors making independent decisions, each with randomly distributed error, where a unilaterally made decision has potentially negative consequences for all.
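To make that model concrete, here is a minimal simulation sketch (my own illustrative numbers, not from the original discussion): each actor independently estimates the value of releasing some information, with normally distributed error, and the release happens if any single actor's estimate is positive. Even when the true value is negative, the chance that someone acts unilaterally rises with the number of actors.

```python
import random

def release_probability(n_actors, true_value=-1.0, error_sd=1.0, trials=10_000):
    """Estimate how often at least one of n_actors unilaterally 'releases'
    when each sees the true value plus independent Gaussian noise."""
    releases = 0
    for _ in range(trials):
        estimates = (true_value + random.gauss(0, error_sd) for _ in range(n_actors))
        if any(e > 0 for e in estimates):  # any single positive estimate triggers release
            releases += 1
    return releases / trials

# With a genuinely harmful release (true_value < 0), the probability that
# *someone* acts unilaterally still grows quickly with the number of actors.
for n in (1, 3, 10):
    print(n, release_probability(n))
```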

You can figure out if they have the same information as you to see if they are concerned about the same things you are. By looking at the mitigations pe

... (read more)
2
WillPearson
6y
Information determines the decisions that can be made. For example, you can't spread the knowledge of how to create effective nuclear fusion without the information on how to make it. If there is a single person with the knowledge of how to create safe, efficient nuclear fusion, they cannot expect other people to release it on their behalf. They may expect it to be net positive, but they also expect some downsides and are unsure of whether it will be net good or not.

To give a potential downside of nuclear fusion, let us say they are worried about creating excess heat beyond what the earth can dissipate, due to wide-scale deployment in the world (even if it fixes global warming due to trapping solar energy, it might cause another heat-related problem). I forget the technical term for this, unfortunately. The fusion expert(s) cannot expect other people to release this information for them, for as far as they know they are the only people making that exact decision.

What the researcher can do is try and build consensus/lobby for a collective decision-making body on the internal climate heating (ICH) problem, planning to release the information when they are satisfied that there is going to be a solution in time for fixing the problem when it occurs. If they find a greater than expected number of people lobbying for solutions to the ICH problem, then they can expect they are in a unilateralist's curse scenario, and they may want to hold off on releasing information even when they are satisfied with the way things are going (in case there is some other issue they have not thought of). They can look to see what the other people who have been helping with ICH are doing, and see if there are other initiatives they are starting, that may or may not be to do with the advent of nuclear fusion.

I think I am also objecting to the expected payoff being thought of as a fixed quantity. You can either learn more about the world to alter your knowledge of the payoff or try and introduce th

The relevance of unilateralist's curse dynamics to info hazards is important and worth mentioning here. Even if you independently do a thorough analysis and decide that the info-benefits outweigh the info-hazards of publishing a particular piece of information, that shouldn't be considered sufficient to justify publication. At the very least, you should privately discuss with several others and see if you can reach a consensus.

0
WillPearson
6y
The unilateralist's curse only applies if you expect other people to have the same information as you, right? You can figure out if they have the same information as you to see if they are concerned about the same things you are. By looking at the mitigations people are attempting. Altruists should be attempting mitigations in a unilateralist's curse position, because they should expect someone less cautious than them to unleash the information. Or they want to unleash the information themselves and are mitigating the downsides until they think it is safe.

I've not had the best luck reaching out to talk to people about my ideas. I expect that the majority of new ideas will come from people not heavily inside the group and thus less influenced by groupthink. So you might want to think of solutions that take that into consideration.

I wonder how much the "spend 1 year choosing and 4 years relentless pursuing a project" rule of thumb applies to having a high-impact career. Certain career paths might rely on building a lot of career capital before you can have high-impact, and career capital may not be easily transferable between domains. For example, if you first decide to relentlessly pursue a career in advancing clean meat technology for four years, and then re-evaluate and decide that influencing policymakers with regards to AI safety is the highest-value thing for you to ... (read more)

7
Joey
6y
I am more skeptical about transferable career capital. I tend to see people doing impressive things even in unrelated fields as providing a lot of career capital. E.g., a lot of EAs would hire someone who had done a successful project in another EA cause vs. just doing something less related but more transferable (e.g., going into consulting).

Also, generally in line with the argument above, I tend to see that doing great focused work leads to better outcomes than "building generalized career capital" with the idea of eventually using it in a high-impact direction. The most common outcome I see with EAs doing that is them spending a bunch of time saving/building career capital and then leaving the EA movement, having caused pretty minimal good in the world. Additionally, doing impressive things in the EA movement is a way to both build career capital and do good at the same time.

That being said, I think what to factor in is somewhat a different question. You might decide after one year that the best thing to do is X (e.g. get a degree), which sets you up better for your next plan re-evaluation point 4 years later, with minimal re-evaluation until you have gotten your degree.

Yes, I accept that result, and I think most EAs would (side note: I think most people in society at large would, too; if this is true, then your post is not so much an objection to the concept of EA as it is to common-sense morality as well). It's interesting that you and I have such intuitions about such a case – I see that as in the category of "being so obvious to me that I wouldn't even have to hesitate to choose." But obviously you have different intuitions here.

Part of what I'm confused about is what the positive case is for giving everyone... (read more)

0
Jeffhe
6y
Hi Brian,

I think the reason why you have such a strong intuition of just saving Amy and Susie in a choice situation like the one I described in my previous reply is that you believe Amy's burning to death plus Susie's sore throat involves more or greater pain than Bob's burning to death. Since you think minimizing aggregate pain (i.e. maximizing aggregate utility) is what we should do, your reason for saving just Amy and Susie is clear.

But importantly, I don't share your belief that Amy's burning to death and Susie's sore throat involve more or greater pain than Bob's burning to death. On this note, I completely reworked my response to Objection 1 a few days ago to make clear why I don't share this belief, so please read that if you want to know why. On the contrary, I think Amy's burning to death and Susie's sore throat involve just as much pain as Bob's burning to death. So part of the positive case for giving everyone an equal chance is that the suffering on either side would involve the same LEVEL/AMOUNT of pain (even though the suffering on Amy's and Susie's side would clearly involve more INSTANCES of pain: i.e. 2 vs 1).

But even if the suffering on Amy's and Susie's side would involve slightly greater pain (as you believe), there is a positive case for giving Bob some chance of being saved, rather than 0. And that is that who suffers matters, for the reason I offered in my response to Objection 2. I think that response provides a very powerful reason for giving Bob at least some chance, and not no chance at all, even if his pain would be less great than Amy's and Susie's together. (My response to Objection 3 makes clear that giving Bob some chance is not in conflict with being impartial, so that response is relevant too if you think doing so is being partial.)

At the end of the day, I think one's intuitions are based on one's implicit beliefs and what one implicitly takes into consideration. Thus, if we shared the same implicit beliefs and implicitly to

Regarding the first point, signing hypothetical contracts behind the veil of ignorance is our best heuristic for determining how best to collectively make decisions such that we build the best overall society for all of us. Healthy, safe, and prosperous societies are built from lots of agents cooperating; unhappy and dangerous societies arise from agents defecting. And making decisions as if you were behind the veil of ignorance is a sign of cooperation; on the contrary, Bob's argument that you should give him a 1/3 chance of being helped even though he wo... (read more)

0
Jeffhe
6y
Hey Brian,

No worries! I've enjoyed our exchange as well - your latest response is both creative and funny. In particular, when I read "They have read your blog post on the EA forum and decide to flip a coin", I literally laughed out loud (haha). It's been a pleasure : ) If you change your mind and decide to reply, definitely feel welcome to.

Btw, for the benefit of first-time readers, I've updated a portion of my very first response in order to provide more color on something that I originally wrote. In good faith, I've also kept in the response what I originally wrote. Just wanted to let you know.

Now onto my response. You write, "In the donor case, Bob had a condition where he was in the minority; more often in his life, however, he will find himself in cases where he is in the majority (e.g., hospital case, loan case). And so over a whole lifetime of decisions to be made, Bob is much more likely to benefit from the veil-of-ignorance-type approach."

This would be true if Bob had an equal chance of being in any of the positions of a given future trade-off situation. That is, Bob would have a higher chance of being in the majority in any given future trade-off situation if he had an equal chance of being in any of the positions of a given trade-off situation. Importantly, just because there are more positions on the majority side of a trade-off situation, that does not automatically mean that Bob has a higher chance of being among the majority. His probability or chance of being in each of the positions is crucial.

I think you were implicitly assuming that Bob has an equal chance of being in any of the positions of a future trade-off situation because he doesn't know his future. But, as I mentioned in my previous post, it would be a mistake to conclude, from a lack of knowledge about one's position, that one has an equal chance of being in anyone's position. So, just because Bob doesn't know anything about his future, it does not mean that he has an equal chance

I do think Bob has an equal chance to be in Amy's or Susie's position, at least from his point of view behind the veil of ignorance. Behind the veil of ignorance, Bob, Susie, and Amy don't know any of their personal characteristics. They might know some general things about the world, like that there is this painful disease X that some people get, and there is this other equally painful disease Y that the same number of people get, and that a $10 donation to a charity can in general cure two people with disease Y or one person with disease X. But they don'... (read more)
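As a toy calculation of my own (using the disease X/Y setup sketched above and assuming the equal subjective probabilities it describes, not any figures from the thread), here is what each person's ex ante chance of being saved looks like under the "save the greater number" policy versus a coin flip between the two sides:

```python
from fractions import Fraction

# Three people behind the veil; each is equally likely to end up in any position.
# One position gets disease X (curable alone for $10); the other two get disease Y
# (both curable together for the same $10).
p_disease_x = Fraction(1, 3)
p_disease_y = Fraction(2, 3)

# Policy A: always cure the two disease-Y patients (save the greater number).
p_saved_policy_a = p_disease_y * 1 + p_disease_x * 0

# Policy B: flip a coin between the one-person side and the two-person side.
p_saved_policy_b = p_disease_y * Fraction(1, 2) + p_disease_x * Fraction(1, 2)

print(p_saved_policy_a)  # 2/3
print(p_saved_policy_b)  # 1/2
```

Under these assumptions, saving the greater number gives everyone the better ex ante chance (2/3 vs. 1/2), which is the "best chance" point that comes up later in the thread.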

0
Jeffhe
6y
It would be a mistake to conclude, from a lack of knowledge about one's position, that one has an equal chance of being in anyone's position. Of course, if a person is behind the veil of ignorance and thus lacks relevant knowledge about his/her position, it might SEEM to him/her that he/she has an equal chance of being in anyone's position, and he/she might thereby be led to make this mistake and consequently choose to save the greater number.

In any case, what I just said doesn't really matter, because you go on to say, "Note that it doesn't have to be actually true that Bob has an equal chance as Susie and Amy to have disease X vs. disease Y; maybe a third party, not behind the veil of ignorance, can see that Bob's genetics predispose him to disease X, and so he shouldn't sign the agreement. But Bob doesn't know that; all that is required for this argument to work is that Bob, Susie, and Amy all have the same subjective probability of ending up with disease X vs. disease Y, viewing from behind the veil of ignorance."

Let us then suppose that Bob, in fact, had no chance of being in either Amy's or Susie's position. Now imagine Bob asks you why you are choosing to save Amy and Susie and giving him no chance at all, and you reply, "Look, Bob, I wish I could help you too, but I can't help all. And the reason I'm not giving you any chance is that if you, Amy and Susie were all behind the veil of ignorance and were led to assume that each of you had an equal chance of being in anyone else's position, then all of you (including you, Bob) would have agreed to the principle of saving the greater number in the kind of case you find yourself in now."

Don't you think Bob can reasonably reply, "But Brian, whether or not I make that assumption under the veil of ignorance is irrelevant. The fact of the matter is that I had no chance of being in Amy's or Susie's position. What you should do shouldn't be based on what I would agree to in a condition where I'm imagined as making a

One additional objection that one might have is that if Bob, Susie, and Amy all knew beforehand that you would end up in a situation where you could donate $10 to alleviate either two of them suffering or one of them suffering, but they didn't know beforehand which two people would be pitted against which one person (e.g., it could just as easily be alleviating Bob + Susie's suffering vs. alleviating Amy's suffering, or Bob + Amy's suffering vs. Susie's suffering, etc.), then they would all sign an agreement directing you to send a donation such that you w... (read more)

0
Jeffhe
6y
Hey Brian,

I just wanted to note that another reason why you might not want to use the veil-of-ignorance approach to justify why we should save the greater number is that it would force you to conclude that, in a trade-off situation where you can either save one person from an imminent excruciating pain (i.e. being burned alive) or another person from the same severe pain PLUS a third person from a very minor pain (e.g. a sore throat), we should save the second and third person and give 0 chance to the first person. I think it was F. M. Kamm who first raised this objection to the veil-of-ignorance approach in her book Morality, Mortality Vol. 1 (I haven't actually read the book).

Interestingly, kbog - another person I've been talking with on this forum - accepts this result. But I wonder if others like yourself would. Imagine Bob, Amy and Susie were in a trade-off situation of the kind I just described, and imagine that Bob never actually had a chance to be in Amy's or Susie's position. In such a situation, do you think you should just save Amy and Susie?
2
Jeffhe
6y
Hi Brian,

Thanks for your comment and for reading my post! Here's my response:

Bob, Susie and Amy would sign the agreement to save the greater number if they assumed that they each had an equal chance of being in any of their positions. But is this assumption true? For example, is it actually the case that Bob had an equal chance to be in Amy's or Susie's position? If it is the case, then saving the greater number would in effect give each of them a 2/3 chance of being saved (the best chance, as you rightly noted). But if it isn't, then why should an agreement based on a false assumption have any force? Suppose Bob, in actuality, had no chance of being in Amy's or Susie's position; then is it really in accordance with reason and empathy to save Amy and Susie and give Bob zero chance?

Intuitively, for Bob to have had an equal chance of being in Amy's position or Susie's position or his actual position, he must have had an equal chance of living Amy's life or Susie's life or his actual life. That's how I intuitively understand a position: as a life position. To occupy someone's position is to be in their life circumstances - to have their life. So understood, what would it take for Bob to have had an equal chance of being in Amy's position or Susie's position or his own? Presumably, it would have had to be the case that Bob was just as likely to have been born to Amy's parents or Susie's parents or his actual parents. But this seems very unlikely, because the particular “subject-of-experience” or “self” that each of us is, is probably biologically linked to our ACTUAL parents' cells. Thus another parent could not give birth to us, even though they might give birth to a subject-of-experience that is qualitatively very similar to us (i.e. same personality, same skin complexion, etc).

Of course, being in someone's position need not be understood in this demanding (though intuitive) way. For example, maybe to be in Amy's position just requires being in her actual l

To add onto the "platforms matter" point, you could tell a story similar to Bostrom's (build up credibility first, then have impact later) with Max Tegmark's career. He explicitly advocates this strategy to EAs in 25:48 to 29:00 of this video: https://www.youtube.com/watch?v=2f1lmNqbgrk&feature=youtu.be&t=1548.

0
alexflint
6y
Thanks for the pointer - noted!

I'd like to hear more about your estimate that another non-human civilization may appear on Earth on the order of 100 million years from now; is this mostly based on the fact that our civilization took ~100 million years to spring up from the first primates?

If there is a high probability of another non-human species with moral value reaching our level of technological capacity on Earth in ~100 million years conditional on our own extinction, then this could lessen the expected "badness" of x-risks in general, and could also have implications for ... (read more)

1
turchin
6y
Basically, there are two constraints on the timing of the new civilization, which are explored in detail in the article:

1) As our closest relatives are chimps, with 7 million years of genetic divergence from us, human extinction means that for at least 7 million years there will be no other civilization, and likely more, as most causes of human extinction would kill great apes too.

2) Life on Earth will be possible for approximately the next 600 million years, based on models of the Earth and Sun.

Thus the timing of the next civilization is between 7 and 600 million years, but the probability peaks closer to 100 million years, as that is the time needed for the evolution of primates "again" from the "rodents", and it will later decline as conditions on the planet deteriorate.

We explored the difference between human extinction risks and l-risks, that is, life extinction risks, in another article: http://effective-altruism.com/ea/1jm/paper_global_catastrophic_and_existential_risks/ In it, we show that life extinction is worse than human extinction, and universe destruction is even worse than life extinction, and this should be taken into account in risk prevention prioritisation.

I think one important reason for optimism that you didn't explicitly mention is the expanding circle of moral concern, a la Peter Singer. Sure, people's behaviors are strongly influenced by laziness/convenience/self-interest, but they are also influenced by their own ethical principles, which in a society-wide sense have generally grown better and more sophisticated over time. For the two examples that you give, factory farming and slavery, your view seems to be that (and correct me if I'm wrong) in the future, people will look for more efficient ways to... (read more)

The change in ethical views seems very slow and patchy, though - there are something like 30 million slaves in the world today, compared to 3 million in the US at its peak (I don't know how worldwide numbers have changed over time).

This is a good point; however, I would also like to point out that it could be the case that a majority of "dedicated donors" don't end up taking the pledge, without this becoming a norm. The norm instead could be "each individual should think through for themselves, given their own unique situation, whether or not taking the pledge is likely to be valuable," which could lead to a situation where "dedicated donors" tend not to take the pledge, but not necessarily to a situation where, if you are a "dedicated donor," y... (read more)

I guess the argument is that, if it takes (say) the same amount of effort/resources to speed up AI safety research by 1000% and to slow down general AI research by 1% via spreading norms of safety/caution, then plausibly the latter is more valuable due to the sheer volume of general AI research being done (with the assumption that slowing down general AI research is a good thing, which as you pointed out in your original point (1) may not be the case). The tradeoff might be more like going from $1 million to $10 million in safety research, vs. going from ... (read more)

Regarding your point (2), couldn't this count as an argument for trying to slow down AI research? I.e., given that the amount of general AI research done is so enormous, even changing community norms around safety a little bit could result in dramatically narrowing the gap between the rates of general AI research and AI safety research?

1
CarlShulman
8y
I don't think I'm following your argument. Are you saying that we should care about the absolute size of the difference in effort in the two areas rather than proportions? Research has diminishing returns because of low-hanging fruit. Going from $1MM to $10MM makes a much bigger difference than going from $10,001MM to $10,010MM.
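To illustrate the diminishing-returns point numerically, here is a small sketch assuming, purely for illustration, logarithmic returns to research spending (the comment doesn't commit to any particular functional form):

```python
import math

def log_returns(spending_millions):
    """Toy diminishing-returns model: value grows with the log of spending."""
    return math.log(spending_millions)

# Gain from $1MM -> $10MM vs. from $10,001MM -> $10,010MM
early_gain = log_returns(10) - log_returns(1)          # ~2.30
late_gain = log_returns(10_010) - log_returns(10_001)  # ~0.0009

print(early_gain, late_gain)
```

Under this assumed model, the same $9MM absolute increase is worth thousands of times more when the field is small, which is why proportions rather than absolute differences do the work in the argument.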

Quick feedback forms for workshops/discussion groups would be nice; I think most of the workshops I attended didn't allow any opportunity for feedback, and I would have had comments for them.

A guarantee that all the talks/panels will be recorded.

The booklet this year stated that "almost" all the talks would be recorded, which left me worried that, if I missed a talk, I wouldn't be able to watch it in the future (this might just be me). I probably would have skipped more talks and talked to more people if I had a guarantee that all the talks would be recorded.

Also, it would be nice to have a set schedule that didn't change so much during the conference. The online schedule was pretty convenient and was (for the most part) up to date, but people using the physical booklet may have been confused.

I think that adopting your first resolution, in addition to the assumption by commenters that being a child with malaria is a net negative experience, can rescue some of the value of AMF. Say in situation 1, a family has a child, Afiya, who eventually gets malaria and dies, and thus has a net negative experience. Because of this, the family decides to have a second child, Brian, who does not get malaria and lives a full and healthy life. In situation 2, where AMF is taken to have a contribution, a family has just one child, Afiya, who is prevented from ... (read more)

I have a question for those who donate to meta-charities like Charity Science or REG to take advantage of their multiplier effect (these charities typically raise ~$5-10 per dollar of expenditure). Do you donate directly towards the operations expenses of these meta-charities? For example, REG's donations page has the default split of your donations as 80% towards object-level charities (and other meta-charities), while 20% is towards REG's operating expenses, which include the fundraising efforts that the multiplier presumably is coming from. It seems ... (read more)

2
Peter Wildeford
9y
Yes. I donate to both GiveWell top charities (which Charity Science supports) and Charity Science's operations. This seems largely right, though it's important to note that donating to the recommended charities of a meta-charity does help that charity.