I have read and reread this comment and am honestly not sure whether this was a reply to my answer or to something else.
On point 1, I think the past week is a fair indication that the coronavirus is a big problem, and we can let this point pass.
On point 2, as of my answer, there seemed to be no academic discussion of human challenge trials to shorten vaccine timelines, regardless of how many people were working on vaccines. The problem I see is that even if a human challenge trial would shorten timelines, authorities and researchers might still hesitate to run one due to paternalistic attitudes in medical ethics. The problem is not that authorities and researchers aren't trying to make a vaccine, or that they need amateurs to do their job for them. So this problem in particular seemed neglected, and worth bringing to their attention.
On point 3, I'm not sure whether you intended to discuss the expected impact of speeding up vaccine development, or whether you were confused about what a human challenge trial is. I did not discuss making theoretical models of the impact of the coronavirus on the world.
Points 4 and 5 do not seem to engage with my answer at all.
If this was a mispost, no harm no foul.
Otherwise: I'm not opposed to having a respectful, in-depth discussion of this issue. But the majority of your reply was off-topic, and the rest only vaguely engaged with what I wrote. If future replies are similar, I'm not going to respond.
Medicine isn't my area, but I'd guess the timelines for vaccine trial completion might be significantly accelerated if some trial participants agreed to be deliberately exposed to SARS-CoV-2, rather than getting data by waiting for participants to get exposed on their own. This practice is known as a "human challenge trial" (HCT), and is occasionally used to get rapid proof-of-concept on vaccines. Using live, wild-type SARS-CoV-2 on fully informed volunteers could possibly provide valuable enough data to reduce the expected development time of the vaccine by several weeks, with a large expected number of lives saved as a result.
Similar usage of HCTs seems to generally be permitted by the relevant ethics committees for low-risk diseases, such as dengue fever, but not for high-risk ones, like Ebola or HIV. A brief look at a WHO document on these, and a longer look at relevant US federal law, didn't turn up any hard rules on how dangerous a disease can be before exposure to a "wild-type" virus is forbidden, and both at least mention societal benefit as a factor to consider. However, HCTs are sometimes refused even for relatively minor diseases like Zika.
The WHO document notes that these sorts of trials are considered better suited for selecting between vaccine candidates, or for providing supporting evidence, than for serving as robust proof of effectiveness for general use (see Section 5 of the linked document). The document also seems to expect that most uses against dangerous diseases will involve modified strains of the pathogen. Using wild-type coronavirus would be both faster and stronger evidence of efficacy.
There are probably many other people on this forum who could assess the expected value of such a trial better than I could, but my suggestion is that EAs engage with the relevant regulators to push for allowing such trials to take place if they would help. In short: having volunteers put themselves at risk for a faster vaccine would be net positive; independent ethics committees might reject such a study anyway; and generating regulatory or public support could make that less likely.
If this were to happen, it seems like a key narrative point would be that the government is allowing people to voluntarily take on risk to find a cure. I think there would be plenty of volunteers if you asked the right way, and if some EAs were to do this, it would help the movement's optics tremendously if several of them vocally volunteered.
It might be worthwhile to have some sort of flag or content warning for potentially controversial posts like this.
On the other hand, this could be misused by people who dislike the EA movement, who could use it as a search parameter to find and "signal-boost" content that looks bad when taken out of context.
|...having a Big Event with people On Stage is just a giant opportunity for a bunch of people new to the problem to spout out whatever errors they thought up in the first five seconds of thinking, neither aware of past work nor expecting to engage with detailed criticism...
I had to go back and double-check that this comment was written before Asilomar 2017. It describes some of the talks very well.
I would also like to be added to the crazy EA's investing group. Could you send an invite to me on here?
The 'Stache is great! He's actually how I heard about Effective Altruism.
Right, I'm accounting for my own selfish desires here. An optimally moral me-like person would only save enough to maximize his career potential.
| It just seems rather implausible, to me, that retirement money is anywhere close to being a cost-effective intervention, relative to other likely EA options.
I don't think that "Give 70-year-old Zach a passive income stream" is an effective cause area; it's a selfish maneuver. But the majority of EAs seem to draw some sort of boundary, feeling obligated to donate only up to a certain point (whether due to partially selfish utility functions or as a calculated move to prevent burnout). I've considered choosing some arbitrary method of dividing income between short-term expenses, retirement, and donations, but I'm searching for a method that someone considers non-arbitrary, because I might feel better about it.
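For concreteness, the kind of arbitrary split I have in mind could be sketched like this. The function and the percentages are purely illustrative placeholders, not a recommendation:

```python
def split_income(income, expenses_pct=0.60, retirement_pct=0.15, donation_pct=0.25):
    """Divide income into fixed fractions (placeholder values, not advice).

    The fractions must sum to 1 so no money is unaccounted for.
    """
    assert abs(expenses_pct + retirement_pct + donation_pct - 1.0) < 1e-9
    return {
        "expenses": income * expenses_pct,
        "retirement": income * retirement_pct,
        "donations": income * donation_pct,
    }

# Example: a $1000 paycheck under the placeholder fractions.
buckets = split_income(1000)
# buckets == {"expenses": 600.0, "retirement": 150.0, "donations": 250.0}
```

Any such rule works mechanically; the problem is that nothing privileges 60/15/25 over any other split, which is exactly the arbitrariness I'd like a principled way around.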
Suppose tomorrow MIRI creates a friendly AGI that can learn a value system, make it consistent with minimal alteration, and extrapolate it in an agreeable way. Whose values would it be taught?
I've heard the idea of averaging all humans' values together and working from there. Given that ISIS is human and that many other humans believe that the existence of extreme physical and emotional suffering is good, I find that idea pretty repellent. Are there alternatives that have been considered?
It seems like people in academia tend to avoid mentioning MIRI. Has this changed in magnitude over the past few years, and do you expect it to change further? Do you think a significant number of public intellectuals believe in MIRI's cause in private while avoiding mentioning it in public?