ZachWeems

Comments

EA will likely get more attention soon

Agreed. 

My model is that he has a number of frustrations with EA. That on its own isn't a big deal; there are plenty of valid, invalid, and arguable gripes with various aspects of EA.

But he also has a major bucket error, where the concept of "far-right" is applied to a much bigger Category of bad stuff. Since some aspects of EA & longtermism seem to him to be X (some property he objects to), and X goes in the Category, and stuff in the Category is far-right, EA must have far-right aspects. To inform people of the problem, he writes articles claiming they're far-right.

If EAs say his claims are factually false, he thinks the respondents are fooling themselves. After all, they're ignoring his wider point that EA has stuff from the Category, in favor of nitpicky technicalities about his examples. He may even think they're trying to motte & bailey people into thinking EA & longtermism can't possibly have X. To me, it sounds like his narrative is now that he's waging a PR battle against Bad Guys.

I'm not sure what the Category is, though. 

At first I thought it was an entirely emotional thing: stuff that makes him sufficiently angry, or a certain flavor of angry, or anything where he can't verbalize why it makes him angry, is assumed to be far-right. But I don't think that fits his actions. I don't expect many people can decide "this makes me mad, so it's full of white supremacy and other ills", run a years-long vendetta on that basis, and still have a nuanced conversation about which parts aren't bad.

Now I think X has a "shape": with time & motivation, in a safe environment, Torres could give a consistent definition of what X is and isn't. And with more of the same, he could explain what it is & why he hates it without any references to far-right stuff. Maybe he could even do an ELI5 of why X goes in the same Category as far-right stuff in the first place. But there's not much chance of this actually happening, since it requires him being vulnerable with a mistrusted representative of the Bad Guys.

Response to Recent Criticisms of Longtermism

Commenting from five months into the future, when this is topically relevant:

I disagree. I read Torres' arguments as not merely flawed, but as attempts to link longtermism to the far right in US culture wars. In such environments people are inclined to be uncharitable, and to spread the word to others who will also be uncharitable. With enough bad press it's possible to get a Common Knowledge effect, where even people who are inclined to be open-minded worry about being seen engaging. That could be bad for recruiting, funding, cooperative endeavors, & mental health.

Now, there are only so many overpoliticized social media bubbles capable of such a wide effect, and they don't find new targets every day. So the chances of EA becoming a political bogeyman are low, even if Torres is actively attempting this. But I think bringing up his specific insinuations to a new audience invites more of this risk than is worth it.

Are there good EA projects for helping with COVID-19?

I have read and reread this comment and am honestly not sure whether this was a reply to my answer or to something else.

On point 1, I think the past week is a fair indication that the coronavirus is a big problem, and we can let this point pass.

On point 2, as of my answer, there seemed to be no academic talk of human challenge trials to shorten vaccine timelines, regardless of how many people were working on vaccines. The problem I see is that even if a human challenge trial would shorten timelines, authorities and researchers might still hesitate to run one due to paternalistic attitudes in medical ethics. The problem is not that authorities and researchers aren't trying to make a vaccine, or that they need amateurs to do their job for them. So this problem in particular seemed neglected, and worth bringing to their attention.

On point 3, I'm not sure whether you intended to discuss the expected impact of speeding up vaccine development, or whether you were confused about what a human challenge trial is. I did not discuss making theoretical models of the impact of the coronavirus on the world.

Points 4 and 5 do not seem to engage with my answer at all.

If this was a mispost, no harm no foul.

Otherwise: I'm not opposed to having a respectful, in-depth discussion of this issue. But the majority of your reply was off-topic, and the rest only vaguely engaged with what I wrote. If future replies are similar, I'm not going to respond.

Are there good EA projects for helping with COVID-19?

Medicine isn't my area, but I'd guess the timelines for vaccine trial completion might be significantly accelerated if some trial participants agreed to be deliberately exposed to SARS-CoV-2, rather than getting data by waiting for participants to get exposed on their own. This practice is known as a "human challenge trial" (HCT), and is occasionally used to get rapid proof-of-concept on vaccines. Using live, wild-type SARS-CoV-2 on fully informed volunteers could possibly provide valuable enough data to reduce the expected development time of the vaccine by several weeks, with a large expected number of lives saved as a result.
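
To make that expected-value claim a bit more concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is a made-up placeholder for illustration only, not an estimate from this comment or any source:

```python
# Minimal back-of-the-envelope sketch of the expected-value reasoning above.
# All inputs are hypothetical placeholders, not real estimates.

def expected_lives_saved(days_earlier, daily_deaths_averted, p_hct_enables_speedup):
    """Expected lives saved if an HCT shortens the vaccine timeline."""
    return days_earlier * daily_deaths_averted * p_hct_enables_speedup

# Illustrative inputs: a three-week speedup, 1,000 deaths per day that a
# deployed vaccine would avert, and a 30% chance the HCT is what actually
# makes the speedup happen.
print(expected_lives_saved(days_earlier=21,
                           daily_deaths_averted=1_000,
                           p_hct_enables_speedup=0.3))
```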

Similar use of HCTs generally seems to be permitted by the relevant ethics committees for low-risk diseases, such as dengue fever, but not for high-risk ones, like Ebola or HIV. A brief look at a WHO document on these, and a longer look at relevant US federal law, didn't turn up any hard rules on how dangerous a disease can be before exposure to a "wild-type" virus is forbidden, and both at least mention societal benefit as a factor to consider. However, HCTs are sometimes refused even for relatively minor diseases like Zika.

The WHO document suggests that these sorts of trials are better suited to selecting between vaccine candidates, or to providing supporting evidence, than to serving as robust proof of effectiveness for general use (see Section 5 of the linked document). The document also seems to expect that most uses against dangerous diseases will involve modified pathogens. Using wild-type coronavirus would be both faster and stronger evidence of efficacy.

There are probably many other people on this forum who could assess the expected value of such a trial better than I could, but my suggestion is that EAs engage with the relevant regulators to push for allowing such trials to take place if they would help. Basically: having volunteers put themselves at risk for a faster vaccine would be net positive; independent ethics committees might reject such a study anyway; and generating regulatory or public support could make that less likely.

If this were to happen, it seems like a key narrative point would be that the government is allowing people to voluntarily take on risk to find a cure. I think there would be plenty of volunteers if you asked the right way, and if some EAs were to do this, it would help the optics tremendously if several of them vocally volunteered.

Leverage Research: reviewing the basic facts

Meta:

It might be worthwhile to have some sort of flag or content warning for potentially controversial posts like this.

On the other hand, this could be misused by people who dislike the EA movement, who could use it as a search parameter to find and "signal-boost" content that looks bad when taken out of context.

Two Strange Things About AI Safety Policy

|...having a Big Event with people On Stage is just a giant opportunity for a bunch of people new to the problem to spout out whatever errors they thought up in the first five seconds of thinking, neither aware of past work nor expecting to engage with detailed criticism...

I had to go back and double-check that this comment was written before Asilomar 2017. It describes some of the talks very well.

Investment opportunity for the risk neutral

I would also like to be added to the crazy EAs' investing group. Could you send me an invite on here?

Open Thread #36

The 'Stache is great! He's actually how I heard about Effective Altruism.

Open Thread #36

Right, I'm accounting for my own selfish desires here. An optimally moral me-like person would only save enough to maximize his career potential.

Open Thread #36

| It just seems rather implausible, to me, that retirement money is anywhere close to being a cost-effective intervention, relative to other likely EA options.

I don't think that "give 70-year-old Zach a passive income stream" is an effective cause area; it is a selfish maneuver. But the majority of EAs seem to draw some sort of boundary, where they only feel obligated to donate up to a certain point (whether due to partially selfish "utility functions" or as a calculated move to prevent burnout). I've considered choosing some arbitrary method of dividing income between short-term expenses, retirement, and donations, but I am searching for a method that at least someone considers non-arbitrary, because I might feel better about following it.
