Tsunayoshi

Comments

Concerning the Recent 2019-Novel Coronavirus Outbreak

For posterity, I was wrong here because I was unaware of the dispersion parameter k that is substantially higher for SARS than for Covid-19.

Non-pharmaceutical interventions in pandemic preparedness and response

Truly excellent post! 

My intuition is that research about NPIs whose endpoint is behavioural change might be more tractable, and therefore more impactful, than research where the endpoint is infection. If the endpoint is infection, any study that enrols the general population will need very large sample sizes, as the examples you listed illustrate. I am sure these problems can be overcome, but I assume that one reason we have not seen more of these studies is that they are infeasible without larger coordination.

While it is unfortunate and truly surprising that we have very little research on e.g. the impact of mask wearing and distancing, we do know that certain realistic behavioural changes would be completely sufficient to squash the pandemic in many regions.

The change does not have to be large: as the reproduction number R is hovering around ~1.1 to ~1.3 in most regions of the Western world, it would be sufficient if people acted just a little more carefully to get R below 1. That could mean reducing private meetings by e.g. one third (or moving them outside), widespread adoption of contact tracing apps, placing air filters in schools, or targeting public health messaging towards people who currently are not reached or persuaded. I have seen some research on vaccine hesitancy, but far less on these other areas. At the very least, a randomized study comparing different kinds of public health messaging seems really easy to do and fairly useful. This might look different for the next pandemic, though.
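As a rough back-of-the-envelope sketch of this point (all numbers below are my own illustrative assumptions, not estimates from any study): if transmission scales roughly linearly with contacts, cutting one transmission route by a modest fraction lowers R in proportion to that route's share of overall transmission.

```python
# Back-of-the-envelope: how much does reducing one transmission route lower R?
# All numbers are illustrative assumptions, not estimates from the literature.

def reduced_r(r, route_share, route_reduction):
    """R after partially cutting one transmission route.

    r: current reproduction number
    route_share: fraction of transmission attributable to the route
    route_reduction: fractional reduction of contacts via that route
    (assumes transmission scales linearly with contacts)
    """
    return r * (1 - route_share * route_reduction)

# If R is ~1.1 and private meetings drive, say, ~30% of transmission,
# cutting them by one third is just enough to push R below 1:
print(round(reduced_r(1.1, 0.3, 1/3), 2))  # 0.99
```

Under these assumed numbers, a one-third reduction in a single route suffices; with R at 1.3 the same change would not, which is why the required effort depends so much on the starting point.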

More broadly: as you alluded to, fostering and increasing coordination between researchers looking to conduct a study might also be really useful. This probably applies even more to research on drug interventions, where way too much work is underpowered and badly conducted, and thus pretty much useless before the results are even published. This paper argues that the solutions are already known (e.g. multicenter trials) but not widely implemented due to institutional inertia. Again, it is worth looking into how to facilitate such coordination; I believe that large cash grants by EA-aligned institutions, conditional on coordination between different trial sites, could work.

Non-pharmaceutical interventions in pandemic preparedness and response

There's an additional factor: marketing and public persuasion. It is one thing to say "based on a theoretical model, air filters work", and a totally different thing to say "we saw that air filters cut transmission by X%". My hope would be that the certainty and the effect estimate could help overcome the collective inaction we saw in the pandemic (many people agree that e.g. air filters would probably help, but almost nobody installed them in schools).

[Coronavirus] Is it a good idea to meet people indoors if everyone's rapid antigen test came back negative?

[Epistemic status: This is mostly hobbyist research that I did to evaluate which tests to buy for myself]

The numbers listed by the manufacturers are sadly not very useful. They are generally provided without a standard protocol or independent evaluation, and can be assumed to represent a best-case scenario in a sample of symptomatic individuals. On the other hand, as you note, the sensitivity of antigen tests increases when infectiousness is high.

I am absolutely out of my depth trying to balance these two factors, but luckily an empirical study from the UK estimates, based on contact tracing data, that "The most and least sensitive LFDs [a type of rapid antigen test used in the UK] would detect 90.5% (95% CI 90.1-90.8%) and 83.7% (83.2-84.1%) of cases with PCR-positive contacts respectively." So, if a person tests negative but is still Covid-19 positive, you can assume the likelihood of infection to be 10-20% that of an average Covid-19 contact.
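To make this kind of reasoning concrete, here is a rough Bayes update for a negative rapid test (a hobbyist sketch in the spirit of the epistemic status above; the prior, the 99% specificity, and the `posterior_after_negative` helper are my own illustrative assumptions, not figures from the UK study):

```python
# Rough Bayes update: how likely is a contact to be infected after a
# negative rapid antigen test? Sensitivity/specificity values are
# illustrative; the UK study cited above reports sensitivities of
# roughly 84-90% for contacts of PCR-positive cases.

def posterior_after_negative(prior, sensitivity, specificity=0.99):
    """P(infected | negative test) via Bayes' rule."""
    p_neg_given_infected = 1 - sensitivity   # false negative rate
    p_neg_given_healthy = specificity        # true negative rate
    numerator = p_neg_given_infected * prior
    denominator = numerator + p_neg_given_healthy * (1 - prior)
    return numerator / denominator

# Example: a 5% prior chance the contact is infected, 90% sensitivity.
print(round(posterior_after_negative(0.05, 0.90), 4))  # 0.0053
```

So under these assumed numbers, a negative test cuts a 5% prior to roughly 0.5%, about one tenth of the original risk, which matches the 10-20% intuition above.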

With regard to self- vs. professional testing, there does not yet seem to be a very clear picture, but this German study suggests basically equivalent sensitivity.

You should also make sure to buy tests that were independently evaluated; you can find lists of such tests here or here. The listed numbers are hard to compare between different studies and tests, but the one you mentioned seems to have good results compared to other tests.

I am honestly not sure how long the test results are valid, but 2 hours seems safe. I cannot comment on the other numbers provided by microCovid. 

Dutch anti-trust regulator bans pro-animal welfare chicken cartel

No, my impression is that willingness to pay is a sufficient but not necessary condition to conclude that an industry standard benefits consumers. A different sufficient condition would be an assessment by the regulators of the standard's effects in terms of welfare. I assume that is why the regulators in this case carried out an analysis of the welfare benefits: why even do so if willingness to pay is the only factor?

More speculatively, I would guess that Dutch regulators also take into account welfare improvements to other humans, and would not strike down an industry standard for safe food (if the standard actually contributed to safety).

Google's ethics is alarming

Thank you for this post. My stance is that when engaging with hot-button topics like these, we need to pay particular attention to truthfulness and to the full picture. I am afraid that your video simplifies the reasons for the dismissal of the two researchers quite a bit, to "they were fired for being critical of the AI", and would benefit from giving a fuller account. I do not want to endorse any particular side here, but it seems important to mention that

  1. Google wanted the paper to mention that some techniques exist to mitigate the problems mentioned by Dr. Gebru: "Similarly, it [the paper] raised concerns about bias in language models, but didn’t take into account recent research to mitigate these issues"
  2. Dr. Gebru sent an email to colleagues telling them to stop working on one of their assigned tasks (diversity initiatives) because she did not believe those initiatives were sincere: "Stop writing your documents because it doesn’t make a difference"
  3. Google alleges that Dr. Mitchell shared company correspondence with outsiders.

Whether or not you think any of this justifies the dismissal, these points should be mentioned in a truthful discussion.

Dutch anti-trust regulator bans pro-animal welfare chicken cartel

I think you might have an incorrect impression of the ruling. The agreement was not just struck down because consumers seemed unwilling to pay for it, but also because the ACM (on top (!) of the missing willingness to pay) decided that, by the nature of the improvements, the agreement did not benefit consumers (clearly, most of the benefit goes to the chickens).

From the link: "In order to qualify for an exemption from the prohibition on cartels under the Dutch competition regime it is necessary that the benefits passed on to the consumers exceed the harm inflicted upon them under agreements."

vaidehi_agarwalla's Shortform

There is also a quite active EA Discord server, which serves the function of "endless group discussions" fairly well, so another Slack workspace might have negligible benefits.

EA and the Possible Decline of the US: Very Rough Thoughts

[Epistemic status: Uncertain, and also not American, so this is a 3rd party perspective]

As for the likelihood of some form of collapse: to me, the current trajectory of polarization in the US seems unsustainable. Nowadays, members of both parties are split on whether they consider members of the other party "a threat to their way of life" (!), and feelings towards the other party are rapidly deteriorating.

I do not think that this is just a fluke, as many political scientists argue that it is driven by ideological sorting and the creation of a "mega-identity", where race, education and political leanings now all align with each other. Political debate seems overwhelmingly likely to get more acrimonious when disagreement is not just about facts but about your whole identity, and when you consider the other side to be your enemy.

It is only a slight overstatement to say that members of the two parties live in two very different realities. There is almost no overlap in the news organizations they trust, and the unprecedentedly constant approval rating of Donald Trump indicates that neither side changed their minds much in response to new information coming in.

On the upside, "67% comprise 'the Exhausted Majority', whose members share a sense of fatigue with our polarized national conversation, a willingness to be flexible in their political viewpoints, and a lack of voice in the national conversation." My worry is that this majority is increasingly drowned out by radical voices in traditional and social media.

It is also pertinent that political collapse can happen very fast and without much warning, as the Arab Spring and the collapse of the Soviet Union showed; both came unexpectedly to observers. Decline can also take the form of persistent riots and unrest, where no single party has the political capital or strength to reach an agreement with the rioters or to stop the unrest. Consequently, if decline of the US seems likely and bad, I would worry about it possibly happening quickly (<10 years).

80k hrs #88 - Response to criticism

Hi Mark, thanks for writing this post. I have only given your linked paper and the 80k episode transcript a cursory reading, but my impression is that Tristan's main worry (as I understand it) and your analysis are not incompatible:

Tristan and parts of broader society fear that, through the recommendation algorithm, users discover radicalizing content. According to your paper, the algorithm does not favour, and might even actively be biased against, e.g. conspiracy content.

Again, I am not terribly familiar with the whole discussion, but so far I have not seen the point made clearly enough that both of these claims can be true: the algorithm could show less "radicalizing" content than an unbiased algorithm would, but even these fewer recommendations could be enough to radicalize viewers compared to a baseline where the algorithm recommends no such content. Thus, YouTube could be accused of not "doing enough".

Your own paper cites this paper, which argues, based on an analysis of user comments, that there is a clear pattern of viewership migration from moderate "Intellectual Dark Web" channels to alt-right content. Despite the limitation of relying only on user comments, which your paper mentions, I think that commenting users are still a valid subset of all users, that their movement towards more radical content needs to be explained, and that the recommendation algorithm is certainly a plausible explanation. Since you have doubts about this hypothesis, may I ask what you think are likelier ways these users were radicalized?

One way to test the role of the recommendation algorithm would be to redo the analysis of the user movement data for comments left after the change to the recommendation algorithm. If the movement is basically the same despite fewer recommendations of radical content, that is evidence that the recommendations never played a role, as you argue in this post. If, however, the movement towards alt-right or radical content is lessened, it is reasonable to conclude that recommendations played a role in the past and, by extension, could still play a (smaller) role now.
