atlas

[Coronavirus] Is it a good idea to meet people indoors if everyone's rapid antigen test came back negative?
Answer by atlas · Mar 26, 2021

[Another hobbyist here]

I agree with Tsunayoshi's answer.

Another thing to keep in mind is that even the best studies on rapid antigen tests usually compare against PCR tests; that is, if the antigen test agrees with the PCR test in all cases, its sensitivity is reported as 100%. However, the sensitivity of PCR tests themselves is (as far as I can tell) not 100%, and can vary a lot based on factors such as how the sample is collected and transported.

Here's an article on the issue. Key quote:

Whether a SARS-CoV-2 test detects clinical disease depends on biologic factors, pre-analytic factors, and analytic performance. Someone with a large amount of virus in their nose/throat will have a positive test with a nose/throat swab. However, someone with little to no virus in their nose or throat may have a negative test even if they have virus somewhere else (like the lungs). [...] If no virus is present at the site of collection, the collection fails to get virus in the sample, or the sample is severely degraded from storage or transport (for example baking in the sun on a car dash) then the test will be negative no matter how sensitive the test is.

Then there are studies like Kucirka et al., which a later paper summarizes via a graph of false-negative rates in PCR tests.

The study concludes:

If clinical suspicion is high, infection should not be ruled out on the basis of RT-PCR alone, and the clinical and epidemiologic situation should be carefully considered.

I don't know how trustworthy the Kucirka et al. study is, since the false-negative rates it reports are a lot worse than any I've seen elsewhere. But I think the upshot is that even "gold-standard" PCR testing is messy, and we shouldn't trust studies that estimate antigen-test sensitivity by comparison to PCR without at least adjusting for imperfect PCR sensitivity.
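A back-of-the-envelope sketch of that adjustment, with entirely made-up illustrative numbers (not figures from any of the studies above):

```python
# Toy adjustment: suppose an antigen test is reported as 95% sensitive
# *relative to PCR*, but PCR itself only catches 80% of true infections.
# Under the simplifying assumption that the antigen test never detects
# a case that PCR misses, sensitivity relative to true infections is
# the product of the two rates.
reported_vs_pcr = 0.95   # hypothetical antigen sensitivity vs. PCR
pcr_sensitivity = 0.80   # hypothetical PCR sensitivity vs. true infections

true_sensitivity = reported_vs_pcr * pcr_sensitivity  # ~0.76
```

So a test that looks near-perfect against a PCR baseline could still miss roughly a quarter of true infections under these (hypothetical) numbers.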

A different conclusion that I think is reasonable is that RT-PCR tests are a good baseline given competent administration and possibly re-testing. I don't know enough about the mechanics of testing to evaluate whether a given study handles this well or not.

Feedback from where?

I don't have this impression.

In the sentence you quoted, you state that 80k tracks the number of calls and the number of career plan changes, but doesn't track the long-run impacts of their advisees.


I also downvoted for the same reason. I've looked at 80k's reports pretty closely (because I was basing our local EA group's metrics on them), and it seemed pretty obvious to me that the counterfactual impact of their advisees is in fact the main thing they try to track and use for decision-making.

I haven't looked into the other orgs as deeply, but your statement about 80k makes me disinclined to believe the rest of the list.

Where do you get the impression that they focus mainly on the number of calls?

The ITN framework, cost-effectiveness, and cause prioritisation

So here's a framing that I found useful; maybe someone else will too.

Given some problem area, let's say $I$ is the importance of the problem, defined as the total value we gain from solving the whole thing, and write $p(R)$ for the proportion of the problem solved as a function of the total resources $R$ invested (this is the graph in the post).

Now let's say $R_0$ is the amount of resources that are currently being used to combat the problem. We want to estimate the current marginal value of additional resources, which is given by $I \cdot p'(R_0)$.

The ITN framework splits the second factor into tractability and neglectedness. If we write $r = R/R_0$ for resources normalized by the current investment $R_0$, then by the chain rule

$$p'(R_0) = \left.\frac{dp}{dr}\right|_{r=1} \cdot \frac{1}{R_0}$$

The factors on the right-hand side represent tractability ($\left.\frac{dp}{dr}\right|_{r=1}$, how much of the problem is solved per proportional increase in resources) and neglectedness ($1/R_0$). So we've recovered the familiar importance $\times$ tractability $\times$ neglectedness $=$ marginal value of additional resources.

But this feels like a kind of clumsy way to do it: it's not clear what we gain from introducing $r$. Instead, we should just try to estimate $I \cdot p'(R_0)$ directly (this is the main argument I think the OP is making).
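A toy numerical check that the two factorizations agree. The returns curve $p(R)$ and all numbers here are made up for illustration; nothing about them comes from the post.

```python
import math

# Hypothetical diminishing-returns curve: p(R) = min(1, k * ln(1 + R)).
importance = 1000.0  # I: total value of solving the whole problem
R0 = 50.0            # resources currently invested
k = 0.1

def p(R):
    """Proportion of the problem solved with total resources R (toy model)."""
    return min(1.0, k * math.log1p(R))

def derivative(f, x, h=1e-6):
    """Central-difference numerical derivative."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Direct estimate: marginal value of one extra unit of resources at R0.
direct = importance * derivative(p, R0)

# ITN decomposition: tractability = dp/dr at r = 1 (where r = R/R0),
# neglectedness = 1/R0.
tractability = derivative(lambda r: p(r * R0), 1.0)
neglectedness = 1.0 / R0
itn = importance * tractability * neglectedness
```

Both routes give the same marginal value; the ITN version just routes through the normalized variable $r$.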

Defending Philanthropy Against Democracy

Thanks for pointing that out! I should have read more carefully. I might still be reading you wrong here (if so, sorry) but it feels like this doesn't directly engage with the point.

The paragraph argues that since foundations are currently sanctioned by governments, Reich and other critics ought to respect that decision because it's democratic. I think this is a strawman of their argument: you're assuming an abstract notion of 'democraticness' that infuses everything the government does, whereas the critics don't care whether it's a democratic government that's making a bad decision; it's still a bad decision that leaves individuals with outsized power.

(And note that you can simultaneously believe that the government makes some bad legislative decisions and that we would be better off replacing private spending with government spending.)

What actions would obviously decrease x-risk?
  • Most actions that seem to make arms races or war more unlikely, e.g. the world's major powers committing to strengthening international institutions and multilateralism.
  • Any well-connected and well-resourced actor dedicating themselves to researching ways to improve decision-making that affects the long term in large institutions.
  • Everyone in the AI research community taking a few weeks to engage deeply with AI risk arguments.

Defending Philanthropy Against Democracy

I agree with the general point that large foundations are a force for good on net. But I also feel like you haven't engaged with the main point of critics like Rob Reich, which (as I understand it) is that philanthropic foundations are a powerful lever that wealthy people can use to build influence―a lever that can be weakened by regulating foundations.

To defend billionaire philanthropy (not that it's in need of much defending), I think you need to argue that foundations provide enough value that having them is worth empowering the wealthy. (FWIW, I think this is very likely true.)