gavintaylor's Comments

gavintaylor's Shortform

Participants in the 2008 FHI Global Catastrophic Risk conference estimated the probability of human extinction from nanotechnology at 5.5% (weapons + accident) and from non-nuclear wars at 3% (all wars minus nuclear wars) (the values are on the GCR Wikipedia page). In The Precipice, Ord estimated the existential risk from "other anthropogenic risks" (noted in the text as including, but not limited to, nanotechnology, and which I interpret as including non-nuclear wars) at 2% (1 in 50). (Note that by definition, extinction risk is a subset of existential risk.)


Since starting to engage with EA in 2018, I have seen very little discussion of nanotechnology or non-nuclear warfare as existential risks, yet it seems that in 2008 these were considered risks on par with top longtermist cause areas today (nanotechnology weapons and AGI extinction risks were both estimated at 5%). I realize that Ord's risk estimates are his own while the 2008 data came from a survey, but I assume his views broadly represent those of his colleagues at FHI and others in the GCR community.


My open question is: what new information or discussion over the last decade led the GCR community to reduce its estimate of the risks posed by (primarily) nanotechnology and also conventional warfare?

[Stats4EA] Uncertain Probabilities

This brings to mind the assumption of normal distributions when using frequentist parametric statistical tests (t-test, ANOVA, etc.). If plots 1-3 represented random samples from three groups, an ANOVA would indicate there was no significant difference between the mean values of any group, which would usually be reported as there being no significant difference between the groups (even though there is clearly a difference between them). In practice, this can come up when comparing a treatment that has a population of non-responders and strong responders against a treatment where the whole population has an intermediate response. This is easily overlooked in a paper if the data are shown only as mean and standard deviation, and although better statistical practices are starting to address this, my experience is that even experienced biomedical researchers often don't notice the problem. I suspect there are many studies that have failed to identify that a group is composed of multiple subgroups which respond differently, because averaging washes them out in this way.
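A minimal sketch of this failure mode (simulated, illustrative numbers only, not data from any real study): two groups with near-identical means but very different response structures sail through a one-way ANOVA undetected.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Treatment A: a mix of non-responders (around 0) and strong responders (around 10).
bimodal = np.concatenate([rng.normal(0, 1, 50), rng.normal(10, 1, 50)])
# Treatment B: the whole population responds at an intermediate level (around 5).
unimodal = rng.normal(5, 1, 100)

# Both groups have means of roughly 5, so the ANOVA typically reports no
# significant difference, even though the response patterns clearly differ.
f_stat, p_value = stats.f_oneway(bimodal, unimodal)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")
```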

The usual approach for dealing with non-normal distributions is to test each group for normality (e.g. with the Shapiro-Wilk test) and move to a non-parametric test if one or more groups fail (e.g. the Mann-Whitney, Kruskal-Wallis, or Friedman tests). But even these only compare medians, so I think they would probably still indicate no significant difference between (the median values of) these plots. Testing for a difference between distributions is possible (e.g. with the Kolmogorov-Smirnov test), but my experience is that this tends to be over-powered and will almost always report a significant difference between two moderately sized (~50+ samples) groups; and the result is just that there is a significant difference in distributions, not what that difference actually represents (e.g. differing means, standard deviations, kurtosis, skewness, long tails, complete non-normality, etc.).
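Continuing the hypothetical example above, the standard workflow plays out roughly like this (again a sketch with simulated data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
bimodal = np.concatenate([rng.normal(0, 1, 50), rng.normal(10, 1, 50)])
unimodal = rng.normal(5, 1, 100)

# Step 1: normality check per group - the bimodal group fails (small p).
for name, sample in [("bimodal", bimodal), ("unimodal", unimodal)]:
    _, p = stats.shapiro(sample)
    print(f"Shapiro-Wilk ({name}): p = {p:.4f}")

# Step 2: the non-parametric fallback compares medians/ranks, which are also
# near-identical here, so Mann-Whitney U still reports no difference.
_, p = stats.mannwhitneyu(bimodal, unimodal, alternative="two-sided")
print(f"Mann-Whitney U: p = {p:.3f}")

# Step 3: Kolmogorov-Smirnov compares the full distributions and does flag a
# difference - but only that the distributions differ, not in what way.
d, p = stats.ks_2samp(bimodal, unimodal)
print(f"Kolmogorov-Smirnov: D = {d:.2f}, p = {p:.2e}")
```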

Is there a Price for a Covid-19 Vaccine?

The author mentioned veterinary vaccines near the end of the post. I searched around on this and was surprised to find there are already commercially available veterinary vaccines against coronaviruses (that link lists 5). This raised my expectation that a human coronavirus vaccine could be successfully developed.

Helping wild animals through vaccination: could this happen for coronaviruses like SARS-CoV-2?

Good post, and this also seems to be a very opportune time to be promoting wild animal vaccination. A few thoughts:

To start with, programs of this kind would only be implemented after a vaccine is developed and distributed among human beings.

In relation to the current pandemic, the media often mention that there are 7 coronaviruses that can affect humans and that we don't have an effective vaccine for any of them. However, I was recently surprised to learn that there are several commercially available veterinary vaccines against coronaviruses - this raised my expectation that a human coronavirus vaccine could be successfully developed, and it seems promising for animal vaccination as well.


I think it's worth thinking more about what level of safety testing goes into developing animal vaccines. The Hendra virus vaccine for horses might be an interesting case study here. Hendra virus was relatively recently discovered in Australia, and can be transmitted from flying foxes (a megabat species), via horses, to humans, in whom it has a 60%+ case fatality rate. Fruit bat culling was very widely called for after a series of outbreaks in 2011, but the government decided to fund development of a horse vaccine instead (by unfortunate coincidence, a heat-wave killed a third of the flying fox population a few years later). A vaccine was developed within a year and widely administered soon after. However, some owners (particularly those of racing horses) reported severe side-effects (including death) and eventually started a class action against the vaccine manufacturer. I don't know if the anecdotal reports of side-effects stood up to further scrutiny (there could have been motivated reasoning at play, similar to that of human anti-vaxxers), but it seems plausible that veterinary vaccine development accepts, or does not even attempt to measure, much worse side-effects than would be approved in a vaccine developed for humans. Given animals' inability to self-report, some classes of minor side-effects may only be noticed by owners of companion animals who are very familiar with their behaviour. While I don't think animal side-effects would be a consideration in developing vaccines for pandemic control or economic purposes, they seem more relevant in the context of vaccinating animals to increase their own welfare.


This may be the case especially for bats, because they have one of the highest disease burdens among wild mammals. Among other conditions, they are harmed by a number of different coronaviruses-caused diseases. In fact, they harbor more than half of all known coronaviruses.

Why do bats have so many diseases (lots of which humans seem to catch)? This comment (which I found in an SSC article) frames the question in another way:

There are over 1,250 bat species in existence. This is about one fifth of all mammal species. Just to get a sense of this, let me ask a modified version of the question in the title:
"Why do human beings keep getting viruses from cows, sheep, horses, pigs, deer, bears, dogs, seals, cats, foxes, weasels, chimpanzees, monkeys, hares, and rabbits?"

This re-framing doesn't really change the problem, but it suggests that viewing 'bats' as a single animal group comparable to 'cows' or 'deer' conceals the scope of the species diversity involved.


I heard Jonathan Epstein speak at a panel discussion on biosecurity last year. He was in favour of disease monitoring and management in wild animal populations, and also seemed sympathetic to the idea of doing this from both human health and animal welfare standpoints. He might be interested in discussing this further, and is in a position where he could advocate for or implement these ideas.

Interview with Aubrey de Grey, chief science officer of the SENS Research Foundation

Thanks for asking the questions I suggested. I found Aubrey's response to this question the most informative:

Has any effort been made to see if the effects of multiple treatments are additive, in terms of improved lifespan, in a pre-clinical study?
No, and indeed we would not expect them to be additive, because we would not expect any one of them to make a significant difference to lifespan. That’s because until we are fixing them all, the ones we are not yet fixing would be predicted to kill the organism more-or-less on schedule. Only more-or-less, because there is definitely cross-talk between different damage types, but still we would not expect that lifespan would be a good assay of efficacy until we’re fixing pretty much everything.

I don't have a background in anti-aging biology, and my intuition was that the treatments would have more of an additive effect. On reflection, though, I agree with his view that there won't be much effect on total life-span until everything is fixed.

My feeling is that this may lower the expected value of life-extension research (by decreasing the probability of success), given that all hallmarks need to be effectively treated in parallel to realize any benefit. If one proves much harder to treat in humans, or if the treatments don't all work together, that reduces the benefit gained from treating the other hallmarks, at least as far as LEV is concerned. This makes SRF's approach of focusing on the most difficult problems seem quite reasonable, and probably the most effective way to make a marginal contribution to life-extension research at the moment. Once all hallmarks are treatable pre-clinically in vivo, research into treatment interactions may become the most effective way to contribute (though, as noted, this will probably also be hard to get mainstream funding for).

Bioinfohazards
Biosecurity researchers are often better-educated and/or more creative than most bad actors.

I generally agree with the above statement, and that the risk of openly discussing some topics outweighs the benefits of doing so. But I recently realised there are some people outside of EA who are generally well educated, probably more creative than many biosecurity researchers, and who often write openly about topics the EA community might consider bioinfohazards: authors of near-future science fiction.

Many of the authors in this genre have STEM backgrounds, often write about malicious-use GCR scenarios (thankfully, the risk is usually averted), and I've read several interviews where authors mention taking pains to do research so they can depict a scenario that represents a possible, if sometimes ambitious, future risk. While these novels don't provide implementation details, the 'attack strategies' are often described clearly and the accompanying narrative may well be more inspiring to a poorly educated bad actor looking for ideas than a technical discussion would be.

I haven't seen (realistic) fiction discussed in the context of infohazards before and would be interested to know what others think of this. In the spirit of the post, I'll refrain from creating an 'attention hazard' (or just advertising?) by naming any authors who I think describe GCRs particularly well.

Why making asteroid deflection tech might be bad
Ignoring accidental deflection, which might occur when an asteroid is moved to an Earth or Lunar orbit for research or mining purposes

I haven't seen this mentioned in other discussions of asteroid risk (e.g. I don't think Ord mentions it in The Precipice), but I don't think it should be dismissed so quickly. If states or corporations develop the technology to transfer asteroids into Earth orbit, this seems to present an equivalent dual-use concern. Indeed, it may be even riskier than developing tools for deflection alone, as activities like mining could provide 'cover' for maliciously aiming an asteroid at Earth. On the positive side, similar tools could probably be used for both orbital transfer and deflection, so the risky technology may also be its own counter-technology.

gavintaylor's Shortform

At the start of Chapter 6 of The Precipice, Ord writes:

To do so, we need to quantify the risks. People are often reluctant to put numbers on catastrophic risks, preferring qualitative language, such as “improbable” or “highly unlikely.” But this brings serious problems that prevent clear communication and understanding. Most importantly, these phrases are extremely ambiguous, triggering different impressions in different readers. For instance, “highly unlikely” is interpreted by some as one in four, but by others as one in 50. So much of one’s work in accurately assessing the size of each risk is thus immediately wasted. Furthermore, the meanings of these phrases shift with the stakes: “highly unlikely” suggests “small enough that we can set it aside,” rather than neutrally referring to a level of probability. This causes problems when talking about high-stakes risks, where even small probabilities can be very important. And finally, numbers are indispensable if we are to reason clearly about the comparative sizes of different risks, or classes of risks.

This made me recall hearing about Matsés, a language spoken by an indigenous tribe in the Peruvian Amazon, which (apparently unusually) uses verb conjugations to indicate the certainty of the information conveyed in a sentence. From an article in Nautilus:

In Nuevo San Juan, Peru, the Matsés people speak with what seems to be great care, making sure that every single piece of information they communicate is true as far as they know at the time of speaking. Each uttered sentence follows a different verb form depending on how you know the information you are imparting, and when you last knew it to be true.
...
The language has a huge array of specific terms for information such as facts that have been inferred in the recent and distant past, conjectures about different points in the past, and information that is being recounted as a memory. Linguist David Fleck, at Rice University, wrote his doctoral thesis on the grammar of Matsés. He says that what distinguishes Matsés from other languages that require speakers to give evidence for what they are saying is that Matsés has one set of verb endings for the source of the knowledge and another, separate way of conveying how true, or valid the information is, and how certain they are about it. Interestingly, there is no way of denoting that a piece of information is hearsay, myth, or history. Instead, speakers impart this kind of information as a quote, or else as being information that was inferred within the recent past.

I doubt the Matsés spend much time talking about existential risk, but their language could provide an interesting example of how to more effectively convey aspects of certainty, probability and evidence in natural language.

The Case for Impact Purchase | Part 1
I think people who are using this type of work as a living should get paid a salary with benefits and severance. A project to project lifestyle doesn't seem conducive to focusing on impact.

Agreed. In my brief experience with academic consulting, one thing I've realised is that it's quite reasonable for contracted consultants to charge a 50-100% premium to account for their lack of benefits, on top of the markup for their utilisation ratio (usually only around 50% of hours are billable, so roughly another 2x).

So if somebody expects to earn a 'fair' salary from impact purchases compared to employment (or from any other type of short-term contract work), a funder should expect to pay a premium for this compared to employing them (or funding another organisation to do so). That doesn't seem like a good use of funds in the long term if it's possible to employ that person instead.
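As a back-of-the-envelope illustration (all numbers here are hypothetical, chosen only to show the arithmetic):

```python
# Hypothetical rate calculation for a contractor vs. an employee.
salary_equivalent_rate = 50.0  # $/hr the person would cost as a salaried employee
utilisation = 0.50             # fraction of a contractor's hours that are billable
benefits_premium = 0.75        # midpoint of the 50-100% premium mentioned above

# Billable hours must cover non-billable time plus the missing benefits.
contract_rate = salary_equivalent_rate / utilisation * (1 + benefits_premium)
print(f"Equivalent contract rate: ${contract_rate:.0f}/hr")  # ~3.5x the salary rate
```

So paying for the same work through short-term contracts, rather than employment, can plausibly cost a funder several times the salary-equivalent rate.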

The Case for Impact Purchase | Part 1

I'm interested in seeing a second post on impact purchases and would personally consider selling impact in the future. I have a few general comments about this:

  • Impact purchases seem similar to the value-based fees that are sometimes used in commercial consulting (instead of time- or project-based fees), which may provide a complementary perspective. Although in business the 'impact' is usually something easy to track (like additional revenue), and the return the consultant gets (like a percentage of revenue up to a capped value) is agreed on in advance. I wonder if a similar pre-arrangement for impact purchases could work for EA projects with quantifiable impact outcomes, such as a funder agreeing to pay some amount per intervention distributed, student educated, etc. Of course, the tracked outcome should reflect the funder's true goals, to prevent gaming the metric.
  • It seems like impact purchases would be particularly helpful for people coming into the EA community who don't yet have good EA references/prestige/track-record but are confident they can complete an impactful project, or who want to work on unorthodox ideas that the community doesn't have the expertise to evaluate. If they try something out and it works, they can get funds to continue and preliminary results for a grant; if not, that's feedback to go more mainstream. For this dynamic to work, people should probably be advised to plan relatively short projects (say, up to a few months), otherwise they could spend a lot of time on something nobody values.
  • This could be a particularly interesting time to trial impact purchases used in conjunction with government UBI (if that ends up being fully brought in anywhere). UBI then removes the barrier of requiring a secure salary before taking on a project.
  • From my experience applying to a handful of early-career academic grants and a few EA grants, I agree that almost none provide any (or any useful) feedback beyond accepted or declined, either on the initial application or on progress or completion reports. Worse than no feedback, though: I once heard from a European Research Council (ERC) grant reviewer that their review committees are required to provide feedback on rejected applications, but are also instructed to make the feedback vague and obfuscated so the applicant has no grounds to appeal - which means applicants get feedback the reviewers know won't be useful for improving their projects. Why do they bother?
  • With regards to implementation, I think one point to consider is the demand from impacters relative to the funds of purchasers. At least in academia, funding is constrained and grant success rates are often <20%, so applicants know it is unlikely they'll get a grant for their project (academic funders often say they turn away many great projects they would like to fund). If impact purchasers were similarly funding-constrained relative to the number of good projects, I think the whole scheme would be less appealing, since even if I complete a great project, getting its impact bought would still involve a good deal of luck.
  • These posts about impact prizes and altruistic equity may also be of interest to consider.