Panoplia Laboratories is developing broad-spectrum antivirals to fight future pandemics, and we are looking to hire one researcher to join our team.
Job description: Responsibilities would include:
As an early memb...
Besides the 3-month-duration broadly effective antiviral prophylactics that Josh mentioned, I think that daily broadly effective antiviral prophylactics could also be promising if they could eventually become widespread consumer products. However, the science is still pretty nascent – at least for prophylaxis, I don't believe there is much human data at all, and nothing I've seen reaches a true 24-hour duration of efficacy (which I'd see as a major barrier to consumer uptake).
Here are some links:
"PCANS", and its commercial product Profi nasal spray
INN...
I think that if the broadly effective antiviral prophylactic truly worked at the individual level, then there could be a reasonable market for it. But the market value would be based on its efficacy at protecting individuals, not on transmission reduction.
Which I think is fine: in the absence of specific incentives to make drugs that reduce transmission, a strategy that brings transmission reduction "along for the ride" on otherwise already-valuable drugs makes sense to me.
How does this change affect the eligibility of near-term applicants to LTFF/EAIF (e.g., those who apply in the next 6 months) who have received OpenPhil funds in the past / may receive funds from OpenPhil in the future? Currently my understanding is that these applicants are ineligible for LTFF/EAIF by default – does this change if EA funds and Open Philanthropy are more independent?
Estimates of the mortality rate vary, but one media source says, "While the single figures of deaths in early January seemed reassuring, the death toll has now climbed to above 3 percent." This would put it roughly on par with the mortality rate of the 1918 flu pandemic.
It should be noted that the oft-cited case-fatality ratio of 2.5% for the 1918 flu might be inaccurate, and the true CFR could be closer to 10%: https://rybicki.blog/2018/04/11/1918-influenza-pandemic-case-fatality-rate/
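For context on why such estimates vary, here is a toy illustration (all numbers made up): the fatality figure you get depends heavily on which denominator you use, since confirmed cases lag infections and deaths lag cases.

```python
# Toy illustration of denominator sensitivity (all numbers hypothetical).
deaths = 300
confirmed_cases = 10_000       # cases confirmed to date
estimated_infections = 40_000  # assumed total, including mild/undetected infections

naive_cfr = deaths / confirmed_cases                # deaths per *confirmed* case
infection_fatality = deaths / estimated_infections  # deaths per *estimated* infection

print(f"naive CFR: {naive_cfr:.1%}, IFR: {infection_fatality:.1%}")
# naive CFR: 3.0%, IFR: 0.8% -- the same outbreak yields very different
# fatality figures depending on the denominator, which is one reason
# early and historical estimates diverge so widely.
```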
It seems that there are two factors here leading to a loss of altruistic belief:
1. Your realization that others are more selfish than you thought, which leads to a felt loss of support as you discover that your beliefs are less common than you assumed.
2. Your uncertainty about the logical soundness of altruistic beliefs.
Regarding the first, realize that you're not alone, that there are thousands of us around the world also engaged in the project of effective altruism – including potentially in your city. I would investigate to see if there are local ...
I think the central "drawing balls from an urn" metaphor implies a more deterministic situation than that which we are actually in – that is, it implies that if technological progress continues, if we keep drawing balls from the urn, then at some point we will draw a black ball, and so civilizational devastation is basically inevitable. (Note that Nick Bostrom isn't actually saying this, but it's an easy conclusion to draw from the simplified metaphor). I'm worried that taking this metaphor at face value will turn people towards br...
Interesting idea. This may be worth trying to develop more fully?
Yeah. I'll have to think about it more.
I'm still coming at this from the lens of "actionable advice for people not in EA". It might be that the person doesn't know many other trusted individuals; what should the advice be then?
Yeah, for people outside EA I think structures could be set up such that reaching consensus (or at least a majority vote) becomes a standard policy or an established norm. E.g., if a journal is considering a manuscript with potential info hazards, then pe...
If there is a single person with the knowledge of how to create safe, efficient nuclear fusion, they cannot expect other people to release it on their behalf.
Ah right. I suppose the unilateralist's curse is only a problem insofar as there are a number of other actors also capable of releasing the information; if you are a single actor then the curse doesn't really apply. Although one wrinkle might be considering the unilateralist's curse with regards to different actors through time (i.e., erring on the side of caution with the expectation that other acto...
The unilateralist's curse only applies if you expect other people to have the same information as you, right?
My understanding is that it applies regardless of whether or not you expect others to have the same information. All it requires is a number of actors making independent decisions, with randomly distributed error, with a unilaterally made decision having potentially negative consequences for all.
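As a minimal sketch of why this is (all numbers hypothetical): simulate some number of actors who each independently estimate the value of unilaterally releasing a piece of information, with unbiased random error, and who release if their own estimate comes out positive. Even when the true value is negative, the chance that at least one actor releases grows quickly with the number of actors; no shared information is required.

```python
import random

def p_release(true_value, noise_sd, n_actors, trials=50_000):
    """Probability that at least one of n_actors unilaterally releases.

    Each actor independently estimates true_value with Gaussian error
    and releases if their own (noisy) estimate is positive.
    """
    count = 0
    for _ in range(trials):
        if any(random.gauss(true_value, noise_sd) > 0 for _ in range(n_actors)):
            count += 1
    return count / trials

# Releasing is truly harmful (value -1); each actor's estimate has sd 1.
for n in (1, 5, 20):
    print(n, round(p_release(-1.0, 1.0, n), 2))
# Roughly 0.16, 0.58, 0.97: with more independent actors, it becomes
# near-certain that someone errs upward and releases anyway.
```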
...You can figure out if they have the same information as you to see if they are concerned about the same things you are. By looking at the mitigation's pe
The relevance of unilateralist's curse dynamics to info hazards is important and worth mentioning here. Even if you independently do a thorough analysis and decide that the info-benefits outweigh the info-hazards of publishing a particular piece of information, that shouldn't be considered sufficient to justify publication. At the very least, you should privately discuss with several others and see if you can reach a consensus.
I wonder how much the "spend 1 year choosing and 4 years relentless pursuing a project" rule of thumb applies to having a high-impact career. Certain career paths might rely on building a lot of career capital before you can have high-impact, and career capital may not be easily transferable between domains. For example, if you first decide to relentlessly pursue a career in advancing clean meat technology for four years, and then re-evaluate and decide that influencing policymakers with regards to AI safety is the highest-value thing for you to ...
Yes, I accept that result, and I think most EAs would (side note: I think most people in society at large would, too; if this is true, then your post is not so much an objection to the concept of EA as it is to common-sense morality as well). It's interesting that you and I have such different intuitions about this case – I see it as in the category of "being so obvious to me that I wouldn't even have to hesitate to choose." But obviously you have different intuitions here.
Part of what I'm confused about is what the positive case is for giving everyone...
Regarding the first point, signing hypothetical contracts behind the veil of ignorance is our best heuristic for determining how best to collectively make decisions such that we build the best overall society for all of us. Healthy, safe, and prosperous societies are built from lots of agents cooperating; unhappy and dangerous societies arise from agents defecting. And making decisions as if you were behind the veil of ignorance is a sign of cooperation; by contrast, Bob's argument that you should give him a 1/3 chance of being helped even though he wo...
I do think Bob has an equal chance to be in Amy's or Susie's position, at least from his point of view behind the veil of ignorance. Behind the veil of ignorance, Bob, Susie, and Amy don't know any of their personal characteristics. They might know some general things about the world, like that there is this painful disease X that some people get, and there is this other equally painful disease Y that the same number of people get, and that a $10 donation to a charity can in general cure two people with disease Y or one person with disease X. But they don'...
One additional objection that one might have is that if Bob, Susie, and Amy all knew beforehand that you would end up in a situation where you could donate $10 to alleviate either two of them suffering or one of them suffering, but they didn't know beforehand which two people would be pitted against which one person (e.g., it could just as easily be alleviating Bob + Susie's suffering vs. alleviating Amy's suffering, or Bob + Amy's suffering vs. Susie's suffering, etc.), then they would all sign an agreement directing you to send a donation such that you w...
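To make the ex-ante arithmetic behind that agreement explicit (a minimal worked version, assuming each of the three is equally likely to occupy each position): under a standing policy of always funding the two-person side, each person's prior chance of being helped is 2/3; under a policy of always funding the one-person side it is 1/3,

\[
P(\text{helped} \mid \text{fund the two-person side}) = \tfrac{2}{3} > \tfrac{1}{3} = P(\text{helped} \mid \text{fund the one-person side}),
\]

so each of them, not knowing which position they will land in, prefers the two-person policy.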
To add onto the "platforms matter" point, you could tell a story similar to Bostrom's (build up credibility first, then have impact later) with Max Tegmark's career. He explicitly advocates this strategy to EAs in 25:48 to 29:00 of this video: https://www.youtube.com/watch?v=2f1lmNqbgrk&feature=youtu.be&t=1548.
I'd like to hear more about your estimate that another non-human civilization may appear on Earth on the order of 100 million years from now; is this mostly based on the fact that our civilization took ~100 million years to spring up from the first primates?
If there is a high probability of another non-human species with moral value reaching our level of technological capacity on Earth in ~100 million years conditional on our own extinction, then this could lessen the expected "badness" of x-risks in general, and could also have implications for ...
I think one important reason for optimism that you didn't explicitly mention is the expanding circle of moral concern, a la Peter Singer. Sure, people's behaviors are strongly influenced by laziness/convenience/self-interest, but they are also influenced by their own ethical principles, which in a society-wide sense have generally grown better and more sophisticated over time. For the two examples that you give, factory farming and slavery, your view seems to be that (and correct me if I'm wrong) in the future, people will look for more efficient ways to...
The change in ethical views seems very slow and patchy, though - there are something like 30 million slaves in the world today, compared to 3 million in the US at its peak (I don't know how worldwide numbers have changed over time.)
This is a good point; however, I would also like to point out that it could be the case that a majority of "dedicated donors" don't end up taking the pledge, without this becoming a norm. The norm instead could be "each individual should think through for themselves, given their own unique situation, whether or not taking the pledge is likely to be valuable," which could lead to a situation where "dedicated donors" tend not to take the pledge, but not necessarily to a situation where, if you are a "dedicated donor," y...
I guess the argument is that, if it takes (say) the same amount of effort/resources to speed up AI safety research by 1000% and to slow down general AI research by 1% via spreading norms of safety/caution, then plausibly the latter is more valuable due to the sheer volume of general AI research being done (with the assumption that slowing down general AI research is a good thing, which as you pointed out in your original point (1) may not be the case). The tradeoff might be more like going from $1 million to $10 million in safety research, vs. going from ...
Regarding your point (2), couldn't this count as an argument for trying to slow down AI research? I.e., given that the amount of general AI research done is so enormous, even changing community norms around safety a little bit could result in dramatically narrowing the gap between the rates of general AI research and AI safety research?
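To make the scale comparison concrete, here is a toy calculation; the $1 million to $10 million safety figure comes from the comment above, while the size of general AI research spending is a made-up placeholder:

```python
# Toy comparison of the two interventions (illustrative numbers only).
safety_research = 1e6     # $1M/yr on AI safety (figure from the comment above)
general_research = 10e9   # $10B/yr on general AI research (assumed placeholder)

# Option A: grow safety research 10x ($1M -> $10M).
delta_safety = 10 * safety_research - safety_research   # +$9M of safety work

# Option B: slow general AI research by 1%.
delta_general = 0.01 * general_research                 # -$100M of capabilities work

print(f"safety gained: ${delta_safety:,.0f}, capabilities slowed: ${delta_general:,.0f}")
# Under these made-up numbers, the 1% slowdown shifts about 10x more
# research effort than the 10x safety increase, purely because the
# capabilities base is so much larger.
```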
Quick feedback forms for workshops/discussion groups would be nice; most of the workshops I attended didn't offer any opportunity for feedback, and I would have had comments for them.
A guarantee that all the talks/panels will be recorded.
The booklet this year stated that "almost" all the talks would be recorded, which left me worried that, if I missed a talk, I wouldn't be able to watch it in the future (this might just be me). I probably would have skipped more talks and talked to more people if I had a guarantee that all the talks would be recorded.
Also, it would be nice to have a set schedule that didn't change so much during the conference. The online schedule was pretty convenient and was (for the most part) up to date, but people using the physical booklet may have been confused.
I think that adopting your first resolution, together with the commenters' assumption that being a child with malaria is a net-negative experience, can rescue some of the value of AMF. Say in situation 1, a family has a child, Afiya, who eventually gets malaria and dies, and thus has a net-negative experience. Because of this, the family decides to have a second child, Brian, who does not get malaria and lives a full and healthy life. In situation 2, where AMF is taken to have a contribution, a family has just one child, Afiya, who is prevented from ...
I have a question for those who donate to meta-charities like Charity Science or REG to take advantage of their multiplier effect (these charities typically raise ~$5-10 per dollar of expenditure). Do you donate directly towards the operations expenses of these meta-charities? For example, REG's donations page has the default split of your donations as 80% towards object-level charities (and other meta-charities), while 20% is towards REG's operating expenses, which include the fundraising efforts that the multiplier presumably comes from. It seems ...
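To spell out the leverage intuition behind the question, a toy calculation using the figures above (with the multiplier taken as the rough midpoint of the ~$5-10 range):

```python
# Figures from the comment above (approximate).
multiplier = 7.5   # ~$5-10 raised per $1 of meta-charity expenditure (midpoint)
ops_share = 0.20   # default share of a donation going to operating expenses

donation = 100.0
direct = (1 - ops_share) * donation            # $80 passed straight to object-level charities
leveraged = ops_share * donation * multiplier  # $20 of ops spending raising ~$150

print(direct + leveraged)     # ~$230 moved per $100 under the default split
print(donation * multiplier)  # ~$750 if the full $100 went to operating expenses
# The gap is why one might consider donating directly to ops -- assuming
# the multiplier actually holds at the margin.
```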
We miss you!