This is a special post for quick takes by SiebeRozendal. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Ray Dalio is giving out free $50 donation vouchers: tisbest.org/rg/ray-dalio/

Still worked just a few minutes ago

GiveWell is available (search Clear Fund)!

No longer working.

Just did it, still works. You can donate to what looks like any registered US charity, so plenty of highly effective options whether you care about poverty or animal welfare.

Worked for me just now, gave $50 to The Humane League :) 

Worked 20 minutes ago. Process took me ~5 minutes total.

Not that we can do much about it, but I find the idea of Trump being president at a time when we're getting closer and closer to AGI pretty terrifying.

A second Trump term is going to have a lot more craziness and far fewer checks on his power, and I expect it would have significant effects on the global trajectory of AI.

Some initial insight into what this might look like practically: Trump has promised to repeal Biden's executive order on AI (YMMV on how seriously you take Trump's promises).

Really interesting initiative to develop ethanol analogs. If successful, replacing ethanol with a less harmful substance could have a big effect on global health. The CSO of the company (GABA Labs) is Prof. David Nutt, a prominent figure in drug science.

I like that the regulatory pathway might be different from most recreational drugs, which would be very hard to get de-scheduled.

I'm pretty skeptical that GABAergic substances are really going to cut it, because I expect them to have pretty different effects from alcohol. We already have those (L-theanine, saffron, kava, kratom) and they aren't used widely. But who knows, maybe that's just because ethanol-containing drinks have received a lot of optimization in terms of taste, marketing, and production efficiency.

It also seems like finding a good compound by modifying ethanol would be hard, because it's not a great lead compound in terms of toxicity (I expect).

People massively underestimate the damage alcohol causes per use because of how normalised it is.

Agreed. Alcohol is ubiquitous because it’s normalized, and its damaging health effects are glossed over for the same reason (as well as corporate profits).

GABA Labs is a good initiative, I think. I do know kava (a popular drink in parts of Polynesia) acts on GABA receptors and can have similar effects to alcohol in high doses, but I’m not sure what the long-term health effects of kava use are.

Heavy use of kava is associated with liver damage, but it seems much less toxic than alcohol. (I use it in my insomnia stack.)

Hi, I agree EtOH is extremely harmful. However, there are existing medications which act on GABA, many of which are both highly addictive and therefore highly regulated themselves. Barbiturates are a (now outdated) drug class which acts on GABA; others include benzodiazepines and more modern sleep drugs like Zolpidem. All have significant side effects.

This website strikes me as very selective in how scientific it is - for example, "At higher levels (blood ethanol >400mg%, as would occur after drinking a litre of vodka) then these two effects of ethanol – the increase in GABA inhibition and the blockade of glutamate excitation – can combine to produce a lethal level of sedation and respiratory depression. In terms of health impacts, alcohol (strictly speaking, ethanol) is in a class of its own, and very different from GABA." EtOH is not that different from GABA, as you can also overdose and cause respiratory depression and death from GABA inhibition. I would like to see some more peer-reviewed studies around this new drink, and a comparison to placebo (if you're giving people this drink and saying it will enhance "conviviality and relaxation" then it probably will).

As with pretty much anything health-related, there's no quick fix. Things which depress the CNS are addictive, and not that dissimilar from one another. I can see the marketing opportunity for this in the "health food" arena, which makes me more skeptical of this site. I imagine that, if released, it may share the fate of cannabinoid molecules included in all sorts of products (allowed because they are ineffective), or of vapes (a different risk profile from the original substance).

There is a natural alliance that I haven't seen happen between two communities, both of which are in my network: pandemic preparedness and COVID caution. Both want clean indoor air.

The latter group of citizens is a very mixed group, with both very reasonable people and unreasonable 'doomers'. Some people have good reason to remain cautious around COVID: immunocompromised people and their households, or people with a chronic illness, especially my network of people with Long Covid, who frequently (~20%) worsen from a new COVID case.

But these concerned citizens want clean air and are willing to take action to make that happen. Given that the riskiest pathogens tend to also be airborne, like SARS-CoV-2, this would be a big win for pandemic preparedness.

Specifically, I believe both communities are aware of the policy objectives below and are already motivated to achieve them:


1) Air quality standards (CO2, PM2.5) in public spaces.

Schools are especially promising from both perspectives, given that parents are motivated to protect their children and children are the biggest spreaders of airborne diseases. Belgium has already adopted regulations (although very weak, they're a good start), showing that this is a tractable policy goal.

Ideally, air quality standards also incentivize Far UVC deployment, which would create the regulatory certainty for companies to invest in this technology.

Including standards for airborne pathogen concentrations would be great, but I think that has many technical limitations at the moment.


2) Public R&D investments to bring down cost & establish safety of Far UVC

Most of these concerned citizens are actually aware of Far UVC and would support this measure. It appears safe in terms of direct radiation damage, but may create unhealthy compounds (e.g. ozone) by chemically reacting with indoor air particles.

I also believe that governments have good reasons to adopt these policies, given that they would reduce the pressures on healthcare and could reduce the disease burden in developed countries by ~5% if not more.


If anyone wants to be connected to the other side, send me a DM!


*Presumably, more interest groups can be identified that aren't in my network, such as patient groups with lung diseases, or nurses specifically for hospital air quality. Hospital-acquired COVID is a bad and preventable thing.

Another group that naturally could be in a coalition with those two – parents who just want clean air for their children to breathe from a pollution perspective, unrelated to COVID. (In principle, I think many ordinary adults should also want clean air for themselves to breathe due to the health benefits, but in practice I expect a much stronger reaction from parents who want to protect their children's lungs.)

Chevron deference is a legal doctrine that limits the ability of courts to overrule federal agencies. It's increasingly being challenged, and may be narrowed or even overturned this year. https://www.eenews.net/articles/chevron-doctrine-not-dead-yet/

This would greatly limit the ability of, for example, a new regulatory agency on AI Governance to function effectively.


I'm very skeptical of this. Chevron deference didn't even exist until 1984, and the US had some pretty effective regulatory agencies before then. Similarly, many states have rejected the idea of Chevron deference (e.g. Delaware) and I am not aware of any strong evidence that they have suffered 'chaos'. 

In some ways it might be an improvement from the perspective of safety regulation: getting rid of Chevron would reduce the ability of future, less safety-cautious administrations to relax the rules without the approval of Congress. To the extent you are worried about regulatory capture, you should think that Chevron is a risk. I think the main crux is whether you expect Congress or the Regulators to have a better security mindset, which seems like it could go either way.

In general the ProPublica link seems more like a hatchet job than a serious attempt to understand the issue.

I'm not knowledgeable enough to argue this, actually! (So apologies if the main part sounds too confident - I wanted to put the possibility out there)

Monoclonal antibodies can be as effective as vaccines. If they can be given intramuscularly and have a long half-life (like Evusheld, ~2 months), they can act as a prophylactic that needs a booster once or twice a year.

They are probably neglected as a method to combat pandemics.

Their efficacy is easier to evaluate in the lab, because they generally don't rely on people's immune system.

The difficulty here is mass-scale production, which has to be done at great expense in sterile bioreactors, IIRC (my biochem days are way behind me).

Good point

Widespread use would put heavy selection pressure on the pathogen. I suspect the "effective half life" would be much shorter.

Update to my Long Covid report: https://forum.effectivealtruism.org/posts/njgRDx5cKtSM8JubL/long-covid-mass-disability-and-broad-societal-consequences#We_should_expect_many_more_cases_

[UPDATE NOV 2022: turns out the forecast was wrong; incidence (new cases) is decreasing, severity of new cases is decreasing, and significant numbers of people are recovering in the <1 year category. I now expect prevalence to be stagnating/decreasing for a while, and then slowly growing over the next few years.]

I still believe the other sections to be roughly correct, including long-term immune damage from COVID for 'fully recovered' people.

This is a small write-up of when I applied for a PhD in Risk Analysis 1.5 years ago. I can elaborate in the comments!

I believed doing a PhD in risk analysis would teach me a lot of useful skills to apply to existential risks, and that it might allow me to directly work on important topics. I worked as a Research Associate on the qualitative side of systemic risk for half a year. I ended up not doing the PhD because I could not find a suitable place, nor do I think pure research is the best fit for me. However, I still believe more EAs should study something along the lines of risk analysis, and it's an especially valuable career path for people with an engineering background.

Why I think risk analysis is useful:

EA researchers rely a lot on quantification, but use a limited range of methods (simple Excel sheets or Guesstimate models). My impression is also that most EAs don't understand these methods well enough to judge when they are useful or not (my past self included). Risk analysis expands this toolkit tremendously, and teaches things like the proper use of priors, the underlying assumptions of different models, and common mistakes in risk models.

The field of Risk Analysis

Risk analysis is a pretty small field, and most of it is focused on risks of limited scope and risks that are easier to quantify than the risks EAs commonly look at. There is the Society for Risk Analysis (SRA), which publishes the journal Risk Analysis (the main journal of this field). I found most of their study topics not so interesting, but it was useful to get an overview of the field, and there were some useful contacts to make (1). The EA-aligned org GCRI is active and well-established in SRA, but no other EA orgs are.

Topics & advisers

I hoped to work on GCR/x-risk directly, which substantially reduced my options. It would have been useful to just invest in learning a method very well, but I was not motivated to research something not directly relevant. I think it's generally difficult to make an academic career as a general x-risk researcher, and it's easier to research one specific risk. However, I believe this leaves open a number of cross-cutting issues.

I have a shortlist of potential supervisors I considered/contacted/was in conversation with, including in public policy and philosophy. I can provide this list privately on request.

Best grad programs:

The best background for grad school seems to be mathematics or, more specifically, engineering. (I did not have this, which excluded a lot of options.) The following two programs seemed most promising, although I only investigated PRGS in depth:

-- 


(1) For example, I had a nice conversation with the famous psychology researcher Paul Slovic, who now does research into the psychology involved in mass atrocities. https://psychology.uoregon.edu/profile/pslovic/

Aww yes, people writing about their life and career experiences! Posts of this type seem to have some of the best ratio of "how useful people find this" to "how hard it is to write" -- you share things you know better than anyone else, and other people can frequently draw lessons from them.

I'm predicting a 10-25% probability that Russia will use a weapon of mass destruction (likely nuclear) before 2024. This is based on only a few hours of thinking about it with little background knowledge.

Russian pro-war propagandists are hinting at the use of nuclear weapons, according to the latest episode of the BBC podcast Ukrainecast: "What will Putin do next?" https://podcastaddict.com/episode/145068892

There's a general sense that, in light of recent losses, something needs to change. My limited understanding sees 4 options:

  1. Continue on the current course despite mounting criticism. Try to make the Ukrainians' lives difficult by targeting their infrastructure, limit losses until winter, and try to reorganize during winter. This seems a pretty good option for now, even though I doubt Russia can really shore up its deeply set weaknesses. They can probably prepare to dig in, and threaten and punish soldiers for fleeing. This wouldn't go well for either party long-term, but Russia might bet on outlasting/undermining Western support. Probability: 40%?

  2. Negotiation: I don't think Putin seriously wants this, as even the status quo could be construed as a loss. Ukraine will have a strong bargaining position and demand a lot. Undesirable option. Maybe 10%? 20%? (Metaculus predicts 8% before 2023: https://www.metaculus.com/questions/10046/ukraine--russia-peace-talks-2022/)

  3. Full-scale mobilisation of the population and the economy. This is risky for Putin: there's supposedly a large anti-war sentiment in Russian culture, a legacy of the enormous losses during the Second World War. People don't like to join a poorly equipped, poorly managed, and losing army, even if it were a good cause. This may still be chosen; Putin may be misinformed and badly misreading the public's sentiment. I have no idea how this would develop internally. I doubt it will make a big difference in the course of the war, except by prolonging the war a bit. Maybe 25%? Maybe 50% if Putin underestimates public resistance.

  4. Escalation by other means: I don't know how many options Russia has. Chemical weapons, an electromagnetic pulse, a single tactical nuclear strike on the battlefield for deterrence, multiple nuclear strikes for strategic reasons, a population strike for deterrence. In the mind of Putin, I can see this as preferable: it leads to a potential military advantage and has limited risk of destabilising his internal power base. I don't know how the international community would respond to this, nor how Putin thinks the international community would respond. In my (uninformed) view, only China can make a real difference here, as the West already has stringent sanctions. I don't know how China would respond to this. They wouldn't like it, but I think the West won't really punish China for its support in the short term. I'd say on this inside view, 10-25% seems reasonable. I'm setting the point estimate at 15%.

I have a concept of paradigm error that I find helpful.

A paradigm error is the error of approaching a problem through the wrong, or an unhelpful, paradigm. For example, trying to quantify the cost-effectiveness of a longtermist intervention when there is deep uncertainty.

Paradigm errors are hard to recognise, because we evaluate solutions from our own paradigm. They are best uncovered by people outside of our direct network. However, it is more difficult to productively communicate with people from different paradigms as they use different language.

It is related to what I see as:

  • parameter errors (= the value of parameters being inaccurate)
  • model errors (= wrong model structure or wrong/missing parameters)

Paradigm errors are one level higher: they are the wrong type of model.


Relevance to EA

I think a sometimes-valid criticism of EA is that it approaches problems with a paradigm that is not well-suited for the problem it is trying to solve.

I think I call this "the wrong frame".

"I think you are framing that incorrectly etc"

E.g. in the UK there is often discussion of whether LGBT lifestyles should be taught in school, and at what age. This makes them seem weird and makes teaching them seem risky. But this is the wrong frame: LGBT lifestyles are typical behaviour (for instance, there are more LGBT people than adherents of many major world religions). Instead the question is: at what age should you discuss, say, relationships in school? There is already an answer here - I guess children learn about "mummies and daddies" almost immediately. Hence, at the same time you talk about mummies and daddies, you talk about mummies and mummies, and single dads, and everything else.

By framing the question differently the answer becomes much clearer. In many cases I think the issue with bad frames (or models) is a category error.

I like this. I think I use the wrong models when trying to solve challenges in my life.

Large study: Every reinfection with COVID increases risk of death, acquiring other diseases, and long covid.

https://twitter.com/dgurdasani1/status/1539237795226689539?s=20&t=eM_x9l1_lFKqQNFexS6FEA

We are still going to see a lot more issues with COVID, including massive amounts of long COVID.

This will affect economies worldwide, as well as EAs personally.

Ah sorry I'm not going to do that, mix of reasons. Thanks for offering it though :)