Dawn Drescher

Cofounder @ GoodX
Working (6-15 years of experience)

Bio


I’m working on Impact Markets – markets to trade nonexcludable goods. 

I have a conversation menu and a Calendly for you to pick from! 

If you’re also interested in less directly optimific things – such as climbing around and on top of boulders or amateurish musings on psychology – then you may enjoy some of the posts I don’t cross-post from my blog, Impartial Priorities.

Pronouns: Ideally they, but she and he are fine too. I also still go by Denis and Telofy in various venues.

How others can help me

GoodX needs: advisors, collaborators, and funding. The funding can be for our operation or for retro funding of other impactful projects on our impact markets.

How I can help others

I’m happy to do calls, give feedback, or go bouldering together, also virtually. You can book me on Calendly.

Please check out my Conversation Menu!

Sequences
2

Impact Markets
Researchers Answering Questions

Comments
430

Another note on 4: A friend of mine contracted Covid at EAGx and says that she knows of many people who have too. That’s just one data point from almost a thousand attendees, and her bubble may be unusually Covidious precisely because it is a bubble with Covid in it. Even so, I don’t think Microcovid overestimates the risk of infection.

So far I’ve taken an individual’s risk of infection and multiplied it by the number of attendees. But of course these people infect each other, so the infections are very much not independent. I would imagine that a given EAG has either very few or very many infections, so a reliable estimate would require tracking the numbers across several events and averaging over them.

But a relatively Covid-conscious event like the Less Wrong Community Weekend may also cause, or be correlated with, more people reporting their Covid infections afterwards. A more Covid-oblivious EAG probably suffers underreporting afterwards – maybe 10x from the same dynamic that keeps people from filling in feedback surveys unless they are strongly coerced to, and maybe another 10x from bad tests and bad sample-taking.

Some people haven’t figured out the routine of rubbing the swab first against the tonsils and then sticking it through the nose all the way down toward the throat. Plus there are order-of-magnitude differences in the sensitivity of the self-tests. Bad tests and bad sample-taking can easily make a 10x difference among the people who think they just had a random cold. So maybe a follow-up survey should ask about symptoms rather than confirmed positive tests, be embedded among various other feedback questions (so that it’s not filled in only by people with Covid), and then be used as a sample to extrapolate to the whole attendee population.
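To make the multiplicative correction explicit, here’s a toy sketch; every number in it, including the reported-case count, is a made-up assumption rather than data:

```python
# Toy illustration of the underreporting correction above; all numbers are
# assumptions, including the reported-case count.

reported_positives = 5         # hypothetical: confirmed positives reported back after an event
survey_nonresponse = 10        # assumed factor: infected attendees who never report back
test_and_sampling_misses = 10  # assumed factor: infections missed by insensitive tests or poor swabbing

estimated_true_infections = reported_positives * survey_nonresponse * test_and_sampling_misses
print(f"Estimated true infections: ~{estimated_true_infections}")
```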

I’ve been trying to find studies on medical conferences, but the only one I could find had various safety mechanisms in place, very much unlike EAGx, so it’s unsurprising that very few people got Covid. (I’m assuming that the vaccination statuses of attendees are similar between a medical conference and an EAG.)

I see! Yeah, I don’t have an overview of the bottlenecks in the biosecurity ecosystem, so that’s good to consider.

  1. Yeah, but I can see Guy’s point that there’s some threshold where an event is short enough that a social intervention is cheaper than a technical one, so that different solutions are best for different contexts. But I don’t really have an opinion on that.
  2. Hmm, true. Testing for fever maybe?
  3. Thanks!
  4. My model (based on Microcovid) would’ve predicted about 9 cases (3–26) for a 1,000-person event around now in Berlin. (A rough sketch of the arithmetic is below.) I don’t have easy access to the data for London back then, but the case count must’ve been higher. With these numbers we “only” lose about a year of EA time in expectation and have less than one case of long Covid.
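Here’s a minimal sketch of how that kind of estimate composes; the per-person risk and the days lost per case are stand-in assumptions for illustration, not the values from my actual Guesstimate/Microcovid model:

```python
# Minimal sketch of this kind of estimate; the inputs are stand-in
# assumptions, not the values from the actual Guesstimate/Microcovid model.

attendees = 1_000
p_infection = 0.009      # assumed per-attendee infection probability (Microcovid-style)
days_lost_per_case = 40  # assumed days lost per case (isolation, symptoms, long-Covid expectation), in 24 h days

expected_cases = attendees * p_infection                  # ~9 cases
expected_days_lost = expected_cases * days_lost_per_case  # ~360 days, i.e. roughly a year of continuous EA time

print(f"Expected cases: {expected_cases:.0f}")
print(f"Expected EA time lost: {expected_days_lost / 365:.1f} years")
```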

At EAGx Berlin just now, a few others and I discussed 80/20 interventions.

My first suggestion was mandatory FFP2 or better masks indoors, plus many outdoor activities, ideally with some sort of protection from rain – a roof or a tent.

Another participant anticipated the objection that it’s harder to read facial expressions with masks on, which could make communication harder for people who are good at using and reading facial expressions. A counter-suggestion was hence to mandate masks only for the listeners during talks, since that is a time when they might fill a room with Covid spray but don’t need to talk.

Improving air quality is another good option – one I invest in a lot at home but haven’t modeled. It feels particularly suitable for EA offices and group houses.

The Less Wrong Community Weekend in Berlin was successful with very rigorous testing: every day, with the most sensitive test available.

All in all, I would just like to call for a lot more risk modeling to get a better idea of the magnitude of the risks to EA and to EAs, and then for proportionate solutions (technical or social) to mitigate the various sources of risk. Some solutions may be better suited to short events, others to offices and group houses.


This all seems easily important enough that someone should quantitatively model it.

I did the math for the last EAG London, though I underestimated the attendee count by 3–4x. (Does anyone know the number?)

Without masks, the event cost about 6 years of EA time in expectation (continuous time, i.e. 24-hour days, not 8-hour workdays). Maybe it was worth it, maybe not; it’s hard to tell. But if everyone had worn N95 or better masks, that would’ve been down to about 17 days. They could’ve kept close to 100% of the value of EAG while reducing the risk to less than 1%.

If the event really had more like 900 attendees, then that’s almost 20 years of EA time that is lost in expectation through these events. I’m not trying to model this conservatively; I don’t know in which direction I’m erring.
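For transparency about the arithmetic, here’s a back-of-the-envelope sketch; the mask reduction factor and the undercount multiplier are assumptions I’m reading off the figures above, not outputs of the Guesstimate model itself:

```python
# Back-of-the-envelope sketch of the figures above; the reduction factor and the
# undercount multiplier are assumptions read off the numbers in the text, not
# outputs of the Guesstimate model itself.

loss_without_masks_years = 6.0  # expected EA time lost at my (too low) attendee count
mask_risk_reduction = 0.008     # assumed: N95-or-better masks leave <1% of the transmission risk
attendee_undercount = 3.5       # I underestimated the attendee count by roughly 3–4x

loss_with_masks_days = loss_without_masks_years * 365 * mask_risk_reduction
loss_at_true_count_years = loss_without_masks_years * attendee_undercount

print(f"With masks: ~{loss_with_masks_days:.0f} days of EA time lost")        # ~17 days
print(f"At ~900 attendees, no masks: ~{loss_at_true_count_years:.0f} years")  # ~21 years
```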

One objection that I can see is that maybe this increases the time lost from EAGs by only some low single-digit factor, and since the event is only 3 days long, that doesn’t seem so bad on an individual level. (Some people spend over a week on a single funding application, so if it’s rejected, maybe that comes with a similar time cost.)

Another, somewhat cynical objection is that there’s a risk that someone never contributes two decades of their life to the effective altruism enterprise because they were put off by having to wear a mask and so never talked to someone who could have answered their objections to EA. Maybe losing a person like that is as bad as a few EAs losing a total of 20 years of their lives. This seems overly cynical to me, but I can’t easily argue against it either.

My Guesstimate model is here.

Indeed! I think this transition from impact markets to other sources of funding can happen quite naturally. A new, unknown researcher may enjoy the confidence of some close friends in her abilities but has little to show that would convince major funders that she can do high-quality research. But once she has used impact markets to fund her first few high-quality pieces of research, she will have a good track record to show and can plausibly access other sources of funding. Then she can choose between them freely and is no longer dependent on impact markets alone.

I’m quite confused about that too. I don’t know of any real statistics, but my informal impression is that almost everyone is on board with not speeding up capabilities work. There’s a vague argument floating around that actively impeding capabilities work would do nothing but burn bridges (which doesn’t seem right in full generality, since animal rights groups manage to influence whole production chains to switch to more humane methods that form a new market equilibrium). But all the pitches for AI safety work stress the ways in which the groups will be careful not to work on anything that might differentially benefit capabilities and will keep everything secret by default unless they’re very sure it won’t enhance capabilities. So I think my intuition that this is the dominant view is probably not far off the mark.

But the recruiting for non-safety roles is (seemingly) in complete contradiction to that. That’s what I’m completely confused about. Maybe the idea is that the organizations can be pushed in safer directions if there are more safety-conscious people working at them, so that it’s good to recruit EAs into them, since they are more likely to be safety-conscious than random ML people. (But the EAs you’d want to recruit for that are not the usual ML EAs but probably ML EAs who are also really good at office politics.) Or maybe these groups are actually very safety-conscious, are years ahead of everyone else, and are only gradually releasing stuff they completed years ago to keep investors happy while keeping all the really dangerous stuff completely secret.

An alternative that we’ve been toying with is a sort of reverse charity fundraiser. You do your thing, and when you’re done, you publish it, and then there’s a reward button where anyone can reward you for it. “Your thing” can be doing research, funding research, copyediting research, etc.

I love the simplicity of it, but we have a few worries about incentives for collaboration when participants have different levels of social influence. Still, it’s a very promising model in my mind.

By “measurement” do you mean the measurements of metrics that the payouts are conditional on (we don’t use those) or measurements of the extent to which the prizes encourage efforts that would not otherwise have happened (we’d be very interested in those)?

I haven’t, but I’m aware of a forthcoming report by Rethink Priorities that covers prize contests. If you find more research on that – or on the similar dynamic of hopes for acquisitions incentivizing entrepreneurship – I’d be very interested!

Prizes: Yes, totally! I’ve found that people understand what I’m getting at with impact markets much more quickly if I frame it as a prize contest!

That point system sounds interesting. Maybe you can explain it again in our call as I’m not sure I follow the explanation here. But we’re currently betting on indirect normativity, so it won’t be immediately applicable for us.
