Dawn Drescher

Cofounder @ GoodX
2296 karma · Joined Nov 2014 · Working (6-15 years) · 8303 Bassersdorf, Switzerland
impactmarkets.io

Bio

I’m working on Impact Markets – markets to trade nonexcludable goods. (My profile.)

I have a conversation menu and a Calendly for you to pick from! 

If you’re also interested in less directly optimific things – such as climbing around and on top of boulders or amateurish musings on psychology – then you may enjoy some of the posts I don’t cross-post from my blog, Impartial Priorities.

Pronouns: Ideally they, but she and he are fine too. I also still go by Denis and Telofy in various venues.

How others can help me

GoodX needs: advisors, collaborators, and funding. The funding can be for our operation or for retro funding of other impactful projects on our impact markets.

How I can help others

I’m happy to do calls, give feedback, or go bouldering together, also virtually. You can book me on Calendly.

Please check out my Conversation Menu!

Sequences (2)

Impact Markets
Researchers Answering Questions

Comments (526)

I’m considering crossposting this prize, but is it still funded? If you already received the funding, will you be able to pay out even if it’s clawed back? Thank you!

Hiii! Thanks! Yeah, what’s a market and what isn’t… I’m used to a rather wide definition from economics, but we did briefly consider whether we should use a different brand or a sub-brand (like ranking.impactmarkets.io) for this project.

The idea is that, if all goes well, we roll out something like the carbon credit markets but for all positive impact via a three-phase process:

  1. In the first phase we want to work with just the donor impact score. Any prizes will be attached to such a score and basically take the shape of follow-on donations. This is probably a market to the extent that Metaculus is a market. They say “sort of” and prefer the term “prediction aggregator.” So maybe we’re currently an impact aggregator.
  2. In the second phase, we want to introduce a play-money currency that we might call “impact credit” or “impact mark.” The idea is to reward people with high scores with something they can transfer within the platform, so that the incentives for donors are controlled less and less by the people with the prize money and increasingly by the top donors who have proved their mettle and received impact credits as a result (there’s a toy sketch of this below the list). We’ll start moving in that direction if we get something like 100+ monthly active users. Metaculus would probably consider this an “impact market,” and Manifold Markets even has it in its name. But rebranding away from “market” and then maybe rebranding back towards “market” a year later seemed unwise to us.
  3. Eventually, and this brings us to the third phase, we want to understand the legal landscape well enough to allow trade of impact credits against dollars or other currencies. We would like for impact credits to enjoy the same status that carbon credits already have. They should function like generalized carbon credits. I think at this point the resulting market will be widely considered a literal “market.” This is much more of a long-term vision though.
(Figure: the three phases)
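To make phase two a bit more concrete, here’s a minimal toy sketch in Python. The account fields, the award rule, and the numbers are all illustrative assumptions of mine, not how the platform is actually implemented:

```python
from dataclasses import dataclass

# Hypothetical toy model of phase-two impact credits: a play-money balance that
# the platform awards in proportion to a donor's score and that donors can then
# transfer within the platform. Names, rules, and numbers are illustrative only.

@dataclass
class Account:
    name: str
    donor_score: float = 0.0      # phase one: donor impact score
    impact_credits: float = 0.0   # phase two: transferable play money


def award_credits(account: Account, rate: float = 1.0) -> None:
    """Mint credits in proportion to the donor impact score (assumed rule)."""
    account.impact_credits += rate * account.donor_score


def transfer(sender: Account, receiver: Account, amount: float) -> None:
    """Move credits between accounts; nothing leaves the platform in phase two."""
    if amount <= 0 or amount > sender.impact_credits:
        raise ValueError("invalid transfer amount")
    sender.impact_credits -= amount
    receiver.impact_credits += amount


alice = Account("alice", donor_score=42.0)
bob = Account("bob")
award_credits(alice)          # alice now holds 42.0 credits
transfer(alice, bob, 10.0)    # alice: 32.0, bob: 10.0
```

The point is just that credits get minted against the donor score and then circulate only inside the platform; trading them for dollars is what phase three would add.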

Hi Jack! Wonderful to hear that you’ve been reading up on all these sources already! 

Rethink Priorities has identified lots of markers that we can draw on to get a bit of a probabilistic idea of whether invertebrates are sentient. I wonder which of these might carry over to digital sentience. (It’s probably hard to arrive at strong opinions on this, but if we did, I’d also be worried that those could be infohazardous.) The concept of reinforcement learning (testable through classical conditioning) is a marker that I think is particularly fundamental. When I talk about sentience, I typically mean positive and negative feedback or phenomenal consciousness. That is intimately tied to reinforcement learning because an agent has no reason to value or disvalue certain feedback unless it is inherently un-/desirable to the agent. This doesn’t need to be pain or stress (just as we can also correct someone without causing them pain or stress), and it’s unclear how intense it is anyway, but at least when classical conditioning behavior is present, I’m extra cautious, and when it’s absent, I’m less worried that the system might be conscious.

You’ve probably seen Tobias’s typology of s-risks. I’m particularly worried about agential s-risks where the AI, though it might not have phenomenal consciousness itself, creates beings that do, such as emulations of animal brains. But there are also incidental s-risks, which are particularly worrying if the AI ends up in a situation where it has to create a lot of aligned subagents, e.g., because it has expanded a lot and is incurring communication delays. But generally I think you’ll hear the most convincing arguments in 1:1 conversations with people from CLR, CRS, probably MIRI, and others.

I think the way most will have interpreted it at this point is that it’s the whole amount that a person plans to donate in a given year, so the first. The second would be difficult to assess. For now, some of the charities or projects you consider best will not be listed yet, but I’m hoping that a lot of donors will convince their favorite charities to join the platform so they can register their donations to them.

I imagine that at any point in time either big tech or AI safety orgs/funders will be the more cash-constrained party – or at least that at any point in time we’ll have an estimate of which party will be more cash-constrained during crunch time.

If the estimate shows that safety efforts will be more cash-constrained, it stands to reason that we should mission-hedge by investing (in some smart fashion) in big tech stock. If the estimate shows that big tech will be more cash-constrained (e.g., because the AI safety bottlenecks are elsewhere entirely), it stands to reason that we should perhaps even divest from big tech stock, even at a loss.

But if we’re in a situation where it doesn’t seem sensible to divest, then investing is probably also not so bad at the current margin.
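Spelled out as a crude decision rule (a toy sketch only – the probability estimate and the 0.5 threshold are placeholders I made up, not anything we’ve actually estimated):

```python
# Toy decision rule for the mission-hedging reasoning above.
# All inputs are illustrative placeholders.

def hedging_stance(p_safety_more_constrained: float, threshold: float = 0.5) -> str:
    """If we expect AI safety funders to be the more cash-constrained party
    during crunch time, lean towards holding big tech stock as a mission hedge;
    otherwise lean towards divesting from it."""
    if p_safety_more_constrained > threshold:
        return "invest in big tech stock (mission hedge)"
    return "divest from big tech stock"


print(hedging_stance(0.7))  # -> "invest in big tech stock (mission hedge)"
```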

I’m leaning towards thinking that investing is not so bad at the current margin, but I was surprised by the magnitude of the effect of divesting according to Paul Christiano’s analysis, so I could easily be wrong about that.

Indeed, I think I’m in the same predicament. Around 2020, largely due to bio anchors, I started to think much more about how I could apply myself to x- and s-risks from AI rather than priorities research. I tried a few options but found that direct research probably didn’t have sufficiently quick feedback loops to keep my attention for long. What stuck in the end was improving the funding situation through impactmarkets.io, which is already one or two steps removed from the object-level work. I imagine that if I didn’t have any CS background, it would’ve been even harder to find a suitable angle.

Quite plausible, thanks! I’ve been wondering whether the “infinity shades” from infinite ethics may play into this. Then again I don’t know many people who are very explicit about their particular way of resolving infinite ethics.

I haven’t read the novel, so I can’t comment on that part but, as I commented above, “I can think of plenty of scenarios that are ‘realistic’ by AI safety standards… Scenarios that are inspired by stuff that terrorists do all the time when they’re fighting powerful governments, so lots of precedents in history, and whose realism only suffers a bit because they would not be technically possible for humans with today’s technology.”

I can think of plenty of scenarios that are “realistic” by AI safety standards… Scenarios that are inspired by stuff that terrorists do all the time when they’re fighting powerful governments, so lots of precedents in history, and whose realism only suffers a bit because they would not be technically possible for humans with today’s technology.

“Maybe you think that multipolar scenarios are likely to result in AIs that are almost but not completely aligned?”

Exactly! Even GPT-4 sounds pretty aligned to me, maybe dangerously so. And even if that appearance has nothing to do with whatever real goals it might have deep down (if it’s a mesa-optimizer), it could still lead to trouble in adversarial games with less seemingly aligned agents.
