All of Dawn Drescher's Comments + Replies

Only half a person per sandal I think!

Even scandal-prone individuals can't survive in a vacuum. (You may be thinking of sandals, not scandals?)

2
Guy Raveh
24d
Is it definitely established that a living person is required for every scandal?

We have sympathies towards both movements, and consider ourselves to take the middle path. We race forward and accelerate as quickly as possible while mentioning safety.

Mentioning safety is a waste of resources that you could direct toward attaching propulsion to asteroids to get them here faster.

In fact, asteroids will inevitably hit Earth sooner or later, and if they kill humanity, clearly they are superior to humanity. The true masters of our future lightcone are the asteroids. That which can be destroyed by asteroids ought to be destroyed by asteroids.

True progress is in speeding the inevitable. Resistance is futile.

This post is also a great info hazard. It risks causing impostors with sub-146 IQs (2009 LW survey) to feel adequate!

That's a good point. Time discounting (a “time premium” for the past) has also made me very concerned about the welfare of Boltzmann brains in the first moments after the Big Bang. It might seem intractable to still improve their welfare, but if there is any nonzero chance, the EV will be overwhelming!

But I also see the value in longtermism because if these Boltzmann brains had positive welfare, it'll be even more phenomenally positive from the vantage points of our descendants millions of years from now!

There is a Swiss canton Appenzell Innerrhoden (AI). Maybe we can hide there and trick the AI into thinking it already invaded it?

Scandals don't just happen in a vacuum. You need to create the right conditions for them. So I suggest:

  1. We spread concern about the riskiness of all altruistic action so that conscientious people (who are often not sufficiently scandal-prone) self-select out of powerful positions and open them up to people with more scandal potential.
  2. We encourage more scathing ad-hom attacks on leadership so that those who take any criticism to heart self-select out of leadership roles.
  3. We make these positions more attractive to scandal-prone people by abandoning cost-effe
... (read more)

Scandals don't just happen in a vacuum

Has anyone tested this? Because if we could create them in a vacuum, that might save a lot of energy usually lost to air resistance, and thus be more effective.

5
Guy Raveh
25d
FTFY

Great summary!

You probably base “Even though this use of funds was unintentional and sounds extremely sketchy, FTX's general counsel testified that FTX's terms of service did not prohibit it” on:

The government didn’t want to focus you on that. Why? Again, the only witness who said he had read the terms of service was Can Sun, the general counsel who had helped to draft it. Even though he was very careful in what he told you, he admitted that nowhere do the terms of service contain language that prevents FTX from loaning customer fiat deposits to Alameda or

... (read more)
1
SteadyPanda
25d
Jason's comment and my response here are relevant.
4
Ben_West
1mo
Thanks! I updated the summary and included a link to this comment – let me know if you think it's inaccurate.

Yep, failing fast is nice! So you were just skeptical on priors because any one new thing is unlikely to succeed?

2
NunoSempere
1mo
Yes, and also I was extra-skeptical beyond that because you were getting too little early traction.

Yep, that makes a lot of sense. I've done donation forwarding for < 10 projects once, and it was already quite time-consuming!

I've (so far) read the first post and love it! But when I was working full-time on trying to improve grantmaking in EA (with GiveWiki, aggregating the wisdom of donors), you mostly advised me against it. (And I am actually mostly back to ETG now.) Was that because you weren't convinced by GiveWiki's approach to decentralizing grantmaking or because you saw little hope of it succeeding? Or something else? (I mean, please answer from your current perspective; no need to try to remember last summer.)

2
NunoSempere
1mo
Iirc I was skeptical but uncertain about GiveWiki/your approach specifically, and so my recommendation was to set some threshold such that you would fail fast if you didn't meet it. This still seems correct in hindsight.

Oh, great! I interpreted “This is an offer to donate this amount to the project on the condition that it eventually becomes active” to mean that the project might not become active for any number of reasons, only one of them being the funding goal.

2
BrianTan
1mo
Got it, no problem!

Oh, brilliant! USDC would also be my top choice. But I'm basically paying into a DAF, and so can't get a refund if this project doesn't succeed, right? That would have a high cost in option value since I don't know whether my second-best donation opportunity will be on Manifund. Is there a way to donate to Rethink Priorities or the Center on Long-Term Risk through Manifund? That would lower-bound the cost in option value.

4
Austin
1mo
In principle we'd be happy to forward donations to RP, CLTR, or other charities (in principle any 501(c)(3); it doesn't have to be EA); in practice the operational costs of tracking these things mean that we don't really want to do this except for larger donation sizes. Although since EA Philippines has set its minimum project threshold at a fairly low $500, I'd 95% expect them to succeed, so this wouldn't come up.
2
BrianTan
1mo
Hi Dawn, thanks for your interest in donating to EA Philippines! I'm unsure if I understood you correctly, but EA PH's minimum funding goal on Manifund is $500, as the team would appreciate any amount of funding. So I think if your donation is at least $400, then the project would indeed be funded, and you wouldn't need to be refunded. Let me know if this answers your concern!

From what I've learned about Shapley values so far, this seems to mirror my takeaway. 

 

Nice! To be sure, I want to put an emphasis on any kind of attribution being an unnecessary step in most cases rather than on the infeasibility of computing it.

There is complex cluelessness, nonlinearity from perturbations at perhaps even the molecular level, and a lot of moral uncertainty (because even though I think that evidential cooperation in large worlds can perhaps guide us toward solving ethics, that'll take enormous research efforts to actually make p... (read more)

Great work; I hope you'll succeed with the fundraiser! Do you have blockchain-based donation options like, e.g., Rethink Priorities offers? Ideally one of the major Ethereum layer 2s or Solana? Ty!

5
Austin
1mo
Hey Dawn! At Manifund we support crypto-based donations for adding to your donation balance; USDC over Eth or Solana is preferred but we could potentially process other crypto depending on the size you have in mind. We generally prefer to do this for larger donation sizes (eg $5k+) because of the operational overhead, but I'd be willing to make an exception in this case to help support the EA Philippines folks. More details here.

Shapley values are a great tool for divvying up attribution in a way that feels intuitively just, but I think for prioritization they are usually an unnecessary complication. In most cases you can only guess what they might be because you can't mentally simulate the counterfactual worlds reliably, and your set of collaborators contains billions of potentially relevant actors. But as EAs we can “just” choose whatever action will bring about the world history with the greatest value regardless of any impact attribution to ourselves or anyone. 
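A toy sketch of why this is hard in practice: exact Shapley attribution averages each actor's marginal contribution over every possible join order, which grows factorially with the number of collaborators. Everything below (the player names, the value function, the numbers) is a made-up illustration, not something from the thread:

```python
from itertools import permutations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal
    contribution over all n! join orders (fine only for tiny n)."""
    totals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value(coalition)
            coalition = coalition | {p}
            totals[p] += value(coalition) - before
    n_fact = factorial(len(players))
    return {p: t / n_fact for p, t in totals.items()}

# Hypothetical toy game: "funder" and "org" only create value
# together (100); an "advisor" adds 10 on top of that.
def v(coalition):
    if {"funder", "org"} <= coalition:
        return 100 + (10 if "advisor" in coalition else 0)
    return 0

print(shapley_values(["funder", "org", "advisor"], v))
# funder and org split the bulk symmetrically; the advisor gets ~3.3
```

With billions of potentially relevant actors and no way to evaluate `value` on counterfactual coalitions, the exact computation is hopeless, which is part of the point above: for prioritization you can skip the attribution step entirely.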

I like the... (read more)

1
Sarah Weiler
1mo
From what I've learned about Shapley values so far, this seems to mirror my takeaway. I'm still giving myself another 2-3 days until I write up a more fleshed-out response to the commenters who recommended looking into Shapley values, but I might well end up just copying some version of the above; so thanks for formulating and putting it here already!

I do not understand this point but would like to (since the stance I developed in the original post went more in the direction of "EAs are too individualist"). If you find the time, could you explain or point to resources explaining what you mean by "the core from cooperative game theory" and how that links to (non-)individualist perspectives, and to impact modeling?

Very glad to read that, thank you for deciding to add that piece to your comment :)!

I use “impact” to mean “net impact,” basically.

An output could be a piece of forest that is protected from logging. An outcome is some amount of CO2 converted into O2 that otherwise wouldn't be. But also a different piece of forest getting logged that otherwise wouldn't be. And a bunch of r-strategist animals dying of parasites, starvation, and predation who would otherwise not have been born. Some impact on the workers who now have to travel further to log trees. And much more.

The attempt to trade off all of these effects (perhaps using an open-source repository of composable probabilistic models like Squiggle) is what results in an impact estimate.
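The kind of composition described here can be sketched as a tiny Monte Carlo model. All distributions, numbers, and units below are invented for illustration and are not from the comment (a language like Squiggle would express the same thing more declaratively):

```python
import random

random.seed(0)
N = 10_000

def sample_net_impact():
    """One Monte Carlo draw of the net impact of protecting a
    piece of forest (all numbers and units are made up)."""
    co2_benefit = random.lognormvariate(4.0, 0.5)     # CO2 drawn down
    leakage = random.uniform(0.2, 0.8) * co2_benefit  # logging displaced elsewhere
    side_effects = random.gauss(5.0, 3.0)             # workers, wild animals, etc.
    return co2_benefit - leakage - side_effects

draws = sorted(sample_net_impact() for _ in range(N))
mean = sum(draws) / N
p5, p95 = draws[int(0.05 * N)], draws[int(0.95 * N)]
print(f"mean impact {mean:.1f} with 90% interval [{p5:.1f}, {p95:.1f}]")
```

The output is a distribution over net impact rather than a single attribution, which is all that's needed to compare interventions.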

Frankie made a nice explainer video for that!

What a market does, idealizing egregiously, is let people with special knowledge or insight invest in things early: less informed people (some of whom have more capital) can then watch the valuations and invest in projects with high and rising valuations, or some other valuation-based marker of quality. A process of price discovery.

AngelList, for example, facilitates that. They have a no-action letter from the SEC (and the startups on AngelList have at least a Regulation D exemption, I imagine), so they didn't h... (read more)

Thanks for putting the Exotic Tofu Project on my screen! I also like all the others.

We (me and my cofounder) run yet another “impact certificates” project. We started out with straightforward impact certificates, but the legal hurdles for us and for the certificate issuers turned out too high and possibly (for us) insurmountable, at least in the US. 

We instead turned to the system that works for carbon credits. These are not so much traded on the level of the certificate or impact claim but instead there are validators that confirm that the impact has... (read more)

2
Elizabeth
2mo
GiveWiki just looks like a list of charities to me; what's the additional thing you are doing?

Yeah, we're not marketing people, so we've probably made plenty of subtle mistakes. But we have compiled a CRM of hundreds of leads from EA Hub, Donor Lottery, conferences, etc. and cold-emailed them; gone through charities to try to get in touch with their donors; posted in various grantmaker Slacks and Discords; posted in various AI safety groups; held talks at many EA and LW meetups, events, and conferences; networked in the refi space; tried to tap into our personal contacts to get in touch with grantmakers; run a newsletter; produced an explainer vide... (read more)

Yes, exactly! Yeah, I'm quite sad that it hasn't caught on. 

I think there are just too few donors at the moment. That makes it hard for us to reach them because they're so few and far between, and makes it easy for them to find plenty of great funding gaps among the projects they already know, so they never need to search for more.

We'll keep the platform running, so if AI safety goes mainstream or another billionaire funder pops up, we're ready to serve them with our recommendations.

3
Chris バルス
2mo
I also imagine that it hasn't caught on because people simply don't know it exists. Have you considered cold-emailing people who could plausibly find this valuable, for example from a list such as this one? Or asking orgs such as these if you could give a quick keynote presentation for them (if there is potential EV), with a Q&A at the end? My intuition just tells me that this is an obviously valuable service for many, but, like many good SaaS products, it may die not because it isn't good but because it doesn't reach a critical mass of users soon enough.

This sounds like someone who doesn't want to actually give you feedback; my guess is they're scared of insulting you, of some kind of legal liability, or something like that.

Oh, interesting… I'm autistic, and I've heard that autistic people give off subtly weird “uncanny valley”–type vibes even if they mask well. So I mostly just assume it's that. Close friends of mine who surely felt perfectly free to tell me anything were also at a loss to describe it. They said the vibes were less when I wore a ponytail rather than wearing my hair down, but they couldn't des... (read more)

1
Richard_Leyba_Tejada
2mo
I find your comments fun and authentic. I like your approach of voicing your concern when you don't know something; it helps filter for good managers.

Haha! But that sounds tame compared to what I imagined! I like math core and Fantômas, though, just haven't quite warmed up to extratone yet.

Oooh, Brighter Than Today is cooool! :-D

Awww, that's so relatable! But I'm very curious now: What is Holly's and your music taste? :-D (Math core? Extratone? Fantômas? Same song on repeat for a year?)

9
Holly Morgan
2mo
A lot of pop, a lot of musicals... I'd like to say that my music taste has become a lot more sophisticated over the past 12 years, but that would be false. And shout-out to this old favourite from @Raemon ✨.

Individually, altruists (to the extent that they endorse actually doing good) can make a habit of asking themselves and others what risks they may be overlooking, dismissing, or downplaying.

I think this works well when done in private, but asking around among friends is difficult for people who don't have an extensive EA network, and it risks that they inadvertently only ask within their filter bubble.

Asking around publicly, e.g., on the Forum, is something that I and probably others too have mostly come to regret. Currently it's still uncommon to try t... (read more)

Thanks! Yeah, I could imagine that particular aid programs beat GiveDirectly, but they'll be even harder to find, be confident in, and make legible to others. But if someone has the right connections, then that'd be amazing too! (I'm mostly thinking of donors here whose bar is GiveDirectly and not (say) Rethink Priorities.)

I quite often listened to interviews with Noam Chomsky on the topic, and yeah, my takeaway was typically that the situation is too complex and intricate for me to try to understand it by just listening to a few hours of interviews… If I were a history and policy buff, that'd be different. :-/

I've been reading your comments with great interest. Thank you! Do you maybe want to write a top-level post on the topic? Since it's December (but also generally), I'd be quite interested in whether you can think of donation opportunities that are sufficiently leveraged to plausibly be competitive with (say) GiveWell top charities. Perhaps there are highly competent peace-building organizations in Israel. (I imagine few EAs will have the right expertise for direct work on this, and the ones who do will not benefit much from the post – but money is flexible.)

4
ezrah
4mo
From what I've seen, peace building initiatives are more a matter of taste than proven effectiveness. And I would wait until after the war to understand which orgs are able to effectively deliver aid to Gazans who have been affected, things will be clearer then. Now everything is complicated by the political / military situation.
4
Guy Raveh
5mo
I, very sadly, cannot recommend any org operating in this area. I'm a big fan of Standing Together, so maybe them, but I'm very pessimistic about the chances of the peace process. [Edit: I'd rather say I'm not optimistic enough. One of the major determinants of the future here will be foreign (and in particular, American) pressure, so maybe lobbying the US government to push for a peace accord would be good?]

If I were a non-Israeli person wanting to donate, I'd focus on aid for Gaza, but there too I cannot point to any organisation able to reliably move goods or funds into the hands of the citizens who need them. The situation is dire and very hard to deal with, in both the short and long term. I'd be happy to have better recommendations.

Felt down due to various interactions with humans. So I turned to Claude.AI and had a great chat!


Hi Claude! I noticed that whenever someone on X says something wrong and mean about EA, it messes with my brain, and I can only think about how I might correct the misunderstanding, which leads to endless unhelpful mental dialogues, when really I should rather be thinking about more productive and pleasant things. It's like a DoS attack on me: Just pick any random statement, rephrase it in an insulting way, and insert EA into it. Chances are it'll be false. Bam... (read more)

Claude.ai summary for those in a hurry:

The article argues in defense of the effective altruism movement, citing its accomplishments in areas like global health, animal welfare, and AI safety, while contending that criticisms of it are overblown. It makes the case that effective altruism's commitment to evidence-based altruism focused on the most tractable interventions to help others is a positive development worth supporting, despite some mistakes. The article concludes the movement has had significant positive impact that outweighs the negatives.

I'll read... (read more)

1.a. and b.: Reframing it like that sounds nice! :-D Seems like you solved your problem by getting shoes that are so cool, you never want to take them off! (I so wouldn't have expected someone to have a problem with that though…) I usually ask for feedback, and often it's something like “Idk, the vibe seemed off somehow. I can't really explain it.” Do you know what that could be?

2. I'm super noncompetitive… When it comes to EA jobs, I find it reassuring that I'm probably not good at making a good first impression because it reduces the risk that I replace ... (read more)

2
Yonatan Cale
2mo
1.a and b. This sounds like someone who doesn't want to actually give you feedback; my guess is they're scared of insulting you, of some kind of legal liability, or something like that. My focus wouldn't be on trying to interpret the literal words (like "what vibe") but rather on making them comfortable enough to give you actual, real feedback. This is a skill in itself, which you can practice. Here's a draft to maybe start from: "Hey, I think I have some kind of blind spot in interviews where I'm doing something wrong, but I don't know what it is, and a friend told me I probably won't notice it myself and had better get feedback from someone else. Any chance you'd tell me more about what didn't work for you? I promise not to be insulted or complain about not passing or anything like that."

2. This is super common. Like, I'm not making this up: I've had dozens of conversations, and this is a common thing to worry about, and it's probably true for many other people interviewing for the same position. My own approach to this is to tell the interviewer what I'm worried about, and also the reasons I might not be a good match for whatever this is. For example, "I never worked with some-tech-you-use". If after hearing my worries they still want to hire me, that's great, and I don't need to pretend to know anything. I also think this somewhat filters for hiring managers who appreciate transparency (and not pretending everything is perfect), which is pretty important to me personally. (Also, reasonable managers understand you will need onboarding time, and if they don't understand that, then I prefer they don't hire me.) This all totally might be a Yonatan-thing, idk.

4. Yeah, I think (?) I'd aim for something short that I could do right after, so my brain will understand this is a positive reinforcement and not just an unrelated fun evening? This is just my own intuition. I guess it would work if I'd meet a friend and they'd keep saying "good job for interviewing! now let'

That makes a lot of sense! I've been working on that, and maybe my therapist can help me too. It's gotten better over the years, but I used to feel intense shame for years afterward over mistakes I made or might've made in such situations, which is why I'm still afraid of my inner critic. Plus I feel rather sick on interview days, which is probably the stress.

I have thoughts on how to deal with this. My priors are this won't work if I communicate it through text (but I have no idea why). Still, seems like the friendly thing would be to write it down.

 

My recommendation on how to read this:

  1. If this advice fits you, it should read as "ah obviously, how didn't I think of that?". If it reads as "this is annoying, I guess I'll do it, okay...." - then something doesn't fit you well, I missed some preference of yours. Please don't make me a source of annoying social pressure
  2. Again, for some reason this works better w
... (read more)

Haha! Where exactly do you disagree with me? My mind autocompleted that you'd proffer this objection: 

If you work at a 9x job, chances are that you're in an environment where most employees are there for altruistic reasons but prioritize differently, so that they believe the job is one of the best things you can do. Then you'll be constantly exposed to social pressure to accept a lower salary, less time off, more overtime, etc., which will cut into the donations, risk burnout, and reduce opportunities to learn new skills.

What do you think?

I'm a... (read more)

Thanks! Yeah, I've included that in the application form in one or two cases in the hope it'll save time (well, not only time – I find interview processes super stressful, so if I'm going to get rejected or decline, I'd like (emotionally) for that to happen as early as possible) but I suppose that's too early. I'll ask about it later like you do. I haven't gotten so far yet with any impact-focused org.

4
Yonatan Cale
5mo
Seems to me from your questions that your bottleneck is specifically finding the interview process stressful. I think there's stuff to do about that, and it would potentially help with lots of other tradeoffs (for example, you'd happily interview in more places, get more offers, know what your alternatives are, …). Wdyt?

Same… Anna Riedl recommended working for something that is at least clearly net positive, a product that solves some important problem like scaling Ethereum or whatever. Emotionally, the exact order of magnitude of the impact probably doesn't make a proportional difference, so the motivation will be there, and the actual impact can flow from the donations. Haven't tried it yet, but I will if I go back to ETG.

6
Yonatan Cale
5mo
I might disagree with this. I know, this is controversial, but hear me out (and only then disagree-vote :P )

So,

  1. Some jobs are 1000x+ more effective than the "typical" job. Like charities.
  2. So picking one of the super-impactful ones matters, compared to the rest. Like charities.
  3. But picking something that is 1x or 3x or 9x doesn't really matter, compared to the 1000x option. (Like charities.)
  4. Sometimes people go for a 9x job and sacrifice things like "having fun" or "making money" or "learning" (or something else that is very important to them). This is the main thing I'm against, so if you can avoid this, great. For example, if you're also excited to work on Ethereum, and they have a great dev community that mentors you and so on.
  5. I do think it's important to work on something that you enjoy.
      1. So I do think you should have a bar of "do enough good to have a good time", but this is a super subjective bar, and I wouldn't lose track of the ball that is "your motivation" (super underrated btw).
      2. I'll also note that (imo) most (though not all) companies are net positive. So having a bar of "net positive", if it works for you emotionally, won't reduce many options, and I think it's great.
  6. (And I recommend sometimes checking if there's a high-impact job that could use your skill set and applying.)
      1. (I'm also not against doing high-risk high-reward things, or projects that aren't "recognized" by EA orgs, such as open source stuff.)
  7. I do personally think I have a bar of not taking harmful jobs, not ruining coordination, things like that.
  8. Oh, and: While you're working on something fun, learning, and making money, I do think (in the typical case) you could see yourself as "preparing" for a potential very high impact job you might have in the future, and I think our community would be better off if people would take this path happily and without guilt. Just don't forget to check for the high-impact jobs sometimes. I have many m

Yeah, ETG seems really strong to me at the moment! What do you think is a good threshold for the average EA in terms of annual USD donations that they can make at which they should seriously consider ETG? 

4
Yonatan Cale
5mo
TL;DR: The orgs know best whether they'd rather hire you or get the amount you'd donate. You can ask them. I'd apply sometimes and ask whether they prefer me or the next-best candidate plus however much I'd donate. They have skin in the game and an incentive to answer honestly. I don't think it's a good idea to try guessing this alone.

I wrote more about this here; some orgs also replied (but note this was some time ago).

(If you're asking for yourself and not theoretically, then I'd ask whether you've applied to all (or some?) of the positions that you think are really high impact. Because if not, then once you know which ones would accept you, and once you can ask the hiring managers things like this, your dilemma will become much easier, almost trivial.)

AI Safety Events is one of the projects where we expanded the time window because they were on hiatus in early 2023. The events that got evaluated were from 2022. Otherwise yes. (But just to be clear, this is about the retroactive evaluation results mentioned at the bottom of the post.)

First, it could make sense not to focus too much on the credits. The ranking has to bottom out somewhere, and that's where the credits come in: to establish a track record for our donors. The ranking itself is better thought of as the level of endorsement of a project, weighted by the track record of the endorsing donors.

We're still thinking about how we want to introduce funding goals and thus some approximation of short-term marginal utility. At the moment all projects discount donations at the same rate. Ideally we'd be able to use something like the ... (read more)

2
calebp
6mo
So is it reasonable to interpret your process as saying FAR was similarly impactful to AI safety events over the last year?

Oh true! I was only thinking of financial support for struggling projects and project developers, but those kinds of support are also super valuable!

Rethink Wellbeing is definitely on board with mental health for EAs being an important cause area. 

I don't think personal identity makes too much sense, so preventing the extinction of EA-related values (or maybe even some wider set of prosocial, procivilizational values) could be an underexplored cause area. Some sort of EA crisis fund could be a way to achieve that, but also archival of important insights and such.

Yeah… I've been part of another community where a few hundred people were scammed out of some $500+ and left stranded in Nevada. (Well, Las Vegas, but Nevada sounds more dramatic.) Hundreds of other people in the community sprang into action within hours, donated and coordinated donation efforts, and helped the others at least get back home.

Only Nonlinear attempted something similar in the EA community. (But I condemn exploitative treatment of employees of course!) Open Phil picked up an AI safety prize contest, and I might miss a few cases. I was very disa... (read more)

4
Ebenezer Dukakis
7mo
There was a supportive response, to some degree, in the wake of FTX:

https://forum.effectivealtruism.org/posts/BesfLENShzSMeb7Xi/community-support-given-ftx-situation
https://forum.effectivealtruism.org/posts/7PqmnrBhSX4yCyMCk/effective-peer-support-network-in-ftx-crisis-update
https://forum.effectivealtruism.org/posts/gbjxQuEhjAYsgWz8T/a-job-matching-service-for-affected-ftxff-grantees

Maybe now is a good time to review that response and figure out what could've been done better. For example, maybe some person or organization could've made a point of reaching out individually to each and every FTX grantee.

For me, the OP resonates well beyond just the FTX stuff, though. There's an element of making personal sacrifices for the greater good that exists in EA, which doesn't exist in the same way for an academic discipline. I myself found the lack of supportiveness in EA very alienating, and it's a major reason why I'm not very involved these days.

One idea is something like a "Basefund for mental health", to provide free or low-cost therapy for EAs, possibly group therapy. EAs have already made the argument that mental health could be an effective cause area. If that's true, "mental health for EAs" could be a doubly effective cause area. Beyond the first-order benefit of improving someone's mental health, you can improve someone's mental health in a way that enables them to do good.
4
quinn
7mo
Open Phil did some lost-wages stuff after the FTXsplosion, but I think it was evaluated case by case, and some people may have been left behind.

I've been part of another community where a few hundred people were scammed out of some $500+ and left stranded in Nevada. (Well, Las Vegas, but Nevada sounds more dramatic.) Hundreds of other people in the community sprang into action within hours, donated and coordinated donation efforts, and helped the others at least get back home.

I think if this happened with, say, a conference you would see this kind of response within EA. A group of people stuck in a specific place is very different from the FTX collapse.

I’ll phrase this as a question to not be off-vibe: Would you like to create accounts with AI Safety Impact Markets so that you’ll receive a regular digest of the latest AI safety projects that are fundraising on our platform? 

That would save them time since they don’t have to apply to you separately. If their project descriptions left open any questions you have, you can ask them in the Q & A section. You can also post critiques there, which may be helpful for the project developers and other donors.

Conversely, you can also send any rejected projects our way, especially if you think they’re net-positive but just don’t meet your funding bar.

2
Linch
8mo
Thanks for the offer! I think we won't have the capacity (or tbh, money) to really work on soliciting new grants in the next few weeks, but feel free to ping Caleb or me again in, say, a month from now!

Friends of mine rented a venue for 1200+ people for a weekend in a central location in Germany for some €60k iirc. There was no catering, but we were allowed to buy food at nearby diners (with great vegan options) and bring it into the venue.

Do the practices around forcing catering companies on organizers maybe vary by country, so that EAGs could move to nearby countries where the venues are cheaper and more chill? Maybe countries with smooth air and rail connections from (e.g.) London?

There’s also the Zuzalu model. Zuzalu itself may be an option. Or som... (read more)

Yeah, agreed! I haven’t thought about impact markets through Linch’s particular lens. (I’m cofounder of AI Safety Impact Markets.) 

Distinguishing different meanings of costly: Impact markets make applying for funding more costly in terms of reputation, in the sense that people might write public critiques of proposals. But they make applying less costly in terms of time, in the sense that you can post one standardized application rather than one bespoke one per funder.

But most people I’ve talked to don’t consider costly in terms of reputation to be a ... (read more)

FYI: Our (GoodX’s) project AI Safety Impact Markets is a central place where everyone can publish their funding applications, and AI safety funders can subscribe to them and fund them. We have ~$350k in total donation budget (current updated number) from interested donors.

(If you’re a donor interested in supporting early-stage AI safety projects and you’re interested in this crowdsourced charity evaluator, please sign here.)

I’m considering crossposting this prize, but is it still funded? If you already received the funding, will you be able to pay out even if it’s clawed back? Thank you!

3
christian
11mo
Hi Dawn, yes, Metaculus will ensure tournament winners receive their prizes. 

Hiii! Thanks! Yeah, what's a market and what isn't… I'm used to a rather wide definition from economics, but we did briefly consider whether we should use a different brand or a sub-brand (like ranking.impactmarkets.io or so) for this project.

The idea is that, if all goes well, we roll out something like the carbon credit markets but for all positive impact via a three-phase process:

  1. In the first phase we want to work with just the donor impact score. Any prizes will be attached to such a score and basically take the shape of follow-on donations. This is probably a
... (read more)