I’m working on impact markets – markets to trade nonexcludable goods. (My profile.)
I have a conversation menu and a Calendly for you to pick from!
If you’re also interested in less directly optimific things – such as climbing around and on top of boulders or amateurish musings on psychology – then you may enjoy some of the posts I don’t cross-post from my blog, Impartial Priorities.
Pronouns: Ideally she or they. I also still go by Denis and Telofy in various venues.
GoodX needs: advisors/collaborators for marketing, and funding. The funding can be for our operations or for retro funding of other impactful projects on our impact markets. We're a PBC and prefer SAFE investments over donations.
I’m happy to do calls, give feedback, or go bouldering together, also virtually. You can book me on Calendly.
Frankie made a nice explainer video for that!
What a market does, to idealize egregiously, is let people with special knowledge or insight invest in things early. Less informed people (some of whom have more capital) can then watch the valuations and invest in projects with high and rising valuations, or some other valuation-based marker of quality: a process of price discovery.
AngelList, for example, facilitates that. They have a no-action letter from the SEC (and the startups on AngelList presumably rely on at least a Regulation D exemption), so they didn't have to register as a broker-dealer to be allowed to match startups with investors. I think they have some funds that are led by seasoned investors, so the newbie investors can follow the seasoned ones by investing in their funds. Or some mechanism of that sort.
We're probably not getting a no-action letter, and we don't have the money yet to start the legal process to get our impact credits registered with the CFTC. So instead we recognized that in the above example investors are treating valuations basically like scores. So we're just using scores for now. (Some rich people say money is just for keeping score. We're not rich, so we use scores directly.)
The big advantage of actual scores (rather than using monetary valuations like scores) is that it's legally easy. The disadvantage is that we can't pitch GiveWiki to profit-oriented investors.
So unlike AngelList, we're not giving profit-oriented investors the ability to follow more knowledgeable profit-oriented investors, but we're allowing donors/grantmakers to follow more knowledgeable donors/grantmakers. (One day, with the blessing of the CFTC, we can hopefully lift that limitation.)
We usually frame this as a process of three phases:
Thanks for putting the Exotic Tofu Project on my screen! I also like all the others.
We (me and my cofounder) run yet another “impact certificates” project. We started out with straightforward impact certificates, but the legal hurdles for us and for the certificate issuers turned out to be too high and possibly (for us) insurmountable, at least in the US.
We instead turned to the system that works for carbon credits. These are not so much traded at the level of the individual certificate or impact claim; instead, validators confirm that the impact has happened according to certain standards and then pay out the impact credits (or carbon credits) associated with that standard.
That system seems more promising to us as it has all the advantages of impact certificate markets but also the advantage that one party (e.g., us) can fight the legal battle in the US once for this impact credit (and can even rely on the precedent of carbon credits), and thereby pave the way for all the other market participants that come after and don't have to worry about the legalities anymore. There are already a number of non-EA organizations that are working toward a similar vision.
Even outside such restrictive jurisdictions as the US, this system has the advantage that it allows for deeper liquidity on the impact credit markets (compared to the auctions for individual impact certificates). But the US is an important market for EA and AI safety, so we couldn't just ignore it even if it weren't for this added benefit.
We started bootstrapping this system with GiveWiki in January of last year. But over the course of the year we've found it very hard to find anyone who wanted to use the system as a donor/grantmaker. Most of the grantmakers we were in touch with had lost their funding in Nov. 2022; others wanted to wait until the system was mature; and many smaller donors had no trouble finding great funding gaps without our help.
We will keep the platform running, but we'll probably have to wait for the next phase of funding overhang, when there are more grantmakers and they actually have trouble finding their funding gaps.
(H/t to Dony for linking this thread to me!)
Yeah, we're not marketing people, so we've probably made plenty of subtle mistakes. But we have compiled a CRM of hundreds of leads from EA Hub, Donor Lottery, conferences, etc. and cold-emailed them; gone through charities to try to get in touch with their donors; posted in various grantmaker Slacks and Discords; posted in various AI safety groups; held talks at many EA and LW meetups, events, and conferences; networked in the refi space; tried to tap into our personal contacts to get in touch with grantmakers; run a newsletter; produced an explainer video for those who don't like to read so much; etc. We also collabed with GWWC on a shoutout in their newsletter.
Over the course of a year I probably contacted some 500+ people and collected ~50 expressions of interest. But only 5 of them replied to our feedback survey, and none of those 5 ended up using the platform.
So while I imagine there are probably more than 50 donors in AI safety, there probably aren't many more, perhaps double that number, or else they're all far away geographically and socially. I sourced the first 10+ just from my friends, which took a week or so. The rest trickled in very slowly, one conference 1:1 at a time or with every 50th cold email. So 50 is probably already a good chunk of them. (Not all of them were interested in using the platform for their giving, but those who were represented an aggregate donation budget of $700k.) And they probably all had an easy time giving away their donation budgets, so they never needed to look for more projects to give to.
Groups like StakeOut.AI want to do more mainstream outreach for AI safety. That could create a new source of users for us!
Yes, exactly! Yeah, I'm quite sad that it hasn't caught on.
I think there are just too few donors at the moment. That makes it hard for us to reach them because they're so few and far between, and makes it easy for them to find plenty of great funding gaps among the projects they already know, so they never need to search for more.
We'll keep the platform running, so if AI safety goes mainstream or another billionaire funder pops up, we're ready to serve them with our recommendations.
This sounds like someone who doesn't want to actually give you feedback; my guess is they're scared of insulting you, of incurring some kind of legal liability, or something like that.
Oh, interesting… I'm autistic, and I've heard that autistic people give off subtly weird “uncanny valley”–type vibes even if they mask well. So I mostly just assume it's that. Close friends of mine who surely felt perfectly free to tell me anything were also at a loss to describe it. They said the vibes were weaker when I wore my hair in a ponytail rather than open, but they couldn't describe it. (Once I transition more, I hope people will just attribute the vibes to my probably-unfortunately-slightly-imperfect femininity and not worry about it. ^.^ I just need to plant enough weirdness lightning rods. xD)
But he was US-based at the time, and I've heard employers in the US are much more careful with giving feedback than around here, so maybe it was just guardedness in that case.
I like your template! I remember another series of interviews where I easily figured out what the problems were (unless they were pretenses). I think I'm quite attuned (by dint of social anxiety) to subtle indications of disappointment and such. When I first mentioned earning to give in an interview, I noticed a certain hesitancy and found out that it's because the person was looking for someone who has an intrinsic motivation for building hardware for supply chain optimization rather than someone who does it for the money. But in other cases I'm clueless, so the template can come into action!
My own approach to this is to tell the interviewer what I'm worried about, and also the reasons that I might not be a good match for whatever this is. For example, "I never worked with some-tech-you-use". If after hearing my worries they still want to hire me, that's great, and I don't need to pretend to know anything. I also think this somewhat filters for hiring managers that appreciate transparency (and not pretending everything is perfect), which is pretty important to me personally.
Oh yes, I love this! I think I've done this in virtually every interview simply because I actually didn't know something. One interviewer even asked me whether I knew the so-and-so design pattern. I asked what that was, and then concluded that I had never heard of it. Good call too, because that thing turned out to be ungoogleable. Idk whether he made it up or whether it was an invention of his CS professor, but being transparent about such things has served me well. :-D
Examples of things that might worry you:
I think for me it's mostly about what the other people in the room will think about me, not about consequences for me. I'm also afraid of playing games with friends or strangers for the same reason even though my blunders in such games wouldn't realistically have any consequences for me. :-/
My training with actual interviews will have to wait though because I found a great ETG-oriented EA-run company that is basically my best-case employer. :-D My personal growth will have to continue not down the path of becoming braver but down the path of understanding gas optimization in Uniswap v3. ^.^
Thank you so much for all your ideas! (How is your work going? :-D)
Haha! But that sounds tame compared to what I imagined! I like math core and Fantômas, though, just haven't quite warmed up to extratone yet.
Oooh, Brighter Than Today is cooool! :-D
Awww, that's so relatable! But I'm very curious now: What is Holly's and your music taste? :-D (Math core? Extratone? Fantômas? Same song on repeat for a year?)
Individually, altruists (to the extent that they endorse actually doing good) can make a habit of asking themselves and others what risks they may be overlooking, dismissing, or downplaying.
I think this works well when done in private, but asking around among friends is difficult for people who don't have an extensive EA network, and it carries the risk that they inadvertently ask around only within their filter bubble.
Asking around publicly, e.g., on the Forum, is something that I and probably others too have mostly come to regret. Currently it's still uncommon to try to red-team your own interventions publicly, so when someone does do it, the intervention is not perceived as particularly well red-teamed but as particularly risky.
This could be avoided by making such red-teaming a lot more common, but that is hard. Perhaps a dedicated subforum could help too, one where only people interested in helping with such red-teaming efforts see the posts.
Thanks! Yeah, I could imagine that particular aid programs beat GiveDirectly, but they'll be even harder to find, be confident in, and make legible to others. But if someone has the right connections, then that'd be amazing too! (I'm mostly thinking of donors here whose bar is GiveDirectly and not (say) Rethink Priorities.)
I quite often listened to interviews with Noam Chomsky on the topic, and yeah, my takeaway was typically that the situation is too complex and intricate for me to try to understand it by just listening to a few hours of interviews… If I were a history and policy buff, that'd be different. :-/
I use “impact” to mean “net impact,” basically.
An output could be a piece of forest that is protected from logging. An outcome is some amount of CO2 converted into O2 that otherwise wouldn't have been. But also a different piece of forest getting logged that otherwise wouldn't have been. And a bunch of r-strategist animals dying of parasites, starvation, and predation who would otherwise not have been born. Some impact on the workers who now have to travel further to log trees. And much more.
The attempt to trade off all of these effects (perhaps using an open-source repository of composable probabilistic models like Squiggle) is what results in an impact estimate.
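As a toy illustration (in Python rather than Squiggle, and with entirely made-up distributions and magnitudes, not real estimates), netting out effects like those above via Monte Carlo sampling might look something like this:

```python
import random

def sample_net_impact():
    """Draw one sample of the net impact of protecting a piece of forest.

    Every distribution and magnitude below is an illustrative assumption,
    expressed in rough tCO2-equivalent units for the sake of the example.
    """
    co2_sequestered = random.lognormvariate(0, 0.5)         # CO2 kept out of the air
    leakage = -random.uniform(0.2, 0.8) * co2_sequestered   # other forest logged instead
    wild_animal_effect = random.gauss(-0.1, 0.05)           # more r-strategist suffering
    worker_effect = random.gauss(-0.02, 0.01)               # loggers travel further
    return co2_sequestered + leakage + wild_animal_effect + worker_effect

# Aggregate many samples into a distribution over the net impact.
samples = [sample_net_impact() for _ in range(10_000)]
mean_impact = sum(samples) / len(samples)
```

The point is not the specific numbers but the structure: each side effect, positive or negative, enters the same model, and the impact estimate is the distribution over their sum rather than the headline output alone.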