Even scandal-prone individuals can't survive in a vacuum. (You may be thinking of sandals, not scandals?)
We have sympathies towards both movements, and consider ourselves to take the middle path. We race forward and accelerate as quickly as possible while mentioning safety.
Mentioning safety is a waste of resources that you could direct toward attaching propulsion to asteroids to get them here faster.
In fact, asteroids will inevitably hit Earth sooner or later, and if they kill humanity, clearly they are superior to humanity. The true masters of our future lightcone are the asteroids. That which can be destroyed by asteroids ought to be destroyed by asteroids.
True progress is in speeding the inevitable. Resistance is futile.
That's a good point. Time discounting (a “time premium” for the past) has also made me very concerned about the welfare of Boltzmann brains in the first moments after the Big Bang. It might seem intractable to still improve their welfare, but if there is any nonzero chance, the EV will be overwhelming!
But I also see the value in longtermism because if these Boltzmann brains had positive welfare, it'll be even more phenomenally positive from the vantage points of our descendants millions of years from now!
There is a Swiss canton Appenzell Innerrhoden (AI). Maybe we can hide there and trick the AI into thinking it already invaded it?
Scandals don't just happen in a vacuum. You need to create the right conditions for them. So I suggest:
Scandals don't just happen in a vacuum
Has anyone tested this? Because if we could create them in a vacuum, that might save a lot of energy usually lost to air resistance, and thus be more effective.
Great summary!
You probably base “Even though this use of funds was unintentional and sounds extremely sketchy, FTX's general counsel testified that FTX's terms of service did not prohibit it” on:
...The government didn’t want to focus you on that. Why? Again, the only witness who said he had read the terms of service was Can Sun, the general counsel who had helped to draft it. Even though he was very careful in what he told you, he admitted that nowhere do the terms of service contain language that prevents FTX from loaning customer fiat deposits to Alameda or
Yep, failing fast is nice! So you were just skeptical on priors because any one new thing is unlikely to succeed?
Yep, that makes a lot of sense. I've done donation forwarding for < 10 projects once, and it was already quite time-consuming!
I've (so far) read the first post and love it! But when I was working full-time on trying to improve grantmaking in EA (with GiveWiki, aggregating the wisdom of donors), you mostly advised me against it. (And I am actually mostly back to ETG now.) Was that because you weren't convinced by GiveWiki's approach to decentralizing grantmaking or because you saw little hope of it succeeding? Or something else? (I mean, please answer from your current perspective; no need to try to remember last summer.)
Oh, great! I interpreted “This is an offer to donate this amount to the project on the condition that it eventually becomes active” to mean that the project might not become active for any number of reasons, only one of them being the funding goal.
Oh, brilliant! USDC would also be my top choice. But I'm basically paying into a DAF, and so can't get a refund if this project doesn't succeed, right? That would have a high cost in option value since I don't know whether my second-best donation opportunity will be on Manifund. Is there a way to donate to Rethink Priorities or the Center on Long-Term Risk through Manifund? That would lower-bound the cost in option value.
From what I've learned about Shapley values so far, this seems to mirror my takeaway.
Nice! To be sure, I want to put an emphasis on any kind of attribution being an unnecessary step in most cases rather than on the infeasibility of computing it.
There is complex cluelessness, nonlinearity from perturbations at perhaps even the molecular level, and a lot of moral uncertainty (because even though I think that evidential cooperation in large worlds can perhaps guide us toward solving ethics, that'll take enormous research efforts to actually make p...
Great work; I hope you'll succeed with the fundraiser! Do you have blockchain-based donation options like, e.g., Rethink Priorities offers? Ideally one of the major Ethereum layer 2s or Solana? Ty!
Shapley values are a great tool for divvying up attribution in a way that feels intuitively just, but I think for prioritization they are usually an unnecessary complication. In most cases you can only guess what they might be because you can't mentally simulate the counterfactual worlds reliably, and your set of collaborators contains billions of potentially relevant actors. But as EAs we can “just” choose whatever action will bring about the world history with the greatest value regardless of any impact attribution to ourselves or anyone.
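For anyone who wants to see what's actually being computed, here's a minimal, purely illustrative sketch (a toy coalition with made-up numbers, not something I'd use for a real prioritization decision):

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Shapley value of each player: their average marginal contribution
    across all orders in which the coalition could have formed."""
    n = len(players)
    result = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(len(others) + 1):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(s | {p}) - value(s))
        result[p] = total
    return result

# Toy example: a funder and a researcher jointly create 10 units of impact;
# neither creates anything alone.
v = lambda s: 10.0 if s == frozenset({"funder", "researcher"}) else 0.0
print(shapley_values(["funder", "researcher"], v))
# {'funder': 5.0, 'researcher': 5.0}
```

Even in this trivial case you have to specify what every sub-coalition would have achieved on its own, which is exactly the counterfactual knowledge that's usually missing in practice.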
I like the...
I use “impact” to mean “net impact,” basically.
An output could be a piece of forest that is protected from logging. An outcome is some amount of CO2 converted into O2 that otherwise wouldn't have been. But also a different piece of forest getting logged that otherwise wouldn't have been. And a bunch of r-strategist animals, who would otherwise never have been born, dying of parasites, starvation, and predation. Some impact on the workers who now have to travel further to log trees. And much more.
The attempt to trade off all of these effects (perhaps using an open-source repository of composable probabilistic models like Squiggle) is what results in an impact estimate.
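As a purely hypothetical sketch of what such a trade-off could look like in code (every distribution below is invented for illustration, not taken from any real model):

```python
import random

def net_impact_sample():
    """One Monte Carlo draw of the net impact of protecting a piece of forest.
    All parameters are made up for illustration."""
    co2_absorbed = random.lognormvariate(8, 0.5)        # tonnes of CO2 over the project lifetime
    leakage = random.uniform(0.2, 0.8)                   # share of the logging displaced elsewhere
    wild_animal_effect = random.normalvariate(-50, 30)   # welfare effect in tonne-of-CO2 equivalents
    worker_effect = random.normalvariate(-5, 2)          # longer commutes for loggers, same units
    return co2_absorbed * (1 - leakage) + wild_animal_effect + worker_effect

samples = [net_impact_sample() for _ in range(100_000)]
print(sum(samples) / len(samples))  # rough expected net impact
```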
Frankie made a nice explainer video for that!
What a market does, idealizing egregiously, is let people with special knowledge or insight invest in things early. Less informed people (some of whom have more capital) can then watch the valuations and invest in projects with high and increasing valuations, or some other valuation-based marker of quality. A process of price discovery.
AngelList, for example, facilitates that. They have a no-action letter from the SEC (and the startups on AngelList operate under at least a Regulation D exemption, I imagine), so they didn't h...
Thanks for putting the Exotic Tofu Project on my screen! I also like all the others.
We (me and my cofounder) run yet another “impact certificates” project. We started out with straightforward impact certificates, but the legal hurdles for us and for the certificate issuers turned out to be too high and possibly (for us) insurmountable, at least in the US.
We instead turned to the system that works for carbon credits. These are not so much traded at the level of the certificate or impact claim; instead, there are validators that confirm that the impact has...
Yeah, we're not marketing people, so we've probably made plenty of subtle mistakes. But we have compiled a CRM of hundreds of leads from EA Hub, Donor Lottery, conferences, etc. and cold-emailed them; gone through charities to try to get in touch with their donors; posted in various grantmaker Slacks and Discords; posted in various AI safety groups; held talks at many EA and LW meetups, events, and conferences; networked in the refi space; tried to tap into our personal contacts to get in touch with grantmakers; run a newsletter; produced an explainer vide...
Yes, exactly! Yeah, I'm quite sad that it hasn't caught on.
I think there are just too few donors at the moment. That makes it hard for us to reach them because they're so few and far between, and it makes it easy for them to find plenty of great funding gaps among the projects they already know, so they never need to search for more.
We'll keep the platform running, so if AI safety goes mainstream or another billionaire funder pops up, we're ready to serve them with our recommendations.
This sounds like someone who doesn't actually want to give you feedback. My guess is they're scared of insulting you, of some kind of legal liability, or something like that.
Oh, interesting… I'm autistic and I've heard that autistic people give off subtly weird “uncanny valley”–type vibes even if they mask well. So I mostly just assume it's that. Close friends of mine who surely felt perfectly free to tell me anything were also at a loss to describe it. They said the vibes were weaker when I wore my hair in a ponytail rather than down, but they couldn't des...
Haha! But that sounds tame compared to what I imagined! I like mathcore and Fantômas, though, just haven't quite warmed up to extratone yet.
Oooh, Brighter Than Today is cooool! :-D
Awww, that's so relatable! But I'm very curious now: What are Holly's and your music tastes? :-D (Mathcore? Extratone? Fantômas? The same song on repeat for a year?)
Individually, altruists (to the extent that they endorse actually doing good) can make a habit of asking themselves and others what risks they may be overlooking, dismissing, or downplaying.
I think this works well when done in private, but asking around among friends is difficult for people who don't have an extensive EA network, and it risks that they inadvertently only ask within their filter bubble.
Asking around publicly, e.g., on the Forum, is something that I and probably others too have mostly come to regret. Currently it's still uncommon to try t...
Thanks! Yeah, I could imagine that particular aid programs beat GiveDirectly, but they'll be even harder to find, be confident in, and make legible to others. But if someone has the right connections, then that'd be amazing too! (I'm mostly thinking of donors here whose bar is GiveDirectly and not (say) Rethink Priorities.)
I quite often listened to interviews with Noam Chomsky on the topic, and yeah, my takeaway was typically that the situation is too complex and intricate for me to try to understand it by just listening to a few hours of interviews… If I were a history and policy buff, that'd be different. :-/
I've been reading your comments with great interest. Thank you! Do you maybe want to write a top-level post on the topic? Since it's December (but also generally), I'd be quite interested in whether you can think of donation opportunities that are sufficiently leveraged to plausibly be competitive with (say) GiveWell top charities. Perhaps there are highly competent peace-building organizations in Israel. (I imagine few EAs will have the right expertise for direct work on this, and the ones who do will not benefit much from the post – but money is flexible.)
Felt down due to various interactions with humans. So I turned to Claude.AI and had a great chat!
Hi Claude! I noticed that whenever someone on X says something wrong and mean about EA, it messes with my brain, and I can only think about how I might correct the misunderstanding, which leads to endless unhelpful mental dialogues, when really I should rather be thinking about more productive and pleasant things. It's like a DoS attack on me: Just pick any random statement, rephrase it in an insulting way, and insert EA into it. Chances are it'll be false. Bam...
Claude.ai summary for those in a hurry:
The article argues in defense of the effective altruism movement, citing its accomplishments in areas like global health, animal welfare, and AI safety, while contending criticisms of it are overblown. It makes the case that effective altruism's commitment to evidence-based altruism that focuses on the most tractable interventions to help others is a positive development worth supporting, despite some mistakes. The article concludes the movement has had significant positive impact that outweighs the negatives.
I'll read...
1.a. and b.: Reframing it like that sounds nice! :-D Seems like you solved your problem by getting shoes that are so cool, you never want to take them off! (I so wouldn't have expected someone to have a problem with that though…) I usually ask for feedback, and often it's something like “Idk, the vibe seemed off somehow. I can't really explain it.” Do you know what that could be?
2. I'm super noncompetitive… When it comes to EA jobs, I find it reassuring that I'm probably not good at making a good first impression because it reduces the risk that I replace ...
That makes a lot of sense! I've been working on that, and maybe my therapist can help me too. It's gotten better over the years, but I used to feel intense shame over mistakes I made or might've made for years after such situations, so I'm still afraid of my inner critic. Plus I feel rather sick on interview days, which is probably the stress.
I have thoughts on how to deal with this. My prior is that this won't work if I communicate it through text (though I have no idea why). Still, it seems like the friendly thing to do would be to write it down.
My recommendation on how to read this:
Haha! Where exactly do you disagree with me? My mind autocompleted that you'd proffer this objection:
If you work a 9x job, chances are that you're in an environment where most employees are there for altruistic reasons but prioritize differently, so they believe that the job is one of the best things you can do. Then you'll be constantly exposed to social pressure to accept a lower salary, less time off, more overtime, etc., which will cut into your donations, risk burnout, and reduce your opportunities to learn new skills.
What do you think?
I'm a...
Thanks! Yeah, I've included that in the application form in one or two cases in the hope it'll save time (well, not only time – I find interview processes super stressful, so if I'm going to get rejected or decline, I'd like (emotionally) for that to happen as early as possible) but I suppose that's too early. I'll ask about it later like you do. I haven't gotten so far yet with any impact-focused org.
Same… Anna Riedl recommended working on something that is at least clearly net positive, a product that solves some important problem like scaling Ethereum or whatever. Emotionally, the exact order of magnitude of the impact probably doesn't make a proportional difference, so the motivation will be there, and the actual impact can flow from the donations. Haven't tried it yet, but I will if I go back to ETG.
Yeah, ETG seems really strong to me at the moment! What do you think is a good threshold for the average EA in terms of annual USD donations that they can make at which they should seriously consider ETG?
AI Safety Events is one of the projects where we expanded the time window because they were on hiatus in early 2023. The events that got evaluated were from 2022. Otherwise yes. (But just to be clear, this is about the retroactive evaluation results mentioned at the bottom of the post.)
First, it could make sense not to focus too much on the credits. The ranking has to bottom out somewhere, and that's where the credits come into it, to establish a track record for our donors. The ranking itself is better thought of as the level of endorsement of a project weighted by the track record of the endorsing donors.
We're still thinking about how we want to introduce funding goals and thus some approximation of short-term marginal utility. At the moment all projects discount donations at the same rate. Ideally we'd be able to use something like the ...
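As a rough, hypothetical sketch of that weighting idea (invented numbers and a uniform discount; not our actual scoring code):

```python
def project_score(donations, discount_rate=0.5):
    """Hypothetical illustration: `donations` is a list of (amount, donor_track_record)
    pairs in chronological order. Each later donation is discounted at a uniform rate
    and weighted by the donor's track record."""
    return sum(
        amount * track_record * (1 - discount_rate) ** i
        for i, (amount, track_record) in enumerate(donations)
    )

# Two $100 donations; the first donor has the stronger track record.
print(project_score([(100, 0.9), (100, 0.4)]))  # 90 + 20 = 110
```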
Oh true! I was only thinking of financial support for struggling projects and project developers, but those kinds of support are also super valuable!
Rethink Wellbeing is definitely on board with mental health for EAs being an important cause area.
I don't think personal identity makes too much sense, so preventing the extinction of EA-related values (or maybe even some wider set of prosocial, procivilizational values) could be an underexplored cause area. Some sort of EA crisis fund could be a way to achieve that, but also archival of important insights and such.
Yeah… I've been part of another community where a few hundred people were scammed out of some $500+ and left stranded in Nevada. (Well, Las Vegas, but Nevada sounds more dramatic.) Hundreds of other people in the community sprang into action within hours, donated and coordinated donation efforts, and helped the others at least get back home.
Only Nonlinear attempted something similar in the EA community. (But I condemn exploitative treatment of employees of course!) Open Phil picked up an AI safety prize contest, and I might be missing a few cases. I was very disa...
I've been part of another community where a few hundred people were scammed out of some $500+ and left stranded in Nevada. (Well, Las Vegas, but Nevada sounds more dramatic.) Hundreds of other people in the community sprang into action within hours, donated and coordinated donation efforts, and helped the others at least get back home.
I think if this happened with, say, a conference you would see this kind of response within EA. A group of people stuck in a specific place is very different from the FTX collapse.
I’ll phrase this as a question to not be off-vibe: Would you like to create accounts with AI Safety Impact Markets so that you’ll receive a regular digest of the latest AI safety projects that are fundraising on our platform?
That would save them time since they don’t have to apply to you separately. If their project descriptions leave any of your questions open, you can ask them in the Q&A section. You can also post critiques there, which may be helpful for the project developers and other donors.
Conversely, you can also send any rejected projects our way, especially if you think they’re net-positive but just don’t meet your funding bar.
Friends of mine rented a venue for 1200+ people for a weekend in a central location in Germany for some €60k iirc. There was no catering, but we were allowed to buy food at nearby diners (with great vegan options) and bring it into the venue.
Do the practices around forcing catering companies on organizers maybe vary by country, so that EAGs could move to nearby countries where the venues are cheaper and more chill? Maybe countries with smooth air and rail connections from (e.g.) London?
There’s also the Zuzalu model. Zuzalu itself may be an option. Or som...
Yeah, agreed! I haven’t thought about impact markets through Linch’s particular lens. (I’m cofounder of AI Safety Impact Markets.)
Distinguishing different meanings of costly: Impact markets make applying for funding more costly in terms of reputation, in the sense that people might write public critiques of proposals. But they make applying less costly in terms of time, in the sense that you can post one standardized application rather than one bespoke one per funder.
But most people I’ve talked to don’t consider costly in terms of reputation to be a ...
FYI: Our (GoodX’s) project AI Safety Impact Markets is a central place where everyone can publish their funding applications, and AI safety funders can subscribe to them and fund them. We have ~$350k in total donation budget (current updated number) from interested donors.
(If you’re a donor interested in supporting early-stage AI safety projects and you’re interested in this crowdsourced charity evaluator, please sign up here.)
I’m considering crossposting this prize, but is it still funded? If you already received the funding, will you be able to pay out even if it’s clawed back? Thank you!
Hiii! Thanks! Yeah, what’s a market and what isn’t… I’m used to a rather wide definition from economics, but we did briefly consider whether we should use a different brand or a sub-brand (like ranking.impactmarkets.io or so) for this project.
The idea is that, if all goes well, we roll out something like the carbon credit markets but for all positive impact via a three-phase process:
Only half a person per sandal I think!