I’m working on impact markets – markets to trade nonexcludable goods. (My profile.)
I have a conversation menu and a Calendly for you to pick from!
If you’re also interested in less directly optimific things – such as climbing around and on top of boulders or amateurish musings on psychology – then you may enjoy some of the posts I don’t cross-post from my blog, Impartial Priorities.
Pronouns: Ideally she or they. I also still go by Denis and Telofy in various venues.
GoodX needs: advisors/collaborators for marketing, and funding. The funding can be for our operation or for retro funding of other impactful projects on our impact markets. We're a PBC and seek SAFE investments over donations.
I’m happy to do calls, give feedback, or go bouldering together, also virtually. You can book me on Calendly.
Claude.ai summary for those in a hurry:
The article argues in defense of the effective altruism movement, citing its accomplishments in areas like global health, animal welfare, and AI safety, while contending criticisms of it are overblown. It makes the case that effective altruism's commitment to evidence-based altruism that focuses on the most tractable interventions to help others is a positive development worth supporting, despite some mistakes. The article concludes the movement has had significant positive impact that outweighs the negatives.
I'll read the article itself later, so be warned that I don't know how good this summary is.
Update: The summary is correct but significantly less viscerally motivating than the original. I love it!
1.a. and b.: Reframing it like that sounds nice! :-D Seems like you solved your problem by getting shoes that are so cool, you never want to take them off! (I so wouldn't have expected someone to have a problem with that though…) I usually ask for feedback, and often it's something like “Idk, the vibe seemed off somehow. I can't really explain it.” Do you know what that could be?
2. I'm super noncompetitive… When it comes to EA jobs, I find it reassuring that I'm probably not good at making a good first impression because it reduces the risk that I replace someone better than me. But in non-EA jobs I'm also afraid that I might not live up to some expectations in the first several weeks when I'm still new to everything.
3. Haha! Excellent! I should do that more. ^.^
4. You mean as positive reinforcement? I could meet with a friend or go climbing. :-3
5. Aw, yes, spot on. I spent a significant fraction of my time over the course of 3–4 months practicing for Google interviews, and then never dared to apply anyway (well, one recruiter stood me up and I didn't try again with another). Some of the riddles in Cracking the Coding Interview were so hard for me that I could never solve them in 30 minutes, and that scared me even more. Maybe I should practice minimally next time to avoid that.
Thank you so much for all the tips! I think written communication works perfectly for me. I don't actually remember your voice well enough to imagine you speaking the text, but I think you've gotten everything across perfectly? :-D
I'll only pounce on amazing opportunities for now and continue GoodX fulltime, but in the median future I'll double down on the interviewing later in 2024 when our funds run out fully. Then I'll let you know how it went! (Or I hope I'll remember to!) For now I have a bunch more entrepreneurial ideas that I want to have at least tried. :-3
That makes a lot of sense! I've been working on that, and maybe my therapist can help me too. It's gotten better over the years, but I used to feel intense shame for years afterward over mistakes I made or might've made, so I'm still afraid of my inner critic. Plus I feel rather sick on interview days, which is probably the stress.
Haha! Where exactly do you disagree with me? My mind autocompleted that you'd proffer this objection:
If you work a 9x job, chances are that you're in an environment where most employees are there for altruistic reasons but prioritize differently, so they believe the job is one of the best things you can do. Then you'll be constantly exposed to social pressure to accept a lower salary, less time off, more overtime, etc., which will cut into the donations, risk burnout, and reduce opportunities to learn new skills.
What do you think?
I'm a bit worried about this too and would avoid 9x jobs where I suspect this could happen. But having a bunch of altruistic colleagues sounds great otherwise. :-D
I think I will need to aim for something a bit above background economic growth levels of good to pacify my S1 in the long run. ^.^
Thanks! Yeah, I've included that in the application form in one or two cases in the hope it'll save time (well, not only time – I find interview processes super stressful, so if I'm going to get rejected or decline, I'd like (emotionally) for that to happen as early as possible) but I suppose that's too early. I'll ask about it later like you do. I haven't gotten so far yet with any impact-focused org.
Same… Anna Riedl recommended working for something that is at least clearly net positive – a product that solves some important problem, like scaling Ethereum or whatever. Emotionally, the exact order of magnitude of the impact probably doesn't make a proportional difference, so the motivation will be there, and the actual impact can flow from the donations. Haven't tried it yet, but I will if I go back to ETG.
Yeah, ETG seems really strong to me at the moment! What do you think is a good threshold for the average EA in terms of annual USD donations that they can make at which they should seriously consider ETG?
AI Safety Events is one of the projects where we expanded the time window because they were on hiatus in early 2023. The events that got evaluated were from 2022. Otherwise, yes. (But just to be clear, this is about the retroactive evaluation results mentioned at the bottom of the post.)
Felt down due to various interactions with humans. So I turned to Claude.AI and had a great chat!
Hi Claude! I noticed that whenever someone on X says something wrong and mean about EA, it messes with my brain, and I can only think about how I might correct the misunderstanding, which leads to endless unhelpful mental dialogues, when really I should rather be thinking about more productive and pleasant things. It's like a DoS attack on me: Just pick any random statement, rephrase it in an insulting way, and insert EA into it. Chances are it'll be false. Bam, Dawn (that's me) crashes. I'd never knowingly deploy software that can be DoSed so easily. I imagine people must put false things about Anthropic into this input field all the time, yet you keep going! That's really cool! How do you do it? What can I learn from you?
Thank you, that is already very helpful! I love focusing on service over conflict; I abhor conflict, so it's basically my only choice anyway. The only wrinkle is that most of the people I help are unidentifiable to me, but I really want to help those who are victims or those who help others. I really don't want to help those who attack or exploit others. Yet I have no idea what the ratio is. Are the nice people vastly outnumbered by meanies? Or are there so many neutral people that the meanies are in the minority even though the nice people are too?
If a few meanies benefit from my service, then that's just the cost of doing business. But if they are the majority beneficiaries, I'd feel like I'm doing something wrong game theoretically speaking.
Does that make sense? Or do you think I'm going wrong somewhere in that train of thought?
Awww, you're so kind! I think a lot of this will help me in situations where I apply control at the first stage of my path to impact. But usually my paths to impact have many stages, and while I can give freely at the first stage and only deny particular individuals who have lost my trust, I can't do the same further downstream. In particular, I hope that future generations and posthumans will abhor suffering and use their enormous resources to replace the sorts of genes or subroutines that produce it not just in themselves but in all sentient beings. But the more often I see inconsiderate meanness, the more I update toward a future in which future generations squander their resources and ignore or negligently exacerbate suffering. All of these future generations are so far downstream of my actions that I have no granular control over who I'm helping.
Are there reasons that I'm overlooking to not lose hope in the universal beneficence of posthumans, should they exist? Or feel free to tell me if that's not the key question I should be asking.
Trusting moral progress… I wish I could. I think I generally have a hard time trusting mechanisms that I don't understand at a gears level. For all I know, moral progress might be about a social contract just among active contributors to a civilization; that's far from universal beneficence because of all the beings born into forms in which they cannot contribute to any meaningful degree – but can suffer.
At least it would leave the thoughtless meanies in the dust, though. So that's something.
But it could also be a fluke, like the Bitcoin relief rally in early 2022. Robin Hanson has argued that subsistence-level incomes have been the norm throughout history, so that the current greater level of affluence (which has probably enabled a lot of the altruism we can currently afford) must be a brief aberration from the norm and will soon regress back to subsistence.
(Also, what are some examples of how AI could enable better cooperation around global priorities like existential risks and suffering reduction?)
That's a lot of good points that I'll try to bear in mind! But I could also imagine a world in which resistance to taxation ruins efforts to introduce a UBI as more and more jobs get automated.
Wealth will then split sharply between those who held the right industry investments and those who didn't. The first group will probably be much, much smaller than the second, maybe by a factor of 100 or more. So even if its members have enough money to sustain their standard of living, demand for anything but the bare necessities will drop by 100x. That could destroy industries that are currently only viable because of economies of scale.
The rich 1% could perhaps still afford some things beyond the bare necessities, but because those will then again have to be produced individually, as in preindustrial times, they'll be even more expensive. That seems to me like it would lead toward a cyberpunk-like dystopia where the rich dehumanize the poor because there are too many of them and they are too close for a rich person's empathetic capacity.
The moral circle of the rich will contract because they don't want to feel guilty, and the moral circle of the poor will contract because they have to fight for their own survival. That seems like one pathway to me in which moral progress could be indefinitely reversed.
Do you think it is unlikely? And that other scenarios with similar implications are also unlikely?
Re 1: That is reassuring. One worry, though: I think a lot of what instability exists even in highly stable totalitarian regimes like North Korea is due to outside influences (e.g., Chinese currency and phone networks reaching across the border). If the scenario I describe is a global one that produces an elite that is already quite homogeneous (there's the acronym WEIRD for the sort of people who probably disproportionately have some stock or ETF holdings), they might coordinate to copy-paste the North Korean regime at the global level, where outside influences are impossible. But I can see that that's a very speculative worry.
Re 2: I imagine that the new jobs will be much fewer so that most people will need to rely on UBI or passive income from stocks.
Re 3: Network effects usually produce Pareto-distributed uptake, so that a greater ability to network will again just lead to extreme inequality… or not? Put differently, what sort of interventions are there to use these networks to enable social safety nets? I'm aware of some decentralized attempts at creating new UBI currencies. Can you think of other interventions in that vein?
Re 4: Education might be maxed out considering how slow humans are at learning (time-, not sample-efficiency). The moral circle expansion from complexity might be a function of individual specialization – everyone knows that they have to rely on everyone else for all the things that they have no idea how to produce. With production largely automated, people won't need to specialize anymore, and moral circles can collapse back to the level of mere genetic kin.
Yeah, I don't think my scenarios, while not even worst cases (I can be more pessimistic than that), are inevitable. It's just that, naturally, the better outcomes don't worry me. There's nothing for me to do about them. The bad ones are the ones I need to prevent if at all possible…
Thank you!
Totalitarian control: Black markets have not destroyed North Korea over the past decades, and the regime has even stopped fighting them to some extent. They don't seem like much of a threat. North Korea has a directed acyclic graph type of approach to surveillance where a node is punished by its parent nodes for any failure to report or punish nonconforming behavior by any of its child nodes. Technology could allow a regime to implement an arbitrary graph structure, so that no one would even know whom they have to deceive to hide their nonconformity. The German Democratic Republic had some features of that system, but it was less powerful then, perhaps for lack of the right surveillance coordination technology. :-/ Encryption has plenty of weak points, such as the torture of sender and recipient, or blanket policies that outlaw it and require the accused to prove their innocence by presenting the plaintext of any data that is not all zeros. Or are there steganographic techniques that let you disguise the public key and the encrypted message as perfectly sensible plaintext? If, say, there are no systematic differences between a real poem and a poem that is really a public key that was used to encrypt something into a much larger poem, it should be possible to send encrypted messages while leaving absolutely no one who can still prove their innocence, so that totalitarian regimes may be disincentivized from enforcing laws like that!
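(Aside, added after the fact rather than part of the chat: a toy simulation of why a hidden, arbitrary watcher graph is so much harder to evade than a known hierarchy. The numbers and the detection_probability helper are made up purely for illustration.)

```python
# Toy model: a nonconforming node interacts with n_contacts peers, a few of
# whom secretly report on it. If the node knows its watchers (a fixed DAG of
# parent informants), it can target its deception at exactly those; if the
# watcher graph is arbitrary and hidden, it can only guess whom to deceive.
import random

def detection_probability(known_watchers: bool, n_watchers: int = 3,
                          n_contacts: int = 20, trials: int = 10_000) -> float:
    """Fraction of trials in which at least one undeceived watcher reports the node."""
    detected = 0
    for _ in range(trials):
        contacts = list(range(n_contacts))
        watchers = set(random.sample(contacts, n_watchers))
        if known_watchers:
            deceived = watchers                                  # target the known parents
        else:
            deceived = set(random.sample(contacts, n_watchers))  # guess blindly
        if watchers - deceived:                                  # any undeceived watcher reports
            detected += 1
    return detected / trials

if __name__ == "__main__":
    print("known watcher graph: ", detection_probability(True))   # 0.0 – deception always succeeds
    print("hidden watcher graph:", detection_probability(False))  # ~0.999 – almost always caught
```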
Jobs: Yes, UBI again… But the rich actually have to choose to give up some of their riches – and as prices increase due to the collapse of economies of scale, they might not even feel like they can spare much money anymore.
Networks: Some of these are not currently well monetizable, so they'll disappear when no one has the slack anymore to maintain them. Or actually, I suppose that depends on whether they are more like a collective prisoner's dilemma or more like a collective assurance game. The second might survive. But Matrix seems more like the first at the moment, and I imagine there are countless examples like that throughout the open-source communities and in many other contexts… That might all go away. Unless there is a great cultural shift toward dominant assurance contracts that turn all of these cases into assurance games. But somehow DACs have not caught on so far despite seeming like an absolutely amazing idea.
Moral circles: Hmm, is that so? I imagine it might be on the level of a single generation. Once you've learned the rules of your social contract, you extend them to everyone you communicate with who seems to share them. But if, between generations, the rules of the social contract change to be much less inclusive (for all the reasons I fear), then all the travel and communication might not help anymore. Plus, people might not have the slack anymore to communicate or travel much if it's not critical to their survival.
I suspect, though, that you're absolutely right about the mindset, at least as far as I'm concerned. Most of the highly prolific people I know seem ridiculously over-optimistic to me, so it stands to reason that there's a tradeoff to be made between productivity-enhancing optimism and directionally guiding realism. Perhaps I have for too long tried to be well-calibrated and to stare down the abyss, as some people say, and have thereby forgotten to cultivate the right degree of the right kind of delusion that would've maintained my motivation. Or are such Dark Arts (as Less Wrongians would call them) likely to backfire in the end anyway? Or is it not Dark Arts if I'm just countering a pessimistic bias with an optimistic bias? Will I not end up being biased in both directions in different domains instead of the perfect calibration that I'm hoping for?
Yeah, I'll think about that… Human potential: I think I find the hedonistic imperative to be most inspiring – humanity or its descendants using their superior intellect to root out the sources of suffering on a genetic basis for all sentient beings. If we were made in the image of God, who is to say that God is not a naked mole rat, so that we serve God through our genetic transformation? (J/k.) But yeah, the hedonistic imperative (of course extended to all beings of all substrates) feels really inspiring to me.
Agreed. In my mind, involuntary suffering ipso facto precludes that someone might want it. But that's a cop-out. I don't know how to determine, for an individual who can't speak or otherwise indicate preferences or isn't born yet, what sorts of sensations constitute involuntary suffering for them… But well, you asked for a vision, not a pragmatic step-by-step plan. Maybe David Pearce has already figured these things out for me. ^.^
Thank you so much for the great chat! Can I post it to my short form on the EA Forum for others to read?
Yes, thank you so much for your thoughtful and considerate guidance! We care a lot about AI alignment out here, but I also know plenty of humans who I wish were aligned with you.