People who decide how to spend health budgets hold the lives or livelihoods of many other people in their hands. They are literally making life-or-death decisions. Most decisions of this sort take dramatically insufficient account of cost-effectiveness. As a result, thousands or millions of people die who otherwise would have lived. The few are saved at the expense of the many. It is typically done out of ignorance about the significance of the cost-effectiveness landscape rather than out of prejudice, but the effects are equally serious.
—Toby Ord, The Moral Imperative Towards Cost-Effectiveness in Global Health
What if GiveWell could generate an amount of value equivalent to donating $3-$20 million to GiveDirectly, without spending any money at all?
Charities, even within GiveWell's top charities list, vary in their cost-effectiveness. This...
Hello Internet!
As I promised, the "weird" shop is now live. It is an experiment of mine in which I take a regular online shop, but instead of treating it as my source of income, I turn it into something more communitarian. The shop sells my drawings, nothing special, except for two things: 51% of all profits are donated, and there is a ceiling on my income.
So, 51% of the profit on any item bought in that shop goes to something truly good, at the moment a donation to the Against Malaria Foundation. Once my income hits the ceiling, 100% of profits go to the cause until the current month closes.
In short, because I always wanted my art to go beyond aesthetics and...
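To make the rule described above concrete, here is a minimal sketch of how the per-sale split could be computed. The ceiling value, the per-item profits, and the function name are illustrative assumptions, not figures from the post:

```python
# Sketch of the shop's stated split: 51% of each sale's profit goes to the cause
# (currently Against Malaria Foundation) until the artist's income reaches a
# monthly ceiling; after that, 100% of further profits go to the cause.
# The ceiling (2,000 EUR) and the sale profits below are made-up examples.

def split_profit(profit: float, income_so_far: float, ceiling: float) -> tuple[float, float]:
    """Return (to_cause, to_artist) for a single sale's profit."""
    if income_so_far >= ceiling:
        return profit, 0.0                      # ceiling reached: everything to the cause
    to_artist = min(profit * 0.49, ceiling - income_so_far)
    return profit - to_artist, to_artist        # the cause always gets at least 51%

income = 0.0
donated = 0.0
for sale_profit in [40.0, 25.0, 60.0]:          # hypothetical per-item profits in EUR
    to_cause, to_artist = split_profit(sale_profit, income, ceiling=2000.0)
    income += to_artist
    donated += to_cause

print(f"artist: {income:.2f} EUR, cause: {donated:.2f} EUR")
```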
TL;DR: Many people aren’t sure whether it would be more impactful for them to earn to give or to work at an EA-aligned org. I suggest a quick way to resolve this: ask the org.
“Would you prefer to hire me, or to hire your next best candidate and have me donate [as much as you’d donate]?”
I think big longtermism/meta orgs will definitely prefer to hire you if you’re their top candidate.
Not sure about other orgs.
Ask!
Ben:
That is why I left quite large margins for error, one of which you note, the other being that those 6 were only earning 1m+, not donating.
P(misalignment x-risk | AGI) is high.
Intent alignment should not be the goal for AGI x-risk reduction. If AGI is developed and we solve AGI intent alignment, we will not have lowered x-risk sufficiently, and we may even have increased it relative to what it would have been otherwise.
P(misalignment x-risk | intent-aligned AGI) >> P(misalignment x-risk | societally-aligned AGI).
The goal of AI alignment should be alignment with (democratically determined) societal values (because these have broad buy-in from humans).
P(misalignment x-risk | AGI) is higher if intent alignment is solved before societal-AGI alignment.
Most technical AI alignment research is currently focused on solving intent alignment. The (usually implicit, sometimes explicit) assumption is that solving intent alignment will help subsequently solve societal-AGI alignment. This would only be the case if all the humans...
This is cross-posted from the AI Impacts blog
This is going to be a list of holes I see in the basic argument for existential risk from superhuman AI systems.[1]
To start, here’s an outline of what I take to be the basic case[2]:
I. If superhuman AI systems are built, any given system is likely to be ‘goal-directed’
Reasons to expect this:
II. If goal-directed superhuman AI systems are built, their desired outcomes will probably be about as bad as an empty...
These seem like good arguments against quick AI doom.
I think I fall into the slow-AI-doom camp: we gradually lose what is of value to us due to too much competition in a capitalist environment. You can see the slow doom fictionalised in Accelerando, where AI doesn't kill everyone, it just economically marginalises them.
Thinking about the future of uploads and brain alteration via nanotech also leads to some of the same places: deletion of parts of current humanity by a minority, leading to the economic marginalisation of the rest.
T...
This post argues that P(misalignment x-risk | AGI) could be lower than anticipated by alignment researchers due to an overlooked goal specification technology: law.
P(misalignment x-risk | AGI that understands democratic law) < P(misalignment x-risk | AGI)
To be clear, this post does not argue that P(misalignment x-risk | AGI) is negligible – we believe it is much higher than most mainstream views suggest. One purpose of this post is to shed light on a neglected mechanism for lowering the probability of misalignment x-risk.
The mechanism that is doing the work here is not the enforcement of law on AGI. In fact, we don’t discuss the enforcement of the law at all in this post.[1] We discuss AGI using law as information. Unless we conduct further research and development on how to...
I have just published my new book on s-risks, titled Avoiding the Worst: How to Prevent a Moral Catastrophe. You can find it on Amazon or read the PDF version.
The book is primarily aimed at longtermist effective altruists. I wrote it because I feel that s-risk prevention is a somewhat neglected priority area in the community, and because a single, comprehensive introduction to s-risks did not yet exist. My hope is that a coherent introduction will help to strengthen interest in the topic and spark further work.
Here’s a short description of the book:
...From Nineteen Eighty-Four to Black Mirror, we are all familiar with the tropes of dystopian science fiction. But what if worst-case scenarios could actually become reality? And what if we could do something now to put the world on a
This is amazing! Any recommendations on which parts of the book are most important for people who are already decently familiar with EA and LW? I'm especially looking for moral and practical arguments I might have overlooked; I don't need to be persuaded to care about animal/insect/machine suffering in the first place.
As part of my work with the Quantified Uncertainty Research Institute, I am experimenting with speculative evaluations that could be potentially scalable. Billionaires were an interesting evaluation target because there are a fair number of them, and at least some are nominally aiming to do good.
For now, for each of the top 10 billionaires, I have tried to get an idea of:
I then assigned a subjective score based on my understanding of the answers to the above questions. Overall, I've spent somewhere in the neighborhood of 20 hours (maybe 7 to 40 hours) between research and editing, so this is by no means the final word on this topic.
Elon Musk...
For what it's worth, Nuno and I were both expecting this post to get a lot less attention. Maybe 30 karma or so (for myself). I think a lot of the interest is mainly due to the topic.
Seems like a signal that much more rigorous work here would be read.
I invite you to ask anything you’re wondering about that’s remotely related to effective altruism. There’s no such thing as a question too basic.
Try to ask your first batch of questions by Monday, October 17 (so that people who want to answer questions can know to make some time around then).
Everyone is encouraged to answer (see more on this below). There’s a small prize for questions and answers.
This is a test thread — we might try variations on it later.[1]
Ask anything you’re wondering about that has anything to do with effective altruism.
More guidelines:
I encourage everyone to view asking...
Thank you for replying several times and sharing your perspective. I appreciate that.
I think this kind of attitude to quotes, and some related widespread attitudes (where intellectual standards could be raised), are lowering the effectiveness of EA as a whole by over 20%. Would anyone like to have a serious discussion about this potential path to dramatically improving EA's effectiveness?
Hi @Brad West. My family suggested the same thing when I mentioned it, but I think giving me a fixed salary would be disadvantageous for the causes being helped, and buyers would intuitively sense that and hesitate to buy or contribute:
Suppose I set my salary to 2,500 euros/month and a particular month has low sales, amounting to a total profit of 1,500 euros (which is to be expected in the beginning).
If I set a fixed salary and followed that rule strictly, I would have to pocket those 1,500 euros and leave 0.00 for good causes. But that is not what I want. On the ot...
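As a rough illustration of the comparison in this comment, here is a small sketch using the figures given (a 2,500 euro salary and a 1,500 euro monthly profit). The "artist is paid first" reading of the fixed-salary rule is an assumption on my part:

```python
# Compare the two rules for a low-sales month with a total profit of 1,500 EUR.
profit = 1500.0
salary_ceiling = 2500.0

# Fixed-salary reading: the artist is paid first, the cause gets any remainder.
to_cause_fixed = max(0.0, profit - salary_ceiling)    # -> 0.00 EUR to the cause

# 51% rule with an income ceiling: the cause gets at least 51% from the start.
to_artist_pct = min(profit * 0.49, salary_ceiling)
to_cause_pct = profit - to_artist_pct                 # -> 765.00 EUR to the cause

print(f"fixed salary: {to_cause_fixed:.2f} EUR to the cause")
print(f"51% rule:     {to_cause_pct:.2f} EUR to the cause")
```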