Recent Discussion

People who decide how to spend health budgets hold the lives or livelihoods of many other people in their hands. They are literally making life-or-death decisions. Most decisions of this sort take dramatically insufficient account of cost-effectiveness. As a result, thousands or millions of people die who otherwise would have lived. The few are saved at the expense of the many. It is typically done out of ignorance about the significance of the cost-effectiveness landscape rather than out of prejudice, but the effects are equally serious.

—Toby Ord, The Moral Imperative Towards Cost-Effectiveness in Global Health

What if GiveWell could generate an amount of value equivalent to donating $3-$20 million to GiveDirectly, without spending any money at all?

Charities, even within GiveWell's top charities list, vary in their cost-effectiveness. This...
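
A rough illustration of the arithmetic behind this kind of claim (the multipliers below are hypothetical placeholders, not GiveWell's actual estimates): if two top charities are valued at different multiples of direct cash transfers, redirecting marginal donations from the less to the more cost-effective one creates value without any extra spending.

```python
# Hypothetical cost-effectiveness multipliers, expressed as value per dollar
# relative to a direct cash transfer (GiveDirectly = 1x). Placeholder numbers only.
charity_a_multiplier = 5      # assumed less cost-effective top charity
charity_b_multiplier = 8      # assumed more cost-effective top charity
shifted_donation = 1_000_000  # dollars redirected from A to B

# Extra value created, denominated in "dollars donated to GiveDirectly".
extra_value = shifted_donation * (charity_b_multiplier - charity_a_multiplier)
print(f"Equivalent to an extra ${extra_value:,} donated to GiveDirectly")
# -> Equivalent to an extra $3,000,000 donated to GiveDirectly
```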

Hello Internet!

As I promised, the "weird" shop is now live. It is an experiment of mine: I take a regular online shop, but instead of treating it as my source of income, I turn it into something more communitarian. The shop sells my drawings, nothing special, except for two things: 51% of all profits are donated, and there is a ceiling on my income.

So, any item bought in that shop will have 51% of its profits go to something truly good, at the moment a donation to the Against Malaria Foundation. When my income hits the ceiling, 100% of profits are devoted to the cause until the current month closes.
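
To make the split concrete, here is a minimal sketch of how the rule could work (the 51% share is from the shop's description; the monthly ceiling figure and the per-month accounting are assumptions for illustration):

```python
def split_profit(monthly_profit: float, income_so_far: float, income_ceiling: float = 2_500):
    """Split a month's profit between the artist and the cause.

    51% of profit goes to the cause; once the artist's income for the month
    reaches the ceiling, 100% of further profit goes to the cause.
    The 2,500-euro ceiling is a hypothetical figure, not the shop's actual one.
    """
    artist_share = 0.49 * monthly_profit
    # Cap the artist's take at whatever room is left under the ceiling.
    room_left = max(income_ceiling - income_so_far, 0)
    artist_take = min(artist_share, room_left)
    donation = monthly_profit - artist_take
    return artist_take, donation

print(split_profit(1_500, income_so_far=0))   # -> (735.0, 765.0)
print(split_profit(10_000, income_so_far=0))  # -> (2500, 7500.0)
```

In a low-sales month the artist keeps 49% and the cause gets 51%; in a very good month the ceiling kicks in and everything above it goes to the cause.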

Why am I doing this?

In short, because I always wanted my art to go beyond aesthetics and...

Hi @Brad West. My family also suggested the same when I mentioned it, but I think paying me a fixed salary would be disadvantageous for the causes being helped, and buyers would intuitively sense that and hesitate to buy or contribute:

Suppose I set my salary at 2,500 euros/month and a particular month has low sales, amounting to a total profit of 1,500 euros (which is quite likely in the beginning).

If I set a fixed salary and followed this rule strictly, I would have to pocket those 1,500 euros and leave 0.00 for good causes. But that is not what I want. On the ot... (read more)

Cesar Scapella (1h):
I was going to answer, burner, but Brad West pretty much explained it very well... Advertising to prospective buyers that the shop will donate 51% of profits is in fact one strategy to earn more money.
Brad West (3h):
Hmm, I guess that goes into a broader discussion, but I don't think the EA community benefits by excluding artists and people whose skills aren't squarely in the conventional earning-to-give purview. In any case, I think efforts like this to further impact within the art commerce space are an important contribution. Oftentimes, people will not be able to radically change their vocation, and it's important to look for opportunities for impact within the framework someone can actually work in at a given time. Further, I don't buy the premise that this is not high EV: it combines direct impact with promoting a model that is itself potentially high EV.

TL;DR: Many people aren't sure whether it would be more impactful for them to earn to give or to work at an EA-aligned org. I suggest a quick way to resolve this: ask the org.

What to ask?

“Would you prefer to hire me, or to hire your next best candidate and have me donate [as much as you’d donate]?”

My prior on their answer

I think big longtermism/meta orgs will definitely prefer to hire you if you’re their top candidate.

Not sure about other orgs.

Ask!

I asked Ben West from the Centre for Effective Altruism

Ben:

  1. I personally previously averaged ~$2M/year EtGing but think my labor at CEA is more valuable.
  2. I can't think of an instance where I thought someone shouldn't have applied to CEA because EtG was so obviously better.
  3. I can
...
Ben Millwood (10h):
I think it's reasonably likely that people earning $1m / year are systematically less inclined to bother with the survey, so I would be cautious about using the community response rate to extrapolate. (On the other hand, 2000 is 5% of 40000, not 10000)

That is why I left quite large margins of error, one of which you note; the other being that those 6 were only earning $1m+, not donating it.
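
For concreteness, here is the naive extrapolation being discussed, using the figures mentioned in this thread (6 respondents reporting $1m+ income, ~2,000 respondents, ~40,000 community members); both comments above are pointing out that it carries large error bars:

```python
respondents = 2_000          # survey respondents
community_size = 40_000      # rough community size, i.e. a ~5% response rate
high_earners_in_sample = 6   # respondents reporting $1m+/year income (not donations)

# Naive extrapolation, ignoring the response-rate bias noted above.
estimated_high_earners = high_earners_in_sample * community_size / respondents
print(estimated_high_earners)  # -> 120.0
```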

P(misalignment x-risk | AGI) is high.

Intent alignment should not be the goal for AGI x-risk reduction. If AGI is developed and we solve AGI intent alignment, we will not have lowered x-risk sufficiently, and we may even have increased it relative to what it would have been otherwise.

P(misalignment x-risk | intent-aligned AGI) >> P(misalignment x-risk | societally-aligned AGI).

The goal of AI alignment should be alignment with (democratically determined) societal values (because these have broad buy-in from humans).

P(misalignment x-risk | AGI) is higher if intent alignment is solved before societal-AGI alignment.

Most technical AI alignment research is currently focused on solving intent alignment. The (usually implicit, sometimes explicit) assumption is that solving intent alignment will help subsequently solve societal-AGI alignment. This would only be the case if all the humans...

This is cross-posted from the AI Impacts blog

This is going to be a list of holes I see in the basic argument for existential risk from superhuman AI systems.[1]

To start, here's an outline of what I take to be the basic case[2]:

I. If superhuman AI systems are built, any given system is likely to be ‘goal-directed’

Reasons to expect this:

  1. Goal-directed behavior is likely to be valuable, e.g. economically. 
  2. Goal-directed entities may tend to arise from machine learning training processes not intending to create them (at least via the methods that are likely to be used).
  3. ‘Coherence arguments’ may imply that systems with some goal-directedness will become more strongly goal-directed over time.

II. If goal-directed superhuman AI systems are built, their desired outcomes will probably be about as bad as an empty...

It seems that these are good arguments against the quick AI doom.

I think I fall into the slow AI doom camp: that is, we will gradually lose what is of value to us due to too much competition in a capitalist environment. You can see the slow doom fictionalised in Accelerando, in which the AI doesn't kill everyone, just economically marginalises them.

Thinking about the future of uploads and brain alteration via nanotech also leads to some of the same places: deletion of parts of current humanity by a minority, leading to the economic marginalisation of the people.

T... (read more)

Pauline Kuss (7h):
I am intrigued by your point that superhuman intelligence does not imply an AI's superhuman power to take over the world. Highlighting the importance of connecting information-based intelligence with social power, including mechanisms for coordinating and influencing humans, suggests that AI risks ought to be considered not from a purely technical perspective but from a socio-technical one. Such a socio-technical framing raises the question of how technical factors (e.g. processing power) and social factors (e.g. rights and trust vested in the system by human actors; the social standing of the AI) interrelate in the creation of AI risk scenarios. Do you know of current work in the EA community on the mechanisms and implications of such a socio-technical understanding of AI risks?

This post argues that P(misalignment x-risk | AGI) could be lower than anticipated by alignment researchers due to an overlooked goal specification technology: law. 

P(misalignment x-risk | AGI that understands democratic law) < P(misalignment x-risk | AGI) 

To be clear, this post does not argue that P(misalignment x-risk | AGI) is negligible – we believe it is much higher than most mainstream views suggest. One purpose of this post is to shed light on a neglected mechanism for lowering the probability of misalignment x-risk.

The mechanism that is doing the work here is not the enforcement of law on AGI. In fact, we don’t discuss the enforcement of the law at all in this post.[1] We discuss AGI using law as information. Unless we conduct further research and development on how to...

I have just published my new book on s-risks, titled Avoiding the Worst: How to Prevent a Moral Catastrophe. You can find it on Amazon or read the PDF version.

The book is primarily aimed at longtermist effective altruists. I wrote it because I feel that s-risk prevention is a somewhat neglected priority area in the community, and because a single, comprehensive introduction to s-risks did not yet exist. My hope is that a coherent introduction will help to strengthen interest in the topic and spark further work.

Here’s a short description of the book:

From Nineteen Eighty-Four to Black Mirror, we are all familiar with the tropes of dystopian science fiction. But what if worst-case scenarios could actually become reality? And what if we could do something now to put the world on a

...

This is amazing! Do you have recommendations on which parts of the book are most important for people who are already decently familiar with EA and LW? I'm especially looking for moral and practical arguments I might have overlooked, and I don't need to be persuaded to care about animal/insect/machine suffering in the first place.

As part of my work with the Quantified Uncertainty Research Institute, I am experimenting with speculative evaluations that could be potentially scalable. Billionaires were an interesting evaluation target because there are a fair number of them, and at least some are nominally aiming to do good.

For now, for each of the top 10 billionaires, I have tried to get an idea of:

  1. How much value have they created through their business activities?
  2. How much impact have they created through their philanthropic activities?

I then assigned a subjective score based on my understanding of the answers to the above questions. Overall I've spent in the neighborhood of 20 hours (maybe 7 to 40 hours) between research and editing, so this is by no means the final word on this topic.
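
As a rough sketch of what each entry amounts to structurally (the field names and example values are mine, for illustration; only the two questions and the letter-grade format come from the post):

```python
from dataclasses import dataclass

@dataclass
class BillionaireEvaluation:
    name: str
    business_value: str        # answer to question 1, summarised
    philanthropic_impact: str  # answer to question 2, summarised
    subjective_grade: str      # overall letter grade, e.g. "B"

# Placeholder entry illustrating the format only, not an actual assessment.
example = BillionaireEvaluation(
    name="Example Billionaire",
    business_value="(summary of value created through business activities)",
    philanthropic_impact="(summary of impact from philanthropic activities)",
    subjective_grade="?",
)
```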

Elon Musk (B)

Elon Musk...

For what it's worth, Nuno and I were both expecting this post to get a lot less attention. Maybe 30 karma or so (for myself). I think a lot of the interest is mainly due to the topic.

Seems like a signal that much more rigorous work here would be read.

I invite you to ask anything you’re wondering about that’s remotely related to effective altruism. There’s no such thing as a question too basic.

Try to ask your first batch of questions by Monday, October 17  (so that people who want to answer questions can know to make some time around then).

Everyone is encouraged to answer (see more on this below). There’s a small prize for questions and answers.

This is a test thread — we might try variations on it later.[1]

How to ask questions

Ask anything you’re wondering about that has anything to do with effective altruism.

More guidelines:

  1. Try to post each question as a separate "Answer"-style comment on the post.
  2. There’s no such thing as a question too basic (or too niche!).
  3. Follow the Forum norms.[2]

I encourage everyone to view asking...

Lorenzo Buonanno (2h):
I personally think that the purpose of text is to share decision-relevant information, and everything else is secondary. Being human gives us reasons to make all sorts of mistakes and imprecisions; I think that's OK as long as the information is not misleading. Otherwise it's worth sending a (polite) correction.

Thank you for replying several times and sharing your perspective. I appreciate that.

I think this kind of attitude to quotes, and some related widespread attitudes (where intellectual standards could be raised), is lowering the effectiveness of EA as a whole by over 20%. Would anyone like to have a serious discussion about this potential path to dramatically improving EA's effectiveness?

Answer by EP (2h):
Is anyone in EA thinking about (or has anyone come across) the metacrisis? For those not familiar with the term, this article [https://www.sloww.co/meta-crisis-101/#meta-definition] can provide a rough idea of what it's about, while this article [https://systems-souls-society.com/tasting-the-pickle-ten-flavours-of-meta-crisis-and-the-appetite-for-a-new-civilisation/] provides a more in-depth (though somewhat esoteric) exploration. My questions to any EAs who are familiar with the concept (and the wider 'metamodern [https://metamoderna.org/metamodernism/]' perception of our current historical context) are:

  1. Do you think this is a good assessment of the fundamental drivers of some of the major challenges of our time?
  2. If you feel that the assessment is roughly correct, do you think it has any implications for cause prioritisation within EA?