I recently came across this post on the r/neoliberal subreddit:

Try to sell me on immigration (meaning: give me a pro-immigration pitch) if I'm a blue-collar working-class guy who's afraid of someone taking my job for lower pay or my wages going down.

I like the format of this request and I’d like to suggest some EA ones in the same format:


Try to sell me on working with large food companies to improve animal welfare, if I’m a vegan abolitionist.

Try to sell me on donating millions of dollars made from crypto trading to the most effective causes, if I’m an environmentalist concerned with the CO2 generated by bitcoin mining.


Try to sell me on donating to the global poor if I live in the developed world and have a very strong sense of responsibility to my family and local community.


Try to sell me on the dangers of AGI if I’m interested in evidence-based ways to reduce global poverty, and think EA should be pumping all its money into bednets and cash transfers.


Try to sell me on the need for strong international governing bodies to reduce existential risk if I’m a libertarian skeptical of large government.


I like the format of these questions (ok they're not strictly questions, but they have an asker and answerer format) for a few reasons:

  • They encourage the answerer to imagine a variety of viewpoints they don’t hold, which may change their views on certain issues.
  • They encourage the answerer to consider how EA may appear to those outside the movement.
  • They encourage the answerer to consider how people in EA or the causes EA is interested in may have very different opinions to one another. This is probably good for fostering positive disagreement within EA.
  • They allow the asker to pose the question without seeming as if they hold the viewpoint described. The asker may have heard the concern raised by people with these viewpoints, but doesn't want to seem as if they hold that viewpoint themselves.
  • They allow the answerer to address criticism of EA in a very safe, low-pressure environment. The hypothetical nature of the questions makes them feel less like severe criticism. This is good practice for when you do come into a debate with people who hold these views. The fact that you've considered the question beforehand, imagined some answers, and actively extended your imagination to consider how the person you're debating may think should make the debate more constructive.

I think these kinds of questions could be run on the forum each month: the top-voted question gets a cash prize, and the top-voted answer(s) get a cash prize. To think of this another way, we could ask the question "which ideas should we pitch to which kinds of people... and what are the best ways of doing that?"

If you like this idea, feel free to comment on this post with "Try to sell me... if I'm" pitches, or write answers to someone else's, or to the ones I asked in this post.


Comments
tae

Try and sell me on AGI safety if I'm a social justice advocate! That's a big one I come across.

I gave this a shot and it ended up being an easier sell than I expected:

"AI is getting increasingly big and important. The cutting edge work is now mainly being done by large corporations, and the demographics of the people who work on it are still overwhelmingly male and exclude many disadvantaged groups.

In addition to many other potential dangers, we already know that AI systems trained on data from society can unintentionally come to reflect the unjust biases of society: many of the largest and most impressive AI systems right now have this problem to some extent. A majority of the people working on AI research are quite privileged and many are willfully oblivious to the risks and dangers.

Overall, these corporations expect huge profits and power from developing advanced AI, and they’re recklessly pushing forward in improving its capabilities without sufficiently considering the harms it might cause.

We need to massively increase the amount of work we put into making these AI systems safe. We need a better understanding of how they work, how to make them reflect just values, and how to prevent possible harm, especially since any harm is likely to fall disproportionately on disadvantaged groups. And we even need to think about making the corporations building them slow down their development until we can be sure they’re not going to cause damage to society. The more powerful these AI systems become, the more serious the danger — so we need to start right now."

I bet it would go badly if one tried to sell a social justice advocate on some kind of grand transhumanist vision of the far future, or even just on generic longtermism, but it's possible to think about AI risk without those other commitments.

When I have these conversations, they are not at all in the format of [this is the opinion of the other person --> here's my pitch].


The format is "debugging the other person"


As a naive example, for the person interested in evidence-based interventions, I might ask:

  1. If an asteroid were heading toward Earth and we estimated a 70% probability it would hit (but weren't certain), would you want to do something about it?
  2. How about 1%?
  3. Ah, ok, so the question is whether the probability of AGI risk is over 1%?
  4. Ok, good question: how do we get our priors for this at all? Tough one, do you have ideas?

Part of my concern from reading this post is this passage:

when you do come into a debate with people who hold these views. The fact that you've considered the question beforehand, imagined some answers, and actively extended your imagination to consider how the person you're debating may think should make the debate more constructive.

It seems potentially counterproductive to assume you have already thought about the other person's viewpoint.

More from Elicit.org's "reason from one claim to another"

I am responsible for my family and local community ~ I should donate to the global poor

I am responsible for my family and local community ➟ I feel some moral obligation to them. ➟ I feel equally compelled to care about others more globally. ➟ Therefore, I should donate to the global poor.

I am responsible for my family and local community ➟ It is an innate moral obligation to care for loved ones ➟ The global poor are as deserving of our assistance as loved ones.

I am responsible for my family and local community ➟ Lived through periods of poverty ➟ Would not wish this on anyone ➟ Wouldn't want my children to grow up in such a world ➟ I should donate to the global poor

I am responsible for my family and local community ➟ I have a wealth of resources. ➟ My opportunities for making a difference are larger than most. ➟ Someone else can care for my immediate surroundings. ➟ The less fortunate of the world need support. ➟ I should donate to the global poor

From Elicit.org's "reason from one task to another" 


I’m a vegan abolitionist ➟ I want animals to have equal or greater moral consideration than humans do. ➟ At present, large food companies would be unlikely to favour the abolition of farming over the reduction of suffering, so we should work with them. ➟ This will enable us to simultaneously cause greater improvements in animal welfare and reduce future farming intensity.

[anonymous]

I suggested a similar idea here, although my post wasn't as clear:

https://forum.effectivealtruism.org/posts/bsBF5KnavhLZ4PAeL/ea-frontpages-should-compare-contrast-with-other-movements

Also, I think [person with a viewpoint] needs to be narrowed down to the few most popular viewpoints in the world, or at least the developed world; otherwise this becomes a very large task.

Try to sell me on working with large food companies to improve animal welfare, if I’m a vegan abolitionist.

There is more political traction on improving animal welfare in large food companies than there is in ending systematic slaughter and abuse of animals completely. 

Becoming aware of the harm you are causing and then undoing that harm can lift the blinds that were hiding your seemingly innocuous everyday actions. Having large food companies improve animal welfare can increase the sensitivity to animal harm of those within the companies. These people may then go on to push for further increases in animal welfare, and maybe even for an end to the systematic slaughter and abuse.

Working with large food companies to increase animal welfare doesn't necessarily exclude the possibility of working to end the slaughter and abuse completely.

The animals, although still having a bleak life overall, will probably feel grateful for the small breaks they will be given in their lives.

Try to sell me on donating to the global poor if I live in the developed world and have a very strong sense of responsibility to my family and local community.

Doing what you can to help yourself and others around you is logical. However, not everyone in the world has the luxury to help themselves and others close to them.

By reducing global poverty you make places around the world better and safer places to live. Therefore, if, say, one of your grandchildren chooses to live somewhere else in the world, their experience will be better and safer.

Try to sell me on the dangers of AGI if I’m interested in evidence-based ways to reduce global poverty, and think EA should be pumping all its money into bednets and cash transfers.

Even experts are sometimes taken off guard by huge technological milestones that come with huge risks. Not working to prepare for such risky technological advances would be a disservice to those around you that you care about: you'd be doing nothing for the world as something bad happens that takes it off guard. Being passive about the dangers of AGI can render all other humanitarian efforts moot.