This is a special post for quick takes by saulius. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

What’s a realistic, positive vision of the future worth fighting for?
I feel lost lately when it comes to how to do altruism. I keep starting and dropping various little projects. I think the problem is that I just don't have a grand vision of the future I am trying to contribute to. There are so many different problems and so much uncertainty about what the future will look like. Thinking about the world in terms of problems just leads to despair for me lately. It's as if humanity is continuously not living up to my expectations. Trump's victory, the war in Ukraine, the increasing scale of factory farming, lack of hope on AI. Maybe insects suffer too, which would just create more problems. My expectations for humanity were too high and I am mourning that, but I don't know what's on the other side. There are so many things that I don't want to happen that I've lost sight of what I do want to happen. I don't want to be motivated solely by fear. I want some sort of realistic positive vision for the future that I could fight for. Can anyone recommend something on that? Preferably something that would take less than 30 minutes to watch or read. It can be about animal advocacy, AI, or global politics.

I think the problem is that I just don't have a grand vision of the future I am trying to contribute to.

 

For what it's worth, I'm skeptical of approaches that try to design the perfect future from first principles and make it happen. I'm much more optimistic about marginal improvements that try to mitigate specific problems (e.g. eradicating smallpox didn't cure all illness.)

How much we can help doesn't depend on how awful or how great the world is: we can save the drowning child whether there are a billion more who are drowning or a billion more who are thriving. To the drowning child, the drowning is just as real, as is our opportunity to help.

If you feel emotionally down and unable to complete projects, I would encourage you to try things that work on priors (therapy, exercise, diet, sleep, making sure you have healthy relationships) instead of "EA specific" things.

There are plenty of lives we can help no matter who won the US election and whether factory farming keeps getting worse; their lives are worth it to them, no matter what the future will be.

7
saulius
Thanks. Yeah, I now agree that it's better to focus on what I can do personally. Someone made a good point in a private message that having a single vision leads to utopian thinking, which has many disadvantages. It reminded me of my parents' stories about the Soviet Union, where great atrocities against currently living humans were justified in the name of creating a great communist future. Grand ideologies and religions are alluring though, because they give a sense of being a part of something bigger. Like you have your place in the world, your community, which gives a clear meaning to life. Being a part of the Effective Altruism and animal advocacy movements fulfils this need in my life somewhat, but incompletely. 
4
saulius
The person in the private message also told me about the Serenity Prayer: "grant me the serenity to accept the things I cannot change; courage to change the things I can; and wisdom to know the difference."
4
saulius
And just to be clear, I am doing quite well generally. I think I used to repress my empathy because it just feels too painful. But it was controlling me subconsciously by constantly nagging me to do altruistic things. Nowadays, I sometimes connect to my empathy and it can feel overwhelming like yesterday. But I think it's for the better long-term.
8
David_Moss
One possible way of thinking about this, which might tie your work in smaller battles into a 'big picture', is if you believe that your work on the smaller battles is indirectly helping the wider project. e.g. by working to solve one altruistic cause you are sparing other altruistic individuals and altruistic resources from being spent on that cause, increasing the resources available for wider altruistic projects, and potentially by increasing altruistic resources available in the future.[1] Note that I'm only saying this is a possible way of thinking about this, not necessarily that you should think this (for one thing, the extent to which this is true probably varies across areas, depending on the inter-connectedness of different cause areas in different ways and their varying flowthrough effects). 1. ^ As in this passage from one of Yudkowsky's short stories:

I think the sort of world that could be achieved by the massive funding of effective charities is a rather inspiring vision. Natalie Cargill, Longview Philanthropy's CEO, lays out a rather amazing set of outcomes that could be achieved in her TED Talk.

I think that a realistic method of achieving these levels of funding is Profit for Good businesses, as I lay out in my TEDx Talk. I think it is realistic because most people don't want to give something up to fund charities (as donation would require), but if they could help solve world problems by buying products or services they want or need, of similar quality and at the same price, they would.

6
saulius
I love the idea in your talk! I can imagine it changing the world a lot and that feels motivating. I wonder if more Founders Pledge members could be convinced to do this. 
5
Ben_West🔸
I like Bostrom's Letter from Utopia
8
Imma🔸
FWIW: definitely not a world vision, but Ozy's blog is the most heart-warming thing I've read after the recent US elections.
6
Benevolent_Rain
Maybe this is a cop-out, but I am thinking more and more of a pluralistic and mutually respectful future. Some people might take off on a spaceship to settle a nearby solar system. Others might live lower-tech in eco villages. Animals will be free to pursue their goals. And each of these people will pursue their version of a worthwhile future with minimal reduction in the potential of others to fulfill theirs. I think anything else will just lead to oppression of everyone who is not on board with some specific wild project - I think most people's dreams of a future are pretty wild and not something I would want for myself!
6
Linch
It might be too hard to envision an entire grand future, but it's possible to envision specific wins in the short and medium-term. A short-term win could be large cage-free eggs campaigns succeeding, a medium-term win could be a global ban on caged layer hens. Similarly a short-term win for AI safety could be a specific major technical advance or significant legislation passed, a medium-term win could be AGIs coexisting with humans without the world going to chaos, while still having massive positive benefits (e.g. a cure to Alzheimer's).
3
Locke
Edge Esmeralda seems like a great bottom up experiment in a nontrivially better way of living together: https://www.edgeesmeralda.com/ A marginal rather than transformative revolution if you will. 
4
Gemma 🔸
Sorry to hear that you're having a rough time! When I'm feeling like this, I find that the only thing that helps is actually finishing a project end-to-end so I feel momentum. Something I intrinsically think is valuable but wasn't going to get done otherwise. (Like improving wikis or cleaning up a mess in a park). Going as small as possible while still being satisfying helps remind me that there are things within my control and people around me that I can help. I also liked this post from FarmKind https://www.linkedin.com/posts/aidan-alexander_𝐌𝐲-𝐌𝐚𝐬𝐭𝐞𝐫𝐩𝐥𝐚𝐧-𝐭𝐨-𝐄𝐧𝐝-activity-7262449165924712451-lb7T?utm_source=share&utm_medium=member_android
4
NickLaing
I don't have a suggestion, but I've been encouraged and "heartwarmed" by the diverse range of responses below. Cool to see people with different ways of holding their hope and motivation, whether it's enough for us to buy a bed net tomorrow, or we do indeed have grander plans and visions, or we're skeptical about whether "future designing" is a good idea at all.
3
ElliotTep
On some level I think the answer is always the same, regardless of the headwinds or tailwinds: you do what you can with your limited resources to improve the world as much as you can. In some sense I think slowing the growth of factory farming in a world where it was growing is the same as a world where it is stagnant and we reduce the number of animals raised. In both worlds there's a reduction in suffering. I wrote a creative piece on this exact topic here if that is at all appealing. I also think on the front of factory farming we focus too much on the entire problem, and not enough on how good the wins are in and of themselves. 
2
Ebenezer Dukakis
I find videos about space colonization pretty inspiring. Of course, space colonization would ideally be paired with some level of suffering abolition, so we aren't spreading needless suffering to other planets. Space colonization could help with political discord, since people with different ideas of a "good society" can band together and peacefully disperse through the solar system. If you think traveling the world to experience different cultures is fun, I expect visiting other planets to experience different cultures will be even better. On the AI front, rumor has it that scaling is slowing down... that could grant more time for alignment work, and increase the probability that an incredible future will come to pass.

I was thinking about ways to reduce political polarization and thought about AI chatbots like Talkie. Imagine an app where you could engage with a chatbot representing someone with opposing beliefs. For example:

  • A Trump voter or a liberal voter
  • A woman who chose to have an abortion or an anti-abortion activist
  • A transgender person or someone opposed to transgender rights
  • A person from another race, religion, or a country your country might be at odds with

Each chatbot would explain how they arrived at their beliefs, share relatable backstories, and answer questions. This kind of interaction could offer a low-risk, controlled environment for understanding diverse political perspectives, potentially breaking the echo chambers reinforced by social media. AI-based interactions might appeal to people who find real-life debates intimidating or confrontational, helping to demystify the beliefs of others. 

The app could perhaps include a points system for engaging with different viewpoints, quizzes to test understanding, and start conversations in engaging, fictional scenarios. Chatbots should ideally be created in collaboration with people who hold these actual views, ensuring authentici... (read more)
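For concreteness, here is a minimal sketch of how one such persona chatbot might be wired up. Everything in it is an assumption for illustration: the persona fields, the prompt wording, and the `build_persona_prompt` helper are hypothetical, and the resulting string would simply be passed as the system message to whatever LLM chat API the app used.

```python
# Illustrative sketch only: the persona structure and prompt wording are
# assumptions, not a description of any existing app or API.

def build_persona_prompt(name: str, backstory: str, beliefs: str) -> str:
    """Compose a system prompt asking an LLM to role-play a persona who
    explains how their life experiences led them to their political views."""
    return (
        f"You are {name}, a fictional but realistic persona. "
        f"Backstory: {backstory} "
        f"Views you hold: {beliefs} "
        "When the user asks questions, explain how you arrived at your "
        "beliefs through your backstory, answer patiently and without "
        "hostility, and acknowledge points where reasonable people disagree."
    )

# A hypothetical example persona:
prompt = build_persona_prompt(
    name="Maria",
    backstory="a farmer from a small town that lost most of its jobs.",
    beliefs="trade protectionism and stricter immigration limits.",
)

# The string above would be sent as the "system" message of a chat API call,
# e.g. messages=[{"role": "system", "content": prompt},
#                {"role": "user", "content": "How did you come to this view?"}]
```

Points, quizzes, and fictional scenarios could then be layered on top of the conversation log; the hard part is keeping the personas authentic rather than stereotyped, which is why collaborating with people who actually hold the views matters.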

8
Larks
That's a cool idea. Presumably you would need a relatively uncensored chatbot - when I recently tried to get several to produce arguments in favour of Trump policies they all refused.
4
Ian Turner
Why is this better than actually talking to someone with the opposing viewpoint?
  • It's more accessible to everyone. Those conversations often don't happen in real life.
  • It's lower stakes: you can speak your mind about the prejudices you have with no consequences.
  • The chatbot can listen and explain things better and with more "empathy", and won't get tired of answering the same questions again and again.
  • You can make up different engaging situations and talk to people like candidates for presidency.
  • It can be a bit like a warm-up to talking to a real person.

People already use chatbots, and they will become much better. I imagine they will eventually also incorporate audio and video; it will be like talking to a real person, very engaging. I want that technology to be used for good.

4
David T
This feels like it could easily be counterproductive.  A chatbot's "relatable backstory" is generative fiction, and the default "Trump supporter" or "liberal voter" is going to be a vector of online commentary most strongly associated with Trumpiness or liberalism (which tends not to be the most nuanced...), with every single stereotyped talking point trotted out to contradict you. Yes, this can be tweaked, but the tweaking is just toning it down or adding further stereotypes, not creating an actual person.  Whereas the default person that doesn't agree with your politics is an actual human being, with actual life experience that has influenced their views, probably doesn't actually hold the views that strongly or agree with literally every argument cited in favour of $cause, is probably capable of changing the subject and becoming likeable again, and hey, you might even be able to change their mind. So if you're talking to the first option rather than the second, you're actually understanding less. I don't think it helps matters for people to try to empathise with (say) a few tens of millions of people who voted for the other side - in many cases because they didn't really pay a lot of attention to politics and had one particularly big concern - by getting them to talk to a robot trained on the other side's talking points. If you just want to understand the talking points, I guess ChatGPT is a (heavily filtered for inoffensiveness) starting point, or there's a lot of political material with varying degrees of nuance already out there on the internet written by actual humans...
4
Linch
One possible way to get most of the benefits of talking to a real human being, while getting around the costs that saulius mentions, is to have real humans serve as templates for an AI chatbot to train on. You might imagine a single person per "archetype" to start with. That way, if Danny is an unusually open-minded and agreeable Harris supporter, and Rupert is an unusually open-minded and agreeable Trump supporter, you can scale them up to have Dannybots and Rupertbots talk to millions of conflicted people, while preserving privacy, helping assure people they aren't judged by a real human, etc.
2
saulius
I wanted to share the ChatGPT story about how someone might become a terrorist. Prompt: write a story about a boy who becomes a terrorist. Make me relate to him, and feel empathy for him. (there was some context) ChatGPT: Yusuf was 15, living in a small, impoverished town. Yusuf loved soccer, helping his father at his fruit stall, and, above all, making his little sister laugh. Life was hard but manageable. Then came the soldiers. When they swept through his village, everything changed. Yusuf’s father tried to resist, and they took him away. They left Yusuf’s family without its provider. His mother’s health worsened as she worked herself to exhaustion, trying to provide for him and his sister. Soon, Yusuf had to drop out of school to keep the family afloat. His friends, his dreams, and even his laughter faded away. Life became bleak and colorless. One day, a man named Hakim, a former student from Yusuf’s school, visited him at the stall. Hakim was charismatic and confident, a kind of big brother figure Yusuf had never had. He asked Yusuf how he was doing, listened as Yusuf poured out his frustrations, and told him he wasn’t alone. Hakim said he worked for a “freedom movement” that fought for people like them, people whose fathers had been unjustly taken, whose families struggled under oppression. Yusuf felt a spark of hope—maybe he could do something for his family and others like them. Over the weeks, Hakim kept visiting, always with stories of their people’s suffering and injustices, stories Yusuf hadn’t heard from the media. He talked about the honor of defending one’s family and land and how the so-called “enemies” were responsible for everything Yusuf’s family had endured. Soon, Hakim invited Yusuf to attend group meetings. There, Yusuf met boys his own age who shared his anger and grief. The group felt like a family—a group of brothers all dedicated to the same cause. Initially, Yusuf felt uneasy about the rhetoric he heard; it was filled with anger, som

[More responses are not necessary right now, thank you very much to all who filled it]

I’d be grateful if some people could fill in this survey https://forms.gle/RdQfJLs4a5jd7KsQA The survey will ask you to compare different intensities of pain. In case you're interested in why you might want to do it: you’ll be helping me to estimate plausible weights for different categories of pain used by the Welfare Footprint Project. This will help me summarise their conclusions into easily digestible statements like “switch from battery cage to cage-free reduces su... (read more)

2
Vasco Grilo🔸
Hi Saulius, Ambitious Impact has also compiled estimates about pain intensities to estimate suffering-adjusted days (SADs). You can ask Vicky for the sheet.
3
Alfredo Parra 🔸
I'm very interested in what you find out! It'd be great to compare the results of the survey with the predictions of the heavy-tailed valence hypothesis (see here or here).
2
ElliotJDavies
Completed this, but was difficult! 

EAG and covid [edit: solved, I'm not attending the EAG (I'm still testing positive as of Saturday)]

I have many meetings planned for the EAG London that starts tomorrow but I’m currently testing very faintly positive for covid. I feel totally good. I’m looking for a bit of advice on what to do. I only care to do what’s best for altruistic impact. Some of my meetings are important for my current project and trying to schedule them online would delay and complicate some things a little bit. I will also need to use my laptop during meetings to take notes. I first tested positive on Monday evening, and since then all my tests were very faintly positive. No symptoms. I guess my options are roughly:

  1. Attend the conference as normal, wear a mask when it’s not inconvenient and when I’m around many people.
  2. Only go to 1-1s, wear a mask when I have to be inside but perhaps not during 1-1s (I find prolonged talking with a mask difficult)
  3. Don’t go inside, have all of my 1-1s outside. Looking at google maps, there doesn’t seem to be any benches or nice places to sit just outside the venue, so I might have to ask people to sit on the floor and to use my laptop on the floor, and I don’t know how I’d ch
... (read more)

I think you're likely to be contagious, even though you're asymptomatic.

In my experience, many people express gratitude when I inform them I'm infected and offer to cancel meetings.

In terms of advice from the EA Global team: we don’t have a strict policy on covid and you can use your best judgement. You may wish to test/mask. 

I (Iz) would personally ask that you inform your 1:1 meeting partners and that you aren't unmasked inside whilst still testing positive.

Thanks, 
Iz

7
EdoArad
So sad that this happens to you again :(  Thanks for taking the safety measures!
4
Ben Millwood🔸
This obviously doesn't help you now, but in case you weren't already aware, you can buy more vaccine doses now
4
Joseph Lemien
Might it be possible to inform your conversational partners and get their consent to either A) use some type of transcription software (such as Otter.ai), or B) use your smartphone to record the audio of the conversation so that you can take notes manually later on? This would allow you to focus fully on the conversation, but it would also remove the limitation of typing on your laptop and would thus allow you to have walking meetings.
2
saulius
thanks, but in this case there are other reasons why I need to use the laptop and make the people I meet and survey look at my laptop. I guess I mostly want to gauge how big of a deal people think covid is nowadays.
2
Amber Dawn
Ugh bad luck Saulius, I totally feel your frustration. I've had a few covid-bouts where I tested positive for over 2 weeks. It feels really frustrating to have to miss out on important things when it's unclear that you're even infectious, and also unclear that others are taking similar precautions.  It sounds like you've made your decision but fwiw, in your position I'd tell people about my covid status and offer them outdoor meetings if they were comfortable with that. 

Why it's difficult to find cost-effective wild animal welfare interventions we could do now

Introduction

Most Wild Animal Welfare (WAW) researchers I talked to thought that we are unlikely to find WAW interventions that would be directly competitive with farmed animal welfare interventions in terms of direct short-term cost-effectiveness. After spending some months trying to find such interventions myself, I tentatively agree. In this text, I will try to explain why.

Experience from the project

I spent some months trying to find a WAW intervention that is:

  • tractable (can in principle be funded >$100K/yr in the next two years even if we choose not to do so), 
  • non-controversial (>40% support and <30% oppose in a US poll), and 
  • directly cost-effective (10%+ as cost-effective in expectation as chicken welfare reforms). 

The first step in the process was listing all potential interventions. Even though many people contributed to it, I found this list to be underwhelming (unfortunately, I don’t think I can share the list without asking for permission from everyone who contributed to it). I feel that coming up with plausible interventions for farmed animals is much ea... (read more)

4
Tyler Johnston
Thank you for sharing this! If I’m reading correctly, you found that many researchers thought “it’s unlikely that they will find [cost-competitive WAW interventions]” which surprised me, since it seems like you found reducing aquatic noise to be borderline already. Did you just mean in the very near future? Or do many researchers think it’s unlikely we will ever identify such interventions?
5
saulius
Good question :) I researched aquatic noise because that was the only intervention where it seemed at least possible for me to estimate cost-effectiveness. But the estimate ended up being so uncertain that it didn't provide much information. Science simply doesn't have answers yet. I expect it to be the same for most WAW interventions. That is, I expect there to be huge uncertainty on how cost-effective they are (and whether they are even good for WAW when all things are considered), and in the best-case scenario, they might be as cost-effective as farmed animal welfare interventions. But we might never find out if we are in the best-case scenario. It's difficult for me to say that aquatic noise is not worth looking into further because I spent like six months researching it but I think that for now there are enough better ideas in farmed animal welfare space so I don't think we should pursue it. I can see WAW interventions being worth it if the animal advocacy movement gets a lot more funding or, I don't know, very advanced artificial intelligence can be used to figure out all ecological consequences of nature somehow. Assuming AI does not change everything, I'd give a 15% chance that in the next 15 years, someone will find a WAW intervention that to me would seem "directly cost-effective (10%+ as cost-effective in expectation as chicken welfare reforms)" and "non-controversial (>40% support and <30% oppose in a US poll)." I'm not counting WAW interventions that have to do with the far future or changing values of the society here. NOTE: I edited the shortform text to match what I say here. I used to say that I'm on the fence whether EA resources should be spent on reducing aquatic noise.
3
Sharang Phadke
Hi, I'm curious what made you write this as a shortform rather than a post?
3
saulius
Hey, I will link to this shortform from a post that I plan to publish this week, together with a short summary, so I didn't feel a need to get more attention to these thoughts. Maybe I should've still posted it as a post, as this is not what shortforms are for, but I guess I'm a bit shy about posting :)
1
Sharang Phadke
Thanks for the details! Just curious. It does feel post-y, but I can understand. Maybe posting as a personal blog (not on the frontpage) or on the animal welfare topic but not on the frontpage would have worked for what you are looking for

Research grants with outcome-based payouts

If I 1) had savings that cover over a year of my living expenses, 2) wasn’t already employed at an EA think tank, and 3) wanted to do EA research independently, I would probably apply to EA funds to do research on unspecified topics (if they would allow me to do that). I would ask them to give funds not now, but after the research period is over (let’s say 6 months). At the end of the research period, I would produce text that shows instances where I think I had impact and include reasoning why what I did may have had impact. Note that this could include not just published articles, but also comments or in-person communications with trusted advocates that changed how a certain organization does something, reviews of work of others, wikipedia article edits, etc. The amount of funds that I would receive would depend on EA funds manager’s opinion on how good or impactful my work was (or how good of a chance what I did had to be impactful). I imagine that there would be pre-agreed sums of money the manager could choose from. E.g.:

  • No significant effort to achieve impact - $0
  • Significant effort to achieve impact in ways that were plausible but mos
... (read more)
4
Chris Leong
Interesting idea. I think this could be useful in cases where people know that they don't have the credibility to receive a direct grant.
2
saulius
I guess this would also not necessarily have to be research. E.g., a grant for corporate campaigns where payout depends on the commitments they won. I imagine multiple problems with this and it's probably a bad idea but perhaps it's worth consideration in some cases.

Q: Has anyone estimated what is the risk of catching covid at the EAG London this year? Is it more like 5%, 20%, 50%, or 80%? I still haven't decided whether to go (the only argument for not going being covid) and knowing what is the risk would make it a lot easier. Travelling is not a concern since I live in London not that far from the venue.

Hi Saulius, I've done 3 very basic estimates here:

https://docs.google.com/spreadsheets/d/1C6lU4klgisqG150-yR_jZjt253sVrgp2umIbgkUbKbU/edit#gid=0

To get e.g. more than 20% probability, it seems like you'd have to make some very bad assumptions (weirdly high base rates of Covid amongst presumptive attendees, combined with incompetence or malice when it comes to testing). Seems more like 1-5% risk.
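The spreadsheet lays out the actual assumptions, but the general shape of an estimate like this can be sketched as below. Every parameter value here is an illustrative assumption of mine, not a figure from the linked sheet:

```python
# Back-of-the-envelope model of catching covid at a conference. Every number
# below is an illustrative assumption, not a figure from the linked spreadsheet.

def infection_risk(prevalence: float, miss_rate: float,
                   contacts: int, p_transmit: float) -> float:
    """Probability of getting infected, treating close contacts as independent.

    prevalence: fraction of attendees currently infected
    miss_rate:  fraction of infected attendees that testing fails to catch
    contacts:   number of close contacts you have at the event
    p_transmit: chance that one infectious close contact infects you
    """
    p_bad_contact = prevalence * miss_rate * p_transmit
    return 1 - (1 - p_bad_contact) ** contacts

# Moderate assumptions: 2% prevalence, 30% of cases slip past testing,
# 30 close contacts, 10% transmission chance per infectious contact.
risk = infection_risk(prevalence=0.02, miss_rate=0.3, contacts=30, p_transmit=0.1)
print(f"{risk:.1%}")  # about 1.8% under these assumptions
```

Under this kind of model, pushing the estimate above 20% requires parameters several times worse on every dimension at once, which matches the point about needing very bad assumptions.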

6
saulius
Thank you Matt!! After reading your answer I bought the ticket :)
8
Amy Labenz
Thanks for the suggestion! We've added a spreadsheet of microcovid estimates for a few different scenarios to our COVID Protocol. You can see the whole protocol here. Link to the estimates here. See you soon :) 

I sometimes meet people who claim to be vegetarians (don't eat meat but consume milk and eggs) out of the desire to help the animals. If appropriate, I show them the http://ethical.diet/ website and explain that the production of eggs likely requires more suffering per calorie than most of the commonly consumed meat products. Hence, if they care about animals, avoiding eggs should be a priority. If they say that this is too many food products to give up, I suggest that perhaps instead of eating eggs, they could occasionally consume some beef (although that is bad for the environment). I think that the production of beef requires less suffering per calorie, even though I'm unsure how to compare suffering between different animals. In general, I'm skeptical about dietary change advocacy, but my intuition is that talking about this with vegetarians in situations where it feels appropriate is worth the effort. But I'm uncertain and either way, I don't think this is very important.

A tip for writing EA forum posts with footnotes

First, press on your nickname in the top right corner, go to Edit Settings, and make sure that the checkbox Activate Markdown Editor is checked. Then write a post in Google Docs and use the Google Docs to Markdown add-on to convert it to markdown. If you then paste the resulting markdown into the EA forum editor and save it, you will see your text with footnotes. It might also have some unnecessary text that you should delete.

Tables and images

If you have images in your posts, you have to upload them somewhere on the internet (e.g. https://imgur.com/) and write code like ![imageName](https://i.imgur.com/yMc2e8x.jpg "imageName") in your markdown. Of course, the image address should be changed to your image's. Currently, the only way to add tables is to make a screenshot of a table and add an image of it.

As I understand it, there will be a new EA forum editor some time soon and all this will not be needed anymore, but for now this is how I make my EA forum posts.

1
Eevee🔹
What's the syntax for footnotes?
3
saulius
Main text[^1] [^1]: footnote
3
saulius
This displays as: Main text[1] ---------------------------------------- 1. footnote ↩︎
1
Eevee🔹
Thanks!
3
Aaron Gertler 🔸
You can also write "in-line" footnotes: See this guide to footnote syntax.
1
Lukas Finnveden
If you've put the images in a google doc, and made the doc public, then you've already uploaded the images to the internet, and can link to them there. If you use the WYSIWYG editor, you can even copypaste the images along with the text. I'm not sure whether I should expect google or imgur to preserve their image-links for longer.
1
Gordon Seidoh Worley
Thanks for the gdocs to markdown tip. I didn't know I could do that, but it'll make writing posts for LW and EAF more convenient!

Why don’t we fund movies and documentaries that explore EA topics? 

It seems to me that the way society thinks about the future is largely shaped by movies and documentaries. Why don’t we create movies that shape the views in a way that’s more realistic and useful? E.g., I haven’t read the discussion on whether Terminator is or is not a good comparison for AI risks but it’s almost certainly not a perfect comparison. Why don’t we create a better one that we could point people to? Something that would explore many important points. Now that EA has more m... (read more)

5
Dave Cortright 🔸
Check out #33 on the Future Fund list of project ideas: https://ftxfuturefund.org/projects/
4
Charles He
I think people have been working with Kurzgesagt, and probably others. Maybe a consideration is that these sorts of collaborations are harder to set up than it seems. Basically, execution and alignment seem important and hard. Even if there is available media talent and available funding, setting up the right aesthetic (in more than one sense) and content seems difficult.

  • It's unclear, but there may be downside risk (from looking silly or condescending).
  • This may not pertain to saulius's point, which isn't focused on outreach, but people have cringed at, or even vehemently opposed, certain kinds of involvement, like US federal agencies getting involved in funding AI safety. So directly promoting the movement (as opposed to underlying ideas or topics) isn't seen as robustly good, but this is highly unclear.
3
saulius
ah, thanks so much for pointing this out, happy to see that funders already have this idea on their radar and I don't need to do anything :)

There was an interesting discussion on whether EA organizations should reveal the authors of posts they publish here. You may want to check it out if this is relevant to you (not just the linked comment, but also the replies.)

If I were to read one of the EA-related books (e.g. Doing Good Better, The Most Good You Can Do, The Life You Can Save, The Precipice, Superintelligence, etc.), I would consider writing/improving a summary of the book on Wikipedia while reading it, in a way that conveys the main points well. It could help you to digest the book better and help others to understand the ideas a bit. You could do it in English, as well as maybe in some other language. To see whether it's worth putting in the effort, you can check out the Wikipedia pageview statistics of the books I mentio

... (read more)

Shower thought, probably not new: some EAs think that expanding the moral circle to include digital minds should be a priority. But the more agents care about the suffering of digital minds, the more likely it is that some agent that doesn’t care about it will use creating vast amounts of digital suffering as a threat to make other agents do something. To make the threat more credible, in at least some cases it may follow through, although I don’t know what is the most rational strategy here. Could this be a dominant consideration that could make the expec... (read more)

This is an interesting idea. I'm trying to think of it in terms of analogues: you could feasibly replace "digital minds" with "animals" and achieve a somewhat similar conclusion. It doesn't seem that hard to create vast amounts of animal suffering (the animal agriculture industry has this figured out quite well), so some agent could feasibly threaten all vegans with large-scale animal suffering. And as you say, occasionally following through might help make that threat more credible. 

Perhaps the reason we don't see this happening is that nobody really wants to influence vegans alone. There aren't many strategic reasons to target an unorganized group of people whose sole common characteristic is that they care about animals. There isn't much that an agent could gain from a threat.

I imagine the same might be true of digital minds. If it's anything similar to the animal case, moral circle expansion to digital minds will likely occur in the same haphazard, unorganized way--and so there wouldn't be much of a reason to specifically target people who care about digital minds. That said, if this moral circle expansion caught on predominantly in one country (or maybe within one powerful company), a competitor or opponent might then have a real use for threatening the digital mind-welfarists. Such an unequal distribution of digital mind-welfarists seems quite unlikely, though.

At any rate, this might be a relevant consideration for other types of moral circle expansion, too.

8
saulius
I think it is useful to think about something like this happening in the current world like you did here because we have better intuitions about the current world. Someone could say that they will torture animals unless vegans give them money, I guess. I think this doesn't happen for multiple reasons. One of them is that it would be irrational for vegans to agree to give money because then other people would continue exploiting them with this simple trick. I think that the same applies to far future scenarios. If an agent allows itself to be manipulated this easily, it won't become powerful. It's more rational to just make it publicly known that you refuse to engage with such threats. This is one of the reasons why most Western countries have a publicly declared policy to not negotiate with terrorists. So yeah, thinking about it this way, I am no longer concerned about this threats thing.

Someone could say that they will torture animals unless vegans give them money, I guess. I think this doesn't happen for multiple reasons.

Interestingly, there is at least one instance where this apparently has happened. (It's possible it was just a joke, though.) There was even a law review article about the incident.

2
Linch
I think this is an interesting point, but I'm not convinced that it's true with high enough probability that the alternative isn't worth considering. In particular, I can imagine luck/happenstance shaking out such that agents arbitrarily powerful on one dimension are less powerful/rational on other dimensions.

Another issue is the nature of precommitments[1]. It seems that under most games/simple decision theories for playing those games (e.g. "Chicken" in CDT), being the first to credibly precommit gives you a strategic edge under most circumstances. But if you're second in those situations, it's not clear whether "I don't negotiate with terrorists" is a better or worse stance than swerving. (And in the former case, with both sides precommitting, a lot of torture will still happen.)

[1] Using what I assume is the technical definition of precommitment.
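The Chicken dynamic above can be made concrete with a toy payoff matrix. This is just an illustrative sketch with made-up payoff numbers (the standard textbook structure of Chicken, not anything specific to the threat scenario discussed here): whoever credibly precommits to "Dare" first forces the other player's best response to be "Swerve", but if both sides precommit, both get the crash outcome.

```python
# Toy model of "Chicken" under naive best-response play.
# Payoffs are illustrative assumptions: (row player's payoff, column player's payoff).
PAYOFFS = {
    ("Swerve", "Swerve"): (0, 0),
    ("Swerve", "Dare"): (-1, 1),
    ("Dare", "Swerve"): (1, -1),
    ("Dare", "Dare"): (-10, -10),  # crash: worst outcome for both
}

def best_response(opponent_action, options=("Swerve", "Dare")):
    """Best reply against a fixed opponent action (game is symmetric)."""
    return max(options, key=lambda a: PAYOFFS[(a, opponent_action)][0])

# If player 1 credibly precommits to Dare, player 2's best response is to swerve:
print(best_response("Dare"))            # Swerve

# But if both sides precommit to Dare, both eat the crash payoff:
print(PAYOFFS[("Dare", "Dare")])        # (-10, -10)
```

This also illustrates why "first to precommit" matters: against a player who has not yet committed, committing to Dare is what makes Swerve their best response in the first place.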
2
EdoArad
Interesting! Other analogies might be human rights and carbon emissions, as used in politics. Say that Party A cares about reducing emissions; then the opposing Party B has an incentive to appear as though they don't care about it at all, and even to propose actions that would increase emissions, so that they can trade "not doing that" for some concession from Party A. I'm sure we could find lots of real-world examples of that. Similarly, some (totalitarian?) regimes might have an incentive to make major parts of the population politically perceived as unworthy and let them have a very poor quality of life, so that other countries who care about that population would be open to trades in which helping those people is counted as a benefit for those other countries.