saulius

My name is Saulius Simcikas. I am a researcher at Rethink Priorities. I currently focus on topics within farmed animal welfare. Previously, I was a research intern at Animal Charity Evaluators, organised Effective Altruism events in the UK and Lithuania, and earned-to-give as a programmer.

Comments

saulius's Shortform

Thank you Matt!! After reading your answer I bought the ticket :)

saulius's Shortform

Q: Has anyone estimated the risk of catching covid at EAG London this year? Is it more like 5%, 20%, 50%, or 80%? I still haven't decided whether to go (the only argument against going being covid), and knowing the risk would make the decision a lot easier. Travelling is not a concern, since I live in London, not far from the venue.

The motivated reasoning critique of effective altruism

Nice post! Regarding

1. Strong ideological reasons to believe in a pre-existing answer before searching further (consider mathematical modeling of climate change or coronavirus lockdowns vs pure mathematics)
 [...]
 Unfortunately, effective altruism is on the wrong side of all these criteria.

I'm curious what you think these strong ideological reasons are. My opinion is that EA is on the right side here on most questions. This is because in EA you get a lot of social status (and EA Forum karma) for making good arguments against views that are commonly held in EA. I imagine that in most communities this is not the case. Maybe there is an incentive to believe that a cause area or an intervention is promising if you want to (continue to) work within that cause area, but challenging anything within a cause area or an intervention seems encouraged.

EA Superpower?! 😋

…AND I SHOW YOU HOW DEEP THE RABBIT HOLE GOES is a great slatestarcodex story which explores what can be done with different superpowers. I'd take the black pill, which on the face of it looks like low magic.

Buck's Shortform

I would also ask these people to optionally write or improve a summary of the book on Wikipedia if it has a Wikipedia article (or should have one). In many cases, it's not only EAs who would do more good if they knew the ideas in a given book, especially when it's on a subject like pandemics or global warming rather than a topic also relevant to non-altruistic work, like management or productivity. When you google a book, Wikipedia is often the first result, so these articles receive quite a lot of traffic (you can see here how much traffic a given article receives).
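A minimal sketch of how one might check an article's traffic, assuming the public Wikimedia Pageviews REST API (the linked tool in the comment above presumably does the same thing); the article title and date range below are purely illustrative:

```python
# Rough sketch, assuming the public Wikimedia Pageviews REST API.
# The article title and dates are illustrative examples only.
import requests
from urllib.parse import quote

def pageviews(article: str, start: str, end: str) -> int:
    """Sum daily pageviews for an English Wikipedia article
    between start and end dates (YYYYMMDD)."""
    url = (
        "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
        f"en.wikipedia/all-access/all-agents/{quote(article, safe='')}/"
        f"daily/{start}/{end}"
    )
    # Wikimedia asks clients to send a descriptive User-Agent header.
    resp = requests.get(url, headers={"User-Agent": "book-pageview-check-example"})
    resp.raise_for_status()
    return sum(item["views"] for item in resp.json()["items"])

print(pageviews("Superintelligence:_Paths,_Dangers,_Strategies",
                "20210101", "20210131"))
```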

Key Lessons From Social Movement History

While there may be some benefits to increasing issue salience, our case studies provide weak evidence that high issue salience can decrease the tractability of legislative change,[35] which is evidence against tactics that are aimed at increasing salience. This might be especially so if advocates are trying to push through unpopular policies.

Because the animal farming industry has a lot of political power in most countries, I feel that it is the industry that is more likely to push through unpopular policies, ones that benefit animal farmers financially but hurt animals. I may be wrong, but I don't think animal advocates pushing through unpopular policies has much precedent, and I'm not sure what leverage they could use to do that.

Looking for more 'PlayPumps' like examples

I don't know much about it, but this talk mentions how sending free second-hand clothing as aid has damaged the local textile industry in some countries. A quick google search reveals articles like this one that seem to discuss it in more depth (I haven't read them, though). Also, this article came to mind, but it seems that no charity was involved, so it probably doesn't fit your purpose.

saulius's Shortform

I think it is useful to consider something like this happening in the present day, as you did here, because we have better intuitions about the current world. Someone could say that they will torture animals unless vegans give them money, I guess. I think this doesn't happen for multiple reasons. One of them is that it would be irrational for vegans to agree to pay, because then other people could keep exploiting them with the same simple trick.

I think the same applies to far-future scenarios. If an agent allows itself to be manipulated this easily, it won't become powerful. It's more rational to just make it publicly known that you refuse to engage with such threats. This is one of the reasons why most Western countries have a publicly declared policy of not negotiating with terrorists. So yeah, thinking about it this way, I am no longer concerned about this threats thing.

Linch's Shortform

I think that all of us RP intern managers took the same 12-hour management training from The Management Center. I thought there was some high-quality advice in it, but I'm not sure whether it applied that much to our situation of managing research interns. I haven't been to other such trainings, so I can't compare.

saulius's Shortform

Shower thought, probably not new: some EAs think that expanding the moral circle to include digital minds should be a priority. But the more agents care about the suffering of digital minds, the more likely it is that some agent that doesn't care about it will threaten to create vast amounts of digital suffering to make other agents do something. To make the threat more credible, in at least some cases it may follow through, although I don't know what the most rational strategy is here. Could this be a dominant consideration that makes the expected value of moral circle expansion negative for negative utilitarians? The intensity and scale of purposefully created suffering could outweigh the incidental suffering that an expanded moral circle would prevent in other scenarios.

EDIT: I no longer think that this is a legitimate concern; see my comment below.
