Stefan_Schubert

Comments

Stefan_Schubert's Shortform

Yeah, I agree that there are differences between different fields - e.g. physics and sociology - in this regard. I didn't want to go into details about that, however, since it would have been a bit of a distraction from the main subject (global priorities research).

Stefan_Schubert's Shortform

On encountering global priorities research (from my blog).


People who are new to a field usually listen to experienced experts. Of course, they don’t uncritically accept whatever they’re told. But they tend to feel that they need fairly strong reasons to dismiss the existing consensus.

But people who encounter global priorities research - the study of what actions would improve the world the most - often take a different approach. Many disagree with global priorities researchers’ rankings of causes, preferring a ranking of their own.

This can happen for many reasons, and there’s some merit to several of them. First, as global priorities researchers themselves acknowledge, there is much more uncertainty in global priorities research than in most other fields. Second, global priorities research is a young and not very well-established field.

But there are other factors that may make people defer less to existing global priorities research than is warranted. I think I did, when I first encountered the field.

First, people often have unusually strong feelings about global priorities. We often feel strongly for particular causes or particular ways of improving the world, and don’t like to hear that they are ineffective. So we may not listen to rankings of causes that we disagree with.

Second, most intellectually curious people have put some thought into the questions that global priorities research studies, even if they've never heard of the field itself. This is especially so since most academic disciplines have some relation to global priorities research. So people typically have a fair amount of relevant knowledge. That's good in some ways, but it can also make them overconfident in their ability to judge existing global priorities research. Identifying the most effective ways of improving the world requires much more systematic thinking than most people will have done prior to encountering the field.

Third, people may underestimate how much thinking global priorities researchers have done over the past 10-20 years, and how sophisticated that thinking is. This is to some extent understandable, given how young the field is. But if you start to truly engage with the best global priorities research, you realize that the researchers have an answer to most of your objections. And you'll discover that they've come up with many important considerations that you've likely never thought of. This was definitely my personal experience.

For these reasons, people who are new to global priorities research may come to dismiss existing research prematurely. Of course, that’s not the only mistake you can make. You can also go too far in the other direction, and be overly deferential. It’s a tricky balance to strike. But in my experience, premature dismissal is relatively common - and maybe especially so among smart and experienced people. So it’s something to watch out for.

Thanks to Ryan Carey for comments.

Long-Term Future Fund: April 2020 grants and recommendations

I'd say most PhD students don't publish in the Journal of Philosophy or other journals of similar or better quality (it's the fourth-best general philosophy journal, according to a poll by Brian Leiter).

This blog post seems to suggest it has an acceptance rate of about 5%.

Long-Term Future Fund: September 2020 grants

Yes. Also, regarding this issue:

you could find someone with a similar talent level ... who could produce many more videos

It seems that the Long-Term Future Fund isn't actively searching for people to do specific tasks, if I understand the post correctly. Instead, it's reviewing applications that come to it. (It's more labour-intensive to do an active search.) That means that it can be warranted to fund an applicant even if there could be better candidates for the same task somewhere out there.

How do political scientists do good?

Great suggestions.

Tyler John and Will MacAskill also have this paper, "Longtermist Institutional Reform" (in the forthcoming book The Long View, edited by Natalie Cargill).

Are social media algorithms an existential risk?

There are some studies suggesting fake news isn't quite the problem some think it is.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3316768

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3107731

There are also a number of papers which are sceptical of there being pervasive social media "echo chambers" or "filter bubbles".

http://eprints.lse.ac.uk/87402/

https://www.sciencedirect.com/science/article/abs/pii/S0747563216309086

Cf also this recent book by Hugo Mercier, which argues that people are less gullible than many think.

I don't know this literature well and am not quite sure what conclusions to draw. My impression is, however, that some claims of the dangers of fake news on social media are exaggerated.

Cf also my comment on the post on recommender systems, relating to other effects of social media.

Stefan_Schubert's Shortform

I've written a blog post on naive effective altruism and conflict.


A very useful concept is that of naive effective altruism. The naive effective altruist fails to take important social or psychological considerations into account, and may therefore end up doing harm rather than good.

The standard examples of naive effective altruism are perhaps lying and stealing for the greater good. But there are other, less salient examples. Here I want to discuss one of them: the potential tendency to be overly conflict-oriented. There are several ways this may occur.

First, people may neglect the costs of conflict - that it's psychologically draining for them and for others, that it reduces the potential for future collaboration, that it may harm community culture, and so on. Typically, you enter into a conflict because you think that some individual or organisation is making a poor decision - e.g. one that reduces impact. My hunch is that people often decide to enter into the conflict because they focus exclusively on this (supposed) direct impact cost, and don't consider the costs of the conflict itself.

Second, people often have unrealistic expectations of how others will react to criticism. Rightly or wrongly, people tend to feel that their projects are their own, and that others can only have so much of a say over them. They can take a certain amount of criticism, but if they feel that you’re invading their territory too much, they will typically find you abrasive. And they will react adversely.

Third, overconfidence may lead you to think that a decision is obviously flawed when there's actually room for reasonable disagreement. That can make you push harder than you should.

*

These considerations don’t mean that you should never enter into a conflict. Of course you should. Exactly when to do so is a tricky problem. All I want to say is that we should be aware that there’s a risk that we enter into too many conflicts if we apply effective altruism naively.

How have you become more (or less) engaged with EA in the last year?

In contrast to some of the responses here, I think that EA has become more intellectually sophisticated in recent years. It's true that there were many new ideas at the beginning. But it feels a bit unfair to just look at the number of new ideas, given that it's easier to generate them at the start - when there's more low-hanging fruit.

Relatedly, it seems to me that EA organisations are also getting more mature and skilled. There are several impressive new organisations, and others have expanded considerably.

Asking for advice

Maybe one option would be to both send the Calendly link and write a more standard email? E.g.:

"When would suit you? How about Tuesday 3pm or Wednesday 4pm? Alternatively, you could check my Calendly, if you prefer."

Maybe some would find that overly roundabout.

Asking for advice

I think that for many, it's primarily the act of sending a Calendly link that is off-putting (for social, potentially status-related, reasons), rather than the experience of interacting with the software. My hunch is that people don't have the same aversion to, e.g., Doodle, which is more symmetric (it's not that one person sends their preferences to the other; rather, everyone lists their preferences). (But you may be different.)
