
Stefan_Schubert

6459 karma · Joined Sep 2014

Bio

I'm a researcher in psychology and philosophy.

https://stefanschubert.substack.com/

Comments: 706 · Topic contributions: 39

Thanks, this is great. You could consider publishing it as a regular post (either after or without further modification).

I think it's an important take since many in EA/AI risk circles have expected governments to be less involved:

https://twitter.com/StefanFSchubert/status/1719102746815508796?t=fTtL_f-FvHpiB6XbjUpu4w&s=19

It would be good to see more discussion on this crucial question.

The main thing you could consider adding is more detail, e.g. step-by-step analyses of how governments might get involved. For instance, here is a question it would be worth learning more about:

"does it look more like much more regulations or international treaties with civil observers or more like almost-unprecedented nationalization of AI as an industry[?]"

But of course that's hard.

I don't find it hard to imagine how this would happen. I find Linch's claim interesting and would find an elaboration useful. I don't thereby imply that the claim is unlikely to be true.

Thanks, I think this is interesting, and I would find an elaboration useful.

In particular, I'd be interested in elaboration of the claim that "If (1, 2, 3), then government actors will eventually take an increasing/dominant role in the development of AGI".

The reasoning is that knowledgeable people's belief in a certain view is evidence for that view.

This is a type of reasoning people use a lot in many different contexts. I think it's a valid and important type of reasoning (even though specific instances of it can of course be mistaken).

Some references:

https://plato.stanford.edu/entries/disagreement/#EquaWeigView

https://www.routledge.com/Why-Its-OK-Not-to-Think-for-Yourself/Matheson/p/book/9781032438252

https://forum.effectivealtruism.org/posts/WKPd79PESRGZHQ5GY/in-defence-of-epistemic-modesty

Yes; it could be useful if Stephen briefly explained how his classification relates to other classifications. (And what advantages it has; I guess simplicity is one.)

Thoughtful post.

If you're perceived as prioritising one EA cause over another, you might get pushback (whether for good reason or not). I think that's more true for some of these suggestions than for others. E.g. I think having some cause-specific groups might be seen as less controversial than having varying ticket prices for the same event depending on the cause area. 

I’m struck by how often two theoretical mistakes manage to (mostly) cancel each other out.

If that's so, one might wonder why that happens.

In these cases, it seems that there are three questions; e.g.:

1) Is consequentialism correct?
2) Does consequentialism entail Machiavellianism?
3) Ought we to be Machiavellian?

You claim that people get the answers to the first two questions wrong but the answer to the third question right, since the two mistakes cancel each other out. In effect, two incorrect premises lead to a correct conclusion.

It's possible that in the cases you discuss, people tend to have the firmest intuitions about question 3) ("the conclusion"). E.g. they are more convinced that we ought not to be Machiavellian than that consequentialism is correct/incorrect or that consequentialism entails/does not entail Machiavellianism.

If that's the case, then it would be unsurprising that mistakes cancel each other out. E.g. someone who came to believe that consequentialism entails Machiavellianism would be inclined to reject consequentialism, since they would otherwise need to accept that we ought to be Machiavellian (which, by hypothesis, they don't).

(Effectively, I'm saying that people reason holistically, reflective equilibrium-style; and not just from premises to conclusions.)

A corollary of this is that "a little knowledge" may be dangerous less often than one might think. Suppose that someone initially believes that consequentialism is wrong (Question 1), that consequentialism entails Machiavellianism (Question 2), and that we ought not to be Machiavellian (Question 3). They then change their view on Question 1, adopting consequentialism. That creates an inconsistency between their three beliefs. But if they have firmer beliefs about Question 3 (the conclusion) than about Question 2 (the other premise), they'll resolve this inconsistency by rejecting the other incorrect premise, not by endorsing the dangerous conclusion that we ought to be Machiavellian.

My argument is of course schematic, and its plausibility will no doubt vary depending on which of the six cases you discuss we consider. I do think that "a little knowledge" is sometimes dangerous in the way you suggest. Nevertheless, I think the mechanism I discuss is worth remembering.

In general, I think a little knowledge is usually beneficial, meaning our prior that it's harmful in an individual case should be reasonably low. However, priors can of course be overturned by evidence in specific cases.

How much of this is lost by compressing to something like: virtue ethics is an effective consequentialist heuristic?

It doesn't just say that virtue ethics is an effective consequentialist heuristic (if it says that); it also puts forward a specific theory of the importance of altruism (a virtue) and of how to cultivate it.

There hasn't been much systematic discussion of which specific virtues consequentialists or effective altruists should cultivate. I'd like to see more of it.

@Lucius Caviola and I have written a paper where we put forward a specific theory of which virtues utilitarians should cultivate. (I gave a talk along similar lines here.) We discuss altruism but also five other virtues.

Another factor is that recruitment to the EA community may be more difficult if it's perceived as very demanding. 

I'm also not convinced by the costly-signalling arguments discussed in the post. (This is from a series of posts on this topic.)
