All of Stefan_Schubert's Comments + Replies

This is a summary of Temporal Distance Reduces Ingroup Favoritism by Stefan Schubert, @Lucius Caviola, Julian Savulescu, and Nadira S. Faber.

Most people are morally partial. When deciding whose lives to improve, they prioritise their ingroup – their compatriots or their local community – over distant strangers. And they are also partial with respect to time: they prioritise currently living people over people who will live in the future. This is well known from psychological research.

But what has received less attention is how these psychological dime... (read more)

Robin Hanson's post Marginal Charity seems relevant, even though it's a distinct idea.

I think it's a positive sign. I think the answer to the second question is no.

I might call that "meritocratic about ideas".

calebp

Why would I listen to you? You don't even have an English degree.

Yes, I agree.

The OP seems to talk about cause-agnosticism (uncertainty about which cause is most pressing) or cause-divergence (focusing on many causes).

3
Holly Elmore ⏸️ 🔸
But EA is not cause agnostic OR cause neutral atm

Another disadvantage of moving to Reddit is that it would give the existing material on the EA Forum (which includes a lot of good stuff) less visibility (even though it would presumably stay online).

Overall I'd prefer the EA Forum to continue to exist.

A groundbreaking paper by Aidan Toner-Rodgers at MIT recently found that material scientists assisted by AI systems "discover 44% more materials, resulting in a 39% increase in patent filings and a 17% rise in downstream product innovation."

MIT just put up a notice that they've "conducted an internal, confidential review and concluded that the paper should be withdrawn from public discourse".

2
Garrison
Yeah, I asked my editor at TIME to add an update. Will edit this piece as well.

Right. I think it could be useful to be quite careful about which terms to use, since, e.g., some who might actually be fine with some level of monitoring and oversight would be more sceptical of it if it's described as "soft nationalisation".

You could search the literature (e.g. on other industries) for existing terminology.

Part of our linguistic struggle here is that we're attempting to map the entire spectrum of gov. involvement and slap an overarching label on it. 

One approach could be to use terminology that's explicit about there being a spectrum. ... (read more)

Some of the listed policy levers seem in themselves insufficient for the government's policy to qualify as soft nationalization. For instance, that seems true of government contracts and some forms of government oversight. You might consider coming up with another term to describe policies that are towards the lower end of government intervention.

In general, you focus on the contrast between soft and total nationalization, but I think it could also be useful to make contrasts with lower levels of government intervention. In my view, there's a lot of ground... (read more)

2
Deric Cheng
I'd agree - for many of these individual policy levers (esp. the monitoring & oversight mechanisms), "soft nationalization" wouldn't be the best term to describe them!  Part of our linguistic struggle here is that we're attempting to map the entire spectrum of gov. involvement and slap an overarching label on it. "Soft nationalization" gets the general point across, but definitely breaks down on a case-by-case basis.

I don't think one can infer that without having the whole distribution across different countries. It may just be that small countries have greater variance. (Though I don't know what principle the author used for excluding certain countries.)

6
Ebenezer Dukakis
Another point regarding small countries. Imagine, hypothetically, that EA Estonia and EA Poland are on an identical membership growth curve. Perhaps EAGx attendance is about the same, and is growing at a rate of, say, 10% per year in both countries. Intuitively that could suggest comparable levels of cultural affinity for EA. However, since Poland has a population which is 28x as large, it ends up looking very different in OP's chart.

Speaking of growth, it would be interesting to see a plot for percent annual growth by country as well. Would Estonia's growth number be high, indicating cultural affinity? Or would its growth number be low, indicating saturation?

Perhaps the reason the Anglosphere has so many EAs is just because so many EA materials are in English. Might be interesting to compare the early growth curve in Anglosphere countries vs emerging EA countries. It would be super cool if we are on track to become a more global movement!

I agree with that.

Also, notice that the top countries are pretty small. That may be because random factors/shocks are more likely to push the average up or down for small countries. Cf:

Kahneman begins the chapter with an example of data interpretation using cases of kidney cancer. The lowest rates of kidney cancer are in counties that are rural and vote Republican. All sorts of theories jump to mind based on that data.  However, a few paragraphs later Kahneman notes that the data also shows that the counties with the highest rates of kidney cancer

... (read more)
2
OscarD🔸
True! Of course if we had all the data we could run a fancier statistical test. I suppose my observation is limited to the fact that the English-speaking vs European ranges seem similar rather than e.g. all the Anglosphere countries being distinctly higher than all the European countries.

The classic fact about variance in small populations, from the start of "Thinking, Fast and Slow". Love it!
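For readers who want to see that small-populations point concretely, here is a minimal simulation sketch (not from the original thread; the population sizes, base rate, and county counts are made-up illustrative assumptions) showing that per-capita rates computed over small populations are far more likely to look extreme than the same rates computed over large populations:

```python
# Illustrative sketch (not from the original post): why small populations
# produce more extreme observed per-capita rates than large ones.
# All parameters below are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

def extreme_rate_share(population, n_counties=1000, base_rate=0.001, n_trials=1000):
    """Fraction of simulated counties whose observed per-capita rate
    is at least double the true underlying rate."""
    cases = rng.binomial(population, base_rate, size=(n_trials, n_counties))
    rates = cases / population
    return np.mean(rates >= 2 * base_rate)

for pop in [1_000, 10_000, 100_000]:
    print(pop, extreme_rate_share(pop))
```

With these hypothetical numbers, a county of 1,000 shows a doubled rate roughly a quarter of the time by chance alone, whereas a county of 100,000 essentially never does; the small counties dominate both the top and the bottom of the ranking purely through noise.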

@Lucius Caviola and I discuss such issues in Chapter 9 of our recent book. If I understand your argument correctly, I think our suggested solution (splitting donations between a highly effective charity and the originally preferred "favourite" charity) amounts to what you call a barbell strategy.

5
niplav
Huh, the convergent lines of thought are pretty cool! Your suggested solution is indeed what I'm also gesturing towards. A "barbell strategy" works best if we only have a few dimensions we don't want to make comparable, I think. (AFAIU it grows only linearly, but we still want to perform some sampling of the top options to avoid the winner's curse?)

Detail, but afaict there were at least five Irish participants.

4
Lorenzo Buonanno🔸
Thanks! I was using old data; I updated the table. I'm surprised there were only five.

I was going to make a point about a ‘lack of EA leadership’ turning up apart from Zach Robinson, but when I double-checked the event attendee list I think I was just wrong on this. Sure, a couple of big names didn’t turn up, and it may depend on what list of ‘EA leaders’ you’re using as a reference, but I want to admit I was directionally wrong here.

Fwiw I think there was such a tendency.

1
Tobias Dänzer
Argh, I only posted this because I'd checked the forum - quite thoroughly, or so I thought - and was surprised to see no existing post on the subject.
4
Jason
The title of the piece is: "Sam Bankman-Fried, the effective altruist who wasn't." I don't think <the self-styled effective altruist who actually wasn't one> is an implausible interpretation of that ambiguous title. Other plausible-to-me interpretations include: <the former effective altruist who wasn't one at the end>, <the effective altruist who wasn't a true EA>, etc. Of course, "EA's CEO" wasn't accurate (which the linkposter changed), and I would not assume that the CEO wrote the headline. But I do think a lack of clarity in the headline is at play here.

Thanks, this is great. You could consider publishing it as a regular post (either after or without further modification).

I think it's an important take since many in EA/AI risk circles have expected governments to be less involved:

https://twitter.com/StefanFSchubert/status/1719102746815508796?t=fTtL_f-FvHpiB6XbjUpu4w&s=19

It would be good to see more discussion on this crucial question.

The main thing you could consider adding is more detail; e.g. maybe step-by-step analyses of how governments might get involved. For instance, this is a good question tha... (read more)

4
Linch
Thanks! I don't have much expertise or deep analysis here, just sharing/presenting my own intuitions. Definitely think this is an important question that analysis may shed some light on. If somebody with relevant experience (eg DC insider knowledge, or academic study of US political history) wants to cowork with me to analyze things more deeply, I'd be happy to collab. 

I don't find it hard to imagine how this would happen. I find Linch's claim interesting and would find an elaboration useful. I don't thereby imply that the claim is unlikely to be true.

4
NickLaing
Apologies, will fix that and remove your name. Was just trying to credit you with triggering the thought.

Thanks, I think this is interesting, and I would find an elaboration useful.

In particular, I'd be interested in elaboration of the claim that "If (1, 2, 3), then government actors will eventually take an increasing/dominant role in the development of AGI".

I can try, though I haven't pinned down the core cruxes behind my default story and others' stories. I think the basic idea is that AI risk and AI capabilities are both really big deals. Arguably the biggest deals around by a wide variety of values. If the standard x-risk story is broadly true (and attention is maintained, experts continue to call it an extinction risk, etc), this isn't difficult for nation-state actors to recognize over time. And states are usually fairly good at recognizing power and threats, so it's hard to imagine they'd just sit at th... (read more)

The reasoning is that knowledgeable people's belief in a certain view is evidence for that view.

This is a type of reasoning people use a lot in many different contexts. I think it's a valid and important type of reasoning (even though specific instances of it can of course be mistaken).

Some references:

https://plato.stanford.edu/entries/disagreement/#EquaWeigView

https://www.routledge.com/Why-Its-OK-Not-to-Think-for-Yourself/Matheson/p/book/9781032438252

https://forum.effectivealtruism.org/posts/WKPd79PESRGZHQ5GY/in-defence-of-epistemic-modesty

Yes; it could be useful if Stephen briefly explained how his classification relates to other classifications. (And which advantages it has - I guess simplicity is one.)

Thoughtful post.

If you're perceived as prioritising one EA cause over another, you might get pushback (whether for good reason or not). I think that's more true for some of these suggestions than for others. E.g. I think having some cause-specific groups might be seen as less controversial than having varying ticket prices for the same event depending on the cause area. 

I’m struck by how often two theoretical mistakes manage to (mostly) cancel each other out.

If that's so, one might wonder why that happens.

In these cases, it seems that there are three questions; e.g.:

1) Is consequentialism correct?
2) Does consequentialism entail Machiavellianism?
3) Ought we to be Machiavellian?

You claim that people get the answer to the first two questions wrong, but the answer to the third question right, since the two mistakes cancel each other out. In effect, two incorrect premises lead to a correct conclusion.

 It's possible that i... (read more)

4
Richard Y Chappell🔸
Thanks, yeah, I think I agree with all of that!

How much of this is lost by compressing to something like: virtue ethics is an effective consequentialist heuristic?

It doesn't just say that virtue ethics is an effective consequentialist heuristic (if it says that) but also has a specific theory about the importance of altruism (a virtue) and how to cultivate it.

There's not been a lot of systematic discussion on which specific virtues consequentialists or effective altruists should cultivate. I'd like to see more of it.

@Lucius Caviola and I have written a paper where we put forward a specific theory of wh... (read more)

Another factor is that recruitment to the EA community may be more difficult if it's perceived as very demanding. 

I'm also not convinced by the costly-signalling arguments discussed in the post. (This is from a series of posts on this topic.)

I think this discussion is a bit too abstract. It could be helpful to give concrete examples of non-academic EA research that you think should have been published in academic outlets. It would also help if you gave some details of what changes they would need to make to get their research past peer reviewers.

3
Hans Waschke-Wischedag
I just stumbled across this post:  https://forum.effectivealtruism.org/posts/KGqAvdxN6KypkmdPb/effective-self-help-a-guide-to-improving-your-subjective Which I strongly think would be of interest outside of the small bubble that is EA. It should be published as a pre-print or review article.  
3
Hans Waschke-Wischedag
Thank you for your comment. This is a good point. I thought it was obvious, but it indeed isn't. A perfect example would be:  https://www.openphilanthropy.org/research/what-a-compute-centric-framework-says-about-takeoff-speeds/ This model is a fully-fledged analysis by an "EA-research" team that would probably gain interest and scrutiny from academic researchers in economics and artificial intelligence. It even has done so, despite the fact that it is not published in any conventional research outlets that would be picked up by search engines. I think it could be uploaded to arXiv with little to no change. I think this would greatly enhance the viewership and reach of the research.

I'm saying that there are many cases where well-placed people do step up/have stepped up.

Assume by default that if something is missing in EA, nobody else is going to step up.

In many cases, it actually seems reasonable to believe that others will step up; e.g. because they are well-placed to do so/because it falls within a domain they have a unique competence in.

4
calebp
I think Linch is saying that empirically, other EAs don't seem to step up - not that there aren't people who could step up if they wanted to.

One aspect is that we might expect people who believe unusually strongly in an idea to be more likely to publish on it (winner's curse/unilateralist's curse).
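A minimal sketch of the selection effect this refers to (mine, not from the comment; the noise level and publication threshold are made-up assumptions): if only people with unusually high estimates of an idea's value publish on it, the published estimates overstate the true value on average.

```python
# Illustrative sketch (not from the original comment): a winner's-curse-style
# selection effect. Everyone forms a noisy estimate of an idea's true value;
# only those whose estimate clears a threshold publish. The published
# estimates are then biased upward. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)

true_value = 0.0          # the idea's actual merit
noise_sd = 1.0            # spread of individual judgments
publish_threshold = 1.0   # only unusually enthusiastic people publish

estimates = true_value + rng.normal(0, noise_sd, size=100_000)
published = estimates[estimates > publish_threshold]

print("mean of all estimates:      ", round(estimates.mean(), 3))   # close to 0
print("mean of published estimates:", round(published.mean(), 3))   # roughly 1.5
```

With these made-up numbers, the published estimates average roughly 1.5 even though the idea's true value is 0, which is the basic mechanism behind the winner's/unilateralist's curse.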

He does, but at the same time I think it matters that he uses that shorthand rather than some other expression (say CNGS), since it makes the EA connection more salient.

1
Ben_West🔸
Agreed, it's just important to understand that what the author means by that term is not what most of us would mean by the term.

Yes, I think the title should be changed.

7
Kirsten
"A Summary of Every Mention of EA in Going Infinite"? "How EA is portrayed in Going Infinite"?

Some evidence that people tend to underuse social information, suggesting they're not by default epistemically modest:


Social information is immensely valuable. Yet we waste it. The information we get from observing other humans and from communicating with them is a cheap and reliable informational resource. It is considered the backbone of human cultural evolution. Theories and models focused on the evolution of social learning show the great adaptive benefits of evolving cognitive tools to process it. In spite of this, human adults in the experimental lit

... (read more)

The post seems to confuse the postdoctoral fellowship and the PhD fellowship (assuming the text on the grant interface is correct). It's the postdoc fellowship that has an $80,000 stipend, whereas the PhD fellowship stipend is $40,000.

3
Zhijing Jin
Thank you for spotting it! I just did the fix :).

I think "Changes in funding in the AI safety field" was published by the Centre for Effective Altruism.

3
Stephen McAleese
Thanks for spotting that. I updated the post.

The transcript can also be found at this link.

You may want to have a look at the list of topics. Some of the terms above are listed there; e.g. Bayesian epistemology, counterfactual reasoning, and the unilateralist's curse.

Nice comment; you make several good points. Fwiw, I don't think our paper is in conflict with anything you say here.

On this theme: @Lucius Caviola and myself have written a paper on virtues for real-world utilitarians. See also Lucius's talk Against naive effective altruism.

4
Severin
awesome, looks good!

I gave an argument for why I don't think the cry-wolf effects would be as large as one might think in World A. Afaict your comment doesn't engage with my argument.

I'm not sure what you're trying to say with your comment about World B. If we manage to permanently solve the risks relating to AI, then we've solved the problem. Whether some people will then be accused of having cried wolf seems far less important relative to that.

5
MvK🔸
You're right - my comment is addressing an additional problem. (So I maybe should've made it a standalone comment) As far as your second point is concerned - that's true, unless we will face risk (again, and possibly more) at a later point. I agree with you that "crying wolf-effects" matter less or not at all under conditions where a problem is solved once and for all (unless it affects the credibility of a community which simultaneously works on other problems which remain unsolved, as is probably true of the EA community).

I also guess cry-wolf effects won't be as large as one might think - e.g. I think people will look more at how strong AI systems appear at a given point than at whether people have previously warned about AI risk.

6
MvK🔸
There's an additional problem that people who sound the alarms will likely be accused by some of "crying wolf" regardless of the outcome:

World A) Group X cries wolf. AI was not actually dangerous, nothing bad happens. Group X (rightly) gets accused of crying wolf and loses credibility, even if AI gets dangerous at some future point.

World B) Group X cries wolf. AI is actually dangerous, but because they cried wolf, we manage the risk and there is no catastrophe. Seeing the absence of a catastrophe, some people will accuse group X of crying wolf and they lose credibility.

Yeah, I was going to post that tweet. I'd also like to mention my related thread, arguing that if you have a history of crying wolf, then when wolves do start to appear, you'll likely be turned to as a wolf expert.

Thanks, very interesting.

Regarding the political views, there are two graphs, showing different numbers. Does the first include people who didn't respond to the political views question, whereas the second excludes them? If so, it might be good to clarify that. You might also clarify that the first graph/set of numbers doesn't sum to 100%. Alternatively, you could just present the data that excludes non-responses, since that's in my view the more interesting data.

3
David_Moss
Hi Stefan, Thanks for the comment! I'm inclined to agree it's clearer and easier to just show the 'excluding' numbers in both cases, so this is changed now. We'll update this to be the same anywhere else in the post it applies too.
7
Jason
On religion too, I think.

Yes, I think that him being interviewed by 80K, for example, didn't make much of a difference. I think that EA's reputation would inevitably be tied to his to an extent, given how much money they donated and the context in which that occurred. People often overrate how much you can influence perceptions by framing things differently.

Yes. The Life You Can Save and Doing Good Better are pretty old. I think it's natural to write new content to clarify what EA is about.

"Co-writing with Julia would be better, but I suspect it wouldn't go well. While we do have compatible views, we have very different writing styles, and I understand taking on projects like this is often hard on relationships."

Perhaps there are ways of addressing this. For instance, you could write separate chapters, or parts; or have some kind of dialogue between the two of you. The idea would be that each person owns part of the book. I'm unsure about the details, but maybe you could find a solution.

Personally, I would not do this to my marriage.

Yes, this was my thought as well. I'd love a book from you, Jeff, but would really (!!) love one from both of you (+ mini-chapters from the kids?).

I don't know the details of your current work, but it seems worth writing one chapter as a trial run, and if you think it's going well (and maybe has good feedback), considering taking 6 months or so off.

Do you mean EAGx Berkeley 2022 or EA Global: Bay Area 2023?

3
keller_scholl 🔸
Bay Area 2023. Will edit.

Informed speculation might ... confuse people, since there's already plenty of work people call "AI forecasting" that looks similar to what I'm talking about.

Yes, I think using the term "forecasting" for what you do is established usage - it's effectively a technical term. Calling it "informed speculation about AI" in the title would not be helpful, in my view.

Great post, btw.
