Stefan_Schubert

5965 karma · Joined Sep 2014

Bio

I'm a researcher at the London School of Economics and Political Science, working at the intersection of moral psychology and philosophy.

https://stefanfschubert.com/

Comments
672

Topic Contributions
39

Hm, Rohin has some caveats elaborating on his claim. 

(Not literally so -- you can construct scenarios like "only investors expect AGI while others don't" where most people don't expect AGI but the market does expect AGI -- but these seem like edge cases that clearly don't apply to reality.)

Unless they were edited in after these comments were written (which doesn't seem to be the case, afaict), it seems you should have taken those caveats into account instead of just critiquing the uncaveated claim.

Fwiw I think this is good advice.

If you want to make a point about science, or rationality, then my advice is to not choose a domain from contemporary politics if you can possibly avoid it. If your point is inherently about politics, then talk about Louis XVI during the French Revolution. Politics is an important domain to which we should individually apply our rationality—but it’s a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational.

This discussion seems a bit of a side-track from your main point. These are just examples to illustrate that intuition is often wrong - you're not focused on the minimum wage per se. It might have been better to choose less controversial examples to avoid these kinds of discussions.

Fwiw I think it would have been good to explain technical terminology to a greater extent - e.g. TAI (transformative artificial intelligence), LLM (large language model), transformers, etc.

It says in the introduction:

I expect some readers to think that the post sounds wild and crazy but that doesn’t mean its content couldn’t be true.

Thus, the article seems in part directed at readers who are not familiar with the latest discussions about AI - and those readers would presumably benefit from technical concepts being explained when they are introduced.

The first paragraph is this:

"If the rise of Sam Bankman-Fried was a modern tale about cryptocurrency tokens and “effective altruism,” his fall seems to be as old as original sin. “This is really old-fashioned embezzlement,” John Ray, the caretaker CEO of the failed crypto exchange FTX, told the House on Tuesday. “This is just taking money from customers and using it for your own purpose, not sophisticated at all.”"

I don't think that amounts to depicting EA as banditry. The subject is Sam Bankman-Fried, not the effective altruism movement.

That said, Nathan, who created the thread, made some fairly general suggestions as well, so I think it's natural that people interpreted the question in this way (in spite of the title including the word "specific").

I think more general claims or questions can be useful as well. Someone might agree with the broader claim that "EA should democratise" but not with the more specific claim that "EA Funds should allow guest grantmakers with different perspectives to make 20% of their grants", so general and specific claims can complement each other. Surveys and opinion polls often include general questions.

I'm also not sure I agree that "EA should" is that bad of a phrasing. It can help to be more specific in some ways, but it can also be useful to express more general preferences, especially as a preliminary step.

No, I think yours and Ryan's interpretation is the correct one.

the new axis on the right lets you show how much you agree or disagree with the content of a comment

Linked from here.

Fwiw I'm not sure it badly damages the publishability. It might lead to more critical papers, though.
