EA's CEO says Sam Bankman-Fried was never an effective altruist
I don't think the piece says that.
Thanks, this is great. You could consider publishing it as a regular post (either after or without further modification).
I think it's an important take since many in EA/AI risk circles have expected governments to be less involved:
https://twitter.com/StefanFSchubert/status/1719102746815508796?t=fTtL_f-FvHpiB6XbjUpu4w&s=19
It would be good to see more discussion on this crucial question.
The main thing you could consider adding is more detail; e.g. maybe step-by-step analyses of how governments might get involved. For instance, this is a good question tha...
I don't find it hard to imagine how this would happen. I find Linch's claim interesting and would find an elaboration useful. I don't thereby imply that the claim is unlikely to be true.
Thanks, I think this is interesting, and I would find an elaboration useful.
In particular, I'd be interested in elaboration of the claim that "If (1, 2, 3), then government actors will eventually take an increasing/dominant role in the development of AGI".
I can try, though I haven't pinned down the core cruxes behind my default story and others' stories. I think the basic idea is that AI risk and AI capabilities are both really big deals. Arguably the biggest deals around by a wide variety of values. If the standard x-risk story is broadly true (and attention is maintained, experts continue to call it an extinction risk, etc), this isn't difficult for nation-state actors to recognize over time. And states are usually fairly good at recognizing power and threats, so it's hard to imagine they'd just sit at th...
The reasoning is that knowledgeable people's belief in a certain view is evidence for that view.
This is a type of reasoning people use a lot in many different contexts. I think it's a valid and important type of reasoning (even though specific instances of it can of course be mistaken).
Some references:
https://plato.stanford.edu/entries/disagreement/#EquaWeigView
https://www.routledge.com/Why-Its-OK-Not-to-Think-for-Yourself/Matheson/p/book/9781032438252
https://forum.effectivealtruism.org/posts/WKPd79PESRGZHQ5GY/in-defence-of-epistemic-modesty
Yes; it could be useful if Stephen briefly explained how his classification relates to other classifications. (And which advantages it has - I guess simplicity is one.)
Thoughtful post.
If you're perceived as prioritising one EA cause over another, you might get pushback (whether for good reason or not). I think that's more true for some of these suggestions than for others. E.g. I think having some cause-specific groups might be seen as less controversial than having varying ticket prices for the same event depending on the cause area.
I’m struck by how often two theoretical mistakes manage to (mostly) cancel each other out.
If that's so, one might wonder why that happens.
In these cases, it seems that there are three questions; namely:
1) Is consequentialism correct?
2) Does consequentialism entail Machiavellianism?
3) Ought we to be Machiavellian?
You claim that people get the answer to the first two questions wrong, but the answer to the third question right, since the two mistakes cancel each other out. In effect, two incorrect premises lead to a correct conclusion.
It's possible that i...
How much of this is lost by compressing to something like: virtue ethics is an effective consequentialist heuristic?
It doesn't just say that virtue ethics is an effective consequentialist heuristic (if it says that) but also has a specific theory about the importance of altruism (a virtue) and how to cultivate it.
There's not been a lot of systematic discussion on which specific virtues consequentialists or effective altruists should cultivate. I'd like to see more of it.
@Lucius Caviola and I have written a paper where we put forward a specific theory of wh...
Another factor is that recruitment to the EA community may be more difficult if it's perceived as very demanding.
I'm also not convinced by the costly-signalling arguments discussed in the post. (This is from a series of posts on this topic.)
I think this discussion is a bit too abstract. It would be helpful to have concrete examples of non-academic EA research that you think should have been published in academic outlets. It would also help if you gave some details of what changes the authors would need to make to get their research past peer reviewers.
Assume by default that if something is missing in EA, nobody else is going to step up.
In many cases, it actually seems reasonable to believe that others will step up; e.g. because they are well-placed to do so/because it falls within a domain they have a unique competence in.
One aspect is that we might expect people who believe unusually strongly in an idea to be more likely to publish on it (winner's curse/unilateralist's curse).
He does, but at the same time I think it matters that he uses that shorthand rather than some other expression (say CNGS), since it makes the EA connection more salient.
Some evidence that people tend to underuse social information, suggesting they're not by default epistemically modest:
...
Social information is immensely valuable. Yet we waste it. The information we get from observing other humans and from communicating with them is a cheap and reliable informational resource. It is considered the backbone of human cultural evolution. Theories and models focused on the evolution of social learning show the great adaptive benefits of evolving cognitive tools to process it. In spite of this, human adults in the experimental lit
The post seems to confuse the postdoctoral fellowship and the PhD fellowship (assuming the text on the grant interface is correct). It's the postdoc fellowship that has an $80,000 stipend, whereas the PhD fellowship stipend is $40,000.
I think "Changes in funding in the AI safety field" was published by the Centre for Effective Altruism.
You may want to have a look at the list of topics. Some of the terms above are listed there; e.g. Bayesian epistemology, counterfactual reasoning, and the unilateralist's curse.
Nice comment, you make several good points. Fwiw, I don't think our paper is in conflict with anything you say here.
On this theme: @Lucius Caviola and myself have written a paper on virtues for real-world utilitarians. See also Lucius's talk Against naive effective altruism.
I gave an argument for why I don't think the cry-wolf effects would be as large as one might think in World A. Afaict your comment doesn't engage with my argument.
I'm not sure what you're trying to say with your comment about World B. If we manage to permanently solve the risks relating to AI, then we've solved the problem. Whether some people will then be accused of having cried wolf seems far less important relative to that.
I also guess cry-wolf effects won't be as large as one might think - e.g. I think people will look more at how strong AI systems appear at a given point than at whether people have previously warned about AI risk.
Thanks, very interesting.
Regarding the political views, there are two graphs, showing different numbers. Does the first include people who didn't respond to the political views question, whereas the second excludes them? If so, it might be good to clarify that. You might also clarify that the first graph/set of numbers doesn't sum to 100%. Alternatively, you could just present the data that excludes non-responses, since that's in my view the more interesting data.
Yes, I think that, e.g., his being interviewed by 80K didn't make much of a difference. I think that EA's reputation would inevitably be tied to his to an extent, given how much money they donated and the context in which that occurred. People often overrate how much you can influence perceptions by framing things differently.
Yes. The Life You Can Save and Doing Good Better are pretty old. I think it's natural to write new content to clarify what EA is about.
"Co-writing with Julia would be better, but I suspect it wouldn't go well. While we do have compatible views, we have very different writing styles, and I understand taking on projects like this is often hard on relationships."
Perhaps there are ways of addressing this. For instance, you could write separate chapters or parts, or have some kind of dialogue between the two of you. The idea would be that each person owns part of the book. I'm unsure about the details, but maybe you could find a solution.
Yes this was my thought as well. I'd love a book from you Jeff but would really (!!) love one from both of you (+ mini-chapters from the kids?).
I don't know the details of your current work, but it seems worth writing one chapter as a trial run, and if you think it's going well (and maybe has good feedback), considering taking 6 months or so off.
Informed speculation might ... confuse people, since there's already plenty of work people call "AI forecasting" that looks similar to what I'm talking about.
Yes, I think using the term "forecasting" for what you do is established usage - it's effectively a technical term. Calling it "informed speculation about AI" in the title would not be helpful, in my view.
Great post, btw.
I find some of the comments here a bit implausible and unrealistic.
What people write online will often affect their reputation, positively or negatively. It may not necessarily mean that they, e.g., have no chance of getting an EA job, but there are many other reputational consequences.
I also don't think that updating one's views of someone based on what they write on the EA Forum is necessarily always wrong (even though there are no doubt many updates that are unfair or unwarranted).
Hm, Rohin has some caveats elaborating on his claim.
(Not literally so -- you can construct scenarios like "only investors expect AGI while others don't" where most people don't expect AGI but the market does expect AGI -- but these seem like edge cases that clearly don't apply to reality.)
Unless they were edited in after these comments were written (which doesn't seem to be the case afaict) it seems you should have taken those caveats into account instead of just critiquing the uncaveated claim.
Fwiw I think this is good advice.
If you want to make a point about science, or rationality, then my advice is to not choose a domain from contemporary politics if you can possibly avoid it. If your point is inherently about politics, then talk about Louis XVI during the French Revolution. Politics is an important domain to which we should individually apply our rationality—but it’s a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational.
This discussion seems a bit of a side-track from your main point. These are just examples to illustrate that intuition is often wrong - you're not focused on the minimum wage per se. It might have been better to choose less controversial examples to avoid these kinds of discussions.
Fwiw I think it would have been good to explain technical terminology to a greater extent - e.g. TAI (transformative artificial intelligence), LLM (large language model), transformers, etc.
It says in the introduction:
I expect some readers to think that the post sounds wild and crazy but that doesn’t mean its content couldn’t be true.
Thus, the article seems in part directed to readers who are not familiar with the latest discussions about AI - and those readers presumably would benefit from technical concepts being explained when introduced.
The first paragraph is this:
"If the rise of Sam Bankman-Fried was a modern tale about cryptocurrency tokens and “effective altruism,” his fall seems to be as old as original sin. “This is really old-fashioned embezzlement,” John Ray, the caretaker CEO of the failed crypto exchange FTX, told the House on Tuesday. “This is just taking money from customers and using it for your own purpose, not sophisticated at all.”"
I don't think that amounts to depicting EA as banditry. The subject is Sam Bankman-Fried, not the effective altruism movement.
In fact I would say that, despite the phrase 'effective altruism' appearing in the subtitle, the article is hardly about the movement at all.
Nathan, who created the thread, had some fairly general suggestions as well, though, so I think it's natural that people interpreted the question in this way (in spite of the title including the word "specific").
I think more general claims or questions can be useful as well. Someone might agree with the broader claim that "EA should democratise" but not with the more specific claim that "EA Funds should allow guest grantmakers with different perspectives to make 20% of their grants". It seems to me that more general and more specific claims can both be useful. Surveys and opinion polls often include general questions.
I'm also not sure I agree that "EA should" is that bad of a phrasing. It can help to be more specific in some ways, but it can also be useful to express more general preferences, especially as a preliminary step.
But for the purposes of this question, which is asking about "specific changes", I think the person who thinks "EA should democratise" needs to be clear about what is their preferred operationalization of the general claim.
No, I think yours and Ryan's interpretation is the correct one.
the new axis on the right lets you show how much you agree or disagree with the content of a comment
Linked from here.
Fwiw I'm not sure it badly damages the publishability. It might lead to more critical papers, though.
The NYT article isn't an opinion piece but a news article, and I guess it's a bit less clear how to classify those. Potentially one should distinguish between news articles and opinion pieces. But in any event, I think that if someone who didn't know about EA before reads the NYT article, they're more likely to form a negative than a positive opinion.
My impression is that the coverage of EA has been more negative than you suggest, even though I don't have hard data either. It could be useful to look into.
I agree. This has been discussed for quite some time (it was first raised three years ago) so it would be good to reach a decision.
EA is anarchy. No one is even a little in charge.
I don't think that's true. I've worked at CEA myself, and I know that CEA wields considerable influence.
I also think your way of discussing is inappropriate.
There's already a thread on this afaict.
https://forum.effectivealtruism.org/posts/ETwyzQFccHP54ndi4/sam-harris-and-william-macaskill-on-sbf-and-ea