I would prefer a more foolproof anti-spam system; e.g. preventing new accounts from writing Wiki entries, or enabling people to remove such spam. Right now there is a lot of spam on the page, which reduces readability.
Extraordinary growth. How does it look on other metrics; e.g. numbers of posts and comments? Also, can you tell us what the growth rate has been per year? It's a bit hard to eyeball the graph. Thanks.
Thanks! All of our metrics are pretty well correlated with each other; you can see more information here.
Our primary metric is hours of engagement, which I didn't use for this post because the data doesn't stretch back as far. But the growth rate there is:
More about how this is calculated and our historical data can be found here.
This kind of thing could be made more sophisticated by making fines proportional to the harm done.
I was thinking of this. Small funders could then potentially buy insurance from large funders in order to allow them to fund projects that they deem net positive even though there's a small risk of a fine that would be too costly for them.
I take it that Harsimony is proposing for the IC-seller to put up a flexible amount of collateral when they start their project, according to the possible harms.
There are two problems, though:
They refer to Drescher's post. He writes:
But we think that is unlikely to happen by default. There is a mismatch between the probability distribution of investor profits and that of impact. Impact can go vastly negative while investor profits are capped at only losing the investment. We therefore risk that our market exacerbates negative externalities.
Standard distribution mismatch. Standard investment vehicles work the way that if you invest into a project and it fails, you lose 1 x your investment; but if you invest into a project and it’s a great s...
If anything, I think that prohibiting posts like this from being published would have a more detrimental effect on community culture.
Of course, people are welcome to criticise Ben's post - which some in fact do. That's a very different category from prohibition.
Yeah, that sounds perfectly plausible to me.
“A bit confused” wasn’t meant to be any sort of rhetorical pretend understatement or something. I really just felt a slight surprise that caused me to check whether the forum rules contain something about ad hom, and found that they don’t. It may well be the right call on balance. I trust the forum team on that.
I agree, and I’m a bit confused that the top-level post does not violate forum rules in its current form.
That seems like a considerable overstatement to me. I think it would be bad if the forum rules said an article like this couldn't be posted.
Maybe, but I find it important to maintain the sort of culture where one can be confidently wrong about something without fear that it’ll cause people to interpret all future arguments only in light of that mistake, rather than taking them at face value and evaluating them on their own merits.
The sort of entrepreneurialism that I still feel is somewhat lacking in EA requires committing a lot of time to a speculative idea on the off-chance that it is correct. If it is not, the entrepreneur has wasted a lot of time and usually money. If additionally it has th...
This question is related to the question of how much effort effective altruism as a whole should put into movement growth relative to direct work. That question has been more discussed; e.g. see the Wiki entry and posts by Peter Hurford, Ben Todd, Owen Cotton-Barratt, and Nuño Sempere/Phil Trammell.
Yeah, I think it would be good to introduce premisses relating to when AI and bio capabilities that could cause an x-catastrophe ("crazy AI" and "crazy bio") will be developed. To elaborate on a (protected) tweet of Daniel's:
Suppose that you have equally long timelines for crazy AI and for crazy bio, but that you are uncertain about them, and that, in your view, they're uncorrelated.
Suppose also that we modify 2 into "a non-accidental AI x-catastrophe is at least as likely as a non-accidental bio x-catastrophe, conditional on there existing both c...
I like this approach, even though I'm unsure of what to conclude from it. In particular, I like the introduction of the accident vs non-accident distinction. It's hard to get an intuition of what the relative chances of a bio-x-catastrophe and an AI-x-catastrophe are. It's easier to have intuitions about the relative chances of:
That's what you're making use of in this post. Regardless of what one thinks of the conclusion, the methodology is interesting.
I agree that more data on this issue would be good (even though I don't share the nervousness, since my prior is more positive). There was a related discussion some years ago about "the meta-trap". (See also this post and this one.)
Thanks - fwiw I think this merits being posted as a normal article rather than as a shortform.
Thanks for doing this; I think this is useful. It feels vaguely akin to Marius's recent question of the optimal ratio of mentorship to direct work. More explicit estimates of these kinds of questions would be useful.
Blonergan's comment is good, though - and it shows the importance of trying to estimate the value of people's time in dollars.
I've written a blog post relating to this article, arguing that while levels of demandingness are conceptually separate from such trade-offs, what kinds of resources we most demand may empirically affect the overall level of demandingness.
Meta-comment - this is a great question. Probably there are many similar questions about difficult prioritisation decisions that EAs normally try to solve individually (and which many, myself included, won't be very deliberate and systematic about). More discussions and estimates about such decisions could be helpful.
Agree. I guess most EA orgs have thought about this - some superficially and some extensively. If someone feels like they have a good grasp on these and other management/prioritization questions, writing a "Basic EA org handbook" could be pretty high impact.
Something like "please don't repeat these rookie mistakes" would already save thousands of EA hours.
Thanks, very helpful! (For other readers: Gavin compiled all those songs on Spotify.)
But afaict you seem to say that the public needs to have the perception that there's a consensus. And I'm not sure that they would if experts only agreed on such conditionals.
Good post. I've especially noticed such a discrepancy when it comes to independence vs deference to the EA consensus. It seems to me that many explicitly argue that one should be independent-minded, but that deference to the EA consensus is rewarded more often than those explicit discussions about deference suggest. (However, personally I think deference to EA consensus views is in fact often warranted.) You're probably right that there is a general pattern between stated views and what is in fact rewarded across multiple issues.
- More work needs to be done on building consensus among consciousness researchers – not in finding the one right theory (plenty of people are working on that), but identifying what the community thinks it collectively knows.
I'm a bit unsure what you mean by that. If consciousness researchers continue to disagree on fundamental issues - as you argue they will in the preceding section - then it's hard to see that there will be a consensus in the standard sense of the word.
Similarly, you write:
They need to speak from a unified and consensus-driven position.
But...
Thanks a lot for providing this show with English subtitles!
Some of the songs were excluded for copyright reasons. The complete list of songs (afaik) that Bostrom played can be found here. The original version (with all the music) was ~85 minutes, I think.
Sommar i P1 is one of the most popular programs on Swedish Radio - it's been running since 1959. Max Tegmark has also had an episode.
One thing that's lacking a bit here is a concrete path to impact, and how this strategy would be integrated into current effective altruist outreach efforts. It's a very abstract suggestion.
Well, the danger is that it reduces the net good done, since it may turn some people off doing a good deed altogether.
Because of the large differences in effectiveness between different interventions, I'm not that worried about this issue.
There is also another related distinction between optimisation that assumes that current investments in some cause (e.g. the mitigation of some risk) will stay the same (or change in line with some simplistic extrapolation of current trends), and optimisation that assumes that other people will reoptimise their investments due to new evidence (e.g. warning shots). I wrote a post about that in the context of existential risk some years back. Jon Elster argues that we generally underrate the extent to which people reoptimise their actions in the light of a cha...
Fwiw, I think the usage from moral philosophy is by far the most common outside the EA community, and probably also inside the community. So if someone uses the word "consequentialism", I would normally assume (often unthinkingly) that they're using it in that sense. I think that means that those who use it in any other sense should, in many contexts, be particularly careful to make clear that they're not using the term in that way.
There is a standard distinction in ethics between act consequentialism as a criterion of rightness and as a decision procedure...
Thanks for your thoughtful response, James - I much appreciate it.
This is an interesting point and one I didn't consider. I find this slightly hard to believe, as I imagine EA as being quite esoteric (e.g. full of weird moral views), so I struggle to imagine many people would be clamouring to work for an organisation focused on wild animal welfare or AI safety when they could work on an issue they cared about more (e.g. climate change) for a similar salary.
My impression is that there are a fair number of people who apply to EA jobs who, while of course being ...
I didn't interpret Charles He as talking about EA events spending extra money on catering, but about individuals adopting vegan diets.
Thanks, James. Sorry, by using the term "low" I didn't mean to attribute to you the view that EA salaries should be very low in absolute terms. To be honest I didn't put much thought into the usage of this word at all. I guess I simply used it to express the negation of the "high" salaries that you mentioned in your title. This seems like a minor semantic issue.
The reasons for EA vegan diet are subtle, related to the cause area and the fact that vegan diets are costly.
Fwiw, another commentator, Onni Aarne, actually says the opposite - that a vegan diet is motivated in part because it's not costly (I'm not hereby saying they're right, or that you are).
Consuming factory farmed animal products also indicates moral unseriousness much more strongly because it is so extremely cheap to reduce animal suffering by making slightly different choices.
A potential tag: signaling and reputation (or some version thereof; it could include PR as well) - unless it already exists. Example articles may include:
Thanks, I think this was a thoughtful, sophisticated, and original essay. I think it's unfortunate that insightful posts like this get downvoted on the EA Forum. Its current level of karma (relative to other posts on the forum) doesn't reflect its quality (relative to those other posts) accurately. (Prior to me strongly up-voting it, it had 7 karma and 8 votes.)
It suggests the karma system might need to be reformed - e.g. that people should be able to express dis-/agreement with the claims, and evaluation of the reasoning, separately (cf. LessWrong's new karma system).
It's not that sophisticated.
The post uses rhetoric - sort of what I call "EA rhetoric" - where lengthy writing, language, internal devices, and internally consistent arguments gas up a point, while basic logical points are left out, and their omission is concealed by that same length.
This essay is centered on the truth that a vegan diet "isn’t really quite EA" (in the sense of the "GiveWell, dollars for QALY aesthetic").
The reasons for EA vegan diet are subtle, related to the cause area and the fact that vegan diets are costly. I'm happ...
Thanks, I think this post is thoughtfully written. I think that arguments for lower salaries are sometimes quite moralising/moral purity-based, as opposed to focused on impact. By contrast, you give clear and detached impact-based arguments.
I don't quite agree with the analysis, however.
You seem to equate "value-alignment" with "willingness to work for a lower salary". And you argue that it's important to have value-aligned staff, since they will make better decisions in a range of situations:
- A researcher will often decide which research questions to p...
I worry a bit that these discussions become anecdotal, and that the arguments rely on examples where it's not quite clear what the role of deference or its absence was. No doubt there are examples where people would have done better if they had deferred less. That need not change the overall picture that much.
Fwiw, I think one thing that's important to keep in mind is that deference doesn't necessarily entail working within a big project or org. EAs have to an extent encouraged others to start new independent projects, and deference to such advice thus means starting an independent project rather than working within a big project or org.
Relatedly, I'm a bit worried that EA involvement in politics may lead to an increased tendency for reputational concerns to swamp object-level arguments in many EA discussions; and for an increasing number of claims and arguments to become taboo. I think there's already such a tendency, and involvement in politics could make it worse.
What's so weird to me about this is that EA has the clout it does today because of these frank discussions. Why shouldn't we keep doing that?
I'm in favor of not sharing infohazards, but that's about the extent of reputation management I endorse - and I think that leads to a good reputation for EA as honest!
My sense is that the difference in impact between higher- and lower-impact jobs is often very substantial, and that if a higher salary can make people more likely to take the higher-impact jobs, then that extra expenditure is typically worth it. (Though there is the issue of whether you think there is a correlation between impact and salary in effective altruism - my guess would be that there is.) In any event, I think that jobs at EA organisations aren't overpaid.
My view is that when you are considering whether to take some action and are weighing up its effects, you shouldn't in general put special weight on your own beliefs about those effects (there are some complicating factors here, but that's a decent first approximation). Instead you should put the same weight on yours and others' beliefs. I think most people don't do that, but put much too much weight on their own beliefs relative to others'. Effective altruists have shifted away from that human default, but in my view it's unlikely - in the light of the ge...
Hey Stefan,
Thanks for the comment, I think this describes a pretty common view in EA that I want to push back against.
Let's start with the question of how much you have found practical criticism of EA valuable. When I see posts like this or this, I see them as significantly higher value than those individuals deferring to large EA orgs. Moving to a more practical example: older/more experienced organizations/people actually recommended against many organizations (CE being one of them and FTX being another). These organizations’ actions and projects seem pr...
I think there are several things wrong with the Equal Weight View, but this is the easiest way to see it:
Let's say I have a credence p in some proposition, which I updated from a prior of p0. Now I meet someone who A) I trust to be rational as much as myself, B) I know started with the same prior as me, C) I know cannot have seen the evidence that I have seen, and D) I know has updated on evidence independent of the evidence I have seen.

They say their credence is q.

Then I can infer that they updated from p0 to q by multiplying ...
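A minimal worked sketch of the aggregation this argument points to (my own illustration, with hypothetical numbers, since the original figures were lost): under assumptions A-D, both agents' odds updates apply to the shared prior, so the combined credence ends up more extreme than either individual's, rather than the average that the Equal Weight View suggests.

```python
# A minimal sketch with hypothetical numbers (the original comment's figures were lost):
# two agents share a prior and update on independent evidence, so combining both updates
# means multiplying their likelihood ratios onto the shared prior, not averaging credences.

def odds(p):
    return p / (1 - p)

def prob(o):
    return o / (1 + o)

prior = 0.5            # assumed shared prior (condition B)
my_credence = 0.9      # hypothetical posterior after my evidence
their_credence = 0.9   # hypothetical posterior they report

# Each update corresponds to a likelihood ratio: posterior odds / prior odds.
my_update = odds(my_credence) / odds(prior)
their_update = odds(their_credence) / odds(prior)

# With a shared prior and independent evidence (conditions C and D),
# both likelihood ratios apply to the same prior odds.
combined = prob(odds(prior) * my_update * their_update)

print(combined)                            # ~0.988: more extreme than either 0.9
print((my_credence + their_credence) / 2)  # 0.9: what equal-weight averaging gives
```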
This post seems to amount to replying "No" to Vaidehi's question since it is very long but does not include a specific example.
> I won't be able to give you examples where I demonstrate that there was too little deference
I don't think that Vaidehi is asking you to demonstrate anything in particular about any examples given. It's just useful to give examples that illustrate your own subjective experience on the topic. It would have conveyed more information and perspective than the above post.
I also think that EA consensus views are often unusually well-grounded, meaning there are unusually strong reasons to defer to them. (But obviously this may reflect my own biases.)
Fwiw I think many effective altruists defer too little rather than too much.
Could you give a few specific examples of times you have seen EAs deferring too little?
"Neutrality" is this disregard for irrelevant considerations.
...
Two subcases of neutrality are...cause neutrality [and] means neutrality.
There are also other considerations which one should, or arguably should, be neutral about. One example is what resources to use - e.g. money or time. Another is whether to pursue high or low risk interventions: many effective altruists believe that you should be risk neutral and simply maximise expected value.
Still others may include neutrality with respect to how diversified your altruistic investments should be (m...
You can usually relatively straightforwardly divide your monetary resources into a part that you spend on donations and a part that you spend for personal purposes.
By contrast, you don't usually spend some of your time at work for self-interested purposes and some for altruistic purposes. (That is in principle possible, but uncommon among effective altruists.) Instead you only have one job (which may serve your self-interested and altruistic motives to varying degrees). Therefore, I think that analogies with donations are often a stretch and sometimes misleading (depending on how they're used).
I guess that if one wants to red-team effective altruist cost-effectiveness analyses that inform, e.g., giving decisions, non-public analyses may be relevant.
I would guess that other orgs besides GiveWell also have cost-effectiveness models/analyses.
Fwiw, I think the logic is very different when it comes to direct work, and that phrasing it in terms of what fraction of one's time one donates isn't the most natural way of thinking about it.
These are interesting critiques and I look forward to reading the whole thing, but I worry that the nicer tone of this one is going to lead people to give it more credit than critiques that were at least as substantially right, but much more harshly phrased.
I agree there's such a risk. But I also think that the tone actually matters a lot.
Thanks for posting this. I also appreciated this thoughtful essay.
There was also this passage (not in your excerpts):
An alternate solution, and the one that has, I believe, been adopted by many EAs, has been a form of weak-EA. Strong-EA takes "do the most good you can do" extremely seriously as a central aspect of a life philosophy. Weak-EA uses that principle more as guidance. Donate 1% of your income. Donate 10% of your income, provided that doesn't cause you hardship. Be thoughtful about the impact your work has on the world, and consult many different ...
Right. Donating 10-50% of time or resources as effectively as possible is still very distinctive, and not much less effective than donating 100%.
Thank you, this is helpful. I do agree with you that there is a difference between supporting GiveWell-recommended charities and supporting American beneficiaries. More generally, my argument wasn't directly about what donations Sam Bankman-Fried or other effective altruists should make, but rather about what arguments are brought to bear on that issue. Insofar as an analysis of direct impact suggests that certain charities should be funded, I obviously have no objection to that. My comment rather concerned the fact that the OP, in my view, put too much em...
First, it's odd to me to categorize political advertising as "direct impact" but short-term spending on poverty or disease as "reputational."
The OP focused on PR/reputation, which is what I reacted to.
If you accept that reputation matters, why is optimizing for an impression of greater integrity better than optimizing for an impression of greater altruism? In both cases, we're just trying to anticipate and strategically preempt a misconception people may have about our true motivations.
I think there's a difference between creating a reputation for integrity...
Those partnerships between FTX and sports teams and individuals seem wholly different. They are not purporting to directly improve the world, the way donations to an altruistic cause do. (Rather, their purpose is, as far as I understand, to increase FTX's profits - which in turn indirectly can increase their donations.) As such, there is no risk of a conflation between PR-related and direct impact-related reasons for those expenditures: it's clear that they're about PR alone.
FTX is a for-profit enterprise, and it's natural that it engages in marketing. My comment rather concerned whether one should donate to particular causes because it looks good, as opposed to because it has a direct impact.
My sense is that this post - as well as many other recent posts on the forum - focuses too much on PR/reputation relative to direct impact. Also, I think that insofar as we try to build a reputation, part of that reputation should be that we do things because we think they're right for direct, non-reputational reasons. I think that gives a (correct) impression of greater integrity.
I disagree with this for two reasons. First, it's odd to me to categorize political advertising as "direct impact" but short-term spending on poverty or disease as "reputational." There is overlap in both cases; but if we must categorize I think it's closer to the opposite. Short-term, RCT-backed spending is the most direct impact EA knows how to confidently make. And is not the entire project of engaging with electoral politics one of managing reputations?
To fund a political campaign is to attempt to popularize a candidate and their ideas; that is, ...
I think there are some posts that should be made invisible; and that it's good if strong downvotes make them so. Thus, I would like empirical evidence that such a reform would do more good than harm. My hunch is that it wouldn't.
Interesting point.
I guess it could be useful to be able to see how many have voted as well, since 75% agreement with four votes is quite different from 75% agreement with forty votes.
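As a rough illustration of that point (my own sketch, not from the original comment): if you treat each vote as a draw from an underlying agreement rate and put a uniform prior on that rate, the same 75% leaves far more uncertainty with four votes than with forty.

```python
# A rough illustrative sketch (my own, not from the comment): with a uniform
# Beta(1, 1) prior on the underlying agreement rate, 75% observed agreement
# leaves very different uncertainty depending on how many people voted.
from scipy.stats import beta

for agree, total in [(3, 4), (30, 40)]:
    posterior = beta(1 + agree, 1 + (total - agree))
    low, high = posterior.interval(0.9)  # central 90% credible interval
    print(f"{agree}/{total} agree: 90% interval ({low:.2f}, {high:.2f})")
# The interval is much wider for 3/4 than for 30/40.
```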