All of Stefan_Schubert's Comments + Replies

2-factor voting (karma, agreement) for EA forum?

Interesting point. 

I guess it could be useful to be able to see how many have voted as well, since 75% agreement with four votes is quite different from 75% agreement with forty votes.

Owen Cotton-Barratt · 4 karma · 1d
Yeah, to proxy this maybe I'd imagine something like adding five virtual upvotes and five virtual downvotes to each comment to start it near 50%, so it's a strong signal if you see something with an extreme value. Maybe that's a bad idea; it makes it harder (you'd need to hover) to notice when something's controversial.
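A minimal sketch of the smoothing Owen describes above - the function name and the choice of five virtual votes per side are purely illustrative, not an actual forum feature:

```python
def displayed_agreement(agree_votes: int, disagree_votes: int, virtual: int = 5) -> float:
    """Agreement percentage smoothed by adding `virtual` phantom votes on each side,
    so comments with few votes start near 50% rather than at an extreme value."""
    total = agree_votes + disagree_votes + 2 * virtual
    return 100 * (agree_votes + virtual) / total

# 3 agree / 1 disagree displays as ~57% rather than 75%,
# while 30 agree / 10 disagree displays as ~70%: extreme values imply many votes.
print(displayed_agreement(3, 1))    # ≈ 57.1
print(displayed_agreement(30, 10))  # = 70.0
```

With this kind of smoothing, an extreme displayed value is evidence of both lopsided voting and a reasonably large number of votes, which is the trade-off Owen notes against spotting controversial comments at a glance.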
EA Forum feature suggestion thread

I would prefer a more robust anti-spam system, e.g. preventing new accounts from writing Wiki entries, or enabling people to remove such spam. Right now there is a lot of spam on the page, which reduces readability.

Product Managers: the EA Forum Needs You

Extraordinary growth. How does it look on other metrics; e.g. numbers of posts and comments? Also, can you tell us what the growth rate has been per year? It's a bit hard to eyeball the graph. Thanks.

Thanks! All of our metrics are pretty well correlated with each other; you can see more information here.

Our primary metric is hours of engagement, which I didn't use for this post because the data doesn't stretch back as far. But the growth rate there is:

  • 2020 (estimated): 90%
  • 2021: 100.2%
  • 2022 (so far): 140%
  • Implied 3-year growth: 912%

More about how this is calculated and our historical data can be found here.
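As a quick sanity check of the arithmetic, the implied three-year figure appears to be the compound growth factor expressed as a percentage (a sketch; the yearly rates are taken from the list above, and the interpretation of the 912% figure is an assumption):

```python
yearly_growth = [0.90, 1.002, 1.40]  # 2020 (estimated), 2021, 2022 (so far)

factor = 1.0
for g in yearly_growth:
    factor *= 1 + g

print(round(factor, 2))     # 9.13 -> roughly a 9.1x increase over three years
print(round(factor * 100))  # 913  -> close to the stated 912% after rounding
```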

Impact markets may incentivize predictably net-negative projects

This kind of thing could be made more sophisticated by making fines proportional to the harm done

I was thinking of this. Small funders could then potentially buy insurance from large funders in order to allow them to fund projects that they deem net positive even though there's a small risk of a fine that would be too costly for them.

I take it that Harsimony is proposing that the IC-seller put up a flexible amount of collateral when they start their project, according to the possible harms.

There are two problems, though:

  • This requires centralised prospective estimation of harms for every project. (A big part of the point of impact certificates is to evaluate things retroactively, and to outsource prospective evaluations to the market, thereby incentivising accuracy in the latter.)
  • This penalises IC-sellers based on how big their harms initially seem, rather than how big they eventually
... (read more)
Impact markets may incentivize predictably net-negative projects

They refer to Drescher's post. He writes:

But we think that is unlikely to happen by default. There is a mismatch between the probability distribution of investor profits and that of impact. Impact can go vastly negative while investor profits are capped at only losing the investment. We therefore risk that our market exacerbates negative externalities.

Standard distribution mismatch. Standard investment vehicles work such that if you invest in a project and it fails, you lose 1x your investment; but if you invest in a project and it’s a great s

... (read more)
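A toy numerical version of the mismatch Drescher describes - all figures below are made up purely for illustration: because the investor's downside is capped at the stake while the project's impact is not, a project can look attractive in expectation to investors while being strongly net-negative in expected impact.

```python
# Hypothetical project funded via an impact market: stake of 1 unit.
p_success = 0.1
investor_payoff = {"success": 10, "failure": -1}  # loss capped at losing the investment
impact = {"success": 100, "failure": -2000}       # impact can go vastly negative

expected_profit = p_success * investor_payoff["success"] + (1 - p_success) * investor_payoff["failure"]
expected_impact = p_success * impact["success"] + (1 - p_success) * impact["failure"]

print(round(expected_profit, 2))  # 0.1     -> positive: attractive to a profit-motivated investor
print(round(expected_impact, 1))  # -1790.0 -> strongly negative in expectation
```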
On Deference and Yudkowsky's AI Risk Estimates

If anything, I think that prohibiting posts like this from being published would have a more detrimental effect on community culture.

Of course, people are welcome to criticise Ben's post - which some in fact do. That's a very different category from prohibition.

Yeah, that sounds perfectly plausible to me.

“A bit confused” wasn’t meant to be any sort of rhetorical pretend understatement or something. I really just felt a slight surprise that caused me to check whether the forum rules contain something about ad hom, and found that they don’t. It may well be the right call on balance. I trust the forum team on that.

On Deference and Yudkowsky's AI Risk Estimates

I agree, and I’m a bit confused that the top-level post does not violate forum rules in its current form. 

That seems like a considerable overstatement to me. I think it would be bad if the forum rules said an article like this couldn't be posted.

Maybe, but I find it important to maintain the sort of culture where one can be confidently wrong about something without fear that it’ll cause people to interpret all future arguments only in light of that mistake, instead of taking them at face value and evaluating them on their own merits.

The sort of entrepreneurialness that I still feel is somewhat lacking in EA requires committing a lot of time to a speculative idea on the off-chance that it is correct. If it is not, the entrepreneur has wasted a lot of time and usually money. If additionally it has th... (read more)

What is the right ratio between mentorship and direct work for senior EAs?

This question is related to the question of how much effort effective altruism as a whole should put into movement growth relative to direct work. That question has been discussed more; see, e.g., the Wiki entry and posts by Peter Hurford, Ben Todd, Owen Cotton-Barratt, and Nuño Sempere/Phil Trammell.

RyanCarey's Shortform

Yeah, I think it would be good to introduce premisses about when AI and bio capabilities that could cause an x-catastrophe ("crazy AI" and "crazy bio") will be developed. To elaborate on a (protected) tweet of Daniel's:

Suppose that you have equally long timelines for crazy AI and crazy bio, but that you are uncertain about them, and that in your view they're uncorrelated.

Suppose also that we modify 2 into "a non-accidental AI x-catastrophe is at least as likely as a non-accidental bio x-catastrophe, conditional on there existing both c... (read more)

RyanCarey's Shortform

I like this approach, even though I'm unsure of what to conclude from it. In particular, I like the introduction of the accident vs non-accident distinction. It's hard to get an intuition of what the relative chances of a bio-x-catastrophe and an AI-x-catastrophe are. It's easier to have intuitions about the relative chances of:

  1. Accidental vs non-accidental bio-x-catastrophes
  2. Non-accidental AI-x-catastrophes vs non-accidental bio-x-catastrophes
  3. Accidental vs non-accidental AI-x-catastrophes

That's what you're making use of in this post. Regardless of what one thinks of the conclusion, the methodology is interesting.

Are too many young, highly-engaged longtermist EAs doing movement-building?

I agree that more data on this issue would be good (even though I don't share the nervousness, since my prior is more positive). There was a related discussion some years ago about "the meta-trap". (See also this post and this one.)

Anonymous_EA · 5 karma · 11d
Thanks for pointing to these! I had forgotten about them or hadn't seen them in the first place — all are very relevant.
Charlotte's Shortform

Thanks - fwiw I think this merits being posted as a normal article rather than on the shortform.

Charlotte · 3 karma · 10d
Thanks. Done here [https://forum.effectivealtruism.org/posts/QZujaLgPateuiHXDT/against-difference-making-risk-aversion] :)
What is the overhead of grantmaking?

Thanks for doing this; I think it's useful. It feels vaguely akin to Marius's recent question about the optimal ratio of mentorship to direct work. More explicit estimates for these kinds of questions would be valuable.

Blonergan's comment is good, though - and it shows the importance of trying to estimate the value of people's time in dollars.

Demandingness and Time/Money Tradeoffs are Orthogonal

I've written a blog post relating to this article, arguing that while levels of demandingness are conceptually separate from such trade-offs, what kinds of resources we most demand may empirically affect the overall level of demandingness.

What is the right ratio between mentorship and direct work for senior EAs?

Meta-comment - this is a great question. Probably there are many similar questions about difficult prioritisation decisions that EAs normally try to solve individually (and which many, myself included, won't be very deliberate and systematic about). More discussions and estimates about such decisions could be helpful.

Agree. I guess most EA orgs have thought about this - some superficially and some extensively. If someone feels like they have a good grasp on these and other management/prioritization questions, writing a "Basic EA org handbook" could be pretty high impact.
 

Something like "please don't repeat these rookie mistakes" would already save thousands of EA hours. 

Nick Bostrom - Sommar i P1 Radio Show

Thanks, very helpful! (For other readers: Gavin compiled all those songs on Spotify.)

The importance of getting digital consciousness right

But afaict you seem to say that the public needs to have the perception that there's a consensus. And I'm not sure that they would if experts only agreed on such conditionals.

Derek Shiller · 2 karma · 14d
You’re probably right. I’m not too optimistic that my suggestion would make a big difference. But it might make some.

If a company were to announce tomorrow that it had built a conscious AI and would soon have it available for sale, I expect that it would prompt a bunch of experts to express their own opinions on Twitter and journalists to contact a somewhat randomly chosen group of outspoken academics to get their perspective. I don’t think that there is any mechanism for people to get a sense of what experts really think, at least in the short run. That’s dangerous because it means that what they might hear would be somewhat arbitrary, possibly reflecting the opinion of overzealous or overcautious academics, and because it might lack authority, being the opinions of only a handful of people.

In my ideal scenario, there would be some neutral body, perhaps one that did regular expert surveys, that journalists would think to talk to before publishing their pieces and that could give the sort of judgement I gestured to above. That judgement might show that most views on consciousness agree that the system is or isn’t conscious, or at least that there is significant room for doubt. People might still make up their minds, but they might entertain doubts longer, and such a body might provide incentives for companies to try harder to build systems that are more likely to be conscious.
Leftism virtue cafe's Shortform

Good post. I've especially noticed such a discrepancy when it comes to independence vs deference to the EA consensus. It seems to me that many explicitly argue that one should be independent-minded, but that deference to the EA consensus is rewarded more often than those explicit discussions about deference suggest. (However, personally I think deference to EA consensus views is in fact often warranted.) You're probably right that there is a general discrepancy between stated views and what is in fact rewarded across multiple issues.

The importance of getting digital consciousness right
  • More work needs to be done on building consensus among consciousness researchers – not in finding the one right theory (plenty of people are working on that), but identifying what the community thinks it collectively knows.

I'm a bit unsure what you mean by that. If consciousness researchers continue to disagree on fundamental issues - as you argue they will in the preceding section - then it's hard to see that there will be a consensus in the standard sense of the word.

Similarly, you write:

They need to speak from a unified and consensus-driven position.

But... (read more)

Derek Shiller · 2 karma · 14d
I was imagining that the consensus would concern conditionals. I think it is feasible to establish what sets of assumptions people might naturally make, and what views those assumptions would support. This would allow a degree of objectivity without settling the right theory. It might also involve assigning probabilities, or ranges of probabilities, to views themselves, or to what it is rational for other researchers to think about different views. So we might get something like the following (when researchers evaluate gpt6). There are three major groups of assumptions, a, b, and c:

  • Experts agree that gpt6 has a 0% probability of being conscious if a is correct.
  • Experts agree that the rational probability to assign to gpt6 being conscious if b is correct falls between 2 and 20%.
  • Experts agree that the rational probability to assign to gpt6 being conscious if c is correct falls between 30 and 80%.
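A sketch of how such conditional judgements could feed into an overall estimate: weight each group's conditional probability by one's credence in that group of assumptions (law of total probability). The credences and the use of range midpoints below are hypothetical choices for illustration, not anything proposed above.

```python
# Hypothetical credences in the assumption groups a, b, and c (must sum to 1).
credence = {"a": 0.4, "b": 0.4, "c": 0.2}

# Midpoints of the illustrative expert ranges for P(gpt6 is conscious | group is correct).
p_given = {"a": 0.0, "b": (0.02 + 0.20) / 2, "c": (0.30 + 0.80) / 2}

# Law of total probability: sum over groups of credence * conditional probability.
p_conscious = sum(credence[g] * p_given[g] for g in credence)
print(round(p_conscious, 3))  # 0.4*0.0 + 0.4*0.11 + 0.2*0.55 = 0.154
```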
Nick Bostrom - Sommar i P1 Radio Show

Thanks a lot for providing this show with English subtitles!

Some of the songs were excluded for copyright reasons. The complete list of songs (afaik) that Bostrom played can be found here. The original version (with all the music) was ~85 minutes, I think.

Sommar i P1 is one of the most popular programs on Swedish Radio - it's been running since 1959. Max Tegmark has also had an episode.

Gavin · 6 karma · 14d
https://open.spotify.com/playlist/3OY4Q9y8AOjUOyIsC9NKR4?si=c10b8777ea6a4e94
Call out pathological altruism?

One thing that's lacking a bit here is a concrete path to impact, and how this strategy would be integrated into current effective altruist outreach efforts. It's a very abstract suggestion.

Well, the danger is reducing the net good done, for it may turn some off doing a good deed altogether.

Because of the large differences in effectiveness between different interventions, I'm not that worried about this issue. 

General equilibrium thinking

There is also a related distinction between optimisation that assumes that current investments in some cause (e.g. the mitigation of some risk) will stay the same (or change in line with some simplistic extrapolation of current trends), and optimisation that assumes that other people will reoptimise their investments due to new evidence (e.g. warning shots). I wrote a post about that in the context of existential risk some years back. Jon Elster argues that we generally underrate the extent to which people reoptimise their actions in the light of a cha... (read more)

‘Consequentialism’ is being used to mean several different things

Fwiw, I think the usage from moral philosophy is by far the most common outside the EA community, and probably also inside the community. So if someone uses the word "consequentialism", I would normally assume (often unthinkingly) that they're using it in that sense. I think that means that those who use it in any other sense should, in many contexts, be particularly careful to make clear that they're not using the term in that way.

There is a standard distinction in ethics between act consequentialism as a criterion of rightness and as a decision procedure... (read more)

Theo Hawking · 1 karma · 16d
I certainly agree that outside EA, consequentialism just means the moral philosophy. But inside I feel like I keep seeing people use it to mean this process of decision-making, enough that I want to plant this flag. I agree that the criterion of rightness / decision procedure distinction roughly maps to what I'm pointing at, but I think it's important to note that Act Consequentialism doesn't actually give a full decision procedure. It doesn't come with free answers to things like 'how long should you spend on making a decision' or 'what kinds of decisions should you be doing this for', nor answers to questions like 'how many layers of meta should you go up'. And I am concerned that in the absence of clear answers to these questions, people will often naively opt for bad answers.
The dangers of high salaries within EA organisations

Thanks for your thoughtful response, James - I much appreciate it.

This is an interesting point and one I didn't consider. I find this slightly hard to believe as I imagine EA as being quite esoteric (e.g. full of weird moral views), so I struggle to imagine many people would be clamouring to work for an organisation focused on wild animal welfare or AI safety when they could work for an issue they cared about more (e.g. climate change) for a similar salary.

My impression is that there are a fair number of people who apply to EA jobs who, while of course being ... (read more)

Leftism virtue cafe's Shortform

Interesting - I'd like to read that. I think thinking about what culture (or virtues) EA should optimally have is a bit of an underrated angle. Cf 1, 2.

The Role of Individual Consumption Decisions in Animal Welfare and Climate are Analogous

I didn't interpret Charles He as talking about EA events spending extra money on catering, but about individuals adopting vegan diets.

Robi Rahman · 7 karma · 17d
Sorry, yes, didn't mean to imply Charles He was only talking about catering. I was just using that as an example of EAs following vegan diets in a way that costs more money, as opposed to costlessly. This post by Jeff Kaufman is relevant, https://www.jefftk.com/p/two-kinds-of-vegan [https://www.jefftk.com/p/two-kinds-of-vegan] :
Charles He · 9 karma · 17d
Yes, that is what I meant. RE: adopting a vegan diet being low cost (vegan being key to ensuring the ending of the worst practices) - this is probably objectively wrong.

  • The evidence of dietary change efforts failing seems large (decades of conventional efforts, resulting in a flatline in total diet), and is large objective evidence against conventional animal welfare work (I’m unable to be more specific or name the specific practices and organizations involved, for net EV, “moral maze” sort of reasons).
  • In the otherwise unrelated EA forum discussions about “vultures”/defecting because of money, a common idea/narrative has been that “being vegan” is a powerful signal for altruism. This can’t be true if it’s easy.

Note that this belief about dietary change has not just been wrong but very costly to the actual cause. This concrete and specific realization has been a large update for me against all leftist causes (as opposed to for ideological or political reasons).

At the same time, very small changes in diet can reduce suffering enormously. This truth is probably a key part of a “ruthless” critique against EA vegan diets being effective (critiques which I do not fully agree with).
The dangers of high salaries within EA organisations

Thanks, James. Sorry, by using the term "low" I didn't mean to attribute to you the view that EA salaries should be very low in absolute terms. To be honest I didn't put much thought into the usage of this word at all. I guess I simply used it to express the negation of the "high" salaries that you mentioned in your title. This seems like a minor semantic issue.

The Role of Individual Consumption Decisions in Animal Welfare and Climate are Analogous

The reasons for EA vegan diet are subtle, related to the cause area and the fact that vegan diets are costly

Fwiw, another commentator, Onni Aarne, actually says the opposite - that a vegan diet is motivated in part because it's not costly (I'm not hereby saying they're right, or that you are).

Consuming factory farmed animal products also indicates moral unseriousness much more strongly because it is so extremely cheap to reduce animal suffering by making slightly different choices.

Charles He · 6 karma · 17d
This comment was at -3 before I strong upvoted. I'm not sure why that is so, but that is bad and may reflect some deterioration in norms (that maybe I'm contributing to?). I think it's good to argue ruthlessly, but I avoid downvoting things I disagree with a lot of the time.
Robi Rahman · 8 karma · 17d
The specific points being made in those quotations aren't mutually exclusive. Onni Aarne is saying you can make very inexpensive adjustments to your diet that greatly reduce animal suffering, and Charles He is saying that EA events spend extra money on catering to satisfy the constraint of making it vegan. I think both claims are correct.
The Role of Individual Consumption Decisions in Animal Welfare and Climate are Analogous

Thanks, I think this was a thoughtful, sophisticated, and original essay. I think it's unfortunate that insightful posts like this get downvoted on the EA Forum. Its current level of karma (relative to other posts on the forum) doesn't reflect its quality (relative to those other posts) accurately. (Prior to me strongly upvoting it, it had 7 karma and 8 votes.)

It suggests the karma system might need to be reformed - e.g. that people should be able to express agreement or disagreement with the claims, and their evaluation of the reasoning, separately (cf. LessWrong's new karma system).

It's not that sophisticated. 

The post uses rhetoric - sort of what I call "EA rhetoric" - where lengthy writing and language, internal devices, and internally consistent arguments gas up a point, while basic, logical points are left out and their omission is concealed by the same length.

 

This essay is centered on the truth that a vegan diet "isn’t really quite EA" (in the sense of the "GiveWell, dollars for QALY aesthetic"). 

The reasons for EA vegan diet are subtle, related to the cause area and the fact that vegan diets are costly. I'm happ... (read more)

The dangers of high salaries within EA organisations

Thanks, I think this post is thoughtfully written. I think that arguments for lower salaries are sometimes quite moralising or moral purity-based, as opposed to focused on impact. By contrast, you give clear and detached impact-based arguments.

I don't quite agree with the analysis, however. 

You seem to equate "value-alignment" with "willingness to work for a lower salary". And you argue that it's important to have value-aligned staff, since they will make better decisions in a range of situations:

  • A researcher will often decide which research questions to p
... (read more)
David_Moss · 9 karma · 16d
I don’t think we can infer too much from this result about this question. The first thing to note, as observed here [https://forum.effectivealtruism.org/posts/7f3sq7ZHcRsaBBeMD/?commentId=gm4c8RXjXfJTBZYJq], is that taken at face value, a correlation of around 0.243 is decently large, both relative to other effect sizes in personality psychology and in absolute terms.

However, more broadly, measures that have been constructed in this way probably shouldn’t be used to make claims about the relationships between psychological constructs (either which constructs are associated with EA or how constructs are related to each other). This is because the ‘expansive altruism’ and ‘effectiveness-focus’ measures were constructed, in part, by selecting items which most strongly predict your EA outcome measures (interest in EA etc.). Items selected to optimise prediction are unlikely to provide unbiased measurement (for a demonstration, see Smits et al (2018) [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5997739/pdf/11136_2017_Article_1720.pdf]). The items can predict well both because they are highly valid and because they introduce endogeneity, and there is no way to tell the difference just by observing predictive power.

This limits the extent to which we can conclude that psychological constructs (expansive altruism and effectiveness-focus) are associated with attitudes towards effective altruism, rather than just that the measures (“expansive altruism” and “effectiveness-focus”) are associated with effective altruism, because the items are selected to predict those measures. So, in this case, it’s hard to tell whether the correlation between ‘expansive altruism’ and ‘effectiveness focus’ is inflated (e.g. because both measures share a correlation with effective altruism or some other construct) or attenuated (e.g. because the measures less reliably measure the constructs of interest).

Interestingly, Lucius’ measure of ‘impartial beneficence’ from the OUS (which see
James Ozden · 4 karma · 17d
Hey Stefan, thanks again for this response and I will respond with the attention it deserves! I definitely agree, and I talk about this in my piece as well, e.g. in the introduction I say "There are clear benefits e.g. attracting high-calibre individuals that would otherwise be pursuing less altruistic jobs, which is obviously great." So I don't think we're in disagreement about this, but rather I'm questioning where the line should be drawn, as there must be some considerations to stop us raising salaries indefinitely. Furthermore, in my diagrams you can see that there are similarly altruistic people that would only be willing to work at higher salaries (the shaded area below).

This is an interesting point and one I didn't consider. I find this slightly hard to believe as I imagine EA as being quite esoteric (e.g. full of weird moral views), so I struggle to imagine many people would be clamouring to work for an organisation focused on wild animal welfare or AI safety when they could work for an issue they cared about more (e.g. climate change) for a similar salary.

Again, I would agree that it's not the most effective way of ensuring value alignment within organisations, but I would say it's an important factor.

This was actually really useful for me and I would definitely say I was generally conflating "willingness to work for a lower salary" with "value-alignment". I've probably updated more towards your view in that "effectiveness-focus" is a crucial component of EA that wouldn't be selected for simply by being willing to take a lower salary, which might more accurately map to "expansive altruism".

I agree this is probably the best outcome and certainly what I would like to happen, but I also think it's challenging. Posts such as Vultures Are Circling [https://forum.effectivealtruism.org/posts/W8ii8DyTa5jn8By7H/the-vultures-are-circling] highlight people trying to "game" the system in order to access EA funding, and I think this problem will only grow. Therefo
James Ozden · 4 karma · 17d
Thanks for the thoughtful engagement Stefan and kind words! I'm going to respond to the rest of your points in full later, but there's just one quick clarification I wanted to make which might mean we're not so dissimilar in our viewpoints.

Just want to be very clear that low salaries are not what I think EA orgs should pay! I tried quite clearly to use the term 'moderate' rather than low because I don't think paying low salaries is good (for reasons you and I both mentioned). I could have been more explicit, but I'm talking about concerns with more orgs paying $150,000+ (or 120%+ of market rate, as a semi-random number) salaries on a regular basis, not paying people $80,000 or so. Obviously exceptions apply, like I mentioned to Khorton below [https://forum.effectivealtruism.org/posts/WXD3bRDBkcBhJ5Wcr/the-dangers-of-high-salaries-within-ea-organisations?commentId=ruuDbcqc4MFSvomgH], but it should be at least the point where everyone's (and their families'/dependents') material needs can be met.

Do you have any thoughts on this? Because surely at some point salaries become excessive, have bad optics or counterfactually poor marginal returns, but the challenge is identifying where this is. (I'll update in my main body to be clearer as well)
Deference Culture in EA

I worry a bit that these discussions become somewhat anecdotal, and that the arguments rely on examples where it's not quite clear what the role of deference or its absence was. No doubt there are examples where people would have done better if they had deferred less. That need not change the overall picture that much.

Fwiw, I think one thing that's important to keep in mind is that deference doesn't necessarily entail working within a big project or org. EAs have to an extent encouraged others to start new independent projects, and deference to such advice thus means starting an independent project rather than working within a big project or org.

Holly_Elmore's Shortform

Relatedly, I'm a bit worried that EA involvement in politics may lead to an increased tendency for reputational concerns to swamp object-level arguments in many EA discussions; and for an increasing number of claims and arguments to become taboo. I think there's already such a tendency, and involvement in politics could make it worse.

What's so weird to me about this is that EA has the clout it does today because of these frank discussions. Why shouldn't we keep doing that?

I'm in favor of not sharing infohazards, but that's about the extent of reputation management I endorse - and I think that leads to a good reputation for EA as honest!

Jobs at EA-organizations are overpaid, here is why

My sense is that the difference in impact between higher- and lower-impact jobs is often very substantial, and that if a higher salary can make people more likely to take the higher-impact jobs, then that extra expenditure is typically worth it. (Though there is the issue of whether you think there is a correlation between impact and salary in effective altruism - my guess would be that there is.) In any event, I think that jobs at EA organisations aren't overpaid.

Deference Culture in EA

My view is that when you are considering whether to take some action and are weighing up its effects, you shouldn't in general put special weight on your own beliefs about those effects (there are some complicating factors here, but that's a decent first approximation). Instead you should put the same weight on your own and others' beliefs. I think most people don't do that, but put much too much weight on their own beliefs relative to others'. Effective altruists have shifted away from that human default, but in my view it's unlikely - in the light of the ge... (read more)

Hey Stefan,

Thanks for the comment, I think this describes a pretty common view in EA that I want to push back against.

Let's start with the question of how valuable you have found practical criticism of EA. When I see posts like this or this, I see them as significantly higher value than those individuals deferring to large EA orgs. Moving to a more practical example: older/more experienced organizations/people actually recommended against many organizations (CE being one of them and FTX being another). These organizations' actions and projects seem pr... (read more)

I think there are several things wrong with the Equal Weight View, but this is the easiest way to see it:

Let's say I have  which I updated from a prior of . Now I meet someone who A) I trust to be rational as much as myself, and B) I know started with the same prior as me, and C) I know cannot have seen the evidence that I have seen, and D) I know has updated on evidence independent of evidence I have seen.

They say 

Then I can infer that they updated from  to  by multiplyi... (read more)
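To illustrate the structure of this argument with hypothetical values: if two people start from the same prior and update on independent evidence, combining their evidence means multiplying their likelihood ratios (equivalently, adding their log-odds relative to the prior), which pushes the aggregate beyond either individual posterior, rather than averaging the posteriors as the Equal Weight View suggests. A sketch with made-up numbers:

```python
def odds(p: float) -> float:
    return p / (1 - p)

def prob(o: float) -> float:
    return o / (1 + o)

# Hypothetical values, purely for illustration.
prior = 0.5
mine, theirs = 0.9, 0.9  # both updated from the shared prior on independent evidence

# Each posterior implies a likelihood ratio (Bayes factor) relative to the shared prior.
lr_mine = odds(mine) / odds(prior)      # 9.0
lr_theirs = odds(theirs) / odds(prior)  # 9.0

combined = prob(odds(prior) * lr_mine * lr_theirs)
averaged = (mine + theirs) / 2

print(round(combined, 3))  # 0.988 - combining the two independent pieces of evidence
print(round(averaged, 3))  # 0.9   - what naive equal weighting of the posteriors gives
```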

This post seems to amount to replying "No" to Vaidehi's question since it is very long but does not include a specific example. 

> I won't be able to give you examples where I demonstrate that there was too little deference
I don't think that Vaidehi is asking you to demonstrate anything in particular about any examples given. It's just useful to give examples that illustrate your own subjective experience on the topic. It would have conveyed more information and perspective than the above post.

Deference Culture in EA

I also think that EA consensus views are often unusually well-grounded, meaning there are unusually strong reasons to defer to them. (But obviously this may reflect my own biases.)

Fwiw I think many effective altruists defer too little rather than too much.

kokotajlod · 3 karma · 20d
Agreed - except that on the margin I'd rather encourage EAs to defer less than more. :) But of course some should defer less, and others more, and also it depends on the situation, etc. etc.

Could you give a few specific examples of times you have seen EAs deferring too little?

Cause neutrality

"Neutrality" is this disregard for irrelevant considerations.

...

Two subcases of neutrality are...cause neutrality [and] means neutrality.

There are also other considerations which one should, or arguably should, be neutral about. One example is what resources to use - e.g. money or time. Another is whether to pursue high or low risk interventions: many effective altruists believe that you should be risk neutral and simply maximise expected value.

Still others may include neutrality with respect to how diversified your altruistic investments should be (m... (read more)

Pablo · 5 karma · 21d
Okay, I've revised the lead section of the entry, created a separate entry on means neutrality [https://forum.effectivealtruism.org/topics/means-neutrality], and added a disambiguation page [https://forum.effectivealtruism.org/topics/neutrality] on neutrality. In the future, we may want to add an entry on resource neutrality and perhaps other types of neutrality.
Pablo · 2 karma · 21d
Yes, good points. I'll take a look shortly.
Michael Nielsen's "Notes on effective altruism"

You can usually relatively straightforwardly divide your monetary resources into a part that you spend on donations and a part that you spend for personal purposes.

By contrast, you don't usually spend some of your time at work for self-interested purposes and some for altruistic purposes. (That is in principle possible, but uncommon among effective altruists.) Instead you only have one job (which may serve your self-interested and altruistic motives to varying degrees). Therefore, I think that analogies with donations are often a stretch and sometimes misleading (depending on how they're used).

What is the state of the art in EA cost-effectiveness modelling?

I guess that if one wants to red-team effective altruist cost-effectiveness analyses that inform, e.g., giving decisions, non-public analyses may be relevant.

Gavin · 4 karma · 23d
Couldn't find any public OP analyses on a cursory look
Michael Nielsen's "Notes on effective altruism"

Fwiw, I think the logic is very different when it comes to direct work, and that phrasing it in terms of what fraction of one's time one donates isn't the most natural way of thinking about it.

nananana.nananana.heyhey.anon · 3 karma · 23d
Can you say why?
Michael Nielsen's "Notes on effective altruism"

These are interesting critiques and I look forward to reading the whole thing, but I worry that the nicer tone of this one is going to lead people to give it more credit than critiques that were at least as substantially right, but much more harshly phrased.

I agree there's such a risk. But I also think that the tone actually matters a lot.

Devin Kalish · 6 karma · 24d
To be clear, I also agree with this.
Michael Nielsen's "Notes on effective altruism"

Thanks for posting this. I also appreciated this thoughtful essay.

There was also this passage (not in your excerpts):

An alternate solution, and the one that has, I believe, been adopted by many EAs, has been a form of weak-EA. Strong-EA takes "do the most good you can do" extremely seriously as a central aspect of a life philosophy. Weak-EA uses that principle more as guidance. Donate 1% of your income. Donate 10% of your income, provided that doesn't cause you hardship. Be thoughtful about the impact your work has on the world, and consult many different

... (read more)

Right. Donating 10-50% of time or resources as effectively as possible is still very distinctive, and not much less effective than donating 100%.

Sam Bankman-Fried should spend $100M on short-term projects now

Thank you, this is helpful. I do agree with you that there is a difference between supporting GiveWell-recommended charities and supporting American beneficiaries. More generally, my argument wasn't directly about what donations Sam Bankman-Fried or other effective altruists should make, but rather about what arguments are brought to bear on that issue. Insofar as an analysis of direct impact suggests that certain charities should be funded, I obviously have no objection to that. My comment rather concerned the fact that the OP, in my view, put too much em... (read more)

Sam Bankman-Fried should spend $100M on short-term projects now

First, it's odd to me to categorize political advertising as "direct impact" but short-term spending on poverty or disease as "reputational."

The OP focused on PR/reputation, which is what I reacted to.

If you accept that reputation matters, why is optimizing for an impression of greater integrity better than optimizing for an impression of greater altruism? In both cases, we're just trying to anticipate and strategically preempt a misconception people may have about our true motivations.

I think there's a difference between creating a reputation for integrit... (read more)

AndrewDoris · 9 karma · 1mo
If you are a consequentialist, then incorporating the consequences of reputation into your cost-benefit assessment is "actually behaving with integrity." Why is it more honest - or even perceived as more honest - for SBF to exempt reputational consequences from what he thinks is most helpful?

Insofar as SBF's reputation and EA's reputation are linked, I agree with you (and disagree with OP) that it could be seen as cynical and hypocritical for SBF to suddenly focus on American beneficiaries in particular. These have never otherwise been EA priorities, so he would be transparently buying popularity. But I don't think funding GiveWell's short-term causes - nor even funding them more than you otherwise would for reputational reasons - is equally hypocritical in a way that suggests a lack of integrity. These are still among the most helpful things our community has identified. They are heavily funded by OpenPhilanthropy and by a huge portion of self-identified EAs, even apart from their reputational benefits. Many, both inside and outside the movement, see malaria bednets as the quintessential EA intervention. Nobody outside the movement would see that as a betrayal of EA principles.

Insofar as EA and SBF's reputations are severable, perhaps it doesn't matter what's quintessentially EA, because "EA principles" are broader than SBF's personal priorities. But in that case, because SBF's personal priorities incline him towards political activism on longtermism, they should also incline him towards reputation management. Caring about things with instrumental value to protecting the future should not be seen as a dishonest deviation from longtermist beliefs, because it isn't!

In another context, doing broadly popular and helpful things you "actually don't think are the most helpful" might just be called hedging against moral uncertainty. Responsiveness to social pressure on altruists' moral priorities is a humble admission that our niche and esoteric movement may have bli
Yitz · 4 karma · 1mo
I didn't focus on it in this post, but I genuinely think that the most helpful thing to do involves showing proficiency in achieving near-term goals, as that both allows us to troubleshoot potential practical issues, and allows outsiders to evaluate our track record. Part of showing integrity is showing transparency (assuming that we want outside support), and working on neartermist causes allows us to more easily do that.
Sam Bankman-Fried should spend $100M on short-term projects now

Those partnerships between FTX and sports teams and individuals seem wholly different. They are not purporting to directly improve the world, the way donations to an altruistic cause do. (Rather, their purpose is, as far as I understand, to increase FTX's profits - which in turn indirectly can increase their donations.) As such, there is no risk of a conflation between PR-related and direct impact-related reasons for those expenditures: it's clear that they're about PR alone.

FTX is a for-profit enterprise, and it's natural that it engages in marketing. My comment rather concerned whether one should donate to particular causes because it looks good, as opposed to because it has a direct impact.

Sam Bankman-Fried should spend $100M on short-term projects now

My sense is that this post - as well as many other recent posts on the forum - focuses too much on PR/reputation relative to direct impact. Also, I think that insofar as we try to build a reputation, part of that reputation should be that we do things because we think they're right for direct, non-reputational reasons. I think that gives a (correct) impression of greater integrity.

I disagree with this for two reasons. First, it's odd to me to categorize political advertising as "direct impact" but short-term spending on poverty or disease as "reputational." There is overlap in both cases; but if we must categorize I think it's closer to the opposite. Short-term, RCT-backed spending is the most direct impact EA knows how to confidently make. And is not the entire project of engaging with electoral politics one of managing reputations? 

To fund a political campaign is to attempt to popularize a candidate and their ideas; that is, ... (read more)

Yitz · 5 karma · 1mo
Within the domain of politics (and to a lesser degree, global health), PR impact makes an extremely large difference in how effective you're able to be at the end of the day. If you want, I'd be happy to provide data on that, but my guess is you'd agree with me there (please let me know if that isn't the case). As such, if you care about results, you should care about PR as well.

I suspect that your unease mostly lies in the second half of your response—we should do things for "direct, non-reputational reasons," and actions done for reputational reasons would impugn our perceived integrity. The thing is, reputation is actually one of the things we are already paying a tremendous amount of attention to—in the context of both forecasting and charity evaluation. To explain:

In forecasting, if you want your predictions to be maximally accurate, it is highly worthwhile to see what domain experts and superforecasters are saying, since they either have a confirmed track record of getting predictions right, or a track record of contributing to the relevant field (which means they will likely have a more robust inside-view).

In charity evaluation, the only thing we usually have to go on to determine the effectiveness of existing charities is what the charities themselves say about their impact, and if we're very lucky, what outside researchers have evaluated. Ultimately, the only real reason we have to trust some people or organizations more than others is their track record (certifications are merely proxies for that). Organizations like GiveWell partially function as track-record evaluators, doing the hard parts of the work for us to determine if charities are actually doing what they say they're doing (comparing effectiveness once that's done is the other aspect of their job, of course).

When dealing with longtermist charities, things get trickier. It's impossible to evaluate a direct track record of impact, so the only thing we have to go on is proxies for effectiven
aogara · 8 karma · 1mo
FWIW I think SBF disagrees. FTX has spent hundreds of millions on marketing so far (see here [https://en.wikipedia.org/wiki/FTX_(company)#Partnerships]). For an organization that already believes in the power of PR, making donations that are more legibly altruistic seems like a great way to demonstrate their core values. Personally I would love to see a commitment to fund any charities that GiveWell projects as 5x to 8x better than direct cash transfers, which currently do not receive donations from GiveWell [https://blog.givewell.org/2021/11/22/we-aim-to-cost-effectively-direct-around-1-billion-annually-by-2025/] . You're right that we should do good things for the right reasons, and I would argue that it's the right thing to do for FTX to fill that funding gap.
Revisiting the karma system

I think there are some posts that should be made invisible; and that it's good if strong downvotes make them so. Thus, I would like empirical evidence that such a reform would do more good than harm. My hunch is that it wouldn't.
