All of Guy Raveh's Comments + Replies

Monthly Overload of EA - June 2022

Oh wow this is great! Subscribed.

There are no people to be effectively altruistic for on a dead planet: EA funding of projects without conducting Environmental Impact Assessments (EIAs), Health and Safety Assessments (HSAs) and Life Cycle Assessments (LCAs) = catastrophe

I agree that externalities should be taken into account in analyses of EA projects, and as aogara's comment shows, they may be non-negligible (though the order of magnitude in that calculation wasn't changed). I think it's important to raise this point.

However:

  1. The adversarial perspective in this post looks very wrong to me. Environmental effects are one part of the calculation. It's not necessarily good to only do "sustainable" things. E.g. the death of all humans would be much more sustainable, but is something we would fight against.
  2. Moreover, it sh
... (read more)
2 · Deborah W.A. Foulkes · 18h
I'm so sorry you find my post 'adversarial'. I do apologise if that is the impression you have received. It was not intended. By way of explanation - I've arrived at Effective Altruism via a path that started with existential risks and then expanded to longtermism, so I suppose I automatically start from a more risk-averse perspective. X-risks and longtermism lead to one thinking more in terms of the negative effects an intervention could have on vast numbers of future people (since a human extinction event would prevent huge numbers of future people from leading happy fulfilling lives up to the habitable limit of this planet, around one billion years, and prevent even huger, barely comprehensible numbers of future people expanding to settle inhabitable planets throughout the universe), and this often seems to conflict with considerations of smaller (in comparison) numbers of people here on this one planet in the short-term. It is a quite horrible moral dilemma, to weigh these up against one another, and one which is very uncomfortable indeed to contemplate or to even attempt to quantify. But we should not shrink from this difficult task, I feel.
"Big tent" effective altruism is very important (particularly right now)

I don't share your view about what a downvote means. However, regardless of what I think, it doesn't actually have any fixed meaning beyond that which people assign to it - so it'd be interesting to have some stats on how people on the forum interpret it.

But that's where the users' identities and relationship comes into play — I'd feel somewhat differently had Max said the same thing to a new poster.

Most(?) readers won't know who either of them is, not to mention their relationship.

2 · Aaron Gertler · 10h
What does a downvote mean to you? If it means "you shouldn't have written this", what does a strong downvote mean to you? The same thing, but with more emphasis? Why not create a poll [https://www.facebook.com/groups/477649789306528]? I would, but I'm not sure exactly which question you'd want asked. Which brings up another question — to what extent should a comment be written for an author vs. the audience? Max's comment seemed very directed at Luke — it was mostly about the style of Luke's writing and his way of drawing conclusions. Other comments feel more audience-directed.
What are we as the EA community?

I like those Polis polls you keep posting. Maybe you should now have one to vote on that :)

What are we as the EA community?

Upvoted even though I disagree with important parts, because I think this kind of post is a good idea.

I'm curious about your idea of the relationship between the community and funders/managers. On the one hand, you say (without much explanation) that funding decisions ought not to be, and never will be, made democratically. On the other hand, you think the community should inspect and check decisions by funders.

This leads me to ask: what do you envision should happen, if the community finds funding decisions to be bad, or points to a new appointment being ... (read more)

2 · Nathan Young · 3d
What currently happens is that people talk in private and then later post on the forum about it. This seems reasonable. I don't think community opinions are ignored. Fund managers are affected by community discourse, but they aren't bound to it. I feel belonging and cooperation for all the other reasons I state. I don't need to be [making decisions] to feel involved. I'm interested in better decisions and I agree that it seems better to have more diverse groups for error correction. Firstly, those people can be part of the error correction systems I mention. Secondly, they can gain prestige and then work at funding orgs. Do those answer your questions?
"Big tent" effective altruism is very important (particularly right now)

I think the problem isn't with saying you downvoted a post and why (I personally share the view that people should aim to explain their downvotes).

The problem is the actual reason:

I think you're pointing to some important issues... However, I worry that you're conflating a few pretty different dimensions, so I downvoted this post.

The message that, for me, stands out from this is "If you have an important idea but can't present it perfectly - it's better not to write at all." Which I think most of us would not endorse.

4 · Aaron Gertler · 2d
I didn't get that message at all. If someone tells me they downvoted something I wrote, my default takeaway is "oh, I could have been more clear" or "huh, maybe I need to add something that was missing" — not "yikes, I shouldn't have written this". * I read Max's comment as "I thought this wasn't written very clearly/got some things wrong", not "I think you shouldn't have written this at all". The latter is, to me, almost the definition of a strong downvote. If someone sees a post they think (a) points to important issues, and (b) gets important things wrong, any of upvote/downvote/decline-to-vote seems reasonable to me. *This is partly because I've stopped feeling very nervous about Forum posts after years of experience. I know plenty of people who do have the "yikes" reaction. But that's where the users' identities and relationship [https://forum.effectivealtruism.org/posts/SjK9mzSkWQttykKu6/big-tent-effective-altruism-is-very-important-particularly?commentId=EP2pW9ikkAkpE6kee] comes into play — I'd feel somewhat differently had Max said the same thing to a new poster.
Some unfun lessons I learned as a junior grantmaker

As you noted, it's not you who "has money" as a grantmaker. On the other hand, it is you who knows what parameters make projects valuable in the eyes of EA funders. Which is exactly the needed expertise.

I'm not implying how this should compare to any individual grantmaker's other priorities at a conference. But it seems wrong to me to strike it down as not being a valuable use of conference time.

Some unfun lessons I learned as a junior grantmaker

Conference time is valuable precisely because it allows people to do things like "get feedback from an EA experienced in the thing they're trying to do". If "insiders" think their time is too valuable for "outsiders", that's a bad sign.

Getting feedback from someone because they have expertise feels structurally different to me than getting feedback from someone because they have money.

List-omania

It might make sense to have a central List of Lists and a head List Librarian

...

(After jotting this post down, I'm really sick of the word 'list'.)

Maybe we can even aspire to one day rival Wikipedia's legendary list of lists of lists.

LW4EA: 16 types of useful predictions

This post was more interesting than I expected. Thanks!

1 · Jeremy · 4d
I agree. When I was facilitating the In Depth virtual program, people often had difficulty finding practical ways to make predictions. It would have been helpful to be able to refer them to this. I emailed to suggest that it be added to the syllabus.
St. Petersburg Demon – a thought experiment that makes me doubt Longtermism

It's zero on the event "three sixes are rolled at some point" and infinity on the event that they're never rolled. The probability of that second event is zero, though. So the expected value is zero.

St. Petersburg Demon – a thought experiment that makes me doubt Longtermism

Nevertheless, expected value is the best tool we have for analyzing moral outcomes

Expected value is only one parameter of the (consequentialist) evaluation of an action. There are more, e.g. risk minimisation.

It would be a massive understatement to say that not all philosophical or ethical theories so far boil down to "maximise the expected value of your actions".

St. Petersburg Demon – a thought experiment that makes me doubt Longtermism

The expected value of this strategy is undefined

It looks to me like there's some confusion in the other comments regarding this. The expected value is, in fact, defined, and it is zero. The problem is that if you look at a sequence of n bets and take n to infinity, that expected value does go to positive infinity. So thinking in terms of adding one bet each time is actually deceiving.

In general, a sequence of pointwise converging random variables does not converge in expected value to the expected value of the limit variable. That requires a stronger condition, such as uniform convergence or domination (as in the dominated convergence theorem).

Infinities sometimes break our intuitions. Luckily, our lives and the universe's "life" are both finite.
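The distinction between the almost-sure limit (zero) and the diverging expected values of the n-bet strategies can be made concrete with a small sketch. This is a stylized version of the demon bet: the per-bet wealth multiplier and the exact ruin rule are assumptions for illustration, since the original post's parameters aren't quoted in this thread.

```python
# Stylized St. Petersburg demon: each bet, wealth is multiplied by m
# unless three sixes come up (probability 1/216), which zeroes it forever.
# The multiplier m = 2.0 is an assumed value, not from the original post.

p_ruin = 1 / 216   # probability the demon rolls three sixes on one bet
m = 2.0            # assumed wealth multiplier per surviving bet

def survival_prob(n):
    """Probability of no three-sixes in n bets."""
    return (1 - p_ruin) ** n

def ev_after_n_bets(n, wealth=1.0):
    """Expected wealth if you stop after exactly n bets (ruined paths pay 0)."""
    return wealth * survival_prob(n) * m ** n

# The EV of stopping after n bets grows without bound as n grows...
assert ev_after_n_bets(100) > ev_after_n_bets(10) > ev_after_n_bets(1) > 1.0

# ...but the probability of still holding any money vanishes, so the
# "never stop" strategy pays 0 with probability 1: its expected value is 0.
assert survival_prob(10_000) < 1e-20
```

This is exactly the failure of interchanging limit and expectation described above: the n-bet expected values diverge, while the limiting random variable is zero almost surely.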

1 · Zach Stein-Perlman · 5d
Is the random variable you're thinking of, whose expectation is zero, just the random variable that's uniformly zero? That doesn't seem to me to be the right way to describe the "bet" strategy; I would prefer to say the random variable is undefined. (But calling it zero certainly doesn't seem to be a crazy convention.)
EA culture is special; we should proceed with intentionality

Forgive me for only skimming this and making a rather off-topic comment, but:

[A]dhering to good cultural norms/existing EA axioms is a good heuristic for generating impact

From an outside perspective, how sure are we of this actually? E.g. have organizations and people that generated large positive impact so far adhered to EA-style culture or axioms?

[Linkpost] Towards Ineffective Altruism

paint a picture that does not very accurately describe what most effective altruists are up to in a practical sense.

And also what they do in their daily lives, outside the time or resources they allot to "effectiveness".

[Linkpost] Towards Ineffective Altruism

My very short summary of the post:

  1. It gives a pretty fair exposition on the EA movement, including recognising its diversity of thought.
  2. It argues that important types of "good" are much less measurable (because they are abstract, long term, or don't make sense on an individual level), but have to be taken into account. It also argues explicitly against Longtermism, ostensibly promoting a discount coefficient close to, but smaller than, 1.
  3. It derives some conclusions on things that ought to be done, like labor unions and mutual aid communities.

I agree w... (read more)

SERI ML application deadline is extended until May 22.

Any idea if the next cohorts will allow applying later?

7 · Victor Warlop · 2d
Hey Guy, As said on the application, we are not sure if we will be able to run future iterations. We hope to make this possible, and in that case, we will publicly advertise new applications. We also don't know what mentors we will be working with.
How many people have heard of effective altruism?

Nitpicking: there's a copying error in the summary, in the party affiliation section regarding independents:

We also found sizable differences between the percentage of Republicans (4.3% permissive, 1.5% stringent) estimated to have heard of EA, compared to Democrats (7.2% permissive, 2.9% stringent) and Independents (4.3% permissive, 1.5% stringent).

3 · David_Moss · 7d
Thanks for spotting! Edited.
"Big tent" effective altruism is very important (particularly right now)

grow cautiously (maybe around 30%/year)

Are there estimates about current or previous growth rates?

5 · MaxDalton · 8d
There are some, e.g. here. [https://forum.effectivealtruism.org/posts/zA6AnNnYBwuokF8kB/is-effective-altruism-growing-an-update-on-the-stock-of#How_many_engaged_community_members_are_there_]
"Big tent" effective altruism is very important (particularly right now)

I think "large groups that reason together on how to achieve some shared values" is something so common that we ignore it. Examples include democratic countries, cities, and communities.

Not that this means reasoning about being effective can attract as large a group. But one can hope.

"Big tent" effective altruism is very important (particularly right now)

Thanks for the post. I agree with most of it.

I think on the one hand, someone participating by donations only may still be huge, as we all know what direct impact GiveWell charities can have for relatively small amounts of money. Human lives saved are not to be taken lightly.

On the other hand, I think it's important to deemphasize donations as a basis for the movement. If we seek to cause greater impact through non-marginal change, relying on philanthropy can only be a first step.

Lastly, I don't think Elon Musk is someone we should associate ourselves with... (read more)

"Big tent" effective altruism is very important (particularly right now)

What is WWOTF?

"What We Owe the Future", Will MacAskill's new book.

"Big tent" effective altruism is very important (particularly right now)

I think there are two ways to frame an expansion of the group of people who are engaged with EA through more than donations.

The first, which sits well with your disagreements: we're doing extremely important things which we got into by careful reasoning about our values and impact. More people may cause value drift or dilute the more impactful efforts to make way on the most important problems.

But I think a second one is much more plausible: we're almost surely wrong about some important things. We have biases that stem from who the typical EAs are, where ... (read more)

Don’t Be Comforted by Failed Apocalypses

I don't know that I agree with this, but it did make me think.

The AI Messiah

Thank you. This brings together nicely some vague concerns I had, that I didn't really know how to formulate beyond "Why are people going around with Will MacAskill's face on their shirts?".

The Fair Trade Scandal - book review and summary

Some things I used to think about when I was active about fair trade, and I wonder if they're discussed:

  • the premiums paid to fair trade growing communities?
  • dependence of communities, fair trade or not, on a single crop
  • alternative certifications e.g. UTZ
EA is more than longtermism

I basically agree with your comment, but wanted to emphasize the part I disagree with:

EA says to follow the importance-tractability-crowdedness framework, and allocate funding to the most effective causes.

EA is about prioritising in order to (try to) do the most good. The ITN framework is just a heuristic for that, which may very well be wrong in many places; and funding is just one of the resources we can use.

3 · Michael_Wiebe · 24d
I claim that [https://forum.effectivealtruism.org/posts/fR55cjoph2wwiSk8R/formalizing-the-cause-prioritization-framework] the ITC framework is not just a heuristic, but captures precisely what we mean by "do the most good". And I agree, 'funding' should be interpreted broadly to include all EA resources (eg. money, labor hours, personal connections, etc).
Should longtermists focus more on climate resilience?

Thanks for this post!

I think it's really important to look at the underlying assumptions of any long-term EA project, and the movement might not be doing this enough. We take as way too obvious that the social and political climate we're currently operating in will stay the same. But in reality, everything could change significantly due to things like climate change (in one direction) or economic growth (in the other).

1 · Richard Ren · 24d
Thanks a ton for your comment! I'm planning to write a follow-up EA forum post on cascading and interlinking effects - and I agree with you in that I think a lot of times, EA frameworks only take into account first-order impacts while assuming linearity between cause areas.

That's not what I meant. What I tried to say is that the universe is full of beautiful things, like galaxies, plants, hills, dogs... More generally, complex systems with so many interesting things happening on so many scales. When I imagine a utopia, I picture a thriving human society in "harmony", or at least at peace, with nature. Converting all of it into simulated brains sounds like a dystopian nightmare to me.

Since I first thought about my intrinsic values, I knew there's some divergence between e.g. valuing beauty and valuing happiness singularly. Bu... (read more)

I would mostly like to protest your notion of utopia. A universe where every gram of matter is used for making brains sounds terrible. A "good" life involves interaction with other brains as well as a living environment.

1 · Joshua Clymer · 25d
Yeah, I would be in favor of interaction in simulated environments -- other's might disagree, but I don't think this influences the general argument very much as I don't think leaving some matter for computers will reduce the number of brains by more than an order of magnitude or so.
Nuclear Fusion Energy coming within 5 years

I remember hearing in 2018 about orders for millions of potentially autonomous cars from some companies (Intel?), intended for autonomous use in 2021, and we're not even close to that now. Fusion in the near term on some scale seems plausible, but the fact that a company is claiming a very close timeline isn't very indicative of the actual timeline, I think.

Nuclear Fusion Energy coming within 5 years

Another response could be that abundant energy means more destructive power for humanity, and so even more risks.

Though in reality I do tend towards the "sounds good but there's nothing we in particular should do about it" side.

EA frontpages should compare/contrast with other movements

A couple thoughts so far, written at 3am so hopefully at least somewhat clear:

  1. The post isn't short :)
  2. Another bias is in favour of technocracy over democracy. "Impact is calculated through careful analysis, and this analysis can be done by anyone, so the recipients do not need to govern it or give inputs to it." I do not mean by this that anyone in EA would stand behind this quote as written (though some might), but rather that we're biased in this direction.
  3. These biases can be viewed through more than one lens: on the other hand, this is what a newcome
... (read more)
1 · acylhalide · 1mo
2 makes sense. Regarding 3, I completely agree. I think you can present this nuance in the intro pages too. Like here is the current distribution of EA opinions on your favorite movement X, but if you feel you can convince us on X we're open to change.
rohinmshah's Shortform

I know this is just a small detail and not what you wrote about, but: much of your comment on the recommender systems post hinged on news articles being uncorrelated with the truth. Do you have data to back that up?

I'm replying here because it's a strong claim that's relevant to many things beyond that specific post.

2 · Rohin Shah · 1mo
I have data in the sense that when I read news articles and check how correct they are, they are usually not very correct. (You can have more nuance than this [https://astralcodexten.substack.com/p/bounded-distrust?s=r], e.g. facts about what mundane stuff happened in the world tend to be correct.) I don't have data in the sense that I don't have a convenient list of articles and ways they were wrong such that I could easily persuade someone else of this belief of mine. (Though here's one example [https://forum.effectivealtruism.org/posts/BNKhPZJvG59bCEiAm/any-response-from-openai-or-ea-in-general-about-the] of an article that you at least have to read closely if you want to not be misled.) Also, I could justify ignoring those two particular news articles without this general claim, at least to myself. I did briefly look at them before I wrote that comment; I didn't particularly expect to believe them but if they were the rare good kind of news article I would have noticed. For radicalization, I know specific people who have looked into it and come away unconvinced; Stefan Schubert links to some of this work in a different comment on that post. The article about social media being addictive is basically just a bunch of quotes from people rather than particular studies / data. It generally seems pretty easy to find people saying things you want so I don't update much on "such-and-such person said X". I've also once experienced and many times heard stories of journalists adversarially quoting people to make it sound like their position was very different than it actually was, so I usually don't even update on "such-and-such person believes X".
My bargain with the EA machine

I'm not very confident in my argument, but the particular scenario you describe sounds plausible to me.

Trying to imagine it in a simpler, global health setting - you could ask which of many problems to try to solve (e.g. malaria, snake bites, cancer), some of which may cause several orders of magnitude more suffering than others every year. If the solutions require things that are relatively straightforward - funding, scaling up production of something, etc. - it could be obvious which one to pick. But if the solutions require more difficult things, like r... (read more)

My bargain with the EA machine

This isn't a well thought-out argument, but something is bugging me in your claim. The real impact for your work may have some distribution, but I think the expected impact given career choices can be distributed very differently. Maybe, for example, the higher you aim, the more uncertainty you have, so your expectation doesn't grow as fast.

I find it hard to believe that in real life you face choices that are reflected much better by your graph than Eric's.

I share some of that intuition as well, but I have trouble conveying it numerically. Suppose that among realistic options that we might consider, we think ex post impact varies by 9 OOMs (as Thomas' graph implies). Wouldn't it be surprising if we have so little information that we only have <10^-9 confidence that our best choice is better than our second best choice? 

5 · ThomasWoodside · 1mo
This is tricky, because it's really an empirical claim for which we need empirical evidence. I don't currently have such evidence about anyone's counterfactual choices. But I think even if you zoom in on the top 10% of a skewed distribution, it's still going to be skewed. Within the top 10% (or even 1%) of researchers, nonprofits, it's likely only a small subset are making most of the impact. I think it's true that "the higher we aim, the higher uncertainty we have" but you make it seem as if that uncertainty always washes out. I don't think it does. I think higher uncertainty often is an indicator that you might be able to make it into the tails. Consider the monetary EV of starting a really good startup or working at a tech company. A startup has more uncertainty, but that's because it creates the possibility of tail gains. Anecdotally I think that certain choices I've made have changed the EV of my work by orders of magnitude. It's important to note that I didn't necessarily know this at the time, but I think it's true retrospectively. But I do agree it's not necessarily true in all cases.
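Thomas's point that higher uncertainty need not "wash out" can be illustrated with a toy model. The lognormal distribution and the specific parameters below are assumptions for illustration, not anything stated in the thread: two options share the same median impact, but the one with more spread has a far higher expected value, because the heavy right tail dominates the mean.

```python
import math

# Model impact of a career option as lognormal: impact = exp(N(mu, sigma^2)).
# Mean of a lognormal is exp(mu + sigma^2 / 2); median is exp(mu).
def lognormal_mean(mu, sigma):
    return math.exp(mu + sigma ** 2 / 2)

# Both hypothetical options have median impact exp(0) = 1.
safe_mean = lognormal_mean(0.0, 0.5)   # low-uncertainty option
risky_mean = lognormal_mean(0.0, 3.0)  # high-uncertainty option

# The extra uncertainty adds tail upside rather than washing out:
# the riskier option's expected impact is more than 50x the safer one's.
assert risky_mean > 50 * safe_mean
```

Under this (assumed) model, "higher uncertainty" is precisely what creates the possibility of landing in the tail, mirroring the startup-vs-tech-job comparison above.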
4 · alexrjl · 1mo
I had similar thoughts, discussed here [https://twitter.com/kaseyshib/status/1519524344543727618?s=20&t=3OL85yVlN78axH84M9i0pw] after I tweeted about this post and somebody replied mentioning this comment. (Apologies for creating a circular link loop, as my tweet links to this post, which now has a comment linking to my tweet)
Consider Changing Your Forum Username to Your Real Name

I don't personally think that's a good reason to not use one's name, but I'll concede my phrasing was indeed a bit too dramatic. It's probably because my experience on the forum is that it's really frustrating not being able to connect other commenters to a human identity.

How many EAs failed in high risk, high reward projects?

Beyond asking about projects in a vague, general sense, it could also be interesting to compare the probabilities of success that EA grantmakers assign to their grantees' projects with the fraction of them that actually succeed.

Consider Changing Your Forum Username to Your Real Name

I'd add that having people use their real names adds to the forum looking like a platform for professional discussion, and adds transparency - both of which are important because of the impact and reach we wish to eventually achieve as a movement.

While pseudonyms have some use cases - the main one I can think of, is when one may fear retaliation for reporting bad behaviour of another EA or organisation - they should indeed be otherwise extremely discouraged.

Edit: ok, this paragraph was in hindsight somewhat exaggerated, and I can think of a few use cases that may be more common. But I still think anyone using a pseudonym should at least have a good reason in mind.

"Extremely discouraged" seems a bit dramatic. Some of us would rather not have our heavy EA involvement be the first thing that shows up when people Google us.

I burnt out at EAG. Let's talk about it.

Thanks for writing this. I didn't attend this EAG, but I came out of the previous one completely exhausted. Every day of the conference I ended up in pain and barely able to move. Maybe it's because of not knowing my limits well enough, maybe it's because of accessibility issues[1].

But also maybe there's a culture that encourages us to stretch beyond our limits? I guess we can see whether many people experience this and, if so, start looking for a general problem.


  1. Not enough chairs. I wrote about this in the feedback form, so maybe something has change

... (read more)
FTX/CEA - show us your numbers!

Hi, thanks for your comment.

While it's reasonable not to be able to provide an impact estimate for every specific small grant, I think there are some other things that could increase transparency and accountability, for example:

  • Publishing your general reasoning and heuristics explicitly on the CEA website.
  • Publishing a list of grants, updated with some frequency.
  • Giving some statistics on which sums went to what type of activities - again, updated once in a while.
FTX/CEA - show us your numbers!

There are also a lot of externalities that act at least equally on humans, like carbon emissions, promotion of ethnic violence, or erosion of privacy. Those are all examples off the top of my head for Facebook specifically.

I upvoted Larks' comment, but like you I think this particular argument, "people buy from these firms", is weak.

FTX/CEA - show us your numbers!

I strongly agree we need transparency. In lieu of democracy in funding, orgs need to be accountable to the movement in some way.

Also, what's a BOTEC?

6 · Jack Lewars · 1mo
I've updated this now: it's a Back Of The Envelope Calculation.
FTX/CEA - show us your numbers!

Not long enough for the formatting to matter in my opinion. We can, and should, encourage people to post some low-effort posts, as long as they're an original thought.

Would people like to see "curation comments" on posts with high numbers of comments?

given that I would like to have multiple comments-summary comments to choose from

That sounds cool, though I think it's a bit too optimistic.

But I basically agree with the rest.

2 · Harrison Durland · 1mo
I’m not strictly opposed to it being a new post, but my concern is that, given that I would like to have multiple comments-summary comments to choose from, you might see a proliferation of such posts which take up space on the front page. (Also, it might be more convenient to have such comments on the target post itself.) Perhaps, though there could be a norm of “graduating” meta-comments if 1) they receive enough likes, and 2) the number of comments covered is very large (e.g., >100). Alternatively, I suppose there could be an in-built feature where users can hide comment summary posts on the frontpage, although this would require changing the site, so it would probably be better to avoid this until people actually start to use the idea more?
"Long-Termism" vs. "Existential Risk"

I mean, physics solves the divergence/unboundedness problem: the universe eventually reaches heat death. So one can at least assume some distribution on the time bound. Whether that makes having no time discount reasonable in practice, I highly doubt.

Where is the Social Justice in EA?

No individual, including Elon Musk or Jeff Bezos has more than a quarter of that amount of money

But governments do. Which, while being about a hypothetical, does demonstrate a good reason for EA to try to transition away from relying on individuals instead of governments.

Where is the Social Justice in EA?

GiveDirectly is great and is strongly supported by the EA community.

Theoretically - but GiveWell seems to prefer to keep money rather than give it directly. There may or may not be good reasons for that, but it's not a strong message for direct empowerment of marginalised communities.
