All of Guy Raveh's Comments + Replies

I've also been making this point to people claiming financial interests. On the other hand, the tweet Haydn replied to actually makes another good point that does apply to professors: diverting attention from the societal risks that they're contributing to but can solve, to x-risk, where they can mostly sign such statements and then go "🤷🏼‍♂️", shields them from having to change anything in practice.

I found this post very interesting and useful for my own thinking about the subject.

Note that while the conclusions here are ones intended for OP specifically, there's actually another striking conclusion that goes against the ideas of many from this community: we need more evidence! We need to build stronger AI (perhaps in more strictly regulated contexts) in order to have enough data to reason about the dangers from it. The "arms race" of DeepMind and OpenAI is not existentially dangerous to the world, but is rather contributing to its chance of survival...

Thanks for writing this! I've been waiting to hear how that residency played out.

That might be true in theory, but not in practice. People become biased towards the causes they like or understand better.

1
Lauro Langosco
7d
Sure, but that's not a difference between the two approaches.

I'm really sad to hear that! Is the court's decision available somewhere?

8
alene
12d
Yeah, it stinks. The judge just ruled from the bench; he didn't author a written opinion.
Guy Raveh
13d

Thank you for the work you're doing!

How's the first lawsuit going?

alene
13d

Thank you so much, Guy!!  Sadly, the judge dismissed the Costco lawsuit.  :-(  

Hi Xueyin!

While I'm not currently working on a career plan, I am also suffering from a disability which limits my ability to work, so I wanted to offer my sympathy. Sadly, disabilities are not very visible in EA, but rest assured you're not alone in dealing with one.

A small word of advice, perhaps, is that in my experience most (though not all) career paths and programs are aimed at able-bodied people who can devote a full work week to their career. Finding ways to accommodate disabilities will often require thinking outside the box, being assertive and direct, and asking organizers and contacts what can be done differently.

Good luck!

I wonder why he didn't do anything about it as prime minister, though. It's certainly not in the Israeli consensus, but only because it's entirely out of mind for Israelis. So there would've been no real opposition, except maybe from tech companies.

Guy Raveh
1mo

I think this is spot on. There have been many discussions on the forum proposing rules like "Never hit on women during daytime at EAG, but it's OK at afterparties". And they all basically do something blunt on the one hand, while on the other failing to prevent people from being a**holes.

6
Jason
1mo
I think the bright-line rules serve several important purposes. They are not replacements for "don't be an a**hole," but are rather complements. A norm against seeking romantic or sexual connection during EAG events, for instance, is intended in part to equalize opportunities for professional networking at a networking event (which doesn't happen if, e.g., some people are setting up 1:1s for romantic purposes). It is easier for everyone to realize when a bright-line norm would be, or has been, breached. That should make it more likely that the norms won't be breached in the first place, but also more likely that norm violations will be reported and that appropriate action will be taken. So I think this is a both/and situation.

How many examples do you have of elites making the right decisions for a larger group? And out of how many elites trying to do that in general?

I've been vocal about thinking the community should have a voice here (maybe specifically in CEA, and other stakeholders should be involved for other parts of EVF). But widening the boards is a minimal step in the right direction.

Hi Miranda, thanks for the very clear answer!

I don't necessarily agree with the method of allocation, but from a broad perspective I'm happy to see that a small change in estimates translates to a small, but still meaningful, adjustment in allocation.

And furthermore, will it change how funds from the 'all grants' fund are spent?

GiveWell
2mo

Hi, Kaleem and Guy!

This is Miranda Kaplan, communications associate at GiveWell. I'll answer both questions here, since they're closely related.

This adjustment updated GiveWell's overall impression of deworming by around 10%. But the bottom-line takeaway on deworming—which is that it's one of the most cost-effective programs we know of in some locations, but we have a higher degree of uncertainty about it than we do our top charities—hasn't changed much, and we think that should probably continue to be the takeaway for followers of our work. 

You can s...

I think donors motivated by EA principles would be making a mistake, and leaving a lot of value on the table by donating to GiveDirectly or StrongMinds over GiveWell's recommendations

Not going into the wider discussion, I specifically disagree with this idea: there's a trade-off here between estimated impact and things like risk, paternalism, and scalability. If I'm risk-averse enough, or give some partial weight to being less paternalistic, I might prefer donating to GiveDirectly - which I indeed am, despite having chosen to donate to AMF in the past.

(In practi...

What I mean is "these forecasts give no more information than flipping a coin to decide whether AGI would come in time period A vs. time period B".

I have my own, rough, inside views about if and when AGI will come and what it would be able to do, and I don't find it helpful to quantify them into a specific probability distribution. And there's no "default distribution" here that I can think of either.

1
Gabriel Mukobi
2mo
Gotcha, I think I still disagree with you for most decision-relevant time periods (e.g. I think they're likely better than chance on estimating AGI within 10 years vs 20 years)
Guy Raveh
2mo

This isn't personal, but I downvoted because I think Metaculus forecasts about this aren't more reliable than chance, and people shouldn't defer to them.

aren't more reliable than chance

Curious what you mean by this. One version of chance is "uniform prediction of AGI over future years" which obviously seems worse than Metaculus, but perhaps you meant a more specific baseline?

Personally, I think forecasts like these are rough averages of what informed individuals would think about these questions. Yes, you shouldn't defer to them, but it's also useful to recognize how that community's predictions have changed over time.
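(To make the "baseline" comparison concrete: one standard way to score binary forecasts against a coin-flip baseline is the Brier score. The sketch below uses invented outcomes and probabilities, purely for illustration.)

```python
# Toy sketch with invented data: scoring binary forecasts against a 50/50 baseline.
# Lower Brier score = more accurate probabilistic forecasts.

def brier(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

outcomes = [1, 0, 0, 1, 1]             # hypothetical resolved yes/no questions
informed = [0.8, 0.3, 0.1, 0.7, 0.9]   # a forecaster's stated probabilities
coinflip = [0.5] * len(outcomes)       # "no more information than a coin"

print(brier(informed, outcomes))  # 0.048 -- beats the baseline
print(brier(coinflip, outcomes))  # 0.25  -- the coin-flip score
```

A forecaster who is "no better than chance" in Guy's sense would, over many resolved questions, score no better than the 0.25 coin-flip baseline.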

I'm aware that by prioritising how to use limited resources, we're making decisions about people's lives. But there's a difference between saying "we want to save everyone, but can't" and saying "This group should actually not be saved, because their lives are so bad".

Curiously, I note that people are quite ready to accept that, when it comes to factory farming, those animals would lead bad lives, so it is better that they never exist.

I actually agree! But I don't think it's the same thing. I don't want to kill existing animals; I want to not intention...

I have difficulty with this idea of a neutral point, below which it is preferable not to exist. At the very least, this is another baked-in assumption - that the worst wellbeing imaginable is worse than non-existence.

There are two reasons for me being troubled with this assumption:

  1. I've been living with a chronic illness for many years, which causes constant suffering. I'm expected to keep living like that for decades to come. I can't accept the idea that there's a point of suffering beyond which I should not live.
  2. Setting such a point would allow one to make decisions about whether people should live or die. As a rule that I personally believe in, we should never make such decisions.

Hello Guy. This is an important, tricky, and often unpleasant issue to discuss. I'm speaking for myself here: HLI doesn't have an official view on this issue, except that it's complicated and needs more thought; I'm still not sure how to think about this.

I'll respond to your second comment first. You say we should not decide whether people live or die. Whilst I respect the sentiment, this choice is unfortunately unavoidable. Healthcare systems must, for instance, make choices between quality and quantity of lives - there are not infinite resources. The we...

Guy - thank you for this comment. I'm very sorry about your suffering.

I think EAs should take much more seriously the views of people like you who have first-hand experience with these issues. We should not be assuming that 'below neutral utility' implies 'it's better not to be alive'.  We should be much more empirical about this, and not make strong a priori assumptions grounded in some over-simplified, over-abstracted view of utilitarianism. 

We should listen to the people, like you, who have been living with chronic conditions -- whether pain, depression, PTSD, physical handicaps, cognitive impairments, or whatever -- and try to understand what keeps people going, and why they keep going.

I don't think we actually want to incentivise positive-EV bets as such? Some amount of risk aversion ought to be baked in. Going solely by EV only makes sense if you make many repeated uncorrelated bets, which isn't really what Longtermists are doing.

2
Jason
3mo
Fair enough -- my attempted point was to acknowledge concerns that being too quick to replace leaders when a bad outcome happened might incentivize them to be suboptimally conservative when it comes to risk.

This sounds cool but... It only works with Twitter?!

1
RomanHauksson
3mo
Yeah, it's not perfect... I'd like to be able to silently block people too, in case I no longer want to hang out with them. But hey, it's open source, maybe we can improve it.
Guy Raveh
3mo

I propose, on the contrary, that we celebrate having more diverse writing styles on the forum, as one small way to facilitate more diversity in people who come into the movement and stay in it :)

I strongly agree with you: that kind of discourse takes responsibility away from the people who do the actual harm; and it seems to me like the suggested norms would do more harm than good.

Still, it seems that the community and/or leadership have a responsibility to take some collective actions to ensure the safety of women in EA spaces, given that the problem seems widespread. Do you agree? If yes, do you have any suggestions?

I wonder if I can get into this without any knowledge of statistical modelling 😅

Alternatively, what's a good way to become proficient in that? I do have a master's in applied mathematics.

5
valiantdegu
3mo
By April 14? You are brave! I'm just guessing, but I imagine it would involve a ton of coding in practice, and tinkering with variations of existing models to make them work. To start from nothing, this book I heard about on Gelman's blog comes to mind: https://dataorigami.net/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/
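(For a flavour of what that coding looks like, here's a minimal sketch of the pattern the book teaches, written against a recent version of the PyMC library the book uses. The data are simulated and the example is illustrative, not taken from the book.)

```python
# Minimal Bayesian inference sketch (assumes a recent PyMC); data are simulated.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
data = rng.binomial(1, 0.7, size=100)  # pretend observations of a biased coin

with pm.Model():
    p = pm.Beta("p", alpha=1, beta=1)        # uniform prior on the coin's bias
    pm.Bernoulli("obs", p=p, observed=data)  # likelihood, conditioned on the data
    trace = pm.sample(1000, tune=1000)       # MCMC draws from the posterior

print(float(trace.posterior["p"].mean()))    # posterior mean estimate, near 0.7
```

Most of the book is variations on that pattern: specify priors, condition on data, sample the posterior.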

I agree with your worries, and I doubt either of these options is true.

But I worry that instead of viewing it as a tradeoff, where discussion of rules is warranted, and instead of seeing relationships as a place where we need caution and norms, it's viewed from a lens of meddling versus personal freedom, and so it feels unreasonable to have any rules about what consenting adults should do.

To me, at least, the current suggestions (in top level posts) do feel more like 'meddling' than like reasonable norms. This is because they are on the one hand very broad, ignoring many details and differences - and on the o...

6
Davidmanheim
3mo
I don't think there were suggestions, really. From what the post actually said, "Organizations handle these conflicts in a range of ways... On the more specific question of what norms to have, I don't know." But in defense of rules, I think it's fine to make rules to deal with the normal cases, and then you tell people you expect them to use their judgement otherwise. Because clearly power differentials are more complex than whether someone has an established place, or is older, or is more senior in an organization. For example, when I was working for 1DaySooner, I was technically junior to a number of people who worked there,  but I still had much more community influence than they did. I've also been in plenty of situations where people significantly younger than me were in more senior roles. Rules that try to capture all the complexity would be stupid, but so would having no rules at all.

Is it as easy (or easy enough) to enroll participants in RCTs if you need their whole household, rather than just them, to consent to participate? Does it create any bias in the results?

4
JoelMcGuire
3mo
I'd assume that (1) you don't need the whole household; depending on the original sample size, it seems plausible to randomly select a subset of household members[1] (e.g., in house A you interview the recipient and son, in B the recipient and partner, etc.), and (2) they wouldn't need to consent to participate, just to be surveyed, no? If these assumptions didn't hold, I'd be more worried that this would introduce nettlesome selection issues.

[1] I recognise this isn't necessarily as simple as I make it out to be. I expect you'd need to be more careful with the timing of interviews to minimise the likelihood that certain household members are more likely to be missing (children at school, mother at the market, father in the fields, etc.).

+1 to "how is this anti-Semitic?" (I'm also Jewish)

charity is a necessarily private act

Why so?

And where votes aren't weighted by karma.

This strikes me as a very good idea.

Forwarding the link to the EA Israel staff :)

Guy Raveh
3mo

MIRI folk believe they have an unusually clear understanding of risks

"Believe" being the operative word here. I really don't think they do.

2
Davidmanheim
3mo
I don't think they would claim to have significantly better predictive models in a positive sense; they just have far stronger models of what isn't possible and cannot work for ASI, and it constrains their expectations about the long term far more. (I'm not sure I agree with, say, Eliezer about his view of the uselessness of governance, for example - but he has a very clear model, which is unusual.) I also don't think their view about timelines or takeoff speeds is really a crux - they have claimed that even if ASI is decades away, we still can't rely on current approaches to scale.
3
David Johnston
3mo
I'm not sold on how well calibrated their predictions of catastrophe are, but I think they have contributed a large number of novel & important ideas to the field.
Guy Raveh
3mo

I upvoted the post because I like that it tries to tackle power dynamics and sources of problems related to sex, which the community clearly has.

That said, I don't actually agree. I don't think policing people's relationship choices (including casual ones) is necessary - or productive - for preventing harassment etc.

Perhaps the most important point is that out of the sample of comments I've read so far, most were written by men - and I'm much more interested to hear what women in EA think here.

One could question what it even means to either 'not wish you'd never been born' or to 'not want to die' when your wellbeing is negative.

One could also claim on a hedonic view that, whatever it means to want not to die, having net-negative wellbeing is the salient point and in an ideal world you would painlessly stop existing.

Given that the lived experience of some (most?) of the people who live lives full of suffering is different from that model, this suggests that the model is just wrong.

The idea of modeling people as having a single utility...

2
Arepo
3mo
What do you mean 'the model is wrong'? You seem to be confusing functions (morality) with parameters (epistemics). It's also necessary if you want your functions to be quantitative. Maybe you don't, but then the whole edifice of EA becomes extremely hard to justify.
Guy Raveh
3mo

This post is now off the front page again.

I don't know if you meant it like that, but this comment reads to me as very sarcastic towards someone who obviously just misunderstood you :/

Edit: especially as your original comment was clear and I don't think anyone would read this thread and come out with the implied false beliefs about you.

Linch
3mo

Thanks, appreciate the feedback. I didn't mean my comment as sarcastic and have retracted the comment. I had an even less charitable comment prepared but realized that "non-native speaker misunderstood what I said" is also a pretty plausible explanation given the international nature of this forum.

I might've been overly sensitive here, because the degree of misunderstanding and the sensitive nature of the topic feels reminiscent of patterns I've observed before on other platforms. This is one of the reasons why I no longer have a public Twitter.

Sorry, I'm on your side here, but read Linch's comment again. He wrote the opposite of what you're saying he did.

titotal
3mo

Thank you. I acknowledge I misinterpreted the comment, and have retracted my previous comments on it. 

Guy Raveh
4mo

e.g. in an earlier draft of this post, before fact-checking it with her, I said that we talked about “feelings of mutual attraction”

(Followed by)

This was not her experience

6
CuriousEA
4mo
Thanks.
Guy Raveh
4mo

Doesn't seem like a big difference to me.

Guy Raveh
4mo

I think on the first report, how far this needs to go depends on the person who was harassed. It's ok not to require a public apology and it's ok not to want the accused to lose their job (although it's also ok to want the opposite!).

But after Wise became aware of more cases, he should have been removed from the board. Personally I think he should have also apologized publicly (like he now did), but I find this less important.

But after Wise became aware of more cases, he should have been removed from the board.

I agree this definitely has to happen if Julia became aware of more cases through further complaints or through an investigation unearthing other things that are at least 50% as bad as the incident described by Owen.

However, if these "other cases" were just Owen going through his memory of any similar interactions and applying what he learned from the staying-at-his-house incident and then scrupulously listing every interaction where, in retrospect, he cannot be 100% conf...

Note that further reports did come, and therefore he should've been removed before this point in time.

2
Jason
4mo
The idea was that a final-chance warning would hopefully deter future incidents. I didn't specify either way what the consequences would be for future reports of prior-to-warning events (especially if pre-EVF). That wouldn't have been necessary to resolve upfront, because one cannot deter past events.

I agree-voted, but I don't think this is a community norm. It's just a life skill.

5
Nathan Young
4mo
I think many communities have a norm against talking to journalists, or talking to them in a specific way. E.g., political communities will probably burn you if they find you talking to journalists, but political people cultivate relationships with journalists, feeding them info to advance their aims.
Guy Raveh
4mo

If you have technical understanding of current AIs, do you truly believe there are any major obstacles left? The kind of problems that AGI companies could reliably not tear down with their resources? If you do, state so in the comments

I've just completed a master's degree in ML, though not in deep learning. I'm very sure there are still major obstacles to AGI that will not be overcome in the next 5 years, nor in the next 20. Primary among them is robust handling of out-of-distribution (OOD) situations.

Look at self-driving cars as an example. It was a test case for AI compani...

1
lauren
4mo
just so we're clear - self driving cars are, in fact, one of the key factors pushing timelines down, and they've also done some pretty impressive work on non-killeveryone-proof safety which may be useful as hunch seeds for ainotkilleveryoneism. they're not the only source of interesting research, though. also, I don't think most of us who expect agi soon expect reliable agi soon. I certainly don't expect reliability to come early at all by default.
5
Kene David Nwosu
4mo
Aren't there self-driving cars on the road in a few cities now? (Cruise and maybe Zoox, if I recall correctly). 
titotal
4mo

I will publicly predict now that there will be no AGI in the next 20 years. I expect significant achievements will be made, but only in areas where large amounts of relevant training data exist or can be easily generated. AI will also struggle to catch on in areas like healthcare, where misfiring results cause large damage and lawsuits.

I will also predict that there might be a "stall" of AI progress in a few years, once all the low-hanging fruit problems are picked off, and the remaining problems like self-driving cars aren't well suited for the current advantages of AI. 

Guy Raveh
4mo

Traditionally, thought leaders in EA have been careful not to define any "core principles" besides the basic idea of "we want to find out using evidence and reason how to do as much good as possible, and to apply that knowledge in practice". While it's true that various perceptions and beliefs have crept in over the years, none of them is sacred.

In any case, as far as I understand the "scout mindset" (which I admit isn't much), it doesn't rule out recognising areas which would be better left alone (for real, practical reasons - not because the church said so).

7
Anon Rationalist
4mo
How can we “find out using evidence and reason how to do as much good as possible, and to apply that knowledge in practice" if some avenues to well-being are forbidden? The idea that no potential area is off limits is inherent in the mission. We must be open to doing whatever does the most good possible regardless of how it interacts with our pre-existing biases or taboos.
5
Jgray
4mo
To me, "better left alone" and "sacred" are two sides of the same coin.  

Thanks for the quick answer! I suspected as much but wanted to make sure.

Guy Raveh
4mo

I didn't understand (1).

  1. I don't know if his characterization is right or not; I'm not a Rationalist. But of course subjects being taboo because of the harm discussing them does is fine. Why wouldn't it be?
4
Jgray
4mo
Why would it not be fine for topics to be off limits for discussion? The first principle of EA discusses the need for a "'scout mindset' - seeking the truth, rather than to defend our current ideas." You may be aware that at one point the idea that the earth revolves around the Sun was taboo. What is taboo varies widely over time and by culture. Even the idea that having an open, honest discussion about anything could ever be construed as "causing harm" (aside from being a terrible one imo) is a very new concept, and one that would have been universally dismissed maybe even 15 years ago. At any rate, it sounds like you are fine with topics being absolutely off limits to discuss. This is a bit of a surprising admission to me considering the core principles of EA, but you are, apparently, certainly not alone in this belief.
Guy Raveh
4mo

Was the list of rapists redacted by OP or by moderators?

No, it was redacted by me after I wrote the post and before I posted it here - the risk is low, but I don't want to risk defamation - or to derail the conversation about the overarching issue, in that people might start trying to guess/post names based on my descriptions (which happened with the original post about the Time article).

All the [redacted] parts of this post were written that way by the author; mods did not edit this post. If we edit or remove information, we will always either post a comment explaining what we did or get in touch with the poster directly (which we did not do in this case).

  1. Ideas that you talk about don't stand on their own. They exist within a historical and social context. You can't look at the idea without also considering how it affects people. I imagine Matthew personally finds the idea toxic too, as do I - but that's not really the point.

  2. Perhaps Rationalism really argues that fewer ideas should be taboo, or perhaps that's just Hanania's version of it. But EA isn't synonymous with Rationalism, and you don't need to adopt one (certainly not completely) to accept the other.

4
Jgray
4mo
1. So are you saying "within our current historical and social context," yeppers, too toxic to consider for cost-benefit analysis? This is a totally acceptable answer -- it just means Hanania is right and we can end the convo here.
2. So are you saying you disagree with Hanania's conceptualization of rationalism? Are subjects being off-limits to cost-benefit analysis fine with you? Sounds like, again, the answer is yes.
2
Aptdell
4mo
See https://en.wikipedia.org/wiki/Genetic_fallacy. As a concrete example, suppose that 100 years ago, a bunch of racist politicians passed a minimum wage law in order to price a local ethnic minority out of the labor market [https://www.forbes.com/sites/carriesheffield/2014/04/29/on-the-historically-racist-motivations-behind-minimum-wage/]. The minimum wage exists within that historical and social context. However, if more recent research shows definitively that the minimum wage is now improving employment outcomes for that same ethnic minority, the historical and social context would appear to be irrelevant.

I'll only answer with a small point: I'm from a different country, and we don't have a "Democratic coalition"; nor do we have racism against Chinese people, because there are barely any Chinese people here (hence, we didn't have this pressure against making a big deal of COVID). I don't see EA through an American perspective, and mostly ignore phrases like that.

Still, generally speaking, I would side with US Democrats on many things, and am sure the mild disagreements needed wouldn't be an actual problem. Progressivism is perceived by conservatives as something that creates extreme homogeneity of thought, but that doesn't really seem to be the case to me.

1
Aptdell
4mo
You say you happen to already agree on most things; perhaps you therefore wouldn't experience much pressure. https://web.archive.org/web/20220407033207/https://www.canceledpeople.com/cancelations
Guy Raveh
4mo

I'm really sorry for the experience you've been having, and I appreciate you stepping down to take care of yourself and, by sharing it all here, sending a message to all EAs that they should take care of themselves too.

If the Executive Director of CEA can decide to prioritise his own health, so can anyone else. EA is known to be very demanding - particularly in such high-responsibility positions, but also for most other EAs - and in doing this you're leading by example and hopefully preventing other EAs from harming their health.

Guy Raveh
4mo

Almost every "bad" thing said here about "Woke EA" sounds good to me, while the "good" things EA would otherwise be able to achieve sound absolutely horrible.

Aptdell
4mo

dspeyer brought up an interesting example in another thread:

In early 2020, people were reluctant to warn about covid-19 because it could be taken as justification for anti-chinese racism.

Hanania writes:

One path [EA] can take is to be folded into the Democratic coalition. It’ll have to temper its rougher edges, which means purging individuals for magic words, knowing when not to take an argument to its logical conclusion, compromising on free speech, more peer review and fewer disagreeable autodidacts, and being unwilling to engage with other individu

...
-2
Anon Rationalist
4mo
Could you expand on this? What do you find horrible about the ability to recreate the success of Ashkenazi Jews among different populations, for example?