Guy Raveh

Software Engineer @ GE Healthcare
3220 · Haifa, Israel · Joined Jun 2020

Bio

Working in healthcare technology and doing some independent AI alignment research on the side.

MSc in applied mathematics/theoretical ML.

Interested in increasing diversity, transparency and democracy in the EA movement. Would like to know how algorithm developers can help "neartermist" causes.

Comments (774)

This isn't personal, but I downvoted because I think Metaculus forecasts about this aren't more reliable than chance, and people shouldn't defer to them.

I'm aware that by prioritising how to use limited resources, we're making decisions about people's lives. But there's a difference between saying "we want to save everyone, but can't" and saying "This group should actually not be saved, because their lives are so bad".

Curiously, I note that people are quite ready to accept that, when it comes to factory farming, those animals would lead bad lives, so it is better that they never exist.

I actually agree! But I don't think it's the same thing. I don't want to kill existing animals; I want to not intentionally create new ones for factory farms. Continued existence is better than death if you already exist. Creating someone just to suffer is a different matter. This isn't symmetric (and as a mathematician, I note that that means it can't be described by just giving some "local" numerical rating to each state of being and comparing them).
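
One way to make that last point concrete (a minimal formalisation, with notation I'm introducing here, not anything from the original discussion): suppose every state of being were assigned a single number $v$ and outcomes were ranked by comparing those numbers. The two judgements above would then require

$$v(\text{exists, suffering}) > v(\text{does not exist})$$

for a being that already exists (killing it is worse than letting it live on), and

$$v(\text{does not exist}) > v(\text{exists, suffering})$$

for a being that would be created into a factory farm (not creating it is better). No single valuation $v$ can satisfy both, so the ranking has to depend on whether the being already exists, not just on a comparison of state values.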

I have difficulty with this idea of a neutral point below which it is preferable not to exist. At the very least, this is another baked-in assumption: that the worst wellbeing imaginable is worse than non-existence.

There are two reasons why this assumption troubles me:

  1. I've been living with a chronic illness for many years, which causes constant suffering. I'm expected to keep living like that for decades to come. I can't accept the idea that there's a point of suffering beyond which I should not live.
  2. Defining such a point would allow one to make decisions about whether people should live or die. As a rule, I personally believe we should never make such decisions.

I don't think we actually want to incentivise positive-EV bets as such? Some amount of risk aversion ought to be baked in. Going solely by EV only makes sense if you make many repeated uncorrelated bets, which isn't really what Longtermists are doing.
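
To illustrate the repetition point with a minimal sketch (the bet below is a made-up example, not any actual longtermist wager): a single high-variance positive-EV bet still ends in total loss most of the time, and it is only across many uncorrelated bets that expected value becomes a reliable guide.

```python
import random

# Made-up bet: 10% chance of a 20x payoff, 90% chance of losing the whole stake.
# Expected value per unit staked is 0.1 * 20 = 2, i.e. clearly positive.
WIN_PROB = 0.10

def total_loss_rate(n_bets: int, trials: int = 100_000) -> float:
    """Estimate the probability of winning nothing across n_bets independent bets."""
    losses = sum(
        1 for _ in range(trials)
        if not any(random.random() < WIN_PROB for _ in range(n_bets))
    )
    return losses / trials

for n in (1, 10, 50):
    print(f"{n:>2} uncorrelated bets -> P(total loss) ~ {total_loss_rate(n):.3f}")
```

With one bet the chance of losing everything is about 90% despite the positive EV; across 50 independent bets it drops below 1%, which is the regime where going by EV alone starts to make sense.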

I propose, on the contrary, that we celebrate having more diverse writing styles on the forum, as one small way to facilitate more diversity in people who come into the movement and stay in it :)

I strongly agree with you: that kind of discourse takes responsibility away from the people who do the actual harm; and it seems to me like the suggested norms would do more harm than good.

Still, it seems that the community and/or leadership have a responsibility to take some collective actions to ensure the safety of women in EA spaces, given that the problem seems widespread. Do you agree? If yes, do you have any suggestions?

I wonder if I can get into this without any background in statistical modelling 😅

Alternatively, what's a good way to become proficient in it? I do have a master's in applied mathematics.

I agree with your worries, and I doubt either of these options is true.

But I worry that instead of viewing it as a tradeoff where discussion of rules is warranted, and instead of seeing relationships as a place where we need caution and norms, it's viewed through a lens of meddling versus personal freedom, so it feels unreasonable to have any rules about what consenting adults should do.

To me, at least, the current suggestions (in top-level posts) do feel more like 'meddling' than like reasonable norms. This is because they are, on the one hand, very broad, ignoring many details and differences, and on the other hand don't seem likely to solve our problems.

For example, I almost agree with you regarding relationships between uni group leaders and members - I think disclosure (to whom?) might be reasonable, but anything beyond that wouldn't be. On the other hand, I think the main factor here, which these suggestions ignore, isn't just the difference in power that comes from the hierarchy, but rather the difference in seniority. I'm much more worried about people who have an established place in the community starting relationships with newcomers, because it seems much easier to cause harm there.
