Guy Raveh

I'm doing my master's in applied mathematics; my research is on theoretical machine learning. I'm very interested in AI from theoretical and interdisciplinary points of view: psychology, ethics, safety.

Comments

The Future Might Not Be So Great.

Who do you think the relevant stakeholders are?

It seems to me that "having a safe community" is something that's relevant to the entire community.

I don't think long, toxic argument threads are necessary, as a decision seems to have been made three years ago. The only question is what's changed since then. So I'm hoping we see some comment from CEA staff on the matter.

Fanatical EAs should support very weird projects

I noticed this on a comment of my own, came back to look at the other comments here, and can say I'm already confused. But I already said I was against the idea, and maybe it's just a matter of getting used to change.

So, as more practical feedback: the meaning of the different karma types could be explained better in the hover texts. Currently they're presented as "agreement" vs. "overall" karma, and it's not clear what the latter means. Meanwhile, the "agreement" hover text basically tries to explain both:

How much do you agree with this, separate from whether you think it's a good comment?

I would only put "How much do you agree with this?" there, and put "How good is this comment?" (or maybe some clearer but short explanation) in the regular karma hover text.

The Future Might Not Be So Great.

Thanks for writing this.

As everyone here knows, there has been an influx of people into EA and the forum in the last couple of years, and it seems probable that most of the people here (including me) wouldn't have known about this if not for this reminder.

Fanatical EAs should support very weird projects

I definitely think that if you were 100% confident in the simple MWI view, that should really dominate your altruistic concern.

TBH I don't think this makes sense. Every decision you make in this scenario, including the one to promote or stop branching, would itself be the result of some quantum process (because everything is a quantum process), so the universe where you decided to do it would be complemented by one where you didn't. None of your decisions would have any effect on the amount of suffering etc., if it's taken as a sum over universes.

How To Prevent EA From Ever Turning Into a Cult

Do you think it would be better not to suggest any action, or to filter these suggestions without any input from other people? To me it reads like "here are some ideas for prevention" rather than "we must do all of these immediately". And at least some of them look obviously right, like encouraging non-EA friendships and discouraging intimate relationships between senior and junior employees of EA orgs.

How To Prevent EA From Ever Turning Into a Cult

Could you elaborate more on (some of) these considerations and why you think the cultishness risk is being overestimated relative to them?

My intuition is that it's being generally underestimated, as at least two cults have already sprung from EA-adjacent circles (one, two). While I don't think the ideas behind them are currently prevalent in EA, I do think the intellectual environments that brought them forth are, to some meaningful extent.

frugality could rather be seen as evidence of cultishness

I can't speak for OP, but it looks to me more like a poor choice of words, as OP explicitly wrote:

the "EA standard" needs to be comfortable enough to be considered by individuals of the general public. But grants should not be used to pay for excesses for a selected few (ie. fancy hotels, resorts, or domestic work).

(I'm not sure what's meant by that last one.) OP also suggested paying competitive salaries for EA jobs (which is what's already happening, at least in the roles relevant to me).

This is not to say I think all of these ideas are necessarily good. On the contrary: because I think this consideration is important, I'd value dialogue on how to succeed in preventing cultishness, and I don't expect the first few ideas from anyone to all be right.

How To Prevent EA From Ever Turning Into a Cult

I swear I had the idea for this kind of post yesterday too. But this is much better than what I could've written, especially thanks to the concrete suggestions.

Germans' Opinions on Translations of "longtermism": Survey Results

I think I mixed some things up semantically (I can't find anything good to back up my understanding of it), so I retracted my comment. I'm not a native English speaker either 😅

But I'd like to see more of, e.g., psychologists, anthropologists, social workers, and artists of various kinds in EA.

Germans' Opinions on Translations of "longtermism": Survey Results

Cool work!

Disclaimer: I'm not really a German speaker, though I'm learning, and I pride myself on having understood most of the description of longtermism in the survey. Also, I'm really unsure about everything I wrote.

Not that I have anything against "Zukunftsschutz", but is it possible that the phrasing of the survey and the choice of framing skewed the results?

In the question itself:

"X" ist die Einstellung, dass der Schutz künftiger Generationen stärker priorisiert werden soll.

And in the explanation:

"Eine der größten Prioritäten unserer Generation sollte sein, die vielen Generationen in der Zukunft zu schützen und dafür zu sorgen, dass die langfristige Zukunft gut verläuft."

Andere wichtige Themen sind Gefahren... die die Zukunft der Menschheit bedrohen könnten.

[Roughly: Other important topics are dangers... that could threaten the future of humanity.]

There are references to the other possible translations too (e.g. "die langfristige Zukunft" in the quote above), but they might be less emphasized. And, for example, there's no mention of more philosophical-sounding ideas like "one's position in time should not affect one's moral worth".

I don't object to this framing, but maybe the answers will always depend on the framing (just as the choice of the original English term probably did), so you should only use this as weak evidence in favour of any particular translation? On the other hand, maybe the choice of word won't be that impactful anyway, especially since the differences don't look that big compared to the standard deviations?
