My name is Edo, and I'm one of the co-organisers of EA Israel. I also help with moderation on the Forum; feel free to reach out if I can help with anything.

I studied mathematics, worked as a mathematical researcher in the IDF, and held training and leadership roles there. After that I started a PhD in CS, where I helped start a research center aimed at advancing biological research using general mathematical abstractions. After about six months, I decided to leave the center and the PhD program.

Currently, I'm mostly thinking about improving the scientific ecosystem, and particularly about how one can prioritise better within basic science.

Generally, I'm very excited about improving prioritisation within EA, and about how we conduct research on it and on EA causes in general. I'm also very interested in better coordination and initiative support within the EA community. Really, I'm pretty excited about the EA community and basically everything else that has to do with doing the most good.

The virtue-ethics parts of my brain really appreciate honesty and openness, curiosity and self-improvement, caring and supportiveness, productivity and goal-orientedness, cooperation as the default option, and fixing broken systems.


Why scientific research is less effective in producing value than it could be: a mapping

Great points! Re peer review, I think your argument makes sense, but I feel like most of the impact on quality from better peer review would come from raising standards for the field as a whole, rather than from the direct impact on the papers that didn't pass peer review. I'd love to see a much clearer analysis of the whole situation :)

MichaelA's Shortform

Hey schethik, did you make progress with this?

A proposal for a small inducement prize platform

Somewhat related, and potentially relevant if someone sets this up:

  1. The Nonlinear Fund wrote up why they use RFPs (Requests For Proposals). 
  2. Certificates of Impact.
  3. There is an upcoming project platform for EAs, designed to coordinate projects with volunteers. A forum post should be out soon, but meanwhile you can see a prototype here.

What is meta Effective Altruism?

Is it common to consider GPR when talking about meta-EA?

ESG investing needs thoughtful trade-offs

I'm very excited to see your blog on this topic! The social impact / ESG investing communities seem like strong movements that could be very impactful. I think the points you raise here and in your previous post could be very influential if implemented.

Do you want your blog posts shared outside of the EA community already, or would you prefer to wait with that?

saulius's Shortform


Other analogies might be human rights and carbon emissions, as used in politics. Say Party A cares about reducing emissions; then the opposing Party B has an incentive to appear as though it doesn't care about them at all, and even to propose actions that would increase emissions, so that it can trade "not doing that" for some concession from Party A. I'm sure we could find many real-world examples of this.

Similarly, some (totalitarian?) regimes might have an incentive to make major parts of the population politically perceived as unworthy and leave them in very poor living conditions, so that other countries that care about that population would be open to trades in which helping those people counts as a benefit for those other countries.

Recommender systems:

misinformation proper:

https://forum.effectivealtruism.org/posts/ixLPyMNCLH2Jg7aBc/ea-philly-s-infodemics-event-part-1-jeremy-blackburn and https://forum.effectivealtruism.org/posts/qsiFQyihEuQEeNsfJ/ea-philly-s-infodemics-event-part-2-aviv-ovadya

sort of related:

Ah, I was thinking of Aligning Recommender Systems. I will find more relevant posts tomorrow.


How about something like misinformation (Cause Area)? There are several posts on the topic, and it appears in 80K's list of potential cause areas.

This would be a subset of "improving collective epistemics" more broadly, but I think it's a widely enough discussed topic that it makes sense to have it as a tag by itself.

MichaelA's Shortform

Ah, right! There still might be a need outside of longtermist research, but I definitely agree that it'd be very useful to reach out to them to learn more.

For further context, for people who might go ahead with this: BERI is a nonprofit that supports researchers working on existential risk. I guess Sawyer is the person to reach out to.