My name is Edo, and I'm one of the co-organisers of EA Israel. I'm also helping out with moderation on the Forum; feel free to reach out if I can help with anything.

I studied mathematics, worked as a mathematical researcher in the IDF, and held training and leadership roles there. After that I started a PhD in CS, where I helped start a research center with the goal of advancing biological research using general mathematical abstractions. After about six months I decided to leave the center and the PhD program.

Currently, I'm mostly thinking about improving the scientific ecosystem and particularly how one can prioritize better within basic science.

Generally, I'm very excited about improving prioritisation within EA, and about how we conduct research on it and on EA causes in general. I'm also very interested in better coordination and initiative support within the EA community. Well, I'm pretty excited about the EA community and basically everything else that has to do with doing the most good.

My virtue-ethics brain parts really appreciate honesty and openness, curiosity and self-improvement, caring and supportiveness, productivity and goal-orientedness, cooperation as the default option, and fixing broken systems.


Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum?

Exactly: a private hypothes.is group is one where you only see annotations made by members of the group, and only annotations that were tagged for that group.

I definitely agree that something like that should be hooked up to the Forum, and that doing so is a bit of a technical challenge.

I'm not sure engagement is the right metric to use here, though. Nor am I sure that it isn't. I'm also not sure whether that's an important point, so I'll just keep it in the back of my head and maybe something will come up in the future.

Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum?

I think it was Aaron who raised a related suggestion: to add points for discussion of a post in the comment section.

Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum?

Daydreaming a little bit. 

Imagine that there were an EA Browser that acts just like your favorite browser, but also lets you upvote/downvote, tag, and write comments on any web page.

Imagine all the people in the EA community using that browser as they go through their day, casually upvoting some web pages or writing some comments.

(Imagine there's no spamming.. 🎶)

How would you design a forum feed based on those web annotations? Probably set some default high bar (or a quota? or perhaps random sampling??) on what goes to the main feed, with an option to view all web annotations.

This could be implemented rather easily by building a Chrome extension (or, to start, by using a private EA group on https://hypothes.is/ and feeding that into the Forum).

I imagine this would surely be useful if people actually used it, in that I don't see major drawbacks. That makes me think this is a solvable design problem.
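A minimal sketch of the feed-selection rule above (high bar plus a small random sample, so low-visibility pages still occasionally surface). All names and the annotation structure are hypothetical, just for illustration:

```python
import random

def build_feed(annotations, score_threshold=10, sample_rate=0.05, seed=0):
    """Select annotations for the main feed: everything at or above a
    vote-score threshold, plus a small random sample of the rest."""
    rng = random.Random(seed)  # fixed seed only for reproducibility here
    high_bar = [a for a in annotations if a["score"] >= score_threshold]
    rest = [a for a in annotations if a["score"] < score_threshold]
    sampled = [a for a in rest if rng.random() < sample_rate]
    return high_bar + sampled

# Toy data: annotations as dicts with a page URL and an aggregate vote score.
annotations = [
    {"url": "https://example.org/a", "score": 12},
    {"url": "https://example.org/b", "score": 3},
    {"url": "https://example.org/c", "score": 25},
]
feed = build_feed(annotations)
```

The "view all annotations" option would simply skip the filtering step; the threshold and sampling rate would be knobs for the forum's moderation team to tune.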

How do you approach hard problems?

Another strategy, which approaches the problem from the side, is what Tiago Forte of Building a Second Brain calls the Slow Burn approach (9-minute audio explanation). It's basically the approach of letting hard, motivating problems flow along with you over a long period of time, collecting insights, ideas, resources, and different viewpoints along the way.

Richard Feynman supposedly gave the advice of always keeping 12 favorite questions in mind, and seeing whether anything new that comes up sheds light on any of them.

You have to keep a dozen of your favorite problems constantly present in your mind, although by and large they will lay in a dormant state. Every time you hear or read a new trick or a new result, test it against each of your twelve problems to see whether it helps. Every once in a while there will be a hit, and people will say, “How did he do it? He must be a genius!”

In How to Take Smart Notes, the author discusses the Zettelkasten method, based on the research method of the prolific social scientist Niklas Luhmann. Roughly: have a trusted system for storing and reviewing notes, engage with whatever you find interesting, and keep everything in the system. Once in a while, some ideas will develop into something coherent that could be published.

[This book] describes how [Luhmann] implemented [the tools of note-taking] into his workflow so he could honestly say: “I never force myself to do anything I don’t feel like. Whenever I am stuck, I do something else.” A good structure allows you to do that, to move seamlessly from one task to another – without threatening the whole arrangement or losing sight of the bigger picture.

Buck's Shortform

I tried searching the literature a bit, as I'm sure there are studies on the relation between rationality and altruistic behavior. The most relevant paper I found (from about 20 minutes of searching and reading) is The cognitive basis of social behavior (2015). It seems to agree with your hypothesis. From the abstract:

Applying a dual-process framework to the study of social preferences, we show in two studies that individuals with a more reflective/deliberative cognitive style, as measured by scores on the Cognitive Reflection Test (CRT), are more likely to make choices consistent with “mild” altruism in simple non-strategic decisions. Such choices increase social welfare by increasing the other person’s payoff at very low or no cost for the individual. The choices of less reflective individuals (i.e. those who rely more heavily on intuition), on the other hand, are more likely to be associated with either egalitarian or spiteful motives. We also identify a negative link between reflection and choices characterized by “strong” altruism, but this result holds only in Study 2. Moreover, we provide evidence that the relationship between social preferences and CRT scores is not driven by general intelligence. We discuss how our results can reconcile some previous conflicting findings on the cognitive basis of social behavior.

Also relevant is This Review (2016) by Rand:

Does cooperating require the inhibition of selfish urges? Or does “rational” self-interest constrain cooperative impulses? I investigated the role of intuition and deliberation in cooperation by meta-analyzing 67 studies in which cognitive-processing manipulations were applied to economic cooperation games. My meta-analysis was guided by the social heuristics hypothesis, which proposes that intuition favors behavior that typically maximizes payoffs, whereas deliberation favors behavior that maximizes one’s payoff in the current situation. Therefore, this theory predicts that deliberation will undermine pure cooperation (i.e., cooperation in settings where there are few future consequences for one’s actions, such that cooperating is not in one’s self-interest) but not strategic cooperation (i.e., cooperation in settings where cooperating can maximize one’s payoff). As predicted, the meta-analysis revealed 17.3% more pure cooperation when intuition was promoted over deliberation, but no significant difference in strategic cooperation between more intuitive and more deliberative conditions.

And This Paper (2016) on Belief in Altruism and Rationality claims that 

However, contra our predictions, cognitive reflection was not significantly negatively correlated with belief in altruism (r(285) = .04, p = .52, 95% CI [-.08, .15]).

Here, belief in altruism is a measure of how much people believe that others act out of care or compassion for others, as opposed to self-interest.

Note: I think this might be a delicate subject in EA, and it might be useful to be more careful about alienating people. I definitely agree that better epistemics are very important to the EA community and to doing good generally, and that the ties to the rationalist community probably played (and still play) a very important role; in fact, I think it is sometimes useful to think of EA as rationality applied to altruism. However, many amazing altruistic people have a totally different view of what good epistemics would look like (never mind the question of whether they are right), and many people already involved in the EA community seem to have a negative view of (at least some aspects of) the rationality community. Both of these call for a kinder and more appreciative conversation.

In this shortform post, the most obvious point where I think this becomes a problem is the example:

For example, I find many animal rights activists very annoying, and if I didn’t feel tied to them by virtue of our shared interest in the welfare of animals, I’d be tempted to sneer at them. 

This is supposed to be an example of a case where people are not behaving rationally, since behaving rationally would stop them from having fun. You could instead have used abstract or personal examples in which people, in their day-to-day work, don't take the time to think something through, seek negative feedback, or update their actions when they (notice that they) update their beliefs.

Quantifying the Value of Evaluations

In the summary you wrote 

I have greatly upped my estimate of how difficult it is to create really useful assessments

Do you mean useful assessments of evaluations, or useful evaluations?

edoarad's Shortform

Fund projects, people, or organizations?
A thought that I keep coming back to.

An analysis from Nintil of funding people over projects in academia.

Progress Open Thread: January 2021

Caution - negative outlook!

The IEA's annual report on access to electricity highlights that the pandemic had a huge negative impact on progress, and it raises concerns about the potential for recovery. Furthermore, if the relevant SDG policies continue as they are, it projects that only about 62% of people in sub-Saharan Africa will have access to electricity (today we are at 48%). It suggests that a further $35 billion per year is needed to reach worldwide access to electricity by 2030.
