EdoArad's Comments

edoarad's Shortform

This 2015 post by Rob Wiblin (one of the top-voted posts that year) is a nice example of how the community is actively cohesive.

edoarad's Shortform

[a brief note on altruistic coordination in EA]

  1. EA as a community has a distribution over people of values and worldviews (which are themselves uncertain and can be modeled, Bayesian-style, as distributions).
  2. Assuming everyone has already updated their values and worldview by virtue of epistemic modesty, each member of the community should want all the resources of the community to go a certain way.
    • That can include desires about the EA resource allocation mechanism.
  3. The differences between individuals undoubtedly cause friction and resentment.
  4. It seems like the EA community is incredible in its cooperative norms and low levels of unneeded politics.
    • There are concerns about how steady this state is.
    • Many thanks to anyone working hard to keep this so!

There's bound to be massive room for improvement: a clear goal of what the best outcome would be given a distribution like the one above, a way of measuring where we're at, an analysis of where we are heading under the status quo (an implicit parliamentary model, perhaps?), and suggestions for better mechanisms and norms that result from the analysis.

Request for feedback on my career plan for impact (560 words)

This is interesting. Do you have a specific example in mind where this could be applied to an EA cause?

My personal cruxes for working on AI safety

This reminds me of the discussion around the Hinge of History Hypothesis (and the subsequent discussion between Rob Wiblin and Will MacAskill).

I'm not sure that I understand the first point. What sort of prior would be supported by this view?

The second point I definitely agree with, and the general point of being extra careful about how to use priors :)

My personal cruxes for working on AI safety

Jaime Sevilla wrote a long (albeit preliminary) and interesting report on the topic.

How much will local/university groups benefit from targeted EA content creation?

EAHub has a large and growing list of resources collected and written for local groups.

How to estimate the EV of general intellectual progress

I think so. While the main value of research lies in its value of information, the problem here seems to be about how to go about estimating the impact, not so much about the modeling.

On Demopisty

Thanks. I'd be very excited to see a full post considering this set of ideas as a cause area proposal, possibly using the ITN framework, if you or anyone else is up to it.

I think that the discourse in EA is too thin on these topics, and that perhaps some posts exploring the basics while considering the effects of marginal contributions might be the way to determine whether we should consider them worthwhile. I think this makes this post somewhat premature, although I appreciate the suggested terminology and the succinct but informative writing.
