EdoArad🔸

Earning to Give @ Tech
5198 karma · Joined · Working (6-15 years) · Tel Aviv-Yafo, Israel

Bio


Cofounded EA Israel, background in math & CS, worked in prioritization research, and moderated on the forum. 

I'm currently earning to give at a tech company, giving everything I don't need to live. I'm prioritizing animal welfare, and I'm giving through the Animal Welfare Fund. I'm also a board member at EA Israel and at ALTER.

I have struggled a lot with burnout and depression, and I'm still working to shape my life positively.

Comments
872

Topic contributions
32

Downvoted. I felt that the post was making a bunch of assertions in a way that was aimed at persuading rather than explaining. That said, I would really be interested in reading more from you about this topic. 

I think there is a lot to learn about the nature of consciousness and suffering from Buddhist philosophy and practice, and I think it is worthwhile to investigate how to apply it to AI risk.

In particular, there are some possibly interesting points here that I'd love to see expanded and explained in a way that would also make me comfortable engaging with the ideas.

Happy to help, feel free to DM me if still relevant :) 

This is really important and interesting to read, thank you!

Would the general point apply to insect farming? The recommendations seem particularly relevant there.

I'm really looking forward to the debate on this topic! 

Some thoughts:

  1. I like that debate topics aren't overly operationalized. Allowing people to take slightly different interpretations means they can focus on the variations that seem most important to them. This can come at the expense of understanding each other crisply and of interpreting the (quantified) agreement scale.
    1. I'm not sure what the main takeaways from previous debates were, but I felt that I cared more about hearing interesting new takes and people's reactions to them than about assessing the overall community opinion.
  2. "By default" - One possible ambiguity here is whether this means with >50% probability or with >99.9% probability.
  3. "The world where" -> "The worlds where". Also, perhaps this notion of conceiving of possible futures as possible worlds is a bit too heavy on EA/rationalist-lingo.
  4. "AI goes well for humans" - I broadly like this. I would be interested in people's opinions under both neartermist and longtermist worldviews, and under maxipok or flourishing futures.
  5. "Sentient beings" - Here I think the discussion should be confined to nonhuman animals, because the other case seemed to be handled in the previous AI welfare debate.
  6. I don't think that the statement of the debate should be about "what we should do" but rather about the worldview directly. It's a bit hard for me to pinpoint exactly why I think so and I may regret this.
    1. I think that an operationalization too close to people's actual decisions may cause more people to defend their existing views or to take a stance based on what's more salient. I'm not sure why exactly, but framings like "Without extra animal-focused work, even aligned superintelligence would be bad for non-human animals" feel like they would generate more ideologically-oriented responses.
    2. This makes the question more complex with more moving parts.
  7. I think that the framing of "AGI which doesn't cause human extinction or disempowerment will value animal welfare" is quite good. Perhaps this should include CAIS or multipolar scenarios.

Quickly, because I want to get back to reading: the first link to Anna's post is broken. It should probably be https://www.lesswrong.com/posts/xtuk9wkuSP6H7CcE2/ayn-rand-s-model-of-living-money-and-an-upside-of-burnout

Becoming a member means you join our bi-annual rounds and intend to donate ~$100,000 or more (soft commitment, we consider applications from funders who grant over $50,000 a year) to meta charities, either to charities you found through the Meta Charity Funding Circle or outside of it.

See also https://forum.effectivealtruism.org/posts/CMfrQBrSwpujaqF8Z/how-much-do-you-believe-your-results

This is beautiful, thank you! It has definitely planted some seeds in my mind. Perhaps the most interesting points to me have been the prevalence of cockfighting and the dominance of ethics centered around virtues.

I love this! It is rare to see new intervention ideas, and I really appreciate such write-ups even if you end up with "this cause area doesn't need much attention" - it shines a light on an interesting problem and can spark possibly useful follow-on work.

Downvoted in large part because of what looks like the unfiltered use of LLMs. I really appreciate satiric content, and honestly think it is a good way to criticize or discuss unconventional ideas. But the basic idea in this post is simple and punchy, and it would have been much better presented as a far more concise essay.
