Cofounded EA Israel, background in math & CS, worked in prioritization research, and moderated on the forum.
I'm currently earning to give at a tech company, donating everything I don't need to live. I prioritize animal welfare and give through the Animal Welfare Fund. I'm also a board member at EA Israel and at ALTER.
I have struggled a lot with burnout and depression, and I'm still working to shape my life positively.
I'm really looking forward to the debate on this topic!
Some thoughts:
Downvoted, in large part because of what looks like unfiltered use of LLMs. I really appreciate satirical content and honestly think it's a good way to criticize or discuss unconventional ideas. But the basic idea in this post is simple and punchy, and it would have been much better presented as a far more concise essay.
Downvoted. I felt the post was making a bunch of assertions in a way aimed at persuading rather than explaining. That said, I'd really be interested in reading more from you on this topic.
I think there is a lot to learn about the nature of consciousness and suffering from Buddhist philosophy and practice, and it's worthwhile to investigate how to apply those insights to AI risk.
In particular, there are some potentially interesting points here that I'd love to see expanded and explained in a way that would make me comfortable engaging with the ideas.