Hey! I'm Edo, married + 2 cats. I live in Tel Aviv, Israel, and I feel weird writing about myself, so I'll go meta.
I'm a mathematician, I love solving problems and helping people. My LinkedIn profile has some more stuff.
I'm a forum moderator, which mostly means that I care about this forum and about you! So let me know if there's anything I can do to help.
I'm currently working full-time at EA Israel, doing independent research and project management. At the moment I'm mostly evaluating the impact of for-profit tech companies, but I have many projects and this changes rapidly.
Downvoted in large part because of what looks like the unfiltered use of LLMs. I really appreciate satirical content, and honestly think it is a good way to criticize or talk about unconventional ideas. The basic idea in this post is simple and punchy, and would have been much better presented in a much more concise essay.
Re the first point, I agree that the context should be related to a person with an EA philosophy.
Re the second point, I think that discussions about the EA portfolio are often interpreted as zero-sum or tribal, and may cause more division in the movement.
I agree that most of the effects of such a debate are likely about shifting around our portfolio of efforts. However, there are other possible effects (recruiting/onboarding/promoting/aiding existing efforts, or increasing the amount of total resources by getting more readers involved). Also, a shift in the portfolio can happen as a result of object-level discussion, and it is not clear to me which way is better.
I guess my main point is that I'd like people in the community to think less about what the community should think. Err.. oops..
I think this question would be better if it were framed not in terms of the EA community.
For example, I like Dylan's reformulation attempt because it focuses on object-level differences. Another option would be to ask about the next $100K invested in AI safety.
Some thoughts:
This is a really cool idea, and the level of execution on the testing and reasoning is spot on 👌 In particular, I think it was a great choice to start experimenting with "plaintext" shared state.
This kind of research can also give some clarity on multi-agent AGI risk scenarios (e.g., Distributional AGI: https://www.alphaxiv.org/abs/2512.16856), in the sense of coordination between supposedly stateless agents.
One use case for the forum is as a curated database of relevant writings, allowing for discussion and discovery, and perhaps useful for AI models. Perhaps it would be good to spam the forum with much more cross-posted content from the blogs of relevant people and organizations.
If this is done for old posts, they shouldn't appear on the frontpage, and automatic cross-posting should be possible and simple with current tech.
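As a minimal sketch of what "simple with current tech" could look like: poll a blog's RSS feed and collect the items that haven't been cross-posted yet. Everything here is illustrative (the sample feed and the `fetch_new_items` helper are hypothetical), and the actual step of submitting posts to the forum's API is omitted since that API isn't specified.

```python
import xml.etree.ElementTree as ET

def fetch_new_items(feed_xml: str, seen_links: set) -> list:
    """Parse an RSS 2.0 feed and return items whose links we haven't seen yet."""
    root = ET.fromstring(feed_xml)
    items = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        if link and link not in seen_links:
            items.append({"title": title, "link": link})
    return items

# Tiny inline feed for illustration; real use would fetch the XML over HTTP.
SAMPLE_FEED = """<rss version="2.0"><channel>
  <item><title>Post A</title><link>https://example.org/a</link></item>
  <item><title>Post B</title><link>https://example.org/b</link></item>
</channel></rss>"""

new_items = fetch_new_items(SAMPLE_FEED, seen_links={"https://example.org/a"})
print(new_items)  # only Post B, since Post A was already cross-posted
```

A scheduled job could run this per blog, keep the set of already-posted links in a small database, and tag anything older than some cutoff so it skips the frontpage.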
See also https://forum.effectivealtruism.org/posts/CMfrQBrSwpujaqF8Z/how-much-do-you-believe-your-results