Hey! I'm Edo, married + 2 cats, I live in Tel-Aviv, Israel, and I feel weird writing about myself so I go meta.
I'm a mathematician, I love solving problems and helping people. My LinkedIn profile has some more stuff.
I'm a forum moderator, which mostly means that I care about this forum and about you! So let me know if there's anything I can do to help.
I'm currently working full-time at EA Israel, doing independent research and project management. Right now I'm mostly working on evaluating the impact of for-profit tech companies, but I have many projects and this changes rapidly.
Maybe controversial is too strong a word. I think EAs tend to think on the margin and are also less concerned with inequality or with non-existential risks from advanced tech.
I find Jaron Lanier's thoughts on the impacts of tech development interesting and probably controversial in EA circles.
Daron Acemoglu is a well-known growth economist; he recently published a new book and recorded an interesting podcast episode with Rhys Lindmark.
I also remember following Vinay Gupta a while back, and that he had interesting opinions on civilizational resilience (say, here). He was also involved with Ethereum and is still doing blockchain work, though I'm not sure exactly what.
Generally, I'm interested in more discussion with people from communities or schools of thought that could be EA-adjacent but currently don't have much (visible) overlap with EA - say, leftist economists, blockchain, metascience, or virtue ethicists.
Thank you for sharing this!
My answer to the survey's question "Given the growing salience of AI safety, how would you like EA to evolve?":
I think EA is in a great place to influence the direction of AI progress, and many orgs and people should be involved with this project. However, many people on this forum seem to think that influencing this technology is the most important outcome of the EA community, and I think this is mistaken and misleading.
The alternative would be to continue supporting initiatives in this space, including AI safety-specific subcommunities, while supporting a thriving EA community - one measured by the quality of its thought and decision making, and by the number of people actively dedicating a sizable proportion of their resources toward doing the most good they can (in contrast with measuring communities and individuals by their deference to top-down cause prioritization).
I'm reasonably sure that the current wave of orgs and people working on AI safety is strong enough to maintain itself and grow well, and I'm worried about over-optimizing for short timelines.
Some ideas held by many EAs - whether right or wrong, and implied by EA philosophy or not - encourage risky behaviour. We could call these ideas risky beneficentrism (RB), and they include:
i. High risk appetite.
ii. Scope sensitivity.
iii. Unilateralism
iv. Permission to violate societal norms. Violating or reshaping an inherited morality or other “received wisdom” for the greater good.
v. Other naive consequentialism. Disregarding other second-order effects.
Is the game something like "EA online discussion norms", and is the strategy you're proposing something like "make your writings independent of the EA Forum, and allow for competing discussion spaces on your posts"?
I know the author personally, and I want to signal that Michelle is genuinely interested in doing good, that the proposed technology does seem highly promising, and that an informed take on this question would help her make a better decision.
Thanks to anyone experienced with the subject matter who is willing to help here!
Sure! Looking into GiveWell's main CEA and their analysis of AMF (link), the location granularity is at the country level. However, AMF prioritizes its distribution locations based on malaria prevalence rates and operational partners, so the results might be much better than the country average.
Regarding the first question, I just briefly looked again at the report and indeed I don't see that explicitly taken into account. I only vaguely remember thinking about that, and I'm not sure how that was resolved.
I think the main causal pathway they use in their report is QALY gains from people quitting smoking, so that sounds like it wouldn't change drastically if the intervention were delayed by, say, a couple of years. That said, I agree this is a good question to look into further, and I expect it could indeed reduce the cost-effectiveness by 3x-10x. Great catch, David!
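To make the rough magnitude concrete, here's a minimal sketch of one way a 3x-10x reduction could arise, assuming the relevant adjustment is counterfactual acceleration: if the same intervention would have happened anyway a couple of years later, only the extra years of QALY gains are attributable to funding it now. All numbers below are illustrative assumptions, not figures from the report.

```python
# Illustrative sketch only: assumed numbers, not taken from the report.
full_stream_years = 20     # assumed duration of QALY gains from people quitting
acceleration_years = 2     # assumed counterfactual delay (intervention happens anyway later)
annual_qalys = 1.0         # arbitrary units; cancels out in the ratio

# Benefit if the intervention is fully counterfactual (none of it happens otherwise)
full_benefit = full_stream_years * annual_qalys

# Benefit if funding it now only moves the same outcome forward by a couple of years
accelerated_benefit = acceleration_years * annual_qalys

# Cost stays fixed, so cost-effectiveness shrinks by this factor
print(full_benefit / accelerated_benefit)  # 10.0 under these assumptions
```

With a shorter assumed QALY stream, or a longer counterfactual delay, the factor would land closer to the 3x end of that range.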
(I didn't know this term)