Fair enough. I admit that I skimmed the post quickly, for which I apologize, and part of this was certainly a knee-jerk reaction against even considering Leverage a serious intellectual project rather than a total failure as such, which is not entirely fair. But I think a version of this post I would significantly prefer would first explain your interest in Leverage specifically: that while they are a particularly egregious failure of the closed-research genre, it's interesting to understand exactly how they failed and how the idea of a fast, les...
Edit: I mostly retract this comment. I skimmed and didn't read the post carefully (something one should never do before leaving a negative comment) and interpreted it as "Leverage wasn't perfect, but it is worth trying to make Leverage 2.0 work or have similar projects with small changes". On rereading, I see that Jeff's emphasis is more on analyzing and quantifying the failure modes than on salvaging the idea.
That said, I just want to point out that (at least as far as I understand it), there is a significant collection of people within and around E...
attempting similar things in the future
I intended this a bit more broadly than you seem to have interpreted it; I'm trying to include exploratory research groups in general.
gain any value from it (other than as a cautionary tale)
That is essentially what this post is: looking in detail at one specific way I think things went wrong, and thinking about how to avoid this in the future.
I expect tradeoffs around how much you should prioritize external communication will continue to be a major issue for research groups!
I enjoyed this post a lot!
I'm really curious about your mention of the "schism" pattern because I both haven't seen it and I sort of believe a version of it. What were the schism posts? And why are they bad?
I don't know if what you call "schismatics" want to burn the commons of EA cooperation (which would be bad), or if they just want to stop the tendency in EA (and really, everywhere) of people pushing for everyone to adopt convergent views (the focus of "if you believe X you should also believe Y" which I see and dislike in EA, versus "I don'...
I like hypothesis generation, and I particularly like that in this post a few of the points are mutually exclusive (like numbers 7 and 10), which should happen in a hypothesis-generation post. However this list, as well as the topic, feels lazy to me, in the sense of needing much more specificity in order to generate more light than heat.
I think my main issue is the extremely vague use of "quality" here. It's ok to use vague terms when a concept is hard to define, but in this case it feels like there are more useful ways to narrow it down. For example you c...
I'm one of the people who submitted a post right before the deadline of the criticism contest. FWIW I think number 6 is off base. In my case, the deadline felt like a Schelling point. My post was long and kind of technical, and I didn't have any expectation of getting money - though having a fake deadline was very helpful and I would probably not have written it without the contest. I don't think that any of the posts that got prizes were written with an expectation of making a profit. They all looked like an investment of multiple hours by talented people...
But I agree with your meta-point that I implicitly assumed SSA together with my "assumption 5", and that SSA might not follow from the other assumptions.
Thanks! I didn't fully understand what people meant by that and how it's related to various forms of longtermism. Skimming the linked post was helpful to get a better picture.
Thanks for the links! They were interesting and I'm happy that philosophers, including ones close to EA, are trying to grapple with these questions.
I was confused by SIA, and found that I agree with Bostrom's critique of it much more than with the argument itself. The changes to the prior it proposes seem ad hoc, and I don't understand how to motivate them. Let me know if you know how to motivate them (without a posteriori arguments that they - essentially by definition - cancel the update terms in the DA). It also seems to me to quickly lead t...
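The cancellation mentioned here can be checked numerically. A minimal sketch, with made-up world sizes and a made-up birth rank: under SSA, the 1/N likelihood of observing one's birth rank favors small worlds (the Doomsday update), while SIA's population-weighted prior cancels that factor exactly, by construction.

```python
# Two hypothetical worlds and a birth rank consistent with both.
N_small, N_big = 1_000, 1_000_000
prior = {N_small: 0.5, N_big: 0.5}
rank = 500  # your birth rank; <= N_small, so compatible with both worlds

# SSA-style update: likelihood of having this rank is 1/N (uniform over ranks).
ssa_unnorm = {N: prior[N] * ((1 / N) if rank <= N else 0.0) for N in prior}
Z = sum(ssa_unnorm.values())
ssa_post = {N: p / Z for N, p in ssa_unnorm.items()}

# SIA reweights the prior by population size; the same 1/N likelihood then
# cancels that reweighting exactly.
sia_prior = {N: prior[N] * N for N in prior}
sia_unnorm = {N: sia_prior[N] * ((1 / N) if rank <= N else 0.0) for N in prior}
Z2 = sum(sia_unnorm.values())
sia_post = {N: p / Z2 for N, p in sia_unnorm.items()}

print(ssa_post[N_small])  # ≈ 0.999: SSA strongly favors the small world
print(sia_post[N_small])  # 0.5: SIA's reweighting cancels the update
```

This is exactly the "a posteriori" flavor of the motivation: the prior is chosen so that the Doomsday update disappears.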
I'm trying to optimise something like "expected positive impact on a brighter future conditional on being the person that I am with the skills available to/accessible for me".
If this is true, then I think you would be an EA. But from what you wrote it seems that you have a relatively large term in your philosophical objective function (as opposed to your revealed objective function, which for most people gets corrupted by personal stuff) on status/glory. I think the question determining your core philosophy would be which term you consider primary. For exa...
I think a lot of people miss the idea that "being an EA" is a different thing from being "EA adjacent"/"in the EA community"/"working for an EA organization", etc. I am saying this as someone who is close to the EA community and has an enormous amount of intellectual affinity with it, but does not identify as an EA. If the difference between the EA label and the EA community is already clear to you, then I apologize for beating a dead horse.
It seems from your description of yourself like you're actually not an Effective Altruist in the sense of holding a signific...
I like that you admit that your examples are cherry-picked. But I'm actually curious what a non-cherry-picked track record would show. Can people point to Yudkowsky's successes? What did he predict better than other people? What project did MIRI generate that either solved clearly interesting technical problems or got significant publicity in academic/AI circles outside of rationalism/EA? Maybe instead of a comment here this should be a short-form question on the forum.
I like that you admit that your examples are cherry-picked. But I'm actually curious what a non-cherry-picked track record would show. Can people point to Yudkowsky's successes?
While he's not single-handedly responsible, he led the movement to take AI risk seriously at a time when approximately no one was talking about it, and it has now attracted the interest of top academics. This isn't a complete track record, but it's still a very important data point. It's a bit like if he were the first person to say that we should take nuclear war seriously, and then five years later people are starting to build nuclear bombs and academics realize that nuclear war is very plausible.
I have some serious issues with the way the information here is presented which make me think that this is best shared as something other than an EA forum post. My main issues are:
"Cardinal" and "Ordinal" denote an extremely crude way, used in economics, in which different utility functions can still be compared in certain cases. They gesture at a very important issue in EA which everybody who thinks about it encounters: that different people (/different philosophies) have different ideas of the good, which correspond to different utility functions.
But the two terms come from math and are essentially only useful in theoretical arguments. In practical applications they are extremely weak to the point of being essentially meaningless ...
I like this criticism, but I think there are two essentially disjoint parts here being criticized. The first is excess legibility, i.e., the issue of having explicit metrics and optimizing to the metrics at all. The second is that a few of the measurements that determine how many resources a group gets/how quickly it grows are correlated with things that are, at best, not inherently valuable and, at worst, harmful.
The first problem seems really hard to me: the legibility/autonomy trade-off is an age-old problem that happens in politics, business...
I think this is a great post! It addresses a lot of my discomfort with the EA point of view, while retaining the value of the approach. Commenting in the spirit of this post.
I want to point out two things that I think work in Eric's favor in a more sophisticated model like the one you described.
First, I like the model that impact follows an approximately log-normal distribution. But I would draw a different conclusion from this.
It seems to me that there is some set of current projects, S (this includes the project of "expand S"). They have impact given by some variable that I agree is closer to log-normal than normal. Now one could posit two models: one idealized model in which people know (and agree on) the magnitude of impac...
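A small simulation may make the log-normal point concrete (all numbers here are hypothetical): under a log-normal impact model, a tiny fraction of projects accounts for a large share of total impact, which is not remotely true under a normal model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # hypothetical number of projects

# Impact drawn log-normally: log(impact) ~ Normal(0, 2).
impacts = rng.lognormal(mean=0.0, sigma=2.0, size=n)
top_1pct = np.sort(impacts)[-n // 100:]
share = top_1pct.sum() / impacts.sum()
print(f"share of total impact from top 1% of projects: {share:.2f}")

# Compare with a normal model (truncated at zero), where the top 1%
# holds only a little more than 1% of total impact.
normal_impacts = np.clip(rng.normal(loc=1.0, scale=0.3, size=n), 0, None)
share_normal = np.sort(normal_impacts)[-n // 100:].sum() / normal_impacts.sum()
print(f"same share under a (truncated) normal model: {share_normal:.2f}")
```

Which conclusion you draw then depends heavily on whether people can identify the tail projects in advance, which is where the two models below diverge.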
I agree with you that EA outreach to non-Western cultures is an important and probably neglected area — thank you for pointing that out!
There are lots of reasons to make EA more geographically (and otherwise) diverse, and also some things to be careful about, given that different cultures tend to have different ethical standards and discussion norms. See this article about translation of EA into Mandarin. Something to observe is that outreach is very language and culture-specific. I generally think that international outreach is best done in a granul...
There seems to be very little precedent for someone founding a new successful university, partially because the perceived success of a university is so dependent on pedigree. There is even less precedent for successful "themed" universities, and the only ones I know of that have attained mainstream success (not counting women's universities or black universities, which are identity-based rather than movement-based) are old religious institutions like Saint John's or BYU. I think a more realistic alternative would be to buy something like EdX or a competing onli...
I think the Christian Science Monitor's popularity and reputation makes Christian Scientists (note: totally different from Scientologists) significantly more respectable than they would be otherwise.
From Britannica:
...The Christian Science Monitor, American daily online newspaper that is published under the auspices of the Church of Christ, Scientist. Its original print edition was established in 1908 at the urging of Mary Baker Eddy, founder of the church, as a protest against the sensationalism of the popular press. The Monitor became famous for
This is a nice project, but as many people point out this seems a bit fuzzy for a "FAQ" question. If it's an ongoing debate within the community, it seems unlikely to have a good 2-minute answer for the public. There's probably a broader consensus around the idea that if you commit to any realistic discount scheme, you see that the future deserves a lot more consideration than it is getting in the public and the academic mainstreams, and I wonder whether this can be phrased as a more precise question. I think a good strategy for public-facing answers would be to compare climate change (where people often have a more reasonable rate of discount) to other existential risks
Disclaimer: this is an edited version of a much harsher review I wrote at first. I have no connection to the authors of the study or to their fields of expertise, but I am someone who enjoyed the paper critiqued here and in fact think it is very good and quite conservative in terms of its numbers (the current post claims the opposite). I disagree with this post and think it is wrong in an obvious and fundamental way, and therefore should not be in the decade review, in the interest of not posting wrong science. At the same time it is well-written and exhibits a good ...
I think that it's not always possible to check that a project is the "best use, or at least a decent use" of its resources. The issue is that these kinds of checks are really only good on the margin. If someone is doing something that jumps to a totally different part of the Pareto manifold (like building a colony on Mars or harnessing nuclear fission for the first time), conventional cost-benefit analyses aren't that great. For example, a standard after-the-fact justification of the original US space program is that it accelerated progress in materials science and ...
Some possible bugs:
* When I click on the "listen online" option it seems broken (using this on a Mac)
* When I click on the "AGI safety fundamentals" courses as podcasts, they take me to the "EA forum curated and popular" podcast. Not sure if this is intentional, or if they're meant to point to a podcast containing just the course