Currently doing local AI safety movement building in Australia and NZ, and assisting with the Alignment 201 Beta.
One difference between our perspectives is that I don't take for granted that this process will occur unless the conditions are right. The faster a movement grows, the less likely it is that lessons will be passed on to those coming in. This isn't a dismissal of these people; it's just how group dynamics work, and a consequence of more experienced people having less time to engage. I want to see EA grow fast, but above a certain speed (I'm not sure exactly what) growth will most likely degrade our culture. All that said, I'm less concerned about this than before. As terrible as the FTX collapse and recent events have been, I wouldn't be surprised if we no longer have to worry about potentially growing too fast.
I'm in favor of running the experiment.
I would suggest giving people a week or two of notice before implementing this change, so that they can get any last community posts out. Otherwise, it might frustrate people who are currently working on posts.
What did you think worked so well about these unconferences?
I would love to see this happen. Having run an unconference at an AI Safety Retreat and another in person, I believe that unconferences rate pretty highly in terms of reward per effort.
Agreed that a hits-based approach doesn't mean throwing money at everything. On the other hand, "lack of prior expertise" seems (at least in my book) to be the second strongest critique after the alleged misrepresentation.
So, while I concede it doesn't address the strongest argument against this grant, I don't see addressing the second strongest argument as beside the point.
I would love to know why it was downvoted as well. I gave it a strong upvote, as I can't see any reason why this post should be downvoted, although I might change this if I'm persuaded there's a good one. However, I would be extremely surprised if there were any such reason.
I think it's valuable to write critiques of grants that you believe to have mistakes, as I'm sure some of Open Philanthropy's grants will turn out to be mistakes in retrospect and you've raised some quite reasonable concerns.
On the other hand, I was disappointed to read the following sentence: "Henry drops out of school because he thinks he is exceptionally smarter and better equipped to solve 'our problems'". When I read sentences like that, I apply some (small) level of discounting to the other claims made, because it sounds like a less than completely objective analysis. To be clear, I think it is valid to write a critique of whether people are biting off more than they can chew, but I still think my point stands.
I also found this quote interesting: "What personal relationships or conflicts of interest are there between the two organizations?" It makes it sound as though there are personal relationships or conflicts of interest without actually claiming this is the case. There might be such conflicts, or this implication may not be intentional, but I thought it was worth noting.
Regarding this grant in particular: if you view it from the perspective of EA's original, highly evidence-based approach to philanthropy, it isn't the kind of grant that would rate highly. On the other hand, if you view it from the perspective of hits-based giving (thinking about philanthropy as a VC would), it looks like a much more reasonable investment; Mark Zuckerberg famously dropped out of college to start Facebook, for instance. Similarly, most start-ups have some degree of self-aggrandizement, and I suspect it might actually be functional in pushing them toward greater ambition. That said, if Open Philanthropy is pursuing this grant under a hits-based approach, it might be less controversial if they were to acknowledge this.
Though of course, if the grant was made on the basis of details that were misrepresented (I haven't looked into those claims), then this would undercut that reasoning.
I would suggest that new paradigms are most likely to establish themselves among the young, because they are still at the stage of life where they are figuring out their views.
Great question, I would love to have clarity on this!
Volunteering. Effective Altruism doesn't have as strong a culture of volunteering as other community groups. When we had access to massive amounts of funding, we were able to pay people instead of relying on volunteers, but I think we're going to have to address this in the new funding environment.