It's been a year — have you explored this? I'm somewhat bullish on testing the idea of an EA bounty platform, and am curious what others would think.
Relying on more billionaires (from various uncorrelated fields) might still be the more cost-effective strategy. But the community isn't only doing that anyway: we encourage everyone to give, even if just a little.
Same. I suggest "AI Safety Ideas: a collaborative AI safety research platform"
Thanks! What is your take on offering shorter fellowships (e.g. 4 weeks instead of 8), as suggested here?
CEEALAR (aka the EA Hotel) is hiring for a full-time Operations Manager. Location: Blackpool, UK. To start mid-late July. Deadline to apply is July 1st (end of the day in your own timezone).

We are looking for someone dynamic to immediately take on the challenge of maintaining our operations.

Responsibilities:
More info here.
Clean meat could also have a huge impact on CO2 levels. According to Vinod Khosla (source):
Agreed. And even in the scenario where we could continue to find more valuable patterns of matter billions of years into the future, I don't think that efforts to accelerate things now would have any significant impact on the value we create then, because our future value creation seems very likely to depend mostly on major events that won't have much to do with the current state of things.
Let’s consider the launch of Von Neumann probes throughout the universe as one such possible major event: even if we could increase our current growth rate by 1% with a better allocation of resources, that doesn’t mean the future launch of these probes would be 1% more efficient. Rather, the outcomes of this event seem largely uncorrelated with our growth rate prior to that moment. At best, accelerating our growth would hasten the launch by a tiny bit, but that is very different from saying “increasing our growth by 1% now will increase our whole future utility by 1%”.
I vote based on how much I think something contributes to the discussion, aiming for a roughly equal split between upvotes and strong-upvotes (which I expect to be somewhat of a Schelling point). Same logic for downvotes vs strong-downvotes, though I obviously don't split 50/50 between upvotes and downvotes.
Yes, I agree 100% that merely trying to create more EA jobs won't be enough — hence my 4th point. What I am suggesting is that we should both increase our internal capacity *and* change our message, by making it clear that the work done at EA-branded orgs is only the tip of the iceberg when it comes to having an impact.
Thanks for this, we can clearly do better. Some ideas: