Some musings:
What counts as an idea? Is an entire book an idea, or does it have to be tweet-length? What about a whole EA Forum post, is that one idea or a collection of ideas? E = mc^2 is an idea... but it takes a lot of background knowledge to understand it. One should probably understand Newton's Laws first. So which idea is more important, relativity or the precursor? What about the idea of numbers?
Context seems really important. An idea without the resources to execute it is basically useless. There could be ecosystems where there are ideators and executors, but it takes a lot of coordination between those people.
There definitely seems to be a power law in the value of ideas. But it's also not necessarily easy to identify in advance how good an idea is.
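To make the power-law intuition concrete, here's a minimal sketch under made-up assumptions: it samples hypothetical idea "values" from a Pareto distribution (the shape parameter is invented for illustration) and checks how much of the total value sits in the top 1% of ideas.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: idea "values" drawn from a heavy-tailed Pareto distribution.
# The shape parameter alpha is made up; smaller alpha = heavier tail.
alpha = 1.1
values = rng.pareto(alpha, size=100_000) + 1  # shift so the minimum value is 1

values_sorted = np.sort(values)[::-1]
top_1_percent = values_sorted[: len(values_sorted) // 100]

share = top_1_percent.sum() / values.sum()
print(f"Top 1% of ideas hold ~{share:.0%} of total value")
```

Under a tail that heavy, the top 1% of ideas carry most of the total value, which is the sense in which "pick the best idea" matters so much.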
Often we need to build up the dependencies to effectuate a good idea, or even to recognize it in the first place. Maybe it's enough to just recognize all the good ideas we already have lying around and wire them together appropriately. Maybe normal people are already good at acting on most of the goodness of ideas. Maybe even people in dire states, say a homeless drug addict, are already tapping most of the good ideas lying around just by the sheer fact of being a biological organism! You didn't specify whether the capability to have vision counts as an idea. I didn't expect to be making this point, but I could argue we're already surfing some pretty damn good ideas like "seeing things", and on the margin the multipliers we can get from the additional stuff we call "ideating" aren't worth that much extra.
I would be hesitant to discount the accumulated wisdom of entrepreneurs on this question. One thing they're reacting to is that for every executor there are ten idea people, or some ratio like that. "Talk is cheap." Having many ideas likely indicates some level of overthinking and paralysis. Success requires not just picking the right idea but sticking to it, and if one is always optimizing for the best idea, something shinier may come along and derail the work that was built up, turning it into an unfinished bridge. Maybe it's good to have more discourse where people share their ideas, but it also makes sense why doing that too much gets penalized: the penalty acts as a tax on bullshit.
Also, the best ideas often have something antimemetic about them, which is why they weren't picked up before. This means it's hard to tell which is the best idea; it requires discernment, taste, and building up a solid worldview. Also, the best idea is probably high-variance, and therefore risks negative externalities that could be bigger than the expected positive externalities. There's an optimizer's curse.
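Here's a minimal sketch of the optimizer's curse, again under invented assumptions (normally distributed true values, normally distributed estimation noise): pick whichever idea has the highest noisy estimate, then compare that estimate to the idea's actual value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each idea has a true value, but we only see a noisy estimate.
n_trials, n_ideas, noise_sd = 10_000, 50, 1.0
true_values = rng.normal(0.0, 1.0, size=(n_trials, n_ideas))
estimates = true_values + rng.normal(0.0, noise_sd, size=(n_trials, n_ideas))

# Pick the idea with the best *estimate* in each trial, then look at its *true* value.
best_idx = estimates.argmax(axis=1)
rows = np.arange(n_trials)
picked_estimate = estimates[rows, best_idx].mean()
picked_true = true_values[rows, best_idx].mean()

print(f"Average estimated value of the picked idea: {picked_estimate:.2f}")
print(f"Average true value of the picked idea:      {picked_true:.2f}")
# The gap between these two numbers is the optimizer's curse:
# selecting on noisy estimates systematically overstates how good the pick is.
```

The more ideas you screen and the noisier your evaluations, the larger the gap, which is one reason "just list ideas and pick the best-looking one" disappoints.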
The world is often chaotic and the end result can only come from many iterations until the system is far from the initial conditions.
The goodness of ideas seems more easily evaluable in retrospect. Maybe the best result from this line of thinking is to do retroactive funding of the best ideas already out there. Speaking from tons of experience, I really doubt the best idea can come from sitting down afresh with a piece of paper and thinking "hmm what ideas can I list down" and then picking the best one.
OTOH, let me explore this idea more favorably, but with a different frame.
The world is a giant map. We need to get to X. It's somewhere, but we don't know where. I like thinking in terms of navigation, at least because it suits how my mind works. We don't have a map before us, so we start heading just anywhere (since we don't know where we're going). This is foolish, except insofar as we have never navigated before and need to calibrate how it even works to traverse the territory before we try to do it for real. If you're going the wrong way you'll need to turn around, so it's good to save tons of energy by getting your route right first. If you're leading a party of people it is especially important to have good discourse about where X is; if there are disagreements you should try to resolve them first. Unless you intentionally plan to split up and cover more ground! There should be some Xploration, but groups should also stick together in order to survive.
When different sources of information say to go in opposite directions, it is very important to either figure out which is true before heading there, or to maintain an average between those epistemic states until you learn more (e.g. AI doom vs. optimism).
But I think most of the navigation is pretty straightforward. Eat and sleep well, have friends, save the world, don't hurt others. You can probably figure out you need to "head north" to get to X, in the analogy. Even if X ends up being in Norway instead of Sweden, it probably didn't change your instrumentally convergent trajectory that much, assuming the best ideas are near each other. But if there are wild swings in where X is, then one should stop moving and resolve those cruxes. It's about the journey, though, and one should probably keep moving and doing various sidequests in the local city, checking the tavern's bounty board, while debating which way to go next. This means building up convergent resources, with the Slack to keep exploring indefinitely.
People who play various games probably have something to say about ideal strategy. I worry I'm not as cut out to be an entrepreneur as I'd like: I'm not that good at real-time strategy games with fast decision-making under VUCA (volatility, uncertainty, complexity, ambiguity), like StarCraft, or at RPGs with complex decision-making over loadouts and inventory.
As a human with a tendency towards perfectionism, it's probably a bad idea for me to try to evaluate how good ideas are with too much granularity. Better for AI agents to pick up that work. Maybe we just need to generate more ideas and put them out in the marketplace so they can be evaluated at all.
I talked to Claude a bit about this and slightly want to walk something back; I think idea generation and listmaking can be great if done in a structured way, and probably collectively. Charity Entrepreneurship goes through hundreds of ideas before picking the best one; that is a lot more structured and systematic than when I list things out in my notebook. That said, I am also skeptical their approach scales that well. It feels high modernist, and I'm more of the school that thinks founders should come up with ideas out of a personal Weltanschauung, out of deep personal engagement with the world that builds up tons of context about how things work.
Reminder that there is an EA Focusmate group, where you can do 50 minute coworking calls with other EAs. Also, if you're already in the group, please give any feedback on it here or via DM.
This post is mostly noise: it makes a basic point going back over a decade, and you do nothing to elaborate it or incorporate objections to naive utilitarianism. There is prior literature on the topic. I want you to do better because this is an important topic to me. The SBF example is a poor one that obscures the basic point, because you don't address the hard question of whether his fraud-funded donations were or weren't worth the moral and reputational damage; that is debatable and a separate interesting topic I haven't seen hard analysis of. You open up a can of ethical worms and leave it unaddressed, which reasonably looks bad to low decouplers and is probably the reason for the downvoting. Personally I would endorse downvoting because you haven't contributed anything novel about increasing the number of probably-good high-net-worth philanthropists, though I didn't downvote. I only decided to give this feedback because your bio says you're an econ grad student at GMU, which is notorious for disagreeable economists, so I think you can take it.
"...when we have no evidence that aligning AGIs with 'human values' would be any easier than aligning Palestinians with Israeli values, or aligning libertarian atheists with Russian Orthodox values -- or even aligning Gen Z with Gen X values?"
When I ask an LLM to do something, it usually outputs its best attempt at being helpful. How is this not some evidence that alignment is easier than inter-human alignment?
The eggs and milk quip might be offensive for animal welfare reasons. Eggs, at least, are one of the worst commonly consumed animal products according to various ameliatarian Fermi estimates.
My take was inspired by seeing this one: https://www.lesswrong.com/posts/FuGfR3jL3sw6r8kB4/richard-ngo-s-shortform?commentId=YbqaALPE3G2wRRCGt
EA's recruitment MO has been to recruit the best elites it can on the margin, which I agree with due to power laws. However, I disagree about how to measure "elite". Selecting from people attending Ivy League schools adversely selects for the kind of person who gets into Ivy League schools. Other people get into this rabbit hole by following links on the internet. I would rather engage with someone who cares about ideas than someone following the power-seeking gradient. Now, SBF was both an early contributor to Felicifia and someone who went to an elite university, so it's not to say that college clubs aren't drawing from both sets. On the margin though, these clubs will want to recruit more by, say, tabling at their college. It makes sense that they want to do that, but if I were a funder I would rather support something like paying some NEET who runs a Discord server to grow that server (depending on the topic, naturally). This does select for less conscientiousness, and my specific story for what to do could be wrong, but I think the overall thrust is right: selectivity should be weirder, and in the age of AI we have better tooling for this kind of selection.
Concrete operationalization: there's a long tail of search terms that only highly thoughtful people would search for, and orgs like CEA could spend ad money on them. I would bet they are underspending on these terms. The same goes for what these terms translate to in other languages: do deeper talent search in other countries and try to integrate those people into our network. Is anyone buying ads on Baidu for the Chinese equivalent of the word "utilitarianism"? There could be a lot of low-hanging fruit like this that hasn't been considered.
I'm not sure what I think about this recent take about the attention arms race, but I think we share a sense of "changing up how things are advertised". My point is more about subtle signalling in the information ecology.
It is possible I cached this thought a long time ago and haven't properly investigated whether the evidence still reflects it, or whether we are in fact in the world where most of the portfolio of outreach resources is being spent the way I'd endorse. Maybe more of the resources are now going to these new AI safety YouTube videos instead of uni clubs, and the actual form of my critique should be comparing those videos to some other outreach tactic.