I am visiting a Quaker church for the first time ever tomorrow. I've been out of religious community for 15 years or so and I'd like to explore one that is compatible with my current views.
I'm trying to think about what "EA but more religious" might look like. Could we form a religious community that holds weekly assemblies to celebrate our aspiration to fill the lightcone with happy person-moments? I think that is a profoundly spiritual and emotional activity, and I think we can do it.
I'll post a longer post to this effect soon.
I conditionally disagree on the "Work trials" point. I think work trials are a genuinely positive innovation that EA organizations in particular use, and they let employers hire for a better fit than they could otherwise. In the long run this is good for both potential employees and employers.
This is conditional on the trial being paid and kept to a reasonable length.
I can anticipate some reasonable disagreement on this, but it doesn't seem unreasonable to run these work trials even when they last long enough that the candidate has to take a few days of leave from their current job. Being paid for the trial itself should compensate for the inconvenience. Anything longer than a week would need a much stronger justification, and I wouldn't endorse that.
Although I understand the process still imposes some inconvenience on the candidate and their current employer, on balance, if the candidate is well paid, it seems like a reasonable trade-off to ask for, given the benefit to both the candidate and the employer of finding a good fit.
JJ--thanks for all your words of support in the last few years. I appreciate your attitude, care, and your hard work. I'm sorry to hear about this. Hope you are well!
It seems like a safer bet that AI will have some effect on lightening the labor load than that it will solve either of those particular problems.
I've spent hours going over your arguments and this is a real crux for me. AI is likely to lessen the need for human workers, at least for maintaining our existing levels of wealth.
I've stumbled here after getting more interested in the object-level debate around pronatalism. I am glad you posted this because, in the abstract, I think it's worthwhile to point out where someone may not be engaging in good faith within our community.
Having said that, I wish you had framed the Collinses' actions with a little more good faith yourself. I do not consider that one quoted tweet to be evidence of an "opportunistic power grab". I think it's probably a bit unhealthy to see our movement in terms of competing factions, and to seek wins for one's own faction through strategic means rather than through open debate.
But I'm not sure Malcolm Collins is quite there, on the evidence you've presented. It seems like he's happy that (according to him) his own favored cause area will get more attention (in the months since this was posted, I don't think his prediction has proven correct). I don't think that's the same as actively seeking a power grab--it might just be a slightly cynical, though realistic, view that even in a community that tries to promote healthy epistemics, sociological forces will influence what we do.
Have you looked at the fertility rates underlying the UN projections? They project fertility rates across China, Japan, Europe, and the United States to arrest their yearly decline and slowly climb back to somewhere in the 1.5 to 1.6 range.
That seems way too high, because it assumes not just that current trends stop but that they reverse direction entirely. Even their "low" scenario has fertility rebounding from a trough around 2030.
And that's despite all those countries still having a way to go before they reach the low South Korea has hit, at 0.88.
I enjoyed this post. I think it is worth asking whether the problem is unsolvable! One takeaway I had from Tegmark's Life 3.0 was that we will almost certainly not get exactly what we want from AGI. It seems intuitive that any possible specification will have downsides, including the specification not to build AGI at all.
But asking for a perfect utopia seems too high a bar for "Alignment"; on the other hand, "just avoid literal human extinction" would be far too low a bar and would leave room for all sorts of dystopias.
So I think it's a well-made point that we need to define these terms more precisely, and start thinking about what sort of alignment (if any) is achievable.
I might end up at a different place than you did when it comes to actually defining "control" and "AGI", though I don't think I've thought about it enough to make any helpful comment. Seems important to think more about though!
I have a very uninformed view on the relative Alignment and Capabilities contributions of things like RLHF. My intuition is that RLHF is positive for alignment, but I'm almost entirely uninformed on that. If anyone has written a summary of where they think these grey-area research areas lie, I'd be interested to read it. Scott's recent post was not a bad entry into the genre, but it obviously worked at a very high level.
This sounds like hits-based cause selection. The median early Quaker cause area wasn't particularly effective, but their best cause area was probably worth all of the wasted time in all of the others.