Thoughts on liability insurance (whether voluntary or mandatory) for global catastrophic risks, such as for biolabs or AGI companies? Do you find this to be a high-potential line of intervention?
I think the GWWC pledge, for example, is a bad thing for many people (including me) to take, and that starting for-profit businesses, not non-profits, should be the default course of action for improving the world. I am in fact pretty anti-"donation" as a paradigm for getting anything done at this point. This is not that far outside the Overton window here - there are plenty who love markets in these parts - but it is definitely not squarely within it either. I find myself trying to exhort people to be more greedy, so they will receive more reward signal that accurately tracks and internalizes their impact, rather than getting lost in the vague cloud of abstractions divorced from reality that tends to permeate these parts.
Edit: obviously donations get things done sometimes. I mean this as an absolute comparison, and on the margin. I would at least sorta frame churches as businesses offering a membership subscription in exchange for 10% of income. The people receiving these membership benefits are presumably strengthened and restored by the trade, rather than left weakened, self-sacrificial, and self-abnegating, as the altruism attractor so often engenders.
The title made me think it would be about No Shows to confirmed 1-on-1 appointments, which I could certainly see meriting an apology. But I think here you're talking about people sending a meeting request plus a message and not getting a response or confirmation. I agree people mostly get overwhelmed. Your solution of messaging afterwards to get back to people seems reasonable, though a lot of people feel burnt out after EAGs and are loath to go back on the app.
What is the maximum length of a Short-Term Visit? (Edit: the form indicates 10 days or fewer, as I recalled. It might be good to add that to the post, or at least confirm here that the max is 10 days, for travel planning purposes.)
I've had the thought recently that people in our circles underrate the benefits of being a big fish in a small pond. Being a small fish in a bigger pond means fiercer competition relative to others, and being the dumbest person in the room becomes mentally taxing. It's literally an invitation to be lower status, one of the most important commodities for an ape brain besides food. Of course there are still benefits to associating with your equals or superiors, which probably outweigh the harms, but some nuanced balance is called for. Being in the bigger pond makes any zero-sum dynamics fiercer and any positive-sum dynamics more magnanimous.
Do we want popular YouTubers spreading awareness of this, given that it also somewhat increases the risk of bad actors getting ideas?
This comment and its OP strike me as a good explanation of why:
https://www.lesswrong.com/posts/Zzar6BWML555xSt6Z/the-dial-of-progress?commentId=dg5iBNKRAJTjjSrP3
I think trying to figure out the common thread "explaining datapoints like FTX, Leverage Research, [and] the LaSota crew" won't yield much of worth because those three things aren't especially similar to each other, either in their internal workings or in their external effects.
There are many disparate factors between the different cases, and the particulars of each incident really matter, lest people draw the wrong conclusions. However, I think figuring out the common threads, insofar as there are any, is what we need; otherwise we will overindex on particular cases and learn things that don't generalize. I have meditated on what the commonalities could be and think the cases at least share what one might call "Perverse Maximization", a term I intend to apply to deontology (i.e. the sword stabbing) as well as utilitarianism. Maybe there's a better word than "maximize".
I think I discovered a commonality between the sort of overconfidence these extremist positions hold and the underconfidence of EAs who are overwhelmed by analysis paralysis: a sort of 'inability to terminate a closure-seeking process'. In the case of the overconfident, it's "I have found the right thing and I just need to zealously keep applying it over and over". In the case of the underconfident, it's "this choice isn't right or perfect, I need to find something better, no I need something better, I must find the right thing". Both share an intolerance of ambiguity, uncertainty, and shades of gray. The latter just tends to end in people quietly suffering the misery of their perfectionism, or being the ones manipulated, rather than in any kind of outward explosion.
Yeah, a significant consideration for me in whether to be less professionally involved in EA is exhaustion from centralized funding and the weird power dynamics that ensue. I would rather build products that lots of people can use and that lots of investors or donors would find attractive to fund, than be beHolden to a small coterie of grantmakers, no matter how well-intentioned.
What is your nuts'n'bolts analysis of the problem?