Quick takes

NickLaing
At risk of violating @Linch's principle "Assume by default that if something is missing in EA, nobody else is going to step up", I think it would be valuable to have a well-researched estimate of the counterfactual value of getting investment from different investors (whether for-profit or donors). For example, in global health we could make GiveWell the baseline, as I doubt there is any funding source where switching has less impact, since the money will only ever be shifted from something slightly less effective. If my organisation received funding from GiveWell, we might only make slightly better use of that money than where it would otherwise have gone, and we're not going to be increasing the overall donor pool either.

Who knows, for-profit investment dollars could be 10x-100x more counterfactually impactful than GiveWell, which could mean a for-profit company trying to do something good could plausibly be 10-100x less effective than a charity and still do as much counterfactual good overall. Or is this a stretch?

This would be hard to estimate but doable, and must have been done at least on a casual scale by some people. Examples (and random guesses) of counterfactual comparisons of the value of each dollar given by a particular source might look something like:

1. GiveWell: 1x
2. Gates Foundation: 3x
3. Individual donors, NEW donations: 10x
4. Individual donors, SHIFTING donations: 5x
5. Non-EA-aligned foundations: 8x
6. Climate funding: 5x
7. For-profit investors: 20x

Or this might be barking up the wrong tree, not sure (and I have mentioned it before).
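A minimal sketch of the arithmetic in the take above, using the made-up multipliers listed there: counterfactual good is modelled as a project's direct effectiveness per dollar times the counterfactual multiplier of its funding source. The `counterfactual_good` helper and the example numbers are illustrative assumptions, not real estimates; the point is only to show how a 20x less effective for-profit could, under this toy model, match a GiveWell-funded charity.

```python
# Illustrative only: counterfactual multipliers are the guesses from the quick take above.
FUNDING_MULTIPLIERS = {
    "GiveWell": 1,
    "Gates Foundation": 3,
    "Individual donors (new)": 10,
    "Individual donors (shifting)": 5,
    "Non-EA-aligned foundations": 8,
    "Climate funding": 5,
    "For-profit investors": 20,
}

def counterfactual_good(direct_effectiveness: float, funding_source: str, dollars: float) -> float:
    """Toy model: good done = effectiveness per dollar x counterfactual multiplier x dollars."""
    return direct_effectiveness * FUNDING_MULTIPLIERS[funding_source] * dollars

# A charity 20x as effective per dollar, funded by GiveWell...
charity = counterfactual_good(direct_effectiveness=20, funding_source="GiveWell", dollars=1_000_000)
# ...versus a for-profit 1/20th as effective per dollar, funded by for-profit investors.
for_profit = counterfactual_good(direct_effectiveness=1, funding_source="For-profit investors", dollars=1_000_000)

print(charity, for_profit)  # both 20,000,000 "units of good" under this toy model
```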
“Chief of Staff” models from a long-time Chief of Staff

I have served in Chief of Staff or CoS-like roles to three leaders of CEA (Zach, Ben and Max), and before joining CEA I was CoS to a member of the UK House of Lords. I wrote up some quick notes on how I think about such roles for some colleagues, and one of them suggested they might be useful to other Forum readers. So here you go:

Chief of Staff means many things to different people in different contexts, but the core of it in my mind is that many executive roles are too big to be done by one person (even allowing for a wider Executive or Leadership team, delegation to department leads, etc.). Having (some parts of) the role split/shared between the principal and at least one other person increases the capacity and continuity of the exec function.

Broadly, I think of there being two ways to divide up these responsibilities (using CEO and CoS as stand-ins, but the same applies to other principal/deputy duos regardless of titles):

1. Split the CEO's role into component parts and assign responsibility for each part to CEO or CoS
   1. Example: CEO does fundraising; CoS does budgets
   2. Advantages: focus, accountability
2. Share the CEO's role with both CEO and CoS actively involved in each component part
   1. Example: CEO speaks to funders based on materials prepared by CoS; CEO assigns team budget allocations which are implemented by CoS
   2. Advantages: flex capacity, gatekeeping

Some things to note about these approaches:

* In practice, it's inevitably some combination of the two, but I think it's really important to be intentional and explicit about what's being split and what's being shared
  * Failure to do this causes confusion, dropped balls, and duplication of effort
* Sharing is especially valuable during the early phases of your collaboration because it facilitates context-swapping and model-building
* I don't think you'd ever want to get all the way or too far towards split, bec
I'd be excited to see 1-2 opportunistic EA-rationalist types looking into where marginal deregulation is a bottleneck to progress on x-risk/GHW, circulating 1-pagers among experts in these areas, and then pushing the ideas to DOGE/Mercatus/Executive Branch. I'm thinking of things like clinical trial requirements for vaccines, UV light, anti-trust issues facing companies collaborating on safety and security, and maybe housing (though I'm not sure which of those are bottlenecked by federal action). For most of these there's downside risk if the message is low fidelity, the issue becomes polarized, or priorities are poorly set, hence collaborating with experts. I doubt there's that much useful stuff to be done here, but marginal deregulation looks very easy to push for right now, and it seems good to strike while the iron is hot.
Thought these quotes from Holden's old (2011) GW blog posts were thought-provoking; I'm unsure to what extent I agree. In "In defense of the streetlight effect" he argued that (I appreciate the bolded part, especially as something baked into GW's approach and top recs by $ moved.) That last link is to "The most important problem may not be the best charitable cause". Quote that caught my eye:
I've spent some time in the last few months outlining a few epistemics/AI/EA projects I think could be useful. Link here. I'm not sure how to best write about these on the EA Forum / LessWrong. They feel too technical and speculative to gain much visibility. But I'm happy for people interested in the area to see them. Like with all things, I'm eager for feedback. Here's a brief summary of them, written by Claude.

---

1. AI-Assisted Auditing
A system where AI agents audit humans or AI systems, particularly for organizations involved in AI development. This could provide transparency about data usage, ensure legal compliance, flag dangerous procedures, and detect corruption while maintaining necessary privacy.

2. Consistency Evaluations for Estimation AI Agents
A testing framework that evaluates AI forecasting systems by measuring several types of consistency rather than just accuracy, enabling better comparison and improvement of prediction models. It's suggested to start with simple test sets and progress to adversarial testing methods that can identify subtle inconsistencies across domains. (A rough sketch of one such check appears after this list.)

3. AI for Epistemic Impact Estimation
An AI tool that quantifies the value of information based on how it improves beliefs for specific AIs. It's suggested to begin with narrow domains and metrics, then expand to comprehensive tools that can guide research prioritization, value information contributions, and optimize information-seeking strategies.

4. Multi-AI-Critic Document Comments & Analysis
A system similar to "Google Docs comments" but with specialized AI agents that analyze documents for logical errors, provide enrichment, and offer suggestions. This could feature a repository of different optional open-source agents for specific tasks like spot-checking arguments, flagging logical errors, and providing information enrichment.

5. Rapid Prediction Games for RL
Specialized environments where AI agents trade or compete on predictions through market me
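To make item 2 concrete, here is a minimal sketch of one kind of consistency check for a probabilistic forecasting agent: elicit P(A) and P(not A) separately and flag pairs whose probabilities don't sum to roughly 1. The `forecast` callable, the canned questions, and the tolerance are hypothetical stand-ins, not part of the actual proposal; a real suite would cover many more properties (monotonicity, conjunctions, cross-domain coherence).

```python
from typing import Callable

def complement_consistency(
    forecast: Callable[[str], float],  # hypothetical agent interface: question -> probability in [0, 1]
    question: str,
    negated_question: str,
    tolerance: float = 0.05,
) -> dict:
    """Check that P(A) and P(not A), elicited separately, sum to roughly 1."""
    p = forecast(question)
    p_not = forecast(negated_question)
    gap = abs((p + p_not) - 1.0)
    return {"p": p, "p_not": p_not, "gap": gap, "consistent": gap <= tolerance}

if __name__ == "__main__":
    # Toy stand-in agent with canned answers; a real harness would query an actual forecasting model.
    canned = {"Will X happen by 2030?": 0.62, "Will X not happen by 2030?": 0.45}
    result = complement_consistency(
        forecast=lambda q: canned[q],
        question="Will X happen by 2030?",
        negated_question="Will X not happen by 2030?",
    )
    print(result)  # gap is about 0.07 > 0.05, so this pair is flagged as inconsistent
```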