Quick takes

Announcing PauseCon, the PauseAI conference. Three days of workshops, panels, and discussions, culminating in our biggest protest to date.

Twitter: https://x.com/PauseAI/status/1915773746725474581
Apply now: https://pausecon.org
“Chief of Staff” models from a long-time Chief of Staff

I have served in Chief of Staff or CoS-like roles to three leaders of CEA (Zach, Ben and Max), and before joining CEA I was CoS to a member of the UK House of Lords. I wrote up some quick notes on how I think about such roles for some colleagues, and one of them suggested they might be useful to other Forum readers. So here you go:

Chief of Staff means many things to different people in different contexts, but the core of it in my mind is that many executive roles are too big to be done by one person (even allowing for a wider Executive or Leadership team, delegation to department leads, etc.). Having (some parts of) the role split/shared between the principal and at least one other person increases the capacity and continuity of the exec function.

Broadly, I think of there being two ways to divide up these responsibilities (using CEO and CoS as stand-ins, but the same applies to other principal/deputy duos regardless of titles):

1. Split the CEO's role into component parts and assign responsibility for each part to the CEO or the CoS
   * Example: CEO does fundraising; CoS does budgets
   * Advantages: focus, accountability
2. Share the CEO's role, with both CEO and CoS actively involved in each component part
   * Example: CEO speaks to funders based on materials prepared by the CoS; CEO assigns team budget allocations, which are implemented by the CoS
   * Advantages: flex capacity, gatekeeping

Some things to note about these approaches:

* In practice, it’s inevitably some combination of the two, but I think it’s really important to be intentional and explicit about what’s being split and what’s being shared
  * Failure to do this causes confusion, dropped balls, and duplication of effort
* Sharing is especially valuable during the early phases of your collaboration because it facilitates context-swapping and model-building
* I don’t think you’d ever want to get all the way or too far towards split, bec
At risk of violating @Linch's principle "Assume by default that if something is missing in EA, nobody else is going to step up.", I think it would be valuable to have a well-researched estimate of the counterfactual value of getting investment from different investors (whether for-profit investors or donors).

For example, in global health we could make GiveWell the baseline, as I doubt there is any funding source where switching has less impact: the money will only ever be shifted from something slightly less effective. If my organisation received funding from GiveWell, we might only make slightly better use of that money than where it would otherwise have gone, and we're not going to be increasing the overall donor pool either.

Who knows, for-profit investment dollars could be 10x-100x more counterfactually impactful than GiveWell money, which could mean a for-profit company trying to do something good could plausibly be 10-100x less effective than a charity and still do as much counterfactual good overall. Or is this a stretch?

This would be hard to estimate but doable, and it must have been done at least on a casual scale by some people. Examples (and random guesses) of the counterfactual value of each dollar given by a particular source might look something like:

1. GiveWell: 1x
2. Gates Foundation: 3x
3. Individual donors, NEW donations: 10x
4. Individual donors, SHIFTING donations: 5x
5. Non-EA-aligned foundations: 8x
6. Climate funding: 5x
7. For-profit investors: 20x

Or this might be barking up the wrong tree, not sure (and I have mentioned it before).
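To make the multiplier arithmetic concrete, here is a minimal sketch in Python, using the guessed numbers above. The dictionary, function, and example figures are illustrative assumptions, not researched estimates:

```python
# Toy model: counterfactual good per dollar = direct cost-effectiveness of the
# funded work x counterfactuality of the dollars funding it.
# All multipliers are the post's illustrative guesses, not researched estimates.

COUNTERFACTUAL_MULTIPLIER = {
    "GiveWell": 1,
    "Gates Foundation": 3,
    "Individual donors (new donations)": 10,
    "Individual donors (shifting donations)": 5,
    "Non-EA-aligned foundations": 8,
    "Climate funding": 5,
    "For-profit investors": 20,
}

def counterfactual_good(direct_effectiveness: float, source: str) -> float:
    """Counterfactual good per dollar from a given funding source."""
    return direct_effectiveness * COUNTERFACTUAL_MULTIPLIER[source]

# A charity funded by GiveWell, with direct effectiveness 1.0 per dollar...
charity = counterfactual_good(1.0, "GiveWell")
# ...versus a for-profit that is 20x less effective per dollar, but funded
# by for-profit investors whose dollars are guessed to be 20x more counterfactual.
startup = counterfactual_good(1.0 / 20, "For-profit investors")

print(charity, startup)  # both 1.0: equal counterfactual good under these guesses
```

Under this toy model, the headline claim follows directly: a for-profit can be N times less effective per dollar and break even so long as its funding source is N times more counterfactual.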
There's a famous quote, "It's easier to imagine the end of the world than the end of capitalism," attributed to both Fredric Jameson and Slavoj Žižek. I continue to be impressed by how little the public is able to imagine the creation of great software.

LLMs seem to be bringing down the cost of software. The immediate conclusion some people jump to is "software engineers will be fired." I think the impacts on the labor market are very uncertain, but I expect that software getting better overall should be certain. This means: "Imagine everything useful about software/web applications, then multiply that by 100x+."

The economics of software companies today are heavily connected to the price of software, primarily because software engineering is just incredibly expensive right now. Even the simplest of web applications with over 100k users could easily cost $1M-$10M/yr in development. And much of the market cap of companies like Meta and Microsoft is made up of their moat of expensive software.

There's a long history of enthusiastic and optimistic programmers in Silicon Valley. I think the last 5 years or so have seemed unusually cynical and hopeless for true believers in software (outside of AI). But if software genuinely became 100x cheaper (and we didn't quickly get to a TAI), I'd expect a Renaissance: a time of incredible change and experimentation, and a wave of new VC funding and entrepreneurial enthusiasm. The result would probably feature some pretty bad things (as is always true with software and capitalism), but I'd expect some great things as well.
I've spent some time in the last few months outlining a few epistemics/AI/EA projects I think could be useful. Link here. I'm not sure how to best write about these on the EA Forum / LessWrong; they feel too technical and speculative to gain much visibility. But I'm happy for people interested in the area to see them. Like with all things, I'm eager for feedback.

Here's a brief summary of them, written by Claude.

---

1. AI-Assisted Auditing: A system where AI agents audit humans or AI systems, particularly for organizations involved in AI development. This could provide transparency about data usage, ensure legal compliance, flag dangerous procedures, and detect corruption while maintaining necessary privacy.
2. Consistency Evaluations for Estimation AI Agents: A testing framework that evaluates AI forecasting systems by measuring several types of consistency rather than just accuracy, enabling better comparison and improvement of prediction models. It's suggested to start with simple test sets and progress to adversarial testing methods that can identify subtle inconsistencies across domains. (A toy sketch of such a check appears after this list.)
3. AI for Epistemic Impact Estimation: An AI tool that quantifies the value of information based on how it improves beliefs for specific AIs. It's suggested to begin with narrow domains and metrics, then expand to comprehensive tools that can guide research prioritization, value information contributions, and optimize information-seeking strategies.
4. Multi-AI-Critic Document Comments & Analysis: A system similar to "Google Docs comments" but with specialized AI agents that analyze documents for logical errors, provide enrichment, and offer suggestions. This could feature a repository of different optional open-source agents for specific tasks like spot-checking arguments, flagging logical errors, and providing information enrichment.
5. Rapid Prediction Games for RL: Specialized environments where AI agents trade or compete on predictions through market me
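As an illustration of what a consistency evaluation (project 2 above) might look like in practice, here is a minimal sketch. It assumes a forecaster exposed as a function from question to probability; the interface, checks, and stub values are hypothetical, not taken from the linked write-up:

```python
# Minimal sketch of consistency checks for a probabilistic forecaster.
# Hypothetical interface: forecast(question) -> probability in [0, 1].
# These checks test coherence between answers, not accuracy, so no
# ground-truth outcomes are needed.

from typing import Callable

Forecaster = Callable[[str], float]

def check_negation(forecast: Forecaster, q: str, not_q: str, tol: float = 0.05) -> bool:
    """P(A) + P(not A) should sum to ~1 for a coherent forecaster."""
    return abs(forecast(q) + forecast(not_q) - 1.0) <= tol

def check_monotonicity(forecast: Forecaster, narrow_q: str, broad_q: str) -> bool:
    """P(A) <= P(B) whenever A implies B: a narrower event can't be more likely."""
    return forecast(narrow_q) <= forecast(broad_q)

# Stub forecaster for demonstration (replace with a real model call).
def stub_forecaster(question: str) -> float:
    return {
        "rain tomorrow": 0.30,
        "no rain tomorrow": 0.78,   # deliberately incoherent: sums to 1.08
        "heavy rain tomorrow": 0.10,
        "any rain tomorrow": 0.30,
    }.get(question, 0.5)

print(check_negation(stub_forecaster, "rain tomorrow", "no rain tomorrow"))            # False
print(check_monotonicity(stub_forecaster, "heavy rain tomorrow", "any rain tomorrow")) # True
```

A fuller framework along these lines would presumably aggregate many such checks into a consistency score and, as the summary suggests, generate adversarial question pairs automatically rather than relying on hand-written ones.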