AdamGleave

Note: I don't see any results for FTX Foundation or FTX Philanthropy at https://apps.irs.gov/app/eos/determinationLettersSearch. So it's possible it's not a 501(c)(3) (although it could still be a non-profit corporation).

Disclaimer: I do not work for FTX, and am basing this answer off publicly available information, which I have not vetted in detail.

Nick Beckstead in the Future Fund launch post described several entities (FTX Foundation Inc, DAFs) from which funds will be disbursed: https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1?commentId=qtJ7KviYxWiZPubtY I would expect these entities to be sufficiently capitalized to provide continuity of operations, although presumably the situation will have a major impact on their long-run scale.

IANAL, but I'd expect the funds in the foundation/DAF to be fairly secure against bankruptcy or court proceedings. Bankruptcy courts can't just claw back money arbitrarily from other creditors, and limited liability corporations provide significant protection for directors. However, I'd expect assets donated to FTX Foundation or associated DAFs to largely be held in-kind (again, this is speculation, but it's standard practice for large philanthropic foundations) rather than liquidated for cash. These assets' mark-to-market value is likely a lot less than it was a week ago.

Hi Aaron, thanks for highlighting this. We inadvertently published an older version of the write-up before your feedback -- this has now been corrected. However, there are still a number of areas in the revised version which I expect you'll take issue with, so I wanted to share a bit of perspective on this. I think it's excellent that you brought up this disagreement in a comment, and I would encourage people to form their own opinion.

First, for a bit of context, my grant write-ups are meant to accurately reflect my thought process, including any reservations I have about a grant. They're not meant to present all possible perspectives -- I certainly hope that donors use other data points when making their decisions, including of course CES's own fundraising materials.

My understanding is you have two main disagreements with the write-up: that I understate CES's ability to have an impact at the federal level, and that my cost-effectiveness estimate is lower than you believe to be accurate.

On the federal level, my updated write-up acknowledges that "CES may be able to have influence at the federal level by changing state-level voting rules on how senators and representatives are elected. This is not something they have accomplished yet, but would be a fairly natural extension of the work they have done so far." However, I remain skeptical regarding the Presidential general election for the reasons stated: it will remain effectively a two-candidate race until a majority of electoral college votes can be won via approval voting. I do not believe you ever addressed that concern.

Regarding the cost effectiveness, I believe your core concern was that we included your total budget as a cost, whereas much of your spending is allocated towards longer-term initiatives that do not directly win a present-day approval voting campaign. This was intended as a rough metric -- a more careful analysis would be needed to pinpoint the cost effectiveness. However, I'm not sure that such an analysis would necessarily give a more favorable figure. You presumably went after jurisdictions where winning approval voting reform is unusually easy, so we might well expect your cost per vote to increase in future. If you have any internal analysis to share on that, I'm sure I and others would be interested to see it.

You could argue from a "flash of insight" and scientific paradigm shifts generally giving rise to sudden progress. We certainly know contemporary techniques are vastly less sample- and compute-efficient than the human brain -- so there does exist some learning algorithm much better than what we have today. Moreover, there probably exists some learning algorithm that would give rise to AGI on contemporary (albeit expensive) hardware. For example, ACX notes there's a supercomputer that can do $10^{17}$ FLOPS vs the estimated $10^{16}$ needed for a human brain. These kinds of comparisons are always a bit apples-to-oranges, but it does seem like compute is probably not the bottleneck (or won't be in 10 years) for a maximally-efficient algorithm.
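To make that comparison concrete, here is a minimal back-of-the-envelope sketch using the two order-of-magnitude figures cited above (both are rough, illustrative estimates, not measured values):

```python
# Back-of-the-envelope compute comparison (illustrative figures only).
supercomputer_flops = 1e17    # rough peak FLOP/s of a top supercomputer (per the ACX figure above)
brain_flops_estimate = 1e16   # rough estimate of FLOP/s equivalent for a human brain

headroom = supercomputer_flops / brain_flops_estimate
print(f"Headroom over the brain estimate: ~{headroom:.0f}x")  # ~10x
```

The point is only that the hardware headroom is roughly an order of magnitude, so under these (uncertain) estimates compute is not the binding constraint for a maximally-efficient algorithm.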

The nub, of course, is whether such an algorithm is plausibly reachable by a human flash of insight (and not via e.g. detailed empirical study and refinement of a less efficient but working AGI). It's hard to rule out. How simple and universal we think the algorithm the human brain implements is provides one piece of evidence here -- the more complex and laden with inductive bias (e.g. innate behavior) it is, the less likely we are to come up with it. But even if the human brain is a Rube Goldberg machine, perhaps there does exist some more straightforward algorithm evolution did not happen upon.

Personally I'd put little weight on this. I have <10% probability on AGI in the next 10 years, and I put no more than 15% on AGI ever being developed by something that looks like a sudden insight rather than more continuous progress. Notably, even if such an insight does happen soon, I'd expect it to take at least 3-5 years to gain recognition and be sufficiently scaled up to work. I do think it's probable enough that we should actively keep an eye out for promising new ideas that could lead to AGI, so we can be ahead of the game. I think it's good, for example, that a lot of people working on AI safety were working on language models "before it was cool" (I was not one of these people), although we've maybe now piled too much into that area.

I agree with a lot of this post. In particular, getting more precision in timelines is probably not going to help much with persuading most people, or in influencing most of the high-level strategic questions that Miles mentions. I also expect that it's going to be hard to get much better predictions than we have now: much of the low-hanging fruit has been plucked. However, I'd personally find better timelines quite useful for prioritizing which problems to work on in my technical research agenda. I might be in a minority here, but I suspect not that small a one (say 25-50% of AI safety researchers).

There are two main ways timelines influence what I would want to work on. First, they directly change the "deadline" I am working towards. If I thought the deadline was 5 years away, I'd probably work on scaling up the most promising approaches we have now -- warts and all. If I thought it was 10 years away, I'd try to make conceptual progress that could be scaled up in the future. If it was 20 years away, I'd focus more on longer-term field-building interventions: clarifying what the problems are, helping develop good community epistemics, mentoring people, etc. I do think what matters here is something like the log of the deadline more than the deadline itself (5 vs 10 is very decision-relevant, 20 vs 25 much less so), which we admittedly have a better sense of, although there's still considerable disagreement.

The second way timelines are relevant is that my prediction of how AI is developed changes a lot conditioned on timelines. I think we should probably just try to forecast or analyze how-AI-is-developed directly -- but timelines are perhaps easier to formalize. If timelines are less than 10 years, I'd be confident we develop AGI within the current deep learning paradigm. More than that and the possibilities open up a lot. So overall, longer timelines would push me towards more theoretical work (that's generally applicable across a range of paradigms) and taking bets on underdog areas of ML. There's not much research into, say, how to align an AI built on top of a probabilistic programming language. I'd say that's probably not a good use of resources right now -- but if we had a confident prediction that human-level AI was 50 years away, I might change my mind.

Thanks for the post! This seems like a clearly important and currently quite neglected area and I'd love to see more work on it.

My current hot take is that it seems viable to make AGI research labs a sufficiently hardened target that most actors cannot exploit them. But I don't really see a path to preventing the most well-resourced state actors from at least exfiltrating source code. There are just so many paths to this: getting insiders to defect, supply chain attacks, etc. Because of this, I suspect it'll be necessary to get major state actors to play ball by other mechanisms (e.g. international treaties, mutually assured destruction). I'm curious if you agree or are more optimistic on this point?

I also want to note that espionage can reduce x-risk in some cases: e.g. actors may be less tempted to cut corners on safety if they have intelligence that their competitors are still far away from transformative AI. Similarly, it could be used as an (admittedly imperfect) mechanism for monitoring compliance with treaties or more informal agreements. I do still expect better infosec to be net-positive, though.

Making bets on new ambitious projects doesn't seem necessarily at odds with frugality: you can still execute on them in a lean way; some things just really do take a big capital expenditure. Granted, whether Google or any major tech company really practices frugality is debatable, but I do think they tend to at least try to instill it, even if there is some inefficiency, e.g. due to principal-agent problems.

Thanks for writing this post, this is an area I've also sometimes felt concerned about so it's great to see some serious discussion.

A related point that I haven't seen called out explicitly is that monetary costs are often correlated with other more significant, but less visible, costs such as staff time. While I think the substantial longtermist funding overhang really does mean we should spend more money, I think it's still very important that we scrutinize where that money is being spent. One example I've seen crop up a few times is retreats or other events being organized at very short notice (e.g. less than two weeks). In most of these cases there's not been a clear reason why the event needs to happen right now and can't wait a month or so. There's a monetary cost to doing things last minute (e.g. more expensive flights and hotel rooms), but the biggest cost is that the event will be less effective than if the organizers and attendees had more time to plan for it.

More generally, I'm concerned that too much funding can have a detrimental effect on organisational culture. It's often possible to make a problem temporarily go away just by throwing money at it. Sometimes that's the right call (focus on core competencies), but sometimes it's better to fix the structural problem before an organisation scales and it gets baked in. Anecdotally, it seems like many of the world's most successful companies do try to make frugality part of their culture; e.g. it's one of Amazon's leadership principles.

In general, being inefficient at a small scale can still end up being very impactful if you work on the right problem. But I think to make a serious dent in the world's problems, we're likely going to need some mega-projects, spending billions of dollars with large headcounts. Inefficiency at that scale is likely to result in project failure: oversight and incentives only get harder. So it seems critical that we continue to develop the ability in EA to execute on projects efficiently, even if in the short term we might achieve more by neglecting that.

I do feel a bit confused about what to do in practice to address these problems, and would love to see more thinking on it. For individual decisions, I've found it helpful to figure out what my time (in a given context) is worth and stick to that for time-money tradeoffs. In general I'd be suspicious if someone is always choosing to spend money when it saves time, or vice versa. For funding decisions, these concerns are one of the reasons I lean towards keeping the bar for funding relatively high, even if that means we can't immediately deploy funding. I also support vetting people carefully to avoid incentivizing people who are pretending to be longtermists (or who just have very bad epistemics).

I think it's important to distinguish people's expectations and the reality of what gets rewarded. Both matter: if people expect something to be unrewarding, they won't do it even if it would be appreciated; and perhaps even worse, if people expect to get rewarded for something but in fact there is limited support, they may waste time going down a dead end.

Another axis worth thinking about is what kind of rewards are given. The post prompts for social rewards, but I'm not sure why we should focus on this specifically: things like monetary compensation, work-life balance, location, etc. all matter and are determined at least in part by the decisions of EA orgs and grantmakers. Even if we focus on social rewards, does this look like gratitude from your colleagues, being invited to interesting events, having social media followers, a set of close friends you like, ...? All of these can be rewarding, but the amount of weight people put on each varies a lot. I think it helps to be precise here, as otherwise two people might disagree about how rewarding a role is, even though they agree about the facts of the matter.

Off the top of my head, here are some categories of people who I think often get rewarded too much or too little by the movement.

Overrated: AI safety researchers

I am an AI safety researcher, so I'll start by deprecating myself! To be clear, I think AI safety should be a priority, and people who are making progress here deserve resources to let them scale up their research. But it seems to sometimes be put on a pedestal it doesn't really belong on. Biosecurity, cause prioritization, improving institutional decision making, etc. all seem within an order of magnitude of AI at least -- and people's relative fit for an area can dwarf that. I think this is one of the cases where perception is more skewed than reality: e.g. although the bar for funding AI safety research does seem a bit lower than in other areas, I've generally seen promising projects and people in other areas be able to attract funding relatively easily too.

I'd also like to see more critical evaluation of people's research agendas. I see more deference than I'm comfortable with. It's a tricky balance: we don't want to strangle a research agenda at birth just because it doesn't fit our preconceptions. So I think it makes sense to give individuals a decent amount of runway to pursue novel approaches. But I think accountability can actually help make people more productive, both by motivating them and by giving useful feedback. Given time constraints, I think it's OK to sometimes fund or otherwise support people without having a good inside view of why their work matters, but I'd like to see people be more explicit about that in their own reasoning and communication with others, so we don't get a positive feedback loop. Concretely, I've fairly often wished I could fund someone without giving an implied endorsement -- not because I think their work is bad, but because I'm just not confident in it.

Overrated: Parroting popular arguments

There's often a lot of deference to the opinions of high-status figures in the EA community. I don't think this is necessarily bad per se: no one has time to look into every possible issue, so relying on expert opinion is a necessary shortcut. However, the question then arises, how are the so-called experts selected?

A worrying trend I've seen is that people who agree with the current in-vogue opinion and parrot the popular arguments often seem to be given more epistemic credit than they deserve, while those who try hard to form their own opinions, and sometimes make mistakes, are more likely to be viewed with skepticism. The tricky thing here is that the "parrots" are right more often than the "independent thinkers" -- but the marginal contribution of the parrots to the debate is approximately zero.

I'm not sure how to fix this. I think one thing that can help is rewarding people for having good reasons for working on what they're working on, rather than for you agreeing with their conclusion per se. So, if I meet someone who is e.g. working on AI safety but does not seem to have a strong grasp of the arguments for it or why they're a good fit for it, I might encourage them to look at other options. Whereas if I meet someone working on e.g. asteroid deflection, which I'd personally guess is much less impactful, I'd be supportive of them if they had decent responses to my critique (even if I'm not convinced by the responses).

Underrated: Micro-entrepreneurship

A key part of entrepreneurship is identifying an opportunity others are overlooking, and then taking the initiative to exploit that opportunity. Entrepreneurs in the "Silicon Valley startup" mold are adequately rewarded (although I'll note it's common for founders to face intense skepticism early on, before the idea is validated). But there are opportunities to apply this style of thinking and work at varying scales: setting up a new community event, helping an org you join run better, etc. These are often taken for granted, especially since once the idea has been executed, it may seem trivial. But such "obvious" ideas frequently languish for many years because no one bothers to act on them.

For example, during my PhD at CHAI, I helped scale up an internship program, fundraised for and helped run a program to give cash grants to other PhD students (not myself) who were being held back by funding constraints, led meetings to integrate new PhD students, and fundraised for and helped set up a compute cluster. None of these were particularly hard: I believe most other people in the group could have done them. But they didn't, and I expect <50% of them would have happened if I hadn't taken the initiative.

I wasn't particularly rewarded for these, and they'll do little to help me in a research career. But I actually count this as a success case -- in many orgs I wouldn't even have had the freedom to take these actions! So I'd encourage leaders of orgs to at least try to give their individual contributors the freedom to take leadership of useful projects, and where possible to reward them for it, even if it's not incentivized by the broader ecosystem.

Underrated: Direct work outside the community

Working directly for an EA org is rewarding in many ways (social connection, prestige), although by no means all (compensation is low to middling relative to what many of these individuals could earn elsewhere). But there are lots of direct paths to impact that don't involve working with EAs!

For example, if you want to improve institution decision-making, it might make sense to spend at least some time working at the kind of large governmental institution you seek to later reform. Even the most ardent civil servant would not claim that large government bureaucracies are a particularly exciting place to work.

Similarly, I see a lot of people working on AI safety at a handful of labs that have made safety a priority: e.g. DeepMind, OpenAI, Anthropic, Redwood. This makes a decent amount of sense, but might there not be considerable value in working at a company that might build powerful AI but currently has few internal experts on safety, such as Google Brain or Meta AI Research? This isn't for everyone: you should ideally have some seniority already, and you need strong communication skills to get leadership and other teams excited about your work. But I expect it could be higher impact, by getting a new group of people to work on safety problems and helping ensure that any systems those labs build are aligned.

One thing that could help here is having a strong community outside of workplaces and narrow geographical hubs. It would also help to evaluate people's careers more by their long-term trajectory, and not just by what they're working on right now, noting that direct impact outside EA orgs will often by necessity involve some work that by our lights would be of limited impact.
