Quick takes

I'm going to be leaving 80,000 Hours and joining Charity Entrepreneurship's incubator programme this summer! The summer 2023 incubator round is focused on biosecurity and scalable global health charities, and I'm really excited to see what's the best fit for me and hopefully launch a new charity. The ideas that the research team have written up look really exciting, and I'm trepidatious about the challenge of being a founder but psyched to get started. Watch this space! <3

I've been at 80,000 Hours for the last 3 years. I'm very proud of the 800+ advising calls I did and feel very privileged I got to talk to so many people and try to help them along in their careers! I've learned so much during my time at 80k. And the team at 80k has been wonderful to work with - so thoughtful, committed to working out what is the right thing to do, kind, and fun - I'll for sure be sad to leave them.

There are a few main reasons why I'm leaving now:

1. New career challenge - I want to try out something that stretches my skills beyond what I've done before. I think I could be a good fit for being a founder and running something big and complicated and valuable that wouldn't exist without me - I'd like to give it a try sooner rather than later.

2. Stepping away from EA community building a bit after the recent EA crises - Events over the last few months in EA made me re-evaluate how valuable I think the EA community and EA community building are, as well as re-evaluate my personal relationship with EA. I haven't gone to the last few EAGs and have switched my work away from doing advising calls for the last few months while processing all this. I have been somewhat sad that there hasn't been more discussion and change by now, though I have been glad to see more EA leaders share things more recently (e.g. this from Ben Todd). I do still believe there are some really important ideas that EA prioritises, but I'm more circumspect about some of the things I think we're not doing as well as we could (e.g. Toby's thoughts here, Holden's caution about maximising here, and things I've posted about myself). Overall, I'm personally keen to take a step away from EA meta, at least for a bit, and try to do something that helps people where the route to impact is more direct and doesn't go via the EA community.

3. Less convinced of working on AI risk - Over the last year I've also become relatively less convinced about x-risk from AI - especially the case that agentic, deceptive, strategically aware, power-seeking AI is likely. I'm fairly convinced by the counterarguments (e.g. this and this), and I'm worried at the meta level about the quality of reasoning and discourse (e.g. this). Though I'm still worried about a whole host of non-x-risk dangers from advanced AI. That makes me much more excited to work on something bio- or global-health-related.

So overall it seems like it was a good time to move on to something new, and it took me a little while to find something I was as excited about as CE's incubator programme!

I'll be at EAG London this weekend! And hopefully you'll hear more from me later this year about the new thing I'm working on - so keep an eye out, as no doubt I'll be fundraising and/or hiring at some point! :)
Hey - I'm starting to post and comment more on the Forum than I have been, and you might be wondering about whether and when I'm going to respond to questions around FTX. So here's a short comment to explain how I'm currently thinking about things:

The independent investigation commissioned by EV is still ongoing, and the firm running it strongly preferred that I not publish posts on backwards-looking topics around FTX while the investigation is still in progress. I don't know when it'll be finished, or what the situation will be like for communicating on these topics even after it's done. I had originally planned to get out a backwards-looking post early in the year, and I had been holding off on talking about other things until that was published. That post has been repeatedly delayed, and I'm not sure when it'll be able to come out. If I'd known that it would be delayed this long, I wouldn't have waited on it before talking about other topics, so I'm now going to start talking more than I have been, on the Forum and elsewhere; I'm hoping I can be helpful on some of the other issues that are currently active topics of discussion.

Briefly, though, and as I indicated before: I had no idea that Sam and others were misusing customer funds. Since November I've thought a lot about whether there were signs of this I really should have spotted, but even in hindsight I don't think I had reason to suspect that that was happening.

Looking back, I wish I'd been far less trusting of Sam and those who've pleaded guilty. Looking forward, I'm going to be less likely to infer that, just because someone has sincere-seeming signals of being highly morally motivated, like being vegan or demonstrating credible plans to give away most of their wealth, they will have moral integrity in other ways, too.

I'm also more wary, now, of having such a high-trust culture within EA, especially as EA grows. This thought favours robust governance mechanisms even more than before ("trust but verify"), so that across EA we can have faith in organisations and institutions, rather than relying heavily on character judgements about the leaders of those organisations.

EA has grown enormously over the last few years; in many ways it feels like an adolescent, in the process of learning how to deal with its newfound role in the world. I'm grateful that we're in a moment of opportunity to think more about how to improve ourselves, including both how we work and how we think and talk about effective altruism.

As part of that broader set of reflections (especially around the issue of (de)centralisation in EA), I'm making some changes to how I operate, which I describe, along with some of the other changes happening across EA, in my post on decision-making and decentralisation here.

First, I plan to distance myself from the idea that I'm "the face of" or "the spokesperson for" EA; this isn't how I think of myself, and I don't think that description reflects reality, but I'm sometimes portrayed that way. I think moving in the direction of clarity on this will better reflect reality and be healthier for both me and the movement.

Second, I plan to step down from the board of Effective Ventures UK once it has more capacity and has recruited more trustees. I found it tough to come to this decision: I've been on the board of EV UK (formerly CEA) for 11 years now, and I care deeply, in a very personal way, about the projects housed under EV UK, especially CEA, 80,000 Hours, and Giving What We Can. But I think it's for the best, and when I do step down I'll know that EV will be in good hands.

Over the next year, I'll continue to do learning and research on global priorities and cause prioritisation, especially in light of the astonishing (and terrifying) developments in AI over the last year. And I'll continue to advocate for EA and related ideas: for example, in September, WWOTF will come out in paperback in the US and UK, and will come out in Spanish, German, and Finnish that month, too.

Given all that's happened in the world in the last few years - including a major pandemic, war in Europe, rapid AI advances, and an increase in extreme poverty rates - it's more important than ever to direct people, funding, and clear thinking towards the world's most important issues. I'm excited to continue to help make that happen.
Mildly against the Longtermism --> GCR shift

Epistemic status: Pretty uncertain, somewhat rambly

TL;DR: Replacing longtermism with GCRs might get more resources to longtermist causes, but at the expense of non-GCR longtermist interventions and broader community epistemics.

Over the last ~6 months I've noticed a general shift amongst EA orgs towards focusing less on reducing risks from AI, bio, nukes, etc. based on the logic of longtermism, and more on Global Catastrophic Risks (GCRs) directly. Some data points on this:

* Open Phil renaming its EA Community Growth (Longtermism) Team to GCR Capacity Building
* This post from Claire Zabel (OP)
* Giving What We Can's new Cause Area Fund being named "Risk and Resilience," with the goal of "Reducing Global Catastrophic Risks"
* Longview-GWWC's Longtermism Fund being renamed the "Emerging Challenges Fund"
* Anecdotal data from conversations with people working on GCRs / x-risk / longtermist causes

My guess is these changes are (almost entirely) driven by PR concerns about longtermism. I would also guess these changes increase the number of people donating to / working on GCRs, which is (by longtermist lights) a positive thing. After all, no one wants a GCR, even if only thinking about people alive today. Yet I can't help but feel something is off about this framing. Some concerns (in no particular order):

1. From a longtermist (~totalist classical utilitarian) perspective, there's a huge difference between ~99% and 100% of the population dying, if humanity recovers in the former case but not the latter. Just looking at GCRs on their own mostly misses this nuance.
   * (See Parfit's Reasons and Persons for the full thought experiment.)
2. From a longtermist (~totalist classical utilitarian) perspective, preventing a GCR doesn't differentiate between "humanity prevents GCRs and realises 1% of its potential" and "humanity prevents GCRs and realises 99% of its potential".
   * Preventing an extinction-level GCR might move us from 0% to 1% of future potential, but there's 99x more value in going from the "okay (1%)" future to the "great (100%)" future (see the toy calculation at the end of this post).
   * See Aird 2020 for more nuances on this point.
3. From a longtermist (~suffering-focused) perspective, reducing GCRs might be net-negative if the future is (in expectation) net-negative.
   * E.g. if factory farming continues indefinitely, or due to increasing the chance of an S-risk.
   * See Melchin 2021 or DiGiovanni 2021 for more.
   * (Note this isn't just a concern for suffering-focused ethics people.)
4. From a longtermist perspective, a focus on GCRs neglects non-GCR longtermist interventions (e.g. trajectory changes, broad longtermism, patient altruism/philanthropy, global priorities research, institutional reform).
5. From a "current generations" perspective, reducing GCRs is probably not more cost-effective than directly improving the welfare of people / animals alive today.
   * I'm pretty uncertain about this, but my guess is that alleviating farmed animal suffering is more welfare-increasing than e.g. working to prevent an AI catastrophe, given the latter is pretty intractable (but I haven't done the numbers).
   * See discussion here.
   * If GCRs actually are more cost-effective under a "current generations" worldview, then I question why EAs would donate to global health / animal charities (since this is no longer a question of "worldview diversification", just raw cost-effectiveness).

More meta points:

1. From a community-building perspective, pushing people straight into GCR-oriented careers might work short-term to get resources to GCRs, but could lose the long-run benefits of EA / longtermist ideas. I worry this might worsen community epistemics about the motivation behind working on GCRs:
   * If GCRs only go through on longtermist grounds, but longtermism is false, then impartial altruists should rationally switch towards current-generations opportunities. Without a grounding in cause impartiality, however, people won't actually make that switch.
2. From a general virtue ethics / integrity perspective, making this change for PR / marketing reasons alone - without an underlying change in longtermist motivation - feels somewhat deceptive.
   * As a general rule about integrity, I think it's probably bad to sell people on doing something for reason X when actually you want them to do it for reason Y, and you're not transparent about that.
3. There's something fairly disorienting about the community switching so quickly from [quite aggressive] "yay longtermism!" (e.g. much hype around the launch of WWOTF) to essentially disowning the word longtermism, with very little mention / admission that this happened or why.
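To put rough numbers on concern 2 above, here is a minimal toy calculation. It assumes, purely for illustration, a totalist view on which value scales linearly with the fraction of humanity's long-run potential that gets realised, and it reuses the made-up 1% / 100% figures from that bullet:

```python
# Toy comparison under an assumed totalist view: value scales linearly with
# the fraction of humanity's long-run potential realised.
# All numbers are illustrative placeholders, not estimates.

FULL_POTENTIAL = 100  # "great" future, in percentage points of potential
OKAY_FUTURE = 1       # "okay" future: humanity survives but realises ~1%
EXTINCTION = 0        # extinction-level GCR: no potential realised

# Intervention A: prevent an extinction-level GCR (extinction -> "okay").
gain_from_preventing_extinction = OKAY_FUTURE - EXTINCTION   # 1 point

# Intervention B: a trajectory change ("okay" -> "great").
gain_from_trajectory_change = FULL_POTENTIAL - OKAY_FUTURE   # 99 points

print(gain_from_trajectory_change / gain_from_preventing_extinction)  # 99.0
```

This deliberately ignores tractability and the probability of actually achieving either shift; it's only meant to show why, under these assumptions, "a GCR was prevented" and "humanity's potential was realised" come apart.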
Jason · 10mo
The FTX and Alameda estates have filed an adversary complaint against the FTX Foundation, SBF, Ross Rheingans-Yoo, Nick Beckstead, and some biosciences firms, available here. I should emphasize that anyone can sue over anything, and allege anything in a complaint (although I take complaints signed by Sullivan & Cromwell attorneys significantly more seriously than I take the median complaint). I would caution against drawing any adverse inferences from a defendant's silence in response to the complaint.

The complaint concerns a $3.25MM "philanthropic gift" made to a biosciences firm (PLS), and almost $70MM in non-donation payments (investments, advance royalties, etc.) - most of which were also to PLS. The only count against Beckstead relates to the donation. The non-donation payments were associated with Latona, which according to the complaint "purports to be a non-profit, limited liability company organized under the laws of the Bahamas[,] incorporated in May 2022 for the purported purpose of investing in life sciences companies [which] held itself out as being part of the FTX Foundation."

The complaint does not allege that either Beckstead or Rheingans-Yoo knew of the fraud at the core of FTX and Alameda. It does, however, allege (para. 46) that the "transfers were nominally made on behalf of the FTX Foundation and Latona, but actually were made for the benefit of Bankman-Fried." This is a weak spot in the complaint for me. There's a quote from SBF about needing to do some biosecurity work for PR and political reasons (para. 47), but corporations do charitable stuff for PR and political reasons all the time. There are quotations about wanting to de-emphasize the profit motive / potential in public communications (paras. 49-50), but if the profits flowed to a non-profit, it's unclear how that would personally enrich SBF.

Quoting Beckstead, the complaint alleges that the investments were "ill-advised and not well-evaluated" (para. 51). It further alleges that there was no, or very little, due diligence (para. 51), such as a lack of any valuation analysis and, in most cases, a lack of access to a data room. As a result, Latona "often paid far more than fair or reasonably equivalent value for the investments" (para. 52). As for the $3.25MM gift, it "was made in a similarly slapdash fashion," as the Foundation agreed to it without even knowing whether the recipient was a non-profit (para. 53). There are also allegations about how that gift came to be (paras. 53-64).

Paragraph 68 is troubling in terms of Latona's lack of internal controls:

* On May 23, 2022, Rheingans-Yoo sent a Slack message to the Head of Operations for FTX Ventures, asking her to wire $50 million to PLS on behalf of Latona, because "Latona doesn't have a bank account yet, and we'd like to move these funds as soon as possible." After the Head of Operations inquired why they were wiring $50 million when the SAFE agreement was only for $35 million, Rheingans-Yoo said there was a "separate purchase agreement [that] had a $15mln cash advance." When the Head of Operations asked for the purchase agreement for $15 million, Rheingans-Yoo replied, "I have an email, no formal agreement," but "if that's not sufficient, we can send the 35 first and get the purchase more formally papered."

There apparently wasn't an attempt to formally paper the $15MM until September, and that paper was woefully vague and inadequate if the complaint is credible (para. 70).

Likewise, Rheingans-Yoo allegedly offered to send over $3MM to another firm, Lumen, "on a handshake basis" without any paperwork "while we hammer out the full details" (para. 75). However, the funding was not actually sent until an agreement was signed (paras. 78-79). A third investment was made on Rheingans-Yoo's recommendation despite Beckstead describing it as "unattractive" for various reasons (paras. 82-84).

The "donate, then invest" approach also seemed to be in play with a fourth firm, Riboscience. Paragraph 93 doesn't sound great: "On June 29, 2022, Glenn emailed Rheingans-Yoo that 'sufficient time ha[d] passed since the (most generous) donation to the Glenn Labs, that we can now proceed with your desired investment in Riboscience.'"

Counts One to Five are similar to what I would expect most clawback complaints to look like. Note that they do not allege any misconduct by the recipients as part of the cause of action, as no such misconduct is necessary for a fraudulent conveyance action. You can also see the bankruptcy power to reach beyond the initial transferee to subsequent transferees under 11 USC 550 at play here. Most of the prefatory material is doubtless there in an attempt to cut off a defense from the defendant biosciences firms that they gave something of reasonably equivalent value in exchange for the investments.

In Count Eleven, the complaint alleges that "Rheingans-Yoo knew that the transactions with the Lifesciences Defendants did not provide and had virtually no prospect of providing Alameda with reasonably equivalent value, and that Bankman-Fried personally benefited from the transactions. Rheingans-Yoo thus knowingly assisted in and/or failed to prevent Bankman-Fried's breaches of fiduciary duty to Alameda." (para. 169). This allegedly harmed Alameda to the tune of $68.3MM. Elsewhere, the complaint alleges that "[u]pon information and belief, Bankman-Fried and Rheingans-Yoo intended to benefit personally from any profits generated by any of these companies if they turned out to be successful and/or developed a successful product." (para. 5). However, "upon information and belief" is lawyer-speak for "we're speculating, or at least don't have a clear factual basis for this allegation yet."

In Count Twelve, the complaint alleges that "Beckstead and Rheingans-Yoo knew that the transfer to PLS funded by FTX did not provide and had virtually no prospect of providing FTX with reasonably equivalent value, and that Bankman-Fried personally benefited from the transaction. Beckstead and Rheingans-Yoo thus aided and abetted Bankman-Fried's breaches of fiduciary duty to FTX." (para. 174). This allegedly harmed FTX to the tune of $3.25MM.

In the end, the complaint doesn't exactly make me think highly of anyone involved with FTX or the FTX Foundation. However, to my non-specialist eyes, I'm not seeing a slam-dunk case for the critical assertions about Beckstead and Rheingans-Yoo's knowledge in paras. 169 and 174.
RyanCarey · 9mo
Should we fund people for more years at a time?

I've heard that various EA organisations and individuals with substantial track records still need to apply for funding one year at a time, because they either are refused longer-term funding or perceive that they will be. For example, the LTFF page asks for applications to be "as few as possible", but clarifies that this means "established organizations once a year unless there is a significant reason for submitting multiple applications". Even the largest organisations seem to only receive OpenPhil funding every 2-4 years. For individuals, even if they are highly capable, ~12 months seems to be the norm.

Offering longer (2-5 year) grants would have some obvious benefits:
* Grantees spend less time writing grant applications
* Evaluators spend less time reviewing grant applications
* Grantees can plan their activities longer-term

The biggest benefit, though, I think, is that:
* Grantees would have greater career security.

Job security is something people value immensely. This is especially true as you get older (something I've noticed, tbh), and would be even more so for someone trying to raise kids. In the EA economy, many people get by on short-term grants and contracts, and even if they are employed, their organisation might itself not have a very steady stream of income. Overall, I would say that although EA has made significant progress in offering good salaries and great offices, job stability is still not great. Moreover, career security is a potential blind spot for grantmakers, who generally do have ~permanent employment from a stable employer.

What's more, I think that offering stable income may in many cases be cheaper than improving salaries and offices: some people have, for years, never been refused a grant, and would likely return any funds that turned out not to be needed. Despite the low chance of funding being "wasted", they still have to apply annually. In such cases, it seems especially clear that the time savings and talent retention benefits would outweigh any small losses.