Very strongly agree, based on watching the career trajectories of lots of EAs over the past 10 years. I think it's generally much more productive to focus on what broad kinds of activities you are good at and enjoy, and what skills you have or are well-positioned to obtain (within limits: e.g. "being a really clear and fast writer" is probably helpful in most cause areas, "being a great salsa dancer" maybe less so), and then to think about how to apply them in the cause area you think is most important, rather than trying to entangle that exploration with personal cause prioritization exercises.
Our impression when we started to explore different options was that one can’t place a trustee on a leave of absence; it would conflict with their duties and responsibilities to the org, and so wasn’t a viable route.
Isn't the point of being placed on leave in a case like this to (temporarily) remove the trustee from their duties and responsibilities while the situation is investigated, as their ability to successfully execute on their duties and responsibilities has been called into question?
(I'm not trying to antagonize here – I'm genuinely trying to understand the decision-making of EA leadership better as I think it's very important for us to be as transparent as possible in this moment given how it seems the opacity around past decision-making contributed to...
Chiming in from the EV UK side of things: First, +1 to Nicole’s thanks :)
As you and Nicole noted, Nick and Will have been recused from all FTX-related decision-making. And, Nicole mentioned the independent investigation we commissioned into that.
Like the EV US board, the EV UK board is also looking into adding more board members (though I think we are slightly behind the US board), and plans to do so soon. The board has been somewhat underwater with all the things happening (speaking for myself, it’s particularly difficult because a ...
My favorite is probably the movie Colossus: the Forbin Project. For this, would also weakly recommend the first section of Life 3.0.
Hi Claire,
Thanks for coming back to this comment.
I have heard it said that large funders often ask for a seat on the Board of charities they fund. I've never actually heard of a concrete example of this, but I'm happy to take it on faith.
What I'm more surprised about is that the funder would appoint someone to the Board who then assesses grant applications from that nonprofit. This is surely an unavoidable conflict of interest - the Board member has a direct interest in gaining the grant for the nonprofit, even if it's not in the grantor's best interests t...
That’s correct. It’s common for large funders of organizations to serve on the boards of organizations they support, and I joined the EVF board partly because we foresaw synergies between the roles (including my acting as grant investigator on EVF grants). Leadership at both organizations is aware that I am in both roles.
Also, though you didn’t ask: I don’t receive any compensation for my work as an EVF board member.
Hey, I wanted to clarify that Open Phil gave most of the funding for the purchase of Wytham Abbey (a small part of the costs were also committed by Owen and his wife, as a signal of “skin in the game”). I run the Longtermist EA Community Growth program at Open Phil (we recently launched a parallel program for EA community growth for global health and wellbeing, which I don’t run) and I was the grant investigator for this grant, so I probably have the most context on it from the side of the donor. I’m also on the board of the Effective Ventures Foundation (...
How much professional advice did Owen obtain on the cost and resource requirements of refurbishing and maintaining the property? I note this is a Grade 1 listed building.
Thank you Claire.
Just to understand fully: in your role at Open Phil in November 2021, you acted as the key decision-maker to award a grant of ~£15m to the Effective Ventures Foundation while simultaneously acting as a Director of the Effective Ventures Foundation (appointment confirmed on 18 July 2019).
Or have I misunderstood the role of "grant investigator" or some aspect of the timing?
I appreciate it may be worthwhile for OP to fund the acquisition of a dedicated EA events space, but the shift from:
"we should fund a dedicated EA events space"
to
"we should specifically fund the purchase of Wytham Abbey"
is alarming given the obvious challenges with Wytham Abbey (both with the property and the COI issues).
If EVF or OP wanted to purchase a dedicated event space and solicited applications/proposals for it, given all of the stated concerns, I am confident Wytham Abbey would not have won. I think it is worthwhile for OP to reflect on what went wrong here.
This reads as though the approach to grant making was “is this positive EV” rather than “does this maximise EV”, which seems bad.
It's no concern of mine how OP spends its money, but since it's come up here: I don't think your cost estimate can be correct.
Firstly, OP doesn't have the asset, so its resale value is irrelevant to you. It's all very well to say that proceeds would be used for EVF's general funding which would funge against OP's future grants, but (a) there doesn't seem to be anything stopping EVF from using the proceeds for some specific project which OP wouldn't otherwise fund and (b) it's possible to imagine a scenario in which OP ceases to fund EVF and there's n...
Hi Claire - thanks for the extra info here, which is very helpful.
Can you say whether you/Open Phil considered anything here to be a conflict of interest and if so how you managed that?
At a first glance, a trustee of EVF recommending a grant of £10m+ to EVF on behalf of their employer seems like a CoI.
Thanks for sharing this info, Claire!
I think your team correctly concluded that in-person events are enormously valuable for people making big career changes, but running in-person events is expensive and super logistically challenging. I think logistics are somewhat undervalued in the EA community, e.g. I read a lot of criticism along the lines of "Why don't community organizers or EAGs just do some extremely time-costly thing," without much appreciation for how hard it is to get things to happen.
From this perspective, lowering the barrier f...
Given the massive decline in expected EA liquidity since the purchase, and the fact that the purchase was largely justified on the grounds that as a durable asset it could be converted back into liquid funds with minimal loss, why not sell it now?
Not the intended audience, but as a US person who lives in the Bay Area, I enjoyed reading this really detailed list of what's often unusual or confusing to people from a specific different cultural context.
I generally directionally agree with Eli Nathan and Habryka's responses. I also weak-downvoted this post (though felt borderline about that), for two reasons.
(1) I would have preferred a post that tried harder to even-handedly discuss and weigh up upsides and downsides, whereas this mostly highlighted upsides of expansion, and (2) I think it's generally easier to publicly call for increased inclusivity than to publicly defend greater selectivity (the former will generally structurally have more advocates and defenders). In that context I feel worse a...
Quite. I was in that Stanford EA group; I thought Kelsey was obviously very promising, and I think the rest of us did too, including when she was taking a leave of absence.
I strongly disagree with Greg. I think CFAR messed up very badly, but I think the way they messed up is totally consistent with also being able to add value in some situations.
We have data I find convincing suggesting a substantial fraction of top EAs got value from CFAR. ~5 years have passed since I went to a CFAR workshop, and I still value what I learned and think it's been useful for my work. I would encourage other people who are curious to go (again, with the caveat that I don't know much about the new program), if they feel like they're in a ...
To build on Greg's example: I think in normal circumstances, if, e.g., a school were linked with a summer camp for high schoolers, and the summer camp made the errors outlined in the post linked to, then the school would correctly sever ties with the summer camp.
The mistakes made seem to me to be outrageously bad - they put teenagers in the custody of someone they had lots of evidence was an unethical sociopath, and they even let him ask a minor to go to Burning Man with him, and after that still didn't ban him from their events (!). Although apparently l...
I don't find said data convincing re. CFAR, for reasons I fear you've heard me rehearse ad nauseam. But this is less relevant: if it were just 'CFAR, as an intervention, sucks' I'd figure (and have figured over the last decade) that folks don't need me to make up their own minds. The worst case, if that was true, is wasting some money and a few days of their time.
The doctor case was meant to illustrate that sufficiently consequential screw-ups in an activity can warrant disqualification from doing it again - even if one is candid and contrite about them. I ...
You said you wouldn’t tell anyone about your friend’s secret, but this seems like a situation where they wouldn’t mind, and it would be pretty awkward to say nothing…etc.
This isn't your main point, and I agree there's a lot of motivated cognition people can fall prey to. But I think this gets a bit tricky, because people often ask for vague commitments that are different from what they actually want and intend. For example, I think sometimes when people say "don't share this" they actually mean something more like "don't share this with people that ...
This seems really exciting, and I agree that it's an underexplored area. I hope you share resources you develop and things you learn to make it easier for others to start groups like this.
PSA for people reading this thread in the future: Open Phil is also very open to and excited about supporting AI safety student groups (as well as other groups that seem helpful for longtermist priority projects); see here for a link to the application form.
I used to agree more with the thrust of this post than I do, and now I think this is somewhat overstated.
[Below written super fast, and while a bit sleep deprived]
An overly crude summary of my current picture is "if you do community-building via spoken interactions, it's somewhere between 'helpful' and 'necessary' to have a substantially deeper understanding of the relevant direct work than the people you are trying to build community with, and also to be the kind of person they think is impressive, worth listening to, and admirable. Additionally, be...
A lot of what Claire says rings true to me.
Just to focus on my experience:
>It's fine to have professional facilitators who are helping the community-building work without detailed takes on object-level priorities, but they shouldn't be the ones making the calls about what kind of community-building work needs to happen
I think this could be worth calling out more directly and emphatically. I think a large fraction (idk, between 25 and 70%) of people who do community-building work aren't trying to make calls about what kinds of community-building work needs to happen.
I put a bunch of weight on decision theories which support 2.
A mundane example: I get value now from knowing that, even if I died, my partner would pursue certain Claire-specific projects I value being pursued because it makes me happy to know they will get pursued even if I die. I couldn't have that happiness now if I didn't believe he would actually do it, and it'd be hard for him (a person who lives with me and who I've dated for many years) to make me believe that he actually would pursue them even if it weren't true (as well as seeming ske...
Thanks for this! Most of what you wrote here matches my experience and what I've seen grantees experience. It often feels weird and frustrating (and counter to econ 101 intuitions) to be like "idk, you just can't exchange money for goods and services the obvious way, sorry, no, you can't just pay more money to get out of having to manage that person and have them still do their work well" and I appreciate this explanation of why.
Riffing off of the alliance mindset point, one shift I've personally found really helpful (though I could imagine it backfiring for other people) in decision-making settings is switching from thinking "my job is to come up with the right proposal or decision" to "my job is to integrate the evidence I've observed (firsthand, secondhand, etc.) and reason about it as clearly and well as I'm able".
The first framing made me feel like I was failing if other people contributed; I was "supposed" to get to the best decision, but instead I came to the wrong on...
This is a cool idea! It feels so much easier to me to get myself started reading a challenging text if there's a specified time and place with other people doing the same, especially if I know we can discuss right after.
I'm interested in and supportive of people running different experiments with meta-meta efforts, and I think they can be powerful levers for doing good. I'm pretty unsure right now if we're erring too far in the meta and meta-meta direction (potentially because people neglect the meta effects of object-level work) or should go farther, but hope to get more clarity on that down the road.
So to start, that comment was quite specific to my team and situation, and I think historically we've been super cautious about hiring (my sense is, much more so than the average EA org, which in turn is more cautious than the next-most-specific reference class org).
Among the most common and strongest pieces of advice I give grantees with inexperienced executive teams is to be careful about hiring (generally, more careful than I think they'd have been otherwise), and more broadly to recognize that differences in people's skills and interests lead to ...
Thanks Akash. I think you're right that we can learn as much from successes and well-chosen actions as mistakes, and also it's just good to celebrate victories. A few things I feel really pleased about (on vacation so mostly saying what comes to mind, not doing a deep dive):
Thoughtful and well-informed criticism is really useful, and I'd be delighted for us to support it; criticism that successfully changes minds and points to important errors is IMO among the most impactful kinds of writing.
In general, I think we'd evaluate it similarly to other kinds of grant proposals, trying to gauge how relevant the proposal is to the cause area and how good a fit the team is to doing useful work. In this case, I think part of being a good fit for the work is having a deep understanding of EA/longtermism, having really strong epistemics, and buying into the high-level goal of doing as much good as possible.
I think a problem here is when people don't know if someone is being fully honest/transparent/calibrated or using more conventional positive-slanted discourse norms. E.g. a situation where this comes up sometimes is taking and giving references for a job applicant. I think the norm with references is that they should be very positive, and you're supposed to do downward adjustments on the positivity to figure out what's going on (e.g. noticing if someone said someone was "reliable" versus "extremely reliable"). If an EA gives a reference for a job applicant...
No, that's not what I'd say (and again, sorry that I'm finding it hard to communicate about this clearly). This isn't necessarily making a clear material difference in what we're willing to fund in many cases (though it could in some), it's more about what metrics we hold ourselves to and how that leads us to prioritize.
I think we'd fund at least many of the scholarships from a pure cost-effectiveness perspective. We think they meet the bar of beating the last dollar, despite being on average less cost-effective than 80k advising, because 80k advisi...
Hm yeah, I can see how this was confusing, sorry!
I actually wasn't trying to stake out a position about the relative value of 80k vs. our time. I was saying that with 80k advising, the basic inputs per career shift are a moderate amount of funding from us and a little bit of our time and a lot of 80k advisor time, while with scholarships, the inputs per career shift are a lot of funding and a moderate amount of our time, and no 80k time. So the scholarship model is, according to me, more expensive in dollars per career shift, but less time-consuming of ded...
Agree. If possible, also, lots of private rooms people can grab for sensitive conversations, and/or places outside they can easily and pleasantly walk together, side-by-side, for same.
I haven't looked closely, but from a fairly-but-not-completely uninformed perspective, Tim's allocation of part of his donor lottery winnings to the Czech Association for Effective Altruism looks prescient and potentially unusually counterfactually impactful.
[As is always the default, but perhaps worth repeating in sensitive situations, my views are my own and by default I'm not speaking on behalf of Open Phil. I don't do professional grantmaking in this area, haven't been following it closely recently, and others at Open Phil might have different opinions.]
I'm disappointed by ACE's comment (I thought Jakub's comment seemed very polite and even-handed, and not hostile, given the context, nor do I agree with characterizing what seems to me to be sincere concern in the OP just a...
I like this question :)
One thing I've found pretty helpful in the context of my failures is to try to separate out (a) my intuitive emotional disappointment, regret, feelings of mourning, etc. (b) the question of what lessons, if any, I can take from my failure, now that I've seen the failure take place (c) the question of whether, ex ante, I should have known the endeavor was doomed, and perhaps something more meta about my decision-making procedure was off and ought to be corrected.
I think all these things are valid and good to process, but I...
I’ll consider it a big success of this project if some people will have read Julia Galef's The Scout Mindset next time I check.
It's not out yet, so I expect you will get your wish if you check a bit after it's released :)
Just a personal note, in case it's helpful for others: in the past, I thought that medications for mental health issues were likely to be pretty bad, in terms of side effects, and generally associated them with people in situations of pretty extreme suffering. And so I thought it would only be worth it or appropriate to seek psychiatric help if I were really struggling, e.g. on the brink of a breakdown or full burn-out. So I avoided seeking help, even though I did have some issues that were bothering me. In my experience, a lot of other people ...
Seconding this. My partner was spooked by seeing a family member on heavy-duty medications for a more serious mental health situation, so our vague impression was that antidepressants might really change who I was. I did need to try a couple of meds and different times of day, etc., to deal with side effects, but at this point I have a med and dose that makes my life better and has very minor side effects.
As a second data point, my thought process was pretty similar to Claire's - I didn't really consider medication until reading Rob's post because I didn't think I was capital D depressed, and I'm really glad now that I changed my mind about trying it for mild depression. I personally haven't had any negative side effects from Wellbutrin, although some of my friends have.
Scott's new practice, Lorien Psychiatry, also has some resources that I (at least) have found helpful.
Also, I believe it's much easier to become a teacher for high schoolers at top high schools than a teacher for students at top universities, because most teachers at top unis are professors, or at least lecturers with PhDs, while even at fancy high schools, most teachers don't have PhDs, and I think it's generally just much less selective. So EAs might have an easier time finding positions teaching high schoolers than uni students of a given eliteness level. (Of course, there are other ways to engage people, like student groups, for which different dynamics are at play.)
Huh, this is great to know. Personally, I'm the opposite: I find it annoying when people ask to meet and don't include a Calendly link or similar. I'm slightly annoyed by the time it takes to write a reply email and generate a calendar invite, and by the often greater overall back-and-forth and attention drain from having the issue linger.
Curious how anti-Calendly people feel about the "include a calendly link + ask people to send timeslots if they prefer" strategy.
My feelings are both that it's a great app and yet sometimes I'm irritated when the other person sends me theirs.
If I introspect on the times when I feel the irritation, I notice I feel like they are shirking some work. Previously we were working together to have a meeting, but now I'm doing the work to have a meeting with the other person, where it's my job and not theirs to make it happen.
I think I expect some of the following asymmetries in responsibility to happen with a much higher frequency than with old-fashioned coordination:
Some people are making predictions about this topic here.
On that link, someone comments:
Berkeley's incumbent mayor got the endorsement of Bernie Sanders in 2016, and Gavin Newsom for 2020. Berkeley also has a strong record of reelecting mayors. So I think his base rate for reelection should be above 80%, barring a JerryBrownesque run from a much larger state politician.
https://www.dailycal.org/2019/08/30/berkeley-mayor-jesse-arreguin-announces-campaign-for-reelection/
I just wanted to say I thought this was overall an impressively thorough and thoughtful comment. Thank you for making it!
I’ve created a survey about barriers to entering information security careers for GCR reduction, with a focus on whether funding might be able to help make entering the space easier. If you’re considering this career path or know people who are, and especially if you foresee money being an obstacle, I’d appreciate you taking the survey/forwarding it to relevant people.
The survey is here: https://docs.google.com/forms/d/e/1FAIpQLScEwPFNCB5aFsv8ghIFFTbZS0X_JMnuquE3DItp8XjbkeE6HQ/viewform?usp=sf_link. Open Philanthropy a...
[meta] Carl, I think you should consider going through other long, highly upvoted comments you've written and making them top-level posts. I'd be happy to look over options with you if that'd be helpful.
Thanks for sharing this, Tom! I think this is an important topic, and I agree with some of the downsides you mention, and think they’re worth weighing highly; many of them are the kinds of things I was thinking in this post of mine of when I listed these anti-claims:
...