All of Jack Lewars's Comments + Replies

This is an absolutely staggering challenge - anyone who has ever run even a half marathon will know how long that feels, and this is tens of them back to back!

I also know as a former colleague how much effort Emma puts into training, getting up at 5am every day so she can be at her desk ready to go for 9am.

This may not be the type of fundraising EA usually does, but a pledge of even $1 per mile could help to crowd in new donors who wouldn't give on a recurring basis to New Incentives. I really encourage promoting this campaign while HIA trials campaign fundraising through sport as a way to widen the base of effective donors.

6
Neel Nanda
1mo
I'd be curious to hear your or Emma's case for why it's notably higher impact for a forum reader to donate via the campaign rather than to New Incentives directly (if they're inclined to make the donation at all)

Thanks mate, this is really lovely to read. Will you be at EAG London?

One for the World has had the Double The Donation widget for a couple of years. Unfortunately, it is about to become considerably more expensive, as they are upgrading everyone to a service called Match365. The plus side of this is that it will search your database for people who could be getting matching and email them proactively, and try to smooth the process of them getting matching. The downside is that it's fairly expensive (~$4k/year), but I think it'd still be positive ROI, at least in the first year (probably with diminishing returns after that).

O... (read more)

2
Neil Warren
3mo
I did not know about Benevity. High value comment overall, thank you for your contribution!

I believe GWWC do include 'income you have sacrificed to do direct work' in their pledge. (Correct me if I'm wrong, GWWC folks.)

Although, of course, your main argument (which I endorse) would still be true - you'd almost certainly still be earning substantially above the minimum wage in your region, and could most probably still give up 10% of your income, without materially affecting your wellbeing. So it's a bit debatable if you should then feel like you've "fulfilled a pledge" by doing direct work at a lower salary.

7
Luke Freeman
5mo
(Excuse the brevity, typing on my phone). The spirit/norm is:

1. Choosing a lower paying job for impact reasons (regardless of how high that impact may or may not be) ≠ pledged donation
2. Voluntary pre-tax donation (to an org that meets the pledge criteria) via payroll giving or other means = pledged donation
3. Voluntary pre-tax donation (to an org that meets the pledge criteria, where you also happen to work) via payroll giving or other means (e.g. writing to say "please pay me $X less than is on my contract for Y period") = pledged donation
4. #3, but artificially inflating your salary (e.g. "you offered me X, but can you say you offered me Y?") ≠ pledged donation
5. #3, but the donation doesn't meet the pledge criteria (e.g. "I asked GiveDirectly to pay me less than my contract, but I truly believe that The Humane League is the best thing I could be donating to right now") ≠ pledged donation

TL;DR spirit of the pledge: a voluntary pre-tax donation/salary sacrifice can be the means by which someone fulfils their pledge if it's basically a different way of transacting a voluntary donation. Like if you owe a friend $20 and then use your card to pay $100 for a dinner for the both of you, it might be easier for him to just pay you $30 (instead of him paying you $50 and you then giving him $20).
1
Péter Drótos
5mo
Yes, the wellbeing argument still applies even after the pay cut if you're still substantially above the regional minimum. But if you compare it to your original wage: given that you only committed to a barely noticeable sacrifice, which the pay cut itself likely already surpasses significantly, that additional 10% might just be too big of an ask. However, if one is also just fine with earning around the lower wage, and even donating from that, then I think considering something like taking the further pledge may also be an interesting idea.
6
Larks
5mo
This does not seem like a very plausible interpretation of the actual words in the pledge, and the GWWC FAQ is quite clear on this matter:
2
PeterSlattery
5mo
Is this true GWWC? I didn't realise that sacrificed income counted.

Thanks for clarifying, Lizka. Just for clarity on what follows - I absolutely don't think you're thinking this through in bad faith, so I don't want to come across as suggesting that. I do wonder if there might be some blind spots in your reasoning though, so I'm testing out the following to shine a light into those grey areas.

-

On your point 1 - what if it was just "before my expected natural death/in my will"? I guess my point is: would you accept that it's reasonable to pledge to give away financial resources that you ultimately didn't need, after they had ... (read more)

One potential crux here is: in what percentage of potential universes should my runway be long enough? 100 percent is impossible, and 99.999999 percent is impossible too, with a possible exception for HNW individuals.

One potential diagnostic here (in general, not commenting on anyone's particular situation): if someone concludes that they need a very high coverage runway, how much of their current income are they devoting to getting there?

If I conclude that I needed 99 percent coverage, the risk of the runway not being complete when I needed it would be si... (read more)

Thanks for writing this - it definitely makes sense to me and resonates with another discussion we had in the Berlin EA office recently on what counts as "disposable income".

I would just note three things:

  1. If you are not pledging in order to build up financial security, would you consider pledging to donate these funds later in life if it turns out you don't need them? A lot of your reasoning seems to be about self-insurance, which makes sense, but in the event that you don't need the insurance (e.g. because you stay impactfully employed largely without a
... (read more)

Thanks for engaging! Quick thoughts:

  1. Yeah, I don't expect to be passing on a nontrivial inheritance to kids. Pledging to do something specific here currently seems unfeasible, though; I have no idea what the world will be like when I'm in my 70s. Examples of weirdness (even setting aside AI developments): maybe we've made serious medical breakthroughs and I'm still expecting to work for a long time, maybe money works in seriously different ways, etc. I haven't thought about this much, though, and it might be worth thinking about (e.g. maybe there's a nicely
... (read more)

David Miliband, CEO of IRC, for an EA-adjacent view on how to be most effective in global health.

David can speak to why he doesn't just follow EA orthodoxy in running a very large development org with a massive budget. These reasons might prove to be good or bad, or just thought-provoking.

1
Chantal
6mo
Yes I would love to hear a little more from "mainstream" aid and development orgs and have that discussion around how they see EA ideas, and how EA-compatible ideas are growing (or not) within those spaces. Also USAID, World Bank etc. although I don't have a specific name.

Thanks, I agree with your clarification on the point I was trying to make

Thanks, this also made me pause. I can imagine some occasions where you might encourage employees to break the law (although this still seems super ethically fraught) - for example, some direct action in e.g. animal welfare. However, the examples here are 'to gain recreational and productivity drugs' and 'to drive around doing menial tasks'.

So if you're saying "it isn't always unambiguously ethically wrong to encourage employees to commit crimes" then I guess yes, in some very limited cases I can see that.

But if you're saying "in these instances it was honest, honourable and conscientious to encourage employees to break the law" then I very strongly disagree.

I'm particularly interested in whether or not they were encouraged to break the law for people who had financial and professional power over them, which seems less nuanced than 'how threatening is or isn't this WhatsApp exchange'.

This gave me pause for thought, so thank you for writing it. I also respect that you likely won't engage with this response to protect your own wellbeing.

I just want to raise, however, that I think you have almost completely failed to address a) the power dynamics involved; and b) the apparently uncontroversial claim that people were asked to break laws by people who had professional and financial power over them.

It seems impossible to square the latter with being "honest, honourable and conscientious".

Ruby
7mo

There are a lot of dumb laws. Without saying it was right in this case, I don't think that's categorically a big red line.

I understand that you are using this as an example of something you think is untrue and to demonstrate the asymmetrical burden of refuting a lot of claims.

However, if you're prioritising, I would be most interested in whether it is true that you a) encouraged someone who you had financial and professional power over to drive without a driving licence; and b) encouraged someone in the same situation to smuggle drugs across international borders for you.

Whether or not they are formally an employee, encouraging people you have financial and professional power... (read more)

We chose this example not because it’s the most important (although it certainly paints us in a very negative and misleading light) but simply because it was the fastest claim to explain where we had extremely clear evidence without having to add a lot of context, explanation, find more evidence, etc. 

Even so, it took us hours to put together and share. Both because we had to track down all of the old conversations, make sure we weren’t getting anything wrong, anonymize Alice, format the screenshots (they kept getting blurry), and importantly, write i... (read more)

I'm surprised to see many comments that treat something other than this (particularly the request to transport drugs across a country border) as the crux.

From my read of Ben Pace's post, Nonlinear admits that this is true.

Beautifully written, thank you for writing it

2
Omnizoid
7mo
Thank you!  It was your speech at the OFTW meeting that largely inspired it. 

Thank you for writing this and for the honest self evaluation.

Can I reserve two of these for me and my wife (not on the forum and so not under 'Going' above)? If not, I can book my own

1
AronM
9mo
Okay got 2 for you :) (I am out of tickets. Now you need to book yourself.)

Hey Lynn - so OFTW probably isn't best-placed for a national group, as our page would look something like this: https://donational.org/oftw-uk

However, Giving What We Can might be able to offer something a bit better.

What country are you based in?

1
lynn
9mo
Thanks for responding Jack!  I co-organise the EA UK group: https://effectivealtruism.uk/

Thanks for engaging so positively here.

A couple of quick reactions:

much more cost-effectively than GiveWell's top charities, whose sign of impact is unclear to me

This is a very bold claim, made quite casually! Especially in light of:

There is a sense in which feedback loops are short.

I would evaluate these options through the GiveWell criteria - evidence of effectiveness, cost-effectiveness and room for more funding.

For the GiveWell charities, they score very highly on each metric. For example, they are each supported by multiple randomised control trials. ... (read more)

3
Vasco Grilo
9mo
Likewise! Sorry. I had given some context in the post (not sure you noticed it). You can find more in section 4 of Mogensen 2019 (discussed here): You say that: I believe those criteria are great, and wish effective giving organisations applied them to their own operations. For example, doing retrospective and prospective cost-effectiveness analyses. My concern is that GiveWell's metrics (roughly, lives saved per dollar[1]) may well not capture most of the expected effects of GiveWell's interventions. For example: From Mogensen 2019: Feel free to check section 4.1 for many positive and negative consequences of increasing and decreasing population size. Note I am not saying the relationships are simple or linear, just that they very much matter. Without nuclear weapons, there would be no risk of nuclear war. In any case, I agree the correlation between longtermist outcomes (e.g. lower extinction risk) and the measurable outputs of longtermist interventions (e.g. fewer nuclear weapons) will tend to be lower than the correlation between neartermist outcomes (e.g. deaths averted) and the measurable outputs of neartermist interventions (e.g. distributed bednets). However, my concern is that the correlation between neartermist outcomes (e.g. deaths averted) and ultimately relevant outcomes (welfare across all space and time, not just very nearterm welfare of humans) is quite poor.  Fair! For what it is worth, I was not committed to donating solely to longtermist interventions from the onset. I used to split my donations evenly across all 4 EA Funds, and wrote articles about donating to GiveWell's top charities in the online newspaper of my former university. I agree longtermist organisations should do more to assess their cost-effectiveness and room for more funding. One factor is that longtermist organisations tend to be smaller (not Nuclear Threat Initiative), so they have fewer resources to do such analyses (although arguably still enough). 1. ^ In real

Finally, I think you acknowledge but probably underweight the importance of giving more weight to recent performance. For many organisations, the 'revenue curve' of donations will start out low but then grow rapidly. So the relevant thing for me is the direction of travel of Ayuda Efectiva, not its performance as an average of its first three years. You can see the value of looking at the direction of travel if you look at the performance of Effektiv Spenden and, to some extent, Giving What We Can (although GWWC had significant 'unfair advantages' in its early years). In each case, their performance has improved substantially over time.

2
Vasco Grilo
9mo
Thanks for noting that, Jack! Note the factual non-marginal multipliers I present for the whole period (e.g. 2019 to 2021 for Ayuda Efectiva) are not the mean across the factual non-marginal multipliers for the years of that period. I calculate the factual non-marginal multiplier for a given period from the ratio between donations received to be directed towards effective organisations and costs, so the years with greater volume of donations and costs will have a larger weight. This explains why Ayuda Efectiva's multiplier for 2019 to 2021 (1.34) is not too different from that for 2021 (1.72). It would be nice if effective giving organisations forecasted their future costs, donations received, and multipliers, thus doing a prospective cost-effectiveness analysis.
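The aggregation described above - a period multiplier computed as pooled donations over pooled costs, rather than as the mean of yearly multipliers - can be sketched as follows. The yearly figures here are made up purely for illustration; they are not Ayuda Efectiva's actual numbers.

```python
# Sketch of the "factual non-marginal multiplier" aggregation described
# above. All figures are hypothetical, for illustration only.
donations = {2019: 50_000, 2020: 120_000, 2021: 400_000}  # received, by year
costs = {2019: 60_000, 2020: 100_000, 2021: 230_000}      # org costs, by year

# Period multiplier: ratio of pooled donations to pooled costs.
# High-volume years dominate, which is the weighting the comment describes.
period_multiplier = sum(donations.values()) / sum(costs.values())

# For contrast: the simple mean of yearly multipliers, which treats
# every year equally and can differ noticeably.
yearly = {year: donations[year] / costs[year] for year in donations}
mean_of_yearly = sum(yearly.values()) / len(yearly)

print(round(period_multiplier, 2))  # 1.46 with these hypothetical figures
print(round(mean_of_yearly, 2))     # 1.26 with these hypothetical figures
```

With these invented numbers the pooled ratio (1.46) sits much closer to the big 2021 year's own ratio (about 1.74) than the unweighted mean does, which mirrors why Ayuda Efectiva's 2019-2021 multiplier is pulled towards its 2021 value.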

In contrast, 11 % and 15 % of GWWC’s pledge and non-pledge donations went towards the area of creating a better future, which I think is much more effective. I do not think this corresponds to an extreme position.


I don't think this position is "extreme" but it is certainly highly debatable. Longtermist giving has fewer donation opportunities; can absorb less extra funding and deploy it effectively; and has very long feedback loops, which are hard to measure and have untested theories of change. In the case of AI safety, it also seems to have... (read more)

8
Vasco Grilo
9mo
Thanks for the comment, Jack! Could you clarify what you mean by donation opportunities? The way I think about it, it makes sense for small donors to donate to whatever organisation/fund they think is more cost-effective at the margin (for large donors, diminishing marginal returns are important, so it makes sense to follow a portfolio approach). Personally, I would say interventions around biosecurity and pandemic preparedness and civilisation resilience can absorb and deploy funding much more cost-effectively than GiveWell's top charities, whose sign of impact is unclear to me. There is a sense in which feedback loops are short. Some examples:

* Outcome: decreasing extinction risk from climate change. Goal: decreasing emissions. Nearterm proxy: adoption of a policy to decrease emissions.
* Outcome: decreasing extinction risk from nuclear war. Goal: decreasing the number of nuclear weapons. Nearterm proxy: agreements to limit nuclear arsenals.
* Outcome: decreasing extinction risk from engineered pandemics. Goal: increasing pandemic preparedness. Nearterm proxy: ability to rapidly scale up the production of vaccines.

You can think about it in another way. Can projects funded by Open Philanthropy (or others) meaningfully increase the small number of people working on AI safety (90 % confidence interval, 200 to 1 k), or improve their ability to do so? I agree there is lots of uncertainty, but I do not think one should assume Open Philanthropy has figured out the best way to handle it. I believe individuals could use Open Phil's views as a starting point, but then should (to a certain extent) try to look into the arguments, and update their views (and donations) accordingly. Personally, I used to distribute all my donations evenly across the 4 EA Funds (25 % each), but am now directing 100 % of my donations to the Long-Term Future Fund. This does not mean I think neartermist interventions should receive 0 donations. Even if I thought the optimal fraction of longtermi

Thanks for this Vasco - always a useful exercise to look at cost-effectiveness, especially in an area like effective giving, where the money-moved is quite easily measured.

Some thoughts on this, which I'll split into different comments for ease of discussion:

"Nevertheless, the counterfactual marginal multipliers adjusted for cost-effectiveness and indirect impacts should ideally be equal. In other words, donating to any effective giving organisation should be similarly effective taking into account all effects."

This seems very unlikely to be true in p... (read more)

I'd also submit that the relative impact of effective-giving organizations nearer the "grassroots" level will likely be underestimated by looking solely at money moved. For example, grassroots effective-giving campaigns provide people with accessible ways to take action, which itself can spur greater commitment and downstream positive actions that aren't captured well by a money-moved analysis alone. In contrast, money moved likely does a better job capturing the bulk of the impact from UHNW outreach.

2
Vasco Grilo
9mo
Thanks, Jack! It is always good to receive feedback on such exercises too! I agree with all the points you make. As I said: However, although it is fine for the (all things considered) factual non-marginal multipliers to be different, the (all things considered) counterfactual marginal multipliers should be the same. If the marginal cost-effectiveness of donating to X is higher than that of donating to Y, one should donate more to X at the margin (which does not mean one should donate 0 to Y).

Thanks for publishing this Ollie - really interesting, and definitely great 'truth-seeking'.

I think I'd highlight some of your caveats (although this could be a case of me prioritising intuition over data and being misled by that).

My experience as a full-time ED within EA is that retreats are substantially more valuable than EAGs/EAGxs/EAGxVirtuals. For example, I would skip every conference to go to the Effective Giving Summit; and I felt that this summit was roughly 10x more valuable than what I would otherwise have spent the time on. I expect this relat... (read more)

2
OllieBase
9mo
Thanks, Jack. Yes, I wouldn't be surprised if some retreats are more valuable for people who are already engaged, perhaps because the admissions process is more selective. But I would say you also aren't the main target for community-building events; the difference seems small for those newer to the field.
5
Jason
9mo
I think your online-conference paragraph points, among other things, to the cost of attendee time as an important factor to weigh in many cases. It's plausible to me that the online vs in person decision would come down to a time-money tradeoff.

I think the application process is spelt out in the application pack, which is still live here: https://1fortheworld.org/jobs-at-oftw

We did an initial screen on the application form, then 3 people reviewed the remaining candidates in more depth to form a longlist, and now 10 people are having three 30-min 'informal chats' with me and two Board members. Finally, our recommendations will go to the whole Board. In each round, we had three opinions - mine plus two Board members'.

Grayden and the EA Good Governance Project will advise you to keep the ED ou... (read more)

Yes, this is more what I meant (although not sure this defuses the criticisms/disagreement)

This was basically my thinking. I think it is reasonable to keep an emergency fund to cover things like (in particular) unexpected healthcare bills, and that giving away 10% would make this hard to do. My anecdotal experience of high cost of living cities in the US is that it would be challenging to live there on $35k of take home salary.

Of course, in a strictly utilitarian sense, I guess it isn't "reasonable", because it's not more "reasonable" than protecting someone from a deadly case of malaria - but then none of us lives out that maximalist thinking i... (read more)

100%. Very happy to share our onboarding flow for anyone who is interested

Great question. Our two main sources were targeted messages to potential candidates using LinkedIn searches; and 80k, which has such a big reach that it brought in a lot of people who wouldn't consider themselves 'part of EA' (obviously you could argue that they are if they are on 80k's newsletter, but I think that is a low bar).

We are also a bit unusual in having a Board almost entirely composed of people who are 'EA adjacent but not core EA', and so that helped to spread the word to people within their networks but still able to offer an outside view.

Risks/objections/exemptions:

1. Bigger Boards are unwieldy - yes, they can be. This is really a question of Board process though. A good chairperson and CEO can keep meetings on track, prevent grandstanding etc.

2. Board members can interfere and slow things down - yes, bad Board members can, but you can manage this (especially with a competent chairperson). Imagine an employee told you that they won't accept having a manager, because managers can interfere and slow things down. I imagine the response would be 'yes, bad managers do that; but you don't get to... (read more)

Thanks for this Sjir. Unsurprisingly, I agree with almost all of it.

I think you make this point overall but just want to emphasise: I don't think giving 10% is feasible for the median American in their twenties (for example). So I think I would phrase this as something like 'giving effectively should be normal for those with disposable income in high income countries; and it should be normal to give significantly according to your means.'

You may be right that 10% is in fact feasible for most people within EA, but that is a function of EA attracting a lot of people with unusual financial security.

4
Joseph Lemien
10mo
I like this rephrasing, because it makes it somewhat clear that if you lack disposable income, we aren't expecting you to put yourself in hardship in order to donate.

Also I hope everyone who is downvote disagreeing with Jack here is giving away at least 10% :D :D :D.

[anonymous]
10mo

I feel like the spirit of early EA and Giving What We Can (and Christianity, TBH) was pushing back on this Occupy mindset of "We are the 99%." In other words, if you're part of the global elite (or even the 50%?), rather than focusing your energy on attacking people even more privileged than you, focus on taking responsibility for your own privilege and what you personally can give first.

I think it's even more important to promote this idea today as I get the sense that the Western world is getting even angrier and adopting even more of a victim mindset, s... (read more)

I think this might come down to opinion and lifestyle expectation, but I think 10% could be feasible for the median American in their 20s.

According to this Forbes article, the median wage of someone in their 20s in the States is about $45,000 US - this puts you in the realm of the top 1% of earners worldwide (hard to know exactly).

Yes, the cost of living is higher in America than in many countries, but it's lower than in the majority of other high income countries. I think that giving away 5,000 dollars-ish annually and living on 40k could be feasible f... (read more)

I'm interested to know if you think there could be a problem with attracting people where remuneration is the biggest decision-making factor.

I am somewhat sceptical of this argument (I have seen it used to say "we should pay minimum wage because if you really care you'll take the job anyway") - but I also wonder if we can sustain a bidding war against e.g. DeepMind for talent, and if a better approach might be something like GiveWell's ('salary shouldn't be the primary reason people leave GiveWell, but also not the primary reason that they join').

What do you think?

1
Prometheus
10mo
I don't think anyone can win a bidding war against OpenAI right now, because they've established themselves as the current "top dog". Even if some other company can pay them more, they'd probably still choose to work at OpenAI instead, just because they're OpenAI. But not everyone can work at OpenAI, so that still gives us a lot of opportunity. I don't think this would be much of a problem, as long as the metrics for success are set. As mentioned above, x gains in interpretability is something that can be demonstrated, and at that point it doesn't matter who does it, or why they do it. Other fields of alignment are harder to set metrics for, but there are still a good number of unsolved sub-problems that are demonstrable if solved. Set the metrics for success, and then you don't have to worry about value drift.

Thanks so much for writing this.

In the specific instance that someone challenges you over using QALYs/DALYs at all, what would you say in response?

It seems to me that you do at some point have to bite the bullet and believe that 'some disabilities are more life limiting than others'; and 'there are many disabilities that I would choose to avoid'. But then I feel like I'm implicitly saying something about valuing some people's lives less than others, or saying that I would ultimately choose to divert resources from one person's suffering to another's.

This actually came up in a corporate talk that I did and I fudged the answer, incidentally.

This is a conversation I have a fair amount when I talk to non-EA + non-medical friends about work, some quick thoughts:

If someone asks me Qs around DALYs at all (i.e. "why measure"), I would point to general cases where this happens fairly uncontroversially, e.g.:

-If you were in charge of the health system, how would you choose to distribute the resources you get?

-If you were building a hospital, how would you go about choosing how to allocate your wards to different specialties?

-If you were in an emergency waiting room and you had 10 people in the waitin

... (read more)

You might enjoy the book 'Thanks for the Feedback', which basically emphasises this point a lot.

I guess any of the following might be examples (emphasis on might):

  • it seems bad to buy expensive historic buildings, which don't seem fit-for-purpose for the proposed use case and have really high running costs - but the people involved are really smart, so...

  • it seems bad to fly people to the Bahamas to do coworking and collaboration, and like this is being driven by a billionaire's desire for company and personal convenience. It seems like this wouldn't be the method you would choose if you were starting from a point of maximising impact and cost-ef

... (read more)
2
MichaelPlant
10mo
Or, senior AI researcher says that AI poses no risk because it's years away. This doesn't really make sense - what will happen in a few years? But he does seem smart and work for a prestigious tech company, so...
3
NickLaing
10mo
Fantastic examples, I understand it better now. And 100% agree with you: I assessed all of those examples above and was bewildered that so many people seemed to defend them, often based on the fact that "smart and good people" had made the decision. Nice one

I agree with your second point wholeheartedly.

Could you give some examples of the panics over minor incidents?

The Bostrom email situation, and the Tegmark grant proposal situation, both seem very minor to me, at least compared to many other things that have happened to EA in the past with the same amount of panic or less.

Thanks Tristan. I outlined these briefly above but I think they are things like:

  • anyone in a relationship with anyone else is recused from all professional decision-making affecting that person. They can't hire or fire them, they can't conduct performance reviews, they can't promote them, they can't set their pay. They definitely shouldn't be the decision-maker on these things, but ideally shouldn't have input either - they just are not able to be impartial, and any process they fed into could easily be challenged as unfair (by their partner if they don't
... (read more)

Thanks for posting this.

Just to check my understanding - did the participants actually donate their own money? Or were they asked how many fictional units of money they would theoretically donate?

1
benleo
1y
Participants donated their own money. They received a bonus of £1 and could choose how much of it they wanted to keep or donate. 

This is my intuition as well - the phrasing of the 'strong demandingness' seemed quite jarring compared to the usual language of donation page copy.

I'm very surprised that you think a 3 person Board is less brittle than a bigger Board with varying levels of value alignment. How do 3 person Boards deal with all the things you list that can affect Board make up? They can't, because the Board becomes instantly non-quorate.

4
Jonas V
1y
I expect a 3-person board with a deep understanding of and commitment to the mission to do a better job selecting new board members than a 9-person board with people less committed to the mission. I also expect the 9-person board members to be less engaged on average. (I avoid the term "value-alignment" because different people interpret it very differently.)

It seems intuitive that your chances of ending up in a one-off weird situation are reduced if you have people who understand the risks properly in advance. I think a lot of what people with technical expertise do on Boards is reduce blind spots.

I think that's false; I think the FTX bankruptcy was hard to anticipate or prevent (despite warning flags), and accepting FTX money was the right judgment call ex ante.

Hi Robin - thanks for this and I see your point. I think Jason put it perfectly above - alignment is often about the median Board member, where expertise is about the best Board member in a given context. So you can have both.

I have also seen a lot of trustees learn about the mission of the charity as part of the recruitment process and we shouldn't assume the only aligned people are people who already identify as EAs.

The downsides of prioritising alignment almost to the exclusion of all else are pretty clear, I think, and harder to mitigate than the downsides of lacking technical expertise, which takes years to develop.

7
Jason
1y
The nature of most EA funding also provides a check on misalignment. An EA organization that became significantly misaligned from its major funders would quickly find itself unfunded. As opposed to Wikimedia, which had/has a different funding structure as I understand it.

Isn't part of this considering whether Will's comparative advantage is as a Board member? It seems very unlikely to me that it is, versus being a world-class philosopher and communicator.

So I agree with your general point that leaders who make mistakes might not need to resign, but in the specific case I can't see how Will is most impactful by being a Board member at really any org, as opposed to e.g. a philosophical or grant-making advisor.

Thanks for making the case. I'm not qualified to say how good a Board member Nick is, but want to pick up on something you said which is widely believed and which I'm highly confident is false.

Namely - it isn't hard to find competent Board members. There are literally thousands of them out there, and charities outside EA appoint thousands of qualified, diligent Board members every year. I've recruited ~20 very good Board members in my career and have never run an open process that didn't find at least some qualified, diligent people who did a good job.

EA ... (read more)

Non-profit boards have 100% legal control of the organisation – they can do anything they want with it.

If you give people who aren't very dedicated to EA values legal control over EA organisations, they won't be EA organisations for very long.

There are under 5,000 EA community members in the world – most of them have no management experience.

Sure, you could give up 1/3 of the control to people outside of the community, but this doesn't solve the problem (it only reduces the need for board members by 1/3).

no lawyers/accountants/governance experts

I have a fair amount of accounting/legal/governance knowledge and, as part of my board commitments, think it's a lot less relevant than deeply understanding the mission and strategy of the relevant organization (along with other more relevant generalist skills like management, HR, etc.). Edit: Though I do think if you're tied up in the decade's biggest bankruptcy, legal knowledge is actually really useful, but this seems more like a one-off weird situation.

TL;DR: You're incorrectly assuming I'm into Nick mainly because of value alignment, and while that's a relevant factor, the main factor is that he has an unusually deep understanding of EA/x-risk work that competent EA-adjacent professionals lack.

I might write a longer response. For now, I'll say the following:

  • I think a lot of EA work is pretty high-context, and most people don't understand it very well. E.g., when I ran EA Funds work tests for potential grantmakers (which I think is somewhat similar to being a board member), I observed that highly skilled
... (read more)

@Jack Lewars is spot on. If you don’t believe him, take a look at the list of ~70 individuals on the EA Good Governance Project’s trustee directory. In order to effectively govern you need competence and no collective blindspots, not just value alignment.

Alignment is super-important for EA organisations; I would put it as priority number one. If you're aligned with EA values, then you're at least trying to do the most good for the world, whereas if you're not, you may not even be trying to do that.

For an example of a not-for-profit non-EA organisation that has suffered from a lack of alignment in recent times, I would point to the Wikimedia Foundation, which has regranted excess funds to extremely dubious organisations: https://twitter.com/echetus/status/1579776106034757633 (see also: https://en.wikiped... (read more)

Jason
1y

And even if one really values "alignment," I suspect that a board's alignment is mostly that of its median member. That may have been less true at EVF where there were no CEOs, but boards are supposed to exercise their power collectively.

On the other hand, a board's level of legal, accounting, etc. knowledge is not based on the mean or median; it is mainly a function of the most knowledgeable one or two members.

So if one really values alignment on say a 9-member board, select six members with an alignment emphasis and three with a business skills emphasis. (The +1 over a bare majority is to keep an alignment majority if someone has to leave.)

Great post, thanks.

I might elaborate on your last category to include a) well-intentioned high competence people accidentally creating bad systems; and b) well-intentioned high competence people put into bad systems by leadership (so not just leaders but e.g. a community health team trying to deal with sexual harassment by one of their own Board members).

I think your section header covers this, but the body focuses specifically on CEOs and Boards. Lots of people in EA, not just leadership, can end up making mistakes because the systems/policies they work within aren't fit for purpose.

Hard agree with this. It's why I think healthy handling of relationships at work is so important as a stepping stone to a healthy community overall.

Thanks for writing this Amber. I pretty firmly disagree, but I'm upvoting it anyway because I think we need to discuss these issues in the open, and you've put across your point of view in a measured, reasonable way. I hope to draft a response soon with some alternative suggestions.

(Also, as usual, @Peter Wildeford has made most of my points in his comment.)

I think my main disagreement is that this is taking on a straw man argument. I'm not aware of anyone suggesting that we should "prevent people from forming relationships with whom they want", at least n... (read more)

1
Tristan Williams
1y
Your answer here may just be that it is organization-specific, but something I'm keenly interested in is what the "X, Y and Z" from point 1 might look like. Questions come up here of a sort of federal vs. state way of legislating how to navigate these issues. A problem I see with your proposition is that it leaves the decisions in the hands of those who direct these various organizations, potentially leading EA to (what I perceive to be) the predominant form of "X, Y and Z": "romantic relationships are not allowed with colleagues". But I think the spirit of your comment is otherwise equally amenable to a process I favour more, which is the federal equivalent here: discussing the issue as a community and putting forth ideals based on that discussion for entities related to EA, guiding principles, if you will, that can be deviated from but that set a standard. This also carries the benefit of helping us create standards for non-work contexts like EAGs. I see the next, most productive outgrowth of your comment to be figuring out what "X, Y and Z" should be, and, not being experienced in this, I would love to hear your thoughts about what might be reasonable.

And some version of point 5 is the employer's legal obligation, at least under some circumstances in the US. E.g., this discussion:

https://everfi.com/blog/workplace-training/sexual-harassment-outside-of-work/

3
Peter Wildeford
1y
Yeah this is basically what I was trying to say in my comment.

Thanks Jack, I found this to be a very pragmatic and fair approach. Maybe the community health team could develop a suggested policy document for personal relationships within the professional context using this as a starting basis.

I think the word "probably" in this quotation is quite concerning - you should 100%, definitely, in every case and without question not let someone manage someone they are dating. It's an unresolvable conflict of interest and totally unprofessional.

But also, to Quinn's point, if it's a small org, even making this change might not really mitigate the problem. Imagine a 5 person team, where the CEO and one of the staff are dating, so then you change the reporting line for the junior person in the relationship. It seems highly probable that the new manager is going to be influenced by the fact that their boss is dating their subordinate.

I'm going to write a longer comment on how I think you can manage this below.

Load more