bruce

1385 · Joined Oct 2021

Bio

Doctor from NZ, now doing Global Health & Development Research @ Rethink Priorities, but interested and curious about most EA topics.

Outside of RP work, I spend some time doing independent "grand futures" / GPR research (Anders Sandberg/BERI) and very sporadic grantmaking (EAIF). Also looking to re-engage with UN processes for OCA/Summit of the Future.

Feel free to reach out if you think there's anything I can do to help you or your work, or if you have any Qs about Rethink Priorities! If you're a medical student / junior doctor reconsidering your clinical future, or if you're quite new to EA / feel uncertain about how you fit in the EA space, have an especially low bar for reaching out.

Outside of EA, I do a bit of end-of-life care research and climate change advocacy, and outside of work I enjoy some casual basketball, board games and good indie films. (Very) washed up classical violinist and oly-lifter.

All comments in personal capacity unless otherwise stated.

Comments (82)

Hey Ollie! Hope you're well. 

"I think there’s a tricky trade-off between clarity and scope here... if we state guidelines that are very specific (e.g. a list of things you mustn’t do in specific contexts), we might fail to prevent harmful behaviour that isn’t on the list."

I want to gently push back on this a bit - I don't think this is necessarily a tradeoff. It's not clear to me that the guidelines have to be "all-inclusive or nothing". As an example: just because the guidelines say you can't use the swapcard app for dating purposes, it would be pretty unreasonable for people to interpret that as "oh, the guidelines don't say I can't use the swapcard app to scam people, so that must mean scamming is endorsed by CEA".

And even if the current guidelines' failure to explicitly prohibit using swapcard to scam other attendees contributes to some degree of "failing to prevent harmful behaviour that isn't on the list", that seems like a bad reason to choose not to state "don't use swapcard for sexual purposes".

RE: guidelines that include helpful examples, here's one that I found from 10 seconds of googling.

  • First it defines harassment and sexual harassment fairly broadly. Of course, what exactly counts as "reasonably be expected or be perceived to cause offence or humiliation" can differ between people, but this is a marginal improvement compared to the current EAG guidelines, which simply state "unwanted sexual attention or sexual harassment".
  • It then gives a non-exhaustive list of fairly uncontroversial actions for its context - CEA can adopt its own standard! But I think it's fair to say that just because this list doesn't cover every possibility doesn't mean the list isn't worth including.
  • Notably, it also outlines a complaint process and details possible actions that may reasonably occur in response to a complaint. 

As I responded to Julia's comment that you linked, I think these lists can be helpful because most reported cases likely come not from people intentionally wishing to cause harm, but from differences in norms, communication, or expectations around what might be considered harmful. Having an explicit list of actions helps get around these differences by being more precise about actions that are likely to be considered net negative in expectation. If there are a lot of examples in a grey area, that may be an argument to exclude those examples, but it isn't really an argument against having a list that contains less ambiguous examples.

Ditto RE: different settings - this is an argument to narrow the scope of the guidelines, and to not write a single guideline intended to cover both the career fair and the afterparty, but not an argument against expressing what's unacceptable in one specific setting (especially when that setting is something as crucial as "EAG conference time").

Lastly, RE: "Responses should be shaped by the wishes of the person who experienced the problem" - of course they should be! But a list of possible actions can be given without committing the team to a set response, and including potential actions is still reassuring and helpful for people to know what is possible.

Again, this was just the first link I clicked, I don't think it's perfect, but I think there are multiple aspects of this that CEA could use to help with further iterations of its guidelines.

 

"Another challenge is that CEA is the host of some events but not the host of some others associated with the conferences. We can’t force an afterparty host or a bar manager to agree to follow our guidelines, though we sometimes collaborate on setting norms or encourage certain practices."

I think it's fine to start from CEA's circle of influence and have good guidelines + norms for CEA events - if things go well, this may incentivise other organisers to adopt these practices (or perhaps they won't, because their context is sufficiently different, which is fine too!). But even if other organisers don't adopt better guidelines, this doesn't seem like a particularly strong argument against adopting clearer guidelines for CEA events. The UNFCCC presumably aren't using "oh, we can't control what happens in UN Youth events globally, and we can't force them to agree to follow our guidelines" as an excuse not to have guidelines of their own. And because they have their own guidelines, and many UN Youth events try to emulate what the UN event proper looks like, those events will (at least try to) adopt a similar level of formality.

 

One last reason to err on the side of more precise guidelines echoes point 3 in what lilly shared above - if guidelines are vague and more open to interpretation by the Community Health team, this requires a higher level of trust in the CH team's track record, decision-making, management of CoIs, etc. To whatever extent recent events reflect actual gaps in this process, or even just a change in perception here, erring on the side of clearer guidelines can help with accountability and trust-building.

Thanks for writing this post!

I feel a little bad linking to a comment I wrote, but the thread is relevant to this post, so I'm sharing in case it's useful for other readers, though there's definitely a decent amount of overlap here.

TL;DR

I personally default to being highly skeptical of any mental health intervention that claims a ~95% success rate as well as a PHQ-9 reduction of 12 points over 12 weeks, as this is a clear outlier among treatments for depression. The effectiveness figures from StrongMinds are also based on studies that are non-randomised and poorly controlled. There are other questionable methodological issues, e.g. around adjusting for social desirability bias. The topline cost-effectiveness figure of $170 per head is also possibly an underestimate: while ~48% of clients were treated through SM partners in 2021, and Q2 results (pg 2) suggest StrongMinds is on track for ~79% of clients treated through partners in 2022, the expenses and operating costs of the partners responsible for these clients were not included in the methodology.
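To make the underestimate concern concrete, here's a rough back-of-the-envelope sketch (my own illustration, not StrongMinds' or HLI's methodology; it assumes, without evidence, that partners spend roughly as much per client as StrongMinds does, and that partner-treated clients are counted in the denominator of the $170 figure):

```python
# Hypothetical BOTEC: how much could excluding partner costs understate cost per head?
# All assumptions here are mine, for illustration only.

sm_cost_per_head = 170  # reported topline figure (USD)

# Share of clients treated through partners, whose costs were reportedly excluded
partner_share = {"2021": 0.48, "2022 (projected)": 0.79}

for year, share in partner_share.items():
    # If partner-treated clients sit in the denominator but partner costs are not
    # in the numerator, and partners spend similarly per client, the all-in cost
    # per head is understated by a factor of 1 / (1 - share).
    all_in = sm_cost_per_head / (1 - share)
    print(f"{year}: naive all-in estimate ≈ ${all_in:.0f} per head")

# Output: 2021: ≈ $327 per head; 2022 (projected): ≈ $810 per head
```

Even if the equal-cost-per-client assumption is off, the direction of the bias is the same: the more treatment shifts to partners whose costs aren't counted, the more the $170 figure flatters the true cost-effectiveness.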

(This mainly came from a cursory review of StrongMinds documents, and not from examining HLI analyses, though I do think "we’re now in a position to confidently recommend StrongMinds as the most effective way we know of to help other people with your money" seems a little overconfident. This is also not a comment on the appropriateness of recommendations by GWWC / FP)

 

(commenting in personal capacity etc)

 

Edit - this is more related to discussions around HLI's work than to the strength of evidence in support of StrongMinds, but I'm including it as it's ultimately relevant to the topline conclusion about StrongMinds:

"If I write a message like that because I find someone attractive (in some form), does that seem wrong to you? :) Genuinely curious about your reaction and am open to changing my mind, but this seems currently fine to me. I worry that if such a thing is entirely prohibited, so much value in new beautiful relationships is lost."

Yes, you're still contributing to harm (at least probabilistically), because the norm and expectation is currently that EAG / swapcard shouldn't be used as a speed-dating tool. So if you're reaching out only because you find them attractive despite that, you are explicitly going against what other parties expect when engaging with swapcard, and they don't have a way to opt out of receiving your norm-breaking message.

I'll also mention that you're arguing for the scenario of asking people for 1-1s at EAGs "only because you find them attractive". This means it would also allow for messages like "Hey, I find you attractive and I'd love to meet." Would you also defend this? If not, what separates the two messages, and why did you choose the example you gave?

Sure, a new beautiful relationship is valuable, but how many non-work swapcard messages lead to one? Put yourself in the shoes of an undergrad attending EAG for the first time, wishing to learn more about a potential career in biosecurity or animal welfare or AI safety. Now imagine they receive a message from you, and from 50 other people who also find them attractive. This doesn't seem like a good conference experience, nor a good introduction to the EA community. It also complicates things with the people they do want to reach out to, as it increases uncertainty about whether the people they want to meet are responding in a purely professional sense or are just being opportunistic. Then there's an additional layer of complexity when you add in things like power dynamics. Having shared professional standards and norms goes some way to reducing this uncertainty, but people need to actually follow them.

If you are worried that you'll lose the opportunity for beautiful relationships at EAGs, then there's nothing stopping you from attending something after the conference wraps up for the day, or even organising some kind of speed-dating event yourself. But note how your organised speed-dating event would be something people choose to opt in to, unlike sending solicitation DMs via an app intended for professional / networking purposes (or some other purpose made explicit on their profile - e.g. if you're sending that DM to someone whose profile says "DM me if you're interested in dating me", then this doesn't apply; the appropriateness of that is a separate convo though).

Some questions for you:

  1. You say you're "open to changing your mind" - what would this look like? What kind of harm would need to be possible for you to believe that the expected benefit of a new beautiful relationship isn't worth it?
  2. What's the case that it's the role of CEA and EAG to facilitate new beautiful relationships? Do you apply this standard to other communities and conferences you attend?

 

I'll also note Kirsten's comment above, which already talks about why it could plausibly be bad "in general":
"The EAG team have repeatedly asked people not to use EAG or the Swapcard app for flirting. 1-1s at EAG are for networking, and if you're just asking to meet someone because you think they're attractive, there's a good chance you're wasting their time. It's also sexualizing someone who presumably doesn't want to be because they're at a work event."

And Lorenzo's comment above:
"Because EAG(x) conferences exist to enable people to do the most good, conference time is very scarce, misusing a 1-1 slot means someone is missing out on a potentially useful 1-1. Also, these kinds of interactions make it much harder for me to ask extremely talented and motivated people I know to participate in these events, and for me to participate personally. For people that really just want to do the most good, and are not looking for dates, this kind of interaction is very aversive."

While I agree that both sides are valuable, I side with the anon here - I don't think these tradeoffs are particularly relevant to a community health team investigating interpersonal harm cases with the goal of "reduc[ing] risk of harm to members of the community while being fair to people who are accused of wrongdoing".

One downside of having the badness of, say, sexual violence[1] be mitigated by the perceived impact of the accused (how is the community health team actually measuring this? How good someone's forum posts are? Whether they work at an EA org? Whether they are "EA leadership"?) when considering what the appropriate action should be (if this is happening) is that it plausibly leads to different standards for bad behaviour. By the community health team's own standards, taking someone's potential impact into account as a mitigating factor seems like it could increase the risk of harm to members of the community (by not taking sufficient action, with the justification of perceived impact), while being more unfair to people who are accused of wrongdoing. To be clear, I'm basing this off the forum post, not any non-public information.

Additionally, a common theme about basically every sexual violence scandal that I've read about is that there were (often multiple) warnings beforehand that were not taken seriously.

If there is a major sexual violence scandal in EA in the future, it will be pretty damning if the warnings and concerns were clearly raised, but the community health team chose not to act because they decided it wasn't worth the tradeoff against the person/people's impact.

Another point is that people who are considered impactful are likely to be somewhat correlated with people who have gained respect and power in the EA space, have seniority or leadership roles etc. Given the role that abuse of power plays in sexual violence, we should be especially cautious of considerations that might indirectly favour those who have power.

More weakly, even if you hold the view that it is in fact the community health team's role to "take the talent bottleneck seriously; don’t hamper hiring / projects too much" when responding to, say, a sexual violence allegation, it seems like it would be easy to overvalue the immediate cost of acting against the person's impact, and undervalue the cost of many more people opting not to get involved, or distancing themselves from the EA movement, because they perceive it to be an unsafe place for women with unreliable ways of holding perpetrators accountable.

That being said, I think the community health team has an incredibly difficult job, and while they play an important role in mediating community norms and dynamics (and thus have a corresponding amount of responsibility), it's always easier to make comments of a critical nature than to make the difficult decisions they have to make. I'm grateful they exist, and don't want my comment to come across like an attack on the community health team or its individuals!

(commenting in personal capacity etc)

  1. ^

    used as an umbrella term to include things like verbal harassment. See definition here.

If this comment is more about "how could this have been foreseen", then this comment thread may be relevant. I should note that hindsight bias means it's much easier to look back and assess problems as obvious and predictable ex post, when powerful investment firms and individuals with skin in the game also missed this.

TL;DR: 
1) There were entries that were relevant (this one also touches on it briefly)
2) They were specifically mentioned
3) There were comments relevant to this (notably, one of these was apparently deleted because it received a lot of downvotes when initially posted)
4) There have been at least two other posts on the forum prior to the contest that engaged with this specifically

My tentative take is that these issues were in fact identified by various members of the community, but there isn't a good way of turning identified issues into constructive actions - the status quo is that we just have to trust that organisations have good systems in place for this, and that EA leaders are sufficiently careful and willing to make changes or consider them seriously, such that all the community needs to do is "raise the issue". And I think the systems within the relevant EA orgs or leadership are what investigations or accountability questions going forward should focus on - all individuals are fallible, and we should be looking at how to build systems such that the community doesn't have to simply trust that the people who have power and are steering the EA movement will get it right, and such that there are ways for the community to hold them accountable to their ideals or stated goals if these appear not to be playing out in practice, or risk not doing so.

i.e. if there are good processes and systems in place, and documentation of these processes and decisions, it's more acceptable (because other organisations that probably have a very good due diligence process also missed it). But if there weren't good processes, or if these decisions weren't careful + intentional, then that's much more concerning, especially in the context of specific criticisms that have been raised,[1] or previous precedent. For example, I'd be especially curious about the events surrounding Ben Delo,[2] and the processes that were implemented in response. I'd be curious about whether there are people in EA orgs involved in steering who keep track of potential risks and early warning signs to the EA movement, in the same way the EA community advocates for in the case of pandemics, AI, or even general ways of finding opportunities for impact. For example, SBF, who is listed as an EtG success story on 80k hours, has publicly stated he's willing to go 5x over the Kelly bet (see the sketch below for why that reads as a warning sign), and described yield farming in a way that Matt Levine interpreted as a Ponzi. Again, I'm personally less interested in the object-level decision (e.g. whether or not we agree with SBF's Kelly bet comments as serious, or with Levine's interpretation as appropriate), but more in what the process was, and how this was considered at the time with the information they had. I'd also be curious about the documentation of any SBF-related concerns that were raised by the community, if any, and how these concerns were managed and considered (as opposed to critiquing the final outcome).
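(As a hedged aside on why the Kelly comment is alarming - this is my own toy illustration with made-up numbers, not a model of SBF's actual positions: betting far above the Kelly fraction doesn't just add variance, it makes expected log wealth negative, i.e. near-certain long-run ruin.)

```python
import math

# Toy repeated even-odds bet with a small edge (numbers are illustrative only).
p = 0.52                 # win probability
kelly = 2 * p - 1        # Kelly fraction for even odds: f* = 2p - 1 = 0.04

def expected_log_growth(f: float, p: float) -> float:
    """Expected log growth per bet when staking a fraction f of bankroll."""
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

for mult in (1, 2, 5):
    f = mult * kelly
    print(f"{mult}x Kelly (stake {f:.0%}): {expected_log_growth(f, p):+.4f} log-growth/bet")

# 1x Kelly: +0.0008 (growth-optimal)
# 2x Kelly: ~0.0000 (the edge is entirely burned on variance)
# 5x Kelly: -0.0123 (wealth shrinks toward zero almost surely)
```

In this toy setup, consistently betting 5x Kelly isn't aggressive growth-seeking but a policy that loses money in the long run with probability approaching one - which is why, process-wise, one might have expected such a public statement to trigger scrutiny.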

Outside of due diligence and ways to facilitate whistleblowing, decision-making processes around the steering of the EA movement are crucial as well. When decisions are made with benefits that clearly affect one part of the EA community while bringing risks pertinent to all,[3] we need to look at how these decisions were made and what was considered at the time, and, going forward, at how to either diversify those risks or make decision-making more inclusive of a wider range of stakeholders, keeping in mind the best interests of the EA movement as a whole.

(this is something I'm considering working on in a personal capacity along with the OP of this post, as well as some others - details to come, but feel free to DM me if you have any thoughts on this. It appears that CEA is also already considering this)

If this comment is about "are these red-teaming contests in fact valuable for the money and time put into them, if they miss problems like this":

I think my view here (speaking only for the red-teaming contest) is that even if this specific contest was framed in a way that missed these classes of issues, the value of the very top submissions[4] may still have made the effort worthwhile. The potential value of a different framing was mentioned by another panelist. If it's the case that red-teaming contests are systematically missing this class of issues regardless of framing, then I agree that would be pretty useful to know, but I don't have a good sense of how we would investigate this.

  

  1. ^

    This tweet seems to have aged particularly well. Despite supportive comments from high-profile EAs on the original forum post, the author seemed disappointed that nothing came of it in that direction. Again, without getting into the object-level discussion of the claims of the original paper, it's still worth asking questions about the processes. If there were actions planned, what did these look like? If not, was that because of a disagreement over the suggested changes, or over the extent to which this was an issue at all? How were these decisions made, and what was considered?

  2. ^

    Apparently a previous EA-aligned billionaire donor(?) who got rich by starting a crypto trading firm, and who pleaded guilty to violating the Bank Secrecy Act.

  3. ^

    Even before this, I had heard from a primary source in a major mainstream global health organisation that there were staff who wanted to distance themselves from EA because of misunderstandings around longtermism.

  4. ^

As requested, here are some submissions that I think are worth highlighting, or that we considered awarding but that ultimately did not make the final cut. (This list is non-exhaustive, and should be taken more lightly than the Honorable mentions, because by definition these posts are less strongly endorsed by the judges. Also commenting in personal capacity, not on behalf of other panelists, etc):

Bad Omens in Current Community Building
I think this was a good-faith description of some potential / existing issues that are important for community builders and the EA community, written by someone who "did not become an EA" but chose to go to the effort of providing feedback with the intention of benefitting the EA community. While these problems are difficult to quantify, they seem important if true, and pretty plausible based on my personal priors/limited experience. At the very least, this starts important conversations about how to approach community building that I hope will lead to positive changes, and a community that continues to strongly value truth-seeking and epistemic humility, which is personally one of the benefits I've valued most from engaging in the EA community.

Seven Questions for Existential Risk Studies
It's possible that the length and academic tone of this piece detracts from the reach it could have, and it (perhaps aptly) leaves me with more questions than answers, but I think the questions are important to reckon with, and this piece covers a lot of (important) ground. To quote a fellow (more eloquent) panelist, whose views I endorse: "Clearly written in good faith, and consistently even-handed and fair - almost to a fault. Very good analysis of epistemic dynamics in EA." On the other hand, this is likely less useful to those who are already very familiar with the ERS space.

Most problems fall within a 100x tractability range (under certain assumptions)
I was skeptical when I read this headline, and while I'm not yet convinced that a 100x tractability range should be used as a general heuristic when thinking about tractability, I certainly updated in this direction, and I think this is a valuable post that may help guide cause prioritisation efforts.

The Effective Altruism movement is not above conflicts of interest
I was unsure about including this post, but I think this post highlights an important risk of the EA community receiving a significant share of its funding from a few sources, both for internal community epistemics/culture considerations as well as for external-facing and movement-building considerations. I don't agree with all of the object-level claims, but I think these issues are important to highlight and plausibly relevant outside of the specific case of SBF / crypto. That it wasn't already on the forum (afaict) also contributed to its inclusion here.


I'll also highlight one post that was awarded a prize, but I thought was particularly valuable:

Red Teaming CEA’s Community Building Work
I think this is particularly valuable because of the unique and difficult-to-replace position that CEA holds in the EA community, and, as Max acknowledges, it benefits the EA community for important public organisations to be held accountable (to a standard that is appropriate for their role and potential influence). Thus, even if the listed problems aren't all fully on the mark, or are less relevant today than when the mistakes happened, a thorough analysis of these mistakes and an attempt at providing reasonable suggestions at least provides a baseline against which CEA can be held accountable for similar future mistakes, and can help with assessing trends and patterns over time. I would personally be happy to see something like this on at least a semi-regular basis (though I'm unsure exactly what time-frame would be most appropriate). On the other hand, it's important to acknowledge that this analysis is possible in large part because of CEA's commitment to transparency.

Thanks so much for doing this!

Do we know how these figures compare with previous years? It might be interesting to see the trend here. I'm mindful this may be a lot of additional work, so perhaps just share what you're happy to in terms of data that already exists or is easily accessible (e.g. basic demographic data for applicants / attendees / speakers).

 

RE: the attendee pool being less diverse than the application pool - while there are multiple possible explanations for this, I wonder whether the current emphasis on comparisons to EA Survey respondent demographics may miss or underweight potential contributing factors / issues in the application process? E.g. a drop from 37.9% of applicants to 22.2% of attendees for POCs would still look more diverse than the 2020 survey. I'd be interested in knowing to what extent this hypothesis has been explored or ruled out, and what kinds of demographic differences in the data might prompt such exploration in the future.

(Also echoing lilly's comment about POC vs specific ethnic group breakdowns - I'd be similarly interested in the breakdown for the applicant pool, for example).

Thanks for clarifying what you meant here was about unconscious efforts - apologies for misunderstanding!

I think, as other commenters are pointing out, the currently proposed policy does seem to have some potential flaws that are worth discussing. I'd be curious about your take on how you'd account for the concerns you raise while also reducing the risks of overtly and covertly costly actions that might arise primarily because they are beneficial to building romantic relationships, or as a result of conscious / self-aware efforts.

"My concern is that if we sexually neuter all EA groups, meetings, and interactions, and sever the deep human motivational links between our mating effort and our intellectual and moral work, we'll be taking the wind out of EA's sails. We'll end up as lonely, dispirited incels rowing our little boats around in circles, afraid to reach out, afraid to fall in love."

These are some pretty strong claims that don't seem particularly well substantiated.

 

"Is trying to be romantically attractive the 'wrong reason' for doing excellent intellectual work, displaying genuine moral virtues, and being friendly at meetings?"

I also feel a bit confused about this. If someone is taking a particular action, or "investing in difficult, challenging behaviors to attract mates", there are clearly contexts where the added intention of "to attract mates" changes how the interaction feels to me, and contexts where that added intention makes the interaction feel inappropriate. For example, if I'm at work and I think someone is friendly at a meeting primarily because they want to attract a mate, vs because they are following professional norms, vs because they're a kind person who cares about fostering a welcoming space for discussion, I do consider some of these reasons better than others.

While I don't think it's wrong to try to attract mates at a general level, I think this can happen in ways that are deceitful, and ways that leverage power dynamics in a way that's unfair and unpleasant (or worse) for the receiving party. In a similar vein, I particularly appreciated Dustin's tweet here.

I do think International Women's Day is a timely prompt for EA folks to celebrate and acknowledge the women in EA who are drawn to EA because they want to help find the best ways to help others, or to put them into practice. I appreciate (and am happy for you & Diana!) that there will be folks who benefit from finding like-minded mates in EA. I also agree that often there are overt actions that come with obvious social costs, and "going too far" in the other direction seems bad by definition. But I also want to recognise that sometimes there are likely actions that are not "overtly" costly, or may even be beneficial for those who are primarily motivated to attract mates, but may be costly in expectation for those who are primarily interested in EA as a professional space, or as a place where they can collaborate with people who also care about tackling some of the most important issues we face today. And I think this is a tradeoff that's important to consider - ultimately the EA I want to see and be part of is one that optimises for doing good, and while that's not mutually exclusive to trying to attract mates within EA, I'd be surprised if doing so as the primary goal also happened to be the best approach for doing good.

"If you're saying it's not, can you give an example of an issue on which you disagree with the progressive/feminist/woke viewpoint?"

I've downvoted this comment. It's not explicitly against the forum norms (maybe this?), but my personal view is that comments like these feel like you're asking someone to prove their tribe, and are divisive without being meaningfully useful.

I think if you are making a claim that this reads like progressive / feminist / woke advocacy, the burden of proof is on you to support your claim (i.e. it's more helpful to provide excerpts from the post that you think are poorly worded or that read like advocacy). Otherwise someone else can come along and just say "Ok sure, I don't think this reads like progressive / feminist / woke advocacy".

I think a charitable interpretation of your question is that you want to know whether the author is "non-progressive / non-feminist / non-woke" in order to help you decide whether this post is in fact advocacy that advances those political aims. But I don't think asking this question is even helpful for that. Perhaps they share some views on the pay gap or minimum wage or intersectionality - how would the author's positions on these reliably show whether this post is "progressive / feminist / woke advocacy"? It also risks derailing into a completely unrelated discussion. In any case, given this is a pseudonym, they could literally just lie about a position.
