Erin Braid

making a literal difference metaphorically


Comments

Critiques of EA that I want to read

Something I personally would like to see from this contest is rigorous and thoughtful versions of leftist critiques of EA, ideally translated as much as possible into EA-speak. For example, I find "bednets are colonialism" infuriating and hard to engage with, but things like "the reference class for rich people in western countries trying to help poor people in Africa is quite bad, so we should start with a skeptical prior here" or "isolationism may not be the good-maximizing approach, but it could be the harm-minimizing approach that we should retreat to when facing cluelessness" make more sense to me and are easier to engage with.

That's an imaginary example -- I myself am not a rigorous and thoughtful leftist critic and I've exaggerated the EA-speak for fun. But I hope it points at what I'd like to see!

‘EA Architect’: Updates on Civilizational Shelters & Career Options

I for one would listen to a podcast about shelters and their precedents! That's not to say you should definitely make it, since I'm not sure an audience of mes would be super impactful (I don't see myself personally working on shelters), but if you're just trying to judge audience enthusiasm, count me in!

Podcasts I've enjoyed on this topic (though much less impact-focused and more highly produced than I imagine you'd aim for): "The Habitat" from Gimlet Media; the Biosphere 2 episode of "Nice Try!"

Another Basefund

Interesting. Thanks for sharing your findings and experiences!

Michael Nielsen's "Notes on effective altruism"

I see [EA] as a key question of "how can we do the most good with any given unit of resource we devote to doing good" and then taking action upon what we find when we ask that.

I also consider this question to be the core of EA, and I have said things like the above to defend EA against the criticism that it's too demanding. However, I have since come to think that this characterization is importantly incomplete, for at least two reasons:

  1. It's probably inevitable, and certainly seems to be the case in practice, that people who are serious about answering this question overlap a lot with people who are serious about devoting maximal resources to doing good. Both in the sense that they're often the same people, and in the sense that even when they're different people, they'll share a lot of interests and it might make sense to share a movement.
  2. Finding serious answers to this question can cause you to devote more resources to doing good. I feel very confident that this happened to me, for one! I don't just donate to more effective charities than the version of me in a world with no EA analysis, I also donate a lot more money than that version does. I feel great about this, and I would usually frame it positively - I feel more confident and enthusiastic about the good my donations can do, which inspires me to donate more - but negative framings are available too.

So I think it can be a bit misleading to imply that EA is only about this key question of per-unit maximization, and contains no upwards pressures on the amount of resources we devote to doing good. But I do agree that this question is a great organizing principle.

Another Basefund

I understand that this is no longer relevant to your plans, but I'm curious about this:

Unfortunately, the result of the vooroverleg was that the charity as described above cannot be registered in the Netherlands. The main reason for this is that those who would benefit directly from the charity (the donors) are relatively well-off.

I'm used to the US landscape, where lots of organizations serving the well-off, from private schools to symphony orchestras, are nonprofits that take tax-deductible donations and have tax-exempt status. Is that not the case in the Netherlands?

What are the coolest topics in AI safety, to a hopelessly pure mathematician?

Love this question! I too would identify as a hopelessly pure mathematician (I'm currently working on a master's thesis in category theory), and I too spent some time trying to relate my academic interests to AI safety. I didn't have much success; in particular, nothing ML-related ever appealed. I hope it works out better for you!

Messy personal stuff that affected my cause prioritization (or: how I started to care about AI safety)

Thanks for this post Julia! I really related to some parts of it, while other parts were very different from my experience. I'll take this opportunity to share a draft I wrote sometime last year, since I think it's in a similar spirit:

I used to be pretty uncomfortable with, and even mad about, the prominence of AI safety in EA. I always saw the logic – upon reading the sequences circa 2012, I quickly agreed that creating superintelligent entities not perfectly aligned with human values could go really, really badly, so of course AI safety was important in that sense – but did it really have to be such a central part of the EA movement, which (I felt) could otherwise have much wider acceptance and thus save more children from malaria? Of course, it would be worth allowing some deaths now to prevent a misaligned AI from killing everyone, so even then I didn’t object exactly, but I was internally upset about the perception of my movement and about the dead kids. 

I don’t feel this way anymore. What changed?

  1. [people aren’t gonna like EA anyways – I’ve gotten more cynical and no longer think that AI was necessarily their true objection]
  2. [AI safety more concrete now – the sequences were extremely insistent but without much in the way of actual asks, which is an unsettling combo all by itself. Move to Berkeley? Devote your life to blogging about ethics? Spend $100k on cryo? On some level those all seemed like the best available ways to prove yourself a True Believer! I was willing to lowercase-b believe, but wary of being a capital-B Believer, which in the absence of actual work to do is the only way to signal that you understand the Most Important Thing In The World]
  3. [practice thinking about the general case, longtermism]

Unfortunately I no longer remember exactly what I was thinking with #3, though I could guess. #1 and #2 still make sense to me and I could try to expand on them if they're not clear to others. 

Thinking about it now, I might add something like:

  4. [better internalization of the fact that EA isn't the only way to do good lol – people who care about global health and wouldn't care about AI are doing good work in global health as we speak]

Don’t think, just apply! (usually)

To support people in following this post's advice, employers (including Open Phil?) need to make it even quicker for applicants to submit the initial application materials.

From my perspective as an applicant, fwiw, I would urge employers to reduce the scope of questions in the initial application materials, more so than the time commitment. EA orgs have a tendency to ask insanely big questions of their early-stage job applicants, like "How would you reason about the moral value of humans vs. animals?" or "What are the three most important ways our research could be improved?" Obviously these are important questions, but to my mind they have the perverse effect that the more an applicant has previously thought about EA ideas, the more daunting it seems to answer a question like that in 45 minutes. Case in point, I'm probably not going to get around to applying for some positions at this post's main author's organization, because I'm not sure how best to spend $10M to improve the long-term future and I have other stuff to do this week. 

Open Phil scores great on this metric by the way - in my recent experience, the initial screening was mostly an elaborate word problem and a prompt to explain your reasoning. I'd happily do as many of those as anyone wants me to.

EA group community service projects, good or bad idea?

Maybe the process of choosing a community service project could be a good exercise in EA principles (as long as you don't spend too long on it)? 

I like this idea and would even go further -- spend as much time on it as people are interested in spending; the decision-making process itself might prove educational!

I can't honestly say I'm excited about the idea of EA groups worldwide marching out to pick up litter. But it seems like a worthwhile experiment for some groups, to get buy-in on the idea of volunteering together, brainstorm volunteering possibilities, decide between them based on impact, and actually go and do it. 

I feel anxious that there is all this money around. Let's talk about it

The subquestion of high salaries at EA orgs is interesting to me. I think it pushes on an existing tension between a conception of the EA community as a support network for people who feel the weight of the world's problems and are trying to solve them, vs. a conception of the EA community as the increasingly professional project of recruiting the rest of the world to work on those problems too.

If you're thinking of the first thing, offering high salaries to people "in the network" seems weird and counterproductive. After all, the truly committed people will just donate the excess, minus a bunch of transaction costs, and meanwhile you run the risk of high salaries attracting people who don't care about the mission at all, who will unhelpfully dilute the group.

Whereas if you're thinking of the second thing, it seems great to offer high salaries. Working on the world's biggest problems should pay as much as working at a hedge fund! I would love to be able to whole-heartedly recommend high-impact jobs to, say, college acquaintances who feel some pressure to go into high-earning careers, not just to the people who are already in the top tenth of a percentile for commitment to altruism. 

I really love the EA-community-as-a-support-network-for-people-who-feel-the-weight-of-the-world's-problems-and-are-trying-to-solve-them. I found Strangers Drowning a very moving read in part for its depiction of pre-EA-movement EAs, who felt very alone, struggled to balance their demanding beliefs with their personal lives, and probably didn't have as much impact as they would have had with more support. I want to hug them and tell them that it's going to be okay, people like them will gather and share their experiences and best practices and coping skills and they'll know that they aren't alone. (Even though this impulse doesn't make a lot of logical sense in the case of, say, young Julia Wise, who grew up to be a big part of the reason why things are better now!) I hope we can maintain this function of the EA community alongside the EA-community-as-the-increasingly-professional-project-of-recruiting-the-rest-of-the-world-to-work-on-those-problems-too. But to the extent that these two functions compete, I lean towards picking the second one, and paying the salaries to match. 
