Bio

Head of Online (EA Forum, effectivealtruism.org, Virtual Programs) at CEA. Non-EA interests include chess and TikTok (@benthamite). We are probably hiring: https://www.centreforeffectivealtruism.org/careers

Sequences
2

EA Hiring
EA Retention

Comments
546

Grantmakers are welcome to ask me for a reference. Yonatan is aligned and very dedicated, and is both knowledgeable about and helpful to many software engineers (see reviews here). He's also been directly helpful to us with recruiting, and I've referred him to multiple EA orgs that are trying to hire software engineers.

Thanks! That sounds right to me, but I had thought that Nate was making a stronger objection, something like "looking at nonhuman brains is useless because you could have a perfect understanding of a chimpanzee brain but still completely fail to predict human behavior (after a 'sharp left turn')."

Is that wrong? Or is he just saying something like "looking at nonhuman brains is 90% less effective, and given long enough timelines these research projects will pan out; I just don't expect us to have long enough timelines"?

Also, this is a remarkably unhelpful graph. Like, you have to genuinely put effort in to make data this hard to understand.

It's interesting to see how strongly typed languages have taken over the mind share of engineers: https://insights.stackoverflow.com/survey/2021#most-loved-dreaded-and-wanted-language-love-dread

Democratizing risk post update
Earlier this week, a post was published criticizing "Democratizing Risk". This post was deleted by the (anonymous) author. The forum moderation team did not ask them to delete it, nor are we aware of their reasons for doing so.
We are investigating some likely Forum policy violations, however, and will clarify the situation as soon as possible.

Thanks! I've added this to the issue tracking your original suggestion.

Thanks for the suggestion! I passed this on to our events team.

CEA's Community Health team is hiring a project manager:

With recent media attention, increased funding, and growing ambition among community members, this is one of the most exciting times for EA, but also one of the riskiest. We need an ops-minded generalist to help address these risks, through both end-to-end ownership of targeted projects as well as building broader systems and processes to support other team members.

Example projects you might own:

  1. Create and manage a fund for community members’ physical and mental health
  2. Categorize the EA community into useful segments and conduct interviews to discover and understand their challenges, then summarize this feedback for stakeholders
  3. Organize a retreat for team members to discuss and work on key problems in the EA community

If this sounds like you, please apply!

You maintain this pretty well as it walks up through to primate, and then suddenly it takes a sharp left turn and invents its own internal language and a bunch of abstract concepts, and suddenly you find your visualization tools to be quite lacking for interpreting its abstract mathematical reasoning about topology or whatever.

Empirically speaking, scientists who are trying to understand human brains do spend a lot (most?) of their time looking at nonhuman brains, no?

Is Nate's objection here something like "human neuroscience is not at the level where we deal with 'sharp left turn' stuff, and I expect that once neuroscientists can understand chimpanzee brains very well they will discover that there is in fact a whole other set of problems they need to solve to understand human brains, and that this other set of problems is actually the harder one?"

Is anyone working on detecting symmetric persuasion capabilities? Does it go by another name? Searches here and on LessWrong don't turn up much.