JP Addison

4793 karma · Joined Feb 2017 · Working (6-15 years) · Cambridge, MA, USA
jpaddison.net

Bio

Head of the CEA Online Team, which runs this Forum.

A bit about me, to help you get to know me: Prior to CEA, I was a data engineer at an aerospace startup. I got into EA through reading the entire archive of Slate Star Codex in 2015. I found EA naturally compelling, and donated to AMF, then GFI, before settling on my current cause prioritization of meta-EA, with AI x-risk as my object-level preference. I try to have a wholehearted approach to morality, rather than thinking of it as an obligation or opportunity. You can see my LessWrong profile here.

I love this Forum a bunch. I've been working on it for 5 years as of this writing, and founded the EA Forum 2.0. (Remember 1.0?) I have an intellectual belief in it as an impactful project, but also a deep love for it as an open platform where anyone can come participate in the project of effective altruism. We're open 24/7, anywhere there is an internet connection.

In my personal life, I hang out in the Boston EA and Gaymer communities, enjoy houseplants and table tennis, and play co-op games with my partner, who has more karma than me.

Comments
647

Topic contributions
17

I worked with Sam for 4 years and would recommend the experience. He's an absolute blast to talk tech with, and a great human.

Answer by JP Addison · Feb 27, 2024
Maybe a report from someone with a strong network in the Silicon Valley scene about how AI safety's reputation is evolving post-OAI-board-stuff. (I'm sure there are lots of takes out there; I'd be curious for either a data-driven approach or a post that tries to take a levelheaded survey of different archetypes.)

I'm not sure if this qualifies, but the Creative Writing Contest featured some really moving stories.

I have a Spotify playlist of songs that seemed to rhyme with EA to me.

Some good kabbalistic significance to our issue tracker, but I'm not sure how.

First, a note: I've heard recommendations to try to lower the number of open issues, but I've never understood them except as a way to pretend you don't have bugs. For sure some of those issues are stale and out of date, but quite a few are live, just very edge-case and unimportant bugs, or feature requests we probably won't get to but that could be good. I don't think it's a good use of time to prune the tracker, and the most common approach I've seen companies take is to auto-close old bugs, which strikes me as disingenuous.

In any case, we have a fairly normal process of setting OKRs for our larger projects, and tiny features and bugfixes get triaged into a backlog that we look at when planning our weekly sprints. The triage process happens in our Asana and is intentionally not publicly visible, so we can feel comfortable marking something as low priority without worrying about needing to argue about it.

Thanks for the report. We currently do the second, which admittedly isn't ideal. If someone redrafts and republishes a post after it has been up for a while, an admin has to adjust the published date manually. This happens less often than I would have expected, so we haven't prioritized improving it.

Definitely. I agree, and so do a few other users. We have an open ticket on it.

No, sorry. I appreciate the question though, and I'll record a ticket about it.

My guess is that cause-neutral activities are 30-90% as effective as cause-specific ones (in terms of generating labor for that specific cause), which is remarkably high, but still less than 100%.

This isn't obvious to me. If you want to generate generic workers for your animal welfare org, sure, you might prefer to fund a vegan group. But if you want people who are good at making explicit tradeoffs, focusing on scope sensitivity, and being exceptionally truth-seeking, I would bet that an EA group is more likely to get you those people. And so it seems plausible that a donor who only prioritized animal welfare would still fund EA groups if they otherwise wouldn't exist.

On a related point, I would have been nervous (before GPT-4 made this concern much less prominent) about whether funding an AI safety group that mostly just talked about AI got you more safety workers, or just more people interested in explicitly working on AGI.

We've discussed something like this; I'm generally in favor, subject to opportunity cost.
