JP Addison

4931 karma · Joined Feb 2017 · Working (6-15 years) · Cambridge, MA, USA
jpaddison.net

Bio

Head of the CEA Online Team, which runs this Forum.

A bit about me, to help you get to know me: Prior to CEA, I was a data engineer at an aerospace startup. I got into EA through reading the entire archive of Slate Star Codex in 2015. I found EA naturally compelling, and donated to AMF, then GFI, before settling on my current cause prioritization of meta-EA, with AI x-risk as my object-level preference. I try to have a wholehearted approach to morality, rather than thinking of it as an obligation or opportunity. You can see my LessWrong profile here.

I love this Forum a bunch. I've been working on it for 5 years as of this writing, and founded the EA Forum 2.0. (Remember 1.0?) I have an intellectual belief in it as an impactful project, but also a deep love for it as an open platform where anyone can come participate in the project of effective altruism. We're open 24/7, anywhere there is an internet connection.

In my personal life, I hang out in the Boston EA and Gaymer communities, enjoy houseplants, table tennis, and playing coop games with my partner, who has more karma than me.

Comments: 655 · Topic contributions: 17

I want to throw in a bit of my philosophy here.

Status note: This comment is written by me and reflects my views. I ran it past the other moderators, but they might have major disagreements with it.

I agree with a lot of Jason’s view here. The EA community is indeed much bigger than the EA Forum, and the Forum would serve its role as an online locus much less well if we used moderation action to police the epistemic practices of its participants.

I don’t actually think this is that bad. I think it is a strength of the EA community that it is large enough and has sufficiently many worldviews that any central discussion space is going to be a bit of a mishmash of epistemologies.[1]

Some corresponding ways this viewpoint causes me to be reluctant to apply Habryka’s philosophy:[2]

Something like a judicial process is much more important to me. We try much harder than my read of LessWrong to apply rules consistently. We have the Forum Norms doc, and our public history of cases forms something much closer to a legal code + case law than LW has. Obviously we’re far away from what would meet a judicial standard, but I view much of my work through that lens. Also notable is that all nontrivial moderation decisions get one or two moderators to second the proposal.

Related both to the epistemic diversity and to the above, I am much more reluctant to rely on my personal judgement about whether someone is a positive contributor to the discussion. I still have those opinions, but am much more likely to use my power as a regular user to karma-vote on the content.

Some points of agreement: 

Old users are owed explanations, new users are (mostly) not

Agreed. We are much more likely to make judgement calls in cases of new users, and much less likely to invest time in explaining the decision. We are still much less likely to ban new users than LessWrong is. (Which, to be clear, I don’t think would have been tenable on LessWrong when they instituted their current policies, which was after the launch of GPT-4 and a giant influx of low-quality content.)

I try really hard to not build an ideological echo chamber

Most of the work I do as a moderator is reading reports and recommending no official action. I have the internal experience of mostly fighting others to keep the Forum an open platform. Obviously that experience is compatible with overmoderating the Forum into an echo chamber, but I will at least bring this up as a strong point of philosophical agreement.

Final points:

I do think we could potentially give more “near-ban” rate limits, such as the 1 comment/3 days. The main benefit I see is that it allows the user to write content disagreeing with their ban.

  1. ^

    Controversial point! Maybe if everyone adopted my own epistemic practices the community would be better off. It would certainly gain in the ability to communicate smoothly with itself, and would probably spend less effort pulling in opposite directions as a result, but I think the size constraints and/or deference to authority that would be required would not be worth it.

  2. ^

    Note that Habryka has been a huge influence on me. These disagreements are what remains after his large influence on me.

With the US presidential election coming up this year, some of y’all will probably want to discuss it.[1] I think it’s a good time to restate our politics policy. tl;dr Partisan politics content is allowed, but will be restricted to the Personal Blog category. On-topic policy discussions are still eligible as frontpage material.

  1. ^

    Or the expected UK elections.

I believe what you're looking for is the personal blog distinction. Authors can decide that they want to post their writing on the Forum, but not submit it to the frontpage. Examples might be a post that is political, or someone posting a large number of posts at once. Devin actually did this, so you'll notice that the posts you mentioned are in the personal blog category. If you're seeing them on the frontpage, then my guess is you've customized your feed.

FYI thanks for all the helpful comments here — I promptly got covid and haven't had a chance to respond 😅

This is a really nice idea, thanks!

Here’s a puzzle I’ve thought about a few times recently:

The impact of an activity (I) is due to two factors, A and B. Those factors combine multiplicatively to produce impact: I = A × B. Examples include:

  • The funding of an organization and the people working at the org
  • A manager of a team who acts as a lever on the work of their reports
  • The EA Forum acts as a lever on top of the efforts of the authors
  • A product manager joins a team of engineers

Let’s assume in all of these scenarios that you are only one of the players in the situation, and you can only control your own actions.

From a counterfactual analysis, if you can increase your contribution by 10%, then you increase the impact by 10%, end of story.

From a Shapley Value perspective, it’s a bit more complicated, but we can start with a prior that you split your impact evenly with the other players.
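To see why the even split falls out in the two-player case (under my assumption, purely for illustration, that a lone factor produces no impact, so $v(\{\text{you}\}) = v(\{\text{other}\}) = 0$ and $v(\{\text{you},\text{other}\}) = AB$):

$$\phi_{\text{you}} = \tfrac{1}{2}\big(v(\{\text{you}\}) - v(\varnothing)\big) + \tfrac{1}{2}\big(v(\{\text{you},\text{other}\}) - v(\{\text{other}\})\big) = \tfrac{1}{2}\cdot 0 + \tfrac{1}{2}\cdot AB = \tfrac{AB}{2},$$

and symmetrically for the other player, so each of you gets credit for half the impact.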

Both these perspectives have a lot going for them! The counterfactual analysis has important correspondences to reality. If you do 10% better at your job the world gets 10% better. Shapley Values prevent the scenario where the multiplicative impact causes the involved agents to collectively contribute too much.

I notice myself feeling relatively more philosophically comfortable running with the Shapley Value analysis in scenarios where I feel aligned with the other players in the game. And the downsides of the Shapley Value approach potentially shrink if I actually run the math. (Fake edit: I ran a really hacky guess at how I’d calculate this using this calculator and it wasn’t that helpful.)

But I don’t feel 100% bought-in to the Shapley Value approach, and think there’s value in paying attention to the counterfactuals. My unprincipled compromise approach would be to take some weighted geometric mean and call it a day.
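To make that concrete, here’s a rough Python sketch of the two-factor case. The factor names and numbers are made up, and it assumes (as above) that a lone factor produces zero impact; it’s an illustration of the bookkeeping, not a real model.

```python
from itertools import permutations
from math import factorial, prod


def impact(coalition, factors):
    """Multiplicative model: every factor must be present for any impact."""
    if set(factors) - set(coalition):
        return 0.0
    return prod(factors[p] for p in coalition)


def shapley(factors):
    """Average each player's marginal contribution over every join order."""
    players = list(factors)
    credit = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = []
        for p in order:
            before = impact(coalition, factors)
            coalition.append(p)
            credit[p] += impact(coalition, factors) - before
    return {p: total / factorial(len(players)) for p, total in credit.items()}


factors = {"authors": 10.0, "forum": 2.0}   # made-up numbers
baseline = prod(factors.values())           # 20.0 units of impact

# Counterfactual view: the forum improves its factor by 10%, total impact
# rises by 10%, and the whole gain is attributed to the forum.
improved = dict(factors, forum=factors["forum"] * 1.10)
counterfactual_gain = prod(improved.values()) - baseline               # 2.0

# Shapley view: credit is split evenly, so the forum's credit goes from
# 10 to 11 -- half of the counterfactual gain.
shapley_gain = shapley(improved)["forum"] - shapley(factors)["forum"]  # 1.0

# The unprincipled compromise: a weighted geometric mean of the two,
# with the weight w left as a judgment call (0.5 here, arbitrarily).
w = 0.5
compromise = (counterfactual_gain ** w) * (shapley_gain ** (1 - w))

print(counterfactual_gain, shapley_gain, round(compromise, 3))
```

Running this prints 2.0, 1.0, and about 1.414, which matches the intuition: the counterfactual view credits you with the full gain, the Shapley view with half of it, and the compromise lands in between.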

Interested in comments.

I think it's a pretty important distinction that "EA" is a question which has no CEO, while the Centre for Effective Altruism does. I recommend changing the title here.

I agree with you, and so does our issue tracker. Sadly, it does seem a bit hard. Tagging @peterhartree as a person who might be able to tell me that it's less hard than I think.

I worked with Sam for 4 years and would recommend the experience. He's an absolute blast to talk tech with, and a great human.

Answer by JP Addison · Feb 27, 2024

Maybe a report from someone with a strong network in the silicon valley scene about how AI safety's reputation is evolving post-OAI-board-stuff. (I'm sure there are lots of takes that exist, and I guess I'd be curious for either a data-driven approach or a post which tries to take a levelheaded survey of different archetypes.)
