Will Howard🔹

Software Engineer @ Centre for Effective Altruism
1184 karma · Joined · Working (0-5 years) · Oxford, UK

Bio

I'm a software engineer on the CEA Online team, mostly working on the EA Forum. We are currently interested in working with impactful projects as contractors/consultants; please fill in this form if you think you might be a good fit.

You can contact me at will.howard@centreforeffectivealtruism.org

Comments

See the doc linked in the quick take for our thinking on this. These are the main reasons from there (ones below number 3 are not that important imo); upon reflection, I would now swap 2 and 3 in terms of importance:

  1. You can browse past editions with a much nicer + more familiar UI. I would guess this would convert a lot more people per impression than a signup box with no context
  2. We want to start crossing over with Substack in various other ways (getting authors from there to crosspost on the forum), so it would be useful for us (Toby) to become more familiar with how the platform works, particularly in terms of social dynamics rather than features per se. E.g. I don't really understand whether most people discover newsletters on Substack itself vs being linked from elsewhere
  3. Substack has a good recommendations algorithm, which will hopefully recommend other EA-relevant content to people (this feels complementary to the thing above, where it's facilitating some cross-flow of users between our owned channels and Substack)

I'm curating this post. The facets listed are values that I believe in, but that are easy to forget due to short-term concerns about optics and so on. I think it's good to be reminded of the importance of these things sometimes. I particularly liked the examples in the section on Open sharing of information, as these are things that other people can try to emulate.

Thanks for the comments, everyone. This won't be done in time for this week's edition, but hopefully the next one.

Do they now?

They do... as an exercise, try to find them.

Maybe I'd advocate for putting them all somewhere CEA owns as well, in case Substack stops being the right place.

Seems reasonable 👍

We're thinking of moving the Forum digest, and probably eventually the EA Newsletter, to Substack. We're at least planning to try this out, hopefully starting with the next digest issue on the 23rd. Here's an internal doc with our reasoning behind this (not tailored for public consumption, but you should be able to follow the thread).

I'm interested in any takes people have on this. I'm not super familiar with Substack from an author perspective, so if you have any crucial considerations about how the platform works, that would be very helpful. General takes and agree/disagree votes (on the decision to move the digest to Substack) are also appreciated.

Separate from your comment, I have seen comments like this elsewhere (albeit also mainly from Bob and other RP people), so I still think it's interesting additional evidence that this is a thing.

It seems like some people find it borderline inconceivable that higher neuron counts correspond to greater experience intensity/size/moral weight, and some people find the opposite inconceivable. This is pretty interesting!

Here's a screenshot (open in new tab to see it in slightly higher resolution). I've also made a spreadsheet with the individual voting results, which gives all the info that was on the banner just in a slightly more annoying format.

We are also planning to add a native way to look back at past events as they appeared on the site :), although this isn't a super high priority atm.

I'm curating this post. I see this and @NickLaing's post as the best in class on the topic of moral weights from the AW vs GH debate week, so I'm curating them as a pair[1].

I was impressed by titotal doing the fairly laborious work of replicating everyone's calculations and finding the points where they diverged. As discussed by the man himself, there were lots of different numbers flying around, and even if AW always comes out on top, it matters by how much:

You might think this doesn't matter, but the difference between 1500 times and 35 times is actually quite important: if you're at 1500 times, you can disagree with a few assumptions a little and still be comfortable in the superiority of AW. But if it's 35 times, this is no longer the case, and as we shall see, there are some quite controversial assumptions that these models share.

  1. ^

I'm curating this post. The issue of moral weights turned out to be a major crux in the AW vs GH debate, and I'm excited about more progress being made now that some live disagreements have been surfaced. I'm curating this post and @titotal's (link) as the "best in class" from the debate week on this topic.

On this post: the post itself does a good job of laying out some reasonable-to-hold objections to RP's moral weights. In particular, I think the point about discounting behavioural proxies is important and likely to come up again in future.

I think the comment section is also very interesting; there are quite a few good threads:
1. @David Mathers🔸' comment which raises the point that the idea of estimating intensity/size of experience from neuron counts doesn't come up (much) in the academic literature. This was surprising to me!
2. @Bob Fischer's counterpoint making the RP case
3. This thread which gets into the issue of what counts as an uninformed prior wrt moral weights

Digression, but I would recommend reading about Thompson sampling :) (Wikipedia, inscrutable LessWrong post). It's a good model to have for thinking about explore-exploit tradeoffs in general.
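Not from either of those links, but to give a flavour: here's a minimal toy sketch of Thompson sampling for Bernoulli bandits (the payout rates and everything else are made up purely for illustration).

```python
import random

# Toy Thompson sampling for Bernoulli bandits (illustrative only).
# Each arm keeps a Beta(successes + 1, failures + 1) posterior over its
# unknown payout probability. Each round we draw one sample per arm from
# its posterior and pull the arm with the highest draw, so the
# explore-exploit tradeoff is handled by posterior uncertainty.

true_rates = [0.3, 0.5, 0.7]   # hidden payout probabilities (assumed for the demo)
successes = [0] * len(true_rates)
failures = [0] * len(true_rates)

for _ in range(10_000):
    samples = [random.betavariate(successes[i] + 1, failures[i] + 1)
               for i in range(len(true_rates))]
    arm = samples.index(max(samples))   # arm that looks best on this draw

    if random.random() < true_rates[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1

print("pulls per arm:", [successes[i] + failures[i] for i in range(len(true_rates))])
```

Early on the posteriors are wide, so every arm gets tried; as evidence accumulates the draws concentrate and the best arm ends up getting almost all the pulls.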
