I'm a software engineer on the CEA Online team, mostly working on the EA Forum. We are currently interested in working on impactful projects as contractors/consultants; please fill in this form if you think you might be a good fit.
You can contact me at will.howard@centreforeffectivealtruism.org
I'm curating this post. The facets listed are values that I believe in, but that are easy to forget due to short-term concerns about optics and so on. I think it's good to be reminded of the importance of these things sometimes. I particularly liked the examples in the section on Open sharing of information, as these are things that other people can try to emulate.
We're thinking of moving the Forum digest, and probably eventually the EA Newsletter, to Substack. We're at least planning to try this out, hopefully starting with the next digest issue on the 23rd. Here's an internal doc with our reasoning behind this (not tailored for public consumption, but you should be able to follow the thread).
I'm interested in any takes people have on this. I'm not super familiar with Substack from an author's perspective, so if you have any crucial considerations about how the platform works, that would be very helpful. General takes and agree/disagree votes (on the decision to move the digest to Substack) are also appreciated.
Separately from your comment, I have seen comments like this elsewhere (albeit also mainly from Bob and other RP people), so I still think it's interesting additional evidence that this is a thing.
It seems like some people find it borderline inconceivable that higher neuron counts correspond to greater experience intensity/size/moral weight, and some people find it inconceivable that they don't. This is pretty interesting!
Here's a screenshot (open it in a new tab to see it in slightly higher resolution). I've also made a spreadsheet with the individual voting results, which gives all the info that was on the banner, just in a slightly more annoying format.
We are also planning to add a native way to look back at past events as they appeared on the site :), although this isn't a super high priority atm.
I'm curating this post. I see this and @NickLaing's post as the best in class on the topic of moral weights from the AW vs GH debate week, so I'm curating them as a pair[1].
I was impressed by titotal doing the fairly laborious work of replicating everyone's calculations and finding the points where they diverged. As discussed by the man himself, there were lots of different numbers flying around, and even if AW always comes out on top, it matters by how much:
You might think this doesn't matter, but the difference between 1500 times and 35 times is actually quite important: if you're at 1500 times, you can disagree with a few assumptions a little and still be comfortable in the superiority of AW. But if it's 35 times, this is no longer the case, and as we shall see, there are some quite controversial assumptions that these models share.
See also the other curation comment.
I'm curating this post. The issue of moral weights turned out to be a major crux in the AW vs GH debate, and I'm excited about more progress being made now that some live disagreements have been surfaced. I'm curating this post and @titotal's (link) as the "best in class" from the debate week on this topic.
On this post: The post itself does a good job of laying out some reasonable-to-hold objections to RP's moral weights. In particular, I think the point about discounting behavioural proxies is important and likely to come up again in future.
I think the comment section is also very interesting; there are quite a few good threads:
1. @David Mathers🔸' comment which raises the point that the idea of estimating intensity/size of experience from neuron counts doesn't come up (much) in the academic literature. This was surprising to me!
2. @Bob Fischer's counterpoint making the RP case
3. This thread which gets into the issue of what counts as an uninformed prior wrt moral weights
A digression, but I would recommend reading about Thompson sampling :) (wikipedia, inscrutable LessWrong post). It's a good model to have in mind for thinking about explore-exploit tradeoffs in general.
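For a flavour of the idea, here's a minimal sketch of Thompson sampling for a Bernoulli bandit (my own illustration with made-up payout probabilities, not taken from either of the linked posts): keep a Beta posterior per option, sample a plausible success rate from each, and pick the option with the highest sample. Exploration falls out for free, because uncertain options sometimes produce high samples.

```python
import random

# Illustrative Beta-Bernoulli Thompson sampling (hypothetical arm probabilities).
TRUE_PROBS = [0.3, 0.5, 0.7]  # unknown to the algorithm
successes = [0] * len(TRUE_PROBS)
failures = [0] * len(TRUE_PROBS)

for _ in range(10_000):
    # Sample a plausible success rate for each arm from its Beta(s+1, f+1) posterior
    samples = [random.betavariate(successes[i] + 1, failures[i] + 1)
               for i in range(len(TRUE_PROBS))]
    arm = samples.index(max(samples))  # exploit the arm that looks best this round

    # Pull the chosen arm and update its posterior with the observed reward
    if random.random() < TRUE_PROBS[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1

print("Pulls per arm:", [successes[i] + failures[i] for i in range(len(TRUE_PROBS))])
# The best arm (index 2) should end up with the vast majority of pulls,
# while the others still get enough pulls early on to rule them out.
```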
See the doc linked in the quick take for our thinking on this. These are the main reasons from there (the ones below number 3 are not that important imo); upon reflection I would now swap 2 and 3 in terms of importance: