1) How often (in absolute and relative terms) a given forum topic appears with another given topic
2) Visualizing the popularity of various tags
An updated Forum scrape including the full text and attributes of 10k-ish posts as of Christmas, '22
See the data without full text in Google Sheets here
Post explaining version 1.0 from a few months back
From the data in no. 2, a few effortposts that never garnered a commensurate amount of attention, qualitatively filtered from posts with (1) long read times, (2) modest positive karma, and (3) not a ton of comments. (A rough sketch of this filter, and of the co-occurrence/popularity counts above, is below.)
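For anyone who wants to poke at the data themselves, here's a rough sketch of the co-occurrence counts, the tag-popularity tally, and the effortpost filter. The schema (column names, the semicolon delimiter) and every threshold are assumptions on my part, not the scrape's actual format:

```python
import pandas as pd
from itertools import combinations
from collections import Counter

# Assumed schema: one row per post, tags as a semicolon-separated string.
# Column names and thresholds are placeholders, not the scrape's real format.
posts = pd.read_csv("forum_scrape.csv")

pair_counts = Counter()  # how often two tags appear on the same post
tag_counts = Counter()   # overall tag popularity
for tags in posts["tags"].dropna():
    tag_set = sorted(set(tags.split(";")))
    tag_counts.update(tag_set)
    pair_counts.update(combinations(tag_set, 2))

# 1) Topic co-occurrence, in absolute and relative terms
for (a, b), n in pair_counts.most_common(10):
    print(f"{a} + {b}: {n} posts "
          f"({n / tag_counts[a]:.0%} of '{a}', {n / tag_counts[b]:.0%} of '{b}')")

# 2) Tag popularity (feed this to any bar-chart tool to visualize)
print(tag_counts.most_common(20))

# Quantitative first pass at the effortpost filter: long read time,
# modest positive karma, not a ton of comments
effortposts = posts[
    (posts["read_time_minutes"] >= 15)
    & posts["karma"].between(1, 50)
    & (posts["comment_count"] <= 3)
]
print(effortposts[["title", "karma"]].sort_values("karma", ascending=False))
```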
A (potential) issue with MacAskill's presentation of moral uncertainty
Not able to write a real post about this atm, though I think it deserves one.
MacAskill makes a similar point in WWOTF, but IMO the best and most decision-relevant quote comes from his second appearance on the 80k podcast:
There are possible views in which you should give more weight to suffering...I think we should take that into account too, but then what happens? **You end up with kind of a mix between the two, supposing you were 50/50 between classical utilitarian view and just strict negative utilitarian view.** **Then I think on the natural way of making the comparison between the two views, you give suffering twice as much weight as you otherwise would.**
I don't think the second bolded sentence follows in any objective or natural manner from the first. Rather, this reasoning takes a distinctly total utilitarian meta-level perspective, summing the various signs of utility and then implicitly considering them under total utilitarianism.
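To spell out the arithmetic as I understand it (my reconstruction, not MacAskill's): write $H$ for total happiness and $S$ for total suffering, so classical utilitarianism values a world at $H - S$ and strict negative utilitarianism at $-S$. The 50/50 expectation is then

$$0.5\,(H - S) + 0.5\,(-S) = 0.5H - S \propto H - 2S,$$

i.e., suffering gets twice its usual weight. But the derivation only goes through if the two views' utilities are treated as unit-comparable and then summed, which is precisely the total-utilitarian move at the meta level.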
Even granting that the moral arithmetic is appropriate and correct, it's not at all clear what to do once the 2:1 accounting is complete. MacAskill's suffering-focused twin might have reasoned instead that:
Negative and total utilitarianism are both 50% likely to be true, so we must give twice the normal amount of weight to happiness. However, since any sufficiently severe suffering morally outweighs any amount of happiness, the moral outlook on a world with twice as much wellbeing is the same as before.
A better proxy for genuine neutrality (and the best one I can think of) might be to simulate bargaining over real-world outcomes from each perspective, which would probably result in at least some proportion of one's resources being deployed as though negative utilitarianism were true (perhaps exactly 50%, though I haven't given this enough thought to make the claim outright).
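Here's a toy version of what I mean by simulated bargaining. Everything in it (the log-shaped utility functions, the disagreement points, the 0-to-1 budget split) is an illustrative assumption rather than a worked-out proposal; it just shows the mechanics of picking the split that maximizes the Nash bargaining product between the two views:

```python
import numpy as np

# Toy setup: a fraction x of resources goes to suffering reduction,
# (1 - x) to happiness creation. Utility functions are invented placeholders.
def u_classical(x):
    # Classical utilitarian values both sides, with diminishing returns.
    return np.log1p(10 * x) + np.log1p(10 * (1 - x))

def u_negative(x):
    # Strict negative utilitarian only values suffering reduction.
    return np.log1p(10 * x)

xs = np.linspace(0, 1, 1001)

# Nash bargaining: maximize the product of each view's gain over its
# disagreement point (here, its worst available split).
gains_c = u_classical(xs) - u_classical(xs).min()
gains_n = u_negative(xs) - u_negative(xs).min()
best = xs[np.argmax(gains_c * gains_n)]
print(f"Bargained split: {best:.0%} of resources to suffering reduction")
```

Under these made-up utilities the bargain lands somewhere between the classical optimum (50%) and the negative-utilitarian optimum (100%), which is the qualitative behavior I'd expect from any version of this.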
WWOTF: what did the publisher cut? [answer: nothing]
Contextual note: this post is essentially a null result. It seemed inappropriate both as a top-level post and as an abandoned Google Doc, so I’ve decided to put out the key bits (i.e., everything below) as Shortform. Feel free to comment/message me if you think that was the wrong call!
Actual post
On his recent appearance on the 80,000 Hours Podcast, Will MacAskill noted that Doing Good Better was significantly influenced by the book's publisher:[1]
Rob Wiblin: ...But in 2014 you wrote Doing Good Better, and that somewhat soft pedals longtermism when you’re introducing effective altruism. So it seems like it was quite a long time before you got fully bought in.
Will MacAskill: Yeah. I should say for 2014, writing Doing Good Better, in some sense, the most accurate book that was fully representing my and colleagues’ EA thought would’ve been broader than the particular focus. And especially for my first book, there was a lot of equivalent of trade — like agreement with the publishers about what gets included. I also wanted to include a lot more on animal issues, but the publishers really didn’t like that, actually. Their thought was you just don’t want to make it too weird.
Rob Wiblin: I see, OK. They want to sell books and they were like, “Keep it fairly mainstream.”
Will MacAskill: Exactly...
I thought it was important to know whether the same was true with respect to What We Owe the Future, so I reached out to Will's team and received the following response from one of his colleagues [emphasis mine]:
Hi Aaron, thanks for sending these questions and considering to make this info publicly available.
However, in contrast to what one might perhaps reasonably expect given what Will said about Doing Good Better, I think there is actually very little of interest that can be said on this topic regarding WWOTF. In particular:
I'm not aware of any material that was cut, or any other significant changes to the content of the book that were made significantly because of the publisher's input. (At least since I joined Forethought in mid-2021; it's possible there was some of this at earlier stages of the project, though I doubt it.) To be clear: The UK publisher's editor read multiple drafts of the book and provided helpful comments, but Will generally changed things in response to these comments if and only if he was actually convinced by them.
(There are things other than the book's content where the publisher exerted more influence – for instance, the publishers asked us for input on the book's cover but made clear that the cover is ultimately their decision. Similarly, the publisher set the price of the book, and this is not something we were involved in at all.)
As Will talks about in more detail here, the book's content would have been different in some ways if it had been written for a different audience – e.g., people already engaged in the EA community as opposed to the general public. But this was done by Will's own choice/design rather than because of publisher intervention. And to be clear, I think this influenced the content in mundane and standard ways that are present in ~all communication efforts – understanding what your audience is, aiming to meet them where they are and delivering your messages in way that is accessible to them (rather than e.g. using overly technical language the audience might not be familiar with).

[1] Quote starts at 39:47
A resource that might be useful: https://tinyapps.org/
There's a ton there, but one anecdote from yesterday: it referred me to this $5 iOS desktop app which (among other more reasonable uses) made me this full-quality, fully intra-linked >3600-page PDF of (almost) every file/site linked to by every file/site linked to from Tomasik's homepage (works best with old-timey, simpler sites like that).
New Thing
Last week I complained about not being able to see all the top shortform posts in one list. Thanks to Lorenzo for pointing me to the next best option:
It wasn't too hard to put together a text doc with (at least some of each of) all 1470-ish shortform posts, which you can view or download here. (For one way to regenerate it without the issues below, see the API sketch after this list.)
Pros: (practically) infinite scroll of insight porn
Cons:
Longer posts get cut off at about 300 words
Each post is an ugly block of text
No links to the original post [see doc for more]
Various other disclaimers/notes at the top of the document
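If someone wants to regenerate the doc without the truncation and missing-link problems, the Forum runs on ForumMagnum, which serves a GraphQL endpoint at https://forum.effectivealtruism.org/graphql. Below is a sketch; the view name and the field names are my guesses at the schema rather than verified calls, so check the endpoint's schema explorer before relying on it:

```python
import requests

# Guessed query shape for ForumMagnum's GraphQL API; verify the view name
# ("shortform") and field names against the live schema before using.
QUERY = """
{
  comments(input: {terms: {view: "shortform", limit: 50}}) {
    results {
      _id
      postedAt
      baseScore
      contents { markdown }
    }
  }
}
"""

resp = requests.post(
    "https://forum.effectivealtruism.org/graphql",
    json={"query": QUERY},
    headers={"User-Agent": "shortform-archiver"},
)
resp.raise_for_status()
results = resp.json()["data"]["comments"]["results"]

# Writing full markdown bodies avoids the ~300-word cutoff; _id should
# help reconstruct links back to the originals.
with open("shortforms.md", "w") as f:
    for c in results:
        f.write(f"[{c['postedAt']} | karma {c['baseScore']} | id {c['_id']}]\n")
        f.write((c.get("contents") or {}).get("markdown", "") + "\n\n---\n\n")
```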
I was starting to feel like the eternally-doomed protagonist of If You Give a Mouse a Cookie (it'll look presentable if I just do this one more thing), so I'm cutting myself off here to see whether it might be worth me (or someone else) making it better.
Newer Thing (?)
I do think this could be an MVP (minimum viable product) for a much nicer-looking and more readable document, such as (for the first two options, see the pandoc sketch after this list):
"this but without the posts cut off and with spacing figured out"
"nice-looking searchable pdf with original media and formatting"
"WWOTF-level-production book and audiobook"
Any of those ^ three options but only for the top 10/100/n posts
So by all means, copy and paste and turn it into something better!
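For the first two options, plain pandoc might get most of the way there. A sketch, assuming pandoc and a LaTeX engine are installed and that the doc has been saved as shortforms.md (the flags are standard pandoc; the rest of the pipeline is untested):

```python
import subprocess

# Convert the markdown doc into a searchable PDF with a clickable
# table of contents. Requires pandoc plus a LaTeX engine on the PATH.
subprocess.run(
    [
        "pandoc",
        "shortforms.md",            # assumed filename of the text doc
        "-o", "shortforms.pdf",
        "--toc",                    # table of contents
        "--pdf-engine=xelatex",     # better unicode handling than pdflatex
        "-V", "geometry:margin=1in",
    ],
    check=True,
)
```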
Oh yeah and, if you haven't done so already, I highly recommend going through the top Shortform posts for each of the last four years here.
Events as evidence vs. spotlights
Note: inspired by the FTX+Bostrom fiascos and associated discourse. May (hopefully) develop into longform by explicitly connecting this taxonomy to those recent events (but my base rate of completing actual posts cautions humility).
Event as evidence
The default: normal old Bayesian evidence
The realm of "updates," "priors," and "credences"
Pseudo-definition: Induces (or fails to induce) a change to or within a model (of whatever the model's user is trying to understand)
Corresponds to models that are (as is often assumed):
Well-defined (i.e. specific, complete, and without latent or hidden information)
Stable except in response to 'surprising' new information
Event as spotlight
Pseudo-definition: Alters how a person views, understands, or interacts with a model, just as a spotlight changes how an audience views what's on stage
In particular, spotlights change the salience of some part of a model
This can take place both/either:
At an individual level (think spotlight before an audience of one); and/or
To a community's shared model (think spotlight before an audience of many)
They can also change which information latent in a model is functionally available to a person or community, just as restricting one's field of vision increases the resolution of whichever part of the image shines through
Example
You're hiking a bit of the Appalachian Trail with two friends, going north, using the following map (the "external model"):
An hour in, your mental/internal model probably looks like this:
Event: ~~the collapse of a financial institution~~ you hear traffic
As evidence, this causes you to change where you think you are—namely, a bit south of the first road you were expecting to cross (toy numbers below the example).
As spotlight, this causes the three of you to stare at the same map (external model) as before, but in such a way that your internal models are all very similar, each looking something like this:
Really the crop should be shifted down some but I don't feel like redoing it rn
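To make the "evidence" reading concrete with invented numbers: say your prior that you're within earshot of the road is 30%, you'd hear traffic 80% of the time if you were, and 5% of the time otherwise. Then

$$P(\text{near road} \mid \text{traffic}) = \frac{0.8 \times 0.3}{0.8 \times 0.3 + 0.05 \times 0.7} = \frac{0.24}{0.275} \approx 0.87.$$

The spotlight reading changes nothing in this calculation; it just gets all three hikers attending to the same region of the same map.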
Ok so things that get posted in the Shortform tab also appear in your (my) shortform post, which can be edited to not have the title "___'s shortform" and also has a real post body that is empty by default but you can just put stuff in.
There's also the usual "frontpage" checkbox, so I assume an individual's own shortform page can appear alongside normal posts(?).
The link is: [Draft] Used to be called "Aaron Bergman's shortform" (or smth)
I assume only I can see this but gonna log out and check
A few Forum meta things you might find useful or interesting:
Open Philanthropy: Our Approach to Recruiting a Strong Team
Histories of Value Lock-in and Ideology Critique
Why I think strong general AI is coming soon
Anthropics and the Universal Distribution
Range and Forecasting Accuracy
A Pin and a Balloon: Anthropic Fragility Increases Chances of Runaway Global Warming
Strategic considerations for effective wild animal suffering work
Red teaming a model for estimating the value of longtermist interventions - A critique of Tarsney's "The Epistemic Challenge to Longtermism"
Welfare stories: How history should be written, with an example (early history of Guam)
Summary of Evidence, Decision, and Causality
Some AI research areas and their relevance to existential safety
Maximizing impact during consulting: building career capital, direct work and more.
Independent Office of Animal Protection
Investigating how technology-focused academic fields become self-sustaining
Using artificial intelligence (machine vision) to increase the effectiveness of human-wildlife conflict mitigations could benefit WAW
Crucial questions about optimal timing of work and donations
Will we eventually be able to colonize other stars? Notes from a preliminary review
Philanthropists Probably Shouldn't Mission-Hedge AI Progress
So the EA Forum has, like, an ancestor? Is this common knowledge? Lol
Felicifia: not functional anymore but still available to view. Learned about it thanks to a tweet from Jacy.
From Felicifia Is No Longer Accepting New Users:
Update: threw together
Infinitely easier said than done, of course, but some Shortform feedback/requests
For 2.a, the closest I found is https://forum.effectivealtruism.org/allPosts?sortedBy=topAdjusted&timeframe=yearly&filter=all, where you can see the inflation-adjusted top posts and shortforms by year.
For 1, it's probably best to post in the EA Forum feature suggestion thread.
Late but thanks on both, and commented there!
Effective Altruism Georgetown will be interviewing Rob Wiblin for our inaugural podcast episode this Friday! What should we ask him?