
A few Forum meta things you might find useful or interesting:

  1.  Two super basic interactive data viz apps
    1. How often (in absolute and relative terms) a given forum topic appears with another given topic
    2. Visualizing the popularity of various tags
  2. An updated Forum scrape including the full text and attributes of 10k-ish posts as of Christmas, '22
    1. See the data without full text in Google Sheets here
    2. Post explaining version 1.0 from a few months back
  3. From the data in no. 2, a few effortposts that never garnered an accordant amount of attention (qualitatively filtered from posts with (1) long read times, (2) modest positive karma, and (3) not a ton of comments).
    1.  Column labels should be (left to right):
      1. Title/link
      2. Author(s)
      3. Date posted
      4. Karma (as of a week ago)
      5. Comments (as of a week ago)
 
| Title/link | Author(s) | Date posted | Karma | Comments |
| --- | --- | --- | --- | --- |
| Open Philanthropy: Our Approach to Recruiting a Strong Team | pmk | 10/23/2021 | 11 | 0 |
| Histories of Value Lock-in and Ideology Critique | clem | 9/2/2022 | 11 | 1 |
| Why I think strong general AI is coming soon | porby | 9/28/2022 | 13 | 1 |
| Anthropics and the Universal Distribution | Joe_Carlsmith | 11/28/2021 | 18 | 0 |
| Range and Forecasting Accuracy | niplav | 5/27/2022 | 12 | 2 |
| A Pin and a Balloon: Anthropic Fragility Increases Chances of Runaway Global Warming | turchin | 9/11/2022 | 16 | 1 |
| Strategic considerations for effective wild animal suffering work | Animal_Ethics | 1/18/2022 | 21 | 0 |
| Red teaming a model for estimating the value of longtermist interventions - A critique of Tarsney's "The Epistemic Challenge to Longtermism" | Anjay F, Chris Lonsberry, Bryce Woodworth | 7/16/2022 | 21 | 0 |
| Welfare stories: How history should be written, with an example (early history of Guam) | kbog | 1/2/2020 | 18 | 1 |
| Summary of Evidence, Decision, and Causality | Dawn Drescher | 9/5/2020 | 27 | 0 |
| Some AI research areas and their relevance to existential safety | Andrew Critch | 12/15/2020 | 27 | 0 |
| Maximizing impact during consulting: building career capital, direct work and more. | Vaidehi Agarwalla, Jakob, Jona, Peter4444 | 8/13/2021 | 21 | 2 |
| Independent Office of Animal Protection | Animal Ask, Ren Springlea | 11/22/2022 | 21 | 2 |
| Investigating how technology-focused academic fields become self-sustaining | Ben Snodin, Megan Kinniment | 9/6/2021 | 25 | 2 |
| Using artificial intelligence (machine vision) to increase the effectiveness of human-wildlife conflict mitigations could benefit WAW | Rethink Priorities, Tapinder Sidhu | 10/28/2022 | 22 | 3 |
| Crucial questions about optimal timing of work and donations | MichaelA | 8/14/2020 | 28 | 4 |
| Will we eventually be able to colonize other stars? Notes from a preliminary review | Nick_Beckstead | 6/22/2014 | 29 | 7 |
| Philanthropists Probably Shouldn't Mission-Hedge AI Progress | MichaelDickens | 8/23/2022 | 27 | 9 |
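For what it's worth, the counting behind the first viz app (how often one topic co-occurs with another) boils down to something like the sketch below. This is my own hypothetical reconstruction, not the app's actual code, and the sample tags are made up:

```python
from collections import Counter
from itertools import combinations

def topic_cooccurrence(posts):
    """Count how often each pair of topics appears on the same post.

    `posts` is a list of topic-tag lists, one per post. Returns absolute
    pair counts plus per-topic totals, so relative frequencies can be
    computed by dividing a pair count by a topic's total.
    """
    pair_counts = Counter()
    topic_counts = Counter()
    for tags in posts:
        unique = sorted(set(tags))
        topic_counts.update(unique)
        for a, b in combinations(unique, 2):
            pair_counts[(a, b)] += 1
    return pair_counts, topic_counts

# Hypothetical example data:
posts = [["AI safety", "Forecasting"],
         ["AI safety", "Career choice"],
         ["AI safety", "Forecasting", "Epistemics"]]
pairs, totals = topic_cooccurrence(posts)
print(pairs[("AI safety", "Forecasting")])                        # absolute: 2
print(pairs[("AI safety", "Forecasting")] / totals["AI safety"])  # relative: 2/3
```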

So the EA Forum has, like, an ancestor? Is this common knowledge? Lol

Felicifia: not functional anymore, but still available to view. I learned about it thanks to a Tweet from Jacy

From Felicifia Is No Longer Accepting New Users:

Update: threw together

  • Some data with authors, post titles, dates, and number of replies (I messed one section up, so some rows are missing links)
  • A rather long PDF with the posts and replies together (for quick keyword searching), with decent but not great formatting

A (potential) issue with MacAskill's presentation of moral uncertainty

Not able to write a real post about this atm, though I think it deserves one. 

MacAskill makes a similar point in WWOTF, but IMO the best and most decision-relevant quote comes from his second appearance on the 80k podcast:

There are possible views in which you should give more weight to suffering...I think we should take that into account too, but then what happens? **You end up with kind of a mix between the two, supposing you were 50/50 between classical utilitarian view and just strict negative utilitarian view.** **Then I think on the natural way of making the comparison between the two views, you give suffering twice as much weight as you otherwise would.**

I don't think the second bolded sentence follows in any objective or natural manner from the first. Rather, this reasoning takes a distinctly total utilitarian meta-level perspective, summing the various signs of utility and then implicitly considering them under total utilitarianism. 
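To make the arithmetic explicit, here is a minimal sketch of the intertheoretic averaging I take MacAskill to be doing. This is my own reconstruction, not his; the utility functions are the textbook forms of each view:

```python
# Classical utilitarianism:       U = happiness - suffering
# Strict negative utilitarianism: U = -suffering (happiness gets no weight)
def mixed_utility(happiness, suffering, p_classical=0.5):
    """Expected utility under a credence mix of the two views."""
    classical = happiness - suffering
    negative = -suffering
    return p_classical * classical + (1 - p_classical) * negative

# 0.5*(H - S) + 0.5*(-S) = 0.5*H - S, so suffering counts twice as
# much as happiness relative to the classical weights -- the 2:1
# accounting in the quote above.
print(mixed_utility(1, 0))  # 0.5
print(mixed_utility(0, 1))  # -1.0
```

Note that this already bakes in the total-utilitarian move I'm objecting to: it assumes the two views' utility units are directly comparable and summable.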

Even granting that the moral arithmetic is appropriate and correct, it's not at all clear what to do once the 2:1 accounting is complete. MacAskill's suffering-focused twin might have reasoned instead that 

Negative and total utilitarianism are both 50% likely to be true, so we must give twice the normal amount of weight to happiness. However, since any sufficiently severe suffering morally outweighs any amount of happiness, the moral outlook on a world with twice as much wellbeing is the same as before.

A better proxy for genuine neutrality (and the best one I can think of) might be to simulate bargaining over real-world outcomes from each perspective, which would probably result in at least some proportion of one's resources being deployed as though negative utilitarianism were true (perhaps exactly 50%, though I haven't given this enough thought to make the claim outright).

WWOTF: what did the publisher cut? [answer: nothing]

Contextual note: this post is essentially a null result. It seemed inappropriate both as a top-level post and as an abandoned Google Doc, so I’ve decided to put out the key bits (i.e., everything below) as Shortform. Feel free to comment/message me if you think that was the wrong call! 

Actual post

On his recent appearance on the 80,000 Hours Podcast, Will MacAskill noted that Doing Good Better was significantly influenced by the book’s publisher:[1] 

Rob Wiblin: ...But in 2014 you wrote Doing Good Better, and that somewhat soft pedals longtermism when you’re introducing effective altruism. So it seems like it was quite a long time before you got fully bought in.

Will MacAskill: Yeah. I should say for 2014, writing Doing Good Better, in some sense, the most accurate book that was fully representing my and colleagues’ EA thought would’ve been broader than the particular focus. And especially for my first book, there was a lot of equivalent of trade — like agreement with the publishers about what gets included. I also wanted to include a lot more on animal issues, but the publishers really didn’t like that, actually. Their thought was you just don’t want to make it too weird.

Rob Wiblin: I see, OK. They want to sell books and they were like, “Keep it fairly mainstream.”

Will MacAskill: Exactly...

I thought it was important to know whether the same was true with respect to What We Owe the Future, so I reached out to Will's team and received the following response from one of his colleagues [emphasis mine]:

Hi Aaron, thanks for sending these questions and considering to make this info publicly available.

However, in contrast to what one might perhaps reasonably expect given what Will said about Doing Good Better, I think there is actually very little of interest that can be said on this topic regarding WWOTF. In particular:

I'm not aware of any material that was cut, or any other significant changes to the content of the book that were made significantly because of the publisher's input. (At least since I joined Forethought in mid-2021; it's possible there was some of this at earlier stages of the project, though I doubt it.) To be clear: The UK publisher's editor read multiple drafts of the book and provided helpful comments, but Will generally changed things in response to these comments if and only if he was actually convinced by them. 

(There are things other than the book's content where the publisher exerted more influence – for instance, the publishers asked us for input on the book's cover but made clear that the cover is ultimately their decision. Similarly, the publisher set the price of the book, and this is not something we were involved in at all.)

As Will talks about in more detail here, the book's content would have been different in some ways if it had been written for a different audience – e.g., people already engaged in the EA community as opposed to the general public. But this was done by Will's own choice/design rather than because of publisher intervention. And to be clear, I think this influenced the content in mundane and standard ways that are present in ~all communication efforts – understanding what your audience is, aiming to meet them where they are and delivering your messages in way that is accessible to them (rather than e.g. using overly technical language the audience might not be familiar with).

  1. ^

     Quote starts at 39:47

A resource that might be useful: https://tinyapps.org/ 

 

There's a ton there, but one anecdote from yesterday: it referred me to this $5 iOS desktop app which (among other more reasonable uses) made me a full-quality, fully intra-linked >3,600-page PDF of (almost) every file/site linked to by every file/site linked to from Tomasik's homepage (it works best with old-timey, simpler sites like that)

New Thing

Last week I complained about not being able to see all the top shortform posts in one list. Thanks to Lorenzo for pointing me to the next best option: 

...the closest I found is https://forum.effectivealtruism.org/allPosts?sortedBy=topAdjusted&timeframe=yearly&filter=all, you can see the inflation-adjusted top posts and shortforms by year.

It wasn't too hard to put together a text doc with (at least some of each of) all 1470ish shortform posts, which you can view or download here.

  • Pros: (practically) infinite scroll of insight porn 
  • Cons: 
    • Longer posts get cut off at about 300 words
    • Each post is an ugly block of text
    • No links to the original post [see doc for more]
  • Various  other disclaimers/notes at the top of the document

I was starting to feel like the eternally-doomed protagonist of If You Give a Mouse a Cookie (it'll look presentable if I just do this one more thing), so I'm cutting myself off here to see whether it might be worth me (or someone else) making it better. 

Newer Thing (?)

  • I do think this could be an MVP (minimum viable product) for a much nicer-looking and more readable document, such as:
    • "this but without the posts cut off and with spacing figured out" 
    • "nice-looking searchable pdf with original media and formatting"
    •  "WWOTF-level-production book and audiobook"
    • Any of those ^ three options but only for the top 10/100/n posts
    • So by all means, copy and paste and turn it into something better!

 

Oh yeah and, if you haven't done so already, I highly recommend going through the top Shortform posts for each of the last four years here

Infinitely easier said than done, of course, but some Shortform feedback/requests:

  1. The link to get here from the main page is awfully small and inconspicuous (1 of 145 individual links on the page, according to a Chrome extension)
    1. I can imagine it being near/stylistically like:
      1. "All Posts" (top of sidebar)
      2. "Recommendations" in the center
      3. "Frontpage Posts", but to the main section's side or maybe as a replacement for it you can easily toggle back and forth from
  2. Would be cool to be able to sort and aggregate like with the main posts (nothing to filter by afaik)
    1. I'd really appreciate being able to see the highest-scoring Shortform posts ever, but afaik can't easily do that atm. 

For 2.a the closest I found is https://forum.effectivealtruism.org/allPosts?sortedBy=topAdjusted&timeframe=yearly&filter=all, you can see the inflation-adjusted top posts and shortforms by year.

For 1 it's probably best to post in the EA Forum feature suggestion thread

Late but thanks on both, and commented there! 

Events as evidence vs. spotlights

Note: inspired by the FTX+Bostrom fiascos and associated discourse. May (hopefully) develop into longform by explicitly connecting this taxonomy to those recent events (but my base rate of completing actual posts cautions humility)

Event as evidence

  • The default: normal old Bayesian evidence
    • The realm of "updates," "priors," and "credences" 
  • Pseudo-definition: Induces [1] a change to or within a model (of whatever the model's user is trying to understand)
  • Corresponds to models that are (as is often assumed):
    1. Well-defined (i.e. specific, complete, and without latent or hidden information)
    2. Stable except in response to 'surprising' new information
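The "evidence" sense is just a standard Bayes update; a minimal sketch, with illustrative numbers not tied to any real event:

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a hypothesis after observing an event.

    Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
    """
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# E.g., you're 30% confident of a hypothesis, and the observed event
# is 4x likelier if the hypothesis is true than if it's false:
print(bayes_update(0.3, 0.8, 0.2))  # ~0.63
```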

Event as spotlight

  • Pseudo-definition: Alters how a person views, understands, or interacts with a model, just as a spotlight changes how an audience views what's on stage
    • In particular, spotlights change the salience of some part of a model
  • This can take place both/either:
    • At an individual level (think spotlight before an audience of one); and/or
    • To a community's shared model (think spotlight before an audience of many)
  • They can also change which information latent in a model is functionally available to a person or community, just as restricting one's field of vision increases the resolution of whichever part of the image shines through

Example

  1. You're hiking a bit of the Appalachian Trail with two friends, going north, using the following map (the "external model"):   
  2. An hour in, your mental/internal model probably looks like this:
  3. Event: ~~the collapse of a financial institution~~ you hear traffic
    1. As evidence, this causes you to change where you think you are—namely, a bit south of the first road you were expecting to cross
    2. As spotlight, this causes the three of you to stare at the same map as before, but in such a way that your internal models are all very similar, each looking something like this
Really the crop should be shifted down some but I don't feel like redoing it rn
  1. ^

    Or fails to induce

Ok so things that get posted in the Shortform tab also appear in your (my) shortform post, which can be edited to not have the title "___'s shortform" and also has a real post body that is empty by default but you can just put stuff in.

There's also the usual "frontpage" checkbox, so I assume an individual's own shortform page can appear alongside normal posts(?).

The link is: [Draft] Used to be called "Aaron Bergman's shortform" (or smth)

I assume only I can see this but gonna log out and check

[This comment is no longer endorsed by its author]

Effective Altruism Georgetown will be interviewing Rob Wiblin for our inaugural podcast episode this Friday! What should we ask him?