All Posts

Sorted by Magic (New & Upvoted)

Friday, October 7th 2022

Thursday, October 6th 2022

Shortform
9 · ChanaMessinger · 1d
Ambitious Altruism

When I was explaining EA and my potential jobs to friends, family, and anyone else during my most recent job search, one framing I landed on that I found helpful was "ambitious altruism." It let me explain why just helping one person didn't feel like enough without coming off as a jerk (i.e. "I want to be more ambitious than that" rather than "that's not effective"). It doesn't have the maximizing quality, but it doesn't not have it either, since if there's something more you can do with the same resources, there's room to be more ambitious.
4 · ChanaMessinger · 1d
Habits of thought I'm working on

* Trying to be more gearsy, less heuristics-y. What's actually good or bad about this, what do they actually think, not just what general direction are they pulling the rope, etc.
* Noticing when we're arguing about the wrong thing, when we e.g. should be arguing about the breakdown of what percent one thing versus another
* Noticing when we're skating over a real object level [https://www.lesswrong.com/posts/Js34Ez9nrDeJCTYQL/politics-is-way-too-meta] disagreement [https://forum.effectivealtruism.org/posts/qgQaWub8iR2EERq7i/criticism-of-ea-criticisms-is-the-real-disagreement-about]
* Noticing whether I feel able to think thoughts
* Noticing when I'm only consuming / receiving ideas but not actually thinking
* Listing all the things that could be true about something
* More predictions / forecasts / models
* More often looking things up / tracking down a fact rather than sweeping it by or deciding I don't know
* Paraphrasing a lot and asking if I've got things right
* "Is that a lot?" - putting numbers in context [https://twitter.com/ChanaMessinger/status/1287801755221270528]
* If there's a weird fact from a study, you can question the study as well as the fact
* Say why you think things, including "I saw a headline about this"

Habits of thought I might work on someday

* Reversal tests: reversing every statement to see if the opposite also seems true

More I like: https://twitter.com/ChanaMessinger/status/1287737689849176065
4 · ChanaMessinger · 1d
Conversational moves in EA / Rationality that I like for epistemics

* “So you are saying that”
* “But I’d change my mind if”
* “But I’m open to push back here”
* “I’m curious for your take here”
* “My model says”
* “My current understanding is…”
* “...I think this because…”
* “...but I’m uncertain about…”
* “What could we bet on?”
* “Can you lay out your model for me?”
* “This is a butterfly idea [https://acesounderglass.com/2022/02/04/butterfly-ideas/]”
* “Let’s do a babble [https://www.lesswrong.com/s/pC6DYFLPMTCbEwH8W#:~:text=Babble%20and%20Prune%20is%20an,eternal%20conflict%20over%20your%20mind.]”
* “I want to gesture at something / I think this gestures at something true”

Wednesday, October 5th 2022

Frontpage Posts
Personal Blogposts
Shortform
8 · Linch · 2d
Is anybody trying to model/think about what actions we can do that are differentially leveraged during/in case of nuclear war, or the threat of nuclear war? In the early days of covid, most of us were worried early; many of us had reasonable forecasts and did stuff like buy hand sanitizer and warn our friends. But very few of us shorted airline stocks, lobbied for border closures, or did other things that could've gotten us differential influence or impact from covid. I hope we don't repeat this mistake.
6 · rodeo_flagellum · 2d
THOUGHTS AND NOTES: OCTOBER 5TH 0002022 (1)

As per my last shortform, over the next couple of weeks I will be moving my brief profiles for different catastrophes from my draft existential risk frameworks post into shortform posts, to make the existential risk frameworks post lighter and simpler. In my last shortform I included the profile for the use of nuclear weapons; today I include the profile for climate change.

CLIMATE CHANGE

* Risk (sections from the well-written Wikipedia page on Climate Change [https://en.wikipedia.org/wiki/Climate_change]): "Contemporary climate change includes both global warming and its impacts on Earth's weather patterns. There have been previous periods of climate change [https://en.wikipedia.org/wiki/Climate_variability_and_change], but the current rise in global average temperature is more rapid and is primarily caused by humans [https://en.wikipedia.org/wiki/Scientific_consensus_on_climate_change]. Burning fossil fuels [https://en.wikipedia.org/wiki/Fossil_fuel] adds greenhouse gases [https://en.wikipedia.org/wiki/Greenhouse_gas] to the atmosphere, most importantly carbon dioxide [https://en.wikipedia.org/wiki/Carbon_dioxide] (CO2) and methane [https://en.wikipedia.org/wiki/Methane]. Greenhouse gases warm the air [https://en.wikipedia.org/wiki/Greenhouse_effect] by absorbing heat radiated by the Earth, trapping the heat near the surface. Greenhouse gas emissions [https://en.wikipedia.org/wiki/Greenhouse_gas_emissions] amplify this effect, causing the Earth to take in more energy from sunlight than it can radiate [https://en.wikipedia.org/wiki/Earth%27s_Energy_Imbalance] back into space." In general, the risk from climate change mostly comes from the destabilizing downstream effects it has on civilization, rather than from i
4 · Aaron Bergman · 2d
WWOTF: WHAT DID THE PUBLISHER CUT? [ANSWER: NOTHING]

Contextual note: this post is essentially a null result. It seemed inappropriate both as a top-level post and as an abandoned Google Doc, so I’ve decided to put out the key bits (i.e., everything below) as Shortform. Feel free to comment/message me if you think that was the wrong call!

ACTUAL POST

On his recent appearance [https://80000hours.org/podcast/episodes/will-macaskill-what-we-owe-the-future/] on the 80,000 Hours Podcast, Will MacAskill noted that Doing Good Better [https://forum.effectivealtruism.org/topics/doing-good-better] was significantly influenced by the book’s publisher:[1] I thought it was important to know whether the same was true with respect to What We Owe the Future, so I reached out to Will's team and received the following response from one of his colleagues [emphasis mine]:

1. ^ Quote starts at 39:47 [https://80000hours.org/podcast/episodes/will-macaskill-what-we-owe-the-future/?startTime=2387.00&btp=59c71e40]
2 · ChanaMessinger · 2d
My Recommended Reading About Epistemics

For the content, but also for the vibe it immerses me in, which I think makes me better:

* Carl Shulman's research advice [https://docs.google.com/document/d/1_yuuheVqp1quDfkuRcpoW_HO7jPaI7QnRjF1zl_VovU/edit]
* Buck's Some thoughts on deference and inside-view models [https://forum.effectivealtruism.org/posts/53JxkvQ7RKAJ4nHc4/some-thoughts-on-deference-and-inside-view-models]

Tuesday, October 4th 2022

Frontpage Posts
Shortform
9 · Charlie_Guthmann · 3d
Grant-making as we currently do it seems pretty analogous to a command economy.
2 · Nathan Young · 3d
Maximise useful feedback, minimise rudeness

When someone says of your organisation "I want you to do X", do not say "You are wrong to want X". This rudely discourages them from giving you feedback in future. Instead, there are a number of options:

* If you want their feedback: "Why do you want X?" "How does a lack of X affect you?"
* If you don't want their feedback: "Sorry, we're not taking feedback on that right now" or "Doing X isn't a priority for us"
* If you think they fundamentally misunderstand something: "Can I ask you a question relating to X?"

None of these options tell them they are wrong.

I do a lot of user testing. Sometimes a user tells me something I disagree with. But they are the user. They know what they want. If I disagree, it's either because they aren't actually a user I want to support, they misunderstand how hard something is, or they don't know how to solve their own problems. None of these are solved by telling them they are wrong.

Often I see people responding to feedback with correction. I often do it myself. I think it has the wrong incentives. Rather than trying to tell someone they are wrong, now I try to either react with curiosity or to explain that I'm not taking feedback right now. That's about me rather than them.
-3 · jack jay · 3d
CANDID STREAM OF CONSCIOUSNESS

Perhaps my greatest skill is not accepting mediocrity. Mediocrity doesn’t fit my identity or character plot. I’m tired of the “it’s ok” culture. We need strict culture if we want to become great. When culture itself has become sick, we need pro-habitats that push people to a higher echelon.

WHY ARE YOU SURPRISED THAT YOU ARE GIVEN SO MUCH TO STRUGGLE FOR? Better to ask: how good is a movie when the character starts out great and barely improves?

And look, I get it, you literally can’t be certain that everything will go as planned; you always have to fight for your faith too. This very passage is a piece of writing built by me fighting my own doubts. And at the end of the day, if this really is nothing more than what it seems, does that not make faith and action towards creating a grand future even more important? A perfect reality may very well mean the risk of not achieving it is also real, and completely based on YOUR action.

To Christians: What does hierarchy look like in heaven? Use your resources for good if you want to get ahead in the kingdom. The time to act is now.

Traditionally, we have been taught since the beginning of time that being happy is the ultimate goal. But rather, the goal is to achieve a state of life where happiness, sorrow, joy, grief, comfort, misery all come together, simultaneously, to paint a bigger picture that thrives on both the highs and the lows. A picture where shadows are admired as much as highlights. https://youtu.be/m7Y_R9BGyGA

If there isn’t some global land ownership redistribution, then the Philippines is going to skyrocket in price. It’s beautiful: English infrastructure (signs), good weather, and amazing scenery.

Love is the most selfish emotion you can have. Ain’t that beautiful.

Physical laws cannot be broken. When the laws do not operate, there is no reality.

No small shit. Only existing in moments of stories that lead to glory. I feel like morty with

Monday, October 3rd 2022

Shortform
5 · Nathan Young · 4d
I think the EA forum wiki should allow longer and more informative articles. I think that it would get 5x traffic. So I've created a market to bet on.
5 · Gavin · 4d
Lovely satire of international development. [https://signalsinthefog.wordpress.com/2015/12/19/the-development-set-by-ross-coggins/] (h/t Eva Vivalt)
3 · rodeo_flagellum · 4d
THOUGHTS AND NOTES: OCTOBER 3RD 0002022 (1)

I have been working on a post which introduces a framework for existential risks that I have not seen covered on either LW [https://www.lesswrong.com/] or the EAF [https://forum.effectivealtruism.org/], but I think I've impeded my progress by setting out to do more than I originally intended. Rather than simply introduce the framework and compare it to Bostrom's 2013 framework [https://onlinelibrary.wiley.com/doi/abs/10.1111/1758-5899.12002] and the Wikipedia page on GCRs [https://en.wikipedia.org/wiki/GCR], I've tried to aggregate all global and existential catastrophes I could find under the "new" framework. Creating an anthology of global and existential catastrophes is something I would like to complete at some point, but doing so in the post I've written would be overkill and would not be in line with the goal of "making the introduction of this little-known framework brief and simple".

To make my life easier, I am going to remove the aggregated catastrophes section of my post. I will work incrementally (and somewhat informally) on accumulating links and notes for, and thinking about, each global and/or existential catastrophe through shortform posts. Each shortform post in this vein will pertain to a single type of catastrophe. Of course, I may post other shortforms in between, but my goal generally is to cover the different global and existential risks one by one via shortform. As was the case in my original post, I include DALLE-2 art with each catastrophe, and the loose structure for each catastrophe is Risk, Links, Forecasts.

Here is the first catastrophe in the list. Again, note that I am not aiming for comprehensiveness here, but rather am trying to get the ball rolling for a more extensive review of the catastrophic or existential risks that I plan to complete at a later date. The forecasts were observed on October 3, 0002022 and represent the community's uniform median forecast.

USE OF NUCLEAR WEAPONS (AN
2 · Kaleem · 4d
We're thinking of naming an office "Focal Point" - let me know what you think! [https://docs.google.com/forms/d/e/1FAIpQLSdxPfekVdh4d6UBCivY84cthvpuUc2Y5A2KRL5d0jjhJHBZ2g/viewform]

Sunday, October 2nd 2022

Shortform
10 · Gavin · 5d
Bostrom selects his most neglected paper here [https://econjwatch.org/File+download/1236/UnderappreciatedWorksSept2022.pdf?mimetype=pdf].
5 · Hauke Hillebrandt · 5d
I created a Zapier to post Pablo's ea.news [https://www.ea.news] feed of EA blogs and websites to this subreddit: https://reddit.com/r/eackernews

I wonder how much demand there'd be for a 'Hackernews'-style, high-frequency, link-only subreddit. I feel there's too much of a barrier to posting links on the EA forum. Thoughts?
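For illustration, here is a minimal sketch of the same feed-to-subreddit pipeline done in Python rather than Zapier. The feed URL is an assumption (ea.news may expose a different endpoint), and the Reddit API credentials are placeholders:

```python
import feedparser  # pip install feedparser
import praw        # pip install praw

FEED_URL = "https://www.ea.news/feed"  # assumed RSS endpoint, not confirmed
SUBREDDIT = "eackernews"

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # placeholder credentials
    client_secret="YOUR_CLIENT_SECRET",
    username="YOUR_BOT_ACCOUNT",
    password="YOUR_PASSWORD",
    user_agent="ea-news-crossposter/0.1",
)

posted_links = set()  # in practice, persist this between runs

def crosspost_new_entries() -> None:
    """Submit any feed entries not yet posted as link posts to the subreddit."""
    for entry in feedparser.parse(FEED_URL).entries:
        if entry.link not in posted_links:
            reddit.subreddit(SUBREDDIT).submit(title=entry.title, url=entry.link)
            posted_links.add(entry.link)

if __name__ == "__main__":
    crosspost_new_entries()
```

The main thing a self-hosted version has to handle that Zapier does for you is deduplication: the set of already-posted links needs to survive between runs (a small file or database would do).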

Saturday, October 1st 2022

Frontpage Posts
Shortform
4 · Esben Kran · 6d
Elon Musk's perspective on AGI safety (from the Tesla AI day, source [https://youtu.be/ODSJsviD_SU?t=10037]):

* If Tesla contributes significantly to AGI, they will invest a lot in AI safety research.
* There should be a governmental AI regulatory authority, like the FDA, that tries to ensure public safety for AGI.
* Tesla will probably make a significant contribution to AGI because of their unique real-world data advantage once their robots roll out.
Topic Page Edits and Discussion

Friday, September 30th 2022

Frontpage Posts
Shortform
2 · niplav · 7d
BAD INFORMATION HAZARD DISCLOSURE FOR EFFECTIVE ALTRUISTS (BAD I.D.E.A)

Epistemic effort: Four hours of armchair thinking, and two hours of discussion. No literature review, and the equations are intended as pointers rather than anything near conclusive.

Currently, the status quo in information sharing is that a suboptimally large amount of information hazards are likely being shared. In order to decrease infohazard sharing, we have modeled out a potential system for achieving that goal. As with all issues related to information hazards, we strongly discourage unilateral action. Below you find a rough outline of such a possible system and descriptions of its downsides. We furthermore currently believe that for the described system, in the domain of biosecurity the disadvantages likely outweigh the advantages (that’s why we called it Bad IDEA), and in the domain of AI capabilities research the advantages outweigh the disadvantages (due to suboptimal sharing norms such as “publishing your infohazard on arXiv”). It’s worth noting that there are potentially many more downside risks that neither author thought of.

Note: We considered using the term sociohazard/outfohazard/exfohazard [https://www.lesswrong.com/posts/yET7wbjjJZtpz6NF3/don-t-use-infohazard-for-collectively-destructive-info], but decided against it for reasons of understandability.

CURRENT SITUATION

* Few incentives not to publish dangerous information
* Based on previously known examples
* We’d like a system to incentivize people not to publish infohazards

MODEL

* Researcher discovers infohazard
* Researcher writes up description of infohazard (longer is better)
* Researcher computes cryptographic hash of infohazard (see the sketch below)
* Researcher sends hash of description to IDEA
* Bad IDEA stores hash
* Two possibilities:
* Infohazard gets published
* Researcher sends in description of infohazard
* Bad IDEA computes the cryptographic hash and compares the two
* Bad IDEA e
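For concreteness, here is a minimal sketch of the hash-commitment step in the model above, assuming SHA-256 as the hash function and a hypothetical registry object standing in for Bad IDEA; the names are illustrative, not part of any real system:

```python
import hashlib
from datetime import datetime, timezone

def commit(description: str) -> str:
    """Return the hex SHA-256 digest of an infohazard description."""
    return hashlib.sha256(description.encode("utf-8")).hexdigest()

class BadIdeaRegistry:
    """Hypothetical registry: stores commitments without seeing descriptions."""

    def __init__(self) -> None:
        self._commitments: dict[str, datetime] = {}  # digest -> time registered

    def register(self, digest: str) -> None:
        """Store a researcher's commitment with a timestamp."""
        self._commitments[digest] = datetime.now(timezone.utc)

    def verify(self, description: str) -> bool:
        """Check whether a revealed description matches an earlier commitment."""
        return commit(description) in self._commitments

# Usage: commit now, reveal only if/when the hazard gets published elsewhere.
registry = BadIdeaRegistry()
write_up = "Detailed description of the discovered hazard..."
registry.register(commit(write_up))
assert registry.verify(write_up)              # revealed text matches commitment
assert not registry.verify("unrelated text")  # anything else does not
```

This also illustrates why "longer is better" in the write-up step: a short or guessable description could be brute-forced by hashing candidate texts, whereas a long, detailed description makes the commitment effectively binding while revealing nothing until the researcher chooses to disclose it.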
Topic Page Edits and Discussion

Thursday, September 29th 2022

Shortform
4 · Lizka · 8d
Here are slides [https://docs.google.com/presentation/d/1Ijowxx_aBzwhpgZmU8w3laLm04ef6UrS7HOpD3C-Djk/edit?usp=sharing] from my "Writing on the Forum" workshop at EAGxBerlin.
1 · heylight · 9d
Question for longtermists... which scenario is better, in your mind?

a. We solve all the world’s problems by the end of the year, but we do it by actually putting a halt to technological progress. Everyone is happy and no one is suffering, but we won’t have billions of billions of billions of digital entities in 1,000 years.

b. People die and suffer now, but we focus on improving tech and in 100 years we have billions of billions of billions of digital entities and they’re all very happy.

Wednesday, September 28th 2022

Frontpage Posts
Shortform
13 · Linch · 9d
Honestly I don't understand the mentality of being skeptical of lots of spending on EA outreach. Didn't we have the fight about overhead ratios, fundraising costs, etc. with Charity Navigator many years ago? (and afaict decisively won).
7 · Esben Kran · 10d
🏆📈 We've created Alignment Markets [https://alignmentmarkets.com]! Here, you can bet on how AI safety benchmark competitions go. The current ones are about the Autocast warmup competition [https://forecasting.mlsafety.org/] (meta), the Moral Uncertainty Research Competition [https://moraluncertainty.mlsafety.org/], and the Trojan Detection Challenge [https://trojandetection.ai/]. It's hosted through Manifold Markets, so you'll set up an account on their site. I've chatted with them about creating an A-to-B prediction market, so maybe they'll be updated when we get there. Happy betting!
4 · Gavin · 9d
There is a vast amount of philosophical progress. But almost all of it is outside philosophy.

Jaw-dropping list, just on the topic of democracy; things that Rousseau's writing on democracy suffers from lacking:

* "Historical experiences with developed democracies
* Empirical evidence regarding democratic movements in developing countries
* Various formal theorems regarding collective decision making and preference aggregation, such as the Condorcet Jury-Theorem, Arrow’s Impossibility-Results, the Hong-Page-Theorem, the median voter theorem, the miracle of aggregation, etc.
* Existing studies on voter behavior, polarization, deliberation, information
* Public choice economics, incl. rational irrationality, democratic realism"
* ...

https://www.tandfonline.com/doi/full/10.1080/0020174X.2022.2124542
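As a small illustration of the first formal result named in that list, here is a sketch of the Condorcet Jury Theorem: if each of n independent voters is correct with the same probability p > 0.5, the chance that a simple majority is correct rises toward 1 as n grows. The value p = 0.55 below is an assumed competence level chosen only for the example:

```python
from math import comb

def majority_correct(n: int, p: float) -> float:
    """P(a strict majority of n independent voters is correct), competence p."""
    needed = n // 2 + 1  # votes required for a strict majority
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(needed, n + 1))

for n in (1, 11, 101, 1001):
    print(n, round(majority_correct(n, 0.55), 4))
# Accuracy climbs from 0.55 for a single voter toward ~1 as the group grows.
```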
2 · Max Clarke · 10d
A defense of the inner ring, excerpts from the original.

Fill your heart not with acidic desires to impress, or to accomplish; to go to EA Global, or to win the approval of "the core EAs". And fill it instead with a burning resolve to help all beings live good lives now and forever. Then, to the extent that EA inner rings succeed at their purpose, you will find yourself in one — which you once wanted terminally but now, instrumentally. An EA inner ring is just a resource for the betterment of others. It is utilized, but not exploited; enjoyed, but not pursued. It is comfortably inhabited — but left empty at the start of every day as the sun rises over the early morning frost.
Topic Page Edits and Discussion
