All Posts

Sorted by Magic (New & Upvoted)

Thursday, February 27th 2020

Shortform [Beta]
2 · MichaelA · 1d
Sources I've found that seem very relevant to the topic of downside risks [https://www.lesswrong.com/posts/RY9XYoqPeMc8W8zbH/mapping-downside-risks-and-information-hazards] / accidental harm
(See also my/Convergence’s posts on the topic [https://www.lesswrong.com/s/r3dKPwpkkMnJPbjZE].)
* Ways people trying to do good accidentally make things worse, and how to avoid them - Rob Wiblin and Howie Lempel [https://80000hours.org/articles/accidental-harm/] [80,000 Hours]
* How to Avoid Accidentally Having a Negative Impact with your Project - Max Dalton and Jonas Vollmer [https://www.youtube.com/watch?v=RU168E9fLIM] [EAG]
Sources that seem somewhat relevant
* Unintended consequences [https://en.wikipedia.org/wiki/Unintended_consequences] [Wikipedia] (in particular, "Unexpected drawbacks" and "Perverse results", not "Unintended benefits")
(See also my "shortform comment" lists of sources related to information hazards [https://forum.effectivealtruism.org/posts/EMKf4Gyee7BsY2RP8/michaela-s-shortform?commentId=dTghHNHmc5qf5znMQ], differential progress [https://forum.effectivealtruism.org/posts/EMKf4Gyee7BsY2RP8/michaela-s-shortform?commentId=xrj9XYjvsGLz6R2i6], and the unilateralist's curse [https://forum.effectivealtruism.org/posts/EMKf4Gyee7BsY2RP8/michaela-s-shortform?commentId=y3o9YFvj4iXiAqKWa].)
I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.

Wednesday, February 26th 2020

Shortform [Beta]
16 · Linch · 2d
Over a year ago, someone asked the EA community whether it’s valuable to become world-class at an unspecified non-EA niche or field. Our Forum’s own Aaron Gertler [https://forum.effectivealtruism.org/users/aarongertler] responded in a post [https://forum.effectivealtruism.org/posts/4WwcNSGd3XcpBC72Y/on-becoming-world-class], saying basically that there are a bunch of intangible advantages for our community in having many world-class people, even in fields/niches that are extremely unlikely to be directly EA-relevant.
Since then, Aaron became (entirely in his spare time, while working 1.5 jobs) a world-class Magic: The Gathering player, recently winning [https://www.hipstersofthecoast.com/2020/02/aaron-gertler-wins-the-first-dreamhack-arena-open] the DreamHack MtGA tournament and getting $30,000 in prize money, half of which he donated to GiveWell.
I didn’t find his arguments overwhelmingly persuasive at the time, and I still don’t. But it’s exciting to see other EAs come up with unusual theories of change, actually execute on them, and then be wildly successful.
5 · vaidehi_agarwalla · 2d
Meta-level thought: when asking about resources, a good practice might be to mention resources you've already come across and why those weren't helpful (if you found any), so that people don't need to recommend the most common resources multiple times.
Also, once we have an EA-relevant search engine, it would be useful to refer people to it even before they ask a question, in case that question has already been asked or that resource already exists.
The primary goal of both suggestions would be to make questions more specific and in-depth, and hopefully either to expand movement knowledge or to identify gaps in it. The secondary goal would be to save time!
3 · Wei_Dai · 2d
Someone who is vNM-rational with a utility function that is partly altruistic and partly selfish wouldn't give a fixed percentage of their income to charity (or have a lower bound on giving, like 10%). Such a person would instead dynamically adjust their relative spending on selfish interests and altruistic causes depending on empirical contingencies: for example, spending more on altruistic causes when new evidence arises showing that altruistic causes are more cost-effective than previously expected, and conversely lowering spending on altruistic causes if they become less cost-effective than previously expected. (See Is the potential astronomical waste in our universe too small to care about? [https://www.lesswrong.com/posts/BNbxueXEcm6dCkDuk/is-the-potential-astronomical-waste-in-our-universe-too] for a related idea.) I think this means we have to find other ways [https://www.greaterwrong.com/posts/jAixPHwn5bmSLXiMZ/open-and-welcome-thread-february-2020/comment/7XeLMhRprukE7Crfb] of explaining/modeling charitable giving, including the kind encouraged [https://www.givingwhatwecan.org/pledge/] in the EA community.
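A minimal toy model of the point above (illustrative only: the square-root utility form, the single cost-effectiveness parameter, and the numbers are assumptions for this sketch, not anything from the comment):

```python
# Toy model: a vNM-rational agent whose utility is
#   U(keep, give) = sqrt(keep) + sqrt(cost_effectiveness * give)
# The square-root form and the numbers below are assumptions chosen only
# to make the effect visible.

def optimal_giving_fraction(cost_effectiveness: float, income: float = 1.0,
                            steps: int = 10_000) -> float:
    """Grid-search the giving fraction that maximizes total utility."""
    best_frac, best_u = 0.0, float("-inf")
    for i in range(steps + 1):
        frac = i / steps
        give = frac * income
        keep = income - give
        u = keep ** 0.5 + (cost_effectiveness * give) ** 0.5
        if u > best_u:
            best_frac, best_u = frac, u
    return best_frac

# The optimal fraction tracks evidence about cost-effectiveness instead of
# sitting at a fixed percentage like 10%:
for e in (0.5, 1.0, 4.0):
    print(f"cost-effectiveness {e}: give {optimal_giving_fraction(e):.0%} of income")
# -> roughly 33%, 50%, 80% (closed form for this utility: e / (1 + e))
```

Under this toy utility, the optimal fraction shifts with each update about cost-effectiveness, which is exactly the behaviour the comment says a fixed pledge cannot capture.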

Tuesday, February 25th 2020

Shortform [Beta]
1 · MichaelStJules · 3d
Utility functions (preferential or ethical, e.g. social welfare functions) can have weak lexicality without strong lexicality, so that a difference in category A can be larger than the maximum difference in category B, but we can still make tradeoffs between them. This can be done, for example, by having separate utility functions f_A : X → ℝ and f_B : X → ℝ for A and B, respectively, such that:
* f_A(x) − f_A(y) ≥ 1 for all x satisfying the condition P(x) and all y satisfying Q(y) (e.g. Q(y) can be the negation of P(y), although this would normally lead to discontinuity), and
* f_B is bounded to have range in the interval [0, 1] (or range in an interval of length at most 1).
Then we can define our utility function as the sum f = f_A + f_B, so f(x) = f_A(x) + f_B(x). This ensures that all outcomes with P(x) are at least as good as all outcomes with Q(x), without being Pascalian/fanatical about maximizing f_A regardless of what happens to f_B. For example, f_A(x) ≤ −1 if there is any suffering in x that meets a certain threshold of intensity (Q(x)), and f_A(x) = 0 if there is no suffering at all in x (P(x)). f can still be continuous this way. If the probability that this threshold is met is p, with 0 ≤ p < 1, and the expected value of f_A conditional on this is bounded below by −L, with L > 0, regardless of p for the choices available to you, then increasing f_B by at least pL, which can be small, is better than trying to reduce p.
As another example, an AI could be incentivized to ensure it gets monitored by law enforcement. Its reward function could look like
f(x) = ∑_{i=1}^∞ I_{M_i}(x) + f_B(x),
where I_{M_i}(x) is 1 if the AI is monitored by law enforcement and passes some test in period i, and 0 otherwise. You could put an upper bound on the number of periods or use discounting to ensure the sum can't evaluate to infinity, since that would allow f_B to be ignored (maybe the AI will predict its expected lifetime to be infinite), but this would eventually allow f_B to overcome the I_{M_i}. This overall approach can be repeated f
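A small executable sketch of this construction (the concrete predicates and sub-utilities are my own stand-ins chosen only to show the ordering property, not the author's exact proposal):

```python
import math

# Weak lexicality via f = f_A + f_B, with f_A giving a gap of at least 1
# between P-outcomes and Q-outcomes and f_B squashed into (0, 1).

def f_A(outcome: dict) -> float:
    """Category A: at most -1 whenever intense suffering is present (Q(x)),
    exactly 0 when none is present (P(x))."""
    if outcome["intense_suffering"]:
        return -1.0 - outcome["suffering_amount"]
    return 0.0

def f_B(outcome: dict) -> float:
    """Category B: logistic squashing into (0, 1), so no difference in f_B
    can outweigh the >= 1 gap that f_A puts between P- and Q-outcomes."""
    return 1.0 / (1.0 + math.exp(-outcome["other_value"]))

def f(outcome: dict) -> float:
    return f_A(outcome) + f_B(outcome)

# Every P-outcome beats every Q-outcome, yet trade-offs still register
# within each category (f is not fanatical about f_A alone).
no_intense_suffering = {"intense_suffering": False, "suffering_amount": 0.0, "other_value": -3.0}
intense_suffering    = {"intense_suffering": True,  "suffering_amount": 0.2, "other_value": 10.0}
assert f(no_intense_suffering) > f(intense_suffering)
```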

Monday, February 24th 2020

Shortform [Beta]
13 · Linch · 4d
Cross-posted from Facebook [https://www.facebook.com/linchuan.zhang/posts/2736056929818407].
Catalyst (the biosecurity conference funded by the Long-Term Future Fund) was incredibly educational and fun. Random scattered takeaways:
1. I knew going in that everybody there would be much more knowledgeable about bio than I was. I was right. (Maybe more than half the people there had PhDs?)
2. Nonetheless, I felt like most conversations were very approachable and informative for me, from Chris Bakerlee explaining the very basics of genetics to me, to asking Anders Sandberg about some research he did that was relevant to my interests, to Tara Kirk Sell detailing recent advances in technological solutions in biosecurity, to random workshops where novel ideas were proposed...
3. There's a strong sense of energy and excitement from everybody at the conference, much more than at other conferences I've been to (including EA Global).
4. From casual conversations in EA-land, I get the general sense that work in biosecurity is fraught with landmines and information hazards, so it was oddly refreshing to hear so many people talk openly about exciting new possibilities to de-risk biological threats and promote a healthier future, while still being fully cognizant of the scary challenges ahead. I guess I didn't imagine there were so many interesting and "safe" topics in biosecurity!
5. I got a lot more personally worried about coronavirus than I was before the conference, to the point where I think it makes sense to start making some initial preparations and anticipate lifestyle changes.
6. There was a lot more DIY/community bio representation at the conference than I would have expected. I suspect this had to do with the organizers' backgrounds; I imagine that if most other people were to organize biosecurity conferences, they'd be skewed a lot more academic.
7. I didn't meet many (any?) people with a public health or epidemiology background.
8. The Stanford representation was rea
4 · MichaelA · 4d
All prior work I found that seemed substantially relevant to information hazards
(See also my/Convergence’s posts on the topic [https://www.lesswrong.com/s/r3dKPwpkkMnJPbjZE].)
* Information hazards [https://concepts.effectivealtruism.org/concepts/information-hazards/] [EA Concepts]
* Information Hazards in Biotechnology - Lewis et al. - 2019 - Risk Analysis [https://onlinelibrary.wiley.com/doi/full/10.1111/risa.13235] [open access paper]
* Bioinfohazards [https://forum.effectivealtruism.org/posts/ixeo9swGQTbYtLhji/bioinfohazards-1] [EA Forum]
* Information Hazards [https://nickbostrom.com/information-hazards.pdf] [Bostrom’s original paper; open access]
* Terrorism, Tylenol, and dangerous information [https://www.lesswrong.com/posts/Ek7M3xGAoXDdQkPZQ/terrorism-tylenol-and-dangerous-information] [LessWrong]
* Lessons from the Cold War on Information Hazards: Why Internal Communication is Critical [https://www.lesswrong.com/posts/k8qLzbHTubMjCHL2E/lessons-from-the-cold-war-on-information-hazards-why] [LessWrong]
* Horsepox synthesis: A case of the unilateralist's curse? [https://thebulletin.org/2018/02/horsepox-synthesis-a-case-of-the-unilateralists-curse/] [Lewis]
* Information hazard [https://wiki.lesswrong.com/wiki/Information_hazard] [LW Wiki]
* Informational hazards and the cost-effectiveness of open discussion of catastrophic risks [https://forum.effectivealtruism.org/posts/KPwgmDyHaceoEFSPm/informational-hazards-and-the-cost-effectiveness-of-open] [EA Forum]
* A point of clarification on infohazard terminology [https://www.lesswrong.com/posts/Rut5wZ7qyHoj3dj4k/a-point-of-clarification-on-infohazard-terminology] [LessWrong]
Somewhat less directly relevant
* The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse? [https://arxiv.org/abs/2001.00463] [open access paper] (commentary here [https://www.lesswrong.com/posts/H8fTYkNkpYio7XG8L/link-and-commentary-the-offense-defense-balance-of])
* The Vulnerable World Hypothesis [http
3 · MichaelA · 4d
Sources I've found that seem very relevant to the topic of civilizational collapse
* Civilization Re-Emerging After a Catastrophe - Karim Jebari [https://www.youtube.com/watch?v=Zhx5ieX-HPY] [EAGx Nordics]
* Civilizational Collapse: Scenarios, Prevention, Responses - Dave Denkenberger, Jeffrey Ladish [https://www.youtube.com/watch?v=gbYWHBoQ9gM&] [talks + Q&A]
* Update on civilizational collapse research - Ladish [https://forum.effectivealtruism.org/posts/wGqSqWLTwDrytxeER/update-on-civilizational-collapse-research] [EA Forum] (I found his talk more useful, personally)
* The long-term significance of reducing global catastrophic risks - Nick Beckstead [https://blog.givewell.org/2015/08/13/the-long-term-significance-of-reducing-global-catastrophic-risks/] [GiveWell/OPP] (Beckstead never actually writes "collapse", but discusses the chances humanity would "recover" following a GCR)
* Long-Term Trajectories of Human Civilization - Baum et al. [http://gcrinstitute.org/papers/trajectories.pdf] [open access paper] (the authors never actually write "collapse", but their section 4 is very relevant to the topic, and the paper is great in general)
* Defence in Depth Against Human Extinction: Prevention, Response, Resilience, and Why They All Matter - Cotton-Barratt, Daniel, Sandberg [https://onlinelibrary.wiley.com/doi/epdf/10.1111/1758-5899.12786] [open access paper] (collapse is only explicitly addressed briefly, but the paper as a whole still seems quite relevant and useful)
* Civilization: Institutions, Knowledge and the Future - Samo Burja [https://www.youtube.com/watch?v=OiNmTVThNEY] [Foresight talk]
Things I haven't properly read/watched/listened to yet but which might be relevant
* The long-term significance of reducing global catastrophic risks - Beckstead [https://blog.givewell.org/2015/08/13/the-long-term-significance-of-reducing-global-catastrophic-risks/] [GiveWell/OPP]
* Why and how civilisations collapse - Kemp [https://www.cser.ac.uk/news/why-and-how-civilisations-
3 · MichaelA · 4d
All prior work I've found that seemed substantially relevant to the unilateralist’s curse
* Unilateralist's curse [https://concepts.effectivealtruism.org/concepts/unilateralists-curse/] [EA Concepts]
* Horsepox synthesis: A case of the unilateralist's curse? [https://thebulletin.org/2018/02/horsepox-synthesis-a-case-of-the-unilateralists-curse/] [Lewis] (usefully connects the curse to other factors)
* The Unilateralist's Curse and the Case for a Principle of Conformity [https://www.nickbostrom.com/papers/unilateralist.pdf] [Bostrom et al.’s original paper]
* Hard-to-reverse decisions destroy option value [https://www.centreforeffectivealtruism.org/blog/hard-to-reverse-decisions-destroy-option-value/] [CEA]
Somewhat less directly relevant
* Managing risk in the EA policy space [https://forum.effectivealtruism.org/posts/Q7qzxhwEWeKC3uzK3/managing-risk-in-the-ea-policy-space] [EA Forum] (touches briefly on the curse)
* Ways people trying to do good accidentally make things worse, and how to avoid them [https://80000hours.org/articles/accidental-harm/] [80k] (only one section on the curse)
I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.
2 · MichaelA · 4d
All prior work I found that explicitly uses the terms differential progress / intellectual progress / technological development [https://forum.effectivealtruism.org/posts/XCwNigouP88qhhei2/differential-progress-intellectual-progress-technological]
* Differential Intellectual Progress as a Positive-Sum Project [https://foundational-research.org/differential-intellectual-progress-as-a-positive-sum-project/] [FRI]
* Differential technological development: Some early thinking [https://blog.givewell.org/2015/09/30/differential-technological-development-some-early-thinking/] [GiveWell]
* Differential progress [https://concepts.effectivealtruism.org/concepts/differential-progress/] [EA Concepts]
* Differential technological development [https://en.wikipedia.org/wiki/Differential_technological_development] [Wikipedia]
* On Progress and Prosperity [https://forum.effectivealtruism.org/posts/L9tpuR6ZZ3CGHackY/on-progress-and-prosperity] [EA Forum]
* Differential intellectual progress [https://wiki.lesswrong.com/wiki/Differential_intellectual_progress] [LW Wiki]
* Existential Risks: Analyzing Human Extinction Scenarios [https://www.nickbostrom.com/existential/risks.html] [open access paper] (section 9.4) (introduced the term differential technological development, I think)
* Intelligence Explosion: Evidence and Import [http://web.archive.org/web/20190430130748/http://intelligence.org/files/IE-EI.pdf] [MIRI] (section 4.2) (introduced the term differential intellectual development, I think)
Some things that are quite relevant but that don’t explicitly use the terms
* Strategic Implications of Openness in AI Development [https://www.nickbostrom.com/papers/openness.pdf] [open access paper]
I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.

Sunday, February 23rd 2020

No posts for February 23rd 2020
Shortform [Beta]
6 · EdoArad · 5d
[A brief note on altruistic coordination in EA]
1. EA as a community has a distribution over people of values and world-views (which are themselves uncertain and can be modeled Bayesianly as distributions).
2. Assuming everyone has already updated their values and world-views by virtue of epistemic modesty, each member of the community should want all the resources of the community to go a certain way.
   * That can include desires about the EA resource allocation mechanism.
3. The differences between individuals undoubtedly cause friction and resentment.
4. It seems like the EA community is incredible in its cooperative norms and low levels of unneeded politics.
   * There are concerns about how steady this state is.
   * Many thanks to anyone working hard to keep this so!
There's bound to be massive room for improvement: a clear goal of what the best outcome would be given a distribution as above, a way of measuring where we're at, an analysis of where we are heading under the current status (an implicit parliamentary model, perhaps?), and suggestions for better mechanisms and norms that result from the analysis.

Thursday, February 20th 2020

Shortform [Beta]
3 · MichaelA · 8d
Some concepts/posts/papers I find myself often wanting to direct people to:
* https://forum.effectivealtruism.org/posts/omoZDu8ScNbot6kXS/beware-surprising-and-suspicious-convergence
* https://www.lesswrong.com/posts/oMYeJrQmCeoY5sEzg/hedge-drift-and-advanced-motte-and-bailey
* https://forum.effectivealtruism.org/posts/voDm6e6y4KHAPJeJX/act-utilitarianism-criterion-of-rightness-vs-decision
* http://gcrinstitute.org/papers/trajectories.pdf
(Will likely be expanded as I find and remember more.)

Wednesday, February 19th 2020

Shortform [Beta]
18 · Max_Daniel · 9d
[On https://www.technologyreview.com/s/615181/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/]
* [ETA: After having talked to more people, it now seems to me that disagreement on this point explains different reactions more often than I thought it would. I'm also now less confident that my impression that there wasn't bad faith from the start is correct, though I think I still somewhat disagree with many EAs on this. In particular, I've also seen plenty of non-EA people who don't plausibly have a "protect my family" reaction say the piece felt like a failed attempt to justify a negative bottom line that was determined in advance.]
  (Most of the following doesn't apply in cases where someone is acting in bad faith and is determined to screw you over. And in fact I've seen the opposing failure mode of people assuming good faith for too long. But I don't think this is a case of bad faith.)
* I've seen some EAs react pretty negatively or angrily to that piece. (Tbc, I've also seen different reactions.) Some have described the article as a "hit piece".
* I don't think it qualifies as a hit piece. It's more like a piece that's independent/pseudo-neutral/ambiguous and tries to stick to dry facts/observations, but in some places provides a distorted picture by failing to be charitable, arguably missing the point, or being one-sided and selective in the observations it reports.
* I still think that reporting like this is net good, and that the world would be better if there were more of it at the margin, even if it has flaws as severe as this piece's. (Tbc, I think there would have been a plausibly realistic/achievable version of the article that would have been better, and that there is fair criticism one can direct at it.)
* To put it bluntly, I don't believe t
2 · ofer · 9d
The 2020 annual letter [https://www.gatesnotes.com/2020-Annual-Letter] of Bill and Melinda Gates is titled "Why we swing for the fences", and it seems to spotlight an approach that resembles OpenPhil's hits-based giving [https://www.openphilanthropy.org/blog/hits-based-giving]. From the 2020 annual letter:
1 · Ben Cottier · 9d
TL;DR: Are there any forum posts or similarly accessible writing that clarify different notions of x-risk? If not, does it seem worth writing?
My impression is that prevailing notions of x-risk (i.e. what it means, not specific cause areas) have broadened or shifted over time, but there's a lack of clarity about which notion/definition people are basing their arguments on in discourse. At the same time, discussion of x-risk sometimes seems too narrow. For example, in the most recent 80K podcast with Will MacAskill, they at one point talk about x-risk in terms of literal 100% human annihilation. IMO this is one of the least relevant notions of x-risk for cause prioritisation purposes. Perhaps there's a bias because literal human extinction is the most concrete/easy to explain/easy to reason about? Nowadays I frame longtermist cause prioritisation more like "what could cause the largest losses to the expected value of the future" than "what could plausibly annihilate humanity".
Bostrom (2002 [https://www.nickbostrom.com/existential/risks.html]) defined x-risk as "one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential". There is also a taxonomy in section 3 of the paper. Torres (2019 [https://docs.wixstatic.com/ugd/d9aaad_f82846cf065645ad87897f2a7281cebf.pdf]) explains and analyses five different definitions of x-risk, which I think all have some merit.
To be clear, I think many people have internalised broader notions of x-risk in their thoughts and arguments, both generally and for specific cause areas. I just think it could use some clarification and a call for people to clarify themselves, e.g. in a forum post.
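One hedged way to make the broader framing precise (an illustration only, not a definition taken from the post, Bostrom, or Torres):

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Rank a potential event $E$ by the expected loss it inflicts on the value $V$
% of the long-term future, rather than by whether it causes literal extinction.
\[
  \Delta_E = \Pr(E)\,\bigl(\operatorname{E}[V \mid \neg E] - \operatorname{E}[V \mid E]\bigr)
\]
% Literal extinction is the special case where $\operatorname{E}[V \mid E]$ is near zero;
% an unrecoverable collapse or a permanent lock-in of bad values can also produce a
% large $\Delta_E$ without 100\% human annihilation.
\end{document}
```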
