All Posts

Sorted by Magic (New & Upvoted)

Saturday, July 2nd 2022

Shortform
61 · RyanCarey · 21h
Comments on Jacy Reese Anthis' Some Early History of EA [https://jacyanthis.com/some-early-history-of-effective-altruism] (archived version [https://archive.ph/b8pZr]).

Summary: The piece could give the reader the impression that Jacy, Felicifia and THINK played a comparably important role to the Oxford community, Will, and Toby, which is not the case. I'll follow the chronological structure of Jacy's post, focusing first on 2008-2012, then 2012-2021. Finally, I'll discuss "founders" of EA, and sum up.

2008-2012

Jacy says that EA started as the confluence of four proto-communities: 1) SingInst/rationality, 2) Givewell/OpenPhil, 3) Felicifia, and 4) GWWC/80k (or the broader Oxford community). He also gives honorable mentions to randomistas and other Peter Singer fans. Great - so far I agree. What is important to note, however, is the contributions that these various groups made. For the first decade of EA, most key community institutions of EA came from (4) - the Oxford community, including GWWC, 80k, and CEA, and secondly from (2), although Givewell seems to me to have been more of a grantmaking entity than a community hub. Although the rationality community provided many key ideas and introduced many key individuals to EA, the institutions that it ran, such as CFAR, were mostly oriented toward its own "rationality" community.

Finally, Felicifia is discussed at greatest length in the piece, and Jacy clearly has a special affinity to it, based on his history there, as do I. He goes as far as to describe the 2008-12 period as a history of "Felicifia and other proto-EA communities". Although I would love to take credit for the development of EA in this period, I consider Felicifia to have had the third- or fourth-largest role in "founding EA" of groups on this list. I understand its role as roughly analogous to the one currently played (in 2022) by the EA Forum, as compared to those of CEA and OpenPhil: it provides a loose social scaffolding that extends to parts

Friday, July 1st 2022

Shortform
1 · JoyOptimizer · 2d
This is a call for test prompts for GPT-EA (announcement post: https://forum.effectivealtruism.org/posts/AqfWhMvfiakEcpwfv/training-a-gpt-model-on-ea-texts-what-data). I want test cases and interesting prompts you would like to see tried. This helps track and guide the development of GPT-EA versions. The first version, GPT-EA-Forum-v1, has been developed. GPT-EA-Forum-v2 will include more posts and also comments.
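A minimal sketch of how submitted prompts might be batch-tested against a checkpoint, assuming a Hugging Face-style text-generation pipeline; the model path "./gpt-ea-forum-v1" is a placeholder of mine, not the project's published model name:

```python
# Batch-test a list of submitted prompts against a fine-tuned checkpoint.
# The model path "./gpt-ea-forum-v1" is a placeholder (assumption), not the
# project's published model name.
from transformers import pipeline

generator = pipeline("text-generation", model="./gpt-ea-forum-v1")

test_prompts = [
    "The most effective way to reduce animal suffering is",
    "A common objection to longtermism is",
]

for prompt in test_prompts:
    result = generator(prompt, max_new_tokens=80, do_sample=True, temperature=0.8)
    print(prompt, "->", result[0]["generated_text"])
```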

Thursday, June 30th 2022

Frontpage Posts
Shortform
12 · Jacy · 3d
BRIEF THOUGHTS ON THE PRIORITIZATION OF QUALITY RISKS

This is a brief shortform post to accompany "The Future Might Not Be So Great." [https://forum.effectivealtruism.org/posts/WebLP36BYDbMAKoa5/the-future-might-not-be-so-great] These are just some scattered thoughts on the prioritization of quality risks not quite relevant enough to go in the post itself. Thanks to those who gave feedback on the draft of that post, particularly on this section.

I present a more detailed argument for the prioritization of quality risks (particularly moral circle expansion) over extinction risk reduction (particularly through certain sorts of AI research) in Anthis (2018) [https://forum.effectivealtruism.org/posts/BY8gXSpGijypbGitT/why-i-prioritize-moral-circle-expansion-over-artificial], but here I will briefly note some thoughts on importance, tractability, and neglectedness [https://forum.effectivealtruism.org/topics/itn-framework-1]. Two related EA Forum posts are “Cause Prioritization for Downside-Focused Value Systems” (Gloor 2018) [https://forum.effectivealtruism.org/posts/225Aq4P4jFPoWBrb5/cause-prioritization-for-downside-focused-value-systems] and “Reducing Long-Term Risks from Malevolent Actors” (Althaus and Baumann 2020) [https://forum.effectivealtruism.org/posts/LpkXtFXdsRd4rG8Kb/reducing-long-term-risks-from-malevolent-actors].

Additionally, at this early stage of the longtermist movement, the top priorities for population and quality risk may largely intersect. Both issues suggest foundational research of topics such as the nature of AI control and likely trajectories of the long-term future, community-building of thoughtful do-gooders, and field-building of institutional infrastructure to use for steering the long-term future.

IMPORTANCE

One important application of the EV of human expansion is to the “importance” of population and quality risks. Importance can be operationalized as the good done if the entire cause succeeded in solving its corresponding problem
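For reference, the ITN framework linked above is usually written as the decomposition (this restatement is not part of the original post):

\[
\frac{\text{good done}}{\text{extra resources}}
= \underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{importance}}
\times \underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{tractability}}
\times \underbrace{\frac{\text{\% increase in resources}}{\text{extra resources}}}_{\text{neglectedness}}
\]

Operationalizing importance as "the good done if the entire cause were solved" corresponds to the first factor.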
2 · Gavin · 2d
"Effective Accelerationism" [https://swarthy.substack.com/p/effective-accelerationism-eacc] (Kent Brockman: I for one welcome our Vile Offspring.)
1 · RogerAckroyd · 3d
Among animal EAs in the US there is talk about an upcoming Supreme Court case in which California's import restrictions on pork produced to lower standards are likely to be overturned. A sad turn of events if it happens. I also find it annoying that some activists are trying to ally it with a larger left-wing cause, warning that it will lead to a general race to the bottom on regulations. As someone who is more right-wing on many issues, I am not very worried about a race to the bottom in labor market regulation. I also don't see how it is tactically smart to tie the defense of animal welfare standards to the larger project of ending domestic free trade in the US. The Supreme Court is never going to write an opinion that would allow California to ban imports from states with a lower minimum wage, and that would also be a step much too far for the Biden administration and most Democrats, yet animal-friendly lawyers on Twitter seem completely unconcerned to suggest that this is the principle they want.
1 · sawyer · 3d
Today is Asteroid Day [https://asteroidday.org/]. I didn't know about this until today. Seems like a potential opportunity for more general communication on global catastrophic risks.

Monday, June 27th 2022

Frontpage Posts
Shortform
3 · dominicroser · 6d
Looking for help: what's the opposite of counterfactual reasoning -- in other words, when EAs encourage counterfactual reasoning, what do they discourage? I ask because I'm writing about good epistemic practices and mindsets. I am trying to structure my writing as a list of opposites (scout mindset vs. soldier mindset, numerical vs. verbal reasoning, etc.). Would it be correct to say that in the case of counterfactual reasoning there is no real opposite? Rather, the appropriate contrast is "counterfactual reasoning done well vs. counterfactual reasoning done badly"?
1 · rileyharris · 6d
Recently, I was reading David Thorstad’s new paper “Existential risk pessimism and the time of perils” [https://globalprioritiesinstitute.org/existential-risk-pessimism-and-the-time-of-perils-david-thorstad-global-priorities-institute-university-of-oxford/]. In it, he models the value of reducing existential risk under a range of different assumptions. The headline results are that 1) on the most plausible assumptions, existential risk reduction is not overwhelmingly valuable: though it may still be quite valuable, it probably doesn't swamp all other cause areas; and 2) thinking that extinction is more likely tends to weaken the case for existential risk reduction rather than strengthen it.

One of the results struck me as particularly interesting; I call it the repugnant [https://plato.stanford.edu/entries/repugnant-conclusion/] solution: if we can reduce existential risk to 0% per century across all future centuries, this act is infinitely valuable, even if the initial risk was absolutely tiny and each century is only just of positive value. This act is therefore better than basically anything else we could do. Perhaps, in a Pascalian way [https://nickbostrom.com/papers/pascal.pdf], if we think there is a tiny chance that some particular action will lead to a permanent reduction in existential risk, that act too is infinitely valuable, and everything breaks. This remains true even if we decrease the value of each century from “really amazingly great” to “only just net positive”.
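A minimal sketch of the arithmetic behind this result, using a toy model (my simplification, not Thorstad's full setup) in which each century humanity survives contributes a constant value v > 0 and extinction risk is a constant r per century:

\[
V(r) = \sum_{t=1}^{\infty} v\,(1-r)^{t} = v\,\frac{1-r}{r} \quad \text{for } r > 0,
\qquad
V(0) = \sum_{t=1}^{\infty} v = \infty .
\]

For any positive per-century risk the expected value of the future is finite, but pushing the risk to exactly zero makes the sum diverge for arbitrarily small v, which is what generates the "repugnant solution" described above.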
Topic Page Edits and Discussion

Sunday, June 26th 2022

Frontpage Posts
Shortform
2 · Liam Gong · 6d
Hello! I have read that sometimes infrastructure is built but goes unused because it's easier to raise money to build than it is to raise money to maintain. If this is a real problem, I thought maybe it would make sense to offer micro-endowments, the idea being that solving a problem in perpetuity would raise the emotional stakes back to being on par with new construction. My main questions are: 1) Is this a real problem? and 2) Do you know of any case where maintenance costs are low enough that a micro-endowment would be on par with the initial cost (maybe a yearly maintenance cost of 1/25th of installation cost)? Thank you for any and all feedback!
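A rough back-of-the-envelope sizing, assuming the endowment can sustainably pay out a fixed fraction of its principal each year (the 4% payout rate below is an illustrative assumption, not a recommendation):

```python
# Rough sizing of a micro-endowment: how much principal is needed so that a
# sustainable yearly payout covers yearly maintenance?
def endowment_needed(installation_cost: float,
                     annual_maintenance_fraction: float = 1 / 25,
                     sustainable_payout_rate: float = 0.04) -> float:
    """Principal whose yearly payout covers the yearly maintenance cost."""
    annual_maintenance = installation_cost * annual_maintenance_fraction
    return annual_maintenance / sustainable_payout_rate

# Example: a $10,000 installation with maintenance at 1/25 of cost per year
# needs an endowment of roughly $10,000, i.e. on par with the initial cost.
print(endowment_needed(10_000))  # 10000.0
```

So at a yearly maintenance cost of 1/25th of installation cost, the micro-endowment would indeed be roughly the same size as the initial construction budget.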

Saturday, June 25th 2022

Shortform
7 · Yonatan Cale · 8d
MY ATTEMPT TO HELP WITH AI SAFETY

Meta: This feels like something emotional where, if somebody looked at my plan from the outside, they'd have obvious and good feedback, but my own social circle is not worried or knowledgeable about AGI, and so I hope someone will read this.

BEST BET: META SOFTWARE PROJECTS

It would be my best personal fit: running one or multiple software projects that require product work, such as understanding what the users actually want. My bottleneck: talking to actual users with pain points (researchers? meta orgs with software problems? funders? I don't know).

PLAN B: ADVOCACY

I think I have potential to grow into a role where I explain complicated things in a simple way, without annoying people. Advocacy seems scary, but I think my experience strongly suggests I should try.

PLAN C: RESEARCH?

Usually when I look closely at a field, I have new stuff to contribute. I do have impostor syndrome around AGI Safety research, but again, probably people like me should try (?) [I am not a mathematician at all. Am I just wrong here?]

BOTTLENECK FOR PLANS B+C: GETTING A BETTER MODEL

What model specifically: if you erased all the information I've heard about experts speculating on "when will we have AGI?" and "what's the chance it will kill us all?", could I re-invent it? Could I figure out which expert is right? This seems like the first layer, and an important one.

My actionable items:
1. Talk to friends about AGI. They ask questions, like "can't the AGI simply ADVISE us on what to do?", and I answer.
   1. We both improve our models (specifically, if what I say doesn't seem convincing, then maybe it's wrong?)
   2. I slowly exit my comfort zone of "being the weird person talking about AGI"
2. Write my own model, post it for comments.
   1. Maybe my agreements/disagreements with this [https://forum.effectivealtruism.org/posts/j7rj3ZyYbmacZXycn/linkpost-christiano-on-agreement-disagreement-with-yudkowsky]

Friday, June 24th 2022

Personal Blogposts
Shortform
8 · quinn · 8d
We need an in-depth post on moral circle expansion (MCE), minoritarianism, and winning.

I expect EA's MCE projects to be less popular than anti-abortion advocacy is in the US (37% say abortion ought to be illegal in all or most cases [https://www.pewresearch.org/religion/fact-sheet/public-opinion-on-abortion/], while, for one example, veganism is at 6% [https://www.trulyexperiences.com/blog/veganism-statistics-usa/#Vegan-Population-Statistics-in-the-US]). I guess the specifics of how the anti-abortion movement operated may be too far in the weeds of contingent and peculiar pseudodemocracy, winning elections with less than half of the votes and securing judges and so on, but it seems like we don't want to miss out on studying this. There may be insights.

While many EAs would (I think rightly) consider the anti-abortion people colleagues as MCE activists, some EAs may also (I think debatably) admire Republicans for their ruthless, shrewd, occasionally thuggish commitment to winning. Regarding the latter, I would hope to hear out a case for principles over policy preference, keeping our hands clean, refusing to compromise our integrity, and so on. I'm about 50:50 on where I'd expect to fall personally on the playing-fair-and-nice stuff. I guess it's a question of how much Republicans expect to suffer from the externalities of thuggishness, if we want to use them to reason about the price we're willing to put on our integrity.

Moreover, I think this "colleagues as MCE activists" stuff is under-discussed. When you steelman the anti-abortion movement, you assume that they understand multiplication [https://www.lesswrong.com/tag/shut-up-and-multiply] as well as we do, and are making a difficult and unhappy tradeoff about the QALYs lost to abortions needed because of pregnancies gone wrong, or to unclean black-market abortions, or what have you. I may feel like I oppose the anti-abortion people on multiplicationist/consequentialist grounds (I also just don't think reducing incidence of disvaluable things by ou
1 · Puggy Knudson · 8d
Marketing AI reform: you might be able to have a big impact on AI reform by changing the framing. Right now, framing it as "AI alignment" sells the idea that there will be computers with agency, or something like free will, or that they will choose acts like a human. It could instead be marketed as something like preventing "automated weapons" or "computational genocide". By emphasizing the fact that a large part of the reason we work on this problem is that humans could use computers to systematically cleanse populations, we could win people to our side.

Proposal: change the framing from "Computers might choose to kill us" to "Humans will use computers to kill us", regardless of whether either potential outcome is more likely than the other. You could probably get more funding, more serious attention, and better reception just by marketing the idea in a better way. Who knows, maybe some previously unsympathetic billionaire or government would be willing to commit hundreds of millions to this area just by changing the way we talk about it.
