All Posts

Sorted by Magic (New & Upvoted)

2020

Shortform [Beta]
23 · Pablo_Stafforini · 1mo

I don't know if there is a designated place to leave comments about the EA Forum, so for the time being I'm posting them here. I think the current homepage [https://forum.effectivealtruism.org/] has a number of problems:

* The 'Community Favorites' section keeps listing the same posts over and over again. I don't see the point of having a prominent list of favorite posts on the homepage that changes so little. I suggest expanding the list considerably so that regular visitors can expect to see novel posts every time they visit the homepage.
* [Note: in light of Oli's comment [https://forum.effectivealtruism.org/posts/HfSfZ2ekXaryadBrB/pablo_stafforini-s-shortform#JwhC4dJGvAaiz2ZSB] below, I'm retracting this bullet point.] The 'Latest Posts' section sorts posts neither by karma nor by date; rather, it seems to rely on a hybrid sorting algorithm. I don't think this is useful: as someone who checks the homepage regularly, I want to be able to easily see what the latest posts are, so that when I go down the list and eventually come across a post I have already seen, I can conclude that I have seen all posts after it as well. With the current sorting algorithm, there's no way for me to ensure that my browsing session has exhausted all the posts published since my previous session.
* I find it hard to understand the meaning of the 'Community' category. The description says that it consists of "posts with topical content or which relate to the EA community itself". But that description also draws a contrast to "Frontpage posts, which are selected by moderators as especially interesting or useful to people with interest in doing good effectively." This contrast suggests that 'Community' posts are simply posts that haven't been curated, as opposed to posts with a focus on the EA community. In other words, there are two separate distinctions here: that between curated vs. non-curated posts, and that betw
22 · Max_Daniel · 3d

[Is longtermism bottlenecked by "great people"?]

Someone very influential in EA recently claimed in conversation with me that there are many tasks X such that (i) we currently don't have anyone in the EA community who can do X, (ii) the bottleneck for this isn't credentials or experience or knowledge but person-internal talent, and (iii) it would be very valuable (specifically from a longtermist point of view) if we could do X. And that therefore what we most need in EA are more "great people".

I find this extremely dubious. (In fact, it seems so crazy to me that it seems more likely than not that I significantly misunderstood the person who I think made these claims.) The first claim is of course vacuously true if, for X, we choose some ~impossible task such as "experience a utility-monster amount of pleasure" or "come up with a blueprint for how to build safe AGI that is convincing to benign actors able to execute it". But of course more great people don't help with solving impossible tasks.

Given the size and talent distribution of the EA community, my guess is that for most apparent X, the issue is that (a) X is ~impossible; (b) there are people in EA who could do X, but the relevant actors cannot identify them; or (c) acquiring the ability to do X is costly (e.g. perhaps you need time to acquire domain-specific expertise), even for maximally talented "great people", and the relevant actors either are unable to help pay that cost (e.g. by training people themselves, or by giving them the resources to allow them to get training elsewhere) or make a mistake by not doing so.

My best guess for the genesis of the "we need more great people" perspective: Suppose I talk a lot to people at an organization that thinks there's a decent chance we'll develop transformative AI soon but that it will go badly, and that as a consequence tries to grow as fast as possible to pursue various ambitious activities which they think reduce that risk. If these activities are scalable
16 · Linch · 18d

Cross-posted from Facebook.

Reading Bryan Caplan and Zach Weinersmith's new book has made me somewhat more skeptical about open borders (from a high prior belief in its value). Before reading the book, I was already aware of the core arguments (e.g. Michael Huemer's right to immigrate, basic cosmopolitanism, some vague economic claims about doubling GDP). I was hoping the book would have more arguments, or stronger versions of the arguments I'm familiar with. It mostly did not.

The book did convince me that the prima facie case for open borders was stronger than I thought. In particular, the section where he argued that a bunch of different normative ethical theories should, all else equal, lead to open borders was moderately convincing. It would have updated me towards open borders if I subscribed to a stronger "weight all mainstream ethical theories equally" form of moral uncertainty, or if I had previously held a strong belief in a moral theory that I took to be against open borders. However, I already fairly strongly subscribe to cosmopolitan utilitarianism and see no problem with aggregating utility across borders.

Most of my concerns with open borders are related to Chesterton's fence, and Caplan's counterarguments came in three forms:

1. Doubling GDP is so massive that it should override any conservatism prior.
2. The US historically had open borders (pre-1900) and did fine.
3. On the margin, increasing immigration in all the American data Caplan looked at didn't seem to have the catastrophic cultural/institutional effects that naysayers claim.

I find this insufficiently persuasive.

___

Let me outline the strongest case I'm aware of against open borders: Countries are mostly not rich and stable because of their physical resources, or because of the arbitrary nature of national boundaries. They're rich because of institutions and good governance. (I think this is a fairly mainstream belief among political economists.) These institutions are, again, ev
13 · Khorton · 1mo

Who should pay the cost of Googling studies on the EA Forum?

1. Many EA Forum posts have minimal engagement with relevant academic literature.
2. If you see a Forum post that doesn't engage with literature you think is relevant, you could make a claim from memory without looking up a citation, but there's a reasonable chance you'll be wrong.
3. Many people say they'd rather see an imperfect post or comment than not have it at all.
4. But people tend to remember an original claim, even if it's later debunked.
5. Maybe the best option is to phrase my comment as a question: "Have you looked at the literature on X?"
11 · Khorton · 1mo

I'm 60% sure that LessWrong people use the term "Moloch" in almost exactly the same way as social justice people use the term "kyriarchy" (or "capitalist cis-hetero patriarchy"). I might program my browser to replace "Moloch" with "kyriarchy". Might make Christian Twitter confusing, though.

2019

Shortform [Beta]
50 · Max_Daniel · 2mo

[Some of my high-level views on AI risk.]

[I wrote this for an application a couple of weeks ago, but thought I might as well dump it here in case someone was interested in my views. / It might sometimes be useful to be able to link to this.]

[In this post I generally state what I think before updating on other people's views – i.e., what's sometimes known as 'impressions' as opposed to 'beliefs.' [https://forum.effectivealtruism.org/posts/WKPd79PESRGZHQ5GY/in-defence-of-epistemic-modesty%23cubpmCn7XJE5FQYEq]]

Summary

* Transformative AI (TAI) – the prospect of AI having impacts at least as consequential as the Industrial Revolution – would plausibly (~40%) be our best lever for influencing the long-term future if it happened this century, which I consider to be unlikely (~20%) but worth betting on.
* The value of TAI depends not just on the technological options available to individual actors, but also on the incentives governing the strategic interdependence between actors. Policy could affect both the amount and quality of technical safety research and the 'rules of the game' under which interactions between actors will play out.

Why I'm interested in TAI as a lever to improve the long-run future

I expect my perspective to be typical of someone who has become interested in TAI through their engagement with the effective altruism (EA) community. In particular:

* My overarching interest is to make the lives of as many moral patients as possible go as well as possible, no matter where or when they live; and
* I think that in the world we find ourselves in – it could have been otherwise – this goal entails strong longtermism [https://forum.effectivealtruism.org/posts/qZyshHCNkjs3TvSem/longtermism], i.e. the claim that "the primary determinant of the value of our actions today is how those actions affect the very long-term future."

Less standard but not highly unusual (within EA) high-level views I hold mo
49 · JP Addison · 5mo

Appreciation post for Saulius

I realized recently that the same author [https://forum.effectivealtruism.org/users/saulius] that made the corporate commitments post [https://forum.effectivealtruism.org/posts/XdekdWJWkkhur9gvr/will-companies-meet-their-animal-welfare-commitments] and the misleading cost-effectiveness post [https://forum.effectivealtruism.org/posts/zdAst6ezi45cChRi6/list-of-ways-in-which-cost-effectiveness-estimates-can-be] also made all three of these excellent posts on neglected animal welfare concerns that I remembered reading:

* Fish used as live bait by recreational fishermen [https://forum.effectivealtruism.org/posts/gGiiktK69R2YY7FfG/fish-used-as-live-bait-by-recreational-fishermen]
* Rodents farmed for pet snake food [https://forum.effectivealtruism.org/posts/pGwR2xc39PMSPa6qv/rodents-farmed-for-pet-snake-food]
* 35-150 billion fish are raised in captivity to be released into the wild every year [https://forum.effectivealtruism.org/posts/4FSANaX3GvKHnTgbw/35-150-billion-fish-are-raised-in-captivity-to-be-released]

For the first he got this notable comment [https://forum.effectivealtruism.org/posts/gGiiktK69R2YY7FfG/fish-used-as-live-bait-by-recreational-fishermen#FfySjSzLL8YFZpih5] from OpenPhil's Lewis Bollard. Honorable mention goes to this post [https://forum.effectivealtruism.org/posts/SMRHnGXirRNpvB8LJ/fact-checking-comparison-between-trachoma-surgeries-and], which I also remembered, for doing good epistemic work fact-checking a commonly cited comparison.
42 · Stefan_Schubert · 4mo

The Nobel Prize in Economics [https://www.nobelprize.org/prizes/economic-sciences/2019/summary/] was awarded to Abhijit Banerjee, Esther Duflo, and Michael Kremer "for their experimental approach to alleviating global poverty".
36 · Raemon · 8mo

Mid-level EA communities, and cultivating the skill of thinking

I think a big problem for EA is not having a clear sense of what mid-level EAs are supposed to do. Once you've read all the introductory content, but before you're ready to tackle anything really ambitious... what should you do, and what should your local EA community encourage people to do?

My sense is that grassroots EA groups default to "discuss the basics; recruit people to give money to GiveWell-esque charities and sometimes weirder things; occasionally run EAGx conferences; give people guidance to help them on their career trajectory." I have varying opinions on those things, but even if they were all good ideas... they leave an unsolved problem: there isn't a very good "bread and butter" activity that you can do repeatedly, that continues to be interesting after you've learned the basics.

My current best guess (admittedly untested) is that mid-level EAs and mid-level EA communities should focus on practicing thinking. A corresponding bottleneck is something like "figuring out how to repeatedly have things that are worth thinking about, that are important enough to try hard on, but where it's okay not to do a very good job because you're still learning."

I have some preliminary thoughts on how to go about this. Two hypotheses that seem interesting are:

* LW/EA-Forum question-answering hackathons (where you pick a currently open question and try to solve it as best you can. This might be via literature reviews, or first-principles thinking.)
* Updating the Cause Prioritization wiki (either this one [https://causeprioritization.org/Forecasting] or this one [https://priority.wiki/]; I'm not sure if either of them has become the Schelling one), and meanwhile posting those updates as EA Forum blogposts.

I'm interested in chatting with local community organizers about it, and with established researchers that have ideas about how to make this the most productive vers
34 · Max_Daniel · 2mo

What's the right narrative about global poverty and progress? Link dump of a recent debate.

The two opposing views are:

(a) "New optimism": [1] This is broadly the view that, over the last couple of hundred years, the world has been getting significantly better, and that's great. [2] In particular, extreme poverty has declined dramatically, and most other welfare-relevant indicators have improved a lot. Often, these effects are largely attributed to economic growth.

* Proponents in this debate were originally Bill Gates, Steven Pinker, and Max Roser. But my loose impression is that the view is shared much more widely.
* In particular, it seems to be the orthodox view in EA; cf. e.g. Muehlhauser listing one of Pinker's books in his My worldview in 5 books [https://lukemuehlhauser.com/my-worldview-in-5-books/] post, saying that "Almost everything has gotten dramatically better for humans over the past few centuries, likely substantially due to the spread and application of reason, science, and humanism."

(b) Hickel's critique: Anthropologist Jason Hickel has criticized new optimism on two grounds:

* 1. Hickel has questioned the validity of some of the core data used by new optimists, claiming e.g. that "real data on poverty has only been collected since 1981. Anything before that is extremely sketchy, and to go back as far as 1820 is meaningless."
* 2. Hickel prefers to look at different indicators than the new optimists. For example, he has argued for different operationalizations of extreme poverty or inequality.

Link dump (not necessarily comprehensive)

If you only read two things, I'd recommend (1) Hasell's and Roser's article [https://ourworldindata.org/extreme-history-methods] explaining where the data on historic poverty comes from, and (2) the take by economic historian Branko Milanovic [https://www.globalpolicyjournal.com/blog/11/02/2019/global-poverty-over-long-term-legitimate-issues].

By Hickel (i.e. against "
